
Disney has been doing a lot of research in the VR, AR, and MR space, and its most recent effort applies mixed reality technology to a project called the "Magic Bench."
The Disney Research team found that head-mounted displays can be quite isolating, since they immerse only one user at a time. Others don't get to see the same simulations and can't participate in the VR experience.
So what the Disney Research team is proposing is instrumenting an environment where everyone can see and interact with the digital animations; a walk-up-and-play experience, so to speak.
The team managed to create a 3D reconstruction of a scene using a combination of the depth and color sensors on an off-the-shelf Microsoft Kinect. To do this, they drew polygons using each point in the point cloud as a vertex, creating the appearance of a solid mesh. The mesh is then aligned to the RGB camera feed of the scene from the same Kinect. This alignment gives the mesh color, and completes a 3D reconstructed video feed.
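The idea of connecting neighboring depth pixels into polygons can be sketched as follows. This is a minimal illustration, not the team's actual pipeline: it assumes pinhole-camera intrinsics (`fx`, `fy`, `cx`, `cy`) and treats a depth value of zero as "no reading."

```python
import numpy as np

def depth_to_mesh(depth, fx, fy, cx, cy):
    """Back-project a depth image into a point cloud, then connect
    neighboring pixels into triangles so the cloud reads as a solid mesh.
    Intrinsics are assumed, not Kinect-calibrated values."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx          # standard pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Two triangles per pixel quad; skip quads with any missing depth (z == 0).
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            quad = [i, i + 1, i + w, i + w + 1]
            if all(z.flat[q] > 0 for q in quad):
                tris.append((i, i + 1, i + w))
                tris.append((i + 1, i + w + 1, i + w))
    return points, np.array(tris)
```

Coloring the mesh then amounts to sampling the aligned RGB frame at each vertex's pixel coordinates.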
There are several problems that arise with the 3D reconstructed feed. First, the monocular feed creates "depth shadows" in areas where there is no direct line of sight to the depth sensor. Second, the depth camera is laterally offset from the RGB camera (since they cannot physically occupy the same space) and therefore has a slightly different viewing angle, creating further depth shadowing. The resulting data feed is sparse and cannot represent the whole scene. To solve this, the Disney Research team aligned the 3D depth feed with the 2D RGB feed from the Kinect. By compositing the depth feed over a 2D backdrop, the system effectively masks these depth shadows, creating a seamless composite that can then be populated with 3D CG assets.
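The masking trick boils down to alpha compositing: wherever the rendered 3D feed has no coverage (a depth shadow), the pixel falls through to the flat RGB backdrop. A minimal sketch, assuming an RGBA render with alpha 0 marking depth shadows and a matching-size backdrop:

```python
import numpy as np

def composite(rendered_rgba, backdrop_rgb):
    """Composite the rendered 3D scene (mesh + CG assets) over the flat
    2D camera image. Alpha == 0 pixels (depth shadows) show the backdrop,
    hiding the holes. Channel layout is an assumption, not the team's setup."""
    alpha = rendered_rgba[..., 3:4].astype(np.float64) / 255.0
    rgb = rendered_rgba[..., :3].astype(np.float64)
    out = alpha * rgb + (1.0 - alpha) * backdrop_rgb.astype(np.float64)
    return out.astype(np.uint8)
```

Because the backdrop comes from the same Kinect's RGB camera, the fill-in pixels match the scene, which is why the seams are hard to spot.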
This mixed reality platform centers around the simple setting of a bench. The bench works in a novel way to sidestep a few problems, such as identifying where a user is and subsequently inferring the direction of the user's gaze (i.e., toward the screen). It creates a stage with a foreground and background, with the bench occupants in the middle ground. The bench also acts as a controller; the mixed reality experience won't trigger until at least one person is detected sitting on the bench. Further, different seating formations on the bench trigger different experiences.
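The bench-as-controller logic is essentially a lookup from seating formation to experience. A hypothetical sketch (the seat names and experience labels are invented for illustration, not from the paper):

```python
# Map each seating formation to a vignette. frozenset keys make the
# lookup order-independent: {"left", "right"} == {"right", "left"}.
EXPERIENCES = {
    frozenset({"left"}): "solo_scene",
    frozenset({"left", "right"}): "duo_scene",
    frozenset({"left", "center", "right"}): "trio_scene",
}

def select_experience(occupied_seats):
    """Return the experience for the current seating formation,
    or None while the bench is empty (nothing triggers)."""
    if not occupied_seats:
        return None
    return EXPERIENCES.get(frozenset(occupied_seats), "default_scene")
```

The empty-bench check is what makes it a walk-up-and-play installation: the system idles until someone sits down.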
Magic Bench is a custom software and hardware platform, necessitating a solution to bridge the two. Between them sits a series of patches created in Cycling '74 Max, designed to convert signals sent from the game engine (via OSC) about the positions and states of objects in the scene into the haptic sensations felt on the bench. Haptic actuators are dynamically driven based on the location of animated content. The driving waveform for each actuator is designed according to the desired feel; in the current setup, the team can tweak the base frequency, frequency of modulation, overall amplitude, amplitude envelope, and three-dimensional position.
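Those waveform parameters map naturally onto a simple amplitude-modulated signal. A sketch of what one actuator's driving waveform might look like; the parameterization mirrors the ones named above (base frequency, modulation frequency, amplitude, envelope), but the exact synthesis in the Max patches is not public, so treat this as an assumption:

```python
import numpy as np

def haptic_waveform(base_hz, mod_hz, amplitude, duration, sr=44100):
    """Generate a driving waveform for one actuator: a sine carrier at
    base_hz, amplitude-modulated at mod_hz, shaped by a linear fade-out
    envelope. A real system would also scale amplitude by the 3D distance
    between the actuator and the animated content."""
    t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
    carrier = np.sin(2 * np.pi * base_hz * t)
    modulator = 0.5 * (1.0 + np.sin(2 * np.pi * mod_hz * t))  # 0..1 tremolo
    envelope = np.linspace(1.0, 0.0, t.size)                  # simple fade-out
    return amplitude * envelope * modulator * carrier
```

Driving each actuator with a per-location waveform like this is what lets an animated character's footsteps "travel" along the bench.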
I got to see it at SIGGRAPH, and it's a great take on haptic feedback and user experience. Looking forward to what Disney can put out there using an amalgam of this and other VR/AR technology.