If you’ve been present online over the past few days, you may have come across the video below. Two VR users interact with one another through human actuation, the process of applying force and feedback in VR experiences through the use of another human user. There’s a lot more here than meets the eye, however. Enter Mutual Human Actuation, the paper from the Mutual Turk researchers that offers a unique insight into the development of this human immersion engine.
Lung-Pan Cheng, Sebastian Marwecki, and Patrick Baudisch have been experimenting with human actuation during their time at the Hasso Plattner Institute in Germany. Human actuation is the use of human agents to enhance a virtual reality experience by providing force, haptics, and feedback to a VR user. The best demonstration of this concept comes from the group’s early 2014 experiment with the process, Haptic Turk.
As one user experiences a paragliding flight, four human actuators lift and manoeuvre him according to onscreen instructions synchronised with the VR experience itself.
Similarly, a year later, TurkDeck was revealed: a walking VR experience enhanced by physical props that are constantly rearranged by a number of human actuators.
To bring human actuation into the living room, however, the researchers quickly realised they needed to scale down the number of people required for such immersive experiences. What’s more, human actuation of this kind can only serve one person at a time. This is where Mutual Turk comes in.
Mutual Turk introduces the concept of mutual human actuation, a process in which two users face different VR experiences that require them to exert force on a single shared prop, thereby providing human feedback to both users, synchronised with their individual VR experiences. And, breathe.
In total, the researchers present 10 experiences in the paper, all designed around shared props and mutual force feedback between users. For example, one user tries to control a kite as it flies around in a storm by holding onto a handpiece. That handpiece, however, is connected to the other end of a fishing line held by another user attempting to haul in a big catch. In another, one user walks through a storm and sees hailstones hitting their body, while the other battles a monster with a foam bat. These interactions all run within a single 30-minute narrative, which the users experience in canon: when the first user is in the second half of the story, the second user is in the first, providing the required ‘push and pull’ forces for each experience.
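The "in canon" scheduling above can be sketched in a few lines: both users play the same looping narrative, offset by half its length, so each scene pairs with the complementary scene of the other user. This is a hypothetical illustration of the idea only; the function and constant names are mine, not from the paper.

```python
# Hypothetical sketch of the "in canon" scheduling: both users run the
# same 30-minute narrative, with the second user offset by half its
# length so their scenes complement one another.

NARRATIVE_MINUTES = 30
OFFSET_MINUTES = NARRATIVE_MINUTES // 2  # second user starts halfway through

def narrative_position(elapsed_minutes: float, user: int) -> float:
    """Return where in the looping narrative a given user currently is."""
    offset = OFFSET_MINUTES if user == 1 else 0
    return (elapsed_minutes + offset) % NARRATIVE_MINUTES

# Ten minutes in: user 0 is in the first half of the story, user 1 in
# the second, so their scenes can exchange the push/pull forces.
print(narrative_position(10, 0))  # 10
print(narrative_position(10, 1))  # 25
```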
The paper does acknowledge the design difficulties of creating concurrent yet differing VR experiences that are physically tethered to one another through human subjects; it’s difficult to read about, let alone program. The potential for users to explore the world differently, or perhaps accidentally touch the other player, presents situations that could shatter the illusion completely. Interacting with props in ways not envisaged by the designers also poses a risk to both users’ immersion. Using visual guides, however, the researchers demonstrate that discouraging players from these movements and behaviours through defined boundaries helps keep them in line, and they report no unexpected behaviour from players during playtesting.
Mutual Turk works through two VR headsets, each running Unity and a Mutual Turk client connected to a central Mutual Turk server, which receives tracking data and relays it back to the other client. In the tests, OptiTrack Prime 17W cameras tracked the users, with motion capture suits and rigid-body markers attached to users and props.
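The relay pattern described above can be sketched as a minimal in-memory server: each client publishes its tracking data, and the server forwards it to every other connected client. This is an illustrative sketch only; the class and method names are assumptions of mine, and the actual Mutual Turk networking code is not described in that detail.

```python
# Hypothetical sketch of the tracking relay: clients publish pose
# updates to a central server, which forwards them to the other client.

class TrackingServer:
    def __init__(self):
        # client_id -> inbox of tracking updates received from others
        self.clients = {}

    def connect(self, client_id: str) -> None:
        """Register a client with an empty inbox."""
        self.clients[client_id] = []

    def publish(self, sender: str, pose: dict) -> None:
        """Relay one tracking update to all clients except the sender."""
        for client_id, inbox in self.clients.items():
            if client_id != sender:
                inbox.append(pose)

server = TrackingServer()
server.connect("headset_a")
server.connect("headset_b")
# Headset A reports the shared prop's position; B receives it.
server.publish("headset_a", {"prop": "kite_handle", "x": 0.3, "y": 1.2})
print(server.clients["headset_b"])  # [{'prop': 'kite_handle', 'x': 0.3, 'y': 1.2}]
```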
This is definitely a technique I see being employed in game design in the future, for VR and non-VR purposes alike. Online multiplayer experiences that use two players in different locations to provide virtual feedback or movements have been popping up for years now. Likewise, with the Nintendo Switch’s Joy-Con controllers and Nintendo’s enthusiasm for fun, cooperative mini-games, these mechanics are likely to be spotted in future Nintendo titles. What do you think? Is the gaming industry ready for human engines, and how quickly do you think game designers will jump on these mechanics? As always, let me know down in the comments.
The full paper can be found here.