Virtual reality gives companies completely new possibilities for visualisation.
In the virtual world, users can interact with their environment in many different ways. Probably the best-known interaction with virtual reality is the ability to move one’s head and thus change one’s field of view in 360-degree space.
Controllers currently handle most interaction in VR (e.g. menu navigation or gripping and adjusting objects). However, new approaches keep emerging that make interaction in 360-degree space easier, faster and more intuitive, such as gesture control via hand tracking, voice control, or even thought control via brain-computer interfaces.
These are the three best-developed ways to interact in VR:
1 Head movement: Head tracking simulates the experience of freely looking around in virtual reality (VR) or augmented reality (AR), giving the user an immersive and natural way to explore virtual environments.
There are a number of methods used for head tracking. Screen quality and head-tracking responsiveness are among the most significant user-experience differentiators between high-end headsets, like the Oculus Rift, and low-end headsets and smartphone-holder designs like Google Cardboard. Devices that use smartphones often rely on the phone's accelerometers and gyroscopes. High-end headsets track more accurately with precise sensors, along with other systems including infrared LEDs, cameras and magnetometers.
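To illustrate the inertial approach used by smartphone-based designs, here is a rough Python sketch (all sensor values are made up) that fuses gyroscope and accelerometer readings with a simple complementary filter, one common way to estimate head pitch: the gyroscope is smooth but drifts over time, while the accelerometer's gravity estimate is noisy but drift-free.

```python
import math

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend an integrated gyro reading with an accelerometer estimate.

    alpha weights the gyro integration (smooth, but drifts) against the
    accelerometer angle (noisy, but anchored to gravity)."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

def accel_to_pitch(ax, ay, az):
    """Estimate pitch (radians) from the gravity direction in accelerometer data."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

# Simulated head motion: the user pitches the headset up at 0.5 rad/s
# while the sensors report at 100 Hz (dt = 0.01 s).
angle = 0.0
dt = 0.01
for step in range(1, 101):
    gyro_rate = 0.5                       # hypothetical gyroscope reading
    true_pitch = 0.5 * step * dt          # ground truth for this simulation
    ax = -math.sin(true_pitch)            # gravity components the
    ay, az = 0.0, math.cos(true_pitch)    # accelerometer would see
    angle = complementary_filter(angle, gyro_rate,
                                 accel_to_pitch(ax, ay, az), dt)

print(round(angle, 3))  # tracks the true 0.5 rad of accumulated rotation
```

Real headsets fuse more axes and more sensors (magnetometers, cameras), but the principle of weighting fast-drifting against slow-stable measurements is the same.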
Head tracking is used in a variety of fields like security, gaming and medicine. It can also be used for computer-aided design, 3-D modeling and general hands-free computing to improve computer accessibility.
2 Controller: Almost every common VR headset has its own controller, adapted to the respective tracking system. Controllers track finger, hand and arm movements, and users interact with the virtual environment at the touch of a button or via analog sticks and touchpads.
Acoustic systems use ultrasonic sound waves to identify the position and orientation of target objects. Mechanical tracking can use articulated arms, joysticks or sensors connected to or built into headsets, much like the inertial tracking in phones, which is typically provided by accelerometers and gyroscopes.
Take the Oculus Rift as an example: the headset uses constellations of infrared LEDs built into the HMD and controllers. These are picked up by two desktop sensors designed to recognise each LED's specific glow and convert their placement into positional data.
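The positional step can be illustrated with a toy example: if two fixed sensors each report the bearing at which they see the same LED, the LED's position follows from intersecting the two rays. The 2D Python sketch below is a deliberate simplification of constellation tracking (sensor positions and bearings are invented for the example):

```python
import math

def triangulate_2d(c1, c2, bearing1, bearing2):
    """Intersect two bearing rays (angles in radians from the +x axis)
    cast from sensor positions c1 and c2 to locate one tracked LED."""
    x1, y1 = c1
    x2, y2 = c2
    d1 = (math.cos(bearing1), math.sin(bearing1))   # ray directions
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # Solve (x1, y1) + t*d1 = (x2, y2) + s*d2 for t, then step along ray 1.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t * d1[0], y1 + t * d1[1])

# Two desktop sensors a metre apart on a desk, both seeing a headset LED
# that is actually at (0.5, 1.0).
led = triangulate_2d((0.0, 0.0), (1.0, 0.0),
                     math.atan2(1.0, 0.5),    # bearing from sensor 1
                     math.atan2(1.0, -0.5))   # bearing from sensor 2

print(led)  # recovers (0.5, 1.0)
```

The real system does this in 3D for dozens of LEDs at once and fuses the result with inertial data, but each LED's position comes from the same intersect-the-rays idea.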
In the future, technologies like “ThermoReal” may even make it possible to simulate moving heat and cold in the controllers.
3 Speech recognition: Speech recognition plays an important role in bridging the gap between full immersion and a gamepad or keyboard interface. Speech is considered the most natural way of communicating, so it should be taken seriously by any developer of virtual reality software.
Speech recognition would allow users to interact naturally with a game by simply speaking commands. A VR application will probably not need as many commands as general-purpose speech recognition does. A limited command set makes the commands easier to remember, and it also speeds up processing, because the recognition engine has fewer actions to match against.
You would simply need a context-relevant command set tailored to the game being played. It would also be very cost-effective to integrate a small digital USB noise-cancelling microphone into the head-mounted display. That would provide high-quality digital voice input to the game and make the speech recognition very accurate.
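A minimal sketch of such a limited, context-relevant command set, in Python: the command names and actions below are hypothetical, and fuzzy string matching stands in for whatever tolerance a real speech engine would apply. Because the vocabulary is tiny, lookup stays fast and minor misrecognitions can still resolve to the right action.

```python
import difflib

# Hypothetical command set for one VR game scene.
COMMANDS = {
    "open map": "show_map",
    "close map": "hide_map",
    "grab": "grab_object",
    "drop": "drop_object",
    "pause game": "pause",
}

def match_command(transcript, cutoff=0.6):
    """Map a recognised transcript to the closest known command action.

    A small vocabulary keeps matching fast, and fuzzy comparison
    tolerates minor recognition errors; unknown phrases return None."""
    hits = difflib.get_close_matches(transcript.lower().strip(),
                                     COMMANDS, n=1, cutoff=cutoff)
    return COMMANDS[hits[0]] if hits else None

print(match_command("open map"))     # "show_map"
print(match_command("opon map"))     # "show_map" despite the misrecognition
print(match_command("fly to moon"))  # None: not in this scene's command set
```

Swapping in a different `COMMANDS` dictionary per scene is how the "context-relevant" part would work: the menu screen, the inventory and the game world each expose only the handful of phrases that make sense there.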
Speech recognition has a valuable role to play in the virtual reality world. Let's hope companies like Nuance, Oculus and Valve can work together to make speech a cost-effective, efficient mechanism for VR input.