The following is an interesting blog post about using neural networks and machine learning in conjunction with VR to explore better user interfaces.
The following is a link to an article describing an attempt to spend a whole day working in Windows 10 with a Microsoft MR headset. In short, there’s potential, but there are still a number of hurdles, both technical and social, before it becomes easy and frictionless enough to be practical.
Update Nov 30th:
Here’s another take on the usability of Windows MR for practical work. This one is more positive.
This manual grew out of a project made in collaboration with, and funded by, the Emerging Media Lab and the Department of Geography at the University of British Columbia, to provide VR field trips built using photogrammetry techniques and showcased on the HTC Vive running in Unity. The project was also partially funded by the UBC Teaching and Learning Enhancement Fund and BCcampus Open Education.
It is written by Andrew Yao-An Lee of Metanaut, an indie VR studio based in Vancouver, BC, Canada. Some examples in this manual are from the project’s first field trip location: Prospect Point in Stanley Park, Vancouver.
This manual provides best practices and a complete workflow, from capture through to a reasonably good photogrammetry model of an environment (as opposed to an object) for real-time viewing in Unity. Our methodology is about presenting the captured photogrammetry mesh itself, not about remodeling a scene based on photogrammetry meshes.
The manual is by no means complete, nor does it try to define the single best way to create photogrammetry models. It does, however, present one workflow for achieving good results.
To date, there still isn’t a really good way to handle text input while wearing a VR headset. This blog post, however, covers some existing ideas and work in this area.
The first is from Google Labs, which uses a drum-like interface for keyboard entry. Read more about it here.
Here is an open sourced implementation building on that concept:
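The appeal of the drum metaphor is that a strike has an unambiguous trigger: the mallet tip crossing down through the top of a key. Below is a minimal, hypothetical sketch of that idea in Python (not code from the open-source project above); the class name, layout, and thresholds are all illustrative assumptions.

```python
# Hypothetical sketch of drum-style key entry in VR: a key "fires" when
# the mallet tip crosses the key's top plane while moving downward.
# This avoids the ambiguity of hover/touch triggers in mid-air typing.

class DrumKey:
    def __init__(self, char, x_min, x_max, top_y):
        self.char = char                  # character this key emits
        self.x_min = x_min                # horizontal extent of the key
        self.x_max = x_max
        self.top_y = top_y                # height of the striking plane
        self.mallet_was_above = True      # track plane-crossing state

    def update(self, mallet_x, mallet_y):
        """Return this key's character on a downward strike, else None."""
        inside = self.x_min <= mallet_x <= self.x_max
        above = mallet_y > self.top_y
        hit = None
        # A strike = mallet was above the plane last frame, is below now,
        # and is horizontally over this key.
        if inside and self.mallet_was_above and not above:
            hit = self.char
        self.mallet_was_above = above
        return hit


key_a = DrumKey("a", x_min=0.0, x_max=1.0, top_y=0.5)
print(key_a.update(0.5, 0.6))  # mallet above the key: None
print(key_a.update(0.5, 0.4))  # crossed downward: "a"
print(key_a.update(0.5, 0.3))  # still below, no re-trigger: None
```

A real Unity implementation would do the same plane-crossing test per frame on the controller transform, but the one-shot edge detection (no repeat while the mallet stays below the plane) is the core of why drumming feels reliable.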
The other approach is some sort of chorded text input interface that you hold in your hand:
“In the near future, you may not need to touch your phone, tablet, or keyboard when you want to type. That’s the concept behind the Tap Strap, an amazing wearable Bluetooth keyboard that converts finger movements into key presses, so you can tap out messages using any surface as a virtual keyboard.”
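The core idea behind chorded input like this can be sketched in a few lines: each simultaneous combination of finger taps is a bit pattern, and a lookup table maps patterns to characters. The chord-to-letter mapping below is entirely made up for illustration; it is not the Tap Strap’s actual layout.

```python
# Illustrative sketch of chorded text input: each "chord" is a set of
# fingers tapped at once, encoded as a 5-bit mask
# (thumb = bit 0, index = bit 1, ... pinky = bit 4).
# The mapping is hypothetical, not any real device's layout.

CHORD_MAP = {
    0b00001: "a",  # thumb only
    0b00010: "e",  # index only
    0b00100: "i",  # middle only
    0b01000: "o",  # ring only
    0b10000: "u",  # pinky only
    0b00011: "n",  # thumb + index
    0b00110: "t",  # index + middle
    0b11111: " ",  # all five fingers
}

def decode_chords(chords):
    """Translate a sequence of finger-tap bitmasks into text."""
    return "".join(CHORD_MAP.get(chord, "?") for chord in chords)

print(decode_chords([0b00110, 0b00010, 0b00001]))  # prints "tea"
```

With one five-finger hand there are 31 possible non-empty chords, which is enough for the alphabet plus a few controls. That is why this style of input is attractive in VR: it needs no visible keyboard and no surface-specific tracking, only per-finger tap detection.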
This last part shows an interesting concept of what coding and development could look like while in VR:
At an event in San Francisco, we unveiled our vision for Windows Mixed Reality, announced that SteamVR and AltSpaceVR are coming to Windows Mixed Reality, introduced the new Samsung Odyssey HMD, and kicked off the holiday shopping season by opening pre-orders for Windows Mixed Reality headsets at the Microsoft Store. Learn more on the Windows Blog: http://msft.social/b9ULwk
In a wide-ranging interview, the CEO of the biggest tech company in the world explains how AR will change our lives, and why he thinks the world is actually getting better.