Poly lets you quickly find 3D objects and scenes for use in your apps, and it was built from the ground up with AR and VR development in mind. It’s fully integrated with Tilt Brush and Blocks, and it also allows direct OBJ file upload, so there’s lots to discover and use. Whether you’re creating an intense space walk in VR or a serene garden of AR flowers, you’ll find the ingredients you need in Poly.
On November 15th, SteamVR will open up support for Windows MR headsets, which will make a large amount of content available. A good aspect of Windows MR is that Microsoft is specifying a lower minimum PC requirement than Oculus Rift and Vive, with the goal of helping to drive adoption.
If you’re wondering whether your PC is up to snuff, here’s a link to Microsoft’s page with information and a program for checking your computer’s compatibility:
The following is an interesting blog post about using Neural Networks and Machine Learning in conjunction with VR to explore better user interfaces.
The following is a link to an article describing an attempt to spend a whole day working in Windows 10 with a Microsoft MR headset. In short, there’s potential, but there are still a number of hurdles, both technical and social, before it becomes easy and frictionless enough to be practical.
This manual is conceived from a project made in collaboration with, and funded by, the Emerging Media Lab and the Department of Geography at the University of British Columbia, to provide VR field trips built using photogrammetry techniques and showcased on the HTC Vive running in Unity. This project was also partially funded by the UBC Teaching and Learning Enhancement Fund and BCcampus Open Education.
It is written by Andrew Yao-An Lee of Metanaut, an indie VR studio based in Vancouver, BC, Canada. Some examples in this manual are from the first field trip location made for the project: Prospect Point in Stanley Park, Vancouver, BC, Canada.
This manual provides best practices and a complete workflow, from capturing photos to creating a reasonably good photogrammetry model of an environment (versus an object) for real-time viewing in Unity. Our methodology is about presenting the photogrammetry mesh as it was captured, not about remodeling a scene based on photogrammetry meshes.
The manual is by no means complete, nor does it try to define the best way to approach creating photogrammetry models. It does, however, present one workflow for achieving good photogrammetry results.
To date, there still isn’t a really good way to handle text input while wearing a VR headset. However, this blog post covers some existing ideas and work in this area.
The first is from Google Labs: a drum-like interface for keyboard entry. Read more about it here.
Here is an open-source implementation building on that concept:
Beyond keyboards, another approach is a chorded text input interface that you hold in your hand:
“In the near future, you may not need to touch your phone, tablet, or keyboard when you want to type. That’s the concept behind the Tap Strap, an amazing wearable Bluetooth keyboard that converts finger movements into key presses, so you can tap out messages using any surface as a virtual keyboard.”
This last part shows an interesting concept of what merging coding and development with VR could look like:
At an event in San Francisco we unveiled our vision for Windows Mixed Reality, announced SteamVR and AltSpaceVR are coming to Windows Mixed Reality, introduced the new Samsung Odyssey HMD, and kicked off the holiday shopping season by announcing the availability of pre-orders for Windows Mixed Reality headsets at the Microsoft Store. Learn more on the Windows Blog: http://msft.social/b9ULwk