A little while back I posted a video of myself playing a round of VR boxing in Thrill of the Fight. Here’s a video of some ‘real’ boxers having some fun on it too. Very cool.
The following is a detailed blog post analyzing the technical details of the Lighthouse trackers on the Vive. What’s amazing is that the data shows millimeter-level accuracy within the room space with low latency. That’s incredible to think about, and there may be some good ideas to come out of what you can do if you can track general objects in a room with millimeter-level precision. Link
In the Episode 162 podcast with Alan Yates linked above, one of the discussion points I found interesting was the difference in approaches to implementing motion tracking. The HTC Vive uses two Lighthouse base stations mounted on the wall that transmit timed laser sweeps, and the hand controllers have photosensors that see the beam and use the timing information to calculate their own position. In contrast, the Oculus Rift uses two imaging cameras that point towards the room, and the hand controllers emit a patterned light signal which is picked up optically by the cameras to calculate the position. The best analogy is that the HTC Vive works like a GPS system: the satellites are in known positions, and any number of receivers can see the satellite signal and decode where they are. The Oculus is like a security camera system: the cameras are in known positions and, based on what they see, they can figure out where things are.
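As a rough illustration of the timing-to-position idea (a minimal sketch with made-up numbers, not Valve’s actual solver): each photosensor converts the delay between a base station’s sync flash and the laser sweep hitting it into an angle, and the angles from two base stations define two rays whose near-intersection is the sensor’s position.

```python
import numpy as np

# Hypothetical sweep period: one full laser rotation at 60 Hz.
SWEEP_PERIOD = 1.0 / 60.0

def sweep_angle(t_sync, t_hit, period=SWEEP_PERIOD):
    """Convert sync-flash-to-laser-hit timing into a sweep angle (radians)."""
    return 2.0 * np.pi * (t_hit - t_sync) / period

def ray_from_angles(h_angle, v_angle):
    """Unit direction of the ray from a base station, given horizontal and
    vertical sweep angles (simplified pinhole-style model)."""
    d = np.array([np.tan(h_angle), np.tan(v_angle), 1.0])
    return d / np.linalg.norm(d)

def closest_point_between_rays(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two rays: the sensor's
    estimated position, given one ray from each base station."""
    # Solve for scalars s, t minimizing |(p1 + s*d1) - (p2 + t*d2)|.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b  # nonzero as long as the rays aren't parallel
    s = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))
```

In the real system each base station sweeps both a horizontal and a vertical laser, so a single station already gives a full ray; the second station resolves depth along that ray.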
Although there are pros and cons to each system, there is one big difference between their approaches: the ability to eventually go to a wireless headset when mobile computing capability improves. With the HTC Vive, you can have a wireless headset and motion-tracked sensors because they are all colocated with the user. This means you can have many VR users within the same room space. However, because the Oculus Rift uses cameras to see the whole room and where everyone is, multiple users would all need to somehow be connected to the one set of cameras, either with wires or some other interface. Thus, in terms of scalability, the Lighthouse approach definitely has the edge.
Porn has historically driven many new technologies into the mainstream, and VR is no different. The following is a behind-the-scenes video about how they are approaching the process of scanning models into a virtual environment. Warning: this is NSFW (not safe for work) viewing. What’s interesting is that fundamentally they are using photogrammetry to digitize the model. They use a huge array of digital SLR cameras in fixed positions to take a snapshot of the model from many different angles simultaneously. From there, they can use the multiple camera views to reconstruct a 3D representation of the person and overlay it with the photographic images, resulting in a highly detailed and photorealistic rendering. They can then animate the model using traditional techniques. Although they are focusing on porn, the same techniques can be applied to digitizing anyone or anything. This is the beginning of how we start capturing the world and moving it into 3D space. Link
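The core reconstruction step behind photogrammetry is triangulation: the same surface point seen by several calibrated cameras pins down its 3D position. Below is a minimal linear (DLT) triangulation sketch; the projection matrices and pixel coordinates are illustrative, not from the actual rig.

```python
import numpy as np

def triangulate(proj_mats, pixels):
    """Linear (DLT) triangulation: recover one 3D point from its 2D
    projections in two or more calibrated cameras.

    proj_mats: list of 3x4 camera projection matrices
    pixels:    list of matching (u, v) image coordinates
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X: u*(P3·X) = P1·X, v*(P3·X) = P2·X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    # The point is the null-space direction of A (smallest singular vector).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

A full photogrammetry pipeline repeats this for thousands of matched feature points per camera pair, then meshes and textures the resulting point cloud; this sketch only shows the geometric kernel.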
I was interested in looking at the possibilities for doing LIVE video with 3D capture. This is different from a 360 video, which is a 2D capture of a scene all around you. What I’m interested in is scene/video capture with 3D depth and 360-degree coverage at the same time.
I think that Project Tango has a lot of strong potential moving forward because of a number of factors:
1. It allows for indoor, realtime mapping of the user’s environment, which bridges the link between the virtual and physical worlds
2. It is wireless.
The things I think would hamper this: a better method for interfacing hands is key for immersion, along with the frame rate and resolution of the mobile units driving the display.
HTC’s Vive X accelerator is a program to help seed and support companies developing new VR applications. They released a list of the 33 initial companies being funded, and you can see the breadth and types of applications being thought out and developed. It’s an exciting time. Link
I showed VR to a friend of mine whom I hadn’t seen in a long time, and he told me he was throwing around the idea of planning a trek up to Everest base camp, which is itself an adventure. We joked that it would probably be easier to just experience it in VR instead. Today I saw this article come out: perfect timing. Link
Valve has a game called Dota 2, a realtime strategy game. I don’t play it, nor do I have any interest in playing it. However, they released the ability to watch other people play in a ‘VR hub’, and I can see how this kind of approach and interface could be incredibly engaging for viewing sports or other VR events. Link
Google is experimenting with storytelling through VR experiences with their Pearl project. I look forward to the experiences that will come out of projects like this in helping to tell stories and human experiences. Link