In the Episode 162 podcast with Alan Yates linked above, one of the discussion points I found interesting was the difference in approaches to implementing motion tracking. The HTC Vive uses two lighthouse base stations mounted on the wall that transmit timed laser sweeps, and the hand controllers have photosensors that detect the beam and use the timing information to calculate their own position. In contrast, the Oculus Rift uses two imaging cameras that point towards the room, and the hand controllers emit a patterned light signal which is picked up optically by the cameras to calculate the position. The best analogy is that the HTC Vive works like a GPS system: there is a satellite with a known position, and multiple GPS receivers can see the satellite signal and decode where they are. The Oculus is like a security camera system: the cameras are in known positions, and based on what they see, the system can figure out where things are.
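To make the lighthouse timing idea concrete, here is a minimal sketch of how a receiver could turn sweep timing into an angle. It assumes a base station that emits an omnidirectional sync flash and then sweeps a laser plane at a known rotation rate (the real Lighthouse sweeps at 60 Hz); the function and variable names are my own illustration, not Valve's actual implementation.

```python
import math

# Assumed sweep rate of the base station (the real Lighthouse is 60 Hz).
SWEEP_HZ = 60.0
SWEEP_PERIOD = 1.0 / SWEEP_HZ  # seconds per full rotation

def sweep_angle(t_sync: float, t_hit: float) -> float:
    """Angle (radians) of a photosensor relative to the base station's
    zero direction, derived purely from when the laser plane hit the
    sensor relative to the sync flash."""
    dt = (t_hit - t_sync) % SWEEP_PERIOD
    return 2.0 * math.pi * dt / SWEEP_PERIOD

# Example: the laser hits a photodiode a quarter of a sweep period
# after the sync flash, so the sensor sits ~90 degrees into the sweep.
angle = sweep_angle(t_sync=0.0, t_hit=SWEEP_PERIOD / 4.0)
print(math.degrees(angle))
```

With two such angles per base station (one horizontal sweep, one vertical) and two base stations, the controller has enough bearings to triangulate its position in the room without ever transmitting anything back.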
Although there are pros and cons to each system, there is one big difference between the approaches: the ability to eventually go wireless once mobile computing capability improves. With the HTC Vive, you can have a wireless headset and motion-tracked controllers because all the sensors are colocated with the user, which means many VR users can share the same room space. With the Oculus Rift, however, because the cameras see the whole room and track where everyone is, multiple users would all need to be connected somehow to the one set of cameras, either with wires or some other interface. In terms of scalability, then, the lighthouse approach definitely has the edge.