
Best practice for synchronizing tracking space data with SteamVR and Unity?


dsitnick


I'm looking to set up multiple Vives, with the machines networked together so that all users can see each other. Additionally, I'm looking to use multiple 2.0 lighthouses to set up a shared tracking space for all of the players.


The trouble I'm running into is keeping the tracking spaces synchronized across machines. I want to show the positions of all of the HMDs in the same space, and I want the positions to represent their actual position relative to the user. This would imply all of the lighthouses have the same position and rotation data across machines.


Somebody mentioned I could do this in SteamVR by copying the config files for the lighthouses and chaperone, but I'm curious whether there are alternative methods or better practices. Another option I've been considering is transforming one of the spaces so that the lighthouses match up with each other as closely as possible, but I feel like that could be janky and inconsistent.


I'm not necessarily looking to synchronize the boundaries, just the positions relative to the lighthouses.


Thanks!




Your contact was right: sharing the config file is the only way to ensure the systems are synced 1:1 with each other, since it means they'll all have the same fixed reference points and parameters to base their positional calculations on. If this is a public-facing LBE, and as long as you're religious about not moving your base stations (without updating the config files across all four systems), this is the way to go. It gives users the best chance of not colliding with each other (given the proper networking backend, avatars, etc.) and ensures everybody has the same chaperones. You can set up a network folder to make creating and distributing the file across the other systems easy.
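For the distribution step, a small script on each machine can pull the calibration files from the network share into the local SteamVR config directory. A minimal sketch, assuming the usual file names under `Steam/config` (`chaperone_info.vrchat` and `lighthouse/lighthousedb.json`); verify those against your own install before relying on them, and run it with SteamVR closed so it doesn't overwrite the files on exit:

```python
import shutil
from pathlib import Path

# Assumed file names for a typical SteamVR install under Steam/config;
# check your own config directory, as these may vary between versions.
CONFIG_FILES = ["chaperone_info.vrchat", "lighthouse/lighthousedb.json"]

def sync_steamvr_config(share_dir, steam_config_dir):
    """Copy the shared calibration files from the network folder into a
    machine's local SteamVR config directory, preserving subfolders."""
    share, local = Path(share_dir), Path(steam_config_dir)
    for name in CONFIG_FILES:
        src, dst = share / name, local / name
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)  # copy2 keeps timestamps for sanity checks
```

You'd run this once per machine after any recalibration, pointing it at the share and at that machine's `Steam/config` folder.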


You can also handle this by manipulating the worldspace in-engine, which is less accurate but allows for on-the-fly configuration. You'd basically run some sort of in-app config tool where you sample 1-3 fixed real-world points with a tracked object (controller, HMD, tracker) and then move your worldspace to fit the data it provides. I've really only seen this done with Vive Trackers, since they can sit flat unlike the Vive controller, and I've never seen it used IRL for shared-user experiences; mostly for MR experiences that are portable/traveling.
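The alignment step above boils down to a rigid transform on the floor plane: from two sampled reference points you can recover the yaw and translation that map one machine's tracking space onto the shared space. A minimal sketch with hypothetical function names, working in 2D (x, z) coordinates and assuming the floor heights already agree; a real tool would average many samples per point:

```python
import math

def compute_alignment(local_pts, shared_pts):
    """Given two reference points sampled in this machine's tracking
    space (local_pts) and their agreed coordinates in the shared space
    (shared_pts), return (yaw, (tx, tz)) such that rotating a local
    point by yaw about the vertical axis and then translating by
    (tx, tz) maps it into the shared space. Points are (x, z) pairs."""
    (lx0, lz0), (lx1, lz1) = local_pts
    (sx0, sz0), (sx1, sz1) = shared_pts
    # Yaw is the angle between the local and shared direction vectors.
    yaw = (math.atan2(sz1 - sz0, sx1 - sx0)
           - math.atan2(lz1 - lz0, lx1 - lx0))
    c, s = math.cos(yaw), math.sin(yaw)
    # Rotate the first local point, then translate it onto its shared twin.
    rx, rz = lx0 * c - lz0 * s, lx0 * s + lz0 * c
    return yaw, (sx0 - rx, sz0 - rz)

def apply_alignment(pt, yaw, offset):
    """Map a point from this machine's tracking space into shared space."""
    c, s = math.cos(yaw), math.sin(yaw)
    x, z = pt
    return (x * c - z * s + offset[0], x * s + z * c + offset[1])
```

In Unity you'd apply the recovered yaw and offset to the rig's root transform (e.g. the parent of the camera rig) rather than to individual tracked objects, so every HMD and controller pose inherits the correction.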


Thanks a bunch! It sounds like the config files will be the proper approach, but I'm intrigued by the tracked-object approach. How would I go about using the same tracker with multiple Vives? Can I just perform the standard tracker setup on multiple machines without issues?


Archived

This topic is now archived and is closed to further replies.
