Here is the SRWorks SDK download link: https://developer.vive.com/resources/knowledgebase/vive-srworks-sdk/. And this page, https://developer.vive.com/resources/knowledgebase/intro-vive-srworks-sdk/, demonstrates some scenarios of what it can do.
When calibration finishes, the LaunchEyeCalibration() API returns a value. If everything works properly, it returns ViveSR.Error.WORK; otherwise, it returns one of the ViveSR.Error error codes.
If the error is caused by SteamVR, it returns one of the ViveSR.anipal.Eye.CalibrationExitCode error codes instead.
The IntPtr parameter is not in use for now.
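As a minimal sketch of checking that return value (assuming the SRanipal Unity SDK's SRanipal_Eye_API wrapper; names may vary across SDK versions):

```csharp
using System;
using ViveSR;
using ViveSR.anipal.Eye;

public static class EyeCalibrationHelper
{
    // Launches eye calibration and reports whether it succeeded.
    public static bool Calibrate()
    {
        // The IntPtr parameter is currently unused, so IntPtr.Zero is passed.
        int result = SRanipal_Eye_API.LaunchEyeCalibration(IntPtr.Zero);

        if (result == (int)Error.WORK)
            return true; // Calibration finished successfully.

        // On failure the value is a ViveSR.Error code, or, when SteamVR
        // caused the failure, a ViveSR.anipal.Eye.CalibrationExitCode.
        Console.WriteLine("Calibration failed with code: " + result);
        return false;
    }
}
```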
Only NVIDIA graphics cards are supported:
Maxwell family, such as the GTX 970 and 980
Pascal family, such as the GTX 1060, 1070, 1080, and Titan X
We expect to support the Turing family (such as the RTX 2060, 2070, and 2080) by the end of June 2019.
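To fail fast on unsupported hardware, a Unity app can inspect the GPU name at startup. This is a rough sketch using Unity's built-in SystemInfo; a substring check is only a heuristic, not an official support test:

```csharp
using UnityEngine;

public class GpuCheck : MonoBehaviour
{
    void Awake()
    {
        // SystemInfo.graphicsDeviceName returns a string such as
        // "NVIDIA GeForce GTX 1070".
        string gpu = SystemInfo.graphicsDeviceName;
        if (!gpu.Contains("NVIDIA"))
        {
            Debug.LogWarning(
                "SRWorks requires a supported NVIDIA GPU; detected: " + gpu);
        }
    }
}
```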
Several directions for your reference:
1. Move the HMD closer to the scanned objects.
2. Scan objects that have more texture.
3. Configure the depth engine via the control APIs. For various use scenarios, refer to Sample2_DepthImage.unity, e.g., configuring the de-noise filter and enabling Refinement mode.
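As a hedged sketch of item 3, the configuration might look like the following. The method names below are assumptions modeled on the SRWorks Unity sample; the authoritative usage is in Sample2_DepthImage.unity for your SDK version:

```csharp
using Vive.Plugin.SR;

public class DepthConfigExample
{
    public void ConfigureDepth()
    {
        // Hypothetical calls; verify the exact names against
        // Sample2_DepthImage.unity before use.
        ViveSR_DualCameraImageCapture.EnableDepthProcess(true);    // turn the depth engine on
        ViveSR_DualCameraImageCapture.EnableDepthRefinement(true); // assumption: Refinement mode
        ViveSR_DualCameraImageCapture.SetDepthDenoiseLevel(3);     // assumption: de-noise filter strength
    }
}
```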
You can refer to the demo example beside example 04; it has a SAVE function (shown in the figure below) that can save the scanned mesh to an OBJ file.
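For reference, the OBJ format itself is simple. Here is a minimal sketch of writing vertices and triangle faces to a Wavefront OBJ file; this is only an illustration of the format, not the SDK's actual SAVE implementation:

```csharp
using System.Globalization;
using System.IO;
using System.Text;

public static class ObjExporter
{
    // Writes vertex positions and triangle indices to a Wavefront OBJ file.
    // OBJ face indices are 1-based, hence the +1 below.
    public static void Save(string path, float[][] vertices, int[] triangles)
    {
        var sb = new StringBuilder();
        foreach (var v in vertices)
            sb.AppendLine(string.Format(CultureInfo.InvariantCulture,
                "v {0} {1} {2}", v[0], v[1], v[2]));
        for (int i = 0; i < triangles.Length; i += 3)
            sb.AppendLine($"f {triangles[i] + 1} {triangles[i + 1] + 1} {triangles[i + 2] + 1}");
        File.WriteAllText(path, sb.ToString());
    }
}
```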
Sure, occlusion is one of the major functions we are implementing, as shown in video 1 at 0:35 and video 2 at 2:34, where the virtual objects on the floor are occluded by the real furniture in front of them.
Interacting with the environment: https://drive.google.com/open?id=16b5wWHAaVXwZ2slJK_zEmXIKSrAY24xr
A portal for the player to pass through between the real and virtual worlds: https://drive.google.com/open?id=1Fl_VY-41fThn02083iA5yEwRe-Z2bqpG
Regarding the occlusion by moving objects such as hands that you mentioned, this can be achieved with our dynamic mesh creation function, though further improvements are still under way.