LS_Tpowell

Verified Members
  • Content Count: 24
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About LS_Tpowell

  • Rank: Explorer

  1. I've had numerous issues with Win7, and also with non-Intel USB host controllers. Make sure your motherboard has actual Intel USB host controllers, or use the Inateck PCIe USB card that HTC recommends for the Vive/Vive Pro. Beyond that, yes, I've seen some incompatibility with some of our mainboards.
  2. Can you post pictures of both cameras' undistorted and distorted feeds? If you're not sure how to get these in Unity, the C API project as-is will give you everything you need to grab those images. At this point I can only assume the camera itself is askew, but having both sets of images would help confirm that.
  3. Ground plane needs realignment; please do another room setup with SteamVR. Never mind: if it's really on a level surface, this looks like either the camera being physically misaligned or the camera's undistortion being incorrect.
  4. There are too many undocumented parameters that the sample application gives no examples for, so almost nothing can be understood, and too many generic parameters that don't indicate the unit of measurement they accept, the type, the valid range of inputs, etc. Most of the Param enums don't specify a type for their options, so we don't even know which can be get/set, nor whether to get/set them as bool, int, float, double, string, pointer, struct, or float array. A sample application on its own is not a form of documentation; we NEED documentation for the C API, at least for lines 3 through 436 of ViveSR_Enums.h and lines 51-156 of ViveSR_API.h.
  5. wrote: No, I've stopped working with SRWorks. My goal was to replace the camera but use parts of the SRWorks plugin with the video stream of the other camera. It did not work out well, because the plugin is really confusing and complicated... Recently I bought a ZED Mini from Stereolabs, which has its own working SDK and UE plugin; I attached it to the HMD and it works just fine ;) This actually looks pretty interesting. Is there native C++ development support, or only plugins for the well-known game engines? Our product is already feature complete and for the most part done, but I'm not happy with the low resolution and the really low-quality depth information; are these things you feel were addressed well by the ZED? How about the SDK documentation quality? Sorry, I hope you don't mind me picking your brain on this one.
  6. wrote: You could refer to the sample $(openvr)\samples\tracked_camera_openvr_sample included in the OpenVR SDK for the front camera access that you may want. It seems like Chitoss is asking whether it's possible to retrieve the camera feed WITHOUT using the SRWorks SDK, because of the number of DLLs it requires (see the IVRTrackedCamera sketch after this list).
  7. wrote: No plan yet; or are you willing to share your plan for further discussion? We have a third-party developer whose product is 32-bit only and only accepts 32-bit plugins; while their future plans include 64-bit support, we must still support 32-bit in both the short and long term. One of the projects involved requires depth mapping and stereo video. Until that is supported, what this means for us is running an external 64-bit program with SRWorks and passing the data to a 32-bit plugin over IPC (a minimal shared-memory sketch is included after this list). Research suggests that with the large amount of shared memory required, IPC will likely bring significant performance degradation, so doing this work natively rather than through separate applications using IPC would be ideal.
  8. When will 32-bit DLLs be released for the SRWorks SDK? Is this planned, in the works, or not feasible? We unfortunately need to support both 64-bit and 32-bit, so it would be great to know whether 32-bit DLLs are available (ViveSR_API.dll, etc.).
  9. Awesome, thanks for the clarification! Again, though, this is something I think could have been cleared up if there were more than a couple of fragmented comments in the released headers. We've had a few updates to the DLLs since the released headers (https://developer.vive.com/resources/knowledgebase/vive-srworks-sdk/). If there is a version of the headers with documentation in it somewhere, or some unreleased Doxygen output, I think we'd all appreciate it greatly.
  10. wrote: Depth map is undistorted. Not sure what exactly you want? The depth map appears to be pre-undistort, i.e. it's the image prior to any lens distortion correction. When I use the term undistort, I'm referring to the API's references to Distorted and Undistorted images, which represent pre-undistort and post-undistort respectively. This matches OpenCV terminology as well: the distorted image is the source image without any post-processing, and the undistorted image is the image after it has been transformed to account for the camera lens distortion. Sorry for any confusion there. Edit: What I'm asking for is documentation on retrieving and applying the undistortion model used internally, since there is exactly zero documentation of that in the API and sample application. My particular use case is undistorting the depth data (see the OpenCV remap sketch after this list), but I'm sure there are other use cases. Alternatively, a callback for already-undistorted depth maps would work for me. But again, the complete lack of documentation for the C API is something I'm sure more people would appreciate being addressed.
  11. Is there currently a method or callback in the works for grabbing the depth map undistorted to match the camera's undistortion model? Or, alternatively, is there a way to grab the undistortion model so that I could apply it via OpenCV to the depth map or any other already-distorted data? I've found no samples in the C API matching this need, and there's exactly zero documentation in the included headers relating to it.
  12. gjcope, I'm unable to give you a measure of the latency between image capture and my perception of a rendered update; however, the delay between the ID_SEETHROUGH callback and when I Submit the texture to the compositor is typically at or under 11 ms. The delay between callbacks is typically around 16 ms, putting the callbacks at roughly 60 Hz (a simple timing sketch is included after this list). I don't know enough about your application to assist further; however, if you're using the C API, grab the Unreal or Unity demo and compare its results to your own to at least see whether this is an implementation issue. That 21-33 ms figure suggests your camera may not actually be running at the full 60 Hz. It's also worth verifying that any modification you make to the textures is done on the GPU, and that you're not shuffling the texture back and forth between RAM and VRAM.
  13. I think I follow what you're getting at: you want to render the images to a texture, and then render that texture to the HMD. I'm going to refer you to the wiki page for the compositor in OpenVR: https://github.com/ValveSoftware/openvr/wiki/IVRCompositor_Overview The Submit function takes a number of formats, but they're all within typical graphics specifications (OpenGL, DirectX 11, and Vulkan); references to those textures are held within an OpenVR Texture_t struct, and the textures must already exist or be reserved in video memory. In order to Submit() your images to the compositor and get them onto the HMD, you're going to have to use OpenGL, DirectX, or Vulkan (a minimal Submit() sketch is included after this list). Again, I'd recommend OpenGL because its learning curve is a little less steep and because there's already a good amount of integration with OpenCV (newer versions of OpenCV natively support converting a cv::Mat to a GL texture, IIRC, and there are plenty of tutorials online).
  14. OpenVR has several sample projects. In this case, you likely want to work with the OpenGL sample, as OpenGL and OpenCV play very nicely together. You may need to rebuild OpenCV to include native OpenGL support, as I believe the OpenCV distributed with the SRWorks SDK was not built with OpenGL support (a plain glTexImage2D upload sketch that sidesteps this is included after this list). If you're not already familiar with graphics programming or haven't worked extensively with OpenGL or a similar specification, I'd recommend taking the existing OpenVR OpenGL sample application and integrating the SRWorks SDK into it, rather than integrating OpenVR into an existing SRWorks SDK project. The SRWorks SDK in its current state is exceptionally simple and portable, so again I'd recommend taking what you need from it into another existing project that already has rendering down pat.
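
Re: post 6. Below is a minimal sketch of reading the headset's front-camera feed directly through OpenVR's IVRTrackedCamera interface, with no SRWorks DLLs involved, following the same flow as the tracked_camera_openvr_sample. The single-frame poll, the Undistorted frame type, and the abbreviated error handling are choices made here for brevity, not anything prescribed by the SDK.

```cpp
// Grab one undistorted frame from the HMD's tracked camera via OpenVR only.
#include <openvr.h>
#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    vr::EVRInitError initErr = vr::VRInitError_None;
    vr::VR_Init(&initErr, vr::VRApplication_Background);
    if (initErr != vr::VRInitError_None) { std::printf("VR_Init failed\n"); return 1; }

    vr::IVRTrackedCamera *camera = vr::VRTrackedCamera();
    bool hasCamera = false;
    camera->HasCamera(vr::k_unTrackedDeviceIndex_Hmd, &hasCamera);
    if (!hasCamera) { std::printf("No tracked camera found\n"); vr::VR_Shutdown(); return 1; }

    uint32_t width = 0, height = 0, bufferSize = 0;
    camera->GetCameraFrameSize(vr::k_unTrackedDeviceIndex_Hmd,
                               vr::VRTrackedCameraFrameType_Undistorted,
                               &width, &height, &bufferSize);

    vr::TrackedCameraHandle_t handle = INVALID_TRACKED_CAMERA_HANDLE;
    camera->AcquireVideoStreamingService(vr::k_unTrackedDeviceIndex_Hmd, &handle);

    // Poll once; a real application would retry until a frame is available.
    std::vector<uint8_t> frame(bufferSize);
    vr::CameraVideoStreamFrameHeader_t header;
    vr::EVRTrackedCameraError err = camera->GetVideoStreamFrameBuffer(
        handle, vr::VRTrackedCameraFrameType_Undistorted,
        frame.data(), (uint32_t)frame.size(), &header, sizeof(header));
    if (err == vr::VRTrackedCameraError_None)
        std::printf("Got frame %u (%ux%u)\n", header.nFrameSequence, width, height);

    camera->ReleaseVideoStreamingService(handle);
    vr::VR_Shutdown();
    return 0;
}
```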
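
Re: post 7. A rough sketch of the 64-bit-host to 32-bit-plugin handoff over a Win32 named file mapping. The mapping name, the 4 MB size, and the FrameHeader layout are made up for illustration; a real implementation would keep the mapping open across frames and synchronize producer and consumer with an event or a sequence counter instead of reopening the mapping every frame.

```cpp
#include <windows.h>
#include <cstdint>
#include <cstring>

struct FrameHeader {                    // hypothetical shared layout
    uint32_t frameIndex;
    uint32_t width, height;
    uint32_t bytesPerPixel;
};

static const char*  kMappingName = "Local\\SRWorksFrameShare";  // illustrative name
static const size_t kMappingSize = 4 * 1024 * 1024;             // illustrative size

// 64-bit producer: copy one camera frame (header + pixels) into the mapping.
void PublishFrame(const FrameHeader& header, const uint8_t* pixels, size_t pixelBytes)
{
    HANDLE mapping = CreateFileMappingA(INVALID_HANDLE_VALUE, nullptr, PAGE_READWRITE,
                                        0, (DWORD)kMappingSize, kMappingName);
    uint8_t* view = (uint8_t*)MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, kMappingSize);
    std::memcpy(view, &header, sizeof(header));
    std::memcpy(view + sizeof(header), pixels, pixelBytes);
    UnmapViewOfFile(view);
    CloseHandle(mapping);
}

// 32-bit consumer: open the same mapping by name and read the latest frame.
bool ReadFrame(FrameHeader& headerOut, uint8_t* pixelsOut, size_t maxPixelBytes)
{
    HANDLE mapping = OpenFileMappingA(FILE_MAP_READ, FALSE, kMappingName);
    if (!mapping) return false;
    const uint8_t* view = (const uint8_t*)MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, kMappingSize);
    std::memcpy(&headerOut, view, sizeof(headerOut));
    const size_t pixelBytes = (size_t)headerOut.width * headerOut.height * headerOut.bytesPerPixel;
    const bool fits = pixelBytes <= maxPixelBytes;
    if (fits)
        std::memcpy(pixelsOut, view + sizeof(headerOut), pixelBytes);
    UnmapViewOfFile(view);
    CloseHandle(mapping);
    return fits;
}
```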
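
Re: posts 10 and 11. A sketch of undistorting a depth frame with OpenCV's remap machinery. The intrinsics and distortion coefficients below are placeholders; the point of those posts is precisely that the model used internally isn't documented, so the real values would have to come from the device or SDK.

```cpp
#include <opencv2/opencv.hpp>

// Undistort a single-channel float depth frame (assumed CV_32FC1).
cv::Mat UndistortDepth(const cv::Mat& distortedDepth)
{
    // Placeholder pinhole intrinsics (fx, fy, cx, cy) and distortion (k1 k2 p1 p2 k3).
    const cv::Mat K = (cv::Mat_<double>(3, 3) << 278.0,   0.0, 320.0,
                                                   0.0, 278.0, 240.0,
                                                   0.0,   0.0,   1.0);
    const cv::Mat dist = (cv::Mat_<double>(1, 5) << -0.28, 0.07, 0.0, 0.0, 0.0);

    // Build the per-pixel remap; in a real pipeline, cache map1/map2 and
    // reuse them for every frame.
    cv::Mat map1, map2;
    cv::initUndistortRectifyMap(K, dist, cv::Mat(), K, distortedDepth.size(),
                                CV_32FC1, map1, map2);

    // Nearest-neighbour interpolation so depth values are never blended
    // across object boundaries.
    cv::Mat undistortedDepth;
    cv::remap(distortedDepth, undistortedDepth, map1, map2, cv::INTER_NEAREST);
    return undistortedDepth;
}
```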
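
Re: post 12. How the 11 ms / 16 ms numbers can be measured: timestamp each ID_SEETHROUGH callback, and timestamp again right before Submit(). The two hook functions here are placeholders for wherever those points live in your own code.

```cpp
#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;
static Clock::time_point g_lastSeethroughCallback;

// Call at the top of the ID_SEETHROUGH callback.
void OnSeethroughFrame()
{
    const Clock::time_point now = Clock::now();
    if (g_lastSeethroughCallback != Clock::time_point()) {
        const double betweenMs =
            std::chrono::duration<double, std::milli>(now - g_lastSeethroughCallback).count();
        std::printf("callback-to-callback: %.2f ms (%.1f Hz)\n", betweenMs, 1000.0 / betweenMs);
    }
    g_lastSeethroughCallback = now;
}

// Call immediately before handing the texture to the compositor's Submit().
void OnBeforeSubmit()
{
    const double toSubmitMs =
        std::chrono::duration<double, std::milli>(Clock::now() - g_lastSeethroughCallback).count();
    std::printf("callback-to-Submit: %.2f ms\n", toSubmitMs);
}
```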
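
Re: post 13. A minimal sketch of handing existing OpenGL textures to the OpenVR compositor. It assumes OpenVR was initialized as a scene application and that the two texture IDs are valid in the current GL context; they're parameters here, not something OpenVR provides.

```cpp
#include <openvr.h>
#include <cstdint>

// Hand two already-filled OpenGL textures (one per eye) to the compositor.
void SubmitEyes(uint32_t leftGlTexId, uint32_t rightGlTexId)
{
    // WaitGetPoses must be called each frame before Submit, or the compositor
    // will reject the textures.
    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
    vr::VRCompositor()->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount, nullptr, 0);

    vr::Texture_t leftEye  = { (void*)(std::uintptr_t)leftGlTexId,  vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
    vr::Texture_t rightEye = { (void*)(std::uintptr_t)rightGlTexId, vr::TextureType_OpenGL, vr::ColorSpace_Gamma };

    vr::VRCompositor()->Submit(vr::Eye_Left,  &leftEye);
    vr::VRCompositor()->Submit(vr::Eye_Right, &rightEye);
}
```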
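
Re: post 14. A plain upload of a cv::Mat into an OpenGL texture that avoids OpenCV's cv::ogl interop module entirely, so it works even when the OpenCV build has no OpenGL support. It assumes a current GL context and the usual 8-bit BGR cv::Mat layout.

```cpp
#include <opencv2/opencv.hpp>
#ifdef _WIN32
#include <windows.h>
#endif
#include <GL/gl.h>

// Copy an 8-bit BGR cv::Mat into a new 2D texture and return its GL name.
GLuint UploadMatToTexture(const cv::Mat& bgrImage)
{
    // Convert to RGB so we only rely on the core GL_RGB format.
    cv::Mat rgb;
    cv::cvtColor(bgrImage, rgb, cv::COLOR_BGR2RGB);

    GLuint texId = 0;
    glGenTextures(1, &texId);
    glBindTexture(GL_TEXTURE_2D, texId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Rows of the converted image are tightly packed; don't assume 4-byte alignment.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, rgb.cols, rgb.rows, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, rgb.data);
    glBindTexture(GL_TEXTURE_2D, 0);
    return texId;
}
```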