
Annabell

Verified Members
  • Posts: 20
  • Joined
  • Last visited

Everything posted by Annabell

  1. @Dario Yes, I already looked into this example. But within this example I have to "define" which objects can be focused, whereas I just want to get the point of focus, independent of whether the person focused on an object or just looked into the air. Therefore I thought about the following idea: I can receive the eye data via SRanipal_Eye_API.GetEyeData(ref data) and thus have the eye data of the left and right eyes as well as the combined data. In the next step I calculate the focused point by computing the minimal distance between the two gaze lines of the left and right eye. If the minimal distance is 0, they have an intersection point:
     lineLeft = gazeOrigin_left + l * normalizedGazeDirectionVector_left
     lineRight = gazeOrigin_right + k * normalizedGazeDirectionVector_right
     where:
     gazeOrigin_left == data.verbose_data.left.gaze_origin_mm
     normalizedGazeDirectionVector_left == data.verbose_data.left.gaze_direction_normalized
     gazeOrigin_right == data.verbose_data.right.gaze_origin_mm
     normalizedGazeDirectionVector_right == data.verbose_data.right.gaze_direction_normalized
     Unfortunately the two lines do not have an intersection point, and I do not understand why. Does anyone have an idea why there is no intersection point? @Corvus
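A likely reason the two lines never meet exactly is that measured gaze rays are noisy and therefore almost always skew; a common workaround is to compute the pair of closest points on the two lines and take their midpoint as the focused point. Below is a minimal sketch of that calculation, assuming Unity's Vector3 and the SRanipal fields quoted above (note that gaze_origin_mm is in millimeters, so scale the result to your scene units):

```csharp
using UnityEngine;

public static class GazeIntersection
{
    // Returns the midpoint of the shortest segment connecting two gaze rays.
    // Measured rays are practically never exactly intersecting, so the midpoint
    // of the closest approach is used as the estimated "focused point".
    public static Vector3 EstimateFocusPoint(
        Vector3 originLeft, Vector3 dirLeft,
        Vector3 originRight, Vector3 dirRight)
    {
        Vector3 w0 = originLeft - originRight;
        float a = Vector3.Dot(dirLeft, dirLeft);
        float b = Vector3.Dot(dirLeft, dirRight);
        float c = Vector3.Dot(dirRight, dirRight);
        float d = Vector3.Dot(dirLeft, w0);
        float e = Vector3.Dot(dirRight, w0);

        float denom = a * c - b * b;             // ~0 when the rays are (almost) parallel
        if (Mathf.Abs(denom) < 1e-6f)
            return originLeft + dirLeft * 1000f; // fall back to a point far along the left ray

        float l = (b * e - c * d) / denom;       // parameter on the left ray
        float k = (a * e - b * d) / denom;       // parameter on the right ray

        Vector3 pLeft = originLeft + l * dirLeft;
        Vector3 pRight = originRight + k * dirRight;
        return (pLeft + pRight) * 0.5f;
    }
}
```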
  2. I am trying to get the focused point. I have two ideas:
     1) Calculate the intersection of the line defined by data.verbose_data.left.gaze_origin_mm and data.verbose_data.left.gaze_direction_normalized (left eye line) with the line defined by data.verbose_data.right.gaze_origin_mm and data.verbose_data.right.gaze_direction_normalized (right eye line). Unfortunately, this does not return an intersection point because the two lines do not cross each other.
     2) I saw that the Tobii XR SDK offers a function "GetEyeTrackingData" (https://vr.tobii.com/sdk/develop/unity/documentation/usage-examples/). Unfortunately, the class TobiiXR has no function GetEyeTrackingData, so I am not able to use it.
     Does anyone have any ideas?
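Regarding idea 2), the linked Tobii usage examples describe reading a combined gaze ray directly from the SDK; a minimal sketch based on those docs follows, with the caveat that the class and member names depend on the Tobii XR SDK version installed (older versions expose the data differently, which may be why the function appears to be missing here):

```csharp
using Tobii.XR;
using UnityEngine;

public class CombinedGazeReader : MonoBehaviour
{
    void Update()
    {
        // Combined gaze ray in world space, as described in the Tobii XR usage examples.
        var eyeData = TobiiXR.GetEyeTrackingData(TobiiXR_TrackingSpace.World);
        if (eyeData.GazeRay.IsValid)
        {
            Vector3 origin = eyeData.GazeRay.Origin;
            Vector3 direction = eyeData.GazeRay.Direction;
            Debug.Log($"Gaze origin: {origin}, direction: {direction}");
        }
    }
}
```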
  3. @chengnay Yeah, I am looking for the actions shown in your screenshot (InteractUI, Teleport, GrabPinch, GrabGrip, ...)
  4. @chengnay Thanks for the tip. I already found another option in the Asset Store.
  5. @chengnay Okay, this seems to be a way to figure out if a button was pressed, but is it also possible to figure out which action belongs to this button?
  6. @VibrantNebula Yes, exactly: the developer has to define Actions and map them to one or multiple buttons. You can then trigger those actions, but only when you know exactly what those actions are. For example, in Unity it is possible to get all existing GameObjects via "UnityEngine.Object.FindObjectsOfType<GameObject>();", so you do not have to tell the code which GameObjects exist. With actions I only figured out that you have to know the name of each action to trigger it, but I did not find a way to discover those actions and their corresponding names via a function. @chengnay @zzy @Jad @Corvus
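For enumerating the actions at runtime, the SteamVR Unity plugin keeps the actions generated from the binding configuration in the SteamVR_Input class; a minimal sketch under that assumption (plugin installed and input actions generated):

```csharp
using UnityEngine;
using Valve.VR;

public class ActionLister : MonoBehaviour
{
    void Start()
    {
        // SteamVR_Input.actions holds every action generated from the action manifest.
        foreach (SteamVR_Action action in SteamVR_Input.actions)
        {
            Debug.Log($"{action.fullPath} ({action.GetType().Name})");
        }

        // Boolean actions (e.g. InteractUI, Teleport, GrabPinch, GrabGrip)
        // can also be requested by type:
        foreach (SteamVR_Action_Boolean boolAction in SteamVR_Input.GetActions<SteamVR_Action_Boolean>())
        {
            Debug.Log($"Boolean action: {boolAction.GetShortName()}");
        }
    }
}
```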
  7. Figured out a way. Basically I used WebSockets for bidirectional communication. I had to set up a WebSocket server and the frontend, so in the end I have two clients (frontend and Unity application) and the server.
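The post does not say which WebSocket library was used, so the following is only one possible sketch of the Unity-side client, using .NET's built-in ClientWebSocket (the server address is a placeholder):

```csharp
using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using UnityEngine;

public class WebSocketBridge : MonoBehaviour
{
    private ClientWebSocket socket;

    private async void Start()
    {
        socket = new ClientWebSocket();
        // Placeholder address: point this at your own WebSocket server.
        await socket.ConnectAsync(new Uri("ws://localhost:8080"), CancellationToken.None);

        await Send("unity-client ready");
        _ = Task.Run(ReceiveLoop);
    }

    private async Task Send(string message)
    {
        var bytes = Encoding.UTF8.GetBytes(message);
        await socket.SendAsync(new ArraySegment<byte>(bytes),
            WebSocketMessageType.Text, true, CancellationToken.None);
    }

    private async Task ReceiveLoop()
    {
        var buffer = new byte[4096];
        while (socket.State == WebSocketState.Open)
        {
            var result = await socket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
            var message = Encoding.UTF8.GetString(buffer, 0, result.Count);
            Debug.Log($"Received from server: {message}");
        }
    }

    private async void OnDestroy()
    {
        if (socket != null && socket.State == WebSocketState.Open)
            await socket.CloseAsync(WebSocketCloseStatus.NormalClosure, "closing", CancellationToken.None);
    }
}
```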
  8. I would like to know if there is a method to get all possible actions of the controllers (HTC Vive Pro Eye) in a Unity application programmatically in C#. Does anyone know a method to do so? @Corvus @chengnay
  9. I would like to know if there is a method to get all possible actions of the controllers (HTC Vive Pro Eye) in a Unity application programmatically in C#. Does anyone know a method to do so?
  10. The goal is to set up some variables for a Unity application from within a web browser. The communication needs to be bidirectional, because we first need to get some information from Unity (which GameObjects/actions/... exist) for some pop-up menus, and secondly we have to fill out the "form" on the website to decide which information should be sent to the Unity application in order to start the scene. Does anyone have any ideas? I already tried to work with this tutorial, but unfortunately Network.peerType is no longer available in Unity version 2019.2.9f1. @VibrantNebula
  11. I want to have something like a start canvas in the VR world. More precisely, before starting the "real application", the player has to type in his name and email address. I am using an HTC Vive Pro Eye, so the idea would be to have a virtual keyboard which the user can control via the controllers. Are there any similar projects/ideas on how to do this? Among the GameObjects I only found the input field, which unfortunately does not appear in the VR world, only on the computer screen, and is not visible with the headset. @chengnay
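One common cause of UI being visible on the monitor but not in the headset is a Canvas in Screen Space render mode; VR UI generally needs a World Space canvas. A minimal sketch of that setup, with placeholder references for the canvas and the head transform:

```csharp
using UnityEngine;

public class WorldSpaceCanvasSetup : MonoBehaviour
{
    // Placeholder references: the canvas holding the name/email input fields
    // and the headset camera transform (e.g. the camera inside [CameraRig]).
    [SerializeField] private Canvas startCanvas;
    [SerializeField] private Transform headTransform;

    void Start()
    {
        // Screen Space canvases are drawn on the desktop window only;
        // a World Space canvas is rendered inside the headset.
        startCanvas.renderMode = RenderMode.WorldSpace;

        // Place the canvas ~1.5 m in front of the user and scale it down,
        // since world-space UI is measured in meters.
        startCanvas.transform.position = headTransform.position + headTransform.forward * 1.5f;
        startCanvas.transform.rotation = Quaternion.LookRotation(headTransform.forward);
        startCanvas.transform.localScale = Vector3.one * 0.001f;
    }
}
```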
  12. Within Unity I can only download the SteamVR Plugin and not the OpenVR Plugin. On top of that, IVRSystem seems to be a C++ interface, but within Unity you usually use C#. @Corvus
  13. I am using Unity @VibrantNebula
  14. I want to get the position of the headset in order to decide whether someone nodded or shook his/her head. Does anyone have an idea how I can get the position data/values of the headset? Do I need any specific GameObjects?
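One possible way to read the headset pose is Unity's XR tracking API, which needs no extra GameObjects; a minimal sketch follows (detecting an actual nod or shake would additionally require analysing these values over time):

```csharp
using UnityEngine;
using UnityEngine.XR;

public class HeadPoseReader : MonoBehaviour
{
    void Update()
    {
        // Position and rotation of the HMD in the tracking space.
        Vector3 headPosition = InputTracking.GetLocalPosition(XRNode.Head);
        Quaternion headRotation = InputTracking.GetLocalRotation(XRNode.Head);

        // Pitch (nodding) and yaw (shaking) can be read from the Euler angles;
        // a nod/shake detector would watch how these change frame to frame.
        Vector3 euler = headRotation.eulerAngles;
        Debug.Log($"Head position: {headPosition}, pitch: {euler.x:F1}, yaw: {euler.y:F1}");
    }
}
```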
  15. Yeah, you were right. Digging deeper showed me that Unity basically just rounded the floats. Thanks 🙂 @Corvus @Daniel_Y
  16. Yes, exactly, that is what I want. That was too easy. Thank you for your help 🙂
  17. I was wondering how you can create your own Asset Package which can be imported via Assets -> Import Package. I am trying to implement a module which should also contain a Prefab object. So how do I create such a module and such Prefabs?
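A custom package can be exported manually via Assets -> Export Package..., or from an editor script; a minimal sketch using Unity's AssetDatabase, where the folder and file names are placeholders:

```csharp
using UnityEditor;

public static class PackageExporter
{
    [MenuItem("Tools/Export My Module")]
    public static void Export()
    {
        // Bundles everything under the given folder (scripts, prefabs, materials, ...)
        // plus dependencies into a .unitypackage that others can import via
        // Assets -> Import Package -> Custom Package...
        AssetDatabase.ExportPackage(
            "Assets/MyModule",               // placeholder folder containing the prefab
            "MyModule.unitypackage",         // output file in the project root
            ExportPackageOptions.Recurse | ExportPackageOptions.IncludeDependencies);
    }
}
```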
  18. Data structure Vive.SR.Eye.SingleEyeData:
      ulong eye_data_validata_bit_mask
      float eye_openness
      UnityEngine.Vector3 gaze_direction_normalized
      UnityEngine.Vector3 gaze_origin_mm
      float pupil_diameter_mm
      UnityEngine.Vector2 pupil_position_in_sensor_area
      Theoretically it should be possible to get the focused point by calculating the intersection point of the linear equation of each eye (left and right) using gaze_direction_normalized and gaze_origin_mm. When debugging the data, I noticed that the vector gaze_direction_normalized is the same for each eye, which doesn't make any sense. Do you have any explanation why this is so? Did I misunderstand something in the calculation of the focused point?
      Debugged values:
                                 Combined            Left                Right
      gaze_direction_normalized  (0.0, -0.1, 1.0)    (0.0, -0.1, 1.0)    (0.0, -0.1, 1.0)
      gaze_origin_mm             (3.0, 5.4, -44.3)   (32.4, 5.3, -44.3)  (-28.6, 5.6, -44.2)
      @Daniel_Y @Corvus
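For inspecting the per-eye values, here is a minimal sketch built around the SRanipal_Eye_API.GetEyeData call mentioned in the posts above; the namespaces and error-handling convention are assumptions based on SRanipal SDK v1 and may differ in other versions:

```csharp
using UnityEngine;
using ViveSR.anipal.Eye;

public class EyeDataLogger : MonoBehaviour
{
    private EyeData eyeData = new EyeData();

    void Update()
    {
        // Poll the current eye data from the SRanipal runtime.
        ViveSR.Error error = SRanipal_Eye_API.GetEyeData(ref eyeData);
        if (error != ViveSR.Error.WORK) return;

        SingleEyeData left = eyeData.verbose_data.left;
        SingleEyeData right = eyeData.verbose_data.right;

        // If these two direction vectors are identical every frame,
        // the per-eye data is likely not being updated as expected.
        Debug.Log($"L origin: {left.gaze_origin_mm}  L dir: {left.gaze_direction_normalized}");
        Debug.Log($"R origin: {right.gaze_origin_mm}  R dir: {right.gaze_direction_normalized}");
    }
}
```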