I have an HTC Vive Pro Eye and I am interested in recording eye tracking data as a way of seeing where the user's attention is focused in the VR environment, for research purposes. I am thinking that being able to record the output of the Focus API check, for example, may be useful to this end. The user would experience several different levels, so the recording would need to be carried out in such a way that it captures what is being looked at across the different levels of the simulated environment.
What is the best way to go about this? I have seen that the Focus API check works for any object in the scene that has a collider on it, but I could not find how to set this up and record the output. Is there a tutorial available that explains this workflow? (I am currently using UE, but it would also be useful to know how this would work in Unity.)
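To make the recording goal concrete, here is a rough sketch of the kind of per-frame log I have in mind, written as plain Python/CSV rather than engine code. All of the field names, level names, and object names here are placeholders I made up for illustration, not anything from the SDK; the idea is just one row per frame, tagged with the current level and the object the gaze is focused on:

```python
import csv
import io

# Hypothetical per-frame gaze sample. Every field name below is a placeholder
# standing in for whatever the eye-tracking API actually returns.
FIELDS = ["timestamp", "level_name", "focused_object",
          "gaze_origin_x", "gaze_origin_y", "gaze_origin_z",
          "gaze_dir_x", "gaze_dir_y", "gaze_dir_z"]

def write_samples(stream, samples):
    """Write gaze samples (dicts keyed by FIELDS) to a CSV stream."""
    writer = csv.DictWriter(stream, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(samples)

# Example: two made-up samples from two different levels.
buf = io.StringIO()
write_samples(buf, [
    {"timestamp": 0.016, "level_name": "Level_01", "focused_object": "Cube_A",
     "gaze_origin_x": 0.0, "gaze_origin_y": 1.6, "gaze_origin_z": 0.0,
     "gaze_dir_x": 0.1, "gaze_dir_y": -0.2, "gaze_dir_z": 0.97},
    {"timestamp": 0.033, "level_name": "Level_02", "focused_object": "Door_B",
     "gaze_origin_x": 0.0, "gaze_origin_y": 1.6, "gaze_origin_z": 0.1,
     "gaze_dir_x": 0.0, "gaze_dir_y": 0.0, "gaze_dir_z": 1.0},
])
print(buf.getvalue())
```

Something along these lines, written to disk each frame (or batched) from within the engine, is what I am hoping the Focus check can feed into.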
I hope this makes sense and thanks so much for your help.