Downsampling Textures on Vive Focus Plus


tedw4rd

Hello!

I'm working on a Vive Focus Plus project using Unity 2018.3. We're trying to build a screen-sharing feature into our VR application. Currently, we do this by grabbing the main camera's active render texture and using Graphics.Blit to copy its contents to a second, equally sized render texture. That all happens in a Camera.onPostRender callback. Later, we stream the contents of that second texture to the receiver.

I'd like the second texture (the one being copied into) to be smaller than the main camera texture so we can save on texture memory, bandwidth, frame time, etc. However, whenever I blit the main camera texture to a smaller texture, the smaller texture ends up black. I've tried changing the filter modes on both textures and gotten the same result. Is blitting to a differently sized texture simply not supported on the Vive Focus, or am I missing something?
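For reference, one workaround I've been sketching (untested, and purely an assumption on my part that a chain of 2:1 blits behaves better than one large-ratio blit on this hardware) is to downscale in half-resolution steps instead of a single blit. The helper name here is hypothetical:

```csharp
using UnityEngine;

// Hypothetical helper: downscale a render texture via repeated half-size blits.
// The assumption (unverified) is that some mobile drivers handle a chain of 2:1
// blits more reliably than one blit with a large size ratio.
public static class RenderTextureDownscaler
{
    public static void Downscale(RenderTexture src, RenderTexture dst)
    {
        int width = src.width;
        int height = src.height;
        RenderTexture current = src;

        // Halve the resolution until the next step would undershoot the destination.
        while (width / 2 >= dst.width && height / 2 >= dst.height)
        {
            width /= 2;
            height /= 2;
            RenderTexture next = RenderTexture.GetTemporary(width, height, 0, src.format);
            Graphics.Blit(current, next);
            if (current != src)
            {
                RenderTexture.ReleaseTemporary(current);
            }
            current = next;
        }

        // Final blit into the caller's destination.
        Graphics.Blit(current, dst);
        if (current != src)
        {
            RenderTexture.ReleaseTemporary(current);
        }
    }
}
```

Each intermediate is released the same frame it's used, so this only adds transient texture memory.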


  • 4 weeks later...

@Corvus I had to reconstruct some of the work I threw out, but here's the gist of what I was working with:

 

using UnityEngine;
using wvr;

namespace Core.Cognivive.Multiplayer.Streaming
{
    public class WaveScreenCapture : StreamingScreenCapture
    {
        private uint _screenWidth;
        private uint _screenHeight;
        private bool _setupComplete;

        private RenderTexture _internalTexture;
        private RenderTexture _botheyeTexture;

        protected override int ScreenWidth
        {
            get { return (int) _screenWidth; }
        }

        protected override int ScreenHeight
        {
            get { return (int) _screenHeight; }
        }

        protected override void PerformCapture()
        {
            // Grab the both-eyes camera's current target, then downscale it into the half-size texture.
            _botheyeTexture = WaveVR_Render.Instance.botheyes.GetCamera().activeTexture;
            if (!_setupComplete)
            {
                Setup(_botheyeTexture);
            }
            Graphics.Blit(_botheyeTexture, _internalTexture);
        }

        protected override byte[] GetScreenData(int srcX, int srcY, int srcWidth, int srcHeight, int outputBlockWidth, int outputBlockHeight)
        {
            // Read the downscaled texture back to the CPU, restoring the previous
            // render target afterwards so we don't clobber other rendering.
            RenderTexture previous = RenderTexture.active;
            RenderTexture.active = _internalTexture;
            _tex.ReadPixels(new Rect(0, 0, _internalTexture.width, _internalTexture.height), 0, 0);
            _tex.Apply();
            RenderTexture.active = previous;

            return _tex.EncodeToJPG();
        }

        private void OnEnable()
        {
            Camera.onPostRender += TryTickStream;
        }

        private void OnDisable()
        {
            Camera.onPostRender -= TryTickStream;
        }

        private void TryTickStream(Camera cam)
        {
            if (cam != WaveVR_Render.Instance.botheyes.GetCamera())
            {
                return;
            }
            PerformCapture();
        }

        private void Setup(RenderTexture src)
        {
            Interop.WVR_GetRenderTargetSize(ref _screenWidth, ref _screenHeight);

            // Allocate a persistent half-resolution target. GetTemporary is meant for
            // textures released within the same frame, so create one explicitly instead.
            _internalTexture = new RenderTexture(src.width / 2, src.height / 2, src.depth, src.format);
            _internalTexture.Create();
            _tex = new Texture2D(_internalTexture.width, _internalTexture.height, TextureFormat.RGBA32, false);

            _setupComplete = true;
        }
    }
}

 

The idea was to Graphics.Blit the main camera texture into a smaller render texture and then pull that downscaled image out every couple of frames with GetScreenData. GetScreenData would always return a blank image. We're using single-pass rendering, and I can confirm that the contents of _botheyeTexture are correct when we perform the blit.
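One thing I've been meaning to rule out (speculation on my part, not something the Wave SDK docs confirm): with single-pass rendering the camera target can be a Texture2DArray, and a plain Graphics.Blit from an array target into a regular 2D texture can silently come out black. A hypothetical check that copies one eye slice into a same-size 2D texture first, via Graphics.CopyTexture:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical diagnostic for single-pass stereo: if the camera target is a
// Texture2DArray, copy one eye slice into a plain 2D render texture before
// downscaling, rather than blitting straight from the array target.
public static class StereoCaptureHelper
{
    public static void CaptureEye(RenderTexture stereoSrc, RenderTexture smallDst)
    {
        if (stereoSrc.dimension == TextureDimension.Tex2DArray)
        {
            // Same-size intermediate so CopyTexture's size/format requirements are met.
            RenderTexture eye = RenderTexture.GetTemporary(
                stereoSrc.width, stereoSrc.height, 0, stereoSrc.format);
            Graphics.CopyTexture(stereoSrc, 0, eye, 0); // slice 0 = left eye
            Graphics.Blit(eye, smallDst);
            RenderTexture.ReleaseTemporary(eye);
        }
        else
        {
            Graphics.Blit(stereoSrc, smallDst);
        }
    }
}
```

Note that CopyTexture requires matching sizes and compatible formats between source and destination, which is why the intermediate is allocated at full resolution before the downscaling blit.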

