
Prototype002


Prototype Explanation:

There are currently two main cameras in the scene, and two main objects. The cameras are mapped to the two screens/eyes of the Rift and offset accordingly in the scene. Attached to each camera, and offset on the z axis, is a cube object scaled to fit the 4:3 FOV of that camera. Each cube sits on a different layer in the scene and renders a different camera input from one of two webcams as a separate WebCamTexture. The cubes are offset to match the camera views, and each camera has a culling mask so that it can only see one of the layers, matching it to the correct feed.

This means that there are two screens attached to the user's eyes, live-feeding two webcam streams into the Rift.
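A minimal sketch of that setup is below, assuming two connected webcams, two eye cameras, and user-defined layers named "LeftFeed" and "RightFeed"; the object and layer names are illustrative, not the project's actual ones:

```csharp
using UnityEngine;

// Sketch of the stereo feed setup: one cube per eye camera, each showing
// one live webcam feed, separated by layer so each eye camera only sees
// its own feed.
public class StereoWebcamFeed : MonoBehaviour
{
    public Camera leftCamera;   // mapped to the left screen/eye
    public Camera rightCamera;  // mapped to the right screen/eye
    public Renderer leftCube;   // cube parented to leftCamera, offset on z
    public Renderer rightCube;  // cube parented to rightCamera, offset on z

    void Start()
    {
        WebCamDevice[] devices = WebCamTexture.devices;
        if (devices.Length < 2)
        {
            Debug.LogError("Two webcams are required for the stereo feed.");
            return;
        }

        // One WebCamTexture per physical camera, one per eye.
        var leftFeed = new WebCamTexture(devices[0].name);
        var rightFeed = new WebCamTexture(devices[1].name);
        leftCube.material.mainTexture = leftFeed;
        rightCube.material.mainTexture = rightFeed;
        leftFeed.Play();
        rightFeed.Play();

        // Put each cube on its own layer and cull each camera so it sees
        // only its own feed. "LeftFeed"/"RightFeed" are assumed layers
        // defined in the project's layer settings.
        leftCube.gameObject.layer = LayerMask.NameToLayer("LeftFeed");
        rightCube.gameObject.layer = LayerMask.NameToLayer("RightFeed");
        leftCamera.cullingMask = 1 << LayerMask.NameToLayer("LeftFeed");
        rightCamera.cullingMask = 1 << LayerMask.NameToLayer("RightFeed");
    }
}
```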

Prototype Plan:

Each camera's feed will need to be post-processed to remove red from the optical range, leaving a blue/yellow dichromatic colour range.

[Image: 142c.gif]
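The mapping itself can be sketched in a few lines before committing to a particular tool. This is only an illustration of the idea, with an assumed equal weighting of red and green; it is not a calibrated model of equine vision:

```csharp
using UnityEngine;

public static class DichromatColour
{
    // Collapse the red/green difference and keep the blue/yellow axis.
    // The 50/50 weighting is an assumption for illustration only.
    public static Color ToDichromat(Color c)
    {
        float yellow = (c.r + c.g) * 0.5f;
        return new Color(yellow, yellow, c.b, c.a);
    }
}
```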

For the first test of this I will try a LUT-based post-processor using the Amplify Colour plugin for Unity, which can grade footage using a pipeline between Unity and Photoshop to create a LUT. The LUT is then applied to each camera rather than to the objects as a shader.
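Conceptually, a LUT grade replaces each source colour with the colour stored at the corresponding position in a lookup texture. A rough CPU-side sketch of the idea, not the plugin's actual code, which does this per pixel on the GPU:

```csharp
using UnityEngine;

public static class LutGrade
{
    // Nearest-neighbour lookup for clarity; a shader would sample and
    // interpolate the LUT instead.
    public static Color ApplyLut(Texture3D lut, Color c)
    {
        int size = lut.width;
        int r = Mathf.RoundToInt(c.r * (size - 1));
        int g = Mathf.RoundToInt(c.g * (size - 1));
        int b = Mathf.RoundToInt(c.b * (size - 1));
        return lut.GetPixel(r, g, b);
    }
}
```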

I think that a camera-based post-processor will work better than a shader, since the objects use a dynamically produced WebCamTexture. I will try the plugin, and if the tests are unsuccessful I will look at programmed shaders.
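If the plugin route fails, a hand-rolled camera post-processor would look something like the sketch below, assuming a material whose shader implements the colour mapping (the component and field names are placeholders):

```csharp
using UnityEngine;

// Minimal camera-based post-processor: attached to each eye camera, it
// regrades the finished frame, so the live WebCamTexture on the cube is
// already included by the time the effect runs.
[RequireComponent(typeof(Camera))]
public class DichromatEffect : MonoBehaviour
{
    public Material dichromatMaterial; // material using the assumed shader

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Blit the rendered frame through the grading material.
        Graphics.Blit(source, destination, dichromatMaterial);
    }
}
```

The advantage of working at the camera level is that the grade applies to whatever the camera sees, so the dynamically updating webcam texture needs no special handling.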

