I have a first-iteration prototype working in Unity 2017 for testing. The prototype brings in two webcam feeds, each assigned to a different webcam input and rendered onto a separate cube in Unity. The C# script generates a new WebCamTexture for each found input and renders them separately in the scene. The cubes are then set to different layers in the scene (labelled ‘LeftEye’ and ‘RightEye’) and attached as children to the ‘LeftEyeAnchor’ and ‘RightEyeAnchor’ in the OVRCameraRig supplied as part of the Oculus Rift development package. The ‘LeftEyeAnchor’ and ‘RightEyeAnchor’ cameras are then assigned to the appropriate layers, with their Culling Masks set for the ‘LeftEye’ and ‘RightEye’ layers respectively. This means there are two cameras in the scene, each with a video feed rendered in front of it as a child of the object. The OVRCameraRig is then set to ‘Use Per Eye Camera’.
This renders each eye separately into the Rift, with each eye only able to see the feed from the appropriate camera.
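The per-eye feed setup described above can be sketched roughly as follows. This is a minimal sketch, not the project's actual script: the field names, resolution, and component layout are assumptions, and in practice the cubes would already be parented to the anchors and assigned to their layers in the editor.

```csharp
using UnityEngine;

// Sketch: assign each detected webcam to its own cube, one per eye.
// Assumes leftEyeCube is a child of LeftEyeAnchor on the 'LeftEye' layer,
// and rightEyeCube is a child of RightEyeAnchor on the 'RightEye' layer.
public class StereoWebcamFeed : MonoBehaviour
{
    public Renderer leftEyeCube;
    public Renderer rightEyeCube;

    void Start()
    {
        WebCamDevice[] devices = WebCamTexture.devices;
        if (devices.Length < 2)
        {
            Debug.LogError("Two webcams are required; found " + devices.Length);
            return;
        }

        // Create a new WebCamTexture for each found input
        // and render each one onto its own cube.
        var leftTex = new WebCamTexture(devices[0].name, 800, 600);
        var rightTex = new WebCamTexture(devices[1].name, 800, 600);

        leftEyeCube.material.mainTexture = leftTex;
        rightEyeCube.material.mainTexture = rightTex;

        leftTex.Play();
        rightTex.Play();
    }
}
```

With each anchor camera's Culling Mask excluding the other eye's layer, each eye then sees only its own feed.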

The cameras used for testing are Logitech C250s, which are low resolution SD (800×600) with only a 63° diagonal FOV; this limits their use on the project but is a good starting point for testing. The cameras allow only a small FOV compared to the human eye, which is typically 30° superior (up, limited by the brow), 45° nasal (limited by the nose), 70° inferior (down), and 100° temporal (towards the temple). The combined (binocular) visual field for both eyes is about 100° vertical and 200° horizontal.
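As a rough check of how the 63° diagonal figure translates into horizontal and vertical coverage, the following sketch assumes a 4:3 aspect ratio and a simple rectilinear lens model (an approximation, not a manufacturer specification):

```csharp
using System;

// Rough conversion of a 63° diagonal FOV to horizontal/vertical FOV,
// assuming a 4:3 aspect ratio and a rectilinear lens.
class FovEstimate
{
    static void Main()
    {
        double diagDeg = 63.0;
        double halfDiagTan = Math.Tan(diagDeg * Math.PI / 360.0); // tan(θd / 2)

        // For a 4:3 frame, width/diagonal = 4/5 and height/diagonal = 3/5.
        double hFov = 2 * Math.Atan(0.8 * halfDiagTan) * 180.0 / Math.PI;
        double vFov = 2 * Math.Atan(0.6 * halfDiagTan) * 180.0 / Math.PI;

        Console.WriteLine($"Horizontal ≈ {hFov:F0}°, vertical ≈ {vFov:F0}°");
    }
}
```

This works out to roughly 52° horizontal by 40° vertical per camera, which makes the gap to the human 200° × 100° binocular field concrete.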
For initial testing a cardboard face was developed to help test left and right vision. It was much easier to navigate the project with a face around the cameras acting as eyes, since the virtual cameras in Unity, the webcams and the textures all needed to be correlated and coordinated.

To help the project development and to position the hardware, a new frame was developed for a horse to support future iterations of the project. This card base helps in planning the FOV for future iterations and helps to focus and frame the project development.

The project is a simple build to start the iteration process. The FOV will need to be increased, most likely using different webcams with fisheye lenses; specialist shaders will need to be produced for the materials to render out the reds; and a frame will need to be built to hold the technology.
There is a zipped build of the project here, which requires an Oculus Rift and two 4:3 ratio webcams attached to a PC.