The UVR Predict Engine Output component starts a simulation manually or by script. It is best suited for VR applications or for integration into custom scripted behaviours.
The component can be added anywhere in the scene. If the Predict simulation is to be displayed on a geometry in the scene (preferably a plane), the component must be added to this geometry.
The start, stop, pause, reload and save buttons at the top let you control the simulation manually. You can also do so by script by calling the component's "StartSimulation()", "PauseSimulation()", "ResumeSimulation()", "StopSimulation()", "ReloadSimulation()", "SaveLDRSimulation()", and "SaveHDRSimulation()" functions. These functions can also be called from a UI button, a VR joystick, and so on.
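For example, the control functions can be wired to a Unity UI button by script. This is a minimal sketch; the component class name "UVRPredictEngineOutput" is an assumption, so adapt it to the actual type in your project.

```csharp
using UnityEngine;
using UnityEngine.UI;

public class SimulationButtonBinding : MonoBehaviour
{
    // "UVRPredictEngineOutput" is an assumed class name for the
    // Predict Engine Output component; replace it with the real type.
    public UVRPredictEngineOutput output;
    public Button startButton;
    public Button stopButton;

    void Start()
    {
        // Forward UI button clicks to the component's control functions.
        startButton.onClick.AddListener(() => output.StartSimulation());
        stopButton.onClick.AddListener(() => output.StopSimulation());
    }
}
```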
The simulation texture can be retrieved using the component's mOutputTexture variable.
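As a sketch of manual texture usage (again assuming the component type is named "UVRPredictEngineOutput", and that mOutputTexture is assignable to a Unity Texture), the output can be copied onto any renderer's material:

```csharp
using UnityEngine;

public class OutputTextureBinding : MonoBehaviour
{
    public UVRPredictEngineOutput output; // assumed class name
    public Renderer targetRenderer;

    void Update()
    {
        // mOutputTexture holds the current simulation result; it may be
        // null while no simulation is running.
        if (output.mOutputTexture != null)
            targetRenderer.material.mainTexture = output.mOutputTexture;
    }
}
```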
The simulation computed by this process will be rendered from the selected sensor's point of view, which can be completely dissociated from the Unity camera's point of view.
The state of the process is displayed in the component's Process section and can be retrieved by script using the following functions: "IsStarted(bool pIncludeLoading = false)", "IsPaused()", "RequireReload()", and "GetState()". A preview of the output texture is visible in the component's Preview section.
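The state functions above can be polled from a script, for instance to trigger a reload automatically. A minimal sketch, with "UVRPredictEngineOutput" as an assumed class name:

```csharp
using UnityEngine;

public class SimulationStateMonitor : MonoBehaviour
{
    public UVRPredictEngineOutput output; // assumed class name

    void Update()
    {
        // IsStarted(true) also counts the loading phase as "started".
        if (output.IsStarted(true) && !output.IsPaused())
            Debug.Log("Simulation running, state: " + output.GetState());

        // Reload the simulation when the component reports it is required.
        if (output.RequireReload())
            output.ReloadSimulation();
    }
}
```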
The component is defined as follows:
Sensor: the sensor that will be rendered,
Resolution: the resolution at which the simulation will be computed. You can add new resolutions via the Game view resolution dropdown,
Set Texture on this GO: if true, the output simulation texture will be automatically placed on an Unlit material and set on the GameObject the component is placed on. This only works if the GameObject has both a MeshRenderer and a MeshFilter. The texture can also be used manually via the component's mOutputTexture variable,
Persistency: if true, the component's mOutputTexture texture will be kept in memory after the process is stopped. If false, the texture will be reset to null after the process is stopped,
Lock Transform: if true, the transform of the camera will not be automatically updated for the process; it will remain at the start position. The transform can then be updated using the "Update Transform" button or via script using "UpdateTransform()",
Auto Action: this enables you to automatically pause the process, stop the process, or save the simulation to a texture after a given time or number of samples per pixel (SPP),
Spectrometer: if true, you can define a GameObject (UI Parent) that will show the content of one pixel of the simulation,
Using the UI Display Mode "UI Element", the content of the pixel is displayed using Unity UI Elements, just like when you pick a pixel in the Engine View,
Using the UI Display Mode "Cie XY Texture", you can visualize the CIE xy values of the pixel on a CIE XY curve,
You can change the displayed pixel by calling the following component functions: "MoveSelectedPixelVertical(float delta)", "MoveSelectedPixelHorizontal(float delta)", or "MoveSelectedPixel(Vector2 delta, float sensitivity = 1)".
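The pixel-selection functions can be driven from any input axis. A minimal sketch using Unity's legacy input manager; the class name "UVRPredictEngineOutput" and the "Horizontal"/"Vertical" axis names are assumptions:

```csharp
using UnityEngine;

public class SpectrometerPixelControl : MonoBehaviour
{
    public UVRPredictEngineOutput output; // assumed class name
    public float sensitivity = 1.0f;

    void Update()
    {
        // Map a 2D axis (keyboard arrows or a joystick) to pixel movement.
        Vector2 delta = new Vector2(Input.GetAxis("Horizontal"),
                                    Input.GetAxis("Vertical"));
        if (delta != Vector2.zero)
            output.MoveSelectedPixel(delta, sensitivity);
    }
}
```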
When the "Set Texture on this GO" and "Lock Transform" options are enabled, you may want the plane to remain at the position where the simulation was started. For this purpose, utility functions are available on the component to duplicate the plane on which the simulation is displayed.
In a VR scene, the Predict Engine Output component enables you to use Predict Engine as if you were using an actual camera:
the Unity renderer is used for interactive stereo rendering as you would normally do in a VR scene,
the Predict Engine Output component is attached to a hand controller that represents the actual camera,
the Predict Engine simulation is started from the Predict Engine Output component and displayed on a plane in the scene, as if you were looking at the screen of your camera,
the hand controller trigger represents the trigger of the camera: when the trigger is pressed, the Predict Engine sensor transform is updated and a new simulation is started.
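The trigger behaviour above can be sketched as a single callback that acts like a camera shutter. The class name "UVRPredictEngineOutput" and the restart sequence are assumptions; hook the method into your own controller input system:

```csharp
using UnityEngine;

public class CameraShutter : MonoBehaviour
{
    public UVRPredictEngineOutput output; // assumed class name

    // Call this from your hand controller's trigger callback.
    public void OnTriggerPressed()
    {
        // Move the sensor to the controller's current pose,
        // then take a new "picture" by restarting the simulation.
        output.UpdateTransform();
        output.StopSimulation();
        output.StartSimulation();
    }
}
```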
In the example shown here:
The camera GameObject represents what the player is looking at; this can be the VR headset. This GameObject should have the following components:
Camera: the Unity camera,
UVR Camera Settings: the UVR settings,
Additionally, any components you need to move the camera for the player.
The hand-tool-cam GameObject represents the hand controller: it can move relative to the player. This GameObject can hold the controller callbacks so that the Predict Engine Output component's functions can be called from the different controller triggers.
The hand-representation GameObject is the 3D representation of the sensor in the Scene (here a big black sphere). It can be replaced by the 3D model of the camera: a tablet, a phone, a camera, and so on.
The visualisator is the plane on which the simulation will be rendered; the Predict Engine Output component is placed on this GameObject. This can also be a part of the camera 3D model so that it represents its screen.
The spectro is the plane on which the pixel information will be rendered (optional).
In the example, the Predict Engine Output component is configured as follows:
The selected sensor is the one defined on the camera GameObject,
The Set Texture on this GO option is enabled: we want the simulation to be displayed on the screen plane,
The Lock Transform option is enabled: we don't want the sensor position to be updated every frame, but only when the hand controller trigger is pressed,
The Spectrometer option is enabled so that we get information on the CIE xy value of one pixel of the simulation.
The hand controller has the following callbacks:
a key to start the simulation,
a key to stop the simulation,
a key to update the sensor transform and duplicate the visualization plane (we want the plane to remain at the position where the "picture" was taken),
a joystick to move the spectrometer selected pixel.
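These callbacks can be sketched as a single binding script. The class name "UVRPredictEngineOutput", the joystick button assignments, and the axis names are all assumptions; the plane-duplication step would use the component's duplication utility functions mentioned earlier:

```csharp
using UnityEngine;

public class HandControllerBindings : MonoBehaviour
{
    public UVRPredictEngineOutput output; // assumed class name

    void Update()
    {
        // Button assignments are placeholders; map them in your input setup.
        if (Input.GetKeyDown(KeyCode.JoystickButton0))
            output.StartSimulation();
        if (Input.GetKeyDown(KeyCode.JoystickButton1))
            output.StopSimulation();
        if (Input.GetKeyDown(KeyCode.JoystickButton2))
            output.UpdateTransform(); // "take a picture" at the current pose

        // The joystick moves the spectrometer's selected pixel.
        Vector2 stick = new Vector2(Input.GetAxis("Horizontal"),
                                    Input.GetAxis("Vertical"));
        if (stick != Vector2.zero)
            output.MoveSelectedPixel(stick);
    }
}
```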