Render Runtime

This section discusses the usage of the render runtime: what it is, how to initialize it, how to set up the view and projection matrices, and how to update the scene texture to the display.

Note

Most of the code in the following sections is pseudo code. It may not match what you see in the native sample app hellovr.

What is the Render Runtime

The render runtime is a subsystem of the WVR runtime. Its main job is to control the render thread and to update the render target on the display. The render thread deals with graphics context management, buffer/surface binding, and rendering synchronization. The render runtime is also responsible for updating the render target to the display device. Visual experience improvements, distortion correction, and pose prediction are also handled by the render runtime after the content is generated by the game logic.

Initializing the Render Runtime

Before initializing the render runtime, please refer to VRActivity to check that the prerequisites are ready. To invoke the interfaces of the render runtime, include the header file wvr_render.h. On some platforms, the render runtime can be configured through the initialization parameter WVR_RenderInitParams_t, which carries the graphics library to use and the runtime configurations, before invoking the initializing interface as in the code example below. Currently, only OpenGL graphics is supported. Fill the first member of the initialization parameter with WVR_GraphicsApiType_OpenGL to specify this library. A malformed library name leads to a not-supported error from the initializing interface.

The second member of the initialization parameter specifies a combination of runtime configurations as a bit mask, although its data type is unsigned long. Refer to the enumeration WVR_RenderConfig for the available options. The render configuration can specify WVR_RenderConfig_Direct_Mode, which determines whether the runtime uses single buffer mode to reduce latency and improve the visual experience. The render configuration can also enable the MSAA effect with WVR_RenderConfig_MSAA.
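
Because the configuration is a bit mask, multiple options can be combined with a bitwise OR. A minimal sketch (this particular combination is illustrative, not from the sample):

// Combine several render configurations into one bit mask.
uint64_t config = WVR_RenderConfig_Direct_Mode | WVR_RenderConfig_MSAA;
WVR_RenderInitParams_t param = {WVR_GraphicsApiType_OpenGL, config};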

The double buffering method shows the front buffer on screen while the graphics library renders into the back buffer off-screen in the background. WVR_RenderConfig_Vertical_Sync determines whether the front/back buffers are swapped at the moment of vertical retrace, known as vsync, on the display device. If this render configuration is not specified, the buffers are swapped immediately.

Timewarp is a VR technique which warps the rendered scene before updating it to the display. Specifying the render configuration WVR_RenderConfig_Timewarp helps reduce judder in the scene. Asynchronous Timewarp (ATW) is timewarp that occurs on another thread in parallel (asynchronously) with rendering. Specifying WVR_RenderConfig_Timewarp_Asynchronous helps fill in missed frames and further reduces judder.

#include <wvr/wvr_render.h>

// Initialize the render runtime with OpenGL and asynchronous timewarp.
WVR_RenderInitParams_t param = {WVR_GraphicsApiType_OpenGL, WVR_RenderConfig_Timewarp_Asynchronous};
WVR_RenderError pError = WVR_RenderInit(&param);
if (pError != WVR_RenderError_None) {
    LOGE("Render init failed - Error[%d]", pError);
}

WVR_RenderError specifies the corresponding error of the render runtime initialization for debugging.

The Loading Page

Please also consider that a cold launch may take quite a long time, possibly several seconds on some platforms. Showing a loading page or welcome scene during this wait provides a better user experience than a black screen.

Setting Up View and Projection Matrices

All VR games are first-person content, so what is shown on the display depends on the translation and rotation of the HMD. This means the user’s position in VR must be provided to the render runtime before the display refreshes (the moment when the display is refreshed is known as vsync). The model positions relative to the user’s position in the VR space constitute the viewer space. To transform the center of the world to the position of the viewer, the mathematically easiest way is to apply the inverse of the viewer’s pose matrix to everything that needs to be seen. Invoking WVR_GetPoseState or WVR_GetSyncPose obtains a WVR_PoseState_t, which is an aggregation of position information. Its member poseMatrix is the pose matrix with the 4x4 matrix type WVR_Matrix4f.

typedef struct WVR_PoseState {
    WVR_Matrix4f_t poseMatrix;       // 4x4 pose matrix of the device
    WVR_Vector3f_t velocity;         // linear velocity
    WVR_Vector3f_t angularVelocity;  // angular velocity
    bool isValidPose;                // whether this pose is reliable
    int64_t timestamp;               // time the pose was sampled
} WVR_PoseState_t;
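
For example, the HMD pose can be polled directly with WVR_GetPoseState. The following is a minimal sketch; passing 0 milliseconds requests the pose without extra prediction time:

// Poll the HMD pose directly from the runtime.
WVR_PoseState_t pose;
WVR_GetPoseState(WVR_DeviceType_HMD, WVR_PoseOriginModel_OriginOnHead, 0, &pose);
if (pose.isValidPose) {
    // pose.poseMatrix holds the HMD pose; see the conversion below.
}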

WVR_Matrix4f presents the pose in row-major form, which OpenGL cannot use directly because OpenGL expects column-major arrays. A row-major to column-major conversion is therefore necessary. The following is an array-element mapping comparison between the two forms. In the row-major form, the array indexes 0~2, 4~6, and 8~10 form a rotation matrix while indexes 3, 7, and 11 form the x, y, z translation vector.

WVR_Matrix4f::m[4][4]    OpenGL matrix 1 x 16 array, Matrix4
    0  1  2  3                0  4  8 12
    4  5  6  7                1  5  9 13
    8  9 10 11                2  6 10 14
   12 13 14 15                3  7 11 15

Based on the description above, a matrix transpose should be performed after polling the poses. The transpose looks similar to the following:

Matrix4 matrixtranspose(const WVR_Matrix4f_t& mat) {
    // Convert the HMD's pose to OpenGL matrix array.
    return Matrix4(
        mat.m[0][0], mat.m[1][0], mat.m[2][0], mat.m[3][0],
        mat.m[0][1], mat.m[1][1], mat.m[2][1], mat.m[3][1],
        mat.m[0][2], mat.m[1][2], mat.m[2][2], mat.m[3][2],
        mat.m[0][3], mat.m[1][3], mat.m[2][3], mat.m[3][3]
    );
}

The following example demonstrates how to poll the pose state via WVR_GetSyncPose and transpose each pose matrix into the OpenGL matrix format.

WVR_DevicePosePair_t mVRDevicePairs[WVR_DEVICE_COUNT_LEVEL_0];
Matrix4 DevicePoseArray[WVR_DEVICE_COUNT_LEVEL_0];

// Poll the poses of all tracked devices, then convert each valid pose.
WVR_GetSyncPose(WVR_PoseOriginModel_OriginOnHead, mVRDevicePairs, WVR_DEVICE_COUNT_LEVEL_0);
for (int nDevice = 0; nDevice < WVR_DEVICE_COUNT_LEVEL_0; ++nDevice) {
    if (mVRDevicePairs[nDevice].pose.isValidPose) {
        DevicePoseArray[nDevice] = matrixtranspose(mVRDevicePairs[nDevice].pose.poseMatrix);
    }
}

As mentioned above, inverting the pose matrix yields the view transform matrix for the models in the scene. When the viewer becomes the center of the view space, the head moving left is equivalent to the objects moving right, and the head rotating clockwise is equivalent to the objects rotating counter-clockwise.

Matrix4 hmd = DevicePoseArray[WVR_DEVICE_HMD];
mHMDPose = hmd.invert();

The situation above assumes that the world space stands still. If the world space rotates and translates spontaneously, the view transform matrix has to incorporate the external translation and rotation instead of just being inverted. The first step is to separate the original pose matrix into rotation and translation matrices.

Matrix4 hmd = DevicePoseArray[WVR_DEVICE_HMD];

Matrix4 hmdRotation = hmd;
hmdRotation.setColumn(3, Vector4(0,0,0,1));

Matrix4 hmdTranslation;
hmdTranslation.setColumn(3, Vector4(hmd[12], hmd[13], hmd[14], 1));

Then update the world rotation and translation.

// Update world rotation.
mWorldRotation += -mDriveAngle * mTimeDiff;
Matrix4 mat4WorldRotation;
mat4WorldRotation.rotate(mWorldRotation, 0, 1, 0);

// Update world translation.
Vector4 direction = (mat4WorldRotation * hmdRotation) * Vector4(0, 0, 1, 0);  // The translation of the HMD pose is not applied.
direction *= -mDriveSpeed * mTimeDiff;
direction.w = 1;

// Move toward -z
Matrix4 update;
update.setColumn(3, direction);
mWorldTranslation *= update;

Finally, compose the corresponding rotation and translation matrices together and invert the product. This completes the modified view transform matrix.

mHMDPose = (mWorldTranslation * hmdTranslation * mat4WorldRotation * hmdRotation).invert();

After applying the view matrix to the models in the game, the scene is established in viewer space. In order to provide stereo disparity, a pair of per-eye transform matrices corrects the viewer space into two eye spaces. The eyes are in fact a pair of sensors on the HMD, and the position of each sensor relative to the head differs between 3DoF and 6DoF tracking. Specify the left/right eye, and optionally the degrees of freedom, to the interface WVR_GetTransformFromEyeToHead. It returns a transform from eye space to viewer space with the 4x4 matrix type WVR_Matrix4f.

Matrix4 EyePosLeft = matrixtranspose(
    WVR_GetTransformFromEyeToHead(WVR_Eye_Left)).invert();
Matrix4 EyePosRight = matrixtranspose(
    WVR_GetTransformFromEyeToHead(WVR_Eye_Right)).invert();

Now the eye space of the scene is ready, but it is in Cartesian coordinates, which do not fit the visual experience. Valid eyesight is usually presented as a frustum; only what lies inside it can actually be seen on the screen/display. A perspective projection should therefore be applied to the scene. Mapping all objects in the eye space to homogeneous coordinates can be achieved by multiplying by the result of the interface WVR_GetProjection. It returns a projection matrix for each eye with the type WVR_Matrix4f.

Matrix4 ProjectionLeft = matrixtranspose(
    WVR_GetProjection(WVR_Eye_Left, dNearClip, dFarClip));
Matrix4 ProjectionRight = matrixtranspose(
    WVR_GetProjection(WVR_Eye_Right, dNearClip, dFarClip));

After the perspective projection, the deformation makes objects near the camera appear bigger and the others smaller. The render interface of the graphics library needs the product of the model, view, and projection matrices (MVP) to draw each model in the scene.
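
The following is a sketch of that final composition, assuming the Matrix4 helper supports multiplication and exposes its float array through get(), with a hypothetical model matrix and shader uniform location:

// Compose the model-view-projection product consumed by each eye's draw call.
Matrix4 mvpLeft  = ProjectionLeft  * EyePosLeft  * mHMDPose * modelMatrix;
Matrix4 mvpRight = ProjectionRight * EyePosRight * mHMDPose * modelMatrix;
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvpLeft.get());  // repeat per eye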

Updating the Scene Texture to the Display

In each frame, the stereoscopic scene rendering result should be updated to the display by reference to a texture. The texture resources are maintained by the render runtime for optimized resource management, in the form of a texture queue kept inside the runtime. To create the texture queue, specify the texture properties to the interface WVR_ObtainTextureQueue. The recommended size for the texture and its associated buffers fits the current display resolution and can be obtained via WVR_GetRenderTargetSize.

uint32_t RenderWidth = 0, RenderHeight = 0;
WVR_GetRenderTargetSize(&RenderWidth, &RenderHeight);

//Get the texture queue handles
void* mLeftEyeQ = WVR_ObtainTextureQueue(WVR_TextureTarget_2D, WVR_TextureFormat_RGBA, WVR_TextureType_UnsignedByte, RenderWidth, RenderHeight, 0);
void* mRightEyeQ = WVR_ObtainTextureQueue(WVR_TextureTarget_2D, WVR_TextureFormat_RGBA, WVR_TextureType_UnsignedByte, RenderWidth, RenderHeight, 0);

If the developer decides to use the multiview extension in the vertex shader, the texture target has to be specified as WVR_TextureTarget_2D_ARRAY. Only one texture queue needs to be created, and one draw call renders the scene to multiple texture layers of an array texture. This method helps reduce CPU loading and rendering latency. Please note that multiview rendering is only supported in native applications.

uint32_t RenderWidth = 0, RenderHeight = 0;
WVR_GetRenderTargetSize(&RenderWidth, &RenderHeight);

//Get the texture queue handle
void* mMultiviewQ = WVR_ObtainTextureQueue(WVR_TextureTarget_2D_ARRAY, WVR_TextureFormat_RGBA, WVR_TextureType_UnsignedByte, RenderWidth, RenderHeight, 0);

WVR_GetTexture is the interface for getting a texture from the texture queue. It requires the texture queue handle and an index into the queue. In the beginning, all textures in the queue are available, so the whole queue is iterated to wrap each texture in a frame buffer object. The length of the texture queue can be requested with the interface WVR_GetTextureQueueLength.

//Create frame buffer objects for each texture in the queues.
std::vector<FrameBufferObject*> LeftEyeFBO, RightEyeFBO;
FrameBufferObject* fbo = nullptr;

for (int i = 0; i < WVR_GetTextureQueueLength(mLeftEyeQ); i++) {
    fbo = new FrameBufferObject((int)(long)WVR_GetTexture(mLeftEyeQ, i).id, RenderWidth, RenderHeight);
    LeftEyeFBO.push_back(fbo);
}
for (int j = 0; j < WVR_GetTextureQueueLength(mRightEyeQ); j++) {
    fbo = new FrameBufferObject((int)(long)WVR_GetTexture(mRightEyeQ, j).id, RenderWidth, RenderHeight);
    RightEyeFBO.push_back(fbo);
}

In contrast to normal stereoscopic scene rendering, multiview rendering only needs one set of frame buffer objects per frame instead of one per eye.

//Create frame buffer objects at the frame level.
std::vector<FrameBufferObject*> MultiviewFBO;
FrameBufferObject* fbo = nullptr;

for (int i = 0; i < WVR_GetTextureQueueLength(mMultiviewQ); i++) {
    fbo = new FrameBufferObject((int)(long)WVR_GetTexture(mMultiviewQ, i).id, RenderWidth, RenderHeight);
    MultiviewFBO.push_back(fbo);
}

The frame buffer object constructor generates a depth render buffer and attaches it, together with the scene texture, to the frame buffer.

FrameBufferObject::FrameBufferObject(int textureId, int width, int height) {
    glGenFramebuffers(1, &mFrameBufferId);
    glBindFramebuffer(GL_FRAMEBUFFER, mFrameBufferId);

    // Create a depth render buffer and attach it to the frame buffer.
    glGenRenderbuffers(1, &mDepthBufferId);
    glBindRenderbuffer(GL_RENDERBUFFER, mDepthBufferId);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, mDepthBufferId);

    // Attach the scene texture from the runtime's texture queue as the color buffer.
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
}
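
Checking the frame buffer status after the attachments helps catch mismatched sizes or unsupported formats early. The following check is an addition for illustration, not part of the sample constructor:

// Hypothetical sanity check: confirm the frame buffer object is complete.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    LOGE("Incomplete frame buffer object");
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);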

In the sample code, the method of generating the frame buffer and attaching the textures for the multiview extension is as follows.

FrameBufferObject::FrameBufferObject(int textureId, int width, int height) {
    mTextureId = textureId;

    // Create a two-layer depth texture array, one layer per eye.
    glGenTextures(1, &mDepthBufferId);
    glBindTexture(GL_TEXTURE_2D_ARRAY, mDepthBufferId);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_DEPTH_COMPONENT24, width, height, 2);

    glGenFramebuffers(1, &mFrameBufferId);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, mFrameBufferId);

    // Attach both layers of the color and depth arrays for multiview rendering.
    glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, mTextureId, 0, 0, 2);
    glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, mDepthBufferId, 0, 0, 2);
}

When the render runtime starts to process the rendered texture, query the available texture index via WVR_GetAvailableTextureIndex. Using a texture that is still occupied causes the texture submission to fail. Specifying the side of the eye and the texture to the interface WVR_SubmitFrame updates the scene texture to the display.

//Get the available texture index in texture queue.
int32_t IndexLeft = WVR_GetAvailableTextureIndex(mLeftEyeQ);
int32_t IndexRight = WVR_GetAvailableTextureIndex(mRightEyeQ);

//Specify the texture in texture queue.
WVR_TextureParams_t leftEyeTexture = WVR_GetTexture(mLeftEyeQ, IndexLeft);
WVR_TextureParams_t rightEyeTexture = WVR_GetTexture(mRightEyeQ, IndexRight);

WVR_SubmitError e;
e = WVR_SubmitFrame(WVR_Eye_Left, &leftEyeTexture);

// Right eye
e = WVR_SubmitFrame(WVR_Eye_Right, &rightEyeTexture);

If the rendered texture was established with the multiview extension, the first argument of WVR_SubmitFrame is arbitrary. The render runtime does not care about the side of the eye, and the texture is submitted only once per frame. The multiview texture is handled automatically by the render runtime internally.

//Get the available texture index in texture queue.
int32_t IndexMultiview = WVR_GetAvailableTextureIndex(mMultiviewQ);

//Specify the texture in texture queue.
WVR_TextureParams_t multiviewEyeTexture = WVR_GetTexture(mMultiviewQ, IndexMultiview);

//For a multiview texture, submit only once per frame. The eye side is arbitrary.
WVR_SubmitError e;
e = WVR_SubmitFrame(WVR_Eye_Left, &multiviewEyeTexture);

The third argument of WVR_SubmitFrame is a reference pose state. Pose polling inside the render runtime is a periodic pull system; polling the pose as close as possible to the frame submission helps reduce judder, although this optional argument defaults to null and can be skipped. Invoking WVR_GetSyncPose or WVR_GetPoseState multiple times to get poses with different predicted times before calling WVR_SubmitFrame is allowed, but in that case the newest rendered pose must be passed as the third parameter of WVR_SubmitFrame. Please note: never call these two pose-fetching APIs more than once before calling WVR_SubmitFrame without passing the referenced pose as a parameter. The fourth argument of WVR_SubmitFrame is an algorithm extension for future use and is skipped at the current stage. WVR_SubmitError specifies the corresponding error of the frame submission for debugging.
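
A sketch of passing the rendered pose through to the submit call; renderPose is a hypothetical variable holding the pose the frame was actually rendered with:

// Poll the pose once, render the scene with it, then pass the same pose on submit.
WVR_PoseState_t renderPose;
WVR_GetPoseState(WVR_DeviceType_HMD, WVR_PoseOriginModel_OriginOnHead, 0, &renderPose);

// ... render the stereo scene using renderPose ...

WVR_SubmitError e = WVR_SubmitFrame(WVR_Eye_Left, &leftEyeTexture, &renderPose);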

Using Stereo Renderer Mode

There is an option in the coding design style to customize callback functions which the render runtime invokes at specific times. This coding design style is called WVR_StereoRenderer. It is only supported in native applications.