Develop with Wave SDK

In this tutorial, you will learn how to:

  • Get the HMD tracking pose and draw a scene in the VR view.
  • Get the properties and events of the VR system.
  • Render the controller model into the scene and handle interaction with the controller.

Note

Most of the code in the following sections is pseudocode. It may not match exactly what you see in the native sample app hellovr.

Creating the VR View

To draw a binocular view, each eye needs an individual texture. In every frame, the scene must be rendered onto each eye’s texture through a frame buffer object (FBO). Then, WVR_SubmitFrame is used to send each eye’s texture to the VR compositor for distortion correction, overlay, and final rendering.

struct FrameBufferObject {
    GLuint m_nDepthBufferId;
    GLuint m_nTextureId;
    GLuint m_nFramebufferId;
};
FrameBufferObject *mLeftEyeFBO;
FrameBufferObject *mRightEyeFBO;

bool MainApplication::renderFrame() {
    bool ret = true;
    if ( !m_pHMD )
        return false;

    mIndexLeft = WVR_GetAvailableTextureIndex(mLeftEyeQ);
    mIndexRight = WVR_GetAvailableTextureIndex(mRightEyeQ);

    drawControllers();
    renderStereoTargets();

    // Left eye
    WVR_TextureParams_t leftEyeTexture = WVR_GetTexture(mLeftEyeQ, mIndexLeft);
    WVR_SubmitError e;
    e = WVR_SubmitFrame(WVR_Eye_Left, &leftEyeTexture);
    if (e != WVR_SubmitError_None) return true;

    // Right eye
    WVR_TextureParams_t rightEyeTexture = WVR_GetTexture(mRightEyeQ, mIndexRight);
    e = WVR_SubmitFrame(WVR_Eye_Right, &rightEyeTexture);
    if (e != WVR_SubmitError_None) return true;

    return ret;
}
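
The texture queues mLeftEyeQ and mRightEyeQ used above are assumed to have been obtained during initialization, one per eye. A minimal sketch follows; check wvr_render.h for the exact signature and enum values of WVR_ObtainTextureQueue in your SDK version.

// Obtain one texture queue per eye during initialization (sketch).
mLeftEyeQ = WVR_ObtainTextureQueue(WVR_TextureTarget_2D, WVR_TextureFormat_RGBA,
        WVR_TextureType_UnsignedByte, mRenderWidth, mRenderHeight, 0);
mRightEyeQ = WVR_ObtainTextureQueue(WVR_TextureTarget_2D, WVR_TextureFormat_RGBA,
        WVR_TextureType_UnsignedByte, mRenderWidth, mRenderHeight, 0);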

The width and height of the FBO textures are obtained by invoking WVR_GetRenderTargetSize(). It is recommended to use the same texture size for both eyes.

bool MainApplication::initGL() {
    // Setup Scenes

    // Setup stereo render targets
    WVR_GetRenderTargetSize(&mRenderWidth, &mRenderHeight);
    if (mRenderWidth == 0 || mRenderHeight == 0) {
        return false;
    }

    mLeftEyeFBO = new FrameBufferObject(mRenderWidth, mRenderHeight);
    if (mLeftEyeFBO->hasError()) return false;

    mRightEyeFBO = new FrameBufferObject(mRenderWidth, mRenderHeight);
    if (mRightEyeFBO->hasError()) return false;

    return true;
}
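
The FrameBufferObject constructor and hasError() are left to the application. For reference, here is a minimal sketch of what the constructor might do with standard OpenGL ES calls; the sample’s actual implementation differs in detail.

// Sketch: create one eye's render target as a color texture plus a
// depth renderbuffer, both attached to a framebuffer object.
FrameBufferObject::FrameBufferObject(int width, int height) {
    glGenTextures(1, &m_nTextureId);
    glBindTexture(GL_TEXTURE_2D, m_nTextureId);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenRenderbuffers(1, &m_nDepthBufferId);
    glBindRenderbuffer(GL_RENDERBUFFER, m_nDepthBufferId);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);

    glGenFramebuffers(1, &m_nFramebufferId);
    glBindFramebuffer(GL_FRAMEBUFFER, m_nFramebufferId);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, m_nTextureId, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, m_nDepthBufferId);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}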

In renderStereoTargets(), each eye’s frame buffer is bound and renderScene() is called to draw onto its texture. The viewport is set according to the width and height returned by WVR_GetRenderTargetSize().

void MainApplication::renderStereoTargets() {
    LOGENTRY();
    glClearColor(0.30f, 0.30f, 0.37f, 1.0f); // nice background color, but not black

    // Left Eye
    mLeftEyeFBO->bindFrameBuffer();
    mLeftEyeFBO->glViewportFull();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderScene(WVR_Eye_Left);
    mLeftEyeFBO->unbindFrameBuffer();

    // Right Eye
    mRightEyeFBO->bindFrameBuffer();
    mRightEyeFBO->glViewportFull();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderScene(WVR_Eye_Right);
    mRightEyeFBO->unbindFrameBuffer();
}

Applying the HMD Pose

Tracking a user’s head movements requires understanding how the HMD pose moves the scene.

The code below shows the pose update by UpdateHMDMatrixPose().

Matrix4 m_mat4HMDPose;
Matrix4 m_rmat4DevicePose[ WVR_DEVICE_COUNT_LEVEL_1 ];
WVR_DevicePosePair_t mVRDevicePairs[WVR_DEVICE_COUNT_LEVEL_1];

void UpdateHMDMatrixPose() {
    WVR_GetSyncPose(WVR_PoseOriginModel_OriginOnHead, mVRDevicePairs, WVR_DEVICE_COUNT_LEVEL_1);
    for ( int nDevice = 0; nDevice < WVR_DEVICE_COUNT_LEVEL_1; ++nDevice ) {
        if ( mVRDevicePairs[nDevice].pose.isValidPose )
            m_rmat4DevicePose[nDevice] = WVRmatrixConverter(mVRDevicePairs[nDevice].pose.poseMatrix);
    }

    if (mVRDevicePairs[WVR_DEVICE_HMD].pose.isValidPose) {
        m_mat4HMDPose = m_rmat4DevicePose[WVR_DEVICE_HMD];
        m_mat4HMDPose.invert();
    }
}

The function WVR_GetSyncPose() also takes care of the proper prediction timing internally, so you do not need to pass in a predicted time yourself.

All poses are based on the user’s point of view. At the zero pose, the user faces -Z, up is +Y, and right is +X.

The pose struct, WVR_Matrix4f_t, contains a 4 by 4 float array that represents a 16-element row-major matrix. Viewed as a flat array, indexes 0~2, 4~6, and 8~10 form the rotation matrix, and indexes 3, 7, and 11 form the X, Y, Z translation vector. The following diagram compares the pose layout to the OpenGL matrix array.

 WVR_Matrix4f_t::m[4][4]    OpenGL matrix array, float glm[16]
 0  1  2  3                 0 4  8 12
 4  5  6  7                 1 5  9 13
 8  9 10 11                 2 6 10 14
12 13 14 15                 3 7 11 15

To adapt the matrix to OpenGL, it needs to be converted from row-major order to column-major order. The conversion is handled by:

// Convert the HMD's pose to OpenGL matrix array.
float glm[16];
WVR_Matrix4f_t pose = mVRDevicePairs[WVR_DEVICE_HMD].pose.poseMatrix;
glm[0] = pose.m[0][0];  glm[4] = pose.m[0][1];  glm[8] = pose.m[0][2];  glm[12] = pose.m[0][3];
glm[1] = pose.m[1][0];  glm[5] = pose.m[1][1];  glm[9] = pose.m[1][2];  glm[13] = pose.m[1][3];
glm[2] = pose.m[2][0];  glm[6] = pose.m[2][1]; glm[10] = pose.m[2][2];  glm[14] = pose.m[2][3];
glm[3] = 0;             glm[7] = 0;            glm[11] = 0;             glm[15] = 1;
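
Given the column-major array above, the head’s orientation axes in world space can be read directly from the matrix columns; consistent with the convention stated earlier, the user looks along -Z. A small illustrative sketch (the variable names are ours, not SDK API):

// Extract the head's world-space axes from the column-major pose.
// Columns 0, 1, 2 are the head's +X (right), +Y (up), and +Z axes;
// the user looks along -Z, so forward is the negated third column.
float right[3]   = {  glm[0],  glm[1],  glm[2]  };
float up[3]      = {  glm[4],  glm[5],  glm[6]  };
float forward[3] = { -glm[8], -glm[9], -glm[10] };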

After converting, the pose still needs to be inverted because it is used to rotate and move the world, not the head. If the user’s head turns left, the world should turn right; if the user’s head moves up, the world should move down. The MVP matrix transforms vertex positions into their final positions, so the pose must be expressed from the point of view of the vertices instead of the user’s head. Inverting the pose matrix inverts both its rotation and its translation.

Check the sample code to see an example of a matrix inversion.
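
For reference, a rigid pose (rotation plus translation, as above) can be inverted without a general 4 by 4 inverse: transpose the rotation block and rotate-negate the translation. A minimal sketch over the column-major float array from the previous snippet (invertRigidPose is an illustrative name, not an SDK function):

// Invert a rigid transform stored as a column-major OpenGL matrix.
// For M = [R | t] with orthonormal R, the inverse is [R^T | -R^T * t].
void invertRigidPose(const float m[16], float out[16]) {
    // Transpose the 3x3 rotation block.
    out[0] = m[0];  out[4] = m[1];  out[8]  = m[2];
    out[1] = m[4];  out[5] = m[5];  out[9]  = m[6];
    out[2] = m[8];  out[6] = m[9];  out[10] = m[10];
    // New translation is -R^T * t.
    out[12] = -(out[0] * m[12] + out[4] * m[13] + out[8]  * m[14]);
    out[13] = -(out[1] * m[12] + out[5] * m[13] + out[9]  * m[14]);
    out[14] = -(out[2] * m[12] + out[6] * m[13] + out[10] * m[14]);
    // Bottom row stays (0, 0, 0, 1).
    out[3] = 0; out[7] = 0; out[11] = 0; out[15] = 1;
}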

In the binocular view, the point of view for each eye comes from the pose of the HMD. WVR_GetTransformFromEyeToHead() retrieves the transform from the eye space to the head space.

Matrix4 GetHMDMatrixPoseEye( WVR_Eye nEye ) {
    WVR_Matrix4f_t matEyeRight = WVR_GetTransformFromEyeToHead( nEye );
    Matrix4 matrixObj(
            matEyeRight.m[0][0], matEyeRight.m[1][0], matEyeRight.m[2][0], 0.0,
            matEyeRight.m[0][1], matEyeRight.m[1][1], matEyeRight.m[2][1], 0.0,
            matEyeRight.m[0][2], matEyeRight.m[1][2], matEyeRight.m[2][2], 0.0,
            matEyeRight.m[0][3], matEyeRight.m[1][3], matEyeRight.m[2][3], 1.0f
        );

    return matrixObj.invert();
}

Matrix4 m_mat4eyePosLeft = GetHMDMatrixPoseEye( WVR_Eye_Left );
Matrix4 m_mat4eyePosRight = GetHMDMatrixPoseEye( WVR_Eye_Right );

WVR_GetProjection() retrieves the projection matrix for each eye.

float m_fNearClip = 0.0001f;
float m_fFarClip = 30.0f;

Matrix4 GetHMDMatrixProjectionEye( WVR_Eye nEye ) {
    if ( !m_pHMD ) return Matrix4();

WVR_Matrix4f_t mat = WVR_GetProjection( nEye, m_fNearClip, m_fFarClip );
    return Matrix4(
        mat.m[0][0], mat.m[1][0], mat.m[2][0], mat.m[3][0],
        mat.m[0][1], mat.m[1][1], mat.m[2][1], mat.m[3][1],
        mat.m[0][2], mat.m[1][2], mat.m[2][2], mat.m[3][2],
        mat.m[0][3], mat.m[1][3], mat.m[2][3], mat.m[3][3]
    );
}

Matrix4 m_mat4ProjectionLeft = GetHMDMatrixProjectionEye( WVR_Eye_Left );
Matrix4 m_mat4ProjectionRight = GetHMDMatrixProjectionEye( WVR_Eye_Right );

The OpenGL transform matrix MVP is Projection * View * Model. The MVP matrix should be:

Matrix4 GetModelViewProjectionMatrix( WVR_Eye nEye ) {
    Matrix4 model;  // default is identity; the scene does not need to move.
    if ( nEye == WVR_Eye_Left )
        return m_mat4ProjectionLeft * m_mat4eyePosLeft * m_mat4HMDPose * model;
    else
        return m_mat4ProjectionRight * m_mat4eyePosRight * m_mat4HMDPose * model;
}

Rendering the Scene

renderScene() calls GetModelViewProjectionMatrix() to get the MVP matrix needed to render the scene.

void MainApplication::renderScene(WVR_Eye nEye) {
    glClearColor( 0.0f, 0.0f, 0.0f, 1.0f ); // Set clear color
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the window

    glUseProgram( m_unSceneProgramID );
    glUniformMatrix4fv( m_nSceneMatrixLocation, 1, GL_FALSE, GetModelViewProjectionMatrix( nEye ).get() );
    glBindVertexArray( m_unSceneVAO );
    glBindTexture( GL_TEXTURE_2D, m_iCubeTexture );
    glDrawArrays( GL_TRIANGLES, 0, m_uiVertcount );
    glBindVertexArray( 0 );

    // Render Model...
}

Event

You can get events by using WVR_PollEventQueue(). It returns a struct, WVR_Event_t. You can identify the event type from the member WVR_Event_t::common, whose WVR_CommonEvent_t::type field holds a WVR_EventType value. Depending on that type, the event-specific data is carried in the corresponding member of the WVR_Event_t union. For more information, check the definitions in the header file, WVR_event.h.

void MainApplication::processVREvent(const WVR_Event_t & event) {
    switch(event.common.type) {
    case WVR_EventType_DeviceConnected:
        {
            setupControllerCubeForDevice(event.device.deviceType);
            LOGD("Device %u attached. Setting up controller cube.\n", event.device.deviceType);
        }
        break;
    case WVR_EventType_DeviceDisconnected:
        {
            LOGD("Device %u detached.\n", event.device.deviceType);
        }
        break;
    case WVR_EventType_DeviceStatusUpdate:
        {
            LOGD("Device %u updated.\n", event.device.deviceType);
        }
        break;
    }
}

bool MainApplication::handleInput() {
    // Others ignored...

    // Process WVR events
    WVR_Event_t event;
    while (WVR_PollEventQueue(&event)) {
        processVREvent(event);
    }
    return false;
}

Controller

The controller is a kind of tracking device. Use WVR_GetSyncPose() to get the poses of all tracking devices, and then convert the pose matrices into mDevicePoseArray[]. See the sample code below.

Use WVR_IsInputFocusCapturedBySystem() to check whether the system has captured input focus, WVR_IsDeviceConnected() to check the connection, and mVRDevicePairs[id].type to check the device type. If all checks pass, use mDevicePoseArray[] to draw the controller.

void MainApplication::drawControllers() {
    // don't draw controllers if somebody else has input focus
    if (WVR_IsInputFocusCapturedBySystem())
        return;

    std::vector<float> buffer;

    int vertCount = 0;
    mControllerCount = 0;
    for (uint32_t id = WVR_DEVICE_HMD + 1; id < WVR_DEVICE_COUNT_LEVEL_1; ++id) {
        if (!WVR_IsDeviceConnected(mVRDevicePairs[id].type))
            continue;

        if ((mVRDevicePairs[id].type != WVR_DeviceType_Controller_Right) && (mVRDevicePairs[id].type != WVR_DeviceType_Controller_Left))
            continue;

        if (!mVRDevicePairs[id].pose.isValidPose)
            continue;

        Matrix4 mat;
        if (m3DOF) {
            // If the controller is 3DoF, always put the model at the bottom of the view.
            mat = mDevicePoseArray[WVR_DEVICE_HMD];
            mat.invert();
            float angleY = atan2f(-mat[8], mat[10]);
            mat.identity().rotateY(-angleY / M_PI * 180.0f);
            mat *= mDevicePoseArray[id];
            float x = (mControllerCount % 2) == 0 ? 0.1f : -0.1f;
            float z = -0.55f - (mControllerCount / 2) * 0.3f;
            mat.setColumn(3, Vector4(x,-0.2f,z,1));
        } else {
            mat = mDevicePoseArray[id];
        }
        vertCount += mControllerAxes->makeVertices(mat, buffer);
        mControllerCount += 1;
    }

    mControllerAxes->setVertices(buffer, vertCount);
}

Besides the pose of the controller, the controller button states also need to be retrieved. There are two ways to get the controller button state.

  1. Get events using WVR_PollEventQueue() as described in the Event section. The events representing controller button states include: WVR_EventType_ButtonPressed, WVR_EventType_ButtonUnpressed, WVR_EventType_TouchTapped, and WVR_EventType_TouchUntapped. (A sketch of this approach follows the code below.)
  2. Get the latest controller state once per frame by using WVR_GetInputDeviceState().

bool MainApplication::handleInput() {
    // Others ignored...

    // Process WVR controller state
    WVR_DeviceType controllerArray[] = {WVR_DeviceType_Controller_Right, WVR_DeviceType_Controller_Left};
    int controllerCount = sizeof(controllerArray)/sizeof(controllerArray[0]);
    for (int idx = 0; idx < controllerCount; idx++) {
        ControllerState & cs = states[idx];

        uint32_t inputType = WVR_InputType_Button | WVR_InputType_Touch | WVR_InputType_Analog;
        uint32_t buttons = 0, touches = 0;
        uint32_t analogCount = WVR_GetInputTypeCount(controllerArray[idx], WVR_InputType_Analog);

        if (WVR_GetInputDeviceState(controllerArray[idx], inputType, &buttons, &touches, cs.mAnalogState, analogCount)) {
            LOGDIF("DeviceType[%u]: AnalogCount %d Pressed %010X Touched %010X", controllerArray[idx],
                analogCount, buttons, touches);
            cs.updateState(controllerArray[idx]);
        }
    }
    return false;
}
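
For completeness, here is a minimal sketch of approach 1: reacting to controller button events from the event queue. The union member event.input and the input id WVR_InputId_Alias1_Trigger are assumptions for illustration; verify the exact field names and ids against WVR_event.h in your SDK version.

// Sketch: handle button press/release events for one example input id.
// Field names of the WVR_Event_t union should be verified in WVR_event.h.
void MainApplication::processButtonEvent(const WVR_Event_t & event) {
    switch (event.common.type) {
    case WVR_EventType_ButtonPressed:
        if (event.input.inputId == WVR_InputId_Alias1_Trigger)
            LOGD("Trigger pressed on device %u\n", event.input.device);
        break;
    case WVR_EventType_ButtonUnpressed:
        if (event.input.inputId == WVR_InputId_Alias1_Trigger)
            LOGD("Trigger released on device %u\n", event.input.device);
        break;
    default:
        break;
    }
}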