Thursday, July 13, 2017

Setup TensorFlow with GPU support on Windows

TensorFlow with GPU support computes much faster than the CPU-only build, but it needs some additional setup, especially for CUDA. First of all, follow the guide at https://www.tensorflow.org/install/install_windows. TensorFlow on Windows currently only supports Python 3; I suggest using Python 3.5.3 or below. Then, install CUDA 8.0 and download cuDNN v5.1. For cuDNN, please use v5.1 instead of v6.0. I spent a couple of days trying to set up TensorFlow with cuDNN v6.0; it doesn't work and cost me a lot of time :( .

Then, move the files from the cuDNN v5.1 package that you downloaded into the path where you installed CUDA 8.0, such as "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0", following the mapping below:

cudnn-8.0-windows10-x64-v5.1\cuda\bin\cudnn64_5.dll  ->  CUDA\v8.0\bin
cudnn-8.0-windows10-x64-v5.1\cuda\include\cudnn.h    ->  CUDA\v8.0\include
cudnn-8.0-windows10-x64-v5.1\cuda\lib\x64\cudnn.lib  ->  CUDA\v8.0\lib\x64

You don't need to add the folder path of cudnn-8.0-windows10-x64-v5.1 to your %PATH%. Now we can confirm that the installation is ready.

Steps:
1. Create a virtualenv under your working folder:
virtualenv --system-site-packages tensorflow
2. Activate it
tensorflow\Scripts\activate
It shows (tensorflow)$
3. Install TensorFlow with GPU support
pip3 install --upgrade tensorflow-gpu
4. Import TensorFlow to confirm it is ready
(tensorflow) %YOUR_PATH%\tensorflow>python
Python 3.5.2 [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
>>>

If the import prints nothing, it works.
But if you see an error message like "No module named '_pywrap_tensorflow_internal'", take a look at issues 9469 and 7705. It is usually a cuDNN version problem, or cuDNN can't be found; please follow the method I mentioned above.


Saturday, August 20, 2016

Webrender 1.0

Source code: https://github.com/servo/webrender

Thursday, August 18, 2016

AR on the Web

Because of the popularity of Pokémon Go, lots of people have started to discuss the possibility of AR (augmented reality) on the Web. Thanks to Jerome Etienne's slides, which gave me some ideas for making this AR demo.

First of all, it is based on three.js and js-aruco. three.js is a WebGL framework that helps us construct and load 3D models. js-aruco is a JavaScript port of ArUco, a minimal library for augmented reality applications based on OpenCV. These two projects make it possible to implement a Web AR proof of concept.

Next, I would like to introduce how to implement this demo. First, we need to use navigator.getUserMedia to get the video stream from our webcam. This function is not supported by all browser vendors, so please take a look at the support status.


navigator.getUserMedia = ( navigator.getUserMedia ||
                       navigator.webkitGetUserMedia ||
                       navigator.mozGetUserMedia ||
                       navigator.msGetUserMedia);

if (navigator.getUserMedia) {
    navigator.getUserMedia( { 'video': true }, gotStream, noStream);
}

The above code shows how to get a media stream in JavaScript. In this demo I only need video, and the stream is passed to the gotStream callback function. In gotStream, I hand the stream to my video element, which is displayed on screen, and then enter the setupAR module. In setupAR(), I initialize the AR module and set up my model and scene scale. After that, I just wait for new video frames and get the AR detection result from js-aruco in the updateVideoStream() function.
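The gotStream callback itself is not shown above; here is a minimal sketch of what it does based on the description. The 'video' element id, the noStream handler, and the setupAR call are assumptions, not the demo's actual code:

```javascript
// Hypothetical sketch of the gotStream/noStream callbacks described above.
function gotStream( stream ) {
  var video = document.getElementById( 'video' );     // assumed element id
  if ( 'srcObject' in video ) {
    video.srcObject = stream;                         // modern browsers
  } else {
    video.src = window.URL.createObjectURL( stream ); // older browsers
  }
  video.play();
  setupAR();  // initialize the AR module, model and scene scale
}

function noStream( err ) {
  console.error( 'getUserMedia failed: ', err );
}
```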

In updateVideoStream(), the current video frame is drawn into an imageData maintained by a 2D canvas. The imageData is then passed to the AR detector, which checks whether there are any markers in it and returns an array of the markers detected in this imageData. Every marker holds the (x, y) coordinates of its four corners, which we can use for lots of applications. In my demo, I draw the corners and the marker id on the canvas. The most interesting part is that we can leverage the markers to update the pose of a 3D model.
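A sketch of updateVideoStream() along those lines, assuming js-aruco's AR.Detector plus the video, canvas, and context set up earlier; drawCorners and updateObjectPose are hypothetical helpers standing in for the drawing and pose-update steps:

```javascript
var detector;  // created lazily: an AR.Detector instance from js-aruco

function updateVideoStream() {
  if ( !detector ) { detector = new AR.Detector(); }

  // Draw the current video frame into the 2D canvas and grab its pixels.
  context.drawImage( video, 0, 0, canvas.width, canvas.height );
  var imageData = context.getImageData( 0, 0, canvas.width, canvas.height );

  // Each detected marker carries an id and four corner (x, y) points.
  var markers = detector.detect( imageData );
  for ( var i = 0; i < markers.length; ++i ) {
    drawCorners( markers[ i ].corners );  // hypothetical: draw corners and id
    updateObjectPose( markers[ i ] );     // hypothetical: move the 3D model
  }

  requestAnimationFrame( updateVideoStream );
}
```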

POS.Posit is a library that helps us get the transformation pose from the corners. A pose contains a rotation matrix and a translation vector in 3D space, so it is quite easy to show a 3D model on a marker, except that we need to do some coordinate conversion first. Keep in mind that the video stream lives in a 2D space, so we have to transform the corners into 3D space.


for (i = 0; i < corners.length; ++i) {
   corner = corners[i];
   // from 2D canvas space to 3D world space
   corner.x = corner.x - (canvas.width / 2);
   corner.y = (canvas.height / 2) - corner.y;
}
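With the corners converted, the pose can be estimated. A minimal sketch, following the POS.Posit usage in the js-aruco posit example; estimatePose is a hypothetical helper, modelSize is the real-world marker edge length, and the canvas width is used as an approximate focal length:

```javascript
function estimatePose( corners, modelSize, canvasWidth ) {
  var posit = new POS.Posit( modelSize, canvasWidth );  // from js-aruco's posit library
  var pose = posit.pose( corners );
  return {
    rotation: pose.bestRotation,        // 3x3 rotation matrix
    translation: pose.bestTranslation   // [x, y, z] translation vector
  };
}
```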
Moreover, we need to apply this rotation matrix to the 3D model's rotation vector.
   dae.rotation.x = -Math.asin(-rotation[1][2]);
   dae.rotation.y = -Math.atan2(rotation[0][2], rotation[2][2]) - Math.PI / 2; // three.js rotations are in radians
   dae.rotation.z = Math.atan2(rotation[1][0], rotation[1][1]);

At last, set the position to the 3D model.
   dae.position.x = translation[0];
   dae.position.y = translation[1];
   dae.position.z = -translation[2] * offsetScale;


Demo video: https://www.youtube.com/watch?v=68O5w1oIURM
Demo link: http://daoshengmu.github.io/ConsoleGameOnWeb/webar.html (Best for Firefox)

Sunday, July 17, 2016

How to setup RustDT

RustDT is an IDE for Rust. If you are someone like me who needs an IDE to learn a language and develop efficiently, you should give RustDT a try (https://github.com/RustDT/RustDT/blob/latest/documentation/UserGuide.md#user-guide).

Enable code completion.

Here you go!

Sunday, January 31, 2016

WebGL/VR on Worker thread

WebGL on main thread


Previously, the only approach to developing a WebGL application was to put everything on the main thread, which definitely brings some performance limitations. As the picture above shows, a 3D game might need to do lots of work in one update frame: updating the transformations of 3D objects, visibility culling, AI, networking, physics, and so on. Only then can we hand the frame over to the render process to execute the WebGL functions.

If we expect all of this to be done within the V-Sync interval (16 ms), that is a challenge for developers. Therefore, people have been looking for a way to spread the performance bottleneck across other threads, and WebGL on a worker thread arose from this situation. But please don't assume that moving everything into a WebWorker will solve your problems completely; it brings some new challenges as well. Below, I would like to show how to use WebGL on a worker to increase performance, present a physics WebGL demo based on three.js and cannon.js, and even prove that I can integrate it with the WebVR API.

WebWorker

First of all, I would like to introduce how to use a WebWorker. A WebWorker lets you execute your script on another thread, which helps you avoid pauses from the JavaScript virtual machine's garbage collector on the main thread. Therefore, using a WebWorker is a good way for developers to attack a performance bottleneck. The sample code is as below:

worker = new Worker("js/worker.js"); // load worker script
worker.onmessage = function( evt ) { // The receiver of worker's message
    //console.log('Message received from worker ' + evt.data );
};

worker.postMessage( { test: 'webgl_offscreen' } ); // Send message to worker

In worker.js
onmessage = function(evt) {
    //console.log( 'Message received from main script' );
    postMessage( 'Send script to the main script.' ); // Post message back to the main thread.
}

This script looks quite simple, and we can start to put some computation into the onmessage function in worker.js to relieve the main thread. However, we have to know that WebWorkers bring some constraints as well, which can feel inconvenient compared to general JavaScript usage on the main thread.

The limitations of WebWorkers are:
 - Can't read/write DOM
 - Can't access global variable / function
 - Can't use file system (file://) to access local files
 - No requestAnimationFrame

WebGL on worker

After understanding how to use a WebWorker and its constraints, let's start to make our first WebGL-on-worker application.

The benefit of a worker is that we can move part of the computation onto another thread. In the case of a WebGL worker, we can move the WebGL function calls into the worker thread; in the example in the picture above, I put my render part into the WebGL worker. Firefox Nightly has landed the OffscreenCanvas feature to support WebGL on a worker thread. In order to use this feature, we need to do some setup:
  • Download Firefox Nightly
  • Enter about:config and set gfx.offscreencanvas.enabled to true
Then we have activated the WebGL worker. Let's finish it! The sample code is as below.
var canvas = document.getElementById('c');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;

var proxy = canvas.transferControlToOffscreen();   // new interface added by offscreencanvas for getting offscreen canvas
var worker = new Worker("js/gl_worker.js");
var positions = new Float32Array(num*3);           // Transferable object of web worker. Transformation info
var quaternions = new Float32Array(num*4);         // For the update/render functions to update their variable.
                                                   // in the main/worker threads.
var cameraState = new Float32Array(7);             // Camera state for the update/render functions

worker.onmessage = function( evt ) {               // worker message receiving function
    if ( evt.data.positions && evt.data.quaternions
    && evt.data.cameraState ) {
    
      positions = evt.data.positions;
      quaternions = evt.data.quaternions;
      cameraState = evt.data.cameraState;
      updateWorker();
    }
}

worker.postMessage( { canvas: proxy }, [proxy]);    // Send offscreenCanvas to worker

function updateWorker() {
    // Update camera state

    // Update position, quaternion

    // Send these buffer back the worker
    worker.postMessage( { cameraState: cameraState, positions: positions, quaternions: quaternions }, 
    [cameraState.buffer, positions.buffer, quaternions.buffer]);
}


In worker.js
var renderer;
var canvas;
var scene = null;
var camera = null;

onmessage = function( evt ) {      // Receiving messages from the main thread
  var window = self;
  
  if ( typeof evt.data.canvas !== 'undefined') {
    console.log( 'import script... ' );
    importScripts('../lib/three.js');             // load script at worker
    importScripts('../js/threejs/VREffect.js');
    importScripts('../js/threejs/TGALoader.js');

    canvas = evt.data.canvas;
    renderer = new THREE.WebGLRenderer( { canvas: canvas } ); // Initialize THREE.js WebGLRenderer
    scene = new THREE.Scene();
    camera = new THREE.PerspectiveCamera( 30, canvas.width / canvas.height, 0.5, 10000 );

    window.addEventListener( 'resize', onWindowResize, false ); // Register 'resize' event

    // Get buffers that are sent from main thread.
    var cameraState = evt.data.cameraState;
    var positions = evt.data.positions;
    var quaternions = evt.data.quaternions;
    camera.position.set( cameraState[0], cameraState[1], cameraState[2] );
    camera.quaternion.set( cameraState[3], cameraState[4], cameraState[5], cameraState[6] );

    for ( var i = 0; i < visuals.length; i++ ) {    // Setup transformation info for visual objects in scene
      visuals[i].position.set(
        positions[3 * i + 0],
        positions[3 * i + 1],
        positions[3 * i + 2] );

      visuals[i].quaternion.set(
        quaternions[4 * i + 0],
        quaternions[4 * i + 1],
        quaternions[4 * i + 2],
        quaternions[4 * i + 3] );
    }

    render();        // Call render via the main thread requestAnimationTime

    postMessage( { cameraState: cameraState, positions: positions, quaternions: quaternions },
      [ cameraState.buffer, positions.buffer, quaternions.buffer ] );  // Send transferable
                                                                      // objects back to the main thread
  }
}

function render() {
    renderer.render( scene, camera );
    renderer.context.commit();       // New for webgl worker to end this frame
}

function onWindowResize( width, height ) {  // Resize window listener
  canvas.width = width;
  canvas.height = height;
  camera.aspect = canvas.width / canvas.height;
  camera.updateProjectionMatrix();
  renderer.setSize( canvas.width, canvas.height, false );
}
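Since resize events are not delivered to a worker's global scope, the main thread can forward window sizes to the worker instead; a sketch (watchResize and the message shape are assumptions, not part of the demo):

```javascript
// Main thread: forward window resize events to the WebGL worker.
function watchResize( worker ) {
  window.addEventListener( 'resize', function () {
    worker.postMessage( { resize: { width: window.innerWidth,
                                    height: window.innerHeight } } );
  } );
}
```

The worker's onmessage can then call onWindowResize( evt.data.resize.width, evt.data.resize.height ).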

WebVR on Worker



Although most parameters of WebVR live in the DOM API, the worker thread can't get them directly. But that is not a big deal: we can read them on the main thread and pass them to the worker.

In the main thread
var vrHMD;
function gotVRDevices( devices ) {
  vrHMD = devices[ 0 ];
  worker.postMessage( {        // Pass them to the worker
    eyeTranslationL: eyeTranslationL.x,
    eyeTranslationR: eyeTranslationR.x,
    eyeFOVLUp: eyeFOVL.upDegrees, eyeFOVLDown: eyeFOVL.downDegrees,
    eyeFOVLLeft: eyeFOVL.leftDegrees, eyeFOVLRight: eyeFOVL.rightDegrees,
    eyeFOVRUp: eyeFOVR.upDegrees, eyeFOVRDown: eyeFOVR.downDegrees,
    eyeFOVRLeft: eyeFOVR.leftDegrees, eyeFOVRRight: eyeFOVR.rightDegrees } );
}

function updateVR() {       // Update camera orientation via VR state
  var state = vrPosSensor.getState();

  if ( state.hasOrientation ) {
    camera.quaternion.set(
      state.orientation.x, 
      state.orientation.y, 
      state.orientation.z, 
      state.orientation.w);
  }
}

function triggerFullscreen() {
    canvas.mozRequestFullScreen( { vrDisplay: vrHMD } );  // Fullscreen must be requested at the main thread.
}                                                         // Thankfully, it works for WebGL on worker.

In worker.js
var vrDeviceEffect = new THREE.VREffect(renderer);

onmessage = function(evt) {                // Receive VR device parameters for stereo rendering.
    vrDeviceEffect.eyeTranslationL.x = evt.data.eyeTranslationL;
    vrDeviceEffect.eyeTranslationR.x = evt.data.eyeTranslationR;
    vrDeviceEffect.eyeFOVL.upDegrees = evt.data.eyeFOVLUp;
    vrDeviceEffect.eyeFOVL.downDegrees = evt.data.eyeFOVLDown;
    vrDeviceEffect.eyeFOVL.leftDegrees = evt.data.eyeFOVLLeft;
    vrDeviceEffect.eyeFOVL.rightDegrees = evt.data.eyeFOVLRight;
    vrDeviceEffect.eyeFOVR.upDegrees = evt.data.eyeFOVRUp;
    vrDeviceEffect.eyeFOVR.downDegrees = evt.data.eyeFOVRDown;
    vrDeviceEffect.eyeFOVR.leftDegrees = evt.data.eyeFOVRLeft;
    vrDeviceEffect.eyeFOVR.rightDegrees = evt.data.eyeFOVRRight;
}

Others

Besides WebGL and WebVR, I solved some other problems while making this demo. I list them below and discuss how I solved each one:
  - Can’t access DOM (read / modify)
    var workerCanvas = canvas.transferControlToOffscreen();
    worker.postMessage( {canvas: workerCanvas}, [workerCanvas] );
  - Can’t use filesystem (file://) to access local files
    Use XMLHttpRequest. Taking texture loading as an example, in three.js we need to use
var loader = new THREE.TGALoader();
var texture = loader.load( 'images/brick_bump.tga' );
var solidMaterial = new THREE.MeshLambertMaterial( { map: texture } );
  - No requestAnimationFrame
    Updating transferable objects and rendering has to go through worker.onmessage, so we have to drive the worker update from the main thread's requestAnimationFrame. This limitation brings a chance of the worker being blocked by the main thread, because GC pauses in the main thread can stall the worker update. The best solution is to look forward to an implementation of requestAnimationFrame for workers.
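Until workers gain requestAnimationFrame, another workaround (not used in this demo) is a setTimeout loop inside the worker at roughly the V-Sync interval; a sketch, with startWorkerLoop as a hypothetical helper:

```javascript
// Run tick() once immediately, then roughly every frameBudgetMs milliseconds.
// Returns a function that stops the loop.
function startWorkerLoop( tick, frameBudgetMs ) {
  var running = true;
  function loop() {
    if ( !running ) { return; }
    tick();
    setTimeout( loop, frameBudgetMs );
  }
  loop();
  return function stop() { running = false; };
}
```

For example, var stop = startWorkerLoop( render, 16 ); note that, unlike requestAnimationFrame, setTimeout is not synchronized with V-Sync.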

Demo

Physics/WebGL on the main thread
Physics on the main thread, WebGL on worker
Source code



Tuesday, January 26, 2016

Introduction to A-Frame

A-Frame is a WebVR framework that lets developers make VR content rapidly. It is based on an entity-component system, which brings us flexibility and usability for development.

These are my slides from the talk at WebGL Meetup Taipei #03.

Thursday, July 2, 2015

WebVR on Mobile devices

We all know VR is very popular at the moment. Lots of big companies have started to develop their own headsets, and Facebook and YouTube have begun to support 360-degree videos on their platforms. Thanks to this work, we can expect a huge number of applications targeting virtual reality in the next couple of years. Currently, if you want to play VR content, you have to spend about $300 USD on a headset and plug several wires into your desktop; I think that raises the wall that keeps users away from VR. On the other hand, Google and Gear VR chose to use mobile devices as their headsets: you just need to spend $20 USD on a Google Cardboard and can start to enjoy interacting with VR content.

How about the content part of VR applications? If you want to write an application for Oculus, you must write a Windows version for Windows users and a macOS version for macOS users. On mobile devices, you also need to rewrite the app for Android and iOS. No one wants to spend lots of time rewriting the same code. Actually, there is an approach that can solve this problem: use HTML5 technology to make web apps. Luckily, the WebVR API helps us meet this requirement.

WebVR [1] is an experimental WebAPI developed in Mozilla and Google. At Mozilla we call the effort MozVR [2], and it has landed in Firefox Nightly. The goal is for any VR device to be able to display VR content from a browser, and you can find lots of demos on MozVR.com that you can view through an Oculus Rift. However, the main topic of this post is viewing VR content on mobile devices: I want to prove the concept and show everyone how to use a mobile phone to view WebVR. Below, I will explain how to use the WebVR API on Firefox Nightly and Firefox OS.

First of all, I am very satisfied with the results, but there are still some workarounds that have to be noted.

1. Fix the image size to a power of two. This limit actually comes from the WebGL 1.0 spec. In a desktop environment, some browsers can handle the exception; on mobile devices, however, we must follow this limit, otherwise the page will crash.

2. Full screen. If we want to allow full-screen mode, we have to set:
full-screen-api.allow-trusted-requested-only; false

3. On mobile devices, we have no position tracker support.

4. Enable VR flag
dom.vr.enabled; true

5. Use the deviceorientation API instead of PositionState
On Firefox for Android, PositionState support has landed in the Nightly version; however, it is not implemented on Firefox OS yet. So, I decided to use the deviceorientation API [3] on Firefox OS.

window.addEventListener( 'deviceorientation', onDeviceOrientationChangeEvent, false );
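The onDeviceOrientationChangeEvent handler is not shown above; a sketch of how it can feed a three.js camera, converting the reported angles from degrees to radians. The 'YXZ' Euler order is the convention commonly used for device orientation, but treat the exact mapping as an assumption:

```javascript
var degToRad = Math.PI / 180;

function onDeviceOrientationChangeEvent( event ) {
  var alpha = ( event.alpha || 0 ) * degToRad; // rotation around the z axis
  var beta  = ( event.beta  || 0 ) * degToRad; // rotation around the x axis
  var gamma = ( event.gamma || 0 ) * degToRad; // rotation around the y axis
  // three.js expects radians; build the camera orientation from the Euler angles.
  camera.quaternion.setFromEuler( new THREE.Euler( beta, alpha, -gamma, 'YXZ' ) );
}
```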

6. Position tracker on Firefox OS
Firefox OS devices don't provide a position tracker API natively, which makes it difficult to control the camera movement in the 3D scene. But thanks to Open Web technology, we can try the Leap Motion JavaScript SDK [4], which gives us gesture data over a WebSocket to control the camera movement.

ws = new WebSocket("ws://ipaddress:6437/v6.json");

ws.onopen = function( event ) {
   // Initial leapMotion
}

ws.onmessage = function( event ) {
   // Receive the messages
}
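Inside ws.onmessage, each frame arrives as JSON text. A sketch of pulling a palm position out of it; parsePalmPosition is a hypothetical helper, and in the Leap Motion JSON protocol palmPosition is an [x, y, z] array in millimetres:

```javascript
function parsePalmPosition( message ) {
  var frame = JSON.parse( message );
  if ( frame.hands && frame.hands.length > 0 ) {
    return frame.hands[ 0 ].palmPosition;  // [ x, y, z ] in millimetres
  }
  return null;  // no hand in this frame
}
```

The returned vector can then drive the camera position in the 3D scene.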

Finally, let's see the demo, which was made at the Mozilla work week in Whistler.

Demo:


Reference:
[1] WebVR API https://developer.mozilla.org/en-US/docs/Web/API/WebVR_API
[2] MozVR, http://mozvr.com
[3] DeviceOrientation API, https://www.html5rocks.com/en/tutorials/device/orientation/
[4] LeapMotion JavaScript SDK, https://developer.leapmotion.com/documentation/javascript/supplements/Leap_JSON.html?proglang=javascript