Posts

Experimental integration of Glean with Unity applications

You might have noticed that Firefox Reality PC Preview has been released in HTC's Viveport store. It is a VR web browser that provides 2D overlay browsing alongside immersive content and supports web-based immersive experiences for PC-connected VR headsets. To make deploying our product to the Viveport store easier, we take advantage of Unity to build our application launcher, which brings us another challenge: how to use Mozilla's existing telemetry system. The Glean SDK provides language bindings for several programming languages, including Kotlin, Swift, and Python. However, for applications that use Unity as their development toolkit, there are no existing bindings available to help us. Unity does allow users to embed Python scripts in a Unity project through a Python interpreter; however, because Unity's technology is based on the Mono framework, it is not the same as our familiar Python
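For reference, this is roughly what using the pure-Python Glean bindings looks like. A minimal sketch, assuming the glean-sdk package and a metrics.yaml definition; the application id and the counter metric below are hypothetical, and the exact initialize() signature may vary between SDK versions:

# Minimal sketch of the Glean Python bindings (assumes the glean-sdk
# package; signatures may differ between SDK versions).
from glean import Glean, load_metrics

Glean.initialize(
    application_id="org.example.launcher",  # hypothetical application id
    application_version="0.1.0",
    upload_enabled=True,
)

# Load the metric definitions declared in metrics.yaml and record a value.
metrics = load_metrics("metrics.yaml")
metrics.launcher.sessions.add(1)  # hypothetical counter metric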

How to train custom objects in YOLOv2

This article is based on [1]. We want a way to train the custom object classes that we are interested in. Darknet has a Windows version ported by AlexeyAB [2]. First of all, we need to build darknet.exe from AlexeyAB's repository to help us train and test data. Go to build/darknet, open darknet.sln with VS 2015, and set the solution platform to x64. Rebuild the solution; it should successfully generate darknet.exe. Then, we need to label the objects in the images used as training data. I use BBox Label Tool to mark the objects' coordinates in the training images (python ./main.py). This tool's image root folder is ./Images; we can create a sub-folder ( 002 ) and enter 002 to let the tool load all *.jpg files from there. We then mark labels in this tool to record the regions where the objects are. The outputs are image-space coordinates, stored at ./Labels/002 . However, this coordinate format is different from the one YOLOv2 expects.
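To give a concrete idea of the conversion: BBox Label Tool records pixel-space corners per object, while YOLOv2 label files use a class id plus a box center and size normalized to [0, 1] by the image dimensions. A minimal sketch (the exact label-file layout here is an assumption, not taken from the article):

# Convert one pixel-space box (xmin, ymin, xmax, ymax), as produced by
# BBox Label Tool, to a YOLOv2 label line:
# "<class> <x_center> <y_center> <width> <height>", all normalized to [0, 1].
def bbox_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h, class_id=0):
    x_center = (xmin + xmax) / 2.0 / img_w
    y_center = (ymin + ymax) / 2.0 / img_h
    box_w = (xmax - xmin) / float(img_w)
    box_h = (ymax - ymin) / float(img_h)
    return "%d %.6f %.6f %.6f %.6f" % (class_id, x_center, y_center, box_w, box_h)

# Example: a 100x200-pixel box at (50, 80) in a 640x480 image.
print(bbox_to_yolo(50, 80, 150, 280, 640, 480))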

Fast subsurface scattering

Fig.1 - Fast subsurface scattering of the Stanford Bunny, based on the implementation in three.js. It provides a cheap, fast, and convincing approximation of subsurface light transport in translucent materials. It follows the approach shared at GDC 2011 [1], which is used by the Frostbite 2 and Unity engines [1][2][3]. Traditionally, when a ray intersects a surface, we need to calculate how it bounces after the intersection. Materials can roughly be divided into three types. Opaque: light can't go through the geometry, and the ray is bounced back. Transparent: the ray passes through the surface almost entirely, though it may lose a little energy on the way out. Translucent: after entering the surface, the ray is bounced around internally, as in Fig. 2 below. Fig.2 - BSSRDF [1] In the case of translucency, we have several subsurface scattering approaches to solve our problem. When light is traveling inside the shape, we need to consider how the diffuse value is influenced
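For context, the core of the GDC 2011 trick [1] is a view-dependent transmittance term rather than real ray tracing. In rough form (my notation, not the article's: light direction L, normal N, view direction V, distortion delta, power p, scale s, and a per-pixel thickness t):

\vec{L}_t = \vec{L} + \vec{N}\,\delta
I_{LT} = \mathrm{saturate}\big(\vec{V} \cdot (-\vec{L}_t)\big)^{p} \cdot s
C = \mathrm{attenuation} \cdot (I_{LT} + \mathrm{ambient}) \cdot t

The thickness t is typically baked into a texture (thicker regions transmit less light), which is what keeps the technique cheap.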

Physically-Based Rendering in WebGL

According to the image from Physically Based Shading At Disney shown below, the left is real chrome, the middle is a PBR approach, and the right is Blinn-Phong. We can see that PBR is much closer to the real case, and the difference lies mainly in the specular lighting. Blinn-Phong The most important part of the specular term in Blinn-Phong is that it uses the half-vector, dot(normal, halfDir), instead of Phong's dot(reflectDir, viewDir), to avoid the hard specular shape problem of the traditional Phong lighting model.

vec3 BRDF_Specular_BlinnPhong( vec3 lightDir, vec3 viewDir, vec3 normal, vec3 specularColor, float shininess ) {
    vec3 halfDir = normalize( lightDir + viewDir );
    float dotNH = saturate( dot( normal, halfDir ) );
    float dotLH = saturate( dot( lightDir, halfDir ) );
    vec3 F = F_Schlick( specularColor, dotLH );
    float G = G_BlinnPhong_Implicit();
    float D = D_BlinnPhong( shininess, dotNH );
    return F * ( G * D );
}

Physically-Based rendering Regarding the GGX lighting model, see the UE4 shading presentation by Brian Karis
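For reference, the normal distribution term at the heart of that GGX model (with roughness alpha, normal n, and half-vector h) is commonly written as:

D_{GGX}(h) = \frac{\alpha^2}{\pi \big( (n \cdot h)^2 (\alpha^2 - 1) + 1 \big)^2}

Swapping this D (together with matched G and F terms) into the same F * ( G * D ) structure shown above is essentially what moves the shader from Blinn-Phong to PBR.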

Setup TensorFlow with GPU support on Windows

TensorFlow with GPU support computes much faster than the CPU-only build, but it requires some additional setup, especially for CUDA. First of all, follow the guide at https://www.tensorflow.org/install/install_windows . TensorFlow on Windows currently only supports Python 3; I suggest using Python 3.5.3 or below. Then, install CUDA 8.0 and download cuDNN v6.0. Next, move the files from the cuDNN v6.0 archive you downloaded into the path where you installed CUDA 8.0, such as "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0", following the steps below:

cudnn-8.0-windows10-x64-v6.0\cuda\bin\cudnn64_6.dll → CUDA\v8.0\bin
cudnn-8.0-windows10-x64-v6.0\cuda\include\cudnn.h → CUDA\v8.0\include
cudnn-8.0-windows10-x64-v6.0\cuda\lib\x64\cudnn.lib → CUDA\v8.0\lib\x64

You don't need to add the folder path of cudnn-8.0-windows10-x64-v6.0 to your %PATH%. Now, we can start to confirm our installation.
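A quick way to confirm it, assuming the TensorFlow 1.x API of that era: run a tiny session with device placement logging turned on and check that ops get mapped to the GPU.

# Confirm TensorFlow can see the GPU (TensorFlow 1.x API).
import tensorflow as tf

# log_device_placement prints the device each op is assigned to.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
hello = tf.constant("Hello, TensorFlow with GPU!")
print(sess.run(hello))
# The startup log should list a GPU device such as "/device:GPU:0".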

Webrender 1.0

Webrender 1.0, slides by Daosheng Mu. Source code: https://github.com/servo/webrender

AR on the Web

Because of the popularity of Pokémon Go, lots of people have started to discuss the possibility of AR (augmented reality) on the Web. Thanks to Jerome Etienne's slides, I got some ideas for making this AR demo. First of all, it is based on three.js and js-aruco. three.js is a WebGL framework that helps us construct and load 3D models. js-aruco is a JavaScript version of ArUco, a minimal library for augmented reality applications based on OpenCV. These two projects make it possible to implement a Web AR proof of concept. Next, I would like to introduce how to implement this demo. First, we need navigator.getUserMedia to give us the video stream from our webcam. This function is not supported by all browser vendors; please take a look at the support status at http://caniuse.com/#feat=stream

navigator.getUserMedia = ( navigator.getUserMedia ||
                           navigator.webkitGetUserMedia ||
                           navigator.mozGetUserMedia ||
                           navigator.msGetUserMedia );