Posts

glTF loader for Android Vulkan

This post introduces how we integrate an existing glTF loader library so that our Vulkan rendering framework, vulkan-android, can display glTF models.

tinygltf

From the beginning, we didn't want to reinvent the wheel, so we chose tinygltf as our glTF loader. tinygltf is a C++11-based library that helps us support cross-platform projects easily.

tinygltf setup in Android Studio

In the vulkan-android project, we put third-party libraries into the third_party folder. Therefore, we need to include tinygltf from the third_party folder in app/CMakeLists.txt, as below:

    set(THIRD_PARTY_DIR ../../third_party)
    include_directories(${THIRD_PARTY_DIR}/tinygltf)

Model loading from tinygltf

We are going to load a glTF model, in ASCII or binary format, from storage. tinygltf provides two APIs for this, LoadASCIIFromFile() and LoadBinaryFromFile(), chosen according to whether the file extension is *.gltf or *.glb. The TinyGLTF loader returns a tinygltf Model that incl…
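A minimal sketch of the loading path described above. LoadASCIIFromFile() and LoadBinaryFromFile() are tinygltf's real entry points; the helper function and its extension check are our own illustration:

    // Minimal sketch: pick the tinygltf API by file extension (*.glb vs *.gltf).
    // Define the implementation macros in exactly one translation unit.
    #define TINYGLTF_IMPLEMENTATION
    #define STB_IMAGE_IMPLEMENTATION
    #define STB_IMAGE_WRITE_IMPLEMENTATION
    #include "tiny_gltf.h"

    #include <string>

    bool LoadGltfModel(const std::string &path, tinygltf::Model *model) {
        tinygltf::TinyGLTF loader;
        std::string err, warn;
        const bool isBinary =
            path.size() >= 4 && path.compare(path.size() - 4, 4, ".glb") == 0;
        const bool ok = isBinary
            ? loader.LoadBinaryFromFile(model, &err, &warn, path)
            : loader.LoadASCIIFromFile(model, &err, &warn, path);
        // warn/err carry human-readable diagnostics from the loader.
        return ok;
    }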

Drawing textured cube with Vulkan on Android

Vulkan is a modern hardware-accelerated graphics API. Its goal is to provide a highly efficient way to do low-level graphics and compute on modern GPUs for PC, mobile, and embedded devices. I am working on a self-training project, vulkan-android, to teach myself how to use this new API.

The difference between OpenGL and Vulkan

OpenGL:
- A higher-level API in comparison with Vulkan; the next generation of OpenGL 4 will be Vulkan.
- Cross-platform and cross-language (mostly still based on C/C++, but people have implemented diverse versions that expose similar API bindings based on OpenGL; WebGL is a good example).
- Mainly used in 3D graphics and 2D image processing to interact with the GPU in order to achieve hardware acceleration.
- Has no command buffer that can be manipulated on the application side, which means draw calls easily become the performance bottleneck in a big, complex 3D scene.

Vulkan:
- Cross-platform and low-overhead.
- Erases the boundary between GPU API an…
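To make the command-buffer contrast concrete, here is a hedged sketch of recording a draw on the application side in Vulkan, which is exactly what OpenGL does not expose. The handles passed in (command buffer, render-pass begin info, pipeline) are assumed to have been created elsewhere, and the vertex count of 36 is simply a cube's worth of vertices:

    #include <vulkan/vulkan.h>

    // Sketch only: records a single draw into a command buffer that the
    // application owns and can replay or rebuild as it sees fit.
    void RecordDraw(VkCommandBuffer commandBuffer,
                    const VkRenderPassBeginInfo &renderPassBeginInfo,
                    VkPipeline graphicsPipeline) {
        VkCommandBufferBeginInfo beginInfo{};
        beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
        vkBeginCommandBuffer(commandBuffer, &beginInfo);

        vkCmdBeginRenderPass(commandBuffer, &renderPassBeginInfo,
                             VK_SUBPASS_CONTENTS_INLINE);
        vkCmdBindPipeline(commandBuffer, VK_PIPELINE_BIND_POINT_GRAPHICS,
                          graphicsPipeline);
        vkCmdDraw(commandBuffer, /*vertexCount=*/36, /*instanceCount=*/1,
                  /*firstVertex=*/0, /*firstInstance=*/0);
        vkCmdEndRenderPass(commandBuffer);

        vkEndCommandBuffer(commandBuffer);
    }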

C++ unit testing & CI integration in GitHub (2/2)

Based on the previous post, we are able to integrate our Android JNI project with CI tools, Circle-CI and GitHub Actions. However, it was still not perfect, because we were unable to run an Android emulator for the Android JNI unit tests. Now I think I have a solution: Android Emulator Runner.

Add a job for Android Emulator

When I first saw the instructions for Android Emulator Runner, I thought it would be super easy and should not take me an hour, but I was wrong... Let's use the sample config from the Android Emulator Runner link:

    jobs:
      test:
        runs-on: macos-latest
        steps:
          - name: checkout
            uses: actions/checkout@v2

          - name: run tests
            uses: reactivecircus/android-emulator-runner@v2
            with:
              api-level: 29
              script: ./gradlew connectedCheck

This test job runs on a macOS machine; it checks out your code from the repo and launches an Android emulator. I supposed Linux and Windows machine a…

C++ unit testing & CI integration in GitHub (1/2)

I am working on a side project, Vulkan-Android, that is based on Java, JNI, C++, and Vulkan for the Android platform. It also uses my C++ math library. Therefore, my build and unit-test requirements revolve around C++. First of all, I want to make sure the unit tests run properly locally.

C++ unit test on Mac OS

On Mac OS, I think the most convenient way to unit test C++ is writing XCTests in Xcode. To begin, we need to create a test plan in Xcode, which helps us create a scheme; then, in the test navigator, create a new Unit Test Target. Because XCTest was originally designed for Objective-C and Swift, testing our C++ code requires a workaround: change the file extension to *.mm (Objective-C++). Then write the unit tests as below:

    #import <XCTest/XCTest.h>
    #include "Vector3d.h"

    using namespace gfx_math;

    @interface testVector3D : XCTestCase
    @end

    @implementation testVector3D

    - (void)setUp {
        // Put setup code here. This method is cal…
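To show where a test would go once setUp is in place, here is a hedged sketch of one test method in the same Objective-C++ file. The Vector3D constructor, operator+, and x/y/z members used here are assumptions about the gfx_math API, not its actual interface:

    // Hypothetical test method; Vector3D's constructor, operator+, and
    // x/y/z members are assumed for illustration.
    - (void)testVectorAddition {
        Vector3D a(1.0f, 2.0f, 3.0f);
        Vector3D b(4.0f, 5.0f, 6.0f);
        Vector3D c = a + b;
        XCTAssertEqualWithAccuracy(c.x, 5.0f, 1e-6);
        XCTAssertEqualWithAccuracy(c.y, 7.0f, 1e-6);
        XCTAssertEqualWithAccuracy(c.z, 9.0f, 1e-6);
    }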

Experimental integration of Glean with Unity applications

You might have noticed that Firefox Reality PC Preview has been released in HTC's Viveport store. It is a VR web browser that provides 2D overlay browsing alongside immersive content and supports web-based immersive experiences for PC-connected VR headsets. In order to easily deploy our product into the Viveport store, we take advantage of Unity to build our application launcher. That, in turn, brings another challenge: how do we use Mozilla's existing telemetry system? As we know, the Glean SDK provides language bindings for several programming languages, including Kotlin, Swift, and Python. However, for applications that use Unity as their development toolkit, there are no existing bindings available to help us. Unity allows users to embed Python scripts in a Unity project via a Python interpreter; however, because Unity's technology is based on the Mono framework, that is not the same as our familiar Pytho…

How to train custom objects in YOLOv2

This article is based on [1]. We want a way to train the object classes we are interested in. Darknet has a Windows version ported by AlexeyAB [2].

First of all, we need to build darknet.exe from AlexeyAB to help us train and test data. Go to build/darknet, open darknet.sln with VS 2015, and configure it for the x64 solution platform. Rebuild the solution; it should successfully generate darknet.exe.

Then, we need to label objects in the images used for the training data. I use the BBox label tool to label objects' coordinates in images for the training data (python ./main.py). This tool's image root folder is ./Images; we can create a sub-folder (002) and enter 002 to let the tool load all *.jpg files from there. We mark labels in this tool to generate the objects' regions, marking where the objects are. The outputs are image-space coordinates, stored at ./Labels/002. However, the format of these coordinates is different from YOLOv2's; YOLOv2…
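The excerpt cuts off before the conversion step, but YOLO's label format itself is well known: one line per box containing the class index plus center and size, all normalized to [0,1] by the image dimensions. A minimal sketch of the conversion, assuming the label tool emits pixel-space corner coordinates (x_min, y_min, x_max, y_max):

    #include <cstdio>

    // Convert one box from pixel-space corners to YOLO's label format:
    // "class x_center y_center width height", normalized by image size.
    // The corner-format input is an assumption about the BBox label tool output.
    void CornersToYolo(int cls, float xmin, float ymin, float xmax, float ymax,
                       float imgW, float imgH) {
        float xc = (xmin + xmax) / 2.0f / imgW;
        float yc = (ymin + ymax) / 2.0f / imgH;
        float w  = (xmax - xmin) / imgW;
        float h  = (ymax - ymin) / imgH;
        std::printf("%d %.6f %.6f %.6f %.6f\n", cls, xc, yc, w, h);
    }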

Fast subsurface scattering

Fig.1 - Fast subsurface scattering of the Stanford Bunny

Based on the implementation in three.js, this technique provides a cheap, fast, and convincing alternative to ray tracing for translucent surfaces. It follows a talk shared at GDC 2011 [1], and the approach is used by the Frostbite 2 and Unity engines [1][2][3]. Traditionally, when a ray intersects a surface, we need to calculate the bounce after the intersection. Materials can roughly be divided into three types:
- Opaque: light can't go through the geometry, and the ray is bounced back.
- Transparent: the ray passes through the surface entirely, probably losing a little energy on the way out.
- Translucent: a ray entering the surface is bounced around internally, as in Fig. 2 below.

Fig.2 - BSSRDF [1]

In the case of translucency, we have several subsurface scattering approaches to choose from. When light is traveling inside the shape, we need to consider the diffuse value influe…
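The excerpt ends before the math, but the fast approximation in the GDC 2011 talk it cites [1] computes a view-dependent translucency term from a light vector distorted toward the surface normal, attenuated by a precomputed local thickness. A hedged CPU-side sketch of that term; the parameter names (distortion, power, scale, thickness) follow the talk, while the exact default values and the tiny Vec3 type are only illustrative:

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float Dot(const Vec3 &a, const Vec3 &b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Fast translucency term in the spirit of the GDC 2011 approximation [1]:
    // distort the light direction toward the normal, then take a back-facing
    // lobe toward the viewer, scaled by local thickness.
    float Translucency(const Vec3 &L,   // surface-to-light direction (unit)
                       const Vec3 &N,   // surface normal (unit)
                       const Vec3 &V,   // surface-to-view direction (unit)
                       float thickness, // precomputed local thickness in [0,1]
                       float distortion = 0.2f,
                       float power = 3.0f,
                       float scale = 1.0f) {
        Vec3 lt { L.x + N.x * distortion,
                  L.y + N.y * distortion,
                  L.z + N.z * distortion };
        float vDotL = std::max(0.0f, Dot(V, Vec3{-lt.x, -lt.y, -lt.z}));
        return std::pow(vDotL, power) * scale * thickness;
    }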