<h1>glTF loader for Android Vulkan</h1><p>This post introduces how we integrated an existing glTF loader library to display glTF models in our Vulkan rendering framework, <a href="https://github.com/daoshengmu/vulkan-android" target="_blank">vulkan-android</a>.</p><p><br /></p><h2 style="text-align: left;">tinygltf</h2><p>From the beginning, we didn't want to reinvent the wheel, so we chose <a href="https://github.com/syoyo/tinygltf" target="_blank">tinygltf</a> as our glTF loader. tinygltf is a C++11-based library, which makes it easy to support cross-platform projects.</p><h2 style="text-align: left;">tinygltf setup in Android Studio</h2><p>In the <a href="https://github.com/daoshengmu/vulkan-android" target="_blank">vulkan-android</a> project, we put third-party libraries into the <i>third_party</i> folder. Therefore, we need to include tinygltf from the third_party folder in app/CMakeLists.txt as below.</p><p>set(THIRD_PARTY_DIR ../../third_party)</p><p>include_directories(${THIRD_PARTY_DIR}/tinygltf)</p><h3 style="text-align: left;">Model loading from tinygltf</h3><p>We are going to load a glTF model, in ASCII or binary format, from storage. tinygltf provides two APIs, <i>LoadBinaryFromFile()</i> and <i>LoadASCIIFromFile()</i>, chosen according to whether the extension is *.glb or *.gltf. The TinyGLTF loader returns a tinygltf Model that includes mesh, animation, node, material, texture, and skin data. In this article, I would like to focus on creating meshes and textures from a tinygltf Model.</p><h3 style="text-align: left;">Buffer creation from tinygltf</h3><div>tinygltf::Model::bufferViews contains the vertex and index data. Index data is usually of scalar type, and vertex data is usually vector3. A bufferView represents a subset of the data in a buffer, defined by a byte offset into the buffer specified in the byteOffset property and a total byte length specified by the byteLength property of the buffer view.</div><div><br /></div><div>bufferView.target tells us whether the bufferView is an index buffer or a vertex buffer. If it is <i>TINYGLTF_TARGET_ELEMENT_ARRAY_BUFFER</i>, it is an index buffer; if it is <i>TINYGLTF_TARGET_ARRAY_BUFFER</i>, it is a vertex buffer.</div><h3 style="text-align: left;">Texture creation from tinygltf</h3><p>tinygltf handles image loading inside the library. Whether the image data is embedded in a *.glb file or comes from PNG images referenced by a *.gltf file, it packs the byte data into <i>model.images</i>. model.images stores the loaded buffers, and we need to allocate Vulkan memory and copy the data into an image buffer.</p>
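Before moving on to the image upload, here is a concrete sketch of the bufferView walk described in the buffer section above. <i>CreateBuffer()</i> and the staging-copy plumbing are the project's helpers (shown below for image data); the loop itself is illustrative:
<pre><code class="cpp">
for (const tinygltf::BufferView& bufferView : model.bufferViews) {
  if (bufferView.target == 0) {
    continue; // not vertex or index data (e.g. image bytes).
  }
  const tinygltf::Buffer& buffer = model.buffers[bufferView.buffer];
  // TINYGLTF_TARGET_ELEMENT_ARRAY_BUFFER -> index buffer,
  // TINYGLTF_TARGET_ARRAY_BUFFER -> vertex buffer.
  VkBufferUsageFlags usage =
      (bufferView.target == TINYGLTF_TARGET_ELEMENT_ARRAY_BUFFER)
          ? VK_BUFFER_USAGE_INDEX_BUFFER_BIT
          : VK_BUFFER_USAGE_VERTEX_BUFFER_BIT;
  // The byte span of this view inside the shared glTF buffer:
  const unsigned char* src = buffer.data.data() + bufferView.byteOffset;
  const size_t size = bufferView.byteLength;
  // ...upload [src, src + size) into a Vulkan buffer created with 'usage'.
}
</code></pre>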
Back to the image data: first of all, create a staging buffer and copy the image bytes into it.
<pre><code class="cpp">
VkBuffer stagingBuffer;
VkDeviceMemory stagingBufferMemory;
CreateBuffer(imageSize, VK_BUFFER_USAGE_TRANSFER_SRC_BIT,
             VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT,
             stagingBuffer, stagingBufferMemory);
void* data;
vkMapMemory(mDeviceInfo.device, stagingBufferMemory, 0, imageSize, 0, &data);
memcpy(data, aBuffer, static_cast<size_t>(imageSize));
vkUnmapMemory(mDeviceInfo.device, stagingBufferMemory);
</code></pre>
Next, create a Vulkan image and its image memory, and use a one-time command buffer to copy the data from the staging buffer we just created into this Vulkan image.
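The snippet below references an <i>imageInfo</i> structure describing the texture. A minimal sketch of filling it; the RGBA8 format is an assumption (tinygltf decodes images to 8-bit RGBA by default):
<pre><code class="cpp">
VkImageCreateInfo imageInfo{};
imageInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
imageInfo.imageType = VK_IMAGE_TYPE_2D;
imageInfo.extent = {(uint32_t)aTexWidth, (uint32_t)aTexHeight, 1};
imageInfo.mipLevels = 1;
imageInfo.arrayLayers = 1;
imageInfo.format = VK_FORMAT_R8G8B8A8_UNORM; // assumption: 8-bit RGBA from tinygltf.
imageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
imageInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
imageInfo.usage = VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT; // copy target + sampled in shaders.
imageInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
imageInfo.samples = VK_SAMPLE_COUNT_1_BIT;
</code></pre>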
<pre><code class="cpp">
if (vkCreateImage(mDeviceInfo.device, &imageInfo, nullptr, &textureImage) != VK_SUCCESS) {
  LOG_E(gAppName.data(), "failed to create image!");
  return false;
}

VkMemoryRequirements memRequirements;
vkGetImageMemoryRequirements(mDeviceInfo.device, textureImage, &memRequirements);

VkMemoryAllocateInfo allocInfo{};
allocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
allocInfo.allocationSize = memRequirements.size;
MapMemoryTypeToIndex(memRequirements.memoryTypeBits,
                     VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, &allocInfo.memoryTypeIndex);
if (vkAllocateMemory(mDeviceInfo.device, &allocInfo, nullptr, &textureImageMemory) != VK_SUCCESS) {
  LOG_E(gAppName.data(), "failed to allocate image memory!");
  return false;
}
vkBindImageMemory(mDeviceInfo.device, textureImage, textureImageMemory, 0);

// Transition the image to TRANSFER_DST_OPTIMAL so it can receive the copy.
VkCommandBuffer commandBuffer = BeginSingleTimeCommands();
SetImageLayout(commandBuffer, textureImage, VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
               VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT);
EndSingleTimeCommands(commandBuffer);

// Copy the staging buffer into the image.
commandBuffer = BeginSingleTimeCommands();
VkBufferImageCopy region{};
region.bufferOffset = 0;
region.bufferRowLength = 0;
region.bufferImageHeight = 0;
region.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
region.imageSubresource.mipLevel = 0;
region.imageSubresource.baseArrayLayer = 0;
region.imageSubresource.layerCount = 1;
region.imageOffset = {0, 0, 0};
region.imageExtent = {
  (uint32_t)aTexWidth,
  (uint32_t)aTexHeight,
  1
};
vkCmdCopyBufferToImage(commandBuffer, stagingBuffer, textureImage, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region);
EndSingleTimeCommands(commandBuffer);
</code></pre>
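One step the snippets above leave out: after the copy, the image must be transitioned from TRANSFER_DST_OPTIMAL to SHADER_READ_ONLY_OPTIMAL before shaders can sample it, and the staging buffer can then be released. A minimal sketch reusing the same helpers as above (the exact barrier handling inside <i>SetImageLayout()</i> is assumed to match the earlier call):
<pre><code class="cpp">
commandBuffer = BeginSingleTimeCommands();
SetImageLayout(commandBuffer, textureImage, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
               VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT);
EndSingleTimeCommands(commandBuffer);

// The staging buffer is no longer needed after the copy completes.
vkDestroyBuffer(mDeviceInfo.device, stagingBuffer, nullptr);
vkFreeMemory(mDeviceInfo.device, stagingBufferMemory, nullptr);
</code></pre>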
<h2 style="text-align: left;">Result</h2><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgz7ovwcoiOOmngqY1RiWEczUoLMkZHvHPy3Q90HrWijlH_GBV0ZMEh6pvbr_3cDCVBgrNWfUZWC_Du35h3Zhyphenhyphenjw6G7ohKzi_p9SNEe_E1TFHZ9PqoLyRnURcAs_j05tsBtqgAllp3VolEC/s2220/Screenshot_20210912-142140.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="2220" data-original-width="1080" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgz7ovwcoiOOmngqY1RiWEczUoLMkZHvHPy3Q90HrWijlH_GBV0ZMEh6pvbr_3cDCVBgrNWfUZWC_Du35h3Zhyphenhyphenjw6G7ohKzi_p9SNEe_E1TFHZ9PqoLyRnURcAs_j05tsBtqgAllp3VolEC/w312-h640/Screenshot_20210912-142140.png" width="312" /></a></div><h1>Drawing textured cube with Vulkan on Android</h1><p>Vulkan is a modern, hardware-accelerated graphics API. Its goal is to provide a highly efficient way to do low-level graphics and compute on modern GPUs for PC, mobile, and embedded devices. I am personally working on a self-training project, <a href="https://github.com/daoshengmu/vulkan-android">vulkan-android</a>, to teach myself how to use this new API.</p><h2 style="text-align: left;">The difference between OpenGL and Vulkan</h2><h3 style="text-align: left;">OpenGL:</h3><div><ul style="text-align: left;"><li>A higher-level API in comparison with Vulkan; Vulkan is positioned as the successor to OpenGL 4.</li><li>Cross-platform and cross-language (mostly still based on C/C++, but people have implemented diverse versions exposing similar API bindings; WebGL is a good example).</li><li>Mainly used in 3D graphics and 2D image processing to interact with the GPU in order to achieve hardware acceleration.</li><li>There is no command buffer that can be manipulated on the application side. That means draw calls easily become the performance bottleneck in a big, complex 3D scene.</li></ul></div><h3 style="text-align: left;">Vulkan:</h3><div><ul style="text-align: left;"><li>Cross-platform and low-overhead. It thins the boundary between the GPU API and the driver to get close-to-hardware rendering and compute on modern GPUs, with high-performance, efficient access to GPU resources. People say Vulkan is the next generation of OpenGL.</li><li>Vulkan provides command buffers that multiple threads can record simultaneously for the GPU to consume.</li><li>Applications take over the management of memory and threads. That means video games or applications can customize these to fit their requirements and use them in more performant ways.</li><li>The validation layers can be enabled independently. For example, we can choose to turn off the validation layers when the product ships, which helps save runtime performance.</li></ul></div><h2 style="text-align: left;">Vulkan setup in Android</h2><p>To support Vulkan in Android, we need to rely on the Android SDK. I am using Android SDK 29, which has <i>libvulkan.so</i> under <i>android-29/arch-arm64/usr/lib/</i> in its Android NDK folder. Besides, we define the extern function pointers that we will use in <i>vulkan_wrapper.h</i>. In <i>vulkan_wrapper.cpp</i>, we load the library by</p><p><i>dlopen("libvulkan.so", RTLD_NOW | RTLD_LOCAL);</i></p><p>and dynamically map its symbols with the following code.</p><p><i>vkCreateInstance = reinterpret_cast<PFN_vkCreateInstance>(dlsym(libvulkan, "vkCreateInstance"));</i></p>Then, let's initialize the Vulkan context by calling <i>vkCreateInstance</i> to create a Vulkan instance. In order to render into the Android screen buffer, we need to create an Android surface from <i>vkCreateAndroidSurfaceKHR</i>, and this surface will be bound to our swap chain.<h2 style="text-align: left;">Command buffer</h2><div>Draw calls and memory transfers in Vulkan are not executed as direct calls. We record these calls into command buffer objects to be performed later. The advantage is that this heavy work can be prepared in advance and recorded from multiple threads.</div><div><br /></div><div>The usage of command buffers in Vulkan is very different from OpenGL. In OpenGL, all GL commands are executed and put into a driver-side command buffer; some kinds of commands ask the GPU to execute them immediately, which causes CPU <---> GPU transitions and makes the CPU wait for the GPU to finish its tasks at runtime. Besides, these API calls themselves run at runtime, which also costs CPU time.</div><div><br /></div><div>Command buffers in Vulkan, however, provide an optimized approach. We still issue our API calls into a command buffer, but the recording can happen offline, which saves runtime CPU time dramatically. At runtime, we only need to update our uniform buffers and bind the command buffer to the swap chain.</div><h2 style="text-align: left;">Geometry buffers in Vulkan</h2><p>For rendering meshes in Vulkan, the concept is similar to OpenGL. We first need to construct the mesh's vertex and index buffers.</p><p>Creating vertex and index buffers both rely on <i>vkCreateBuffer</i>, the API that creates a new buffer object. In the process of creating a vertex/index buffer, we need two buffers: the first one is a <i>src</i> buffer into which we copy the index or vertex data; we call it the <i>staging buffer</i>. Then, we create another <i>dst</i> buffer and copy the staging buffer into it. Lastly, we free and destroy the staging buffer. The point of the staging buffer is to temporarily hold the raw data in a Vulkan buffer object so that the copy into the destination index/vertex buffer can be done efficiently.</p><p>The only difference between creating a vertex and an index buffer is the usage flag: <i>VK_BUFFER_USAGE_VERTEX_BUFFER_BIT</i> vs <i>VK_BUFFER_USAGE_INDEX_BUFFER_BIT</i>.</p><p>Next, in a general 3D graphics pipeline, vertex data needs to be transformed from model space -> world space -> view space -> clip space. We assign and multiply a model-view-projection (MVP) matrix in the vertex shader. To do that, we need to know how to use a uniform buffer. The creation process of a uniform buffer is similar to a vertex/index buffer, using the <i>VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT</i> flag, but we need to create one for each swap chain image. A uniform buffer owns a uniform buffer memory that is used for uploading data from the application side. When updating the uniform buffer, we do the following operations.</p>
<pre><code class="cpp">void* data;
vkMapMemory(mDeviceInfo.device, surf->mUniformBuffersMemory[aImageIndex], 0, surf->mUBOSize, 0, &data);
Matrix4x4f mvpMtx;
mvpMtx = mProjMatrix * mViewMatrix * surf->mTransformMatrix;
memcpy(data, &mvpMtx, surf->mUBOSize);
vkUnmapMemory(mDeviceInfo.device, surf->mUniformBuffersMemory[aImageIndex]);</code></pre><div style="text-align: left;">You might notice the uniform buffer doesn't utilize a staging buffer to copy data; that is because we need to update it at runtime. A staging buffer is more efficient when creating buffers and copying data once, but it is not suitable for per-frame updates.</div><h2 style="text-align: left;">Create textures in Vulkan</h2><div>In terms of texture creation, there are a couple of things we need to handle.</div><div><br /></div><div><ol style="text-align: left;"><li>Loading textures from files. I chose to adopt the KTX texture format, a lightweight container for OpenGL and Vulkan that is supported by Khronos. Regarding how to load textures with the KTX library, please take a look at the <a href="https://github.com/KhronosGroup/KTX-Software">KTX GitHub repo</a>.</li><li>Copying image data into a Vulkan buffer object. We do the same operations as for vertex buffer creation: create a buffer object, but use the <i>VK_IMAGE_USAGE_SAMPLED_BIT</i> flag. Then, submit a command buffer to copy the image data into a Vulkan texture.</li><li>Mipmap levels in the texture.<br />Build a buffer-copy region for each mipmap level held in the staging buffer, as the following code does.
<pre><code class="cpp">
for (int i = 0; i < aTexture.mipLevels; i++) {
  ktx_size_t offset;
  if (ktxTexture_GetImageOffset(ktxTexture, i, 0, 0, &offset) != KTX_SUCCESS) {
    LOG_E(gAppName.data(), "%s: Create mipmap level failed,", aFilePath);
    continue;
  }
  VkBufferImageCopy bufferCopyRegion = {};
  bufferCopyRegion.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
  bufferCopyRegion.imageSubresource.mipLevel = i; // the level of mipmap
  bufferCopyRegion.imageSubresource.baseArrayLayer = 0;
  bufferCopyRegion.imageSubresource.layerCount = 1;
  bufferCopyRegion.imageExtent.width = ktxTexture->baseWidth >> i;
  bufferCopyRegion.imageExtent.height = ktxTexture->baseHeight >> i;
  bufferCopyRegion.imageExtent.depth = 1;
  bufferCopyRegion.bufferOffset = offset;
  bufferCopyRegions.push_back(bufferCopyRegion);
}

// Copy mip levels from the staging buffer.
vkCmdCopyBufferToImage(
    copyCommand,
    stagingBuffer,
    aTexture.image,
    VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
    static_cast<uint32_t>(bufferCopyRegions.size()),
    bufferCopyRegions.data());

// Once the data has been uploaded, we transition the texture image to the
// shader-read layout so it can be sampled from.
imageMemoryBarrier.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
imageMemoryBarrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
imageMemoryBarrier.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
imageMemoryBarrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL; </code></pre></li></ol><h2 style="text-align: left;">DescriptorSetLayout in Vulkan</h2></div><div><i>"A descriptor set layout object is defined by an array of zero or more descriptor bindings. Each individual descriptor binding is specified by a descriptor type, a count (array size) of the number of descriptors in the binding". </i>As defined in the Khronos documentation, we need to describe the memory layout of the data bindings when passing data to Vulkan. In this textured cube example, we send a uniform buffer and a texture to the shaders. Their description info is consumed through <i>vkCreateDescriptorSetLayout</i>.</div>
<pre><code class="cpp">
// Uniform buffer descriptor layout.
layoutBindings.push_back(
  {
    .binding = 0, // the binding index in the vertex shader.
    // the number of descriptors in this binding; e.g. an array of bone matrices would be more than one.
    .descriptorCount = 1,
    .descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
    .pImmutableSamplers = nullptr,
    .stageFlags = VK_SHADER_STAGE_VERTEX_BIT, // TODO: it needs to be adapted for FRAGMENT_BIT.
  }
);
// Texture descriptor layout.
layoutBindings.push_back(
  {
    .binding = 1, // the binding index in the fragment shader.
    .descriptorCount = 1,
    .descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, // describes the texture bound to the fragment shader.
    .pImmutableSamplers = nullptr,
    .stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT,
  }
);
</code></pre>
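With the bindings filled in, the layout object itself is created from them. Below is a minimal sketch of that call; the <i>mDescriptorSetLayout</i> member name is an assumption for illustration:
<pre><code class="cpp">
VkDescriptorSetLayoutCreateInfo layoutInfo{};
layoutInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
layoutInfo.bindingCount = static_cast<uint32_t>(layoutBindings.size());
layoutInfo.pBindings = layoutBindings.data();
if (vkCreateDescriptorSetLayout(mDeviceInfo.device, &layoutInfo, nullptr, &mDescriptorSetLayout) != VK_SUCCESS) {
  LOG_E(gAppName.data(), "failed to create descriptor set layout!");
}
</code></pre>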
<h2 style="text-align: left;">Texture mapping in Vulkan</h2><p>Implementing texture mapping is conceptually the same in OpenGL and Vulkan. We need to create a vertex input data format that carries texture coordinates, which are then interpolated across pixels. Lastly, the fragment shader fetches texels according to the interpolated texture coordinate. We bind the texture to the fragment shader via <i>VkWriteDescriptorSet</i>.</p>
<pre><code class="cpp">
VkDescriptorImageInfo imageInfo {
  // The image's view (images are never directly accessed by the shader,
  // but rather through views defining subresources)
  .imageView = aSurf->mTextures[0].view,
  // The sampler (telling the pipeline how to sample the texture,
  // including repeat, border, etc.)
  .sampler = aSurf->mTextures[0].sampler,
  // The current layout of the image (note: should always fit
  // the actual use, e.g. shader read)
  .imageLayout = aSurf->mTextures[0].imageLayout,
};

descriptorWrite.push_back({
  .sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET,
  .dstSet = aSurf->mDescriptorSets[i],
  .dstBinding = 1,
  .dstArrayElement = 0,
  .descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, // binding a texture to a fragment shader.
  .descriptorCount = 1,
  .pImageInfo = &imageInfo,
});
vkUpdateDescriptorSets(mDeviceInfo.device, descriptorWrite.size(), descriptorWrite.data(), 0, nullptr);
</code></pre>
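For completeness, the uniform buffer at binding 0 goes through the same <i>descriptorWrite</i> array. A minimal sketch, where the <i>mUniformBuffers</i> member name is an assumption (the post only shows <i>mUniformBuffersMemory</i>):
<pre><code class="cpp">
VkDescriptorBufferInfo bufferInfo {
  .buffer = aSurf->mUniformBuffers[i], // assumed per-swap-chain-image uniform buffer handle.
  .offset = 0,
  .range = aSurf->mUBOSize,
};
descriptorWrite.push_back({
  .sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET,
  .dstSet = aSurf->mDescriptorSets[i],
  .dstBinding = 0, // matches the uniform buffer binding in the layout.
  .dstArrayElement = 0,
  .descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
  .descriptorCount = 1,
  .pBufferInfo = &bufferInfo,
});
</code></pre>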
The texture mapping fragment shader is as below. No surprises here; this is very common GLSL code.<pre><code class="cpp">layout(location = 0) in vec4 fragColor;
layout(location = 1) in vec2 fragTexCoord;
layout(binding = 1) uniform sampler2D texSampler;
layout(location = 0) out vec4 outColor;
void main() {
  outColor = texture(texSampler, fragTexCoord) * fragColor;
}
</code></pre>
Actually, Vulkan can't use GLSL code directly; it has to be compiled to the <i>Standard Portable Intermediate Representation - V</i> (<b>SPIR-V</b>). The Android SDK uses <i>glslc</i> to compile GLSL code to SPIR-V when building projects.
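At runtime, the compiled SPIR-V binary is wrapped in a shader module before being attached to a pipeline stage. A minimal sketch; the <i>spirvCode</i> vector holding the loaded *.spv contents is an assumption:
<pre><code class="cpp">
std::vector<uint32_t> spirvCode; // filled by reading the compiled *.spv asset.
VkShaderModuleCreateInfo createInfo{};
createInfo.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
createInfo.codeSize = spirvCode.size() * sizeof(uint32_t);
createInfo.pCode = spirvCode.data();
VkShaderModule shaderModule;
if (vkCreateShaderModule(mDeviceInfo.device, &createInfo, nullptr, &shaderModule) != VK_SUCCESS) {
  LOG_E(gAppName.data(), "failed to create shader module!");
}
</code></pre>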
<div><br /></div><div>Ultimately, the final result running on an Android device is as below.</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhs6_Ua5f4AguT03Mjvyk8Myg9Q4bYxKrxec1lxoiJNafIzZjC60pm2ya0_r-DYk4ac_U5ltYcktcccurPmzsMsog9xM3rP7e6wtJTqpW_n352iSdT4ddjPZ1vR0ROkWATmqDMNtAcoaCqj/s2220/vkTextureMapping.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="2220" data-original-width="1080" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhs6_Ua5f4AguT03Mjvyk8Myg9Q4bYxKrxec1lxoiJNafIzZjC60pm2ya0_r-DYk4ac_U5ltYcktcccurPmzsMsog9xM3rP7e6wtJTqpW_n352iSdT4ddjPZ1vR0ROkWATmqDMNtAcoaCqj/w312-h640/vkTextureMapping.jpg" width="312" /></a></div><h1>C++ unit testing & CI integration in GitHub (2/2)</h1><p>Based on <a href="https://blog.dsmu.me/2021/01/c-unit-testing-ci-integration-in-github.html" target="_blank">the previous post</a>, we are able to integrate our Android JNI project with the CI tools Circle-CI and GitHub Actions. However, one imperfection remained: we were unable to run an Android emulator for the Android JNI unit tests. Now, I think I have a solution: <a href="https://github.com/marketplace/actions/android-emulator-runner" target="_blank">Android Emulator Runner</a>.</p><h3 style="text-align: left;">Add a job for Android Emulator</h3><p>When I first saw the instructions for <a href="https://github.com/marketplace/actions/android-emulator-runner" target="_blank">Android Emulator Runner</a>, I thought it would be super easy and wouldn't take me an hour, but I was wrong... Let's use the sample config from the Android Emulator Runner link.</p><pre><code class="yaml">jobs:
  test:
    runs-on: macos-latest
    steps:
    - name: checkout
      uses: actions/checkout@v2
    - name: run tests
      uses: reactivecircus/android-emulator-runner@v2
      with:
        api-level: 29
        script: ./gradlew connectedCheck
</code></pre>This test job runs on a macOS machine; it checks out your code from the repo and launches an Android emulator. I suppose Linux and Windows machines can support the Android emulator as well, but the macOS one looks more stable. In the final step, it runs your tests on this emulator. It all sounds good. But what if you have submodules, especially ones that live in private repos? Or your NDK version doesn't match the machine's preinstalled version? Yep, these were what I struggled with.<div><br /></div><div><h3>Private submodules support</h3></div><div>First of all, let's deal with the private repo. In my project, there is one submodule I can't access using my public SSH key. Hence, I need to use my personal access token (PAT) to access it. To create a PAT, go to your <i>Settings/Developer settings/Personal access tokens</i>, check the <i>workflow</i> scope, then save it.</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjegW8uN_V7-FbWQoxBjiyZKOAHYHGi17c7bwVJK8Xjt4DNxX1fnoUnXiB7vsmDuiaW0nEcO7E7Ekm7EHkn2BzNG5zcZ8-6Yxtml-zb7z0zOrQV2RPK2aA-DC5Qg1v7tUGOeb82c-5c98yO/s2006/PAT.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1110" data-original-width="2006" height="354" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjegW8uN_V7-FbWQoxBjiyZKOAHYHGi17c7bwVJK8Xjt4DNxX1fnoUnXiB7vsmDuiaW0nEcO7E7Ekm7EHkn2BzNG5zcZ8-6Yxtml-zb7z0zOrQV2RPK2aA-DC5Qg1v7tUGOeb82c-5c98yO/w640-h354/PAT.jpg" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">Next, we are going to use this PAT to access the submodules. We add a step to do it.</div><pre><code class="yaml">- name: Checkout submodules using a PAT
  run: |
    git config --file .gitmodules --get-regexp url | while read url; do
      git config --file=.gitmodules $(echo "$url" | sed -E "s/git@github.com:|https:\/\/github.com\//https:\/\/${{ secrets.CI_PAT }}:${{ secrets.CI_PAT }}@github.com\//")
    done
    git submodule sync
    git submodule update --init --recursive
</code></pre>This config means we are going to replace <i>git@github.com</i> and <i>https://github.com</i> with <i>https://${{secrets.CI_PAT}}:${{secrets.CI_PAT}}@github.com</i> in order to access these submodules with our PAT. I got this answer from <a href="https://github.com/actions/checkout/issues/116#issuecomment-644419389" target="_blank">here</a>.<h3 style="font-family: -webkit-standard; white-space: normal;">Specific NDK version support</h3>Then, we adjust the NDK version based on what our project's <u>build.gradle</u> requests. My project uses <i>ndkVersion "21.0.6113669"</i>. One of the reasons is that Circle-CI uses <i>21.0.6113669</i>, and I want the project to be compatible with both CI tools. So, I'm going to adjust the NDK version on the CI machine. There are two ways to do it.
The first one is adding an additional step to install the specific NDK version.<pre><code class="yaml">- name: Install NDK
  run: echo "y" | $ANDROID_HOME/tools/bin/sdkmanager --install "ndk;21.0.6113669"
</code></pre>
The other one is specifying the NDK version you want when you set up the environment.<pre><code class="yaml">uses: reactivecircus/android-emulator-runner@v2
with:
  api-level: 29
  ndk: 21.0.6113669
</code></pre><br /><div>After that, the unit test config for the Android emulator should be good, and don't forget to add a GitHub Actions badge<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdVErHNjJ2jxzwhdvc-z3l9LV0jhXgL2K6O6hGjXAPr6nbmZSIr60MP3ZTJuEQkHXF29lhkovTmy-G8ggmdA__T6bZucnNdG87b8iX8Pzbx8nepYUuJ2IugQfIlJJ1lRV_C9cEf2tJY-9q/s198/githubAction.jpg" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="64" data-original-width="198" height="28" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdVErHNjJ2jxzwhdvc-z3l9LV0jhXgL2K6O6hGjXAPr6nbmZSIr60MP3ZTJuEQkHXF29lhkovTmy-G8ggmdA__T6bZucnNdG87b8iX8Pzbx8nepYUuJ2IugQfIlJJ1lRV_C9cEf2tJY-9q/w87-h28/githubAction.jpg" width="87" /></a>as well.</div><h1>C++ unit testing & CI integration in GitHub (1/2)</h1><p>I am working on a side project, <a href="https://github.com/daoshengmu/vulkan-android" target="_blank">Vulkan-Android</a>, which is based on Java, JNI, C++, and Vulkan for the Android platform. It also uses my C++ math library. Therefore, my build and unit test requirements revolve around C++. First of all, I want to make sure the unit tests run properly locally.</p><h3 style="text-align: left;"><b>C++ unit test on Mac OS</b></h3><p>On macOS, I think the most convenient way to do C++ unit testing is writing XCTest cases in Xcode. To begin, we create a test plan in Xcode; it helps us create a scheme. Then, in the test navigator, create a new Unit Test Target. Because XCTest was originally designed for Objective-C and Swift, if we want to test our C++ code, we need a workaround: give the test files the <i>*.mm</i> extension.</p><p>And then, write down the unit tests as below:</p>
<pre><code class="cpp">
#import <XCTest/XCTest.h>
#include "Vector3d.h"

using namespace gfx_math;

@interface testVector3D : XCTestCase
@end

@implementation testVector3D

- (void)setUp {
  // Put setup code here. This method is called before the invocation of each test method in the class.
}

- (void)tearDown {
  // Put teardown code here. This method is called after the invocation of each test method in the class.
}

- (void)testVectorBasic {
  Vector3Df a(1, 0, 0);
  float res = a.DotProduct(Vector3Df(1, 0, 0));
  XCTAssertEqual(res, 1.0f);
}

@end
</code></pre>
Press Cmd + U, and Xcode will execute the test plan for you.<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXq-5WlkezcGfosKfHZVUmxEJsjX67esa3CBHMDNOfrLxVLfC9jVEwgelaPlLKw-_vvhHW2Afp2w9U_a_6STKCcEX91fEZxAIQ2I3_5i1T9mOIHkANvow-jrETkd4S1eW-FfAdzQJeRUAn/s654/xctest.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="654" data-original-width="614" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXq-5WlkezcGfosKfHZVUmxEJsjX67esa3CBHMDNOfrLxVLfC9jVEwgelaPlLKw-_vvhHW2Afp2w9U_a_6STKCcEX91fEZxAIQ2I3_5i1T9mOIHkANvow-jrETkd4S1eW-FfAdzQJeRUAn/s320/xctest.jpg" /></a></div><div><div><h3 style="text-align: left;"><b>C++ unit tests with CI</b></h3>Technically, we should be able to find a CI tool that runs these unit tests directly in its service. My first try was <u><a href="https://circleci.com" target="_blank">Circle-CI</a></u>, but its macOS instances are not free. So, I moved to GitHub Actions. GitHub has two workflow templates for C++ builds: one is <u>MS-Build</u>, the other is <u>Make</u>. I chose <u>Make</u> because it also runs on my macOS machine, and, if you want, you can integrate it with Google Test as well. As a beginner, I tended not to be too aggressive, so I used C++ <i>assert()</i> to verify my functions.<pre><code class="cpp">std::vector<Vector3Df> interPointList;
bool result = RayIntersectWithSphere(ray, Vector3Df(-10, 0, 0), Vector3Df(0,0,0), 5.0f, interPointList); // 'ray' is a gfx-math ray set up earlier (omitted here).
assert(result);
assert(interPointList.size() == 2);
</code></pre>
Then, create a Makefile like <a href="https://github.com/daoshengmu/gfx-math/blob/master/Makefile" target="_blank">this</a>. We can verify if this Makefile works or not in local first by inserting <i style="background-color: #cccccc;">make clean && make build && ./unittest</i> and submit it to the GitHub repo. In GitHub Actions, choosing create a new workflow with <u>C/C++ with Make</u>, and setup its config file as blow, it will be put under <i>.github/workflows/</i><pre><code class="yaml">name: C/C++ CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: make clean
      run: make clean
    - name: make build
      run: make build
    - name: run unittest
      run: ./unittest
</code></pre>This config file executes a few steps: clean, build, and run the executable that contains our unit tests. After that, every time a new patch is committed to this repo (branches other than master could be added to the config as well), this workflow executes automatically. One last thing: don't forget to add its badge <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsiLnljhEvTd8qFPTle_DD_XyZNBSTWBeawd08meJK_1S4fKfhyAtY_gHUVwJDi4l9Nx6Gs3yl0BfvvqHDU3JuWJzFMvYhM6XR9VYrtk8j2e4iKvbemTm9eXaoWTWjPPzzATKXe7oxiAyJ/s278/ci-check.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="52" data-original-width="278" height="23" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsiLnljhEvTd8qFPTle_DD_XyZNBSTWBeawd08meJK_1S4fKfhyAtY_gHUVwJDi4l9Nx6Gs3yl0BfvvqHDU3JuWJzFMvYhM6XR9VYrtk8j2e4iKvbemTm9eXaoWTWjPPzzATKXe7oxiAyJ/w122-h23/ci-check.jpg" width="122" /></a></div>into your README.md file, because it looks really cute.<div><br /></div><div>We can also use Circle-CI's Windows- or Linux-based Docker images, although we need to install an additional C++ compiler at the beginning; for the detailed setup, please refer to <a href="https://discuss.circleci.com/t/how-to-get-a-c-compiler-with-circleci/37636" target="_blank">the discussion</a> and <a href="https://github.com/DynamicSquid/night/blob/master/.circleci/config.yml" target="_blank">the setting</a>.</div><div><div><h3><b>Android JNI build and testing with CI</b></h3></div><div>Building Android projects locally is pretty easy: just run <span style="background-color: #eeeeee;">./gradlew build</span>. Then, in terms of Android C++/JNI testing, I think <a href="https://github.com/google/googletest" target="_blank">Google Test</a> is the most convenient choice. Thanks to <a href="https://github.com/paleozogt/AndroidGTestRunner" target="_blank">AndroidGTestRunner</a> [1] for inspiring me to do this. First, we need to add Google Test into our CMakeLists build, pointing at googletest's CMakeLists folder and linking it into our test target:</div><div><i style="background-color: #eeeeee;"><br /></i></div><div><i style="background-color: #eeeeee;">add_subdirectory(${THIRD_PARTY_DIR}/googletest gtest)</i></div><div><span style="background-color: #eeeeee;"><i>target_link_libraries(your-test-target gtest)</i></span></div><div><span style="background-color: #eeeeee;"><i><br /></i></span></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTLr22zTITW3dMglFF_ejStsQb9i6JYzjaHOUvP4LklxfTFVH-Yoz4O9hgrlg88un94BwSwQM2OXTlbOhEZwjDX9A2cfL40N0f2NzxYJKzMQW5-V0pleQ9-Z1KxM9tYmHK2eNA4ZpCsVXk/s704/AndroidC%252B%252BTestFlow.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="368" data-original-width="704" height="334" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTLr22zTITW3dMglFF_ejStsQb9i6JYzjaHOUvP4LklxfTFVH-Yoz4O9hgrlg88un94BwSwQM2OXTlbOhEZwjDX9A2cfL40N0f2NzxYJKzMQW5-V0pleQ9-Z1KxM9tYmHK2eNA4ZpCsVXk/w640-h334/AndroidC%252B%252BTestFlow.jpg" width="640" /></a></div></div><div><br /></div><div>For the specific settings, please refer to my <a href="https://github.com/daoshengmu/vulkan-android/tree/master/unittests" target="_blank">unittest subfolder</a>. Add some simple gtest code as below.</div>
<pre><code class="cpp">TEST(TestVector3D, basic) {
  Vector3Df a(1, 0, 0);
  float res = a.DotProduct(Vector3Df(1, 0, 0));
  ASSERT_EQ(res, 1.0f);
}
</code></pre>
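The <i>runner</i> binary itself only needs the stock Google Test entry point. A minimal sketch (the actual project drives this through JNI, so treat it as illustrative):
<pre><code class="cpp">
#include <gtest/gtest.h>

int main(int argc, char** argv) {
  // Collects and runs every TEST() linked into the runner binary.
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
</code></pre>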
<div>GTestRunner is responsible for talking to JNI and collecting tests from the gtest suite. It executes commands via the CLI against the <i>runner</i> that was built with the googletest library.</div><div><br /></div><div>For launching an Android emulator in Circle-CI, it can only be run on a macOS instance and requires some complicated setup (I got another idea using GitHub Actions while writing this post; I will cover it in another post).</div><div><br /></div><div>Finally, set up the configuration for the Circle-CI workflow.</div>
<pre><code class="yaml">
version: 2.1
orbs:
  android: circleci/android@0.2.1

jobs:
  build:
    executor: android/android
    steps:
    - checkout
    - run: git submodule update --init --recursive
    - run:
        name: Build unittests
        working_directory: ~/project/unittests
        command: ./gradlew build
</code></pre>As you can see, the workflow checks out my repo, syncs the submodules, and builds my project.<br /><div><br /></div><div>That is all I learned about C++ unit testing & CI integration in GitHub; I wrote another <a href="https://blog.dsmu.me/2021/01/c-unit-testing-ci-integration-in-github_17.html" target="_blank">post</a> describing how to set up and run tests on an Android emulator in GitHub Actions.</div><div><br /></div><div>[1] AndroidGTestRunner, <a href="https://github.com/paleozogt/AndroidGTestRunner" target="_blank">https://github.com/paleozogt/AndroidGTestRunner</a></div><div>[2] android-studio-googletest, <a href="https://github.com/Mr-Goldberg/android-studio-googletest" target="_blank">https://github.com/Mr-Goldberg/android-studio-googletest</a></div></div></div></div><h1>Experimental integration Glean with Unity applications</h1><div><p>You might notice <a href="https://blog.mozvr.com/introducing-firefox-reality-pc-preview/">Firefox Reality PC Preview</a> has been released in HTC's Viveport store. It is a VR web browser that provides 2D overlay browsing alongside immersive content and supports web-based immersive experiences for PC-connected VR headsets. In order to easily deploy our product into the Viveport store, we take advantage of Unity to make our application launcher. Because of that, we face another challenge: how to use Mozilla's existing telemetry system.</p><p>As we know, the <a href="https://mozilla.github.io/glean/book/index.html">Glean SDK</a> provides language bindings for different programming language requirements, including Kotlin, Swift, and Python. However, for applications that use Unity as their development toolkit, there are no existing bindings available to help us. Unity allows users to embed Python scripts in a Unity project through a Python interpreter; however, because Unity's technology is based on the Mono framework, that is not the same as the familiar Python runtime for running Python scripts. So, the alternative we needed to find was a way to run Python on the .NET Framework, or more precisely on the Mono framework. Among the possible approaches for running Python scripts in the main process, IronPython is the only solution; however, it is only available for Python 2.7, and the Glean SDK Python language binding needs Python 3.6. Hence, we started our plan to develop a new Glean binding for C#.</p></div><div><br /></div><div><h2 style="text-align: left;">Getting started</h2>
</div><div>The Glean team and I started discussing the requirements for running Glean in Unity in order to implement a C# binding for Glean. We followed a minimum viable product strategy and defined very simple goals to evaluate whether the plan was workable. Technically, we only needed to send built-in and <a href="https://mozilla.github.io/glean/book/user/pings/custom.html">custom pings</a> following the current Glean Python binding mechanism, and we were able to use just <a href="https://mozilla.github.io/glean/book/user/metrics/string.html">StringMetricType</a> as the first metric in this experimental Unity project. Besides, we also noticed that the .NET Framework has various versions, and it was essential to consider compatibility with the Mono framework in Unity. Therefore, we decided the Glean C# binding would be based on .NET Standard 2.0. Thanks to these efficient MVP goals and the Glean team's rapid production, we got the first alpha version of the C# binding in a very short time. I really appreciate Alessio, Mike, Travis, and the other members of the Glean team; their hard work made it happen so quickly, and they were patient with my concerns and requirements.</div><div><br /></div><div><h2 style="text-align: left;">How it works</h2></div><div>To begin, it is worth explaining how to integrate Glean into a Hello World C# application. We can choose either to import the C# binding source code from <a href="https://github.com/mozilla/glean/tree/main/glean-core/csharp">glean-core/csharp</a> or to build <i>csharp.sln</i> from <a href="https://github.com/mozilla/glean/tree/main/glean-core/csharp">the Glean repository</a> and then copy the generated <i>Glean.dll</i> into your own project. Then, in your C# project's Dependencies setting, add this <i>Glean.dll</i>. Aside from this, we also need to copy <i>glean_ffi.dll</i>, which exists in the output folder after pulling Glean and running `cargo build`. Lastly, add the <a href="https://github.com/serilog/serilog">Serilog</a> library to your project via NuGet. We can install it through the NuGet Package Manager Console as below:</div><div><br /><script src="https://gist.github.com/daoshengmu/f7d1415940396f029d8c94bd6b75e0c6.js"></script></div>
<h3 id="initial-steps-defining-pings-and-metrics" style="text-align: left;">Defining pings and metrics</h3><p style="text-align: left;">Before we start to write our program, let's design our metrics first. Based on the current capabilities of the Glean SDK C# language binding, we can create a custom ping and set a string-type metric on it. Then, at the end of the program, we will submit this custom ping, and the string metric will be collected and uploaded to our data server. The ping and metric description files are as below:</p>
<script src="https://gist.github.com/daoshengmu/7bf95eadb048997f5289a510eeaaa705.js"></script>
<script src="https://gist.github.com/daoshengmu/710ff15bbe8da5393ee9d5c41fdb9f38.js"></script>
<h3 style="text-align: left;">Testing and verifying it</h3><p style="text-align: left;">Now, it is time for us to write our HelloWorld program.</p>
<div><script src="https://gist.github.com/daoshengmu/4ac9029ae7d1b04f51adac345b296bdd.js"></script><p>As we can see, the code above is very straightforward. Although the Glean parser is not yet supported in the C# binding to help us generate metric classes, we are still able to create these metrics in our program manually. One thing you might notice is <code><i>Thread.Sleep(1000);</i></code> at the bottom of <i>main()</i>. It pauses the current thread for 1000 milliseconds. Because this HelloWorld program is a console application, it quits once there are no other operations; we need this line to make sure the Glean ping uploader has enough time to finish uploading before the main process quits. We could also replace it with <i><code>Console.ReadKey();</code></i> to let users quit the application themselves instead of closing it directly. The better solution would be to launch a separate process for the Glean uploader; this is a current limitation of our C# binding implementation, and we will work on it in the future. For the extended code, you can go to <a href="https://github.com/mozilla/glean/tree/main/samples/csharp">glean/samples/csharp</a> to see the other pieces; we will use the same code in the Unity section.</p>
<p>After that, we would be interested to see whether our pings are uploaded to the data server successfully. We can set the Glean debug view environment variable by entering <code><em>set GLEAN_DEBUG_VIEW_TAG=samplepings</em></code> in our command-line tool; <code><i>samplepings</i></code> can be anything you would like to use as a tag for your pings. Then, run your executable in the same command-line window. Finally, you can see the result in the <a href="https://mozilla.github.io/glean/book/user/debugging/debug-ping-view.html">Glean Debug Ping Viewer</a>, as below:</p></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisJ1imHA01qlxAqlhyphenhyphenOXgEojgPZ8EYarL2UduEGvUQWAbtDP8jzx2S-q0OmYC9XqqRVg8FCAEbmyIzuFla2sSZnEGMskbI6pVytah5E_6Vk5bOQtiW9EDtEsXxi4Xmt7c8rBPy0r5z8YMz/s476/glean_ping.PNG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="385" data-original-width="476" height="405" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisJ1imHA01qlxAqlhyphenhyphenOXgEojgPZ8EYarL2UduEGvUQWAbtDP8jzx2S-q0OmYC9XqqRVg8FCAEbmyIzuFla2sSZnEGMskbI6pVytah5E_6Vk5bOQtiW9EDtEsXxi4Xmt7c8rBPy0r5z8YMz/w500-h405/glean_ping.PNG" width="500" /></a></div><div>It looks great now. But you might notice the <em>os_version</em> looks wrong if you are running on a Windows 10 platform. This is because we use <code><i>Environment.OSVersion</i></code> to get the OS version, and it is not reliable. For the follow-up discussion, please see <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1653897">Bug 1653897</a>. The solution is to <a href="https://stackoverflow.com/questions/6050478/how-do-i-create-edit-a-manifest-file">add a manifest file</a> and then uncomment the Windows 10 compatibility lines.</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyTw3LwN93kcSIT7ani_jYLtNIh4DUz88cyR9SYdDOa2ZKKmtPFw16QpMEaJ_b-fGar1E0F6jx4UFXsftPuoazWfgItrvhtf2TYEeypPx9elrDBJywLasU0LZXpdlH5zIVEoG7IAK5MmIJ/s680/manifest.PNG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="442" data-original-width="680" height="325" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyTw3LwN93kcSIT7ani_jYLtNIh4DUz88cyR9SYdDOa2ZKKmtPFw16QpMEaJ_b-fGar1E0F6jx4UFXsftPuoazWfgItrvhtf2TYEeypPx9elrDBJywLasU0LZXpdlH5zIVEoG7IAK5MmIJ/w500-h325/manifest.PNG" width="500" /></a></div><div><h2 style="text-align: left;">Your first Glean program in Unity</h2></div><div>Now, I think we have explained enough of the general C# program parts. Let's move on to the main point of this article: using Glean in Unity. It is a little bit different, but not too complicated. First of all, open the C# solution that your Unity project creates for you. Because Unity needs more dependencies when using Glean, we let the Glean NuGet package install them for us in advance. As the <a href="https://www.nuget.org/packages/Mozilla.Telemetry.Glean/">Glean NuGet package</a> page mentions, install it with the command below:<br /></div>
<script src="https://gist.github.com/daoshengmu/ab4cb91b0c7288f26994d394b0c64e8e.js"></script>
<div>Then, check your <i>Packages</i> folder; you can see some packages have already been downloaded into it.</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLJ6VLNVl-_6zp1zd7w4sHECrjvzzddbCaSHEb6x5hhuT_eKQbKe32e1mqO3j6gIFOGwJB7OQsep_BuowydAwQX-PTS6h5_pObc1VN9mWp5F-4Qeyq0BjIvazlJzo6OUq3rPJTvvBdu-ou/s639/packages.PNG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="285" data-original-width="639" height="224" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLJ6VLNVl-_6zp1zd7w4sHECrjvzzddbCaSHEb6x5hhuT_eKQbKe32e1mqO3j6gIFOGwJB7OQsep_BuowydAwQX-PTS6h5_pObc1VN9mWp5F-4Qeyq0BjIvazlJzo6OUq3rPJTvvBdu-ou/w500-h224/packages.PNG" width="500" /></a></div><div>However, the Unity project builder can't recognize this <i>Packages</i> folder, so we need to move the libraries from these packages into the Unity <i>Assets\Scripts\Plugins</i> folder. You might notice these libraries target different runtime versions. The basic idea is that Unity is .NET Standard 2.0 compatible, so we can just grab them from the <i>lib\netstandard2.0</i> folders. In addition, Unity allows users to distribute their builds to x86 and x86_64 platforms. Therefore, when moving the Glean FFI library into this <i>Plugins</i> folder, we have to put the x86 and x86_64 builds of <i>glean_ffi.dll</i> into <i>Assets\Scripts\Plugins\x86</i> and <i>Assets\Scripts\Plugins\x86_64</i> respectively. Fortunately, in Unity, we don't need to worry about the Windows compatibility manifest; Unity already handles it for us.</div><div><br /></div><div>Now, we can copy the same code into the main C# script.</div>
<script src="https://gist.github.com/daoshengmu/5c5f77135cb5a6f4f1d66701a748bcab.js"></script>
<div>We might notice we didn’t add <code><i>Thread.Sleep(1000);</i></code> at the end of the <code><i>start()</i></code>
function. It is because in general, Unity is a Window application, the
main approach to exit an Unity application is by closing its window.
Hence, Glean ping uploader has enough time to finish its task. However,
if your application needs to use a Unity specific API,<code><a href="https://docs.unity3d.com/ScriptReference/Application.Quit.html"> <i>Application.Quit()</i></a></code>,
to close the program. Please make sure to make the main thread wait for
a couple of seconds in order to preserve time for Glean ping uploader
to finish its jobs. For more details about this example code, please go
to<a href="https://github.com/daoshengmu/GleanUnity"> GleanUnity</a> to see its source code.</div><div><br /></div><div style="text-align: left;"><h2 style="text-align: left;">Next steps</h2></div><div>C# binding is still at its initial stage. Although we already support<a href="https://github.com/mozilla/glean/tree/main/glean-core/csharp/Glean/Metrics"> a few metric types</a>,
we still lack lots of metric types and features compared to the other bindings. We also look forward to providing a better solution for <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1651340">off-main-process ping upload</a>, to avoid needing the main thread to wait for the Glean ping uploader before the main process can quit. We will keep working on it!</div><div><br /></div><div><strong>This is a syndicated copy of the original post at </strong><b><a href="https://blog.mozilla.org/data/2020/08/06/experimental-integration-glean-with-unity-applications/">https://blog.mozilla.org/data/2020/08/06/experimental-integration-glean-with-unity-applications/</a></b></div><h1>How to train custom objects in YOLOv2</h1>
This article is based on [1]. We want a way to train the object tags we are interested in. Darknet has a Windows version ported by AlexeyAB [2]. First of all, we need to build <i>darknet.exe</i> from AlexeyAB's repo to help us train and test data. Go to build/darknet, open darknet.sln with VS 2015, and configure it for the x64 solution platform. Rebuild the solution! It should succeed in generating darknet.exe. Then, we need to label the objects in the images used as training data. I use the BBox label tool [3] to label objects' coordinates in the images (python ./main.py). This tool's image root folder is ./Images; we can create a sub-folder (<i>002</i>) and enter <i>002</i> to let the tool load all *.jpg files from there. We mark labels in this tool to record the regions where the objects are. The outputs are image-space coordinates, stored at <i><span style="color: orange;">./Labels/002</span></i>.
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHMw0S8Gqv60KM5VAACSju6lokpXcjJk-H4hRhN1bOaGp4mNaEuB1rz3rSqdkqQeU0Wu0KOGSWrpjoylT4f2gAv1y1qxe3dH23sLip97FN1nAxeDQQXGHTQU3Z-TqTGpquUWl7D-4KmqT8/s1600/bbtool.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="509" data-original-width="666" height="244" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHMw0S8Gqv60KM5VAACSju6lokpXcjJk-H4hRhN1bOaGp4mNaEuB1rz3rSqdkqQeU0Wu0KOGSWrpjoylT4f2gAv1y1qxe3dH23sLip97FN1nAxeDQQXGHTQU3Z-TqTGpquUWl7D-4KmqT8/s320/bbtool.JPG" width="320" /></a></div>
However, this coordinate format is different from YOLOv2's; YOLOv2 needs coordinates relative to the image dimensions. The BBox label output is<br />
<span style="color: orange;"><span style="font-size: 15px;"><i>[obj number]<br />[bounding box left X] [bounding box top Y] [bounding box right X] [bounding box bottom Y]</i>, and YOLOv2 wants</span></span><br />
<span style="color: orange;"><span style="font-size: 15px;"><i>[category number] [object center in X] [object center in Y] [object width in X] [object width in Y]</i>.</span></span><br />
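Concretely, given a BBox box (x1, y1, x2, y2) in pixels on an image of size (W, H), the YOLO entry is the box center and size divided by the image dimensions. A minimal sketch of the math (names are illustrative; the convert.py script mentioned next does the same thing in Python):
<pre><code class="cpp">
// Convert a BBox-Label-Tool box (pixel corners) into a YOLO entry
// (center and size relative to the image dimensions).
struct YoloBox { float cx, cy, w, h; };

YoloBox ToYolo(float x1, float y1, float x2, float y2, float imgW, float imgH) {
  YoloBox box;
  box.cx = (x1 + x2) / 2.0f / imgW; // object center in X, relative.
  box.cy = (y1 + y2) / 2.0f / imgH; // object center in Y, relative.
  box.w  = (x2 - x1) / imgW;        // object width, relative.
  box.h  = (y2 - y1) / imgH;        // object height, relative.
  return box;
}
</code></pre>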
<br />
<span style="font-size: 15px;">Therefore, we need a converter to do this conversion. We can get the converter from this script [4]; change lines 34 and 35 for the input and output paths. Then run <i>python ./convert.py</i>. Next, we have to move the output *.txt files and the *.jpg files into the same folder. Then, we edit <i>train.txt</i> and <i>test.txt</i> to describe which images are our training set and which serve as the test set.</span>
<br />
<span style="font-size: 15px;"><span style="font-size: 15px;">In train.txt </span></span><br />
data/002/images.jpg<br />
data/002/images1.jpg<br />
data/002/images2.jpg<br />
data/002/images3.jpg<br />
<br />
<span style="font-size: 15px;"><span style="font-size: 15px;">In test.txt</span></span><br />
<span style="font-size: 15px;"><span style="font-size: 15px;">data/002/large.jpg<br />data/002/maxresdefault1.jpg<br />data/002/testimage2.jpg</span></span><br />
<span style="font-size: 15px;"><span style="font-size: 15px;"></span></span><br />
<span style="font-size: 15px;">Then, create the YOLOv2 configuration files. In <i>cfg/obj.data</i>, edit it to define where the train and test files are.</span><br />
<span style="font-size: 15px;"><span style="font-size: 15px;">classes= 1 <br />train = train.txt <br />valid = test.txt <br />names = cfg/obj.names <br />backup = backup/</span></span><br />
<span style="font-size: 15px;"><br />In <span style="color: orange;"><i>cfg/obj.names</i></span>, add the label names for the training classes, like<br /><i>Subaru</i></span><br />
<span style="font-size: 15px;"><span style="font-size: 15px;"><br /></span></span>
<span style="font-size: 15px;">For the final file, we duplicate the yolo-voc.cfg file as <span style="color: orange;">yolo-obj.cfg</span>. Set <i>batch=2</i> (the reference tutorial [1] uses <i>batch=64</i>, i.e. 64 images per training step) and <i>subdivisions=1</i> to fit your GPU VRAM. Set <i>classes=1</i>, the number of categories we want to detect. In line 237, set filters=(classes + 5)*5; in our case <i>filters=30</i>.</span>
<span style="font-size: 15px;"><span style="font-size: 15px;"><i><br /></i></span></span>
<u><span style="font-size: 15px;"><span style="font-size: 15px;"><b>Training</b></span></span></u><br />
<span style="font-size: 15px;">To start training, YOLOv2 requires a set of pre-trained convolutional weights; Darknet provides a set pre-trained on <span style="color: orange;"><a href="http://www.image-net.org/">Imagenet</a></span>. This <i>conv.23</i> file can be <span style="color: orange;"><a href="https://pjreddie.com/media/files/darknet19_448.conv.23">downloaded</a></span> (76 MB) from the official YOLOv2 website.</span><br />
<br />
Type <span style="font-size: 15px;"><i><span style="color: orange;">darknet.exe detector train cfg/obj.data cfg/yolo-obj.cfg darknet19_448.conv.23</span></i> in the terminal to start training.</span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNlYy7Il4qRxtfV8YWdjQYiFrmY6u6szjygCL7BfLQYDHOkHn-IqH9EQwlD5PpejDwt8WMxlRUCznB0_KcUPB4P2brUFn8Uz8dlX-qVEEdf6nsbtmqclIBJxXIiurcXIY1YsSf4yEIL6nK/s1600/run1.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="590" data-original-width="993" height="190" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNlYy7Il4qRxtfV8YWdjQYiFrmY6u6szjygCL7BfLQYDHOkHn-IqH9EQwlD5PpejDwt8WMxlRUCznB0_KcUPB4P2brUFn8Uz8dlX-qVEEdf6nsbtmqclIBJxXIiurcXIY1YsSf4yEIL6nK/s320/run1.JPG" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRG7Hnw3TaRQS1MmiyZMVrJJZ7m_g0LxEq-3qOw5UDy4ZGBp9FRTseYbRYWCHvU2iNDIdVqdPXGzBJJIlJzRul9DVmR9WETR_K7ux7MVcyHMH7wFvGUfzGgQSBLMOsfA-QCzra-ETn5h1B/s1600/run2.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="991" data-original-width="790" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRG7Hnw3TaRQS1MmiyZMVrJJZ7m_g0LxEq-3qOw5UDy4ZGBp9FRTseYbRYWCHvU2iNDIdVqdPXGzBJJIlJzRul9DVmR9WETR_K7ux7MVcyHMH7wFvGUfzGgQSBLMOsfA-QCzra-ETn5h1B/s320/run2.JPG" width="255" /></a></div>
<span style="font-size: 15px;"><span style="font-size: 15px;"><u><b>Testing</b></u></span></span><span style="font-size: 15px;"><span style="font-size: 15px;"> </span></span><br />
<span style="font-size: 15px;">After training, we get the trained weights in the <i>backup</i> folder. Just type <i><span style="color: orange;">darknet.exe detector test cfg/obj.data cfg/yolo-obj.cfg backup\yolo-obj_2000.weights data/testimage.jpg</span></i> to verify the result.</span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoYmp8HCOiOToLUb2VHZSXUxmXhphJgbp6AWjpIFQ4k5zldqf9ZEri4bNSN5H8kRoKDCHrUfptRgWJocS3jUB8Bzi5QTVelHZp0TxbncCKdWMhtyz2KN_cL57duVnyH6juCkz8LHgRwUsI/s1600/result.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="655" data-original-width="853" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoYmp8HCOiOToLUb2VHZSXUxmXhphJgbp6AWjpIFQ4k5zldqf9ZEri4bNSN5H8kRoKDCHrUfptRgWJocS3jUB8Bzi5QTVelHZp0TxbncCKdWMhtyz2KN_cL57duVnyH6juCkz8LHgRwUsI/s320/result.JPG" width="320" /></a></div>
[1] <a href="https://timebutt.github.io/static/how-to-train-yolov2-to-detect-custom-objects/">https://timebutt.github.io/static/how-to-train-yolov2-to-detect-custom-objects/</a><br />
[2] <a href="https://github.com/AlexeyAB/darknet">https://github.com/AlexeyAB/darknet</a><br />
[3] <a href="https://github.com/puzzledqs/BBox-Label-Tool">https://github.com/puzzledqs/BBox-Label-Tool</a><br />
[4] <a href="https://github.com/Guanghan/darknet/blob/master/scripts/convert.py">https://github.com/Guanghan/darknet/blob/master/scripts/convert.py</a> <br />
<br />daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-47212598059633634962018-04-27T00:27:00.005-07:002021-01-15T18:18:30.880-08:00Fast subsurface scattering<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiYsyn0SlLY-Z-Fock3v_yw5UC0hB7A_BQcNd-8udmogv6CJNCL3svF3D3jN8BDxoixPesq1nBNtouN7oSWF9cOJtZHv9IAaPcuQRbfIIboVV5Rv0i8M2_V9pqOKXnhCSFKmG2JqoZ09F0/s1600/Screen+Shot+2018-05-04+at+10.57.13+AM.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="842" data-original-width="850" height="316" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiYsyn0SlLY-Z-Fock3v_yw5UC0hB7A_BQcNd-8udmogv6CJNCL3svF3D3jN8BDxoixPesq1nBNtouN7oSWF9cOJtZHv9IAaPcuQRbfIIboVV5Rv0i8M2_V9pqOKXnhCSFKmG2JqoZ09F0/s320/Screen+Shot+2018-05-04+at+10.57.13+AM.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Fig.1 - Fast Subsurface scattering of Stanford Bunny</div>
<br />
This technique is based on the <a href="https://github.com/mrdoob/three.js/pull/13511">three.js</a> implementation. It provides a cheap, fast, and convincing approximation of light transport in translucent surfaces. It follows the approach shared at <a href="https://colinbarrebrisebois.com/2011/03/07/gdc-2011-approximating-translucency-for-a-fast-cheap-and-convincing-subsurface-scattering-look/">GDC 2011</a> [1], which is also used by the Frostbite 2 and Unity engines [1][2][3]. Traditionally, when a ray intersects a surface, we need to calculate how it bounces after the intersection. Materials can be divided roughly into three types. <span style="color: orange;">Opaque</span>: light can't go through the geometry, and the ray is bounced back. <span style="color: orange;">Transparent</span>: the ray passes almost completely through the surface, probably losing a little energy on the way out. <span style="color: orange;">Translucent</span>: after entering the surface, the ray is bounced around internally, as in Fig. 2 below.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNoPs-tDedjYFBH13GwMydnGTKxfsgsmXEQXD0aQYlB9LSZU3GAmGGQTp-dn_5_7aiTHqd8vW_04fEk1ArnpEBbh59H2b1yxXbtUbLi_75zXEmwKK1NRB1xwU5lIltHF1MURTT1dWYuuAn/s1600/Screen+Shot+2018-04-26+at+2.39.06+PM.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1022" data-original-width="778" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNoPs-tDedjYFBH13GwMydnGTKxfsgsmXEQXD0aQYlB9LSZU3GAmGGQTp-dn_5_7aiTHqd8vW_04fEk1ArnpEBbh59H2b1yxXbtUbLi_75zXEmwKK1NRB1xwU5lIltHF1MURTT1dWYuuAn/s320/Screen+Shot+2018-04-26+at+2.39.06+PM.jpg" width="243" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Fig.2 - BSSRDF [1]</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
In the case of translucency, there are several subsurface scattering approaches we could use. While light travels inside the shape, the diffuse contribution has to account for the varying thickness of the object. As Fig. 3 below shows, when light leaves the surface it diffuses and is attenuated according to the thickness of the shape.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwo8AIjZz2y6kph8p3Zca-EYyA175dy6PYAazv8byAFAdMc0AbnO3aGJIVqGn7ivQshl3n6z8Rwal0XNNynsxfC9MCWRaQ3xHiQKje-oxXK7ug2deIoa5J11Pfo3leWMGVWd2ViTVUdKeQ/s1600/Screen+Shot+2018-04-26+at+2.56.32+PM.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1119" data-original-width="1600" height="278" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwo8AIjZz2y6kph8p3Zca-EYyA175dy6PYAazv8byAFAdMc0AbnO3aGJIVqGn7ivQshl3n6z8Rwal0XNNynsxfC9MCWRaQ3xHiQKje-oxXK7ug2deIoa5J11Pfo3leWMGVWd2ViTVUdKeQ/s400/Screen+Shot+2018-04-26+at+2.56.32+PM.jpg" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Fig.3 - Translucent lighting [1]</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Thus, we need a way to determine the thickness inside the surface. The most direct way is to bake the local thickness into a thickness map, computed much like ambient occlusion. A thickness map such as Fig. 4 below is easy to generate from DCC tools.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgawEgdx-3Ea1qA3k7LYpvGG2KTMJniqX8wF63O6vCXEVOILcHUREmAGdPDXgnWSQkR6e7TxXq8roJZXz41jevm7pBYPQbCvc3SVccJlfAVs4Fhr4saxS_4eIVRJRHyGB6TR__vHa6RoR4p/s1600/bunny_thickness.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="512" data-original-width="512" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgawEgdx-3Ea1qA3k7LYpvGG2KTMJniqX8wF63O6vCXEVOILcHUREmAGdPDXgnWSQkR6e7TxXq8roJZXz41jevm7pBYPQbCvc3SVccJlfAVs4Fhr4saxS_4eIVRJRHyGB6TR__vHa6RoR4p/s320/bunny_thickness.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Fig.4 - Local thickness map of Stanford Bunny</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Then, we can start to implement our approximate subsurface scattering approach.</div>
<pre><code class="glsl">
void Subsurface_Scattering(const in IncidentLight directLight, const in vec2 uv, const in vec3 geometryViewDir, const in vec3 geometryNormal, inout vec3 directDiffuse) {
vec3 thickness = thicknessColor * texture2D(thicknessMap, uv).r;
vec3 scatteringHalf = normalize(directLight.direction + (geometryNormal * thicknessDistortion));
float scatteringDot = pow(saturate(dot(geometryViewDir, -scatteringHalf)), thicknessPower) * thicknessScale;
vec3 scatteringIllu = (scatteringDot + thicknessAmbient) * thickness;
directDiffuse += scatteringIllu * thicknessAttenuation * directLight.color;
}
</code><pre><div class="separator" style="clear: both; text-align: left;">The tricky part of the exit light is its direction is opposite to the incident light. Therefore, we get the light attenuation with <i><span style="color: orange;">dot(geometryViewDir, -scatteringHalf)</span></i> as its attenuation. Besides, We have several parameters that can be discussed detailed.</div>
<span style="color: orange; font-style: italic;">thicknessAmbient</span>
- Ambient light value
- Visible from all angles even at the back side of surfaces<br />
<span style="color: orange;"><i>thicknessPower</i></span>
- Power value of direct translucency
- View independent<br />
<i><span style="color: orange;">thicknessDistortion</span></i>
- Subsurface distortion
- Shift the surface normal
- View <span style="font-family: inherit;">dependent</span><br />
<span style="color: orange;"><i>thicknessMap</i></span>
- Pre-computed local thickness map
- Attenuates the back diffuse color with the local thickness map
- Can be utilized for both of direct and indirect lights
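Putting it together, a fragment shader calls the function once per direct light, after accumulating the regular Lambert diffuse. A minimal sketch (the uniform declarations, <i>vUv</i>, <i>directLight</i>, <i>geometryViewDir</i>, and <i>geometryNormal</i> are assumptions about the surrounding shader; <i>saturate()</i> is a three.js shader macro for <i>clamp(x, 0.0, 1.0)</i>):<br />
<pre><code class="glsl">
// assumed uniforms matching the parameters discussed above
uniform float thicknessDistortion;
uniform float thicknessAmbient;
uniform float thicknessPower;
uniform float thicknessScale;
uniform float thicknessAttenuation;
uniform vec3 thicknessColor;
uniform sampler2D thicknessMap;

void main() {
  vec3 directDiffuse = vec3(0.0);
  // ... accumulate the regular Lambert diffuse into directDiffuse first ...
  Subsurface_Scattering(directLight, vUv, geometryViewDir, geometryNormal, directDiffuse);
  gl_FragColor = vec4(directDiffuse, 1.0);
}
</code></pre>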
Because the local thickness map is precomputed, it doesn't work for animated/morphing objects or for concave objects. The alternative is to render a real-time ambient occlusion map with inverted normals, or to compute a real-time thickness map.<br /><br />
<div><b><span style="color: orange;">Reference:</span></b></div><div>
[1] GDC 2011 – Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look, <a href="https://colinbarrebrisebois.com/2011/03/07/gdc-2011-approximating-translucency-for-a-fast-cheap-and-convincing-subsurface-scattering-look/">https://colinbarrebrisebois.com/2011/03/07/gdc-2011-approximating-translucency-for-a-fast-cheap-and-convincing-subsurface-scattering-look/</a></div><div>
[2] Fast Subsurface Scattering in Unity Part 1, <a href="https://www.alanzucconi.com/2017/08/30/fast-subsurface-scattering-1/">https://www.alanzucconi.com/2017/08/30/fast-subsurface-scattering-1/</a></div><div>
[3] Fast Subsurface Scattering in Unity Part 2, <a href="https://www.alanzucconi.com/2017/08/30/fast-subsurface-scattering-2/">https://www.alanzucconi.com/2017/08/30/fast-subsurface-scattering-2/</a></div><div>
</div>daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-27064366247063822542018-02-22T01:05:00.004-08:002021-01-15T16:37:59.992-08:00Physically-Based Rendering in WebGLAccording to the image below from <a href="https://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf">Physically Based Shading At Disney</a>, the left is real chrome, the middle is the PBR approach, and the right is Blinn-Phong. PBR is clearly closer to the real material, and the difference lies mainly in the specular lighting.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipP4z-TEgy6XCRGqewxFl0U2TYG6vw48NQ2ONOskp2TDAkz-LMh6MLRB9CfKYPCnXdzoprIhBuH8UqT1usQ5_biRildURcSJLrgs2L4Z-iZ7FUSzy3fyMdWJxxHa_dklTh3CUnpqPoMJv3/s1600/Screen+Shot+2018-02-22+at+4.32.12+PM.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="414" data-original-width="1244" height="132" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipP4z-TEgy6XCRGqewxFl0U2TYG6vw48NQ2ONOskp2TDAkz-LMh6MLRB9CfKYPCnXdzoprIhBuH8UqT1usQ5_biRildURcSJLrgs2L4Z-iZ7FUSzy3fyMdWJxxHa_dklTh3CUnpqPoMJv3/s400/Screen+Shot+2018-02-22+at+4.32.12+PM.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<b><br /></b>
<b>Blinn-Phong</b><br />
<br />
The most important part of the Blinn-Phong specular term is that it uses the half-vector, dot(normal, halfDir), instead of Phong's reflection term, dot(reflectDir, viewDir), which avoids the hard specular shape of the traditional Phong lighting model.<br />
<br />
<pre><code class="glsl">
vec3 BRDF_Specular_BlinnPhong( vec3 lightDir, vec3 viewDir, vec3 normal, vec3 specularColor, float shininess ) {
vec3 halfDir = normalize( lightDir + viewDir );
float dotNH = saturate( dot( normal, halfDir ) );
float dotLH = saturate( dot( lightDir, halfDir ) );
vec3 F = F_Schlick( specularColor, dotLH );
float G = G_BlinnPhong_Implicit( );
float D = D_BlinnPhong( shininess, dotNH );
return F * ( G * D );
}
</code></pre>
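<br />
The helpers F_Schlick, G_BlinnPhong_Implicit, and D_BlinnPhong come from three.js's shader chunks. A minimal sketch of what they compute (three.js actually replaces the pow term in F_Schlick with an exp2-based approximation):<br />
<br />
<pre><code class="glsl">
#define RECIPROCAL_PI 0.31830988618

// Schlick's Fresnel approximation
vec3 F_Schlick( const in vec3 specularColor, const in float dotLH ) {
    float fresnel = pow( 1.0 - dotLH, 5.0 );
    return ( 1.0 - specularColor ) * fresnel + specularColor;
}

// implicit geometry term: G / ( 4 * dotNL * dotNV ) collapses to a constant
float G_BlinnPhong_Implicit( ) {
    return 0.25;
}

// energy-normalized Blinn-Phong distribution
float D_BlinnPhong( const in float shininess, const in float dotNH ) {
    return RECIPROCAL_PI * ( shininess * 0.5 + 1.0 ) * pow( dotNH, shininess );
}
</code></pre>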
<br />
<b>Physically-Based rendering</b><br />
<b><br /></b>
Regarding the GGX lighting model, the <a href="http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf">UE4 shading presentation by Brian Karis</a> takes the Cook-Torrance separation of terms as three factors:<br />
<br />
D) GGX Distribution<br />
F) Schlick-Fresnel<br />
V) Schlick approximation of Smith solved with GGX<br />
<br />
<pre><code class="glsl">
float G1V(float dotNV, float k) {
return 1.0 / (dotNV * (1.0 - k) + k);
}
float BRDF_Specular_GGX(vec3 N, vec3 V, vec3 L, float roughness, float f0) {
float alpha = roughness * roughness;
vec3 H = normalize(V + L); // half-vector; must be vec3, not float
float dotNL = saturate(dot(N, L));
float dotNV = saturate(dot(N, V));
float dotNH = saturate(dot(N, H));
float dotLH = saturate(dot(L, H));
float F, D, vis;
// D
float alphaSqr = alpha * alpha;
float pi = 3.14159;
float denom = dotNH * dotNH * (alphaSqr - 1.0) + 1.0;
D = alphaSqr / (pi * denom * denom);
// F
float dotLH5 = pow(1.0 - dotLH, 5.0); // GLSL ES needs a float exponent
F = f0 + (1.0 - f0) * (dotLH5);
// V
float k = alpha / 2.0;
vis = G1V(dotNL, k) * G1V(dotNV, k); // Schlick-Smith visibility; the second term uses dotNV
float specular = dotNL * D * F * vis;
return specular;
}
</code></pre>
<br />
Unreal Engine utilizes an approximation from <a href="https://www.unrealengine.com/en-US/blog/physically-based-shading-on-mobile">Physically Based Shading on Mobile</a>. The environment specular term is shortened for performance on mobile platforms. (<a href="http://threejs.org/">three.js</a>'s Standard material adopts this approach as well.)<br />
<br />
<pre><code class="glsl">
half3 EnvBRDFApprox( half3 SpecularColor, half Roughness,half NoV )
{
const half4 c0 = { -1, -0.0275, -0.572, 0.022 };
const half4 c1 = { 1, 0.0425, 1.04, -0.04 };
half4 r = Roughness * c0 + c1;
half a004 = min( r.x * r.x, exp2( -9.28 * NoV ) ) * r.x + r.y;
half2 AB = half2( -1.04, 1.04 ) * a004 + r.zw;
return SpecularColor * AB.x + AB.y;
}
</code></pre>
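<br />
In GLSL terms (reading half as mediump float), this approximation would be applied to an environment probe sample roughly as follows; the variable names here are assumptions for illustration, and saturate() is again a clamp macro:<br />
<br />
<pre><code class="glsl">
vec3 reflectVec = reflect( -viewDir, normal );
float NoV = saturate( dot( normal, viewDir ) );
// modulate the cubemap sample by the approximated environment BRDF
vec3 indirectSpecular = textureCube( envMap, reflectVec ).rgb
                      * EnvBRDFApprox( specularColor, roughness, NoV );
</code></pre>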
<br />
<b>Result:</b><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjp7g7qFs8Ku7Rsaswfft5AmD5OC7dUsZ35rubQ3uy3nj9bzEFA9yaxEHrivp1tryto8gmNuAY49gLBz9jWkVqkwo9tHmlOi2yIhpWPXDaIqbDS-sMmdlCTZoaAOlUOCWD8guk-mp2Fypcw/s1600/Screen+Shot+2018-02-21+at+1.58.35+PM.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="937" data-original-width="1600" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjp7g7qFs8Ku7Rsaswfft5AmD5OC7dUsZ35rubQ3uy3nj9bzEFA9yaxEHrivp1tryto8gmNuAY49gLBz9jWkVqkwo9tHmlOi2yIhpWPXDaIqbDS-sMmdlCTZoaAOlUOCWD8guk-mp2Fypcw/s400/Screen+Shot+2018-02-21+at+1.58.35+PM.png" width="400" /></a></div>
<br />
<a href="http://dsmu.me/pbr/webgl_materials_pbr.html">http://dsmu.me/pbr/webgl_materials_pbr.html</a><br />
<br />
<b><br /></b>
<b>Reference:</b><br />
[1] GGX Shading Model For Metallic Reflections, <a href="http://www.neilblevins.com/cg_education/ggx/ggx.htm">http://www.neilblevins.com/cg_education/ggx/ggx.htm</a><br />
[2] Optimizing GGX Shaders with dot(L,H), <a href="http://filmicworlds.com/blog/optimizing-ggx-shaders-with-dotlh/">http://filmicworlds.com/blog/optimizing-ggx-shaders-with-dotlh/</a><br />
[3] Physically Based Shading in Call of Duty: Black Ops, <a href="http://blog.selfshadow.com/publications/s2013-shading-course/lazarov/s2013_pbs_black_ops_2_notes.pdf">http://blog.selfshadow.com/publications/s2013-shading-course/lazarov/s2013_pbs_black_ops_2_notes.pdf</a><br />
<br />daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-74043408501355055472017-07-12T15:40:00.002-07:002021-01-15T16:46:15.402-08:00Setup TensorFlow with GPU support on WindowsTensorFlow with GPU support computes much faster than the CPU-only build, but it needs some additional setup, especially for CUDA. First of all, follow the guideline from<a href="https://www.tensorflow.org/install/install_windows"> https://www.tensorflow.org/install/install_windows</a>. TensorFlow on Windows currently only supports Python 3; I suggest using Python 3.5.3 or below. Then, install CUDA 8.0 and download cuDNN v6.0.<br />
<br />
Then, move the files from the cuDNN v6.0 archive you just downloaded to the path where you installed CUDA 8.0, such as "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0", following the mapping below:<br />
<br />
cudnn-8.0-windows10-x64-v6.0\cuda\bin\cudnn64_6.dll -> CUDA\v8.0\bin<br />
cudnn-8.0-windows10-x64-v6.0\cuda\include\cudnn.h -> CUDA\v8.0\include<br />
cudnn-8.0-windows10-x64-v6.0\cuda\lib\x64\cudnn.lib -> CUDA\v8.0\lib\x64<br />
<br />
You don't need to add the cudnn-8.0-windows10-x64-v6.0 folder itself to your %PATH%. Now we can confirm the installation is ready.<br />
<br />
Steps:<br />
1. Create a virtualenv under your working folder:<br />
<span style="font-family: "times" , "times new roman" , serif;"><b>virtualenv --system-site-packages tensorflow</b></span><br />
2. Activate it<br />
<b>tensorflow\Scripts\activate</b><br />
It shows (tensorflow)$<br />
3. Install TensorFlow with GPU support<br />
<span style="font-family: "times" , "times new roman" , serif;"><b>pip3 install --upgrade tensorflow-gpu</b></span><br />
<span style="font-family: "times" , "times new roman" , serif;">4. Import TensorFlow to confirm it is ready</span><br />
<span style="font-family: "times" , "times new roman" , serif;"><b>(tensorflow) %YOUR_PATH%\tensorflow>python<br />Python 3.5.2 [MSC v.1900 64 bit (AMD64)] on win32<br />Type "help", "copyright", "credits" or "license" for more information.<br />>>> import tensorflow<br />>>></b></span><br />
<span style="font-family: "times" , "times new roman" , serif;">If it doesn't show anything, that means it works.</span><br />
<span style="font-family: "times" , "times new roman" , serif;">But, if you see error messages like, <i>No module named '_pywrap_tensorflow_internal'</i>, you can take a look at issue<a href="https://github.com/tensorflow/tensorflow/issues/9469"> 9469</a>, <a href="https://github.com/tensorflow/tensorflow/issues/7705">7705</a>. It should be the </span><i>cudnn </i>version problem or <i>cudnn </i>can't be found. Please follow the method that I mentioned above.<br />
<br />
<span style="font-family: "times" , "times new roman" , serif;"> <b> </b></span><br />
daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-20567328148571365222016-08-19T20:39:00.000-07:002016-08-19T20:39:34.173-07:00Webrender 1.0<iframe src="//www.slideshare.net/slideshow/embed_code/key/16P50BrNwSwfh" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe> <div style="margin-bottom:5px"> <strong> <a href="//www.slideshare.net/ellisonmu/webrender-10" title="Webrender 1.0" target="_blank">Webrender 1.0</a> </strong> from <strong><a target="_blank" href="//www.slideshare.net/ellisonmu">Daosheng Mu</a></strong> </div>
Source code: <a href="https://github.com/servo/webrender">https://github.com/servo/webrender</a>daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-14171098556590518642016-08-17T23:00:00.004-07:002021-01-15T16:49:49.582-08:00AR on the WebBecause of the presence of Pokémon Go, lots of people start to discuss the possibility of AR (augmented reality) on the Web. Thanks for <a href="http://jeromeetienne.github.io/slides/augmentedrealitywiththreejs/">Jerome Etienne's slides</a>, it brings me some idea to make this AR demo.<br />
<br />
First of all, it is based on <a href="http://three.js/">three.js</a> and <a href="https://github.com/jcmellado/js-aruco">js-aruco</a>. three.js is a WebGL framework that helps us construct and load 3D models. js-aruco is a JavaScript port of ArUco, a minimal library for augmented reality applications based on OpenCV. These two projects make it possible to implement a Web AR proof of concept.<br />
<br />
Then, I would like to introduce how to implement this demo. First, we need to use <span class="s1"><i>navigator</i></span><span class="s2"><i>.getUserMedia </i>to get the video stream from our webcam. This function is not supported by all browser vendors; please take a look at the support status.</span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglrY0KGMUBSAE_MY9eUvNB7GfgMGCArKxpwCp1Lj9PllbIBW6MoRyokMNyKFEGgaQmK1d0O8F6PgIDjtN4YO5WXUpXVhaXKi-9RpYArFlevM8WH3O0lynHmhU17Yi2HsCIFRoaBbbx5k29/s1600/Screen+Shot+2016-08-18+at+2.06.14+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="120" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglrY0KGMUBSAE_MY9eUvNB7GfgMGCArKxpwCp1Lj9PllbIBW6MoRyokMNyKFEGgaQmK1d0O8F6PgIDjtN4YO5WXUpXVhaXKi-9RpYArFlevM8WH3O0lynHmhU17Yi2HsCIFRoaBbbx5k29/s400/Screen+Shot+2016-08-18+at+2.06.14+PM.png" width="400" /></a></div>
<div style="text-align: center;">
<span class="s2"><a href="http://caniuse.com/#feat=stream">http://caniuse.com/#feat=stream</a></span></div>
<span class="s2"><br /></span>
<br />
<pre><code class="javascript">
navigator.getUserMedia = ( navigator.getUserMedia ||
navigator.webkitGetUserMedia ||
navigator.mozGetUserMedia ||
navigator.msGetUserMedia);
if (navigator.getUserMedia) {
navigator.getUserMedia( { 'video': true }, gotStream, noStream);
}
</code></pre>
<br />
The above code shows how to get a media stream in JavaScript. In this demo I only need <i>video</i>, and the stream is delivered to the gotStream callback. In gotStream, I hand the stream to my video element, which is displayed on screen, and then call the setupAR module. In setupAR(), I initialize the AR module and set up my model and scene scale. After that, I just wait for new video frames and read the AR detection result from js-aruco in the updateVideoStream() function.<br />
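<br />
As a minimal sketch, the two callbacks might look like this (the video element id is an assumption):<br />
<pre><code class="javascript">
function gotStream( stream ) {
  var video = document.getElementById( 'monitor' );
  video.src = window.URL.createObjectURL( stream ); // newer browsers: video.srcObject = stream
  video.play();
  setupAR();
}

function noStream( error ) {
  console.error( 'Cannot access the webcam:', error );
}
</code></pre>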
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidRAFvCnOwtQ3g_oHqhXj6kTfMoH_uSoLif8Vl0lsVsg4A7FHImFpuHYUkDyRvUPz6QTfNu24JFCZR15u8Hi7_mmueWX6jWzXnr-_qq76Bt9v_X0AWC7ctwrV-wYQioj-p-6KhYFs_CF5f/s1600/Untitled+drawing%25281%2529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="174" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidRAFvCnOwtQ3g_oHqhXj6kTfMoH_uSoLif8Vl0lsVsg4A7FHImFpuHYUkDyRvUPz6QTfNu24JFCZR15u8Hi7_mmueWX6jWzXnr-_qq76Bt9v_X0AWC7ctwrV-wYQioj-p-6KhYFs_CF5f/s320/Untitled+drawing%25281%2529.jpg" width="320" /></a></div>
<br />
In updateVideoStream(), as the picture above shows, the current video frame is drawn into an imageData maintained by a 2D canvas. The imageData is then sent to the ArUco detector to check whether there is any marker in it. The detector returns an array of the markers detected in this imageData; every marker carries the (x, y) coordinates of its four corners. These corner coordinates enable lots of applications; in my demo, I draw the corners and the marker id on screen. The most interesting part is that we can leverage the markers to update the pose of a 3D model.<br />
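<br />
A sketch of this step, assuming js-aruco's AR.Detector API (drawCorners is a hypothetical helper for visualization):<br />
<pre><code class="javascript">
var detector = new AR.Detector();

function updateVideoStream() {
  context.drawImage( video, 0, 0, canvas.width, canvas.height );
  var imageData = context.getImageData( 0, 0, canvas.width, canvas.height );
  var markers = detector.detect( imageData ); // [{ id, corners: [{x, y} x 4] }, ...]
  for ( var i = 0; i < markers.length; i++ ) {
    drawCorners( markers[ i ].corners );
  }
}
</code></pre>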
<br />
<div class="p1">
<span class="s1">POS.Posit gives us a library to assist us get the transformation pose from the corners. In a pose, it contains a rotation matrix and a translation vector in a 3D space. Therefore, it is very easy for us to show a 3D model on markers except we need to do some coordinate conversion. First, we need to keep in mind video stream is in a 2D space, so it makes sense that we have to transform the corners to 3D space.</span></div>
<div class="p1">
<table cellpadding="0" cellspacing="0" class="t1"><tbody>
<tr><td class="td2" valign="top"><br /></td><td class="td1" valign="top"><div class="p1">
<span class="s1"><br /></span></div>
<pre><code class="javascript">
for (i = 0; i < corners.length; ++ i){
corner = corners[i];
// to 2D canvas space to 3D world space
corner.x = corner.x - (canvas.width / 2);
corner.y = (canvas.height/2) - corner.y;
}
</code></pre>
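<div class="p1">A sketch of the pose estimation itself, assuming js-aruco's POS.Posit API (modelSize is the physical side length of the marker, in the same unit as the returned translation):</div>
<pre><code class="javascript">
var posit = new POS.Posit( modelSize, canvas.width );
var pose = posit.pose( corners ); // corners already shifted to the canvas center
var rotation = pose.bestRotation; // 3x3 rotation matrix
var translation = pose.bestTranslation; // [x, y, z]
</code></pre>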
<div class="p1">
<span class="s1">Moreover, we need to apply this rotation matrix to the 3D model's rotation vector.</span></div>
</td></tr>
</tbody></table>
</div>
<pre><code class="javascript">
dae.rotation.x = -Math.asin(-rotation[1][2]);
dae.rotation.y = -Math.atan2(rotation[0][2], rotation[2][2]) - Math.PI / 2; // three.js rotations are in radians, so offset by -PI/2 rather than -90
dae.rotation.z = Math.atan2(rotation[1][0], rotation[1][1]);
</code></pre>
<div class="p1">
<span class="s1">
</span></div>
<table cellpadding="0" cellspacing="0" class="t1"><tbody>
</tbody>
</table>
<br />
At last, set the position to the 3D model.<br />
<pre><code class="javascript">
dae.position.x = translation[0];
dae.position.y = translation[1];
dae.position.z = -translation[2] * offsetScale;
</code></pre>
<br />
<iframe width="520" height="415" src="https://www.youtube.com/embed/68O5w1oIURM" frameborder="0" allowfullscreen></iframe>
<br />
Demo video: <a href="https://www.youtube.com/watch?v=68O5w1oIURM">https://www.youtube.com/watch?v=68O5w1oIURM</a><br />
Demo link: <a href="http://daoshengmu.github.io/ConsoleGameOnWeb/webar.html">http://daoshengmu.github.io/ConsoleGameOnWeb/webar.html</a> (Best for Firefox)<br />
<br />daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-17834184155228715422016-07-17T08:42:00.006-07:002016-08-15T06:16:12.067-07:00How to setup RustDTRustDT is an IDE for Rust.
If you, like me, need an IDE to learn the language and develop efficiently, you should give RustDT a try (<a href="https://github.com/RustDT/RustDT/blob/latest/documentation/UserGuide.md#user-guide">https://github.com/RustDT/RustDT/blob/latest/documentation/UserGuide.md#user-guide</a>).<br />
<br />
Enable code completion:<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicGzAi97hrj4UlytO3ku4ZnPnTNzj0XClnYbqTIO1JRI7gd-NaA2f-YA2UXFUkr8aPgjHo9II4CUuTaRp9HhWeIX8U-Uc49mivcwc1PLnFRnwXMsf1zZS3dJytWjjgRaDZnanCtCpWMw-_/s1600/Screen+Shot+2016-05-03+at+9.48.55+AM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em; text-align: center;"><img border="0" height="221" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicGzAi97hrj4UlytO3ku4ZnPnTNzj0XClnYbqTIO1JRI7gd-NaA2f-YA2UXFUkr8aPgjHo9II4CUuTaRp9HhWeIX8U-Uc49mivcwc1PLnFRnwXMsf1zZS3dJytWjjgRaDZnanCtCpWMw-_/s320/Screen+Shot+2016-05-03+at+9.48.55+AM.png" width="320" /></a><br />
<br />
Here you go!<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTc-NuaZoL4GS3PVPr4qMO-LfjuOjGR1usR_PEHTUiDTz-KeFtoGf3O45DvKX86SzXAAvU5geYKIhyWH6u8V09ZyaQZ33YVHG5gDvoGWgzGMOMIhMImGI1YGQcGT0VfJxBQxP5_PpJfJAp/s1600/Screen+Shot+2016-05-03+at+9.03.31+AM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="265" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTc-NuaZoL4GS3PVPr4qMO-LfjuOjGR1usR_PEHTUiDTz-KeFtoGf3O45DvKX86SzXAAvU5geYKIhyWH6u8V09ZyaQZ33YVHG5gDvoGWgzGMOMIhMImGI1YGQcGT0VfJxBQxP5_PpJfJAp/s400/Screen+Shot+2016-05-03+at+9.03.31+AM.png" width="400" /></a>daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-39508748304063802892016-01-31T06:25:00.013-08:002022-01-07T23:59:18.678-08:00WebGL/VR on Worker thread<div>
<b><u>WebGL on main thread</u></b><br />
<b><u><br /></u></b></div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOzJDKBjF5_ZKu7hF4U45FBJhc-q7yfrTm3GnwfJ39Ob9fyYxukYvDqHT-rli2J1kPKkazrpsY0czugmC2nokfJVin4YQ3u5lfRNoKp5SIH792o9VVHXUsGLiHDnKH-I39zL2HeMevWmqx/s1600/Screen+Shot+2016-01-31+at+4.27.16+PM.png" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOzJDKBjF5_ZKu7hF4U45FBJhc-q7yfrTm3GnwfJ39Ob9fyYxukYvDqHT-rli2J1kPKkazrpsY0czugmC2nokfJVin4YQ3u5lfRNoKp5SIH792o9VVHXUsGLiHDnKH-I39zL2HeMevWmqx/w640-h320/Screen+Shot+2016-01-31+at+4.27.16+PM.png" width="640" /></a><br />
<div>
In the past, the only way to develop a WebGL application was to put everything on the main thread, which definitely limits performance. As the picture above shows, a 3D game may need to do a lot of work in each update frame: updating the transformations of 3D objects, visibility culling, AI, networking, physics, and so on. Only then can it hand everything over to the render step that executes the WebGL calls.<br />
<br />
Finishing all of this within the V-Sync budget (16 ms) is a real challenge for developers, so people have been looking for a way to move part of the load onto other threads. WebGL on a worker thread emerged from this need. But don't assume that moving work into a WebWorker solves every problem; it brings new challenges as well. Below, I explain how to use WebGL on a worker to improve performance, with a physics WebGL demo based on <a href="http://threejs.org/">three.js</a> and <a href="http://www.cannonjs.org/">cannon.js</a>, and show that it can be integrated with the WebVR API as well.</div>
<div>
<br /></div>
<div>
<b><u>WebWorker</u></b><br />
<b><u><br /></u></b></div>
<div>
First of all, I would like to introduce how to use WebWorker. A WebWorker executes your script on another thread, isolated from the pauses of the main thread's JavaScript virtual machine garbage collector. That makes it a good tool for attacking performance bottlenecks. The sample code is as below:<br />
<br />
<pre><code class="javascript">worker = new Worker("js/worker.js"); // load worker script
worker.onmessage = function( evt ) { // The receiver of worker's message
//console.log('Message received from worker ' + evt.data );
};
worker.postMessage( { test: 'webgl_offscreen' } ); // Send message to worker
</code></pre>
<br />
In worker.js
<pre><code class="javascript">onmessage = function(evt) {
//console.log( 'Message received from main script' );
postMessage( 'Send script to the main script.' ); // Post message back to the main thread.
}
</code></pre>These scripts look quite simple, and we can start to move some computation into the onmessage function in worker.js to relieve the main thread. However, WebWorker also brings some constraints that make it less convenient than general JavaScript on the main thread.
The limitations of WebWorker are:
<div>
- Can't read/write DOM</div><div>
- Can't access global variable / function</div><div>
- Can't use file system (file://) to access local files</div><div>
- No requestAnimationFrame<br /><br />
<b><u>WebGL on worker</u></b><br />
<b><br /></b>After understanding how to use WebWorker and its constraints, let's make our first WebGL-on-worker application.</div>
<div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvEb38DVob22EoAZiFq381k8aN6SuVcdRZUrKU419eMOj-fd89Hj-5XoC05HKo53HxLbkGJ4REraj6qSseO6PsKn7r8KqdH4M__kFgu-fZnwfW8qRG6lJy53uoSS05bltHHtgnhtwY5WEO/s1600/Screen+Shot+2016-01-27+at+10.58.13+PM.png"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvEb38DVob22EoAZiFq381k8aN6SuVcdRZUrKU419eMOj-fd89Hj-5XoC05HKo53HxLbkGJ4REraj6qSseO6PsKn7r8KqdH4M__kFgu-fZnwfW8qRG6lJy53uoSS05bltHHtgnhtwY5WEO/w640-h320/Screen+Shot+2016-01-27+at+10.58.13+PM.png" width="640" /></a><br />
The benefit of a worker is that we can move a portion of the computation onto another thread. In the case of a WebGL worker, we can move the WebGL calls themselves into the worker thread; in the example pictured above, I put the render part in the WebGL worker. Firefox Nightly has landed the OffscreenCanvas feature to support WebGL on a worker thread. In order to use this feature, we need some setup:<br />
<ul>
<li><strike>Download <a href="https://nightly.mozilla.org/">Firefox Nightly</a></strike></li>
<li><strike>Enter about:config, make <i>gfx.offscreencanvas.enabled;true</i></strike></li><li>Now, it works in Chrome instead of Firefox</li>
</ul>
Now that the WebGL worker is enabled, let's finish it. The sample code is as below.<br />
<pre><code class="javascript">
var canvas = document.getElementById('c');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
var proxy = canvas.transferControlToOffscreen(); // new interface added by offscreencanvas for getting offscreen canvas
var worker = new Worker("js/gl_worker.js");
var positions = new Float32Array(num*3); // Transferable object of web worker. Transformation info
var quaternions = new Float32Array(num*4); // For the update/render functions to update their variable.
// in the main/worker threads.
var cameraState = new Float32Array(7); // Camera state for the update/render functions
worker.onmessage = function( evt ) { // worker message receiving function
if ( evt.data.positions && evt.data.quaternions
&& evt.data.cameraState ) {
positions = evt.data.positions;
quaternions = evt.data.quaternions;
cameraState = evt.data.cameraState;
updateWorker();
}
}
worker.postMessage( { canvas: proxy }, [proxy]); // Send offscreenCanvas to worker
function updateWorker() {
// Update camera state
// Update position, quaternion
// Send these buffer back the worker
worker.postMessage( { cameraState: cameraState, positions: positions, quaternions: quaternions },
[cameraState.buffer, positions.buffer, quaternions.buffer]);
}
</code></pre><br />
In worker.js
<br />
<pre><code class="javascript">
var renderer;
var canvas;
var scene = null;
var camera = null;
onmessage = function( evt ) { // Receiving messages from the main thread
var window = self;
if ( typeof evt.data.canvas !== 'undefined') {
console.log( 'import script... ' );
importScripts('../lib/three.js'); // load script at worker
importScripts('../js/threejs/VREffect.js');
importScripts('../js/threejs/TGALoader.js');
canvas = evt.data.canvas;
renderer = new THREE.WebGLRenderer( { canvas: canvas } ); // Initialize THREE.js WebGLRenderer
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera( 30, canvas.width / canvas.height, 0.5, 10000 );
window.addEventListener( 'resize', onWindowResize, false ); // Register 'resize' event
// Get bufffers that are sent from main thread.
var cameraState = evt.data.cameraState;
var positions = evt.data.positions;
var quaternions = evt.data.quaternions;
camera.position.set( cameraState[0], cameraState[1], cameraState[2] );
camera.quaternion.set( cameraState[3], cameraState[4], cameraState[5], cameraState[6] );
for ( var i = 0; i < visuals.length; i++ ) { // Setup transformation info for visual objects in scene
visuals[i].position.set(
positions[3 * i + 0],
positions[3 * i + 1],
positions[3 * i + 2] );
visuals[i].quaternion.set(
quaternions[4 * i + 0],
quaternions[4 * i + 1],
quaternions[4 * i + 2],
quaternions[4 * i + 3] );
}
render(); // Call render via the main thread requestAnimationTime
postMessage({ cameraState: cameraState, positions:positions, quaternions:quaternions}
, [ cameraState.buffer, positions.buffer, quaternions.buffer ]); // Send back transferable
// object to the main thread
}
}
function render() {
renderer.render( scene, camera );
  renderer.context.commit(); // New API for WebGL on worker to end the frame (only supported in Firefox; when running in Chrome, comment this line out)
}
function onWindowResize( width, height ) { // Resize window listener
canvas.width = width;
canvas.height = height;
camera.aspect = canvas.width / canvas.height;
camera.updateProjectionMatrix();
renderer.setSize( canvas.width, canvas.height, false );
}
</code></pre><b><br /></b>
<b>WebVR on Worker</b><br />
<b><br /></b>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuzsVGomLOm5P22QIchLTvNRo2xkvoqtk4dxwoAgEsqu7hiDHo0eEArUgBIvH_IqlOYWGv8jVXY9pz_T1DmzEG6-mPQMkHzhzuaRzcs1-LYw0OpQGTk29OKga0f09CYkfda58E8mQJMIwX/s1600/Screen+Shot+2016-01-31+at+10.18.20+PM.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuzsVGomLOm5P22QIchLTvNRo2xkvoqtk4dxwoAgEsqu7hiDHo0eEArUgBIvH_IqlOYWGv8jVXY9pz_T1DmzEG6-mPQMkHzhzuaRzcs1-LYw0OpQGTk29OKga0f09CYkfda58E8mQJMIwX/w640-h400/Screen+Shot+2016-01-31+at+10.18.20+PM.png" width="640" /></a></div>
<br />
Although most WebVR parameters live in the DOM API, the worker thread can't access them directly. That is not a big deal: we can read them on the main thread and pass them to the worker.<br />
<br />
In the main thread
<br />
<pre><code class="javascript">var vrHMD;
function gotVRDevices( devices ) {
vrHMD = devices[ 0 ];
worker.postMessage( { // Pass them to the worker
eyeTranslationL: eyeTranslationL.x,
eyeTranslationR: eyeTranslationR.x,
eyeFOVLUp: eyeFOVL.upDegrees, eyeFOVLDown: eyeFOVL.downDegrees,
eyeFOVLLeft: eyeFOVL.leftDegrees, eyeFOVLRight: eyeFOVL.rightDegrees,
eyeFOVRUp: eyeFOVR.upDegrees, eyeFOVRDown: eyeFOVR.downDegrees,
eyeFOVRLeft: eyeFOVR.leftDegrees, eyeFOVRRight: eyeFOVR.rightDegrees });
}
function updateVR() { // Update camera orientation via VR state
var state = vrPosSensor.getState();
if ( state.hasOrientation ) {
camera.quaternion.set(
state.orientation.x,
state.orientation.y,
state.orientation.z,
state.orientation.w);
  }
}
function triggerFullscreen() {
canvas.mozRequestFullScreen( { vrDisplay: vrHMD } ); // Fullscreen must be requested at the main thread.
} // Thankfully, it works for WebGL on worker.
</code></pre><br />
In worker.js
<br />
<pre><code class="javascript">var vrDeviceEffect = new THREE.VREffect(renderer);
onmessage = function(evt) { // Send VRDevice to work for stereo render.
vrDeviceEffect.eyeTranslationL.x = evt.data.eyeTranslationL;
vrDeviceEffect.eyeTranslationR.x = evt.data.eyeTranslationR;
vrDeviceEffect.eyeFOVL.upDegrees = evt.data.eyeFOVLUp;
vrDeviceEffect.eyeFOVL.downDegrees = evt.data.eyeFOVLDown;
vrDeviceEffect.eyeFOVL.leftDegrees = evt.data.eyeFOVLLeft;
vrDeviceEffect.eyeFOVL.rightDegrees = evt.data.eyeFOVLRight;
vrDeviceEffect.eyeFOVR.upDegrees = evt.data.eyeFOVRUp;
vrDeviceEffect.eyeFOVR.downDegrees = evt.data.eyeFOVRDown;
vrDeviceEffect.eyeFOVR.leftDegrees = evt.data.eyeFOVRLeft;
vrDeviceEffect.eyeFOVR.rightDegrees = evt.data.eyeFOVRRight;
}
</code></pre><br />
<b><u>Others</u></b><br />
<b><u><br /></u></b>
Besides WebGL and WebVR, several problems had to be solved while making this demo. I list them and discuss how I solved each one:<br />
- Can’t access DOM (read/modify)<br />
<pre><code class="javascript"> var workerCanvas = canvas.transferControlToOffscreen();
worker.postMessage( {canvas: workerCanvas}, [workerCanvas] );
</code></pre> - Can’t use filesystem (file://) to access local files<br />
Use XMLHttpRequest instead. Taking texture loading as an example, in three.js we need to use:<br />
<pre><code class="javascript">var loader = new THREE.TGALoader();
var texture = loader.load( 'images/brick_bump.tga' );
var solidMaterial = new THREE.MeshLambertMaterial( { map: texture } );</code></pre> - No requestAnimationFrame<br />
Updating the transferable objects and triggering a render must go through worker.onmessage(), so we have to drive the worker update from the main thread's requestAnimationFrame. This brings a chance of the worker being blocked by the main thread: a GC pause on the main thread can stall the worker's frame. The best solution is to look forward to <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1203382">the implementation of requestAnimationFrame for workers</a>.<br />
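<br />
Until then, one possible scheme is to drive the worker from the main thread's requestAnimationFrame, posting the transferable buffers once per tick. A minimal sketch, reusing the updateWorker() function from the sample above (the demo instead chains updates through worker.onmessage):<br />
<pre><code class="javascript">
function mainLoop() {
  requestAnimationFrame( mainLoop );
  // posts the transferable buffers to the worker; the worker renders
  // one frame and hands the buffers back in its onmessage reply
  updateWorker();
}
requestAnimationFrame( mainLoop );
</code></pre>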
<br />
<b><u>Demo</u></b><br />
<b><u><br /></u></b>
<a href="https://daoshengmu.github.io/ConsoleGameOnWeb/physicsThreeDemo.html" target="_blank">Physics/WebGL on the main thread</a><br />
<a href="https://daoshengmu.github.io/ConsoleGameOnWeb//physicsWorkerThreeDemo.html" target="_blank">Physics on the main thread, WebGL on worker</a><br />
<a href="https://github.com/daoshengmu/ConsoleGameOnWeb" target="_blank">Source code</a><br />
<br />
</div></div>daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-56293418089234639602016-01-26T07:20:00.000-08:002016-01-26T07:46:32.126-08:00Introduction to A-Frame <a href="http://aframe.io/">A-Frame</a> is a WebVR framework that lets developers build VR content rapidly. It is based on an entity-component system, which brings flexibility and usability to development.<br />
<br />
This is my slide for the sharing in WebGL Meetup Taipei #03.<br />
<iframe allowfullscreen="" frameborder="0" height="485" marginheight="0" marginwidth="0" scrolling="no" src="//www.slideshare.net/slideshow/embed_code/key/pH47lpIi9vADFe" style="border-width: 1px; border: 1px solid #CCC; margin-bottom: 5px; max-width: 100%;" width="595"> </iframe> <br />
<div style="margin-bottom: 5px;">
<strong> <a href="https://www.slideshare.net/ellisonmu/introduction-to-aframe-57170744" target="_blank" title="Introduction to A-Frame">Introduction to A-Frame</a> </strong> from <strong><a href="https://www.slideshare.net/ellisonmu" target="_blank">Daosheng Mu</a></strong> </div>
daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-28545859543749480842015-07-01T09:01:00.003-07:002015-07-01T09:19:00.904-07:00WebVR on Mobile devicesWe all know VR is very popular right now. Lots of big companies have started to develop their own headsets, and Facebook and YouTube have begun to support 360-degree videos on their platforms. Thanks to their work, we can expect huge numbers of applications targeting virtual reality in the next couple of years. Currently, if you want to play VR content, you have to spend about $300 USD on a headset and plug several wires into your desktop; I think that raises the barrier for users to try VR. On the other hand, Google and Gear VR choose mobile devices as their headsets: you only need to spend $20 USD on a Google Cardboard to start enjoying interaction with VR content.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3POMPR36TWBAAlItIu0ruuEXaKY6MnSWAAfmqeh4yb4jN8laS4gxIov_NjFml-RLhZNsrAYh28nCaQRYhFheu4KpTupXWTNmk6aUrBkxRa4xmXATeK9ebNnqsoUOSzTDLP6ZMy7qLdSDZ/s1600/googlecardboard.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="216" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3POMPR36TWBAAlItIu0ruuEXaKY6MnSWAAfmqeh4yb4jN8laS4gxIov_NjFml-RLhZNsrAYh28nCaQRYhFheu4KpTupXWTNmk6aUrBkxRa4xmXATeK9ebNnqsoUOSzTDLP6ZMy7qLdSDZ/s320/googlecardboard.jpg" width="320" /></a></div>
How about the content side of VR applications? If you write an application for Oculus, you must build a Windows version for Windows users and a macOS version for macOS users. On mobile devices, you likewise need to build separate Android and iOS versions. No one wants to spend time rewriting the same code. There is an approach that solves this problem: using HTML5 technology to make web apps. Luckily, the WebVR API helps us meet that requirement.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBb-H-LBMH0lbiJuG7XPiQ5aKN6rxATBTJpmxvrbQc3r_zP3wvYwxSCPQfnklING1hYgzhclbOM7k7Y5douwqnzgCwCjfLj7hyCQKGR95CNoAnp2WSVdf4tZ98avU-MeO29uPArr32Fx6u/s1600/images.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBb-H-LBMH0lbiJuG7XPiQ5aKN6rxATBTJpmxvrbQc3r_zP3wvYwxSCPQfnklING1hYgzhclbOM7k7Y5douwqnzgCwCjfLj7hyCQKGR95CNoAnp2WSVdf4tZ98avU-MeO29uPArr32Fx6u/s1600/images.jpg" /></a></div>
WebVR [1] is an experimental Web API developed at Mozilla and Google; at Mozilla, we call the effort MozVR [2], and it has landed in Firefox Nightly. The goal is for any VR device to be able to display VR content from a browser. You can already find lots of demos on MozVR.com that you can view through an Oculus Rift. However, the main topic of this post is viewing VR content on mobile devices: I want to prove the concept and show everyone how to use a mobile phone to view WebVR. Below, I describe how to use the WebVR API on Firefox Nightly and Firefox OS.<br />
<br />
First of all, I am very satisfied with the results, but there are still some workarounds worth noting.<br />
<br />
1. Fix the image sizes to powers of two. This limit comes from the WebGL 1.0 spec. In desktop environments, some browsers can work around this restriction, but on mobile devices we must follow it; otherwise the app will crash.<br />
<br />
2. Full screen. If we want to allow full-screen mode, we have to set:<br />
<pre class="brush: js">full-screen-api.allow-trusted-requested-only; false</pre>
<br />
3. On mobile devices, we have no position tracker support.<br />
<br />
4. Enable VR flag<br />
<pre class="brush: js">dom.vr.enabled; true</pre>
<br />
5. Using dom.deviceOrientation APIs instead of PositionState<br />
On Firefox for Android, PositionState support has landed in the Nightly version. However, on Firefox OS it is not implemented yet, so I decided to use the dom.deviceOrientation APIs [3] there.<br />
<br />
<pre class="brush: js">window.addEventListener( 'deviceorientation', onDeviceOrientationChangeEvent, false );</pre>
<br />
6. Position tracker on FirefoxOS<br />
Firefox OS devices don't provide a position tracker API natively, which makes it difficult to control the camera movement in the 3D scene. But thanks to Open Web technology, we can try the Leap Motion JavaScript SDK [4], which delivers hand-tracking data over a WebSocket that we can use to control the camera movement.<br />
<br />
<pre class="brush: js">ws = new WebSocket("ws://ipaddress:6437/v6.json");
ws.onopen = function( event ) {
// Initial leapMotion
}
ws.onmessage = function( event ) {
// Receive the messages
}</pre>
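<br />
A sketch of the receiving side, assuming the v6 JSON protocol (the 0.01 scale factor is an arbitrary choice for this illustration):<br />
<pre class="brush: js">ws.onmessage = function( event ) {
  var frame = JSON.parse( event.data );
  if ( frame.hands && frame.hands.length > 0 ) {
    // palmPosition is [x, y, z] in millimeters relative to the controller
    var palm = frame.hands[ 0 ].palmPosition;
    camera.position.set( palm[ 0 ] * 0.01, palm[ 1 ] * 0.01, palm[ 2 ] * 0.01 );
  }
};</pre>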
<br />
Finally, let's see our demo, which was made at the Mozilla work week in Whistler.<br />
<br />
Demo:<br />
<iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/op9vvrwC9GM" width="510"></iframe><br />
<br />
Reference:<br />
[1] WebVR API <a href="https://developer.mozilla.org/en-US/docs/Web/API/WebVR_API">https://developer.mozilla.org/en-US/docs/Web/API/WebVR_API</a><br />
[2] MozVR, <a href="http://mozvr.com/">http://mozvr.com</a><br />
[3] DeviceOrientation API, <a href="http://www.html5rocks.com/en/tutorials/device/orientation/">www.html5rocks.com/en/tutorials/device/orientation/</a><br />
[4] LeapMotion JavaScript SDK, <a href="https://developer.leapmotion.com/documentation/javascript/supplements/Leap_JSON.html?proglang=javascript">https://developer.leapmotion.com/documentation/javascript/supplements/Leap_JSON.html?proglang=javascript</a><br />
<br />
<br />daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-21815598863221481972014-12-26T05:43:00.000-08:002014-12-26T20:03:16.635-08:00Using Chromium Embedded Framework (CEF) in their games<a href="http://coherent-labs.com/blog/what-developers-should-consider-when-using-chromium-embedded-framework-cef-in-their-games/">http://coherent-labs.com/blog/what-developers-should-consider-when-using-chromium-embedded-framework-cef-in-their-games/</a><br />
<br />
<a href="http://www.ogre3d.org/forums/viewtopic.php?f=11&t=79079">http://www.ogre3d.org/forums/viewtopic.php?f=11&t=79079</a><br />
<br />daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-68875049368826884292014-07-22T02:08:00.004-07:002020-12-12T15:00:45.668-08:00Web3D using software rendering<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrBdalG1Vtvmr_BKUTvB13vN6XIaKgVF92S_hUZzk-olXOGN9IWzAILXLkPQBP7qfcSiUpvYY6-MKAyxVXphaOhqyEG1LbPmTCBQyFQD66QoSaUt5TAxN4VVhcIASuDUzU2Vo7i0crGRl9/s476/software_earth.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="459" data-original-width="476" height="386" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrBdalG1Vtvmr_BKUTvB13vN6XIaKgVF92S_hUZzk-olXOGN9IWzAILXLkPQBP7qfcSiUpvYY6-MKAyxVXphaOhqyEG1LbPmTCBQyFQD66QoSaUt5TAxN4VVhcIASuDUzU2Vo7i0crGRl9/w400-h386/software_earth.jpg" width="400" /></a></div><div><br /></div> <br />
Nowadays, when people implement a Web3D demo, most of them use the WebGL approach.<br />
<br />
WebGL seems to be the de facto standard for Web3D: Chrome, Firefox, Safari, IE, and Chrome for Android support it. iOS 8 will be launched this year, and WebGL can run in Safari on iOS 8 as well.<br />
<br />
In order to support Web3D on iOS 7 and earlier, I surveyed the software renderer of three.js and helped add texture mapping and per-pixel lighting.<br />
<br />
First, we get the image data from canvas.<br />
<pre class="brush: js">
var context = canvas.getContext( '2d', {
alpha: parameters.alpha === true
} );
imagedata = context.getImageData( 0, 0, canvasWidth, canvasHeight );
data = imagedata.data;
</pre>
<br />
<br />
Second, we project the faces of the objects into screen space. We don't need to sort them with the painter's algorithm, because three.js implements a screen-sized z-buffer that stores depth values for depth testing.<br />
<br />
Then, we start to interpolate the pixel attributes across these faces.
<pre class="brush: js">var dz12 = z1 - z2, dz31 = z3 - z1;
var invDet = 1.0 / (dx12*dy31 - dx31*dy12);
var dzdx = (invDet * (dz12*dy31 - dz31*dy12)); // dz per one subpixel step in x
var dzdy = (invDet * (dz12*dx31 - dx12*dz31)); // dz per one subpixel step in y
</pre>
<br />
We interpolate the texture coordinates and vertex normals in the same way.
<pre class="brush: js">var dtu12 = tu1 - tu2, dtu31 = tu3 - tu1;
var dtudx = (invDet * (dtu12*dy31 - dtu31*dy12)); // dtu per one subpixel step in x
var dtudy = (invDet * (dtu12*dx31 - dx12*dtu31)); // dtu per one subpixel step in y
var dtv12 = tv1 - tv2, dtv31 = tv3 - tv1;
var dtvdx = (invDet * (dtv12*dy31 - dtv31*dy12)); // dtv per one subpixel step in x
var dtvdy = (invDet * (dtv12*dx31 - dx12*dtv31)); // dtv per one subpixel step in y
var dnz12 = nz1 - nz2, dnz31 = nz3 - nz1;
var dnzdx = (invDet * (dnz12*dy31 - dnz31*dy12)); // dnz per one subpixel step in x
var dnzdy = (invDet * (dnz12*dx31 - dx12*dnz31)); // dnz per one subpixel step in y
</pre>
<div>
<br /></div>
<div>
Get the left/top corner of this area.<br />
<pre class="brush: js">var cz = ( z1 + ((minXfixscale) - x1) * dzdx + ((minYfixscale) - y1) * dzdy ) | 0; // z left/top corner
var ctu = ( tu1 + (minXfixscale - x1) * dtudx + (minYfixscale - y1) * dtudy ); // u left/top corner
var ctv = ( tv1 + (minXfixscale - x1) * dtvdx + (minYfixscale - y1) * dtvdy ); // v left/top corner
var cnz = ( nz1 + (minXfixscale - x1) * dnzdx + (minYfixscale - y1) * dnzdy ); // normal left/top corner
</pre>
</div>
<div>
<br />
Divide the screen into 8x8-pixel blocks, and draw the pixels block by block.
</div>
<div>
<pre class="brush: js">for ( var y0 = miny; y0 < maxy; y0 += q ) {
while ( x0 >= minx && x0 < maxx && cb1 >= nmax1 && cb2 >= nmax2 && cb3 >= nmax3 ) {
// Because the size of blocks are 8x8, we have to scan them 8 x 8 pixels
for ( var iy = 0; iy < q; iy ++ ) {
for ( var ix = 0; ix < q; ix ++ ) {
if ( z < zbuffer[ offset ] ) { // Checking z-testing
// if passed, write depth to z buffer, and draw pixel
zbuffer[ offset ] = z;
shader( data, offset * 4, cxtu, cxtv, cxnz, face, material );
}
}
}
}
}
</pre>
</div>
<br />
Put the image into canvas.<br />
<pre class="brush: js">context.putImageData( imagedata, 0, 0, x, y, width, height );
</pre>
<br />
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8aaF90k2VuBpJ3HupZiPruRFIU2TeFlb4t8wKRlUfSjLsyNaQm3Bb7sD9N2rY30j-zUAGS7Zby83yKNuAb1W8B0D61pY_LTUfcmFxUSGUUm5Gch7Nj3Zj-CTGoZwesCyqWFHnZR1QC3z-/s1600/Screen+Shot+2014-07-22+at+4.28.11+PM.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="238" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8aaF90k2VuBpJ3HupZiPruRFIU2TeFlb4t8wKRlUfSjLsyNaQm3Bb7sD9N2rY30j-zUAGS7Zby83yKNuAb1W8B0D61pY_LTUfcmFxUSGUUm5Gch7Nj3Zj-CTGoZwesCyqWFHnZR1QC3z-/s1600/Screen+Shot+2014-07-22+at+4.28.11+PM.png" width="320" /></a></div>
<br />
If we want to support texture mapping, we have to store the texels into a texel buffer, like this:<br />
<pre class="brush: js">var data;
try {
var ctx = canvas.getContext('2d');
if(!isCanvasClean) {
ctx.clearRect(0, 0, dim, dim);
ctx.drawImage(image, 0, 0, dim, dim);
var imgData = ctx.getImageData(0, 0, dim, dim);
data = imgData.data;
    }
} catch(e) {
return;
}
var size = data.length;
this.data = new Uint8Array(size);
var alpha;
for(var i=0, j=0; i < size; ) {
this.data[i++] = data[j++];
this.data[i++] = data[j++];
this.data[i++] = data[j++];
alpha = data[j++];
this.data[i++] = alpha;
if(alpha < 255)
this.hasTransparency = true;
}
</pre>
<br />
Computing pixel color with texels:
<br />
<pre class="brush: js">var tdim = material.texture.width;
var isTransparent = material.transparent;
var tbound = tdim - 1;
var tdata = material.texture.data;
var texel = tdata[((v * tdim) & tbound) * tdim + ((u * tdim) & tbound)];
if ( !isTransparent ) {
buffer[ offset ] = (texel & 0xff0000) >> 16;
buffer[ offset + 1 ] = (texel & 0xff00) >> 8;
buffer[ offset + 2 ] = texel & 0xff;
buffer[ offset + 3 ] = material.opacity * 255;
}
else {
var opaci = ((texel >> 24) & 0xff) * material.opacity;
if(opaci < 250) {
var backColor = buffer[ offset ] << 24 + buffer[ offset + 1 ] << 16 + buffer[ offset + 2 ] << 8;
texel = texel * opaci + backColor * (1-opaci);
}
buffer[ offset ] = (texel & 0xff0000) >> 16;
buffer[ offset + 1 ] = (texel & 0xff00) >> 8;
buffer[ offset + 2 ] = (texel & 0xff);
buffer[ offset + 3 ] = material.opacity * 255;
}
</pre>
To support lighting, we first store the lighting colors in a palette:
<br />
<pre class="brush: js">var diffuseR = material.ambient.r + material.color.r * 255;
if ( bSimulateSpecular ) {
var i = 0, j = 0;
while(i < 204) {
var r = i * diffuseR / 204;
var g = i * diffuseG / 204;
var b = i * diffuseB / 204;
if(r > 255)
r = 255;
if(g > 255)
g = 255;
if(b > 255)
b = 255;
palette[j++] = r;
palette[j++] = g;
palette[j++] = b;
++i;
}
while(i < 256) { // plus specular highlight
var r = diffuseR + (i - 204) * (255 - diffuseR) / 82;
var g = diffuseG + (i - 204) * (255 - diffuseG) / 82;
var b = diffuseB + (i - 204) * (255 - diffuseB) / 82;
if(r > 255)
r = 255;
if(g > 255)
g = 255;
if(b > 255)
b = 255;
palette[j++] = r;
palette[j++] = g;
palette[j++] = b;
++i;
}
} else {
var i = 0, j = 0;
while(i < 256) {
var r = i * diffuseR / 255;
var g = i * diffuseG / 255;
var b = i * diffuseB / 255;
if(r > 255)
r = 255;
if(g > 255)
g = 255;
if(b > 255)
b = 255;
palette[j++] = r;
palette[j++] = g;
palette[j++] = b;
++i;
}
}
</pre>
<br />
At run time, we fetch the lighting color according to the pixel normal:
<br />
<pre class="brush: js"> var tdim = material.texture.width;
var isTransparent = material.transparent;
var cIndex = (n > 0 ? (~~n) : 0) * 3;
var tbound = tdim - 1;
var tdata = material.texture.data;
var tIndex = (((v * tdim) & tbound) * tdim + ((u * tdim) & tbound)) * 4;
if ( !isTransparent ) {
buffer[ offset ] = (material.palette[cIndex] * tdata[tIndex]) >> 8;
buffer[ offset + 1 ] = (material.palette[cIndex+1] * tdata[tIndex+1]) >> 8;
buffer[ offset + 2 ] = (material.palette[cIndex+2] * tdata[tIndex+2]) >> 8;
buffer[ offset + 3 ] = material.opacity * 255;
} else {
var opaci = tdata[tIndex+3] * material.opacity;
var foreColor = ((material.palette[cIndex] * tdata[tIndex]) << 16)
+ ((material.palette[cIndex+1] * tdata[tIndex+1]) << 8 )
+ (material.palette[cIndex+2] * tdata[tIndex+2]);
if(opaci < 250) {
var backColor = buffer[ offset ] << 24 + buffer[ offset + 1 ] << 16 + buffer[ offset + 2 ] << 8;
foreColor = foreColor * opaci + backColor * (1-opaci);
}
buffer[ offset ] = (foreColor & 0xff0000) >> 16;
buffer[ offset + 1 ] = (foreColor & 0xff00) >> 8;
buffer[ offset + 2 ] = (foreColor & 0xff);
buffer[ offset + 3 ] = material.opacity * 255;
}
</pre>
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4DXP3_jmY_Ys_GsfkWqDNV8iQ-97Gqt1XfdwR5U7OPBz1qvYSWi1ZKhqLGay6azwpK_k0XEaZMQmYGd4K7IBPI4-q4Nyno0Q0LJ2ymiW2kL8Kds2JrWVyT0hOoXWeUi-mRDeYG-Sjj1jR/s1600/Screen+Shot+2014-07-22+at+4.36.54+PM.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="238" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4DXP3_jmY_Ys_GsfkWqDNV8iQ-97Gqt1XfdwR5U7OPBz1qvYSWi1ZKhqLGay6azwpK_k0XEaZMQmYGd4K7IBPI4-q4Nyno0Q0LJ2ymiW2kL8Kds2JrWVyT0hOoXWeUi-mRDeYG-Sjj1jR/s1600/Screen+Shot+2014-07-22+at+4.36.54+PM.png" width="320" /></a></div>
<br />
Optimization:
Use a blockFlags array to track which screen blocks need to be cleared, and a blockMaxZ array to record each block's farthest depth. If an incoming depth is greater than blockMaxZ[blockId], the block needn't be drawn at all; see the sketch below.<br />
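A minimal sketch of that early-out, assuming the screen is tiled into fixed-size blocks and that blockMaxZ holds the farthest depth already written into a block (the tile size, the helper's name, and the globals screenWidth/blockMaxZ/blockFlags are assumptions, not the renderer's actual code):
<pre class="brush: js">var BLOCK_SIZE = 8; // assumed tile size in pixels
var blocksPerRow = Math.ceil( screenWidth / BLOCK_SIZE );

// Returns true when the pixel at (x, y) with the given depth lies behind
// everything already covering its block, so it can be skipped outright.
function blockRejects( x, y, depth ) {
  var blockId = ((y / BLOCK_SIZE) | 0) * blocksPerRow + ((x / BLOCK_SIZE) | 0);
  if ( depth > blockMaxZ[blockId] )
    return true; // early reject: no per-pixel z-test, no texturing
  blockFlags[blockId] = 1; // block is dirty; only dirty blocks get cleared next frame
  return false;
}
</pre>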
<br />
<br />daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-76163557853282123872014-07-20T23:09:00.002-07:002014-07-21T08:29:07.346-07:00Printing your 3D models by using Arca3DArca3D is a platform that lets you store your 3D models. Even more interesting, it can also help you print your models.<br />
<br />
<b> First, select a model you uploaded.</b><br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKhef5khhi32ztKJ8vlABe4aLeSsFFnszUeEurBi8aSHpxukuXU8b9fZrHTbsy9dOrqAHheQHBzVHURddudM8PHL6xkZpvNPiNpfphMinB9tIkqUmxqN71_UqeDyR7a_jAKpmdYktz6rl2/s1600/Photo+2014-6-26+%E4%B8%8B%E5%8D%885+54+43.jpg" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKhef5khhi32ztKJ8vlABe4aLeSsFFnszUeEurBi8aSHpxukuXU8b9fZrHTbsy9dOrqAHheQHBzVHURddudM8PHL6xkZpvNPiNpfphMinB9tIkqUmxqN71_UqeDyR7a_jAKpmdYktz6rl2/s320/Photo+2014-6-26+%E4%B8%8B%E5%8D%885+54+43.jpg" /></a><br />
<br />
<b>Next, click the Download STL File button</b><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnNYFwPNzVm6kiF_j9b6m3lHQ4Cn4nB4RQ157XpO2sws-Xi1DSAeCl1Pa5RGmYojQeGCrLAY2P-gPnx0d8QH2VbaQgAW6tCHWbxQUp6NydX9H8usItFqobmZVdIk8pJ-7qqIlw1_rkgMAX/s1600/Screen+Shot+2014-07-21+at+1.42.23+PM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnNYFwPNzVm6kiF_j9b6m3lHQ4Cn4nB4RQ157XpO2sws-Xi1DSAeCl1Pa5RGmYojQeGCrLAY2P-gPnx0d8QH2VbaQgAW6tCHWbxQUp6NydX9H8usItFqobmZVdIk8pJ-7qqIlw1_rkgMAX/s1600/Screen+Shot+2014-07-21+at+1.42.23+PM.png" height="164" width="320" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<b>Then, you can check it in the STL viewer</b><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGgOI0o_nxvlH3nGRoEvsTgl2gskQ86Tv_GwdGKvjMuM9AcDVrUpgnHAOyKbC9noD7bjoMUoJV41KjJXJCkoVRtydsviWXSHgTD0UD3DzX_K9HaFgFm1e_qSiP4jiqgg_9SeVJezwaUSRC/s1600/Screen+Shot+2014-07-21+at+1.45.22+PM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGgOI0o_nxvlH3nGRoEvsTgl2gskQ86Tv_GwdGKvjMuM9AcDVrUpgnHAOyKbC9noD7bjoMUoJV41KjJXJCkoVRtydsviWXSHgTD0UD3DzX_K9HaFgFm1e_qSiP4jiqgg_9SeVJezwaUSRC/s1600/Screen+Shot+2014-07-21+at+1.45.22+PM.png" height="253" width="320" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<b>Finally, send the *.stl file to your 3D printer and get your own physical 3D model.</b><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibbs6z9tOMjN6tLNGVTeZiCbmQH46uaNcjn77tWc7ICRwgWWO9xMuQ62iyeE308b40XgEqgNESQi_PwXAjjdEBvZMZ4fpVXMFvsVsRgobNQt6cyxVYOp-27KnvXvar25JrpUisS5vsz3Zx/s1600/Photo+2014-6-19+%E4%B8%8B%E5%8D%8812+38+41.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibbs6z9tOMjN6tLNGVTeZiCbmQH46uaNcjn77tWc7ICRwgWWO9xMuQ62iyeE308b40XgEqgNESQi_PwXAjjdEBvZMZ4fpVXMFvsVsRgobNQt6cyxVYOp-27KnvXvar25JrpUisS5vsz3Zx/s1600/Photo+2014-6-19+%E4%B8%8B%E5%8D%8812+38+41.jpg" height="240" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZlI-w8JX3KQmyu719_8CkTnPBDjf0FCXi3KyCXX6RlcMbW0vKFPN8n5IuOLHNHnGP6BU5eqN62ZnTUduTQ2gXbuRd-BOjDooftUjo0xazlZiEufxwBZyFk3PM0Cr2yuvvuzyTgMC6QEVn/s1600/Photo+2014-6-19+%E4%B8%8B%E5%8D%884+21+20.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZlI-w8JX3KQmyu719_8CkTnPBDjf0FCXi3KyCXX6RlcMbW0vKFPN8n5IuOLHNHnGP6BU5eqN62ZnTUduTQ2gXbuRd-BOjDooftUjo0xazlZiEufxwBZyFk3PM0Cr2yuvvuzyTgMC6QEVn/s1600/Photo+2014-6-19+%E4%B8%8B%E5%8D%884+21+20.jpg" height="320" width="240" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvgCtSDHxYi0sU9gnnlQbHfNB__cGmRwJK7rhkLPZ1yntJXKBfV5LVtFuQW3WPWKA2nuDeGUTziQTGSR7yq3zzQNu-pmpc6FM0TWyhOGMN_gTtKzI_mqM6Cd0W_V8NHS8E04FZt_AZZZHJ/s1600/Photo+2014-6-19+%25E4%25B8%258B%25E5%258D%25886+14+00.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvgCtSDHxYi0sU9gnnlQbHfNB__cGmRwJK7rhkLPZ1yntJXKBfV5LVtFuQW3WPWKA2nuDeGUTziQTGSR7yq3zzQNu-pmpc6FM0TWyhOGMN_gTtKzI_mqM6Cd0W_V8NHS8E04FZt_AZZZHJ/s1600/Photo+2014-6-19+%25E4%25B8%258B%25E5%258D%25886+14+00.jpg" height="320" width="240" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br />Enjoy it: <a href="http://dev.arca3d.com/">http://dev.arca3d.com</a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<br />
<br />daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-80191816132823500422014-07-08T07:04:00.000-07:002014-07-08T07:06:45.587-07:00Arca3D LCD<iframe src="http://dev.arca3d.com/embedView?model_id=60" frameborder="0" scrolling="no" marginheight="0" marginwidth="0" width="400" height="300" allowfullscreen></iframe>daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-69483910845442846532014-07-03T20:27:00.001-07:002014-07-03T20:28:21.326-07:00Arca3D model<iframe src="http://dev.arca3d.com/embedView?model_id=61" frameborder="0" scrolling="no" marginheight="0" marginwidth="0" width="480" height="320" allowfullscreen></iframe>daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-5606701691352099762014-06-08T21:25:00.000-07:002014-06-09T00:28:09.871-07:00JavaScript singleton patternIn JavaScript, implementing the singleton design pattern can be a struggle. This is my first experiment:<br />
<pre class="brush: js">
View = {
  _viewer: undefined,
  initView: function() {
    // init view
  },
  getView: function() {
    // get view
    return this._viewer; // must go through `this`; a bare _viewer is undefined here
  }
};
</pre>
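For instance, nothing stops a caller from bypassing the accessor entirely (an illustrative snippet):
<pre class="brush: js">View._viewer = null;         // any code can overwrite the "private" field
console.log( View._viewer ); // ...or read it directly
</pre>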
But this approach provides no encapsulation: as the snippet above shows, the user can reach _viewer directly. Therefore, I decided to rewrite it as:<br />
<pre class="brush: js">
var View = ( function() {
  var _viewer;
  function init() {
    // init view
  }
  function getView() {
    // get view
    return _viewer;
  }
  return {
    init: function() { init(); },
    getView: function() { return getView(); }
  };
} )();
</pre>
This module-pattern version achieves encapsulation, and you call it like View.getView(), but you have to write each interface twice: once as an inner function and once in the returned object. Finally, I decided on the method below, which is the best solution for me so far.
<br />
<pre class="brush: js">// unstrict mode:
var View = ( function() {
if ( arguments.callee._singletonInstance )
return arguments.callee._singletonInstance;
arguments.callee._singletonInstance = this;
var _viewer = undefined;
this.init = function() {
// init view
};
this.getView = function() {
// get view
return _viewer;
};
};
} ) ();
new View();
// strict mode:
var View = ( function() {
if (View.prototype._singletonInstance)
return View.prototype._singletonInstance;
View.prototype._singletonInstance = this;
var _viewer = undefined;
this.init = function() {
// init view
};
this.getView = function() {
// get view
return _viewer;
};
};
} ) ();
new View();
</pre>
With this approach, every new View() returns the same instance, so you can reach the singleton's methods anywhere via new View().getView(), and you needn't define the interfaces twice.<br />
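A quick illustrative check that every construction really returns the same object:
<pre class="brush: js">var a = new View();
var b = new View();
console.log( a === b ); // true: both variables reference the single instance
</pre>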
<br />
<span style="color: orange;">Reference:
</span><br />
<a href="http://stackoverflow.com/questions/1635800/javascript-best-singleton-pattern">http://stackoverflow.com/questions/1635800/javascript-best-singleton-pattern</a><br />
<a href="http://fstoke.me/blog/?p=1932">http://fstoke.me/blog/?p=1932</a><br />
<a href="http://www.dotblogs.com.tw/blackie1019/archive/2013/08/30/115977.aspx">http://www.dotblogs.com.tw/blackie1019/archive/2013/08/30/115977.aspx</a><br />
<br />daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-92023716847993446092014-05-26T03:55:00.000-07:002014-05-26T20:15:23.132-07:00Screen Space Ambient Occlusion demo<iframe frameborder="0" height="300" scrolling="no" src="http://dsmu.me/sponza/sponza.html" width="500"></iframe>
<br />
Link: <a href="http://dsmu.me/sponza/sponza.html" target="_blank">http://dsmu.me/sponza/sponza.html</a>
<br />
<br />
This demo supports desktop web browsers and Android Chrome. It is based on the three.js framework, and the scene assets are downloaded from <a href="http://www.crytek.com/cryengine/cryengine3/downloads" target="_blank">Crytek Sponza</a>. To reduce the loading time, I suggest using three.js's binary format converted with the convert_obj_three tool; in my experience it saves about 80% of the loading time, and the file is only about half the size. This SSAO demo uses two render targets and three passes:
<br />
<ol>
<li>Depth pass: store the logarithmic depth, packed into the RGBA channels of a render target texture.</li>
<li>Diffuse pass: store the diffuse color, with lighting already applied, into an RGB render target.</li>
<li>Post-processing pass: unpack the log depth from the depth texture and linearize it, fetch the diffuse color from the diffuse render target, then compute the SSAO term with a spiral sampling approach and multiply it with the diffuse color, as sketched below.</li>
</ol>
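A minimal sketch of that pass structure with the three.js API of the era (depthPackMaterial, ssaoMaterial, postScene, and postCamera are assumed helpers, not the demo's actual code):
<pre class="brush: js">// Two render targets: packed log-depth (RGBA) and lit diffuse color.
var depthTarget   = new THREE.WebGLRenderTarget( width, height );
var diffuseTarget = new THREE.WebGLRenderTarget( width, height );

// 1. Depth pass: draw the scene with a depth-packing override material.
scene.overrideMaterial = depthPackMaterial; // assumed ShaderMaterial
renderer.render( scene, camera, depthTarget );

// 2. Diffuse pass: draw the lit scene normally.
scene.overrideMaterial = null;
renderer.render( scene, camera, diffuseTarget );

// 3. Post-processing pass: a full-screen quad whose shader unpacks and
// linearizes the depth, fetches the diffuse color, computes the
// spiral-sampled occlusion term, and multiplies the two together.
ssaoMaterial.uniforms.tDepth.value = depthTarget;     // older three.js accepted the
ssaoMaterial.uniforms.tDiffuse.value = diffuseTarget; // render target as a sampler
renderer.render( postScene, postCamera );
</pre>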
<div>
Snapshots:</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEkshEZaa80_Rt7hGcwa6FBQf6LQHgsAYh7zwLb3wityzK-EX-iLx5jZ63yfru2kXYH7KBhIIXYjrOq-Am0YShyphenhyphenzV1nRAulGwP_cDGJRRPBN5IIE56r3llaeoP4SrS7GIXS0Q9Gekb7XLT/s1600/1969202_4112209500315_1560111235487939279_n.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEkshEZaa80_Rt7hGcwa6FBQf6LQHgsAYh7zwLb3wityzK-EX-iLx5jZ63yfru2kXYH7KBhIIXYjrOq-Am0YShyphenhyphenzV1nRAulGwP_cDGJRRPBN5IIE56r3llaeoP4SrS7GIXS0Q9Gekb7XLT/s1600/1969202_4112209500315_1560111235487939279_n.jpg" height="240" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Desktop web browser & Android Chrome</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0tdfiCcNvpGXtFP0yLLastLjO-OSBxbKTEm83FLjDTOTVOeBtKpYV7O3dzN1HypDLNZlrLv4qIP108mUoi7TvIgYLF21jIGi2FiyLbDuYHY68oObLwSfB9gfk517lPYx2w5K68rbyrXJa/s1600/10374017_4112209180307_7089066962477729328_n.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0tdfiCcNvpGXtFP0yLLastLjO-OSBxbKTEm83FLjDTOTVOeBtKpYV7O3dzN1HypDLNZlrLv4qIP108mUoi7TvIgYLF21jIGi2FiyLbDuYHY68oObLwSfB9gfk517lPYx2w5K68rbyrXJa/s1600/10374017_4112209180307_7089066962477729328_n.jpg" height="176" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Desktop web browser</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgO2eUTzEuJLUTWHBgl0mH-GsHZ9cgRJvDG1T2uxKp9ovL_CxkC-CSXX9iQ-3XJbBE_mRgRtYcI17K6LrJaacLQlHdMejRfBk0Bu-JNUbB2uH8iWfqdr-LcpCPY50znQSq42LpVKl-KPCsK/s1600/10367155_4112210140331_5961398509417569926_n.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgO2eUTzEuJLUTWHBgl0mH-GsHZ9cgRJvDG1T2uxKp9ovL_CxkC-CSXX9iQ-3XJbBE_mRgRtYcI17K6LrJaacLQlHdMejRfBk0Bu-JNUbB2uH8iWfqdr-LcpCPY50znQSq42LpVKl-KPCsK/s1600/10367155_4112210140331_5961398509417569926_n.jpg" height="200" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Android Chrome</div>
<div>
<br /></div>
daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-296516297102839332014-03-13T09:27:00.000-07:002014-03-13T09:27:25.724-07:0030+ Cross Platform Mobile App and Game Development Tools <br />
<div class="classic-post-title blog_head">
<b><span style="font-weight: normal;"><b>30+ Cross Platform Mobile App and Game Development Tools</b> </span></b></div>
<div class="classic-post-title blog_head">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJOaCwRK1HhH-Mn0_yxMFhinXMeNhMDkPUjtd3U-Pt2F_4IZqD3nRmGobxhlR8kXrHlTWMhl-w9zeaw59Okn-WPxQcJyLcXRLk-u1hQP0sE8LjhOTM3LC9cHwa9C3HmJivs5PCbG6ye5c4/s1600/service_image_71.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJOaCwRK1HhH-Mn0_yxMFhinXMeNhMDkPUjtd3U-Pt2F_4IZqD3nRmGobxhlR8kXrHlTWMhl-w9zeaw59Okn-WPxQcJyLcXRLk-u1hQP0sE8LjhOTM3LC9cHwa9C3HmJivs5PCbG6ye5c4/s1600/service_image_71.jpg" height="173" width="400" /></a></div>
<a href="http://www.riaxe.com/blog/top-cross-platform-mobile-development-tools/?fb_action_ids=10201601507067999&fb_action_types=og.comments&fb_source=other_multiline&action_object_map=[233188210202347]&action_type_map=[%22og.comments%22]&action_ref_map=[]">http://www.riaxe.com/blog/top-cross-platform-mobile-development-tools/?fb_action_ids=10201601507067999&fb_action_types=og.comments&fb_source=other_multiline&action_object_map=[233188210202347]&action_type_map=[%22og.comments%22]&action_ref_map=[]</a>daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-85559711198468233912014-03-10T03:38:00.000-07:002014-03-10T03:43:34.109-07:00Using remote debugging for HTML5 on iOS Debugging on mobile devices always has to back on your desktop devices, just like Android chrome debugging. (<a href="http://coderellis.blogspot.tw/2013/04/porting-html5-game-to-mobile-platform.html">http://coderellis.blogspot.tw/2013/04/porting-html5-game-to-mobile-platform.html</a>)<br />
<br />
In this article I would like to describe the Web Inspector approach. Its advantage is that it is natively supported by Apple, so no plugin needs to be installed. The disadvantage is that you must have a Mac, because it requires Safari 6 or later, while Safari for Windows is only available up to version 5.1.7.<br />
<br />
<b>1. Enable web inspector</b><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-kkNxyyhoSlPyuyoVkGz-kbPfA31elfC4xLS1VTBKeQ3leiFTO2GYZKFrah_ETcsEhiFSCRcZ9S7sdJHh6QnQ2xybEYahNLloj75yaQjyYK8LqAJ4emCMFLpKb3Kv7rhlyDqfhx9HNqlt/s1600/IMG_2390.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-kkNxyyhoSlPyuyoVkGz-kbPfA31elfC4xLS1VTBKeQ3leiFTO2GYZKFrah_ETcsEhiFSCRcZ9S7sdJHh6QnQ2xybEYahNLloj75yaQjyYK8LqAJ4emCMFLpKb3Kv7rhlyDqfhx9HNqlt/s1600/IMG_2390.PNG" height="320" width="213" /></a></div>
Go to your iOS device's Settings. Enable Web Inspector under Safari -> Advanced. Then connect the iOS device to your Mac with a USB cable.<br />
<br />
2. <b>Enable developer mode on Safari</b><br />
In Safari on your Mac, open Preferences -> Advanced and check the option that enables the Develop menu.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHVDuPgjzd88tte0I7CamH7JJjeKpO7cnjK3JeR30DsufNRCvBLTm3Ch3sLB9qbMiZvKfVuK6bckihOILq06IRAra29Xr8frQVRcW5RaTAFnRdaPYWO-YbVI4Sk-LnRUmD2ILckhEGPNM3/s1600/%E8%9E%A2%E5%B9%95%E5%BF%AB%E7%85%A7+2014-03-10+%E4%B8%8B%E5%8D%885.08.39.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHVDuPgjzd88tte0I7CamH7JJjeKpO7cnjK3JeR30DsufNRCvBLTm3Ch3sLB9qbMiZvKfVuK6bckihOILq06IRAra29Xr8frQVRcW5RaTAFnRdaPYWO-YbVI4Sk-LnRUmD2ILckhEGPNM3/s1600/%E8%9E%A2%E5%B9%95%E5%BF%AB%E7%85%A7+2014-03-10+%E4%B8%8B%E5%8D%885.08.39.png" height="214" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
Open the web page you want to debug on the iOS device. Then go to the Develop menu in Safari on the Mac; your iOS device will be listed there. Click the page to debug.</div>
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuF9vxLN9Rwq761L9bXZB9ubY027cqH5lmCZxH_3Gh1e0floKzAXX4R-_6k-Xf2_kPis_8veybZliXhuPi4WsiIC85eANNrMdRi2dY8nPa4XswchDZnEltSXr6-HI300EgGclamyC8Ps5L/s1600/%E8%9E%A2%E5%B9%95%E5%BF%AB%E7%85%A7+2014-03-10+%E4%B8%8B%E5%8D%885.07.47.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuF9vxLN9Rwq761L9bXZB9ubY027cqH5lmCZxH_3Gh1e0floKzAXX4R-_6k-Xf2_kPis_8veybZliXhuPi4WsiIC85eANNrMdRi2dY8nPa4XswchDZnEltSXr6-HI300EgGclamyC8Ps5L/s1600/%E8%9E%A2%E5%B9%95%E5%BF%AB%E7%85%A7+2014-03-10+%E4%B8%8B%E5%8D%885.07.47.png" height="51" width="400" /></a><br />
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<b>3. Enable WebGL</b><br />
By default, WebGL is disabled in Mac Safari; if we want to use this feature, we have to enable it from the Develop menu.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBImTewEX4v1x0FN1HQMA2JrZF2E4CP3QXm6Z9jRngKNbKlK8CabbQ0iURi9jcpsOkF_DhWOKCGL2sudlxQhP-xrZl6kdqzaHEkrSCridJAn-8RHJIoFjEKdtkr3BxhAbnOROqJ9xXYbE-/s1600/%E8%9E%A2%E5%B9%95%E5%BF%AB%E7%85%A7+2014-03-10+%E4%B8%8B%E5%8D%885.09.04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBImTewEX4v1x0FN1HQMA2JrZF2E4CP3QXm6Z9jRngKNbKlK8CabbQ0iURi9jcpsOkF_DhWOKCGL2sudlxQhP-xrZl6kdqzaHEkrSCridJAn-8RHJIoFjEKdtkr3BxhAbnOROqJ9xXYbE-/s1600/%E8%9E%A2%E5%B9%95%E5%BF%AB%E7%85%A7+2014-03-10+%E4%B8%8B%E5%8D%885.09.04.png" height="320" width="178" /></a></div>
<br />
<b>4. Debug</b><br />
Finally, we can start debugging the iOS web page on our Mac. The workflow is simple and familiar; it works much like other browsers' dev tools.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_0MKjTj8GuuLNSybBfs-7Ev-xqoeLWbX2I0v495gLdzXg_XHLkHapILx7DInUYTIFpEEc3181xgekVnXgk-u1B2HZ25_3HbXIU9EsmeQDLE-IhsrcNT0p43xoKQ8lhSY3ZOl6fx1pwZ-G/s1600/1497628_515347755248951_1826828857_n.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_0MKjTj8GuuLNSybBfs-7Ev-xqoeLWbX2I0v495gLdzXg_XHLkHapILx7DInUYTIFpEEc3181xgekVnXgk-u1B2HZ25_3HbXIU9EsmeQDLE-IhsrcNT0p43xoKQ8lhSY3ZOl6fx1pwZ-G/s1600/1497628_515347755248951_1826828857_n.jpg" height="240" width="320" /></a></div>
<b><br /></b>
<b><br /></b>
daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com0tag:blogger.com,1999:blog-1852465093188788306.post-69732146662985617922014-03-08T21:38:00.002-08:002014-03-09T19:58:56.027-07:00WebGL Demo 02: 3D model loader and previewWhen you want to display a complex 3D mesh in your 3D application, you need an external model format, typically COLLADA (.dae), FBX (.fbx), or OBJ (.obj). And if you want to build a 3D engine, you will want to support even more formats. That is why I started researching the FBX SDK.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSi2KHURrgp2LDC48Jqgmi0HVWAYXCuARsEU1vy3rjaSiy2BT43mNuL3Fr1S5L1Xssrhj2Sv4KJBOxqb3EREShaUmnjps1XWtxL_lbM4594LBmkVIUZEnSvGj1HFq1FPnNE8Ap9AvgBosb/s1600/%E8%9E%A2%E5%B9%95%E5%BF%AB%E7%85%A7+2014-03-09+%E4%B8%8B%E5%8D%884.40.47.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSi2KHURrgp2LDC48Jqgmi0HVWAYXCuARsEU1vy3rjaSiy2BT43mNuL3Fr1S5L1Xssrhj2Sv4KJBOxqb3EREShaUmnjps1XWtxL_lbM4594LBmkVIUZEnSvGj1HFq1FPnNE8Ap9AvgBosb/s1600/%E8%9E%A2%E5%B9%95%E5%BF%AB%E7%85%A7+2014-03-09+%E4%B8%8B%E5%8D%884.40.47.png" height="100" width="400" /></a></div>
<br />
FBX SDK supports importers/exporters for the <span style="color: blue;">.fbx, .dxf, .dae</span>, and <span style="color: blue;">.obj</span> file formats; the .3ds format has been retired. FBX technology can be used for sharing scene assets, storing and packaging models for sale, and processing animation, and it backs the import/export functions of Autodesk's products (3ds Max, Maya, AutoCAD...). The FBX SDK, part of Autodesk FBX technology, is a C++ software development kit; you can use it to create plug-ins, converters, and other applications. The FBX SDK's source code is not open, so we can only use the SDK interface, and if you want to redistribute or repackage it, you need written permission from Autodesk and must include a link to the Autodesk FBX website so users can install the required version of the FBX SDK. (<a href="http://usa.autodesk.com/adsk/servlet/pc/item?siteID=123112&id=10775847">http://usa.autodesk.com/adsk/servlet/pc/item?siteID=123112&id=10775847</a>)<br />
<br />
FBX SDK is divided into three parts:<br />
<ul>
<li>FBX SDK<br />The C++ library. You can integrate it into your content-creation pipeline to build file parsers, converters, importers, and exporters.</li>
<li>FBX extension<br />For customizing the behavior of FBX importing and exporting through dynamically loaded libraries.</li>
<li>Python FBX<br />Python bindings for the FBX SDK C++ library, which let us write Python scripts using the SDK's classes and functions.</li>
</ul>
<b><u>FBX formats</u></b>:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifaLx59e1TZNwt8mupW0Se0-LTmjuTeB5wS11CsQL_xfFYyqkcNxnJD9dR1Q2-BjEnIJ003-YKwKeIsR_ArzgaLIOY5rY9o6ANFX76Ma0iKKLY7ak7E1Wc-mq1gWX-Uc-fCR-1eJb9GI2_/s1600/fbx+object.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifaLx59e1TZNwt8mupW0Se0-LTmjuTeB5wS11CsQL_xfFYyqkcNxnJD9dR1Q2-BjEnIJ003-YKwKeIsR_ArzgaLIOY5rY9o6ANFX76Ma0iKKLY7ak7E1Wc-mq1gWX-Uc-fCR-1eJb9GI2_/s1600/fbx+object.jpg" height="158" width="400" /></a></div>
<br />
Most classes in the FBX SDK are derived from FbxObject. <br />
<br />
FbxScene: the FBX scene graph is organized as a tree of FbxNode objects. The nodes carry elements such as cameras, meshes, and lights; these scene elements are specialized instances of FbxNodeAttribute.<br />
<br />
I/O objects: FbxImporter and FbxExporter are used to import and export scenes.<br />
<br />
Collections: most container classes are derived from the FbxCollection class. FbxScene itself derives from FbxCollection through FbxDocument.<br />
<br />
FbxScene:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidddTC5fdKAi-SdCMe-aL1zhTDpI59DkxW4MSfpmL1VnmjSTKDW380Y1vVZ_o3KojUjIzYbqJYFkAz2PN8EZl2m5vGWyRnLzPBMTtEexHwaZmKrkCXe4BInIIgkU1bFEZlSKeXADMemrNV/s1600/fbxScene.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidddTC5fdKAi-SdCMe-aL1zhTDpI59DkxW4MSfpmL1VnmjSTKDW380Y1vVZ_o3KojUjIzYbqJYFkAz2PN8EZl2m5vGWyRnLzPBMTtEexHwaZmKrkCXe4BInIIgkU1bFEZlSKeXADMemrNV/s1600/fbxScene.jpg" height="400" width="342" /></a></div>
The scene graph is abstracted by the FbxScene class as a hierarchy of nodes. A scene element is defined by combining an FbxNode with a subclass of FbxNodeAttribute.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhG5J65QDGwCdylQ6WGXGEWURI5kDWAcgy5yF5rTP2GAnWqI4U_xnyErwPwRm8Gkl0SxCzfpQTmTHhh8TanyPWxONWhTcD7HoPSCu6fSaXckirdhDJpKqm3VvbUqPLlVgV9hD7p9AejoefP/s1600/fbx+attribute.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhG5J65QDGwCdylQ6WGXGEWURI5kDWAcgy5yF5rTP2GAnWqI4U_xnyErwPwRm8Gkl0SxCzfpQTmTHhh8TanyPWxONWhTcD7HoPSCu6fSaXckirdhDJpKqm3VvbUqPLlVgV9hD7p9AejoefP/s1600/fbx+attribute.jpg" height="180" width="320" /></a></div>
<br />
FbxNodeAttribute: an FbxNode combines transformation data with node attributes (FbxNodeAttribute).<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiS7N-q1bHshDRjBxycR0b_3XNJ2M8MAiOOLgWNlVkeWIgPdj975jeN-tQTBTgoH_QNcEywbbSgiSPrL2-4Y8B6tInyRnX-pDPDhkgMiDQUmoxiI39_s-AY62S0uBzWGvE96RLrzLzNWIiw/s1600/fbxAttribute.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiS7N-q1bHshDRjBxycR0b_3XNJ2M8MAiOOLgWNlVkeWIgPdj975jeN-tQTBTgoH_QNcEywbbSgiSPrL2-4Y8B6tInyRnX-pDPDhkgMiDQUmoxiI39_s-AY62S0uBzWGvE96RLrzLzNWIiw/s1600/fbxAttribute.jpg" height="165" width="400" /></a></div>
<br />
FbxSurfaceMaterial:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMO64ONulfxfDKLda23tU_PkNFqAJqq5O-bxIwVXF4WPNrs-0Dc6s2m7Tn3d7lmCh-o0yU7aqTWUaXywZh-m2RWSE3Sqky3fmWRO7FwpL2HvarY38u4tOvjlJwYhCY5e0D0scn1Dntyite/s1600/fbxMaterial.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMO64ONulfxfDKLda23tU_PkNFqAJqq5O-bxIwVXF4WPNrs-0Dc6s2m7Tn3d7lmCh-o0yU7aqTWUaXywZh-m2RWSE3Sqky3fmWRO7FwpL2HvarY38u4tOvjlJwYhCY5e0D0scn1Dntyite/s1600/fbxMaterial.jpg" height="260" width="320" /></a></div>
<br />
FbxTexture:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRRzoWzggvOtTBirFYJ2EKh24wQijFeI9ekEomljViq-_ux3l7kCm9m4GdQpRc0FuZwClcD9R-xaMbw_5Vi0GaR3aSCPTjG4qSIJvCRSg7PFPwpt_PPPfvnhH4CkgVsrQ__rkfFo0Jr50g/s1600/fbxTexture.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRRzoWzggvOtTBirFYJ2EKh24wQijFeI9ekEomljViq-_ux3l7kCm9m4GdQpRc0FuZwClcD9R-xaMbw_5Vi0GaR3aSCPTjG4qSIJvCRSg7PFPwpt_PPPfvnhH4CkgVsrQ__rkfFo0Jr50g/s1600/fbxTexture.jpg" height="208" width="400" /></a></div>
<br />
<br />
<b><u>Load fbx/dae/obj model to scene</u></b>:<br />
<pre class="brush: cpp">#include "fbxsdk .h"
#include "fbxfilesdk/fbxio/fbxiosettings.h"
// Create the FBX SDK manager
FbxManager* lSdkManager = FbxManager::Create();
// Create an IOSettings object.
FbxIOSettings * ios = FbxIOSettings::Create(lSdkManager, IOSROOT );
lSdkManager->SetIOSettings(ios);
// ... Configure the FbxIOSettings object ...
// Create an importer.
FbxImporter* lImporter = FbxImporter::Create(lSdkManager, "");
// Declare the path and filename of the file containing the scene.
// In this case, we are assuming the file is in the same directory as the executable.
const char* lFilename = "file.xxx"; // the file extension can be dae, obj, fbx
// Initialize the importer and bail out if the file cannot be opened or parsed.
bool lImportStatus = lImporter->Initialize(lFilename, -1, lSdkManager->GetIOSettings());
if (!lImportStatus) {
    printf("Call to FbxImporter::Initialize() failed.\n");
    exit(-1);
}
// Create a new scene so it can be populated by the imported file.
FbxScene* lScene = FbxScene::Create(lSdkManager,"myScene");
// Import the contents of the file into the scene.
lImporter->Import(lScene);
// The file has been imported; we can get rid of the importer.
lImporter->Destroy();
</pre>
<br />
<b><u>Convert FBX model</u></b>: <br />
My demo uses the three.js engine, which provides the fbx/convert_to_threejs.py converter built on the Python bindings of the FBX SDK. You can import your fbx file through it. In your terminal window, run (the example file names below are illustrative):
<pre class="brush: bash">convert_to_threejs.py [source_file] [output_file] [options]</pre>
<br />
In the output folder you will find a three.js in-house model file in JSON format, and the textures your model needs will have been copied alongside it [that part is my contribution to three.js].<br />
<br />
<b><u>Preview in three.js</u></b>: <br />
Load the converted model: <br />
<pre class="brush: js">function loadScene() {
var loader = new THREE.SceneLoader();
loader.load( 'outFBX/basketball/basketball.js', callbackFinished );
}
function callbackFinished( result ) {
result.scene.traverse( function ( object ) {
_scene.add( object );
} );
}
</pre>
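To actually see the model, the usual three.js render loop applies (a minimal sketch; renderer, camera, and _scene are assumed to be created elsewhere):
<pre class="brush: js">function animate() {
  requestAnimationFrame( animate );
  renderer.render( _scene, camera );
}
animate();
</pre>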
<br />
<b><u>Result:</u></b><br />
<iframe height="400" seamless="" src="https://dl.dropboxusercontent.com/u/75721204/WebGL/demo/demo02LoadFBX.html" width="400"></iframe><br />
<i><span style="font-size: x-small;">Model is downloaded from: <a href="http://tf3dm.com/3d-model/official-nba-spalding-basketball-86751.html">http://tf3dm.com/3d-model/official-nba-spalding-basketball-86751.html</a></span></i><br />
<br />
<i><span style="color: orange;">Reference:</span></i><br />
Autodesk FBX SDK documentation: <a href="http://docs.autodesk.com/FBX/2013/ENU/FBX-SDK-Documentation/index.html">http://docs.autodesk.com/FBX/2013/ENU/FBX-SDK-Documentation/index.html</a><br />
three.js: <a href="https://github.com/mrdoob/three.js">https://github.com/mrdoob/three.js</a><br />
<br />daoshengmuhttp://www.blogger.com/profile/01983325792747328738noreply@blogger.com1