glTF loader for Android Vulkan

This post introduces how we integrated an existing glTF loader library into our Vulkan rendering framework, vulkan-android, so that it can display glTF models.


From the beginning, we did not want to reinvent the wheel, so we chose tinygltf as our glTF loader. tinygltf is a C++11-based library, which makes it easy to support cross-platform projects.

tinygltf setup in Android Studio

In the vulkan-android project, we put third-party libraries into the third_party folder. Therefore, we need to include tinygltf from the third_party folder in app/CMakeLists.txt as below.

set(THIRD_PARTY_DIR ../../third_party)
include_directories(${THIRD_PARTY_DIR}/tinygltf) # assuming tinygltf is checked out at third_party/tinygltf


Model loading from tinygltf

We are going to load a glTF model, in ASCII or binary format, from storage. tinygltf provides two APIs for this: LoadBinaryFromFile() for the *.glb extension and LoadASCIIFromFile() for the *.gltf extension. The tinygltf loader returns a tinygltf::Model that includes mesh, animation, node, material, texture, and skin data. In this article, I would like to focus on creating meshes and textures from the tinygltf::Model.
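The extension-based dispatch described above can be sketched as follows. The real code would call tinygltf::TinyGLTF::LoadBinaryFromFile() or tinygltf::TinyGLTF::LoadASCIIFromFile() at the marked spots; to keep this sketch self-contained (it does not link against tinygltf), PickLoader only models the extension check and returns the name of the API that would be used.

```cpp
#include <string>

// Returns which tinygltf entry point fits the given path. In real code:
//   loader.LoadBinaryFromFile(&model, &err, &warn, path);  // for *.glb
//   loader.LoadASCIIFromFile(&model, &err, &warn, path);   // for *.gltf
std::string PickLoader(const std::string& path) {
  const size_t dot = path.find_last_of('.');
  const std::string ext = (dot == std::string::npos) ? "" : path.substr(dot);
  if (ext == ".glb") return "LoadBinaryFromFile";
  if (ext == ".gltf") return "LoadASCIIFromFile";
  return "unsupported";
}
```

Both loader calls fill the same tinygltf::Model, so everything downstream of this dispatch is format-agnostic.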

Buffer creating from tinygltf

tinygltf::Model::bufferViews contains the vertex and index data. Index data usually has a scalar type, and vertex data is usually vec3. A bufferView represents a subset of the data in a buffer, defined by a byte offset into the buffer (the byteOffset property) and a total byte length (the byteLength property). The bufferView's target property tells us whether it is an index buffer or a vertex buffer: TINYGLTF_TARGET_ELEMENT_ARRAY_BUFFER means an index buffer, and TINYGLTF_TARGET_ARRAY_BUFFER means a vertex buffer.
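A minimal sketch of that target check, assuming the GL enum values that the TINYGLTF_TARGET_* constants mirror (34962 for ARRAY_BUFFER, 34963 for ELEMENT_ARRAY_BUFFER); BufferView here is a stand-in for tinygltf::BufferView so the sketch compiles on its own.

```cpp
#include <cstddef>

constexpr int kTargetArrayBuffer = 34962;         // TINYGLTF_TARGET_ARRAY_BUFFER (vertex data)
constexpr int kTargetElementArrayBuffer = 34963;  // TINYGLTF_TARGET_ELEMENT_ARRAY_BUFFER (index data)

// Stand-in for tinygltf::BufferView, reduced to the fields we use here.
struct BufferView {
  int target;
  size_t byteOffset;
  size_t byteLength;
};

enum class BufferKind { Vertex, Index, Other };

// Decide whether a bufferView should become a Vulkan vertex or index buffer.
BufferKind Classify(const BufferView& view) {
  if (view.target == kTargetElementArrayBuffer) return BufferKind::Index;
  if (view.target == kTargetArrayBuffer) return BufferKind::Vertex;
  return BufferKind::Other;  // e.g. target is 0 / unspecified
}
```

In the real loader, an Index result maps to VK_BUFFER_USAGE_INDEX_BUFFER_BIT and a Vertex result to VK_BUFFER_USAGE_VERTEX_BUFFER_BIT when creating the corresponding VkBuffer.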

Texture creating from tinygltf

tinygltf handles image loading internally. Whether the image data is embedded in a *.glb or referenced as external PNG files from a *.gltf, it packs the decoded byte data into model.images. model.images stores the loaded pixel buffers, and we need to allocate Vulkan memory and copy the data into a Vulkan image.
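Before allocating anything, we need the size of the staging buffer for one decoded image. tinygltf stores decoded pixels in model.images[i] with width, height, and component fields; DecodedImage below is a minimal stand-in for tinygltf::Image, and we assume the texture is uploaded as a 4-channel RGBA format (e.g. VK_FORMAT_R8G8B8A8_UNORM), so the size is width * height * 4 bytes.

```cpp
#include <cstddef>

// Stand-in for tinygltf::Image, reduced to the fields used here.
struct DecodedImage {
  int width;
  int height;
  int component;  // channels per pixel as decoded (3 for RGB, 4 for RGBA)
};

// Staging-buffer size for an RGBA upload. Even a 3-component source is
// expanded to 4 channels before copying, so we always multiply by 4.
size_t StagingSizeRgba(const DecodedImage& img) {
  return static_cast<size_t>(img.width) * static_cast<size_t>(img.height) * 4;
}
```

This value is the imageSize used when mapping the staging buffer in the snippet below.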

First of all, create a staging buffer and copy the image data into it.

  VkBuffer stagingBuffer;
  VkDeviceMemory stagingBufferMemory;
  // ... create stagingBuffer/stagingBufferMemory with vkCreateBuffer and
  // vkAllocateMemory (host-visible, host-coherent) before mapping ...
  void* data;
  vkMapMemory(mDeviceInfo.device, stagingBufferMemory, 0, imageSize, 0, &data);
  memcpy(data, aBuffer, static_cast<size_t>(imageSize));
  vkUnmapMemory(mDeviceInfo.device, stagingBufferMemory);
Create a Vulkan image and image memory, then use a single-time command buffer to copy the data from the staging buffer we just created into this Vulkan image.

  if (vkCreateImage(mDeviceInfo.device, &imageInfo, nullptr, &textureImage) != VK_SUCCESS) {
    LOG_E(, "failed to create image!");
    return false;
  }

  VkMemoryRequirements memRequirements;
  vkGetImageMemoryRequirements(mDeviceInfo.device, textureImage, &memRequirements);

  VkMemoryAllocateInfo allocInfo{};
  allocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
  allocInfo.allocationSize = memRequirements.size;
  // pick a device-local memory type (FindMemoryType is a helper; its name is
  // assumed here, since this excerpt does not show its definition)
  FindMemoryType(memRequirements.memoryTypeBits,
          VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, &allocInfo.memoryTypeIndex);

  if (vkAllocateMemory(mDeviceInfo.device, &allocInfo, nullptr, &textureImageMemory) != VK_SUCCESS) {
    LOG_E(, "failed to allocate image memory!");
    return false;
  }

  vkBindImageMemory(mDeviceInfo.device, textureImage, textureImageMemory, 0);

  VkCommandBuffer commandBuffer = BeginSingleTimeCommands();

  VkBufferImageCopy region{};
  region.bufferOffset = 0;
  region.bufferRowLength = 0;
  region.bufferImageHeight = 0;
  region.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
  region.imageSubresource.mipLevel = 0;
  region.imageSubresource.baseArrayLayer = 0;
  region.imageSubresource.layerCount = 1;
  region.imageOffset = {0, 0, 0};
  region.imageExtent = {width, height, 1};  // width/height of the loaded texture

  // the image must already have been transitioned to
  // VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL before this copy
  vkCmdCopyBufferToImage(commandBuffer, stagingBuffer, textureImage, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region);

  // submit and free the one-off command buffer (counterpart of BeginSingleTimeCommands)
  EndSingleTimeCommands(commandBuffer);


