Friday, December 26, 2014

What developers should consider when using Chromium Embedded Framework (CEF) in their games

http://coherent-labs.com/blog/what-developers-should-consider-when-using-chromium-embedded-framework-cef-in-their-games/

http://www.ogre3d.org/forums/viewtopic.php?f=11&t=79079

Tuesday, July 22, 2014

Web3D using software rendering


Nowadays, most people who want to implement a Web3D demo take the WebGL approach.

WebGL has become the de facto standard for Web3D: Chrome, Firefox, Safari, IE, and Android Chrome all support it. iOS 8 will launch this year, and WebGL will also run in iOS 8 Safari.

In order to support Web3D on iOS 7 and earlier, I surveyed the software renderer of three.js and helped add texture mapping and per-pixel lighting to it.

First, we get the image data from the canvas.

   var context = canvas.getContext( '2d', {
      alpha: parameters.alpha === true
   } );

   imagedata = context.getImageData( 0, 0, canvasWidth, canvasHeight );
   data = imagedata.data;


Second, we project the faces of the objects into screen space. We don't need to sort them with the painter's algorithm, because three.js implements a screen-sized z-buffer that stores depth values for depth testing.

Then we start to interpolate across the pixels of these faces.
// dx12 = x1 - x2, dy12 = y1 - y2, dx31 = x3 - x1, dy31 = y3 - y1 (computed earlier)
var dz12 = z1 - z2, dz31 = z3 - z1;

var invDet = 1.0 / (dx12*dy31 - dx31*dy12);

var dzdx = (invDet * (dz12*dy31 - dz31*dy12)); // dz per one subpixel step in x
var dzdy = (invDet * (dz12*dx31 - dx12*dz31)); // dz per one subpixel step in y

We interpolate the texture coordinates and the vertex normals the same way.
var dtu12 = tu1 - tu2, dtu31 = tu3 - tu1;

var dtudx = (invDet * (dtu12*dy31 - dtu31*dy12)); // dtu per one subpixel step in x
var dtudy = (invDet * (dtu12*dx31 - dx12*dtu31)); // dtu per one subpixel step in y
var dtv12 = tv1 - tv2, dtv31 = tv3 - tv1;

var dtvdx = (invDet * (dtv12*dy31 - dtv31*dy12)); // dtv per one subpixel step in x
var dtvdy = (invDet * (dtv12*dx31 - dx12*dtv31)); // dtv per one subpixel step in y

var dnz12 = nz1 - nz2, dnz31 = nz3 - nz1;

var dnzdx = (invDet * (dnz12*dy31 - dnz31*dy12)); // dnz per one subpixel step in x
var dnzdy = (invDet * (dnz12*dx31 - dx12*dnz31)); // dnz per one subpixel step in y

Compute the starting values at the top-left corner of the triangle's bounding area.
var cz = ( z1 + ((minXfixscale) - x1) * dzdx + ((minYfixscale) - y1) * dzdy ) | 0;  // z left/top corner
var ctu = ( tu1 + (minXfixscale - x1) * dtudx + (minYfixscale - y1) * dtudy );  // u left/top corner
var ctv = ( tv1 + (minXfixscale - x1) * dtvdx + (minYfixscale - y1) * dtvdy );  // v left/top corner
var cnz = ( nz1 + (minXfixscale - x1) * dnzdx + (minYfixscale - y1) * dnzdy );  // normal left/top corner

Divide the screen into 8x8 blocks, and draw the pixels block by block.
for ( var y0 = miny; y0 < maxy; y0 += q ) {

    // ... set up x0 and the block edge accumulators cb1/cb2/cb3 for this row ...

    while ( x0 >= minx && x0 < maxx && cb1 >= nmax1 && cb2 >= nmax2 && cb3 >= nmax3 ) {
        // Because the blocks are 8x8, we scan 8 x 8 pixels per block
        for ( var iy = 0; iy < q; iy ++ ) {
            for ( var ix = 0; ix < q; ix ++ ) {

                if ( z < zbuffer[ offset ] ) {   // depth test
                    // if it passes, write the depth to the z-buffer and draw the pixel
                    zbuffer[ offset ] = z;
                    shader( data, offset * 4, cxtu, cxtv, cxnz, face, material );
                }
                // ... step z, offset, and the cxtu/cxtv/cxnz interpolants in x ...
            }
            // ... step the interpolants to the next scanline ...
        }
        // ... advance x0 and the edge accumulators to the next block ...
    }
}

Finally, put the image data back onto the canvas.
context.putImageData( imagedata, 0, 0, x, y, width, height );

To support texture mapping, we store the texels in a texel buffer, like this:
var data;
try {
    var ctx = canvas.getContext('2d');
    if(!isCanvasClean) {
        ctx.clearRect(0, 0, dim, dim);
        ctx.drawImage(image, 0, 0, dim, dim);
        var imgData = ctx.getImageData(0, 0, dim, dim);
        data = imgData.data;
    }
} catch(e) {
    return;
}

var size = data.length;
this.data = new Uint8Array(size);
var alpha;

// copy the RGBA texels and detect whether any texel is transparent
for(var i = 0, j = 0; i < size; ) {
    this.data[i++] = data[j++];  // R
    this.data[i++] = data[j++];  // G
    this.data[i++] = data[j++];  // B
    alpha = data[j++];
    this.data[i++] = alpha;      // A

    if(alpha < 255)
        this.hasTransparency = true;
}

Compute the pixel color from the texels:
var tdim = material.texture.width;
var isTransparent = material.transparent;
var tbound = tdim - 1;
var tdata = material.texture.data;
var texel = tdata[((v * tdim) & tbound) * tdim + ((u * tdim) & tbound)];

if ( !isTransparent ) {
    buffer[ offset ] = (texel & 0xff0000) >> 16;
    buffer[ offset + 1 ] = (texel & 0xff00) >> 8;
    buffer[ offset + 2 ] = texel & 0xff;
    buffer[ offset + 3 ] = material.opacity * 255;
}
else {
    var opaci = ((texel >> 24) & 0xff) * material.opacity;
    if(opaci < 250) {
        // '+' binds tighter than '<<' in JavaScript, so each shift needs its own parentheses
        var backColor = (buffer[ offset ] << 24) + (buffer[ offset + 1 ] << 16) + (buffer[ offset + 2 ] << 8);
        texel = texel * opaci + backColor * (1-opaci);
    }

    buffer[ offset ] = (texel & 0xff0000) >> 16;
    buffer[ offset + 1 ] = (texel & 0xff00) >> 8;
    buffer[ offset + 2 ] = (texel & 0xff);
    buffer[ offset + 3 ] = material.opacity * 255;
}
To support lighting, we first store the lighting colors in a palette:
var diffuseR = material.ambient.r + material.color.r * 255;
// diffuseG and diffuseB are computed the same way

if ( bSimulateSpecular ) {

    var i = 0, j = 0;
    while(i < 204) {
        var r = i * diffuseR / 204;
        var g = i * diffuseG / 204;
        var b = i * diffuseB / 204;
        if(r > 255) r = 255;
        if(g > 255) g = 255;
        if(b > 255) b = 255;

        palette[j++] = r;
        palette[j++] = g;
        palette[j++] = b;
        ++i;
    }

    while(i < 256) { // plus specular highlight
        var r = diffuseR + (i - 204) * (255 - diffuseR) / 82;
        var g = diffuseG + (i - 204) * (255 - diffuseG) / 82;
        var b = diffuseB + (i - 204) * (255 - diffuseB) / 82;
        if(r > 255) r = 255;
        if(g > 255) g = 255;
        if(b > 255) b = 255;

        palette[j++] = r;
        palette[j++] = g;
        palette[j++] = b;
        ++i;
    }

} else {

    var i = 0, j = 0;
    while(i < 256) {
        var r = i * diffuseR / 255;
        var g = i * diffuseG / 255;
        var b = i * diffuseB / 255;
        if(r > 255) r = 255;
        if(g > 255) g = 255;
        if(b > 255) b = 255;

        palette[j++] = r;
        palette[j++] = g;
        palette[j++] = b;
        ++i;
    }
}

              

At run time, we fetch the lighting color according to the pixel normal:
var tdim = material.texture.width;
var isTransparent = material.transparent;
var cIndex = (n > 0 ? (~~n) : 0) * 3;   // quantize the normal-derived intensity to a palette index
var tbound = tdim - 1;
var tdata = material.texture.data;
var tIndex = (((v * tdim) & tbound) * tdim + ((u * tdim) & tbound)) * 4;

if ( !isTransparent ) {
    buffer[ offset ] = (material.palette[cIndex] * tdata[tIndex]) >> 8;
    buffer[ offset + 1 ] = (material.palette[cIndex+1] * tdata[tIndex+1]) >> 8;
    buffer[ offset + 2 ] = (material.palette[cIndex+2] * tdata[tIndex+2]) >> 8;
    buffer[ offset + 3 ] = material.opacity * 255;
} else {
    var opaci = tdata[tIndex+3] * material.opacity;
    var foreColor = ((material.palette[cIndex] * tdata[tIndex]) << 16)
        + ((material.palette[cIndex+1] * tdata[tIndex+1]) << 8 )
        + (material.palette[cIndex+2] * tdata[tIndex+2]);

    if(opaci < 250) {
        // '+' binds tighter than '<<' in JavaScript, so each shift needs its own parentheses
        var backColor = (buffer[ offset ] << 24) + (buffer[ offset + 1 ] << 16) + (buffer[ offset + 2 ] << 8);
        foreColor = foreColor * opaci + backColor * (1-opaci);
    }
    buffer[ offset ] = (foreColor & 0xff0000) >> 16;
    buffer[ offset + 1 ] = (foreColor & 0xff00) >> 8;
    buffer[ offset + 2 ] = (foreColor & 0xff);
    buffer[ offset + 3 ] = material.opacity * 255;
}

Optimization: use a blockFlags array to track which blocks need to be cleared, and a blockMaxZ array to record each block's farthest stored depth. If an incoming triangle's nearest depth is greater than blockMaxZ[blockId], the block needn't be drawn at all; a sketch of the test follows.
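Here is a minimal sketch of that block-culling test in JavaScript, assuming 8x8 blocks and the canvasWidth/canvasHeight from earlier; blockMaxZ follows the text above, the other names are illustrative:

   var blockShift = 3;                          // log2 of the 8-pixel block size
   var blockWidth = canvasWidth >> blockShift;  // number of blocks per row
   var blockMaxZ = new Float32Array( blockWidth * ( canvasHeight >> blockShift ) );
   for ( var i = 0; i < blockMaxZ.length; i ++ ) blockMaxZ[ i ] = Infinity; // nothing occludes yet

   function blockId( x, y ) {
      return ( y >> blockShift ) * blockWidth + ( x >> blockShift );
   }

   // Skip a whole 8x8 block when the triangle's nearest depth lies behind
   // everything already drawn there; blockMaxZ[id] is only lowered from
   // Infinity once every pixel of the block has been covered.
   function canSkipBlock( x0, y0, triangleMinZ ) {
      return triangleMinZ > blockMaxZ[ blockId( x0, y0 ) ];
   }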


Monday, July 21, 2014

Printing your 3D models with Arca3D

Arca3D is a platform that lets you store your 3D models. More interestingly, it can also help you print them.

 First, select a model you uploaded.
 

Next, click the Download STL File button.

Then you can check it in an STL viewer.

Finally, send the *.stl file to your 3D printer and get your own physical 3D model.





Tuesday, July 8, 2014

Arca3D LCD

Friday, July 4, 2014

Arca3D model

Monday, June 9, 2014

JavaScript singleton pattern

Implementing the singleton design pattern in JavaScript can be a struggle. This is my first experiment:
     
View = {

    _viewer: undefined,

    initView: function() {
        // init view
    },

    getView: function() {
        // get view
        return this._viewer;
    }

};
But I figured out this approach can't express encapsulation: the user can access _viewer directly. Therefore, I decided to rewrite it:
     
var View = ( function() {

    var _viewer;

    function init() {
        // init view
    }

    function getView() {
        // get view
        return _viewer;
    }

    return {
        init: function() { init(); },
        getView: function() { return getView(); }
    };

} ) ();
This approach achieves encapsulation, and you call its methods like View.getView(), but you have to write the interface twice: once in the function definitions and once in the returned object. Finally, I decided on the approach below, which is the best solution for me so far.
// non-strict mode:
var View = function() {

    if ( arguments.callee._singletonInstance )
        return arguments.callee._singletonInstance;
    arguments.callee._singletonInstance = this;

    var _viewer;

    this.init = function() {
        // init view
    };

    this.getView = function() {
        // get view
        return _viewer;
    };
};

new View();

// strict mode:
var View = function() {

    if (View.prototype._singletonInstance)
        return View.prototype._singletonInstance;

    View.prototype._singletonInstance = this;

    var _viewer;

    this.init = function() {
        // init view
    };

    this.getView = function() {
        // get view
        return _viewer;
    };
};

new View();
With this approach you can still call the methods the same way: new View() always returns the same instance, so new View().getView() works, and you don't need to define the interface twice.

Reference:
http://stackoverflow.com/questions/1635800/javascript-best-singleton-pattern
http://fstoke.me/blog/?p=1932
http://www.dotblogs.com.tw/blackie1019/archive/2013/08/30/115977.aspx

Monday, May 26, 2014

Screen Space Ambient Occlusion demo


Link: http://dsmu.me/sponza/sponza.html

This demo supports desktop web browsers and Android Chrome. It is based on the three.js framework, and the scene assets are from Crytek Sponza. In order to reduce the loading time, I suggest using three.js's binary format, converted with the convert_obj_three tool. In my experience, it saves about 80% of the loading time, and the file size is about half. This SSAO demo uses two render targets and three passes (a sketch of the setup follows the list):
  1. Depth pass: store the logarithmic depth, packed into an RGBA render-target texture.
  2. Diffuse pass: store the lit diffuse color into an RGB render target.
  3. Post-processing pass: unpack the log depth from the depth texture and linearize it, fetch the diffuse color from the diffuse render target, then compute the SSAO term with a spiral sampling approach and multiply it with the diffuse color.
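A minimal sketch of how these passes can be wired up with the three.js API of that era; depthPackMaterial, ssaoMaterial, and the target names are illustrative, not the demo's actual code:

   var depthTarget = new THREE.WebGLRenderTarget( width, height );    // packed log depth (RGBA)
   var diffuseTarget = new THREE.WebGLRenderTarget( width, height );  // lit diffuse color

   function renderFrame() {
      // 1. Depth pass: render the scene with a depth-packing override material.
      scene.overrideMaterial = depthPackMaterial;
      renderer.render( scene, camera, depthTarget );

      // 2. Diffuse pass: render the lit scene into the diffuse target.
      scene.overrideMaterial = null;
      renderer.render( scene, camera, diffuseTarget );

      // 3. Post-processing pass: a full-screen quad whose shader unpacks and
      //    linearizes the depth, computes spiral-sampled SSAO, and modulates
      //    the diffuse color.
      ssaoMaterial.uniforms.tDepth.value = depthTarget;
      ssaoMaterial.uniforms.tDiffuse.value = diffuseTarget;
      renderer.render( postScene, postCamera );
   }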
Snapshots:
Desktop web browser & Android Chrome

Desktop web browser

Android Chrome

Friday, March 14, 2014

30+ Cross Platform Mobile App and Game Development Tools



http://www.riaxe.com/blog/top-cross-platform-mobile-development-tools/?fb_action_ids=10201601507067999&fb_action_types=og.comments&fb_source=other_multiline&action_object_map=[233188210202347]&action_type_map=[%22og.comments%22]&action_ref_map=[]

Monday, March 10, 2014

Using remote debugging for HTML5 on iOS

Debugging on mobile devices always falls back on your desktop machine, just like Android Chrome remote debugging. (http://coderellis.blogspot.tw/2013/04/porting-html5-game-to-mobile-platform.html)

In this article I describe the approach using Web Inspector. Its advantage is that it is natively supported by Apple, so we don't need to install any plugin. Its disadvantage is that you must have a Mac, because it requires Safari 6 or later, and Safari for Windows only goes up to version 5.1.7.

1. Enable Web Inspector
Go to your iOS device's Settings. Enable Web Inspector under Safari -> Advanced. Then connect your iOS device to your Mac over USB.

2. Enable the Develop menu in Safari
In Safari on your Mac, open Preferences -> Advanced and check the option that enables the Develop menu.
Open the web page you want to debug on the iOS device. Then go to Develop in the Safari menu on your Mac; you should see your iOS device. Click the page to start debugging.



3. Enable WebGL
In the default settings of Safari on the Mac, WebGL is disabled; if we want to use this feature, we have to enable it in the Develop menu.

4. Debug
Finally, we can start to debug the iOS web page on our Mac. The debugging steps are simple and familiar; the workflow is like other browsers' dev tools.


Sunday, March 9, 2014

WebGL Demo 02: 3D model loader and preview

When you want to display a complex 3D mesh in your 3D application, you will want an external model format to use, such as COLLADA (.dae), FBX (.fbx), or OBJ (.obj). However, if you want to build a 3D engine, you will want to support even more formats. That is why I started to research the FBX SDK.


The FBX SDK provides importers/exporters for the .fbx, .dxf, .dae, and .obj file formats; the .3ds format has been retired. FBX technology can be used for sharing scene assets, storage, packaging models for sale, and processing animation, and it supports import/export in Autodesk's products (3ds Max, Maya, AutoCAD...). The FBX SDK, part of Autodesk's FBX technology, is a C++ software development kit. You can use it to create plug-ins, converters, and other applications. The FBX SDK's source code is not public, so we can only use its SDK interface; if you want to redistribute or repackage it, you need written permission from Autodesk, and you must include a link to the Autodesk FBX website so users can install the required version of the FBX SDK. (http://usa.autodesk.com/adsk/servlet/pc/item?siteID=123112&id=10775847)

The FBX SDK is divided into three parts:
  • FBX SDK
    A C++ library you can integrate with your content-creation pipeline to build file parsers, converters, importers, and exporters.
  • FBX extension
    For customizing the behavior of FBX importing and exporting by defining a dynamically loaded library.
  • Python FBX
    A Python binding of the FBX SDK C++ library; it allows us to write Python scripts that use the classes and functions of the FBX SDK.
FBX formats:

Most classes in the FBX SDK derive from FbxObject.

FbxScene: the FBX scene graph is organized as a tree of FbxNode objects. These nodes are scene elements such as cameras, meshes, and lights; the elements themselves are specialized instances of FbxNodeAttribute.

I/O objects: FbxImporter and FbxExporter objects are used to import and export scenes.

Collections: most container classes derive from the FbxCollection class. FbxScene itself derives from FbxCollection through FbxDocument.

FbxScene:
The scene graph is abstracted by the FbxScene class. It is a hierarchy of nodes. A scene element is defined by combining an FbxNode with a subclass of FbxNodeAttribute.

FbxNodeAttribute: an FbxNode combines transformation data with node attributes (FbxNodeAttribute).

FbxSurfaceMaterial:

FbxTexture:


Load fbx/dae/obj model to scene:
#include "fbxsdk.h"
#include "fbxfilesdk/fbxio/fbxiosettings.h"

// Create the FBX SDK manager
FbxManager* lSdkManager = FbxManager::Create();

// Create an IOSettings object.
FbxIOSettings * ios = FbxIOSettings::Create(lSdkManager, IOSROOT );
lSdkManager->SetIOSettings(ios);

// ... Configure the FbxIOSettings object ...

// Create an importer.
FbxImporter* lImporter = FbxImporter::Create(lSdkManager, "");

// Declare the path and filename of the file containing the scene.
// In this case, we are assuming the file is in the same directory as the executable.
const char* lFilename = "file.xxx"; // the file extension can be dae, obj, fbx

// Initialize the importer.
bool lImportStatus = lImporter->Initialize(lFilename, -1, lSdkManager->GetIOSettings());
// Create a new scene so it can be populated by the imported file.
FbxScene* lScene = FbxScene::Create(lSdkManager,"myScene");

// Import the contents of the file into the scene.
lImporter->Import(lScene);
// The file has been imported; we can get rid of the importer.
lImporter->Destroy();

Convert an FBX model:
My demo engine uses three.js, which provides the fbx/convert_to_threejs.py converter built on the Python binding of the FBX SDK. You can use it to convert your fbx file. In your terminal window, enter:
convert_to_threejs.py [source_file] [output_file] [options]

In the output folder you will find a three.js in-house model file in JSON format, and you will also see that the textures your model needs have been copied into your folder (that part is my contribution to three.js).

Preview in three.js:
Load your model
function loadScene() {
  var loader = new THREE.SceneLoader();
  loader.load( 'outFBX/basketball/basketball.js', callbackFinished );
}

function callbackFinished( result ) {
                         
  result.scene.traverse( function ( object ) {                    
     _scene.add( object );
  } );                
}

Result:

The model was downloaded from: http://tf3dm.com/3d-model/official-nba-spalding-basketball-86751.html

Reference:
Autodesk FBX SDK documentation: http://docs.autodesk.com/FBX/2013/ENU/FBX-SDK-Documentation/index.html
three.js: https://github.com/mrdoob/three.js

Tuesday, February 25, 2014

WebGL Demo 01: 3D transformation and Lambert lighting


This demo covers some background knowledge of 3D computer graphics and shows how to use WebGL to do 3D transformation and Lambert lighting.


Using maximus.js, authored by Ellis Mu


In this demo, I developed a WebGL framework named maximus.js to prove out the functions that three.js implements. I chose the same 3D coordinate system as three.js, which is a Y-up, right-handed system.

1. Vertex buffer
     In the 3D graphics pipeline, the primitive data formats that the GPU accepts are the vertex buffer and the index buffer.

What is a vertex buffer? Every primitive is constructed from vertices. In the case of a cube, we create a mesh with 24 vertices, because it has six faces and each face has four vertices.
     _cubeVertexBuffer = gl.createBuffer();  // create a buffer
     gl.bindBuffer( gl.ARRAY_BUFFER, _cubeVertexBuffer ); // bind this buffer as vertex buffer
    
     var vertices = [             // Create these vertices
              // Front face
              -1.0, -1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0,  
              1.0, -1.0, 1.0,  0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0,
              1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0,
              -1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0,....
    ];
    
    gl.bufferData( gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW ); // And then uploading these vertices data to vertex buffer
2. Index buffer
     An index buffer helps us reduce the number of vertices used. For example, a quad consists of two triangles, so without indices we would need to define six vertices. With an index buffer, we only record the quad's 4 vertices and use indices to define the two primitives. This approach saves a massive amount of data for high-polygon models.

In the index buffer we record [0,1,2,2,3,0] to define two primitives; the GPU looks up vertices 0, 1, 2, 3 in that order from the vertex data during the primitive assembly stage, as shown below.
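A minimal sketch of creating the matching index buffer; the _cubeIndexBuffer name is illustrative, mirroring the vertex-buffer snippet above:

     _cubeIndexBuffer = gl.createBuffer();  // create a buffer
     gl.bindBuffer( gl.ELEMENT_ARRAY_BUFFER, _cubeIndexBuffer ); // bind it as an index buffer

     var indices = [ 0, 1, 2,  2, 3, 0 ];   // two triangles sharing vertices 0 and 2
     gl.bufferData( gl.ELEMENT_ARRAY_BUFFER, new Uint16Array( indices ), gl.STATIC_DRAW );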


3. 3D transformation
     In 3D space we use many matrix transformations. In this figure, we transform our objects toward the near clipping plane, which maps to the final 2D screen space. Every model has its own modelToWorld matrix to transform it into world space. A camera moving in a 3D scene has its own camera view matrix, and the screen space is defined by the projection matrix. Therefore, we use this formula to transform a model from object space to screen space (column-vector convention, as WebGL uses):

        Model_screen = projectionMtx X world2cameraMtx X obj2worldMtx X Model_object
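A minimal sketch of composing those matrices in JavaScript with a gl-matrix-style API (gl-matrix is an assumption here for illustration; maximus.js has its own math types):

        var mvp = mat4.create();
        mat4.multiply( mvp, world2cameraMtx, obj2worldMtx );  // model-view matrix
        mat4.multiply( mvp, projectionMtx, mvp );             // projection * view * model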

4. Shader
    The graphics pipeline has gone through several generations, from fixed-function and the shader-model pipeline to deferred rendering and physically based lighting. This WebGL demo uses a basic shader-model pipeline: we compile a vertex shader and a fragment shader and link them into a program. There are three types of variables we need to know (see the sketch after this list).
  • Attribute
    An attribute is the per-vertex input from the vertex buffer data; its location is bound through an index such as vertexPositionAttribute.
  • Varying
    A varying is an output of the vertex shader; it is interpolated by the rasterizer and sent to the pixel shader.
  • Uniform
    Uniform variables are set from your application, e.g. with uniformMatrix4fv or uniform4fv.
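A minimal sketch of the three variable types in a vertex shader (GLSL embedded as a JavaScript string; the names are illustrative):

     var vertexShaderSrc = [
        'attribute vec3 aPosition;',   // attribute: per-vertex input from the vertex buffer
        'attribute vec3 aNormal;',
        'uniform mat4 uMvpMatrix;',    // uniform: set from the application via uniformMatrix4fv
        'varying vec3 vNormal;',       // varying: interpolated and passed to the fragment shader
        'void main() {',
        '    vNormal = aNormal;',
        '    gl_Position = uMvpMatrix * vec4( aPosition, 1.0 );',
        '}'
     ].join( '\n' );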
5. Lighting
     Lambert lighting is a basic lighting algorithm in 3D computer graphics. It only has to compute the dot product of the light vector and the pixel's normal vector. If the angle between them is 90 degrees or more, the dot product is non-positive and the surface receives no light; otherwise the intensity is proportional to the cosine of the angle.
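A minimal sketch of the Lambert term in plain JavaScript, with vectors as [x, y, z] arrays and both inputs assumed normalized:

     function lambert( normal, lightDir ) {
        var d = normal[0] * lightDir[0] + normal[1] * lightDir[1] + normal[2] * lightDir[2];
        return Math.max( d, 0.0 );  // an angle >= 90 degrees clamps to 0 (no lighting)
     }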


6. Using three.js
     Finally, I wrote the same demo using three.js, and it becomes very simple. We only need a few lines of code like this:
              function init() {
                renderer = new THREE.WebGLRenderer();
                renderer.setSize( 400, 300 );
                renderer.setClearColor( 0x000000, 1.0 );  // set frame buffer clear color
                document.body.appendChild( renderer.domElement );
                
                scene = new THREE.Scene();
                camera = new THREE.PerspectiveCamera( 45, 400/300, 0.1, 10000 ); // Projection matrix setup
                
                camera.position.set( 0, 0, 20 );  // Setup camera 
                camera.lookAt( scene.position );
                
                var geometry = new THREE.CubeGeometry( 5, 5, 5 );  // Create cube geometry and material
                var material = new THREE.MeshLambertMaterial( { color: 0xFF0000 } );
                mesh = new THREE.Mesh( geometry, material );
                
                scene.add( mesh );  // Add cube to scene manager
                
                var light = new THREE.DirectionalLight( 0xFFFFFF, 1.0 ); // Create lighting
                light.position.set( 10, 0, 10 );
                scene.add( light );
            }

            function updateFrame() {
                requestAnimationFrame( updateFrame );       
                mesh.rotation.y += THREE.Math.degToRad( 5 ); // Rotate the cube
                
                render();
            }
            
            function render() {
                renderer.render( scene, camera );  // render the meshes of the scene using this camera viewport.
            }

Using three.js


Github: https://github.com/DaoshengMu/Maximus-WebGL

Wednesday, February 19, 2014

Move your WordPress website

  1. Introduction
    A WordPress site uses only two chunks of data: the WordPress installation files and the MySQL database.
  2. Back up WordPress
    Download the WordPress files under the www directory. Most of them are WordPress system files; only the data under wp-content is site data. To keep the environment consistent, it's best to copy everything at once; a future upgrade then simply replaces the WordPress directory. To be safe, back up the wp-content directory and wp-config.php first.
  3. Back up MySQL
    Use the MySQL dump command to export the database WordPress uses:
    mysqldump -u ellis -p wordpress > xxx.sql
    This example uses the MySQL account ellis (with a password prompt) to export the database named wordpress to the file xxx.sql.
  4. Install and integrate
    First, set up the WordPress environment on the destination host, including Apache, MySQL, and PHP. Then create a database in MySQL (this guide uses wordpress as the database name) and import the xxx.sql we exported earlier (for example, mysql -u ellis -p wordpress < xxx.sql).
    Next, replace the current WordPress directory with the files taken from the old host. Finally, open wp-config.php, adjust the settings, and check these fields in particular:
    // ** MySQL settings - You can get this info from your web host ** //
    /** The name of the database for WordPress */
    define('DB_NAME', 'wordpress');  /** the database name */
    /** MySQL database username */
    define('DB_USER', 'ellis');   /** the account used to log into MySQL */
    /** MySQL database password */
    define('DB_PASSWORD', 'xxxxxxx');  /** the user's password */
    /** MySQL hostname */
    define('DB_HOST', 'localhost');   /** the host location, here localhost */
    At this point, browsing to your URL should show the migrated site.
  5. Troubleshooting
    A few problems you may run into during migration:
    • The site doesn't come up after migration
      The site address recorded in the database may not be the new address. To fix it, add to your wp-config.php:
      define('WP_HOME','http://localhost/xxx/');  /** your new URL */
      define('WP_SITEURL','http://localhost/xxx/');
      Then in wp-content/themes/<your current theme>/function.php add:
      update_option('siteurl','http://localhost/xxx/'); /** your new URL */
      update_option('home','http://localhost/xxx/');
      Visit the site again; once it works, remember to remove all of the lines above, because these settings shouldn't stay there permanently.
    • Can't log into wp-admin
      There are two possible causes. One is that the administrator changed the login URL (I changed mine to booya-admin, for example). The other is that some plugins didn't survive the migration: rename the wp-content/plugins directory so WordPress can't find any plugins and lets you into wp-admin. Once inside, you'll see every plugin deactivated; rename the directory back, log in again, and activate them one by one.
    • Change the WordPress language setting
      First download the language files (Traditional Chinese is at http://tw.wordpress.org/). Put all the files from its wp-content/languages into your wp-content/languages, or just copy zh_TW.mo. Then edit wp-config.php

      and define the language you want, e.g.:
      define('WPLANG', 'zh_TW'); /** zh_TW is your zh_TW.mo file name */

Monday, February 17, 2014

Looking back at 2013

A lot happened in 2013. Our own team announced we would take a break for a while; everyone's focus shifted from the startup to family and future careers. Perhaps this round of development simply wore everyone out. We also tried taking on some contract work, and I learned how much my negotiation skills were lacking: I sometimes raised the client's expectations when in fact my technology and my partners' commitment weren't up to that standard. We should ask ourselves at the very beginning whether we genuinely like a project; if not, we should decline right away rather than damage the relationship with the client.

After that I went back to researching HTML5, which interests me. Going from building a simple game to being able to interact with people in the community, I came to understand how much motivation helps us learn.

I got married in 2013. I really didn't expect to marry this soon; there are many things I still haven't done. Ever since entering the workforce I kept telling my classmates that once I had saved enough money I would go study abroad, but after four years of defense-industry substitute service I had saved less than I imagined, only enough for one year abroad... I also discovered that the computer science master's programs I love most turn away students who already hold a domestic master's degree... Nothing went the way I had once planned, and only then did I realize how few people had been able to give me sound advice while I was growing up.

This year I built a 3D engine for the third time. Tell me to build one and I pretty much know how to do it and how long it will take; the risk is down to around 10%. But while building it I can't help wondering: once it's done, will it amount to anything for me? The first engine was my learning experiment, but nobody wanted to use it once it was finished, and even the attempt to promote it commercially was killed outright by the boss. The second was a genuinely commercial engine; more than half of the company's projects used it, but we also pushed many people into crunch on deadlines and drove many to resign. It was, in truth, a weapon. Developing that second engine introduced me to many ambitious teams in overseas technical communities. They build technology as a team with one strong leader, never more than ten people, and they find it deeply satisfying. Looking at myself: although I could discuss the key issues with their leads and was regarded as professional, my language skills still kept me from asking about the details, and one thought kept growing in me: why do I need to do, inside a big company, what a small team can accomplish, while the big company's decision-making slows our progress in the market? Our product was genuinely good, but the community simply didn't know us...

By the third engine I was building almost like a robot. Watching Minko, the competitor I most wanted to challenge, remain so ambitious, I tried to build the best product the way they do, but my motivation simply wasn't enough. I roughly knew the finished engine might be only a little better than that "weapon", because it would probably have no users. And it wouldn't improve the lives of the people around us, because its vision was somewhat blurry.

So I resigned, leaving the company I had been with for almost six years. Someone once said the relationship between a company and an employee has three levels: the lowest is built on money, the second on emotional bonds, and the third on shared ideals. At its best, the third level means both sides strive and even argue over a vision, and after succeeding they happily reminisce and share the achievement. That is my favorite kind of working environment, and the reason I keep pushing myself to improve: I want to go out and find that Shangri-La.


Tuesday, February 11, 2014

Mozilla can produce near-native performance on the Web

How fast is asm.js?

The article below collects a series of benchmarks. asm.js brings us promising performance, but it is not faster than plain JavaScript in every situation; moreover, building our game engine with asm.js/Emscripten complicates debugging and the interface with WebGL, so weigh it carefully before committing to a design.

http://arstechnica.com/information-technology/2013/05/native-level-performance-on-the-web-a-brief-examination-of-asm-js/


Wednesday, February 5, 2014

A full breakdown of mobile GPUs

The articles below analyze the architectures of mobile GPU vendors such as ARM Mali, PowerVR, and Adreno. They cover how the graphics pipeline has evolved for mobile devices, and the efforts each vendor makes to save power and improve performance.

Reference:
http://www.igao7.com/1217-vv-gpu.html
http://www.igao7.com/1218-vv-gpu.html

Sunday, January 19, 2014

HTML5DevConf: Kruger, Marcus, "Launching Goo Create - WebGL graphics made easy"



Goo Engine and Goo Create
http://www.gootechnologies.com/

Thursday, January 16, 2014

Write your own WordPress plugin

I will show how to write a jQuery WordPress plugin.

First, create a folder named after your plugin under %WordPressRoot%/wp-content/plugins. Make the name as unique as possible to avoid colliding with packages in the WordPress plugin store later, e.g. ellis123456. Then create a php file with the same name as the folder, ellis123456.php.

Start by adding the description header, stating the version and author information. This content becomes the plugin's summary in WordPress, so be sure to fill it out completely.
  
/**
 * Plugin Name: ellisJQuery
 * Plugin URI: http://URI_Of_Page_Describing_Plugin_and_Updates
 * Description: Ellis's specific version of JQuery for WordPress.
 * Version: 0.0.003
 * Author: Ellis Mu
 * Author URI: http://URI_Of_The_Plugin_Author
 * License: GPL2
 */
Define the plugin's unique id and, when you need internationalization, its languages directory:
load_plugin_textdomain('ellis123456', false, basename( dirname( __FILE__ ) ) . '/languages' );

With the definitions in place, we can start loading the jQuery library:
function initJQuery() {
    wp_register_style( 'jquery', plugins_url('style/eggplant/jquery-ui-1.10.4.custom.min.css', __FILE__) );
    wp_enqueue_style( 'jquery' );

   /*
    * We should load jquery before load jquery ui.
    */
   wp_register_script( 'jquery', 'js/jquery-1.10.2.js');  // I don't know why newer version has error
   wp_enqueue_script( 'jquery' );
   wp_enqueue_script( 'jqueryui', plugins_url( '/js/jquery-ui-1.10.4.custom.min.js', __FILE__ ), array('jquery') );
   wp_enqueue_script( 'ellis123456', plugins_url( '/js/ellis123456.js', __FILE__ ), array('jquery') );
 }

 add_action('wp_enqueue_scripts', 'initJQuery'); // hook wp_enqueue_scripts to load the external jQuery / jQuery UI dependencies
Now define the shortcode:
function popup_dialog($atts, $content = null) {
     extract(shortcode_atts(array(
         "id" => '',
         "width" => '684',
         "height" => '385',
         "title" => '',
         "text" => '',
         "left" => ''
     ), $atts));

    return '
';  // popDialogFunc is a function defined in JavaScript; we explain it below
}

add_shortcode("popDialog", "popup_dialog");  // register popDialog as the shortcode we will use in WordPress
Next, create ellis123456.js under the plugin's js/ directory and implement:
function popDialogFunc() {

    var opt = {
        autoOpen: false,
        modal: true,
        open: function() {
            jQuery('.ui-widget-overlay').bind('click', function() {
                jQuery("#dialog-modal1").dialog('close');
            });
        }
    };

    document.getElementById("dialog-modal1").style.visibility = 'visible';

    jQuery(document).ready(function($) {
        $("#dialog-modal1 p").show();
        $("#dialog-modal1").dialog(opt).dialog('open');
    });
}
Finally, write popDialog into a WordPress page to complete our plugin experiment:
[popDialog id ="abc" width="122" height="66" title="Who are you?" text="I am the god123" left = "40"]

Reference:
http://codex.wordpress.org/Writing_a_Plugin
http://markjaquith.wordpress.com/2006/03/04/wp-tutorial-your-first-wp-plugin/


Sunday, January 5, 2014

Using MAMP to install WordPress

MAMP is a handy application that installs and configures Apache, MySQL, and PHP directly on your local Mac. (The Windows counterpart is WAMP.)

1. It is very simple. First, download the installer from MAMP (http://www.mamp.info/en/index.html). There is a regular version and a Pro version; the Pro version costs money, so this guide uses the regular version.

The interface after installation looks like this:

Once installation completes, our home page is served in the browser on port 8888.


2. Next, we install WordPress
Before installing WordPress, remember to set up a database account in MySQL. The steps are:
  • In a Terminal window, run: /Applications/MAMP/Library/bin/mysql --host=localhost -uroot -proot
    (http://documentation.mamp.info/en/mamp/how-tos/using-mysql-command-line)
  • Inspect the current databases: show databases;
  • Create a new database: CREATE DATABASE wordpress;
  • Create your user account: CREATE USER ellis@localhost;
  • Set the user's password: SET PASSWORD FOR ellis@localhost = PASSWORD("xxxxxxxxx");
  • Bind the account to the database: GRANT ALL PRIVILEGES ON wordpress.* TO ellis@localhost IDENTIFIED BY 'xxxxxxxxx';
  • Reload the privileges: FLUSH PRIVILEGES;
  • Leave the shell: exit
3. Finally, we can install WordPress
Download the zip file from the WordPress home page (http://wordpress.org/download/). Unzip it into /Applications/MAMP/htdocs/, then browse to http://localhost:8888/wordpress/ to reach the setup page.

With that, your local WordPress installation on the Mac is complete!