Planning the Spontaneous

It's more than just a blueprint.

Renderer: Scene Graph

Posted by Robert Chow on 01/12/2009

So next, I decided to implement the scene graph, building on the Renderer API I showed you previously.

Essentially, as I have more or less created the majority of Renderer's functionality, the API at the time of writing that post did not reflect what I had originally implemented. To render each vertex buffer object, you originally had to call Renderer.Draw(vertexBufferObject), which would draw it to the device context. You would then call Renderer.DrawToTexture(..params..) afterwards to draw the scene to a texture. Quite lame really, especially because it did not incorporate the scene graph.

The scene graph itself looks rather different this time from the one I outlined in this post.  In that post, the client pushes and pops matrices on and off the transformation stack itself.  In Renderer, however, it is much different.  No push.  No pop.

OOP(s, I did it again)

Originally you would set up the transformation matrix, possibly using pushes and pops, and then submit your node – the node would take on the position specified by the current transformation matrix.  But for now, we live in an object-orientated world.  And as a result, you are able to manipulate the nodes themselves without having to go through a scene graph abstraction.  This means that you can specify the position each node has relative to its parent node – no more pushes and pops.

Going through a scene graph abstraction may look like this:

Scenegraph.LoadIdentity();
Scenegraph.Push();
Scenegraph.Translate(1.0f, 0.0f, 0.0f);
parentA = Scenegraph.AddNode(rootNode, renderableA);
Scenegraph.Push();
Scenegraph.Translate(0.0f, 1.0f, 0.0f);
parentB = Scenegraph.AddNode(parentA, renderableB);
Scenegraph.Pop();
Scenegraph.Scale(0.5f, 1.0f, 2.0f);
parentC = Scenegraph.AddNode(parentA, renderableC);
Scenegraph.Pop();

Changing to something more object-orientated can result in something like this:

RootNode = new Node(Matrix.Identity());
RootNode.AddNode(nodeA = new Node(Matrix.Translate(1.0f, 0.0f, 0.0f), renderableA));
nodeA.AddNode(nodeB = new Node(Matrix.Translate(0.0f, 1.0f, 0.0f), renderableB));
nodeA.AddNode(nodeC = new Node(Matrix.Scale(0.5f, 1.0f, 2.0f), renderableC));

I think you’ll agree it’s just that little bit cleaner.
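To give an idea of how little machinery this needs, here is a minimal sketch of what such a Node might look like.  To be clear, the names here (IRenderable, Render, Draw) are my own illustrations, not necessarily Renderer's actual code – the point is that each node stores only its transform relative to its parent, and the world transform is composed during traversal:

class Node
{
    private Matrix relative;               // transform relative to the parent node
    private IRenderable renderable;        // null for pure grouping nodes
    private List<Node> children = new List<Node>();

    public Node(Matrix relative, IRenderable renderable = null)
    {
        this.relative = relative;
        this.renderable = renderable;
    }

    public void AddNode(Node child)
    {
        children.Add(child);
    }

    // Walk the tree, composing each node's relative transform with the
    // accumulated parent transform - the matrix stack is now implicit
    // in the recursion, so the client never pushes or pops anything.
    public void Render(Matrix parentWorld)
    {
        Matrix world = parentWorld * relative;
        if (renderable != null)
            renderable.Draw(world);
        foreach (Node child in children)
            child.Render(world);
    }
}

A call of RootNode.Render(Matrix.Identity()) would then draw the whole scene, with every node positioned relative to its parent.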

Another great (I might add) functionality that I’ve included is the ability to detach and reattach branches of nodes.  This is particularly useful for background rendering.  For example, a client may want to render a large complex image onto a texture, and then use that texture multiple times in another complex scene.  This texture, however, is to be rendered offscreen.  You do not want to have to collapse and rebuild the tree from scratch for each different image, especially if it means the tree will be constantly changing between only a couple of states.  So why not be able to detach BranchA, attach BranchB, render to texture, detach BranchB and reattach BranchA in one swift movement?  It saves all the time spent rebuilding the original tree.
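In code, that swap might look something like this.  Detach and RenderToTexture are illustrative names – I'm sketching the idea rather than quoting Renderer's exact API:

// Render an alternative sub-scene to a texture without
// rebuilding the tree each time.
sceneRoot.Detach(branchA);            // branchA keeps all of its children intact

sceneRoot.AddNode(branchB);
Renderer.RenderToTexture(texture);    // offscreen pass using branchB
sceneRoot.Detach(branchB);

sceneRoot.AddNode(branchA);           // original tree restored, no rebuilding

Because each branch keeps its own relative transforms, detaching and reattaching it changes nothing about its internal layout – only where it hangs in the tree.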

No snazzy demo this time.  Soon, I promise.
