Planning the Spontaneous

It's more than just a blueprint.

Posts Tagged ‘Scene Graph’

MultiOpenGlControl Demo

Posted by Robert Chow on 16/04/2010

I’ve managed to grab some video capture software and also produce a small demo of using multiple SimpleOpenGlControls.  Each screen has its own camera, focusing on the same scene.  Using lighting and picking, clicking a face of the cube will change the scene, and the change is then reflected in all of the cameras.

I did have a little trouble with the picking at first – when I picked, I forgot to change the current camera, so it would register the mouse click on one screen whilst using the camera of another.  With that fixed, it was nice to be able to play around with Renderer again.
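For the curious, the fix boils down to something like this sketch – it is not the real demo code.  Each control is assumed to be paired with its own camera node, Renderer.SetCamera(Node) is the call that appears later in these posts, and Pick(x, y) is a hypothetical stand-in for the picking call:

using System.Collections.Generic;
using System.Windows.Forms;
using Tao.Platform.Windows;

private readonly Dictionary<SimpleOpenGlControl, Node> cameras =
    new Dictionary<SimpleOpenGlControl, Node>();

private void OnScreenMouseDown(object sender, MouseEventArgs e)
{
    var screen = (SimpleOpenGlControl)sender;

    // The original bug: picking without switching cameras first meant the click
    // on one screen was resolved against another screen's camera.
    renderer.SetCamera(cameras[screen]);

    // Now the pick uses the camera that actually drew this screen.
    renderer.Pick(e.X, e.Y);    // hypothetical picking call
}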

True, the video’s not supposed to be a hit, but at least it’s showing what I want it to.



Renderer: Demo

Posted by Robert Chow on 16/12/2009

So now that I’ve got a lot of my functionality going for me in this component, I thought I’d try and create a snazzy demo.

My supervisor’s obsessed with it. It’s like a little game – there is a rotating cube, and the challenge is you have to click on each face before you pass on to the next level. As you progress, the cube rotates just that little bit faster, making it harder to complete the challenge. Clicking on a face will turn that face on. Clicking on a face that is already on will turn it off – you can only progress when all 6 faces are switched on.
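Stripped right down, the rules amount to something like this sketch – the names here (FaceClicked, faceOn and friends) are mine, not the demo’s:

using System;

private readonly bool[] faceOn = new bool[6];
private int level = 1;
private float rotationSpeed = 1.0f;

private void FaceClicked(int faceIndex)
{
    // Clicking a face toggles it: off goes on, on goes back off.
    faceOn[faceIndex] = !faceOn[faceIndex];

    // Only progress once all six faces are switched on.
    if (Array.TrueForAll(faceOn, on => on))
    {
        level++;
        rotationSpeed *= 1.1f;                    // just that little bit faster
        Array.Clear(faceOn, 0, faceOn.Length);    // start the next level afresh
    }
}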

So you can automatically see that this little demo already involves the concept of vertex buffer objects (drawing of the cube), the scene graph (rotating cube), the camera (viewing the cube, using the scene graph) and picking (turning faces on/off). But what about the others?

Well, let’s place the scene into dark surroundings – we’re going to need lights to see our cube – that’s lighting handled. So where does texturing fit in?

Switching the faces on/off needs an indicator to show what state they are in.  We can easily do this by simply changing the colour of a face.  But that’s kinda boring.  So instead, we’re going to take the render-to-texture example, and slap that on to any face which is on.  So that’s textures included.

Here are some screenshots of this demo.

Demo Setup. From left to right: initial scene – just a white cube; let’s dim the lighting to, well, none; include some lights: green – centre, red – bottom, blue – left; create a texture.

Demo Rotate. The demo in motion – the camera is stationary, whereas the cube is rotating, and the lights are rotating independently around it.

Demo Play.  The first and second images show what happens when a face is clicked – the texture replaces the blank face.  Once all the faces are replaced with the texture, the demo advances to the next level.

Demo Camera.  This shows the added functionality of fixing the camera on to a light, so that the camera rotates around the cube too.  Here it is fixed on to the blue light.  This is done with ease by manipulating the scene graph.

I guess you can’t really get a grasp of how this demo works in its entirety – snapshots don’t really do it much justice.  I might try and upload a video or so – how, I’m unsure of – I’m sure I can probably find screen capture software around on the net.

And unfortunately, there is no score involved. Yet. That will come when fonts come.


Renderer: Lighting

Posted by Robert Chow on 15/12/2009

So these are tricky too.

OpenGL only allows 8 lights at any one time, and should you need more, they suggest you ought to reconsider how you draw your model.

As there is more than one light available, we need to be able to differentiate between each light.  We do this by declaring an enum.

public enum LightNumber
{
    GL_LIGHT0 = Gl.GL_LIGHT0,
    GL_LIGHT1 = Gl.GL_LIGHT1,
    // ...
    GL_LIGHT7 = Gl.GL_LIGHT7
}

An enum, like a class, can be used as a parameter type, with one restriction – the value of the parameter must be equal to one of the values declared within the enum.  In the context of lights, a LightNumber can only be GL_LIGHT0, GL_LIGHT1, GL_LIGHT2 and so forth up to GL_LIGHT7, giving us a maximum of 8 lights to choose from.  For an enum, the identifiers on the left-hand side are names, and these names can be referred to in code.  You can also assign these names values, as I have done – in this case, their corresponding OpenGL value.
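Because each name carries its OpenGL constant, getting back to the value the Tao binding expects is just a cast.  A tiny sketch, with purely illustrative variable names:

// GL_LIGHT3 is just a readable alias for the constant it was assigned,
// so a cast recovers the value OpenGL understands.
LightNumber light = LightNumber.GL_LIGHT3;
int glConstant = (int)light;    // the same value as Gl.GL_LIGHT3

// And because the parameter type is the enum, the compiler will only ever
// accept one of the eight declared names.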

So now we can set our lights, and attach them to a node, similar to the camera.  Kind of.  Unlike the camera, these lights can either be switched on or off, and have different properties to one another – going back to the lighting post, the ambient, diffuse and specular properties of a light can be defined. We could do this using two methods.

Renderer.SwitchLightOn(LightNumber lightNumber, bool on);
Renderer.ChangeLightProperty(LightNumber lightNumber, LightProperty lightProperty, Colour colour);

We could.

But we shouldn’t.

I’ve been reading a lot lately about refactoring – cleaning up code, making it better, more extensible, and easier to read. A few months ago, I stumbled across this guide to refactoring. A lot of it is simple and makes a lot of common sense. But it is intelligent – something which everyone only becomes every now and then.  And it’s definitely worth a read.  Another one I’ve been reading of late is this book on clean code.  How good really is your code?

The Only Valid Measurement Of Code Quality.  This image has been taken from http://sayamasihbelajar.wordpress.com/2008/10/24/

Naming conventions are rather important when it comes to something like this, and using a bool as a parameter doesn’t exactly mean much.  Once there is a bool, you automatically know that the method does more than one thing – one path for true, and another for false.  So split them.

Renderer.SwitchLightOn(LightNumber lightNumber);
Renderer.SwitchLightOff(LightNumber lightNumber);
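Internally, the split methods stay tiny.  This is only a hedged sketch of how they might wrap the Tao binding – the real implementation isn’t shown here, and LightProperty and Colour are assumed from the earlier signature:

public void SwitchLightOn(LightNumber lightNumber)
{
    // One method, one job: enable the given OpenGL light.
    Gl.glEnable((int)lightNumber);
}

public void SwitchLightOff(LightNumber lightNumber)
{
    Gl.glDisable((int)lightNumber);
}

public void ChangeLightProperty(LightNumber lightNumber, LightProperty lightProperty, Colour colour)
{
    // LightProperty is assumed to mirror GL_AMBIENT / GL_DIFFUSE / GL_SPECULAR,
    // and Colour is assumed to expose its channels as floats.
    Gl.glLightfv((int)lightNumber, (int)lightProperty,
                 new[] { colour.R, colour.G, colour.B, colour.A });
}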

There are a lot of wrongs and rights, but no concrete answer – just the advice that these books can give you.  So I’m not going to trip up on myself trying to give you some more advice – I’m only learning too.


Renderer: Camera

Posted by Robert Chow on 14/12/2009

Just slightly tricky.

Starting with the camera, there is always one, no less, no more.  Or so there should be.  So the way I’ve managed this is to let the Renderer component deal with it.

You have the scene graph, made of multiple nodes.  Attached to one of these nodes is the camera – the great flexibility of having a model like this is that the camera can be easily attached to anything.  Ask the Renderer to attach the camera to a node, and it will automatically attach the camera abstraction inside Renderer to that specific node.  This way, you do not have to worry about multiple cameras – there is only one, and this is being constantly moved if asked to.

With the camera attached to some node, we have to traverse up to the root node, multiplying the current matrix by each node’s transformation, and then inverting the result.  The result is then loaded as the base matrix for the model view.  Doing this shifts the entire world model by the negative of the camera position, and therefore the scene appears in the right place according to the camera.
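As a minimal sketch of that traversal – assuming the Node and Matrix types seen elsewhere in these posts, with member names of my own choosing – it might look like this:

private void LoadCameraMatrix(Node cameraNode)
{
    // Accumulate the camera's world transformation by walking up to the root.
    Matrix world = Matrix.Identity();
    for (Node node = cameraNode; node != null; node = node.Parent)
    {
        world = node.Matrix * world;
    }

    // Invert it: moving the camera left is the same as moving the world right.
    Matrix view = world.Inverse();       // Inverse() is an assumed helper

    // Load the result as the base of the model-view stack.
    Gl.glMatrixMode(Gl.GL_MODELVIEW);
    Gl.glLoadMatrixf(view.ToArray());    // ToArray() assumed: column-major float[16]
}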

It’s a little like this.

Imagine you can move the world.  Move the world right.  Everything in your view shifts right.

But that’s impossible.  You move left.  Everything in your view still shifts right.

Camera Vs. World Space.  From top to bottom:  the original picture – the darker area shows what the camera is currently seeing in world space;  model shifted right – the camera sees what is expected – the world to have shifted to the right;  camera shifted left – as far as the camera view is aware, this creates the same effect as the image in the middle.  You don’t quite get beaches like that in Manchester.

Going back to the scene graph, there is one major problem. What if, for example’s sake, the camera was attached, but we then removed the node it was attached to from the scene graph? Or it just wasn’t attached in the first place? We can still traverse up the nodes to find the camera position – this will happily be done until there are no more nodes to traverse up. The main problem is recognising whether or not the last node traversed is the root node of the scene graph. We do a simple equality test, but then what?

My supervisor’s been saying something a lot – fail fast.

That means whenever you know there is something wrong, fail the execution as fast as possible. As far as I’m aware, there are two ways of doing this in C#: Debug.Assert() and exception throwing.
Debug.Assert() calls are tests used to make sure the program is working – the test should always prove to be true during execution, but you can never be too sure.  Exceptions can be used for when something has been entered incorrectly client-side.

For example.

Client Side

Renderer.SetCamera(null);

Internally

public void SetCamera(Node node)
{
    if (node == null)
    {
        throw new CameraCannotBeNullException();
    }
    camera = node;
}

So coming back to the original case, when we’re traversing up the nodes from the camera, we check to see if the final node is the root node.  If not, we throw an exception.  That way the client can handle this when it picks it up.  And failing fast also means the client can deal with it as soon as possible.
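A sketch of that check alone – RootNode and the exception name are assumptions, not the real Renderer code:

Node node = camera;
while (node.Parent != null)
{
    node = node.Parent;          // keep climbing until there is nothing above
}

if (node != RootNode)
{
    // Fail fast: the camera's chain of parents never reached the scene graph's
    // root, so it was detached (or never attached). Tell the client immediately.
    throw new CameraNotAttachedToSceneGraphException();
}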


Renderer: Scene Graph

Posted by Robert Chow on 01/12/2009

So next, I decided to implement the scene graph, and that comes with what I showed you previously with the API of Renderer.

Essentially, as I have more or less created the majority of the functionality of Renderer, the API at the time of writing that post did not reflect what I had originally implemented. To render each vertex buffer object, you originally had to call Renderer.Draw(vertexBufferObject) and that would draw it to the device context. You would then call Renderer.DrawToTexture(..params..) afterwards to draw the scene to a texture. Quite lame really. Especially because it did not incorporate the scene graph.

The scene graph itself looks rather different this time from the one I outlined in this post.  In that post, the client would push and pop matrices on and off the transformation stack itself.  In Renderer however, it is much different.  No push.  No pop.

OOP(s, I did it again)

Originally you would set up the transformation matrix, possibly using pushes and pops, and then submit your node – it would take on the position specified by the current transformation matrix.  But now we live in an object-orientated world.  And as a result, you are able to manipulate the nodes themselves without having to go through a scene graph abstraction.  This means that you can specify the relative position each node has to its parent node – no more pushes and pops.

Going through a scene graph abstraction may look like this:

Scenegraph.LoadIdentity();
Scenegraph.Push();
    Scenegraph.Translate(1.0f, 0.0f, 0.0f);
    parentA = Scenegraph.AddNode(rootNode, renderableA);
    Scenegraph.Push();
        Scenegraph.Translate(0.0f, 1.0f, 0.0f);
        parentB = Scenegraph.AddNode(parentA, renderableB);
    Scenegraph.Pop();
    Scenegraph.Scale(0.5f, 1.0f, 2.0f);
    parentC = Scenegraph.AddNode(parentA, renderableC);
Scenegraph.Pop();

Changing to something more object-orientated can result in something like this:

RootNode = new Node(Matrix.Identity());
RootNode.AddNode(nodeA = new Node(Matrix.Translate(1.0f, 0.0f, 0.0f), renderableA));
nodeA.AddNode(nodeB = new Node(Matrix.Translate(0.0f, 1.0f, 0.0f), renderableB));
nodeA.AddNode(nodeC = new Node(Matrix.Scale(0.5f, 1.0f, 2.0f), renderableC));

I think you’ll agree it’s just that little bit cleaner.

Another great (I might add) piece of functionality that I’ve included is the ability to detach and reattach branches of nodes.  This is particularly useful for background rendering.  For example, a client may want to render a large, complex image onto a texture, and then use that texture multiple times in another complex scene.  This texture however is to be rendered offscreen.  You do not want to have to collapse and rebuild the tree from scratch for each different image, especially if it means the tree will be constantly changing between only a couple of states.  So why not be able to detach BranchA, attach BranchB, render to texture, detach BranchB and reattach BranchA in one swift movement?  It saves all the time spent having to rebuild the original tree.
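In use, the idea looks roughly like this – Detach and Attach are hypothetical names for the sake of the sketch, and DrawToTexture’s parameters are elided as before:

sceneRoot.Detach(branchA);                   // put the main scene to one side
sceneRoot.Attach(branchB);                   // swap in the offscreen-only branch

renderer.DrawToTexture(/* ..params.. */);    // render branchB offscreen to a texture

sceneRoot.Detach(branchB);                   // one swift movement back again
sceneRoot.Attach(branchA);                   // the original tree never had to be rebuilt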

No snazzy demo this time.  Soon, I promise.


Route = Scenic;

Posted by Robert Chow on 23/10/2009

A scene graph depicts how objects are fitted into the scene, and how objects are related by their position.  For example, in a room, there may be a table – its position is described within the room. On the table, a mug, and a computer – both of their positions can be described as relative to the table they are sat on.

It has been argued that scene graphs are a complete waste of time. Some think so, others not so. I on the other hand believe that if they can be used to help the problem you are trying to solve, then why not. The rendering component I am creating will use an interface where each object that can be drawn, such as a square or a triangle (using vertex buffer objects of course), will be “queued” into the component, ready to be drawn. To then use a scene graph that is opinionated towards this design is a major help, and will prevent many of the state changes that happen when rendering in immediate mode – when it comes to drawing the object, the world matrix of the object is taken from the scene graph and loaded into OpenGL using a single call, whereas, typically, it may need several transformations to get the object to where we want it to be. Each one of these transformations is a state change – OpenGL is, in effect, one huge state machine – and each state change is a cost in the rendering pipeline.

One of the best exercises to show you understand how a scene graph works is to model the solar system.  It’s relatively straightforward, and easy to picture.  You have your scene, in which there is the sun.  Relative to the sun are the planets.  Relative to each of the planets are their moons.

Solar System.  The top screenshot depicts the four planets Mercury (purple), Venus (yellow), Earth (blue) and Mars (red), with their respective satellites, circling the sun.  The bottom screenshot is an overhead view of the scene, at a different time frame.

Scene Graph

There are lots of different implementations of scene graphs, but the majority use composition to produce a tree-like hierarchy.  At every branch in the tree is a node, often representing a transformation, which plays a part in determining the position of the rest of the branch.  The object being drawn is finally represented as a leaf – its position determined by all the branches that need to be passed through to get to the root of the tree.  So if you were to plan out the solar system, it may look a little like this.

Solar System Scene Graph. Each transformation helps to form a part of a branch in the tree, and at the end of each branch is a planet, acting as a leaf.  The transformations have been vastly simplified – in reality there are a lot more transformations to consider when building a solar system that incorporates planet rotation, tilt and so forth.  And before you say anything, yes I know there is more to the Solar System than up to Mars.

To represent this, I have built my scene graph using 2 classes.  A class to manage the scene graph, SceneGraph, and a class to make the scene graph tree possible, GraphNode.  I have omitted the implementation of these for brevity.

GraphNode – represents a node in the scene graph.

public GraphNode Parent { get; private set; }
public Matrix Matrix { get; private set; }

SceneGraph – keeps track of the current node, and manipulates the scene graph stack.

public GraphNode CurrentNode { get;  private set; }

public void PushMatrix();
public void PopMatrix();
public void LoadIdentity();
public void Clear();
public void AddMatrix(Matrix matrix);
public Matrix GetWorldMatrix();

private Stack<GraphNode> nodeStack;

It’s fairly simple really.  The GraphNode doesn’t really do anything, except act as a holder for the matrix it represents, and make sure it always has a constant reference to the node above it in the tree.  This reference can be traversed to find all the other nodes the current node branches off from, right up to the root.  It plays its part when GetWorldMatrix() is called – this is when we want to draw an object.  Calling the method traverses up to the root using the parent reference, multiplying each matrix along the way.  You’ll remember me saying OpenGL is a state machine – we want to minimise the state changes as much as possible – these range from changing textures and enabling lighting, all the way to changing colour, or even making a transformation.  GetWorldMatrix() uses a matrix class, which is only handed to OpenGL at the very end, when glLoadMatrixf() is called.  We are then able to draw the object at the current position, which is determined by the world matrix.
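Roughly – and this is only a sketch, not the implementation itself, since the Matrix operators shown here are just illustrative – GetWorldMatrix() amounts to something like:

public Matrix GetWorldMatrix()
{
    // Start from the current node and walk the Parent references up to the
    // root, multiplying each node's matrix along the way.
    Matrix world = CurrentNode.Matrix;
    for (GraphNode node = CurrentNode.Parent; node != null; node = node.Parent)
    {
        world = node.Matrix * world;
    }

    // Nothing touches OpenGL until the caller hands this result to
    // Gl.glLoadMatrixf() - one state change instead of one per transformation.
    return world;
}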

The rest of the SceneGraph implementation is a mirror of the OpenGL matrix stack – you can load identity, and push and pop matrices on and off a stack.
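As a usage sketch, part of the solar system above might be pushed through this API like so – the Draw calls and orbit radii are placeholders, and the transformations are vastly simplified, as in the diagram:

SceneGraph sceneGraph = new SceneGraph();

sceneGraph.LoadIdentity();                       // the sun sits at the origin
DrawSun(sceneGraph.GetWorldMatrix());

sceneGraph.PushMatrix();
sceneGraph.AddMatrix(Matrix.Translate(earthOrbit, 0.0f, 0.0f));
DrawEarth(sceneGraph.GetWorldMatrix());          // Earth, relative to the sun

sceneGraph.PushMatrix();
sceneGraph.AddMatrix(Matrix.Translate(moonOrbit, 0.0f, 0.0f));
DrawMoon(sceneGraph.GetWorldMatrix());           // the Moon, relative to Earth
sceneGraph.PopMatrix();                          // back to Earth's frame

sceneGraph.PopMatrix();                          // back to the sun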

If you want to find out more about scene graphs, I’d suggest reading a couple of these links.

http://home.comcast.net/~tom_forsyth/blog.wiki.html#[[Scene%20Graphs%20-%20just%20say%20no]] – Why this user believes scene graphs are a complete and utter waste of time.  I actually partially agree with him.  But you can’t always win.

http://web.archive.org/web/20031207141303/www.cfxweb.net/~baskuene/object.htm – This guy’s managed to produce a 3D engine, using this scene graph he created himself.  Everything in the scene plays a part in the scene graph, including the camera, and lights.  That makes sense.  But like I said, you can’t always win.

This post’s been cut a little short – it was intended to cover something else in the solar system, but you’ll just have to wait.
