Planning the Spontaneous

It's more than just a blueprint.

Archive for December, 2009

Time For A Break

Posted by Robert Chow on 25/12/2009

So I haven’t been posting recently, and that’s because I’m back in Manchester for Christmas.

Lo and behold, recently it’s started to chuck it down with snow – bringing much delight to many, but also much despair to many others.

Travelling has been the main problem, and thankfully, I myself have not been greatly affected. The icy roads and paths have made it hard for anyone to travel – train, plane, car, even walking. I do believe that the only time it should snow is when you’re on a mountain with a board strapped to your feet, but what do I know – the last time I did that, I broke a wrist and dislocated a collarbone. But it’s not all doom and gloom.

The scenery’s definitely brightened up.

First Glimpse Of Snow And Sunset.

Snow Through The Trees.

Bridge Snowed Over.

I may make another post every now and then, but for now, I’m putting my feet up.

Merry Christmas and have a Happy New Year.

Posted in Placement | Tagged: , | 1 Comment »

Renderer: Demo

Posted by Robert Chow on 16/12/2009

So now that I’ve got a lot of my functionality going for me in this component, I thought I’d try and create a snazzy demo.

My supervisor’s obsessed with it. It’s like a little game – there is a rotating cube, and the challenge is you have to click on each face before you pass on to the next level. As you progress, the cube rotates just that little bit faster, making it harder to complete the challenge. Clicking on a face will turn that face on. Clicking on a face that is already on will turn it off – you can only progress when all 6 faces are switched on.
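The face-toggling logic itself is tiny. Here’s a minimal sketch of how it might work – the names (faceOn, rotationSpeed, OnFacePicked) are my own illustration, not the actual demo code:

private bool[] faceOn = new bool[6];             // one flag per cube face
private float rotationSpeed = 1.0f;              // degrees per frame
private int level = 1;

// called with the picked face index (0-5) whenever the user clicks a face
public void OnFacePicked(int face)
{
    faceOn[face] = !faceOn[face];                // clicking toggles the face

    if (Array.TrueForAll(faceOn, on => on))      // all 6 faces switched on?
    {
        level++;
        rotationSpeed *= 1.2f;                   // next level: spin that little bit faster
        Array.Clear(faceOn, 0, faceOn.Length);   // reset the faces for the new level
    }
}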

So you can immediately see that this little demo already involves the concept of vertex buffer objects (drawing the cube), the scene graph (rotating the cube), the camera (viewing the cube, using the scene graph) and picking (turning faces on/off). But what about the others?

Well, let’s place the scene into dark surroundings – we’re going to need lights to see our cube – that’s lighting handled. So where does texturing fit in?

Switching the faces on/off needs an indicator to show what state they are in.  We could easily do this by simply changing the colour of a face.  But that’s kinda boring.  So instead, we’re going to take the render-to-texture example, and slap that on a face which is on.  So that’s textures included.

Here are some screenshots of this demo.

Demo Setup. From left to right: initial scene – just a white cube; let’s dim the lighting to, well, none; include some lights: green – centre, red – bottom, blue – left; create a texture.

Demo Rotate. The demo in motion – the camera is stationary, whereas the cube is rotating, and the lights are rotating independently around it.

Demo Play.  The first and second images show what happens when a face is clicked – the texture replaces the blank face.  Once all the faces are replaced with the texture, the demo advances to the next level.

Demo Camera.  This shows added functionality of how a camera can be fixed on to a light, thus the camera is rotating around the cube too.  Here it is fixed on to the blue light.  This is done with ease by manipulating the scene graph.

I guess you can’t really get a grasp of how this demo works in its entirety – snapshots don’t really do it much justice.  I might try and upload a video – how, I’m unsure of – I’m sure I can probably find screen capture software around on the net.

And unfortunately, there is no score involved. Yet. That will come when fonts come.

Posted in Computing, Placement | Tagged: , , , , , , , , | 2 Comments »

Renderer: Lighting

Posted by Robert Chow on 15/12/2009

So these are tricky too.

OpenGL only allows 8 lights at any one time, and should you need more, they suggest you ought to reconsider how you draw your model.

As there is more than just one light available, we need to be able to differentiate between each light.  We do this by declaring an enum.

public enum LightNumber
{
    GL_LIGHT0 = Gl.GL_LIGHT0,
    GL_LIGHT1 = Gl.GL_LIGHT1,
    GL_LIGHT2 = Gl.GL_LIGHT2,
    GL_LIGHT3 = Gl.GL_LIGHT3,
    GL_LIGHT4 = Gl.GL_LIGHT4,
    GL_LIGHT5 = Gl.GL_LIGHT5,
    GL_LIGHT6 = Gl.GL_LIGHT6,
    GL_LIGHT7 = Gl.GL_LIGHT7
}

An enum, like a class, can be used as a parameter type, with one restriction – the value of the parameter must be one of the values declared within the enum.  In the context of lights, a LightNumber can only be GL_LIGHT0, GL_LIGHT1, GL_LIGHT2 and so forth up to GL_LIGHT7, giving us a maximum of 8 lights to choose from.  In an enum, the identifiers on the left-hand side are names that can be referred to.  Each name can then be assigned a value, as I have done here – in this case, its corresponding OpenGL constant.

So now we can set our lights, and attach them to a node, similar to the camera.  Kind of.  Unlike the camera, these lights can either be switched on or off, and have different properties to one another – going back to the lighting post, the ambient, diffuse and specular properties of a light can be defined. We could do this using two methods.

Renderer.SwitchLightOn(LightNumber lightNumber, bool on);
Renderer.ChangeLightProperty(LightNumber lightNumber, LightProperty lightProperty, Colour colour);

We could.

But we shouldn’t.

I’ve been reading a lot lately about refactoring – cleaning up code, making it better, more extensible, and easier to read. A few months ago, I stumbled across this guide to refactoring. A lot of it is simple and makes a lot of common sense. But it is intelligent – something which everyone becomes every now and then.  And it’s definitely worth a read.  Another one I’ve been reading of late is this book on clean code.  How good really is your code?

The Only Valid Measurement Of Code Quality.  This image has been taken from http://sayamasihbelajar.wordpress.com/2008/10/24/

Naming conventions are rather important when it comes to something like this, and a bool as a parameter doesn’t exactly mean much at the call site.  Once there is a bool, you automatically know that the method does more than one thing – one path for true, and another for false.  So split them.

Renderer.SwitchLightOn(LightNumber lightNumber);
Renderer.SwitchLightOff(LightNumber lightNumber);
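Internally, these can be little more than thin wrappers over the OpenGL state calls. A minimal sketch, assuming the Tao bindings used elsewhere, and that LightProperty and Colour are set up along the same lines as LightNumber:

public void SwitchLightOn(LightNumber lightNumber)
{
    Gl.glEnable((int)lightNumber);       // the enum value is the GL constant itself
}

public void SwitchLightOff(LightNumber lightNumber)
{
    Gl.glDisable((int)lightNumber);
}

public void ChangeLightProperty(LightNumber lightNumber, LightProperty lightProperty, Colour colour)
{
    // assumes LightProperty maps to GL_AMBIENT/GL_DIFFUSE/GL_SPECULAR, and
    // that Colour.ToArray() returns the RGBA channels as a float[4]
    Gl.glLightfv((int)lightNumber, (int)lightProperty, colour.ToArray());
}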

There are a lot of wrongs and rights, but no concrete answer – just the advice these books can give you.  So I’m not going to trip myself up trying to give you more – I’m only learning too.

Posted in Computing, Placement | Tagged: , , , , , , , | 3 Comments »

Renderer: Camera

Posted by Robert Chow on 14/12/2009

Just slightly tricky.

Starting with the camera, there is always one, no less, no more.  Or so there should be.  So the way I’ve managed this is to let the Renderer component deal with it.

You have the scene graph, made of multiple nodes.  Attached to one of these nodes is the camera – the great flexibility of a model like this is that the camera can be easily attached to anything.  Ask the Renderer to attach the camera to a node, and it will automatically attach the camera abstraction inside Renderer to that specific node.  This way, you do not have to worry about multiple cameras – there is only one, and it is simply moved whenever asked.

With the camera attached to some node, we traverse up to the root node, multiplying the current matrix by each node’s transformation, and then invert the result.  That inverse is loaded as the base matrix for the modelview.  Doing this shifts the entire world model by minus the camera position, and so the scene appears in the right place according to the camera.
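In code, that traversal might look something like this – Node, Matrix and the property names are placeholders for illustration, not Renderer’s actual types:

private Matrix CalculateViewMatrix(Node cameraNode)
{
    Matrix world = Matrix.Identity();

    // accumulate each node's transformation from the camera up to the root
    for (Node node = cameraNode; node != null; node = node.Parent)
    {
        world = node.Transformation * world;
    }

    // inverting the camera's world transformation gives the view matrix,
    // which is then loaded as the base of the modelview stack
    return world.Inverse();
}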

It’s a little like this.

Imagine you can move the world.  Move the world right.  Everything in your view shifts right.

But that’s impossible.  You move left.  Everything in your view still shifts right.

Camera Vs. World Space.  From top to bottom:  the original picture – the darker area shows what the camera is currently seeing in world space;  model shifted right – the camera sees what is expected – the world to have shifted to the right;  camera shifted left – as far as the camera view is aware, this creates the same effect as the image in the middle.  You don’t quite get beaches like that in Manchester.

Going back to the scene graph, there is one major problem. What if, for example’s sake, the camera was attached, but we then removed the node it was attached to from the scene graph? Or it just wasn’t attached in the first place? We can still traverse up the nodes to find the camera position – this will happily carry on until there are no more nodes to traverse. The real problem is recognising whether or not the last node traversed is the root node of the scene graph. We do a simple equality test, but then what?

My supervisor’s been saying something a lot – fail fast.

That means whenever you know there is something wrong, fail the execution as fast as possible. As far as I’m aware, there are two ways of doing this in C#: Debug.Assert() and exception throwing.
Debug.Assert() calls are tests used to make sure the program is working – the test should always prove true during execution, but you can never be too sure.  Exceptions can be used for when something has been entered incorrectly client-side.

For example.

Client Side

Renderer.SetCamera(null);

Internally

public void SetCamera(Node node)
{
    if (node == null)
    {
        throw new CameraCannotBeNullException();
    }
    camera = node;
}
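Debug.Assert() covers the other side – checking assumptions that should always hold inside Renderer itself, rather than validating client input. A hand-wavy illustration of my own (requires System.Diagnostics):

public void Render()
{
    // if this fires, the bug is in Renderer itself, not in the client's usage
    Debug.Assert(camera != null, "Render() called before a camera was set");
    // ... draw the scene ...
}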

So coming back to the original case, when we’re traversing up the nodes from the camera, we check to see if the final node is the root node.  If not, we throw an exception.  That way the client can handle this when it picks it up.  And failing fast also means the client can deal with it as soon as possible.
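Bolted onto the traversal sketched earlier, that check is only a couple of lines – the exception name here is my own placeholder:

// 'top' is the last node reached when traversing up from the camera
if (top != RootNode)
{
    throw new CameraNotAttachedToSceneGraphException();
}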

Posted in Computing, Placement | Tagged: , , , , | 1 Comment »

To London And Back

Posted by Robert Chow on 09/12/2009

To those who read my last post, no, I did not drive to London and back – although it would’ve been quite an experience to tell, having had only a handful of driving lessons.  No, this is a different experience to tell.

Instead, I’ve been to London over the past two days to attend a conference, PSI Graphics (PSI – Statisticians in the Pharmaceutical Industry).  The conference was about displaying data, and highlighted different ways and techniques of doing so across differing software packages.

The first talk was about how (not) to display data.  Most of it was common sense, but it was probably the one I found most useful, because it highlighted, on paper, what a user should and shouldn’t do when creating a graph.  One of my personal favourites of how not to display data was rule #5: make sure the numbers don’t add up.

And along came yet another Fox News fail.

Fox News GOP Candidates. This image has been taken from http://flowingdata.com/2009/11/26/fox-news-makes-the-best-pie-chart-ever/

Second up was a program a developer had created to help statisticians understand the effects a drug/placebo may or may not have on several patients.  The program, developed within AstraZeneca, was especially useful and introduced many methods that helped statisticians do their jobs more effectively.  The entire talk was essentially focused on how interactivity in a program means less time spent interpreting the data, and thus more time for productivity.  I guess I should try getting the Grapher component to do similar.

The last of the morning talks covered why there should always be a control data set, and how to display it effectively so an interpreter can easily see the difference between the control and the sample sets.  It also included calculating regression to the mean, and to be completely honest, a fair bit of this talk went over not just my head, but many others’ too.  But I think I got the gist of it.

The afternoon presented many approaches to the “Graphics Challenge” proposed by PSI – the challenges were to display and evaluate many data cases using a software package.  Of the software packages on show, MATLAB was the only one I’d heard of, having used it in my AI course last year.  Unfortunately, the MATLAB speaker was unable to turn up.  So I had to sit through a good few hours of talks about SAS, STATA, R/S+ and GenStat approaches.  All of this was new to me, but because I knew exactly why I was at the conference (to investigate how to display data, to find methods of displaying data efficiently, to see how others display their data, to spot any other graphs that need considering, to work out how we can better other packages, etc.), I think I was probably one of the least bored of the many that attended.

Before going to this conference, I was a bit skeptical.  I knew it was going to be useful, but didn’t quite know how much.  A lot of the things said I’d considered before, but that’s as far as it went.  It was very useful because it allowed me to put pen to paper, and actually have things written down instead of just thinking about them – or at least thinking I’d been thinking about them, when in fact I hadn’t.  I was a little skeptical of the people too – I didn’t know what to expect.  But I met a fair few characters, from an electrical engineer turned SAS programmer, to many statisticians, one of whom is a placement student like myself.  And lastly, I was skeptical because it was London.

Funnily enough though, I didn’t find myself outside of my comfort zone at all – which surprised me the most.  As tiring as it is to go to London and back from Bideford, it’s definitely an experience I’ll want to do again, and I urge many others to go to conferences too, skeptical or not.

On one last note, I have to say, over the past couple of days I’ve learnt I should just shut my mouth when I’m in a car.  On Monday I was given that lecture about not being so hard on myself when driving.  Yesterday, I was in a taxi with a born and bred country man.  I simply remarked how in Manchester I’d rarely have to get a taxi because of public transport, and he went and gave me a full-blown account of how being in the country isn’t so “slow-paced”, and how you just have to plan ahead.

Shutting up.

Posted in Computing, Placement | Tagged: , , , , , | 1 Comment »

To Barnstaple And Back

Posted by Robert Chow on 07/12/2009

So I’ve just had driving lesson #7, and we drove to Barnstaple and back.  In the rain.  It was a bit better than last week’s episode – I wasn’t so rusty this time due to not having spent the last 2 weeks in Manchester.

Just need more experience and practice – I guess that can only come with more lessons.

At the end I made a passing comment, “Slowly getting there.”  That’s all it was, a passing comment, but then my instructor seemed to take it a little less lightly than intended.

Bless him, he’s a nice guy, but he sure can talk. Gave me a 10 minute, what he calls, sermon on how people shouldn’t punish themselves for not making the progress they think they ought to.

He was teaching this one lad, who kept beating himself up despite having only had 15 hours of lessons, thinking he was at assessment standard by then.  My instructor then went on to calculate how many hours of driving experience he himself reckoned he’d had – and at that point (bearing in mind this was 12 years ago), it was ~28,000 hours.  That’s a lot.  And a lot more now.

But what also fascinated me was his story about driving with a traffic cop.  My instructor always tells me to check what’s around, and more importantly ahead.  So this includes road signs, parked cars, bus stops.  The usual.  What got him, however, was what the traffic cop noticed.

A cyclist, for example – a normal thing a driver should look out for. The traffic cop notices something else too – the wind direction. This is important should a gust of wind blow the cyclist into the path of the car. Not good.

Another example. Birds circling above a field. So? Who cares? The traffic cop spots trouble again.  Apparently, it means there is a tractor below and the birds want their food.  The main potential problems to notice are plants – whether they’ve fallen on the road, over-growing bushes, etc. – and tractor traffic.  And lo and behold, a tractor pulls out nearby.

They say the average driver takes around 45 hours of contact time with an instructor before they pass.  I’ve a long way to go, and I’m more than happy to take those hours on.

Posted in Placement | Tagged: | Leave a Comment »

Renderer: Picking

Posted by Robert Chow on 03/12/2009

Remember that decorator pattern I used for writing to textures?  It turns out to be particularly useful for picking too.

Decorate the main drawer class with a picking class.  This allows the picking class to start off by switching the render mode to select mode, and to set the X and Y co-ordinates of the mouse.  The image is rendered as normal, but because of the render-mode switch, it is rendered into the select buffer instead of to the device context.  Afterwards, the picking class interprets the pick buffer and returns the results.
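In sketch form, the decorator might look something like this – IDrawer and the class shape are my own invention for illustration, but the GL calls are the standard ones via the Tao bindings (using Tao.OpenGl;):

public interface IDrawer
{
    void Draw();
}

public class PickingDrawer : IDrawer
{
    private readonly IDrawer drawer;                     // the decorated drawing class
    private readonly int[] selectBuffer = new int[512];
    private int mouseX, mouseY;

    public PickingDrawer(IDrawer drawer)
    {
        this.drawer = drawer;
    }

    public void SetMousePosition(int x, int y)
    {
        mouseX = x;
        mouseY = y;
    }

    public void Draw()
    {
        int[] viewport = new int[4];
        Gl.glGetIntegerv(Gl.GL_VIEWPORT, viewport);

        Gl.glSelectBuffer(selectBuffer.Length, selectBuffer);
        Gl.glRenderMode(Gl.GL_SELECT);                   // render into the select buffer

        // restrict the projection to a tiny region around the mouse
        // (GL's Y axis runs bottom-up, hence the flip)
        Gl.glMatrixMode(Gl.GL_PROJECTION);
        Gl.glLoadIdentity();
        Glu.gluPickMatrix(mouseX, viewport[3] - mouseY, 3.0, 3.0, viewport);
        // ... the usual projection is multiplied on here, NOT reset later ...

        drawer.Draw();                                   // render the scene as normal

        int hits = Gl.glRenderMode(Gl.GL_RENDER);        // back to normal; returns the hit count
        // walking the hit records in selectBuffer to find the nearest pick Id is omitted here
    }
}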

Sounds fairly easy.  But I did run into a problem near the start.

First off, the mouse wasn’t picking where I wanted – it was behaving as if the mouse was always in the bottom-left corner.  Why so?

I later learnt that when setting up picking, the projection matrix is multiplied by a pre-configured pick matrix so that the picking can be done appropriately.  In the original drawer class, I was resetting the projection matrix by loading the identity, thus wiping out what the picker class had set up.  Kind of like defeating itself.  So I changed this by taking the matrix resets out of the main drawing method and placing them outside – these would have to be called elsewhere, most likely through the Renderer facade (another GoF pattern), before calling the draw.

So with the matrix set up properly, I encountered another problem.  And this time it was hardware-based.  It would pick appropriately on my supervisor’s computer, yet not on mine.

An example scenario:  there are several images overlapping each other, each with their own pick Id.

My supervisor’s computer: picks appropriately – returns the object with the nearest Z value (most positive) under mouse (X,Y).

My computer: picks inappropriately – returns the object with the nearest Z value under mouse (X,Y), providing the object has not moved in modelview space in terms of its Z value.  This accounts for translations, rotations and scales.  So it will sometimes return the correct object, but most of the time it won’t.  Particularly annoying if you have a scene that is constantly rotating and you want to pick the nearest item – it will always pick the same item every single time, simply because it was rendered at the front before it was rotated.

Simple fix:  enable culling.  This means that anything that is not in view won’t be rendered, so technically there is nothing behind the frontmost object – it can be the only one that is picked!
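In the Tao bindings, that fix is a one-liner:

Gl.glEnable(Gl.GL_CULL_FACE);    // back-facing polygons are no longer rendered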

But that’s also particularly annoying when you want to use two-sided rendering – the ability to render shapes both clockwise and counter-clockwise.  Enabling culling in OpenGL results in any shape that would be drawn clockwise to the screen not being drawn at all.  Not so useful in rotating scenes!

Anyway.

Combining textures and picking, this demo allows the user to pick a quadrant, which then changes accordingly – to textured if it’s off, or back to plain if it’s on.

Renderer Picking Demo. From top-left, clockwise:  empty canvas; bottom-left quadrant click to turn texture on; top-right quadrant click to turn texture on; top-left quadrant click to turn texture on; bottom-left quadrant click to turn texture off; top-right quadrant click to turn texture off; bottom-right quadrant click to turn texture on; top-left quadrant click to turn texture off.  To achieve the original image from the last step, the user would click bottom-right quadrant, turning the texture off.

Posted in Computing, Placement | Tagged: , , , , , , , | 1 Comment »

Line 0

Posted by Robert Chow on 01/12/2009

So I managed to get back to Devon yesterday afternoon, and later found myself helping to clean out the kitchen. One of our colleagues of 3 and a half years has left, and he was also a former flat-mate of mine.  It was quite interesting to find what was around the kitchen – it could have been a museum, with food artifacts on display from as old as 2004.  It’s not pretty.

A driving lesson helped to perk me up this morning – lesson #6 it was, and I feel exactly the same way about driving as I always have.  I enjoy it when I’m doing it, but when it comes around, I’m always thinking I could be better off doing something else.  But nevertheless, it was a good hour of driving.  Still getting the hang of it, and still a little rough having not done it for the last 3 weeks, but I’m getting there.

After the lesson came the Kanban meeting, and then my annual review.  It was a first of sorts for both of us: I’d never taken one quite so seriously, despite having had them at John Lewis (where I’ve worked for 3 years, but that’s another story), and my supervisor has never managed anyone before.  It was rather thorough, and rather long I might add.  It was also quite useful, helping me to plan out what I should expect when 2010 arrives – that’s less than 3 weeks of coding from now, and I’m a little freaked out at what I’ve set myself to do by then.

The afternoon was spent typing up those minutes – it takes a lot longer than you’d think!  The rest of the time I’ve been trying to sort out travel arrangements for a conference I’m attending next week in London.  The conference is about pharmaceuticals and displaying data in graphs.  Some of it will be interesting, and the rest not so.  I’m just a little worried about what could go wrong with the travel, especially after the conference – I’m hoping I won’t have to book a hotel in Exeter, and that I can get back to Barnstaple on time.  The conference finishes at 5pm, and the last train is at 6.03pm.  And of course, I have to get to the train station by tube first.  Not nice.

After all of this, I’m a little disappointed really.  Mainly because, well, you know what I’ve also done for the past couple of days?

Not a single line of code.  Not 1.

Line 0.

Posted in Placement | Tagged: , | 1 Comment »

Renderer: Scene Graph

Posted by Robert Chow on 01/12/2009

So next, I decided to implement the scene graph, following on from the Renderer API I showed you previously.

Essentially, as I have now created the majority of Renderer’s functionality, the API as it stood at the time of that post no longer reflects what I have implemented. To render each vertex buffer object, you originally had to call Renderer.Draw(vertexBufferObject) and that would draw it to the device context. You would then call Renderer.DrawToTexture(..params..) afterwards to draw the scene to a texture. Quite lame really. Especially because it did not incorporate the scene graph.

The scene graph itself looks rather different this time to the one I outlined in this post.  In that post, the client pushes and pops matrices on and off the transformation stack itself.  In Renderer, however, it is much different.  No push.  No pop.

OOP(s, I did it again)

Originally you would set up the transformation matrix, possibly using pushes and pops, and then submit your node – it would take on the position specified by the current transformation matrix.  But now we live in an object-orientated world.  As a result, you can manipulate the nodes themselves without having to go through a scene graph abstraction.  This means you can specify each node’s position relative to its parent – no more pushes and pops.

Going through a scene graph abstraction may look like this:

Scenegraph.LoadIdentity();
Scenegraph.Push();
Scenegraph.Translate(1.0f, 0.0f, 0.0f);
parentA = Scenegraph.AddNode(rootNode, renderableA);
Scenegraph.Push();
Scenegraph.Translate(0.0f, 1.0f, 0.0f);
parentB = Scenegraph.AddNode(parentA, renderableB);
Scenegraph.Pop();
Scenegraph.Scale(0.5f, 1.0f, 2.0f);
parentC = Scenegraph.AddNode(parentA, renderableC);
Scenegraph.Pop();

Changing to something more object-orientated can result in something like this:

RootNode = new Node(Matrix.Identity());
RootNode.AddNode(nodeA = new Node(Matrix.Translate(1.0f, 0.0f, 0.0f), renderableA));
nodeA.AddNode(nodeB = new Node(Matrix.Translate(0.0f, 1.0f, 0.0f), renderableB));
nodeA.AddNode(nodeC = new Node(Matrix.Scale(0.5f, 1.0f, 2.0f), renderableC));

I think you’ll agree it’s just that little bit cleaner.
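To give a flavour of what sits behind that, a stripped-down Node might look like this – an illustrative sketch only, with IRenderable standing in for whatever Renderer actually draws (requires System.Collections.Generic):

public class Node
{
    private readonly List<Node> children = new List<Node>();

    public Matrix Transformation { get; private set; }
    public IRenderable Renderable { get; private set; }
    public Node Parent { get; private set; }

    public Node(Matrix transformation) : this(transformation, null) { }

    public Node(Matrix transformation, IRenderable renderable)
    {
        Transformation = transformation;
        Renderable = renderable;
    }

    public void AddNode(Node child)
    {
        child.Parent = this;
        children.Add(child);
    }

    public void RemoveNode(Node child)
    {
        child.Parent = null;
        children.Remove(child);
    }
}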

Another great (I might add) piece of functionality I’ve included is the ability to detach and reattach branches of nodes.  This is particularly useful for background rendering.  For example, a client may want to render a large, complex image onto a texture, and then use that texture multiple times in another complex scene.  This texture, however, is to be rendered offscreen.  You do not want to have to collapse and rebuild the tree from scratch for each different image, especially if the tree will be constantly changing between only a couple of states.  So why not detach BranchA, attach BranchB, render to texture, detach BranchB and reattach BranchA in one swift movement?  It saves all the time spent rebuilding the original tree – see the sketch below.
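With a RemoveNode counterpart like the one sketched above, the swap might boil down to this – Renderer.DrawToTexture standing in for whatever the render-to-texture entry point ends up being:

// render branchB offscreen without disturbing the main scene
sceneRoot.RemoveNode(branchA);
sceneRoot.AddNode(branchB);
Renderer.DrawToTexture(texture);
sceneRoot.RemoveNode(branchB);
sceneRoot.AddNode(branchA);      // the original tree is back, untouched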

No snazzy demo this time.  Soon, I promise.

Posted in Computing, Placement | Tagged: , , , | 2 Comments »