Planning the Spontaneous

It's more than just a blueprint.

DrawInside(Stencil);

Posted by Robert Chow on 26/05/2010

So a few things have changed since I last made a post.

My targets for the end of the year have been reduced, which I'm very glad about.  No, it's not because I'm a slacker; more importantly, there simply isn't enough time.

What’s new?

As a result, the target is now focused squarely on getting an end product out to show for Grapher.  This end product will have rather minimal functionality, yet will still be complete enough to use as a Grapher component.  Of course, it will not actually be used as a component without further work – such as adding the missing functionality.

Unfortunately, there are still hidden complexities.  There are always hidden complexities in any piece of software you write, particularly in the areas you have no prior knowledge about.  My supervisor said to me that you shouldn't start coding until you have answered all of those question marks.  Yet another weakness in my planning.  But I think I'm starting to learn.

Once again, Grapher threw up a hidden complexity in the way that layers would be drawn.  The idea was to draw the plot in its own viewport, clipping away everything outside of the view, such as data points that do not lie in the visible range.  Of course, this is all fine with a rectangular plot – the natural OpenGL viewport is rectangular.  However, when it comes to non-rectangular plots such as triplots or spiderplots, the story is rather different.

The first solution would be to use a mask to hide the data outside of the plot.  Unfortunately, this mask would also hide any items lying underneath that have already been rendered there.

The next solution, as a consequence of the first, was that some layers would have to be drawn in a fixed order; for example, the data plot would always be at the bottom, to make sure its mask does not obscure anything underneath it.

At this point in the placement, I never thought I'd be adding something major to Renderer.  The final solution, and arguably the best, is to use a stencil buffer.

Stencil me this

A stencil buffer allows for a non-rectangular viewport.  Primitives acting as the viewport are drawn to the stencil buffer; from then on, only items drawn inside the region outlined by the stencil buffer are rendered.

Simple right?

Of course, there are hidden complexities.  Again.

The best thing for me to do would be to spike the solution, instead of coding it straight into Renderer.

The first point of action I took was to visit the OpenGL documentation pages, which provided me with a few lines of code.

Gl.glEnable(Gl.GL_STENCIL_TEST);

// This will always write 1 to the corresponding stencil buffer bit for each pixel rendered to
Gl.glStencilFunc(Gl.GL_ALWAYS, 1, 1);

// This will only render pixels whose corresponding stencil buffer bit is 0
Gl.glStencilFunc(Gl.GL_EQUAL, 0, 1);

Using these, items drawn after the final call should only be visible if they lie outside of the viewport shape drawn after the second call.  I continue to claim that I am by no means an expert on OpenGL, but that's the rough idea I get from these calls.

I tried this, but to no avail – nothing happened.

My worry was that SimpleOpenGlControl, the rendering control provided by the Tao Framework, did not support a stencil buffer; an earlier look had shown me that it does not support an alpha channel.  Luckily, the number of stencil buffer bits can be changed without having to change the source code.  This should be done before initialising the control's contexts.

simpleOpenGlControl.StencilBits = 1;
simpleOpenGlControl.InitializeContexts();

Once again, nothing happened.

I had a look around the Internet to try and find some source code that worked.  I noticed that in one of the examples, one particular call was present, yet in mine, it was not.

// This specifies the action to take when the stencil test fails, when it passes but the
// depth test fails, and when both pass – here, replace the stencil value only when both pass.
Gl.glStencilOp(Gl.GL_KEEP, Gl.GL_KEEP, Gl.GL_REPLACE);

Placing this at the start of the code makes the stencil buffer work correctly.  I believe this is because the default action is GL_KEEP in every case, so nothing was ever actually being written into the stencil buffer; with GL_REPLACE, whenever the stencil and depth tests pass, the stencil value is replaced with the reference value instead of being kept.  Again, I am no expert; there is a lot more information on the documentation page (however useful, I cannot tell).

Drawing a conclusion

Below is the code and a screenshot of the application I used to spike using the stencil buffer.

using System.Windows.Forms;
using Tao.OpenGl;
using Tao.Platform.Windows;

public partial class Client : Form
{
    public Client()
    {
        InitializeComponent();

        // Request a stencil buffer before the contexts are initialised
        simpleOpenGlControl.StencilBits = 1;
        simpleOpenGlControl.InitializeContexts();

        Gl.glEnable(Gl.GL_STENCIL_TEST);
        // Replace the stencil value whenever the stencil and depth tests pass
        Gl.glStencilOp(Gl.GL_KEEP, Gl.GL_KEEP, Gl.GL_REPLACE);
    }

    private void simpleOpenGlControl_Paint(object sender, PaintEventArgs e)
    {
        Gl.glClearColor(1, 1, 1, 1);
        Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_STENCIL_BUFFER_BIT);

        Gl.glMatrixMode(Gl.GL_PROJECTION);
        Gl.glLoadIdentity();
        Gl.glOrtho(0, simpleOpenGlControl.Width, 0, simpleOpenGlControl.Height, 10, -10);

        Gl.glMatrixMode(Gl.GL_MODELVIEW);
        Gl.glLoadIdentity();

        // Draw to the stencil buffer
        Gl.glStencilFunc(Gl.GL_ALWAYS, 1, 1);
        // Turn off colour so the viewport shape lands in the stencil buffer only
        Gl.glColorMask(Gl.GL_FALSE, Gl.GL_FALSE, Gl.GL_FALSE, Gl.GL_FALSE);
        // Draw our new viewport
        Gl.glBegin(Gl.GL_TRIANGLES);
        Gl.glVertex2d(0, 0);
        Gl.glVertex2d(simpleOpenGlControl.Width, 0);
        Gl.glVertex2d(simpleOpenGlControl.Width / 2.0, simpleOpenGlControl.Height);
        Gl.glEnd();

        // This has been changed so it will draw to the inside of the viewport
        // To draw to the outside, use Gl.glStencilFunc(Gl.GL_EQUAL, 0, 1);
        Gl.glStencilFunc(Gl.GL_EQUAL, 1, 1);
        // Turn colour back on
        Gl.glColorMask(Gl.GL_TRUE, Gl.GL_TRUE, Gl.GL_TRUE, Gl.GL_TRUE);
        // Draw triangles here

        // Disable the stencil test to be able to draw anywhere, irrespective of the stencil buffer
        Gl.glPushAttrib(Gl.GL_ENABLE_BIT);
        Gl.glDisable(Gl.GL_STENCIL_TEST);
        // Draw border here

        Gl.glPopAttrib();

        // Force a continuous repaint
        simpleOpenGlControl.Invalidate();
    }
}

Stencil Test.  This is created by first declaring the triangular viewport in the stencil buffer; only primitives inside the stencil are rendered.  The border was added afterwards, after disabling the stencil test.


One More Bite…

Posted by Robert Chow on 07/02/2010

I’ve been up in Manchester for the last few days, playing it out being a student again. And honestly, it’s felt rather good.

I signed for a house for the next academic year on the first day I came back, and had a look around. It's a pretty nice place; and thankfully, I didn't have to do a single thing to find it! So I'm pretty grateful for all the hassle my future housemates went through to get the place – I'm sure I can find plenty of more calming activities to do!

I've also just got back from the most amazing all-you-can-eat ever! It's a Brazilian steakhouse called Bem Brazil in the northern quarter of Manchester. We managed to scoff down 12 different cuts of meat, ranging from the absolutely gorgeous beef fillet steak down to the little chicken hearts, which I wasn't particularly fond of.

I’m in Manchester because I’m going to a conference on Tuesday in Birmingham, and thought I’d take a long weekend at home since it’s only a couple of hours extra on the train. It’s a software developer’s conference about mapping software – hopefully something I’ll be touching upon later in my placement.  The link for anyone interested is http://www.esriuk.com/trainingevents/events/developerhub090210/.

The plans for the future are a bit up in the air for the time being.  With Renderer nearly finished, I am currently looking at spiking Grapher, and after that, possibly starting on Mapper – hence the conference.  However, a lot of ideas have been proposed for Grapher, and it's come to the point where it might be best for me to develop Grapher versions 1 and 2 during the rest of my placement, and to do Mapper in the summer if I am given the chance to come back.  My intention is to do as much as possible – I want to, and if I don't deliver, not only will I be annoyed and disappointed at myself, I'll also think it a real shame that I couldn't bring to the table a good set of components usable in the next program.

The last few weeks I haven't been blogging as often as I'd like, and that's because things are just taking longer than usual.  And it's not as if we have numerous time-consuming talks about health & safety to blame either.  Time after time, ideas come and go.  Try one out, and you end up in a dead end, so you're forced to try another.  Sometimes it's not until after 4 or 5 attempts that you quite get it right, and even then you're not necessarily happy with the result, because at the end of the day it's not very good code.  So you have to try again.

My supervisor's been telling me to come up with more than a few ideas before trying them out.  But I'm not much of a planner; I like to get stuck in as soon as possible.  I guess I would like to become more of a planner – it'd also mean one more step in the right direction for personal development.  If you get it right and plan well, you save yourself a lot of time by not having to implement all the wrong ideas.

Heck, even that bit of this blog wasn’t planned.


ConfusedEventHandler += (s, e) => ConfusedEventHandler(Me.Extend(s, e));

Posted by Robert Chow on 21/01/2010

So I’ve just been informed that there are a couple more additions to event handling in .NET.

Before notifying subscribers of an event, you MUST always check that the event handler is not null.  If it is null and you invoke it anyway, you get a NullReferenceException at runtime.
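
As a minimal sketch of that check (the event and method names here are purely illustrative, not from the demo below), the usual idiom is to copy the handler into a local variable first, so the null test and the invocation see the same delegate even if a subscriber is removed on another thread in between:

using System;

public static class NullCheckSketch
{
    public static event EventHandler SomethingHappened;

    public static void RaiseSomethingHappened(object sender)
    {
        // Copy to a local: the null check and the invocation then refer to the
        // same delegate, even if a subscriber unsubscribes in between.
        EventHandler handler = SomethingHappened;
        if (handler != null)
        {
            handler(sender, EventArgs.Empty);
        }
    }
}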

Secondly, you can subscribe to an event handler using a lambda method too – it's quite a neat trick.  It also lets you hook more than one method up to the event handler in a single subscription.

Here's the code for the main part of the demo, rewritten to accommodate these two points.  I've also added the functionality of one of the subscribers being removed part way through the program.

public class Program
{
    public static event IncreaseNumberEventHandler numberIncreasedEventHandler;

    // Kept in a field so the lambda can be unsubscribed later
    public static IncreaseNumberEventHandler n52Subscribe;

    public static void Main(string[] args)
    {
        Number n27 = new Number(27);
        Number n39 = new Number(39);
        Number n52 = new Number(52);
        // One lambda subscription invoking two methods...
        numberIncreasedEventHandler += (s, e) => { n27.Increase(s, e); n39.Increase(s, e); };
        // ...and one stored in a field so it can be removed part way through
        numberIncreasedEventHandler += (n52Subscribe = (s, e) => n52.Increase(s, e));

        Run();
    }

    public static void Run()
    {
        for (int i = 0; i < 10; ++i)
        {
            if (i == 5)
            {
                numberIncreasedEventHandler -= n52Subscribe;
            }
            // Null check: if every subscriber has been removed, don't invoke
            if (numberIncreasedEventHandler != null)
            {
                numberIncreasedEventHandler(null, new IncreaseNumberEventArgs(i));
            }
        }
    }
}

And the output:

n27: 27 increased by 0 to 27
n39: 39 increased by 0 to 39
n52: 52 increased by 0 to 52
n27: 27 increased by 1 to 28
n39: 39 increased by 1 to 40
n52: 52 increased by 1 to 53
n27: 28 increased by 2 to 30
n39: 40 increased by 2 to 42
n52: 53 increased by 2 to 55
n27: 30 increased by 3 to 33
n39: 42 increased by 3 to 45
n52: 55 increased by 3 to 58
n27: 33 increased by 4 to 37
n39: 45 increased by 4 to 49
n52: 58 increased by 4 to 62
n27: 37 increased by 5 to 42
n39: 49 increased by 5 to 54
n27: 42 increased by 6 to 48
n39: 54 increased by 6 to 60
n27: 48 increased by 7 to 55
n39: 60 increased by 7 to 67
n27: 55 increased by 8 to 63
n39: 67 increased by 8 to 75
n27: 63 increased by 9 to 72
n39: 75 increased by 9 to 84


ConfusedEventHandler += new ConfusedEventHandler(Me.Confused);

Posted by Robert Chow on 20/01/2010

… Or at least that’s how I first felt when I started looking at event handling in .NET.

And I still do slightly – I’ve literally just looked at this 10 minutes ago, and I’m writing this down now while it’s still fresh in my head.

Observer Pattern

The observer pattern uses the notion of an Observer interface.  The state of each object that implements this interface can be updated through it.  Using this interface allows a publisher to update all of the objects in one swift movement.  Obviously, only those that are referenced by the publisher can be updated – those that are, are known as subscribers.  This design pattern is a subset of the publish/subscribe pattern – any object that is subscribed will receive any notifications sent by the publisher.

Publish/Subscribe.  A publisher sends events to subscribers.  This image has been taken from http://msdn.microsoft.com/en-us/library/ms978603.aspx.

So yesterday, I set about creating my own observer pattern implementation.  Yet today I learnt that there is a .NET implementation using events.

Event Handling

So with a little help from Microsoft and a lot more help from CodeProject, I’m going to try and explain to myself how to use them.

We start off by declaring the publisher.  The publisher holds a list of all the methods that should be invoked when an event is raised.  This is done using two keywords and a name.

event EventHandler eventHandler;

event – declares that this is an event handler.
EventHandler – the default delegate type for event handlers.  This second keyword must always be a delegate type, it seems.

Adding a subscriber is relatively simple.

eventHandler += new EventHandler(method);

+= – this operator subscribes a method to the event handler.  Similarly, to unsubscribe, use the -= operator instead.
method – the method to be invoked when the publisher receives an event to pass on.

So now that we have a publisher and subscribers, the last step is to send an event to the publisher so it can pass it on to the subscribers.

eventHandler(sender, new EventArgs());

sender – the object that sent the event.  This is not compulsory and can be null.
new EventArgs() – the event arguments for the event that has just occurred.  Again, this is not compulsory and can be null.  The methods subscribed to the publisher are invoked regardless of the event arguments.

It's actually fairly easy once you get your head around it – at first I got horrendously confused because I kept seeing “event” everywhere.

And of course, the sample I've just shown is fairly useless by itself – it's a lot more useful when you use your own delegate type as the event handler, in conjunction with your own event arguments, most probably derived from EventArgs.

Increase Number Event

Below is a simple demo I've made just to make it all a little more understandable; it was also particularly useful when I was trying to figure it all out.  It uses a publisher, numberIncreasedEventHandler, to which the Number objects n27, n39 and n52 all have their Increase method subscribed.  The publisher is invoked during Run, passing on an IncreaseNumberEventArgs which increases each object by the number contained in the argument.

using System;

public delegate void IncreaseNumberEventHandler(object sender, IncreaseNumberEventArgs e);

public class IncreaseNumberEventArgs : EventArgs
{
    public int Increase;

    public IncreaseNumberEventArgs(int increase)
    {
        this.Increase = increase;
    }
}

public class Program
{
    public static event IncreaseNumberEventHandler numberIncreasedEventHandler;

    public static void Main(string[] args)
    {
        Number n27 = new Number(27);
        Number n39 = new Number(39);
        Number n52 = new Number(52);
        numberIncreasedEventHandler += new IncreaseNumberEventHandler(n27.Increase);
        numberIncreasedEventHandler += new IncreaseNumberEventHandler(n39.Increase);
        numberIncreasedEventHandler += new IncreaseNumberEventHandler(n52.Increase);
        Run();
    }

    public static void Run()
    {
        for (int i = 0; i < 10; ++i)
        {
            numberIncreasedEventHandler(null, new IncreaseNumberEventArgs(i));
        }
    }
}

public class Number
{
    int number;
    string name;

    public Number(int initialNumber)
    {
        this.number = initialNumber;
        this.name = "n" + initialNumber;
    }

    public void Increase(object o, IncreaseNumberEventArgs e)
    {
        Console.WriteLine(name + ": " + number + " increased by " + e.Increase + " to " + (number += e.Increase));
    }
}

And the output:

n27: 27 increased by 0 to 27
n39: 39 increased by 0 to 39
n52: 52 increased by 0 to 52
n27: 27 increased by 1 to 28
n39: 39 increased by 1 to 40
n52: 52 increased by 1 to 53
n27: 28 increased by 2 to 30
n39: 40 increased by 2 to 42
n52: 53 increased by 2 to 55
n27: 30 increased by 3 to 33
n39: 42 increased by 3 to 45
n52: 55 increased by 3 to 58
n27: 33 increased by 4 to 37
n39: 45 increased by 4 to 49
n52: 58 increased by 4 to 62
n27: 37 increased by 5 to 42
n39: 49 increased by 5 to 54
n52: 62 increased by 5 to 67
n27: 42 increased by 6 to 48
n39: 54 increased by 6 to 60
n52: 67 increased by 6 to 73
n27: 48 increased by 7 to 55
n39: 60 increased by 7 to 67
n52: 73 increased by 7 to 80
n27: 55 increased by 8 to 63
n39: 67 increased by 8 to 75
n52: 80 increased by 8 to 88
n27: 63 increased by 9 to 72
n39: 75 increased by 9 to 84
n52: 88 increased by 9 to 97

I think I’m fairly calm and a lot less confused now…

(Before you say anything, yes I am aware that the code’s snazzed up a bit – it wasn’t the most readable at first, especially when there’s a massive chunk of it…)


Renderer: Renderables

Posted by Robert Chow on 15/01/2010

So, with the majority of the Renderer library nearly finished (with the exception of layers and fonts – well, there are fonts, they're just not perfect), the process of refactoring has started.  On top of that, I am also creating a toolkit to be used as a helper library, making the Renderer library a bit more user-friendly.  Less typing means more time to do other things.

Renderable Objects

So Renderer is based on the concept of having Renderable objects.  These are exactly what they say on the tin: they are renderable.  To render them, you need to access their properties.  These properties are fairly straightforward, and are what you would expect a Renderable object to have.  Of course, they have been fine-tuned to the usage of vertex buffer objects, so the properties are:

Vector[] Vertices; // vertices of the shape
Colour[] Colours;  // colours specified at each vertex
Vector[] Normals;  // normals specified at each vertex
Vector[] TexCoords;// 2D texture co-ordinates specified at each vertex; these map to the Renderable's texture
uint[] Indices;    // the order in which the vertices are to be drawn
IMaterial Material;// any material properties this Renderable may hold
ITexture Texture;  // the texture, if any, bound to the Renderable
DrawType DrawType; // an enum representing each OpenGL draw type, specifying how the Renderable is drawn

Without the Vertices, Indices or DrawType, the Renderable is unrenderable, so these are forced in the constructor.  The vertices are a core property and cannot be changed; all of the other properties can be, subject to validity checks.
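
As a rough sketch of what forcing them in the constructor looks like (the names follow the properties above; this is an illustration, not Renderer's actual code):

public Renderable(Vector[] vertices, uint[] indices, DrawType drawType)
{
    // The Renderable is unrenderable without these three, so fail fast
    if (vertices == null) throw new ArgumentNullException("vertices");
    if (indices == null) throw new ArgumentNullException("indices");

    this.vertices = vertices;  // core property: no public setter, so it cannot be changed later
    this.Indices = indices;    // the rest go through setters that perform the validity checks
    this.DrawType = drawType;
}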

Duplicate Duplicate

Since each Renderable is drawn as a vertex buffer object, it seems rather silly to store the core data multiple times – it is already held in the GPU, as well as in the artificial buffering system in Renderer (used to control and keep track of what is in the GPU) – so it is not necessary to have it in the Renderable object itself.  Instead, I have used the notion of a ‘pointer’ per property.  This is not the conventional pointer traditionally used in computing, as it does not directly correspond to a place in memory; instead it points to a place in the artificial buffer where the corresponding data is held.  Using this method means the Renderable object is rather lightweight and cost-effective in memory.

To obtain the ‘pointer’, the client needs to ask the Renderer library.  This is done through the RendererFacade (the Facade design pattern), and takes in just one parameter – the core data.  This core data is sent to the artificial buffer, and the ‘pointer’ is returned.

Where the vertex buffer objects are concerned, this changes the properties in the Renderable object to the ‘pointer’ type.  Of course, this could cause problems, allowing clients to assign Colours ‘pointers’ to TexCoords, Indices ‘pointers’ to Normals and so forth, eventually causing the computer to crash because of a GPU error.  To solve this, each ‘pointer’ is wrapped in a DataPacket interface, and deriving from this are corresponding DataPackets for Vertices, Colours, Normals etc., making sure the correct ‘pointer’ is matched with the right property.

IVerticesDataPacket Vertices;
IColoursDataPacket Colours;
INormalsDataPacket Normals;
ITexCoordsDataPacket TexCoords;
IIndicesDataPacket Indices;
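
A sketch of how those wrappers might look (the Handle member is my assumption for the ‘pointer’ itself):

// The empty per-property interfaces carry no extra data; they exist purely so
// that the type system refuses a Colours 'pointer' where a TexCoords one is expected.
public interface IDataPacket
{
    int Handle { get; }  // position in the artificial buffer, not an address in memory
}

public interface IVerticesDataPacket  : IDataPacket { }
public interface IColoursDataPacket   : IDataPacket { }
public interface INormalsDataPacket   : IDataPacket { }
public interface ITexCoordsDataPacket : IDataPacket { }
public interface IIndicesDataPacket   : IDataPacket { }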

Using the standard get and set methods that properties have, changing a property is relatively easy (and we can write these manually to do our validity checking).  But it could be better.

Fluent Refactoring

Since we are refactoring, we might as well try to make this as easy to understand and read as possible.  For this, I have touched upon fluent interfaces.

The idea behind this is relatively simple when assigning a property.  It’s a little more difficult when it comes to multiple states.  But more on that later.

For example, if we want to assign a ColoursDataPacket, a TexCoordsDataPacket and a Texture to a Renderable, it's as easy as this:

renderable.Colours = colourDataPacket;
renderable.TexCoords = texCoordsDataPacket;
renderable.Texture = texture;

However, to create a fluent interface, I have created extension methods that do more or less exactly the same thing.  The difference is that it reads better, it's less typing, and it's a single statement.

renderable.Assign(colourDataPacket)
.Assign(texCoordsDataPacket)
.Assign(texture);

The extension methods are simple.  They do pretty much the same as normal assignment, but they also return the Renderable instead of void, allowing more methods to be called – a form of method chaining.  To make it even simpler, I have overloaded the one method name, Assign(), for each property type, so the same call reads naturally throughout.
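
A sketch of what those extension methods boil down to (assuming a Renderable class with the properties above):

public static class RenderableExtensions
{
    // Each overload assigns one property and hands the Renderable straight back,
    // which is what lets the calls be chained into a single statement.
    public static Renderable Assign(this Renderable renderable, IColoursDataPacket colours)
    {
        renderable.Colours = colours;
        return renderable;
    }

    public static Renderable Assign(this Renderable renderable, ITexCoordsDataPacket texCoords)
    {
        renderable.TexCoords = texCoords;
        return renderable;
    }

    public static Renderable Assign(this Renderable renderable, ITexture texture)
    {
        renderable.Texture = texture;
        return renderable;
    }
}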

Pretty cool, eh?

Renderer: Demo

Posted by Robert Chow on 16/12/2009

So now that I’ve got a lot of my functionality going for me in this component, I thought I’d try and create a snazzy demo.

My supervisor's obsessed with it. It's like a little game – there is a rotating cube, and the challenge is to click on each face before you pass on to the next level. As you progress, the cube rotates just that little bit faster, making the challenge harder to complete. Clicking on a face turns that face on; clicking on a face that is already on turns it off – you can only progress when all 6 faces are switched on.

So you can immediately see that this little demo already involves the concepts of vertex buffer objects (drawing the cube), the scene graph (rotating the cube), the camera (viewing the cube, using the scene graph) and picking (turning faces on/off). But what about the others?

Well, let's place the scene into dark surroundings – we're going to need lights to see our cube – so that's lighting handled. But where does texturing fit in?

Switching the faces on/off needs an indicator to show what state they are in.  We could easily do this by simply changing the colour of a face.  But that's kind of boring.  So instead, we're going to take the render-to-texture example, and slap that on any face which is on.  So that's textures included.

Here are some screenshots of this demo.

Demo Setup. From left to right: initial scene – just a white cube; let’s dim the lighting to, well, none; include some lights: green – centre, red – bottom, blue – left; create a texture.

Demo Rotate. The demo in motion – the camera is stationary, whereas the cube is rotating, and the lights are rotating independently around it.

Demo Play.  The first and second images show what happens when a face is clicked – the texture replaces the blank face.  Once all the faces are replaced with the texture, the demo advances to the next level.

Demo Camera.  This shows added functionality of how a camera can be fixed on to a light, thus the camera is rotating around the cube too.  Here it is fixed on to the blue light.  This is done with ease by manipulating the scene graph.

I guess you can't really get a grasp of how this demo works in its entirety – snapshots don't really do it much justice.  I might try and upload a video or so – how, I'm unsure of – I'm sure I can probably find screen capture software around on the net.

And unfortunately, there is no score involved. Yet. That will come when fonts come.


Renderer: Lighting

Posted by Robert Chow on 15/12/2009

So these are tricky too.

OpenGL only allows 8 lights at any one time; should you need more, the advice is that you ought to reconsider how you draw your model.

As there is more than just one light available, we need to be able to differentiate between each light.  We do this by declaring an enum.

public enum LightNumber
{
    GL_LIGHT0 = Gl.GL_LIGHT0,
    GL_LIGHT1 = Gl.GL_LIGHT1,
    // ... GL_LIGHT2 to GL_LIGHT6 follow the same pattern ...
    GL_LIGHT7 = Gl.GL_LIGHT7
}

An enum, like a class, can be used as a parameter type, with one restriction – the value of the parameter must be equal to one of the values declared within the enum.  In the context of lights, the LightNumber can only be GL_LIGHT0, GL_LIGHT1, GL_LIGHT2 and so forth, up to GL_LIGHT7, giving us a maximum of 8 lights to choose from.  In an enum, the identifiers on the left-hand side are names, and these names can be referred to.  You can also assign these names values, as I have done – in this case, their corresponding OpenGL value.

So now we can set our lights, and attach them to a node, similar to the camera.  Kind of.  Unlike the camera, these lights can be switched either on or off, and have different properties from one another – going back to the lighting post, the ambient, diffuse and specular properties of a light can be defined.  We could do this using two methods.

Renderer.SwitchLightOn(LightNumber lightNumber, bool on);
Renderer.ChangeLightProperty(LightNumber lightNumber, LightProperty lightProperty, Colour colour);

We could.

But we shouldn’t.

I've been reading a lot lately about refactoring – cleaning up code, making it better, more extensible, and easier to read. A few months ago, I stumbled across this guide to refactoring. A lot of it was simple and made a lot of common sense. But it was intelligent – something which everyone becomes every now and then.  And it's definitely worth a read.  Another one I've been reading of late is this book on clean code.  How good really is your code?

The Only Valid Measurement Of Code Quality.  This image has been taken from http://sayamasihbelajar.wordpress.com/2008/10/24/

Naming conventions are rather important when it comes to something like this, and a bool parameter doesn't exactly mean much at the call site.  Once there is a bool, you automatically know that the method does more than one thing – one code path for true, and another for false.  So split them.

Renderer.SwitchLightOn(LightNumber lightNumber);
Renderer.SwitchLightOff(LightNumber lightNumber);
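
Internally, the two can still share their plumbing; a sketch of how they might map onto OpenGL (assuming Renderer wraps the Tao calls directly – this is an illustration, not Renderer's actual code):

public void SwitchLightOn(LightNumber lightNumber)
{
    // The enum values are the OpenGL constants, so they can be passed straight through
    Gl.glEnable((int)lightNumber);
}

public void SwitchLightOff(LightNumber lightNumber)
{
    Gl.glDisable((int)lightNumber);
}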

There are a lot of wrongs and rights, but no concrete answer – just the advice these books can give you.  So I'm not going to trip up over myself trying to give you some more advice – I'm only learning too.


Renderer: Camera

Posted by Robert Chow on 14/12/2009

Just slightly tricky.

Starting with the camera: there is always exactly one, no less, no more.  Or so there should be.  So the way I've managed this is to let the Renderer component deal with it.

You have the scene graph, made of multiple nodes.  Attached to one of these nodes is the camera – the great flexibility of having a model like this is that the camera can be easily attached to anything.  Ask the Renderer to attach the camera to a node, and it will automatically attach the camera abstraction inside Renderer to that specific node.  This way, you do not have to worry about multiple cameras – there is only one, and it is simply moved whenever asked.

With the camera attached to some node, we have to traverse up to the root node, multiplying the current matrix by each node's transformation, and then inverting the result.  The result is then loaded as the base matrix for the modelview.  Doing this shifts the entire world model by minus the camera position, and therefore the scene appears in the right place according to the camera.
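
As a sketch, the traversal might look like this (Node and Matrix are stand-ins for whatever the scene graph actually uses, and the matrix helpers are assumed):

// Accumulate the camera node's transform by walking up to the root...
Matrix worldFromCamera = Matrix.Identity;
for (Node node = cameraNode; node != null; node = node.Parent)
{
    worldFromCamera = node.Transformation * worldFromCamera;
}

// ...then load the inverse as the base of the modelview matrix: the world is
// shifted by -camera, rather than a camera being moved through the world.
Gl.glMatrixMode(Gl.GL_MODELVIEW);
Gl.glLoadMatrixd(worldFromCamera.Inverse().ToArray());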

It’s a little like this.

Imagine you can move the world.  Move the world right.  Everything in your view shifts right.

But that’s impossible.  You move left.  Everything in your view still shifts right.

Camera Vs. World Space.  From top to bottom:  the original picture – the darker area shows what the camera is currently seeing in world space;  model shifted right – the camera sees what is expected – the world to have shifted to the right;  camera shifted left – as far as the camera view is aware, this creates the same effect as the image in the middle.  You don’t quite get beaches like that in Manchester.

Going back to the scene graph, there is one major problem. What if, for example's sake, the camera was attached, but we then removed its node from the scene graph? Or it just wasn't attached in the first place? We can still traverse up the nodes to find the camera position – this will happily be done until there are no more nodes to traverse. The main problem is recognising whether or not the last node traversed is the root node of the scene graph. We do a simple equality test, but then what?

My supervisor’s been saying something a lot – fail fast.

That means whenever you know there is something wrong, fail the execution as fast as possible. As far as I'm aware, there are two ways of doing this in C#: Debug.Assert() and exception throwing.
Debug.Assert() calls are tests used to make sure the program is working – the asserted condition should always prove true during execution, but you can never be too sure.  Exceptions can be used for when something has been entered incorrectly client-side.
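
Debug.Assert() lives in System.Diagnostics; a one-line sketch of the internal-invariant side (the condition here is made up for illustration):

// Asserts guard invariants of our own code and disappear in release builds;
// exceptions guard against bad input from the client and do not.
Debug.Assert(nodes.Count > 0, "Scene graph traversal should never see an empty node list");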

And for the exception side, an example.

Client Side

Renderer.SetCamera(null);

Internally

public void SetCamera(Node node)
{
    if (node == null)
    {
        throw new CameraCannotBeNullException();
    }
    camera = node;
}

So, coming back to the original case: when we're traversing up the nodes from the camera, we check to see if the final node is the root node.  If not, we throw an exception.  That way, the client can handle it when it picks it up.  And failing fast means the client can deal with it as soon as possible.
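
A sketch of that final check (again, the names are illustrative rather than Renderer's actual code):

// Walk up from the camera's node; if the walk doesn't end at the scene graph's
// root, the camera is attached to a detached subtree – fail fast.
Node node = camera;
while (node.Parent != null)
{
    node = node.Parent;
}
if (node != sceneGraph.Root)
{
    throw new CameraNotAttachedToSceneGraphException();
}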


To London And Back

Posted by Robert Chow on 09/12/2009

To those who read my last post: no, I did not drive to London and back, although it would've been quite an experience to tell, having had only so few driving lessons.  No, this is a different experience to tell.

Instead, I've been down to London over the past two days to visit a conference, PSI Graphics (PSI – Statisticians in the Pharmaceutical Industry).  The conference was about displaying data, and highlighted ways and techniques of doing this across differing software packages.

The first talk was about how (not) to display data.  Most of it was common sense, but it was probably the one I found most useful, because it highlighted, on paper, what a user should and shouldn't do when creating a graph.  One of my personal favourites of how not to display data was rule #5: make sure the numbers don't add up.

And along came, yet another, Fox news fail.

Fox News GOP Candidates. This image has been taken from http://flowingdata.com/2009/11/26/fox-news-makes-the-best-pie-chart-ever/

Second up was about a program a developer had created to help statisticians understand the effects a drug/placebo may or may not have on several patients.  The program, developed within AstraZeneca, was especially useful and introduced many methods that helped statisticians do their jobs more effectively.  The entire talk was essentially focused on how program interactivity meant spending less time interpreting the data, thus creating more time for productivity.  I guess I should try to make the Grapher component do similar.

The last of the morning talks introduced why there should always be a control data set, and how to display it effectively so an interpreter can easily see the difference between the control and the sample sets.  It also included calculating regression to the mean, and to be completely honest, a fair bit of this talk went over not just my head, but many others' too.  But I think I got the gist of it.

The afternoon presented many approaches to the “Graphics Challenge” proposed by PSI – the challenge was to display and evaluate several data cases using a software package.  Of the software packages on show, MATLAB was the only one I'd heard of, having used it in my AI course last year.  Unfortunately, the MATLAB speaker was unable to turn up, so I had to sit through a good few hours of talks about the SAS, STATA, R/S+ and GenStat approaches.  All of this was new to me, but because I knew why I was at the conference (to investigate how to display data, to find methods of displaying data efficiently, to see how others display their data, to spot any other graphs that need to be considered, to see how we can better other packages, etc.), I think I was probably one of the least bored of the many that attended.

Before going to this conference, I was a bit skeptical.  I knew the conference was going to be useful, but didn't quite know how much.  A lot of the things said I'd considered before, but that's as far as it went.  It was very useful in that it allowed me to put pen to paper, and actually have things written down instead of just thinking about them – or at least thinking I'd been thinking about them, when in fact I hadn't.  I was a little skeptical about the people too – I didn't know what to expect.  But I met a fair few characters, from an electrical engineer turned SAS programmer to many statisticians, one of whom is a placement student like myself.  And lastly, because it was London.

Funnily enough, though, I didn't find myself outside of my comfort zone at all – which surprised me the most.  As tiring as it is to go to London and back from Bideford, it's definitely an experience I'll want to repeat, and I urge many others to go to conferences too, skeptical or not.

On one last note, I have to say, over the past couple of days, I should have just kept my mouth shut when in a car.  On Monday I was given that lecture about not being so hard on myself when driving.  Yesterday, I was in a taxi with a born-and-bred country man.  I simply remarked on how in Manchester I'd rarely have to get a taxi because of public transport, and he went and gave me a full-blown account of how being in the country isn't so “slow-paced”, and how you just have to plan ahead.

Shutting up.


Renderer: Picking

Posted by Robert Chow on 03/12/2009

Remember that decorator pattern I used for writing to textures?  It turns out to be particularly useful for picking too.

Decorate the main drawer class with a picking class.  This allows the picking class to start off by setting the render mode to select mode, and setting the X and Y co-ordinates of the mouse.  The image is then rendered as normal but, because of the render-mode switch, it is recorded in the select buffer instead of being drawn to the device context.  After this, the picking class interprets the select buffer, and then returns the results.
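
For reference, a rough sketch of the classic select-mode recipe with the Tao bindings – the general shape of the thing, not Renderer's actual decorator code (mouseX and mouseY are assumed to come from the mouse event):

int[] viewport = new int[4];
int[] selectBuffer = new int[512];

Gl.glGetIntegerv(Gl.GL_VIEWPORT, viewport);
Gl.glSelectBuffer(selectBuffer.Length, selectBuffer);
Gl.glRenderMode(Gl.GL_SELECT);          // render into the select buffer, not the screen

Gl.glInitNames();                       // the name stack identifies what got picked
Gl.glPushName(0);

Gl.glMatrixMode(Gl.GL_PROJECTION);
Gl.glLoadIdentity();
// Restrict rendering to a 1x1 pixel region around the mouse; OpenGL's
// y axis runs bottom-up, hence the flip against the viewport height.
Glu.gluPickMatrix(mouseX, viewport[3] - mouseY, 1, 1, viewport);
// ... multiply in the normal projection and draw the scene as usual ...

int hits = Gl.glRenderMode(Gl.GL_RENDER);  // switching back returns the hit count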

Sounds fairly easy.  But I did run into a problem near the start.

The first problem was that the mouse wasn't picking where I wanted – it was interpreting the mouse as always being in the bottom-left hand corner.  Why so?

I later learnt that when changing the render mode, the picking set-up multiplies the projection matrix by a pre-configured pick matrix so it can do the picking appropriately.  In the original drawer class, I was resetting the projection matrix by loading the identity, and thus wiping what the picker class had set up – kind of like defeating itself.  So I had to change this by taking the matrix resets out of the main drawing method and placing them outside – these would have to be called elsewhere, likely through the Renderer facade (another GoF pattern), before calling the draw.

So, with the matrix set up properly, I encountered another problem.  And this time it was hardware-based.  It would pick appropriately on my supervisor's computer, yet not on mine.

An example scenario:  there are several images overlapping each other, each with their own pick Id.

My supervisor's computer: picks appropriately – returns the object with the nearest Z value (most positive) under the mouse (X, Y).

My computer: picks inappropriately – returns the object with the nearest Z value under the mouse (X, Y), providing the object has not moved in modelview space in terms of its Z value.  This accounts for translations, rotations and scales.  It will therefore sometimes return the correct object but, most of the time, it won't.  Particularly annoying if you have a scene that is constantly rotating and you want to pick the nearest item – it will always pick the same item every single time, simply because it was rendered at the front before it was rotated.

Simple fix: enable culling.  This means that anything that is not in view won't be rendered, so technically there is nothing behind the frontmost object – it can be the only one that is picked!
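
In OpenGL terms, that fix is a single call:

// Cull faces pointing away from the viewer, so nothing sits "behind" the
// frontmost object as far as the select buffer is concerned.
Gl.glEnable(Gl.GL_CULL_FACE);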

But that's also particularly annoying when you want to use two-face rendering – the ability to render shapes both clockwise and counter-clockwise.  Enabling culling in OpenGL results in all shapes that would be drawn clockwise to the screen not being drawn at all.  Not so useful in rotating scenes!

Anyway.

Combining textures and picking, this demo allows the user to pick a quadrant, which will then change accordingly – to textured if it is off, or to non-textured if it is on.

Renderer Picking Demo. From top-left, clockwise:  empty canvas; bottom-left quadrant click to turn texture on; top-right quadrant click to turn texture on; top-left quadrant click to turn texture on; bottom-left quadrant click to turn texture off; top-right quadrant click to turn texture off; bottom-right quadrant click to turn texture on; top-left quadrant click to turn texture off.  To achieve the original image from the last step, the user would click bottom-right quadrant, turning the texture off.
