Planning the Spontaneous

It's more than just a blueprint.


MultiOpenGlControl Demo

Posted by Robert Chow on 16/04/2010

I’ve managed to grab some video capture software and also produce a small demo of using multiple SimpleOpenGlControls. Each screen has its own camera, focused on the same scene. Using lighting and picking, clicking a face of the cube will change the scene, and the change is reflected in all of the cameras.

I did have a little trouble with the picking at first – when I picked, I forgot to change the current camera, so it would register the mouse click on one screen whilst using the camera of another.  With that fixed, it was nice to be able to play around with Renderer again.

True, the video’s not supposed to be a hit, but at least it shows what I want it to.


MultiOpenGlControl

Posted by Robert Chow on 14/04/2010

One thing I’ve only just come to tackle is the prospect of having more than one rendering context visible at any time.

Make Current

Using the Tao SimpleOpenGlControl, it’s a relatively simple feat.

MultiOpenGlControl.  Displaying two SimpleOpenGlControls in immediate mode is relatively easy.


The SimpleOpenGlControl class allows you to make the control the current rendering context. Any OpenGL calls made after this are applied to the most recently made-current rendering context.

public void Init()
{
    SimpleOpenGlControlA.InitializeContexts();
    SimpleOpenGlControlB.InitializeContexts();
}

public void SimpleOpenGlControlAPaint(object sender, PaintEventArgs e)
{
    SimpleOpenGlControlA.MakeCurrent(); // all code following will now apply to control A

    Gl.glClearColor(1, 1, 0, 0);
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();
    Gl.glOrtho(-1, 1, -1, 1, 1, -1);

    Gl.glMatrixMode(Gl.GL_MODELVIEW);
    Gl.glLoadIdentity();

    Gl.glColor3d(1, 0, 0);
    Gl.glBegin(Gl.GL_POLYGON);
    Gl.glVertex2d(-0.5, -0.5);
    Gl.glVertex2d(0.5, -0.5);
    Gl.glVertex2d(0, 0.5);
    Gl.glEnd();

    SimpleOpenGlControlB.Invalidate(); // I invalidate B so it gets refreshed – if I only invalidate A, B will never get drawn
}

public void SimpleOpenGlControlBPaint(object sender, PaintEventArgs e)
{
    SimpleOpenGlControlB.MakeCurrent(); // all code following will now apply to control B

    Gl.glClearColor(0, 1, 1, 0);
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();
    Gl.glOrtho(-1, 1, 1, -1, 1, -1); // NOTE: I have changed the projection to show the images differently

    Gl.glMatrixMode(Gl.GL_MODELVIEW);
    Gl.glLoadIdentity();

    Gl.glColor3d(1, 0, 0);
    Gl.glBegin(Gl.GL_POLYGON);
    Gl.glVertex2d(-0.5, -0.5);
    Gl.glVertex2d(0.5, -0.5);
    Gl.glVertex2d(0, 0.5);
    Gl.glEnd();

    SimpleOpenGlControlA.Invalidate(); // I invalidate A so it gets refreshed – if I only invalidate B, A will never get drawn
}

Multiple Vertex Buffer Objects

I did run into a little trouble though. Rendering in immediate mode worked fine. However, when I came to use vertex buffer objects and render in retained mode, the results were different.

MultiVertexBufferObjects. Only one of the controls renders the image, despite exactly the same calls being made in each control’s paint method. You can tell the paint method of control A is called, because the clear colour is registered.


Only one of the rendering contexts shows an image; the other does not, despite the obvious call to clear the colour buffer.

int[] buffers = new int[3];
float[] vertices = new float[] { -0.5f, -0.5f, 0.5f, -0.5f, 0, 0.5f };
float[] colours = new float[] { 1, 0, 0, 1, 0, 0, 1, 0, 0 };
uint[] indices = new uint[] { 0, 1, 2 };

public void Init()
{
    SimpleOpenGlControlA.InitializeContexts();
    SimpleOpenGlControlB.InitializeContexts();

    Gl.glGenBuffers(3, buffers);
    Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[0]);
    Gl.glBufferData(Gl.GL_ARRAY_BUFFER, (IntPtr)(vertices.Length * sizeof(float)), vertices, Gl.GL_STATIC_DRAW);
    Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[1]);
    Gl.glBufferData(Gl.GL_ARRAY_BUFFER, (IntPtr)(colours.Length * sizeof(float)), colours, Gl.GL_STATIC_DRAW);
    Gl.glBindBuffer(Gl.GL_ELEMENT_ARRAY_BUFFER, buffers[2]);
    Gl.glBufferData(Gl.GL_ELEMENT_ARRAY_BUFFER, (IntPtr)(indices.Length * sizeof(uint)), indices, Gl.GL_STATIC_DRAW);
}

public void SimpleOpenGlControlAPaint(object sender, PaintEventArgs e)
{
    SimpleOpenGlControlA.MakeCurrent(); // all code following will now apply to control A

    Gl.glClearColor(1, 1, 0, 0);
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();
    Gl.glOrtho(-1, 1, -1, 1, 1, -1);

    Gl.glMatrixMode(Gl.GL_MODELVIEW);
    Gl.glLoadIdentity();

    Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[0]);
    Gl.glVertexPointer(2, Gl.GL_FLOAT, 0, IntPtr.Zero);
    Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[1]);
    Gl.glColorPointer(3, Gl.GL_FLOAT, 0, IntPtr.Zero);

    Gl.glEnableClientState(Gl.GL_VERTEX_ARRAY);
    Gl.glEnableClientState(Gl.GL_COLOR_ARRAY);

    Gl.glBindBuffer(Gl.GL_ELEMENT_ARRAY_BUFFER, buffers[2]);
    Gl.glDrawElements(Gl.GL_POLYGON, indices.Length, Gl.GL_UNSIGNED_INT, IntPtr.Zero);

    Gl.glDisableClientState(Gl.GL_VERTEX_ARRAY);
    Gl.glDisableClientState(Gl.GL_COLOR_ARRAY);

    SimpleOpenGlControlB.Invalidate(); // I invalidate B so it gets refreshed – if I only invalidate A, B will never get drawn
}

public void SimpleOpenGlControlBPaint(object sender, PaintEventArgs e)
{
    SimpleOpenGlControlB.MakeCurrent(); // all code following will now apply to control B

    Gl.glClearColor(0, 1, 1, 0);
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();
    Gl.glOrtho(-1, 1, 1, -1, 1, -1); // NOTE: I have changed the projection to show the images differently

    // same code as above

    SimpleOpenGlControlA.Invalidate(); // I invalidate A so it gets refreshed – if I only invalidate B, A will never get drawn
}

Duplicating Vertex Buffer Objects

This is because generated buffers belong only to the rendering context that was current when they were created – in this case, SimpleOpenGlControlB, since it was the last one initialized. If you are using buffers and want to render the same data in a different rendering context, you have to recreate the data buffers under that context. It seems like a bit of a waste really – having to create the same thing twice just to view it in a different place.

public void Init()
{
    SimpleOpenGlControlA.InitializeContexts();
    SimpleOpenGlControlB.InitializeContexts();

    SimpleOpenGlControlA.MakeCurrent();
    // create bufferset

    SimpleOpenGlControlB.MakeCurrent();
    // repeat the same code to recreate the bufferset under the other rendering context
}

// Draw methods

Shared Vertex Buffer Objects

Fortunately, there is a way around this. Wgl offers a method, wglShareLists, which allows different rendering contexts to share not only the same display lists, but also vertex buffer objects, frame buffer objects and textures. To use it, you need the rendering context handles – and, unfortunately, you can only get those with a small hack: make each control current in turn and ask Wgl for the current context. (It is also best to set up the sharing before either context has created any buffers of its own.)

public void Init()
{
    SimpleOpenGlControlA.InitializeContexts();
    SimpleOpenGlControlB.InitializeContexts();

    SimpleOpenGlControlA.MakeCurrent();
    IntPtr aRC = Wgl.wglGetCurrentContext();
    SimpleOpenGlControlB.MakeCurrent();
    IntPtr bRC = Wgl.wglGetCurrentContext();

    Wgl.wglShareLists(aRC, bRC);

    // start creating vbos here – these will now be shared between the two contexts
}

// Draw methods

Multiple Controls For Renderer

This now provides two solutions for Renderer – I can either have separate lists for each control, or have them all share the same one. There are advantages and disadvantages to both.

In terms of management, it would be a lot easier to have them share the same lists – there is an overhead in tracking which lists belong to which control. Separate lists would also cause a problem when a user tries to draw a scene for one control while the vertex buffer objects in the scene are assigned to another.

It would also be advantageous to have shared lists when using heavy OpenGL operations, such as texture binding – otherwise a new texture would have to be bound each time a different control is refreshed. Sharing a texture list would eliminate this problem.

In the short run, using separate lists has its disadvantages: memory use is rather inefficient, because a new set of lists has to be created each time a control is created. However, the memory story is different when sharing lists in the long term. Once a control is destroyed, so too are its lists – but a shared list will continue to accumulate more and more data until all of the controls have been destroyed, and that is entirely up to the user, which could take some time.

As a result, for the time being, I am going to go for the option of sharing lists – mainly because it avoids having to deal with the management side, and it also minimises the frequency of heavy OpenGL operations. Not only that, but implementing the management would take some time. If I do have time nearer the end of my placement, I may come back and revisit it.

MultiOpenGlControl-3D. Here is Renderer using 4 different SimpleOpenGlControls simultaneously. I’ve tried to replicate a (very simple) 3D editing suite: there are 3 orthogonal views – Front, Top and Side – and a projection view. It uses the same model throughout, but each control has a different camera position.


Renderer: Renderables

Posted by Robert Chow on 15/01/2010

So with the majority of the Renderer library nearly finished (with the exception of layers and fonts – well, there are fonts, they’re just not perfect), the process of refactoring has started. On top of that, I’m also creating a toolkit to act as a helper library, making the Renderer library a bit more user-friendly. Less typing means more time to do other things.

Renderable Objects

So Renderer is based on the concept of having Renderable objects. These are exactly what they say on the tin: they are renderable. To render one, you need access to its properties. These properties are fairly straightforward, and are what you would expect a Renderable object to have. Of course, they have been fine-tuned to the use of vertex buffer objects, so the properties are:

Vector[] Vertices;  // vertices of the shape
Colour[] Colours;   // colours specified at each vertex
Vector[] Normals;   // normals specified at each vertex
Vector[] TexCoords; // 2D texture co-ordinates specified at each vertex; these map into the Renderable's texture
uint[] Indices;     // the order in which the vertices are to be drawn
IMaterial Material; // any material properties this Renderable may hold
ITexture Texture;   // the texture, if any, bound to the Renderable
DrawType DrawType;  // an enum representing each OpenGL draw type, specifying how the Renderable is drawn

Without Vertices, Indices or a DrawType, the Renderable is unrenderable, so these are required in the constructor. The vertices are the core property and cannot be changed afterwards; all of the other properties can be, subject to validity checks.
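A minimal sketch of that constraint (Vector and DrawType are Renderer’s own types; the property bodies here are assumptions):

public class Renderable
{
    // Vertices, Indices and DrawType are the minimum needed to draw,
    // so the constructor demands all three
    public Renderable(Vector[] vertices, uint[] indices, DrawType drawType)
    {
        Vertices = vertices;
        Indices = indices;
        DrawType = drawType;
    }

    public Vector[] Vertices { get; private set; } // core property – no public setter
    public uint[] Indices { get; set; }            // changeable, subject to validity checks
    public DrawType DrawType { get; set; }
}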

Duplicate Duplicate

Since each Renderable is drawn as a vertex buffer object, it seems rather silly to store the core data multiple times – it will already be in the GPU, as well as in the artificial buffering system in Renderer (used to control and keep track of what is in the GPU) – so it is not necessary to hold it in the Renderable object itself. Instead, I have used the notion of a ‘pointer’ per property. This is not the conventional pointer traditionally used in computing, as it does not directly correspond to a place in memory. Instead it points to a place in the artificial buffer where the corresponding data is held. Using this method means the Renderable object is lightweight and cost-effective in memory.

To obtain the ‘pointer’, the client asks the Renderer library. This is done through the RendererFacade (the Facade design pattern), and takes just one parameter – the core data. The core data is sent to the artificial buffer, and the ‘pointer’ is returned.

Where the vertex buffer objects are concerned, this changes the properties in the Renderable object to the ‘pointer’ type. Of course, this could cause problems, allowing clients to assign a Colours ‘pointer’ to TexCoords, an Indices ‘pointer’ to Normals and so forth, eventually crashing the program with a GPU error. To solve this, each ‘pointer’ is wrapped in a DataPacket interface, and deriving from this are corresponding DataPackets for Vertices, Colours, Normals and so on, making sure that the correct ‘pointer’ is matched with the right property.

IVerticesDataPacket Vertices;
IColoursDataPacket Colours;
INormalsDataPacket Normals;
ITexCoordsDataPacket TexCoords;
IIndicesDataPacket Indices;
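Putting that together, a round trip might look like this (a sketch – the facade method name AddColours, and the Colour constructor, are assumptions):

// send the core data to the artificial buffer; a typed 'pointer' comes back
Colour[] colours = { new Colour(1, 0, 0), new Colour(0, 1, 0), new Colour(0, 0, 1) };
IColoursDataPacket colourDataPacket = rendererFacade.AddColours(colours);

// only a colours packet fits the Colours property – assigning an indices
// or tex-coords packet here would be a compile-time error
renderable.Colours = colourDataPacket;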

Using the standard get and set methods that properties have, changing a property is relatively easy – and we can write these manually to do our validity checking, something like the sketch below.
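A minimal sketch of such a setter – the Count member on the packets, and the rule itself, are assumptions:

private IColoursDataPacket colours;

public IColoursDataPacket Colours
{
    get { return colours; }
    set
    {
        // assumed validity rule: one colour per vertex
        if (value != null && value.Count != Vertices.Count)
        {
            throw new ArgumentException("Colours must cover every vertex");
        }
        colours = value;
    }
}

But it could be better.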

Fluent Refactoring

Since we are refactoring, we might as well try to make this as easy to understand and read as possible.  For this, I have touched upon fluent interfaces.

The idea behind this is relatively simple when assigning a property.  It’s a little more difficult when it comes to multiple states.  But more on that later.

For example, if we want to assign a ColoursDataPacket, a TexCoordsDataPacket and a Texture to a Renderable, it’s as easy as this:

renderable.Colours = colourDataPacket;
renderable.TexCoords = texCoordsDataPacket;
renderable.Texture = texture;

However, to create a fluent interface, I have created extension methods that do more or less exactly the same thing. The difference is that it reads better, it’s less typing, and it’s a single statement.

renderable.Assign(colourDataPacket)
          .Assign(texCoordsDataPacket)
          .Assign(texture);

The extension methods are simple. Not only do they do pretty much the same as normal assignment, they also return the Renderable instead of void, allowing more methods to be called – a form of method chaining. To make it even simpler, I have overloaded the one method name, Assign(), for each thing that can be assigned, so the right variant is picked from whatever is passed in.
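As a minimal sketch, the overloads might look like this (the bodies are assumptions – Renderer’s real setters also run their validity checks):

public static class RenderableExtensions
{
    // each overload performs a normal assignment, then returns the
    // Renderable so further Assign() calls can be chained
    public static Renderable Assign(this Renderable renderable, IColoursDataPacket colours)
    {
        renderable.Colours = colours;
        return renderable;
    }

    public static Renderable Assign(this Renderable renderable, ITexCoordsDataPacket texCoords)
    {
        renderable.TexCoords = texCoords;
        return renderable;
    }

    public static Renderable Assign(this Renderable renderable, ITexture texture)
    {
        renderable.Texture = texture;
        return renderable;
    }
}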

Pretty cool, eh?

 


Renderer: Demo

Posted by Robert Chow on 16/12/2009

So now that I’ve got a lot of my functionality going for me in this component, I thought I’d try and create a snazzy demo.

My supervisor’s obsessed with it. It’s like a little game – there is a rotating cube, and the challenge is to click on each face before you pass on to the next level. As you progress, the cube rotates just that little bit faster, making it harder to complete the challenge. Clicking on a face will turn that face on. Clicking on a face that is already on will turn it off – you can only progress when all 6 faces are switched on.

So you can immediately see that this little demo already involves vertex buffer objects (drawing the cube), the scene graph (rotating the cube), the camera (viewing the cube, via the scene graph) and picking (turning faces on/off). But what about the others?

Well, let’s place the scene into dark surroundings – we’re going to need lights to see our cube – that’s lighting handled. So where does texturing fit in?

Switching the faces on/off needs an indicator to show what state they are in. We could easily do this by simply changing the colour of a face. But that’s kinda boring. So instead, we’re going to take the render-to-texture example, and slap that on any face which is on. So that’s textures included.

Here are some screenshots of this demo.

Demo Setup. From left to right: initial scene – just a white cube; let’s dim the lighting to, well, none; include some lights: green – centre, red – bottom, blue – left; create a texture.

Demo Rotate. The demo in motion – the camera is stationary, whereas the cube is rotating, and the lights are rotating independently around it.

Demo Play.  The first and second images show what happens when a face is clicked – the texture replaces the blank face.  Once all the faces are replaced with the texture, the demo advances to the next level.

Demo Camera.  This shows added functionality of how a camera can be fixed on to a light, thus the camera is rotating around the cube too.  Here it is fixed on to the blue light.  This is done with ease by manipulating the scene graph.

I guess you can’t really get a grasp of how this demo works in its entirety – snapshots don’t really do it much justice. I might try and upload a video – how, I’m unsure of, but I’m sure I can probably find screen capture software on the net.

And unfortunately, there is no score involved. Yet. That will come when fonts come.


Renderer: Textures

Posted by Robert Chow on 22/11/2009

Carrying on with the Renderer library, the next function I decided to implement was textures. Being able to render the scene to texture is a major function, and for this library a must. It is also a requirement to be able to use those textures as part of a scene. I had not previously used vertex buffer objects with textures either, so it was a good time to test this out and make sure it worked.

I decided to demo a function similar to the previous one, but adding in textures too. This demo takes the random triangles, renders them to texture, and then draws another scene using that texture.

Renderer: Textures.  This scene is made up of 4 images, using the same texture generated from code very similar to that of the vertex buffer objects demo.

As you can see, the original scene has been manipulated in a way that would be hard to recreate without first rendering to texture. The texture size I used was 128 x 128, as was the viewport when I first rendered the triangles. This scene, however, is 512 x 512. The original texture is seen in the bottom-right corner; all the other textures have been rotated and resized. The largest of the textures, bottom-left, is slightly fuzzy – this is because it is twice the size of the original.

Renderer API

The way I have implemented the render-to-texture function in the API is not much different from rendering as normal. It’s a simple case of:

Renderer.RenderToDeviceContext();

or

Renderer.RenderToTexture(..params..);

Technically, rendering to texture will also render to the device context. The main difference, however, is that after rendering to the device context, the render-to-texture method takes the parameters given and makes a simple call to glCopyTexImage2D – as per normal when a client wishes to render to texture using OpenGL. To save on code duplication, I have used the decorator pattern – the render-to-texture class “decorates” the original class. In doing so, I can set up texturing by first binding a texture, make the original call to render to the device context, and then finish off by copying the result into the bound texture.
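As a minimal sketch of that decoration (the IRenderer interface and class names are assumptions, not Renderer’s actual API):

public interface IRenderer
{
    void Render();
}

public class DeviceContextRenderer : IRenderer
{
    public void Render()
    {
        // render the scene to the device context as normal
    }
}

public class TextureDecorator : IRenderer
{
    private readonly IRenderer decorated;
    private readonly int texture;

    public TextureDecorator(IRenderer decorated, int texture)
    {
        this.decorated = decorated;
        this.texture = texture;
    }

    public void Render()
    {
        Gl.glBindTexture(Gl.GL_TEXTURE_2D, texture); // set up texturing first
        decorated.Render();                          // the original render-to-device-context call
        // finish off by copying the frame buffer into the bound texture (128 x 128 here)
        Gl.glCopyTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGB, 0, 0, 128, 128, 0);
    }
}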


Renderer: Vertex Buffer Objects

Posted by Robert Chow on 19/11/2009

So I’ve started on the Renderer library, and although this update might be coming a couple of weeks late, there are a few demos that I’ve managed to produce as the library is slowly being developed. Most of the next few Renderer posts will be short, and will not contain much code – most of the code I am using I have already explained in earlier posts.

The first stage of Renderer I have incorporated is the use of vertex buffer objects. This is because they are pretty much the core drawing function I will be using to draw shapes on the screen. Although the screenshots don’t clearly show it, the point of this first demo was simply to make sure that these vertex buffer objects work.

Using a blending function to incorporate the alpha values, I have loaded the buffers with several objects – each a triangle of random position and size, with 3 random colours, one for each vertex. Even the number of triangles is random.

Renderer: Vertex Buffer Objects.  The screenshots show the results after loading the buffers with random triangles.

It might not be the most exciting demo in the world, but at least it shows that my vertex buffer objects work. As a result, I can carry on, knowing that the most fundamental part of the Renderer library is in place. For now.


Catchup: confused = (learningCurve > n^2) ? indefinitely : possibly;

Posted by Robert Chow on 01/10/2009

I’ve been here for a few weeks, and I’ve already learnt a lot more than I have over the past few years. And I also know that what I’ve learnt is, and will be, useful for the future as well. It’s kinda scary really. The amount of knowledge that goes around this building, although there are only around 15 personnel, is vast. Not to mention the level that my supervisor is at. Of course you don’t know yet, do you? He came to the company as a PhD graduate, and is basically creating a revolution within the company, despite having only been here for a year and a half. He’s the reason I’m creating the new visuals for the new program, instead of maintaining the old one. Good job really – I’m not the kind of person to be trawling through code looking for bugs; that’s someone else’s job. The old program’s written in C++, and was essentially spaghetti code: unmaintainable and unscalable. So my supervisor has proposed to build a new program based on his research – component-based systems. It’s like Lego – you’ve got bricks that make parts and they all plug into one another – and because of this, you can easily change each part. Well, alright, it’s not that simple, but it’s the best-known analogy anyway. Especially around here. My point is, you can definitely tell how much experience everyone here has. And it’s pretty scary. Least I’m getting taught properly this year. I hope.

Kanban Board

Kanban Board. Each post-it note represents a process, and belongs to one of the 5 swim lanes. These are Design, Implement and Review, with queue lanes to mark the completion of one stage before proceeding to the next. Each swim lane should have a limit on how many processes can appear in it at any one time.

We’ve just adopted a new way of time-management – my supervisor’s been a one-man team until I came along, so he figured we should probably get organised. So we’re using this thing called a Kanban board. Essentially you have swim lanes on the board, and each process has to make its way across from start to finish. Each thing on the board is a process – what you’re working on. Or should be working on. We’re still trying to get used to it – each process should only really be on the board for a week or so at a time; they’re ending up on it for about a month. So that still needs to be refined. In addition to the board, we review it twice a week in the form of stand-up meetings, which should only last around 5 minutes. I think we’re still getting used to those too – we end up sitting down after a while, and they inevitably last a lot longer than the intended 5.

At this point, I’ve managed to get the hang of VBOs, and drawn some pretty, interactive bar charts. It all works well when it’s hard-coded. But that’s the problem really. The abstraction’s a lot more difficult than it seems. Least I think so anyway. I guess those design patterns are coming in handy at this point. Remember my retained-mode rendering? Turns out I was missing something quite important – indices. Using indices allows you to specify which vertices get drawn in what order, so you don’t have to specify the same vertices more than once. Makes sense really. It’s a shame I dived in too quickly and became ignorant. They say ignorance is bliss. I disagree.
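A quick sketch of the saving, using a square drawn as two triangles:

// without indices, a square drawn as two triangles needs six vertices;
// with indices, four unique vertices (x, y pairs) are enough
float[] vertices = new float[] { 0, 0,  1, 0,  1, 1,  0, 1 };

// the indices spell out the two triangles, reusing vertices 0 and 2
uint[] indices = new uint[] { 0, 1, 2,  0, 2, 3 };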

Vertex Buffer Objects: Revised

So, similarly to before, we have to generate the buffers and bind them appropriately. However, instead of drawing the arrays, we draw the elements defined in the element array buffer – indicated by the indices.

Gl.glBindBuffer(Gl.GL_ELEMENT_ARRAY_BUFFER, buffers[0]);
Gl.glBufferData(Gl.GL_ELEMENT_ARRAY_BUFFER, (IntPtr)(indices.Length * sizeof(uint)), indices, Gl.GL_STATIC_DRAW);
Gl.glDrawElements(drawType, count, Gl.GL_UNSIGNED_INT, (IntPtr)(offset * sizeof(uint)));

Another thing you may have noticed is that I’ve added an offset to the pointer – this allows you to draw many different shapes from the same data in the graphics card without changing it; you just change where the pointer points.
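For instance, if the bound element array buffer already holds two triangles’ worth of indices, both can be drawn from the same data (a sketch):

// first triangle: three indices, starting at the beginning of the buffer
Gl.glDrawElements(Gl.GL_TRIANGLES, 3, Gl.GL_UNSIGNED_INT, (IntPtr)0);

// second triangle: three indices, offset past the first three
Gl.glDrawElements(Gl.GL_TRIANGLES, 3, Gl.GL_UNSIGNED_INT, (IntPtr)(3 * sizeof(uint)));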

So by now, I’ve done a lot of prototyping. In order to create the Grapher and Mapper components, I need to create the Renderer component for them to interact with. I was a bit gutted, thinking that I’d got VBOs pretty much sorted, to find I had to do a lot more research to get this working. Saying that, I still need to apply textures to VBOs. But I’m quite pleased with my progress as far as Renderer is concerned – I’ve prototyped the majority of VBOs, and how they will fit into a scene graph. In addition to Renderer being a Tao wrapper, I’m also hoping to include layers – a bit like Adobe Photoshop. It sounds easy enough – and it is, when it’s hard-coded. It’s a little bit different when you’re working with shapes of any size, fitting into containers, which themselves have to be encapsulated into a layer. And then there’s multiple layering to consider too.

Graph Prototype. These are screenshots of a prototype I did just a couple of weeks into the placement. It shows how layering can be considered in a graph. All of the graph elements are hard-coded, but it does allow for interaction. The layers can be ordered, and their visibility switched on/off – this will, in effect, produce a different view of the graph. Sorry about the colours – that was my boss’ fault – he doesn’t like dark backgrounds.

As far as Grapher is concerned, it’s going rather well actually. The research is more or less done – mainly because I was forced to do it. On the first day, my supervisor informed me that I would be giving a 45-minute presentation about graphs in front of around 20 people. I was not a happy bunny, to say the least. But it went rather well actually. And if I’m honest, having gone from doing 5-minute presentations at uni to doing a large one professionally, I quite enjoyed the challenge. True, I was shaking like a leaf, but I felt pretty good coming out of it. Hopefully, I’ll still be here for the next one.


Catchup: City > Town > Village > ?

Posted by Robert Chow on 01/10/2009

So I started work here a little over 2 months ago, and to keep this blog consistent by being inconsistent, I’m going to try to sum up, in some detail, what I’ve been up to over the past couple of months.

So it’s 13/07/2009. Some might say it’s unlucky to start work on the 13th. I couldn’t really care less. I’ve just been given a not-so-brief debriefing of what to expect for the upcoming year or so. IGI – Integrated Geochemical Interpretation. They develop and use a program to help geochemists decide whether or not a rock sample is worth its value in money when looking for oil. My job – to help create the visual elements of the next release: Graphs and Maps.

I’ve had little experience with OpenGL, and I came to the company having learnt Java on Linux for the first two years of university. You can probably imagine my thought process when I was asked to write in C# using Visual Studio. And if I’m honest, it’s brilliant compared to what I’ve been used to.

Lambda Expressions

x => x * x

I’d never seen this EVER before I came here. I know C# isn’t exactly new, but to me, it is. This lambda expression returns the square of the value it is given. Another interesting thing:

Extension Methods

public static void Times(this int times, Action<int> action)
{
    for (int i = 0; i < times; ++i)
    {
        action(i);
    }
}

This essentially takes an integer, and performs the given action that many times.

Combine the two, and you can get something like this:

// data: a float array containing random numbers
// Bar(float height): a vertical bar to be drawn on a bar chart, its value determined by an entry in data

bars = new Bar[data.Length];
bars.Length.Times((i) => bars[i] = new Bar(data[i]));

So this takes the lambda expression, and does it the given number of times. It saves so much time and space, and of course the extension method code is reusable. We’ve essentially taken 6 lines of code and pushed them into 1.
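For comparison, here is roughly what that one-liner replaces – the long-hand loop:

// the equivalent without Times() and the lambda
bars = new Bar[data.Length];
for (int i = 0; i < data.Length; ++i)
{
    bars[i] = new Bar(data[i]);
}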

So that’s July C# for you. But I haven’t begun on OpenGl.

The Tao Framework is a framework specifically designed to wrap OpenGL for use with C#. It’s fairly newish, which is the problem. I couldn’t seem to find examples ANYWHERE (turns out I found them a month and a half later, in the Tao Framework examples). They’re all written in C++, of which I can hardly say I’ve any prior experience. Not to mention that my OpenGL skills aren’t exactly amazing either. So I learn about this thing called retained mode – instead of drawing out your polygons as the code runs, all the data is pushed into the graphics card up front, and displayed from there. The great thing about this is that it cuts down the data sent to the graphics card on every frame, because the data is already stored there, which in turn speeds up the process of drawing to screen. It was a nightmare trying to render the graphics version of “Hello World!” – a triangle – onto the screen with this retained mode, especially when I couldn’t find any examples whatsoever. But I got there in the end.

Vertex Buffer Objects

So we start off by generating handles to the buffers – this should only need to be done once. Here I have used only 2 buffers – one for vertices, the other for colours. You can, of course, use more buffers for normals and texture coordinates.

int[] buffers = new int[2];
Gl.glGenBuffers(2, buffers);

Now that the buffers are generated, load the data into them: first bind a buffer, then load its data. I have unbound the buffers at the end just as a safety precaution.

Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[0]);
Gl.glBufferData(Gl.GL_ARRAY_BUFFER, (IntPtr)(vertices.Length * sizeof(float)), vertices, Gl.GL_STATIC_DRAW);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[1]);
Gl.glBufferData(Gl.GL_ARRAY_BUFFER, (IntPtr)(colors.Length * sizeof(float)), colors, Gl.GL_STATIC_DRAW);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, 0);

Here, I have used float arrays for the values of the vertices and the colors. Again, this only needs to be done once, as long as the data doesn’t change. As the data is now stored in the graphics card, you no longer need the arrays in main memory that you originally used to hold it. With the data bound to the buffers, we can now draw the information onto the screen. This is done by binding the buffers again, setting the appropriate pointers, and drawing the arrays into the device context. To draw the arrays, you need to give a drawing type and the vertex count. Before this, however, you need to enable the appropriate client states.

Gl.glEnableClientState(Gl.GL_VERTEX_ARRAY);
Gl.glEnableClientState(Gl.GL_COLOR_ARRAY);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[0]);
Gl.glVertexPointer(3, Gl.GL_FLOAT, 0, (IntPtr)0);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[1]);
Gl.glColorPointer(3, Gl.GL_FLOAT, 0, (IntPtr)0);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, 0);
Gl.glDrawArrays(drawType, 0, count);
Gl.glDisableClientState(Gl.GL_VERTEX_ARRAY);
Gl.glDisableClientState(Gl.GL_COLOR_ARRAY);

Seems simple enough really doesn’t it? It gets harder (I found this out next month).

So on top of learning a new language and getting to grips with the advanced basics of OpenGL, a little light reading – Design Patterns. A design pattern is supposed to be a “simple and elegant solution to a specific problem”. Ha. I must admit, some are particularly useful, and do help with the maintainability and scalability of a system, but some of them are more bother than they’re worth. Of course, with more than 23 to learn, I am going to be a little biased. An interesting read, but not really my cup of tea. Still, I shouldn’t really dismiss what could potentially be the future.
