Planning the Spontaneous

It's more than just a blueprint.

Archive for October, 2009

AndThenThereWas(light);

Posted by Robert Chow on 30/10/2009

Lighting and Materials. Theoretically, it’s one of the first things you should really learn when you’re getting into graphics – it’s usually within the first 10 tutorials that you’ll pick up from anywhere. In theory, no one should have much of a problem with it. In theory, theorists should have done their research before making these theories. But then again, I am no theorist.

This carries on from my last post.  So we’re going to touch on how I made the planets glow like they do in the picture, and how lighting and materials have been an absolute pain (that is, until I finally decided to do a little reading).

Dim Lighting

The first time I touched lighting and materials was in university during an examples class – and it was easy.  All I had to do was change a couple of numbers, and voila, the lighting changed.  It was a little bit more difficult when I tried it out for myself.  The first thing I tried to demo was a rotating diamond, with a static light shining on it.  As the diamond rotated, the light moved up and down.  How, and why, I have no idea, and I still don’t.  The second thing I tried was to have a light in the middle of the scene, and two gluSpheres, one on either side, so each would have only the side facing the light lit, and the other dark as night.  Nope.  As I drew the second sphere, it would look exactly the same as the first – light shining on the same side.  And still no idea as to why.  Maybe I’m missing something really important that not many people have encountered before (by the way, quite a few people have had the problem with the light moving up and down as the object rotates), or maybe I’m just unlucky.

It could be seen as a bad thing that I still don’t know why the lights weren’t working the way I wanted them to – but I’ve managed to get over it.  Because now I know how lights work, and I don’t really see why I would want to replicate the problems again.

Lighting & Materials

The calls to create lighting and materials are very short and simple.  Initially, you need to enable lighting, and also enable the individual lights – OpenGL allows you to use 8 lights in total, and the advice is that you should really reconsider your model if you need any more.  So apart from the enable calls, there are only 3 calls that you really need to consider.  The first describes the general light model, and the other two describe the lighting and the materials.  These are, respectively:

Gl.glLightModelfv(pname, params);

Gl.glLightfv(light, pname, params);
where light is the light number, from Gl.GL_LIGHT0 to Gl.GL_LIGHT7,
and pname is either Gl.GL_POSITION, Gl.GL_AMBIENT, Gl.GL_DIFFUSE or Gl.GL_SPECULAR.*

Gl.glMaterialfv(face, pname, params);
where face is either Gl.GL_FRONT or Gl.GL_BACK,
and pname is either Gl.GL_AMBIENT, Gl.GL_DIFFUSE, Gl.GL_SPECULAR, Gl.GL_SHININESS or Gl.GL_EMISSION.*

(bearing in mind that the suffix fv applies to float arrays only – if you were to use any other type, say double, the suffix would need to change to dv).

*For now, I will only be looking at the specific parameters I have enlisted.  For further information, you can find it in the OpenGL Documentation.

Placing A Light

Where you place a light in the scene is very important – this is where the light source is, and it will be used to calculate how light bounces and reflects off materials.  In order to place your light in a desired position, you need to understand how it is set up.  To set up a light, you use the glLight() call with the parameter GL_POSITION.  Obvious, right?  The params should be a vector indicating x, y, z and w.  x, y and z are the co-ordinates of the light, and w should be 1.0f for it to work this way – a w of 1.0f gives a positional light, whereas a w of 0.0f gives a directional one.  However, despite making this call, the light is not necessarily in the place you have specified in world space.  The call will always look at the current modelview matrix at the time of the call, and will place the light relative to that matrix.  Therefore, if the entire scene is to be translated backwards so it will fit into a frustum view volume, make the call to set the light position after the translation.
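To illustrate the ordering, a minimal sketch – the translation and position values here are made up:

Gl.glMatrixMode(Gl.GL_MODELVIEW);
Gl.glLoadIdentity();
Gl.glTranslatef(0.0f, 0.0f, -10.0f);                                                  // move the scene into view first
Gl.glLightfv(Gl.GL_LIGHT0, Gl.GL_POSITION, new float[] { 0.0f, 5.0f, 0.0f, 1.0f });   // w = 1.0f: positional light, placed relative to the translated scene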

Lighting and Material Properties

The properties outlined below you will see everywhere in lighting and materials.  They describe how light reacts to surfaces.

Ambient
Describes the general lighting of an object – it has brilliantly been quoted from my 2nd year course as, “it just hangs there”.  It does not take into account how light reflects off a particular object – it is the general lighting – and as a result, it will not help to define the contours of any shape.

Diffuse
This property describes the diffuse reflection of a material.  This takes into account how the light will reflect off a particular surface, and how some of the light will also be absorbed.  Use the ambient property to describe the general color of an object, and the diffuse property to define how the object looks in light.

Specular
Light will sometimes reflect entirely off an object – this depends on the angle at which the light hits the surface.  Specular reflection reflects the light shining onto an object and makes it appear shiny; the scale of the effect is determined by the shininess property.

Shininess
This describes the sharpness of the light reflected due to specular lighting.  The higher the value, the shinier the object, and therefore the sharper the image of the light reflecting off it.

Emission
This property describes the amount of light that is emitted off an object.  In OpenGL, the property makes the object appear as if it is emitting light, however, in the model, this light does not affect any other objects – you will need to use an actual light for this.

Setting the material properties will affect how light reacts to an object – the light properties also need to be set for this to happen.  All of the above properties take a float array, representing red, green, blue and alpha, with the exception of shininess, which takes a number between 0 and 128.
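As a quick illustration – the values here are made up – a shiny red material might be set up like this:

Gl.glMaterialfv(Gl.GL_FRONT, Gl.GL_AMBIENT, new float[] { 0.2f, 0.0f, 0.0f, 1.0f });
Gl.glMaterialfv(Gl.GL_FRONT, Gl.GL_DIFFUSE, new float[] { 0.8f, 0.0f, 0.0f, 1.0f });
Gl.glMaterialfv(Gl.GL_FRONT, Gl.GL_SPECULAR, new float[] { 1.0f, 1.0f, 1.0f, 1.0f });
Gl.glMaterialfv(Gl.GL_FRONT, Gl.GL_SHININESS, new float[] { 64.0f });   // 0 to 128 – higher means sharper highlights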

Light Properties.  This depicts how, from left to right, ambient, diffuse and specular lighting can be used together to represent a 3-D sphere, far right.

Solar System

So now you know how it is done, it’s time to put it all together.  In my solar system model, I used two lights: one to represent the sun; and the other, to add a little extra.

To set up, we describe the light model, and then the lights we’re going to use – do this at the start so it doesn’t get called millions of times in the render loop.  As we are simulating outer space, the light model is set very low, near black.  The ambient and diffuse settings for the lights are also set up.

Gl.glLightModelfv(Gl.GL_LIGHT_MODEL_AMBIENT, new float[] { 0.05f, 0.05f, 0.05f, 1.0f });

Gl.glLightfv(Gl.GL_LIGHT0, Gl.GL_AMBIENT, new float[] { 0.0f, 0.0f, 0.0f, 1.0f });
Gl.glLightfv(Gl.GL_LIGHT0, Gl.GL_DIFFUSE, new float[] { 1.0f, 1.0f, 1.0f, 1.0f });
Gl.glLightfv(Gl.GL_LIGHT1, Gl.GL_AMBIENT, new float[] { 0.0f, 0.0f, 0.0f, 1.0f });
Gl.glLightfv(Gl.GL_LIGHT1, Gl.GL_DIFFUSE, new float[] { 1.0f, 1.0f, 1.0f, 1.0f });

Gl.glEnable(Gl.GL_LIGHTING);
Gl.glEnable(Gl.GL_LIGHT0);

You will have noticed that I have also enabled lighting, and Light0, but not Light1.  Light0 is my main light, representing the sun; it will be constantly on, and therefore does not need to be enabled in the render loop – once is enough.  I am using Light1 as an effect on the sun, to make it appear 3-D, and it should therefore only be turned on when I render the sun, and turned off when I render everything else.  The reason behind using a light as an effect on the sun is that I am giving the sun an emission property.  This makes the sun appear as if it is giving out light.  Not only this, but the light source representing the sun is inside the sun sphere, and will not affect the outside, making the sphere look flat.  Turning a light onto the sun, and using a diffuse property, will make the sun appear 3-D.  Now to render the scene.

First we translate the scene into view; it is here that we place the light representing the sun.  Note that I am placing the light position after the translation so the light is also in view.  The light is not, however, affected by the rotation after the position call, nor by any other matrix stack call made after it.

Gl.glMatrixMode(Gl.GL_MODELVIEW);
Gl.glLoadIdentity();

Gl.glTranslatef(-2000.0f, 0.0f, -4000.0f);
Gl.glLightfv(Gl.GL_LIGHT0, Gl.GL_POSITION, new float[] { 0f, 0f, 0f, 1.0f });
Gl.glRotatef(30, 1, 0, 0);

We want to draw the sun, with a radius of about 140.0f, and to have a light shining onto it so it appears 3-D.  This is where Light1 comes in.  It is placed just outside of the sun so it is able to hint the sphere with a bit of light.

Gl.glLightfv(Gl.GL_LIGHT1, Gl.GL_POSITION, new float[] { 150.0f, 150.0f, 150.0f, 1.0f });
Gl.glEnable(Gl.GL_LIGHT1);
Gl.glMaterialfv(Gl.GL_FRONT, Gl.GL_AMBIENT_AND_DIFFUSE, new float[] {1.0f, 1.0f, 0.5f, 1.0f });
Gl.glMaterialfv(Gl.GL_FRONT, Gl.GL_EMISSION, new float[] { 1.0f, 1.0f, 0.5f, 1.0f });
// Draw GluSphere here
Gl.glDisable(Gl.GL_LIGHT1);

I have used a little shortcut here, declaring the ambient and diffuse properties at the same time.  This is very useful, as the chances are these will be the same values for the majority of basic objects anyway.  Enabling the light before drawing the sun and disabling it after makes sure that the light only affects the sun and no other object that is being rendered.  Now we are free to draw the other planets.  Using a similar approach, we set the position and material, and draw.  We only need to set the material once if the following objects use the same material – similar to glColor(), where you only set the color if the next vertex to be drawn is a different color to the current one.  Therefore when we draw the other planets, we have to make sure we set the emission to black, otherwise it will appear that all the planets are giving off light!
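So for each planet, the calls look something along these lines – the colour here is made up; the important line is the emission reset:

Gl.glMaterialfv(Gl.GL_FRONT, Gl.GL_AMBIENT_AND_DIFFUSE, new float[] { 0.2f, 0.4f, 1.0f, 1.0f });
Gl.glMaterialfv(Gl.GL_FRONT, Gl.GL_EMISSION, new float[] { 0.0f, 0.0f, 0.0f, 1.0f });   // black – planets don't glow
// Draw planet GluSphere here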

Do bear in mind that in order for your lighting and materials to work properly, lighting must be enabled.  The material settings will automatically override any color settings you have, so if you want to draw an object without the material lighting, just remember to disable lighting first!

I know this post has been very long, slightly overdue, and may seem like it’s been cut short – and to be fair, it could be a lot longer and better covered.  I’ll try to keep the next one shorter, I promise!

 

Posted in Computing, Placement | 1 Comment »

Route = Scenic;

Posted by Robert Chow on 23/10/2009

A scene graph depicts how objects fit into a scene, and how objects are related by their position.  For example, in a room there may be a table – its position is described within the room.  On the table, a mug and a computer – both of their positions can be described relative to the table they are sat on.

It has been argued that scene graphs are a complete waste of time.  Some think so, others not so.  I, on the other hand, believe that if they can help the problem you are trying to solve, then why not.  The rendering component I am creating will use an interface where each object that can be drawn, such as a square or a triangle (using vertex buffer objects of course), will be “queued” into the component, ready to be drawn.  To then use a scene graph that is opinionated towards this design is a major help, and will prevent many of the state changes that happen when rendering in immediate mode – when it comes to drawing the object, the world matrix of the object is taken from the scene graph and loaded into OpenGL using a single call, whereas typically it may need several transformations to get the object to where we want it to be.  Each one of these transformations is a state change – OpenGL is, in effect, one huge state machine – and each state change is a cost in the rendering pipeline.

One of the best exercises to do to show you understand how a scene graph works is to model the solar system.  It’s relatively straight forward, and easy to picture.  You have your scene, whereby there is the sun.  Relative to the sun are the planets.  Relative to each of the planets, are their moons.

Solar System.  The top screenshot depicts the four planets Mercury (purple), Venus (yellow), Earth (blue) and Mars (red), with their respective satellites, circling the sun.  The bottom screenshot is an overhead view of the scene, at a different time frame.

Scene Graph

There are lots of different implementations of scene graphs, but the majority use composition to produce a tree-like hierarchy.  At every branch in the tree is a node, often representing a transformation, which plays a part in determining the position of the rest of the branch.  The object being drawn is finally represented as a leaf – its position determined by all the branches passed through on the way back to the root of the tree.  So if you were to plan out the solar system, it may look a little like this.

Solar System Scene Graph. Each transformation helps to form a part of a branch in the tree, and at the end of each branch is a planet, acting as a leaf.  The transformations have been vastly simplified – in reality there are a lot more transformations to consider when building a solar system that incorporates planet rotation, tilt and so forth.  And before you say anything, yes I know there is more to the Solar System than up to Mars.

To represent this, I have built my scene graph using two classes: a class to manage the scene graph, SceneGraph, and a class to make the scene graph tree possible, GraphNode.  I have omitted the implementations of these for brevity.

GraphNode – represents a node in the scene graph.

public GraphNode Parent { get; private set; }
public Matrix Matrix { get; private set; }

SceneGraph – keeps a track of the current node, and manipulates the scene graph stack.

public GraphNode CurrentNode { get;  private set; }

public void PushMatrix();
public void PopMatrix();
public void LoadIdentity();
public void Clear();
public void AddMatrix(Matrix matrix);
public Matrix GetWorldMatrix();

private Stack<GraphNode> nodeStack;

It’s fairly simple really.  The GraphNode doesn’t really do anything, except act as a holder for the matrix it represents and make sure it always has a constant reference to the node above it in the tree.  This reference can be traversed to find all the other nodes the current node branches off, right up to the root.  It plays its part when GetWorldMatrix() is called – this is when we want to draw an object.  Calling the method traverses up to the root using the parent reference, multiplying each matrix along the way.  You’ll remember me saying OpenGL is a state machine – we want to minimise the state changes as much as possible – these range from changing textures and enabling lighting, all the way to changing color, or even making a transformation.  GetWorldMatrix() uses a matrix class, which is only handed to OpenGL at the very end, with a single call to glLoadMatrixf().  We are then able to draw the object at the current position, which is determined by the world matrix.
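As a rough sketch of that traversal – assuming the Matrix class overloads the * operator, which is one of the implementation details I’ve omitted:

public Matrix GetWorldMatrix()
{
    Matrix world = CurrentNode.Matrix;
    for (GraphNode node = CurrentNode.Parent; node != null; node = node.Parent)
    {
        world = node.Matrix * world;   // parent transformations apply before the child's
    }
    return world;
}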

The rest of the SceneGraph implementation is a mirror of the OpenGL matrix stack – you can load identity, and push and pop matrices on and off a stack.

If you want to find out more about scene graphs, I’d suggest reading a couple of these links.

http://home.comcast.net/~tom_forsyth/blog.wiki.html#[[Scene%20Graphs%20-%20just%20say%20no]] – Why this user believes scene graphs are a complete and utter waste of time.  I actually partially agree with him.  But you can’t always win.

http://web.archive.org/web/20031207141303/www.cfxweb.net/~baskuene/object.htm – This guy’s managed to produce a 3D engine, using this scene graph he created himself.  Everything in the scene plays a part in the scene graph, including the camera, and lights.  That makes sense.  But like I said, you can’t always win.

This post’s been cut a little short – it was intended to cover something else in the solar system, but you’ll just have to wait.

Posted in Computing, Placement | 2 Comments »

DrawCurve(Bézier);

Posted by Robert Chow on 20/10/2009

So there are two types of curves that OpenGL offers.  These are NURBS (Non-Uniform Rational B-Splines) and Bézier curves – or, around here, known as the French-guy curve (if anyone knows how to correctly pronounce Bézier, do enlighten us!)  After playing around with NURBS, I decided it was too complex.  I didn’t understand a single parameter whatsoever, and I wasn’t really too prepared to spend hours trying to find out.  So I decided to launch into Bézier curves.

Bézier Curves

These are relatively easy to do in OpenGL.  There are just a few restrictions you have to understand.  The best way to understand these is to see how these curves are calculated.  Hello wikipedia.

There is this really useful animation on wikipedia that describes how a Bézier curve is drawn.  Assuming you’re not going to look for it yourself, I’ve kindly included it here too.

Cubic Bézier Curve Animation.  This animation describes how the curve is calculated using control points and time t, where 0 <= t <= 1.  This image has been taken from http://en.wikipedia.org/wiki/Bézier_curve.

As you can see, this curve is drawn using 4 control points, P0-3.  At time t, intermediate points are calculated length t along each of the lines connecting the control points.  The same is done with these intermediate points, and again, until we result with just a single point.  This dictates where, at time t, the next point of the Bézier curve is drawn.  The order of the curve depends on how many control points there are.  Here there are 4 control points, resulting in a cubic curve.  If there were 3, it would be a quadratic curve, and if there were 5, the curve would have an order of 4.

So now we know we need control points, and how the curve is drawn, we can do this now in OpenGL.

controlPoints = new float[] { -4.0f, -0.0f, 0.0f, -5.0f, 4.0f, 0.0f, 0.0f, 4.0f, 0.0f, 4.0f, -0.0f, 0.0f };
Gl.glMap1f(Gl.GL_MAP1_VERTEX_3, 0f, 1.0f, 3, 4, controlPoints);
Gl.glEnable(Gl.GL_MAP1_VERTEX_3);

Gl.glBegin(Gl.GL_LINE_STRIP);
for (int i = 0; i <= 30; i++)
{
    Gl.glEvalCoord1f((float)i / 30.0f);
}
Gl.glEnd();

The controlPoints array holds our control points, each point comprising an x, y and z co-ordinate – thus 4 points * 3 co-ordinates results in an array of size 12.  This, and the next two lines, only need to be initialised once.  The first of these takes the control points and uses them to create the Bézier curve.  The first parameter tells the function that we want to map 3 components to one point – i.e. x, y and z.  The second and third parameters tell the function what linear mapping to use – for now, use the values 0 and 1.  The 3 is the stride between the points – in this case 3, because we have the x, y and z components of one point before we immediately start another.  The second to last parameter defines the number of control points we have, and last but by no means least is the array storing the control points.  The third line simply enables the mapping of this function.

The second section is where we actually draw the curve.  Using an accuracy of 30, we are in effect drawing 30 straight lines from t=0 to t=1.  The call in the for loop does all the work for us – it takes the value of t, evaluates where that corresponds to on the Bézier curve, and then maps that vertex to the device context.  As we initiated the drawing of a line strip, it is included as the next point of the line.

Add a little bit of extra information, such as the control points, onto the screen, and you get something similar to this.

OpenGL Bézier Curve.  This curve was created using the code above.  I have tried to replicate a similar shape to the one in the animation.

This is all very good, but in reality it’s quite slow.  It’s calculating the curve every single frame, and it shouldn’t need to do that.  Unfortunately OpenGL doesn’t give you the option of saving the computed vertices, so you have to do it every time.  The advantages of being able to save the vertices are not only avoiding recalculating the same values over and over again, but of course that you can then store these values in, let’s say, an array, and transfer them to the graphics card to be stored and used there.  Sound familiar?  Vertex Buffer Objects of course.  Hello wikipedia again.

Bézier Curves (Manual)

The algorithm for creating a cubic Bézier curve is quite simple.  It’s as follows:

B(t) = (1 − t)³P0 + 3(1 − t)²tP1 + 3(1 − t)t²P2 + t³P3,   t ∈ [0,1]

Alright, it’s not that simple, but easy enough to read it.  You don’t have to understand how it comes to the conclusion of what B(t) is, just as long as you can translate it into code.  And that looks a little like this.

int accuracy = 30;
Vertices = new Vertex[accuracy + 1];
for (int i = 0; i <= accuracy; ++i)
{
    float t = (float)i / accuracy;
    Vertices[i] = (((float)Math.Pow(1.0f - t, 3) * P0)
                 + (3.0f * (float)Math.Pow(1.0f - t, 2) * t * P1)
                 + (3.0f * (1.0f - t) * (float)Math.Pow(t, 2) * P2)
                 + ((float)Math.Pow(t, 3) * P3));
}

Where P0, P1, P2 and P3 are the control points.  These themselves are stored as type Vertex, where I have included operator overload functions to make my life a little easier.  Convert the vertices array into a float array, and you can use it as you please – directly in immediate mode, or stored for use in a vertex buffer object.  The only time you will need to recalculate the vertices is when any of the control points change position.
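For reference, a minimal sketch of what such a Vertex type might look like – just enough operator overloading for the calculation above:

public struct Vertex
{
    public float X, Y, Z;

    public Vertex(float x, float y, float z) { X = x; Y = y; Z = z; }

    // vertex addition and scaling by a float – all the curve calculation needs
    public static Vertex operator +(Vertex a, Vertex b) { return new Vertex(a.X + b.X, a.Y + b.Y, a.Z + b.Z); }
    public static Vertex operator *(float s, Vertex v) { return new Vertex(s * v.X, s * v.Y, s * v.Z); }
}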

Being able to understand how Bézier curves work, and to then be able to replicate that myself, has given me a huge advantage.  As a result, I have been able to bring together quite a snazzy demo.

Bézier Curves in Bézier Curves

This takes a single cubic Bézier curve, which you can then manipulate any way you want using the handles – or control points, as you may.  Want more curvature?  Done.  The curve can be expanded so there are multiple cubic Bézier curves within the initial curve.  The only thing I had to worry about was making sure that all the subcurves join together smoothly – this calls for handle dependency.  If a handle is controlling the weight of a point directly attached to the curve, then it will also affect the other handles attached to that point.  This is what ensures the smoothness of the curve.  I won’t show the code, but here are a few screenshots.

Bézier Curve Demo.  From top to bottom: the initial Bézier curve the demo produces; the left-hand side control point and its handle are altered to produce a bowl shape; the curve is split into two; the new control point is raised upwards – and in doing so, so are its handles; one of the handles corresponding to the new middle point is adjusted – as a result, the opposing handle attached to the same control point mirrors the action – this ensures the two subcurves join together smoothly.  Each handle that is connected to a control point acts like a weight, whereby its weight can be measured by the distance between the handle and its control point.

Just to finish, I couldn’t resist showing off.

Bézier Curve Name.  This has been produced using just the one line – it has many subcurves within it.  The grey lines show the weightings at each control point.

 

Posted in Computing, Placement | 5 Comments »

Print(TrueTypeFont)

Posted by Robert Chow on 16/10/2009

This problem’s been at me for the last couple of weeks.  And it’s now solved.  I don’t know whether to cry, or whether to jump for joy.  Either way, I’ll get odd looks in the office.

The Tao Framework is a very thin wrapper around the OpenGL library, and doesn’t explicitly support any easy way out of font rendering.  “Font rendering?”, I hear you say.  A LOT harder than it sounds.  In fact, a lot of things are a lot harder than they sound – why do we bother?!  Anyway.  It’s more than just typing a couple of characters and expecting them to appear on the screen – you have to find the character first, in a character library which you’ve had to either pull apart or create yourself, before you can draw it.  But after two weeks, many leads and dead-ends, it’s been solved.  And there’s a story behind it all.

Bitmap Fonts

These are pretty much the basic of the basics.  Each character is just a bitmap image, mono-color, and you can’t really do much with them either.  I was fairly happy to have cleared this hurdle – it took me a fair while to figure out what was happening and how.  Before you can pull the bitmap image out, you have to create the font first.

font = Gdi.CreateFont(-36,                // Height
0,                                        // Width
0,                                        // Escapement
0,                                        // Orientation
400,                                      // Weight
false,                                    // Italic
false,                                    // Underline
false,                                    // Strikeout
Gdi.ANSI_CHARSET,                         // Character Set
Gdi.OUT_TT_PRECIS,                        // Output Precision
Gdi.CLIP_DEFAULT_PRECIS,                  // Clipping Precision
Gdi.ANTIALIASED_QUALITY,                  // Output Quality
Gdi.FF_DONTCARE | Gdi.DEFAULT_PITCH,      // Pitch and Family
"Calibri");                               // Face

Wgl.wglUseFontBitmapsA(User.GetDC(simpleOpenGlControl.Handle), 0, 255, fontbase);

You then (as far as I’m aware) use display lists (I’m not too sure how these work… but I will do sooner or later) to call up each character, and you have to translate each time, with a look-up of the character width/height, before drawing the next character.  The code’s not very nice, so I’m going to save you (and myself) the trouble.
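For the curious, the usual pattern is roughly this – a sketch, not my actual code, with x and y being wherever the text should start:

Gl.glRasterPos2f(x, y);                   // where the text starts on screen
Gl.glListBase(fontbase);                  // display lists 0-255 were built by wglUseFontBitmapsA
byte[] text = System.Text.Encoding.ASCII.GetBytes("Hello World!");
Gl.glCallLists(text.Length, Gl.GL_UNSIGNED_BYTE, text);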

Outline Fonts

These are setup exactly the same way, with one change really.  Instead of initiating a list of bitmaps we ask for outlines instead.

Wgl.wglUseFontOutlinesW(User.GetDC(simpleOpenGlControl.Handle), 0, 255, fontbase, 0, 0.2f, Wgl.WGL_FONT_POLYGONS, gmf);

The great thing that outlines offer that bitmaps don’t is the flexibility they provide – they are 3D fonts, and can therefore be manipulated the same way a normal object in OpenGL can – that includes scaling, translating, rotating, lighting etc.

But in terms of quality, with basic anti-aliasing on, they’re not so nice – and the same goes for bitmap fonts too.  In fact, they’re both pretty much the same.

ISE.FreeType

This library is pretty much a wrapper around the Tao.Freetype library – the one used to “create” (loose term) fonts.  Score.  And it works as well.

Oh wait.

The license states that we can’t use it.  Got to go looking again.

FTGLSharp

There’s a library that works with C/C++ called FTGL, and someone has very nicely created a wrapper around it so it’s compatible with C#.  The great thing about FTGL is that it essentially has everything you need to render fonts onto the screen, with all the flexibility, and the appropriate license!  The problem is the wrapper itself – it was a struggle to compile, and even then it was a huge struggle to get something on the screen – in fact, that bit didn’t even happen.

Frustrating really.  All of these amazing leads, and you think it’s all good to go, but then you get shot down for being too eager.  If you want a job doing, do it yourself.  So I did.  After poking around the source code of solutions that ultimately didn’t work, I managed to create a solution of my own.

Font Rendering

So the approach I’ve gone for is texture-mapped fonts – load a character, and draw it to a texture.  The only problem is space – one way to solve that is texture atlasing, but that’s another chapter in the book.

The first thing to do is to make sure you have a reference to the FreeType library, and then, using the library pointer, create a pointer reference to the font face.

FT.FT_Init_FreeType(out LibraryPtr);
FT.FT_New_Face(LibraryPtr, trueTypeFilePath, 0, out FacePtr);

This is the tricky part.  Now that we have a pointer to the font face, we have to extract the information.  It’s in the form of a struct, so we marshal the information so it’s useable.

FT_FaceRec faceRec = (FT_FaceRec)Marshal.PtrToStructure(FacePtr, typeof(FT_FaceRec));

Following that, we set the character size/pixel height.  I have used my own function to determine the pixel height from the font point size.  Why?  Well, when you choose a font size in a text editor, you’re normally given the choice of point sizes, pt.  These point sizes do not directly correspond to the font height – the font height is actually 4/3 the point size.  In this method of producing the fonts, we take the max point size – I have set this to 96 (so the height is 128, a power of 2).  The functions also require the dots per inch (DPI) of the screen – I presume this is for detail.
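A minimal sketch of that helper, assuming nothing more than the 4/3 ratio above:

private static int GetFontHeight(int pointSize)
{
    // font height in pixels is 4/3 of the point size: 96pt gives 128px, a power of 2
    return (int)Math.Ceiling(pointSize * 4.0 / 3.0);
}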

FT.FT_Set_Char_Size(FacePtr, 0, MAX_POINT_SIZE * 64, (uint)DPI, (uint)DPI);
FT.FT_Set_Pixel_Sizes(FacePtr, 0, (uint)GetFontHeight(MAX_POINT_SIZE));

We then take the number of glyphs (a glyph is essentially a character) this true type font has to offer, and create an array to store the character information.

Characters = new TTCharacter[faceRec.num_glyphs];

Now that we have the font set up, we have to set up each character.  This is done on the fly, so each character is only created if it’s needed – saving on space and time.  The TTCharacter class I have used to store the character information will hold the texture ID the character is drawn onto, and the glyph metrics – these are unique to each character, describing its width, height, offsets and so forth.  As each glyph is created from the face, it gives a bitmap-type structure to use to draw the character.  This structure we will need to copy to a texture, so we can use it for ourselves.  So as we create each TTCharacter, we also generate a texture ID.

Characters[c] = new TTCharacter(c, faceRec.num_glyphs);
Gl.glGenTextures(1, out Characters[c].TextureId);

We then ask the face to load up the appropriate glyph.  This, similar to the face, is in the form of a pointer, so it too needs to be marshalled out into a useable state.  The glyph properties are also copied into the TTCharacter class.

FT.FT_Load_Char(FacePtr, c, FT.FT_LOAD_RENDER);
glyphSlotRec = (FT_GlyphSlotRec)Marshal.PtrToStructure(faceRec.glyph, typeof(FT_GlyphSlotRec));
FT.FT_Render_Glyph(ref glyphSlotRec, FT_Render_Mode.FT_RENDER_MODE_NORMAL);
Characters[c].Metrics = glyphSlotRec.metrics;

Now we have the glyph, we can take the bitmap and copy it to a texture.  There is only one problem though – and this confused the hell out of me when I came across it.  I first started to draw the texture as it was, without having modified it.  And the results weren’t pretty.  The bitmap needs to be adapted to a format OpenGL likes – that is, a texture with a width and height that are powers of 2.  So we transpose it into the new structure before copying it to the texture.  Note: I’ve used an extension method to find the next power of 2!  And also, lambda expressions for the for loops!

byte[] bmp = new byte[(glyphSlotRec.bitmap.width) * glyphSlotRec.bitmap.rows];
Marshal.Copy(glyphSlotRec.bitmap.buffer, bmp, 0, bmp.Length);

int texSize = (glyphSlotRec.bitmap.rows.NextPowerOf2() < glyphSlotRec.bitmap.width.NextPowerOf2()) ? glyphSlotRec.bitmap.width.NextPowerOf2() : glyphSlotRec.bitmap.rows.NextPowerOf2();

byte[] tex = new byte[texSize * texSize];
texSize.Times((j) => texSize.Times((i) =>
{
    if (i < glyphSlotRec.bitmap.width && j < glyphSlotRec.bitmap.rows)
    {
        tex[j * texSize + i] = bmp[j * glyphSlotRec.bitmap.width + i];
    }
}));

Gl.glBindTexture(Gl.GL_TEXTURE_2D, Characters[c].TextureId);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_S, Gl.GL_CLAMP_TO_EDGE);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_T, Gl.GL_CLAMP_TO_EDGE);
Gl.glTexParameterf(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_LINEAR);   // mag filter only accepts GL_NEAREST or GL_LINEAR
Gl.glTexParameterf(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_LINEAR_MIPMAP_LINEAR);
Glu.gluBuild2DMipmaps(Gl.GL_TEXTURE_2D, Gl.GL_ALPHA8, texSize, texSize, Gl.GL_ALPHA, Gl.GL_UNSIGNED_BYTE, tex);

You’ll notice that the lines at the end of the code above are a lot different to the code where I render to texture again and again.  This is because this method incorporates mipmapping.  Mipmapping is the process of taking a large image, and then scaling it down to the next power-of-2 size, and so on, until we reach a 1×1 size.  As the image is resized, each pixel is sampled multiple times, trying to keep precision in quality whilst producing a smaller image.  This saves having to add extra anti-aliasing functionality to stop images from looking blocky – this happens when there are more, or fewer, than 1 texel fighting for a pixel.  As a texture is called up, the appropriate size of mipmap is selected and loaded instead of the original, thus preserving some form of quality.  Expensive in memory, yes, but it’s the best I have so far.  The only other option is to produce each character multiple times, once for each point size – without mipmapping.  That would cost a lot more in memory.

Lastly, we draw the textures.  As the textures are stored as alpha values (see the code above), we don’t need to do much else but draw the texture.  Similarly to all other text renderers, after each character we have to find the character width and offsets, and translate in order to draw the next character.
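A rough sketch of drawing one character – not the actual renderer; penX/penY and advance are hypothetical stand-ins for the pen position and the glyph metrics look-up:

Gl.glEnable(Gl.GL_BLEND);
Gl.glBlendFunc(Gl.GL_SRC_ALPHA, Gl.GL_ONE_MINUS_SRC_ALPHA);       // blend using the glyph's alpha values
Gl.glBindTexture(Gl.GL_TEXTURE_2D, Characters[c].TextureId);
Gl.glBegin(Gl.GL_QUADS);
Gl.glTexCoord2f(0.0f, 1.0f); Gl.glVertex2f(penX, penY);
Gl.glTexCoord2f(1.0f, 1.0f); Gl.glVertex2f(penX + texSize, penY);
Gl.glTexCoord2f(1.0f, 0.0f); Gl.glVertex2f(penX + texSize, penY + texSize);
Gl.glTexCoord2f(0.0f, 0.0f); Gl.glVertex2f(penX, penY + texSize);
Gl.glEnd();
penX += advance;                                                  // advance comes from the stored glyph metrics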

Results are acceptable – they’re not quite MS Word standard, but they’re a lot better than mono-colour.

Font Comparison.  24pt normal weight, from right to left: MS Word, my font system and mono-colour bitmap font; from top to bottom: Calibri, Arial, Verdana and Times New Roman.

Font Problem.  The top shows what it should have looked like.  The bottom not so – this is what it looked like before copying the bitmap into an OpenGL-friendly format – it’s not nice!

Mip Mapping.  Shows how a full scale image can be downsized whilst still maintaining quality.  This image has been taken from http://www.actionscript.org/forums/showthread.php3?t=194116

It’s not perfect – but it’s a start nevertheless.  The main thing is to make sure that the current solution is scalable – that way, a new solution for producing fonts can easily be added into the system without having to change much of the code that’s already there.

On a more personal note – I had my first driving lesson yesterday.  Can’t say I found it easy.  But we’ll have to see how that goes for the future.  Either way, at least it’s a start.


Posted in Computing, Placement | 3 Comments »

enum Pick { Me, Me., Me! }

Posted by Robert Chow on 13/10/2009

So I’ve actually done quite a bit this last week – true, I stayed in far longer than is healthy, but there’s not much else to do in this place you call… Devon.

Picking

I started to look into picking – using the mouse and being able to recognize what object I am hovering over.  It’s another thing that we all take for granted, but if it’s done incorrectly, it can be a huge nightmare.  You don’t really want to have to put your mouse 10 pixels to the left of the object you want to select, and what’s worse is when there is an object behind another and you can’t get to it, no matter how hard you try.  So I looked into it, and decided to combine it with the layering problem I had.  And I think I’ve got quite a lot of it sussed.

The picking is hardware based so you get all the hardware acceleration qualities – that’s a good thing.  The whole thing is essentially 2 processes.

1) Attach a number to an object, draw it, and repeat.
2) To find what you’ve clicked on, take the mouse position, change the render mode to select mode, and it should… hopefully… return the number you’ve assigned to the object you’ve clicked on.

Simples.

The code is below.  There’s not much to show, it really is quite simple!

We first set up the picking buffer – this stores the properties of the objects that have been picked – essentially every object under the cursor, regardless of visibility.  We must also set the mode – here RenderMode is used for reference.

PickBuffer = new int[PICKING_BUFFER_SIZE];
Gl.glSelectBuffer(PICKING_BUFFER_SIZE, PickBuffer);
RenderMode = Gl.GL_RENDER;

When we come to render the scene, we check to see which mode we are in.  If it is the normal render mode, then we render as normal… as it suggests.  If it’s in select mode, we have to take this into consideration, and set up the appropriate functions to allow the hardware to pick up what the mouse has just clicked on.  To do this, we only add a couple of lines – one of them sets up the pick matrix, and the other initializes the name stack that allows you to assign a name to each object.  The viewport[] is the viewport you set normally, with viewport[3] being the height.  You can be lazy and get this from OpenGL by calling Gl.glGetIntegerv(Gl.GL_VIEWPORT, viewport).  The picking tolerance values are the x and y sizes of the surface your mouse should cover.

if (RenderMode == Gl.GL_SELECT)
{
    Glu.gluPickMatrix(xmouse, (double)(viewport[3] - ymouse), PICKING_TOLERANCE, PICKING_TOLERANCE, viewport);
    Gl.glInitNames();
    Gl.glPushName(0xffffffff);
}

You can then render the scene as per usual, but each time you draw an object, load a name onto it using one call before drawing each object.  Note that the name must be an integer value.

Gl.glLoadName(name);
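For example (DrawSquare and DrawTriangle being hypothetical stand-ins for your own drawing routines):

Gl.glLoadName(1);
DrawSquare();      // one name per pickable object
Gl.glLoadName(2);
DrawTriangle();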

Now for the interesting part!  As the mouse is clicked, make sure you set the mode to select mode.  Render the scene again, using the mouse x and y values, and hopefully, after all of this, you should have the number of items under the cursor – here numberOfHits – and the pick buffer should be filled with the information regarding these objects.  There is a small glitch with OpenGL on some graphics cards however – it doesn’t register unless you double pick.  That’s easy to solve – just do it twice if at first you get no hits.

RenderMode = Gl.GL_SELECT;
Gl.glRenderMode(RenderMode);
RenderScene();
RenderMode = Gl.GL_RENDER;
int numberOfHits = Gl.glRenderMode(RenderMode);

Lastly, we sort out the information in the pick buffer.  The object that is directly visible under the cursor will be the last thing in the pick buffer.  What ends up in pickedID will be the object name of what you’ve just clicked on!

for (int i = 0, index = 0; i < numberOfHits; i++)
{
    int nItems = PickBuffer[index++];
    int zMin = PickBuffer[index++];    // nearest depth for this hit (unused here)
    int zMax = PickBuffer[index++];    // furthest depth for this hit (unused here)

    for (int j = 0; j < nItems; j++)
    {
        pickedID = PickBuffer[index++];
    }
}

Simples!

EDIT:  Not so simple actually.  After trying picking on different computers, I found they each fill the pick buffer differently – I presume this is because the picking is hardware based.  The best way to solve this is to enable GL_CULL_FACE.  That way, anything that is behind what you’ve picked won’t get rendered, and therefore your picking buffer will only hold the one item you’ve picked.  (I think…)
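That is, somewhere in your setup:

Gl.glEnable(Gl.GL_CULL_FACE);   // back faces never get rasterised, so they never register as hits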

 

Posted in Computing, Placement | 2 Comments »

RenderToTexture() { RenderToTexture(); }

Posted by Robert Chow on 09/10/2009

Rendering To Texture

Ever since I started, I’ve been given the task of rendering an image to a texture, and then rendering it onto the screen.  And to be honest, I never really got the hang of it.  And I still kind of haven’t.  But it’s been a couple of months now, and I’ve managed to grab hold of it by the thread.  Check this out.

Rendering To Texture.  This effect is done by taking a snapshot of the cube, and then rendering the image onto the cube itself, as it rotates.  Psychedelic, some might say.

Cool huh?!  So how’s it done?!

We need to render the cube, then copy that to a texture.  This texture is then rendered onto each face of the cube itself, and this happens each time the display loop is called – creating a mise en abîme.  This is done using mainly two functions, but before we call those functions, we have to set up the texture first.

int texSize = 256;
byte[] texture = new byte[4 * texSize * texSize];
int textureId;

Gl.glGenTextures(1, out textureId);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, textureId);
Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGBA, texSize, texSize, 0, Gl.GL_RGBA, Gl.GL_UNSIGNED_BYTE, texture);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_LINEAR);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_LINEAR);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_S, Gl.GL_CLAMP);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_T, Gl.GL_CLAMP);

These calls initialize the texture – we are asking for a texture of size texSize * texSize (these have to be a power of 2, although not necessarily any more these days), bound to a reference, textureId.  I’m still playing around a bit with the parameters for glTexImage2D – like I said, I’m only just starting to get to grips with this.  As far as I’m aware, the last four parameter calls are saying that when the texture is far away, or very close, render it using a smooth algorithm; and that the texture should not be repeated in either the X or Y direction (S and T in texture terms).  But again, I’m not too sure.

So the first of the function calls is the actual drawing of the cube.  If we draw this in immediate mode – where we define all the vertices – we have to precede each one with a texture coordinate, so we know which part of the texture to bind to that vertex.  Of course, if there is no texture available, then the texture coordinate call is ignored, and the vertex is rendered as normal.  I’m not going to show you the whole code, but here is an example of one of the faces.

Gl.glTexCoord2i(0, 0); Gl.glColor3f(1.0f, 1.0f, 0.0f); Gl.glVertex3f(-1, -1, -1);
Gl.glTexCoord2i(1, 0); Gl.glColor3f(0.0f, 1.0f, 0.0f); Gl.glVertex3f(-1, -1, 1);
Gl.glTexCoord2i(1, 1); Gl.glColor3f(0.0f, 1.0f, 1.0f); Gl.glVertex3f(1, -1, 1);
Gl.glTexCoord2i(0, 1); Gl.glColor3f(1.0f, 1.0f, 1.0f); Gl.glVertex3f(1, -1, -1);

Now that we’ve got the cube, we have to be able to render it, copy it to a texture, then render it again, using the texture.  Before we can use the texture however, we do need to enable texturing – this is a simple call.

Gl.glEnable(Gl.GL_TEXTURE_2D);

So first we render the scene, appropriate to copy to the texture.

Gl.glClearColor( 1.0f, 1.0f, 1.0f, 0.0f);
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);

Gl.glMatrixMode(Gl.GL_PROJECTION);
Gl.glLoadIdentity();
Glu.gluPerspective(45, 1, 0.1f, 10);
Gl.glViewport(0, 0, texSize, texSize);

Gl.glMatrixMode(Gl.GL_MODELVIEW);
Gl.glLoadIdentity();
Gl.glTranslatef(0, 0, -5);
Gl.glRotatef(alpha, 0.75f, 1, 0.5f);

DrawCube();
Gl.glFlush();

Gl.glCopyTexSubImage2D(Gl.GL_TEXTURE_2D, 0, 10, 10, 0, 0, texSize -20, texSize -20);

This code first clears the scene as per usual, asking for a white background.  Secondly, it adjusts the projection matrix so it is suitable for the texture.  Thirdly, it processes the modelview matrix so the cube can be rendered appropriately onto the screen – it also takes in the variable alpha for its rotation adjustment.  Lastly, it draws the cube, and then copies the image onto the texture.  There are two different calls to copy an image to a texture, and the one I have used here copies the image to only a sub-part of the texture.  I have done this so I can get a border around the texture without much trouble.  I need this border, otherwise the cube will disappear into itself, and I won’t know where the original cube is in comparison to the textured cubes.  Confused?  I am, a little.  This call is also often used when devising a texture atlas.  A texture atlas is a texture full of many different textures, mostly unrelated.  Instead of having to change the texture to be drawn, you just change the coordinates pointing to the image in the texture.  This is a much quicker and less expensive way of using textures.

Now that we have our texture, we just render the cube again – but this time without copying the final output to the texture.  This one goes to the device context, aka the screen!
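A sketch of that second pass – windowWidth and windowHeight are assumed (and you’d also reset the projection for the window’s aspect ratio); the transformations are identical, but the viewport is now the whole window and nothing is copied afterwards:

Gl.glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
Gl.glViewport(0, 0, windowWidth, windowHeight);

Gl.glMatrixMode(Gl.GL_MODELVIEW);
Gl.glLoadIdentity();
Gl.glTranslatef(0, 0, -5);
Gl.glRotatef(alpha, 0.75f, 1, 0.5f);

DrawCube();        // this time the textured cube goes straight to the screen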

Posted in Computing, Placement | 1 Comment »

Catchup: limit++

Posted by Robert Chow on 05/10/2009

So, after having already written a couple of lengthy posts, I thought I’d finished – then it struck me.  I haven’t.  There are only a couple more things I wanted to show you.

Windows Presentation Foundation
(WPF, pronounced wu-puff)

This is the new windows form that Microsoft offers – it pretty much takes the old windows form, which was very Win98-esque, to a more present level – something that looks a lot like Windows XP/Vista.  And I presume similar to Windows 7, but I haven’t quite got that far yet.  I’m currently working with a tool from Tao, the SimpleOpenGLControl, which is a preset device context for rendering to.  Later on (or quite soon, rather) I’m going to have to change from the SimpleOpenGLControl to making the window from scratch, and from there, present it in a WPF window.

SimpleOpenGLControl in WPF.  This image has been taken from http://www.codeproject.com/KB/WPF/WPFOpenGL.aspx

To get to grips with WPF for now, we have started work on a WPF-designed repository browser for the Buzzard Framework – the framework which my supervisor has created.  WPF itself is quite easy to get to grips with really.  It uses a language called XAML, which looks quite similar to XML.  True, I don’t really have any experience with XML, but I’ll learn.  Then the really cool bit is the code-inline/behind – the code that actually makes it work.  Of course, we haven’t quite got onto that bit yet – the repository browser is the least of my priorities.

NAnt

NAnt is a build tool which we can use to automate builds, without having to go into VS itself and build the file.  I guess it works in a similar way to the build scripts I used to create in Linux last year, but it seems to be a lot more formal and structured.  The main problem I ran into when running the build script was accessing the VS command prompt.  There doesn’t seem to be one in the express editions, so a quick hack of placing the VS properties into the Windows environment variables had to be done.

One last thing.  The things in C# are just bizarre.  But great too.  I keep on being told to take up a dynamic language (Python, Ruby, etc.), but I’m still getting to grips with this!

The ternary operator ?

You might have noticed the title of my last post as a little confusing (I couldn’t resist).  Much so.  Consider the next statement.

x = (y < 1) ? y : 1;

Doesn’t really make sense if you’re not used to it.  But it is essentially an assignment of a variable, after the evaluation of an if statement.  It’s exactly the same as:

if (y < 1)
{
    x = y;
}
else
{
    x = 1;
}

To simplify – x = (condition) ? valueIfTrue : valueIfFalse;
That’s 8 lines into 1.  It’s great, isn’t it!

Nullable? Types ??

Last thing.  I promise.  C# has introduced the concept of nullable types.  Makes sense really.  The integer value of 0 isn’t really a null value – it’s still there, it’s just 0.  The great thing about nullable types in C# is that they can be checked really easily, without the need for an if statement, by using the ?? operator.

int? i = null;
int j = i ?? 0;

So what does this mean?  The first statement declares a nullable int – done with the ? after the int.  This says that the variable i can either take any integer value (within the computational limits), or take null as a value.  The second statement declares a normal int, which therefore cannot be null.  It is assigned the value i holds.  But wait – that cannot be, for i is null, and j cannot hold a null value.  The ?? operator checks to see if i is null.  If i is not null, then j is assigned the same value as i, as per usual.  However, if it is null, then, because j cannot hold a null value, it looks at the next option – in this case, 0.  Therefore j is assigned the value of 0 (which is not null!).

I know the past few (well, only) posts have been quite brief and quite technical.  I am primarily doing this for my final year report, and for my personal reference – I find it easier to search a document on the computer than it is to find something in a book of scrambled notes.  It should hopefully all make sense when I come to write the final thing.  From now on, the posts should hopefully be a little bit more exciting, because I’m not having to rush 2/3 months into a few paragraphs!

Posted in Computing, Placement | 1 Comment »

Catchup: confused = (learningCurve > n^2) ? indefinitely : possibly;

Posted by Robert Chow on 01/10/2009

I’ve been here for a few weeks, and I’ve already learnt a lot more than I have over the past few years.  And I also know that what I’ve learnt is, and will be, useful for the future as well.  It’s kinda scary really.  The amount of knowledge that goes around this building, although there are only around 15 personnel, is vast.  Not to mention the level that my supervisor is at.  Of course, you don’t know yet, do you?  He came to the company as a PhD graduate, and is basically creating a revolution within the company, despite having only been here for a year and a half now.  He’s the reason I’m creating the new visuals for the new program, instead of maintaining the old one.  Good job really – I’m not the kind of person to be trawling through code looking for bugs; that’s someone else’s job.

The old program’s written in C++, and was essentially spaghetti code – unmaintainable and unscalable.  So my supervisor has proposed to build a new program based on his research: component-based systems.  It’s like Lego – you’ve got bricks that make parts, and they all plug into one another – and because of this, you can easily change each part.  Well, alright, it’s not that simple, but it’s the most well-known analogy anyway.  Especially around here.  My point is, with the amount of experience everyone here has, you can definitely tell they’ve got it.  And it’s pretty scary.  Least I’m getting taught properly this year.  I hope.

Kanban Board

Kanban Board.  Each post-it note represents a process, and belongs to one of the 5 swim lanes.  These are Design, Implement and Review, with queue lanes to represent the finish of one section before proceeding onto the next.  The swim lanes should have limits on how many processes can appear in any one lane at a time.

We’ve just adopted a new way of time-management – my supervisor’s been a one-man team until I came along, so he figured we should probably get organised. So we’re using this thing called a Kanban board. Essentially you have swim lanes on the board, and you have to make your way across from the start to the finish.  Each thing on the board is a process – what you’re working on. Or should be working on. We’re still trying to get used to it – each process should only really be on the board a week or so at a time.  They’re ending up at about a month or so at a time. So that still needs to be refined. In addition to the board, we have a review of the board twice a week, in the form of stand-up meetings, which should only last around 5 minutes.  I think we’re still getting used to those too – we end up sitting down after a while and they inevitably last a lot longer than the intended 5.

At this point, I’ve managed to get the hang of VBOs, and drawn some pretty and interactive bar charts.  It all works well when it’s hard-coded.  But that’s the problem really – the abstraction’s a lot more difficult than it seems.  Least I think so anyway.  I guess those design patterns are coming in handy at this point.  Remember my retained mode rendering?  Turns out I was missing something quite important – indices.  Using indices allows you to specify which vertices get drawn in what order, so you don’t have to specify the same vertices more than once.  Makes sense really.  It’s a shame I dived in too quickly and became ignorant.  They say ignorance is bliss.  I disagree.

Vertex Buffer Objects: Revised

So similarly to before, we have to generate the buffers and bind them appropriately.  However, instead of drawing the arrays, we draw the elements defined in the element array buffer – indicated by the indices.

Gl.glBindBuffer(Gl.GL_ELEMENT_ARRAY_BUFFER, buffers[0]);
Gl.glBufferData(Gl.GL_ELEMENT_ARRAY_BUFFER, (IntPtr)(indices.Length * sizeof(uint)), indices, Gl.GL_STATIC_DRAW);
Gl.glDrawElements(drawtype, count, Gl.GL_UNSIGNED_INT, (IntPtr)(offset * sizeof(uint)));

Another thing you may have noticed is that I’ve added an offset for the pointer – this allows you to draw many different shapes with the same data on the graphics card, without changing it – you just need to change the data it’s pointing to.
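For instance – a sketch, assuming the first six indices describe a square as two triangles, and the next three a separate triangle:

Gl.glDrawElements(Gl.GL_TRIANGLES, 6, Gl.GL_UNSIGNED_INT, (IntPtr)0);                      // the square
Gl.glDrawElements(Gl.GL_TRIANGLES, 3, Gl.GL_UNSIGNED_INT, (IntPtr)(6 * sizeof(uint)));     // the triangle, via the offset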

So by now, I’ve done a lot of prototyping.  In order for me to create the Grapher and Mapper components, I need to create the Renderer component for them to interact with.  I was a bit gutted, thinking that I’d got VBOs pretty much sorted, to find I had to do a lot more research to get this working.  Saying that, I still need to apply textures to VBOs.  But I’m quite pleased with my progress as far as Renderer is concerned – I’ve prototyped the majority of VBOs, and how they will fit into a scene graph.  In addition to Renderer being a Tao wrapper, I’m also hoping to include layers – a bit like Adobe Photoshop.  It sounds easy enough – and it is, when it’s hard-coded.  It’s a little bit different when you’re working with shapes of any size, fitting into containers, which themselves have to be encapsulated into a layer.  And then there’s multiple layering to consider too.

Graph Prototype.  These are screenshots of a prototype I did just a couple of weeks into the placement.  It shows how layering can be considered in a graph.  All of the graph elements are hard-coded, but it does allow for interaction.  The layers can be ordered, and their visibility switched on/off – this will, in effect, produce a different view of the graph.  Sorry about the colours – that was my boss’ fault – he doesn’t like dark backgrounds.

As far as Grapher is concerned, it’s going rather well actually.  The research is more or less done – mainly because I was forced to do it.  On the first day, my supervisor informed me that I would be delivering a 45-minute presentation in front of around 20 people, about graphs.  I was not a happy bunny, to say the least.  But it went rather well actually.  And if I’m honest, having gone from doing 5-minute presentations at uni to doing a large one professionally, I quite enjoyed the challenge.  True, I was shaking like a leaf, but I felt pretty good coming out of it.  Hopefully, I’ll still be here for the next one.

Posted in Computing, Placement | Leave a Comment »

Catchup: City > Town > Village > ?

Posted by Robert Chow on 01/10/2009

So I started work here a little over 2 months ago, and to keep this consistent by being inconsistent, I’m going to try and sum up, but detail what I’ve been up to over the past couple of months.

So it’s 13/07/2009.  Some might say it’s unlucky to start work on the 13th.  I couldn’t really care less.  I’ve just been given a not-so-brief debriefing of what to expect for the upcoming year or so.  IGI – Integrated Geochemical Interpretation.  They develop and use a program to help geochemists decipher whether or not a rock sample is worth its value in money when looking for oil.  My job – to help create the visual elements of the next release – these are Graphs and Maps.

I’ve had little experience with OpenGL, and I’ve come to the company having learnt Java on Linux for the first two years of university.  You can probably imagine my thought process when I was asked to write in C# using VS.  And if I’m honest, it’s brilliant compared to what I’ve been used to.

Lambda Expressions

x => x * x

I’d never seen this EVER before I came here.  I know C# isn’t exactly new, but to me, it is.  This expression essentially maps a value to its square.  Another interesting thing:

Extension Methods

public static void Times(this int times, Action<int> action)
{
    for (int i = 0; i < times; ++i)
    {
        action(i);
    }
}

This will essentially take an integer, and perform the given action that number of times.

Combine the two, and you can get something like this:

// data: a float array, containing random numbers
// Bar(float height): a vertical bar to be drawn on a bar chart, its value determined from an entry in data

bars = new Bar[data.Length];
bars.Length.Times((i) => bars[i] = new Bar(data[i]));

So this will essentially take the lambda expression, and execute it that number of times.  Saves so much time, and space, and of course the extension method code is reusable.  We’ve essentially taken 6 lines of code and pushed them into 1.

So that’s July C# for you.  But I haven’t begun on OpenGL.

The Tao Framework is a specifically designed framework that wraps around OpenGL and works with C#.  It’s fairly newish, which is the problem.  I couldn’t seem to find examples ANYWHERE (turns out I found them a month and a half later, in the Tao Framework examples).  They’re all written in C++, of which I can hardly say I’ve any prior experience.  Not to mention that my OpenGL skills aren’t exactly amazing either.  So I learnt about this thing called retained mode – instead of drawing out your polygons as the code behaves, all the data is pushed into the graphics card itself, and then displayed on screen.  The great thing about this is that the data doesn’t have to be re-sent to the graphics card every frame, because it’s already stored there.  This in turn speeds up the process of drawing to screen.  It was a nightmare trying to render the graphics version of “Hello World!” – a triangle – onto the screen with this retained mode, especially when I couldn’t find any examples whatsoever.  But I got there in the end.

Vertex Buffer Objects

So we start off with creating pointers to the buffers – this should only need to be done once.  Here I have only used 2 buffers – one for vertices, and the other for colours.  You can of course use more buffers for normals and texture coordinates.

int[] buffers = new int[2];
Gl.glGenBuffers(2, buffers);

Now that the buffers are generated, load the data into the buffers, by first binding them, then loading the data. I have unbound the buffers at the end just as a safety precaution.

Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[0]);
Gl.glBufferData(Gl.GL_ARRAY_BUFFER, (IntPtr)(vertices.Length * sizeof(float)), vertices, Gl.GL_STATIC_DRAW);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[1]);
Gl.glBufferData(Gl.GL_ARRAY_BUFFER, (IntPtr)(colors.Length * sizeof(float)), colors, Gl.GL_STATIC_DRAW);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, 0);

Here, I have used float arrays to determine the values of the vertices and the colors.  This only needs to be done once, again, as long as the data doesn’t change.  As the data is now stored in the graphics card, you no longer need the arrays you originally used to store the data.  With the data now bound to the buffers, we can go about drawing the information onto the screen.  This is done by binding the buffers again, setting an appropriate pointer, and drawing the arrays into the device context.  To draw the arrays, you need to give it a drawing type and the vertex count.  Before this, however, you need to enable the appropriate client states.

Gl.glEnableClientState(Gl.GL_VERTEX_ARRAY);
Gl.glEnableClientState(Gl.GL_COLOR_ARRAY);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[0]);
Gl.glVertexPointer(3, Gl.GL_FLOAT, 0, (IntPtr)0);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[1]);
Gl.glColorPointer(3, Gl.GL_FLOAT, 0, (IntPtr)0);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, 0);
Gl.glDrawArrays(drawType, 0, count);
Gl.glDisableClientState(Gl.GL_VERTEX_ARRAY);
Gl.glDisableClientState(Gl.GL_COLOR_ARRAY);

Seems simple enough really doesn’t it? It gets harder (I found this out next month).

So on top of learning a new language and getting to grips with the advanced basics of OpenGL, a little light reading – Design Patterns.  A design pattern is supposed to be a “simple and elegant solution to a specific problem”.  Ha.  I must admit, yes, some are particularly useful and do help with the maintainability and scalability of a system, but some of them are more bother than they’re worth.  Of course, with more than 23 to learn, I am going to be a little biased.  An interesting read, but not really my cup of tea.  Still, I shouldn’t really be dismissing what could potentially be the future.

Posted in Computing, Placement | 3 Comments »