Planning the Spontaneous

It's more than just a blueprint.


DrawInside(Stencil);

Posted by Robert Chow on 26/05/2010

So a few things have changed since I last made a post.

My targets for the end of the year have been reduced, which I'm very glad about.  No, it's not because I'm a slacker – it's because there simply isn't enough time.

What’s new?

As a result, the target is now focused on getting an end product out to show for Grapher.  This end product will have rather minimal functionality, yet will still be complete enough to use as a Grapher component.  Of course, it will not actually be used as a component without further work – such as adding that missing functionality.

Unfortunately, there are still hidden complexities.  There are always hidden complexities in any piece of software you write, particularly in the parts you have no prior knowledge about.  My supervisor told me that you shouldn't start coding until you have answered all of those question marks.  Yet another shortfall in my planning.  But I think I'm starting to learn.

Once again, Grapher threw up a hidden complexity in the way that layers would be drawn.  The idea was to draw the plot in its own viewport, eliminating everything outside of the view, such as data points that do not lie in the visible range.  Of course, this is all fine with a rectangular plot – the natural OpenGl viewport is rectangular.  However, when it comes to nonrectangular plots such as triplots or spiderplots, the story is rather different.

The first solution would be to use a mask to hide the data outside of the plot.  Unfortunately, this mask would also hide any items lying underneath that have already been rendered there.

The next solution builds on the first: because the mask obscures whatever lies beneath it, some layers would have to be fixed in order; for example, the data plot would always be at the bottom to make sure the mask does not obscure anything underneath it.

At this point in the placement, I never did think that I'd be adding something major to Renderer.  The final solution, and arguably the best, would be to use a stencil buffer.

Stencil me this

A stencil buffer allows for a nonrectangular viewport.  Primitives acting as the viewport are drawn to the stencil buffer; from then on, only items drawn inside the viewport outlined by the stencil buffer are rendered.

Simple right?

Of course, there are hidden complexities.  Again.

The best thing for me to do would be to spike the solution, instead of coding it straight into Renderer.

The first action I took was to visit the OpenGl documentation pages, which provided me with a few lines of code.

Gl.glEnable(Gl.GL_STENCIL_TEST);

// This will always write 1 to the corresponding stencil buffer bit for each pixel rendered to
Gl.glStencilFunc(Gl.GL_ALWAYS, 1, 1);

// This will only render pixels if the corresponding stencil buffer bit is 0
Gl.glStencilFunc(Gl.GL_EQUAL, 0, 1);

Using these should result in items declared after the final call only being visible if they lie outside of the viewport shape declared after the second call.  I continue to claim that I am by no means an expert on OpenGl, but that's the rough idea I get from these calls.

I tried this, but to no avail: nothing happened.

My worry was that SimpleOpenGlControl, the rendering control provided by the Tao Framework, did not support a stencil buffer; an earlier look had already told me that it did not support an alpha channel.  Luckily, the number of stencil buffer bits can be changed without having to change the Tao source code.  This should be done before initialising the control's contexts.

SimpleOpenGlControl.StencilBits = 1;
SimpleOpenGlControl.InitializeContexts();

Once again, nothing happened.

I had a look around the Internet to try and find some source code that worked;  I noticed in one of them, one particular call was present, yet in mine, it was not.

// This specifies the action to take when the stencil test fails, when the depth test fails, and when both tests pass
Gl.glStencilOp(Gl.GL_KEEP, Gl.GL_KEEP, Gl.GL_REPLACE);

Placing this at the start of the code appears to make the stencil buffer work correctly.  I believe this is because when the stencil test passes, the stencil value is now replaced instead of kept – with the default of keeping, the earlier calls never actually wrote anything to the stencil buffer.  Again, I am no expert; there is a lot more information on the documentation page (how useful it is, I cannot tell).

Drawing a conclusion

Below is the code and a screenshot of the application I used to spike using the stencil buffer.

public partial class Client : Form
{
    public Client()
    {
        InitializeComponent();

        SimpleOpenGlControl.StencilBits = 1;
        SimpleOpenGlControl.InitializeContexts();

        Gl.glEnable(Gl.GL_STENCIL_TEST);
        Gl.glStencilOp(Gl.GL_KEEP, Gl.GL_KEEP, Gl.GL_REPLACE);
        // Enable hinting
    }

    private void simpleOpenGlControl_Paint(object sender, PaintEventArgs e)
    {
        Gl.glClearColor(1, 1, 1, 1);
        Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_STENCIL_BUFFER_BIT);

        Gl.glMatrixMode(Gl.GL_PROJECTION);
        Gl.glLoadIdentity();
        Gl.glOrtho(0, SimpleOpenGlControl.Width, 0, SimpleOpenGlControl.Height, 10, -10);

        Gl.glMatrixMode(Gl.GL_MODELVIEW);
        Gl.glLoadIdentity();

        // Draw to the stencil buffer
        Gl.glStencilFunc(Gl.GL_ALWAYS, 1, 1);
        // Turn off colour
        Gl.glColorMask(Gl.GL_FALSE, Gl.GL_FALSE, Gl.GL_FALSE, Gl.GL_FALSE);
        // Draw our new viewport
        Gl.glBegin(Gl.GL_TRIANGLES);
        Gl.glVertex2d(0, 0);
        Gl.glVertex2d(SimpleOpenGlControl.Width, 0);
        Gl.glVertex2d(SimpleOpenGlControl.Width / 2.0, SimpleOpenGlControl.Height);
        Gl.glEnd();

        // This has been changed so it will draw to the inside of the viewport
        // To draw to the outside, use Gl.glStencilFunc(Gl.GL_EQUAL, 0, 1);
        Gl.glStencilFunc(Gl.GL_EQUAL, 1, 1);
        // Turn on colour
        Gl.glColorMask(Gl.GL_TRUE, Gl.GL_TRUE, Gl.GL_TRUE, Gl.GL_TRUE);
        // Draw triangles here

        // Disable stencil test to be able to draw anywhere irrespective of the stencil buffer
        Gl.glPushAttrib(Gl.GL_ENABLE_BIT);
        Gl.glDisable(Gl.GL_STENCIL_TEST);
        // Draw border here

        Gl.glPopAttrib();

        SimpleOpenGlControl.Invalidate();
    }
}

Stencil Test.  This is created by first declaring the triangular viewport in the stencil buffer; only primitives inside the stencil will be rendered.  The border was added afterwards, with the stencil test disabled.


MultiOpenGlControl Demo

Posted by Robert Chow on 16/04/2010

I've managed to grab some video capture software and also produce a small demo using multiple SimpleOpenGlControls.  Each screen has its own camera, focusing on the same scene.  Using lighting and picking, clicking a face of the cube will change the scene, and the change is reflected in all of the cameras.

I did have a little trouble with the picking at first – when I picked, I forgot to change the current camera, so it would register the mouse click on one screen whilst using the camera of another.  With that fixed, it was nice to be able to play around with Renderer again.

True, the video's not supposed to be a hit, but at least it's showing what I want it to.


MultiOpenGlControl

Posted by Robert Chow on 14/04/2010

One thing I’ve only just come to tackle is the prospect of having more than one rendering context visible at any time.

Make Current

Using the Tao SimpleOpenGlControl, it’s a relatively simple feat.

MultiOpenGlControl.  Displaying two SimpleOpenGlControls in immediate mode is relatively easy.


The SimpleOpenGlControl class allows you to make the control the current rendering context.  Any OpenGl calls made after this are applied to the most recently made current rendering context.

public void Init()
{
    SimpleOpenGlControlA.InitializeContexts();
    SimpleOpenGlControlB.InitializeContexts();
}

public void SimpleOpenGlControlAPaint(object sender, PaintEventArgs e)
{
    SimpleOpenGlControlA.MakeCurrent(); // all code following will now apply to control A

    Gl.glClearColor(1, 1, 0, 0);
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();
    Gl.glOrtho(-1, 1, -1, 1, 1, -1);

    Gl.glMatrixMode(Gl.GL_MODELVIEW);
    Gl.glLoadIdentity();

    Gl.glColor3d(1, 0, 0);
    Gl.glBegin(Gl.GL_POLYGON);
    Gl.glVertex2d(-0.5, -0.5);
    Gl.glVertex2d(0.5, -0.5);
    Gl.glVertex2d(0, 0.5);
    Gl.glEnd();

    SimpleOpenGlControlB.Invalidate(); // I invalidate B so it gets refreshed – if I only invalidate A, B will never get drawn
}

public void SimpleOpenGlControlBPaint(object sender, PaintEventArgs e)
{
    SimpleOpenGlControlB.MakeCurrent(); // all code following will now apply to control B

    Gl.glClearColor(0, 1, 1, 0);
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();
    Gl.glOrtho(-1, 1, 1, -1, 1, -1); // NOTE: I have changed the projection to show the images differently

    Gl.glMatrixMode(Gl.GL_MODELVIEW);
    Gl.glLoadIdentity();

    Gl.glColor3d(1, 0, 0);
    Gl.glBegin(Gl.GL_POLYGON);
    Gl.glVertex2d(-0.5, -0.5);
    Gl.glVertex2d(0.5, -0.5);
    Gl.glVertex2d(0, 0.5);
    Gl.glEnd();

    SimpleOpenGlControlA.Invalidate(); // I invalidate A so it gets refreshed – if I only invalidate B, A will never get drawn
}

Multiple Vertex Buffer Objects

I did run into a little trouble though.  Rendering in immediate mode, this worked fine.  However, when I came to using vertex buffer objects and rendering in retained mode, the results were different.

MultiVertexBufferObjects.  Only one of the controls renders the image, despite making the exact same calls in each of the control’s paint methods.  You can tell the paint method of control A is called because the clear color is registered.


Only one of the rendering contexts showed an image; the other did not, despite the obvious call to clear the colour buffer.

int[] buffers = new int[3];
float[] vertices = new float[] { -0.5f, -0.5f, 0.5f, -0.5f, 0, 0.5f };
float[] colours = new float[] { 1, 0, 0, 1, 0, 0, 1, 0, 0 };
uint[] indices = new uint[] { 0, 1, 2 };

public void Init()
{
    SimpleOpenGlControlA.InitializeContexts();
    SimpleOpenGlControlB.InitializeContexts();

    Gl.glGenBuffers(3, buffers);
    Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[0]);
    Gl.glBufferData(Gl.GL_ARRAY_BUFFER, (IntPtr)(vertices.Length * sizeof(float)), vertices, Gl.GL_STATIC_DRAW);
    Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[1]);
    Gl.glBufferData(Gl.GL_ARRAY_BUFFER, (IntPtr)(colours.Length * sizeof(float)), colours, Gl.GL_STATIC_DRAW);
    Gl.glBindBuffer(Gl.GL_ELEMENT_ARRAY_BUFFER, buffers[2]);
    Gl.glBufferData(Gl.GL_ELEMENT_ARRAY_BUFFER, (IntPtr)(indices.Length * sizeof(uint)), indices, Gl.GL_STATIC_DRAW);
}

public void SimpleOpenGlControlAPaint(object sender, PaintEventArgs e)
{
    SimpleOpenGlControlA.MakeCurrent(); // all code following will now apply to control A

    Gl.glClearColor(1, 1, 0, 0);
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();
    Gl.glOrtho(-1, 1, -1, 1, 1, -1);

    Gl.glMatrixMode(Gl.GL_MODELVIEW);
    Gl.glLoadIdentity();

    Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[0]);
    Gl.glVertexPointer(2, Gl.GL_FLOAT, 0, IntPtr.Zero);
    Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[1]);
    Gl.glColorPointer(3, Gl.GL_FLOAT, 0, IntPtr.Zero);

    Gl.glEnableClientState(Gl.GL_VERTEX_ARRAY);
    Gl.glEnableClientState(Gl.GL_COLOR_ARRAY);

    Gl.glBindBuffer(Gl.GL_ELEMENT_ARRAY_BUFFER, buffers[2]);
    Gl.glDrawElements(Gl.GL_POLYGON, indices.Length, Gl.GL_UNSIGNED_INT, IntPtr.Zero);

    Gl.glDisableClientState(Gl.GL_VERTEX_ARRAY);
    Gl.glDisableClientState(Gl.GL_COLOR_ARRAY);

    SimpleOpenGlControlB.Invalidate(); // I invalidate B so it gets refreshed – if I only invalidate A, B will never get drawn
}

public void SimpleOpenGlControlBPaint(object sender, PaintEventArgs e)
{
    SimpleOpenGlControlB.MakeCurrent(); // all code following will now apply to control B

    Gl.glClearColor(0, 1, 1, 0);
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();
    Gl.glOrtho(-1, 1, -1, 1, 1, -1); // NOTE: I have changed the projection to show the images differently

    // same code as above

    SimpleOpenGlControlA.Invalidate(); // I invalidate A so it gets refreshed – if I only invalidate B, A will never get drawn
}

Duplicating Vertex Buffer Objects

This is because generated buffers only apply to the rendering context that is current at the time – in this case, SimpleOpenGlControlB (it was the last context to be initialised).  If you are using buffers and want to render the same data in a different rendering context, you have to recreate the data buffers under that context.  It seems like a bit of a waste really – having to create the same thing twice just to view it in a different place.

public void Init()
{
    SimpleOpenGlControlA.InitializeContexts();
    SimpleOpenGlControlB.InitializeContexts();

    SimpleOpenGlControlA.MakeCurrent();
    // create bufferset

    SimpleOpenGlControlB.MakeCurrent();
    // repeat the same code to recreate the bufferset under a different rendering context
}

// Draw methods

Shared Vertex Buffer Objects

Fortunately, there is a way around this.  Wgl offers a method, wglShareLists, which allows different rendering contexts to share not only the same display lists, but also VBOs, FBOs and textures.  To call it, you need the rendering context handles – and unfortunately, getting hold of those requires a small hack.

public void Init()
{
    SimpleOpenGlControlA.InitializeContexts();
    SimpleOpenGlControlB.InitializeContexts();

    SimpleOpenGlControlA.MakeCurrent();
    var aRC = Wgl.wglGetCurrentContext();
    SimpleOpenGlControlB.MakeCurrent();
    var bRC = Wgl.wglGetCurrentContext();

    Wgl.wglShareLists(aRC, bRC);

    // start creating vbos here – these will now be shared between the two contexts.
}

// Draw methods

Multiple Controls For Renderer

This now provides two solutions for Renderer – I can either have separate lists for each control, or have them all share the same one.  There are advantages and disadvantages to both.

In terms of management, it would be a lot easier to have them share the same lists – there is an overhead in tracking which lists belong to which control, as sketched below.  Separate lists would also cause a problem when a user tries to draw a scene for a particular control while the vertex buffer objects in the scene are assigned to another.
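
To give an idea of that overhead, per-control bookkeeping might look something like this – a purely hypothetical sketch; none of these names exist in Renderer, and CreateBufferSet() stands in for the glGenBuffers/glBufferData code shown earlier:

// Hypothetical bookkeeping if each control kept its own buffer set.
// Every buffer lookup, creation and teardown would have to go through this map.
private readonly Dictionary<SimpleOpenGlControl, int[]> buffersByControl =
    new Dictionary<SimpleOpenGlControl, int[]>();

private int[] GetBuffersFor(SimpleOpenGlControl control)
{
    int[] buffers;
    if (!buffersByControl.TryGetValue(control, out buffers))
    {
        control.MakeCurrent();       // buffers must be created under this control's context
        buffers = CreateBufferSet(); // hypothetical helper wrapping the buffer-creation code above
        buffersByControl[control] = buffers;
    }
    return buffers;
}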

It would also be advantageous to have shared lists when using heavy OpenGl operations, such as texture binding – otherwise a new texture has to be bound each time a different control is refreshed.  Sharing a texture list would erase this problem.

In the short run, using separate lists has its disadvantages; the memory use is rather inefficient because a new set of lists has to be created each time a control is created.  However, the memory story is a different matter when sharing lists in the long term.  Once a control is destroyed, so too are its lists; a shared list, on the other hand, will continue to accumulate more and more data until all the controls have been destroyed – and that is entirely up to the user, and could take some time.

As a result, for the time being, I am going to go for the option of sharing lists – mainly because it means not having to deal with the management side, and it also minimises the frequency of heavy OpenGl operations.  Not only that, but implementing the management would take some time.  If I do have time nearer the end of my placement, I may come back and revisit it.

MultiOpenGlControl-3D.  Here is Renderer using 4 different SimpleOpenGlControls simultaneously.  I've tried to replicate a (very simple) 3D editing suite, whereby there are 3 orthogonal views – Front, Top and Side – and the projection view.  It uses the same model throughout, but each control has a different camera position.


OOP

Posted by Robert Chow on 01/04/2010

OOP.  Object-orientated-programming.  Something that every programmer writing a 3rd-gen language should be able to do nowadays.  With dynamic languages, I'm not entirely sure how different it is, but I'm guessing the principles are the same.

But that's not what I'm here to blog about.  I've managed to whip up a demo of the toolkit: OOP, or as I'd like to call it, Object-Orientated-Paint.

Object-Orientated-Paint

For those that use Microsoft Paint (and I still do), many will know how annoying it is to only have one canvas, and that every single dash or pixel "painted" on to the canvas is essentially there to stay unless another splash of paint writes over it.  It's quite frustrating, especially when only a few minutes down the line you realise that you forgot to draw something and therefore have to edit the whole picture.  Once you've edited that picture, you've no way of going back either (the undo command only has a history of 3 – something that has frustrated me rather a lot over the years).

So why not better it by encapsulating each shape/splash of paint as an object?  It's essentially what Microsoft Powerpoint uses.  True, you don't get the same precision and granularity to edit a particular shape as you would in Paint, but it's a heck of a lot easier to use.  Although my knowledge is very limited in the area, I'm guessing you could make the same comparison with C and C#.  C is a language (correct me if I'm wrong here) that is not designed for OOP, yet it allows you to reach the gutsy insides of a computer.  C# on the other hand is designed for OOP, and although it's a lot harder to get to the lower levels of computation, it is a lot faster to develop in, and ultimately easier to use.

Now, jumping the gun a bit, it's similar with Renderer and the toolkit.  Renderer is just a wrapper around OpenGl – it can do anything it wants, providing OpenGl supports it (although I haven't wrapped every single OpenGl call – that would be insane).  Yet if I use Renderer indirectly, via the toolkit, I am unable to do everything that Renderer supports – the toolkit limits my actions, yet makes the more commonly used functions of Renderer a lot easier to deal with.

Demo

Enough rambling.  I’ve implemented basic shapes in the toolkit, as well as very basic shape control.

Shapes.  Using the toolkit, I am able to ask for basic shapes.  Each shape has a fill colour, line colour and line width property.


Shapes Select.  I've also added picking functionality and shape control too.  When a user selects the fill/border of a shape, it is recognised as a shape.  This is then decorated with a shape control.  The shape control has handles – these too are shapes.


Shapes Resize.  Clicking on a handle requires a couple of steps before you can interact with it.  Although clicking on a border/fill will return the shape representing the handle, handles act differently and need to be tested for.  Once this is done, we can notify the application that a handle is selected, not a shape.  This is important because if it were treated as a shape, the shape control would try to decorate the handle – clearly something that needs to be avoided.

Shapes Move.  Clicking on a shape not only decorates it with a shape control, but also allows you to drag and drop the shape.


OOPhilosophy

Before this demo, I'd had a lot of trouble creating similar objects.  I've created bézier curves for the toolkit too, but I haven't shown them because I've never been too happy with them.

If you recall from DrawCurve(Bézier);, near the end I was able to draw a master curve encapsulating the bézier sub-curves.  To incorporate this into a scene graph, I needed an anchor – a node.  I did this by using a get property for the root of the curve.

public class Bezier : IRootNode
{
    public Bezier()
    {
        RootNode = new Node(Matrix.Identity());
    }

    public INode RootNode { get; private set; }

    private void CreateSubCurve()
    {
        var subCurve = new SubCurve();
        RootNode.AddChild(subCurve.RootNode);
    }
}

The problem with this is that once the RootNode of the bézier curve is accessed, anything can be done with it.  A renderable can be assigned, all the children cleared, random children added.  It wasn't very safe, and I never really did like it.

Why I never thought of simply inheriting from a model node in the first place, I'll never know.  Not only does it allow me to change the node contents a lot more easily, but it also means that I can change the interface so that only a select few items can be modified.

public interface IShape
{
    Colour FillColour { get; set; }
    Colour BorderColour { get; set; }
    double BorderThickness { get; set; }
}

internal class Shape : Node, IShape
{
    public Shape(IFill fill, IBorder border) : base(Matrix.Identity())
    {
        Fill = fill;
        Border = border;

        base.AddChild((INode)Fill);
        base.AddChild((INode)Border);
    }

    public Colour FillColour { get {…} set {…} }
    public Colour BorderColour { get {…} set {…} }
    public double BorderThickness { get {…} set {…} }

    public IFill Fill { get; private set; }
    public IBorder Border { get; private set; }
}

The only problem is that I have to cast from an IShape to an INode when I want to add it to a graph, but at least the internals are protected from modification.  I suppose someone could still take an IShape, cast it to an INode and use it from there.  I guess there's no stopping that.
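
In practice the cast is a one-liner at the point where the shape joins the graph – a sketch, with the factory and graph names assumed for illustration:

IShape shape = toolkit.CreateShape(fill, border); // hypothetical toolkit factory
sceneGraph.RootNode.AddChild((INode)shape);       // the cast is needed to treat the shape as a node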

Regardless, I'm finding the latter method a lot easier to use than the former.  It took me only a week to create the shapes demo, but around two to get the bézier curves in the toolkit working.

Out of shape

Has anyone seen the statue designed for London 2012?  I thought I’d try and recreate it.  And if you ask me, I can’t say I’m particularly fond of it either.

Kapoor 2012.  Winning design for the London 2012 statue.  Apparently, it’s designed to stay.  No thanks.


OOP Kapoor 2012.  It might as well look like this.

As a side note, a couple of guys have released a rather nifty free tool, Paint.NET.  It's a lightweight application that's pretty much Paint, but with the style and functionality of Adobe Photoshop.  For those who can't afford the extortionate prices of designer tools, I'd highly recommend giving it a try.  Did I also mention it's free?


glGenTexture(Atlas); // part1

Posted by Robert Chow on 08/03/2010

So besides having a problem with fonts, I also know that at a later stage I will have to incorporate texture atlases.  A texture atlas is a large texture that is home to many smaller sub-images, each accessed using different texture coordinates.  The idea behind this is to minimise the number of texture changes in OpenGl – texture changing is regarded as a very expensive operation and should be kept to a minimum.  Using this method would create a huge performance benefit, and would be used extensively with fonts.


Helicopter Texture Atlas.  This shows how someone has taken a model of a helicopter and broken it up into its components, before packing it into a single texture atlas.  This image is from http://www.raudins.com/glenn/projects/Atlas/default.htm

Where To Start?

Approaching this problem, there are multiple things for me to consider.

The first is how to compose the texture atlas.  Through a little research, I have found a couple of methods to consider – the binary space partition (BSP) algorithm, and quadtrees.
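
To give a flavour of the binary-tree approach, here is a minimal sketch of the classic rectangle-packing scheme – an illustration of the idea only, not Renderer's implementation.  Each node owns a region of the page; inserting an image into a free leaf claims its corner and splits the leftover space into two child regions.

using System.Drawing;

class PackNode
{
    private PackNode left, right;
    private Rectangle rect;   // the region of the page this node owns
    private bool occupied;

    public PackNode(Rectangle rect) { this.rect = rect; }

    public Rectangle? Insert(int width, int height)
    {
        if (left != null) // not a leaf – try both children
            return left.Insert(width, height) ?? right.Insert(width, height);

        if (occupied || width > rect.Width || height > rect.Height)
            return null;  // no room here

        // Claim the top-left corner and split the leftover space in two.
        left = new PackNode(new Rectangle(rect.X + width, rect.Y, rect.Width - width, height));
        right = new PackNode(new Rectangle(rect.X, rect.Y + height, rect.Width, rect.Height - height));
        occupied = true;
        return new Rectangle(rect.X, rect.Y, width, height);
    }
}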

Secondly, the possible notion of an optimisation method.  This should take all the textures in use into consideration, and then sort them so the number of texture changes is at a minimum.  This would be done by taking the most frequently used sub-textures and placing them on a single texture.  Of course, with this alone there are many things to consider, such as which frequency algorithm to use, and how often the textures need to be optimised.
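
As a first pass, the frequency side could be as crude as sorting by a usage count before packing – a sketch, where UseCount is an assumed piece of tracking that does not exist yet (and System.Linq is required):

// Pack the most frequently used sub-images first, so they tend to share a page.
var packingOrder = subImages.OrderByDescending(s => s.UseCount)
                            .ThenByDescending(s => s.Width * s.Height)
                            .ToList();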

Another thing to consider is the internal format of the textures themselves, and also how the user will interact with the textures.  It would be ideal for the user to believe that they are using separate textures when, behind the scenes, they are using a texture atlas.

Updating an Old Map

As I already have the Renderer library in place, ideally the introduction of texture atlases will not change the interface dramatically, if at all.

Currently, the user asks Renderer for a texture handle, with an internal format specified for the texture.  Initially this handle does not correspond to a texture, which seems rather pointless.  The obvious alternative would be to only create a handle upon creating a texture.  The reason behind using the former option was the flexibility it provides when a texture is used multiple times in a scene graph.  Using a texture multiple times in a scene means the handle has to be referred to in each node it appears at in the scene graph.  With the first option, if the texture image changes, only the contents of the handle change, and the scene graph updates automatically.  With the latter option, the handle would have to be changed in every single node of the scene graph it corresponded to.  The handle acts like a middle-man.

When incorporating texture atlases, I would like to keep this functionality; but it does mean I will have to change the interface in another area of Renderer: texture co-ordinates.  At the moment, the texture co-ordinates are nice and simple.  To access the whole texture, you use texCoords (0,0) for the bottom-left corner and (1,1) for the top-right corner.  To the user on the outside, this should be no different, especially since the user believes they are using a single texture.  To achieve this, a mapping is needed to convert the user's (s,t) texture co-ordinates to the internal texture atlas co-ordinates – not really a problem in itself (see the sketch below).  But it is a problem if we're using vertex buffer objects.
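
The mapping itself is just a linear rescale of the user's (s,t) into the sub-image's rectangle on the page – a sketch, assuming the sub-image's offset and size are stored in normalised page co-ordinates:

// Rescale a user-facing (s, t) in [0,1]x[0,1] into atlas page co-ordinates.
public static void ToAtlasCoords(double s, double t,
                                 double offsetS, double offsetT,
                                 double sizeS, double sizeT,
                                 out double atlasS, out double atlasT)
{
    atlasS = offsetS + s * sizeS;
    atlasT = offsetT + t * sizeT;
}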

Using the simple (0,0) to (1,1) co-ordinates all the time means we only need to create the texture co-ordinates in our vertex buffers once, and can refer to them ever after.  Yet they have to change whenever the internal mapping changes – and especially if we are going to be changing the texture contents the handles refer to as well.

I think the best way of solving this is to make sure that the texture co-ordinates inside the vertex buffer objects are recreated each time a new texture is bound to a handle.  How I go about this, I am not entirely sure, but it's definitely something I need to think about.  It would mean a fairly tight relationship between the texture co-ordinates in a scene graph node and the texture they are bound to – of course, they can't really function without each other anyway, so it does make sense.  Unfortunately, the dependency is there and it always will be.

Structure

Before all of that comes into place, I also need to make sure that the structure of the system is suitable, on top of it being maintainable, readable and scalable.  For the time being, I am using a 3-tier structure.

Texture Atlas Structure.  This shows the intended structure of my texture atlas system.

Starting at the top, there is the Atlas.  This is essentially what the user interface will work through.  The Atlas holds on to a list of Catalogs.  The reason for this is to separate the internal formats of the textures: a sub-image with intended internal format A cannot be on the same texture as a sub-image of internal format B.  Inside each Catalog is a collection of Pages with the corresponding internal format.  A Page hosts a large texture containing many smaller sub-images.  Outside of this structure are the texture handles.  These refer to their own sub-images, and hide the complexity of the atlas from the user, making it seem as if they are handling separate textures.
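
In skeleton form, the three tiers might look like this – a sketch of the intended structure only, not finished code:

using System.Collections.Generic;

// Atlas -> Catalogs (one per internal format) -> Pages (one large texture each).
public class Atlas
{
    // Keyed by OpenGl internal format, e.g. Gl.GL_RGBA or Gl.GL_ALPHA.
    private readonly Dictionary<int, Catalog> catalogs = new Dictionary<int, Catalog>();
}

public class Catalog
{
    // Every page in this catalog shares the catalog's internal format.
    private readonly List<Page> pages = new List<Page>();
}

public class Page
{
    private int textureId; // the single large OpenGl texture for this page
    // ...plus the packing state (BSP/quadtree) that places its sub-images.
}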

For the time being, this is still a large learning curve for me, not only in learning OpenGl, but also in using design patterns and C# tricks, and more recently, planning.  I don't plan very often, so this is one time I'm hoping to do just that!  These posts may come a little slowly as I'm in Manchester at the moment, and although I thought my health was getting better, it doesn't seem to be going away.  With a little luck, I'll accomplish what I want to do over the next couple of weeks.  In the next post, I hope to touch on BSP, which I have implemented, and quadtrees – something I still need to look at.  Normally I write my blog posts after I've implemented what I'm blogging about, but this time I've decided to do it a little differently.  The reason is that after implementing the BSP, I realised it would actually be a lot more flexible to implement quadtrees instead.  Whether I do or not, I think, is dependent on time.  As long as there is a suitable interface, then all in all it is just an implementation detail, and nothing major to worry about.  This is, at the current time, by no means a finished product.


“{Binding OpenGl To WPF}” : Part 3

Posted by Robert Chow on 16/02/2010

So this is (hopefully) the last installment of my journey to creating my first WPF application using an OpenGl control.  You can find the first part here, and the second part here.  In this post, I am going to talk about where the magic happens behind a WPF form.

Model-View-View Model Pattern

So we have the GOF design patterns; these are best coding practices, and are just guidelines.  Similarly, there are design patterns for implementing GUIs: Model-View-Presenter, Model-View-Controller and Model-View-View Model.  I don't really know much about the former two, and I don't claim to know too much about the MVVM pattern either, but it's the one I am trying to use for developing my WPF applications.

In the pattern, we have the model, which is where all the code defining the application is; the view, which is the GUI; and the view-model, which acts as an abstraction between the view and the model.  The view is mostly written in XAML, and the view-model and model I have written in C#.  Technically, the view should never need to know about the model, and vice versa.  It is the view-model which mediates between the two layers.  I find it quite easy to look at it like this.

View Model Gear.  How the MVVM pattern may be portrayed in an image.  To be honest, I think the pattern should really be called the View-View Model-Model pattern, but of course, that’d just be stupid.  This image is from http://blogs.msdn.com/blogfiles/dphill/WindowsLiveWriter/CollectionsAndViewModels_EE56/.

Code Inline and Code Behind

You'll notice that for the majority of the time, I am using XAML to do most of the form.  The rest of the code, which doesn't directly interact with the view through the XAML, is either code inline or code behind.  The code behind is essentially what we use to represent the view-model and the model.  The code inline is the extra code that is neither XAML nor code behind.  It helps to define the view and is very rarely used, especially if the XAML can describe everything related to the view.  Incidentally, I used code inline at the start of the last post, where I defined a windows forms host using C# instead of doing it in the XAML.

In order to keep the intended separation of concerns in the MVVM pattern, the code behind has to interact with the view as little as possible, if at all.  The flow of code should always travel in the direction of view to view-model, and never the other way round.  A way to do this is to introduce bindings.

Bindings

A binding in WPF is declared in the XAML and binds a XAML component to a property in the view-model.  In doing this, the component can take on the value specified in the view-model.  However, before we can create a binding, we have to form the connection between the view and the view-model.  To do this, we assign the view-model to the view's data context.  This is done in the code behind of the view.

/// <summary>
/// Interaction logic for Client.xaml
/// </summary>
public partial class Client : Window
{
    public Client()
    {
        InitializeComponent();
        this.simpleOpenGlControl.InitializeContexts();
        this.DataContext = new ViewModel();
    }
}

Having this data context means that the view can now relate to our view-model.

Now, with the view-model in place, we can bind components of the view to properties in the view-model.  Declare the binding in the XAML using the property name in the view-model.  The example below shows how I retrieve the score in the demo and bind it to a label.

<Label Content="{Binding Path=Score, Mode=OneWay, UpdateSourceTrigger=PropertyChanged}" Grid.Row="1" Grid.Column="0" FontSize="48" />

As you can see, the property I have bound to is "Score", defined using Path.  The Mode and UpdateSourceTrigger properties are XAML-defined enums that say how we want the binding to relate to the property in the view-model.  Specifying that the binding is OneWay tells the application that all we want to do is read the property value from the view-model.  This would usually be TwoWay in cases such as a textbox, where the user can edit a field.

The UpdateSourceTrigger is necessary.  Without it, the label content will never update.  As the code in the view travels in the direction of the view-model, there is minimal code going the other way.  As a result, the label will not know when the value has changed in the view-model, and thus will not update.  To make sure the label content does update, we have to notify it that the property value has changed.  To do this, we have to implement an interface.

INotifyPropertyChanged

You can't really get much closer to what it says on the tin than this.  Implementing this interface gives you an event handler that "notifies" the XAML that a property has changed and asks it to update accordingly.  Below is the code I have used to invoke a change on the property Score.

public event PropertyChangedEventHandler PropertyChanged;

public int Score
{
    get
    {
        return this.score;
    }
    set
    {
        this.score = value;
        OnPropertyChanged("Score");
    }
}

private void OnPropertyChanged(string propertyName)
{
    if (PropertyChanged != null)
    {
        PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}

As far as I'm aware, invoking the event with the property name tells the XAML that the value for the property with that particular name has changed.  The XAML will then look up that property and update.  As a result, it is vital that the string passed is exactly the same as the property name, as defined in both the view-model and the XAML binding.

Invalidate

A problem I did come across was trying to invoke a method provided by a XAML component.  Using the simpleOpenGlControl, I need to invalidate the control so the image refreshes.  Of course, invoking this from the view-model directly would violate the MVVM pattern.  As a result, a little bit of logic was needed to solve this problem.

/// <summary>
/// Interaction logic for Client.xaml
/// </summary>
public partial class Client : Window
{
    public Client()
    {
        InitializeComponent();
        this.simpleOpenGlControl.InitializeContexts();
        this.DataContext = this.viewModel = new ViewModel();

        viewModel.SimpleOpenGlControlInvalidate = () => this.simpleOpenGlControl.Invalidate();
    }

    private ViewModel viewModel;
}

public class ViewModel : INotifyPropertyChanged
{
    public Action SimpleOpenGlControlInvalidate
    {
        get;
        set;
    }

    public int Score …
}

Assigning the method for use inside the view-model means that I can invoke it without having to go to the view, and therefore still keep within the MVVM pattern.  Apparently, doing this can cause a memory leak, so when the view-model is finished with, you have to make sure that the action is nullified.
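
In my case that just means clearing the delegate when the window goes away – a sketch, hung off the window's Closed event:

// Drop the view-model's reference back into the view when the window closes,
// so the control can be collected.
this.Closed += (sender, e) => this.viewModel.SimpleOpenGlControlInvalidate = null;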

That pretty much concludes this 3-parter.  I'm sure there'll be a lot more WPF to come in the near future, especially as I'm now trying to create all my future demos with it.  It also means that I can use the XAML to create Silverlight applications too… or maybe not.


“{Binding OpenGl To WPF}” : Part 2

Posted by Robert Chow on 03/02/2010

This is the second part of my journey to creating an application using WPF, wrapping the Renderer demo I made earlier. You can find the first part here.

Subconscious Intentions

So, initially in this post, I was going to introduce the concept of the model-view-view model, and code inline and code behind.  Or at least, how I understand them.  Yet due to recent activity, that will have to wait until the next post.

The great thing about writing these blog posts is that in order for me to seem like I know what I'm doing, I have to do the research.  I've always been taught that in an exam, you should answer the question as if the examiner doesn't know anything about the subject bar the basics.  This way, you answer the question in a detailed and logical order, making complete sense and gaining full marks.  Writing these posts is quite similar.  Sure, I may have broken that rule more than a few times, especially when I'm rushing, but I try to explain the topics I cover in enough detail for others to understand them.  After all, although I am writing this primarily for myself to refer back to when I get stuck, it's nice to know that others are benefiting from this blog too.

It's not that I haven't done the research, or that I can't bring to the table what I know (or seem to know) about the model-view-view model, code inline and code behind.  It's that during the research and much tinkering around, I decided I should first cover the main drive for using WPF in the first place: incorporating an OpenGl control.

From Scratch

So a couple of weeks ago, I did actually manage to create an OpenGl control and place it inside a WPF form.  The way I did it was a bit long-winded compared to how I used to do it in a Win32 form.  Instead of using the SimpleOpenGlControl provided by the Tao framework, I went about creating the control entirely from scratch.

For this, I could have done all the research and become an expert at creating the control manually.  But that simply wasn't my intention.  Luckily for me, the examples provided by Tao included the source code, and after a quick copy and paste, I more or less had the code ready and waiting to be used.

One thing I am more aware of now is that you need two things: a rendering context and a device context.  The rendering context is where the pixels are rendered; the device context is the form the rendering context will sit inside.  Of course, the only way to interact with these is through their handles.

To create a device context in the WPF application, I am using a Windows Forms Host.  This allows you to host a windows control, which we will use as our device context.  The XAML code for this is relatively simple.  Inside the default grid, I have inserted the WindowsFormsHost as the only child element.  However, the windows control comes from a namespace other than the defaults provided.  To declare a namespace, declare the alias (in this case I have used wf, shorthand for windows forms) and follow it with the namespace path.  Inside the control, we also use the x namespace.  Using this, we can assign a name to the control, allowing us to retrieve the handle to use as the device context.

<Window x:Class="OpenGlControlInWPF.Client"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:wf="clr-namespace:System.Windows.Forms;assembly=System.Windows.Forms"
    Title="Client" Height="480" Width="640" WindowStyle="SingleBorderWindow">
    <Grid ClipToBounds="True">
        <WindowsFormsHost ClipToBounds="True">
            <wf:Control x:Name="control"/>
        </WindowsFormsHost>
    </Grid>
</Window>

With the form done, we can now dive into the C# file attached to the XAML file.  It is here that we create the rendering context and attach it to the device context.  I'm not really an expert on OpenGl when it comes to this kind of thing, so I'm not going to show the full code.  If you're really stuck, the best place I can point you to is NeHe's first lesson, making an OpenGl window.  If you're using the Tao framework and you installed the examples, the source code should come with it.

The WPF interaction in the C# code is very minimal.  All we need from the XAML file is the handle associated with the control we declared beforehand.  This is done simply by using the name we gave it in the XAML and letting Tao do the rest.  We hook this up to retrieve a rendering context, and then we show the form.

Gdi.PIXELFORMATDESCRIPTOR pfd = new Gdi.PIXELFORMATDESCRIPTOR();
// use this to create and describe the rendering context – see NeHe for details

IntPtr hDC = User.GetDC(control.Handle);
int pixelFormat = Gdi.ChoosePixelFormat(hDC, ref pfd);
Gdi.SetPixelFormat(hDC, pixelFormat, ref pfd);
IntPtr hRC = Wgl.wglCreateContext(hDC);
Wgl.wglMakeCurrent(hDC, hRC);

this.Show();
this.Focus();

All that is left is to go into the OpenGl loop of rendering the scene each frame.  Unfortunately, because I am used to the SimpleOpenGlControl provided by Tao, I've never needed to write one whilst I've been on placement – all I had to do was call simpleOpenGlControl.Invalidate() and the frame would automatically refresh for me.  So I placed the draw-scene method in a while(true) loop so the rendering would be continuous, suspecting it wouldn't work.  And true to my suspicions, it didn't: the loop "throttled" the application when running.  I was unable to interact with it because all the runtime was concentrated on rendering the scene – there was no message handling, so pressing a button or typing a key didn't have any effect whatsoever.
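
For reference, the naive loop looked something like this, with DrawScene standing in for my own drawing method:

// Renders continuously on the UI thread – this starves the message pump,
// so the window never gets to process mouse or keyboard input.
while (true)
{
    DrawScene();
}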

I did try to look for answers to the throttling, and I stumbled across something else: another solution to hosting an OpenGl control in WPF.

The Better Solution

Going back to the first post of this multi-part blog, you might recall I am using a Canvas to host the OpenGl control.  I found this solution only a couple of days ago, thanks to a recent post on the Tao forums.  It assigns a WindowsFormsHost to this canvas in the C# code.  This in turn is assigned a SimpleOpenGlControl.  A SimpleOpenGlControl!  This means that I am able to use all the abstractions, methods and properties that the SimpleOpenGlControl has to offer without having to create my own manually.

First off, we have to give the canvas a name in the XAML code so we can reference it in the C# counterpart.

<Grid Background="AliceBlue">

    <Border Grid.Row="0" Grid.Column="0" Background="Black" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="6, 6, 3, 3">
        <Canvas ClipToBounds="True" Margin="2" x:Name="canvas"/>
    </Border>
</Grid>

The C# code for creating the SimpleOpenGlControl is short and sweet.  We create the WindowsFormsHost, attach a newly created SimpleOpenGlControl and attach the whole thing to the Canvas.  Here is the entire code for creating this.

namespace OpenGlWPFControl
{
    using System.Windows;
    using System.Windows.Forms.Integration;
    using Tao.Platform.Windows;

    /// <summary>
    /// Interaction logic for Client.xaml
    /// </summary>
    public partial class Client : Window
    {
        public Client()
        {
            InitializeComponent();

            this.windowsFormsHost = new WindowsFormsHost();
            this.simpleOpenGlControl = new SimpleOpenGlControl();
            this.simpleOpenGlControl.InitializeContexts();
            this.windowsFormsHost.Child = this.simpleOpenGlControl;
            this.canvas.Children.Add(windowsFormsHost);
        }

        private WindowsFormsHost windowsFormsHost;
        private SimpleOpenGlControl simpleOpenGlControl;
    }
}

Now that we have the SimpleOpenGlControl set up, we simply add the event for rendering and we're nearly done.  There is one problem, however: the windows forms host does not know what size to take.  We add an event so that when the canvas is resized, the windows forms host size is updated too.

public Client()
{

    this.simpleOpenGlControl.Paint += new PaintEventHandler(simpleOpenGlControl_Paint);
    this.canvas.SizeChanged += new SizeChangedEventHandler(canvas_SizeChanged);
}

void simpleOpenGlControl_Paint(object sender, PaintEventArgs e)
{
    // do your normal opengl drawing here
    this.simpleOpenGlControl.Invalidate();
}

void canvas_SizeChanged(object sender, SizeChangedEventArgs e)
{
    this.windowsFormsHost.Width = this.canvas.ActualWidth;
    this.windowsFormsHost.Height = this.canvas.ActualHeight;
}

A Revelation To An Even Better Solution

So I said I was going to talk about other topics before delving into my journey of placing an OpenGl control inside a WPF application, and that's because of what I found myself accomplishing last night.  In the first post of this multi-part series, I found myself using a Canvas to hold a WindowsFormsHost, which in turn parented a SimpleOpenGlControl.  Yet with further understanding of WPF, a revelation came.  The reason I was unable to insert a SimpleOpenGlControl directly into the application beforehand was that I wasn't entirely aware of namespaces in XAML.  Soon after finding out more about them, I found I was able to access the SimpleOpenGlControl by referencing Tao, removing all the background work the C# had to do.

<Window
    xmlns:tao="clr-namespace:Tao.Platform.Windows;assembly=Tao.Platform.Windows"
    …>
    <Grid Background="AliceBlue">

        <Border Grid.Row="0" Grid.Column="0" Background="Black" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="6, 6, 3, 3">
            <WindowsFormsHost Margin="2" ClipToBounds="True">
                <tao:SimpleOpenGlControl x:Name="simpleOpenGlControl"/>
            </WindowsFormsHost>
        </Border>
    </Grid>

So the only thing extra to add to this is the event for rendering, which I included before.  I can omit the canvas-resizing code, partially because there is now no canvas, and also because the WindowsFormsHost ClipToBounds property is true.

In the next part of this series, I will hopefully be touching upon what I intended to touch upon in the first place: the model-view-view model pattern.


“{Binding OpenGl To WPF}” : Part 1

Posted by Robert Chow on 02/02/2010

I’ll admit, it’s been a while.  But since I’ve been back, I’ve been working on a fair few things simultaneously, and they’ve taken a lot longer than planned.  But alas, here is one of them, in a multi-part blog.

Windows Presentation Foundation

Remember how I mentioned WPF a few times, but never really got into it?  Here's a statement from the Microsoft website:

“Windows Presentation Foundation was created to allow developers to easily build the types of rich applications that were difficult or impossible to build in Windows Forms, the type that required a range of other technologies which were often hard to integrate.”

Ha.

It’s not easy to develop on at all, especially for a developer just starting their first WPF project.  Not compared to creating a Win32 Form with a visual editor to say the least.

But it does allow you to build very rich applications that appeal to many by their look and feel alone.  And they look a lot better than Win32 forms too.

Remember that demo for Renderer?  I originally said I was going to try and incorporate fonts into it, but that's still one bridge I have yet to cross.  Instead, I decided to learn a bit of WPF.  What do you think?

Demo In WPF.  Here I have incorporated the demo into a WPF form, and included panels on the right and bottom.  The right panel depicts the current state of the game, and allows the user to change the camera position and change the texture used when a cube face is picked.


Demo in Win32 Forms Mock.  I have included a mock of the original in a Win32 Form.  I think it's fair to say that you would all rather use the WPF version.


XAML

The language of WPF is XAML – Extensible Application Markup Language – and it is very similar to XML.  It uses the open/close tag notation, which I'm not particularly fond of, but it does mean that everything is explicit, and being explicit is good.  Like all other languages, it's very useful to know your ins and outs and what is available to use, and XAML is no exception to this rule.  As a result, there are many ways, some better, some far worse, of creating the form I have made.  As I am no expert in this at all, I am going to leave it as it is and take a look at the code I have generated for creating the form base.

To create this, I used Visual C# 2008 Express Edition.  This has proved rather handy as it updates the designer view as the code is changed.

Starting a WPF project gives you a very empty template.  With this come two pieces of code, one in XAML and the other in C#.  For the time being, we are just going to concentrate on the XAML file.  This is where we create our layout.  The initial piece of XAML code is very simplistic, and doesn't really mean too much.

<Window x:Class="OpenGlWPFControl.Client"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Client" Height="665" Width="749">
    <Grid>
    </Grid>
</Window>

The first few lines relate to the namespaces being used within the XAML file.  The tags marked Grid relate to the box inside the window.  This is a new concept, different from the panels in a Win32 Form.  Instead of having top, bottom, left and right panels, the grid can be split into a number of columns and rows using definitions.

Here I have split the grid into 4 components using 2 rows and 2 columns.  The code is relatively easy to deal with.  It also allows you to specify minimum, maximum and default dimensions.

<Grid Background="AliceBlue">
    <Grid.RowDefinitions>
        <RowDefinition Height="*" MinHeight="200"/>
        <RowDefinition Height="100"/>
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*" MinWidth="200"/>
        <ColumnDefinition Width="200"/>
    </Grid.ColumnDefinitions>
</Grid>

Here, I have specified the default height of each row and the default width of each column.  The "*" is simply a marker to take whatever space is left.  As an extra, I have also set the grid background colour.

Now that we have split the initial grid, we can start to populate it.  This can be with other layout panels, like the grid, or a stackpanel or dockpanel and so forth, to add extra layout detail.  It can also be filled with more meaningful objects such as a label or a text box.

Starting off, I want to place the OpenGl control in the top-left panel.  For the time being, we are going to mock this using a Canvas.  This item will be used later in the C# code to attach the OpenGl control, but for now we are only handling XAML.  In addition, I have decorated the canvas with a border.  Using a combination of the canvas and border properties, I have managed to achieve rounded edges, making it more aesthetically appealing.

<Grid Background="AliceBlue">

    <Border Grid.Row="0" Grid.Column="0" Background="Black" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="6, 6, 3, 3">
        <Canvas ClipToBounds="True" Margin="2"/>
    </Border>
</Grid>

This piece of code stays within the Grid tags, as it is a grid child.  To say where in the grid it sits, I have explicitly stated the row and column inside the Border tag.

The bottom panel is done in a similar fashion, only this time the border decorates a textblock.  In order to scroll a textblock, it needs to be decorated with a scroll viewer.

<Grid Background="AliceBlue">

    <Border Grid.Row="1" Grid.Column="0" Grid.ColumnSpan="2" Background="White" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="6, 3, 6, 6">
        <ScrollViewer Margin="2">
            <TextBlock/>
        </ScrollViewer>
    </Border>
</Grid>

There is only a small change needed so the panel is not confined to one grid space.  This is done using the Grid.ColumnSpan property.

Now, with only one panel left, I have decided to make my life a little easier by adding in extra grids.  These are done in exactly the same way as the initial grid.  Using what I have done already, combined with new elements, the last panel is added.

<Grid>

    <Border Grid.Row="0" Grid.Column="1" Background="White" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="3, 6, 6, 3">
        <Grid>
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto"/>
                <RowDefinition Height="Auto"/>
                <RowDefinition Height="*"/>
                <RowDefinition Height="30"/>
                <RowDefinition Height="30"/>
                <RowDefinition Height="60"/>
            </Grid.RowDefinitions>
            <Grid Grid.Row="0">
                <Grid.RowDefinitions>
                    <RowDefinition/>
                    <RowDefinition/>
                </Grid.RowDefinitions>
                <Label Grid.Row="0" HorizontalAlignment="Left" VerticalAlignment="Bottom" FontSize="36" Content="Score:"/>
                <Label Grid.Row="1" HorizontalAlignment="Right" VerticalAlignment="Top" FontSize="48" Content="0"/>
            </Grid>
            <Grid Grid.Row="1">
                <Grid.RowDefinitions>
                    <RowDefinition Height="50"/>
                    <RowDefinition Height="Auto"/>
                    <RowDefinition Height="Auto"/>
                </Grid.RowDefinitions>
                <Label Grid.Row="1" HorizontalAlignment="Left" VerticalAlignment="Bottom" FontSize="24" Content="Lives:"/>
                <Label Grid.Row="1" HorizontalAlignment="Right" VerticalAlignment="Bottom" FontSize="24" Content="0"/>
                <Label Grid.Row="2" HorizontalAlignment="Left" VerticalAlignment="Bottom" FontSize="24" Content="Level:"/>
                <Label Grid.Row="2" HorizontalAlignment="Right" VerticalAlignment="Bottom" FontSize="24" Content="0"/>
            </Grid>
            <Label Grid.Row="3" HorizontalAlignment="Center" VerticalAlignment="Bottom" Content="Camera Position"/>
            <Grid Grid.Row="4">
                <Grid.ColumnDefinitions>
                    <ColumnDefinition/>
                    <ColumnDefinition/>
                </Grid.ColumnDefinitions>
                <Button Grid.Column="0" Margin="15,0,15,0" Content="&lt;&lt;"/>
                <Button Grid.Column="1" Margin="15,0,15,0" Content="&gt;&gt;"/>
            </Grid>
            <Button Grid.Row="5" Margin="15,15,15,15" Content="Change Texture"/>
        </Grid>
    </Border>
</Grid>

As a result, we achieve a form created entirely from XAML.

Unfortunately, however, there is no logic behind it.  In part 2, we will look at the other parts of the code and insert an OpenGl control into the application.


Renderer: Renderables

Posted by Robert Chow on 15/01/2010

So, with the majority of the Renderer library nearly finished (with the exception of layers and fonts – well, there are fonts, they're just not perfect), the process of refactoring has started.  On top of that, I am creating a toolkit to be used as a helper library, making the Renderer library a bit more user-friendly.  Less typing means more time to do other things.

Renderable Objects

So Renderer is based on the concept of Renderable objects.  These are exactly what they say on the tin: they are renderable.  To render them, you need to access their properties.  These properties are fairly straightforward, and are what you would expect a Renderable object to have.  Of course, they have been fine-tuned to the usage of vertex buffer objects, so the properties are:

Vector[] Vertices; // vertices of the shape
Colour[] Colours;  // colours specified at each vertex
Vector[] Normals;  // normals specified at each vertex
Vector[] TexCoords;// 2D texture co-ordinate specified at each vertex, these map to the Renderable texture
uint[] Indices;    // the order of how the vertices are to be drawn 
IMaterial Material;// any material properties this Renderable may hold
ITexture Texture;  // the texture, if any, bound to the Renderable
DrawType DrawType; // an enum representing each OpenGL drawtype, specifying how the Renderable is drawn

Without Vertices, Indices or a DrawType, the Renderable is unrenderable, so these are forced in the constructor.  The vertices are the core property and cannot be changed afterwards; all the other properties can be, subject to validity checks.
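
So the constructor looks something along these lines – a sketch, with the parameter names assumed:

// The three properties a Renderable cannot render without are forced up front;
// the vertices have no public setter thereafter.
public Renderable(Vector[] vertices, uint[] indices, DrawType drawType)
{
    this.Vertices = vertices;
    this.Indices = indices;
    this.DrawType = drawType;
}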

Duplicate Duplicate

When drawing each Renderable as a vertex buffer object, it seems rather silly to store the core data multiple times – it will already be in the GPU, as well as in the artificial buffering system in Renderer (used to control and keep track of what is in the GPU) – so it is not necessary to hold it in the Renderable object itself.  Instead, I have used the notion of a 'pointer' per property.  This is not the conventional pointer traditionally used in computing, as it does not directly correspond to a place in memory.  Instead, it points to the place in the artificial buffer where the corresponding data is held.  Using this method means the Renderable object is rather lightweight and cost-effective in memory.

To obtain the 'pointer', the client asks the Renderer library.  This is done through the RendererFacade (the Facade design pattern), and takes only one parameter – the core data.  The core data is sent to the artificial buffer, and the 'pointer' is returned.
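
From the client's point of view, it is a single call per property – the method name here is illustrative, not Renderer's actual API:

// The facade copies the core data into the artificial buffer and hands back
// a typed 'pointer' (vertices : Vector[], indices : uint[]).
IVerticesDataPacket verticesPacket = rendererFacade.CreateDataPacket(vertices);
IIndicesDataPacket indicesPacket = rendererFacade.CreateDataPacket(indices);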

Where the vertex buffer objects are concerned, this changes the properties in the Renderable object to type 'pointer'.  Of course, this could cause problems, allowing clients to assign Colours 'pointer's to TexCoords, Indices 'pointer's to Normals and so forth, eventually crashing the computer with a GPU error.  To solve this, each 'pointer' is wrapped in a DataPacket interface, with corresponding DataPackets derived for Vertices, Colours, Normals etc., making sure that the correct 'pointer' is matched with the right property.

IVerticesDataPacket Vertices;
IColoursDataPacket Colours;
INormalsDataPacket Normals;
ITexCoordsDataPacket TexCoords;
IIndicesDataPacket Indices;

Using the standard get and set methods that properties have, changing a property is relatively easy; writing them manually also gives us somewhere to put our validity checking.  A sketch of what that might look like, where the one-colour-per-vertex rule and the Count property are assumptions about what 'valid' means here:
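
private IColoursDataPacket colours;

public IColoursDataPacket Colours
{
    get { return this.colours; }
    set
    {
        // Assumed validity check: one colour per vertex.
        if (value != null && value.Count != this.Vertices.Count)
        {
            throw new ArgumentException("Colour count must match vertex count.");
        }
        this.colours = value;
    }
}

But it could be better.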

Fluent Refactoring

Since we are refactoring, we might as well try to make this as easy to understand and read as possible.  For this, I have touched upon fluent interfaces.

The idea behind this is relatively simple when assigning a property.  It’s a little more difficult when it comes to multiple states.  But more on that later.

For example, if we want to assign a ColourDataPacket, a TexCoordsDataPacket and a Texture to a Renderable, it's as easy as this:

renderable.Colours = colourDataPacket;
renderable.TexCoords = texCoordsDataPacket;
renderable.Texture = texture;

However, to create a fluent interface, I have created extension methods that do more or less exactly the same thing.  The difference is that it reads better, it's less typing, and it's also a single statement.

renderable.Assign(colourDataPacket)
.Assign(texCoordsDataPacket)
.Assign(texture);

The extension methods are simple.  Not only do they do pretty much the same as normal assignment, but they also return the Renderable instead of void, allowing more methods to be called – this is a form of method chaining.  To make it even simpler, I have also used overloading, so that the right variant of the one method, Assign(), is chosen for whatever is being assigned.
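
One of the Assign overloads, as a sketch – the body mirrors the plain property assignment, and returning the renderable is what enables the chaining:

public static class RenderableExtensions
{
    // Same effect as renderable.Colours = colours, but chainable.
    public static Renderable Assign(this Renderable renderable, IColoursDataPacket colours)
    {
        renderable.Colours = colours;
        return renderable;
    }

    // Overloading picks the right Assign for each argument type.
    public static Renderable Assign(this Renderable renderable, ITexture texture)
    {
        renderable.Texture = texture;
        return renderable;
    }
}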

Pretty cool, eh?

 


Renderer: Demo

Posted by Robert Chow on 16/12/2009

So now that I’ve got a lot of my functionality going for me in this component, I thought I’d try and create a snazzy demo.

My supervisor’s obsessed with it. It’s like a little game – there is a rotating cube, and the challenge is you have to click on each face before you pass on to the next level. As you progress, the cube rotates just that little bit faster, making it harder to complete the challenge. Clicking on a face will turn that face on. Clicking on a face that is already on will turn it off – you can only progress when all 6 faces are switched on.

So you can immediately see that this little demo already involves the concepts of vertex buffer objects (drawing the cube), the scene graph (rotating the cube), the camera (viewing the cube, using the scene graph) and picking (turning faces on/off).  But what about the others?

Well, let's place the scene into dark surroundings – we're going to need lights to see our cube – that's lighting handled.  So where does texturing fit in?

Switching the faces on/off needs an indicator to show what state they are in.  We could easily do this by simply changing the colour of a face.  But that's kind of boring.  So instead, we're going to take the render-to-texture example and slap that on to any face which is on.  So that's textures included.

Here are some screenshots of this demo.

Demo Setup. From left to right: initial scene – just a white cube; let’s dim the lighting to, well, none; include some lights: green – centre, red – bottom, blue – left; create a texture.

Demo Rotate. The demo in motion – the camera is stationary, whereas the cube is rotating, and the lights are rotating independently around it.

Demo Play.  The first and second images show what happens when a face is clicked – the texture replaces the blank face.  Once all the faces are replaced with the texture, the demo advances to the next level.

Demo Camera.  This shows added functionality of how a camera can be fixed on to a light, thus the camera is rotating around the cube too.  Here it is fixed on to the blue light.  This is done with ease by manipulating the scene graph.

I guess you can't really get a grasp of how this demo works in its entirety – snapshots don't do it much justice.  I might try and upload a video – how, I'm unsure of, but I can probably find screen capture software on the net.

And unfortunately, there is no score involved. Yet. That will come when fonts come.
