Planning the Spontaneous

It's more than just a blueprint.

Posts Tagged ‘C#’

DrawInside(Stencil);

Posted by Robert Chow on 26/05/2010

So a few things have changed since I last made a post.

My targets for the end of the year have been reduced, which I’m very glad about.  No, it’s not because I’m a slacker; it’s simply that there isn’t enough time.

What’s new?

As a result, the target is now very focused on getting an end product out to show for Grapher.  This end product will have rather minimal functionality, yet will still be comprehensive enough to use as a Grapher component.  Of course, this target will not actually be used as a component without further work; namely, those missing functionalities.

Unfortunately, there are still hidden complexities.  There are always hidden complexities in any piece of software you write, particularly in the parts you have no prior knowledge about.  My supervisor said to me that you shouldn’t start coding until you have answered all of those question marks.  Yet another shortfall in my planning.  But I think I’m starting to learn.

Once again, Grapher threw up a hidden complexity in the way that layers would be drawn.  The idea was to draw the plot in its own viewport, eliminating everything outside of the view, such as data points that do not lie in the visible range.  Of course, this is all fine with a rectangular plot – the natural OpenGl viewport is rectangular.  However, when it comes to non-rectangular plots such as triplots or spiderplots, the story is rather different.

The first solution would be to use a mask to hide away the data outside of the plot.  Unfortunately, this mask would also hide away anything lying underneath that had already been rendered there.

As a result, the next solution builds on the first: because of the masking, some layers would have to be fixed in order; for example, the data plot would always be at the bottom, to make sure the mask does not obscure anything underneath it.

At this point in the placement, I never thought that I’d be adding something major to Renderer; the final solution, and arguably the best, would be to use a stencil buffer.

Stencil me this

A stencil buffer allows for a non-rectangular viewport.  Primitives acting as the viewport are drawn to the stencil buffer; using this, only items drawn inside the viewport outlined by the stencil buffer are rendered.

Simple right?

Of course, there are hidden complexities.  Again.

The best thing for me to do would be to spike the solution, instead of coding it straight into Renderer.

The first action I took was to visit the OpenGl documentation pages, which provided me with a few lines of code.

Gl.glEnable(Gl.GL_STENCIL_TEST);

// This will always write to the corresponding stencil buffer bit 1 for each pixel rendered to
Gl.glStencilFunc(Gl.GL_ALWAYS, 1, 1);

// This will only render pixels if the corresponding stencil buffer bit is 0
Gl.glStencilFunc(Gl.GL_EQUAL, 0, 1);

Using these, items drawn after the final call should only be visible if they lie outside of the viewport drawn after the second call.  I continue to claim that I am by no means an expert on OpenGl, but that’s the rough idea I get from these calls.

I tried this, but to no avail: nothing happened.

My worry was that SimpleOpenGlControl, the rendering control provided by the Tao Framework, did not support a stencil buffer; an earlier look had already told me that it did not support an alpha channel.  Luckily, the number of stencil buffer bits can be changed without having to change the source code.  This should be done before initialising the control contexts.

SimpleOpenGlControl.StencilBits = 1;
SimpleOpenGlControl.InitializeContexts();

Once again, nothing happened.

I had a look around the Internet to try and find some source code that worked; I noticed that one particular call was present in one of the examples, yet missing from mine.

// This specifies what action to take when either the stencil test or depth test succeeds or fails.
Gl.glStencilOp(Gl.GL_KEEP, Gl.GL_KEEP, Gl.GL_REPLACE);

Placing this at the start of the code appears to make use of the stencil buffer correctly.  I believe this is because when the stencil test succeeds, the stencil buffer value is replaced with the reference value instead of being kept as it was, so the triangle actually writes its 1s into the stencil buffer.  Again, I am no expert; there is a lot more information on the documentation page (how useful it is, I cannot tell).

Drawing a conclusion

Below is the code and a screenshot of the application I used to spike using the stencil buffer.

public partial class Client : Form
{
public Client()
{
InitializeComponent();

SimpleOpenGlControl.StencilBits = 1;
SimpleOpenGlControl.InitializeContexts();

Gl.glEnable(Gl.GL_STENCIL_TEST);
Gl.glStencilOp(Gl.GL_KEEP, Gl.GL_KEEP, Gl.GL_REPLACE);
// Enable hinting

}

private void simpleOpenGlControl_Paint(object sender, PaintEventArgs e)
{
Gl.glClearColor(1, 1, 1, 1);
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_STENCIL_BUFFER_BIT);

Gl.glMatrixMode(Gl.GL_PROJECTION);
Gl.glLoadIdentity();
Gl.glOrtho(0, SimpleOpenGlControl.Width, 0, SimpleOpenGlControl.Height, 10, -10);

Gl.glMatrixMode(Gl.GL_MODELVIEW);
Gl.glLoadIdentity();

// Draw to the stencil buffer
Gl.glStencilFunc(Gl.GL_ALWAYS, 1, 1);
// Turn off colour
Gl.glColorMask(Gl.GL_FALSE, Gl.GL_FALSE, Gl.GL_FALSE, Gl.GL_FALSE);
// Draw our new viewport
Gl.glBegin(Gl.GL_TRIANGLES);
Gl.glVertex2d(0, 0);
Gl.glVertex2d(SimpleOpenGlControl.Width, 0);
Gl.glVertex2d(SimpleOpenGlControl.Width / 2.0, SimpleOpenGlControl.Height);
Gl.glEnd();

// This has been changed so it will draw to the inside of the viewport
// To draw to the outside, use Gl.glStencilFunc(Gl.GL_EQUAL, 0, 1);

Gl.glStencilFunc(Gl.GL_EQUAL, 1, 1);
// Turn on colour
Gl.glColorMask(Gl.GL_TRUE, Gl.GL_TRUE, Gl.GL_TRUE, Gl.GL_TRUE);
// Draw triangles here

// Disable stencil test to be able to draw anywhere irrespective of the stencil buffer
Gl.glPushAttrib(Gl.GL_ENABLE_BIT);
Gl.glDisable(Gl.GL_STENCIL_TEST);
// Draw border here

Gl.glPopAttrib();

SimpleOpenGlControl.Invalidate();
}
}

Stencil Test.  This is created by first declaring the triangular viewport in the stencil buffer; only primitives inside the stencil will be rendered.  The border was added afterwards, after disabling the stencil test.


MultiOpenGlControl Demo

Posted by Robert Chow on 16/04/2010

I’ve managed to grab some video capture software and also produce a small demo of using multiple SimpleOpenGlControls.  Each screen has its own camera, focused on the same scene.  Using lighting and picking, clicking a face of the cube will change the scene, and the change is reflected in all of the cameras.

I did have a little trouble with the picking at first – when I picked, I forgot to change the current camera, so it would register the mouse click on one screen whilst using the camera of another.  With that fixed, it was nice to be able to play around with Renderer again.

True, the video’s not supposed to be a hit, but at least it’s showing what I want it to.


MultiOpenGlControl

Posted by Robert Chow on 14/04/2010

One thing I’ve only just come to tackle is the prospect of having more than one rendering context visible at any time.

Make Current

Using the Tao SimpleOpenGlControl, it’s a relatively simple feat.

MultiOpenGlControl.  Displaying two SimpleOpenGlControls in immediate mode is relatively easy.


The SimpleOpenGlControl class allows you to make the control the current rendering context. Any OpenGl calls made after this will be applied to the most recently made current rendering context.

public void Init()
{
SimpleOpenGlControlA.InitializeContexts();
SimpleOpenGlControlB.InitializeContexts();
}

public void SimpleOpenGlControlAPaint(object sender, PaintEventArgs e)
{
SimpleOpenGlControlA.MakeCurrent(); // all code following will now apply to control A

Gl.glClearColor(1, 1, 0, 0);
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

Gl.glMatrixMode(Gl.GL_PROJECTION);
Gl.glLoadIdentity();
Gl.glOrtho(-1, 1, -1, 1, 1, -1);

Gl.glMatrixMode(Gl.GL_MODELVIEW);
Gl.glLoadIdentity();

Gl.glColor3d(1, 0, 0);
Gl.glBegin(Gl.GL_POLYGON);
Gl.glVertex2d(-0.5, -0.5);
Gl.glVertex2d(0.5, -0.5);
Gl.glVertex2d(0, 0.5);
Gl.glEnd();

SimpleOpenGlControlB.Invalidate(); // I invalidate B so it gets refreshed – if I only invalidate A, B will never get drawn
}

public void SimpleOpenGlControlBPaint(object sender, PaintEventArgs e)
{
SimpleOpenGlControlB.MakeCurrent();  // all code following will now apply to control B

Gl.glClearColor(0, 1, 1, 0);
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

Gl.glMatrixMode(Gl.GL_PROJECTION);
Gl.glLoadIdentity();
Gl.glOrtho(-1, 1, 1, -1, 1, -1); // NOTE: I have changed the projection to show the images differently

Gl.glMatrixMode(Gl.GL_MODELVIEW);
Gl.glLoadIdentity();

Gl.glColor3d(1, 0, 0);
Gl.glBegin(Gl.GL_POLYGON);
Gl.glVertex2d(-0.5, -0.5);
Gl.glVertex2d(0.5, -0.5);
Gl.glVertex2d(0, 0.5);
Gl.glEnd();

SimpleOpenGlControlA.Invalidate(); // I invalidate A so it gets refreshed – if I only invalidate B, A will never get drawn
}

Multiple Vertex Buffer Objects

I did run into a little trouble though. I found that when I was rendering in immediate mode, this worked fine.  However, when I came to using vertex buffer objects and rendering in retained mode, I found the results were different.

MultiVertexBufferObjects.  Only one of the controls renders the image, despite making the exact same calls in each of the control’s paint methods.  You can tell the paint method of control A is called because the clear color is registered.


Only one of the rendering contexts shows an image; the other does not, despite the obvious call to clear the color buffer.

int[] buffers = new int[3];
float[] vertices = new float[] { -0.5f, -0.5f, 0.5f, -0.5f, 0, 0.5f };
float[] colours = new float[] { 1, 0, 0, 1, 0, 0, 1, 0, 0 };
uint[] indices = new uint[] { 0, 1, 2 };

public void Init()
{
SimpleOpenGlControlA.InitializeContexts();
SimpleOpenGlControlB.InitializeContexts();

Gl.glGenBuffers(3, buffers);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[0]);
Gl.glBufferData(Gl.GL_ARRAY_BUFFER, (IntPtr)(vertices.Length * sizeof(float)), vertices, Gl.GL_STATIC_DRAW);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[1]);
Gl.glBufferData(Gl.GL_ARRAY_BUFFER, (IntPtr)(colours.Length * sizeof(float)), colours, Gl.GL_STATIC_DRAW);
Gl.glBindBuffer(Gl.GL_ELEMENT_ARRAY_BUFFER, buffers[2]);
Gl.glBufferData(Gl.GL_ELEMENT_ARRAY_BUFFER, (IntPtr)(indices.Length * sizeof(uint)), indices, Gl.GL_STATIC_DRAW);
}

public void SimpleOpenGlControlAPaint(object sender, PaintEventArgs e)
{
SimpleOpenGlControlA.MakeCurrent(); // all code following will now apply to control A

Gl.glClearColor(1, 1, 0, 0);
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

Gl.glMatrixMode(Gl.GL_PROJECTION);
Gl.glLoadIdentity();
Gl.glOrtho(-1, 1, -1, 1, 1, -1);

Gl.glMatrixMode(Gl.GL_MODELVIEW);
Gl.glLoadIdentity();

Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[0]);
Gl.glVertexPointer(2, Gl.GL_FLOAT, 0, IntPtr.Zero);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[1]);
Gl.glColorPointer(3, Gl.GL_FLOAT, 0, IntPtr.Zero);

Gl.glEnableClientState(Gl.GL_VERTEX_ARRAY);
Gl.glEnableClientState(Gl.GL_COLOR_ARRAY);

Gl.glBindBuffer(Gl.GL_ELEMENT_ARRAY_BUFFER, buffers[2]);
Gl.glDrawElements(Gl.GL_POLYGON, indices.Length, Gl.GL_UNSIGNED_INT, IntPtr.Zero);

Gl.glDisableClientState(Gl.GL_VERTEX_ARRAY);
Gl.glDisableClientState(Gl.GL_COLOR_ARRAY);

SimpleOpenGlControlB.Invalidate(); // I invalidate B so it gets refreshed – if I only invalidate A, B will never get drawn
}

public void SimpleOpenGlControlBPaint(object sender, PaintEventArgs e)
{
SimpleOpenGlControlB.MakeCurrent(); // all code following will now apply to control B

Gl.glClearColor(0, 1, 1, 0);
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

Gl.glMatrixMode(Gl.GL_PROJECTION);
Gl.glLoadIdentity();
Gl.glOrtho(-1, 1, -1, 1, 1, -1); // NOTE: I have changed the projection to show the images differently

// same code as above

SimpleOpenGlControlA.Invalidate(); // I invalidate A so it gets refreshed – if I only invalidate B, A will never get drawn
}

Multiple Vertex Buffer Objects

This is because generated buffers only apply to the rendering context that is current at the time – in this case, SimpleOpenGlControlB, since it was the last control to have its contexts initialised.  If you are using buffers and want to render the same data in a different rendering context, you have to recreate the data buffers under that other context.  It seems like a bit of a waste really – having to create the same thing twice just to view it in a different place.

public void Init()
{
SimpleOpenGlControlA.InitializeContexts();
SimpleOpenGlControlB.InitializeContexts();

SimpleOpenGlControlA.MakeCurrent();
// create bufferset

SimpleOpenGlControlB.MakeCurrent();
// repeat the same code to recreate the bufferset under a different rendering context
}

// Draw methods
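
To avoid literally duplicating the buffer-creation code, the shared part can be pulled into a helper that is called once per control.  This is just a sketch using the fields from the earlier listing; since each context hands out its own buffer names, a real version would keep a separate set of ids per control rather than overwriting one shared array.

private void CreateBufferSet()
{
    // Buffer objects belong to whichever rendering context is current when
    // glGenBuffers is called, so this must be run once for each context.
    Gl.glGenBuffers(3, buffers);
    Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[0]);
    Gl.glBufferData(Gl.GL_ARRAY_BUFFER, (IntPtr)(vertices.Length * sizeof(float)), vertices, Gl.GL_STATIC_DRAW);
    Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[1]);
    Gl.glBufferData(Gl.GL_ARRAY_BUFFER, (IntPtr)(colours.Length * sizeof(float)), colours, Gl.GL_STATIC_DRAW);
    Gl.glBindBuffer(Gl.GL_ELEMENT_ARRAY_BUFFER, buffers[2]);
    Gl.glBufferData(Gl.GL_ELEMENT_ARRAY_BUFFER, (IntPtr)(indices.Length * sizeof(uint)), indices, Gl.GL_STATIC_DRAW);
}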

Shared Vertex Buffer Objects

Fortunately, there is a way around this. Wgl offers a method, wglShareLists, which allows different rendering contexts to share not only the same display lists, but also VBOs, FBOs and textures. To do this, you need to get hold of the rendering context handles; unfortunately, you can only do this with a small hack.

public void Init()
{
SimpleOpenGlControlA.InitializeContexts();
SimpleOpenGlControlB.InitializeContexts();

SimpleOpenGlControlA.MakeCurrent();
var aRC = Wgl.wglGetCurrentContext();
SimpleOpenGlControlB.MakeCurrent();
var bRC = Wgl.wglGetCurrentContext();

Wgl.wglShareLists(aRC, bRC);

// start creating vbos here – these will now be shared between the two contexts.
}

// Draw methods

Multiple Controls For Renderer

This now provides two solutions for Renderer – I can either have separate lists for each control, or have them all share the same one.  There are advantages and disadvantages to both.

In terms of management, it would be a lot easier to have them share the same lists – there is an overhead in tracking which lists are part of which control.  It would also cause a problem when a user tries to draw a scene for a particular control when the vertex buffer objects in the scene are assigned to another.

It would also be advantageous to have shared lists when using heavy OpenGl operations, such as texture binding – it would have to bind a new texture each time a different control is refreshed.  Sharing a texture list would erase this problem.

In the short run, using separate lists has its disadvantages; the memory use is rather inefficient because a new set of lists has to be created each time a control is created.  However, the memory story is a different matter in the long term.  With separate lists, once a control is destroyed, so too are its lists.  Sharing lists means the shared set will continue to accumulate more and more data until all the controls have been destroyed – when that happens is entirely up to the user, and could take some time.

As a result, for the time being, I am going to go for the option of sharing lists – mainly because it does have its advantages in terms of not having to deal with the management side, and it also minimises the frequency of heavy OpenGl operations.  Not only this, but also because it would take some time to implement the management.  If  I do have time nearer the end of my placement, I may decide to come back and revisit it.

MultiOpenGlControl-3D.  Here is Renderer using 4 different SimpleOpenGlControls simultaneously.  I’ve tried to replicate a (very simple) 3D editing suite, whereby there are 3 orthogonal views, Front, Top and Side, and then the projection view.  It uses the same model throughout, but each control has differing camera positions.


Linq me

Posted by Robert Chow on 07/04/2010

It’s been more than a few months into my placement now, and my grasp of C# has really gone further than just ‘beginner’ now.

There is a lot of material about Linq on the Internet, and because I should have posted about it a lot earlier, I’m only just going to touch upon a small problem I had yesterday.

Linq

Linq stands for language-integrated query and consists of a small number of extension methods allowing users to query an IEnumerable collection.  The idea is to allow SQL-like queries to be made on a collection, and they are very useful.

Invalid Operation Exception : Collection was modified;

The problem I had was taking a collection, applying a filter and then using the filter to alter the original collection.  Here’s an example of the code.

var toDelete = aCollection.Take(20).Skip(10);
foreach(CollectionItem collectionItem in toDelete)
{
aCollection.Remove(collectionItem);
}

This takes a collection, and applies a filter to it using Linq. The Take(20) will take the first 20 items in aCollection.  Applying Skip(10) will then filter the result to take the remaining items after skipping the first 10.  In the foreach loop, I then attempt to delete the items I have filtered from the original collection.

It works fine for the first iteration, yet on the second it throws a nasty error, “Invalid Operation Exception : Collection was modified”.  I know I am modifying the original collection, yet strangely enough it is throwing the exception, expressing that I have modified the toDelete collection.  Nowhere in the loop do I touch the toDelete collection at all.

So it seems

Unfortunately, I do. toDelete is not a separate collection at all; it holds a reference to aCollection and applies the filters lazily, each time it is enumerated.  Modifying aCollection therefore also modifies what toDelete yields, and invalidates the enumeration the foreach loop is part-way through.  My first impression of using Take and Skip was that they would return a new collection with just the filtered elements; alas, it is not to be.

Knowing this, I realise I need to take the filtered elements and recreate the filter in a new collection.  Fortunately, Linq can also help me do this.

toDelete = toDelete.ToList();
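Put together, the corrected version of the earlier snippet looks something like this (a minimal sketch, assuming aCollection is a List<CollectionItem>):

// Materialise the filtered items into their own list first...
List<CollectionItem> toDelete = aCollection.Take(20).Skip(10).ToList();

// ...so that removing from the original collection no longer invalidates the iteration.
foreach (CollectionItem collectionItem in toDelete)
{
    aCollection.Remove(collectionItem);
}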

Seems a bit of a pain that the filter doesn’t actually give you a new collection. I hope that after reading this, you can avoid similar problems to the ones I had.


OOP

Posted by Robert Chow on 01/04/2010

OOP.  Object-orientated-programming.  Something that every programmer should be able to do nowadays writing a 3rd gen language.  With dynamic languages, I’m not entirely sure on how different it is, but I’m guessing the principles are the same.

But that’s not what I’m here to blog about.  I’ve managed to whip up a demo of the toolkit, OOP, or I’d like to call it, Object-Orientated-Paint.

Object-Orientated-Paint

So for those that use Microsoft Paint (and I still do), many will know how annoying it is to only have one canvas, and that every single dash or pixel “painted” on to the canvas is essentially there to stay unless another splash of paint writes over it.  It’s quite frustrating, especially when only a few minutes down the line do you realise that you forgot to draw something and therefore have to edit the whole picture.  Once you’ve edited that picture, you’ve also no way of going back either (the undo command only has a history of 3, something that has frustrated me rather a lot over the years).

So why not better it by encapsulating each shape/splash of paint as an object?  It’s essentially what Microsoft Powerpoint uses.  True, you don’t get the same precision and granularity to edit a particular shape as you would in Paint, but it’s a heck of a lot easier to use.  Although my knowledge is very limited in the area, I’m guessing you could make the same connection with C and C#.  C is a language (correct me if I’m wrong here) that is not designed for OOP, yet it allows you to reach the gutsy insides of a computer.  C# on the other hand is designed for OOP, and although it’s a lot harder to get to the lower levels of computation, it is a lot faster to develop with, and ultimately easier to use.

Now jumping the gun a bit, it’s similar to Renderer and the toolkit.  Renderer is just a wrapper around OpenGl – it can do everything it wants, providing OpenGl supports it (although I haven’t wrapped around every single OpenGl call – that would be insane).  Yet if I were to use Renderer indirectly, via the toolkit, I am unable to do everything that Renderer supports – the toolkit limits my actions, yet makes the more commonly used functions of Renderer a lot easier to deal with.

Demo

Enough rambling.  I’ve implemented basic shapes in the toolkit, as well as very basic shape control.

Shapes.  Using the toolkit, I am able to ask for basic shapes.  Each shape has a fill colour, line colour and line width property.


Shapes Select.  I’ve also added picking functionality and shape control too.  When a user selects a fill/border of a shape, it will recognise it as a shape.  This is then decorated with a shape control.  This shape control has handles – these too are shapes.


Shapes Resize.  Clicking on a handle requires a couple of steps before you can interact with it.  Although clicking on a border/fill will return the shape representing the handle, the handles act differently and need to be tested for.  Once this is done, we can notify the application that a handle is selected, not a shape.  This is important because if it were treated as a shape, then the shape control would try to decorate the handle – clearly something that needs to be avoided.

Shapes Move.  Clicking on a shape not only decorates it with a shape control, but also allows you to drag and drop the shape.


OOPhilosophy

Before this demo, I’d had a lot of trouble creating similar objects. I’ve created bézier curves for the toolkit too, but I haven’t shown them because I’ve never been too happy with them.

If you recall from DrawCurve(Bézier);, near the end I was able to draw a master curve, encapsulating the bézier sub-curves.  To incorporate this into a scene graph, I needed an anchor, a node.  I did this by using a get property for the root of the curve.

public class Bezier : IRootNode
{
public Bezier()
{
RootNode = new Node(Matrix.Identity());
}

public INode RootNode { get; private set; }

private void CreateSubCurve()
{
var subCurve = new SubCurve();
RootNode.AddChild(subCurve.RootNode);
}

}

The problem with this is that once the RootNode of the bézier curve is accessed, anything can be done with it.  A renderable can be assigned, all the children cleared, random children added.  It wasn’t very safe, and I never really did like it.

Why I never thought of simply inheriting from a model node in the first place, I’ll never know.  Not only does it allow me to change the node contents a lot easier, but it also means that I can change the interface, so that only a few select items can be modified.

public interface IShape
{
Colour FillColour { get; set; }
Colour BorderColour { get; set; }
double BorderThickness { get; set; }
}

internal class Shape : Node, IShape
{
public Shape(IFill fill, IBorder border) : base(Matrix.Identity())
{
Fill = fill;
Border = border;

base.AddChild((INode)Fill);
base.AddChild((INode)Border);
}

public Colour FillColour { get {…} set {…} }
public Colour BorderColour { get {…} set {…} }
public double BorderThickness { get {…} set {…} }

public IFill Fill { get; private set; }
public IBorder Border { get; private set; }

}

The only problem is that I have to cast from an IShape to an INode for when I want to add it to a graph, but at least the internals are protected from modification.  I suppose you could say someone could take an IShape and cast it into an INode and use it from there.  I guess there’s no stopping that.

Regardless, I’m finding it a lot easier to use the latter method than the former.  It took me only a week to create the shapes demo, but around two to get the bézier curves in the toolkit working.

Out of shape

Has anyone seen the statue designed for London 2012?  I thought I’d try and recreate it.  And if you ask me, I can’t say I’m particularly fond of it either.

Kapoor 2012.  Winning design for the London 2012 statue.  Apparently, it’s designed to stay.  No thanks.


OOP Kapoor 2012.  It might as well look like this.

As a side note, a couple of guys have released a rather nifty free tool, Paint.NET.  It’s a lightweight application that’s pretty much Paint, but similar in style and functionality to Adobe Photoshop.  So for those who can’t afford the extortionate prices of designer tools, well, I’d highly recommend giving it a try.   Did I also mention it’s free?


Because testing_isnt_all_bad

Posted by Robert Chow on 26/03/2010

I took a course module last year that didn’t end so well. My results were fine, but the teaching was plain appalling.  One of our assignments wanted us to try testing, yet the lesson hadn’t been covered at the time – it was covered after the assignment deadline. The reason behind this was so we had to do the research ourselves instead of being taught it textbook style.  I guess that way does work better – most of the stuff I’m doing now is all research I’ve done myself.  Still, it doesn’t excuse them for the time they handed an assignment out 2 days before the deadline, despite it being a 2 week workload.

My point is, testing is important.  As mundane as it is, it’s necessary.  We all try to avoid it.  We all seem to think we’re already doing it.  But that’s a lie. Unless we are actually devoting time and effort to it, we’re not actually doing it.  And fortunately for me, I found this out the easy way.

First glance tells me it’s fine

I have 3 methods, each an extension method for an integer.  These are:

bool IsPowerOf2();

int NextPowerOf2Up();

int NextPowerOf2Down();

I think from the names they’re pretty much self-explanatory.

When writing these methods, they worked the way they should do.

2.IsPowerOf2(); // returns true

7.IsPowerOf2(); // returns false

5.NextPowerOf2Up(); // returns 8

21.NextPowerOf2Down(); // returns 16

Or at least that’s the way I thought.

I’ve learnt over the few years of my experience that if you’re not thinking testing, then you’re not thinking outside the box.  All you see are the cases that work, and that should work.  Fine.  But then what if someone else thinks outside of the box?

(-1).NextPowerOf2Down();

Logically, no power of 2 should ever exist less than zero, and no integer power of 2 should exist less than 1.  Lucky for me, I already thought of this case, so I had it covered by throwing an exception.  And that’s basically it.

Second glance tells me different

Around Christmas time, I remember blogging that I wanted to look into using a tool called Machine.Specifications, more commonly known as M.Spec.  M.Spec provides a handy library designed for writing and implementing tests in a fluent manner.  My supervisor asked me to write some of my own tests to cover the extension methods I’ve written, and that was when I started to think testing.

The idea behind testing is to try to cover every single case, without actually testing every single case.  Taking NextPowerOf2Up(), I can identify the different cases; these are values less than 1, and values equal to or greater than 1.  This is because the pivotal point in this method is 1 itself – it is the smallest possible power of 2.  We expect that any case less than 1 will result in 1, and any case that is equal to or greater than 1 will run as expected.  I can also split the latter into two more sections – those that are already a power of 2, and those that are not.  This is to ensure that the result does end up at the next power of 2 and not the current one.

Sounds fairly simple.  But I did find one case that actually forced me to change the method code. This case covers all numbers equal to and greater than the highest possible power of 2 supported by an integer value.   Trying to ask for a power of two greater than that supported by int should throw an exception.

It was because of this case that I really saw the benefit of testing.  I knew it was important, but just not entirely how.  Sure, I knew it was beneficial to see if your method works as expected.  But that’s pretty much it really.  No one expects to use the corner cases, but they still need to be covered.  And it’s using testing that’s helped me to do that.
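Putting those cases together, NextPowerOf2Up might end up looking something like the sketch below.  To be clear, this is my own illustration of the behaviour the specs describe, not the actual Magpie implementation:

public static class Int32ExtensionsSketch
{
    public static int NextPowerOf2Up(this int value)
    {
        // Anything below 1 clamps to the smallest integer power of 2.
        if (value < 1)
        {
            return 1;
        }

        // 2^30 is the largest power of 2 an int can hold, so anything at or
        // above it has no "next" power of 2 that fits in an Int32.
        const int highestPowerOf2 = 1 << 30;
        if (value >= highestPowerOf2)
        {
            throw new ArgumentException("No larger power of 2 fits in an Int32.", new OverflowException());
        }

        // Find the smallest power of 2 strictly greater than value.
        int result = 1;
        while (result <= value)
        {
            result <<= 1;
        }
        return result;
    }
}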

M.Spec

Anyway, here’s the M.Spec code that I used to write my tests.  Like I said, it’s fairly fluent – M.Spec provides types and extension methods to aid this.

Note: SpecWithResult<> and FailingSpec are very simple containers designed by my supervisor.  SpecWithResult<> holds a result, and FailingSpec holds an exception.
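For reference, they might look something along these lines (a guess at their shape rather than the actual code; the fields are static so that the Because/It field initialisers can refer to them):

public abstract class SpecWithResult<TResult>
{
    protected static TResult result;
}

public abstract class FailingSpec
{
    protected static Exception exception;
}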

[Subject(typeof(Magpie.Core.Extensions.Int32Extensions), ".NextPowerOf2Up")]
public class when_given_0 : SpecWithResult<int>
{
Because of = () => result = 0.NextPowerOf2Up();
It should_equal_1 = () => result.ShouldEqual(1);
}

[Subject(typeof(Magpie.Core.Extensions.Int32Extensions), ".NextPowerOf2Up")]
public class when_given_1 : SpecWithResult<int>
{
Because of = () => result = 1.NextPowerOf2Up();
It should_equal_2 = () => result.ShouldEqual(2);
}

[Subject(typeof(Magpie.Core.Extensions.Int32Extensions), ".NextPowerOf2Up")]
public class when_given_2 : SpecWithResult<int>
{
Because of = () => result = 2.NextPowerOf2Up();
It should_equal_4 = () => result.ShouldEqual(4);
}

[Subject(typeof(Magpie.Core.Extensions.Int32Extensions), ".NextPowerOf2Up")]
public class when_given_3 : SpecWithResult<int>
{
Because of = () => result = 3.NextPowerOf2Up();
It should_equal_4 = () => result.ShouldEqual(4);
}

[Subject(typeof(Magpie.Core.Extensions.Int32Extensions), ".NextPowerOf2Up")]
public class when_given_4 : SpecWithResult<int>
{
Because of = () => result = 4.NextPowerOf2Up();
It should_equal_8 = () => result.ShouldEqual(8);
}

[Subject(typeof(Magpie.Core.Extensions.Int32Extensions), ".NextPowerOf2Up")]
public class when_given_5 : SpecWithResult<int>
{
Because of = () => result = 5.NextPowerOf2Up();
It should_equal_8 = () => result.ShouldEqual(8);
}

[Subject(typeof(Magpie.Core.Extensions.Int32Extensions), ".NextPowerOf2Up")]
public class when_given_negative_1 : SpecWithResult<int>
{
Because of = () => result = (-1).NextPowerOf2Up();
It should_equal_1 = () => result.ShouldEqual(1);
}

[Subject(typeof(Magpie.Core.Extensions.Int32Extensions), ".NextPowerOf2Up")]
public class when_given_127 : SpecWithResult<int>
{
Because of = () => result = 127.NextPowerOf2Up();
It should_equal_128 = () => result.ShouldEqual(128);
}

[Subject(typeof(Magpie.Core.Extensions.Int32Extensions), ".NextPowerOf2Up")]
public class when_given_128 : SpecWithResult<int>
{
Because of = () => result = 128.NextPowerOf2Up();
It should_equal_256 = () => result.ShouldEqual(256);
}

[Subject(typeof(Magpie.Core.Extensions.Int32Extensions), ".NextPowerOf2Up")]
public class when_given_129 : SpecWithResult<int>
{
Because of = () => result = 129.NextPowerOf2Up();
It should_equal_256 = () => result.ShouldEqual(256);
}

[Subject(typeof(Magpie.Core.Extensions.Int32Extensions), ".NextPowerOf2Up")]
public class when_given_negative_128 : SpecWithResult<int>
{
Because of = () => result = (-128).NextPowerOf2Up();
It should_equal_1 = () => result.ShouldEqual(1);
}

[Subject(typeof(Magpie.Core.Extensions.Int32Extensions), ".NextPowerOf2Up")]
public class when_given_min_value : SpecWithResult<int>
{
Because of = () => result = int.MinValue.NextPowerOf2Up();
It should_equal_1 = () => result.ShouldEqual(1);
}

[Subject(typeof(Magpie.Core.Extensions.Int32Extensions), ".NextPowerOf2Up")]
public class when_given_max_value : FailingSpec
{
Because of = () => exception = Catch.Exception(() => int.MaxValue.NextPowerOf2Up());
It should_throw_an_exception = () => exception.ShouldBeOfType<ArgumentException>();
It should_throw_an_exception_with_an_inner_exception = () => exception.InnerException.ShouldBeOfType<OverflowException>();
}

Here are the results after running the library with M.Spec

Int32Extensions .NextPowerOf2Up, when given 0
» should equal 1

Int32Extensions .NextPowerOf2Up, when given 1
» should equal 2

Int32Extensions .NextPowerOf2Up, when given 2
» should equal 4

Int32Extensions .NextPowerOf2Up, when given 3
» should equal 4

Int32Extensions .NextPowerOf2Up, when given 4
» should equal 8

Int32Extensions .NextPowerOf2Up, when given 5
» should equal 8

Int32Extensions .NextPowerOf2Up, when given negative 1
» should equal 1

Int32Extensions .NextPowerOf2Up, when given 127
» should equal 128

Int32Extensions .NextPowerOf2Up, when given 128
» should equal 256

Int32Extensions .NextPowerOf2Up, when given 129
» should equal 256

Int32Extensions .NextPowerOf2Up, when given negative 128
» should equal 1

Int32Extensions .NextPowerOf2Up, when given min value
» should equal 1

Int32Extensions .NextPowerOf2Up, when given max value
» should throw an exception
» should throw an exception with an inner exception

Contexts: 13, Specifications: 14

Oh, and for the record, we managed a petition against that particular course and got over 75% backing from the students who took it, including some who took it in previous years too.


glGenTexture(Atlas); // part1

Posted by Robert Chow on 08/03/2010

So besides having a problem with fonts, I also know that at a later stage I will eventually have to incorporate texture atlases. A texture atlas is a large texture that is home to many smaller sub-images, each of which can be accessed by using different texture coordinates. The idea behind this is to minimise the number of texture changes in OpenGl – texture changing is regarded as a very expensive operation and should be kept as minimal as possible.  Using this method would create a huge performance benefit, and would be used extensively with fonts.


Helicopter Texture Atlas.  This shows how someone has taken a model of a helicopter and broken it up into its components, before packing it into a single texture atlas.  This image is from http://www.raudins.com/glenn/projects/Atlas/default.htm

Where To Start?

Approaching this problem, there are multiple things for me to consider.

The first is how to compose the texture atlas. Through a little bit of research, I have found a couple of methods to consider – the Binary Split Partition (BSP) algorithm, and using Quadtrees.

Secondly, the possible notion of an optimisation method. This should take all the textures in use into consideration, and then sort them so the number of texture changes is at a minimum. This would be done by taking the most frequently used sub-textures and placing them on to a single texture. Of course, with this alone there are many things to consider, such as what frequency algorithm I should use, and how often the textures need to be optimised.

Another thing for me to consider is the internal format of the textures themselves, and also how the user will interact with the textures.  It would be ideal for the user to believe that they are using separate textures, yet behind the scenes, they are using a texture atlas.

Updating an Old Map

As I already have the Renderer library in place, ideally the introduction of texture atlases will not change the interface dramatically, if not at all.

Currently, the user asks Renderer for a texture handle, with an internal format specified for the texture.  Initially this handle does not correspond to a texture, and seems rather pointless.  Of course, the obvious answer to that would be to only create a handle upon creating a texture.  The reason behind using the former of the two options is the flexibility it provides when a texture is used multiple times in a scene graph.   Using a texture multiple times in a scene means that the handle has to be referred to in each node it is to appear at in the scene graph.  Using the first option means that if the texture image changes, only the contents of the handle change, and the scene graph updates automatically.  Doing it the second way would mean having to change the handle in every single node of the scene graph it corresponded to.  The handle acts as a middle-man.
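As a rough illustration of that middle-man idea (the names here are made up for the example, not Renderer's actual API), the scene graph only ever sees the handle, and re-assigning the texture swaps what it points at:

public class TextureHandle
{
    // Nodes in the scene graph keep hold of the handle, never the texture id itself.
    public int TextureId { get; private set; }

    // Swapping the underlying texture updates every node that uses this handle,
    // without having to touch the scene graph at all.
    public void Assign(int textureId)
    {
        TextureId = textureId;
    }
}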

When incorporating texture atlases, I would like to keep this functionality; but it does mean that I will have to change the interface in another area of Renderer, and that is texture co-ordinates.  At the moment, the texture co-ordinates are nice and simple.  To access the whole texture, you use texture co-ordinates of (0,0) for the bottom-left hand corner, and (1,1) for the top-right hand corner.  To the user on the outside, this should be no different, especially considering the user thinks they are just using a single texture.  To do this, a mapping is needed to convert the user's (s,t) texture co-ordinates to the internal texture atlas co-ordinates.  Not really a problem.  But it is if we’re using Vertex Buffer Objects.
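To illustrate the kind of mapping I mean (again, the names are invented for the example), each sub-image only needs to know where it sits in the atlas, and the user's (s,t) pair is scaled and offset into that region:

public struct AtlasRegion
{
    // Bottom-left corner and size of the sub-image, in atlas texture co-ordinates.
    public float OffsetS, OffsetT;
    public float Width, Height;

    // Maps a user-facing (s, t), where (0,0)-(1,1) spans the whole sub-image,
    // into the co-ordinates of the atlas texture it is packed into.
    public void Map(float s, float t, out float atlasS, out float atlasT)
    {
        atlasS = OffsetS + s * Width;
        atlasT = OffsetT + t * Height;
    }
}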

Just using the simple (0,0) to (1,1) co-ordinates all the time means that we only need to create the texture co-ordinates in our vertex buffers once, and refer to them all the time.  Yet they will change all the time if we are using internal mapping, especially if we are also going to be changing the texture contents that the handles are referring to.

I think the best way of solving this is to make sure that the texture co-ordinates inside the vertex buffer objects are recreated each time a new texture is bound to a handle.  How I go about this, I am not entirely sure, but it’s definitely something I need to think about.  This would definitely mean a fairly tight relationship between the texture co-ordinates in a scene graph node and the texture that they are bound to – of course, they can’t really function without each other anyway, so it does make sense.  Unfortunately, the dependence is there and it always will be.

Structure

Before all of that comes into place, I also need to make sure that the structure of the system is suitable, on top of it being maintainable, readable and scalable.  For the time being, I am using a 3-tier structure.

Texture Atlas Structure.  This shows the intended structure of my texture atlas system.

Starting at the top, there is the Atlas.  This is essentially what the user interface will work through.  The Atlas will hold on to a list of catalogs.  The reason I have done this is to separate the internal formats of the textures: a sub-image with the intended internal format A cannot be on the same texture as a sub-image of internal format B.   Inside each catalog is a collection of Pages with the corresponding internal format.  A Page is host to a large texture with many smaller sub-images.  Outside of this structure are also the texture handles.  These each refer to their own sub-image, and hide away the complexity of the atlas from the user, making it seem as if they are handling separate textures.
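A rough skeleton of that structure might look like the following; the type names are guessed from the description above rather than taken from Renderer itself, and AtlasRegion is the illustrative type from earlier:

using System.Collections.Generic;

public class Atlas
{
    // One catalog per internal format, since sub-images with different
    // internal formats cannot share a texture.
    private readonly Dictionary<int, Catalog> catalogs = new Dictionary<int, Catalog>();
}

public class Catalog
{
    // Every page in a catalog shares the catalog's internal format.
    private readonly List<Page> pages = new List<Page>();
}

public class Page
{
    // The large OpenGl texture, plus the regions of the sub-images packed into it.
    public int TextureId { get; set; }
    private readonly List<AtlasRegion> regions = new List<AtlasRegion>();
}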

For the time being, this is still a large learning curve for me, not only in learning OpenGl, but also in using design patterns and C# tricks, and more recently, planning.  I don’t plan very often, so this is one time I am hoping to do just that!  These posts may come a little slowly as I’m in Manchester at the moment, and although I thought my health was getting better, it seemingly isn’t all going away.  With a little luck, I’ll accomplish what it is that I want to do over the next couple of weeks.  In the next post, I hope to touch on BSP, which I have implemented, and Quadtrees – something I still need to look at.   Normally I write my blog posts after I’ve implemented what it is I’m blogging about, but this time I’ve decided to do it a little differently.  The reason for this is that after I implemented the BSP, I realised it is actually a lot more flexible to implement the Quadtrees instead.  Whether I do or not, I think is dependent on time.  As long as there is a suitable interface, then all in all, it is just an implementation detail, and nothing major to worry about.  This is, at the current time, by no means a finished product.


“{Binding OpenGl To WPF}” : Part 3

Posted by Robert Chow on 16/02/2010

So this is (hopefully) the last installment of my journey to creating my first WPF application using an OpenGl control.  You can find the first part here, and the second part here.  In this post, I am going to talk about where the magic happens behind a WPF form.

Model-View-View Model Pattern

So we have the GOF design patterns; these are best code practices and are just guidelines.  Similarly, there are also design patterns for implementing GUIs: Model-View-Presenter, Model-View-Controller and Model-View-View Model.  I don’t really know much about the former two, and I don’t claim to know too much about the MVVM pattern either, but it’s the one I am trying to use for developing my WPF applications.

In the pattern, we have the model, which is where all the code defining the application is; the view – this is the GUI; and the view-model, which acts as an abstraction between the view and the model.  The view is mostly written in XAML, and the view-model and model I have written in C#.  Technically, the view should never need to know about the model, and vice versa.  It is the view-model which interacts between the two layers.  I find it quite easy to look at it like this.

View Model Gear.  How the MVVM pattern may be portrayed in an image.  To be honest, I think the pattern should really be called the View-View Model-Model pattern, but of course, that’d just be stupid.  This image is from http://blogs.msdn.com/blogfiles/dphill/WindowsLiveWriter/CollectionsAndViewModels_EE56/.

Code Inline and Code Behind

You’ll notice that for the majority of the time, I am using XAML to do most of the form.  The rest of the code that doesn’t directly interact with the view using the XAML is either code inline or code behind.  The code behind is essentially what we use to represent the view-model and the model.  The code inline is the extra code that is neither XAML nor code behind.  It helps to define the view and is very rarely used, especially if the XAML can describe everything related to the view.  Incidentally, I used code inline at the start of the last post, where I defined a windows forms host using C# instead of doing it in the XAML.

In order to keep the intended separation of concerns in the MVVM pattern, the code behind has to interact with the view as little as possible, if not at all.  The flow of code should always travel in the direction of view to view-model, and never the other way round.  A way to do this is to introduce bindings.

Bindings

A binding in WPF is declared in the XAML and binds a XAML component to a property in the view-model.  In doing this, the component can take on the value specified in the view-model.  However, before we can create a binding, we have to form the connection between the view and the view-model.  To do this, we assign the view-model to the view data-context.  This is done in the logic in the code behind of the view.

/// <summary>
/// Interaction logic for Client.xaml
/// </summary>
public partial class Client : Window
{
public Client()
{
InitializeComponent();
this.simpleOpenGlControl.InitializeContexts();
this.DataContext = new ViewModel();

}
}

Having this data context means that the view can now relate to our view-model.

Now, with the view-model in place, we can bind components of the view to properties in the view-model.  Declare the binding in the XAML using the property name in the view-model.  The example below depicts how I am able to bind the score in the demo to a label.

<Label Content="{Binding Path=Score, Mode=OneWay, UpdateSourceTrigger=PropertyChanged}" Grid.Row="1" Grid.Column="0" FontSize="48" />

As you can see, the property I have bound to is “Score”, defined using Path.  The Mode and UpdateSourceTrigger properties are XAML defined enums to explain how we want the binding to relate to the property in the view-model.  Specifying that the binding is only OneWay tells the application that all we want to do is read the property value in the view-model.  This is usually TwoWay in cases such as a textbox where the user can edit a field.

The UpdateSourceTrigger is necessary.  Without this, the label content will never update.  As the code in the view is travelling in the direction of the view-model, there is minimum code going the other way.  As a result, the label content will not know if the value has changed in the view-model, and thus will not update.  To make sure the label content does update, we have to notify that the property value has changed.  To do this we have to implement an interface.

INotifyPropertyChanged

You can’t really get much closer to what it says on the tin than this.  Implementing this interface gives you an event, PropertyChanged, which “notifies” the XAML that a property has changed and asks it to update accordingly.  Below is the code I have used to invoke a change on the property Score.

public event PropertyChangedEventHandler PropertyChanged;

public int Score
{
get
{
return this.score;
}
set
{
this.score = value;
OnPropertyChanged("Score");

}
}

private void OnPropertyChanged(string propertyName)
{
if (PropertyChanged != null)
{
PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
}
}

As far as I’m aware, invoking the event with the property name tells the XAML that the value for the property with that particular name has changed.  The XAML will then look for that property and update.  As a result, it is vital that the string passed on is exactly the same as how the property is defined, in both the view-model and the XAML binding.

Invalidate

A problem I did come across was trying to invoke a method provided by a XAML component.  Using the simpleOpenGlControl, I need to invalidate the control so the image will refresh.  Of course, invoking this in the view-model directly would violate the MVVM pattern.  As a result, a little bit of logic was needed to solve this problem.

/// <summary>
/// Interaction logic for Client.xaml
/// </summary>
public partial class Client : Window
{
public Client()
{
InitializeComponent();
this.simpleOpenGlControl.InitializeContexts();
this.DataContext = this.viewModel = new ViewModel();

viewModel.SimpleOpenGlControlInvalidate = () => this.simpleOpenGlControl.Invalidate();
}

private ViewModel viewModel;
}

public class ViewModel : INotifyPropertyChanged
{
public Action SimpleOpenGlControlInvalidate
{
get;
set;
}

public int Score …

}

Assigning the method for use inside the view-model means that I can invoke this without having to go to the view, and therefore still keep within the MVVM pattern.  Apparently, doing this can cause a memory leak, so when the view-model is finished with, you have to make sure that the action is nullified.
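One way of doing that (a sketch, assuming the Client and ViewModel classes above) is to clear the delegate in the window's OnClosed override:

protected override void OnClosed(EventArgs e)
{
    // Dropping the delegate means the view-model no longer holds a reference back to the view.
    this.viewModel.SimpleOpenGlControlInvalidate = null;
    base.OnClosed(e);
}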

That pretty much concludes this 3-parter.  I’m sure there’ll be a lot more WPF to come in the near future, especially as I’m now trying to create all my future demos with it.  It also means that I can use the XAML to create SilverLight applications too… or maybe not.


“{Binding OpenGl To WPF}” : Part 2

Posted by Robert Chow on 03/02/2010

This is the second part of my journey to creating an application using WPF, wrapping the Renderer demo I made earlier. You can find the first part here.

Subconscious Intentions

So, initially in this post, I was going to introduce the concept of the model-view-viewmodel, and code inline and code behind.  Or at least, how I understand them.  Yet due to recent activity, that will have to wait till the next post.

The great thing about writing these blog posts is that in order for me to seem like I know what I’m doing, I have to do the research.  I’ve always been taught that during an exam, always answer the question as if the examiner doesn’t know anything about the subject bar the basics.  This way, you should answer the question in a detailed and chronological order, thus making complete sense and gaining full marks.  In a way writing these posts are quite similar.  Sure, I may have broken the rule more than a few times, especially when I’m rushing myself, but I try to explain the topics I cover in enough detail for others than myself to understand it.  After all, although I am writing this primarily for myself to refer back to for when I get stuck, it’s nice to know that others are benefiting from this blog too.

It’s not that I haven’t done the research and so can’t bring to the table what I know (or seem to know) about the model-view-viewmodel, code inline and code behind.  It’s that during the research and much tinkering around, I thought I should first cover the main drive for using WPF in the first place, and that is to incorporate an OpenGl control.

From Scratch

So a couple of weeks ago, I did actually manage to create an OpenGl control and place it inside a WPF form.  The way I did this was a bit long-winded compared to how I used to do it, by creating it in a Win32 form.  Instead of using the SimpleOpenGlControl provided by the Tao framework, I went about creating the control entirely from scratch.

For this, I could have done all the research and become an expert at creating the control manually.  But that simply wasn’t my intention.  Luckily for me, the examples provided by Tao included the source code, and after a quick copy and paste, I more or less had the code ready and waiting to be used.

One thing I am more aware of now is that you need two things: a rendering context and a device context.  The rendering context is where the pixels are rendered; the device context is the form that the rendering context will sit inside.

To create a device context in the WPF application, I am using a Windows Forms Host.  This allows you to host a windows control, which we will use as our device context.  The XAML code for this is relatively simple.  Inside the default grid, I have just inserted the WindowsFormsHost as the only child element.  However, for the windows control, I have had to take from a namespace other than the defaults provided.  To declare a namespace, declare the alias (in this case I have used wf, shorthand for windows forms) and then follow it with the namespace path.  Inside the control, we are also going to use the x namespace.  Using this, we can assign a name for the control, and thus allowing us to retrieve the handle to use as the device context.

<Window x:Class="OpenGlControlInWPF.Client"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:wf="clr-namespace:System.Windows.Forms;assembly=System.Windows.Forms"
Title="Client" Height="480" Width="640" WindowStyle="SingleBorderWindow">
<Grid ClipToBounds="True">
<WindowsFormsHost ClipToBounds="True">
<wf:Control x:Name="control"/>
</WindowsFormsHost>
</Grid>
</Window>

With the form done, we can now dive into the code in the C# file attached to the XAML file.  It is here where we create the rendering context, and attach it to the device context.  I’m not really an expert on OpenGl at all when it comes to this kind of thing, so I’m not going to show the full code.  If you’re really stuck, the best place I can point you to is to look at NeHe’s first lesson, making an OpenGl window.  If you’re using the Tao framework and you installed all the examples, the source code should come with it.

The WPF interaction in the C# code is very minimal.  All we need to do with the XAML file is to retrieve the handle associated with the control we declared beforehand.  This is done simply by using the name we gave it in the XAML and letting Tao do the rest.  We hook this up to retrieve a rendering context, and then we show the form.

Gdi.PIXELFORMATDESCRIPTOR pfd = new Gdi.PIXELFORMATDESCRIPTOR();
// use this to create and describe the rendering context – see NeHe for details

IntPtr hDC = User.GetDC(control.Handle);
int pixelFormat = Gdi.ChoosePixelFormat(hDC, ref pfd);
Gdi.SetPixelFormat(hDC, pixelFormat, ref pfd);
IntPtr hRC = Wgl.wglCreateContext(hDC);
Wgl.wglMakeCurrent(hDC, hRC);

this.Show();
this.Focus();

All that is left to do is to go into the OpenGl loop of rendering the scene at each frame. Unfortunately, because I am used to the SimpleOpenGlControl provided by Tao, I’ve never needed to go into it whilst I’ve been on placement. All I had to do was call simpleOpenGlControl.Invalidate() and the frame would automatically refresh for me.  As a result of this, I decided to place the draw scene method in a while(true) loop so the rendering would be continuous. And true to my suspicions, this didn’t work.  The loop was “throttling” the application when running – I was unable to interact with it because all the run time was spent rendering the scene – there was no interrupt handling, so pressing a button or typing a key didn’t have any effect whatsoever.

I did try to look for answers to the throttling, and I stumbled across something else.  Another solution to hosting an OpenGl control in WPF.

The Better Solution

Going back to the first post of this multi-part blog, you might recall I am using a Canvas to host the OpenGl control.  I found this solution only a couple of days ago, due to a recent post on the Tao forums.  It uses this canvas in the C# code and assigns a WindowsFormsHost.  This in turn is assigned a SimpleOpenGlControl.  A SimpleOpenGlControl!  This means that I am able to use all the abstractions, methods and properties that the SimpleOpenGlControl has to offer without having to manually create my own.

First off, we have to assign the canvas a name in the XAML code so we can reference it in the C# counterpart.

<Grid Background="AliceBlue">

<Border Grid.Row="0" Grid.Column="0" Background="Black" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="6, 6, 3, 3">
<Canvas ClipToBounds="True" Margin="2" x:Name="canvas"/>
</Border>
</Grid>

The C# code for creating the SimpleOpenGlControl is short and sweet.  We create the WindowsFormsHost, attach a newly created SimpleOpenGlControl and attach the whole thing to the Canvas.  Here is the entire code for creating this.

namespace OpenGlWPFControl
{
using System.Windows;
using System.Windows.Forms.Integration;
using Tao.Platform.Windows;

/// <summary>
/// Interaction logic for Client.xaml
/// </summary>
public partial class Client : Window
{
public Client()
{
InitializeComponent();

this.windowsFormsHost = new WindowsFormsHost();
this.simpleOpenGlControl = new SimpleOpenGlControl();
this.simpleOpenGlControl.InitializeContexts();
this.windowsFormsHost.Child = this.simpleOpenGlControl;
this.canvas.Children.Add(windowsFormsHost);
}

private WindowsFormsHost windowsFormsHost;
private SimpleOpenGlControl simpleOpenGlControl;
}
}

Now that we have the SimpleOpenGlControl set up, we simply add the event for rendering and we’re nearly done. There is one problem, however: the windows forms host does not know what size to take. We add an event for when the canvas is resized to update the windows forms host size too.

public Client()
{

this.simpleOpenGlControl.Paint += new PaintEventHandler(simpleOpenGlControl_Paint);
this.canvas.SizeChanged += new SizeChangedEventHandler(canvas_SizeChanged);
}

void simpleOpenGlControl_Paint(object sender, PaintEventArgs e)
{
// do your normal opengl drawing here
this.simpleOpenGlControl.Invalidate();
}

void canvas_SizeChanged(object sender, SizeChangedEventArgs e)
{
this.windowsFormsHost.Width = this.canvas.ActualWidth;
this.windowsFormsHost.Height = this.canvas.ActualHeight;
}

A Revelation To An Even Better Solution

So I said I was going to talk about other topics before delving into my journey of placing an OpenGl control inside a WPF application, and that’s because of what I found myself accomplishing last night.  In the first blog post of this multi-part series, I found myself using a Canvas to hold a Windows Forms Host, and in turn, to parent a SimpleOpenGlControl.  Yet with further understanding of WPF, a revelation came.  The reason I was unable to insert a SimpleOpenGlControl directly into the application beforehand was that I wasn’t entirely aware of namespaces in XAML.  Soon after finding out more about them, I found I was able to access the SimpleOpenGlControl by referencing Tao, hence removing all the background work the C# had to do.

<Window
xmlns:tao="clr-namespace:Tao.Platform.Windows;assembly=Tao.Platform.Windows"
…>
<Grid Background="AliceBlue">

<Border Grid.Row="0" Grid.Column="0" Background="Black" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="6, 6, 3, 3">
<WindowsFormsHost Margin="2" ClipToBounds="True">
<tao:SimpleOpenGlControl x:Name="simpleOpenGlControl"/>
</WindowsFormsHost>
</Border>
</Grid>

So the only extra thing to add to this is the event for rendering, which I included before. I can omit the need to resize the canvas, partly because there is now no canvas, and also because the WindowsFormsHost ClipToBounds property is true.

In the next part of this series I will hopefully be touching upon what I intended on touching upon in the first place, the model-view-viewmodel pattern.


ConfusedEventHandler += (s, e) => ConfusedEventHandler(Me.Extend(s, e));

Posted by Robert Chow on 21/01/2010

So I’ve just been informed that there are a couple more additions to event handling in .NET.

Before raising an event, you MUST always check to see if the event handler is null.  If it is, then don’t raise it – invoking a null event gives you a NullReferenceException at runtime.

Secondly, you can subscribe to an event using a lambda expression too – it’s quite a neat trick.  As a result, it also allows you to subscribe more than one method to the event in a single statement.

Here’s the code for the main part of the demo, rewritten to accommodate the two points above.  I’ve also added in the functionality of one of the subscribers being taken away part way through the program.

public class Program
{
public static event IncreaseNumberEventHandler numberIncreasedEventHandler;

public static IncreaseNumberEventHandler n52Subscribe;

public static void Main(string[] args)
{
Number n27 = new Number(27);
Number n39 = new Number(39);
Number n52 = new Number(52);
numberIncreasedEventHandler += (s, e) => {n27.Increase(s, e); n39.Increase(s, e);};
numberIncreasedEventHandler += (n52Subscribe = (s, e) => n52.Increase(s, e));

Run();
}

public static void Run()
{
for (int i = 0; i < 10; ++i)
{
if (i == 5)
{
numberIncreasedEventHandler -= n52Subscribe;
}
if (numberIncreasedEventHandler != null)
{
numberIncreasedEventHandler(null, new IncreaseNumberEventArgs(i));
}
}
}
}

And the output:

n27: 27 increased by 0 to 27
n39: 39 increased by 0 to 39
n52: 52 increased by 0 to 52
n27: 27 increased by 1 to 28
n39: 39 increased by 1 to 40
n52: 52 increased by 1 to 53
n27: 28 increased by 2 to 30
n39: 40 increased by 2 to 42
n52: 53 increased by 2 to 55
n27: 30 increased by 3 to 33
n39: 42 increased by 3 to 45
n52: 55 increased by 3 to 58
n27: 33 increased by 4 to 37
n39: 45 increased by 4 to 49
n52: 58 increased by 4 to 62
n27: 37 increased by 5 to 42
n39: 49 increased by 5 to 54
n27: 42 increased by 6 to 48
n39: 54 increased by 6 to 60
n27: 48 increased by 7 to 55
n39: 60 increased by 7 to 67
n27: 55 increased by 8 to 63
n39: 67 increased by 8 to 75
n27: 63 increased by 9 to 72
n39: 75 increased by 9 to 84
