Planning the Spontaneous

It's more than just a blueprint.

Posts Tagged ‘Tao Framework’

DrawInside(Stencil);

Posted by Robert Chow on 26/05/2010

So a few things have changed since I last made a post.

My targets for the end of the year have been reduced.  Which I’m very glad about.  No, it’s not because I’m a slacker, but more importantly because there simply isn’t enough time.

What’s new?

As a result, the target is now focused squarely on getting an end product out to show for Grapher.  This end product will have rather minimal functionality, yet will still be complete enough to use as a Grapher component.  Of course, it will not actually be used as a component without further work – namely, those missing functionalities.

Unfortunately, there are still hidden complexities.  There are always hidden complexities in any piece of software you write, particularly in the parts you have no prior knowledge about.  My supervisor told me that you shouldn’t start coding until you have answered all of those question marks.  Yet another weakness in my planning.  But I think I’m starting to learn.

Once again, Grapher threw up a hidden complexity in the way that layers would be drawn.  The idea was to draw the plot in its own viewport, eliminating everything outside of the view, such as data points that do not lie in the visible range.  Of course, this is all fine with a rectangular plot – the natural OpenGl viewport is rectangular.  However, when it comes to nonrectangular plots, such as triplots or spiderplots, the story is rather different.

The first solution would be to use a mask to hide away the data outside of the plot.  Unfortunately, this mask would also hide away any items lying underneath that have already been rendered there.

The next solution builds on the first: because the mask obscures whatever sits beneath it, some layers would have to be fixed in order; for example, the data plot would always be at the bottom to make sure the mask does not obscure anything underneath it.

At this point in the placement, I never thought I’d be adding something major to Renderer; yet the final solution, and arguably the best, is to use a stencil buffer.

Stencil me this

A stencil buffer allows for a nonrectangular viewport.  Primitives acting as the viewport are drawn to the stencil buffer; using this, only items drawn inside the viewport outlined by the stencil buffer are rendered.

Simple right?

Of course, there are hidden complexities.  Again.

The best thing for me to do would be to spike the solution, instead of coding it straight into Renderer.

First point of action I took was to visit the OpenGl documentation pages, and they provided me with a few lines of code.

Gl.glEnable(Gl.GL_STENCIL_TEST);

// This will always write to the corresponding stencil buffer bit 1 for each pixel rendered to
Gl.glStencilFunc(Gl.GL_ALWAYS, 1, 1);

// This will only render pixels if the corresponding stencil buffer bit is 0
Gl.glStencilFunc(Gl.GL_EQUAL, 0, 1);

Using these should result in items declared after the final call only being visible if they lie outside of the viewport declared after the second call.  I continue to claim that I am by no means an expert on OpenGl, but that’s the rough idea I get from these calls.

I tried this, but to no avail – nothing happened.

My worry was that SimpleOpenGlControl, the rendering control provided by the Tao Framework, did not support a stencil buffer; an earlier look had already told me that it did not support an alpha channel.  Luckily, the number of stencil buffer bits can be changed without having to change the Tao source code.  This should be done before initialising the control’s contexts.

SimpleOpenGlControl.StencilBits = 1;
SimpleOpenGlControl.InitializeContexts();

Once again, nothing happened.

I had a look around the Internet to try and find some source code that worked; I noticed that one particular call was present in one of the examples, yet missing from mine.

// This specifies what action to take when either the stencil test or depth test succeeds or fails.
Gl.glStencilOp(Gl.GL_KEEP, Gl.GL_KEEP, Gl.GL_REPLACE);

Placing this at the start of the code appears to make the stencil buffer work correctly.  I believe this is because when the stencil test succeeds, it will replace the stencil value for that pixel instead of keeping the old one.  Again, I am no expert; there is a lot more information on the documentation page (how useful it is, I cannot tell).

Drawing a conclusion

Below is the code and a screenshot of the application I used to spike using the stencil buffer.

public partial class Client : Form
{
    public Client()
    {
        InitializeComponent();

        SimpleOpenGlControl.StencilBits = 1;
        SimpleOpenGlControl.InitializeContexts();

        Gl.glEnable(Gl.GL_STENCIL_TEST);
        Gl.glStencilOp(Gl.GL_KEEP, Gl.GL_KEEP, Gl.GL_REPLACE);
        // Enable hinting
    }

    private void simpleOpenGlControl_Paint(object sender, PaintEventArgs e)
    {
        Gl.glClearColor(1, 1, 1, 1);
        Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_STENCIL_BUFFER_BIT);

        Gl.glMatrixMode(Gl.GL_PROJECTION);
        Gl.glLoadIdentity();
        Gl.glOrtho(0, SimpleOpenGlControl.Width, 0, SimpleOpenGlControl.Height, 10, -10);

        Gl.glMatrixMode(Gl.GL_MODELVIEW);
        Gl.glLoadIdentity();

        // Draw to the stencil buffer
        Gl.glStencilFunc(Gl.GL_ALWAYS, 1, 1);
        // Turn off colour
        Gl.glColorMask(Gl.GL_FALSE, Gl.GL_FALSE, Gl.GL_FALSE, Gl.GL_FALSE);
        // Draw our new viewport
        Gl.glBegin(Gl.GL_TRIANGLES);
        Gl.glVertex2d(0, 0);
        Gl.glVertex2d(SimpleOpenGlControl.Width, 0);
        Gl.glVertex2d(SimpleOpenGlControl.Width / 2.0, SimpleOpenGlControl.Height);
        Gl.glEnd();

        // This has been changed so it will draw to the inside of the viewport
        // To draw to the outside, use Gl.glStencilFunc(Gl.GL_EQUAL, 0, 1);
        Gl.glStencilFunc(Gl.GL_EQUAL, 1, 1);
        // Turn on colour
        Gl.glColorMask(Gl.GL_TRUE, Gl.GL_TRUE, Gl.GL_TRUE, Gl.GL_TRUE);
        // Draw triangles here

        // Disable stencil test to be able to draw anywhere irrespective of the stencil buffer
        Gl.glPushAttrib(Gl.GL_ENABLE_BIT);
        Gl.glDisable(Gl.GL_STENCIL_TEST);
        // Draw border here

        Gl.glPopAttrib();

        SimpleOpenGlControl.Invalidate();
    }
}

Stencil Test.  This is created by first declaring the triangular viewport in the stencil buffer; only primitives inside the stencil will be rendered.  The border was added afterwards, after disabling the stencil test.
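Looking ahead – and echoing the title of this post – here is a purely hypothetical sketch of how the spike might eventually fold into Renderer.  The DrawInside name and its Action parameters are my own invention, not Renderer’s actual API.

// Hypothetical wrapper (not Renderer's real API): draw a nonrectangular "viewport"
// into the stencil buffer, then render the scene only inside that shape.
public void DrawInside(Action drawStencilShape, Action drawScene)
{
    Gl.glEnable(Gl.GL_STENCIL_TEST);
    Gl.glStencilOp(Gl.GL_KEEP, Gl.GL_KEEP, Gl.GL_REPLACE);

    // Write 1s into the stencil buffer wherever the shape is drawn, without touching the colour buffer
    Gl.glStencilFunc(Gl.GL_ALWAYS, 1, 1);
    Gl.glColorMask(Gl.GL_FALSE, Gl.GL_FALSE, Gl.GL_FALSE, Gl.GL_FALSE);
    drawStencilShape();

    // Only pass pixels whose stencil bit is 1, i.e. those inside the shape
    Gl.glStencilFunc(Gl.GL_EQUAL, 1, 1);
    Gl.glColorMask(Gl.GL_TRUE, Gl.GL_TRUE, Gl.GL_TRUE, Gl.GL_TRUE);
    drawScene();

    Gl.glDisable(Gl.GL_STENCIL_TEST);
}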



MultiOpenGlControl Demo

Posted by Robert Chow on 16/04/2010

I’ve managed to grab some video capture software and also produce a small demo of using multiple SimpleOpenGlControls.  Each screen has its own camera, focused on the same scene.  Using lighting and picking, clicking a face of the cube will change the scene, and the change is reflected in all of the cameras.

I did have a little trouble with the picking at first – when I picked, I forgot to change the current camera, so it would register the mouse click on one screen whilst using the camera of another.  With that fixed, it was nice to be able to play around with Renderer again.

True, the video’s not supposed to be a hit, but at least it’s showing what I want it to.


MultiOpenGlControl

Posted by Robert Chow on 14/04/2010

One thing I’ve only just come to tackle is the prospect of having more than one rendering context visible at any time.

Make Current

Using the Tao SimpleOpenGlControl, it’s a relatively simple feat.

MultiOpenGlControl.  Displaying two SimpleOpenGlControls in immediate mode is relatively easy.


The SimpleOpenGlControl class allows you to make the control the current rendering context. Any OpenGl calls made after this will be applied to the most recently made-current rendering context.

public void Init()
{
    SimpleOpenGlControlA.InitializeContexts();
    SimpleOpenGlControlB.InitializeContexts();
}

public void SimpleOpenGlControlAPaint(object sender, PaintEventArgs e)
{
    SimpleOpenGlControlA.MakeCurrent(); // all code following will now apply to control A

    Gl.glClearColor(1, 1, 0, 0);
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();
    Gl.glOrtho(-1, 1, -1, 1, 1, -1);

    Gl.glMatrixMode(Gl.GL_MODELVIEW);
    Gl.glLoadIdentity();

    Gl.glColor3d(1, 0, 0);
    Gl.glBegin(Gl.GL_POLYGON);
    Gl.glVertex2d(-0.5, -0.5);
    Gl.glVertex2d(0.5, -0.5);
    Gl.glVertex2d(0, 0.5);
    Gl.glEnd();

    SimpleOpenGlControlB.Invalidate(); // I invalidate B so it gets refreshed – if I only invalidate A, B will never get drawn
}

public void SimpleOpenGlControlBPaint(object sender, PaintEventArgs e)
{
    SimpleOpenGlControlB.MakeCurrent(); // all code following will now apply to control B

    Gl.glClearColor(0, 1, 1, 0);
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();
    Gl.glOrtho(-1, 1, 1, -1, 1, -1); // NOTE: I have changed the projection to show the images differently

    Gl.glMatrixMode(Gl.GL_MODELVIEW);
    Gl.glLoadIdentity();

    Gl.glColor3d(1, 0, 0);
    Gl.glBegin(Gl.GL_POLYGON);
    Gl.glVertex2d(-0.5, -0.5);
    Gl.glVertex2d(0.5, -0.5);
    Gl.glVertex2d(0, 0.5);
    Gl.glEnd();

    SimpleOpenGlControlA.Invalidate(); // I invalidate A so it gets refreshed – if I only invalidate B, A will never get drawn
}

Multiple Vertex Buffer Objects

I did run into a little trouble though.  When I was rendering in immediate mode, this worked fine.  However, when I came to using vertex buffer objects and rendering in retained mode, the results were different.

MultiVertexBufferObjects.  Only one of the controls renders the image, despite the exact same calls being made in each of the controls’ paint methods.  You can tell the paint method of control A is called because the clear color is registered.


Only one of the rendering contexts shows an image; the other does not, despite the obvious call to clear the color buffer.

int[] buffers = new int[3];
float[] vertices = new float[] { -0.5f, -0.5f, 0.5f, -0.5f, 0, 0.5f };
float[] colours = new float[] { 1, 0, 0, 1, 0, 0, 1, 0, 0 };
uint[] indices = new uint[] { 0, 1, 2 };

public void Init()
{
    SimpleOpenGlControlA.InitializeContexts();
    SimpleOpenGlControlB.InitializeContexts();

    Gl.glGenBuffers(3, buffers);
    Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[0]);
    Gl.glBufferData(Gl.GL_ARRAY_BUFFER, (IntPtr)(vertices.Length * sizeof(float)), vertices, Gl.GL_STATIC_DRAW);
    Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[1]);
    Gl.glBufferData(Gl.GL_ARRAY_BUFFER, (IntPtr)(colours.Length * sizeof(float)), colours, Gl.GL_STATIC_DRAW);
    Gl.glBindBuffer(Gl.GL_ELEMENT_ARRAY_BUFFER, buffers[2]);
    Gl.glBufferData(Gl.GL_ELEMENT_ARRAY_BUFFER, (IntPtr)(indices.Length * sizeof(uint)), indices, Gl.GL_STATIC_DRAW);
}

public void SimpleOpenGlControlAPaint(object sender, PaintEventArgs e)
{
    SimpleOpenGlControlA.MakeCurrent(); // all code following will now apply to control A

    Gl.glClearColor(1, 1, 0, 0);
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();
    Gl.glOrtho(-1, 1, -1, 1, 1, -1);

    Gl.glMatrixMode(Gl.GL_MODELVIEW);
    Gl.glLoadIdentity();

    Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[0]);
    Gl.glVertexPointer(2, Gl.GL_FLOAT, 0, IntPtr.Zero);
    Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, buffers[1]);
    Gl.glColorPointer(3, Gl.GL_FLOAT, 0, IntPtr.Zero);

    Gl.glEnableClientState(Gl.GL_VERTEX_ARRAY);
    Gl.glEnableClientState(Gl.GL_COLOR_ARRAY);

    Gl.glBindBuffer(Gl.GL_ELEMENT_ARRAY_BUFFER, buffers[2]);
    Gl.glDrawElements(Gl.GL_POLYGON, indices.Length, Gl.GL_UNSIGNED_INT, IntPtr.Zero);

    Gl.glDisableClientState(Gl.GL_VERTEX_ARRAY);
    Gl.glDisableClientState(Gl.GL_COLOR_ARRAY);

    SimpleOpenGlControlB.Invalidate(); // I invalidate B so it gets refreshed – if I only invalidate A, B will never get drawn
}

public void SimpleOpenGlControlBPaint(object sender, PaintEventArgs e)
{
    SimpleOpenGlControlB.MakeCurrent(); // all code following will now apply to control B

    Gl.glClearColor(0, 1, 1, 0);
    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();
    Gl.glOrtho(-1, 1, -1, 1, 1, -1); // NOTE: I have changed the projection to show the images differently

    // same code as above

    SimpleOpenGlControlA.Invalidate(); // I invalidate A so it gets refreshed – if I only invalidate B, A will never get drawn
}

Multiple Sets of Vertex Buffer Objects

This is because when you generate buffers, they only apply to the current rendering context – in this case, SimpleOpenGlControlB (the last one to have its contexts initialised).  If you are using buffers and want to render the same data in a different rendering context, you have to recreate the data buffers under that other context.  It seems like a bit of a waste really – having to create the same thing twice just to view it in a different place.

public void Init()
{
    SimpleOpenGlControlA.InitializeContexts();
    SimpleOpenGlControlB.InitializeContexts();

    SimpleOpenGlControlA.MakeCurrent();
    // create bufferset

    SimpleOpenGlControlB.MakeCurrent();
    // repeat the same code to recreate the bufferset under a different rendering context
}

// Draw methods

Shared Vertex Buffer Objects

Fortunately, there is a way around this. Wgl offers a method, wglShareLists, which allows different rendering contexts to share not only the same display lists, but also VBOs, FBOs and textures. To do this, you need to get hold of the rendering context handles – unfortunately, that takes a small hack.

public void Init()
{
    SimpleOpenGlControlA.InitializeContexts();
    SimpleOpenGlControlB.InitializeContexts();

    SimpleOpenGlControlA.MakeCurrent();
    var aRC = Wgl.wglGetCurrentContext();
    SimpleOpenGlControlB.MakeCurrent();
    var bRC = Wgl.wglGetCurrentContext();

    Wgl.wglShareLists(aRC, bRC);

    // start creating vbos here – these will now be shared between the two contexts.
}

// Draw methods

Multiple Controls For Renderer

This now provides two solutions for Renderer – I can either have separate lists for each control, or have them all share the same ones.  There are advantages and disadvantages to both.

In terms of management, it would be a lot easier to have them share the same lists – there is an overhead in tracking which lists belong to which control.  Separate lists would also cause a problem when a user tries to draw a scene for a particular control while the vertex buffer objects in the scene are assigned to another.
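To make that management overhead concrete, here is a rough sketch of what per-control tracking could look like – BufferSet and CreateBufferSet are hypothetical names of mine, not anything that exists in Renderer.

// Hypothetical sketch of per-control list management (not Renderer's real code).
// Every control needs its own buffer set, and every draw has to look up the right one.
private readonly Dictionary<SimpleOpenGlControl, BufferSet> bufferSets =
    new Dictionary<SimpleOpenGlControl, BufferSet>();

private void Draw(SimpleOpenGlControl control)
{
    control.MakeCurrent();

    BufferSet buffers;
    if (!bufferSets.TryGetValue(control, out buffers))
    {
        buffers = CreateBufferSet();    // recreate the same data under this context
        bufferSets.Add(control, buffers);
    }

    // ... bind the buffers in this set and draw as before ...
}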

Shared lists would also be advantageous for heavy OpenGl operations, such as texture binding – with separate lists, a new texture would have to be created and bound each time a different control is refreshed.  Sharing the texture list removes this problem.

In the short run, using separate lists has its disadvantages; the memory use is rather inefficient because a new set of lists has to be created each time a control is created.  In the long term, however, the memory story favours separate lists: once a control is destroyed, so too are its lists.  With shared lists, the data will continue to accumulate until all the controls have been destroyed – and that is entirely up to the user, which could take some time.

As a result, for the time being, I am going to go for the option of sharing lists – mainly because it has its advantages in terms of not having to deal with the management side, and it also minimises the frequency of heavy OpenGl operations.  Not only that, but it would also take some time to implement the management.  If I do have time nearer the end of my placement, I may come back and revisit it.

MultiOpenGlControl-3D.  Here is Renderer using 4 different SimpleOpenGlControls simultaneously.  I’ve tried to replicate a (very simple) 3D editing suite, whereby there are 3 orthogonal views, Front, Top and Side, and then the projection view.  It uses the same model throughout, but each control has differing camera positions.


“{Binding OpenGl To WPF}” : Part 3

Posted by Robert Chow on 16/02/2010

So this is (hopefully) the last installment of my journey to creating my first WPF application using an OpenGl control.  You can find the first part here, and the second part here.  In this post, I am going to talk about where the magic happens behind a WPF form.

Model-View-View Model Pattern

So we have the GoF design patterns; these are best coding practices and are just guidelines.  Similarly, there are also design patterns for implementing GUIs: Model-View-Presenter, Model-View-Controller and Model-View-View Model.  I don’t really know much about the former two, and I don’t claim to know too much about the MVVM pattern either, but it’s the one I am trying to use for developing my WPF applications.

In the pattern, we have the model, which is where all the code defining the application is; the view – this is the GUI; and the view-model, which acts as an abstraction between the view and the model.  The view is mostly written in XAML, and the view-model and model I have written in C#.  Technically, the view should never need to know about the model, and vice versa.  It is the view-model which interacts between the two layers.  I find it quite easy to look at it like this.

View Model Gear.  How the MVVM pattern may be portrayed in an image.  To be honest, I think the pattern should really be called the View-View Model-Model pattern, but of course, that’d just be stupid.  This image is from http://blogs.msdn.com/blogfiles/dphill/WindowsLiveWriter/CollectionsAndViewModels_EE56/.

Code Inline and Code Behind

You’ll notice that for the majority of the time, I am using XAML to do most of the form.  The rest of the code that doesn’t directly interact with the view through the XAML is either code inline or code behind.  The code behind is essentially what we use to represent the view-model and the model.  The code inline is the extra code that is neither XAML nor code behind.  It helps to define the view and is very rarely used, especially if the XAML can describe everything related to the view.  Incidentally, I used code inline at the start of the last post, where I defined a windows forms host using C# instead of doing it in the XAML.

In order to keep the intended separation of concerns in the MVVM pattern, the code behind has to interact with the view as little as possible, if not at all.  The flow of code should always travel in the direction of view to view-model, and never the other way round.  A way to do this is to introduce bindings.

Bindings

A binding in WPF is declared in the XAML and binds a XAML component to a property in the view-model.  In doing this, the component can take on the value specified in the view-model.  However, before we can create a binding, we have to form the connection between the view and the view-model.  To do this, we assign the view-model to the view data-context.  This is done in the logic in the code behind of the view.

/// <summary>
/// Interaction logic for Client.xaml
/// </summary>
public partial class Client : Window
{
    public Client()
    {
        InitializeComponent();
        this.simpleOpenGlControl.InitializeContexts();
        this.DataContext = new ViewModel();
    }
}

Having this data context means that the view can now relate to our view-model.

Now with the view-model in place, we can bind components of the view to properties in the view-model.  The binding is declared in the XAML using the property name in the view-model.  The example shows how I retrieve the score in the demo into a label.

<Label Content="{Binding Path=Score, Mode=OneWay, UpdateSourceTrigger=PropertyChanged}" Grid.Row="1" Grid.Column="0" FontSize="48" />

As you can see, the property I have bound to is “Score”, defined using Path.  The Mode and UpdateSourceTrigger attributes are enum-valued binding properties that describe how we want the binding to relate to the property in the view-model.  Specifying that the binding is only OneWay tells the application that all we want to do is read the property value in the view-model.  This is usually TwoWay in cases such as a textbox where the user can edit a field.

The UpdateSourceTrigger is necessary.  Without this, the label content will never update.  As the code in the view travels in the direction of the view-model, there is minimal code going the other way.  As a result, the label content will not know when the value has changed in the view-model, and thus will not update.  To make sure the label content does update, we have to notify the view that the property value has changed.  To do this we implement an interface.

INotifyPropertyChanged

You can’t really get much closer to what it says on the tin than this.  Implementing this interface gives you an event, PropertyChanged, that “notifies” the XAML that a property has changed and asks it to update accordingly.  Below is the code I have used to invoke a change on the property Score.

public event PropertyChangedEventHandler PropertyChanged;

public int Score
{
    get
    {
        return this.score;
    }
    set
    {
        this.score = value;
        OnPropertyChanged("Score");
    }
}

private void OnPropertyChanged(string propertyName)
{
    if (PropertyChanged != null)
    {
        PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}

As far as I’m aware, invoking the event with the property name tells the XAML that the value for the property with that particular name has changed.  The XAML will then look for that property and update.  As a result, it is vital that the string passed on is exactly the same as how the property is defined, in both the view-model and the XAML binding.

Invalidate

A problem I did come across was trying to invoke a method provided by a XAML component.  When using the simpleOpenGlControl, I need to invalidate the control so the image will refresh.  Of course, invoking this in the view-model directly would violate the MVVM pattern.  As a result, a little bit of logic was needed to solve this problem.

/// <summary>
/// Interaction logic for Client.xaml
/// </summary>
public partial class Client : Window
{
    public Client()
    {
        InitializeComponent();
        this.simpleOpenGlControl.InitializeContexts();
        this.DataContext = this.viewModel = new ViewModel();

        viewModel.SimpleOpenGlControlInvalidate = () => this.simpleOpenGlControl.Invalidate();
    }

    private ViewModel viewModel;
}

public class ViewModel : INotifyPropertyChanged
{
    public Action SimpleOpenGlControlInvalidate
    {
        get;
        set;
    }

    public int Score …
}

Assigning the method for use inside the view-model means that I can invoke it without having to go to the view, therefore still keeping with the MVVM pattern.  Apparently, doing this can cause a memory leak, so when the view-model is finished with, you have to make sure that the action is nullified.
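As a minimal sketch of that clean-up – the Closed handler below is my own addition, not part of the original demo – the delegate can be dropped when the window goes away:

public Client()
{
    InitializeComponent();
    this.simpleOpenGlControl.InitializeContexts();
    this.DataContext = this.viewModel = new ViewModel();

    viewModel.SimpleOpenGlControlInvalidate = () => this.simpleOpenGlControl.Invalidate();

    // Drop the reference when the window closes, so the delegate no longer
    // keeps the control (and hence the view) alive.
    this.Closed += (sender, e) => this.viewModel.SimpleOpenGlControlInvalidate = null;
}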

That pretty much concludes this 3-parter.  I’m sure there’ll be a lot more WPF to come in the near future, especially as I’m now trying to create all my future demos with it.  It also means that I can use the XAML to create Silverlight applications too… or maybe not.


“{Binding OpenGl To WPF}” : Part 2

Posted by Robert Chow on 03/02/2010

This is the second part of my journey to creating an application using WPF, wrapping the Renderer demo I made earlier. You can find the first part here.

Subconscious Intentions

So, initially in this post, I was going to introduce the concept of the model-view-viewmodel, and code inline and code behind.  Or at least, how I understand them.  Yet due to recent activity, that will have to wait till the next post.

The great thing about writing these blog posts is that in order for me to seem like I know what I’m doing, I have to do the research.  I’ve always been taught that during an exam, you should answer the question as if the examiner doesn’t know anything about the subject bar the basics.  This way, you answer the question in a detailed and logical order, making complete sense and gaining full marks.  In a way, writing these posts is quite similar.  Sure, I may have broken the rule more than a few times, especially when I’m rushing myself, but I try to explain the topics I cover in enough detail for people other than myself to understand them.  After all, although I am writing this primarily for myself to refer back to when I get stuck, it’s nice to know that others are benefiting from this blog too.

It’s not that I haven’t done the research, or that I can’t bring to the table what I know (or seem to know) about the model-view-view model, code inline and code behind.  It’s that, during the research and much tinkering around, I thought I should first cover the main drive for using WPF in the first place: incorporating an OpenGl control.

From Scratch

So a couple of weeks ago, I did actually manage to create an OpenGl control and place it inside a WPF form.  The way I did this was a bit long-winded compared to how I used to do it in a Win32 form.  Instead of using the SimpleOpenGlControl provided by the Tao Framework, I went about creating the control entirely from scratch.

For this, I could have done all the research and become an expert at creating the control manually.  But that simply wasn’t my intention.  Luckily for me, the examples provided by Tao included the source code, and after a quick copy and paste, I more or less had the code ready and waiting to be used.

One thing I am more aware of now is that you need two things: a rendering context and a device context.  The rendering context is where the pixels are rendered; the device context is the form in which the rendering context will sit.  Of course, the only way to interact with these is through their handles.

To create a device context in the WPF application, I am using a Windows Forms Host.  This allows you to host a windows control, which we will use as our device context.  The XAML code for this is relatively simple.  Inside the default grid, I have just inserted the WindowsFormsHost as the only child element.  However, for the windows control, I have had to take from a namespace other than the defaults provided.  To declare a namespace, declare the alias (in this case I have used wf, shorthand for windows forms) and then follow it with the namespace path.  Inside the control, we are also going to use the x namespace.  Using this, we can assign a name to the control, allowing us to retrieve its handle to use as the device context.

<Window x:Class="OpenGlControlInWPF.Client"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:wf="clr-namespace:System.Windows.Forms;assembly=System.Windows.Forms"
        Title="Client" Height="480" Width="640" WindowStyle="SingleBorderWindow">
    <Grid ClipToBounds="True">
        <WindowsFormsHost ClipToBounds="True">
            <wf:Control x:Name="control"/>
        </WindowsFormsHost>
    </Grid>
</Window>

With the form done, we can now dive into the code in the C# file attached to the XAML file.  It is here where we create the rendering context, and attach it to the device context.  I’m not really an expert on OpenGl at all when it comes to this kind of thing, so I’m not going to show the full code.  If you’re really stuck, the best place I can point you to is to look at NeHe’s first lesson, making an OpenGl window.  If you’re using the Tao framework and you installed all the examples, the source code should come with it.

The WPF interaction in the C# code is very minimal.  All we need to do with the XAML file is retrieve the handle associated with the control we declared beforehand.  This is done simply by using the name we gave it in the XAML and letting Tao do the rest.  We hook this up to retrieve a rendering context, and then we show the form.

Gdi.PIXELFORMATDESCRIPTOR pfd = new Gdi.PIXELFORMATDESCRIPTOR();
// use this to create and describe the rendering context – see NeHe for details

IntPtr hDC = User.GetDC(control.Handle);
int pixelFormat = Gdi.ChoosePixelFormat(hDC, ref pfd);
Gdi.SetPixelFormat(hDC, pixelFormat, ref pfd);
IntPtr hRC = Wgl.wglCreateContext(hDC);
Wgl.wglMakeCurrent(hDC, hRC);

this.Show();
this.Focus();

All that is left to do is to go into the OpenGl loop of rendering the scene each frame. Unfortunately, because I am used to the SimpleOpenGlControl provided by Tao, I’ve never needed to write one whilst I’ve been on placement; all I had to do was call simpleOpenGlControl.Invalidate() and the frame would automatically refresh for me.  So I placed the draw-scene method in a while(true) loop so the rendering would be continuous – and, true to my suspicions, it didn’t work.  The loop was “throttling” the application when running: I was unable to interact with it because all the runtime was concentrated on rendering the scene – there was no message handling, so pressing a button or typing a key had no effect whatsoever.
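With hindsight, one way to keep frames coming without blocking the UI thread would be to drive the drawing from a timer rather than a while(true) loop.  This is only a sketch, and DrawScene is a stand-in for whatever method renders a frame:

// Sketch only: a DispatcherTimer fires on the UI thread between input events,
// so the window stays responsive. DrawScene() is a hypothetical per-frame method.
var renderTimer = new System.Windows.Threading.DispatcherTimer();
renderTimer.Interval = TimeSpan.FromMilliseconds(16);   // roughly 60 frames per second
renderTimer.Tick += (sender, e) => DrawScene();
renderTimer.Start();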

I did try to look for answers to the throttling, and I stumbled across something else.  Another solution to hosting an OpenGl control in WPF.

The Better Solution

Going back to the first post of this multi-part blog, you might recall I am using a Canvas to host the OpenGl control.  I found this solution only a couple of days ago, due to a recent post on the Tao forums.  It uses this canvas in the C# code and assigns a WindowsFormsHost.  This in turn is assigned a SimpleOpenGlControl.  A SimpleOpenGlControl!  This means that I am able to use all the abstractions, methods and properties that the SimpleOpenGlControl has to offer without having to manually create my own.

First off, we have to assign the canvas a name in the XAML code so we can reference it in the C# counterpart.

<Grid Background="AliceBlue">

    <Border Grid.Row="0" Grid.Column="0" Background="Black" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="6, 6, 3, 3">
        <Canvas x:Name="canvas" ClipToBounds="True" Margin="2"/>
    </Border>
</Grid>

The C# code for creating the SimpleOpenGlControl is short and sweet.  We create the WindowsFormsHost, attach a newly created SimpleOpenGlControl and attach the whole thing to the Canvas.  Here is the entire code for creating this.

namespace OpenGlWPFControl
{
    using System.Windows;
    using System.Windows.Forms.Integration;
    using Tao.Platform.Windows;

    /// <summary>
    /// Interaction logic for Client.xaml
    /// </summary>
    public partial class Client : Window
    {
        public Client()
        {
            InitializeComponent();

            this.windowsFormsHost = new WindowsFormsHost();
            this.simpleOpenGlControl = new SimpleOpenGlControl();
            this.simpleOpenGlControl.InitializeContexts();
            this.windowsFormsHost.Child = this.simpleOpenGlControl;
            this.canvas.Children.Add(windowsFormsHost);
        }

        private WindowsFormsHost windowsFormsHost;
        private SimpleOpenGlControl simpleOpenGlControl;
    }
}

Now that we have the SimpleOpenGlControl set up, we simply add the event for rendering and we’re nearly done. There is one problem however: the windows forms host does not know what size to take. We add an event so that when the canvas is resized, the windows forms host size is updated too.

public Client()
{
    // ... constructor code as above ...

    this.simpleOpenGlControl.Paint += new PaintEventHandler(simpleOpenGlControl_Paint);
    this.canvas.SizeChanged += new SizeChangedEventHandler(canvas_SizeChanged);
}

void simpleOpenGlControl_Paint(object sender, PaintEventArgs e)
{
    // do your normal opengl drawing here
    this.simpleOpenGlControl.Invalidate();
}

void canvas_SizeChanged(object sender, SizeChangedEventArgs e)
{
    this.windowsFormsHost.Width = this.canvas.ActualWidth;
    this.windowsFormsHost.Height = this.canvas.ActualHeight;
}

A Revelation To An Even Better Solution

So I said I was going to talk about other topics before delving into my journey of placing an OpenGl control inside a WPF application, and that’s because of what I found myself accomplishing last night.  In the first blog post of this multi-part series, I found myself using a Canvas to hold a Windows Forms Host, and in turn, to parent a SimpleOpenGlControl.  Yet with further understanding of WPF, a revelation came.  The reason I was unable to insert a SimpleOpenGlControl directly into the application beforehand was that I wasn’t entirely aware of namespaces in XAML.  Soon after finding out more about them, I found I was able to access the SimpleOpenGlControl by referencing Tao in the XAML, removing all the background work the C# had to do.

<Window
    xmlns:tao="clr-namespace:Tao.Platform.Windows;assembly=Tao.Platform.Windows"
    …>
    <Grid Background="AliceBlue">

        <Border Grid.Row="0" Grid.Column="0" Background="Black" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="6, 6, 3, 3">
            <WindowsFormsHost Margin="2" ClipToBounds="True">
                <tao:SimpleOpenGlControl x:Name="simpleOpenGlControl"/>
            </WindowsFormsHost>
        </Border>
    </Grid>

So the only extra thing to add to this is the event for rendering, which I included before. I can omit the canvas-resizing code, partly because there is now no canvas, and also because the WindowsFormsHost ClipToBounds property is true.

In the next part of this series I will hopefully be touching upon what I intended on touching upon in the first place, the model-view-viewmodel pattern.


“{Binding OpenGl To WPF}” : Part 1

Posted by Robert Chow on 02/02/2010

I’ll admit, it’s been a while.  But since I’ve been back, I’ve been working on a fair few things simultaneously, and they’ve taken a lot longer than planned.  But alas, here is one of them, in a multi-part blog.

Windows Presentation Foundation

Remember how I mentioned WPF a few times, but never really got into it?  Here’s a statement from the Microsoft website:

"Windows Presentation Foundation was created to allow developers to easily build the types of rich applications that were difficult or impossible to build in Windows Forms, the type that required a range of other technologies which were often hard to integrate."

Ha.

It’s not easy to develop on at all, especially for a developer just starting their first WPF project.  Not compared to creating a Win32 Form with a visual editor to say the least.

But it does allow you to build very rich applications that appeal to many by their look and feel alone.  And they look a lot better than Win32 forms too.

Remember that demo for Renderer?  I originally said I was going to try and incorporate fonts into it, but that’s still one bridge I have yet to cross.  Instead, I decided to learn a bit of WPF instead.  What do you think?

Demo In WPF.  Here I have incorporated the demo into a WPF form, and included panels on the right and bottom.  The right panel depicts the current state of the game, and allows the user to change the camera position and change the texture used when a cube face is picked.


Demo in Win32 Forms Mock.  I have included a mock of the original in a Win32 Form.  I think it’s fair to say that you would all rather use the WPF version.


XAML

The language of WPF is XAML, the Extensible Application Markup Language, and it is very similar to XML.  It uses the open/close tag notation – one which I’m not particularly fond of, but it does mean that everything is explicit, and being explicit is good. Like all other languages, it’s very useful to know your way around and what is available to use, and XAML is no exception to this rule. As a result, there are many ways, some better, some far worse, of creating the form I have made.  As I am no expert in this at all, I am going to leave it as it is and take a look at the code I have generated for creating the form base.

To create this, I used Visual C# 2008 Express Edition.  This has proved rather handy as it updates the designer view as the code is changed.

Starting a WPF project gives you a very empty template.  With this comes 2 pieces of code, one in XAML and the other in C#.  For the time being, we are just going to concentrate on the XAML file.  This is where we create our layout.  The initial piece of XAML code is very simplistic, and doesn’t really mean too much.

<Window x:Class="OpenGlWPFControl.Client"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Client" Height="665" Width="749">
    <Grid>
    </Grid>
</Window>

The first few lines relate to the namespaces being used within the XAML file. The tags marked Grid relate to the box inside the window. This is a new concept different to the panels in a Win32 Form. Instead of having top, bottom, left and right panels, the grid can be split into a number of columns and rows using definitions.

Here I have split the grid into 4 components using 2 rows and 2 columns.  The code is relatively easy to deal with.  It also allows you to specify minimum, maximum and default dimensions.

<Grid Background="AliceBlue">
    <Grid.RowDefinitions>
        <RowDefinition Height="*" MinHeight="200"/>
        <RowDefinition Height="100"/>
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*" MinWidth="200"/>
        <ColumnDefinition Width="200"/>
    </Grid.ColumnDefinitions>
</Grid>

Here, I have specified the default heights for each row and the default widths of each column.  The “*” is simply a marker to take what space is left.  As an extra,  I have also set the grid background colour.

Now that we have split the initial grid, we can now start to populate it.  This can be with other layout panels, like the grid, or a stackpanel or dockpanel and so forth to add extra layout details.  It can also be filled with more meaningful objects such as a label or a text box.

Starting off, I want to place the OpenGl control in the top-left panel.  For the time being, we are going to mock this using a Canvas.  This item will be used later in the C# code to attach the OpenGl control, but for now we are only handling XAML.  In addition, I have also decorated the canvas with a border.  Using a combination of the canvas and border properties, I have managed to achieve the rounded edges, making it more aesthetically appealing.

<Grid Background="AliceBlue">

    <Border Grid.Row="0" Grid.Column="0" Background="Black" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="6, 6, 3, 3">
        <Canvas ClipToBounds="True" Margin="2"/>
    </Border>
</Grid>

This piece of code stays within the Grid tags, as it is a grid child.  To say where in the grid it sits, I have explicitly stated the row and column on the Border tag.

The bottom panel is done in a similar fashion, only this time the border decorates a textblock.  In order to scroll a textblock, this itself needs to be decorated using a scroll viewer.

<Grid Background="AliceBlue">

    <Border Grid.Row="1" Grid.Column="0" Grid.ColumnSpan="2" Background="White" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="6, 3, 6, 6">
        <ScrollViewer Margin="2">
            <TextBlock/>
        </ScrollViewer>
    </Border>
</Grid>

There is only one small change we have to make so the panel is not confined to one grid cell. This is done using the Grid.ColumnSpan property.

Now with only one panel left, I have decided to make my life a little easier by adding in extra grids.  These are done in exactly the same way as the initial grid.  Using what I have done already, and combining it with new elements, the last panel is added.

<Grid>

    <Border Grid.Row="0" Grid.Column="1" Background="White" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="3, 6, 6, 3">
        <Grid>
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto"/>
                <RowDefinition Height="Auto"/>
                <RowDefinition Height="*"/>
                <RowDefinition Height="30"/>
                <RowDefinition Height="30"/>
                <RowDefinition Height="60"/>
            </Grid.RowDefinitions>
            <Grid Grid.Row="0">
                <Grid.RowDefinitions>
                    <RowDefinition/>
                    <RowDefinition/>
                </Grid.RowDefinitions>
                <Label Grid.Row="0" HorizontalAlignment="Left" VerticalAlignment="Bottom" FontSize="36" Content="Score:"/>
                <Label Grid.Row="1" HorizontalAlignment="Right" VerticalAlignment="Top" FontSize="48" Content="0"/>
            </Grid>
            <Grid Grid.Row="1">
                <Grid.RowDefinitions>
                    <RowDefinition Height="50"/>
                    <RowDefinition Height="Auto"/>
                    <RowDefinition Height="Auto"/>
                </Grid.RowDefinitions>
                <Label Grid.Row="1" HorizontalAlignment="Left" VerticalAlignment="Bottom" FontSize="24" Content="Lives:"/>
                <Label Grid.Row="1" HorizontalAlignment="Right" VerticalAlignment="Bottom" FontSize="24" Content="0"/>
                <Label Grid.Row="2" HorizontalAlignment="Left" VerticalAlignment="Bottom" FontSize="24" Content="Level:"/>
                <Label Grid.Row="2" HorizontalAlignment="Right" VerticalAlignment="Bottom" FontSize="24" Content="0"/>
            </Grid>
            <Label Grid.Row="3" HorizontalAlignment="Center" VerticalAlignment="Bottom" Content="Camera Position"/>
            <Grid Grid.Row="4">
                <Grid.ColumnDefinitions>
                    <ColumnDefinition/>
                    <ColumnDefinition/>
                </Grid.ColumnDefinitions>
                <Button Grid.Column="0" Margin="15,0,15,0" Content="&lt;&lt;"/>
                <Button Grid.Column="1" Margin="15,0,15,0" Content=">>"/>
            </Grid>
            <Button Grid.Row="5" Margin="15,15,15,15" Content="Change Texture"/>
        </Grid>
    </Border>
</Grid>

As a result, we achieve a form created entirely from XAML.

Unfortunately, there is no logic behind this yet. In part 2, we start looking at the other parts of the code and insert an OpenGl control into the application.


Renderer: Demo

Posted by Robert Chow on 16/12/2009

So now that I’ve got a lot of my functionality going for me in this component, I thought I’d try and create a snazzy demo.

My supervisor’s obsessed with it. It’s like a little game – there is a rotating cube, and the challenge is you have to click on each face before you pass on to the next level. As you progress, the cube rotates just that little bit faster, making it harder to complete the challenge. Clicking on a face will turn that face on. Clicking on a face that is already on will turn it off – you can only progress when all 6 faces are switched on.

So you can automatically see that this little demo already involves the concept of vertex buffer objects (drawing of the cube), the scene graph (rotating cube), the camera (viewing the cube, using the scene graph) and picking (turning faces on/off). But what about the others?

Well, let’s place the scene in dark surroundings – we’re going to need lights to see our cube – that’s lighting handled. So where does texturing fit in?

Switching the faces on/off needs an indicator to show what state they are in.  We could easily do this by simply changing the colour of a face.  But that’s kind of boring.  So instead, we’re going to take the rendering-to-texture example and slap that on a face which is on.  So that’s textures included.

Here are some screenshots of this demo.

Demo Setup. From left to right: initial scene – just a white cube; let’s dim the lighting to, well, none; include some lights: green – centre, red – bottom, blue – left; create a texture.

Demo Rotate. The demo in motion – the camera is stationary, whereas the cube is rotating, and the lights are rotating independently around it.

Demo Play.  The first and second images show what happens when a face is clicked – the texture replaces the blank face.  Once all the faces are replaced with the texture, the demo advances to the next level.

Demo Camera.  This shows added functionality of how a camera can be fixed on to a light, thus the camera is rotating around the cube too.  Here it is fixed on to the blue light.  This is done with ease by manipulating the scene graph.

I guess you can’t really get a grasp of how this demo works in its entirety – snapshots don’t really do it much justice.  I might try and upload a video or so – how, I’m unsure of – I’m sure I can probably find screen capture software around on the net.

And unfortunately, there is no score involved. Yet. That will come when fonts come.


Renderer: Lighting

Posted by Robert Chow on 15/12/2009

So these are tricky too.

OpenGL only allows 8 lights at any one time, and should you need more, they suggest you ought to reconsider how you draw your model.
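Strictly speaking, 8 is only the minimum the specification guarantees; the actual limit can be queried at run time. A quick check might look something like this (just a sketch):

// Sketch: ask the implementation how many fixed-function lights it supports.
// The specification guarantees at least 8 (GL_LIGHT0 to GL_LIGHT7).
int[] maxLights = new int[1];
Gl.glGetIntegerv(Gl.GL_MAX_LIGHTS, maxLights);
Console.WriteLine("Maximum lights supported: " + maxLights[0]);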

As there is more than one light available, we need to be able to differentiate between the lights.  We do this by declaring an enum.

public enum LightNumber
{
    GL_LIGHT0 = Gl.GL_LIGHT0,
    GL_LIGHT1 = Gl.GL_LIGHT1,
    ..
    ..
    GL_LIGHT7 = Gl.GL_LIGHT7
}

An enum, like a class, can be used as a parameter type, with one restriction – the value of the parameter must be one of the values declared within the enum.  In the context of lights, a LightNumber can only be GL_LIGHT0, GL_LIGHT1, GL_LIGHT2 and so forth up to GL_LIGHT7, giving us a maximum of 8 lights to choose from.  For an enum, the identifiers on the left-hand side are names, and these names can be referred to.  You can also assign values to these names, as I have done – in this case, their corresponding OpenGL values.

So now we can set our lights, and attach them to a node, similar to the camera.  Kind of.  Unlike the camera, these lights can either be switched on or off, and have different properties to one another – going back to the lighting post, the ambient, diffuse and specular properties of a light can be defined. We could do this using two methods.

Renderer.SwitchLightOn(LightNumber lightNumber, bool on);
Renderer.ChangeLightProperty(LightNumber lightNumber, LightProperty lightProperty, Colour colour);

We could.

But we shouldn’t.

I’ve been reading a lot lately about refactoring – cleaning up code, making it better, more extensible, and easier to read. A few months ago, I stumbled across this guide to refactoring. A lot of it was simple and made a lot of common sense. But it was intelligent – something which everyone becomes every now and then.  And it’s definitely worth a read.  Another one I’ve been reading of late is this book on clean code.  How good really is your code?

The Only Valid Measurement Of Code Quality.  This image has been taken from http://sayamasihbelajar.wordpress.com/2008/10/24/

Naming conventions are rather important when it comes to something like this, and using a bool as a parameter doesn’t exactly mean much.  Once there is a bool, you automatically know that the method does more than one thing – one path for true, and another for false.  So split them.

Renderer.SwitchLightOn(LightNumber lightNumber);
Renderer.SwitchLightOff(LightNumber lightNumber);
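Under the hood, these presumably boil down to a cast and a single OpenGL call each – this is a sketch of my own, not Renderer’s actual implementation:

// Sketch only: because each LightNumber value is its OpenGL constant,
// switching a light on or off is just a cast and one call.
public void SwitchLightOn(LightNumber lightNumber)
{
    Gl.glEnable((int)lightNumber);
}

public void SwitchLightOff(LightNumber lightNumber)
{
    Gl.glDisable((int)lightNumber);
}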

There are a lot of wrongs and rights, but no concrete answer – just the advice that these books can give you.  So I’m not going to trip over myself trying to give you some more advice – I’m only learning too.


Renderer: Picking

Posted by Robert Chow on 03/12/2009

Remember that decorator pattern I used for writing to textures?  It turns out to be particularly useful for picking too.

Decorate the main drawer class with a picking class.  This allows the picking class to start off by switching the render mode to select mode and setting the X and Y co-ordinates of the mouse.  The image is then rendered as normal, but because of the render-mode switch, it is rendered into the select buffer instead of to the device context.  After this, the picking class can interpret the selection buffer and return the results.

Sounds fairly easy.  But I did run into a problem near the start.

The first problem was that the mouse wasn’t picking where I wanted – it was interpreting the mouse as always being in the bottom-left hand corner.  Why so?

I later learnt that when picking, the projection matrix is multiplied by a pre-configured pick matrix so the picking can be restricted to the area around the mouse.  In the original drawer class, I was resetting the projection matrix by loading the identity, and thus wiping out what the picker class had set up – kind of like defeating itself.  So I had to take the matrix resets out of the main drawing method and place them outside – they would have to be called elsewhere, likely through the Renderer facade (another GoF pattern), before calling the draw.
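For what it’s worth, here is a rough sketch of what such a pick pass can look like with Tao’s bindings – this is not Renderer’s actual code, the Pick name and drawScene delegate are my own, and the hit-record parsing is simplified:

// Rough sketch of select-mode picking (not Renderer's real code).
// Returns the name of the nearest hit under mouse (x, y), or -1 for no hit.
private int Pick(int x, int y, Action drawScene)
{
    int[] selectBuffer = new int[512];
    int[] viewport = new int[4];

    Gl.glGetIntegerv(Gl.GL_VIEWPORT, viewport);
    Gl.glSelectBuffer(selectBuffer.Length, selectBuffer);
    Gl.glRenderMode(Gl.GL_SELECT);
    Gl.glInitNames();

    // Multiply the projection by a pick matrix restricted to a small region around
    // the mouse - the projection must not be reset again afterwards, or this is lost.
    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glPushMatrix();
    Gl.glLoadIdentity();
    Glu.gluPickMatrix(x, viewport[3] - y, 5, 5, viewport);
    Gl.glOrtho(0, viewport[2], 0, viewport[3], 10, -10);

    drawScene();                               // each pickable item pushes its own name

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glPopMatrix();
    int hits = Gl.glRenderMode(Gl.GL_RENDER);  // back to normal rendering; returns the hit count

    // Each hit record is: name count, min depth, max depth, then the names.
    int picked = -1;
    uint nearest = uint.MaxValue;
    int position = 0;
    for (int i = 0; i < hits; i++)
    {
        int names = selectBuffer[position];
        uint minDepth = (uint)selectBuffer[position + 1];
        if (names > 0 && minDepth < nearest)
        {
            nearest = minDepth;
            picked = selectBuffer[position + 3];
        }
        position += 3 + names;
    }
    return picked;
}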

So with the matrix set up properly, I did encounter another problem.  And this time it was hardware based.  It would pick appropriately on my supervisor’s computer, yet not on mine.

An example scenario:  there are several images overlapping each other, each with their own pick Id.

My supervisor’s computer: picks appropriately – returns the object with the nearest Z value (most positive) under mouse (X, Y).

My computer: picks inappropriately – returns the object with the nearest Z value under mouse (X, Y) only if the object has not been moved in model-view space in terms of its Z value.  This accounts for translations, rotations and scales.  So it will sometimes return the correct object, but most of the time it won’t.  This is particularly annoying if you have a scene that is constantly rotating and you want to pick the nearest item – it will always pick the same item every single time, simply because it was rendered at the front before it was rotated.

Simple fix: enable culling.  This means that any face pointing away from the view won’t be rendered, so technically there is nothing behind the frontmost object – it can be the only one that is picked!
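The fix itself is a couple of state changes before rendering – a sketch, assuming the default winding where front faces are counter-clockwise:

// Sketch: cull back faces so only surfaces facing the camera can generate hit records.
Gl.glEnable(Gl.GL_CULL_FACE);
Gl.glCullFace(Gl.GL_BACK);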

But that’s also particularly annoying when you want to use two-sided rendering – the ability to render shapes wound clockwise as well as counter-clockwise.  Enabling culling in OpenGL will result in all shapes that are wound clockwise on screen not being drawn at all.  Not so useful in rotating scenes!

Anyway.

Combining textures and picking, this demo allows the user to pick a quadrant, which will then change accordingly – to textured if it is off, or to non-textured if it is on.

Renderer Picking Demo. From top-left, clockwise:  empty canvas; bottom-left quadrant click to turn texture on; top-right quadrant click to turn texture on; top-left quadrant click to turn texture on; bottom-left quadrant click to turn texture off; top-right quadrant click to turn texture off; bottom-right quadrant click to turn texture on; top-left quadrant click to turn texture off.  To achieve the original image from the last step, the user would click bottom-right quadrant, turning the texture off.


Renderer: Textures

Posted by Robert Chow on 22/11/2009

Carrying on with the Renderer library, the next function I decided to implement was textures.  Being able to render the scene to texture is a major function, and for this library it is a must.  It is also a requirement to be able to use those textures as part of a scene too.  I had not previously used vertex buffer objects with textures either, so it was a good time to test this out and make sure it worked.

I decided to demo a similar function to the previous one, but this time adding in textures.  This demo takes the random triangles, renders them to texture, and then redraws another scene using the textures.

Renderer: Textures.  This scene is made up of 4 images, using the same texture generated from code very similar to that of the vertex buffer objects demo.

As you can see, the original scene has been manipulated in a way that would be hard to recreate without first rendering to texture.  The texture size I used was 128 x 128, as was the viewport when I first rendered the triangles.  This scene, however, is 512 x 512.  The original texture is seen in the bottom-right corner, and all the other textures have been rotated and resized.  The largest of the textures, bottom-left, is slightly fuzzy – this is because it is twice the size of the original.

Renderer API

The way I have implemented the rendering to texture function on the API is not much different to rendering as normal.  It’s a simple case of:

Renderer.RenderToDeviceContext();

or

Renderer.RenderToTexture(..params..);

Technically, rendering to texture will also render to the device context.  The main difference, however, is that after rendering to the device context, the render-to-texture method takes the parameters given and makes a simple call to glCopyTexImage2D – as is normal when a client wishes to render to texture using OpenGL.  To save on code duplication, I have used the decorator pattern – the render-to-texture class “decorates” the original class.  This means I can set up texturing by first binding a texture, make the original call to render to the device context, and then finish off by copying the result into the bound texture.
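As a rough illustration only – the IRenderer interface and class name below are my own stand-ins, not Renderer’s actual types – the decorator ends up doing something like this:

// Illustrative sketch of the render-to-texture decorator (names are hypothetical).
public class RenderToTextureRenderer
{
    private readonly IRenderer inner;   // the normal render-to-device-context renderer

    public RenderToTextureRenderer(IRenderer inner)
    {
        this.inner = inner;
    }

    public void RenderToTexture(int textureId, int width, int height)
    {
        // Bind the target texture first...
        Gl.glBindTexture(Gl.GL_TEXTURE_2D, textureId);

        // ...render to the device context as normal...
        inner.RenderToDeviceContext();

        // ...then copy the freshly rendered frame into the bound texture.
        Gl.glCopyTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGB, 0, 0, width, height, 0);
    }
}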
