Planning the Spontaneous

It's more than just a blueprint.


Posted by Robert Chow on 22/03/2010

Over nine hours it’s taken to get back to Bideford from Manchester. It’s an absolute disgrace. At first I thought the problem would lie at Birmingham New Street, with a sketchy 15-minute wait for the connecting train, and that Exeter, with a 45-minute changeover, would be the least of my worries. Yet it was the latter: the Birmingham to Exeter train was delayed by 1 hour 30 minutes, and as a result I had to catch a much later train back to Barnstaple.

I can honestly say it’s the journeys that I am going to miss the least when I leave my placement in the summer. Don’t get me wrong, I absolutely love the work I’m doing (that reminds me, I’ve got a lot left to do), and being a student again will be very disorienting, I am sure. However, it’s no secret that I’m not too fond of the location. It’s great for a holiday, or a weekend break; but a year, especially for someone who grew up in Manchester and loves city life, is not quite my cup of tea.

Anyway.

I’m back in Bideford now. Hopefully I’ll be able to cram in a few more posts before I disappear back up north again.


Posted in Placement, Travel | Leave a Comment »

glGenTexture(Atlas); // part2

Posted by Robert Chow on 19/03/2010

So from the plan I started to devise for my texture atlasing system, I decided to have a quick look at two methods that can help me split up the texture in an appropriate way to maximise its use. These are the Binary Split Partitioning method, and the use of Quadtrees.

Binary Split Partitioning

This method takes a space and splits it into two, keeping a reference to each part. A binary tree keeps track of how the space is split at each stage. The method works by taking the requested size (the image/sub-texture) and seeing if it fits into the space represented by a leaf node. If not, it moves on to the next leaf node using tree traversal. To maximise efficiency, once a free space large enough for the requested size is found, that space is split into two; both halves become leaf nodes, but one is the exact size of the request (and this is also where the requested texture is allocated) and the other is the remaining unallocated space.

Binary Split Partitioning. This image represents the stages of inserting a sub-texture (1) into a space managed by BSP. Starting off at the top, (1) is requested and checked against space A. A is not an exact fit, so it is split into 2 partitions, B and C. (1) is then compared to space B, which is again split into D and E. As D is the perfect fit, (1) is allocated to this space. The last image shows the final state of the tree after inserting (1). The reason it cannot skip from stage 1 to stage 3 is that there is no possible way of splitting the space in one step while keeping to an efficient BSP system – it is best to split each space linearly. The splitting measurements are derived from the sub-texture to maximise efficiency.

I managed to find this method rather easy to implement, especially with the help of this page on lightmaps. Using this, I also managed to come up with a simple demo, showcasing how it fits several random sub-textures into one texture.

BSP Texture Atlas.  This image shows my demo of inserting random sized images into a texture managed by BSP in progression.
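For the record, here is a rough sketch in C# of the insert routine as I understand it from the lightmaps page above. The names (Node, Insert) and the use of System.Drawing.Rectangle are purely illustrative, not my actual Renderer code – treat it as a sketch of the idea rather than the implementation.

using System.Drawing;

public class Node
{
    private Node childA;          // first half of a split space
    private Node childB;          // second half of a split space
    private Rectangle space;      // the region of the atlas this node manages
    private bool occupied;        // true once a sub-texture is allocated here

    public Node(Rectangle space)
    {
        this.space = space;
    }

    // Returns the leaf node now holding the requested size, or null if it doesn't fit.
    public Node Insert(int width, int height)
    {
        // internal node: try each child in turn (the tree traversal)
        if (this.childA != null)
        {
            return this.childA.Insert(width, height) ?? this.childB.Insert(width, height);
        }

        // leaf node: reject if already allocated or too small
        if (this.occupied || width > this.space.Width || height > this.space.Height)
        {
            return null;
        }

        // perfect fit: allocate the sub-texture here
        if (width == this.space.Width && height == this.space.Height)
        {
            this.occupied = true;
            return this;
        }

        // otherwise split linearly, deriving the split from the requested size
        // so that one child is an exact fit along that axis
        if (this.space.Width - width > this.space.Height - height)
        {
            this.childA = new Node(new Rectangle(this.space.X, this.space.Y, width, this.space.Height));
            this.childB = new Node(new Rectangle(this.space.X + width, this.space.Y, this.space.Width - width, this.space.Height));
        }
        else
        {
            this.childA = new Node(new Rectangle(this.space.X, this.space.Y, this.space.Width, height));
            this.childB = new Node(new Rectangle(this.space.X, this.space.Y + height, this.space.Width, this.space.Height - height));
        }
        return this.childA.Insert(width, height);
    }
}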

Although easy to implement, there are a couple of drawbacks. As this is an unbalanced binary tree, it becomes computationally very expensive to insert a sub-texture as the tree structure grows. Adding tree rotation to balance the tree is possible – it doesn’t affect how the tree works when inserting a sub-texture. However, it does affect the tree when it comes to deleting allocated space. This is because the nodes derive from their parents; losing the connection between a parent and its children (which occurs heavily during tree rotation) means that successfully deleting an allocated space at node B, derived from parent A and with sibling C, becomes more or less a very difficult feat. If C is empty, A will be empty after the deallocation of space B – and thus B and C need not exist, leaving the big space referenced by A unsplit. Taking away the natural bond between parent and child in a BSP tree through tree rotation (or any other method for that matter) makes this problem rather hard to tackle, as it could mean having to search the entire tree for the parent and the other sibling. This leaves me with a problem – do I keep the tree balanced, and make insertion less costly at the expense of not being able to delete; or do I allow for deallocation, whilst keeping insertion computationally expensive? In the end, I will be inserting far more often than deleting, so I think the former is the best option for now.

Or I could look at quadtrees.

Quadtrees

This is where my planning goes wrong. And I’ve also not researched quadtrees to their full extent: I know what they are designed to do, I’m just unsure of how to implement them.

A quadtree is a structure whereby each node has four children. Each node represents a square block in the texture, and each block is split into four smaller squares – the children. This is done repeatedly until a single block is a 1×1 square.

Quadtree Colour.  This displays how a texture is split up using quadtrees.  I have used colour to differentiate each node.  I guess it looks quite similar to one of those Dulux colour pickers.

As this structure is set in stone, its depth will be constant, and we need not worry about insertion/deletion costs. It’s already a rather shallow structure in itself, because each node has 4 children, as opposed to the typical 2.

The structure works by marking off what is allocated at the leaf nodes, and this is interpreted at the top. If all of a node’s children are allocated, then that node too is marked as allocated. Using this system, we can find whether there is any space available in a node, and then check to see if the space provided is big enough to fit in the sub-texture. My problem is that I’m unsure of how to implement this efficiently. For example, a very large sub-texture is inserted at the root, where everything is unallocated so far. This takes up more than one quadrant of the root, and spills into the other quadrants. By definition, the quadrant the texture covers completely is marked as allocated, yet the other quadrants are only partly allocated – some of their children are allocated but they also have children that are unallocated. It is the next step which I cannot figure out: insert another large texture – there is enough space in the root to place this, but it needs to take up the space given by two already partially filled quadrants. How does it know how these quadrants are related in terms of position, and how does it then interpret the unallocated space in each of these quadrants? They obviously need to be interpreted as adjacent, but then the available children of these quadrants also need to be interpreted as adjacent too. It’s a puzzle I’ve decided to give a miss, simply because I don’t have the time.

Quadtree Insert. This depicts the problem I am having trouble with. Inserting the first image will mark off the top-right quadrant as occupied, whilst the others are partially occupied. It is easy to see that the second image will fit into the remaining space, but I am unsure of how to implement this. This is because the second image will look at the root and find it is small enough to fit inside. It will then look at the quadrants to see if it will fit in any of them, and it cannot because it is too big. I do not know how to implement the detail where it looks at the quadrants together – I’m not even entirely sure if this is how quadtrees are supposed to work.
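For what it’s worth, here is a minimal sketch in C# of the bookkeeping half I do understand – a node counts as allocated once it is marked directly, or once all four of its children are. The names are my own for illustration, and it deliberately does nothing about treating the free space of partially filled quadrants as adjacent, which is exactly the part I’m stuck on.

public class QuadNode
{
    private readonly QuadNode[] children;   // the four quadrants; null at a 1x1 leaf
    private bool allocated;                 // set when a sub-texture covers this block

    public QuadNode(int size)
    {
        if (size > 1)
        {
            this.children = new QuadNode[4];
            for (int i = 0; i < 4; i++)
            {
                this.children[i] = new QuadNode(size / 2);
            }
        }
    }

    // Interpreted at the top: a node is allocated if marked directly,
    // or if all of its children are allocated.
    public bool Allocated
    {
        get
        {
            if (this.allocated)
            {
                return true;
            }
            if (this.children == null)
            {
                return false;
            }
            foreach (QuadNode child in this.children)
            {
                if (!child.Allocated)
                {
                    return false;
                }
            }
            return true;
        }
    }
}

Building every level down to 1×1 up front like this would obviously be wasteful for a real texture, but it shows the marking scheme.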

Posted in Computing, Placement | Leave a Comment »

glGenTexture(Atlas); // part1

Posted by Robert Chow on 08/03/2010

So besides having a problem with fonts, I also know that at a later stage I will eventually have to incorporate texture atlases. A texture atlas is a large texture that is home to many smaller sub-images, each of which can be accessed by using different texture coordinates. The idea behind this is to minimise the number of texture changes in OpenGl – texture changing is regarded as a very expensive operation and should be kept to a minimum. Using this method would create a huge performance benefit, and would be used extensively with fonts.


Helicopter Texture Atlas.  This shows how someone has taken a model of a helicopter and broken it up into its components, before packing it into a single texture atlas.  This image is from http://www.raudins.com/glenn/projects/Atlas/default.htm

Where To Start?

Approaching this problem, there are multiple things for me to consider.

The first is how to compose the texture atlas. Through a little bit of research, I have found a couple of methods to consider – the Binary Split Partition (BSP) algorithm, and using Quadtrees.

Secondly, there is the possible notion of an optimisation method. This should take all the textures in use into consideration, and then sort them so the number of texture changes is at a minimum. This would be done by taking the most frequently used sub-textures and placing them on to a single texture. Of course, with this alone there are many things to consider, such as which frequency algorithm I should use, and how often the textures need to be optimised.

Another thing for me to consider is the internal format of the textures themselves, and also how the user will interact with the textures.  It would be ideal for the user to believe that they are using separate textures, yet behind the scenes, they are using a texture atlas.

Updating an Old Map

As I already have the Renderer library in place, ideally the introduction of texture atlases will not change the interface dramatically, if not at all.

Currently, the user asks Renderer for a texture handle, with an internal format specified for the texture. Initially this handle does not correspond to a texture, which seems rather pointless. Of course, the obvious alternative would be to only create a handle upon creating a texture. The reason behind using the former of the two options was the flexibility it provides when a texture is used multiple times in a scene graph. Using a texture multiple times in a scene means that the handle has to be referred to in each node it appears at in the scene graph. With the first option, if the texture image changes, only the contents of the handle change, and so it updates automatically throughout the scene graph. With the latter option, it would mean having to change the handle in every single node of the scene graph it corresponded to. The handle acts like a middleman.
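As a rough illustration of that middleman idea – the type and member names here are hypothetical, not the actual Renderer interface:

public class TextureHandle
{
    // the OpenGl texture this handle currently refers to
    public int TextureId { get; private set; }

    // rebinding changes only the contents of the handle; every scene graph
    // node referring to this handle picks up the new texture automatically
    public void Bind(int newTextureId)
    {
        this.TextureId = newTextureId;
    }
}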

When incorporating texture atlases, I would like to keep this functionality; but it does mean that I will have to change the interface in another area of Renderer, and that is the use of texture co-ordinates. Currently, the texture co-ordinates are nice and simple. To access the whole texture, you use texCoords (0,0) for the bottom-left hand corner, and (1,1) for the top-right hand corner. To the user on the outside, this should be no different, especially considering the user thinks they are just using a single texture. To do this, a mapping is needed to convert the (s,t) texture co-ordinates to the internal texture atlas co-ordinates. Not really a problem. But it is if we’re using Vertex Buffer Objects.
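The mapping itself is just a linear rescale into the sub-texture’s region of the atlas. A minimal sketch, assuming each sub-texture records the region it occupies in atlas co-ordinates (RectangleF and the method name are illustrative):

using System.Drawing;

public static class AtlasMapping
{
    // converts the user's (s,t) in [0,1] to co-ordinates within the atlas
    public static PointF MapToAtlas(RectangleF region, float s, float t)
    {
        return new PointF(region.X + s * region.Width,
                          region.Y + t * region.Height);
    }
}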

Just using the simple (0,0) to (1,1) co-ordinates all the time means that we only need to create the texture co-ordinates in our vertex buffers once, and refer to them all the time. Yet they will change all the time if we are using internal mapping, especially if we are also going to be changing the texture contents that the handles are referring to.

I think the best way of solving this is to make sure that the texture co-ordinates inside the vertex buffer objects are recreated each time a new texture is bound to a handle. How I go about this, I am not entirely sure, but it’s definitely something I need to consider. This would mean a fairly tight relationship between the texture co-ordinates in a scene graph node and the texture they are bound to – of course, they can’t really function without each other anyway, so it does make sense. Unfortunately, the dependency is there and it always will be.

Structure

Before all of that comes into place, I also need to make sure that the structure of the system is suitable, on top of it being maintainable, readable and scalable. For the time being, I am using a 3-tier structure.

Texture Atlas Structure.  This shows the intended structure of my texture atlas system.

Starting at the top, there is the Atlas. This is essentially what the user interface will work through. The Atlas holds on to a list of Catalogs. The reason I have done this is to separate the internal formats of the textures: a sub-image with the intended internal format A cannot be on the same texture as a sub-image of internal format B. Inside each Catalog is a collection of Pages with the corresponding internal format. A Page is host to a large texture with many smaller sub-images. Outside of this structure are the texture handles. These refer to each of their own sub-images, and hide away the complexity of the atlas from the user, making it seem as if they are handling separate textures.
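A skeleton of how that 3-tier structure might look in code – illustrative names only, with all the real work elided:

using System.Collections.Generic;

public class Atlas
{
    // one Catalog per internal format, so sub-images of format A
    // never share a texture with sub-images of format B
    private readonly Dictionary<int, Catalog> catalogs = new Dictionary<int, Catalog>();
}

public class Catalog
{
    // every Page in this catalog shares the catalog's internal format
    private readonly List<Page> pages = new List<Page>();
}

public class Page
{
    // one large OpenGl texture, partitioned (by BSP or quadtree)
    // into many smaller sub-images
    private int textureId;
}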

For the time being, this is still a large learning curve for me, not only in learning OpenGl, but also in using design patterns and C# tricks, and more recently, planning. I don’t plan very often, so this is one time I’m hoping to do exactly that! These posts may come a little slowly as I’m in Manchester at the moment, and although I thought my health was getting better, it doesn’t seem to be going away entirely. With a little luck, I’ll accomplish what it is that I want to do over the next couple of weeks. In the next post, I hope to touch on BSP, which I have implemented, and Quadtrees – something I still need to look at. Normally I write my blog posts after I’ve implemented what it is I’m blogging about, but this time I’ve decided to do it a little differently. The reason for this is that after implementing the BSP, I’ve realised it may actually be a lot more flexible to implement Quadtrees instead. Whether I do or not is, I think, dependent on time. As long as there is a suitable interface, then all in all it is just an implementation detail, and nothing major to worry about. This is, at the current time, by no means a finished product.

Posted in Computing, Placement | 3 Comments »

Sleep Deprivation

Posted by Robert Chow on 04/03/2010

I have to admit, it’s been a while.

Unfortunately, I’ve been a bit under the weather for the last week, and as a result unable to attend work. I haven’t been sleeping properly for x-amount of weeks, and so I’ve ventured home to Manchester to try to solve my problems there. For the time being, it seems to be paying off.

I did have my placement tutor visit take place just before I came back up, and from the sounds of it, she was very pleased with the progress on my part, and on the part of my supervisor. This is indeed very good news. She did have one concern though, and that was the fact that I am secluded in the shires of North Devon. Fair enough – it’s definitely something I know I’m not particularly ecstatic about.

Hopefully, with a bit of luck and well-being, I’ll get my blog posts rolling again.

Posted in Computing, Placement | Leave a Comment »

It’s a Hard Drive

Posted by Robert Chow on 20/02/2010

So someone mentioned to me the other day that I haven’t been blogging about my driving lessons as such anymore. Well, here’s some news for you.

Unfortunately, my instructor has broken a bone in his foot whilst trying to push a car. As a result, he is in plaster, cannot operate the clutch, and so is unable to give any more lessons for at least the next 6 weeks. Kinda sucks really.

Fortunately, he is willing to rearrange my lessons with another instructor in the company he is part of, so all is not lost. It’d be a shame if all the lessons I’ve had went to waste because I haven’t been able to practise for a few weeks.

All in all, it’s going relatively well. I’m pretty much still practising whenever I’m on the road, and I’ve started my manoeuvres too – so far, the turn-in-the-road (more commonly known as the 3-point-turn) and reversing around a corner. I’ve only had 14 hours contact, and 0 hours off. I would like to practise without having to have lessons, but at this current moment in time, it’s just not feasible.

I still haven’t taken my theory test yet – I guess I should book that relatively soon.  When the practical test will come, I don’t know – my instructor hasn’t given me any notice, and I guess it is still early days.

I’ve just read in the news that children as young as 11 are taking driving lessons in the UK at the moment. And if I’m terribly honest, I don’t agree with it at all. Maybe from around 15-16, possibly, but at 11 it’s rather ridiculous really. They say that people who take on a skill at a younger age find it a lot easier to learn than those who leave it till later in life. Fair enough, nothing wrong with that. But although these children are learning to drive, they’re not able to get the experience, because they’re legally not allowed to drive on public roads. And from my small number of 14 hours of driving experience, I can say for myself that that’s what it’s all about. It’s the experience that counts.

Young drivers might think they can drive, because they’ve not had to deal with other users on the road. So when they first get on to the roads, they’re probably filled to the brim with confidence, and inevitably more likely to cause an accident. I wouldn’t mind if learner drivers were able to learn on the roads at around 16, but still keep the licensing at 17. At least that way, they’d be able to gain the same amount of experience a normal learner would.

And get this – it’s pretty ridiculous (and comical) to read “I’m 12 and too small to see over [the wheel] but I use a cushion.”  I think that in itself is an argument against the idea.  Is it just me who thinks that driving at 11 isn’t a clever idea, or does everyone else think that too?

Posted in Placement | 1 Comment »

“{Binding OpenGl To WPF}” : Part 3

Posted by Robert Chow on 16/02/2010

So this is (hopefully) the last installment of my journey to creating my first WPF application using an OpenGl control.  You can find the first part here, and the second part here.  In this post, I am going to talk about where the magic happens behind a WPF form.

Model-View-View Model Pattern

So we have the GOF design patterns; these are best code practices, and are just guidelines. Similarly, there are also design patterns for implementing GUIs: Model-View-Presenter, Model-View-Controller and Model-View-View Model. I don’t really know much about the former two, and I don’t claim to know too much about the MVVM pattern either, but it’s the one I am trying to use for developing my WPF applications.

In the pattern, we have the model, which is where all the code defining the application is; the view – this is the GUI; and the view-model, which acts as an abstraction between the view and the model.  The view is mostly written in XAML, and the view-model and model I have written in C#.  Technically, the view should never need to know about the model, and vice versa.  It is the view-model which interacts between the two layers.  I find it quite easy to look at it like this.

View Model Gear.  How the MVVM pattern may be portrayed in an image.  To be honest, I think the pattern should really be called the View-View Model-Model pattern, but of course, that’d just be stupid.  This image is from http://blogs.msdn.com/blogfiles/dphill/WindowsLiveWriter/CollectionsAndViewModels_EE56/.

Code Inline and Code Behind

You’ll notice that for the majority of the time, I am using XAML to do most of the form. The rest of the code that doesn’t directly interact with the view through the XAML is either code inline or code behind. The code behind is essentially what we use to represent the view-model and the model. The code inline is the extra code that is neither XAML nor code behind. It helps to define the view and is very rarely used, especially if the XAML can describe everything related to the view. Incidentally, I used code inline at the start of the last post, where I defined a windows forms host using C# instead of doing it in the XAML.

In order to keep the intended separations of concerns in the MVVM pattern, the code behind has to interact with the view as little as possible, if not at all.  The flow of code should always travel in the direction of view to view-model, and never the other way round.  A way to do this is to introduce bindings.

Bindings

A binding in WPF is declared in the XAML and binds a XAML component to a property in the view-model. In doing this, the component can take on the value specified in the view-model. However, before we can create a binding, we have to form the connection between the view and the view-model. To do this, we assign the view-model to the view’s DataContext. This is done in the logic in the code behind of the view.

/// <summary>
/// Interaction logic for Client.xaml
/// </summary>
public partial class Client : Window
{
    public Client()
    {
        InitializeComponent();
        this.simpleOpenGlControl.InitializeContexts();
        this.DataContext = new ViewModel();
    }
}

Having this data context means that the view can now relate to our view-model.

Now with the view-model in place, we can bind components of the view to properties in the view-model. The binding is declared in the XAML using the property name in the view-model. The example below depicts how I retrieve the score in the demo and display it in a label.

<Label Content="{Binding Path=Score, Mode=OneWay, UpdateSourceTrigger=PropertyChanged}" Grid.Row="1" Grid.Column="0" FontSize="48" />

As you can see, the property I have bound to is “Score”, specified using Path. The Mode and UpdateSourceTrigger properties are enums that describe how we want the binding to relate to the property in the view-model. Specifying that the binding is OneWay tells the application that all we want to do is read the property value from the view-model. This would usually be TwoWay in cases such as a textbox, where the user can edit a field.

The UpdateSourceTrigger is necessary here: without it, the label content will never update. As the code in the view travels in the direction of the view-model, there is minimal code going the other way. As a result, the label will not know the value has changed in the view-model, and thus will not update. To make sure the label content does update, we have to notify the view that the property value has changed. To do this, we have to implement an interface.

INotifyPropertyChanged

You can’t really get much closer to what it says on the tin than this. Implementing this interface gives you an event, one that “notifies” the XAML that a property has changed and asks it to update accordingly. Below is the code I have used to invoke a change on the property Score.

public event PropertyChangedEventHandler PropertyChanged;

public int Score
{
    get
    {
        return this.score;
    }
    set
    {
        this.score = value;
        OnPropertyChanged("Score");
    }
}

private void OnPropertyChanged(string propertyName)
{
    if (PropertyChanged != null)
    {
        PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}

As far as I’m aware, invoking the event with the property name tells the XAML that the value for the property with that particular name has changed. The XAML will then look for that property and update. As a result, it is vital that the string passed in is exactly the same as the name of the property, in both the view-model and the XAML binding.

Invalidate

A problem I did come across was trying to invoke a method provided by a XAML component. Using the simpleOpenGlControl, I need to invalidate the control so the image will refresh. Of course, invoking this in the view-model directly would violate the MVVM pattern. As a result, a little bit of logic was needed to solve this problem.

/// <summary>
/// Interaction logic for Client.xaml
/// </summary>
public partial class Client : Window
{
    public Client()
    {
        InitializeComponent();
        this.simpleOpenGlControl.InitializeContexts();
        this.DataContext = this.viewModel = new ViewModel();

        viewModel.SimpleOpenGlControlInvalidate = () => this.simpleOpenGlControl.Invalidate();
    }

    private ViewModel viewModel;
}

public class ViewModel : INotifyPropertyChanged
{
    public Action SimpleOpenGlControlInvalidate
    {
        get;
        set;
    }

    public int Score …
}

Assigning the method for use inside the view-model means that I can invoke it without having to go to the view, therefore still keeping with the MVVM pattern. Apparently, doing this can cause a memory leak, so when the view-model is finished with, you have to make sure that the action is nullified.
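Something along these lines should do it – a minimal sketch, assuming the window’s Closed event is a suitable teardown point:

// clearing the action stops the view-model holding a reference back into the view
this.Closed += (sender, e) => this.viewModel.SimpleOpenGlControlInvalidate = null;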

That pretty much concludes this 3-parter. I’m sure there’ll be a lot more WPF to come in the near future, especially as I’m now trying to create all my future demos with it. It also means that I can use the XAML to create SilverLight applications too… or maybe not.

Posted in Computing, Placement | Leave a Comment »

All Mapped Out

Posted by Robert Chow on 10/02/2010

So I went to the ESRI DeveloperHub conference yesterday to start out my research for the Mapper component.  Despite being held in Birmingham, I did feel like we were in Scotland.  The Scottish seemed to be everywhere – in the crowd and on the stage.

The first person I spoke to (from, no prizes for guessing where, Scotland) works at a company similar to IGI, dealing with oil and gas in the North Sea. However, he doesn’t share my background – he is a geochemist, and a map analyst. Similarly, other attendees I spoke to were map analysts. Yet I did come across a few developers.

Unfortunately for me, these developers specialised in working with and creating products using ArcGIS. From what I learnt yesterday, this is a platform used for Geographic Information Systems, and is widely used in the majority of independent mapping applications. With the start of the conference showcasing the new ArcGIS 10, I can now get a feel for what I am up against when I come to create Mapper. And yes, once again, there is a lot to do. The rest of the day concentrated on using ArcGIS in server applications and wrapping it up in web-based APIs, such as Silverlight, JavaScript and Flex. Although I did find this interesting at first, and rather attractive, I didn’t really get much out of it. There were odd moments showing how an application could and would be used, but for most of the time there didn’t really seem to be much substance; the talks concentrated (very shallowly, I’ll admit) on how each demonstration wrapped around the existing ArcGIS server platform to deliver the web application.

From what I gathered, those working with ArcGIS were very attracted to presenting their current work using tools such as Silverlight, JavaScript and Flex; in fact, the day ended with pretty much an advertisement for ESRI’s training days on creating such applications. Although interested, I wasn’t overly so. Not to mention I didn’t really know what GIS was until now.

All was not lost.  You know you’re a developer when you start laughing at something like this.

This Way To Java.  A sign posted for Java, Isle of Mull, Scotland. This image is from http://chillyinside.com/blog/

It was strange to see the level at which some people were developing. From the questions, they certainly knew their way around ArcGIS, and in terms of computer science, they seemed to know an awful lot more than me. A lot of developers had heard of Python, and ArcGIS 10 supports it with ArcPy. Yet despite more than 90% of the room being .NET developers, the majority had not heard of the Microsoft design patterns – in particular, the command pattern using the ICommand interface.

For the time being, I still want to complete both Grapher and Mapper before I finish my placement.  I think I can do it.  I am able to do it.

Posted in Computing, Placement | 1 Comment »

One More Bite…

Posted by Robert Chow on 07/02/2010

I’ve been up in Manchester for the last few days, playing at being a student again. And honestly, it’s felt rather good.

On my first day back, I signed for a house for the next academic year, and had a look around. It’s a pretty nice place; and thankfully, I didn’t have to do a single thing to find it! So I’m pretty grateful for all the hassle my future housemates went through to get the place – I’m sure I can find plenty of more calming activities to do!

I’ve also just got back from the most amazing all-you-can-eat ever! It’s a Brazilian steakhouse called Bem Brazil in the Northern Quarter of Manchester. We managed to scoff down 12 different cuts of meat, ranging from the absolutely gorgeous beef fillet steak down to the little chicken hearts, which I wasn’t particularly fond of.

I’m in Manchester because I’m going to a conference on Tuesday in Birmingham, and thought I’d take a long weekend at home since it’s only a couple of hours extra on the train. It’s a software developer’s conference about mapping software – hopefully something I’ll be touching upon later in my placement.  The link for anyone interested is http://www.esriuk.com/trainingevents/events/developerhub090210/.

The plans for the future are a bit up in the air for the time being. Now with Renderer nearly finished, I am currently looking at spiking Grapher, and after that, possibly starting on Mapper – hence the conference. However, a lot of ideas have been proposed for Grapher, and it’s come to the point where it might be best for me to develop Grapher versions 1 and 2 during the rest of my placement, and to do Mapper in the summer if I am given the chance to come back. My intention is to do as much as possible – I want to, and if I don’t deliver, not only will I be annoyed and disappointed with myself, I’ll also think it’s a real shame that I am unable to bring to the table a good set of components usable in the next program.

For the last few weeks I haven’t been blogging as often as I’d like, and that’s because things are just taking longer than usual. And it’s not as if we have numerous time-consuming talks about health & safety to blame either. Time after time, ideas come and go. Try one out, and you end up in a dead end, so you’re forced to try another. Sometimes it’s not until after 4-5 attempts that you quite get it right, and even then you’re not necessarily happy with the result, because at the end of the day it’s not very good code. So you have to try again.

My supervisor’s been telling me to come up with more than a few ideas before trying them out. But I’m not much of a planner; I like to get stuck in as soon as possible. I guess I would like to become more of that planner type of person – it’d also mean one more step in the right direction for personal development. If you get it right and plan well, you save yourself a lot of time by not having to implement all the wrong ideas.

Heck, even that bit of this blog wasn’t planned.

Posted in Computing, Placement | Leave a Comment »

“{Binding OpenGl To WPF}” : Part 2

Posted by Robert Chow on 03/02/2010

This is the second part of my journey to creating an application using WPF, wrapping the Renderer demo I made earlier. You can find the first part here.

Subconscious Intentions

So, initially in this post, I was going to introduce the concept of the model-view-viewmodel, and code inline and code behind.  Or at least, how I understand them.  Yet due to recent activity, that will have to wait till the next post.

The great thing about writing these blog posts is that in order for me to seem like I know what I’m doing, I have to do the research. I’ve always been taught that during an exam, you should answer the question as if the examiner doesn’t know anything about the subject bar the basics. This way, you answer the question in a detailed and chronological order, thus making complete sense and gaining full marks. In a way, writing these posts is quite similar. Sure, I may have broken the rule more than a few times, especially when I’m rushing myself, but I try to explain the topics I cover in enough detail for others than myself to understand them. After all, although I am writing this primarily for myself to refer back to when I get stuck, it’s nice to know that others are benefiting from this blog too.

It’s not because I haven’t done the research that I can’t bring to the table what I know (or seem to know) about the model-view-viewmodel, code inline and code behind. It’s because, during the research and much tinkering around, I thought I should cover the main drive for using WPF in the first place, and that is to incorporate an OpenGl control.

From Scratch

So a couple of weeks ago, I did actually manage to create an OpenGl control and place it inside a WPF form. The way I did this was a bit long-winded compared to how I used to do it, in a Win32 form. Instead of using the SimpleOpenGlControl provided by the Tao framework, I went about creating the control entirely from scratch.

For this, I could have done all the research and become an expert at creating the control manually. But that simply wasn’t my intention. Luckily for me, the examples provided by Tao included the source code, and after a quick copy and paste, I more or less had the code ready and waiting to be used.

One thing I am more aware of now is that you need two things: a rendering context and a device context. The rendering context is where the pixels are rendered; the device context is the form the rendering context will sit inside. Of course, the only way to interact with these is through their handles.

To create a device context in the WPF application, I am using a Windows Forms Host. This allows you to host a windows control, which we will use as our device context. The XAML code for this is relatively simple. Inside the default grid, I have inserted the WindowsFormsHost as the only child element. However, for the windows control, I have had to take from a namespace other than the defaults provided. To declare a namespace, declare the alias (in this case I have used wf, shorthand for windows forms) and then follow it with the namespace path. Inside the control, we are also going to use the x namespace. Using this, we can assign a name to the control, allowing us to retrieve the handle to use as the device context.

<Window x:Class="OpenGlControlInWPF.Client"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:wf="clr-namespace:System.Windows.Forms;assembly=System.Windows.Forms"
        Title="Client" Height="480" Width="640" WindowStyle="SingleBorderWindow">
    <Grid ClipToBounds="True">
        <WindowsFormsHost ClipToBounds="True">
            <wf:Control x:Name="control"/>
        </WindowsFormsHost>
    </Grid>
</Window>

With the form done, we can now dive into the code in the C# file attached to the XAML file. It is here where we create the rendering context and attach it to the device context. I’m not really an expert on OpenGl at all when it comes to this kind of thing, so I’m not going to show the full code. If you’re really stuck, the best place I can point you to is NeHe’s first lesson, making an OpenGl window. If you’re using the Tao framework and you installed all the examples, the source code should come with it.

The WPF interaction in the C# code is very minimal. All we need to do with the XAML file is retrieve the handle associated with the control we declared beforehand. This is done simply by using the name we gave it in the XAML and letting Tao do the rest. We hook this up to retrieve a rendering context, and then we show the form.

Gdi.PIXELFORMATDESCRIPTOR pfd = new Gdi.PIXELFORMATDESCRIPTOR();
// use this to create and describe the rendering context – see NeHe for details

IntPtr hDC = User.GetDC(control.Handle);
int pixelFormat = Gdi.ChoosePixelFormat(hDC, ref pfd);
Gdi.SetPixelFormat(hDC, pixelFormat, ref pfd);
IntPtr hRC = Wgl.wglCreateContext(hDC);
Wgl.wglMakeCurrent(hDC, hRC);

this.Show();
this.Focus();

All there is left to do is to go into the OpenGl loop of rendering the scene at each frame. Unfortunately, because I am used to the SimpleOpenGlControl provided by Tao, I’ve never needed to deal with this whilst I’ve been on placement. All I had to do was call simpleOpenGlControl.Invalidate() and the frame would automatically refresh for me. Lacking that, I decided to place the draw scene method in a while(true) loop so the rendering would be continuous. And, true to my suspicions, it didn’t work. The loop was “throttling” the application when running – I was unable to interact with it because all the runtime was concentrated on rendering the scene. There was no interrupt handling, so pressing a button or typing a key didn’t have any effect whatsoever.

I did try to look for answers to the throttling, and I stumbled across something else.  Another solution to hosting an OpenGl control in WPF.

The Better Solution

Going back to the first post of this multi-part blog, you might recall I am using a Canvas to host the OpenGl control. I found this solution only a couple of days ago, thanks to a recent post on the Tao forums. It uses this canvas in the C# code and assigns it a WindowsFormsHost. This in turn is assigned a SimpleOpenGlControl. A SimpleOpenGlControl! This means that I am able to use all the abstractions, methods and properties that the SimpleOpenGlControl has to offer, without having to create my own manually.

First off, we have to assign the canvas a name in the XAML code so we can reference it in the C# counterpart.

<Grid Background="AliceBlue">

    <Border Grid.Row="0" Grid.Column="0" Background="Black" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="6, 6, 3, 3">
        <Canvas ClipToBounds="True" Margin="2" x:Name="canvas"/>
    </Border>
</Grid>

The C# code for creating the SimpleOpenGlControl is short and sweet.  We create the WindowsFormsHost, attach a newly created SimpleOpenGlControl and attach the whole thing to the Canvas.  Here is the entire code for creating this.

namespace OpenGlWPFControl
{
    using System.Windows;
    using System.Windows.Forms.Integration;
    using Tao.Platform.Windows;

    /// <summary>
    /// Interaction logic for Client.xaml
    /// </summary>
    public partial class Client : Window
    {
        public Client()
        {
            InitializeComponent();

            this.windowsFormsHost = new WindowsFormsHost();
            this.simpleOpenGlControl = new SimpleOpenGlControl();
            this.simpleOpenGlControl.InitializeContexts();
            this.windowsFormsHost.Child = this.simpleOpenGlControl;
            this.canvas.Children.Add(windowsFormsHost);
        }

        private WindowsFormsHost windowsFormsHost;
        private SimpleOpenGlControl simpleOpenGlControl;
    }
}

Now we have the SimpleOpenGlControl set up, we simply add the event for rendering and we’re nearly done. There is one problem, however: the windows forms host does not know what size to take. We add an event for when the canvas is resized, to update the windows forms host size too.

public Client()
{

    this.simpleOpenGlControl.Paint += new PaintEventHandler(simpleOpenGlControl_Paint);
    this.canvas.SizeChanged += new SizeChangedEventHandler(canvas_SizeChanged);
}

void simpleOpenGlControl_Paint(object sender, PaintEventArgs e)
{
    // do your normal opengl drawing here
    this.simpleOpenGlControl.Invalidate();
}

void canvas_SizeChanged(object sender, SizeChangedEventArgs e)
{
    this.windowsFormsHost.Width = this.canvas.ActualWidth;
    this.windowsFormsHost.Height = this.canvas.ActualHeight;
}

A Revelation To An Even Better Solution

So I said I was going to talk about other topics before delving into my journey of placing an OpenGl control inside a WPF application, and that’s because of what I found myself accomplishing last night. In the first blog post of this multi-part series, I found myself using a Canvas to hold a Windows Forms Host, and in turn, to parent a SimpleOpenGlControl. Yet with further understanding of WPF, a revelation came. The reason I was unable to insert a SimpleOpenGlControl directly into the application beforehand was that I wasn’t entirely aware of namespaces in XAML. Soon after finding out more about them, I found I am able to access the SimpleOpenGlControl by referencing Tao, hence removing all the background work the C# had to do.

<Window
    xmlns:tao="clr-namespace:Tao.Platform.Windows;assembly=Tao.Platform.Windows"
    …>
    <Grid Background="AliceBlue">

        <Border Grid.Row="0" Grid.Column="0" Background="Black" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="6, 6, 3, 3">
            <WindowsFormsHost Margin="2" ClipToBounds="True">
                <tao:SimpleOpenGlControl x:Name="simpleOpenGlControl"/>
            </WindowsFormsHost>
        </Border>
    </Grid>

The only extra thing to add to this is the event for rendering, which I included before. I can omit the need to resize the canvas, partially because there is now no canvas, and also because the WindowsFormsHost ClipToBounds property is true.

In the next part of this series I will hopefully be touching upon what I intended on touching upon in the first place, the model-view-viewmodel pattern.

Posted in Computing, Placement | 1 Comment »

“{Binding OpenGl To WPF}” : Part 1

Posted by Robert Chow on 02/02/2010

I’ll admit, it’s been a while.  But since I’ve been back, I’ve been working on a fair few things simultaneously, and they’ve taken a lot longer than planned.  But alas, here is one of them, in a multi-part blog.

Windows Presentation Foundation

Remember how I mentioned WPF a few times, but never really got into it? Here’s a statement from the Microsoft website:

“Windows Presentation Foundation was created to allow developers to easily build the types of rich applications that were difficult or impossible to build in Windows Forms, the type that required a range of other technologies which were often hard to integrate.”

Ha.

It’s not easy to develop on at all, especially for a developer just starting their first WPF project.  Not compared to creating a Win32 Form with a visual editor to say the least.

But it does allow you to build very rich applications that appeal to many by their look and feel alone.  And they look a lot better than Win32 forms too.

Remember that demo for Renderer? I originally said I was going to try to incorporate fonts into it, but that’s still one bridge I have yet to cross. Instead, I decided to learn a bit of WPF. What do you think?

Demo In WPF.  Here I have incorporated the demo into a WPF form, and included panels on the right and bottom.  The right panel depicts the current state of the game, and allows the user to change the camera position and change the texture used when a cube face is picked.


Demo in Win32 Forms Mock. I have included a mock of the original in a Win32 Form. I think it’s fair to say that you would all rather use the WPF version.


XAML

The language of WPF is XAML, the Extensible Application Markup Language, which is very similar to XML. It uses the open/close tag notation – one which I’m not particularly fond of, but it does mean that everything is explicit, and being explicit is good. Like all other languages, it’s very useful to know your ins and outs and what is available to use, and XAML is no exception to this rule. As a result, there are many ways, some better, some far worse, of creating the form I have made. As I am no expert in this at all, I am going to leave it as it is, and take a look at the code I have generated for creating the form base.

To create this, I used Visual C# 2008 Express Edition.  This has proved rather handy as it updates the designer view as the code is changed.

Starting a WPF project gives you a very empty template. With this come two pieces of code, one in XAML and the other in C#. For the time being, we are just going to concentrate on the XAML file. This is where we create our layout. The initial piece of XAML code is very simplistic, and doesn’t really mean too much.

<Window x:Class="OpenGlWPFControl.Client"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Client" Height="665" Width="749">
    <Grid>
    </Grid>
</Window>

The first few lines relate to the namespaces being used within the XAML file. The tags marked Grid relate to the box inside the window. This is a new concept different to the panels in a Win32 Form. Instead of having top, bottom, left and right panels, the grid can be split into a number of columns and rows using definitions.

Here I have split the grid into 4 components using 2 rows and 2 columns. The code is relatively easy to deal with. It also allows you to specify minimum, maximum and default dimensions.

<Grid Background="AliceBlue">
    <Grid.RowDefinitions>
        <RowDefinition Height="*" MinHeight="200"/>
        <RowDefinition Height="100"/>
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*" MinWidth="200"/>
        <ColumnDefinition Width="200"/>
    </Grid.ColumnDefinitions>
</Grid>

Here, I have specified the default heights for each row and the default widths of each column. The “*” is simply a marker to take whatever space is left. As an extra, I have also set the grid background colour.

Now that we have split the initial grid, we can now start to populate it.  This can be with other layout panels, like the grid, or a stackpanel or dockpanel and so forth to add extra layout details.  It can also be filled with more meaningful objects such as a label or a text box.

Starting off, I want to place the OpenGl control in the top-left panel.  For the time being, we are going to mock this using a Canvas.  This item will be used later in the C# code to attach the OpenGl control, but for the time being we are only handling XAML.  In addition, I have also decorated the canvas with a border.  Using a combination of the canvas and border properties, I have managed to achieve the rounded edges, making it more aesthetically appealing.

<Grid Background="AliceBlue">

    <Border Grid.Row="0" Grid.Column="0" Background="Black" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="6, 6, 3, 3">
        <Canvas ClipToBounds="True" Margin="2"/>
    </Border>
</Grid>

This piece of code stays within the Grid tags, as it is a grid child. I have explicitly stated which row and column of the grid it sits in, inside the Border tag.

The bottom panel is done in a similar fashion, only this time the border decorates a textblock. In order to scroll a textblock, it needs to be decorated with a scroll viewer.

<Grid Background="AliceBlue">

    <Border Grid.Row="1" Grid.Column="0" Grid.ColumnSpan="2" Background="White" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="6, 3, 6, 6">
        <ScrollViewer Margin="2">
            <TextBlock/>
        </ScrollViewer>
    </Border>
</Grid>

There is only one small change we have to make so the panel is not confined to one grid space. This is done using the Grid.ColumnSpan property.

Now with only one panel left, I have decided to make my life a little easier by adding in extra grids. These are done in exactly the same way as the initial grid. Using what I have done already, and combining it with new elements, the last panel is added.

<Grid>

    <Border Grid.Row="0" Grid.Column="1" Background="White" BorderThickness="1" BorderBrush="Navy" CornerRadius="5" Margin="3, 6, 6, 3">
        <Grid>
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto"/>
                <RowDefinition Height="Auto"/>
                <RowDefinition Height="*"/>
                <RowDefinition Height="30"/>
                <RowDefinition Height="30"/>
                <RowDefinition Height="60"/>
            </Grid.RowDefinitions>
            <Grid Grid.Row="0">
                <Grid.RowDefinitions>
                    <RowDefinition/>
                    <RowDefinition/>
                </Grid.RowDefinitions>
                <Label Grid.Row="0" HorizontalAlignment="Left" VerticalAlignment="Bottom" FontSize="36" Content="Score:"/>
                <Label Grid.Row="1" HorizontalAlignment="Right" VerticalAlignment="Top" FontSize="48" Content="0"/>
            </Grid>
            <Grid Grid.Row="1">
                <Grid.RowDefinitions>
                    <RowDefinition Height="50"/>
                    <RowDefinition Height="Auto"/>
                    <RowDefinition Height="Auto"/>
                </Grid.RowDefinitions>
                <Label Grid.Row="1" HorizontalAlignment="Left" VerticalAlignment="Bottom" FontSize="24" Content="Lives:"/>
                <Label Grid.Row="1" HorizontalAlignment="Right" VerticalAlignment="Bottom" FontSize="24" Content="0"/>
                <Label Grid.Row="2" HorizontalAlignment="Left" VerticalAlignment="Bottom" FontSize="24" Content="Level:"/>
                <Label Grid.Row="2" HorizontalAlignment="Right" VerticalAlignment="Bottom" FontSize="24" Content="0"/>
            </Grid>
            <Label Grid.Row="3" HorizontalAlignment="Center" VerticalAlignment="Bottom" Content="Camera Position"/>
            <Grid Grid.Row="4">
                <Grid.ColumnDefinitions>
                    <ColumnDefinition/>
                    <ColumnDefinition/>
                </Grid.ColumnDefinitions>
                <Button Grid.Column="0" Margin="15,0,15,0" Content="&lt;&lt;"/>
                <Button Grid.Column="1" Margin="15,0,15,0" Content=">>"/>
            </Grid>
            <Button Grid.Row="5" Margin="15,15,15,15" Content="Change Texture"/>
        </Grid>
    </Border>
</Grid>

As a result, we achieve a form created entirely from XAML.

Unfortunately however, there is no logic behind this. In part 2, we start looking at the other parts of the code and insert an OpenGl control into the application.

Posted in Computing, Placement | 2 Comments »