Planning the Spontaneous

It's more than just a blueprint.

Archive for November, 2009

PhD : Postgraduate

Posted by Robert Chow on 26/11/2009

So I went to the postgraduate open day yesterday at the School of Computer Science at Manchester just to check things out. Unfortunately for me, a lot of the talks were about masters courses, with only about 20 minutes dedicated to information about PhD courses. And there wasn't a lot of information that I didn't already know.

So the course is 3 years, there are a lot of fully-paid studentships offered by the school (which is the main reason I'd want to apply), and the minimum requirement is a 2.1 honours in a computer/engineering/physics/maths related degree. To apply, you need 2 written references, and also a proposal about your area of research. Failing the latter, you can choose an area of research from the list already proposed by the school itself.

I do suppose, however, that there isn't really much to learn about doing a PhD until you choose the area of research you want to specialise in. So it's essentially what you make of it.

I think for the time being, it'll be just a thought, and it will probably stay that way until I reach the final year of my degree next September.


Posted in Computing | 2 Comments »

Ryuichi Sakamoto

Posted by Robert Chow on 24/11/2009

Another computer-unrelated post.

The reason I’ve been in Manchester for the past week or so, is for my love of live music. You don’t really get that in the North of Devon as such, and if you do, then it’s usually some form of low-key entertainment – possibly good, but no reason to travel down the road for (bearing in mind the road is a good 2km or so).

So I’ve already been to see Faithless at the Warehouse Project last weekend (rather disappointing I admit, but I got tickets for free), and a small band from Sheffield by the name of 65DaysOfStatic (absolutely amazing).

But the main act was tonight: a Japanese pianist by the name of Ryuichi Sakamoto. Seeing him at the Bridgewater Hall is something I will always remember.  The acoustics were brilliant, and the stage, decorated with a large projection screen providing visual entertainment, was just one highlight of many.  As for Sakamoto himself, he was flawless.

If you don't understand the pure beauty of this track, then I pity you, I do.

Posted in Music | 1 Comment »

Renderer: Textures

Posted by Robert Chow on 22/11/2009

Carrying on with the Renderer library, the next feature I decided to implement was textures.  Being able to render the scene to texture is a major function, and for this library a must.  It is also a requirement to be able to use those textures as part of a scene too.  I had not previously used vertex buffer objects with textures either, so it was a good time to test this out and make sure it worked.

I decided to demo a similar function to the previous one, but adding in textures too.  This demo takes the random triangles, renders them to texture, and then redraws another scene using those textures.

Renderer: Textures.  This scene is made up of 4 images, using the same texture generated from code very similar to that of the vertex buffer objects demo.

As you can see, the original scene has been manipulated in a way that would be hard to recreate without first rendering to texture.  The texture size I used was 128 x 128, as was the viewport when I first rendered the triangles.  This scene, however, is of size 512 x 512.  The original texture is seen in the bottom-right hand corner, and all the other textures have been rotated and resized.  The largest of the textures, bottom-left, is slightly fuzzy – this is because it is twice the size of the original.

Renderer API

The way I have implemented the rendering to texture function on the API is not much different to rendering as normal.  It’s a simple case of:

Renderer.RenderToDeviceContext();

or

Renderer.RenderToTexture(..params..);

Technically, rendering to texture will also render to the device context.  The main difference, however, is that after rendering to the device context, the render to texture method takes the parameters given and makes a simple call to glCopyTexImage2D – as per normal when a client wishes to render to texture using OpenGL.  To save on code duplication, I have used the decorator pattern – the render to texture class "decorates" the original class.  In doing so, it means that I can set up texturing by first binding a texture, make the original call to render to the device context, and then finish off by copying the result into the bound texture.
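As a sketch of how that decorator fits together – the names here are hypothetical, not the real Renderer API, and the OpenGL calls are reduced to log entries so the sequencing is visible:

```csharp
using System.Collections.Generic;

// A sketch of the decorator described in the post. Names are made up,
// and the GL work is represented by log entries rather than real calls.
public interface IRenderTarget
{
    void Render(List<string> log);
}

// The original class: renders the scene to the device context.
public class DeviceContextTarget : IRenderTarget
{
    public void Render(List<string> log)
    {
        log.Add("render scene to device context");
    }
}

// The decorator: binds a texture, delegates to the original render,
// then copies the result into the bound texture (glCopyTexImage2D).
public class TextureTarget : IRenderTarget
{
    private readonly IRenderTarget inner;

    public TextureTarget(IRenderTarget inner)
    {
        this.inner = inner;
    }

    public void Render(List<string> log)
    {
        log.Add("bind texture");
        inner.Render(log);
        log.Add("glCopyTexImage2D into bound texture");
    }
}
```

Decorating the device context renderer then gives the bind, render, copy sequence described above, without duplicating any of the device context code.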

Posted in Computing, Placement | 1 Comment »

Renderer: Vertex Buffer Objects

Posted by Robert Chow on 19/11/2009

So I've started on the Renderer library, and although this update might be coming a couple of weeks late, there are a few demos that I've managed to produce as the library is slowly being developed. Much of the next few Renderer posts will be short, and will not contain much code – most of the code I am using I have already explained in earlier posts.

The first stage of Renderer I have incorporated is the use of vertex buffer objects. This is because they are pretty much the core drawing mechanism I will be using to draw shapes on the screen. The first demo was simply to make sure that these vertex buffer objects work.

Using a blending function to incorporate the alpha values, I have loaded the buffers with several objects – each a triangle of random position and size, with 3 random colours, one for each vertex.  The number of triangles is also random.
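A sketch of how that random triangle data might be generated before being loaded into the buffers – the layout of 2 floats per vertex and 4 per colour is my own assumption for illustration, not necessarily what the demo uses:

```csharp
using System;

// A sketch of generating random triangle data before it is loaded
// into a vertex buffer object. The layout (2 floats per vertex
// position, 4 floats per RGBA colour) is assumed for illustration.
public static class RandomTriangles
{
    public static void Generate(Random rng, int count,
                                out float[] vertices, out float[] colours)
    {
        vertices = new float[count * 3 * 2];  // 3 vertices per triangle, x and y
        colours  = new float[count * 3 * 4];  // one RGBA colour per vertex

        for (int i = 0; i < count * 3; i++)
        {
            vertices[i * 2 + 0] = (float)rng.NextDouble();  // random position
            vertices[i * 2 + 1] = (float)rng.NextDouble();
            colours[i * 4 + 0] = (float)rng.NextDouble();   // random colour...
            colours[i * 4 + 1] = (float)rng.NextDouble();
            colours[i * 4 + 2] = (float)rng.NextDouble();
            colours[i * 4 + 3] = (float)rng.NextDouble();   // ...and random alpha
        }
    }
}
```

The arrays would then be handed to glBufferData as usual, with the triangle count itself drawn from the same Random instance.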

Renderer: Vertex Buffer Objects.  The screenshots show the results after loading the buffers with random triangles.

It might not be the most exciting demo in the world, but at least it shows that my vertex buffer objects work.  As a result, I am able to carry on, knowing that the most fundamental part of the Renderer library is in place.  For now.

Posted in Computing, Placement | 1 Comment »

Work || Academic

Posted by Robert Chow on 16/11/2009

So I went into university today, just to say hi to all my friends, and what not, and I managed to get dragged into the Advanced Graphics lecture they had.  And I have to say, I did find it all very interesting.

Although I'm pretty much doing just OpenGL at IGI for now on the graphics side, there is no theory based around the work I am doing.  We (I say that as if I'm taking the course, but I'm blatantly not) learnt about surface reflectance, and 3 different models used to calculate how light interacts with a particular object.

Without going into too much detail (because I can’t):

Bidirectional Reflectance Distribution Function: simple reflection from a surface

Bidirectional Surface Scattering Reflectance Distribution Function: subsurface scattering for translucent materials

Bidirectional Texture Function: to simulate self-shadowing/scattering

Very interesting, and very theoretical.

But that’s the problem.

The majority of academic material learnt during university, college, or school doesn't become relevant in the working world, unless you are doing a job such as research.  You don't need to know the theory, you just need to know how to apply and use it – and the majority of the time, it's already done for you.

Take OpenGL for example.  I can't say I'm an expert at OpenGL, and I don't think I ever will be.  But in the context of these theories, OpenGL takes in parameters to describe the light model.  All the theory I learnt during university, I don't need to know, for OpenGL does all of it for me – all I need to know is how to use the interface to create the model.  And this annoys me sometimes.

A lot of stuff I learn at university will be very irrelevant when I come to work in a job.  Of what I learn, a lot will be interesting; a lot, not so.

And I've been battling with myself on whether or not to take a PhD after I graduate.  The School of Computer Science at the University of Manchester offers very good scholarships for PhDs, and in the current economic climate, it might definitely be something worth considering.  And maybe doing some research into graphics might be rather ideal for myself.  Okay, I can use that research to create a graphical program that uses the theories of my research, but then where does it go from there?  It may be taught in other universities, but the students don't actually need to know how it's done – it's already been done for them.  Like every argument, there's always a for and an against.

There’s a PhD open day a week next Wednesday – although it’s for entry in September 2010, I guess it wouldn’t hurt to keep my options open.

Posted in Computing, Placement | 1 Comment »

Refresh(F5)

Posted by Robert Chow on 14/11/2009

So we’ve all heard the term “turning over a new leaf”.  I’m not quite doing that.

You may have noticed that I haven’t been updating this as regularly as I may have liked.  And that the posts have been rather long, and can get a little tedious.  But, in all honesty, I don’t believe they’re bad.  They just could be, better.  And thus, I am not turning over a new leaf, but just improving on what I already have.

This means that the blog will introduce shorter stories – not ones which span the whole page and take a week to write.  It will also introduce more technical blog posts, more than already.  It will also introduce less technical blog posts, less than already.  And also blog posts related to my placement, but not necessarily computing.  This blog is, after all, my industrial experience.

So we start off with a non-computer related post.  And funnily (and ironically) enough, in Manchester – not Bideford.  I'm happily taking a couple of weeks "working from home", or time in lieu.  And getting here was the worst journey I've had to endure, ever.

From Town…

I’ve flown from Manchester to New Zealand – 27 hours that took.  Compared to the 9 hour stint from my doorstep in Bideford to where I was staying in Manchester, it seemed like what I was about to do was easy.

4:00pm
I set off in the car to Barnstaple train station, knowing full well I wanted to make it early so I could get my train tickets with enough time to spare.  Traffic the second we hit Bideford, but at least it passed a few minutes afterwards.

4:30pm
I arrive at Barnstaple train station – the train wasn’t for another half hour.  Oh, and I remember it wasn’t a train – there was a rail replacement service instead.  That meant coach.

… To City

5:00pm
We set off.  The coach wasn’t crowded.  Comfortable actually.

6:30pm
We arrive at Exeter.  And I think I’m getting old.  I’m starting to feel travel sick now – winding country roads, enclosed by trees either side, in the dark and on a coach – I’m not claiming to have the strongest mind or stomach for this.  But I survive.

Still 6 More Hours

7:30pm
We board the train to Birmingham from Exeter.  It’s only 2 minutes late, which is fine, as there is a 25 minute window to change train at Birmingham.  Plenty of time.

8:30pm
We arrive at Bristol Temple Meads.  Drunk youths board the train to get to Bristol town centre.  I’m slightly worried because at this point I remember this place as one of the worst hit areas for racial abuse.  But in the public eye, I’m rather calm.  I just want to set off.

8:40pm
We should have left 10 minutes ago.  We're stuck on the platform, waiting for a driver to take our train to Birmingham.

8:50pm
Still waiting.  Apparently there is a train heading straight to Manchester from another platform, but it hasn't arrived yet – that's been delayed too.  Many of us are rushing from platform to platform, wondering which train will go first.

9:00pm
We finally set off.  This is bad news – we have to make up at least 5 minutes to allow us to get onto the connecting train.  If we don't pass through Cheltenham Spa with ease, we've lost our chance.

10:00pm
We should be near Birmingham by now.  We’re not – we’re half an hour away, and the Manchester train is to set off before we arrive – and it’s also the last direct train of the night too.  We’re all worried.  Alas, we have good news.  Credit to the very brave conductor giving the announcement.  He was hoping to be able to delay the train setting off for Manchester, but there’s no need.  The train travelling from Birmingham to Manchester – turns out we’re already on it!  The last hour and a half spent worrying was all for nothing.

Relief At Last

10:30pm
We set off from Birmingham with haste to make up for a few minutes.  All is well.

Nearly Home

12:15am
I arrive in Manchester.  I can feel a grin on my face forming, and I can’t help myself.

Rather a different change to previous posts, but I hope it’s a refreshing new read!


Posted in Placement, Travel | 2 Comments »

Anti-Alias = Blend * (1 – Alpha);

Posted by Robert Chow on 05/11/2009

So the term alpha, in computing, has many meanings. There is alpha testing, where a piece of software is tested by giving it out to the internal users of an organisation to tear to pieces – the stage before the public get their betas (I think this is the appropriate time to say that I just got VS2010 beta 2 the other day, and it’s as sexy as hell, but more on that another time). It can also be used to describe the weighting of data in machine learning. But I’m not doing machine learning. Or testing (yet). I’m doing graphics.

The alpha I am referring to is the alpha channel used to represent a colour, and it's very handy.  It can be a little misunderstood though, as I learnt during my experiments.

The Alpha Channel

To represent a color, you usually have the option of choosing its red, green and blue components, but what if you want it to be slightly translucent?  There is a fourth component, alpha – this determines how "full" the color is.  On a scale of 0 to 1, 0 is transparent and 1 is opaque; anywhere in between is translucent.  This is fairly easy to declare in OpenGL.  Where red, green, blue and alpha are float values between 0 and 1:

Gl.glColor4f(red, green, blue, alpha);

Of course, OpenGL doesn't like to make life easy for us, no.  We also have to enable blending.  This allows the alpha channel to do its magic, and the blend function determines how the alpha channel is used.

Gl.glEnable(Gl.GL_BLEND);
Gl.glBlendFunc(Gl.GL_SRC_ALPHA, Gl.GL_ONE_MINUS_SRC_ALPHA);

The 2 parameters of the blend function determine how the alpha channel is used – the parameters I have used in the example are the ones best suited for transparency.  These parameters allow you to do lots of different blending techniques, such as blending towards no transparency or full transparency, or reverse blending – I'm not really the one to ask about how these work, because I'm not entirely sure – you're best looking it up.  Of course, the complexity doesn't stop there either.
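For the transparency parameters above, the blend works out per colour channel as result = source × sourceAlpha + destination × (1 − sourceAlpha) – that is what GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA select.  A small sketch of the arithmetic:

```csharp
using System;

// The transparency blend, per colour channel:
//   result = source * sourceAlpha + destination * (1 - sourceAlpha)
// i.e. what glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) computes.
public static class Blend
{
    public static float Channel(float src, float dst, float srcAlpha)
    {
        return src * srcAlpha + dst * (1f - srcAlpha);
    }
}
```

So a fully opaque source replaces the destination, a fully transparent one leaves it untouched, and anything in between is a weighted mix – a half-transparent white over a black background comes out mid-grey.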

A Trick of Light

When working with transparency, the order of the objects when being drawn is important.  Especially if you are using lighting.

As a general rule, for opaque objects, draw them in the order of front to back (taking the front as the nearest to you), and for transparent or translucent objects, back to front.

For opaque objects, this is because (I believe, but am not entirely sure) OpenGL is clever enough to deduce whether an object at the back needs to be drawn or not – it will not be drawn if there is an object in front of it, obscuring it from the camera view.  Of course, the only way to deduce whether there is an object in front of it is if that object has already been drawn.

For transparent and translucent objects, drawing them back to front ensures that the lighting algorithm in OpenGL works properly.  As each object is drawn, OpenGL will look at what is already behind it, and try to blend the current object with it accordingly.  An object at the front with a transparency property will not take into account what is behind it if the object behind is drawn after it.  I found this out when I was trying to draw partially transparent boxes.
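One way to enforce that back-to-front rule is simply to sort the translucent objects by their distance from the camera before issuing the draw calls.  A sketch, with a hypothetical Shape type that is not part of the Renderer library:

```csharp
using System.Collections.Generic;
using System.Linq;

// A sketch of the back-to-front rule: sort translucent objects by
// camera depth before drawing them. Shape is a made-up type.
public class Shape
{
    public string Name;
    public float Depth;   // distance from the camera; larger = further away
}

public static class DrawOrder
{
    // Furthest objects first, so each one blends with what is
    // already drawn behind it.
    public static List<Shape> BackToFront(IEnumerable<Shape> shapes)
    {
        return shapes.OrderByDescending(s => s.Depth).ToList();
    }
}
```

Drawing the sorted list in order means each object blends with everything already behind it.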

Transparency Drawing.  This shows how partially transparent objects have to be drawn in a particular order for OpenGL to display the scene properly.  The top two pictures show the components this cube is made of: a blue opaque base, and a clear, green front.  The picture in the bottom-left shows what happens when the translucent part, at the front, is drawn before the opaque part behind it.  It looks exactly the same as the picture in the top-right – this is because in both pictures, the front is blending with what was initially behind it – the black background.  The bottom-right picture shows the drawing when drawn in the correct order: base first, then the translucent front.

Anti-Aliasing

Anti-aliasing is an effect used to make lines and shapes appear smoother, and less discrete, or pixelated.  It does this by a slight blurring, which in effect uses blending.  As a result, the same rule of drawing transparent objects from back to front should be applied when trying to use an anti-alias effect.

There are 3 types of anti-aliasing that OpenGL offers with easy access.  For any other types of anti-aliasing, you will have to look elsewhere – I am no expert on OpenGL.  Essentially, if you refer back to when I was creating fonts, mip-mapping is used as a way of keeping detail and quality as images get smaller and smaller – you could argue that this is a form of anti-aliasing too.

The 3 OpenGL offers are line, point and polygon anti-aliasing, and unsurprisingly, they refer to lines, points and polygons.

These are easy to switch on with an enable call.  You can then provide OpenGL with a parameter on how you want the anti-aliasing effect to behave.

Gl.glEnable(Gl.GL_LINE_SMOOTH);
Gl.glHint(Gl.GL_LINE_SMOOTH_HINT, Gl.GL_FASTEST);

Gl.glEnable(Gl.GL_POINT_SMOOTH);
Gl.glHint(Gl.GL_POINT_SMOOTH_HINT, Gl.GL_NICEST);

Gl.glEnable(Gl.GL_POLYGON_SMOOTH);
Gl.glHint(Gl.GL_POLYGON_SMOOTH_HINT, Gl.GL_DONT_CARE);

The hints describe how you want OpenGL to do the anti-aliasing – you can either have it as the nicest OpenGL will allow, the fastest to calculate to save on speed, or no preference.

Anti-Alias Triangles.  The picture on the left is similar to the picture on the right, with the exception that anti-aliasing is on.  You can notice a subtle difference in the smoothness of the edges of the triangles.  OpenGL achieves this effect using blurring and blending techniques.

As a side note, I’ve now finally started the Renderer component!  So the majority, if not all, of what I have covered so far regarding OpenGL will go into this component.  This will mean that I won’t be updating big tutorial-esque posts on OpenGL anymore – there might be something every now and then.  I will however, be posting any revelations I come across!

 

Posted in Computing, Placement | Leave a Comment »