Planning the Spontaneous

It's more than just a blueprint.



glGenTexture(Atlas); // part2

Posted by Robert Chow on 19/03/2010

So from the plan I started to devise for my texture atlasing system, I decided to have a quick look at two methods that can help me split up the texture in an efficient and appropriate way, so I can maximise texture use. These are the Binary Split Partitioning method, and the use of Quadtrees.

Binary Split Partitioning

This method takes a space and splits it into two, keeping a reference to each part.  A binary tree keeps track of how the space is split at each stage.  The method works by taking the requested size (the image/sub-texture) and seeing if it fits into the space represented by a leaf node.  If not, it moves on to the next leaf node by traversing the tree.  To maximise efficiency, once a free space large enough for the requested size is found, that space is split into two; both halves become leaf nodes, one exactly the requested size (and this is also where the requested texture is allocated) and the other the remaining unallocated space.

Binary Split Partitioning.  This image represents the stages of inserting a sub-texture (1) into a space managed by BSP.  Starting off at the top, (1) is requested and checked to see if it fits into space A.  A is not an exact fit, so it is split into 2 partitions, B and C.  (1) is then compared to space B, which again is not an exact fit, so it is split into D and E.  As D is the perfect fit, (1) is allocated to this space.  The last image shows the final state of the tree after inserting (1).  The reason it cannot skip from stage 1 to stage 3 is that there is no way of splitting the space in a single step while keeping to an efficient BSP system – it is best to split each space linearly.  The splitting measurements are derived from the sub-texture to maximise efficiency.

I found this method rather easy to implement, especially with the help of this page on lightmaps.  Using it, I also managed to come up with a simple demo, showcasing how it fits several random sub-textures into one texture.

BSP Texture Atlas.  This image shows, in progression, my demo of inserting random-sized images into a texture managed by BSP.
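To give a flavour of how the insert works, here is a minimal sketch – the Node class and its members are hypothetical names for illustration, not the actual demo code, and it roughly follows the lightmap-packing approach linked above.

// Hypothetical BSP node: a sketch of the insert described above, not the actual demo code.
class Node
{
    public Node ChildA, ChildB;          // the two partitions created by a split
    public int X, Y, Width, Height;      // the space this node represents
    public bool Occupied;                // true once a sub-texture has been allocated here

    // Returns the node the sub-texture was allocated to, or null if it doesn't fit.
    public Node Insert(int width, int height)
    {
        if (ChildA != null)              // not a leaf: try each child in turn
            return ChildA.Insert(width, height) ?? ChildB.Insert(width, height);

        if (Occupied || width > Width || height > Height)
            return null;                 // this leaf is taken, or too small

        if (width == Width && height == Height)
        {
            Occupied = true;             // perfect fit: allocate here
            return this;
        }

        // Otherwise split: one child matches the requested size along one axis,
        // the other is the remaining unallocated space.
        if (Width - width > Height - height)
        {
            ChildA = new Node { X = X, Y = Y, Width = width, Height = Height };
            ChildB = new Node { X = X + width, Y = Y, Width = Width - width, Height = Height };
        }
        else
        {
            ChildA = new Node { X = X, Y = Y, Width = Width, Height = height };
            ChildB = new Node { X = X, Y = Y + height, Width = Width, Height = Height - height };
        }
        return ChildA.Insert(width, height);
    }
}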

Although easy to implement, there are a couple of drawbacks.  As this is an unbalanced binary tree, inserting a sub-texture becomes computationally very expensive as the tree structure grows.  Adding tree rotation to balance the tree is possible – it doesn’t affect how the tree works when inserting a sub-texture.  However, it does affect the tree when it comes to deleting allocated space.  This is because the nodes derive from their parents; losing the connection between a parent and its children (which occurs heavily during tree rotation) makes it very difficult to successfully delete an allocated space at node B, derived from parent A with sibling C.  If C is empty, A will be empty after the deallocation of space B – and thus B and C need not exist, leaving the big space referenced by A unsplit.  Taking away the natural bond between parent and child in a BSP tree through tree rotation (or any other method for that matter) makes this problem rather hard to tackle, as it could mean having to search the entire tree for the parent and the other sibling.  This leaves me with a problem – do I keep the tree balanced, making insertion less costly at the expense of not being able to delete; or do I allow for deallocation, whilst keeping insertion computationally expensive?  In the end, I will be inserting far more often than deleting, so I think the former is the best option for now.

Or I could look at quadtrees.

Quadtrees

This is where my planning goes wrong.  And I’ve also not researched quadtrees to the full extent.  I know what they are designed to do; I’m just unsure of how to implement them.

A quadtree is a structure whereby each node has four children.  Each node represents a square block in the texture, and each block is split into four smaller squares – the children.  This is repeatedly done until the size of a single block is a 1×1 square.

Quadtree Colour.  This displays how a texture is split up using quadtrees.  I have used colour to differentiate each node.  I guess it looks quite similar to one of those Dulux colour pickers.

As this structure is set in stone, its depth will be constant and we need not worry about insertion/deletion costs.  It’s already a rather shallow structure in itself because each node has 4 children, as opposed to the typical 2.

The structure works by marking off what is allocated at the leaf nodes, and this is interpreted at the top.  If all of a node’s children are allocated, then that node too is marked as allocated.  Using this system, we can find whether there is any space available in a node, and then check to see if the space is big enough to fit the sub-texture.  My problem is that I’m unsure of how to implement this efficiently.  For example, a very large sub-texture is inserted at the root, where everything is unallocated so far.  This takes up more than one quadrant of the root, and spills into the other quadrants.  By definition, the one quadrant the texture covers entirely is marked as allocated, yet the other quadrants are only partly allocated – some of their children are allocated but they also have children that are unallocated.  It is the next step which I cannot figure out: insert another large texture – there is enough space in the root to place it, but it needs to take up space from two already partially filled quadrants.  How does it know how these quadrants are related in terms of position, and how does it then interpret the unallocated space in each of these quadrants?  They obviously need to be interpreted as adjacent, but then the available children of these quadrants also need to be interpreted as adjacent too.  It’s a puzzle I’ve decided to give a miss, simply because I don’t have the time.

Quadtree Insert.  This depicts the problem I am having trouble with.  Inserting the first image will mark off the top-right quadrant as occupied, whilst the others are partially occupied.  It is easy to see that the second image will fit into the remaining space, but I am unsure of how to implement this.  This is because the second image will look at the root and find it is small enough to fit inside.  It will then look at the quadrants to see if it will fit in any of them, and it cannot because it is too big.  I do not know how to implement the detail where it looks at the quadrants together – I’m not even entirely sure if this is how quadtrees are supposed to work.
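For what it’s worth, the marking-off scheme itself is simple enough to sketch – the spanning-of-quadrants problem above is exactly what this hypothetical snippet does not attempt to solve.

// Hypothetical quadtree node: just the allocation book-keeping, with the
// cross-quadrant insertion deliberately left out.
class QuadNode
{
    public QuadNode[] Children;    // four children, or null at the 1x1 leaves
    public bool Allocated;         // a node is allocated once all four children are

    public QuadNode(int size)
    {
        if (size > 1)
        {
            Children = new QuadNode[4];
            for (int i = 0; i < 4; i++)
                Children[i] = new QuadNode(size / 2);
        }
    }

    // Re-interpret the allocation state from the leaves upwards.
    public bool Update()
    {
        if (Children != null)
        {
            Allocated = true;
            foreach (QuadNode child in Children)
                Allocated &= child.Update();
        }
        return Allocated;
    }
}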



glGenTexture(Atlas); // part1

Posted by Robert Chow on 08/03/2010

So besides having a problem with fonts, I also know that at a later stage I will eventually have to incorporate texture atlases. A texture atlas is a large texture that is home to many smaller sub-images, each of which can be accessed by using different texture coordinates. The idea behind this is to minimise the number of texture changes in OpenGL – texture changing is regarded as a very expensive operation and should be kept to a minimum.  Using this method would create a huge performance benefit, and it would be used extensively with fonts.


Helicopter Texture Atlas.  This shows how someone has taken a model of a helicopter and broken it up into its components, before packing it into a single texture atlas.  This image is from http://www.raudins.com/glenn/projects/Atlas/default.htm

Where To Start?

Approaching this problem, there are multiple things for me to consider.

The first is how to compose the texture atlas. Through a little bit of research, I have found a couple of methods to consider – the Binary Split Partitioning (BSP) algorithm, and using Quadtrees.

Secondly, the possible notion of an optimisation method. This should take all the textures in use into consideration, and then sort them so the number of texture changes is at a minimum. This would be done by taking the most frequently used sub-textures and placing them on to a single texture. Of course, with this alone there are many things to consider, such as what frequency algorithm I should use, and how often the textures need to be optimised.

Another thing for me to consider is the internal format of the textures themselves, and also how the user will interact with the textures.  It would be ideal for the user to believe that they are using separate textures, yet behind the scenes, they are using a texture atlas.

Updating an Old Map

As I already have the Renderer library in place, ideally the introduction of texture atlases will not change the interface dramatically, if not at all.

Currently, the user asks Renderer for a texture handle, with an internal format specified for the texture.  Initially this handle does not correspond to a texture, and seems rather pointless.  Of course, the obvious answer to that would be to only create a handle upon creating a texture.  The reason behind using the former of the two options was the flexibility it provides when a texture is used multiple times in a scene graph.  Using a texture multiple times in a scene means that the handle has to be referred to in each node it appears at in the scene graph.  Using the first option means that if the texture image changes, only the contents of the handle are changed, and the scene graph updates automatically.  If it were done using the latter of the two options, it would mean having to change the handle in every single node of the scene graph it corresponded to.  The handle acts like a middle-man.
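As a small sketch of that middle-man idea (the names here are made up, not the actual Renderer interface):

// Hypothetical illustration of the handle-as-middle-man idea: scene graph nodes
// hold on to the handle, so swapping the texture behind it updates them all.
class TextureHandle
{
    public int InternalFormat { get; private set; }
    public int TextureId { get; private set; }         // 0 until a texture is actually created

    public TextureHandle(int internalFormat)
    {
        InternalFormat = internalFormat;
    }

    public void Bind(int textureId)
    {
        TextureId = textureId;                          // every node referring to this handle now sees the new texture
    }
}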

When incorporating texture atlases, I would like to keep this functionality; but it does mean that I will have to change the interface in another area of Renderer, and that is texture co-ordinates.  Currently, the texture co-ordinates are nice and simple.  To access the whole texture, the user would use texCoords (0,0) for the bottom-left hand corner, and (1,1) for the top-right hand corner.  To the user on the outside, this should be no different, especially considering the user thinks they are just using a single texture.  To do this, a mapping would be needed to convert the (s,t) texture co-ordinates to the internal texture atlas co-ordinates.  Not really a problem.  But it is if we’re using Vertex Buffer Objects.
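Before getting to the Vertex Buffer Object complication, here is a sketch of what that mapping might look like, assuming each sub-image records its region within the atlas as a normalised offset and size (these names are made up for illustration):

// Hypothetical mapping from the user-facing (s,t) in [0,1] to atlas co-ordinates.
struct AtlasRegion
{
    public float OffsetS, OffsetT;    // bottom-left corner of the sub-image within the atlas
    public float WidthS, HeightT;     // size of the sub-image as a fraction of the atlas

    public void Map(float s, float t, out float atlasS, out float atlasT)
    {
        atlasS = OffsetS + s * WidthS;
        atlasT = OffsetT + t * HeightT;
    }
}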

Just using the simple (0,0) to (1,1) co-ordinates all the time means that we only need to create the texture co-ordinates in our vertex buffers once, and refer to them every time.  Yet they will have to change all the time if we are using internal mapping, especially if we are also going to be changing the texture contents that the handles are referring to.

I think the best way of solving this is to make sure that the texture co-ordinates inside the vertex buffer objects are recreated each time a new texture is bound to a handle.  How I go about this, I am not entirely sure, but it’s definitely something I need to consider.  This would mean a fairly tight relationship between the texture co-ordinates in a scene graph node and the texture they are bound to – of course they can’t really function without each other anyway, so it does make sense.  Unfortunately, the dependence is there and it always will be.

Structure

Before all of that comes into place, I also need to make sure that the structure of the system is suitable, on top of it being maintainable, readable and scalable.  For the time being, I am using a 3-tier structure.

Texture Atlas Structure.  This shows the intended structure of my texture atlas system.

Starting at the top, there is the Atlas.  This is essentially what the user interface will work through.  The Atlas will hold on to a list of Catalogs.  The reason I have done this is to separate the internal formats of the textures: a sub-image with intended internal format A cannot be on the same texture as a sub-image of internal format B.  Inside each Catalog is a collection of Pages with the corresponding internal format.  A Page is host to a large texture with many smaller sub-images.  Outside of this structure are the texture handles.  These each refer to their own sub-image, and hide the complexity of the atlas from the user, making it seem as if they are handling separate textures.
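In code, that structure might be sketched out roughly like this – the names and members are hypothetical, and heavily simplified:

using System.Collections.Generic;

// Hypothetical, heavily simplified sketch of the 3-tier structure.
class Atlas
{
    // One Catalog per internal format, so sub-images of different formats never share a texture.
    readonly Dictionary<int, Catalog> catalogs = new Dictionary<int, Catalog>();

    public Catalog GetCatalog(int internalFormat)
    {
        Catalog catalog;
        if (!catalogs.TryGetValue(internalFormat, out catalog))
            catalogs[internalFormat] = catalog = new Catalog(internalFormat);
        return catalog;
    }
}

class Catalog
{
    public readonly int InternalFormat;
    public readonly List<Page> Pages = new List<Page>();    // the large textures of this format

    public Catalog(int internalFormat) { InternalFormat = internalFormat; }
}

class Page
{
    public int TextureId;    // the large OpenGL texture holding many sub-images
    // ...plus whatever structure (BSP tree or quadtree) manages the space within it.
}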

For the time being, this is still a large learning curve for me, not only in learning OpenGL, but also in using design patterns and C# licks, and more recently, planning.  I don’t plan very often, so this is one time I’m hoping to do just that!  These posts may come a little slowly as I’m in Manchester at the moment, and although I thought my health was getting better, it seemingly isn’t all going away.  With a little luck, I’ll accomplish what it is that I want to do over the next couple of weeks.  In the next post, I hope to touch on BSP, which I have implemented, and Quadtrees – something I still need to look at.  Normally I write my blog posts after I’ve implemented what it is I’m blogging about, but this time I’ve decided to do it a little differently.  The reason for this is that after I implemented the BSP, I realised it may actually be a lot more flexible to implement Quadtrees instead.  Whether I do or not, I think, is dependent on time.  As long as there is a suitable interface, then all in all, it is just an implementation detail, and nothing major to worry about.  This is, at the current time, by no means a finished product.


Renderer: Demo

Posted by Robert Chow on 16/12/2009

So now that I’ve got a lot of my functionality going for me in this component, I thought I’d try and create a snazzy demo.

My supervisor’s obsessed with it. It’s like a little game – there is a rotating cube, and the challenge is you have to click on each face before you pass on to the next level. As you progress, the cube rotates just that little bit faster, making it harder to complete the challenge. Clicking on a face will turn that face on. Clicking on a face that is already on will turn it off – you can only progress when all 6 faces are switched on.

So you can immediately see that this little demo already involves the concept of vertex buffer objects (drawing of the cube), the scene graph (rotating cube), the camera (viewing the cube, using the scene graph) and picking (turning faces on/off). But what about the others?

Well, let’s place the scene in dark surroundings – we’re going to need lights to see our cube – that’s lighting handled. So where does texturing fit in?

Switching the faces on/off needs an indicator to show what state they are in.  We can easily do this by simply changing the colour of a face.  But that’s kinda boring.  So instead, we’re going to take the rendering-to-texture example, and slap that on a face which is on.  So that’s textures included.

Here are some screenshots of this demo.

Demo Setup. From left to right: initial scene – just a white cube; let’s dim the lighting to, well, none; include some lights: green – centre, red – bottom, blue – left; create a texture.

Demo Rotate. The demo in motion – the camera is stationary, whereas the cube is rotating, and the lights are rotating independently around it.

Demo Play.  The first and second images show what happens when a face is clicked – the texture replaces the blank face.  Once all the faces are replaced with the texture, the demo advances to the next level.

Demo Camera.  This shows added functionality of how a camera can be fixed on to a light, thus the camera is rotating around the cube too.  Here it is fixed on to the blue light.  This is done with ease by manipulating the scene graph.

I guess you can’t really get a grasp of how this demo works in its entirety – snapshots don’t really do it much justice.  I might try and upload a video or so – how, I’m unsure of – I’m sure I can probably find screen capture software around on the net.

And unfortunately, there is no score involved. Yet. That will come when fonts come.


Renderer: Picking

Posted by Robert Chow on 03/12/2009

Remember that decorator pattern I used for writing to textures?  It’s particularly useful for picking too.

Decorate the main drawer class with a picking class.  This allows the picking class to start off by setting the render-mode to select mode, and setting the X and Y co-ordinates of the mouse.  Render the image as normal, but because of the render-mode switch, the image is rendered using the select buffer instead of the device context.  After this, the picking class is also able to interpret the picking buffer, and then return the results.
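As a rough sketch of what that decorator might do under the hood with the Tao bindings – the names here (and the exact overloads) are illustrative, not the actual Renderer code:

using Tao.OpenGl;

// Hypothetical picking decorator around the main drawer (illustrative only).
class PickingDrawer
{
    const int BufferSize = 512;
    readonly int[] selectBuffer = new int[BufferSize];

    // drawScene stands in for the decorated drawer's normal draw call; it is
    // expected to push a pick name (Gl.glPushName) for each pickable object.
    public int Pick(System.Action drawScene, int mouseX, int mouseY)
    {
        int[] viewport = new int[4];
        Gl.glGetIntegerv(Gl.GL_VIEWPORT, viewport);

        Gl.glSelectBuffer(BufferSize, selectBuffer);
        Gl.glRenderMode(Gl.GL_SELECT);                      // switch to select mode
        Gl.glInitNames();

        Gl.glMatrixMode(Gl.GL_PROJECTION);
        Gl.glLoadIdentity();
        // Restrict the projection to a few pixels around the mouse (window Y is flipped).
        Glu.gluPickMatrix(mouseX, viewport[3] - mouseY, 3, 3, viewport);
        // ...multiply in the usual projection here, then render as normal.

        drawScene();

        int hits = Gl.glRenderMode(Gl.GL_RENDER);           // back to normal; returns the hit count
        // Each hit record in selectBuffer holds the name stack depth, the min/max
        // depth values and the names themselves - interpret these to find the pick.
        return hits;
    }
}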

Sounds fairly easy.  But I did run into a problem near the start.

First off, the mouse wasn’t picking where I wanted – it was interpreting the mouse as always being in the bottom-left hand corner.  Why so?

I later learnt that when setting up picking, the picking class multiplies the projection matrix by a pick matrix centred on the mouse (gluPickMatrix), so it can do the picking appropriately.  In the original drawer class, I was resetting the projection matrix by loading the identity, and thus wiping what the picker class had set up.  Kind of like defeating itself.  So I had to change this by taking the matrix resets out of the main drawing method and placing them outside – these would have to be called elsewhere, likely through the Renderer facade (another GoF pattern), before calling the draw.

So with the matrix set up properly, I did encounter another problem.  And this time it was hardware based.  It would pick appropriately on my supervisor’s computer, yet not on mine.

An example scenario:  there are several images overlapping each other, each with their own pick Id.

My supervisor’s computer: picks appropriately – returns the object with the nearest Z value (most positive) under the mouse (X,Y).

My computer: picks inappropriately – returns the object with the nearest Z value under the mouse (X,Y), providing the object has not moved in modelview space in terms of its Z value.  This accounts for translations, rotations and scales.  So it will sometimes return the correct object, but most of the time it won’t.  Particularly annoying if you have a scene that is constantly rotating and you want to pick the nearest item – it will always pick the same item every single time, simply because it was rendered at the front before it was rotated.

Simple fix: enable (back-face) culling.  This means that faces pointing away from the viewer won’t be rendered, so technically there is nothing behind the frontmost face – it can be the only one that is picked!

But that’s also particularly annoying when you want to use 2-face rendering.  This is the ability to render shapes wound clockwise, and counter-clockwise.  Enabling culling in OpenGL will result in all shapes whose vertices are wound clockwise on the screen not being drawn at all.  Not so useful in rotating scenes!

Anyway.

Combining textures and picking, this demo allows the user to pick a quadrant, which will change accordingly – to textured if it was off, or to non-textured if it was on.

Renderer Picking Demo. From top-left, clockwise:  empty canvas; bottom-left quadrant click to turn texture on; top-right quadrant click to turn texture on; top-left quadrant click to turn texture on; bottom-left quadrant click to turn texture off; top-right quadrant click to turn texture off; bottom-right quadrant click to turn texture on; top-left quadrant click to turn texture off.  To achieve the original image from the last step, the user would click bottom-right quadrant, turning the texture off.


Renderer: Textures

Posted by Robert Chow on 22/11/2009

Carrying on with the Renderer library, the next function I decided to implement was textures.  Being able to render the scene to texture is a major function, and for this library it is a must.  It is also a requirement to be able to use those textures as part of a scene too.  I had not previously used vertex buffer objects with textures either, so it was a good time to test this out and make sure it worked.

I decided to demo a similar function to the previous one, but adding in textures too.  This demo takes the random triangles, renders them to texture, and then redraws another scene using the textures.

Renderer: Textures.  This scene is made up of 4 images, using the same texture generated from code very similar to that of the vertex buffer objects demo.

As you can see, the original scene has been manipulated in such a way that it would be hard to recreate the scene displayed without first rendering to texture.  The texture size I used was 128 x 128, as was the viewport when I first rendered the triangles.  This scene, however, is of size 512 x 512.  The original texture is seen in the bottom-right hand corner, and all the other textures have been rotated and resized.  The largest of the textures, bottom-left, is slightly fuzzy – this is because it is twice the size of the original.

Renderer API

The way I have implemented the rendering to texture function on the API is not much different to rendering as normal.  It’s a simple case of:

Renderer.RenderToDeviceContext();

or

Renderer.RenderToTexture(..params..);

Technically, rendering to texture will also render to the device context.  The main difference, however, is that after rendering to the device context, the render-to-texture method takes in the parameters given and makes a simple call to glCopyTexImage2D – as is normal when a client wishes to render to texture using OpenGL.  To save on code duplication, I have used the decorator pattern – the render-to-texture class “decorates” the original class.  In doing so, I can set up texturing by first binding a texture, make the original call to render to the device context, and then finish off by copying the result into the bound texture.
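As a rough illustration of how that decoration might hang together (the interface and names are invented for the sketch, not the actual Renderer API):

using Tao.OpenGl;

interface IRenderer
{
    void Render();
}

// Hypothetical decorator: renders to the device context as normal, then copies
// the result into the bound texture.
class RenderToTextureRenderer : IRenderer
{
    readonly IRenderer inner;
    readonly int textureId;
    readonly int width, height;

    public RenderToTextureRenderer(IRenderer inner, int textureId, int width, int height)
    {
        this.inner = inner;
        this.textureId = textureId;
        this.width = width;
        this.height = height;
    }

    public void Render()
    {
        Gl.glBindTexture(Gl.GL_TEXTURE_2D, textureId);    // set up texturing by binding the target first
        inner.Render();                                   // the original render-to-device-context call
        // Finish off by copying the frame into the bound texture.
        Gl.glCopyTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGBA, 0, 0, width, height, 0);
    }
}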


Print(TrueTypeFont)

Posted by Robert Chow on 16/10/2009

This problem’s been at me for the last couple of weeks.  And it’s now solved.  I don’t know whether to cry, or whether to jump for joy.  Either way, I’ll get odd looks in the office.

The Tao Framework is a very thin wrapper around the OpenGL library, and doesn’t explicitly offer any easy way of doing font rendering.  “Font rendering?”, I hear you say.  A LOT harder than it sounds.  In fact, a lot of things are a lot harder than they sound – why do we bother?!  Anyway.  It’s more than just typing a couple of characters and expecting them to appear on the screen – you have to find the character first, in the character library which you’ve had to either pull apart or create yourself, before you can draw it.  But after two weeks, many leads and dead-ends, it’s been solved.  And there’s a story behind it all.

Bitmap Fonts

These are pretty much the most basic of the basics.  Each character is just a bitmap image, mono-colour, and you can’t really do much with them either.  I was fairly happy to have cleared this hurdle – it took me a fair while to figure out what was happening and how.  Before you can pull the bitmap images out, you have to create the font first.

font = Gdi.CreateFont(-36,                // Height
0,                                        // Width
0,                                        // Escapement
0,                                        // Orientation
400,                                      // Weight
false,                                    // Italic
false,                                    // Underline
false,                                    // Strikeout
Gdi.ANSI_CHARSET,                         // Character Set
Gdi.OUT_TT_PRECIS,                        // Output Precision
Gdi.CLIP_DEFAULT_PRECIS,                  // Clipping Precision
Gdi.ANTIALIASED_QUALITY,                  // Output Quality
Gdi.FF_DONTCARE | Gdi.DEFAULT_PITCH,      // Pitch and Family
"Calibri");                               // Face

fontbase = Gl.glGenLists(256);                                      // reserve display lists for the glyphs
Gdi.SelectObject(User.GetDC(simpleOpenGlControl.Handle), font);     // select the font into the device context
Wgl.wglUseFontBitmapsA(User.GetDC(simpleOpenGlControl.Handle), 0, 255, fontbase);

You’re then (as far as I’m aware) using display lists (I’m not too sure on how these work… but I will do sooner or later) to call up each character, and you have to translate each time, with a look-up of the character width/height, before drawing the next character.  The code’s not very nice, so I’m going to save you (and myself) the trouble.

Outline Fonts

These are set up exactly the same way, with one real change: instead of initiating a list of bitmaps, we ask for outlines.

Wgl.wglUseFontOutlinesW(User.GetDC(simpleOpenGlControl.Handle), 0, 255, fontbase, 0, 0.2f, Wgl.WGL_FONT_POLYGONS, gmf);

The great thing that outlines offer that bitmaps don’t is the flexibility they provide – they are 3D fonts, and therefore can be manipulated the same way a normal object in OpenGL can – that includes scaling, translating, rotating, lighting etc.

But in terms of quality, with basic anti-aliasing on, they’re not so nice – and the same goes for bitmap fonts too.  In fact, they’re both pretty much the same.

ISE.FreeType

This library is pretty much a wrapper around the Tao.Freetype library – the one used to “create” (loose term) fonts.  Score.  And it works as well.

Oh wait.

The license states that we can’t use it.  Got to go looking again.

FTGLSharp

There’s a library that works with C/C++ called FTGL, and someone very nicely has created a wrapper around it so it is compatible with C#.  The great thing about FTGL is that it essentially has everything you need to render fonts onto the screen, with all the flexibility, and the appropriate license!  The problem is the wrapper itself – it was a struggle to compile, and even then it was a huge struggle to get something on the screen – in fact, that bit didn’t even happen.

Frustrating really.  All of these amazing leads, and you think it’s all good to go, but then you get shot down for being too eager.  If you want a job doing, do it yourself.  So I did.  After poking around source code of solutions that ultimately didn’t work, I managed to create a solution.

Font Rendering

So the approach I’ve gone for is texture-mapped fonts – load a character, and draw it to texture.  The only problem is space – a way to solve that is texture atlasing, but that’s another chapter in the book.

The first thing to do is to make sure you have a reference to the FreeType library, and then using the library pointer, create a pointer reference to the font face.

FT.FT_Init_FreeType(out LibraryPtr);
FT.FT_New_Face(LibraryPtr, trueTypeFilePath, 0, out FacePtr);

This is the tricky part.  Now that we have a pointer to the font face, we have to extract the information.  It’s in the form of a struct, so we marshal the information so it’s useable.

FT_FaceRec faceRec = (FT_FaceRec)Marshal.PtrToStructure(FacePtr, typeof(FT_FaceRec));

Following that, we set the character size/pixel height. I have used my own function to determine the pixel height from the font point size.  Why?  Well, when you choose a font size in a text editor, you’re normally given the choice of point sizes, pt.  These point sizes do not directly correspond to the font height – the pixel height is actually 4/3 the point size (at 96 DPI).  In this method of producing the fonts, we take the max point size – I have set this to 96 (so the height is 128 – a power of 2).  The functions also require the dots per inch (DPI) of the screen – I presume this is for detail.

FT.FT_Set_Char_Size(FacePtr, 0, MAX_POINT_SIZE * 64, (uint)DPI, (uint)DPI);
FT.FT_Set_Pixel_Sizes(FacePtr, 0, (uint)GetFontHeight(MAX_POINT_SIZE));
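GetFontHeight isn’t shown in the post; under the 4/3 assumption above it is presumably something as small as this (hypothetical):

// Hypothetical helper: pixel height from point size, using the 4/3 ratio mentioned above.
private int GetFontHeight(int pointSize)
{
    return pointSize * 4 / 3;    // e.g. 96pt -> 128px, conveniently a power of 2
}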

We then take the number of glyphs (essentially a character) this true type font has to offer, and create an array to store the character information.

Characters = new TTCharacter[faceRec.num_glyphs];

Now that we have the font set up, we have to set up each character.  This is done on the fly, so each character is only created if it’s needed – saving on space and time.  The TTCharacter class I have used to store the character information will hold the texture ID the character is drawn onto, and the glyph metrics – these are character unique, describing the width, height, offsets and so forth, of each character.  As each glyph is created from the face, it gives a bitmap type structure to use to draw the character.  This structure we will need to copy to texture, so we can use it for ourselves.  So as we create each TTCharacter, we also generate a texture ID.

Characters[c] = new TTCharacter(c, faceRec.num_glyphs);
Gl.glGenTextures(1, out Characters[c].TextureId);

We then ask the face to load up the appropriate glyph.  This, similar to the face, is in the form of a pointer, so it too needs to be marshalled out into a useable state.  The glyph properties are also copied into the TTCharacter class.

FT.FT_Load_Char(FacePtr, c, FT.FT_LOAD_RENDER);
glyphSlotRec = (FT_GlyphSlotRec)Marshal.PtrToStructure(faceRec.glyph, typeof(FT_GlyphSlotRec));
FT.FT_Render_Glyph(ref glyphSlotRec, FT_Render_Mode.FT_RENDER_MODE_NORMAL);
Characters[c].Metrics = glyphSlotRec.metrics;

Now we have the glyph, we can take the bitmap and copy it to a texture.  There is only one problem though – and this confused the hell out of me when I came across it.  I first started to draw the texture as it was, without having modified it.  And the results weren’t pretty.  The bitmap needs to be adapted to a format OpenGL likes – that is, a texture with a width and height that are powers of 2.  So we copy it into the new structure, before copying it to the texture.  Note: I’ve used an extension method to find the next power of 2! And also, lambda expressions for the for loops!
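The NextPowerOf2 and Times extension methods aren’t shown in the post; a minimal sketch of what they could look like (assumed, not the original code):

using System;

static class IntExtensions
{
    // Smallest power of 2 that is >= n.
    public static int NextPowerOf2(this int n)
    {
        int p = 1;
        while (p < n)
            p <<= 1;
        return p;
    }

    // Run an action for i = 0..n-1, standing in for a plain for loop.
    public static void Times(this int n, Action<int> action)
    {
        for (int i = 0; i < n; i++)
            action(i);
    }
}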

byte[] bmp = new byte[(glyphSlotRec.bitmap.width) * glyphSlotRec.bitmap.rows];
Marshal.Copy(glyphSlotRec.bitmap.buffer, bmp, 0, bmp.Length);

int texSize = (glyphSlotRec.bitmap.rows.NextPowerOf2() < glyphSlotRec.bitmap.width.NextPowerOf2()) ? glyphSlotRec.bitmap.width.NextPowerOf2() : glyphSlotRec.bitmap.rows.NextPowerOf2();

byte[] tex = new byte[texSize * texSize];
texSize.Times((j) => texSize.Times((i) =>
{
    if (i < glyphSlotRec.bitmap.width && j < glyphSlotRec.bitmap.rows)
    {
        tex[j * texSize + i] = bmp[j * glyphSlotRec.bitmap.width + i];
    }
}));

Gl.glBindTexture(Gl.GL_TEXTURE_2D, Characters[c].TextureId);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_S, Gl.GL_CLAMP_TO_EDGE);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_T, Gl.GL_CLAMP_TO_EDGE);
Gl.glTexParameterf(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_LINEAR);                 // magnification only accepts GL_NEAREST or GL_LINEAR
Gl.glTexParameterf(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_LINEAR_MIPMAP_LINEAR);   // minification samples the mipmaps built below
Glu.gluBuild2DMipmaps(Gl.GL_TEXTURE_2D, Gl.GL_ALPHA8, texSize, texSize, Gl.GL_ALPHA, Gl.GL_UNSIGNED_BYTE, tex);

You’ll notice that the lines at the end of the code above are a lot different to the code where I render to texture again and again.  This is because this method incorporates mipmapping.  Mipmapping is the process of taking a large image and repeatedly scaling it down to the next power-of-2 size, until we reach a 1×1 size.  As the image is resized, each pixel is sampled multiple times, trying to keep precision in quality whilst producing a smaller image.  This saves having to add extra anti-aliasing functionality to stop images from looking blocky – this happens when there is more, or less, than 1 texel fighting for a pixel.  Therefore, as a texture is called up, the appropriate size of mipmap is selected and loaded instead of the original, preserving some form of quality.  Expensive in memory, yes, but it’s the best I have so far.  The only other option is to produce each character multiple times for each point size – without mipmapping.  This would cost a lot more in memory.

Lastly, we draw the textures.  As the textures are stored as alpha values (see the code above), we don’t need to do much else but draw the texture.  Similarly to all other text renderers, after each character we have to look up the character width and offsets, and translate in order to draw the next character.
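The actual drawing code isn’t shown in the post, but a sketch of drawing one character as a textured quad might look something like this – the widths and offsets are assumed to have already been pulled out of the glyph metrics and converted from FreeType’s 1/64th-of-a-pixel units:

// Hypothetical sketch: draw one glyph texture as an alpha quad, then advance the pen.
void DrawCharacter(int textureId, float width, float height, float bearingX, float bearingY, float advance, int texSize)
{
    // Only the top-left (width x height) corner of the square texture holds the glyph.
    float s = width / texSize;
    float t = height / texSize;

    Gl.glBindTexture(Gl.GL_TEXTURE_2D, textureId);
    Gl.glBegin(Gl.GL_QUADS);
    Gl.glTexCoord2f(0, t); Gl.glVertex2f(bearingX, bearingY - height);           // bottom-left
    Gl.glTexCoord2f(s, t); Gl.glVertex2f(bearingX + width, bearingY - height);   // bottom-right
    Gl.glTexCoord2f(s, 0); Gl.glVertex2f(bearingX + width, bearingY);            // top-right
    Gl.glTexCoord2f(0, 0); Gl.glVertex2f(bearingX, bearingY);                    // top-left
    Gl.glEnd();

    Gl.glTranslatef(advance, 0, 0);   // move the pen along ready for the next character
}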

Results are acceptable – they’re not quite MS Word standard, but they’re a lot better than mono-colour.

Font Comparison.  24pt normal weight, from right to left: MS Word, my font system and mono-colour bitmap font; from top to bottom: Calibri, Arial, Verdana and Times New Roman.

Font Problem.  The top shows what it should have looked like.  The bottom not so – this is what it looked like before copying the bitmap into an OpenGL-friendly format – it’s not nice!

Mip Mapping.  Shows how a full-scale image can be downsized whilst still maintaining quality.  This image has been taken from http://www.actionscript.org/forums/showthread.php3?t=194116

It’s not perfect – but it’s a start nevertheless.   And the main thing to do is to make sure that the current solution is scalable – that way a new solution to producing fonts can be easily added into the system, without having to change much of the code that’s already there.

On a more personal note – I had my first driving lesson yesterday.  Can’t say I found it easy.  But we’ll have to see how that goes for the future.  Either way, at least it’s a start.



RenderToTexture() { RenderToTexture(); }

Posted by Robert Chow on 09/10/2009

Rendering To Texture

Ever since I started, I’ve been given the task of rendering an image to a texture, and then rendering it onto the screen.  And to be honest, I never really got the hang of it.  And I still kind of haven’t.  But it’s been a couple of months now, and I’ve managed to grab hold of it by a thread.  Check this out.

Rendering To Texture.  This effect is done by taking a snapshot of the cube, and then rendering the image onto the cube itself, as it rotates.  Psychedelic, some might say.

Cool huh?!  So how’s it done?!

We need to render the cube, then copy that to a texture.  This texture is then rendered onto each face of the cube itself, and this happens each time the display loop is called – creating a mise en abîme.  This is done using mainly two functions, but before we call those functions, we have to set up the texture first.

int texSize = 256;
byte[] texture = new byte[4 * texSize * texSize];
int textureId;

Gl.glGenTextures(1, out textureId);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, textureId);
Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGBA, texSize, texSize, 0, Gl.GL_RGBA, Gl.GL_UNSIGNED_BYTE, texture);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_LINEAR);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_LINEAR);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_S, Gl.GL_CLAMP);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_T, Gl.GL_CLAMP);

These calls initialize the texture – they ask for a texture of size texSize * texSize (the dimensions have to be a power of 2, although not so necessarily any more these days), bound to a reference, textureId.  I’m still playing around a bit with the parameters for glTexImage2D – like I said, I’m only just starting to get to grips with this.  As far as I’m aware, the last 4 parameter calls are saying that when the texture is far away, and very close, it should be rendered using a smooth algorithm; and that the texture should not be repeated in either the X or Y direction (S and T in texture terms).  But again, I’m not too sure.

So the first of the function calls is the actual drawing of the cube.  If we draw this in immediate mode – where we define all the vertices – we have to precede each one with a texture coordinate, so we know which part of the texture to bind to that vertex.  Of course, if there is no texture available, then the texture coordinate call is ignored, and the vertex rendered as normal.  I’m not going to show you the whole code, but here is an example of one of the faces.

Gl.glTexCoord2i(0, 0); Gl.glColor3f(1.0f, 1.0f, 0.0f); Gl.glVertex3f(-1, -1, -1);
Gl.glTexCoord2i(1, 0); Gl.glColor3f(0.0f, 1.0f, 0.0f); Gl.glVertex3f(-1, -1, 1);
Gl.glTexCoord2i(1, 1); Gl.glColor3f(0.0f, 1.0f, 1.0f); Gl.glVertex3f(1, -1, 1);
Gl.glTexCoord2i(0, 1); Gl.glColor3f(1.0f, 1.0f, 1.0f); Gl.glVertex3f(1, -1, -1);

Now that we’ve got the cube, we have to be able to render it, copy it to a texture, then render it again, using the texture.  Before we can use the texture however, we do need to enable texturing – this is a simple call.

Gl.glEnable(Gl.GL_TEXTURE_2D);

So first we render the scene, appropriate to copy to the texture.

Gl.glClearColor( 1.0f, 1.0f, 1.0f, 0.0f);
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);

Gl.glMatrixMode(Gl.GL_PROJECTION);
Gl.glLoadIdentity();
Glu.gluPerspective(45, 1, 0.1f, 10);
Gl.glViewport(0, 0, texSize, texSize);

Gl.glMatrixMode(Gl.GL_MODELVIEW);
Gl.glLoadIdentity();
Gl.glTranslatef(0, 0, -5);
Gl.glRotatef(alpha, 0.75f, 1, 0.5f);

DrawCube();
Gl.glFlush();

Gl.glCopyTexSubImage2D(Gl.GL_TEXTURE_2D, 0, 10, 10, 0, 0, texSize -20, texSize -20);

This code first clears the scene as per usual, asking for a white background.  Secondly, it adjusts the projection matrix so it is suitable for the texture.  Thirdly, it sets up the modelview matrix so the cube can be rendered appropriately onto the screen, taking in the variable alpha for its rotation adjustment.  Lastly, it draws the cube, and then copies the image into the texture.  There are two different calls to copy an image to a texture, and the one I have used here copies the image to only a sub-part of the texture.  I have done this so I can get a border around the texture without much trouble.  I need this border, otherwise the cube will disappear into itself, and I won’t know where the original cube is in comparison to the textured cubes.  Confused?  I am, a little.  This call is often also used when devising a texture atlas.  A texture atlas is a texture full of many different textures, mostly unrelated.  Instead of having to change the texture to be drawn, you just change the coordinates pointing to the image in the texture.  This is a much quicker and less expensive way of using textures.

Now that we have our texture, we just render the cube again, but this time without copying the final output to a texture – this one goes to the device context, aka the screen!
