Sam MacPherson

Flash, Haxe, Game Dev and more…

To Batch or not to Batch

Once again I’ve revisited my rendering system, this time with the intention of switching over to a batched setup instead of a one-draw-call-per-sprite setup. Overall the switch was relatively painless, but I did learn some lessons along the way. I wanted to share some of the pitfalls that I experienced.

First, I will briefly explain what a batched sprite rendering system is. The basic idea behind batched sprite rendering is that every time you call drawTriangles() you incur some overhead. If you are using a naive setup (like I was) you are probably calling drawTriangles() once for every sprite. This is okay for a few sprites, but once you get to a couple thousand it becomes extremely inefficient. Basically the name of the game is to minimize the number of drawTriangles() calls. To do this, we don’t immediately render the sprite when render is called. Instead we batch the sprite’s vertices into a global vertex buffer with the intent of rendering everything at the end with one drawTriangles() call. Simple. Well, not really. Using this method has some implications:

1. We can no longer use the GPU to apply sprite-specific transforms.
2. A batch of sprites must share the same texture.

The implication of (1) is that we must do the coordinate transforms on the CPU. This is not ideal, but it is unavoidable. If you have a reasonable number of sprites (say a couple thousand) then this should be fine.
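To make the idea concrete, here is a rough sketch of what CPU-side batching might look like. None of these names come from my actual library; it just shows the shape of the loop: transform each corner with the sprite’s 2D matrix, append the results to one shared vertex vector, and issue a single drawTriangles() at the end of the frame.

//A hedged sketch of CPU-side batching (illustrative names only).
var batchVerts:flash.Vector<Float> = new flash.Vector<Float>();

function batchQuad (m:flash.geom.Matrix, w:Float, h:Float, uvs:Array<Float>):Void {
	var corners = [0.0, 0.0, w, 0.0, 0.0, h, w, h];
	var i = 0;
	while (i < 8) {
		//Apply the sprite's 2D affine transform on the CPU instead of in the vertex shader
		batchVerts.push(corners[i] * m.a + corners[i + 1] * m.c + m.tx);
		batchVerts.push(corners[i] * m.b + corners[i + 1] * m.d + m.ty);
		batchVerts.push(uvs[i]);     //u
		batchVerts.push(uvs[i + 1]); //v
		i += 2;
	}
}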

Because of (2) we must make separate drawTriangles() calls every time we need to switch textures. But hang on. If we need to make another call every time we switch textures, then doesn’t that leave us where we started, seeing as different sprites will likely have different images? The answer is yes and no. First and foremost, you can group sprites of the same image into the same draw call. However, you can go even one step further. Instead of allocating a texture for every image, we allocate a global store of massive textures (2048×2048 pixels). We then stamp in all the smaller textures and give the render jobs appropriate U,V coordinates. Very nice! If your game has a lot of small sprites this will be lightning fast. Probably under 3 draw calls.
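Here is a rough sketch of how such a global texture store might work. This is not code from my library; it just illustrates stamping small bitmaps into one 2048×2048 page with copyPixels() and handing back UV rectangles (a naive shelf packer with no spill-over handling).

//Illustrative sketch of a 2048x2048 texture atlas page (not library code).
class AtlasPage {
	public var page:flash.display.BitmapData;
	var penX:Int;
	var penY:Int;
	var rowH:Int;
	
	public function new () {
		page = new flash.display.BitmapData(2048, 2048, true, 0x00000000);
		penX = 0;
		penY = 0;
		rowH = 0;
	}
	
	//Stamp a small bitmap into the page and return its UV rectangle in [0, 1] coordinates.
	public function stamp (bmd:flash.display.BitmapData):flash.geom.Rectangle {
		if (penX + bmd.width > 2048) {
			penX = 0;
			penY += rowH;
			rowH = 0;
		}
		page.copyPixels(bmd, bmd.rect, new flash.geom.Point(penX, penY));
		var uv = new flash.geom.Rectangle(penX / 2048, penY / 2048, bmd.width / 2048, bmd.height / 2048);
		penX += bmd.width;
		if (bmd.height > rowH) rowH = bmd.height;
		return uv;
	}
}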

Ok, so we go ahead and do this and oops, we have another problem. Because we are grouping the sprites by texture, they will no longer necessarily be sorted by depth. Ok, so this is a setback, but perhaps we could just batch the sprites until we encounter a texture change, then flush the buffer and start again. This will of course work, but it is only efficient if sprites of similar depth also share the same texture, which may not be true. For example, when testing this with my game I went from 3 draw calls to about 300. This is unacceptable.

Well, we are rendering on a 3D graphics card, so why not use the depth buffer to do the sorting for us! Every frame update we assign a global depth value to each sprite accordingly and enable the depth buffer. This may appear to be the best solution possible, but it has one major flaw: you can’t use translucent textures. The reason is that the depth buffer does not understand alpha compositing. All it understands is geometry. Either a triangle is blocking something behind it or it is not. This is where we have to make a decision. There are four equally valid solutions that I have come up with.

General Purpose Solutions

1. Fall back to the render on texture switch method and do some optimizations to try and group depth-locality with texture-locality. (Could be optimal depending on setup)

2. Render all opaque images first entirely on the GPU. Then do the transparent images after using method (1). (Works very well if there aren’t many transparent images)

Specific Solutions

3. Only use opaque textures. (Optimal)

4. Only use textures with quantized alpha values of either 0 or 1. (Optimal, but requires an extra instruction in the shader)

All of these methods have their strengths and weaknesses. Personally, I decided to go with method (4) for my game. Method (4) is very similar to method (3); they only differ by one instruction in the shader. For (4) you include a KIL opcode (kill() in HxSL) which aborts the pixel and depth buffer writes whenever the alpha channel is less than one.
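To illustrate method (4), here is a minimal sketch of the kind of fragment shader I mean. This is an illustrative example rather than the actual batching shader (the names are made up): kill() discards the fragment whenever its argument is negative, so any texel with alpha below 1 writes neither color nor depth.

//Illustrative only: a batched sprite shader using kill() for quantized alpha.
@:shader({
	var input:{
		pos:Float3,
		uv:Float2
	};
	var tuv:Float2;
	function vertex (mproj:M44) {
		out = pos.xyzw * mproj;
		tuv = uv;
	}
	function fragment (t:Texture) {
		var c = t.get(tuv);
		kill(c.w - 0.5); //alpha is either 0 or 1, so this drops exactly the transparent texels
		out = c;
	}
}) class BatchedSpriteShader extends format.hxsl.Shader {
}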

There are definitely other solutions out there, but these were the best ones I could come up with after working on this for several hours. Hope this helps.

2D Dynamic Lighting Demo

In my previous post I went over how to implement 2D dynamic lighting on the GPU. After some tweaking I finally came up with a suitable solution for general purpose dynamic lighting. Here is a video demonstration of this in action in a game I am currently developing – codename Zed.

Currently the lighting only works with static objects as light blockers, but it’s not very hard to extend this to moving objects as well.

To get the effect right I had to modify my previous code and split the rendering into 2 separate tasks. First I do an additive light pass which fills in the glowing light you can see in the video. Next I had to do a subtractive shadow pass, which works by starting with a black texture and subtracting off the alpha component as necessary. After each of these passes I do a 5x Gaussian blur to make things look nice and smooth, and voila, you have general purpose lighting that runs reasonably fast.
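The blur pass code isn’t shown here, but to give a concrete idea, this is a rough sketch of one direction of a separable 5-tap Gaussian blur in HxSL. The names and weights are assumptions, not the shader from my library; dir would be set to [1/textureWidth, 0] for the horizontal pass and [0, 1/textureHeight] for the vertical pass.

//A hedged sketch of a separable 5-tap Gaussian blur pass (illustrative only).
@:shader({
	var input:{
		pos:Float2,
		uv:Float2
	};
	var tuv:Float2;
	function vertex (mproj:M44) {
		out = pos.xyzw * mproj;
		tuv = uv;
	}
	function fragment (t:Texture, dir:Float2) {
		//Weighted sum of 5 samples along one axis; run once horizontally, once vertically.
		out = t.get(tuv - dir * 2) * 0.0625
			+ t.get(tuv - dir) * 0.25
			+ t.get(tuv) * 0.375
			+ t.get(tuv + dir) * 0.25
			+ t.get(tuv + dir * 2) * 0.0625;
	}
}) class BlurShader extends format.hxsl.Shader {
}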

Depending on the quality of the light and your graphics card there are some limitations. I still plan on doing some more optimizations and benchmarking, but it seems that you are stuck with ~40 lights max on screen at any given moment. Possible optimizations could be caching of light textures that are static which would allow for a lot more stationary lights like what you saw in the game.

2D Dynamic Lighting with Molehill/HxSL

So I’ve really been digging into molehill lately and have come up with a solution for dynamic 2D lighting on the GPU. Basically what I did was port this method (http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/) to flash. The method described in that article is built on XNA 4.0, which is Microsoft’s C# game development framework.

There are still a few things to iron out and optimize, but the first draft is looking good. Here is a screen shot of my algorithm working with 9 lights and a bunch of 10×10 pillars:

As of now this is about the limit that my graphics card can handle, but I’m fairly certain that I can get at least a 200% speed increase with some adjustments I plan on making (Probably a lot more if I really go at it).

My plan of attack for this was to start simple and only allow rectangles for obstacles and circles for the lights. I started by creating the vertex and index buffers for the obstacles.

var vpts:flash.Vector<Float> = new flash.Vector<Float>();
var ipts:flash.Vector<UInt> = new flash.Vector<UInt>();
var index:Int = 0;
for (i in rects) {
	//Vertex buffer: the four corners of the rectangle
	vpts.push(i.xmin);
	vpts.push(i.ymin);
	
	vpts.push(i.xmax);
	vpts.push(i.ymin);
	
	vpts.push(i.xmin);
	vpts.push(i.ymax);
	
	vpts.push(i.xmax);
	vpts.push(i.ymax);
	
	//Index buffer: two triangles per rectangle
	ipts.push(index);
	ipts.push(index + 1);
	ipts.push(index + 3);
	
	ipts.push(index);
	ipts.push(index + 3);
	ipts.push(index + 2);
	
	index += 4;
}

_vbuf = Canvas.getContext().createVertexBuffer(Std.int(vpts.length / 2), 2);
_vbuf.uploadFromVector(vpts, 0, Std.int(vpts.length / 2));
_ibuf = Canvas.getContext().createIndexBuffer(ipts.length);
_ibuf.uploadFromVector(ipts, 0, ipts.length);

I also set up 4 textures as intermediate buffers between shader calls. Two of the textures are used for rendering individual lights while the other two are used to store the overall shadow map as lights are added. Really, this is all that is needed for initialization. Now we move on to the render cycle, which occurs during every frame update.

For every visible light the following sequence of shaders gets run.

//The Shader Program
@:shader({
	var input:{
		pos:Float2
	};
	function vertex (mpos:M44, mproj:M44) {
		out = pos.xyzw * mpos * mproj;
	}
	function fragment () {
		out = [1, 1, 1, 1];
	}
}) class ObjectShader extends format.hxsl.Shader {
}

//The shader call
m.identity();
m.appendTranslation(-i.bounds.xmin, -i.bounds.ymin, 0);
var texCam:Matrix3D = Molehill.get2DOrthographicMatrix(i.bounds.intervalX, i.bounds.intervalY);
c.setRenderToTexture(_tbuf1);
c.clear();
_objectShader.init(
	{ mpos:m, mproj:texCam },
	{ }
);
_objectShader.draw(_vbuf, _ibuf);

This shader’s job is fairly straightforward. All it does is center the camera around the light and make every pixel which is inside an obstacle white. Once this shader has done its job we now have an image which looks like the following stored in _tbuf1.

In the interest of saving time and seeing as this is a port of an existing method I will re-use the pictures provided in the original post. Okay, we can now store the distances to the pixels as outlined in the first step of the original post.

//The Shader Program
@:shader({
	var input:{
		pos:Float2,
		uv:Float2
	};
	var tuv:Float2;
	function vertex (mproj:M44) {
		out = pos.xyzw * mproj;
		tuv = uv;
	}
	function fragment (t:Texture) {
		out = if (t.get(tuv, nearest).x > 0) len(tuv - [0.5, 0.5]).xxxx else 1.xxxx;
	}
}) class DistanceShader extends format.hxsl.Shader {
}

//The shader call
var vbuf:VertexBuffer3D = Molehill.getTextureVertexBuffer(c, 0, 0, i.bounds.intervalX, i.bounds.intervalY);
var ibuf:IndexBuffer3D = Molehill.getTextureIndexBuffer(c);
c.setRenderToTexture(_tbuf2);
c.clear();
_distanceShader.init(
	{ mproj:texCam },
	{ t:_tbuf1 }
);
_distanceShader.draw(vbuf, ibuf);

Now that we have shaded the pixels based on how far they are from the center of the image we have completed step one and have the following image stored in _tbuf2.

Now here is where things get cool. We take the image stored in _tbuf2 and distort it so the rays of light from the light source are aligned along the horizontal axis as outlined in step 2 from the original post.

//The Shader Program
@:shader({
	var input:{
		pos:Float2,
		uv:Float2
	};
	var tuv:Float2;
	function vertex (mproj:M44) {
		out = pos.xyzw * mproj;
		tuv = uv;
	}
	function fragment (t:Texture) {
		var u0 = tuv.x * 2 - 1;
		var v0 = tuv.y * 2 - 1;
		v0 = v0 * abs(u0);
		v0 = (v0 + 1) / 2;
		out = [t.get([tuv.x, v0], nearest).x, t.get([v0, tuv.x], nearest).x, 0, 1];
	}
}) class DistortionShader extends format.hxsl.Shader {
}

//The shader call
c.setRenderToTexture(_tbuf1);
c.clear();
_distortionShader.init(
	{ mproj:texCam },
	{ t:_tbuf2 }
);
_distortionShader.draw(vbuf, ibuf);

After this step we are left with the following image stored in _tbuf1.

This may look a bit weird and if you are confused at this point I would recommend reading over the original post. I know I found this a bit confusing when I first looked at it.

Ok, now that we have a view from the light’s perspective we need to calculate the closest obstacle edge by successively halving the image.

//The Shader Program
@:shader({
	var input:{
		pos:Float2,
		uv:Float2
	};
	var tuv:Float2;
	var dx:Float;
	function vertex (mproj:M44, pixel:Float) {
		out = pos.xyzw * mproj;
		tuv = uv;
		dx = pixel;
	}
	function fragment (t:Texture) {
		out = min(t.get(tuv + [-dx, 0], nearest), t.get(tuv + [0, 0], nearest));
	}
}) class MinDistanceShader extends format.hxsl.Shader {
}

//The shader call
for (i in 0 ... _distBufs.length) {
	c.setRenderToTexture(_distBufs[i]);
	c.clear();
	_minDistanceShader.init(
		{ mproj:texCam, pixel:1/(tdim >> i) },
		{ t:if (i == 0) _tbuf1 else _distBufs[i - 1] }
	);
	_minDistanceShader.draw(vbuf, ibuf);
}

The variable “tdim” is the size of the texture buffers. For the sake of clarity we will assume that tdim is 512 pixels. We need to call this shader 8 times to get the image down to 2×512. At each stage we compare every pixel with its closest neighbor in the x direction and throw away the higher of the two. The end result is a 2×512 image where each pixel contains the minimum distance to an obstacle in that direction. Again, if you are confused at this point please refer to the original article. It does a much better job at explaining the reasoning.
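The code that allocates _distBufs isn’t shown above, so here is a guess at how the reduction chain could be set up (assuming c is the Context3D and tdim is 512; the rest is illustrative, not the actual initialization code). For tdim = 512 this produces the 8 render targets the loop above iterates over.

//Hedged sketch: allocate one render target per halving step, 256 wide down to 2 wide.
_distBufs = new Array<flash.display3D.textures.Texture>();
var w:Int = tdim;
while (w > 2) {
	w >>= 1;
	_distBufs.push(c.createTexture(w, tdim, flash.display3D.Context3DTextureFormat.BGRA, true));
}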

After we have the minimum distances we can now draw the shadow map for this light.

//The Shader Program
@:shader({
	var input:{
		pos:Float2,
		uv:Float2,
		copy:Float,
		suv:Float2
	};
	function getShadowDistanceH (t:Texture, pos:Float2):Float {
		var u:Float = pos.x;
		var v:Float = pos.y;
		
		u = abs(u-0.5) * 2;
		v = v * 2 - 1;
		var v0:Float = v/u;
		v0 = (v0 + 1) / 2;
		
		return t.get([pos.x, v0], nearest).x;
	}
	function getShadowDistanceV (t:Texture, pos:Float2):Float {
		var u:Float = pos.y;
		var v:Float = pos.x;
		
		u = abs(u-0.5) * 2;
		v = v * 2 - 1;
		var v0:Float = v/u;
		v0 = (v0 + 1) / 2;
		
		return t.get([pos.y, v0], nearest).y;
	}
	var tuv:Float2;
	var tcopy:Float;
	var stuv:Float2;
	function vertex (mproj:M44) {
		out = pos.xyzw * mproj;
		tuv = uv;
		tcopy = copy;
		stuv = suv;
	}
	function fragment (t:Texture, sm:Texture, baseIntensity:Float, intensity:Float) {
		var duv:Float2 = tuv - [0.5, 0.5];
		var d:Float = len(duv);
		var sd:Float = if (tcopy > 0) 0 else if (abs(duv.y) < abs(duv.x)) getShadowDistanceH(t, tuv) else getShadowDistanceV(t, tuv);
		var a:Float = (0.5 - d) * 2;
		var shadow:Float = min((1 - intensity) * a + (1 - baseIntensity) * (1 - a), sm.get(stuv).w);
		var inside:Float4 = [0, 0, 0, shadow];
		var outside:Float4 = min([0, 0, 0, (1 - baseIntensity)], sm.get(stuv));
		out = if (tcopy > 0) sm.get(tuv) else if (d < sd) inside else outside;
	}
}) class ShadowMapShader extends format.hxsl.Shader {
}

//The shader call
var camera:Matrix3D = Molehill.get2DOrthographicMatrix(SCREEN_WIDTH, SCREEN_HEIGHT);
var svbuf:VertexBuffer3D = _getShadowMapVertexBuffer(i.bounds);
var sibuf:IndexBuffer3D = _getShadowMapIndexBuffer();
c.setRenderToTexture(if (index % 2 == 0) _sbuf2 else _sbuf1);
c.clear(0, 0, 0, 1 - _baseIntensity);
_shadowMapShader.init(
	{ mproj:camera },
	{ t:_distBufs[_distBufs.length - 1], sm:if (index % 2 == 0) _sbuf1 else _sbuf2, baseIntensity:_baseIntensity, intensity:i.light.getLightIntensity() }
);
_shadowMapShader.draw(svbuf, sibuf);

index++;

This is probably the most complicated piece, because I couldn’t think of a better way of doing it. Currently (to my knowledge) molehill does not allow drawing to the same texture twice. This was kind of annoying, so I put together a work-around which takes the previously rendered lights and does a straight one-to-one copy. After this is done, during the same pass, the new light is supplied. The difference between the two is handled by the copy value in the vertex buffer. If copy is set to one then the shader just does a straight one-to-one copy. If not, then it renders the light by performing the normal routine described in the original post. The “suv” coordinates are used to compare the previous render with the current one for pixel blending.

As you can see, I’ve also added a simple gradient effect which makes things look a bit nicer. I plan on expanding the post-processing in the near future with blurring and colored light. There is also a lot of room for optimization. For one, I can get rid of some of the extra shaders and combine them. Also, since only 2 color channels are ever used at once, there is the possibility of rendering two lights at the same time. This is the 200% efficiency increase I was talking about earlier. I also plan on allowing the user to scale down the quality of the image in order to improve render time.

So there you have it — dynamic lighting on the GPU. All of the code above was taken from my gaming library and will be available to the public once I feel it is stable enough.

2D GPU-Accelerated Rendering with Molehill/HxSL

Ok, so I’ve been working on this rendering problem for probably a little over a month now. The problem being that Flash’s vector renderer is just too slow for my needs.

My first approach which I illustrated in the last post was to use blitting with a cached store for repeated affine transformations. I ended up with a decent renderer more or less, but it was not without issues. For one, it did not work very well in general cases. I could only get decent performance with very contrived examples which of course is not very helpful.

Another issue with the blitting renderer was the complexity of the code. I was not aware when I started out just how complicated it would be to optimize the code. I went through several revisions with many hours of hair pulling before I got something that was reasonable (Reasonable meaning 30 fps).

When I started working on the blitting engine I was aware of the release of Flash Player 11 beta and molehill, but I was determined to get my own version working. I don’t really have any good reasons for not using molehill right away as a rendering engine other than perhaps my own ignorance. That shortly changed after I started playing around with the molehill API using HxSL (Haxe Shader Language) – http://haxe.org/manual/hxsl.

If you don’t know what HxSL is, don’t worry. For now you can just think of it as an easier way of writing shaders (programs that run on the GPU). Currently the Adobe alternative involves writing low-level assembly code, which is not very pretty. Once again Haxe is on the forefront of Flash technology.

At first glance 3D programming seems very complicated. I myself had never done anything with the graphics card before, and I was actually surprised at how quickly I picked the whole thing up. I’m not going to explain how to use HxSL or how to program 3D applications. There are a ton of tutorials out there. The ones I used were http://haxe.org/doc/advanced/flash3d for examples on how to use HxSL and http://lab.polygonal.de/2011/02/27/simple-2d-molehill-example/ as well as some conceptual stuff on matrices. This post is about the 2D rendering engine I made using Haxe.

When I started learning molehill about two weeks ago I was surprised at how few examples there were for HxSL, specifically for 2D rendering with HxSL. Really the only things I could find were the general 3D examples in HxSL and a whole ton of actionscript examples. I did find another 2D rendering engine written in actionscript (https://github.com/egreenfield/M2D), but I wanted a solution in Haxe! So I decided to do it myself.

So after some learning/experimentation I was finally ready to put it all together. Basically the idea is to set up a simulated 2D environment by fixing the camera (the screen) and representing all the display objects as flat rectangles that ‘hover’ slightly in front of the camera.

The end result looks identical to a normal 2D environment (Excuse my bad 3d drawing skills).

So let’s get into some code!

private function _initFrame ():Void {
     _s = flash.Lib.current.stage.stage3Ds[0];
     _s.viewPort = new Rectangle(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
     _s.addEventListener(Event.CONTEXT3D_CREATE, _onReady);
     _s.requestContext3D();
}
private function _onReady (e:Event):Void {
     _c = _s.context3D;
     _c.configureBackBuffer(Std.int(_s.viewPort.width), Std.int(_s.viewPort.height), ANTI_ALIAS, true);

     //Setup projection matrix
     _mproj = new Matrix3D();
     _mproj.appendTranslation(-SCREEN_WIDTH/2, -SCREEN_HEIGHT/2, 0);
     _mproj.appendScale(2/SCREEN_WIDTH, -2/SCREEN_HEIGHT, -1);
     _mproj.appendTranslation(2/SCREEN_WIDTH, 2/SCREEN_HEIGHT, 1);

     //Setup shader
     _shader = new Shader(_c);
     _ready = true;
}

This may look scary, but most of the code above is just boilerplate. The one piece that isn’t boilerplate is the projection matrix. The math behind the projection matrix can get complicated fast, but you can think of it like this: the projection matrix acts as a camera, and we need to map points in the 3D world onto the 2D screen. According to the code above, we fix the camera at (0, 0, 1) facing towards the origin and apply a scale and translation (an orthographic mapping) so that the x/y plane at z=0 shows exactly SCREEN_WIDTH x SCREEN_HEIGHT units. This gives us the exact same setup as a regular 2D environment.
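As a quick sanity check, ignoring the final small translation (which only nudges everything by a pixel and sets z): a pixel at (0, 0) goes to (-SCREEN_WIDTH/2, -SCREEN_HEIGHT/2) after the first translation and then to (-1, 1) after the scale, the top-left corner of clip space, while (SCREEN_WIDTH, SCREEN_HEIGHT) ends up at (1, -1), the bottom-right corner. That is exactly the mapping a 2D screen needs.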

@:shader({
     var input:{
          pos:Float3,
          uv:Float2
     };
     var tuv:Float2;
     function vertex (mpos:M44, mproj:M44) {
          out = pos.xyzw * mpos * mproj;
          tuv = uv;
     }
     function fragment (t:Texture) {
          out = t.get(tuv);
     }
}) class Shader extends format.hxsl.Shader {
}

Now here is where the beauty of HxSL comes in. The above piece of code is a shader. If you did this in actionscript you would have had to write it in assembly. Yuck! I don’t want to get into too much detail as to how this shader works, as that is not really the purpose of this tutorial. If you are interested you can read the HxSL documentation (http://haxe.org/manual/hxsl). It’s a pretty basic shader.

private inline function _render ():Void {
     //Clear last render and setup next one
     _c.clear(0, 0, 0, 0);
     _c.setDepthTest(true, Context3DCompareMode.ALWAYS);
     _c.setCulling(Context3DTriangleFace.BACK);
     _c.setBlendFactors(Context3DBlendFactor.SOURCE_ALPHA, Context3DBlendFactor.ONE_MINUS_SOURCE_ALPHA);

     //Render children and display
     _renderChild(this);
     _c.present();
}
private function _renderChild (child:CanvasObject):Void {
     var frame:Frame = child.getFrame();
     if (frame != null) {
          _shader.init(
               { mpos:child.getStageTransform(), mproj:_mproj },
               { t:frame.texture }
          );
          _shader.bind(frame.vbuf);
          _c.drawTriangles(frame.ibuf);
     }
     for (i in 0 ... child.getSize()) {
          _renderChild(child.get(i));
     }
}

The above code is performed once every frame update. The function _render() clears the last render and sets up the properties for the next one. You don’t have to worry too much about that. The magic comes in from the _renderChild() function.

To start, you may be wondering what the CanvasObject class is. It is not part of molehill. It is my own top-level class that represents a graphics object. The implementation of CanvasObject is extensive and not part of this tutorial. Basically you need to concentrate on these two functions:

CanvasObject.getFrame():Frame;
CanvasObject.getStageTransform():Matrix3D;

getStageTransform() will return a 3D matrix which applies rotations/translations/scalings/etc. (all the standard 2D transformations) to get the object into stage coordinates. In the simplest case you can just return an identity matrix and have the graphic drawn at (0, 0) (really (0, 0, 0)).
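For illustration only (this is not the library’s actual implementation), a stage transform for a sprite positioned at (x, y) with a rotation and scale might be built like this:

//Hypothetical example of building a 2D stage transform as a Matrix3D.
import flash.geom.Matrix3D;
import flash.geom.Vector3D;

function makeStageTransform (x:Float, y:Float, rotationDeg:Float, scaleX:Float, scaleY:Float):Matrix3D {
	var m = new Matrix3D();
	m.appendScale(scaleX, scaleY, 1);               //2D scale
	m.appendRotation(rotationDeg, Vector3D.Z_AXIS); //rotate around the screen normal
	m.appendTranslation(x, y, 0);                   //move into stage coordinates
	return m;
}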

The other function getFrame() returns a Frame object which is basically just a wrapper class for a bunch of properties. The three most important of which are:

Frame.texture:Texture;
Frame.vbuf:VertexBuffer3D;
Frame.ibuf:IndexBuffer3D;

All of which are part of the molehill API. There are a bunch of tutorials out there on how you create these objects, but for the case of 2D bitmaps the setup is static.

Since the graphics card can only draw triangles we need to draw two triangles for each bitmap to make a rectangle. First we create the vertex buffer:

vbuf = c.createVertexBuffer(4, 5);
var vpts:flash.Vector<Float> = new flash.Vector<Float>();
//Top-left vertex: (x, y, z) followed by (u, v)
vpts.push(bounds.xmin);
vpts.push(bounds.ymin);
vpts.push(0);
vpts.push(0);
vpts.push(0);

//Top-right vertex
vpts.push(bounds.xmax);
vpts.push(bounds.ymin);
vpts.push(0);
vpts.push(bounds.intervalX / bmdPow2.width);
vpts.push(0);

//Bottom-left vertex
vpts.push(bounds.xmin);
vpts.push(bounds.ymax);
vpts.push(0);
vpts.push(0);
vpts.push(bounds.intervalY / bmdPow2.height);

//Bottom-right vertex
vpts.push(bounds.xmax);
vpts.push(bounds.ymax);
vpts.push(0);
vpts.push(bounds.intervalX / bmdPow2.width);
vpts.push(bounds.intervalY / bmdPow2.height);
vbuf.uploadFromVector(vpts, 0, 4);

In the code above we define 4 vertices. Each vertex has 5 coordinates. The first 3 are (x, y, z) (notice how the z coordinate in all four vertices is 0). The last two are (u, v) coordinates which map the vertex onto the texture. The reason for the division in the (u, v) coords is that the bounds might not be a power of 2: the graphics driver requires that all textures have dimensions that are powers of 2, so the bitmap gets padded and the UVs must stop at the edge of the real image. The bounds variable is just a rectangle which defines the bounds of the bitmap.
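For example, a bitmap whose bounds are 100×60 pixels gets backed by a 128×64 texture, so the right and bottom UVs end up at 100/128 ≈ 0.78 and 60/64 ≈ 0.94 instead of 1.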

ibuf = c.createIndexBuffer(6);
var ipts:flash.Vector<UInt> = new flash.Vector<UInt>();
ipts.push(0);
ipts.push(1);
ipts.push(3);

ipts.push(0);
ipts.push(3);
ipts.push(2);
ibuf.uploadFromVector(ipts, 0, 6);

The index buffer just links together the vertices to define two triangles. There are four vertices defined in the vertex buffer, indexed 0-3. So we link vertices 0, 1 and 3 to form the first triangle and 0, 3 and 2 to form the second. Voila, we have defined a 2D sprite which can be written to the screen. Well, almost.

texture = c.createTexture(bmdPow2.width, bmdPow2.height, flash.display3D.Context3DTextureFormat.BGRA, false);
texture.uploadFromBitmapData(bmdCpy);

We have to upload the image into a texture. Ok, now we are done.
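The bmdPow2/bmdCpy variables come from earlier in my library and aren’t shown here, but presumably something along these lines produces the padded copy (this is an assumption, not the actual code):

//Hedged sketch: pad the source bitmap up to power-of-two dimensions before upload.
function nextPow2 (n:Int):Int {
	var p = 1;
	while (p < n) p <<= 1;
	return p;
}

var bmdCpy = new flash.display.BitmapData(nextPow2(bmd.width), nextPow2(bmd.height), true, 0x00000000);
bmdCpy.copyPixels(bmd, bmd.rect, new flash.geom.Point(0, 0));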

So there we have it: if you pre-compute every vector graphic into a bitmap then you can render everything through this method REALLY fast. I have yet to fully test this, but initial results are looking good: a full 30 FPS on my current zombie game project (even in software rendering mode).

I know the above code is sort of in bits and pieces, but I wanted to describe how to do 2D rendering in general so I pulled the code directly from my game dev library. I will probably release my game dev library into the open source world sometime in the near future. It has support for converting MovieClips/Sprites into my rendering framework as well as a unified asset loading system.

Cheers.

Blitting with Caching = Real Time Rendering

Ok, so this is my first time blogging or even really publishing my thoughts anywhere. There are a few things I like to share from time to time so I figured I would start a blog to publish some ideas/experiments of mine. A few of you may already know me under the alias Blank101 on pawngame.com. For those of you who don’t know me, I design video games with my friend Justin (alias JPillz). I am also a CS undergrad at the University of Waterloo, Canada.

Mostly this blog will be concerned with actionscript and haxe, with some java sprinkled in, specifically relating to game design and programming. I may also decide to move this blog to our new site once it is ready.

Well the reason I started this blog in the first place was to write about something I achieved today so I’ll get right to it…

So up until recently I’ve been doing all my rendering using the flash player’s built-in vector renderer. I don’t really have a good reason for doing this other than that I just hadn’t considered an alternative. That was until I had a talk with Sean McGee (creator of games like Thing-Thing, etc.) at FGS this year. He told me about the widely known concept of blitting and we went over it for a bit.

For those of you who don’t know what blitting is: blitting is a way of rendering a game by using the fast BitmapData.copyPixels() method. A practical way of utilizing this is to pre-render all of your vector graphics as bitmaps using the BitmapData.draw() method. Then to draw the graphic you just call BitmapData.copyPixels().
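As a minimal illustration (the clip, screen, x and y names are placeholders, not code from this post), the whole technique boils down to:

//Pre-render once (slow), then blit every frame (fast).
var cached:flash.display.BitmapData = new flash.display.BitmapData(Std.int(clip.width), Std.int(clip.height), true, 0x00000000);
cached.draw(clip); //rasterize the vector graphic one time

//Per frame: stamp the cached bitmap onto the screen buffer.
screen.copyPixels(cached, cached.rect, new flash.geom.Point(x, y), null, null, true);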

However, there are some drawbacks to blitting. For one the BitmapData.copyPixels() method does not allow general affine transformations (rotation, scaling, etc). This is a problem.

The obvious solution to this problem is to figure out which assets will need to be rotated/scaled and pre-render all possible orientations at some small increment epsilon. This will of course work, but at the cost of huge memory consumption. To give you an idea of how much memory we are talking about, say we had a small 50×50 pixel movieclip with 10 frames and pre-rendered all the images at increments of 5 degrees. A single render is about 10KiB, so that gives 10KiB × 10 frames × 72 renders per frame ≈ 7MiB to store this movieclip. Now this may be acceptable if you have only a few different assets, but if you have say ~1000 different assets then this will add up. I don’t know many machines with 7GiB of RAM available to the flash player.

So my idea was to only store the base images without any rotations/scaling and give the user the option to speed up rendering by caching recently rendered affine transforms in a global cache. This may not be appropriate in all cases, but if you have a lot of instances of the same movieclip playing over and over while rotating, then the speed-up will be very noticeable. Not only will this cache the base assets, but it can also cache static images that have been generated on-the-fly.

In essence you get the full flexibility of the flash player renderer, with an optional cache for similar images that need to be transformed a lot. An example of where this would be useful is when you are rendering a lot of enemies that all use the same graphic.

Now the memory usage can still be fairly high depending on the numbers you give the cache, so I have included a QUALITY property, a number between 0 and 1 which gets factored into the dimensions of the cached images to improve render time and reduce memory usage. Memory usage scales with the square of this factor, which is good news if you don’t mind a loss of quality for a LOT of memory saved.
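For example, QUALITY = 0.5 halves both dimensions of each cached bitmap, so it takes only a quarter (0.5² = 0.25) of the memory of a full-quality render.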

I was going to upload an example, but it seems like too much of a hassle on here. I will start posting flash examples when I transfer this blog to my new site.

Also, I welcome any feedback on my writing style. This is my first time doing this so let me know if my writing is too verbose, not verbose enough, etc.