Sam MacPherson

Flash, Haxe, Game Dev and more…


Web 2.0 with Haxe

When it comes to modern web infrastructure there are three main things you need to worry about: front-end development (JS, DOM, CSS, etc.), a stateless http(s) service and, within recent years, stateful push services. I’ve been building a website over the past year and a half, and I must say that Haxe has been a great asset for this project. Traditional web development would have required me to write PHP, JavaScript, HTML and CSS, and perhaps Java for the stateful service (modulo some other similar languages). But NO! With the exception of CSS, everything is written in Haxe. The client, the http server and the stateful push server all share the same code base.

To demonstrate how awesome Haxe (and the community) is I will explain how the chat on my site works. Below is an example of a client sending a message to two different people, all three of whom are on completely different devices and browsers.

As you can see, all three clients are able to communicate with the http server. I hope you can agree that it is a reasonable assumption that all browsers are able to communicate via http. 😀 On the other hand, Client 3 is unable to communicate with the push server because it supports neither web sockets nor Flash.

The first message that Client 1 sends is addressed to Client 2. The message gets sent to the http server which notices this is a chat message and immediately forwards it to the push server that Client 2 is connected to. It might make sense for Client 1 to send the message directly to the push server, but this may have issues if Client 1 and Client 2 are not connected to the same push server. Anyways, Client 2’s push server then forwards the message to Client 2. Behold instant messaging!

In the second scenario Client 1 sends another message to Client 3, who is browsing the site on his Android device. Unfortunately, as of this writing, Android’s native browser supports neither web sockets nor Flash, so we are stuck with http polling. What happens here is that even though Client 3 is not connected to a designated push server, we can still agree on a stateful server to hold the message until Client 3 requests it. The http server can’t hold the message because it is stateless. If the user won’t be requesting the message for a while, long-term database storage is a good idea.

Now if we were doing this with two different languages for the http and push servers things would be a little nightmarish. Duplicate code would pop up all the time for interpreting protocols and for the backwards-compatibility path used by clients that must rely on http polling. Not to mention duplication in the client code as well. Haxe unites all three essential programming components of a website into a largely overlapping code base.
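To make that concrete, here is a rough sketch of the kind of code that gets shared (the names here are illustrative, not my actual site code): the wire format is defined once, and the JS client, the http server and the push server all agree on it by construction.

typedef ChatMessage = {
	var from:Int;	//sender user id
	var to:Int;	//recipient user id
	var body:String;
}

class ChatProtocol {
	//Compiled into the JS client, the PHP http server and the push
	//server alike, so all three tiers share one wire format.
	public static function encode (m:ChatMessage):String {
		return haxe.Serializer.run(m);
	}
	public static function decode (s:String):ChatMessage {
		return haxe.Unserializer.run(s);
	}
}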

I also want to say that https://code.google.com/p/haxe-websocket/source/browse/library/js/WebSocket.hx is a great library for Haxe-based modern web development. It allows for setting up a Haxe push server which can communicate with web-socket-enabled browsers, and it includes a client implementation for server-to-server communication. If the client’s browser doesn’t support web sockets or Flash then the http server can act on behalf of the client to communicate with the push server.

To Batch or not to Batch

Once again I’ve revisited my rendering system, this time with the intention of switching over to a batched setup instead of a one-draw-call-per-sprite setup. Overall the switch was relatively painless, but I did learn some lessons along the way. I wanted to share some of the pitfalls that I experienced.

First, I will briefly explain what a batched sprite rendering system is. The basic idea behind batched sprite rendering is that every time you call drawTriangles() you incur some overhead. If you are using a naive setup (like I was) you are probably calling drawTriangles() once for every sprite. This is okay for a few sprites, but once you get a couple thousand it becomes extremely inefficient. Basically the name of the game is to minimize the number of drawTriangles() calls. To do this, we don’t immediately render the sprite when render is called. Instead we batch the sprite’s vertices onto a global vertex buffer with the intent of rendering everything at the end with one drawTriangles() call. Simple. Well, not really. Using this method has some implications:

1. We can no longer use the GPU to apply sprite-specific transforms.
2. A batch of sprites must share the same texture.

The implication of (1) is that we must do the coordinate transforms on the CPU. This is not ideal, but it is unavoidable. If you have a reasonable number of sprites (say a couple thousand) then this should be fine.
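A minimal sketch of what that looks like (the names are made up for illustration): each sprite’s corners get run through its 2D transform on the CPU before being appended to the shared buffer.

class SpriteBatcher {
	//The shared vertex buffer being built up over the frame.
	public var verts:flash.Vector<Float>;

	public function new () {
		verts = new flash.Vector<Float>();
	}

	//Append one sprite's quad, with the transform applied on the CPU
	//instead of in the vertex shader. UVs omitted for brevity.
	public function batchQuad (m:flash.geom.Matrix, xmin:Float, ymin:Float, xmax:Float, ymax:Float):Void {
		pushVert(m, xmin, ymin);
		pushVert(m, xmax, ymin);
		pushVert(m, xmin, ymax);
		pushVert(m, xmax, ymax);
	}

	function pushVert (m:flash.geom.Matrix, x:Float, y:Float):Void {
		verts.push(m.a * x + m.c * y + m.tx);
		verts.push(m.b * x + m.d * y + m.ty);
	}
}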

Because of (2) we must make separate drawTriangles() calls every time we need to switch textures. But wait, hang on. If we need to make another call every time we switch textures, then doesn’t that leave us where we started, seeing as different sprites will likely have different images? The answer is yes — and no. First and foremost you can group sprites of the same image into the same draw call. However, you can even go one step further. Instead of allocating a texture for every image we allocate a global store of massive textures (2048×2048 pixels). We then stamp in all the smaller textures and give the render jobs appropriate U,V coordinates. Very nice! If your game has a lot of small sprites this will be lightning fast. Probably under 3 draw calls.
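Here is a toy sketch of the stamping idea (a real allocator needs to handle full pages and smarter packing, but this shows the U,V bookkeeping):

import flash.display.BitmapData;
import flash.geom.Point;
import flash.geom.Rectangle;

class AtlasPage {
	public var sheet:BitmapData;	//one 2048x2048 page
	var x:Int;
	var y:Int;
	var rowH:Int;

	public function new () {
		sheet = new BitmapData(2048, 2048, true, 0);
		x = 0; y = 0; rowH = 0;
	}

	//Stamp a small bitmap into the page and return its UV rectangle.
	public function add (bmd:BitmapData):Rectangle {
		if (x + bmd.width > 2048) { x = 0; y += rowH; rowH = 0; }
		sheet.copyPixels(bmd, bmd.rect, new Point(x, y));
		var uv = new Rectangle(x / 2048, y / 2048, bmd.width / 2048, bmd.height / 2048);
		x += bmd.width;
		if (bmd.height > rowH) rowH = bmd.height;
		return uv;
	}
}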

OK, so we go ahead and do this and, oops, we have another problem. Because we are grouping the sprites by texture they will no longer necessarily be sorted by depth. This is a setback, but perhaps we could just batch the sprites until we encounter a texture change, then flush the buffer and start again. This will of course work, but it is only efficient if sprites of similar depth also share the same texture, which may not be true. For example, when testing this with my game I went from 3 draw calls to about 300. This is unacceptable.

Well, we are rendering on a 3D graphics card, so why not use the depth buffer to do the sorting for us! Every frame update we assign a global depth value to each sprite accordingly and enable the depth buffer. This may appear to be the best solution possible, but it has one major flaw: you can’t use translucent textures. The reason is that the depth buffer does not understand alpha compositing. All it understands is geometry. Either a triangle is blocking something behind it or it is not. This is where we have to make a decision. There are four equally valid solutions that I have come up with.

General Purpose Solutions

1. Fall back to the render-on-texture-switch method and do some optimizations to try to group depth-locality with texture-locality. (Could be optimal depending on setup)

2. Render all opaque images first entirely on the GPU. Then render transparent images after using method (1). (Works very well if there aren’t many transparent images)

Specific Solutions

3. Only use opaque textures. (Optimal)

4. Only use textures with quantized alpha values of either 0 or 1. (Optimal, but requires an extra instruction in the shader)

All of these methods have their strengths and weaknesses. Personally I decided to go with method (4) for my game. Method (4) is very similar to method (3); they only differ by one instruction in the shader. For (4) you include a KIL opcode (kill() in hxsl) which aborts the pixel and depth buffer writes whenever the alpha channel is less than one.
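In hxsl that looks roughly like the following (a sketch in the style of the other shaders on this blog, assuming the usual tuv texture-coordinate varying):

function fragment (t:Texture) {
	var c = t.get(tuv);
	kill(c.w - 1);	//KIL aborts when its argument is negative, i.e. exactly when alpha < 1
	out = c;
}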

There are definitely other solutions out there, but these were the best ones I could come up with after working on this for several hours. Hope this helps.

Why Haxe is Just Awesome

Recently I’ve switched gears and am working on a website. For this website I had originally decided to use the Drupal CMS as the backend. That was before I actually started working with Drupal on another, unrelated site. Drupal seems to me like a fine choice for certain kinds of websites, but in the end it is just too rigid for my needs. It’s also built to minimize the amount of programming you need to do. The thing is that programming is one of my strengths, and I don’t want to ignore that when choosing a backend. However, I am not very knowledgeable in scripting languages, PHP and JavaScript in particular. So I decided to give Haxe a try and so far I am loving it.

The fact that Haxe can compile to both JavaScript and PHP allows me to merge the server and client logic into the same codebase. Very nice for event-driven behavior. For example, if I want to do client-side form validation, traditionally I would have to design the form in PHP and have some sort of JavaScript validation function loaded in somewhere. This is okay for small projects, but I am aiming big and want everything to be as nice as possible.
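As a small sketch of what I mean (this class is illustrative, not lifted from the site), the exact same validation routine can be compiled into the PHP form handler and the JavaScript client:

class FormValidation {
	//One definition, two targets: the PHP build checks submissions
	//server-side, the JS build checks the field as the user types.
	public static function validEmail (s:String):Bool {
		return ~/^[^@\s]+@[^@\s]+\.[^@\s]+$/.match(s);
	}
}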

My solution was to abstract all the HTML nodes into an Element class and use that as the base for everything. The Element class has a lot of fancy functionality for attaching and detaching classes and such, but at the core everything operates through the getAttribute and setAttribute methods, which (using conditional compilation flags) compile differently depending on whether I am targeting JavaScript or PHP. Here are the methods:

public function setAttribute (key:String, value:String):Void {
	#if js
	domNode.setAttribute(key, value);
	#else
	attribs.set(key, value);
	#end
}

public function getAttribute (key:String):String {
	#if js
	return domNode.getAttribute(key);
	#else
	return attribs.get(key);
	#end
}

What the PHP version does is prepare the attributes for printing to the HTML response, while the JavaScript version accesses the properties directly on the HTML nodes. I then have a special bootstrap function in JavaScript (idea taken from the Distill Haxe library) which prepares the HTML into a tree of Elements on load. If you want to do something similar then have a look at the Distill library for reference.

What does this allow me to do? Things like this.

class DivWithClickListener extends Element {
	public function new () {
		super("div");
		
		add(new P("Some text produced in PHP"));
		addEventListener(Element.EVENT_CLICK, this);
	}

	public static function event (e:Event):Bool {
		Lib.alert("This is a javascript click event!");
		return true;
	}
}

For technical reasons the event handler must be static, but I do include several useful properties.

Event.source:Element – The source object.
Event.handler:Element – The handler object.
Event.type:String – The event type.
Event.jsEvent:Dynamic – The original JavaScript event. (Browser dependent so should be used with care)
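As a hypothetical example of how these properties can be used inside the static handler:

public static function event (e:Event):Bool {
	if (e.type == Element.EVENT_CLICK) {
		//e.handler is the Element the listener was registered on
		e.handler.addClass("clicked");
	}
	return true;
}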

Accessing and changing the HTML elements is very simple, as I’ve made sure the Element class is completely functional from both PHP and JavaScript. So for example, if you want to add/remove a class from an element then all you have to do is:

Element.addClass("myCssClass");
Element.removeClass("anotherCssClass");

This code will do the exact same thing no matter if it’s run from the server or the client. Very cool!

Compiling Formulas at Runtime

During development of my newest game I ran into a bit of a barrier with the recently implemented lighting system. I wanted to give my level designer a large degree of control over the lighting in the game. My first thought was to associate predefined values with each of the lights. For example, if you wanted a flickering light you could attach a “light-type” property with a value set to “flickering” or something similar. At first this seemed like a good idea, but we quickly found out that there is just too much variation to rely on the programmer to implement a new light every time one is needed. The solution? Allow the level designer to specify time-evolution formulas for each of the properties of the light.

I decided to generalize this approach to arbitrary formulas of n input variables. Given a valid formula string, this class will compile it and let you pass in input variables. The answer is returned as if the programmer had written the formula in the language itself. Here is an example equation that one might use:

(x0 + x1) / 5 - sin(r)

What this will do is add input variable 0 to input variable 1, divide that by 5 and subtract the sin of a random value. As you can see, I have included some common functions that you may want to use. Valid functions include:

sin()
cos()
tan()
sqrt()
hs()

Most of these functions are self-explanatory except for perhaps the hs() one. hs() is just the Heaviside step function (http://en.wikipedia.org/wiki/Heaviside_step_function) which is useful for converting continuous functions into discrete ON/OFF style functions.

Also included are a couple of useful identifiers: the constant PI, which can be written as “pi” anywhere in the formula, and a random value, which you can get by writing “r” as seen in the formula above. The random value always satisfies 0 <= r < 1.

So what can you do with this? Make cool lights!

Given that the variable x0 always contains a value from 0 to 1, incremented by a delta-time value every frame, we can do basically anything. Want to make a random flickering light that is on 95% of the time? Easy.

on = r - 0.05

Just a side note, "on" is a predefined light value which is either on (>= 0) or not (< 0).

Want to create the tv light from the video in the last post?

red = hs(sin(x0*11*pi)) + 0.5
blue = hs(sin(x0*5*pi)) + 0.5
green = 0
period = 50 seconds

What the above formulas do is cycle through 4 distinct possibilities (RED off, BLUE off), (RED on, BLUE off), (RED off, BLUE on) and (RED on, BLUE on). We put the sin function through the Heaviside step function to make these shifts abrupt. If we left the hs() function out we would have more of a gradual color change (which doesn't look very good for a tv).

The compiler is fairly straightforward. First the string is lexed into tokens and stored in an array. After that, calls to compute() will return the appropriate value.
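For reference, here is the grammar the parser below implements, as I read it from the code (the lexer appears to fold the opening parenthesis into the function tokens). One quirk worth noting: moreTerm recurses into term rather than looping, so "-" (and likewise "/") effectively associates to the right; 8 - 2 - 1 evaluates as 8 - (2 - 1) = 7.

expr       -> term
term       -> factor moreTerm
moreTerm   -> "+" term | "-" term | (nothing)
factor     -> val moreFactor
            | "(" expr ")" moreFactor
            | fn expr ")" moreFactor          (fn is one of "sin(", "cos(", "tan(", "sqrt(", "hs(")
moreFactor -> "*" factor | "/" factor | (nothing)
val        -> variable | number | "r"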

public inline function compute (input:Array<Float>):Float {
	_index = 0;
	return _expr(input);
}

private inline function _err (token:Token):Void {
	throw new Error("Syntax error at token '" + token.str + "'.");
}

private inline function _eoi ():Bool {
	return _index >= _tokens.length;
}

private function _expr (input:Array<Float>):Float {
	var token:Token = _tokens[_index];
	if (Std.is(token, ValueTerminalToken) || Std.is(token, FunctionToken)) {
		return _term(input);
	} else {
		_err(token);
		return Mathematics.NaN;
	}
}

private function _term (input:Array<Float>):Float {
	var token:Token = _tokens[_index];
	if (Std.is(token, ValueTerminalToken) || Std.is(token, FunctionToken)) {
		return _moreTerm(_factor(input), input);
	} else {
		_err(token);
		return Mathematics.NaN;
	}
}

private function _moreTerm (left:Float, input:Array<Float>):Float {
	var token:Token = _tokens[_index];
	if (Std.is(token, AddToken)) {
		_index++;
		return left + _term(input);
	} else if (Std.is(token, SubtractToken)) {
		_index++;
		return left - _term(input);
	} else if (Std.is(token, RightParenToken) || _eoi()) {
		return left;
	} else {
		_err(token);
		return Mathematics.NaN;
	}
}

private function _factor (input:Array<Float>):Float {
	var token:Token = _tokens[_index];
	if (Std.is(token, ValueTerminalToken)) {
		return _moreFactor(_val(input), input);
	} else if (Std.is(token, LeftParenToken)) {
		_index++;
		var v:Float = _expr(input);
		if (!Std.is(_tokens[_index++], RightParenToken)) {
			_err(token);
			return Mathematics.NaN;
		}
		return _moreFactor(v, input);
	} else if (Std.is(token, SinToken)) {
		_index++;
		var v:Float = Math.sin(_expr(input));
		if (!Std.is(_tokens[_index++], RightParenToken)) {
			_err(token);
			return Mathematics.NaN;
		}
		return _moreFactor(v, input);
	} else if (Std.is(token, CosToken)) {
		_index++;
		var v:Float = Math.cos(_expr(input));
		if (!Std.is(_tokens[_index++], RightParenToken)) {
			_err(token);
			return Mathematics.NaN;
		}
		return _moreFactor(v, input);
	} else if (Std.is(token, TanToken)) {
		_index++;
		var v:Float = Math.tan(_expr(input));
		if (!Std.is(_tokens[_index++], RightParenToken)) {
			_err(token);
			return Mathematics.NaN;
		}
		return _moreFactor(v, input);
	} else if (Std.is(token, SqrtToken)) {
		_index++;
		var v:Float = Math.sqrt(_expr(input));
		if (!Std.is(_tokens[_index++], RightParenToken)) {
			_err(token);
			return Mathematics.NaN;
		}
		return _moreFactor(v, input);
	} else if (Std.is(token, HeavisideToken)) {
		_index++;
		var v:Float = if (_expr(input) >= 0) 1 else 0;
		if (!Std.is(_tokens[_index++], RightParenToken)) {
			_err(token);
			return Mathematics.NaN;
		}
		return _moreFactor(v, input);
	} else {
		_err(token);
		return Mathematics.NaN;
	}
}

private function _moreFactor (left:Float, input:Array<Float>):Float {
	var token:Token = _tokens[_index];
	if (Std.is(token, MultiplyToken)) {
		_index++;
		return left * _factor(input);
	} else if (Std.is(token, DivideToken)) {
		_index++;
		return left / _factor(input);
	} else if (Std.is(token, AddToken) || Std.is(token, SubtractToken) || Std.is(token, RightParenToken) || _eoi()) {
		return left;
	} else {
		_err(token);
		return Mathematics.NaN;
	}
}

private function _val (input:Array<Float>):Float {
	var token:Token = _tokens[_index];
	if (Std.is(token, VariableToken)) {
		_index++;
		return input[Std.int(cast(token, VariableToken).val)];
	} else if (Std.is(token, NumberToken)) {
		_index++;
		return cast(token, NumberToken).val;
	} else if (Std.is(token, RandomToken)) {
		_index++;
		return Math.random();
	} else {
		_err(token);
		return Mathematics.NaN;
	}
}

I want to note that this is not the most efficient way to do this, since the token stream essentially gets re-parsed every time you want to compute a value. A better solution is to compile the formula into ActionScript bytecode (ABC). I will likely switch over to that method soon.

This class can be used like this:

var eqn:Equation = Equation.compile("x0 + 5");
trace(eqn.compute([2]));

The above piece of code should print 7.

2D Dynamic Lighting Demo

In my previous post I went over how to implement 2d dynamic lighting on the GPU. After some tweaking I finally came up with a suitable solution for general purpose dynamic lighting. Here is a video demonstration of this in action in a game I am currently developing – codename Zed.

Currently the lighting only works with static objects as light blockers, but it’s not very hard to extend this to moving objects as well.

To get the effect right I had to modify my previous code and split the rendering into two separate passes. First I do an additive light pass, which fills in the glowing light you can see in the video. Then I do a subtractive shadow pass, which starts with a black texture and subtracts off the alpha component as necessary. After each of these passes I apply a 5x Gaussian blur to make things look nice and smooth, and voila: general purpose lighting that runs reasonably fast.

Depending on the quality of the lights and your graphics card there are some limitations. I still plan on doing some more optimization and benchmarking, but it seems that you are stuck with ~40 lights max on screen at any given moment. One possible optimization is caching the textures of lights that are static, which would allow for a lot more stationary lights like the ones you saw in the game.

2D Dynamic Lighting with Molehill/HxSL

So I’ve really been digging into molehill lately and have come up with a solution for dynamic 2D lighting on the GPU. Basically what I did was port this method (http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/) to Flash. The method described in that article is built on XNA 4.0, Microsoft’s C# game framework.

There are still a few things to iron out and optimize, but the first draft is looking good. Here is a screenshot of my algorithm working with 9 lights and a bunch of 10×10 pillars:

As of now this is about the limit that my graphics card can handle, but I’m fairly certain that I can get at least a 200% speed increase with some adjustments I plan on making (Probably a lot more if I really go at it).

My plan of attack for this was to start simple and only allow rectangles for obstacles and circles for the lights. I started by creating the vertex and index buffers for the obstacles.

var vpts = new flash.Vector<Float>();
var ipts = new flash.Vector<UInt>();
var index:Int = 0;
for (i in rects) {
	//Vertex buffer
	vpts.push(i.xmin);
	vpts.push(i.ymin);
	
	vpts.push(i.xmax);
	vpts.push(i.ymin);
	
	vpts.push(i.xmin);
	vpts.push(i.ymax);
	
	vpts.push(i.xmax);
	vpts.push(i.ymax);
	
	//Index buffer
	ipts.push(index);
	ipts.push(index + 1);
	ipts.push(index + 3);
	
	ipts.push(index);
	ipts.push(index + 3);
	ipts.push(index + 2);
	
	index += 4;
}

_vbuf = Canvas.getContext().createVertexBuffer(Std.int(vpts.length / 2), 2);
_vbuf.uploadFromVector(vpts, 0, Std.int(vpts.length / 2));
_ibuf = Canvas.getContext().createIndexBuffer(ipts.length);
_ibuf.uploadFromVector(ipts, 0, ipts.length);

I also set up 4 textures as intermediate buffers between shader calls. Two of the textures are used for rendering individual lights while the other two store the overall shadow map as lights are added. Really this is all that is needed for initialization. Now we move on to the render cycle, which occurs during every frame update.

For every visible light the following sequence of shaders gets run.

//The Shader Program
@:shader({
	var input:{
		pos:Float2
	};
	function vertex (mpos:M44, mproj:M44) {
		out = pos.xyzw * mpos * mproj;
	}
	function fragment () {
		out = [1, 1, 1, 1];
	}
}) class ObjectShader extends format.hxsl.Shader {
}

//The shader call
m.identity();
m.appendTranslation(-i.bounds.xmin, -i.bounds.ymin, 0);
var texCam:Matrix3D = Molehill.get2DOrthographicMatrix(i.bounds.intervalX, i.bounds.intervalY);
c.setRenderToTexture(_tbuf1);
c.clear();
_objectShader.init(
	{ mpos:m, mproj:texCam },
	{ }
);
_objectShader.draw(_vbuf, _ibuf);

This shader’s job is fairly straightforward. All it does is center the camera around the light and make every pixel which is inside an obstacle white. Once this shader has done its job we have an image which looks like the following stored in _tbuf1.

In the interest of saving time and seeing as this is a port of an existing method I will re-use the pictures provided in the original post. Okay, we can now store the distances to the pixels as outlined in the first step of the original post.

//The Shader Program
@:shader({
	var input:{
		pos:Float2,
		uv:Float2
	};
	var tuv:Float2;
	function vertex (mproj:M44) {
		out = pos.xyzw * mproj;
		tuv = uv;
	}
	function fragment (t:Texture) {
		out = if (t.get(tuv, nearest).x > 0) len(tuv - [0.5, 0.5]).xxxx else 1.xxxx;
	}
}) class DistanceShader extends format.hxsl.Shader {
}

//The shader call
var vbuf:VertexBuffer3D = Molehill.getTextureVertexBuffer(c, 0, 0, i.bounds.intervalX, i.bounds.intervalY);
var ibuf:IndexBuffer3D = Molehill.getTextureIndexBuffer(c);
c.setRenderToTexture(_tbuf2);
c.clear();
_distanceShader.init(
	{ mproj:texCam },
	{ t:_tbuf1 }
);
_distanceShader.draw(vbuf, ibuf);

Now that we have shaded the pixels based on how far they are from the center of the image we have completed step one and have the following image stored in _tbuf2.

Now here is where things get cool. We take the image stored in _tbuf2 and distort it so the rays of light from the light source are aligned along the horizontal axis as outlined in step 2 from the original post.

//The Shader Program
@:shader({
	var input:{
		pos:Float2,
		uv:Float2
	};
	var tuv:Float2;
	function vertex (mproj:M44) {
		out = pos.xyzw * mproj;
		tuv = uv;
	}
	function fragment (t:Texture) {
		var u0 = tuv.x * 2 - 1;
		var v0 = tuv.y * 2 - 1;
		v0 = v0 * abs(u0);
		v0 = (v0 + 1) / 2;
		out = [t.get([tuv.x, v0], nearest).x, t.get([v0, tuv.x], nearest).x, 0, 1];
	}
}) class DistortionShader extends format.hxsl.Shader {
}

//The shader call
c.setRenderToTexture(_tbuf1);
c.clear();
_distortionShader.init(
	{ mproj:texCam },
	{ t:_tbuf2 }
);
_distortionShader.draw(vbuf, ibuf);

After this step we are left with the following image stored in _tbuf1.

This may look a bit weird and if you are confused at this point I would recommend reading over the original post. I know I found this a bit confusing when I first looked at it.

Ok, now that we have a view from the light’s perspective we need to calculate the closest obstacle edge by successively halving the image.

//The Shader Program
@:shader({
	var input:{
		pos:Float2,
		uv:Float2
	};
	var tuv:Float2;
	var dx:Float;
	function vertex (mproj:M44, pixel:Float) {
		out = pos.xyzw * mproj;
		tuv = uv;
		dx = pixel;
	}
	function fragment (t:Texture) {
		out = min(t.get(tuv + [-dx, 0], nearest), t.get(tuv + [0, 0], nearest));
	}
}) class MinDistanceShader extends format.hxsl.Shader {
}

//The shader call
for (i in 0 ... _distBufs.length) {
	c.setRenderToTexture(_distBufs[i]);
	c.clear();
	_minDistanceShader.init(
		{ mproj:texCam, pixel:1/(tdim >> i) },
		{ t:if (i == 0) _tbuf1 else _distBufs[i - 1] }
	);
	_minDistanceShader.draw(vbuf, ibuf);
}

The variable “tdim” is the size of the texture buffers. For the sake of clarity we will assume that tdim is 512 pixels. We need to call this shader 8 times to get the image down to 2×512 (512 → 256 → … → 2 is eight halvings of the horizontal axis). At each stage we compare every pixel with its closest neighbor in the x direction and throw away the higher of the two. The end result is a 2×512 image where each pixel contains the minimum distance to an obstacle in that direction. Again, if you are confused at this point please refer to the original article. It does a much better job of explaining the reasoning.

After we have the minimum distances we can now draw the shadow map for this light.

//The Shader Program
@:shader({
	var input:{
		pos:Float2,
		uv:Float2,
		copy:Float,
		suv:Float2
	};
	function getShadowDistanceH (t:Texture, pos:Float2):Float {
		var u:Float = pos.x;
		var v:Float = pos.y;
		
		u = abs(u-0.5) * 2;
		v = v * 2 - 1;
		var v0:Float = v/u;
		v0 = (v0 + 1) / 2;
		
		return t.get([pos.x, v0], nearest).x;
	}
	function getShadowDistanceV (t:Texture, pos:Float2):Float {
		var u:Float = pos.y;
		var v:Float = pos.x;
		
		u = abs(u-0.5) * 2;
		v = v * 2 - 1;
		var v0:Float = v/u;
		v0 = (v0 + 1) / 2;
		
		return t.get([pos.y, v0], nearest).y;
	}
	var tuv:Float2;
	var tcopy:Float;
	var stuv:Float2;
	function vertex (mproj:M44) {
		out = pos.xyzw * mproj;
		tuv = uv;
		tcopy = copy;
		stuv = suv;
	}
	function fragment (t:Texture, sm:Texture, baseIntensity:Float, intensity:Float) {
		var duv:Float2 = tuv - [0.5, 0.5];	//offset of this pixel from the light's center
		var d:Float = len(duv);
		var sd:Float = if (tcopy > 0) 0 else if (abs(duv.y) < abs(duv.x)) getShadowDistanceH(t, tuv) else getShadowDistanceV(t, tuv);
		var a:Float = (0.5 - d) * 2;
		var shadow:Float = min((1 - intensity) * a + (1 - baseIntensity) * (1 - a), sm.get(stuv).w);
		var inside:Float4 = [0, 0, 0, shadow];
		var outside:Float4 = min([0, 0, 0, (1 - baseIntensity)], sm.get(stuv));
		out = if (tcopy > 0) sm.get(tuv) else if (d < sd) inside else outside;
	}
}) class ShadowMapShader extends format.hxsl.Shader {
}

//The shader call
var camera:Matrix3D = Molehill.get2DOrthographicMatrix(SCREEN_WIDTH, SCREEN_HEIGHT);
var svbuf:VertexBuffer3D = _getShadowMapVertexBuffer(i.bounds);
var sibuf:IndexBuffer3D = _getShadowMapIndexBuffer();
c.setRenderToTexture(if (index % 2 == 0) _sbuf2 else _sbuf1);
c.clear(0, 0, 0, 1 - _baseIntensity);
_shadowMapShader.init(
	{ mproj:camera },
	{ t:_distBufs[_distBufs.length - 1], sm:if (index % 2 == 0) _sbuf1 else _sbuf2, baseIntensity:_baseIntensity, intensity:i.light.getLightIntensity() }
);
_shadowMapShader.draw(svbuf, sibuf);

index++;

This is probably the most complicated piece, because I couldn’t think of a better way of doing this. Currently (to my knowledge) molehill does not allow a shader to sample the same texture it is rendering to. This was kind of annoying, so I put together a work-around which takes the previously rendered lights and does a straight one-to-one copy from the other shadow buffer while, during the same pass, the new light is supplied. The difference between the two is handled by the copy value in the vertex buffer. If copy is set to one then the shader just does a straight one-to-one copy. If not then it renders the light by performing the normal routine described in the original post. The “suv” coordinates are used to compare the previous render with the current one for pixel blending.

As you can see I’ve also added a simple gradient effect which makes things look a bit nicer. I plan on expanding the post-processing in the near future with blurring and colored light. There is also a lot of room for optimization. For one, I can get rid of some of the extra shaders and combine them. Also, since there are only ever two color channels used at once, there is the possibility of rendering two lights at the same time. This is the 200% efficiency increase I was talking about earlier. I also plan on allowing the user to scale down the quality of the image in order to improve render time.

So there you have it — dynamic lighting on the GPU. All of the code above was taken from my gaming library and will be available to the public once I feel it is stable enough.

2D GPU-Accelerated Rendering with Molehill/HxSL

Ok, so I’ve been working on this rendering problem for probably a little over a month now, the problem being that Flash’s vector renderer is just too slow for my needs.

My first approach, which I illustrated in the last post, was to use blitting with a cached store for repeated affine transformations. I ended up with a more or less decent renderer, but it was not without issues. For one, it did not work very well in general cases. I could only get decent performance with very contrived examples, which of course is not very helpful.

Another issue with the blitting renderer was the complexity of the code. I was not aware when I started out just how complicated it would be to optimize the code. I went through several revisions with many hours of hair pulling before I got something that was reasonable (reasonable meaning 30 fps).

When I started working on the blitting engine I was aware of the release of the Flash Player 11 beta and molehill, but I was determined to get my own version working. I don’t really have any good reasons for not using molehill as a rendering engine right away, other than perhaps my own ignorance. That changed shortly after I started playing around with the molehill API using HxSL (Haxe Shader Language) – http://haxe.org/manual/hxsl.

If you don’t know what HxSL is, don’t worry. For now you can just think of it as an easier way of writing shaders (programs that run on the GPU). Currently the Adobe alternative involves writing low-level assembly code, which is not very pretty. Once again Haxe is on the forefront of Flash technology.

At first glance 3D programming seems very complicated, and I myself had never done anything with the graphics card, so I was actually surprised at how quickly I picked the whole thing up. I’m not going to explain how to use HxSL or program 3D applications. There are a ton of tutorials out there. The ones I used were http://haxe.org/doc/advanced/flash3d for examples of how to use HxSL and http://lab.polygonal.de/2011/02/27/simple-2d-molehill-example/, as well as some conceptual material on matrices. This post is about the 2D rendering engine I made using Haxe.

When I started learning molehill about two weeks ago I was surprised at how few examples there were for HxSL, specifically for 2D rendering with HxSL. Really the only things I could find were the general 3D examples in HxSL and a whole ton of ActionScript examples. I did find another 2D rendering engine written in ActionScript (https://github.com/egreenfield/M2D), but I wanted a solution in Haxe! So I decided to do it myself.

So after some learning and experimentation I was finally ready to put it all together. Basically the idea is to set up a simulated 2D environment by fixing the camera (the screen) and representing all the display objects as flat rectangles that ‘hover’ slightly in front of the camera.

The end result looks identical to a normal 2D environment (Excuse my bad 3d drawing skills).

So let’s get into some code!

private function _initFrame ():Void {
     _s = flash.Lib.current.stage.stage3Ds[0];
     _s.viewPort = new Rectangle(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
     _s.addEventListener(Event.CONTEXT3D_CREATE, _onReady);
     _s.requestContext3D();
}
private function _onReady (e:Event):Void {
     _c = _s.context3D;
     _c.configureBackBuffer(Std.int(_s.viewPort.width), Std.int(_s.viewPort.height), ANTI_ALIAS, true);

     //Setup projection matrix
     _mproj = new Matrix3D();
     _mproj.appendTranslation(-SCREEN_WIDTH/2, -SCREEN_HEIGHT/2, 0);
     _mproj.appendScale(2/SCREEN_WIDTH, -2/SCREEN_HEIGHT, -1);
     _mproj.appendTranslation(2/SCREEN_WIDTH, 2/SCREEN_HEIGHT, 1);

     //Setup shader
     _shader = new Shader(_c);
     _ready = true;
}

This may look scary, but most of the code above is just boilerplate. The one piece that isn’t is the projection matrix. The math behind projection matrices can get complicated fast, but you can think of it like this: the projection matrix is a camera, and we use it to translate points in the 3D world onto the 2D screen. According to the code above, we fix the camera at (0, 0, 1) facing towards the origin and we apply a perspective change to make the x/y plane at z=0 show exactly (SCREEN_WIDTH x SCREEN_HEIGHT) units. This gives us the exact same setup as a regular 2D environment.
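A quick way to convince yourself the matrix is right (my own sanity check, not part of the original code) is to push the screen corners through it:

var topLeft = _mproj.transformVector(new flash.geom.Vector3D(0, 0, 0));
trace(topLeft);		//roughly (-1, 1, 1): screen (0, 0) maps to the top-left of clip space
var bottomRight = _mproj.transformVector(new flash.geom.Vector3D(SCREEN_WIDTH, SCREEN_HEIGHT, 0));
trace(bottomRight);	//roughly (1, -1, 1): the negative y scale flips the axis so screen y grows downward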

@:shader({
     var input:{
          pos:Float3,
          uv:Float2
     };
     var tuv:Float2;
     function vertex (mpos:M44, mproj:M44) {
          out = pos.xyzw * mpos * mproj;
          tuv = uv;
     }
     function fragment (t:Texture) {
          out = t.get(tuv);
     }
}) class Shader extends format.hxsl.Shader {
}

Now here is where the beauty of HxSL comes in. The above piece of code is a shader. If you did this in ActionScript you would have had to write it in assembly. Yuck! I don’t want to get into too much detail as to how this shader works, as that is not really the purpose of this tutorial. If you are interested you can read the HxSL documentation (http://haxe.org/manual/hxsl). It’s a pretty basic shader.

private inline function _render ():Void {
     //Clear last render and setup next one
     _c.clear(0, 0, 0, 0);
     _c.setDepthTest(true, Context3DCompareMode.ALWAYS);
     _c.setCulling(Context3DTriangleFace.BACK);
     _c.setBlendFactors(Context3DBlendFactor.SOURCE_ALPHA, Context3DBlendFactor.ONE_MINUS_SOURCE_ALPHA);

     //Render children and display
     _renderChild(this);
     _c.present();
}
private function _renderChild (child:CanvasObject):Void {
     var frame:Frame = child.getFrame();
     if (frame != null) {
          _shader.init(
               { mpos:child.getStageTransform(), mproj:_mproj },
               { t:frame.texture }
          );
          _shader.bind(frame.vbuf);
          _c.drawTriangles(frame.ibuf);
     }
     for (i in 0 ... child.getSize()) {
          _renderChild(child.get(i));
     }
}

The above code is performed once every frame update. The function _render() clears the last render and sets up the properties for the next one. You don’t have to worry too much about that. The magic comes in from the _renderChild() function.

To start, you may be wondering what the CanvasObject class is. It is not part of molehill. It is my own top-level class that represents a graphics object. The implementation of CanvasObject is extensive and not part of this tutorial. Basically you need to concentrate on these two functions:

CanvasObject.getFrame():Frame;
CanvasObject.getStageTransform():Matrix3D;

getStageTransform() returns a 3D matrix which applies rotations/translations/scalings/etc. (all the standard 2D transformations) to get the object into stage coordinates. In the simplest case you can just return an identity matrix and have the graphic drawn at [0,0] (really (0, 0, 0)).
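For instance, a hypothetical subclass sitting at stage position (100, 50) could simply return a translation:

override public function getStageTransform ():Matrix3D {
	//flash.geom.Matrix3D; place the object at stage coordinates (100, 50)
	var m:Matrix3D = new Matrix3D();
	m.appendTranslation(100, 50, 0);
	return m;
}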

The other function getFrame() returns a Frame object which is basically just a wrapper class for a bunch of properties. The three most important of which are:

Frame.texture:Texture;
Frame.vbuf:VertexBuffer3D;
Frame.ibuf:IndexBuffer3D;

All of which are part of the molehill API. There are a bunch of tutorials out there on how you create these objects, but for the case of 2D bitmaps the setup is static.

Since the graphics card can only draw triangles we need to draw two triangles for each bitmap to make a rectangle. First we create the vertex buffer:

vbuf = c.createVertexBuffer(4, 5);
var vpts:flash.Vector<Float> = new flash.Vector<Float>();
vpts.push(bounds.xmin);
vpts.push(bounds.ymin);
vpts.push(0);
vpts.push(0);
vpts.push(0);

vpts.push(bounds.xmax);
vpts.push(bounds.ymin);
vpts.push(0);
vpts.push(bounds.intervalX / bmdPow2.width);
vpts.push(0);

vpts.push(bounds.xmin);
vpts.push(bounds.ymax);
vpts.push(0);
vpts.push(0);
vpts.push(bounds.intervalY / bmdPow2.height);

vpts.push(bounds.xmax);
vpts.push(bounds.ymax);
vpts.push(0);
vpts.push(bounds.intervalX / bmdPow2.width);
vpts.push(bounds.intervalY / bmdPow2.height);
vbuf.uploadFromVector(vpts, 0, 4);

In the code above we define 4 vertices. Each vertex has 5 coordinates. The first 3 are (x, y, z) (notice how the z coordinate in all four vertices is 0). The last two are (u,v) coordinates which map the (x,y,z) to the texture. The reason for the division in the (u,v) coords is that the bounds might not be powers of 2. The graphics driver requires that all textures have dimensions that are powers of 2. The bounds variable is just a rectangle which defines the bounds of the bitmap.
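The bmdPow2 bitmap is the source image padded out to power-of-2 dimensions. The padding code isn’t shown in this post, but something along these lines works (bmd here stands for the original source bitmap):

static function nextPow2 (n:Int):Int {
	var p:Int = 1;
	while (p < n) p <<= 1;
	return p;
}

//Pad the source bitmap into a transparent power-of-2 sized bitmap.
var bmdPow2 = new flash.display.BitmapData(nextPow2(bmd.width), nextPow2(bmd.height), true, 0);
bmdPow2.copyPixels(bmd, bmd.rect, new flash.geom.Point(0, 0));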

ibuf = c.createIndexBuffer(6);
var ipts:flash.Vector<UInt> = new flash.Vector<UInt>();
ipts.push(0);
ipts.push(1);
ipts.push(3);

ipts.push(0);
ipts.push(3);
ipts.push(2);
ibuf.uploadFromVector(ipts, 0, 6);

The index buffer just links together the vertices to define two triangles. There are four vertices defined in the vertex buffer, indexed by (0-3). So we link vertices 0, 1 and 3 to form the first triangle and 0, 3 and 2 to form the second. Voila, we have defined a 2D sprite which can be written to the screen. Well, almost.

texture = c.createTexture(bmdPow2.width, bmdPow2.height, flash.display3D.Context3DTextureFormat.BGRA, false);
texture.uploadFromBitmapData(bmdCpy);

We have to upload the image into a texture. Ok, now we are done.

So there we have it, and if you pre-compute every vector graphic into a bitmap then you can render everything through this method REALLY fast. I have yet to fully test this, but initial results are looking good. Full 30 FPS on my current zombie game project (Even in software rendering mode).

I know the above code is sort of in bits and pieces, but I wanted to describe how to do 2D rendering in general so I pulled the code directly from my game dev library. I will probably release my game dev library into the open source world sometime in the near future. It has support for converting MovieClips/Sprites into my rendering framework as well as a unified asset loading system.

Cheers.