
2D engine design/architecture: model transformations


Hello! I'm working on a little hobby 2D engine in JavaScript because I want to learn how all these things are implemented and architected in general. The Internet is full of resources, but most of them are mathematical or theoretical; there is very little about implementation or architecture for small 2D engines like the one I'm working on. So, this is my context, and at the end are some questions I hope you can help me with:

I have a Body object that has different properties like (pseudo-code):

Body {
    angle: float
    scale: float
    acceleration: 2d vector
    velocity: 2d vector
    position: 2d vector
    update: function()
}

A Body object also has a Model object, which has (pseudo-code):

Model {
    points: Array of [x, y] points
    fillColor: string (hex)
    transform: function(rotate, scale, translate)
}

The transform function receives an angle, a scale and a translation, and returns a new, transformed instance of the model.

On every frame I update the Body object to represent the current state: adding the acceleration vector to the velocity, and the velocity to the position (plus similar updates for the angle and scale if required):

Body {
    update() {
        this.velocity.add(this.acceleration)
        this.position.add(this.velocity)
        this.acceleration.reset()
    }
}

Once the Body object is updated I must do a world transform of the model so that I can start doing physics-related stuff (like collision detection/resolution, interaction forces between bodies, etc.). After that I do a view transform, to transform the world objects into the "perspective" of some other object.

At the moment my solution is based on keeping three instances of the Model object within the Body object, like this:

Body {
    model: the original *Model* with no transformations
    model_world: result of *Model.transform(body.angle, body.scale, body.position)*
    model_view: the result of transforming **model_world** instance with viewport properties
}

Questions:

- Do you think there is a better way to handle model transformations? Perhaps one that does not require the Body object to maintain the three model instances itself.

- Can you please give me some suggestions on how to better organize this functionality for a simple 2D engine?

Thank you very much in advance!


Normally you have a single matrix that represents an object's location in the world. Placing a cube at {1;1;1} means it is always rendered at that location. This is the model's local transform.
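For a 2D engine like the one described above, such a local transform could be a single 3x3 matrix built from the Body's position, angle and scale. A minimal sketch, assuming a row-major layout and a made-up helper name (makeLocalTransform is not from this thread):

// Hypothetical helper: build a 3x3 row-major local transform
// from position, angle (radians) and uniform scale.
// Each row is [x-basis, y-basis, translation].
function makeLocalTransform(x, y, angle, scale) {
    const c = Math.cos(angle) * scale;
    const s = Math.sin(angle) * scale;
    return [
        c, -s, x,
        s,  c, y,
        0,  0, 1
    ];
}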

Second is, depending on your game, the scene graph transform. A model may be a child of another model, which may itself be a child of yet another model. The original transform {1;1;1} is then multiplied with all parent transforms to get the final "world-space" transform.
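A minimal sketch of that parent chain in 2D, assuming 3x3 row-major matrices and illustrative names (multiply3x3, worldTransform and localTransform are assumptions, not part of the engine described above):

// Multiply two 3x3 row-major matrices: result = a * b.
function multiply3x3(a, b) {
    const r = new Array(9);
    for (let row = 0; row < 3; row++) {
        for (let col = 0; col < 3; col++) {
            r[row * 3 + col] =
                a[row * 3 + 0] * b[0 * 3 + col] +
                a[row * 3 + 1] * b[1 * 3 + col] +
                a[row * 3 + 2] * b[2 * 3 + col];
        }
    }
    return r;
}

// Walk up the parent chain: world = parent.world * local.
function worldTransform(node) {
    let m = node.localTransform;
    for (let p = node.parent; p; p = p.parent) {
        m = multiply3x3(p.localTransform, m);
    }
    return m;
}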

Last but not least, the view transform. You are not really moving the camera around, but rather the whole game world: the camera transform is multiplied with each top-level model in your scene graph so that the right view is always rendered to the screen, and everything else falls out in the culling check.

So if you need your world transform, store it; otherwise just store the local model transform matrix in your model instance. The view doesn't need to be stored, because it is applied to everything during rendering anyway (through the shader in hardware-accelerated rendering).
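As a rough sketch of that idea, reusing the hypothetical multiply3x3 and worldTransform helpers above (drawModel is also just a placeholder, not a real API):

// Each body keeps only its local transform (plus an optional parent);
// world and view matrices are composed on the fly while rendering.
function render(bodies, viewMatrix) {
    for (const body of bodies) {
        const world = worldTransform(body);               // scene-graph composition
        const modelView = multiply3x3(viewMatrix, world); // camera applied last
        drawModel(body.model, modelView);                 // placeholder draw call
    }
}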

Hi Shaarigan, thanks for your reply. Cool, so storing transformations is common. The only "weird" thing I'm doing is storing copies of the model instead of a transformation matrix. I guess doing this with matrices is less resource-intensive, as I would not be instantiating new Model objects for each Body on every frame update. I'll check how to implement that, thanks for the tip.

Shaarigan said:

Last but not least, the view transform. You are not really moving the camera around, but rather the whole game world: the camera transform is multiplied with each top-level model in your scene graph so that the right view is always rendered to the screen, and everything else falls out in the culling check.

At the moment I don't have a "camera"; I have a Viewport object that I move around the world just like any other Body. This viewport usually has the same dimensions as the screen, and I can transform it to:

  1. Move it around (translate)
  2. Zoom in or out (scale)
  3. Rotate the view (rotate)

I use it as a "camera": I can attach it to some object and "see" the world from the perspective of that object (for example, I can attach it to a planet and see how the sun moves around it).

On every frame I check which objects intersect my rectangular Viewport and then apply the inverse transformation of the Viewport to those objects, in two steps:

1) Get the inverse of each transformation property:

const t = [
	this.location.getX() * -1, // negated viewport position
	this.location.getY() * -1
];
const r = this.getAngle() * -1;              // negated viewport angle
const s = body.getScale() / this.getScale(); // scale relative to the viewport

2) Inverse transformation:

Model::transformInversion(t, s, r) {
	const transformed = [];
	for (let j = 0, len = this.points.length; j < len; j++) {
		let p = this.points[j];
		p = this.translatePoint(p, t); // negated viewport position
		p = this.scalePoint(p, s);     // relative scale
		p = this.rotatePoint(p, r);    // negated viewport angle
		transformed.push(p);
	}
	return new Model({
		points: transformed,
		strokeColor: this.strokeColor,
		fillColor: this.fillColor
	});
}

I guess I'm doing it all wrong… I mean, it works, but maybe moving the entire world would be more efficient?

Matrix multiplication is generally less work than what you are doing there, I think. I don't know much about performance in JavaScript, so maybe what you are doing is more efficient or less, but I know that code like this

// 4x4 multiply (result = m1 * m2): each result row is a linear
// combination of m2's rows, weighted by the corresponding row of m1.
const Vector4& row1 = m2.Row(0);
const Vector4& row2 = m2.Row(1);
const Vector4& row3 = m2.Row(2);
const Vector4& row4 = m2.Row(3);

for(uint32 i = 0; i < 4; i++)
{
    result.Rows.Row[i] = ((row1 * m1.Value[4*i + 0]) + (row2 * m1.Value[4*i + 1])) + ((row3 * m1.Value[4*i + 2]) + (row4 * m1.Value[4*i + 3]));
}

is translated into SSE intrinsics by a clever optimizer in C++; maybe there is something similar in JavaScript too.

The above code (once optimized) is able to run a million times per second in my tests with LLVM/Clang (if anyone is curious).
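In JavaScript, a comparable approach would be a plain loop over flat typed arrays, which modern JIT compilers tend to optimize reasonably well. A minimal sketch, assuming a row-major layout (the helper name mul4x4 is made up):

// 4x4 row-major multiply on flat arrays: out = a * b.
function mul4x4(out, a, b) {
    for (let row = 0; row < 4; row++) {
        for (let col = 0; col < 4; col++) {
            let sum = 0;
            for (let k = 0; k < 4; k++) {
                sum += a[row * 4 + k] * b[k * 4 + col];
            }
            out[row * 4 + col] = sum;
        }
    }
    return out;
}

// Usage: preallocate once and reuse every frame to avoid garbage collection.
const out = new Float32Array(16);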

Linear algebra has real benefits in game programming, because it has already solved many of the problems we face in our day-to-day work. Matrices are arrays of data of a certain length: 16 fields for a Matrix4 (or Matrix4x4, depending on the developer you ask), 9 fields for a Matrix3 (used mostly in 2D, though hardware acceleration processes 4x4 matrices only, so spending the few extra floating-point fields is fine even for 2D, and you keep the benefits of the Z-value), and even non-symmetric ones like 4x3 or 3x2 are valid.

Transforming a point in space is no more than multiplying a matrix with the corresponding vector, and you have your final position.
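For the 2D case, a small illustrative example, assuming the 3x3 row-major layout from the sketches above and points treated as the homogeneous column vector [x, y, 1]:

// Transform a 2D point [x, y] by a 3x3 row-major matrix m.
function transformPoint(m, x, y) {
    return [
        m[0] * x + m[1] * y + m[2],
        m[3] * x + m[4] * y + m[5]
    ];
}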

Rotating a model is done by multiplying a quaternion into the rotation fields of your matrix.

Scaling something is done by simply changing the values in your array to the desired scale before translation and rotation are applied (as I learned it many years ago).

What you are doing is not wrong as long as it generates the desired results! But it also isn't a huge effort to change that to a camera matrix, since a matrix is just a plain array of numbers.
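A hedged sketch of what such a camera matrix could look like for the Viewport described earlier: the inverse of its translate/rotate/uniform-scale, built once per frame and then applied to every visible point (the function and parameter names are made up for illustration):

// Hypothetical camera matrix: inverse of the viewport's
// translate * rotate * uniform-scale, as one 3x3 row-major matrix.
function makeViewMatrix(viewX, viewY, viewAngle, viewScale) {
    const c = Math.cos(-viewAngle);
    const s = Math.sin(-viewAngle);
    const k = 1 / viewScale;
    return [
        k * c, -k * s, k * (-c * viewX + s * viewY),
        k * s,  k * c, k * (-s * viewX - c * viewY),
        0,      0,     1
    ];
}

With this, the per-point translate/scale/rotate steps in transformInversion collapse into a single transformPoint call per point.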

In common graphics rendering, every model is streamed to the GPU as a set of vertices, transformed by the camera matrix and then added to the pixel buffer if it isn't out of bounds (off screen). Because this can be a costly process, most graphics APIs and engines try to sort out the objects that aren't visible and so don't need to be rendered. Frustum culling is the process of deciding in software whether a model is in view, while hardware culling, mostly meaning the maximum view distance and backface culling, is the API-based way of deciding whether a vertex needs further processing.

Modern game engines use a combination of both, by the way: first decide whether a model is in view with frustum checks, then let the hardware cull anything else that isn't visible.

Using a matrix in 3D, for example, also gives you the opportunity to map an object into 2D space to see where it is supposed to land on the screen, and depending on its bounding volume you can test whether it is anywhere inside the visible screen rectangle. But since you are already in 2D, that additional transform might not be necessary.
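In 2D that visibility test reduces to a rectangle-versus-rectangle overlap check against the viewport. A minimal sketch, assuming axis-aligned bounding boxes with min/max fields (the field names are assumptions):

// Simple 2D "frustum" check: does the body's axis-aligned
// bounding box overlap the viewport rectangle at all?
function isInView(box, viewport) {
    return box.maxX >= viewport.minX && box.minX <= viewport.maxX &&
           box.maxY >= viewport.minY && box.minY <= viewport.maxY;
}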

