Tuesday, October 29, 2013

Components and Actions

In my last post, i talked about implementing the input system. Apparently, it's much more work than i thought. However, i now have a working input system that behaves as expected.

The whole system is built from several concepts: a device, which maps OS events into an Event structure; a key binding system, which maps an Event structure into an Intent structure; and the IntentSystem, which acts as a poll based message dispatcher for Intents (hey mr. intent system, do you by any chance have a JUMP intent that i could use?).
At the top of the food chain there's an IntentHandler action, which can have any number of entities registered with it; those entities are set as targets of every outgoing Intent. Then, at least one other action will poll for each Intent and execute some logic for it.
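To make the polling idea concrete, here's a minimal sketch of what such a poll based dispatcher could look like. All of the names (IntentSystem, post, hasIntent) are my own stand-ins, not necessarily what the real code uses:

```cpp
#include <cassert>
#include <set>

// Hypothetical sketch of a poll-based Intent dispatcher.
enum class Intent { Jump, RunLeft, RunRight };

class IntentSystem {
public:
    // Input layer posts intents generated this frame.
    void post(Intent i) { active.insert(i); }

    // Actions poll: "do you have a JUMP intent i could use?"
    bool hasIntent(Intent i) const { return active.count(i) != 0; }

    // Wiped at the end of each frame.
    void clear() { active.clear(); }

private:
    std::set<Intent> active;
};
```

An action that handles jumping would then just call `hasIntent(Intent::Jump)` during its update, instead of having intents pushed at it.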

Sounds pretty simple, but it took me over a month to finally implement. Though, to be fair, i was returning from my business trip to Sweden, had a 5k puzzle to finish, a bunch of people to see, and was kinda lazy. :)

My next step is going to be designing the actions so they have clearly defined inputs and outputs as far as components are concerned. For instance, right now i'm thinking about how to make the physics simulation Box2D gives me work together with the components. An obvious one is that the output of the Physics2d action is the new position after Box2D has updated all bodies. The position is later used by the Render action to draw sprites at the right places on the screen.

However, i'm discussing with myself (who else) what the input components should be. The input components should contain the data the action needs for its per frame update logic. In the case of Physics2d, something like: read the Velocity2d component and give the body a push in the right direction, so that after the simulation, the velocity of the body within Box2D matches the one in the Velocity2d component. This is easily done by applying an impulse every frame; the closer the body already is to the target velocity, the smaller the impulse needs to be. Acceleration, on the other hand, is done by applying a force. If i want the body to have a specific acceleration, the formula from physics class is F=m*a, so i just multiply the wanted acceleration by the body mass and apply the resulting force to the body each frame.
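The impulse and force math above can be written down in a few lines. This is a self-contained sketch with my own little Vec2 type; the real code would feed these results into Box2D's ApplyLinearImpulse and ApplyForce calls on the body:

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// Impulse that makes the body's velocity match the wanted velocity:
// J = m * (v_wanted - v_current). The closer the body already is to the
// target velocity, the smaller this impulse gets.
Vec2 velocityMatchImpulse(float mass, Vec2 current, Vec2 wanted) {
    return { mass * (wanted.x - current.x), mass * (wanted.y - current.y) };
}

// Force for a wanted acceleration, straight from F = m * a.
Vec2 accelerationForce(float mass, Vec2 wantedAccel) {
    return { mass * wantedAccel.x, mass * wantedAccel.y };
}
```

For example, a body of mass 2 moving at (1, 0) that should be moving at (3, 0) needs an impulse of (4, 0).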

The same goes for pretty much all actions, but Physics2d is the most interesting one because a lot of the game depends on the simulation (collision detection and response being the biggest factor), so i need to be careful here.
Also, i think i'm gonna need a better way of handling intents. Right now, each intent needs to be handled in one of three ways: as a one time action (jumping, opening doors), a continuous action (running left, shooting an automatic weapon), or a range based action (moving the mouse around the screen, panning the camera). Any of these three could in theory be applied to any intent, like moving the entity based on the range a mouse movement produces, or having a jump that peaks only after you've held the button down for 2 seconds, letting you do small jumps or large jumps.
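One way the three trigger styles could be modelled is a small function that decides, per binding, whether it fires this frame. This is just a sketch of the idea, with names i made up:

```cpp
#include <cassert>

// The three ways an intent can be triggered.
enum class TriggerType {
    OneShot,     // jumping, opening doors: fires once on press
    Continuous,  // running, automatic fire: fires every frame while held
    Range        // mouse movement, camera panning: sampled every frame
};

// Should a binding fire this frame, given the button state on the
// previous and current frame?
bool shouldFire(TriggerType t, bool wasDown, bool isDown) {
    switch (t) {
        case TriggerType::OneShot:    return isDown && !wasDown; // edge triggered
        case TriggerType::Continuous: return isDown;             // level triggered
        case TriggerType::Range:      return true;               // always sampled
    }
    return false;
}
```

The charge-up jump would then be a OneShot binding that fires on release, with the hold duration passed along as a range value.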

Hope i'll make it for next week's post. :)

Saturday, September 21, 2013

The Input System

Two weeks went by without a post, and i've been busy.
I've been busy playing Borderlands 2, and brainstorming and implementing my Input System.

I've been thinking about how to implement this system in a way that allows more than two types of devices (mouse and keyboard). There was a good question somewhere on the web that made me think about this: "What if i decided to support a gamepad? How would that change my code?"

So i set to work, and after some google-fu sessions decided on the following approach.

I changed the code a bit to allow the Window to accept listeners which want to listen to all OS messages. Then i registered my InputEngine to the Window, so it gets the messages.

When a message is received, the Window invokes handle() on the InputEngine, which in turn calls handle() on every device that has been created.

The devices are a way to translate the OS specific messages to my own, device specific, events. MouseDevice, KeyboardDevice, GamepadDevice, TouchscreenDevice, etc, are very platform specific in how they work. Each device has a chance to translate the OS message to a specific event, and in theory, only one device should be able to do that. If the event is successfully mapped, the event is stored in an event queue for later use.
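Here's a rough sketch of what one such device could look like. The OsMessage struct stands in for the platform's real message type (on Windows that would be MSG with its WM_* codes), and the class names are mine:

```cpp
#include <cassert>
#include <queue>

struct OsMessage { unsigned code; int data; }; // stand-in for the OS message
struct Event { int deviceId; int key; };       // my own device-specific event

class KeyboardDevice {
public:
    // Returns true if this device understood the message. The translated
    // event is queued up for the intent-generation phase later in the frame.
    bool handle(const OsMessage& msg) {
        const unsigned WM_KEYDOWN = 0x0100; // the real Windows value
        if (msg.code != WM_KEYDOWN) return false;
        events.push(Event{ 0, msg.data });
        return true;
    }

    std::queue<Event> events;
};
```

The InputEngine would simply loop over all devices and stop at the first one whose handle() returns true.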

So now that i have a sequence of device specific events, i need to translate them to game specific events. Think of this as "When the user presses <some key>, his character should run forward". So basically, i need to translate the device specific event into an intent (which in this case would be a RUN intent). When it's time to handle input in the game loop, the InputEngine goes through all of the device specific events, and gives them to a context dependent intent generator (name taken from this SO thread).

The intent generator is a sequence of sets of functions that try to map the events to intents, with a specific priority. For instance, in Assassin's Creed games, when the player presses and holds the right mouse button, the context changes so all of the buttons map to High Profile actions. When RMB is released, the context is popped off, and the buttons map back to Low Profile actions.
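A context stack like that could be sketched as follows; the topmost context that knows a key wins, and letting go of RMB just pops the High Profile context off. Again, names and the map-based shape are my assumptions:

```cpp
#include <cassert>
#include <map>
#include <vector>

using Context = std::map<int /*key*/, int /*intent*/>;

class IntentGenerator {
public:
    void pushContext(Context c) { stack.push_back(std::move(c)); }
    void popContext()           { if (!stack.empty()) stack.pop_back(); }

    // Walk contexts from most recently pushed to oldest; first match wins.
    bool map(int key, int& intentOut) const {
        for (auto it = stack.rbegin(); it != stack.rend(); ++it) {
            auto found = it->find(key);
            if (found != it->end()) { intentOut = found->second; return true; }
        }
        return false; // no context maps this key to anything
    }

private:
    std::vector<Context> stack;
};
```

Unmapped events just fall through and produce no intent, which matches the "if there was anything mapped" behaviour below.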

So when all device events are processed, the result is a list/sequence of intents (if there was anything mapped to those device specific events) which is returned to the Input Action (my whole engine is based on these actions). The Input Action then traverses all entities that are marked as input controllable, and gives them a chance to execute every intent. If any entity is not capable of executing some intent, it simply doesn't, and the loop goes on.

So this is my input system. I still haven't fully implemented it (i'm currently working on the intent generator), and it took me a while to reach a consensus with myself on how to do it. I tried a couple of different approaches, but they weren't as flexible and/or layered as this one.

Monday, September 2, 2013

Polishing my Pong

It's been a while since my last post. I took a week off to spend time with a friend who came to visit me for a week here in Stockholm, which should explain the absence of my standard monday post. But it felt good to take a week off from work after work. :)

But now i'm here, and i can show the progress i've made over the last 2 weeks.
The first piece of progress has been my finalization of the Entity Component System. It probably has places which could be made better (and i have an idea on how), but that will have to wait for a later time. I need to see how much i can do with the current implementation before i start pushing changes. Again. :)

In my previous post, i talked about the small-ish problem of having no control over the destruction of more complex data the Entity holds. I solved it by registering a callback with my Entity Pool class which is invoked on destruction of an Entity instance, and which then seeks out the complex data and does its magic. Using this concept, i made the Physics logic class register the appropriate lambda with the pool, so everything is cleaned up properly.
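The callback mechanism could look something like this minimal sketch (my own names; the real physics lambda would hand the b2Body back to the b2World instead of just recording an id):

```cpp
#include <cassert>
#include <functional>
#include <vector>

using EntityId = unsigned;

class EntityPool {
public:
    // e.g. the Physics logic registers a lambda here that returns the
    // entity's b2Body to the b2World.
    void onDestroy(std::function<void(EntityId)> cb) {
        callbacks.push_back(std::move(cb));
    }

    void destroy(EntityId id) {
        for (auto& cb : callbacks) cb(id); // give complex data a chance to clean up
        // ...then release the entity's states as usual
    }

private:
    std::vector<std::function<void(EntityId)>> callbacks;
};
```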

Then i started doing some work on actually getting Pong to a playable state. I haven't finished with that, but i have managed to get the foundation for it set up.


Right now, the paddles are there, the ball is set with a starting impulse in a specific direction, and the walls and goals are there. There's more work to be done here, and thankfully, i'm able to make a TODO list of what's left:
1. get input working so the paddles can be moved up and down
2. set up a random direction for the ball when it starts moving from the center of the field
3. get collision working between the paddles and the walls
4. make the goals sensors, which reset the ball back to the center of the screen, and increase the score
5. get the ball to bounce with a bigger angle when it hits the sides of the paddle
6. add awesome smashing next-gen graphics which will make this sell a ton of copies
7. add GUI element for score
8. make the ball increase its speed with one of two possibilities:
8.a. as a function of time
8.b. as a function of the number of times it collided with anything

Most of these should be self-explanatory, but number 3. bears (rawr) a bit more explanation.
The way Box2D is set up, there are 3 types of bodies: static (cannot move), dynamic (moves and collides with everything) and kinematic (like static, but can move). In order to detect a collision, at least one of the two bodies needs to be dynamic. The above screenshot has the paddles as kinematic bodies; however, in my first setup the paddles were dynamic bodies, and when the ball hit a paddle, the paddle would suddenly fly away because of the collision, which is funny to see.


You can see the left paddle down in the corner, and the right paddle being hit and having collision way off.
One way i could solve this is by making the ball's density really low and the paddles' density really high. That way the paddles will be almost immune to any forces applied as a result of the collision, and the ball will behave as expected. And now, after putting this on paper, it seems like a very obvious solution, which i'll implement. :)

And that's it for this week. Tune in next monday to (hopefully) see a finished Pong game in my framework. :)

Thanks for reading.

Monday, August 19, 2013

Let there be collision!

I had a lot of fun last week, and not just by getting drunk on friday.

It took me a bit more time than i would have liked, but i managed to get my physics and collision library (Box2D) to draw its debug data to the screen.

To do so, i hacked up a small class called Polygon, which has a couple of parameters to control whether the polygon is solid or not (aka filled or just the outline), its color and a few other things, plus functions to set up its shape. For now i implemented functions to make it a square, ellipse, circle (which is a special case of an ellipse), line and triangle, plus an extra function to pass in your own shape data (so you can make any shape you want).
This is my first class that is able to draw itself on the screen, and i used it to draw all of the shapes that Box2D is able to draw.

Box2D is (as its name implies) a 2D library for physics and collision. There are a bunch of other libraries out there to handle just that, but this one seemed easy enough to learn, and i didn't want to spend too much time researching different physics libraries and comparing them.

Box2D works by first making an instance of a b2World object (it's, well, the world that Box2D bodies live in) and asking it to create bodies. When a body is created, you tell it to create a fixture, which is what gives the body a shape and physical properties like mass, velocity etc. And if you want to kill a body, you have to give it back to the world, so it knows which of the many possible bodies to kill. More on this a bit later.

Then on each game update you tell the world to simulate by solving movement and collisions.
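That whole create-simulate-destroy sequence looks roughly like this with the Box2D 2.x API (the gravity, box size and iteration counts are just placeholder values):

```cpp
#include <Box2D/Box2D.h>

int main() {
    b2Vec2 gravity(0.0f, -10.0f);
    b2World world(gravity);              // the world the bodies live in

    // Ask the world to create a body.
    b2BodyDef bodyDef;
    bodyDef.type = b2_dynamicBody;
    bodyDef.position.Set(0.0f, 4.0f);
    b2Body* body = world.CreateBody(&bodyDef);

    // The fixture gives the body a shape and physical properties.
    b2PolygonShape box;
    box.SetAsBox(1.0f, 1.0f);            // half-extents
    b2FixtureDef fixtureDef;
    fixtureDef.shape = &box;
    fixtureDef.density = 1.0f;
    body->CreateFixture(&fixtureDef);

    // Each game update: solve movement and collisions for one time step.
    world.Step(1.0f / 60.0f, 8, 3);

    // Killing a body means giving it back to the world.
    world.DestroyBody(body);
}
```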

The result is something like this:



This is super simple for now: it's just two boxes, one of them a static body, the other one dynamic (can you guess which is which?). What this opens up is the ability to at least get something more interesting done, and have it drawn on the screen.

My next step is to finish off my Entity structure, so when entities die, they are cleaned up properly. This is a small problem for me right now, because of the way i set up my entities.

My Entity class is more or less just a container of States, where a state is any set of variables grouped together in a logical manner. For instance, there might be a state which holds the amount of gold, silver and copper a person has, or it might be a simple flag on whether the person can do a double jump. The problem lies in the way states are destroyed. The entity knows absolutely nothing about the states it contains, because they are (for the sake of simplicity) just unique IDs.
Since an entity has absolutely no idea what states it has, it becomes a problem when they are destroyed. I wanted to keep the states simple, so i made them hold just the data they should have, and removed any piece of logic from them.
So how do you delete a state which holds a box2d body? If you don't return the body to b2World, it just stays in there forever, because nobody knows it exists anymore (except the world, but it doesn't decide when to destroy the bodies unless it's dying as well).

The first solution i'm going to try out will be callbacks on entity destruction, by using lambdas.
A lambda is a concept from functional languages: basically an anonymous function which does some work, and its results can be passed to other lambdas, etc. By using a lambda for the callbacks, i can register a function to run when an entity is deleted, and it will auto-magically return the body to the b2World (if a body exists on the entity being destroyed).

Monday, August 12, 2013

My cubicle...

Last week was a good week.

After some (well, a lot of) reading, i decided i'd go from D3D9 API to D3D11 API.
The decision is based on three things:
1. D3D11 APIs are cleaner and feel better designed.
2. D3D11 has ways to run on all hardware types (even those that don't support D3D11 features) by using feature levels.
3. The number of XP users that play games should be non-existent by the time i make a game that i'd want to sell.

So with that in mind, i decided to try my luck in D3D11 programming. I found a couple of web sites that have good explanations of the code they give in their tutorials, and have been following them to understand why something is used the way it is. So far so good.

In particular, the things i want to learn are:
- using vertex and index buffers ......... DONE
- using and writing shaders ......... half way there
- texture mapping ......... need to understand it a bit more
- alpha blending and transparency ......... TODO
- orthographic projection ......... know the principle, but have yet to use it

As you can see, some things are easier than others. =D

Using vertex and index buffers differs slightly from D3D9 in how they are created and in the method name for locking their GPU memory so you can copy your data into it, but the principles behind them are still valid. The nice difference in D3D11 is that all types of buffers are created in the exact same way with the exact same method; the only thing you change are the parameters of the struct that is passed to the method.
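To illustrate the "same method, different struct" point, here's a sketch of buffer creation with the D3D11 API (Windows only; the Vertex layout is a placeholder of mine):

```cpp
#include <d3d11.h>

struct Vertex { float x, y, z; };

HRESULT createVertexBuffer(ID3D11Device* device, const Vertex* data,
                           UINT count, ID3D11Buffer** out) {
    D3D11_BUFFER_DESC desc = {};
    desc.Usage     = D3D11_USAGE_DEFAULT;
    desc.ByteWidth = sizeof(Vertex) * count;
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER; // swap this for
                                               // D3D11_BIND_INDEX_BUFFER (or
                                               // D3D11_BIND_CONSTANT_BUFFER)
                                               // and the rest stays identical
    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = data;                       // initial buffer contents

    return device->CreateBuffer(&desc, &init, out);
}
```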

One change that i did not expect in the transfer was the need to write my own shaders, which is not optional. There are ten stages in the D3D11 graphics pipeline, as can be seen on this page of the tutorial i'm reading (scroll down a small bit); half of them are programmable by the user, and the other half is configurable by setting certain flags or values. The part that surprised me was the fact that if you don't write two of those five programmable shaders (namely, the vertex and pixel shaders), you don't get any output on the screen. Coming from D3D9, where you'd just shove vertex data at the GPU and it would render it on its own, this came as a bit of a shock at first, but later i figured out how much more flexible this system actually is. You get to control how things get done, like outputting the entire scene in greyscale instead of color =D

Going forward, texture mapping seems a bit more complicated than what i thought it would be. It requires much more setup than i expected, but (hopefully) most of it is related to just loading the texture. I still have some reading to do on how it actually works, and i need to try cutting parts of the code out to see what the bare essentials are to get it done. I did, however, manage to get it working by applying the texture of my long lived test subject to my spinning cubes.

The orthographic projection (aka viewing the scene in 2D instead of perspective 3D) is a matter of calling the right function to set the projection matrix for the camera, so it shouldn't be much work (seeing as i already have the data i need to set up the perspective projection).
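For reference, the matrix that "right function" builds (XMMatrixOrthographicLH in DirectXMath) can be written out by hand. This self-contained sketch uses D3D's row-vector convention and maps view-space x and y to [-1, 1] and z to [0, 1]:

```cpp
#include <cassert>
#include <cmath>

struct Mat4 { float m[4][4] = {}; };

// Left-handed orthographic projection, centered on the view axis.
Mat4 orthographicLH(float width, float height, float zn, float zf) {
    Mat4 r;
    r.m[0][0] = 2.0f / width;
    r.m[1][1] = 2.0f / height;
    r.m[2][2] = 1.0f / (zf - zn);
    r.m[3][2] = -zn / (zf - zn);   // shifts z so zn maps to 0 and zf to 1
    r.m[3][3] = 1.0f;
    return r;
}

// Row vector times matrix, for the terms the ortho matrix actually uses.
void project(const Mat4& p, float x, float y, float z,
             float& ox, float& oy, float& oz) {
    ox = x * p.m[0][0];
    oy = y * p.m[1][1];
    oz = z * p.m[2][2] + p.m[3][2];
}
```

With an 800x600 view volume, the point (400, 300) lands exactly on the top-right corner (1, 1) of clip space, which is an easy sanity check.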

Alpha blending and transparency is my next step, so by next week i should have it well under my belt. :)

Finally, to show that i am making some progress with it, here's a video which should explain the title of the post. The center cube is enlarged by a factor of 1.5 on all sides and made to rotate on the Y and Z axes, while the outer cube is set to rotate around the Y axis in one direction, then translated away from the center (where the first cube is), then set to rotate in the other direction on the Y axis. Anyway, the video speaks more than words or pictures, so enjoy.



Monday, August 5, 2013

Attack of the DLLs, and The Eclipse wars

Last week was an interesting week, filled with despair and joy in differing intervals.

For whatever reason, i decided to change my current DLL renderer setup. I knew i wanted code and dependency isolation between my main project and the rendering code, but i also didn't see the reason to do it using inheritance, since i wasn't using polymorphism at all (and didn't need to).
My original rendering code was set up by having an ABC (Abstract Base Class) pretending to be an interface class to the rest of my code, hiding all of the grisly details of how the rendering works behind it. But by doing so, i was using inheritance for something i had no need for. In my opinion, you should use inheritance only if you need the runtime polymorphism, and for my renderer i didn't, as i was planning to statically link the different renderers into my code by using the same class names. And while that approach is possible, it's nearly impossible to pull off while keeping the code and its dependencies separated in a DLL. I mean, it can be done, but not without sacrificing time and a whole lot of hair.

So after a small discussion on gamedev.net, i decided that using an interface wasn't so bad if i get to keep all of my rendering dependencies tucked away nicely in a DLL file. This does, however, mean that my renderer is going to have to manage all classes that in any way use or reference a concrete rendering API data type. However, this isn't that big of a deal, because this requires that:
1. Each of those classes needs to have an ABC interface
2. Each of those classes can only be referenced by a pointer or reference
3. The memory and lifetime of those classes needs to be managed by the DLL
4. The creation of these class instances is done through the renderer interface
5. (optional) Getting the reference/pointer to those classes requires an ID or name
The first four points just mean that i will need to have two managers for these objects: one inside the DLL that handles their memory and lifetime, and one outside of the DLL that controls the lifetime. And here i make a difference between handling and controlling the lifetime, because while i can only delete the object from inside the DLL (thus ending its life), i get to control WHEN i do that from the outside.
The last point is a feature that i'd really like to have, because it allows me to reload and replace any resource (texture, model, etc) without needing a restart of the executable. Fun!
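Points 1 through 5 could be sketched like this. Everything here (ITexture, Renderer, the name-based lookup) is my own illustration of the pattern, not the actual interface:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

// Point 1: an ABC interface for anything that touches the rendering API.
class ITexture {
public:
    virtual ~ITexture() = default;
    virtual int width() const = 0;
};

// The concrete class lives inside the DLL, out of the client's sight.
class D3DTexture : public ITexture {
public:
    explicit D3DTexture(int w) : w_(w) {}
    int width() const override { return w_; }
private:
    int w_;
};

// Points 3-5: the DLL-side manager owns the objects; clients only ever see
// pointers (point 2) and create/look up by name through the renderer.
class Renderer {
public:
    ITexture* createTexture(const std::string& name, int w) {
        auto& slot = textures[name];
        slot = std::make_unique<D3DTexture>(w); // replacing the slot = reloading
        return slot.get();
    }
    ITexture* get(const std::string& name) {
        auto it = textures.find(name);
        return it == textures.end() ? nullptr : it->second.get();
    }
    void destroyTexture(const std::string& name) { textures.erase(name); }

private:
    std::unordered_map<std::string, std::unique_ptr<ITexture>> textures;
};
```

The name-based lookup is what makes the reload feature cheap: swapping the object behind a name doesn't invalidate the name the rest of the code uses.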

Thursday, July 25, 2013

Decisions about rendering APIs

Since i started on this path of trying to make my game framework, i was thinking about implementing an abstract rendering API which would be able to have anything beneath it, such as Direct3D[9|10|11] or OpenGL[2.0|3.0|4.0|ES|...]. However, there are a few problems with that.

First, i don't know any OpenGL version, and i barely know how to do stuff in Direct3D9. I'm familiar with concepts of the vertex and index buffers, the deprecated-fixed-pipeline FVF (Flexible Vertex Format), texture binding and such, but i haven't actually used them.
Second, i don't know the differences between OpenGL and Direct3D. I've read about some of them, like having a different orientation of the coordinate system (OpenGL is right handed, D3D is left), which is just the tip of the iceberg (there are too many to list, and i've forgotten most of them).
Third, it takes a huge amount of time to write an abstract rendering API which can successfully handle all of those differences, and that's time i simply don't have. I don't want to be bothered thinking about how to make something abstract enough to work with... something i know nothing about!

So in the interest of not going crazy, i've decided to stop thinking about OGL and simply code like only one rendering API exists, and that is Direct3D9. I know people are moving on from D3D9 to D3D11, but that one is locked to Win7 or higher, while D3D9 can still be used by WinXP users. Even worse would be to start working with D3D11.1 or .2, which is locked to Win8, but as luck would have it, i don't have Win8, so it's out of the question anyway.

So the real question is, how do i make a rendering API which wraps around D3D9, but without having most of my game code (or engine code for that matter) riddled with D3D calls which would be insanely hard to refactor or maintain sometime in the future?
Thanks to the super awesome site GameDev.net, i found a semi-solution. Well, several, actually, but i'm gonna talk about the one i managed to understand.

The low level renderer implementation (D3D9 in this case) has a certain number of things that it knows about, and these are vertex and index buffers, textures, vertex definitions, shaders and render states (the list is maybe slightly larger than this, but these are the concepts that i remembered and understood). In order to draw anything to the screen, all the renderer wants and needs are these.

So you make a class for each of these concepts, that has a concrete implementation under the hood. You still only support the one rendering API, but everything is tucked nicely in a small class that handles only its responsibility. Which is what the SRP (Single Responsibility Principle) is all about. And the solution doesn't mean you're going to have D3D calls all over the place, it means you're going to have a bunch of small classes passed around, you'll interact with them by using their interface (like, passing a vector<vertex> to a function that internally copies the whole thing to IDirect3DVertexBuffer's memory location), and the only class that knows how to use their data is the low level renderer.
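One of those small classes could look like the sketch below. In the real thing, setData would Lock() an IDirect3DVertexBuffer9 and memcpy the vector's contents into the returned GPU memory pointer; here a plain byte vector stands in so the idea compiles anywhere:

```cpp
#include <cassert>
#include <cstring>
#include <vector>

struct Vertex { float x, y, z; };

// A single-responsibility wrapper: it only knows how to hold vertex data.
// Only the low-level renderer ever interprets the raw bytes.
class VertexBuffer {
public:
    void setData(const std::vector<Vertex>& vertices) {
        bytes.resize(vertices.size() * sizeof(Vertex));
        std::memcpy(bytes.data(), vertices.data(), bytes.size());
        count = vertices.size();
    }
    std::size_t vertexCount() const { return count; }

private:
    std::vector<unsigned char> bytes; // stand-in for the D3D9 buffer memory
    std::size_t count = 0;
};
```

The rest of the engine only ever passes these wrappers around; no D3D9 type leaks out of them.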

This also has another benefit: by implementing these classes to use a different underlying rendering API, i can just recompile the project and get another executable which uses the other API. So there's actually no need to make an abstract interface to support multiple rendering APIs at runtime; i can just have another executable per API. This concept is similar to how you make multi-platform code: simply replace a small set of classes with a different implementation (that uses the exact same names), recompile, and you're done.

My focus is going to be purely on D3D from now on. Until now i still had some lingering thoughts about making it abstract for runtime usage via DLL loading, but having so little time, i think it's better to just do what i know right now and worry about stuff i know nothing about later.

Thursday, July 18, 2013

The beginning

First post. Version two.

The first version was written in Croatian. I didn't like it. It limited the audience to Croatian speakers, which is a tiny fraction of the world's population. My target audience is technically literate people (as the blog is gonna contain lots of computer/coding terms), and these people most probably know English.

So, the purpose of this blog is gonna be to document my continuous attempt to become a game developer. I'm still calling it an attempt because i haven't actually done anything worthwhile, as far as i'm concerned. I made Pong, Tetris and Snake. I'm trying (fourth year going strong) to make my own engine, which was (WAS) gonna be similar to what this article is describing. However, i ran into a bit of a snag.

There is (as far as i see it) one stopping issue.
The article describes something which is a perfect fit for managed languages. The example which i was able to find from the author is written in C#. Problem? I'm using C++. The memory management is the biggest hurdle to get over, and i'm far too inexperienced to make something that works.
I've spent too much time pursuing something that i can't possibly implement with my current skillset. And there's too much of it to implement without another C++ programmer who could work with me on it, so we can bounce ideas back and forth.

So with great joy i'm giving up on that concept.
I thought i would be devastated, but i'm pretty happy about moving on away from it, because that means i will actually start doing things i want to be doing.

Or will i?
Not quite. Turns out i still have things i want to do before i can do that. I'm certain i could skip it all and just go straight for what i want, but this is part of the experience. Plus, i'm a stubborn idiot.

First, i want to learn Direct3D. I want my framework to use any graphical API implementation under the hood, but first, i want to learn Direct3D. So far i've been using the Direct3D Sprite class, which handles sprite drawing for me, but it kept me from using the full extent of what Direct3D has to offer. Drawing models is still several steps away from my current position, but by learning how to draw primitives using vertex and index buffers, the crossover from 2D to 3D is gonna be much simpler.

Second, i want to learn what Box2D can do for me. I've decided to skip implementing my own physics simulation (which would take me forever), and just use something that's already done. Hence, Box2D. I'm gonna use it for physics simulation and for collision detection.

Third, i want to implement the Entity Component System of my own design (slightly influenced by the previously mentioned article), which would offer me greater flexibility for making games. The implementation will be as much data driven as possible, with Lua behind it for the scripting and data definitions.

So, three things to do. Sounds simple enough. We'll see if the sound properly reflects its source. I have a lot of it already done (i haven't spent 4 years doing nothing), so i'm fairly optimistic.

As i'm currently living in Sweden (here on business), my goal is to have all three things finished by fall. The first of september, to be more exact.

Wish me luck, i guess.