Monday, August 12, 2013

My cubicle...

Last week was a good week.

After some (well, a lot of) reading, I decided I'd move from the D3D9 API to the D3D11 API.
The decision is based on three things:
1. The D3D11 APIs are cleaner and feel better designed.
2. D3D11 can run on all hardware types (even hardware that doesn't support D3D11 features) by using feature levels.
3. The number of XP users who play games should be non-existent by the time I make a game that I'd want to sell.

So with that in mind, I decided to try my luck at D3D11 programming. I found a couple of websites with good explanations of the code in their tutorials, and I've been following them to understand why something is used the way it is. So far so good.

In particular, the things I want to learn are:
- using vertex and index buffers ......... DONE
- using and writing shaders ......... halfway there
- texture mapping ......... need to understand it a bit more
- alpha blending and transparency ......... TODO
- orthographic projection ......... know the principle, but have yet to use it

As you can see, some things are easier than others. =D

Using vertex and index buffers differs slightly from D3D9 in how the buffers are created and in the method used to get at their GPU memory so you can copy your data into it (Map/Unmap on the device context instead of Lock/Unlock on the buffer), but the principles behind them are still valid. The nicest difference is that in D3D11, all types of buffers are created in the exact same way with the exact same method; the only thing you change is the parameters of the description struct that is passed to it.
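To make that concrete, here's a minimal sketch of creating and filling a vertex buffer (the Vertex struct and variable names are placeholders of mine; device and context stand for the D3D11 device and device context created at startup, and error checking is omitted):

    #include <d3d11.h>
    #include <cstring>  // memcpy

    // Placeholder vertex type; any layout matching your input layout works.
    struct Vertex { float x, y, z; float r, g, b, a; };
    Vertex vertices[3] = { /* ... your vertex data ... */ };

    // The same desc struct + CreateBuffer call handles vertex, index and
    // constant buffers; only the BindFlags (and sizes) differ.
    D3D11_BUFFER_DESC desc = {};
    desc.Usage          = D3D11_USAGE_DYNAMIC;        // CPU-writable
    desc.ByteWidth      = sizeof(vertices);
    desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;   // or D3D11_BIND_INDEX_BUFFER
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Buffer* vertexBuffer = nullptr;
    device->CreateBuffer(&desc, nullptr, &vertexBuffer);

    // Map/Unmap on the context replaces D3D9's Lock/Unlock on the buffer.
    D3D11_MAPPED_SUBRESOURCE mapped;
    context->Map(vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    memcpy(mapped.pData, vertices, sizeof(vertices));
    context->Unmap(vertexBuffer, 0);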

One change that I did not expect in the transition was the need to write my own shaders, which is not optional. There are ten stages in the D3D11 graphics pipeline, as can be seen on this page of the tutorial I'm reading (scroll down a small bit); half of them are programmable by the user, and the other half are configurable by setting certain flags or values. The part that surprised me was that if you don't write two of those five programmable shaders (namely, the vertex and pixel shaders), you don't get any output on the screen at all. Coming from D3D9, where you'd just shove vertex data at the GPU and it would render on its own, this came as a bit of a shock at first, but I later realized how much more flexible this system actually is. You get to control how things get done, like outputting the entire scene in greyscale instead of color =D
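To give an idea of scale, here's roughly the smallest vertex/pixel shader pair I can think of, compiled straight from a string (the HLSL, the wvp matrix name and the entry points are placeholders of my own, not code from the tutorial):

    #include <d3d11.h>
    #include <d3dcompiler.h>
    #include <cstring>

    // Smallest useful pair: the VS transforms positions by a world-view-
    // projection matrix, the PS just passes the vertex color through.
    const char* src =
        "cbuffer PerObject : register(b0) { float4x4 wvp; };            \n"
        "struct VSOut { float4 pos : SV_POSITION; float4 col : COLOR; };\n"
        "VSOut VS(float4 pos : POSITION, float4 col : COLOR)            \n"
        "{ VSOut o; o.pos = mul(pos, wvp); o.col = col; return o; }     \n"
        "float4 PS(VSOut i) : SV_TARGET { return i.col; }               \n";

    ID3DBlob *vsBlob = nullptr, *psBlob = nullptr;
    D3DCompile(src, strlen(src), nullptr, nullptr, nullptr,
               "VS", "vs_4_0", 0, 0, &vsBlob, nullptr);
    D3DCompile(src, strlen(src), nullptr, nullptr, nullptr,
               "PS", "ps_4_0", 0, 0, &psBlob, nullptr);

    ID3D11VertexShader* vs = nullptr;
    ID3D11PixelShader*  ps = nullptr;
    device->CreateVertexShader(vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(), nullptr, &vs);
    device->CreatePixelShader (psBlob->GetBufferPointer(), psBlob->GetBufferSize(), nullptr, &ps);

    // Without both a vertex and a pixel shader bound, nothing is drawn.
    // (Swapping the PS body for a dot product with (0.299, 0.587, 0.114)
    // gives the greyscale effect mentioned above.)
    context->VSSetShader(vs, nullptr, 0);
    context->PSSetShader(ps, nullptr, 0);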

Going forward, texture mapping seems a bit more complicated than I thought it would be. It requires much more setup than I expected, but (hopefully) most of it is related to just loading the texture. I still have some reading to do on how it actually works, and I need to try trimming the code down to see what the bare essentials are. I did, however, manage to get it working by applying the texture of my long-lived test subject to my spinning cubes.
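As far as I understand it so far, the setup boils down to something like this (a sketch only; I'm assuming the D3DX11 helper from the DirectX SDK for loading, and the file name is a placeholder):

    #include <d3dx11.h>  // D3DX helper from the (June 2010) DirectX SDK

    // Load the image file straight into a shader resource view.
    ID3D11ShaderResourceView* textureView = nullptr;
    D3DX11CreateShaderResourceViewFromFile(device, L"test_subject.png",
                                           nullptr, nullptr, &textureView, nullptr);

    // Describe how the texture gets sampled (filtering and wrapping).
    D3D11_SAMPLER_DESC sd = {};
    sd.Filter         = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
    sd.AddressU       = D3D11_TEXTURE_ADDRESS_WRAP;
    sd.AddressV       = D3D11_TEXTURE_ADDRESS_WRAP;
    sd.AddressW       = D3D11_TEXTURE_ADDRESS_WRAP;
    sd.ComparisonFunc = D3D11_COMPARISON_NEVER;
    sd.MaxLOD         = D3D11_FLOAT32_MAX;

    ID3D11SamplerState* sampler = nullptr;
    device->CreateSamplerState(&sd, &sampler);

    // Bind both to the pixel shader, which samples them in HLSL with
    // something like: myTexture.Sample(mySampler, input.uv)
    context->PSSetShaderResources(0, 1, &textureView);
    context->PSSetSamplers(0, 1, &sampler);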

Orthographic projection (i.e., viewing the scene in 2D instead of perspective 3D) is a matter of calling the right function to build the projection matrix for the camera, so it shouldn't be much work (seeing as I already have the data I need to set up the perspective projection).
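As a note to self, the difference should look roughly like this (using the XNA math helpers from the SDK; the width, height and depth values are placeholders):

    #include <xnamath.h>  // XNA math header from the DirectX SDK

    // Perspective: what the spinning cubes use now; objects shrink
    // with distance.
    XMMATRIX proj = XMMatrixPerspectiveFovLH(XM_PIDIV4,        // 45-degree FOV
                                             800.0f / 600.0f,  // aspect ratio
                                             0.1f, 1000.0f);   // near/far planes

    // Orthographic: the view volume is a box instead of a frustum, so
    // objects keep their size regardless of depth, which is exactly
    // what you want for 2D and UI rendering.
    XMMATRIX ortho = XMMatrixOrthographicLH(800.0f, 600.0f,    // view width/height
                                            0.1f, 1000.0f);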

Alpha blending and transparency is my next step, so by next week I should have it well under my belt. :)
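From what I've read so far, turning on the standard "source over" blend should look something like this (an untested sketch on my part):

    // Standard "source over" alpha blending:
    //   final = src.rgb * src.a + dest.rgb * (1 - src.a)
    D3D11_BLEND_DESC bd = {};
    bd.RenderTarget[0].BlendEnable           = TRUE;
    bd.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    bd.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    bd.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    bd.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
    bd.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* blendState = nullptr;
    device->CreateBlendState(&bd, &blendState);

    float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };  // unused by this desc
    context->OMSetBlendState(blendState, blendFactor, 0xFFFFFFFF);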

Finally, to show that I am making some progress, here's a video which should explain the title of the post. The center cube is enlarged by a factor of 1.5 on all sides and made to rotate on the Y and Z axes, while the outer cube is set to rotate around the Y axis in one direction, then translated away from the center (where the first cube is), then set to rotate the other way around the Y axis. In code, the two world matrices are composed roughly like this (the variable names and rates are mine; the point is that the multiplication order matters):
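    // Per-frame values; the names and rates are placeholders.
    float spinAngle   = time * 2.0f;   // radians, from elapsed time
    float orbitAngle  = time * 1.0f;
    float orbitRadius = 3.0f;

    // Center cube: uniform 1.5x scale, spinning on Y and Z.
    XMMATRIX centerWorld = XMMatrixScaling(1.5f, 1.5f, 1.5f) *
                           XMMatrixRotationY(spinAngle) *
                           XMMatrixRotationZ(spinAngle);

    // Outer cube: spin on Y, push out from the center, then orbit the
    // center the other way on Y. With row-major XNA math, reading left
    // to right matches the order the transforms are applied in.
    XMMATRIX outerWorld = XMMatrixRotationY(spinAngle) *
                          XMMatrixTranslation(orbitRadius, 0.0f, 0.0f) *
                          XMMatrixRotationY(-orbitAngle);

Anyway, the video speaks more than words or pictures, so enjoy.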


