Original Author: James Podesta
There are a lot of great engines out there now with very reasonable prices, Unity and Unreal being the best known. However, for small iPhone games it is still possible to roll your own engine without getting bogged down in endless months of development. Even with the adequate 3D power of the iPhone 4 and iPad, low-tech 2D games are still topping the charts, proving that a sole developer can still make a mark on the App Store.
Making your own engine can be a great learning experience, and gives you complete control over any performance bottlenecks and the ability to fix any bugs and implement any obscure features your game might need.
I’m currently developing an RPG called Magelore. I am using my own engine for this, which I initially wrote with Daniel Krenn for Venger, released under the Wretched Games label. It has been used in 4 released apps, with Magelore to be the 5th. The engine is simple and contains the minimum set of features to streamline iOS app development, without any bells and whistles. While Venger was 3D, our subsequent games have all been 2D, and Magelore is no exception.
The most important point worth noting is that if you don’t want to get bogged down writing an engine for months and never get onto the game part, it is best not to get too obsessed with optimising the engine. A typical iOS 2D app is generally fill-rate limited, so spending too much of your precious time optimising engine components isn’t going to pay off in the short term. Physics and collisions are the other likely bottleneck. Unless you’re writing your own physics system, there is not much you can improve here from the engine side; you simply need to adjust your design and gameplay code so as not to need too many expensive operations per frame.
I’ll now go through the minimum elements you need for an engine in order to get a game onto the AppStore. I’ll go into some of these modules in much greater detail in future blogs. This will need to be a very brief overview or the blog would be massive.
Cross Platform
It can be a huge boon for development to make your engine cross platform, and it really doesn’t cost a lot. Our engine runs on PC as well, which means we can use Visual Studio, which has a far superior debugger to Xcode. It also means we have access to mouse and keyboard for debugging shortcuts, easy screenshots and capturing videos with Fraps. The PC version has been set up so it will run in 320×480, 640×960 or 1024×768 modes in both portrait and landscape orientations. This way I can test what the game will look like on any device by changing a command-line parameter. Since you’re using OpenGL, to make something cross platform you mainly just need to isolate a few minor modules – startup, sound, textures, input, and the minor discrepancies between OpenGL and OpenGL ES. Our Objective-C OS X files are restricted to the AppDelegate, the View class, the OpenGLRenderer1 and 2 classes, SoundEngine and the Texture class.
C++ versus Objective C
My engine is completely C++. The Objective-C parts are restricted to the AppDelegate, the EAGLView class and small OpenGLRenderer1 and OpenGLRenderer2 classes. Most importantly, this means the PC version shares 99% of its code with the iOS version.
Filesystem
On most games, you can get by with just a ReadFile and a WriteFile method. Most games do not require streaming files, and it’s generally more efficient to read each file in one chunk and process it in memory. I always use a flat filesystem in my games: every resource is accessed by a unique filename with no path. This removes a lot of scope for human error when managing your assets, since they can be moved anywhere and still work, and can never get mixed up. I’ve used this technique on console games with 10 GB of assets, so there’s no scaling issue once you have worked out good file-naming standards.
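As a sketch of the flat-filesystem idea, the code below reads a whole file into memory in one chunk, trying each search directory in turn so callers never pass a path. The search directories and the function signature are illustrative assumptions, not the engine's actual API:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Illustrative search directories for the flat filesystem; assets can live
// in any of these and are still found by bare filename alone.
static const char *kSearchPaths[] = { "", "data/" };

// Read an entire file into memory in one chunk; returns true on success.
bool ReadFile(const std::string &name, std::vector<unsigned char> &out)
{
    for (const char *dir : kSearchPaths)
    {
        std::string path = std::string(dir) + name;
        FILE *fp = std::fopen(path.c_str(), "rb");
        if (!fp)
            continue;                       // try the next search directory
        std::fseek(fp, 0, SEEK_END);
        long size = std::ftell(fp);
        std::fseek(fp, 0, SEEK_SET);
        out.resize((size_t)size);
        size_t read = size ? std::fread(&out[0], 1, (size_t)size, fp) : 0;
        std::fclose(fp);
        return read == (size_t)size;
    }
    return false;                           // not found anywhere
}
```

Because every asset name is unique, moving a file between directories never breaks a reference.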
Asynchronous loading is an optional extra that will give you some power to polish UI transitions between levels, but it’s not essential in most games. Your sound system will probably need streaming for music, but I have used middleware for sound (namely Apple’s SoundEngine.cpp/h) which handles its own threading and soundbank streaming.
Data File Reader
You’ll want to data-drive as much content as possible, so you’ll need to pick a data file format and stick to that standard for all your data files. I use XML and have a small helper object that just manages construction and destruction of TinyXML documents. Generally the runtime hit for loading and parsing your data will not be an issue; if it is, it usually means you need to break your XMLs into smaller parts so they don’t need to be loaded all at once. Supporting auto-refresh is very useful for development here. That is, the class that reads an XML should continually check whether the file has changed and reload it. This is especially good for any system that needs rapid iteration, like particle systems or combat balancing, and saves you spending a lot of development time implementing an editor for those data files.
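The auto-refresh idea can be sketched as a small class that polls the file's modification time each frame and reloads only when it changes. The class name and the Reload() hook are illustrative; in the engine described above, the reload step would re-parse the XML with TinyXML:

```cpp
#include <sys/stat.h>
#include <string>

// Minimal auto-refresh sketch: poll mtime, reload only on change.
class AutoRefreshFile
{
public:
    explicit AutoRefreshFile(const std::string &path)
        : m_path(path), m_mtime(0) {}
    virtual ~AutoRefreshFile() {}

    // Call once per frame (or on a timer); returns true when a reload ran.
    bool CheckAndReload()
    {
        struct stat st;
        if (stat(m_path.c_str(), &st) != 0)
            return false;               // file missing; keep last good data
        if (st.st_mtime == m_mtime)
            return false;               // unchanged since last load
        m_mtime = st.st_mtime;
        Reload();                       // game-specific: re-parse the XML
        return true;
    }

protected:
    virtual void Reload() { /* rebuild data from m_path here */ }

    std::string m_path;
    time_t      m_mtime;
};
```

Polling once per frame is cheap enough in practice, since stat() never touches the file contents.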
Maths
Good maths support is essential in any game. Vectors, trigonometry wrappers, random number generators, interpolators, and Min, Max and Clamp templates should all be there. In a 3D game you’ll need matrix and quaternion support as well. Code for any complex routines, like quaternion maths, is easily found on the net, so you don’t need to be a maths genius to put together a good library of helpers. Depending on your game design, you will generally not need to worry about low-level optimisation of your maths library. When choosing your vector classes, be careful not to fall into the optimising trap. The readability difference between an optimised vector library that efficiently pipelines a vector maths unit and a simple operator-overloaded, type-safe vector library is massive, yet the speed improvement you pay for may be insignificant if maths was never your bottleneck.
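A minimal sketch of the helpers described above: Min/Max/Clamp templates and a deliberately unoptimised, operator-overloaded 2D vector that favours readability over pipelined SIMD maths:

```cpp
#include <cmath>

// Simple comparison templates usable with any ordered type.
template <typename T> T Min(T a, T b)          { return a < b ? a : b; }
template <typename T> T Max(T a, T b)          { return a > b ? a : b; }
template <typename T> T Clamp(T v, T lo, T hi) { return Min(Max(v, lo), hi); }

// A readable, type-safe 2D vector; no SIMD, no tricks.
struct Vector2
{
    float x, y;
    Vector2(float _x = 0, float _y = 0) : x(_x), y(_y) {}

    Vector2 operator+(const Vector2 &o) const { return Vector2(x + o.x, y + o.y); }
    Vector2 operator-(const Vector2 &o) const { return Vector2(x - o.x, y - o.y); }
    Vector2 operator*(float s) const          { return Vector2(x * s, y * s); }

    float Dot(const Vector2 &o) const { return x * o.x + y * o.y; }
    float Length() const              { return std::sqrt(x * x + y * y); }
};

// Linear interpolation: t = 0 gives a, t = 1 gives b.
inline Vector2 Lerp(const Vector2 &a, const Vector2 &b, float t)
{
    return a + (b - a) * t;
}
```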
Containers
You need to provide a good set of stacks, lists, maps and hash tables. I’d recommend just using STL for this, but keep an eye on your object file sizes. A common trick to avoid code bloat is to use containers of pointers, as compilers can generally share the same code for a std::vector<classA *> and a std::vector<classB *>. There are other advantages to using containers of pointers too, so it’s well worth looking into.
Render Manager
Of course, you’ll need to be able to render graphics, and some sort of render manager class can help co-ordinate sprite systems, material systems and models, and provide an extendable framework for adding new types of rendering primitives. I have taken a layered approach to rendering. The scene is split into a number of layers and each layer holds a bunch of render jobs. Each layer is given a view object which determines its camera and projection matrix, and thus whether it is a 2D or 3D layer. You can specify a sorting mode for each layer. Each render job is added with a priority value. In a 3D layer, that priority might be the render object’s centre Z co-ordinate, and you can specify that the layer gets sorted. Other layers can be left unsorted, so the order you add render jobs becomes the order they render; this can be useful for UI. You can also set render targets and capture targets for each layer. A render target specifies that the layer will render to a specific texture instead of the back buffer. A capture target specifies that a copy of the back buffer will be taken when the layer is finished and stored in a specific texture. In practice, capture targets are quite slow on iOS, so you’re better off sticking with render targets only. Note that these are really only used for special effects, so you can happily skip this feature in a simple iOS game.
Render Jobs
Render jobs are an extendable, callback-driven rendering system. Any engine or game module can implement its own render job; examples of render jobs I support are sprites, fonts and primitive rendering. The concept is simple. You tell the engine that you are creating a render job on a specific layer with a specific priority, and supply it with a callback function. The engine returns a pointer to some raw memory you can use for your job data. Once you have filled in as much memory as you require, you tell the engine that you are done. At the appropriate time in the draw pass it will invoke your callback function, passing it your render job data. You can then access OpenGL directly to implement whatever kind of rendering you need.
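The layer-plus-jobs scheme can be sketched as follows: jobs carry a priority and a callback, their payloads live in one raw memory pool, and the draw pass sorts then fires them. All names here are illustrative, and the OpenGL calls are omitted:

```cpp
#include <vector>
#include <algorithm>

typedef void (*RenderCallback)(const void *jobData);

struct RenderJob
{
    int            priority;   // sort key within the layer (e.g. centre Z)
    RenderCallback callback;   // invoked during the draw pass
    size_t         offset;     // where this job's payload sits in the pool
};

class RenderLayer
{
public:
    // Reserve raw memory for a job's data. The returned pointer is only
    // valid until the next AddJob call (the pool may reallocate), which
    // mirrors the "fill it, then tell the engine you're done" contract.
    void *AddJob(int priority, RenderCallback cb, size_t dataSize)
    {
        RenderJob job;
        job.priority = priority;
        job.callback = cb;
        job.offset = m_pool.size();
        m_jobs.push_back(job);
        m_pool.resize(m_pool.size() + dataSize);
        return &m_pool[job.offset];
    }

    // Sort by priority and fire each callback with its payload, then reset.
    void Draw()
    {
        std::stable_sort(m_jobs.begin(), m_jobs.end(), ByPriority);
        for (size_t i = 0; i < m_jobs.size(); ++i)
            m_jobs[i].callback(&m_pool[m_jobs[i].offset]);
        m_jobs.clear();
        m_pool.clear();
    }

private:
    static bool ByPriority(const RenderJob &a, const RenderJob &b)
    {
        return a.priority < b.priority;
    }
    std::vector<RenderJob>     m_jobs;
    std::vector<unsigned char> m_pool;
};

// Demonstration job: records an int tag so draw order can be observed.
std::vector<int> g_drawOrder;
static void LogJob(const void *data) { g_drawOrder.push_back(*(const int *)data); }
```

An unsorted-layer variant would simply skip the sort, preserving submission order for UI.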
Materials and Textures Pipeline
You’ll certainly need textures. Support for compressed textures (PVR) is also desirable but not essential. In 2D artwork the compression artifacts can be quite noticeable; they are really better suited to 3D object textures. I’ve restricted myself to just PNGs and PVRs to simplify the art requirements. It’s fairly easy to support a range of texture formats through middleware, but it can be best to stick to one format that covers all your needs rather than end up with bloated code and inconsistent resources. Materials are currently just a simple table of information detailing what texture they use, and what blend modes and render states to activate.
Fonts
You’ll need to be able to render fonts. I use AngelCode’s excellent BMFont utility to convert any font into a PNG with a text data file. Font renderers will need to support all justification modes, and it is very handy to have a render-in-box method that will word-wrap text into a box area; this comes in handy for drawing requesters. I also like to have support for embedded commands, like changing text colour or jumping to a tab position, so you can easily highlight parts of a message without having to split it into multiple font draw calls.
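The word-wrap half of a render-in-box method can be sketched like this. For simplicity it assumes a fixed character width; a real font renderer would measure each glyph from the BMFont data file instead. The function name is illustrative:

```cpp
#include <string>
#include <vector>
#include <sstream>

// Greedy word wrap: pack words onto a line until the next one won't fit.
// maxChars stands in for the box width measured in fixed-width characters.
std::vector<std::string> WrapText(const std::string &text, int maxChars)
{
    std::vector<std::string> lines;
    std::istringstream words(text);
    std::string word, line;
    while (words >> word)
    {
        if (line.empty())
            line = word;
        else if ((int)(line.size() + 1 + word.size()) <= maxChars)
            line += " " + word;     // word still fits on this line
        else
        {
            lines.push_back(line);  // line full; start a new one
            line = word;
        }
    }
    if (!line.empty())
        lines.push_back(line);
    return lines;
}
```

The renderer would then draw each returned line with the layer's justification mode applied.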
Sprites
Sprites are your moving, animating elements in a 2D game, though they can be used as billboards in a 3D game as well. For sprites I use PSDs as my input source. Each sprite animation frame comes in as a separate PSD layer, and hotspots can also be defined through layers. Since I do all my art creation in Photoshop, this is a convenient storage format. It may also be useful to support building a sprite out of lots of single PNG files, which would let you render something animated in 3ds Max and bring it quickly into the game. I only support PSDs on the PC platform, which dumps out a binary sprite file with all sprites quad-packed into large texture pages. This sprite file is then used by the iOS device for its textures, which saves on load times and storage size on the iPhone, with the trade-off that I need to run the game on the PC to generate data for the iOS version.
Particle Systems
At some point you’ll want explosions or sparkles or some other autonomous animating effect. For Magelore, I use a simple envelope-driven sprite manager with movement defined in an XML file. This gives me a very flexible system with minimal coding effort. If your game is heavily focussed on retro pixel-based effects where you need hundreds or thousands of points, you may want to spend a lot of time here optimising for data cache access, vectorising the updates and reducing dynamic memory accesses during the draws. This can bog you down for a long time and needs many iterations of profiling and testing, plus very low-level knowledge of the memory pipeline for your hardware. However, for a lot of game styles you can get by with a simple, flexible, generic system. Fill rate is probably going to be your biggest bottleneck with particle systems.
Primitive Rendering
Being able to dynamically generate and render raw textured, UV’d polygons is always useful for prototyping new effects or for one-off effects. I prefer to put a wrapper over OpenGL to hide any minor OpenGL ES discrepancies. It is also bad for anyone to touch the raw render states directly, as your material pipeline will be unaware of those changes to state.
Input
Many iOS games will need accelerometer input. This comes to your app via the accelerometer message, so you can just provide your AppDelegate as the delegate for that message, then wrap the values up and send them to your C++ input module for everything else to access. I put adaptive smoothing over the input so the values look smoother when rendered on screen but are still responsive to fast movement.
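One way to get that behaviour is an exponential filter whose blend factor grows with the size of the change: small jitters are smoothed heavily, sudden jolts pass through almost unfiltered. The constants and class name below are illustrative guesses, not the engine's actual tuning:

```cpp
#include <cmath>

// Adaptive one-pole smoothing for a single accelerometer axis.
class AdaptiveFilter
{
public:
    AdaptiveFilter() : m_value(0.0f) {}

    float Update(float raw)
    {
        float delta = std::fabs(raw - m_value);
        // Blend factor grows with the change: 0.1 when nearly still,
        // up to 1.0 (no lag at all) for a jolt of ~0.5g or more.
        float alpha = 0.1f + 0.9f * MinF(delta / 0.5f, 1.0f);
        m_value += (raw - m_value) * alpha;
        return m_value;
    }

private:
    static float MinF(float a, float b) { return a < b ? a : b; }
    float m_value;
};
```

You would run one filter per axis and feed the smoothed values to the C++ input module each frame.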
UI System
My UI system is, in most respects, like a standard windowing input system: frames that have an area on screen, and children. It deviates in a few areas to be more useful in a finger-driven multitouch environment like iOS. Frames can be oversized and overlap; touches go to the frame they are most inside. Frames can also register for multitouch, which means that while one is held, all future touches are captured by that frame. An example use might be a zoom icon: while you hold zoom, you can use a second finger to pinch the screen to control the zoom. I still expose the raw touch list to the game in case it wants to do something freestyle that doesn’t map to the simple concept of buttons.
Serializer
One popular way to save games is via a serializer. Some important features for a serializer are:
• endian agnostic – always serialize a byte at a time so the endianness of the platform doesn’t matter. Useful if you ever make a PS3 or X360 version of your game and want data compatibility.
• bit serializing – I like to include a method that lets you serialise an int to a specified number of bits. If you use the serializer for creating network packets, you’ll want to pack them as small as possible.
• version number – important for your future upgrade path so you can add new content to your objects.
• compression – supporting gzip compression is easy and fast. If you’re posting large serializations to web servers, any reduced bandwidth is a win.
• blocks – I support StartBlock and EndBlock serializer commands. These insert a size into the stream for you, enabling you to skip whole blocks during deserialization if that object is no longer important to your game.
• ascii output – I include an ASCII serializer (write only) for debugging purposes. When saving my games, I run the process a second time using the ASCII serializer, so if anything is going wrong I can easily see what data is causing it.
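The bit-serializing and endian-agnostic points above can be sketched together: ints are written LSB-first with an arbitrary bit count into a plain byte stream, so the result is identical on any platform. The class names are illustrative:

```cpp
#include <vector>
#include <cstdint>

// Writes the low 'bits' bits of each value, LSB first, byte at a time.
class BitWriter
{
public:
    BitWriter() : m_bitPos(0) {}

    void WriteBits(uint32_t value, int bits)
    {
        for (int i = 0; i < bits; ++i)
        {
            if (m_bitPos % 8 == 0)
                m_bytes.push_back(0);           // start a fresh byte
            if ((value >> i) & 1)
                m_bytes.back() |= (uint8_t)(1u << (m_bitPos % 8));
            ++m_bitPos;
        }
    }
    const std::vector<uint8_t> &Bytes() const { return m_bytes; }

private:
    std::vector<uint8_t> m_bytes;
    int                  m_bitPos;
};

// Mirror of BitWriter: reads bit fields back in the same order.
class BitReader
{
public:
    explicit BitReader(const std::vector<uint8_t> &bytes)
        : m_bytes(bytes), m_bitPos(0) {}

    uint32_t ReadBits(int bits)
    {
        uint32_t value = 0;
        for (int i = 0; i < bits; ++i, ++m_bitPos)
            if ((m_bytes[m_bitPos / 8] >> (m_bitPos % 8)) & 1)
                value |= (1u << i);
        return value;
    }

private:
    const std::vector<uint8_t> &m_bytes;
    int                         m_bitPos;
};
```

A full serializer would layer versioning, blocks and compression on top of this raw bit stream.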
Scripting
A scripting language is essential for data-driven games. I use a custom scripting language in my engine. If you take an off-the-shelf language like Lua or GameMonkey, just be sure you’re not going to run into performance and memory bottlenecks; a lot of existing script languages can eat up a considerable portion of your CPU just doing garbage collection. My script language is meant purely to automate simple tasks and has no dynamic memory usage. It is designed for high-level control, so I can embed scripts like “open door; delay 3; close door” into any system. Note that some scripting languages are not designed to handle a command like “delay 3” – one that stalls the script for 3 seconds – unless they are run as co-routines or threads, which can add unwanted complexity.
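A tiny interpreter for that style of script might look like the sketch below: commands run during an Update tick, and "delay N" stalls the script between ticks without blocking the game or needing co-routines. The class name and the Execute hook are illustrative, not the engine's actual language:

```cpp
#include <string>
#include <vector>
#include <sstream>
#include <cstdlib>

// Runs "cmd; cmd; ..." scripts one tick at a time; "delay N" stalls.
class SimpleScript
{
public:
    explicit SimpleScript(const std::string &source)
        : m_pc(0), m_delay(0.0f)
    {
        std::istringstream in(source);
        std::string cmd;
        while (std::getline(in, cmd, ';'))
            m_commands.push_back(Trim(cmd));
    }

    bool Finished() const { return m_pc >= m_commands.size(); }

    // Call once per frame; runs commands until a delay (or the end) hits.
    void Update(float dt)
    {
        if (m_delay > 0.0f)
        {
            m_delay -= dt;
            if (m_delay > 0.0f)
                return;                    // still stalled this frame
        }
        while (!Finished())
        {
            const std::string &cmd = m_commands[m_pc++];
            if (cmd.compare(0, 6, "delay ") == 0)
            {
                m_delay = (float)std::atof(cmd.c_str() + 6);
                return;                    // stall until the delay elapses
            }
            Execute(cmd);                  // game-specific dispatch
        }
    }

    std::vector<std::string> m_log;        // executed commands, for testing

private:
    void Execute(const std::string &cmd) { m_log.push_back(cmd); }

    static std::string Trim(const std::string &s)
    {
        size_t a = s.find_first_not_of(" \t");
        size_t b = s.find_last_not_of(" \t");
        return a == std::string::npos ? "" : s.substr(a, b - a + 1);
    }

    std::vector<std::string> m_commands;
    size_t m_pc;
    float  m_delay;
};
```

Because all state is a program counter and a countdown, many of these scripts can run with no per-script dynamic allocation at all.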
Physics
I’ve not used any physics system in our released games, and Magelore is no exception. Venger used a simple collision system for ray collides and custom 3D movement routines. For a 3D physics system I’d recommend looking at Bullet or Newton Physics; for a 2D physics system it’s hard to go past Box2D. Writing your own requires very strong mathematics skills, and I’ve seen many failed attempts by good programmers.
Sound
For iOS sound we’ve used SoundEngine.cpp, which comes with Apple’s sample programs. It provides simple access for playing mono and stereo caf files, and for streaming m4a files.
Video Playback
Video playback is often useful, and fortunately it is extremely easy to implement on iOS through Apple’s MPMoviePlayerController class. You just create one of these and add its view on top of your current view; you get a callback later when the video is complete.
Debugging
It’s important to support a basic suite of asserts, errors and logging. Lately I’ve taken to using HTTP debugging, where the game listens on port 80 and delivers game information as HTML pages in response to HTTP GET requests. This is great for getting debug information from remote machines, like someone QA’ing your game. On my last project, which was very script heavy, I built a full debugger for my script system on top of these HTTP pages. That turned out to be a great help to the designers, and if they really got stuck, I could just browse to their X360’s web page and see what was going wrong.
Scene Manager and Entity System
You’ll need some kind of object framework: something to keep track of all your objects and call their Update and Draw methods. I’m just using an old-style fat entity system for simplicity, and my scene management is a simple list of entities. I can get away with this because I generally only have objects alive if they are near being visible, and I don’t have too many live objects for it to become a bottleneck. Venger was a tunnel shooter, so we were able to just create and destroy objects as you progressed through the tunnel, meaning all live objects were valid for Update.
I try to keep my hierarchies as flat as possible. If you ever find that your hierarchies are getting deep, it’s time to switch to a component system. However, components come with some baggage, since you have extra intra-entity communication, dependency and ordering issues to deal with. So there are still some advantages to sticking with a fat entity system, especially if you already have one working and the alternative would be to scrap it and start all over again 😉
Phew… Now that I have all that off my chest, I hope to write about more focussed topics in future blogs, where I’ll be able to describe in better detail – with pictures, code samples and videos – how aspects of the game are implemented. Areas I have earmarked for discussion include how content is generated, applying updates to serialized levels, script usage in XML content, the real-time lighting system, and so forth.