Type Names Matter

Original Author: Thaddaeus Frogley 

Names matter. Some 10 years ago, I wrote an article for a blog on using typedef when working with templated C++ types, such as the STL containers.

You can read the original article (Using Typedefs) if you want. I’d like now to try to distil why I think this advice still holds up, 10 years later, even if you aren’t using templates, the STL or C++.

Programming is a form of communication. As a programmer you write for two audiences: You write for the compiler, and you write for the future other programmer.

For most programmers who are game developers, that future other programmer is someone already on your team, or an as-yet-unhired future team member.

Even the solo programmer, the indie developer working from home, will on occasion return to code written many years ago and use it as the basis of new work.

Who knows, if that solo project goes viral, one day you might be hiring people to port it to the latest console or handheld device.

Any professional writer will tell you that one of the most important principles of good writing is to consider your audience. This applies to programming as well.

Now, I’m not going to say that the human audience comes first. A programmer’s primary job is to get the computer to Do The Right Things, and sacrificing how clearly you instruct the computer in order to better communicate with the human reader may well be a questionable trade-off, especially when performance is an important factor in the engineering goals.

But even with the primary function of writing code that works dealt with, there is often room to improve how literate your programming is, and one of the most powerful tools at your disposal as a programmer is giving things good names.

A good type name can tell the reader not just what a type is for, but also what it is not for. Consider a Name vs a String, or a NameList vs a StringContainer.
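As a small sketch of the idea (the type and function names here are hypothetical, chosen purely for illustration), a typedef can tell the reader what the data means, not just how it is stored:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// 'Name' is not just any string: it is text we treat as a person's name.
typedef std::string Name;

// 'NameList' says more to the reader than 'std::vector<std::string>' would.
typedef std::vector<Name> NameList;

// The signature alone now documents intent: names in, sorted names out.
NameList SortedGuestList(NameList guests)
{
    std::sort(guests.begin(), guests.end());
    return guests;
}
```

An extra benefit: if the representation later changes (say, Name becomes a proper class), the typedef gives you one place to change it.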

So, please, think carefully about how you name your types, your functions, methods, and variables, because it’s not just the compiler reading the code.

One day it might be me.


Have you done your 10,000 hours?

Original Author: Phil Carlisle

10,000 hours.

I’m not a big fan of Malcolm Gladwell’s books, but one of the things I read about after seeing a review of his book Outliers got me thinking about my own students.

His book draws on the research of Anders Ericsson, who studies the commonalities of “experts” and how they came to be that way. The upshot is that it is all about “practice”. Notionally, the book suggests that you become an expert at something by fulfilling a regimen of practice for “10,000 hours”. You can read the research yourself to see how far Gladwell has stretched that notion. The thing is, 10,000 hours works out to roughly 3.5 years of full-time, 8-hours-a-day work!

To give you a bit of background, I teach part-time at a university in the UK, and one of my classes involves a module called Advanced Games Technology, in which we ask students to produce a full game by themselves over a two-semester period. We don’t really specify much in terms of constraints other than “it has to be 3D” and “it has to use some middleware”. Ultimately it is meant to be a portfolio piece for them, so they can throw all of what they’ve learnt into the game and produce something really polished.

I’ve been teaching this module for a few years now, so I’ve started seeing some patterns in student responses to this relatively simple and open brief. Can you guess what it is?

Yep, it’s paralysis. For some reason, the sheer concept of starting a project from scratch on this scale seems to throw most of the students into a cartwheel of indecision. My thinking is that this is because, up until that point, they have largely worked from fairly strict briefs and fairly regimented development schedules, with specialized classes and teaching to support them. But when they are faced with “do whatever you really want to do”, it seems what they really want to do is nothing!

Only, it’s not strictly that simple, because this isn’t just common-or-garden student laziness (anyone who has taught knows what that looks like). This is something else, and I feel it has some relationship to the expertise issue: they simply have never practiced conceiving a project from scratch for themselves. Or rather, they haven’t practiced often enough for it to be second nature that you have a hundred ideas you want to create and never enough time for them. Almost everyone I know in the games industry has both done more than 10,000 hours’ worth of work in their respective roles and has hundreds of ideas for games they would like to make.

Analysis paralysis

The other issue, I think, is that many of them fail to start because they are stuck in the limbo of indecision, trying to choose which technologies to build and which middleware to use. I’ve seen this issue with indie projects as well: never starting on something because the perfect technology is not quite there, even if one is “close enough”. The main factor is that if you find there is always a barrier to you starting a particular project (be it a new piece of technology, a new art piece, or a new feature for a game), then you should maybe start thinking that practice in simply producing something is a good idea. I used to get that kind of feeling a lot when I was doing my own indie games using Torque. I always found myself waiting for the garagegames.com guys to develop some new technology they’d been talking about so I could then use it for my game. That notion of waiting for someone else to develop core technology simply wasn’t something I ever had when I was working in commercial development, where we would rather build our own solutions than use someone else’s. But the end result was that I had an excuse not to simply practice and make progress, even if ultimately I would end up using their solution further down the line.

How to fix the problem?

This is basically a mental blockage as much as anything else. But the ideal solution is to “practice” these things until they become almost like muscle memory. One great way to approach this, I think, is to take part in short-term development competitions like Ludum Dare or any of the game jams. At the very least you will not feel like you are the only one to have the blockage. And if you practice throwing together project code and “getting things done”, you will eventually get over that hump of not being able to figure out where to start.

Here are a few other practice ideas for programmers:

  • Make yourself create small project files from scratch, including setting up all build settings. Create your own libraries from scratch and use them as both static and dynamic libraries.
  • Make yourself a testbed project and integrate new middleware into it often; write code that “wraps” each middleware so that you can swap in new ones in an opaque manner.
  • Take on a new aspect of technology, in an area you are not familiar with (animation, graphics, networking, physics, engine design etc)
  • Practice 2D and 3D, make prototypes that work with similar code across both (thinking in components is good here)
  • Work on math skills, practice things like orientations and simple but useful tools like easing. Do plenty of work involving different coordinate frames and transformations.
  • Work on interfaces and API designs. Time them and profile them. Look at how memory changes and how different sizes of allocations change the profiles.
  • Find a new aspect of the language and implement something new with it. Look at delegates, smart pointers and any new C++ standards features. Study new features in boost and other open source frameworks (POCO for instance).
  • Learn a new language, implement a completely unrelated type of project with it than you normally work with (i.e. a database, a tool of some sort etc).
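As a tiny example of the kind of easing practice suggested in the list above (this is the classic smoothstep curve, a common first exercise, not anything specific to the author’s list):

```cpp
// Classic smoothstep easing: maps t in [0,1] to [0,1], with zero
// slope at both ends, giving a gentle ease-in/ease-out motion.
float EaseSmoothstep(float t)
{
    if (t < 0.0f) t = 0.0f;  // clamp below the valid range
    if (t > 1.0f) t = 1.0f;  // clamp above the valid range
    return t * t * (3.0f - 2.0f * t);
}
```

Small utilities like this are quick to write, quick to test, and easy to build on incrementally: try a quadratic ease-in next, or drive both a 2D and a 3D prototype from the same curve.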

I do think that many companies would be wise to give their devs time to practice starting smaller games from scratch too, although this seems fairly rare. The idea of making quick prototypes and generally not sweating the details of choices or designs or architectures is, I think, a useful practice, as it stretches you to not be concerned with details and instead be productive and experimental. Of course the details matter, but you can practice those issues separately. As the Ericsson research shows, the best experts actually use a well-thought-out practice regime. It isn’t that you simply practice for its own sake, but that you practice in order to learn something new, or to commit to memory some process or technique that you can build on incrementally. In a way, you are trying to find flaws in your own process and then factor corrections into the overall practice regime.

I’ve recently been teaching myself 3D art skills, and one of the things I’ve learnt is that simply practicing things like sculpting (for faces) helps to improve my eye for proportion and anatomy, and helps me understand what makes a “good” image. So doing regular sculpting, even if I’m still fairly tragic at it, is all good. As I learn the tools and the forms, I slowly begin to appreciate the underlying structure and process.

I think we can see code in similar terms. Doing quick and dirty prototypes will begin to show you the underlying patterns we commonly need in more complex works, or will show us some aspect of design we aren’t fully aware of until we actually try to develop in that area.

So, my questions to you, the readers of this entry: Have you done your 10,000 hours? How do you approach practice, and what do you do to work on your skills and expand your working process? Have you tackled anything new recently? How did you learn the new area, and how can you apply those learning processes to new areas next time?


What does casual becometh?

Original Author: Tyler Coleman

 

I love management/tycoon games. It seems the rise of the social game is bringing my favorite genre back from a quiet period; therefore, I love social games by association. Many social games are on the borderline of being management/tycoon games, but are not considered as such.

com·plex adj. Consisting of interconnected or interwoven parts; composite.

(I’ll be picking on Zynga’s games for this article, due to their market penetration)

Example – City Building

It seems that social city-building games are not considered to be city-building games. Take Tropico and Empire & Allies: they are both city builders on an island. However, Empire & Allies is a social game, so it is deemed unfit to earn the mark of a strategy game.

Tropico -vs- Empire & Allies


One company that realizes this, and is doing a great job of reaching both markets, is Firaxis. With the release of Civ Revolution and CivWorld, Firaxis focused on a market that was not familiar with 4X games and the years of growth the genre has undergone. These games introduced a new demographic to 4X games, and did it in a simplified way. We cannot expect a player unfamiliar with a genre to step into its most complex title first. There must be a “weaning” process. The core gamers have had the whole history of video games to ease themselves into today’s complexity.

Compare FarmVille to the more recent popular social games: there is a definite growth in complexity over the past three years, and it seems that growth is at a faster rate than the original growth within these genres. Looking at the change in complexity from the original SimCity to SimCity 4, it took 14 years and three sequels for Maxis to reach the complexity found in SimCity 4. Zynga and their competitors have released games on a yearly basis, with small increases in complexity each time. The increase per game is much smaller, but given 14 years, will social games be as complex as their “core” counterparts?

My opinion? The social market is currently weaning a new generation of management game players.

Example – Match 3

Another example of this is the Match 3 genre, where Bejeweled and its successors set the pattern. The original Bejeweled had two game modes; the third has eight, with various options in each. In three titles, PopCap has grown the complexity of Match 3 dramatically, and the market is following them. Games such as Puzzle Quest are increasing in complexity, introducing a demographic of casual players to basic RPG mechanics.

Other Examples

These aren’t the only genres that are reaching into the core realm.

A genre recently coined in the casual market, “time management”, is on its way to having a niche core player base. Games like Diner Dash have paved the way for a group of hardcore time-management games, which most of us would have difficulty picking up immediately. While these games may be lumped in with “casual” games, I would posit that the mouse clicks per minute on a hard level of a time-management game are not much different from those of an FPS or RPG.

Plants vs. Zombies is another example of a game genre aimed at a more casual market. Defense games have existed in the Flash realm (and others) for over a decade, but it took PopCap’s polished take to reach a more casual audience. It would be interesting to see whether the casual gamers who played PvZ went on to play more challenging and complex defense games after finishing it.

Long night of Dwarf Fortress

What are your thoughts on this? Am I finding a correlation where there is none, or could social games be creating a new wave of core gamers? Could companies with AAA interests benefit from introducing their brand of game to a casual/social audience, in order to bring more diversity to their player base?

Will social games wean soccer moms into playing Dwarf Fortress? Only time will tell…

C++ Events

Original Author: Christian Schladetsch

Motivation

All interactive game architectures are, by their nature, forced to be somewhat event-driven. There are various hardware-based events for input, audio and network systems, etc.

Furthermore, in modern game systems events are often used to drive many other aspects, such as:

  • handling abstracted input from Gui controls, such as ButtonClicked and MouseEnter
  • producer/consumer models for long-running tasks such as path-finding
  • collision events emitted from the physics simulator
  • processing abstracted network events above the raw hardware layer

So there is a requirement for an efficient, flexible and expressive event system for C++, which is unfortunately lacking from the standard library.

Of course, there are mechanisms in the Boost libraries that can be used to implement such event-based systems, but game developers are generally wary of adding a dependency on Boost to their source base.

Introduction

This blog entry will introduce a multi-cast event model that consists of exactly one header file which has exactly two external dependencies. These are the standard C++ <list> and <memory> headers. The library resides entirely in the header; there are no associated source files or libraries to link with.

Events from this system can delegate to functions, methods, or other events. The delegate methods can be const or non-const, and the arguments can be any mix of value or reference types. Events are copy-constructable and assignable.

The maximum number of arguments in an event signature is fixed to eight for this implementation, but that can be increased by changing a constant and rebuilding the header.

So here’s some example use, given the one header file:

#include "Events/EventP.h"
#include <string>

using namespace std;
using namespace Schladetsch::Events;

class Foo
{
public:
    void Method(int num, const string &str);
};

void Fun1(int num, const string &str);
void Fun2(int num, const string &str);

int main()
{
    // make an event that has two parameters
    Event<int, const string&> event;

    // add a delegate method
    Foo foo;
    event.Add(foo, &Foo::Method);

    // add a delegate function
    event.Add(Fun1);

    // fire the event: foo.Method will be called, as well as Fun1
    event(42, string("Hello, Events"));

    // remove Fun1 from the event
    event.Remove(Fun1);
    event(123, string("Fun1 not called"));

    // it is perfectly safe to copy events
    Event<int, const string&> other(event);
    other.Add(Fun1);
    other(456, "Both Foo::Method and Fun1 called");

    // we can also 'chain' events: by adding one event to another,
    // the added event will be fired when the parent event is fired
    Event<int, const string&> chained;
    chained.Add(Fun2);
    event.Add(chained);
    event(789, "Foo::Method called, as well as firing chained, which will call Fun2");

    return 0;
}

Events are templates that build the signature of supported delegates from their template type parameters.

The interface to the system is minimal, with just two methods ‘Add’ and ‘Remove’, to add and remove delegates from events.

Invoking, or firing, the event looks just like a function call. When the event is fired, all delegates stored in the event are invoked in the order they were added.

I could have added operator overloading for += and -= to add and remove delegates, as used in C#, but I considered that a little too twee.

Architecture

The system is based on the idea of decoupling the actual delegate from the way that it is invoked. The event object itself stores a list of pointers to generalised delegates. When the event is fired, it iterates through its delegates, passing the arguments to each. As such, please use reference types for large objects, including strings – but you do that anyway.
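To make the shape of that decoupling concrete, here is a minimal sketch fixed at two arguments for brevity (the names Invoker, FunctionInvoker and Fire are my own illustration, not the library’s actual internals, which generate every arity):

```cpp
#include <list>

// Base 'invoker': the event stores pointers to these, never to
// concrete delegates, so functions, methods and chained events
// can all live in the same list.
template <class T0, class T1>
struct Invoker
{
    virtual ~Invoker() {}
    virtual void Invoke(T0 a0, T1 a1) = 0;
};

// One concrete invoker kind: plain function delegates.
template <class T0, class T1>
struct FunctionInvoker : Invoker<T0, T1>
{
    typedef void (*Fun)(T0, T1);
    Fun fun;
    explicit FunctionInvoker(Fun f) : fun(f) {}
    virtual void Invoke(T0 a0, T1 a1) { fun(a0, a1); }
};

// Firing an event then reduces to walking the list of invokers,
// passing the same arguments to each in order.
template <class T0, class T1>
void Fire(std::list<Invoker<T0, T1>*> &invokers, T0 a0, T1 a1)
{
    typedef typename std::list<Invoker<T0, T1>*>::iterator Iter;
    for (Iter it = invokers.begin(); it != invokers.end(); ++it)
        (*it)->Invoke(a0, a1);
}

// Example delegate used below: accumulates its arguments.
int g_sum = 0;
void AddToSum(int a, int b) { g_sum += a + b; }
```

The real library wraps this machinery behind the Event type’s Add/Remove interface; the sketch only shows why one virtual call per delegate is the price of the generality.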

Along with the base Invoker arity-type (see below) stored in event instances, there are arity-types for delegated functions, const methods, non-const methods, and events. ‘Delegated events’ in this sense are used for chaining events together, such that when one event is fired, the next chained event is also fired. Syntactically, chaining events is exactly like adding a new delegate to an event.

I used the term ‘arity-type’ above to describe a collection of C++ types that vary only by the arity (number of meaningful type parameters) that they support, but are otherwise semantically equivalent. The general pattern used to implement an arity-type is:

// forward declare the general case
template <int Arity>
struct ArityType;

// specialise for the case of no arguments
template <>
struct ArityType<0>
{
    template <class T0, class T1, ..., class Tn>
    struct Given
    {
        // implementation for arity-0
    };
};

// ...

// specialise for the case of m arguments
template <>
struct ArityType<m>
{
    template <class T0, class T1, ..., class Tn>
    struct Given
    {
        // implementation for arity-m, using type arguments T0...Tm
    };
};
If you think this gets tedious for arities up to eight arguments, with four different delegate types (function, method, const method, chained event), plus the base invoker type – you’re absolutely correct. That’s why I pulled out the big guns to help with the implementation.

Implementation

The distributed header, EventP.h, is created by running the source headers through the C++ pre-processor and manually editing the result.

Alternatively, one can use the underlying headers directly by including Event.h instead, but this requires that you have all the library headers, as well as the Boost.Preprocessor header files, available.

A benefit of using the post-processed headers is that compilation time is kept to a minimum as only two external headers are included: the standard <list> used to store delegates within an event, and <memory> for tr1::shared_ptr.

Installation

Add EventP.h to a location in your include path. It is not meant to be a human-readable file, but it is useful when debugging, as it contains the post-processed output from the source.

The default fully-qualified type name for an event is Schladetsch::Events::Event. You probably don’t want to use that name, so set the SCHLADETSCH_NAMESPACE pre-processor symbol to something else before including EventP.h.

Improvements

There is a single virtual method call required to invoke each delegate within an event when the event is fired.

This implementation is not thread-safe.

This post has described what and where, but not much of why or how. More work is required to describe how to use these types of systems well, and why they are useful to modern game development practices.

Other Work

As pointed out in the comments below, there are other similar libraries available, such as those supplied by Don Clugston.

Conclusion

This blog post introduced a mechanism that you can use to add a Subscriber/Publisher pattern (also known as Signals/Slots) to your system architecture by including just one header file.

Thanks to Chris Regnier for the excellent comments.

I hope you find this system useful, and I appreciate all feedback.


A Cough Syrup-Fueled Rant

Original Author: Andrew Meade

DISCLAIMER:  I have the Black Death, and have just downed a flagon of NyQuil. Wrapped up in a blanket, donning my amazingly hideous warm track pants and alliance hoodie – I WRITE!!!

Here on AltDev, a number of scholarly topics are covered. From ray tracing to greater diversity in the industry, a lot of professional-level thoughts are broached.

This is my fourth post, and although a few have been from a student perspective, I’d like to go there one more time, because there’s something I’m just itching to talk about. This isn’t directed at anyone in the industry, but at those hoping to join the ranks. This is a post that I hope, for years to come, will be passed on to every aspiring game designer. This is where I try once and for all to answer one burning question with an extremely blunt and long-winded answer.

Here we go.

Future Designer: Will I have to learn how to program?

Me: OMG YES!!!!!111!!!!!1999911111

Future Designer: But I’m not good at –

Me: NOU!!!!

Future Designer: But what if I just –

Me: YOU WILL SIT DOWN AND L2 CODE SOME OF THE THINGS AND YOU WILL LIKE IT!

 

Ok seriously, maybe that’s a bit jarring, so let’s break it down a bit. As designers, we have to wear many hats, and to be perfectly honest, high-level programmer probably isn’t going to be one of them unless that’s the background we come from. But let’s face it… games are built on code. Code is the magical thing that makes your totally awesome flying squirrel mechanic work. Code is the thing that pushes the boundaries of how many flying squirrels we can render in an update. Code is the thing that tells the AI of said flying squirrels to path correctly.

Long story short – No code, no flying squirrels…wait, where was I?

Programming! Here’s the thing, chaps. You gotta know how your games are built – even if it’s a meager, remedial, embarrassing, pitiful bit of knowledge. When you go to the

Programmer Mana

Never ask if it’s possible, ask how long

Original Author: Deano

One piece of advice I always give is: never ask if something is possible, ask how long it will take.

To a good developer, everything is possible. Programmers know that (almost?) everything is doable given significant time to develop; there is almost always some new way of doing something that hasn’t yet been invented. So if you ask “is this possible?”, the correct answer in 99.999999% of cases is yes. Exactly the same goes for design, art and all the other fields.
For example:
Is it possible to paint the roof of a massive church with an amazing, stunning visage that will last for centuries and be considered a masterpiece? Well, yes (see the Sistine Chapel); you just don’t want to know how long it will take or how much it will cost.

The more interesting question is “how long will it take?”. If the estimate involves the words “I know how to do that” or “X seems to do that quite well”, then with luck the production team can sleep soundly. However, if the answer is “I’m not sure I’ve seen anybody do that before”, then ignore any time estimate: it’s research, and while likely solvable given infinite time and resources, infinite tends to upset spreadsheets.

Unfortunately many producers can’t handle that honest unknown assessment. It’s ingrained into them that “we need estimates, we need milestones”, which makes no sense at all for research. Research can take wrong paths and simply not go forward at any trackable pace, so the concept of milestones fails (a milestone is meant to show a linear extrapolation towards the result, but research is not necessarily linear progression, so extrapolating based on milestones is dangerous at best).

This leads to the seemingly reasonable conclusion that no business can do research. After all, how can you cost something that’s unknown?

Well, if you step back and reject some traditional thoughts about production, it is indeed doable.

1. Trust – you can only use people you trust not to waste time, who have the lateral thinking and insight needed, and who have the mindset of ‘playing’ with the field. Hiring a researcher is hard, as those qualities are often only noticeable after you’ve seen someone through the fire of a product release; the only other guide is whether they have done research elsewhere. That trust is vital, as you have to know they are working on the problem and have the ability to evaluate whether something can be done (and are honest enough to tell you). A good researcher may spend six months on a promising idea, find that it falls down at the last hurdle, judge that the problem is fundamental and will take a long time to work around, and so suggest it’s not worth continuing.

2. Cost – you need to realize that the cost of research (in games, anyway) is mostly salary, which you do know; the only unknown is how long you’re paying for them to find something. Once you’ve decided that someone is in a research role, how long it takes becomes, to a certain degree, unimportant. The cost is fixed; the outcome is variable. The hard part is where you place that cost in project-based costing. My advice is usually to treat it the same way you cost HR staff and other non-project people: it’s simply a cost of doing business, making sure your studio is pushing frontiers in areas the company feels are right for it. To make it more palatable (at least in the UK and Europe), there are tax breaks and even grants to encourage research, and having researcher roles on the books makes those much easier to claim back.

3. Not dependent – you have to decouple research from any important dependencies. If you’ve sold your project to a publisher on a research topic, your risk level just went sky-high. Research should feed back into products when it’s ready, not when it’s needed. It’s always tempting to promise something amazing that’s halfway through its research, but try to avoid it; wait till everyone has decided “this works in all cases and is awesome” before selling it to clients.

4. Inquiring-mind vibe – without milestones, how do you know they are doing anything? Well, it’s usually easy: a researcher will talk and ask questions of other people, and when you ask them how it’s going, they will get manic or depressive and go into a long explanation of what’s currently good or bad. As a producer you don’t even have to understand; all you need to know is that quiet is a bad sign. Research is inherently about the thrill of discovery, and that is a hard thing to keep in, so if they are quiet (whether happy or depressed) it’s generally a bad sign (though not just a one-off quiet period; give it a little while, everyone has bad days and weeks!).

5. Communication – even failed research is often very enlightening, so a great way of understanding where things are is to encourage communication with the rest of the company or team. However, try not to do regular director or management presentations; let them speak to their peers, who will understand it. Often things just aren’t presentable in a nice form except to people who understand the field well. Mini tech presentations in lunch hours, or a wiki with work-in-progress notes and feedback comments, let those not in research also stretch their minds, and hopefully speed the research up, as more brains are always better (ask any zombie 😉). It’s good to show everyone (especially those paying the bills!) when something is demonstrable, but forcing it to fit some non-research schedule doesn’t help the research. There’s nothing wrong with asking “got anything laymen can see?”, but allow the answer to be “erm, no, not yet”.

6. Feel useful – after a piece of research is done, find some way of using it quite quickly, or at least acknowledge its usefulness. A research piece on a new control system might not be applicable to the current games, but how about a mini-game using it, or a conference talk or blog post on it? A piece of graphics research might be too demanding now, but books and conferences allow it to be useful in the wider sense. Often the idea of giving something away after paying for the research is hard to swallow: it’s hard enough to convince people that paying for a researcher can be useful, let alone then giving those results to your competitors for free! But usually the secondary effects – improved company PR and status, having others improve and adapt it, and the boost to team morale – are worth it. The other fact is that often it’s the researching itself, not the results, which is most valuable: when you do come to use the results, having the person who knows it from top to bottom, with all the caveats, is the ideal situation. The worst situation is to bury the research after it’s done; nothing says “why bother?” more than spending time on something only to see it put away like the Ark at the end of the first Indiana Jones film.

Having people doing research is a really good way of getting experienced staff working on stuff that really stands out. Some people might like it just as a break after a long, hard project; for others it’s a career move that doesn’t mean management or the burdens of continual shipping.
Most good games developers are creatives. Letting them explore that away from the often soul-destroying reality of shipping products, with its harsh rules of cuts, marketing decisions and the other fun elements that we all love about the “business”, is a great way of keeping talent and keeping fresh ideas flowing through a company.

Naive fallback to canvas from WebGL

Original Author: Rolando-Abarca

Disclaimer

Falling back to canvas should be done only when your game is really simple and has no fancy shaders. It can also be used to provide the player with a subset of your game’s experience, for instance on mobile devices, when there’s no WebGL support. So what I’ll describe here only works for really simple games, and perhaps to show a limited experience instead of a “download Chrome to play this game” message.

The theory and implementation

When writing ChesterGL I realized that if I kept the rendering logic isolated enough, I could easily create a thin layer for adding new rendering back-ends, like the Canvas API or plain DOM. The rationale behind this is that all the math is done in a very general way (using the good old matrix transformations), so I just needed to think it through. One thing that makes this easier is that for every image (sprite) rendered on screen, you don’t really need to position the image; just set the transform of the current context before drawing, and that’s it. And of course, you can easily derive that transform from the current model-view matrix:

ChesterGL.Block.prototype.render = function () {
	if (ChesterGL.webglMode) {
		// ... the usual WebGL way
	} else {
		var gl = ChesterGL.offContext;
		// canvas drawing api - we only draw textures
		if (this.program == ChesterGL.Block.PROGRAM.TEXTURE) {
			var m = this.mvMatrix;
			var texture = ChesterGL.getAsset('texture', this.texture);
			gl.globalAlpha = this.opacity;
			gl.setTransform(m[0], m[1], m[4], m[5], m[12], m[13]);
			var w = this.contentSize[0], h = this.contentSize[1];
			var frame = this.frame;
			gl.drawImage(texture, frame[0], texture.height - (frame[1] + h), frame[2], frame[3], -w/2, -h/2, w, h);
		}
	}
}

I added an option to set the opacity of the sprite as well, by changing the state of the context before drawing. If you look at the drawImage call, we can even support sprite sheets very easily. So with only a few lines, you can support WebGL sprites as well as pure Canvas API sprites. Since we use the same matrix for WebGL and canvas, the whole scene graph is preserved and everything else works the same way.
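To see why those six matrix indices line up with the canvas `setTransform(a, b, c, d, e, f)` call, here is a small sketch extracting the 2D affine part of a column-major 4×4 model-view matrix. The helper name is mine, not ChesterGL’s; the engine just passes the indices directly:

```javascript
// Extract the 2D affine part of a column-major 4x4 model-view matrix:
// the six values that canvas setTransform(a, b, c, d, e, f) expects.
function affineFromMat4(m) {
  return {
    a: m[0],  // x-basis x component (scale/rotation)
    b: m[1],  // x-basis y component
    c: m[4],  // y-basis x component
    d: m[5],  // y-basis y component
    e: m[12], // x translation
    f: m[13]  // y translation
  };
}

// Identity 4x4, column-major
const identity = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0, 0, 0, 1
];
```

Because the 2D context ignores the z row and column, any transform built from 2D translations, rotations, and scales survives this projection intact, which is why the scene graph math can be shared between the two back ends.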

So where should you start this? I did it in the initialization code, where you create the WebGL context from the canvas; if that fails, create a 2d context:

/**
 * tries to init the graphics stuff:
 * 1st attempt: webgl
 * fallback: canvas
 */
ChesterGL.initGraphics = function (canvas) {
	try {
		this.canvas = canvas;
		if (this.webglMode) {
			this.gl = canvas.getContext("experimental-webgl");
		}
	} catch (e) {
		console.log("ERROR: " + e);
	}
	if (!this.gl) {
		// fallback to canvas API (can use an offscreen buffer)
		this.gl = canvas.getContext("2d");
		if (this.usesOffscreenBuffer) {
			this.offCanvas = document.createElement('canvas');
			this.offCanvas.width = canvas.width;
			this.offCanvas.height = canvas.height;
			this.offContext = this.offCanvas.getContext("2d");
			this.offContext.viewportWidth = canvas.width;
			this.offContext.viewportHeight = canvas.height;
			this['offContext'] = this.offContext;
			this.offContext['viewportWidth'] = this.offContext.viewportWidth;
			this.offContext['viewportHeight'] = this.offContext.viewportHeight;
		} else {
			this.offContext = this.gl;
		}
		if (!this.gl || !this.offContext) {
			throw "Error initializing graphic context!";
		}
		this.webglMode = false;
	}
	this['gl'] = this.gl;

	// get real width and height
	this.gl.viewportWidth = canvas.width;
	this.gl.viewportHeight = canvas.height;
	this.gl['viewportWidth'] = this.gl.viewportWidth;
	this.gl['viewportHeight'] = this.gl.viewportHeight;
}

There are some things that will not work, the most obvious being batched sprites (BlockGroup in ChesterGL) and shaders. But if you want a very simple fallback path, this could work.

For clearing the screen, you have two options: either clear the whole rect, or paint it some color. I opted for drawing a black rectangle to simulate the glClear in WebGL:

/**
 * main draw function, will call the root block
 */
ChesterGL.drawScene = function () {
	if (this.webglMode) {
		// WebGL draw mode here
	} else {
		var gl = this.offContext;
		gl.setTransform(1, 0, 0, 1, 0, 0);
		gl.fillRect(0, 0, gl.viewportWidth, gl.viewportHeight);
	}

	// start mayhem
	if (this.rootBlock) {
		this.rootBlock.visit();
	}

	if (!this.webglMode) {
		// copy back the off context (if we use one)
		if (this.usesOffscreenBuffer) {
			this.gl.fillRect(0, 0, gl.viewportWidth, gl.viewportHeight);
			this.gl.drawImage(this.offCanvas, 0, 0);
		}
	}
}

I even left in the option to use an offscreen buffer for drawing (something like double buffering). I did some quick performance tests and couldn’t find a big difference between using fillRect and clearRect. Also, note that we need to reset the transform to the identity matrix before clearing.

Conclusion

It can be fairly trivial to have a simple fallback to the Canvas API, but you must keep in mind what you will use it for. One concern is performance: it’s definitely not bad, but it’s not as good as it can be with WebGL. You will also lose all the cool things you would be able to do in WebGL, like adding 3D objects to your 2D game, 3D effects/transitions, or fancy shaders.

One way to optimize the Canvas API rendering would be to not redraw the whole screen, using dirty rects to only draw the things that were modified. Another optimization would be to port BlockGroup (batched sprites) and draw those sprites to an offscreen buffer; this would work great for tiled maps or backgrounds.
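A dirty-rect scheme like the one suggested could start with a tracker that merges overlapping changed regions before redrawing. The sketch below is hypothetical (ChesterGL doesn’t ship this); only the rect bookkeeping is shown as runnable code, with the actual canvas redraw indicated in comments:

```javascript
// Hypothetical dirty-rect tracker: accumulate changed regions each frame,
// merge overlapping ones, then redraw only those areas.
function rectsOverlap(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

function union(a, b) {
  const x = Math.min(a.x, b.x), y = Math.min(a.y, b.y);
  return {
    x: x,
    y: y,
    w: Math.max(a.x + a.w, b.x + b.w) - x,
    h: Math.max(a.y + a.h, b.y + b.h) - y
  };
}

function addDirtyRect(rects, r) {
  // Merge with any overlapping rect already in the list
  for (let i = 0; i < rects.length; i++) {
    if (rectsOverlap(rects[i], r)) {
      const merged = union(rects[i], r);
      rects.splice(i, 1);
      // the merged rect may now overlap others, so re-insert recursively
      return addDirtyRect(rects, merged);
    }
  }
  rects.push(r);
  return rects;
}

// Each frame, clear and redraw only the dirty areas (ctx is a 2d context):
// for (const r of rects) {
//   ctx.clearRect(r.x, r.y, r.w, r.h);
//   redrawBlocksIntersecting(r); // hypothetical scene-graph query
// }
```

The trade-off is the usual one: merging keeps the draw list short, but overly aggressive unions can grow a dirty rect until you’re redrawing most of the screen anyway.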

That’s it… an easy and simple fallback to the Canvas API when WebGL is not available for your game. Oh, and it also works on iOS! I got ~26fps with 12 moving sprites on iOS 4.3.5 and ~35fps with 42 moving sprites on iOS 5 – pretty good for canvas!

If you want to try this technique, you’re more than welcome to download it from github (but please note that ChesterGL is still a work in progress that I maintain in my free time). If you’re interested, you can also read the original article where I introduced ChesterGL.

Don’t be “that guy”… but how ?

Original Author: Julien Delavennat

Last time I said I would give more details about my interweaving method, didn’t I? I’m working on it, but the post will need to be pretty dense. And since I have urgent, critical and time-consuming stuff to do at the moment, but don’t want to stop posting here on AltDev, my post today will be short and off-topic 😀

A while ago I was reading through this excellent series of articles about networking and the last article of the list made me think.

It turns out, I AM that guy. Well, sort of, I’m me, but… let me explain.

People have sighed when talking about me, and I’ve often embarrassed myself in ways I never really understood. People considered me different. I’ve been isolated sometimes :D. People have said I’m weird. I never got any clue as to why, ever.

Until I recently realized that I was that guy, and what that meant.

In this post I’ll talk about a definition of who “that guy” is, what the problem is, and how it could be solved.

Let’s talk about what being that guy means.

  • People don’t take you seriously.
  • You embarrass yourself a lot, often without realizing it.
  • People don’t pay attention to you, or when they do, they’re wondering what the f- you’re doing or talking about.
  • People aren’t interested in you or what you have to say.
  • People avoid you if they can.
  • People basically don’t like you very much.

Why would that be the case? I think we can sum it up like this: you have to be seen as relevant to people’s needs and interests.

Let’s see what happens if you’re not.

  • If you’re talking about stuff nobody cares about, you’re wasting people’s time.
  • If you’re trying to grab people’s attention for no reason, you’re abusing any attention people are paying to you, hence giving them a reason NOT to care about you next time.
  • If you’re talking about yourself, why on earth would people care about that?
  • If you don’t offer something upfront, people might assume you don’t have anything to offer, and will ignore you.

People care about themselves, and well… you’re not them, so talking about yourself is… rarely relevant.

Think about it.

If people are willing to spend resources on you (time, brain-cpu-cycles, energy and whatever), they’re expecting to get something out of it. People want you to spend resources on them before spending resources on you. People want to talk about themselves, not to hear about you. People want you to validate them, not yourself.

I summed up the problem by saying we had to “be seen as” relevant, didn’t I?

There is one mistake I used to make regularly. I’ve spent a couple of years doing my best to give constructive feedback to amateur video-makers on warcraftmovies.com and tell them how to make their videos better. But few people thought I was helping. Some people thought I was talking nonsense. I was only trying to help.

I tried to be useful. But that wasn’t enough; there is another thing we need to do.

We need to be understandable. If you’re trying to help people but it looks like you’re not, how can people accept your advice? Some people feel offended when we tell them they can improve. Those people don’t understand what you’re trying to do, and misinterpret it.

As I said, you have to show that you’re useful upfront. If you don’t make your use clear enough for people to understand it, you’re doing it wrong.

Be useful, be understandable. Or as I prefer to phrase it: “make yourself indispensable, prove that you are”.

Oh and put other people’s interests before yours… or understand that by helping other people, you’re helping yourself.

 

Another problem is, if you ARE that guy, how would you know? You probably won’t, unless you do… that IS a no-brainer, yes.

If you’re that guy only half of the time, you’ve got a chance at improving, so do that 😀

 

Anyway, this is common sense, but it has eluded me for YEARS, so I hope this will help some of you too n_n

Do Reviews Matter To Me?

Original Author: Ariel Gross

Saints Row: The Third is coming out soon. I’ve spent the last few years of my life on it. Just living and breathing Saints Row, day in, day out. Now it’s done. It hits the street on 11/15, which is right around the corner. So, I’ve been thinking a little bit about the reviews.

I’ve noticed a change in myself over the last couple of years. Back in the day, I worked on a lot of casual and indie games. Reviews were fairly scarce for these games in the first place, let alone a thorough one, so when a review had as much as a mention of the audio, it was amazing to me. I would study every word in that review, poring over it like it was a love note or something. I would revel in the praise and torture myself over the criticisms.

More recently, though, I’ve changed inside, and I’m not referring to my languished musculature and sagging organs from sitting in the same chair for three years, although that has also happened. It’s more of a psychological change. It feels like I’ve realized some things about game reviews, both in general and involving the audio aspects. The bottom line is that they matter less to me as a developer these days.

First of all, on the audio tip, I find myself astonished when there’s more than a few sentences about the audio. And beyond that, when there is at least some mention of the audio, a lot of reviewers out there don’t seem to know how to provide a good critique of the sound effects. Some are pretty good at critiquing the dialogue, and some are good at critiquing the music. But when it comes to critiquing the sound effects, the vast majority of reviewers out there just don’t seem to have the lexicon to express their opinion, or maybe they don’t realize how much the sound effects are adding to their experience. So, what we end up with is a couple of sentences mostly about the voice acting, writing, and/or music, and maybe, if we’re lucky, “the explosions were visceral,” or something like that. Many of us internal audio designers are most interested in the reception of the sound effects because often we’ve made them ourselves, and it can be disappointing to have so little said of our work.

Secondly, a review is written by some working schmoe like me. It’s just some other human being out there who is paid money to do their work. They may be qualified from playing lots of games and having some insightful views, maybe they’re good writers, and don’t get me wrong, writing a good review is an art in and of itself, but still… it’s just some goofball who accidentally burps in the middle of their sentences just like I do. Sometimes developers seem to fear reviewers as they affect the almighty metacritic rating, or they put reviewers’ opinions on a pedestal, or they think reviewers are evil, and there are reasons for these things, but really, they’re just some schmoe getting paid money to do work, just like me. They put their pants on two legs at a time using the Pantsinator 3X just like me. Somehow this makes me care less. I don’t know if it makes sense. It’s just how I feel. Lately I’ve been feeling the same way about celebrities and world leaders. We’re all a bunch of schmoes.

And the third and probably most relevant reason is that I’ve discovered that the actual act of doing the work is why I do the work. Of course I love the end results and love to see it all come together. But to me, the journey is more meaningful than the destination. I put my all into the work that I do because I’m on a team and other people are relying on me. I’ve learned more than I could have ever imagined. I’ve made relationships with people that are now very important to me. I’ve discovered things about myself that I didn’t know before, things that I am capable of that I wasn’t so sure about a few years ago. And because of this, it’s less meaningful to me to have someone that I don’t particularly care about critiquing my work. I already know that I’ve done my best.

Ok, so, all of that said, I need to be clear about a few things.

I do value feedback, even from complete strangers, but it’s more relevant to me if it’s feedback during the development process, because then I can do something about it. Now there is nothing that I can do about it. If I read a review that complains about the audio falling flat, well, I’ll just have to swallow that tough cookie because there’s no changing it at this point.

Also, I mean no disrespect to game reviewers. Reviewers play an important role in educating players about games, and I’ve found that some reviews are very useful and thought-provoking. I particularly enjoy reviews in which it’s obvious that the reviewer isn’t just going along with what the rest of the reviewers are saying and is instead speaking their own mind. I’ve learned things from reviews, and that’s very important to me. And as a gamer, I do occasionally use reviews to influence my purchases.

And finally, I hope that this whole me-being-honest-about-my-feelings thing isn’t coming across as cynical. I’m not summarily distrusting of reviewers, and I don’t hate them, or think that they are evil, or that they’re sheep, or scrubs, or anything like that. They’re people very much like me, and we probably have a lot in common, like languished musculatures, and sagging organs, and many are proud owners of the Pantsinator 3X.

In conclusion, reviews just don’t matter as much to me anymore, but I don’t mean to devalue them. When Saints Row: The Third hits the streets, I will read my fair share. I’m sure I’ll be excited by some and frustrated by some. But in the end, I’m much more interested in my next journey than the previous destination that I’m leaving behind.


Darwinian_Coding: ( Cultural_Text )

Original Author: jake kolb v

Context: where we use this

“Cultural Text” handles rendering text into images to display on-screen, use in textures, or export as a resource file. This set of services is used to correctly format different languages, cultural styles, and font features such as color, strike-thru, indent, or shadowing. It also encompasses managing sentence layout, glyph and line spacing, word-wrapping to borders and optimal legibility tradeoffs with anti-aliasing & pixel granularity.

Generally this is the classic “render-text” functionality found in most video-game engines or provided in libraries like FreeType, with support for Text-FX to display richly formatted layouts.

Goals: what we need

  • An API to render text into arbitrary destinations with pixel-perfect consistency across different platforms & screen sizes
  • Can combine multiple glyphs on top of each other to form a single character (needed for some languages)
  • Can output with colored gradients, shadows, and other visual effects based on a markup encompassing 1 or more characters
  • Supports word wrapping, skipping ahead to a fixed column, and directional ‘justify’ modes
  • Can handle a full screen of legible text with changing values without performance slowing down
  • Can support fixed as well as variable width layout
  • Can reorient to render text in a direction opposite to the usual flow (such as English letters arrayed vertically)
  • Supports Logging (to the standard console as well as HTML)
  • Provides for user-controlled (not programmed) ‘variable precision’ control for decimal numbers, time and date formats
  • Can display two different languages side by side (helpful for in-app language translation) but more importantly allows using rectangles from other images for embedding icons, emoticons, avatar pics, and camera viewport images
  • Text can be transformed, such as moving, bending, squishing, etc
  • Has a method to import data from existing font files

Solutions: how we tried

Technique:  Letter-Only-Blitter aka ‘Lob’

Originally we looked at some of the font engines available, but none met all of our platform needs, so we decided to build our own.  We built a gridded font texture for ASCII characters and generated the used subset for Japanese (hiragana, katakana, & ~500 kanji).  It was used on games in the mid 1990s and on the PSOne, so there was little text on-screen compared to these 2560×1600 modern times.  The process converted UTF-16 characters into a texture-page and rectangle-index for the characters themselves.  We stored a flag in the top bit of these ‘characters’ to act as an escape, triggering offsets of the top vectors (italics) or upscaling (low-quality ‘bold’).
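As a rough sketch of the top-bit escape described above (the field layout and names here are my guess, not the actual Lob format), the stored 16-bit ‘character’ might pack a rectangle index plus a style flag:

```javascript
// Hypothetical Lob-style packing: the low 15 bits hold the rectangle index
// into the font texture, and the top bit acts as the style-escape flag
// (italic offset / upscaled bold).
const STYLE_ESCAPE = 0x8000;

function packChar(rectIndex, styled) {
  // rectIndex is assumed to fit in 15 bits (0..32767)
  return styled ? (rectIndex | STYLE_ESCAPE) : rectIndex;
}

function unpackChar(stored) {
  return {
    rectIndex: stored & 0x7FFF,
    styled: (stored & STYLE_ESCAPE) !== 0
  };
}
```

Stealing a bit this way is cheap but caps the addressable glyph count at 32768, which is plenty for ASCII plus a curated kanji subset, though not for full CJK coverage.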
Pros: Ran reasonably fast with hardware blitters or CPU software copying.  Allowed real-time typing and editing of the Text-FX like in many wysiwyg editors.
Cons: Could only handle left-to-right layout.  Only supported English, Spanish, French, German, and Japanese (they were a fixed enumeration), and further languages would’ve required a lot of table adjustments and possibly other coding.  Required artists to fill in ‘rectangle-text-files’ and build the fonts manually (no font-file extraction), which was painful.  Runs terribly slowly on systems that have a high penalty for each draw call (the modern era).  Japanese used a lot of memory, which required the font to be scaled down, resulting in unsatisfyingly blurry text at the time.  Even now, compositing the characters using radicals could save meaningful space and deliver a broader range.

Technique:  Word-Particles aka ‘Wopa’

This technique was spawned primarily from the batching issues (costs per draw call) found in the Lob system.  The idea was to have two systems: one that rendered words in power-of-two-sized rectangles inside of large textures, and one that composited sentences to their destination.  We initially had a big speed boost on systems where batching mattered, but the code became complex in order to handle optimal fitting of the words into the ‘recently used words’ textures.  We had to handle a downsizing approach when too many unique words were needed at once.  It did allow us to render Japanese at a much higher resolution than Lob did, however.  We used a fast-hashing scheme to identify which words were stored in which rectangle/texture.  We were forced to constrain the Text-FX approach to be per-word, which affected color tints, italic tilts, and shadowing effects, but mostly didn’t limit our artists’ goals.
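The article doesn’t show the hashing scheme, but the ‘recently used words’ bookkeeping can be sketched as a small LRU cache; the structure and names below are my assumption, not Wopa’s actual code:

```javascript
// Hypothetical MRU/LRU word cache: rendered words are keyed by string and
// map to a {texture, rect} entry; the least-recently-used word is evicted
// when texture space runs out.
class WordCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map(); // Map preserves insertion order: oldest first
  }

  // Look up a word's rectangle; refresh its recency on hit.
  get(word) {
    if (!this.map.has(word)) return null;
    const rect = this.map.get(word);
    this.map.delete(word);
    this.map.set(word, rect); // move to most-recent position
    return rect;
  }

  // Store a newly rendered word, evicting the oldest entry if full.
  put(word, rect) {
    if (this.map.has(word)) {
      this.map.delete(word);
    } else if (this.map.size >= this.capacity) {
      this.map.delete(this.map.keys().next().value); // evict LRU
    }
    this.map.set(word, rect);
  }
}
```

This also illustrates the stutter the article mentions: a rapidly changing number mints a new “word” every frame, churning the cache and forcing constant re-rasterization.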
Pros: Automated tools generated the texture information from images.  Avoided batching-woes.  Regular text browsing, such as in ‘help’ menus or information updates, went very quickly.
Cons: On systems that didn’t support render-to-texture, speed suffered due to poor Copy_Pixels_from_Screen_to_Texture or Texture_Upload times (when we rasterized the words on the CPU).  Rapid number updates could cause stutters as the mru-word textures could get overloaded.

Technique:  Just use the OS aka ‘Juto’

After struggling to properly handle Arabic, Thai, Hindi, Hebrew, and the various languages using Chinese characters, we decided to use the native OS capabilities to composite text into a buffer and upload that to the GPU for rendering, or to format for export.  This approach allowed us to skip many of the complexities that had cost so much time in Q&A.  As this happened before Microsoft’s DirectX “DirectWrite” API, we used GDI+ on Windows, FreeType on Linux & BREW cell phones, and Cocoa on OSX.
Pros: Most of the foreign language single-word issues were handled correctly.  We could reach a broader audience and support translators easily.
Cons: It was costly per draw call & update.  Adding features like underline or colored letters became very complicated due to tracking various sizes and issues with the OS allocating buffers (not FreeType however).  Foreign language paragraphs still had a lot of complexity and required different per-platform coding-responses.  Most of the Text-FX features were inconsistent from platform to platform.

Survivor: who proved best & why

Technique:  Cached lines of Variable-Interval-Composites aka ‘Clovic’

Clovic is an outgrowth of Wopa that relies on caching entire lines instead of words.  It uses a simple string sorting/matching approach to determine what text is on what line.  Each ‘text-texture’ is broken into a series of lines.  Each line is packed using half powers of two… such as 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, etc., which gives better coverage than the previous power-of-two W x H rectangles.  We convert UTF-8 directly into texture/rectangle references as before, but allow mixed-language compositing to support icon-based values… such as the signal or battery-life indicators on your phone.  The Text-FX have been unified with the regular “render things into a viewport” visualizer language, so that each ‘cache-line’ of text can support all of the visuals (blur out, HDR, movement) of any regular 3D scene.  For speed purposes, we update each of these lines at a slower rate than the main display rate.  Values like health or location coordinates, which may rapidly change, seem acceptable to update at 10fps instead of 30 or 60.
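The half-power-of-two packing can be illustrated with a small helper (hypothetical, not Clovic’s actual code) that rounds a requested line width up to the nearest bucket in the 2, 3, 4, 6, 8, 12, 16, 24, 32, … sequence:

```javascript
// Round a requested size up to the nearest "half power of two" bucket:
// the sequence interleaves powers of two with 1.5x powers of two
// (2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, ...).
function halfPow2Bucket(n) {
  let p = 2;
  for (;;) {
    if (n <= p) return p;            // power-of-two bucket
    if (n <= p * 1.5) return p * 1.5; // the "half" step between p and 2p
    p *= 2;
  }
}
```

Compared with plain power-of-two rounding, the worst-case wasted space per line drops from just under 50% to just under 33%, which is why the article calls this “better coverage”.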

Pros: Simplified the per-word fitting schemes of Wopa.  Easier to handle multi-cultural language layouts with a per-line (aka continuous-run) approach to kerning/spacing issues.  Makes true bold or outlining (using a bloom filter, not rescaling) much cheaper and more accurate.
Cons: Hard to tune memory use; requires overestimating the amount of text needed.  Current anti-aliasing approaches (MLAA, FXAA) are not well suited, and aliasing artifacts are apparent.

Future: It would be good to automate a method to show progressive ordering of the handwriting ‘strokes’, which are important to know well in many languages, especially any employing Chinese Characters.  This stroke approach could make for interesting visual effects as well as the obvious educational aspects.  There also can be value in providing a mechanism to displace the rendered text into 3D shapes (likely a height field where height 0 is an edge) or back into vectors for SVG support.  Mostly, the future should hold more robust versions of different languages and the interesting nuances of rendering messages correctly.

(Lamely, the image below, made with Lob, has been JPG’d, so some text is blurred…)

(Here text is composited from other views)