Why I hate Test Driven Development

Original Author: Rob Galanakis

Technology/ Code /

I have no problem saying that I write good code. I place a focus on TDD and thorough unit and integration testing. I document everything I write (not just function documentation – I document classes, modules, and systems). The fact is, since I’ve been doing these two things somewhat religiously, the amount of time I have spent debugging code has gone down dramatically. There are just not many bugs to find in most of the code I write; they are easy to narrow down when I find them, and they rarely regress.

This is a good thing, isn’t it? So why do I hate TDD?

Because debugging is fun. There, I said it. I love debugging. I think lots of clever people like debugging. I love someone having a problem, coming to me, looking at it together, getting up to walk around, look at the ceiling, talk to myself, stand in front of a whiteboard, draw some lines that spark some idea, try it, manually test a fix out, slouch down in my chair staring at my computer lost in thought, and repeating this until I actually find and fix the problem. Not just think I fixed it, but be really sure that I fixed it because suddenly it all makes sense. At which point I spring a terrific boner for my obviously superior brain power that was able to find this problem that plagued mere mortals.

So the sad fact is, since I’ve been doing TDD, I haven’t been able to go on this ego-trip a single time in our TDD-written codebase. Sure, sometimes we get a bug in the UI, but those usually manifest easily. Oh, and I’ve definitely used some frameworks and API’s improperly and caused bugs because of that. But those are the annoying types of debugging we all have to do, not the magical mystery tour described above.

I didn’t realize how much I missed debugging until it was gone. Fortunately, there’s still lots of legacy code and API’s to get my fix from.

How to build irresistible social casino games

Original Author: Betable Blog

This month’s Kontagent webinar featured Dave from Blitzoo, the studio behind the SlotSpot Facebook slots game. He shared his knowledge of the market and went through SlotSpot as a case study for social casino games.

A rapidly growing market

It’s no secret that there’s been a ton of interest in social casino games in the past 6 months. Two social casino game companies, Playtika and DoubleDown, have recently been acquired for large sums, and even Zynga is getting in on what is expected to be the next hot social game genre. Kontagent also commented that their fastest-growing customer segment has been social and mobile casino games. But what does this all mean for game developers looking to get into the space?

First of all, if you’re not sitting on a pile of cash, don’t bother looking at Facebook. The viral channels that made Facebook a great platform for indie game developers are dead, says Dave. While this is true for Facebook in particular, competition in the social casino games genre has driven up CPA prices on all platforms. You should expect to be at war from day 1. Many newcomers are looking at cross-platform development solutions so they can spread their eggs across multiple baskets.

When you’re born into wartime, it’s important to pick your beachhead early. For poker, skill-based and sports-based games, Dave recommends targeting a young male audience. For slots and other chance-based games, the audience remains primarily older females (typically 65% of slot players are female, according to Dave) but is starting to drift closer to an even gender ratio. This drift is due to the changing nature of how slots are presented to the player: men can try the game for free, play from home, and play online. These changes make slots-type games more appealing to a broader audience.

Furthermore, these changes make casino-style games more appealing to a broader audience across the board. If you look at the difference in growth curves between FarmVille style games and social casino games, casino games show more consistent, wider growth curves. This shows that casino games keep players engaged longer and are less reliant on a huge launch for success.

Lastly, Dave warned social game developers that real-money gambling games are going to be launched on many if not all major platforms in the next year. Facebook is already looking into allowing real-money gambling in the UK, and the few mobile gambling apps that already inhabit the Apple and Google app stores are experiencing meteoric growth. Real-money gambling games are going to be tough to compete against because these companies can afford a much higher CPA than virtual currency game developers. However, real-money gambling companies come from a world where each player is a paying player, and aren’t as familiar with the freemium model. If you’re intelligent with your user acquisition and optimize your free-to-paid conversion, you can compete with these giants… for now.

SlotSpot Case Study: Build, learn, iterate

Blitzoo built SlotSpot in just 6 weeks, applying Eric Ries’ Lean Startup methodology with single-minded focus, to impressive effect with their first social game (more on this in a later post).

When building the minimum viable product, Dave highlighted the importance of analytics and metrics. You need metrics for every piece of the game business:

User Acquisition. You need to segment your inbound users and see which creative, which demographics and which countries converted most effectively.

Free-to-paid Conversion. Which trigger got players to purchase? Which offer got that finicky player to finally buy? What creative was most effective?

Game Balance. What is the average session length? How many spins does it take until they are out of money? How does this change as they level up?

Player Temperature. This was Blitzoo’s own measure of positive vs. negative player feedback.

As you are building your game, you should always be asking yourself: what do I need to know? The answer will help you determine the metrics that are most important to your business.

SlotSpot Case Study: Maximizing revenue

When looking to maximize your revenue, the first step you should take is to identify your high value players. For the social casino genre, there are two types of high value players: whales and evangelists. Whales are the players that spend the most, typically spending more than 10x the ARPPU of your game. These players can spend over $1,000 per game and make up a substantial portion of your game’s revenue. Evangelists are players that love your game. They invite their friends, are active on forums, and give you valuable feedback. Catering to these two groups is of utmost importance for any social casino game.

To appeal to these high value customers, you should segment them internally via your analytics and present them with unique offers. You should also track their playing habits and retention so that you can optimize your game to keep these players around. When dealing with their support or feature requests, take a little more time to write a custom response. Small tweaks like these can create the best experience for these players and keep them coming back.

Your second step when maximizing revenue is a no-brainer: maximize your revenue from all of your players. Run A/B tests and experiments on players, using Lean Startup as a framework. In the revenue graph Dave presented, the revenue spikes get larger and larger; this is because the A/B tests improve the promotions’ effectiveness over time.

When running these experiments, be sure to segment each test by player type, whether they’re a whale, an evangelist, a first-time buyer, or someone who has never purchased. Dave says that 90% of players on Facebook never pay, 5% will typically pay, and 5% will maybe pay. The key to maximizing the number of players that do pay is running these experiments. Test different offers, such as coin bundles, sales or referral promotions, to see which performs better. Be sure to test only one thing with each test: the copy, the collateral, or the offer itself. However, never test yourself into having only one “optimal” offer. Dave’s advice is that a variety of offers always performs better than one “best” offer. Also, be sure to vary your delivery method for the offers, whether it’s an interstitial ad, a banner ad, or an in-game graphic.

SlotSpot Case Study: Analytics as an immune system

The last key use for analytics is as an early warning system for bugs or problems with your game. Use your analytics tool to track errors, customer service requests, ARPPU, free-to-paid conversion, virality, average bet size, and anything else that is a mission-critical function of the game. If any of these numbers spike or drop dramatically, your canary is dead and it’s time to troubleshoot the coal mine. For example, Blitzoo had a problem where their overall revenue suddenly dropped. From their analytics, they could see a significant increase in free-to-paid conversion rate, but an even larger decrease in ARPPU. It turned out that a promotion created by their marketing team was too aggressive, and undercut their price significantly. Once the error was spotted, fixing it was simple and the situation was resolved (damn it, marketing! :P).

Mobile: The Next Frontier

To conclude his presentation, Dave talked about Blitzoo’s upcoming transition to mobile and how they planned to adapt SlotSpot to the new space. He reiterated the importance of analytics on mobile because there are even more factors to track for a mobile player than for a social player. There is no “single solution” for social sharing on mobile like there is on Facebook, so you need to incorporate and track a variety of them. Also, customer acquisition is done on a per-deal basis rather than all run through Facebook, so there’s a lot of segmentation there as well. Finally, customer onboarding is incredibly important to building a successful game on mobile, and A/B tests are the key to optimizing your initial onboarding flow.

Dave pointed out that you simply cannot port a social network game to a mobile platform and expect the same results. For one, there’s a much shorter session length: mobile players spend 3 minutes per session on average while social players can spend up to 20 minutes. This requires a complete overhaul of game balance and the willingness to take an ax to your feature set. Offers are responded to differently on mobile, so you will need to start your in-game marketing testing all over again. Lastly, engagement is king on mobile. It’s much harder to keep players engaged with a mobile game and to re-engage players that have been lost. Making all of these adjustments has been key to Blitzoo’s preparation for the mobile launch of their SlotSpot app.

Wrapping up

Thanks again to Blitzoo and Kontagent for throwing a great webinar. I’d recommend getting on their webinar circuit by signing up for their mailing list; each one I attend keeps getting better.

The social casino space is on fire right now, and one can only expect the fire to spread to mobile. If you’re looking to get into this space, move quickly and buckle up, because it’s about to take off. And while you’re busy fending off incumbents, trying to keep distance from the newcomer nipping at your heels, and avoiding the real-money giants, you might want to look into Betable. We’re the first and only platform that lets you add real-money gambling to games, and it might be the weapon you need to win in the social casino game space.

Put On Your Game Face

Original Author: David Czarnecki

TL;DR

I recently started a weekly blog series on our company blog called “Game Face” that highlights our internal and external open source work. Internal open source work refers to projects published on the Agora Games GitHub account. External open source work refers to projects that we contribute to in off-hours and may or may not have anything to do with video games, because we’re swell folks like that. This is important for our company and I think it should be for your company.

PUTTING ON THE GAME FACE

At the beginning of 2012, I did an assessment of all the projects we open sourced in 2011. Late much? 22 projects wasn’t bad and I want to beat that number in 2012, but it got me thinking that this kind of information is timely and needs more regular attention. So, I started kicking around ideas for a blog series name. After a day, I settled on “Game Face”. “Game” obviously referred to the work we do in developing gaming-related libraries and middleware. “Face” would refer to the code-wise public face of the company and the developer(s) working on the individual libraries.

It will help you become a better writer: If you’re a developer and you’re scared about writing a weekly blog series, you shouldn’t be. The focus of the blog is technical in nature, so you don’t need to spend a lot of time on exposition. If you’re good with your commit messages or keeping a CHANGELOG, the blog posts write themselves. For example, from an item in a recent blog post, “This release addresses the first future idea from the README when the gem was released over a year ago to add a method allowing for bulk insert of data into a leaderboard.” Cut and paste for the most part my friends. The intrepid developer might even automate the creation of the initial blog post that can be wordsmithed by someone you deem more fluent in languages other than C++ 🙂

It will help you improve your code: Recently, someone contributed code and tests to allow for leaderboards in “reverse” (lowest-to-highest) sorted order. I just hadn’t come across that use case in the games we’ve worked on where I needed a leaderboard appropriate for a racing game. But someone else did. And now our library is better off because of it. Beyond peer feedback, or any feedback, opening up your code and talking about it helps you to think about the ramifications of changes when there are developers other than you or your company using your code.

It will help you as a company: By highlighting your company’s open source work, it may motivate your developers to take a stab at open source in the first place or to clean up a library for external publishing. You might also attract developers who put a high value on knowing their contributions will be recognized, whether they are internal or external, and that their contributions may see the light of day outside of your company’s hallowed halls. The last “Game Face” blog post I wrote highlighted a new library one of our developers had released that allowed you to parse Beersmith2 (beer brewing software) files in Ruby. I imagine many video game companies rely on open source to some degree, so publicly promoting your contributions can also help you in the eyes of the open source community.

It will help you have awkward conversations: I can’t tell you what to open source and what not to open source. You’ll have to talk as a team, as a company, and possibly with your lawyers to understand what you can and cannot open source. In 2011, there was only one instance where I went to our CEO to get a check on, “Can I open source this?” We walked around the block and our conversation went basically as follows:

Me: “I’d like to open source a leaderboard library. Given this is a core thing we do, is that OK?”

CEO: “So, someone else could do this before us using a similar method?”

Me: “Yes.”

CEO: “Ship it!”

FIN

Hopefully I’ve made some compelling arguments that your company should highlight your internal and external open source development. Our weekly blog post series highlighting our internal and external open source work comes out on Friday and now I usually get one or two messages throughout the week in our group chat room asking, “What’s going in to Game Face this week?” It feels pretty good when I can say back, “Your face.” If your company does anything with open source, I’d love to know about it!

Asset packaging in browser-based games

Original Author: Rob Ashton

I’ve written a couple of posts on asset management in web-based games already, and I still think most of the points made in them are valid and useful.

However, in putting together my latest endeavour (a hack-n-slash isometric multi-player RPG in HTML5 canvas), I’ve learned a few more things and want to share them.

TLDR; I’ve pushed the code I’m currently using to Github. In an earlier post, I suggested having a resource caretaker from which you request resources like textures, sounds, models, shaders etc. – it could take care of loading the data and return promises rather than the real things. That looked something like this:

var modelOne = resources.find('models/hovercraft.json');
var modelTwo = resources.find('models/missile.json');
var explosionSound = resources.find('sounds/explosion.wav');

resources.on('fullyLoaded', function() {
  game.start();

  // use the resources
  modelOne.get();
  modelTwo.get();
});

And the world is a happy place to live in: we rely on those assets coming down via HTTP, rely on HTTP caching and everything else given to us, and it works well.

However, we also rely on the initial state of the game indicating which resources it is going to need – and this doesn’t account for other resources that might be requested once the game is under way (for example, explosions, other models/textures further into the world, sounds, etc.).

This results in negative artifacts like ‘popping’, or sounds being played after the special effect has finished (or in bad cases, the player walking on an empty background!)

Therefore the answer is to pre-load, but how…?

The Application Cache

Now, I’m not going to give a full description of this, but essentially you can tell the browser “Hey, these are my resources, please download and cache them”, for example:

CACHE MANIFEST
# 2012-03-11:v1

CACHE:
/favicon.ico
game.html
models/hovercraft.json
models/missile.json
sounds/explosion.wav

We can generate this kind of application cache automatically as part of our deploy process by enumerating through the assets, and we can choose not to re-generate it if none of the files have changed.

When we re-generate it, we can update the timestamp, thereby indicating to the browser that the manifest has changed and that it might like to go through those assets on the server and see which ones it needs to download.
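A sketch of how that deploy step might look in NodeJS – the directory layout, file names and hashing choice here are purely illustrative, not the actual build script:

// build-appcache.js - illustrative sketch of generating the manifest at deploy time
var fs = require('fs');
var path = require('path');
var crypto = require('crypto');

// Recursively collect every file under the assets directory
function listAssets(dir, files) {
  files = files || [];
  fs.readdirSync(dir).forEach(function(name) {
    var full = path.join(dir, name);
    if (fs.statSync(full).isDirectory()) listAssets(full, files);
    else files.push(full);
  });
  return files;
}

// Hash the contents of every asset so we can tell whether anything changed
function hashAssets(files) {
  var hash = crypto.createHash('md5');
  files.forEach(function(file) { hash.update(fs.readFileSync(file)); });
  return hash.digest('hex');
}

var assets = listAssets('assets');
var digest = hashAssets(assets);
var previous = fs.existsSync('manifest.hash') ? fs.readFileSync('manifest.hash', 'utf8') : '';

if (digest !== previous) {
  // Only rewrite the manifest when something changed, so unchanged deploys
  // don't force every client to re-download the whole cache
  var manifest = 'CACHE MANIFEST\n' +
                 '# ' + new Date().toISOString() + ':' + digest + '\n\n' +
                 'CACHE:\n' + assets.join('\n') + '\n';
  fs.writeFileSync('game.appcache', manifest);
  fs.writeFileSync('manifest.hash', digest);
}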

We also have similar code to write against this: “Update the resources if necessary, wait for this process to complete”.

appCache.addEventListener('updateready', function() {
  game.start();
}, false);

Ace.

This has its share of issues though – the biggest one is probably that it doesn’t really allow for any granularity in your app.

In this modern age of the internet, our users have short attention spans and a long initial load may well mean losing out on users – if you have different levels for example, you probably want to download each level and the assets for that level before you play that level – not at the beginning of the whole game.

In short, the Application Cache is great if you have a small self contained game (such as puzzle games), or if you want to fully support the game being playable offline immediately after download;  it does however come with its share of issues.

That brings us onto another option

Write your own Application Cache

We can easily emulate what the AppCache does for us by writing our own manifest definition and using that to pre-load files from the server. We can then rely on the browser-cache to carry on working the way it always has (so if files haven’t changed, don’t fetch them etc).

This is essentially like our first solution again, except we now have manifest files which describe what data needs pre-loading.
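For example, a per-level manifest could be nothing more than a JSON list of the assets that level needs – the exact shape shown here is just an illustration, not a format the caretaker insists on:

{
  "assets": [
    "models/hovercraft.json",
    "models/missile.json",
    "sounds/explosion.wav",
    "textures/levelOne.png"
  ]
}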

resources.preloadFromManifest('levels/levelOne.json');

var modelOne = resources.find('models/hovercraft.json');
var modelTwo = resources.find('models/missile.json');
var explosionSound = resources.find('sounds/explosion.wav');

resources.on('fullyLoaded', function() {
  game.start();

  // use the resources
  modelOne.get();
  modelTwo.get();
});

This still causes some issues – the browser may choose to not cache some objects, may decide that some things aren’t available offline, and weirdly give us some issues with sounds.

For example: One issue I discovered with sounds in my last Ludum Dare attempt was that you have to create a new Audio object every time you play a sound – and the browser re-requests the asset from the URL you give it each time that happens!
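One possible workaround for that particular quirk is sketched below – fetch the sound once, keep it in memory as a data URI, and build each throw-away Audio object from that so there is nothing for the browser to re-request (the helper here is illustrative, not part of the caretaker):

// Fetch a sound one time and keep it around as a base64 data URI
function fetchAsDataUri(url, cb) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.responseType = 'arraybuffer';
  xhr.onload = function() {
    var bytes = new Uint8Array(xhr.response);
    var binary = '';
    for (var i = 0; i < bytes.length; i++)
      binary += String.fromCharCode(bytes[i]);
    cb('data:audio/wav;base64,' + btoa(binary));
  };
  xhr.send();
}

var explosionData;
fetchAsDataUri('sounds/explosion.wav', function(dataUri) {
  explosionData = dataUri;
});

function playExplosion() {
  // Still a fresh Audio object per play, but its 'URL' lives in memory,
  // so there is no network round-trip when the sound fires
  new Audio(explosionData).play();
}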

Another problem with this approach is again that it doesn’t scale too well – if you’ve got 500 textures, 500 HTTP requests are not really appropriate with the current technology stack (until SPDY or something similar is supported universally, at least).

That brings us to where I am at the moment with my hack-n-slash multiplayer canvas RPG…

Bundle everything up into a single file!

How things come full circle – desktop game developers have been inventing and consuming package formats since time began, and now web-game developers can get in on that action too.

Hence I wrote a command line utility in NodeJS to scan a directory and package the various files into a single JSON file:

.json   -> keep it as JSON
.png    -> Base64 encode it
.wav    -> Base64 encode it
.shader -> add it as a string
This works well, as on the client loading these assets means writing a small amount of code like so:

var image = new Image();
image.src = "data:image/png;base64," + imageResource.get();

In actuality, what we end up doing is loading an entire asset package, say ‘assets.json’ and writing the following code before loading the game.

$.getJSON('assets.json', function(rawData) {
  preloadAssets(rawData);
});

function preloadAssets(rawData) {
  forEachResourceInRawData(rawData, preloadItem);
}

function preloadItem(key, itemData) {
  var loader = findLoaderForFiletype(key);
  increaseAwaitCounter();
  loader(key, itemData, decreaseAwaitCounter);
}

resources.on('complete', startGame);

Where an implementation of a handler for a PNG might do this

function preloadPng(key, itemData, cb) {
  var preloadedImage = new Image();
  preloadedImage.onload = cb;  // wire the callback up before setting src
  preloadedImage.src = "data:image/png;base64," + itemData;
  preloadedAssets[key] = preloadedImage;
}

This means that once the assets file is loaded, the game can start and there are no delays in playing audio, displaying textures and so on and so forth.

Clearly this is massively simplified, because in the real world what we actually do is the following.

Do the hybrid approach

  • Load the assets for the current land
  • When an asset is requested, check: is the asset in the preloaded package?
  • Yes? -> Return a promise containing the asset
  • No? -> Return a promise without that asset, make an HTTP request to get the asset, then cache the asset
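A sketch of that lookup – assuming the caretaker keeps the preloaded package in a map called preloadedAssets and falls back to jQuery for the HTTP request; the promise shape here is simplified for illustration:

function find(key) {
  var callbacks = [];
  var promise = {
    asset: null,
    get: function() { return promise.asset; },
    on: function(cb) { promise.asset ? cb(promise.asset) : callbacks.push(cb); }
  };

  function resolve(asset) {
    promise.asset = asset;
    callbacks.forEach(function(cb) { cb(asset); });
  }

  if (preloadedAssets[key]) {
    // Already in the package for this land: no HTTP round-trip needed
    resolve(preloadedAssets[key]);
  } else {
    // Not preloaded: fetch it, cache it for next time, then resolve
    $.get(key, function(data) {
      preloadedAssets[key] = data;
      resolve(data);
    });
  }
  return promise;
}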

For example, my little RPG pre-loads most of the textures and models used across the land, but downloads the actual tile information (where is a tile, what is on that tile) as the player walks around.

This is similar to the streaming that takes place in any reasonable desktop game, and offers a good compromise in a connected multiplayer game (and would work well for a disconnected one too).

I’ll push out the client code I’m using as a library at some point, but that’s not really a suitable candidate for ‘frameworking’, because homogenising something like asset management on the client side can do more harm than good.

AltDevPanel on Optimization

Original Author: Don Olmstead

Update: Use this planner to find the time for your city.

I’m happy to announce our first #AltDevPanel! Scheduled for this Saturday at 10PM PDT, we’ll be bringing together some masters of high performance programming to talk about their craft. Our scheduled panelists are Mike Acton, Tony Albrecht, John McCutchan, and Jaymin Kessler.

The optimization panel will focus on optimizations from a micro to a macro level, providing observations on what to do and what not to do when optimizing a code base. It will also delve into how to measure performance, and how to weigh the impact of a particular code change. And finally some discussion on how to practice optimizing so you can do it when it counts.

We’ll be accepting questions related to the panel’s topic prior to the panel. Questions can be posed in the comments below or via Google+ and Twitter using the hash tag #AltDevPanel.

The panel will be broadcast using Google’s On-Air functionality. This requires a Google+ account to participate as an audience member. If you don’t have a Google+ account or aren’t available at the time the panel is set to occur you can still watch the talk at a later time. All talks will be posted to YouTube, and the links will appear on the site shortly after.

We’d like to thank Google for allowing us early access to On-Air. We’d also like to thank the people that helped us with this process, Colt McAnlis and Travis Sanchez. Without their efforts this wouldn’t be possible, so thank you guys.

Panelist Bios

Mike Acton

Engine Director at Insomniac Games (Resistance, Ratchet and Clank for PS3). Keeper of #AltDevBlogADay.

Tony Albrecht

Tony Albrecht is the founder and director of Overbyte, a company that specialises in high performance programming solutions for game companies. He’s built game engines for a range of companies over the last decade or so and now spends his time helping other studios improve their software’s performance.

John McCutchan

John McCutchan is from Kitchener-Waterloo, Ontario, Canada. Wrote inotify for the Linux kernel. Got his M.Sc. in computer science from McMaster University. Lead of the Game Systems team in Developer Support, SCEA. Wrote Move.Me.

Jaymin Kessler

DoD disciple, his datas trifle. He shoot structs from his brain just like a rifle at Q-Games.

Making Rad Audio For Everyone

Original Author: Ariel Gross

Team Audio is tasked with making the game sound amazing for everyone. Every single player should have an equally rad aural experience when they’re playing the game. That’s a typical goal for most Team Audios out there. But I’m starting to think that it’s not quite the right goal. Or that there’s a problem with it. Or something. Let me explain.

Players have all kinds of crazy crap.

When it comes to audio, players be crazy. Not meant as a dis. Just an observation. One person might have a high fidelity 7.1 surround sound system with discrete speakers placed perfectly according to THX standards. The next person might have the same system, but all of the speakers are stacked on top of each other like a tower in the center of their room. And the next person is listening through their old CRT television’s stock speakers. And the next guy is listening on amazing headphones. And the next person is listening on earbuds that they got from a plastic egg after depositing a nickel in one of those vending machines.

We never really know what the heck our player is doing when it comes to their sound system setup. The Volition offices are a microcosm of this. Some people here use professional speakers (Team Audio). Some people use headphones (Team Design). Some people use cheap PC speakers (Team Production). Some people use earbuds (Team We Hate Team Audio).

Again, not meant as a dis. All of these are, of course, perfectly fine means of getting sounds into ears. However, some issues can crop up as a result.

“I can’t hear Sound X. Turn it up.”

Story time! Also, disclaimer time! This is a dramatization. This didn’t happen here at Volition. But I know it has happened elsewhere, and it’s not some isolated incident. Team Audios world-wide will be more than happy to regale you with some variation of this story while sullenly drinking their whiskey and/or pink-colored bismuth.

Once upon a time there was an Audio Designer. Our dear Audio Designer had made Sound X and had implemented it into the game. Audio Designer was happy.

Two weeks later, Audio Designer is called over to Project Producer’s desk. Audio Designer was scared. Nervous. Vulnerable. Naked. Okay, not naked. No, you know what? Naked. May as well make this story extra weird.

Project Producer was frustrated because they could not hear Sound X very well, and demonstrated this through their cheap PC speakers. Naked Audio Designer responded that the reason that Sound X couldn’t be heard very well was because of the crappy sound system that Project Producer was using.

Project Producer frowned. Project Producer rebutted that perhaps Naked Audio Designer shouldn’t be testing sounds on top tier studio speakers and should instead be testing sounds on crappy PC speakers, or television speakers. And why wasn’t Naked Audio Designer wearing clothes? Well, that’s another story altogether, isn’t it? It is.

Well, Naked Audio Designer immediately went home and wept for two weeks in their crawl space and forgot to drink any water, and therefore died, and was of course eaten by rats.

So, why don’t we just test on TV speakers?

I find that the easiest way to answer that question is to refer to our dear friends in Team Art. While it’s possible that Team Art might have to use whatever monitor is in front of them, they tend to want to work on the highest possible quality, perfectly color calibrated monitor. When they work on these monitors, they can assume that the visual quality will hold up across all the different types of screens.

We’re in a similar situation in audio. We do our work using high end studio speakers (usually called reference monitors, but I tend to use the word speaker to avoid confusion), sometimes with especially flat frequency responses (meaning the speaker itself isn’t changing the sound), and usually in calibrated listening spaces, because that way it should translate better to all the different kinds of speakers that our players have.

If a colleague from production or studio management wants your Team Audio to work with low quality speakers because that’s what most players will be using, point to this article if you want. Maybe then you’ll be able to scam them into getting you nice equipment. Wait, did I just ruin that strategy? Maybe I did. But I’m too lazy to go back and change the word scam to convince. Moving on.

But testing on TVs… that just seems smart.

You make a good point, Blog Heading.

Testing on TVs still seems like a good idea. I’m not trying to make a case against testing your mix on a ridiculously wide range of audio systems. You should probably still do that. And you should still test your game audio on a system that you’re most familiar with. Test it on the system that you play games on. Then capture some audio from your game, burn it to a disc, and listen to it in your car. Test it in the conference room where you do your show n’ tells. Test it on the Jumbotron in your living room. What, doesn’t everyone have a Jumbotron in their living room? Well, then do your best with what you have, I guess.

In my experience, the most important thing to test on a wider variety of sound systems is your mix. If you are struggling to hear the dialogue in your game on your television where you play most of your games, then that’s something that might be worth looking into. Does the music seem too loud when you listen on the headphones you’re most used to? Data point added! Just be careful. If you’re doing a final mix of the game, then you’re probably pretty close to submission. Choose your battles wisely!

Another trend in audio is to give our players a bunch of audio options. We give them discrete volume options to adjust sound, music, and voice, sometimes even more granular than that. We also occasionally give them overarching presets like Hi-Fi, Television, Headphones, etc, which can do things like control how much compression there is on the master mix or on sub-mixes, among other wizardry. This is awesome, but if we’re being thorough, it can put an increased burden on the audio team as well as audio QA.

No time. Who care about most?

Wow, Blog Heading, the way you wrote yourself is really convincing. You even deliberately left out a couple of words. Nice work.

I think the question that the Blog Heading is trying to ask is, when we’re low on time, which is typical in audio land, then who should we be designing audio for? Should it be for the player with the kick ass high end sound system? Or the player with the crappy earbuds?

Your team may have varying opinions on this. Some people or disciplines will suggest that you need to appease the lowest common denominator. Like when you make your PC game playable on an 80486SX, even though you may need to wait two years to play the same game on max settings. So, if that’s the case, then we need to design for earbuds gamer. Or at least crappy Labtec PC speakers gamer.

Well, my current opinion is the opposite. Go for the gamer with the higher end sound system. Why? That gamer probably gives a crap about the sound in the game.

I know, earbuds gamer might care, too. Maybe earbuds gamer just isn’t able to afford a nice sound system. Well, that’s okay, because fortunately for earbuds gamer, we always do our best to accommodate everyone, right? Right!

But the gamer with the higher end sound system is basically begging you to put that system to good use. Higher end sound system gamer probably wants you to do some serious face melting. Why else would they spend all that money? They probably want an aural experience more similar to the (good sounding) movie theaters.

This might be kind of controversial. I don’t really know. I haven’t tested the waters on this one, yet. I guess I should say that this is only my personal opinion, not that of any company that I work for, and it’s just an opinion, and I’m just a dude whose opinion changes, like, all the time. I’m not running for president, here. And if I was, I’d probably make a law that proclaims bologna as the official meat AND undergarment material of the country. Does the president make laws? I can’t remember, but I don’t think so. I think it’s congress, actually. Anyway, just remember, a vote for me is a vote for bologna.

Should I Volunteer?

Original Author: Heather M Decker-Davis

I absolutely love encouraging people to volunteer within the game development community. It benefits the particular effort in question, the individual, and the game development community at large.

However, over the course of being heavily involved in a variety of volunteer operations, it’s come to my attention that the general understanding of what it means to volunteer may vary from person to person. It’s not just raising your hand and feeling good. These two steps are indeed part of the process, but there’s a lot more to it than that!

The following are general guidelines for volunteering, which I’m hoping may serve as standards to help a variety of organizations, groups, and individuals by better educating budding volunteers on how they can most effectively serve their cause.

Initial Steps

The first step to volunteering isn’t just saying you’ll do something. If you’re eager to get involved with an effort, please start by:

  1. identifying available opportunities
  2. evaluating how realistic it is for you to contribute, based on your existing workload, schedule, and abilities

You might begin by asking a professional organization like IGDA how you can help out, and in turn, receive a list of items that could currently use some attention.

Take a moment and actively match yourself to things you know you can accomplish. That isn’t to say you shouldn’t push yourself to grow, but you should be able to fulfill the basic need you’re stepping up for. If you volunteer for something that’s completely beyond your capabilities, the organizer is basically back to square one when this detail is discovered, which works against the overall intent of helping people.

Playing to your strengths and thoughtfully managing your time will overall aid you in becoming an outstanding volunteer.

Following Through

Similarly, it’s extremely important to finish what you’ve started. The bottom line is, someone needed help with X. Therefore, the most useful thing you can do is actually take X from a need to a finished objective.

Offering your time to help an organization, group, or individual should be considered a commitment. When you volunteer, people are now counting on you to pitch in with something!

For example, say I needed someone to make posters for an event. If no one came forward, I would be aware that I had something unassigned and I’d unconsciously operate with the understanding that I need to stretch my resources to cover it. However, if I have volunteers, my organizational thoughts change. I might start pouring more effort into my primary tasks, in the interest of making the event all that much better. Having more hands to help essentially means you can do more!

Unfortunately, when a volunteer bails last minute or otherwise falls short, an organizer suddenly has an unanticipated hole in their plans, and thus, must scramble to shuffle things around and make it right. Be aware that flaking out on something you committed to makes it harder for everyone else involved. Volunteers should strive to be helpful and accountable.

That being said, it’s understandable that sometimes life happens and things don’t always go according to plan (emergencies, etc.). If something uncontrollable comes up, be sure to let your coordinator or organizer know as soon as possible.

Doing it Well

Keep in mind that volunteering generally results in some form of work–although it can often be quite enjoyable–and thus, your volunteering efforts should be held to a high quality standard. You want to be proud of your work, right? Unless the original objective stated was to “slap something rough together,” treat your task as you would a paid job. If you’re not sure what the expectation is, don’t be afraid to ask! Most coordinators are more than happy to detail out tasks and get you everything you need to accomplish the given goal.

Additionally, in the game development community, consistently demonstrating that you uphold high quality standards as a volunteer is a great way to build an awesome reputation. In all, it publicly demonstrates that you’re a hard worker. It’s no secret that this is a very close-knit industry and people talk. If they have great things to say, it’s highly beneficial to how others (including potential employers) may regard you. In contrast, if they have nothing good to say, it can have the opposite effect.

Enjoying the Benefits

So it might sound like a great deal of effort, but in general, volunteering in the game development industry is great for both personal and professional growth.

  • You get out there and meet tons of great new people in your field! This is exceptionally useful to networking efforts.
  • You often acquire new skills along the way! For example, I learned the logistics behind running an IGF booth last year.
  • You feel awesome for contributing to something larger than you could do on your own.
  • You continue to nurture the game development community, which is carried entirely by volunteers who are dedicated to their craft and the constant improvement of it.

The Big Picture

My hope is that these tips will help you be the best volunteers you can be, and through teaching each other, we can continue to nurture the quality and reach of our community efforts.

For those of you already out there, doing all of these things and more: thank you so much! You are the amazing force that makes this field so inspiring to work in.

And to all aspiring super-volunteers in the making: go forth and be excellent!


Intermediate Floating-Point Precision

Original Author: Bruce Dawson

Riddle me this Batman: how much precision are these calculations evaluated at?

void MulDouble(double x, double y, double* pResult)
{
    *pResult = x * y;
}

void MulFloat(float x, float y, float* pResult)
{
    *pResult = x * y;
}

If you answered ‘double’ and ‘float’ then you score one point for youthful idealism, but zero points for correctness. The correct answer, for zero-idealism points and forty two correctness points is “it depends”.

Read on for more details.

It depends. It depends on the compiler, the compiler version, the compiler settings, 32-bit versus 64-bit, and the run-time state of some CPU flags. And, this ‘it depends’ behavior can affect both performance and, in some cases, the calculated results. How exciting!

Previously on this channel…

If you’re just joining us then you may find it helpful to read some of the earlier posts in this series.

Post 4 (Comparing Floating Point Numbers) is particularly relevant because its test code legitimately prints different results with different compilers.

Default x86 code

The obvious place to start is with a default VC++ project. Let’s look at the results for the release build of a Win32 (x86) project with the default compiler settings except for turning off Link Time Code Generation (LTCG). Here’s the generated code:

?MulDouble@@YAXNNPAN@Z
    push    ebp
    mov     ebp, esp
    fld     QWORD PTR _x$[ebp]
    mov     eax, DWORD PTR _pResult$[ebp]
    fmul    QWORD PTR _y$[ebp]
    fstp    QWORD PTR [eax]
    pop     ebp
    ret     0

?MulFloat@@YAXMMPAM@Z
    push    ebp
    mov     ebp, esp
    fld     DWORD PTR _x$[ebp]
    mov     eax, DWORD PTR _pResult$[ebp]
    fmul    DWORD PTR _y$[ebp]
    fstp    DWORD PTR [eax]
    pop     ebp
    ret     0

It’s interesting that the code for MulDouble and MulFloat is identical except for size specifiers on the load and store instructions. These size specifiers just indicate whether to load/store a float or double from memory. Either way the fmul is done at “register precision”.

Register precision

By default, VC++ generates x87 code for floating-point operations when targeting x86, as shown above. The peculiar x87 FPU has eight registers, and people will usually tell you that ‘register precision’ on this FPU means 80-bit precision. These people are wrong. Mostly.

It turns out that the x87 FPU has a precision setting. This can be set to 24-bit (float), 53-bit (double), or 64-bit (long double). VC++ hasn’t supported long double for a while so it initializes the FPU to double precision during thread startup. This means that every floating-point operation on the x87 FPU – add, multiply, square root, etc. – is done to double precision by default, albeit in 80-bit registers. The registers are 80 bits, but every operation is rounded to double precision before being stored in the register.

Except even that is not quite true. The mantissa is rounded to a double-precision compatible 53 bits, but the exponent is not, so the precision is double compatible but the range is not.

In the x87 register diagram below blue is the sign bit, pink is the exponent, and green is the mantissa. The full exponent is always used, but when the rounding is set to 24-bit then only the light green mantissa is used. When the rounding is set to 53-bit then only the light and medium green mantissa bits are used, and when 64-bit precision is requested the entire mantissa is used.

Diagram of 80-bit x87 registers

In summary, as long as your results are between DBL_MIN and DBL_MAX (they should be) then the larger exponent doesn’t matter and we can simplify and just say that register precision on x87 means double precision.

Except when it doesn’t.

While the VC++ CRT initializes the x87 FPU’s register precision to _PC_53 (53 bits of mantissa), this can be changed – either by calling _controlfp_s yourself, or by initializing D3D9, which sets the precision to _PC_24 unless you pass the D3DCREATE_FPU_PRESERVE flag. This means that on the thread where you initialized D3D9 you should expect many of your calculations to give different results than on other threads. D3D9 does this to improve the performance of the x87’s floating-point divide and square root. Nothing else runs faster or slower.

So, the assembly code above does its intermediate calculations in:

  • 64-bit precision (80-bit long double), if somehow you avoid the CRT’s initialization or if you use _controlfp_s to set the precision to _PC_64
  • or 53-bit precision (64-bit double), by default
  • or 24-bit precision (32-bit float), if you’ve initialized D3D9 or used _controlfp_s to set the precision to _PC_24

On more complicated calculations the compiler can confuse things even more by spilling temporary results to memory, which may be done at a different precision than the currently selected register precision.

The glorious thing is that you can’t tell by looking at the code. The precision flags are a per-thread runtime setting, so the same code called from different threads in the same process can give different results. Is it confusingly awesome, or awesomely confusing?

/arch:SSE/SSE2

The acronym soup which is SSE changes all of this. Instructions from the SSE family all specify what precision to use and there is no precision override flag, so you can tell exactly what you’re going to get (as long as nobody has changed the rounding flags, but let’s pretend I didn’t mention that shall we?)

If we enable /arch:sse2 and otherwise leave our compiler settings unchanged (release build, link-time-code-gen off, /fp:strict) we will see these results in our x86 build:

?MulDouble@@YAXNNPAN@Z
    push     ebp
    mov      ebp, esp
    movsd    xmm0, QWORD PTR _x$[ebp]
    mov      eax, DWORD PTR _pResult$[ebp]
    mulsd    xmm0, QWORD PTR _y$[ebp]
    movsd    QWORD PTR [eax], xmm0
    pop      ebp
    ret      0

?MulFloat@@YAXMMPAM@Z
    push     ebp
    mov      ebp, esp
    movss    xmm0, DWORD PTR _x$[ebp]
    movss    xmm1, DWORD PTR _y$[ebp]
    mov      eax, DWORD PTR _pResult$[ebp]
    cvtps2pd xmm0, xmm0
    cvtps2pd xmm1, xmm1
    mulsd    xmm0, xmm1
    cvtpd2ps xmm0, xmm0
    movss    DWORD PTR [eax], xmm0
    pop      ebp
    ret      0

The MulDouble code looks comfortably similar, with just some mnemonic and register changes on three of the instructions. Simple enough.

The MulFloat code looks… longer. Weirder. Slower.

The MulFloat code has four extra instructions. Two of these instructions convert the inputs from float to double, and a third converts the result from double back to float. The fourth is an explicit load of ‘y’ because SSE can’t load-and-multiply in one instruction when combining float and double precision. Unlike the x87 instruction set where conversions happen as part of the load/store process, SSE conversions must be explicit. This gives greater control, but it adds cost. If we optimistically assume that the load and float-to-double conversions of ‘x’ and ‘y’ happen in parallel then the dependency chain for the floating-point math varies significantly between these two functions:

MulDouble:

movsd-> mulsd-> movsd

MulFloat

movss-> cvtps2pd-> mulsd-> cvtpd2ps-> movss

That means the MulFloat dependency chain is 66% longer than the MulDouble dependency chain. The movss and movsd instructions are somewhere between cheap and free and the convert instructions have comparable cost to floating-point multiply, so the actual latency increase could be even higher. Measuring such things is tricky at best, and the measurements extrapolate poorly, but in my crude tests I found that on optimized 32-bit /arch:SSE2 builds with VC++ 2010 MulFloat takes 35% longer to run than MulDouble. On tests where there is less function call overhead I’ve seen the float code take 78% longer than the double code. Ouch.

To widen or not to widen

So why does the compiler do this? Why does it waste three instructions to widen the calculations to double precision?

Is this even legal? Is it perhaps required?

The guidance for whether to do float calculations using double-precision intermediates has gone back and forth over the years, except when it was neither back nor forth.

The IEEE 754 floating-point math standard chose not to rule on this issue:

Annex B.1: The format of an anonymous destination is defined by language expression evaluation rules.

Okay, so it’s up to the language – let’s see what they say.

In The C Programming Language, Copyright © 1978 (prior to the 1985 IEEE 754 standard for floating point), Kernighan and Ritchie wrote:

All floating arithmetic in C is carried out in double-precision; whenever a float appears in an expression it is lengthened to double by zero-padding its fraction.

However the C++ 1998 standard does not appear to contain that language. The most obvious governing language is the usual arithmetic conversions (section 5.9 in the 1998 standard) which basically says that double-precision is only used if the expression contains a double:

5.9: If the above condition is not met and either operand is of type double, the other operand is converted to type double.

But the C++ 1998 standard also appears to permit floating-point math to be done at higher precisions if a compiler feels like it:

5.10: The values of the floating operands and the results of floating expressions may be represented in greater precision and range than that required by the type; the types are not changed thereby.

The C99 standard also appears to permit but not require floating-point math to be done at higher precision:

Evaluation type may be wider than semantic type – wide evaluation does not widen semantic type

And Intel’s compiler lets you choose whether intermediate results should use the precision of the source numbers, double precision, or extended precision.

/fp:double: Rounds intermediate results to 53-bit (double) precision and enables value-safe optimizations.

Apparently compilers may use greater precision for intermediates, but should they?

The classic Goldberg article points out errors that can occur from doing calculations on floats using double precision:

This suggests that computing every expression in the highest precision available is not a good rule.

On the other hand, a well reasoned article by Microsoft’s Eric Fleegal says:

It is generally more desirable to compute the intermediate results in as high as [sic] precision as is practical.

The consensus is clear: sometimes having high-precision intermediates is great, and sometimes it is horrible. The IEEE math standard defers to the C++ standard, which defers to compiler writers, which sometimes then defer to developers.

It’s a design decision and a performance bug

As Goldberg and Fleegal point out, there are no easy answers. Sometimes higher precision intermediates will preserve vital information, and sometimes they will lead to confusing discrepancies.

I think I’ve reverse engineered how the VC++ team made their decision to use double-precision intermediates with /arch:SSE2. For many years 32-bit code on Windows has used the x87 FPU, set to double precision, so for years the intermediate values have been (as long as they stay in the registers) double precision. It seems obvious that when SSE and SSE2 came along the VC++ team wanted to maintain consistency. This means explicitly coding double precision temporaries.

The compiler (spoiler alert: prior to VS 11) also has a strong tendency to use x87 instructions on any function that returns a float or double, presumably because the result has to be returned in an x87 register anyway, and I’m sure they wanted to avoid inconsistencies within the same program.

The frustrating thing about my test functions above is that the extra instructions are provably unneeded. They will make no difference to the result.

The reason I can be so confident that the double-precision temporaries make no difference in the example above is that there is just one operation being done. The IEEE standard has always guaranteed that the basic operations (add, subtract, multiply, divide, square root, some others) give perfectly rounded results. On any single basic operation on two floats, if the result is stored into a float, then widening the calculation to double is completely pointless because the result is already perfect. The optimizer should recognize this and remove the four extraneous instructions. Intermediate precision matters in complex calculations, but in a single multiply, it doesn’t.

In short, even though the VC++ policy in this case is to use double-precision for float calculations, the “as-if” rule means that the compiler can (and should) use single precision if it is faster and if the results would be the same “as-if” double precision was used.

64-bit changes everything

Here’s our test code again:

void MulDouble(double x, double y, double* pResult)
{
    *pResult = x * y;
}

void MulFloat(float x, float y, float* pResult)
{
    *pResult = x * y;
}

And here’s the x64 machine code generated for our test functions with a default Release configuration (LTCG disabled) in VC++ 2010:

?MulDouble@@YAXNNPEAN@Z
    mulsd    xmm0, xmm1
    movsdx   QWORD PTR [r8], xmm0
    ret      0

?MulFloat@@YAXMMPEAM@Z
    mulss    xmm0, xmm1
    movss    DWORD PTR [r8], xmm0
    ret      0

The difference is almost comical. The shortest function for 32-bit was eight instructions, and these are three instructions. What happened?

The main differences come from the 64-bit ABI and from how stack walking is done. The 32-bit code sets up an EBP frame to support stack walking, which is used by ETW/xperf profiling and by many other tools and, as dramatic as it looks in this example, is not actually expensive and should not be disabled. On 64-bit the stack walking is done using metadata rather than executed instructions, so we save three instructions.

The 64-bit ABI also helps because the three parameters are all passed in registers instead of on the stack, and that saves us the two instructions that loaded parameters into registers. And with that we’ve accounted for all of the differences between the 32-bit and 64-bit versions of MulDouble.

The differences in MulFloat are more significant. x64 implies /arch:SSE2 so there’s no need to worry about the x87 FPU. And, the math is being done at float precision. That saves three conversion instructions. It appears that the VC++ team decided that double-precision temporaries were no longer a good idea for x64. In the sample code I’ve used so far in this post this is pure win – the results will be identical, only faster.

In more complex calculations, however, the change in intermediate precision can change your results – a floating-point butterfly effect, wherein a distant hurricane may cause a butterfly to flap its wings.

If you want to control the precision of intermediates then cast some of your input values to double, and store explicit temporaries in doubles. The cost to do this can be minimal in many cases.

64-bit FTW

64-bit code-gen was a chance for the VC++ team to wipe the slate clean. The new ABI plus the new policy on intermediate precision means that the instruction count for these routines goes from eight-to-twelve instructions to just three. Plus there are twice as many registers. No wonder x64 code can (if your memory footprint doesn’t increase too much) be faster than x86 code.

VS 11 changes everything

Visual Studio 11 beta (which I’m guessing will eventually be called VS 2012) appears to have a lot of code-gen differences, and one of them is in this area. VS 11 no longer uses double precision for intermediate values when it is generating SSE/SSE2 code for float calculations.

It’s the journey, not the destination

So far our test code has simply done one basic math operation on two floats and stored the result in a float, in which case higher precision does not affect the result. As soon as we do multiple operations, or store the result in a double, higher precision intermediates give us a different and more accurate answer.

One could argue that the compiler should use higher precision whenever the result is stored in a double. However this would diverge from the C++ policy for integer math where the size of the destination doesn’t affect the calculation. For example:

int x = 1000000;
long long y = x * x;

The multiply generates an ‘int’ result and then assigns that to the long long. The multiply overflows, but that’s not the compiler’s problem. With integer math the operands determine the precision of the calculation, and applying the same rules to floating-point math helps maintain consistency.

Comprehension test

Here’s some simple code. It just adds four numbers together. The numbers have been carefully chosen so that there are three possible results, depending on whether the precision of intermediate values is float, double, or long-double.

float g_one = 1.0f;
float g_small_1 = FLT_EPSILON * 0.5;
float g_small_2 = DBL_EPSILON * 0.5;
float g_small_3 = DBL_EPSILON * 0.5;

void AddThreeAndPrint()
{
    printf("Sum = %1.16e\n", ((g_one + g_small_1) + g_small_2) + g_small_3);
    PrintOne();
}

Take this code, add it to a project, turn off LTCG, turn on Assembler Output (so you can see the generated code without even calling the code), and play around with build settings to see what you get. You can also run the code to see the three different results, perhaps with some calls to _controlfp_s to vary the x87’s precision.

// Here's how to use _controlfp_s to set the x87 register precision
// to 24 bits (float precision)
unsigned oldState;
_controlfp_s(&oldState, _PC_24, _MCW_PC);

To aid in your explorations, this simple flowchart shows what precision is used for intermediates in all-float calculations:

[Flowchart: intermediate precision used for all-float calculations]

Got it? Simple enough?

I’m sure the VC++ team didn’t plan for the flowchart to be this complicated; however, much of the complexity is due to the weirdness of the x87 FPU, and to their desire for SSE/SSE2 code to be x87 compatible. The Intel compiler’s options are also complex, but I glossed over the details by just assuming that SSE/SSE2 is available.

And at least Microsoft is moving towards a dramatically simpler story with VS 11 and with x64.

Bigger is not necessarily better

I want to emphasize that more precise intermediates are not ‘better’. As we have seen they can cause performance to be worse, but they can also cause unexpected results. Consider this code:

float g_three = 3.0f;
float g_seven = 7.0f;

void DemoOfDanger()
{
    float calc = g_three / g_seven;
    if (calc == g_three / g_seven)
        printf("They're equal!\n");
    else
        printf("They're not equal!\n");
}

If intermediate values use float precision then this will print “They’re equal!”, but otherwise – by default on x86 – it will not.

Sometimes bigger is better

Imagine this code:

float g_three = 3.0f;
float g_seven = 7.0f;

double DemoOfAwesome()
{
    return g_three / g_seven;
}

If the compiler uses source-precision (float-precision in this case for intermediates) then it will calculate a float-precision result and then convert it to a double and return it. This is consistent with the rules for integer calculations (the destination of a calculation doesn’t affect how the calculation is done), but it means that maximum precision is not realized. Developers can fix this by casting g_three or g_seven to double, but this is just another example showing that there is no simple solution.
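For example, a sketch of the cast-based fix, reusing the globals from the snippet above:

double DemoOfAwesome()
{
    // Casting one operand forces the division to be done at double
    // precision, so the returned double carries the extra accuracy.
    return (double)g_three / g_seven;
}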

The code that got me interested in this question is shown below in a simplified form:

float g_pointOne = 0.1f;

void PrintOne()
{
    printf("One might equal %1.10f\n", g_pointOne * 10);
}

Depending on the intermediate precision, and therefore depending on your compiler and other factors, this might print 1.0000000000 or 1.0000000149.

What’s a developer to do?

Right now the best recommendation for x86 game developers using VC++ is to compile with /fp:fast and /arch:sse2.

For x64 game developers /arch:sse2 is implied, and /fp:fast may not be needed either. Your mileage may vary, but it seems quite possible that /fp:strict will provide fast enough floating-point while still giving conformant and predictable results. It really comes down to how much you depend on predictable handling of NaNs and other floating-point subtleties that /fp:fast takes away.

For x86 developers using VS 11 /arch:sse will still be needed but, as with x64, /fp:fast may no longer be.

If you need double-precision intermediates you can always request them on a case-by-case basis by using double variables or by casting from float to double at key points. That’s why I like source-precision intermediates: they give developers the option to request higher precision exactly where they want it.

Be wary of floating-point constants. Remember that 1.0 is a double, and using it will cause your float calculations to be done at double precision, with the attendant conversion costs. Searching your generated code for “cvtps2pd” and “cvtpd2ps” can help track down significant performance problems. I recently sped up some x64 code by more than 40% just by adding an ‘f’ to some constants. You can also find double-to-float conversions by enabling warning C4244:

// Enable this to find unintended double to float conversions.
// warning C4244: 'initializing' : conversion from 'double' to 'float', possible loss of data
#pragma warning(3 : 4244)
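As a small illustrative sketch (the function names are made up), the only difference between these two functions is the constant’s suffix, but the first one typically pays for a float-to-double and a double-to-float conversion around the multiply:

float ScaleSlow(float x)  { return x * 2.0; }   // 2.0 is a double constant
float ScaleFast(float x)  { return x * 2.0f; }  // 2.0f keeps everything in float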

Wrap up

I hope this information is useful, and helps to explain some of the reasons why IEEE compliant floating-point can legitimately give different results on different platforms.

Perversely enough, it appears that developers who are not using /fp:fast are at higher risk for having their floating point results change when they upgrade to VS 11 or start building for 64-bit.

Got any tales of woe in this area? Share them in the comments.

Be Aspirational

Original Author: Claire Blackshaw

A year ago I signed up to a service called TimeHop, which emails me my tweets and status updates from a year ago, and it has been strangely motivating. It shows me my growth and my challenges, and reminds me of my goals, my dreams, my successes and my failures.

So this week a year ago I was…

  • Listening to this classic song
  • Bitching about its lack of ngons, which it now has in the dev branch (^_^) Great job, guys!
  • Which led to this game asset, and to the “You Should be Drawing” post by Mike Jungbluth, with a piece on the Flour Sack doodle and how everyone should doodle old-school animation at least once to get a feel for motion and weight… which led to me just wanting to shout from the rooftops:

    BE ASPIRATIONAL!

    If you’re reading AltDev, or better yet contributing, you have at least made the first steps. I encourage you to draw up a bucket list or a dream list of stuff you want to do! Try Schemer!

    Start your list with some traditional gamedev skills.

    • Code
      • Make “Hello World”
      • Make Pong
      • Make Particle Fountain
    • Art
      • Draw Flour Sack Animation
      • Draw Human Hand
      • Model Something on Your Desk
    • Design
      • Invent a Card Game using a normal playing deck
      • Make a Boardgame
      • Write a Roleplaying Module

    Nothing is stopping you extending that list to include:

    • Cook a Quiche
    • Crochet a Scarf
    • Model something out of clay
    • Learn to Cat Yodel

    There is no such thing as a useless skill! Aspire to be more, learn more, do more! Also, being reminded about last year via TimeHop and planning new goals via Schemer are not bad places to start.

    Everyone I’ve ever met who was worth anything wanted to be worth more.

    P.S. Sorry about the fluffy post but I got super fired up and needed to shout!

    P.P.S Also, there are similar services to TimeHop & Schemer. They just happen to be the ones I’m using.

    P.P.P.S Should I think things through more and be less impulsive… maybe

    P.P.P.P.S I really will do a more technical, solid post next time I promise.

    P.P.P.P.P.S: Can you make a recursive Post Script? I wonder…

PIMPL vs Pure Virtual Interfaces

Original Author: Niklas Frykholm

In C++, separating the interface (public declarations) of a class from its implementation (private methods, members and definitions) serves several useful purposes:

  • Implementation details are hidden, making interfaces easier to read and understand.

  • Smaller header files with fewer dependencies means faster compile times.

  • A weaker coupling between the interface and the implementation gives greater freedom in reorganizing and refactoring the implementation internals.

In pure C, we can achieve this separation by using a pointer to a forward declared struct:

struct SoundWorld;
typedef unsigned SoundInstanceId;

SoundWorld *make_sound_world();
void destroy_sound_world(SoundWorld *world);
SoundInstanceId play(SoundWorld *world, SoundResource *sound);
void stop(SoundWorld *world, SoundInstanceId id);

The struct is opaque to the users of the API. The actual content is defined in the .cpp file:

struct SoundWorld {
    SoundInstance playing_instances[MAX_PLAYING_INSTANCES];
    Matrix4x4 listener_pose;
    ...
};
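Client code only ever sees the opaque pointer. A minimal usage sketch (assuming the declarations above) might look like this:

void play_explosion(SoundResource *explosion)
{
    SoundWorld *world = make_sound_world();
    SoundInstanceId id = play(world, explosion);
    // ... later ...
    stop(world, id);
    destroy_sound_world(world);
}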

C++ programmers are often recommended to use the PIMPL idiom (pointer to implementation) to achieve the same thing:

class SoundWorldImplementation;

class SoundWorld
{
public:
    typedef unsigned InstanceId;

    SoundWorld();
    ~SoundWorld();

    InstanceId play(SoundResource *sound);
    void stop(InstanceId id);

private:
    SoundWorldImplementation *_impl;
};

Here, SoundWorld is the external interface of the class. All the messy stuff (instance variables, private methods, etc.) is found in the SoundWorldImplementation class, which lives in the .cpp file.

The _impl pointer is created in the constructor and calls to the methods in SoundWorld are forwarded to the implementation object via method stubs:

SoundWorld::SoundWorld()
{
    _impl = new SoundWorldImplementation();
}

SoundWorld::InstanceId SoundWorld::play(SoundResource *sound)
{
    return _impl->play(sound);
}
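The original snippet omits it, but the matching destructor is just another stub that frees the implementation object; something like:

SoundWorld::~SoundWorld()
{
    // The facade owns the implementation object, so it destroys it.
    delete _impl;
}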

Another solution to the same problem is to write the interface as an abstract, pure virtual class in the .h file and then create the implementation as a subclass in the .cpp file.

You don’t see this solution recommended as much (at least not as a solution to this particular problem), but I actually like it better. With this approach, the header file will look something like this:

class SoundWorld
{
public:
    typedef unsigned InstanceId;

    virtual ~SoundWorld() {}
    virtual InstanceId play(SoundResource *sound) = 0;
    virtual void stop(InstanceId id) = 0;

    static SoundWorld *make(Allocator &a);
    static void destroy(Allocator &a, SoundWorld *sw);
};

Note that since the class is now abstract, we cannot create actual instances of it; to do that we need the factory functions make() and destroy(). I’ve added an allocator parameter for good measure, because I always want to specify explicit allocators for all memory operations.

The corresponding .cpp file looks something like:

class SoundWorldImplementation : public SoundWorld
{
    friend class SoundWorld;

    SoundInstance _playing_instances[MAX_PLAYING_INSTANCES];
    Matrix4x4 _listener_pose;

    SoundWorldImplementation()
    {
        ...
    }

    virtual InstanceId play(SoundResource *sound)
    {
        ...
    }

    virtual void stop(InstanceId)
    {
        ...
    }
};

SoundWorld *SoundWorld::make(Allocator &a)
{
    return a.make<SoundWorldImplementation>();
}

void SoundWorld::destroy(Allocator &a, SoundWorld *sw)
{
    // Cast back to the concrete type before handing it to the allocator.
    a.destroy(static_cast<SoundWorldImplementation *>(sw));
}

The reason why most people recommend the PIMPL approach is that it has some distinct advantages:

  • Factory functions are not needed; you can use new and delete, or create objects on the stack.

  • The SoundWorld class can be subclassed.

  • The interface methods are not virtual, so calling them might be faster. (On the other hand, we need an extra memory fetch to get to the implementation object.)

  • PIMPL can be introduced in an existing class without changing its external interface or its relation to other classes.

For my use cases, none of these advantages matter that much. Since I want to supply my own allocators, I’m not interested in new and delete. And I only use this for “big” objects, that are always heap (rather than stack) allocated.

I don’t make much use of implementation inheritance. In my opinion, it is almost always a bad design decision that leads to strongly coupled code and hard-to-follow code paths. Inheritance should be limited to interface inheritance.

The performance cost of virtual calls is not a huge issue, since I only use this approach for “big” objects (Systems and Managers). Also, I design the API so that the number of API calls is minimized. I.e., instead of a function:

void set_sound_position(InstanceId id, const Vector3 &pos);

I have:

void set_sound_positions(unsigned count, const InstanceId *ids, const Vector3 *positions);

This reduces the virtual call overhead, but also has additional benefits, such as being DMA friendly and allowing for parallelization and batch optimizations.

In the words of Mike Acton: Where there’s one, there’s more than one.
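As a rough sketch of how such a batched call might look inside the implementation class (indexing _playing_instances directly by id and the ‘position’ member are assumptions for illustration, not the original API):

virtual void set_sound_positions(unsigned count, const InstanceId *ids,
                                 const Vector3 *positions)
{
    // One virtual call for the whole batch; the loop runs over plain
    // arrays, which is cache, DMA and SIMD friendly.
    for (unsigned i = 0; i != count; ++i)
        _playing_instances[ids[i]].position = positions[i];
}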

The abstract class method has some advantages of its own:

  • Cleaner code and a lot less typing, since we don’t have to write forwarding stubs for the methods in the public interface.

  • Multiple classes can implement the same interface. We can statically or dynamically select which particular implementation we want to use, which gives us more flexibility (see the sketch below).
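For example, a hedged sketch of how the factory function could pick an implementation at runtime (NullSoundWorld and the sound_is_disabled() predicate are made up for illustration):

// A do-nothing implementation, useful e.g. when audio is turned off.
class NullSoundWorld : public SoundWorld
{
public:
    virtual InstanceId play(SoundResource *) { return 0; }
    virtual void stop(InstanceId) {}
};

SoundWorld *SoundWorld::make(Allocator &a)
{
    if (sound_is_disabled())
        return a.make<NullSoundWorld>();
    return a.make<SoundWorldImplementation>();
}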

To me, not having to write a ton of stupid boilerplate cruft is actually kind of a big deal. I know some people don’t mind boilerplate. It’s just a little extra typing, they say. Since there is nothing complicated or difficult in the boilerplate code, it doesn’t pose a problem. Programmers are not limited by typing speed, so how much you have to type doesn’t matter.

I don’t agree at all. In my view, every line of code is a burden. It comes with a cost that you pay again and again as you write, read, debug, optimize, improve, extend and refactor your code. For me, the main benefit of “higher-level” languages is that they let me do more with less code. So I’m happy to pay the overhead of a virtual call if it saves me from having 150 lines of idiotic boilerplate.

A nice thing about the interface and implementation separation is that it gets rid of another piece of hateful C++ boilerplate: method declarations (hands up everybody who enjoys keeping their .h and .cpp files synchronized).

Methods defined inside a C++ class do not have to be declared and can be written in any order. So if we want to add helper methods to our implementation class, that are not part of the public interface, we can just write them anywhere in the class:

class SoundWorldImplementation : public SoundWorld
{
    virtual InstanceId play(SoundResource *resource) {
        InstanceId id = allocate_id();
        ...
    }

    // A private method - no declaration necessary.
    InstanceId allocate_id() {
        ...
    }
};

It’s interesting that this small, purely syntactical change (getting rid of method declarations) makes a significant difference in how the language “feels”. At least to me.

With this approach, adding a helper method feels like “less work”, so I’m more inclined to do it. This favors better-structured code that is decomposed into a larger number of functions, more like Smalltalk than traditional C (home of the mega-method). The Sapir-Whorf hypothesis appears to hold some merit, at least in the realm of programming languages.

Another interesting thing to note is that the pure C implementation of opaque pointers stacks up pretty well against the C++ variants. It is simple, terse and fast (no virtual calls, no forwarding functions).

Every year I’m a little more impressed by C and a little more depressed by C++.

(This has also been posted to The Bitsquid Blog).