How to improve build stability

Original Author: Andre Dittrich

Build stability is always an important topic for us, but once a game has entered the production phase in earnest, the stability of the game and the tools becomes one of the most important concerns for the tech team. The simple reason is that the number of people relying on them is highest at that point, and any time those people have to wait for a bugfix or a missing tool potentially means a lot of money wasted. So keeping your build as stable as possible is important.

And now for the bad news: I do not have the “This Solves All Our Problems” recipe. I just want to share some of the measures we have applied in our projects. If you have taken other measures to ensure build stability, please tell me. I am always interested in doing more.

 

Iteration time rules

Having a stable build is very important, yes, but you cannot ruin the iteration time for your team. There will always be that level designer who requests a small feature, a small change or simply a critical bug fix really fast (usually yesterday) to finish the mission for the next milestone. You do not want them to wait a week for that change. With 10, 20 or even more engineers working on your code base at the same time, the chance is high that at any moment at least one of them has added a bug that makes it impossible to release the next engine version to the team, at least if you do not take measures that help keep the build stable. The problem, of course, is that those measures cannot add so much overhead that they themselves become a reason for slow iteration times. So everything you do needs to strike a balance between overhead and improved build stability.

 

Automated build systems

CIS – continuous integration server: you need this! It is bad enough when “real” bugs trouble your build; it is far worse when simple mistakes destroy it. Ever come into the office in the morning to find out that you cannot compile the game? A typo, a file that had not been checked in, a bad merge? How many people lost how much time during that one morning? This is totally avoidable. The main function of our CIS is to continuously build the engine whenever somebody checks in a change. This makes sure that the engine and tools at least compile. Of course we also run a few easy and fast smoke tests that make sure you can at least start the engine.
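
For illustration, here is a minimal sketch of the kind of smoke test such a server can run right after each build. Every path and flag in it is a placeholder, not our actual setup: it simply launches the freshly built binary headless and fails the job if it crashes or hangs.

    import subprocess
    import sys

    ENGINE_BINARY = "Binaries/Game.exe"           # placeholder build output path
    SMOKE_ARGS = ["-nullrhi", "-exitafterboot"]   # hypothetical headless flags
    TIMEOUT_SECONDS = 120

    def smoke_test():
        """Launch the engine; fail the CIS job on crash, hang or bad exit code."""
        try:
            result = subprocess.run([ENGINE_BINARY] + SMOKE_ARGS,
                                    timeout=TIMEOUT_SECONDS)
        except subprocess.TimeoutExpired:
            print("SMOKE TEST FAILED: engine did not shut down in time")
            return 1
        except OSError as error:
            print(f"SMOKE TEST FAILED: could not launch engine: {error}")
            return 1
        if result.returncode != 0:
            print(f"SMOKE TEST FAILED: engine exited with code {result.returncode}")
            return 1
        print("Smoke test passed.")
        return 0

    if __name__ == "__main__":
        sys.exit(smoke_test())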

But you can do even more. During the day the focus is on getting the engine built as fast as possible and running smoke tests. During the night we can do a lot more. We run automated tests to gather statistics for memory usage and performance in test levels and game levels. These statistics are made available as graphs on an internal website. The graphs are an enormous help in recognizing and tracking down sudden jumps in performance or memory as well as gradual trends. Together with good checkin comments (see below), this lets you catch a problem before it actually breaks the game, or at least recognize it very quickly and efficiently (without TAs or programmers spending time finding out why MissionXY is not running any more).
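
The collection side of those graphs does not have to be fancy. Here is a sketch of the idea, with a made-up file layout and field names: the nightly run appends one sample per level, and the website simply plots the file over time.

    import csv
    import datetime
    import os

    STATS_FILE = "nightly_stats.csv"  # made-up location

    def record_sample(level, peak_memory_mb, avg_frame_ms):
        """Append one nightly measurement so trends can be graphed over time."""
        write_header = not os.path.exists(STATS_FILE)
        with open(STATS_FILE, "a", newline="") as f:
            writer = csv.writer(f)
            if write_header:
                writer.writerow(["date", "level", "peak_memory_mb", "avg_frame_ms"])
            writer.writerow([datetime.date.today().isoformat(),
                             level, peak_memory_mb, avg_frame_ms])

    # A nightly run would call this once per level, e.g.:
    # record_sample("MissionXY", peak_memory_mb=412.3, avg_frame_ms=33.1)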

Since I am talking about automated tests, I guess I have to talk about unit tests as well. I have some experience with them, though I have to admit that most of it is about how not to do it. We integrated a unit testing framework into the Unreal Engine at the UnrealScript and Kismet level pretty early in the production process. We used it mostly for the AI code, as this was mainly written by us and did not rely too much on middleware code (except pathfinding). The main mistake we made was that we ended up actually writing integration tests, and maintaining those takes a lot of time. For a while we even made it part of the process to have “unit tests” for every feature we built. At some point we were spending more and more time fixing tests that failed because of changes in other systems, not because of bugs in the tested code – so we stopped. For future projects I want to write actual unit tests that cover critical parts of our code. Integration tests should be reserved for finished features that are not likely to change much, which I guess means keeping them for later in the production. If you have experience with successfully applying either, I would like to hear about it.
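
To make the distinction concrete, this is the shape I mean by an actual unit test: it exercises one small, pure function in isolation, so a change in some other system cannot break it. The function under test is invented for this example, not code from our project.

    import unittest

    def clamp_angle(degrees):
        """Wrap an angle into the [0, 360) range (made-up example function)."""
        return degrees % 360.0

    class ClampAngleTests(unittest.TestCase):
        def test_in_range_angle_is_unchanged(self):
            self.assertEqual(clamp_angle(90.0), 90.0)

        def test_negative_angle_wraps(self):
            self.assertEqual(clamp_angle(-90.0), 270.0)

        def test_full_turns_wrap_to_zero(self):
            self.assertEqual(clamp_angle(720.0), 0.0)

    if __name__ == "__main__":
        unittest.main()

An integration test, by contrast, would drive a whole AI through a level and assert on the outcome – useful, but fragile while the surrounding systems are still moving.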

 

Peer reviews

This is one of the best tools in our belt for improving build stability. Not only does it give you a substantial improvement in build stability, it also fosters communication within the team and distributes knowledge (win – win – win).

The idea is pretty simple: whenever someone wants to check in a change, they need to get that change reviewed by a colleague. Of course this only works if it is taken seriously. The goal of a review should be that the reviewer gains a good understanding of what the change is and how and why it was done. There are no dumb questions during a review. If you do not understand something while reviewing, ask. This goes especially for seniors or leads who sometimes feel they should not ask dumb questions. If you think you need yet another opinion, get it. You may and should criticize style and details. Ask for additional or improved comments if you think they might help. This is not only about making sure the change works; it is about sharing ideas and knowledge as well.

So what do you get in the end? Reviews will easily spot obvious issues or problems with the chosen approach to the issue at hand; they will rarely spot really intricate bugs or side effects. That alone removes quite a number of bugs that would otherwise have been found later by the automated systems, by QA or, even worse, by somebody trying to use a broken tool. What you also get is people learning from each other, and people looking into parts of the system they would not usually see. At least two people know each change in detail, so people getting sick or leaving the company becomes less of an issue. You get a culture of talking about your work and of making sure work is actually done before the checkin (it is pretty embarrassing if obvious flaws are discovered by your reviewer in a piece of code you considered worth checking in). People in your team talk, they develop a common language, and they understand the weaknesses and strengths of their team members.

A few things to keep in mind to make peer reviews work:

– it costs time – make sure everybody knows that this is time well spent, and factor it into your estimates

– every checkin is reviewed – a lot of mistakes are made with “easy” or “small” checkins

– people should be available for a review – nothing is as annoying as not being able to check in just because nobody has time, so you should have a damn good reason to refuse a review

– add the name of the reviewer to the checkin comment – reviews will be taken a lot more seriously that way, and if you are hunting a bug caused by a checkin, you know exactly which two people to talk to

 

Checkin comments

It might not be clear at first how checkin comments can improve build stability, because once a bug is checked in, it is in. But good checkin comments make it a lot easier to track down an issue, and applying a structured format makes it easier still. Just imagine sitting in front of the screen, scanning through a list of 100 checkin comments to find out which change could be causing your AI to get stuck while trying to vault over cover. The easier the information is to read and the better its quality, the faster you will be. We fixed quite a lot of our “hard” bugs that way.
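
As an illustration of what “structured” can mean (the fields here are made up for the example, not the exact format we used), a checkin comment template might look like this:

    [AI] Fix cover vaulting getting stuck on sloped surfaces

    Reviewer: <colleague who reviewed the change>
    Risk:     medium - touches shared locomotion code
    Testing:  smoke test + vault test map
    Bug:      <tracker id, if the change fixes a reported bug>

A system tag like [AI] up front is what makes scanning 100 comments fast, and the Reviewer line ties in with the peer review rule above.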

But well-written checkin comments have even more uses. You can subscribe to your source control system (we use Perforce) to get an automated mail for every checkin in the areas you are interested in, and stay up to date with what is checked in by whom. This is not only a useful tool for a lead; it is also interesting for other programmers, QA or producers to know what actually gets checked in.

 

Testbuilds

This is something that is not easy to do: it requires substantial in-house QA resources and some additional tool support. The basic idea is again pretty simple: before you check in a change that you are not so sure about, test it. I guess everybody knows that bad feeling of changing something in a very old part of the code that also touches a lot of other code (maybe the person who originally wrote it is not even around any more, or you have to change code in your middleware). You are just not sure about the side effects, and, yes, there is no automated testing around that part of the code. Basically, the only way to find out what your change does besides what you intend it to do is to test it. The best people you have for testing are QA people (some of our QA guys find the strangest bugs and, more importantly, reliable repros for really hard ones – amazing). So the idea is to create a local build of the game (or a representative part of it) and send it to the QA team to test your change. While you are waiting, you can shelve your change and continue with something else. To make this a viable option you need really good tools that make the whole process as easy as possible. We are using the Unreal Engine with its build tools. It is easy to create a local build of the game for any platform using the Unreal Frontend, the tool used to cook the game for the platform you need. From this tool we can push a build onto a central server (the prop server). The QA team can grab the build using a simple web frontend and have it copied to their PC or Xbox. Yes, you could cook a build, copy it into some network folder and write a mail telling QA where to find it. But the easier the whole process is, the more likely people are to actually use it instead of finding excuses not to, or getting frustrated because they have to. We also established a somewhat strict workflow around it to make the whole process even smoother.
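
Stripped down to its core, the “push to the prop server and notify QA” step is a copy plus a notification. Here is a rough sketch of that glue; every path, server name and address in it is a placeholder, not our actual infrastructure.

    import pathlib
    import shutil
    import smtplib
    from email.message import EmailMessage

    BUILD_DIR = pathlib.Path("Cooked/Xbox360")         # placeholder cook output
    PROP_SHARE = pathlib.Path(r"\\propserver\builds")  # placeholder server share
    QA_LIST = "qa-team@example.com"                    # placeholder address

    def push_testbuild(change_number, note):
        """Copy the cooked build to the central share and mail QA about it."""
        target = PROP_SHARE / f"testbuild_{change_number}"
        shutil.copytree(BUILD_DIR, target)

        msg = EmailMessage()
        msg["Subject"] = f"Testbuild {change_number} ready for testing"
        msg["From"] = "buildbot@example.com"
        msg["To"] = QA_LIST
        msg.set_content(f"Build location: {target}\nPlease focus on: {note}")
        with smtplib.SMTP("mail.example.com") as smtp:
            smtp.send_message(msg)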

Even applying all of the above perfectly will not give you zero bugs, but it will let you spend your time on the important and interesting bugs and, even more importantly, on adding cool shit to your game.

To avoid lengthening this lengthy blog entry further, I kept the individual parts pretty short. If you are interested in the details of how exactly we do certain things, let me know. I could make one of my next blog entries cover this in more detail.

 

Things to look at next

Static code analysis – I saw a pretty interesting talk about this at GameFest 2011 in London, and afterwards I hoped to try it on our project soon. After John Carmack's keynote at QuakeCon, it has become a lot more interesting. If you are curious, watch the second half of his 90-minute talk; it should be easy to find on YouTube.


Data Validation: Another trick in the bag for catching errors before they cause problems

Original Author: Tiffany Smith

We all do some form of data validation: something as simple as making sure that the PNG we’re importing as a texture is actually a PNG, or detecting the classic “million poly rock” on import. But what I’m talking about here is a specific pass on data that is otherwise valid but has snuck in anyway. For example, you might import a texture whose data is correct but which is a temporary texture. This is something that could easily be missed and only found later, when it’s too late to do anything about it. Here at Nameless, we perform this check by hooking it up in the editor on level load and level test, and we also have an offline process that lets us run it on all of our levels from the command line.
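
To sketch one way to structure such a pass: small, independent checks registered in one place, so the same list of checks runs on level load, on level test and from the offline command line. The level data layout and the TEMP_ naming convention are assumptions made up for the example.

    CHECKS = []

    def validation_check(func):
        """Register a function that yields warning strings for a level."""
        CHECKS.append(func)
        return func

    @validation_check
    def no_temporary_textures(level):
        for texture in level["textures"]:
            if texture.startswith("TEMP_"):  # naming convention is an assumption
                yield f"temporary texture in level: {texture}"

    def validate(level):
        """Run every registered check; returns the list of warnings found."""
        warnings = [w for check in CHECKS for w in check(level)]
        for warning in warnings:
            print("VALIDATION:", warning)
        return warnings

    # e.g. validate({"textures": ["rock_diffuse", "TEMP_placeholder"]})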

That said, something that runs on level load obviously needs to be fast, so I’m not talking about a process that takes minutes, but seconds. For our current game the levels are small, so there isn’t much data to validate, but since we allow the user to keep using the editor while the validation is running, we should be able to scale up without much hassle. If speed really became a problem, you could keep the validation running constantly and, once the initial pass was done, only re-validate data that has changed.

 

What kind of things are we talking about validating?

Assuming you’re making a platformer you may want to make sure there is a valid path to collision geometry. I’m not talking about pathfinding the whole level but more like something as simple as validating that all collision surfaces are within jump height of another collision surface. This would detect the error if someone laid out a collision surface a bit too far, or perhaps accidentally nudged it in the editor when they were doing something else. This and many other little bugs are easy to detect, so why shouldn’t we.

Depending on the editor, it may also be possible to accidentally duplicate geometry. In that case you could look for overlapping objects of the same type. Really, though, it’s about using common sense and flagging issues as you find them, to avoid having to debug them in the future.

 

So, how do we get people to actually fix their warnings?

I think the sanest way is simply to enforce zero warnings; much like enforcing zero warnings in code, it gets easier as people get used to the process. If people aren’t staying on top of their warnings, you could also turn them into errors and refuse to let them test the game via the editor until they are fixed. I wouldn’t recommend disabling save, though: if the warning is a false positive, or someone has five hours’ worth of work done and their machine crashes before they can fix the one thing they missed, they would be in a world of hurt.
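
The whole policy fits in two hooks, sketched here with assumed severity levels and editor entry points: errors block testing, and nothing ever blocks saving.

    WARNING, ERROR = "warning", "error"

    def on_test_level_requested(issues):
        """issues: list of (severity, message) pairs from the validation pass."""
        errors = [message for severity, message in issues if severity == ERROR]
        if errors:
            print("Cannot test the level until these are fixed:")
            for message in errors:
                print("  ERROR:", message)
            return False
        return True

    def on_save_requested(issues):
        # Never block save: a false positive, or a crash before the fix,
        # would cost hours of work.
        return True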


Six elemental questions to find out just about everything

Original Author: Raul Aliaga Diaz

When I was in third grade I belonged to a little group of kids called “Kid Journalists”, in which we crafted weekly articles on Saturday mornings to post on a big canvas on a wall somewhere at my elementary school. I say “crafted” because the images usually weren’t photos but pictures drawn by the journalists themselves :P.

And I learned something great. The teacher leading our group taught us that every news article must answer six elemental questions: What? Who? When? Where? How? & Why?. As I grew up it never ceased to amaze me that even high-profile news articles or clips don’t answer all of them. I mean, it’s almost like a checklist, isn’t it? How hard can it be?

But what’s truly even more great is that those questions can be wielded by anyone on different situations to wrap their heads around anything they want to handle effectively.

Let’s take, for a example, a job interview. The applicant has the job description and research about the company done while preparing the interview. Usually job descriptions include responsibilities, skills and qualifications and some company context. Those should cover about the job for the applicant questions What and Where, even When probably if a starting date or urgency is stated. But it would probably not cover well Who, How and Why. So reasonable questions to bring up to the interview are “Who will I work with?”, “Which methodologies, procedures and logistics are implemented to work at this job”, and more importantly, “What’s the big picture of this job?”, “Why are we doing this and how can I align my efforts to accomplish that high level goal?”.

On the other hand, the interviewer has a CV and/or a cover letter. For the matter of filling that job post, those should cover at least Who, What and Where, but in the interview the questions When, How and Why must be addressed. So the job applicant should at least expect questions like “When can you start?”, “How do you handle conflict? Pressure? Delegation? Peer collaboration?”, and “Why do you want to work with us?”.

The key is to be able to see which things each party is intuitively trying to grasp, and to frame the six elemental questions for those things accordingly.

Let’s take another example. Ever been trapped in a difficult decision? That question is usually a really big “What should I do!?”. Well, let’s find it out! Difficult decisions are hard because it’s not easy to relate all things involved in the decision and weigh the trade offs involved, specially when that decision must be communicated to several parties. Let’s assume you’re handling the issue of whether to cut off a particular feature for your game or not.

What?: To cut or not to cut.
Who: In this case, who are the people involved? Your team? The publisher? External contractors? A client? Define each party involved and which goals each of them is pursuing in this decision.
When: Do you have a ticking clock behind you? Do you have unlimited time? Does it depend on other sub-decisions? Does one party have the power to frame and constrain the time for the decision?
Where: Sometimes there’s no sense of “physical place”, but there is indeed some sort of “topology” and a sense of where. Are we cutting this feature in all of our games? On all platforms? In all countries? At this stage of development?
How: Will it be easy to cut? How much do we “pay” to cut it? Will it break something else? Are there contract liabilities? How much work will each party, on their different time frames, need to do?
Why: Why are we doing this? Are these reasons valid at all the times and places where we can take this decision? Are all parties aware of these reasons? How does each of them weigh all the reasons?

For any particular example, some questions will be covered quickly and others will be the tough ones. But almost all of the time, difficult decisions involve different people, looking for different things, on different timelines, in different places, with different methods and for different purposes. So with the Six Elemental Questions™ you can find out about all of those and be fully aware of What to do!

Leaderboard Hack-A-Thon Post-mortem

Original Author: David Czarnecki

TL;DR

In my last post about our internal studio Hack-A-Thon, I concluded with:

At our upcoming Hack-A-Thon, I’m going to be working on rewriting the internals of our open source leaderboard code. More specifically, I want to change the API to be more readable and self documenting as well as to take advantage of transactions to get consistent snapshots of leaderboard data. After that I’m going to re-run the performance metrics to see how transactions affects the leaderboard data retrieval. Next up will be updating the public documentation. Finally I’ll release a new library of the leaderboard code. And then I’m going to do all of that for the PHP, Java, and Scala ports. That’s the plan at least.

What was it they say about the best laid schemes of mice and men?

AWRY WE GO

I started working, and the “companion cube” was with me as well. I had a game plan, based on my revision history on the leaderboard code, for tackling the rewrite in the next language: Scala.

The scala-leaderboard API, documentation and performance metric updates obviously went quicker. I finished those around 4:30 AM, 12 hours into the Hack-A-Thon; I had been up for nearly 24 hours at that point and was still going strong. Let’s randomly pick another language: PHP.

That’s right folks, I decided to tackle the updates to the I greeted the sunrise. And then I took a 40 minute nap. Around 7:30 AM, people were starting to filter into the studio and a bunch of us decided to go for breakfast. I just needed a recharge. After breakfast I worked at the PHP environment for another couple of hours, flipped my desk over (not really), and rallied for the final push of leaderboard updates in the final language, Java.

I got through the API updates to the Java leaderboard library, but I have yet to finish the documentation and performance metric updates.

FIN

All in all, the Hack-A-Thon was a success.

During the Hack-A-Thon, another colleague did his own port of the leaderboard library, so we also have a good start on a Node.js leaderboard library.

The Importance of Game Jams

Original Author: Adam Rademacher


Recently I had the chance to participate in Ludum Dare 21, a weekend event consisting of a solo-only 48-hour competition and a more relaxed 72-hour jam.  Aside from being an altogether amazing experience, with an astounding 599 games submitted, it got me thinking about the importance of game jams at an individual level in this constantly evolving, volatile industry.  After a fortnight of reflection, I’ve come to a powerful conclusion: game jams are the best practice for game dev.  Period.

Hard Skills

Perhaps the most obvious benefit of jamming is the practice your hard skills get.  It’s 48 hours of focused practice.  Not all of it is actual development time (especially if you take time to sleep), but for the entire weekend you’re thinking about game development: how to program new features, or how to speed up your art production.  Even if you don’t finish the game on time, it’s not hard to see how it can improve your skills.  Even if you only learn to write one new function, or one new shader, you’ve improved your skillset, and now you have a (hopefully) cool prototype to continue building on.

Speaking of Prototyping

What better place to prototype new tech or gameplay ideas than a game jam?  If it’s proprietary, or you’re otherwise not comfortable releasing it, don’t.  But now you’ve got a gameplay prototype to take to your company, or client, and say, “Look at how cool this is.  Let’s build on it for our next game.”

Practicing Creativity

We’re all part of a living, breathing, rapidly evolving industry.  A  creative industry.  Producers, programmers, designers, artists, audio — everyone has to be creative and contribute to the development process.  Unfortunately, practicing creativity is a counter-intuitive concept.  How are you supposed to practice being creative?  Easy.  You practice it the same way that dancers practice it; the same way writers practice it; the same way directors practice it.  Just do it.  It’s easy to set out on a project with all intention to create something innovative and new, then be completely distraught when it’s no fun, or unreasonable to try to finish, or just not as innovative as you thought it would be.  But that’s cool.  Because you’ve only spent a weekend on it.  Imagine now if you made a game every weekend for a year, getting slightly more creative with each one.  You’d have 52 games made!  Not all of them would be all-star quality (or fun at all), but if you take all of those same thought patterns and experiences and apply them to larger projects…

Cool new stuff

If you don’t have a particularly good reason to game jam, maybe you don’t need one.  Why not take the opportunity to check out a piece of new technology you’ve been looking at?  There are dozens of freely (or cheaply) available engines out there — why not take the opportunity to learn one?  Sure, you might not have time to learn UDK or CryEngine, but maybe try out Cocos2D with a simple iOS game.  Or Game Maker for good measure (this is an awesome hobby game dev tool).  Hell, pick up LOVE2D and learn Lua, or flashbuilder and learn flash, or write a native game on Android ’cause it’s cool.

You won’t regret it.  Ludum Dare 22 is in December, I hope to see you there.

You Can’t Please Everyone…Can You?

Original Author: Jameson Durall

Starting a new project has me thinking a lot about ways to improve team buy-in and enthusiasm for the game we are building.  I’ve worked on many different types of teams over the last decade and have seen many ways to run things…with varying degrees of success.  With team sizes easily in the area of 100+ people on the kinds of games I generally work on, communication alone becomes difficult.

The small team at the beginning of a project usually spends its time thinking about which key features to build the game around…or which new features to add, in the case of a sequel.  Some teams use this early time to make the plan for the game and nail down a lot of the top-level decisions before everyone else rolls on, so people can hit the ground running.  Other teams may decide to keep things very loose and come up with lots of ideas for directions to take various areas of the development.  This could mean having front-runners in areas of decision as a basis to work forward from, but essentially waiting for a bigger chunk of the team to come on board and help make those decisions final.

In my opinion, there generally seem to be two types of game developers when it comes to moving onto a new project. The first group really enjoys being part of the decision process. If you choose to lock some Design decisions down early, they will feel it’s too late for their ideas to be heard, and they may not feel they can fully get on board with the decisions that were made. The other group prefers to have the big decisions already made, so they can apply their expertise within those constraints and make the best game possible.  If you keep things open, they may feel the group doesn’t have its head on straight: what have they been doing all this time, and how can I be effective amid this lack of direction?

How do you accommodate both types of people?  It would be ideal if we could bring in the people who want to be involved first, and the others much later on…but that kind of thinking is from a dream world.  So, what are some potential ways we can get people to buy in and truly get behind what we are making, regardless of which method is used?  I wish I had answers…but I do have a few thoughts:

Communication

I think it’s important to go out of your way to ensure that every person coming onto the team knows exactly what decisions have and have not been made right away.  Give them as much context as possible and try to help them understand why.  If every team member gets the proper info right away, then any misconceptions can be washed away and a new groundwork can be laid for moving forward.  They may not be happy with the decisions made, or not made, but this allows a great starting point for discussions.

 

 

Sometimes you’ll think that everyone is on the same page about a decision and find out that you weren’t effective enough in getting the message across.  It takes great effort, often multiple tries, to make sure the right info gets to everyone.  This step is sometimes deemed important by Designers because they don’t feel they have to justify decisions or even expect that team members will read the documents they have created about each topic (they won’t).

 

Transparency

I think there is often a lack of understanding of what a Design team does, and of how even our best-laid early plans can go horribly wrong. Designers have to make a lot of assumptions in the early stages of development, and those sometimes have to be re-evaluated as the game evolves or cuts happen.  Coming up with ideas is easy…it’s finding the ones that fit the vision of the game, and your constantly changing time constraints, that makes them good Design decisions.

Other disciplines often view this as Design changing their minds or not knowing what they are doing, and it’s hard for them to work toward a moving target.  The team members who prefer to be involved may say the decision would have been better if only they had been included in the first place.  This can quickly lead to resentment of the Design team, and that kind of respect is hard to regain once lost. Taking steps to fix the aforementioned communication will help, but nothing sucks more than doing work that later needs reworking or is deemed unneeded entirely.

 

You have to do your best to make sure people understand, as well as possible, what goes into the decisions that are made.  Respect goes both ways, and you need to keep in mind that every team member’s time and opinions are valuable, even if those opinions don’t fit your current goals.  This kind of understanding can at the very least earn you some empathy and keep respect at a level where everyone can work together effectively.

 

Alcohol

I’m only partly kidding about this.  If you find that some of the team members are having difficulty buying in to the game that everyone is making…try taking the discussion outside.  Go somewhere outside of work and have a drink or lunch together and just talk frankly about what frustrations they are having.  A very free conversation like this can often help you understand each other better and alcohol can only help with that right?

 

 

 

 

In the end…this is something I’m thinking about a lot for my project, before team members start rolling on in large quantities.  I obviously don’t have the answers, but I really wanted to use this as a topic for discussion.  What ideas do you have for helping with this?

Know Your Units

Original Author: Drew Thaler

Let’s talk about one of the fundamental things that you need to know when talking about data access.

Always, always, always know your units.

Binary vs Decimal

Computers have a natural affinity for binary units, or multiples of 1024.

Historically, people (like us!) who work with computers have borrowed the SI prefixes for multiples of 1000 (K-, M-, G-, etc) and abused them to mean the closest multiple of 1024. This tradition continues right up to the present day: “64 KB” in a computing context is quite naturally interpreted as 64 * 1024 = 65536 bytes.

However, in many contexts — including data storage and bandwidth — decimal units are used. This is not manufacturers trying to “trick” you; rather, it is the normal and unavoidable friction that results from the proper use of SI prefixes as decimal values (in physics, etc.) running up against the computer industry’s abuse of them as binary values.

You need to be particularly careful when reading. If you accidentally read a decimal unit as a binary unit, you’ll be in for a surprise when you’ve got less than you expected. This only gets worse as the sizes increase.

SI unit   Pronounced   Meaning                  …But Sometimes             Difference
k (K)     kilo         1,000                    1,024                      2.4%
M         mega         1,000,000                1,048,576                  4.86%
G         giga         1,000,000,000            1,073,741,824              7.37%
T         tera         1,000,000,000,000        1,099,511,627,776          9.95%
P         peta         1,000,000,000,000,000    1,125,899,906,842,624      12.6%

Since 1999, the proper, IEEE-approved way to write 1024 has been with the Ki- prefix and its siblings Mi-, Gi-, and so on. These are deliberately unambiguous and clearly denote that you want a multiple of 1024 rather than a multiple of 1000.

Binary unit   Unambiguous Meaning
Ki            1024^1
Mi            1024^2
Gi            1024^3
Ti            1024^4
Pi            1024^5

The NIST has an excellent page with more details. Go, read it! It’s quick and to the point.

Unfortunately, the original SI prefixes will remain ambiguous until everybody stops abusing them to mean different things. It’ll be a tough habit for us to break, though – especially when it’s become so culturally ingrained. Still, you and I can do our part.

Resolving ambiguity when writing

I can’t emphasize this enough: If you’re writing and you need to use a value that uses a binary multiple, YOU SHOULD USE THE BINARY PREFIXES. It’s not that hard: write “64 KiB” instead of “64 KB”, and boom, you’re done. All you need to do is practice it a bit and it’ll become second nature.

Ah, but what if you’re writing and you need to use decimal units? Unfortunately, if I really want to communicate that something can transfer 9 million bytes per second, that’s a bit harder to write. I can’t just say “9 MB/s”, because you may assume I meant binary units and was just too dumb to write “9 MiB/s”.

In those cases, I’ve found that the simplest way to disambiguate is to write both the decimal and binary values: “The speed is 9 MB/s = 8.58 MiB/s”.
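
If you write these often, the arithmetic is worth wrapping up. A tiny sketch of that dual formatting:

    def format_rate(bytes_per_second):
        """Format a transfer rate in both decimal and binary units."""
        mb = bytes_per_second / 1_000_000        # decimal megabytes
        mib = bytes_per_second / (1024 * 1024)   # binary mebibytes
        return f"{mb:.2f} MB/s = {mib:.2f} MiB/s"

    print(format_rate(9_000_000))  # prints: 9.00 MB/s = 8.58 MiB/s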

Resolving ambiguity when speaking

The IEC recommendation is to use special pronunciations with “-bi-” in the middle, as in: “kibibyte”, “gibibyte”, “mebibyte”, etc.

Frankly, this suggestion is crap.

Nobody I know actually does this, and you’re likely to confuse people and sound like a dork if you try. Go ahead, try it — say it out loud. My favorite ludicrous example is “gibibyte”. Tongue twister, isn’t it? Say it three times fast and you’re well on your way to rubber baby buggy bumpers.

Instead, I prefer to add an explicit qualifier and say “binary megabyte” or “decimal megabyte”. This is perfectly clear to every computer programmer I’ve ever spoken to, even those who aren’t aware of the binary/decimal confusion problem. It works beautifully.

In spoken contexts (and only then) I’ll happily shorten this to “megabyte” if I feel the context is clear — just as you might just say “John” if there’s only one in the room, but “John Smith” or “John Thompson” if there is more than one John nearby.

Resolving ambiguity when reading or listening

This one ultimately depends on context.

Any discussion of computer memory or RAM will typically use binary units. If someone says “allocate 64K”, it’s probably safe to assume they mean 65536 bytes.

Discussions of disk capacities or data bandwidths are normally written with decimal units. I like to imagine that this is because cramming bits onto a disk or through a wire is a task laden with physics, so the units are naturally the standard SI units.

In the middle, you’ll find a lot of confusion when you talk about transferring from disk (or network) to RAM, or vice versa. If a computer programmer is talking about how fast they can fill memory, they are probably thinking in binary units. If a filesystem or network person is talking about how fast an interface or device performs, they are probably thinking in decimal units. Be careful to use the right one!

What                       Interpretation   Rewritten Without Ambiguity
1 Gb/s Ethernet            decimal          1 Gb/s (953 Mib/s) Ethernet
8 GB RAM                   binary           8 GiB RAM
2 TB hard drive            decimal          2 TB (1.81 TiB) hard drive
200 MB file size           binary           200 MiB file size
1.5 Mb/s cable modem       decimal          1.5 Mb/s (1.43 Mib/s) cable modem
6.9 MB/s 5x DVD speed      decimal          6.9 MB/s (6.6 MiB/s) 5x DVD speed
16.6 MB/s 12x DVD speed    decimal          16.6 MB/s (15.85 MiB/s) 12x DVD speed
9 MB/s 2x Blu-ray speed    decimal          9 MB/s (8.58 MiB/s) 2x Blu-ray speed
9.4 GB DVD capacity        decimal          9.4 GB (8.75 GiB) DVD capacity
50 GB Blu-ray capacity     decimal          50 GB (46.56 GiB) Blu-ray capacity

What you can do

Start using binary units when writing, and, if necessary, qualify with “binary” or “decimal” when speaking. It’s easy, it’s safe, and it puts you into the cool club of kids who speak without ambiguity.

If you’re reading or talking to someone who doesn’t use the binary units, make sure you know the context.

Watch out for translation errors.

And, of course, suggest that the other person pick up the habit of using the binary units too. 🙂


My top 5 games of childhood

Original Author: Kyle Kulyk

I started gaming when I was about 7, after receiving my first computer, a Commodore VIC-20, for Christmas.  Most of my childhood in the ’80s was spent on acreages, and although I had no shortage of friends from school, most of them lived some distance away.  On rainy days and through long winters, my siblings and I spent a lot of time gaming together on our C64 and later on our Amiga.  While I did get to spend time with The Legend of Zelda, Super Mario and the like while visiting friends, there are dozens of games you never hear about that left their mark on my childhood, and I thought I’d share a few with you.  I’m far from a retro gamer, but I think as a game developer it never hurts to look back at the games you found magical.  So marvel as I completely date myself and take a brief look at some of my childhood favourites.

Elite

It feels like I spent a lifetime playing Elite on the C64.  Elite was the first space fighter/trading game I played, viewed from the cockpit of your own spacecraft in glorious 3D wireframe graphics.  It came out in 1984 with its own novella to set the backdrop, and it included something of a moral system that I hadn’t experienced in games before.  Trade in illegal goods in some systems and you’ll become a wanted man!  Is it worth the hassle?  I remember playing with an eye cast occasionally over my shoulder so my mother wouldn’t stumble in and exclaim, “You’re trading narcotics to that poor planet?  The authorities are right to hunt you from system to system.  I have no son!”

Giving options and consequences in a game left a mark on me as a gamer; I hadn’t experienced that before.  Usually I played as the good little trader, occasionally saving others from pirates myself, but sometimes you just have to walk the line.  Surprisingly, I didn’t grow up to become an arms dealer.

 

Raid on Bungeling Bay

I was surprised this morning to learn that one of my early favourites was the first game designed by SimCity’s creator, Will Wright.  Raid was also available on the C64, around 1984.  It was a top-down helicopter shooter that had you taking out radar stations to avoid detection, in an attempt to bomb one of six factories.  What I loved about this shooter is that things happened off-screen.  Many arcade shooters at the time centered only on the player, and the rest of the world simply didn’t exist until the player stepped into the screen.  With Raid, supply boats made their way to factories, and if left to their own devices the factories would build themselves up and be that much harder to take out.  It reminded me of one of my arcade favourites, Sinistar: if you left Sinistar alone too long while you were off mining asteroids, he’d become a huge threat, but Raid seemed to have so much more depth.  If that weren’t enough of a challenge, your aircraft carrier would often come under attack and you’d have to rush back and switch to defensive mode.  I remember that almost sick desperation as I limped my chopper back to the carrier after taking a beating.  I really hadn’t experienced that level of challenge before.  Also, it’s fun to blow things up.  Later games like Desert Strike: Return to the Gulf provided a satisfying ratio of shots fired to things blowing up, but few shooters provided the challenge of Raid on Bungeling Bay.

Castle Wolfenstein/Beyond Castle Wolfenstein

Forever will I think of these titles when I see the word “Achtung!”

The first and second titles came out between 1981 and 1984 and had two features that have become videogame staples today: stealth play and Nazis.  You wouldn’t get far in Castle Wolfenstein approaching the game with guns blazing.  You had to bide your time, hide around corners and then, when the time was right, blow a hole in a wall with a grenade and burst out with pistol a’blazing!  Beyond continued the same gameplay, but this time focused on a plot to bomb Hitler.  The stealth tactics ramped up: guards were more easily alerted to your presence, gunshots brought the entire bunker down on top of you, and bodies needed to be hidden out of sight.  Playing games like Metal Gear Solid over a decade later reminded me of just how much stealth and survival horror games owed to games like Wolfenstein.  Nothing quite ratchets up the tension for a 10-year-old like limited ammo and the sudden appearance of a nigh-unstoppable foe.  To this day, when I’m playing a game and a difficult enemy makes a sudden appearance, I can hear the audio cue of the SS’s arrival in the back of my head.

 

Archon I and II

More from around 1984: the Archon games combined strategy and action, and successfully kept my younger brother and me at each other’s throats for at least a couple of years.  You moved various player pieces around a game board, but instead of one piece taking another automatically, the pieces were transported into an arena where they would engage in often lopsided battles.  There were a number of two-player games available that would pit brother against brother – Spy vs Spy, G.I. Joe, Front Line – but none had the variety that Archon offered at the time.  None offered the same type of strategy.  I can think of no other two-player games from my childhood that brought my younger brother and me together while simultaneously driving us apart like these two titles.

 

Eye of the Beholder

Eye of the Beholder is the most recent game on this list (1990), but it sticks with me because it was my first real role-playing game.  I was never really into D&D as a kid.  It wasn’t something my parents approved of; like many parents of the time, they were sure D&D would turn their children into sword-wielding maniacs with a lust to consume human flesh, or devil-worshipping blasphemers with a lust to consume human flesh, or some such thing involving cannibalism.

Eye of the Beholder offered a world to explore, multiple party members and better visuals, right at a time when I was just discovering books like The Lord of the Rings and the Shannara series.  EOTB gave me a chance to really get my geek on in a way previous games had fallen short of.  Huge dungeons hid a plethora of mystical artefacts and mythical creatures, and I spent days plumbing the depths beneath Waterdeep.  Unlike many role-playing games of the time, Eye of the Beholder also offered a first-person perspective, even though all enemies seemed to conveniently fit into your field of view.  Dungeon crawls today, like Dragon Age or Oblivion, take me right back to those pixelated corridors.

 

There are many more games that shaped my love of videogames over the years, from Monkey Island to Black Tiger, Doom to Half-Life – but there’s something undeniably powerful and influential about the games you discovered as a child.  My hope is that the games I develop will, for someone, resonate like these games have with me, or will on some level connect with the feelings I still have when I look back on my favourites.


Animations Without an Animator

Original Author: Szymon Swistun


We needed to author around 70 animation sequences for 4 characters, without an animator. I watched a bunch of character keyframe animation video tutorials and quickly found out that manually authoring human motion is a lot of work. So we looked into motion capture to help generate the base foundation of our animations.

We wanted the flexibility of authoring our own unique motions and were not happy with the quality of existing motion databases. Another option was to record our own motions at another studio, but the cost was too high. After hitting many walls, we finally stumbled upon the iPi Desktop Motion Capture software.

Trial Run

After downloading the free version, we set up a trial run and purchased the following:

  • 3 PS3 Eye Cameras @ $30 each
  • 3 Camera Tripods @ $30 each
  • 1 USB2 Hub @ $10
  • 2 USB2 10ft extensions @ $6 each
  • 1 Mini Maglite for calibration @ $20
  • Tape @ $0 (used some from home)

iPi Mocap Jonas Kick Test

The results were pretty good, not great. You can see how the motion is represented, though not perfectly. The calibration process takes a bit of processing time but is mostly automatic. Once calibrated, we captured a bunch of video of motions, which is then tracked in the iPi studio software. Tracking quality was OK and missed on faster, more complex motions, but the software provides a nice ability to pause the tracking, pose, fit, and track backwards from a known pose state, which fixes up most issues.

iPi Studio Calibration and Kick Test Result

3DS Max Work

Once we had the motions tracked well, we applied jitter removal to smooth them out and exported to Max. After applying the motion to our rigs, thus getting it re-targeted, some basic fixup was required to get everything working well in game. That includes:

  • Set the timeline to the intended animation frequency; we use 60fps.
  • Manually delete keys around foot plants and set plant keys instead, to lock the feet to the ground.
  • Add a layered animation for head tracking, matching the captured video.
  • Fix up hands to work with the intended props, either manually or using a layer.
  • Accent the animation with a layer for any specific styles, fix-ups or exaggerations, and to match the video capture motions more accurately if needed.
  • For cyclic animations, use the Mixer / Motion Flow to create a cyclic transition to the same animation. Some manual work is required here to get the right cycle working. Clip the animation to the intended boundaries.
  • For movement-less actions, delete the X and Z root key frames. Recently we moved this step into our own internal exporter and now retain the root motion to move our physics representation in game instead.

Final Setup

The results worked seamlessly in game, so we decided to upgrade to the standard package and build a 6-camera, 360-degree studio setup to improve capture quality and get more range of motion. We use a quad-core Intel i5 with an nVidia 480 GTX, which works fine with a 6 PS3 Eye camera setup as long as you have 3 separate USB2 controllers, each of which can take 2 USB2 PS3 Eye cameras at full performance. We additionally purchased:

  • Standard Edition iPi Desktop Motion Capture Software for $995
  • 3 more PS3 Eye Cameras @ $30 each
  • Two 3 meter tall light stands instead of tripods for 2 cameras to get them higher @ $30 each
  • 4 PS3 USB2.0 4.7 meter long repeaters @ $10 each
  • 1 more USB2.0 hub @ $10
  • A $15 table and two $10 chairs for some added comfort.


The Tech Artist’s Creed

Original Author: Rob-Galanakis

Last month we started a discussion on tech-artists.org about creating a tech artist’s creed.  After several weeks of back and forth, we finally came up with something we could all agree upon.  Here it is:

I am a Tech Artist,

Every day I will teach, learn, and assist,

And build bridges between teams, people, and ideas.

I will observe without interrupting and mediate without judging.

I may not give exactly what you ask for,

But I will provide what you need.

I am a Tech Artist,

I will approach every problem with mind and ears open

To my colleagues and peers across the industry.

I will solve the problems of today,

Improve the solutions of yesterday,

And design the answers of tomorrow.

I am a Tech Artist,

I am a leader for my team,

And a standard-bearer for my community.

I will do what needs to be done,

I will advocate for what should be done,

And my decisions will be in the best interest of the production.

My goal for the creed was to have the community come up with a code of ethics and standards for tech art in general.  We are a diverse group, and there are as many specialties as there are TAs.  So it was necessary to create something widely applicable, but still meaningful.

My hope is that we can hold ourselves to, and judge our actions against, this creed.  I think it says everything vital about what a tech artist should strive for.  I know I have not always lived up to it, and I want my fellow TAs to call me out when I do not.  I expect other tech artists share that sentiment.  I want to keep pushing our craft forward, bettering ourselves and our community, and I think this creed embodies that.

So, a short post today, because so much brainpower and effort went into the words above.  They are not mine alone (or even primarily); they belong to the tech-artists.org community, which represents and advocates for the tech art community at large.  I am just fortunate enough to have the honor and privilege of posting the creed here, on behalf of an amazing and incredibly creative group of people.

So read it over, tell me what you think, and if you have something to suggest, suggest away – the creed should continually grow and evolve just as our role does.