Communication is Key

Original Author: Lee Winder

Successful communication is one of the most important aspects of a well-functioning and productive team. Without good communication between peers, managers, publishers and anyone else involved in the game development process, a team will not perform at its best.

If developers do not feel confident in the reasons behind their work, if they don’t fully understand how their work will fit into the project as a whole or indeed when it will be required, the team will produce lower-quality work, with last-minute changes and requirements fostering an atmosphere of distrust and crunch.

But communication is one of the most difficult things to get right, and so costly when it’s done wrong.

The following are methods I’ve used over the years to try and improve communication within the team. I’d be very interested to hear other ways people have tried too!

 
Team Wiki

Having an open Wiki that people can contribute information and notes to is a good idea for documentation and persistent information. It is not a good tool for perishable, short-term information. Documentation on team processes (getting the latest build, creating submissions, setting up PCs etc.) is usually the kind of information that finds a home on a wiki, and while it’s useful it’s not something that affects the team on a daily basis.

And as it requires people to actively search for the information, in the long term people don’t bother looking for new information on a regular basis.

As a result, the Wiki is useful but doesn’t actually improve the day to day communication on a team.

 
Blogs

We have a team blog that people update about 2 or 3 times a week, usually discussing their recent work, posting up screen shots or letting people know the state of the project. It’s a nice simple way to push information to the team, though it does require everyone to contribute to the blog to get good cross team communication going.

Discussions can take on a life of their own, which is actually a good way to gauge buy-in on a project, but it can’t be used to judge the success of the process.

But you’d be amazed how many people don’t have any kind of RSS feed reader set up…

 
Micro-blogging

Internal micro-blogging tools like Yammer or Status.net allow people to quickly throw up what they’re working on, problems encountered or general team information. The best thing about micro-blogging is its push communication style, with people’s updates being automatically fed to clients, and people can post as often as they want (I usually recommend at least twice a day).

But so far I’ve had very little success with micro-blogging tools in a team environment.

Not because the idea was bad (when it worked it worked well) but I’ve yet to find a service where the official client is anywhere near usable and able to filter out the information people don’t want to read. Without a good way to filter and push information where you want it (like all the best third party Twitter tools do) it either becomes an information overload or a sea of noise, neither of which improve communication.

 
Wall Displays

Walls are valuable real estate, especially in an Open Plan office, and I’ve rarely seen them used to their full potential. But highly visible walls in the middle of a team area are one of the best ways to communicate information across a team.

As an example on my current project we have the entire timeline of the project (it’s only a short one) with dates/deliverables clearly indicated, a ‘where we are right now’ marker and a description – feature by feature – of what is required for a given milestone.

Next to that we have our sprint wall, which is our most ‘live’ wall display. But position is key, and in our case the sprint wall’s impact on the team is reduced due to its rather awkward position between a big TV, a constantly spinning fan and quite a lot of server machines. But I did say wall space is valuable real estate and it’s always hard to find a compromise between distance from the team and accessibility.

 
Morning Meetings

Morning meetings are one of the best ways to push information across a team but I’ve found that you need to follow a few rules to make them really valuable.

  • Keep the groups small. I’ve lost count of the number of 20+ people stand-up meetings I’ve seen where the majority of the ‘participants’ are looking bored or simply waiting for their turn. If your groups are not small, the information is less targeted and much less relevant, meaning more information is lost than actually passed around the team.
  • Keep them focused. There is nothing worse than 1 out of the 6 people speaking for 15 minutes about the most intimate implementation details, leaving everyone else itching to get back to their seats to carry on working.
  • Don’t make them reporting sessions. If everyone is talking at a single person (usually their manager) take the manager out for a while and get people used to talking to each other as it makes it much more likely for people to take in what is being said.

 
Milestone Reviews

I love the concept of a milestone review. Everyone playing the game, lively discussions about what went right, what went wrong and what we can do better next time. But it’s easy for these to be less than stellar if not approached in the right way.

If these reviews are not focused, maybe even as structured as a schedule or points to cover, people may start to feel unsure as to what they can comment on or what exactly they should be doing. You’ve also got to make sure that people feel comfortable both giving and taking criticism and manage the situations when that goes pear-shaped (and sometimes it will).

I’ve found that when done right, when people contribute to discussions and when people can (importantly) see change as a result of these reviews, the quality of information coming out of them is invaluable. It also has the added bonus of making people feel like they are making a difference to the team and allowing their thoughts to be heard.

 
Sprint Planning

The days of managers sitting in a room building up a schedule and dishing it out to the developers are (almost) long gone. And there’s good reason for it.

Getting a group of developers (again, as with the morning meetings, it needs to be small and focused) to discuss, schedule and plan the work ahead significantly improves the knowledge people have of what is happening across the team. Giving people a say in how work will be implemented, how it will be assigned and when it’ll be done by is vital to spreading information about the project and the work being done.

 

So those are a few of the methods I’ve used to try and improve communication and information sharing across the team. I’m sure there are plenty more (and I’ve tried a few which have been colossal failures), so what methods have you used and how well did they work out?

 


 


When the Surprise > the Prize

Original Author: Lisa Brown

This is just an anecdote that I thought was both amusing and insightful.  So, occasionally the designers at Insomniac go out to dinner together, for cross-project bonding and whatnot, and one night we went to have burgers at Islands.  After dinner, the waitress came to tell us about an exciting promotion they were doing that we would all get to participate in, hooray!

A free one of these COULD BE YOURS! Joy!

Basically, she had a bunch of scratch-off cards, the cheap lotto sort, under which were various exciting prizes like “free entree!” or “free dessert!” and the like.  Okay sure, that seems like a pretty cute thing for a restaurant to do, right?  But then she explained to us, “the rules.”

The rules were that we would each get a scratch-card, but we COULD NOT SCRATCH IT OFF.  No, we had to wait until the next time we came to eat at Islands, and then give our scratch-card to the manager, and THEY would scratch it off to see what our prize was.  That was how the prizes would be redeemed. We all stared, dumbfounded.  Who made this promotion?  Did they not know that the very joy of a scratch-card is to scratch it?  This went against the fundamental nature of the scratch-card itself!  I brought this up, and the waitress thought that maybe when I returned and asked the manager, she would let me scratch it off myself.

Anyway, she handed out all of the scratch-cards and left the table.  Immediately, we all started digging for scratch-utensils and scratched away to see what the prizes were that we would not be getting.

“I would have gotten a free appetizer, yay!”

“Ooo, I would have gotten a free drink.”

The tactile allure of the scratch-card

One person noted that one of the potential prizes was $500 and paused, wondering if he scratched off and would have gotten $500, would he be mad at himself for pre-emptively scratching and nullifying the prize?  A beat went by, and he scratched away, unable to resist (he didn’t not-get $500).  When the waitress returned she was legitimately horrified at what we’d done, and we tried helplessly to explain that the act of revealing the surprise was far more enticing than the prize itself.  Her exasperated sigh revealed that this was probably the more common reaction of anyone who got the scratch-cards.

The episode did make me think about games, and experiences in games where the act of revealing a surprise is more valuable to the player than the prize itself.  It reminded me of something I learned back in grad school about puzzles: the brain is delighted when it sees the answer to a puzzle, regardless of whether it actually solved the puzzle or looked it up in a walkthrough.  It also made me think of this VG Cats Comic.

Anyway, it is a funny thing about humans, yes?  We should think about it when we make our games.  I offer to the commenters to share their own experiences about games where the joy was all about uncovering the surprise when the reward itself was pretty much meaningless.

A personal favorite

For me, a good example would be the Wario Ware series, particularly the Gamecube version (with which I have fond memories of many hours spent playing with friends).  I found that when playing this game, the joy and delight has little to do with winning the games, or even winning an individual microgame, but instead comes from the surprise of seeing what crazy thing the microgame has you do.

Inserted! Success!

This was especially true with the group microgames (and even more so when playing with new people).  "Which one will it be?  Which one which one?  OMG IT’S THE SNOT DODGING ONE I haven’t seen that one in ages BWAHAHAHAHA!"  In fact, half the time the surprise of the microgame (be it its visual style, its short-phrase instruction, or the pure wackiness of what it asked you to do) would result directly in failing that microgame.  But the reveal was so delightful that it didn’t matter!

Alright, commenters, your turn!

 

 

 

 

With great powers …

Original Author: Robert-Walter

Once, James Gosling (inventor of Java) was asked what he’d change if he could do Java over again. He replied: “I’d leave out classes”. I read about this in this—kind of controversial—article by Allen Holub: Why extends is evil. But to set things clear: I don’t want to start the same discussion here as the article got (“he’s so wrong, ‘extends’ rulez!” vs. “he’s absolutely right, worship ‘implements’!”), and I’m pretty sure it wasn’t Holub’s intention either. Anyway, Gosling explained right after his statement that he actually considers implementation inheritance to be the problem, not classes in general.

Now, when I recap my own programming education, I remember that object-orientation was always taught as something that is strongly connected to the mechanism of inheritance (which is not necessarily wrong, but only part of the truth). And talking to my students nowadays highlights the same issues I had back then: it is hard for novices to differentiate between implementation inheritance (as a reuse mechanism) and interface inheritance (as a software design mechanism), especially when you learn OO with Java or C++, where implementation inheritance always comes with interface inheritance automatically (reusing a class’ implementation by extending it implicitly means that you inherit its interface). So soon you get statements like: “Why should I use explicit interfaces anyway?”… or, “I don’t get the idea of interfaces, I use inheritance instead”. What’s more, other important aspects of object-orientation, like polymorphism, are also intertwined with inheritance in statically-typed languages (nothing to blame them for, it’s just how it works). My point here is that this—in the minds of programming novices often, and in the minds of veterans often enough—leads to a simplified relationship: “inheritance is object-orientation”… which we could display in UML like this:

Beware! Not true!

 

In this post, I would like to introduce and discuss the fragile base class problem (FBCP). I think it is a very good showcase of why the introduction of an explicit interface concept in Java or C# has its reasons, but, first and foremost, I hope that it will illustrate how tricky your code can get when you use implementation inheritance (strong coupling). I also hope that this is not only interesting for the novices among us ;).

Note that the examples are dead simple and not good quality code, but intended to highlight the basics of the FBCP. If you are interested in getting a deeper insight, I recommend the paper A Study of The Fragile Base Class Problem.

Let us imagine the following classes, where the Collection class is part of a framework (base system) and the CountingCollection class is part of an extension somewhere else (sub system):

 

// Version 1
import java.util.ArrayList;

public class Collection {
  ArrayList data = new ArrayList();

  public void addItem (Object item) {
    data.add(item);
  }

  public void addRange (ArrayList items) {
    for (Object o : items) {
      this.addItem(o);
    }
  }
}
public class CountingCollection extends Collection {
  int n = 0;

  public void addItem (Object item) {
    n++;
    super.addItem(item);
  }

  public int getSize() {
    return n;
  }
}

The Collection class represents a collection of items, and you can add either a single item or a range of them. The extension, CountingCollection, adds a counter variable to keep track of the number of added items. Everything works as intended.

Now, after a revision of the base system, the base class got changed.

// Version 2
import java.util.ArrayList;

public class Collection {
  ArrayList data = new ArrayList();

  public void addItem (Object item) {
    data.add(item);
  }

  public void addRange (ArrayList items) {
    // revised
    data.addAll(items);
  }
}

This change is, considering the base system, valid, since it does not change the externally observable behavior of objects of type Collection. However, it breaks the sub system. This is because the subclass relies on the self-call to addItem inside addRange in the first version of the base system, which means that it relies on the internal behavior (the implementation) of Collection. Here we face the FBCP.

Having a more general look at this circumstance, it means that “any open system applying code inheritance and self-recursion in an ad-hoc manner is vulnerable to this problem.”

The fact that the immediate cause and the observable effect of the FBCP can spread between different systems makes it hard to track down, though the goal should be to avoid its occurrence in the first place. But how? Well, in their above-mentioned study, the authors introduce a flexibility property that must not be violated by the programmers in order to avoid the FBCP. In short, the property states that a modification M to a class C (the actual extension) must remain a valid refinement of C when applied to a refined version of C (C’ in the figure below; mod reads “modifies”).

Flexibility Property to avoid the FBCP

This is a bit theoretical, but in essence it means that it is the duty of the programmer to ensure that everything’s coded fine; in the end, everyone can easily google things like the Open-Closed Principle, can’t they?

Let’s take a more cynical or maybe naive perspective while looking at the upcoming example. It is also borrowed from the mentioned study, and it is only one of five examples that show orthogonal aspects of the FBCP, making it more than a trivial thing.

 

public class BaseClass {
  int x = 0;

  public void aMethod() {
    x = x+1;
  }

  public void anotherMethod() {
    x = x+1;
  }
}

public class SubClass extends BaseClass {

  public void anotherMethod() {
    this.aMethod();
  }
}

// New base class
public class BaseClass {
  int x = 0;

  public void aMethod() {
    this.anotherMethod();
  }

  public void anotherMethod() {
    x = x+1;
  }
}

This example highlights the aspect of “Unanticipated Mutual Recursion”, and it could make us ask “Why do modern languages even allow that these problems can arise?”, or in other words “Why don’t we have languages that eliminate such issues by definition?” Well, on the one hand, there are code validation and checking tools that already support us programmers in writing good quality code. But I don’t think that, especially considering the last example, tools are able to detect fragile base classes automatically. On the other hand, the questions address something that has accompanied the history of programming from the very beginning. Take pointers, for example: in the hands of an expert a powerful weapon, but amateurs can do horrible things with them (while having good intentions!). And every one of us knows a guy who still swears that Algol 60 is the best language ever.

So, maybe there will be a new language in the near future that explicitly separates implementation inheritance and interface inheritance (and maybe no one will consider it useful), but until then, we, as lecturers and senior programmers, need to make sure that the upcoming generation of programmers is aware of the dangers in implementation inheritance and that they understand object-orientation more like this:

Object-Orientation how it should be considered
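To make that picture concrete, here is a minimal sketch (mine, not from the article, and in C++ rather than Java) of the interface-plus-composition alternative applied to the earlier Collection example. Because CountingCollection no longer extends Collection, an internal change like the revised addRange can no longer silently break the counter:

#include <vector>

class ICollection {
public:
    virtual ~ICollection() = default;
    virtual void addItem(int item) = 0;
    virtual void addRange(const std::vector<int>& items) = 0;
};

class Collection : public ICollection {
public:
    void addItem(int item) override { data_.push_back(item); }
    void addRange(const std::vector<int>& items) override {
        data_.insert(data_.end(), items.begin(), items.end());
    }
private:
    std::vector<int> data_;
};

class CountingCollection : public ICollection {
public:
    void addItem(int item) override { ++count_; inner_.addItem(item); }
    void addRange(const std::vector<int>& items) override {
        count_ += static_cast<int>(items.size());   // counted here, not via self-calls
        inner_.addRange(items);
    }
    int size() const { return count_; }
private:
    Collection inner_;   // reuse by composition instead of extension
    int count_ = 0;
};

int main() {
    CountingCollection cc;
    cc.addRange({1, 2, 3});
    return cc.size() == 3 ? 0 : 1;
}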

In the end, it is just like Stan Lee once said: “With great power there must also come — great responsibility!”

Best,

Robert


Present this!

Original Author: Kyle-Kulyk

This past week I gave a couple of presentations about our game Itzy, which should be launching next week (aaaahhh plug plug plug).  I thought I’d give a couple of tips on presentations while they’re fresh in my mind.

Show up early – Nerves might keep you outside until the last minute.  You might want to rehearse your speech in the washroom.  You might want to belly up to the local pub for that final shot of liquid courage.  Don’t!  Get there early and make sure you’re set up.  Nothing makes a crowd as restless as watching you sit there, fumbling with a thumb drive, waiting for the projector to heat up, organizing your notes.  Be ready to go.  In one of my recent presentations, no one knew the login for the PC I was slated to present on.  We had to wait for a runner to go find someone in the know and report back to us, and that took some time.

Practice – Run through your presentation in its entirety beforehand.  Do it a few times and make sure you’re comfortable with your visual media and the source material.  At my first presentation, I forgot my notes sitting beside the computer at home.  Thankfully, having run through the presentation earlier that afternoon, I knew what I wanted to say without my notes and was able to just wing it with my slides acting as my guide.  Which leads to my next point…

Remember to bring your notes – Seems like a no brainer, but hey!  I forgot.  In fact, why not make a checklist of everything you need?  And check your files.  Make sure you actually remembered to bring your presentation materials, and don’t forget to bring the appropriate materials.  Has anyone seen a presenter prepare to do a PowerPoint presentation, only to find the machine he’s presenting on doesn’t have PowerPoint?  They were picking bits of that guy out of the ceiling ventilation system.  Save as a PowerPoint Show, or why not bring both file versions, just in case?

Do you like the walls of text?  Neither do we – Keep your visual presentations simple.  No one likes to sit through a presentation with a never-ending wall of text.  Break it up, simplify, add graphics.  And for the love of god, try to limit your presentation to about 20 minutes or so…

Don’t Memorize – No one likes to sit back and listen to a scripted speech.  Know your material, stick to your key points and have a conversation with the audience.  If you know what you’re talking about, keep to the key points and just wing it.  It’s much less stressful than remembering your lines and helps you connect with your audience.

Have fun – With any luck, you like your job.  Bring that enthusiasm to your presentations.  Neither of my presentations went perfectly this past week, but the experience of talking about a subject I’m passionate about sure as hell beats talking about something I have no interest in.  I’ve given many presentations over the years during my decade in the brokerage industry, and talking now about my company and the game we’re working on, instead of mutual funds or which online brokerage platform is best for you, has to be the most fun I’ve ever had doing a presentation.  I was nervous at first, I always am and you will be too, but don’t let that get to you.  Enjoy what you do, and it’ll be enjoyable for your audience.

A lot of these points might seem pretty straightforward, but when you throw in the extra stress of presenting, suddenly the straightforward can be elusive, no matter if you’re presenting to the public or just a room of your co-workers.

Good luck on your next presentation.

Mathematics for Game Developers 4: Rational Numbers

Original Author: John-McKenna

We now have integers, and can solve any equation of the form x + a = b, with a and b being arbitrary natural numbers.  In fact, we can solve those equations with a and b being integers too.

The reason we needed to create new numbers was that we wanted to solve equations of the form x + a = b.  This time we want to solve equations of the form ax = b.  The integers let us solve some of these (2x = 4, for example) but not all.

Consider 3x = 7.  We have 2 × 3 = 6 < 7 and 3 × 3 = 9 > 7.  We want a number that’s in between the two, but as long as we stick with integers, there isn’t one.

So, remembering how we defined the integers, we try the same trick again: create an equivalence relation on pairs of integers, where the pair (b,a) stands for the solution of ax = b, and (b,a) is equivalent to (d,c) exactly when bc = ad.  That’s all assuming that a isn’t zero, of course.

These new numbers are called “rational numbers”, because they represent a ratio of two integers.

We need to define addition and multiplication of rationals.  We want them to be consistent with the definitions we already have for integers, so (a,1) + (b,1) = (a+b,1) and (a,1) × (b,1) = (ab,1) (I’m abusing notation here, writing (a,1) when I mean “the equivalence class that contains (a,1)”.  It’s much easier, and mathematicians do this all the time.  It doesn’t cause any problems as long as you’re paying attention).

It’s not obvious how to extend this to more general cases, so let’s go back to the equation we used as motivation.  Say we have two numbers, x and y, defined as the solutions to ax = b and cy = d.  Multiplying the first equation by c and the second by a gives acx = bc and acy = ad, so acx + acy = ac(x+y) = ad+bc.

And that gives us a definition for addition: if x and y are rational numbers, with x containing the pair (b,a) and y the pair (d,c), then x+y is the rational containing (ad+bc,ac).
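If it helps to see that rule as code, here is a tiny sketch (mine, not part of the series) of a rational type built from a pair of integers, with addition defined exactly as above. The names, and the lack of any reduction to lowest terms, are purely for illustration:

#include <cstdio>

// A rational stored as the pair (b, a), representing the solution of a*x = b.
// Minimal sketch: no reduction to lowest terms, and a is assumed non-zero.
struct Rational {
    long long b;   // the "numerator"
    long long a;   // the "denominator"
};

// If x contains (b, a) and y contains (d, c), then x + y contains (ad + bc, ac).
Rational Add(Rational x, Rational y) {
    return { x.a * y.b + x.b * y.a, x.a * y.a };
}

// (b, a) and (d, c) are in the same equivalence class exactly when bc == da.
bool Equivalent(Rational x, Rational y) {
    return x.b * y.a == y.b * x.a;
}

int main() {
    Rational half  = { 1, 2 };
    Rational third = { 1, 3 };
    Rational sum   = Add(half, third);            // (5, 6), i.e. five sixths
    std::printf("(%lld, %lld)\n", sum.b, sum.a);
    std::printf("%s\n", Equivalent(sum, { 10, 12 }) ? "same class" : "different class");
    return 0;
}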

We can do something similar for multiplication.  Again, I’ll leave the details to you.

Are there any more numbers that we need to invent?  Surely not, because there isn’t any more room to look for them.  We had all that space less than zero in the natural numbers, so we had to invent integers to fill it.  And we had all that space between integers to fill with rationals.  But in between the rationals, there are just more rationals: (x+y)/2 is between x and y, and there are infinitely many more along with it.  If there are any more numbers, there’s nowhere for them to go.  We have all the numbers we need.

At least, that was the thinking about two and a half thousand years ago.  Then something shocking happened.  I’m sure you already know what that was, but it’s late and I’m tired, so I’ll leave that story for next time.


C / C++ Low Level Curriculum part 2: Data Types

Original Author: Alex Darby

Prologue

Hello and welcome to the 2nd part of the C / C++ low level curriculum series of posts that I’m currently doing.

Here’s a link to the first one if you missed it: /2011/11/09/a-low-level-curriculum-for-c-and-c/

This post is going to be a little lighter than most of the other posts in the series, primarily because this post is vying for my spare time with my urge to save a blonde girl with pointy ears from the skinny androgynous Demon Lord of extended monologue in a virtual universe powered by three equilateral triangles.

Before we continue, I’d like to quickly bring to public note a book that has now been recommended to me many times as a result of the first post: http://www1.idc.ac.il/tecs

I can’t personally vouch for it, but I fully intend to buy it and grok its face off as soon as I get some spare time in my schedule. This book looks awesome, and if it is half as good as it looks to be then reading it should be an extremely worthwhile investment of your time…

Assumptions

The next thing on my agenda is to discuss assumptions.

Assumptions are dangerous. Even by writing this I am making many assumptions – that you have a computer, that you can read and understand The Queen’s English, and that on some level you care about understanding the low-level of C++ to name but a few.

Consequently, dear reader, I feel that it’s worth mentioning what I assume about you before I go any further.

The important thing, I guess, that I should mention is that I assume that you are already familiar with and comfortable using C and/or C++. If you’re not, then I’d advise you to go and get comfortable before you read any more of this 🙂

Data Types?

So, again, I find myself almost instantly qualifying the title of the post and explaining what I mean when I say data types.

What I am talking about is the “Fundamental” types of C++ and what you should know about how they relate to the machine level – even this seemingly straightforward aspect of C++ is not necessarily what you would expect; especially when dealing with multiple target platforms.

Whilst this isn’t the kind of information that will suddenly improve your code by an order of magnitude, it is (in my opinion) one of the key building blocks of understanding C / C++ at the low level; as it has tonnes of potential knock on effects in terms of speed of execution, memory layout of complex types etc.

Certainly, no-one ever sat me down and explained this to me, I just sort of absorbed it or looked it up over the years.

 

Fundamental and Intrinsic Types

The fundamental types of C/C++ are all the types that have a language keyword.

These are not to be confused with the intrinsic types which are the types that are natively handled by some given CPU (i.e. the data types that the machine instructions of that CPU operate on).

Whenever you use new hardware you should check how the compiler for your platform is representing your fundamental types. The best way to do this is (can you guess?) to look at the disassembly window.

These days all fundamental types of C++ can be represented by an intrinsic type on most platforms; but you definitely shouldn’t take this for granted, it has only really been the case since the current console hardware generation.

There are 3 categories of fundamental type: integer, floating, and void.

As we all know, the void type cannot be used to store values. It is used to specify “no type”.

For both integral and floating point types there is a progression of types that can hold larger values and/or have more numerical precision.

For integers this progression is (from least to most precision) char, short, int, long, long long; and for floats: float, double, long double.

Clearly, the numerical value limits that a given type must be able to store mandate a certain minimum data size  for that type (i.e. number of bits needed to store the prescribed values when stored in binary).

 

Sizes of Fundamental types

As far as I have been able to discover, the C and C++ standards make no explicit guarantee about the specific size of any of the Fundamental types.

There are, however, several key rules about the sizes of the various types which I have paraphrased below:

  1. A char must be a minimum of 8 bits.
  2. sizeof( char ) == 1.
  3. If a pointer of type char* points at the very first address of a contiguous block of memory, then every single address in that block of memory must be traversable by simply incrementing that pointer.
  4. The C standard specifies a value that each of the integer types must be able to represent (see page 33 in this .pdf of the C standard if you want the values – see the <climits> header of a standard-conformant C++ implementation for details of the values used by your compiler).
  5. The C++ standard says nothing about size, only that “There are five standard signed integer types : “signed char”, “short int”, “int”, “long int”, and “long long int”. In this list, each type provides at least as much storage as those preceding it in the list.” (see page 75 in this .pdf of the latest C++ standard I could find).
  6. Rules 4 & 5 have similar counterparts in the C and C++ standards covering the progression of floating point types.

Helpfully, MSDN has a useful summary of this information (though it’s partly MSVC specific, it’s a good starting point).

Despite all this leeway in the standard, the size of the fundamental types across PC and current gen console platforms is (to the best of my knowledge) relatively consistent.

The C++ standard also defines bool as an integral type. It has two values, true and false, which can be implicitly converted to and from the integer values 1 and 0 respectively; and it is the type returned by the comparison operators (==, !=, >, < etc.).

As far as I have been able to ascertain, the standard only specifies that bool must be able to represent a binary state. Consequently, the size of bool can vary dramatically according to compiler implementation, and even within code generated by the same compiler – I have seen it vary between 1 and 4 bytes on platforms I’ve used – I have always assumed that this was down to speed of execution vs. storage size tradeoffs.

This ‘size of bool’ issue led to at least one company I have worked at banning the use of bool in complex data structures.  I should clarify that this was a ‘proactive’ banning based on the fact that it might cause trouble rather than one that resulted from trouble actually having been caused.

We should also mention enums at this point (thanks John!) – the standard gives the storage value of an enumerated type the liberty to vary in size depending on the range of values represented by each specific enum – even within the same codebase – so an enum with values < 255 (or <= 256 members with no values assigned) may well have sizeof() == 1, and one which has to represent 32 bit values would typically have sizeof() == 4.
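The easiest way to see what your own toolchain actually does is simply to print the sizes. Here’s a small sketch of my own (not from the original article); the numbers it prints will vary between compilers and target platforms, which is rather the point:

#include <cstdio>

// The standards only guarantee minimum ranges and the ordering of the
// progression - everything else is up to the implementation.
enum SmallEnum { SMALL_A = 0, SMALL_B = 1 };   // values fit in a byte, but the
                                               // underlying type is the compiler's choice
enum BigEnum   { BIG_VAL = 0x7FFFFFFF };       // needs at least 32 bits of storage

int main() {
    std::printf("char      : %zu\n", sizeof(char));      // always 1 by definition
    std::printf("short     : %zu\n", sizeof(short));
    std::printf("int       : %zu\n", sizeof(int));
    std::printf("long      : %zu\n", sizeof(long));
    std::printf("long long : %zu\n", sizeof(long long));
    std::printf("float     : %zu\n", sizeof(float));
    std::printf("double    : %zu\n", sizeof(double));
    std::printf("bool      : %zu\n", sizeof(bool));      // implementation defined
    std::printf("SmallEnum : %zu\n", sizeof(SmallEnum));
    std::printf("BigEnum   : %zu\n", sizeof(BigEnum));
    return 0;
}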

This brings us onto pointers. Strictly speaking pointers are not defined as one of the fundamental types, but the value of a pointer clearly has a corresponding data size so we’re covering them here.

The first thing to note about pointers is that the numeric limits required for a pointer on any given platform are determined by the size of the addressable memory on that platform.

If you have 1 GB of memory that must be accessible in 1 byte increments, then a pointer needs to be able to hold values up to ((1024 * 1024 * 1024) – 1), which is (2^30 -1) or 30 bits. 4GB is the most that can be addressed with a 32 bit value – which is why win32 systems can’t make use of more than 4GB.

For example, when compiling for win32 with VS2010, pointers are 32 bit (i.e. sizeof() == 4), and when compiling a 64 bit build for OSX with XCode (on the Macbook Pro I use at work for iOS development) pointers are 64 bit (sizeof() == 8).

One thing that is definitely worth noting is that all data pointers produced by a given compiler will be the same size (n.b. this is not true of function pointers).  The type of a pointer is, after all, a language level abstraction – under the hood they are all just a memory address. This is also why they can all be happily converted to and from void* – void* being a ‘typeless pointer’ (n.b. function pointers cannot be converted to or from void*).

That said, knowing the type of the pointer is absolutely crucial to the low level of many of the higher level language mechanisms – as we shall see in later posts.

Addendum

So, following on from a couple of the comments, I need to cover function pointers as separate from data pointers.

I made an incorrect assertion that all pointers were the same size. This is only true of data pointers.

Function pointers can be of different sizes precisely because they are not necessarily just memory addresses – in the case of multiply inherited functions or virtual functions they are typically structures.
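A quick way to see this for yourself is to compare the size of a data pointer with the size of a pointer-to-member-function on a class that uses multiple inheritance. This is my own sketch, not the author’s, and the exact numbers depend entirely on your compiler’s member-pointer representation:

#include <cstdio>

struct A { virtual ~A() {} int a; };
struct B { virtual ~B() {} int b; };
struct C : A, B { virtual void DoThing() {} };

int main() {
    // All data pointers produced by a given compiler are the same size...
    std::printf("void*         : %zu bytes\n", sizeof(void*));
    std::printf("C*            : %zu bytes\n", sizeof(C*));
    // ...but a pointer-to-member-function may need extra room for a
    // this-pointer adjustment and/or virtual dispatch information, so it is
    // often two or more times the size of a plain data pointer.
    std::printf("void (C::*)() : %zu bytes\n", sizeof(void (C::*)()));
    return 0;
}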

I recommend the blog that Bryan Robertson linked me to, as it gives a concrete example of why pointer to member functions often need to be more than a memory address:

C++ Built In Types

C++ Fundamental types (win32 compiled on Windows 7 with VS2010)

A Tale of Two Call Stacks

Original Author: Bruce Dawson

My kingdom for some symbols

I spend a large portion of my time at work trying to make things faster and less crashy. Usually the problems I investigate are in our own code so I have full information – source code and symbols. However sometimes the problems are at least partially in some other company’s code, and the task gets trickier.

Mandatory disclaimer: this post represents my opinion, not that of my employer.

A call stack we can believe in

For example, last week I investigated a Visual Studio hang. This intermittent hang had been bothering me for months and I finally decided to record an xperf trace of the hang and investigate. The details will be the subject of another post, but one vital clue was this call stack from the xperf CPU scheduling summary table:

image

The call stack is entirely in Microsoft code. It starts in Visual Studio and ends up in Windows, and this call stack shows that Visual Studio hung for 2.585 seconds while trying to CreateFileW so that it can GotoEntry in the CResultList. Even though I know nothing about the Visual Studio architecture that was enough information to let me understand the problem, and I then changed our project files in order to completely avoid this hang in the future. Shazam!

The reason I was able to diagnose this problem is because Microsoft publishes symbols for most of its code on a public symbol server. Symbols are published for Windows, Visual Studio, and much more, and this often lets me fix performance problems and crashes even when they are entirely separate from our code. Yay Microsoft!
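For anyone who hasn’t set this up: the usual way to pull those symbols down (it works for windbg, Visual Studio and xperf alike) is to point your symbol path at the public server, with a local cache directory of your choosing. The cache path below is just an example:

_NT_SYMBOL_PATH=srv*C:\symbols*https://msdl.microsoft.com/download/symbols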

A call stack that knows how to keep its secrets

Another example, not quite so happy, is demonstrated by this call stack. This is sampling profiler data from a thread that is in our game:

image

Huh. This thread sure is using a lot of CPU time. In our process. I wonder what it’s doing? Except for the two out of 1,036 samples that hit in Windows functions I can only tell that it is NVIDIA code that is executing – there are no indications as to what it is doing.

I don’t mean to pick on NVIDIA here. Well, to be more accurate, I don’t mean to just pick on NVIDIA. This is a problem with all three major graphics vendors – NVIDIA, AMD, and Intel. None of them share symbols with the public and this leaves game developers with a significant problem. When a crash occurs deep in graphics driver code (a not uncommon occurrence) we are helpless. When a frame glitch occurs deep in graphics driver code (also quite common) we are helpless. And when game startup includes excessive memory allocations or CPU time deep in graphics driver code… we are helpless.

You can’t handle the truth

I’ve been told by some graphics vendors that having symbols would not be valuable to game developers, and might even be confusing. Game developers couldn’t possibly understand their cryptic function names, and might misinterpret them.

Poppycock.

I’ve solved dozens of performance problems in other people’s code, with just symbols to guide me. Having symbols has never been confusing, and has almost always saved me time.

If I had symbols for the graphics drivers then I could solve some problems on my own. I could recognize patterns in the crashes and performance problems that I see. I could give more precise suggestions and bug reports to the graphics vendors. I could more easily figure out what is happening in my code that is causing problems in their code.

As it is I can do almost nothing. Significant CPU time and memory is being consumed in my game’s process and I don’t have symbols to help understand why.

Call to action

If you’re a game developer, ask the graphics vendors that you work with for symbols. They’ll say no, but it’s still important to ask, in order to remind them of the importance of this issue. After they say no be sure to send them all of the crash dumps and xperf traces where they are a factor and insist that they help you, since they won’t let us help ourselves. And, don’t forget to share your stories and needs for symbols in the comments.

If you’re a graphics vendor – please release symbols. I’m confident that if you do it will let us make better games on your hardware, while saving time for your support team, and for game developers. I know that deciding to share symbols is hard, because symbols reveal a lot. I get that. But not sharing symbols with anyone is counterproductive.

Final disclaimer

I own stock in Microsoft, Intel, and AMD, but not NVIDIA. I hope it has not affected my impartiality. You decide.

Test Driven Game Development Experiences

Original Author: Alistair Doulin

We’ve just wrapped up our first game that employs full Test Driven Development (TDD) practices. I’ll share my experiences, good and bad, now that we’ve completely finished the first version of the project. I’ve spoken previously about Test Driven Game Development (TDGD), but a lot of that was theoretical, so today I’d like to give some more concrete thoughts on how TDGD helped with the creation of Battle Group.

Test Driven Game Development?

The idea behind Test Driven Game Development is writing automated Unit Tests to confirm the correctness of your production code. Unit tests can be written to test all areas of a game from gameplay to rendering engines and anything in between. As I stated previously, there are three main goals from TDGD:

  1. Find out when code breaks. If you make a code change that breaks something that was previously working (and is tested) you will know about it immediately.
  2. Forces modular code.  For code to be “unit testable” it must be modular with clear boundaries and “separation of concerns” between various systems.
  3. Allows you to “find the fun”.  Once you’re guaranteed that certain requirements are met, you’re free to experiment with gameplay to make it more enjoyable without breaking the game.  This also extends to giving designers more freedom when scripting.  Encouraging experimentation by designers without programmer intervention is invaluable.
Update: #1 was reworded thanks to kevingadd. The previous, badly worded version read: “Guarantee code works. With a full suite of unit tests you are guaranteed that your code does what it is supposed to do. Bugs will be found as soon as they are introduced as tests fail immediately.”

How We Did It

In general the development of new gameplay features followed these steps:

  1. Design the unit test for the new feature, testing the outward interface seen by the rest of the system
  2. Implement the feature as quickly as possible, cutting corners as needed so the feature is playable in the quickest time possible
  3. The team tests the feature, makes sure it’s working and fun and we make iterative improvements to it
  4. Once happy with the feature I then refactor the code to clean up any shortcuts and make it the optimal solution

This system worked extremely well and led to a high development velocity, both for the creation of new features and the maintenance/updating of existing features.

Gameplay Testing

All of the unit tests created for Battle Group were testing gameplay code as this is the most complex area of our game. Unity does the heavy lifting for most of the technology based systems (eg rendering, input) freeing us to have ~90% of the code in the project related directly to gameplay. The main motivation for this testing was to allow us to rapidly develop features and experiment with existing code without breaking existing systems. For the most part, this motivation was met throughout the lifetime of the project. While the gameplay stayed fairly constant throughout the 4 month development cycle, there were a couple of major changes we made and the unit tests were invaluable during these changes.

Battle Group started out (as most games do) as a game design document. We then prototyped the game, tweaked the design document and began working on the alpha build. This design document was translated into unit tests. While I (the programmer) was responsible for this, I plan in the future for our game designer to begin to take over the role of writing gameplay-specific unit tests. As the project progresses, the artifacts of its design move from a static design document/wiki to an active, maintained set of unit tests. As design change requests came in from the team, I would focus my time on updating the unit tests to match the new design.

What About Prototyping?

We did not create these unit tests during the prototyping stage (about a week’s work) as this was a throwaway prototype and unit tests would have slowed us down without much real value. One of the negatives of TDGD is the reduction in velocity while making fairly major changes to the codebase. This is often the case while prototyping, and therefore I strongly urge against TDGD during the prototyping phase of your development, particularly for throwaway prototypes.

The prototyping phase is a great opportunity to start fleshing out the design of your testing suite however. As you work on the core features of the game you can see where the bulk of the coding effort is likely to go and also which areas are most susceptible to change throughout the life of the project. A good example of this was the blast radius and velocity of the weapons used in Battle Group. Small changes to this had a major impact on the feel, flow and accessibility of the game. For this reason I made sure that this was both easy to change and robust in the changes I made. As velocities increased the distance traveled per frame became quite large. Coupled with this was the low physics frame rate on older mobile devices and it was crucial that we had repeatable behavior at varying frame rates and data values. As I had already planned to implement TDGD post-prototype I was mindful of these areas of code and made a mental note to test these areas first and thoroughly.

Code Coverage

Whenever I discuss TDD with someone in or outside the game development industry, there’s often a heated discussion about code coverage. Code coverage is the percentage of production code that is “covered” by unit tests. There are purists who claim you’re not really doing true TDD without 100% code coverage, and there are others who say some arbitrary percentage is enough. My stance is that the game/code itself should determine the code coverage you should aim for. Sometimes certain tests cause more trouble than they’re worth (e.g. changing a few lines of code requires 10 times more lines of unit testing code to be updated). Either the tests need a rethink in the way they are implemented, or it’s best just to remove them.

I didn’t “watch the clock” when it came to code coverage on Battle Group. My main focus was to get the best value from my limited resources (as the sole programmer on the team). During prototyping and throughout development I noted which areas of code were critical or breaking often and made sure to get the highest code coverage possible on them. There is certainly a point of diminishing returns when it comes to code coverage which differs between projects and between systems within a project.

Test First Development

I opted for a “Test First” development style where I would create my unit test and then implement the feature being tested. This allowed me to design exactly how I thought the code should be used rather than just writing the solution to the problem. I found this was invaluable for keeping a clear separation of concerns and made sure everything was as modular as possible. By thinking about the outward interface of the functionality first, it helped me get into a mindset of creating exactly what I wanted. When solving an interesting and complex problem it’s easy to lose sight of the original reason for the code’s existence. Test-first development focuses your efforts where they are needed most: on defining and then implementing the functionality of a piece of code as seen by the rest of the system.
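As a bare-bones illustration of that flow (my own sketch in C++ with plain asserts, not the Unity setup we actually used on Battle Group, and with an invented Health class purely for the example): the test is written against the interface first, and the implementation only exists to make it pass.

#include <cassert>

class Health {
public:
    explicit Health(int maxHealth) : current_(maxHealth) {}
    void ApplyDamage(int amount);
    bool IsDead() const  { return current_ <= 0; }
    int  Current() const { return current_; }
private:
    int current_;
};

// Step 1: the test comes first, written against the interface the rest of
// the game will see.
void TestDamageClampsAtZeroAndKills() {
    Health h(10);
    h.ApplyDamage(4);
    assert(h.Current() == 6 && !h.IsDead());
    h.ApplyDamage(100);                    // overkill damage should clamp to zero
    assert(h.Current() == 0 && h.IsDead());
}

// Step 2: the implementation is only written to make the test pass, and can
// later be refactored with the test acting as a safety net.
void Health::ApplyDamage(int amount) {
    current_ -= amount;
    if (current_ < 0) current_ = 0;
}

int main() {
    TestDamageClampsAtZeroAndKills();
    return 0;
}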

NCrunch

One tool that was released during the development of Battle Group was NCrunch. This tool will completely change the way you unit test. I won’t go into too much detail as it’s a little off topic, but I strongly recommend you grab it and experiment with how it works. The whole system can be summed up in two points:

  1. Unit test code has real-time inline green/red lights to indicate whether it is currently passing. Tests are continually running in the background to keep this constantly up to date
  2. Production code has a similar green/red light system showing how many unit tests covering this code pass and fail (or if there are no tests at all)
Conclusion

Overall I was really happy with the way TDGD turned out on Battle Group and I definitely plan on adopting it again for future projects. My development velocity increased overall, while being slightly slower at the point of implementation of a new feature.

Have you used TDD on game projects? Do you have any experiences you can share? Or do you think this is all a load of crap and I should get off my soapbox and get back to coding the game instead of the unit tests?

Tips for IGF Student Submissions

Original Author: Heather M Decker-Davis

It may be a bit early for tips and pointers, but I feel the need to reflect! This year marks the third year I’ve worked on an IGF student submission, and regardless of having gone through the process before, the same items often end up overlooked or underestimated. For the good of others who may submit in subsequent years, I’d like to go over the most commonly fumbled details.

Padding and Polish in the Schedule

Wiggle room in your schedule is especially important if your project is volunteer-based (outside of your normal course work), but all projects can benefit from some padding and polish time. Note that these are not the same thing.

Padding means taking your general estimate for completion of an item and giving it some extra breathing room for variances, problems, etc.

Example: if it takes an artist a week to complete one tile set, consider scheduling each set at a week and a half or two weeks. When a set is completed early, it’s a magnificent feat! If a particular set drags out and fills the full two weeks, it’s not a setback.

Polish time occurs after everything is essentially completed. No new features are going in and no placeholders remain. Setting aside extra time for polish allows the team to improve on various aspects and tighten up the presentation before submitting.

If everything has gone according to plan, polish time will not be devoured by fixes, etc., but in the worst-case scenario, it can also end up being a safety net. If all else fails, you can still push polish updates after the initial submission. While this isn’t the ideal situation to end up with, it’s something to be aware of.

Additional Requirements

Beyond actually completing your game, there are additional considerations to keep in mind.

Be forewarned that it’s extremely easy to get caught up in the production of your game and forget about extra details until the last minute. This, in turn, makes your initial impression with viewers and judges a bit less snappy. If you can delegate and schedule these items in addition to the main production, your presentation will reflect it.

Trailer Video

To submit a game to IGF, you’ll also need a video trailer, in the form of a YouTube or Vimeo link.

To shoot a video trailer, your game needs to be pretty finished… which feeds back into the aforementioned importance of scheduling. If you’re still working on the game up until the last minute, the trailer video suffers for it, especially when you figure in editing, rendering, and upload time.

Your trailer should be more than a quiet video of game action. Check out game trailers from previously successful games for good examples of how trailers generally work. They tend to combine actual game footage with some degree of marketing or excitement, including either text or audio narration to guide the viewer.

If your video is the first impression people get after surfing YouTube or encountering your website, it’s going to be important that it not only represents your game, but also showcases what’s appealing and unique about it!

Website

You will also need to provide a link to the game’s website. The complexity of the site design generally depends on the type of game, but any website will need hosting space and a domain lined up ahead of time. You may be able to talk to your school about obtaining these, or you will need to purchase them yourself.

Your website can also be used in the promotion of your game. Share that you entered and get some feedback. The earlier you can do this, the better.

Screenshot

The screenshot image of the game is shown on the IGF entry page for your submission. As it’s one of the first things people will see when browsing to your game, you’ll want it to be intriguing and representative of the type of game you’ve entered.

Keep in mind your screenshot must also meet the format requirements listed by IGF. Be sure to check the current entry form for details!

Description

The description will also be published on your entry page, right under the screenshot, so take some time to draft up something appropriate and concise. The limit was 200 words for 2012 entries.

Parting Words

In all, being aware of the specific requirements for IGF, above and beyond simply completing your game, will certainly aid your entry process. Scheduling appropriately and taking additional requirements into consideration from the start of your project are just a couple of practices that will help you stay on track.

Each year the number of student entries increases. The importance of attention to detail and a polished presentation cannot be overstated.

Put your best foot forward and good luck with your future submissions!


Darwinian_Coding: ( Implementing Creature Cognitive Replay )

Original Author: jake kolb v

Context: where we use this

“Creature Cognitive Replay” covers how we code video-game entities, such as Space-Marines or Martian-Primates, to learn and store ‘knowledge’, such as “where are my favorite forest hiding places”, “is that large blinking-red object blocking my path dangerous?”, or “how do I rebuild my destroyed wigwam”.  Creating a believable, living world requires us to separate the omniscient simulation or ‘game’ knowledge from what an individual creature should be able to know, such as the difference between castle guards automatically knowing the player’s position versus seeing/hearing an object move and having to investigate what the sound might be.  While every game’s needs are quite different, the focus here is on what approaches have worked best for simulating 100s to 10,000s of unique creatures in a reasonably balanced ecosystem of producers (plants) and consumers (herbivores & carnivores).  The last three approaches we used were given between 8 & 64 MiB of active ‘current-time’ creature-knowledge-memory for PC desktop applications but could easily move to mobile or console spaces.  The ‘replay’ part of the title handles timeline-based thinking so we can ‘rewind/fast-forward’ our simulation and the knowledge will be accurate/consistent.  In this discussion, we use ‘knowledge’ as a unit of information and a ‘thought’ as a piece of knowledge accessible to an entity which can change over time.

Goals: what we need

  • Speedy Queries: entities can ask questions about their knowledge to make decisions (Note: ‘decisions’ are made using an interpreted script or pre-coded behaviors)
  • Instancing & Breeding: supports ‘generic’ abstractions such as ‘space-tiger’ vs specific tigers and allows cloning, breeding, and other ways to mix any inherited knowledge when they are created at runtime.
  • Trending: can store the expected/default/popular thoughts (everybody knows that SpaceCola increases your speed) vs fringe/unique thoughts (SpaceCola decreases my health)
  • Trust: supports ‘certainty’ to help decide how reliable the knowledge is to the entity
  • Origin: stores knowledge from different sources, such as instinct vs communicated vs sensory vs analytical thought-process
  • Rewind: handles recording/playback of each creature’s knowledge over time to permit simulation rewind and cognitive replay

Solutions: how we tried

Technique:  Straightforward-Object-Oriented Programming ( SOOP ) 

Our first attempt at representing “knowledge per entity” used classic Smalltalk/C++ object-oriented programming with a hierarchy of ‘classes of knowledge’ and a vast virtual-function interface to communicate with each class.  Templates were not supported back when we tried this approach and we had a huge base of files to handle any interactions with the knowledge and ownership-access.  We had to anticipate any possible query an entity might want to make ahead of time to match the entity type (Gorilla) with the member functions of the knowledge types ( Mammal, Physical, Jungle_Prowler, Nest_Builder ) and the permutations quickly got out of hand.
Pros: Easy to code & debug using a design doc for each type of entity.  Given fixed entity behaviors, little to no design changes when coding, and few iterations of tweaking the behavior code, this is a safe, reliable choice.  Easy to rewind/fast-forward given fixed structures of the SOOP approach as we can just difference the bits between two structures and run-length/huffman-encode those deltas.
Cons: Creating new queries required new code which was expensive in time and couldn’t be done by a designer.  Given the classes were fixed at runtime, we could not create new knowledge-classes or modify their member types in real-time.  Using over 600 knowledge-class files took a long time to compile and made a ginormous executable.  Given the number of classes involved to instantiate most entities, such as dog or cat or space-tank, it also took a while to create and destroy creatures, which did not scale well to large numbers.  While a straightforward OOP is an appealing approach to begin with, maintaining/evolving the code was not well suited to the real-world development demands of sudden/frequent design changes and complex iterations of tweaking behaviors.  After missing two milestones, we reworked our schedule to switch to the TAM approach below.

Technique:  Tuple-Arranged-Minds ( TAM )

Inspired by the admirable and rich AI legacy of functional languages like LISP, Prolog, & Scheme, we decided to implement something similar that interfaced with our existing codebase.  Back then in the mid-1990s, we simply couldn’t find an ‘open-source’ engine to plug in, so we gave each entity a red-black-tree of linked lists.  Each ‘list’ node had a knowledge-type string such as enemies, goal_location, or favorite_weapon and one or more thought-strings such as ‘robots, mosquitoes’, ‘treehouse’, or ‘shotgun, gasgun, sonic-screwdriver’.  We called these nodes a “thought-tuple” and we used them for function-style queries, such as returning ‘no’ for “Does_like( self, mosquitoes )”.  Most importantly, we could combine queries such as “Create_Match_List( my.enemies, my.favorite_weapon, my.enemies.favorite_weapon ) Sort_by Distance( enemies.location, my.location )” to find an ordered list of which enemy we should hunt next to stock up on our favorite ammo.  All of the strings used were stored per game level (a simulation-scene in a particular space and time).  We avoided storing any string duplicates by giving each entity’s thought-tuples fixed indices to strings (similar to LISP’s atoms).  On the downside, we never destroyed knowledge as reference counting was too expensive, and this resulted in some levels growing too large.  We tried building an ‘amnesia-processor’ to search all entities and reduce any unused knowledge or collapse new thought-tuples into old ones, but this simply generated weird behaviors.  To handle the rewind/fast-forward for cognitive-replay, we kept a list of offsets/deltas to each entity’s tree of thought-tuples, but this required updating the tree for each change.
Pros: TAM had tremendous flexibility in creating new knowledge.  If the designer could write it in a sentence, it could be stored for an entity.  TAM let designers edit in real time using text strings and make any sort of query they could think of.  TAM permitted real-time cloning/breeding of entities, and their behaviors diverged based on what happened to them over time.  TAM had easy life-cycle management of code (around 15 files) compared to SOOP.
Cons: It was sometimes difficult for designers to use and understand the results of their queries.  Although quite flexible, it ran terribly slowly for all but the simplest queries.  Modest queries that ran at 1 or 2 Hz for each creature, such as “Distance( target.location, my.location )”, could back up and stall the main loop when walking the thought-trees for hundreds of entities (Pentium Pro & II era).  While the thought-tuples delivered some of what we needed to simulate large evolving worlds, the red-black tree with pointers to lists of strings had too much overhead in answering queries.
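A minimal sketch of the thought-tuple storage follows, with assumed names (Atom_Table, Tam_Mind, Has_Thought) that are not from the original code.  std::map stands in for the red-black tree, strings are interned once per level as atom indices, and a query walks the list of thought atoms for a given knowledge type.

// Minimal TAM-style sketch (hypothetical names).
#include <algorithm>
#include <list>
#include <map>
#include <string>
#include <vector>

struct Atom_Table              // one per game level: interns strings as atoms
{
    std::vector<std::string> strings;

    int Intern(const std::string& s)
    {
        for (size_t i = 0; i < strings.size(); ++i)
            if (strings[i] == s) return (int)i;
        strings.push_back(s);
        return (int)strings.size() - 1;
    }
};

struct Tam_Mind                // one per entity
{
    // knowledge-type atom -> list of thought atoms ('enemies' -> 'robots', 'mosquitoes')
    std::map<int, std::list<int>> thought_tuples;

    // e.g. the core of "Does_like( self, mosquitoes )"
    bool Has_Thought(int knowledge_type, int thought) const
    {
        auto it = thought_tuples.find(knowledge_type);
        if (it == thought_tuples.end()) return false;
        return std::find(it->second.begin(), it->second.end(), thought) != it->second.end();
    }
};

Walking a tree of pointer-chasing lists like this for every query, for every entity, is exactly the overhead the Cons above describes.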

Technique:  Individual_Mind_Packages ( IMP )

IMP was an approach to collapse the string storage of TAM’s thought-tuples into pre-defined binary types for faster processing and more compact space.  Packaging everything into a linear memory block of ‘thoughts’ per entity greatly improved cache coherence.  Instead of a red-black tree that matched strings and returned linked lists, we created a hash-key for each type of knowledge and used it to look up an offset into that linear memory block, or ‘mind-package’, to retrieve the knowledge queried.
Pros: IMP translated most queries into only a few instructions, which made it quite speedy compared to TAM while retaining the same flexibility of run-time queries.  Designers really needed those run-time queries to rapidly iterate on behaviors as they made game levels.  IMP gave us fast create/destroy mechanics, as we only needed to allocate/free a chunk of memory in a pre-allocated pool.
Cons: Although we saved space by compacting thoughts into various binary types, this approach required each unique entity to allocate upfront all the possible knowledge it could ever store.  The upfront cost came from re-using the large array of hash-table offsets per entity type, and that ‘large array’ was usually larger than the per-entity knowledge itself.  This limited what an entity could learn/store, such as only 5 friends and 3 enemies, and used the same amount of space as if each entity had learned/stored everything it could.  Since all entities started off using ‘defaults’ or a blended combo (chromosomal-style mixing from two parents), most of the knowledge was being duplicated, unlike in the TAM system.  That made changing any defaults cause a large hiccup as we iterated over all entities using those values.  IMP is also hard to parallelize for large (10,000+) numbers of entities doing lots of queries per update, as we can’t lock each unit of knowledge without increasing its size with lock/unlock semantics.
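Here is a minimal sketch of the mind-package idea, using assumed names (Mind_Layout, Mind_Package, Find_Knowledge) that are not from the original code: the hash-key-to-offset table is shared per entity type, each entity owns one flat pre-sized block, and a query is just a hash lookup plus a pointer offset.

// Minimal IMP-style sketch (hypothetical names).
#include <cstdint>
#include <unordered_map>
#include <vector>

using Knowledge_Key = uint32_t;  // hash of the knowledge name

struct Mind_Layout               // shared by every entity of one type
{
    std::unordered_map<Knowledge_Key, size_t> offset_of;  // key -> byte offset
    size_t block_size = 0;
};

struct Mind_Package              // owned per entity: one cache-friendly allocation
{
    std::vector<uint8_t> block;
    explicit Mind_Package(const Mind_Layout& layout) : block(layout.block_size, 0) {}
};

// A query is a hash lookup plus an offset into the entity's block.
// (Real code would also keep offsets aligned for each stored type.)
template <typename T>
T* Find_Knowledge(const Mind_Layout& layout, Mind_Package& mind, Knowledge_Key key)
{
    auto it = layout.offset_of.find(key);
    if (it == layout.offset_of.end()) return nullptr;
    return reinterpret_cast<T*>(&mind.block[it->second]);
}

// usage: float* hunger = Find_Knowledge<float>(cat_layout, this_cat, hunger_key);

The sketch also makes the main Con visible: every entity of a type carries a block sized for everything that type could ever know, whether it knows it or not.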

Survivor: who proved best & why

Technique:  Sparse_Individual_Thought_Tables ( SITT ) 

After analyzing several game levels we realized that around two-thirds of our entity minds were still using defaults, as many entities would converge on similar thoughts or never really had an experience that changed their initial thoughts.  This led us to extract an entity’s thoughts into database-style tables for each knowledge type.  This compacted per-entity storage by using the ‘atom’ technique from TAM (and indirectly from LISP) for all of the default/inherited/trending information while still giving us unique, changing thoughts.  Now each entity has its own skip-tree of thought-atoms that reference the knowledge as well as its access/ownership traits in the event of any changes.  Cognitive replay becomes much more efficient, as previous thought values are likely to be stored in the same database table, acting like dictionary compression before we even begin to compare the thought deltas.
Pros: Having removed the large linear memory block per entity and the large hash-tables per entity type, we can efficiently run on multiple threads by using atomic compare-and-exchange to lock the database knowledge instead of the entire entity.  That means that, for a single entity at one time, there can be a physics thread updating forces, a thinking thread determining a new set of goal-priorities, a path thread planning obstacle avoidance, a communication thread interpreting (or misinterpreting) dialog, and a metrics thread tallying up the statistics that this entity remembers.
Cons: For behaviors that access a lot of different thoughts for a single entity, cache coherence is poor, as many different database tables are loaded.  SITT also requires a lengthy processing step to separate the thought-database tables for batches of ‘game levels’ or simulation scenarios.
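As a rough illustration of the per-knowledge locking mentioned in the Pros above, here is a sketch built around a hypothetical Lockable_Knowledge record (not from the original code): an atomic compare-and-exchange claims a single knowledge unit for one thread (physics, thinking, pathing, etc.) without taking a whole-entity lock.

// Minimal sketch of per-knowledge locking via compare-and-exchange (hypothetical names).
#include <atomic>
#include <cstdint>

struct Lockable_Knowledge
{
    std::atomic<uint32_t> owner_thread{0};  // 0 == unlocked
    float value = 0.0f;                     // the knowledge payload

    // thread_id must be non-zero; returns true if this thread now owns the knowledge
    bool Try_Lock(uint32_t thread_id)
    {
        uint32_t expected = 0;
        return owner_thread.compare_exchange_strong(expected, thread_id);
    }

    void Unlock()
    {
        owner_thread.store(0);
    }
};

Because the lock lives on the knowledge unit in its database table rather than on the entity, two threads only contend when they touch the same thought at the same time.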

Future: Work on ecologies with millions of creatures and see how multiple CPU cores or GPU (OpenCL?) processing cope with complicated behaviors and frequent changes to entity thoughts.

 

Survivor_Datastructs

//===
//PSEUDOCODE
//===

//THOUGHT_ACCESS
enum Thought_Access_t
{
    Thought_Access__Universal_k,
    //fixed, accessible by all,
    //generally used for math constants or universal enumerations like
    //the 6 3D cardinal directions (up/down/left/right/ahead/back),
    //names of places, or the 7 colors of the rainbow, etc.

    Thought_Access__Entity_Type_k,
    //thought that is stored specific to the "abstract description" of the entity,
    //such as all cats, or all siamese-cats, etc.
    //This is where you'd store knowledge that all cats would
    //have, such as 'love mice', 'hate dogs' or 'fear vacuum cleaners'

    Thought_Access__Shared_k,
    //thoughts that are shared amongst a group, such as 'who is the Orc leader we follow',
    //or what is our 'marching-song' or 'slogan'.
    //Any change here happens for the entire group

    Thought_Access__Unique_k,
    //Unique thoughts that can be changed for a unique individual,
    //such as favorite_food, suspicion_of_traitor_ID, or hunger_level
};

//===
//THOUGHT_SOURCE
enum Thought_Source_t
{
    Thought_Source__Instinct_t,
    //Inherited instinct
    //or taught/told by a trusted person so long ago it is equivalent to instinct

    Thought_Source__Sensed_t, //Sensory experience, as in seen, heard, tasted, etc.

    Thought_Source__Communicated_t, //Thought came from dialog or body-language communication

    Thought_Source__Thinking_t,
    //Thought came from examining existing thoughts
    //and generating new ones using code
};

//===
//KNOWLEDGE_FORM
enum Knowledge_Form_t
{
    Form_Number_t,     //Quantity
    Form_Name_t,       //Reference or ID#
    Form_Style_t,      //Quality
    Form_Collection_t, //Aggregate of more information units (deeper type)
};

//===
//KNOWLEDGE_LOCATION
struct Knowledge_Location_t
{
    Database_Table table;
    Database_Key Begin_key; //Represents a range of keys that can be used
    Database_Key End_key;
};

//===
//ENTITY_THOUGHT
//Each Entity has an array of "Thoughts" for the types of Knowledge it
//can understand
struct Entity_Thought_t
{
    Thought_Access_t access;
    Thought_Source_t source;
    Knowledge_Location_t loc;
    Percent_t Certainty_f;
    Time_t time;  //when this thought occurs, helps to interpolate between past and future thoughts
    Linked_list_t Prev_thought; //reference to previous thought
    Linked_list_t Next_thought; //reference to next thought
};

//===
//KNOWLEDGE_UNIT
struct Knowledge_Unit_t
{
    Entity_t owner;
    Knowledge_Form_t form;
    Information_Unit_t value; //int, float, unsigned_int index, etc.
};

//===
//END

How do we code what these beasts know…