Easy Collaboration with Object URLs

Original Author: Marc Flury

At DROOL, we’ve recently started using URL “links” to our game objects that we can easily share via e-mail.  It’s simple and probably not a unique solution.  But it’s been useful and feels like a feature every engine should have.  It was quick and easy to implement, but most of the time was spent working around some minor annoyances.

Our need for “Object URLs”

Brian (the DROOL artist) lives in Providence, Rhode Island, and I live in Seoul, South Korea.  Because of the distance (and 14 hour time difference), our opportunities for real-time communication via voice and video chat are limited and we often resort to e-mailing each other about development issues.

Like most game engines, our assets are divided into files, each containing multiple objects (meshes, animations, sounds, etc.).  For our first game, THUMPER, we’ve accumulated hundreds of files and thousands of objects.  During development, there could be an issue with any one of those files or objects.  For example, Brian might have a rendering problem and send me an e-mail like this:

SUP FLURY.  There is a rendering problem with bug_eyed.mesh in gameplay/resource/bugs.objlib.  I’m not sure if there’s a problem with material settings or a code issue?

Writing e-mails like this isn’t hard, but we send hundreds of e-mails to each other every month.  Even in this simple example, there’s opportunity for the sender to mistype file and object names.  And the receiver has to navigate to the appropriate file, search the file for the object, and open the object editor.  Wouldn’t it be nice to avoid this error-prone and manual labor?

Now we do, with “Object URLs” that directly reference our files/objects.  These links can be pasted into an e-mail by the sender and clicked by the receiver.  For example, Brian’s e-mail now looks like this:

SUP FLURY.  There is a rendering problem with http://drool.ws/gameplay/resource/bugs.objlib?obj=bug_eyed.mesh.  I’m not sure if there’s a problem with material settings or a code issue?

When I click that link, the DROOL editor is launched, the “bugs.objlib” file is opened, and the editor for the “bug_eyed.mesh” object is opened automatically.

Registering a Custom URL protocol

The first step is to register a custom URL protocol for your editor’s executable.  For example, the “http” protocol is associated with your web browser.  For the DROOL engine, we defined a custom “drl” protocol and associated it with our editor’s executable.  Full details for multiple operating systems are described here, but these quick steps assume you’re using Windows:

  1. Copy the text below into a text editor.
  2. Replace all the drl protocols with your own custom protocol.
  3. Replace C:\path\to\your\editor.exe with the path to your editor’s executable (don’t forget to double the backslashes).
  4. Save the text as a file with the .reg extension (e.g. custom_url.reg).
  5. Run the file via the command line or by double-clicking in Explorer.  You’ll have to click through some security warnings.
REGEDIT4

[HKEY_CLASSES_ROOT\drl]
@="URL:drl Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\drl\shell]

[HKEY_CLASSES_ROOT\drl\shell\open]

[HKEY_CLASSES_ROOT\drl\shell\open\command]
@="\"C:\\path\\to\\your\\editor.exe\" \"%1\""

Since we’re a two person team, my “deployment” method was to simply check this .reg file into source control and tell Brian to run it on his machine.  Large teams probably already have a way to deploy scripts like this to every team member’s machine.  Large teams might also have multiple editors installed on each machine (for different source branches or projects).  For brevity, I’m skipping over these issues here.

URL Format and Parsing

Now that your URL protocol is registered, you can test it by typing a link starting with your protocol (e.g. drl://test) into your web browser.  You might have to click through another security warning, but your executable should be launched and the full URL will be passed to it as a command line argument.  Now your next coding task is to parse the URL into a file name and object, open that file, and open the object editor automatically.  I’ll leave the details to you, but if you already support double-clicking your editor files in Windows Explorer, then you’ve already done 90% of the work.

My URL parsing is bare bones to minimize overhead.  I just used standard C string functions to extract the file and object name and I don’t worry about properly encoding/decoding special characters (all our file and object names are alphanumeric without spaces anyway).  But I did follow the standard URL “query string” syntax so that if I do use a legitimate URL parsing library, it will be trivial to extract the values.  You can use any format, but I prefer something that is short while still being human-readable.  Our format is simple:

drl://relative_path_to_file?obj=object_name
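For illustration, here’s a minimal sketch of that kind of parsing in plain C (the function name and fixed-size output buffers are hypothetical, not our real code):

#include <stdio.h>
#include <string.h>

/* Split "drl://path/to/file.objlib?obj=object_name" into file and object parts.
   No URL encoding/decoding -- fine as long as names are plain alphanumeric. */
void parse_object_url( const char* url, char* file, size_t file_size, char* obj, size_t obj_size )
{
    const char* path = strstr( url, "://" );
    path = path ? path + 3 : url;              /* skip the "drl://" prefix */

    const char* query = strchr( path, '?' );   /* start of the query string */
    size_t path_len = query ? (size_t)(query - path) : strlen( path );
    snprintf( file, file_size, "%.*s", (int)path_len, path );

    obj[0] = '\0';
    if( query && strncmp( query + 1, "obj=", 4 ) == 0 )
        snprintf( obj, obj_size, "%s", query + 5 );   /* the value after "obj=" */
}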

Making URL Links “Paste-able”

At this point, I thought I was finished with URL protocols, but I discovered that unlike standard http:// protocol URLs, if you paste a URL with a custom protocol (like drl://) into most e-mail clients (like Gmail), they don’t automatically get turned into “click-able” links when you send the e-mail.  Of course, you can manually make links with custom URLs into click-able links by using your e-mail app’s GUI, but that is not quick and easy enough for me.  I just want to paste the link and send it!

After some research, I learned that custom protocol URLs aren’t automatically made click-able in most e-mail clients.  So our Object URLs use the standard http:// protocol and point at our web server (as in the drool.ws link above), which redirects them to the equivalent drl:// URLs via mod_rewrite and a regular expression rule.  I recommend that you don’t do any actual parsing of your URL in your webserver, just do a simple find/replace and then redirect.  I prefer to keep all the parsing in my editor and involve the webserver as little as possible.
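For reference, the webserver side can be a single find/replace rule.  On an Apache server it might look like this (a sketch, assuming the redirect lives at the site root):

RewriteEngine On
# Turn http://drool.ws/anything into drl://anything and redirect
RewriteRule ^/?(.*)$ drl://$1 [R=302,L]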

UPDATE: As commenter F Fulin suggested below, an alternative to redirecting is to use a standard, but out-dated, protocol (e.g. gopher://) instead of a custom one.  If your e-mail client automatically highlights links using a protocol (and you don’t need to use it for anything else) you can hijack it and avoid redirecting.

More Editor Integration

Once you have this much working, everything else is gravy.  Now that we can go from an Object URL to a specific file/object in our editor, the obvious next step is to make it easy to extract Object URLs from our editor.  When you right-click objects in our editor, the context menu has a “Copy URL” option.  Selecting that copies the Object URL for a particular object to the clipboard.

Object URL copying in action

Other Applications for Object URLs

The convenience of Object URLs has already paid off in other ways.  Like most game engines, when we detect an error, we print out an error message to standard output.  Including the relevant Object URL(s) in our errors makes them more useful and actionable.  Conveniently enough for me, in the Visual Studio Output window, http:// URLs are automatically click-able.
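For example, an error line might look something like this (the error text is invented; the URL is borrowed from the example above):

fprintf( stderr, "ERROR: mesh has no material assigned: "
                 "http://drool.ws/gameplay/resource/bugs.objlib?obj=bug_eyed.mesh\n" );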

It’s typical for a large team to have a server that continuously grinds through game files and objects, potentially producing errors for each object based on certain validation criteria.  On a large project, you might have thousands of object-related errors at any one time.  From my experience, it’s challenging to manage this process.  Artists and other content creators might not appreciate the importance of these errors and it’s hard to keep the overall error count under control.  If you include Object URLs with these errors, I suspect that everyone on the team will appreciate the added convenience and will be more likely to fix their errors promptly.

This was also posted to the DROOL Development Blog.

iOS Open GL ES 2: Multiple objects at once

Original Author: Adam Martin

Recap:

  1. Part 1 – Overview of GLKit
  2. Part 2 – Drawcalls, and how OpenGL code is architected
  3. Part 3 – Vertices, Shaders and Geometry
  4. Part 4 – (additions to Part 3); preparing for Textures

…but looking back, I’m really unhappy with Part 4. Xcode5 invalidates almost 30% of it, and the remainder wasn’t practical – it was code-cleanup.

So, I’m going to try again, and do it better this time. This replaces my previous Part 4 – call it “4b”.

NB: if you’re reading this on AltDevBlog, the code-formatter is currently broken on the server. Until the ADB server is fixed, I recommend reading this (identical) post over at T-Machine.org, where the code-formatting is much better.

There’s a standalone library on GitHub, with the code from the articles as a demo app. It uses the ‘latest’ version of the code, so the early articles are quite different – but it’s an easy starting point.

Drawing multiple 2D / 3D objects

A natural way to “draw things” is to maintain a list of what you want to draw, and then – when the OS / windowing library / whatever is ready to draw – iterate over your “things”, something like:

  • void draw( CanvasLayer layer )
    • foreach( Drawable nextItem in myDrawables )
      • layer.draw( nextItem );

As explained in Part 2 … OpenGL doesn’t work that way. Instead, for each item, you need to go through the complete setup and tear-down of every configurable parameter that might affect the drawing. Under the hood, most windowing API’s do this too – but they hide it from you.

We’ll create multiple triangles, each with their own unique geometry, and display them all at once.

Multiple objects: A VAO per draw call

A VAO / VertexArrayObject:

VertexArrayObject: stores the metadata for “which VBOs are you using, what kind of data is inside them, how can a ShaderProgram read and interpret that data, etc”

We’ll start with a new class with the (by now: obvious) properties and methods:

GLK2VertexArrayObject.h

[objc]

#import <GLKit/GLKit.h>

@interface GLK2VertexArrayObject : NSObject

@property(nonatomic, readonly) GLuint glName;

@property(nonatomic,retain) NSMutableArray* VBOs;

@end

[/objc]

…and add this to the Draw call:

GLK2DrawCall.h

[objc]

#import "GLK2VertexArrayObject.h"

@property(nonatomic,retain) GLK2VertexArrayObject* VAO;

[/objc]

We upgrade our rendering call to actively switch between the VAO’s on a per-draw-call basis:

ViewController.m

[objc]

-(void) renderSingleDrawCall:(GLK2DrawCall*) drawCall
{
	/** Bind the VAO for this draw-call, so the GPU reads the right vertex data */
	if( drawCall.VAO != nil )
		glBindVertexArrayOES( drawCall.VAO.glName );
	//else PROBLEM: unbinding causes us to lose a texture unexpectedly, I don't know why yet...
	//glBindVertexArrayOES( 0 /** means "none" */ );

	glDrawArrays( GL_TRIANGLES, 0, 3 );
}

[/objc]

VBOs revisited; VBO vs. BO

I deliberately avoided going into detail on VAO (Vertex Array Objects) vs. VBO (Vertex Buffer Objects) until now.

Previously, I said:

  1. A VertexBufferObject:
    1. …is a plain BufferObject that we’ve filled with raw data for describing Vertices (i.e.: for each Vertex, this buffer has values of one or more ‘attribute’s)
    2. Each 3D object will need one or more VBO’s.
    3. When we do a Draw call, before drawing … we’ll have to “select” the set of VBO’s for the object we want to draw.
  2. A VertexArrayObject:
    1. …is a GPU-side thing (or “object”) that holds the state for an array-of-vertices
    2. It records info on how to “interpret” the data we uploaded (in our VBO’s) so that it knows, for each vertex, which bits/bytes/offset in the data correspond to the attribute value (in our Shaders) for that vertex

Note that “VAO” and “VBO” are independent: you can have multiple VAO’s sharing 1 VBO. You can have 1 VAO using multiple VBO’s. And any combination of many-to-many.

But a Draw call only ever uses a single VAO. A single VAO is mapping “a bunch of metadata, plus a big chunk of VRAM on the GPU … to a single draw-call”.

It’s important to understand that a VBO “is” a BO. Everything you can do with a VBO you can also do with any BO. We give it a different name because to us, as programmers, it’s easier to think about that way. There is one caveat: the GPU is allowed to do under-the-hood optimizations based on what you first use a BO for. In practice: you’ll rarely need to re-use a buffer for a different purpose, so don’t worry about it.

With a generic BO (BufferObject), some of the method calls in OpenGL will require a “type” parameter. Whenever you pass-in the type “GL_ARRAY_BUFFER”, you have told OpenGL to use that BO “as a VBO”; it has no special meaning beyond that.

Vertex Buffer Objects: why plural?

A BufferObject is simply a big array stored on the GPU, so that the GPU doesn’t have to keep asking for the data from system-RAM. RAM -> GPU transfer speeds are much slower than GPU-local-RAM (known as VRAM) -> GPU upload speeds.

As soon as you have BufferObjects, your GPU has to start doing memory-management on them. GPU’s are OK at this, but not great – it’s a complex problem and requires a lot of code and theory at the level of “building a new Operating System”.

With poor hardware you can get noticeable speed gains by having “only one” VBO for your entire app and doing your own memory-management on its contents. It’s messy in code / debugging terms (no isolation of data), but sometimes worth it.

On the flip-side … GPU’s have lots of gotchas to do with “replacing” the data inside an existing BO/VBO. OpenGL is hiding the multi-threaded reality from you – but when you write to the GPU’s local VRAM, from CPU, it’s easy to get “blocked” waiting for the render threads to complete. This is a particular problem on PVR, as used in all iOS devices.

For instance, Imagination (the PowerVR manufacturer) has a blog post on avoiding massive performance drops when writing to a VBO, by using a couple of VBO’s and swapping between them on alternate frames.  The PVR chip has to wait for the “TA” (Tile Accelerator) phase to finish before it can render, stalling the process.
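In pseudo-Objective-C, the swapping idea looks something like this (all the names here are invented for illustration):

[objc]

// Write into one VBO while the GPU may still be rendering from the other,
// then swap the two roles on every frame.
GLK2BufferObject* vboForWriting   = (self.frameCounter % 2 == 0) ? self.vboA : self.vboB;
GLK2BufferObject* vboForRendering = (self.frameCounter % 2 == 0) ? self.vboB : self.vboA;

// ... upload this frame's new vertex data into vboForWriting ...
// ... issue this frame's draw-calls reading from vboForRendering ...

[/objc]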

This affects dynamic data in your app – but also simple stuff like running out of memory: if you only have one VBO, you’re screwed. You can’t “unload a bit of it to make more room” – a VBO is, by definition, all-or-nothing. You have to dump the whole thing, then re-upload a subset of it.

Taken all together … the sweet-spot for OpenGL ES 2 on iOS is somewhere around “slightly more than 1 VBO dedicated to each VAO”.

Refactoring our old code into a new “VBO class”

We’re going to name this “BufferObject” instead of “VertexBufferObject”, since the “vertex” part is merely a property that could be set or unset on any instance:

GLK2BufferObject.h

[objc]

@property(nonatomic, readonly) GLuint glName;

@property(nonatomic) GLenum glBufferType;

[/objc]

We have our standard “glName” (everything has one), and we have a glBufferType, which is set to GL_ARRAY_BUFFER whenever we want the BO to become a VBO.
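The real constructor is in the GitHub source; presumably it amounts to something like this sketch (I’m assuming init calls glGenBuffers to fill in glName):

[objc]

+(GLK2BufferObject*) vertexBufferObject
{
	GLK2BufferObject* newBO = [[[GLK2BufferObject alloc] init] autorelease]; // init is assumed to glGenBuffers the glName

	newBO.glBufferType = GL_ARRAY_BUFFER; // this is the only thing that makes a BO "a VBO"

	return newBO;
}

[/objc]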

To refactor, start with a recap of our previous code, which I glossed over at the time:

(from previous blog post)

glGenBuffers( 1, &VBOName );

glBindBuffer(GL_ARRAY_BUFFER, VBOName );

glBufferData(GL_ARRAY_BUFFER, 3 * sizeof( GLKVector3 ), cpuBuffer, GL_DYNAMIC_DRAW);

The first two lines create a BO/VBO, and store its name. From now on, we’ll automatically set the “GL_ARRAY_BUFFER” argument using our self.glBufferType. Looking at that last line, the second-to-last argument is obviously “the array of data we created on the CPU, and want to upload to the GPU”.

… but what’s the second argument? A hardcoded “3 * (something)”?

(Ouch – very bad practice, hardcoding a digit with no explanation. Bad me :(.)

(glBufferData’s 2nd argument): The total amount of RAM the GPU needs to allocate … to store this array you’re about to upload

Three definitions of “size”

In our case, we were uploading 3 vertices (one for each corner of a triangle), and each vertex was defined using GLKVector3. The C function “sizeof” measures “how many bytes does a particular type use-up when in memory?”.
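Concretely, for our one-triangle buffer:

[objc]

size_t bytesPerVertex = sizeof( GLKVector3 ); // 3 floats x 4 bytes each = 12 bytes

size_t bufferSize = 3 * bytesPerVertex; // 36 bytes for the whole triangle

[/objc]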

But we’re not done yet… when we later told OpenGL the format of the data inside the VBO, we used the line:

(from Part 3)

glVertexAttribPointer( attribute.glLocation, 3, GL_FLOAT, GL_FALSE, 0, 0);

The 2nd argument there is also called “size” – but it’s a different number.

And, finally, when we issue the Draw call, we use the number 3 again, for a 3rd kind of ‘size’:

(from Part 3)

glDrawArrays( GL_TRIANGLES, 0, 3); // this 3 is NOT THE SAME AS PREVIOUS 3 !

  1. glBufferData: measures size in “number of bytes needed to store one Attribute-value”
  2. glVertexAttribPointer: measures size in “number of floats required to store one Attribute-value”
  3. glDrawArrays: measures size in “number of vertices to draw, out of the ones in the VBO” (you can draw fewer than all of them)

The final one – glDrawArrays (how many vertices to “draw”) – we’ll store in the GLK2DrawCall class itself; the rest needs to be associated with the VBO itself, and we have to make sure we use the right kind of “size” at each moment.

Multiple Attributes per VBO / Interleaved Vertex data

As part of configuring your Draw call, you use glVertexAttribPointer to tell OpenGL:

Use the data in BufferObject (X), interpreted according to rule (Y), to provide a value of this attribute for EACH vertex in the object

Earlier, I stated that you can put “all” your data for a draw-call into a single VBO. So far, we filled a single VBO with values for a single Attribute. There are many cases where that’s the right approach (one VBO contains data for one attribute) – but your starting point for a new 3D object is to cram all data, for all its Attributes, into one VBO.


(from Apple’s Techniques for Working with Vertex Data)

How?

It’s back to that glVertexAttribPointer method again:

glVertexAttribPointer( attribute.glLocation, 3, GL_FLOAT, GL_FALSE, 0 WAT?, 0 WAT (x2)? );

This method is IMHO one of the worst-designed ones in the OpenGL API.  When newbie OpenGL programmers screw-up and can’t work out what’s gone wrong, it’s usually a misunderstanding of the arguments of this method.  This is often made worse because tutorials put “0” and “GL_FALSE” into the arguments, and don’t explain why.

One by one, using OpenGL’s glVertexAttribPointer docs, the arguments are:

  1. “index of the generic vertex attribute to be modified.” — i.e. the GLK2Attribute.glLocation we fetched from the ShaderProgram after linking
  2. “number of components per generic vertex attribute. Must be 1, 2, 3, 4” — i.e. if your shader source code had this attribute as a “vec4”, you MUST set this to “4”. If it had a “vec2”, it would be “2”. For a simple float: “1”.
  3. “the data type of each component in the array” — Apple’s GLKit uses floats for everything, so unless you start optimizing data-formats, this will always be GL_FLOAT
  4. “specifies whether fixed-point data values should be normalized (GL_TRUE) or converted directly as fixed-point values (GL_FALSE)” — in this case, they mean “are you weird, you’re sending me the wrong data, and you want all your values to be converted into the range (0…1)?, or shall I just use the data you gave me?”. Hence: always GL_FALSE.
  5. “the byte offset between consecutive generic vertex attributes” — Aha. Interesting.
  6. “offset of the first component of the first generic vertex attribute in the array” — Hmm. Interesting.

If you only have one “vertex attribute” in the array … then the “offset between” them will be the size of the attribute in bytes (i.e. read one attribute, then move ahead “the size of one attribute”, and read the next). But OpenGL allows a pointless optimization here which only confuses people: if you provide “0”, it magically works in the special case of “only one” attribute.

…and with only one Attribute: your “offset” will be “0” — i.e. “start at the start”.

It gets interesting with more than one. If you have e.g. two vertex attributes in your array of data:

  • “offset between” — each one has to read ahead “the size of both attributes”: i.e. add together their TOTAL size in bytes
  • “offset of the first component” — the first Attribute starts at the start: 0. The second Attribute has to skip ahead a little to find its first value: i.e. add together the sizes of all the Attributes that come before it in the array.

e.g. if your attributes were a vec4 (4 floats, of 4 bytes each) and a vec2 (2 floats, of 4 bytes each):

  • Offset between: (4×4 + 2×4) = 24
  • Offset of first:
    • …for first Attribute: 0
    • …for second Attribute: (4×4) = 16
    • …(a third Attribute would be: 16 + (2×4) = 24)
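In code, the two glVertexAttribPointer calls for that vec4 + vec2 example would look something like this (the two location variables are placeholders for values fetched from your ShaderProgram):

[objc]

GLsizei stride = (GLsizei) (4*sizeof(GLfloat) + 2*sizeof(GLfloat)); // 16 + 8 = 24 bytes per vertex

// the vec4 attribute starts at byte 0 of each vertex...
glVertexAttribPointer( vec4Location, 4, GL_FLOAT, GL_FALSE, stride, (const GLvoid*) 0 );

// ...and the vec2 attribute starts right after it, at byte 16
glVertexAttribPointer( vec2Location, 2, GL_FLOAT, GL_FALSE, stride, (const GLvoid*) (4*sizeof(GLfloat)) );

[/objc]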

The easy way to encapsulate all this info: A “buffer format”

GLK2BufferFormat.h:

[objc]

@property(nonatomic) int numberOfSubTypes;

[/objc]

We store the “bytes per item” and the “floats per item” into a pair of arrays, and access them by index:

GLK2BufferFormat.m:

[objc]

@interface GLK2BufferFormat()
@property(nonatomic,retain) NSMutableArray* numFloatsPerItem, *bytesPerItem;
@end

@implementation GLK2BufferFormat

-(GLuint)sizePerItemInFloatsForSubTypeIndex:(int)index
{
	/** Apple currently defines GLuint as "unsigned int" */
	return [((NSNumber*)[self.numFloatsPerItem objectAtIndex:index]) unsignedIntValue];
}

-(GLsizeiptr)bytesPerItemForSubTypeIndex:(int)index
{
	/** Apple currently defines GLsizeiptr as "long" */
	return [((NSNumber*)[self.bytesPerItem objectAtIndex:index]) longValue];
}

@end

[/objc]

Note how you can auto-box values into those arrays using Apple’s new “@( … )” Objective-C autoboxing syntax – but when you extract them, you have to explicitly convert them back to the correct primitive types.  NSNumber does the magic for us in both cases.
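For example, reusing the arrays above:

[objc]

// Boxing a primitive with @( ... ), then unboxing it via an NSNumber accessor:
[self.bytesPerItem addObject: @( sizeof(GLKVector3) )];

GLsizeiptr bytes = [((NSNumber*)[self.bytesPerItem lastObject]) longValue];

[/objc]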

Each BufferObject will now need to have a “current Buffer Format”, and each time it changes we’ll add up the sizes of all the items and cache that:

GLK2BufferObject.h

[objc]

@property(nonatomic,retain) GLK2BufferFormat* currentFormat;

@property(nonatomic,readonly) GLsizeiptr totalBytesPerItem;

[/objc]

GLK2BufferObject.m

[objc]

-(void)setCurrentFormat:(GLK2BufferFormat *)newValue
{
	[_currentFormat release];
	_currentFormat = newValue;
	[_currentFormat retain];

	_totalBytesPerItem = 0; // readonly property, so we write the ivar directly
	for( int i = 0; i < newValue.numberOfSubTypes; i++ )
	{
		GLsizeiptr bytesPerItem = [newValue bytesPerItemForSubTypeIndex:i];
		NSAssert( bytesPerItem > 0, @"Invalid GLK2BufferFormat" );

		_totalBytesPerItem += bytesPerItem;
	}
}

[/objc]

We can add the last bit of VBO/BO code into our buffer-object, using the buffer format etc:

GLK2BufferObject.m

[objc]

-(void) upload:(void *) dataArray numItems:(int) count usageHint:(GLenum) usage withNewFormat:(GLK2BufferFormat*) newFormat
{
	self.currentFormat = newFormat; // triggers the totalBytesPerItem recalculation above

	glBindBuffer( self.glBufferType, self.glName );
	glBufferData( self.glBufferType, count * self.totalBytesPerItem, dataArray, usage );
}

[/objc]

VBO is now done, yay! But we still have to finish the VAO – that glVertexAttribPointer call needs cleaning up.

Multiple Attributes per VBO: configuring the VertexArrayObject

Taking all the above, and putting it together, we get a single method on the VAO that allows us to:

  • Provide:
    1. a set of Attributes
    2. an array of data (e.g. filled with GLKVector3’s)
    3. a buffer-format (with one entry per Attribute, saying how many bytes it is, and how many floats)
    4. the number of vertices in the array
  • …and have the VAO do for us:
    1. Create a VBO on the GPU
    2. Upload the data to the new VBO
    3. Store inside itself (the VAO) the mapping from “this VBO” to “what the shader expects”
    4. Handle data for “one attribute per VBO” equally well as “multiple attributes per VBO”

There’s quite a bit of code here, but most of it is housekeeping, included for clarity – the concepts are nothing new:

GLK2VertexArrayObject.m

[objc]

-(GLK2BufferObject*) addVBOForAttributes:(NSArray*) targetAttributes filledWithData:(void*) data inFormat:(GLK2BufferFormat*) bFormat numVertices:(int) numDataItems updateFrequency:(GLK2BufferObjectFrequency) freq
{
	/** Create a VBO on the GPU, to store data */
	GLK2BufferObject* newVBO = [GLK2BufferObject vertexBufferObject];
	[self.VBOs addObject:newVBO]; // so we can auto-release it when this class deallocs

	/** Send the vertex data to the new VBO */
	[newVBO upload:data numItems:numDataItems usageHint:[newVBO getUsageEnumValueFromFrequency:freq nature:GLK2BufferObjectNatureDraw] withNewFormat:bFormat];

	/** Configure the VAO (state) */
	glBindVertexArrayOES( self.glName );

	GLsizeiptr bytesForPreviousItems = 0;
	int i = -1;
	for( GLK2Attribute* targetAttribute in targetAttributes )
	{
		i++;

		GLuint numFloatsForItem = [newVBO.currentFormat sizePerItemInFloatsForSubTypeIndex:i];
		GLsizeiptr bytesPerItem = [newVBO.currentFormat bytesPerItemForSubTypeIndex:i];

		glEnableVertexAttribArray( targetAttribute.glLocation );
		glVertexAttribPointer( targetAttribute.glLocation, numFloatsForItem, GL_FLOAT, GL_FALSE, newVBO.totalBytesPerItem, (const GLvoid*) bytesForPreviousItems); // cast needed because the GL API is overloaded too much in C

		bytesForPreviousItems += bytesPerItem;
	}

	glBindVertexArrayOES(0); //unbind the vertex array, as a precaution against accidental changes by other classes

	return newVBO;
}

[/objc]

The only special item here is “usage”. Previously, I used the value “GL_DYNAMIC_DRAW”, which doesn’t do anything specific, but warns OpenGL that we might choose to re-upload the contents of this buffer at some point in the future. More correctly, you have a bunch of different options for this “hint” – if you look at the full source on GitHub, you’ll see a convenience method and two typedef’s that handle this for you, and explain the different options.

(but “uploading to a live BufferObject” is a complex topic of its own, and I’m not going into any further detail right now)

Source for: GLK2BufferFormat.h and GLK2BufferFormat.m

Source for: GLK2BufferObject.h and GLK2BufferObject.m

Source for: GLK2VertexArrayObject.h and GLK2VertexArrayObject.m

Gotcha: The magic of OpenGL shader type-conversion

This is also a great time to point-out some sleight-of-hand I did last time.

In our source-code for the Shader, I declared our attribute as:

attribute vec4 position;

…and when I declared the data on CPU that we uploaded, to fill-out that attribute, I did:

GLKVector3 cpuBuffer[] =
{
	GLKVector3Make(-1,-1, z),
	/* ... the other two corners ... */
};

Anyone with sharp eyes will notice that I uploaded “vector3” (data in the form: x,y,z) to an attribute of type “vector4” (data in the form: x,y,z,w). And nothing went wrong. Huh?

The secret here is two fold:

  1. OpenGL’s shader-language is forgiving and smart; if you give it a vec3 where it needs a vec4, it will up-convert automatically
  2. We told all of OpenGL “outside” the shader-program: this buffer contains Vector3’s! Each one has 3 floats! Note: That’s THREE! Not FOUR!

…otherwise, I’d have had to define our triangle using 4 co-ordinates – and what the heck is the correct value of w anyway? Better not to even go there (for now). All of this “just works” thanks to the code we’ve written above, in this post. We explicitly tell OpenGL how to interpret the contents of a BufferObject even though the data may not be in the format the shader is expecting – and then OpenGL handles the rest for us automagically.

Those “multiple” triangles…

First, we’ll re-write ViewController to use all the new classes above:

ViewController.m

[objc]

-(NSMutableArray*) createAllDrawCalls
{
	/* ... unchanged setup code ... */

	draw1Triangle.VAO = [[GLK2VertexArrayObject new] autorelease];
	[draw1Triangle.VAO addVBOForAttribute:attribute filledWithData:cpuBuffer bytesPerArrayElement:sizeof(GLKVector3) arrayLength: draw1Triangle.numVerticesToDraw];

	/** ... Finally: add the draw-call into the list of draw-calls we're rendering as a "frame" on-screen */
	[result addObject: draw1Triangle];
}

[/objc]

Hang on – how come so little has changed?

This is the purpose of VAO’s: they encapsulate (at the OpenGL / GPU level) all the data surrounding a bunch of VBO’s.  That means “the raw values of the Attributes”, but also “the metadata about the VBO’s”.  By modifying and re-writing and refactoring our VBO/BO/BufferFormat code, we have no effect on the rest of the app; only the VAO code needs to change.

To add some triangles, we’ll simply “add more draw calls” – and let our existing rendering code automatically handle everything else. Replace the code for creating the “draw1Triangle” object with this:

[objc]

GLK2ShaderProgram* sharedProgramForBlueTriangles = [GLK2ShaderProgram shaderProgramFromVertexFilename:@"VertexPositionUnprojected" fragmentFilename:@"FragmentColourOnly"];

for( int i=0; i<4; i++ )
{
	GLK2DrawCall* draw1Triangle = [[GLK2DrawCall new] autorelease];

	/** ... Upload a program */
	draw1Triangle.shaderProgram = sharedProgramForBlueTriangles;
	glUseProgram( draw1Triangle.shaderProgram.glName );

	GLK2Attribute* attribute = [draw1Triangle.shaderProgram attributeNamed:@"position"]; // will fail if you haven't called glUseProgram yet

	/** ... Make some geometry */
	GLfloat z = -0.5; // must be more than -1 * zNear, and ABS() less than zFar
	draw1Triangle.numVerticesToDraw = 3;
	GLKVector3 cpuBuffer[3] =
	{
		GLKVector3Make(-1   + i%2, -1 + i/2, z),
		GLKVector3Make(-0.5 + i%2,  0 + i/2, z),
		GLKVector3Make( 0   + i%2, -1 + i/2, z)
	};

	/** ... create a VAO to hold a VBO, and upload the geometry into that new VBO */
	draw1Triangle.VAO = [[GLK2VertexArrayObject new] autorelease];
	[draw1Triangle.VAO addVBOForAttribute:attribute filledWithData:cpuBuffer bytesPerArrayElement:sizeof(GLKVector3) arrayLength: draw1Triangle.numVerticesToDraw];

	/** ... Finally: add into the list of draw-calls we're rendering as a "frame" on-screen */
	[result addObject: draw1Triangle];
}

[/objc]

End of part 4 (b)

Next time – I promise – will be all about Textures and Texture Mapping. No … really!

Honor Thy Player’s Time

Original Author: Ben Serviss

Chrono Trigger clock

It’s a glorious Sunday morning. You stir and stretch in your bed, a mess of wonderfully soft sheets and covers. The whole day is open. Will you laze around for a bit more? Get up and go for a walk, or to the gym? Make plans to meet up with friends? Whatever you decide, this moment is the birthplace of the day’s possibilities, when just thinking of the wide expanse of possibility makes you smile.

Jump forward 100 years.  Unless dramatic advances in cryogenics are discovered, it’s highly likely you’ll no longer be among the living, and your lazy Sunday morning from so long ago will now be one fixed event of the myriad fixed events known as your life.  However insignificant your choice may have been, your decision on how you used some of the finite time at your disposal can never be unmade – for better or for worse.

Now, say you decided to play a game that Sunday morning. Whether you got stuck in a 10-minute long unskippable cutscene, a stultifying yet mandatory mini-game, a frustrating sequence replete with long loading times, or even if the game was exactly what you wanted, your experience is logged as another fixed life event.

In a world of seemingly unlimited free-to-play games, browser games, Steam sales and trial versions, players will never be bereft of games to play. Choosing to honor your players – and acknowledging that they have chosen to share some of their limited time playing your game – can be a surprisingly effective way to help cut out filler and make a sharper, more rewarding play experience.

Player Time: The Rarest Resource

When video games were still an oddity favored by teenagers and pre-adolescents, the higher the ratio of gameplay hours to cost, the better.  Now, with tons of games constantly coming out on all sorts of platforms, there are more games than ever vying for our time.  Suddenly, the prospect of a 40-hour game doesn’t seem as appealing as it once did.

Choosing to consciously honor the player’s time and investment aligns well with recent trends of aging gamers, who have less and less free time to devote to their favorite hobby.

So how do you go about doing this? In theory, the steps are simple.

→ All gameplay must serve the aim of the game. Most games strive to create fun or joyful scenarios for their players. These games would be best served by adhering to their mandate for the entire experience – no excuses. Just as Nintendo famously focuses on making the simple act of moving your character entertaining, everything the player does in your game should serve its purpose.

Steel Battalion Controller

The Steel Battalion controller. Not pictured: Foot pedal controls.

The Xbox mech combat simulator Steel Battalion was notorious for its mammoth real-life controller, with over 40 buttons to manage, and an involved ‘start-up’ sequence that required manipulating its buttons and switches in a precise order before you could even take a step.  This sequence may seem like it arbitrarily takes time away from the player, but in fact this only helped to reinforce the game’s purpose – to simulate what it actually would be like to pilot a giant robot of war.

→ All gameplay must build towards the narrative theme. This only applies to games where narrative and story are a focus. If you’re trying to craft a compelling story around your game, does it really make sense to have one set of rules that apply during gameplay, and another that apply only during ‘story sequences’?

For example, if you spend the majority of a game killing faceless enemies, only to have your character mortified when faced with a corpse in a cutscene, would you realistically think that makes any sense? The fact that this double standard is still the norm in games is a huge reason why an overwhelming majority of game stories are looked upon as inferior to other media. By doing this, you compromise the story you’re trying to tell, and weaken the overall power of the game.

→ Don’t show how good you are at making other things. Cutscenes that go on for way too long, cutscenes that are unskippable, cutscenes that have little to do with the plot – this kind of indulgence is often more fun for the creator than the player.

While there are exceptions, and some designers like Hideo Kojima are celebrated for their signature, if meandering, cinematics, unexpected ones deliver a clear message to the player: Keep waiting, and maybe you’ll get to play later.  This can’t help but be disappointing to someone who entered your game for the purpose of actively playing, not passively viewing.

The cinematics for the summon spells in Final Fantasy VIII caught flak for being unusually long as well as unskippable.

This also applies to anything not in the game’s main field of expertise. Shallow mini-games and tacked-on puzzle elements only detract from the central gameplay promise your players came to fulfill.

→ Avoid player downtime.  Whenever the player is ready and willing to play but cannot because the game is getting in the way, that is active player downtime.  Routine loading times generally can’t be helped, but downtime can also sneak up on you via ill-conceived respawn timers, cooldowns, failure-state cutscenes and un-optimized forced loading screens.

Whenever there is a way to get the player back to active play as soon as possible that is consistent with the narrative, appropriate difficulty balancing and technical constraints, take it.

Honor Thy Lifetime

Of course, this concept applies to everything else in your life – especially interactions with others. Even if you’re in the middle of an interaction that is banal or routine, remember that it is always a remarkable one because it is the one that is happening right now.

As long as someone plays it, the same will be true of your game – even if it’s a modest indie project you made on your own just for fun. Even if you don’t make it obvious, when you respect the player’s time, they’ll know.

OpenGL ES 2: debugging, and improvements to VAO, VBO

Original Author: Adam Martin

UPDATE: This post sucks; it has some useful bits about setting up a breakpoint in the Xcode debugger – but apart from that: I recommend skipping it and going straight to Part 4b instead, which explains things much better.

This is Part 4, and explains how to debug in OpenGL, as well as improving some of the reusable code we’ve been using (Part 3 covered Geometry).

Last time, I said we’d go straight to Textures – but I realised we need a quick TIME OUT! to cover some code-cleanup, and explain in detail some bits I glossed-over previously. This post will be a short one, I promise – but it’s the kind of stuff you’ll probably want to bookmark and come back to later, whenever you get stuck on other parts of your OpenGL code.

NB: if you’re reading this on AltDevBlog, the code-formatter is currently broken on the server. Until the ADB server is fixed, I recommend reading this (identical) post over at T-Machine.org, where the code-formatting is much better.

Cleanup: VAOs, VBOs, and Draw calls

In the previous part, I deliberately avoided going into detail on VAO (Vertex Array Objects) vs. VBO (Vertex Buffer Objects) – it’s a confusing topic, and (as I demonstrated) you only need 5 lines of code in total to use them correctly! Most of the tutorials I’ve read on GL ES 2 were simply … wrong … when it came to using VAO/VBO. Fortunately, I had enough experience of Desktop GL to skip around that – and I get the impression a lot of GL programmers do the same.

Let’s get this clear, and correct…

To recap, I said last time:

  1. A VertexBufferObject:
    1. …is a plain BufferObject that we’ve filled with raw data for describing Vertices (i.e.: for each Vertex, this buffer has values of one or more ‘attribute’s)
    2. Each 3D object will need one or more VBO’s.
    3. When we do a Draw call, before drawing … we’ll have to “select” the set of VBO’s for the object we want to draw.
  2. A VertexArrayObject:
    1. …is a GPU-side thing (or “object”) that holds the state for an array-of-vertices
    2. It records info on how to “interpret” the data we uploaded (in our VBO’s) so that it knows, for each vertex, which bits/bytes/offset in the data correspond to the attribute value (in our Shaders) for that vertex

Vertex Buffer Objects: identical to any other BufferObject

It’s important to understand that a VBO is a BO, and there’s nothing special or magical about it: everything you can do with a VBO, you can do with any BO. It gets given a different name simply because – at a few key points – you need to tell OpenGL “interpret the data inside this BO as if it’s vertex-attributes … rather than (something else)”. In practice, all that means is that:

Take any BO (BufferObject): every method call in OpenGL that uses it will require a “type” parameter. Whenever you pass in the type “GL_ARRAY_BUFFER”, you have told OpenGL to use that BO as a VBO. That’s all that VBO means.

…the hardware may also (perhaps; it’s up to the manufacturers) do some behind-the-scenes optimization, because you’ve hinted that a particular BO is a VBO – but it’s not required.

Vertex Buffer Objects: why plural?

In our previous example, we had only one VBO. It contained only one kind of vertex-attribute (the “position” attribute). We used it in exactly one draw call, for only one 3D object.

A BufferObject is simply a big array stored on the GPU, so that the GPU doesn’t have to keep asking for the data from system-RAM. RAM -> GPU transfer speeds are 10x slower than GPU-local-RAM (known as VRAM) -> GPU upload speeds.

So, as soon as you have any BufferObjects, your GPU has to start doing memory-management on them. It has its own on-board caches (just like a CPU), and it has its own invisible system that intelligently pre-fetches data from your BufferObjects (just like a CPU does). This raises the question:

What’s the efficient way to use BufferObjects, so that the GPU has to do the least amount of shuffling memory around, and can maximize the benefit of its on-board caches?

The short answer is:

Create one single VBO for your entire app, upload all your data (geometry, shader-program variables, everything), and write your shaders and draw-calls to use whichever subsets of that VBO apply to them. Never change any data.

OpenGL ES 2 doesn’t fully support that usage: some of the features necessary to put “everything” into one VBO are missing. Also: if you start to get low on spare memory and you only have one VBO, you’re screwed. You can’t “unload a bit of it to make more room” – a VBO is, by definition, all-or-nothing.

How do Draw calls relate to VBO’s?

This is very important. When you make a Draw call, you use glVertexAttribPointer to tell OpenGL:

“use the data in BufferObject (X), interpreted according to rule (Y), to provide a value of this attribute for EACH vertex in the object”

…a Draw call has to take the values of a given attribute all at once from a single VBO. Incidentally, this is partly why I made the very first blog post teach you about Draw calls – they are the natural atomic unit in OpenGL, and life is much easier if you build your source-code around that assumption.

So, bearing in mind the previous point about wanting to load/unload VBOs at different times … with GL ES 2, you divide up your VBO’s in two key ways, and stick to one key rule:

  1. Any data that might need to be changed while the program is running … gets its own VBO
  2. Any data that is needed for a specific draw-call, but not others … gets its own VBO
  3. RULE: The smallest chunk of data that goes in a VBO is “the attribute values for one attribute … for every vertex in an object”

…you can have the values for more than one Attribute inside a single VBO – but it has to cover all the vertices, for each Attribute it contains.

A simple VBO class (only allows one Attribute per VBO)

For highest performance, you normally want to put multiple Attributes into a single VBO … but there are many occasions where you’ll only use 1:1, so let’s start there.

GLK2BufferObject.h

[objc]

@property(nonatomic, readonly) GLuint glName;

@property(nonatomic) GLenum glBufferType;

@property(nonatomic) GLsizeiptr bytesPerItem;

@property(nonatomic,readonly) GLuint sizePerItemInFloats;

-(GLenum) getUsageEnumValueFromFrequency:(GLK2BufferObjectFrequency) frequency nature:(GLK2BufferObjectNature) nature;

-(void) upload:(void *) dataArray numItems:(int) count usageHint:(GLenum) usage;

@end

[/objc]

The first two properties are fairly obvious. We have our standard “glName” (everything has one), and we have a glBufferType, which is set to GL_ARRAY_BUFFER whenever we want the BO to become a VBO.

To understand the next part, we need to revisit the 3 quick-n-dirty lines we used in the previous article:

(from previous blog post)

glGenBuffers( 1, &VBOName );

glBindBuffer(GL_ARRAY_BUFFER, VBOName );

glBufferData(GL_ARRAY_BUFFER, 3 * sizeof( GLKVector3 ), cpuBuffer, GL_DYNAMIC_DRAW);

…the first two lines are simply creating a BO/VBO, and storing its name. And we’ll be able to automatically supply the “GL_ARRAY_BUFFER” argument from now on, of course. Looking at that last line, the second-to-last argument is “the array of data we created on the CPU, and want to upload to the GPU” … but what’s the second argument? A hardcoded “3 * (something)”? Ouch – very bad practice, hardcoding a digit with no explanation. Bad coder!

glBufferData requires, as its second argument:

(2nd argument): The total amount of RAM I need to allocate on the GPU … to store this array you’re about to upload

In our case, we were uploading 3 vertices (one for each corner of a triangle), and each vertex was defined using GLKVector3. The C function “sizeof” is a very useful one that measures “how many bytes does a particular type use-up when in memory?”.

So, for our GLK2BufferObject class to automatically run glBufferData calls in future, we need to know how much RAM each attribute-value occupies:

[objc]

@property(nonatomic) GLsizeiptr bytesPerItem;

[/objc]

But, when we later told OpenGL the format of the data inside the VBO, we used the line:

(from previous blog post)

glVertexAttribPointer( attribute.glLocation, 3, GL_FLOAT, GL_FALSE, 0, 0);

…and if you read the OpenGL method-docs, you’d see that the 2nd argument there is also called “size” – but we used a completely different number!

And, finally, when we issue the Draw call, we use the number 3 again, for a 3rd kind of ‘size’:

(from previous blog post)

glDrawArrays( GL_TRIANGLES, 0, 3); // this 3 is NOT THE SAME AS PREVIOUS 3 !

WTF? Three definitions of “size” – O, RLY?

Ya, RLY.

  1. glBufferData: measures size in “number of bytes needed to store one Attribute-value”
  2. glVertexAttribPointer: measures size in “number of floats required to store one Attribute-value”
  3. glDrawArrays: measures size in “number of vertices to draw, out of the ones in the VBO” (you can draw fewer than all of them)

For the final one – glDrawArrays – we’ll store that data (how many vertices to “draw”) in the GLK2DrawCall class itself. But we’ll need to store the info for glVertexAttribPointer inside each VBO:

[objc]

@property(nonatomic,readonly) GLuint sizePerItemInFloats;

[/objc]

Refactoring the old “glBufferData” call

Now we can implement GLK2BufferObject.m, and remove the hard-coded numbers from our previous source code:

GLK2BufferObject.m:

[objc]

-(void) upload:(void *) dataArray numItems:(int) count usageHint:(GLenum) usage
{
	NSAssert(self.bytesPerItem > 0 , @"Can't call this method until you've configured a data-format for the buffer by setting self.bytesPerItem");
	NSAssert(self.glBufferType > 0 , @"Can't call this method until you've configured a GL type ('purpose') for the buffer by setting self.glBufferType");

	glBindBuffer( self.glBufferType, self.glName );
	glBufferData( self.glBufferType, count * self.bytesPerItem, dataArray, usage);
}

[/objc]

The only special item here is “usage”. Previously, I used the value “GL_DYNAMIC_DRAW”, which doesn’t do anything specific, but warns OpenGL that we might choose to re-upload the contents of this buffer at some point in the future. More correctly, you have a bunch of different options for this “hint” – if you look at the full source on GitHub, you’ll see a convenience method and two typedef’s that handle this for you, and explain the different options.

Source for: GLK2BufferObject.h and GLK2BufferObject.m

What’s a VAO again?

A VAO/VertexArrayObject:

VertexArrayObject: stores the metadata for “which VBOs are you using, what kind of data is inside them, how can a ShaderProgram read and interpret that data, etc”

We’ll start with a new class with the (by now: obvious) properties and methods:

GLK2VertexArrayObject.h

[objc]

#import <GLKit/GLKit.h>

#import "GLK2BufferObject.h"

#import "GLK2Attribute.h"

@interface GLK2VertexArrayObject : NSObject

@property(nonatomic, readonly) GLuint glName;

@property(nonatomic,retain) NSMutableArray* VBOs;

/** Delegates to the other method, defaults to using "GL_STATIC_DRAW" as the BufferObject update frequency */

-(GLK2BufferObject*) addVBOForAttribute:(GLK2Attribute*) targetAttribute filledWithData:(void*) data bytesPerArrayElement:(GLsizeiptr) bytesPerDataItem arrayLength:(int) numDataItems;

/** Fully configurable creation of VBO + upload of data into that VBO */

-(GLK2BufferObject*) addVBOForAttribute:(GLK2Attribute*) targetAttribute filledWithData:(void*) data bytesPerArrayElement:(GLsizeiptr) bytesPerDataItem arrayLength:(int) numDataItems updateFrequency:(GLK2BufferObjectFrequency) freq;

@end

[/objc]
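The shorter signature presumably just delegates to the longer one, something like this sketch (the exact name of the “GL_STATIC_DRAW-style” frequency value is whatever the GitHub source defines – the one below is invented):

[objc]

-(GLK2BufferObject*) addVBOForAttribute:(GLK2Attribute*) targetAttribute filledWithData:(void*) data bytesPerArrayElement:(GLsizeiptr) bytesPerDataItem arrayLength:(int) numDataItems
{
	// Hypothetical enum value: the frequency that maps to GL_STATIC_DRAW
	return [self addVBOForAttribute:targetAttribute filledWithData:data bytesPerArrayElement:bytesPerDataItem arrayLength:numDataItems updateFrequency:GLK2BufferObjectFrequencyRarely];
}

[/objc]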

The method at the end is where we move the very last bit of code from the previous blog post – the stuff about glVertexAttribPointer. We also combine it with automatically creating the necessary GLK2BufferObject, and calling the “upload:numItems:usageHint” method:

GLK2VertexArrayObject.m:

[objc]

-(GLK2BufferObject*) addVBOForAttribute:(GLK2Attribute*) targetAttribute filledWithData:(void*) data bytesPerArrayElement:(GLsizeiptr) bytesPerDataItem arrayLength:(int) numDataItems updateFrequency:(GLK2BufferObjectFrequency) freq
{
	/** Create a VBO on the GPU, to store data */
	GLK2BufferObject* newVBO = [GLK2BufferObject vertexBufferObject];
	newVBO.bytesPerItem = bytesPerDataItem;
	[self.VBOs addObject:newVBO]; // so we can auto-release it when this class deallocs

	/** Send the vertex data to the new VBO */
	[newVBO upload:data numItems:numDataItems usageHint:[newVBO getUsageEnumValueFromFrequency:freq nature:GLK2BufferObjectNatureDraw]];

	/** Configure the VAO (state) */
	glBindVertexArrayOES( self.glName );
	glEnableVertexAttribArray( targetAttribute.glLocation );
	GLsizei stride = 0;
	glVertexAttribPointer( targetAttribute.glLocation, newVBO.sizePerItemInFloats, GL_FLOAT, GL_FALSE, stride, 0);
	glBindVertexArrayOES(0); //unbind the vertex array, as a precaution against accidental changes by other classes

	return newVBO;
}

[/objc]

Source for: GLK2VertexArrayObject.h and GLK2VertexArrayObject.m

Gotcha: The magic of OpenGL shader type-conversion

This is also a great time to point-out some sleight-of-hand I did last time.

In our source-code for the Shader, I declared our attribute as:

attribute vec4 position;

…and when I declared the data on CPU that we uploaded, to fill-out that attribute, I did:

GLKVector3 cpuBuffer[] =
{
	GLKVector3Make(-1,-1, z),
	/* ... the other two corners ... */
};

Anyone with sharp eyes will notice that I uploaded “vector3” (data in the form: x,y,z) to an attribute of type “vector4” (data in the form: x,y,z,w). And nothing went wrong. Huh?

The secret here is twofold:

  1. OpenGL’s shader-language is forgiving and smart; if you give it a vec3 where it needs a vec4, it will up-convert automatically
  2. We told all of OpenGL “outside” the shader-program: this buffer contains Vector3’s! Each one has 3 floats! Note: That’s THREE! Not FOUR!

…otherwise, I’d have had to define our triangle using 4 co-ordinates – and what the heck is the correct value of w anyway? Better not to even go there (for now). All of this “just works” thanks to the code we’ve written above, in this post. We explicitly tell OpenGL how to interpret the contents of a BufferObject even though the data may not be in the format the shader is expecting – and then OpenGL handles the rest for us automagically.

Errors – ARGH!

We’re about to deal with “textures” in OpenGL – but we have to cover something critical first.

In previous parts, each small feature has required only a few lines of code to achieve even the most complex outcomes … apart from “compiling and linking Shaders”, which used many lines of boilerplate code.

Texture-mapping is different; this is where it gets tough. Small typos will kill you – you’ll get “nothing happened”, and debugging will be nearly impossible. It’s time to learn how to debug OpenGL apps.

OpenGL debugging: the glGetError() loop

There are three ways that API’s / libraries return errors:

  1. (very old, C-only, APIs): An integer return code from every method, that is “0” for success, and “any other number” for failure. Each different number flags a different cause / kind of error
  2. (old, outdated APIs): An “error” pointer that you pass in, and MAY be filled-in with an error if things go wrong. Apple does a variant of this with most of their APIs, although they don’t need to any more (it used to be “required”, but they fixed the problems that forced that, and it’s now optional. Exceptions work fine)
  3. (modern programming languages and APIs): If something goes wrong, an Exception is thrown (modern programming languages do some Clever Tricks that make this exactly as fast as the old methods, but much less error-prone to write code with)

Then there’s another way. An insane, bizarre, way … from back when computers were so new, even the C-style approach hadn’t become “standard” yet. This … is what OpenGL uses:

  1. Every method always succeeds, even when it fails
    • If it fails, a “global list of errors” is created, and the error added to the end
    • No error is reported – no warnings, no messages, no console-logs … nothing
    • If other methods fail, they append to the list of errors
    • At any time, you can “read” the oldest error, and remove it from the list

In fairness, there was good reason behind it. They were trying to make an error-reporting system that was so high-performance it had zero impact on the runtime. They were also trying to make it work over the network (early OpenGL hardware was so special/expensive, it wasn’t even in the same machine you ran your app on – it lived on a mainframe / supercomputer / whatever in a different room in your office).

It’s important to realise that the errors are on a list – if you only call “if( isError )” you’ll only check the first item on the list. By the time you check for errors, there may be more-than-one error stacked up. So, in OpenGL, we do our error checking in a while-loop: “while( thereIsAnotherError ) … getError … handleError”.

UPDATE: ignore the rest, use this (Xcode5)

Xcode5 now does 95% of the work for you, in 3 clicks – this is awesome.

Select your Breakpoints tab, hit the (hard to find) plus button at bottom left, and select “OpenGL ES Error”.


This is a magic breakpoint where OpenGL will catch EVERY GL error as soon as it happens and pause in the debugger for you. You should have this permanently enabled while developing!

(if you’re not familiar with Xcode’s catch-all breakpoints, the other one that most devs have always-on is “Exception Breakpoint”, which makes the debugger pause whenever it hits an Exception, and you can see the exact state of your program at the moment the Exception was created. It’s not 100% perfect – some 3rd party libraries (e.g. TestFlight, IIRC) create temporary Exceptions pre-emptively, and get annoying quickly. But it’s pretty good)

What follows is generic code (not dependent on IDE version). I’ll leave it here as an FYI – and in case you ever need to reproduce this logging at runtime, without the debugger (e.g. for remote upload of crash logs to TestFlight or Hockey). But for simple cases: use the Xcode5 feature instead

Using glGetError()

Technically, OpenGL requires you to alternate EVERY SINGLE METHOD CALL with a separate call to “glGetError()”, to check if the previous call had any errors.

If you do NOT do this, OpenGL will DELETE THE INFORMATION about what caused the error.

Since OpenGL ERRORS ARE 100% CONTEXT-SENSITIVE … deleting that info also MAKES THE ERROR TEXT MEANINGLESS.

Painful? Yep. Sorry.

To make it slightly less painful, OpenGL’s “getError()” function also “removes that error from the start of the list” automatically. So you only use one call to achieve both “get-the-current-error”, and “move-to-the-next-one”.

Here’s the source code you have to implement. After every OpenGL call (any method beginning with the letters “gl”):

[objc]

GLenum glErrorLast;

while( (glErrorLast = glGetError()) != GL_NO_ERROR ) // GL spec says you must do this in a WHILE loop
{
	NSLog(@"GL Error: %i", glErrorLast );
}

[/objc]

This (obviously) makes your source code absurdly complex, completely unreadable, and almost impossible to maintain. In practice, most people do this:

  1. Create a global function that handles all the error checking, and import it to every GL class in your app
  2. Call this function:
    1. Once at the start of each “frame” (remember: frames are arbitrary in OpenGL, up to you to define them)
    2. Once at the start AND end of each “re-usable method” you write yourself – e.g. a “setupOpenGL” method, or a custom Texture-Loader
    3. When something breaks, start inserting calls to this function BACKWARDS from the point of first failure, until you find the line of code that actually errored. You have to re-compile / build / test after each insertion. Oh, the pain!

From this post onwards, I will be inserting calls to this function in my sample code, and I won’t mention it further.

Standard code for the global error checker

The basic implementation was given above … but we can do a lot better than that. And … since OpenGL debugging is so painful … we really need to do better than that!

We’ll start by converting it into a C-function that can trivially be called from any class OR C code:

[objc]

void gl2CheckAndClearAllErrors()
{
	GLenum glErrorLast;

	while( (glErrorLast = glGetError()) != GL_NO_ERROR ) // GL spec says you must do this in a WHILE loop
	{
		NSLog(@"GL Error: %i", glErrorLast );
	}
}

[/objc]

Improvement 1: Print-out the GL_* error type

OpenGL only allows 6 legal “error types”. All gl method calls have to re-use the 6 types, and they aren’t allowed sub-types, aren’t allowed parameters, aren’t allowed “error messages” to go with them. This is crazy, but true.

First improvement: include the error type in the output.

[objc]

while( (glErrorLast = glGetError()) != GL_NO_ERROR ) // GL spec says you must do this in a WHILE loop
{
	/** OpenGL spec defines only 6 legal errors, that HAVE to be re-used by all gl method calls. OH THE PAIN! */
	NSDictionary* glErrorNames = @{ @(GL_INVALID_ENUM) : @"GL_INVALID_ENUM", @(GL_INVALID_VALUE) : @"GL_INVALID_VALUE", @(GL_INVALID_OPERATION) : @"GL_INVALID_OPERATION", @(GL_STACK_OVERFLOW) : @"GL_STACK_OVERFLOW", @(GL_STACK_UNDERFLOW) : @"GL_STACK_UNDERFLOW", @(GL_OUT_OF_MEMORY) : @"GL_OUT_OF_MEMORY" };

	NSLog(@"GL Error: %@", [glErrorNames objectForKey:@(glErrorLast)] );
}

[/objc]

Improvement 2: report the filename and line number for the source file that errored

Using a couple of C macros, we can get the file-name, line-number, method-name etc automatically:

[objc]
NSLog(@"GL Error: %@ in %s @ %s:%d", [glErrorNames objectForKey:@(glErrorLast)], __PRETTY_FUNCTION__, __FILE__, __LINE__ );
[/objc]

Improvement 3: automatically breakpoint / stop the debugger

You know about NSAssert / NSCAssert, right? If not … go read about it. It’s a clever way to do Unit-Testing style checks inside your live application code, with very little effort – and it automatically gets compiled-out when you ship your app.

We can add an “always-fails (i.e. triggers)” Assertion whenever there’s an error. If you configure Xcode to “always breakpoint on Assertions” (should be the default), Xcode will automatically pause whenever you detect an OpenGL error:

As Chris Ross pointed out, I made a stupid mistake here. For the __FILE__ etc macros to work the way they should (auto-referencing the actual lines in your source code), you need to make the call itself a macro, so the compiler re-embeds them at each call site. Code modified below:

Header:

[objc]
#define gl2CheckAndClearAllErrors() _gl2CheckAndClearAllErrorsImpl(__PRETTY_FUNCTION__,__FILE__,__LINE__)

void _gl2CheckAndClearAllErrorsImpl(const char *source_function, const char *source_file, int source_line);
[/objc]

Class:

[objc]
#include "GLK2GetError.h"

void _gl2CheckAndClearAllErrorsImpl(const char *source_function, const char *source_file, int source_line)
{
	NSDictionary* glErrorNames = @{ @(GL_INVALID_ENUM) : @"GL_INVALID_ENUM", @(GL_INVALID_VALUE) : @"GL_INVALID_VALUE", @(GL_INVALID_OPERATION) : @"GL_INVALID_OPERATION", @(GL_STACK_OVERFLOW) : @"GL_STACK_OVERFLOW", @(GL_STACK_UNDERFLOW) : @"GL_STACK_UNDERFLOW", @(GL_OUT_OF_MEMORY) : @"GL_OUT_OF_MEMORY" };

	GLenum glErrorLast;
	while( (glErrorLast = glGetError()) != GL_NO_ERROR )
	{
		// NB: use the passed-in parameters, NOT __FILE__ etc, or every error would report THIS file
		NSLog(@"GL Error: %@ in %s @ %s:%d", [glErrorNames objectForKey:@(glErrorLast)], source_function, source_file, source_line );

		NSCAssert( FALSE, @"OpenGL Error; you need to investigate this!" ); // can't use NSAssert, because we're inside a C function
	}
}
[/objc]

… see how we create a macro that looks like the function, but expands into the function we need it to be.
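For instance, at a hypothetical call site (someBuffer is just an illustration, not from this article’s code):

[objc]
// Because gl2CheckAndClearAllErrors() is now a macro, __PRETTY_FUNCTION__ / __FILE__ / __LINE__
// expand right here, so any error is reported against THIS file and line
glBindBuffer( GL_ARRAY_BUFFER, someBuffer );
gl2CheckAndClearAllErrors();
[/objc]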

Improvement 4: make it vanish from live App-Store builds

By default, Xcode defines a special value for all Debug (i.e. development) builds that is removed for App Store builds.

Let’s wrap our code in an “#if” check that uses this. That way, when we ship our final build to App Store, it will compile-out all the gl error detection. The errors at that point do us no good anyway – users won’t be running the app in a debugger, and the errors in OpenGL are context-sensitive, so error reports from users will do us very little good.

(unless you’re using a remote logging setup, e.g. Testflight/HockeyApp/etc … but in that case, you’ll know what to do instead)

[objc]
void _gl2CheckAndClearAllErrorsImpl(const char *source_function, const char *source_file, int source_line)
{
#if DEBUG

	// ... all the error-checking code from above goes here ...

#endif
}
[/objc]

Source for: GLK2GetError.h and GLK2GetError.m

End of part 4

Next time – I promise – will be all about Textures and Texture Mapping. No … really!

Simplex Update #2: Unit-Testing

Original Author: Francisco Tufró

NOTE: If you want to read this post with syntax highlighting, I strongly suggest you read it on my blog.

Implementation

To avoid starting from scratch and implementing my own unit testing framework, I decided to use Google’s C++ Testing Framework because:

  • It detects your tests automatically (you don’t have to maintain a list of tests, just add the cpp and you’re done).
  • It has test selection from command line built in, which is a huge time saver.
  • It exports to XML (which I intend to use with Jenkins, you’ll learn about this in the next update).
  • It says it’s easily extensible, with custom predicates and advanced features, but I’ve not used these yet.

In order to be able to actually run these tests, we need a test runner application.

Since Simplex Engine is being developed in a modular way, I wanted the tests to be as independent as possible, so I created a module for running them.

I did something that is not great here, but I found it convenient. I created a module that includes a main() function to run tests.

In this way, it’s just a matter of including the cpp files (Tests + Code to be tested) and linking simplex-test to create an executable that will run all the tests without any plumbing on the module being tested.

Since the testing framework is not supposed to be shipped with the actual game’s executable, I think it’s ok to do this.

We’ll see if I regret this in the future, but we’re agile, aren’t we?

Take a look at Runner.cpp (Runner.h has no interesting information at all):

[cpp]
#include "Simplex/Test/Runner.h"

namespace Simplex
{
	namespace Test
	{
		Runner::Runner ( int argc, char **argv )
		{
			testing::InitGoogleTest( &argc, argv );
		}

		int Runner::Start ()
		{
			return RUN_ALL_TESTS();
		}
	}
}

int main ( int argc, char **argv )
{
	Simplex::Test::Runner testRunner = Simplex::Test::Runner ( argc, argv );
	return testRunner.Start ();
}
[/cpp]

What do we see here? The Runner class is used to initialize Google’s C++ Testing Framework and to run all the tests when Start is called.

Then our main function creates the runner and starts testing.

So, whenever I link to simplex-test as a library, this main function will be called, and all my tests will run.

Sample test

In order to show my workflow on unit testing, I’ll guide you through the implementation of a class. In this case I chose Adapter, a base support class to implement the Adapter pattern, which I’ll use to interface with third party libraries.

While developing the engine I’m using TDD, but for clarity I’ll avoid the whole write test / fail / implement / pass / refactor cycle.

I’ll just show you the tests and explain a little bit about them ( NOTE: I’ll paste the full source code for the class and the tests at the end of the post ).

The first thing we need to do is to include the necessary headers:

[cpp]

#include <Simplex/Test.h>

#include <Simplex/Core/Adapter.h>

#include “AdapteeMock.h”

[/cpp]

Simplex/Test.h includes the necessary stuff from Google’s C++ Testing Framework, then I include the .h for the Adapter class, and finally a mock class.

I use this mock class in order to simulate how I’ll use the Adapter class, subclassing it and creating specific behaviour.
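(The mock’s real source is linked at the end of the post; as a rough sketch of the idea – assuming the real classes live in the Simplex::Core namespace – AdapteeMock.h could be as simple as this:)

[cpp]
// AdapteeMock.h -- hypothetical sketch; see the real file linked at the end of the post
#pragma once

#include <Simplex/Core/Adaptee.h>

// Subclasses Adaptee so tests can hand a concrete instance to Adapter,
// and can later be extended with test-specific behaviour
class AdapteeMock : public Simplex::Core::Adaptee
{
};
[/cpp]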

Now let’s check a test.

[cpp]
TEST ( SimplexCoreAdapter, AcceptsAnAdaptee )
{
	AdapteeMock* adaptee = new AdapteeMock();
	Adapter adapter = Adapter();

	adapter.SetAdaptee( adaptee );

	ASSERT_EQ ( adaptee, adapter.GetAdaptee() );
}
[/cpp]

Let’s dissect this:

All tests in Google’s C++ Testing Framework start with a call to the macro TEST.

You need to pass two parameters to TEST:

  • A test case name: this would be the group this test is part of, in this case we’re testing Simplex::Core::Adapter, so it makes sense to name the group SimplexCoreAdapter.
  • A test name: this has to be as descriptive as possible, try to identify the only thing this test does and write that. In this case we want to try that it accepts an Adaptee, so AcceptsAnAdaptee makes sense.

Implementation-wise, I like to think of a unit test as a sequence of three operations: Setup, Act, Assert. This is reflected in how I split the lines of code inside the test body, creating a mental division that helps locate each section of the test.

On the first two lines of the test, I basically create a mock adaptee (which we’ll take a look at later) and an adapter (Setup).

Then, we call SetAdaptee on adapter, passing the adaptee as the only parameter (Act).

Finally, I assert that the adapter registered the adaptee, by calling GetAdaptee and comparing the result with the one I created (Assert).

This test may sound a little dull, but it’s exactly what I want to achieve with my Adapter class: I want it to be able to receive an adaptee.

Remember this is a base class, and will be subclassed later with more specific things in mind (For example, OpenGL will be abstracted behind a Graphics API Adapter ).

The rest of the tests have a similar structure.

The code that passes the tests

These two methods are really simple, just a getter and a setter:

[cpp]
void Adapter::SetAdaptee ( Adaptee* adaptee )
{
	mAdaptee = adaptee;
}

Adaptee* Adapter::GetAdaptee ()
{
	return mAdaptee;
}
[/cpp]

These two methods are all the code I need to pass this test (besides the class declaration, that is).

So… the only thing we need now is a way to run these tests automatically.

CMake modifications


We saw a little bit about this in the previous update, but now I wanted to talk about it in context.

In order to get all the tests we use the same technique that we used for the module source.

Inside the module’s CMakeLists.txt we need to assign all the tests cpp’s to a variable:

[cpp]
file ( GLOB TESTS_CODE
	${CMAKE_CURRENT_SOURCE_DIR}/tests/*.cpp
)
[/cpp]

and then add an executable with that code:

[cpp]

add_executable ( SimplexCoreTests ${TESTS_CODE} )

[/cpp]

After doing this, we have to link the executable against the module’s code and the test libraries (simplex-test comes in via ${TEST_LIBRARIES}, explained below):

[cpp]

target_link_libraries ( SimplexCoreTests simplex-core ${TEST_LIBRARIES} )

[/cpp]

The last step is registering the executable as a test to CMake.

[cpp]

add_test ( SimplexCoreTests SimplexCoreTests )

[/cpp]

Yes, I know, there is a magic ${TEST_LIBRARIES} in there; that variable is defined in the root CMakeLists.txt, since it’s platform-dependent.

On Linux, for example, we need to add pthread as well as simplex-test, so it looks like:

[cpp]

set ( TEST_LIBRARIES simplex-test pthread )

[/cpp]

And that’s it.

When you run cmake and make, you’ll end up having an executable called SimplexCoreTests that will run all the tests for the Core module.

Not only that: since we registered the test with CMake, we can now run make test and all the registered tests will run. Neat.
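One caveat (an assumption on my part about the project setup, in case your root CMakeLists.txt doesn’t already do it): add_test only takes effect if testing has been enabled, which is a one-liner:

[cpp]
# Root CMakeLists.txt: without this, add_test() calls are ignored
# and CMake doesn't generate the 'test' target
enable_testing ()
[/cpp]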

Conclusion

As of today, Simplex Engine is using Google’s C++ Testing Framework, and a clear workflow to define and run tests has been implemented.

When you create a test file (ClassName_Tests.cpp) inside the tests folder of a module, it gets automatically added to that module’s test suite.

You can start writing tests right away, and also implementing the classes they test.

I’ve found it to be a really good setup to do TDD for now.

Full source

If you want to see the whole picture, you can take a look at the actual source code for the files I’m talking about here:

simplex-engine/CMakeLists.txt

simplex-test/src/Runner.cpp

simplex-core/CMakeLists.txt
simplex-core/include/Simplex/Core/Adapter.h
simplex-core/include/Simplex/Core/Adaptee.h
simplex-core/src/Adapter.cpp
simplex-core/tests/Adapter_Tests.cpp
simplex-core/tests/AdapteeMock.h

What’s so great about Oculus Rift?

Original Author: Alex Moore

Over the last few months, alongside the Xbox One and PS4 PR trains, articles on John Carmack joining Oculus full time definitely raised my eyebrows, but only for a moment.

My opinion completely changed when, a few weeks ago, I had a go on a dev kit.

We play games not just to entertain ourselves, but to experience things that we otherwise possibly couldn’t. Over the years I’ve won the Formula 1 world championship many times, driven (and crashed) cars that I’ll never be able to afford and flown everything from light aircraft to X-Wings. I’ve explored deep space, been to Hell and back countless times and killed thousands of demons, aliens, Nazis and terrorists. I’ve played an Alien creature that has a penchant for fishnet stockings, all the way through to being James Bond, and that’s only the tip of the iceberg. Games excel at allowing us to play out fantasies that we otherwise wouldn’t get the chance to do, but I’m fully aware that all these experiences have been just me, playing a game.

After just a few seconds in the Tuscany demo for Oculus Rift I’ve experienced something I’ve never had with any other kind of virtual entertainment: I have a memory of being there. Not a memory of playing the game, but of being in that location and exploring. This is very impressive technology, but why? What’s so great about Oculus Rift?

Sense of immersion

The sense of immersion is unlike anything else I’ve tried. It’s nothing to do with graphics – most of the demos are early-Xbox360 quality at best, and the resolution of the headset is very noticeably too low at the moment (something which Oculus have said they’ve fixed, but not yet released as dev kits).

The things that Oculus Rift does so well, when set up correctly, are:

  1. The 3D effect is near on perfect. This is not 3D movie billboarding, depth perception feels very close to real life.
  2. There is no noticeable lag to the head tracking.
  3. The headset is light enough to barely register that you’re wearing it.

There are flaws, and the biggest one isn’t anything that Oculus can necessarily fix: as you move in the virtual world your eyes are telling your brain that you are moving too, but your inner-ear is saying that you’re stationary. For the first few goes this made me feel incredibly sick, and this seems to be a common response. Calibrating the headset to me made a big difference, and after an hour or so my brain more or less figured out what was going on and has been dealing much better with it since.

How far can it go?

After the initial nausea had passed the designer side of my brain kicked into gear, immediately conjuring up a whole host of ways to use this technology. It’s interesting reading interviews with Nate Mitchell about his dedication to gaming, and there’s no doubt that this is going to become a very important part of gaming in the future. But it has the potential to be much, much more.

There are lots of experiments already in progress, and this sky diving simulator by Nissan caught my eye.

Sky diving is one of those things that I have always thought about trying, but haven’t because part of me is certain that I’d completely choke at the point when I was supposed to jump out of the plane. The video above is very game orientated, and no matter how big the fan is it’s still unlikely to feel like the real thing. But it opens up the possibility for people to experience something that they might not otherwise ever be able to, like doing a space walk around the ISS.

Going Mobile

To me, the biggest advancement of the technology is going to happen when Oculus goes mobile. Being tethered to a laptop is definitely a limiting factor, and if you can run the hardware from a phone-sized computer (which Oculus have announced is in the works) then you can eliminate the motion sickness, as you can allow people to walk around for real. It would have to be in carefully controlled spaces, but I can easily imagine a Laser-Quest type arena being used as a proper game level: modelled basically in the real world but highly detailed in the virtual, and suddenly you have the most realistic war game ever made.

Drawbacks

The biggest drawback to the system is that it only really works for experiences that are first person, so gaming wise it already has limited scope: it’s not going to enhance your game of Civilisation for example.

There’s also the problem that you lose almost all connection with the real world. Sure, you can still bump into things, but you really can’t see the guy punching the air 2 inches from your face. I remember my parents struggling to get me off the computer when I was a teenager, getting someone out of Oculus could become a real issue.

Finally though, there’s the distinct possibility that real life will just become boring. If you’ve got all these potential experiences at your fingertips, why wouldn’t you want to stay plugged in? And as you have memories of those virtual experiences, distinguishing them from what’s real could become very difficult indeed.

I don’t know if that’s exciting or scary, but I can’t wait to find out.

Mini update

Following on from the comments below, and some comments on twitter, it seems I might be wrong to say that Oculus won’t enhance games like Civilisation. Oculus VR know about it and are working on figuring it out:

@mike_acton “it’s not going to enhance your game of Civ” – we’re working on it!

— TOMB Forsyth (@tom_forsyth) October 9, 2013

There are also a few demos to try too, which I can’t wait to get my hands on. It’s going to be fun finding out where the limits to this technology are.

Wolfram’s Mathematica 101

Original Author: Angelo Pesce / C0de517e

This will appear in a longer, more rambling form, on C0de517e.

Its main section is inspired by LearnXinYminutes

Introduction

Games and rendering are becoming increasingly “data driven”, in a sense that we use big amounts of “offline” data, either through acquisition or simulation, that we have to fit into our games via forms of approximation. These can be derived analytically, through simplifying assumptions, numerically, by fitting functions to data (learning), or using both methods.

Mathematica can help with all these steps:

  • Being a Computer Algebra System, it aids with experimentation on analytic forms
  • It has great support for a wide range of numerical solvers, allowing to write simulations
  • It has strong visualization and interactive manipulation abilities, which makes reasoning about the data easier
  • It’s a prototyping, experimentation language, with an interactive REPL and very concise, powerful functional primitives

Mathematica’s language

It stands to reason that a CAS is built on top of a symbolic language that supports programmatic manipulation of its own programs (code as data, a.k.a. homoiconicity), and indeed this is the case here. The most famous homoiconic language is Lisp, and indeed if you’re familiar with the Lisp family of languages, Mathematica won’t feel too far off, but there are a few notable differences.

While in Lisp everything is a list, in Mathematica, everything is an expression tree. Also, expressions in Mathematica can have different forms, that is, input (or display) versions of the same internal expression node. This allows you for example to have equations entered in the standard mathematical notation (TraditionalForm) via equation editors or in a textual form that can be typed without auxiliary graphical symbols (InputForm) and so on. Mathematica’s environment, the notebook, is not a purely textual one, but supports graphics, so even images and graphs can be displayed as output or input, inside equations, while still maintaining the same internal representation.
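As a tiny illustration (the exact rendering may vary by version), here is the same internal expression viewed through different forms:

(* The same expression, seen through three different "forms" *)
expr = (x + y)^2;
InputForm[expr]        (* plain textual form: (x + y)^2 *)
TraditionalForm[expr]  (* typeset form, as in a textbook *)
FullForm[expr]         (* the internal expression tree: Power[Plus[x, y], 2] *)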

Mathematica is an interactive environment, but it’s not a standard REPL (read-eval-print loop), instead it relies on the concept of “notebooks” which are a collection of “cells”. Each cell can be evaluated (shift-enter) and it will yield an output cell underneath them, thus allowing for changes and re-evaluation of cells in any order. Cells can also be marked as not containing Mathematica code but just text, thus the notebook is a mix of code and documentation which enables a sort of “literary programming” style.

For completeness it’s worth noticing that Mathematica also has a traditional text-only interface that can be invoked by running the Kernel outside the notebook environment, which has only textual input and output and has only the standard REPL you would expect, but there’s little reason to use it. There is also a more “programming” oriented environment called the Workbench, an optional product that can make your life easier if you write lots of Mathematica code and need to profile, debug and so on.

Mathematica by example

If you don’t have Mathematica, you can try Mathics, which is an open-source implementation based on SymPy and Sage. It’s obviously far from being as complete as Wolfram’s solution, but it has a lot of the basic language covered and also has an online interface! I expect most of the code below to work on it.

(* This is a comment, if you're entering this in a notebook remember that to evaluate the content of a cell you need to use shift-enter or the numeric pad enter *)
 
  
 
  (* Basic math is as expected, but it's kept at arbitrary precision unless you use machine numbers *)
 
  (1+2)*3/4
 
  (1.+2.)*3./4.
 
  
 
  (* % refers to the last computed value *)
 
  %+2
 
  
 
  (* Functions are invoked passing parameters in square braces, all built-in functions start with capitals*)
 
  Sin[Pi/3]
 
  
 
  (* N[] forces evaluation to machine numbers, using machine numbers makes evaluation faster, but will defeat many CAS functions *)
 
  N[Sin[Pi/3]]
 
  
 
  (* Infix and postfix operators all have a functional form, use FullForm to show *)
 
  FullForm[Hold[(1+2)*3/4]]
 
  
 
  (* Each expression in a cell will yield an output in a separate output cell. Expressions can be terminated with ; if we don't want them to emit output, which is useful when doing intermediate assignments that would yield large outputs otherwise *)
 
  1+2;
 
  
 
  (* Assigning a symbol to an expression *)
 
  x = 10
 
  
 
  (* If a symbol is not yet defined, it will be kept in its symbolic version as the evaluation can't proceed further *)
 
  y = x*w
 
  
 
  (* This will recursively expand z until it reaches expansion limit and errors out *)
 
  z = z+1
 
  
 
  (* Clears the previous assignments. It's not wise to assign as globals such common symbols, we use these here for brevity and will clear as needed *)
 
  Clear[x,y,z]
 
  
 
  (* Evaluation is controlled by symbols attributes *)
 
  x = 10
 
  
 
  (* y will be equal to "x*2", not 20 as := is the infix version of the function SetDelayed, which doesn't evaluate the right hand...*)
 
  y := x*2 
 
  
 
  (* …that's because SetDelayed has attribute HoldAll, which tells the evaluator to not evaluate any of its arguments. HoldAll and HoldFirst attributes are one of the "tricky" parts, and a big difference from Lisp where you should explicitly quote to stop evaluation *)
 
  Attributes[SetDelayed] 
 
  
 
  (* As many functions in Mathematica are supposed to deal with symbolic expressions and not their evaluated version, you'll find that many of them have HoldAll or HoldFirst, for example Plot has HoldFirst to not evaluate its first argument, that is the expression that we want to graph *)
 
  Plot[Sin[x], {x, 0, 6*Pi}]
 
  
 
  (* The Hold function can be used to stop evaluation, and the Evaluate function can be used to counter-act HoldFirst or HoldAll *)
 
  Hold[x*2]
 
  y:=Evaluate[x*2]
 
  y
 
  
 
  (* A neat usage of SetDelayed is for memoization of computations, the following pi2, the first time it will be evaluated, will set itself to the numerical value of Pi*Pi to 50 decimal points *)
 
  pi2:=pi2=N[Pi*Pi,50]
 
  pi2
 
  
 
  (* Defining functions can be done with the Function function, which has attributes HoldAll *)
 
  fn=Function[{x,y}, x*y];
 
  fn[10,20]
 
  
 
  (* As many Mathematica built-ins, Function has multiple input forms, the following is a shorthand with unnamed parameters #1 and #2, ended with the & postfix *)
 
  fn2=#1*#2&
 
  fn2[10,20]
 
  
 
  (* Third version, infix notation. Note that \[Function] is a textual representation of a graphical symbol that can be more easily entered in Mathematica with the key presses: esc f n esc; many symbols can be similarly entered, try for example esc theta esc *)
 
  fn3={x,y}\[Function]x*y
 
  fn3[10,20]
 
  
 
  (* A second, very common way of defining functions is to use pattern matching and delayed evaluation, the following defines the fn4 symbol to evaluate the expression x*y when it's encountered with two arguments that will matched to the symbols x and y *)
 
  fn4[x_,y_]:=x*y
 
  fn4[10,20]
 
  
 
  (* _ or Blank[] can match any Mathematica expression, _h matches only expressions with the Head[] h *)
 
  fn5[x_Integer,y_Integer]:=x+y
 
  fn5[10,20]
 
  fn5[10,20.]
 
  
 
  (* A symbol can have multiple matching rules *)
 
  fn6[0] = 1;
 
  fn6[x_Integer] := x*fn6[x - 1]
 
  fn6[3]
 
  
 
  (* In general pattern matching is more powerful than Function as it's really an evaluation rule, but it's slower to evaluate, thus not the best if a function has to be applied over large datasets *)
 
  
 
  (* Note that pattern matching can be used also with =, not only :=, but beware that = evaluates the RHS. In the following, fnWrong will multiply y by 3, not by the value matching test at "call" site, as test*y gets fully evaluated and test doesn't "stay" a symbol, it evaluates to its global value *)
 
  test=3;
 
  fnWrong[test_,y_]=test*y
 
  
 
  (* Lists are defined with {} *)
 
  a={1,2,3,{4,5},{aa,bb}}
(* Elements are accessed with [[index]], indices are one-based, negative wrap-around *)
 
  a[[1]]
 
  a[[-1]]
 
  
 
  (* Ranges are expressed with ;; or Span *)
 
  a[[2;;4]]
 
  
 
  (* From the beginning to the second last *)
 
  a[[;;-2]]
 
  
 
  (* Vectors and matrices are just appropriately sized lists and lists of lists *)
 
  b={1,2,3}
 
  m={{1,0,0},{0,1,0},{0,0,1}}
 
  
 
  (* . is the product for vector, matrices, and tensors *)
 
  m.b
 
  
 
  (* Expression manipulation and CAS. ReplaceAll or /. applies rules to an expression *)
 
  (x+y)/.{x->2,y->Sin[Pi]}
 
  
 
  (* Rules can contain patterns, the following will match only the x symbols that appear to a power, match the expression of the power and replace it *)
 
  Clear[x];
 
  1+x+x^2 +x^(t+n)/.{x^p_->f[p]}
 
  
 
  (* Mathematica has lots of functions that deal with expressions, Integrate, Limit, D, Series, Minimize, Reduce, Refine, Factor, Expand and so on. We'll show only some basic examples. Solve finds solution to systems of equations or inequalities *)
 
  Clear[a];
 
  Solve[x^2+a*x+1==0, x]
 
  
 
  (* It returns results as list of replacement rules that we can replace into the original equation *)
 
  eq=x^2+a*x+1
 
  sol=Solve[eq==0, x]
 
  neweq=eq/.sol[[1]]
 
  
 
  (* Simplifying neweq yields true as the equation is satisfied *)
 
  Simplify[neweq]
 
  
 
  (* Assumptions on the variables can be made *)
 
  Simplify[Sqrt[x^2], Assumptions -> x < 0]
 
  
 
  (* fn7 will compute the Integral and Derivative every time it's evaluated, as Function is HoldAll, fn8, using Evaluate, will force the definition to be equal to the simplified version which yields correctly back the original equation *)
 
  fn7[x_]:=Function[x,D[Integrate[x^3,x],x]]
 
  fn8[x_]:=Function[x,Evaluate[Simplify[D[Integrate[x^3,x],x]]]]
 
  
 
  (* Many procedural programming primitives are supported *)
 
  If[3>2,10,20]
 
  For[i = 0,i < 4,i++,Print[i]]
 
  n=1; While[n < 4,Print[n];n++]
 
  Do[Print[n^2],{n,4}]
 
  
 
  (* Boolean operators are C-like for the most, only Xor is not ^ which means Power instead *)
 
  !((1>2)||(4>3))&&((1==1)&&(5<=6))
 
  
 
  (* Equality tests can be chained *)
 
  (5>4>3)&&(1!=2!=3)
 
  
 
  (* == compares the result of the evaluation on both sides, === is true only if the expression are identical *)
 
  v1=1;v2=1;
 
  v1==v2
 
  v1===v2
 
  
 
  (* Boolean values are False and True. No output is Null *)
 
  
 
  (* With, Block and Module can be used to set symbols to temporary values in an expression *)
 
  With[{x = Sin[y]}, x*y]
 
  Block[{x = Sin[y]}, x*y]
 
  Module[{x = Sin[y]}, x*y]
 
  
 
  (* The difference is subtle. With acts as a replacement rule. Block temporarily assigns the value to a symbol and the restores the previous definition. Module creates an unique, temporary symbol, which affects only the occurrences in the inner scope. *)
 
  m=i^2
 
  Block[{i = a}, i + m]
 
  Module[{i = a}, i + m]
 
  
 
  (* In general prefer Block or With, which are faster than Module. Module implements lexical scoping, Block does dynamic scoping *)
 
  
 
  (* Data operations. Table generates data from expressions *)
 
  Table[i^2,{i,1,10}]
 
  
 
  (* Table can generate multi-dimensional arrays, i.e. matrices *)
 
  Table[10*i+j,{i,1,4},{j,1,3}]
 
  MatrixForm[%]
 
  
 
  (* List elements can be manipulated using functional programming primitives, like Map which applies a function over a list *)
 
  squareListElements[list_]:=Map[#^2&,list]
 
  
 
  (* Short-hand, infix notation of Map[] is /@ *)
 
  squareListElements2[list_]:=(#^2&)/@list
 
  
 
  (* You can use MapIndexed to operate in parallel across two lists; it passes to the mapped function each element and its index (as a list) *)
 
  addLists[list1_,list2_]:=MapIndexed[Function[{element,indexList},element + list2[[indexList[[1]]]] ], list1]
 
  addLists[{1,2,3},{3,4,5}]
 
  
 
  (* A more complete version of the above that is defined only on lists and asserts if the two lists are not equal size. Note the usage of ; to compound two expressions and the need of parenthesis *)
 
  On[Assert]
 
  addListsAssert[list1_List,list2_List]:=(Assert[Length[list1]==Length[list2]]; MapIndexed[Function[{element,indexList},element + list2[[indexList[[1]]]] ], list1])
 
  
 
  (* Or MapThread can be used, which "zips" two or more lists together *)
 
  addLists2[list1_,list2_]:=MapThread[#1+#2&,{list1,list2}]
 
  
 
  (* There are many functional list manipulation primitives, in general, using these is faster than trying to use procedural style programming. Extract from a list of the first 100 integers, the ones divisible by five *)
 
  Select[Range[100],Mod[#,5]==0&]
 
  
 
  (* Group together all integers from 1...100 in the same equivalence class modulo 5 *)
 
  Gather[Range[100],Mod[#1,5]==Mod[#2,5]&]
 
  
 
  (* Fold repeatedly applies a function to each element of a list and the result of the previous fold *)
 
  myTotal[list_]:=Fold[#1+#2&,0,list]
 
  
 
  (* Another way of redefining Total is to use Apply, which calls a function with as arguments, the elements of a list. The infix shorthand of Apply is @@ *)
 
  myTotal2[list_]:=Apply[Plus,list]
 
  
 
  (* Mathematica's CAS abilities also help with numerical algorithms, as Mathematica is able to infer some information from the equations passed in order to select or optimize the numerical methods *)
 
  (* NMinimize does constrained and unconstrained minimization, linear and nonlinear, selecting among different algorithms as needed *)
 
  Clear[x,y]
 
  NMinimize[{x^2-(y-1)^2, x^2+y^2<=4}, {x,y}]
 
  
 
  (* NIntegrate does numerical definite integrals. Uses Monte Carlo methods for many-dimensional integrands *)
 
  NIntegrate[Sin[Sin[x]], {x,0,2}]
 
  
 
  (* NSum approximates discrete summations, even to infinites *)
 
  NSum[(-5)^i/i!,{i,0,Infinity}]
 
  
 
  (* Many other analytic operators have numerical counterparts, like NLimit, ND and so on... *)
 
  NLimit[Sin[x]/x,x->0]
 
  ND[Exp[x],x,1]
 
  
 
  (* Mathematica's plots produce Graphics and Graphics3D outputs, which the notebook shows in a graphical interface *)
 
  Plot[Sin[x],{x,0,2*Pi}]
 
  
 
  (* Graphics are objects that can be further manipulated, Show combines different graphics together into a single one *)
 
  g1=Plot[Sin[x],{x,0,2*Pi}];
 
  g2=Plot[Cos[x],{x,0,2*Pi}];
 
  Show[g1,g2]
 
  
 
  (* GraphicsGrid on the other hand takes a 2d matrix of Graphics objects and displays them on a grid *)
 
  GraphicsGrid[{{g1,g2}}]
 
  
 
  (* Graphics and Graphics3D can also be used directly to create primitives *)
 
  Graphics[{Thick,Green,Rectangle[{0,-1},{2,1}],Red,Disk[],Blue,Circle[{2,0}]}]
 
  
 
  (* Most Mathematica functions accept a list of options as the last argument. For Plots an useful one is to override the automatic range. Show by default uses the range of the first Graphics so it will cut the second plot here: *)
 
  Show[Plot[x^2,{x,0,1}],Plot[x^3,{x,1,2}]]
 
  
 
  (* Forcing to show all the plotted data *)
 
  Show[Plot[x^2,{x,0,1}],Plot[x^3,{x,1,2}], PlotRange->All]
 
  
 
  (* Very handy for explorations is the ability of having parametric graphs that can manipulated. Manipulate allows for a range of widgets to be displayed next to the output of an expression *)
 
  Manipulate[Plot[x^p,{x,0,1}],{{p,1},1,10}]
 
  Manipulate[Plot3D[x^p[[1]]+y^p[[2]],{x,0,1},{y,0,1}],{{p,{1,1}},{1,1},{5,5}}]
 
  (* Manipulate output is a Dynamic cell, which is special as it gets automatically re-evaluated if any of the symbols it captures changes. That's why you can see Manipulate output behaving "weirdly" if you change symbols that are used to compute its output. This allows for all kinds of "spreadsheet-like" computations and interactive applications. *)
 
  
 
  (* Debugging functional programs can be daunting. Mathematica offers a number of primitives that to a degree help. Monitor generates a temporary output that shows the computation in progress. Here the temporary output is a ProgressIndicator graphical object. Evaluations can be aborted with Alt+. *)
 
  Monitor[Table[FactorInteger[2^(2*n)+1],{n,1,100}], ProgressIndicator[n, {1,100}]]
 
  
 
  (* Another example, we assign the value of the function to be minimized to a local symbol, so we can display how it changes as the algorithm progresses *)
 
  complexFn=Function[{x,y},(Mod[Mod[x,1],Mod[y,1]+0.1])*Abs[x+y]]
 
  Plot3D[complexFn[x,y],{x,-2,2},{y,-2,2}]
 
  Block[{temp},Monitor[NMinimize[{temp=complexFn[x,y],x+y==1},{x,y}],N[temp]]]
 
  
 
  (* Print forces an output from intermediate computations *)
 
  Do[Print[Prime[n]],{n,5}]
 
  
 
  (* Mathematica also supports reflection, via Names, Definition, Information and more *)
 
  
 
  (* Performance tuning. A first common step is to reduce the number of results Mathematica will keep around for % *)
 
  $HistoryLength=2
 
  
 
  (* Evaluate current memory usage *)
 
  MemoryInUse[]
 
  
 
  (* Share[] can sometimes shrink the memory usage by making Mathematica realize that certain subexpressions can be shared, it prints the amount of bytes saved *)
 
  Share[]
 
  
 
  (* Reflection can be used to know which symbols are taking the most memory *)
 
  Reverse@Sort[{ByteCount[Symbol[#]],#}&/@Names["`*"]]
 
  
 
  (* Timing operations is simple with AbsoluteTiming *)
 
  AbsoluteTiming[Pause[3]]
 
  
 
  (* Mathematica's symbolic evaluation is relatively slow. Machine numbers operations are faster, but slow compared to other languages. In general Mathematica is not made for high-performance, and if that's needed it's best to directly go to one of the ways it supports external compilation: LibraryLink, CudaLink, and OpenCLLink *)
 
  
 
  (* On the upside, many list-based operations are trivially parallelizable via Parallelize *)
 
  Parallelize[Table[Length[FactorInteger[10^50+n]],{n,20}]]
 
  
 
  (* The downside is that only a few functions seem to be natively parallelized, mostly image-related, and many others require manual parallelization via domain-splitting, e.g. integrals *)
 
  sixDimensionalFunction=Function[{a,b,c,d,e,f},Re[(a*b+c)^d/e+f]];
 
  Total[ParallelTable[NIntegrate[sixDimensionalFunction[a,b,c,d,e,f],{a,-1,1},{b,-1,1},{c,-1,1},{d,-1,1},{e,-1,1},{f,-1+i/4,-1+(i+1)/4}],{i,0,7}]]
 
  
 
  (* Even plotting can be parallelized, see http://mathematica.stackexchange.com/questions/30391/parallelize-plotting. Intra-thread communication is expensive, beware of the amount of data you move! *)
 
  (* There is a Compile functionality that can translate -some- Mathematica expressions into bytecode or C code, even parallelizing, but it's limited to a subset of the language -- see http://mathematica.stackexchange.com/questions/1096/list-of-compilable-functions *)

OpenGL ES 2: Vertices, Shaders, and Geometry

Original Author: Adam Martin

(this is Part 3; Part 1 has an index of all the posts)

We finished Part 2 with the most basic drawing of all: we filled the screen with a background colour (pink/magenta), but no 3D objects.

NB: if you’re reading this on AltDevBlog, the code-formatter is currently broken on the server. Until the ADB server is fixed, I recommend reading this (identical) post over at T-Machine.org, where the code-formatting is much better.

There’s a standalone library on GitHub, with the code from the articles as a Demo app. It uses the ‘latest’ version of the code, so the early articles are quite different – but it’s an easy starting point.

Points in 3D are not Vertices

If you’ve done any 2D geometry or Maths, you probably think of a “vertex” as “the corner of an object”. OpenGL defines “vertex” differently, in a more abstract (and powerful) way.

Imagine a cube. If it were merely “a collection of points in space” it would have no colour. And no texture. Technically, it wouldn’t even have visible edges joining the points together (I drew them here to make it obvious it’s a cube).

(figure: a cube shown as eight bare points, with edges drawn in only to make it recognisable)

In practice … 3D objects are a collection of labelled items, where each item has multiple “pieces of information” attached, usually with a different value for each item.

(figure: the same cube, with each labelled item carrying two attributes)

One of those “pieces of information” is the 3D location (x,y,z) of the “labelled item”. Another piece of information is “what colour should the pixel be here?”. You can have arbitrarily many “pieces of information”. They could be the same for all the items in an object, or all be different.

OpenGL Vertex: A point in space that has no particular position, and isn’t drawn itself, but instead has a LIST of tags/values attached to it. Your shaders take a set of vertices and render “something” using the information attached to the vertices.

A vertex could be part of an object, without being on the surface

This example doesn’t work directly in OpenGL, but it’s the same concept. Look at a bezier curve:

(figure: a bezier curve, with a start point and end point plus two control points)

It has two ordinary points (a start point, and an end point), and two “control points”. The control points aren’t drawn as part of the curve, but you have to know their positions in order to draw the curve.

Likewise, OpenGL vertices: they are more than just “corners”, they are meta-data about the object itself. And … OpenGL is able to “fill the space between vertices”, without drawing the vertices themselves.

Gotcha: This is modern OpenGL; old OpenGL was a bit more prescriptive, and didn’t allow so much freedom. There is still one part of OpenGL that’s a hangover from those days: each vertex has to be given a position sooner or later, and if that position is “off the edge of the screen”, it will be PARTIALLY ignored by the renderer.

How many vertices for an object?

  • In most cases, every “corner” of a 3D object has a vertex
  • In many cases, each “corner” has multiple vertices (usually for special 3D lighting)
  • In rare cases, a “corner” can exist without having any vertices at all: it is calculated procedurally on the GPU itself (c.f. the Bezier curve above)

OpenGL Shaders

We’re now into the world of the GPU … the code we’re talking about will execute on-board the GPU (not the CPU). We’ll have to:

  1. write that code in a special language (one that the GPU understands)
  2. upload it to the GPU
  3. ask the GPU to run it for us at the appropriate time

Shaders are written in a custom, highly simplified, programming language. There’s lots of FUD about how complex it is, but really: it’s a lot easier/simpler than C (it’s like “C, with bits removed”). GLSL (the language) is fully described on just two pages of this GL ES 2 crib-sheet

There are two kinds of Shader in GL ES 2:

  • Vertex Shaders
  • Pixel Shaders (or “Fragment Shaders” as they’re known in OpenGL)

Vertex Shaders

Vertex shaders “do stuff with vertices” (plural of vertex). Hence the name. They can work in 1D, 2D, 3D, or 4D.

Most of the time, they work in 3D or 4D, but their output is usually in 4D. If you’re working in 3D, there’s an automatic “up-conversion” to 4D that happens for you, so you never need to understand the 4D part – except that: a lot of your variables will be of type “4D vector”.

Vertex Shaders typically do a lot of calculations on the input data (vertices) before handing the results to the next stage (see below). Examples include:

  • Move your geometry around: translate, rotate, scale
  • Different “3D” projections: convert to “perspective”, bend the universe, simulate fish-eye lens
  • Physical simulation of “light”: reflections, radiosity, skin, etc
  • Animate bones and joints of a 3D skeleton

The simplest possible Vertex Shader is one that does this:

  1. Ignores the data you send in, and generates the same output 3D position, no matter what

With later versions of GL (not GL ES 2!) there are techniques that use this to do all your rendering without any vertices at all. But ES 2 runs your code “once per vertex you uploaded”, so unless you upload some vertices … your code won’t run at all.

So, in GL ES 2, the simplest usable Vertex Shader does this:

  1. For each vertex, reads a single vertex-attribute that gives a “position in 3D”
  2. Return that point unaltered

Vertex Shaders: output

For each vertex the GPU has, it:

  1. …runs your source code once to calculate a 4D point.
  2. …takes the 4D point, and converts it to a 2D point positioned on-screen, using the X and Y co-ords
    • nearly always: also using the Z co-ord to make sure nothing appears “in front of” things that are hiding it from the camera
  3. …that point is used to create 0 or more pixels, and handed over to the Pixel Shader
  4. …the Pixel shader is executed once for each Pixel created

Vertex Shaders and Co-ordinates

By default, where does a particular 3D point appear on screen when OpenGL draws it?

When you write a 3D engine, you almost never use the OpenGL default co-ordinate system. But when you’re starting out, you want to debug step-by-step, and need to know the defaults.

By default, OpenGL vertex shaders throw away all points with x, y, or z co-ordinate greater than 1 or less than -1

By default, OpenGL vertex shaders do NOT draw 3D with perspective; they use Orthographic rendering, which appears to be 2D

It’s slightly more complex than “throw away all points”, but with simple examples and test cases, you should make sure all your 3D co-ordinates lie within the cube with width = 2, centered on the origin (i.e all co-ords are between -1 and +1).

NB: this also makes some debugging a lot easier, because “(0,0,0)” is the exact center of your screen. If not, you’ve screwed-up something very basic in your OpenGL setup…

Pixel / Fragment Shaders

Pixel shaders turn 3D points (NOT vertices – but the things created from vertices) into pixels on screen. A Pixel Shader always uses the output of a Vertex Shader as its input; it receives both “the 3D point” from the Vertex Shader, and also any extra calculations the Vertex Shader did.

Technically, OpenGL names them “Fragment” shaders. From here on that’s what I’ll call them. But I find it’s easier to understand if you think of them as “pixel” shaders at first.

Allegedly, the reason for calling them “Fragment” shaders is that source code for a Fragment Shader is run for “1 x part-of-a-pixel at a time”, generating part of the final colour for that pixel. In default rendering, each pixel has only one part, and a Fragment Shader might as well be called “a Pixel Shader”; but with sub-sampling, the Fragment Shader might be invoked multiple times for the same Pixel, with slightly different inputs and output

An alternative view: just as an OpenGL “vertex” isn’t quite the same as traditional “points” in geometry, an OpenGL “fragment” isn’t quite the same as a monitor’s/screen’s “pixel”. A Fragment often has an Alpha/transparency value (impossible with a screen pixel!), and might not be a colour (when you get into advanced shader-programming); also … it might have more than one value associated with it (it retains some of the metadata that came from the Vertex).

In simple terms, fragment shaders:

  1. receive the converted data from multiple vertices at once (1, 2, or 3, depending on the Draw call you issued), including the 2D position created by OpenGL
  2. look at the specific Draw call to decide what to do:
    1. if the Draw call specified “draw triangles”, fill in the area between the 3 points
    2. if it specified “draw lines”, interpret them as bends in the line and paint pixels like a join-the-dots puzzle
    3. if it specified “draw points” … merely draw a dot at each point (where “dot” can cover multiple pixels and be multiple colours; it’s really a “sprite” rather than a “dot”)
  3. Finally … once it knows which pixels it’s actually going to fill in … a Fragment shader gives each pixel a colour; usually a different colour for each one. This is where 99% of your source code goes when writing a Fragment Shader: deciding which colour to fill each pixel

The simplest possible Fragment Shader does this:

  1. Any Fragment Shader can have a pixel colored any colour that it wants so long as it is Blue.

Black would work fine too, but … in OpenGL examples, we never use the colour black.

Black (and white) are reserved for “OpenGL failed internally (usually because you gave it invalid parameters)”. If you use black as a colour in your app it’s very hard to know if it’s working correctly or not. Use black/white only when you’re sure your code is working correctly.

Adding Geometry, Shaders, and a Draw call

In OpenGL, remember that you can and should use multiple draw-calls when rendering each “frame”. Ignore the people who tell you that iOS/mobile can’t handle multiple draw-calls; they’re using long-outdated info, or unambitious graphics routines. Modern OpenGL’s real power and awesomeness kicks-in when you have hundreds of draw calls per frame.

Each time you want to add a feature to your app, a good starting point is:

“if I’m about to try something new, I’ll make a new Draw call”

…if it has unexpected effects, it’s easy to toggle it on/off at runtime while you debug it.

Insert a new draw call into the stack of calls:

ViewController.m

[objc]
-(void) viewDidLoad
{
	// ... existing setup code from Part 2 ...

	GLK2DrawCall* draw1Triangle = [[GLK2DrawCall new] autorelease];

	[self.drawCalls addObject: draw1Triangle];
}
[/objc]

(we’ll be using this to draw a triangle, nothing more. In case the name wasn’t obvious enough)

Check the original is still visible, and that nothing appears any different:

(screenshot: the app looks exactly as before – still the plain background)

OK. We want to check that the new Draw call is actually working. Give the new one a new clear colour:

[objc]

[draw1Triangle setClearColourRed:0 green:1.0 blue:0 alpha:0.5];

[/objc]

…and tell it to start clearing the screen each time it draws:

[objc]

draw1Triangle.shouldClearColorBit = TRUE;

[/objc]

Check the screen now fills to the new colour. Note that the “alpha” part is ignored.

(screenshot: the screen now clears to the new green colour)

Alpha generally kills hardware performance of all known GPUs, so you just have to accept it needs “special” handling at the API level. By default, alpha will often be ignored by the hardware until you explicitly enable it (to ensure that the default rendering performance is good).
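(For reference – a minimal sketch, not something this article’s demo needs – explicitly enabling alpha blending in GL ES 2 looks like this:)

[objc]
// Alpha/transparency is ignored until you explicitly enable blending:
glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA ); // standard "source over" compositing
[/objc]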

Our new Draw call is working. But we only want it to draw a triangle, so … turn off the “clear” part again:

[objc]

draw1Triangle.shouldClearColorBit = FALSE;

[/objc]

Defining a 3D object using data

It’s time for GLKit to start helping in a bigger way.

OpenGL has an internal concept of “3D position”, but only on the GPU (i.e. in Shaders). On the CPU, where we have to create (or load) the 3D positions in the first place, OpenGL doesn’t have a vector/vertex/point type. This is a little absurd.

I don’t know the full reasons, but it’s partly historical: early desktop OpenGL, being a C API, worked in the lowest-level data it could: everything was a float or an int. A 3D point was “3 floats in a row, one after the other, in memory”. OpenGL could “get away with” not defining a type for “3D point” etc.

With C, every platform was free to add their own version of “3d point type” so long as it was implemented on top of 3 x float.

Shaders were added 10 years later, and with their custom programming language (GLSL) they needed … some kind of type … to represent this. Hence we got the standard of vec2 / vec3 / vec4, and mat2 / mat3 / mat4. But OpenGL’s authors don’t like going back and altering existing API calls, so the “old” API’s, which only understand float (or int, etc), are still with us.

Apple fixed that by creating the following structs in GLKit:

  • GLKVector2 (for 2D rendering, and for lots of special effects, and general 2D geometry and Maths)
  • GLKVector3 (used for most things when doing 3D; both 3D points and RGB colours (even though a “point” and a “colour” are very different concepts!))
  • GLKVector4 (for 3D rotations using Quaternions, and other Maths tricks which make life much easier. ALSO: for storing “colours” as Red + Green + Blue + Alpha)
  • GLKMatrix2 (for 2D geometry)
  • GLKMatrix3 (for simple 3D manipulations – but you’ll probably never use this)
  • GLKMatrix4 (for complex 3D manipulations, with GLKit doing some clever stuff for you behind the scenes)

Jeff Lamarche’s popular GL ES 1 tutorials (and his update to GL ES 2) use a custom “Vector3D” struct instead. For ES 1, Apple hadn’t released GLKit yet, and so he had to invent his own. But now that we have GLKit, you should always use the GLKVector* and GLKMatrix* classes instead:

  1. GLKit structs are a standard, identical on all iOS versions, across all apps
  2. They come with a LOT of essential, well-written, debugged code from Apple that won’t work without them, and you’d have to write yourself otherwise
  3. If you really need to … they are easy enough to port to other platforms (but you’ll be writing the boilerplate code yourself)

We want to create the geometry “only once, but every time we re-initialize the ViewController (re-start OpenGL)”. Your ViewController’s viewDidLoad is fine for now.

The numbers are chosen to fit inside the 2x2x2 “default” clipping cube used by Vertex shaders (see explanation above, “Vertex Shaders and Co-ordinates”):

ViewController.m:

[objc]

-(void) viewDidLoad

{

/** Make some geometry */

GLfloat z = -0.5; // must be more than -1 * zNear, and ABS() less than zFar

GLKVector3 cpuBuffer[] =

{

GLKVector3Make(-1,-1, z), // screen (left, bottom)

GLKVector3Make( 0, 1, z), // screen (middle, top)

GLKVector3Make( 1,-1, z) // screen (right, bottom)

};

…upload the contents of cpuBuffer to the GPU…

[/objc]

Note that “GLKVector3 cpuBuffer[]” is the same thing as “GLKVector3* cpuBuffer”, and that we could have malloc’d and free’d the array manually. But on the CPU, we won’t need this data again – as soon as it’s uploaded to GPU, it’ll be thrown away on the CPU. So we do a local array declaration that will be automatically cleaned-up/released when the current method ends.

Note OpenGL’s definitions of X,Y,Z:

  • X: as expected, positive numbers are to the right, negative numbers to the left
  • Y: positive numbers are TOP of screen (opposite of UIKit drawing, but same as Quartz/CoreAnimation; Quartz/CA are deliberately the same as OpenGL)
  • Z: positive numbers OUT OF the screen, negative numbers INTO the screen

Upload the 3D object to the GPU

Look back at Part 2 and our list of “where” code runs:

  • either: CPU
  • or: GPU
  • or: CPU-then-GPU
  • (technically also “or: GPU-then-CPU”, but on GL ES this is weakly supported and rarely used)

Your shaders are code that runs 100% on GPU. The points/vertices are data you create on the CPU, then upload to the GPU. It turns out that transferring vertex-data from CPU to GPU is one of the top 3 biggest performance sinks in OpenGL (probably the biggest is: sending a texture from CPU to GPU). So, OpenGL has a special mechanism that breaks the core GL mindset, and lets you “cache” this data on the GPU.

…Buffers and BufferObjects: sending “data” of any kind to the GPU

Early OpenGL has a convention for “uploading data to the GPU”. You might expect a method named “glUploadData[something]” that takes an array of floats, but no, sorry.

Instead, OpenGL uses a convention of “Buffers” and “Buffer Objects”. The convention is:

A BufferObject is a something that lives on the GPU, and appears to your CPU code as an int “name” that lets you interact with it. You can “create” these objects, you can “upload” to them (and in later versions of GL: download from them), you can “select” which one you’re using for a given Draw call, “delete” them, etc.

Each Draw call can read from multiple BufferObjects simultaneously, to grab the vertex-data it needs.

The key commands are:

  • Create a buffer-object: glGenBuffers
  • Select which object to use right now: glBindBuffer
  • Upload data from CPU to GPU: glBufferData (using buffer as a noun, not verb. I.e. this method name means “put data into a buffer”, not “buffer some data that’s being transferred”)
  • Delete a buffer: glDeleteBuffers

Here are three lines that create, select, and fill/upload to a buffer:

[objc]

GLuint VBOName; // the name of the BufferObject so we can later delete it, swap it with another, etc

glGenBuffers( 1, &VBOName );

glBindBuffer(GL_ARRAY_BUFFER, VBOName );

glBufferData(GL_ARRAY_BUFFER, 3 * sizeof( GLKVector3 ), cpuBuffer, GL_DYNAMIC_DRAW);

[/objc]

We use the term “VBO”, short for “Vertex Buffer Object”. A VBO is simply:

VertexBufferObject: A BufferObject that we’ve filled with raw data for describing Vertices (i.e.: for each Vertex, this buffer has values of one or more ‘attribute’s)

Each 3D object will need one or more VBO’s.

When we do a Draw call, before drawing … we’ll have to “select” the set of VBO’s for the object we want to draw.
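In code – a minimal sketch, re-using the VBOName from the upload snippet above – that selection is just a re-bind before drawing:

[objc]
// Re-select ("bind") the VBO we uploaded earlier, in case a different buffer was bound since;
// the next Draw call reads its vertex data from whichever GL_ARRAY_BUFFER is currently bound
glBindBuffer( GL_ARRAY_BUFFER, VBOName );
[/objc]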

Add an explicit Draw call to draw our object

If we don’t select anything, a Draw call will use “whatever VBO is currently active”. We’ll tell OpenGL to treat our VBO as containing “Triangles”, and tell it there are “3 vertices in the VBO”, starting from “index 0″ (i.e. start at the beginning of the VBO, and read 3 vertices – here, 3 floats each):

ViewController.m

[objc]

-(void) renderSingleDrawCall:(GLK2DrawCall*) drawCall

{

glDrawArrays( GL_TRIANGLES, 0, 3); // For now: Ignore the word “arrays” in the name

}

[/objc]

(screenshot: still nothing new on screen – just the background)

Note that nothing changes. Why not? Because you have no shaders installed. Recap, you’ve:

  • ONCE only:
    1. created some geometry on the CPU (the GLKVector3* pointer)
    2. alloc/init’d space on the GPU to hold the geometry (glGenBuffers( 1, … ) + glBindBuffer( GL_ARRAY_BUFFER, … ))
    3. uploaded the geometry from CPU -> GPU (glBufferData)
  • EVERY frame:
    1. called “glClear”
    2. called “draw” and said you wanted the 3 vertices interpreted as a “triangle”.
      • (GL knows this means “flood-fill the space between the 3 points”)

It appears OpenGL is “drawing nothing”. In old-fashioned GL, this wouldn’t happen. Instead, it would have drawn your object in solid black. If you’d set a colour to black, it would seem to be working. If you’d set a black background, it would seem broken (see what I mean? never use black when debugging…).

In modern GL, including GL ES 2, it’s different:

Nothing appears on your background because there is no “shader program” installed on the GPU to generate fragment-colours for a “flood fill”

Creating and uploading a simple “default” ShaderProgram

First, we’ll make the world’s simplest Vertex shader. We assume there’s a “position” attribute attached to each vertex, and we output it unchanged. GLSL has a very small number of global variables you can assign to, and with Vertex Shaders you’re required to assign to the “gl_Position” one:

[objc]

gl_Position = position;

[/objc]

…but the complete file is a bit longer, since (as always in C languages) you have to “declare” your variables before you use them, and you have to put your code inside a function named “main”:

VertexPositionUnprojected.vsh … make a new file for this!

[objc]

attribute vec4 position;

void main()

{

gl_Position = position;

}

[/objc]

…and: the world’s simplest Fragment Shader

Create the world’s simplest Fragment shader. Similarly, a Fragment Shader is required to either discard the Fragment (i.e. say “don’t output a colour/value for this pixel”), or else write “something” into the global “gl_FragColor”:

[objc]

gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 ); // R,G,B,Alpha – i.e. solid blue

[/objc]

…as a complete file:

FragmentColourOnly.fsh … note the extension “fsh” instead of “vsh”

[objc]

void main()

{

gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 );

}

[/objc]

OpenGL: Compiling/Linking a Shader

The OpenGL design committee screwed the pooch when it came to compiling Shaders. You, the programmer, are required to write stupid boilerplate code that is error-prone and yet identical for all apps. There’s no benefit to this, it’s simply bad API design. The steps are:

Note: the “compile” and “link” we’re talking about here happen inside OpenGL and/or on-board the GPU; it’s not your IDE / desktop machine doing the compiling/linking

  1. Compile each of your Shaders (Vertex, Fragment)
  2. Create a ShaderProgram to hold the combined output, and attach the Shaders
  3. Link the ShaderProgram, finalising it
  4. Enable the ShaderProgram using a method call that has the wrong name

Compiling and storing Shaders

You don’t have to store the type of shader – but it’s easier to avoid bugs if you track which was which:

[objc]

typedef enum GLK2ShaderType

{

GLK2ShaderTypeVertex,

GLK2ShaderTypeFragment

} GLK2ShaderType;

[/objc]

Our GLK2Shader class is almost entirely a data-holding class:

[objc]

@interface GLK2Shader : NSObject

@property(nonatomic) GLuint glName;
@property(nonatomic) GLK2ShaderType type;

/** Filename for the shader with NO extension; assumes all Vertex shaders end .vsh, all Fragment shaders end .fsh */
@property(nonatomic,retain) NSString* filename;

[/objc]

Since we’ll never subclass GLK2Shader (there’s no point), we create a convenience one-line constructor for it:

[objc]

/** Convenience method that sets up the shader, ready to be compiled */
+(GLK2Shader*) shaderFromFilename:(NSString*) fname type:(GLK2ShaderType) type
{
    GLK2Shader* newShader = [[[GLK2Shader alloc] init] autorelease];

    newShader.type = type;
    newShader.filename = fname;

    return newShader;
}

[/objc]

Compiling is simple, but OpenGL takes a non-standard approach to it.

First we read the file from disk, loading it into a C-string (required by OpenGL):

[objc]

const GLchar *source = (GLchar *)[[NSString stringWithContentsOfFile:file encoding:NSUTF8StringEncoding error:nil] UTF8String];

[/objc]

…then we tell the GPU to create an empty shader for us, and we upload the raw C-string to the GPU for it to compile:

[objc]

GLuint shader = glCreateShader(type); // returns the new shader's GL name

glShaderSource( shader, 1, &source, NULL );

[/objc]

…finally, we tell the GPU to compile it:

[objc]

glCompileShader( shader );

[/objc]
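The GPU may silently reject your source at this point, so in practice you’ll want the standard status check right after the compile — a minimal sketch:

[objc]
GLint status;
glGetShaderiv( shader, GL_COMPILE_STATUS, &status );
if( status == GL_FALSE )
{
    GLint logLength;
    glGetShaderiv( shader, GL_INFO_LOG_LENGTH, &logLength );

    GLchar* log = malloc( logLength ); // length includes the null terminator
    glGetShaderInfoLog( shader, logLength, NULL, log );
    NSLog( @"Shader compile failed: %s", log );
    free( log );
}
[/objc]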

Apple’s version of the compile method is a static method “compileShader:type:file:”, and I’ve made as few changes to it as possible; it just contains the above code. All well and good, except that Apple’s methods for locating files in a project are arcane and ugly. To make things easier, we wrap this in an INSTANCE method “compile” which looks through the Xcode project to find the right files, and errors if it can’t find them:

[objc]

switch( self.type )
{
    case GLK2ShaderTypeFragment:
    {
        glShaderType = GL_FRAGMENT_SHADER;
        shaderPathname = [[NSBundle mainBundle] pathForResource:self.filename ofType:@"fsh"];
        stringShaderType = @"fragment";
    } break;

    case GLK2ShaderTypeVertex:
    {
        glShaderType = GL_VERTEX_SHADER;
        shaderPathname = [[NSBundle mainBundle] pathForResource:self.filename ofType:@"vsh"];
        stringShaderType = @"vertex";
    } break;
}

[/objc]

Source for: GLK2Shader.h and GLK2Shader.m

Linking 2 Shaders into a single ShaderProgram

GLK2ShaderProgram (remember: a ShaderProgram is the combination of multiple Shaders) contains a bunch of simple data, and a private internal method (“link”):

GLK2ShaderProgram.h

[objc]

@interface GLK2ShaderProgram : NSObject

/** Load a pair of Shaders, compile them, put them together into a complete ShaderProgram */
+(GLK2ShaderProgram*) shaderProgramFromVertexFilename:(NSString*) vFilename fragmentFilename:(NSString*) fFilename;

@property(nonatomic) GLuint glName;
@property(nonatomic,retain) GLK2Shader* vertexShader, * fragmentShader;

[/objc]

First, we do something slightly special in the init/dealloc:

[objc]

- (id)init
{
    self = [super init];
    if (self)
    {
        self.glName = glCreateProgram();
    }
    return self;
}

-(void)dealloc
{
    self.vertexShader = nil;
    self.fragmentShader = nil;

    if (self.glName)
    {
        glDeleteProgram(self.glName); // MUST go last (it's used by other things during dealloc side-effects)
        NSLog(@"[%@] dealloc: Deleted GL program with GL name = %i", [self class], self.glName );
    }
    else
        NSLog(@"[%@] dealloc: NOT deleting GL program (no GL name)", [self class] );

    [super dealloc];
}

[/objc]

OpenGL does retain/release management of Shaders and ShaderPrograms. This is unique in OpenGL (part of the “OpenGL committee screwed the pooch when adding Shaders” problem). To make life easy, our GLK2ShaderProgram class matches the ObjC retain/release to doing the same in OpenGL, so we don’t have to worry about it.

We also override the setFragmentShader: / setVertexShader: setters so that they use glAttachShader and glDetachShader as required. This is part of the retain/release management — it needs doing once, ever, and you never have to worry about it afterwards.
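For concreteness, the fragment-shader setter comes out roughly like this — a sketch assuming the auto-synthesized `_fragmentShader` ivar (the vertex version is identical; the real source is linked below):

[objc]
-(void) setFragmentShader:(GLK2Shader*) newShader
{
    if( newShader == _fragmentShader )
        return;

    if( _fragmentShader != nil )
        glDetachShader( self.glName, _fragmentShader.glName ); // tell OpenGL first...

    [_fragmentShader release];             // ...then do the matching ObjC release
    _fragmentShader = [newShader retain];

    if( _fragmentShader != nil )
        glAttachShader( self.glName, _fragmentShader.glName );
}
[/objc]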

Most of the time, since OpenGL requires exactly 2 shaders, the creation and setup code will be identical. After you’ve created the two Shaders you compile them, add them to the GLK2ShaderProgram, and finally … tell the ShaderProgram to “link” itself:

[objc]

... // the guts of our shaderProgramFromVertexFilename:fragmentFilename: method

[vertexShader compile];
[fragmentShader compile];

newProgram.vertexShader = vertexShader;
newProgram.fragmentShader = fragmentShader;

[newProgram link];

[/objc]

Linking is simple, just one line of code:

[objc]

+(void) linkProgram:(GLuint) programRef
{
    glLinkProgram(programRef);
}

[/objc]

…but again, we wrap it in an instance method (“link”) which adds some checks before and after, and automatically detaches/deletes the unused Shader objects if the linking fails.
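Those post-link checks are the twin of the compile-status check — a minimal sketch:

[objc]
GLint status;
glGetProgramiv( programRef, GL_LINK_STATUS, &status );
if( status == GL_FALSE )
{
    GLint logLength;
    glGetProgramiv( programRef, GL_INFO_LOG_LENGTH, &logLength );

    GLchar* log = malloc( logLength );
    glGetProgramInfoLog( programRef, logLength, NULL, log );
    NSLog( @"ShaderProgram link failed: %s", log );
    free( log );
}
[/objc]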

Source for: GLK2ShaderProgram.h and GLK2ShaderProgram.m

Adding Shaders and a ShaderProgram to our draw call

Finally, with all that Shader/ShaderProgram code written … we can compile and link the shaders, and upload them to the GPU.

Add the Shader and ShaderProgram imports:

ViewController.h

[objc]

#import "GLK2Shader.h"
#import "GLK2ShaderProgram.h"

[/objc]

…create the ShaderProgram and tell the GPU to start using it, because some GL methods will fail if there’s no ShaderProgram currently selected:

[objc]

-(void) viewDidLoad
{
    ...

    /** -- Draw Call 2: draw a triangle onto the screen */
    GLK2DrawCall* draw1Triangle = [[GLK2DrawCall new] autorelease];

    /** Upload a program */
    draw1Triangle.shaderProgram = [GLK2ShaderProgram shaderProgramFromVertexFilename:@"VertexPositionUnprojected" fragmentFilename:@"FragmentColourOnly"];
    glUseProgram( draw1Triangle.shaderProgram.glName );

    ...
}

[/objc]

Upgrade our “drawcall” class so that it can store a ShaderProgram:

GLK2DrawCall.h

[objc]

#import "GLK2ShaderProgram.h"

@property(nonatomic,retain) GLK2ShaderProgram* shaderProgram;

[/objc]

Now that a Draw call can have a ShaderProgram, our render method needs to make use of this.

NOTE: OpenGL re-renders everything, every frame, and after each Draw call, the OpenGL state is NOT reset automatically. For each draw-call, if it doesn’t use a given OpenGL feature, we must manually re-set OpenGL’s global state to say “not using that feature right now, kthnkbye”

ViewController.m

[objc]

-(void) renderSingleDrawCall:(GLK2DrawCall*) drawCall
{
    glClear( (drawCall.shouldClearColorBit ? GL_COLOR_BUFFER_BIT : 0) );

    if( drawCall.shaderProgram != nil )
        glUseProgram( drawCall.shaderProgram.glName );
    else
        glUseProgram( 0 /* means "none" */ );

    ... // glDrawArrays etc, as before
}

[/objc]

Going forwards, every time you add a new “OpenGL feature” to your code, you will do “if( drawcall uses it ) … do it … else … disable it”. If you don’t, OpenGL will start to “leak” settings between your draw-calls; most leaks are cosmetic (the wrong stuff appears on screen), but in extreme cases they can crash the renderer. For example:
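Here’s a sketch of that pattern, using a hypothetical per-drawcall “shouldBlend” flag (blending isn’t part of this tutorial yet, it’s purely for illustration):

[objc]
if( drawCall.shouldBlend ) // hypothetical property, for illustration
    glEnable( GL_BLEND );
else
    glDisable( GL_BLEND ); // explicitly disable, or the setting "leaks" into this draw-call
[/objc]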

Compile failure … WTF?

Build and run, and you’ll find that the shader fails to compile:

What’s happened is that Apple has turned your shader file into a “source” file, and removed it from your project (you don’t want source files appearing in your shipping binary / .ipa file!). You have to go into “Project Settings > Build Phases”, find the file inside “Source files”, and drag it out of there and into “Resource files” / “Files to be copied”.

This is a bug in Xcode 4, pretty embarrassing IMHO. Apple always converts shader files into “unusable” source files, and then promptly removes them from your project’s output. I haven’t found a way to prevent it doing this. Every time you make a new shader, you have to manually “unbreak” your Xcode project. Only once per file, fortunately, but it’s a huge annoyance that Apple didn’t notice and fix this.

Fix that, re-build, and try again…

Telling OpenGL how to extract the values of VertexShader “attributes” from its Buffers (VBOs)

[Screenshot: the view is still empty — nothing renders yet]

ARGH! It appears that OpenGL is still “drawing nothing”! Why?

Nothing appears on your background because:

  1. check you have successfully loaded and compiled 2 x shaders (yes!)
  2. … and created a program from them, and uploaded a program (yes!)
  3. … and selected it (yes!)
  4. check you have uploaded some data-attached-to-vertices (yes!)
  5. … and the program reads that data as ‘attribute’ variables (yes!)
  6. … and you’ve told OpenGL how to convert your raw uploaded bytes of data into the specific ‘attribute’ pointers it needs (wait, what?)

OpenGL can clearly see that you’ve uploaded a whole load of GLfloat values, and that you’ve got exactly the right number of them to use “1 per attribute in the shader, and 1 per vertex we told GL to draw” … but it plays dumb, and refuses to simply “use them”.

Instead, it waits for us to give it permission to use THOSE floats as the ACTUAL floats that map to the “attribute” we named “[whatever]” (in this case, we named it “position”, but we could have called it anything at all). In desktop GL, this mucking around has been improved (a little) – you can put the mapping into your Shader source files – but that’s not allowed in GL ES yet.

We have to do three things:

  1. While compiling/linking the shaders, tell OpenGL to save the list of ‘attribute’ lines it found in the source code
  2. Find the OpenGL-specific way of referencing “the attribute in the source file, that I named ‘position’”
  3. Tell OpenGL how to “interpret” the data we uploaded so that it knows, for each vertex, which bits/bytes/offset in the data correspond to the attribute value for that vertex

In the long run, we’ll want (and need) to do some more advanced/clever stuff with these “vertex attributes”, so we create a class specifically for them. For now, it’s purely a data-holding class:

GLK2Attribute.h

[objc]

#import <Foundation/Foundation.h>

@interface GLK2Attribute : NSObject

+(GLK2Attribute*) attributeNamed:(NSString*) nameOfAttribute GLType:(GLenum) openGLType GLLocation:(GLint) openGLLocation;

/** The name of the variable inside the shader source file(s) */
@property(nonatomic,retain) NSString* nameInSourceFile;

/** The magic key that allows you to "set" this attribute later by uploading a list/array of data to the GPU, e.g. using a VBO */
@property(nonatomic) GLint glLocation;

@end

[/objc]

The two variables should be obvious, except for:

[objc]

@property(nonatomic) GLenum glType;

[/objc]

…but ignore that one for now. It’s included in the class because you can’t read the other data for an ‘attribute’ in OpenGL without also reading this, but we have no use for it here.

Add code to the end of the “link” method that finds + saves all the ‘attribute’ variables in the linked program. First add a Dictionary to store them:

[objc]

@interface GLK2ShaderProgram ()

@property(nonatomic,retain) NSMutableDictionary * vertexAttributesByName;

@end

[/objc]

… and initialize it:

[objc]

- (id)init
{
    self = [super init];
    if (self)
    {
        ... // glCreateProgram(), as before

        self.vertexAttributesByName = [NSMutableDictionary dictionary];
    }
    return self;
}

-(void)dealloc
{
    self.vertexAttributesByName = nil;

    ... // glDeleteProgram etc, as before

    [super dealloc];
}

[/objc]

…then finally use this code to iterate across all the Attributes (by name, strangely), and store them:

[objc]

-(void) link
{
    ... // glLinkProgram + the before/after checks, as described above

    /********************************************************************
     *
     * Query OpenGL for the data on all the "Attributes" (anything
     * in your shader source files that has type "attribute")
     *
     * Allocate enough memory to store the string name of each attribute
     * (OpenGL is a C API. C is a horrible, dead language. Deal with it)
     */
    GLint numCharactersInLongestName;
    glGetProgramiv( self.glName, GL_ACTIVE_ATTRIBUTE_MAX_LENGTH, &numCharactersInLongestName);
    char* nextAttributeName = malloc( sizeof(char) * numCharactersInLongestName );

    /** how many attributes did OpenGL find? */
    GLint numAttributesFound;
    glGetProgramiv( self.glName, GL_ACTIVE_ATTRIBUTES, &numAttributesFound);

    NSLog(@"[%@] --- WARNING: this is not recommended; I am implementing it to check it works, but you should very rarely use glGetActiveAttrib - instead you should be using an explicit glBindAttribLocation BEFORE linking", [self class]);

    /** iterate through all the attributes found, and store them on CPU somewhere */
    for( int i = 0; i < numAttributesFound; i++ )
    {
        GLint attributeLocation, attributeSize;
        GLenum attributeType;
        NSString* stringName; // converted from GL C string, for use in standard ObjC calls and classes

        /** From two items: the glProgram object, and the text/string of attribute-name ... we get all other data, using 2 calls */
        glGetActiveAttrib( self.glName, i, numCharactersInLongestName, NULL /* length of string written to final arg; not needed */, &attributeSize, &attributeType, nextAttributeName );
        attributeLocation = glGetAttribLocation( self.glName, nextAttributeName );
        stringName = [NSString stringWithUTF8String:nextAttributeName];

        GLK2Attribute* newAttribute = [GLK2Attribute attributeNamed:stringName GLType:attributeType GLLocation:attributeLocation];
        [self.vertexAttributesByName setObject:newAttribute forKey:stringName];
    }

    free( nextAttributeName ); // important: in C, memory-managing of strings is clunky. Leaking this here would be a tiny, tiny leak, so small you'd never notice. But that's no excuse to write bad source code. So we do it the right way: "free" the thing we "malloc"-ed.
}

[/objc]

That’s great, now OpenGL is saving the list of attributes. There’s an NSLog warning in the middle there – we’re going to ignore glBindAttribLocation for now. Personally, in real apps, I prefer to use glBindAttribLocation (it makes life easier to experiment with different shaders at runtime), but you don’t need it yet, and it requires more code.
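The attributeNamed: accessor used below is then just a one-line lookup into that Dictionary — something like:

[objc]
-(GLK2Attribute*) attributeNamed:(NSString*) name
{
    return [self.vertexAttributesByName objectForKey:name];
}
[/objc]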

Back to your ViewController, where we’ll have to read back the saved GLK2Attribute, just after we’ve compiled and linked the ShaderProgram:

[objc]

draw1Triangle.shaderProgram = [GLK2ShaderProgram shaderProgramFromVertexFilename:@"VertexPositionUnprojected" fragmentFilename:@"FragmentColourOnly"];
glUseProgram( draw1Triangle.shaderProgram.glName );

GLK2Attribute* attribute = [draw1Triangle.shaderProgram attributeNamed:@"position"]; // will fail if you haven't called glUseProgram yet

/** Make some geometry */

[/objc]

…which enables us to tell OpenGL ‘THIS attribute is stored in the uploaded data like THAT’. But there are two parts to this. If we simply “tell” OpenGL this information, it will immediately forget it, and if we try to render a different object (in a different Draw call), the mapping will get over-written.

Obviously, the sensible thing to do would be to attach this metadata about the “contents” of a BufferObject / VBO to the VBO itself. OpenGL took a different path (for historical reasons, again), and invented a new GPU-side “object” whose purpose is to store the metadata for one or more VBO’s. This is the VertexArrayObject, or VAO.

It’s not an Array, and it’s not a modern Object, nor does it contain Vertices. But … it is a GPU-side thing (or “object”) that holds the state for an array-of-vertices. It’s a clunky name, but you get used to it.

After you’ve finished storing the VBO’s state/settings/mapping-to-VertexShader-attributes in the VAO … you can de-select the VAO so that any later calls don’t accidentally overwrite the stored state. In standard OpenGL style, deselection is done by “binding” (glBindVertexArrayOES) the number 0. We won’t do this here because it’s unnecessary, but lots of tutorials will do it as a precaution against typos elsewhere in the app.

[objc]

glBufferData(GL_ARRAY_BUFFER, 3 * sizeof( GLKVector3 ), cpuBuffer, GL_DYNAMIC_DRAW);

/** ... Create a VAO (state) to save the metadata/state for the VBO (vertex data) */
GLuint VAOName;
glGenVertexArraysOES(1, &VAOName );
glBindVertexArrayOES( VAOName );

/** Tell OpenGL "how" the attribute "position" is stored/packed into the stream of bytes we just uploaded */
glEnableVertexAttribArray( attribute.glLocation );
glVertexAttribPointer( attribute.glLocation, 3, GL_FLOAT, GL_FALSE, 0, 0);

[/objc]
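In this single-triangle demo the VAO simply stays bound. Once you have multiple draw-calls, you’d store the VAO’s name on each one and re-bind it inside renderSingleDrawCall: — a sketch, assuming a hypothetical “VAOName” property on the drawcall class:

[objc]
if( drawCall.VAOName > 0 ) // hypothetical property, for illustration
    glBindVertexArrayOES( drawCall.VAOName );
else
    glBindVertexArrayOES( 0 ); // same "disable if unused" rule as with ShaderPrograms
[/objc]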

Complete! Geometry on-screen!

At last, we have a triangle:

[Screenshot: a solid blue triangle rendered on the green background]

You can turn the background-clear on and off for the new Draw call, and see that without it, the triangle appears on the old magenta background, and with it, it appears on the new green background.

In this post, we’ve added three new classes to the project. No need to type them all out yourself. GitHub link to a commit that has this exact version of the full project (I’ll update this link if I make a new commit with changes/fixes).

Next tutorial…

What will be next? I think it has to be … texture-mapping!

UPDATE: actually, I did a quick time-out to cover Multiple objects on screen at once, and an improved set of classes for VAO and VBOs

How to Survive Working with Remote Collaborators

Original Author: Ben Serviss


For many game developers, the appeal of working with remote collaborators is hard to pass up. If you’re an indie, cutting out office space costs and commuting time is a no-brainer. If you’re an established studio, being able to work with employees and contractors in remote locations grants you access to a far greater talent pool than whoever happens to live in your city.

But as with a well-balanced game, there are numerous trade-offs for the type of remote collaboration you embark upon. In my experience managing projects with collaborators in China, Korea and Russia, I’ve observed that these trade-offs can be easily quantified through three metrics, or difficulty multipliers: time zone, culture and language.

Simply put, for each one of these that differ between you and your collaborators, your required effort to keep the project on track increases significantly.

T, C, L

Working in different time zones is the easiest to manage. If you and your remote collaborators share the same culture and language, setting up meeting times and coordinating schedules is a minor, manageable obstacle.

Next is culture. If you and your collaborator are working in different countries, but share the same time zone and language, you’ll need to stay vigilant for the many subtle miscommunications that may occur throughout the course of the project. Note that this also includes the specific studio culture, if you’re working with an established remote office.

Last, and the most complex difficulty multiplier to manage, is language. If your collaborators don’t speak the same language as you do natively or even at all, you’ll need to route communication through a third party. Properly setting up and managing this pipeline is critical to staying on track.

Calculating (Approximate) Effort

Things get trickier when you realize that not only do these multipliers stack, but the effort required for each combination increases in different ways. For purposes of taking general stock of your particular situation, here’s a rudimentary formula to help illustrate the challenges in taking on remote projects with different difficulty multipliers.

Because cost is widely variable when it comes to remote work and is entirely dependent on the location and studio, I chose to represent this formula in terms of overall effort involved, since its purpose is to give an approximate idea of the challenges ahead.

effort = base * (t * c * l)

Where base equals the base effort required if all collaborators were in the same office.

Where time zone (t), culture (c) or language (l) are the same, assign them a value of 1.

Where time zone (t), culture (c) or language (l) are different, t = 1.25, c = 1.5, and l = 1.75. For example, a partner studio in a different time zone and language, but with a shared culture, gives effort = base * (1.25 * 1 * 1.75) ≈ 2.2 * base – more than double the same-office effort.

It’s one thing to be aware of the situation you’re getting into with remote work, but it’s another entirely to be prepared for the challenges to come. Here are some useful tools for handling the unique challenges of remote collaboration.

General

→ Meet the team ASAP. When you sign on to work with remote collaborators, plan to meet them in person as soon as you can. Putting a face to a name helps both sides think of each other as actual people instead of faceless entities that only exist in email to ask for things. Much like having character portraits in early RPGs helped to humanize otherwise pixelated characters, having your collaborators meet you in person will make you seem more like a flesh and blood human who is harder to ignore.

RPG character portraits help humanize their less-detailed character sprites.

→ Video > voice > text. For communication, the more types of language cues you can convey, the better. The subtle inflections in your voice carry more information than in neutered text, and your unconscious body language in video chats is an even more powerful way to communicate.

→ The bigger fish picks the chat. When deciding what text chat client to use, go with whatever the bigger entity is using. If the 30-person studio you’re working with in Shanghai uses MSN Messenger, that’s what you should use. On the other hand, if the two contractors you hired in Moscow use ICQ and your 15-person studio uses Gchat, go with that. If you can use a client that supports multiple kinds of text chat, even better.

Time Zone

→ Set designated overlap hours. It’s helpful to formalize what times to designate as co-working times – when both sets of collaborators will be online. Even if the time difference is incredibly inconvenient, having a designated hour or even half hour for a daily sync will noticeably help keep things moving.

→ Set office hours. Setting guidelines for what times either studio can call the other is a great way to instill mutual respect (you can set separate rules for emergencies). This can help prevent resentment from growing if you’re constantly calling your remote partner after hours, and it also promotes more disciplined habits on the managing end.

Culture

→ Visit ASAP. This is so important, it’s worth repeating. If you’ve never been to your collaborator’s country, go as soon as you can to get a better sense of their environment – both in terms of their country and their particular studio’s internal culture.

→ Do your homework. Research the social norms and acceptable behaviors of your collaborator’s country to best acquaint yourself with their mind-set. The more olive branches of understanding you can extend, the more likely they are to be truly collaborative with you.

Language

→ Dedicated translation. Set up a dedicated translation resource to help convey information and documents between teams. Take care when setting up this pipeline – it’ll form the lifeblood of the project.

→ Promote language education. As much as you can, encourage your team to learn your collaborator’s native language and vice-versa. Even if your team can’t manage much past greetings and restaurant language, your collaborators will appreciate that you tried.

Takeaway

Even in the best cases, working with remote collaborators is guaranteed to make you suddenly appreciate working with people in the same room. Still, you can certainly succeed working remotely – but only if you’re prepared to meet the unique challenges it poses.

Accurate Collision Zoom for Cameras

Original Author: Eric Undersander

Figure 1 - Camera, lookat target, and obstacle

Here’s the takeaway of this whole post: For camera collision zoom, don’t cast a ray. Don’t cast a sphere. Cast the near face of the view frustum.

Now, let’s start from the beginning. Consider the typical third-person camera: a lookat target (often the player character) and an offset to the camera. We never want to lose sight of the player, so how do we handle obstacles like walls that get in the way? One solution is to move the camera in towards the player, past all obstacles—this is collision zoom.

We can implement collision zoom with a single raycast, backward from the lookat target to the desired camera position. If the ray hits anything, we move the camera to the hit point.

This approach mostly works but it’s not entirely accurate. Many gamers will recognize this particular artifact: stand near a wall, rotate your camera near the wall, and sometimes you’ll get a glimpse into the adjacent room.

For collision zoom, regardless of nearby obstacles, we don’t want to push the near clip plane into the player character’s face. This minimum offset distance is represented in the diagram as the gray camera near the player’s head. In code, it’s the variable minOffsetDist.

So by casting the near face of the view frustum (or rays approximating it), we avoid the earlier wall artifact. This approach has another consequence, perhaps unexpected: in the last diagram, the final camera position (in orange) is placed inside the obstacle. This is okay because the obstacle is still behind the near face of the view frustum. We’ll have an unobstructed view of the player.

Actually, as for the camera being placed inside the obstacle, it’s not just okay—it’s ideal. Our collision zoom algorithm should move the camera no closer to the player than absolutely necessary. In a confined space like an interior hallway or stairwell, even a few inches, gained by the greater accuracy of this approach, can make a difference in the usability of the camera.

Finally, if you opt for the four raycasts instead of the shape-cast, be aware of the downside of this approximation. You may get some occasional visual artifacts depending on your game’s collision geometry. You can mitigate this by some judicious use of padding/fudging in your raycasts (not shown in the code snippet).

  // returns a new camera position
  Vec3 HandleCollisionZoom(const Vec3& camPos, const Vec3& targetPos,
      float minOffsetDist, const Vec3* frustumNearCorners)
  {
      float offsetDist = Length(targetPos - camPos);
      float raycastLength = offsetDist - minOffsetDist;
      if (raycastLength < 0.f)
      {
          // camera is already too near the lookat target
          return camPos;
      }

      Vec3 camOut = Normalize(targetPos - camPos);
      Vec3 nearestCamPos = targetPos - camOut * minOffsetDist;
      float minHitFraction = 1.f;

      for (int i = 0; i < 4; i++)
      {
          const Vec3& corner = frustumNearCorners[i];
          Vec3 offsetToCorner = corner - camPos;
          Vec3 rayStart = nearestCamPos + offsetToCorner;
          Vec3 rayEnd = corner;

          // a result between 0 and 1 indicates a hit along the ray segment
          float hitFraction = CastRay(rayStart, rayEnd);
          minHitFraction = Min(hitFraction, minHitFraction);
      }

      if (minHitFraction < 1.f)
      {
          return nearestCamPos - camOut * (raycastLength * minHitFraction);
      }
      else
      {
          return camPos;
      }
  }
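One last note: the snippet assumes you can supply frustumNearCorners. The exact math depends on your camera/math library, but the near-face corners fall out of the camera basis vectors plus the near distance and the half-extents implied by the vertical FOV and aspect ratio. A sketch, assuming Vec3 operators like those above, a Cross() helper, and tanf from math.h:

  // Computes the 4 world-space corners of the near clip plane.
  // camFwd and camUp are assumed unit-length and orthogonal; fovY is in radians.
  void GetFrustumNearCorners(const Vec3& camPos, const Vec3& camFwd, const Vec3& camUp,
      float fovY, float aspect, float nearDist, Vec3* outCorners)
  {
      Vec3 camRight = Cross(camFwd, camUp);        // assumes a right-handed basis
      float halfH = nearDist * tanf(fovY * 0.5f);  // half-height of the near face
      float halfW = halfH * aspect;                // half-width from the aspect ratio
      Vec3 center = camPos + camFwd * nearDist;

      outCorners[0] = center - camRight * halfW - camUp * halfH;
      outCorners[1] = center + camRight * halfW - camUp * halfH;
      outCorners[2] = center + camRight * halfW + camUp * halfH;
      outCorners[3] = center - camRight * halfW + camUp * halfH;
  }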