Original Author: Jonathan Adamczewski
One day soon, I hope to finish this PhD thing I’ve had going for a while now. I’ve done some research, I’ve written a thesis (aka dissertation), I recently received feedback from two examiners, and now have the task of applying their feedback to the thesis text — which after a nine-month break seems simultaneously very familiar and very foreign.
As I’ve revisited the text, I’ve noticed a number of things that I wish I’d been able to complete, or to have done differently — which is to be expected, I suppose. There was one part, though, that caught my eye to write about here. It was included in the thesis for that reason — because it was eye-catching (any excuse to take a break from pages and pages of text), but was something of an aside that I didn’t have a chance to look more closely at. I don’t consider it to be particularly profound or world changing, but I do think it’s a neat little trick.
The Cell BE SDK from IBM included a Julia set raytracer that made use of a software cache — you can read about the program in an early SDK Installation and User’s Guide (pdf, see the last four pages).
So, that program produces images that look like this (small version — click for full size):
It’s a blobby thing in a box!
To quote from the above-linked document:
[there are] five cubemap texture lookup passes – 3 refraction lookups, a reflection lookup, plus a background lookup.
and it is these texture lookups that make use of the software cache.
When you’re implementing a cache, it’s good to know how good a job it’s doing. You can count hits and misses and other things, and build a wide range of statistics. That’s fine. And it so happened that I was implementing a cache(-like thing).
The problem is that you end up with broad averages that convey no specific information about how the cache is behaving in particular parts of the program. Also, the tables of numbers you get back are hideously dull to look at.
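To make the contrast concrete, here is a minimal sketch (in Python, not the SDK's actual SPE code; the names and structure are my own) of the conventional counter-based approach, which produces exactly those broad averages:

```python
# Hypothetical sketch of a counter-instrumented cache: every lookup
# bumps a hit or miss counter, and all you get back is an aggregate.
class CountingCache:
    def __init__(self):
        self.store = {}   # cached data, keyed by address
        self.hits = 0
        self.misses = 0

    def lookup(self, address, fetch):
        if address in self.store:
            self.hits += 1
        else:
            self.misses += 1
            # Miss: fetch the data from "main memory".
            self.store[address] = fetch(address)
        return self.store[address]

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

A hit rate of, say, 0.7 tells you the cache is doing a reasonable job overall, but nothing about *where* in the frame, or in which lookup pass, the misses are happening.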
Wouldn’t it be nice to see the cache performance somehow? To be able to visualise which texture lookups were hits, and which were misses?
I taught my cache to lie.
When a request for some texture data is received, the cache handles the request as it normally would — fetching the data from main memory if it needs fetching, or just locating it in the SPE’s local store. Then — and here’s the lie — rather than returning the data it was asked for, the cache returns either black if the access was a hit or white if there was a miss. The results look something like this (again, clicky for big version):
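In sketch form (again Python rather than SPE code, with hypothetical names), the trick amounts to doing all the normal cache work and then discarding the answer:

```python
# Hypothetical sketch of the "lying" cache: it performs the usual
# lookup-and-fetch work, then reports hit/miss as a colour instead
# of returning the actual data.
BLACK = (0, 0, 0)        # access was a hit
WHITE = (255, 255, 255)  # access was a miss

class LyingCache:
    def __init__(self):
        self.store = {}   # cached data, keyed by address

    def lookup(self, address, fetch):
        hit = address in self.store
        if not hit:
            # Still do the real fetch, so the cache's behaviour
            # (contents, evictions, timing) is unchanged.
            self.store[address] = fetch(address)
        # The lie: throw away the data, return hit/miss as a colour.
        return BLACK if hit else WHITE
```

Because each rendered pixel combines the results of several texture lookups (the five cubemap passes), the output image becomes a map of cache behaviour rather than a picture of the scene.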
And it reveals various things about the cache and the rendering algorithm. You can see how texture colours are processed for each of the passes. The texture data is tiled, and you can probably work out the tile sizes and cache line size from the picture if you really wanted to. You can see how well the cache performs in different parts of the image, and plenty of other details.
The interesting task from here would (perhaps) be to look for clues to writing a better cache — or to improving the algorithm so that you don’t need a generalised cache at all :P There are plenty of other things you could do to convey more information about the operation of the cache as well.
To be honest, when I first got it working I didn’t understand much of what I could see. It was only after digging through the code that I gained a clearer understanding of what was happening, and what these pixels actually meant. I made it as far as understanding a lot of the Why? questions that the image evokes, but — for various reasons — didn’t get to working out how to apply that understanding.
Regardless of its usefulness, I think this was a neat little trick 🙂