Tag Archives: graphics

OpenCL - the compute intermediate language

We are now fast approaching the yearly Game Developer Conference, and around this time last year my favourite topic of conversation was the need for a "virtual ISA" that would cover current and future processor architectures, particularly GPUs. The term "virtual ISA" implied an assembly-like language and toolset that could be used to generate high-performance (most likely data-parallel) code without being tied to a specific architecture. This is much the same as the "virtual ISA" that LLVM provides for a wide variety of (primarily) CPU architectures.

Even a year on, it remains such a simple idea that, once stated, you wonder why it still doesn't exist. The main change in conversation is that it has become clearer that this is actually the role OpenCL should attempt to fill.

Why do we need a virtual ISA, and why not another language?

Right now, the reality is that no one knows the best way to efficiently program the emerging heterogeneous architectures. In fact, I don't think we even understand how to program "normal" GPUs yet. C++ will inevitably be crowbarred into some form of working shape, but I'd rather not settle for this as a long term solution.

A compute software stack sounds like a far more attractive option, and to build a well-functioning stack you need a solid foundation. As an illustration of what happens without one, observe the rather scary rise in the number of compiler-like software projects that generate CUDA C as their output. This is not a use case CUDA C was designed for, and as OpenCL is essentially a clone of CUDA C, you may well think this is a trend future OpenCL revisions should pay attention to.

In fact, there are a few other interesting trends to note.

A change in the wind?

A significant strength of the CUDA architecture is PTX, the NVIDIA-specific assembly-like language that CUDA C compiles to. OpenCL does not yet have a parallel of this. Several independent little birdies tell me that the committee felt that attempting to define this as part of the initial OpenCL standard could have derailed the process before OpenCL had a chance to find its feet. However, more birdies tell me that this view has now mostly changed, and that defining something akin to a virtual ISA could actually be back on the cards.
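
For anyone who hasn't poked at PTX directly: you can inspect it for any CUDA C kernel by asking the compiler for it. Below is a minimal sketch - the kernel and its name are my own invention, not from any particular project - just to show the level PTX sits at.

    // Hypothetical, minimal CUDA C kernel: scale an array in place.
    // Compiling with "nvcc --ptx scale.cu" emits scale.ptx, the virtual
    // assembly that the driver later translates to the GPU's native ISA.
    __global__ void scale(float* data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n)
            data[i] *= factor;
    }

It's the emitted PTX, not the C above, that code generators really want to target.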

PTX as an unofficial standard?

As nice as PTX is, it is not a GPU version of LLVM. Current support is limited to a single PDF reference and an assembler tool. The very impressive GPU ray tracer OptiX uses PTX as its target, but its authors do have the significant advantage of residing inside NVIDIA HQ. The compiler toolkit that makes LLVM so attractive is missing. Not to mention that PTX only targets NVIDIA hardware - although this itself is an interesting area for change, and one where I feel NVIDIA could gain a lot of momentum. One project I will be keeping a close eye on is GPU-Ocelot, which appears to go a fair way towards addressing the shortcomings of PTX. While PTX may not be an "official" standard, much like Adobe Flash it could establish itself as one anyway.

LLVM as a data parallel ISA?

Given the close parallels, we should seriously question whether LLVM can support GPU-like architectures. As a reasonably mature project, LLVM already has a lot in its favour and can point to some successes in this area - notably its use in the Mac OpenGL implementation, the Gallium 3D graphics device driver project, and an experimental PTX backend. I spoke with a few people who I know have seriously considered this option and found the jury still out. I haven't used LLVM in sufficient anger to come to a decision myself.

One obvious obstacle is what LLVM would compile to. I would expect a serious cross-vendor virtual ISA to have direct support from all hardware manufacturers, and a level of indirection through LLVM or any other third-party ISA is unlikely to gain sufficient ground if it's not explicitly supported by all vendors through a standard such as OpenCL.

Demand and evolution

With relatively little movement in the industry for a year or more, I do occasionally consider whether I have misread the demand for a virtual ISA. But not for long! Apart from the clear advantages for code generation and domain-specific language implementations (a topic I'm very interested in), a virtual ISA should become the target bytecode for GPU and HPC languages and APIs such as OpenGL, OpenCL, DirectX, DirectCompute, CUDA and their successors. It's a long and growing list.

While uncertainty surrounds the best way to program your GPU, we can expect to see unhealthy but unavoidable amounts of biodiversity. But if we want to prevent our industries from painfully diverging, we should at least agree on a foundational virtual ISA that we can unify behind and build on.

The beauty of software development

This is all about how amazing software development really is.

Taking "X" to be a geeky subject: The belief that "X" is truly a thing of beauty but scorned, unloved and misunderstood by the masses is by no means a modern concept. But it lingers on all the same. I suppose it's no coincidence that the culmination of many geeky subjects into a sort of geeky mega-subject (software development) might attract a bit more than it's fair share of abuse. People at least have some respect for mathematicians and physicists, even if they choose to distance themselves. Tell people you develop software for a living and they promptly fall asleep, or complain that their computer never works. Unless of course, you develop games for a living at which point you become every kid's best friend. (It's a strategy I highly recommend.)

Here are a few thoughts and some of my favourite quotes on the topic of beauty and software.

Art
First up is Donald Knuth's "The Art of Computer Programming". For non-coders out there, this book is to programmers what Stephen Hawking's "A Brief History of Time" is to most people: everyone has heard of it. Many people own a copy. Some have even attempted to read it, but few have actually completed it and even fewer have understood it. It's the kind of "compulsory reading" that most programmers skip but know they probably shouldn't have.

Knuth justifies his use of the word "Art" in the title:

Computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty. A programmer who subconsciously views himself as an artist will enjoy what he does and will do it better.

You can almost hear a revolt starting.

Is coding Art? Well, I think there's one thing missing in Knuth's description that would make his assertion particularly convincing - Art can tell you something about humanity. Can your code do that? Well, I'm not sure. But, in the defence of code and the study of patterns in general, there are features and patterns of the world that are better reflected through them than through Art. I think some of these patterns are surprisingly deep and beautiful - eigenvectors are the first to spring to mind. Certainly beautiful enough that I'd hang them on my wall if I could capture them in a picture.

Expression
You can express yourself through Art. Can you express yourself through code? Certainly. The most obvious example of this is the rapidly growing cross-over world of programming visual artists. Generative art is a topic all of its own, so I'll just recommend anyone interested to check out Processing and follow links from there. I'm a fan of Robert Hodgin, especially this.

Is it possible to be defined by your creations, as many artists become defined by their output? This seems to be true of Justin Frankel, creator of several popular and sometimes controversial projects. There's a popular quote to go with his resignation from AOL, but please be aware I'm including it with some reservations, as it's second hand and comes from a somewhat opinionated article. Just be aware it might be porky pies:

For me, coding is a form of self-expression. The company controls the most effective means of self-expression I have. This is unacceptable to me as an individual, therefore I must leave.

(I should probably also note his most recent project, REAPER, is absolutely fantastic and all you Cubase users should jump ship immediately.)

Elegance
I might be nitpicking, but I suspect the most common understanding of 'beauty' in reference to code is actually something closer to 'elegance' than beauty as such. Code elegance is arguably the reading-between-the-lines topic of many software engineering mailing lists.

Some noteworthy texts, from the small to the large, include a decent blog post, On Beauty in Code; a presentation on how to go about writing beautiful code (in PHP of all things!); and of course there's a rather interesting-looking book, Beautiful Code. I haven't read this yet, but intend to shortly. The highlight for me is an interesting review of a review of the book, entitled Code isn't beautiful:

Ideas are beautiful. Algorithms are beautiful. Well executed ideas and algorithms are even more beautiful. But the code itself is not beautiful. The beauty of code lies in the architecture, the ideas, the grander algorithms and strategies that code represents.

I think that's pretty much on the button.

Architecture
If your code were a building - an analogy that happens to be a good fit a lot of the time - you could marvel at its architecture. You could be impressed by the construction, or the balance of functionality and aesthetics. And as with the appreciation of architecture, a lot can be in the eye of the beholder!

Coventry's Belgrade Theatre.

Is it a "bold and dynamic" statement, developed through a "sculpural process" where "the spaces that it embraces, and that it implies around itself, are as important as the form itself"? Or, an unimaginative concrete cube ungracefully slapped into the middle of an already concrete-heavy town, representing little but the staggering lack of inspiration present in its creators? You decide! Comparisons with your most loved or love-to-hate software engineering projects as comments please.

Creation
Ignoring the code and algorithms for a moment, it's undeniable that the output of code can be beautiful - after all, it's a major goal of computer graphics research. And not all of it involves artists in the traditional sense. Data visualisation has become a big topic in recent years. I find the growth of this area quite fascinating, as it produces attractive, often intriguing images while apparently skipping over the role of the artist in the traditional sense and deriving its input purely from real-world data. It's arguably an expression of humanity - although not quite in the sense I originally had in mind!

On a personal note, I still remember seeing the first implementation of our radiosity algorithm emerge. The whole thing happened quite quickly and we lost several days to just playing with it: tweaking the scene, changing the lights, adding some post-processing. It was something none of us had seen before, and it took us quite by surprise. I'd had that feel-good effect from previous projects, but there's something about actually being able to see the result and play with it that makes it all the more tangible.

Process
I clearly remember my tutor at university complaining that too many people focus on process over product. In fact, he was my music tutor complaining about composers, but the point applies very well to software engineering. That's not to say there isn't beauty - even joy - to be gained from the creation of code. Which leads me to my last, and perhaps favourite, quote of all time. Here are Alexander Stepanov (the man behind the C++ STL) and Mat Marcus in some lecture notes:

Programming has become a disreputable, lowly activity. More and more programmers try to become managers, product managers, architects, evangelists – anything but writing code. It is possible now to find a professor of Computer Science who never wrote a program. And it is almost impossible to find a professor who actually writes code that is used by anyone: the task of writing code is delegated to graduate students. Programming is wonderful. The best job in the world is to be a computer programmer. Code can be as beautiful as the periodic table or Bach’s Well Tempered Clavier.

It's one of my favourite quotes because it's so passionate: I too love programming! I love patterns and algorithms! The world is fantastic!

But - and it's a big but - that quote simultaneously shines a light on the big elephant in the room: software development is programming, but with people. That 'people' part is vitally important, and is occasionally neglected by programmers of code, beautiful or otherwise. It mustn't be. Coding is empowering, but the power still lies with people. I suspect software development does have a thing or two to tell us about humanity.

And that's why software development really is amazing. Even if it's simultaneously one of the most mind-numbingly difficult, painful and exhilarating things I can think of.

Maths and ShaderX8

ShaderX7 is out

ShaderX7 has now hit the press in the USA, although (at the time of writing) it looks like the UK will have to wait a bit longer. I was lucky enough to nab a copy directly from the publisher while at GDC. The first thing you notice is that it's really fat this time - definitely one of the largest books on my shelf. It's not short on material, and Amazon are currently selling it for a bargain $37.79 - rather short of the $59.99 RRP. Go grab yourself a copy.

Editing the shadows section was my first adventure into the world of publishing, and I'm glad I put in the effort. I was genuinely surprised by the amount of time involved in editing just four articles, so my hat goes off to everyone - authors and editors alike - who contributes to these industry-led books. It's not done for money, it's very definitely a labour of love, and I'm happy that the end result made the work worthwhile.

ShaderX8 - now with added maths

Which leads me on to ShaderX8! This time around there will be a new section on Mathematics, which Wolfgang is kindly allowing me to get my editing paws on. The idea is that the complexity and quantity of maths involved in writing shaders has greatly increased in recent times. Early shader models had pretty limited capabilities, and most uses of them probably capped out at requiring a knowledge of linear algebra - say, vectors, homogeneous matrices and so on. But we are now fast approaching being able to run typical x86 code on a GPU, and the mathematical models being run are getting correspondingly more complex. There's also more than one way of boiling your mathematical eggs, and performance matters. Part of the process of writing shaders is learning, developing and optimising mathematical models - hence the dedicated new section.
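
To put that early level of maths in concrete terms, the classic vertex-shader workload was little more than a chain of homogeneous matrix multiplies taking a model-space position into clip space,

    \mathbf{p}_{\text{clip}} \;=\; M_{\text{proj}} \, M_{\text{view}} \, M_{\text{world}} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}

which is a long way from the sampling theory and optimisation problems cropping up in shaders today.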

The book is still at proposal stage so what this section really needs now are some article proposals. Please take a look at the schedule on the ShaderX website and email your proposals to Wolfgang before 17th May 09. If you have any questions on the maths section feel free to email me as well. The writing guidelines can be found here.

Complexity of maths in shaders

Here's my 2p on the complexity of maths in modern shaders.

I expect almost all graphics programmers will now at least have heard of spherical harmonics, as they are an extremely efficient way of capturing lighting irradiance. Given their importance, there have been several excellent tutorials written to help the games industry understand how they work. But my impression of the industry is that many people's understanding of them is not yet at a "comfortable" level. The use of spherical harmonics in graphics does not require comprehensive knowledge of the maths that underpins them, but it's definitely a step up from what was required of a programmer in a younger industry.
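
For readers who haven't met them: mechanically, evaluating SH lighting in a shader boils down to a handful of multiply-adds. Here's a minimal sketch (my own code and names, not from any particular engine) of the usual nine-coefficient, order-2 real SH setup - the hard part is understanding where the projected coefficients come from, not this.

    // Evaluate the nine real SH basis functions for a unit direction (x, y, z).
    // Constants follow the usual real spherical harmonic convention.
    __host__ __device__ void shEvaluate9(float x, float y, float z, float b[9])
    {
        b[0] = 0.282095f;                         // l = 0
        b[1] = 0.488603f * y;                     // l = 1, m = -1
        b[2] = 0.488603f * z;                     // l = 1, m =  0
        b[3] = 0.488603f * x;                     // l = 1, m =  1
        b[4] = 1.092548f * x * y;                 // l = 2, m = -2
        b[5] = 1.092548f * y * z;                 // l = 2, m = -1
        b[6] = 0.315392f * (3.0f * z * z - 1.0f); // l = 2, m =  0
        b[7] = 1.092548f * x * z;                 // l = 2, m =  1
        b[8] = 0.546274f * (x * x - y * y);       // l = 2, m =  2
    }

    // Reconstruct a function stored as nine SH coefficients at the given
    // direction - a simple dot product with the basis values above. For
    // irradiance, the coefficients would already include the cosine-lobe
    // convolution.
    __host__ __device__ float shReconstruct9(const float coeffs[9], const float b[9])
    {
        float sum = 0.0f;
        for (int i = 0; i < 9; ++i)
            sum += coeffs[i] * b[i];
        return sum;
    }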

To add some context, it's worth noting that the use of spherical harmonics in lighting is now several years old. By industry standards spherical harmonics are by no means a new thing. I'd hazard that this is evidence of their mathematical complexity reaching beyond what the industry is truly comfortable handling.

Spherical harmonics are definitely not the most complex mathematical model in graphics. To use a more recent example: In the ShaderX7 Shadows section there's a really excellent paper by Mark Colbert and Jaroslav Křivánek, "Real-time dynamic shadows for image-based lighting". I won't go into the implementation details here, but suffice to say it's non-trivial. It requires familiarity with some fairly advanced linear algebra, a very sound knowledge of sampling, and solving some fiddly least-squares problems with some interesting regularisation to prevent overfitting. It's not dissimilar to the level of complexity of your typical SIGGRAPH paper.
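
To give a flavour of that last point - this is generic Tikhonov (ridge) regularisation for illustration, not necessarily the paper's exact formulation - a regularised least-squares fit looks like

    \min_{\mathbf{x}} \; \lVert A\mathbf{x} - \mathbf{b} \rVert^2 + \lambda \lVert \mathbf{x} \rVert^2,
    \qquad
    \mathbf{x}^{*} = \left( A^{\mathsf{T}} A + \lambda I \right)^{-1} A^{\mathsf{T}} \mathbf{b},

where the lambda term penalises large solution values and stops the fit from chasing noise in the samples - exactly the kind of overfitting mentioned above.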

[Image: relighting results from the article.]

It's a pretty steep learning curve these days.

(Get your proposals in!)

My eyes! My eyes!

This is the eye-candy equivalent of munching too many Wham bars. It's so simple I can't help but love it, cheesy as it is.

Before I explain what's going on - although you can probably guess - have a go at staring at the applet below. If there's a big "P" then it's still loading. If there's no applet... well, email me your browser details, etc, etc and I'll try to fix it.

Instructions:

  1. Get really close to the screen
  2. Stare at the flashing square in the middle of the crazy-colour image. Don't blink or move your eyes!
  3. Count a good few flashes. 8-10 flashes should give you a good burn-in.
  4. Click the mouse button
  5. :O !
  6. Rinse and repeat

Processing-based Java applet:

[Interactive applet appears here - it requires the Java plug-in.]

r0x0r!

The effect can last for a surprising amount of time, provided you keep your eyes completely fixed on the square. The second you move them, something in your eyes and brain puts the internal windscreen wipers on and it's lost. For me at least, it seems that holding a fixed view is more important than the length of time you stare: you can get a reasonable after-image from a pretty quick glance, provided you don't look around the screen.

Most people are familiar with "persistence of vision", as it's known, through looking at light bulbs for too long, their telly's refresh rate, helicopter blades turning into a blur, and cool gadgets like this. The colour burn-in aspect of persistence of vision probably doesn't have as many real applications as the "things moving too quickly to see" aspect, so I expect the applet above will come as a happy surprise to at least a few people.

Question is: is it useful? Well, I reckon it might be in the right setting, but for someone working in graphics/lighting it's at least something to be aware of. I was shown this effect by Jeremy Vickery, an ex-Pixar lighting artist, while he was visiting Geomerics, so I'd expect the film industry is already well aware of it. He demoed it to us in a regular PowerPoint slide as one of several examples of how crazy and unpredictable your eyes and brain can be. Your eyes just lie, frankly. If you are interested you can get further details from his DVD, and you can likely find his GDC "Practical Light and Color" talk slides on the net.

The question is, can this effect be put to good use in games or films? I'm open-minded about the possibility, but still thinking about it. There are definitely other tricks-of-the-eye that are relevant to film makers. In films you have robust control over what comes next. Games? Maybe. Still thinking.