
OpenCL - the compute intermediate language

We are now fast approaching the yearly Game Developers Conference, and around this time last year my favourite topic of conversation was the need for a "virtual ISA" that would cover current and future processor architectures, particularly GPUs. The term "virtual ISA" implied an assembly-like language and toolset that could be used to generate high-performance (most likely data-parallel) code without being tied to a specific architecture. This is much the same as the "virtual ISA" that LLVM provides for a wide variety of (primarily) CPU architectures.

Even a year on, it remains such a simple idea that, once stated, it's a wonder it still doesn't exist. The main change in conversation is that it has become clearer that this is actually the role OpenCL should attempt to fill.

Why do we need a virtual ISA, and why not another language?

Right now, the reality is that no one knows the best way to efficiently program the emerging heterogeneous architectures. In fact, I don't think we even understand how to program "normal" GPUs yet. C++ will inevitably be crowbarred into some form of working shape, but I'd rather not settle for this as a long term solution.

A compute software stack sounds like a far more attractive option. And to build a well functioning stack you will need a solid foundation. As an illustration of what occurs without one, you should observe the rather scary rise in the number of compiler-like software projects that generate CUDA C as their output. This use case is not something CUDA C is designed for, and as OpenCL is essentially a clone of CUDA C, you may well think that this trend is something future OpenCL revisions should pay attention to.

In fact, there are a few other interesting trends to note.

A change in the wind?

A significant strength of the CUDA architecture is PTX, the NVIDIA-specific assembly-like language that CUDA C compiles to. OpenCL does not yet have a parallel of this. Several independent little birdies tell me that the committee felt attempting to define this as part of the initial OpenCL standard could have derailed the process before OpenCL had a chance to find its feet. However, more birdies tell me that this view has now mostly changed, and that defining something akin to a virtual ISA could actually be back on the cards.

PTX as an unofficial standard?

As nice as PTX is, it is not a GPU version of LLVM. Current support is limited to a single PDF reference and an assembler tool. The very impressive GPU ray tracer OptiX uses PTX as its target, but the authors do have the significant advantage of residing inside NVIDIA HQ. The compiler toolkit that makes LLVM so attractive is missing. Not to mention that PTX only targets NVIDIA hardware - although this itself is an interesting area for change, and one where I feel NVIDIA could gain a lot of momentum. One project I will be keeping a close eye on is GPU-Ocelot, which appears to be going a fair way towards addressing the shortcomings of PTX. While PTX may not be an "official" standard, much like Adobe Flash, it could establish itself as one anyway.

LLVM as a data parallel ISA?

Given the close parallels, we should seriously question whether LLVM can support GPU-like architectures. As a reasonably mature project, LLVM already has a lot in its favour and can point to some successes in this area - notably its use in the Mac OpenGL implementation, the Gallium 3D graphics device driver project, and an experimental PTX backend. I spoke with a few people who I know have seriously considered this option and found the jury still out. I haven't used LLVM in sufficient anger to come to a decision.

One obvious obstacle is what LLVM would compile to. I would expect a serious cross-vendor virtual ISA to have direct support from all hardware manufacturers. A level of indirection through LLVM or any other third-party ISA is unlikely to gain sufficient ground if it's not explicitly supported by all vendors through a standard such as OpenCL.

Demand and evolution

With relatively little movement in the industry for a year or more, I do occasionally consider whether I have misread the demand for a virtual ISA. But not for long! Apart from the clear advantages for code generation and domain-specific language implementations (a topic I'm very interested in), a virtual ISA should become the target bytecode for GPU and HPC languages and APIs such as OpenGL, OpenCL, DirectX, DirectCompute, CUDA and their successors. It's a long and growing list.

While uncertainty surrounds the best way to program your GPU, we can expect to see unhealthy but unavoidable amounts of biodiversity. But if we want to prevent our industries from painfully diverging, we should at least agree on a foundational virtual ISA that we can start to unify behind and build on.

The beauty of software development

This is all about how amazing software development really is.

Taking "X" to be a geeky subject: The belief that "X" is truly a thing of beauty but scorned, unloved and misunderstood by the masses is by no means a modern concept. But it lingers on all the same. I suppose it's no coincidence that the culmination of many geeky subjects into a sort of geeky mega-subject (software development) might attract a bit more than it's fair share of abuse. People at least have some respect for mathematicians and physicists, even if they choose to distance themselves. Tell people you develop software for a living and they promptly fall asleep, or complain that their computer never works. Unless of course, you develop games for a living at which point you become every kid's best friend. (It's a strategy I highly recommend.)

Here's a few thoughts and some of my favourite quotes on the topic of beauty and software.

Art
First up is Donald Knuth's "The Art of Computer Programming". For non-coders out there: this book is to programmers what Stephen Hawking's "A Brief History of Time" is to most people. Everyone has heard of it. Many people own a copy. Some have even attempted to read it, but few have actually completed it and even fewer understood it. It's the kind of "compulsory reading" that most programmers skip but know they probably shouldn't have.

Knuth justifies his use of the word "Art" in the title:

Computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty. A programmer who subconsciously views himself as an artist will enjoy what he does and will do it better.

You can almost hear a revolt starting.

Is coding Art? Well, I think there's one thing missing from Knuth's description that would make his assertion particularly convincing - Art can tell you something about humanity. Can your code do that? Well, I'm not sure. But, in defence of code and the study of patterns in general, there are features and patterns of the world that are better reflected through them than through Art. I think some of these patterns are surprisingly deep and beautiful - eigenvectors are the first to spring to mind. Certainly beautiful enough that I'd hang them on my wall if I could capture them in a picture.

Expression
You can express yourself through Art. Can you express yourself through code? Certainly. The most obvious example of this is the rapidly growing cross-over world of programming visual artists. Generative art is a topic all of its own, so I'll just recommend anyone interested check out Processing and follow links from there. I'm a fan of Robert Hodgin, especially this.

Is it possible to be defined by your creations, as many artists become defined by their output? This seems to be true of Justin Frankel, creator of several popular and sometimes controversial projects. There's a popular quote associated with his resignation from AOL, but please be aware I'm including it with some reservations as it's second hand and comes from a somewhat opinionated article. Just be aware it might be porky pies:

For me, coding is a form of self-expression. The company controls the most effective means of self-expression I have. This is unacceptable to me as an individual, therefore I must leave.

(I should probably also note his most recent project, REAPER, is absolutely fantastic and all you Cubase users should jump ship immediately.)

Elegance
I might be nitpicking, but I suspect the most common understanding of 'beauty' in reference to code is actually something closer to 'elegance' rather than beauty as such. Code elegance is arguably the reading-between-the-lines topic of many software engineering mailing lists.

Some noteworthy texts, from the small to the large, include a decent blog post, On Beauty in Code; a presentation on how to go about writing beautiful code (in PHP of all things!); and of course there's a rather interesting-looking book, Beautiful Code. I haven't read it yet, but intend to shortly. The highlight for me is an interesting review of a review of the book, entitled Code isn't beautiful:

Ideas are beautiful. Algorithms are beautiful. Well executed ideas and algorithms are even more beautiful. But the code itself is not beautiful. The beauty of code lies in the architecture, the ideas, the grander algorithms and strategies that code represents.

I think that's pretty much on the button.

Architecture
If your code was a building - an analogy that happens to be a good fit a lot of the time - you could marvel at its architecture. You could be impressed by the construction, or the balance of functionality and aesthetics. And as with the appreciation of architecture, a lot can be in the eye of the beholder!

Coventry's Belgrade Theatre.

Is it a "bold and dynamic" statement, developed through a "sculpural process" where "the spaces that it embraces, and that it implies around itself, are as important as the form itself"? Or, an unimaginative concrete cube ungracefully slapped into the middle of an already concrete-heavy town, representing little but the staggering lack of inspiration present in its creators? You decide! Comparisons with your most loved or love-to-hate software engineering projects as comments please.

Creation
Ignoring the code and algorithms for a moment, it's undeniable that the output of code can be beautiful - after all, it's a major goal of computer graphics research. And not all of it involves artists in the traditional sense. Data visualisation has become a big topic in recent years. I find the growth of this area quite fascinating, as it produces attractive, often intriguing images while apparently skipping over the role of the artist in the traditional sense and deriving its input purely from real-world data. It's arguably an expression of humanity - although not quite in the same sense I originally had in mind!

On a personal note, I still remember watching the first implementation of our radiosity algorithm emerge. The whole thing happened quite quickly and we lost several days to just playing with it: tweaking the scene, changing the lights, adding some post-processing. It was something none of us had seen before, and it took us quite by surprise. I'd had that feel-good effect from previous projects, but there's something about actually being able to see the result and play with it that makes it all the more tangible.

Process
I clearly remember my tutor at university complaining that too many people focus on process over product. In fact, he was my music tutor complaining about composers, but the point applies very well to software engineering. That's not to say there isn't beauty - even joy - to be gained from the creation of code. This leads me to my last, but perhaps favourite, quote of all time. Here's Alexander Stepanov (principal author of the STL) and Mat Marcus in some lecture notes:

Programming has become a disreputable, lowly activity. More and more programmers try to become managers, product managers, architects, evangelists – anything but writing code. It is possible now to find a professor of Computer Science who never wrote a program. And it is almost impossible to find a professor who actually writes code that is used by anyone: the task of writing code is delegated to graduate students. Programming is wonderful. The best job in the world is to be a computer programmer. Code can be as beautiful as the periodic table or Bach’s Well Tempered Clavier.

It's one of my favourite quotes because it's so passionate: I too love programming! I love patterns and algorithms! The world is fantastic!

But - and it's a big but - that quote simultaneously shines a light on the big elephant in the room: software development is programming, but with people. That 'people' part is vitally important, and is occasionally neglected by programmers of code, beautiful or otherwise. It mustn't be. Coding is empowering, but the power still lies with people. I suspect software development does have a thing or two to tell us about humanity.

And that's why software development really is amazing. Even if it's simultaneously one of the most mind-numbingly difficult, painful and exhilarating things I can think of.

Maths and ShaderX8

ShaderX7 is out

ShaderX7 has now hit the press in the USA, although (at the time of writing) it looks like the UK will have to wait a bit longer. I was lucky enough to nab a copy directly from the publisher while at GDC. The first thing you notice is it's really fat this time - definitely one of the largest books on my shelf. It's not short on material, and Amazon are currently selling it for a bargain $37.79 - rather short of the $59.99 RRP. Go grab yourself a copy.

Editing the shadows section was my first adventure into the world of publishing, and I'm glad I put in the effort. I was genuinely surprised by the amount of time involved in editing just four articles, so my hat goes off to everyone - authors and editors alike - who contributes to these industry-led books. It's not done for money, it's very definitely a labour of love, and I'm happy that the end result made the work worthwhile.

ShaderX8 - now with added maths

Which leads me onto ShaderX8! This time around there will be a new section on Mathematics, which Wolfgang is kindly allowing me to get my editing paws on. The idea here is that the complexity and quantity of maths involved in writing shaders has greatly increased in recent times. Early shader models had pretty limited capabilities and most uses of them likely capped out at requiring knowledge of linear algebra - say, vectors, homogeneous matrices, and so on.  But we are now fast approaching being able to run typical x86 code on a GPU and the mathematical models being run are getting correspondingly more complex. There's also more than one way of boiling your mathematical eggs, and performance matters. Part of the process in writing shaders is learning, developing and optimising mathematical models - hence the dedicated new section.

The book is still at proposal stage so what this section really needs now are some article proposals. Please take a look at the schedule on the ShaderX website and email your proposals to Wolfgang before 17th May 09. If you have any questions on the maths section feel free to email me as well. The writing guidelines can be found here.

Complexity of maths in shaders

Here's my 2p on the complexity of maths in modern shaders.

I expect almost all graphics programmers will now at least have heard of spherical harmonics, as they are an extremely efficient way of capturing lighting irradiance. Given their importance, there have been several excellent tutorials written to help the games industry understand how they work. But my impression of the industry is that many people's understanding of them is not yet at a "comfortable" level. The use of spherical harmonics in graphics does not require comprehensive knowledge of the maths that underpins them, but it's definitely a step up from what was required of a programmer in a younger industry.
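To give a flavour of the maths involved, here's the standard textbook form of the expansion - a generic sketch, not any particular engine's formulation: a function over the sphere (incoming light, say) is projected onto a truncated set of spherical harmonic basis functions Y_{lm} and reconstructed from the resulting coefficients.

L(\omega) \approx \sum_{l=0}^{n} \sum_{m=-l}^{l} c_{lm} \, Y_{lm}(\omega),
\qquad
c_{lm} = \int_{S^2} L(\omega) \, Y_{lm}(\omega) \, d\omega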

To add some context, it's worth noting that the use of spherical harmonics in lighting is now several years old. By industry standards spherical harmonics are by no means a new thing. I'd hazard this is evidence of their mathematical complexity reaching beyond what the industry is truly comfortable with handling.

Spherical harmonics are definitely not the most complex mathematical model in graphics. To use a more recent example: In the ShaderX7 Shadows section there's a really excellent paper by Mark Colbert and Jaroslav Křivánek, "Real-time dynamic shadows for image-based lighting". I won't go into the implementation details here, but suffice to say it's non-trivial. It requires familiarity with some fairly advanced linear algebra, a very sound knowledge of sampling, and solving some fiddly least-squares problems with some interesting regularisation to prevent overfitting. It's not dissimilar to the level of complexity of your typical SIGGRAPH paper.
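(As an aside for anyone unfamiliar with the jargon: a regularised least-squares fit generically takes something like the textbook Tikhonov form below, where the \lambda term penalises extreme solutions and so tames overfitting. This is purely an illustrative sketch, not the paper's actual formulation.)

\min_{x} \; \| A x - b \|^{2} + \lambda \| x \|^{2}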


It's a pretty steep learning curve these days.

(Get your proposals in!)

On parsing, regex, haskell and some other cool things

I've recently become slightly obsessed with finding ways (new or otherwise) to make parsing text really, really simple. I'm concerned there are wide gaps in the range of current parsing tools, all of which are filled by pain.

It's also a nice distraction from the C++ language proposal I was working on which is stalled while I dig through more research. It turns out someone has already done something very similar to what I was thinking! So there will be a bit of a delay while I bottom that out properly.

Parsed the pain.
Parsing with regular expressions covers a decent amount of simple low-hanging fruit. I happen to be a big fan of regex but it definitely doesn't handle parsing 'structured documents' very well. Here 'structure' means some non-trivial pattern: perhaps matching braces, nested data or maybe a recursive structure.

This is by design. Regular expressions are, or were originally, a way of describing an expression in a 'regular grammar'. Their expressive power is actually very limited, and text doesn't need to be that complex before it exceeds the expressiveness of a regular expression. This regex email address parser is just about readable, but kind of pushing the limits:

\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b

However, XML, HTML, pretty much all source code, every file format I've ever written - basically all the documents I care about - are not regular grammars.
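To see why, here's a deliberately toy illustration: something as simple as balanced braces needs a rule that can refer to itself, which is exactly what a regular grammar cannot express.

Block -> "{" Block "}" Block
Block -> (nothing)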

The pain in context
The next step up from a regular grammar in the Chomsky hierarchy is a 'context free' grammar. Parsing a context-free grammar frequently involves writing a lexer and parser combination to do the work. The lexer breaks the character stream into 'tokens' and the parser translates the token stream into a more meaningful tree structure. Parsers in particular tend to be complex and lengthy pieces of code, so you'd more often than not find yourself using a parser generator such as yacc, bison or antlr to generate the code for you from a separate description of the grammar. This is all before you actually get to do something useful with the tree the parser outputs.
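As a rough picture of the two stages (the token and tree node names here are made up purely for illustration):

"x = 1 + 2"
    --lexer-->    Ident "x", Equals, Num 1, Plus, Num 2
    --parser-->   Assign "x" (Add (Num 1) (Num 2))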

Either way you cut it, this is a significant step up in pain from a regular expression. Your problem has suddenly jumped from a condensed one-liner to full-on procedurally-generated code. If the task you have in mind is just a bit more complex than a regex can handle, your pain increases disproportionately to the increase in complexity.

Sadly, even context-free grammars don't cut it in practice. There's a fair gap between the expressiveness of a context-free grammar and the real world of nasty, ambiguous, context-sensitive languages. I'm thinking mainly of the context-sensitivity of C++, where the task of writing a parser is full of painful implementation details. Not to mention that there is a further major leap to get close to parsing the world of natural languages, such as English.

Pain relief
There is no shortage of parsing tasks in the "slightly more complex than a regex" category. Context-free grammars actually contain several sub-categories that are more restrictive but simpler to parse, such as LL and LR. So it's not really much of a surprise to discover that a typical 'regex' isn't actually a 'regular grammar expression' any more.

Perl's implementation of regex supports recursion, back references and finite look-ahead, which allow it to handle some - maybe all - context-free documents. I recently re-read the Perl regex tutorial to remind myself of it, and had some fun scraping the web for Tesco voucher codes. I think the expansion beyond supporting just regular grammars is very helpful, but I don't think it's really bridging the gap to context-free parsing in a particularly manageable and re-usable way.

So, if Perl's extended regex doesn't cut it, what are the alternatives? Well, here's a couple of thoughts.

Structured grep
I thought this was quite a nice find: sgrep ("structured grep"). It's similar to, but separate from, the familiar grep. There are binaries for most platforms online, and it can also be found in Cygwin, Ubuntu and probably most other Linux distros. At least in theory, it extends regular grammar pattern matching to support structure through the use of nested matching pairs and boolean operators.

Here's how you might scrape an HTML document for the content of all 'bold' tags:

$ cat my_webpage.html | sgrep '"<b>" .. "</b>"'

The .. infix operator matches the text region delimited by the specified start and end strings. It also supports boolean operators, like this:

$ cat my_webpage.html | sgrep '"<b>" .. ("</b>" or "</B>")'

If you dig through the manual you'll come across macros and other cool operators such as 'containing', 'join', 'outer' and so on. It seems easy to pick up and you can compose more complex expressions with macros.

I would go on about it for longer but sadly its current implementation has a fairly major flaw - it has no support for regex! This feels like a simultaneous step forwards and backwards. I'm not actually sure whether it's a fundamental flaw in the approach they've taken or whether the functionality is simply missing from the implementation. It's a bit of a shame because I think it looks really promising, and if you are interested I'd recommend you take a moment to read a short article on their approach. I found it an interesting read and have since hit upon a handful of mini-parsing problems that I found sgrep very helpful with.

Parser combinators
This was a recent discovery, and it now surprises me I hadn't come across it before. I think I didn't know of it because it's rather tightly bound to the realm of 'functional' languages, which isn't something I've spent that much time looking at until now. That's all changing though, as I think I'm becoming a convert.

It occurred to me that a parser might be easier to write in a functional language: parsing a grammar is kind of like doing algebra, and algebraic manipulation is the kind of thing functional languages are particularly good at. Googling these ideas turned up both parser combinators, an interesting parsing technique, and Haskell, a pure functional language where a 'parser' is really a part of the language itself.

Parser combinators are a simple concept: you write a set of micro-parsers (my name for them) that do very basic parsing duties. Each is just a single function that, given a text string, returns a list of possible interpretations. Each interpretation is a pair of the interpreted object and the remaining text string. In Haskell, you'd write the type of all parsers (a bit like a template in C++) like this:

type Parser a = String -> [(a,String)]

For an unambiguous input string the parser will produce a list with just one item, ambiguous inputs will produce a list with more than one item, and an invalid input produces an empty list. An example micro-parser might just match a particular keyword at the start of the string.
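For concreteness, here's a minimal sketch of a couple of micro-parsers written against that type (the names item and keyword are mine, purely for illustration):

-- Consume a single character; fail (empty list) on empty input.
item :: Parser Char
item []     = []
item (c:cs) = [(c, cs)]

-- Match an exact keyword at the start of the input.
keyword :: String -> Parser String
keyword k cs
  | k == take (length k) cs = [(k, drop (length k) cs)]
  | otherwise               = []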

Since all your parsers are of the same type, it's now simple to compose them together into more complex parsers. This is modular programming at its most explicit.

It's quite surprising how tiny and general the code to compose these parsers can be. You can reduce them to one-liners. Here's a few examples, again in Haskell:


-- Here, m and n are always Parser types.
-- p is a predicate, and b is a general function.

-- parse-and-then-parse
(m # n) cs = [((a,b),cs'') | (a,cs') <- m cs, (b,cs'') <- n cs']

-- parse-or-parse
(m ! n) cs = (m cs) ++ (n cs)

-- parse-if-result
(m ? p) cs = [(a,cs') | (a,cs') <- m cs, p a]

-- parse-and-transform-result
(m >-> b) cs = [(b a, cs') | (a,cs') <- m cs]

-- parse-and-ignore-left-result
(m -# n) cs = [(a,cs'') | (_,cs') <- m cs, (a,cs'') <- n cs']

-- parse-and-ignore-right-result
(m #- n) cs = [(a,cs'') | (a,cs') <- m cs, (_,cs'') <- n cs']

I've taken these examples from "Parsing with Haskell", which is an excellent short paper and well worth a read.
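Just to show how they snap together, here's a tiny usage sketch built on the combinators above plus the earlier item and keyword micro-parsers (again, the names and the toy "x=1" format are mine):

import Data.Char (isAlpha, isDigit)

digit, letter :: Parser Char
digit  = item ? isDigit           -- next character must be a digit
letter = item ? isAlpha           -- next character must be a letter

-- Parse a toy assignment like "x=1", keeping the name and the value
-- but discarding the '=' in the middle.
assignment :: Parser (Char, Char)
assignment = (letter #- keyword "=") # digit

-- ghci> assignment "x=1;"
-- [(('x','1'),";")]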

Learning Haskell has been something of a revelation. I had glanced at Objective Caml and Lisp before, but I'm actually really quite shocked at how cool Haskell is and that it took me so long to find it.

C++0x, "just about everywhere"

When I started this blog I promised myself not to post rants about C++. In something as large and complex as the C++ language there is always plenty of material to rant about, and I figured I'd quickly bore myself and everyone else. To be honest, I was planning on attempting to forget about C++ altogether as a fact of life every programmer must live with and learn to love. But here I am, writing about it quite fondly.

"C++0x" is the working title for the now feature-complete upcoming ISO standard of C++, hopefully to become "C++09" this year. In any coders diary, this is a significant occasion and shouldn't go by without some pause for thought.

I'm not going to comment on the contents of the upcoming standard for a while, but if you work with C++ I highly recommend you check it out, so I'll provide some links at the end.

I'm currently more interested in the development trend of the language itself. The process is fascinating. Plus, I am cooking up a proposal for a new language project based on C++ that could be an interesting angle - but that's for another day.

Significant standards

The first standard for C++ came in 1998, almost 20 years after its conception in 1979. By 1998, C++ itself had long since 'dug in', and by many accounts the standardisation effort was a big, painful undertaking. Unsurprisingly it wasn't perfect, and "C++98", as it became known informally, had a spring clean in 2003, but without any significant changes an end-user would notice.

C++0x represents the first major revision of the language since it was first standardised as C++98, and looks to be another dramatic undertaking. It's a pretty bold move on many levels. It has many bright innovations and the current draft appears to be very well thought through.

I find C++0x even more remarkable when considering how it has been developed. At face value, the end product could be considered fairly impenetrable and bears more similarity to a legal contract than a working design. The C++ standardisation committee is a huge international democratic effort, with nearly 200 members ranging from "corporations to fanatics". It has no owner. It does no marketing. But it appears to know that the decisions it makes through its huge, painful, diligent and slow process will affect the lives of literally millions of developers. I think it's a spectacular organisational achievement, and its coordinators probably possess saint-like levels of patience :).

C++ "everywhere"

Now, in theory, you could write any application in assembler - but it doesn't scale, so you'd struggle to write large, complex software with it. And you couldn't write every application in C# or Java: as managed, garbage-collected languages, they are just not always going to fit your hardware or your application's performance demands. C++ sits in a sweet spot.

Even so, I have a love/hate relationship with C++. On one hand, I've spent many an hour teasing out holes and swearing at my compiler, but on the other I have no doubt my job would be far more difficult without it. Despite its flaws it is a language that is unique in the range of applications it can address. And because of this it's a language that's "just about everywhere".

I'm quoting Bjarne Stroustrup, from a talk he gave to the University of Waterloo last year. As the language's father and original author he may have a bias, but he has some evidence to support his statement. It's arguable that this applications list could well be cherry-picked, but I'd hazard a guess that he's likely to be more in touch with the users of C++ than most, and frankly I agree with him.

Endless growth

"Give a man enough rope and he will hang himself". There's even a book on C/C++ with the same title. As I published this post I discovered Bjarne is quoted as saying, "C++ makes it harder to shoot yourself in the foot; but when you do, it takes off the whole leg". His argument is really that the idiom is as true of C++ as it is of any powerful tool. I think this is a fair point but I think it is still a valid concern to have of popular language, particularly one that's growing.

So, given the language is about to expand even further, I spent a surprisingly enjoyable couple of hours one Saturday morning watching Bjarne talk through some new features in C++0x and comment on its development.

C++0x appears to me to be more than an extension that simply provides some new features - although in the literal sense that is exactly what it does. Inspected more closely, some of the features actually help clean up some of the extraneous cruft. To me, features such as initialiser lists and concepts appear to be part of a crafty expand-but-consolidate manoeuvre. If you're worried about having too much rope, this is the best possible route the committee could have taken, and I am suitably impressed.

Subtraction

The committee have one hand tied: they simply can't subtract. It's practically impossible for them to remove features, because doing so risks breaking people's existing code, and the fallout from that could be enormous.

To be fair, the standardisation committee pursue some other options as well. They look at revising the C++ standard library (the STL). Libraries are a far simpler way to extend a language, as they are intrinsically "turn on and off-able" in a way language extensions typically are not. Opinions on the STL tend to vary, but I don't think many people would deny that it could be considerably better.

Libraries are not immune from the subtraction problem. But it's definitely an area the committee could do a lot of further good.

C++ is dead. Long live Java/C#/Python/etc

Although it's not specific to C++, the 'subtraction problem' is a rather fundamental problem with all language development - and perhaps much "live" software. Like many humans, software has a tendency to grow up too quickly, becoming fat and fidgety at the peak of its powers, leading to inevitable replacement by younger, leaner contenders.

Given the inevitable growth at each revision - however well constructed the revision - how far will the life of C++ extend? In the long run, is C++ condemned to expand its way to death? This is certainly the view many people like to hold.

The death of C++ was being widely touted while I was at university in the 90s, rather conspicuously around the same time the first ISO C++ standard was being agreed. C++ was an old messy language that Java would replace, we were told. This belief extended into the course structure where no C or C++ option was offered. As a consequence I was never taught the language I've used daily ever since. And to be honest, I would not be surprised if some universities continue to do the same thing now.

Despite it all, C++ lives on, and does so with relative ease. C++0x should give the language a further lease of life. However, C++0x will not - and cannot - resolve all the problems with the language. Assuming we believe perfection is possible, to do so requires subtraction, and subtraction requires an alternative approach to the standardisation committee. I have a proposal brewing on this topic, which I'll return to in a follow-up post.

Some references