Towards A Richer Toolbox

Published February 26, 2006
Some historical context
NOTE: If you don't care about my navel-gazing musings, you can skip to the good stuff, starting at the "Platforms" heading below.

Those of you who have followed my activities for a while (you sick, voyeuristic freaks!) know that a while ago I was working heavily on research for a realtime raytracing engine. The project was called Freon 2/7, for no really good reason, and basically started out as an attempt to achieve high graphics quality without taking all day to render. Specifically, the project arose as a result of my tinkerings with the POVRay raytracing system; I liked the images it could produce, but I hated the render times.

I started doing some research, found some other people to help tinker with the project, and over the space of about three years we started to produce some results. Those of you who are familiar with the demoscene know that realtime raytracing has been more or less around for many years - but it works by making many assumptions, and many sacrifices. The goal of the Freon project was to find rendering solutions that were fast, generic enough to be useful for things like realtime games, and most importantly high-quality.

Eventually, I made some interesting discoveries, like some additions to kernel-based photon mapping estimations for global illumination. (Sorry, too lazy to explain that for anyone who hasn't spent a couple years dabbling in light simulation theory [wink]) Early on, it became obvious that dedicated raytracing hardware was the solution. After a lot of thought and investigation, I determined (and remain convinced) that the future of 3D graphics will be in programmable hardware that is specifically designed around raytracing, not polygon rasterization. I even started the early phases of negotiating with hardware designers and investors to actually produce a prototype chipset.

However, I was largely foiled by two critical problems: lack of money, and lack of time. These problems were, in fact, two facets of a single issue; I lacked time because I had to spend my time doing things that actually earn money. Like having a job. When I buckled from my contractor habit and started my day job, the project basically died completely. What free time I had was already consumed by games programming, so Freon has been untouched for over two years now.

This has always bugged me a little bit, and every now and then the whole deal comes back to mind and I feel compelled to finish what I started. (I had made repeated oaths, to myself and others, that Freon would finally be the first big project I actually finished. Oops.) Even more pressing is the fact that today raytracing hardware remains an unexplored frontier. Five years ago, my plan was to have consumer-ready cards on the market and in the hands of developers by now. Full-scale games targeting the hardware were due as early as 2007. Back in reality, the only really viable raytracing hardware project remains something of a non-starter.

What's ironic (to me at least) is that I've watched the SaarCOR project make a lot of philosophical mistakes that I dodged early on. At the risk of sounding like a pompous ass, I correctly predicted that certain ways of thinking about computer graphics would hamper RTRT efforts, and require ridiculous amounts of hardware power. I have not widely discussed my work on Freon for two reasons: first, because it is somewhat heretical from an academic point of view, and I don't have the energy to defend all of my decisions; and second, because I'm still a little smug that I've got ideas for far more efficient solutions than are currently being explored. I continue to feel compelled to work on Freon because I still wholeheartedly believe that I've got a unique - and intensely marketable - concept here.


I don't bring this up to gloat about how I'm smarter than a bunch of highly educated researchers. That would be rather stupid. I bring it up because I made a critical philosophical mistake of my own early on, and it ended up costing me an opportunity to have a tremendously promising product on the market by now.

Originally, I was opposed to programmable hardware. I didn't disagree with the concept, I just didn't see it as being a viable entry point. Programmable logic is hard to build, and even harder to build fast. Doing a dedicated circuit first is cheaper, and educational: it helps illuminate the bottlenecks and weaknesses of a design while the design is still fairly easy to change. So, when I first started writing the Freon emulator software, the idea was to lay out the algorithms for a photon mapping engine in C code, and then translate that to hardware via tools like SystemC. It was a good concept... for a trivial and boring circuit. Realtime graphics is not trivial.

The mistake I made was in clinging to the "generic" requirement, and trying to make it fit with the "dedicated circuit" requirement. I wanted something that could rival (and, of course, surpass) the capabilities of rasterizer hardware. After all, if the raytracing card couldn't look better than Doom III, nobody would buy it. What I failed to realize was that programmable shaders have long since made dedicated circuits obsolete. Today, a single top-end video card can do some amazing things with shaders - things that were once the domain of raytracers. Things that I was counting on being able to do, that Doom III couldn't, so that I could sell my hardware.

Over the past two years it has become painfully clear that I screwed that whole decision up, royally. A dedicated circuit can't be extended and hacked on like a programmable one. A 1960s-era telephone will never do more than relay sounds from one point to another; a 21st century cell phone can do all manner of things. The difference is programmability. A dedicated, inflexible raytracer card will never compete with a programmable rasterizer card, in the real world, because programmability is simply too powerful. Maybe the raytracer can do more nifty effects now, but in another year, someone will figure out how to do those same effects on a Radeon, and now your raytracer has no marketable strengths. More games will work on the Radeon, so the raytracer loses, and probably dies altogether.


After having a couple of years to ponder the whole adventure, and after a couple of years of seeing my "only serious competitor" fail to do anything really astounding with the technology, I've come to the conclusion that there's still a chance for Freon - but it will come at the cost of a massive philosophical shift. That shift is to build a platform, not a product. I really hate those terms (they smell like marketing and buzzwords to me), so let me explain what I'm getting at.



Platforms
The term "platform" is a fairly familiar description of a certain type of tools. Specifically, in the programming realm, a platform gives us the ability to develop new, interesting things, by giving us some pre-packaged tools to work with. Platforms exist on many levels, and come in many different sizes.

A computer architecture, like the IA32 architecture, is a platform: it gives us hardware to run programs on. An operating system is a platform: it gives us an easy way to talk to the hardware, and lets us build more programs. C++ is a platform: it's a specification for a set of tools (compilers, etc.) that we can use to build a whole lot of different types of programs. .Net is a platform: it gives us a massive library of prefabricated tools that we can use to make our own programs much more quickly and reliably. Spreadsheets like Excel are also platforms: they can be used to solve a huge variety of problems.

Usually, though, that's where platforms stop - at the "programming tool" level. This is mostly a philosophical issue; someone invents a platform, and wants it to appeal to Just About Everyone, so they make it into a programming tool. (The Unix realm is heavily polluted with this way of thinking.) Programming tools are great, but there are really only so many of them we can handle.


Domain-specific platforms

If we look a little closer, though, we find that there are more platforms out there, lurking under the guise of applications. Interaction tools like Exchange and Lotus Notes are platforms: they are designed to allow other types of software to do interesting things, without having to do the groundwork. However, we often misunderstand these, and just think of them as applications; this misunderstanding can lead to a lot of hate (cf. how pretty much everyone despises the client end of Notes).

Games are also becoming platforms, to an increasing degree. "Modding" tools let players change the way games look, sound, and work. Some modding tools even let people build entirely new games. Traditionally, we don't call these systems "platforms" - we use words like "engine" and "scripting" and "data-driven architecture." But the idea is the same; games designed for heavy moddability are designed, in essence, to be platforms, on which more games can be made.


Seeking a more productive balance

I think software development, as a field, could benefit immensely from looking at this concept in a new light. Platforms are usually extremely generic (Windows, C++, .Net) or extremely domain-specific (the Half-Life engine). Platforms often fail because they try to do too many things, and the end result doesn't do much of anything (99% of the Linux distributions out there). Some things fail because they really should be platforms, but people see them instead as applications (Notes... and Freon).

What we need is a balance, a mid-point between these extremes. Game engines are a good example: tools like Torque and OGRE give us the ability to do many things, but still within a specific domain. Torque isn't designed to make spreadsheet applications. Excel isn't so hot at making games. Yet both are extremely useful in their respective domains.

Code reuse is one of the most hyped notions in modern software engineering. It began with structured programming and subroutines, and it's been raging on ever since. Yet how often do we really get it right? Maybe we reuse one class in three projects. That's good. Maybe we make a platform that virtually everyone can reuse, and we produce .Net. That's good too. But there are a lot more than three projects in the world, and very few people have the resources to produce another .Net. Most people probably wouldn't use another .Net; the one we've got works great.

What we need is platforms that are generic enough to be useful for a lot of things, but still recognize that they can't do everything. These mini-platforms need to recognize their own limits, and refrain from collapsing under the weight of their own bloated feature creep. Many such platforms arise out of a sincere desire to achieve good code reuse. Yet many of them could be far more effective if they were thought of as platforms from the very beginning.


I've worked on a lot of programming projects over the years. I've written games, accounting tools, business management systems, store inventory systems, collaboration systems, and highly specialized applications. Every time I enter a new problem domain, I wish dearly for a platform - not quite a .Net that does everything under the sun, but some sort of blob of existing, swappable concepts that I can use to avoid repeating a lot of effort. The "Don't Repeat Yourself" principle is burned strongly into me, both as a piece of my personality, and a part of my work ethic. If I've already done it once, I shouldn't have to do it again.

This is not a new concept at all. However, it is usually pursued with wild abandon, and that leads to madness. We like to get carried away, and make our platform do everything for everyone. Inevitably, these attempts make people disgusted. Moderation is the key.


I run a sort of idle curiosity called Tiny KeyCounter. The project innocently counts every keystroke and mouse click on your computer, and lets you compare your numbers with other people. Despite being utterly pointless and stupid, it's pretty popular. At the moment, the server for the project is dead (hardware failure) and I'm strongly considering leaving it dead until I build the fabled Version 2.0 of the software.

I want to do a Version 2.0 because I want to add more statistics: things like how many miles you moved your mouse today. When I first started thinking about TKC2.0, I had some vague notion of making it a "distributed statistics platform" where the software could track any kind of numbers you want, and compare them all through a unified framework. The more I thought about it, the more I realized the idea sucks. Nobody would want to use it.

I still want to make it more flexible, though. I want to make a platform for tracking silly, meaningless computer usage statistics. TKC has been successful because it doesn't take itself seriously. The project admits (sometimes very loudly) that it's all more or less a big joke. A "distributed statistics platform" would totally negate that, and make the entire concept more or less worthless. What I need is a mini-platform: extensible, so that others can potentially expand on the concept, but humble enough to know that it should not attempt to become a panacea.

The way to do this, I think, is not to just build programs that let you write plugins. TKC1.0 tried that, and nobody wrote any plugins. Writing plugins in C++, VB, C#, etc. is boring and takes work. Even writing plugins in embedded languages takes skill and learning. Instead, I think the mini-platform concept should embrace the idea of highly domain-specific languages.

The essence of the mini-platform notion is to develop a high-level, highly abstract language where the concepts of a domain can be expressed and explored freely. This language should exist not as a binding to an existing language, nor as a language that tries to let one write any conceivable type of program. The language should be specifically and solely for discussing the problem domain, and for doing interesting things inside that domain.
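To make that a little more concrete, here's a rough sketch (in C++, since a standalone language is a lot to mock up in a journal entry) of the kind of narrow domain model a TKC-style statistics language might target. None of this is actual TKC code - the names and the event vocabulary are purely hypothetical - but it shows how small the domain really is: raw input events go in, named aggregations come out, and defining a new silly statistic is a few declarative lines rather than a plugin.

```cpp
#include <cmath>
#include <functional>
#include <map>
#include <string>
#include <utility>

// The only raw inputs the domain knows about. (Hypothetical, for illustration.)
struct InputEvent {
    enum class Kind { KeyPress, MouseClick, MouseMove } kind;
    float dx = 0, dy = 0;   // only meaningful for MouseMove
};

// A statistic is just a name plus a rule for folding events into a number.
struct Statistic {
    std::string name;
    std::function<double(double, const InputEvent&)> accumulate;
};

class StatTracker {
public:
    void define(Statistic s) {
        std::string key = s.name;                // copy the key before moving s
        stats_[key] = Entry{std::move(s), 0.0};
    }

    void feed(const InputEvent& e) {
        for (auto& kv : stats_)
            kv.second.total = kv.second.stat.accumulate(kv.second.total, e);
    }

    double value(const std::string& name) const { return stats_.at(name).total; }

private:
    struct Entry { Statistic stat; double total = 0.0; };
    std::map<std::string, Entry> stats_;
};

// Defining a new silly statistic is a few declarative lines, not a plugin.
void registerDefaults(StatTracker& t) {
    t.define({"keystrokes", [](double total, const InputEvent& e) {
        return e.kind == InputEvent::Kind::KeyPress ? total + 1 : total;
    }});
    t.define({"mouse_miles", [](double total, const InputEvent& e) {
        if (e.kind != InputEvent::Kind::MouseMove) return total;
        // Assumes ~96 pixels per inch, purely for the sake of example.
        return total + std::hypot(e.dx, e.dy) / (96.0 * 12.0 * 5280.0);
    }});
}
```

Whatever surface syntax the eventual mini-language ends up with, that's all it would ever need to talk about; it never has to pretend to be a general-purpose programming tool.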


I want to do this same thing with Freon. I did a lot of experimentation with new and wacky rendering techniques and algorithms. Usually, changing from one algorithm to another meant massive code rewrites. Since I was trying to build a dedicated application, I lacked dexterity and flexibility in my design philosophy. I have a backlog of dozens of cool algorithms I want to try, but each would take literally months to write from the ground up as a new rendering engine.

Towards the end of the project, I started to realize just how desperately I wanted a mini-platform. I wanted a way to easily explore new methods of rendering. I wanted to be able to describe rendering algorithms at a high level, with lots of abstraction, but without having to build them in an existing language like C. Now, almost two years later, I've figured out what I really needed: a programming language that was designed specifically and solely for ray-oriented graphics technology.

More interestingly, I think that's the future of the hardware, too. A dedicated circuit is a dinosaur that will not compete against programmable technologies. Even a programmable photon mapping engine is limited, though. What I think we really need is a hardware mini-platform: it speaks the language of rays, but does not restrict itself to one particular rendering method. Path tracing, Whitted raytracing, photon mapping, and some of my own techniques should all be possible on the hardware - and not merely possible, but naturally easy to build.
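As a purely hypothetical illustration of what "speaking the language of rays" means in practice, here's a minimal C++ sketch of a classic Whitted-style trace written against a tiny ray-oriented interface. None of this is Freon code, and the Scene interface is an assumption invented for the sketch; the point is that swapping in path tracing or a photon-map gather would replace only the trace function, not the interface underneath it.

```cpp
#include <optional>

struct Vec3 {
    float x = 0, y = 0, z = 0;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 reflect(const Vec3& d, const Vec3& n) { return d - n * (2.0f * dot(d, n)); }

struct Ray { Vec3 origin, dir; };

struct Hit {
    Vec3 point, normal;
    float reflectivity = 0;   // 0 = matte, 1 = perfect mirror
};

// The entire "platform" surface a renderer gets to talk to: intersect a ray,
// evaluate direct lighting at a hit (shadow rays live in there), and sample
// the background. Everything here is a hypothetical placeholder.
struct Scene {
    virtual std::optional<Hit> intersect(const Ray& r) const = 0;
    virtual Vec3 directLighting(const Hit& h) const = 0;
    virtual Vec3 background(const Ray& r) const = 0;
    virtual ~Scene() = default;
};

// Classic Whitted recursion: local shading plus a mirror bounce. A path tracer
// or a photon-map gather would swap out this function, not the Scene interface.
Vec3 trace(const Scene& scene, const Ray& ray, int depth) {
    if (depth <= 0) return {};
    auto hit = scene.intersect(ray);
    if (!hit) return scene.background(ray);

    Vec3 colour = scene.directLighting(*hit);
    if (hit->reflectivity > 0.0f) {
        // Nudge the bounce origin off the surface to avoid self-intersection.
        Ray bounce{hit->point + hit->normal * 1e-4f, reflect(ray.dir, hit->normal)};
        colour = colour + trace(scene, bounce, depth - 1) * hit->reflectivity;
    }
    return colour;
}
```

The hardware version of the idea has the same shape: the fixed part is ray traversal and intersection, and the programmable part is whatever you pour into the trace-and-shade step.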



My opinion is that this same pattern could have very good effects in other areas as well. I think we're close already; a lot of software already functions as a mini-platform, without ever being deliberately designed as one. I'd like to see some smart people apply some time and thought to this notion, and see where we can go by capitalizing on it.

What kinds of things do you see being useful as mini-platforms? How could existing projects be given more clarity, more direction, and more appeal by doing less? We've had the dream of highly extensible "plugin-centric" software for years, and code reuse is right up there with it. How might a mini-platform concept get us closer to those ideals?

If we set out to build a domain-specific language for every interesting domain, how much more efficient could we be? Most importantly, how many doors could we open for those who know the domain well, but not the existing platforms like C++?

Comments

Ravuya
Yes, I read this whole thing.

Quote: If we set out to build a domain-specific language for every interesting domain, how much more efficient could we be? Most importantly, how many doors could we open for those who know the domain well, but not the existing platforms like C++?


We do have lots of domain specific languages -- Verilog, PostScript, Processing, MIDI (kind of), TeX -- these are all neat languages that have no use in general development other than producing insanely cool hacks. Paul Ford wrote a great essay on Processing, which is something interesting to read.

It would be nice to produce a domain-specific language for unique problem sets that deserve it, and TeX is probably the best example that I can think of -- there are subsets for linguists, scientists, mathematicians, musicians, computer scientists... to write macros and program documents. Is that not a general programming language? Linguists are much more efficient because of the TIPA extensions and macros, I would expect.

It's akin to old-school pidgin English and Creole -- it gets what needs to be said, said. But you can't use it for much.

Of course, the problem with writing a domain specific language is that it has to be a domain that somebody is interested in enough to wander outside of the domain to learn tools to write the aforementioned DSL.
February 27, 2006 12:46 AM
ApochPiQ
Nice link, thanks. I could spend a lot of time getting excited about making the web suck less as an information medium, but I'll resist [grin]

DSLs are great, and a really high-potential solution (I think, at least). Obviously producing each language is going to be a problem; the foundation takes a lot of work.

So here's a bit of programmer-zen for you: solve the problem recursively. Build a language that is useful for building domain-specific languages, and linking them into existing technologies. Think of it like having a tool that lets you build Python, and then build boost::python for it.

I think we've got enough experience in formally declaring languages - especially programming-type languages - that this is a totally feasible goal. Of course, it also suffers from a metaproblem that is analogous to the problem that we want to solve with DSLs in the first place; namely, we have to strike an extremely sensitive balance between being domain-agnostic, and being application-specific.

In any case, like most problems, I suspect it would be pretty straightforward to resolve if more brain-time was devoted to thinking about it. Now I just have to find someone who has spare brain-time...
February 27, 2006 10:43 AM