Procedural Textures: Gaming’s Future

This French company is getting a lot of press lately, so let’s talk about procedural textures for a bit. The first thing to note is that the company would be the first to admit that procedural textures aren’t new, so let’s take the hype down a notch. Some of us have been using them since before Photoshop was around, often out of sheer necessity rather than personal preference.

What this company claims to have built is "better math" to represent more complex, organic phenomena, using wavelets rather than Fourier transforms, I gather. That’s good. But the thing that typically holds procedural anything back isn’t the math, it’s the art tools, and those I haven’t seen. Perhaps they’ve solved that too. But that’s what they need to be showing the world, not the benefits of small game size.

There’s no reason someone couldn’t make a great Photoshop-like app that computes everything procedurally. I mean, Photoshop is actually somewhat procedural already, though the procedures are very basic, like "apply-airbrush-at-(100,200)" repeated a million times. It’s only for convenience and speed that we save files in JPEG form. Even JPEG is procedural, in a sense: the format stores the parameters needed to reconstruct your image, not the raw image bits, and the magic compression actually happens on those parameters.
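To make the image-as-a-replayable-procedure idea concrete, here’s a toy sketch (the crude square airbrush and all the names are my own invention, not how Photoshop or JPEG actually work): the "file" is just a list of operations, and rendering means replaying them.

```python
WIDTH, HEIGHT = 256, 256

def apply_airbrush(pixels, x, y, radius=3, value=255):
    """Stamp a crude square 'airbrush' dab onto a grayscale buffer."""
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            px, py = x + dx, y + dy
            if 0 <= px < WIDTH and 0 <= py < HEIGHT:
                pixels[py][px] = value

# The "file" is just a parameter list, not pixel data.
ops = [("airbrush", 100, 200), ("airbrush", 30, 40)]

def render(ops):
    """Reconstruct the image by replaying the stored operations."""
    pixels = [[0] * WIDTH for _ in range(HEIGHT)]
    for name, x, y in ops:
        if name == "airbrush":
            apply_airbrush(pixels, x, y)
    return pixels

image = render(ops)
```

The saved data here is two small tuples instead of 256×256 pixel values; the tradeoff, of course, is the CPU time spent replaying them.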

The tradeoff, as with any kind of compression, is saving space vs. spending time. And one way to re-balance that equation is to have a game where textures change gradually over time. With standard texturing, you’d have a hard time of it, perhaps needing hundreds of variations built ahead of time. With procedural methods, you could expose certain parameters (it’s better called parametric texturing then) and get vastly different results with very little work.
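A minimal sketch of that parametric idea, with made-up math (the sine-based "noise" below is a cheap stand-in for a real noise function): one exposed parameter morphs the whole texture, so a hundred variations cost nothing to author.

```python
import math

def rust_texture(x, y, weathering):
    """Toy parametric texture: 'weathering' in [0, 1] blends a clean
    base pattern toward a noisier, darker, rusty look."""
    base = 0.5 + 0.5 * math.sin(x * 0.3) * math.cos(y * 0.3)
    noise = 0.5 + 0.5 * math.sin(x * 1.7 + y * 2.3)  # stand-in for real noise
    return (1.0 - weathering) * base + weathering * noise * 0.6

# Same procedure, vastly different results by turning one knob:
clean = [[rust_texture(x, y, 0.0) for x in range(8)] for y in range(8)]
rusty = [[rust_texture(x, y, 0.9) for x in range(8)] for y in range(8)]
```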

The question is how many games need to morph their textures on the fly? And the answer is that this is useful not only for time-variant parameters, but also space-variant ones, e.g., creating instant diversity, making a forest where no two leaves are alike. Ultimately, though, you come up against the limitations of the hardware, where even procedural textures need to be expanded and cached as big textures for optimal rendering, and only so many can be stored or produced on the fly. (There’s another drawback: most hardware now supports one or more pre-compressed texture formats, which are incompatible with procedurally generated output, so we waste valuable texture memory too.)
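The no-two-leaves-alike trick usually comes down to seeding the same procedure with a per-instance value. A hypothetical sketch: one function, a thousand distinct leaves, zero stored variations.

```python
import random

def leaf_color(instance_id):
    """Derive a deterministic but unique-looking green per leaf from its
    instance id, so a forest needs one procedure, not thousands of assets."""
    rng = random.Random(instance_id)  # space-variant: seed by instance
    return (rng.randrange(40),        # red
            100 + rng.randrange(120), # green, the dominant channel
            rng.randrange(40))        # blue

leaves = [leaf_color(i) for i in range(1000)]
```

Because the seed is the instance id, the same leaf always re-renders the same way, which matters when the expanded texture gets evicted from the cache and must be regenerated.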

However, even with all of those technical issues, my thesis is that procedural modeling tools are indeed the future, as they solve a lot of important problems. But my sense is that the art tools aren’t quite there yet.

Making smaller textures may not matter much for games loaded from disk, as disk formats grow and the CPU time spent expanding a texture may closely match the load time for a less compressed version. Where procedural texturing makes the most sense is where textures need to be dynamic, or where they need to fit in a really small space or narrow pipe, as with online games. That’s one reason some online worlds use procedural objects already: the CPU cost is worth it because the bandwidth is the bottleneck.
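Some back-of-envelope arithmetic on the narrow-pipe case, with illustrative numbers of my own choosing (a 56k modem and an uncompressed 1024×1024 RGBA texture):

```python
raw_bytes = 1024 * 1024 * 4        # one uncompressed 1024x1024 RGBA texture
param_bytes = 64                   # a handful of procedural parameters
pipe_bytes_per_sec = 56_000 // 8   # a 56k modem, ignoring protocol overhead

raw_seconds = raw_bytes / pipe_bytes_per_sec      # roughly ten minutes
param_seconds = param_bytes / pipe_bytes_per_sec  # well under a second
```

Even if expanding those 64 bytes into a full texture burned several seconds of CPU, it would still win by two orders of magnitude over shipping the pixels.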

Ultimately, what artists really want is control of the final image quality, because that’s what they do for a living. So if you really want to sell them on the technology, the thing to emphasize is how they can re-use their work and do more in a day. If I can make 2X the number of textures because some of them are derived from others, that’s a big time saver, as long as I like the results.
