Procedural graphics means content that is generated by an algorithm rather than directly by an artist. This is not an either/or distinction; it exists on a continuum.
You could probably argue that all computer graphics are at least somewhat procedural. Digital cameras use algorithms to turn light into image files, and image editors (like Photoshop) apply effects, driven by user input, to manipulate pixels. 3d modelling programs do a lot of behind-the-scenes work on low-level vertex positioning so that artists can concentrate on the overall shape and animation.
Even though this data is created using algorithms and tools, most people would not consider it procedurally generated, as the output is made to the artist's vision and remains firmly under their control. But the lines blur further as we move from creation to display. Once the 3d model is made, the steps from there to the final image on the screen involve an immense number of calculations as the model is loaded, transformed, textured, lit and then projected onto the camera.
3d animation has become steadily more procedural over the years, and is a good example of the tradeoffs that come with this shift. Consider animating a 3d model of a man running.
Static frames
Store the 3d points of the man for each frame of his run. While the artist has total control here, they must spend a lot of effort and memory producing many frames, and playback generally looks jerky, with "popping" between frames.
Key frames
Store the 3d points of key frames and interpolate between them. For instance, say you have 2 frames of the run animation with a 1 second transition between them. The renderer takes the two stored frames, then picks a point in between them based on how much time has passed. While this is a big improvement on the static frames above, it can look somewhat unnatural, as organic beings don't move in straight lines.
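As a rough sketch of that interpolation (the Vec3 type and the lerp/interpolateFrame names here are illustrative, not from any particular engine): with a 1 second transition, a model half a second in is drawn at the halfway point between the two key frames.

    type Vec3 = { x: number; y: number; z: number };

    function lerp(a: number, b: number, t: number): number {
      return a + (b - a) * t;
    }

    // t is the fraction of the transition elapsed: 0 gives the first key frame,
    // 1 gives the second, 0.5 gives the halfway point.
    function interpolateFrame(frameA: Vec3[], frameB: Vec3[], t: number): Vec3[] {
      return frameA.map((p, i) => ({
        x: lerp(p.x, frameB[i].x, t),
        y: lerp(p.y, frameB[i].y, t),
        z: lerp(p.z, frameB[i].z, t),
      }));
    }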
Bones
Models are constructed with internal bones, whose movement is constrained by joints that store ranges of motion. The surface skin is then calculated from the bone positions.
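A much simplified sketch of that skinning step, assuming each vertex stores one weight per bone and each bone carries a 4x4 transform (the names and layout here are illustrative only, not from any specific engine):

    type Vec3 = { x: number; y: number; z: number };

    // A bone's current transform as a 4x4 matrix in row-major order (16 numbers).
    interface Bone { transform: number[] }

    // Apply the affine part of a 4x4 transform to a point.
    function transformPoint(m: number[], p: Vec3): Vec3 {
      return {
        x: m[0] * p.x + m[1] * p.y + m[2] * p.z + m[3],
        y: m[4] * p.x + m[5] * p.y + m[6] * p.z + m[7],
        z: m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11],
      };
    }

    // The skin vertex ends up at the weighted average of where each bone's
    // transform would carry its rest-pose position.
    function skinVertex(restPos: Vec3, bones: Bone[], weights: number[]): Vec3 {
      const out = { x: 0, y: 0, z: 0 };
      bones.forEach((bone, i) => {
        const p = transformPoint(bone.transform, restPos);
        out.x += p.x * weights[i];
        out.y += p.y * weights[i];
        out.z += p.z * weights[i];
      });
      return out;
    }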
As you can see, the general progression has been towards the artist working at a higher level, surrendering low-level details like the final location of vertices to simulation. Simulation means the graphics are driven by the state of the world, which can be determined by user input and code (for instance physics and artificial intelligence).
The possible outcomes in simulations are near infinite. id Software (creators of Quake) were surprised when players used in-game explosions to launch themselves extremely high into the air. Consider a screenshot of someone standing in a part of a level the designers never expected. Where did this image come from? The artists? The programmers? The players? The answer is all of them: it emerged out of the rules of the simulation.
Computers are getting relentlessly faster, so we can display more and more, and artists need help generating the massive amounts of data involved. Instead of animating each strand of hair on a model, they designate types of hair which are animated and rendered by special tools. Instead of constructing trees by hand in a modelling program, there is software that can generate a whole forest's worth of trees for you. There are algorithms for all kinds of effects, from water, clouds and rock textures to simulating crowds, battling armies or herding animals.
The advantages of moving to procedural generation are (in some cases) more natural-looking environments, detail levels that can be adjusted to the available CPU, and quicker development. This is actually closer to how things are done in real life: when a movie director wants to film a forest scene, he goes to a real forest that resembles what he has in mind, rather than having his art department construct thousands of trees by hand, which is closer to how computer scenes are built today.
My belief is that the future of computer graphics will continue this handover of low-level control to simulation. Content creation tools (like 3d modellers) will further integrate procedural techniques into their tool chains.
Thursday, January 14, 2010
City Skyline and random seeds
Click the image on the left to view the canvas demo. (You need a browser other than Internet Explorer.)
This generates a random city skyline, which fades from sunset to night time.
One way to achieve the effect would have been to generate the buildings, store them, and alter their state as time passes, but I did it a different way to demonstrate a property of generating scenes with pseudo random numbers.
Computers are deterministic, so these random numbers are not really random. There are algorithms which can produce a random-looking number from another number, so by using one number to create the next you can build up a random-looking sequence. The first number you start with is called a (random) seed.
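A minimal sketch of such a generator, using the classic linear congruential recipe (the constants and the makeRandom name are just one common choice, not what any particular library uses):

    // Each call turns the previous number into the next one, so the whole
    // sequence is completely determined by the starting seed.
    function makeRandom(seed: number): () => number {
      let state = seed >>> 0;                          // keep the state as a 32 bit value
      return function (): number {
        state = (state * 1664525 + 1013904223) >>> 0;  // next number from the last one
        return state / 4294967296;                     // scale to the range [0, 1)
      };
    }

    const rand = makeRandom(42);
    console.log(rand(), rand(), rand());  // looks random, but is fully determined by the seed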
A generator will usually set a seed for you if you don't specify one, using whatever vaguely random value it can get hold of, e.g. the current time, the id of the process or the temperature of the CPU. Not setting the seed is thus the normal thing to do, as each run of the program will then probably be different and things will appear more random.
However, by setting the seed yourself you can generate a repeatable sequence of numbers. This means that if some data is created by a sequence of random numbers, you can re-create it exactly by setting the seed to the same value. This is an incredible CPU/memory trade off: in exchange for CPU time, a single integer seed can expand into practically infinite complexity! A game in the 1980s called Elite used this technique to generate a huge game world on the tiny computers of the time.
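As a small sketch of that trade off, here nothing about the skyline is stored anywhere; the same integer seed rebuilds exactly the same list of building heights on demand (makeRandom is the little generator sketched above, repeated so the example stands alone):

    function makeRandom(seed: number): () => number {
      let state = seed >>> 0;
      return () => {
        state = (state * 1664525 + 1013904223) >>> 0;
        return state / 4294967296;
      };
    }

    // Build a list of building heights from nothing but a seed.
    function generateSkyline(seed: number, buildingCount: number): number[] {
      const rand = makeRandom(seed);
      const heights: number[] = [];
      for (let i = 0; i < buildingCount; i++) {
        heights.push(50 + rand() * 200);  // a height between 50 and 250 pixels
      }
      return heights;
    }

    console.log(generateSkyline(7, 5));
    console.log(generateSkyline(7, 5));  // identical to the line above: same seed, same world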
The way the demo works is to reset the random seed each frame and then generate everything again using repeated calls to random. Each window has a random time at which it will turn on, and if the current time is past that time, it is drawn lit.
This demonstrates how you can vary some things between the repeated calls, but critically the calls to random must follow the same sequence, i.e. don't put calls to random inside branches that depend on something that varies from frame to frame.
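A sketch of that per-frame approach (the drawing itself is replaced by a console.log, and makeRandom is the generator sketched earlier): the seed is reset at the top of every frame, the same sequence of random calls rebuilds every building and window, and the only thing that changes is the comparison against the elapsed time.

    function makeRandom(seed: number): () => number {
      let state = seed >>> 0;
      return () => {
        state = (state * 1664525 + 1013904223) >>> 0;
        return state / 4294967296;
      };
    }

    function drawFrame(time: number): void {
      const rand = makeRandom(1234);       // reset to the same seed every frame
      for (let b = 0; b < 20; b++) {
        const height = 50 + rand() * 200;  // each building gets the same height every frame
        for (let w = 0; w < 10; w++) {
          const turnOnTime = rand() * 30;  // this window's turn-on time, fixed across frames
          const lit = time > turnOnTime;   // only this comparison varies as time passes
          console.log(b, w, height, lit);  // stand-in for the actual canvas drawing
        }
      }
    }

    // Note that rand() is called unconditionally; putting it inside a branch that
    // depends on the time would shift the sequence and scramble the scene.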