Thursday, August 26, 2010

Skies & Silhouettes

Click the image on the left to start the demo. I've got big plans for doing a project with heaps of skies and silhouettes (I'm kind of obsessed by them), and made this little program to quickly test out what different generated skies look like. I had fun playing with it, hope you do too. Try out some examples, change the color values and click the button.

Update: There appears to be a bug that shows up in Chrome, where the sky is really blocky, but it works fine in Firefox.

Update (23/9/10): Added a MaxAlpha setting. The actual alpha is a random number between 0 and this value, so the fact that you can set a number much higher than 255 is a feature, not a bug.
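As a sketch of how that setting behaves (hypothetical code, not the demo's source - the function name and the clamp are my assumptions):

```javascript
// Hypothetical sketch of the MaxAlpha setting: each sky element gets a
// random alpha between 0 and maxAlpha, and anything above 255 clamps to
// fully opaque - so large maxAlpha values make opaque elements more likely.
function layerAlpha(maxAlpha, rand) {
    var a = rand() * maxAlpha;           // random number between 0 and maxAlpha
    return Math.min(255, Math.round(a)); // alpha is capped at 255
}
```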

Saturday, June 19, 2010

Flock in a cloudy blue sky

Click on the image to load the canvas demo (requires Firefox or another non-IE browser). I took the existing Javascript boids implementation from Coderholic, and made it look pretty with swallows on a cloudy background. The birds follow the standard three rules for Boids as first implemented by Craig Reynolds:

Separation: steer to avoid crowding local flockmates
Alignment: steer towards the average heading of local flockmates
Cohesion: steer to move toward the average position of local flockmates

These simple rules for each individual lead to the group behavior - there is no central group intelligence or control. Flocking behaviors like this are often described as being emergent.

Update (23/9/10): There is now an initial pause as the simulation runs 500 steps before displaying the boids. This removes the ugly initial craziness before they start flocking properly.
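The three rules can be sketched as a single steering function (a minimal 2D sketch of the standard rules, not Coderholic's actual implementation; the equal weighting of the three forces is my simplification):

```javascript
// One steering step for a boid given its visible neighbours.
// Each neighbour has a position (x, y) and a velocity (vx, vy).
function steer(boid, neighbours) {
    var sep = {x: 0, y: 0}, ali = {x: 0, y: 0}, coh = {x: 0, y: 0};
    neighbours.forEach(function(n) {
        sep.x += boid.x - n.x; sep.y += boid.y - n.y; // Separation: away from flockmates
        ali.x += n.vx;         ali.y += n.vy;         // Alignment: sum of headings
        coh.x += n.x;          coh.y += n.y;          // Cohesion: sum of positions
    });
    var k = neighbours.length;
    if (k === 0) return {x: 0, y: 0};
    // Average each component; cohesion steers toward the mean position.
    return {
        x: sep.x / k + ali.x / k + (coh.x / k - boid.x),
        y: sep.y / k + ali.y / k + (coh.y / k - boid.y)
    };
}
```

A real implementation would weight the three forces and limit them by a view radius, but the emergent flocking comes entirely from rules like these applied per individual.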

Thursday, June 17, 2010

LSystem 3

Click on the image below to load the canvas demo (requires Firefox or another non-IE browser).

 I added a zoom, and moved the Javascript into its own file. A tip is to zoom with a very low iteration, as otherwise it is very slow. I am going to look at rendering to a buffer and allowing scrolling.

Monday, June 7, 2010

Falling Leaves

Click on the image on the left to load the canvas demo (requires Firefox or another non-IE browser). As with all of these demos, try clicking reload a few times to see different random versions. Just an idea I had while riding my bike home, looking at fallen leaves (it's the start of winter here). I changed the l-system renderer to use curves, which I think makes the trees look more natural. I also used the canvas wrapper to get the position of the leaves as they are generated, so I can animate them.

Update (23/9/10): Added a hack to make leaves appear to collect on the ground beneath the trees.

Tuesday, May 18, 2010

Cloudy blue Perlin sky


Click on the image to load the canvas demo (requires Firefox or another non-IE browser). Perlin noise is a technique for generating procedural textures, commonly used for smoke, clouds, or to add a random-looking, realistic roughness to surfaces. The algorithm is basically to generate noise, then layer that noise across the image at different scales and levels of transparency. This creates a self-similarity which mimics the appearance of some natural processes. I found a canvas implementation by iron_wallaby and made the noise be the transparency of pure white, giving a cloud-like effect.

This was originally going to be just the background for another canvas demo I was working on, but that is delayed as I'm currently doing bioinformatics study/working on a project in my spare time. Things to improve are restricting how cloudy it is to a range, and removing some visual artifacts (lines/blockiness).
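The layering idea can be sketched in one dimension (a generic octave-summing sketch, not iron_wallaby's implementation; the doubling frequency and halving amplitude are the usual defaults, not necessarily what the demo uses):

```javascript
// Sum a base noise function over several "octaves": each octave samples
// at double the frequency (finer detail) with half the amplitude
// (contributing less), then the total is normalised back to [0, 1].
// The result can be used directly as the transparency of pure white.
function octaveNoise(noise, x, octaves) {
    var total = 0, freq = 1, amp = 1, maxAmp = 0;
    for (var i = 0; i < octaves; i++) {
        total += noise(x * freq) * amp;
        maxAmp += amp;
        freq *= 2;   // finer detail each octave
        amp *= 0.5;  // but less influence
    }
    return total / maxAmp; // normalise so the sum stays in [0, 1]
}
```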

Tuesday, April 13, 2010

Russian Dolls - Random combinations

Click on the image on the left to load the canvas demo (requires Firefox or another non-IE browser). This demo shows how a huge amount of content can be generated by randomly combining variations. The technique is useful because, for example, a crowd of identical people looks unnatural, but you don't necessarily care what the individuals look like, so long as they match a general theme. The numbers get large very quickly - for instance, 3 variations of hair color and 2 of eye color gives 3 * 2 = 6 possible combinations (assuming independent assortment). Adding another trait multiplies this again, and soon the number of possible variations becomes enormous. The code internals do not resemble genetics, but there is a similar idea in nature, with alleles. In the future I may use the dolls and their many variations as phenotypes for some experiments in this area.

The reason I chose Russian dolls (aside from just liking their recursive nature) is that my daughter Caitlin has lots of them on various clothes, bags, sheets etc, so this is for her. It works fine in Firefox, but there is something strange going on with parts of the faces not being drawn in some versions of Chrome.
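The counting argument is easy to sketch (the trait names and helper functions here are hypothetical illustrations, not the demo's internals):

```javascript
// Independent traits: the total number of combinations is the product
// of each trait's variant count, and a random doll is one pick per trait.
var traits = { hair: ['blonde', 'brown', 'black'], eyes: ['blue', 'green'] };

function combinationCount(traits) {
    return Object.keys(traits).reduce(function(n, t) {
        return n * traits[t].length;
    }, 1);
}

function randomDoll(traits, rand) {
    var doll = {};
    Object.keys(traits).forEach(function(t) {
        doll[t] = traits[t][Math.floor(rand() * traits[t].length)];
    });
    return doll;
}
```

Adding one more trait with, say, 4 variants multiplies the count to 24; the growth is exponential in the number of traits.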

Wednesday, March 17, 2010

Canvas Wrapper with getTransform()

The Canvas spec does not include a getTransform() method, so after performing rotation/scale/translation operations there is no way to work out where you really are.

So, I have created a wrapper class, which sits over the top of Canvas, and should behave exactly the same, except you now have a getTransform() method which returns a matrix such that the following doesn't do anything:
var m = ctx.getTransform();
ctx.setTransform( m[0][0], m[0][1], m[1][0], m[1][1], m[2][0], m[2][1] );
To use, add the following line:
<script type="text/javascript" src="canvas_wrapper.js"></script>
then wrap your canvas object in a CanvasWrapper, ie:
ctx = new CanvasWrapper(document.getElementById("canvas").getContext("2d"));
It works by duplicating Canvas' matrix operations, and mimicking the Canvas interface. An annoyance is that Canvas has public fields, instead of accessor methods, so I can't tell when it has been changed and have to update the canvas state before any drawing call.
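The core trick - duplicating the matrix operations - can be sketched as pure functions (a simplified standalone sketch, not the wrapper's actual source; the matrix layout matches the [[a,b],[c,d],[e,f]] format getTransform() returns):

```javascript
// Track the 2D affine transform as [[a,b],[c,d],[e,f]], mirroring what
// Canvas does internally. Points transform as x' = a*x + c*y + e,
// y' = b*x + d*y + f (row-vector convention).
function identity() { return [[1, 0], [0, 1], [0, 0]]; }

// Multiply two affine matrices (m applied first, then n).
function multiply(m, n) {
    return [
        [m[0][0] * n[0][0] + m[0][1] * n[1][0], m[0][0] * n[0][1] + m[0][1] * n[1][1]],
        [m[1][0] * n[0][0] + m[1][1] * n[1][0], m[1][0] * n[0][1] + m[1][1] * n[1][1]],
        [m[2][0] * n[0][0] + m[2][1] * n[1][0] + n[2][0],
         m[2][0] * n[0][1] + m[2][1] * n[1][1] + n[2][1]]
    ];
}

// Each canvas operation pre-multiplies the tracked matrix, so later
// operations act in the coordinate space set up by earlier ones.
function translate(m, x, y) { return multiply([[1, 0], [0, 1], [x, y]], m); }
function scale(m, sx, sy)   { return multiply([[sx, 0], [0, sy], [0, 0]], m); }
function rotate(m, angle) {
    var c = Math.cos(angle), s = Math.sin(angle);
    return multiply([[c, s], [-s, c], [0, 0]], m);
}
```

A wrapper only needs to call these alongside the real ctx.translate/scale/rotate and hand back the tracked matrix from getTransform().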

Update 10/2/2013: Thanks to Heikki Pora for a patch adding some missing features.

Update 19/5/2015: The HTML spec now includes context.currentTransform, but it doesn't work for me in Firefox or Chromium; ctx.mozCurrentTransform does work in Firefox.

Tuesday, March 16, 2010

L-System v2



 

I have modified my previous L-System demo and added a few new features. Click on the image on the left to load the canvas demo. The axiom, rules and angle are now in editable fields, so you can change them and click [update] to view. This makes it quite quick to play around and explore l-systems. For the code internals, I separated out the l-system code into its own .js file, and also changed how the turtle works. Instead of a case statement, it now uses a map of functions, which looks like this:

return {
    // Turn right
    '+': function(args) { args.ctx.rotate(vary( args.angle, args.angle * args.angleVariance)); },
    // Turn left
    '-': function(args) { args.ctx.rotate(vary(-args.angle, args.angle * args.angleVariance)); },
    // Push
    '[': function(args) { args.ctx.save(); args.depth++; },
    // Pop
    ']': function(args) { args.ctx.restore(); args.depth--; },
    // Draw forward by a length that shrinks with recursion depth
    '|': function(args) { args.distance /= Math.pow(2.0, args.depth); drawForward(args); },
    'F': drawForward,
    'A': drawForward,
    'B': drawForward,
    'G': goForward
};
This makes it extremely easy to add new turtle commands, for instance to draw a green dot every time you see an 'L' just add the following to the map:
turtleHandler['L'] = function(args) {
    args.ctx.fillStyle = '#00ff00';
    args.ctx.beginPath();
    args.ctx.arc(0, 0, 2, 0, 2 * Math.PI, true);
    args.ctx.fill();
};
This was used to draw the leaves in the image above. To get 'L's in the right place in the l-system, I used 'L->' as the first rule, which deletes all existing 'L's, so the only ones that remain at the end of the iterations are the most recently created ones, which are on the (appropriately named) leaf nodes. TODO: Make extra functions an editable field, make better alignment/zooming.
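The rewriting step driven by the axiom and rules fields can be sketched as a generic L-system expander (my sketch, not the demo's .js file; rules map each character to its replacement, so a rule like 'L->' maps 'L' to the empty string):

```javascript
// Repeatedly rewrite the axiom string: each character with a rule is
// replaced by the rule's right-hand side; characters without a rule are
// copied through unchanged. A rule mapping to '' deletes the character,
// which is how 'L->' leaves 'L's only on the newest (leaf) nodes.
function expand(axiom, rules, iterations) {
    var s = axiom;
    for (var i = 0; i < iterations; i++) {
        s = s.split('').map(function(ch) {
            return rules.hasOwnProperty(ch) ? rules[ch] : ch;
        }).join('');
    }
    return s;
}
```

For example Lindenmayer's original algae system {A -> AB, B -> A} gives A, AB, ABA, ABAAB, ...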

Sunday, January 24, 2010

Thoughts on the degrees of procedural graphics

Procedural graphics is where content is generated by an algorithm, rather than by an artist. This is not either/or, but exists on a continuum.

You could probably argue that all computer graphics are at least somewhat procedural. Digital cameras use algorithms to turn light into image files, and image editors (like Photoshop) apply effects, driven by user input, to manipulate pixels. 3d modelling programs do a lot of behind-the-scenes work on low-level vertex positioning, so the artists can concentrate more on the overall shape and animation.

Even though the data was created using algorithms and tools, most people would not consider it procedurally generated, as the output content is made to the vision of the artist and firmly under their control. But the lines blur further as we go from creation to display. Once the 3d model is made, the steps from there to the final image on the screen involve an immense number of calculations as the model is loaded, transformed, textured, lit and then projected onto the camera.

3d animation has moved to become more procedural over the years, and is a good example of the tradeoffs that come with this decision. Consider animating a 3d model of a man walking.

Static frames
Store the 3d points of the man for each frame of his walking. While the artist here has total control, they must spend a lot of effort and memory making lots of frames, and this generally looks jerky with "popping" between frames.

Key frames
Store 3d points of key frames and interpolate between them. For instance, say you have 2 frames of walking animation with a 1 second transition. The renderer takes the 2 values, then picks a point in between them based on how much time has passed. While this is a big improvement on the static frames above, it can look somewhat unnatural, as organic beings don't move in straight lines.
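The key-frame approach can be sketched in a few lines (an illustrative sketch, not from any particular renderer; 2D points for brevity):

```javascript
// Linear interpolation: blend each coordinate between the two key frames
// by t, the fraction of the transition time that has elapsed (0 to 1).
function lerp(a, b, t) { return a + (b - a) * t; }

function interpolateFrame(frameA, frameB, t) {
    return frameA.map(function(p, i) {
        return { x: lerp(p.x, frameB[i].x, t), y: lerp(p.y, frameB[i].y, t) };
    });
}
```

The "straight lines" problem is exactly this lerp: every point travels directly between its two key positions, which is why smoother schemes (splines, bones) were developed.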

Bones
Models are constructed with internal bones, which have their movement constrained by joints which store ranges of motion. The surface skin is then calculated based on bone positions.

As you can see, the general progress has been towards the artist working more generally, and surrendering low level details like the final location of vertices to simulation. Simulation means the graphics are driven by the state of the world, which can be determined by user input and code (for instance physics and artificial intelligence).

The possible outcomes in simulations are near infinite. Id software (creators of Quake) were surprised when players used in-game explosions to push themselves extremely high in the air. Consider a screenshot of someone standing in a part of a level the designers never expected. Where did this image come from? The artists? The programmers? The players? The answer is all of them, and it emerged out of the rules of the simulation.

Computers are getting relentlessly faster so we can display more and more, and the artists need help to generate the massive amounts of data. Instead of animating each strand of hair on a model, they designate types of hair which are animated and rendered by special tools. Instead of constructing trees by hand in a modelling program, there is software which can generate a whole forest's worth of trees for you. There are algorithms for all kinds of effects, from water, clouds and rock textures to simulating crowds, battling armies or herding animals.

The advantages of moving to procedural generation are (in some cases) more natural looking environments, adjustable detail levels based on CPU speed, and quicker development. This is actually closer to how things are done in real life: when a movie director wishes to film a forest scene, he goes to a real forest that resembles what he has in mind; he doesn't have his art department construct thousands of trees by hand, which is more akin to current computer techniques.

My belief is that the future of computer graphics will continue this development of moving low level control to simulation. Content creation tools (like 3d modellers) will further integrate procedural techniques into their tool chain.

Thursday, January 14, 2010

City Skyline and random seeds

Click the image on the left to view the canvas demo (you need a browser other than Internet Explorer). This generates a random city skyline, which fades from sunset to night time. One way to achieve the effect would have been to generate the buildings, store them and alter their state as we go, but I did it a different way to demonstrate a property of generating scenes with pseudo-random numbers.

Computers are deterministic, so the random numbers are not really random. There are algorithms which can produce a random-looking number from another number, and so by using one number to create the next, you can create a random-looking sequence. The first number you start with is called a (random) seed. A generator will usually set a seed if you don't specify one, using something vaguely random it can get, e.g. the current time, the id of the process or the temperature of the CPU. Not setting the seed is thus the normal thing to do, as each run of the program will probably be different and things will appear more random.

However, by setting the seed you can generate a repeatable sequence of numbers. This means that if you have some data that is created by a sequence of random numbers, then by setting the seed to the same value, you can re-create it again. This is an incredible CPU/memory trade-off: you spend CPU time regenerating the scene instead of memory storing it, so enormous complexity can be rebuilt from a single integer seed! A game in the 1980s called Elite used this technique to generate a huge game world on the tiny computers of the time.

The way the demo works is to reset the random seed each frame and then generate everything again using repeated calls to random. The windows have a random value for the time they'll turn on, and if it's past that time, they will be on. This demonstrates how you can change some things in the repeated calls, but critically the calls to random must be the same sequence, i.e. don't put calls to random in branches that depend on something that varies each call.
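The seed trick is easy to demonstrate with a small seeded generator (this is a standard linear congruential generator with the well-known Numerical Recipes constants, shown to illustrate the idea - it is not the demo's actual generator):

```javascript
// A seeded pseudo-random generator: each call derives the next 32-bit
// state from the previous one, so the same seed always reproduces the
// exact same sequence of numbers in [0, 1).
function makeRandom(seed) {
    var state = seed >>> 0; // force to an unsigned 32-bit integer
    return function() {
        // Numerical Recipes LCG: state = (1664525 * state + 1013904223) mod 2^32
        state = (Math.imul(1664525, state) + 1013904223) >>> 0;
        return state / 4294967296;
    };
}
```

Resetting the seed each frame is just calling makeRandom(sameSeed) again: every subsequent call to the returned function replays the identical sequence, so the same skyline is rebuilt from scratch.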