Thursday, August 26, 2010
Skies & Silhouettes
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKIATLnfRYo6YDUUZn0_14Yx5HKP3UofMkVJJXtt9EJohOWgUSyB2RH6tt-MsCvIwzUD506UoX13RrErm7SmXLaCQdxY4JNE70PBN5wM6SnAdplqR3puwbzbSGvxN3gZaTPe6bZ3ruT1Q/s320/skies_silhouette.png)
Saturday, June 19, 2010
Flock in a cloudy blue sky
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcBmcELTIufh1TO_uDgkigSAbPzRfmweyObJDzesnW5gYj0wIGXye2AtG99QCGXgrxOFJgVjMkfdJ4Og7RBvGYxHHQZ1jnuipiCA_elNb8XynAMmB2ESWpH2Kx7nr1Mj6I8bNTOk56KUM/s320/flock_sky.png)
Thursday, June 17, 2010
LSystem 3
Click on the image below to load the canvas demo (requires Firefox or another non-IE browser).
I added a zoom, and moved the Javascript into its own file. A tip is to zoom with a very low iteration, as otherwise it is very slow. I am going to look at rendering to a buffer and allowing scrolling.
Monday, June 7, 2010
Falling Leaves
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhi0HFKYPXU_w-50LRHx1oT1aCqEJ0E43r1S_qksx3CEudNFwtzuPec78rqMogaI6hJ1gIswbW-oXuuCtB-I5_W9M0n3DKM-CxpK0iMJvKywUeb2DhriX1Okc1QitQmt0QzxPH_xtoeNXQ/s320/fall.png)
Tuesday, May 18, 2010
Cloudy blue Perlin sky
Click on the image to load the canvas demo (requires Firefox or another non-IE browser).

Perlin noise is a technique for generating procedural textures, commonly used for smoke, clouds, or adding a random-looking, realistic roughness to surfaces. The algorithm is basically to generate noise, then layer that noise across an image at different sizes and levels of transparency. This creates a self-similarity which mimics the appearance of some natural processes.

I found a canvas implementation by iron_wallaby and made the noise be the transparency of pure white, giving a cloud-like effect. This was originally going to be just the background for another canvas demo I was working on, but that is delayed as I'm currently doing bioinformatics study and working on a project in my spare time.

Things to improve: restricting how cloudy it is to a range, and removing some visual artifacts (lines/blockiness).
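As a rough sketch of the layering idea (my own illustration, not iron_wallaby's actual code): smooth value noise is sampled at several scales, each octave at double the frequency and half the amplitude, and the sum becomes the per-pixel alpha of pure white.

```javascript
// Build a smooth, tileable value-noise sampler over a size x size grid.
// The PRNG here is a hypothetical stand-in, not Perlin's gradient noise.
function makeNoise(size, seed) {
    var grid = [];
    var s = seed;
    for (var i = 0; i < size * size; i++) {
        s = (s * 1103515245 + 12345) % 2147483648;
        grid.push(s / 2147483648);
    }
    return function(x, y) {
        // Bilinear interpolation between grid points gives smooth noise.
        var x0 = Math.floor(x) % size, y0 = Math.floor(y) % size;
        var x1 = (x0 + 1) % size, y1 = (y0 + 1) % size;
        var fx = x - Math.floor(x), fy = y - Math.floor(y);
        var a = grid[y0 * size + x0], b = grid[y0 * size + x1];
        var c = grid[y1 * size + x0], d = grid[y1 * size + x1];
        var top = a + (b - a) * fx, bottom = c + (d - c) * fx;
        return top + (bottom - top) * fy;
    };
}

// Sum octaves: each layer is sampled at double the frequency and half the
// amplitude, producing the self-similar cloudy look. The result is used as
// the alpha of pure white for that pixel.
function cloudAlpha(x, y, noise, octaves) {
    var total = 0, amplitude = 0.5, frequency = 1 / 64;
    for (var o = 0; o < octaves; o++) {
        total += noise(x * frequency, y * frequency) * amplitude;
        amplitude /= 2;
        frequency *= 2;
    }
    return Math.min(1, total);
}
```

The canvas part is then just writing `cloudAlpha` into the alpha channel of a white ImageData, pixel by pixel.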
Tuesday, April 13, 2010
Russian Dolls - Random combinations
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgASu7o3wngY5qleYzQUm42NznFk-51Dla8cPmnSpRIGiQR8zTCjinqE6uv8yK6pM_MPUEc8HkElCoTqwx50kKEdsoG1Rm9zhfVvX0x12RUze5vxB0xRbCRI7AuAvqv_wotksA0FNOAbWE/s320/Screenshot+at+2011-10-20+00%253A36%253A32.png)
Wednesday, March 17, 2010
Canvas Wrapper with getTransform()
The Canvas spec does not include a getTransform() method, so after performing rotation/scale/translation operations there is no way to work out where you really are.
So, I have created a wrapper class, which sits over the top of Canvas, and should behave exactly the same, except you now have a getTransform() method which returns a matrix such that the following doesn't do anything:
```javascript
var m = ctx.getTransform();
ctx.setTransform(m[0][0], m[0][1], m[1][0], m[1][1], m[2][0], m[2][1]);
```

To use it, add the following line:

```html
<script type="text/javascript" src="canvas_wrapper.js"></script>
```

then wrap your canvas object in a CanvasWrapper, i.e.:

```javascript
ctx = new CanvasWrapper(document.getElementById("canvas").getContext("2d"));
```

It works by duplicating Canvas' matrix operations and mimicking the Canvas interface. An annoyance is that Canvas has public fields instead of accessor methods, so I can't tell when they have been changed and have to update the canvas state before any drawing call.
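The core of the wrapper is just bookkeeping. Here is a condensed sketch of the idea (not the actual canvas_wrapper.js; the real context is made optional so the matrix maths can be seen in isolation). The matrix layout matches the `setTransform()` call above: `[[a, b], [c, d], [e, f]]`.

```javascript
// Mirror every matrix operation in a 3x2 matrix of our own, forward the
// call to the real context, and expose getTransform().
function CanvasWrapper(ctx) {
    this.ctx = ctx;                      // the real 2d context (optional here)
    this.m = [[1, 0], [0, 1], [0, 0]];   // identity
    this.stack = [];                     // mirrors the context's save/restore stack
}
CanvasWrapper.prototype.translate = function(x, y) {
    this.m[2][0] += this.m[0][0] * x + this.m[1][0] * y;
    this.m[2][1] += this.m[0][1] * x + this.m[1][1] * y;
    if (this.ctx) this.ctx.translate(x, y);
};
CanvasWrapper.prototype.scale = function(x, y) {
    this.m[0][0] *= x; this.m[0][1] *= x;
    this.m[1][0] *= y; this.m[1][1] *= y;
    if (this.ctx) this.ctx.scale(x, y);
};
CanvasWrapper.prototype.rotate = function(angle) {
    // Post-multiply the current matrix by a rotation, as the canvas does.
    var c = Math.cos(angle), s = Math.sin(angle);
    var a = this.m[0][0], b = this.m[0][1];
    var cc = this.m[1][0], d = this.m[1][1];
    this.m[0][0] = a * c + cc * s;  this.m[0][1] = b * c + d * s;
    this.m[1][0] = cc * c - a * s;  this.m[1][1] = d * c - b * s;
    if (this.ctx) this.ctx.rotate(angle);
};
CanvasWrapper.prototype.save = function() {
    this.stack.push(this.m.map(function(row) { return row.slice(); }));
    if (this.ctx) this.ctx.save();
};
CanvasWrapper.prototype.restore = function() {
    this.m = this.stack.pop();
    if (this.ctx) this.ctx.restore();
};
CanvasWrapper.prototype.getTransform = function() {
    return this.m.map(function(row) { return row.slice(); });
};
```

A point (x, y) then maps to (a·x + c·y + e, b·x + d·y + f), so you can always work out where you really are.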
Update 10/2/2013: Thanks to Heikki Pora for a patch adding some missing features.
Update 19/5/2015: The HTML spec now includes context.currentTransform, but it doesn't work for me in Firefox or Chromium; ctx.mozCurrentTransform does work in Firefox.
Tuesday, March 16, 2010
L-System v2
I have modified my previous L-System demo and added a few new features. Click on the image on the left to load the canvas demo.

The axiom, rules and angle are now in editable fields, so you can change them and click [update] to view. This makes it quite quick to play around and explore L-systems.

For the code internals, I separated out the L-system code into its own .js file, and also changed how the turtle works. Instead of a case statement, it now uses a map of functions, which looks like this:
```javascript
return {
    // Turn right
    '+': function(args) { args.ctx.rotate(vary( args.angle, args.angle * args.angleVariance)); },
    // Turn left
    '-': function(args) { args.ctx.rotate(vary(-args.angle, args.angle * args.angleVariance)); },
    // Push
    '[': function(args) { args.ctx.save(); args.depth++; },
    // Pop
    ']': function(args) { args.ctx.restore(); args.depth--; },
    // Draw forward by length computed by recursion depth
    '|': function(args) { args.distance /= Math.pow(2.0, args.depth); drawForward(); },
    'F': drawForward,
    'A': drawForward,
    'B': drawForward,
    'G': goForward
};
```

This makes it extremely easy to add new turtle commands. For instance, to draw a green dot every time you see an 'L', just add the following to the map:
```javascript
turtleHandler['L'] = function(args) {
    args.ctx.fillStyle = '#00ff00';
    args.ctx.beginPath();
    args.ctx.arc(0, 0, 2, 0, 2 * Math.PI, true);
    args.ctx.fill();
};
```

This was used to draw the leaves in the image above. To get 'L's in the right place in the L-system, I used 'L->' as the first rule, which deletes all existing 'L's, so the only ones that remain at the end of the iterations are the most recently created ones, which sit on the (appropriately named) leaf nodes.

TODO: Make the extra functions an editable field; improve alignment/zooming.
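For context, the other half of an L-system is the string rewriting that feeds the turtle. It can be sketched like this (`expand` is a hypothetical name, not from my demo's .js file):

```javascript
// Apply the rewrite rules to the axiom for the given number of iterations.
// Characters without a rule are copied through unchanged; a rule mapping
// to '' deletes the character, which is exactly the 'L->' trick above.
function expand(axiom, rules, iterations) {
    var current = axiom;
    for (var i = 0; i < iterations; i++) {
        var next = '';
        for (var j = 0; j < current.length; j++) {
            var c = current.charAt(j);
            next += (c in rules) ? rules[c] : c;
        }
        current = next;
    }
    return current;
}
```

For example, Lindenmayer's original algae system `expand('A', {'A': 'AB', 'B': 'A'}, 3)` produces `'ABAAB'`, and the resulting string is what the turtle handler map walks over, one character at a time.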
Sunday, January 24, 2010
Thoughts on the degrees of procedural graphics
Procedural graphics is where content is generated by an algorithm, rather than by an artist. This is not either/or, but exists on a continuum.
You could probably argue that all computer graphics are at least somewhat procedural. Digital cameras use algorithms to turn light into image files, and image editors (like Photoshop) apply effects, driven by user input, to manipulate pixels. 3d modelling programs do a lot of behind-the-scenes work on low-level vertex positioning, so artists can concentrate more on the overall shape and animation.
Even though the data was created using algorithms and tools, most people would not consider it procedurally generated, as the output content is made to the vision of the artist and firmly under their control. But the lines blur further as we go from creation to display: once the 3d model is made, the steps from there to the final image on the screen involve an immense number of calculations as the model is loaded, transformed, textured, lit and then projected onto the camera.
3d animation has moved to become more procedural over the years, and is a good example of the tradeoffs that come with this decision. Consider animating a 3d model of a man running.
Static frames
Store the 3d points of the man for each frame of his walking. While the artist here has total control, they must spend a lot of effort and memory making lots of frames, and this generally looks jerky with "popping" between frames.
Key frames
Store the 3d points of key frames and interpolate between them. For instance, say you have 2 frames of walking animation with a 1-second transition. The renderer takes the 2 values, then picks a point in between them based on how much time has passed. While this is a big improvement on the static frames above, it can look somewhat unnatural, as organic beings don't move in straight lines.
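The interpolation step is just a linear blend per vertex; a minimal sketch (hypothetical helper names):

```javascript
// Blend a single value linearly from a to b by the elapsed fraction t in [0, 1].
function lerp(a, b, t) {
    return a + (b - a) * t;
}

// Interpolate every vertex between two keyframes; each frame is an
// array of [x, y, z] positions with matching indices.
function interpolateFrames(frameA, frameB, t) {
    return frameA.map(function(v, i) {
        return [lerp(v[0], frameB[i][0], t),
                lerp(v[1], frameB[i][1], t),
                lerp(v[2], frameB[i][2], t)];
    });
}
```

Halfway through a 1-second transition (t = 0.5), a vertex moving from [0, 0, 0] to [2, 4, 0] sits at [1, 2, 0]. Because the blend is linear, each vertex travels in a straight line, which is exactly why the result can look unnatural for organic motion.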
Bones
Models are constructed with internal bones, which have their movement constrained by joints which store ranges of motion. The surface skin is then calculated based on bone positions.
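In code, the joint constraint might be sketched like this (2D and hypothetical names, just to illustrate the idea):

```javascript
// Compute where a bone ends: its local angle is clamped to the joint's
// stored range of motion, then the position is chained from the parent's
// end point and accumulated angle.
function boneEnd(origin, parentAngle, bone) {
    var local = Math.min(bone.maxAngle, Math.max(bone.minAngle, bone.angle));
    var angle = parentAngle + local;
    return {
        x: origin.x + Math.cos(angle) * bone.length,
        y: origin.y + Math.sin(angle) * bone.length,
        angle: angle
    };
}
```

Chaining calls from the shoulder down to the wrist gives the whole arm's pose from just a few angles, and the skin is then computed around those bone positions.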
As you can see, the general progress has been towards the artist working more generally, and surrendering low level details like the final location of vertices to simulation. Simulation means the graphics are driven by the state of the world, which can be determined by user input and code (for instance physics and artificial intelligence).
The possible outcomes in simulations are near infinite. id Software (creators of Quake) were surprised when players used in-game explosions to push themselves extremely high in the air. Consider a screenshot of someone standing in a part of a level the designers never expected. Where did this image come from? The artists? The programmers? The players? The answer is all of them; it emerged out of the rules of the simulation.
Computers are getting relentlessly faster so we can display more and more, and the artists need help to generate the massive amounts of data. Instead of animating each strand of hair on a model, they designate types of hair which are animated and rendered by special tools. Instead of constructing trees by hand in a modelling program, there is software which can generate a whole forest worth of trees for you. There are algorithms for all kinds of effects, from water, clouds, rock textures to simulating crowds, battling armies or herding animals.
The advantages of moving to procedural generation are (in some cases) more natural-looking environments, adjustable detail levels based on CPU speed, and quicker development. This is also closer to how things are done in real life: when a movie director wishes to film a forest scene, he goes to a real forest that resembles what he has in mind; he doesn't have his art department construct thousands of trees by hand, which is what current computer techniques amount to.
My belief is that the future of computer graphics will continue this shift of low-level control to simulation. Content creation tools (like 3d modellers) will further integrate procedural techniques into their toolchains.
Thursday, January 14, 2010
City Skyline and random seeds
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhQdP1-QKAVyrdBaBcAG_z5RtEBq3w5iYKZFpqjseQ78f7OYqMOf_fiJ4MrWZJ7q0HyKYuSP28M-Q2N8sYuG5NGHOhwtW4xvx7kuzixZBymup8juNPxet_Q1I1QByN51O9-09LY53Il_w/s320/Screenshot+at+2011-10-20+00%253A38%253A19.png)