Procedural graphics is content generated by an algorithm rather than crafted directly by an artist. This is not an either/or distinction; it exists on a continuum.
You could argue that all computer graphics are at least somewhat procedural. Digital cameras use algorithms to turn light into image files, and image editors (like Photoshop) apply effects, driven by user input, to manipulate pixels. 3d modelling programs do a lot of behind-the-scenes work on low-level vertex positioning so that artists can concentrate on overall shape and animation.
Even though this data is created using algorithms and tools, most people would not consider it procedurally generated, as the output is made to the artist's vision and remains firmly under their control. But the lines blur further as we move from creation to display. Once a 3d model is made, the steps from there to the final image on the screen involve an immense number of calculations as the model is loaded, transformed, textured, lit and finally projected onto the camera.
3d animation has grown steadily more procedural over the years, and it is a good example of the trade-offs that come with this decision. Consider animating a 3d model of a man walking. There are several approaches:
1. Store the 3d points of the man for every frame of his walk. The artist has total control, but must spend a lot of effort and memory creating many frames, and the result generally looks jerky, with "popping" between frames.

2. Store the 3d points of key frames and interpolate between them. For instance, say you have 2 frames of walking animation with a 1 second transition: the renderer takes the 2 values and picks a point in between them based on how much time has passed. While this is a big improvement on static frames, it can look somewhat unnatural, as organic beings don't move in straight lines.

3. Construct the model with internal bones, whose movement is constrained by joints that store ranges of motion. The surface skin is then calculated from the bone positions.
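The key-frame interpolation described in the second approach can be sketched in a few lines. This is a minimal illustration, not any particular engine's code; the keyframe data and function names are invented for the example:

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b, with t in [0, 1]."""
    return a + (b - a) * t

def interpolate_frame(frame_a, frame_b, t):
    """Blend two keyframes, each a list of (x, y, z) vertex positions."""
    return [tuple(lerp(pa, pb, t) for pa, pb in zip(va, vb))
            for va, vb in zip(frame_a, frame_b)]

# Two hypothetical keyframes of a walk cycle, 1 second apart.
key0 = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.0)]
key1 = [(0.5, 0.0, 0.0), (1.5, 1.8, 0.0)]

# Halfway through the transition, t = 0.5: every vertex sits exactly
# halfway along the straight line between its two keyframe positions.
mid = interpolate_frame(key0, key1, 0.5)
```

Because each vertex travels in a straight line between keys, a limb swept through a wide arc will visibly cut the corner, which is exactly the unnatural look described above.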
As you can see, the general progression has been towards the artist working at a higher level and surrendering low-level details, like the final location of vertices, to simulation. Simulation means the graphics are driven by the state of the world, which can be determined by user input and code (for instance physics and artificial intelligence).
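The skeletal approach can also be sketched simply. Below is a toy 2D forward-kinematics chain with joint limits; the bone lengths, angles and ranges of motion are invented for illustration and real skinning is far more involved:

```python
import math

def clamp(angle, lo, hi):
    """Constrain a joint angle to its allowed range of motion."""
    return max(lo, min(hi, angle))

def forward_kinematics(bones):
    """Compute 2D joint positions from a chain of bones.

    Each bone is (length, angle, min_angle, max_angle); angles are in
    radians, relative to the parent bone. Returns joint positions,
    starting at the root (0, 0).
    """
    x, y, total_angle = 0.0, 0.0, 0.0
    points = [(x, y)]
    for length, angle, lo, hi in bones:
        total_angle += clamp(angle, lo, hi)  # joints enforce their limits
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
        points.append((x, y))
    return points

# A hypothetical two-bone "leg": thigh then shin. The knee joint only
# allows bending one way, so a request to bend it the wrong way
# (+45 degrees) is clamped back to 0 and the leg stays straight.
leg = [(0.5, -math.pi / 2, -math.pi, math.pi),   # hip: leg points down
       (0.5,  math.pi / 4, -2.5, 0.0)]           # knee: range [-2.5, 0]
joints = forward_kinematics(leg)
```

The artist now animates a handful of joint angles per frame instead of thousands of vertices, and the joint limits keep the motion plausible automatically.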
The possible outcomes in a simulation are near infinite. id Software (creators of Quake) were surprised when players used in-game explosions to propel themselves extremely high into the air. Consider a screenshot of someone standing in a part of a level the designers never expected. Where did this image come from? The artists? The programmers? The players? The answer is all of them: it emerged from the rules of the simulation.
Computers are getting relentlessly faster, so we can display more and more, and artists need help generating the massive amounts of data. Instead of animating each strand of hair on a model, they designate types of hair, which are animated and rendered by special tools. Instead of constructing trees by hand in a modelling program, there is software that can generate a whole forest's worth of trees for you. There are algorithms for all kinds of effects, from water, clouds and rock textures to simulating crowds, battling armies or herding animals.
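The forest example comes down to seeded randomness: feed a pseudo-random generator a fixed seed and it deterministically produces as much data as you ask for. Here is a minimal sketch; the attribute names and value ranges are invented for illustration:

```python
import random

def generate_forest(seed, count, area=100.0):
    """Scatter `count` trees over a square area; same seed, same forest."""
    rng = random.Random(seed)  # deterministic, reproducible sequence
    trees = []
    for _ in range(count):
        trees.append({
            "x": rng.uniform(0.0, area),
            "y": rng.uniform(0.0, area),
            "height": rng.uniform(4.0, 12.0),  # assumed plausible range
            "branches": rng.randint(3, 8),
        })
    return trees

forest = generate_forest(seed=42, count=1000)

# Regenerating with the same seed reproduces the identical forest, so
# only the seed needs to be stored or transmitted, not the trees.
assert generate_forest(42, 1000) == forest
```

A real tree generator would recursively grow branches and leaves from similar random parameters, but the principle is the same: a few bytes of input expand into an arbitrarily large, reproducible scene.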
The advantages of moving to procedural generation are (in some cases) more natural-looking environments, detail levels that adjust to CPU speed, and quicker development. This is actually closer to how things are done in real life: when a movie director wishes to film a forest scene, he goes to a real forest that resembles what he had in mind; he doesn't have his art department construct thousands of trees by hand, which is closer to current computer techniques.
My belief is that the future of computer graphics will continue this trend of moving low-level control to simulation. Content creation tools (like 3d modellers) will integrate procedural techniques ever more deeply into their toolchains.