Another cool animation effect, called morphing, gradually blends between two things. This could be used to mix two effects over a sequence of frames. A complete animation sequence can be created by performing KEY-FRAME INTERPOLATION. Important frames of the animation are identified, and the frames in between them can be generated with substantially less effort. Instead of the application doing complex calculations to determine the proper way to render the "in between" object or effect, it can all be done automatically within the shader.
You can blend between the geometry of two objects to create a tweened (inbetween) version or do a linear blend between two colors, two textures, two procedural patterns, and so on. All it takes is a shader that uses a control value that is the ratio of the two items being blended and that is updated each frame by the application. In some cases, a linear blend is sufficient. For an oscillating effect, you'll probably want to have the application compute the interpolation factor by using a spline function to avoid jarring discontinuities in the animation. (You could have the shader compute the interpolation value, but it's better to have the application compute it once per frame rather than have the vertex shader compute it once per vertex or have the fragment shader compute it needlessly at every fragment.)
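As a concrete example of the linear case, a fragment shader can blend two textures with a ratio the application supplies through a uniform variable. This is a minimal sketch; the uniform names here are assumptions, not taken from any listing in this book:

```glsl
// Blend two textures by a ratio updated once per frame by the app.
uniform sampler2D Tex0;
uniform sampler2D Tex1;
uniform float Blend;   // 0.0 = all Tex0, 1.0 = all Tex1

void main()
{
    vec4 color0 = texture2D(Tex0, gl_TexCoord[0].st);
    vec4 color1 = texture2D(Tex1, gl_TexCoord[0].st);
    gl_FragColor = mix(color0, color1, Blend);
}
```

For an oscillating effect, the application would compute a smoothed Blend value (for instance, from a spline or a cosine ease) and load it into the uniform each frame, rather than pushing that per-frame computation into the shader.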
For instance, using generic vertex attributes, you can actually pass the geometry for two objects at a time. The geometry for the first object would be passed through the usual OpenGL calls (glVertex, glColor, glNormal, etc.). A second set of vertex information can be passed by means of generic vertex attributes 0, 1, 2, etc. The application can provide a blending factor through a uniform variable, and the vertex shader can use this blending factor to do a weighted average of the two sets of vertex data. The tweened vertex position is the one that actually gets transformed, the tweened normal is the one actually used for lighting calculations, and so on.
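A vertex shader for this two-object scheme might look like the following sketch. The attribute and uniform names are assumptions, and the application would need to bind Vertex2 and Normal2 to generic attribute locations (for example, with glBindAttribLocation) and supply the second object's data alongside the first:

```glsl
// Tween between two sets of per-vertex data.
attribute vec3 Vertex2;       // second object's position (generic attribute)
attribute vec3 Normal2;       // second object's normal  (generic attribute)
uniform float Blend;          // 0.0 = first object, 1.0 = second object
uniform vec3  LightPosition;  // light position in eye coordinates
varying float LightIntensity;

void main()
{
    // Weighted average of the two sets of vertex data.
    vec3 position = mix(gl_Vertex.xyz, Vertex2, Blend);
    vec3 normal   = normalize(mix(gl_Normal, Normal2, Blend));

    // The tweened normal is the one used for lighting.
    vec3 ecPos = vec3(gl_ModelViewMatrix * vec4(position, 1.0));
    vec3 tnorm = normalize(gl_NormalMatrix * normal);
    LightIntensity = max(dot(normalize(LightPosition - ecPos), tnorm), 0.0);

    // The tweened position is the one that gets transformed.
    gl_Position = gl_ModelViewProjectionMatrix * vec4(position, 1.0);
}
```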
To animate a character realistically, you need to choose the right number of key frames as well as the proper number of inbetweens to use. In their classic book, Disney Animation: The Illusion of Life, Frank Thomas and Ollie Johnston (1995, pp. 64-65) describe this concept as "Timing" and explain it in the following way:
16.4.1. Sphere Morph Vertex Shader
The shader in Listing 16.2, developed by Philip Rideout, morphs between two objects: a square that is generated by the application and a sphere that is procedurally generated in the vertex shader. The sphere is defined entirely by a single value, its radius, provided by the application through a uniform variable. The application passes the geometry defining the square to the vertex shader with the standard built-in attributes gl_Normal and gl_Vertex. The vertex shader computes the corresponding vertex and normal on the sphere with a subroutine called sphere. The application provides a time-varying variable (Blend) for morphing between these two objects. Because we are using the two input vertex values to compute a third, inbetween, value, we cannot use the ftransform function. We'll transform the computed vertex directly within the vertex shader.
Listing 16.2. Vertex shader for morphing between a plane and a sphere
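The listing itself is not reproduced here. A minimal sketch of such a shader, not Rideout's actual code, might look like the following; the uniform names and the assumption that the square's vertices carry a parameterization in [0,1] x [0,1] in gl_Vertex.xy are mine:

```glsl
// Sketch of a square-to-sphere morph (assumed names: Radius, Blend,
// LightPosition). Not the book's actual Listing 16.2.
uniform float Radius;         // sphere radius, set by the application
uniform float Blend;          // 0.0 = square, 1.0 = sphere, varies over time
uniform vec3  LightPosition;  // light position in eye coordinates
varying float LightIntensity;

const float PI = 3.14159265;

// Map the square's (s,t) domain onto a sphere of the given radius.
void sphere(in vec2 domain, out vec3 position, out vec3 normal)
{
    float u = domain.s * 2.0 * PI;   // longitude
    float v = domain.t * PI;         // latitude
    normal   = vec3(sin(v) * cos(u), sin(v) * sin(u), cos(v));
    position = Radius * normal;
}

void main()
{
    vec3 spherePos, sphereNorm;
    sphere(gl_Vertex.xy, spherePos, sphereNorm);  // assumes domain in gl_Vertex.xy

    // Tween between the square's vertex data and the sphere's.
    vec3 position = mix(gl_Vertex.xyz, spherePos, Blend);
    vec3 normal   = normalize(mix(gl_Normal, sphereNorm, Blend));

    // ftransform() cannot be used on a computed vertex, so transform directly.
    vec3 ecPos = vec3(gl_ModelViewMatrix * vec4(position, 1.0));
    vec3 tnorm = normalize(gl_NormalMatrix * normal);
    LightIntensity = max(dot(normalize(LightPosition - ecPos), tnorm), 0.0);

    gl_Position = gl_ModelViewProjectionMatrix * vec4(position, 1.0);
}
```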
The sphere is unusual in that it can be procedurally generated. Another way to morph between two objects is to specify the geometry for one object, using the normal OpenGL mechanisms, and to specify the geometry for the second object, using generic vertex attributes. The shader then just has to blend between the two sets of geometry in the same manner as described for the sphere morph shader.