Chapter 19. Shaders for Imaging
One of the longtime strengths of OpenGL relative to other graphics APIs is that it has always included facilities for both imaging and 3D rendering operations, and applications can take advantage of both capabilities as needed. For instance, a video effects application might use OpenGL's imaging capabilities to send a sequence of images (i.e., a video stream) to OpenGL and have the images texture-mapped onto a three-dimensional object. Color space conversion or color correction can be applied to the images on the way, and traditional rendering effects such as lighting and fog can be applied to the object as well.
The initial release of the OpenGL Shading Language focused primarily on providing support for 3D rendering operations. Additional imaging capabilities are planned for a future version. Nonetheless, many useful imaging operations can still be done with shaders, as we shall see.
In this chapter, we describe shaders whose primary function is to operate on two-dimensional images rather than on three-dimensional geometric primitives.
The rasterization stage in OpenGL can be split into five different units, depending on the primitive type being rasterized: point rasterization, line rasterization, polygon rasterization, pixel rectangle rasterization, and bitmap rasterization. The output of each of these five rasterization units can be the input to a fragment shader. This makes it possible to perform programmable operations not only on geometry data but also on image data, which makes OpenGL an extremely powerful and flexible rendering engine.
A fragment shader can process either the fragments generated by glBitmap or glDrawPixels or texture values read from a texture map. Imaging operations fall into two broad categories: those that require access to only a single pixel at a time and those that require access to multiple pixels at a time. Operations in the first category are easy to implement in a fragment shader: for fragments generated with glBitmap or glDrawPixels, the shader simply reads the varying variable gl_Color to obtain the incoming color value. Operations in the second category require that the image first be stored in texture memory; the fragment shader can then sample the image as many times as needed to achieve the desired result.
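As a minimal sketch of a single-pixel operation, the following fragment shader converts each incoming fragment (for example, one generated by glDrawPixels) to grayscale by reading gl_Color. The weights used here are assumed to be the standard Rec. 601 luminance coefficients; any other per-pixel color transformation could be substituted.

```glsl
// Single-pixel imaging operation: convert each incoming fragment
// to luminance. gl_Color holds the color of the fragment produced
// by glDrawPixels (or glBitmap) rasterization.
void main()
{
    // Rec. 601 luminance weights (an assumed choice of coefficients)
    float luminance = dot(gl_Color.rgb, vec3(0.299, 0.587, 0.114));

    // Replicate the luminance into RGB; preserve the incoming alpha
    gl_FragColor = vec4(vec3(luminance), gl_Color.a);
}
```

Because the operation needs only the current fragment's color, no texture access is required at all.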
Because of the highly parallel nature of graphics hardware, an OpenGL shader can typically perform imaging operations much faster than the CPU can, so we let the fragment shader handle the task. This approach also frees the CPU to perform other useful work. If the image to be modified is stored as a texture on the graphics accelerator, the user can interact with it in real time to correct color, remove noise, sharpen the image, and perform other such operations, and all the work is done on the graphics board, with very little traffic on the I/O bus.
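Sharpening is a good example of the multiple-pixel category: each output pixel depends on a neighborhood of input pixels, so the image must first be stored as a texture. The sketch below applies a 3x3 sharpening kernel; the uniform names (Image, Offset), and the TexCoord varying are assumptions, and the application is assumed to load Offset with the nine texel offsets (steps of 1.0/width and 1.0/height) for the neighborhood.

```glsl
// Multiple-pixel imaging operation: 3x3 sharpening filter.
// The image to be processed is assumed to be bound to Image,
// and Offset[] is assumed to contain the texel offsets for the
// 3x3 neighborhood, set by the application.
uniform sampler2D Image;
uniform vec2      Offset[9];   // texel offsets for the 3x3 kernel
varying vec2      TexCoord;    // assumed to come from the vertex shader

void main()
{
    // Sharpen kernel:   0 -1  0
    //                  -1  5 -1
    //                   0 -1  0
    // (GLSL 1.10 has no array initializers, so assign element by element)
    float kernel[9];
    kernel[0] =  0.0; kernel[1] = -1.0; kernel[2] =  0.0;
    kernel[3] = -1.0; kernel[4] =  5.0; kernel[5] = -1.0;
    kernel[6] =  0.0; kernel[7] = -1.0; kernel[8] =  0.0;

    // Read the nine neighboring texels and accumulate the weighted sum
    vec4 sum = vec4(0.0);
    for (int i = 0; i < 9; i++)
        sum += kernel[i] * texture2D(Image, TexCoord + Offset[i]);

    gl_FragColor = vec4(sum.rgb, 1.0);
}
```

The same structure, with different kernel weights, implements blurring, edge detection, embossing, and other convolution-based effects entirely on the graphics hardware.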