Real-Time Shading Languages

The advent of powerful 3D graphics cards in the consumer market forever changed the way graphics were done. Many popular algorithms from the past quickly fell out of favor as the focus shifted from precomputed to real-time graphics. The 1990s were definitely exciting times for graphics programmers. The evolution has been huge, outpacing Moore's Law many times over. Speed has increased astronomically, and today's GPUs are arguably more complex than CPUs.

But this quest for raw speed had its phases, too. It all began with simple rasterizers that did little beyond drawing triangles onscreen. Multitexturing was then introduced, mainly as a way to compute items like light maps and bump maps. The arrival of multitexturing marked a shift in the minds of most developers. Texture maps were no longer "pictures of a material we wanted to tile onto the mesh": They were arrays of data on which we could perform mathematical operations. A good example is the normal map used to encode bump mapping. With this change in attitude, APIs needed more powerful ways of specifying texture arithmetic; in other words, how the different texture stages relate to one another. Mechanisms like register combiners were designed for this purpose. If you think about it, these were the first steps into the realm of programmable pipelines.
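To make the idea of texture arithmetic concrete, here is a minimal sketch of a pixel shader, written in the Cg/HLSL-style syntax discussed later in this section, that modulates a base texture by a light map: exactly the kind of per-pixel math that register combiners first exposed. The sampler and parameter names are illustrative assumptions, not part of any fixed API.

// Light-map modulation: the classic multitexturing operation,
// expressed as per-pixel texture arithmetic.
float4 main(float2 uvBase  : TEXCOORD0,
            float2 uvLight : TEXCOORD1,
            uniform sampler2D baseMap,    // diffuse texture (assumed name)
            uniform sampler2D lightMap)   // precomputed light map (assumed name)
            : COLOR
{
    float4 base  = tex2D(baseMap, uvBase);    // fetch the material color
    float4 light = tex2D(lightMap, uvLight);  // fetch the stored lighting
    return base * light;                      // modulate: a per-pixel multiply
}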

As hardware iterations have passed, graphics chip vendors have realized the potential of programmable pipelines and have incorporated more and more functionality into their products. The concept of vertex versus pixel programs was established early on, and from there it was all a matter of adding programmability to the system. The first shading languages appeared, basically as a way to provide more flexibility than the register combiners mechanism offered. Their syntax was a very crude assembly language. The current crop of languages, such as C for Graphics (Cg) or the High-Level Shading Language (HLSL), are essentially interfaces to these assembly-level processors, allowing programmers to work at a higher, more intuitive level.

There is a clear division in the way we can access the pipeline. We can still send data and let the API take care of everything by using the fixed-function pipeline. This way, computations like transform, lighting, projection, shading, and texturing are handled automatically. At the opposite end of the spectrum, we have the programmable pipeline, which is the realm of vertex and pixel shaders. Notice that I've described these two as mutually exclusive alternatives, and that choice has far-reaching consequences. Under the programmable pipeline, we have to take care of everything, not only our particular effect. It is a black-or-white situation: In the fixed-function pipeline, everything is done by the API; in the programmable pipeline, the API does nothing by itself (other than running our shaders), and we must implement routines like projection inside our shader code. Luckily, some utility routines exist to help us out in this respect.
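As an illustration of how much ends up on our shoulders, here is a minimal vertex shader sketch in Cg/HLSL syntax: even the model-view-projection transform, which the fixed-function pipeline applies for free, must be written explicitly. The parameter name modelViewProj is an assumption for this example; the application (or one of the utility routines mentioned earlier) must supply the concatenated matrix.

// Minimal vertex shader: under the programmable pipeline, even the
// projection must be performed by hand.
struct VS_OUTPUT
{
    float4 position : POSITION;   // clip-space position we must compute ourselves
    float4 color    : COLOR0;     // per-vertex color passed through unchanged
};

VS_OUTPUT main(float4 position : POSITION,
               float4 color    : COLOR0,
               uniform float4x4 modelViewProj)  // concatenated world-view-projection matrix (assumed name)
{
    VS_OUTPUT output;
    output.position = mul(modelViewProj, position);  // the transform the fixed pipeline did for us
    output.color    = color;
    return output;
}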

Current Languages

Unfortunately, there is no universal standard for real-time shading languages as of today (although I expect this issue to be resolved sometime soon). Different vendors are pushing forward different languages, which, in the end, have minimal differences. This hurts the developer community and, as a result, the users, because the lack of an open standard prevents shaders from reaching widespread acceptance. Developers do not want to code different versions of their software, and different users have cards that might not be compatible with shaders. Let's take a look at what's currently available.

Cg

Cg is a high-level shading language introduced by NVIDIA for its GeForce family of accelerators. Cg has a syntax that is very close to C and allows the creation of procedural effects that are run directly on the hardware. The Cg compiler detects the accelerator type and produces hardware-specific microcode for your graphics card.

Cg shaders can work on vertices to perform custom geometry processing, create procedural texturing effects, or implement screen postprocesses. They can even be used in animation and skinning algorithms to reduce the CPU burden. The downside, at present, is that Cg is only supported on NVIDIA cards. Although the compiler is open source, other manufacturers have not yet embraced it, so shaders written in Cg will, at least for now, be limited to GeForce-class cards.
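As a sketch of the kind of procedural texturing effect mentioned above, the following Cg pixel shader computes a checkerboard pattern entirely in code, with no texture fetch at all. The scale parameter is an assumption introduced for this example.

// Procedural checkerboard: the pattern is computed per pixel
// instead of being read from a texture map.
float4 main(float2 uv : TEXCOORD0,
            uniform float scale) : COLOR     // cells per texture-coordinate unit (assumed)
{
    float2 cell  = floor(uv * scale);            // integer cell coordinates
    float  check = fmod(cell.x + cell.y, 2.0);   // alternates 0 and 1 between neighboring cells
    return float4(check, check, check, 1.0);     // black or white
}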

Cg can be used in DirectX and OpenGL, so at least this language is a cross-API standardization effort.

HLSL

HLSL is Microsoft's take on shading languages in the context of DirectX; thus, it is an API-specific language. Incidentally, HLSL and Cg syntax are identical: Microsoft and NVIDIA reached a common ground that, at least as of today, they both respect. So a shader written in HLSL can be compiled with a Cg compiler, and vice versa.
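To illustrate the point, the following per-vertex diffuse lighting shader is written once and, thanks to the shared syntax, should be accepted by either toolchain. The parameter names (modelViewProj, lightDir, diffuseColor) are only assumptions for this sketch.

// Per-vertex diffuse (Lambertian) lighting; the same source text
// is valid HLSL and valid Cg.
struct VS_OUTPUT
{
    float4 position : POSITION;
    float4 color    : COLOR0;
};

VS_OUTPUT main(float4 position : POSITION,
               float3 normal   : NORMAL,
               uniform float4x4 modelViewProj,  // concatenated transform (assumed name)
               uniform float3   lightDir,       // light direction in model space (assumed)
               uniform float4   diffuseColor)   // material color (assumed)
{
    VS_OUTPUT output;
    output.position = mul(modelViewProj, position);
    float NdotL     = max(dot(normalize(normal), -lightDir), 0.0);  // Lambertian term
    output.color    = diffuseColor * NdotL;
    return output;
}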

HLSL is supported by ATI cards under DirectX as well.

GL2 Shading Language

For some time, a committee has been working on a proposal for OpenGL 2.0, which would be the first major revision to OpenGL. This revision would upgrade the API for the current and coming generations of accelerators; OpenGL has remained largely unchanged at a global level for more than 10 years now. One component of the new specification is the addition of a shading language, which would make programmable pipelines part of the GL2 standard. The shading language bases its syntax on C and borrows ideas from RenderMan, which is a very good move toward an industry-wide standard. The GL2 shading language is supported by all vendors involved in the GL2 effort; companies like ATI and 3Dlabs have already announced their support for it.
