Computer graphics are two- and three-dimensional images created by computers that are used for scientific research, artistic pursuits, and in industry to design, test, and market products. Computer graphics have made computers easier to use. Graphical user interfaces (GUIs) and multimedia systems such as the World Wide Web, the worldwide system of interconnected computer resources, enable computer users to select pictures to execute orders, eliminating the need to memorize complex commands.
HOW COMPUTER GRAPHICS WORK
Before an image can be displayed on the screen, it must be created by a computer program in a special part of the computer's memory called a frame buffer. A graphical image is created by dividing the computer's display screen into a grid of tiny dots called pixels; the frame buffer stores information about each pixel, such as its color. One method of producing an image in the frame buffer is to use a block of memory called a bitmap to store small, detailed figures such as a text character or an icon.
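The frame buffer and bitmap described above can be sketched in a few lines. This is a minimal illustration, not any particular system's API; the names (`blit`, `icon`) are invented for the example.

```python
# A frame buffer modeled as a grid of pixels, with a small bitmap "icon"
# copied into it at a chosen position.

WIDTH, HEIGHT = 8, 6
BLACK, WHITE = (0, 0, 0), (255, 255, 255)

# The frame buffer: one (R, G, B) color per pixel.
frame_buffer = [[BLACK for _ in range(WIDTH)] for _ in range(HEIGHT)]

# A tiny 3x3 bitmap (1 = draw the pixel, 0 = leave it alone).
icon = [[1, 0, 1],
        [0, 1, 0],
        [1, 0, 1]]

def blit(bitmap, x, y, color):
    """Copy a bitmap into the frame buffer at position (x, y)."""
    for row, bits in enumerate(bitmap):
        for col, bit in enumerate(bits):
            if bit:
                frame_buffer[y + row][x + col] = color

blit(icon, 2, 1, WHITE)
```

Display hardware then scans the frame buffer and lights each physical pixel with the stored color.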
Computers store and manipulate colors by representing them as a combination of three numbers. For example, in the Red-Green-Blue (RGB) color system, the computer uses one number each to represent the red, green, and blue primary components of the color. Alternative schemes represent other color properties, such as hue (the frequency of the light), saturation (the amount of color), and value (the brightness).
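Both representations can be shown concretely. The sketch below packs one byte per RGB component into a single number, as a frame buffer might store it, and converts the same color to hue, saturation, and value using Python's standard `colorsys` module (which works with components in the range 0.0 to 1.0).

```python
import colorsys

# One byte (0-255) per red, green, and blue component.
r, g, b = 255, 128, 0   # an orange

# The same color packed into a single 24-bit number.
packed = (r << 16) | (g << 8) | b

# An alternative representation: hue, saturation, value.
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
```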
If one byte of memory is used to store each color component in a three-color system, then over 16 million color combinations can be represented. But in the creation of a large image, allowing so many combinations can be very costly in terms of memory and processing time. An alternative method, color mapping, stores only one number per pixel, which serves as an index into a table of available colors, like a painter's palette. The drawback of color mapping is that the number of colors in the palette is usually too small to create realistically colored images.
Choosing the colors that make the best image for the palette, called color quantization, becomes a very important part of the image-making process. Another method, called dithering, alternates the limited palette colors throughout the image—much like the patterns of dots in a newspaper comic strip—to give the appearance of more colors than are actually in the image.
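The dithering idea above can be sketched with the simplest case: reducing a grayscale image to a two-color palette using an ordered (Bayer) threshold pattern, so that mid-gray regions come out as a fine checkerboard that reads as gray from a distance. This is one dithering technique among several; the 2x2 pattern here is a deliberately small example.

```python
# Ordered dithering with a 2x2 threshold pattern: each pixel is compared
# against a threshold that varies with its position, so uniform gray areas
# become a mix of black and white pixels.

BAYER_2X2 = [[0.25, 0.75],
             [1.00, 0.50]]  # thresholds in 0.0-1.0

def dither(image):
    """image: 2D list of grayscale values in 0.0-1.0; returns 0/1 pixels."""
    out = []
    for y, row in enumerate(image):
        out.append([1 if value >= BAYER_2X2[y % 2][x % 2] else 0
                    for x, value in enumerate(row)])
    return out

# A flat 60% gray patch comes out as an alternating pattern.
result = dither([[0.6] * 4 for _ in range(4)])
```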
Aliasing and Anti-Aliasing
Since a computer monitor is essentially a grid of colored squares arranged like a sheet of graph paper, diagonal lines tend to be displayed with a jagged "stair step" appearance. This effect, called aliasing, can be lessened by calculating how close each pixel is to the ideal line of the drawn image and then basing the pixel's color on its distance from this line. For example, if the pixel is directly on the line, it may be given the darkest color, and if it is only partially on the line, it may be given a lighter color. This process effectively smooths the line.
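The distance-based smoothing described above can be sketched as follows. Each pixel near an ideal line gets an intensity based on how far its center lies from the line, so pixels straddling the line receive intermediate shades. The one-pixel falloff used here is an illustrative choice, not a standard.

```python
# Intensity of a pixel from its distance to the line ax + by + c = 0:
# 1.0 when the pixel center is on the line, fading to 0.0 one pixel away.

def line_coverage(px, py, a, b, c):
    dist = abs(a * px + b * py + c) / (a * a + b * b) ** 0.5
    return max(0.0, 1.0 - dist)

# The diagonal line y = x, written as x - y = 0.
on_line = line_coverage(3.0, 3.0, 1, -1, 0)   # pixel on the line: 1.0
nearby  = line_coverage(3.0, 3.5, 1, -1, 0)   # partially covered: lighter
```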
Image processing is among the most powerful and important tools in computer graphics. Its underlying techniques are used for many applications, such as detecting the edge of objects; enhancing images and removing noise in medical imaging; and blurring, sharpening, and brightening images in feature films and commercials.
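Most of these operations share one underlying mechanism: convolution, in which a small kernel of weights slides over the image and each output pixel becomes a weighted sum of its neighborhood. A minimal sketch with a box-blur kernel is shown below; swapping in a different kernel gives sharpening or edge detection.

```python
# 3x3 convolution over a grayscale image (borders skipped for simplicity).

BLUR = [[1 / 9] * 3 for _ in range(3)]   # box-blur kernel: average of 9 pixels

def convolve(image, kernel):
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * image[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# A single bright dot on a dark background is spread over its neighbors.
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 9.0
blurred = convolve(img, BLUR)
```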
Image warping lets the user manipulate and deform an image over time. The most popular use of image warping is morphing, in which one image deforms and dissolves into another. Morphing differs from a simple cross-dissolve, in which one image merely fades into another, because the actual structures of the original image change.
To morph an image, the user specifies corresponding points on the original and final objects that the computer then distorts until one image becomes the other. These transformation points are usually either a grid overlaid on each object or a specific set of features, such as the nose, eyes, and ears of two faces to be morphed.
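The geometric half of this process can be sketched as interpolation of corresponding feature points over time, with `t` moving from 0.0 (the source) to 1.0 (the target). A full morph would also warp and cross-dissolve pixel colors along with these moving points; this sketch shows only the point motion.

```python
# Linear interpolation of matching feature points between two objects.

def morph_points(src, dst, t):
    """src, dst: lists of matching (x, y) feature points; 0.0 <= t <= 1.0."""
    return [((1 - t) * sx + t * dx, (1 - t) * sy + t * dy)
            for (sx, sy), (dx, dy) in zip(src, dst)]

# E.g. the tip of the nose in each of two faces being morphed.
nose_src, nose_dst = [(10.0, 20.0)], [(14.0, 28.0)]
midway = morph_points(nose_src, nose_dst, 0.5)   # halfway: [(12.0, 24.0)]
```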
CREATING THREE-DIMENSIONAL COMPUTER GRAPHICS
Many uses of computer graphics, such as computer animation, computer-aided design and manufacturing (CAD/CAM), video games, and scientific visualization of data such as magnetic resonance images of internal organs, require drawing three-dimensional (3D) objects on the computer screen. The drawing of 3D scenes, called rendering, is usually accomplished using a pipeline or assembly-line approach, in which several program instructions can, at any given time, be executed in various stages on different data.
This graphics pipeline is implemented either with special-purpose 3D graphics microprocessors (hardware) or with computer programs (software). Hardware rendering can be expensive, but it enables the user to draw up to 60 images per second and to make immediate changes to the image. Software renderers are very slow, requiring from a few hours to a full day to render a single image. However, computer animation almost always uses software renderers because they provide greater control of the images and potentially photo-realistic quality.
The first step in a rendering pipeline is the creation of 3D objects. The surface of an object, such as a sphere, is represented either as a series of curved surfaces or as polygons, usually triangles. The points on the surface of the object, called vertices, are represented in the computer by their spatial coordinates. Other characteristics of the model, such as the color of each vertex and the direction perpendicular to the surface at each vertex, called the normal, also must be specified. Since polygons do not create smooth surfaces, detailed models require an extremely large number of polygons to create an image that looks natural.
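A polygonal model as described above amounts to a few parallel lists: vertex coordinates, per-vertex normals and colors, and triangles that index into the vertex list. The sketch below describes a single square split into two triangles; the layout is illustrative, though it mirrors how many mesh formats store their data.

```python
# A minimal triangle mesh: a unit square in the z = 0 plane.

vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
            (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
normals  = [(0.0, 0.0, 1.0)] * 4        # every vertex faces the +z direction
colors   = [(255, 0, 0)] * 4            # a red square
triangles = [(0, 1, 2), (0, 2, 3)]      # indices into the vertex list

# A renderer walks the triangles and looks up each corner's data.
first_corners = [vertices[i] for i in triangles[0]]
```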
Another technique used to create smooth surfaces relies on a parametric surface, a two-dimensional (2D) surface existing in three dimensions. For example, a world globe can be considered a 2D surface with latitude and longitude coordinates representing it in three dimensions. More complex surfaces, such as knots, can be specified in a similar manner.
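The globe example can be written out directly: latitude and longitude parameterize the 2D surface, and a short function maps them to 3D coordinates on the sphere.

```python
import math

def sphere_point(latitude, longitude, radius=1.0):
    """Map latitude and longitude (in degrees) to (x, y, z) on a sphere."""
    lat, lon = math.radians(latitude), math.radians(longitude)
    return (radius * math.cos(lat) * math.cos(lon),
            radius * math.cos(lat) * math.sin(lon),
            radius * math.sin(lat))

north_pole = sphere_point(90.0, 0.0)   # (0, 0, 1), up to rounding
```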
Once these models have been created, they are placed in a computer-generated background. For example, a rendered sphere might be set against a backdrop of clouds. User instructions specify the object's size and orientation. Then the colors, their locations, and the direction of light within the computer-generated scene, as well as the location and direction of the viewing angle of the scene, are selected.
At this point, the computer program generally breaks up complex geometric objects into simple "primitives," such as triangles. Next, the renderer determines where each primitive will appear on the screen by using the information about the viewing position and the location of each object in the scene.
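Determining where a primitive appears on the screen can be sketched with the simplest perspective projection: each vertex's x and y coordinates are divided by its distance z from the viewer, so distant objects appear smaller. Real pipelines express this with 4x4 matrix transforms, but the divide-by-z step below is the core idea; the screen dimensions and focal length are illustrative values.

```python
def project(vertex, focal_length=1.0, width=640, height=480):
    """Map a 3D point (x, y, z), z > 0, to pixel coordinates."""
    x, y, z = vertex
    sx = focal_length * x / z
    sy = focal_length * y / z
    # Shift the origin to the center of the screen; flip y so up is up.
    return (width / 2 + sx * width / 2, height / 2 - sy * height / 2)

# A triangle two units in front of the viewer.
triangle = [(0.0, 0.0, 2.0), (1.0, 0.0, 2.0), (0.0, 1.0, 2.0)]
screen_triangle = [project(v) for v in triangle]
```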
Lighting and Shading
Once a primitive has been located, it must be shaded. Shading information is calculated for each vertex based on the location and color of the light in the computer-generated scene, the orientation of each surface, the color and other surface properties of the object at that vertex, and possible atmospheric effects that surround the object, such as fog.
Graphics hardware most commonly uses Gouraud shading, which calculates the lighting at the vertices of the primitive and then interpolates, or blends, the resulting colors across the surface to make the object appear more realistic. Phong shading instead interpolates the normal (the direction perpendicular to the surface at each vertex) across the primitive and calculates the lighting at each pixel. This reproduces highlights better and provides a closer approximation of the surface, but requires more calculation.
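Both steps can be sketched with simple diffuse (Lambertian) lighting standing in for the full lighting calculation: the intensity at a vertex is the cosine of the angle between its normal and the light direction, and Gouraud shading then blends vertex intensities across the surface.

```python
# Diffuse lighting at a vertex, and Gouraud interpolation along an edge.

def diffuse(normal, light_dir):
    """max(0, N . L) for unit vectors N (normal) and L (light direction)."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

def gouraud(intensity_a, intensity_b, t):
    """Blend two vertex intensities at fraction t along the edge."""
    return (1 - t) * intensity_a + t * intensity_b

light = (0.0, 0.0, 1.0)                  # light shining down the z axis
i_a = diffuse((0.0, 0.0, 1.0), light)    # vertex facing the light: 1.0
i_b = diffuse((1.0, 0.0, 0.0), light)    # vertex facing sideways: 0.0
midpoint = gouraud(i_a, i_b, 0.5)        # midpoint of the edge: 0.5
```

Phong shading would instead interpolate the normal vectors themselves across the surface and call `diffuse` once per pixel, which is why it costs more.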
Several techniques permit the artist to add realistic details to models with simple shapes. The most common method is texture mapping, which maps or applies an image to an object's surface like wallpaper. For example, a brick pattern could be applied to a rendered sphere. In this process only the object's shape, not features of the texture (such as the rectangular edges and grout lines of the brick), affects the way the object responds to lighting; the sphere still appears smooth.
Another technique, called bump mapping, provides a more realistic view by creating highlights to make the surface appear more complex. In the example of the brick texture, bump mapping might provide shadowing in the grout lines and highlights upon some brick surfaces. Bump mapping does not affect the look of the image's silhouette, which remains the same as the basic shape of the model. Displacement mapping addresses this problem by physically offsetting the actual surface according to a displacement map. For example, the brick texture applied to the sphere would extend to the sphere's silhouette, giving it an uneven texture.
Once the shading process has produced a color for each pixel in a primitive, the final step in rendering is to write that color into the frame buffer. Frequently, a technique called Z buffering is used to determine which primitive is closest to the viewing location and angle of the scene, ensuring that objects hidden behind others will not be drawn. Finally, if the surface being drawn is semitransparent, the front object's color is blended with that of the object behind it.
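The Z-buffer logic can be sketched for a single pixel: a depth buffer holds the distance of the nearest surface drawn so far, and a new fragment is written only if it is closer to the viewer.

```python
import math

WIDTH, HEIGHT = 4, 4
depth_buffer = [[math.inf] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def write_pixel(x, y, z, color):
    """Write color at (x, y) only if depth z is nearer than what is there."""
    if z < depth_buffer[y][x]:
        depth_buffer[y][x] = z
        frame_buffer[y][x] = color

write_pixel(1, 1, 5.0, (255, 0, 0))   # a red surface at depth 5: drawn
write_pixel(1, 1, 9.0, (0, 0, 255))   # a blue surface behind it: ignored
write_pixel(1, 1, 2.0, (0, 255, 0))   # a green surface in front: drawn
```

For a semitransparent surface, the write step would blend the new color with the stored one instead of replacing it outright.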
Physically Based Rendering
Because the rendering pipeline has little to do with the way light actually behaves in a scene, it does not work well with shadows and reflections. Another common rendering technique, ray tracing, calculates the path that light rays take through the scene, starting with the viewing angle and location and calculating back to the light source. Ray tracing provides more accurate shadows than other methods and also handles multiple reflections correctly. Although it takes a long time to render a scene using ray tracing, it can create stunning images.
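The geometric core of ray tracing is an intersection test: does a ray cast from the viewing position hit an object, and if so, how far away? The sketch below solves this for a sphere by finding the roots of a quadratic. A full ray tracer would then trace further rays from the hit point toward the lights (for shadows) and along reflections.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance t to the nearest hit along the ray, or None for a miss.
    direction must be a unit vector."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c                # discriminant; a = 1 for unit direction
    if disc < 0:
        return None                       # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

eye = (0.0, 0.0, 0.0)
hit = ray_sphere(eye, (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0)   # 4.0
```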
In spite of its generally accurate portrayal of shadows and reflections, ray tracing calculates only the main direction of reflection, while real surfaces scatter light in many directions. This scattered-light phenomenon can be modeled with global illumination, which uses the lighting of the image as a whole rather than calculating illumination on each individual primitive.
Many scientific applications of computer graphics require viewing 3D volumes of data on a 2D computer screen. This is accomplished through techniques that make the volume appear semitransparent and use ray tracing through the volume to illuminate it.
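The illumination of such a semitransparent volume can be sketched as ray marching along one ray: each density sample contributes some brightness, attenuated by the opacity accumulated in front of it. The per-sample opacity here is an arbitrary illustrative constant.

```python
# Front-to-back compositing of density samples along a single ray.

def march(densities, sample_opacity=0.3):
    """Composite scalar samples along one ray; returns total brightness."""
    brightness, transparency = 0.0, 1.0
    for density in densities:
        alpha = sample_opacity * density       # how opaque this sample is
        brightness += transparency * alpha     # light contributed here
        transparency *= (1.0 - alpha)          # remaining see-through fraction
    return brightness

# A ray passing through a dense region in the middle of the volume.
result = march([0.0, 0.5, 1.0, 0.5, 0.0])
```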