Sunday, May 24, 2009

Textures

Generally, textures are 1D, 2D, or 3D arrays of texels (texture elements, often loosely called pixels) applied to a 3D object's surface.
Texture coordinates are defined in texture space. When a texture is applied to a polygon, its texel addresses must be mapped into object space and then translated into screen space. The 3D renderer performs this as an inverse mapping: for each pixel in screen space, the corresponding texel position in texture space is calculated, and the texture color at or around that point is sampled. Applications specify texture coordinates for each vertex; these values, called (u,v), are expected to be in the range 0.0 to 1.0.
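As a concrete illustration of the addressing step, here is a minimal C++ sketch that wraps normalized (u,v) values and scales them to integer texel coordinates. The Texture struct, the fetchTexel name, and the repeat ("wrap") addressing mode are assumptions for the example, not any particular renderer's API.

    #include <cmath>
    #include <cstdint>

    // Illustrative texture type (not a real API): RGBA texels packed
    // row-major in a width * height array.
    struct Texture {
        int width;
        int height;
        const uint32_t* texels;
    };

    // Fetch the texel addressed by normalized (u, v) using repeat
    // addressing: keep the fractional part of each coordinate, then
    // scale it to an integer texel position.
    uint32_t fetchTexel(const Texture& tex, float u, float v)
    {
        u -= std::floor(u);                        // wrap into [0, 1)
        v -= std::floor(v);
        int x = static_cast<int>(u * tex.width);
        int y = static_cast<int>(v * tex.height);
        if (x >= tex.width)  x = tex.width - 1;    // guard rounding at 1.0
        if (y >= tex.height) y = tex.height - 1;
        return tex.texels[y * tex.width + x];
    }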
For every pixel in a primitive's on-screen image, the renderer must obtain a color value from the texture. This is called texture filtering. When a texture filter operation is performed, the texture is usually also being magnified or minified; in other words, it is being mapped onto a primitive image that is larger or smaller than itself. Think of filtering as a form of interpolation. Five common schemes are listed below; nearest-point sampling and bilinear filtering are sketched in code after the list.
Nearest-Point Sampling
Linear Filtering
Bilinear Filtering
Anisotropic Filtering
MipMapping
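Here is a minimal C++ sketch contrasting the two simplest schemes above: nearest-point sampling snaps (u,v) to the single closest texel, while bilinear filtering blends the four surrounding texels by their fractional distances. The Color and Texture types, the function names, and the clamp addressing are illustrative assumptions, not a specific renderer's API.

    #include <algorithm>
    #include <cmath>

    struct Color { float r, g, b; };

    // Illustrative texture type: row-major grid of float colors.
    struct Texture {
        int width;
        int height;
        const Color* texels;
    };

    // Clamp addressing keeps lookups inside the texture for this sketch.
    Color texelAt(const Texture& t, int x, int y)
    {
        x = std::max(0, std::min(x, t.width - 1));
        y = std::max(0, std::min(y, t.height - 1));
        return t.texels[y * t.width + x];
    }

    // Component-wise linear blend between two colors.
    Color lerp(const Color& a, const Color& b, float w)
    {
        return Color{ a.r + (b.r - a.r) * w,
                      a.g + (b.g - a.g) * w,
                      a.b + (b.b - a.b) * w };
    }

    // Nearest-point sampling: snap (u, v) to the closest texel.
    Color sampleNearest(const Texture& t, float u, float v)
    {
        return texelAt(t,
                       static_cast<int>(u * t.width),
                       static_cast<int>(v * t.height));
    }

    // Bilinear filtering: blend the four texels surrounding the sample
    // point, weighted by the fractional position between their centers.
    Color sampleBilinear(const Texture& t, float u, float v)
    {
        float fx = u * t.width  - 0.5f;  // shift so weights measure
        float fy = v * t.height - 0.5f;  // from texel centers
        int x0 = static_cast<int>(std::floor(fx));
        int y0 = static_cast<int>(std::floor(fy));
        float tx = fx - x0;              // horizontal blend weight
        float ty = fy - y0;              // vertical blend weight

        Color top    = lerp(texelAt(t, x0, y0),     texelAt(t, x0 + 1, y0),     tx);
        Color bottom = lerp(texelAt(t, x0, y0 + 1), texelAt(t, x0 + 1, y0 + 1), tx);
        return lerp(top, bottom, ty);
    }

Nearest-point sampling is the cheapest but produces blocky results under magnification; bilinear filtering costs four texel reads and three blends but gives a much smoother image.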
Textures are used mainly for mapping patterns onto surfaces, for adding apparent roughness to a surface (bump mapping), and for simulating shadows and lighting (light mapping).
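As one example of the last use, light mapping typically stores precomputed lighting in a second texture and multiplies it with the diffuse texture color. A minimal sketch, assuming the same simple float Color type as above (the applyLightMap name is hypothetical):

    struct Color { float r, g, b; };

    // Light mapping: modulate the diffuse texture color by a texel from
    // a second texture holding precomputed lighting. White lightmap
    // texels leave the surface unchanged; darker texels darken it.
    Color applyLightMap(const Color& diffuse, const Color& light)
    {
        return Color{ diffuse.r * light.r,
                      diffuse.g * light.g,
                      diffuse.b * light.b };
    }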