Sunday, May 24, 2009

Shadows

The key terms in the shadowing process are light sources, shadow casters and shadow receivers. A light source can be a point source or an area source. Calculating shadows for point sources is easier than for area sources.
Point light sources generate only fully shadowed regions, while area light sources produce both a fully shadowed region (umbra) and a partially shadowed region (penumbra).
Fully shadowed regions are called hard shadows and partially shadowed regions are called soft shadows.
A basic technique for shadowing is projected planar shadows.

It is a simple technique. First, three points are chosen so that the triangle they form defines the projection plane. The four plane coefficients are calculated from these three points. The light position vector and the plane coefficients are used to build a matrix called the shadow matrix. This matrix flattens a 3D object onto the plane. To render the shadow, this matrix is applied to the objects that should cast shadows, and those objects are rendered in a dark color with no lighting. In this scheme, the three-dimensional object is rendered in two passes to create a shadow. In the first pass, the scene is rendered normally with lighting and the depth test switched on. In the second pass, the depth test is disabled to avoid z-fighting between the shadow and the projection plane; while depth testing is disabled, whatever is rendered last lies on top. The shadow receiver has already been drawn in the first pass, so the shadow matrix is applied and the casters are redrawn as described above.
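A minimal sketch of building such a shadow matrix in C (row-major, for column vectors; the function name is illustrative):

    /* Build a matrix that flattens geometry onto the plane
       a*x + b*y + c*z + d = 0, as seen from light position L.
       Mathematically: shadow = (plane . light) * I - light * plane^T */
    void buildShadowMatrix(float m[4][4],
                           const float plane[4],  /* a, b, c, d               */
                           const float light[4])  /* x, y, z, w (w=1 for a
                                                     positional light)        */
    {
        float dot = plane[0] * light[0] + plane[1] * light[1]
                  + plane[2] * light[2] + plane[3] * light[3];
        for (int i = 0; i < 4; ++i) {
            for (int j = 0; j < 4; ++j)
                m[i][j] = -light[i] * plane[j];
            m[i][i] += dot;
        }
    }

OpenGL stores matrices column-major, so the result would be transposed (or the indices swapped) before being passed to glMultMatrixf.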

Some other techniques are:
Projected Shadows
Shadow Mapping
Vertex Projection
Shadow Volumes

Textures

Generally, textures are 1D, 2D or 3D arrays of texels (texture elements) applied to a 3D object’s surface.
Texture coordinates live in texture space. When a texture is applied to a polygon, its texel addresses must be mapped into object space and then translated into screen space. A 3D renderer performs this as an inverse mapping: for each pixel in screen space, the corresponding texel position in texture space is calculated, and the texture color at or around that point is sampled. Applications specify texture coordinates for each vertex. These values are called (u,v) and normally lie in the range 0.0 to 1.0.
For every pixel in the primitive's on-screen image, the renderer must obtain a color value from the texture. This is called texture filtering. When a texture filter operation is performed, the texture is typically also being magnified or minified; in other words, it is being mapped onto a primitive image that is larger or smaller than itself. Think of filtering as a type of interpolation. Five schemes are listed below; a sampling sketch follows the list.
Nearest-Point Sampling
Linear Filtering
Bilinear Filtering
Anisotropic Filtering
MipMapping
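As a rough illustration, nearest-point sampling and bilinear filtering might look like this for a toy single-channel texture (the texture layout and clamp addressing are assumptions of this sketch):

    #include <math.h>

    /* A toy texture: width x height single-channel texels. */
    typedef struct { const float *texels; int width, height; } Texture;

    static float texel(const Texture *t, int x, int y)
    {
        /* Clamp addressing; real APIs also offer wrap and mirror modes. */
        if (x < 0) x = 0; if (x > t->width  - 1) x = t->width  - 1;
        if (y < 0) y = 0; if (y > t->height - 1) y = t->height - 1;
        return t->texels[y * t->width + x];
    }

    /* Nearest-point sampling: pick the single closest texel. */
    float sampleNearest(const Texture *t, float u, float v)
    {
        return texel(t, (int)floorf(u * t->width),
                        (int)floorf(v * t->height));
    }

    /* Bilinear filtering: blend the four surrounding texels. */
    float sampleBilinear(const Texture *t, float u, float v)
    {
        float fx = u * t->width  - 0.5f, fy = v * t->height - 0.5f;
        int   x  = (int)floorf(fx),      y  = (int)floorf(fy);
        float ax = fx - x,               ay = fy - y;  /* fractional parts */
        float top = texel(t, x, y    ) * (1 - ax) + texel(t, x + 1, y    ) * ax;
        float bot = texel(t, x, y + 1) * (1 - ax) + texel(t, x + 1, y + 1) * ax;
        return top * (1 - ay) + bot * ay;
    }

Mipmapping applies such a filter to a pre-scaled chain of texture sizes, and anisotropic filtering takes several samples along the direction of greatest compression.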
Textures are used mainly for mapping patterns, for adding roughness to a surface and for simulating shadows and lighting.

Colors

A large percentage of the visible spectrum can be represented by mixing red, green, and blue light in various proportions and intensities. The RGB color model is used in OpenGL and DirectX; a color value is a (red, green, blue) triplet whose components range from 0.0 (none) to 1.0 (full intensity). A more intuitive color model is HSB. Hue is the color reflected from or transmitted through an object. Saturation is the strength or purity of the color; it represents the amount of gray in proportion to the hue, measured as a percentage from 0% (gray) to 100% (fully saturated). Brightness is the relative lightness or darkness of the color, usually measured as a percentage from 0% (black) to 100% (white).
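A small sketch of the conversion from HSB to RGB (hue in degrees, saturation and brightness as 0..1 fractions), following the usual sector-based formula:

    #include <math.h>

    /* Convert HSB (h: 0..360, s and b: 0..1) to RGB with channels in 0..1. */
    void hsbToRgb(float h, float s, float b, float rgb[3])
    {
        float c  = b * s;                 /* chroma                  */
        float hp = h / 60.0f;             /* which 60-degree sector  */
        float x  = c * (1.0f - fabsf(fmodf(hp, 2.0f) - 1.0f));
        float m  = b - c;
        float r = 0, g = 0, bl = 0;
        if      (hp < 1) { r = c;  g = x;  }
        else if (hp < 2) { r = x;  g = c;  }
        else if (hp < 3) { g = c;  bl = x; }
        else if (hp < 4) { g = x;  bl = c; }
        else if (hp < 5) { r = x;  bl = c; }
        else             { r = c;  bl = x; }
        rgb[0] = r + m; rgb[1] = g + m; rgb[2] = bl + m;
    }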

Shading

Shading is the process of applying a shading model to obtain the pixel intensities or colors we see for all rendered vertices and surfaces in a scene. It takes lights, materials and vertex positions as input.

A shading model describes how to calculate the intensity of light that we should see at a given point on the surface of an object.

Shading models are of two types: accurate models, such as radiosity methods, and empirical models based on simple photometric calculations.

A basic shading model

The illumination is calculated per vertex as the sum of the ambient, diffuse, specular and emissive light.

Illumination = Ambient Light + Diffuse Light + Specular Light + Emissive Light
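A minimal sketch of this sum for a single light at one vertex, loosely in the style of the fixed-function pipelines (the Vec3 helpers and the single lightColor are simplifying assumptions; real APIs keep separate ambient/diffuse/specular light colors):

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    static float dot3(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 add3(Vec3 a, Vec3 b)    { Vec3 r = { a.x+b.x, a.y+b.y, a.z+b.z }; return r; }
    static Vec3 mul3(Vec3 a, Vec3 b)    { Vec3 r = { a.x*b.x, a.y*b.y, a.z*b.z }; return r; }
    static Vec3 scale3(Vec3 a, float s) { Vec3 r = { a.x*s, a.y*s, a.z*s }; return r; }

    /* Illumination = ambient + diffuse + specular + emissive (per vertex). */
    Vec3 illuminate(Vec3 n,        /* unit vertex normal             */
                    Vec3 toLight,  /* unit vector towards the light  */
                    Vec3 toEye,    /* unit vector towards the viewer */
                    Vec3 lightColor,
                    Vec3 matAmbient, Vec3 matDiffuse,
                    Vec3 matSpecular, Vec3 matEmissive,
                    float power)   /* sharpness of the highlight     */
    {
        float nDotL = fmaxf(dot3(n, toLight), 0.0f);
        Vec3  h     = add3(toLight, toEye);          /* Blinn half vector */
        float len   = sqrtf(dot3(h, h));
        if (len > 0.0f) h = scale3(h, 1.0f / len);
        float spec  = nDotL > 0.0f ? powf(fmaxf(dot3(n, h), 0.0f), power) : 0.0f;

        Vec3 ambient  = mul3(matAmbient, lightColor);
        Vec3 diffuse  = scale3(mul3(matDiffuse,  lightColor), nDotL);
        Vec3 specular = scale3(mul3(matSpecular, lightColor), spec);
        return add3(add3(ambient, diffuse), add3(specular, matEmissive));
    }

For a point light, toLight would be normalize(lightPosition - vertexPosition), as described under Lights below.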

Major shading models :
Flat shading
Gouraud shading
Fast Phong shading

GPUs allow programmers to customize the rendering portion of the pipeline using shader programs.

More on shaders from knol:

http://knol.google.com/k/koen-samyn/hlsl-shaders/2lijysgth48w1/2#

Materials

A surface can interact with light in many ways: it absorbs, reflects, transmits, refracts or emits light. Dull objects absorb more of the incident light and shiny objects reflect more of it. The scattering of reflected light by rough surfaces is called diffuse reflection. In addition to diffuse reflection, shiny objects create bright spots when illuminated; this type of reflection is called specular reflection. The polygon surfaces in a 3D model should be associated with materials so that the interaction between surfaces and lights can be calculated. The ratio between incoming and outgoing light intensity is measured as surface reflectance; it differs per color, so it is represented as an RGB vector or color, commonly called the surface color. A material (in DirectX or OpenGL) holds a surface’s reflectance properties for ambient, diffuse and specular light. In addition, a material contains the surface’s emissive lighting information and the power of its specular highlight.
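As an illustration, a material might be represented like this (a generic sketch; Direct3D's D3DMATERIAL9 structure and OpenGL's glMaterialfv state follow the same pattern):

    typedef struct { float r, g, b, a; } ColorRGBA;

    /* Reflectance properties of a surface, fixed-function style. */
    typedef struct {
        ColorRGBA ambient;    /* reflectance for ambient light       */
        ColorRGBA diffuse;    /* reflectance for diffuse light       */
        ColorRGBA specular;   /* reflectance for specular light      */
        ColorRGBA emissive;   /* light emitted by the surface itself */
        float     power;      /* sharpness of the specular highlight */
    } Material;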

Lights

The visual appeal of a game is greatly influenced by how lights are used.

Basic Light Properties
Position, Range, Attenuation, Direction
The position of a light is represented as a position vector in world space. Range is the maximum distance from that position at which the light’s intensity is non-zero. Attenuation controls how the light's intensity decreases toward the range; it is given as a set of three constants (constant, linear and quadratic factors). The direction property represents the direction in which light rays travel and is represented as a vector.
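In the fixed-function APIs the three attenuation constants combine as 1 / (a0 + a1*d + a2*d*d), where d is the distance to the vertex; a small sketch:

    /* Distance attenuation in the classic fixed-function form.
       a0, a1, a2 are the constant, linear and quadratic factors. */
    float attenuation(float a0, float a1, float a2,
                      float distance, float range)
    {
        if (distance >= range)
            return 0.0f;   /* beyond the range: no contribution */
        return 1.0f / (a0 + a1 * distance + a2 * distance * distance);
    }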

Light Color Components
The light emitted by a light source consists of Ambient, Diffuse and Specular components.
1. Ambient light
It provides constant lighting for a scene. It lights all object vertices the same because it is not dependent on any other lighting factors such as vertex normals, light direction, light position, range, or attenuation.
2. Diffuse Light
Diffuse light comes from one direction, so it is brighter if it comes squarely down on a surface than if it barely glances off the surface. Once it hits a surface, however, it is scattered equally in all directions, so it appears equally bright, no matter where the eye is located.
3. Specular Light
Specular light tends to reflect off the surface in a preferred direction causing a bright shine that can only be seen from some angles. A shiny metal or plastic has a high specular component, and chalk or carpet has almost none.
Emissive Light
Emissive lighting is light emitted by an object itself rather than by a light source (it is a property of the material, not of the light). Emissive light lights all object vertices with the color defined in the material's emissive property. It does not depend on the vertex normal or the light direction.

Types of Sources
A light source may be of type point, directional or a spotlight.
1. Point Source
Point lights have color and position within a scene and emit light equally in all directions. A light bulb is a good example of a point light. Point lights are affected by attenuation and range. Since a point source has no single direction, the light vector used in the lighting calculation at a vertex is obtained by subtracting the vertex's position vector from the point source’s position vector.
2. Directional Source
Directional lights have only color and direction, not position. They emit parallel light. This means that all light generated by directional lights travels through a scene in the same direction. Imagine a directional light as a light source at near infinite distance, such as the sun. Directional lights are not affected by attenuation or range, so the direction and color are the only factors considered when the lighting calculation is done. Because of the small number of illumination factors, these are the least computationally intensive lights to use.
3. Spotlight
Spotlights emit a cone of light with two parts: a bright inner cone and an outer cone. Light is brightest in the inner cone and absent outside the outer cone, with intensity attenuating between the two; this type of attenuation is known as falloff. Spotlights are affected by falloff, attenuation, and range. These factors, as well as the distance light travels to each vertex, are all factored in when computing lighting for objects in a scene.
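The cone falloff can be sketched like this, assuming inner and outer cone half-angles given by their cosines (modeled on Direct3D's theta/phi/falloff parameters):

    #include <math.h>

    /* Spot factor for one vertex: 1 inside the inner cone, 0 outside the
       outer cone, and a smooth falloff in between. cosAngle is the cosine
       of the angle between the spot direction and the light-to-vertex ray. */
    float spotFactor(float cosAngle, float cosInner, float cosOuter,
                     float falloff)
    {
        if (cosAngle >= cosInner) return 1.0f;   /* bright inner cone     */
        if (cosAngle <= cosOuter) return 0.0f;   /* outside outer cone    */
        float t = (cosAngle - cosOuter) / (cosInner - cosOuter);
        return powf(t, falloff);                 /* falloff between cones */
    }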

Saturday, May 23, 2009

Hidden Surface Removal /Occlusion Culling

By selecting only visible geometry to render, graphics performance can be increased.

With respect to a view (the camera or player position), two basic culling operations are performed at the application level.
View frustum culling eliminates polygons outside the view frustum.
Occlusion culling eliminates groups of polygons that are inside the view frustum but occluded by other objects.
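View frustum culling, for instance, is often a bounding-sphere test against the six frustum planes; a sketch, assuming plane normals point toward the inside of the frustum:

    typedef struct { float x, y, z; } Vec3;
    typedef struct { Vec3 n; float d; } Plane;  /* n.p + d = 0, n points inward */

    /* Returns 0 if the sphere lies completely outside some frustum plane
       (cull the object), 1 otherwise (send it on for rendering). */
    int sphereInFrustum(const Plane planes[6], Vec3 center, float radius)
    {
        for (int i = 0; i < 6; ++i) {
            float dist = planes[i].n.x * center.x
                       + planes[i].n.y * center.y
                       + planes[i].n.z * center.z + planes[i].d;
            if (dist < -radius)
                return 0;   /* fully outside this plane */
        }
        return 1;           /* inside or intersecting   */
    }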

Face culling and depth buffering are built-in components (e.g., in OpenGL implementations).
Face culling eliminates polygons facing either toward or away from the camera (typically back faces).
The depth buffer (z-buffer) stores a depth value for each location in the frame buffer. When a new pixel is about to be written to the frame buffer, its depth is compared against the stored value and the pixel is kept only if it is closer.
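The per-pixel test can be sketched like this (assuming a smaller depth value means closer to the camera):

    /* Classic z-buffer test for one pixel. */
    void writePixel(float *depthBuffer, unsigned *frameBuffer,
                    int x, int y, int width,
                    float newDepth, unsigned newColor)
    {
        int idx = y * width + x;
        if (newDepth < depthBuffer[idx]) {   /* closer than what is stored? */
            depthBuffer[idx] = newDepth;     /* keep the new depth          */
            frameBuffer[idx] = newColor;     /* and the new color           */
        }
    }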


Algorithms for HSR work in either image space or object space. Object-space methods compare objects, and parts of objects, to each other within the scene. Image-space methods decide visibility point by point, at each pixel position on the projection plane.

Object space methods:
Face Culling
View-Frustum Culling
Occlusion Culling
Octree Method
BSP tree Method
Portals

Image space HSR methods:
Z-Buffer
A-Buffer
Scan line method
Area Subdivision Method

Monday, February 23, 2009

Visual Realism in 3D Games

Developers have been relying heavily on the following techniques for creating realistic static and moving images.

Transformations
Hidden Surface Removal
Lights
Materials
Shading
Colors
Textures
Shadows
Transparency
Reflection
Curves and Surfaces
Collision Detection
Animation
Realistic camera models
Advanced meshes
Image based rendering

Tuesday, February 17, 2009

A Typical Rendering Pipeline

Three conceptual stages of the rendering pipeline are application, geometry and rasterization.



The Application Stage

The input at this stage is a scene description: a set of meshes, materials, textures, lights and so on. The fewer the polygons, the lighter the burden on the geometry stage. Occlusion culling, view frustum culling, hidden surface removal, and LOD selection are the main techniques that keep the geometry stage from being overloaded. Culling means selecting only a portion of the geometry of the current scene for rendering; if an object is outside the viewing volume, it is culled. Occlusion culling methods eliminate objects hidden behind groups of other objects. Tessellation, game object movement, camera movement, AI and physics calculations are some of the other tasks performed at this stage.


The Geometry Stage


The input triangles contain vertices in object space, with information such as position, color, normals and texture coordinates. In addition, a 3D scene may contain other information such as light sources and materials. The geometry stage takes this input. The first task is to convert object coordinates into world coordinates using the rotations, scalings and translations chosen by the designers. Next, the world-space coordinates are translated and rotated into the camera's view. The viewing coordinate axes, which are used for moving and orienting the virtual camera of the scene, can be calculated easily from three entities: the eye point, the target point, and the up vector. Lighting calculations are also done in view space. An orthographic or perspective transformation is applied next; the perspective projection creates the illusion that distant objects look small and near objects look big. Clipping is performed next, which can result in re-tessellation of some of the triangles. Perspective division is applied after that, and the result is a 2D image residing on the projection window. Finally it must be mapped to the monitor; this last transform is called the window-to-viewport transform. The result contains (x, y, depth) for each vertex and is sent to the rasterization stage.
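A condensed sketch of the per-vertex transforms described above (row-major matrices and column vectors are assumptions of this sketch; clipping is omitted):

    typedef struct { float m[4][4]; } Mat4;
    typedef struct { float x, y, z, w; } Vec4;

    static Vec4 transform(Mat4 a, Vec4 v)
    {
        Vec4 r;
        r.x = a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z + a.m[0][3]*v.w;
        r.y = a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z + a.m[1][3]*v.w;
        r.z = a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z + a.m[2][3]*v.w;
        r.w = a.m[3][0]*v.x + a.m[3][1]*v.y + a.m[3][2]*v.z + a.m[3][3]*v.w;
        return r;
    }

    /* Object space -> world -> view -> clip -> NDC -> window coordinates. */
    Vec4 transformVertex(Vec4 obj, Mat4 world, Mat4 view, Mat4 proj,
                         float vpWidth, float vpHeight)
    {
        Vec4 clip = transform(proj, transform(view, transform(world, obj)));
        /* Perspective division: clip space -> normalized device coordinates. */
        Vec4 ndc  = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f };
        /* Window-to-viewport transform: NDC [-1,1] -> pixel coordinates. */
        Vec4 win  = { (ndc.x + 1.0f) * 0.5f * vpWidth,
                      (1.0f - ndc.y) * 0.5f * vpHeight,  /* y flipped for screen */
                      ndc.z, 1.0f };
        return win;
    }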

The Rasterization Stage


For each primitive (point, line or triangle), the renderer calculates the interpolated depth, vertex positions, light intensities and texture coordinates for each pixel, applying a shading model. These pixels are written to the frame buffer after a series of operations such as the depth test and blending, and the final image is displayed from there.
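As an illustration, the per-pixel interpolation is commonly done with barycentric weights over the triangle (a sketch that ignores perspective correction and assumes a non-degenerate triangle):

    /* Barycentric weights of pixel (px, py) inside the triangle
       (x0, y0)-(x1, y1)-(x2, y2). */
    void barycentric(float px, float py,
                     float x0, float y0, float x1, float y1,
                     float x2, float y2, float w[3])
    {
        float area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0);
        w[1] = ((px - x0) * (y2 - y0) - (x2 - x0) * (py - y0)) / area;
        w[2] = ((x1 - x0) * (py - y0) - (px - x0) * (y1 - y0)) / area;
        w[0] = 1.0f - w[1] - w[2];
    }

    /* Interpolate any per-vertex attribute (depth, color channel, u or v). */
    float interpolate(const float w[3], float a0, float a1, float a2)
    {
        return w[0] * a0 + w[1] * a1 + w[2] * a2;
    }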

Creative Commons License
A Typical Rendering Pipeline by Manoj MJ is licensed under a Creative Commons Attribution-Share Alike 2.5 India License.
Based on a work at gamedev1001.blogspot.com.