Rendering refers to the processes and algorithms in computer graphics that generate new representations of data from existing representations. A common example is generating pixels, discrete samples of a piecewise-continuous two-dimensional function, usually a projection of a piecewise-continuous (sometimes discretized) three-dimensional function, from the original three-dimensional data and parameters. This data often includes representations of: two-dimensional spaces describing the shapes of objects in reality (surfaces); how different points on each surface distribute light via reflection and transmission, i.e. spatially varying bidirectional reflectance/transmittance distribution functions (SVBRDF/SVBTDF); light emitters and their properties; continuous or discretized volumes (gas, marble); and particle-like matter (dirt, rain). The data is often approximated because of computing limitations such as current algorithms, hardware compute capability, latency, bandwidth, and storage. That is to say, representing everything in our world requires structural complexity that is extremely vast: imagine rendering every atom of a simple toy ball individually.
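The idea of pixels as discrete samples of a continuous two-dimensional function can be sketched in a few lines. This is a hypothetical illustration, not any particular renderer: the function here is a simple analytic stand-in for the projected scene, where a real renderer would evaluate visibility and lighting instead.

```python
def scene(x, y):
    """A piecewise-continuous 2D function over [0, 1) x [0, 1).

    Stand-in for a projected scene: bright inside a centered disk,
    dark outside. The discontinuity at the disk's edge is what makes
    the function only piecewise continuous.
    """
    return 1.0 if (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.25 ** 2 else 0.0


def render(width, height):
    """Produce a pixel image by sampling the function at pixel centers."""
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            # Each pixel is one discrete sample, taken at the pixel's
            # center in normalized [0, 1) coordinates.
            u = (i + 0.5) / width
            v = (j + 0.5) / height
            row.append(scene(u, v))
        image.append(row)
    return image


img = render(8, 8)
```

A center pixel samples the bright region of the function and an edge pixel the dark region; increasing the resolution simply takes more samples of the same underlying continuous function.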

Popular algorithms for generating pixels include ray tracing and rasterization. Various other rendering algorithms exist in computer graphics; while most are concerned with generating displayable images, various intermediate representations have useful applications. SVGPU is an example of a rendering algorithm used to generate planar geometry/vector images (2D piecewise-continuous functions) from 3D representations.
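The core of the ray-tracing approach can be sketched briefly: for each pixel, cast a ray from the eye through the image plane and test it against the scene geometry. This is a minimal sketch under assumed parameters (a single sphere, an image plane at z = -1); real ray tracers add shading, shadows, and recursive reflection/refraction rays.

```python
import math


def hit_sphere(origin, direction, center, radius):
    """Return True if the ray origin + t*direction hits the sphere for t > 0.

    Solves the quadratic |origin + t*direction - center|^2 = radius^2.
    """
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    # A hit requires a real, positive solution for t (in front of the eye).
    return disc >= 0.0 and (-b + math.sqrt(disc)) / (2.0 * a) > 0.0


def trace(width, height):
    """Cast one ray per pixel; 1 marks a hit, 0 a miss."""
    eye = (0.0, 0.0, 0.0)
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            # Map the pixel to a point on an image plane at z = -1,
            # with x, y in [-1, 1].
            x = (i + 0.5) / width * 2.0 - 1.0
            y = 1.0 - (j + 0.5) / height * 2.0
            ray_dir = (x, y, -1.0)
            row.append(1 if hit_sphere(eye, ray_dir, sphere_center, sphere_radius) else 0)
        image.append(row)
    return image


img = trace(9, 9)
```

Rasterization inverts this loop: instead of asking, per pixel, which geometry a ray hits, it asks, per primitive, which pixels the primitive covers.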

The default image used by Yahoo web hosting for an article teaser is actually an example of rendering four squares, each composed of four line segments (pieces of continuous line functions), into a pixel image. The squares are in an animation loop consisting of simple transformations (3D rotations, or 2D shears). On each iteration of the loop, a pixel image is rendered and displayed in sequence.
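An animation loop of this kind can be sketched as follows. This is a hypothetical reconstruction, not the actual Yahoo code: each frame applies a simple 2D rotation to a square's corner points; a real loop would then rasterize the four edges into a pixel image and display it before advancing to the next frame.

```python
import math

# Corner points of a unit-ish square, centered on the origin.
SQUARE = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]


def rotate(point, angle):
    """Rotate a 2D point about the origin by `angle` radians."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)


def animate(frames, step=math.pi / 8):
    """Yield the transformed square for each iteration of the loop."""
    for frame in range(frames):
        # One simple transformation per frame; a shear matrix could be
        # substituted for the rotation here.
        yield [rotate(p, frame * step) for p in SQUARE]


for corners in animate(4):
    pass  # here each frame's edges would be rendered to pixels and displayed
```

Each yielded list of corners is one frame's worth of geometry; rendering and displaying those frames in sequence produces the looping animation.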