Color image pipeline

Summary

An image pipeline or video pipeline is the set of components commonly used between an image source (such as a camera, a scanner, or the rendering engine in a computer game) and an image renderer (such as a television set, a computer screen, a computer printer, or a cinema screen), or for performing any intermediate digital image processing consisting of two or more separate processing blocks. An image/video pipeline may be implemented as computer software, in a digital signal processor, on an FPGA, or as a fixed-function ASIC. In addition, analog circuits can perform many of the same functions.
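To make the notion of chained processing blocks concrete, here is a minimal sketch of a pipeline expressed in software as an ordered list of stages applied to image data. The stage names, constants, and NumPy representation are illustrative assumptions, not any particular product's design:

    import numpy as np

    def black_level(img, offset=64.0):
        # Subtract the sensor's black-level offset, clamping at zero.
        return np.clip(img - offset, 0.0, None)

    def white_balance(img, gains=(2.0, 1.0, 1.5)):
        # Apply per-channel gains to an RGB image.
        return img * np.asarray(gains, dtype=img.dtype)

    def run_pipeline(image, stages):
        # Apply each processing block to the image, in order.
        for stage in stages:
            image = stage(image)
        return image

    raw = np.random.default_rng(0).random((4, 4, 3), dtype=np.float32) * 1023.0
    out = run_pipeline(raw, [black_level, white_balance])

Representing the pipeline as a plain list of callables mirrors the block-diagram view of the hardware: reordering the stages is just reordering the list.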

Typical components include image sensor corrections (including debayering, i.e. demosaicing the mosaic produced by the sensor's Bayer filter), noise reduction, image scaling, gamma correction, image enhancement, colorspace conversion (between formats such as RGB, YUV, or YCbCr), chroma subsampling, framerate conversion, image compression/video compression (such as JPEG), and computer data storage/data transmission.
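As a worked illustration of two of these blocks, the sketch below applies gamma correction and an RGB-to-YCbCr colorspace conversion. The power-law gamma is a simplification of real transfer curves; the matrix coefficients are the full-range BT.601 numbers used by JPEG:

    import numpy as np

    def gamma_encode(linear_rgb, gamma=2.2):
        # Encode linear [0, 1] RGB as gamma-corrected R'G'B'
        # (a simple power law standing in for the piecewise sRGB curve).
        return np.clip(linear_rgb, 0.0, 1.0) ** (1.0 / gamma)

    def rgb_to_ycbcr(rgb):
        # Full-range BT.601 conversion as used by JPEG; input in [0, 255].
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  =  0.299    * r + 0.587    * g + 0.114    * b
        cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
        cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
        return np.stack([y, cb, cr], axis=-1)

    rgb_prime = gamma_encode(np.random.default_rng(1).random((2, 2, 3))) * 255.0
    ycbcr = rgb_to_ycbcr(rgb_prime)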

Typical goals of an imaging pipeline include perceptually pleasing end results, colorimetric precision, a high degree of flexibility, low cost/low CPU utilization/long battery life, and reduction in bandwidth/file size.

Some functions are algorithmically linear; mathematically, such elements can be connected in any order without changing the end result. Because digital computers use finite-precision arithmetic, however, this equivalence does not hold exactly in practice. Other elements may be non-linear or time-variant. In both cases, there is often one sequence of components, or a few, that makes sense for optimum precision and minimum hardware cost or CPU load.[1]
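A small, purely illustrative experiment makes the point: two per-pixel gains are linear operations and commute exactly over the real numbers, but in 32-bit floating point the order of application changes the intermediate rounding, so some results differ:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(100_000, dtype=np.float32)  # stand-in pixel values
    gain = np.float32(1.7)                     # e.g. an exposure gain
    wb = np.float32(0.83)                      # e.g. a white-balance gain

    y1 = (x * gain) * wb   # gain first, then white balance
    y2 = (x * wb) * gain   # white balance first, then gain

    # Mathematically y1 == y2; in float32 the two orders round differently,
    # so a fraction of the values typically disagree by one ulp.
    print(np.count_nonzero(y1 != y2), "of", x.size, "values differ")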

This figure shows a simplified, typical use of two imaging pipelines. The upper half shows components that might be found in a digital camera; the lower half shows components that might be used in an image-viewing application on a computer for displaying the images the camera produces. (Note that operations mimicking physical, linear behaviour, such as image scaling, are ideally carried out on the left-hand side, working on linear RGB signals, while operations that should appear "perceptually uniform", such as lossy image compression, belong on the right-hand side, working on "gamma-corrected" R'G'B' or Y'CbCr signals.)
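The cost of choosing the wrong domain is easy to demonstrate: averaging two pixels (the core of image scaling) in gamma-corrected space yields a darker value than averaging in linear light. The sketch below assumes a simple power-law gamma of 2.2 in place of the actual transfer function:

    GAMMA = 2.2  # power-law stand-in for a display transfer function

    def to_linear(v):
        return v ** GAMMA          # decode a gamma-corrected value

    def to_gamma(v):
        return v ** (1.0 / GAMMA)  # re-encode for display

    # Two neighbouring gamma-corrected pixels: pure black and pure white.
    a, b = 0.0, 1.0

    wrong = (a + b) / 2.0                                  # averaged in gamma space
    right = to_gamma((to_linear(a) + to_linear(b)) / 2.0)  # averaged in linear light

    print(f"gamma-space average:  {wrong:.3f}")   # 0.500 (renders too dark)
    print(f"linear-light average: {right:.3f}")   # about 0.730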

References

  1. ^ Nakamura, Junichi (2005). Image Sensors and Signal Processing for Digital Still Cameras. CRC Press. ISBN 0-8493-3545-0.
  • "Implementing Image-Processing Pipelines in Digital Cameras". Archived from the original on 2008-07-04. Retrieved 2008-07-06.