The Gaussian pyramid is an image processing technique that repeatedly blurs and shrinks an image, producing a sequence of successively smaller, smoother versions of it. It is named after the German mathematician Johann Carl Friedrich Gauss. This kind of precise, mathematically defined blurring is used extensively as a pre-processing step in computer vision. For instance, blurring a digital photograph suppresses noise and fine detail, which makes the edges of objects easier to detect and enables a computer to identify them automatically.
How does it work?
The “pyramid” is constructed by repeatedly replacing each pixel of the source image with a weighted average of its neighboring pixels and then scaling the image down, typically by half in each dimension. It can be visualized by stacking the progressively smaller versions of the image on top of one another. The result is a pyramid shape whose base is the original image and whose tip is a single pixel representing the average value of the entire image.
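A minimal sketch of this process in Python with NumPy is shown below. The choice of NumPy, the classic 5-tap kernel [1, 4, 6, 4, 1] / 16, and the function names are assumptions for illustration, not details prescribed by the text.

```python
import numpy as np

# Classic 5-tap Gaussian approximation kernel (an assumed choice): [1, 4, 6, 4, 1] / 16
KERNEL_1D = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0

def blur(image):
    """Blur a 2-D grayscale image with a separable 5x5 Gaussian kernel."""
    # Convolve rows, then columns (the 2-D Gaussian kernel is separable).
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, KERNEL_1D, mode="same"), 1, image
    )
    blurred = np.apply_along_axis(
        lambda col: np.convolve(col, KERNEL_1D, mode="same"), 0, blurred
    )
    return blurred

def gaussian_pyramid(image, levels=5):
    """Return a list of images, each level half the size of the previous one."""
    pyramid = [image.astype(np.float64)]
    for _ in range(levels - 1):
        smoothed = blur(pyramid[-1])
        # Keep every second pixel in each direction (downsample by 2).
        pyramid.append(smoothed[::2, ::2])
    return pyramid

# Example: a 64x64 random "image" reduced over 5 levels.
levels = gaussian_pyramid(np.random.rand(64, 64), levels=5)
print([level.shape for level in levels])  # (64, 64), (32, 32), (16, 16), (8, 8), (4, 4)
```

Each level is a blurred, half-resolution copy of the level below it; continuing the loop until the image is a single pixel yields the full pyramid described above.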
Programming terms