Pixel binning is a photography technique that combines data from groups of neighboring pixels into a single, more accurate pixel. It can be used to improve the clarity of images, reduce noise, and improve overall image quality, particularly in low light.
Binning Pixels Together
Pixel binning is a technique where multiple pixels on a camera sensor are grouped to work together as one. The most common form of binning takes four adjacent pixels and makes them act as one. On newer phones with truly enormous megapixel counts, specifically 108-megapixel phones, the ratio is nine to one. That makes the effective megapixel count 12.
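If you're curious what that grouping looks like in practice, here's a minimal, illustrative Python sketch using NumPy. It simply averages each block of neighboring values into one output pixel. Real cameras do this at the hardware or image-processor level and have to account for the color filter pattern, so treat this as a conceptual model rather than how any particular phone implements it.

```python
import numpy as np

def bin_pixels(raw: np.ndarray, factor: int) -> np.ndarray:
    """Average each factor x factor block of pixels into a single output pixel."""
    h, w = raw.shape
    h -= h % factor  # trim the edges so the dimensions divide evenly
    w -= w % factor
    blocks = raw[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# A toy single-channel "sensor" readout; the values stand in for photon counts.
raw = np.random.poisson(lam=20, size=(1200, 900)).astype(float)

# 3x3 binning: the same nine-to-one ratio that turns 108MP into 12MP.
binned = bin_pixels(raw, factor=3)
print(raw.shape, "->", binned.shape)  # (1200, 900) -> (400, 300)
```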
So why bother with more megapixels? If the final megapixel count is only going to be 12, why not just put a 12-megapixel sensor in the phone? Some companies, notably Apple, do exactly this. Indeed, on the surface, pixel binning may seem like just a way to put larger numbers on a camera to impress customers who may not know any better. However, there are in fact legitimate reasons to use binning. To understand them, though, we need to take a short detour into how camera sensors work.
Gathering Photons and Sensor Size
A camera sensor is, in some ways, like a solar panel. When photons hit the surface of the sensor, they create an electrical charge. Each pixel in the sensor is represented by a photosensitive spot on the sensor surface.
The sensor filters the different colors of light to construct a full-color image as the final product you can work with. These photosensitive spots aren't the same size in every camera sensor.
This is why megapixel count by itself isn't a good measure of image quality. Two sensors that have the same megapixel count will generate images with the same resolution. However, if one sensor is four times larger than the other, each photosensitive "spot" offers a much larger surface for photons to hit.
The more photons a sensor can sample from the scene, the better the quality of the final image: the result is a truer, more detailed representation of the scene. A full-frame camera sensor, found in professional dedicated cameras, measures 24×36 millimeters. That's a surface area of 864 square millimeters, or about 1.34 square inches. For comparison, the iPhone 13 Pro's main camera only offers 44 square millimeters, or about 0.068 square inches. The iPhone 13 Pro's sensor is at the larger end of the typical smartphone spectrum, but it's still much smaller than those used in dedicated cameras.
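To make those numbers concrete, here's a quick back-of-the-envelope calculation. It's a rough sketch that assumes a 12MP pixel count on both sensors and ignores the gaps, wiring, and microlenses on a real chip, but it shows how much more surface each pixel gets on the bigger sensor.

```python
MM2_PER_IN2 = 645.16          # square millimeters in one square inch
MEGAPIXELS = 12e6             # assume a 12MP output from both sensors

full_frame_mm2 = 24 * 36      # 864 mm^2, a professional full-frame sensor
iphone_main_mm2 = 44          # iPhone 13 Pro main camera, per the figures above

print(f"Full frame: {full_frame_mm2} mm^2 ({full_frame_mm2 / MM2_PER_IN2:.2f} in^2)")
print(f"iPhone 13 Pro: {iphone_main_mm2} mm^2 ({iphone_main_mm2 / MM2_PER_IN2:.3f} in^2)")

# Rough light-gathering area available to each pixel (1 mm^2 = 1,000,000 um^2).
full_frame_pixel = full_frame_mm2 / MEGAPIXELS * 1e6
iphone_pixel = iphone_main_mm2 / MEGAPIXELS * 1e6
print(f"Per pixel: {full_frame_pixel:.0f} um^2 vs {iphone_pixel:.1f} um^2 "
      f"({full_frame_pixel / iphone_pixel:.0f}x more surface per pixel)")
```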
Apple is a good example of the relationship between sensor size and photo quality. With every generation of iPhone in recent years, it has held the pixel count at 12MP but increased the size of each pixel. This improves image quality and performance but limits resolution.
The Benefits of Pixel Binning
This is where pixel binning comes into the picture. It offers a way to get the light-gathering benefits of large pixels, such as those on Apple's 12MP sensor in the iPhone 13 Pro, while still letting you take very high-resolution images when there's no shortage of light.
If you're taking photos in a bright environment, there's more than enough light for each of that 108MP sensor's tiny pixels to gather a good-quality sample. In low light, pixels are binned together to combine their light-gathering abilities, giving you low-light performance similar to a sensor with bigger pixels.
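To see why combining pixels helps in the dark, here's a small simulation sketch. It models a dim, evenly lit scene where each tiny pixel catches only a handful of photons (with random shot noise), then sums 3×3 groups the way a nine-to-one binning scheme would. The photon counts are made up for illustration, but the pattern holds: summing nine noisy samples roughly triples the signal-to-noise ratio.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# A dim, uniform scene: each tiny pixel catches ~4 photons on average,
# with Poisson-distributed shot noise (illustrative numbers only).
small_pixels = rng.poisson(lam=4, size=(3000, 3000)).astype(float)

# Sum each 3x3 neighborhood into one "binned" pixel, nine-to-one.
binned = small_pixels.reshape(1000, 3, 1000, 3).sum(axis=(1, 3))

def snr(pixels: np.ndarray) -> float:
    """Signal-to-noise ratio of a flat field: mean level over pixel-to-pixel spread."""
    return pixels.mean() / pixels.std()

print(f"SNR per tiny pixel:   {snr(small_pixels):.2f}")  # about 2.0
print(f"SNR per binned pixel: {snr(binned):.2f}")        # about 6.0, roughly 3x better
```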
The main downside is that you get a lower-resolution image. However, the typical 12MP image produced by a smartphone using pixel binning exceeds the resolution needs of most users. No one is posting massive 12MP images directly to social media, and 12MP is even suitable for fairly large prints and certainly more than enough for a typical picture frame.
If you don’t want your camera or phone to bin pixels down to a lower resolution, there’s usually a setting somewhere to force the maximum resolution. For example, Samsung’s S21 Ultra has a dedicated 108MP shooting mode in its camera app, although the resulting image is absolutely massive!
RELATED: Samsung Galaxy S20: How to Change Your Screen Resolution