From the course: Exploring Photography: Shooting in Raw Mode

Exploring how a digital sensor captures an image

- Like your eye, a digital image sensor works by gathering light. An image sensor is just a computer chip like any other: it's made of silicon, it has circuit pathways etched into it, and it's soldered onto a circuit board. But unlike most chips, it also contains a large area on its surface made of a particular kind of metal. It turns out that some types of metal emit electrons when they're struck by light, and hitting that same metal with more light knocks more electrons off of it. That's called the photoelectric effect, and it was first observed in the 1830s, but it was not accurately understood until Albert Einstein, of all people, published a paper on it in 1905. Curiously, he solved the mystery of the photoelectric effect by showing that light is not simply a wave, but rather is made up of discrete quanta called photons. So in this single paper, he not only opened the door to digital image sensors; he also laid the foundation for quantum physics, as one does. In 1921, he was awarded the Nobel Prize in Physics for this work, though there has always been some question and controversy about why he won for this rather than for relativity. But I digress.

Of course, Einstein never foresaw the digital image sensor. That was left to two engineers at Bell Labs, George Smith and Willard Boyle, who sketched out an idea for a new type of computer chip that they thought could be useful as computer memory and for creating a solid-state video camera. Over the course of just one hour, in October of 1969, they came up with the digital image sensor. The result of that hour was the CCD, or charge-coupled device. Some cameras still use CCDs, but most are now built around a technology called CMOS, or complementary metal-oxide semiconductor. At the simplest level, both technologies work the same way.

We start with a grid of tiny pieces of metal called photodiodes. A capacitor, a small component that can hold an electrical charge, is attached to each photodiode, and the combination of photodiode and capacitor is called a photosite. The image sensor in your camera is covered with a grid of these photosites, one for each pixel in your final image. When you turn your camera on, the photosites are given an electrical charge. This provides the sites with electrons that they can release later.

When you take a picture, the surface of the chip is exposed to light. Because of the photoelectric effect, as light strikes each photosite, some electrons are drained from the capacitor at that site; the more light that strikes a particular site, the more electrons get drained. After the picture is taken, the voltage from each photosite is read, passed to an amplifier, and then measured. That measurement is then digitized; that is, assigned a corresponding number, or digit. When the whole process is finished, you have a big mess of numbers representing the amount of light that struck each pixel in the image.

At this point, however, the numbers only describe levels of brightness; they tell us nothing about color. So far, our digital image sensor is only capable of yielding grayscale data. In other words, at the most fundamental level, all digital cameras are black-and-white devices. Getting color requires some very complex computation.
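To make the charge, exposure, readout, and digitization steps concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than a description of any real camera: the full-charge value of 1,000 electrons, the 8-bit converter, the unit gain, and the names `expose` and `read_out` are all made up for the example. It simply walks a small grid of photosites through the process described above: charge every site, drain electrons in proportion to the light that strikes it, then read, amplify, and digitize each site's remaining charge into a grid of brightness numbers.

```python
import random

# Illustrative constants, not real sensor values.
FULL_CHARGE = 1000   # electrons loaded into each capacitor at power-on
ADC_BITS = 8         # digitize each reading to the range 0-255
GAIN = 1.0           # amplifier gain applied before measurement

def expose(photosites, light):
    """Drain electrons from each photosite in proportion to the light
    striking it (the photoelectric effect). `light` is a grid of
    intensities from 0.0 to 1.0, one per photosite."""
    for site_row, light_row in zip(photosites, light):
        for i, lux in enumerate(light_row):
            drained = int(lux * FULL_CHARGE)
            site_row[i] = max(0, site_row[i] - drained)

def read_out(photosites):
    """Read each site's remaining charge, amplify it, and digitize it.
    More light drained more electrons, so brighter pixels have *less*
    charge left; we invert so larger numbers mean brighter."""
    levels = (1 << ADC_BITS) - 1  # 255 for an 8-bit converter
    return [
        [round(GAIN * (FULL_CHARGE - charge) / FULL_CHARGE * levels)
         for charge in row]
        for row in photosites
    ]

# Power on: every photosite's capacitor gets a full charge.
sensor = [[FULL_CHARGE] * 4 for _ in range(3)]

# Take a picture: a made-up 3x4 pattern of light intensities.
scene = [[random.random() for _ in range(4)] for _ in range(3)]
expose(sensor, scene)

# The result is just a grid of numbers describing brightness, not color.
for row in read_out(sensor):
    print(row)
```

Note that the output is exactly the "big mess of numbers" the transcript describes: one brightness value per photosite, with nothing in it yet that distinguishes red light from green or blue.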
