It’s been nearly two weeks since I recorded this image of the not-so-distant city lights of Avis to our south and of Lock Haven to the west. The wagons were illuminated by the light of the full moon. I don’t know why I’m continually amazed at how much more sensitive my camera seems to be than my eye, especially at detecting color in low light, but I am. Finding a good discussion of this subject was a bit difficult, but the folks at Cambridge in Colour have done a nice job.

The CMOS sensor in my camera is made up of more than 24 million pixels (light-gathering units). A single human retina contains a mere 7 million cones (which allow you to perceive color) and nearly 100 million rods (which are more sensitive than cones and allow you to perceive black and white). The answer to the question Which is more sensitive, the eye or the DSLR? comes down to how you determine the actual sensitivity of the retina in megapixel equivalents (signal processing by the brain means that cones are very definitely not equivalent to pixels in their ability to gather bits of light energy). Some judge the retina to be as sensitive as a 50 MP photo sensor, while others would say that number is closer to just 10 MP.

So, in darkness at least, it shouldn’t surprise me that the 24 MP sensor may do as much as twice as well as my eye at recording color, and thus at recording the city lights in the image below, of which I was entirely unaware. At the other end of the spectrum of estimates, the camera does half as well. I’m no engineer, but it seems that this is one of those cases where the proof is very much in the pudding … the photo strongly suggests that the camera is far better at sensing color in near-darkness than my retina.
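For the curious, the “twice as well … half as well” range above is just the ratio of the camera’s 24 MP sensor to the two retinal estimates quoted. A minimal back-of-the-envelope sketch, using only the loose figures from the text (none of these are measured values):

```python
# Rough comparison of camera sensor resolution to the two quoted
# estimates of the retina's "megapixel equivalent". All numbers are
# the loose estimates from the text above, not measurements.
camera_mp = 24.0        # camera's CMOS sensor, in megapixels
retina_low_mp = 10.0    # low-end estimate of retinal sensitivity
retina_high_mp = 50.0   # high-end estimate of retinal sensitivity

print(camera_mp / retina_low_mp)   # camera vs. low estimate: 2.4
print(camera_mp / retina_high_mp)  # camera vs. high estimate: 0.48
```

So against the 10 MP estimate the camera comes out roughly twice as sensitive, and against the 50 MP estimate roughly half as sensitive, which is exactly the spread described above.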