Mind's Eye

April 14, 2020

The photo at left above was taken with a hand-held camera near sunrise from a hotel window in downtown Boston several years ago.  Allowing for the fact that it is in black and white rather than in color, this raw image is what a camera sees.  The same image appears at right after a bit of digital enhancement to extend the camera's dynamic range and bring out light and detail from the shadows.  Again allowing for the switch from color to black and white, this version more closely approximates what the unaided human eye normally sees.
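
For the technically curious, that shadow-lifting enhancement can be roughed out in a few lines of code.  Below is a minimal sketch in Python, using NumPy and Pillow, that lifts shadows with a simple gamma curve; the file names are hypothetical, and a real raw-processing workflow would use a more careful tone curve than this.

    import numpy as np
    from PIL import Image

    # Load the image and scale pixel values to the range 0.0 to 1.0.
    # ("boston_sunrise.jpg" is a hypothetical file name.)
    img = np.asarray(Image.open("boston_sunrise.jpg"), dtype=np.float64) / 255.0

    # A gamma below 1.0 lifts dark tones more than bright ones,
    # pulling detail out of the shadows without blowing out the
    # highlights: a crude stand-in for the eye's adaptive range.
    gamma = 0.6
    lifted = img ** gamma

    # Scale back to 8-bit values and save the result.
    out = (np.clip(lifted, 0.0, 1.0) * 255).astype(np.uint8)
    Image.fromarray(out).save("boston_sunrise_lifted.jpg")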

Anyone who takes up photography soon learns that a camera doesn’t see the same way a human eye sees.  The eye is a much more versatile optical instrument.  Just try taking a picture that combines bright sunlight and deep shadow within the same frame.  The resulting image will be overexposed in the highlights, underexposed in the shadows, or both, whereas the human eye has a dynamic range that easily accommodates both extremes.  The eye adjusts rapidly to changes in ambient lighting and is 600 times more sensitive at night than during the day.  A photographer must master a variety of lenses, filters and camera settings to approximate what the human eye accomplishes automatically.

Still other adjustments may be necessary after a photograph is taken.  A routine step during post-production, using Photoshop or other software, is to adjust the white balance so that white objects appear white.  This “color correction” step is often necessary, especially for indoor shots, because a camera records light exactly as it falls on the film or digital sensor, making no allowance for the color temperature of the ambient lighting.  A camera will typically record a yellowish cast from incandescent lighting or a greenish or bluish cast from fluorescent fixtures.  Human vision, on the other hand, has its own built-in color correction mechanism, known as color constancy, so that white objects appear white regardless of lighting.
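
As a rough illustration of how an automatic white balance can work, here is a minimal sketch in Python using the classic gray-world assumption, namely that the average color of a scene should come out neutral; the file names are hypothetical, and Photoshop and camera firmware use more sophisticated methods.

    import numpy as np
    from PIL import Image

    # Load an RGB image.  ("hotel_room.jpg" is a hypothetical file name.)
    img = np.asarray(Image.open("hotel_room.jpg"), dtype=np.float64)

    # Gray-world assumption: the scene's average color should be neutral,
    # so scale each channel until the three channel means are equal.
    means = img.reshape(-1, 3).mean(axis=0)   # average of R, G, B
    gains = means.mean() / means              # per-channel scale factors
    balanced = np.clip(img * gains, 0, 255).astype(np.uint8)

    Image.fromarray(balanced).save("hotel_room_balanced.jpg")

Under incandescent light, for example, the red channel's average runs high, so its gain comes out below 1.0 and the yellowish cast is pulled back toward neutral.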

If the human eye were as literal-minded as a camera, there would be a gaping hole in our field of vision; in fact, there would be two of them, one for each eye.  That’s because there is a hole in the retina where the optic nerve passes through on its way to the brain, and there are no photoreceptor cells at that spot.  Yet we would be hard-pressed to find any blind spots in our field of vision (although they are there).  Why is this so?  Part of the answer is that the two blind spots do not overlap, so each eye supplies the visual information the other is missing.  The brain also plugs the hole, filling it in with visual cues from the surrounding environment, which is why the blind spot stays hidden even when one eye is closed.

The 19th-century Scottish physicist Sir David Brewster, best known as the inventor of the kaleidoscope, marveled at the filling-in of visual information to compensate for the blind spot in the eye.  He credited the “divine artificer,” a term not much employed anymore to describe God’s hands-on role in fashioning creation.  Whatever God’s role in creation, he appears to have delegated certain functions to the brain, which takes raw visual information from the eye and assembles it into the finished picture we see.  In other words, it is not the eye but the mind’s eye that puts the finishing touches on our view of the world.  We may think we see with our eyes, but we really only see what we think.

 
