High dynamic range imaging (‘HDR’) has become very popular in recent years. The term embraces many different techniques aimed at exceeding the camera’s dynamic range, including dedicated programs like Photomatix, the Merge to HDR feature in Adobe Lightroom and Adobe Camera Raw, Merge to HDR Pro in Adobe Photoshop, and hand blending of multiple exposures using layers and masks in Adobe Photoshop.
Compared to the human eye, digital cameras can record only a limited range of luminosities, although some cameras fare better than others (the Sony sensors in the top Sony and Nikon cameras are currently considered the gold standard in this regard). A camera’s ‘dynamic range’ is the span between the maximum and minimum light intensities its sensor can record in a single exposure. HDR involves taking several different exposures of the same scene and merging them to exceed that single-exposure limit. Typically this means at least one bright exposure that captures detail in the shadow areas, one dark exposure that captures detail in the highlight areas, and possibly one or more exposures between the two extremes to make the blend easier and more natural looking. Done properly, HDR can realistically capture a range of luminosities that more closely approximates what the human eye sees.
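For readers curious about what ‘merging’ means under the hood, the bracketing idea above can be sketched in a few lines of code. This is a minimal illustration only, not the algorithm used by Photomatix or Photoshop: it assumes three pre-aligned exposures stored as floating-point arrays in the 0–1 range, weights each pixel by how close it is to mid-gray (so shadow detail is drawn from the bright frame and highlight detail from the dark frame), and blends accordingly.

```python
import numpy as np

def fuse_exposures(exposures, sigma=0.2):
    """Blend pre-aligned exposures (floats in [0, 1]) by favoring
    well-exposed pixels: the weight peaks at mid-gray (0.5)."""
    stack = np.stack(exposures).astype(np.float64)           # (n, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)            # normalize per pixel
    return (weights * stack).sum(axis=0)

# Synthetic grayscale "bracket": dark, middle, and bright renderings of
# the same gradient scene (stand-ins for -2 EV, 0 EV, and +2 EV frames).
scene = np.linspace(0.0, 1.0, 256).reshape(1, -1).repeat(16, axis=0)
dark, mid, bright = scene * 0.25, scene * 0.5 + 0.25, scene * 0.5 + 0.5
frames = [np.clip(x, 0.0, 1.0) for x in (dark, mid, bright)]
fused = fuse_exposures(frames)
```

Because the weights are positive and normalized, each fused pixel is a weighted average of the source frames, so no pixel ends up brighter than the brightest frame or darker than the darkest one.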
The problem (in my opinion) with many HDR images is that, while merging multiple exposures, the photographer pays little attention to preserving the relative luminosities in the scene. For example, in many HDR images, areas in shadow are rendered as bright as, or even brighter than, the sky or other highlight areas of the image...

Read the whole article inside issue 67.