can shooting in RAW increase dynamic range?



ReF
01-17-2005, 04:37 AM
I found this on DPreview.com in the D-rebel review:

RAW latitude (digital exposure compensation)
Shooting RAW on the EOS 300D provides you with approximately one stop (1 EV) of additional latitude above the clipping point (pure white - 255,255,255) of the original exposure. In the example below you can see that a large area of the image was over exposed, applying a -0.6 EV digital exposure compensation to the RAW image retrieves some of this detail.

www.dpreview.com/reviews/canoneos300d/page15.asp

I have an unclear understanding of what "latitude" refers to in that statement. Is it basically saying that you get more dynamic range shooting in RAW, or in RAW on the D-Rebel in particular? Does latitude refer to highlights? Again, I don't really know what it means. Thanks.

jaykinghorn
01-17-2005, 08:18 AM
I don't know exactly how the encoding works behind the scenes, but functionally, shooting RAW stores additional highlight values brighter than 255,255,255 white. This affords additional editing headroom of an extra 2/3 stop in the highlights. I'm not sure I would use the term "latitude," since those extra highlights have to be manually "brought in" by the editing software, and that process simply compresses the highlight info, causing a decrease in overall image contrast.
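To make that concrete, here is a rough Python/numpy sketch of the "digital exposure compensation" idea. The sample values and the roughly one stop of headroom (the figure from the dpreview quote above) are illustrative assumptions, not measurements from any camera:

    import numpy as np

    # Toy model of RAW highlight headroom. Assume a linear sensor value
    # of 1.0 is the JPEG clipping point (255 white) and that the RAW
    # file holds usable data up to about one stop above it (~2.0).
    raw_linear = np.array([0.25, 0.9, 1.3, 1.8])   # last two clip in JPEG

    # Straight 8-bit conversion: everything at or above 1.0 becomes 255.
    jpeg = (np.clip(raw_linear, 0.0, 1.0) * 255).astype(np.uint8)
    print(jpeg)        # [ 63 229 255 255] -- highlight detail gone

    # A -1 EV "digital exposure compensation" halves the linear values
    # before clipping, pulling the over-bright samples back into range.
    recovered = (np.clip(raw_linear * 0.5, 0.0, 1.0) * 255).astype(np.uint8)
    print(recovered)   # [ 31 114 165 229] -- detail back, contrast lower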

Perhaps someone else on the list can give a better explanation as to how the process works.

Best regards,
Jay Kinghorn
RGB Imaging

D70FAN
01-17-2005, 09:26 AM
I don't know exactly how the encoding works behind the scenes, but functionally, shooting RAW stores additional highlight values brighter than 255,255,255 white. This affords additional editing headroom of an extra 2/3 stop in the highlights. I'm not sure I would use the term "latitude," since those extra highlights have to be manually "brought in" by the editing software, and that process simply compresses the highlight info, causing a decrease in overall image contrast.

Perhaps someone else on the list can give a better explanation as to how the process works.

Best regards,
Jay Kinghorn
RGB Imaging

If you guys have a chance you might want to check out articles (and a book) by Uwe Steinmueller on RAW processing. I happened to get a tutorial along with the iNova D70 e-book, and it actually got me back into shooting in RAW. If you don't understand exactly what you are doing, then RAW will seem like a waste of memory card space, as most cameras do a pretty decent job of in-camera processing. But the differences, using a competent RAW image processor, can be striking.

I hesitate to quote word for word from iNova's e-book pages as that amounts to copyright infringement, but in essence this is my understanding of the wider latitude statement:

RAW images are the direct output of the camera's wide-latitude, 12-bit (on most cameras) analog-to-digital converter (ADC). Typically this is a holding state, to be processed by the in-camera processor and saved as an 8-bit-per-channel JPEG.

With RAW images you get the original, 12-bit, unprocessed image. Wide latitude means that each channel still holds up to 12 bits (4096 levels), vs. the processed 8-bit (256-level) JPEG image. Whether there is useful information in those extra levels depends on the content of the image and the user settings in the camera. But with in-camera-processed JPEGs you do not have access to that wider-latitude data, as the in-camera RAW processor has already made that decision for you.

As external RAW processors (like Adobe Camera Raw and Phase One Capture One DSLR) become more sophisticated, it should be possible to recover or emulate even more information from RAW files, making them truly a "digital negative".
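To put rough numbers on that, a back-of-the-envelope sketch in Python (purely illustrative; a real converter also applies white balance, demosaicing and a gamma curve before the JPEG is written):

    # Level counts for the bit depths discussed above.
    bits_raw, bits_jpeg = 12, 8
    print(2 ** bits_raw)                 # 4096 levels per channel in RAW
    print(2 ** bits_jpeg)                # 256 levels per channel in JPEG

    # So each JPEG level stands in for a whole range of RAW levels:
    print(2 ** (bits_raw - bits_jpeg))   # 16 RAW levels collapsed into one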

jaykinghorn
01-17-2005, 04:26 PM
RAW images are the direct output of the camera's wide-latitude, 12-bit (on most cameras) analog-to-digital converter (ADC). Typically this is a holding state, to be processed by the in-camera processor and saved as an 8-bit-per-channel JPEG.

Correct.


With RAW images you get the original, 12-bit, unprocessed image. Wide latitude means that each channel still holds up to 12 bits (4096 levels), vs. the processed 8-bit (256-level) JPEG image. Whether there is useful information in those extra levels depends on the content of the image and the user settings in the camera. But with in-camera-processed JPEGs you do not have access to that wider-latitude data, as the in-camera RAW processor has already made that decision for you.

That is also correct, but the user settings on the camera don't matter as much for third-party RAW converters like Adobe Camera Raw, as most of them can't read the "secret sauce" tags containing the user-selected settings from the camera.



As external RAW processors (like Adobe Camera Raw and Phase One Capture One DSLR) become more sophisticated, it should be possible to recover or emulate even more information from RAW files, making them truly a "digital negative".

This is true, particularly if Longhorn (Microsoft's much-touted but still-vaporware OS) is able to use floating-point calculations to take advantage of this extra stop of headroom. Currently, even though data exists for those values, it still shows up as 255 white unless you manually bring it back in line with the rest of the image. The camera is essentially capturing 7 stops of information but only displaying 6. Exactly why this occurs, I don't know.
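As a sketch of what "bringing it back in line" might look like, here is one way to compress the extra highlight data instead of shifting the whole exposure; the knee point and the 2.0 ceiling are invented numbers for illustration, not any converter's actual curve:

    import numpy as np

    # Squeeze only the values above a knee point so midtones keep their
    # contrast; as noted earlier in the thread, the compressed highlights
    # trade away some of their own contrast in the process.
    def compress_highlights(x, knee=0.8, ceiling=2.0):
        squeezed = knee + (x - knee) * (1.0 - knee) / (ceiling - knee)
        out = np.where(x <= knee, x, squeezed)   # shadows/midtones untouched
        return np.clip(out, 0.0, 1.0)

    raw_linear = np.array([0.25, 0.9, 1.3, 1.8])
    print(np.round(compress_highlights(raw_linear) * 255))
    # [ 64. 208. 225. 246.] -- all four values stay distinct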

The future of RAW conversion is very exciting. I would expect to see dramatic improvements in demosaicing, more advanced noise-reduction software, and better contrast (and perhaps dynamic range). These software advances will take the RAW files you shoot today and improve upon them. Just one of many great reasons to shoot RAW.

Best regards,
Jay Kinghorn
RGB Imaging

ReF
01-17-2005, 05:38 PM
Thanks for your responses. They were very helpful!

radek_42
01-17-2005, 05:55 PM
If you guys have a chance you might want to check out articles (and a book) by Uwe Steinmueller on RAW processing. I happened to get a tutorial along with the iNova D70 e-book, and it actually got me back into shooting in RAW. If you don't understand exactly what you are doing, then RAW will seem like a waste of memory card space, as most cameras do a pretty decent job of in-camera processing. But the differences, using a competent RAW image processor, can be striking.

I hesitate to quote word for word from iNova's e-book pages as that amounts to copyright infringement, but in essence this is my understanding of the wider latitude statement:

RAW images are the direct output of the camera's wide-latitude, 12-bit (on most cameras) analog-to-digital converter (ADC). Typically this is a holding state, to be processed by the in-camera processor and saved as an 8-bit-per-channel JPEG.

With RAW images you get the original, 12-bit, unprocessed image. Wide latitude means that each channel still holds up to 12 bits (4096 levels), vs. the processed 8-bit (256-level) JPEG image. Whether there is useful information in those extra levels depends on the content of the image and the user settings in the camera. But with in-camera-processed JPEGs you do not have access to that wider-latitude data, as the in-camera RAW processor has already made that decision for you.

As external RAW processors (like Adobe Camera Raw and Phase One Capture One DSLR) become more sophisticated, it should be possible to recover or emulate even more information from RAW files, making them truly a "digital negative".

Hmm. This is very interesting. I had heard about the RAW format, but never in this much detail. I am just starting with digital, and I decided to put the RAW format aside for the time being. Mind you, I know nothing about RAW ;)

However, there are a couple of things I would be interested in. You are saying that the CCD/CMOS sensor resolves three color channels at 12 bits each, and the camera saves that as a JPEG with 8 bits per channel. I guess the first question would be: which bits are taken? I can imagine there are several ways to do that: the first 8 bits, the last 8 bits, the middle 8 bits, or whichever 8 bits have the most contrast, etc.

You guys mentioned camera settings... do the camera settings determine which bits are taken into account?

R.

jaykinghorn
01-17-2005, 08:54 PM
No, the bits used aren't user-defined. Essentially, 8-bit and high-bit (12-, 14- or 16-bit) images share the same black point and white point. High-bit images slice the range into more discrete steps, resulting in (sometimes) better images after editing, because we really only need 200 or so levels in a finished image to give the illusion of continuous tone. Tools like Shadow/Highlight, or B&W conversions with the Channel Mixer, can benefit from having a high-bit image.
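A quick way to see that editing benefit is to count the tones that survive a drastic shadow boost. This is a synthetic Python illustration, not real camera data:

    import numpy as np

    # Quantize the same deep-shadow ramp to 8 and 12 bits, then apply a
    # 10x shadow lift (think of an extreme Levels or Shadow/Highlight move).
    linear = np.linspace(0.0, 0.1, 5000)
    as_8bit = np.round(linear * 255) / 255
    as_12bit = np.round(linear * 4095) / 4095

    def boost(x):
        return np.clip(x * 10, 0.0, 1.0)

    print(len(np.unique(np.round(boost(as_8bit) * 255))))   # ~27 tones: posterized
    print(len(np.unique(np.round(boost(as_12bit) * 255))))  # ~256 tones: smooth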

But that isn't the only reason to shoot RAW. I shoot RAW because: 1) the workflow is faster if I am not shooting in a controlled lighting situation, 2) I have a greater capacity to edit color balance and tone, 3) I can export multiple versions of the same file, each adjusted for localized tone and contrast... and the list goes on. A RAW workflow takes more skill to run efficiently, but often results in a superior image.

Jay Kinghorn
RGB Imaging

radek_42
01-18-2005, 06:50 PM
Thanks. That makes sense. I have one more question, concerning dynamic range. I posted it before in a different thread, so here it is:


... well, a similar (photo editing) question:
I read about enhancing the dynamic range of your pictures by using exposure bracketing (a series of under- and over-exposed frames) and "combining" these pictures; see http://www.dpreview.com/learn/?/Glossary/Exposure/Auto_Bracketing_01.htm

Does anyone know what "combining" means?

Thanks,
R.

Any ideas?

R.

ReF
01-18-2005, 07:42 PM
Thanks. That makes sense. I have one more question, concerning dynamic range. I posted it before in a different thread, so here it is:

Any ideas?

R.

Combining them in post-processing, such as in Photoshop, working with two or more layers. You can then keep them as separate layers and/or flatten them into one layer. The technique works very well in very high-contrast situations, and it is much easier with a tripod.

radek_42
01-19-2005, 11:20 AM
Combining them in post-processing, such as in Photoshop, working with two or more layers. You can then keep them as separate layers and/or flatten them into one layer. The technique works very well in very high-contrast situations, and it is much easier with a tripod.

Thanks. I have used layers in Photoshop before, but I do not know what "operation" to use when combining the layers. I tried it using the sample images on http://www.dpreview.com/learn/?/Glossary/Exposure/Auto_Bracketing_01.htm but I could not make it work.

Cheers,
R.

ReF
01-19-2005, 03:21 PM
Thanks. I have used layers in Photoshop before, but I do not know what "operation" to use when combining the layers. I tried it using the sample images on http://www.dpreview.com/learn/?/Glossary/Exposure/Auto_Bracketing_01.htm but I could not make it work.

Cheers,
R.

For a better explanation, go to:

http://luminous-landscape.com/tutorials/digital-blending.shtml

It works wonders.

Try not to shoot a pic where the sky is too blown out, because if it is, the overexposed areas might eat into other parts of your pic, making it very difficult or even impossible to combine with another shot. From what is being said in this thread, shooting in RAW should help a little with this, though I have yet to try it. IMO it is better to shoot the foreground a tiny bit underexposed (not to the point where you lose shadow detail, though) so that the blown highlights stay somewhat under control. You can then adjust the levels of your foreground to brighten it up, and increase the saturation and/or contrast if necessary. Shooting at least two different versions of the dark foreground is a good habit, while only one correctly exposed shot of the sky/other highlights is necessary, in my experience.
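For anyone who wants to see the kind of "operation" involved, here is a bare-bones Python/numpy version of the blending idea. The luminosity-mask recipe is one common variant of the tutorial's technique, and the arrays are stand-ins for two aligned exposures:

    import numpy as np

    # Blend a frame exposed for the sky (dark) with a frame exposed for
    # the foreground (bright), using the bright frame's own luminosity
    # as the mask: where the bright frame is blown out, the dark frame
    # shows through instead.
    def blend_exposures(dark, bright):
        lum = bright.mean(axis=-1, keepdims=True)   # 0..1 luminosity mask
        return bright * (1.0 - lum) + dark * lum

    # Stand-ins for two aligned exposures; real images would be loaded
    # from disk and must line up pixel for pixel -- hence the tripod.
    rng = np.random.default_rng(0)
    bright = np.clip(rng.random((4, 4, 3)) * 1.5, 0.0, 1.0)  # sky blown out
    dark = bright * 0.25                                     # ~2 stops under
    result = blend_exposures(dark, bright)

In Photoshop terms this is roughly the light exposure as the base layer, the dark exposure above it, and the dark layer's mask filled with the light image's luminosity (often blurred slightly), which is close to what the Luminous Landscape tutorial describes.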