More of Your Questions Answered

Another round of questions I get via email whose answers deserve to be seen by all.

“Will HDR kill off Flash?”

This is one of those questions that, unfortunately, reveal the naïveté of the questioner. Had they used the term "fill flash," we'd have something to talk about. 

Flash is a light source. Light sources have intensity, direction, and other controllable properties. HDR, the usual shorthand for high dynamic range manipulation, doesn't have those things. You might say it has a kind of intensity, but that's really just tonal mapping.

Photography is all about controlling multiple variables, as many in real time as you can. The real time portion is important, because it implies that you'll collect optimal data if you're making optimal decisions. When you post process to deal with an issue, you're often dealing with sub-optimal data that you're trying to make right.

Fill flash is different from post-processing tonal manipulation. The reason many photographers resort to fill flash is to bring up tonal values in portions of a scene that are poorly lit. If you're doing that on a static subject, then yes, HDR techniques coupled with tonal adjustments in post processing can often give you basically the same result (though you'd be fighting noise with post processing). The key difference is that HDR produces much more work in post processing, whereas most fill flash users have a set of go-to settings that they can dial in quickly while in the field (Galen Rowell's was -1.7 stops flash exposure compensation).

Overall, however, HDR is more a solution to a sensor and data recording problem, not so much a solution to a photographic problem. Personally, I want to control light whenever possible, and flash is one of my most useful tools in doing so. HDR can't match what I can do with a flash.

“How does the use of lens corrections impact your lens ratings?”

A very tricky question that I keep asking myself, actually. Most of you are/will be using the camera maker's lens corrections. Indeed, camera makers are now designing lenses with the assumption that lens corrections will be applied either in camera or in post processing.

So my answer would basically be: yes, I look at tests and images with lens corrections applied. I also try to do the same thing without the lens corrections applied (that's not always possible; a camera that only shoots JPEG and has automated lens corrections is a problem, obviously). Another problem is deciding what to write about what I see.

In general, normal to telephoto lenses don't show a lot of difference between corrected and non-corrected results. There's usually minimal distortion correction as well as a modest vignetting correction (I rarely find that vignetting is even close to fully corrected). I can't say that lens corrections have ever impacted what I write about a normal or telephoto lens. That's because there's not a huge amount of linear correction being applied: typically under 2%, often much less.

Wide angle is a different problem. Lens designers are now designing lenses "wider" than their marked focal length, because they're correcting a lot of linear distortion after the fact. So a 14mm lens may actually be closer to 12mm uncorrected, with a large amount of barrel distortion being removed (3% or higher; I've seen as high as 8%, though most fall in the 3-5% range). If the lens design has much astigmatism or coma, it will show up as what I call smear when the corners are corrected (it may appear as smear even in the uncorrected result, but a lot of linear correction will definitely make it worse).
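To make that last point concrete, here's a minimal sketch of what a converter does when it removes barrel distortion. This uses a single-term radial model I've picked purely for illustration; real lens profiles use more polynomial terms and vary with focal length and focus distance:

```python
import numpy as np

def correct_barrel(image, k1):
    """Undo barrel distortion; k1 < 0 means barrel in this simple model."""
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    scale = np.hypot(cx, cy)                  # half-diagonal: r = 1 at corners
    x, y = (xx - cx) / scale, (yy - cy) / scale
    r2 = x * x + y * y
    # For each pixel of the *corrected* output, find where that point landed
    # in the *distorted* capture, and sample from there.
    factor = 1.0 + k1 * r2
    src_x = np.clip(np.rint(cx + x * factor * scale), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(cy + y * factor * scale), 0, h - 1).astype(int)
    return image[src_y, src_x]                # nearest-neighbor keeps it short
```

Note what happens at the corners: with 5% barrel distortion (a k1 of about -0.05 in this model), the corrected corner is built by magnifying pixels from about 5% closer in, and that magnification enlarges any astigmatism or coma blur right along with the image detail. That's the smear.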

Usually, I write about the corrected lens results with wide angle lenses, as almost everyone is going to want to correct the large distortions we typically see. If there’s a significant difference between the two I’ll report it, though.

I wish that the camera companies and conversion software folk were all on the same page, though. If a camera has a setting that turns corrections on/off, converters should pick that up and apply it (and it should be clearer that that's what's being done). Moreover, converters should allow you to change your mind after the fact.

"Does the increasing use of lens conversion profiles come with tricks and traps to be aware of?"

Lenses that have strong real vignetting that gets corrected can trigger unexpected noise in the corners of your image. In some cases the correction amounts to a three-stop increase, so at high ISO values this can produce strong noise issues you have to deal with. Moreover, many of the vignetting corrections are applied as "rings" of correction, not as a feathered correction, so when you post process heavily after the fact, you can see those rings pop up into visibility.
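Here's a rough sketch of both issues, with hypothetical numbers of my own choosing: the correction is just a radial gain map multiplied into the image data, and some implementations quantize that map into discrete rings rather than feathering it:

```python
import numpy as np

def vignette_gain(h, w, corner_stops=3.0, rings=None):
    """Gain map that lifts the corners by corner_stops stops.

    rings=None feathers the correction smoothly; rings=8 quantizes it
    into discrete steps that heavy post processing can make visible.
    """
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(xx - cx, yy - cy) / np.hypot(cx, cy)   # 0 center, 1 corner
    stops = corner_stops * r * r     # simple falloff model, illustration only
    if rings:
        stops = np.round(stops * rings / corner_stops) * corner_stops / rings
    return 2.0 ** stops

gain = vignette_gain(400, 600, corner_stops=3.0)
print(round(float(gain.max()), 1))   # ~8.0: three stops is an 8x multiplier
```

The noise point falls out of the arithmetic: multiplying the corner data by 8x multiplies the noise that was recorded there by 8x, too, which at high ISO values you will see.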

DSLRs, meanwhile, are at a disadvantage with lens profiles when it comes to linear distortion: the viewfinder doesn't actually show you your final composition (unless you use Live View). Mirrorless cameras do. So folks like me who spend a lot of time and energy trying to get the composition right in the camera can get frustrated by having profiles automatically applied to DSLR data in our raw converters: our frame edges aren't exactly where we expected them to be.

“Thinking long term, do we have to worry about lens profiles becoming forgotten or no longer available?” 

Yes, we do. And particularly so for low volume brands. I notice that a few special things that happened early in the D1 era are no longer supported by most current converters (e.g. the D1x line doubling trick). 

For decades I've advocated open sharing of information: raw file formats, JPEG profiles, spectral response curves, special compression methods, and now lens conversion profiles. The camera makers all think that they have "secret sauce," but in every case the proprietary information has eventually been reverse engineered, at least to a significant enough degree to mimic what the camera maker thought was secret.

Unfortunately, what happens in tech is that things get dropped over time. A raw converter that is fully updated for every digital camera ever made becomes more and more cumbersome to maintain, and significant changes (for example, the 32-bit to 64-bit transition software went through) tend to break things, and no software developer wants to go back and fix hundreds of them. So I wonder whether, some day in the distant future, the lens correction files we have today will make it through a big OS transition.

Of course, if you don't have the lens correction file, you'll simply get what the lens actually does and have to come up with your own corrections, which might not be a bad thing.

"Why don't we have square image sensors? (Or round ones?)"

Cost, primarily. The cost of producing an image sensor rises steeply with area (defects mean that yields drop roughly exponentially as dies get bigger). Moreover, you have more wafer waste with square sensors than you do with rectangular ones, and wafers aren't cheap. Couple that with the fact that a 36x24mm area needs a smaller image circle than a 36x36mm one would (a 43.3mm diagonal versus 50.9mm), and you get some reduction in the cost of the lenses, too.
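You can see the wafer waste argument with some back-of-the-envelope counting. This sketch just tallies how many die rectangles fit entirely inside a 300mm wafer circle; it ignores scribe lanes, edge exclusion, and yield, so the absolute numbers are illustrative only:

```python
import math

def gross_dies(wafer_diameter_mm, die_w, die_h):
    """Count dies whose full rectangle fits inside the wafer circle."""
    r = wafer_diameter_mm / 2.0
    n_w = int(wafer_diameter_mm // die_w) + 1
    n_h = int(wafer_diameter_mm // die_h) + 1
    count = 0
    for i in range(-n_w, n_w):
        for j in range(-n_h, n_h):
            x0, y0 = i * die_w, j * die_h   # die corner positions on a grid
            corners = [(x0, y0), (x0 + die_w, y0),
                       (x0, y0 + die_h), (x0 + die_w, y0 + die_h)]
            # A rectangle fits inside a circle iff all four corners do.
            if all(math.hypot(x, y) <= r for x, y in corners):
                count += 1
    return count

print(gross_dies(300, 36, 24))   # rectangular full frame dies per wafer
print(gross_dies(300, 36, 36))   # square dies: each is 1296mm² vs 864mm²,
                                 # so noticeably fewer fit, and each costs more
```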

As others have long argued, our vision is horizontally skewed, too. In particular, our peripheral vision mostly extends to the sides, not so much up and down. So we're quite used to thinking of what we're seeing as "wider" rather than squarer. I'm sure that comes into play with feature films: those big screens and wide aspect ratios were developed to try to take in all of our vision, both central and peripheral.

Personally, I prefer a 2:1 aspect ratio (16:9 comes close enough for me). Others, obviously, have different preferences. I wouldn't, however, want to pay for a 36x36mm sensor when I'm mostly only using 36x18mm of it.

Ironically, there are two cases where the extra cost might justify someone making a square sensor: (1) if the market volume got enormous and someone wanted to carve out a niche; and (2) if the market volume shrunk to the point where all gear was so expensive that it wasn't being bought on price. From 1999 to 2012 we were headed towards #1. Since then we've turned around and have been headed towards #2. But in neither case have we actually gotten to the point where it would be viable for someone to make a square sensor camera. We're still in the middle, where price matters.

As for round sensors, the issues become numerous. Again, there's the issue of sensor waste (unless the sensor took up the entire wafer being used). Also, there's the problem of data timing. With current technologies, columns near the left and right edges would be short and offload fast, while columns in the center would be long and offload more slowly. Given the current bandwidth situation for sensors, we'd likely have some really weird rolling shutter issues at a minimum.
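To illustrate that timing point with a made-up example (no real sensor I know of is built this way): if a round sensor were read out column by column, each column is a chord of the circle, so its length, and thus its offload time, varies dramatically across the frame:

```python
import math

def column_rows(radius_px, x):
    """Pixel rows in the column at horizontal offset x (a chord length)."""
    if abs(x) >= radius_px:
        return 0
    return int(2 * math.sqrt(radius_px ** 2 - x ** 2))

R = 2000  # hypothetical round sensor, 4000 pixels across
for x in (0, 1000, 1800, 1990):
    print(x, column_rows(R, x))
# 0 4000      center column: longest, slowest to offload
# 1000 3464
# 1800 1743   near the edges the columns shrink rapidly...
# 1990 399    ...so exposure-to-readout timing skews oddly across the frame
```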
