The User Problems with Cameras

(commentary) Caution, very long article, several rants, some tongue in cheek ;~)


PROBLEMS? WHAT PROBLEMS?

Recently I asked all of you to send me the biggest problem you needed solving. I stopped counting after a couple hundred responses, and there were so many that it would take me a while to carefully group and tally them correctly. Instead, I’m going to tackle what you wrote a bit more informally. If a camera company wants to pay me to do their work for them, I’ll be happy to charge them for my time in presenting data in a nice tidy form that they can understand ;~). 

There were surprises in your responses. First, though, I have to point out that what I’ve written in the past few years certainly must have an influence on what you think, as I got a lot of parroting. Indeed, I even got back a few Last Camera Syndrome responses: “truly, all my problems are solved; [nothing new that’s] significant to shell out big bucks.” Or: "I don't plan on buying any more gear unless the cameras die. I know the lenses will outlive me.” (That last clause is almost a Catch-22, by the way: Canon and Nikon built their DSLR dominance on legacy lenses, now they’ll have to live with the consequences.)

Still, by not putting any real parameters on the question other than “biggest problem,” I did get a pretty good snapshot of what this site’s 3m unique visitors a year think about sophisticated cameras at the moment, and it’s not a pretty picture. 

In no particular order, here are many of your responses (edited for brevity) as bullet points; each bullet (or group of related bullets) is followed by any commentary I have:

  • Color matching between camera LCD, monitor, iPad screen, home printer and lab printer. It’s confusing and difficult to get correct.
  • Can’t get photos that look like what I saw (many). 
  • White balance setting (many).

I was a little surprised at just how many of you suggested that getting ultimate image quality out of your camera was your biggest problem. Sometimes I forget that I’ve been converting and correcting digital images for over 25 years now. So I guess one thing that spoke to me here is that perhaps I can step in with some sort of product and at least partially solve your problem. I’m going to have to think about that a bit more. 

You can see various elements of the “color problem” all over the Internet. It’s common, for example, for people to get excited by Fujifilm or Olympus JPEG colors compared to other cameras. Or claim that Nikon and Panasonic cameras don’t get skin tones right. Or that violets and purples come out wrong. 

The truth of the matter is much more nuanced, however. Fujifilm and Olympus JPEG color—at least at default settings—tends to be oversaturated and exaggerated by excessive contrast. It’s not “accurate” color. Skin tones are a really complex problem in that there is no such thing as a single skin tone value you need to be accurate to. Skin pigments range all over the place, and women tend to change or mask the color of their facial skin tones with cosmetics. So best case, a camera would have to be accurate across a pretty wide range of possibilities to be accurate for everyone. And purples? Wow, there are all kinds of things at play there, including the fact that our brains actually fabricate part of their response to those colors, and not always correctly. 

Still, something is wrong with the cameras here. 

I remember setting up video cameras in the 70’s using waveforms on known test targets in the lighting we were shooting in and matching camera response to broadcast standards using oscilloscopes. It could be a real pain to get cameras to “match” in certain situations, but we had the tools to do so. Exactly what tool would a digital still shooter use? 

Sure, we’ve got ColorChecker charts and a few pieces of software that will automate a profile from a shot of such test charts. It’s an extra workflow step (see below), but it can be done without too much trouble. 

Funny thing is, most people are actually complaining about color that’s so bad it fails the “eyeball test.” In other words, they’re saying that cameras are producing color that is obviously incorrect at a glance. How is it 30+ years into the digital camera age that cameras can still fail to get basic color right? Has there been that little improvement in being able to recognize the color of light in that time? 

Indeed, if you detect a face in the frame, how is it that white balance can be so far off? Even though I noted that skin tones can vary a lot, they also have a huge range of overlap, and there are commonalities to the pigments that should be able to tell you if you’re “off.” 

What can you do about this problem? Several things. First, read my Quick and Dirty Color article. Here are a few other things to think about that might help you get where you want to be:

  1. If you shoot JPEG, White Balance is crucial. Yes, I know that the camera makers all say that their automatic white balance systems work great. They don’t work nearly as well as the camera makers think. In virtually every one of my Nikon camera books I have to point out that the range over which Nikon claims Auto White Balance works is wider than the range over which it actually does. So, setting the white balance is something you need to get in the habit of doing, and if you don’t have four decades of setting white balances like I do and can’t do it by eyeball, learn how to use the Custom White Balance setting on your camera properly. Simply put, if the camera doesn’t start out with the right light information, when it starts applying color constructs to the data it collects you won’t get good color. 
  2. If you shoot in mixed lighting, you have to make choices. You have to set your white balance for the predominant contributing light—especially for people—or you have to find a way to filter one of the lighting sources (e.g. put a filter over your flash head to make it better match the background light). In some cases you can add light that “overwhelms” the existing mixed lighting. All of these are a pain, which is why most of us pros try to control all lighting whenever we can. 
  3. If you shoot in lighting that is frequency based (fluorescent, most indoor and outdoor arenas, etc.), you must shoot at a shutter speed that’s a multiple of the AC cycle or else you’ll get changing color between shots due to differences in phosphor decays. (The lights actually pulse at twice the AC frequency—120 times a second in the US—so exposures that span whole pulses average out the variation.) In the US, safe shutter speeds are 1/30, 1/60, 1/125. That’s about it. 1/30, as you might guess, risks having subject motion even with people who are relatively static. Shutter speeds above the ballast frequency (higher than 1/125 in the US) will produce inconsistent color between shots. And here’s a little tip for you indoor sports shooters: in most venues if you’re between rows of frequency based lighting your color inconsistencies will be higher than if you’re shooting an area lit predominantly by one light or one row of lights ;~). 
  4. If you shoot raw, you may think that you have the ability to change white balance after the fact. You do, but to what and how? That’s what a ColorChecker reference shot in the light is for. Get the first shot in the sequence right, then apply the change as a preset to your other images in that sequence using the same light. Shoot new references when the light changes. (See the sketch just after this list for what that reference shot actually buys you.)
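
For the raw shooters, here’s roughly what that reference shot buys you, as a minimal Python sketch. The patch values are hypothetical linear raw channel means; real converters work on linear sensor data and typically normalize to the green channel, which is what this does:

    # Derive per-channel white balance gains from a neutral reference
    # patch, then apply the same gains to every shot under that light.
    ref_patch = {"R": 1830.0, "G": 3050.0, "B": 2210.0}  # hypothetical means

    # Normalize to green, the usual reference channel in raw conversion.
    gains = {ch: ref_patch["G"] / val for ch, val in ref_patch.items()}

    def apply_wb(pixel, gains):
        # pixel is a linear (R, G, B) tuple shot under the same light
        r, g, b = pixel
        return (r * gains["R"], g * gains["G"], b * gains["B"])

    print(gains)                      # R and B gains > 1, G stays at 1.0
    print(apply_wb((1830.0, 3050.0, 2210.0), gains))  # patch comes out neutral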

Finally, you really should check your ability to see color before you complain about what the camera is doing. Take one of the many color blindness tests on the Internet. 

Can the camera makers “fix” this problem? Maybe. But I’m not sure I want the fix, for the very same reasons I didn’t like what happened towards the end of the film era: color response moved from “accurate” to “pleasing.” The problem is that “pleasing” tends to be faddish. Fujifilm and Kodak engaged in a process of pushing saturation and certain color responses that was a bit like the Cold War arms race: ever escalating. 

Personally, I’m a proponent of “accurate.” It’s far easier to add saturation, contrast, clarity, hue shifts, and the like, than it is to take them out of data, especially 8-bit compressed data. 

I have a strong suspicion that no matter what any camera maker does in response to this problem—other than better establishing white balance in Auto modes—it simply isn’t going to please most of you complaining about the problem. Which makes one of the things below (workflow) more important to solve.

  • Seeing what I’m about to shoot in tight spaces.

A lot of folk forget that Nikon’s older film SLRs tended to have modular viewing options you could purchase. Moreover, Nikon made a series of right-angle finders that were useful in particular situations for cameras that didn’t have a replaceable finder. 

Unfortunately we’re at the mercy of the Law of Numbers. I’ll bet that if I could watch 100,000 DSLR shooters at work, 99,000 of them would pretty much always shoot at eye level. The DSLR viewfinder or a mirrorless EVF works just fine for that. It also makes for boring photos, as the perspective is always somewhere between 5 and 6.5 feet off the ground. Always. 

I was lucky enough to train with photographers who never thought that way. Indeed, it was rare that they shot at eye level. On the ground, on top of things, in things, and pretty much every other position you could think of except for shooter eye level. 

Note I wrote “shooter eye level.” When you take photos of animals or people, the most engaging shots tend to be at eye level of the subject. Not many kids have their eye sockets 5 to 6.5 feet off the ground. Squirrels tend to only be very temporarily at that height as they’re running up the tree ;~). 

But if you move the camera to the subject position you need, how can you see what you’re framing? Right, modular viewfinders and right angle finders (or the modern day equivalent of swivel LCDs with Live View). Or perhaps even remote view on a separate screen (smartphone, tablet, computer, or portable LCD). 

While we have some variations of this available in the digital world, most of it is just poorly optimized. Take, for example, wanting to hold the camera way over my head to frame a shot. With a tilting LCD or swivel LCD and Live View, I can do that, but to reach the shutter release I’d have to pull the camera downwards some; the ergonomics of using the camera this way are terrible. That’s one reason why touch screens on a tilting LCD are important (amongst others). Also remotes.

  • Size/Weight (many). Carrying while walking, hiking, through airports, how often you carry the camera versus not. 

This was the most frequently mentioned problem you reported, as it turns out. And I believe that it’s the reason why mirrorless cameras, the Fujifilm X100, the Panasonic LX-100, and the Sony RX100 have been somewhat popular with serious shooters.

The Coolpix A and Nikon 1 factored into this discussion, too. Unfortunately, these solved the size/weight problem but produced other problems. Oops. That’s illustrative, because what people are really asking for is the "same thing, only smaller and lighter.” To a large degree, that’s why we’ve seen so many leakers/samplers checking out the Fujifilm X-T1, the Olympus E-M1, and the Sony A7 models. The hope is that these are “DSLR quality” at smaller size and weight. For some, they are. For others, something impinges on their DSLR-like expectations, typically focus performance. 

As I noted, one might conclude that the relatively flat mirrorless sales and declining DSLR sales actually revolve around this point. In other words, ILC sales are declining, but the buyers that remain actually want smaller/lighter product. It has nothing to do with whether there’s a mirror or not (the only dimension mirrorless impacts is the thickness of the camera, and as I’ve noted before, the most popular mirrorless products now have right hand grips that are nearly as deep as DSLRs’). But the mirrorless cameras are smaller and lighter, and thus getting more interest as time passes. 

Canon has barely explored the smaller DSLR arena with the SL1 (and now EOS M and SL2). I’m not sure why Canon felt that a smaller Rebel should cost more than the same Rebel in “normal” size (and I’m amused that if I go to CanonUSA’s site and do a compare between, say, a Rebel T3i and an SL1 the very first category, weight, doesn’t have an entry for the SL1 ;~) How am I supposed to compare them, Canon?). Frankly, it seemed as if Canon didn’t actually want to sell any SL1 cameras, but merely wanted to say that “yes, we make smaller and lighter DSLRs.” Only they don’t tell me how much lighter in their Web site comparison. Marketing Fail.

Nikon, meanwhile, is reducing size and weight on many of their DSLRs (e.g. D5xxx, D750), but it’s not really a substantive change yet. They’re going the correct direction, just not fast or far enough.

That said, I’m not suggesting that we shouldn’t have some larger DSLRs with more mass. There are times when that mass is important to have. But most cameras aren’t sold for purposes that would require brick-style builds. Besides, it would be perfectly fine to have a line of bigger, heavier products and another line of smaller, lighter products. Let the customer pick what they want. Solve the user problem.

  • Inability to control the camera quickly (UI)(many). 
  • Simplicity (many).

With Nikon shooters, this response almost always revolved around U1/U2 (consumer cameras) and Banks (pro cameras). First, the difficulty of getting these things set up right and knowing what’s in them; second, the limited number; and third, inconsistencies between models. 

It’s also interesting how many people noted that the Df was conceptually on track, but failed in actual practice. The demand here was for a straight out simple product: aperture, shutter speed, ISO, white balance, exposure compensation controls with only PASM modes. Many even asked for a raw-only camera (reducing the menu needs wickedly). 

One respondent took this a different way: “my requirements are different from yours but why is my camera identical to yours?” To a large degree, custom settings and programmable buttons/controls were the camera makers’ response to that request. Obviously, it’s not exactly working. This particular response had an interesting proposal that sort of ties into my modular proposals, basically custom order cameras where the user chooses:

  1. Body style (pro build, lighter build [see above ;~])
  2. Mount choice
  3. Buffer size
  4. Sensor choice
  5. Add on: wireless package
  6. Add on: wired package
  7. Add on: video package

In other words, let the user build the complexity that they desire. 

One thing I should point out, though, is this: complexity isn’t something that should always be avoided. If I built you a camera that was dirt simple to use, it would also have limitations. The higher up the professional scale you go, the more you can tolerate (and even need) more complex tools that force you to master them. That’s because as you get better and better at your craft/skill/art, it’s more and more small nuances that set your work apart from those of others, and you’re going to want a way to control those. 

Every camera maker has some form of “fewer features at the low end, more features at the high end” in their product line. Note that I didn’t write “simpler at the low end, more complex at the high end.” The problem is that many of the choices that the camera makers put into defining different models are absolutely arbitrary. They’re not justified by the simple/complex scale, they’re justified by marketing lists that have evolved arbitrarily. In other cases, they’re justified by ease of engineering/manufacturing. What they’re not justified by is user problem solving. 

I write about “cruft” in many of my reviews: stuff that just doesn’t need to be in the product, but someone in Japan decided was necessary. There appears to be no one tasked with eliminating cruft in the camera companies, as it gets worse with each generation of camera. Indeed, sometimes the only difference between generations of cameras from a user’s perspective is the addition of more cruft! 

What you told me over and over again in your responses is that the primary things you need in a camera are getting buried from immediacy. One of the reasons why I like the Panasonic LX-100 as my carry everywhere camera is that I can ignore the cruft and directly set and control everything I want to change 99% of the time. Even though the Sony RX-100 is smaller and does fit in my shirt pocket, it does not let me directly set and control everything I normally change anywhere near as easily. It’s not even a matter of image quality or focus performance or anything else. For a carry everywhere camera I’m reacting to photographic opportunity on the fly, so I want to be able to set what I need to directly and quickly. 

  • Can’t rely upon tuning/tolerances of my camera and lenses (many).

Here’s a place where camera makers redirected one of their problems (really tight manufacturing tolerances) into a user problem (AF Fine Tuning). I say that because I’ve just not seen an example yet where my camera/lens has really drifted out of tuning. Moreover, if I did find a combo that was drifting, I’d be sending that camera in for inspection and possible repair, because it would tend to indicate that something had moved (or worn out) in the complex chain that makes up the focus system. A symptom of the broken back frame on the D800, for example, is that it will not hold an AF Fine Tune for a lens.

Based upon a reasonable sample size at this point, the data also suggests that Canon users are better off than Nikon users. Data I’ve seen from pretty much all the Canon lenses tends to form tight bell curves around the zero tuning point, indicating that Canon’s manufacturing tolerances are well controlled, and not often off by much. The same type of data from Nikon lenses tends to be much less tightly grouped, and often not even in a bell curve. And this whole problem of D800s going in for “focus repair” and coming back all needing +10 to 12 AF Fine Tuning afterwards is mind boggling to me. That indicates that Nikon's repair facilities don’t use the same standards that their manufacturing facilities do. This creates yet another user problem (having to AF Fine Tune a second time).

There’s a simple solution: tighten tolerances and make sure that every aspect of your business uses the same standards. 

But the problem extends further: how do you tune a zoom lens? You don’t. At least not perfectly. Moreover, I’m stunned that the camera makers would push this whole tuning thing on users and not clearly explain what it is, what it is used for, and how to use it reliably. Basically engineering teams went “phew, glad that problem’s no longer mine.” Well, sure, it might no longer be the engineers’ problem, but now it’s the problem of the person who bought your gear, and they’re confused. Frustrated. Sometimes unable to get satisfaction. 

And please don’t tell me that Nikon’s “How to Use the AF Fine-Tune function” article in their knowledge base is an acceptable way to solve the user problem. First off, reading a ruler that’s off center from the focus point used means that you’d better not be tuning a lens that has field curvature ;~). No, don’t get me started on their presentation. 

Long term this problem is going to (mostly) go away. At the point where contrast detect focus information is flowing at very high speeds to the focus system—faster than it does today in any mirrorless system—the camera can probably autocorrect for fine focus discrimination of static subjects. However, moving subjects focused almost exclusively via phase detect are the primary need for AF Fine Tune, as lens motors may not actually get to absolute X when the focus system says "go to X". That’s one of the reasons why the Olympus E-M1 camera has AF Fine Tune in it, by the way.

So yes, I’m all for the camera makers solving this user problem. And taking the additional user problem they’ve handed us (AF Fine Tune) off our plates.  

  • Camera doesn’t focus fast/reliably enough (many).
  • Can’t focus quickly/reliably where I want to (many).

These were another set of common user problems in the responses I received. I’ve written about this before: if a camera maker really wants to stand out from the others, they need to make big forward strides in focus performance and control. It’s not something that’s easy to measure, but it is something that each of us immediately notices when we pick up the product. 

Funny thing is, this is where DSLRs still have a prime advantage over mirrorless cameras. I’d think that Canon and Nikon would be working feverishly to keep that advantage. Super feverishly. So feverishly that they might need to be liquid cooled.

I’ve written before that the DSLR phase detect system has a data discrimination advantage over the mirrorless on-sensor phase detect system. Sometimes you just can’t escape physics or geometry. Yet, despite this, the mirrorless systems are creeping up on DSLR focus performance, at least consumer DSLR focus performance. Some of this is faux. One way to deal with speed is to make things move less, which also means that tolerance starts to enter into the fray. I suspect that a lot of the “almost in focus” results I’m seeing with some mirrorless continuous autofocus tests is the result of just letting DOF cover the slight misses. Of course, camera makers have been using a very loose definition of DOF for as long as I can remember, partly because they took to rounding the Zeiss formulas quite a bit, so those slight misses become somewhat visible.

The problems of focus are simple conceptually: what are you focusing on? How fast can we move the lens to that focus position? Is that thing moving, and if so which direction and how fast? 

You complicate things by introducing another variable: the camera isn’t steady (see never wanting to use a tripod, below). Worse still, the subject isn’t in a predictable position in the frame because you’re not steady. I see this all the time with birds in flight (BIF) practitioners. They complain about the focus system. Then I look at the sequence of photos they took. The bird doesn’t fill the frame and in the sequence is all over the frame because they couldn’t keep the camera steady on the subject. They’re asking one heck of a lot from the camera.

Not that this couldn’t be figured out with the right technology. We’d need to know the subject and its motion tendencies, the distance to the subject, the focal length of the lens, and have motion detectors in the camera itself. 

One of the things about smartphones is that they have a huge number of sensors in them, including that motion detector I just mentioned. Much of what smartphones do is all about software tying things from sensors together. Ultimately, this will be what cameras need to do, too. The problem here is solved by more data, more computation, and the right software doing the computations and user interaction. To put it bluntly, the current “smart” autofocus systems in cameras (Auto Area, 3D, etc.) aren’t particularly smart and don’t have enough input. Note just how much focus improved when Canon and Nikon hooked the color metering sensor to the focus computations. We need more of that kind of work to get something that is truly smart. 

But the “dumb” autofocus systems in cameras are often further blunted with UI issues. If I’m controlling where the focus point (or points) is (are), why must I press a button before I can press anything to move that point (I’m looking at you Samsung and Sony)? Indeed, the recent appearances of thumb pads that are continuous on the high-end Canon and Nikon cameras are a step in the right direction (pardon the pun), though I’d like it better if my D4 thumb pad didn’t pop off at all the wrong times ;~). 

I’ve written before that if I were in charge of a DSLR group (or any camera group for that matter), focus is the one place I’d be trying to move the state-of-the-art. Even us pros have plenty of stories about struggling with focus. And Nikon, if you’re listening, we should always know what focus sensor was picked. Always. No exceptions. Moreover, showing us a bunch of focus positions that are “in focus” but really only in some range of near focus isn’t useful unless you also point out the primary focus position, the one that’s absolutely defining the focus plane. 

Unfortunately, we seem to have a ways to go before this problem is solved.

Wait a second, I hear some of you saying: what about light field technology, such as Lytro? Ugh. Yes, this is what we need, another workflow step where we have to pick the focus after the fact. Using proprietary tools with onerous restrictions. Generating far fewer pixels. Can you say “not ready for prime time?” 

  • DOF indication (many).

Quite a few of you want information about the current focus point, and even more want to know “what’s actually in focus?” (depth of field). 

I can tell you why you don’t get those things in current cameras: (1) low dispersion glass has a tendency to focus at different points varying with temperature; (2) the method by which a zoom lens reports focal length isn’t precise, ditto those systems that report “focus distance” to the camera; and (3) which theory of depth of field do you want the camera makers to use? And which Circle of Confusion value if you say Zeiss?
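
To see why that third question matters, here’s a minimal sketch using the standard thin-lens DOF approximations (the focal length, aperture, and distance are arbitrary examples). Just switching the assumed circle of confusion between two commonly used full-frame values visibly moves the computed near/far limits:

    import math

    # Standard DOF approximations:
    #   hyperfocal H = f^2 / (N * c) + f
    #   near = s(H - f) / (H + s - 2f),  far = s(H - f) / (H - s)
    # f = focal length, N = f-number, c = circle of confusion,
    # s = subject distance; everything in millimeters.
    def dof(f, n, c, s):
        h = f * f / (n * c) + f
        near = s * (h - f) / (h + s - 2 * f)
        far = s * (h - f) / (h - s) if s < h else math.inf
        return near, far

    # 50mm at f/2.8, subject at 3m, two common full-frame CoC choices:
    for c in (0.030, 0.025):
        near, far = dof(50.0, 2.8, c, 3000.0)
        print(c, round(near), "to", round(far), "mm")
    # 0.030 -> ~2730 to ~3330 mm; 0.025 -> ~2771 to ~3270 mm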

Just today I was looking at two photos taken with the same lens on two different cameras, trying to respond to a user’s complaint about clear differences in his photos, despite both being FX cameras. Sure, the EXIF field reported a focus distance. The same focus distance. So the photos should be really close, right? But I could tell just by looking at the images that the subject wasn’t at the same distance, and this was creating one of the defining differences between the two images. How can that be? 

Nikon’s “distance” system, first introduced with D-type lenses and some new Speedlights and cameras, for the most part breaks focus into only about 20 values. The top two values are often something like 20-50 feet, and 50 feet or more. In other words, not very precise. Those designs were more precise for close focus distances, and that has to do with where you’d use a flash and the nature of the inverse square law for light. Even in Nikon’s “distance-driven” flash system, repeatability is next to zero. It’ll be close, but not exactly the same if you let the camera calculate flash power using that distance information.

Once again we are in the land of tolerances. In many modern lens designs the focus elements barely move (to speed up focus performance, amongst other reasons). Getting extremely precise data of where they are would require expensive build items in the camera. 

So the question to those of you with this problem is “how accurate does the data have to be?” If you can tolerate less accuracy and some variable accuracy across lenses, it could probably be easily done. Indeed, a few camera makers have tried along the way, only to hear complaints about how inaccurate and thus un-useful the system was. If you need precise answers, it’s going to cost you.

  • Understanding the automation (many).

This one came up in all sorts of ways: how does the matrix meter really work, how does the flash system calculate exposure, how is Group AF different than Dynamic AF, at what data point does Highlights flash, and a host of other questions. 

Thanks for all the fish, but can you tell us what we’re eating? 

It’s called documenting the system. I’ve been cleaning out my attic lately and trying to better preserve some things that should be preserved while getting rid of others I don’t need. You may remember that I was, amongst many other things, in charge of documenting the Osborne computers back in the early 80’s. Guess what I found in my attic? A series of seven books that I and my staff wrote that documented everything we produced—including a technical manual on the computer itself—and not a single one of those was less than 400 pages long. It can be done. But camera companies don’t do it. 

Actually, a lot of companies don’t do it anymore. First, they all think they’re giving away trade secrets and special sauce (as if no one is going to do a tear down and reverse engineer what they did), and second, it takes time and staff and thus money to do it. Of course, my solution at Osborne was that all but one of those books were extra cost to the user if they wanted them ;~). 

Oh, we get some not-very-deep videos from the companies from time to time. Nikon’s knowledge base system also tends to give brief answers to very complex things, in essence not really explaining anything useful but looking good (see AF Fine Tune, above ;~).  

Frankly, if you want to stand out against competitors, you have to offer something that the competitors don’t. Information is something that would be easy to add to the mix. A lot easier than inventing an entirely new and better focus system ;~). Moreover, this makes you look more responsive to your customer (see trust, below).

  • Aspect ratios are fixed.

I was wondering if this one would show up. As far as I’m concerned, this is one of the most brain-dead aspects to imaging at the moment. Let’s see, I can go to Target and buy frames for 5:4 format output. My TV is 16:9. My preferred landscape print size is actually 2:1 to 3:1. On this Web site, 1:1 would work best. And I’ve got mostly 3:2 and 4:3 aspect ratios hard-wired into my cameras. Is that not insane? It certainly adds to workflow (see below).

What do camera companies think we actually do with our images? And why is it that the pre-formatted matte and frame companies all seem to still be living in the 20th Century? 

I’m going to mention the Panasonic LX-100 again. Hey, there’s a switch right there on the top of the lens to control aspect ratio. The company actually thought about aspect ratios when they built the camera. Wow, isn’t that progressive? Only problem, it’s missing 5:4 ;~).

On the top end DSLRs, Nikon got around to providing some aspect ratio choices. Not so much on the lower end models. And what’s with Adobe? If you select 16:9 on an Olympus, the camera doesn’t actually save a 16:9 raw; it saves the whole 4:3 frame in raw (correct call), but Adobe won’t let you get to that other data. 

First things first: all images should not be the same aspect ratio. Second things second: cameras should be able to produce images for all output sizes that are common. And when I shoot raw, I really want the camera to save the raw data it saw, all of it, plus the camera settings that were in effect. I want my raw converter to then recognize those camera settings and apply them, including aspect ratio. But I might change my mind later, so I want the additional data that might be cropped preserved and the ability to change the aspect ratio after the fact.
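
None of this is computationally hard, by the way. Here’s a minimal sketch of the centered crop a camera (or converter) would have to compute to honor an arbitrary output aspect ratio from the full frame (the 7360 x 4912 dimensions are just an example 3:2 frame):

    def crop_for_aspect(width, height, target_w, target_h):
        # Largest centered crop of (width x height) matching target_w:target_h.
        if width * target_h > height * target_w:   # frame too wide: trim sides
            new_w = height * target_w // target_h
            return ((width - new_w) // 2, 0, new_w, height)
        else:                                      # frame too tall: trim top/bottom
            new_h = width * target_h // target_w
            return (0, (height - new_h) // 2, width, new_h)

    # One 3:2 frame output to the ratios mentioned above, as (x, y, w, h):
    for ratio in ((16, 9), (5, 4), (1, 1), (3, 1)):
        print(ratio, crop_for_aspect(7360, 4912, *ratio))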

Of all the things that digital inherited from film, the nastiest is this notion that pictures should all be the same aspect ratio. It wasn’t true in the film world, though we had to do our cropping post facto. Now that we have computers in our cameras we ought to be able to do our cropping pre facto, yet apparently almost no one making cameras has woken up and discovered that. 

Note that not solving this problem for a user creates additional downstream problems for the user (workflow, see below). When I write that “camera companies do not understand workflow,” this is just one aspect of what I mean. Camera companies are stuck on the notion that the camera just takes a picture, then you take the storage media out of the camera and do things to it. 

I will give Nikon partial credit for one thing: in the Retouch menu they’ve provided a number of “solutions” to the output problem, including aspect ratio. Unfortunately, they live as distinctly different menu options and have overly burdensome workflow. Try it. Shoot a raw file that you want to output as a 1920 x 1080 pixel 16:9 aspect JPEG. That’s right, you’ll probably be using three different Retouch menu commands, each of which feels like it was designed by a different person. Partial credit only, Nikon. 

  • Can’t shoot reliably in low light (many variations). 

DX and FX shooters were a little different in talking about this problem, as you might expect. A lot of DX shooters think part of their problem could simply be dealt with by better DX lens choices ;~). They’re certainly right when it comes to wide angle, and that’s one of the reasons why the Tokina 11-16mm f/2.8 zoom has been so popular: it’s covering the 16-24mm (equivalent) range about a stop faster than you can achieve with other solutions. The difference between 1/60 and 1/125 as a shutter speed when dealing with humans is important. (Which goes to point out that some problems have sub-problems in them. In this case, subject motion in low light might be the problem, and it’s a problem because of another sub-problem: you’re using a smaller sensor at maximum tolerable ISO. In other words, there are other potential solutions to the problem.)

But here’s the bad news: photons conspire against you. More bad news: sensor makers still haven’t managed to break the laws of physics. 

What’s that mean? Well, the classic complaint tends to come from sports photographers shooting night games (inside or out). You see, they need a very short shutter speed to stop action (at least 1/500, but for pro sports 1/1000 is a better minimum). The telephoto lenses they tend to use only go to f/2.8. EV (exposure value) is the log base 2 of the aperture squared divided by the shutter speed (EV = log2(N²/t)). That means we need EV13 to achieve that aperture and shutter speed combination. Night sports are typically lit to a max of EV9, and often worse, thus we’re bumping the ISO at least four stops up. 
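
If you want to check that arithmetic yourself, here it is in a few lines (the EV9 figure is the venue lighting level cited above, referenced to ISO 100):

    import math

    def ev(aperture, shutter_s):
        # Exposure value at ISO 100: EV = log2(N^2 / t)
        return math.log2(aperture ** 2 / shutter_s)

    needed = ev(2.8, 1 / 1000)        # f/2.8 at 1/1000 -> ~EV 12.9, call it 13
    venue = 9                         # typical night-sports lighting, at best
    stops_short = needed - venue      # ~4 stops of light we don't have
    print(round(needed, 1), round(stops_short, 1))
    print("ISO required:", round(100 * 2 ** stops_short))  # ~1530, i.e. ISO 1600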

Ugh. In essence, we’re getting few photons to start with and multiplying them upwards via the sensor’s built-in gain setting. The result is that we have lower dynamic range, and at the bottom of that range, the randomness of photons is causing additional “noise.” 
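
That last point is just Poisson statistics: a photosite’s signal-to-noise ratio scales with the square root of the photons it collects, and gain applied afterwards amplifies signal and noise equally. A quick simulation shows it (photon counts are hypothetical, and I’m using the Gaussian approximation to the Poisson distribution, which is fine at counts this large):

    import math, random

    def snr_db(mean_photons, samples=100_000):
        # Simulate photon arrivals; SNR is fixed at capture time,
        # before any gain (ISO) is applied.
        vals = [random.gauss(mean_photons, math.sqrt(mean_photons))
                for _ in range(samples)]
        mean = sum(vals) / samples
        var = sum((v - mean) ** 2 for v in vals) / samples
        return 20 * math.log10(mean / math.sqrt(var))

    for photons in (40_000, 2_500):   # a bright exposure vs. 4 stops less
        print(photons, round(snr_db(photons), 1), "dB")
    # ~46 dB vs. ~34 dB: four stops fewer photons costs ~12 dB of SNR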

Will this get better for us in the future? Probably. Depending upon who you ask and what kind of mood they’re in that day, you’ll hear most engineers say that continued sensor improvement with current technologies should net as much as 2 stops more from where we are today (though it won’t do anything about random photons, so those shooting in really low light will always see quantum shot noise). Some may suggest that there are other technologies that might provide another boost of some sort, though there’s usually a lot of waving of hands and chants to the gods as they talk to you about that, and no one wants to be quoted on numbers. 

Which brings me back to where I started: a faster lens. That’s the solution. Or wait. That’s the other solution. And the best solution is to wait and buy a faster lens.

So if faster lenses are the solution, what the heck is Nikon doing with DX? ;~) f/3.5-5.6. f/3.5-5.6. f/3.5-5.6. f/3.5-5.6. Yet they could start solving real user problems today with f/2, f/2.8, even f/4 in some cases. Oh, right, they want you to use the other solution: buy an FX camera.  

  • Can’t get sharp stills without a tripod (many).

I was surprised about how many of you wrote in and said that you never want to use a tripod. Never. In retrospect, I should have seen that one coming, as it is part of the smaller/lighter/simpler requests. Basically, tripods are a hassle. They add weight, size, and complexity to a shoot. 

The brave amongst you asked for image stabilization systems that are so sophisticated that they take out every possible movement, no matter how small the displacement or how briefly the movement lasts. 

A number of these “I want perfect” requests (see AF Fine Tune, above) start to hit limits in what can be done in the affordable realm. Or today. Technology does move forward, so you can expect things like IS systems to get better over time. However, did you know that you could have far better IS/VR today if you really wanted it? That’s right, both the detection and movement mechanisms that are used in all current IS systems come in far higher precision versions. Only one problem: those parts are outrageously expensive compared to the ones that the camera makers are using. 

Sometimes you can only solve a problem through application of money and time. This is one of those. Some day, grasshopper. Not today.

That said, shutter slap has been a known detriment to image quality for as long as I can remember (which is to say 50 years or more). Things have gotten better, but even today I still find that pretty much every camera using a mechanical shutter has a range of shutter speeds at which less-than-optimal image quality is achieved. Sometimes, as in the case of m4/3 cameras or the Sony A7r, it’s far beyond less-than-optimal and could be almost characterized as an unusable range of shutter speeds. I was happy to see Nikon partly address this with the electronic first curtain feature on the D810. We need more camera companies paying more attention to this.

  • Automated flash exposure.
  • Need higher flash sync speeds. 

I’m always amused when I watch experts in lighting teach. Almost without exception, they immediately start turning off automatic flash exposure features. Oh, automatic triggering via light or radio wave they leave on. They might even use manual flash level controls via remote triggering (e.g. adding or removing EV from the flash fire level). But they rarely, if ever, let the camera do the calculations for them. 

Frankly, lighting is one of the three things you have to master to be a great photographer (composition and technique being the other two). Do you really want to have an auto-light, auto-compose, auto-technique camera? Wouldn’t all the photos produced by such a thing look the same? 

Personally, I want radio control of multiple flashes and no limit to shutter speed while shooting with a flash. That solves all my problems. Of course, I’ve been doing studio and remote lighting for 40+ years now. When you do it long enough and often enough, you develop a personal style for amount and direction of lights. Notice I said lights.

But we have another intersection of problems here. One of the reasons why many of you want better automated flash exposure is because you’re shooting in low light. Hey, one solution to your problem is to add light. Then those pesky random photons will be less pesky. Heck, your SuperSmallSensor won’t need to be taxed to its extreme. 

And here physics raises its ugly head again. Perhaps Sheldon Cooper will someday solve this problem for you by having some of your light go through alternate universes to balance out, but when you use a flash it will accurately light one distance only. Just like your camera will only accurately focus at one distance only (hopefully those two are the same ;~). 

So I’ll just ask for two problems to be solved here: (1) reliable radio communication and control between camera and light units (of all kinds, not just flash); and (2) global electronic shutters that allow us to shoot flash at any speed. #1 is possible today through third party devices, while #2 is going to take a few more generations of sensor technology to work through all the issues.

  • ETTR exposure.

Here we are 15 years into the DSLR era and 14 years after I and others asked for some specific tools in our DSLRs, and lo and behold, the camera makers still don’t seem to get that they’re causing some of us to jump through hoops to solve some basic problems. Like “how many electrons did I collect in each channel?” It’s as if the camera companies had never heard of Ansel Adams.

Aside: before someone accuses me of having a fat head and comparing myself to Ansel Adams, let me be clear: quite a few of us have Ansel Adams-like goals of optimizing our exposures. That’s what I’m referring to. I actually don’t have a shot at being as good as Ansel Adams if I don’t have the tools that let me understand my data. And no, I don’t think I’m Ansel Adams.  

ETTR, for those of you who have been sleeping in class, is “expose to the right.” What that means is that we want to put the very brightest part of our exposure right up to the top of the electron saturation well of the sensor (I sometimes make an exception for specular highlights). If the sensor’s photosites can hold 100,000 electrons, we want the brightest thing in our scene to be defined by a value of 100,000 (actually, probably 99,999, but what’s a little rounding between friends? ;~). Lower values then fall where they fall. In this way we maximize the use of bits in the data (every data point uses the maximum possible number of bits it can), and we take full advantage of the sensor’s dynamic range.

The resulting photo might not look right, though. But that’s okay, because we’re working in raw and will apply our own curves to the data to make it look the way we want it to. Just like Ansel Adams did via chemistry and a lot of waving of hands in the darkroom.
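
In raw-number terms, ETTR is just a headroom calculation: how many stops sit between the brightest value you recorded and sensor saturation. A minimal sketch (the white level and channel maxima are hypothetical; real raw files report their white level in metadata):

    import math

    WHITE_LEVEL = 15800   # hypothetical 14-bit sensor saturation value

    def ettr_headroom(channel_maxima):
        # Stops between the brightest recorded value and saturation;
        # positive means you can add that much exposure and clip nothing.
        brightest = max(channel_maxima.values())
        return math.log2(WHITE_LEVEL / brightest)

    # Brightest raw value per channel in a test frame (hypothetical):
    print(round(ettr_headroom({"R": 3900, "G": 7400, "B": 2800}), 2))
    # -> 1.09: open up about a stop and the green channel just kisses saturation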

Strangely, one tool we did have Nikon took away from us when they introduced Capture NX. It used to be that we could create a White Balance value with Capture 1 that didn’t apply any white balance correction to the data, thus when we looked at histograms and highlights, we were actually looking pretty much at the actual data (e.g. UniWB). This doesn’t work for JPEGs, obviously, but the problem is that even if you’re shooting raw, histograms and highlights in every camera except one made by Leica show you information derived from the JPEG. Which is not the raw data. And no matter how well we set our cameras, no camera-made JPEG is going to be as optimal as what we can do with our digital darkroom tools (e.g. a good raw converter and Photoshop). 

  • Noise (of camera).

Electronic shutters to the rescue. 

Frankly, my solution to this problem is very pragmatic. There aren’t a lot of times I shoot when I need the camera to be silent (golf, wedding ceremonies, surreptitious street shooting, theatre). For those occasions I’ll use one of the cameras that can be configured to be silent, simple as that. My usual cameras stay home (or in the bag). Problem solved. 

But I don’t think most of you will be happy until your main—and possibly only—camera can be configured for silent running. Patience, grasshopper.

  • I no longer trust Nikon products.

Ah, Nikon's QA issues show up as a user problem ;~). Just as in real life relationships between significant others, once you lose trust, the relationship itself tends to disintegrate. 

However, the one person who mentioned this problem cited hearsay, not reality. It wasn’t that a camera or lens they bought had a problem; it was the rumor that the new 300mm f/4 has some sort of problem that set this person off (and they then quoted problems with cameras they don’t even own). 

Some people have wondered why I have been harping at Nikon for the way they’ve first denied then handled various QA issues. Well, this is the reason: brand credibility is being eroded. People aren’t buying products because they’re afraid they’ll encounter a problem. Worse still, if you do encounter a problem, Nikon’s customer service will just make it worse. I’m currently working with and following details on one camera’s journey back and forth to NikonUSA that just has me flabbergasted. The camera clearly still has problems, though one major problem was addressed. Nikon’s latest defense is that the user is trying to use it in too cold a situation. Hmm. Since this camera was completely torn down to rebuild, don’t you think the problem might actually be that your repair person made a mistake on the rebuild? Wouldn’t you want to check that?

Funny thing is, of all the user problems you readers threw at me, this would be the simplest and easiest one for Nikon to address. Apologize. Admit mistakes were made. Clearly identify what those mistakes were. Clearly state you will do your best to not make them again. Set up a customer service system that is responsive. Frankly, I’d even go further and do the Apple Care thing and make it an element of the “turnaround.” As in “we now offer a no-questions-asked repair and replace policy for a one-time up front fee if you’re truly concerned about the products you buy from us.” 

  • Voice annotation.

I was surprised by this one, but it makes sense. For a while when I was shooting some sports with the big Nikon pro cameras I was using voice annotation to capture an important detail (e.g. player name). I can see how photojournalists would find this feature useful, too. 

Certainly every DSLR now has a microphone and speaker in it (because of video), so there’s no reason why camera makers can’t add this feature. Other than it’s one of those arbitrary feature decisions they make and it adds complexity (see above). 

  • Notation (e.g. GPS).

We all carry smartphones these days. All smartphones have GPS in them. Why don’t cameras just grab GPS data from our smartphones? 

Originally, because someone at a company that thinks it sells accessories profitably but really is terrible at selling accessories (and whose name starts with an N) saw this as a profit center, we got add-on GPS units. Exactly none of which I’d say work reliably enough over time to invest in. I’ve watched in amusement as my teaching assistant—who is a GPS fanatic—has destroyed GPS unit after GPS unit after 10-pin connector after cord after…well, you get the idea. And then the darned unit doesn’t always find satellites or report current position accurately. Yet I don’t have those problems with my iPhone ;~). 

This is another variation of the “camera companies live in their own world” problem. We see that with workflow (see below), but it’s a systemic problem. Camera companies don’t know what they integrate with (remember the aspect ratio problem, above?). Or they don’t want to integrate with anything. Maybe it’s too hard for them or something. 

Funny thing is that the GPS problem is fixed by a Bluetooth connection and a background app at each end, worst case. Not by a team designing yet another GPS unit where they try to skimp on the parts to keep their profit on the accessory up, then fail to get their marketing and sales departments to actually sell. 
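
The app side of that is mostly just matching timestamps: interpolate the phone’s track log at each photo’s capture time. A sketch of the core logic (the track points are made up; writing the result into EXIF is left to whatever tool you already use):

    from bisect import bisect_left

    # Phone track log: (unix_time, latitude, longitude), sorted by time.
    track = [
        (1425210000, 40.0150, -105.2705),
        (1425210060, 40.0161, -105.2722),
        (1425210120, 40.0174, -105.2741),
    ]

    def position_at(t):
        # Linearly interpolate latitude/longitude at capture time t.
        times = [p[0] for p in track]
        i = bisect_left(times, t)
        if i == 0:
            return track[0][1:]
        if i == len(track):
            return track[-1][1:]
        (t0, lat0, lon0), (t1, lat1, lon1) = track[i - 1], track[i]
        w = (t - t0) / (t1 - t0)
        return (lat0 + w * (lat1 - lat0), lon0 + w * (lon1 - lon0))

    print(position_at(1425210090))   # photo taken mid-track -> interpolated fix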

  • Finding the optimal digital system and sticking with it.

Not a problem. 

Well, okay, it is a problem. It’s the quintessential existential problem. We seek better in all things in life, so why wouldn’t we seek it in our professional or hobbyist tools? So the first part of the complaint (finding an optimal system) is never going to go away. There will always be better coming along.

The second part of the complaint (sticking with it) is about discipline. Too many people chased too small an advance and just churned through cameras and lenses and software and everything else. Too few people actually outgrow the thing they bought before they discard it for new. We’re seeing that in the pixel chase going on at the moment in full frame. Have you really mastered 16mp? Then maybe you don’t need 50mp. Just saying.

  • Focus markings are not accurate. Focus screens are terrible (many).

What’s with all the focus problems? ;~) 

I can’t remember the last time that I needed to know exactly what the focus distance was. Part of that is just sheer usage: I can guess useful DOF from what I see. And I have other ways of determining DOF than focus scales on lenses when we shoot. Okay, so I’m in the 0.1 percentile. 

It isn’t that focus markings aren’t accurate. It’s that they’re sparse. I’ve seen lenses lately with four markings, a la 0.5', 2', 5', infinity. How often are you going to focus exactly on one of those distances? (And remember, how often will that number be the actual number for low dispersion glass?) One real problem here is that lens makers are trying to make focus elements move less. I won’t go into all the reasons why that is, but suffice it to say that you probably want that (except for you macro shooters). 

Unfortunately, that also means that the mechanism connected to those lens elements, the focus ring, doesn’t move much, either. So there isn’t a lot of room for markings to start with. Simple solution, though: most of you asking for this probably are manually focusing, so just use the old manual focus lenses with long throws on their focus rings and lots of markings. 

Which brings me to the other problem. Yes, focus screens are terrible for manually focusing. As autofocus cameras became more popular in the late film SLR days, you users, however, asked for something else: can you make the screen brighter? Yeah, sure, only it won’t be so good for focusing ;~). Be careful what you wish for when you wish for a specific feature like that. That’s one reason why I posed this assignment as “what is the problem you need solved?” Had you said “I need the screen to be brighter so that I can focus more easily,” the camera makers might not have gone all in on just making the screen brighter. Or perhaps they would have left the focus area at the center the same with the outer areas brighter. Or something different than they did when they responded to a frequent user request. 

On the Nikon pro cameras, it’s amazing how many people forget that the autofocus indicators tell you which way you’re out of focus and when you’re “in focus.” While I don’t 100% rely on these, they’re useful for quickly getting close and for verifying what my eyes are telling me. They’re also reliably consistent: the >, •, and < indicators light at predictable points relative to the focus plane. If you find that coming from the > side until just the • lights is “the spot,” versus coming from the < side, that tends to be repeatably consistent. Certainly if • isn’t lit and your AF sensor is on what you want in focus, you need to do some double checking.

But shame on Nikon. The Df was a perfect time to reintroduce a well-tuned manual focus system. They just tweaked the focus screen a bit. So little, in fact, that most people didn’t notice, including their marketing department ;~). 

  • More working distance with macro lens.  

200mm f/4G AF-S Micro-Nikkor anyone? ;~) But it’s far worse than that. With rare exceptions, camera makers seem to think that macro photography stops at 1:2 or 1:1 ratios. With working distances from the front lens element that make it near impossible to be flexible in lighting a subject. 

The original 55mm Micro-Nikkor (now with 5 extra millimeters!) was designed exactly to solve a user problem. Anyone remember what that problem was? Right, we didn’t have ubiquitous photo copiers and scanners, so if we needed to “copy” something printed, we needed to put it on a copy stand and take a photo of it. Guess what the focal length and magnification ratio of the 55mm was tuned for? You guessed it: it was about the right focal length for copy stands that sat on tables. At the top height of the stand, you would be able to get shots of almost any book or book-sized reproduction you needed. If you wanted to copy something bigger, you needed a copy stand bigger than the semi-portable ones you’d put on a table. 

Do we use macro lenses for that purpose any more? No. So why does the primary macro lens have the same focal length and working distance almost 55 years later? This is one of the reasons why I rail on the Japanese designers so often: they learn to iterate something and they just iterate it ad infinitum, and then wonder why at some point sales start falling and at another point no one buys at all. It’s not rocket science, dummies: the user problem the product solved changed or went away. It’s also the reason why I almost never recommend the 40mm DX Micro-Nikkor or the 60mm FX Micro-Nikkor: other lenses would more likely solve the problem the user wants to solve. 

Nikon’s marketing department looks at it a different way: “we keep managing to sell people 60mm macro lenses, so I guess we should keep doing it.” Way to ignore the customer problem, Nikon.

  • Can’t get my subject large enough in frame (reach)(many). Sometimes coupled with needing to be smaller/lighter. 

There are some interesting “nuances” to this problem. For example, the missing D400 ;~). Seriously. Here’s a relevant quote from one reader: “I have decided that DX cameras are better for me [for reach] than FX, but the buffer is terrible.” Right on, baby!

Okay, so Nikon solved this problem with the Nikon 1.

Nope. “I have decided DX cameras are better for me…” Quite a lot of folk made that decision. I don’t blame Nikon for trying to fit a third, smaller sensor size under DX and FX, but realistically the next smaller size should have been m4/3 ;~). Oops. Can’t do that. How about one more notch down the ladder? In essence, CX is a worse choice for someone finally giving up on DX than FX is. 

Or how about this: just fix the rung on the broken ladder (DX). I’m not going to rant any further on this. If Nikon doesn’t already know how I and more than a million other Nikon shooters (D300 owners) feel about this, then there’s no hope for them. 

  • Cost (tended to be coupled with any of the above problems)(many). 

This was a bit of a surprise to me. On inflation adjusted pricing, serious photography gear is about as inexpensive as it’s ever been. Many of these too-expensive responses had statements like this: “I would like a faster long lens, but cost keeps me from getting one.” And that was from a D800 owner. 

Yeah, I have to do this. Yes, I really do. Here goes:

Nikon, what are you thinking? Where’s the D400? It’s not the body cost itself that’s the inhibitor, it’s the total cost of everything the shooter thinks they need. I couldn’t help noticing that a lot of the cost complaints were from FX owners. 

Okay, I feel better now.

Seriously, while cost may be a problem for some of you, I’m not sure it really is a problem that the camera makers need to address. Virtually all the other problems are much more important to solve, while keeping cost to the customer as a factor to consider. 

The more we get into a “make it cheaper” kind of mindset, the more we’re talking about “good enough.” Frankly, we have plenty of “good enough” products at very reasonable prices at the moment. What we want are better products that truly solve our problems. As Apple has proven time and again, get the solution right, and cost is negotiable. 

And now on to the mother lode, workflow. As a group, the workflow-related problems showed up as the biggest set (focus-related being second). But there are quite a few variations of where the problem is. I’m going to call out a few of the ways you stated the workflow problem:

  • Can’t get my photos to my family easily (requires convoluted workflow)(many).
  • Digital workflow is more difficult than film workflow (many). Too many post processing steps required by multiple incompatible products. USB 2.0 too slow, card readers an extra step.

The camera makers aren’t helping with this one. Way back in the digital dark ages they defined a naming system that can deal with a total of only 9999 images. Ever. They have refused to address this limitation for over 20 years since it was pointed out to them that it was insane. 
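
Until the makers fix it, that rollover is something you end up scripting around on ingest. The usual workaround, as a minimal sketch (the paths and sequence source are hypothetical): rename on import to capture date plus an ever-increasing sequence number, so names can never collide across cards or camera bodies.

    import os, time

    def ingest_name(src_path, seq):
        # Collision-proof name: capture date + running sequence number,
        # e.g. DSC_0001.NEF shot 2015-03-01 becomes 20150301_000123.NEF.
        stamp = time.strftime("%Y%m%d",
                              time.localtime(os.path.getmtime(src_path)))
        ext = os.path.splitext(src_path)[1].upper()
        return "%s_%06d%s" % (stamp, seq, ext)

    # Usage sketch (card_files and next_seq are whatever your importer tracks):
    # for seq, f in enumerate(sorted(card_files), start=next_seq):
    #     os.rename(f, os.path.join(archive_dir, ingest_name(f, seq)))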

Let me let one of you make the indictment (words slightly edited): "I have a D4, the lenses I want, the flashes I want, Lightroom 5, Photoshop, and a PC with 16GB RAM. I have the gear and the time but I have lost the enjoyment of photography. Why? Downloading is a pain, editing is a pain, sharing is a pain, and printing is a huge pain.” 

It’s almost as if we drive our cars to work, but when we get there, we first have to grade the land, then pave it, then create a parking space, and then and only then, park the car. Or maybe we go to our garage to get ready to go to work and have to first assemble a car.  

Our problem is that we want an image, and we want to share it in some way with someone else. Any camera can create an image; not much of what camera companies do lets us share it, let alone conveniently. Even if we know what we want to do with the image and where it needs to go—we don’t always know those things when we take an image, after all—there’s not a single shortcut we can take with the camera that lets us do that. None. Nada. Zero. Nor are the camera companies actually working with third parties to solve that problem. No, the camera companies see “sharing sites” with photos, and their answer is that they want to create a sharing site. And still not give you an easy way of getting the image where you want it, even if it’s the camera company's site!

Simply put, the camera companies just don’t get the big picture (pun intended). It’s actually quite sad. 

Let’s move on, because there are two elements of workflow that are very important to fix: 

  • The time it takes to process, edit, and deliver results (many many many). 

The critical word in that is “time.” Camera companies seem to not care about how much time it takes us to do something with our images. They’ve spent most of their resources reducing the amount of time it takes to make a photo (faster focus, automated exposure, etc.). But that’s the extent of what they’ve done. 

And the reason smartphones now take more photos than cameras every day is simple: that time factor. It’s not getting any better with cameras, yet it constantly gets better with smartphones. In other words, the problem is going to get worse. 

  • Immediacy (many).

Related to the above. I’m going to let someone else’s words do a lot of the work here: "While there is still a market for high-end, unique photos such as yours in glossy magazines and the walls of nice homes, the largest marketplaces for most photos are Facebook and Instagram. To get 'likes', I need to greatly decrease the time between my shutter click and their mouse click. This may seem a trivial pursuit, but social media is half my day job and we have a staff of six who do online promotion for the firm.”

This is actually a disturbing thought. Note that this is a “professional” user of the product, though a different kind of professional than we thought about in the film era. The fact that the camera companies are not serving this professional customer is really, really bad. But frankly, even the traditional “professional” these days has to shorten their turnaround times in order to keep clients. So the problem is even worse than you might think. 

One way to tell that you’re not solving a significant problem is to count how many third-party products pop up in a specific area. May I present one such area tied to immediacy: tethering products (while I’ve linked to my page on a few of them, I’m woefully behind in keeping up; I’ve got a dozen more sitting on the shelf next to my desk that I haven’t gotten to yet). Tethering is all about immediacy (and also about simpler workflow and better client relations). While every camera company seems to have some tethering solution, they all suck. Most of us use third-party tethering solutions, and we have plenty of complaints about those, too, but they’re better than what the camera companies give us. To me, that’s the ultimate indictment of the camera companies when it comes to workflow: the one simple thing they’ve tried to do just doesn’t cut it. 
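For what it’s worth, most of those third-party tethering tools boil down to the same watch-folder idea: poll the folder the camera (or tethering app) drops images into and hand each new file to the next step the instant it lands. A minimal sketch, with hypothetical paths and a simple copy standing in for the real hand-off:

```python
# Minimal sketch of the watch-folder pattern behind most third-party
# tethering tools. Paths are hypothetical; the "hand-off" here is just
# a copy to an outbox folder.
import shutil
import time
from pathlib import Path

WATCH = Path("~/tether/incoming").expanduser()    # where images land
OUTBOX = Path("~/tether/to-client").expanduser()  # next step in the chain

def watch(poll_seconds: float = 1.0) -> None:
    WATCH.mkdir(parents=True, exist_ok=True)
    OUTBOX.mkdir(parents=True, exist_ok=True)
    seen = set()
    while True:
        for img in WATCH.glob("*.JPG"):
            if img not in seen:
                seen.add(img)
                shutil.copy2(img, OUTBOX / img.name)
                print(f"forwarded {img.name}")
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()
```

That a one-page polling loop is the core of an entire product category tells you how little the camera makers have built in.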

I’m not going to belabor the point. Workflow is only going to be solved with software, both in the camera and out. I think most of you know what I think about most Japanese photographic software efforts to date. Moreover, it’s Silicon Valley that’s beating the drum that everyone has to march to. So here’s the solution to the problem: a Western-style design organization headquartered in Silicon Valley that’s driving the software side of cameras. The Japanese companies are already late to the game. In fact, the game is almost over and the other team is winning. It may be that the Japanese didn’t realize that there even was a game that they needed to play. 

Here’s one problem in workflow that I don’t really ascribe to the camera companies, though:

  • Keeping my photos organized (many). 

Other than not giving us the ability to name our photos well, this isn’t the camera’s problem. This is the problem at the far end of the workflow, well away from the camera itself. 

Could I imagine a camera that helped with this problem? Sure. Just being able to tell the camera that today I’m shooting the Medici wedding would help with organization. So, yes, there are small things that can be done. But most of you who cited this problem were citing a discipline problem of your own: since your camera lets you take tens of thousands of photos without apparent cost, you just did that, and now you have a mess. 
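A small sketch of that “tell the camera about the shoot” idea, done after the fact on the computer since no camera offers it; the shoot label and folder locations here are hypothetical:

```python
# Minimal sketch: file today's take into a dated, labeled shoot folder,
# which is the organization hint I wish the camera itself would record.
# Label and paths are hypothetical.
import shutil
from datetime import date
from pathlib import Path

def file_shoot(incoming: Path, library: Path, label: str) -> Path:
    """Move everything in `incoming` into a dated, labeled shoot folder."""
    shoot = library / f"{date.today():%Y-%m-%d} {label}"
    shoot.mkdir(parents=True, exist_ok=True)
    for img in incoming.iterdir():
        if img.is_file():
            shutil.move(str(img), str(shoot / img.name))
    return shoot

if __name__ == "__main__":
    file_shoot(Path("~/Pictures/card-dump").expanduser(),
               Path("~/Pictures/shoots").expanduser(),
               "Medici wedding")
```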

Is there a problem here that needs solving? Absolutely. Adobe and Apple both see this as a big opportunity for them to solve, Google will throw a search algorithm at it, and Microsoft will eventually say “oh, you’ve been taking photos?” I’m pretty sure that you’ll see plenty of action at the end of the workflow on this, but very little of it involves the camera. At least if that camera has an adequate naming system and stores useful GPS data (see above ;~). 

  • For this final problem I’ll let a respondent’s words do the describing, because it’s solved by something I’ve written extensively about (improve the photographer): "Panning with fast moving aeroplanes, while trying to keep the shutter speed low enough to retain some prop blur. The fix, more practice.” There were other, similar examples, as well.

I stuck this last type of problem in because it illustrates a point that sometimes gets forgotten in all our discussion of camera gear, and shouldn’t be: not all user problems should dictate new product designs. Sometimes the fix is simply the punch line to that old “How do I get to Carnegie Hall?” joke: practice, practice, practice. 

One last point. Way too many of you had responses like “one problem, give me a break, where do I start?” In other words, you didn’t have just one big problem that needed solving, you had lots of problems that you wanted solved before buying more gear. 

If you’ve made it this far, you’ll wonder how the pig even manages to fly. In just two days of input from site visitors, I got a pretty darned good map of the many user problems plaguing photo gear owners. 

Way too many of you can point out a considerable list of things you consider flaws in current camera designs. Not user preferences, but clear flaws that inhibit your ability to do something quickly, conveniently, completely, and accurately. It’s really time that we hold the camera makers’ feet to the fire about fixing those problems. 

So, if you’re on Twitter, tweet this article with the hashtags #dearnikon and #nikon (#bythom would be nice, too ;~). If you’ve got the email address of a camera company manager or designer, send them a link to this article and ask them to look into these problems. (I’ve put a linkable copy of this article in the Cameras section where it won’t get lost with my constant flow of news/views.)

If you’re not into the modern sharing methodologies, print out a copy of this article, mark FIX THESE PROBLEMS in big red ink at the top and mail it to the president of your local Nikon subsidiary. 

You Canon users can do the same as I just outlined, just substitute your company’s name in the above. Same with Sony users, though you should probably add a note saying “Free the bits! All 14 bits please.” The note Olympus users should add is “Fix the menus!”  The note from Pentax users should probably say “anything new yet?” 

Phew! That was fun. 

