How Much Better is One Camera Over Another?

"Dear Thom: if the upcoming DXXX has twice the pixels of my DYY, how much better would my pictures be if I upgrade? Signed PixelChaser."

Dear PixelChaser: if you have to ask that question the answer is that your photographs will either be about the same or possibly even degrade in quality. Yours, Thom.

It seems like forever that I've been answering this question. I, too, at one time thought more pixels always meant more better. I still do, actually, as you'll find out by the end of this article.

Unfortunately, we're talking about an aspect of cameras that is supposed to generate more quality, yet there are hurdles that keep many people from achieving that. In other words, adding pixels is no guarantee of increased quality. I don't mean to be mean in my answer, above, but in practice it's proven to be mostly accurate. If you don't know what it is you're chasing and why, you won't achieve it.

My first digital camera was 0.3mp (that's point three, not three). My current camera is 36mp. Yes, I've got a camera today that provides me far better quality than my original. Let's back up and talk some basics, though.

Serious shooters basically fall into one or both of two categories: (a) they are pushing every last pixel they've got into very large print work; or (b) they are looking for low-level pixel integrity because they realize that gives them more and better choices in post processing. If you're in one or both of those categories, you know exactly what more pixels should provide you, and I'm not sure why you're reading this article.

Category (a) wants pixels of at least similar integrity to what they've got, just more of them so they can print bigger. If they think their current camera is maxed out at prints of 24", then they want the same abilities but with enough extra pixels to print 36". In the end, many of the people in this category end up doing the same thing they did with film: they go up a size. If they were shooting DX at 12mp, they go to FX at 24mp. If they were shooting FX at 24mp, they move to MF at 40+mp. The reasons to do this are basically the same as they were with film: the bigger capture area comes with a built-in advantage when you're chasing maximum print size: it decreases the magnification needed (all else equal). We've had a good run with digital sensors, where today's 16mp DX arguably gives us far better pixel integrity than the old 6mp DX sensors did, but it's often quicker, easier, and more productive to just buy a bigger camera once you start trying to make big prints.

Category (b) wants their images, usually of some (relatively) fixed size—perhaps they shoot for a magazine—to look better. This is a trickier situation than (a) because "better" can come in a lot of different forms. For instance, if you were shooting inside NBA arenas, low light performance improvements were more interesting than pixel count improvements, which is why the D3 and later D3s and D4 won so many converts. If you're shooting at a fixed size (13x19" from your desktop printer, magazine page for your photo editor, 8x10" print from a lab, etc.), more pixels on their own don't always give you better quality, though they can with diligent shooting and workflow.

To understand that last remark, you need to understand all the meanings we usually cram into the word "resolution." True resolution is measured in line pairs per millimeter, which is a linear measurement. (True resolution is also a chain of resolutions—sensor, lens, etc.—but that's outside the scope of this article.) Thus, the linear change in pixel count is important:

  • 3mp: 2000 pixels on long axis
  • 6mp: 3000
  • 12mp: 4000
  • 16mp: 5000
  • 24mp: 6000
  • 36mp: 7000

I've been a little loose with those numbers because it makes it easier to see the thing I want to relate. When we went from our 3mp D1 with 2000 pixels on the horizontal axis to our 12mp D3 with 4000 pixels, we got a probable doubling of resolution (all else equal). We don't get that when we go from our 12mp D3 with 4000 pixels on the horizontal axis to a D3x with 6000 pixels. Instead, we get a 50% increase. Now consider the 16mp D7000 to 24mp D7100 leap: 20% increase. Different sources come up with different figures, but I generally use 15% as being the minimum necessary for most people to see any difference in resolution at the pixel level, so the D5100/D7000 to D3200/D5200/D7100 is just barely above that bar.
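The arithmetic above can be sketched in a few lines of code. Linear resolution scales with the square root of the total pixel count, so the percentage gain on the long axis for any megapixel jump is the square root of the megapixel ratio. Note that exact square-root scaling gives slightly different percentages than the rounded pixel counts used above (about 41% rather than 50% for the 12mp-to-24mp jump, and about 22% rather than 20% for 16mp to 24mp); the camera pairings are the article's, the function name is mine.

```python
from math import sqrt

def linear_gain(mp_from, mp_to):
    """Percent increase in linear (long-axis) resolution for a
    megapixel jump: linear resolution scales with sqrt(pixel count)."""
    return (sqrt(mp_to / mp_from) - 1) * 100

# The jumps discussed above (D1->D3, D3->D3x, D7000->D7100, D3x->D800)
for a, b in [(3, 12), (12, 24), (16, 24), (24, 36)]:
    print(f"{a}mp -> {b}mp: {linear_gain(a, b):.0f}% more linear resolution")
```

Run it and you can see why the generational jumps feel smaller each time: quadrupling pixel count (3mp to 12mp) doubles linear resolution, but a 50% pixel-count bump (16mp to 24mp) only clears the ~15% visibility bar by a few points.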

Meanwhile, we've got other factors potentially fighting us. One of the reasons why some of those old 3mp and 4mp images look pretty good these days is that we weren't really recording diffraction impacts. The diffraction pattern wasn't spreading far enough from an individual photosite to get well recorded into adjacent pixels. It's another area of debate (there are many in this discussion), but some of us use 2x the diagonal of the photosite as the "diffraction impact" mark. Below that, diffraction doesn't significantly lower resolution test numbers. Above that, it does. So as photosites have gotten smaller, diffraction impacts have gotten more visible.
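Here's a rough sketch of how that "2x the photosite diagonal" rule of thumb plays out. The assumptions here are mine, not the article's: green light at 550nm, square photosites, and the standard Airy-disk diameter approximation (d ≈ 2.44 × wavelength × f-number). The sensor figures are approximate ones for a 36mp FX body.

```python
from math import sqrt

WAVELENGTH_UM = 0.55  # assumed: green light, in micrometers

def photosite_pitch_um(sensor_width_mm, long_axis_pixels):
    """Approximate photosite pitch in micrometers."""
    return sensor_width_mm * 1000 / long_axis_pixels

def diffraction_onset_fstop(pitch_um):
    """f-stop where the Airy disk diameter (~2.44 * wavelength * N)
    reaches 2x the diagonal of a square photosite."""
    threshold = 2 * pitch_um * sqrt(2)  # 2x the photosite diagonal
    return threshold / (2.44 * WAVELENGTH_UM)

# Example: a 36mp FX sensor, roughly 35.9mm wide with ~7360 pixels across
pitch = photosite_pitch_um(35.9, 7360)
print(f"pitch ~{pitch:.1f}um; diffraction impacts visible beyond "
      f"~f/{diffraction_onset_fstop(pitch):.0f}")
```

Shrink the pitch (more pixels on the same sensor size) and the onset f-stop drops with it, which is exactly why stopping down a high-pixel-density body starts costing you resolution sooner than it did on the old low-count sensors.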

I used the term "pixel integrity" earlier. What are the components of that? A pretty long list, actually, of which here are just some:

  • diffraction impacts
  • microlens spill (another light adjacency issue)
  • Bayer consistency
  • AA filtration level
  • consistency/linearity of ADC
  • electron migration
  • on-board noise leveling (neighbor pair assessments, etc.)

The list goes on and on. 

I have little doubt that the camera makers will continue to push forward for both the (a) and (b) shooter. We'll get higher pixel counts, and we'll get a continued devotion to better pixel integrity. The reason is simple: without those things, it gets pretty tough to sell a new DSLR at all. Let's face it, from the D3200 on up Nikon's DSLRs have a pretty long list of features that'll let you do most anything you need to, plus these cameras range from pretty good to superb at everything they do. So without sensor improvement, it would take a big change (did I hear someone say CPM?* ;~) to sell any of us a new camera.

Nevertheless, even though we're getting strong sensor gains for both the (a) and (b) shooter, the return is getting smaller with each generation. We're now very near the point where it should become obvious to everyone that the real choice for tangible gain is to go up a size, just as we did with film.

Now don't get me wrong. I'll generally take pixel count increases, all else equal, because basically this means "more sampling." When I point my 16mp camera at a landscape, I get 16 million samples of the landscape. When I point my 36mp camera at the same scene, I get 36 million samples. If everything else is equal, more sampling is good. Even with diffraction, if I'm handling my camera carefully, I should be able to make edges look sharper and render more detail (one of the things that's equal is my print size, by the way). 

But my answer to those asking the original question is usually the one at the top of the page: if you have to ask… 

Let me explain. Three results are possible:

  1. More pixels create other issues for the shooter.
  2. More pixels create other issues for the shooter, but they then take the time to learn how to handle those issues.
  3. More pixels create more data for the shooter, and it improves their shots. 

People asking the original question tend to be in Category #1. Hopefully, other articles on this site might help them get to Category #2; otherwise buying more pixels is probably a waste of money and time, and thus my slightly flippant answer. On the other hand, those who are already in Category #3 don't ask the question, as they're already benefiting and know why.

*For those with short memories, CPM is my short-hand for communicating, programmable, modular, which is what I've been suggesting for several years now that our cameras really need to be. 

text and images © 2017 Thom Hogan
portions Copyright 1999-2016 Thom Hogan. All Rights Reserved
Follow us on Twitter: @bythom, hashtags #bythom, #dslrbodies