We've got state actors attempting to influence other states' elections with disinformation. We've got lobbying organizations running campaigns with clear agendas. We have individuals who've discovered that they can wield outsized influence over others just by typing on a keyboard. We have "numbers" published that are supposedly meaningful.
Have you ever thought about whether or not misinformation might be harming the photography market?
I have.
And it's not just outright misinformation; it's also information where all the nuance is reduced to a single overall numerical value (e.g. dpreview's "Overall Score," which I find mostly pointless).
For instance, recently a flurry of messages came my way about DxOMark's rating for the 24-70mm f/4 S lens. In particular, the thing that comes up every time DxOMark publishes a lens test these days is a number they report for "Sharpness." For the Nikkor in question, that number was 19, which is lower than the 20 given to the Nikkor 24-70mm f/2.8E, and more intriguingly, way lower than the 24 given to the Sony-Zeiss 24-70mm f/4 ZA, a lens I long ago gave up on using because of its poor performance.
Drill down a bit into those DxOMark "scores," though. The highest-rated lens for Sharpness is on the highest resolution camera (5DS R). The two Nikkor numbers come from two completely different cameras and sensors (Z7 and D800E). DxOMark is creating scores using different test platforms.
What struck me most, though, was the Nikkor Z and Sony FE comparison. One lens I think is really good (the Z), one I think is not worth using for a lot of work (the FE). The difference comes down to how each lens performs as you move away from the center of the frame.
So, let's look at DxOMark's testing protocol. In DxOMark's own words: "For each focal length and each f-number sharpness is computed and weighted across the image field, with the corners being less critical than the image center. This results in a number for each focal length / aperture combination. We then choose the maximum sharpness value from the aperture range for each focal length. These values are then averaged across all focal lengths to obtain the DxOMark resolution score that is reported in P-MPix (Perceptual Megapixels)."
Uh, what? Corners less critical? In what way? How is that weighted? Why choose the maximum sharpness obtained and not the median? Why average the maximum of all focal lengths?
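To see why those questions matter, here's a minimal sketch of that protocol in Python. The field weights and the lens data are entirely invented for illustration (DxOMark publishes neither), but they show how the weight-then-max-then-average pipeline can rank a weak-cornered lens above an evenly sharp one:

```python
import statistics

def weighted_field_sharpness(center, mid, corner, weights=(0.5, 0.3, 0.2)):
    # DxOMark says corners are "less critical" but doesn't publish the
    # weighting; these weights are an invented stand-in.
    w_c, w_m, w_k = weights
    return w_c * center + w_m * mid + w_k * corner

def dxomark_style_score(lens):
    # lens: {focal_length: {aperture: (center, mid, corner) sharpness}}
    per_focal_max = []
    for focal, apertures in lens.items():
        # maximum weighted sharpness across the aperture range
        best = max(weighted_field_sharpness(*s) for s in apertures.values())
        per_focal_max.append(best)
    # averaged across all focal lengths
    return statistics.mean(per_focal_max)

# Two invented lenses: A is strong corner-to-corner at every setting;
# B is weak in the corners but has stellar center readings.
lens_a = {24: {4.0: (30, 28, 26), 5.6: (31, 29, 27)},
          70: {4.0: (29, 27, 25), 5.6: (30, 28, 26)}}
lens_b = {24: {4.0: (34, 26, 16), 5.6: (36, 28, 18)},
          70: {4.0: (33, 25, 15), 5.6: (35, 27, 17)}}

print(dxomark_style_score(lens_a))  # evenly sharp lens
print(dxomark_style_score(lens_b))  # weak-cornered lens scores higher
```

With these invented numbers, lens B (corners of 15–18) edges out lens A (corners of 25–27), because strong center readings dominate once the weighting, the per-focal maximum, and the averaging are applied.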
Virtually no one seeing a DxOMark Sharpness number—even on DxOMark's own site—drills down to see what the heck that number actually means.
It means absolutely nothing that's useful, in my opinion. It's an average of the maximums of unspecified weightings! Oh, and by the way, in their reviews they have this disclaimer: "Remember that the lenses are intended to be used on different camera systems and mounts, so the comparisons are not strictly applicable."
Funny thing is, if you read DxOMark's actual textual review of the Nikkor in question, they write as their conclusion "Great all-rounder for Nikon Z users." Indeed it is. Unfortunately, the testing methodology that DxOMark uses masks exactly how it actually compares to other brands' products.
Not that I'm trying to call out DxOMark here. I'm just using them as one example. There are plenty more where that came from. The real culprit here is all the folks on the Internet who want to repeat a single number that sums up a subjective evaluation criterion (DxOMark's Sharpness number) as if it were meaningful in comparing two lenses.
Another example of what seems to be disinformation happened recently at the X Summit, where Fujifilm introduced a concept called "Value Angle." I know that the camera makers struggle to describe why certain mount decisions can have significant impact on optical design, and I, too, have at times short-handed the discussion by simply pointing to the angle from the edge of the sensor to the edge of the mount. However, Fujifilm is a bit disingenuous in their discussion.
The reason why they promote Value Angle is that their APS-C XF mount calculates to a bigger angle than the best full frame mount (the Nikon Z mount). So it must be better, right? You don't need to buy full frame at all!
Unfortunately, the Fujifilm XF mount has the worst Value Angle of any of the mirrorless APS-C mounts; Canon's EOS M would be the best in this calculation. Own goal, Fujifilm.
Fujifilm doesn't make full frame cameras, though. So obviously the XF cameras are better than the full frame cameras because of the mount, right? At least that's what they want you to believe.
Not so fast, Fujifilm. There are far too many factors that go into the optical design of a lens, and into the way the optical system at the focal plane (UVIR filtration, lowpass filtration, filtration depth, gap to sensor, microlenses, photo diode depth) works on digital cameras, to reduce everything to one number across differing formats. (And to add insult to injury, Fujifilm's medium format GF cameras would score worse than the full frame cameras using the Value Angle metric!)
Aside: What a larger opening and a wider angle from that opening to the sensor gives you is more optical design flexibility (all else equal). Optical center point, size and position of rear element, and angle changes of extreme light through the optical path all have more options available when you create a bigger, shorter mount for any given size sensor. Note that last clause: for any given size sensor, Fujifilm actually created the worst mount scenario for their APS-C sensor size.
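For the curious, here's roughly how such an angle could be computed. The throat diameters below are approximate published figures, and the angle definition (line from the sensor's corner to the near edge of the mount throat) is my guess at the kind of geometry involved, not Fujifilm's actual formula:

```python
import math

# Approximate mount dimensions in mm: throat diameter, flange distance,
# sensor width, sensor height. Throat figures are commonly cited values
# and may differ slightly from official specifications.
MOUNTS = {
    "Canon EF-M": (47.0, 18.0, 22.3, 14.9),   # APS-C
    "Fujifilm X": (43.5, 17.7, 23.5, 15.6),   # APS-C
    "Nikon Z":    (55.0, 16.0, 36.0, 24.0),   # full frame
    "Fujifilm G": (65.0, 26.7, 43.8, 32.9),   # medium format
}

def edge_angle(throat_dia, flange, sensor_w, sensor_h):
    """Angle (degrees, off the optical axis) of the line from the
    sensor's corner to the near edge of the mount throat."""
    half_diag = math.hypot(sensor_w, sensor_h) / 2
    return math.degrees(math.atan2(throat_dia / 2 - half_diag, flange))

angles = {name: edge_angle(*dims) for name, dims in MOUNTS.items()}
for name, angle in angles.items():
    print(f"{name}: {angle:.1f} degrees")
```

With these figures and this definition, the ordering comes out EF-M > XF > Z > GF, consistent with the points above: Fujifilm's own APS-C mount trails Canon's, and its medium format mount trails full frame, which is exactly why a single angle is a poor proxy for lens design freedom across formats.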
What we have here is another arbitrarily calculated number standing in for actual useful information, and this time in marketing information. Be wary of those arbitrary numbers you see.
Reviewing in context is difficult. I first became aware of that back in the late 70's with High Fidelity reviews. In the early 80's I wrote one of the first standardized reviewing guidelines (50+ pages) for the computer industry in my role as editor of InfoWorld. At one point I fired a reviewer who did not disclose a paid relationship with a company in the industry, which was required by our guidelines. I managed a similar project in the 90's at Backpacker. Because all the outdoor product companies have hidden special pricing for influencers (even back in the 90's), I forbade staff to take advantage of it. We also returned or donated all gear we received for review, rather than keep it as sometimes happens elsewhere. I know how hard it is to try to describe how a product actually performs and what that might mean to a user. And I think I know what trying to maintain integrity and disclosure of conflicts means, too.
But none of us is perfect, nor are any of us writing on the Internet capable of perfectly reviewing a product with full context of all other relevant products and of each reader's particular needs and usage.
Thus, I caution everyone to be smart in their reliance upon external information being passed around on the Internet. There are bad actors, paid influencers, poor articulators, and meaningless numbers to wade through. Even the best of us may not get everything perfectly and adequately described, and you have to be careful not to read things into words that weren't intended.
Over time, you find sources that you can trust. Don't trust new sources without vetting them. Trust, but verify, sources you've grown used to. Understand the business model of the sites you visit and how that might influence them (DxOMark's current business model appears to be consulting services plus selling their Analyzer product, which puts them in competition with Imatest for producing a set of numbers from standardized testing [disclosure: I use Imatest in measuring camera and lens performance in the lab, though I don't report these numbers because they're generally not comparable across products as DxOMark would like you to believe]).