National Geographic and Fauxtography
Monday, 31 May 2010
My friend Bob G. recently told me about a column on page 12 of National Geographic Magazine's June 2010 issue. The column titled "Getting Real" is a rant about how a photographer supplied the magazine with a digitally altered photo. (A shorter version of the article is available online under the title "Your Shot Digital Manipulation".)
Basically, "photographer" William Lascelles submitted a photo to National Geographic and claimed that it was real. The magazine asked Lascelles to verify the photo, and he submitted a second fake photo. National Geographic then printed the image, only to learn that Lascelles lied to them twice; they were duped into printing a fake photo.
Whenever I post blog entries about photo manipulation, there is always someone who asks "what's the harm?" and "what about artistic freedom?" In this case, the harm directly impacts National Geographic's reputation. The photo contest is restricted to real photos for a reason: the magazine strives to publish photos grounded in reality, not fabricated, doctored, or otherwise fictitious scenes.
In response to being duped, National Geographic published a full column identifying the fraud by William Lascelles and repeatedly naming him. The printed column also names another fake, by Dobrev and published in December 2009. This is more than just posting a small correction. This is a full outing.
Told You So
Like National Geographic, Smithsonian Magazine holds an annual photo contest. In March 2009, I wrote to the Smithsonian and informed them that some of their finalists appeared to exceed the amount of digital manipulation permitted by the rules. The Smithsonian responded a few days later. They had investigated the images and queried the photographers. In the end, three of the five modified photos were disqualified from the contest. The remaining two were determined to have an acceptable amount of digital modification.
I was thrilled with the Smithsonian's reply. They listened to the concern, investigated the situation, and took appropriate action.
I sent a similar letter to National Geographic on Tue, 24 Nov 2009 21:36:10 -0700. I had noticed that at least one of their contest submissions was digitally modified. Here's the reply they sent me:
Subject: Re: Digital Manipulation in the Photo Contest
To: "Dr. Neal Krawetz"
Sender: [redacted]@ngs.org
Date: Tue, 8 Dec 2009 13:48:19 -0500
Dear Dr. Neal Krawetz:
Thank you for contacting the National Geographic Society.
Your comments regarding photos submitted to the 2009 International Photo Contest are very much appreciated. While we provide information on photo manipulation and what is and is not accepted in the contest rules <http://ngm.nationalgeographic.com/photo-contest/manipulation>, all the photos in the contest are submitted by the individual photographers. We do not go through each one of them and remove them, even if we feel they have been manipulated. We simply do not have the time or staff to do that. You can view the winning photos on our website at http://ngm.nationalgeographic.com/photo-contest/past-winners
Best wishes,
[Name redacted]
National Geographic Society
That's right: National Geographic does not review each of the images that they accept for publication on their web pages, and they do not remove images that they know or believe are digitally modified. As seen with William Lascelles, their "verification" consists of asking the photographer -- who has already lied and won't mind lying again. I can understand that they don't have a big budget or staff for image analysis. However, this is their reputation. One would think that they would be interested in protecting it.
In the fraud exposure column, National Geographic wrote that they have changed their policy: "Now we're looking more closely at all Your Shot pictures." It's about time!
Frankly, if you are going to hold a photo contest and require original photos (not digitally altered), then you should take the time to verify every finalist. I'm not saying that you should police every submission. Rather, attempt to evaluate every finalist or semifinalist -- probably a dozen or so pictures. You usually don't even need special tools; a critical eye is usually good enough. If you don't have the time or resources to validate potential winners, then perhaps you shouldn't hold the contest.
And I'm sure that someone is going to ask where they can get my tools. It isn't so much the tools as the training. If you are not trained to spot photo manipulation, then the best tools in the world won't help you. And as my friend Cynthia Baron has repeatedly demonstrated to me: people with the right training and experience can do this without any specialized tools.
Going Deeper
The dog picture by Lascelles is available online at http://s.ngm.com/your-shot/img/faked-blue-dog-615.jpg. Let's see what can be found with real image analysis...
Disclaimer
Please keep in mind: I'm not analyzing the original submission that National Geographic received. I'm analyzing a resave that National Geographic likely scaled for use on their web page. I seriously doubt that this is the original submission. In effect, I'm looking at a bad photocopy of the original submission. As with any bad photocopies, some results may be inaccurate due to artifacts introduced during the reproduction process and some evidence of modification may be completely wiped out. And more importantly: the original submission was fake, so it also includes modifications and artificial artifacts.
Photo Ballistics
Lascelles' file was saved as a JPEG and includes a JPEG APP12 section labeled "Ducky". (If you search for strings in the image, you will see "Ducky".) What is Ducky? It is a section added by Photoshop's "Save For Web" and includes the saved quality level. In this case, someone used Photoshop's Save For Web and selected 83% quality. (The "someone" was probably National Geographic.)
However, the quantization tables do not match the stated quality level. Instead, the quantization tables match 94% compression. The discrepancy is due to the saved settings. Specifically, the last save used Save For Web with "JPEG", "Very High", and 83% quality. (The "Very High" setting selected the 94% quantization tables; Photoshop's quality slider does not map directly to the quantization tables.)
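Both of these observations can be checked without any image tools; the "Ducky" segment and the quantization tables sit in plain view in the file's marker segments. Here is a minimal sketch in plain Python that walks those segments (the layout follows the JPEG standard; treat this as an illustration, not a full parser):

```python
import struct

def jpeg_segments(data: bytes):
    """Yield (marker, payload) for each JPEG marker segment before the scan data."""
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xDA:  # Start Of Scan: entropy-coded data follows
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        yield marker, data[i + 4:i + 2 + length]
        i += 2 + length

def find_ducky(data: bytes):
    """Return the APP12 'Ducky' payload (Photoshop Save For Web), or None."""
    for marker, payload in jpeg_segments(data):
        if marker == 0xEC and payload.startswith(b"Ducky"):
            return payload
    return None

def quant_tables(data: bytes):
    """Collect the quantization tables from all DQT segments."""
    tables = {}
    for marker, payload in jpeg_segments(data):
        if marker != 0xDB:  # DQT marker
            continue
        j = 0
        while j < len(payload):
            precision, table_id = payload[j] >> 4, payload[j] & 0x0F
            j += 1
            if precision == 0:  # 8-bit table entries
                tables[table_id] = list(payload[j:j + 64])
                j += 64
            else:               # 16-bit table entries
                tables[table_id] = list(struct.unpack(">64H", payload[j:j + 128]))
                j += 128
    return tables
```

Comparing the dumped tables against a library of known Photoshop tables is how a quality level like "94%" gets identified.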
Principal Component Analysis
PCA is great at identifying JPEG artifacts from resaves. These appear as rectangular blocks that are either 16x16, 16x8, 8x16, or 8x8. The more extreme the blocks, the lower the JPEG quality. High quality or original images should have very few visible artifacts.

In this case, the PCA shows severe blocking in the sky -- this is a low quality image from multiple resaves. But there is a problem... The blocks are not 8x8; they are smaller. In this case, the big squares appear to be 7x7 and small squares appear to be about 3x3. This means that the image was low quality and then scaled smaller. (I wouldn't be surprised if National Geographic scaled the final image for presenting on their web page.) The final image is probably scaled to about 40% of the previous version's size. Since this image is 201x134, the previous image was somewhere around 500x335 (or larger, with multiple resaves that scaled it smaller).
The other thing to notice is the block quality. The sky has big chunks indicating a low quality image. The front of the house has small blotches with no visible grid. The dog shows grid-like blocks on his ear and face that match the sky but not the house. And the jets have no visible blocky artifacts. So while the dog may go with the sky, it does not match the house or the jets. The mottled pattern on the front of the house actually matches what I would expect from a picture of this quality. (I even ran a few tests with other pictures using Save For Web and "Very High" and 83% -- the tests generated the same blotchy pattern seen on the house.) This means that the dog, jets, and sky are wrong for this picture.
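As a rough illustration of how PCA turns up these blocks, here is a minimal NumPy sketch. It projects each pixel's color onto one principal component of the image's overall color distribution; block artifacts that hide in the dominant component often stand out in the second or third. This is a generic sketch, not the exact tool used for this analysis:

```python
import numpy as np

def pca_projection(rgb: np.ndarray, component: int = 1) -> np.ndarray:
    """Project each pixel's color onto one principal component of the
    image's color distribution, normalized to 0..1 for display.
    component=0 is the dominant axis; resave artifacts often show up
    in component 1 or 2.  rgb: HxWx3 array; returns an HxW image."""
    h, w, _ = rgb.shape
    flat = rgb.reshape(-1, 3).astype(float)
    flat -= flat.mean(axis=0)               # center the color cloud
    cov = np.cov(flat, rowvar=False)        # 3x3 color covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]       # strongest component first
    axis = eigvecs[:, order[component]]
    proj = flat @ axis
    proj -= proj.min()
    if proj.max() > 0:
        proj /= proj.max()                  # normalize for display
    return proj.reshape(h, w)
```

Rendering the result as grayscale is where the 7x7 and 3x3 grids become visible.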
Error Level Analysis
Taken without any context, the ELA for this image identifies the dog and jets as being at a higher error level potential (newer) than the rest of the image. However, this difference could be explained by a combination of Photoshop and scaling. Photoshop attempts to counteract JPEG's lossiness by over-emphasizing high frequency areas. (See my Alyson Hannigan write-up.) This image does have a large amount of rainbowing (the red/blue/purple coloring), so this certainly matches the metadata that identified Photoshop. With scaling, every pixel is modified, and high frequency areas (like the dog's fur) can have pixel values altered more than the rest of the image.

However, there are two issues that really stand out. First, the jet planes appear to be uniform in color (low frequency) yet have a high error level. So this is a modification.
Second, ELA should identify the same 8x8, 16x16, etc. blocks as PCA; for low quality images, ELA reveals the chrominance subsampling. And this is a problem: the subsampling should be the same across the entire image. Instead, the sky clearly shows 16x8 subsampling (scaled to fit 7x3 grids), while the house only has square subsampling (either 8x8 or 16x16, scaled to 7x7; you can easily see them on the roof). With the dog and the jets, I don't see the subsampling grid at all. This means that the image must be made from four separate components: sky, house, jets, and dog.
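For readers unfamiliar with ELA, the core idea is simple: resave the image at a known quality and amplify the per-pixel difference; areas modified after the last save respond differently. The sketch below keeps the lossy codec behind a callable so it stays self-contained; in practice you would pass a real JPEG encode/decode cycle (e.g., Pillow at a fixed quality) as `recompress`. The `toy_codec` is purely illustrative:

```python
import numpy as np

def error_level_analysis(image: np.ndarray, recompress, scale: float = 20.0) -> np.ndarray:
    """Resave the image through a lossy codec and amplify the per-pixel
    difference. Regions edited after the last save tend to show a higher
    error level than the rest of the image."""
    resaved = recompress(image)
    diff = np.abs(image.astype(float) - resaved.astype(float))
    return np.clip(diff * scale, 0, 255).astype(np.uint8)

def toy_codec(image: np.ndarray, step: int = 16) -> np.ndarray:
    """Stand-in lossy codec: coarse quantization of pixel values.
    A real analysis would use an actual JPEG save/reload here."""
    return (image // step) * step
```

Uniform regions that already sit on the codec's quantization grid come back with zero error; anything recently pasted or sharpened lights up.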
Blur Detection
Blur detection identifies subtle, high frequency edges created by artificial blurs. Ideally, each edge should consist of either one thin line, or two parallel lines that are one pixel apart (a 1-pixel-wide double line). Anything else indicates an artificial blur.

In this picture, the dog has 1-pixel wide double lined edges -- it is real. However, the house and jets both have wide double edges; the jets and house have artificial blurs.
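The blur detector itself is not published, so as a generic stand-in, here is a plain finite-difference edge map in NumPy. It produces the raw edge rendering on which the one-thin-line vs. wide-double-line judgment is made; judging the line width is left to the eye:

```python
import numpy as np

def edge_map(gray: np.ndarray, thresh: float = 0.1) -> np.ndarray:
    """Crude edge map via finite differences. Natural edges render as one
    thin line or a 1-pixel-wide double line; artificially blurred edges
    smear into wide double lines.  gray: HxW array; returns 0/1 mask."""
    g = gray.astype(float)
    gx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))  # horizontal gradient
    gy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))  # vertical gradient
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag /= mag.max()                               # normalize to 0..1
    return (mag > thresh).astype(np.uint8)
```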
Color Distance
A new algorithm that I've been working on is based on color distances. Basically, real pictures blend colors along edges. When pictures are spliced (one pasted onto another), there is no blending. This algorithm measures the amount of blend. If you see a thin black line outlining anything, then it was spliced.

In this case, the dog has a thin, black outline against the sky. He was spliced into the picture. It is a little more difficult to see, but the upper 4 jets also have thin black lines. They were pasted into the picture.
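The blending idea can be approximated with a per-pixel color-distance map: measure the Euclidean color distance between each pixel and its immediate neighbors. Natural photos blend across edges, so single-step distances stay moderate; a hard splice produces an abrupt jump. This is a generic approximation, not the actual color-distance algorithm:

```python
import numpy as np

def color_distance_map(rgb: np.ndarray) -> np.ndarray:
    """Euclidean color distance from each pixel to its right and lower
    neighbors. Blended (natural) edges produce moderate single-step
    distances; a hard splice produces an abrupt jump.
    rgb: HxWx3 array; returns an HxW distance map."""
    g = rgb.astype(float)
    dx = np.linalg.norm(np.diff(g, axis=1, append=g[:, -1:]), axis=2)
    dy = np.linalg.norm(np.diff(g, axis=0, append=g[-1:, :]), axis=2)
    return np.maximum(dx, dy)
```

Thresholding the map at a high value outlines unblended boundaries, comparable to the thin black splice lines described above.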
Observation
Alright, so it really looks like the dog and jets were spliced into the image. How many jets were there originally?
In this example, I have shifted the picture down and to the left a little. This allows me to overlay the top three jets onto the bottom three jets. Guess what? Two of them are perfect matches. Let's number the jets for clarity:
[Image: the picture with the six jets labeled 1 through 6]
From what I can tell, jets 2, 4, and 5 are all the same plane, and all uniformly spaced. Similarly, jets 1, 3, and 6 are the same plane. That is a total of two unique planes.
In real life, jets flying in formation will not be perfectly at the same angle to the viewer and they will not be perfectly spaced. Instead, they will all be ever so slightly different.

DoD photo by Airman 1st Class Gul Crockett, U.S. Air Force/Released

Same image, shifted to align the top plane with the middle plane, and overlaid to show the differences.
This uniqueness factor even holds when the image is scaled smaller, like when the wingspan is only 20 pixels across. They may be small, but they should all be different. I think Lascelles cloned some jets.
Also, notice how none of Lascelles' vapor trails look the same. If they are about the same thickness and same color and the sun is in the same place, then they should all look similar. Jet #3 has a much whiter vapor trail -- likely two trails pasted next to each other. Jet #5 has a dark edge, probably from blending an overlapping paste.
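The shift-and-overlay test is easy to automate: shift the image by a candidate offset and measure the mean difference over the overlapping region. A near-zero result for some offset means two areas of the picture are pixel-identical, which is strong evidence of cloning. A minimal sketch:

```python
import numpy as np

def shift_overlay_diff(gray: np.ndarray, dy: int, dx: int) -> float:
    """Shift the image by (dy, dx) and return the mean absolute difference
    over the overlapping region. A near-zero result means two areas of the
    picture are pixel-identical -- strong evidence of cloning."""
    h, w = gray.shape
    # Crop the two copies to their overlapping region.
    a = gray[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)].astype(float)
    b = gray[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)].astype(float)
    return float(np.abs(a - b).mean())
```

Scanning a range of offsets and flagging the minima would recover the jet spacing automatically, without eyeballing the overlay.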
National Disaster
Even without specialized tools, National Geographic should have noticed the cloned jets, the varying vapor trails, and the artificial blurs. Since they have the original submission, they could have checked the metadata and quantization tables to see whether they matched the digital camera. This alone would have identified the fraud. And most importantly: don't just accept the word of a photographer; a photo may be significantly altered even if they claim that it is "100% real".
Comments
Awesome! I didn't notice that, but now that you point it out, it is obvious!
Thanks!
I enjoy reading about your tools, and I find them fascinating. However, we should remember to not rely on just the tools alone.
I will point out that if the jets are flying that low, then they are in violation of local air traffic control regulations. =) The only time I see them that low is in the low-rent locations near the airport or at an air show.
I do have to say it is a shame that either the photographer or NatGeo stripped out the most valuable data of all for detecting fakes... The camera meta data.
Secondly, the unclouded blue sky is most often (lens vignetting aside) a pretty uniform gradient. It changes, but the rate of change and direction of change is usually uniform, and always either increasing or decreasing. I see lots of gradient variation in the blue sky that seems wrong to me.
And I think Neal, you meant to say that planes 2, 4 and 5 were the same... otherwise there would only be 1 plane!
This is a fascinating article on how to detect manipulated photographs using technology. However, well-trained editors catch the simple errors, such as the mismatched lighting and the depth-of-field issue.
You know, long before photoshop existed we "modified" photos. Cropped, selected certain film stocks due to the way they reproduce color and contrast and then further refined that look by the way we processed the negatives. We masked and airbrushed and used multiple exposures. We dodged and burned and sometimes even cloned.
I understand that you are coming from a photojournalist's perspective - and I suppose there is some merit in that. But that is not the end all and be all of photography. Photography is often about selling, whether it is a product or an idea or a call to action. It is a medium based on visceral emotional response, and ANYTHING done to achieve that goal (outside of false journalism) is good.
Now, I know you'll say that there are different contests and categories for altered images, but I think when discussing non PJ entries, more leeway needs to be given (based on your linked Smithsonian article) especially if shooting digitally.
A digital camera's raw is just a starting point. With digital you don't have the ability to select a particular film stock and then cross process it with chemicals. You have to do that after the fact - resulting in an "altered" image. Apparently, if I show you that file you'll balk, but if I show you a negative processed in the exact same manner you'll approve - that's nuts.
As far as the blue with the girl in the car goes, that can be done in a traditional wet darkroom as well.
Forensics work is great and needed. Applying it to PJ work to uphold standards of honesty is valuable. Applying it to photos where that level of honesty is unnecessary is banal and goes against every form of artistic expression.
But does it require the same level of scrutiny as war footage from Afghanistan?
I probably should have replied to the Smithsonian thread with my earlier reply as that's the one that really got my goat, so if you want to move it, feel free.