All Mouth, No Trousers
Thursday, 21 July 2016
Bait and switch is a common fraud technique, and it exists in every field. With digital photo analysis, a person will describe one thing, and then use "proof" that shows something else. I often see this on Twitter, and occasionally I call them out on their claims. For example:
https://twitter.com/hackerfactor/status/751204889145978880
There's a conspiracy group that claims that the killings at Sandy Hook Elementary School didn't take place, or that the government was behind it. I'm not going to focus on the Sandy Hook killings or the government conspiracy claims. Instead, I'm only going to focus on the digital image analysis claims.
LJ claimed that this picture is a composite and used FotoForensics in an attempt to support the claim. However, the conclusion is not supported by the analysis. While the picture has undergone multiple resaves and is low quality, there is no indication of a composite image.
When I pointed this out, LJ changed the claim. Now LJ claims that the image description doesn't match the metadata date. While the finding about the metadata date does appear to be correct, the original claim about it being a composite is neither supported by FotoForensics nor by the metadata. The composite conclusion cannot be reached with the available data. (LJ also didn't attempt to identify why the date was incorrect. And LJ also claims that the picture is a stock photo, but never provided proof of this claim.)
In effect, LJ claimed one thing, and then switched the findings to something else.
Here's another example:
https://twitter.com/hackerfactor/status/414818825796718592
Lena stated that a screenshot of tweets is fake and used FotoForensics to justify this claim. However, she misinterpreted the results. (Among other things, "white" does not mean edited. And the results across the picture are consistent.)
When I pointed out that the claim was incorrect, she fell back to stating that nobody said those things. We later determined that the account was active during those times, but did not appear to make those statements. However, finding out that the text is fake by looking up the timestamps on Twitter is not the same thing as using FotoForensics to identify an edited picture.
https://twitter.com/hackerfactor/status/456660690778873857
In this example, "Obama Apocalypse" claimed that some low quality screenshot of a tweet was fake. Again, he jumped to the wrong conclusion. He followed it up by questioning the small photo inside the tweet rather than the entire screenshot. (Here's a hint: if the entire screenshot is a known fake, then why question whether the tiny picture of a person has been altered?)
With this example, I did identify that the screenshot was fake, but I did it without using the FotoForensics result. Instead, I tracked down the source image and noticed that it used the wrong date format for Twitter. This picture came from a fake-tweet generator, like LetMeTweetThatForYou or Simitator. With this kind of program, a user provides a picture and types in the text. They can then grab a screenshot of the fake tweet.
However, in this case, ELA didn't identify anything other than a low quality picture. It took additional knowledge (noticing the wrong date format) to determine that the screenshot wasn't real.
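The principle behind error level analysis is worth spelling out. The sketch below is not the FotoForensics implementation; it is a deliberately simplified one-dimensional stand-in for JPEG (an 8-point DCT with coefficient quantization). The point it demonstrates: data that has already been through the lossy save step barely changes when saved again, while fresh (edited) data changes a lot. All function names here are my own illustrative choices.

```python
import math

def dct(block):
    """8-point DCT-II of a list of 8 samples."""
    n = len(block)
    return [sum(x * math.cos(math.pi * (i + 0.5) * k / n)
                for i, x in enumerate(block))
            for k in range(n)]

def idct(coefs):
    """Inverse of dct() above (scaled DCT-III)."""
    n = len(coefs)
    return [(coefs[0] / 2 + sum(c * math.cos(math.pi * (i + 0.5) * k / n)
                                for k, c in enumerate(coefs) if k > 0)) * 2 / n
            for i in range(n)]

def resave(block, q=10):
    """Toy lossy 'save': quantize the DCT coefficients, reconstruct, round."""
    coefs = [q * round(c / q) for c in dct(block)]
    return [round(x) for x in idct(coefs)]

def error_level(block, q=10):
    """Mean absolute change caused by one more lossy save."""
    saved = resave(block, q)
    return sum(abs(a - b) for a, b in zip(block, saved)) / len(block)

fresh = [12, 200, 37, 90, 150, 60, 220, 5]  # 'edited' data, never compressed
compressed = resave(fresh)                   # data that was already saved once

# Already-compressed data sits on the quantization lattice, so another
# save changes almost nothing; fresh data shows a much higher error level.
print(error_level(compressed) < error_level(fresh))  # → True
```

That is also why a low quality picture (many resaves) produces a weak, noisy error level everywhere: the whole image has already settled onto the lattice, and there is little signal left to interpret.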
Caught
Most of the time, users delete their tweets after I point out the false conclusion. For example, [1], [2], and [3]. These links are my responses, but the original tweets I responded to were deleted by their authors.

Perhaps it is just my sample set, but these fake analysis results -- followed by switching results when caught -- are usually associated with conspiracies. Pro-government, anti-government, it doesn't matter which government. If it's a conspiracy, then they will switch conclusions after having their analysis questioned. In the extreme cases, they will go on the attack and/or bring up "new" evidence with the same faulty evaluations. In contrast, non-conspiracies seem to just delete their tweets.
More Bad Analysis
Last year, a group called 'Bellingcat' came out with a report about flight MH17, which was shot down near the Ukraine/Russia border. In their report, they used FotoForensics to justify their claims. However, as I pointed out in my blog entry, they used it wrong. The big problems in their report:
- Ignoring quality. They evaluated pictures from questionable sources. These were low quality pictures that had undergone scaling, cropping, and annotations.
- Seeing things. Even with the output from the analysis tools, they jumped to conclusions that were not supported by the data.
- Bait and switch. Their report claimed one thing, then tried to justify it with analysis that showed something different.
Bellingcat conducted a simple error level analysis (ELA) exercise using an online tool, Foto Forensics. Others have criticized Bellingcat, suggesting that a proper analysis should use more sophisticated software available to police and intelligence agencies. The James Martin Center for Nonproliferation Studies at the Middlebury Institute of International Studies at Monterey has a license for one example of such software, called Tungstène, and conducted an analysis using a suite of filters in the program.

The first problem was not that they used ELA. The problem was that they used it wrong and reached a conclusion that was not supported by the error level analysis. (I should also point out that branches of law enforcement do use FotoForensics. But unlike Tungstène, my client list is not public.)
With the scientific approach, it does not matter whose tool you use. A conclusion should be repeatable through multiple tools and multiple algorithms. One of the pictures that they ran through Tungstène was the same cloud picture that they used with ELA. And unsurprisingly, it generated similar results -- results that should be interpreted as low quality and multiple resaves.

Last year, Bellingcat evaluated the cloud picture with ELA. They claimed to see 5 different regions. They then incorrectly concluded digital alteration based on a low quality picture.
This year, Bellingcat evaluated the picture with some of Tungstène's filters. These filters appear to average results over the JPEG grid. For the uniformly white section of cloud, one filter marked it as white to denote the uniform region. After repeated resaves and scaling, the white clouds normalized to a single color. For the second filter (purple, Q3 histogram), we can see that they were still evaluating the annotated picture. These results denote a low quality picture and multiple resaves, and not an intentional alteration as Bellingcat concluded.
Just like last year, Bellingcat claimed that Tungstène highlighted indications of alterations in the same places that they claimed to see alterations in the ELA result. Bellingcat used the same low quality data on different tools and jumped to the same incorrect conclusion.
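The normalization effect is easy to model. JPEG quantizes the "detail" (AC) part of each block; tiny variations fall below the quantization step and round to zero, which is how a nearly-uniform cloud collapses to a single color after resaves. The sketch below is my own crude stand-in for that mechanism, not any tool's actual code: it keeps a block's mean and quantizes the residual.

```python
def toy_resave(block, q=8):
    """Crude stand-in for JPEG quantization: keep the block mean, quantize
    the detail (residual) with step q. Small details round to zero."""
    mean = sum(block) / len(block)
    return [round(mean + q * round((x - mean) / q)) for x in block]

# A near-uniform "cloud": every sample collapses to one value.
clouds = [250, 251, 250, 249, 250, 251, 250, 250]
print(toy_resave(clouds))  # → [250, 250, 250, 250, 250, 250, 250, 250]

# Strong texture sits well above the quantization step and survives.
edges = [250, 40, 250, 40, 250, 40, 250, 40]
print(toy_resave(edges))
```

A filter that then marks the flattened region as "uniform" is reporting a property of heavy compression, not evidence of an insertion.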
Intentional Deception
Jumping to the wrong conclusion one time can be due to ignorance. However, using a different tool on the same data that yields similar results, and still jumping to the same wrong conclusion, is intentional misrepresentation and deception. It is fraud.

With their other pictures, they made the same types of mistakes. Specifically, they ignored quality and evaluated pictures that had clearly been annotated and resaved. For example, their "The 'Two Buks' Image" clearly contains annotations, which means it is not original and has undergone alterations (annotations) and resaves. Moreover, the picture is clearly blocky, indicating that it was scaled larger, further lowering the quality. There's an old saying with analysis tools: "garbage in means garbage out". They put low quality pictures into Tungstène and claim to get out a result other than "low quality picture".
Similarly, in their PDF report (figure 5), they claim to identify the types of vehicles based on low quality satellite imagery. Based on the data provided, they cannot identify specific vehicles with any reasonable degree of confidence.
Bellingcat's PDF report makes many remarkable claims. Of the claims that I am qualified to evaluate, every one is false. Bellingcat cannot truthfully reach the conclusions based on the data they evaluated. And since these pictures form the basis for much of their report, they repeatedly reach conclusions based on faulty premises. Moreover, since they are making the exact same mistakes as they did last year, their report can only be interpreted as a work of fraud.
I should also mention that other people have pointed out problems with Bellingcat's PDF report. For example, there is a claim that the satellite is at the wrong angle. I am not an expert on satellite positioning, so I cannot confidently evaluate this claim. However, I can readily identify glaring and obvious problems with their digital photo analysis -- which they use for nearly 25% of their report. If I can immediately spot problems with 25% of their report, then there is no reason for me to believe that the 75% I cannot readily evaluate contains any truth.
(Requests to evaluate other portions of the report are pretty meaningless since the known problems are so vast. It's like asking, "Other than that, how was the play, Mrs. Lincoln?" Or claiming that you didn't plagiarize part of a speech because you changed a couple of words and it wasn't the entire speech. When a significant portion is provably deceptive, there becomes no reason to spend time evaluating the rest for any tiny glimmer of potential truth.)
Behind the Curtain
On 6-July-2016, I was contacted by a "freelance journalist" who asked me about NEF files. (NEF is the Nikon raw format, based on TIFF.) Before responding to a stranger, I decided to look him up. The Twitter profile for Marcel van den Berg identified him as part of the MH17 research community. I then checked if there was anything about NEF and MH17: someone claimed to have camera-original files related to some MH17 conspiracy, but was refusing to make them public.

I chose (1) to not respond privately to Marcel, and (2) to decline his offer.
Marcel then sent me the following email response:
Hello Neal,
Saw your Tweet. I am doing serious research into what happened to flight mh17.
In the interest of the 298 people who died and their family I really appreciate if you can answer my question with a simple yes or no.
Thanks.
Marcel
Every legitimate reporter that I have encountered has either honored my no-contact request, or said if I wanted to reconsider, then I could contact them. But in either case, they left me alone.
However, that isn't what Marcel did. Instead, he resorted to pressure tactics. He implied that not helping him means I don't care about the 298 people who died. This reminds me of that phone call I recorded from a scammer. She pointed out that I was hurting the US economy because I didn't want to do business with her company.
Moreover, Marcel continued to ask me for my opinion. And when I refused to respond to his demands, he made disparaging remarks.
However, the insults did not just come from a reporter associated with this Bellingcat report. They also came from Bellingcat's founder, Eliot Higgins. For example:
https://twitter.com/EliotHiggins/status/754968032196395009
https://twitter.com/EliotHiggins/status/756093560252952576
(I can only assume that the insult "all mouth no trousers" is some kind of colloquialism. Either that, or he has hacked the webcam in my office.)
I find it odd that Eliot Higgins would call experts in the field and forensic tool developers "hacks". In my case, Bellingcat relied on my tools in an attempt to justify their conclusions, actively sought out my expert opinion, and then insulted me when my opinion differed from their desired result.
The sad thing is that I know people in the media who consider Bellingcat to be a credible source. As one of them described it, Bellingcat is a collection of professional journalists, citizen journalists, rank amateurs, and conspiracy nuts. The problem is that there is no way to tell them apart. And since the organization's founder appears to use fraudulent methods to promote a conspiracy and uses various intimidation tactics, including bullying people online and casting baseless insults, I must consider Bellingcat to be nothing more than trolls impersonating journalists. Bellingcat appears to be less legitimate than Fox News. (I would compare the factual accuracy at Fox News to The Onion, but I don't want to insult The Onion.)


https://pbs.twimg.com/media/CoJs-xCWgAAV5uF.jpg:large
While anyone who knows maths should understand that the effect will be present in pretty much any 256-level grayscale image, such as here:
http://savepic.ru/10672011.png
I haven't seen where Bellingcat is making that cloning claim. However, if they are making that claim, then it is trivial to debunk! I like your Charlie Chaplin example -- very apt and is a clear demonstration.
I wrote about this copy/clone detection algorithm back in 2008 and included examples that show both false-positive and false-negative matches.
http://www.hackerfactor.com/blog/index.php?/archives/185-Myth-Busting-Boats.html
http://www.hackerfactor.com/blog/index.php?/archives/187-Myth-Busting-Boats-Revisited.html
http://www.hackerfactor.com/blog/index.php?/archives/308-Send-In-The-Clones.html
The picture I posted above was (likely) created by Michael Kobs out of the work of ArmsControlWonk - he just pasted the "cloned" pixels pattern, which you can find in the link below, over the original image.
They do claim (link below) that the cloned pixels pattern is unusual: "In any event, we would not expect a result like this in an unaltered image". Which, if one understands a little maths, is BS. Here is another picture I've made: http://savepic.ru/10662831.png.
http://www.armscontrolwonk.com/archive/1201635/mh17-anniversary/
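The commenter's maths point can be made concrete with the pigeonhole principle: an 8-bit grayscale image has only 256 possible values, so any image with more than 256 pixels must contain repeated values, even pure noise. A quick illustration (the image here is synthetic random data, chosen because noise is the worst case for repetition):

```python
import random
from collections import Counter

# An 8-bit image has only 256 possible gray levels. A 640x480 image has
# 307,200 pixels, so by the pigeonhole principle at least 307,200 - 256
# of them MUST repeat a value -- even for pure random noise.
random.seed(1)
pixels = [random.randint(0, 255) for _ in range(640 * 480)]
repeats = sum(n - 1 for n in Counter(pixels).values() if n > 1)
print(repeats)  # hundreds of thousands of repeated values, guaranteed
```

So "identical pixels exist" is a property of every image, and cannot by itself indicate alteration.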
> However, if they are making that claim, then it is trivial to debunk!
This is what Michael was trying to do, and I jumped on the bandwagon.
https://twitter.com/MichaKobs/status/757288105451917312
Arms Control Wonk made a big deal out of the same kind of thing in the "Two Buks" image as well. Duplication is even more likely to occur naturally there because of the featureless nature of the background. One part of a low quality image of an empty field looks much like any other:
http://www.armscontrolwonk.com/files/2016/07/MH17-Buk-Clone-Cropped_11_15.jpg
Items that have been copied and pasted, on the other hand, should show up as distinct contiguous areas. Some examples in Tungstène are shown here:
http://www.exomakina.fr/eXo_maKina/Clonage.html
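The false-positive problem the commenter describes is inherent to naive copy-move detection. A minimal sketch (my own toy construction, not any tool's algorithm) hashes every small block and reports duplicates; on a featureless region it "finds clones" everywhere, even though nothing was ever pasted:

```python
from collections import defaultdict

def find_duplicate_blocks(img, size=4):
    """Naive copy-move detector: index every size x size block by its exact
    pixel content and report positions that share content with another."""
    h, w = len(img), len(img[0])
    seen = defaultdict(list)
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            key = tuple(img[y + dy][x + dx]
                        for dy in range(size) for dx in range(size))
            seen[key].append((y, x))
    return [positions for positions in seen.values() if len(positions) > 1]

# A featureless "empty field" (flat 32x32 image): every block matches
# every other block, with zero actual cloning.
field = [[120] * 32 for _ in range(32)]
matches = find_duplicate_blocks(field)
print(len(matches), len(matches[0]))  # → 1 841 (one group of all 29x29 blocks)
```

Real detectors add thresholds and geometric consistency checks precisely because of this, which is why block matches in a smooth, low quality background prove nothing on their own.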
I commented on Marcel's site that in the absence of much hard evidence all theories about MH17 are conspiracy theories, including the theory that the weapon was a Buk missile. I wrote that the only way Marcel could be sure that he eliminated all b.s. from his site would be to close it. Marcel banned me for that.
I wondered then how long it would be before he banned you. It wasn't a long wait.
Now he is making statements like, "finding out the truth about MH17 seems to be a religion for some people instead of pure fact finding. Some dislike Bellingcat very much however fail to provide any reasonable counter evidence."
The issue for insecure people like Higgins is their egos. They constantly need inflating, and when the opposite occurs they feel their very existence is being threatened, so they fend off these threats with aggressiveness. He exists in a bubble; an Internet echo chamber, if you will, surrounded by likeminded conspiracy buffs who blow smoke up each other's asses. Anyone outside of this huddle is wrong. They're all "truthers" and "idiots". Such is the insecurity of their position that fear of being challenged keeps them in their groupthink.
Higgins is now trying to distance himself from the report by saying that Bellingcat didn't author it but this "Arms Control Wonk" did, and that neither he nor Bellingcat has any connection to this group.
Just more lies from him as there is a direct link between both parties and her name is Melissa Hanham.
https://s31.postimg.org/tgxuthaxn/image.png
I only saw a little of that.
Sometime last night, Eliot Higgins blocked me on Twitter. I cannot see any of his tweets. However, he appears to continue making comments about me, even though I cannot see his comments and have no opportunity to respond (assuming I had any desire to respond). #unethical
A point of interest here is that the guy quite literally has no experience, qualifications or credentials of any kind in any of the subjects he writes about. He has admitted this openly, yet he's somehow an "expert" on weapons, chemical weapons, crater analysis, digital forensics and satellite imagery. Yet real experts in each of these fields have consistently labelled his work as "hobbyist" and "fake" (yourself and Prof Postol) or said that he was "merely reading tea leaves" (Der Spiegel).
He has gone on the attack against you because you dared to say he was wrong. He had been selling this report, prior to its release, as a smoking gun that would prove him right on his amateur ELA analysis and you wrong. Now that you have again said he and the report are wrong, you've left him in a bind. So he has blocked you and will now try character assassination, as he did with Prof Postol. He likes to carry out his debates on Twitter, using the 140-character limit as a hiding place. You didn't bite and instead wrote a critique blog piece, which is the professional way to do things.
In fact, one does not have to be an expert to tell whether a method works. It can be treated like a black box and fed genuine and fake images. After running a reasonable amount of data through the method, even a complete non-expert will be able to build a reliable understanding of whether the method works.
Of course, Bellingcat, having a clearly traceable agenda, were not going to do that. Marcel called this approach, which actually is a pretty basic scientific technique, "nonsense".
So you can get an idea of the quality of any analysis conducted by these guys.
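The black-box validation the commenter describes is straightforward to set up. The harness below is a sketch of the idea; the detector shown is a hypothetical stand-in (any real tool would be plugged in its place), and the sample data is invented for illustration.

```python
def evaluate_detector(detect, labeled_samples):
    """Black-box test: run a detector over samples with known ground truth
    and tally its false-positive and false-negative rates."""
    fp = fn = 0
    for sample, is_fake in labeled_samples:
        verdict = detect(sample)
        if verdict and not is_fake:
            fp += 1          # flagged a genuine sample
        elif is_fake and not verdict:
            fn += 1          # missed a known fake
    n = len(labeled_samples)
    return {"false_positive_rate": fp / n, "false_negative_rate": fn / n}

# Hypothetical stand-in detector: flags any sample that is "too uniform".
def dummy_detect(sample):
    return max(sample) - min(sample) < 5

samples = [
    ([10, 200, 30, 90], True),      # known fake, but textured -> missed
    ([128, 129, 128, 130], False),  # genuine, but uniform -> wrongly flagged
]
print(evaluate_detector(dummy_detect, samples))
```

With enough labeled genuine and fake images, the error rates speak for themselves, no expertise in the tool's internals required.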
First I wonder if the results are necessarily due to repeated resaving, leading to normalisation to a single brightness level. They could instead be due to complete over-exposure of the cloud part of the image. That could equally have produced a region of pixels of the same brightness, ie. maximum or a level of 100%, even without resaving. The exposure of the satellite cam would most likely be automatically set for the darker features on the ground, not for clouds.
Either way, the reason for the cloud appearing different to other regions, as seen through the Tungstène filters, is the fact that the cloud was a bright white sunlit object with hardly any texture, lines, spots, dots or graininess. That would easily explain the results for frequency distribution, compression, noise and identical pixels.
Secondly, I would have thought that if a cloud was added to the image, the fake would show up in its border region and not in the middle. That's because the cloud is thinner and semi-transparent at its edges, as seen in the fact that it is darker there. That should make the cloud's border much harder to fake, since that area would be made by adding together cloud pixels and ground pixels, which would be hard to perfectly align together.
That doesn't mean that a cloud can't be faked. It just means that if any discrepancies did show up, they would be much more likely to be found in the boundary and not in the middle, where Arms Control Wonk claims to have found them. I could be wrong about that since I'm not an expert on that subject, so I'd be interested to know if it's difficult to artificially add a cloud, or even smoke, to an image without it being detected.
Brendan, this can be easily done via the following stupid method. You photoshop anything and then photograph your work. You'll get a genuine image file (with whatever RAW encryption features are supported) of a fake picture. Modern cameras actually have very little colour drift, if you combine it with a good printer (you can get an idea about the small colour drift here: youtu.be/OWnC9tSA3iA).
Btw, I tried to email you via metabunk.