Satellite imagery showing the expansion of large internment camps in Xinjiang, China, between 2016 and 2018 provided some of the strongest evidence of a government crackdown on more than a million Muslims, drawing international condemnation and sanctions.
Other aerial photos – for example of nuclear facilities in Iran and missile sites in North Korea – had a similar impact on world events. Now, image manipulation tools enabled by artificial intelligence can make it harder to take such images at face value.
In a paper published online last month, Professor Bo Zhao of the University of Washington used AI techniques similar to those used to create deepfakes to alter satellite images of several cities. Zhao and colleagues swapped features between images of Seattle and Beijing, showing buildings that don’t exist in Seattle and removing structures in Beijing, replacing them with greenery.
Zhao used an algorithm called CycleGAN to manipulate the satellite photos. The algorithm, developed by researchers at UC Berkeley, has been widely used for all kinds of image trickery. It trains an artificial neural network to recognize the key features of certain images, such as a style of painting or the features of a certain type of map. A second algorithm then helps refine the performance of the first by trying to detect when an image has been tampered with.
As with deepfake video clips purporting to show people in compromising situations, such images could mislead governments or spread on social media, sowing misinformation or doubts about real visual information.
“I firmly believe this is a big problem that may not affect the average person tomorrow, but will play a much bigger role behind the scenes over the next decade,” says Grant McKenzie, assistant professor of geospatial science at McGill University in Canada, who was not involved in the work.
“Imagine a world where a state government or other actor can realistically manipulate images to either show nothing or a different layout,” says McKenzie. “I’m not exactly sure what can be done to stop it at this point.”
A few crudely manipulated satellite images have already gone viral on social media, including a photo purporting to show India lit up during the Hindu festival of Diwali that appears to have been retouched by hand. It may only be a matter of time before more sophisticated “deepfake” satellite images are used, for example, to hide weapons installations or to falsely justify military action.
Gabrielle Lim, a researcher at Harvard Kennedy School’s Shorenstein Center who focuses on media manipulation, says maps can mislead without AI. She points to images shared online suggesting Alexandria Ocasio-Cortez was not where she claimed to be during the January 6 Capitol insurrection, as well as Chinese passports showing a disputed region of the South China Sea as part of China. “Not fancy technology, but it can achieve similar goals,” says Lim.
Manipulated aerial images could also have commercial significance, since such images are enormously valuable for digital mapping, tracking weather systems, and guiding investment decisions.
US intelligence agencies have recognized that compromised satellite imagery is a growing threat. “Adversaries can use falsified or manipulated information to impair our understanding of the world,” says a spokesman for the National Geospatial-Intelligence Agency, the arm of the Pentagon that oversees the collection, analysis, and dissemination of geospatial information.
The spokesman says forensic analysis can help identify forged images, but acknowledges that the rise of automated forgery may require new approaches. Software may be able to detect tell-tale signs of tampering, such as visual artifacts or changes to the data in a file. But AI can learn to remove such signals, creating a cat-and-mouse game between forgers and fake-spotters.
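As a deliberately simplified illustration of the "visual artifacts" idea (a toy sketch, not the agency's actual tooling): spliced-in regions are often smoother than the surrounding sensor noise, so a crude check can flag windows of an image whose local pixel variation falls far below the image-wide norm. All thresholds and data here are invented for the example.

```python
import random

# Toy tamper check: flag windows that are suspiciously smooth compared
# to the rest of a scanline of pixel values. Real forensic tools are
# far more sophisticated, and AI-generated splices can defeat checks
# this simple.

def local_variation(pixels, start, size):
    """Mean absolute difference between neighboring pixels in a window."""
    window = pixels[start:start + size]
    return sum(abs(b - a) for a, b in zip(window, window[1:])) / (size - 1)

def flag_windows(pixels, size=8, ratio=0.25):
    """Return start indices of windows much smoother than the whole line."""
    overall = local_variation(pixels, 0, len(pixels))
    flags = []
    for start in range(0, len(pixels) - size + 1, size):
        if local_variation(pixels, start, size) < overall * ratio:
            flags.append(start)
    return flags

# Noisy "natural" texture with a flat spliced-in patch at indices 16-23.
random.seed(0)
scanline = [random.randint(0, 255) for _ in range(16)] + [120] * 8 \
         + [random.randint(0, 255) for _ in range(16)]
print(flag_windows(scanline))  # the flat patch starting at index 16 is flagged
```

The cat-and-mouse dynamic the spokesman describes follows directly: a forger who knows this check exists can simply add synthetic noise to the spliced region, forcing detectors to look for subtler statistical fingerprints.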
“Knowing, validating and trusting our sources is becoming increasingly important, and technology plays a huge role in making that happen,” says the spokesman.
Recognizing images manipulated with AI has become an important area of academic, industrial, and government research. Big tech companies like Facebook, worried about the spread of misinformation, are backing efforts to automate deepfake video identification.
Zhao, for his part, wants to investigate ways to automatically identify deepfake satellite images. He says studying how landscapes change over time could help flag suspicious features. “Temporal-spatial patterns are going to be really important,” he says.
However, Zhao notes that even if the government has the technology to detect such fakes, the public may be taken by surprise. “If there is a satellite image that is widely shared on social media, that could be a problem,” he says.
This story first appeared on wired.com.