Satellite imagery is becoming ubiquitous. Research has demonstrated that artificial intelligence applied to satellite imagery holds promise for automated detection of war-related building destruction. While these results are promising, monitoring in real-world applications requires high precision, especially when destruction is sparse and detecting destroyed buildings is akin to looking for a needle in a haystack. We demonstrate that exploiting the persistent nature of building destruction can substantially improve the training of automated destruction monitoring. We also propose an additional machine-learning stage that leverages images of surrounding areas and multiple successive images of the same area, which further improves detection significantly. This makes real-world applications feasible, as we illustrate in the context of the Syrian civil war.
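To make the idea of the additional stage concrete, here is a minimal hypothetical sketch (not the authors' code, and not the paper's actual method): given raw per-tile destruction scores from a first-stage classifier, it averages each tile's score with its neighbors (destruction clusters spatially) and then takes a running maximum over successive dates (destruction is persistent). All names, weights, and the toy input are illustrative assumptions.

```python
# Illustrative sketch only: spatial smoothing plus temporal persistence
# applied to per-tile destruction probabilities. Assumes a first-stage
# classifier has produced a score in [0, 1] for each tile at each date.
import numpy as np

def refine_scores(scores, spatial_weight=0.5):
    """scores: array of shape (n_dates, n_rows, n_cols) of raw
    per-tile destruction probabilities; returns refined scores."""
    # Spatial step: blend each tile with the mean of its 4-neighborhood,
    # since destroyed buildings tend to cluster within neighborhoods.
    padded = np.pad(scores, ((0, 0), (1, 1), (1, 1)), mode="edge")
    neigh = (padded[:, :-2, 1:-1] + padded[:, 2:, 1:-1]
             + padded[:, 1:-1, :-2] + padded[:, 1:-1, 2:]) / 4.0
    smoothed = (1 - spatial_weight) * scores + spatial_weight * neigh
    # Temporal step: destruction does not reverse itself, so take a
    # running maximum over dates; a tile flagged once stays flagged.
    return np.maximum.accumulate(smoothed, axis=0)

# Toy example: 2 dates, a 2x2 grid of tiles.
toy = np.array([[[0.1, 0.2],
                 [0.9, 0.1]],
                [[0.2, 0.3],
                 [0.8, 0.2]]])
refined = refine_scores(toy)
print(refined.shape)  # (2, 2, 2)
```

The two steps correspond to the two sources of extra signal named in the abstract: images of surrounding areas (spatial step) and multiple successive images of the same area (temporal step).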
This project has been particularly hard to implement. Machine learning from images is its very own discipline, and we benefited tremendously from the seed funding we received from the La Caixa BGSE project and from the servers of the Computer Vision Centre at the UAB. The project would have been impossible to implement without several research assistants, who helped us explore several dead ends and implement the final methodology. We are grateful for extremely valuable research assistance by Bruno Conte Leite, Jordi Llorens, Parsa Hassani, Dennis Hutschenreiter, Shima Nabiee, and Lavinia Piemontese. We are particularly grateful to Javier Mas, whose research assistance produced the final coding backbone of this project.
The paper is available as an unpublished manuscript here or from arXiv. Supplementary Information is here. The published version at PNAS can be accessed here. Replication files are here.