A NASA-funded project has demonstrated how deep-learning algorithms can help experts determine, from satellite scans, whether a volcano is about to erupt.
Geoscience eggheads and computer scientists at Pennsylvania State University (PennState) in the US built a convolutional neural network (CNN) that automatically removes atmospheric distortions from satellite radar images depicting volcanoes and their environs, making it easier to spot the minute ground movements that signal an eruption is on the way. A paper describing this AI system was published in the Journal of Geophysical Research last month.
Christelle Wauthier, co-author of the paper and an associate professor of geosciences at PennState, explained this week that precise radar images are essential, as these geological movements are “subtle and cannot be picked up by the naked eye”.
“The shape of volcanoes is constantly changing and much of that change is due to underground magma movements in the magma plumbing system made of magma reservoirs and conduits,” she added. These tiny deformations can indicate whether an eruption is imminent, and are the sorts of signals that could feed an early-warning system.
Instruments on the ground can pick up these changes, and more accurately than radar satellites, but the gadgets have a very limited range, must run with little or no maintenance in highly remote areas, and may be destroyed by geological activity in the run-up to a larger eruption or otherwise wrecked by nature. Radar imagery from orbit may therefore be the better basis for a warning system, though its quality suffers when thick clouds and other atmospheric disturbances delay and distort the radar’s radio waves.
Enter machine learning to clean up the radar readings: it can automatically remove the noise, allowing the tiny movements in the earth to be analyzed for signs of danger. The researchers trained the neural network on 16,000 synthetic radar images and tested it on a further 4,000, using 20 Nvidia Tesla P100 GPUs to train the model.
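The paper's actual data-generation pipeline isn't detailed in this article, but the general idea behind synthetic radar interferograms can be sketched in a few lines: combine a smooth deformation signal with spatially correlated atmospheric noise, then wrap the result into interferometric phase. Everything below — the grid size, the Gaussian stand-in for a magma-source uplift pattern, the 1/f noise model, and the C-band wavelength — is an illustrative assumption, not the researchers' method.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128  # pixels per side (hypothetical grid size)

y, x = np.mgrid[0:N, 0:N]
r2 = (x - N / 2) ** 2 + (y - N / 2) ** 2

# "Ground truth": a smooth, bullseye-like uplift centred on the volcano.
# A Gaussian is used here as a simple stand-in for a real deformation model.
deformation = 0.05 * np.exp(-r2 / (2 * 20.0 ** 2))  # metres of uplift

# Atmospheric noise: spatially correlated, simulated by giving white
# noise a 1/f power-law roll-off in the Fourier domain.
noise = rng.standard_normal((N, N))
fy = np.fft.fftfreq(N)[:, None]
fx = np.fft.fftfreq(N)[None, :]
f = np.sqrt(fx ** 2 + fy ** 2)
f[0, 0] = 1.0  # avoid dividing by zero at the DC component
atmosphere = 0.01 * np.real(np.fft.ifft2(np.fft.fft2(noise) / f))

# Wrap the combined signal into interferometric phase. At a C-band
# wavelength of ~5.6 cm, one fringe corresponds to ~2.8 cm of motion.
wavelength = 0.056  # metres
phase = np.angle(np.exp(1j * 4 * np.pi * (deformation + atmosphere) / wavelength))
```

A denoising network would then be trained to recover `deformation` from `phase`, with the known ground truth making supervised training possible — which is exactly why synthetic data is attractive here.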
Synthetic data is normally used when there isn’t enough real data or if the real data is too messy. The researchers are confident their algorithms can be applied to real-world scenarios, and tested them on radar satellite images of the Masaya volcano in Nicaragua. The goal was to see if the AI could sufficiently remove noise from the radar images so that any movements present in the pictures married up with movements detected by stations on the ground.
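That kind of check — does the denoised map agree with what ground stations measured? — boils down to comparing the map's values at the station locations against the station readings. A minimal sketch, with entirely made-up numbers and hypothetical station positions, might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical denoised deformation map from the network (metres).
denoised = 0.05 * rng.random((128, 128))

# Hypothetical ground-station readings: (row, col, measured uplift in metres).
stations = [(10, 20, 0.012), (64, 64, 0.048), (100, 30, 0.005)]

# Residual at each station: map value minus station measurement.
residuals = np.array([denoised[r, c] - m for r, c, m in stations])
rmse = float(np.sqrt(np.mean(residuals ** 2)))
print(f"RMSE vs ground stations: {rmse:.3f} m")
```

A systematically positive mean residual would correspond to the kind of overestimation the team reported at Masaya.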
The neural network didn’t perform badly, though its output appeared to overestimate the movement of the volcano.
“These initial errors in the CNN may be caused by poor spatial sampling,” the researchers wrote in their paper. “Analysis using synthetic benchmarks shows that the CNN is capable of revealing noise-free surface deformation signals,” they concluded. To improve the accuracy of their model, they recommend including more data in the training stage, including GPS readings.
“We wish to be able to identify earthquake and fault movements as well as magmatic sources and include several underground sources generating surface deformation,” PennState’s Wauthier added. “We will apply this new groundbreaking method to other active volcanoes thanks to support from NASA.” ®