One of the questions most frequently put to the United States Geological Survey is whether earthquakes can be predicted. Its answer is an unconditional “no”. The relevant page on the agency’s website states that no scientist has ever predicted a large earthquake, nor does any know how such a prediction could be made.
But that may soon cease to be true. After decades of failed attempts and unsubstantiated claims, some skepticism about earthquake prediction is warranted, and Paul Johnson, a geophysicist at Los Alamos National Laboratory, is indeed careful to play down the predictive potential of what he is preparing. Nonetheless, in the course of research aimed at better understanding the science of earthquakes, he and his team have developed a tool that could make earthquake prediction possible.
Like so much scientific investigation these days, their approach relies on artificial intelligence in the form of machine learning. This, in turn, uses computer programs called neural networks, which are based on a simplified model of how nervous systems are thought to learn things. Machine learning has boomed in recent years, achieving successes in areas ranging from turning speech into text to detecting cancers in CT scans. Now it is being applied to seismology.
The difficulty with doing this is that neural networks need large amounts of training data to teach them what to look for, and that is something earthquakes do not provide. With rare exceptions, large earthquakes are caused by the movement of geological faults at or near the boundaries between Earth’s tectonic plates. That tells you where to look for your data. But the seismic cycle on most faults involves a process called stick-slip, which plays out over decades. First, there is little movement on a fault as stress builds up, and so there are few data points to feed into a machine-learning program. Then there is a sudden, catastrophic slip that releases the accumulated stress. This certainly generates a lot of data, but nothing particularly useful for prediction.
Dr Johnson estimates that about ten cycles’ worth of seismic data are needed to train such a system. And, seismology being a young science, nothing like that much is available. The San Andreas Fault in California (pictured), for example, generates a big earthquake roughly every 40 years. But only about 20 years (half a cycle) of sufficiently detailed data currently exist.
In 2017, however, Dr Johnson’s team applied machine learning to another type of seismic activity. Slow-slip events, sometimes called silent earthquakes, are also caused by plate motion. The difference is that while a regular earthquake is over in seconds, a slow-slip event can take hours, days or even months. From a machine-learning point of view that is much better, because such a protracted process generates many data points on which to train a neural network.
Dr Johnson’s chosen testbed is the Cascadia subduction zone, a tectonic feature that runs 1,000km along the west coast of North America, from Vancouver Island in Canada to northern California. It is the boundary between the Explorer, Juan de Fuca and Gorda plates to the west and the North American plate to the east. The steady movement of this last plate over the first three generates a slow-slip event every 14 months or so, and geophysicists have been recording the activity in detail since the 1990s. That means many complete cycles of data exist, and the machine-learning system Dr Johnson trained on them was able to “retrodict” past slow slips from the seismic signals that preceded them, “predicting” their occurrence to within a week or two of when they had actually happened.
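The core idea, learning a relationship between statistics of the continuous seismic signal and the time remaining before the next slip, can be sketched in a few lines. Everything below is invented for illustration: the synthetic “tremor” signal, the cycle length and the rolling-variance feature are stand-ins, not Dr Johnson’s actual data or model (his team uses far more sophisticated machine-learning methods than the simple straight-line fit shown here).

```python
import random

random.seed(0)

# Synthetic "slow-slip" cycles: tremor noise grows as stress
# accumulates, then resets at each slip event. A toy stand-in
# for real Cascadia recordings -- purely illustrative.
CYCLE = 100   # time steps per (say) 14-month cycle
WINDOW = 10   # rolling window for the signal statistic

def tremor(t):
    phase = t % CYCLE                  # position within the cycle
    noise_level = 0.1 + phase / CYCLE  # noise grows toward failure
    return random.gauss(0.0, noise_level)

# Training pairs: rolling variance of the signal -> time remaining
# until the next slip event.
signal = [tremor(t) for t in range(10 * CYCLE)]  # ten full cycles
xs, ys = [], []
for t in range(WINDOW, len(signal)):
    win = signal[t - WINDOW:t]
    mean = sum(win) / WINDOW
    var = sum((v - mean) ** 2 for v in win) / WINDOW
    xs.append(var)
    ys.append(CYCLE - (t % CYCLE))     # time left in this cycle

# Ordinary least-squares fit: time_left ~ a * variance + b
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
b = my - a * mx
print(f"fit: time_left = {a:.1f} * variance + {b:.1f}")
```

Because the toy signal gets noisier as failure approaches, the fitted slope comes out negative: higher variance means less time left, which is the kind of statistical regularity such systems exploit.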
The next test of the technique, which has not yet been carried out, will be an actual prediction of a slow-slip event. But even without that, Dr Johnson’s slow-slip work suggests that machine-learning techniques do indeed work on seismic events, and so could be extended to earthquakes proper if only there were a way to compensate for the lack of data. To provide such compensation, he and his colleagues are applying a process called transfer learning, which works with a mixture of simulated and real information.
“Lab earthquakes” are miniature temblors generated on a laboratory bench by slowly squeezing glass beads in a press until something suddenly gives way. They have proved a useful substitute for real stick-slip motion. Dr Johnson’s team created a numerical simulation (a computer model that captures the essential elements of a physical system) of a lab earthquake and trained their machine-learning system on it, to see whether it could learn to predict the course of these substitute tremors.
The results were moderately successful. But what really made the difference was reinforcing the trained system with additional data from real experiments: in other words, transfer learning. The combination of copious simulated data with a pinch of reality proved significantly better at predicting when a quake would occur.
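That recipe, pretrain on plentiful output from an imperfect simulator, then fine-tune briefly on a handful of real observations, can be shown with a deliberately tiny example. Every number below is made up: the “simulator” and the “real fault” are just two slightly different straight lines, whereas Dr Johnson’s simulations and models are vastly more elaborate.

```python
import random

random.seed(42)

def fit(points, a, b, epochs, lr=0.1):
    """Full-batch gradient descent on squared error for y = a*x + b."""
    n = len(points)
    for _ in range(epochs):
        ga = gb = 0.0
        for x, y in points:
            err = (a * x + b) - y
            ga += 2 * err * x / n
            gb += 2 * err / n
        a -= lr * ga
        b -= lr * gb
    return a, b

# An imperfect "numerical simulation": cheap to sample in bulk, but
# its physics are slightly off (2.8 and 1.5 instead of 3.0 and 2.0).
sim = [(x, 2.8 * x + 1.5) for x in (random.random() for _ in range(200))]

# Scarce, noisy "real" observations of the actual fault: y = 3x + 2.
real = [(x, 3.0 * x + 2.0 + random.gauss(0, 0.1))
        for x in (0.1, 0.3, 0.5, 0.7, 0.9)]

# Transfer learning: pretrain on simulated data, fine-tune on real.
a0, b0 = fit(sim, 0.0, 0.0, epochs=500)
a_tl, b_tl = fit(real, a0, b0, epochs=50)

# Baseline: the same short training budget on the real data alone.
a_sc, b_sc = fit(real, 0.0, 0.0, epochs=50)

def mse(a, b):
    """Error against the true fault behaviour, on a test grid."""
    grid = [i / 10 for i in range(11)]
    return sum(((a * x + b) - (3.0 * x + 2.0)) ** 2 for x in grid) / len(grid)

print(f"transfer MSE: {mse(a_tl, b_tl):.4f}, scratch MSE: {mse(a_sc, b_sc):.4f}")
```

Given the same small budget of real data, the pretrained model ends up much closer to the fault’s true behaviour than one trained from scratch: the simulator supplies the rough shape of the problem, and the pinch of reality corrects its biases.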
The next step towards earthquake prediction proper will be to apply the same approach to a real geological fault, probably the San Andreas. A machine-learning system will be trained on data from a digital simulation of the fault, together with the half-cycle of real data that is available. Dr Johnson’s team will then see whether this is enough to retrodict events not included in the training data. He mentions the magnitude-six Parkfield earthquake of 2004, a slip on the San Andreas that did minimal damage but was extremely well studied, as a possible target.
For the moment, Dr Johnson’s ambitions are limited to predicting the timing of an impending earthquake. A full prediction would also include where along the fault it will happen, and how big it will be. If timing can indeed be predicted, though, that will surely boost efforts to forecast these other parameters as well.
He hopes to see the first results within the next three to six months, though he warns it could take longer. If those results are promising, other teams around the world will doubtless rush to attempt the same thing, using historical data from other faults to validate the technique. That, in turn, should improve the underlying models.
If it all comes to nothing, little will have been lost, for Dr Johnson’s work will certainly lead to a better understanding of the physics of large earthquakes, which is valuable in itself. But if it succeeds, and yields software that can predict when large earthquakes will happen, that would be a truly earth-shattering discovery. ■
This article appeared in the Science & Technology section of the print edition under the headline “And now, stay tuned for earthquake forecasts”