How AWS contributes to an earthquake safety system for the US West Coast

Every second of every day, ground motion data is collected from more than 500 sites, stretching from the region’s southernmost end (including the Baja peninsula) up into the central part of California.

Not only is that a lot of data, it might also contain urgent signals: Signs of a major earthquake can lie buried amidst thousands of normal ground motion shifts. (Southern California alone sees a quake every three minutes.)

The information is processed by algorithms that sift the data for signs of an earthquake, and both the location of these earthquakes and their magnitudes are calculated in as close to real time as possible.

“Typically, all this happens within about 60 seconds to a few minutes after the data comes in,” said professor Zachary Ross, a seismologist at the California Institute of Technology (Caltech).
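The article doesn’t describe the detection logic itself, but a long-standing approach for sifting continuous ground motion records is a short-term-average/long-term-average (STA/LTA) trigger, which flags moments when signal energy jumps well above the background level. The sketch below is a minimal NumPy illustration of that general idea, not the SCSN’s actual algorithm; the window lengths and threshold are arbitrary assumptions.

```python
import numpy as np

def sta_lta_trigger(trace, fs, sta_win=1.0, lta_win=30.0, threshold=4.0):
    """Toy STA/LTA detector: flag samples where short-term energy jumps
    well above the long-term background. Illustrative only; not the
    algorithm actually run by the SCSN."""
    energy = np.asarray(trace, dtype=float) ** 2
    nsta = int(sta_win * fs)   # short window catches sudden onsets
    nlta = int(lta_win * fs)   # long window tracks background noise
    sta = np.convolve(energy, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(energy, np.ones(nlta) / nlta, mode="same")
    ratio = sta / np.maximum(lta, 1e-12)
    return np.where(ratio > threshold)[0]  # indices of candidate onsets

# Example: two minutes of 100 Hz noise with a burst injected at t = 60 s
fs = 100.0
rng = np.random.default_rng(0)
trace = rng.standard_normal(int(120 * fs))
trace[int(60 * fs):int(62 * fs)] += 10 * rng.standard_normal(int(2 * fs))
onsets = sta_lta_trigger(trace, fs)
print(f"first candidate onset at ~{onsets[0] / fs:.1f} s")
```

A production pipeline would go further, associating triggers from many stations into candidate events before estimating location and magnitude, which is the work that happens in that 60-second-to-a-few-minutes window.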

Those details are collected and distributed by Caltech in partnership with the US Geological Survey (USGS) through the Southern California Seismic Network (SCSN). The data, both raw and processed, are then made publicly available.

Policymakers, scientists, and academics use the data — for research on fault locations, earthquake precursors, and more — as do some early warning systems built to get the word out about larger quakes.

The expansive nature of that data, coupled with the essential role it plays, is why about four years ago Ross moved the existing system to AWS.

“We don’t want to be performing computations here in Pasadena when a big earthquake knocks out the power. We’ve set it up so that now the data get broadcast to AWS immediately,” he explained. “That way, the data continues to get processed if the power gets cut or infrastructure gets damaged.”

Now, with the help of new machine learning techniques, Ross and his team are upgrading the system in a way that could help scientists identify more earthquake events — and understand why they happen.  

Upgrades needed

The upgrade to Caltech’s system has been a long time coming. Ross noted that the data currently runs through a standard signal-processing algorithm written in-house about 30 years ago.

“It’s been slowly updated over the years as new databases or technology have come about, but it hasn’t gone through any kind of major overhaul during that time,” he said.

A screenshot from an interactive map that tracks earthquakes of magnitude 2.5 and higher. Policymakers, scientists, and academics use this data for research on fault locations and earthquake precursors.

USGS

The outputs of the signal-processing algorithm also require constant refinement.

“We have a whole team of people here that basically spend most of their time fixing all of the mistakes that these algorithms make,” Ross said.

Due to the age of the system, the team is now working on a “complete rewrite of everything from scratch using a cloud-native framework,” Ross said. He explained that the big push to do this now stems from advances in machine learning over the past few years: the existing system is labor-intensive, and the way the work is done today makes it impossible to incorporate modern machine learning, so the team needed to start afresh.

Ross’ research group at Caltech has been working on developing new algorithms that are more efficient and more sensitive for better, more automated data monitoring. These advances include the incorporation of deep learning algorithms, which allow for routine detection of three to five times more events.

The upgrade will also allow the team to make better use of the high-quality data available to them.

“In seismology, we have a lot of labeled data available to us,” Ross said. “That’s because we have these professional seismic analysts who have been manually measuring all these events and locating them for many decades at this point.”
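The article doesn’t name a specific model, but deep learning pickers in seismology are typically small 1-D convolutional networks trained on exactly the kind of analyst-labeled catalogs Ross describes, scoring each waveform window for the presence of a seismic phase. The PyTorch sketch below is a generic, hypothetical example of that pattern; the architecture and layer sizes are illustrative and not those of the Caltech system.

```python
import torch
import torch.nn as nn

class PhasePicker(nn.Module):
    """Generic 1-D CNN that scores a 3-component waveform window as
    noise, P arrival, or S arrival. Purely illustrative: the layer sizes
    are arbitrary and this is not the Caltech group's architecture."""

    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),     # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, 3 channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

# One 30-second window sampled at 100 Hz; labels for training would come
# from the analyst-picked catalogs described above.
window = torch.randn(1, 3, 3000)
scores = PhasePicker()(window)
print(scores.softmax(dim=-1))   # class probabilities: noise / P / S
```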

Better basic earthquake science

Updating the system helps with the basic science mission, too. Currently, not all of the data collected by Caltech can be analyzed, due to time limitations (all those hours spent making corrections). So certain subsets of the data, like larger events, are prioritized.

However, only being able to analyze larger quakes means a lot of important data processing isn’t happening. If the Caltech team were able to look at the smaller, more frequent quakes, scientists could get incredibly useful information. That’s because of the nature of earthquakes themselves.

Animation of a scenario M6.9 earthquake on the Rose Canyon fault

This video presents an animation of computer-simulated ground motions that might occur for a magnitude 6.9 earthquake rupturing the Rose Canyon fault in southern California. This simulation highlights the complex nature of seismic waves that are created during fault rupture, including the strong rupture directivity effects that would impact the densely populated areas near San Diego and Tijuana.

An earthquake isn’t just ground motion at a certain scale or location — it’s the sudden unstable movement of a fault at depth. And leading up to that slippage isn’t necessarily a single event, but often a sequence of events — earthquakes tend to trigger other earthquakes. Thus, larger events are sometimes triggered by smaller events that precede them. This cascading phenomenon means that it’s incredibly useful for scientists like Ross to study earthquakes in a complete sequence — and that means being able to reliably identify smaller earthquakes as well.

That’s also where the data grows exponentially.

Geologists documented fault offsets after the 2019 Ridgecrest earthquake sequence in California.

Katherine Kendrick, USGS

“Earthquakes have a scientifically well-known characteristic, which is that the smaller they get, the more of them occur,” Ross explained. “Every time we go down a magnitude unit, there’s about 10 times more quakes that occur.”
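That tenfold increase per magnitude unit is the Gutenberg-Richter relation, and a quick back-of-the-envelope calculation shows why lowering the detection threshold makes catalogs balloon. The snippet below is purely illustrative, assuming a b-value of 1 and an arbitrary constant a.

```python
# Gutenberg-Richter scaling: N(M) = 10 ** (a - b * M), with b close to 1
# in most regions. Illustrative numbers only: 'a' is arbitrary here, and
# only the ratios between magnitudes matter for the point being made.
a, b = 5.0, 1.0

def expected_count(magnitude: float) -> float:
    """Expected number of events at or above the given magnitude."""
    return 10 ** (a - b * magnitude)

for m in (4.0, 3.0, 2.0, 1.0):
    print(f"M >= {m}: ~{expected_count(m):,.0f} events")

# Each step down in magnitude yields roughly ten times more earthquakes,
# which is why detecting smaller events makes the data volume grow so fast.
```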

Reliably measuring smaller quakes means seismologists can also figure out where faults lie, another key to better understanding earthquakes. If you can take a greater number of smaller earthquakes and plot their hypocenters on a map, “those hypocenters will tell you something about where the faults are located at depth, which is very difficult to know otherwise, because we can’t drill down that deep. We’re talking about, often, eight miles below the surface, which is just impossible to get down to,” Ross explained.

To handle that much data, Ross and his team are relying on a grant of AWS Promotional Credits to build their prototype system. The data is streamed through Amazon Kinesis, which collects and processes large data streams in real time.
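The article names Kinesis as the ingest service but gives no integration details. The snippet below is a minimal, hypothetical producer that pushes packets of waveform samples into a Kinesis data stream with boto3; the stream name, record format, and partition-key scheme are assumptions rather than details of the Caltech pipeline.

```python
import json
import boto3

# Hypothetical producer: the stream name, record format, and partition-key
# scheme are assumptions for illustration, not details of the SCSN pipeline.
kinesis = boto3.client("kinesis", region_name="us-west-2")

def publish_packet(station: str, channel: str, start_time: str,
                   samples: list[float]) -> None:
    """Send one packet of ground motion samples to a Kinesis data stream."""
    record = {
        "station": station,
        "channel": channel,
        "start_time": start_time,
        "samples": samples,
    }
    kinesis.put_record(
        StreamName="ground-motion-waveforms",  # assumed stream name
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=station,  # keeps one station's packets in order
    )

publish_packet("CI.PASC", "HHZ", "2024-01-01T00:00:00Z",
               [0.0012, -0.0008, 0.0031])
```

Partitioning by station ID would keep each station’s packets ordered while letting the stream scale out across shards, so detection and location could keep running off-site even if infrastructure in Pasadena were damaged.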

This increased reliability and sensitivity will enable Ross and his team to detect “something like a factor of five times more smaller events” using the new generation of algorithms.

“The vast majority of what we’re recording right now is being missed, which is a pretty remarkable statement,” Ross said.

Once the new system is up and running, it will be observed in action for several years. The information could potentially be made available sooner, but would be labeled as “experimental” or something similar.

Ross stresses the importance of getting this right: “There are millions of people living in the part of California that this system is authoritative over, so it’s really important to have it working correctly.”


