Wednesday, February 18, 2015

The Future of GeoNet Revisited - Part 3: Impact Reporting

In my last blog I talked about the GeoNet Community - the large and growing group of people who rely on, use and are interested in GeoNet, our operations, data and other outputs. In this blog I will introduce the first of two fundamental changes I see happening to GeoNet and our community over the next decade.

Potential Impact Reporting ....
Our vision is that GeoNet will be able to provide near-real-time potential impact reporting not just for earthquakes, but also for volcanic eruptions, tsunami and landslides. These potential impact reports can then feed directly into systems designed to estimate the likely levels of damage given the people and infrastructure at risk. This is a major move from event reporting (where, when, how big) to impact reporting (what the likely effects will be where people and infrastructure are located). This reporting will use instrumental data, community reporting (citizen science) and modelling.

If we consider earthquakes, then felt intensity is a form of impact reporting. The magnitude of an earthquake estimates the physical size of the event where it ruptured – whereas the intensity relates to its impact on people, landforms, buildings and infrastructure. So reporting an earthquake location, depth and size is event reporting, but providing intensity estimates at multiple locations where people live and work is impact reporting. For a volcano, stating it has erupted is event reporting, but giving ballistic and ash fall damage estimates is impact reporting. You get the picture.
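To make that distinction concrete, here is a minimal sketch in Python of the two kinds of report. The field names are my own invention for illustration; they are not GeoNet's actual data formats:

```python
from dataclasses import dataclass

@dataclass
class EventReport:
    """Event reporting: where, when, how big."""
    origin_time: str    # e.g. "2015-01-06T06:48:00Z"
    latitude: float
    longitude: float
    depth_km: float
    magnitude: float

@dataclass
class ImpactEstimate:
    """Impact reporting: likely effects at a place where people live."""
    place: str            # e.g. a town or suburb name
    latitude: float
    longitude: float
    intensity_mmi: float  # estimated Modified Mercalli intensity
    pga_g: float          # estimated peak ground acceleration, as a fraction of g

# One event report fans out into many impact estimates:
# one per populated place or piece of infrastructure at risk.
```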

So why are we not doing impact reporting now? In short, because it is hard to get right! Let's consider earthquakes again as, being a seismologist, I find them easier. If we had enough sensors, we could simply use the sensor network to give an accurate estimate of the shaking level where you are for any earthquake. But the nearest sensor to you may be tens of kilometres away, so we have to make an estimate based on the earthquake location, size and depth, the ground type beneath you and various other factors (such as whether you are in a multi-storey building). Even if you are lucky and there is a sensor near you, it may be on different ground (soft rather than hard rock, for example). We need either a huge increase in the number of sensor sites, or we can use known science to estimate the likely felt intensity anywhere. We can also supplement the physical measurements and modelling with reports from people, as explained in my last blog.
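To give a feel for the kind of calculation involved, here is a highly simplified sketch of estimating felt intensity from the source parameters and a site term. The coefficients are illustrative placeholders only, not a published attenuation relation and not what GeoNet or ShakeMap actually use:

```python
import math

def estimate_mmi(magnitude: float, depth_km: float,
                 epicentral_distance_km: float,
                 site_amplification: float = 0.0) -> float:
    """Rough felt-intensity estimate at a site with no nearby sensor.

    Generic attenuation form: intensity grows with magnitude and decays
    with the log of hypocentral distance. The coefficients below are
    illustrative placeholders, not a real New Zealand model.
    """
    hypocentral_distance_km = math.hypot(epicentral_distance_km, depth_km)
    mmi = (1.5 * magnitude
           - 3.5 * math.log10(hypocentral_distance_km)
           + 3.0                    # baseline constant (placeholder)
           + site_amplification)   # e.g. +0.5 for soft ground, 0 for hard rock
    return max(1.0, min(mmi, 12.0))  # clamp to the MMI scale

# Example: an M 6.0 at 10 km depth, felt 50 km away on soft ground.
print(estimate_mmi(6.0, 10.0, 50.0, site_amplification=0.5))
```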

ShakeMap NZ ....
The approach taken by the USGS ShakeMap system, which we are in the process of implementing in New Zealand, is to use modelling together with all available data. For example, Figure 1 shows the ShakeMap for the most recent large earthquake in New Zealand, the M 6.0 Wilberforce earthquake of 6 January 2015. In this case the nearest strong-motion stations were a long way from where the earthquake was centred, so the maximum recorded shaking was less than 5% of the force of gravity. However, ShakeMap estimated shaking levels of more than 20% of the force of gravity near the epicentre.

Figure 1. ShakeMap of the Wilberforce earthquake of 6 January 2015 showing shaking intensity at the surface. The maximum accelerations indicated in the yellow and orange zones (around 0.2 g) could potentially have caused minor damage if the location was not so remote.

This shows both the strength and the weakness of ShakeMap: it gives us an estimate of the maximum shaking levels, but we cannot confirm that estimate because we have no nearby instruments (see Figure 2). This event was also originally mis-located because a small foreshock in a similar location confused the automatic location system (an issue the Duty Officer did not immediately identify). Because ShakeMap uses the location, depth and magnitude, as well as any actual shaking data, to estimate the overall pattern of shaking, the mis-location meant the shaking pattern was also wrongly estimated. This was not a big problem in this case because the earthquake was remote from population centres, and it would not have happened if we had more sensors in the region. The remoteness of the location also meant we had few felt reports from close to the earthquake.
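The underlying idea of combining a model with whatever recordings exist can be sketched very simply. This is only a toy illustration of the principle; the real ShakeMap algorithm is considerably more sophisticated (bias correction, uncertainty grids, site terms and more):

```python
import math

def blended_pga(model_pga_g: float,
                observations: list[tuple[float, float]],
                falloff_km: float = 30.0) -> float:
    """Blend a modelled PGA at a site with nearby recordings.

    `observations` holds (distance_km_to_site, recorded_pga_g) pairs.
    Each recording is weighted so it dominates when very close to the
    site and contributes almost nothing when far away.
    """
    weighted_sum = model_pga_g  # the model counts as one unit-weight input
    total_weight = 1.0
    for distance_km, pga_g in observations:
        weight = math.exp(-distance_km / falloff_km)
        weighted_sum += weight * pga_g
        total_weight += weight
    return weighted_sum / total_weight

# A Wilberforce-like case: the model says ~0.2 g near the epicentre, but
# the nearest instruments are distant and recorded much less. The distant
# recordings barely move the estimate, which is why it cannot be confirmed.
print(blended_pga(0.20, [(120.0, 0.04), (150.0, 0.03)]))
```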


Figure 2. The strong-motion recordings for the Wilberforce earthquake of 6 January 2015. Note that the indicator bars represent 10% of the force of gravity and that there are no recordings close to the earthquake epicentre.

The best choice we have is to improve our models of shaking AND to increase the number of sensors over time, as I advocated in my original blog series on GeoNet technology. Improving our knowledge of shaking requires more data on the earthquake source, on the effects of the earth the earthquake waves travel through, and on the near-surface damping and amplification effects close to where the shaking information is required. In many ways putting in more sensors is easier!

By feeding improved potential impact reporting outputs like ShakeMap directly into systems that provide damage and harm estimates, GeoNet can make a major positive difference. In the modern world this is becoming more and more important, making this development essential for the future of GeoNet and our community.
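As a sketch of that link, here is how a ShakeMap-style shaking estimate might feed a simple damage model via a lognormal fragility curve. The median capacity and spread below are invented for illustration, not values for any real New Zealand building class:

```python
import math

def damage_probability(pga_g: float, median_g: float = 0.35,
                       beta: float = 0.6) -> float:
    """Probability that a structure reaches a given damage state.

    Standard lognormal fragility-curve form: P = Phi(ln(pga/median)/beta),
    where Phi is the standard normal CDF. The parameters are placeholders.
    """
    z = math.log(pga_g / median_g) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Feeding a ShakeMap-style estimate for the Wilberforce earthquake
# (~0.2 g near the epicentre) into the curve:
print(f"{damage_probability(0.20):.1%}")
```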

Future Event Scenarios
Before I move on to forecasting (or early warning), let's consider another step on that road. For recent volcano and earthquake events GeoNet has published a short list of the most likely future scenarios, along with the probability (chance) of each. For example, for the Wilberforce earthquake discussed above we estimated that a normal aftershock sequence was by far the most likely future scenario, but other possibilities could not be totally ignored. In future GeoNet will provide this information following all geohazard events.
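As a sketch of what such a scenario list might look like in machine-readable form (the probabilities below are invented for illustration, not the numbers we actually published for Wilberforce):

```python
# Illustrative only: invented scenario probabilities, not a real forecast.
scenarios = [
    ("Normal aftershock sequence that decays over time", 0.95),
    ("An aftershock of similar size to the mainshock",   0.04),
    ("A larger earthquake triggered nearby",             0.01),
]

# The scenarios are intended to be exhaustive, so they should sum to 1.
assert abs(sum(p for _, p in scenarios) - 1.0) < 1e-9

for description, probability in scenarios:
    print(f"{probability:>5.0%}  {description}")
```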

