Sunday, May 27, 2012

GeoNet and the Art of Earthquake Location Part 2


In my previous blog I discussed the principles of earthquake location, but we also face some reasonably difficult practical issues. The most important of these is how to identify the arrival of the earthquake waves when there are many sources of ground shaking. These include the background action of the oceans on the shores, weather noise (such as wind, rain and thunder) and humans and other animals (see Figure 1). In fact it is what we call “cultural noise” which causes us the most difficulty. This is the noise we humans make going about our everyday lives (vehicles, factories, and just people walking around). It is obviously worse in cities, where there are many of us causing ground noise. To avoid this, many of our recording sites are as far away from people as possible! Another GeoNet blog (see GeoNet – Shaken not stirred) gives a very good example of seismic noise made by a large group of people. For all these reasons considerable skill is required to “pick” the first-arriving earthquake waves, which may be buried in ground-shaking noise. Moving this to an automated process is difficult, but good progress has been made. Machines now do the job more consistently than humans, but can still be fooled by noise more easily.


Figure 1: The GeoNet seismograph station near Denniston on the west coast of the South Island. The image shows two earthquakes near the centre, but also a lot of "cultural" noise. This site is prone to disturbance by nearby mining operations, which show as small, similarly-sized blobs during usual working hours.

An additional practical problem is making sure the correct earthquake arrivals are associated with the correct earthquake. In New Zealand, where more than 20,000 earthquakes are located each year, there are often earthquakes happening at the same time in different parts of the country. If the automatic processing mixes the arrivals from one earthquake with those of another event, the calculated location will be inaccurate. To avoid this the computer makes hundreds of estimates every second, testing whether a “picked” phase arrival fits any candidate earthquake location. An earthquake location needs to reach a good level of accuracy before it is accepted. But some bad events do get through when there is a large amount of ground noise, or when signals from distant earthquakes are mixed with nearby events.
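
The association test at the heart of this can be sketched in a few lines of code. This is a toy illustration only (the function name, wave speed and tolerance are my own assumptions, not the GeoNet implementation): a pick "fits" a candidate event if its arrival time matches the travel time predicted from the trial origin to the station.

```python
import math

# Hypothetical sketch of a phase-association test, not the real algorithm.
P_SPEED_KM_S = 6.0   # assumed average crustal P-wave speed
TOLERANCE_S = 2.0    # how far off a pick may be and still "fit"

def pick_fits_origin(pick_time, origin_time, station_xy, origin_xy):
    """Return True if the picked arrival time matches the travel time
    predicted from the candidate origin to the station."""
    distance_km = math.dist(station_xy, origin_xy)
    predicted_arrival = origin_time + distance_km / P_SPEED_KM_S
    return abs(pick_time - predicted_arrival) <= TOLERANCE_S
```

A pick that is many seconds away from the predicted arrival is rejected, which is how arrivals from a distant earthquake are kept out of a nearby event's solution.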

Our new earthquake analysis system, GeoNet Rapid (currently in Beta), is based on the SeisComP3 system developed by GFZ in Potsdam, Germany, which is made freely available and has a large and active user community (for details see my colleague’s blog). This system automatically identifies earthquake wave arrival times (phases), associates the phases into earthquake events, and then provides a location and depth with error estimates (and magnitude estimates). Additionally, within GeoNet Rapid we are drawing on many decades of earthquake and tectonic research in New Zealand in the form of a three-dimensional model of how earthquake wave speeds vary around the country. This allows a more accurate estimation of the true location and depth of earthquakes. But even with all this new technology the machines will sometimes get it wrong. For larger felt earthquakes recorded on many stations this is now rare, and will continue to improve as we refine GeoNet Rapid. For more details on how to use GeoNet Rapid see GeoNet Rapid - Why is it different?

Links
How do seismologists locate an earthquake?
Foo Fighters rocked Auckland!
GeoNet Rapid (the Beta website)
The SeisComP3 earthquake Analysis System (the heart of GeoNet Rapid)
GeoNet Rapid - Being Faster
GeoNet Rapid - Why is it different?

Sunday, May 13, 2012

GeoNet and the Art of Earthquake Location - Part 1


Using the recordings of earthquake waves at GeoNet stations and some simple mathematics we can easily calculate an earthquake’s location. Yeah Right! (non-New Zealanders should check here and Figure 1 to understand the above statements). Earthquakes are complicated ruptures of the rock within the Earth. We imagine them as simple fault breaks deep underground, usually showing as nice straight lines where they reach the Earth’s surface. This simple picture is far from what actually happens - most earthquakes do not break the Earth’s surface, and larger earthquakes usually rupture more than one fault. This is why asking “what fault was that earthquake on?” is usually the wrong question unless you are talking about a large earthquake. For example, only the Darfield (September 2010) earthquake in the Canterbury earthquake sequence caused an identifiable surface rupture. Using various kinds of land surveying (very accurate GPS and satellite radar mapping) and many recordings from ground shaking sensors we can build up a picture of the faults which ruptured in the major earthquakes in the sequence. What we have found is that each earthquake is actually made up of several fault breaks within the Earth.


Figure 1: Yeah right! Tui beer is promoted through a humorous advertising campaign which uses stereotypes, heavy irony and the phrase Yeah Right. This phrase has become a part of New Zealand culture.

Let’s look at the earthquake location process in a bit of detail, including an “Earthquakes 101”. When we talk about the location and depth of an earthquake we are actually referring to the place where the fault rupture starts and begins sending out earthquake waves. A very big earthquake can break a fault (or faults) hundreds of kilometres long, but its location will be given as the point where it starts. Technically, the point within the Earth where the rupture starts is the focus (or hypocentre), and the point on the Earth’s surface directly above it is the epicentre (or just the location). The earthquake’s focus will be at some depth directly below the epicentre (Figure 2).


Figure 2: Earthquake location terms. Image from “Earthquakes and Plates”


The location process involves measuring the arrival time of the earthquake waves (referred to as phase arrivals or just phases) at our ground shaking sensors. There are two main types of earthquake waves, imaginatively called primary (P; see Figure 3) and secondary (S; see Figure 4) waves. P-waves are like sound waves travelling through the Earth and are much faster than S-waves, which could also be called shaking waves because they cause a side-to-side motion. It is the S-waves that cause most earthquake damage. In the upper 10 km of the Earth’s crust P-waves travel at about 4.5 to 6.5 km per second and S-waves at 3 to 4 km per second. There are other kinds of earthquake waves which are a combination of the main wave types. The difference in arrival time between these two wave types indicates the distance from the earthquake to the recording station (a bit like counting the seconds between the lightning flash and the sound of thunder gives an estimate of how far you are from a storm).
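
The lightning-and-thunder idea can be turned into a small calculation. The sketch below is illustrative only, using single average speeds picked from the ranges quoted above (real practice uses travel-time tables for a layered earth):

```python
# Illustrative only: estimate distance from the S-minus-P arrival-time gap,
# assuming single average crustal speeds from the ranges in the text.
VP_KM_S = 6.0   # P-wave speed (within the 4.5-6.5 km/s range quoted)
VS_KM_S = 3.5   # S-wave speed (within the 3-4 km/s range quoted)

def distance_from_sp_time(sp_seconds):
    """Distance at which the S wave lags the P wave by sp_seconds.
    Travel times are t_p = d/VP and t_s = d/VS, so the gap is
    sp = d * (1/VS - 1/VP); solve for d."""
    return sp_seconds / (1.0 / VS_KM_S - 1.0 / VP_KM_S)
```

With these speeds, an S-P gap of 10 seconds corresponds to a distance of 84 km.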


Figure 3: A representation of how P-waves, which are compressional waves (like sound waves) travel through the Earth. Copyright 2004-10.  L. Braile.  Permission granted for reproduction and use of animations for non-commercial uses.


Figure 4: A representation of how S-waves, which are transverse waves travel through the Earth. Copyright 2004-10.  L. Braile.  Permission granted for reproduction and use of animations for non-commercial uses.

Calculating the location, depth and size of an earthquake would be much easier if the earth beneath our feet were uniform and composed of just one kind of rock. But the rocks are layered, made of a variety of rock types, full of fractures and far from uniform. In fact, because of the alignment of some rock crystals and cracks, earthquake waves may travel at different speeds in different directions! So that simple mathematics I mentioned above (tongue in cheek) gets complicated very quickly. Usually we ignore all these complications and just assume the speed of earthquake waves varies only with depth within the Earth. This works reasonably well if the wave speeds change only a small amount from place to place, but New Zealand’s position on a tectonic plate boundary means that using the simple approach can introduce large errors. The earthquake location process uses the phase arrival times, together with the travel times and distances involved, to calculate the position of the earthquake source relative to all the stations which recorded the earthquake waves (the simple mathematics I talked of above; see here for a more detailed description). In general, the more stations recording an earthquake, the better the estimate of location and depth will be. The most accurate locations are calculated when the recording stations surround the earthquake, and the poorest when the earthquake occurs outside the sensor network (such as offshore). The long thin shape of New Zealand means recording stations often do not surround an earthquake’s location.
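
To make the idea concrete, here is a toy grid-search locator. It is a deliberately simplified sketch (flat earth, one uniform wave speed, epicentre only, no depth), not the method GeoNet uses, but it shows how arrival times at several stations pin down a source:

```python
import math

# Toy epicentre search: try every point on a grid and keep the one whose
# predicted P arrivals best fit the picked arrival times.
P_SPEED_KM_S = 6.0  # assumed uniform wave speed

def locate(stations, picks, grid_km=100, step_km=1.0):
    """stations: list of (x, y) positions in km; picks: P arrival times (s).
    Returns the best-fitting (x, y) epicentre on the grid."""
    best, best_misfit = None, float("inf")
    x = -float(grid_km)
    while x <= grid_km:
        y = -float(grid_km)
        while y <= grid_km:
            travel = [math.dist(s, (x, y)) / P_SPEED_KM_S for s in stations]
            # The origin time is unknown, so remove it by de-meaning the
            # residuals before measuring the misfit.
            resid = [p - t for p, t in zip(picks, travel)]
            mean = sum(resid) / len(resid)
            misfit = sum((r - mean) ** 2 for r in resid)
            if misfit < best_misfit:
                best, best_misfit = (x, y), misfit
            y += step_km
        x += step_km
    return best
```

Notice that if all the stations sit on one side of the trial grid (as they do for an offshore earthquake), many grid points give similar misfits and the solution becomes poorly constrained, which is exactly the problem described above.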

In my next blog I will talk about how we identify the P and S waves that are crucial to getting an earthquake location.



Monday, April 23, 2012

The Case for Building Instrumentation


Should more buildings in New Zealand be equipped with earthquake recording instruments to measure their response to shaking? GeoNet data have proven to be very important for understanding the extensive damage caused by the Canterbury earthquakes during the last 18 months. We can thank the vision of John Berrill, formerly of the Engineering School at the University of Canterbury, for the high level of GeoNet instrumentation in the Canterbury region (although the major target was recording  an Alpine Fault rupture; see "CanNet: the little network that could!” in GeoNet News October 2010). The many GeoNet strong motion stations provided a very good indication of the extreme levels of ground shaking caused by the major earthquakes (see Figure 1),  but only a single building in Christchurch had been instrumented in the GeoNet building instrumentation programme.  What would more instrumented buildings in the region have told us? Could they have identified buildings damaged in the Darfield (September 2010) earthquake and helped with post-event building evaluations? Could they be helping us make decisions about the rebuild process? Should future buildings over a certain size be instrumented to a specified level as is required in California? Can the one instrumented building in Christchurch provide an insight into the answer to these questions? I will give a little background and then come back to these questions.

Figure 1: The levels of shaking for the six highest-impact Canterbury earthquakes. The length of the bars shows the vertical and horizontal shaking levels at the indicated sites around the Christchurch area. The vertical shaking in the Christchurch (22 February 2011) earthquake exceeded 2 times the force of gravity, and a similar level of horizontal shaking occurred during the 13 June 2011 earthquake.

I recently attended the annual conference of the New Zealand Society for Earthquake Engineering (NZSEE) at the University of Canterbury in Christchurch. The theme of the conference was “Implementing lessons learnt” from the Canterbury earthquakes. The major Canterbury earthquakes were high-impact events which inflicted higher than expected levels of damage. There are many reasons for this; the most important are:

  • the closeness of the earthquake ruptures to Christchurch city;
  • the very high shaking levels;
  • the extensive liquefaction.
Many of the papers presented at the NZSEE conference used GeoNet data as the basis for their analysis (although with limited acknowledgement of GeoNet and its sponsors – the Earthquake Commission (EQC), Land Information New Zealand (LINZ) and GNS Science). What is clear is that much of the damage would have been very difficult to understand and explain without the availability of the GeoNet data showing the actual level of ground shaking. Without data it is very difficult to match expected and actual levels of damage.

It is a common misconception that the aim of our current building codes is to ensure buildings are not damaged by major earthquakes. It is not - the aim is to ensure life safety. Buildings that perform well and save lives may still need to be demolished and replaced following a major earthquake. Critical buildings such as hospitals are built to higher standards, and one way this is done is by using base isolation so the building does not respond as violently to the ground shaking.  Base isolation and other means of shaking energy absorption appear to be very effective at reducing the level of damage, although very few (around a dozen) buildings have base isolation in New Zealand.  Papers presented at the conference suggested the additional cost of base isolation is usually less than 10%.

The GeoNet Building Instrumentation Programme (see Figure 2) aims to install multiple seismic instruments in about 30 representative buildings (commercial and residential) and bridges throughout New Zealand to gain insights into the earthquake engineering performance of those structures. To date 10 installations have been completed and several others are in progress. A brochure on the programme can be found here. The buildings were chosen to cover the range of building types, largely on the basis of the likelihood of capturing useable data, so most are in Wellington or along the east coast of the North Island. Some were planned for the South Island, but originally very few for Christchurch.

Figure 2: A typical schematic representation of the components of the seismic instrumentation deployed within a building. The sensors are distributed at various levels of the building and connected through computer network cables to the central recording unit. The GPS receiver provides accurate timing (to less than 1 ms). Wherever possible, one of the sensors is mounted in an enclosure a short distance from the building so as to record shaking levels away from the building. Diagram courtesy of Canterbury Seismic Instruments Ltd.
What is clear is that without instruments in buildings it will always be impossible to know if the damage caused by past large earthquakes could have been identified using such instruments. For example, we will never know if some damage could have been detected instrumentally after the Darfield earthquake and before the Christchurch earthquake. The two major changes which could be identified are inter-storey drift (floors moving horizontally relative to each other) and the frequencies of the modes of oscillation of a building. Research is needed to identify how useful building instrumentation would be and how the data from instrumented buildings can best be used to assess what has come to be known as “building health”. But in my opinion we should be instrumenting as many buildings as possible, and perhaps there should be a minimum instrumentation standard for buildings of a given size or complexity. It seems like an oversight that few base-isolated buildings are currently instrumented. Given the usefulness of the ground-based GeoNet data for understanding the Canterbury earthquakes, how much more could have been learnt if a selection of the most damaged buildings in central Christchurch had also been instrumented before the earthquakes occurred?

Links:
GeoNet Website
GeoNet Rapid (Beta)
GNS Science
Earthquake Commission
Land Information New Zealand
GeoNet News, Special Darfield Earthquake issue, October 2010
New Zealand Society for Earthquake Engineering
NZSEE 2012 Conference
GeoNet Building Instrumentation Programme
GeoNet Building Instrumentation Programme Brochure

Wednesday, April 4, 2012

GeoNet Rapid - Why Now?



One of the obvious questions to ask about the launch of GeoNet Rapid (Beta) is - why now? Why didn’t we have it before the Canterbury earthquakes began? There are three factors to consider when answering these questions: the coverage of the GeoNet sensor networks, the rapid development of the systems and technology used to locate earthquakes, and the long, thin, plate-boundary nature of New Zealand. GeoNet Rapid is the “tip of the iceberg”, relying on an extensive sensor network throughout New Zealand, a real-time data communications network (like a private version of the Internet), a high-technology earthquake analysis system and a state-of-the-art information delivery system.

To explain this a bit more, let’s look at the evolution of GeoNet by travelling back a decade or so to before GeoNet existed (see Figure 1). Back then there were just four real-time earthquake recording stations, two radio networks and a small number of "dial-up" stations in the whole of New Zealand. The rest of the stations (the small black squares) recorded on cassette tapes and paper printouts which were mailed in weekly for processing. In the best case it would take an hour to get an approximate earthquake location, and weeks to months to get a “final” location. Sometimes we needed to ring up the local farmer, who would read earthquake data off the printouts! Estimates of shaking intensity from the (film-recording) strong shaking instruments took up to a year to become available.


Figure 1: The Pilot network existing before the start of GeoNet in July 2001
(diagram from the original GeoNet proposal dated 16 March 2000)

Contrast that situation with the current GeoNet network (see Figure 2) which has more than 550 sensor sites and real-time (or near real-time) data communications. The GeoNet sensor network grew from almost nothing to its current size over the decade following the launch of GeoNet in 2001, but only in the last few years has it been at the size and density required to give reliable automatic earthquake locations. GeoNet was developed as a long term sustainable system and much of the effort in the first decade went into the development of the sensor networks, and it was only when they were in place that GeoNet Rapid became feasible.


Figure 2: The current GeoNet sensor network - to prevent
clutter only the earthquake recorders are shown.
For more information about the GeoNet network see
http://www.geonet.org.nz/resources/network/netmap.html

Locating an earthquake and estimating its depth and magnitude  is a complex process involving many calculations once the earthquake shaking waves arrive and are measured at a minimum number of sites (I will cover this in more detail in a later blog). Although the theory of earthquake analysis has not changed greatly, the available systems and technology have developed considerably in the last decade, receiving an extra boost following the Indian Ocean tsunami at the end of 2004. This has greatly improved the availability of software for the rapid characterisation of earthquakes. There have also been big advances in the ability to feed this information quickly to websites (and as you are now seeing to mobile devices).

New Zealand is not an easy place to locate earthquakes because it is a country made up of two long thin islands. It lies on the plate boundary between the Pacific and Australian plates and experiences many shallow and deep earthquakes. To locate an earthquake it must be almost surrounded by earthquake recorders - hard to achieve in many parts of New Zealand. A very effective earthquake recording network for New Zealand would have many offshore (undersea) instruments costing many times the current resources of GeoNet.

So GeoNet Rapid was not possible until the GeoNet sensor network was near completion and the world had made the fast advances in technology in the last few years. Even with the current network and technology earthquakes in some parts of New Zealand (where there are fewer stations) and those offshore will sometimes be mis-located and need seismologist intervention. GeoNet Rapid (Beta) can now, in most cases, produce good locations very quickly, but will still sometimes give less reliable estimates. We are working to improve this as we move through the beta process to the final release later this year.

Wednesday, March 21, 2012

Deep Earthquakes and Magnitudes


Let’s just have one more look at magnitude before moving on to other topics. Some people have noticed that the magnitudes given for deep earthquakes under the North Island by GeoNet Rapid (Beta) are much lower than the official Local Magnitudes given on www.geonet.org.nz. This is because GeoNet Rapid (Beta) has moved to magnitudes based on estimates of Moment Magnitude, as discussed in my last blog. It highlights why understanding earthquake magnitude can be complicated – particularly in New Zealand, where we have deep earthquakes. The magnitude estimate used by GeoNet Rapid (Beta) removes the bias in the Local Magnitude caused by the way the earthquake waves lose (or do not lose) energy as they travel through the complicated earth structure beneath the North Island. To understand this, let’s look in a little detail at what lies below our feet (assuming you live in the North Island as I do).

Under the North Island of New Zealand the Pacific and Australian tectonic plates are colliding, and the Pacific plate is being pushed down (subducted) under the Australian plate (for more details see article in Te Ara). It is a slow collision compared to a car crash at only around 5 cm a year, but reasonably fast in geological terms. This can be seen in the image of earthquakes under the North Island of New Zealand (see diagram) – shallower earthquakes (orange) near the east coast give way to deeper earthquakes (green, blues to purple) as we travel west outlining the Pacific plate getting deeper beneath the Australian plate. By the Taranaki area the earthquakes are hundreds of kilometres deep and by Auckland you have moved out of the region where there is a subducting Pacific slab at depth. Above the Pacific plate under much of the central North Island, the material has been disturbed by this collision and subduction process forming a region of volcanic and geothermal activity.


When an earthquake happens deep under the North Island the earthquake waves travel up and along the colder rock of the Pacific plate without losing much shaking energy, but the waves travelling up through the hotter volcanic zone lose most of their shaking energy. This explains why these earthquakes are often strongly felt on the East Coast of the island but are sometimes not even felt directly above where they occur! Putting all this together we see why measuring the magnitude of a deep North Island earthquake is difficult. Our instruments record high levels of shaking along the East Coast of the North Island and even felt levels of shaking along the same coast in the South Island, but low levels of shaking directly above the earthquake and to the west (depending on the location).

When the New Zealand Local Magnitude scale was devised in the 1970s, these complications were not taken into account fully and so deep earthquakes under the North Island are assigned higher magnitudes than newer techniques like Moment Magnitude give. This explains why GeoNet Rapid (Beta) which uses a magnitude estimate based on Moment Magnitude gives values lower than Local Magnitude for deeper North Island earthquakes by around 0.5 units or more. Similar effects happen for deep earthquakes in the Fiordland region of the South Island.

Saturday, March 17, 2012

What’s the Magnitude?

And whose Magnitude is it anyway?

With the introduction of GeoNet Rapid (our new automated earthquake analysis system - http://beta.geonet.org.nz) we have begun the move to a unified magnitude estimate (Summary Magnitude, or just M) based on Moment Magnitude. Moment Magnitude is more closely based on the full earthquake characteristics, and will align better with the magnitudes given by international institutions such as the United States Geological Survey (USGS).

Was that a magnitude 4.8 or 5.3? That earthquake felt much larger than the one last week but GeoNet says it was ONLY a magnitude 4.8! And why has the magnitude now gone up? These are the questions we are asked all the time. And once we move to GeoNet Rapid I am sure even more questions will come our way.

For example, on the last day of December 2011 a (GeoNet) magnitude 4.8 earthquake occurred at 1:44 in the afternoon about 10 km east of Christchurch. The USGS calculated it was magnitude 5.3. Who was right? The media suggested the USGS was (see http://www.stuff.co.nz/the-press/news/christchurch-earthquake-2011/6206867/Aftershock-may-be-one-of-biggest), and several people in Christchurch agreed saying that it felt much bigger than 4.8. As stated in the article the USGS usually gives lower magnitudes than GeoNet, but in this case they did not because they used a different magnitude method than usual. Independent estimates using Moment Magnitude gave a value a little lower than GeoNet as expected.

I repeat this story here to show the trap of using magnitude as a measure of earthquake size without knowing the detail. Currently GeoNet publishes Local Magnitude (sometimes called Richter Magnitude after its inventor, Charles Richter) for most earthquakes but uses Moment Magnitude for large (usually greater than around magnitude 6) earthquakes because local magnitude is unreliable for larger events.

The magnitudes of earthquakes cause much confusion, with different organisations publishing different values for the same earthquake. And often more data results in the magnitude being revised up or down.

It does not help that there are more different magnitude methods than I have fingers! Or that it is not possible to sum up the size of a complicated natural phenomenon like an earthquake with a single number.

We describe an earthquake as happening at a place (the epicentre), at a distance below the Earth’s surface (depth), and having a size (magnitude). Real earthquakes start breaking the rock somewhere “down there” and continue to rupture for a time in a particular direction (or in more than one direction). To fully describe an earthquake requires many more numbers than the three listed above. So why do we use magnitude?

Magnitude is an estimate of the size of an earthquake independent of the location of the person experiencing it (I'll talk some more about felt intensity in a later blog). Originally magnitude was based on the size of the traces on a particular type of earthquake drum (the Wood-Anderson seismograph). This is still how Local Magnitude is calculated (although now computers transform modern seismograph signals into “pretend” Wood-Anderson recordings before the measurement is made). The values we give are an average of many measurements on many drums and are accurate to about one decimal place (for example 4.1), although the average usually has many decimal places. Most countries have developed their own Local Magnitude scales.
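
As a rough illustration of the measure-at-each-station-then-average idea, here is a schematic Local Magnitude calculation. The distance correction uses coefficients from a published Californian ML scale purely for illustration; New Zealand's scale uses its own empirically derived corrections:

```python
import math

# Schematic ML calculation. The distance correction below uses the
# Hutton-Boore (Southern California) coefficients for illustration only;
# it is NOT the New Zealand Local Magnitude correction.
def station_ml(amplitude_mm, distance_km):
    """Magnitude from one station: the log of the (simulated)
    Wood-Anderson trace amplitude plus a correction for distance,
    anchored so that 1 mm at 100 km gives ML 3.0."""
    correction = (1.11 * math.log10(distance_km / 100.0)
                  + 0.00189 * (distance_km - 100.0) + 3.0)
    return math.log10(amplitude_mm) + correction

def network_ml(readings):
    """Average the single-station values over (amplitude, distance)
    pairs, then quote the result to one decimal place."""
    values = [station_ml(a, d) for a, d in readings]
    return round(sum(values) / len(values), 1)
```

Averaging over many stations is what smooths out the site-to-site scatter, which is why the network value is quoted to only one decimal place.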

Over the years scientists have developed other magnitude methods for particular uses. Probably the most useful of these is Moment Magnitude which is based on the actual earthquake source dimensions and properties. To fully characterise the Moment of an earthquake takes many numbers, but these are then reduced to the one number – the Moment Magnitude. There are always downsides, and in the case of Moment Magnitude it takes longer to calculate (because you have to wait for more data to arrive), and it cannot be calculated for smaller earthquakes (much below about magnitude 4).

The way around this is to use Local Magnitude as the preferred magnitude estimate for smaller earthquakes and a quickly calculated estimate of Moment Magnitude for larger ones. This is what GeoNet Rapid will provide through its Summary Magnitude (or just M). It is not quite as simple as that, because the scales need to mesh together and be consistent with previous methods, so over time we will be making refinements as the new system develops. We will also be working with the USGS to get consistency with them, but be warned - magnitude is an estimate and it is rare for two institutions to give exactly the same value for an earthquake. Agreement to within 0.1 is the best we can expect.
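
The preference rule described above can be sketched as follows (the threshold and the function name are hypothetical, not GeoNet's actual cut-offs or code):

```python
# Hypothetical sketch of the Summary Magnitude preference rule.
MW_MINIMUM = 4.0   # assumed floor below which Mw cannot be computed

def summary_magnitude(ml, mw=None):
    """Prefer Moment Magnitude for larger events when it is available,
    otherwise fall back to Local Magnitude. Returns (scale, value)."""
    if mw is not None and mw >= MW_MINIMUM:
        return ("Mw", round(mw, 1))
    return ("ML", round(ml, 1))
```

The real system also has to blend the two scales smoothly near the crossover, which is one of the refinements mentioned above.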