Deeper insight into the subsurface

Paul Boughton

The rising speed and falling cost of processing power mean that generating a complete synthetic seismic data set is now affordable for individual companies and many times cheaper than an actual seismic survey. Keith Forward reports.

Generating synthetic seismic data from the earth model and comparing it to real seismic data can give a deeper insight into the actual geometry of the subsurface - and tell you where a real seismic survey would be worth the money.

Usually you need to shoot the seismic in order to find out what it could tell you about the subsurface, but with seismic modelling you can get this information more quickly and at a fraction of the cost.

Seismic modelling, or synthetic seismic, is the process of using the wave equation, which governs the passage of acoustic waves through the subsurface, to generate a prediction of what a real seismic survey would look like.
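As a rough illustration of the principle (not the algorithm used by any particular vendor), the sketch below propagates a source wavelet through a simple two-layer velocity model using a finite-difference solution of the 2D acoustic wave equation and records the wavefield near the surface, the way a survey would. All grid sizes, velocities and the source wavelet are illustrative assumptions.

    # Minimal sketch: synthetic shot record from a 2D acoustic wave equation.
    # Values are illustrative, not from any real survey.
    import numpy as np

    nx, nz, nt = 200, 200, 1000          # grid points in x, z and time steps
    dx, dt = 10.0, 0.001                 # 10 m spacing, 1 ms time sampling
    vel = np.full((nz, nx), 2000.0)      # 2000 m/s background velocity
    vel[100:, :] = 3000.0                # one flat reflector at 1 km depth

    src_x, src_z = nx // 2, 2            # source near the surface, mid-line
    t = np.arange(nt) * dt
    f0 = 15.0                            # 15 Hz Ricker wavelet, 0.1 s delay
    tau = np.pi * f0 * (t - 0.1)
    wavelet = (1 - 2 * tau**2) * np.exp(-tau**2)

    p_prev = np.zeros((nz, nx))          # pressure at the previous time step
    p_curr = np.zeros((nz, nx))          # pressure at the current time step
    shot_record = np.zeros((nt, nx))     # what the receivers record

    for it in range(nt):
        # second-order Laplacian; np.roll gives wrap-around boundaries,
        # kept here only for brevity
        lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
               np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4 * p_curr) / dx**2
        p_next = 2 * p_curr - p_prev + (vel * dt)**2 * lap
        p_next[src_z, src_x] += wavelet[it]      # inject the source wavelet
        shot_record[it, :] = p_next[2, :]        # record at receiver depth
        p_prev, p_curr = p_curr, p_next

    # shot_record now approximates what one shot of a real survey would look like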

Starting with an approximate earth model, built up from existing information about the subsurface, the seismic model takes into account all the imperfections of a real survey, including the noise and multiple reflections that can make seismic data hard to interpret.

Although the earth model is not perfect, and therefore the seismic data generated from it will not be perfect either, it can still provide valuable information about what a real seismic survey would reveal. If it shows that seismic data would not be very useful, there is no need to go out and acquire it. Or it may show that certain regions would benefit from a wide azimuth survey while there is no reason to shoot the entire prospect.

This is what BP did in 2003, when it was considering the first ever wide azimuth survey for sub-salt imaging of fields that could not be imaged well using standard seismic techniques, an acquisition that was going to be very expensive.

Management decided it was too great a risk to spend tens of millions of dollars on the acquisition, which at that time was still a theoretical proposition, so first they spent a few million on a seismic model. With the model, the geoscientists could show the benefits of the wide azimuth survey, justify the expense of acquiring the extra data, and pinpoint which areas would benefit the most.

The rest is history: the technique of wide azimuth acquisition was born and is now being used by many other companies around the world.

Managing risk

Synthetic seismic is basically a tool to quantify and minimise seismic risk. The information that can be gained about what real seismic would look like, or what existing seismic data is telling you, makes it possible to decide how confident you can be in pursuing a particular course of action.

For example, some parts of a reservoir could lie in a shadow zone that cannot be properly illuminated by acoustic imaging. The synthetic data can show you which parts would be best illuminated by a seismic survey and which would benefit from a different method such as gravity or magnetics.

But it is not just before a seismic acquisition project that it can be useful; it can also be used with existing seismic data to give an indication of how much it can be trusted.

Since seismic always contains a significant amount of noise and artefacts, from multiple reflections between rock boundaries or ground roll from surface waves, part of the work of a geoscientist is to decide how reliable the data is and to quantify the risk that an interpretation could be wrong. Having a synthetic model to compare with the real seismic can highlight the areas where the noise makes interpretation more risky.

Migration result

In Fig. 1, the fact that the migration result is so different from the seismic impedance reflectivity in the original model shows how imperfect seismic data is. If seismic imaging were perfect, the two images would be identical.

Knowing these imperfections can significantly aid acquisition, interpretation, and reservoir characterisation.

For example, if you were unsure about a feature on the real seismic, you could run a model based on what you already know about the reservoir to see whether the same feature appeared in the synthetic even though there was nothing corresponding to it in your model. If it did, you could be confident that the feature was an artefact.
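One way to make that comparison quantitative, purely as an illustration, is to measure how well the synthetic predicts the real image in small windows: where the agreement is poor, a feature is more likely to be noise or an artefact. The local_similarity helper below, along with its window size and the reading of the score, is a hypothetical sketch rather than an established workflow.

    # Hypothetical sketch: local agreement between real and synthetic images.
    import numpy as np

    def local_similarity(real, synthetic, win=16):
        """Normalised zero-lag correlation in non-overlapping windows."""
        nz, nx = real.shape
        score = np.zeros((nz // win, nx // win))
        for i in range(score.shape[0]):
            for j in range(score.shape[1]):
                r = real[i*win:(i+1)*win, j*win:(j+1)*win].ravel()
                s = synthetic[i*win:(i+1)*win, j*win:(j+1)*win].ravel()
                denom = np.linalg.norm(r) * np.linalg.norm(s)
                score[i, j] = np.dot(r, s) / denom if denom > 0 else 0.0
        return score

    # score near 1: the synthetic predicts the feature, interpret with confidence
    # score near 0: the feature is unsupported by the model, treat as higher risk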

The model can also indicate how best to process the data: because it can be hard to distinguish subtle features from artefacts, aggressive processing can leave you interpreting noise.

Knowing where the noise is likely to occur can help to indicate where the interesting features really are and where you need to pay more attention to filtering the noise.

Improving the reservoir model

Most of the time the underlying geology is known well enough from previous surveys to build a good earth model from which to generate the synthetic seismic. What is not known is the exact location of the oil and the faults, which is precisely the critical information you need.

If the model were perfect, the seismic generated from it would be identical to the real seismic, but since it probably contains some errors, comparing the two should give you a better idea of where those errors are.

By feeding back the information gained from comparing the real and synthetic data, you can make the reservoir model match the actual geology more closely, improving on what you already know.

The model can be fine-tuned and a new seismic prediction made to compare with the real seismic. The process can then be repeated iteratively until the two match more closely, at which point the model will be a faithful representation of what the seismic data is telling you about the geology.
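A minimal sketch of that loop, assuming a forward_model() function such as the one sketched earlier and a hypothetical update_model() step, might look like the following; the misfit measure, tolerance and iteration cap are illustrative choices rather than a prescribed workflow.

    # Illustrative sketch of iterative model refinement.
    import numpy as np

    def refine_model(model, real_seismic, forward_model, update_model,
                     tol=1e-3, max_iter=20):
        """Adjust the earth model until synthetic and real seismic agree."""
        misfit = np.inf
        for iteration in range(max_iter):
            synthetic = forward_model(model)            # predict the survey
            residual = real_seismic - synthetic          # where do they differ?
            misfit = np.linalg.norm(residual) / np.linalg.norm(real_seismic)
            if misfit < tol:                             # good enough match
                break
            model = update_model(model, residual)        # nudge the model
        return model, misfit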

SEAM

The Society of Exploration Geophysicists Advanced Modeling Corporation (SEAM) has been working on a project for the past three years to "advance the science and technology of applied geophysics through a cooperative industry effort focused on subsurface model construction and generation of synthetic data sets for geophysical problems of importance to the resource extraction industry."

Phase I concentrated on a 3D geological and geophysical model for a deepwater region containing reservoirs at a range of depths, including around and below a massive salt body. It conducted complementary geophysical simulations including CSEM (controlled source electromagnetic), gravity, and magnetic modelling as well as seismic.

Phase I brought together a consortium of twenty oil companies and four service companies who contributed time and funds to the effort. According to a recent survey conducted by SEAM, the participants overwhelmingly agreed that the investment was worthwhile and 88 per cent agreed that the seismic earth model was of significant benefit to the future of sub-salt imaging research.

The original idea was to share the cost of the data processing which was prohibitively expensive for a single company, but the cost has come down to such an extent that it is no longer a barrier. Also more efficient algorithms mean that less processing power is needed.

The Society of Exploration Geophysicists is a not-for-profit organization that promotes the science of applied geophysics and the education of geophysicists. SEG, founded in 1930, fosters the expert and ethical practice of geophysics in the exploration and development of natural resources, in characterising the near surface, and in mitigating earth hazards. The Society, which has more than 33 000 members in 138 countries, fulfills its mission through its publications, conferences, forums, web sites, and educational opportunities.

Faster wave equation

Tierra Geophysical, a small start-up company with three employees that was given the contract to calculate the seismic data set, spent two years optimising its wave equation code to run faster. It estimates that with its algorithms the model can be run around twenty times faster, and the latest processors can raise the improvement to fifty times.

Tierra Geophysical was sold in February to Landmark Graphics, which was awarded the contract for storage and distribution of the Phase I data. Approximately 25TB of compressed data will be stored for up to 10 years.

The SEAM Board of Directors has selected Land Seismic Challenges for Phase II which will start in early 2011.
