Introduction

Synthetic seismograms are critical to understanding seismic data. We rely on them for an array of tasks, from identifying events on seismic sections to estimating the full waveform for inversion. Yet we often create and use synthetic seismograms without much thought for the quality of the input log data or the plausibility of the resulting synthetic. Contrary to popular belief, and despite how we treat them as interpreters, log measurements are not ground truth, and not all synthetic seismogram algorithms are created equal. Have you ever created a synthetic seismogram that has only a tenuous tie to the seismic? Have you wondered why the synthetic changes significantly when the density curve is included? Have you questioned the quality of the sonic and density logs used to generate your synthetic? Have you seen a synthetic change when a different software package is used? If you can answer yes to any of these questions, or you would agree that synthetic seismograms are not always simple in your world, then this is the article for you.

We present the nuts and bolts of synthetic seismograms from the perspective of the seismic interpreter; we look at what can go wrong, what does go wrong, and what you can do to prevent falling into some of the pitfalls that arise.

According to the Schlumberger Oilfield Glossary, a synthetic seismogram is defined as:

“The result of one of many forms of forward modeling to predict the seismic response of the Earth. A more narrow definition used by seismic interpreters is that a synthetic seismogram, commonly called a synthetic, is a direct one-dimensional model of acoustic energy traveling through the layers of the Earth. The synthetic seismogram is generated by convolving the reflectivity derived from digitized acoustic and density logs with the wavelet derived from seismic data. By comparing marker beds or other correlation points picked on well logs with major reflections on the seismic section, interpretation of the data can be improved.

The quality of the match between a synthetic seismogram and actual seismic data depends on well log quality, seismic data processing quality, and the ability to extract a representative wavelet from seismic data, among other factors. …”

The basic process of creating a synthetic is simple (Figure 1) and described in many geophysical textbooks; however, it is in the details that problems occur. The sonic log is converted to velocity and multiplied by the density log to obtain an impedance log. Reflectivity is then calculated from the impedance contrasts and convolved with a wavelet to obtain the synthetic seismogram. In the Schlumberger definition presented above, we have highlighted some of the inputs and steps that can have problems and/or complications. Keep in mind that doing everything “correctly” can still lead to issues at any or all of these steps. Below, we discuss a few pitfalls we have witnessed.

Fig. 01
Figure 1. Cartoon describing the process of generating and correlating a synthetic.
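For readers who like to see the arithmetic, the flow in Figure 1 can be written in a few lines of Python. This is a minimal sketch only; the wavelet choice, sampling, and the assumption that the logs are already on a regular two-way-time grid are ours, not any vendor's implementation.

```python
import numpy as np

def ricker(f_peak, dt, length=0.128):
    """Zero-phase Ricker wavelet with peak frequency f_peak (Hz)."""
    t = np.arange(-length / 2, length / 2 + dt / 2, dt)
    return (1.0 - 2.0 * (np.pi * f_peak * t) ** 2) * np.exp(-((np.pi * f_peak * t) ** 2))

def simple_synthetic(velocity, density, dt=0.002, f_peak=30.0):
    """Impedance -> reflectivity -> convolution, for logs assumed to be
    already resampled to a regular two-way-time grid (the depth-to-time
    conversion step is omitted in this sketch)."""
    impedance = np.asarray(velocity, dtype=float) * np.asarray(density, dtype=float)
    # Normal-incidence reflection coefficient at each interface
    rc = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])
    # 'same' keeps the synthetic aligned with the reflectivity series
    return np.convolve(rc, ricker(f_peak, dt), mode="same")
```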

Density and Sonic Logs

As geophysicists, we often assume that the well information, particularly log data, provided by our geologist or petrophysicist is the “ground truth”. Nonetheless, we often find high amplitude events in our synthetic seismograms that do not correlate to geology. A common “fix” is to throw out the density log and build synthetics without it. Depending on the software we use, that means the density is approximated with a constant, or with some pre-defined regression to the sonic log. In either case, we have declared the entire density log unfit without really understanding the full picture. An old proverb declares that the devil is in the details; in this case, as so often in geophysics, the details are a QC issue.
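As an illustration of what such a substitution implies, here is one commonly quoted regression of this type, a Gardner-style relation; the coefficients shown are the textbook defaults, not necessarily what any particular package uses, and a local calibration would normally be preferred.

```python
import numpy as np

def gardner_density(vp_ft_per_s, a=0.23, b=0.25):
    """Gardner-type density estimate (g/cc) from P-wave velocity (ft/s).
    Default coefficients are the commonly quoted generic values; they stand
    in for whatever regression (or constant) the software applies when the
    measured density log is discarded."""
    return a * np.asarray(vp_ft_per_s, dtype=float) ** b
```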

Density logs are greatly affected by borehole conditions such as washouts or invasion. Reliable density log measurements of the rock require good pad contact with the borehole wall. Without good pad contact, density log measurements will be spuriously low. A geologist or petrophysicist may indicate that the density log is “fine” over the zone of interest. “Fine” is a subjective term. The subjectivity is related to the purpose of the log. If one is simply looking for porosity, then the presence of a washout may indicate porosity, and the log is then suitably “fit for purpose”. If one needs a reasonably accurate measurement of the rock’s bulk density, in order to build a representative synthetic seismogram, then a washout zone is definitely not “fit for purpose”. Geophysicists must be clear with their professional colleagues: “fit for purpose” requirements vary.

Simply discarding the density log may not be the best procedure, as it may lead to erroneous correlations or phase assumptions. Discarding the entire log implies that none of it contains meaningful information, which is clearly not the case. A better option may be to regenerate the density log over the affected intervals using all available supplemental information, although we suggest that this is best left to a petrophysicist. Figure 2 shows an example of synthetics created using velocity * “raw density”, velocity * “edited density”, and velocity only (constant density), where velocity is calculated from the sonic log. Differences exist between all three, leading to potentially different interpretations of which events correspond to the tops of the sands.

Fig. 02
Figure 2. Different synthetics created with different density curves. Track 2 shows the raw density log (grey) compared to the edited density log (black); the change in density has been highlighted. Note how the sands change from peaks to troughs on the synthetic trace, which can yield a very different interpretation. Using a constant density (the same as no density) also produces a trace that differs from what should be expected. Courtesy Jay Gunderson.

Despite being less sensitive to borehole conditions, sonic logs can also have issues. Sonic measurements are made over a much larger interval, with redundancy built into most tools. A sonic log is also the result of a process similar to first-break picking on conventional seismic data: the direct arrival is picked on several receivers spaced over several metres, and the time it takes energy to travel from the source to each receiver is computed. In the end, sonic logs appear better behaved because they effectively average over a much larger interval than the density measurement. The easiest check we can make is to look at the caliper log; it indicates where readings are potentially poor and where the synthetic amplitudes are therefore suspect. This information can also be used, together with additional logs and/or wells, to reconstruct the poor log values over that interval, providing a means of creating a more reliable synthetic trace. However, such replacement techniques must be used with care. If a section of log is “bad” in a well, this says something about the rock properties in that part of the well. If it is replaced by a “good” section of log from a different hole, that section may only be good because the rock properties were different in the other well, and therefore not an appropriate replacement.
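A simple caliper-based screen along the lines just described might look like the following sketch; the one-inch enlargement tolerance is purely an illustrative choice, not a standard.

```python
import numpy as np

def flag_enlarged_hole(caliper_in, bit_size_in, tolerance_in=1.0):
    """Return a boolean mask of samples where the hole is enlarged beyond
    the bit size by more than the chosen tolerance. Density readings (and
    the synthetic amplitudes derived from them) are suspect over those
    intervals and deserve closer inspection."""
    return np.asarray(caliper_in, dtype=float) > (bit_size_in + tolerance_in)
```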

One common problem with older logs is that they have often been digitized from paper, and from time to time human error enters the equation. Figure 3a shows a dramatic acoustic velocity increase at 1350 m. Referring to the raster copies of the logs (Figure 3b), we see that there is a change in the grid at the velocity break, suggesting a digitizing problem. It is possible that the digitizer thought this was the location of a scale change, although we note that scale changes are more often missed than inserted by mistake. To check, we can often electronically peel back the lower log to see the overlap zone, as shown in Figure 3c. While this is easy to check and to fix once recognized, the possible error must first be recognized visually, making it potentially harder to diagnose.

Fig. 03
Figure 3. Example of digitizing problem common with older wells. (a) shows the downloaded curve, (b) shows the paper logs, where there is a potential change in the lateral scale. (c) shows the log displayed with the correct scales. Note that the apparent break on the downloaded curve (a) is not present on the edited raster image (c).

Sonic logs are effectively the result of geophysical processing applied to an acoustic measurement of the subsurface (similar to first-break picking), and that processing can suffer the same problems as surface seismic data: noise, tool problems, and so on. An example is presented in Figure 4a, where erroneous values in the sonic log have made it all but useless for synthetic generation. When we look at the repeat log (Figure 4b), we see that the same type of erroneous values occur, but not always at the same depths. This does not necessarily mean that the underlying data are poor, but rather that the processing has not been fully QC’d.

Fig. 04
Figure 4. A QC display from a dipole sonic run. Tracks (a) and (b) show the main-pass and repeat P-wave sonic logs with cycle skipping present. The two right-hand tracks show picking displays that reveal why the initial processing cycle-skipped and where the P-wave slowness should have been picked.

Looking at the full waveform presented in Figure 4c, we can see that the semblance autopicker has jumped from the compressional waveform (DTCO3) to the shear waveform (DTSH3). The human eye quickly observes the error in the computer semblance pick – so, like seismic, we should be QC’ing the processing of well log data. It is also a good idea to request and archive a copy of the raw waveforms from your logging company so that the processing can be revisited as new technology becomes available.
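A crude first screen for such problems is to flag abrupt slowness jumps between adjacent samples. The threshold below is an arbitrary illustration; flagged intervals still need to be checked against the waveforms (as in Figure 4c) rather than simply edited away.

```python
import numpy as np

def flag_slowness_jumps(dt_us_per_ft, max_jump=40.0):
    """Flag samples where sonic slowness jumps by more than max_jump (us/ft)
    from the previous sample -- a rough indicator of possible cycle skips or
    picking problems, not a diagnosis or a fix."""
    slowness = np.asarray(dt_us_per_ft, dtype=float)
    jumps = np.abs(np.diff(slowness, prepend=slowness[0]))
    return jumps > max_jump
```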

Another common pitfall, often encountered by junior geophysicists, is the “casing point reflection”. Logging runs typically extend from TD up into the surface casing, but the logs should be truncated at the casing point before being interpreted. As we record up the borehole from relatively slow shallow formation into solid steel casing, there is a large impedance contrast that manifests itself as a high-amplitude event on the synthetic (Figure 5); it is associated solely with the surface casing (SC), not with geology. This is easily spotted if all the logs are inspected in conjunction with the density and sonic, and we remain aware of our location in the borehole. The resistivity tool shows a large increase on entering casing and provides a useful QC in this case. There are instances where this phantom event has been interpreted as a shallow marker and used to correlate across townships of data, despite not being a real geologic marker. Although this may seem an easy pitfall to avoid, it is far more hazardous when we consider all the intermediate casing points in deep boreholes.

Fig. 05
Figure 5. Synthetic made including the apparent “casing-point reflection”. Best practice would be to remove the upper (i.e. non-geologic) portion of the affected logs before generating a synthetic, to avoid potential pitfalls.
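A small utility along these lines makes the truncation explicit rather than relying on memory; the curve handling and names here are hypothetical and only sketch the idea.

```python
import numpy as np

def truncate_above_casing(depth_m, curves, casing_shoe_m):
    """Drop all samples shallower than the casing shoe before building a
    synthetic, so the steel/formation impedance contrast never enters the
    reflectivity. 'curves' is a dict mapping log name -> samples on depth_m."""
    keep = np.asarray(depth_m, dtype=float) >= casing_shoe_m
    return {name: np.asarray(values)[keep] for name, values in curves.items()}
```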

We believe that the examples presented above highlight the necessity for interpreters to pay close attention to the well logs, to the point of having a general understanding of the data acquisition and processing and an idea of the borehole conditions. If in doubt, logs should be QC’d by a petrophysicist.

The order of operations

Thankfully, good logging runs without washouts or processing problems do occur, but that is not all we need to be concerned about. When correlating logs to seismic, we know that we are calculating a synthetic from flawed log data and correlating it to flawed seismic data, but how often do we consider the effect of our choice of software? After all, most of the software in use by major E&P companies has certainly undergone rigorous testing, right? True, but that does not mean it is perfect. The software may be running on a flawless computer, but programs do not always work the way we expect or assume.

Figure 1 shows the processing flow used by many software companies to create synthetics, however, are they all the same and, if not, does it matter? A comparison of the order of operations used by several of the major software vendors (Table 1) shows that not every package contains the same order of operations – and it matters.

Table 1. Comparison of flows for creating synthetic seismograms

Flow A:
  1. Load logs (sonic & density)
  2. Calculate impedance
  3. Calculate reflectivity
  4. Convert to time
  5. Convolve with wavelet

Flow B:
  1. Load logs (sonic & density)
  2. Calculate impedance
  3. Convert to time
  4. Calculate reflectivity
  5. Convolve with wavelet

The flows differ only in the order of steps 3 and 4, but does it matter? Three synthetic seismograms created with identical well logs and wavelets (Figure 6) should in theory be identical; however, the first two traces are reasonably similar while the third, the one that uses flow B, is quite different. The only thing that changed is the software used to create the trace.

Fig. 06
Figure 6. Large differences in synthetics using identical input data and wavelet parameters. Note the apparent amplitude/phase differences denoted by the red arrows and the large difference apparent in the cross-correlation of these three traces. Traces 1 and 2 were created using flow A from Table 1, trace 3 using flow B. Even within flow A, differences exist between the first two traces in how they handle the end of the logs, as indicated by the apparent reflectivity present below 810 ms.
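To see why the order matters, here is a minimal sketch with an invented blocky impedance log and time-depth curve; the linear resampling is a deliberately simple stand-in for each package's own depth-to-time conversion, not any vendor's code.

```python
import numpy as np

def reflectivity(impedance):
    """Normal-incidence reflection coefficients from an impedance series."""
    z = np.asarray(impedance, dtype=float)
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def resample_to_time(curve, twt, dt=0.002):
    """Linearly resample a depth-indexed curve onto a regular two-way-time grid."""
    t_reg = np.arange(twt[0], twt[-1] + dt / 2, dt)
    return np.interp(t_reg, twt, curve)

# Illustrative blocky impedance log and an arbitrary time-depth relationship
impedance = np.concatenate([np.full(60, 6.0e6), np.full(60, 8.0e6), np.full(60, 7.0e6)])
twt = np.linspace(0.400, 0.700, impedance.size)

# Flow A: calculate reflectivity in depth, then convert it to time
# (the interface series is padded by one sample to match the log length)
rc_flow_a = resample_to_time(np.append(reflectivity(impedance), 0.0), twt)
# Flow B: convert impedance to time, then calculate reflectivity
rc_flow_b = reflectivity(resample_to_time(impedance, twt))

# The two series are not the same: resampling does not commute with
# differencing, so the reconstructed spikes differ in position and amplitude
# between the flows -- the same kind of discrepancy seen in Figure 6.
```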

Does the wavelet make a huge difference?

Let’s assume that the log data are perfect and the software is creating synthetics as we expect, and consider the wavelet. Often the first thing an interpreter does with seismic data upon receiving it from the processor is to correlate it with a synthetic, and, in those first few minutes, several judgments are made about the quality of the data and the seismic processing. The wavelet is a critical factor when generating the synthetic, but does it matter which one we use? The answer depends on what we are doing with the synthetic. If we are simply estimating the approximate phase of the data (within 45°) and identifying the zone of interest, then most wavelets will generally be fine. If, however, we are working on a more detailed analysis, such as Q-estimation or impedance inversion, then the wavelet can critically influence the result.

Fig. 07
Figure 7. Differences in apparent reflectivity from wavelet differences. Note the differences in apparent resolution for these wavelets, for example within the box above. Often the subtle character changes we are looking for from our targets (e.g. Wabamun porosity) can be masked or accentuated by the wavelet we use.

A comparison of a strong reflectivity contrast convolved with four different wavelets is shown in Figure 7. While the main event does not vary much in this example, many of our hydrocarbon targets are contained in the side lobe of a major event (e.g. Wabamun porosity, Slave Point, Lower Mannville sands). The frequency content of the wavelet should closely match that of the wavelet contained within your seismic data, the estimation of which is a paper in itself.
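The effect is easy to reproduce: convolving one reflectivity pair (a strong marker with a weaker event 20 ms below it, an assumed spacing chosen only for illustration) with Ricker wavelets of different peak frequencies shows the weaker event disappearing into, or emerging from, the side lobe of the stronger one.

```python
import numpy as np

def ricker(f_peak, dt=0.002, length=0.128):
    """Zero-phase Ricker wavelet with peak frequency f_peak (Hz)."""
    t = np.arange(-length / 2, length / 2 + dt / 2, dt)
    return (1.0 - 2.0 * (np.pi * f_peak * t) ** 2) * np.exp(-((np.pi * f_peak * t) ** 2))

# Strong contrast followed by a weaker one 20 ms (10 samples at 2 ms) later
rc = np.zeros(250)
rc[100], rc[110] = 0.20, -0.04

# Same reflectivity, four peak frequencies: at 15 Hz the weak event is buried
# in the side lobe of the strong one; by 60 Hz it separates into its own event.
traces = {f: np.convolve(rc, ricker(f), mode="same") for f in (15, 25, 40, 60)}
```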

Making a synthetic tie

We’re happy with our input logs, the synthetics are being generated as expected and the wavelet is appropriate for our seismic tie – so let’s tie the synthetic to the seismic. Most likely all interpreters have had some ‘fun’ tying synthetics to seismic data, so we are just going to illustrate one of the more bizarre software glitches.

Fig. 08
Figure 8. Software does not always behave. In this example, a bulk shift was applied to a synthetic; however, only the lower half of the log was bulk shifted while the upper portion was stretched. The pitfall here is the apparent reflectivity created up-hole, which could be erroneously interpreted.

In our last example, applying a bulk shift to a synthetic within a commercial software program went awry (Figure 8). A standard script was run to apply a specified time shift. Instead of applying the bulk shift, however, the program applied a stretch to the first portion of the log and a bulk shift to the rest, producing a well log that appears significantly longer in time than it really is. The synthetic looks reasonable in the stretched upper portion, and we can even convince ourselves that the tie is reasonable because the seismic is low fold in the shallow section, so not all the events are correctly represented. But these events are not real! Keep an eye on your time-depth curve and do not take your software for granted.
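One cheap safeguard is to difference the time-depth curve before and after any shift: a true bulk shift changes every pair by the same constant, so any depth-dependence in the difference means the software stretched or squeezed the curve. A minimal check, assuming the curves can be exported as arrays:

```python
import numpy as np

def is_bulk_shift(twt_before_s, twt_after_s, tolerance_s=1e-4):
    """True if the two time-depth curves differ by a single constant (within
    tolerance); otherwise the 'bulk shift' also stretched or squeezed the curve."""
    diff = np.asarray(twt_after_s, dtype=float) - np.asarray(twt_before_s, dtype=float)
    return np.ptp(diff) < tolerance_s
```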

Final comments and key points

We have presented a series of examples in which the creation and use of synthetic seismograms have created issues for interpreters. Synthetic seismograms are not as simple as they seem and many things can go wrong, so we advise:

  • Use all available logs for QC
  • If in doubt, check the paper logs
  • Just like seismic, QC every step of well log processing
  • If you need help, call your petrophysicist
  • Know the borehole environment
  • Keep an eye on your time-depth curve
  • Know your wavelet
  • Not all software is created equal

Not discussed here, but just as important, are other more complex factors such as sample rates during depth-to-time conversion, AVO effects, attenuation, and imaging effects. All of these factors, and others, affect either the seismic data we are correlating or our simple 1D synthetics, and can also lead to erroneous conclusions.


Acknowledgements

We are grateful to Apache Canada Corp. and Nexen Inc. for permission to present this material at the CSEG lunchbox talk in April, 2008. Additionally, we thank the audience at our lunchbox talk for participating in a lively discussion, providing personal experiences and asking interesting questions. Additionally, we would like to thank Dave Monk, Marco Perez and Ron Larson for their critiques of this article. We would be remiss if we did not acknowledge the support of our spouses, Ian Crawford & Elizabeth Anderson, in putting up with us while we pulled out our hair in trying to understand the issues around what most of us take for granted.

The Schlumberger Oilfield Glossary can be found at: http://glossary.oilfield.slb.com/


About the Author(s)

Paul Anderson was born and raised in Alberta and received a Bachelor's degree in geophysics from the University of Calgary. He is currently working as a Senior Geophysicist in the Exploration & Production Technology group at Apache Canada Ltd. Paul's current responsibilities include seismic processing QC, multicomponent processing & analysis, and AVO analysis & prestack inversions, in Canada and abroad. Paul is also working towards his master's degree with the University of Calgary CREWES project.

Before joining Apache in January 2006, Paul was with Veritas GeoServices in Calgary (1998-1999, 2000-2005) and Houston (1999-2000), where his responsibilities included AVO & LMR analysis, post-stack inversions, modeling, multicomponent interpretation and fracture detection work. He has a wide range of experience in a number of geologic basins, including Western Canada, onshore Texas, Gulf of Mexico, Beaufort Sea, Mackenzie Delta, Alaska and offshore Australia. Paul is currently a member of APEGGA, CSEG, CSPG & SEG.

Rachel Newrick was born and raised in New Zealand, where she obtained her BSc in geology and BSc Honours in geophysics at Victoria University of Wellington. Since then, she has been a summer student for BHPP in Australia; spent 2 1/2 years traveling the world; worked for Veritas in Canada; undertook four-month work terms with Occidental Petroleum and Exxon Mobil in the United States; co-authored a textbook on exploration geophysics with Dr. Larry Lines and obtained a PhD in exploration seismology from the University of Calgary under the supervision of Dr. Don Lawton and Dr. Deborah Spratt. After graduating from the U of C in 2004, Rachel started work with Nexen Inc. in the Canadian Exploration New Growth Team as an exploration geophysicist, worked on prospects across SK, AB and BC, worked on and traveled to Colombia as a geophysicist, visited Yemen and NE BC while in the Formation Evaluation Team where she worked for 18 months, and most recently has been posted to the UK as an exploration geologist.

She founded a motorcycle group for women in 1998 (Wild West Vixens) and a geoscience networking group for women in 2004 (GoGG – Girls of Geology and Geophysics). Both are going strong. In her spare time she is either traveling, skiing, or road riding (albeit now in the UK, not Canada!).

