Different terms are used to refer to different types of processing aimed at achieving the ultimate geological objective. It is in the interest of all geophysicists to be clear on which processing term is used in which context. The following question was posed to clarify some of these terms.

Apart from taking a stab at the question myself, the ‘experts’ answering it are Fred Hilterman (University of Houston), Dan Hampson (Hampson-Russell, Calgary), Mrinal (MK) Sengupta (Consultant, Houston) and Mike Graul (Texseis, Houston).

The order of the responses given below is the order in which they were received.

Question

Seismic data processing is often characterized as ‘amplitude preserved’, ‘relative amplitude’, ‘true amplitude’ or ‘controlled amplitude’ processing. Considering that the ultimate goal in processing is to yield amplitudes that are a measure of reflectivity in the subsurface, how are these terms different and in what context is each used?

Answer 1

Relative amplitude processing flows usually employ simplistic gain corrections, statistical scaling, trace-to-trace operations and spatial filters, yielding a stack with optimized signal-to-noise ratio and bandwidth. While such sections are useful for post-stack interpretation – to portray the structural setting of amplitude anomalies such as bright spots and flat spots – relative amplitude processing typically does not attempt to preserve pre-stack amplitudes. Consequently, large time-variant and offset-variant scaling errors may result and lead to misleading interpretations.

In an early study of the effects of relative amplitude processing techniques on AVO (controlled amplitude processing), Yu (1985) evaluated whole-trace equalization, surface-consistent amplitude balancing and section-dependent equalization.
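To make the trace-by-trace character of such scaling concrete, here is a minimal sketch (my addition, not part of the original study) of whole-trace RMS equalization, assuming gathers are held as NumPy arrays with one trace per row. It illustrates why this kind of statistical balancing, however useful for a clean-looking stack, removes the offset-dependent amplitude information that AVO analysis needs.

```python
import numpy as np

def whole_trace_equalization(gather, target_rms=1.0, eps=1e-12):
    """Scale each trace so that its RMS amplitude equals target_rms.

    gather: 2-D array of shape (n_traces, n_samples).
    This is a purely statistical, trace-by-trace gain: it evens out the
    display but destroys any genuine amplitude-versus-offset behaviour.
    """
    rms = np.sqrt(np.mean(gather ** 2, axis=1, keepdims=True))
    return gather * (target_rms / (rms + eps))

# A synthetic gather whose far traces are genuinely weaker ...
rng = np.random.default_rng(0)
gather = rng.standard_normal((24, 1000)) * np.linspace(1.0, 0.3, 24)[:, None]
# ... comes out of equalization with every trace at the same RMS level,
# so the real offset decay is no longer recoverable.
balanced = whole_trace_equalization(gather)
```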

Interpreters associate changes in seismic amplitudes with changes in geology, in terms of rock properties, and this assumption must hold if the inferences drawn are to be accurate. To ensure this, the factors that affect seismic amplitudes first need to be understood and accounted for, and the data then need to be processed in such a way that changes in amplitude can be reliably interpreted as changes in geology. The effects of the different factors influencing seismic amplitudes have long been studied (Sheriff, 1971; Hilterman, 1975; Doherty and Anstey, 1975), and corrections to amplitudes have been suggested. After the introduction of AVO, the effect of these factors on amplitudes was considered more seriously. There are basically three types of phenomena that distort amplitudes in the pre-stack domain: one is related to wave-propagation effects in an inhomogeneous visco-elastic medium, and the other two are related to noise effects (Dey Sarkar et al., 1986). To these three categories a fourth may be added, namely the processing-induced artifacts that can contaminate the data.

Processing of seismic data for AVO analysis should estimate and remove the energy losses that seismic waves suffer as they travel through the subsurface, remove the effects of the near surface, and completely remove any processing-related artifacts. Processing attempts at achieving this have been characterized as ‘amplitude preserved processing’ or ‘true amplitude processing’. Such processing flows include surface-consistent scaling or deterministic scaling corrections for geometric divergence, source- and receiver-array effects, angle of emergence and absorption. Needless to say, such deterministic corrections require geologically meaningful velocity grids for the angle computations. Trace-by-trace operations and spatial filters, such as spectral whitening, median filtering and f-k filtering, are avoided.
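As an illustration of what ‘deterministic’ means here, the sketch below (my addition) applies a geometric-divergence gain of the commonly used t·v_rms²(t) form, driven entirely by time and a smooth velocity function rather than by the data amplitudes themselves. The sampling interval and velocity trend are assumptions for the example.

```python
import numpy as np

def divergence_gain(t, v_rms):
    """Deterministic geometric-spreading gain proportional to t * v_rms(t)**2,
    normalized to unity at the first non-zero time sample.
    No statistics of the data amplitudes are used anywhere."""
    raw = t * v_rms ** 2
    raw[0] = raw[1]                 # guard the zero-time sample
    return raw / raw[1]

# Illustrative-only inputs: 4 ms sampling and a smooth RMS-velocity trend.
dt = 0.004
t = np.arange(1500) * dt
v_rms = np.linspace(1800.0, 3500.0, t.size)
trace = np.random.default_rng(1).standard_normal(t.size)   # stand-in trace
corrected = trace * divergence_gain(t, v_rms)
```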

With noise and all the other acquisition and wave-propagation effects removed, the resulting amplitudes can be taken as ‘true’ amplitudes, suitable for more accurate geologic analysis.

Satinder Chopra,
Arcis Corporation, Calgary

Answer 2

With the advent of the digital era in 1962-1963, I believe GSI introduced “true-amplitude recovery” (TAR). Shortly after that, Western introduced RAP, “relative-amplitude processing”. Now, with AVO and AVOA, “true” and “relative” are still being addressed. My first reaction to true or relative amplitude is to ask, “What is the mathematical model or process that you want the seismic amplitude to represent?” A simplistic model for the processed seismic amplitudes, RC(θ), is

RC(θ) = K · RC_ZoepTop(θ) · (4πβ/λ) · cos(θ_TRANS)

= (constant) × (Zoeppritz response off the top of a thin bed) × (thin-bed effect) × (offset-thinning effect)

I have no great passion for this equation; it is a model that I hope processing will retrieve.

There are numerous ways of cataloging earth effects on seismic amplitude. I look at them as (A) intrinsic attenuation, (B) wavefront distortion from spherical shape, and (C) internal multiples (coda). Category (B) is large because it includes unwanted amplitude variations caused by geometrical spreading for transmission and reflection, which need to be properly addressed but can only be addressed if one knows the exact earth properties. In addition, category (B) includes effects from lateral velocity variations, scatterers above the reflecting interfaces being investigated, and so on.

I find no fault with the description of “amplitude processing” that you presented. I believe this will generate significant letters to the editor, no matter how amplitude processing is defined.

Additional comments …

  1. About 15 years ago, I was asked to do AVO processing of sign-bit recorded data. The acquisition was 2D, with a 1000-channel recording system, dynamite as the source and 160-fold coverage. The data were recorded over what is now a new Houston suburb, so no additional acquisition is possible. To summarize, the data were 6-second records whose amplitudes were zero or one. (This amounts to the most extreme AGC that could be applied.) However, each CMP gather was stacked 16-to-1, so the offset fold was reduced to 10. This was migrated, and the resulting pre-stack common-offset processing surprised all of us: a beautiful AVO anomaly was found that was consistent with the structural interpretation.
  2. With the above in mind, now think about conventional 60-fold data in Class 3 AVO areas, where the stack is the best fluid indicator (Hendrickson, 1999, “Stacked”, Geophysical Prospecting). If the data are noisy (land acquisition), would one hesitate to apply a 500-ms AGC and still believe the stack is a “true” amplitude representation of the stacked version of the above equation, or is this relative?
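For reference, here is a minimal sketch (my addition, not Fred’s) of the kind of 500-ms sliding-window AGC referred to in point 2: each sample is divided by the mean absolute amplitude in a window centred on it, so the gain is driven entirely by the data, which is precisely why a stack built from AGC’d traces is hard to call a “true” amplitude product.

```python
import numpy as np

def agc(trace, dt, window_s=0.5, eps=1e-12):
    """Sliding-window automatic gain control (window_s = 0.5 s -> 500 ms).

    Each sample is divided by the mean absolute amplitude in a window
    centred on it, so all long-wavelength amplitude information is removed.
    """
    half = max(1, int(round(window_s / (2.0 * dt))))
    kernel = np.ones(2 * half + 1) / (2 * half + 1)
    envelope = np.convolve(np.pad(np.abs(trace), half, mode="edge"),
                           kernel, mode="valid")
    return trace / (envelope + eps)

# usage (illustrative): gained = agc(trace, dt=0.004)
```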

Fred Hilterman,
University of Houston, Houston

Answer 3

The basic answer is that these all refer to more or less the same thing – providing seismic data whose amplitudes measure the underlying reflectivity of the earth and are not affected by acquisition or processing artifacts. One easy distinction is that between “relative amplitude” and “true amplitude”. Theoretically, “true amplitude” implies that the sample values of seismic events are absolute measures of reflectivity, i.e., ranging from -1 to +1. The term “relative amplitude” implies that the seismic amplitudes are proportional to the reflectivity – in other words, there may be a constant (unknown) multiplier, but all the events are relatively correct. In practice, however, most processors really mean “relative” amplitude, no matter which term they use.

The most important process requiring “true relative amplitude” data is AVO, where virtually all algorithms assume that the seismic data are an accurate representation of the amplitude behavior predicted by the Zoeppritz or Aki-Richards equations. In particular, this means that amplitudes must not contain residual effects of geometrical spreading, transmission loss, frequency-dependent attenuation, near-surface effects, geophone coupling, and so on. This is a very difficult goal to achieve, and the success of much AVO analysis depends on very careful amplitude processing.
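To make explicit what “amplitude behavior as predicted by Zoeppritz or Aki-Richards” means in practice, here is a minimal sketch (my addition) of the two-term Shuey form of the Aki-Richards approximation, R(θ) ≈ A + B·sin²θ; the interface properties in the example are purely illustrative. Any residual gain error left in the gathers maps directly into errors in the fitted intercept A and gradient B.

```python
import numpy as np

def shuey_two_term(vp1, vs1, rho1, vp2, vs2, rho2, angles_deg):
    """Two-term Shuey/Aki-Richards reflectivity: R(theta) = A + B*sin^2(theta)."""
    theta = np.radians(np.asarray(angles_deg, dtype=float))
    vp, vs, rho = 0.5 * (vp1 + vp2), 0.5 * (vs1 + vs2), 0.5 * (rho1 + rho2)
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    A = 0.5 * (dvp / vp + drho / rho)                                        # intercept
    B = 0.5 * dvp / vp - 2.0 * (vs / vp) ** 2 * (drho / rho + 2.0 * dvs / vs)  # gradient
    return A + B * np.sin(theta) ** 2

# Illustrative shale-over-gas-sand interface (numbers are assumptions).
print(shuey_two_term(2800, 1400, 2.45, 2900, 1800, 2.15, [0, 10, 20, 30]))
```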

Dan Hampson,
Hampson-Russell, Calgary

Answer 4

All these variants of the amplitude terminology are used, in my impression, to imply that amplitudes in the data are a reliable measure of reflectivity in the subsurface, as stated above. However, the term ‘true amplitude’ conveys this sense of reliability to its highest degree by trying to present the seismic amplitudes in units of absolute particle displacement, velocity, or pressure fluctuation (depending on the sensors used). This is a very hard task, next to impossible. Otherwise, in all stratigraphic/interpretive processing of seismic data, amplitudes are meant to be proportional to the reflectivity in the subsurface. So why do people use different terminologies to convey essentially the same meaning?

My answer to this question is that it comes down to individual preference for a particular amplitude terminology. For example, some people think that using the term ‘true amplitudes’ will better impress their customers with the reliability of the amplitudes in their processed seismic data. But in reality, true amplitudes may never exist for seismic data, because the absolute values of the various amplitude corrections – such as the corrections for the radiation patterns of real seismic sources, and/or full compensation for amplitude absorption along the propagation path – are very difficult to estimate.

On the other hand, ‘relative amplitudes’ are really our goal in seismic data processing, although this goal is not fully achieved most of the time because of the uncertainties and errors in the amplitude corrections that we apply.

For example, expected errors in the Q estimates can reasonably cause a 1 dB error in the far-to-near offset amplitude ratio at a 6000-ft offset for reflections at around 3-4 seconds. This factor alone may result in about a 15% error in the observed AVO anomaly, thus compromising our goal of attaining fidelity in ‘relative amplitudes’.
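As a rough check of that arithmetic (my numbers, assuming decibels measured on amplitude, i.e. 20·log10): a 1 dB bias in the far-to-near amplitude ratio corresponds to a fractional amplitude error of 10^(1/20) − 1 ≈ 12%, of the same order as the roughly 15% effect on the AVO anomaly quoted above.

```python
# Convert a 1 dB bias in the far-to-near amplitude ratio to a fractional error.
db_error = 1.0                                   # assumed bias in the amplitude ratio
ratio_error = 10 ** (db_error / 20.0) - 1.0      # amplitude dB: 20*log10(ratio)
print(f"{ratio_error:.1%}")                      # ~12.2%
```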

Since the goal of attaining ‘relative amplitudes’ remains mostly elusive, the term ‘amplitude preserved’ seismic processing is generally a misnomer for the overall processing. However, in my view, this term is quite appropriate for characterizing some component processes such as amplitude-preserved demultiple, DMO, and/or migration. Generally in these processes no amplitude changes are intentionally made to the primary reflections, and if and when amplitude corrections are applied, they are applied only deterministically. In particular, no artificial or cosmetic amplitude corrections such as AGC are applied in these amplitude-preserved component processes. One may notice that amplitude-preserved seismic products do not normally look as ‘aesthetically nice’ as those with amplitude gain factors applied solely for cosmetic purposes.

The remaining term, ‘controlled amplitude’ processing, truly represents, in my opinion, the state of the art in doing amplitude preservation as well as is practically possible. In this process one generally applies ALL reasonable amplitude compensations to the seismic data, such as the offset-dependent geometric-divergence correction (primarily deterministic) and absorption corrections (generally data-adaptive), using all available information. In addition, amplitude-preserved demultiple, DMO, and/or migration processes are applied, if and when necessary. Note that this term does NOT claim perfect amplitude preservation in the final products, and thus truly reflects some of the deficiencies of our real world. So the term ‘controlled amplitude processing’ would be preferred whenever attempts are made to treat amplitudes ‘with tender loving care’. The degree of controlled amplitude processing may, however, vary from dataset to dataset, depending on which amplitude corrections were applied and how, and also on the amplitude corrections that could not be estimated.

Mrinal (MK) Sengupta,
Consultant, Houston

Answer 5

This question is not likely to be popular with the seismic processing industry. For too long we have been allowed to bandy about these advertising slogans without serious challenge. It’s time to ‘fess up and to set a righteous course in these murky waters.

The first three phrases (preserved, relative, and true) represent varying degrees of self-delusion coupled with promotional buzz terminology intended to impress potential clients with the scientific merits of the shop using them. The last phrase (controlled) at least allows some latitude for truth in advertising. Example: the use of AGC for amplitude treatment. Remember what the “C” stands for?

The question includes the essence of our desired goal: “… yield amplitudes that are a measure of reflectivity in the subsurface …” The processing claims of preservation, relativity, and truth (!) all imply that reflectivity has been captured alive and well. Not a chance. Why am I pessimistic in this regard? And more importantly – since it’s easy to be negatively critical about virtually any processing procedure – what can and should be done to produce data whose amplitudes will be useful for the interpreter?

First, to be more specific about the goal, we are talking about some measure of reflectivity on the pre-stack migrated (PSM) data. Clearly, stacked data hold no hope of reflectivity-proportional amplitudes – if one believes in AVO. Further, many processing packages in common and daily use normalize the sum of N traces by N^1/2, thus producing “signal” amplitudes proportional to the square root of fold, which varies not only with time but spatially as well, especially in land data. (This silly default option derives from the old analog processing days and a misguided attempt to temper the noise in low-fold areas.) We also recognize that amplitude only matters at certain discrete times – namely, at reflection time. The rest of the reflection wavelet amplitudes are only along for the ride, providing no useful information about the reflection, and are more likely to distort the truth than illuminate it.
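The fold dependence described above is easy to demonstrate. In the hypothetical sketch below (mine, with an assumed synthetic event and folds), a coherent event stacked as sum/√N grows like the square root of the fold, while a simple mean (sum/N) keeps the signal amplitude fold-independent.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 30 * np.arange(500) * 0.002)        # a coherent event

for fold in (12, 30, 60):
    traces = signal + rng.standard_normal((fold, signal.size))  # noisy copies
    stack_sqrt_n = traces.sum(axis=0) / np.sqrt(fold)           # the legacy default
    stack_mean = traces.mean(axis=0)                            # fold-independent
    print(fold,
          round(float(np.abs(stack_sqrt_n).max()), 2),   # grows ~ sqrt(fold)
          round(float(np.abs(stack_mean).max()), 2))     # stays ~ 1
```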

OK, we know what we want, ideally – reflectivity as a function of time, spatial position, and offset (a greedy scientist would probably throw in azimuth as well). The problem is to get amplitudes into this mode. Reflectivity is a dimensionless quantity: R = A_R/A_IN. This compares the amplitude of the reflected wavelet to that of the incoming waveform. There are no restrictions on the nature of the waveform, and R may be measured anywhere along the wiggle. Unfortunately, no one tells us what the incoming wavelet is. For discussions of AVO, etc., we often just set A_IN = 1, thus lulling ourselves into the happy belief that A_R = R. Amplitude comes to us as a digital value (“23427”) in what the International Committee On Nomenclature refers to as arbitrary international seismic amplitude units (AISAU). Even if reflections were single-valued returns from boundaries in the subsurface, there would be an unknown – and probably unknowable – scaling factor, usf, between amplitude and reflectivity: A = (usf)·R. Attempts at calibration of this factor generally fail. With detailed knowledge of certain rock properties at a boundary, and a reasonably resolved reflection from such a boundary, one might be able to convert the observed amplitude to reflectivity. With VSP measurements of the incoming and reflected wavelet immediately above an interface, the relationship could be computed – for the VSP. Translation of these techniques into something useful for surface recordings 100 m down the road from the borehole has not proven successful. What is often claimed is really the quintessential model for precision without accuracy. Consider, then, a model for the recorded seismic trace:

S(t, x) = [R(t,x) • D(p)] * w(t;s,r) * v(p) + N_C + N_R

where,

S(t, x) is the recorded trace at time t and offset x (source to receiver: s➔r);

R(t, x) – the reflectivity at t and x (accounting for AVO);

D(p) represents all the smooth amplitude decay factors (spreading loss, transmission, etc.) along the ray path, p, from s to r;

w(t; s, r) represents the non-time-varying wavelet, accounting for the filtering (*) at the source and receiver locations: source signature, near surface, receiver, and instruments, including all surface-consistent amplitude factors;

v(p) comprises all the (time- and space-varying) filtering effects along p, the ray path between s and r: array filtering, interbed (short-path) multiples, Q – inelastic attenuation, and so on – all of which affect “amplitude” in a nasty way;

N_C and its cohort in crime, N_R, represent, respectively, the ubiquitous coherent and random noise components of the recording, independently added to the subsurface signal.

The inversion of this mess, yielding R(t, x), may be expressed as,

R(t,x) = {[S(t,x) − N_R − N_C] * [v(p)]^-1 * [w(t;s,r)]^-1} • [D(p)]^-1

Note that this inversion is accomplished by applying the mathematical inverse of each component in the reverse order of their forward application. It is immediately obvious that we’re in trouble. In order to subtract the noise, one has to know, very specifically, what the noise is. If we were to characterize groundroll, for example, as low frequency, we would be doing a great injustice to the low (and necessary) frequency components of the reflection signal. Herein lies the first of many gaps in our knowledge of the data. No one tells us what the noise is – just get rid of it, now. In what book does one learn what the wavelet w(t; s, r) is? Is there a table look-up for v(p)? As a matter of fact, what is the ray path from s to r? As far as D(p) is concerned, there have been a number of fine theoretical – some bordering on the practical – treatments of spherical and not-so-spherical amplitude decay and its cure (see, for instance, Hilterman or Newman), but they deal with the inversion of amplitude factors on a model that we can never simulate: a noise-free, fully deconvolved, signal-only data set. We try, but because we lack perfect knowledge of the various unwholesome components, we fall back on statistical techniques. We attack both noise suppression and deconvolution with statistical approaches (various deterministic techniques have been tried in the past, but they soon lapse into the oblivion of impractical programs when tested on real data). Enlightened processors are well aware of this and of the importance of sequence in the pseudo-inversion process. If one were, for example, to apply a well-meaning spherical divergence correction before deconvolution, what would that do to the minimum-phase assumption in statistical deconvolution?
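Purely as a structural sketch of that reverse-order pseudo-inversion (mine, with placeholder inputs): the code below subtracts an estimated noise record, inverse-filters v(p) and w(t;s,r), and only then undoes the smooth decay D(p). Every input it takes for granted, the noise estimate, the two inverse operators and the decay function, is exactly what, as noted above, no one hands us.

```python
import numpy as np

def pseudo_inverse_chain(s, noise_estimate, v_inverse, w_inverse, d_gain):
    """Apply the amplitude 'inversion' in the reverse order of the forward model:
    R ~ {[S - N_R - N_C] * v^-1 * w^-1} . D^-1  (all inputs are stand-ins)."""
    x = s - noise_estimate                        # [S - N_R - N_C]
    x = np.convolve(x, v_inverse, mode="same")    # * [v(p)]^-1
    x = np.convolve(x, w_inverse, mode="same")    # * [w(t;s,r)]^-1
    return x * d_gain                             # . [D(p)]^-1
```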

In order to complete the inversion to something approximating (usf)•R(t, x), we must again ask ourselves what story we want the amplitudes to tell. Who could argue against amplitudes as DHI’s – grease finders? Ideally we would like the envelope of the PSM seismic trace, in time, space, and offset, to represent the reflection strength at those coordinates. If we knew what the average (measured over a broad space, offset, and time window) of the reflection energy was, then it would be a simple matter of bringing the smoothed, inverted amplitude energy (or its square root) to this level. Once again, we are thwarted by ignorance: what is the average reflectivity energy function on the data? But let’s not punt on 3rd down (2nd in Canada). Aren’t we really most interested in AVOmalies – the local lumps in amplitude behavior, the deviations from the background behavior of ordinary, everyday reflections, living out their lives of quiet desperation? Of course we are, and toward this goal we may simply bring the smoothed averaged amplitudes to a constant level – leaving in the (amplitude) lumps, as in a good pancake batter. This technique could be refined to include any gross and smoothly constrained estimates of very general reflectivity energy levels varying throughout the area. These geologic attributes would most often be available from well data.

The statistical step described above may be expressed mathematically by the following.

pR(t,C) = a(t,C)•A(t,C)•E(t,C)

where,

pR(t, C) is the pseudo-reflectivity (preserving local lumps, or AVOmalies);

C represents all salient coordinates in space and offset (maybe azimuth);

a(t, C) is the individual trace amplitude after the best attempt at noise suppression and deconvolution/wavelet processing, yielding a nearly zero-phase wavelet with the proper polarity;

A(t, C) is the inverse square root of the wide-window averaged squared amplitudes, [a(t,C)]^2 (energy);

E(t, C) represents the gross reflectivity strength. If unknown, it may simply be set to a nice constant, like 1.0.
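Putting those definitions together, here is a minimal single-trace sketch (my reading of the recipe, with an assumed 1-s smoothing window): A(t) is the inverse square root of a wide-window running average of a(t)², so the long-wavelength level is flattened while the local lumps, the AVOmalies, ride through, and E is simply a constant.

```python
import numpy as np

def pseudo_reflectivity(a, dt, window_s=1.0, E=1.0, eps=1e-12):
    """pR(t) = a(t) * A(t) * E for a single trace.

    a(t): amplitudes after noise suppression and wavelet processing;
    A(t): inverse square root of the wide-window averaged a(t)**2 (energy);
    E   : gross reflectivity strength, set to a constant if unknown.
    """
    half = max(1, int(round(window_s / (2.0 * dt))))
    kernel = np.ones(2 * half + 1) / (2 * half + 1)
    energy = np.convolve(np.pad(a ** 2, half, mode="edge"), kernel, mode="valid")
    return a * (1.0 / np.sqrt(energy + eps)) * E
```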

Now the question becomes, what should we call this procedure? How about controlled, simulated relatively true amplitude preservation with arbitrary scaling factors varying in time and space, suitable for AVO analysis?

Mike Graul,
Texseis, Houston
