On the afternoon of May 6, 1993, the final day of the annual convention of the Canadian Society of Exploration Geophysicists, a workshop entitled "Problems with Phase" was held at the Calgary Convention Centre. It produced a fascinating look at a subject which, despite its longevity, continues to present major challenges for exploration geophysics. As these notes show, we are still far from a consensus.
The workshop was organized by Peter Cary of Pulsonic Geophysical Ltd. and chaired by Dan Hampson. Six presentations were given, beginning with the keynote speaker Professor Anton Ziolkowski from the University of Edinburgh. These talks are summarized only briefly here since the abstracts can be found in the convention program manual. The discussions following each talk are included in more detail.
Although written in the first person, these notes have been edited for clarity and brevity, and should not be taken as direct quotes. The workshop began with some remarks by the chairman:
Dan Hampson (Hampson-Russell Software Services Ltd.):
Opening Remarks
I've thought of three issues which I would like to see resolved during the course of this workshop: First, who cares about wavelet phase? Why is it important?
Second, why is it so hard to get it right? I've been in the business since the middle 70's, and the first thing we heard about was problems with spiking deconvolution. Through the course of time we worried about instrument dephasing, and then homomorphic, maximum-entropy, maximum-likelihood, and multi-component deconvolution. One would think that in 1993 we're much better at getting a zero-phase section than we were in 1976, but I'm not convinced that's true. I suspect that our ability to get the wavelet phase right hasn't progressed continuously, and I would like to know why.
Finally, what are the current methods people use? What is the state of the art?
Anton Ziolkowski (University of Edinburgh):
Why Do We Need to Measure the Seismic Signature?
By measuring the seismic signature, you provide severe constraints on the models you can use to fit the data. I've been told that the measurement of the signature is irrelevant because the wavelet that comes back from the target looks nothing like what goes in at the surface. I argue that this so-called wavelet doesn't exist. I believe that the phase problems you have in Alberta demand a radical change in approach which must start in the data acquisition.
Absorption must be included in the wave equation to compensate for it correctly. You can't remove it through deconvolution because it isn't convolutional - the arrivals have taken many different ray paths and so have been attenuated differently. Deconvolution can only be used to get rid of convolutional effects such as the source time function and the receiver response. Even then you must not damage the signal-to-noise ratio, so you can't change the amplitude spectrum too much.
For well ties, our synthetic seismograms are normally computed using plane waves at normal incidence to the earth. This model cannot account for shear waves. It is better to include the measured source signatures and the geometry of the sources and receivers, and to process the synthetics through to stack just as you processed the data.
If post-stack migration doesn't work because the structure is too complicated then you have to do pre-stack migration. This assumes all energy is primary, so you must remove multiples first. The most important multiples are those introduced by the free surface. Removing them is difficult because you don't have stacking to help you, but it can be done if you know the source depth, receiver depth, and source signature. Thus measuring the source is just as important for structural interpretation as for stratigraphic.
How do we measure the source signature? For dynamite I propose we fire two shots of different sizes. The signatures are related by a scaling function, allowing you to derive them. With Vibroseis, our current measurement does not account for base plate bending. There's a Dallas company, however, which makes a device which bolts onto the base plate and measures the ground force directly. For a marine source, we can determine the wavefield of a source array by measuring its response with a sufficient number of near-field hydrophones.
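The two-shot idea can be sketched numerically. Assuming a similitude scaling law of the form s2(t) = a^3 s1(t/a), with a the cube root of the charge-mass ratio (the precise law is derived in Ziolkowski's paper, mentioned below), the earth response cancels in the spectral ratio of the two recordings; a minimal sketch:

```python
import numpy as np

def spectral_ratio(x1, x2, nfft=None):
    """Stabilized spectral ratio X2/X1 of two co-located recordings.

    With x1 = s1 * g and x2 = s2 * g (same Green's function g), the
    ratio equals S2(f)/S1(f): the earth response cancels.
    """
    nfft = nfft or 2 * max(len(x1), len(x2))
    X1 = np.fft.rfft(x1, nfft)
    X2 = np.fft.rfft(x2, nfft)
    eps = 1e-4 * np.max(np.abs(X1)) ** 2        # guards against division by ~0
    return X2 * np.conj(X1) / (np.abs(X1) ** 2 + eps)

# Under the assumed law, S2(f) = a**4 * S1(a*f), so the measured ratio
# R(f) = a**4 * S1(a*f) / S1(f) links S1 at frequency a*f to S1 at f;
# the signature can then be built up recursively from low frequencies.
```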
The purpose of signature deconvolution is to find a filter which shapes the measured source signature into something with more resolution, without changing the noise too much. In an air gun array you can shorten the signature in time by smoothing out spikes in the frequency spectrum caused by oscillating air bubbles, and then converting it to minimum phase.
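Those two steps can be sketched as spectral smoothing followed by a minimum-phase construction via the real cepstrum (the moving-average smoother and its width here are illustrative assumptions, not the speaker's method):

```python
import numpy as np

def smooth_spectrum(w, nfft, width=5):
    """Moving-average smoothing of the amplitude spectrum, a toy stand-in
    for removing bubble-induced spectral spikes; width is in samples."""
    A = np.abs(np.fft.fft(w, nfft))
    kernel = np.ones(width) / width
    return np.convolve(A, kernel, mode='same')

def minimum_phase(amplitude_spectrum):
    """Minimum-phase wavelet with the given full-length amplitude spectrum,
    built by folding the real cepstrum (standard homomorphic method)."""
    n = len(amplitude_spectrum)
    logA = np.log(np.maximum(amplitude_spectrum,
                             1e-8 * amplitude_spectrum.max()))  # avoid log(0)
    ceps = np.fft.ifft(logA).real
    fold = np.zeros(n)
    fold[0] = ceps[0]
    fold[1:n // 2] = 2.0 * ceps[1:n // 2]       # double the causal part
    fold[n // 2] = ceps[n // 2]                 # Nyquist term (even n)
    return np.fft.ifft(np.exp(np.fft.fft(fold))).real

# Usage sketch: shorten an air-gun signature sig by smoothing the bubble
# spikes out of its spectrum, then taking the minimum-phase equivalent:
# w_min = minimum_phase(smooth_spectrum(sig, nfft=4 * len(sig)))
```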
If you have an array of sources, such as with Vibroseis or air gun, you must remove the directional effects before AVO analysis. The signature needs to be shortened in both time and space, which may be done with a two-dimensional shaping filter on common receiver gathers.
Free-surface multiples can be removed in marine data if you first remove the source signature and the source and receiver ghosts. The 3D case is possible to solve if you have the source signature, but expensive.
In 1981 I shot and processed a North Atlantic line using near-field measurements of the source signatures. When compared to a section processed with wavelet estimation at every shot, my section not only has better resolution, due solely to phase differences, but also shows an important unconformity which is invisible on the other section.
Questions:
Peter Cary (Pulsonic Geophysical Ltd.): I might believe your arguments for marine data, but I have some doubts for land. You wrote the convolutional equation as the convolution of the source signature with the Green's function. It seems to me there's a lot of stuff in the Green's function we want to deconvolve. We don't want our interpretation polluted with near-surface reverberations that the present statistical approaches try to get rid of.
Ziolkowski: I agree, but the first thing we must do is get rid of the source. What you say does not argue against measuring the source. There are near-surface reverberations, and we have to do the best with them that we can - maybe we have to do predictive deconvolution afterwards. And not everything is convolved in the Green's function, only near-surface effects.
Cary: Do you think predictive deconvolution is capable of getting rid of the rest?
Ziolkowski: No. We may still have to do surface-consistent deconvolution. I believe underneath every geophone you can have a convolutional effect, but that's a separate problem.
Dave Hutchinson (Techco Geophysical Services Ltd.): I question whether what we're after is the total impulse of the earth. If we're interested in what's happening at 10,000 feet down, what we'd really like is the earth response to 9,000 ft. This is not a simple point function, but we can represent it with a multi-dimensional convolution. In the case of land data, we have a well-defined injection function just beneath the surface, which includes a lot of terms you have not included in the source.
Ziolkowski: Those terms are in the Green's function. There's no such thing as an injection function at 9,000 ft. since every single arrival is different. There's no convolutional model you can apply to describe how the wavelet has been changing. Second, every wavelet estimation method I've looked at windows the data, and you can't get long wavelets out of short windows. The wavelet must get longer due to dispersion, so the wavelet you extract cannot be shorter than the source signature. If your wavelet extraction method doesn't show that, there's something wrong with your method.
Hutchinson: There's no such thing as an injection function at 9,000 ft, but there's certainly a response which the material above 9,000 ft produces. This could be measured if you had the tools to measure it.
Audience member: Nobody has used dynamite for the last 50 years. Explosives vary considerably in what they do. A 125 gm charge will not be a quarter of a 500 gm charge, because there's a time delay for the shock wave to develop. Also, explosives vary with temperature. If you put a small charge down the hole after a big charge, it won't necessarily perform anything like the previous charge. Also you find the same trade names will have different formulas for different countries and for different diameters.
Ziolkowski: I've written a paper on this which is being published in Geophysics in August. The scaling law I derived relies on the kind of explosive being the same both times. I'm talking about drilling two independent holes a couple of metres apart, and putting the explosives in at the same time. So long as they have the same chemical composition, the actual shape of the explosive is irrelevant because the source is small compared to the wavelength.
Audience member: Its performance won't be the same because it has this zone where the charge gets up to full velocity. You have radically different effects with a 125 gm charge than you have with a 500 gm charge. It's as if you had two different formulations.
Hampson: The idea of measuring source signatures on land isn't new, yet there don't seem to be a lot of examples of it being used successfully. I notice you don't have an example. How do you account for that?
Ziolkowski: I'm just a simple guy from university, I'm not a rich oil company. One major company in the UK is spending 50 million pounds this year on 3D seismic, and not a penny on measuring the signature. On land I don't know why nobody is recording the Vibroseis ground force. Colum Keith showed an example this morning using the STASIS system. The people who make the system are bankrupt. Nobody is using it, but nobody has argued that the information you get is not worth having.
Hampson: Isn't it possible that the information isn't particularly useful? You may get the source signature at the surface, but you're not that interested in it there. You're interested in doing something to your data at the zone of interest, and if it's not possible to improve the data then it's not worth measuring.
Ziolkowski: This is the argument I've heard for twenty years: There's no point in measuring the source signature because it's different at the target. So we don't, and we can never find out whether we're right. If I told you 20 years ago that 3D data is better than 2D, and you said "prove it", well I can't because I don't have any 3D data. Likewise you can't prove that measuring the source signature is better than not measuring it unless you can compare the two.
Schlomo Levy (Landmark/ITA Ltd.): You say you have to remove the surface multiples before you do prestack migration. If you measured the source signature, you could remove surface-related multiples by a simple operation.
Hampson: Is there anybody in the audience that has experience measuring source signatures here in Alberta that can shed any light on its usefulness? (No response)
J.V. Pendrel and V. Groeneveld (Gulf Canada Resources Ltd.):
Model-based Phase Correction and Lithologic Processing
A "model" here means our entire perspective on the sub-surface, and includes the geological and geophysical concept, well data, numerical modelling, and seismic data. Our methods must yield self-consistent and reasonable results - otherwise we should walk away from them.
Here are some of the steps to lithologic processing. After each step we must assess the quality and risk, using controls such as synthetics, zero-incidence models, and VSP's. This often means a great deal of back tracking and rechecking to ensure that all our information and conclusions are self-consistent.
- Initial modelling. We define the processing objectives, and estimate the uniqueness and sensitivity of the model, especially to wavelet variations.
- Spectral analysis. This is used for spectral enhancement, to ensure we can image the target, and to estimate a final zero-phase wavelet.
- Spectral whitening. This must be done cautiously. We broaden the signal just enough to obtain the minimum resolution needed.
- Phase correction. We use a Wiener-Levinson algorithm (sketched after this list). Again this must be done cautiously, using a short operator. We try to compare the operator with one derived from the same line but a different well to check for spatial variability.
- Seismic inversion. This is highly sensitive to phase, and so is also useful for quality control of the phase correction.
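The Wiener-Levinson shaping step mentioned above can be posed as a symmetric Toeplitz solve. A minimal sketch follows (this is a generic formulation, not Gulf's implementation; the filter length and prewhitening level are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def shaping_filter(wavelet, desired, nfilt=16, prewhite=0.01):
    """Least-squares filter f of length nfilt with f * wavelet ~ desired."""
    # Toeplitz normal equations R f = g: R from the wavelet's
    # autocorrelation, g from the desired/wavelet crosscorrelation.
    acorr = np.correlate(wavelet, wavelet, mode='full')
    r = acorr[len(wavelet) - 1:][:nfilt].astype(float)
    r[0] *= 1.0 + prewhite              # prewhitening stabilizes the solve
    g = np.correlate(desired, wavelet, mode='full')[len(wavelet) - 1:][:nfilt]
    return solve_toeplitz((r, r), g)

# Usage sketch: shape an extracted wavelet w toward a zero-phase target z
# of matching bandwidth, keeping the operator short, then apply it:
# f = shaping_filter(w, z, nfilt=16)
# corrected = np.convolve(trace, f)
```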
There can be many reasons for failure of the model. These include unreliable logs, lack of uniqueness between seismic and model, multiples, unresolvable targets, poor phase correction, non-stationary seismic, and lack of true-amplitude processing.
Questions:
Hampson: Your phase correction took place after stack using the well log. Does that mean you don't trust the processing to get the phase right?
Pendrel: We can't assume it; the phase test always has to be done, no matter what has been done beforehand. I wouldn't go to seismic inversion, which depends critically on phase, without checking it. Sometimes it is right.
Hampson: If you find there are significant phase changes to be done after stack, doesn't that lead you to think there were variable phase changes that should have been applied before stack?
Pendrel: Absolutely. The reality is that for many of us the processing is a fait accompli. That said, we don't want to miss opportunities to make phase corrections knowing that, at the end of the day, there will be a Wiener-Levinson shaping filter that will make everything right. We should do as much as we can as early as we can as often as we can, and hopefully there won't be much left for the filter to do.
Colum Keith (Imperial Oil Resources Ltd.): In response to Dan's question, this morning I gave a paper which showed some of the benefits of trying to measure the Vibrator signature at each point down the line. When the mechanical system hiccuped we got told about it. In this way we avoided phase problems with the vibrators.
Ziolkowski: When you are rotating the phase of your wavelet, you have to start with a known wavelet. How do you know what the wavelet is without making any assumptions about the geology?
Pendrel: The wavelet is an interpreter's wavelet. We don't pretend it's exact. The wavelet is given the same frequency response as the data. We value interpretations which are not overly sensitive to the shape of the wavelet. If an interpretation is sensitive to wavelet sidelobes, then we must judge it to be risky.
Ziolkowski: But if the wavelet has the same spectrum as the data, then the impulse response of pre-critical reflections must be white, and you can't prove that it is.
Pendrel: I'm not assuming it is white. Spectral enhancement is a very powerful and potentially dangerous tool. We seek to do only enough spectral broadening to image the target.
Mike Doty (Unocal Canada Ltd.): It strikes me that what Pendrel is talking about are attempts to fix things that were done badly. So why haven't we progressed in handling phase? My answer is that the seismic business is market driven. The end users have not been asking for it.
The talk by Sudhir Jain of Commonwealth Geophysical Development Co. Ltd. was not recorded due to a technical problem. However, an abstract can be found in the convention program manual.
J. Downton and S. Levy (Landmark/ITA Ltd.):
Interpretive Wavelet Processing
We're going to make a case for interpretive wavelet processing. We will examine phase distortions which can't be explained by time-invariant, offset-invariant convolutional models - that is, ones which can't be solved by surface-consistent deconvolution. The principal effects we will examine are from processing artifacts, wave propagation theory, and acquisition geometry.
One objective of processing and interpretation is to achieve an optimal tie between the seismic and available well control. Traditionally this is done by comparing the stacked section with a vertical-incidence synthetic. The problem is that we ignore phase and amplitude as a function of offset, leading to phase distortion within the stack.
Our method is to generate synthetic shots using an acoustic or elastic modelling scheme. We then process the shots, perform scaling and NMO correction, and take limited-offset stacks. We then compare the stacks to the vertical-earth seismograms. We assume the source wavelet has been solved for.
To study the effects of NMO stretch, we generated some data using an acoustic modelling scheme. For residual NMO, we estimated velocities from semblances rather than using the model velocities. Both effects created phase distortion in the full stack, but post-stack wavelet extraction did a pretty good job of removing it.
To consider post-critical reflections and transmission and converted-mode losses, we applied an elastic modelling scheme to the previous model. This caused a post-critical reflection on the Banff, resulting in amplitude and phase changes. There's also a series of converted S modes, and transmission losses have decreased amplitude levels on some events. After NMO correction, mute, and stack, we see a significant phase distortion on the Banff. A shaping filter can improve the match.
To consider dispersion and attenuation we applied the Futterman attenuation model to the acoustic shot. This created both time and offset-variant phase rotations relative to the synthetic. We could not design a shaping filter that gave good results for both shallow and deep events on the full stack.
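For reference, a constant-Q filter with Futterman-style logarithmic dispersion can be sketched as follows (the reference frequency and FFT sign conventions are assumptions of this sketch, not taken from the talk):

```python
import numpy as np

def constant_q(trace, dt, t_travel, Q, fref=50.0):
    """Attenuate and disperse a trace for one travel time t_travel.

    Amplitude decays as exp(-pi f t / Q); travel time varies as
    t(f) = t * (1 - ln(f/fref) / (pi Q)), so each frequency is delayed
    differently -- a time- and frequency-variant phase rotation.
    """
    n = len(trace)
    f = np.fft.rfftfreq(n, dt)
    f[0] = f[1]                                    # avoid log(0) at DC
    t_f = t_travel * (1.0 - np.log(f / fref) / (np.pi * Q))
    spec = (np.fft.rfft(trace)
            * np.exp(-np.pi * f * t_travel / Q)    # attenuation
            * np.exp(-2j * np.pi * f * t_f))       # dispersive delay
    return np.fft.irfft(spec, n)
```

Applied to a zero-phase pulse for two different travel times, this reproduces the kind of time-variant phase rotation that no single shaping filter can undo on the full stack.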
The actual seismic data around the well shows many of the effects we've discussed, including post-critical reflections, phase and amplitude distortions perhaps due to AVO effects, and transmission losses due to a high angle of incidence. However, we are not seeing converted S modes. The stacked data shows a good match to the seismogram after match filtering.
Acquisition can also affect phase. Two lines can have different offset weightings due to different cable lengths and group intervals. Within a line, lateral phase distortions can be caused by no-shoot areas, and changes in offset distributions at the line ends.
We conclude that unless you use full pre-stack wave-equation processing, where you allow for these problems in the formulation, you may as well do the phase treatment as a post-stack interpretive correction.
Questions:
Hutchinson: It looked as if your phase correction operators depended on offset distribution and the relative amplitude of the traces in the gathers. Would you propose using operators which varied along the line as the conditions changed, or would you make sure the amplitude and offset distributions are uniform?
Downton: We were looking at using post-stack operations for correcting the phase, recognizing that there were problems introduced by stacking the data. I'm opposed to a spatially variant phase operator because I have no confidence it's doing the right thing. I like to tie my phase operator to the well.
Chris Irvine (Chevron Canada Resources Ltd.): Were the depths to the Banff and Wabamun the same as the mute offset?
Downton: The ratio was about 1:1. The problems are mostly at far offsets. These become significant for events recorded for AVO with very long cable lengths. It's a compromise between recording long offsets for multiple attenuation and source-generated noise, versus the problems introduced by those long offsets. We have a velocity inversion at the Banff, so there are angles of about 12 degrees at the Slave Point, and we lose many of our high angle offsets due to ray bending.
Keith: You did full modelling of the source for an entire spread and then compared it with the 1D synthetic, and the 1D synthetic cannot accurately tell you what's in your seismic data. Do you agree with that?
Downton: The 1D vertical synthetic is simplistic, in that it can't see the distortions we've discussed. But stacking is a powerful tool in removing many of these effects, so the answer is yes and no.
Levy: The answer is no. I thought when we started out that we'd get far more effect from the things we've shown you. But all of these things can be suppressed by stacking and proper muting. However, we put absorbing boundary conditions at the surface, so we didn't need far offsets to suppress multiples. In an area where multiples are not a problem then the only difference between a spherical-wave zero-offset seismogram and a plane-wave vertical-incidence seismogram is a 45 degree phase shift.
Ziolkowski: You used a 2D finite-difference scheme. You can't really include thin layers in it.
Levy: We sampled the well log every 2.5 m.
Ziolkowski: 2.5 m sounds good enough, but if you had an area with a lot of thin layers, you might want to use the reflectivity method.
Levy: That depends on the ratio of the frequency bandwidth to the layer width.
Cary: Your statement that the final phase estimation has to be interpretive makes me uneasy. I started studying geophysics because I thought it was a science. We should be able to produce a zero-phase section without subjective choice. We all do match filtering, but I consider it an embarrassment.
Levy: We believe that full elastic wave equation inversion will resolve the problem. Unfortunately it requires a lot of computer resources. We think that eventually we'll be able to do it, but it's not realistic now.
Doty: I suggest that processing and interpretation are one and the same. The interpreter has a body of knowledge not available to the processor. Part of the problem is that certain elements are left out of the loop. I also think that the measured source signature is a source of information which we should exploit.
Easton Wren (Consultant, Petrel Robinson Ltd.): In early experiments with inversion it became clear that unless the data was zero-phase, the inversion was unstable. Is that no longer a concern? Can you invert data with any phase?
Hampson: Looking at this year's abstracts, it seems inversion remains sensitive to phase.
X. Wang (Geo-X Systems Ltd.):
Phase Control by Deconvolution
You can do post-stack processing to correct the phase, but you're better off if you can fix the problem earlier on. In my opinion, statistical deconvolution does work, and with improvements in statistical deconvolution, the ties between seismic lines and wells are improving.
Surface-consistent deconvolution began with source and receiver components, but we have been adding components such as offset and CMP. However, we always have more data than unknowns. If we fit the data exactly we get trace-by-trace deconvolution, which might be called an infinite-component model. This gives the most resolution but is very vulnerable to noise and so can only be used if the data is very clean. Therefore, more components is not necessarily better. The one-component model is robust, but has poor resolution. The best is somewhere in between. From my experience, the 5-component model is sometimes too much, since adding components increases the chances of modelling the noise.
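The trade-off follows from how the decomposition is posed: each trace's log amplitude spectrum is modelled as a sum of component terms, and the overdetermined system is solved by least squares. A toy two-component (source plus receiver) sketch, with made-up indexing:

```python
import numpy as np

def surface_consistent_logspec(logA, src, rec, nsrc, nrec):
    """Decompose per-trace log spectra into source and receiver terms.

    logA : (ntraces, nfreq) log amplitude spectra
    src, rec : (ntraces,) integer source / receiver index per trace
    Returns (nsrc, nfreq) and (nrec, nfreq) component log spectra.
    """
    ntr = len(src)
    A = np.zeros((ntr, nsrc + nrec))
    A[np.arange(ntr), src] = 1.0            # one-hot source column
    A[np.arange(ntr), nsrc + rec] = 1.0     # one-hot receiver column
    # One least-squares solve shared across all frequency columns.
    x, *_ = np.linalg.lstsq(A, logA, rcond=None)
    return x[:nsrc], x[nsrc:]

# Adding offset-class and CMP columns enlarges A: more components fit
# the data more closely, but each added column also soaks up noise --
# the trade-off described above.
```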
We are improving the deconvolution. Time-variant properties become apparent when we perform a high resolution deconvolution, which can cause ringiness in the shallow data. We can fix this through time-variant deconvolution, but the data must be clean to handle the loss of statistical redundancy.
The minimum-phase assumption is not a bad assumption. If the source and receiver responses are not already minimum-phase, we can convert them. For the earth response, Aki and Richards state in their book "Quantitative Seismology" that in one dimension a causal wavelet, propagating in an attenuating, dispersive medium, is minimum-phase.
There are limitations to how much we can safely get out of the data. To improve deconvolution we will eventually need more information such as shear wave recordings.
Questions:
Ziolkowski: Aki and Richards do say the earth transmission response is minimum phase. But Jacob Fokkema and I proved in a 1987 "Geophysics" paper that the reflection response of a layered earth is not in general minimum phase. We have different opinions about whether statistical deconvolution works. To be scientific there should be a test. I propose we do a blind deconvolution on a synthetic record, calculated properly like Downton and Levy did, where you don't know the answer, and you come up with the wavelet and geology, and let's see who's right.
Audience member: As interpreters, we have tests every day, and these tests are our predictions of the geology. We trust the phase if it's consistent. We don't care if it's minimum phase.
Hampson: So as long as the phase is consistent across the section, you find that's okay?
Audience member: I know that it's not consistent temporally or spatially, but I have to take that into account as an interpreter. It's an art, not a science.
Wang: I'm not aware of Ziolkowski's article. The reason I support statistical deconvolution is that it works. We have processed lines shot with many different energy sources, and of many vintages, in a single area, and they tie reasonably well.
Hampson: So you're defining "working" as: if you deconvolve many different data sets, the results will tie.
Wang: And they also tie the well.
Hampson: But that's where minimum phase is critical. To tie each other doesn't require the minimum phase property.
Wang: But they have to have some sort of consistent phase property.
Hampson: Did you say it was useful to measure the source signature?
Wang: It depends on the source. Vibroseis is not minimum-phase, and if it improves the data I'm happy to use it. I understand the dynamite source is difficult to measure. Even without the signature the lines seem to tie. Maybe with the signature they tie better; we'll have to wait and see.
Audience member: Is it better to perform instrument and geophone phase compensation without surface-consistent deconvolution, perform surface-consistent deconvolution without phase compensation, or perform both? They all give different results.
Hutchinson: There are also many ways to perform phase compensation. Some remove the instrument phase, some change the phase spectrum to minimum phase given the amplitude spectrum, and some reshape both the phase and amplitude spectrum. The first option, although it doesn't preserve the minimum phase property, often ties best because it balances out the errors caused by additive noise.
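Two of those options can be written directly in the frequency domain (a minimal sketch; the stabilization constant is an assumption, and the middle option would use a minimum-phase construction like the cepstral one sketched earlier):

```python
import numpy as np

def compensate(trace, response, mode, nfft=None):
    """Compensate a trace for a measured recording-system response.

    mode='phase' : remove the response's phase only (the first option);
    mode='full'  : remove both amplitude and phase (the third option).
    """
    nfft = nfft or len(trace)
    X = np.fft.rfft(trace, nfft)
    H = np.fft.rfft(response, nfft)
    if mode == 'phase':
        X = X * np.exp(-1j * np.angle(H))
    elif mode == 'full':
        eps = 1e-4 * np.max(np.abs(H)) ** 2
        X = X * np.conj(H) / (np.abs(H) ** 2 + eps)   # stabilized inverse
    else:
        raise ValueError("mode must be 'phase' or 'full'")
    return np.fft.irfft(X, nfft)
```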
Cary: They're probably all wrong. Surface-consistent deconvolution produces consistent phase, rather than random phase as with trace-by-trace deconvolution.
Hampson: Does everyone agree that multi-component deconvolution is better than single trace?
Levy: You can't do surface-consistent deconvolution if you don't have a low velocity layer at the surface. It's like minimum-phase. It's a good assumption if the thing you're trying to remove is minimum phase.
Dave Hutchinson (Techco Geophysical Services Ltd.):
Stacking in Phase
Our objective in pre-stack processing is to prepare the data for stack. If we stack with traces out of phase with each other, we destroy data irrevocably.
Many of our phase problems come from the low velocity layer (LVL) at the near surface. Because of the LVL, there's a close relation in land data between statics, filtering, and additive noise.
We should look at statics as exactly the same problem as deconvolution. Statics are caused by data passing through an LVL of a given thickness. If the layer is non-dispersive then all frequencies travel at the same phase velocity, and we call it a static. If the material is dispersive then we see different phase velocities for different frequencies.
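In the frequency domain the distinction is simply between a phase that is linear in frequency and one that is not; a minimal sketch, with an illustrative dispersion law standing in for a real LVL:

```python
import numpy as np

def apply_static(trace, dt, t0):
    """Pure static: a phase shift linear in frequency, exp(-2*pi*i*f*t0)."""
    f = np.fft.rfftfreq(len(trace), dt)
    spec = np.fft.rfft(trace) * np.exp(-2j * np.pi * f * t0)
    return np.fft.irfft(spec, len(trace))

def apply_dispersive_static(trace, dt, t0, fref=30.0, strength=0.1):
    """Toy dispersive LVL: the delay varies with frequency, so the
    'static' becomes a frequency-dependent phase (illustrative law)."""
    f = np.fft.rfftfreq(len(trace), dt)
    f[0] = f[1]                               # avoid log(0) at DC
    t_f = t0 * (1.0 + strength * np.log(fref / f))
    spec = np.fft.rfft(trace) * np.exp(-2j * np.pi * f * t_f)
    return np.fft.irfft(spec, len(trace))
```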
Deep dispersion cannot be represented by a convolution, but I suggest it's not as serious a problem because much of the material is of higher Q, and because rays tend to heal themselves, bending around low-Q materials since these also tend to have low velocity.
What tools can deal with these problems? Deterministic deconvolution is excellent for compensating for filters in the recording system. Statistical deconvolution is necessary because there's a large amount of filtering in the near surface which cannot be practically measured. Phase balancing and interpretive intervention may also be necessary. Phase balancing is a type of frequency-dependent static. So long as it's surface-consistent you cannot produce anomalies shorter than a spread length. Otherwise it's very dangerous.
Pre-stack phase processes can be categorized as statics or deconvolutions - that is, linear or non-linear phase shifts. They can also be categorized as absolute or residual. Absolute processes try to convert to something given, and include datum statics and most deconvolutions. Residual processes only remove differences between traces, and include residual statics and phase balancing.
An example from the Alpine overthrust in Europe shows how phase balancing has improved continuity both within and below the design window.
To repair anomalies longer than one spread length, we must use interpretive intervention. In a 3D example, I extrapolated phase information from clean shots on land to noisy shots in a river valley. After absolute phase balancing, the land and river shots demonstrate a much better tie. Events in the CDP stack can now be followed right across the river.
Questions:
Cary: There's confusion about how many components should be included in surface-consistent deconvolution. The source and receiver components are the only ones we're interested in. Offset and CDP components are included to collect effects which would otherwise contaminate the source and receiver components. The offset component collects ground-roll and normal moveout effects, and can be estimated accurately since it has very high fold. The CDP component should probably not be included because it can't be estimated accurately. Thus there's good reason to have at least a 3-component deconvolution. If we can show there are other components that are presently contaminating our estimates, we should include them - for instance, I've heard talk of an azimuth component, although we shouldn't include it without first verifying it.
Hutchinson: I believe that if we include an offset component, we must allow it to vary continuously down the line, in which case we don't have the redundancy to do it. We should not assume we can use the same offset function for each shot when in fact they can vary drastically. We should instead ignore the inside of the spread, and design operators from traces uncontaminated by low-velocity shot waves.
Stewart Trickett (Kelman Seismic Processing Inc.): I question the assumption that because people are going to do statistical deconvolution they have no use for a measurement of the source wavelet. There are phase compensation methods for which the more information you give them, such as the responses of the recording instrument and geophone, and the Klauder wavelet, the better the deconvolution performs. One reason is that just because something is analog minimum phase doesn't mean spiking deconvolution does a good job on it.
Hampson: It sounds as if as you add deterministic knowledge you're converging to a better and better answer. Does this mean you get different answers?
Trickett: Yes, but you also get more consistent answers.
Keith: To sum up, John Pendrel showed you can take anywhere in Alberta and make it look like anywhere in Australia. Sudhir Jain said if you don't know the wavelet there's no point in doing an inversion, and proceeded to show that we don't know the wavelet so we shouldn't do inversion. John Downton and Schlomo Levy showed that 1D synthetics and synthetics from offsets don't match, and Schlomo says there's only a 45 degree difference between the two. Wang says never mind the phase, we sure have some neat deconvolution programs these days. Hutchinson says that the interpreters are on their own. Ziolkowski says we sin grievously against the tenets of science, but he doesn't give us any way to stop sinning.