Peter Cary organized this workshop with the help of other members of the technical committee of the 1992 CSEG National Convention. He is presently the chairman of the technical committee of the 1993 CSEG Convention. Another workshop is being organized for the Convention coming up May 4-6. The topic this year is “Problems with Phase”, with keynote speaker Prof. Anton Ziolkowski from the University of Edinburgh.

On the afternoon of Thursday, May 7, 1992, several hundred geophysicists assembled at the Calgary Convention Centre to take part in a workshop entitled Imaging Complex Land Data. This event was designed as a windup to the technical sessions of the 1992 Canadian SEG National Convention, and it was highly successful in terms of both technical content and exchange of ideas. So many questions of technical interest were discussed during the 3 1/2 hour workshop that even those people who were present would probably have difficulty remembering the contents in much detail. For those people who would like a record of what took place during the workshop, this summary of the presentations of the five panelists, and the questions that those presentations generated, may be useful. The entire contents of the workshop are not included here. Both the panelists’ presentations and the discussions that followed them have been edited for the sake of brevity. Although the sections entitled “Questions” have been written in the first person, they should not be considered to be direct quotations. The first-person voice is used only to try to convey the liveliness of the discussions that took place that afternoon.

The topic of the workshop was chosen in order to focus on the problems specific to imaging complex land data, as opposed to imaging complex marine data. The emphasis on problems unique to land data was intended to make the workshop more useful to those geophysicists who work in difficult areas such as the Canadian foothills, where structurally complex subsurface geology is combined with mountainous, and highly variable, surface conditions. Previous workshops on the topic of imaging complex seismic data have focused almost exclusively on issues to do with velocity analysis and migration. As most of us know, however, land data often suffer from strong noise and weak signal, which make processes such as statics and deconvolution difficult to perform correctly. A lot of processing effort is required just to get difficult land data to the stage where migration processes can begin. And even when migration begins, we sometimes find that algorithms that work perfectly well on marine data encounter difficulties with land data. Needless to say, there are many complex issues that become intertwined when processing land data from complex structural areas, and as the workshop proved, there is considerable material for discussion.

Ken Larner (Colorado School of Mines): Seismic Imaging Overview

The keynote speaker for the workshop was Ken Larner, the Cecil H. Green Professor of Geophysics at the Colorado School of Mines. In his opening address, Larner introduced many of the issues that were to come up again later in the afternoon. He began by stating that complex land seismic data present geophysicists with a challenge: they are where theoretical and algorithmic developments meet practical reality. When the topic of imaging is raised, migration naturally comes to mind, but migration is only one step in the imaging of land data. All seismic processing aims to image the subsurface, and all steps are important in attaining the final image. Unfortunately, land data can suffer from a compounding of noises which makes each step more difficult. Compared to marine data, much land data has both weaker signal and stronger noise. In complex areas, the subsurface features scatter the signal. Inevitably, the statics are more difficult with complex structure. In addition to all these difficulties, it is clear that land data in structurally complex areas ultimately require our most sophisticated migration tools, such as 3-D prestack depth migration, for proper imaging.

A number of years ago, migration was not considered much of an issue. For example, not many papers on migration were presented at the 1986 SEG meeting. This has now all changed. Over 100 papers on migration and imaging issues were presented at the 1991 SEG meeting. The reason is a desire for greater accuracy. We now want accuracy up to and beyond 90 degrees. At the same time we need greater generality in the algorithms in order to handle large velocity variations, and we want these results at faster speeds. The issue of speed is very important since we are moving towards velocity analysis methods that require many iterations.

At the 1991 SEG Meeting, a remarkable session was held on imaging salt. Papers presented by Amoco and Chevron showed some amazingly clear images of overhanging salt-dome features obtained by new techniques based on turning-ray migration. However, it is probably safe to say that these recent improvements are not applicable to a lot of land data. The salt-dome examples were obtained from the Gulf of Mexico, where a smooth, well-defined increase in velocity with depth, and little lateral velocity variation, results in a situation ideally suited for the generation of turning rays. In difficult land areas such as the Canadian foothills, Central America, and the Rocky Mountains of the U.S., the complexities in dealing with velocity variations indicate that the incredibly detailed 3-D depth images that are possible with Gulf of Mexico data are probably still a long way off with land data.

Clearly, then, velocity analysis is a large problem in imaging complex land data. Unfortunately, there may be no “easy” solutions for determining the required velocity information, in the sense that there may be no low-cost solutions. Perhaps the only way to obtain proper images is with 3-D prestack methods, which will require expensive processing. This in turn will require more data acquisition, and in mountainous areas with complex surface terrain, that means more money must be invested. In some situations the required expense may be so great that it will not be justified.

The economics of exploring in difficult land areas may leave the geophysicist with the challenge of coming up with innovative and affordable solutions to the problems that the data present. This may take some truly innovative thinking. For instance, it may be unrealistic to expect that all noise can be taken care of during processing. Prestack migration can produce vast improvements over older imaging methods, but it is definitely not designed to be a noise suppressor. Something like weighted stacking works much better as a noise attenuator, but weighted stacking works against migration fidelity. Noise ends up getting in the way of our theoretical developments more often than we would like. Several years ago Bill French came up with the term “omnibus processing” to describe a dream-like, single-step processing system that would have raw seismic records as input and ideal subsurface images as output. In practice this ideal can never be met because of the various evils, such as noise, deconvolution and statics, that get in the way.

One suggestion that perhaps might be taken seriously is that we should try acquiring single-fold data with sources and receivers buried below the near-surface layers. We shoot multi-fold data because it offers redundancy, and sometimes we use the extra information only in order to throw away the bad traces. Perhaps we should be taking steps in the field in order to reduce noise.

For the foreseeable future it appears that what we don’t know will continue to outweigh what we do know. We have to break up the data processing into individual steps, mostly because we lack near-surface knowledge. Headway with difficult land data will be made only with the availability of all tools, plus the knowledge of which tools to use. Progress will come from researchers and from interpreters alike. Throughout, there will be nothing to compete with the expertise of the astute geophysicist who knows which problems to tackle, and which tools to apply to them.

Rodney Calvert (speaker) and Gary Barnes (Shell Canada Ltd.): Starting at the Top

Rodney Calvert, the Manager of Geophysics at Shell Canada Ltd., gave a presentation that focused on statics, which is always a crucial issue in the processing of complex land data. He began by showing two sections from the Canadian foothills. One showed excellent data quality, and the other showed poor (and fairly typical) data quality, even though both datasets were from the same region. The problems with the poor dataset had to do with the near-surface conditions. Obviously then, we need to start at the top when processing foothills data, since that is where the major problems originate.

An obvious question to ask is whether higher-fold data offer better solutions. Calvert answered this question with another dataset acquired in the same location as the original poor dataset, but with 120-fold coverage and a finer (17 m) station interval. A brute stack with only elevation statics already showed improvements over the original dataset. With residual statics, the stack was much better than that of the lower-fold dataset.

Calvert pointed out that the higher-fold data gave better results because it was able to provide some visible signal to start with. Once you have a bit of signal stacking in, then you can work on improving it: you can mix along it, enhance the signal-to-noise ratio, do velocity analysis along it, or make reference traces for statics. If you don’t have any coherence, then you can’t get started.
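As an aside (not part of the workshop discussion), the reason higher fold provides that initial coherence can be illustrated with a minimal numpy sketch: stacking N traces that share a common signal but carry independent noise improves the signal-to-noise ratio roughly as the square root of N, provided the traces are properly aligned. The wavelet, noise level and fold values below are arbitrary assumptions chosen only for illustration.

```python
# Minimal illustration: stacking N noisy copies of a common signal improves
# the signal-to-noise ratio roughly as sqrt(N), which is why the 120-fold data
# provided enough coherence for the statics analysis to "get started".
import numpy as np

rng = np.random.default_rng(0)
nt, dt = 500, 0.002                       # 1 s trace at 2 ms sampling
t = np.arange(nt) * dt
signal = np.exp(-((t - 0.5) / 0.02) ** 2) * np.cos(2 * np.pi * 30 * (t - 0.5))

def snr_after_stack(fold, noise_std=2.0):
    """RMS signal-to-noise ratio of a simple mean stack of `fold` noisy traces."""
    traces = signal + rng.normal(0.0, noise_std, size=(fold, nt))
    stack = traces.mean(axis=0)
    noise_est = stack - signal
    return np.sqrt(np.mean(signal ** 2) / np.mean(noise_est ** 2))

for fold in (1, 30, 120):
    print(f"fold {fold:4d}: S/N ~ {snr_after_stack(fold):.2f}")
# S/N grows roughly as sqrt(fold), so 120-fold is only about twice as good as
# 30-fold -- but that factor can be the difference between some coherence and none.
```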

Does that mean that we have to spend a lot of money on acquisition in order to work in these areas? To answer this important question, Calvert showed the results of an experiment in which the 120-fold dataset was decimated by throwing away 2/3 of the shots and by throwing out all the larger offsets. The decimated, 30-fold dataset was then reprocessed in order to see the disadvantages of working with lower-fold data. The result was that the residual statics did not work nearly as well. The image was severely degraded, except for the edges of the section. However, when the statics from the 120-fold data were applied to the 30-fold dataset, the image was much better. Calvert observed that this result revealed the nonlinearity of the problem; the statics solution requires more multiplicity than we need in order to see what we want to see. Yet with poor signal-to-noise, we cannot get started.

The results of another experiment, where the near offsets were thrown out and a statics solution was obtained with 40-fold data, still showed the same problem. Again the original statics solution imaged the data much better. Shot gathers from the dataset showed lots of noise and rapid statics variations. Single-fold data obviously would not image well because of the poor signal-to-noise ratio. Even after migration, the sparse dataset imaged about as well as the full-fold dataset, with perhaps a bit more noise.

In conclusion, the sparse datasets imaged virtually as well as the full-fold dataset, but only by using the statics solution derived from the full-fold data. The full multiplicity was required not for imaging, but for resolving statics. Long offsets were not found to be a big advantage. Since statics are on the critical path to an image of the subsurface, it appears that we must throw a lot of money into acquisition of high-fold data, or else do research into better statics algorithms.

Questions

Pat Butler (Kelman Seismic): Were refraction statics done on these data?

Calvert: Yes, refraction statics are routinely done on all our foothills data, although this does not always result in an improvement in the data. The problem is that the data do not always fit the assumptions that the refraction statics solution is based on.

Kris Vasudevan (Lithoprobe, U. of Calgary): What method was used for the residual statics?

Calvert: A conventional, correlation-based, Wiggins-type linearized statics analysis was used.
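For readers unfamiliar with the method Calvert refers to, the sketch below outlines the core of a correlation-based, surface-consistent (Wiggins-type) residual statics step: cross-correlate each prestack trace with a pilot trace over a window to estimate a total time shift, then decompose the shifts into source and receiver terms. The single window, the simple Gauss-Seidel decomposition and all variable names are illustrative assumptions, not the production algorithm that was actually used.

```python
# Hedged sketch of a Wiggins-style, correlation-based residual statics step:
# cross-correlate each prestack trace with a pilot trace (e.g. its CDP stack)
# over a window to get a total time shift, then decompose the shifts into
# surface-consistent source and receiver terms by Gauss-Seidel iteration.
import numpy as np

def pick_shift(trace, pilot, max_lag):
    """Lag (in samples) that maximizes the cross-correlation within +/- max_lag."""
    lags = np.arange(-max_lag, max_lag + 1)
    xcorr = [np.dot(np.roll(trace, -lag), pilot) for lag in lags]
    return lags[int(np.argmax(xcorr))]

def residual_statics(traces, pilots, src_idx, rec_idx, max_lag=10, n_iter=20):
    """traces, pilots: (ntrace, nt) windowed data and matching pilot traces;
    src_idx, rec_idx: integer source and receiver index of each trace."""
    shifts = np.array([pick_shift(tr, pl, max_lag)
                       for tr, pl in zip(traces, pilots)], dtype=float)
    ns, nr = src_idx.max() + 1, rec_idx.max() + 1
    s_stat, r_stat = np.zeros(ns), np.zeros(nr)
    for _ in range(n_iter):                 # Gauss-Seidel decomposition
        for i in range(ns):
            m = src_idx == i
            s_stat[i] = np.mean(shifts[m] - r_stat[rec_idx[m]])
        for j in range(nr):
            m = rec_idx == j
            r_stat[j] = np.mean(shifts[m] - s_stat[src_idx[m]])
    return s_stat, r_stat                   # statics in samples
```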

Vasudevan: In cases where refraction statics and linearized residual statics do not work, I believe that a stochastic approach may come to the rescue.

Colum Keith (Esso Canada Resources): What was the magnitude of the residual statics in this dataset, and how did the sparse data solution differ from the high-fold solution?

Calvert: The statics were not very large, maybe only 10 to 20 ms, but with a lot of short and long wavelength structure. In the sparse data solution, the long wavelengths in particular were not derived properly.

Keith: Was there much correlation between the statics and the surface geology?

Calvert: Yes, there probably is, but knowledge of the surface geology is not much help, except to know where the good areas and bad areas are. The trouble is that the existence of a continuous low-velocity surface layer in such an area doesn’t make much sense. Outcropping violates the assumptions of the refraction model.

Rob Stewart (U. of Calgary): Perhaps part of the problem with refraction analysis in such areas is that we are taking a horizontal traveltime and converting it to a vertical velocity. Anisotropy is probably a major influence, and can easily affect results by 10%. Perhaps three-component recording in such areas would be of help in resolving anisotropy and as an aid in noise attenuation problems.

Calvert: You have an interesting point. When we derive a refraction velocity, we should ask: velocity of what? Refracted waves are boundary waves that have nothing at all to do with P-wave transmission velocity.

Vasudevan: What time window was used for the residual statics?

Calvert: I can’t recall exactly, but typically we choose a window where there is signal and where geology has parallel events.

Vasudevan: Unfortunately we have the problem that with a different choice of windows, we can end up with different solutions. I have another question that goes back to Ken Larner’s presentation. What impact does poor signal-to-noise in the data have on what migration method is to be used?

Larner: Migration really isn’t a signal-to-noise enhancing process and shouldn’t be used as such.

Peter Cary (Pulsonic Geophysical): What about aperture? Should a low-dip migration algorithm ever be used in preference to a high-dip algorithm?

Larner: In cases where you know that steep dips are not present, a smaller aperture can help you.

Pat Butler: I would like to point out that if you are going to take into account changes due to the near-surface, then it is better to do it in a time sense, not a depth or velocity sense.

Calvert: Yes, times are measured facts; depths and velocities are interpretations.

Shlomo Levy (Landmark-ITA): The choice of statics window is very important. The danger is that with a deep, narrow window, the CDP term in the statics solution can capture some of the nonhyperbolic moveout that only depth migration can correct for, and so destroy near-surface continuity that we could have gained with depth migration. We need to take a long window so that the benefit of depth migration is not destroyed.

Larner: You probably do not want to restrict the window to a narrow portion of your data, but there is nothing gained by including large portions of noise within the correlation window.

Davis Ratcliffe (Amoco): What kind of lateral velocity variations exist below the weathering layer? Were they significant?

Calvert: Yes, they are significant.

Ratcliffe: We have found that prestack depth migration can help a lot in obtaining a better image in a similar problem area in Pakistan, for example.

Calvert: Yes, but you need to start with some coherent data nonetheless.

Bob Godfrey (speaker), Greg Johnson, Nick Moldeneavu (Geco-Prakla): Imaging Foothills Data

Bob Godfrey, a research geophysicist with GECO-Prakla, presented a case history of the processing of one dataset from the Canadian foothills where several processing techniques were used in an attempt to image the data. A considerable amount of time was spent on the preprocessing of the dataset, which included Green Mountain refraction statics. A comparison was then shown of the stack with residual statics after trace-by-trace spiking deconvolution and after surface-consistent deconvolution. The statics solution in both cases was obtained with a Gauss-Seidel, Wiggins-type residual statics approach. The result with surface-consistent deconvolution showed a considerable improvement over the trace-by-trace result. Next, crooked-line DMO was applied, but the improvement in the stack was only incremental. DMO aided the stacking of some criss-crossing events without destroying the signal-to-noise ratio in most areas. Finally, poststack time migration with an omega-x algorithm, plus some coherency enhancement, gave a “reasonably good” final result obtained by normal processing techniques.
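The distinction between trace-by-trace and surface-consistent deconvolution is the same surface-consistent decomposition idea used for residual statics, applied to log amplitude spectra instead of time shifts. The following minimal sketch shows the general idea under simplifying assumptions (only source and receiver terms, no offset or CDP terms, simple Gauss-Seidel averaging); it is a schematic of the standard approach, not the contractor’s production code.

```python
# Schematic of surface-consistent deconvolution (simplified): decompose the log
# amplitude spectrum of each trace into source and receiver components (offset
# and CDP terms omitted here), then whiten each trace with a single operator
# per source/receiver rather than designing one operator per trace.
import numpy as np

def surface_consistent_spectra(traces, src_idx, rec_idx, n_iter=20, eps=1e-8):
    """traces: (ntrace, nt). Returns per-source and per-receiver log spectra."""
    logspec = np.log(np.abs(np.fft.rfft(traces, axis=1)) + eps)   # (ntrace, nfreq)
    ns, nr = src_idx.max() + 1, rec_idx.max() + 1
    S = np.zeros((ns, logspec.shape[1]))
    R = np.zeros((nr, logspec.shape[1]))
    for _ in range(n_iter):                        # Gauss-Seidel on log spectra
        for i in range(ns):
            m = src_idx == i
            S[i] = np.mean(logspec[m] - R[rec_idx[m]], axis=0)
        for j in range(nr):
            m = rec_idx == j
            R[j] = np.mean(logspec[m] - S[src_idx[m]], axis=0)
    return S, R

def whiten(trace, log_spectrum, eps=1e-3):
    """Zero-phase spectral shaping of one trace; for trace k the caller passes
    log_spectrum = S[src_idx[k]] + R[rec_idx[k]]."""
    F = np.fft.rfft(trace)
    return np.fft.irfft(F / (np.exp(log_spectrum) + eps), n=trace.size)
```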

The next approach was to apply prestack time migration using a Kirchhoff algorithm, outputting migrated gathers that are then used for postmigration velocity analysis. The velocities can then be used either for stacking the migrated gathers, or as improved migration velocities for another iteration of prestack time migration. In this case the migration was not repeated. The stack of the migrated gathers showed the wormy appearance that is characteristic of Kirchhoff migration. A maximum migration aperture of 9 km was used. For this dataset the stack after prestack time migration showed only marginal improvement over the poststack time migration result.
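As a rough guide to what such a Kirchhoff prestack time migration does, the sketch below sums input samples of a common-offset gather along double-square-root traveltime curves within a limited aperture. The constant rms velocity, the missing amplitude weights and anti-alias filters, and all variable names are simplifying assumptions for illustration only; they are not details of the algorithm Geco-Prakla used.

```python
# Illustrative common-offset Kirchhoff time migration: for each image point
# (x, t0), sum input samples along the double-square-root traveltime curve
# within a limited aperture. A production algorithm adds amplitude weights,
# anti-alias protection and a laterally varying velocity field.
import numpy as np

def kirchhoff_co_migration(data, mid_x, half_offset, dt, v_rms,
                           out_x, out_t, aperture=9000.0):
    """data: (ntrace, nt) common-offset gather; mid_x: midpoint of each trace (m);
    half_offset: half the source-receiver offset (m); v_rms: rms velocity (m/s)."""
    ntrace, nt = data.shape
    image = np.zeros((len(out_x), len(out_t)))
    for ix, x in enumerate(out_x):
        near = np.abs(mid_x - x) <= aperture        # limit the migration aperture
        sub, dx = data[near], mid_x[near] - x
        for it, t0 in enumerate(out_t):
            tau = 0.5 * t0                          # one-way vertical time
            t_src = np.sqrt(tau ** 2 + ((dx - half_offset) / v_rms) ** 2)
            t_rec = np.sqrt(tau ** 2 + ((dx + half_offset) / v_rms) ** 2)
            isamp = np.round((t_src + t_rec) / dt).astype(int)
            rows = np.nonzero(isamp < nt)[0]        # stay inside the traces
            image[ix, it] = np.sum(sub[rows, isamp[rows]])
    return image
```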

At this point, the dataset was processed with prestack depth migration. A Kirchhoff algorithm was used, with the traveltimes calculated with the finite-difference eikonal-equation technique of van Trier. In order to obtain the starting depth model, an image-ray migration of the time model was performed, and this model was then smoothed.
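The traveltime tables are the expensive ingredient of a Kirchhoff depth migration. The sketch below is not van Trier’s finite-difference eikonal solver; it is a simple Dijkstra-style shortest-path computation on a 2-D slowness grid, which produces comparable first-arrival times and shows the role the tables play. The grid spacing, the 8-neighbour stencil and the trapezoidal slowness average are assumptions of the sketch.

```python
# Not van Trier's finite-difference eikonal solver: a simple Dijkstra-style
# shortest-path computation of first-arrival traveltimes on a 2-D slowness
# grid, which fills the same role (traveltime tables for the Kirchhoff sum).
import heapq
import numpy as np

def first_arrival_times(slowness, src, dx=10.0):
    """slowness: (nz, nx) grid in s/m; src: (iz, ix) source node; dx: grid spacing (m)."""
    nz, nx = slowness.shape
    times = np.full((nz, nx), np.inf)
    times[src] = 0.0
    heap = [(0.0, src)]
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1),
            (-1, -1), (-1, 1), (1, -1), (1, 1)]       # 8-neighbour stencil
    while heap:
        t, (iz, ix) = heapq.heappop(heap)
        if t > times[iz, ix]:
            continue                                   # stale heap entry
        for dz, dxi in nbrs:
            jz, jx = iz + dz, ix + dxi
            if 0 <= jz < nz and 0 <= jx < nx:
                step = dx * np.hypot(dz, dxi)          # segment length
                t_new = t + step * 0.5 * (slowness[iz, ix] + slowness[jz, jx])
                if t_new < times[jz, jx]:
                    times[jz, jx] = t_new
                    heapq.heappush(heap, (t_new, (jz, jx)))
    return times   # traveltime table for one source location
```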

The starting model for this dataset showed large lateral velocity variations, from 3000 m/s to 6000 m/s. The first prestack depth migration was done with a model derived from the prestack time migration result. The structure in the model was then refined over four iterations with these velocities. At this point the result was given to the interpreter, who modified the velocities, and the migration was performed again. The process was then stopped, and poststack depth migration was applied to the final stack obtained by the “normal processing stream”, with the model derived from prestack depth migration. The result was degraded by a large number of migration artifacts.

In conclusion, the prestack depth migrations were observed to be very sensitive to residual statics. Surface-consistent deconvolution was an important step in obtaining a good statics solution. In addition, modeling software for determining whether the time-depth models are consistent is important. When unbalanced sections are derived, this can lead to changes in the models.

Questions

Davis Ratcliffe: I am concerned with the methodology you used for changing the velocity model. We use well control and any conventional velocity information to derive the starting model. We then iterate 30 to 40 times with prestack depth migration and let prestack depth migration dictate how the velocities should be modified.

Godfrey: What we have done is fix the velocities provided by the interpreter, and then iterate on the structure prestack. In this example, we have iterated on the structure four times. At each iteration we can do some focussing analysis to update the model. This is a method used in a paper by Whitmore with poststack depth migration.

Ratcliffe: To get the structure to stabilize, you may need to iterate 30 or 40 times.

Larner: Have you tried looking at individual offsets to see residual moveout and using that to update the model?

Godfrey: Yes, we have done that on marine data, but not yet on foothills data.

David Klepacki (speaker), Janet Porter-Chaudhry, Colum Keith (Imperial Oil Resources Div.): Improved Interpretation from Seismic Images using Prestack Time Migration: Examples from South Alberta

David Klepacki, a foothills interpretation geophysicist with Imperial Oil, began his talk by pointing out that the interpreter’s job is a lot easier when there are long continuous reflectors to interpret. His talk showed examples where prestack time migration succeeded in giving a better, more continuous, image. Klepacki also reiterated a couple of points from the previous presentations. First, you need good signal-to-noise going into the migration. This often requires a lot of front-end work on statics and velocities. Second, it should be obvious to everyone now that we need to move away from common midpoint stacking in processing complex data.

For illustration Klepacki used some data from the Waterton area of southern Alberta that has good signal-to-noise. The data were processed with prestack time migration because of time and expense constraints. They have found that the ability to perform residual velocity analysis after migration has helped to solve many of the imaging problems due to smear and steep dips. This is not to say that prestack depth migration would not be preferable. Prestack time migration works well with a vertically varying velocity field, but not so well for laterally varying velocity. Depth migration would be preferable, but time constraints at present prevent its routine use.

A shot record from the Waterton area data showed the good quality of the data. Klepacki noted that this area was a fairly good one for data quality. On the other hand, other areas, such as the northeastern British Columbia foothills, are well known for producing data with extremely bad signal-to-noise. A stack of the line after “normal” processing showed large gaps of “no data” zones, where no good reflectors were visible. Using the velocities from this initial result, prestack time migration was then performed on the data. Residual velocity analysis was then performed on the migrated prestack data with the use of common-velocity-function panels. At this point it is important for the interpreter to be involved. A processor might pick the highest energy events, whereas the interpreter might pick events that reinforce a preconceived notion of what the section should show. Probably neither is perfectly right, but hopefully there will be a happy medium. The interpreter probably has a better idea of what the correct interval velocity is at a particular time or depth. The improvement in the Waterton dataset after prestack time migration was dramatic. The improved stack was then interpreted, and a depth model was derived from the time section. To conclude, Klepacki reiterated that prestack depth migration was desirable, but that time constraints inhibited its use right now.

Questions

Ken Larner: How frequently spaced were the velocity analysis positions? Were they close?

Klepacki: Yes, I don’t remember exactly, but we try to do velocity analysis at very close intervals after prestack time migration.

Cary: Is it possible to force the image to have long continuous reflectors, even if the subsurface is really not that way?

Klepacki: It is a possibility. Sometimes there is a choice as to which events to pick, in which case you just hope that you can see your way. It is somewhat of a seat-of-the-pants approach, but hopefully it is obvious when you are abusing the system, and when you are getting closer to reality.

Cary: You stated that prestack time migration gives improved images, but that prestack depth migration would be preferable. Do you think that time migration is eventually going to become extinct?

Colum Keith: I think that we are probably going to want to do poststack time migration on everything. Then, for the interesting areas, we would acquire more data. With the new data we would do the processing with prestack time migration, to get a better understanding of the velocity model. With a line that will be drilled, we would then do prestack depth migration, since drilling is a significant investment. I don’t think that prestack depth migration will be done on every line that comes in house in the foreseeable future. That’s just my opinion.

Calvert: Your results showed that events which were not dipping very much were imaged better with prestack time migration, so the stacking must have been better. Was this just because of closer velocity analysis positions, as Ken Larner said?

Colum Keith: I suspect not. We always do our velocity analysis at close intervals. The prestack time migration does allow a better velocity analysis. Probably prestack time-migration velocities applied even to “normal” processing would produce a large improvement.

Karl Schleicher (Halliburton Geophysical Services): What method of prestack time migration did you use and why?

Klepacki: We used common-offset Kirchhoff migration. Janet may have more to say about why.

Janet Porter-Chaudhry: We have tried other prestack time-migration methods and have ruled them out for various reasons. Kirchhoff is the method we tend to use.

Larry Mewhort (Husky): With the final prestack time-migration image, did you do horizon-based migration with image rays to get the final depth model?

Klepacki: No, we use a Sierra-type depth conversion. We then stick this model into a GEOSEC balanced-section construction tool, and use that to smooth the lines and check the model for balance and thicknesses.

Moshe Reshef (Landmark-ITA): Structural Imaging in Complex Structural Areas

In contrast to the previous talks, Moshe Reshef presented results from working entirely with prestack depth migration. All the final sections were displayed in depth rather than time. Reshef began by emphasizing a few important points that must be taken into account with foothills data. First, the migration algorithm must be capable of handling the irregular acquisition geometry that is used for land data. This can have a big influence on the choice of migration algorithm to use. Next, the algorithm must handle the combination of topography variations and high velocities at the surface. Also, 3-D effects are obviously important, so issues such as crooked lines and energy arriving from out-of-the-plane must be considered.

The sparse, and often irregular, sampling of shot points with land data compared to marine data can have a big impact on the performance of the migration algorithm. With marine data it is possible to use a migration algorithm that works in both the shot and receiver domains, but with land data the common-receiver domain can be severely aliased. It is not unusual to have a jump in the shot interval within the line of, say, a quarter of a cable length. For this reason, migration on common-shot gathers is preferable. In this domain the migration can be localized to specific areas of a line. All the migrations are done from the surface. In addition, results are often improved just by treating the topography correctly. Just applying a static shift and starting the migration from a flat datum isn’t good enough.

Prestack depth migration does not have to be an expensive process. To reduce the processing time, downward continuation and velocity analysis can be performed in layers. It is possible to keep the result from the last layer and use it as the input for the migration of the next layer. In practice several different migration methods are needed: Kirchhoff, phase-shift, and space-frequency finite-difference.
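The layer-recursive idea (keep the wavefield continued to the base of one layer and feed it into the next) can be shown with a minimal phase-shift sketch. The version below works on a zero-offset section with one constant velocity per layer and a crude imaging condition, which are assumptions made purely for illustration; a prestack implementation of the kind described here continues shot and receiver wavefields and handles topography and lateral velocity variation.

```python
# Layer-recursive phase-shift (Gazdag-style) continuation of a zero-offset
# section: the wavefield continued to the base of one constant-velocity layer
# is kept and used as input for the next layer, as described above.
import numpy as np

def phase_shift_layer(wf_kxw, kx, w, v, dz):
    """One phase-shift step of dz through velocity v in the (kx, omega) domain.
    Uses the exploding-reflector velocity v/2 for zero-offset data; the sign of
    the exponent depends on the Fourier-transform convention."""
    kz2 = (w[None, :] / (v / 2.0)) ** 2 - kx[:, None] ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    prop = np.exp(1j * kz * dz)
    prop[kz2 < 0.0] = 0.0                       # discard evanescent energy
    return wf_kxw * prop

def migrate_layers(section, dt, dx, layer_v, layer_dz):
    """section: (nx, nt) zero-offset data; layer_v, layer_dz: velocity and
    thickness of each layer, top down. Returns one (coarse) image trace per layer."""
    nx, nt = section.shape
    wf = np.fft.fft2(section)                   # to (kx, omega)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    w = 2 * np.pi * np.fft.fftfreq(nt, d=dt)
    image = []
    for v, dz in zip(layer_v, layer_dz):
        wf = phase_shift_layer(wf, kx, w, v, dz)     # reuse last layer's result
        # imaging condition at this depth: take the t = 0 value of the wavefield
        image.append(np.real(np.fft.ifft(wf.sum(axis=1) / nt)))
    return np.array(image)                      # (nlayer, nx)
```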

Reshef then showed some data examples. Looking at a single migrated shot profile, you could see the topography variation within a cable length, which shows how important it is to handle topography correctly. The poor quality of a “normal” stack from the area indicated that the data were acquired in a complex, problem area. As described earlier, the interval velocity analysis required for prestack depth migration was performed in an iterative manner, from the surface down. This analysis requires interactive geological interpretation. A method that has been found to be fairly robust, a true CDP panel analysis, is used for picking correct interval velocities. The migrated panels are analyzed locally, not in one pass of the entire dataset.

The CDP panel analysis goes by various names: focusing analysis, common surface-location panels or coherency panels. In any case, the panels are analyzed in the x-z domain, and the correct velocity is determined by the fact that an event should be flat from CDP panel to CDP panel. This point was illustrated with a synthetic example with a flat layer overlying another flat layer, but with the upper layer including a lateral velocity variation. Constant half-space imaging was then performed, which consists of performing the migration at several (say 5) different constant velocities, and comparing the flatness of the layers.

With real data the method is the same except that the constant half-space imaging is performed just down to a certain layer with several velocities. With synthetic data you can see the different curvature of events with the different velocities. Real data can be completely different. The events can have quite a strange appearance. A brute force method is therefore used for picking the velocities. The migrated events in the panels are examined, and subjective judgment is used for selecting the best velocity. Local stacks in that area can also be used to help out. From this information, velocities are picked before going on to the next layer. At different locations on the line the events can have quite a different appearance. Usually velocities are picked at places where the data quality is better. The interpolation between control points also includes some interpretation.
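The “brute force” velocity check Reshef describes can be summarized in a short sketch: migrate the target layer with several constant trial velocities (the migration itself is assumed to happen elsewhere), then score how flat the event is across the common-surface-location panels. The semblance-style flatness measure and the data layout below are illustrative assumptions; in practice, as Reshef stressed, the picking relies heavily on subjective judgment and local stacks.

```python
# Sketch of the panel-flatness check: given panels already migrated with
# several constant trial velocities, score each velocity by how flat (how
# coherently stackable) the target event is, and keep the best one.
import numpy as np

def flatness(panel):
    """panel: (npanel, nz) migrated traces for one surface location and one
    trial velocity. Returns a 0-1 semblance-like score; 1 means perfectly flat."""
    stack = panel.sum(axis=0)
    return np.sum(stack ** 2) / (panel.shape[0] * np.sum(panel ** 2) + 1e-12)

def pick_velocity(panels_by_velocity):
    """panels_by_velocity: dict {trial velocity: (npanel, nz) array of migrated
    panels}. Returns the flattest velocity and the score for each trial."""
    scores = {v: flatness(p) for v, p in panels_by_velocity.items()}
    return max(scores, key=scores.get), scores
```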

The prestack and poststack processing that accompanies prestack depth migration deserves some comment. Can we preserve amplitudes during the processing? In theory, yes we can, but in practice we cannot claim to preserve amplitudes. Prestack depth migration is a structural imaging tool, so everything is done to enhance the structure. Should we include the near-surface velocity model in the migration? In theory, yes, if the algorithm can handle it, and we are confident of the model. In practice this is another matter.

A warning is needed regarding the frequency content of the output from depth migration. The depth sections look different from time sections because the frequency content looks very different. The section is expanded in places. This requires some special attention to the filters that are used after migration. The post-migration mute is also an important issue. Often data can image well outside the original location, so applying a mute can be dangerous. It can act as a dip filter. It is important to keep the migration, stacking, filtering and muting as separate processes. This can prevent destroying the image in some cases.

For quality control, there are a couple of options available. One is to overlay the model on top of the final section. Another is that the common-surface-location panels should be flat all over. This is a strong quality-control tool.

The approach described above has been applied successfully many times in practice. Reshef pointed out that he has found that prestack time migration and prestack depth migration with Kirchhoff algorithms take about the same computation time, so he sees no point in using prestack time migration. Even after prestack time migration you are left with the problem of how to convert to depth. And what are the quality-control tools to use with prestack time migration? Even with the correct velocity model, prestack time migration does not give the correct result.

Questions

Davis Ratcliffe (Amoco): How many iterations did you use?

Reshef: We usually start at a place near a well where there is good control, and typically use less than 10 layers for velocity analysis.

Ratcliffe: Have you worked with plane-wave domain algorithms? We have found that there is not a lot of shallow information in the shot domain, but the plane-wave domain does have that information.

Reshef: No, but we have found that the Kirchhoff algorithm does not give as good results near the surface as other algorithms, although it is sufficiently good for velocity analysis. Once we determine the velocity model from Kirchhoff migration, we then go back and do the final migration with different algorithms which are more accurate and give better results up-shallow. We typically use f-x domain finite-difference for the shallow section, and combine that with another algorithm in the deeper section.

Pat Butler (Kelman): I see a problem arising with your method at a particular location if the velocities around it are not correct. The result at a particular location is influenced by results at locations beside it.

Reshef: Then the events won’t be flat, but the quality-control tool still works. Velocities are defined over a range, so it is not perfectly local.

Butler: But what if there is a fault? The question is how do you place the fault?

Reshef: When there is a fault, then even with the correct velocity, the panel won’t be flat. The event will be displaced on either side of the fault. That’s another issue. It is a problem. We play with the location of the fault until it is flat on both sides. So we use a structural model and a velocity model.

Butler: The problem is that you need to know the location of everything before you can change the velocities.

Rodney Calvert: How accurate do the velocities have to be? Do you not find that errors up-shallow cascade down to lower layers?

Reshef: True. We try to do the shallow part as accurately as possible, but we don’t always succeed.

Calvert: The model you derived for the real data example looked remarkably simple. There were maybe four layers with complex geometry on them, and constant velocities within each layer.

Reshef: Yes, it was as simple as that.

Calvert: Sonic logs never look like that.

David Aldridge (University of British Columbia): I have a very general question. Does the industry consider reflection travel-time tomography a potentially useful technique for getting velocity models?

Shlomo Levy (Landmark-ITA): The trouble is that you have to pick traveltimes on 100% records. The mechanism to pick them is not available. If you have a good stack, you can start your picking, but if you have a good stack, then depth migration will do as well to get the background velocity.

Brian Link (Kelman Seismic): What about off-line events? How do they affect the final result?

Reshef: If the off-line event has a similar velocity to everything else at that level, then it will come through on the image, and you can only hope that the interpreter recognizes it as an anomalous event. If it has a different velocity, then you can discriminate against it with the velocity analysis. The only complete solution is 3-D imaging.

End