Better known for his work on seismic migration, Bee Bednar, a mathematician turned geophysicist, has had a long and interesting career spanning more than three decades. Presently serving as Vice-President at Advanced Data Solutions, a Houston-based company specializing in depth migration, Bee was in Calgary recently. The RECORDER was lucky to meet with Bee for an interview and to learn more about him.
Peter Cary, a well-known name in the seismic industry, very sportingly agreed at short notice to participate in the discussion. The following are excerpts:
[Satinder]: Bee, tell us about your educational background and professional experience.
It’s pretty strange — a bachelor’s degree in music, and a Master’s and a Ph.D. in mathematics from the University of Texas. I graduated in 1968, and I did classified anti-submarine warfare research. You could not publish or even tell anybody what you worked on. I got into the oil business when I took a job at the University of Tulsa. I was hired as a consultant to show the owner of CEJA Corporation how to go from analog to digital acquisition. In submarine warfare everything we were doing was digital while, at the time, almost nothing was digital in seismic acquisition. The owner was a geologist with a Ph.D. from Harvard, so he taught me pretty much everything I know about geology. He got me into the oil business.
[Satinder]: Tell us about some memorable experiences in your geophysical journey.
The best one occurred while I was chauffeuring Enders Robinson, George E. P. Box (a very famous English statistician) and Manny Parson to a time series conference in Iowa. I figured if I had a wreck and killed them, I would set science back a few decades <laughter>! That’s the most memorable one. Another one was finding out that the geophysical industry had a much more technological bent than I gave it credit for. I did not know anything about it. What I did know was kind of distorted, but I had the great fortune of visiting Conoco’s Ponca City research lab in 1974 and having a then very young R. H. Stolt explain migration to me. I didn’t have a clue as to who Bob Stolt was, but I can vividly remember meeting him. I thought everybody in the geophysical business was a geologist, and so here I had a superb physicist in an oil company research lab explaining things in mathematical physics that I could get into. This was great. Oil companies paid more than research mathematicians could get at the time.
[Satinder]: When you look back, how does it feel when you think of the drastic pace at which computational power has mushroomed and 3D exploration has become so pervasive?
I go back to the days when computers had 256 23-bit words of memory that was really a continuous loop of magnetic tape. One could program in something called absolute machine language. No FORTRAN, no C, and not even an assembler. It took days to calculate simple things like square roots. One couldn’t even think of running any of the kinds of applications we run today. The first “fast” computer I had access to occupied almost one floor of a secure building that almost no one could enter. It was illegal for anyone without top secret clearance to go into that room. It was in a government lab with an armed Marine guard at the door. In contrast, the first Apple computer I owned was 1000 times faster than that machine and sat conveniently on my desk. While this all made a big impression, the pace of change didn’t really hit me until we started to build cluster systems out of PC’s that were as fast as some of the big Cray and IBM mainframes. It was then that I realized if you stuck a big bucket load of these things together you could probably construct one of the world’s fastest computers at a fraction of the cost of those mainframes. It really wasn’t the speed of the machine as much as the fact that I could build one so cheaply. My feeling now is that Moore’s law is really valid, and the exponential growth in computer speed has not yet run its course. We are going to build a new generation of computers that are smaller and faster than the current PC crop. So, in a very small space you are going to have a machine that will be powerful enough to solve almost any kind of imaging problem you have.
Right now we solve acoustic problems. Soon we will routinely solve elastic problems, and then full waveform, and so on. You will be able to see how well your current guess of the subsurface fits your data and still have time to iterate four or five times to improve it. Maybe even do stochastic hypothesis testing. I think it’s truly a revolution. What this power does for us is provide a platform on which we can evaluate all our theories against measured data. All of the advanced technologies we know about, the wave equation and so forth, become possible in the most general mode. In my mind it makes no sense to propose a new theory if you aren’t willing to compute it. The computer revolution makes it possible to run many new things fast enough to make hypothesis testing possible. So, if somebody says his answer is the best, you can test it and see whether it is or not. If you think your model is the correct one, you can run a simulation to see if it matches your recorded data.
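As a rough sketch of the kind of test described above: simulate data from your current model of the subsurface and measure how well it matches the recording. The function name and the normalization below are illustrative only, not a specific tool mentioned in the interview.

```python
import numpy as np

def relative_misfit(simulated, recorded):
    # L2 misfit between a simulated shot gather and the recorded one,
    # normalized by the energy of the recorded data. Both arrays are
    # (nsamples, ntraces); a small value means the model explains the data well.
    return np.linalg.norm(simulated - recorded) / np.linalg.norm(recorded)
```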
[Satinder]: Apart from this, what other areas of geophysics can you put in this category?
I think we are going to do more integrated inversions, where a variety of measurements, including electromagnetics and gravity, along with our seismic measurements, are simultaneously inverted for more and more realistic parameters. I have been a big sci-fi fanatic for years and years. When Spock runs his sensor scan and then tells you from orbit which kind of planet it is, how many life forms live on it, how much iron there is, where oil is, and where gas is, his computers are doing some kind of integrated inversion. I don’t think we will ever see a lot of this, but things like weather prediction fall into this category. While most of us GeoWizards pride ourselves on the amount of data we have and how much computation we must do, we don’t really consider what our atmospheric colleagues are doing. In fact, their computational load is much higher than anything we have. What these scientists can do is accurately predict the weather about 6-7 hours ahead of time. The only problem is it takes about 3 weeks of computing power to do it. By the time you predict, the weather is three weeks old. You know what the answer was before you predict it. When the machines get just 21 times faster, we will be able to predict the weather 6-7 hours ahead of our measurements. So again, the computer becomes the enabling tool that produces the next generation of technologies.
[Peter]: Have you always had a love for computing?
Pretty much. I think it comes from playing all those musical “computers.”
[Peter]: Does it stem from your early education?
It stems from a challenge. I am trained as a theoretical functional analyst, so I knew nothing about applications. I had an office with a petroleum engineer who was getting a degree in numerical analysis. He was a superb programmer, knew a lot about computers, and he basically told me, “Well, you are pretty smart Bee, but you could never pass this computer class.” So I took the computer class and got hooked. That’s how I got into it. I like computers; they let me do things, they let me test things and ideas very quickly and easily, and I like that. It seemed to me to be much more fun than proving theorems that had little or no impact on anyone.
[Peter]: Do you consider yourself a computer scientist first, and a geophysicist or mathematician second?
It is probably computer scientist, mathematician and then geophysicist, although I like the geo- part because of all the geology around you. In Moab, Utah, spectacular geology is everywhere you look. If you have any scientist in you, you want to know as much about it as you can. You learn about the surface geology, and then you want to know what the subsurface is like. It takes acoustics to see it so then you want to know about that. So, it’s a natural progression from observation to analysis to the tool used to visualize. I think I viewed computers the same way. I saw them do things. I learned what was inside them and then I learned how the fundamental equations are programmed into them to make them into a totally different kind of machine. I saw and revelled in the general kinds of problems they would solve. I guess the computer knowledge provided a kind of left hand turn from pure theory to a broader picture that I found much more fun and interesting. Programming a computer lets you create in much the same way that playing music lets you create.
[Peter]: Whatever happened to music?
I directed a church choir for 24 years, so that did not go away <laughter>. I have enough arthritis in my fingers that I can’t play piano very long or very well, but I can still play. I have two pianos, two keyboards, a trumpet, two accordions and something else which I can’t remember <laughter>; a lot of stereo equipment too.
[Peter]: I am interested in your employment career; you’ve worked in a number of different places in the industry.
I started with the Defence Research Lab at the University of Texas that later became TRACOR. The Vietnam War was in full swing, and I was given the task of trying to digitally simulate chaff dispensers for F-4 Phantoms. We were trying to simulate what happens when you fire chaff out of a plane traveling faster than 600 mph. What is supposed to happen is that the chaff explodes into a highly reflective ball. The plane dashes through the ball, and what a tracking radar sees is this great big ball. As the ball expands and slows down, or the plane turns, it separates from the plane and provides a much more reflective target for enemy missiles. Any radar tracking device fired at the plane targets that ball instead of the plane. So we were trying to use the simulation to maximize the chance of a miss. At some point in time, I don’t remember when, I helped one of the engineers trying to do something similar with a sonar device. I showed him how to make his program work, so they shifted me to anti-submarine warfare (ASW). When I finished my Ph.D. in 1968, the company found itself in a lot of financial trouble. I was under the impression that teaching might be what I really wanted to do, so I took an assistant professorship in the Mathematics Department at Drexel University in Philadelphia. I thought I was out of the computer business forever. One can’t always be right!
I had a friend at the University of Tulsa who, on the coldest day of the year, would call me and say, “The temperature in Tulsa is 50 degrees; what’s the temperature in Philly?” Of course it would be 6 degrees or less! So he enticed me to move, and that got me into the oil business through the consulting arrangement I discussed previously. After I had gotten to know Bob Stolt reasonably well, I managed to talk Conoco into letting me consult on a once-a-week basis. I guess that as I got to be better known, John Shanks hired me at Amoco on a similar consulting basis. Bill Clement, who left Amoco to become a manager at Cities Service, hired me as Seismic Research Manager there. He then moved to Amerada Hess and took me with him.
Throughout all of this, I had wonderful people working with and for me. Art Weglein, John Anderson and Bob Keys at ExxonMobil were and are really good scientists and friends. In addition, Val Hebertsen, Gerald Neale, Yonghe Sun, Fu Hao Qin, Steve Checkles, Vic Forsyth, John Weigant, and others at Amerada Hess taught me 90% of the geophysics I know. Because of Tulsa’s position in the energy business, I also got to know people like Sven Treitel, a prince of a guy, and Enders Robinson, who certainly was at the foundation of things. It just progressed after that.
When Bill Clement left Amerada Hess I took over as manager of his group. Hess was one of the companies that would give you a minimum amount of resources to get a maximum amount of work done. On the surface one might consider that to be bad, but it turned out to be good. It forced us to stumble onto things. We were trying to do seismic processing in-house with insufficient resources. We stumbled onto an Apollo DN-10000 workstation and discovered that if we ran a string of UNIX processes strung together through UNIX pipes, one application would execute on one CPU and the other on the second. Somebody suggested that you could pass the data over the network and run another application on a second workstation.
Now, I never have understood time migration. I could derive the depth migration stuff but never really understood why anyone would even want to run time migrations. Every contractor we dealt with in Canada or the US simply did time migrations. According to them it was too expensive to run depth migrations, so we started doing everything in depth. Just to tell you how crudely we parallelized: because we could simultaneously image different offsets on any number of machines, ten workstations would migrate a 2D line relatively quickly. Sam Gray was one of the people doing cutting edge research in this, and by keeping close tabs on his work, we learned how to do these migrations faster and faster. By 1989-1990 almost every 2D line processed in-house was depth migrated. By 1993 we were doing small 3Ds on SP1’s and later SP2’s. By 1994 we could do 4 Gulf of Mexico blocks a month on our SP2.
I worked at Amerada for 14 years until we eventually got a VP who decided in-house computers were too expensive as a fraction of the per-barrel costs. I retired a year later and went back to consulting for Conoco in Ponca City. During this period of consultation, I met Rade Drecun, the president of ADS, at one of the SEG meetings in Dallas. He said come by and talk to us sometime. So in February 1998 I was made an offer. ADS was running all their stuff on HP mainframes and I talked them into Linux-based PC clusters, and the rest is all history. Now we run everything on clustered PC’s. So that’s the kind of background I have, more of a stumble rather than an organized walk. Maybe a bit of Brownian motion.
[Satinder]: You had a company, 3D Bee Tech – tell us about that.
It was a consultancy. I also have an anisotropic ray tracer developed by myself and Tariq Alkhalifah. I still have it, but if somebody wants it, 3DBee Tech can sell it.
[Satinder]: Yes, you had a paper with Tariq at the 1999 SEG.
That’s interesting. I did all the processing work and Tariq did all the theory and programming. I thought he should do the presentation. There is no question that he is THE expert on what that paper was about. But he gently refused and said he might not make the meeting. So I agreed to do it, and later he shows up at the meeting! I teased him a lot about this. Even called him a wimp for not doing the talk. Regardless, he’s a wonderfully intelligent and friendly fellow. He published a paper in 1998 in Geophysics on time anisotropy that details the raytracing equations used for the results in my paper on Seeing the Invisible in The Leading Edge.
[Satinder]: Your son is also a geophysicist?
He is a quantum optician. Oh, don’t call him a geophysicist! I think one thing he got from me was not the geophysics but the love of computers. He is really good at computers. He makes me look like somebody who knows nothing about computers. He has a Ph.D. in quantum optics, but I think his first love is still computers.
[Peter]: Is Geophysics a field you would like to steer your son towards? Are you optimistic enough about the future of geophysics to do that?
Jerry Ware, an MIT Ph.D. who wrote one of the defining papers on 1D seismic wave equation inversion (scattering theory type inversion), had what I think was one of the better views on this. Once I was complaining about the fact that I felt unqualified to work in geophysics because I did not know much about geology or rocks. He told me not to worry about that: get a degree in the fundamentals, physics, mathematics, whatever, and he would teach me all the geophysics I needed in six months or less. I think you need to be well rounded in the fundamentals of whatever you take. If you want to be good at science you must start with first principles. You can’t be too particular about what you learn, because you never know what you are going to need when you have to solve a problem you’ve never seen before. I should qualify this. I am not concerned about getting a degree in geophysics. I don’t care about that. My concern is the oil business itself. It may suddenly die one day and that bothers me. Maybe it won’t. But it looks to me that eventually people in the oil patch are going to find themselves in a dead business. It may be 100 or more years from now, but it still bothers me because of the labeling that gets attached to working as a geoscientist. You get this “geo” label and immediately other research opportunities are closed. If you have the foundations, you have the ability to change to whatever you think you are capable of doing. That’s why I like the idea of being educated in the fundamentals.
[Peter]: You probably also see a change in how research is done in the oil business.
Let me count them: Amoco, Cities Service, Mobil, Texaco, Chevron, Shell… These companies have either merged, folded, or cut research budgets severely. I was just talking to Larry Lines about this. When I consulted at Amoco, Amoco had the finest library in Oklahoma, no question about it. Any kind of book you wanted, they would get it for you in no time. All of that stuff was dumped because, “We don’t want to pay the money for this kind of library,” and to me that was extreme narrow-mindedness, near-sightedness, whatever you want to call it. Most of those so-called executives who were running the company didn’t realize that Amoco was doing depth migrations and finite-difference modeling in the 70’s. Nobody wanted to use it then; it was considered useless and unnecessary. Now such topics are on everyone’s list — those guys knew how to use it 30 years ago! Yeah, maybe they didn’t have the compute power, but they knew how to do it. They understood it practically, and they knew that the big problem was going to be how to determine the velocity model.
The oil companies paid for the knowledge and then threw it away. It’s still going on. Look what’s happening to Phillips and Conoco. When they combine, the thing that is going to suffer the most is research on geophysical imaging. I think that is extreme near-sightedness. Today, building an anisotropic model is a real challenge. I don’t know if anybody really knows how to do it well. You can read Tsvankin’s book until you’re blue in the face, yet how many people have pursued any of the possible methods in detail? I think there is tremendous opportunity for research, but there is no funding coming through. Even worse, the window for performing this research occurred during the last several years of depressed oil patch economics. We need answers now to questions that should have been asked a decade ago.
[Peter]: Do you think University consortia are able to keep up a meaningful level of research?
I am sure they can. Things are cheaper for them. But just look at Jon Claerbout: he went from 75 companies to 30 companies, and that is a wonderful consortium. I haven’t kept up with CREWES, but I’ll bet their situation is not much better. CSM seems to be keeping up technically, but again they’ve got 25-30 companies. Bill Symes’ TRIP consortium at Rice has 4 sponsors. Gary Schuster out of the University of Utah has the same kind of opportunity. I think what that means is consortia will consolidate, just like the oil business. You are going to lose a lot of good research, because these guys are going to have to change disciplines in order to maintain their positions. Sven Treitel has some harsher words than I am willing to give, so that question should be posed to him. Remember, the same people that reduced oil company research budgets also reduced funding to the consortia.
[Satinder]: When I think of depth migration I am reminded of Oz Yilmaz’s words, “Do not spend time in the time domain, try and spend more time in the depth domain.” That was in the mid or early 90’s. How much work is being done in the industry today in terms of depth migration?
My impression now is that everybody wants to get onto the depth bandwagon. It wasn’t too long ago when you could count on your fingers the few oil companies that were doing almost every major oil project in depth: Amerada, Shell, Exxon, Mobil, Amoco, maybe a couple of others. In 1990, there was this inversion conference held in Copenhagen and they threw out this Marmousi data set and said, “Go find the velocity and get the image of this dataset.” Imaging this dataset in time was considered impossible by the usual post-stack migration route. Shlomo Levy says he can produce an image in time. So can I, but what I had to do to get it required understanding how to do it in depth first. I’ll believe Shlomo, but back then no one could produce a reasonable image through time processing alone. It turned out to be very difficult getting an image of this dataset in depth. But when you went to that conference a stunning thing came out. All the oil companies that participated could migrate that dataset in less than 20 minutes. None of the contractors could migrate it in less than 2-3 days. The technology inside the oil companies was already focused on doing this kind of processing. It was at this same time that Oz made that comment.
There was also a conference that focused on migrations in general. Bill French and Ken Larner were there; Oz might have been there also. The panelists all appeared to take the position that we were many, many years away from even being able to do depth migrations. One of Royal Dutch Shell’s key scientists was also on the panel. Unfortunately, I don’t remember his name, but someone observed that time migration solves 90 percent of the problem, and the Shell guy said, “Yes, but the oil is in the other 10 percent!” So my opinion is depth is always the way to go. But having said that, I don’t think you can get there without going through time. In spite of what Oz said, you have to spend a certain amount of time in time to figure out where you are in time first, and then you can think and figure out where you are in depth. In a similar manner, you have to find out where you are in acoustic depth to figure out where you are in anisotropic depth.
Without going to depth, you won’t really know if there are any serious problems. Most people ask, “Is this a depth migration problem or a time migration problem?” You won’t really know until you do depth migration. In my career, I have watched many people drill wells on time migrated data only to find out that the rig or platform was in the wrong place. This was usually blamed on poor time processing or bad interpretation, but the instant they saw the depth migrated image they knew they had made a significant mistake. While ignoring anisotropic variations might have resulted in depths that did not match the wells, improved lateral positions and better vertical imaging meant more accurate interpretations. They could now recognize where they should have put the platform, but it was just too late. On the east coast of Canada, I hear the Canadian and American companies are beginning to drill very expensive wells. Compared to dry well costs, a depth image is free.
[Satinder]: Do you think depth migration is affordable?
We think it is. In 1993-94, Amerada Hess, Phillips, Western, PGS, and Diamond all bid on a 16 block depth migration project. The input was 16 blocks and the output was 4 blocks. Amerada and Phillips had almost identical bids that were almost exactly ? the cost of the bids submitted by the contractors. The bids from the contractors were close, within 10 percent; but what stunned me was the cost. To do this 16 by 4 you were talking a price in the 1.5 to 2 million dollar range. For these 4 output block images you were talking about 350,000 dollars a block. Today the prices are a bit cheaper. An output block now typically costs about $25,000. What they are charging for depth today is what they were charging for time in 1997, and while the cycle times may not be the same they are getting close.
If you have done the time processing, fine — you have a beautiful time section that had good editing and filters. It’s been decon’d, homogenized, smoothed, stirred and spindled correctly, but there are still things that are going to slow down the depth work and make it more expensive. One challenge in the depth migration domain is the velocity anomalies. More precisely, the severe velocity anomalies. The best example is that of salt. It is very easy to see the top of the salt. What you cannot see is what’s below that top. Even with really good technology we do not see the subsalt as accurately as we think we should. Part of the reason is that most imaging is done using a single arrival Kirchhoff tool. Worst of all, we are using an acoustic algorithm to image our data. When was the last time you picked up a liquid rock? Returning to the focus of the question, depth imaging in this setting is expensive, but absolutely necessary, almost regardless of cost. What makes it difficult is the fact that the interpreters don’t really understand what they are looking at. Their focus is on getting the depth right and not on getting the best image. To them, if the depths don’t match, the image is faulty. Unfortunately, the depth image can be better than the time image, but it gets discarded because the geophysicist does not understand that what he is looking at is just in distorted depth. The result is frequently many expensive iterations with an acoustic algorithm that can never get the depths right. No focus is put on figuring out whether there is anisotropy or not.
[Peter]: Here in Canada, depth migration is not universally accepted, or perhaps to put it better, it’s not seen to be needed over and above time migration. Other people who have been interviewed for the RECORDER have put this on record. Their reasoning is that they have gone through the extra effort and expense of depth migration and in the end it wasn’t worth the time and effort because it didn’t change their drilling locations. What is your response to that?
If it doesn’t change the drilling location and it doesn’t show you anything more clearly, you don’t need to do it. What I am saying is that if you don’t do it, you are not going to know whether you need to do it. There is no way for you or me to sit back and say this is not a depth migration problem until I image it. I have been really surprised by the things depth migration will focus, and where it will pull things, that time migration won’t. But you have to figure out a way to know whether the velocity variation is worth going after. If it isn’t worth going after, then it means you don’t have much velocity variation. If you don’t have much velocity variation, why not run depth migration? It’s not going to be any worse. Don’t spend so much time on the velocities; spend more time on getting the right imaging velocity and then running the best algorithm you have.
Hess was very successful in the foothills. All the wells were placed on depth migrated sections. Out of some 33 wells, 31 were successful. That’s not bad in an area where depth migration is unnecessary.
[Peter]: For example, on Alberta Plains data, you know it’s flat geology. Is there ever going to be a point to running depth migration?
I don’t think you’re going to get a worse answer doing depth migration. Caroline is an example, where I think we got the depth migration to tie the wells the very first time we tried. Amerada walked away from there, I mean we did not do any more exploration there, but the depth image in my eyes was much better than the time version of it. We also did a converted wave survey there, in time, with exactly the same negative result. Maybe we just did post-stack instead of pre-stack migration, but the depth image was crisper and had better resolution.
One of the other things we did was to use depth migrations and wells to build models to simulate various geophysical settings. I do not remember the name of the reef, it was too long ago, but we did this over a Canadian reef that had limited exploration success. The interpreters were worried about incorrect interpretations caused by sideswipe off this 300-400 foot reef. While it was beyond reasonable resolution, one could still see it clearly on the depth migration section, but not on the time section. We also took the velocity model and shot a 3D synthetic survey over it to see whether what we were picking might correlate to sideswipe. As far as we could tell there was no sideswipe. So the 2D lines had to be showing different parts of the reef. I think the reef was still dry but we at least proved the validity of the interpretation.
[Peter]: I know one of the topics people here are keenly interested in is wave equation migration versus Kirchhoff, or Kirchhoff versus non-Kirchhoff. Especially for 3D land surveys, people started off automatically going to Kirchhoff migration because Kirchhoff can apparently handle the variable offsets and azimuths of an onshore 3D survey. How do you see resolving this problem with a wave equation migration?
Well, typically what we do is migrate a shot at a time, so there are no azimuth or offset handling issues. The shot profile migration handles this just fine. The problem is how to refine the velocity when the migration indicates problems. What we do with the Kirchhoff is offset-based common image gathers to QC the accuracy of the velocity field. A different approach, emerging out of Claerbout’s SEP, allows us to produce common opening-angle gathers during the migration. It doesn’t matter which migration you use, you can still produce angle gathers with a reasonable amount of increased complexity. For the most part everybody thinks this is just emerging from Stanford, but Larry Lines and his students as well as Dan Whitmore at Amoco did the same thing in the mid ’80s. The French school at the École des Mines has practiced this approach for several years. They do it with Kirchhoff, but you can also do it with common-azimuth or common-offset wave equation migrations. For shot profiles, you migrate single shots and come out with a suite of angle gathers that are then summed together to form the final opening-angle gathers. I don’t see that Kirchhoff has any advantage at all. You are still handling all the azimuths exactly the way you should. You are still doing a 3D problem.
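To make the shot-profile idea concrete, here is a minimal 2D, constant-velocity sketch with a phase-shift extrapolator and a cross-correlation imaging condition. It is an illustration of the structure only, not ADS’s production algorithm: the function names, the Ricker wavelet and the single velocity are assumptions, and it does not produce the angle gathers discussed above.

```python
import numpy as np

def ricker(nt, dt, f0=25.0, t0=0.1):
    # Ricker wavelet with peak frequency f0 (Hz), delayed by t0 seconds.
    t = np.arange(nt) * dt - t0
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def shot_profile_migrate(data, xs, x, z, dt, v):
    """Phase-shift shot-profile migration of one 2D shot record, constant velocity v.
    data: (nt, nx) gather recorded at the surface; xs: source x position;
    x, z: 1D numpy arrays of image coordinates (z starting at 0).
    Returns the partial image contributed by this shot."""
    nt, nx = data.shape
    dx, dz = x[1] - x[0], z[1] - z[0]
    w = 2.0 * np.pi * np.fft.rfftfreq(nt, dt)          # angular frequencies
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)          # horizontal wavenumbers

    # Receiver wavefield and band-limited point-source wavefield at the surface
    R = np.fft.fft(np.fft.rfft(data, axis=0), axis=1)
    src = np.zeros((nt, nx))
    src[:, np.argmin(np.abs(x - xs))] = ricker(nt, dt)
    S = np.fft.fft(np.fft.rfft(src, axis=0), axis=1)

    kz2 = (w[:, None] / v) ** 2 - kx[None, :] ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    prop = kz2 > 0.0                                   # keep propagating energy only

    image = np.zeros((len(z), nx))
    for iz in range(len(z)):
        # Cross-correlation imaging condition at this depth, summed over frequency
        s_x = np.fft.ifft(S, axis=1)
        r_x = np.fft.ifft(R, axis=1)
        image[iz] = np.sum(np.real(np.conj(s_x) * r_x), axis=0)
        # One depth step of downward continuation; the downgoing source field and
        # the upgoing receiver field take opposite phase signs
        S = np.where(prop, S * np.exp(-1j * kz * dz), 0.0)
        R = np.where(prop, R * np.exp(+1j * kz * dz), 0.0)
    return image

# The partial images from all shots simply add, so each shot can be migrated on
# a separate node and the results stacked afterwards.
```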
[Peter]: You do not worry about data aliasing of the shot between cross lines?
As a matter of fact, what we have observed empirically is that one may not need to migrate every shot. The shot aliasing is not all that bad compared with receiver aliasing, because of what the receivers are doing for you — there is a paper on this by Nolen and Simon in the 1996 SEG abstracts. What they show is that when you are doing the shot migration you get these strange kinematic artifacts. They thought for a long time that these artifacts were due to errors in the algorithms, but they are not. There is a kinematic explanation for each and every artifact. If you look at the shots they migrated, they image a wider piece of each horizontal event, so if you have a shot every 500 m, you get just as good an image and continuity as with a shot interval of 100 m. So shot aliasing is not a problem; receiver aliasing is. You still need at least four points per wavelength to get a good image. What this means is that if the receiver spacing is too wide, the data will be aliased above some frequency. Your usable frequency content is limited by this spacing. Nothing keeps you from filling in zeros and just migrating these traces and adding them up. But I agree that data aliasing is always going to be a problem on land.
The impact of aliasing also depends on how the data is shot. If it is on a regular grid — I do not think it ever is — but if it is on, say, a 15 m x 15 m regular grid, then all shots and receivers fit exactly where they should, and so you just migrate them. If they do not fit this regular grid, you have to do some sort of partial moveout or some kind of Kirchhoff data mapping trick to get them onto that grid and then migrate them shot by shot. And that is how all land data sets have been done. We did do common azimuth on land datasets that had a very wide azimuth variation and that turned out to look very good, or at least as good as time migration. But I won’t recommend it — you can do it if you want, but you are then saying all these things are recorded in one azimuth, and ignoring azimuth variation. You are going to get into trouble eventually.
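A back-of-the-envelope version of the four-points-per-wavelength rule mentioned above; the function name and the example numbers are illustrative, not values from the interview.

```python
def max_unaliased_frequency(v_min, dx, points_per_wavelength=4.0):
    # Highest frequency (Hz) whose wavelength at the slowest expected velocity
    # v_min (m/s) is still sampled by `points_per_wavelength` traces at a
    # receiver spacing of dx (m): lambda = v/f >= points_per_wavelength * dx.
    return v_min / (points_per_wavelength * dx)

# Example: a 1500 m/s near-surface velocity and 25 m receiver spacing keep the
# image unaliased only up to max_unaliased_frequency(1500.0, 25.0) = 15 Hz;
# halving the spacing to 12.5 m doubles that to 30 Hz.
```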
[Peter]: So you think Kirchhoff is going to die away?
I think the use of Kirchhoff is going to be replaced by some other variations, for example something like Gaussian beam. You are still going to have something like a Kirchhoff but it will be much better at handling multiple arrivals and amplitudes. One of the great advantages of Kirchhoff is its great flexibility. One can migrate single traces to any output location. It doesn’t matter what your output or input grids are. If you want velocity analysis at position A, even if you don’t quite have data there, Kirchhoff will give you a common-image gather there, and that flexibility to migrate in any domain in any order is going to keep it around for quite a while. I also think it’s going to be a long time before we get to a computational point where we can run non-Kirchhoff anisotropic migrations. It’s very easy to trace rays to produce an anisotropic Kirchhoff.
[Satinder]: Migration (time or depth) is usually considered a difficult subject, and people tend to shy away, as getting into the wave equation solutions is not everybody’s cup of tea. In a recent interview we had a guy say that a new technology isn’t worth a damn if you are not able to explain it simply enough for people to understand. Do you have any suggestions for people to understand this subject without actually going through the mathematical rigor that it entails?
In one word, “Geometry!” The greatest migration trick I have ever seen in my life was done by Gerry Gardner. He showed that it was possible to migrate data first and then do the velocity analysis and stack afterward. There is not a single wave equation analysis anywhere. You can argue whether what he did was something you want to do, but Shell used it for years as their only prestack time migration. You can explain absolutely every single piece of it geometrically. The same thing is true for Stolt migration. You can explain everything in it from a simple tan(b) = sin(a) rule. In fact, every migration algorithm is easily and completely explained by its impulse responses. That’s all you need to know. You can explain the entire migration scene with virtually no mathematics at all. You have to understand trigonometry and perhaps a little analytical geometry, but you really don’t need to know anything else to understand what’s happening. If you really want to understand the algorithms in detail, like for example Norm Bleistein understands them, then you need all the mathematics. If you want to explain shot domain migration without the math, then I can do that with chalk and a board and nothing else. I think almost all good technology is simple. It’s nice to follow the KISS rule — keep it simple, stupid — because your understanding increases the simpler you make it, no matter how sharp you are. My philosophy is you understand it at the simple level first and then put all the complicated stuff in later.
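For readers who want the rule spelled out, the usual reading of the tan(b) = sin(a) relation is sketched below, with b taken as the apparent dip on the unmigrated, depth-converted time section and a as the true dip after migration; the symbol assignments here are an assumption, not a definition given in the interview.

```latex
% Migrator's rule: b is the apparent dip on the unmigrated, depth-converted
% time section, a is the true dip after migration.
\[ \tan b = \sin a \]
% Worked example: b = 30^{\circ} gives \sin a = \tan 30^{\circ} \approx 0.577,
% so a = \arcsin(0.577) \approx 35.3^{\circ}; since a \ge b, migration always
% steepens dips and moves energy updip.
```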
[Peter]: I know migration is your favorite geophysical topic. You have also worked in other processing steps. Do you think there are other areas that are in need of work?
Multiple attenuation. The best approach is probably due to Art Weglein. I understand the math, I know what he is doing, but I can’t give you a clue how to implement it in 3D and have it output something useful. Even on today’s computers, I can’t see how to get a result that really does exactly the right thing. One of the big things that it demands is zero offset data. This means that one has to do some kind of data mapping or mapping to zero offset. Another thing it demands is point source data acquired everywhere. The data cannot be recorded with any kind of array.
If somebody could invent a 3D acquisition technique with perfectly regular sampling, I might be willing to say that the multiple suppression problem has been solved. Unfortunately, the industry doesn’t seem to want to do this or pay for it. I also don’t think any of us know what to do with converted multiples — something that went down as a P, bounced back as a multiple S, then converted back to a P and was recorded at the surface. The math for this may be available, but I do not think we have the experience to know how to implement it or record appropriate data to allow us to suppress it. We even have trouble identifying such multiples. Since such events give you some kind of image in your data, they represent a serious interpretation problem. I feel that this problem is much more serious in exploration than we know, so my number one thing would be to solve the multiple suppression problem.
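This is not Weglein’s inverse-scattering algorithm, but the simplest 1D illustration of data-driven free-surface multiple prediction, which also hints at why properly sampled, point-source data with a zero-offset trace matters: the data themselves act as the prediction operator. The function name and the adaptive-subtraction note are illustrative assumptions.

```python
import numpy as np

def predict_first_order_surface_multiples(trace, dt):
    # 1D illustration only: for a single deghosted, signature-deconvolved
    # zero-offset trace, first-order free-surface multiples are predicted by
    # convolving the trace with itself; the free surface supplies the sign flip.
    return -np.convolve(trace, trace)[: len(trace)] * dt

# The prediction is then adaptively subtracted from the input (for example with
# a short least-squares matching filter) to attenuate the multiples.
```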
Secondly, we need to try to somehow be sure that our techniques preserve amplitudes, both at the acquisition and prestack processing stage. We do pretty well with that, especially in marine, but on land I’m not sure. I believe you have geophone plant problems, coupling problems, and near surface related material problems that make it almost impossible to argue that the final amplitudes are correct. The best we can currently do is give you an algorithm that for a given input amplitude produces the correct amplitude on output. Both Norm Bleistein and Sam Gray are better experts on this than I am, but I think they would agree that this is a difficult problem.
[Peter]: For marine data, true relative amplitude processing is being done.
I think we can do true relative amplitude processing acoustically, but I don’t think we can do it elastically. Maybe a better statement would be that in marine data we can guarantee that the input amplitudes are true relative amplitudes, but even some implementations of so-called “wave equation” techniques can be shown not to be true amplitude. So I’d agree, if what you have is an acoustic medium, and the only place you can come close to that is the Gulf of Mexico, where you can at least produce true relative amplitude input data. On land or with OBC data, producing true relative amplitude data is much more difficult.
[Peter]: What do you think of all the interest in multicomponent exploration these days, and OBC surveys?
I think it is the right direction to go, but it has some major difficulties. For one thing, it makes marine data into land data again. It’s also not clear to me if interpreters know what to do with it. If I was interpreting I am not sure what I would be doing with it! It’s hard to identify and correlate events with understandable raypaths, reflections, or horizons.
I am aware of the fact that people at BP, specifically some of the Amoco people in BP, are actually running through a proof of concept on this; I would guess the idea is to make images from multi-component data that are separated by propagation constraints. It will be interesting to see if anything gets published and whether anything comes out of it. You mentioned research — it takes money to figure out how to do that, and the contractors are not going to bet that kind of money on a research project of this magnitude! We’ll acquire the data, but then the BP’s are going to have to figure it out and then tell us what they want us to do.
[Peter]: It seems like the contractors nowadays are in a position where they are forced to be leading the research in that area to some degree. Except for the BP’s, there are not many oil companies that are pushing that along.
But the research we do is not enough. We are not going to do the kind of expensive research necessary to provide the next level of interpretive imaging. When the oil companies drill a well, we get little or no information about why they were successful or why they failed. We don’t really know why or even if they want to pay for some new technique. We don’t even know what the interpretation that produced the prospect was based on. If Veritas or Western goes out and records this data, they are going to want to sell it, then they are going to want to sell reprocessing of it and finally get a reasonable return on their investment. I don’t think they will want to stick a lot of long term money in up front to develop the interpretive technology necessary to make it a routine part of the exploration equation. My real guess is it’s still going to be developed inside a laboratory where they can think about it, make all the possible mistakes, and figure it out.
[Satinder]: An accurate velocity model is at the heart of any depth migration. What in your opinion are some of the preferred ways for obtaining an accurate isotropic velocity model?
Well, I have my preferences on what the different ways are, but you have lots of ways, lots of tools to do that. You can do anything from a standard iterative Dix-based Deregowski type loop to a full-blown, one-step, take-all-the-horizons-and-pick-all-the-arrivals tomographic inversion. Those being the extremes, it’s probably safe to say that neither of them will do the complete job. If you can constrain it, you can make tomographic inversion work. If you can’t constrain it, it just won’t produce acceptable results. You can also use coherence inversion to convert from rms to interval velocities, but this can be a slow process. You can do your velocity analysis on angle gathers. With angle gathers, you know what the opening angle is for that particular trace, you know how far the offset is, so you can do much more precise coherence and tomography. So, I think you have a tremendous number of tools. Mihai Popovich once said it takes us three years to figure out how an algorithm works. By that time everybody has an equivalent version, but maybe no one knows how it works. It does take you a long time to figure out how these things work. Once they work you have to explain them to somebody in a simple way — as you mentioned earlier — so that you can get them to use it and prove it in a case study setting. Only then does it become an acceptable part of the modeling stream.
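As a concrete anchor for the Dix-based loop mentioned above, here is the classical Dix conversion from rms to interval velocities; the function name and the numbers in the example are illustrative only.

```python
import numpy as np

def dix_interval_velocity(t0, v_rms):
    # Dix conversion: interval velocity of each layer between successive picks,
    # from zero-offset two-way times t0 (s, increasing) and rms velocities
    # v_rms (m/s) picked at those times.
    t0, v_rms = np.asarray(t0, float), np.asarray(v_rms, float)
    num = v_rms[1:] ** 2 * t0[1:] - v_rms[:-1] ** 2 * t0[:-1]
    return np.sqrt(num / (t0[1:] - t0[:-1]))

# Example: dix_interval_velocity([0.5, 1.0, 1.6], [2000.0, 2250.0, 2500.0])
# gives roughly [2475.0, 2869.0] m/s for the two deeper intervals.
```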
[Satinder]: What do you foresee as near term migration goals that we may see being fulfilled? Where are we headed?
You can have your pick now. You can do Kirchhoff, common azimuth, common offset or double downward continuation and shot profile migrations. All or most of these can be extended to elastic or anisotropic media. Assuming you have the computer resources to solve the particular problem, you can select the one you think most appropriate for your time constraints and accuracy needs and run it. In my mind there are two end points. It’s easy to understand and apply Kirchhoff and it’s more flexible than all the rest. At the other end is shot profile migration. Implementation of Kirchhoff is difficult because of the need to handle amplitudes, phases, and multiple arrivals. In the shot domain, it’s easy to get good amplitudes, but it takes longer to compute. Anything between those two extremes is going to be more complicated to understand because of the approximations required to make them practical. For example, common-azimuth migration is easy to implement and by far the fastest algorithm available, but a lot of assumptions were necessary to get there. You have a double square root equation in the frequency-wavenumber domain. It is not clear how good the approximation is but, except for some missing dips, the images look great. The proof is in the pudding. You have lots of alternatives, but I think the future is really going to be full waveform type imaging. All the work Peter did 10 years before the rest of us on converted wave processing is the kind of thing we need to concentrate on. In mature basins, advanced technology is the only way to find targets you could not see before. Whether it’s done internally or externally, each new application produces new information and improves the chance of success. The next stage in converted wave processing is depth migration. The progression is the same regardless of what the medium is. Every step gives an additional data point.
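The double square root equation referred to above, written in its standard constant-velocity, frequency-wavenumber form; the notation and sign convention here are one common choice, not necessarily those of any particular implementation.

```latex
% One-way depth extrapolation of prestack data P(k_s, k_g, \omega) via the
% double square root (DSR) relation; k_s, k_g are the source- and receiver-side
% horizontal wavenumbers and v the velocity (sign conventions vary).
\[
  \frac{\partial P}{\partial z}
    = i\left(\sqrt{\frac{\omega^{2}}{v^{2}} - k_{s}^{2}}
           + \sqrt{\frac{\omega^{2}}{v^{2}} - k_{g}^{2}}\right) P
\]
% Common-azimuth migration adds further assumptions (a single recording
% azimuth, limited cross-line offset) to make this affordable in 3D, which is
% one place the "missing dips" mentioned above can come from.
```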
[Satinder]: Well, I thank you Bee, for giving us the time and the opportunity to have this discussion. Peter, I thank you for joining us also.
Thank you, it was a real pleasure.