Brian Russell holds a B.Sc. from the University of Saskatchewan, an M.Sc. from Durham University, U.K., and a Ph.D. from the University of Calgary, all in geophysics. He worked for Chevron, Teknica and Veritas before co-founding Hampson-Russell Software with Dan Hampson in 1987, a company that develops interactive seismic analysis software for the oil industry. Hampson-Russell is now a subsidiary of CGG, where Brian is Vice President, Software and a CGG Fellow. His research interests include rock physics, seismic inversion and seismic attribute analysis. Brian is a Past-President of both the Society of Exploration Geophysicists (SEG) and the Canadian SEG (CSEG) and has received Honorary Membership from both societies, as well as the Cecil Green Enterprise Award from SEG (jointly with Dan Hampson). Brian is a director on the CSEG Foundation Board, Chairman of the Board of the Pacific Institute for the Mathematical Sciences (PIMS) and also an Adjunct Professor in the Department of Geoscience at the University of Calgary. He is registered as a Professional Geophysicist (P.Geoph.) in the Province of Alberta.
Doing mathematics is like exercising. Do a little bit every day and you stay in shape, either intellectually (in the case of math) or physically (in the case of exercising). Neglect it and your muscles (intellectual or physical) fade away.
Geophysics is a hard science. By that I mean that it is a science based on ‘hard’ facts, but also that it can be difficult. We all struggled through tough math and physics classes at university to get our degrees. But once we were in the working world, especially if we became seismic interpreters, we tended to leave the details to the specialists. Indeed, picking up a copy of Geophysics and trying to read every article is a daunting task. And I do not expect that every exploration geophysicist should be able to understand the latest implementation of Green’s functions in anisotropic depth imaging. However, I do think that an appreciation of some of the fundamental applied mathematical ideas in our profession can go a long way towards enhancing your enjoyment and appreciation of your day-to-day job.
Two examples
Let me illustrate my point with two equations. Let us start with:
d = Gm
where d is our data, a set of n geophysical observations, m is our model, a set of k model parameters, and G is a linearized relationship that relates the observations to the parameters. This ubiquitous equation can be found in every area of geophysics, from seismology through potential fields to electromagnetic theory. The simplicity of the way I have written the equation hides the fact that d is usually written as an n-dimensional vector, m as a k-dimensional vector, and G as an n row by k column matrix.
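To make this concrete, here is a minimal NumPy sketch of the forward problem d = Gm for a toy two-parameter model, in the spirit of a two-term AVO fit; the operator, angles and parameter values are illustrative assumptions, not anything from a real survey.

```python
import numpy as np

# Toy forward problem d = Gm: n = 5 observations, k = 2 model parameters.
# G is the linearized operator for a two-term fit R = A + B*sin^2(theta),
# so its columns are [1, sin^2(theta)]; all numbers are illustrative.
sin2_theta = np.array([0.00, 0.08, 0.17, 0.25, 0.33])
G = np.column_stack([np.ones_like(sin2_theta), sin2_theta])  # n x k matrix
m = np.array([0.05, -0.10])   # model vector: intercept A and gradient B
d = G @ m                     # data vector: the n predicted observations
print(d)
```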
Solving the equation is a little more difficult. Since n is usually greater than k, the damped least-squares solution can be written:
m = (GᵀG + λI)⁻¹Gᵀd = C⁻¹h
where C is the autocorrelation matrix found by multiplying the transpose Gᵀ by the G matrix (and adding a little pre-whitening in the form of λ times I, the k by k identity matrix), and h is the zero-lag cross-correlation vector, found by multiplying the transpose of the G matrix by the data vector. Again, this equation, sometimes called the Normal Equation, is ubiquitous in geophysics. It is the basis of deconvolution, AVO attribute analysis, post- and pre-stack inversion, refraction and reflection statics, and so on. So, what lesson should we take away from these equations?
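Continuing the same toy example, here is a short sketch of the damped least-squares solution; the pre-whitening factor λ and the noise level are arbitrary assumptions chosen only to show the mechanics.

```python
import numpy as np

# Damped least-squares solution m = (G'G + lambda*I)^(-1) G'd for the
# same toy two-term fit; noise and damping values are illustrative.
sin2_theta = np.array([0.00, 0.08, 0.17, 0.25, 0.33])
G = np.column_stack([np.ones_like(sin2_theta), sin2_theta])
m_true = np.array([0.05, -0.10])
rng = np.random.default_rng(0)
d = G @ m_true + 0.005 * rng.standard_normal(sin2_theta.size)  # noisy data

lam = 0.01                                # pre-whitening factor lambda
C = G.T @ G + lam * np.eye(G.shape[1])    # autocorrelation matrix + lambda*I
h = G.T @ d                               # zero-lag cross-correlation vector
m_est = np.linalg.solve(C, h)             # solve C m = h (no explicit inverse)
print(m_est)                              # should be close to m_true
```

Note that solving Cm = h directly is faster and numerically safer than forming C⁻¹ explicitly, which is why most implementations avoid the explicit inverse.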
My advice
The way that you react to these equations tells me a lot about you as a geophysicist. If you are thinking: ‘what’s the big deal, I use those types of equations every day,’ you probably don’t need my advice. If you are thinking: ‘yes, I saw those equations once in a class, but haven’t thought about them for years,’ perhaps I can inspire you to look at them again. On the other hand, if you are thinking: ‘why would I ever need to use those boring-looking equations,’ you are a tougher challenge! I would recommend starting with these equations and really trying to understand them (perhaps you will need to dust off your linear algebra, and I recommend the book by Gilbert Strang). Then, pick up a copy of Geophysics, or any geophysics textbook, and see how many of the equations can be expressed in the same way. Or, take some quantitative industry training courses and see what the mathematics is really telling you about your data.
I guarantee it will be good for you!
Q&A:
Brian, by saying ‘don’t neglect your math’, I guess you are essentially saying that math is a tool we use to learn geophysics, and so we as geophysicists should not shy away from it. At the other extreme, a heavy dose of math may put people off, and there is a huge need to be able to explain difficult math concepts in simple terms. Could you please elaborate on this?
This is an interesting question because although we know that the underlying principles of geophysics are mathematically based, and although we spend a lot of time working through these equations at university, many of us stop using any mathematics at all when we join the industry and essentially just look at the data. This is good up to a point, but what I am saying in this article is that you should continue to keep up with the mathematical/physical principles. In the article I use generalized linear inversion and give everyone a “homework” assignment to see how often it is used in seismic data analysis. I could have used a simpler expression like the NMO equation or the AVO equation. I feel it is important to know these key equations and how their application affects your data and its interpretation. But how much math is too much? I agree that the journal Geophysics has become too specialized for the average interpreter to read, and I would not expect most of our members to slog through the details. I agree that we need more readable descriptions and, luckily, there is a new journal coming out from AAPG and SEG called Interpretation, which will fill this gap.
In your article, ‘See the big picture’, you say ‘An integrated project in any area of geophysics involves data acquisition, modeling, analysis and interpretation…, in the 21st century no one person can be a specialist in even a sub-set of these different areas’. In my mind, any geophysicist is still well-versed in most of these areas, if not all. Would you agree?
I agree that every exploration geophysicist should have a basic understanding of every aspect of the exploration process, from acquisition through processing to interpretation. But each area has become so complex that, unlike forty years ago, nobody can be on top of all the details. For example, at this year’s SEG, there were literally dozens of papers on new marine acquisition design techniques, so it is very hard for someone who does not specialize in marine acquisition to master it all. Thus, while every geophysicist should understand the principles of marine acquisition, we should not expect that every geophysicist could put together the specs for the latest multi-azimuth survey in the Gulf of Mexico. We have to work in teams and rely on the expertise of other members of the team. That was my point here. I think we are in agreement on the fundamentals.
A little later in the same article, you mention ‘collaboration’ with engineers and geologists to understand how they perceive many of the terms we deal with, such as anisotropy, or the seismic response of shale overlying a carbonate reef. Yes, this will help us understand how they view our problems. I think what is more important is how we can help them quantify their analysis, e.g. by using seismic geomechanics. This, I think, is true interdisciplinary co-operation, and a dire need in unconventional resource characterization. How would you react to that?
I think my answer here is similar to the last one. Although we should attack this problem with our own specialist tools, such as azimuthal AVO, we should make an effort to understand the language of the engineers, and to communicate our results to them in their language. For example, it is fairly easy for us to transform our measurements from P and S-wave velocity and density to Young’s modulus and Poisson’s ratio. In doing so, we make a step towards “speaking” the same language as the engineer.
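As an illustration of that transformation, here is a minimal sketch using the standard isotropic relations; the velocity and density values are assumptions chosen only to show the calculation.

```python
# Convert P-velocity, S-velocity and density to Young's modulus and
# Poisson's ratio using the standard isotropic relations; the input
# values below are illustrative, not measured.
def moduli_from_velocities(vp, vs, rho):
    """vp, vs in m/s, rho in kg/m^3; returns (E in Pa, Poisson's ratio)."""
    mu = rho * vs**2                                      # shear modulus
    nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))  # Poisson's ratio
    E = 2.0 * mu * (1.0 + nu)                             # Young's modulus
    return E, nu

E, nu = moduli_from_velocities(vp=3500.0, vs=2000.0, rho=2400.0)
print(f"E = {E / 1e9:.1f} GPa, Poisson's ratio = {nu:.3f}")
```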
Many individuals have embarked on a lifetime of learning in the recent past; however, the need for it has now increased severalfold. I think that exposure to inter-disciplinary ideas within asset teams in oil and gas companies is a good way to learn, as is readily accessible literature and continuing education courses. Is there a way to somehow quicken this process? How about the organization of inter-disciplinary workshops such as the SEG/AAPG/SPE URTeC held in Denver last August?
I am all in favour of this and, although I did not get to the workshop you mention, I have been to several inter-disciplinary workshops. However, what I always observe at such workshops is that there is never a totally equal mix of representatives from all the disciplines and that there is usually one main organizer whose discipline is overwhelmingly represented. (For example, I would love to see the attendance list for the Denver conference and see if the mix was equal thirds among geophysicists, geologists and engineers.) Having said this, we definitely need to carry on trying to improve the communication among our disciplines, and this is a good way.
You mention as an example how one could begin to understand anisotropy in terms of Thomsen’s parameters, what they mean and how these details are relevant to the particular task being done. Let me take it from here and ask you this: for shale resource plays, while shales have intrinsic anisotropy (which is a case of VTI), the presence of kerogen in shale source rocks enhances this anisotropy (as enunciated by Vernik, 1994), so that it is now a case of strong anisotropy. Do you think Thomsen’s analysis of weak anisotropy would be adequate in characterizing shale source rocks? If not, what type of analysis is required to handle strong anisotropy?
I think that Thomsen’s theory of weak anisotropy is adequate considering all the other factors that have to be taken into account. However, his theory for VTI anisotropy has to be extended to other forms of anisotropy such as HTI, TTI, and orthorhombic, as shown by researchers such as Rüger, Tsvankin and Grechka in several papers. Also, when modeling the data we do not need to use the Thomsen approximation but can use the full form of the stiffness or compliance matrix.
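For reference, Thomsen’s parameters for a VTI medium follow directly from the stiffness coefficients, as this small sketch shows; the Cij values are assumed, shale-like numbers rather than measurements.

```python
# Thomsen's weak-anisotropy parameters for a VTI medium in terms of the
# stiffness coefficients (Thomsen, 1986); inputs in Pa, values assumed.
def thomsen_vti(c11, c33, c44, c66, c13):
    epsilon = (c11 - c33) / (2.0 * c33)              # P-wave anisotropy
    gamma = (c66 - c44) / (2.0 * c44)                # S-wave anisotropy
    delta = (((c13 + c44)**2 - (c33 - c44)**2)
             / (2.0 * c33 * (c33 - c44)))            # near-offset NMO term
    return epsilon, gamma, delta

eps, gam, dlt = thomsen_vti(c11=40e9, c33=30e9, c44=10e9, c66=14e9, c13=12e9)
print(f"epsilon = {eps:.3f}, gamma = {gam:.3f}, delta = {dlt:.3f}")
```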
As an extension of the previous question, if there are natural fractures in the shale source rocks (which form as the kerogen matures), it becomes a case of HTI. So the situation is complicated in that strong VTI anisotropy combined with HTI is now a case of orthorhombic anisotropy. How could one handle such a characterization? Is it being attempted? I may mention that this needs the appropriate mathematical tools to carry out the analysis.
As you can see, my previous answer anticipated this question. (I didn’t read ahead, really!) Yes, my colleague Dr. Jon Downton is working on orthorhombic modeling, as well as orthorhombic analysis using AVAz data, and I know that other researchers are doing the same, such as Professor Boris Gurevich at Curtin University in Perth. For a more detailed explanation, please look up papers by these individuals.
In shale resource rocks, pre-stack impedance inversion is usually carried out and the spatial variation of the derived attributes is examined to locate the sweet spots. A lowering of P-impedance, for example, is usually taken to imply large porosity. This implication may be flawed: such a lowering could be due to large porosity, to softer mineralogy, or to a small pore-aspect ratio, and one or more of these properties could be influencing the P-impedance. How does one resolve this problem?
I agree that just a pre-stack inversion in shale plays may not be enough. As discussed by Goodway et al. (2010) in The Leading Edge (TLE), the use of pre-stack inversion (in their case, where the parameters have been converted to Lambda-Rho and Mu-Rho) gives good discrimination between ductile and non-ductile shales in the isotropic case. However, to evaluate parameters like closure stress, especially in the anisotropic case, the authors recommend using either shear-wave splitting measurements or AVAz. My colleagues at Hampson-Russell in Houston also wrote a TLE article (Sena et al., 2011) in which they show that a number of attributes besides those from pre-stack inversion (including AVAz attributes) need to be utilized when analyzing resource shale plays. In their case they used a multi-attribute transform to combine the attributes and predict sweet spots for drilling.
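For readers who want to see the Lambda-Rho/Mu-Rho step itself, the conversion from inverted P- and S-impedance is a one-liner, as in this sketch; the impedance values are assumptions for illustration.

```python
# Goodway-style Lambda-Rho and Mu-Rho from P- and S-impedance:
# mu*rho = Zs^2 and lambda*rho = Zp^2 - 2*Zs^2; inputs are illustrative.
def lmr_from_impedances(zp, zs):
    """zp, zs: P- and S-impedance in consistent units; returns (lambda*rho, mu*rho)."""
    mu_rho = zs**2
    lambda_rho = zp**2 - 2.0 * zs**2
    return lambda_rho, mu_rho

lr, mr = lmr_from_impedances(zp=8.4e6, zs=4.8e6)   # (m/s)*(kg/m^3) units
print(f"lambda*rho = {lr:.3e}, mu*rho = {mr:.3e}")
```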