Brian Romans is a sedimentary geologist and assistant professor in the Department of Geosciences at Virginia Tech. He graduated from SUNY Buffalo with a geology degree in 1997 and then worked as a geotech for small oil and gas companies in Buffalo, New York, and Denver, Colorado for a few years. Brian received an MS in geology from Colorado School of Mines in 2003 and then headed to California, where he earned a Ph.D. in geological and environmental sciences from Stanford University in 2008. He worked as a research geologist for Chevron Energy Technology from 2008 to 2011 before joining the faculty at Virginia Tech. Brian’s research on the patterns and controls of clastic sedimentation during and since graduate school has resulted in numerous papers, which you can access at www.geos.vt.edu/people/romans. Brian is @clasticdetritus on Twitter and writes the blog Clastic Detritus, where he shares thoughts and photos about earth science.
It doesn’t take long to get accustomed to the ability to sit comfortably at a computer workstation and cruise around a subterranean world with just a few mouse clicks. The technology we use to observe, describe, and interpret subsurface geology is truly amazing. The advent of 3D seismic-reflection data coupled with immersive visualization software allows us to characterize and, importantly, to conceptualize the heterogeneity and dynamics of reservoirs.
With this power, it’s sometimes easy to forget the scale of geology we are dealing with in subsurface data. The scale of features combined with the type and resolution of data you are looking at can often lead to interpretations that do not capture the true complexity and heterogeneity.
I find it useful to constantly ask myself questions about scale when interpreting the subsurface: How thick is the package of interest? How wide is it? Take a few minutes and do the back-of-the-envelope calculation to see how many Empire State Buildings (about 1 million cubic metres) or Calgary Saddledomes (250 000 cubic metres) fit inside your volume of interest.
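To make that back-of-the-envelope exercise concrete, here is a minimal sketch; the reservoir dimensions are invented for illustration, while the building volumes are the rough figures quoted above:

```python
# Back-of-the-envelope check on the size of a volume of interest.
# The reservoir dimensions below are hypothetical placeholders.
length_m = 8_000      # along-strike extent (m), assumed
width_m = 3_000       # across-strike extent (m), assumed
thickness_m = 50      # gross interval thickness (m), assumed

volume_m3 = length_m * width_m * thickness_m

EMPIRE_STATE_M3 = 1_000_000   # ~1 million cubic metres
SADDLEDOME_M3 = 250_000       # ~250 000 cubic metres

print(f"Volume of interest: {volume_m3:,.0f} m^3")
print(f"Empire State Buildings: {volume_m3 / EMPIRE_STATE_M3:,.0f}")
print(f"Calgary Saddledomes:    {volume_m3 / SADDLEDOME_M3:,.0f}")
```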
When you figure out the display settings that work best for you and the data you are characterizing, calculate the vertical exaggeration. Write this on a sticky note and attach it to your monitor. It’s quite common to view these data at high vertical exaggerations – especially in fields where subtle stratigraphic traps are the issue.
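One way to put a number on that sticky note is sketched below. The display dimensions and average velocity are assumptions for the sake of example; a proper depth conversion would of course use a velocity model rather than a single value:

```python
# Approximate vertical exaggeration of a time-section display.
# All display settings and the velocity below are assumed, purely illustrative.
horizontal_extent_m = 20_000   # true width covered by the screen (m), assumed
vertical_extent_s = 2.0        # two-way time covered by the screen (s), assumed
avg_velocity_ms = 2500         # average velocity for a crude depth conversion (m/s), assumed

screen_width_cm = 25.0         # width of the section on screen (cm), assumed
screen_height_cm = 20.0        # height of the section on screen (cm), assumed

# Crude time-to-depth conversion: depth = v * t / 2
vertical_extent_m = avg_velocity_ms * vertical_extent_s / 2

horizontal_scale = horizontal_extent_m / screen_width_cm   # metres per cm across
vertical_scale = vertical_extent_m / screen_height_cm      # metres per cm down

vertical_exaggeration = horizontal_scale / vertical_scale
print(f"Vertical exaggeration ~ {vertical_exaggeration:.1f}x")
```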
Finally, and most importantly, go on a field trip at least once every couple of years. Observing and pondering geology in the field has numerous intellectual benefits – far too many to list here – and the realization of scale is among the most critical. Seek out the extraordinary, like the turbidites onlapping the basin margin in the Eocene of the French Alps opposite. Spending an hour or more trekking your way up a rocky slope to see some outcrops, and then finding out that all that expended energy and sweat got you through about half a wavelet, is an unforgettable experience.
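For a rough sense of what “half a wavelet” means in rock, a one-line calculation is enough; the interval velocity and dominant frequency below are assumed, typical-of-clastics values:

```python
# Very rough dominant wavelength of a seismic wavelet in rock.
velocity_ms = 2500.0      # interval velocity (m/s), assumed
dominant_freq_hz = 25.0   # dominant frequency (Hz), assumed

wavelength_m = velocity_ms / dominant_freq_hz
print(f"Dominant wavelength: {wavelength_m:.0f} m")       # ~100 m
print(f"Half a wavelet:      {wavelength_m / 2:.0f} m")   # ~50 m of section
```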
Q&A
The title of your article is interesting, and it conveys an important message: that while interpreting seismic data, the scale of the features being interpreted should be kept in mind. How does it help? Does it have a bearing on the final interpretation?
Considering scale during interpretation is important because the stratigraphic features we can map in seismic-reflection data are commonly composite features. For example, there is abundant evidence from high-resolution seismic and outcrop studies showing that submarine channel fills stack to form larger-scale ‘channelform’ features commonly termed channel complexes. If an interpreter does not appreciate the composite nature of such features, their characterization may significantly underestimate reservoir heterogeneity.
You also mention the high vertical exaggeration at which seismic data are commonly interpreted. Again, please explain how it helps in the final interpretation.
Vertical exaggeration allows the interpreter to “see” stratigraphic features that commonly have subtle stratal geometries. For example, depositional slopes typically have gradients of only a couple of degrees. If the interpreter does not keep track of the true gradients of these features, it can lead to erroneous interpretations of the depositional environment.
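A quick sketch of how exaggeration steepens those subtle gradients; the 2-degree slope and 10x exaggeration are illustrative values, not specifics from the interview:

```python
import math

# How a gentle depositional slope appears on a vertically exaggerated display.
# Relationship: tan(apparent dip) = VE * tan(true dip)
true_dip_deg = 2.0            # typical depositional slope gradient, illustrative
vertical_exaggeration = 10.0  # common display setting, illustrative

apparent_dip_deg = math.degrees(
    math.atan(vertical_exaggeration * math.tan(math.radians(true_dip_deg)))
)
print(f"A {true_dip_deg:.0f} degree slope displays as ~{apparent_dip_deg:.0f} degrees "
      f"at {vertical_exaggeration:.0f}x exaggeration")
```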
Let us talk about a different kind of scale. When we correlate seismic data with well log data by way of synthetic seismograms, usually there is a mismatch. The scales of measurement for the two types of data are different. What in your opinion is a good practice to minimize this mismatch? What would you attribute this difference to?
Generally, the best way to minimize the mismatch is for the interpreter to work very closely with a geophysicist who has a lot of experience in performing well ties. If that expertise is not available, then the interpreter should put significant effort into learning the procedure. More specifically, some mismatch in a well tie is often unavoidable because of the fundamentally different nature of the two data types. An iterative approach is critical.
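For context, the heart of a synthetic seismogram is simply reflectivity convolved with a wavelet. The bare-bones sketch below uses made-up log values; a real tie also involves depth-to-time conversion, checkshots, stretch/squeeze, and careful wavelet estimation:

```python
import numpy as np

# Bare-bones synthetic seismogram: reflectivity from impedance, convolved with a Ricker wavelet.
# The "log" values below are made up for illustration.
velocity = np.array([2400.0, 2600.0, 3000.0, 2800.0, 3200.0])   # m/s, assumed
density  = np.array([2.30,   2.35,   2.45,   2.40,   2.50])     # g/cc, assumed
samples_per_layer = 40                                           # crude blocky layering, assumed

impedance = np.repeat(velocity * density, samples_per_layer)
reflectivity = np.diff(impedance) / (impedance[1:] + impedance[:-1])

def ricker(freq_hz, dt_s, length_s=0.128):
    """Ricker wavelet of a given dominant frequency."""
    t = np.arange(-length_s / 2, length_s / 2, dt_s)
    a = (np.pi * freq_hz * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

wavelet = ricker(freq_hz=25.0, dt_s=0.002)              # 25 Hz wavelet, 2 ms sampling, assumed
synthetic = np.convolve(reflectivity, wavelet, mode="same")
print(f"Synthetic trace: {synthetic.size} samples, max amplitude {synthetic.max():.3f}")
```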
For characterization of reservoirs, seismic data requires calibration to petrophysical properties. Do you think the seismic data, at its bandwidth, responds to the variation of petrophysical properties? As well, what scale of petrophysical variation could be acceptable, keeping the seismic bandwidth in mind?
This is a very important question and difficult to generalize; it is something to consider on a case-by-case basis within a team of reservoir geoscientists. Data from repeat seismic surveys (commonly termed 4D seismic) that I’ve seen presented at technical meetings demonstrate that seismic data can, in some situations, respond to changes in fluid properties. Generally, it’s my opinion that if reservoir facies are characterized such that sedimentological and petrophysical aspects are integrated and, importantly, represented in the reservoir model, then we have a framework with which to address your question of the relation to seismic response.
How do you think the uncertainty in these exercises can be addressed?
Whether it’s exploration, development, or production I think it’s important to always consider multiple scenarios. A simple way is to generate low, mid, and high scenarios of reservoir characteristics (e.g., net-to-gross, connectivity, resource-in-place). Nowadays, it’s common for teams to utilize quantitative risk assessment techniques that consider a probability distribution of expected values. This is great, but we always need to remember how we get the numbers in the first place.
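As a toy illustration of the low/mid/high, distribution-based approach described above, here is a minimal sketch; all input ranges are invented for the example, and a real assessment would use the team’s own distributions and volumetric method:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 100_000  # Monte Carlo trials

# Invented input ranges for a simple in-place volume estimate: GRV * NTG * phi * (1 - Sw).
grv_m3 = rng.triangular(0.8e9, 1.2e9, 1.6e9, n)   # gross rock volume (m^3), assumed
ntg    = rng.triangular(0.3, 0.5, 0.7, n)         # net-to-gross, assumed
phi    = rng.triangular(0.15, 0.20, 0.25, n)      # porosity, assumed
sw     = rng.triangular(0.2, 0.3, 0.4, n)         # water saturation, assumed

hc_pore_volume_m3 = grv_m3 * ntg * phi * (1 - sw)

# Low / mid / high cases reported as P90 / P50 / P10 of the resulting distribution.
p90, p50, p10 = np.percentile(hc_pore_volume_m3, [10, 50, 90])
print(f"Low  (P90): {p90:,.0f} m^3")
print(f"Mid  (P50): {p50:,.0f} m^3")
print(f"High (P10): {p10:,.0f} m^3")
```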
Geoscientists usually open up their album of fond memories from their professional lives. What comes to mind when you open yours?
As an educator at the university level, I use anecdotes from my time in industry, as well as experiences as an industry-funded scientist, when talking to students about tackling technical problems.
As geoscientists, we are up to all kinds of challenges. Tell us about some of yours?
Yes, there are all sorts of challenges. One that comes to mind, and something touched on in a question above, is how we need to deal with uncertainty. At its essence, geoscience is about characterizing nature. And nature is beautifully messy. The complexity and richness of natural systems can sometimes be daunting, which may lead some to throw their hands in the air and proclaim we can’t improve our prediction and characterization. Figuring out how sedimentary systems behave, for example, is a huge challenge. But we need to keep chipping away at it through both fundamental and applied research.
Finally, on a lighter note, it is usually said that the definition of ‘happiness’ changes with age. The younger lot associate it with excitement and fun, while the older folks associate it with contentment. Would you agree? Please elaborate.
Happiness to me is being intellectually and mentally challenged. I’m happiest when working on a problem. As I always tell my students, not every single moment of working on scientific or technical problems will make you ‘happy’, but it’s the overall arc that matters. I feel very fortunate to have the opportunity to do this as part of my job.