Interviews

Beware the interpretation-to-data trap

An interview with Evan Bianco

Coordinated by: Satinder Chopra

Evan Bianco is the Chief Scientific Officer at Agile Geoscience. He is a blogger, freelance geophysicist, entrepreneur, and knowledge-sharing aficionado. He has an M.Sc. in geophysics from the University of Alberta and four years’ experience as an industry consultant in Halifax, Nova Scotia. Evan’s interests span a range of disciplines from time-lapse seismic in oil sands to geomodelling, seismic rock physics, and geothermal reservoir characterization. Evan tries to teach himself something new every day, and every so often, it proves useful. He can be reached at evan@agilegeoscience.com, or you can follow him on Twitter @EvanBianco.

Some say there is an art to seismic interpretation.

Art is often described as any work that is difficult for someone else to reproduce. It contains an inherent tie to the creator. In this view, it is more correct to say that seismic interpretation is art.

Subsurface geoscience in general, and seismic interpretation in particular, presents the challenge of having long, complex workflows with many interim sections, maps, and so on. As the adage of treasure and trash goes, one person’s interpretation is another person’s data. The routine assignment of interpretations as data is what I call the interpretation-to-data trap, and we all fall into it.

Is a horizon an interpretation, or is it data? It depends on whom you ask. To the interpreter, their horizons are proud pieces of art, the result of repetitious decision making, intellectual labour, and creativity. But when this art is transferred to a geomodeller, for instance, it instantly loses its rich, subjective history. It becomes just data. Without fail, interpretations become data in the possession of anyone other than the creator. It is a subtle but significant concept. And consider the source from which the interpreter’s horizon manifests: seismic amplitudes. Stacked seismic data is but one solution from a choice of migration algorithms, which is itself an interpretive process. More data from art. To some extent, this is true for anything in the subsurface, whether it be the wireline log, production log, or well top.

There are a number of personal and social forces that deepen the trap, such as lack of ownership or lack of foresight. Disowning or disliking data is easy because it is impersonal; disowning your own interpretation, however, is self-sabotage. People are rarely blamed for spending time on something within their job description, but it is seldom anyone’s job to transfer the implicit (the assumptions, the subjectivity, the guesswork, the art) along with the explicit. It takes foresight and communication at a personal level, it takes a change in the culture of responsibilities, and maybe even a loosening of goals at the team level.

Because of the interpretation-to-data trap, we must humbly recognize that even the most rigorous seismic interpretation can be misleading as it is passed downstream and farther afield. If you have ever felt that a dataset was being pushed beyond its limits, that a horizon was not picked with sufficient precision for horizontally steering a drill bit, or that a log-conditioning exercise was not suitable for an AVO analysis, it was probably a case of interpretation being mistaken for data. Sometimes these assumptions are inevitable, but that doesn’t absolve us of responsibility.

I think there are three things you can do. One is for giving, one is for receiving, and one is for finishing. When you are giving, arm yourself with the knowledge that your interpretations will transform into data in remote parts of the subsurface realm. When receiving, ask yourself, ‘Is there art going on here?’ If so, recognize that you are at risk of falling into the interpretation-to-data trap. Finally, declare your work to be a work in progress, not because it is a way to cop out or delay finishing, but because it is an opportunity to embrace iteration and make it a necessary part of your team dynamic.

 

Q & A

Seismic data contain massive amounts of information, which has to be extracted using the right tools and know-how, a task usually entrusted to the seismic interpreter. This entails isolating the anomalous patterns in the wiggles, understanding the subsurface properties they imply, and so on. What do you think are the challenges for a seismic interpreter?

The challenge is to not lose anything in the abstraction.

The notion that we take terabytes of prestack data, migrate it into gigabyte-sized cubes, and reduce that further to digitized surfaces that are hundreds of kilobytes in size sounds like a dangerous discarding of information. That’s at least six orders of magnitude! The challenge for the interpreter, then, is to be darn sure that this is all you need out of your data, and, if it isn’t (and it probably isn’t), to know how to go back for more.
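To put a rough number on that reduction, here is a minimal back-of-the-envelope sketch; the byte counts are assumed for illustration, not figures quoted in the interview:

```python
import math

# Illustrative byte counts (assumed for this sketch, not from the interview)
prestack_bytes = 5e12   # ~5 TB of prestack gathers
migrated_bytes = 2e9    # ~2 GB migrated cube
horizon_bytes = 5e5     # ~500 kB of digitized surfaces

print(f"prestack -> cube:    {prestack_bytes / migrated_bytes:.0e}x")
print(f"cube -> surfaces:    {migrated_bytes / horizon_bytes:.0e}x")
print(f"total reduction:     ~{math.log10(prestack_bytes / horizon_bytes):.0f} orders of magnitude")
```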

How do you think some of these challenges can be addressed?

I have a big vision and a small vision. Both have to do with documentation and record keeping. If you imagine the entire seismic experiment laid out on a sort of conceptual mixing board (http://ageo.co/18GDtJG), instead of as a linear sequence of steps, then elements could be revisited and modified at any time. In theory, nothing would be lost in translation. The connections between inputs and outputs could be maintained, even studied, all in place. In that view, the configuration of the mixing board itself becomes a comprehensive and complete history for the data – what’s been done to it, and what has been extracted from it.
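The mixing-board idea amounts to keeping a provenance record alongside the data. A hypothetical sketch of what such a record might look like follows; the product names, parameters, and structure are invented for illustration only:

```python
# Every derived product records its inputs and the parameters used,
# so its history is never lost. All names below are illustrative.
processing_history = {
    "raw_gathers":   {"parents": [], "params": {}},
    "migrated_cube": {"parents": ["raw_gathers"],
                      "params": {"algorithm": "Kirchhoff PSTM",
                                 "velocity_model": "vel_2012_v3"}},
    "top_reservoir": {"parents": ["migrated_cube"],
                      "params": {"method": "auto-tracked",
                                 "seed_lines": 12}},
}

def lineage(product, history):
    """Walk back through the record to list everything a product depends on."""
    out = []
    for parent in history[product]["parents"]:
        out += [parent] + lineage(parent, history)
    return out

print(lineage("top_reservoir", processing_history))
# -> ['migrated_cube', 'raw_gathers']
```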

The smaller vision: there are plenty of data management solutions for geospatial information, but broadcasting the context that we bring to bear is a whole other challenge. Any tool that allows people to preserve the link between data and model should be used to transfer the implicit along with the explicit. Take auto-tracking a horizon as an example. It would be valuable if an interpreter could embed some context into an object while digitizing: something that could later tell the geocellular modeller whether to proceed with caution or with certainty.
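As a hypothetical sketch of an interpretation object that carries its context along with its picks (the class and field names are invented, not taken from any particular software):

```python
from dataclasses import dataclass, field

@dataclass
class Horizon:
    """A hypothetical picked horizon that keeps its interpretive context."""
    name: str
    picks: list                      # (inline, crossline, time) triples
    picked_by: str                   # who made the interpretation
    method: str                      # e.g. "auto-tracked", "hand-picked"
    confidence: str                  # e.g. "high", "speculative"
    notes: list = field(default_factory=list)   # free-form caveats

top_reservoir = Horizon(
    name="Top Reservoir",
    picks=[(100, 250, 1.432), (100, 251, 1.430)],
    picked_by="E. Bianco",
    method="auto-tracked",
    confidence="speculative",
    notes=["Tracking unreliable below the gas cloud; treat as provisional."],
)
```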

One of the important tasks a seismic interpreter faces is predicting the location of hydrocarbons in the subsurface. Having come up with a hypothesis, how do you think it can be made more convincing and presented to colleagues?

Coming up with a hypothesis (that is, a model) is solving an inverse problem, so there is a lot of convincing power in completing the loop. If all you have done is the inverse problem, know that you could go further. There are a lot of service companies in the business of solving inverse problems, but not so many completing the loop with the forward problem. It’s the only way to test hypotheses without a drill bit, and it gives a better handle on methodological and technological limitations.
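One minimal way to “complete the loop” is to forward-model a synthetic response from the hypothesized earth model and measure its misfit against the observed data. The sketch below assumes a simple convolutional model with an illustrative impedance profile; it is not the method described in the interview, just one basic instance of the forward problem:

```python
import numpy as np

def ricker(f, dt, length=0.128):
    """Ricker wavelet with peak frequency f (Hz) and sample interval dt (s)."""
    t = np.arange(-length / 2, length / 2, dt)
    return (1.0 - 2.0 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

# A hypothesized earth model: acoustic impedance per depth sample (assumed values)
impedance = np.concatenate([
    np.full(80, 6.0e6),   # overburden
    np.full(60, 8.5e6),   # reservoir interval
    np.full(60, 7.2e6),   # underburden
])

# Forward problem: impedance -> reflectivity -> convolve with a wavelet
reflectivity = np.diff(impedance) / (impedance[1:] + impedance[:-1])
synthetic = np.convolve(reflectivity, ricker(f=25, dt=0.002), mode="same")

# Completing the loop: compare the prediction against the observed trace
observed = synthetic + np.random.normal(0, 0.01, synthetic.size)  # stand-in for real data
print("RMS misfit:", np.sqrt(np.mean((observed - synthetic) ** 2)))
```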

You mention ‘absolving us of responsibility’ in your article. Could you elaborate on this a little more? Do you think there is accountability of sorts practiced in our industry?

I see accountability from a data-centric perspective. For example, think of all the ways that a digitized fault plane can be used. It could become a polygon cutting through a surface on a map. It could be a wall within a geocellular model. It could be a node in a drilling prognosis. Now, if the fault is mis-picked by even one bin, the error could show up hundreds of metres away from the prognosis, depending on the dip of the fault. Practically speaking, accounting for mismatches like this is hard, and is usually done in an ad hoc way, if at all. What caused the error? Was it the migration, or was it the picking? Or what about the error in the drill-bit measurements? I think accountability is loosely practised at best because we don’t know how to reconcile all these competing errors.
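As one illustrative case of how dip can amplify a small pick error (the geometry and numbers below are assumed, not taken from the interview): shifting a gently dipping fault plane vertically by roughly one sample moves its mapped intersection with a deeper horizon laterally by the vertical error divided by the tangent of the dip.

```python
import math

# Assumed numbers for illustration only
dip_deg = 10.0   # fault dip from horizontal, in degrees
dz = 25.0        # vertical mis-pick, in metres (roughly one sample)

# A plane of dip theta shifted vertically by dz intersects a horizontal
# surface at a point displaced laterally by dz / tan(theta).
lateral_error = dz / math.tan(math.radians(dip_deg))
print(f"Lateral error at the target horizon: ~{lateral_error:.0f} m")
```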

Until data can have a memory, being accountable means being diligent with documentation. But it is time-consuming, and there aren’t as many standards as there are data formats.

Declaring your work to be in progress could allow you to embrace iteration. I like that. However, there is usually a finite time to complete a given interpretation task, and as more and more wells are drilled, the interpretation could be updated. Do you think this practice would suit small companies that need to ‘ensure’ each new well is productive, or else they are doomed?

The size of the company shouldn’t have anything to do with it. Iteration is something that needs to happen after you get new information. The question is not, “Do I need to iterate, now that we have drilled a few more wells?” but “How does this new information change my previous work?” Perhaps the interpretation was too rigid – too precise – to begin with. If the interpreter sees her work as something that evolves towards a more complete picture, she needn’t be afraid of changing her mind when new information proves her to be incorrect. Depth migration exemplifies this approach. Hopefully more conceptual and qualitative aspects of subsurface work can adopt it as well.

Present-day workflows for seismic interpretation of unconventional resources demand more than the usual practices followed for conventional exploration and development. Could you comment on how these workflows are changing?

With unconventionals, seismic interpreters are looking for different things. They aren’t looking for reservoirs; they are looking for suitable locations to create reservoirs. Seismic technologies that estimate the state of stress will become increasingly important, and interpreters will need to work in close contact with geomechanics. Also, microseismic monitoring and time-lapse technologies tend to push interpreters into the thick of operations, which allows them to study how the properties of the earth change in response to those operations. What a perfect place for iterative workflows!

