The importance of making conclusions, and frameworks in reasoning.

“The best scientist makes the least and most limited conclusions,” one of my very smart friends in processing informed me. He had a point, but I really did need to make sense of the work and commit to a map right away. The point of opportunity loss was upon us and a call needed to be made.

Exploration geophysicists working in the resources sector must make conclusions in order to be of business value. Conclusions may be explicit or implicit, and can be delivered in the form of maps, probability descriptions, recommendations, warnings, or advice, in reports that are formal or informal. The problem with conclusions is that good scientists are often loath to make many of them. For a variety of reasons, the act of making a conclusion, a map, or a recommendation can bring about significant anxiety – even a crisis of conscience. After all, a good conclusion must be precise, correct, and clear. In an applied science marked by error, non-uniqueness, and a ubiquitous lack of best practices, confidence in the precision, correctness, and clarity of the results can be justifiably low. Good scientists are very careful about the conclusions they make. Good business people know they must make decisions. How can we improve our ability to serve the business needs of our industry while maintaining the scientific integrity that will produce better long term results for everyone and probity for ourselves? How can we make better conclusions in an uncertain field?

There are many ways to look at the problem of drawing conclusions from uncertain data – so many that we can only touch on a few of them here. This short note will briefly address certain aspects of these questions, including:

  • Why we must make conclusions
  • Why making conclusions can be difficult
  • Strategies around making conclusions, including
    • The scientific method and a quantitative approach
    • Making the quantitative more common
    • Statistics
    • Bayes’ theorem
    • Critical thinking, arguments, and multi-variables in seismic
    • Long term strategy: improvement

Why we need to make conclusions

In a discussion about the possible hiring of a new geoscientist, my manager asked me, “Is she a good mapper?” I thought about this long afterward, realizing that I had in fact worked with people who almost never produced a map. Generally, those people were not effective at their jobs. I have known a few very talented individuals who just would not take that final step of producing a useful, conclusive map. Even though all of our maps eventually change, are updated, or are thrown away, we need them, as good or as bad as they are at the time. More deeply, my manager wanted to know if the potential employee could commit to making a conclusion.

Exploration geophysicists in some way serve a business in which good decisions must be made. Good decisions rely heavily on accurate, relevant, and well described information. It is the business of exploration geophysicists to supply the geophysical elements of information to the decision processes at hand. If we are to have any relevance at all, it is through our conclusions or advice, so we had best make conclusions and make them well.

The difficulty of making conclusions

During my training at PanCanadian Petroleum, I was lucky enough to spend time with the manager of seismic acquisition, Al Goodfellow. Al’s phone rang all day, every day with enquiry after enquiry, and detail after detail about the many seismic programs he managed. I asked Al how he kept it all straight, and kept his head about him. Al replied, “I just think of it like crossing the road: I look left, then right, then left again, and cross the road.” Al was talking about reducing the information overload problem into steps.

One of the most important reasons why making conclusions is difficult is that they may be wrong. Most geophysical conclusions come in the form of a limited and uncertain estimate, and we would often be wrong to over-generalize or universalize them. The limitation or uncertainty of a conclusion or recommendation is not a good reason to fail to provide it: we can state those limitations and uncertainties succinctly. Along the same line of thinking, we hate being wrong; we hate making mistakes. Fear of making an error can be good if it drives us to improve or check our work, but it is counterproductive if it leads us to withhold potential advice to our business. Making a conclusion or a recommendation is an act of commitment that we must have the courage to make. While it is understandable to feel angst over the difficulty of committing to a map, well, or prediction, we have to do it. We all make choices; we all make conclusions. Acceptance of this exigency is the first step in getting over the angst of it.

Another kind of difficulty that may come into play when making conclusions is the manifold complexity and enormity of the problem and the geophysical method. Points of contention and confusion are not uncommon in our complex jobs. Even simple subjects such as deconvolution can carry with them the potential for heated argument. For example: should we have pre-whitening, and how much? Should we perform cascaded surface consistent deconvolution, or a single surface consistent deconvolution followed by a zero phase spectral enhancement? And if that is not enough, let us talk about Vibroseis deconvolution. Another example of incongruence is the question of what brittleness is, and which parameter or attribute best describes it. Or we could discuss amplitude variations with offset and azimuth, the physical model of our fractures, and what we should expect to see on gathers sorted in offset and azimuth. All of these discussions are worthwhile, but they illustrate that not everything within the application of our science is clear cut. Some things are confusing, arguable, and even incongruent. In an environment that contains such a degree of equivocation, uncertainty, and limitation of applicability, it is not surprising that it can be difficult to commit to a map or conclusion.

Paradoxically, the problem of making conclusions may be made worse by the abundance of conferences, papers, magazines, short articles (like this one?), and internet blogs on geophysical problems. More information is not always better. Normal human beings can experience information overload, which occurs when there is too much information and a lack of standard or clear ways to use that information effectively. Wurman and Bradford (1996) treat this problem in detail, and although they do not discuss exploration geophysics, the psychological phenomena of too much information and not enough guiding structure are widespread and applicable to us.

Conclusion making strategies

“Well the seismic didn’t work that time,” my VP told me after we missed the thin, Triassic-aged Aeolian sand. I had explained to him the statistical chances of the prediction, but this was forgotten in the moment of shattered glory that was our dry well. I looked back at him and saw that he had his ruler out and was using his ruler-method of predicting the trend of the sand. The ruler-method was not actually inappropriate to the Aeolian sand deposition, but it had failed as readily as the seismic on the well. Neither failure nor success should have surprised us – they were both possibilities described by the statistical work I had done. What was more important was to understand those statistics and either accept the statistical nature of the outcomes we endured, improve the conditions of our investigative techniques, or move on to something else.

Tools and strategies for reaching conclusions exist in plenitude. To a large degree, all of our training in science, mathematics, and logic is aimed at the goal of understanding the world around us, so we have no lack of background in solving physical problems. Remembering this training, taking a deep breath, and recalling the scientific method is a good first step when dealing with a difficult problem, information overload, or fear of being wrong.

The scientific method and a quantitative approach

The scientific method implies a quantitative approach to exploration geophysics. The method suggests that we put forward a hypothesis (usually supported by a theoretical framework), that this hypothesis makes measurable predictions, and that we perform experiments or observations such that we can take those measures and confirm or deny the hypothesis (Hughes et al., 2010). It is explicit in the method that we have a way of measuring our results, and implied that the method will usually be quantitative. This suggests that we must (usually) use quantitative methods in order to employ the scientific method and all its tools to make better conclusions.

Making the quantitative more common

Quantitative interpretation is becoming more widespread, with a growing body of literature behind it, such as the book by Avseth et al. (2005) on the subject. Avseth’s work, as well as the numerous contributions from Close et al. (2011), is concerned with quantitative interpretation from a rock physics perspective. Hunt et al. (2012a, 2012b) have argued that these quantitative methods should be applied to all geophysical problems – not just rock physics. The argument that everything geophysical should be treated quantitatively would make the application and rigor of the scientific method ubiquitous.

This ambitious assertion of Hunt et al. is not always easy to employ in practice because some of the predictions that geophysicists are called upon to make are not immediately numeric, and are therefore difficult to treat quantitatively. This may make the use of quantitative methods seem impossible and the scientific method problematic. For instance, we may wish to correlate a geologic facies, depositional unit, lithology, or seismic facies to other clearly numeric seismic and geologic measures. These non-numeric variables are sometimes called attributes (Sokal and Rohlf, 1995). Coding these variables so that they are treated as ranked variables is permissible, and is not uncommon in the biological sciences (Sokal and Rohlf, 1995). In such a case, the upper, middle, and lower sand units of a marine barrier succession might be coded 1, 2, 3, or a sand’s grain size might be coded 1, 2, 3 for fine, medium, and coarse. If the coding is performed in a way that is congruent with the physical nature of the problem, meaningful relationships between the coded or ranked variables and the other variables may be found with typical statistical methods. Davis (1986) describes a method called cross-association which can also be used to compare geologic attributes.

Coding non-numeric variables in this way lets us handle them together with other variables in multi-variate numeric studies, where they can support important arguments such as the permeability argument of Figure 2, which uses stratigraphic position. If we are able to code important attributes like this, we can broaden our quantitative approach and more easily bring to bear our well known quantitative tools of scientific enquiry. Ultimately, this will facilitate making the conclusions or giving the advice that we need to.
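As a small illustration (all codes and porosity values here are invented), a rank correlation is one typical statistical method for relating a coded attribute, such as grain size, to a numeric measure, such as porosity:

```python
# Hypothetical sketch: code a non-numeric attribute (grain size) as a
# ranked variable and relate it to a numeric measure (porosity) with a
# Spearman rank correlation, implemented here from scratch.

def ranks(values):
    """Return the rank of each value (average rank for ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Grain size coded 1=fine, 2=medium, 3=coarse (ordering mirrors the physics)
grain_code = [1, 1, 2, 2, 3, 3, 3, 1, 2, 3]
porosity   = [0.04, 0.05, 0.08, 0.07, 0.11, 0.12, 0.10, 0.03, 0.09, 0.13]

rho = spearman(grain_code, porosity)  # strong positive rank correlation
```

Because only the ordering of the codes matters to a rank correlation, the result is the same whether fine, medium, and coarse are coded 1, 2, 3 or 10, 20, 30 – one reason ranked coding can be made congruent with the physical problem.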


Statistics

The methods of statistics are an essential part of the toolkit we need in order to apply the scientific method. In fact, statistics could be described as the language of quantitative techniques. Davis (1986) has written an excellent book on statistics for geoscientists, which provides a broad, geologically oriented look at the subject. In order to evaluate hypotheses effectively with noisy and uncertain data, we need to use the methods of statistics properly. This branch of mathematics is our way of managing our poor data.
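A minimal sketch of that language in practice (the porosity values are invented): report an estimate together with its uncertainty, rather than the bare number alone.

```python
# Illustrative only: a point estimate plus a confidence interval is a
# more honest conclusion than the point estimate by itself.
import statistics

# Hypothetical average-porosity picks from eight wells
porosity = [0.08, 0.11, 0.09, 0.12, 0.07, 0.10, 0.09, 0.11]
n = len(porosity)

mean = statistics.mean(porosity)
sem = statistics.stdev(porosity) / n ** 0.5  # standard error of the mean

# Approximate 95% confidence interval, assuming roughly normal errors
# (t ≈ 2.36 for n - 1 = 7 degrees of freedom)
low, high = mean - 2.36 * sem, mean + 2.36 * sem

# The defensible conclusion is then "mean porosity 0.096, 95% CI roughly
# (0.082, 0.110)", which states the limitation succinctly.
```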

Bayes’ theorem

Bayes’ theorem is commonly used in exploration geophysics. It is very practical for our uncertain discipline because of the way it modifies, or damps, our certainty from geophysical data through the use of initial probabilities. This often acts to reasonably reduce confidence in the face of new information without discarding the information itself. It has obvious utility for imperfect information, and is used in processing and in decision analysis (Newendorp, 1975; Hunt, 2013), but also in interpretation. Nieto et al. (2013) demonstrated the power and practicality of Bayes’ theorem on a Montney shale gas play, using it to combine prior petrophysical knowledge with imperfect seismic information. Their production of probability volumes for each petrofacies of the Montney was a brilliant and clear way of handling uncertainty and the need to make a conclusion without compromising integrity or suggesting a certainty that they did not have.
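A toy sketch of the damping at work (all probabilities are invented for illustration): an imperfect seismic attribute anomaly updates, but does not replace, a geological prior.

```python
# Single Bayesian update: P(porous | anomaly observed), given an
# attribute that sometimes flags tight rock as well.

def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Return P(h | obs) via Bayes' theorem."""
    evidence = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    return p_obs_given_h * prior / evidence

prior = 0.30             # geological prior: 30% of zones are porous
p_anom_if_porous = 0.80  # attribute flags 80% of porous zones...
p_anom_if_tight = 0.25   # ...but also 25% of tight zones (false alarms)

posterior = bayes_update(prior, p_anom_if_porous, p_anom_if_tight)
# posterior ≈ 0.58: the anomaly raises confidence, but well short of
# certainty - which is exactly the honest conclusion to report.
```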

Tools from critical thinking

“Will we hit the Viking here?” the VP asked. Scott Reynolds pulled out a stack amplitude map and showed the location with reference to other producing Viking wells. He then pulled out a Lambda-rho map and did the same thing. Then a Mu-rho map, and an Rp/Rs ratio map, and finally a multi-variate Phi-h prediction map. Scott rested his conclusion on a combination of each of the maps.

The study of critical thinking may also provide some aid in our efforts to make good conclusions. Critical thinking is the study of how we reason about whether a claim is true or justifiable (Hughes et al., 2010). Please see Hughes et al. (2010), from whom I borrow liberally, as the reference for this section. Exploration geophysics problems fall into the category of inductive reasoning, wherein conclusions are evaluated according to their probability of being true; they are never certainly true. According to these authors, almost all new knowledge is of this kind. Take heart in the fact that we were never expected to be perfect, and that our recommendations could only ever be expressions of probability! The scientific method is given a specific description in critical thinking: Induction by Confirmation. This form of inductive reasoning can be described as follows:

If h then o.
o.
It is probable, therefore, that h.

Where h stands for the hypothesis, and o stands for the observation statement that is logically deducible from h.

In exploration geophysics, the observation statements could be a measured variable or a seismic-derived rock property. An argument from exploration geophysics might be:

If the Upper Montney is porous, then the ratio λ/(λ+2μ) will be low.
λ/(λ+2μ) is low.
It is probable, therefore, that the Upper Montney is porous.

Inductive reasoning makes heavy use of statistics in its analysis. Questions of sample size (the number of confirming instances of λ/(λ+2μ) and Upper Montney porosity in the example above), representational adequacy, relevance, repeatability, and correlation are all considered in inductive reasoning with statistical methods. Note that Induction by Confirmation is not a random correlation of observations; the meaning of the observations derives only from their relationship to a hypothesis. Hughes et al. (2010) go on to explain that data alone do not make an argument; the support of the hypothesis required by Induction by Confirmation is necessary to make a conclusion from data.

In order to draw a conclusion, we may use many different types of observations, and they may have a different relationship with each other as they support a conclusion. Let us consider first the T-argument structure. In a T-argument, one or more variables are required together to support a conclusion. A good example of the T-argument might be predicting flow capability of a tight gas well using Darcy’s Law. Figure 1 shows the form that the T-argument for productivity might take. Darcy’s law suggests that flow capacity depends on the multiplication of permeability, pressure draw-down, one-over-viscosity, and an area term (Holditch, 2006). I will add thickness arbitrarily as an expression of reservoir size. Since the relationship is multiplicative, an insufficiency of any of the terms would make the production capacity low, therefore, they are all required. All of the premises must be true for the conclusion to be true in a T-argument.

Another kind of argument structure, the V-argument, is more appropriate for additive variables. The V-argument structure also uses several variables to support the conclusion; however, each variable is independent of the others. More variables provide more support for the argument. An example of the V-argument might be for permeability in a thick marine barrier sand. Figure 2 shows the form the V-argument for permeability in a marine barrier sand might take. In such a sand, permeability may come from matrix porosity-permeability relationships, as well as from fracture permeability. The fracture permeability might be argued to be independent of the matrix permeability (this is not always true). Additionally, the porosity-permeability relationships can change with stratigraphic position in the bar, so this could be another independent variable. The V-argument is different from the T-argument in that even if one of the premises fails, the argument may still be strong. For instance, we may still have good permeability even if there are no fractures.
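The contrast between the two structures can be sketched numerically (the premise names and 0-to-1 support scores are invented; a product and an average are simple stand-ins for any multiplicative and additive combinations):

```python
# Illustrative sketch of the two argument structures from critical
# thinking: T-arguments require every premise, V-arguments pool
# independent premises.

def t_argument(premises):
    """All premises required: combine multiplicatively, so one weak
    premise sinks the whole conclusion (Darcy-like behaviour)."""
    score = 1.0
    for support in premises.values():
        score *= support
    return score

def v_argument(premises):
    """Independent premises: each adds support; average them."""
    return sum(premises.values()) / len(premises)

good_well = {"permeability": 0.8, "drawdown": 0.9, "thickness": 0.7}
poor_well = {"permeability": 0.8, "drawdown": 0.9, "thickness": 0.05}

# Under the T-structure the weak thickness premise collapses the score
# (0.504 vs 0.036); under the V-structure it only dents it (0.80 vs ~0.58).
```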

We could make a sub-argument for fractures from the above V-argument. In the case of predicting fractures, we might consider several independent fracture inference methods. Figure 3 shows the V-argument supporting the prediction of fractures. In this case, the anisotropic gradient from amplitude variation with offset and azimuth (AVAz Bani), velocity anisotropy from velocity variation with azimuth (VVAz Vani), curvature, and Young’s modulus are each measured independently; each describes a different physical phenomenon, but each may have a relationship with the likelihood of fractures.

We can use sub-arguments to support a complex argument. Figure 4 shows the argument for well productivity as a complex argument with three sub-arguments supporting it. The complex argument uses both T-argument and V-argument structure as appropriate. This example is useful to us as it shows how problems may be broken down and solved in parts, ultimately leading to the important conclusion required, which is often whether a well will be good or not. In the case of information overload, reducing a large problem into sub-arguments can be a useful management technique.

It may seem a little strange to bring critical thinking tools into exploration geophysics problems, but of course we should use any reasonable methods we can. The T-argument and V-argument structures are useful models in reasoning for us because they may be applied qualitatively as well as quantitatively. Scott Reynolds (in the anecdote above) often used a qualitative T- or V-argument approach that was effective, practical, and fast. An important question Scott wrestled with in choosing the T- versus the V-argument structure – which equates to needing all maps to agree on a location, or only some of them – was the degree of independence he felt the various maps actually had. Arguments could be made for either approach with the maps given in the anecdote, where the data are partially independent. The V-argument structure can clearly serve as justification for a multi-linear estimate of some required property. Bayes’ theorem can also be applied in either the V- or T-argument structure to adjust probabilities.
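One way to sketch that probabilistic combination (all probabilities invented; the sequential updates assume the indicators are conditionally independent, which, as noted above, is not always true):

```python
# Sequential Bayesian updating of a fracture probability by several
# independent indicators, as in a V-argument. Each pair gives the chance
# an indicator reads positive when fractures are present vs. absent.

def update(prior, p_pos_if_frac, p_pos_if_none):
    """One Bayes' theorem update of the fracture probability."""
    evidence = p_pos_if_frac * prior + p_pos_if_none * (1 - prior)
    return p_pos_if_frac * prior / evidence

indicators = [
    (0.70, 0.30),  # e.g. an AVAz-style anisotropy indicator (invented rates)
    (0.60, 0.35),  # e.g. a VVAz-style velocity indicator
    (0.65, 0.40),  # e.g. a curvature indicator
]

p_frac = 0.20  # geological prior probability of fractures
for hit_rate, false_alarm in indicators:
    p_frac = update(p_frac, hit_rate, false_alarm)

# Three individually weak positive indicators together lift the
# probability from 0.20 to roughly 0.62 - support accumulates, V-style.
```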

Long term strategy: improvement

Even if we correctly apply the scientific method, our answers may be unsatisfactory. No amount of statistics or multi-variate combination of variables should make truly poor data give a confident result. In fact, if the statistical and critical methods are applied correctly, they should clearly illustrate how poor the immediate answers really are. How poor the conclusion may be is an essential part of the conclusion, and we should communicate this clearly. Uncertain results are not a reason for silence; they may be a call for action. We should not hesitate to spell out the potential scientific, economic, or even environmental harm that can result from unreliable data: we must report the state of our scientific reliability now and discuss how we might improve our work in the future. Hunt (2013) gave a generic illustration of the economic value of reliability for a resource play scenario. Specific work of a similar nature could be applied to any problem to determine whether additional efforts in acquisition or processing should be considered to improve the reliability of the geophysical data. It is neither uncommon nor impractical for geoscientists to employ a parallel strategy: producing the answers they must produce immediately, while also working on a longer term strategy for improving their predictive abilities in the near future.


Conclusions

I was in my Chief Operating Officer’s office, expressing my angst over how complex and uncertain seismic data are discussed and used. In particular, I communicated my worry over the fact that there is never really enough time in our task oriented meetings to fully explain all the sources of error and non-uniqueness. I told him that, as a result of this, I sometimes felt like a bad scientist, and worried that I might give the impression of higher confidence in the data than the data could really support. He smiled and said, “We know the data isn’t perfect, and we also know that you and everyone here does everything they can to make the information as good as it can be. Ultimately we all have to make the best decisions we can with the information we have at the time, and live with it.”

We all make choices; we all make conclusions. Let us accept that and then try to make the best conclusions we can whenever required. It is sometimes helpful to make use of the fields of study that have been created to help make rational conclusions.

The methods of statistics help describe our certainty, validate our results, and provide succinct language in which to frame those results. Bayes’ theorem provides a useful framework to handle probabilities and imperfect information. Critical thinking techniques lay out the methods of reasoning by which we can break complex problems into sub-arguments. All of these techniques, tools, or ways of thinking sit well within the scientific method. It is worth remembering that the scientific method is a discipline built around an imperfect, empirical world – our world.

An understandable difficulty in making conclusions is our intimate knowledge of the limitations of our methods. There is a suggestion here that little of our advice is ever final. Further work can, and often should, be done as the situation requires. All of our maps and conclusions are therefore understood to be temporary, and we should take heart in our efforts to do better tomorrow than we can do today.


Acknowledgements

I would like to thank Oliver Kuhn for his comments, and inspiration towards an eclectic approach to the essay. Thanks also go to Satinder Chopra for his editorial assistance.


References

Avseth, P., T. Mukerji, and G. Mavko, 2005, Quantitative Seismic Interpretation: Applying Rock Physics Tools to Reduce Interpretation Risk: Cambridge University Press.

Close, D. I., M. Perez, B. Goodway, F. Caycedo, and D. Monk, 2011, Workflows for Integrated Seismic Interpretation of Rock Properties and Geomechanical Data: Part 2 – Application and Interpretation: CSEG-CSPG-CWLS Convention Extended Abstracts.

Davis, J. C., 1986, Statistics and Data Analysis in Geology; John Wiley and Sons, Inc.

Holditch, S. A., 2006, Tight Gas Sands: Journal of Petroleum Technology, 58, 6, 86-93.

Hughes, W., J. Lavery, and K. Doran, 2010, Critical Thinking: an Introduction to the Basic Skills, sixth edition: Broadview Press.

Hunt, L., R. Reynolds, S. Hadley, J. Downton, 2012a, Quantitative Interpretation part I: method: CSEG RECORDER, 37, 1, 7-17.

Hunt, L., R. Reynolds, S. Hadley, J. Downton, 2012b, Quantitative Interpretation part II: case studies: CSEG RECORDER, 37, 2, 44-54.

Hunt, L., 2013, Estimating the value of Geophysics: decision analysis: CSEG RECORDER, 38, 5, 40-47.

Newendorp, P. D., 1975, Decision Analysis for Petroleum Exploration, Pennwell Publishing Company.

Nieto, J., B. Batlai, and F. Delbecq, 2013, Seismic Lithology Prediction: A Montney Shale Gas Case Study: CSEG RECORDER, 38, 2, 34-42.

Sokal, R. R., and F. J. Rohlf, 1995, Biometry, Third Edition, W. H. Freeman and Company.

Wurman, R. S., P. Bradford, editors, 1996, Information Architects: Zurich, Switzerland: Graphis Press.


