The branch of statistics known as "Geostatistics", the term "Big Data" and the scientific process of "Analytics" have been around for a while. However, their application to data acquired for subsurface evaluation and resource development is becoming increasingly important. The present-day Geophysicist/Geoscientist has to make sense of more and more data, whether in acquisition, processing and/or interpretation. Not only does this require knowledge of the physical properties and processes of the subsurface, but the Geophysicist/Geoscientist is also expected to derive meaning from these large data sets, and to do so quickly.
Big data refers to large and complex data sets, of which there is now an abundance, especially in the oil and gas industry. The premise is that more data leads to better decisions and reduced subsurface uncertainty, as the authors of the first article, Jeff B. Boisvert and Clayton V. Deutsch, point out. However, the challenges in analysing this kind of data include sparsity, confidentiality, sharing, transferring, formatting, curation, storage, and the cost of acquisition, processing, analysis and visualization. Geophysical data sometimes fits this description: large volumes are acquired and must be integrated with other data, be it seismic, gravity and magnetic, GPR, EM, IP, microseismic, etc., yet, owing to cost considerations, the scope of the project objective and study area, or operational, software and/or environmental limitations, it remains undersampled and can carry uncertainty.
Subsurface geophysical data comes in various forms, formats, sizes and types. Integrating and deriving meaning from all of this data, while reducing uncertainty, will require computational efficiency, the repurposing of existing techniques, algorithms, approaches and tools, new perspectives, and the capacity to handle large datasets. The geophysical approach to reservoir characterization and to interpreting the subsurface physical processes associated with natural resource development can be a lengthy exercise, and it takes time to do it right. In this issue of the RECORDER, two examples are presented in which Geostatistics and Analytics are applied to make sense of multiple and/or large data sets.
The first focus article, entitled "Incorporating Big Data in Geostatistical Modeling for Making Bigger Decisions in the Face of Even Bigger Uncertainty", is written by Jeff B. Boisvert and Clayton V. Deutsch from the Centre for Computational Geostatistics (CCG) at the University of Alberta in Edmonton. They present the benefits of incorporating big data into geostatistical models for analysing subsurface reservoir properties for resource development and hydrocarbon production. The authors also explain the limitations and uncertainties involved in big data analytics and in geostatistical modeling for reservoir development.
The second focus article, entitled "Data Analysis of Induced Seismicity in Western Canada", is written by Hoda Rashedi of geologic Systems Ltd. and Alireza Babaie Mahani of Mahan Geophysical Consulting Inc. The article describes the high demand for data from seismographic stations needed to understand the recent increase in the rate of seismicity in Western Canada and the central-eastern US associated with oil and gas activities, and the efforts taken to densify the seismographic networks in areas that have seen an increase in fluid injection activity. The objective of acquiring additional data is to better understand ground motion variability and to assess the seismic hazard to critical infrastructure. The authors introduce a ground motion prediction equation to visualize the variation of peak ground amplitudes, but analysis of the ground motion amplitudes can be improved once the density of the seismograph stations increases.
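For readers less familiar with ground motion prediction equations, a generic, illustrative form (not the specific equation used by the authors, which is given in their article) relates an amplitude measure such as peak ground acceleration (PGA) to earthquake magnitude M and source distance R, for example ln(PGA) = c0 + c1·M + c2·ln(R) + c3·R, where the coefficients c0 to c3 are fitted empirically to regional recordings; denser station coverage means more recordings over a wider range of M and R, and hence better-constrained coefficients.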