The questions seeking 'Expert Answers' in this issue are concerned with digital geophones and how successful they are in recording seismic data today. The 'Experts' answering these questions are Jon Tessman (Input/Output) and Norman Cooper (Mustagh Resources). The order of the responses is the order in which we received them.

Questions

1. How much seismic data is being recorded today using digital geophones? Is there any likelihood of this trend changing?

2. We started recording 3D seismic data using 48/96 channels. What is the maximum number of channels being used for 3D seismic data acquisition today? Is this cost effective? Given the trend toward increased channel counts, how many channels can we expect in the next couple of years? Do you think higher channel counts could lead to the elimination of field arrays?

3. What other innovative technologies are being adopted in land seismic data acquisition?

4. What are the major geophysical challenges being faced in land seismic data acquisition?

 

Answers by Jon Tessman

Question 1 – Between 1999 and the end of 2003, I/O’s records indicate that more than 500 million traces of data were acquired worldwide using VectorSeis® TrueDigital™ technology. The lion’s share of that data was acquired in the Western Canadian Basin. The technology employed by these systems represents a fundamental departure from conventional techniques, both on the sensor side (high vector fidelity, linear frequency and phase response, broad bandwidth, etc.) and in the telemetry infrastructure required to support efficient field operations (high levels of automated power and telemetry redundancy).

In an effort both to improve the quality of conventional P-wave data and to deepen our understanding of rock properties by acquiring shear wave data, commercial digital sensors are generally packaged as Full Wavefield (3-component) systems.

While single component systems are possible, they would negate many of the benefits as explained below and offer little advantage over conventional systems.

In a Full Wavefield system, the P-wave data benefits from our ability to record the entire wavefield and extract the vertical component far more precisely than ever before. Conventional systems rely on the geophone plants being vertical to achieve this. Given all the factors present in the field (i.e. human error, temperature fluctuations, line traffic, etc.) it is a tedious and time-consuming task to plant geophones precisely vertical. By recording the full wavefield we can ignore the requirement for vertical plants in the field and extract the vertical component post-acquisition far more precisely (±1/4°) than conventional operations would allow. This technology, coupled with a point receiver implementation (i.e. no field arrays) and other basic properties of the sensor, often results in improved P-wave data and is rarely, if ever, detrimental to data quality. This improvement can be especially dramatic in areas where targets are relatively shallow.
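
As a rough illustration of the post-acquisition reorientation described above, the sketch below rotates a 3-component recording so that one axis is truly vertical, using the static gravity direction estimated from the traces themselves. It is a minimal example only, assuming a simple accelerometer sign convention and an isolated gravity term; it is not the VectorSeis algorithm.

```python
import numpy as np

def reorient_to_vertical(traces_xyz):
    """Rotate 3-component accelerometer traces so one axis is truly vertical.

    traces_xyz : (3, nsamples) array of the x, y, z components.
    The time-average of each component approximates the static gravity
    vector, which reveals the sensor tilt; a rotation that aligns this
    vector with (0, 0, 1) removes the tilt from the whole recording.
    """
    g = traces_xyz.mean(axis=1)            # estimated gravity direction (sensor frame)
    g = g / np.linalg.norm(g)
    v = np.array([0.0, 0.0, 1.0])          # desired true-vertical direction

    axis = np.cross(g, v)                  # rotation axis taking g onto v
    s = np.linalg.norm(axis)               # sine of the tilt angle
    c = np.dot(g, v)                       # cosine of the tilt angle
    if s < 1e-12:                          # already vertical (or exactly inverted)
        return traces_xyz.copy()
    axis = axis / s

    # Rodrigues' rotation formula for the 3x3 rotation matrix
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + s * K + (1.0 - c) * (K @ K)
    return R @ traces_xyz
```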

A by-product of this implementation is that we also acquire high quality shear wave data simultaneously. The shear wave data is essential if we wish to improve our fundamental understanding of reservoir properties and the fluids contained therein. While the science of processing and interpreting shear wave data is still relatively new, tools are being developed and commercialized at an ever increasing rate. A number of contractors are able to process the data on a commercial basis, commercial interpretation tools are already available, and new tools like joint PP and PS inversion for rock properties are nearing commerciality as we speak. These tools will offer an unprecedented ability to understand reservoir rocks and represent a dramatic step forward in both the reduction of exploration risk and improvements in exploitation efficiencies.

As we start into 2004 there are VectorSeis equipped crews operating in Canada (2), Russia (2), Eastern Europe (1), and China (1). Statistical information demonstrates that the industry is adopting the technology at an increasing pace.

Question 2 – The first truly commercial land 3D acquisition was enabled by the arrival of systems which transmitted data digitally from the remote boxes to the central system. These early systems (circa 1984) had a maximum capacity of 240 channels and could roll up to 10 lines simultaneously. By contrast, the first VectorSeis System Four job, acquired in 2002, peaked at approximately 3000 stations (9,000 channels!) of live gear recording at 1 ms. While this is the exception, it nonetheless demonstrates that the ability to acquire true wide azimuth full wavefield data is already a commercial reality.

While operating live spreads of this size sounds like an ominous proposition in the field, it should be noted that the original operations plan for the above survey called for 75 days to acquire 55,000 shot points. The contractor was able to complete operations in only 49 days, a 35% improvement in field efficiency. Similar results have been observed elsewhere.

Since the advent of commercial land 3D, field arrays have largely been an obsolete concept. Linear arrays only perform their function (i.e. suppression of short wavelength “noise”) effectively if they are oriented in-line with the source. Given the ever decreasing line allowances, deploying a symmetrical 3D areal array is neither an operationally nor an economically viable option. As a result, linear arrays provide at best some statistical protection against ground coupling variations and deployment tilt errors (along with a 1/√n improvement in signal, assuming good coupling and no tilt errors). The trade-off is that they also operate as low-pass filters which have a unique response at every azimuth angle, something routinely ignored in processing. Several years of field experience have shown that it is demonstrably easier to achieve better ground coupling with a single buried sensor package than with multiple geophones on spike bases. In addition, the reduction of total field equipment attendant with point receivers results in significant operational efficiencies.
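
The azimuth dependence mentioned above follows directly from the standard response of a uniform linear array. The short sketch below evaluates that response for an assumed element count, spacing and apparent wavelength; all numbers are illustrative, not taken from any particular survey.

```python
import numpy as np

def linear_array_response(n, d, wavelength, azimuth_deg):
    """Amplitude response of an n-element linear array (spacing d, metres)
    to a horizontally propagating wave of the given apparent wavelength,
    arriving at the given azimuth measured from the array axis.

    Only the wavelength component along the array axis is filtered, which
    is why the same array has a different response at every azimuth.
    """
    k_along = np.cos(np.radians(azimuth_deg)) / wavelength  # wavenumber along the array
    x = np.pi * d * k_along
    denom = n * np.sin(x)
    if np.isclose(denom, 0.0):
        return 1.0                     # limit of the ratio as sin(x) -> 0
    return abs(np.sin(n * x) / denom)

# Example (illustrative numbers): a 6-element array at 3.3 m spacing
# suppresses a 20 m ground-roll wavelength well when it arrives in-line,
# but hardly at all when the same wave arrives nearly broadside.
print(linear_array_response(6, 3.3, 20.0, azimuth_deg=0.0))   # ~0.01
print(linear_array_response(6, 3.3, 20.0, azimuth_deg=80.0))  # ~0.95
```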

Full wavefield recording offers some unique solutions to the problems that accompany the use of point receivers, especially in the area of noise suppression. To avoid spatial aliasing it is necessary to identify and suppress short wavelength energy. This generally falls into the category of source generated energy propagating at velocities much slower than reflected body waves (i.e. Rayleigh waves and air blast). Since these waves tend to be characterized by retrograde elliptical particle motion, they have identical responses (albeit 90° out of phase) on both the vertical and radial axes. Based on this knowledge, tools have been developed to identify and suppress this energy. These tools are similar to polarization filters but operate in a very different manner. Rather than assuming the polarization of the particle motion (which tends to result in excessive residual noise if the elliptical motion is distorted by natural phenomena like anisotropy), better results are obtained if an adaptive vector filtering technique is employed. In this technique, one axis represents the signal plus noise (i.e. the vertical axis) while another is used as a sample of the noise only (i.e. the radial axis). Subject to a number of constraints, the adaptive filter maximizes the cross correlation between the two samples (amplitude and phase) and subtracts the noise. This technique offers a number of advantages in that it requires neither fine nor regular spatial sampling of the noise for effective application. In addition, it preserves the spectral content of the data not just in a 1D sense but in an azimuthal sense as well. This will be a prime requirement as we begin to unravel the effects of azimuthal anisotropy in processing.
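
The following is a highly simplified sketch of the adaptive subtraction principle described here, implemented as a basic least-mean-squares (LMS) filter with the vertical component as the primary (signal plus noise) input and the radial component as the noise-only reference. It illustrates the idea only and is not the commercial algorithm.

```python
import numpy as np

def lms_adaptive_subtract(vertical, radial, ntaps=32, mu=0.01):
    """Estimate coherent noise on the vertical trace from the radial trace
    and subtract it (simple LMS adaptive noise cancellation).

    vertical : signal plus noise (primary input)
    radial   : noise-only reference (e.g. ground roll, 90 deg out of phase)
    The filter weights adapt so that the filtered radial trace matches the
    noise on the vertical trace in amplitude and phase; the residual is
    the noise-attenuated vertical trace.
    """
    w = np.zeros(ntaps)
    out = np.array(vertical, dtype=float)
    # Normalise the step size by reference power for stability
    mu = mu / (np.mean(np.asarray(radial, dtype=float) ** 2) * ntaps + 1e-12)
    for i in range(ntaps, len(vertical)):
        x = np.asarray(radial[i - ntaps:i], dtype=float)[::-1]  # reference window
        noise_est = np.dot(w, x)            # predicted noise on the vertical trace
        e = vertical[i] - noise_est         # error = cleaned output sample
        w += 2.0 * mu * e * x               # LMS weight update
        out[i] = e
    return out
```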

Question 3 – As mentioned briefly in the opening paragraph, system architectures are evolving rapidly. This is in direct response to the requirement for higher efficiency (as measured by system availability during operational hours) and the ability to support ever increasing station/channel counts. In response to this demand, Input/Output has established a state-of-the-art Telemetry Design Center in Dallas, Texas, where they can draw on best-in-class telemetry expertise used by communications giants like Nokia, Ericsson, Nortel Networks and others. The result is advanced telemetry systems with multiple levels of both power and telemetry redundancy. When properly deployed, these systems can automatically detect power/telemetry disruptions and instantaneously re-route data with little or no effect on field operations. In most cases the operator only finds out about a cable break or equipment failure when the system requests replacement components to be dispatched to the affected location. By employing commercial off-the-shelf technology in their system designs, manufacturers can simultaneously improve product reliability while lowering costs.

A similar challenge is in the area of data storage. Industry standard storage technologies such as 3480/3590 tape drives are nearing the end of their commercial life (in fact such drives are already difficult for equipment manufacturers to source). As a result it is becoming increasingly common to decouple acquisition operations from archival (taping) operations. Modern systems like the I/O System Four write data directly to massive field RAID (Redundant Array of Inexpensive Disks) systems, often with as much as a terabyte of storage available. In some instances, significant cycle time reductions have been achieved by transporting either single disk drives or entire RAID systems directly to the processing center, thereby eliminating the field taping process completely. In the interim, the search for a new standard acceptable to the industry will continue.

Question 4 – With the advent of these modern, full wavefield recording systems, acquisition technology has made a sizable leap ahead of both processing and interpretation technology. Only when processing and interpretation tools for shear wave data are widely available will the complete benefit of full wavefield recording be recognized. Accompanying this transition will be the requirement for both processing and interpretation staff to upgrade their skill sets. Given the demographics of our industry this may well be the most daunting task of all.

However, the technology does offer significant benefits, especially in increasingly mature basins where all the “low hanging fruit” has long since been exploited. The only place to prospect for reserves in these areas is in traps which have historically proven either too difficult or too risky. These generally fall into three categories (although I have no doubt other uses will be found):

  • Stratigraphic traps where subtle lithologic changes can result in sometimes sizable reserves being ignored due to perceived risk factors. The integration of shear wave data with other data can often differentiate lithology based on rock properties to validate the presence of a reservoir seal, reservoir quality rock, or both.
  • Low Impedance reservoirs where the contrast between the reservoir and the surrounding lithology is so subtle that it is often not visible on conventional seismic data. Again, the integration of shear wave data will allow the identification of reservoir quality rocks otherwise not possible with conventional techniques.
  • Basin centered tight gas formations where fracture orientation and density determination are the key to production economics. Both P-waves and shear waves are influenced by the presence of fractures. Our understanding of these complex phenomena is in its early stages. As it matures it offers the possibility to exploit these reserves through an understanding of their azimuthal anisotropy properties.

 

Answers by Norm Cooper

Question 1 – We have tried to monitor the progress of the digital sensor since we first became aware of its development, long before the release of the first production system in 1999. Most of the current practical experience has been acquired in Western Canada. Recently, contractors in Russia, Poland and China have also started using digital sensors. Although the instrument manufacturers (both Input/Output and Sercel) have been keen to share their information with us, we have been somewhat stymied by oil companies insisting on maintaining confidentiality of their surveys and the results.

At Mustagh, we are fortunate to have been invited to see results of several of the programs and tests that have been recorded. Our general impression is that many of the data comparisons do not represent well-controlled experiments. In many cases, recent digital data is being compared to older analogue data acquired with very different parameters. From such tests, we usually hear rave reviews for the advantages of the digital sensors. On more controlled experiments, however, the conclusions have often been more moderate. After full stream processing, the digital data is usually as good as the analogue data, but it is often not dramatically better.

Our own tests using digital sensors indicate that we are benefiting from better-controlled field operations (digital sensors are generally planted in pre-drilled holes designed to hold the sensor snugly and vertically) and considerably improved sensor coupling. We are very impressed with the repeatability of signatures from two sensors in close proximity. We are also quite excited by the vector fidelity offered by an algorithm patented by I/O whereby the recorded 3-component data can be transformed to appear virtually perfectly oriented based on very low frequency measurements of acceleration (gravity). Certainly, digital sensors are offering an opportunity to use very low frequencies, which have previously been distorted in phase and amplitude by analogue sensors.

We firmly believe that digital sensors are providing a tool whereby we may obtain enhanced image quality due to greater dynamic range, vector fidelity and stable bandwidth and phase response. However, our efforts at utilizing this potential have not yet yielded a definitive result capable of convincing our industry to adopt this technology and replace our analogue systems (at least not yet).

Some of our obstacles remain in the processing of seismic data. The dynamic range of the recording system is probably not our limiting factor. The dynamic range of complex algorithms (such as the FFT), constrained by numerical precision and padding, is one example of the limitations we still face in processing.

Probably more significant is the noise floor in our recorded data due to noise modes in the natural earth as well as those generated by our energy sources. Due to discrete sampling of the wavefield at the earth’s surface, many of our noise modes are poorly sampled and appear “random” in our recorded data. The phase of such noise is apparently chaotic. Until we develop powerful processing methods to deal with this noise, we will not be able to realize the potential increased bandwidth in our signal that is being offered by current recording systems.

Recording unattenuated low frequencies (i.e. below 10 Hz) offers great potential for increasing overall bandwidth and image clarity. However, until we learn to manage the phase of noise and signal in these very low frequencies, how will we make proper use of this data? Jon Tessman has presented excellent work using adaptive filtering to suppress organized and predictable noise modes. But even if we had no elliptical noise modes to contaminate our low frequencies, how will we deconvolve our data? What kind of deconvolution operator lengths will be required to properly manage the phase of data components with periods far in excess of 100 milliseconds?
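
To put the operator-length question in perspective, a frequency component of f Hz has a period of 1000/f milliseconds, so the components below 10 Hz mentioned above have periods of 100 ms and longer. The snippet below simply tabulates a few of these periods; the comparison operator lengths quoted in the comment are typical values assumed here for illustration.

```python
# Period (in ms) of a few low-frequency components. Components below 10 Hz
# have periods of 100 ms or more, long relative to deconvolution operator
# lengths in common use (assumed here, for illustration, to be roughly
# 100-200 ms).
for f_hz in (2, 5, 10, 20):
    print(f"{f_hz:>3} Hz  ->  period = {1000.0 / f_hz:6.1f} ms")
```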

Geophysicists are now being presented with incredible new tools to ply our trade. The digital sensor is a technological marvel and offers the potential to improve seismic images. However, our greatest fear is that, as an industry, we will place low regard on this tool simply because it does not provide immediate and large-scale improvements in our data. Our firm belief is that the tool is capable of delivering many of the promises we have heard. However, it will be some time before our processing and interpretation tools evolve to take full advantage of this technology.

Where are we today? With reasonable application and thoughtful program design, the digital sensor should not deliver data that is in any way inferior to conventional analogue sensors. In many situations it will deliver data that is marginally better and in a few situations it will deliver remarkable results! As an industry, let’s commit to support the development of this technology and focus on some of the problems we must solve in order to take full advantage of the information being recorded. In the future, the full promise of digital sensor technology may be realized.

Question 2 – Our estimates show that recording channel capacity has grown by a factor of about 2.5 each decade since 1935 (when a 6-channel recorder was state-of-the-art). There were slight perturbations in this growth pattern during the ‘80s and early ‘90s as we adjusted to distributed telemetry systems. There was a limit to the number of analogue channels that could be fed to a central recorder. Digital transmission from distributed “boxes” of recording instruments allowed us to take an anomalous leap in channel capacity. Today, a 3000-channel recording crew has become the norm. It is not uncommon to see special crews outfitted with more than 6000 channels. By projection, a basic crew in 2025 will be fielding over 22,000 channels. Note that the WesternGeco “Q” system is currently capable of this. Also, the “IT” (infinite telemetry) system being developed by VibTech in Scotland has the potential to exceed these projections.
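
The 2.5-times-per-decade figure is straightforward to check against the numbers quoted above: starting from 6 channels in 1935, nine decades of compounding reaches the low twenty-thousands by 2025. A quick sketch of that projection:

```python
# Compound growth of channel capacity at ~2.5x per decade from a
# 6-channel recorder in 1935 (rate and starting point from the text).
base_year, base_channels, rate_per_decade = 1935, 6, 2.5
for year in (1965, 1995, 2005, 2025):
    decades = (year - base_year) / 10.0
    print(year, round(base_channels * rate_per_decade ** decades))
# 1965 -> ~94, 1995 -> ~1465, 2005 -> ~3662, 2025 -> ~22889
```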

Amazingly, our acquisition industry has managed to keep the cost of recording seismic very close to a constant. Major increases in costs that we have seen can be largely attributed to regulatory compliance and line preparation costs. Actual recording costs have remained remarkably constant. This is an often-overlooked benefit of recent technology. Imagine if we were now paying for seismic over 40 times the cost that we paid in 1965!

We have been asked a number of times what we would do if we were given 20 times the channels we currently have available. My answer, in today’s environment, would have to be “I don’t know”.

Can we eliminate groups of geophones organized in distributed arrays and replace them with single 3-component digital sensor units? Let’s be careful. Analogue arrays have benefits as well as detriments for our data quality. Benefits include averaging of random noise, stabilizing trace-to-trace coupling variations, increased transmission power, and distributed spatial sampling. Summing the distributed analogue elements also filters out very short wavelength components of our recorded signal, components that are generally far shorter than desired reflection wavelengths. Detriments include sensitivity to topographic and near surface velocity variations, and signal attenuation due to averaging of in-group statics.

Our research shows that the benefits of intelligent use of arrays still far outweigh the losses expected due to the detriments. In fact, in an environment where we typically preserve a recorded trace once every 20 meters (for 2-D) or 60 meters (for 3-D), an array is a very necessary tool as a partial spatial anti-alias filter. Jon Tessman has provided examples of adaptive filtering of elliptical noise patterns where fine spatial sampling is not required. This work is to be commended, but it does not exempt us from the need to provide proper spatial sampling of relatively unpredictable noise modes such as scattered trapped mode, source artifacts, and noise produced through non-linear processes. I disagree with Jon’s statement that arrays have become an “obsolete concept”. Unfortunately, the true understanding of array significance and skills in designing appropriate arrays are becoming obsolete concepts in our industry.
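
To make the anti-alias point concrete: the shortest wavelength that a given trace interval can sample without aliasing is twice that interval, so the 20 metre (2-D) and 60 metre (3-D) intervals quoted above alias everything shorter than 40 metres and 120 metres respectively. A minimal sketch, with an illustrative ground-roll example in the comments:

```python
# Spatial Nyquist: wavelengths shorter than 2 * trace interval alias unless
# they are attenuated in the field, which is the role of the receiver array
# as a partial spatial anti-alias filter.
for interval_m in (20.0, 60.0):
    print(f"trace interval {interval_m:4.0f} m -> shortest unaliased "
          f"wavelength {2 * interval_m:5.0f} m")
# Example (illustrative): ground roll at 300 m/s and 10 Hz has a 30 m
# wavelength, already aliased at a 20 m trace interval.
```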

Therefore, single sensors should only replace analogue arrays when we can preserve recorded data at much smaller group intervals than are in current use. For 3-D programs, it is important to record many traces of diverse statistics. Using many more recorded channels along typical current receiver line spacings will not accomplish this objective. To make good use of a dramatic increase in channel count, we must learn how to produce many more receiver lines in an economically and environmentally responsible manner.

Question 3 – One impediment to greater expansion of channel capacity is data transmission rates. Retrieving thousands of channels of finely sampled seismic data over areas up to (and sometimes exceeding) one hundred square kilometers creates great challenges for data collection, especially when these retrievals must be repeated every few seconds and the system doing all this must be highly mobile.

Jon Tessman has presented excellent material addressing the “serviceability” of a seismic spread and has emphasized the need for nodal, networked systems and multi-pathing. I have already mentioned the “IT” system briefly in the previous section. This system is expanding the nodal network concept through the application of cellular telephone technology. I expect this will become something of a model for future systems.

As trace count for each seismic record increases, we must give up the habit of looking at monitor records. In the field, we will no longer be able to visually scan each recorded trace (in fact we already examine only about every third record). Visual quality control will be replaced more and more by automatic (or semiautomatic) evaluation of attributes and indicators. Only limited subsets of recorded traces will be reviewed in real time.

However, perhaps some of the most interesting technologies are not just in instrumentation. Line cutting methods, survey methods, use of LIDAR for imaging and surveying … all of these technologies have significantly evolved in recent years. They are all guided toward maintaining our ability to produce many, closely spaced seismic lines in order to accurately generate and capture wavefields. We will need more progress in these fields in order to utilize more recording channels in meaningful ways.

Question 4 – Improving image quality while maintaining a low cost of our product has always been the challenge of geophysicists. I see this continuing as our major hurdle in the future.

Rising regulation and compliance costs are somewhat out of our hands (other than through the lobbying of associations such as CAPP and CAGC).

Increased availability and reduced cost of technology (in processing and acquisition) is allowing us to make use of incredibly large volumes of recorded data. Continued improvement of image quality will depend on further refinement of our imaging algorithms, but it will also demand more sampling of the wavefield during field operations.

Meeting this demand will require not only more recording channels, but also field methods that will help mitigate a growing footprint of operations on our landscape. Very low impact cutting methods, one-pass operations, and catch phrases that we have not yet heard will be a standard part of future seismic operations. All of this must be accomplished without significantly increasing the cost or risk of our product.

Amidst all of this, it will be imperative that all geophysicists remain alert and well educated in the technologies available to us. It is only through wise use of our tools that we will make progress. In today’s world, geophysicists are under pressure to produce positive results on exploration efforts, and yet time and willingness for training are limited. Our greatest challenge may be in maintaining a society of geophysicists who are well educated in the tools and emerging technologies of our trade.

