This journey began 22 years ago when I first arrived in Dhahran, Saudi Arabia, the land of giant oil and gas fields. It was a time when the search began for new giants outside the Eastern Province (Retained Area 1), which is home to the world's largest oil field, Ghawar. In the east, this field and others are low-relief structures overlain by a complex near surface formed by karst dissolution of carbonate and anhydrite formations. This dissolution created a complex network of open and collapsed caverns that act as secondary scatterers, masking the primary reflections during seismic acquisition. Under these conditions, when only 240-channel recording systems were available, it was routine to use a 72-geophone array (six strings of 12 geophones each) laid out over an area 108 m by 50 m, along with five vibrators per fleet sweeping simultaneously with a 10-m move-up. Yes, it was a sea of geophones. But this would all change with advances in acquisition technology.
In 2001, a conventional 480-channel 2D crew was upgraded to 2,880 channels (6 x 480). Individual geophone strings and single sweeps replaced the giant in-field receiver and source arrays, and the group interval was reduced from 25 m to 5 m. In principle, this uncommitted (array-free) design would deliver a higher-resolution subsurface image. But this was not the case, and it was unclear why we could not extend the high-frequency limit beyond 40 Hz, or 60 Hz at best. Was the signal buried in a broadband backscattered noise floor?
Over the next decade, as the number of recording channels increased, we progressively improved the uniformity of our spatial sampling grids in offset and azimuth and shifted from narrow- to wide- to full-azimuth designs. During this evolution, each time the number of active channels was increased, the same question was raised: how much fold is enough? The answer was always the same: it is not enough; the noise leaks into my prestack migrated image! In fact, it is better to speak of source and receiver sampling density than of fold, as sketched below.
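To make the fold-versus-density distinction concrete, here is a minimal back-of-the-envelope sketch for an orthogonal 3D land geometry. The geometries, offsets, and line spacings below are illustrative assumptions, not the actual survey parameters discussed in this article; the fold rule of thumb (fold ~ pi * Xmax^2 / (4 * SLI * RLI)) is a standard design approximation.

```python
# Minimal sketch: nominal fold and trace density for a hypothetical
# orthogonal 3D land geometry. All values are illustrative assumptions.
import math

def nominal_fold(max_offset_m, source_line_int_m, receiver_line_int_m):
    """Nominal fold within a circular patch of radius max_offset_m
    (rule of thumb: fold = pi * Xmax^2 / (4 * SLI * RLI))."""
    return math.pi * max_offset_m ** 2 / (4.0 * source_line_int_m * receiver_line_int_m)

def trace_density_per_km2(fold, source_int_m, receiver_int_m):
    """Traces per km^2: fold divided by the natural bin area (SI/2 x RI/2)."""
    bin_area_m2 = (source_int_m / 2.0) * (receiver_int_m / 2.0)
    return fold / bin_area_m2 * 1.0e6

# Hypothetical legacy geometry: 50 m stations, 400 m line spacings.
legacy_fold = nominal_fold(3000.0, 400.0, 400.0)
legacy_density = trace_density_per_km2(legacy_fold, 50.0, 50.0)

# Hypothetical dense single-sensor geometry: 12.5 m stations, 100 m lines.
dense_fold = nominal_fold(3000.0, 100.0, 100.0)
dense_density = trace_density_per_km2(dense_fold, 12.5, 12.5)

print(f"legacy: fold ~{legacy_fold:.0f}, ~{legacy_density:.2e} traces/km^2")
print(f"dense : fold ~{dense_fold:.0f}, ~{dense_density:.2e} traces/km^2")
```

The point of the comparison is that two surveys with similar fold can differ by orders of magnitude in trace density once the station intervals change, which is why channel count alone never answered the question.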
The ideal full-azimuth symmetric survey is designed to record unaliased noise and uniformly sampled (offset/azimuth) signal, which can then be used to extract rock properties and characterize fractures. This is why we require a high-channel-count, high-productivity, single-sensor, single-source seismic crew.
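The "record unaliased noise" requirement reduces to the usual spatial Nyquist condition: the station interval must not exceed v_min / (2 f_max) for the slowest wavefield to be preserved. A minimal sketch follows; the velocities and frequencies are illustrative assumptions, not measured values from any of the surveys described here.

```python
# Minimal sketch of the spatial-sampling (anti-aliasing) requirement.
# All velocities and frequencies below are illustrative assumptions.

def max_unaliased_interval_m(min_apparent_velocity_m_s, max_frequency_hz):
    """Largest station interval that keeps a wave of the given minimum
    apparent velocity unaliased up to the given maximum frequency:
    dx <= v_min / (2 * f_max)."""
    return min_apparent_velocity_m_s / (2.0 * max_frequency_hz)

# Hypothetical ground roll: ~800 m/s, to be preserved up to 40 Hz.
print(max_unaliased_interval_m(800.0, 40.0))   # -> 10.0 m

# Hypothetical reflection signal: ~2500 m/s apparent velocity, up to 80 Hz.
print(max_unaliased_interval_m(2500.0, 80.0))  # -> ~15.6 m
```

Under these assumed numbers, sampling the slow noise without aliasing demands a denser grid than sampling the reflections themselves, which is the practical driver for single-sensor, single-source designs.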
In 2010, we experienced a seismic revolution in channel capacity. The number of active channels increased to 25,000 for two production crews and 100,000 for one pilot crew. We were no longer dealing with tens of terabytes of raw source records per survey but with several terabytes per day. One project exceeded one petabyte of raw source records spanning 1,700 km². To accommodate these giant volumes of seismic data, hardware and software had to be upgraded, and new workflows were needed to deal with traces with lower-than-expected signal-to-noise ratios. The formation of an integrated team with efficient transverse communication procedures was the key to creative and innovative solutions. The high-resolution time and impedance images, attribute maps, and extracted geobodies brought us one big step closer to the engineers.
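Where the terabytes per day come from is simple arithmetic: channels, samples per trace, bytes per sample, and source points per day. The sketch below uses hypothetical crew parameters (not the actual acquisition settings of the surveys mentioned above) to show the order of magnitude.

```python
# Rough, hypothetical estimate of daily raw-data volume for a high-channel
# count crew. All parameter values are illustrative assumptions.

def daily_raw_volume_tb(active_channels, record_length_s, sample_rate_ms,
                        bytes_per_sample, source_points_per_day):
    """Raw source-record volume per day, in terabytes."""
    samples_per_trace = record_length_s * 1000.0 / sample_rate_ms
    bytes_per_record = active_channels * samples_per_trace * bytes_per_sample
    return bytes_per_record * source_points_per_day / 1.0e12

# e.g. 100,000 channels, 6 s records at 2 ms, 4-byte samples, 10,000 VPs/day
print(f"{daily_raw_volume_tb(100_000, 6.0, 2.0, 4, 10_000):.1f} TB/day")  # ~12 TB/day
```

With these assumed numbers a single crew generates on the order of ten terabytes of raw records per day, consistent with the shift from per-survey to per-day data-management thinking.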
We still have not resolved all the seismic challenges. In theory, the near surface is still undersampled, and it is rare to see frequencies greater than 60 Hz at the oil and gas targets. But I am confident the density of spatial sampling grids will increase, technology will improve, and an integrated approach will lead to improved workflows and new discoveries. One thing is certain: these are exciting times in seismic exploration and development on land.
This lecture will focus on the main challenges facing onshore seismic in the past and today. We will look at how our understanding of noise and signal has changed with time and provide a peek into the future.