
This article is the first of a two-part series that looks at the impact of high performance computing on seismic imaging, one of the five stages of the oil and gas exploration workflow. In this article, we look at reverse time migration (RTM) computations — an industry-standard algorithm used to generate accurate images of the subsurface — performed by the Intel High Performance Computing Center (Intel PCC) at the Alberto Luiz Coimbra Institute for Graduate Studies and Research in Engineering (COPPE) at the Federal University of Rio de Janeiro, Brazil.

What are the costs and risks of oil and gas exploration?

The cost of offshore drilling for oil can run to several hundred million dollars, with no guarantee of finding oil at all. The high cost of data acquisition, drilling and production reduces average profit margins to less than 10 percent.1 Petroleum licenses are expensive and carry strict time limits, so data acquisition, data processing and interpretation of 3-D images must all fit within a limited time-to-solution envelope.

In today’s oil and gas industry, technology and supercomputers (such as those at the Intel High Performance Computing Center (Intel PCC), COPPE/UFRJ – Rio de Janeiro, Brazil) are used to help lower the costs and decrease the time required to discover deposits of petroleum buried under water and rock. Together, energy exploration, production and reservoir monitoring constitute one of the most significant big data and compute-intensive applications in the private sector.

To help reduce costs and impact on the environment, energy companies are turning to supercomputers to aid in oil and gas exploration. In 2013, Total invested €60m to purchase the largest supercomputer in the private sector. That same year, BP purchased a supercomputer in Houston, TX, designed for commercial oil and gas exploration research. The BG Group supported the purchase of the largest supercomputer in South America, dedicated specifically to energy exploration research and located at SENAI CIMATEC.

Seismic imaging used to locate oil resources and reduce costs

Seismic imaging is used to map the geology of areas thought to contain hydrocarbons, reducing the need for expensive exploratory drilling. Seismic waves are produced by artificial methods to measure the geological structures of the earth’s subsurface. For offshore surveys, an acquisition vessel fires air guns that send shock waves into the water, and these sound waves refract and reflect as they travel through water and rock. Acoustic receivers, such as hydrophones, measure the time it takes for the sound waves to travel through the earth’s subsurface. Seismic imaging, through a process called depth conversion, creates a realistic model of this structure. In addition to helping locate hydrocarbons, seismic imaging enables operators to monitor the reservoir throughout its lifecycle to help maximize the value of the discovered resource.

Data receivers may collect petabytes of data, and the volume of data increases rapidly as more receivers are used in an array to improve spatial resolution. Due to the large amount of data and analysis required, high performance computers that can sustain petaflops (10^15 floating-point operations per second) are required.

Seismic imaging using RTM at Intel PCC COPPE/UFRJ

The Intel PCC, COPPE/Federal University of Rio de Janeiro (UFRJ), performs seismic imaging using the reverse time migration (RTM) method, an industry-standard algorithm used to generate accurate images of the subsurface. According to Alvaro Coutinho, Professor at the Intel PCC, COPPE/Federal University of Rio de Janeiro, “The project aim is to incorporate uncertainty quantification (UQ) in seismic imaging, helping to improve the decision process (Figure 1) in oil and gas exploration and production (E&P).

Figure 1: Simplified workflow for decision making in the E&P industry. Courtesy of Thibaut Lavril, Graduate Student at the Intel PCC, COPPE/Federal University of Rio de Janeiro.

“Quantifying uncertainties in RTM is crucial, but challenging due to the computational cost and the high-dimensional uncertain inputs. To tackle this issue, COPPE uses a framework that quantifies the uncertainty in the output of a large-scale RTM given a high-dimensional uncertain input. This process involves solving the differential equations that describe wave propagation inside the earth under a set of initial, final and boundary conditions. Using these techniques, a framework coupling dimension reduction and sparse grid stochastic collocation is designed to quantify uncertainty in RTM seismic imaging (Figure 2). An advantage of the COPPE simulation framework is that it is non-intrusive and helps determine subsurface petroleum information without drilling.”
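The core RTM idea — solve the wave equation forward in time for the source, backward in time for the recorded data, and cross-correlate the two wavefields — can be sketched in one dimension with NumPy. All model parameters below (grid size, velocities, reflector position, wavelet frequency) are illustrative toy values, not COPPE's actual setup:

```python
import numpy as np

def ricker(t, f0=15.0, t0=0.1):
    """Ricker wavelet source signature."""
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def propagate(v, src, isrc, nt, dt, dx, sponge=30):
    """Second-order FD solution of p_tt = v^2 p_xx with absorbing sponges.

    Injects src at grid point isrc; returns all wavefield snapshots and
    the trace recorded at isrc.
    """
    n = len(v)
    c2 = (v * dt / dx) ** 2
    # Cerjan-style damping taper near both edges to suppress reflections.
    taper = np.ones(n)
    ramp = np.exp(-(0.015 * np.arange(sponge, 0, -1)) ** 2)
    taper[:sponge], taper[-sponge:] = ramp, ramp[::-1]
    p_prev, p = np.zeros(n), np.zeros(n)
    snaps = np.zeros((nt, n))
    for it in range(nt):
        lap = np.zeros(n)
        lap[1:-1] = p[2:] - 2.0 * p[1:-1] + p[:-2]
        p_next = 2.0 * p - p_prev + c2 * lap
        p_next[isrc] += src[it] * dt ** 2
        p_prev, p = p * taper, p_next * taper
        snaps[it] = p
    return snaps, snaps[:, isrc]

# Toy model: 2000 m/s background with a reflector (3000 m/s) at grid point 100.
n, dx, dt, nt, isrc = 200, 10.0, 0.002, 500, 40
v_true = np.full(n, 2000.0); v_true[100:] = 3000.0
v_smooth = np.full(n, 2000.0)              # migration model: reflector unknown
src = ricker(np.arange(nt) * dt)

_, d_obs = propagate(v_true, src, isrc, nt, dt, dx)        # "recorded" data
snaps, d_syn = propagate(v_smooth, src, isrc, nt, dt, dx)  # source wavefield
residual = d_obs - d_syn                                   # reflected energy only

# Back-propagation is forward propagation of the time-reversed residual;
# the zero-lag cross-correlation imaging condition then forms the image.
q_rev, _ = propagate(v_smooth, residual[::-1], isrc, nt, dt, dx)
image = np.einsum("ti,ti->i", snaps, q_rev[::-1])

print("image energy focuses near grid point", np.argmax(np.abs(image)))
```

The image energy focuses at the reflector, where the forward and backward wavefields coincide in time. Production RTM works in 3-D with high-order operators, anisotropic physics and thousands of shots, which is what drives the computational cost discussed above.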

Figure 2: Workflow to model uncertainties in seismic imaging. Courtesy of Thibaut Lavril, Graduate Student at the Intel PCC, COPPE/Federal University of Rio de Janeiro.

How COPPE uses supercomputers and software for seismic imaging

COPPE performs the RTM analysis on Intel Xeon Phi 7120P coprocessors and Intel Xeon E5-2697 v2 processors. “One of the key issues of this project is to use an optimized RTM solver for Intel Xeon Phi coprocessors and a scientific workflow management system to manage the execution of possibly thousands of deterministic solutions. This scientific workflow management system is able to collect provenance data at runtime (Ogasawara et al, 2011) and uses a distributed database solution. However, handling the computational cost, scaling the dimension reduction and visualizing uncertainties in seismic images are still challenging, making this project a good example of HPC and big data,” states Coutinho.

COPPE uses OpenMP* (Open Multi-Processing), Intel MPI and Intel VTune tools for compiler directives, library routines, environment variables and performance tuning. In addition, the team uses the Tuning and Analysis Utilities (TAU) tool (Shende and Malony, 2006) to monitor the performance of scientific application executions, gathering performance data and integrating it with domain (semantic) data in a single database that is queryable at runtime.

Supercomputers improved RTM seismic imaging calculations

The COPPE/UFRJ research looked at advantages and disadvantages of the seismic imaging approaches under these main aspects: memory demands, processor performance, frequency and number of cores, and seismic modeling runtime. HPC techniques applied to the RTM algorithm were parallelization, vectorization, thread affinity, memory alignment, padding, prefetching, loop unrolling (for the Intel Xeon processor E5) and cache blocking (for the Intel Xeon Phi coprocessor).
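Cache blocking, the last technique in that list, restructures the stencil loops so that each small tile of the grid stays resident in cache while it is updated. The performance payoff appears in compiled kernels like COPPE's, but the loop structure itself can be sketched in NumPy; the 5-point Laplacian and the block size here are illustrative choices, and the blocked version provably computes the same result as the straightforward one:

```python
import numpy as np

def laplacian(p):
    """Unblocked 5-point Laplacian over the interior of a 2-D grid."""
    out = np.zeros_like(p)
    out[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] +
                       p[1:-1, 2:] + p[1:-1, :-2] - 4.0 * p[1:-1, 1:-1])
    return out

def laplacian_blocked(p, bs=32):
    """Same stencil, computed tile by tile (cache blocking)."""
    out = np.zeros_like(p)
    ny, nx = p.shape
    for j0 in range(1, ny - 1, bs):          # loop over tiles...
        for i0 in range(1, nx - 1, bs):
            j1, i1 = min(j0 + bs, ny - 1), min(i0 + bs, nx - 1)
            # ...and update one bs-by-bs tile while it is cache-resident
            out[j0:j1, i0:i1] = (p[j0 + 1:j1 + 1, i0:i1] + p[j0 - 1:j1 - 1, i0:i1] +
                                 p[j0:j1, i0 + 1:i1 + 1] + p[j0:j1, i0 - 1:i1 - 1] -
                                 4.0 * p[j0:j1, i0:i1])
    return out

rng = np.random.default_rng(0)
grid = rng.standard_normal((257, 193))
print("results match:", np.allclose(laplacian(grid), laplacian_blocked(grid)))
```

On a coprocessor with small per-core caches, choosing the tile size so that a tile plus its halo fits in cache is what keeps the stencil's memory traffic from dominating the runtime.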

The COPPE group performed tests on Intel’s Endeavor cluster, varying the number of cores. The team noted improvements in performance efficiency due to the shared file systems with high-speed synchronization and better computing power. Coutinho notes, “The wave propagation core algorithms are optimized for Intel Xeon Phi coprocessors, and performance is excellent. The order of datasets and seismic parameters is also important in the speed of processing. COPPE experiments show that the optimized 16th order operator offers the best performance — saving more than 70 percent in processing time and up to 90 percent in memory capacity.”
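The “16th order operator” refers to the width of the finite-difference stencil used to approximate spatial derivatives: a higher-order operator is more accurate per grid point, which allows a coarser grid and hence the memory and runtime savings Coutinho quotes. The weights of a central second-derivative operator of order 2M can be derived generically by matching Taylor-series moments; this is a textbook construction, not COPPE’s tuned operator:

```python
import numpy as np

def fd2_coeffs(M):
    """Weights w so that f''(x) ~ sum_k w[k] * f(x + k*h) / h**2, accuracy order 2M.

    Solves the Taylor-moment conditions sum_k w_k * k**j = 2! * delta(j, 2)
    for j = 0..2M. The system is solved on nodes scaled into [-1, 1] for
    better numerical conditioning, then mapped back to integer offsets.
    """
    y = np.arange(-M, M + 1) / M                      # scaled nodes in [-1, 1]
    A = np.vander(y, 2 * M + 1, increasing=True).T    # A[j, i] = y_i ** j
    rhs = np.zeros(2 * M + 1)
    rhs[2] = 2.0                                      # 2! from the f'' Taylor term
    return np.linalg.solve(A, rhs) / M ** 2

def fd2(f, x, h, M):
    """Approximate f''(x) with the order-2M central operator on spacing h."""
    k = np.arange(-M, M + 1)
    return fd2_coeffs(M) @ f(x + k * h) / h ** 2

# M = 1 recovers the classic second-order [1, -2, 1] stencil.
print(fd2_coeffs(1))

# On f = sin, the 16th-order operator (M = 8) is far more accurate than the
# 2nd-order one at the same grid spacing.
h, x = 0.1, 0.3
err2 = abs(fd2(np.sin, x, h, 1) - (-np.sin(x)))
err16 = abs(fd2(np.sin, x, h, 8) - (-np.sin(x)))
print("2nd-order error:", err2, " 16th-order error:", err16)
```

The trade-off is that each wider stencil touches more neighbors per point, so the optimal order depends on the memory system, which is exactly the tuning study described in the Costa et al. reference.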

For the 3-D acoustic kernel, COPPE achieved 270 Gflop/s running on an Intel Xeon Phi coprocessor 7210. This exceeds the results reported by Intel authors in “Characterization and Optimization Methodology applied to Stencil Computations,” Chapter 23 of High Performance Parallelism Pearls (see References). “With such performance, the overall time of UQ propagation will be substantially reduced, since it will be necessary to run several RTMs in parallel to propagate the uncertainties,” states Coutinho (see reference Costa et al).

Petrobras and PETREC are using COPPE supercomputer analysis in their seismic imaging

Brazilian law requires that one percent of the revenue coming from oil extracted offshore be invested in research. COPPE works on joint projects sponsored by the Brazilian Petroleum Agency to help aid in this research. The team also works with Petrobras, one of the country’s leading centers of expertise in seismic imaging. According to Coutinho, “We do basic research on new algorithms, methods and technologies trying to help them to improve their geological knowledge about an area of interest. COPPE’s business is knowledge, and our primary aim is to disseminate new knowledge to our partners.” The High Performance Computing Center at COPPE/UFRJ (and others) in Brazil won the Horizon2020 Brazil-EU Third Coordinated Call BR-EU in Information and Communications Technologies award for its work in the HPC area (as listed under “Result: selected projects” on the call’s Web page).

According to Rui Pinheiro Silva, Manager of the Geophysics Section at the Petrobras Research Center, “Our Petrobras team works with COPPE through research projects developing algorithms for seismic imaging. Our projects with COPPE are specifically designed to test new geophysical technologies. COPPE provides Petrobras with codes that help us to test new possibilities for forward modeling and imaging. The projects have provided relevant contributions for our understanding of some geophysical processes.”

Future of seismic imaging research enabled by next-gen supercomputers

Coutinho believes that “long-running executions need to be steered by specialists. This is related to the big data challenge of putting the human in the loop (Mattoso et al.). In the future, supercomputer applications need to provide for execution monitoring; data staging out during the execution for user steering; receiving human input to conclude the execution early; fine-tuning parameters and decisions before the computation starts; and reacting during the execution based on these human runtime changes. In addition, we need to continue to optimize the seismic imaging kernels for future Intel Xeon Phi processors (code-named Knights Landing) and to continue visualizing uncertainties in seismic images, making interpretation easier for the end user.”

1. Intel Parallel Computing Center PESCI: Performance PortablE SeismiC Imaging paper

References

  • “The TAU Parallel Performance System,” S. Shende and A. D. Malony, International Journal of High Performance Computing Applications, Volume 20, Number 2, Summer 2006, Pages 287-311.
  • “Characterization and Optimization Methodology applied to Stencil Computations,” Chapter 23, High Performance Parallelism Pearls, C. Andreolli, P. Thierry, L. Borges, G. Skinner, C. Yount, J. Reinders & J. Jeffers, MK, 2015
  • “An algebraic approach for data-centric scientific workflows,” E Ogasawara, J Dias, D Oliveira, F Porto, P Valduriez, M Mattoso, Proc. of VLDB Endowment 4 (12), 1328-1339, 2011.
  • “A Trade-Off Analysis Between High-Order Seismic Rtm and Computational Performance Tuning,” Danilo L. Costa, Alvaro L. G. A. Coutinho, Bruno S. Silva, Josias J. Silva and Leonardo Borges, in 1st Pan-American Congress on Computational Mechanics – PANACM 2015, XI Argentine Congress on Computational Mechanics – MECOM 2015, S. Idelsohn, V. Sonzogni, A. Coutinho, M. Cruchaga, A. Lew & M. Cerrolaza (Eds), pp 955-962, International Center for Numerical Methods in Engineering (CIMNE) Barcelona, Spain, ISBN: 978-84-943928-2-5
  • “Dynamic steering of HPC scientific workflows: A survey,” Marta Mattoso, Jonas Dias, Kary A.C.S. Ocaña, Eduardo Ogasawara, Flavio Costa, Felipe Horta, Vitor Silva, Daniel de Oliveira, Future Generation Computer Systems, Volume 46, May 2015, Pages 100–113, http://dx.doi.org/10.1016/j.future.2014.11.017

Linda Barney is the founder and owner of Barney and Associates, a technical/marketing writing, training and web design firm in Beaverton, OR.
