High performance computing models

Louise Davis

Bert Beals looks at how innovative HPC models can give oil and gas companies an edge over competitors

As the oil industry works to satisfy demand for hydrocarbons as part of the world's energy needs, high performance computing is becoming critical to companies that are seeking new sources and trying to maximise existing reserves at a time of low prices.

The current reality of the oil and gas industry dictates that organisations gather more data, find ways to use that information more effectively and make better decisions based on their analyses. This is leading to a huge influx of information entering into simulations, modelling and other supercomputing tasks. At the same time, the algorithms needed to support these efforts have become much more complex and demanding.

This means the industry has now reached a significant tipping point in its use of supercomputing resources. To remain competitive, oil and gas companies will have to implement far more powerful, purpose-built solutions.

The growth of complexity

The use of these far more complex algorithms is necessary in many areas of the industry - especially in seismic processing, where they are used to produce higher-fidelity images.

New shot records are denser and contain exponentially more attributes, which offers a composite picture of much higher fidelity and accuracy. The problem is that current seismic simulation techniques already involve incredibly large datasets and complex algorithms that face limits when run on commodity clusters.

Indeed, the methodologies most commonly deployed are unable to support this rapidly advancing level of sophistication.

While it is a truism that the potential opened up by mathematics outruns the computer architectures available, it is now possible to complete tasks (such as full waveform inversions) in weeks or even days, when previously they took years.

Change is necessary

Traditionally, efforts to improve seismic processing throughput have come from the availability of newer CPUs featuring faster clock speeds, and new algorithms designed to take advantage of these faster processors and larger storage capacities.

We are seeing that some algorithms must be spread across multiple nodes. Traditional interconnects such as Ethernet or InfiniBand are ill-suited to the wide variety of intercommunication requirements that arise when these algorithms need to exchange information between nodes. In addition, the input/output (I/O) demands on any single node are becoming ever more extreme.

These computing requirements mean it is no longer possible simply to throw processing power at the problems. Systems that meet the performance requirements of the world's largest and most complex scientific and research problems must be built from the ground up.

New applications

Powerful new applications demand a new infrastructure that has the characteristics of an MPP (Massively Parallel Processing) platform but without the concomitant restrictions and performance penalties of the traditional approach.

How else can geo-scientists build a three-dimensional, high-resolution model of a ten-kilometre cube of the Earth's sub-surface and navigate it as easily as Google Earth? That is the challenge. The Earth has to be modelled as a dynamic entity, matching mathematical models with the actual rock properties.

To achieve this takes an increase in computing power of at least one, and perhaps two, orders of magnitude compared with today. Yet this is not what current systems are built for.

Further down the line, some of the challenges may appear daunting to those outside the world of high performance computing. One multi-national energy company, for example, recognises that before the end of the current decade it will require more than a hundred petaflops to conduct the seismic analysis and processing for newer, highly dense 3D surveys.

This kind of work is likely to require a tenfold growth in supercomputing power compared with today.

A new architecture

The question is what kind of architecture is necessary to allow the oil and gas sector to fully optimise all the information and processing resources at its disposal.

The answer in large part lies in the adoption of solutions that are not built around traditional operating models and instead use innovative techniques to maximise their operational efficiency.

Specifically, there is a need for HPC systems that go beyond adding raw power to operations and instead focus on moving data between supercomputing nodes efficiently.

Innovation and tradition

Traditional computational and I/O techniques can be replaced by methods focused on improved interconnect and storage capabilities - alongside traditional computing functionality - to help oil and gas companies stay ahead of the competition.

Systems are now available that incorporate Graphics Processing Units (GPUs) and coprocessors as alternatives to traditional multicore CPUs, and these can be leveraged to run today's most demanding seismic processing workflows.

The use of more powerful and scalable supercomputing resources opens the door to ever closer integration of the complete asset team, so that instead of dumping data over the wall to the next department in the chain, everyone works from the same model, optimising their performance to yield results in less time than ever.

Reservoir analysis

Yet this is only the beginning. The new capabilities available in supercomputing are set to have a huge impact on maximisation of extraction rates, by achieving proven, step-change improvements in reservoir analysis.

Unprecedented speeds can now be achieved, allowing reservoir engineers to study vastly more realisations of their models than was ever previously possible. It is now practical, for example, to run a 45-year production simulation in just two-and-a-half hours, given the right team and the right kind of computing power.

The difference in quality of analysis can be likened to that of watching a new ultra high definition TV after straining to see the image on a walnut-encased black-and-white model from the 1950s.

Boosting analytics

This new level of high performance computing power will also put the oil industry on an entirely new footing as it faces not just the increased volume of data from the massive growth in sensors, but also the growing requirement for data analytics to be conducted as part of the workflow.

In terms of volume, high performance data analytics can now collate data from thousands of well-heads, vehicles and pieces of machinery for analysis - much of it probably neglected and under-exploited as things stand.

The integration of platforms that can support a broad range of compute and data movement requirements will facilitate the execution of traditional high performance computing workflows alongside advanced techniques such as Spark or streaming analytics.

This could, for instance, enable a team to conduct pattern and correlation analysis in the middle of seismic processing workflows, reducing the number of iterations necessary, while immediately picking up outliers as part of the algorithmic results. Seismic wavelet analysis could be used to determine beneficial structures in the geology not apparent in the mathematical model.
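
To make the idea of pattern and outlier detection within an analytics workflow a little more concrete, the short Python sketch below uses Spark (one of the techniques mentioned above) to flag anomalous wellhead pressure readings. It is a minimal illustration only: the file name, column names and three-sigma threshold are assumptions for the example, not a description of any specific vendor workflow.

```python
# A minimal sketch, assuming wellhead sensor readings are available as a CSV
# file with columns well_id, timestamp and pressure. The input path, column
# names and three-sigma threshold are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wellhead-outliers").getOrCreate()

# Load the raw sensor readings.
readings = spark.read.csv("wellhead_readings.csv", header=True, inferSchema=True)

# Compute per-well mean and standard deviation of pressure.
stats = readings.groupBy("well_id").agg(
    F.mean("pressure").alias("mean_pressure"),
    F.stddev("pressure").alias("std_pressure"),
)

# Flag readings that lie more than three standard deviations from their
# well's mean -- candidate outliers for an engineer to review.
outliers = (
    readings.join(stats, on="well_id")
    .withColumn(
        "z_score",
        (F.col("pressure") - F.col("mean_pressure")) / F.col("std_pressure"),
    )
    .filter(F.abs(F.col("z_score")) > 3)
)

outliers.select("well_id", "timestamp", "pressure", "z_score").show()
```

In principle, the same logic could run as part of a streaming pipeline, scoring readings as they arrive rather than in a batch pass, which is closer to the in-workflow analysis described above.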

A new level of optimisation

In truth, high-performance analytics can now be placed in many of the areas traditionally occupied solely by mathematically modelled workflows. Analysing data as it comes in enables an organisation to determine immediately whether information is genuine and reliable, saving costly time and resources.

In the end, oil and gas companies must be able to find and produce hydrocarbons faster, more safely and more efficiently. Simply accelerating operations will not do. Growing global consciousness of sustainability is making energy efficiency critical, while increased competition for limited natural resources makes operational excellence essential. All this pressure makes knowledge more important than ever. The right information can help companies ensure safe field operations and streamlined business functionality, giving the edge required to get ahead of competitors.

Legacy supercomputing models are not equipped to support the changing needs of oil and gas companies, but solutions are already being developed that are well positioned to meet current and future demands.

Bert Beals is Global Head of Energy, Cray.
