Accessing data is a longstanding problem for the oil and gas industry. It has long been a challenge of larger, deeper and more remote operations, but it now comes with the added complexity of collecting and interpreting a huge surge of real-time, digital information generated by multiple fields and plants.
Over recent years, a range of advanced tools has come to the market to help operators make sense of this so-called ‘Big Data’ – to understand how to bolster performance across thousands of wells and to monitor the condition of advanced equipment in real time. But today’s computing systems, with their technical limitations, are already struggling to manage the amount of information some operators must handle, sparking a search for smarter ways in which data could, and should, be analysed. Big Data solutions aim to support decision-making, allowing users to work more effectively by focusing on accurate information and how to use it when required.
Big Data is often characterised and quantified by reference to ‘the three Vs’ – volume, velocity, and variety – a description originally coined by Doug Laney, now research vice president at technology analyst firm Gartner.
In exploration drilling, for instance, a company will evaluate an area, drill a well, gather real-time data and feed it into its system to inform planning for the next well before drilling it. Companies may re-evaluate fields every week, and in many places at once, driving the volume of data ever upwards.
A collaborative response
As companies seek smarter ways to handle the influx of complex data, joint industry projects (JIPs) have begun to explore ways of saving time, money and energy through shared goals.
One such initiative is Optique, a four-year joint industry project between several world-leading academic institutions and industry partners. It exploits recent advances in semantic technologies, in which the meaning of data is explicitly represented as part of the data model. The aim is to develop a software platform that gives end users flexible, comprehensive and timely access to large and complex industrial data sets – petabytes of well data, for example – by making computers use the language users already understand and are used to.
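The core idea of semantic technologies can be illustrated in miniature: facts and the vocabulary that gives them meaning are stored together as (subject, predicate, object) triples, so a query can be phrased in domain terms rather than against a hidden database schema. The sketch below is a stdlib-only toy, not Optique’s actual platform, and every name and value in it (wellbores, pressures) is hypothetical.

```python
# Toy triple store: the data model describes its own terms, so "meaning"
# travels with the data. All names and values are invented for illustration.
triples = {
    # Ontology: the vocabulary is itself part of the model.
    ("Wellbore", "is_a", "Class"),
    ("has_pressure", "applies_to", "Wellbore"),
    # Data: instances annotated with their meaning.
    ("well_A", "is_a", "Wellbore"),
    ("well_A", "has_pressure", 312.5),
    ("well_B", "is_a", "Wellbore"),
    ("well_B", "has_pressure", 289.0),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# "Find every wellbore and its pressure", phrased in domain terms:
wells = [s for (s, _, _) in match(p="is_a", o="Wellbore")]
readings = {s: o for (s, _, o) in match(p="has_pressure") if s in wells}
print(sorted(readings.items()))
```

Real semantic platforms use standardised formats (RDF) and query languages (SPARQL) for the same pattern-matching idea, at vastly larger scale.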
University of Oslo (UiO) professor Arild Waaler, who coordinates Optique, initiated the project in 2010 and has received backing from Norwegian oil company Statoil, DNV GL, German engineering group Siemens, and fluid Operations, a German provider of innovative cloud and data management solutions. The EUR13.8 million (USD17.5m) programme launched in December 2012 with EUR9.7m of European Union funding. The Optique team expects its approach to reduce turnaround time for information requests from days to minutes, while also scaling to data sets whose size and complexity are beyond the reach of existing technologies.
The big picture
Waaler believes that most current Big Data solutions focus solely on volume: processing large amounts of data quickly. The Optique project adds another dimension to the three Vs: complexity.
“Traditional technologies are extremely good at volume, but compromise a lot on variety, velocity and complexity,” he said. “Optique is unique in focusing on all these dimensions simultaneously. It also addresses trustworthiness by showing where data came from and how it has changed, providing transparency for the end user.”
Take the variety aspect, for instance: “Statoil has hundreds of terabytes of stratigraphy and seismic interpretation data that need analysis across large and very complex databases. You cannot do this with only the methods developed for big volumes of data, but it is a main goal for Optique. We focus on variety, velocity and complexity, then consume as much data as we can without compromising too much on the other dimensions.” Optique aims to test and implement a long-term solution for data access: a tool that lets end users find data on their own, something they cannot do today.
Waaler explained: “Geologists and engineers know what they need, but the problem is posing a complex query to multiple databases. This is impossible without sending a request to IT experts, a scarce resource. End users must wait for these experts to create complex queries. This may take up to several weeks and considerably delays decision-making.”
Optique plans to provide tools to allow a user to query data without assistance from IT experts, and get the result in minutes, he said. “This will open up new exploratory and interactive ways of working as users get more relevant data sets in shorter time. We see Optique as the central tool for exploring information and returning timely, complete, and accurate results. Users can then focus fully on what they are trained in.”
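The translation step described above – from a query in the user’s own vocabulary to SQL against corporate databases – is the essence of ontology-based data access, the approach Optique builds on. The following is a deliberately simplified sketch: the mapping table, schema, and data are all invented, and a production system would of course handle far richer queries than this single join pattern.

```python
# Hypothetical sketch of ontology-based data access: IT experts write a
# mapping from domain terms to the database schema once; after that, end
# users query in domain terms and never see table or column names.
# Schema, mapping, and values below are invented for illustration.
import sqlite3

MAPPING = {
    "Wellbore": ("wb_master", "wb_id"),
    "pressure": ("wb_readings", "press_bar"),
}

def domain_query(concept, attribute):
    """Translate 'every <concept> with its <attribute>' into SQL."""
    c_table, c_key = MAPPING[concept]
    a_table, a_col = MAPPING[attribute]
    return (f"SELECT m.{c_key}, r.{a_col} "
            f"FROM {c_table} m JOIN {a_table} r ON m.{c_key} = r.{c_key}")

# Toy in-memory database standing in for a corporate data store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE wb_master (wb_id TEXT)")
db.execute("CREATE TABLE wb_readings (wb_id TEXT, press_bar REAL)")
db.executemany("INSERT INTO wb_master VALUES (?)", [("well_A",), ("well_B",)])
db.executemany("INSERT INTO wb_readings VALUES (?, ?)",
               [("well_A", 312.5), ("well_B", 289.0)])

# The end user asks in domain vocabulary; the SQL is generated for them.
sql = domain_query("Wellbore", "pressure")
print(sorted(db.execute(sql).fetchall()))
```

The design point is that the schema knowledge is captured once, in the mapping, rather than re-encoded by hand in every query – which is what turns a weeks-long request queue into an interactive tool.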
A challenge to industry
The Optique solution has been tried and tested in the laboratory. The next step is to implement it within the industry, and DNV GL has taken on the role of bridge builder between the theoretical and practical worlds. Waaler said the remaining challenges include speeding up back-end performance through massively parallel solutions, and developing tools that ease the installation and maintenance of the Optique platform.
In early 2015, the Optique team plans to present current results at a conference in Høvik, Norway. The aim is to recruit interested companies as partners to the project. The vision is that by 2020, Optique methods and technology will be incorporated into mainstream information management products delivered by trusted vendors.
“We will deliver a good concept, but this will not be something that can be delivered to the industry two years from now. I hope that by then we have something so impressive that the industry will want to continue to fund this project. I am optimistic,” Waaler said.
For further details about Optique, contact Tore Hartvigsen, DNV GL project manager: email@example.com
Laney, D.: ‘3D Data Management: Controlling Data Volume, Velocity, and Variety’, Meta Group (now Gartner), 2001.