Subsurface assessment: the critical difference between evidence and assumptions

21/11/2023

The foundation of any assessment rests on two elements: quality evidence, and a quality understanding of what that evidence reveals. The same is true of subsurface assessments, but the difficulty is that there is a limit to the hard evidence you can gather.

When you’re carrying out a subsurface assessment, you can’t walk around an oil or gas reservoir taking direct measurements wherever you want. It’s different from assessing engineered items like cars or production facilities, which are made of materials with well-understood properties and whose performance can be easily measured at any point to fine-tune predictions.

The only true hard data for an oil or gas reservoir is what you can gather in a wellbore from core, logs, samples and tests. In terms of scale, the data you can collect from a well is like a single lamp post in a village. A wellbore is typically of similar diameter to a lamp post, and measurements within a well can “see” only a short distance around it, much as a lamp post lights only its immediate surroundings. But that is still a tiny fraction of the total: you might understand what’s happening on that portion of the pavement, but one lamp post won’t tell you what the whole village looks like.

Lamp posts

Unlike lamp posts, appraisal wells are expensive and typically only a small number are available when making the initial field development decision. Even later in field life, wells are relatively far apart and the geology between them is uncertain. In order to get a picture of what the reservoir or village looks like, you’re going to have to make some indirect measurements and assumptions.
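
To put the lamp-post comparison into rough numbers, the short sketch below compares the area a logging tool can “see” around a single wellbore with the footprint of a modest field. Every figure in it is an illustrative assumption, not data from any particular asset.

```python
import math

# Illustrative assumptions only: a typical logging tool investigates
# roughly 0.1-1 m beyond the borehole wall; take 1 m as a generous case.
radius_of_investigation_m = 1.0
area_seen_per_well_m2 = math.pi * radius_of_investigation_m ** 2

# Assume a modest field footprint of 5 km x 3 km.
field_area_m2 = 5_000 * 3_000

wells = 3  # an assumed appraisal well count
fraction_sampled = wells * area_seen_per_well_m2 / field_area_m2

print(f"Area 'seen' per well: {area_seen_per_well_m2:.1f} m2")
print(f"Field area:           {field_area_m2:,} m2")
print(f"Fraction sampled by {wells} wells: {fraction_sampled:.1e}")
# ~6e-7 of the field area - the lamp post really does light
# only its own patch of pavement.
```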

The key to conducting a quality subsurface assessment is to be clear on the objective of the assessment and on the availability and quality of the evidence that can support it. An impressive million-cell 3D simulation model that looks good on TikTok is no better than a dull spreadsheet estimate using classical methods if the question is simple or the data are not available.
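
As an illustration of what a “dull spreadsheet estimate using classical methods” might look like, here is a minimal volumetric oil-in-place calculation. The input ranges are invented for the example; the point is that a handful of transparent numbers can be a better match for the available evidence than a million-cell model.

```python
# Minimal sketch of a classical volumetric estimate of stock tank oil
# initially in place (STOIIP). All input ranges are illustrative
# assumptions, not data from any real field.

RB_PER_M3 = 6.2898  # reservoir barrels per cubic metre


def stoiip_mmstb(grv_m3, ntg, phi, sw, bo):
    """STOIIP in millions of stock tank barrels from standard volumetric inputs."""
    stock_tank_m3 = grv_m3 * ntg * phi * (1.0 - sw) / bo
    return stock_tank_m3 * RB_PER_M3 / 1e6


cases = {
    #        GRV (m3), NTG,  phi,  Sw,   Bo (rb/stb)
    "low":  (150e6,    0.5,  0.18, 0.40, 1.3),
    "mid":  (250e6,    0.6,  0.22, 0.30, 1.3),
    "high": (400e6,    0.7,  0.25, 0.25, 1.3),
}

for name, inputs in cases.items():
    print(f"{name:>4}: {stoiip_mmstb(*inputs):6.1f} MMstb")
```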

When evaluating the subsurface, it is important to be clear on what is direct evidence (e.g. well data), which can be relied on; what is interpreted indirect evidence (e.g. seismic or production data), where some variability should be expected; and what are assumptions (e.g. depositional environment), which are used to guide models.

Expect variability, not consistency

When it comes to reservoir properties, variation is par for the course. Rock and fluid properties, the depositional environment and pressures routinely vary laterally and vertically across oil and gas fields.

Even if the properties you measure are the same in two well bores, that doesn’t necessarily mean you’re dealing with a uniform connected reservoir. If two lamp posts in a village are at roughly the same height, that could indicate a flat road connecting them – but they could equally be on either side of a hill or fault that you can’t see from the measurements.

If you assume the entire reservoir is consistent on the basis of a small sample of well data, a single seismic interpretation and only one view of the geology and petroleum system, you’re leaving yourself open to surprises when you begin production. Suddenly finding that parts of the field diverge considerably from your expectations is a hard, expensive lesson.

A good example is fluids data. With samples from only one or two wells, it is all too easy to believe that the oil or gas properties will be the same across the field.

The reality is that fluid properties often vary significantly across a field, both laterally and vertically. There are many examples where facilities were built with insufficient gas handling capacity, or ran into flow assurance problems, because the actual properties differed from expectations. If the variability you encounter is greater than the facilities design allowed for, the facilities are difficult and costly to alter once the field is under development.
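
As a rough, hypothetical illustration of why this matters: if the produced gas-oil ratio (GOR) comes in above the design basis, the associated gas can exceed the plant’s handling capacity and the oil rate has to be choked back. The numbers below are invented for the example.

```python
# Hypothetical sensitivity of oil production to the assumed gas-oil ratio.
# All numbers are illustrative assumptions, not a real facilities design.

design_oil_rate_stb_d = 40_000   # planned oil rate, stb/day
design_gor_scf_stb = 800         # GOR assumed when the plant was designed
gas_capacity_scf_d = design_oil_rate_stb_d * design_gor_scf_stb * 1.2
# i.e. a 20% design margin on associated gas handling

for actual_gor in (800, 1_000, 1_400):
    gas_at_full_rate = design_oil_rate_stb_d * actual_gor
    if gas_at_full_rate <= gas_capacity_scf_d:
        achievable_oil = design_oil_rate_stb_d
    else:
        # oil is choked back until the associated gas fits the plant
        achievable_oil = gas_capacity_scf_d / actual_gor
    print(f"GOR {actual_gor:>5} scf/stb -> oil rate {achievable_oil:,.0f} stb/d "
          f"({achievable_oil / design_oil_rate_stb_d:.0%} of plan)")
```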

The three types of subsurface evidence

When you’re conducting a subsurface assessment, there are three categories of information that will inform your findings: direct physical evidence from the ground, interpreted data from indirect measurements, and the assumptions you make. Making a quality assessment depends on understanding the differences between these types of evidence and on knowing when you’re dealing with known facts rather than expectations.

Direct evidence

Let’s begin with the gold standard. Direct evidence is the physical evidence that’s been gathered from your wellbore. It’s the single lamp post in your village that you can see, touch and evaluate reliably.

Whether it’s logs, core data, fluid samples, reservoir pressures and gradients, or flow test results, direct evidence gives you tangible proof of what’s going on at the wellbore. If the site is already producing oil or gas, then that production data is direct evidence as well.

The limitations of direct evidence need to be understood. Poor hole conditions can impact log quality, and multiple petrophysical interpretations can be made from the same well log measurements. Production data, too, has its limits: it can tell you about productivity, volumes and variations in energy influx away from the wellbore, but not about the geometry.

However, discarding direct evidence that does not fit the assumed reservoir model is a frequent mistake. There has to be a genuine reason before rejecting evidence as “measurement error” or a “calibration” problem. Just because the data is old does not mean it is wrong.

Interpreted indirect evidence

Direct data from individual well bores leaves huge gaps when it comes to the reservoir as a whole. This is where indirect sources such as seismic and production data provide critical evidence.

Seismic continues to advance and gives a wealth of data, including the time and depth of surfaces, amplitudes and property variations from different offsets. In combination with well data, this allows interpretations to be made of the structure and geology, and at times the rock and fluid properties too. It can be very convincing, particularly when displayed in 3D with lots of colours.

Whilst seismic provides plenty of figures, it’s vital to remember that it is not a direct, physical measurement of the reservoir. However much data you have to draw interpretations from, there will always be a high degree of uncertainty. Depth conversions can vary significantly away from the wellbore, while a bright amplitude response, which often indicates gas, can be deceptive.
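
To give a feel for how quickly that uncertainty adds up, a back-of-the-envelope depth conversion is sketched below: depth is roughly average velocity times one-way time, so a few percent of velocity uncertainty away from well control becomes tens of metres of depth uncertainty at reservoir level. The velocity and time are assumed purely for illustration.

```python
# Back-of-the-envelope depth conversion: depth = v_avg * TWT / 2.
# The velocity and travel time below are illustrative assumptions only.

twt_s = 2.0          # two-way time to the mapped surface, seconds
v_avg_ms = 2_800.0   # average velocity calibrated at the well, m/s

best_depth_m = v_avg_ms * twt_s / 2.0

for v_error in (-0.03, 0.0, 0.03):  # +/-3% average velocity uncertainty
    depth_m = v_avg_ms * (1.0 + v_error) * twt_s / 2.0
    print(f"velocity {v_error:+.0%}: depth {depth_m:,.0f} m "
          f"({depth_m - best_depth_m:+.0f} m versus the well tie)")
# +/-3% in average velocity is +/-84 m at this depth - easily enough to
# move a mapped contact or spill point a long way from where it was drawn.
```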

Production data is also subject to interpretation. While the relationship between the volumes produced or injected and the pressure response is direct evidence, it is not three-dimensional evidence. Production data can generally be matched using multiple interpretations of the geology and reservoir properties. And at times the fourth dimension needs to be considered, as reservoir properties change with time (e.g. chalk under compression).
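
A simple way to see that non-uniqueness: suppose a material balance on early production implies a particular connected pore volume. Several quite different combinations of area, thickness and porosity, and therefore quite different geological pictures, honour exactly the same number. The figures below are invented for illustration.

```python
# Non-uniqueness sketch: different reservoir geometries give the same
# connected pore volume, so all of them "match" the same material balance
# on the production data. Figures are illustrative assumptions.

target_pore_volume_m3 = 60e6  # connected pore volume implied by a
                              # hypothetical material balance

candidate_geometries = [
    # (description,                  area_km2, thickness_m, porosity)
    ("thin, widespread sheet sand",    20.0,      15.0,       0.20),
    ("thicker but smaller fan lobe",   10.0,      24.0,       0.25),
    ("narrow, very thick channel",      4.0,      83.3,       0.18),
]

for name, area_km2, thickness_m, phi in candidate_geometries:
    pore_volume_m3 = area_km2 * 1e6 * thickness_m * phi
    misfit = abs(pore_volume_m3 - target_pore_volume_m3) / target_pore_volume_m3
    verdict = "matches" if misfit < 0.01 else "misses"
    print(f"{name:<30} pore volume {pore_volume_m3 / 1e6:6.1f} Mm3 -> {verdict}")
# All three honour the same production history, yet they imply very
# different well counts, drainage areas and recovery factors.
```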

Assumptions

Assumptions or models are used to guide interpretations and to fill the gaps between the direct measurements and the indirect or non-spatial sources. Guesswork and assumptions aren’t the same thing: when you make an assumption about what’s going on in a reservoir, it will generally be based on a reasonable foundation.

If you’ve worked with a channel or deltaic system before, for example, you might know something about how rock properties are likely to vary in a particular direction and this could inform well placements and production forecasts. However, even an informed theory is still just a theory until the well bore data proves or disproves it.

The most common issue with assumptions is not allowing for the kind of variability that’s part and parcel of geology. Human psychology plays a part here – people want certainty over an outcome, and so it’s easier to lock into what you believe to be true about the geology rather than spend time evaluating alternatives you think are unlikely.

People also find it difficult to let go of an idea they’ve been working with for a while, and give it too much credence as a result. If you’ve invested a lot of time in making a perfect model, you don’t want that time to be wasted, even if the model itself is based on incorrect assumptions rather than hard direct evidence. Often it’s better to make simpler models that accept variability than to try to build an all-encompassing model that uses assumptions to fill in the gaps.

It’s better to interrogate unknowns than explain them away

When you don’t know exactly what’s going on across every square metre of a reservoir, it’s easy to fall into the trap of making assumptions to fill the gaps in your knowledge. But a question mark isn’t a problem by itself. For example, the best operators will readily admit that they don’t know what’s going on in an area, and will adjust the drilling sequence so that it can be evaluated early in the development and changes can be made if required.

When there’s pressure to deliver an assessment, teams don’t often sit back and interrogate what is supported by direct evidence and what they’ve based on assumptions. Smaller teams might not have the breadth of expertise needed to explore alternatives or even know what they don’t know.

Sometimes it takes an objective view to ensure an assessment stays grounded in truth and keep it from being led by overconfidence in interpretations and assumptions. Third party technical experts can help to guide a team through the alternative explanations and interpretations, drawing on their many experiences of encountering variability in the past.
