Automated history matching to fracture geometries as measured by Volume to First Response (VFR): A Tutorial

Summary

A variety of factors can impact created fracture geometry, making model predictions difficult without calibration. Some factors impacting geometry (not an exhaustive list) are: net pressure ambiguity in horizontal well stimulations (see also McClure et al., 2020), effective rock toughness, stress barriers and lithological layering, and small-scale heterogeneity. Accordingly, we have always been advocates of technologies to measure fracture geometries. A couple of years ago, fiber optic strain data became more common and allowed for direct, in-situ observations of fracture geometries, growth rates, fracture counts, and more (Shahri et al., 2021). In the years since, fiber optic strain technology has progressed rapidly, with many options available (Barhaug et al., 2022).

Another new diagnostic we are seeing with increasing frequency is Sealed Wellbore Pressure Monitoring (SWPM) (Haustveit et al., 2020). An SWPM acquisition entails leaving a well offsetting the stimulation well uncompleted (sealed) during the fracturing of the subject well. As fractures cross the sealed well, they deform the casing, causing pressure signatures at the surface. These pressure spikes are correlated to the volume to first response (VFR) for each stage of the treating well (presuming the fractures reach the observation well). In this way, SWPM offers direct observations of where a fracture is at a moment in time, similar to fiber. Combining multiple monitoring wells then allows for three-dimensional constraints on fracture geometry.
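For readers who prefer to see the idea in code, here is a minimal, generic sketch of how a VFR could be extracted from treatment and sealed-well data. It is illustrative only: the pump rate, pressures, and response threshold are made up, and this is not how ResFrac or any SWPM provider computes VFR.

```python
import numpy as np

def volume_to_first_response(treat_time_min, treat_cum_volume_bbl,
                             monitor_time_min, monitor_pressure_psi,
                             response_threshold_psi=10.0):
    """Return the cumulative injected volume at the time the sealed monitor well's
    pressure first rises by response_threshold_psi above its starting value
    (None if no response is seen during the stage)."""
    dp = np.asarray(monitor_pressure_psi) - monitor_pressure_psi[0]
    responded = np.flatnonzero(dp >= response_threshold_psi)
    if responded.size == 0:
        return None  # the fracture never reached the monitor well during this stage
    t_response = monitor_time_min[responded[0]]
    return float(np.interp(t_response, treat_time_min, treat_cum_volume_bbl))

# Hypothetical single-stage example: ~80 bbl/min pump rate, and the monitor
# pressure steps up 50 minutes into the stage.
t = np.arange(0.0, 121.0, 1.0)                  # minutes
cum_bbl = 80.0 * t                              # cumulative barrels pumped
monitor_p = np.where(t < 50.0, 5000.0, 5050.0)  # psi at the sealed wellhead
print(volume_to_first_response(t, cum_bbl, t, monitor_p))  # -> 4000.0 bbl
```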

In this post, I will walk through a simple example of using SWPM to calibrate the fracture geometries of a hypothetical data set, leveraging the ResFrac Automated History Matching functionality to expedite the workflow. In a follow-on post, I will use the model to demonstrate some intuitions on fracture geometry using the Sensitivity Analysis functionality, as well as some nuances of VFR calibration.

Finally, for examples of field cases using SWPM and ResFrac, see Ratcliff et al. (2022) and an upcoming paper at HFTC 2023 (“Sealed Wellbore Pressure Monitoring (“SWPM”) & Calibrated Fracture Modeling: The Next Step In Unconventional Completions Optimization”).

 

Base simulation

The base simulation is a three-well model in a generic tight-oil reservoir. Well_one and Well_two are landed in Pay 1, and Well_three is landed in Pay 2.

Figure 1. Well layout and stress profile.

 

I set up five stages to stimulate in Well_one. The stage length is 250 feet for all stages. The first three stages have six clusters, the fourth stage has nine clusters, and the fifth stage has 12 clusters. Finally, the pump schedule is identical for all stages, so the same amount of fluid and proppant is pumped into each stage.

Figure 2. Stage design and pump schedule for the five stages in Well_one.

 

Knowing that I will be using VFR as a history matching objective, I need to tell the simulator to output VFR for each well pair that I am interested in, under the advanced section of Output options:

Figure 3. VFR trackers in simulation setup.

 

Automated History Matching workflow and results

 

Premise

Figure 1 shows the relative position of the three wells. In our case, Well_three serves as the observation well and is used to test whether fractures propagate from Pay 1 into Pay 2. In the SWPM acquisition, the average VFR from Well_one to Well_three is 4000 bbls for the six-cluster stages, indicating that the barrier shown in Figure 1 is not competent.

We can create a history matching workflow to calibrate to this data. Just as with a large modeling project, I should follow the modeling workflow described in the ResFrac A to Z Guide, namely:

  1. Create a base case
  2. List key observations and hypotheses
  3. History match to those observations

My base case simulation is shown in Figure 4. We immediately note that no fractures from Well_one are propagating into Pay 2.

Figure 4. Image of base case simulation.

 

My key observation to match is that the VFR for the Well_one to Well_three well pair is 4000 bbls. I have also been told that the stress barriers in Figure 1, in the middle of Pay 1 and at the top of Pay 2, are uncertain. I am also uncertain of the degree of lamination in the rock, so I will treat my fracture toughness as uncertain. Thus, my hypothesis is that some combination of toughness (both absolute magnitude and the ratio of vertical to horizontal) and the magnitude of the two stress barriers controls the height of my fracture and will dictate the connection to Pay 2.

At this point, I could launch into a history matching workflow with those four parameters (horizontal toughness, vertical toughness, Pay 1 barrier, and Pay 2 barrier). However, we always recommend “bracketing the solution”, and that applies equally to automated and manual workflows. So prior to running the automated history matching workflow, I set up a permutation of my base case where I set both horizontal and vertical toughness to 1000 psi-in^(½) and remove the stress barriers. Figure 5 shows the resulting fracture geometries and a VFR of 3050 to 3450 bbls for Well_one to Well_three.

Figure 5. Simulation showing treating pressure (orange), cumulative water injection (blue), Well_one to Well_two VFR (green), and Well_one to Well_three VFR (purple). Well_one to Well_three VFR is lower than the target (too quick), adequately “bracketing the solution”.

 

Because the VFR in Figure 5 is smaller than my objective (while the base case produced no connection to Pay 2 at all), I have bracketed the solution and can logically surmise that the solution lies between my extremes. If I were only varying one parameter, the ResFrac Sensitivity Analysis function would likely be the most efficient approach. However, when varying multiple parameters to get a match, the Automated History Matching tool is the best tool for the job.

 

Using the Automated History Matching tool

From my base case simulation, I can click “Set Up History Matching” from the simulation menu, as shown in Figure 6. 

Figure 6. Creating a history matching workflow from the simulation menu.

 

The next step is to tell the history matching algorithm which parameters I want to vary. In the section above, I hypothesized that the magnitude of the two stress barriers and the toughness (both absolute magnitude and the ratio of vertical to horizontal) are controlling the height of my fracture. The fastest way to find the two barrier layers is to use the property preview in the Static model tab and use your mouse to identify the depths:

Figure 7. Identifying depths of property values.

 

Going back to the Static model tab, I select two depth cells as my first parameter group as shown in Figure 8. I then select the stress gradient at a depth of 8614.5 feet as my second parameter.

Figure 8. Adding two cells to the history match parameter list.

 

To add an entire column as a history match parameter, select the column header (which selects the entire column), then click add parameter as shown in Figure 9.

Figure 9. Adding an entire column to the history match parameter list.

 

Navigating to the Decision support tab, we see that ResFrac has automatically created three parameter groups. However, we want to vary the two stress barriers independently, so we will create a fourth parameter group to provide this additional dimension as in Figure 10.

Figure 10. Adding a fourth parameter group.

 

In the next table, Decision Support Parameters, we assign ranges for each parameter within which the history matching algorithm will search. Figure 11 displays the parameter ranges I chose.

Figure 11. Decision support parameters.

 

Here is the reasoning behind each:

  • Pay 1 FG and Pay 2 FG: My hypothesis is that the fracture gradient is too high for fractures to propagate downwards, so I only want to explore fracture gradients lower than my base case. Secondly, I find it easier to use the “linear adder” function when thinking of the fracture gradient. With a left end of -0.1 and a right end of 0, the algorithm will search for fracture gradients between 0.1 less than the base value and the base value itself (see the sketch after this list).
  • K1c vertical: My default K1c vertical value was 2000 psi-in^(½), so to explore a range from 1000 to 3000, I chose a linear multiplier with a left end of 0.5 and a right end of 1.5 (0.5*2000 = 1000 and 1.5*2000 = 3000).
  • K1c horizontal: I want to explore a similar range for the horizontal toughness, but my initial value is 1500 psi-in^(½), so I make the left end multiplier 0.667 and the right end 2 (0.667*1500 ≈ 1000 and 2*1500 = 3000).
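As a quick sanity check on those range definitions, here is a minimal sketch (generic Python, not ResFrac code) of how the adder and multiplier endpoints map to physical search bounds; the 0.75 psi/ft base fracture gradient is a made-up illustration.

```python
# Minimal sketch: mapping "linear adder" and "linear multiplier" range
# definitions to the physical bounds the algorithm searches.

def adder_range(base_value, left_end, right_end):
    """Linear adder: the search spans base + left_end to base + right_end."""
    return base_value + left_end, base_value + right_end

def multiplier_range(base_value, left_end, right_end):
    """Linear multiplier: the search spans base * left_end to base * right_end."""
    return base_value * left_end, base_value * right_end

# Pay 2 FG: adder of -0.1 to 0 (0.75 psi/ft base value is hypothetical)
print(adder_range(0.75, -0.1, 0.0))          # roughly (0.65, 0.75) psi/ft

# K1c vertical: base 2000 psi-in^0.5, multiplier 0.5 to 1.5
print(multiplier_range(2000.0, 0.5, 1.5))    # (1000.0, 3000.0)

# K1c horizontal: base 1500 psi-in^0.5, multiplier 0.667 to 2.0
print(multiplier_range(1500.0, 0.667, 2.0))  # roughly (1000.5, 3000.0)
```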

The final input is to create the objective(s) for the history matching workflow to evaluate. Figure 12 shows my inputs, with red letters marking the important inputs.

Figure 12. History matching objectives.

 

I chose to specify a single history matching objective, though if I had additional constraints, I could specify additional objectives and assign weights to each.

    1.   Give the objective a name (particularly helpful for keeping each straight if specifying multiple)
    2.   Select the simulation output data that I want to compare my objective to
    3.   Choose how to evaluate the objective (if I were evaluating a time series like oil production rate, I would choose misfit); a conceptual sketch of both scoring modes follows this list
    4.   Because I chose to evaluate the objective at a point in time, ResFrac defaults to evaluating at the end of the simulation. I uncheck this box so that I can tell the simulator the time at which to evaluate the data selected in (B).
    5.   I chose 4.5 hours to correspond to the end of the second stage, which is a reasonable time to evaluate the VFR. I could have equally chosen the end of stage three. I avoided the first stage because I wanted to make sure stress shadowing was accounted for.
    6.   The objective value to evaluate against. In this case, I want my VFR for the six-cluster stage to be 4000 barrels, so I have entered 4000. The simulator knows the units already from the selection in (B).
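For intuition only, the sketch below illustrates the two scoring modes described in items 3 through 5: a point-in-time comparison that interpolates the simulated output at the chosen time and compares it to the target value, and a misfit that sums error over an entire time series. The VFR values are hypothetical, and this is not ResFrac's internal objective implementation.

```python
import numpy as np

def point_in_time_error(sim_time_hr, sim_value, eval_time_hr, target):
    """Interpolate the simulated series at eval_time_hr and return |simulated - target|."""
    return abs(float(np.interp(eval_time_hr, sim_time_hr, sim_value)) - target)

def series_misfit(sim_time, sim_value, obs_time, obs_value):
    """Sum of squared errors between the simulated series and an observed series,
    evaluated at the observation times (one generic form of a misfit)."""
    sim_on_obs = np.interp(obs_time, sim_time, sim_value)
    return float(np.sum((sim_on_obs - np.asarray(obs_value)) ** 2))

# Hypothetical Well_one-to-Well_three VFR output (bbl) versus simulation time (hours)
t_hr = np.array([0.0, 2.0, 4.0, 4.5, 6.0])
vfr_bbl = np.array([0.0, 0.0, 3600.0, 3800.0, 3900.0])

# The objective in Figure 12: compare the VFR at 4.5 hours against a 4000 bbl target
print(point_in_time_error(t_hr, vfr_bbl, eval_time_hr=4.5, target=4000.0))  # -> 200.0
```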

With the Automated History Match settings complete, I can now run the workflow.

 

Automated History Match results

When the automated history match completes, each individual case will be listed in order of match to the objective function (with the best match listed on top) as shown in Figure 13.

Figure 13. Automated History Matching results.

 

There are a variety of ways to start your analysis. Clicking “Postprocessing” will bring up a menu with powerful analysis plots.

Starting with the scatter plots, we can look at our target function (Stage 2 VFR) versus our history matching parameters.

Figure 14. Postprocessing analysis scatter plots. Dashed trend lines added for emphasis.

 

The results make intuitive sense:

      • Stronger Pay 1 FGs yield lower VFRs as the fracture is forced to grow downwards
      • Weaker Pay 2 FGs yield lower VFRs as the weaker stress barrier allows more downward growth
      • Lower vertical toughness allows for faster vertical fracture growth and lower VFRs
      • Higher horizontal toughness restricts lateral fracture growth, incentivizing more downward growth and lower VFRs

Further, we observe that the trend with the FGs is steeper than with toughness, so we can conclude that the FG has a stronger effect on the VFR than the toughness does.
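Because the parameters have different units and ranges, a fair way to compare “steepness” is to normalize each parameter to its search range before fitting a trend line. The sketch below shows one generic way to do this; the case values are fabricated for illustration, and this is not a ResFrac feature.

```python
import numpy as np

def normalized_slope(param_values, objective_values, param_min, param_max):
    """Fit objective vs. parameter normalized to [0, 1]; the absolute slope is a
    rough, units-free measure of how strongly the parameter moves the objective."""
    x = (np.asarray(param_values, dtype=float) - param_min) / (param_max - param_min)
    slope, _intercept = np.polyfit(x, np.asarray(objective_values, dtype=float), 1)
    return abs(slope)

# Fabricated values standing in for a handful of history match cases
pay2_fg   = [0.66, 0.68, 0.70, 0.72, 0.74]             # psi/ft
k1c_vert  = [1800.0, 1200.0, 2600.0, 1500.0, 2400.0]   # psi-in^0.5
stage2_vfr = [3500.0, 3800.0, 4100.0, 4600.0, 5200.0]  # bbl

print(normalized_slope(pay2_fg, stage2_vfr, 0.65, 0.75))       # steep FG trend
print(normalized_slope(k1c_vert, stage2_vfr, 1000.0, 3000.0))  # shallower toughness trend
```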

Understanding the impact of each parameter, we can also use a heatmap plot to inspect for non-uniqueness. Figure 15 shows a heatmap of the third iteration (the last generation, and the one most “honed in” on the target function). I chose to plot Pay 1 FG, vertical toughness, and horizontal toughness versus the Pay 2 FG (the parameter most strongly correlated with the VFR).

Figure 15. Heatmap of parameter values for the third history matching iteration.

 

Several things stand out upon inspection of the heatmap:

      • All cases have low Pay 2 FG values (as we expected from the scatter plot)
      • Pay 1 FG shows a clear concentration of higher values, indicating that high Pay 1 FG is required (also expected from scatter plot)
      • Both vertical and horizontal toughness show diffuse distributions (no “hot spots”), indicating that results are less sensitive to either toughness parameter.

Exiting the postprocessing menu (and saving so I can reopen it!), I next batch generate a line plot of each simulation, which allows me to quickly scroll through and visually inspect each case. Figure 16 is an example of the plot template I used for the batch generation, shown for the case I visually chose as my best match (2_refined_005).

Figure 16. Example line plot used for the batch generation of all history match cases.

 

Scrolling through all cases, I qualitatively evaluate the direction of the simulation error: in this case, whether the VFR is too high or too low. Figure 17 shows a quick sorting of several cases.

Figure 17. Categorizing simulation cases based on visual inspection.

 

Navigating to the raw results of the workflow, as shown in Figure 18, I can open the pointsummary.csv file to quickly get the history match parameter values for each simulation that I noted in Figure 17.

Figure 18. Raw results folder for the workflow and pointsummary.csv file.

 

Taking the average of the interpolator values for each group and mapping those back to physical values, I can see how the physical simulation parameters relate to my groupings.
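As a rough illustration of that bookkeeping, the sketch below averages interpolator values per category and maps them back to physical values. The column names, case names, base values, and the assumption that interpolator values run from 0 to 1 across each parameter's search range are illustrative; the actual pointsummary.csv schema may differ.

```python
import pandas as pd

# Hypothetical stand-in for the pointsummary.csv schema; real column names may differ.
df = pd.read_csv("pointsummary.csv")

# Manually assigned categories from the visual inspection in Figure 17 (examples only);
# cases not listed here are dropped from the averages.
categories = {
    "2_refined_005": "good match",
    "3_refined_002": "good match",
    "2_refined_001": "VFR too low",
    "1_refined_004": "VFR too high",
}
df["category"] = df["case_name"].map(categories)

# Average interpolator value per category (assumed normalized 0-1 within each range).
interp_cols = ["pay1_fg", "pay2_fg", "k1c_vertical", "k1c_horizontal"]
avg_interp = df.groupby("category")[interp_cols].mean()

def adder_to_physical(base, left, right, interp):
    """Map an interpolator value back to a physical value for a linear adder range."""
    return base + left + interp * (right - left)

def multiplier_to_physical(base, left, right, interp):
    """Map an interpolator value back to a physical value for a linear multiplier range."""
    return base * (left + interp * (right - left))

# Example: Pay 2 FG used a linear adder of -0.1 to 0 around a hypothetical 0.75 psi/ft base.
print(adder_to_physical(0.75, -0.1, 0.0, avg_interp["pay2_fg"]))
```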

Figure 19. Average interpolator and simulation values for the three groupings of cases.

 

Inspecting Figure 19 helps me understand the system I am modeling:

      • The stress gradient at the top of Pay 2 has to be below 0.72 psi/ft in order for fractures to break through into the lower pay zone.
      • The cases that are “good matches” exhibit roughly equal vertical and horizontal toughness values.
      • In the cases where the VFR is too low and fractures grow downward too quickly, the vertical toughness is nearly half the horizontal toughness. It is also worth noting that we rarely see cases with horizontal toughness greater than vertical toughness, so not only do the VFR values not match our observation, the ratio of vertical to horizontal toughness (~0.55) also appears unrealistic.
      • In cases where no VFR is recorded between Well_one and Well_three (the “too high” category), the Pay 2 stress barrier is higher (0.72 psi/ft) and the Pay 1 stress barrier is lower, likely resulting in preferential upward growth. Additionally, we see that the vertical toughness is ~15% greater than the horizontal, which acts to further constrain the fractures within Pay 1.

Returning to Figure 14, we can use the category groupings to identify where the plausible solutions lie within our data ranges. Figure 20 below emphasizes these ranges.

Figure 20. Based on the category analysis, plausible values are highlighted.

 

Finally, to move forward with an analysis, I must pick either an individual simulation or a population of simulations with which to do my forecasting/forward analysis (or, in the context of a larger project, likely a production history match). A few paragraphs above, I mentioned that I had chosen 2_refined_005 as my “best” case. However, 2_refined_005 is ranked 5th by the algorithm. Why did I choose the fifth-ranked case instead of the first? When setting up the history match workflow, I only used the Stage 2 VFR as my objective, but in reality, I would like all three of my six-cluster stages (the first three stages) to be as close to 4000 barrels as possible. Figure 21 shows the results for the top-ranked case, 3_refined_002, where the VFR for Stage 2 matches 4000 more closely (hence the higher rank), but there is more spread in the VFRs for Stage 1 and Stage 3.

Figure 21. Screenshot of 3_refined_002 results.

 

When creating the history matching workflow, I could also have listed the Stage 1 and Stage 3 VFRs as additional objectives, and the cases would have been ranked by minimizing the error across all three objectives. However, as you can see, I arrived at a similarly satisfactory result with 2_refined_005 by specifying only the single Stage 2 objective.
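For intuition, a combined multi-stage score of the kind described above might look like the sketch below. The per-stage VFR values and equal weighting are illustrative assumptions, not ResFrac's ranking formula.

```python
def weighted_vfr_error(stage_vfrs_bbl, target_bbl=4000.0, weights=None):
    """Combine per-stage VFR errors into a single score; lower is better."""
    if weights is None:
        weights = [1.0] * len(stage_vfrs_bbl)
    return sum(w * abs(vfr - target_bbl) for w, vfr in zip(weights, stage_vfrs_bbl))

# Hypothetical Stage 1-3 VFRs illustrating the trade-off discussed above.
print(weighted_vfr_error([4300.0, 4000.0, 3500.0]))  # 3_refined_002-like: 800.0
print(weighted_vfr_error([4100.0, 4100.0, 3950.0]))  # 2_refined_005-like: 250.0
```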

 

Summary

In this post, I demonstrated how to quickly calibrate simulation models to VFR data acquired in the field, and how to dissect the results to understand the impact of fracture gradient, horizontal toughness, and vertical toughness.

If you are a ResFrac user and interested in inspecting the simulation cases yourself, send me an email at [email protected] and I will send you a link to download the full workflow.

 

References

Barhaug, Jessica, Jacqueline Bussey, Ben Schaeffer, Jule Shemeta, Mathew Lawrence, John Tran, and Price Stark. “Testing XLE For Cost Savings in the DJ Basin: A Fiber Optic Case Study.” SPE-209155-MS. 2022.

Haustveit, Kyle, Brendan Elliott, Jackson Haffener, Chris Ketter, Josh O’Brien, Mouin Almasoodi, Sheldon Moos, Trevor Klaassen, Kyle Dahlgren, Trevor Ingle, Jon Roberts, Eric Gerding, Jarret Borell, Sundeep Sharma, and Wolfgang Deeg. “Monitoring the Pulse of a Well Through Sealed Wellbore Pressure Monitoring, a Breakthrough Diagnostic With a Multi-Basin Case Study.” Paper presented at the SPE Hydraulic Fracturing Technology Conference and Exhibition, The Woodlands, Texas, USA, February 2020. doi: https://doi.org/10.2118/199731-MS

McClure, Mark, Matteo Picone, Garrett Fowler, Dave Ratcliff, Charles Kang, Soma Medam, and Joe Frantz. “Nuances and Frequently Asked Questions in Field-Scale Hydraulic Fracture Modeling.” SPE-199726-MS. 2020.

Pudugramam, Sriram, Rohan J. Irvin, Mark McClure, Garrett Fowler, Fadila Bessa, Yu Zhao, Jichao Han, Han Li, Arjun Kohli, and Mark D. Zoback. “Optimizing Well Spacing and Completion Design Using Simulation Models Calibrated to the Hydraulic Fracture Test Site 2 (HFTS-2) Dataset.” Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, Houston, Texas, USA, June 2022. doi: https://doi.org/10.15530/urtec-2022-3723620

Ratcliff, Dave, Mark McClure, Garrett Fowler, Brendan Elliot, and Austin Qualls. “Modeling of Parent Child Well Interactions.” 2022.

Shahri, Mojtaba, Andrew Tucker, Craig Rice, Zach Lathrop, Dave Ratcliff, Mark McClure, and Garrett Fowler. “High Fidelity Fibre-Optic Observations and Resultant Fracture Modeling in Support of Planarity.” SPE-204172-MS. 2021.

 
