Modeling tools like ResFrac are all about doing ‘what if’ analyses on the computer instead of in the field. A natural use case for ResFrac is to run batches of related simulations whose inputs differ from each other in systematic ways, with the goal of understanding the effects of those differences. For example, a user might start with a simulation and vary geological parameters with a higher degree of uncertainty, such as permeability, while trying to match field data. Or a user might vary design parameters, such as perforation cluster spacing, while trying out ideas to increase production. We built ResFrac’s sensitivity analysis tools with these applications in mind. The tools help users create and run batches of simulations that vary systematically, and then help them interpret the results from those batches of simulations.
In this blog post, I will explain the modeling approach with which we designed the sensitivity analysis tools, and then walk through an example sensitivity analysis study. This blog post follows closely after the office hour that I presented last month, which you can watch here.
Representing a single modeling idea, such as changing the cluster spacing, often entails creating ResFrac simulations that differ from each other for several simulation input parameters. In other words, to test one idea, a simulation often needs to be changed in more than one place. Additionally, it is natural to consider a few different modeling ideas at once, each of which is expressed by varying several parameters, and so the user might need to vary dozens or more parameters at a time.
ResFrac’s sensitivity analysis tools address these needs by introducing a new type of workflow in the ResFracPro user interface specifically designed for sensitivity analysis. Starting from an existing manually created simulation, the user selects a set of parameters to vary, and specifies how to vary them to create a collection of related simulations. With a single click these simulations can be sent to the ResFrac cloud service to run concurrently, and have results automatically download to local storage. The ResFracPro UI also includes new plotting features that help to interpret the results of the batch of simulations, enabling the user to visualize patterns in results across dozens of simulations.
In mathematical terms, the user wants to vary inputs in a low-dimensional input space (a few modeling ideas) and have that variation be reflected in the high-dimensional simulation parameter space (systematically changing many parameters embodied in the simulation input and settings files). To enable this, ResFrac’s sensitivity analysis tools are built around dimension reduction of the input space via the concept of ‘parameter groups.’ One parameter group, corresponding to a single modeling idea, represents a collection of simulation input parameters that vary together.
ResFrac simulations live in a high-dimensional space (thousands of dimensions), represented as X, where X is the set of all possible simulations. A single simulation x ∈ X is a point in that space. The sensitivity analysis dimension reduction procedure consists of a mapping T: Z → X, where Z is a low-dimensional space (typically dim(Z) ≈ 1 to 10). The mapping T is defined such that for every z ∈ Z there is a corresponding unique point x = T(z). In other words, for each point in the low-dimensional parameter space Z there is a corresponding unique simulation input and settings file. Note that the transformation function T is in general not invertible.
The user specifies T and Z by populating the parameter groups and parameters tables in the ResFracPro user interface. The number of parameter groups defines the dimension of Z, with each component of z representing the position of the point along the dimension of the corresponding parameter group. Each component of z takes on a scalar value between -1 and +1, with -1 indicating the “left” (typically, minimum) end along that dimension, +1 indicating the “right” (typically, maximum) end, and 0 indicating the “center.” Collectively, the values in the parameter groups and parameters tables specify the individual transformation functions that make up T.
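To make the mapping T concrete, here is a minimal sketch in Python of how a point z could be expanded into a full set of simulation parameter values. This is an illustration only, not ResFrac’s actual implementation: the function and parameter names are hypothetical, and it assumes the simplest case of piecewise-linear interpolation between each parameter’s left, center, and right values.

```python
def interpolate_parameter(z, left, center, right):
    """Map an interpolator value z in [-1, +1] to a parameter value.

    Piecewise-linear (an assumed, simplest-case transformation):
    z = -1 -> left, z = 0 -> center, z = +1 -> right.
    """
    if z < 0:
        return center + (center - left) * z   # z in [-1, 0)
    return center + (right - center) * z      # z in [0, +1]


def apply_point(z_vector, groups):
    """Build a flat dict of simulation inputs for one point z.

    `groups` maps each parameter group (one component of z) to the
    parameters it controls, each with its own (left, center, right)
    range. All parameters in a group move in unison with that group's
    interpolator value.
    """
    sim = {}
    for z, params in zip(z_vector, groups.values()):
        for name, (left, center, right) in params.items():
            sim[name] = interpolate_parameter(z, left, center, right)
    return sim
```

For example, with a “permeability” group and a “toughness” group (parameter names hypothetical), `apply_point([0.0, 1.0], groups)` would hold every permeability-related parameter at its center value while pushing every toughness-related parameter to its right end.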
Start-to-finish example of a sensitivity analysis study
In this section, we will walk through an example of a sensitivity analysis workflow. The main steps are as follows:
- Set up the sensitivity analysis workflow
- Create a sensitivity analysis workflow starting from a base simulation
- Select parameters to include in the study
- Specify the ranges of the parameters
- Specify how the parameters will be grouped together in parameter groups
- Specify the sampling schemes that will be used to generate simulations in the study
- Run the workflow
- Optional: Pick specific simulations for which to download full results
- Automatically download key results for all simulations
- Visualize the study results
- Define target functions (which are summary output values of interest)
- Define and show plots
(1) A sensitivity analysis workflow is a new type of workflow that is a peer to the “sandbox” workflow that is used for manually created simulations. Like a sandbox workflow, a sensitivity analysis workflow at its core consists of a group of simulations. Unlike a sandbox workflow, in a sensitivity analysis workflow, you don’t manually create simulations one at a time; instead, simulations are created automatically based on what we specify when setting up the sensitivity analysis.
(1.a.) In the sensitivity analysis tools, we start with a “base simulation” as a starting point for our sensitivity analysis study. Any simulation can be used as a base simulation for a new sensitivity analysis workflow by choosing the “Create Sensitivity Analysis” menu item in a simulation table.
This will create a new sensitivity analysis workflow using that simulation as the base simulation. On the workflow overview screen for our newly created workflow, we click on “Edit in Builder” to set up what simulations will be run in the sensitivity analysis.
(1.b.) The simulation builder opens in a new mode that lets us select a subset of input parameters of the base simulation to vary in the sensitivity analysis. Any physically meaningful numerical value, including values in tables, can be selected as a parameter to include in the study.
Parameters that are included in the study show up highlighted in the builder.
All parameters not selected to be part of the study (i.e., the ones that remain unhighlighted in the builder) will use the value in the base simulation.
Selected parameters are displayed in the “Decision support parameters” summary table on a new “Decision Support” panel in the builder. The inputs on this panel define the details of the sensitivity study.
(1.c. and 1.d.) In the parameters table, we specify the type of transformation to apply to each parameter, the ranges of those transformations, (optionally) the center value for each parameter, and the “parameter group” to associate each parameter with.
A parameter group is a way of collecting several parameters together so that they all move in unison. Typically, we would use a parameter group to represent a single modeling idea, and we might typically have one to ten parameter groups in a study.
(1.e.) The next thing to do after filling out the parameters table is to decide what simulations to generate in the study by filling out the “Sampling schemes” input. This input offers a few choices for what simulations to run, including one-at-a-time sampling, random sampling, and user-defined (fully customizable) sampling. We can choose to apply more than one sampling scheme, in which case the software will run the union of the points generated by all the schemes.
Sampling schemes generate simulations in the following manner. First, each sampling scheme generates a set of points whose dimension equals the number of parameter groups, with entries between -1 and +1. Each entry of a point is a parameter group interpolator value, where -1 is the minimum (left), zero is the center, and +1 is the maximum (right). Then, to generate a simulation, each parameter associated with a parameter group is set to the same relative position within its own range. For example, if the parameter group interpolator value is -1 (minimum or left), all the associated parameters will be at their minimum (left) values.
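As an illustration of the first step, here is a sketch of what a one-at-a-time sampling scheme could produce: the center point, plus points that move one parameter group at a time while holding the others at zero. The function name and the particular set of perturbation levels are assumptions for the example, not ResFrac’s actual defaults.

```python
def one_at_a_time(n_groups, levels=(-1.0, -0.5, 0.5, 1.0)):
    """Generate one-at-a-time sample points in [-1, +1]^n_groups.

    Returns the center point plus, for each parameter group, one
    point per perturbation level that moves only that group.
    """
    points = [[0.0] * n_groups]          # the center point
    for i in range(n_groups):
        for v in levels:
            p = [0.0] * n_groups
            p[i] = v                      # perturb one group only
            points.append(p)
    return points
```

With three parameter groups and four levels this yields 1 + 3 × 4 = 13 points, each corresponding to one row in the “List of simulations to run” table.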
The software shows a preview of all the points that will be run in the “List of simulations to run” table. This displays the parameter group interpolator values for each simulation that will be run in the study, with each row corresponding to one point (i.e., one simulation).
(2) Once we are done setting up the sensitivity study, we click “Run” on the workflow overview page. This will start a job on the ResFrac cloud service for the study, which will generate and run all the child simulations that we specified in the builder. The simulations run for the study will appear in the list of simulations on the workflow overview page, and we can monitor their progress in the same way as for manually created simulations. The workflow also has a comments file that summarizes what is happening in the overall workflow. (Please note: in the remainder of the blog post, I’ve switched to a different sensitivity analysis study than the one in the above “set up” figures, to include a larger number of parameters and a larger sample set.)
(2.a.) By default, sensitivity analysis simulations only download a subset of results automatically. In a sensitivity study, it’s easy to run a large number of simulations, and we don’t want to clog up our hard drive by downloading the full results by default. We can instruct the software to download full results for any simulation by using the “Download All Results” menu item on the simulation list. Any simulations that we choose this for will work just like simulations that we ran manually (i.e., we will have full 3D results viewable in the visualization tool for these simulations).
(2.b.) Without downloading all results for any individual simulations, the automatically downloaded results subset includes the sim_track_xxx.csv file that the visualization tool uses for making line plots. We can use the “Multiplot” tool to make line plots of the results from all the simulations. Beyond that, we can use the newly added postprocessing tool to visualize summary results of the sensitivity study. This tool opens in a separate window upon clicking the “Postprocessing” button on the workflow overview page.
(3) The purpose of the postprocessing tool is to help visualize the overall trends displayed by the simulations in the study by plotting outputs of interest against the changes in simulation input values.
(3.a.) The first step in using the postprocessing tool is to define one or more “Target functions” of interest. A target function is a summary value computed from one column of the simulation results sim_track_xxx.csv file. The ‘sim_track’ file contains time-series data for a variety of simulation outputs. For example, in a study on what factors affect oil production, we might define cumulative oil production at five years as a target function. Or we might define average water cut over the first six months of production as a target function because we want to figure out how to match that aspect of field data with a ResFrac simulation. It is natural to define more than one target function because frequently more than one quantity is of interest.
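Conceptually, a target function like “cumulative oil production at five years” is just a time-series column evaluated at a point in time. A minimal sketch of that evaluation, assuming simple linear interpolation between the output timesteps (the function name is hypothetical, not part of the ResFrac interface):

```python
def target_at_time(times, values, t_target):
    """Evaluate a sim_track-style time series at t_target by linear
    interpolation between the two bracketing output timesteps.

    `times` and `values` are parallel lists, e.g. a time column and a
    cumulative-oil-production column read from the results CSV.
    """
    for (t0, v0), (t1, v1) in zip(zip(times, values),
                                  zip(times[1:], values[1:])):
        if t0 <= t_target <= t1:
            frac = (t_target - t0) / (t1 - t0)
            return v0 + frac * (v1 - v0)
    raise ValueError("t_target outside the simulated time range")
```

An average-over-a-window target function (such as average water cut over the first six months) would follow the same pattern, integrating the column over the window instead of sampling it at one time.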
(3.b.) After defining target functions, we can scroll to the “Plots” section of the postprocessing tool to define and show visualizations. Presently, the user interface provides two sensitivity analysis-specific plot types: scatterplot matrix and spider plot. A scatterplot matrix creates a grid of scatterplots with target functions on the vertical axis and parameter groups on the horizontal axis and includes all simulation points that have results. A spider plot, intended to be used along with one-at-a-time sampling, plots target function values on the vertical axis and parameter group values on the horizontal axis, for points that have at most one non-zero parameter group interpolator value (e.g., in a study with three parameter groups, the point [0, 0, 0.5] would be included in the spider plot because only one of the parameter group interpolators has non-zero value, while the point [0, 0.5, 0.5] would not be included because more than one interpolator has non-zero value). For both these plot types, we can choose which target functions and which parameter group interpolator values we want to show in the plot.
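The spider plot’s point-selection rule described above is easy to state in code. This sketch (hypothetical helper, not the ResFrac implementation) keeps exactly the points with at most one non-zero parameter group interpolator value:

```python
def spider_points(points, tol=1e-12):
    """Filter sample points for a spider plot.

    Keeps points where at most one parameter group interpolator
    differs from zero (within a small tolerance), i.e. the center
    point and the one-at-a-time perturbations.
    """
    return [p for p in points
            if sum(abs(z) > tol for z in p) <= 1]
```

Applied to the example in the text, [0, 0, 0.5] passes the filter while [0, 0.5, 0.5] does not.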
After defining what values to put on the vertical and horizontal axes, clicking “Show Plot” will bring up a new window that will show the plot results. It is often convenient to start with a spider plot to identify which parameter groups are most impactful on a particular target function.
In the above two figures, we see that relative toughness, absolute toughness, and proppant immobilization have the most influence on propped fracture surface area, while relative toughness, absolute toughness, and global permeability have the most influence on final cumulative oil production. A natural next step is to make a scatterplot matrix with propped fracture surface area and final cumulative oil production on the vertical axes and the following four parameter groups on the horizontal axis: relative toughness, absolute toughness, proppant immobilization, and global permeability.
In the above figure, we see the strength of response of the target functions to changes in the parameter group interpolator values. The scatterplots use the full set of simulation points, so they provide a qualitatively different view of the results than the spider plots, which used only the one-at-a-time points.
In this blog post, we have gone through an example sensitivity workflow. We started with a single simulation as our base and defined how to systematically vary input variables by defining parameters, parameter groups, and sampling schemes. This systematic approach allows us to significantly reduce the dimensionality of the parameter space and to investigate sensitivities with a smaller number of simulations. Once the parameter ranges were set, we showed how users can run the sensitivity study on the cloud and get the results back to their computers. Finally, we demonstrated how to call out a few specific summary output quantities of interest and to make plots of those quantities against the changes in the input parameters. This helps users identify the sensitivity of the selected output metrics to various inputs. As a next step, a user might look at the detailed 3D results for specific simulations of interest. After getting a better understanding of the trends in this sensitivity study, the user might decide to change the base simulation and create a follow-on sensitivity analysis. Alternatively, the user might decide that a single sensitivity study is sufficient to start formulating opinions and recommendations based on the observed trends.
Over the coming months, we will add refinements to the sensitivity analysis features, including additional sampling schemes, target functions and plotting types. We are also concurrently building on sensitivity analysis to implement automated history matching and optimization capabilities. I look forward to sharing more about these features as they become ready.
I am very excited about the sensitivity analysis tools now available in ResFrac, and I greatly appreciate all of the people with whom the ResFrac team and I have had discussions about these features. I would especially like to acknowledge our pre-release users for their comments and ideas.