Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
Research article

Towards validated multiscale simulations for fusion

O. O. Luk, J. Lakhlili, O. Hoenen, U. von Toussaint, B. D. Scott and D. P. Coster

Max-Planck-Institut für Plasmaphysik, Garching, Germany

    Abstract

    Harnessing the energy produced by thermonuclear fusion reactions has the potential to provide a clean and inexpensive source of energy on Earth. However, as physicists have learned over the past seven decades, creating our very own fusion energy source is very difficult to achieve. We constructed a component-based, multiscale fusion workflow to model the fusion plasma inside the core of a tokamak device. To ensure the simulation results agree with experimental values, the model needs to undergo the process of verification, validation and uncertainty quantification (VVUQ). This paper describes the VVUQ work carried out on the multiscale fusion workflow (MFW) with the help of the EasyVVUQ software library developed by the VECMA project. In particular, the similarity of distributions from simulation and experiment is explored as a validation metric. Such initial validation measures provide insights into the accuracy of the simulation results.

    This article is part of the theme issue ‘Reliability and reproducibility in computational science: implementing verification, validation and uncertainty quantification in silico’.

    1. Introduction

    Thermonuclear fusion is a process in which lighter, positively charged nuclei fuse together to produce heavier nuclei. Energy is released during this process, and this is exactly how the Sun and other stars power themselves. Scientists have long desired to bring this power source to Earth, as it offers two major advantages: fuel availability and process cleanliness. For the former, the heavy hydrogen isotopes required for fusion can be extracted from seawater (deuterium) or bred within the fusion process (tritium). As for the latter, fusion does not produce greenhouse gases, nor does it leave behind the dangerous, long-lasting radioactive waste we see from nuclear fission plants.

    In order for the nuclei to fuse, a gas of charged particles (known as a plasma) needs to be heated to about a hundred million degrees. The Sun is massive and can therefore confine the plasma with its gravity. On Earth, by contrast, scientists have devised various means to contain the hot plasma. One such device, the tokamak, confines the plasma using a magnetic field.

    However, the plasma dynamics is very complex: micro-instabilities develop in the plasma that drive turbulence, which can affect the overall transport and ultimately destroy confinement. This is a multiscale problem, in which turbulence spans millimetres in space and microseconds in time, while plasma transport happens over metres in space and seconds in time. Such scale disparity requires numerical models that can bridge the two scales in order to fully understand the dynamics of fusion plasmas.

    In the plasma physics community, there are various efforts to construct multiscale simulation models for tokamak plasmas [1–9]. Namely, the European Integrated Tokamak Modelling Task Force (EFDA ITM-TF) [10] has put together a software infrastructure framework for building complex workflows and tokamak modelling [11]. It uses a standardized interface and takes a component-based approach, which allows simple substitution of any component with another, as long as they share an identical interface. Another approach, the multiscale modelling and simulation framework (MMSF) [12–14], aims at guiding computational scientists from various domains through the many steps involved in creating a multiscale coupled application, from its conceptual design to its implementation and finally its execution on high-performance computing (HPC) systems. Using the ITM-TF standardized interface along with the MMSF, an MFW was pieced together [15,16]. The MFW has been used to tackle the time-scale disparity between turbulence and transport in a tokamak-like scenario. However, there is a need to ensure the simulation results are reliable when compared with experiments.

    Hence, uncertainty quantification (UQ) and validation processes are crucial in order to demonstrate the robustness of the simulation model. Code results can be ‘validated’ by comparison with experiment in a number of ways, ranging from qualitative (subjective) measures to quantitative measures, e.g. applying a validation metric. In [17], the authors discuss in some detail (64 pages) ‘Verification and validation in computational fluid dynamics’. A later article involving the same first author, [18] on ‘Measures of agreement between computation and experiment: validation metrics’, provides ‘Recommended features of validation metrics’ and makes the argument for providing quantitative measures of agreement. The application to fusion has been made in a number of papers, including [19] ‘Validation in fusion research: towards guidelines and best practices’, [20] ‘Verification and validation for magnetic fusion’ and [21] ‘Validation metrics for turbulent plasma transport’. The goal of this paper is to lay out the progress we have made thus far towards a validated multiscale fusion simulation model. The rest of the paper is structured as follows: we begin by giving a description of the multiscale fusion model. Next, we present a few production runs using the MFW, with details on the time-bridging method used to couple the turbulence and transport models and on the methods that determine the quasi-steady state of the plasma. Then, we show the UQ methods we use on the workflow, including the technology employed and an analysis of the outcome. Afterwards, an introduction to the validation methods along with an initial validation of the simulation results is presented, including how the experimental data are processed and how distribution similarity measurements are used to quantitatively compare distributions obtained from simulation with those from experiment. Finally, we conclude with a summary and outlook.

    2. Multiscale fusion workflow

    The MFW takes the component-based approach: it couples single-scale models together and allows information to pass from one to another [15,16]. The topology of the workflow is illustrated in figure 1: a transport code (ETS [22]) calculates the plasma profiles, such as densities and temperatures. This information is passed to the equilibrium code (CHEASE [23]), which updates the plasma equilibrium geometry, and from there to the turbulence code (an analytical or stochastic model), which calculates the fluxes. The stochastic model we use is GEM [24], which is computationally much more expensive than the analytical model. Finally, a module (IMP4DV [25]) converts the fluxes obtained from the turbulence code into the transport coefficients required by the transport code.

    Figure 1. The topology of the multiscale fusion workflow (MFW). (Online version in colour.)

    Initial data are imported into the system via another component which allows users to select from various data sources such as files and databases. The information passed from one model to another is in the form of data structures that were agreed upon by the ITM community [10]. These data structures are called consistent physical objects (CPOs) [26].

    3. Time-bridging and quasi-steady state

    We take this multiscale workflow and explore different issues, such as time-scale bridging between the turbulence and transport codes and the search for a way to define a quasi-steady state of the plasma [15]. To bridge the temporally disparate models, we make the time step size in the transport code adaptive, with the value at each iteration depending on two parameters, ΔTe,lim and Δ∇ρTe,lim:

    \[ \Delta T_{e,\mathrm{lim}} > \Delta T_e(t) = \left| \frac{T_e(t - \Delta t) - T_e(t)}{T_e(t - \Delta t)} \right| \tag{3.1} \]
    and
    \[ \Delta \nabla_{\rho} T_{e,\mathrm{lim}} > \Delta \nabla_{\rho} T_e(t) = \left| \frac{\nabla_{\rho} T_e(t - \Delta t) - \nabla_{\rho} T_e(t)}{\left| \nabla_{\rho} T_e(t - \Delta t) \right| + T_e(t - \Delta t)/a} \right|. \tag{3.2} \]

    Te is the electron temperature, ∇ρTe is the derivative of Te with respect to the toroidal flux coordinate ρ and a is the maximum value of the toroidal flux coordinate. The time-bridging method is implemented in the MFW to simulate a scenario from the ASDEX Upgrade (AUG) tokamak. With ΔTe,lim set to 0.1 and Δ∇ρTe,lim to 0.05, we obtained the electron and ion heat fluxes, qe and qi, respectively, and Te and the ion temperature Ti at each flux-tube, which are plotted in figures 2 and 3. Each flux-tube has a spatial position described by the normalized toroidal flux coordinate ||ρtor||, and the colours of the traces represent the individual flux-tubes.
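
    As an illustration only, the following minimal sketch (not the MFW implementation; `step_accepted` and its argument names are hypothetical) shows how the two acceptance criteria of equations (3.1) and (3.2) might be checked on a discretized profile:

```python
import numpy as np

def step_accepted(Te_prev, Te_curr, gradTe_prev, gradTe_curr, a,
                  dTe_lim=0.1, dgradTe_lim=0.05):
    """Check the adaptive time-step criteria of equations (3.1) and (3.2).

    Te_prev, Te_curr: electron temperature profiles at t - dt and t.
    gradTe_prev, gradTe_curr: corresponding gradients with respect to rho.
    a: maximum toroidal flux coordinate value.
    """
    # Equation (3.1): relative change of Te over one transport step.
    dTe = np.abs((Te_prev - Te_curr) / Te_prev)
    # Equation (3.2): change of the Te gradient, with Te/a added to the
    # denominator so that flat regions of the profile do not dominate.
    dgradTe = np.abs(gradTe_prev - gradTe_curr) / (np.abs(gradTe_prev) + Te_prev / a)
    # The step size is acceptable when both criteria hold everywhere.
    return bool(np.all(dTe < dTe_lim) and np.all(dgradTe < dgradTe_lim))
```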

    Figure 2. The time history of electron heat flux qe (a, in units of MJ m⁻² s⁻¹) and electron temperature Te (c, in units of eV) of each flux-tube, along with Δt (b, in units of s). Every ETS time step is adaptive, with parameters ΔTe,lim = 0.1 and Δ∇ρTe,lim = 0.05; the step size ranges from 1 × 10⁻⁸ to 1 × 10⁻² s (b). (Online version in colour.)

    Figure 3. The time history of ion heat flux qi (a) and ion temperature Ti (c) of each flux-tube, along with Δt (b). Every ETS time step is adaptive with parameters ΔTe,lim = 0.1 and Δ∇ρTe,lim = 0.05. (Online version in colour.)

    One way to find a quasi-steady state of the plasma is to calculate the average Te (〈Te〉) over every N time steps [15]. If the previous G bin averages (〈Te〉n−G to 〈Te〉n−1) all lie within the current average 〈Te〉n ± one standard deviation (σn), then we have found a quasi-steady state. Let us call this method 1.
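
    A minimal sketch of method 1 (our naming; assuming the Te history of a single flux-tube is stored as a NumPy array, one value per transport time step):

```python
import numpy as np

def quasi_steady_method1(Te_hist, N=200, G=5):
    """Return the index of the first bin where method 1 declares a
    quasi-steady state, or None."""
    nbins = len(Te_hist) // N
    means = np.array([Te_hist[i * N:(i + 1) * N].mean() for i in range(nbins)])
    stds = np.array([Te_hist[i * N:(i + 1) * N].std() for i in range(nbins)])
    for n in range(G, nbins):
        # The previous G bin averages must lie within <Te>_n +/- sigma_n.
        if np.all(np.abs(means[n - G:n] - means[n]) <= stds[n]):
            return n
    return None
```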

    Another way to locate a quasi-steady state (method 2) is to take 1 s worth of data at a time (labelled with superscript k) and find the maximum and minimum Te (Te,max^k and Te,min^k, respectively). We then average those values: 〈Te〉^k = (Te,min^k + Te,max^k)/2.

    This value is then compared with the corresponding value from each of the previous M seconds of data, i.e. 〈Te〉^{k−M} to 〈Te〉^{k−1}. Specifically, for every second, we calculate the percentage difference

    \[ \frac{\left| \langle T_e \rangle^{k} - \langle T_e \rangle^{k-m} \right|}{\langle T_e \rangle^{k}} \times 100\%, \tag{3.3} \]
    with 1 ≤ m ≤ M. If the difference is at or below some threshold limit L for every flux-tube, then we declare a quasi-steady state. We can then take the 1 s worth of Te data and calculate the mean and standard deviation. Taking the production run from figures 2 and 3, we apply both quasi-steady-state methods; a sketch of method 2 follows below. For method 1, we locate a quasi-steady state by setting N = 200 steps/bin and G = 5 previous bins. For method 2, we set M = 2 s and L = 10%, or M = 2 s and L = 5%. 〈Te〉 and 〈Ti〉 of each flux-tube at quasi-steady state obtained from methods 1 and 2, along with their standard deviations, are plotted against ||ρtor|| in figure 4. Each Te profile lies within one standard deviation of the other, and the same applies to the Ti profiles. This indicates that the two methods yield very similar quasi-steady-state temperature profiles. Therefore, we will take the temperature profiles from method 1 and compare them with experimental data in §6(b).
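
    A matching sketch of method 2, under the same assumptions (hypothetical names; one flux-tube):

```python
import numpy as np

def quasi_steady_method2(times, Te_hist, window=1.0, M=2, L=10.0):
    """Return the index of the first window where method 2 declares a
    quasi-steady state, or None. times (s) and Te_hist describe one
    flux-tube; window is the averaging interval in seconds."""
    nwin = int((times[-1] - times[0]) // window)
    mids = []
    for k in range(nwin):
        lo = times[0] + k * window
        seg = Te_hist[(times >= lo) & (times < lo + window)]
        mids.append(0.5 * (seg.min() + seg.max()))  # <Te>^k
    for k in range(M, nwin):
        # Percentage difference of equation (3.3) against the previous M windows.
        diffs = [abs(mids[k] - mids[k - m]) / mids[k] * 100.0
                 for m in range(1, M + 1)]
        if max(diffs) <= L:
            return k
    return None
```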

    Figure 4. Te and Ti profiles at quasi-steady state of an AUG plasma scenario, based on the two methods. The production run has adaptive time step parameters ΔTe,lim = 0.1 and Δ∇ρTe,lim = 0.05. Traces with square markers are obtained using the first method. Traces with circle and triangle markers are obtained using the second method. (Online version in colour.)

    As shown in figures 2 and 3, while the adaptive time step is effective for Te, the time step sizes are too large for the ions: Ti exhibits large fluctuations. Both the time-bridging and quasi-steady-state techniques described above focused only on constraining the time evolution of ΔTe and Δ∇ρTe. Fortunately, these techniques can easily be extended to include the ions as well. For example, the adaptive time step can include two additional parameters, ΔTi,lim and Δ∇ρTi,lim, analogous to equations (3.1) and (3.2), respectively. This would adjust the time step size at each step such that Te, Ti, ∇ρTe and ∇ρTi calculated by the transport model do not change so much that the turbulence code has difficulty adjusting.

    We simulate the same tokamak scenario with a different set of time-bridging parameters: ΔTe,lim = ΔTi,lim = 0.1 and Δ∇ρTe,lim = Δ∇ρTi,lim = 0.025. qe, qi, Te and Ti from this production run are shown in figures 5 and 6. With a smaller Δ∇ρTe,lim value and the additional time-bridging parameters for the ions, the time step sizes are noticeably smaller. The fluctuations observed in Te and Ti are also much weaker than in the previous simulation.

    Figure 5. The time history of electron heat flux qe (a) and electron temperature Te (c) of each flux-tube, along with Δt (b). Every ETS time step is adaptive with parameters ΔTe,lim = ΔTi,lim = 0.1 and Δ∇ρTe,lim = Δ∇ρTi,lim = 0.025. (Online version in colour.)

    Figure 6. The time history of ion heat flux qi (a) and ion temperature Ti (c) of each flux-tube, along with Δt (b). Every ETS time step is adaptive with parameters ΔTe,lim = ΔTi,lim = 0.1 and Δ∇ρTe,lim = Δ∇ρTi,lim = 0.025. (Online version in colour.)

    Next, we use methods 1 and 2 to find quasi-steady states for this production run. Method 1, which examines 〈Te〉 over every N time steps, is now expanded to include the ions as well, i.e. 〈Ti〉. A quasi-steady state for the electrons was found with N = 500 steps/bin and G = 5 previous bins. However, σn is very small owing to the small fluctuation of Te over time (0.001 < σn/〈Te〉 < 0.01), so we widen the acceptance interval to 〈Te〉n ± 3σn when comparing the previous G values of 〈Te〉. We did the same for the ions, which reach a quasi-steady state earlier than the electrons. However, we decided to study both temperature profiles obtained at the same time, when the electrons reach quasi-steady state.

    As for method 2, since we have only about 1 s worth of data from the second production run, we take 0.1 s worth of data at a time to find Te,max^k and Te,min^k, rather than the 1 s used earlier for the first production run. In addition, we set M = 0.2 s and L = 5%. These settings apply to both ions and electrons. The Te and Ti profiles obtained from both methods are shown in figure 7.

    Figure 7. Te and Ti profiles at quasi-steady state of an AUG plasma scenario based on the two methods. The production run has adaptive time step parameters ΔTe,lim = ΔTi,lim = 0.1 and Δ∇ρTe,lim = Δ∇ρTi,lim = 0.025. Traces with square (Te) and triangle (Ti) markers are obtained using the first method. Traces with circle (Te) and plus (Ti) markers are obtained using the second method. (Online version in colour.)

    However, are these simulation results consistent with experiments? We want to take a scientific, systematic approach to determine whether we can trust these results, i.e. validate against experimental data and quantify the uncertainties in the inputs and outputs. By doing VVUQ, we put ourselves in a better position to determine whether or not the simulations match the experiments.

    4. Uncertainty quantification

    UQ is the process of characterizing, analysing and ultimately reducing the uncertainties in a model. Under the VECMA project, various UQ patterns for multiscale models are defined [27]. Two of these patterns are non-intrusive and semi-intrusive. The non-intrusive pattern treats the entire multiscale model as a black-box, and nothing is altered inside this box: we take into consideration the properties of the uncertain input parameters and employ UQ to study the statistical moments of the quantities of interest (QoI). The semi-intrusive pattern, by contrast, studies the inputs and outputs of each single-scale model instead of treating the entire workflow as a black-box. This section emphasizes the non-intrusive pattern. To carry out the UQ process, we rely on the EasyVVUQ library [28] from the VECMA toolkit [29].

    EasyVVUQ is a Python library that aims to simplify the implementation of UQ in a workflow. The user provides the parameter descriptions (the model parameters, their probability distribution functions and the range of physically acceptable values for sampling) and selects a UQ method. The EasyVVUQ library then translates the user-provided information into a sampler to generate a variety of simulation scenarios. The outputs of these runs are collated for analysis, which provides the statistical moments of the QoI, such as the expected value, standard deviation and variance, as well as sensitivity indices.
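
    As an illustration, here is a minimal sketch of such a non-intrusive campaign in the spirit of the EasyVVUQ tutorials. The template, file names, run script and nominal values are placeholders rather than the actual MFW interface, and class or method names may differ between EasyVVUQ versions:

```python
import easyvvuq as uq
import chaospy as cp

campaign = uq.Campaign(name="fusion_uq")

# Parameter descriptions: type, physically acceptable range and default value.
params = {
    "heating_amplitude": {"type": "float", "min": 0.8e6, "max": 1.2e6, "default": 1.0e6},
    "Te_boundary": {"type": "float", "min": 80.0, "max": 120.0, "default": 100.0},
}

# Encoder/decoder wire the black-box model into the campaign
# (placeholder file names, not the MFW's actual input/output files).
encoder = uq.encoders.GenericEncoder(template_fname="inputs.template",
                                     target_filename="inputs.nml")
decoder = uq.decoders.SimpleCSV(target_filename="profiles.csv",
                                output_columns=["Te", "Ti"])
campaign.add_app(name="mfw", params=params, encoder=encoder, decoder=decoder)

# Uncertain inputs and the chosen UQ method (PC expansion of order 3).
vary = {
    "heating_amplitude": cp.Normal(1.0e6, 0.1e6),
    "Te_boundary": cp.Normal(100.0, 10.0),
}
sampler = uq.sampling.PCESampler(vary=vary, polynomial_order=3)
campaign.set_sampler(sampler)

# Generate the samples, run the model for each one, collate and analyse.
campaign.draw_samples()
campaign.populate_runs_dir()
campaign.apply_for_each_run_dir(uq.actions.ExecuteLocal("run_mfw.sh"))
campaign.collate()
campaign.apply_analysis(uq.analysis.PCEAnalysis(sampler=sampler,
                                                qoi_cols=["Te", "Ti"]))
results = campaign.get_last_analysis()  # moments and Sobol indices per QoI
```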

    (a) Polynomial chaos expansion

    The polynomial chaos (PC) expansion method approximates a model Y using a polynomial expansion. The polynomials are orthogonal with respect to the probability density function of the inputs:

    \[ Y \approx \hat{Y}(x, t, Q) = \sum_{n=0}^{N_p - 1} c_n(x, t)\, P_n(Q), \tag{4.1} \]
    where Q, Pn, cn and Np are the uncertain parameters, the polynomials, the expansion coefficients and the number of expansion terms, respectively [30]. The advantage of PC is that it requires fewer samples than the quasi-Monte Carlo (qMC) or Monte Carlo (MC) methods. However, this advantage only holds when the number of uncertain parameters is below about 20 [31–33]. In PC, when using tensored quadrature, the number of samples is given by N_s^{PC} = N_1^d, where N_1 is the number of nodes in the one-dimensional quadrature rule and d is the number of uncertain parameters. Usually N_1 = p + 1, where p is the polynomial degree; here we choose p = 3. In the qMC case, let N_s^{MC} be the number of samples needed for model evaluations using MC to some accuracy (in our model we use N_s^{MC} = 10⁴). To compute the Sobol indices we use Saltelli’s sampling scheme, for which the qMC number of samples is N_s^{qMC} = (d + 2) × N_s^{MC}. It is important to note that the PC expansion can only be estimated with a finite number of terms; there is always a truncation error in the estimates of uncertainty. In particular, the variance, standard deviation and sensitivity indices tend to be underestimated since their computation involves the summation of the squares of the coefficients.
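
    To make the sample-count scaling concrete, a few lines of arithmetic using the values quoted above:

```python
p = 3                          # polynomial degree
N1 = p + 1                     # nodes of the 1D quadrature rule
Ns_MC = 10**4                  # MC evaluations to the chosen accuracy
for d in (4, 8):               # number of uncertain parameters
    Ns_PC = N1**d              # tensored-quadrature PC samples
    Ns_qMC = (d + 2) * Ns_MC   # Saltelli sampling for the Sobol indices
    print(d, Ns_PC, Ns_qMC)    # d=4: 256 vs 60000; d=8: 65536 vs 100000
```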

    (b) Sensitivity analysis

    Sensitivity analysis (SA) quantifies how much of the variance in a model output is contributed by individual uncertain parameters. In particular, we select Sobol SA, which expresses the sensitivity in terms of indices: the first-order Sobol sensitivity index quantifies the direct effect of an individual input parameter on the output variance, while higher-order sensitivity indices quantify the effect on the output variance due to interactions between inputs [34]. A low sensitivity index means that alterations to a particular uncertain parameter result in comparatively small variations in the final model output. Similarly, a high sensitivity index means that any change to a particular uncertain parameter can lead to a strong response in the model output.

    5. Validation: similarity between code and experiment

    The validation part of VVUQ has to do with comparing simulation results with experimental data and determining whether the outcome is in quantitative agreement with the experiment. We take the distribution obtained from simulation and compare it with the distribution from the experiment. It should be kept in mind that the results of both modelling and experiment are given by probability distributions which hold in most cases only approximately: especially in the experiment, the use of a normal distribution commonly only indicates some knowledge about the mean and the variance of the result, but does not imply that the uncertainty distribution indeed follows a Gaussian (i.e. that there are no outliers). Nevertheless, for comparison a metric is needed which allows a quantitative assessment, matches the expectations of the users and (in the best case) is in accordance with physics intuition. The EasyVVUQ library can take the distributions and compare them using a number of metrics, including the Hellinger distance [35], the Jensen–Shannon distance (a symmetrized and smoothed version of the Kullback–Leibler (KL) divergence [36]) and the Wasserstein metric [37]. The computed distance thus gives us an idea of the similarity between simulation and experiment distributions. The first two metrics yield values between 0 (zero distance: identical distributions) and 1 (very different); the Wasserstein metric is instead unbounded above, with a lower limit of zero.
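
    For illustration, the three metrics can be reproduced outside EasyVVUQ with standard SciPy routines; the sketch below uses synthetic Gaussians, and the grid and moments are made up:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import wasserstein_distance

def hellinger(p, q):
    """Hellinger distance between two discrete PDFs on a common grid."""
    p, q = p / p.sum(), q / q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

grid = np.linspace(0.0, 4000.0, 512)                       # temperature grid (eV)
pdf_sim = np.exp(-0.5 * ((grid - 1800.0) / 150.0) ** 2)    # 'simulated' Te
pdf_exp = np.exp(-0.5 * ((grid - 2000.0) / 250.0) ** 2)    # 'experimental' Te
pdf_sim /= pdf_sim.sum()
pdf_exp /= pdf_exp.sum()

print(hellinger(pdf_sim, pdf_exp))                         # in [0, 1]
print(jensenshannon(pdf_sim, pdf_exp, base=2))             # in [0, 1]
print(wasserstein_distance(grid, grid, pdf_sim, pdf_exp))  # >= 0, in eV
```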

    Although these metrics allow in principle a quantitative ranking, it turned out (cf. figure 10) that already ‘small’ differences in shape (which are considered well within the uncertainties) typically yielded the result ‘very different’, i.e. the standard measures are too sensitive, given that the involved PDFs are themselves approximations. For that reason, we suggest using a measure which puts its emphasis on the lower moments (which are commonly the moments most reliably inferred in experiments) and less on the higher moments. The balance between the differences in the lower moments and the higher moments is set by an adjustable weighting factor w, which ranges between 0 and 1.

    Loosely following [38], where a refined bound for the KL divergence has been suggested, we thus propose a measure for the compatibility of probability distributions which focuses on the first three statistical moments, i.e.

    \[ w_{tot} = (1 - w)\, \frac{(\mu_2 - \mu_1)^2}{2(\sigma_1^2 + \sigma_2^2) + (\mu_2 - \mu_1)^2} + w\, \frac{(\gamma_2 - \gamma_1)^2}{2(\sigma_1^2 + \sigma_2^2) + (\mu_2 - \mu_1)^2 + (|\gamma_1| + |\gamma_2|)^2}. \tag{5.1} \]

    In this equation, wtot represents the compatibility distance; μ, σ and γ are the mean, standard deviation and skewness of each dataset. The symmetric distance between the two distributions is given by the first term, while the second term incorporates the asymmetry of the distributions but is of negligible importance if the means do not coincide. wtot ranges from 0 to 1, with 0 representing identical distributions and 1 completely different ones. The difference between this metric and the ones mentioned earlier is that it takes the higher statistical moment, namely γ, into consideration and removes some deficiencies by accounting for asymmetry. A study based on comparisons of a variety of different distributions (split-Gaussians, Gaussians of different means and variances) found that a relatively small value of w = 0.05 is in the best agreement with intuition. This will be demonstrated in the next section.
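
    Equation (5.1) translates directly into a few lines of Python (a sketch; the function name is ours):

```python
def compatibility(mu1, sigma1, gamma1, mu2, sigma2, gamma2, w=0.05):
    """Compatibility distance of equation (5.1) from the first three moments."""
    denom = 2.0 * (sigma1**2 + sigma2**2) + (mu2 - mu1)**2
    term_mean = (mu2 - mu1)**2 / denom                       # lower moments
    term_skew = (gamma2 - gamma1)**2 / (denom + (abs(gamma1) + abs(gamma2))**2)
    return (1.0 - w) * term_mean + w * term_skew
```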

    If one chooses to neglect the higher moments completely and adopts the simple assumption that both distributions are normal, a suitable standard measure is given by the Z-test [39]. It determines how close the mean value of one distribution (e.g. Te obtained in simulation) is to that of another (e.g. Te obtained from experiment), with respect to the provided standard errors. It takes the mean (μ) and standard error (σ/√n, with n representing the number of data points) to determine the z-score (Z)

    \[ Z = \frac{|\mu_1 - \mu_2|}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}. \tag{5.2} \]
    Here Z ranges from 0 (for identical distributions) to infinity. A comparison of the properties of the different measures is provided in the following section.
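
    Again as a sketch, the z-score of equation (5.2) computed from the sample moments:

```python
import math

def z_score(mu1, sigma1, n1, mu2, sigma2, n2):
    """Two-sample z-score of equation (5.2); 0 for identical means."""
    return abs(mu1 - mu2) / math.sqrt(sigma1**2 / n1 + sigma2**2 / n2)
```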

    6. Uncertainty quantification and validation results

    (a) UQ example

    Uncertainties in the fusion workflow come from the applied heating sources and boundary conditions (epistemic) and from the stochastic behaviour of the turbulence (aleatoric). We only include the epistemic uncertainties: boundary conditions are defined by Te and Ti at the plasma edge (||ρtor|| = 1), and characteristics of the applied heating sources (e.g. amplitude, width and position for both ion and electron heating power distributions). It is easiest to understand the notion of heating source through the heat transport equation or the continuity equation of energy flow

    \[ \frac{\partial u}{\partial t} + \nabla \cdot q = S, \tag{6.1} \]
    where u, q and S represent the energy density, energy flux and heat source, respectively. The heating source is a Gaussian function characterized by its position (b), amplitude (A) and width (c)
    \[ S(\rho) = A \exp\!\left[ -\frac{(\rho - b)^2}{2 c^2} \right]. \tag{6.2} \]

    We selected a normal distribution to describe each of the uncertain inputs, with ±20% around the original value as the physically acceptable limits. Using the PC method with six sample points along a single uncertain dimension, and four uncertain inputs (b, A and c from the electron heating source, and Te at the plasma edge), we end up with 6⁴ = 1296 samples. However, if all eight uncertain parameters are considered (four from the electrons and four from the ions), then 6⁸, or about 1.7 million, samples are generated. The number of PC samples grows exponentially with the number of uncertain parameters, and processing all of these samples can become cumbersome. Therefore, qMC [40] or PC with a sparse grid [41] become better options for generating samples.
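
    As a sketch of the uncertain-input setup: the nominal values below are placeholders, not the paper’s AUG inputs, and mapping the ±20% limits to roughly three standard deviations is our assumption, not the paper’s stated choice.

```python
import numpy as np
import chaospy as cp

def heating_source(rho, A, b, c):
    """Gaussian heating source of equation (6.2)."""
    return A * np.exp(-(rho - b) ** 2 / (2.0 * c ** 2))

A0, b0, c0, Te_edge0 = 1.0e6, 0.0, 0.2, 100.0  # placeholder nominal values

vary = {
    "amplitude": cp.Normal(A0, 0.2 * A0 / 3.0),
    "position": cp.Normal(b0, 0.05),           # b0 = 0, so an absolute width
    "width": cp.Normal(c0, 0.2 * c0 / 3.0),
    "Te_edge": cp.Normal(Te_edge0, 0.2 * Te_edge0 / 3.0),
}
```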

    Once the uncertain input parameters are defined, the MFW runs as a black-box for multiple time iterations until the plasma reaches a steady state. The QoI from this simulation are the output Te and Ti coming from the macroscopic transport model. The EasyVVUQ library carries out the UQ analysis and provides the mean, standard deviation and variance of the QoI.

    Based on the work by Lakhlili et al. [42], SA shows that the variance in Te is mainly due to uncertainty from the position of the heating source at the core (||ρtor|| = 0.0), the boundary condition at the edge, and the amplitude of the heating source between the core and the edge of the plasma. By contrast, the width does not have a direct effect on the variance of Te. This tells us that we can eliminate the width of the heating source function as an uncertain parameter in the future without affecting the variance of the QoI. SA shows that the variance in Ti is mainly due to uncertainty from the position and amplitude of the heating source in the core region and the boundary condition at the edge. Again, the width of the heating source function does not have a direct effect on the variance of Ti, so the number of uncertain parameters can be reduced by eliminating the width of the electron heating source function while leaving the variance behaviour of the temperature distributions unchanged. The first-order sensitivity indices of Te and Ti are illustrated in figure 8.

    Figure 8. First-order Sobol sensitivity index for Te (a) and Ti (b), with uncertain inputs coming from the electron heating source and Te at the plasma edge. (Online version in colour.)

    (b) Comparison with experiment

    Now that we have Te and Ti profiles from the production simulations (see §3) and from the simulations equipped with UQ (see §6(a)), we want to see how they compare with reality. To validate the temperature profiles obtained at quasi-steady state, measurements from two AUG experimental shots are taken into consideration. The profile data are provided by the integrated data analysis (IDA) scheme [43] at ASDEX Upgrade: essentially, the profile estimation is based on the joint and simultaneous evaluation (Bayesian data fusion [44]) of a set of different diagnostics (Thomson scattering, lithium beam diagnostic, magnetic measurements), taking into account the respective uncertainties. The standard evaluation approximates the high-dimensional, asymmetric and spatially correlated posterior distribution by the mean and one- or two-sided standard deviations of the profile quantities (e.g. Te, ne), either at a set of discrete radial locations or by the uncertainty of fitted profile functions. The reliability of the assigned uncertainties can in part be assessed by comparison with subsequent measurements using improved diagnostics with a higher accuracy [45]: the estimated overall uncertainty level is correct, but smaller deviations do exist. The ion temperature data and uncertainties are provided by measurements of the ASDEX Upgrade CXRS spectroscopy diagnostic [46].

    The electron (a) and ion (b) temperature profiles obtained from experiment, from the non-intrusive UQ of the workflow, and from the production simulations are plotted together in figure 9. The experimental data came from AUG shots no. 36266 and no. 36297. The two sets of UQ results presented here have different uncertain inputs: one set has uncertain inputs from the applied electron heating source and the boundary condition for Te, while the second set has uncertain inputs from the applied ion heating source and the boundary condition for Ti. One production simulation has adaptive time step parameters ΔTe,lim = 0.1 and Δ∇ρTe,lim = 0.05, while the other has ΔTe,lim = ΔTi,lim = 0.1 and Δ∇ρTe,lim = Δ∇ρTi,lim = 0.025. We label them simulations A and B, respectively. The quasi-steady-state profiles here are obtained from method 1 of the two methods described in §3.

    Figure 9. Te (a) and Ti (b) profiles measured from two production simulations (one with adaptive time step parameters ΔTe,lim = 0.1 and Δ∇ρTe,lim = 0.05 in thick yellow lines with filled-square markers, the other with ΔTe,lim = ΔTi,lim = 0.1 and Δ∇ρTe,lim = Δ∇ρTi,lim = 0.025 in thick blue lines with filled-circle markers), two UQ simulations (one with uncertainties coming from the electron heat source and Te boundary condition, plotted in blue with asterisk markers, and the other with uncertainties coming from the ion heat source and Ti boundary condition, plotted in gold with outlined-square markers), and experiments (shot no. 36266 in purple lines with plus markers and no. 36297 in green lines with × markers). The temperatures are plotted against ||ρtor||, or ‘rho_tor_norm’. (Online version in colour.)

    In the Te profiles, though employing the simpler analytical turbulence model, the UQ simulations provide an estimate of the uncertainties coming from the heating source and boundary condition. The production runs use the stochastic turbulence model GEM, which includes more physics descriptive of the turbulent behaviour of the plasma, as mentioned in §2. Although the production runs yield better agreement with the experiment, Te at the inner two flux-tubes from simulation A is closer to the experiments than Te from simulation B. Similar observations hold for the ion temperature profile (figure 9b). To give a quantitative answer, we turn to the similarity metrics discussed in §5.

    The quasi-steady-state profiles from the production runs, as shown in figure 9, have eight spatial points, one per flux-tube. Each profile from the UQ results has 100 ||ρtor|| values. The Te measured in experiment shots no. 36266 and no. 36297 has 161 and 160 ||ρtor|| values, respectively, while the Ti measurements of both shots have 96 ||ρtor|| values. Before we can use any of the distribution metrics, the number of ||ρtor|| values must match between simulation and experiment. Therefore, we take the simulated quasi-steady-state profile obtained at a certain time bin and look at the full profile from the transport model, which spans 100 spatial grid points. From there, we interpolate to make the number of grid points in the simulation match the experimental measurements.
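
    Concretely, with NumPy this is a single call (a sketch with placeholder profiles; the grid sizes are those quoted above):

```python
import numpy as np

rho_sim = np.linspace(0.0, 1.0, 100)            # transport-model grid
Te_sim = 2000.0 * (1.0 - rho_sim**2) + 100.0    # placeholder Te profile (eV)
rho_exp = np.linspace(0.0, 1.0, 161)            # e.g. shot no. 36266 grid

# Linear interpolation of the simulated profile onto the experimental grid,
# so both distributions are defined at the same rho_tor values.
Te_sim_on_exp = np.interp(rho_exp, rho_sim, Te_sim)
```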

    Three distribution similarity metrics are tested on the production simulation and experiment distributions: the Hellinger distance, the Jensen–Shannon distance and the Wasserstein metric. The Te and Ti validation results are shown in figure 10. The top panel of each figure shows the temperature profiles from simulation A and experiment shot no. 36266, and the bottom panel shows the distances measured using the similarity metrics. The probability distributions for the simulation and experimental data are estimated as Gaussian and split-Gaussian, respectively.

    Figure 10. Comparison between experimental and simulated electron (top left) and ion (top right) temperature profiles in the AUG tokamak. The circle markers represent Te at each flux-tube. The shaded regions represent the average temperature ± standard deviation. The experiment (dashed trace) is labelled with shot no. 36266. The simulated quasi-steady state Te and Ti are plotted as solid lines. Time-bridging parameters are set to ΔTe,lim = 0.1 and Δ∇ρTe,lim = 0.05, and both Te and Ti are averaged over 200 time steps. The similarity distances are plotted in the bottom panels. (Online version in colour.)

    Based on the top left panel of figure 10, Te obtained from simulation A is deemed comparable to the experiment, especially in the region between ||ρtor|| = 0.14 (where the inner-most flux-tube is located) and the edge, if we only focus on the overlap between the two distributions (with mean ± standard deviation considered). However, when we use the distribution similarity metrics to quantitatively compare them, the results show that they are not as similar as they seem. For example, even though Te at ||ρtor|| = 0.23 has the same mean in both distributions, the distances measured are not near 0. At ||ρtor|| = 0, the average Ti from simulation and from experiment have a percentage difference of less than 1%, and the two distributions overlap; yet the similarity distances are not near 0 either. This is because these similarity metrics are sensitive to the variances of both distributions. Similar issues are observed in the validation exercise for simulation B, in both the Te and Ti similarity measures. Overall, these three similarity metrics do not reflect well what we see in the simulation and experiment distributions. This is why we turn to the compatibility measure and the Z-test.

    The compatibility measure and Z-test between each of the two production runs (simulations A and B) and experiment shot no. 36266 are shown in figures 11 and 12. As in figure 10, the top panel of each figure shows the temperature profiles we want to compare. The compatibility measure is applied three times, each with a different w: 0.05, 0.5 and 0.95. The value of w determines the weight placed on the asymmetry of the distributions in the compatibility measure. For example, with w = 0.05, 5% of the emphasis is placed on the asymmetry of the distributions.

    Figure 11. Comparison between experimental and simulated electron temperature profiles in the AUG tokamak (top). One set of simulations has time-bridging parameters ΔTe,lim = 0.1 and Δ∇ρTe,lim = 0.05, with the Te profile averaged over 200 time steps (left). The other set has ΔTe,lim = ΔTi,lim = 0.1 and Δ∇ρTe,lim = Δ∇ρTi,lim = 0.025, with Te averaged over 500 time steps (right). The compatibility distance with various weighting factors w and the z-score are plotted in the bottom panels. (Online version in colour.)

    Figure 12. Comparison between experimental and simulated Ti profiles in the AUG tokamak (top). The compatibility distance with various weighting factors w and the z-score are plotted in the bottom panels. (Online version in colour.)

    First, we study the compatibility between Te from the simulations and from experiment shot no. 36266, as shown in figure 11. The compatibility distance with w = 0.05 at each flux-tube has the best agreement with intuition among the three weighting factors tested. In the lower left panel, the compatibility distance between simulation A and experiment is below 0.5 at every flux-tube except the one at ||ρtor|| = 0.561, which has wtot = 0.61. For simulation B, the flux-tube at ||ρtor|| = 0.668 has the highest compatibility distance, 0.66. The z-scores show a similar trend to the compatibility distance with w = 0.05.

    Next, we study the compatibility between Ti from the simulations and from experiment shot no. 36266. Again, the compatibility measure with w = 0.05 provides the results most agreeable with intuition. As shown in the bottom left panel of figure 12, the distance between simulation A and experiment is always below 0.53. On the other hand, the distance between simulation B and experiment, except at the outermost flux-tube, is relatively high, with values ranging from 0.56 to 0.91 (bottom right panel). The z-scores again present a trend similar to that observed in the compatibility measure with w = 0.05.

    Overall, the comparison between the probability distributions of temperature from the simulations and the experiment, e.g. AUG shot no. 36266, shows that simulation B tends to have a higher compatibility distance and z-score than simulation A. This is mainly due to the magnitude of the temperature fluctuations, which are smaller in simulation B than in simulation A. The standard deviation of the probability distribution is therefore also lower in simulation B, and according to equation (5.1), a lower standard deviation leads to a higher compatibility distance. The distributions from both production runs were compared with the experimental data from shot no. 36297 as well, and we draw the same conclusion regarding the compatibility distances as for shot no. 36266: the compatibility distance between simulation B and experiment tends to be greater. This validation exercise highlights the importance of the shape of the distribution functions. With that said, the distribution function we used for both simulation setups is a Gaussian with mean and standard deviation calculated from N temperature values at quasi-steady state. However, no epistemic or aleatoric uncertainties were taken into account in these production runs. We gained some intuition about the effect of the epistemic uncertainties, i.e. heating source and boundary condition, on the temperature distribution functions when UQ was applied to the MFW (with the analytical turbulence model) as a black-box, as we saw in figure 9. To provide a more comprehensive validation of the production simulations against experiment, we will take epistemic and aleatoric uncertainties into consideration in the future.

    7. Conclusion

    In summary, we conducted a validation study between simulation and experimental data taken from the ASDEX Upgrade tokamak by employing quantitative measures of distribution distances. The simulation results come from the MFW, which aims to simulate both the effects of turbulence and transport by coupling multiple single-scale models together. An issue of time-scale mismatch arises between the turbulence and transport models. In previous work, this issue was tackled by implementing an adaptive time step for every transport model iteration. In this work, we expanded the adaptive time step scheme by including two parameters that place a maximum value on the change in ion temperature, ΔTi,lim, and the change in ion temperature gradient, Δ∇ρTi,lim, besides the already-existing limits for the electron temperature and electron temperature gradient. In addition, two methods that we used to determine the quasi-steady state of a plasma were presented: one previously developed and the other newly developed. These two methods yield profiles whose mean temperatures lie well within one standard deviation of each other. However, to make sure the simulation is reliable for modelling experiments, UQ and validation are applied to the workflow. To assist us with this task, the VECMA toolkit was used to perform UQ and SA on the entire workflow as a black-box. In this paper, we are interested in comparing results from (i) the production runs with adaptive time steps and the quasi-steady-state methods implemented, (ii) the simulation equipped with UQ and (iii) experiments. While the temperature profiles look visually similar, especially in the region where ||ρtor|| ranges from 0.144 (where the inner-most flux-tube is located) to 1.0, the quantitative similarity measurements of the distributions show otherwise. In particular, we used measures such as the Hellinger distance, the Jensen–Shannon distance and the Wasserstein metric to quantitatively compare the results from two production runs and two different shots of the AUG tokamak experiment. This initial validation work shows that even though the mean temperature from simulation lies within one standard deviation of the experimental measurement, the similarity distance can still be close to 1 (two very different distributions) owing to the metrics’ sensitivity to the variances of both distributions. This led us to other similarity measures that are more consistent with our intuition. In particular, the compatibility measure includes a weighting factor w that determines how much emphasis is placed on the asymmetry of a distribution. Another measure is the Z-test, which treats both distributions as Gaussian. We learned that the compatibility measure with a low w has the best agreement with intuition, and it shows a similar trend to the z-score. The comparison between simulations and experiment shows that the simulation with adaptive time step parameters ΔTe,lim = ΔTi,lim = 0.1 and Δ∇ρTe,lim = Δ∇ρTi,lim = 0.025 tends to have a higher compatibility distance and z-score than the simulation with ΔTe,lim = 0.1 and Δ∇ρTe,lim = 0.05. This is because the magnitude of the temperature fluctuations in the former simulation is smaller.

    One of the paradoxical outcomes of this work has been a regression in the agreement between the code and the experiment as methods were found to reduce the uncertainty arising from the turbulence code—since the distance between the results is weighted by the uncertainties. However, we know there are other uncertainties in the problem (in the position and characteristics of the source, in the values at the boundaries, in the assumed density profile) and methods of incorporating these uncertainties into our validation metric remain to be developed.

    In the future, we will continue exploring different methods of validating multiscale fusion simulations, including improvements to the adaptive time step method and the implementation of UQ into the production simulation, by quantifying epistemic uncertainties in the workflow and aleatoric uncertainties in the stochastic turbulence model. Ultimately, the distribution similarity metrics studied in this article shall be compared with the validation metrics suggested in [17–21].

    Data accessibility

    To access the VECMA toolkit and tutorials, please visit https://www.vecma-toolkit.eu/.

    Authors' contributions

    O.O.L. drafted the majority of the content in this paper and carried out the simulations discussed herein. J.L. created and provided his expertise on the programming scripts necessary for the uncertainty quantification and validation work. O.H. provided his expertise in the multiscale modelling framework. U.v.T. provided his expertise in uncertainty quantification and validation. B.D.S. provided his expertise on the turbulence model and physics-related problems. D.P.C. provided his expertise in the single-scale models within the fusion multiscale framework, uncertainty quantification and validation.

    Competing interests

    We declare we have no competing interests.

    Funding

    This project has received funding from the European Union’s Horizon 2020 research and innovation programme for the VECMA project, under grant agreement no. 800925. This project is part of the FET (Future and Emerging Technologies) funding scheme.

    Acknowledgements

    The authors thank all the collaborators from the VECMA project. The authors also want to thank EUROfusion and the EUROfusion High Performance Computer (Marconi-Fusion) for the computing allocation, which made part of the results presented in this paper possible.

    Footnotes

    One contribution of 15 to a theme issue ‘Reliability and reproducibility in computational science: implementing verification, validation and uncertainty quantification in silico’.

    Published by the Royal Society. All rights reserved.

    References