The point of no return for climate action: effects of climate uncertainty and risk tolerance

If the Paris Agreement targets are to be met, there may be very few years left for policy makers to start cutting emissions. Here we calculate by what year, at the latest, one has to take action to keep global warming below the 2 K target (relative to pre-industrial levels) in the year 2100 with a 67 % probability; we call this the point of no return (PNR). Using a novel stochastic model of CO2 concentration and global mean surface temperature derived from the CMIP5 ensemble simulations, we find that cumulative CO2 emissions from 2015 onwards may not exceed 424 GtC and that the PNR is 2035 for the policy scenario in which the share of renewable energy rises by 2 % year−1. Pushing this increase to 5 % year−1 delays the PNR until 2045. For the 1.5 K target, the carbon budget is only 198 GtC and there is no time left before starting to increase the renewable share by 2 % year−1. If the risk tolerance is tightened to 5 %, the PNR is brought forward to 2022 for the 2 K target and has already been passed for the 1.5 K target. Including substantial negative emissions towards the end of the century delays the PNR from 2035 to 2042 for the 2 K target and to 2026 for the 1.5 K target. We thus show how the PNR is affected not only by the temperature target and the speed at which emissions are cut, but also by risk tolerance, climate uncertainties and the potential for negative emissions. Sensitivity studies show that the PNR is robust, with uncertainties of at most a few years.


Introduction
The Earth system is currently in a state of rapid warming that is unprecedented even in geological records (Pachauri et al., 2014). This change is primarily driven by the rapid increase in atmospheric concentrations of greenhouse gases (GHGs) due to anthropogenic emissions since the industrial revolution (Myhre et al., 2013). Changes in natural physical and biological systems are already being observed (Rosenzweig et al., 2008), and efforts are made to determine the "anthropogenic impact" on particular (extreme weather) events (Haustein et al., 2016). Nowadays, the question is not so much if but by how much and how quickly the climate will change as a result of human interference, whether this change will be smooth or bumpy (Lenton et al., 2008) and whether it will lead to dangerous anthropogenic interference with the climate (Mann, 2009).
The climate system is characterized by positive feedbacks causing instabilities, chaos and stochastic dynamics (Dijkstra, 2013), and many details of the processes determining the future behavior of the climate state are unknown. The debate on action on climate change is therefore focused on the question of risk and how the probability of dangerous climate change can be reduced. In scientific and political discussions, targets on "allowable" warming (in terms of change in global mean surface temperature, GMST, relative to pre-industrial conditions) have turned out to be salient. The 2 K warming threshold is commonly seen, acknowledging considerable uncertainties, as a safe threshold to avoid the worst effects that might occur when positive feedbacks are unleashed (Pachauri et al., 2014). Indeed, at the Paris COP21 conference it was agreed to attempt to limit warming to below 1.5 K (United Nations, 2015). It is, however, questionable whether the commitments made by countries (the so-called nationally determined contributions, NDCs) are sufficient to keep temperatures below the 1.5 K and possibly even the 2.0 K target (Rogelj et al., 2016a).
A range of studies has appeared that provide insight into the safe level of cumulative emissions needed to stay below either the 1.5 or 2.0 K target at a certain time in the future (usually taken as the year 2100) with a specified probability. The choice of a particular year is necessarily arbitrary and neglects the possibility of additional warming beyond it. Early studies made use of Earth System Models of Intermediate Complexity (EMICs; Zickfeld et al., 2009; Huntingford et al., 2012; Steinacher et al., 2013) to obtain such estimates. Because it was found that peak warming depends on cumulative carbon emissions, E, but is largely independent of the emission pathway (Zickfeld et al., 2012), focus has been on specifying the safe level of E corresponding to a certain temperature target. In more recent papers, emulators derived from either C4MIP models (Sanderson et al., 2016) or CMIP5 (Coupled Model Intercomparison Project 5) models (Millar et al., 2017b), with specified emission scenarios, were used for this purpose. Such a methodology was recently used by Millar et al. (2017a) to argue that a post-2015 value of E ≈ 200 GtC would limit post-2015 warming to less than 0.6 °C (thus meeting the 1.5 K target) with a probability of 66 %.
In this paper we pose the following question: assume one wants to limit warming to a specific threshold in the year 2100, while accepting a certain risk tolerance of exceeding it, then when, at the latest, does one have to start to ambitiously reduce fossil fuel emissions? The point in time when it is "too late" to act in order to stay below the prescribed threshold is called the point of no return (PNR; van Zalinge et al., 2017). The value of the PNR will depend on a number of quantities, such as the climate sensitivity and the means available to reduce emissions. To determine estimates of the PNR, a model is required of global climate development that (a) is accurate enough to give a realistic picture of the behavior of GMST under a wide range of climate change scenarios, (b) is forced by fossil fuel emissions, (c) is simple enough to be evaluated for a very large number of different emission and mitigation scenarios and (d) provides information about risk, i.e., it cannot be purely deterministic.
The models used in van Zalinge et al. (2017) are clearly too idealized to determine adequate estimates of the PNR under different conditions. In this paper, we therefore construct a stochastic state-space model from the CMIP5 results, in which many global climate models were subjected to the same forcing for a number of climate change scenarios (Taylor et al., 2012). This stochastic model, representing the various uncertainties in the climate model ensemble, is then used together with a broad range of mitigation scenarios to determine estimates of the PNR under different risk tolerances. Stocker (2013) showed that if the Paris Agreement temperature targets are to be met, only a few years are left for policy makers to take action by cutting emissions: with an emission reduction rate of 5 % year−1, the 1.5 K target has become unachievable and the 2 K target becomes unachievable after 2017. The Stocker (2013) analysis highlights the crucial concept of the closing door, or PNR, of climate policy, but it is deterministic: it does not account for the probability that the targets are missed, nor does it allow for negative emission scenarios. We show here how the considerable climate uncertainties captured by our stochastic state-space model, the degree to which policy makers are willing to take risk, and the potential of negative emissions affect the carbon budget and the date at which climate policy becomes unachievable (the PNR). Climate policy is here defined not as an exponential emission reduction as in Stocker (2013) but as a steady increase in the share of renewable energy in total energy generation.

Methods
We let T be the annual-mean, area-weighted global mean surface temperature (GMST) deviation from pre-industrial conditions, of which the 1861-1880 mean is considered to be representative (Pachauri et al., 2014; Schurer et al., 2017). From the CMIP5 scenarios we use the simulations of the pre-industrial control, the abrupt quadrupling of atmospheric CO2, the smooth increase of 1 % CO2 year−1 and the RCP (representative concentration pathway) scenarios 2.6, 4.5, 6.0 and 8.5 (Taylor et al., 2012). The data are obtained from the German Climate Computing Center (DKRZ), the ESGF Node at the DKRZ and KNMI's Climate Explorer. The CO2 forcings (concentrations, Meinshausen et al., 2011; emissions, van Vuuren et al., 2007; Clarke et al., 2007; Fujino et al., 2006; Riahi et al., 2007) are obtained from the RCP Database (available at http://tntcat.iiasa.ac.at/RcpDb, last access: 28 March 2017).
As all CMIP5 models are designed to represent similar (physical) processes but use different formulations, parameterizations, resolutions and implementations, the results from different models offer a glimpse into the (statistical) properties of future climate change, including various forms of uncertainty. We perceive each model simulation as one possible, equally likely, realization of climate change. Applying ideas and methods from statistical physics (Ragone et al., 2016), in particular linear response theory (LRT), a stochastic model is constructed that represents the CMIP5 ensemble statistics of GMST.

Linear response theory
We only use those ensemble members from CMIP5 for which the control run and at least one perturbation run are available, leading to 34 members for the abrupt (CO2 quadrupling) and 39 for the smooth-forcing experiment. Considering those members from the RCP runs also available in the abrupt-forcing run, we have 25 members for RCP2.6, 30 for RCP4.5, 19 for RCP6.0 and 29 for RCP8.5.
The CO2 concentration as a function of time for the abrupt quadrupling and smooth CO2 increase is prescribed as

  C_abrupt(t) = C_0 (1 + 3 θ(t)),   C_smooth(t) = C_0 (1.01)^t,

with time t in years from the start of the forcing, pre-industrial CO2 concentration C_0 and Heaviside function θ(t). The radiative forcing F due to CO2 relative to pre-industrial conditions is given as

  F(t) = α_CO2 ln( C(t) / C_0 ),

with α_CO2 = 5.35 W m−2 (Myhre et al., 2013). With LRT, the Green's function for the temperature response is computed from the abrupt-forcing case as the time derivative of the mean response (Ragone et al., 2016),

  G_T(t) = (1 / F_abrupt) d⟨T_abrupt(t)⟩ / dt,

where F_abrupt = α_CO2 ln(4 C_0 / C_0) = α_CO2 ln(4). The temperature deviation from the pre-industrial state for any forcing F_any is then obtained via convolution with the Green's function,

  ⟨T(t)⟩ = ∫_0^t G_T(t − s) F_any(s) ds.

Because the Green's function is derived exactly from the abrupt response, we expect the convolution with F_any = F_abrupt to exactly reproduce the abrupt CMIP5 response. In addition, for LRT to be a useful approximation, it has to reasonably reproduce the smooth 1 % year−1 CMIP5 response with F_any = F_smooth. Figure 1a shows that LRT applied to the abrupt perturbation perfectly recovers the abrupt response, as required, and is well able to recover the response to a smooth forcing. The correspondence is very good for the mean response, and the variance is also captured quite well.
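The Green's-function construction above can be sketched numerically. The snippet below is a minimal illustration, not the paper's code: it differentiates a synthetic abrupt-response series (a single exponential standing in for the CMIP5 ensemble mean) to obtain G_T, then recovers the response by discrete convolution with an annual time step. All variable names and the synthetic time constants are our own illustrative choices.

```python
import numpy as np

ALPHA_CO2 = 5.35  # W m^-2 (Myhre et al., 2013)

def green_function(t_abrupt, f_abrupt=ALPHA_CO2 * np.log(4.0)):
    """G_T(t) = (1/F_abrupt) d<T_abrupt>/dt, annual time step."""
    return np.gradient(t_abrupt, 1.0) / f_abrupt

def temperature_response(g, forcing):
    """<T(t)> = int_0^t G_T(t-s) F(s) ds, discretized with dt = 1 yr."""
    n = len(forcing)
    return np.array([np.sum(g[:k + 1][::-1] * forcing[:k + 1]) for k in range(n)])

# Synthetic 140-year abrupt response T(t) = T_eq (1 - exp(-t/tau)); convolving
# its Green's function with the constant abrupt forcing must recover it.
t = np.arange(140.0)
tau, t_eq = 10.0, 5.0          # illustrative timescale and equilibrium warming
t_abrupt = t_eq * (1.0 - np.exp(-t / tau))
g = green_function(t_abrupt)
f_abrupt = np.full_like(t, ALPHA_CO2 * np.log(4.0))
t_rec = temperature_response(g, f_abrupt)
```

The self-consistency check (abrupt forcing reproduces the abrupt response, up to discretization error at the first step) mirrors the "exactness" property used in the text.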
The dynamics of the carbon cycle respond to such changes and have to be addressed explicitly. A multi-model study of many carbon models of varying complexity under different background states and forcing scenarios was recently presented by Joos et al. (2013). For the ensemble mean of the responses to a 100 GtC emission pulse added to a present-day climate, a fit of a three-timescale exponential with constant offset was proposed, of the form

  G_C(t) = µ_0 + Σ_{i=1}^{3} µ_i exp(−t / τ_i).

The coefficients µ_i, i = 0, ..., 3, and timescales τ_i, i = 1, ..., 3, are determined using least-squares fits on the multi-model mean.
The CO2 concentration then follows from the convolution

  C(t) = C_0 + γ ∫_0^t G_C(t − s) E_CO2(s) ds,

where γ converts emissions from GtC to ppm (see below). In doing so, we use a response function that is independent of the size of the pulse, i.e., we assume that the carbon cycle reacts to pulses of all sizes in the same way as to the 100 GtC pulse. This is of course a simplification, especially as very large pulses might unleash positive feedbacks related to the saturation of natural sinks such as the oceans (Millar et al., 2017b), but it works reasonably well in the range of emissions we are primarily interested in. The full (temperature and carbon) LRT model is summarized as

  C(t) = C_0 + γ ∫_0^t G_C(t − s) E_CO2(s) ds,
  ⟨T(t)⟩ = A ∫_0^t G_T(t − s) α_CO2 ln( C(s) / C_0 ) ds,

and relates fossil CO2 emissions, E_CO2, to the mean GMST perturbation T, with initial conditions C_CO2,0 for CO2 and T_0 for the GMST perturbation. This is quite a simple model with few "knobs to turn". The only really free parameter is the constant A, which scales up the CO2 radiative forcing to take into account non-fossil CO2 and non-CO2 GHG emissions (not present in the idealized scenarios) and matches the carbon and temperature models (estimated from different model ensembles) to each other. The value A = 1.48 was found to optimize the agreement of T with the CMIP5 RCPs. The resulting reconstruction of temperatures from RCP CO2 concentrations overlaid with CMIP5 data (Fig. 1c) shows good agreement.
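The carbon side of the model can be sketched as follows. This is an illustrative implementation, not the paper's: the µ_i and τ_i values below are the multi-model-mean coefficients commonly quoted for the Joos et al. (2013) impulse-response fit (the paper's own fitted values are in its Table 2 and may differ slightly), and the pre-industrial concentration is an assumed round number.

```python
import numpy as np

GAMMA = 0.46969  # ppm per GtC (conversion constant from the text)
C0 = 278.0       # ppm, assumed pre-industrial CO2 concentration

# Three-timescale impulse response with constant offset (Joos et al., 2013);
# coefficients are the widely quoted multi-model-mean values for that fit.
MU = np.array([0.2173, 0.2240, 0.2824, 0.2763])
TAU = np.array([394.4, 36.54, 4.304])  # years

def g_carbon(t):
    """G_C(t) = mu_0 + sum_i mu_i exp(-t/tau_i): airborne fraction of a pulse."""
    return MU[0] + np.sum(MU[1:, None] * np.exp(-t[None, :] / TAU[:, None]), axis=0)

def co2_from_emissions(emissions_gtc):
    """C(t) = C_0 + gamma * int_0^t G_C(t-s) E(s) ds, discretized with dt = 1 yr."""
    n = len(emissions_gtc)
    g = g_carbon(np.arange(n, dtype=float))
    return C0 + GAMMA * np.array(
        [np.sum(g[:k + 1][::-1] * emissions_gtc[:k + 1]) for k in range(n)])

# A 100 GtC pulse at t = 0: initially the full pulse is airborne (G_C(0) = 1),
# after which natural sinks draw part of it down.
pulse = np.zeros(50)
pulse[0] = 100.0
c = co2_from_emissions(pulse)
```

Note how the constant offset µ_0 leaves a permanent airborne fraction of roughly a fifth of the pulse, which is what makes cumulative emissions (rather than emission rates) the controlling quantity for peak warming.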
Internally, emissions need to be converted from GtC year−1 to ppm year−1 using the respective molar masses and the mass of the Earth's atmosphere: E_CO2 [ppm year−1] = γ E_CO2 [GtC year−1], with γ = 0.46969 ppm GtC−1. Our estimates of the model's 10 parameters are given in Table 2.
In Fig. 2 we show the results obtained for the RCP emissions. For very-high-emission scenarios we underestimate CO2 concentrations, because for such emissions natural sinks saturate, a process the pulse-size-independent carbon response function cannot adequately capture. However, the upscaling of the radiative forcing is quite successful, yielding a good temperature reconstruction.

Stochastic state-space model
The model outlined above still contains a data-based temperature response function, and it informs only about the mean CMIP5 response. However, our main motivation is to obtain new insights into the possible evolution to a "safe" carbon-free state, and such paths necessarily depend strongly on the variance of the climate and on the risk one is willing to take. This variance in temperature is quite substantial, as is evident from Fig. 1b and c. Therefore we translate our response-function model into a stochastic state-space model and incorporate the variance via suitable stochastic terms.

(Fig. 1c: total anthropogenic radiative forcing (black) and radiative forcing from CO2 only (red), both from the RCPs, together with the forcing reconstructed using the relations above. Fig. 1d: temperature perturbation from the CMIP5 RCP ensemble mean and our reconstruction.)
The response function G_T from the 140-year abrupt quadrupling ensemble is well approximated by a sum of three exponential modes,

  G_T(t) ≈ Σ_i b_i exp(−t / τ_{b_i}).

Although the fit suggests τ_b0 → ∞, we require a finite τ_b0 for temperatures to stabilize at some level. Hence, we choose a long timescale τ_b0 = 400 years, which cannot really be determined from the 140-year abrupt-forcing (CMIP5) runs. By writing each exponential term in the convolution integrals as the solution of a linear ordinary differential equation, the LRT model can be transformed into the 7-dimensional stochastic state-space model (SSSM) shown in Table 1, with parameters in Table 2. Initial conditions are obtained by running the noise-free model forward from pre-industrial conditions (C_P = C_0 and C_i = T_i = 0, i = 1, 2, 3) to the present day, driven by historical emissions. As these temperatures are now given relative to the start of emissions, i.e., 1765, we add the 1961-1990 model mean to the HadCRUT4 dataset to get the observed temperature deviation relative to 1765, and compute T relative to 1861-1880 by adding the 1861-1880 mean of this deviation time series. The major benefit of this formulation is that we can include stochasticity. We introduce additive noise to the carbon model such that the standard deviation of the model response to an emission pulse as reported by Joos et al. (2013) is recovered. For the temperature model we introduce (small) additive noise to recover the (small) CMIP5 control-run standard deviation. In the CMIP5 RCP runs the ensemble variance increases with rising ensemble mean. This calls for the introduction of (substantial) multiplicative noise, which we introduce in T_2, letting these random fluctuations decay over an 8-year timescale. The magnitude of these fluctuations is (especially at high temperatures) likely to be unrealistic when looking at individual time series. However, the focus here is on ensemble statistics.
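The state-space structure described above (four carbon states: a permanent fraction C_P plus three decaying modes; three temperature modes driven by radiative forcing) can be sketched with a simple Euler-Maruyama integration. Everything numerical here is a placeholder: the temperature coefficients, timescales and noise amplitudes are invented for illustration (the actual estimates live in the paper's Table 2), and the multiplicative noise in T_2 is simplified to additive noise.

```python
import numpy as np

GAMMA, C0, ALPHA, A = 0.46969, 278.0, 5.35, 1.48
MU = np.array([0.2173, 0.2240, 0.2824, 0.2763])   # carbon coefficients (Joos-type fit)
TAU = np.array([394.4, 36.54, 4.304])             # carbon timescales (yr)
B = np.array([0.002, 0.01, 0.05])                 # temperature coefficients (placeholder)
TAU_B = np.array([400.0, 30.0, 4.0])              # temperature timescales (yr, placeholder)
SIG_C, SIG_T = 0.1, 0.01                          # noise amplitudes (placeholder)

def simulate(emissions_gtc, rng, dt=1.0):
    """Euler-Maruyama sketch of the 7-D SSSM; returns GMST anomaly sum_i T_i."""
    c_p, c = C0, np.zeros(3)       # permanent fraction + three carbon modes
    temp = np.zeros(3)             # three temperature modes
    out = np.empty(len(emissions_gtc))
    for k, e_gtc in enumerate(emissions_gtc):
        e_ppm = GAMMA * e_gtc
        forcing = A * ALPHA * np.log((c_p + c.sum()) / C0)
        # carbon: permanent uptake plus decaying modes (additive noise on C_P)
        c_p += MU[0] * e_ppm * dt + SIG_C * rng.normal() * np.sqrt(dt)
        c += (-c / TAU + MU[1:] * e_ppm) * dt
        # temperature modes driven by radiative forcing (additive noise only in
        # this sketch; the paper puts multiplicative noise in T_2)
        temp += (-temp / TAU_B + B * forcing) * dt \
                + SIG_T * rng.normal(size=3) * np.sqrt(dt)
        out[k] = temp.sum()
    return out

rng = np.random.default_rng(0)
T = simulate(np.full(100, 10.0), rng)  # constant 10 GtC/yr for a century
```

Running an ensemble of such trajectories (different seeds) is what yields the probability distribution p(T_2100) used later to define the PNR.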

Transition pathways
The SSSM described in the previous section is forced with fossil CO2 emissions. We assume that, in the absence of any mitigation actions, emissions increase from their initial value E_0 at an exponential rate g = 0.01 year−1 due to economic and population growth. Political decisions cause emissions to decrease from a starting year t_s onward, as fossil energy generation is replaced by non-GHG-producing forms such as wind, solar and water (mitigation m) and by an increasing share of fossil energy sources whose emissions are not released but captured and stored away by carbon capture and storage (abatement a).
In addition, negative emission technologies may be employed. They cause a direct reduction in the atmospheric CO2 concentration and are here modeled as a saturating exponential, E_neg(t) = E_neg,∞ (1 − exp(−rt)). Mitigation and abatement are modeled in a very simple way by letting both increase linearly from the starting year t_s until emissions are brought to zero:

  E_CO2(t) = E_0 e^{gt} [1 − m(t)] [1 − a(t)],   m(t) = min{ m_0 + m_1 (t − t_s), 1 } for t ≥ t_s,

and similarly for a(t), with constants m_0 and a_0 respectively giving the mitigation and abatement shares at the start of the scenario and m_1 the incremental year-to-year increase. The simplified model (Eq. 14) is very well able (not shown) to reproduce the integrated assessment model (IAM) pathways that fulfill the NDCs until 2030 and afterwards reach the 2 K target with a 50-66 % probability (Rogelj et al., 2016a). These pathways are exemplary of those that continue on the low-commitment path for a while, followed by strong and decisive action. From them we obtain a family of negative emission scenarios, out of which we pick a pathway with strong negative emissions. Using the starting year 2061, it is very well approximated by setting E_neg,∞ = 4.21 GtC and r = 0.0283 year−1.
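The pathway family can be sketched as follows. This is a hedged reconstruction of Eq. (14): the normalization by (1 − m_0)(1 − a_0), which pins the 2015 emission level to E_0, is our assumption and is not spelled out in the text; all default parameter values other than g, m_0, r and the 2061 start year are illustrative.

```python
import numpy as np

def emissions(years, e0=10.0, g=0.01, t_s=2025, m0=0.14, m1=0.02,
              a0=0.0, a1=0.0, e_neg_inf=0.0, r=0.0283, t_neg=2061):
    """Fossil emissions (GtC/yr): exponential growth times shrinking fossil
    share, minus optional negative emissions E_neg(t) = E_inf (1 - e^{-r t})."""
    t = np.asarray(years, dtype=float)
    ramp = np.clip(t - t_s, 0.0, None)           # linear ramp starts at t_s
    m = np.minimum(m0 + m1 * ramp, 1.0)          # renewable share
    a = np.minimum(a0 + a1 * ramp, 1.0)          # abated share
    gross = e0 * np.exp(g * (t - 2015.0)) * (1 - m) * (1 - a) \
        / ((1 - m0) * (1 - a0))                  # normalized so E(2015) = e0
    dt_neg = np.clip(t - t_neg, 0.0, None)
    neg = np.where(t >= t_neg,
                   e_neg_inf * (1.0 - np.exp(-r * dt_neg)), 0.0)
    return gross - neg

yrs = np.arange(2015, 2101)
e_mm = emissions(yrs, t_s=2025, m1=0.02)   # a "moderate mitigation"-like path
```

With m_1 = 0.02 and m_0 = 0.14, the renewable share reaches 1 about 43 years after t_s, at which point fossil emissions are zero.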

Point of no return
With the emission scenarios and the SSSM, which returns CO2 concentrations and GMST for any such scenario, one can now address the issue of transitioning from the present day (the year 2015) to a carbon-free era so as to avoid catastrophic climate change. We need to take into account both the target threshold and the risk one is willing to take of exceeding it. The maximum amount of cumulative CO2 emissions that allows for reaching the 1.5 and 2 K targets, as a function of the risk tolerance, is called the safe carbon budget (SCB). It is well established in the literature (Zickfeld et al., 2009) but does not contain information on how these emissions are spread in time. This is where the PNR comes in: the PNR is the point in time at which starting mitigating action becomes insufficient to stay below a specified target with a chosen risk tolerance. Concretely, let the temperature target T_max be the maximum allowable warming and let β denote the probability of staying below that target (a measure of the risk tolerance). For example, the case T_max = 2 K and β = 0.9 corresponds to a 90 % probability of staying below 2 K warming, i.e., 90 out of 100 realizations of the SSSM, started in 2015 and integrated until 2100, do not exceed 2 K in the year 2100.
Then, in the context of Eq. (14), the PNR is the earliest t_s that does not result in reaching the defined "Safe State" (van Zalinge et al., 2017) in terms of T_max and β. It is determined from the probability distribution p(T_2100) of GMST in 2100.
Both SCB and PNR depend on temperature target, climate uncertainties and risk tolerance, but the PNR also depends on the aggressiveness of the climate action considered feasible (here given by the value of m 1 ). This makes the PNR such an interesting quantity, since the SCB does not depend on the time path of emission reductions.
Clearly there is a close connection between the PNR and the SCB. Indeed, one could define a PNR also in terms of the ability to reach the SCB. The one-to-one relation between cumulative emissions and warming gives the PNR in "carbon space". Its location in time, however, depends crucially on how fast a transition to a carbon-neutral economy is feasible.
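The PNR search described above amounts to scanning the start year t_s and estimating, for each, the ensemble probability of staying below T_max. The sketch below illustrates the logic only: the "climate model" is a deliberately crude stand-in (warming linear in cumulative emissions with a TCRE-like slope of ~1.6 K per 1000 GtC plus Gaussian noise, all invented numbers), whereas the paper uses the full SSSM.

```python
import numpy as np

def cumulative_emissions(t_s, e0=10.0, g=0.01, m0=0.14, m1=0.02):
    """Cumulative fossil emissions 2015-2100 (GtC) for a linear-ramp scenario."""
    years = np.arange(2015, 2101)
    m = np.minimum(m0 + m1 * np.clip(years - t_s, 0, None), 1.0)
    return np.sum(e0 * np.exp(g * (years - 2015.0)) * (1 - m) / (1 - m0))

def p_below(t_s, t_max, rng, n_ens=4000):
    """Toy ensemble estimate of P(T_2100 < T_max) for start year t_s."""
    e_cum = cumulative_emissions(t_s)
    t_2100 = 1.0 + 1.6e-3 * e_cum + rng.normal(0.0, 0.25, size=n_ens)
    return np.mean(t_2100 < t_max)

def point_of_no_return(t_max, beta, rng):
    """Latest start year whose success probability still reaches beta."""
    pnr = None
    for t_s in range(2015, 2101):
        if p_below(t_s, t_max, rng) >= beta:
            pnr = t_s      # this start year still meets the target
        else:
            break          # probability only falls with later starts
    return pnr

rng = np.random.default_rng(1)
pnr_2k = point_of_no_return(2.0, 0.67, rng)
```

Tightening β (e.g., 0.95 instead of 0.67) shifts the admissible mean warming down by about two noise standard deviations, pulling the PNR years earlier, which is the qualitative effect reported in the Results.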
For details on the scenarios, we refer to Rogelj et al. (2016a). With carbon budgets rapidly running out and the PNR approaching fast, negative emissions may have to become an essential part of the policy mix. Such policies are cheap but may only be a temporary fix and may lead to undesirable spillover effects on neighboring countries (e.g., Wagner and Weitzman, 2015). We do not pursue these discussions further, as they are beyond the scope of the present paper.

Results
To demonstrate the quality of the SSSM, we initialize it at pre-industrial conditions, run it forward and compare the results with those of the CMIP5 models. The SSSM is well able to reproduce the CMIP5 model behavior under the different RCP scenarios (Fig. 3, shown for RCP2.6 and RCP4.5). As these scenarios are very different in terms of rate of change and total cumulative emissions, this is not a trivial finding. It is actually remarkable that the SSSM, which is based on a limited number of CMIP5 model ensemble members, performs so well. As an example, the RCP2.6 scenario contains substantial negative emissions, responsible for the downward trend in GMST, which our SSSM correctly reproduces. The mean response for RCP8.5 is slightly underestimated (not shown), because the uncertainty in the carbon cycle plays a rather minor role compared to that in the temperature model. In addition, for such large emissions, positive feedback loops set in from which our SSSM abstracts. Under strong forcing, the temperature perturbation T is very closely log-normally distributed, while for weak forcing scenarios (e.g., RCP2.6 and RCP4.5) the distribution is approximately Gaussian. The CO2 concentration is found to be Gaussian distributed for all RCP scenarios. These findings (log-normal temperature and Gaussian CO2 concentration) result from the multiplicative and additive noise in the temperature and carbon components of the SSSM, respectively.
To determine the SCB, 6000 emission reduction strategies (with E_neg(t) = 0) were generated and, using the SSSM, an 8000-member ensemble was integrated for each of these emission scenarios starting in 2015. Emission scenarios are generated from Eq. (14) by letting a(t) = 0, drawing m_0 uniformly from [0, 0.7] and drawing m_1 from a beta distribution (with density p(m) = m^(α−1) (1 − m)^(δ−1) / B(α, δ), where B(α, δ) is the beta function; the parameters are chosen as α = 1.2, δ = 3), with the [0, 1] interval scaled such that m = 1 is reached in 2080 at the latest. The beta distribution is chosen for practical reasons to sample (m_0, m_1) pairs. As m_0 is drawn from a uniform distribution, doing likewise for m_1 would result in many pathways with very quick mitigation and low cumulative emissions. Choosing a beta distribution for m_1 makes draws of small m_1 much more likely and leads to a better sampling of high-cumulative-emission scenarios. The choice of distribution has no consequences for the results.
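The scenario sampling can be sketched as follows. The rescaling of the beta draws so that every pathway reaches m = 1 by 2080 (i.e., m_1 ≥ (1 − m_0)/65 from a 2015 start) is our reading of "scaled such that m = 1 at the latest in 2080"; the upper end of the scaled interval is likewise an assumption.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 6000

# m0: current renewable share, uniform on [0, 0.7]
m0 = rng.uniform(0.0, 0.7, size=n)

# slowest admissible ramp: reach m = 1 by 2080 starting from 2015 (assumption)
m1_min = (1.0 - m0) / 65.0

# skewed Beta(1.2, 3) draws on [0, 1], rescaled onto [m1_min, 1]
x = rng.beta(1.2, 3.0, size=n)
m1 = m1_min + (1.0 - m1_min) * x
```

The skew of Beta(1.2, 3) toward zero is what concentrates draws at small m_1, i.e., at slow-mitigation, high-cumulative-emission pathways, exactly the region that would be undersampled by a uniform draw.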
The temperature anomaly in 2100 (T_2100) as a function of cumulative CO2 emissions E is shown in Fig. 4. The same calculation is also shown for the deterministic case without climate uncertainty (no noise in the SSSM). In Fig. 4, the SCB is given by the point on the E axis where the (colored) line corresponding to a chosen risk tolerance crosses the (horizontal) line corresponding to a chosen temperature threshold T_max. The curves T_2100 = f(E) (Fig. 4) are very well described by three-parameter expressions with suitable coefficients a, b and c, each depending on the tolerance β. For the range of emissions considered here, a linear fit would also be reasonable. However, our expression works as well for cumulative emissions in the range of business as usual (when fitting the parameters on suitable emission trajectories). From Fig. 4 we easily find the SCB for any combination of T_max and β, as shown in Table 3. Allowable emissions are drastically reduced when enforcing the target with a higher probability (following the horizontal lines from right to left in Fig. 4). These results show in particular the challenge posed by the 1.5 K target compared to the 2 K target.
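Reading the SCB off such a quantile curve is a simple crossing search, sketched below. The quadratic toy curve stands in for the fitted f(E) at β = 0.67 (the paper's actual functional form and coefficients are not reproduced here); only the crossing logic is the point.

```python
import numpy as np

def scb(e_grid, t_quantile, t_max):
    """Largest E on the grid whose T_2100 quantile is still below T_max."""
    below = t_quantile <= t_max
    return e_grid[below][-1] if below.any() else 0.0

e_grid = np.linspace(0.0, 1500.0, 301)            # cumulative emissions, GtC
t_q67 = 0.9 + 2.8e-3 * e_grid - 3e-7 * e_grid**2  # toy 67th-percentile curve
budget_2k = scb(e_grid, t_q67, 2.0)               # toy SCB for T_max = 2 K
```

Following the horizontal T_max line from right to left in Fig. 4 corresponds to calling `scb` with a smaller t_max (or a higher-β quantile curve), which shrinks the returned budget.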
From IPCC AR5 (IPCC, 2013) we find cumulative post-2015 emissions of 377 to 517 GtC in order to "likely" stay below 2 K, while we find an SCB of 424 GtC for T_max = 2 K and β = 0.67, which lies in the same range. Like Millar et al. (2017a), we find approximately 200 GtC to stay below 1.5 K with β = 0.67.
To determine the PNR, we resort to three illustrative choices for modeling the abatement and mitigation rates, with E_neg(t) = 0.
Following Eq. (14), we construct fast mitigation (FM) and moderate mitigation (MM) scenarios with m_1 = 0.05 and 0.02, respectively. In addition, in an extreme mitigation (EM) scenario, m = 1 is reached instantaneously. This corresponds to the most extreme physically possible scenario and serves as an upper bound.
When varying t_s to find the PNR for the three scenarios, we always keep m_0 = 0.14 and a_0 = 0 at their 2015 values (World Energy Council, 2016).
As an example, t_s = 2025 leads to total cumulative emissions from 2015 onward of 109, 183 and 335 GtC for the mitigation scenarios EM, FM and MM, respectively. MM is the most modest scenario, but it is actually quite ambitious, considering that with m = 0.1355 in 2005 and m = 0.14 in 2015 (World Energy Council, 2016) the current year-to-year increases in the share of renewable energies are very small. Figure 5 shows the probabilities of staying below the 1.5 and 2 K thresholds in 2100 as a function of t_s for different policies, including FM (m_1 = 0.05) and MM (m_1 = 0.02), while the EM policy bounds the unachievable region. This region is clearly larger for the 1.5 K than for the 2.0 K target, and it shrinks when negative emissions are included. From the plot we can directly read off the consequences of delaying action until a given year. For example, if policy makers were to implement the MM strategy only in 2040, the chances of reaching the 1.5 K (2.0 K) target would be only 2 % (47 %). We conclude that the remaining "window of action" may be small, but a window still exists for both targets. For example, the 2 K target is reached with a probability of 67 % even when the start of MM is delayed until 2035. Reaching the 1.5 K target, however, appears unlikely, as MM would have to start in 2018 for a probability of 67 %; when requiring a high (≥ 0.9) probability, the 1.5 K target is impossible to reach with the MM scenario. The PNR for the different targets and probabilities is shown in Table 4 and Fig. 5.
Including strong negative emissions delays the PNR by 6-10 years, which may be very valuable especially for ambitious targets. For example, one can then reach 1.5 K with a probability of up to 66 % in the MM scenario when acting before 2026, 8 years later than without.
The PNR varies substantially for slightly different temperature targets. This also illustrates the importance of the temperature baseline relative to which T is defined, as has been found previously (Schurer et al., 2017). Switching to a (lower) 18th-century baseline increases current levels of warming by 0.13 K (Schurer et al., 2017) and thereby brings the PNR forward. For example, for a maximum temperature threshold of 1.5 K, the PNR moves from 2022 to 2016 in the MM scenario and from 2038 to 2033 in the EM scenario.

Table 4. Point of no return as a function of threshold and safety probability β, without and with strong negative emissions.

It is clear that an energy transition more ambitious than RCP2.6 is required to stay below 1.5 K with some acceptable probability, and whether that is feasible is doubtful. For all other RCP scenarios, exceeding 2 K in this century is very likely (Fig. 6).
The parameter sensitivities of SCB and PNR were determined by varying each parameter by ±5 %. Table 5 shows the results for selected parameters for a small (T max = 1.5 K, β = 0.95), intermediate (T max = 1.5 K, β = 0.5) and large (T max = 2 K, β = 0.5) SCB, corresponding to a close, intermediate and far PNR.
The biggest sensitivities are found for the radiative forcing parameter A. The parameters of the carbon model (µ_i, τ_i) have only small impacts on the SCB, on the order of 0-17 GtC, with larger changes found for larger absolute values of the SCB. The temperature-model parameters are more important, changing the SCB by up to around 10 % for large and 50 % for small SCB values. The model is particularly sensitive to changes in the intermediate timescale (b_2, τ_b2). The PNR sensitivities are generally small. The most relevant, yet still small, sensitivities are found in the temperature-model parameters. For example, a 10 % error in τ_b2 can move the PNR by 3-4 years.
The sensitivity of the SCB and PNR to the noise amplitudes is small, with the largest values found for the multiplicative noise amplitude σ_T2, which is responsible for most of the spread of the temperature distribution. Increasing the noise amplitudes decreases the SCB, in accordance with the expectation that larger climate uncertainty leads to tighter constraints.
It is useful to remember that the stochastic formulation of our model is designed with the explicit purpose of incorporating parameter uncertainty in a natural way via the noise terms, without having to make specific assumptions about the uncertainties of individual parameters.

Summary, discussion and conclusions
We have developed a novel stochastic state-space model (SSSM) that accurately captures the basic statistical properties (mean and variance) of the CMIP5 RCP ensemble, allowing us to study warming probabilities as a function of emissions. It represents an alternative to approaches that place the stochasticity in the parameters rather than in the state. Although the model is highly idealized, it captures simulations of both temperature and carbon responses to the RCP emission scenarios quite well. A weakness of the SSSM is the simulation of temperature trajectories beyond 2100 and for high-emission scenarios. The large multiplicative noise factor leads, especially at high mean warming, to immensely volatile trajectories that in all likelihood are not physical at the level of individual realizations (the distribution itself is still well behaved). It might be worthwhile to investigate how this could be improved. Another weakness, in the carbon component of the SSSM, is that the real carbon cycle is not pulse-size independent. Hence, using a single constant response function has inherent problems, in particular when running very high-emission scenarios, because the efficiency of the natural carbon sinks into the ocean and land reservoirs is a function of both temperature and the reservoir sizes. The SSSM therefore has slight problems reproducing CO2 concentration pathways (Fig. 2), a price we accept as our focus is on the CMIP5 temperature reproduction.
Taking account of non-CO2 emissions more fully, beyond our simple scaling, and also avoiding temporary overshoots of the temperature caps would reduce the carbon budgets (Rogelj et al., 2016b) and thus lead to earlier PNRs than given here. The values given here might therefore be a little too optimistic.
In Millar et al. (2017b), the authors draw a different conclusion from studying a similar problem. In their FAIR model, they introduce response functions that dynamically adjust parameters based on warming to represent sink saturation. Consequently, their model gives much better results in terms of CO2 concentrations. It would be an interesting lead for future research to conduct our analysis (in terms of SCB and PNR) with other simple models (such as FAIR or MAGICC) to discover similarities and differences. However, only rather low-emission scenarios are consistent with the 1.5 or 2 K targets, so we do not expect such nonlinearities to play a major role; indeed, our carbon budgets are very similar to those of Millar et al. (2017a).
The concept of a point of no return introduces a novel perspective into the discussion of carbon budgets that is often centered on the question of when the remaining budget will have "run out" at current emissions. In contrast, the PNR concept recognizes the fact that emissions will not stay constant and can decay faster or slower depending on political decisions.
With these caveats in mind, we conclude that, first, the PNR is still relatively far away for the 2 K target: with the MM scenario and β = 67 % we have 17 years left to start. If all emissions could be set to zero instantaneously, the PNR is even delayed into the 2050s. Considering the slow speed of large-scale political and economic transformations, decisive action is still warranted, as the MM scenario is a large change compared to current rates. Second, the PNR is very close or already passed for the 1.5 K target. Here more radical action is required: 9 years remain to start the FM policy to avoid a 1.5 K increase with a 67 % chance, and strong negative emissions give us 8 years under the MM policy.
Third, we can clearly show the effects of changing T_max, β and the mitigation scenario. Switching from 1.5 to 2 K buys an additional ~16 years. Allowing a one-third, instead of a one-tenth, exceedance risk buys an additional 7-9 years. Allowing for the more aggressive FM policy instead of MM buys an additional 10 years. This allows us to assess trade-offs, for example, between tolerating higher exceedance risks and implementing more radical policies.
Fourth, negative emissions can offer a brief respite but only delay the PNR by a few years, not taking into account the possible decrease in effectiveness of these measures in the long term (Tokarska and Zickfeld, 2015).
In this work a large ensemble of simulations was used in order to average over stochastic internal variability. This allows us to determine the point in time where a threshold is crossed at a chosen probability level. Such an ensemble is not possible for more realistic models, nor do GCMs agree on details of internal variability. Therefore, in practice, the crossing of a threshold will likely be determined with hindsight and using long temporal means. This fact should lead us to be more cautious in choosing mitigation pathways.
We have shown the constraints put on future emissions by restricting the GMST increase to below 1.5 or 2 K, and the crucial importance of the safety probability. Further (scientific and political) debate is essential on the right values for both the temperature threshold and the probability. Our findings are sobering in light of the bold ambition of the Paris Agreement, and add to the sense of urgency to act quickly before the PNR has been crossed.

Data availability. The study is based on publicly available data sets as described in the Methods section. Model and analysis scripts and outputs are available on request from the corresponding author.