Emergent constraints for the climate system as effective parameters of bulk differential equations
Chris Huntingford
Peter M. Cox
Mark S. Williamson
Joseph J. Clarke
Paul D. L. Ritchie
Abstract. Planning for the impacts of climate change requires accurate projections by Earth System Models (ESMs). ESMs, as developed by many research centres, estimate changes to weather and climate as atmospheric Greenhouse Gases (GHGs) rise, and they inform the influential Intergovernmental Panel on Climate Change (IPCC) reports. ESMs are advancing the understanding of key climate system attributes. However, there remain substantial inter-ESM differences in their estimates of future meteorological change, even for a common GHG trajectory, and such differences make adaptation planning difficult. Until recently, the primary approach to reducing projection uncertainty has been to place emphasis on simulations that best describe the contemporary climate. Yet a model that performs well for present-day atmospheric GHG levels may not necessarily be accurate for higher GHG levels, and vice versa.
The relatively new approach of Emergent Constraints (ECs) is gaining much attention as a technique to reduce uncertainty across climate models. This method involves searching for an inter-ESM link between a quantity that we can measure now and another of major importance in describing future climate. Combining the contemporary measurement with this relationship refines the future projection. Identified ECs exist for the thermal, hydrological and geochemical cycles of the climate system. As ECs grow in influence on climate policy, the method is under intense scrutiny, creating a requirement to understand them better. We hypothesise that, as many Earth System components vary in both space and time, their behaviours often satisfy large-scale Partial Differential Equations (PDEs). Such PDEs are valid at coarser scales than the equations coded in ESMs, which capture finer, high-resolution gridbox-scale effects. We suggest that many ECs link to such an effective hidden PDE that is implicit in most or all ESMs. An EC may exist because its two quantities depend similarly on an ESM-specific internal bulk parameter in such a PDE, with measurements then constraining and revealing its (implicit) value. Alternatively, well-established process understanding coded at the ESM gridbox scale may, when aggregated, generate a bulk parameter with a common "emergent" value across all ESMs. This single parameter may link uncertainties in a contemporary climate driver to those of a climate-related property of interest, the EC constraining the latter by measurements of the former. We offer illustrative examples of these concepts with generic differential equations and their solutions, placed in a conceptual EC framework.
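As a minimal sketch of the EC framework described above (not the paper's code; the toy model, parameter values and the "observed" amplitude are assumptions chosen only for illustration), consider an ensemble of zero-dimensional energy-balance "models" C_i dT/dt = Q cos(omega t) + F0 that differ only in their bulk heat capacity C_i. Both the seasonal-cycle amplitude (observable now) and the long-term warming rate scale as 1/C_i, so a linear emergent relationship appears across the ensemble, and an observation of the former constrains the latter:

```python
# Minimal illustrative sketch (not the paper's code): a toy ensemble of
# zero-dimensional energy-balance "models", C_i dT/dt = Q cos(omega t) + F0,
# differing only in their bulk heat capacity C_i.  The seasonal response
# amplitude x_i = Q / (C_i omega) is observable now; the long-term warming
# rate y_i = F0 / C_i is the quantity to be constrained.  Both scale as
# 1 / C_i, so a linear emergent relationship appears across the ensemble.
# All parameter values and the "observed" amplitude are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

Q, omega, F0 = 30.0, 2.0 * np.pi, 1.0   # seasonal forcing, annual frequency, long-term forcing
C = rng.uniform(5.0, 20.0, size=15)     # one bulk heat capacity per toy "ESM"

x = Q / (C * omega)                     # predictor: seasonal-cycle amplitude of T
y = F0 / C                              # predictand: long-term warming rate dT/dt

# Emergent relationship: regress the predictand on the predictor across the ensemble.
slope, intercept = np.polyfit(x, y, 1)

# Constrain the projection with a hypothetical "observation" of the seasonal amplitude.
x_obs = 0.6
y_constrained = slope * x_obs + intercept

print(f"emergent slope ~ {slope:.3f} (theory: F0*omega/Q = {F0 * omega / Q:.3f})")
print(f"constrained warming rate ~ {y_constrained:.3f}")
```

Because both axes share the same 1/C_i dependence, the regression slope recovers F0*omega/Q exactly in this idealised case; real ECs add noise and structural inter-model differences on top of this picture.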
Status: closed
RC1: 'Comment on esd-2022-43', Anonymous Referee #1, 04 Nov 2022
Review of Huntingford et al 2023:
On one hand, this paper is clear, well-written, and its PDE examples are simple, relevant, and pleasant to work through. On the other hand, I didn’t really learn anything from reading this. For example, I can’t imagine anyone understanding Cox et al (2018) without having a deep understanding of the notion that the emergent equations governing temperature changes on various timescales are linked via heat capacity. This left me wondering whether the paper is worth publishing. Ultimately, I think the answer is yes because if someone didn’t intuitively understand that emergent constraints occur due to links between underlying governing equations, this paper would do a nice job of introducing them to the concept. I doubt the paper will be cited much, but that doesn’t mean it shouldn’t be published.
Minor comments (Note my convention is P2 L1 = Page 2, Line 1):
- P2 L1: observationalists would disagree that ESMs form the basis of climate research. I tend to say they’re a pillar of climate research.
- P2 L6: It’s not accurate to say that ESMs are typically forced with historical and scenario GHGs. A lot of time is spent on PI control, abrupt4xCO2, 1%CO2, etc. Minor rewording is needed.
- P2 L19: “main possibly simplest answer” is awkward grammar
- P3 L30 – P4 L2: The first sentence here isn’t very clear. I think you are saying that Schlund and others found that ECs based on CMIP5 were generally worse when applied to CMIP6. As written, it sounds like any EC, including ECs developed from CMIP6 data, would have wider bounds. I also found your wording a bit confusing because wider bounds could come from worse correlations between EC predictor and predictand OR from larger spread in the observations used to constrain. I guess the problem must be the former, but it takes the reader some unnecessary thought to get to that conclusion. Following on this, I think the obvious explanation for larger spread in CMIP6 is that the ECs from CMIP5 were overtrained: they are capturing noise rather than real EC signal. I’m confused how this possibility isn’t even in your proposed reasons at all.
- P5 L15-16: you introduce T* here but don’t use it again except P8 L3. I suggest deleting both T* references. In particular, the wording of the first intro to T* was very confusing (and I think, unnecessary).
- P5 L26-28: When you say “running mean”, I immediately wonder what the averaging period is. I think it would be better to call this statistic the “annual average”. Relatedly, the running mean itself isn’t a measure of climate change. The time derivative of the running mean is your proxy for climate change. But of course, the annual average isn’t special in this regard – the long-term average of the time derivative of the instantaneous T(t) equation would give the same answer because the derivative is a linear operator. (An explicit form of this identity is sketched after this comment list.)
- P6 paragraph starting L8 and P8 paragraph starting L23: I think this discussion can be improved. I think the big point you’re trying to make is that while the fact that there exists a predictive relationship between the observable and the future quantity of interest allows you to predict ECS, the slope of that relationship provides interesting information about the physical equations that underpin that relationship. I think you are further pointing out that even though there may be uncertain terms in the equation governing the current-climate variable and in the equation governing future change, those uncertain terms sometimes cancel out when the quantity you’re actually interested in is the ratio between predictor and predictand. As it stands, I don’t think it is interesting that uncertainty in either of 2 parameters would give rise to the intermodel spread needed to compute an emergent constraint. I also don’t think you adequately explained why bi/H0i would be constant across models.
- This is a minor point, but the seasonal cycle in eq 4 and eq 11 won’t be exactly equal to the observed seasonal cycle in a warming planet (H0>0) because the planet will have warmed a bit in the 6 months between winter and summer.
- P8 L17: defining your current-climate metric as d/dt(annual-ave T(x=0)) makes sense, but multiplying it by sqrt(t) seems contrived. If you have such an exact understanding of the underlying equations, you’d probably already know what H0 was, so a regression would be unnecessary!
- P11 L34-P12 L1: I can’t imagine how a real emergent constraint wouldn’t have a physical underpinning that can be expressed as an equation. We may not know what that equation is, but if there truly isn’t an underlying equation behind an empirical relationship, how could that relationship possibly be real?
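To spell out the linear-operator point in the running-mean comment above (a standard identity, not text from the manuscript): for any averaging window τ,

\[
\frac{d}{dt}\left[\frac{1}{\tau}\int_{t-\tau}^{t} T(t')\,dt'\right]
= \frac{T(t) - T(t-\tau)}{\tau}
= \frac{1}{\tau}\int_{t-\tau}^{t} \frac{dT}{dt'}\,dt',
\]

i.e. the time derivative of the running mean equals the running mean of the instantaneous time derivative, regardless of the averaging period chosen.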
I don’t think the emergent relationships that distill into an EC are necessarily (or even typically) PDEs. In your examples, the fine-scale governing equations are PDEs, but the equations you derive for the seasonal cycle and warming tendency are not. Similarly, concepts like “if you don’t have much cloud in the current climate, then you don’t have much cloud to lose in the future” are fundamentally connecting model state to model change. This doesn’t invalidate any of this work, but a reframing of the title and some rewording in the abstract seem warranted.
Citation: https://doi.org/10.5194/esd-2022-43-RC1
AC1: 'Reply on RC1', Chris Huntingford, 10 Dec 2022
RC2: 'Comment on esd-2022-43', Anonymous Referee #2, 08 Nov 2022
The paper "Emergent constraints for the climate system as effective parameters of bulk differential equations" by Chris Huntingford et al. provides a formal description of emergent constraints as parameters of large-scale partial differential equations (PDEs). In contrast to small-scale PDEs explicitly coded into Earth system models (ESMs), these large-scale PDEs are not directly included in the models, but emerge across ESMs when aggregated across larger scales. Huntingford et al. provide two example PDEs derived from simple thermal models. By assuming different bulk parameters (e.g., heat capacities) for the different ESMs, they show that these PDEs can be used to derive emergent relationships between short-term and long-term responses of the system, which ultimately can be used as emergent constraints with appropriate measurements of the real Earth system.
General Comments
This paper reads well and provides an interesting approach that allows the derivation of emergent constraints from bulk PDEs. I agree with the authors that an emergent constraint discovery method based on physical reasoning and mathematical models is much more desirable than data mining, and will eventually lead to more credible and robust emergent constraints. However, I have some concerns about the relevance of this study regarding "real" emergent constraints.
Currently, a large part of the argumentation of the paper is based on two very simple PDEs. Especially in the context of a changing climate (which is a necessary condition here), I think the equations are too simplified. Since the PDEs are missing a "loss" term, a constant forcing will lead to an infinitely rising temperature, which is not realistic. For example, what happens if you add linear loss terms (linear feedback) -λ*T to your PDEs (e.g., so that your eq. (2) is similar to eq. (1) of Cox et al. 2018)? Could you still derive the emergent relationships from these new equations? I can imagine that there are certain conditions (e.g., small times, small λ, large forcings, …) under which your original equations are good approximations, but it would be good to guide the reader in detail through this process. Additionally, it would be very helpful if you could provide more details on these emerging bulk equations themselves and why they should be present in an ensemble of ESMs. Do you have any recommendations on how to find such PDEs? An example with a real emergent constraint would also be incredibly helpful. All this will ultimately help the reader to gain more trust in your framework.
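As a hedged sketch of the suggested modification (assuming, for illustration only, that the paper's bulk equation reduces to a zero-dimensional form C dT/dt = F under constant forcing F; the manuscript's exact notation may differ), adding the linear feedback gives

\[
C\,\frac{dT}{dt} = F - \lambda T, \quad T(0) = 0
\;\;\Rightarrow\;\;
T(t) = \frac{F}{\lambda}\left(1 - e^{-\lambda t / C}\right)
= \frac{F}{C}\,t\left(1 - \frac{\lambda t}{2C} + \dots\right),
\]

so the undamped, linearly rising solution is recovered whenever λt/C ≪ 1, i.e. for times short compared with the feedback timescale C/λ; this is the kind of condition under which the original equations remain good approximations.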
Finally, two technical comments: first, it would be very helpful if you could use continuous line numbers (and not start with "1" on every page) and also add line numbers to figure captions. Second, please consider depositing your code in a publicly accessible repository (e.g., Zenodo) to make your analysis more transparent and reproducible for other researchers.
Specific Comments
- P.2, l.30: Maybe add a reference here? E.g., Knutti et al. (2017), https://doi.org/10.1002/2016GL072012
- P.3, l.4: It would be more precise to refer to "observational" data here (alternatively "observation-based").
- P.3, l.12: A better reference for this might be Hall & Qu (2006), https://doi.org/10.1029/2005GL025127. You might also want to cite Allen & Ingram (2002), https://doi.org/10.1038/nature01092 here.
- P.4, l.1-2: It might be helpful for the reader to add the key conclusion(s) of the discussion of Fasullo et al. (2015) you mention here.
- P.4, l.29: I guess technically it’s a function of the total noise, so ε and η, not only ε.
- P.5, l.18: Required for what?
- P.6, l.15: It’s not only the data points (I guess by "data points" you are referring to the (x, y) tuples you get from the models?), but also the measurements that constrain the forcing element b.
- P.6, l.15-16: I think this sentence is not clear enough: "With the forcing uncertainties common for both short– and long–term drivers". You need to explicitly assume that bi/H0i=const across models; you should mention that.
- P.6, eq. (8): You might want to refer to Fourier’s law here.
- P.8, l.17: Why don’t you simply divide T(0, t) by sqrt(t) to get a y that is not dependent on t? (A sketch of the underlying sqrt(t) scaling is given after this list.)
- P.12, l.10: I think this classification only applies to linear second-order PDEs, not to every PDE.
- P.12, l.10-12: Can you elaborate what you exactly mean by these "one-to-one mappings" and why this should be the case? This is not clear to me.
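Regarding the sqrt(t) suggestion at P.8, l.17 above, a brief sketch assuming the diffusion example reduces to the textbook constant-flux, semi-infinite problem (the manuscript's exact symbols and boundary conditions may differ): for ∂T/∂t = κ ∂²T/∂x² on x ≥ 0 with surface flux −k ∂T/∂x|_{x=0} = H0 and T(x,0) = 0, the surface temperature is

\[
T(0,t) = \frac{2 H_0}{k}\sqrt{\frac{\kappa t}{\pi}},
\qquad\text{so}\qquad
\frac{T(0,t)}{\sqrt{t}} = \frac{2 H_0}{k}\sqrt{\frac{\kappa}{\pi}}
\]

is independent of time, which is why dividing the surface temperature by sqrt(t) yields a predictor that does not depend on the sampling time.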
Technical Corrections
- P.3, l.19-20: The second part of this sentence is hard to understand, please rephrase.
- P.3, l.20-21: This sentence is also not easy to understand, please rephrase.
- P.5, l.10: I wonder if your notation would be simpler if your variable t represented seconds, not years. Then you could absorb the seconds-per-year factor into the frequency ω and drop all the primes for the heat capacity altogether.
- P.5, l.26: There is a "." missing after “Eq”.
- P.8, l.22: There is a "." missing after the end of the sentence.
- P.11, l.5: It would be good to add a name for the symbol epsilon here, maybe “error term” or similar.
- P.11, l.16-17: Something is wrong with this sentence.
- P.14, l.17: This reference points to a preprint, please update with the published reference.
- Caption of Fig. 1: I think there is a word missing after "This response contains a seasonal (x axis) and long–term (y axis, with seasonality ignored)".
- Caption of Fig. 2: "seasonal" forcing instead of "season" forcing. Second to last line: the "measured" value of ΔTS.
- Fig. 2: The argument in the cosine of the response term has a different sign than eq. (10). This does not matter due to the symmetry of the cosine, but should be identical to have a consistent notation.
- Fig. 2: The square root in the denominator of the second part of the response is missing. Same for the x and y axis label in (b).
- Figs. 1 and 2: The index "p" is missing for the heat capacity. In addition, sometimes the prime is missing.
- Figs. 1 and 2: Why are some parts of the formulas underlined?
Citation: https://doi.org/10.5194/esd-2022-43-RC2
AC2: 'Reply on RC2', Chris Huntingford, 10 Dec 2022