Articles | Volume 15, issue 5
https://doi.org/10.5194/esd-15-1301-2024
© Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License.
Uncertainty-informed selection of CMIP6 Earth system model subsets for use in multisectoral and impact models
Download
- Final revised paper (published on 15 Oct 2024)
- Preprint (discussion started on 05 Jan 2024)
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on esd-2023-41', Anonymous Referee #1, 09 Jan 2024
- AC2: 'Reply to all reviewer comments', Abigail Snyder, 16 Apr 2024
- AC1: 'Reply to all reviewer comments', Abigail Snyder, 16 Apr 2024
- RC2: 'Comment on esd-2023-41', Anonymous Referee #2, 11 Jan 2024
- AC1: 'Reply to all reviewer comments', Abigail Snyder, 16 Apr 2024
- RC3: 'Comment on esd-2023-41', Anonymous Referee #3, 13 Feb 2024
- AC1: 'Reply to all reviewer comments', Abigail Snyder, 16 Apr 2024
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
ED: Reconsider after major revisions (22 Apr 2024) by Gabriele Messori
AR by Abigail Snyder on behalf of the Authors (10 Jun 2024)
Author's response
Author's tracked changes
Manuscript
ED: Referee Nomination & Report Request started (14 Jun 2024) by Gabriele Messori
RR by Anonymous Referee #2 (27 Jun 2024)
RR by Anonymous Referee #1 (28 Jun 2024)
RR by Anonymous Referee #3 (29 Jun 2024)
ED: Publish subject to minor revisions (review by editor) (30 Jun 2024) by Gabriele Messori
AR by Abigail Snyder on behalf of the Authors (16 Jul 2024)
Author's response
Author's tracked changes
Manuscript
ED: Publish as is (26 Jul 2024) by Gabriele Messori
AR by Abigail Snyder on behalf of the Authors (06 Aug 2024)
Manuscript
The study proposes a strategy for selecting 5 CMIP6 GCMs that are suitable globally for impact model applications, based on temperature and precipitation characteristics and the IPCC likely ECS range. The results would benefit from context, both in terms of how the study compares to previous model subselection exercises (why select the same set of models for all regions?) and from a deeper dive into the origins of the IPCC likely ECS range (where it comes from, what constraint assumptions are being made). Additionally, the methodology is hard to follow in the appendix and is worth moving to the main text. My primary issue, though, is that using the "model uncertainty, scenario uncertainty, and interannual variability of the full CMIP6 ESM results" as an uncertainty benchmark is inappropriate in an ensemble of opportunity like CMIP6 without a careful audit of model dependence.
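To make the dependence audit concrete, a minimal sketch of one common approach: compare pairwise distances between model climatologies and flag pairs that sit far closer together than the ensemble-typical spread. The model names, synthetic fields, and the 25 %-of-median threshold below are illustrative assumptions, not taken from the manuscript.

```python
# Sketch: flag potentially dependent models via pairwise climatology distances.
# Model names, synthetic data, and the distance threshold are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual-mean temperature climatologies on a coarse grid (lat, lon).
base = rng.normal(288.0, 5.0, size=(18, 36))
models = {}
models["ESM-A"] = base + rng.normal(0, 0.5, base.shape)
models["ESM-A2"] = models["ESM-A"] + rng.normal(0, 0.05, base.shape)  # near-duplicate
models["ESM-B"] = base + rng.normal(0, 2.0, base.shape)

def rmse(x, y):
    """Grid-point RMSE between two climatology fields."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

names = list(models)
pairs = {}
for i, a in enumerate(names):
    for b in names[i + 1:]:
        pairs[(a, b)] = rmse(models[a], models[b])

# Pairs far closer than the ensemble-typical distance are candidates for a
# shared-code ("hidden dependency") audit before any uncertainty benchmark.
typical = np.median(list(pairs.values()))
suspect = [p for p, d in pairs.items() if d < 0.25 * typical]
print(suspect)
```

The same logic extends to hierarchical clustering of the full distance matrix, as in the Sanderson et al. and Abramowitz et al. papers recommended above.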
Recommended Literature:
Abramowitz, G., Herger, N., Gutmann, E., Hammerling, D., Knutti, R., Leduc, M., Lorenz, R., Pincus, R., and Schmidt, G. A.: ESD Reviews: Model dependence in multi-model climate ensembles: weighting, sub-selection and out-of-sample testing, Earth Syst. Dynam., 10, 91–105, https://doi.org/10.5194/esd-10-91-2019, 2019.
Brands, S.: A circulation-based performance atlas of the CMIP5 and 6 models for regional climate studies in the Northern Hemisphere mid-to-high latitudes, Geosci. Model Dev., 15, 1375–1411, https://doi.org/10.5194/gmd-15-1375-2022, 2022.
Merrifield, A. L., Brunner, L., Lorenz, R., Humphrey, V., and Knutti, R.: Climate model Selection by Independence, Performance, and Spread (ClimSIPS v1.0.1) for regional applications, Geosci. Model Dev., 16, 4715–4747, https://doi.org/10.5194/gmd-16-4715-2023, 2023.
Specific Comments:
L60-62: "In a world unburdened by time and computing constraints, an impact model would take as input every projected data set available to have a full understanding of possible outcomes." - An ensemble of every projected data set in CMIP6 does not confer a full understanding of possible outcomes. It would include 50 initial condition ensemble members for certain ESMs and one ensemble member for others. Is the first ESM's outcome 50 times more likely to be true? Beyond the unequal voting power of the large ensembles in CMIP6, the ensemble contains a number of "hidden dependencies": models with different names but near-identical code. For uncertainty to mean "our full understanding of possible outcomes", model dependence must be handled properly.
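The unequal-voting-power point can be sketched in a few lines: pooling every run lets a 50-member model dominate, while averaging members within each model first gives one vote per model. The model names and run counts below are invented for illustration.

```python
# Sketch: one-model-one-vote vs. pooling every run. Names and run counts are
# invented; the point is the weighting, not the numbers.
import numpy as np

rng = np.random.default_rng(1)

# Projected end-of-century warming (K) per initial-condition member.
runs = {
    "LargeEnsembleESM": rng.normal(3.5, 0.2, 50).tolist(),  # 50 members
    "SingleRunESM-1": [2.1],
    "SingleRunESM-2": [2.3],
}

# Pooling every run lets the 50-member model dominate the ensemble mean...
pooled_runs = [v for vals in runs.values() for v in vals]
naive_mean = float(np.mean(pooled_runs))

# ...while averaging within each model first gives each model equal weight.
model_means = [float(np.mean(vals)) for vals in runs.values()]
fair_mean = float(np.mean(model_means))

print(round(naive_mean, 2), round(fair_mean, 2))
```

The two estimates differ by nearly a kelvin here, which is the scale of the distortion an unaudited "full CMIP6" benchmark can carry.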
L81-82: While this is an interesting objective, the size of the initial condition ensembles submitted to an exercise like CMIP is a function of computational resources and goodwill (they are submitting "free" data for others) on the side of the modeling centers. It is beneficial to many researchers that CMIP is inclusive and does not "pick favorites" thus encouraging participation.
Table 1: Of the 22 models you are using, 6 are connected, either by legacy or because they use a version of it, to NCAR's Community Atmosphere Model (CAM) development cycle. As this leaves the potential for CAM to have outsized influence on your uncertainty benchmark, the choice must be discussed. Additional similar model pairs present in the ensemble, such as ACCESS-CM2 / UKESM1-0-LL and MPI-ESM1-2-HR / MPI-ESM1-2-LR, could be creating a "rather heterogeneous, clustered distribution, with families of closely related models lying close together but with significant voids in-between model clusters" (description of CMIP5 from Sanderson, B. M., Knutti, R., and Caldwell, P.: A representative democracy to reduce interdependency in a multimodel ensemble, J. Climate, 28, 5171–5194, https://doi.org/10.1175/JCLI-D-14-00362.1, 2015). Could multiple similar models elevate outliers (e.g., MIROC) in your metric?
L166: Please justify the citation of Scafetta, 2022. See https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2022GL102530
In step 3 of Table 2, how are you computing an ensemble average for the models that only provide a single run?
L224: Omit "over time"?
L315: "Models who more closely match the trend of observational data (W5E5v2.0 (Lange et al., 2021)) over the historic period will have their observations hold more weight." Why? Trends are highly sensitive to internal variability, which is inherently random in temporal phase, i.e., there is no reason a model and observations should share the same sequence of it. A match in trend between a model and observations over a particular time period often occurs by chance and is not indicative of model performance.
Deser, C., Phillips, A., Alexander, M. A., and Smoliak, B. V.: Projecting North American climate over the next 50 years: Uncertainty due to internal variability, J. Climate, 27, 2271–2296, https://doi.org/10.1175/JCLI-D-13-00451.1, 2014.
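The chance-agreement point is easy to demonstrate with a toy simulation: give 1000 pseudo-models the same forced trend and differ only in the phase of their AR(1) internal variability. The persistence, noise amplitude, and 30-year window below are illustrative assumptions.

```python
# Sketch: short-window trends are dominated by internal variability, so
# model-observation trend agreement can occur by chance. All parameters
# (AR(1) persistence, noise amplitude, window length) are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def ar1_series(n, phi=0.6, sigma=0.15):
    """AR(1) internal variability about a common forced trend."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

n_years = 30
forced = 0.02 * np.arange(n_years)      # identical forced warming (K/yr slope)
obs = forced + ar1_series(n_years)      # pseudo-observations

def trend(y):
    """Least-squares linear trend (slope) of a time series."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

obs_trend = trend(obs)

# 1000 pseudo-models share the SAME forced response; only the random phase
# of their internal variability differs.
model_trends = np.array([trend(forced + ar1_series(n_years)) for _ in range(1000)])

# The spread in 30-year trends comes purely from internal variability, so a
# close match to the observed trend says nothing about forced-response skill.
close = float(np.mean(np.abs(model_trends - obs_trend) < 0.005))
print(round(close, 2))
```

A substantial fraction of identically forced pseudo-models lands within 0.005 K/yr of the "observed" trend purely by chance, which is the Deser et al. (2014) point in miniature.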