This work is distributed under the Creative Commons Attribution 4.0 License.
The future of the El Niño–Southern Oscillation: using large ensembles to illuminate time-varying responses and inter-model differences
Robert C. Jnglin Wills
Pedro DiNezio
Jeremy Klavans
Sebastian Milinski
Sara C. Sanchez
Samantha Stevenson
Malte F. Stuecker
Download
- Final revised paper (published on 14 Apr 2023)
- Supplement to the final revised paper
- Preprint (discussion started on 11 Aug 2022)
- Supplement to the preprint
Interactive discussion
Status: closed
-
RC1: 'Comment on esd-2022-26', Anonymous Referee #1, 29 Aug 2022
Review of “The future of the El Nino-Southern Oscillation: Using large ensembles to illuminate time-varying responses and inter-model differences”
I find the paper to be informative and I believe that the community would be interested in this work. The paper is generally well written. My detailed comments are listed below.
- Line 19 P2, add Cai et al. 2012 Nature https://www.nature.com/articles/nature11358 and Cai et al. 2014 NCC https://www.nature.com/articles/nclimate2100#citeas, as these are among the earliest papers on the topic?
- Line 23, the difference between Cai et al., 2022 and [Wengel et al., 2021; Callahan et al., 2021] lies in that one is transient and the others are stabilised CO2. This should be clarified so as not to create further confusion. Line 45 seems to reinforce the confusion.
- Lines 55-60, paleoclimatic proxies suggest that there is no relationship between the mean zonal SST gradient and ENSO variability (Cai et al., 2021).
- Line 93, please cite a butterfly effect paper https://www.nature.com/articles/s41586-020-2641-x as it is easy to understand. I think the paper also suggests that there is an effect on future ENSO evolution from the initial period.
- Line 155 onward, it is not clear if anomalies are constructed relative to the climatology of each individual experiment or to the ensemble mean. It should be the former. By definition, a climatology is the average of all years that contribute, such that the anomalies sum to zero. If it is the latter, then the inter-experiment difference in climatology needs to be assessed, and the anomalies might not sum to zero.
- Lines 166-167, “CESM2 is an exception that has opposite changes in El Nino SST amplitude and La Nina duration between the two periods.” Cai et al. (2020) seems to provide a mechanism for this?
- Figure 1, it is interesting that for a SMILE, most experiments behave in a similar way, either unidirectional or reversing, suggesting that it is strongly model dependent. What causes the dependence?
- Line 175, what are the dynamics behind the increased ENSO seasonality?
- Line 220, are you able to further test the idea of nonlinearity controlling mean state change by relating them in an inter-model/experiment relationship?
- Lines 229 and 289, what are the dynamics by which increasing aerosols drive an increase in ENSO variability? One would expect increasing aerosols to have an opposite impact to that of increasing CO2. Is it possible that internal variability plays a role in the result?
Citation: https://doi.org/10.5194/esd-2022-26-RC1
-
AC2: 'Reply on RC1', Nicola Maher, 04 Oct 2022
Review of “The future of the El Nino-Southern Oscillation: Using large ensembles to illuminate time-varying responses and inter-model differences”
I find the paper to be informative and I believe that the community would be interested in this work. The paper is generally well written. My detailed comments are listed below.
We thank the reviewer for their time, positive review and helpful comments.
-
Line 19 P2, add Cai et al. 2012 Nature https://www.nature.com/articles/nature11358 and Cai et al. 2014 NCC https://www.nature.com/articles/nclimate2100#citeas, as these are among the earliest papers on the topic?
We will add this citation.
-
Line 23, the difference between Cai et al., 2022 and [Wengel et al., 2021; Callahan et al., 2021] lies in that one is transient and the others are stabilised CO2. This should be clarified so as not to create further confusion. Line 45 seems to reinforce the confusion.
We will modify line 45 to make this difference in forcing clearer.
-
Lines 55-60, paleoclimatic proxies suggest that there is no relationship between the mean zonal SST gradient and ENSO variability (Cai et al., 2021).
Thanks for the reference; we will add it to this discussion.
-
Line 93, please cite a butterfly effect paper https://www.nature.com/articles/s41586-020-2641-x as it is easy to understand. I think the paper also suggests that there is an effect on future ENSO evolution from the initial period.
While this is an interesting paper, line 93 references the comparison of CMIP spread to the spread of a single model, for which the references Maher et al. (2018) and Ng et al. (2021) are more relevant. As such, we choose to leave the citation as is.
-
Line 155 onward, it is not clear if anomalies are constructed relative to the climatology of each individual experiment or to the ensemble mean. It should be the former. By definition, a climatology is the average of all years that contribute, such that the anomalies sum to zero. If it is the latter, then the inter-experiment difference in climatology needs to be assessed, and the anomalies might not sum to zero.
We remove the ensemble mean because we aim to construct anomalies in which the forced signal has been removed. The forced signal is well estimated by the ensemble mean (see https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019MS001639 for details on the use of the ensemble mean to estimate the forced response).
We will make this methodology clear in the Figure caption.
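As a minimal sketch of this construction (all array names, sizes, and the imposed trend are illustrative, not taken from the paper's analysis), removing the ensemble mean at each time step yields anomalies that sum to zero across members at every time, though not necessarily along the time axis of any single member:

```python
import numpy as np

# Illustrative SMILE-like data: a Nino-3.4 SST index of shape
# (n_members, n_months), built from an imposed forced trend plus noise.
rng = np.random.default_rng(0)
n_members, n_months = 50, 120
forced = np.linspace(0.0, 1.5, n_months)                  # imposed warming signal
sst = forced + rng.normal(0.0, 0.8, (n_members, n_months))

# Estimate the forced response as the ensemble mean at each time step,
# then define anomalies as each member's departure from that estimate.
ensemble_mean = sst.mean(axis=0)
anomalies = sst - ensemble_mean

# Largest cross-member mean anomaly: zero to machine precision by construction.
max_cross_member_mean = np.abs(anomalies.mean(axis=0)).max()
```

This differs from subtracting a per-experiment climatology, where anomalies would instead average to zero over the years contributing to the climatology.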
-
Lines 166-167, “CESM2 is an exception that has opposite changes in El Nino SST amplitude and La Nina duration between the two periods.” Cai et al. (2020) seems to provide a mechanism for this?
This result is not directly comparable to Cai et al. (2020), which looks at the evolution of individual ensemble members, while we consider the ensemble mean as an estimate of the forced signal.
-
Figure 1, it is interesting that for a SMILE, most experiments behave in a similar way, either unidirectional or reversing, suggesting that it is strongly model dependent. What causes the dependence?
Given that a SMILE consists of experiments that are all run with the same model, it makes sense that the overall trajectory is similar. We note that individual experiments are not shown in Figure 1 – what we show is the ensemble mean and the 5-95% range across the ensemble. This means that the internal variability or trajectory of each individual member within the ensemble spread is not illustrated in the figure. The individual members, while following the same overall trajectory, will show much more noise than the ensemble mean and spread.
Model dependence could be related to the following:
- Different climatological biases
- Different patterns of transient mean-state warming
- Different ENSO feedbacks and dynamics
See following references:
Planton et al, 2021: https://journals.ametsoc.org/view/journals/bams/102/2/BAMS-D-19-0337.1.xml
Bellenger et al 2014: https://link.springer.com/article/10.1007/s00382-013-1783-z
Wills et al, 2022: https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2022GL100011
Capotondi et al, 2015: https://repository.library.noaa.gov/view/noaa/31041
We will add text around this model dependence citing the references above in the revised manuscript.
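The ensemble-mean plus 5-95% summary described above can be sketched as follows (synthetic data; variable names and the metric are purely illustrative stand-ins for what a Figure-1-style panel would show):

```python
import numpy as np

# Synthetic stand-in for Figure-1-style data: a running ENSO-variance
# metric from each ensemble member, shape (n_members, n_years).
rng = np.random.default_rng(1)
n_members, n_years = 40, 100
metric = (1.0 + 0.02 * np.arange(n_years)
          + rng.normal(0.0, 0.15, (n_members, n_years)))

# What such a figure shows: the ensemble mean plus the 5-95% range across
# members at each year, rather than each member's noisy trajectory.
ens_mean = metric.mean(axis=0)
p05, p95 = np.percentile(metric, [5, 95], axis=0)
envelope_width = p95 - p05
```

Plotting `ens_mean` with the `p05`-`p95` envelope suppresses single-member noise while still conveying the spread due to internal variability.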
-
Line 175, what are the dynamics behind the increased ENSO seasonality?
We cannot answer this without a full feedback analysis, which is out of the scope of this study. We will add the following text to the manuscript.
Determining the dynamical cause for the increased ENSO seasonal synchronization in most of the models will require a detailed ENSO feedback analysis (e.g., Chen & Jin 2022), including assessing potential future changes in the "southward wind shift" mechanism (e.g., McGregor et al. 2012, Stuecker et al. 2013).
-
Line 220, are you able to further test the idea of nonlinearity controlling mean state change by relating them in an inter-model/experiment relationship?
We will update the text on line 220 to read
The internal variability relationship (Fig. 8a) clearly shows the role of rectification into the mean state (e.g. Hayashi et al., 2020), but for the forced changes this is only one of several mechanisms at play, so the forced changes can depart from this linear relationship.
-
Lines 229 and 289, what are the dynamics by which increasing aerosols drive an increase in ENSO variability? One would expect increasing aerosols to have an opposite impact to that of increasing CO2. Is it possible that internal variability plays a role in the result?
This is currently unresolved. Aerosol forcing, however, is not a simple inverse of CO2 forcing, as it is hemispherically asymmetric, unlike greenhouse gas forcing. This leads to ITCZ shifts that can influence ENSO.
See following references:
Luongo et al, submitted: https://www.essoar.org/pdfjs/10.1002/essoar.10512160.1
Kang et al 2020: https://www.science.org/doi/10.1126/sciadv.abd3021
For stratospheric volcanic aerosol:
Pausata et al , 2020: https://www.science.org/doi/10.1126/sciadv.aaz5006
We will add discussion on this point citing the above literature in the revised manuscript.
Citation: https://doi.org/10.5194/esd-2022-26-AC2
-
RC2: 'Comment on esd-2022-26', Anonymous Referee #2, 31 Aug 2022
This paper examines changes in ENSO SST anomalies in a number of large ensembles. It is really a 'show and tell', looking at changing SST variability using a number of different measures. There is a significant amount of data wrangling involved in this type of work and the authors are world leading in this regard. The analysis is approached in a careful way and it supports the conclusions of the paper. Figures and text are of high quality.
Perhaps the most disappointing thing, however, is that there is little insight provided as to why the large ensembles behave in such diverse ways. Some show increases, some decreases, and some show non-linear responses in variability. Understanding this latter behaviour would be of significant scientific interest to the ENSO/climate change community. There are simple metrics available to look at mechanistic aspects of ENSO changes in models, and it is a bit of a shame that the authors do not try some of these, e.g. assessing the atmosphere-ocean coupling strength and its components. Such an analysis would significantly enhance the work.
It also seems a little odd that the authors do not make some comments on minimum ensemble size for looking at changes in ENSO.
Citation: https://doi.org/10.5194/esd-2022-26-RC2
-
AC1: 'Reply on RC2', Nicola Maher, 04 Oct 2022
This paper examines changes in ENSO SST anomalies in a number of large ensembles. It is really a 'show and tell', looking at changing SST variability using a number of different measures. There is a significant amount of data wrangling involved in this type of work and the authors are world leading in this regard. The analysis is approached in a careful way and it supports the conclusions of the paper. Figures and text are of high quality.
We thank the reviewer for their time taken to review the paper and positive review.
Perhaps the most disappointing thing, however, is that there is little insight provided as to why the large ensembles behave in such diverse ways. Some show increases, some decreases, and some show non-linear responses in variability. Understanding this latter behaviour would be of significant scientific interest to the ENSO/climate change community. There are simple metrics available to look at mechanistic aspects of ENSO changes in models, and it is a bit of a shame that the authors do not try some of these, e.g. assessing the atmosphere-ocean coupling strength and its components. Such an analysis would significantly enhance the work.
The aim of our paper is to illuminate how ENSO behaves in all available large ensembles. Here, we can look at ENSO evolution over time due to the use of large ensembles and truly identify how each model behaves under a strong warming scenario. We agree that understanding why the models behave differently is an important question. However, this is out of the scope of our study, which already includes 10 Figures in the main text and 7 in the Supplementary. We hope this work will inspire others to look further into these new datasets.
It also seems a little odd that the authors do not make some comments on minimum ensemble size for looking at changes in ENSO.
This is a good point. Maher et al., 2018 (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018GL079764) first investigated this question and Lee et al., 2021 (https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2021GL095041) thoroughly examined it. The revised paper will add these citations and discussion around them.
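One common way to probe minimum ensemble size, in the spirit of the subsampling approaches used in the papers cited above (all numbers and names here are illustrative assumptions, not the cited methods verbatim), is to draw subsets of the full ensemble and check how the spread of the resulting ensemble-mean estimates shrinks with subset size:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative "full ensemble": one scalar diagnostic per member,
# e.g. each member's change in ENSO variance between two periods.
full_ensemble = rng.normal(loc=1.2, scale=0.5, size=100)

def mean_estimate_spread(members, n, n_resample=1000, rng=rng):
    """Std dev of the ensemble-mean estimate from n randomly drawn members."""
    means = [rng.choice(members, size=n, replace=False).mean()
             for _ in range(n_resample)]
    return float(np.std(means))

# The spread shrinks roughly like 1/sqrt(n); "enough" members is wherever it
# falls below the precision required to detect the forced signal of interest.
spread_small = mean_estimate_spread(full_ensemble, 5)
spread_large = mean_estimate_spread(full_ensemble, 50)
```

Comparing `spread_small` and `spread_large` against a target detection threshold gives a rough minimum ensemble size for the diagnostic in question.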
Citation: https://doi.org/10.5194/esd-2022-26-AC1
-
CC1: 'Comment on esd-2022-26', John Fasullo, 05 Oct 2022
Review of The future of the El Niño-Southern Oscillation: Using large ensembles to illuminate time-varying responses and inter-model differences
by Maher et al.
The manuscript by Maher et al. seeks to diagnose changes in ENSO in 14 single model large ensembles (so-called SMILEs). The manuscript builds upon a body of work that is often based on single members, individual models, or idealized models and so it represents an advance, particularly at resolving the decadally varying aspects of forced changes in variance - which the work shows can be important. The manuscript is clearly written, is explicit about its objectives, findings, and reasoning, and includes figures that are well-designed and clear. There is sufficient new material here to justify publication. Some aspects are frustrating - such as in cases where the robust take-home message seems to be that there is no robust-take home message. Though basic questions go unanswered concerning the origins of inter-model contrast and mechanisms of change, the broader community has also struggled to answer these questions and so this work is not unique in this regard. That said I do have some minor suggestions for improvement. This includes the general suggestion that multi-model means not be used for many of the metrics because of the disproportionate variance in some models that is swamping out the means. Rather I think medians make more sense since the broader goal is to make generalizations about model behavior, which implicitly seeks to screen out outliers. I have various other relatively minor suggestions listed below but otherwise view the manuscript as suitable for publication.
***
55: There have been various studies that show the improvement of ENSO simulation across CMIP generations. These seem useful to reference in this paragraph to provide context on the numerous SMILEs.
119: I appreciate that there can be useful examples of models that suggest that no simple relationship between the present day and future exists, but perhaps a more thorough exploration across all models and metrics considered here would be more convincing?
126: It is worth calling out the very large differences between SSP370 and RCP85/SSP585 with regard to early 21st-century sulfate aerosols and the potential consequences for the evolution of ENSO.
Figure 1: There is substantial white space in the 3x5 layout of the figure. I recommend changing to 4x4 and ensuring there is little white space, as the figure is somewhat inefficient and difficult to read as is. Also, it looks as though the vertical extent needs to be expanded, as some lines go out of range. Note also that there is substantial noise in many of the time series. I suggest applying a smoother, except where the variability is meaningful rather than irrelevant noise. That said, various models seem to have abrupt responses to forcing, such as volcanic eruptions and perhaps even biomass effects (in MIROC-ES2L), that are too abrupt to be explained by warming alone. The authors seem not to find this worthy of discussion? I recommend addressing it.
Figure 3: The multi-ensemble mean seems to be dominated by EC-Earth3. Would a multi-ensemble median perhaps be more appropriate?
Figure 4: Fonts are too small - the minimum font, including axes, should be on par with the main text. Monthly stddev lines should be made thicker.
The authors don’t provide any hypotheses for the seasonality of the change in variance? Do none exist? Again the MIROC-ES2L increase at 2000 is quite notable. Is there no explanation for why this may occur? Other models show periodic changes in variability in the future. Is this just noise? Does it suggest that again some additional smoothing is needed to deal with some small ensemble sizes using monthly data? If one looks at CESM2 there are again suggestions of periodicity? What might drive this? I suggest reducing the range of the color bar to make colors in the figure more visible.
Figure 5: Perhaps put whiskers on Fig 5 corresponding to the 2-standard-error range, which seems to increase in the latter half of the year? Might a multi-model median be more appropriate due to the sensitivity of means to a single model?
Why are 99 members of CESM2 used for some figures and only 50 used for others? This is not discussed at all and contradicts Table 1. I’d review all plots and ensure that the # of members used is consistent with Table 1.
Figure 6: I again question the use of multi model mean rather than median given the outsized influence of some models (CSIRO).
Figure 8: Since the abscissa is not symmetric I recommend that a vertical line at 0 be shown to avoid confusion. As is, the panels and particularly the top, are misleading.
Figure 8: Doesn’t the fact that obs have become more La Niña like suggest that we should have seen a reduction in variance?
Figure 9: There should probably be two more sets of arrows at the top of the plot saying La Niña-like and El Niño-like, such that they are on each plot.
226: Why not also infer changes from the CESM1-SF (and now CESM2-SF if you do add CESM1 to the figure)?
Figure 10: reference to ‘all models’ is a bit confusing given there are only 2 models. Perhaps state “both models” or better yet “both ensembles”? Or one could include ALL CMIP6 DAMIP simulations to provide context for CanESM5 and MIROC6. Also, I wonder why not show these in a similar fashion to Fig 9?
255: The discussions based on MEM should be reconsidered in the context of the median I think, particularly when making categorical statements of models overall (since the mean is strongly weighted by only a few models with large variance and is therefore not representative).
I recommend that the paper include a discussion of possible mechanisms and paths forward for exploring.
It is unfortunate that there is so little consistency across models on many aspects. The reader is left to wonder a bit about what the robust finding here relevant to nature is, aside from little being consistent across models. Have the authors looked for connections between some of the metrics being shown across models (e.g. seasonality of mean-state changes and ENSO variance)? Have the authors examined what systematic differences exist for models that do best in some metrics packages (e.g. CVDP) in the present day (if so, it would be good to mention; if not, it would be good to do), or for which models the changes in variance in observations fall within the ensemble spread? Though this may be a weak constraint, it would still be good to mention.
304: Projections of nearly every climate quantity are nonlinear in time (since radiative forcing is nonlinear). Is there really an expectation of linearity? and if not is this really a significant result?
308: As stated earlier I’d avoid the MEM(ean) and use the median.
334: last sentence needs a period.
None of the panels in the paper are labeled (e.g. A, B, C, …). I think they should be for easier reference in this and other manuscripts that may cite specific figure panels from this work.
Citation: https://doi.org/10.5194/esd-2022-26-CC1
-
AC3: 'Reply on CC1', Nicola Maher, 13 Oct 2022
Review of The future of the El Niño-Southern Oscillation: Using large ensembles to illuminate time-varying responses and inter-model differences
by Maher et al.
The manuscript by Maher et al. seeks to diagnose changes in ENSO in 14 single model large ensembles (so-called SMILEs). The manuscript builds upon a body of work that is often based on single members, individual models, or idealized models and so it represents an advance, particularly at resolving the decadally varying aspects of forced changes in variance - which the work shows can be important. The manuscript is clearly written, is explicit about its objectives, findings, and reasoning, and includes figures that are well-designed and clear. There is sufficient new material here to justify publication. Some aspects are frustrating - such as in cases where the robust take-home message seems to be that there is no robust-take home message. Though basic questions go unanswered concerning the origins of inter-model contrast and mechanisms of change, the broader community has also struggled to answer these questions and so this work is not unique in this regard. That said I do have some minor suggestions for improvement. This includes the general suggestion that multi-model means not be used for many of the metrics because of the disproportionate variance in some models that is swamping out the means. Rather I think medians make more sense since the broader goal is to make generalizations about model behavior, which implicitly seeks to screen out outliers. I have various other relatively minor suggestions listed below but otherwise view the manuscript as suitable for publication.
Thank you for taking the time to review this paper and for your positive comments and helpful suggestions.
While we agree that the median would make more sense for broader generalizations, we choose to use the multi-ensemble mean (MEM) for easy comparison with previous work that uses multi-model means (MMM) from CMIP. Here, the main aim is to show the diversity of forced responses that go into such a MMM and to demonstrate that these diverse individual model responses average out in the MEM to something that looks very similar to the MMM from previous studies. As such, we choose to use the MEM rather than medians in this study. We will add a sentence on this point in the revised manuscript to make the use of the MEM clear to the reader.
***
55: There have been various studies that show the improvement of ENSO simulation across CMIP generations. These seem useful to reference in this paragraph to provide context on the numerous SMILEs.
We will add this to the revised version.
119: I appreciate that there can be useful examples of models that suggest that no simple relationship between the present day and future exists, but perhaps a more thorough exploration across all models and metrics considered here would be more convincing?
The aim of this manuscript is to provide the community with a detailed overview of how each model behaves and to demonstrate the time-dependence of the ENSO response. We agree this is interesting, but it is out of the scope of the manuscript and would be a great follow-up study on its own.
126: It is worth calling out the very large differences between SSP370 and RCP85/SSP585 with regard to early 21st-century sulfate aerosols and the potential consequences for the evolution of ENSO.
Thanks for the comment – this will be added to the revised manuscript.
Figure 1: There is substantial white space in the 3x5 layout of the figure. I recommend changing to 4x4 and ensuring there is little white space, as the figure is somewhat inefficient and difficult to read as is. Also, it looks as though the vertical extent needs to be expanded, as some lines go out of range. Note also that there is substantial noise in many of the time series. I suggest applying a smoother, except where the variability is meaningful rather than irrelevant noise. That said, various models seem to have abrupt responses to forcing, such as volcanic eruptions and perhaps even biomass effects (in MIROC-ES2L), that are too abrupt to be explained by warming alone. The authors seem not to find this worthy of discussion? I recommend addressing it.
Thanks, once the Figure is formatted into the text, the white space should not be an issue. Keeping the time series as is acts to demonstrate the inherent variability due to different ensemble sizes. Given that the ensemble sizes are different, this is helpful to give an idea as to how noisy our results are. We will check the vertical extent in the revised version.
We agree the abrupt changes are interesting; however, they are difficult to diagnose without a thorough analysis of each model's individual forcing differences. We will add text highlighting these differences and speculating on why they occur in the revised version.
Figure 3: The multi-ensemble mean seems to be dominated by EC-Earth3. Would a multi-ensemble median perhaps be more appropriate?
See comment at the beginning of the response.
Figure 4: Fonts are too small - the minimum font, including axes, should be on par with the main text. Monthly stddev lines should be made thicker.
This will be changed in the revised version.
The authors don’t provide any hypotheses for the seasonality of the change in variance? Do none exist? Again the MIROC-ES2L increase at 2000 is quite notable. Is there no explanation for why this may occur? Other models show periodic changes in variability in the future. Is this just noise? Does it suggest that again some additional smoothing is needed to deal with some small ensemble sizes using monthly data? If one looks at CESM2 there are again suggestions of periodicity? What might drive this? I suggest reducing the range of the color bar to make colors in the figure more visible.
We will reduce the colorbar range. In terms of the seasonality we cannot answer this without a full feedback analysis, which is out of the scope of this study. We will add the following text to the manuscript.
Determining the dynamical cause for the increased ENSO seasonal synchronization in most of the models will require a detailed ENSO feedback analysis (e.g., Chen & Jin 2022), including assessing potential future changes in the "southward wind shift" mechanism (e.g., McGregor et al. 2012, Stuecker et al. 2013).
Figure 5: Perhaps put whiskers on Fig 5 corresponding to the 2-standard-error range, which seems to increase in the latter half of the year? Might a multi-model median be more appropriate due to the sensitivity of means to a single model?
We will add another line for the median in this plot. Rather than add an error range, we choose to show each individual model in light grey to illustrate what goes into the multi-model mean.
Why are 99 members of CESM2 used for some figures and only 50 used for others? This is not discussed at all and contradicts Table 1. I’d review all plots and ensure that the # of members used is consistent with Table 1.
Thanks for pointing this out – we will make sure all 99 members are used where there were only 50 in the revised version.
Figure 6: I again question the use of multi model mean rather than median given the outsized influence of some models (CSIRO).
See comment at the beginning of the response.
Figure 8: Since the abscissa is not symmetric I recommend that a vertical line at 0 be shown to avoid confusion. As is, the panels and particularly the top, are misleading.
This will be added into the revised version.
Figure 8: Doesn’t the fact that obs have become more La Niña like suggest that we should have seen a reduction in variance?
If you look at individual members, you can see that while a reduction in variance is more likely with a La Niña-like change, not all members show this – illustrating the inherent noise in the system. The addition of 0 lines as suggested above will make this clearer.
Figure 9: There should probably be two more sets of arrows at the top of the plot saying La Niña-like and El Niño-like, such that they are on each plot.
We will add these, assuming they don't make the plot noisier and harder to read.
226: Why not also infer changes from the CESM1-SF (and now CESM2-SF if you do add CESM1 to the figure)?
Using the difference between the all-forcing and the all-but-one-forcing experiments in CESM1-SF does not necessarily give the same result as a single-forcing experiment. For consistency, we only use single-forcing, not all-but-one-forcing, experiments. CESM2-SF was not available at the time of the workshop at which the data analysis for this publication was done.
Figure 10: reference to ‘all models’ is a bit confusing given there are only 2 models. Perhaps state “both models” or better yet “both ensembles”? Or one could include ALL CMIP6 DAMIP simulations to provide context for CanESM5 and MIROC6. Also, I wonder why not show these in a similar fashion to Fig 9?
We will update the text to “both models”. We do not show these the same way as Figure 9 because the periods we are looking at are much shorter and the single-forcing ensemble sizes are inconsistent within the same model. Some of the small ensemble sizes made it difficult to interpret the ensemble mean – this is why we show each member as well as the mean in Figure 10.
255: The discussions based on MEM should be reconsidered in the context of the median I think, particularly when making categorical statements of models overall (since the mean is strongly weighted by only a few models with large variance and is therefore not representative).
See response at the beginning of this document.
I recommend that the paper include a discussion of possible mechanisms and paths forward for exploring.
It is unfortunate that there is so little consistency across models on many aspects. The reader is left to wonder a bit about what the robust finding here relevant to nature is, aside from little being consistent across models. Have the authors looked for connections between some of the metrics being shown across models (e.g. seasonality of mean-state changes and ENSO variance)? Have the authors examined what systematic differences exist for models that do best in some metrics packages (e.g. CVDP) in the present day (if so, it would be good to mention; if not, it would be good to do), or for which models the changes in variance in observations fall within the ensemble spread? Though this may be a weak constraint, it would still be good to mention.
The aim of our paper is to illuminate how ENSO behaves in all available large ensembles. Here, the use of large ensembles allows us to look at ENSO evolution over time and truly identify how each model behaves under a strong warming scenario. We agree that understanding why the models behave differently and trying to better understand consistency between models are important questions. However, this is out of the scope of our study, which already includes 10 Figures in the main text and 7 in the Supplementary. We hope this work will inspire others to look further into these new datasets. Our final point in the manuscript aims to highlight this and to inspire further studies that look into this question in more detail.
“This highlights the need for further research on the mechanisms of inter-model differences in ENSO projections. There is a rich diversity of future ENSO changes projected by climate models and more work is needed to understand which aspects of these projections are robust.”
We will add the following at the end of this discussion “are robust, why the models differ and how they relate to the world as observed.”
304: Projections of nearly every climate quantity are nonlinear in time (since radiative forcing is nonlinear). Is there really an expectation of linearity? and if not is this really a significant result?
While there is not an expectation of linearity, most previous studies have not been able to look at the time-dependent response to assess whether the response is linear. For example, most previous studies have compared a future time period (e.g. 2000-2099) with a historical period (e.g. 1850-1950).
308: As stated earlier I’d avoid the MEM(ean) and use the median.
See comment at beginning of document.
334: last sentence needs a period.
Thanks this will be added.
None of the panels in the paper are labeled (e.g. A, B, C, …). I think they should be for easier reference in this and other manuscripts that may cite specific figure panels from this work.
We choose not to add these as they would make the Figures overly busy. Others can still reference the individual Figures, and it should be easy to identify individual models as they are named in the titles.
Citation: https://doi.org/10.5194/esd-2022-26-AC3