Articles | Volume 17, issue 1
https://doi.org/10.5194/esd-17-41-2026
© Author(s) 2026. This work is distributed under the Creative Commons Attribution 4.0 License.
Seamless climate information from months to multiple years: constraining decadal predictions with seasonal predictions and past observations, and their comparison to multi-annual predictions
Download
- Final revised paper (published on 07 Jan 2026)
- Supplement to the final revised paper
- Preprint (discussion started on 11 Aug 2025)
- Supplement to the preprint
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on egusphere-2025-3674', Anonymous Referee #1, 10 Sep 2025
- AC1: 'Reply on RC1', Carlos Delgado-Torres, 23 Sep 2025
- RC2: 'Comment on egusphere-2025-3674', Anonymous Referee #2, 24 Oct 2025
- AC2: 'Reply on RC2', Carlos Delgado-Torres, 30 Oct 2025
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
ED: Publish subject to minor revisions (review by editor) (07 Nov 2025) by Andrey Gritsun
AR by Carlos Delgado-Torres on behalf of the Authors (11 Nov 2025)
Author's response
Author's tracked changes
Manuscript
ED: Publish as is (03 Dec 2025) by Andrey Gritsun
AR by Carlos Delgado-Torres on behalf of the Authors (11 Dec 2025)
In the manuscript entitled 'Seamless climate information for the next months to multiple years: merging of seasonal and decadal predictions, and their comparison to multi-annual predictions', Delgado-Torres and colleagues evaluate the added value of seamless forecasts, obtained both from multi-annual predictions and from several methods that constrain large ensembles of simulations, against seasonal and decadal prediction systems and against 'non-initialized' large ensembles of historical simulations, for predicting the Niño 3.4 index and spatial fields of temperature, precipitation, and sea-level pressure. Overall, the authors carried out interesting analyses that highlight the relevance of both multi-annual predictions and constraining methods, the latter being a cost-effective alternative that can be updated much more frequently. I have some minor issues and comments, especially regarding the evaluations.
Title:
I found the title unclear. I am not sure that the term 'merging' is appropriate here, as there is no actual merging of seasonal and decadal predictions in the study, but rather a constraint of decadal predictions and historical simulations based on seasonal predictions. If you use the term 'merging' in the sense of combining different data, then 'blending' may be more suitable.
Introduction:
l.38-41: I wouldn't describe the methods cited here as 'temporal merging' methods, since they do not merge time series. Rather, they use observations or decadal predictions to constrain large ensembles of non-initialized historical simulations. The term 'temporal merging' is more consistent with the study of Befort et al. (2022), cited on line 50, where historical simulations and decadal predictions are concatenated.
Data:
l.96: The term 'climate projection' used to refer to HIST is misleading, especially since HIST includes not only climate projections but also historical simulations.
Method:
Fig. S1: What does "accum" mean?
l.110: Can you provide more explanation of the "bias adjustments (correcting both the mean and variance)"?
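For context, one common form of mean-and-variance bias adjustment (a plausible reading of the phrase, not necessarily the authors' exact method) standardizes the hindcast and rescales it to the observed moments over the common verification period. A minimal sketch:

```python
import numpy as np

def adjust_mean_and_variance(hindcast, obs):
    """Rescale the hindcast so its mean and variance match the observations.

    hindcast, obs: 1-D arrays over the common hindcast period.
    Returns the bias-adjusted hindcast.
    """
    hc_mean, hc_std = hindcast.mean(), hindcast.std(ddof=1)
    obs_mean, obs_std = obs.mean(), obs.std(ddof=1)
    # Standardize the hindcast, then rescale to the observed moments.
    return (hindcast - hc_mean) / hc_std * obs_std + obs_mean

rng = np.random.default_rng(0)
hc = rng.normal(1.0, 2.0, 40)   # synthetic biased, over-dispersive hindcast
ob = rng.normal(0.0, 1.0, 40)   # synthetic observations
adj = adjust_mean_and_variance(hc, ob)
```

In practice such an adjustment is usually applied separately per start date, lead time, and grid point, and in cross-validation; the manuscript should state which of these choices were made.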
Results:
Fig. S3b: It is confusing for the November initialization that the skill from DP just after initialization (dark green), which starts in January as indicated in the legend, is shown as starting at the same month (0) as the other dataset that begins in November. Shouldn't it instead start at month 2 to be consistent with the other dataset?
l.217-219: Indeed, this is not a very fair comparison with decadal predictions. It would be preferable to use the same representation as in Fig. S3b, based on the DP system initialized in November.
Figs. S4 and S5: As in my previous comment, it would be preferable to also include the DP system initialized in November for the November forecast in the figures.
l.223-224: If I understand the method correctly, the selected members from the DP ensemble are also initialized 5-7 months prior for the May forecast and 10-12 months prior for the November forecast. It would be interesting to see whether selecting members from the DP system initialized in November of the same year as the forecast could increase the skill in Fig. S4b.
l.224-227: The fact that some methods using only HIST show such poor skill suggests that the predictor used for the constraint provides no information on the evolution of El Niño. Conversely, methods with skill comparable to SP and MP in the first forecast months appear to rely on more informative predictors. Are the best-performing methods based solely on the Niño 3.4 index? And is there a common predictor among the worst methods as well?
l.239-240: It seems from these figures that many members are selected from two models (MIROC6 and CESM1). Do you have any thoughts on why this is the case? Are these models better at representing El Niño?
Fig. 6: It would be helpful to clarify the choice of constraints for the different tests. For example, in panels 6d, e, f, is it based on HIST+DP? Similarly, for panels 6g, h, i, is it based on OBS or SP?
Fig. 6: The legend for the fifth row is unclear and quite confusing. The legend describes the mean absolute error for Niño 3.4 or the NAO (two scores), as well as the spatial ACC, the spatial centered RMSE, and the spatial uncentered RMSE with respect to TOS or PSL (is this two scores, or four if both TOS and PSL are tested with both the centered and uncentered RMSE?). However, only four distributions are highlighted in the figure, with, for example, only one labelled as the error index, which I assume corresponds to the mean absolute error; but is it for Niño 3.4 or the NAO? Is one missing?
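For reference, the distinction between the spatial scores in question can be written down compactly. The sketch below is illustrative only (equal-weight grid points, no cosine-latitude weighting), not the authors' implementation:

```python
import numpy as np

def spatial_scores(forecast, reference):
    """Illustrative spatial verification scores for one forecast map.

    forecast, reference: 1-D arrays of values over the same grid points.
    Returns (acc, rmse_centered, rmse_uncentered).
    """
    f_anom = forecast - forecast.mean()
    r_anom = reference - reference.mean()
    # Spatial anomaly correlation coefficient (pattern correlation).
    acc = (f_anom * r_anom).sum() / np.sqrt((f_anom**2).sum() * (r_anom**2).sum())
    # Centered RMSE: pattern error after removing each field's spatial mean.
    rmse_centered = np.sqrt(((f_anom - r_anom) ** 2).mean())
    # Uncentered RMSE: full-field error, including the mean bias.
    rmse_uncentered = np.sqrt(((forecast - reference) ** 2).mean())
    return acc, rmse_centered, rmse_uncentered

# Toy example: forecast equals reference plus a uniform 0.5 offset,
# so the pattern is perfect but the full field is biased.
f = np.array([1.0, 2.0, 3.0, 4.0])
r = np.array([0.5, 1.5, 2.5, 3.5])
acc, rc, ru = spatial_scores(f, r)
```

The two RMSE flavours are related by rmse_uncentered² = rmse_centered² + (spatial mean bias)², which is why listing both for each variable matters when counting the scores.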
Small corrections:
Fig. 1: It is hard to see the brown HIST line over the purple lines. It would also be convenient to indicate in the legend the period over which the skill is calculated, so that this information is directly available to the reader.
Fig. S3b: The dark green line is missing from the figure legend below the x-axis.
l.210: remove the tilde over the "1"
Fig. S6: It would be easier for the reader if "_DP" were appended directly to the names of the models that belong to the DP ensemble.