Reliability of Resilience Estimation based on Multi-Instrument Time Series
- 1 Institute of Geosciences, Universität Potsdam, Germany
- 2 Department of Geodesy and Geo-Information, Vienna University of Technology, Vienna, Austria
- 3 Global Systems Institute, University of Exeter, Exeter, UK
- 4 Earth System Modelling, School of Engineering & Design, Technical University of Munich, Germany
- 5 Potsdam Institute for Climate Impact Research, Germany
- 6 Department of Mathematics, University of Exeter, UK
Abstract. Many widely used observational data sets are composed of several overlapping instrument records. While data inter-calibration techniques often yield continuous and reliable data for trend analysis, less attention is generally paid to preserving higher-order statistics such as variance and autocorrelation. A growing body of work uses these metrics to quantify the stability or resilience of a system under study, and potentially to anticipate an approaching critical transition. In this context, it is important to explore the degree to which changes in resilience indicators such as variance or autocorrelation can be attributed to non-stationary characteristics of the measurement process rather than to actual changes in the dynamical properties of the system. In this work we use both synthetic and empirical data to explore how changes in the noise structure of a data set propagate into the commonly used resilience metrics lag-one autocorrelation and variance. We focus on examples from remotely sensed vegetation indicators, such as Vegetation Optical Depth and the Normalized Difference Vegetation Index, from different satellite sources. We find that varying satellite noise levels and data aggregation schemes can bias inferred resilience changes. These biases are typically more pronounced when resilience metrics are aggregated (for example, by land-cover type or region), whereas estimates for individual time series remain reliable at reasonable sensor noise levels. Our work provides guidelines for the treatment and aggregation of multi-instrument data in studies of critical transitions and resilience.
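To make the two resilience metrics concrete, the following is a minimal Python sketch of sliding-window lag-1 autocorrelation and variance computed on a synthetic series with added instrument noise. The AR(1) background, white-noise model, SNR values, and 100-point window are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def ar1_series(n, phi=0.8, seed=0):
    """Generate a simple AR(1) process as a stand-in for a vegetation signal."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def add_instrument_noise(signal, snr, seed=1):
    """Add white measurement noise scaled to a target signal-to-noise ratio."""
    rng = np.random.default_rng(seed)
    noise_std = np.std(signal) / snr
    return signal + rng.normal(0.0, noise_std, size=signal.size)

def rolling_indicators(x, window=100):
    """Sliding-window lag-1 autocorrelation (AR1) and variance."""
    ar1, var = [], []
    for i in range(len(x) - window):
        w = x[i:i + window]
        w = w - w.mean()
        ar1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
        var.append(w.var())
    return np.array(ar1), np.array(var)

signal = ar1_series(2000)
for snr in (2.0, 0.5, 0.1):
    noisy = add_instrument_noise(signal, snr)
    ar1, var = rolling_indicators(noisy)
    print(f"SNR={snr:4.1f}  mean AR1={ar1.mean():.2f}  mean variance={var.mean():.2f}")
```

As expected for additive white noise, lower SNR pulls the windowed AR1 towards zero and inflates the windowed variance, which is the kind of measurement-driven bias examined in the manuscript.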
Taylor Smith et al.
Status: closed
-
RC1: 'Comment on esd-2022-41', Anonymous Referee #1, 07 Oct 2022
This manuscript addresses an interesting topic by investigating whether measurement noise can affect the inference of resilience from remote sensing data. The authors use a simulation approach to investigate how the signal-to-noise ratio (SNR) influences the calculation of two indicators of resilience, namely lag-1 autocorrelation (AR1) and variance. Their results have implications for assessing the possible impact of measurement errors in observational data. Overall, I find this study well designed and conducted. I have a few comments that may help improve the paper.
(1) The study generates simulated time series by combining a background time series with random noise. I wonder whether this captures the realistic errors introduced by changing instruments. I am not an expert in remote sensing, but I believe that in some cases an instrument change might induce a sudden increase or decrease in the time series (a step change) rather than a small noise term. Such changes can have major impacts on the calculated resilience indicators, keeping in mind that these indicators are used to detect 'sudden changes' in a time series, whether caused by measurement errors or by underlying processes (a brief illustrative sketch of such a step-change effect follows comment (4) below).
(2) AR1 and variance are two important indicators of resilience, or early warning signals (EWS) for catastrophic changes, but there are more. Moreover, researchers have been developing composite EWS by combining different metrics. Given that measurement errors may influence AR1 and variance differently, or in opposite directions, I wonder whether a composite EWS would be more robust to measurement errors.
(3) The authors discuss the difference between the average of the variance from individual time series and the AR1 of the aggregated time series, in particular their different behaviour in the presence of measurement errors. Similarly, the cited study Feng et al. (2021) found different temporal trends for these two metrics. However, these two metrics represent different properties (i.e., local- vs. larger-scale resilience), and they would not necessarily exhibit the same patterns even in the absence of measurement error. The problem is that the local-scale variances do not simply add up to the larger-scale variance; the aggregate variance is modulated by the synchrony between local grid cells (see the variance decomposition after the specific comments below). I attach a theoretical paper illustrating this:
Wang, S. & Loreau, M. Ecosystem stability in space: α, β and γ variability. Ecol. Lett. 17, 891–901 (2014).
(4) While the manuscript is overall well written, I have to say that I was confused by the different metrics in the figures, which seem closely related but differ in important ways. For instance, the authors calculate resilience indicators using several approaches, e.g., deriving them for an individual time series, first aggregating the time series and then calculating AR1 and variance, or first calculating AR1 and variance and then averaging them. They also calculate the correlation between AR1 and variance at different levels of aggregation. I suggest adding a table that clearly defines all key metrics shown in the figures, with an explanation of what a positive/negative or higher/lower value means.
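Regarding comment (1), the following is a minimal sketch (not taken from the manuscript) of how an abrupt instrument-changeover offset, rather than extra white noise, affects windowed AR1 and variance. The step size, window length, and AR(1) background are arbitrary illustrative choices, with a deliberately large offset for clarity.

```python
import numpy as np

# Illustration of comment (1): model an instrument changeover as a step offset.
rng = np.random.default_rng(42)
n, phi, window = 2000, 0.8, 200
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

step = x.copy()
step[n // 2:] += 5.0  # abrupt offset at the (hypothetical) sensor changeover

def window_stats(series, i, window):
    """Lag-1 autocorrelation and variance within one window."""
    w = series[i:i + window] - series[i:i + window].mean()
    return np.corrcoef(w[:-1], w[1:])[0, 1], w.var()

# Compare a window entirely before the break with one that straddles it
for label, i in [("before break", n // 4), ("straddling break", n // 2 - window // 2)]:
    ar1, var = window_stats(step, i, window)
    print(f"{label:>17}: AR1={ar1:.2f}, variance={var:.2f}")
```

A window that straddles the break shows markedly higher AR1 and variance than one before it, even though the underlying dynamics are unchanged, which is exactly the kind of artefact the reviewer describes.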
Specific comments:
L52: What does ‘synthetic series’ mean? I think it is simply a simulated time series.
L79: How was this ‘aggregating’ implemented?
Figure 2: I am not sure that I understood these figures correctly. Does the 'median corrcoef median signals' on the right represent the median of the 'corrcoef of median signal' on the left? Why are they so different, even in sign?
L130: "the correlation between AR1 and variance is generally positive for individual synthetic series" – is there any result supporting this statement?
Figure 3: I must be missing something. How is the SNR determined in the real data? And does the empirical data actually exhibit SNR = 0.1 and 2?
L242: What is meant by 'aggregated'? You explained that first aggregating the time series and then calculating AR1 and variance can partly remove the influence of changes in satellite instruments. So do you mean 'first calculating AR1 and variance and then averaging these metrics'?
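For comment (3) and the aggregation questions above, the standard variance decomposition makes the synchrony point explicit (this is textbook algebra, not a result from the manuscript). For $n$ grid-cell series $x_i$,

\[
\operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right)
= \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\operatorname{Cov}(x_i, x_j)
= \frac{1}{n^2}\sum_{i=1}^{n}\operatorname{Var}(x_i)
+ \frac{1}{n^2}\sum_{i \neq j}\operatorname{Cov}(x_i, x_j).
\]

The mean of the local variances corresponds only to the first term (up to the $1/n$ scaling); the second term is set by the synchrony between grid cells. Independent measurement noise in each cell contributes only to the first term and is therefore suppressed by spatial aggregation, while a shared ecological signal also contributes to the covariance term.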
-
AC1: 'Reply on RC1', Taylor Smith, 07 Nov 2022
The comment was uploaded in the form of a supplement: https://esd.copernicus.org/preprints/esd-2022-41/esd-2022-41-AC1-supplement.pdf
-
RC2: 'Comment on esd-2022-41', Anonymous Referee #2, 20 Oct 2022
This study assesses the impact of data noise on estimating two resilience metrics, variance and lag-1 autocorrelation, from satellite data. The topic addressed is very important, because satellite products are widely used to quantify the resilience of terrestrial ecosystems. My major concern is that it is already expected that data noise will affect the reliability of the metrics; what is the new finding of this study? I hope two aspects can be investigated in more depth: 1. What is the uncertainty of the existing satellite products when used for quantifying resilience? For this purpose, the 'real noise' of the data needs to be quantified. 2. What is the uncertainty when using the products to depict temporal changes in ecosystem resilience? For this purpose, the temporal changes in the noise also need to be quantified.
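One simple way to approach the reviewer's call for quantifying the 'real noise' of a product is a method-of-moments estimate that assumes the observed series is an AR(1) signal plus independent white measurement noise. This is only a rough diagnostic under strong assumptions (real sensor errors such as orbital drift, changing overpass times, or radio-frequency interference violate them) and is not how the satellite products' error characteristics are formally specified. A sketch:

```python
import numpy as np

def autocovariance(x, lag):
    """Sample autocovariance of a mean-removed series at the given lag."""
    x = x - x.mean()
    return np.mean(x[:-lag] * x[lag:]) if lag > 0 else np.mean(x * x)

def estimate_noise_variance(y):
    """Method-of-moments noise estimate assuming y = AR(1) signal + white noise."""
    g0, g1, g2 = (autocovariance(y, k) for k in (0, 1, 2))
    phi = g2 / g1                  # AR(1) coefficient of the underlying signal
    signal_var = g1 ** 2 / g2      # variance of the noise-free signal
    return g0 - signal_var, phi, signal_var

# Quick check on synthetic data: AR(1) signal (phi=0.8) plus unit-variance noise
rng = np.random.default_rng(0)
n = 20000
s = np.zeros(n)
for t in range(1, n):
    s[t] = 0.8 * s[t - 1] + rng.standard_normal()
y = s + rng.normal(0.0, 1.0, n)

noise_var, phi, signal_var = estimate_noise_variance(y)
print(f"estimated noise variance ≈ {noise_var:.2f} (true 1.0), phi ≈ {phi:.2f}")
```

Applying such an estimate separately to early and late portions of a record (or to the segments covered by different sensors) would give a first-order answer to the reviewer's second point about temporal changes in the noise level.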
-
AC2: 'Reply on RC2', Taylor Smith, 07 Nov 2022
The comment was uploaded in the form of a supplement: https://esd.copernicus.org/preprints/esd-2022-41/esd-2022-41-AC2-supplement.pdf