Simple physics-based adjustments reconcile the results of Eulerian and Lagrangian techniques for moisture tracking
Abstract. The increase in the number and quality of numerical moisture tracking tools has greatly improved our understanding of the hydrological cycle in recent years. However, the lack of observations has prevented a direct validation of these tools, and it is common to find large discrepancies among their results, especially between Eulerian and Lagrangian methodologies. Here, we evaluate two diagnostic tools for moisture tracking, WaterSip and UTrack, using simulations from the Lagrangian model FLEXPART. We assess their performance against the Weather Research and Forecasting (WRF) model with Eulerian Water Vapor Tracers (WRF-WVTs). Taking WRF-WVT results as a proxy for reality, we explore the discrepancies between the Eulerian and Lagrangian approaches for five precipitation events associated with atmospheric rivers and propose some physics-based adjustments to the Lagrangian tools. Our findings reveal that UTrack, constrained by evaporation and precipitable water data, agrees slightly better with WRF-WVTs than WaterSip, which is constrained by specific humidity data. As in previous studies, we find a negative bias in the contribution of remote sources, such as tropical ones, and an overestimation of local contributions. Quantitatively, the root-mean-square error (RMSE) for contributions from selected source regions is 5.55 for WaterSip and 4.64 for UTrack, highlighting UTrack's narrowly superior performance. Implementing our simple and logical corrections leads to a significant improvement in both methodologies, effectively reducing the RMSE by over 50 % and bridging the gap between Eulerian and Lagrangian outcomes. Our results suggest that the major discrepancies between the different methodologies were not rooted in their inherently different nature, but in the neglect of basic physical considerations that can be easily corrected.
Status: final response (author comments only)
RC1: 'Comment on esd-2024-18', Anonymous Referee #1, 15 Jul 2024
General comments
This study investigates the uncertainty in precipitation source regions estimated by three different modeling approaches. Precipitation sources estimated by the online Eulerian-based WRF-WVT method are taken as the reference, against which estimates from two offline Lagrangian-based methods are compared: the WaterSip and UTrack methods. Both methods are found to exhibit biases in the estimated precipitation sources compared to the reference data set, in particular showing sources to be geographically closer to the precipitation than the more remote sources estimated by the reference. The study then tests a structural modification to each of the WaterSip and UTrack methods and finds bias is reduced and precipitation sources are made geographically closer to those of the WRF-WVT reference. A key conclusion of the study is that the Lagrangian methods can serve as viable alternatives to the more computationally-expensive WRF-WVT method. The study is well-defined, well-written and the conclusions logically follow the results. In particular, the authors are to be commended for detailing the structural differences between the models. The main area of improvement needed is the clarification of the proposed modifications to the Lagrangian models, and their resulting evaluation against the reference dataset.
Specifically, the modification of the UTrack model appears to contain two changes: (1) only parcels released from above 2km may be used for tracking, and (2) of those parcels, only those with relative humidity above 90% are subsequently tracked. It is unclear which modification dominates the reported changes to precipitation sources relative to the WRF-WVT sources. Of more minor importance, it is unclear why a higher relative humidity threshold is applied to the UTrack model compared to the WaterSip model; this choice of model modification needs to be clarified.
The modification of the WaterSip model, requiring parcels to have a minimum relative humidity of 80% immediately before a decrease in specific humidity, needs to be explained more clearly. It needs to be made clearer what the exact problem is with the way WaterSip reduces parcel specific humidity en route, and how applying an 80% threshold of relative humidity helps.
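To make the requested clarification concrete, here is a minimal sketch (in Python, with hypothetical array names and thresholds) of how a relative-humidity filter on moisture-loss events might be expressed; it is illustrative only, not the authors' implementation:

```python
import numpy as np

def flag_precipitation_events(q, rh, rh_min=0.80, dq_min=1e-4):
    """Flag trajectory steps where a drop in a parcel's specific humidity
    is interpreted as precipitation rather than mixing with drier air.

    q, rh  : 1-D arrays along the trajectory (ordered forward in time),
             specific humidity [kg/kg] and relative humidity [0-1].
    rh_min : the parcel must be at least this close to saturation
             immediately before the decrease (the proposed 80 % criterion).
    dq_min : minimum humidity decrease to count as a loss event at all.
    """
    dq = np.diff(q)                          # q[t+1] - q[t] at each step
    lost_moisture = dq < -dq_min             # humidity dropped beyond the threshold
    nearly_saturated = np.asarray(rh)[:-1] >= rh_min
    return lost_moisture & nearly_saturated  # one boolean flag per trajectory step
```

Presumably, in a WaterSip-style bookkeeping only the flagged steps would then trigger the proportional discounting of previously identified uptakes, while unflagged humidity decreases would be treated as mixing; spelling this out in the manuscript would answer the question raised above.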
Specific comments
L47: Which problem is being referred to here?
L55/60: Here it is asserted that Eulerian approaches are more accurate than Lagrangian approaches. I do not think it is true that, in general, Eulerian tracing approaches are considered to be more reliable than Lagrangian approaches in accurately estimating precipitation sources. Perhaps you mean online Eulerian water vapor tracers are considered more accurate? If this is the case, I suggest rephrasing to clarify. Furthermore, if Lagrangian approaches are asserted to contain “more uncertainty”, then these uncertainties need to be outlined. Relatedly, I think it is important to be careful about asserting that WRF-WVTs can be “considered as synthetic observations”. There needs to be some evidence that WRF-WVTs can in fact accurately represent real observations, for example through comparison with satellite observations of atmospheric moisture. If this or a similar type of evaluation has been done, please refer to it here. Otherwise, I would tone down the language by changing the words “considered as synthetic observations” in L63 (also in L436) to “used as a reference”.
L145: Is the specific humidity assimilated from ERA5 like the evaporation field? Does the WRF model close the water balance if ERA5 evaporation is assimilated?
L155: While the manuscript makes it clear that parcel trajectories are calculated using WRF data in the first case, and ERA5 data in the second case, it is a little unclear which dataset was used to calculate the moisture contribution for each Lagrangian model. From reading section 2.3, I interpret that in the first case, “FLEXPART-WRF”, WaterSip reads the specific humidity field from WRF, and UTrack reads the precipitable water field from WRF but the evaporation field is ERA5 data assimilated into WRF. In the second case, “FLEXPART-ERA5”, I interpret that both WaterSip and UTrack read all fields from ERA5. If this is not the correct interpretation, please clarify.
L172 & L210: The Dirmeyer and Brubaker approach is also used by other studies, whose moisture tracking method is very similar to UTrack, e.g. Holgate, C. M., J. P. Evans, A. I. J. M. van Dijk, A. J. Pitman, and G. D. Virgilio, 2020: Australian Precipitation Recycling and Evaporative Source Regions. Journal of Climate, 33, 8721–8735, https://doi.org/10.1175/JCLI-D-19-0926.1. Similarly, the WaterSip approach is also used by other studies, e.g. Cheng, T. F., and M. Lu, 2023: Global Lagrangian Tracking of Continental Precipitation Recycling, Footprints, and Cascades. Journal of Climate, https://doi.org/10.1175/JCLI-D-22-0185.1. Though these specific methods are not formally evaluated here, it would be pertinent to acknowledge them.
Figures 3 and 4: it would be helpful to the reader if these figures could be placed side by side for easier comparison. Is it possible to combine the two figures into one?
L230: To make it easier for the reader to interpret the error scores, it would be helpful to add a sentence linking each score with a physical meaning, e.g. a higher value of MAESS refers to a more accurate comparison with the reference dataset.
L303: To make it clearer to the reader, it would be helpful for the accumulation over time to be shown with a simple example. As the manuscript currently reads, it is unclear what the problem with the WaterSip method is.
L378: The original configuration of UTrack appears to release parcels from a random, humidity-weighted vertical level, indicating the starting parcel levels will be in the lower part of the troposphere. Yet here, and in Figure 7, it is indicated that the starting parcel level is 0km. Was the starting parcel height set at 0km in this study, or was a random, humidity-weighted vertical level used as in the original model? Further, did this study use a random, humidity-weighted vertical release height and simply ignore those parcels starting below 2km, or was the release height set at a constant 2km level in the modified case?
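For reference, a humidity-weighted random release level of the kind asked about here could be drawn as in the following sketch; the function name, the zeroing of weights below a cut-off height, and the 2000 m value are assumptions for illustration, not UTrack's actual code:

```python
import numpy as np

def sample_release_level(q_profile, z_levels, min_height=None, rng=None):
    """Draw a parcel release height with probability proportional to the
    specific humidity at each vertical level of the column.

    q_profile  : specific humidity [kg/kg] at the column's vertical levels
    z_levels   : heights [m] of those levels
    min_height : if set (e.g. 2000.0 for the modified setup), levels below
                 this height get zero weight, so parcels are only released
                 above it.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = np.asarray(z_levels, dtype=float)
    weights = np.asarray(q_profile, dtype=float).copy()
    if min_height is not None:
        weights[z < min_height] = 0.0
    weights /= weights.sum()                 # normalize to a probability distribution
    return rng.choice(z, p=weights)
```

Whether the modified experiments re-weighted the release heights like this, kept a fixed 2 km release level, or simply discarded parcels starting below 2 km is exactly the ambiguity that should be resolved.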
L416: Can you provide some reasoning as to why WaterSip is superior to UTrack when using ERA5 data?
L475: The statement that the Lagrangian methods can serve as viable alternatives for WRF-WVTs is a key conclusion of the study. I would suggest including this conclusion in the abstract.
Technical corrections
Figure 1: it would be helpful if the subplots each had a title describing their geographic location, e.g. “South Africa”. These location labels can then be added to Table 1 to make it easier for the reader to associate the numerical description with a real-world location.
Figure 2: “Tropical Indic” should perhaps be “Tropical Indian” (same issue applies to later figures). Also some parts of the world are classed as “Tropical land” when they are in fact desert regions (e.g. northern and southern Africa, central Australia, Arabian peninsula). To avoid re-running the model with different regions, I suggest touching on the implications of this classification in your results.
L165: Should “Except for the position and the…” be “Except for the position of the parcel and the …”?
Citation: https://doi.org/10.5194/esd-2024-18-RC1
RC2: 'Reviewer comment on esd-2024-18', Harald Sodemann, 16 Aug 2024
Review of "Simple physics-based adjustments reconcile the results of Eulerian and Lagrangian techniques for moisture tracking" by Crespo-Otero et al., submitted to Earth System Dynamics Discussions
The authors present a study focused on the comparison between Eulerian and Lagrangian approaches to trace moisture and to identify the evaporation sources of precipitation. Using a regional model simulation with water tagging as a reference, they then evaluate two Lagrangian offline approaches in that framework for a set of Atmospheric River events from different regions. Two tunings are proposed to reduce a general bias towards shorter transport distances in Lagrangian methods. The study is overall interesting, presented clearly, and well written. However, the fairly coarse choice of tagging regions, as well as the exclusive selection of AR cases, introduces limitations that are currently not well addressed. A more careful and balanced discussion of the results and implications from this study is thus advised. I also see further issues with the proposed tuning and with regard to some parts of the literature detailed below that the authors should address when preparing a revised manuscript.
Major comments
1. Coarse definition of tagging regions. The authors subdivide the hemispheric land and ocean into 9 sectors, separated along 30 N and S. This allows only for a very coarse distinction between ocean basins, continental areas, and their boundaries. As the Lagrangian diagnostics are showing, the majority of sources are located in different regions within the same ocean basin. As a consequence, the RMSE computed here only picks up the outermost differences. An example for this is seen for the Greenland AR, where the structures in the North Atlantic region widely differ between the two Lagrangian approaches. The current tagging setup misses these differences entirely, and exclusively focusses on the fringe of the moisture sources. There are probably two ways to approach this deficiency: One is to increase the number of tracer subdivisions depending on every case, adding complexity to the study, but providing more sharpness in the tagging simulation (e.g. using a setup similar to Sodemann and Stohl, 2013). The other way is to openly address this deficiency in the study design, adjust the discussion to be more nuanced, and formulate the conclusions more carefully.
2. Biased selection of cases. The study includes five AR cases from different parts of the world. All cases are thus potentially related to a large amount of long-range transport. While this selection in itself is no matter of concern, proposing a general tuning of the Lagrangian methods based on a selection of long-range transport cases only is problematic, as it may introduce biases during cases of more local precipitation sources (e.g. convective summertime precipitation, weaker precipitation cases). The focus on AR cases only, and the limitations following along with that, should be more clearly highlighted in the title, abstract, and conclusions.
3. Proposed tuning to the WaterSip method. The authors propose to introduce a relative humidity threshold in the WaterSip method during the identification of precipitation/moisture loss events en route. While such a proposal seems physically plausible at first, there are some downsides as well. Importantly, a moisture loss can be due to one of two reasons: either removal of water vapour from the atmosphere due to condensation and precipitation, or mixing with drier air masses. The second case will necessarily be ignored in unsaturated situations if a relative humidity threshold is introduced as proposed here. Ignoring the lowering of specific humidity due to mixing can then lead to an over-accounting of the moisture sources, i.e. a larger amount of uptakes is assigned than is actually contained in the specific humidity of the air parcel. Dütsch et al. (2018) proposed a distinction between mixing events and rainout events. However, both types of situations still need to be part of the accounting method to be physically plausible.
A more conventional tuning of the WaterSip method is to change the specific humidity thresholds and the time step. While the authors have tested different time steps, the specific humidity threshold has been set to a quite low value compared to the literature (a common value is 0.2 g kg-1 per 6 h). The specific humidity threshold will have a similar effect to the RH threshold, and is justified by interpolation errors in the offline approach. Can the authors report how sensitive the moisture sources, and thus the RMSE values, are to a variety of changes in the specific humidity threshold?
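A sensitivity scan of the kind requested here could be reported compactly; the sketch below assumes a hypothetical wrapper source_fractions(dq_thr) that reruns the WaterSip diagnostic for a given specific humidity threshold and returns the contributions of the tagged regions, together with a reference vector ref from the WRF-WVT tagging:

```python
import numpy as np

def rmse(estimate, reference):
    """Root-mean-square error between two vectors of source-region contributions."""
    e, r = np.asarray(estimate, float), np.asarray(reference, float)
    return float(np.sqrt(np.mean((e - r) ** 2)))

def threshold_scan(source_fractions, ref, thresholds=(0.05, 0.1, 0.2, 0.3)):
    """Map each candidate dq threshold [g kg-1 per 6 h] to its RMSE vs. the tagging."""
    return {thr: rmse(source_fractions(thr), ref) for thr in thresholds}
```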
Ultimately, I think one also has to acknowledge that offline trajectory methods do have their inherent limitations, both from the computation of trajectories, and the specifics of the moisture source diagnostic, which are sort of the price for the lower computational expense, and the more detailed spatial information on the source location. Knowing different methods' limitations may be in the end more valuable than tuning methods towards an expected or desired outcome for a specific type of cases. Maybe the authors could reflect on this perspective in their discussion and conclusions?
4. Title, abstract and conclusions appear too wide-ranging. As partly commented in the points above, the present study has limitations from the method design with respect to tracer setup and case selection, and the tuning of Lagrangian methods can lead to inconsistencies in the method. The discussion throughout the manuscript should be more nuanced and balanced by taking up these limitations. In particular the abstract is now formulated in a very definite, concluding language, which does not seem justified in the light of the limitations mentioned above. The title also suggests to a superficial reader that studies should generally apply the proposed tunings, but their general validity is questionable, or is at least not generally established. In particular the aspect of AR case selection could be included in the title. The study design with coarse tagging regions does in my view not 'reconcile' different approaches, but is rather a tuning using a particular choice of parameters. Maybe the title could be rephrased in terms of sensitivity, and mention the importance of long-range transport for the examined cases?
5. Use of literature. There are some citations of previous tagging studies that are missing or could be valuable to add. There are also some wrong citations (Lagrangian method cited in Eulerian context). These publications are listed in the detailed comments below.
Detailed comments
L. 21: What unit does the RMSE have, is this in percent, or a fraction?
L. 22: "narrowly superior performance": How significant are the differences of less than 1 (%?) between both methods considering all sources of uncertainty?
L. 23: Maybe clarify that this is a relative improvement, since the RMSE appears to have the same units. The 50 % relative improvement could be misleading, both because the units are the same as for the RMSE, and given the overall quite small RMSE difference. Can the overall result be presented in a more balanced and objective way here?
L. 24: I think this conclusion statement is going too far. The selection of cases and limitations in the setup does not allow this conclusion. Expressed more neutrally, the sensitivity test and tuning performed here increase the amount of long-range transport detected from the Lagrangian methods. There is not sufficient evidence presented supporting that the tuning is valid generally in all cases. Maybe instead it could be emphasized that the overall approach of using a Eulerian tagging setup to validate Lagrangian methods is promising, but needs further refinement for generally valid modifications.
L. 35 and elsewhere: It is customary to sort references by year of publication. Consider adding Yoshimura et al., 2004 to this list. Sodemann and Stohl (2009) is a Lagrangian study, did you mean to cite here Sodemann et al. (2009)?
L. 36: "Lagrangian transport models": Lagrangian transport models are the general category of models that simulate airmass transport. To be more specific to the case here, consider rephrasing as "Lagrangian moisture source diagnostics".
L. 39: I do not know of an existing online implementation of a Lagrangian moisture source diagnostics. The offline/online distinction can however be made regarding the tagging and Lagrangian methods.
L. 40: "most academics often use": this point is debatable, there exist a range of studies that do make such comparison efforts.
L. 41: "results can be highly discrepant": Winschall et al., 2014 does not provide highly discrepant results, at least that is not what is said in this paper. Please rephrase to do justice to the actual state of the literature, and to better clarify the intent and actual novelty of this study. In this context, please also consider the book chapter of Sodemann and Joos (2021).
L. 46: Consider adding references to the original AR studies in this context.
L. 47: This statement does not seem to do justice to the existing literature. See Sodemann and Stohl (2013) for a tagging study focused on AR events, as well as Stohl et al. (2008) for a study with Lagrangian methods. There are also a range of studies from other regions and locations (e.g. Terpstra et al., 2021, Bonne et al., 2015). Please update this statement in light of existing studies, and clarify what this study adds to the existing literature. Please also take notice of the book chapter about AR moisture budgets (Sodemann et al. 2020). What is meant by "go beyond the identification of moisture sources to quantify them?"
L. 55: There are two aspects here that are a little bit mixed together. One is that the tagging simulation is also only a model representation of the actual water cycle in nature. At the grid resolution of the model (here 20 km horizontally), a large spectrum of the processes affecting the water cycle are parameterized. I assume that also a deep and shallow convection parameterisation (which one?) has been used in the Eulerian model simulation. Obviously, the model will thus not be identical with nature. However, the approach and argument of the present study is, as I understand it, that the tagging water cycle and the Lagrangian methods are internally consistent, even if the tagging results differ from nature. This is important, as the authors write, since the source information that is being sought after is not directly available from observations.
L. 57: Another important limitation of the tagging approach, which also becomes apparent in this study, is the requirement to predefine moisture sources in this forward calculation approach. If more spatial detail is required, the computational overhead multiplies and can become prohibitive. In contrast, the Lagrangian backward approaches provide spatially detailed information, that can be more easily interpreted, for example in terms of the physical processes related to weather systems. This discrepancy between both approaches is important to mention here.
L. 59: Maybe mention here that the Lagrangian methods, being offline diagnostics, require a range of assumptions and parameter choices to which these methods are sensitive. Your comparison framework allows one to assess what biases exist with the different diagnostics, and how those are related to parameters and assumptions in the Lagrangian methods.
L. 61: "fully validated": I assume this relates to the internal consistency of the tagging approach. Validation can be misunderstood as a comparison to observable quantities. Please clarify/rephrase.
L. 70: This is not correct, Sodemann et al. (2008) used trajectories from the LAGRANTO model (Sprenger and Wernli, 2015).
L. 73: "limited to highlighting ... large discrepancies": Please rephrase to do more justice to what is presented in the cited studies. For example, Winschall et al. (2014) specifically investigated the basis of the boundary layer vs. free troposphere distinction in the WaterSip method.
L. 77: "two of the most widely used" -> "two widely used"
L. 84: "vast majority ... force": there is no evidence supporting this statement. I don't think it is necessary to make this statement; adding reanalysis data is useful because, unlike forecast data, it includes analysis increments from data assimilation, see Fremme et al., 2023.
L. 93: It is certainly positive with different AR cases, but these cases are all long-range transport events. Can you add some clear justification for this focus in the introduction? Some of the writing makes the impression that you seek general validity, while the focus on AR events only seems in contradiction to this.
Figure 1: Please add panel labels, and mention all figure panels in the caption. It would be a large advantage to have common color bars for the left and right column panels each. For precipitation amount, it is quite common to use a categorized color bar to that end. This would avoid the saturation of the color scale that now seems to occur.
Table 1: Could this table include information about the total rainfall amount of these events in the model and maybe observations? Is it correct that the two last events have the same date and time, but different regions?
L. 119: What has been used in terms of deep and shallow convection, microphysics schemes in the WRF simulation?
L. 121: Which fields have been nudged, only winds or also specific humidity? How does the nudging affect the tagging? The authors emphasize the importance of the nudging, but actually I think for the study objective it does not make a difference if the results resemble the actual events closely or not.
L. 126: What is meant by this statement, and how does it relate to this citation? Consider maybe citing Gimeno et al., 2021 here.
L. 133: If QFX is assimilated from ERA5, this can introduce an inconsistency into WRF due to differences in resolution. What is the reason for this choice? How different are the results when using the WRF-internal evaporation flux?
L. 138: Is a time interpolation used here?
L. 149: It may be useful to have some basic information in the main manuscript, such as the chosen parameterisation schemes, and the fact that simulations are hemispheric (?)
Figure 2: The source regions are very large in comparison to the scale of the moisture sources revealed by the Lagrangian diagnostics. A separation into e.g. 10 degree latitude bands or latitude-longitude boxes could allow for a much more detailed comparison and evaluation of Lagrangian models in the Eulerian framework.
L. 166: "FLEXPART assimilates hourly data": FLEXPART does not perform data assimilation, please rephrase. It is not clear what is said from this sentence, the previous section described WRF, not FLEXPART. How exactly is FLEXPART run with WRF? Maybe some of the details from the supplement could be moved to the main text. In particular, it is important to describe how particles were initiated and released, and if any convection parameterisation was active in FLEXPART.
L. 174: "it starts by assuming": This sentence and the following sound a bit strange. What you describe seems to be the basic idea of Lagrangian analysis, which is not particular to WaterSip. It would be useful to cite Stohl and James (2004) in this context, or shorten the section altogether, because all of this has already been said elsewhere.
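For context, the parcel moisture budget that Stohl and James (2004) build this idea on can be stated in one line:

```latex
% Net freshwater flux of an air parcel of mass m, diagnosed from the change
% of its specific humidity q along the trajectory (Stohl and James, 2004):
e - p = m\,\frac{\mathrm{d}q}{\mathrm{d}t} \approx m\,\frac{\Delta q}{\Delta t}
```

Citing this directly would make clear that the basic accounting principle is not particular to WaterSip.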
L. 181 to 207: This section repeats a lot of information that is found in the original publication, and is not necessarily more easy to follow. I recommend limiting this to the most essential parts of the method which are modified here.
L. 188: The threshold value has been repeatedly shown to be a key sensitivity parameter (e.g., Sodemann and Stohl, 2009; Fremme and Sodemann, 2019). In addition, this value is at the very low end of what has previously been recommended, namely for Arctic studies. How sensitive are your baseline results to this choice? To be in line with the literature, I recommend a delta q of 0.2 g kg-1 for a 6 h time interval.
Regarding section S3.1 referenced here, I wonder about what the role of this mathematical description is for the manuscript. There seem to be some arguments about correspondences between the two Lagrangian methods mathematically, but conceptually the two are quite different (e.g., well-mixed properties of the atmosphere). Section S3.1 could benefit from a closer connection to published literature to clarify its purpose. Does this section describe what has been published before, but mathematically in a common framework?
L. 209: The authors refer to the UTrack method as the Dirmeyer and Brubaker (1999) implementation they use. However, as noted in L. 218, UTrack computes its own trajectories. Is it then not more correct to refer to the second model as the Dirmeyer and Brubaker (1999) method? What really distinguishes the approach used here from UTrack and Dirmeyer and Brubaker (1999), respectively?
L. 252: It has been common to initialize the domain at model time zero with all water vapour currently in the domain to achieve 100% accounting. Has this been tested here?
L. 255: This is not correct. The Dirmeyer and Brubaker (1999) method stops accounting evaporation when 100% have been reached. The WaterSip method does not generally reach 100% (see Sodemann et al., 2008).
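To illustrate the distinction, a minimal sketch of the backward accounting along one trajectory is given below, assuming a hypothetical per-step array uptake_frac holding the fraction of the parcel's arriving moisture attributed to evaporation at each backward step:

```python
def attributed_fraction(uptake_frac):
    """Accumulate moisture attribution backward along one trajectory.

    uptake_frac : per-step attribution fractions (0-1), ordered backward in
    time from the precipitation event. The loop stops once 100 % of the
    arriving moisture is attributed, as in the Dirmeyer and Brubaker (1999)
    scheme; if the trajectory ends first, the remainder stays unattributed,
    which is the situation WaterSip-type diagnostics typically end up in.
    """
    total = 0.0
    for f in uptake_frac:
        total += min(f, 1.0 - total)   # never attribute more than what remains
        if total >= 1.0:
            break                      # 100 % reached: stop accounting further uptakes
    return total
```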
L. 256: What is meant by "the bias will also be calculated after adjusting for these precipitation fractions"? This scaling should be explained in the methods section. Why is a scaling necessary at all? Is it not more correct to compare the actual identified fractions? What about comparing amounts rather than fractions?
L. 266: I think the reference to Winschall et al. 2014 is not justified in such a general statement as done here. Winschall et al. 2014 did a sensitivity test of different tagging approaches, and their conclusion was: "The results of the Lagrangian diagnostics are similar to the Eulerian results, with the fraction of remote versus local moisture sources lying in between the two realisations of the tagging technique."
L. 268: Is the RMSE expressed as a fraction as in Eq. (6) or in percent?
L. 274: It is interesting to note that the biases of the UTrack method are different. Why is that the case? In the US West Coast case, for example, UTrack shows a lower performance.
Figure 4: It is not possible to read the numbers printed in white on a light colour background.
L. 279: I do not see a value of 29.6 in Fig. 3, nor of 14.88 for the Tropical Atlantic in Fig. 4. Is this example part of the supplement information? How does the scaling impact the results here?
L. 291: This statement applies to both Lagrangian methods. Before proceeding to tune the methods, it would be useful to quantify the overall bias of the Lagrangian vs. Eulerian methods, maybe at the end of Sec. 3.1, potentially as a function of distance from the arrival location. It may also be worthwhile to comment on the overall consistency of the results from the 3 approaches here. It would also be interesting to know more about the sensitivity here already regarding the specific setup you chose. How different are the errors/biases for a time interval of 6 h, and when increasing the specific humidity threshold to 0.2 g kg-1 per 6 h (or more)?
L. 295: What is presented here is exactly the argument for introducing a specific humidity threshold in WaterSip. So this need not be formulated as a (new) hypothesis; it is part of the known uncertainty of the WaterSip diagnostic.
L. 315: This distinction and modification have already been proposed by Dütsch et al., 2018 (their Sec. 3.2). However, it is important to note that mixing with dry air can also lead to a specific humidity decrease. By only allowing precipitation events to decrease specific humidity, a bias is introduced into the method. This can also result in an over-accounting of sources (more than 100% of moisture accounted for).
Figure 8: Comparing the UTrack results with the corresponding results from WaterSip in Fig. 7, it is very interesting to note how different the spatial maps are from the two methods. UTrack basically shows almost no sources at all in the vicinity of Greenland. While we don't know which one of the results is more correct, this difference is not picked up by the comparison to water vapour tagging in the present setup. This fact points to the current tracer setup being not sufficiently sharp (or detailed enough) to resolve and quantify such differences.
References
Sodemann, H., Wernli, H. and Schwierz, C., 2009: Sources of water vapour contributing to the Elbe flood in August 2002: A tagging study in a mesoscale model, Quart. J. Royal Meteorol. Soc., 135, 205-223, doi:10.1002/qj.374.
Sodemann, H. and Stohl, A., 2013: Moisture origin and meridional transport in atmospheric rivers, and their association with multiple cyclones. Mon. Wea. Rev., 141, 2850–2868, https://doi.org/10.1175/MWR-D-12-00256.1.
Stohl, A., Forster, C. and Sodemann, H., 2008: Remote sources of water vapor forming precipitation on the Norwegian west coast at 60°N - a tale of hurricanes and an atmospheric river, J. Geophys. Res., 113, D05102, doi:10.1029/2007JD009006.
Terpstra, A., Gorodetskaya, I. V., and Sodemann, H.: Linking sub-tropical evaporation and extreme precipitation over East Antarctica: an atmospheric river case study, J. Geophys. Res., 126, https://doi.org/10.1029/2020JD033617, 2021.
Bonne, J.-L., Steen-Larsen, H. C., Risi, C., Werner, M., Sodemann, H., Lacour, J.-L., Fettweis, X., Cesana, G., Delmotte, M., Cattani, O., Vallelonga, P., Kjær, H. A., Clerbaux, C., Sveinbjörnsdóttir, A. E., and Masson-Delmotte, V., 2015: The summer 2012 Greenland heat wave: In situ and remote sensing observations of water vapor isotopic composition during an atmospheric river event, J. Geophys. Res., 120, 2970-2989, doi:10.1002/2014JD022602.
Sodemann, H. and Joos, H., 2021: Numerical methods to identify model uncertainty in: Ólafsson, H. and Bao, J.-W. (Eds), Uncertainties in Numerical Weather Prediction, Elsevier, 309-329, doi: 10.1016/B978-0-12-815491-5.00012-4.
Sodemann, H., Wernli, H., Knippertz, P., Cordeira, J. M., Dominguez, F., Guan, B., Hu, H., Ralph, F. M., and Stohl, A., 2020: Structure, Process and Mechanism, in: Ralph, F. M., Dettinger, M. D., Rutz, J. J., and Waliser, D. E. (Eds), Atmospheric Rivers, Springer International, 15-43, doi: 10.1007/978-3-030-28906-5.
Sprenger, M. and Wernli, H.: The LAGRANTO Lagrangian analysis tool – version 2.0, Geosci. Model Dev., 8, 2569–2586, https://doi.org/10.5194/gmd-8-2569-2015, 2015.
Fremme, A., Hezel, P. J., Seland, Ø., and Sodemann, H.: Model-simulated hydroclimate in the East Asian summer monsoon region during past and future climate: a pilot study with a moisture source perspective, Weather Clim. Dynam., 4, 449–470, https://doi.org/10.5194/wcd-4-449-2023, 2023.
Gimeno, L., Eiras-Barca, J., Durán-Quesada, A.M., Dominguez. F., van der Ent, R., Sodemann, H., Sánchez-Murillo, R., Nieto, R. and Kirchner, J. W.: The residence time of water vapour in the atmosphere. Nat. Rev. Earth Environ., https://doi.org/10.1038/s43017-021-00181-9, 2021.
Fremme, A. and Sodemann, H.: The role of land and ocean evaporation on the variability of precipitation in the Yangtze River valley, Hydrol. Earth Syst. Sci., 23, 2525–2540, https://doi.org/10.5194/hess-23-2525-2019, 2019.
Dütsch, M., Pfahl, S., Meyer, M., and Wernli, H.: Lagrangian process attribution of isotopic variations in near-surface water vapour in a 30-year regional climate simulation over Europe, Atmos. Chem. Phys., 18, 1653–1669, https://doi.org/10.5194/acp-18-1653-2018, 2018.
Yoshimura, K., Oki, T., Ohte, N., and Kanae, S., 2004: Colored moisture analysis estimates of variations in 1998 Asian monsoon water sources, Journal of the Meteorological Society of Japan Ser. II, 82 (5), 1315-1329.
Citation: https://doi.org/10.5194/esd-2024-18-RC2