    Drizzle rates versus cloud depths for marine stratocumuli

    Marine stratocumuli make a major contribution to Earth’s radiation budget. Drizzle in such clouds can greatly affect their albedo, lifetime, and fractional coverage, so drizzle rate prediction is important. Here we examine a question: does the drizzle rate (R) depend on cloud depth (H) and/or drop number concentration (n) in a simple way? This question was raised empirically in several recent publications and an approximate H³/n dependence was observed. Here we suggest a simple explanation for the H³ scaling by viewing the drizzle rate as a sedimenting volume fraction (f) of water drops (radius r) in air, i.e., R = f·u(r), where u is the fall speed of droplets at the cloud base. Both R and u have units of speed. In our picture, drizzle drops begin with condensation growth on the way up and continue with accretion on the way down. The ascent contributes H (f ∝ H) and the descent H² (u ∝ r ∝ fH) to the drizzle rate. A more precise scaling formula is also derived and may serve as a guide for parameterization in global climate models. The number concentration dependence is also discussed and a plausibility argument is given for the observed n⁻¹ dependence of the drizzle rate. Our results suggest that deeper stratocumuli have shorter washout times.
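
    A minimal numerical sketch of the scaling argument (the prefactors c_f and c_u below are arbitrary illustrative constants, not values from the paper): with f ∝ H from the ascent and u ∝ r ∝ fH from the descent, the product R = f·u recovers the H³ dependence.

        # Sketch only: check that R = f * u with f ∝ H and u ∝ f*H gives R ∝ H^3.
        import numpy as np

        c_f, c_u = 1e-6, 1e4             # hypothetical proportionality constants
        H = np.logspace(2, 3, 50)        # cloud depths, 100-1000 m

        f = c_f * H                      # sedimenting volume fraction: ascent gives f ∝ H
        u = c_u * f * H                  # cloud-base fall speed: u ∝ r ∝ f*H from the descent
        R = f * u                        # drizzle rate as a sedimenting volume flux

        slope = np.polyfit(np.log(H), np.log(R), 1)[0]
        print(f"log-log slope of R versus H: {slope:.2f}")  # prints 3.00, i.e. R ∝ H^3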

    Obtaining the drop size distribution

    This document is a supplement to “Fluctuations and Luck in Droplet Growth by Coalescence,” by Alexander B. Kostinski and Raymond A. Shaw (Bull. Amer. Meteor. Soc., 86, 235–244). © 2005 American Meteorological Society.

    Fluctuations and luck in droplet growth by coalescence

    After the initial rapid growth by condensation, further growth of a cloud droplet is punctuated by coalescence events. Such a growth process is essentially stochastic. Yet, computational approaches to this problem dominate and a transparent quantitative theory remains elusive. The stochastic coalescence problem is revisited and it is shown, via simple back-of-the-envelope results, that regardless of the initial size, the fastest one-in-a-million droplets, required for warm rain initiation, grow about 10 times faster than the average droplet. While approximate, the development presented herein is based on a realistic expression for the rate of coalescence. The results place a lower bound on the relative velocity of neighboring droplets necessary for warm rain initiation. Such velocity differences may arise from a variety of physical mechanisms. As an example, turbulent shear is considered and it is argued that even in the most pessimistic case of a cloud composed of single-sized droplets, rain can still form in 30 min under realistic conditions. More importantly, this conclusion is reached without having to appeal to giant nuclei or droplet clustering, only occasional “fast eddies.” This is so because, combined with the factor-of-10 accelerated growth of the one-in-a-million fastest droplets, the traditional turbulent energy cascade provides sufficient microshear at interdroplet scales to initiate warm rain in cumulus clouds within the observed times of about 30 min. The simple arguments presented here are readily generalized for a variety of time scales, drizzle production, and other coagulation processes.
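
    A back-of-the-envelope illustration of the one-in-a-million argument (an assumption-laden sketch, not the paper’s calculation, which uses a realistic size-dependent coalescence rate): if the time to complete k coalescence events is modeled as a sum of k independent exponential waiting times with constant unit rate, i.e. a Gamma(k) variable, the 10⁻⁶ quantile for k near 10 is roughly a tenth of the mean.

        # Sketch under a constant-rate assumption: compare the average droplet
        # to the luckiest one-in-a-million droplet after k coalescence steps.
        from scipy.stats import gamma

        k = 10                                # illustrative number of coalescence steps
        mean_time = k                         # mean of Gamma(k, scale=1), in mean-waiting-time units
        lucky_time = gamma.ppf(1e-6, a=k)     # 10^-6 quantile: the fastest 1-in-a-million
        print(f"mean time: {mean_time:.1f}, lucky time: {lucky_time:.2f}")
        print(f"speed-up factor: {mean_time / lucky_time:.1f}")  # roughly a factor of 10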

    Evolution and distribution of record-breaking high and low monthly mean temperatures

    The ratio of record highs to record lows is examined with respect to the length of the time series for monthly mean temperatures within the continental United States for 1900–2006. In counting the number of records that occur in a single year, the authors find a ratio greater than unity in 2006, increasing nearly monotonically as the time series increases in length via a variable first year over 1900–76. For example, in 2006, the ratio of record highs to record lows ≈ 13:1 with 1950 as the first year and ≈ 25:1 with 1900 as the first year; both ratios are an order of magnitude greater than 3σ for stationary simulations. This indicates a warming trend. It is also found that records are more sensitive to trends in time series of monthly averages than in time series of corresponding daily values. When the last year (1920–2006, starting in 1900) is varied, it is found that the ratio of record highs to record lows is strongly correlated with the ensemble mean temperature. Correlation coefficients are 0.76 and 0.82 for 1900–2006 and 1950–2006, respectively; 3σ = 0.3 for pairs of uncorrelated stationary time series. Similar values are found for globally distributed time series: 0.87 and 0.92 for 1900–2006 and 1950–2006, respectively. The ratios evolve differently, however: global ratios increase throughout (1920–2006) whereas continental U.S. ratios decrease from about 1940 to 1970. Last, the geographical and seasonal distributions of trends are considered by summing records over time rather than ensemble. In the continental United States, the greatest excess of record highs occurs in February (≈2:1) and the greatest excess of record lows occurs in October (≈2:3). In addition, ratios are pronounced in certain regions: in February in the Midwest the ratio ≈ 5:2, and in October in the Southeast the ratio ≈ 1:2.
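
    An illustrative null experiment (an assumed setup, not the authors’ code or data) clarifies the stationary baseline: in a stationary climate, record highs and lows are equally likely, so the highs-to-lows ratio in the final year of an ensemble of iid series should fluctuate around 1:1, in contrast to the observed 13:1 and 25:1.

        # Null model: iid stand-ins for one calendar month at many stations.
        import numpy as np

        rng = np.random.default_rng(0)
        n_stations, n_years = 1000, 107                 # e.g. 1900-2006
        x = rng.standard_normal((n_stations, n_years))

        # The last year sets a record if it exceeds (or falls below) all prior years.
        prior_max = np.maximum.accumulate(x[:, :-1], axis=1)[:, -1]
        prior_min = np.minimum.accumulate(x[:, :-1], axis=1)[:, -1]
        highs = np.sum(x[:, -1] > prior_max)            # record highs in the final year
        lows = np.sum(x[:, -1] < prior_min)             # record lows in the final year
        print(f"highs: {highs}, lows: {lows}, ratio: {highs / max(lows, 1):.2f}")  # near 1:1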

    Reversible record breaking and variability: Temperature distributions across the globe

    Based on counts of record highs and lows, and employing reversibility in time, an approach to examining natural variability is proposed. The focus is on intrinsic variability; that is, variance separated from the trend in the mean. A variability index α is suggested and studied for an ensemble of monthly temperature time series around the globe. Deviation of 〈α〉 (mean α) from zero, for an ensemble of time series, signifies a variance trend in a distribution-independent manner. For 15 635 monthly temperature time series from different geographical locations (Global Historical Climatology Network), each about a century long, 〈α〉 = −1.0, indicating decreasing variability. This value is an order of magnitude greater than the 3σ value of stationary simulations. Using the conventional best-fit Gaussian temperature distribution, the trend is associated with a change of about −0.2°C (106 yr)⁻¹ in the standard deviation of interannual monthly mean temperature distributions (about 10%).
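
    The abstract does not define α, so the sketch below uses a hypothetical proxy (alpha_proxy, forward-minus-backward record counts) purely to illustrate the reversibility idea: under shrinking variance, a forward series sets fewer records (high or low) than its time-reversed copy, so the proxy goes negative for an ensemble.

        # Hedged illustration; alpha_proxy is a stand-in, not the paper's index.
        import numpy as np

        def count_records(x):
            """Total record highs plus record lows set after the first value."""
            highs = np.sum(x[1:] > np.maximum.accumulate(x)[:-1])
            lows = np.sum(x[1:] < np.minimum.accumulate(x)[:-1])
            return highs + lows

        rng = np.random.default_rng(1)
        n_series, n_years = 5000, 107
        sigma = np.linspace(1.2, 0.8, n_years)        # variance decreasing over time
        alpha_proxy = np.mean([
            count_records(s) - count_records(s[::-1])
            for s in rng.standard_normal((n_series, n_years)) * sigma
        ])
        print(f"mean forward-minus-backward record count: {alpha_proxy:.2f}")  # < 0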

    Universal rank-order transform to extract signals from noisy data

    We introduce an ordinate method for noisy data analysis, based solely on rank information and thus insensitive to outliers. The method is nonparametric and objective, and the required data processing is parsimonious. The main ingredients include a rank-order data matrix and its transform to a stable form, which provide linear trends in excellent agreement with least squares regression, despite the loss of magnitude information. A group symmetry orthogonal decomposition of the 2D rank-order transform for iid (white) noise is further ordered by principal component analysis. This two-step procedure provides a noise “etalon” used to characterize arbitrary stationary stochastic processes. The method readily distinguishes both the Ornstein-Uhlenbeck process and chaos generated by the logistic map from white noise. Ranking within randomness differs fundamentally from that in deterministic chaos and signals, thus forming the basis for signal detection. To further illustrate the breadth of applications, we apply this ordinate method to the canonical nonlinear parameter estimation problem of two-species radioactive decay, outperforming special-purpose least squares software. We demonstrate that the method excels when extracting trends in heavy-tailed noise and, unlike the Theil–Sen estimator, is not limited to linear regression. A simple expression is given that yields a close approximation for extraction of an underlying, generally nonlinear signal.
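
    A loose illustration of the rank idea (not the paper’s full 2D rank-order transform; the parameters below are arbitrary): fitting a line to the ranks of the data discards magnitude information, so the trend estimate stays stable in heavy-tailed noise where ordinary least squares on raw values becomes erratic.

        # Sketch: trend detection from ranks alone, robust to Cauchy outliers.
        import numpy as np

        rng = np.random.default_rng(2)
        t = np.arange(200, dtype=float)
        y = 0.05 * t + rng.standard_cauchy(200)   # linear trend in heavy-tailed noise

        ols_slope = np.polyfit(t, y, 1)[0]        # raw least squares: outlier-dominated
        ranks = np.argsort(np.argsort(y)).astype(float)   # ranks 0..n-1, magnitudes discarded
        rank_slope = np.polyfit(t, ranks, 1)[0]   # slope in rank units: stable sign and size
        print(f"OLS slope on raw data:  {ols_slope:+.3f}")
        print(f"slope of ranks vs time: {rank_slope:+.3f}")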

    What is raindrop size distribution?

    It is commonly understood that the number of drops that one happens to measure as a function of diameter in some sample represents the drop size distribution. However, recent observations show that rain is “patchy,” suggesting that such a seemingly “obvious” definition is incomplete. That is, rain consists of patches of elementary drop size distributions over a range of different scales. All measured drop size distributions, then, are statistical mixtures of these patches. Moreover, it is shown that the interpretation of the measured distribution depends upon whether the rain is statistically homogeneous or not. It is argued and demonstrated using Monte Carlo simulations that in statistically homogeneous rain, as the number of patches included increases, the observed spectrum of drop sizes approaches a “steady” distribution. On the other hand, it is argued and demonstrated using video disdrometer data that in statistically inhomogeneous rain, there is no such steady distribution. Rather, as long as one keeps measuring, the drop size distribution continues to change. What is observed, then, depends on when one chooses to stop adding measurements. Consequently, the distributions measured in statistically inhomogeneous rain are statistical entities of mean drop concentrations best suited to statistical interpretations. In contrast, steady distributions in statistically homogeneous rain are more amenable to deterministic interpretations since they depend upon factors independent of the measurement process. These findings have implications addressed in two additional questions, namely: Are computer-created virtual drop size distributions really the same as those observed? What is the appropriate drop size distribution when several measurements used in an algorithm for rain estimation are made at different resolutions?
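
    A Monte Carlo sketch of the patch picture (illustrative parameters and an exponential patch distribution assumed here; this is not the paper’s simulation or disdrometer data): when every patch draws from the same distribution, the running mean drop diameter settles; when the patch parameter itself wanders, the running mean never converges.

        # Sketch: homogeneous vs inhomogeneous patches of an exponential DSD.
        import numpy as np

        rng = np.random.default_rng(3)
        n_patches, drops_per_patch = 2000, 50

        # Homogeneous: one fixed mean diameter for all patches.
        homog = rng.exponential(1.0, (n_patches, drops_per_patch)).mean(axis=1)

        # Inhomogeneous: the patch mean diameter drifts as a random walk.
        scales = np.exp(np.cumsum(rng.normal(0, 0.05, n_patches)))
        inhomog = np.array([rng.exponential(s, drops_per_patch).mean() for s in scales])

        for name, d in [("homogeneous", homog), ("inhomogeneous", inhomog)]:
            running = np.cumsum(d) / np.arange(1, n_patches + 1)
            print(f"{name:15s} running mean after 100 / 2000 patches: "
                  f"{running[99]:.3f} / {running[-1]:.3f}")  # converges only if homogeneous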