Mechanisms producing different precipitation patterns over north-eastern Italy: insights from HyMeX-SOP1 and previous events
During the first HyMeX Special Observation Period (SOP1) field campaign, the target site of north-eastern Italy (NEI) experienced a large amount of precipitation, locally exceeding the climatological values and distributed among several heavy-rainfall episodes. In particular, two events that occurred during the last period of the campaign drew our attention. These events had common large-scale patterns and a similar mesoscale setting, characterised by southerly low-level flow interacting with the Alpine orography, but the precipitation distribution was very different. During Intensive Observing Period IOP18 (31 October-1 November 2012), convective systems were responsible for intense rainfall mainly located over a flat area of the eastern Po Valley, well upstream of the orography. Conversely, during IOP19 (4/5 November 2012), heavy precipitation affected only the Alpine area. In addition to IOP18 and IOP19, the present study analyses other heavy-precipitation episodes that display similar characteristics and which occurred over NEI during the autumn of recent years. A high-resolution (2 km grid spacing) non-hydrostatic NWP model and available observations are used for this purpose.
The two different observed precipitation patterns are explained in terms of interaction between the impinging flow and the Alps. Depending on the thermodynamic profile, convection can be triggered when the impinging flow is forced to rise over a pre-existing cold-air layer at the base of the orography. In this situation a persistent blocked-flow condition and upstream convergence are responsible for heavy rain localized over the plain. Conversely, if convection does not develop, flow-over conditions are established and heavy rain affects the Alps. Numerical parameters proposed in the literature are used to support the analysis.
Finally, the role of evaporative cooling beneath the convective systems is evaluated. It turns out that the stationarity of the systems upstream of the Alps is mainly attributable to persistent blocked-flow conditions, while convective outflow slightly modifies the location of precipitation.
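The blocked-flow versus flow-over distinction drawn in this abstract is commonly diagnosed with non-dimensional parameters such as the mountain height Nh/U (N the Brunt-Vaisala frequency, U the impinging wind speed, h the barrier height). The abstract only states that literature parameters are used, so the sketch below is an illustrative Python example with made-up profile values, not the authors' calculation.

```python
import numpy as np

def brunt_vaisala(theta_low, theta_high, dz, g=9.81):
    """Dry Brunt-Vaisala frequency N (1/s) from potential temperatures (K)
    at the bottom and top of a layer of depth dz (m)."""
    theta_mean = 0.5 * (theta_low + theta_high)
    dtheta_dz = (theta_high - theta_low) / dz
    return np.sqrt(g / theta_mean * dtheta_dz)

def nondim_mountain_height(n_bv, wind_speed, barrier_height):
    """Non-dimensional mountain height N*h/U: values well above ~1 favour
    blocked flow at the base of the barrier, values below ~1 favour flow over it."""
    return n_bv * barrier_height / wind_speed

# Illustrative numbers (not taken from the paper): a ~1500 m Alpine barrier,
# 10 m/s southerly low-level flow, moderately stable layer below crest height.
n_bv = brunt_vaisala(theta_low=290.0, theta_high=294.0, dz=1500.0)
print(nondim_mountain_height(n_bv, wind_speed=10.0, barrier_height=1500.0))
```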
A recursive coupling-decoupling approach to improve experimental frequency based substructuring results
Substructure decoupling techniques allow identifying the dynamic behavior of a substructure starting from the dynamic behavior of the assembled system and of a residual subsystem. Standard approaches rely on the knowledge of all FRFs at the interface DOFs between the two substructures. However, as these typically also include rotational DOFs, which are extremely difficult and most of the time impossible to measure, several techniques have been investigated to overcome this limitation. A very attractive solution consists in defining mixed or pseudo interfaces, which allow unmeasurable coupling DOFs to be substituted with internal DOFs on the residual substructure. Additionally, smoothing/denoising techniques have been proposed to reduce the detrimental effect of FRF noise and inconsistencies on the decoupling results. Starting from these results, this paper presents recent analyses of the possibility of combining coupling and decoupling FBS to validate the results and compensate for inconsistencies. The proposed method relies on the errors introduced in the substructuring process when the interface is assumed to behave rigidly, an assumption that is seldom verified in practice. Consequently, a recursive coupling-decoupling approach is used to improve the estimation of the dynamic response of either the residual structure (for decoupling) or the assembly (for coupling). The method, previously validated on analytical data, is here analyzed on a numerical example inspired by an experimental campaign used to validate the finite element models, on which standard substructuring techniques showed some limitations. The results discussed in this paper will then be used as guidelines for applying the proposed methodologies to experimental data in the future.
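For readers unfamiliar with frequency based substructuring, the coupling step referred to above is usually written in the dual (LM-FBS) form, in which the assembled FRF matrix follows from the block-diagonal substructure FRFs and a signed Boolean compatibility matrix; decoupling uses the same expression with the residual substructure's FRFs entered with a negative sign. The numpy sketch below shows only this standard building block with made-up receptance values, not the recursive procedure proposed in the paper.

```python
import numpy as np

def lmfbs_assemble(Y_block, B):
    """Dual (LM-FBS) assembly at one frequency line:
    Y_coupled = Y - Y B^T (B Y B^T)^-1 B Y,
    with Y_block the block-diagonal FRF matrix of the substructures and
    B the signed Boolean matrix enforcing interface compatibility."""
    YBt = Y_block @ B.T
    BY = B @ Y_block
    return Y_block - YBt @ np.linalg.solve(B @ YBt, BY)

# Illustrative receptance matrices at a single frequency (values are made up).
Y_A = np.array([[1.0e-4, 2.0e-5],
                [2.0e-5, 3.0e-4]])                 # DOFs: [internal, interface]
Y_B = np.array([[2.5e-4]])                         # DOF:  [interface]
Y = np.block([[Y_A, np.zeros((2, 1))],
              [np.zeros((1, 2)), Y_B]])
B = np.array([[0.0, 1.0, -1.0]])                   # compatibility: u_A,int - u_B,int = 0
Y_coupled = lmfbs_assemble(Y, B)
```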
How observations from automatic hail sensors in Switzerland shed light on local hailfall duration and compare with hailpad measurements
Measuring the properties of hailstorms is a difficult task due to the rarity and mainly small spatial extent of the events. In particular, hail observations from ground-based time-recording instruments are scarce. We present the first study of extended field observations made by a network of 80 automatic hail sensors in Switzerland. The main benefits of the sensors are the live recording of the hailstone kinetic energy and the precise timing of the impacts. Their potential limitations include a diameter-dependent dead time, which results in less than 5 % of missed impacts, and the possible recording of impacts that are not due to hail, which can be filtered using a radar reflectivity filter. We assess the robustness of the sensors' measurements through a statistical comparison of the sensor observations with hailpad observations, and we show that, despite their different measurement approaches, both devices measure the same hail size distributions. We then use the timing information to measure the local duration of hail events, the cumulative time distribution of impacts, and the time of the largest hailstone during a hail event. We find that 75 % of local hailfalls last just a few minutes (from less than 4.4 min to less than 7.7 min, depending on a parameter used to delineate the events) and that 75 % of the impacts occur in less than 3.3 min to less than 4.7 min. This time distribution suggests that most hailstones, including the largest, fall during a first phase of high hailstone density, while a few remaining and smaller hailstones fall in a second low-density phase.
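As a concrete reading of the event-delineation parameter mentioned above, local hailfall duration can be obtained by grouping impact timestamps into events whenever consecutive impacts are separated by less than a chosen gap. The sketch below shows one way to do this and to extract the per-event duration and the elapsed time by which 75 % of impacts have occurred; the gap value and synthetic timestamps are illustrative, and this is not the authors' processing chain.

```python
import numpy as np

def split_events(impact_times_s, max_gap_s=600.0):
    """Group sorted impact timestamps (seconds) into separate hail events:
    a new event starts whenever the gap to the previous impact exceeds
    max_gap_s (the delineation parameter; 600 s is only an illustrative choice)."""
    t = np.sort(np.asarray(impact_times_s, dtype=float))
    breaks = np.where(np.diff(t) > max_gap_s)[0] + 1
    return np.split(t, breaks)

def event_stats(event_times):
    """Local hailfall duration and the elapsed time by which 75 % of the
    impacts of this event have been recorded."""
    elapsed = event_times - event_times[0]
    return elapsed[-1], np.quantile(elapsed, 0.75)

# Example with synthetic timestamps: two short bursts well separated in time.
rng = np.random.default_rng(0)
times = np.concatenate([np.sort(rng.uniform(0, 180, 40)),
                        1200 + np.sort(rng.uniform(0, 120, 15))])
for event in split_events(times):
    print(event_stats(event))
```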
Recommended from our members
Equitability revisited: why the "equitable threat score" is not equitable
In the forecasting of binary events, verification measures that are "equitable" were defined by Gandin and Murphy to satisfy two requirements: 1) they award all random forecasting systems, including those that always issue the same forecast, the same expected score (typically zero), and 2) they are expressible as the linear weighted sum of the elements of the contingency table, where the weights are independent of the entries in the table, apart from the base rate. The authors demonstrate that the widely used "equitable threat score" (ETS), as well as numerous others, satisfies neither of these requirements and only satisfies the first requirement in the limit of an infinite sample size. Such measures are referred to as "asymptotically equitable." In the case of ETS, the expected score of a random forecasting system is always positive and only falls below 0.01 when the number of samples is greater than around 30. Two other asymptotically equitable measures are the odds ratio skill score and the symmetric extreme dependency score, which are more strongly inequitable than ETS, particularly for rare events; for example, when the base rate is 2% and the sample size is 1000, random but unbiased forecasting systems yield an expected score of around -0.5, reducing in magnitude to -0.01 or smaller only for sample sizes exceeding 25 000. This presents a problem since these nonlinear measures have other desirable properties, in particular being reliable indicators of skill for rare events (provided that the sample size is large enough). A potential way to reconcile these properties with equitability is to recognize that Gandin and Murphy's two requirements are independent, and the second can be safely discarded without losing the key advantages of equitability that are embodied in the first. This enables inequitable and asymptotically equitable measures to be scaled to make them equitable, while retaining their nonlinearity and other properties such as being reliable indicators of skill for rare events. It also opens up the possibility of designing new equitable verification measures.
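To make the quantitative claims above easy to reproduce in spirit, the sketch below computes ETS from a 2x2 contingency table using its standard definition and estimates, by Monte Carlo, the expected score of a random, unbiased forecasting system for a chosen base rate and sample size. It is an illustrative reconstruction of the kind of calculation described, not the authors' code.

```python
import numpy as np

def ets(hits, false_alarms, misses, correct_negatives):
    """Equitable threat score (Gilbert skill score) from a 2x2 contingency table."""
    n = hits + false_alarms + misses + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_random) / (hits + false_alarms + misses - hits_random)

def expected_random_ets(base_rate, n_samples, n_trials=20000, seed=0):
    """Monte Carlo estimate of the expected ETS of a random, unbiased
    forecasting system whose forecast frequency equals the base rate."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_trials):
        obs = rng.random(n_samples) < base_rate
        fcst = rng.random(n_samples) < base_rate
        a = np.sum(fcst & obs)       # hits
        b = np.sum(fcst & ~obs)      # false alarms
        c = np.sum(~fcst & obs)      # misses
        d = np.sum(~fcst & ~obs)     # correct negatives
        if a + b + c > 0:            # skip degenerate tables with no events
            scores.append(ets(a, b, c, d))
    return float(np.mean(scores))

# Example: expected_random_ets(base_rate=0.02, n_samples=1000) is positive and
# shrinks toward zero as n_samples grows, i.e. ETS is only asymptotically equitable.
```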
Autoantibodies Against Oxidized LDLs and Atherosclerosis in Type 2 Diabetes
OBJECTIVE: The aim of our study was to examine, in type 2 diabetic patients, the relationship between autoantibodies against oxidatively modified LDL (oxLDL Ab) and two indexes of atherosclerosis: intimal-medial thickness of the common carotid artery (CCA-IMT), which reflects early atherosclerosis, and the ankle-brachial index (ABI), which reflects advanced atherosclerosis. RESEARCH DESIGN AND METHODS: Thirty newly diagnosed type 2 diabetic patients, 30 type 2 diabetic patients with long duration of disease, and 56 control subjects were studied. To detect oxLDL Ab, the ImmunoLisa Anti-oxLDL Antibody ELISA was used. ABI was estimated at rest by strain-gauge plethysmography. Carotid B-mode imaging was performed on a high-resolution imaging system (ATL HDI 5000). RESULTS: In patients with long duration of disease, IgG oxLDL Ab were significantly higher and ABI significantly lower compared with the other two groups. We found a correlation between IgG oxLDL Ab and CCA-IMT in all diabetic patients. A significant inverse correlation between IgG oxLDL Ab and ABI was seen only in patients with long duration of disease, demonstrating a close relationship between these autoantibodies and advanced atherosclerosis. CONCLUSIONS: IgG oxLDL Ab may be markers of the advanced phase of the atherosclerotic process and of the response of the immunological system to oxLDL, which are present within atherosclerotic lesions.
A systematic approach to the Planck LFI end-to-end test and its application to the DPC Level 1 pipeline
The Level 1 of the Planck LFI Data Processing Centre (DPC) is devoted to the handling of the scientific and housekeeping telemetry. It is a critical component of the Planck ground segment, which has to strictly adhere to the project schedule to be ready for the launch and flight operations. In order to guarantee the quality necessary to achieve the objectives of the Planck mission, the design and development of the Level 1 software has followed the ESA Software Engineering Standards. A fundamental step in the software life cycle is the Verification and Validation of the software. The purpose of this work is to show an example of procedures, test development and analysis successfully applied to a key software project of an ESA mission. We present the end-to-end validation tests performed on the Level 1 of the LFI-DPC, detailing the methods used and the results obtained. Different approaches have been used to test the scientific and housekeeping data processing. Scientific data processing has been tested by injecting signals with known properties directly into the acquisition electronics, in order to generate a test dataset of real telemetry data and reproduce nominal conditions as closely as possible. For the HK telemetry processing, validation software has been developed to inject known parameter values into a set of real housekeeping packets and to compare them with the corresponding timelines generated by the Level 1. With the proposed validation and verification procedure, in which the on-board and ground processing are viewed as a single pipeline, we demonstrated that the scientific and housekeeping processing of the Planck-LFI raw data is correct and meets the project requirements.
Comment: 20 pages, 7 figures; this paper is part of the Prelaunch status LFI papers published on JINST: http://www.iop.org/EJ/journal/-page=extra.proc5/jins
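The housekeeping validation described above amounts to injecting known parameter values and checking that the Level 1 output timelines reproduce them. The fragment below is a generic illustration of such a comparison step; the function name, tolerance and data layout are assumptions, not the DPC code.

```python
import numpy as np

def compare_timelines(injected, reconstructed, rel_tol=1e-6):
    """Compare injected housekeeping values with the timeline produced by the
    pipeline under test; return the indices where they disagree beyond a
    relative tolerance (the tolerance value here is an assumption)."""
    injected = np.asarray(injected, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    if injected.shape != reconstructed.shape:
        raise ValueError("timelines have different lengths")
    mismatched = ~np.isclose(reconstructed, injected, rtol=rel_tol, atol=0.0)
    return np.flatnonzero(mismatched)
```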
Observational Mass-to-Light Ratio of Galaxy Systems: from Poor Groups to Rich Clusters
We study the mass-to-light ratio of galaxy systems from poor groups to rich clusters, and present for the first time a large database for useful comparisons with theoretical predictions. We extend a previous work, where B_j band luminosities and optical virial masses were analyzed for a sample of 89 clusters. Here we also consider a sample of 52 more clusters, 36 poor clusters, 7 rich groups, and two catalogs, of about 500 groups each, recently identified in the Nearby Optical Galaxy sample by using two different algorithms. We obtain the blue luminosity and virial mass for all systems considered. We devote a large effort to establishing the homogeneity of the resulting values, as well as to considering comparable physical regions, i.e. those included within the virial radius. By analyzing a fiducial, combined sample of 294 systems we find that the mass increases faster than the luminosity: the linear fit gives M \propto L_B^{1.34 \pm 0.03}, with a tendency for a steeper increase in the low-mass range. In agreement with the previous work, our present results are superior owing to the much higher statistical significance and the wider dynamical range covered (about 10^{12}-10^{15} M_solar). We present a comparison between our results and the theoretical predictions on the relation between M/L_B and halo mass, obtained by combining cosmological numerical simulations and semianalytic modeling of galaxy formation.
Comment: 25 pages, 12 eps figures, accepted for publication in Ap
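The power-law fit quoted above, M \propto L_B^{1.34 \pm 0.03}, is the kind of relation typically obtained by a linear regression in log-log space. The following sketch illustrates that procedure on synthetic data; the values are made up and do not come from the paper's catalogue.

```python
import numpy as np

def fit_mass_light_slope(L_B, M):
    """Least-squares slope alpha of log10(M) = alpha*log10(L_B) + const,
    i.e. M proportional to L_B**alpha."""
    alpha, const = np.polyfit(np.log10(L_B), np.log10(M), deg=1)
    return alpha, const

# Synthetic example: 300 systems drawn around a true slope of 1.34 with scatter.
rng = np.random.default_rng(1)
L = 10 ** rng.uniform(10, 13, size=300)                       # blue luminosities
M = 10 ** (1.34 * np.log10(L) + 2.0 + rng.normal(0, 0.15, size=300))
print(fit_mass_light_slope(L, M))                             # slope close to 1.34
```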
Optimization of Planck/LFI on--board data handling
To asses stability against 1/f noise, the Low Frequency Instrument (LFI)
onboard the Planck mission will acquire data at a rate much higher than the
data rate allowed by its telemetry bandwith of 35.5 kbps. The data are
processed by an onboard pipeline, followed onground by a reversing step. This
paper illustrates the LFI scientific onboard processing to fit the allowed
datarate. This is a lossy process tuned by using a set of 5 parameters Naver,
r1, r2, q, O for each of the 44 LFI detectors. The paper quantifies the level
of distortion introduced by the onboard processing, EpsilonQ, as a function of
these parameters. It describes the method of optimizing the onboard processing
chain. The tuning procedure is based on a optimization algorithm applied to
unprocessed and uncompressed raw data provided either by simulations, prelaunch
tests or data taken from LFI operating in diagnostic mode. All the needed
optimization steps are performed by an automated tool, OCA2, which ends with
optimized parameters and produces a set of statistical indicators, among them
the compression rate Cr and EpsilonQ. For Planck/LFI the requirements are Cr =
2.4 and EpsilonQ <= 10% of the rms of the instrumental white noise. To speedup
the process an analytical model is developed that is able to extract most of
the relevant information on EpsilonQ and Cr as a function of the signal
statistics and the processing parameters. This model will be of interest for
the instrument data analysis. The method was applied during ground tests when
the instrument was operating in conditions representative of flight. Optimized
parameters were obtained and the performance has been verified, the required
data rate of 35.5 Kbps has been achieved while keeping EpsilonQ at a level of
3.8% of white noise rms well within the requirements.Comment: 51 pages, 13 fig.s, 3 tables, pdflatex, needs JINST.csl, graphicx,
txfonts, rotating; Issue 1.0 10 nov 2009; Sub. to JINST 23Jun09, Accepted
10Nov09, Pub.: 29Dec09; This is a preprint, not the final versio
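The distortion requirement quoted above can be related to the standard result that uniform requantization with step q adds noise of rms q/sqrt(12); keeping that contribution below 10% of the white-noise rms therefore requires q below roughly 0.35 sigma. The sketch below is a minimal numerical check of this relation on simulated white noise; it mimics only the requantization/reconstruction step, not the full averaging, mixing and compression chain, and the parameter values are illustrative.

```python
import numpy as np

def quantization_distortion(sigma=1.0, q=0.3, offset=0.0, n=1_000_000, seed=0):
    """rms error added by requantizing white noise of rms sigma with step q
    and offset, compared with the q/sqrt(12) prediction (both relative to sigma)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma, n)
    xq = np.round((x - offset) / q) * q + offset      # quantize, then reconstruct
    measured = np.std(xq - x) / sigma
    predicted = q / np.sqrt(12.0) / sigma
    return measured, predicted

print(quantization_distortion())   # both values near 0.087, below 10% of the noise rms
```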
Off-line radiometric analysis of Planck/LFI data
The Planck Low Frequency Instrument (LFI) is an array of 22
pseudo-correlation radiometers on-board the Planck satellite to measure
temperature and polarization anisotropies in the Cosmic Microwave Background
(CMB) in three frequency bands (30, 44 and 70 GHz). To calibrate and verify the
performances of the LFI, a software suite named LIFE has been developed. Its
aims are to provide a common platform to use for analyzing the results of the
tests performed on the single components of the instrument (RCAs, Radiometric
Chain Assemblies) and on the integrated Radiometric Array Assembly (RAA).
Moreover, its analysis tools are designed to be used during the flight as well
to produce periodic reports on the status of the instrument. The LIFE suite has
been developed using a multi-layered, cross-platform approach. It implements a
number of analysis modules written in RSI IDL, each accessing the data through
a portable and heavily optimized library of functions written in C and C++. One
of the most important features of LIFE is its ability to run the same data
analysis codes both using ground test data and real flight data as input. The
LIFE software suite has been successfully used during the RCA/RAA tests and the
Planck Integrated System Tests. Moreover, the software has also passed the
verification for its in-flight use during the System Operations Verification
Tests, held in October 2008.Comment: Planck LFI technical papers published by JINST:
http://www.iop.org/EJ/journal/-page=extra.proc5/1748-022
- …