
    Characterization of extreme weather events on Italian roads

    According to climate modellers, the probability, frequency, duration, and intensity (severity) of extreme weather events (extreme temperatures and rainfall) are increasing, and such events will become more frequent in the future. The former will lead to higher surface runoff and flood events, while the latter will cause landslide phenomena and disruption of the road network. The impact of such events depends greatly on the physical, hydraulic, and mechanical properties of soils. The increasing number of extreme events in wintertime in recent years has demonstrated the paramount importance of effective and integrated management of land resources in protecting the environment and the road network. In Italy, more than 10% of the territory has been classified as having a high or very high hydro-geological risk, affecting 80% of Italian municipalities. The impacts on the population and the economic damages are substantial: over the last 20 years, floods and landslides in Italy have affected more than 70 000 people and caused economic damage of at least 11 billion euro. Since 2000, the Italian Ministry for the Environment has entrusted ISPRA with the task of monitoring the programmes of emergency measures to reduce hydrogeological risk (the ReNDiS project, a database of mitigation measures against floods and landslides).

    Metal-Free Modified Boron Nitride for Enhanced CO2 Capture

    Porous boron nitride is a new class of solid adsorbent with applications in CO2 capture. To further enhance the adsorption capacity of these materials, strategies such as porosity tuning, element doping, and surface modification have been explored. In this work, a metal-free modification of porous boron nitride (BN) was prepared with a structure-directing agent via simple heat treatment under N2 flow. We demonstrate that the textural properties of BN play a pivotal role in its CO2 adsorption behaviour. The addition of a triblock copolymer surfactant (P123) was therefore adopted to improve the pore ordering and textural properties of porous BN, and its influence on the morphological and structural properties of pristine BN was characterized. The obtained BN-P123 exhibits a high surface area of 476 m2/g and a large pore volume of 0.83 cm3/g, with an abundance of micropores. More importantly, after modification with the P123 copolymer, the pure-CO2 adsorption capacity of porous BN improved by about 34.5% compared to pristine BN (2.69 mmol/g for BN-P123 vs. 2.00 mmol/g for pristine BN under ambient conditions). These unique characteristics of boron nitride open up new routes for designing porous BN that could be employed to optimize CO2 adsorption.
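
    As a quick check of the improvement quoted above, a minimal sketch (not from the paper) that recomputes the percentage from the two capacities reported in the abstract:

        # Percent improvement of CO2 uptake after P123 modification,
        # using the capacities quoted in the abstract (mmol/g).
        pristine_bn = 2.00
        bn_p123 = 2.69
        improvement = (bn_p123 - pristine_bn) / pristine_bn * 100
        print(f"Improvement: {improvement:.1f}%")  # prints 34.5%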

    Il rischio idrogeologico e la rete viaria nazionale minore

    Risk expresses the possibility that a natural or man-made phenomenon may cause harmful effects on the population, on settlements, on infrastructure or, more generally, on what are referred to as exposed elements. The concept of risk is tied not only to the ability to calculate the probability that a hazardous event will occur, but also to the ability to define and quantify the damage it causes. The Italian national territory, given its orographic, geological and geomorphological conformation, characterized by a young orography and by reliefs still being uplifted, has always been affected by hydraulic and geological phenomena of considerable intensity (improperly, though by now commonly, referred to as 'dissesto idrogeologico', hydrogeological instability). The case study adopted refers to the Province of Lucca, given the particular relevance of hydrogeological instability phenomena in this area.

    The Association of Left Ventricular Hypertrophy with Metabolic Syndrome is Dependent on Body Mass Index in Hypertensive Overweight or Obese Patients

    Overweight (Ow) and obesity (Ob) influence blood pressure (BP) and left ventricular hypertrophy (LVH). It is unclear whether the presence of metabolic syndrome (MetS) independently affects echocardiographic parameters in hypertension. We studied 380 Ow/Ob essential hypertensive patients (age ≤ 65 years) presenting for referred BP control-related problems. MetS was defined according to NCEP ATP III criteria with AHA modifications, and LVH as LVM/h^2.7 ≥ 49.2 g/m^2.7 in males and ≥ 46.7 g/m^2.7 in females. A treatment intensity score (TIS) was used to control for BP treatment, as previously reported. Hypertensive patients with MetS had significantly higher BMI, systolic and mean BP, interventricular septum and relative wall thickness, and lower ejection fraction than those without MetS. LVM/h^2.7 was significantly higher in MetS patients (59.14 ± 14.97 vs. 55.33 ± 14.69 g/m^2.7; p = 0.022). Hypertensive patients with MetS had a 2.3-fold higher risk of LVH after adjustment for age, SBP and TIS (OR 2.34; 95% CI 1.40-3.92; p = 0.001), but MetS lost its independent relationship with LVH when BMI was included in the model. In Ow/Ob hypertensive patients, MetS maintains its role as a risk factor for LVH independently of age, SBP, and TIS, making it a useful predictor of target organ damage in clinical practice. However, MetS loses its independent relationship when BMI is taken into account, suggesting that the effects of MetS on LV parameters are mainly driven by the degree of adiposity.
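
    A minimal sketch (not from the study) of how the sex-specific LVH definition above could be applied, assuming left ventricular mass in grams and height in metres; the function name and example values are hypothetical:

        def lvh_by_height27(lvm_g, height_m, sex):
            """Classify LVH using LVM indexed to height^2.7.

            Cutoffs follow the abstract: >= 49.2 g/m^2.7 (males), >= 46.7 g/m^2.7 (females).
            """
            lvm_index = lvm_g / height_m ** 2.7
            cutoff = 49.2 if sex == "M" else 46.7
            return lvm_index, lvm_index >= cutoff

        # Example: a 1.70 m tall male with a left ventricular mass of 250 g
        index, has_lvh = lvh_by_height27(250, 1.70, "M")
        print(f"LVM/h^2.7 = {index:.1f} g/m^2.7, LVH: {has_lvh}")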

    Accumulation of neutral lipids in peripheral blood mononuclear cells as a distinctive trait of Alzheimer patients and asymptomatic subjects at risk of disease

    Background. Alzheimer's disease is the most common progressive neurodegenerative disease. In recent years, considerable progress has been made in the discovery of novel Alzheimer's disease molecular biomarkers in the brain as well as in biological fluids. Among them, biomarkers involving lipid metabolism are emerging as potential candidates. In particular, we recently found an accumulation of neutral lipids in skin fibroblasts from Alzheimer's disease patients. Therefore, to assess whether peripheral alterations in cholesterol homeostasis might be relevant in Alzheimer's disease development and progression, in the present study we analysed lipid metabolism in plasma and peripheral blood mononuclear cells from Alzheimer's disease patients and from their first-degree relatives. Methods. Blood samples were obtained from 93 patients with probable Alzheimer's disease and from 91 of their first-degree relatives. As controls we used 57 cognitively normal volunteers over 65 years of age and 113 blood donors aged 21-66 years, respectively. Data are reported as mean ± standard error. Statistical calculations were performed with the analysis software Origin, version 8.0, using the Student t-test and the Pearson test. Results. The data reported here show high neutral lipid levels and increased ACAT-1 protein in about 85% of peripheral blood mononuclear cells freshly isolated (ex vivo) from patients with probable sporadic Alzheimer's disease, compared to about 7% of cognitively normal age-matched controls. A significant reduction in high-density lipoprotein cholesterol levels in plasma from Alzheimer's disease blood samples was also observed. Additionally, correlation analyses reveal a negative correlation between high-density lipoprotein cholesterol and cognitive capacity, as determined by the Mini Mental State Examination, as well as between high-density lipoprotein cholesterol and neutral lipid accumulation. We observed great variability in the neutral lipid data from peripheral blood mononuclear cells and in the plasma lipid analysis of the subjects enrolled as Alzheimer's disease first-degree relatives. However, about 30% of them tend to display a peripheral cholesterol metabolic pattern similar to that exhibited by Alzheimer's disease patients. Conclusion. We suggest that determinations of neutral lipids in peripheral blood mononuclear cells and of plasma high-density lipoprotein cholesterol might be of interest to outline a distinctive metabolic profile applying to both Alzheimer's disease patients and asymptomatic subjects at higher risk of disease.
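
    A minimal sketch (not the authors' code) of the group comparison and correlation analysis described in the Methods; the scipy functions are real, but all values below are made up for illustration:

        from scipy import stats

        # Hypothetical HDL-cholesterol values (mg/dL) and MMSE scores
        ad_hdl = [38, 41, 35, 44, 40]      # Alzheimer's disease patients (made-up)
        ctrl_hdl = [55, 60, 58, 52, 57]    # cognitively normal controls (made-up)
        mmse = [18, 21, 16, 24, 20]        # MMSE scores of the same AD patients (made-up)

        # Student t-test: is plasma HDL-cholesterol lower in AD patients?
        t_stat, p_value = stats.ttest_ind(ad_hdl, ctrl_hdl)

        # Pearson correlation between HDL-cholesterol and cognitive capacity (MMSE)
        r, p_corr = stats.pearsonr(ad_hdl, mmse)
        print(f"t = {t_stat:.2f} (p = {p_value:.3g}), r = {r:.2f} (p = {p_corr:.3g})")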

    Predictive Power Estimation Algorithm (PPEA) - A New Algorithm to Reduce Overfitting for Genomic Biomarker Discovery

    Toxicogenomics promises to aid in predicting adverse effects, understanding the mechanisms of drug action or toxicity, and uncovering unexpected or secondary pharmacology. However, modeling adverse effects using high-dimensional, high-noise genomic data is prone to over-fitting. Models constructed from such data sets often consist of a large number of genes with no obvious functional relevance to the biological effect the model intends to predict, which can make the modeling results challenging to interpret. To address these issues, we developed a novel algorithm, the Predictive Power Estimation Algorithm (PPEA), which estimates the predictive power of each individual transcript through an iterative two-way bootstrapping procedure. By repeatedly enforcing that the sample number is larger than the transcript number in each iteration of modeling and testing, PPEA reduces the potential risk of overfitting. We show with three different case studies that: (1) PPEA can quickly derive a reliable rank order of the predictive power of individual transcripts in a relatively small number of iterations, (2) the top-ranked transcripts tend to be functionally related to the phenotype they are intended to predict, (3) using only the most predictive top-ranked transcripts greatly facilitates the development of multiplex assays such as qRT-PCR as biomarkers, and (4) more importantly, a small number of genes identified from the top-ranked transcripts are highly predictive of phenotype, as their expression changes distinguished adverse from non-adverse effects of compounds in completely independent tests. Thus, we believe that PPEA effectively addresses the over-fitting problem and can be used to facilitate genomic biomarker discovery for predictive toxicology and drug responses.
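
    The abstract outlines the idea of the two-way bootstrap but not its details; the following is a rough, hypothetical illustration of that scheme (it is not the published PPEA), using a simple univariate correlation score in place of the paper's modeling step:

        import numpy as np

        def ppea_like_ranking(X, y, n_iter=200, seed=0):
            """Rough sketch of a PPEA-style two-way bootstrap (not the published algorithm).

            X: samples x transcripts expression matrix; y: binary phenotype (0/1).
            Each iteration resamples the samples and draws fewer transcripts than
            samples, then credits each drawn transcript with a univariate score.
            """
            rng = np.random.default_rng(seed)
            n_samples, n_transcripts = X.shape
            score_sum = np.zeros(n_transcripts)
            score_cnt = np.zeros(n_transcripts)
            for _ in range(n_iter):
                samp = rng.choice(n_samples, size=n_samples, replace=True)   # bootstrap of samples
                feat = rng.choice(n_transcripts,                              # fewer transcripts than samples
                                  size=min(n_transcripts, max(2, n_samples - 1)),
                                  replace=False)
                Xb, yb = X[samp][:, feat], y[samp]
                if yb.min() == yb.max():          # skip resamples containing a single class
                    continue
                Xc = (Xb - Xb.mean(0)) / (Xb.std(0) + 1e-12)
                yc = (yb - yb.mean()) / (yb.std() + 1e-12)
                score_sum[feat] += np.abs(Xc.T @ yc) / len(yb)   # |correlation| with phenotype
                score_cnt[feat] += 1
            return score_sum / np.maximum(score_cnt, 1)          # average score, used to rank transcripts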

    E-learning e documentazione didattica: un approccio organizzativo modulare

    This work presents a model for organizing multimedia teaching material that can be accessed online and relates to a university degree course. By adopting a modular and hierarchical structure, the model classifies teaching resources, made up of atomic multimedia components, both with respect to a predefined set of didactic categories and with respect to the fundamental topics of a course. Components referring to the same topic are grouped in a single logical container, called a didactic module, whose structure is defined by a set of metadata held in a catalogue. This catalogue also describes the structure of the didactic categories and the combination of the different modules into a single course, facilitating both the reuse of atomic components and the scalability of the model.
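
    A hypothetical rendering of the modular model described above (the class and field names are ours, not the authors'): atomic multimedia components are grouped into didactic modules, and a metadata catalogue combines modules into a course:

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class AtomicComponent:
            title: str
            media_type: str      # e.g. "slide", "video", "exercise"
            category: str        # one of the predefined didactic categories
            topic: str           # fundamental course topic it refers to
            uri: str

        @dataclass
        class DidacticModule:
            topic: str
            components: List[AtomicComponent] = field(default_factory=list)

        @dataclass
        class Catalogue:
            categories: List[str]
            modules: List[DidacticModule] = field(default_factory=list)

            def course(self, topics: List[str]) -> List[DidacticModule]:
                """Combine the modules covering the requested topics into a course."""
                return [m for m in self.modules if m.topic in topics]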

    Stability in biomarker discovery: does ensemble feature selection really help?

    Ensemble feature selection has recently been explored as a promising paradigm to improve the stability, i.e. the robustness with respect to sample variation, of subsets of informative features extracted from high-dimensional domains, including genetics and medicine. Though recent literature discusses a number of cases where ensemble approaches seem capable of providing more stable results, especially in the context of biomarker discovery, there is a lack of systematic studies providing insight into when, and to what extent, an ensemble method is to be preferred to a simple one. Using a well-known benchmark from the genomics domain, this paper presents an empirical study that evaluates ten selection methods, representative of different selection approaches, investigating whether they become significantly more stable when used in an ensemble fashion. The results of our study provide interesting indications on the benefits and limitations of the ensemble paradigm in terms of stability.
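
    As a rough illustration of the ensemble paradigm and the stability notion discussed above (not the paper's protocol or any of its ten methods; rank_fn is a placeholder for an arbitrary feature-scoring function):

        import numpy as np

        def ensemble_select(X, y, k, rank_fn, n_bootstrap=50, seed=0):
            """Aggregate the rankings produced by rank_fn on bootstrap resamples
            and keep the k features with the best average rank (illustrative only)."""
            rng = np.random.default_rng(seed)
            n, p = X.shape
            rank_sum = np.zeros(p)
            for _ in range(n_bootstrap):
                idx = rng.choice(n, size=n, replace=True)
                scores = rank_fn(X[idx], y[idx])                 # higher score = more informative
                rank_sum += np.argsort(np.argsort(-scores))      # accumulate rank positions
            return np.argsort(rank_sum)[:k]

        def subset_stability(subsets):
            """Average pairwise Jaccard similarity of the subsets selected across runs."""
            sims = [len(set(a) & set(b)) / len(set(a) | set(b))
                    for i, a in enumerate(subsets) for b in subsets[i + 1:]]
            return float(np.mean(sims))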

    Similarity of feature selection methods: An empirical study across data intensive classification tasks

    In the past two decades, the dimensionality of the datasets involved in machine learning and data mining applications has increased explosively. Feature selection has therefore become a necessary step to make the analysis more manageable and to extract useful knowledge about a given domain. A large variety of feature selection techniques are available in the literature, and their comparative analysis is a very difficult task. So far, few studies have investigated, from a theoretical and/or experimental point of view, the degree of similarity or dissimilarity among the available techniques, namely the extent to which they tend to produce similar results within specific application contexts. This kind of similarity analysis is of crucial importance when two or more methods are combined in an ensemble fashion: the ensemble paradigm is beneficial only if the involved methods are capable of giving different and complementary representations of the considered domain. This paper contributes in this direction by proposing an empirical approach to evaluate the degree of consistency among the outputs of different selection algorithms in the context of high-dimensional classification tasks. Leveraging a suitable similarity index, we systematically compared the feature subsets selected by eight popular selection methods, representative of different selection approaches, and derived a similarity trend for feature subsets of increasing size. Through extensive experimentation involving sixteen datasets from three challenging domains (Internet advertisements, text categorization and micro-array data classification), we obtained useful insight into the pattern of agreement of the considered methods. In particular, our results reveal that multivariate selection approaches systematically produce feature subsets that overlap only to a small extent with those selected by the other methods.
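
    A minimal sketch of the kind of similarity analysis described above (not the paper's exact index): for each subset size, the top-k features chosen by two methods are compared with the Jaccard coefficient; the rankings below are random placeholders:

        import numpy as np

        def similarity_trend(ranking_a, ranking_b, sizes):
            """Jaccard similarity between the top-k subsets of two rankings, for increasing k."""
            trend = []
            for k in sizes:
                a, b = set(ranking_a[:k]), set(ranking_b[:k])
                trend.append(len(a & b) / len(a | b))
            return trend

        # Usage with two hypothetical rankings over 1000 features
        rng = np.random.default_rng(0)
        rank1, rank2 = rng.permutation(1000), rng.permutation(1000)
        print(similarity_trend(rank1, rank2, sizes=[10, 50, 100, 500]))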

    Knowledge Discovery in Gene Expression Data via Evolutionary Algorithms

    Methods currently used for micro-array data classification aim to select a minimum subset of features, namely a predictor, that is necessary to construct a classifier of best accuracy. Although effective, they fall short of the primary goal of domain experts, who are interested in detecting different groups of biologically relevant markers. In this paper, we present and test a framework which aims to provide different subsets of relevant genes. It applies an initial gene filtering step to define a set of feature spaces, each of which is further refined by means of a genetic algorithm. Experiments show that the overall process results in a number of predictors with high classification accuracy. Compared to state-of-the-art feature selection algorithms, the proposed framework consistently generates better feature subsets and keeps improving the quality of the selected subsets in terms of accuracy and size.
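
    A toy sketch of the kind of genetic-algorithm refinement described above (not the authors' framework): binary gene masks are evolved, and each mask is scored with a simple nearest-centroid classifier on a held-out split; all names and parameters are illustrative:

        import numpy as np

        def ga_gene_selection(X, y, n_gen=30, pop_size=40, p_mut=0.01, seed=0):
            """Evolve sparse binary gene masks; assumes y is a 0/1 vector with both
            classes present in each half of the data (illustrative only)."""
            rng = np.random.default_rng(seed)
            n, p = X.shape
            split = n // 2
            Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]

            def fitness(mask):
                if mask.sum() == 0:
                    return 0.0
                cols = np.flatnonzero(mask)
                c0 = Xtr[ytr == 0][:, cols].mean(0)      # class centroids on training half
                c1 = Xtr[ytr == 1][:, cols].mean(0)
                d0 = ((Xte[:, cols] - c0) ** 2).sum(1)
                d1 = ((Xte[:, cols] - c1) ** 2).sum(1)
                return float(((d1 < d0).astype(int) == yte).mean())   # accuracy on held-out half

            pop = rng.random((pop_size, p)) < 0.05                    # start from sparse masks
            for _ in range(n_gen):
                fit = np.array([fitness(m) for m in pop])
                parents = pop[np.argsort(-fit)[:pop_size // 2]]       # truncation selection
                cuts = rng.integers(1, p, size=pop_size // 2)
                kids = np.array([np.concatenate((parents[i % len(parents)][:c],
                                                 parents[(i + 1) % len(parents)][c:]))
                                 for i, c in enumerate(cuts)])        # one-point crossover
                kids ^= rng.random(kids.shape) < p_mut                # bit-flip mutation
                pop = np.vstack((parents, kids))
            best = pop[np.argmax([fitness(m) for m in pop])]
            return np.flatnonzero(best)                               # indices of selected genes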