
    Robust weighted aggregation of expert opinions in futures studies

    Expert judgments are widespread in many fields, and the way in which they are collected and the procedure by which they are aggregated are considered crucial steps. From a statistical perspective, expert judgments are subjective data and must be gathered and treated as carefully and scientifically as possible. In the elicitation phase, a panel of experts is preferable to a single expert, and techniques based on anonymity and iterations, such as Delphi, offer many advantages in terms of reducing distortions, which are mainly related to cognitive biases. There are two approaches to the aggregation of the judgments given by a panel of experts, referred to as behavioural (implying an interaction between the experts) and mathematical (involving non-interacting participants and the aggregation of the judgments using a mathematical formula). Both have advantages and disadvantages; with the mathematical approach, the main problem concerns the subjective choice of an appropriate formula for both normalization and aggregation. We propose a new method for aggregating and processing subjective data collected using the Delphi method, with the aim of obtaining robust rankings of the outputs. This method makes it possible to normalize and aggregate the opinions of a panel of experts while modelling different sources of uncertainty. We use an uncertainty analysis approach that allows the simultaneous use of different aggregation and normalization functions, so that the result does not depend on the choice of a specific mathematical formula, thereby solving the problem of choice. Furthermore, we can also model the uncertainty related to the weighting system, which reflects the different expertise of the participants as well as the accuracy of their opinions. By combining the Delphi method with the robust ranking procedure, we offer a new protocol covering the elicitation, aggregation and processing of subjective data used in the construction of Delphi-based future scenarios. The method is very flexible and can be applied to the aggregation and processing of any subjective judgments, including those outside the context of futures studies. Finally, we show the validity, reproducibility and potential of the method through its application to the future of Italian families.
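    As a rough illustration of the uncertainty-analysis loop described above, the sketch below repeatedly samples a normalization function, an aggregation function and a perturbed expert-weight vector, and then averages the resulting rankings. All function choices, names and parameters are illustrative assumptions, not the authors' exact protocol.

```python
# Minimal sketch of a robust-ranking step for aggregating expert scores.
# Hypothetical names and choices; not the authors' exact protocol.
import numpy as np

rng = np.random.default_rng(42)

def minmax(x):
    # rescale one expert's scores to [0, 1]
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def zscore(x):
    # standardize one expert's scores
    return (x - x.mean()) / (x.std() + 1e-12)

NORMALIZERS = [minmax, zscore, lambda x: x]   # candidate normalization functions
AGGREGATORS = ["weighted_mean", "median"]     # candidate aggregation functions

def robust_ranks(scores, n_runs=2000):
    """scores: (n_items, n_experts) matrix of Delphi judgments."""
    n_items, n_experts = scores.shape
    rank_samples = np.zeros((n_runs, n_items))
    for r in range(n_runs):
        # sample a normalization function and apply it expert by expert
        norm = NORMALIZERS[rng.integers(len(NORMALIZERS))]
        normalized = np.column_stack([norm(scores[:, j]) for j in range(n_experts)])
        # sample an aggregation function; random weights model uncertain expertise/accuracy
        if AGGREGATORS[rng.integers(len(AGGREGATORS))] == "median":
            agg = np.median(normalized, axis=1)
        else:
            w = rng.dirichlet(np.ones(n_experts))
            agg = normalized @ w
        # rank items (1 = best, i.e. highest aggregated score)
        rank_samples[r] = (-agg).argsort().argsort() + 1
    return rank_samples.mean(axis=0)   # mean rank of each item over all settings

# Example: 5 scenarios judged by 8 experts on a 0-10 scale
scores = rng.integers(0, 11, size=(5, 8)).astype(float)
print(robust_ranks(scores))
```

    Items whose mean rank changes little across the sampled settings can be regarded as robust to the choice of a specific normalization or aggregation formula.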

    A Bootstrapped Modularised method of Global Sensitivity Analysis applied to Probabilistic Seismic Hazard Assessment

    Probabilistic Seismic Hazard Assessment (PSHA) evaluates the probability of exceedance of a given earthquake intensity threshold, such as the Peak Ground Acceleration, at a target site for a given exposure time. The stochasticity of the occurrence of seismic events is modelled by stochastic processes, and the propagation of the earthquake wave in the soil is typically evaluated by empirical relationships called Ground Motion Prediction Equations. The large uncertainty affecting PSHA is quantified by defining alternative model settings and/or model parametrizations. In this work, we propose a novel Bootstrapped Modularised Global Sensitivity Analysis (BMGSA) method for identifying the model parameters most important for the uncertainty in PSHA, which consists in generating alternative artificial datasets by bootstrapping an available input-output dataset and aggregating the individual rankings obtained with the modularized method from each of them. The proposed method is tested on a realistic PSHA case study in Italy. The results are compared with a standard variance-based Global Sensitivity Analysis (GSA) method from the literature. The novelty and strength of the proposed BMGSA method lie in the fact that its application requires only input-output data, and not repeated calculations with a PSHA code.
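    The bootstrap-and-aggregate idea can be sketched as follows: the available input-output dataset is resampled with replacement, a sensitivity ranking is computed on each resample, and the per-bootstrap rankings are averaged. The per-bootstrap measure below is a crude given-data estimator standing in for the modularized method, and all names are illustrative.

```python
# Minimal sketch of the bootstrap-and-aggregate idea behind BMGSA.
# The per-bootstrap sensitivity measure is a crude given-data stand-in,
# not the modularized method itself; all names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def given_data_sensitivity(X, y, n_bins=10):
    """Rough first-order sensitivity of y to each column of X."""
    s = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))
        idx = np.digitize(X[:, j], edges[1:-1])          # bin index per sample
        means = np.array([y[idx == b].mean() for b in range(n_bins) if (idx == b).any()])
        s[j] = means.var() / y.var()                     # variance of conditional means
    return s

def bootstrapped_ranking(X, y, n_boot=200):
    """Resample the input-output dataset and average the sensitivity rankings."""
    n = len(y)
    ranks = np.zeros((n_boot, X.shape[1]))
    for b in range(n_boot):
        sel = rng.integers(0, n, n)                      # bootstrap resample
        s = given_data_sensitivity(X[sel], y[sel])
        ranks[b] = (-s).argsort().argsort() + 1          # 1 = most important
    return ranks.mean(axis=0)

# Example: synthetic input-output data in which x0 dominates the output
X = rng.normal(size=(1000, 3))
y = 3 * X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=1000)
print(bootstrapped_ranking(X, y))
```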

    Loss of Ambra1 promotes melanoma growth and invasion

    Melanoma is the deadliest skin cancer. Despite improvements in the understanding of the molecular mechanisms underlying melanoma biology and in defining new curative strategies, the therapeutic needs for this disease have not yet been fulfilled. Herein, we provide evidence that the Activating Molecule in Beclin-1-Regulated Autophagy (Ambra1) contributes to melanoma development. Indeed, we show that Ambra1 deficiency confers accelerated tumor growth and decreased overall survival in Braf/Pten-mutated mouse models of melanoma. Also, we demonstrate that Ambra1 deletion promotes melanoma aggressiveness and metastasis by increasing cell motility/invasion and activating an EMT-like process. Moreover, we show that Ambra1 deficiency in melanoma impacts extracellular matrix remodeling and induces hyperactivation of focal adhesion kinase 1 (FAK1) signaling, whose inhibition is able to reduce cell invasion and melanoma growth. Overall, our findings identify a function for AMBRA1 as a tumor suppressor in melanoma, proposing FAK1 inhibition as a therapeutic strategy for AMBRA1 low-expressing melanoma. The absence of the scaffold protein Ambra1 leads to hyperproliferation and growth in mouse models. Here the authors show that Ambra1 deficiency accelerates melanoma growth and increases metastasis in mouse models of melanoma through FAK1 hyperactivation.

    Multi-source statistics: Basic situations and methods

    Many National Statistical Institutes (NSIs), especially in Europe, are moving from single‐source statistics to multi‐source statistics. By combining data sources, NSIs can produce more detailed and more timely statistics and respond more quickly to events in society. By combining survey data with already available administrative data and Big Data, NSIs can save data collection and processing costs and reduce the burden on respondents. However, multi‐source statistics come with new problems that need to be overcome before the resulting output quality is sufficiently high and before those statistics can be produced efficiently. What complicates the production of multi‐source statistics is that they come in many different varieties, as data sets can be combined in many different ways. Given the rapidly increasing importance of producing multi‐source statistics in Official Statistics, there has been considerable research activity in this area over the last few years, and some frameworks have been developed for multi‐source statistics. Useful as these frameworks are, they generally do not give guidelines as to which method could be applied in a given situation arising in practice. In this paper, we aim to fill that gap, structure the field of multi‐source statistics and its problems, and provide some guidance on suitable methods for these problems.

    Personalizing Cancer Pain Therapy: Insights from the Rational Use of Analgesics (RUA) Group

    Introduction: A previous Delphi survey from the Rational Use of Analgesics (RUA) project involving Italian palliative care specialists revealed some discrepancies between current guidelines and clinical practice, with a lack of consensus on items regarding the use of strong opioids in treating cancer pain. Those results represented the basis for a new Delphi study addressing a better approach to pain treatment in patients with cancer. Methods: The study consisted of a two-round multidisciplinary Delphi study. Specialists rated their agreement with a set of 17 statements using a 5-point Likert scale (0 = totally disagree and 4 = totally agree). Consensus on a statement was achieved if the median consensus score (MCS) (expressed as the value at which at least 50% of participants agreed) was at least 4 and the interquartile range (IQR) was 3–4. Results: This survey included input from 186 palliative care specialists representing the entire Italian territory. Consensus was reached on seven statements. More than 70% of participants agreed with the use of low doses of strong opioids in moderate pain treatment and valued the transdermal route as an effective option when the oral route is not available. There was strong consensus on the importance of knowing opioid pharmacokinetics for therapy personalization and on identifying immediate-release opioids as key for tailoring therapy to patients’ needs. Limited agreement was reached on items regarding breakthrough pain and the management of opioid-induced bowel dysfunction. Conclusion: These findings may assist clinicians in applying clinical evidence to routine care settings and call for a reappraisal of current pain treatment recommendations, with the final aim of optimizing the clinical use of strong opioids in patients with cancer.
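    Under one reading of this rule, consensus can be checked directly from the distribution of ratings for a statement, as in the illustrative sketch below (the function name and the exact quantile convention are assumptions).

```python
# Illustrative check of the consensus rule: median score of at least 4 and an
# interquartile interval lying within 3-4 on the 0-4 Likert scale (assumed reading).
import numpy as np

def has_consensus(ratings):
    """ratings: 0-4 Likert scores given by the panel for one statement."""
    r = np.asarray(ratings)
    q1, q3 = np.percentile(r, [25, 75])
    return np.median(r) >= 4 and q1 >= 3 and q3 <= 4

# Example: two hypothetical statements rated by ten specialists
print(has_consensus([4, 4, 3, 4, 4, 4, 3, 4, 4, 4]))   # True: strong agreement
print(has_consensus([4, 2, 3, 4, 1, 4, 3, 4, 2, 4]))   # False: ratings too dispersed
```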

    Railway bridge structural health monitoring and fault detection: state-of-the-art methods and future challenges

    The importance of railways in the transportation industry is increasing continuously, due to the growing demand for both passenger travel and transportation of goods. However, more than 35% of the 300,000 railway bridges across Europe are over 100 years old, and their reliability directly impacts the reliability of the railway network. This increased demand may lead to a higher risk associated with their unexpected failures, resulting in safety hazards to passengers and an increased whole life cycle cost of the asset. Consequently, one of the most important aspects of evaluating the reliability of the overall railway transport system is bridge structural health monitoring, which can track the health state of the bridge by allowing early detection of failures. Therefore, a fast, safe and cost-effective recovery of the optimal health state of the bridge, where the levels of element degradation or failure are maintained efficiently, can be achieved. In this article, after an introduction to the desired features of structural health monitoring, a review of the most commonly adopted bridge fault detection methods is presented. The analysis focuses mainly on model-based finite element updating strategies, non-model-based (data-driven) fault detection methods, such as artificial neural networks, and Bayesian belief network–based structural health monitoring methods. A comparative study, which aims to discuss and compare the performance of the reviewed types of structural health monitoring methods, is then presented by analysing a short-span steel structure of a railway bridge. Opportunities and future challenges of the fault detection methods of railway bridges are highlighted.
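    As a toy illustration of the non-model-based (data-driven) approach, the sketch below learns a baseline subspace from healthy-state sensor features and flags records whose reconstruction error exceeds a threshold; PCA is used here as a lightweight stand-in for the neural-network detectors discussed in the review, and all data, names and the threshold rule are assumptions.

```python
# Toy data-driven fault detection for bridge monitoring: learn a baseline
# subspace from healthy-state features and flag records with large
# reconstruction error. PCA stands in for the neural-network detectors
# discussed above; all data, names and the threshold rule are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def fit_baseline(healthy, n_components=2):
    """Mean and principal directions of healthy-state feature records."""
    mean = healthy.mean(axis=0)
    _, _, vt = np.linalg.svd(healthy - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(x, mean, components):
    """Distance of each record from its projection onto the baseline subspace."""
    recon = (x - mean) @ components.T @ components + mean
    return np.linalg.norm(x - recon, axis=1)

# Healthy-state features, e.g. identified natural frequencies from four sensors
healthy = rng.normal(loc=[2.1, 5.3, 8.7, 12.4], scale=0.05, size=(500, 4))
mean, comps = fit_baseline(healthy)
threshold = np.percentile(reconstruction_error(healthy, mean, comps), 99)

# New records: one healthy-like, one with a shifted third frequency
new = np.array([[2.11, 5.31, 8.69, 12.41],
                [2.11, 5.31, 8.20, 12.41]])
print(reconstruction_error(new, mean, comps), threshold)
```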

    On the mechanisms governing gas penetration into a tokamak plasma during a massive gas injection

    A new 1D radial fluid code, IMAGINE, is used to simulate the penetration of gas into a tokamak plasma during a massive gas injection (MGI). The main result is that the gas is in general strongly braked as it reaches the plasma, due to mechanisms related to charge exchange and (to a smaller extent) recombination. As a result, only a fraction of the gas penetrates into the plasma. Also, a shock wave is created in the gas which propagates away from the plasma, braking and compressing the incoming gas. Simulation results are quantitatively consistent, at least in terms of orders of magnitude, with experimental data for a D2 MGI into a JET Ohmic plasma. Simulations of MGI into the background plasma surrounding a runaway electron beam show that if the background electron density is too high, the gas may not penetrate, suggesting a possible explanation for the recent results of Reux et al in JET (2015 Nucl. Fusion 55 093013).

    Southern African Large Telescope Spectroscopy of BL Lacs for the CTA project

    In the last two decades, very-high-energy gamma-ray astronomy has reached maturity: over 200 sources, both Galactic and extragalactic, have been detected by ground-based experiments. At present, Active Galactic Nuclei (AGN) make up about 40% of the more than 200 sources detected at very high energies with ground-based telescopes. The majority of these AGN are blazars, i.e. their jets are closely aligned with the line of sight to Earth, and three quarters of them are classified as high-frequency peaked BL Lac objects. One challenge to studies of the cosmological evolution of BL Lacs is the difficulty of obtaining redshifts from their nearly featureless, continuum-dominated spectra. It is expected that a significant fraction of the AGN to be detected with the future Cherenkov Telescope Array (CTA) observatory will have no spectroscopic redshifts, compromising the reliability of BL Lac population studies, particularly of their cosmic evolution. We started an effort in 2019 to measure the redshifts of a large fraction of the AGN that are likely to be detected with CTA, using the Southern African Large Telescope (SALT). In this contribution, we present two results from an ongoing SALT program focused on the determination of BL Lac object redshifts that will be relevant for the CTA observatory.

    Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition)


    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.