
    Early handling and repeated cross-fostering have opposite effect on mouse emotionality

    Early life events play a crucial role in programming the individual phenotype, and exposure to traumatic experiences during infancy can increase the later risk for a variety of neuropsychiatric conditions, including mood and anxiety disorders. Animal models of postnatal stress have been developed in rodents to explore the molecular mechanisms responsible for the observed short- and long-lasting neurobiological effects of such manipulations. The main aim of this study was to compare the behavioral and hormonal phenotypes of young and adult animals exposed to different postnatal treatments. Outbred mice were exposed to (i) the classical Handling protocol (H: 15 min/day of separation from the mother from day 1 to 14 of life) or to (ii) a Repeated Cross-Fostering protocol (RCF: adoption of litters by different dams from day 1 to 4 of life). Handled mice received more maternal care in infancy and showed the previously described reduced emotionality in adulthood. Repeated cross-fostered animals did not differ in the maternal care they received, but showed enhanced sensitivity to separation from the mother in infancy and an altered respiratory response to 6% CO2 in breathing air in comparison with controls. Abnormal respiratory responses to hypercapnia are commonly found among humans with panic disorder (PD), and point to RCF-induced instability of the early environment as a valid developmental model for PD. The comparison between the short- and long-term effects of postnatal handling vs. RCF indicates that different types of early adversity are associated with different behavioral profiles and evoke psychopathologies that can be distinguished according to the neurobiological systems disrupted by early-life manipulation.

    Artificial Intelligence Algorithms in Precision Medicine: A New Approach in Clinical Decision-Making

    The US National Institutes of Health describes precision medicine as ‘an emerging approach for disease treatment and prevention that takes into account individual variability in genes, environment and lifestyle for each person.’ In other words, on the basis of this definition, precision medicine allows patients to be treated according to their genetic, lifestyle, and environmental data. Nevertheless, the complexity and growth of healthcare data arising from cheap genome sequencing, advanced biotechnology, health sensors that patients use at home, and the collection of information about patients’ journeys through healthcare with hand-held devices unquestionably require a suitable toolkit and advanced analytics for processing this wealth of information. Artificial intelligence (AI) algorithms can remarkably improve the ability to use big data to make predictions by reducing the cost of making them. The advantages of artificial intelligence algorithms have been extensively discussed in the medical literature. In this paper, based on the collection of data relevant to the health of a given individual and the inferences obtained by AI, we provide a simulation environment for understanding and suggesting the best actions that need to be performed to improve the individual’s health. Such simulation modelling can help improve clinical decision-making and the fundamental understanding of the healthcare system and the clinical process.
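    As a very rough illustration of the simulation idea described above (the names below are hypothetical and this is not the paper's actual environment), candidate interventions can be pushed through a predictive model of an individual's health and the action with the best simulated outcome suggested:

```python
# Minimal sketch, assuming a hypothetical outcome_model callable; an
# illustration of simulation-based action selection, not the paper's system.
import numpy as np

def suggest_best_action(patient_features, candidate_actions, outcome_model,
                        n_sim=1000, rng=None):
    """Return the candidate action with the highest mean simulated health score.

    `outcome_model(features, action, rng)` is assumed to return one simulated
    health score per call (e.g., from an AI model fitted elsewhere).
    """
    rng = rng or np.random.default_rng()
    mean_scores = {
        action: np.mean([outcome_model(patient_features, action, rng)
                         for _ in range(n_sim)])
        for action in candidate_actions
    }
    best = max(mean_scores, key=mean_scores.get)
    return best, mean_scores
```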

    Editorial: Recent Advances in Seismic Risk Assessment and Its Applications

    This special issue discusses recent advances in seismic risk assessment, with particular attention to the development and validation of new procedures capable of assessing the failure modes and fragility curves of existing buildings. The studies presented also have a probabilistic background and show the importance of typological characteristics in the seismic response of a building. Furthermore, non-linear numerical analyses have confirmed the importance of implementing specific models in order to design appropriate interventions aimed at reducing the seismic risk of a specific construction.

    Detecting Common Longevity Trends by a Multiple Population Approach

    Interest in the development of country and longevity risk models has been growing in recent years. The investigation of long-run equilibrium relationships could provide valuable information about the factors driving changes in mortality, in particular across ages and across countries. In order to investigate cross-country common longevity trends, tools to quantify, compare, and model the strength of dependence become essential. On the one hand, it is necessary to take into account both the dependence across adjacent age groups and the dependence structure across time in a single-population setting, a sort of intradependence structure. On the other hand, the dependence across multiple populations, which we describe as interdependence, can be explored to capture common long-run relationships between countries. The objective of our work is to produce longevity projections that take into account the presence of various forms of cross-sectional and temporal dependence in the error processes of multiple populations, considering mortality data from different countries. The algorithm that we propose combines model-based predictions in the Lee-Carter (LC) framework with a bootstrap procedure for dependent data, so that both the historical parametric structure and the intragroup error correlation structure are preserved. We introduce a model that applies a sieve bootstrap to the residuals of the LC model and is able to reproduce, in the sampling, the dependence structure of the data under consideration. In the current article, the algorithm is applied to a pool of populations using ideas from panel data analysis; we refer to this new algorithm as the Multiple Lee-Carter Panel Sieve (MLCPS). We are interested in estimating the relationship between populations with similar socioeconomic conditions. The empirical results show that the MLCPS approach works well in the presence of dependence.
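    To make the two ingredients concrete, the sketch below shows a classical Lee-Carter fit via singular value decomposition and a sieve-type bootstrap of a residual series (an AR(1) fit with resampled innovations). It is a simplified illustration of the general idea, not the authors' MLCPS implementation; `log_mx` is a hypothetical matrix of log central death rates (ages by years).

```python
# Minimal sketch, assuming a hypothetical `log_mx` input (ages x years).
import numpy as np

def fit_lee_carter(log_mx):
    """Return a_x, b_x, k_t and the residual matrix of a rank-1 Lee-Carter fit."""
    a_x = log_mx.mean(axis=1)                      # age-specific average level
    centered = log_mx - a_x[:, None]
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    b_x = U[:, 0] / U[:, 0].sum()                  # normalise so sum(b_x) = 1
    k_t = S[0] * Vt[0, :] * U[:, 0].sum()          # period mortality index
    fitted = a_x[:, None] + np.outer(b_x, k_t)
    return a_x, b_x, k_t, log_mx - fitted          # residuals kept for the bootstrap

def sieve_bootstrap(residual_series, n_boot, rng=None):
    """Approximate sieve bootstrap: fit an AR(1) to a 1-D residual series
    and rebuild bootstrap paths from resampled innovations."""
    rng = rng or np.random.default_rng()
    x = np.asarray(residual_series)
    phi = np.linalg.lstsq(x[:-1, None], x[1:], rcond=None)[0][0]
    innovations = x[1:] - phi * x[:-1]
    paths = np.empty((n_boot, len(x)))
    for b in range(n_boot):
        eps = rng.choice(innovations, size=len(x), replace=True)
        path = np.zeros(len(x))
        for t in range(1, len(x)):
            path[t] = phi * path[t - 1] + eps[t]
        paths[b] = path
    return paths
```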

    Fair graph representation learning: Empowering NIFTY via Biased Edge Dropout and Fair Attribute Preprocessing

    The increasing complexity and amount of data available in modern applications strongly demand Trustworthy Learning algorithms that can be fed directly with complex and large graph data. On one hand, machine learning models must meet high technical standards (e.g., high accuracy with limited computational requirements), but, at the same time, they must not discriminate against subgroups of the population (e.g., based on gender or ethnicity). Graph Neural Networks (GNNs) are currently the most effective solution for meeting the technical requirements, even though it has been demonstrated that they inherit and amplify the biases contained in the data as a reflection of societal inequities. In fact, when dealing with graph data, these biases can be hidden not only in the node attributes but also in the connections between entities. Several Fair GNNs have been proposed in the literature, with uNIfying Fairness and stabiliTY (NIFTY) (Agarwal et al., 2021) being one of the most effective. In this paper, we empower NIFTY's fairness with two new strategies. The first is a Biased Edge Dropout: we drop graph edges to balance homophilous and heterophilous sensitive connections, mitigating the bias induced by subgroup node cardinality. The second is Attribute Preprocessing, the process of learning a fair transformation of the original node attributes. The effectiveness of our proposal is tested on a series of datasets with increasingly challenging scenarios, dealing with different levels of knowledge about the entire graph, i.e., how much of the graph is known and which sub-portion is labelled at the training and forward phases.
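    A rough sketch of the first strategy is given below (an illustration of the balancing idea, not the paper's code): edges are dropped from the over-represented class of sensitive connections until homophilous links (same sensitive attribute at both endpoints) and heterophilous links are roughly balanced. `edge_index` is assumed to be a (2, E) integer array and `sens` a per-node sensitive-attribute vector.

```python
# Minimal sketch of a biased edge dropout, assuming hypothetical inputs.
import numpy as np

def biased_edge_dropout(edge_index, sens, rng=None):
    rng = rng or np.random.default_rng()
    src, dst = edge_index
    homophilous = sens[src] == sens[dst]           # same sensitive group at both ends
    n_homo, n_hetero = homophilous.sum(), (~homophilous).sum()
    keep = np.ones(edge_index.shape[1], dtype=bool)
    # drop edges only from the over-represented class of connections
    if n_homo > n_hetero:
        over_idx, n_drop = np.flatnonzero(homophilous), n_homo - n_hetero
    else:
        over_idx, n_drop = np.flatnonzero(~homophilous), n_hetero - n_homo
    keep[rng.choice(over_idx, size=n_drop, replace=False)] = False
    return edge_index[:, keep]
```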

    Investigating the genetic basis of salt-tolerance in common bean: a genome-wide association study at the early vegetative stage

    Salinity poses a significant challenge to global crop productivity, affecting approximately 20% of cultivated and 33% of irrigated farmland, and the problem is on the rise. The negative impact of salinity on plant development and metabolism leads to physiological and morphological alterations, mainly due to the high ion concentration in tissues and the reduced uptake of water and nutrients. Common bean (Phaseolus vulgaris L.), a staple food crop accounting for a substantial portion of the grain legumes consumed worldwide, is highly susceptible to salt stress, showing a noticeable reduction in dry matter gain in roots and shoots even at low salt concentrations. In this study we screened a common bean diversity panel of 192 homozygous genotypes for salt tolerance at the seedling stage. The phenotypic data were leveraged to identify genomic regions involved in salt stress tolerance in the species through a genome-wide association study (GWAS). We detected seven significant associations between shoot dry weight and SNP markers. The candidate genes, in linkage with the regions associated with salt tolerance or harbouring the detected SNPs, showed strong homology with genes known to be involved in salt tolerance in Arabidopsis. Our findings provide valuable insights into the genetic control of salt tolerance in common bean, represent a first contribution towards addressing the challenge of salinity-induced yield losses in this species, and lay the ground for eventually breeding salt-tolerant common bean varieties.
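    The association step can be illustrated with a minimal single-marker scan (a simplified stand-in for the study's GWAS pipeline, which would also account for population structure and kinship): each SNP, coded as allele dosage 0/1/2, is regressed against shoot dry weight and compared with a Bonferroni threshold. `genotypes` and `shoot_dw` are hypothetical inputs.

```python
# Minimal sketch of a single-marker association scan, assuming hypothetical
# inputs: genotypes (n_individuals x n_snps, coded 0/1/2) and shoot_dw (phenotype).
import numpy as np
from scipy import stats

def gwas_scan(genotypes, shoot_dw):
    n, m = genotypes.shape
    pvals = np.empty(m)
    for j in range(m):
        # simple linear regression: phenotype ~ allele dosage
        result = stats.linregress(genotypes[:, j], shoot_dw)
        pvals[j] = result.pvalue
    bonferroni = 0.05 / m            # crude genome-wide significance threshold
    return pvals, pvals < bonferroni
```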

    Vine copula modeling dependence among cyber risks: A dangerous regulatory paradox

    Dependence among different cyber risk classes is a fundamentally underexplored topic in the literature. However, disregarding the dependence structure in cyber risk management leads to inconsistent estimates of potential unintended losses. To bridge this gap, this article adopts a regulatory perspective and develops vine copula models to capture dependence. In quantifying the solvency capital requirement gradient for cyber risk measurement according to Solvency II, a dangerous paradox emerges: insurance companies tend not to provide cyber risk hedging products, as they are excessively expensive and would require premiums so high that it would not be possible to find policyholders.
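    As a simplified numerical illustration of why the dependence structure matters for the capital requirement (a single Gaussian copula with assumed lognormal marginals stands in for the vine copulas developed in the paper), the sketch below compares the 99.5% Value-at-Risk of aggregate losses from two cyber risk classes under dependence and under independence:

```python
# Minimal sketch with assumed marginals and correlation; not the paper's model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
corr = np.array([[1.0, 0.6], [0.6, 1.0]])      # assumed dependence between classes

# Gaussian copula: correlated normals -> uniforms -> lognormal marginal losses
z = rng.multivariate_normal(np.zeros(2), corr, size=n)
u = stats.norm.cdf(z)
losses_dep = stats.lognorm.ppf(u, s=1.0, scale=1.0).sum(axis=1)

# same marginals under an independence assumption
u_ind = rng.uniform(size=(n, 2))
losses_ind = stats.lognorm.ppf(u_ind, s=1.0, scale=1.0).sum(axis=1)

var_dep = np.quantile(losses_dep, 0.995)       # Solvency II confidence level
var_ind = np.quantile(losses_ind, 0.995)
print(f"99.5% VaR with dependence: {var_dep:.2f}, under independence: {var_ind:.2f}")
```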