116 research outputs found

    Phylodynamic analysis of porcine circovirus type 2: Methodological approach and datasets

    Since its first description, PCV2 has emerged as one of the most economically relevant pathogens for the swine industry. Despite the introduction of vaccines effective in controlling clinical syndromes, PCV2 spread has not been prevented, and some potential evidence of vaccine immune escape has recently been reported (“Complete genome sequence of a novel porcine circovirus type 2b variant present in cases of vaccine failures in the United States” (Xiao and Halbur, 2012) [1]; “Genetic and antigenic characterization of a newly emerging porcine circovirus type 2b mutant first isolated in cases of vaccine failure in Korea” (Seo et al., 2014) [2]). In this article, we used a collection of PCV2 full genomes, provided in the present manuscript, and several phylogenetic, phylodynamic and bioinformatic methods to investigate different aspects of PCV2 epidemiology, history and evolution (more thoroughly described in “Phylodynamic analysis of porcine circovirus type 2 reveals global waves of emerging genotypes and the circulation of recombinant forms” [3]). The methodological approaches used to consistently detect recombination events and to estimate population dynamics and spreading patterns of rapidly evolving ssDNA viruses are reported herein. The programs used are described and the original scripts are provided. The assembled databases are also made available. These consist of a broad collection of complete genome sequences (843 sequences in total: 63 complete genomes of PCV2a, 310 of PCV2b, 4 of PCV2c, 217 of PCV2d, 64 of CRF01, 140 of CRF02 and 45 of CRF03), divided into the different ORFs (i.e. ORF1, ORF2 and intergenic regions), of PCV2 genotypes and major Circulating Recombinant Forms (CRFs), each annotated with its collection date and country. Globally, all of these data can be used as a starting point for further studies and for classification purposes.
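
    As a rough illustration of how such an annotated sequence collection can be organized for downstream analyses, a minimal Python sketch is given below. The FASTA header layout (accession|genotype|country|collection date) and the file name are assumptions made for the example, not the format actually distributed with the article.

        from collections import defaultdict

        def read_fasta(path):
            """Yield (header, sequence) pairs from a FASTA file."""
            header, chunks = None, []
            with open(path) as handle:
                for line in handle:
                    line = line.strip()
                    if line.startswith(">"):
                        if header is not None:
                            yield header, "".join(chunks)
                        header, chunks = line[1:], []
                    elif line:
                        chunks.append(line)
            if header is not None:
                yield header, "".join(chunks)

        def group_by_genotype(path):
            """Group complete genomes by genotype/CRF label.

            Assumes headers of the form 'accession|genotype|country|YYYY-MM-DD';
            the real dataset may annotate sequences differently.
            """
            groups = defaultdict(list)
            for header, seq in read_fasta(path):
                accession, genotype, country, date = header.split("|")
                groups[genotype].append(
                    {"accession": accession, "country": country,
                     "date": date, "sequence": seq})
            return groups

        if __name__ == "__main__":
            # Hypothetical file name; print how many genomes fall in each genotype/CRF.
            genomes = group_by_genotype("pcv2_complete_genomes.fasta")
            for genotype, records in sorted(genomes.items()):
                print(f"{genotype}: {len(records)} sequences")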

    Cumplimos 20 años!


    Embodied Artificial Intelligence through Distributed Adaptive Control: An Integrated Framework

    In this paper, we argue that the future of Artificial Intelligence research resides in two keywords: integration and embodiment. We support this claim by analyzing the recent advances of the field. Regarding integration, we note that the most impactful recent contributions have been made possible through the integration of recent Machine Learning methods (based in particular on Deep Learning and Recurrent Neural Networks) with more traditional ones (e.g. Monte-Carlo tree search, goal babbling exploration or addressable memory systems). Regarding embodiment, we note that the traditional benchmark tasks (e.g. visual classification or board games) are becoming obsolete as state-of-the-art learning algorithms approach or even surpass human performance in most of them, which has recently encouraged the development of first-person 3D game platforms embedding realistic physics. Building upon this analysis, we first propose an embodied cognitive architecture integrating heterogeneous sub-fields of Artificial Intelligence into a unified framework. We demonstrate the utility of our approach by showing how major contributions of the field can be expressed within the proposed framework. We then claim that benchmarking environments need to reproduce ecologically valid conditions for bootstrapping the acquisition of increasingly complex cognitive skills through the concept of a cognitive arms race between embodied agents. Comment: Updated version of the paper accepted to the ICDL-Epirob 2017 conference (Lisbon, Portugal).


    QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility

    One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps (i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano). This challenging issue is generally tackled using probabilistic methods that calculate a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source geographic information system Quantum GIS, which is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the selection of an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input data sets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled as a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is shown here through its application to the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
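
    The core computation described above (a Gaussian kernel evaluated around past vent locations, followed by a weighted summation of the resulting PDFs) can be written in a few lines of Python. This is only a schematic illustration: the grid, bandwidths and weights below are invented placeholders, not QVAST defaults, and the plugin's bandwidth-selection methods are not reproduced here.

        import numpy as np

        def gaussian_kde_2d(vents, grid_x, grid_y, bandwidth):
            """Isotropic Gaussian kernel density of past vents on a regular grid.

            vents : (N, 2) array of vent coordinates, same units as the grid.
            Returns a spatial PDF normalized to sum to 1 over the grid cells.
            """
            xx, yy = np.meshgrid(grid_x, grid_y)
            density = np.zeros_like(xx, dtype=float)
            for vx, vy in vents:
                d2 = (xx - vx) ** 2 + (yy - vy) ** 2
                density += np.exp(-0.5 * d2 / bandwidth ** 2)
            return density / density.sum()

        def combine_pdfs(pdfs, weights):
            """Weighted summation of several susceptibility PDFs."""
            weights = np.asarray(weights, dtype=float)
            weights /= weights.sum()
            total = sum(w * p for w, p in zip(weights, pdfs))
            return total / total.sum()

        # Illustrative use with two hypothetical data sets on a 0.5 km grid.
        grid_x = np.linspace(0.0, 50.0, 101)   # km
        grid_y = np.linspace(0.0, 40.0, 81)    # km
        vents = np.array([[12.0, 20.0], [13.5, 21.0], [30.0, 10.0]])
        fissures = np.array([[14.0, 22.0], [15.0, 23.0]])
        pdf_vents = gaussian_kde_2d(vents, grid_x, grid_y, bandwidth=2.5)
        pdf_fissures = gaussian_kde_2d(fissures, grid_x, grid_y, bandwidth=4.0)
        susceptibility = combine_pdfs([pdf_vents, pdf_fissures], weights=[0.7, 0.3])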

    Satellite downlink scheduling problem: A case study

    Synthetic aperture radar (SAR) technology enables satellites to efficiently acquire high-quality images of the Earth's surface. This generates significant communication traffic from the satellite to the ground stations, and thus image downlinking often becomes the bottleneck in the efficiency of the whole system. In this paper we address the downlink scheduling problem for Canada's Earth observing SAR satellite, RADARSAT-2. Being an applied problem, downlink scheduling is characterised by a number of constraints that make it difficult not only to optimise the schedule but even to produce a feasible solution. We propose a fast schedule generation procedure that abstracts the problem-specific constraints and provides a simple interface to optimisation algorithms. By empirically comparing several standard meta-heuristics applied to the problem, we select the most suitable one and show that it is clearly superior to the approach currently in use. Comment: 23 pages.
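
    A minimal sketch of the kind of interface described above: a schedule-generation procedure that hides the feasibility checks and exposes only an ordering of image requests to the optimiser. The constraint model here (a single downlink channel, one visibility window and a fixed downlink duration per request) is a simplified stand-in for the RADARSAT-2 constraints, and the swap-based local search stands in for the meta-heuristics compared in the paper.

        import random
        from dataclasses import dataclass

        @dataclass
        class Request:
            image_id: int
            duration: float       # downlink time needed (s)
            window_start: float   # start of ground-station visibility (s)
            window_end: float     # end of ground-station visibility (s)
            priority: float

        def build_schedule(order, requests):
            """Greedy generator: place requests in the given order, skip infeasible ones.

            The optimiser only ever manipulates `order`; all constraint handling
            stays inside this function, mirroring the abstraction described above.
            """
            time, scheduled, value = 0.0, [], 0.0
            for idx in order:
                r = requests[idx]
                start = max(time, r.window_start)
                if start + r.duration <= r.window_end:
                    time = start + r.duration
                    scheduled.append(r.image_id)
                    value += r.priority
            return scheduled, value

        def local_search(requests, iterations=10_000, seed=0):
            """Toy swap-based local search over request orderings."""
            rng = random.Random(seed)
            order = list(range(len(requests)))
            _, best = build_schedule(order, requests)
            for _ in range(iterations):
                i, j = rng.sample(range(len(order)), 2)
                order[i], order[j] = order[j], order[i]
                _, value = build_schedule(order, requests)
                if value >= best:
                    best = value
                else:
                    order[i], order[j] = order[j], order[i]   # undo the swap
            return build_schedule(order, requests)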

    A new detector for the beam energy measurement in proton therapy: a feasibility study

    Fast procedures for beam quality assessment and for the monitoring of beam energy modulations during irradiation are among the most urgent improvements needed in particle therapy. Indeed, the online measurement of the particle beam energy could allow the range of penetration to be assessed during treatments, encouraging the development of new dose delivery techniques for moving targets. Towards this end, the proof of concept of a new device, able to measure the energy of clinical proton beams (from 60 to 230 MeV) in a few seconds from the Time of Flight (ToF) of the protons, is presented. The prototype consists of two Ultra Fast Silicon Detector (UFSD) pads, featuring an active thickness of 80 μm and a sensitive area of 3 × 3 mm², aligned along the beam direction in a telescope configuration, connected to a broadband amplifier and read out by a digitizer. Measurements were performed at the Centro Nazionale di Adroterapia Oncologica (CNAO, Pavia, Italy) at five different clinical beam energies and four distances between the sensors (from 7 to 97 cm) for each energy. In order to derive the beam energy from the measured average ToF, several systematic effects were considered, Monte Carlo simulations were developed to validate the method, and a global fit approach was adopted to calibrate the system. The results were benchmarked against the energy values obtained from the water equivalent depths provided by CNAO. Deviations of a few hundred keV were achieved for all considered proton beam energies for both the 67 and 97 cm distances between the sensors, and a few seconds of irradiation were sufficient to collect the required statistics. These preliminary results indicate that a telescope of UFSDs could achieve in a few seconds the accuracy required for the clinical application and therefore encourage further investigation towards the improvement and optimization of the present prototype.
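
    The underlying relation between the measured ToF and the beam kinetic energy is standard relativistic kinematics: beta = d / (c t), gamma = 1 / sqrt(1 - beta^2), E_kin = (gamma - 1) m_p c^2. The sketch below only evaluates this relation; the flight distance and ToF values are invented for illustration, and the systematic corrections and global-fit calibration used in the actual analysis are not included.

        import math

        C = 299_792_458.0           # speed of light (m/s)
        PROTON_REST_MEV = 938.272   # proton rest energy (MeV)

        def kinetic_energy_from_tof(distance_m, tof_s):
            """Proton kinetic energy (MeV) from flight distance and time of flight.

            Pure relativistic kinematics; ignores the systematic effects and the
            global-fit calibration applied in the real measurement.
            """
            beta = distance_m / (C * tof_s)
            if not 0.0 < beta < 1.0:
                raise ValueError("unphysical beta: check distance and ToF")
            gamma = 1.0 / math.sqrt(1.0 - beta * beta)
            return (gamma - 1.0) * PROTON_REST_MEV

        # Illustrative numbers only: over a 0.97 m telescope, a ToF of about 6.4 ns
        # corresponds to a proton kinetic energy of roughly 150 MeV.
        print(kinetic_energy_from_tof(0.97, 6.4e-9))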

    Discovery of extreme particle acceleration in the microquasar Cygnus X-3

    The study of relativistic particle acceleration is a major topic of high-energy astrophysics. It is well known that massive black holes in active galaxies can release a substantial fraction of their accretion power into energetic particles, producing gamma-rays and relativistic jets. Galactic microquasars (hosting a compact star of 1-10 solar masses which accretes matter from a binary companion) also produce relativistic jets. However, no direct evidence of particle acceleration above GeV energies has ever been obtained in microquasar ejections, leaving open the issue of the occurrence and timing of extreme matter energization during jet formation. Here we report the detection of transient gamma-ray emission above 100 MeV from the microquasar Cygnus X-3, an exceptional X-ray binary which sporadically produces powerful radio jets. Four gamma-ray flares (each lasting 1-2 days) were detected by the AGILE satellite simultaneously with special spectral states of Cygnus X-3 during the period mid-2007/mid-2009. Our observations show that very efficient particle acceleration and gamma-ray propagation out of the inner disk of a microquasar usually occur a few days before major relativistic jet ejections. Flaring particle energies can be thousands of times larger than previously detected maximum values (with Lorentz factors of 10⁵ and 10² for electrons and protons, respectively). We show that the transitional nature of gamma-ray flares and particle acceleration above GeV energies in Cygnus X-3 is clearly linked to special radio/X-ray states preceding strong radio flares. Thus gamma-rays provide unique insight into the nature of physical processes in microquasars. Comment: 29 pages (including Supplementary Information), 8 figures, 2 tables; version submitted to Nature on August 7, 2009 (accepted version available at http://www.nature.com/nature/journal/vaop/ncurrent/pdf/nature08578.pdf).
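
    To put the quoted Lorentz factors into energy units, the total particle energy is simply E = gamma m c^2; the short sketch below evaluates this for the electron and proton values mentioned in the abstract (treating them as order-of-magnitude figures).

        ELECTRON_REST_MEV = 0.511   # electron rest energy (MeV)
        PROTON_REST_MEV = 938.272   # proton rest energy (MeV)

        def total_energy_gev(lorentz_factor, rest_energy_mev):
            """Total particle energy in GeV, E = gamma * m * c^2."""
            return lorentz_factor * rest_energy_mev / 1000.0

        # Lorentz factors quoted in the abstract: ~1e5 (electrons), ~1e2 (protons).
        print(total_energy_gev(1e5, ELECTRON_REST_MEV))   # ~51 GeV electrons
        print(total_energy_gev(1e2, PROTON_REST_MEV))     # ~94 GeV protons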

    A multidimensional account of democratic legitimacy: how to make robust decisions in a non-idealized deliberative context

    This paper analyses the possibility of granting legitimacy to democratic decision-making procedures in a context of deep pluralism. We defend a multidimensional account according to which a legitimate system needs to ensure, on the one hand, that citizens are included on an equal footing and acknowledged as reflexive political agents rather than mere beneficiaries of policies and, on the other hand, that their decisions have epistemic quality. While Estlund's account of imperfect epistemic proceduralism might seem to embody a dualistic conception of democratic legitimacy, we point out that it is not able to recognize citizens as reflexive political agents and is grounded in an idealized model of the circumstances of deliberation. To overcome these ambiguities, we develop an account of democratic legitimacy according to which disagreement is the proper expression of citizens' reflexive agency, and the attribution of epistemic authority does not stem from greater expertise or specific abilities but comes through the public confrontation among disagreeing agents. Consequently, the epistemic value of deliberation should be derived from the reason-giving process rather than from reference to the alleged quality of its outcomes. In this way, we demonstrate the validity of the multidimensional perspective on legitimacy, yet abstain from introducing any outcome-oriented criterion. Finally, we argue that this account of legitimacy is well suited for modeling deliberative democracy as a decision-making procedure that respects the agency of every citizen and grants her the opportunity to influence public choices.

    Quantum memories at finite temperature

    To use quantum systems for technological applications, one first needs to preserve their coherence for macroscopic time scales, even at finite temperature. Quantum error correction has made it possible to actively correct errors that affect a quantum memory. An attractive scenario is the construction of passive storage of quantum information with minimal active support. Indeed, passive protection is the basis of robust and scalable classical technology, physically realized in the form of the transistor and the ferromagnetic hard disk. The discovery of an analogous quantum system is a challenging open problem, plagued with a variety of no-go theorems. Several approaches have been devised to overcome these theorems by taking advantage of their loopholes. The state-of-the-art developments in this field are reviewed in an informative and pedagogical way. The main principles of self-correcting quantum memories are given and several milestone examples from the literature of two-, three- and higher-dimensional quantum memories are analyzed.
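
    One quantitative handle that recurs in this literature is the Arrhenius-like scaling of the memory lifetime with the energy barrier separating logical sectors, tau ~ tau_0 exp(Delta_E / k_B T): a self-correcting memory is one whose barrier, and hence lifetime, grows with system size. The sketch below only illustrates that scaling under this standard assumption; the prefactor and the linear barrier growth are toy choices, not results from the review.

        import math

        def memory_lifetime(barrier, temperature, attempt_time=1.0):
            """Arrhenius estimate of the memory lifetime, tau = tau0 * exp(barrier / T).

            Energies and temperatures in the same dimensionless units (k_B = 1);
            attempt_time sets the microscopic time scale tau0.
            """
            return attempt_time * math.exp(barrier / temperature)

        # Toy comparison: a size-independent barrier (as in the 2D toric code) versus
        # a barrier growing linearly with linear size L (as in self-correcting candidates).
        T = 0.1
        for L in (8, 16, 32):
            constant = memory_lifetime(barrier=1.0, temperature=T)
            growing = memory_lifetime(barrier=0.1 * L, temperature=T)
            print(f"L={L:2d}  constant barrier: {constant:.2e}  growing barrier: {growing:.2e}")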