35 research outputs found

    Electronic system for drift clock calculation and synchronization for seafloor observatory

    The paper describes a new electronic device that allows easy measurement of the drift between a reference time source (usually GPS) and the atomic rubidium clock normally used in seafloor observatories. The rubidium clock supplies the reference time for data acquisition in autonomous seafloor observatories with millisecond precision. The clock is synchronized with GPS during deployment of the observatory, so it is critical to evaluate the time drift between the clock and GPS when the observatory is recovered: an accurate drift measurement makes it possible to assign correct timestamps to the data series collected by the observatory's instruments. The device described in this paper consists of an Arduino Mega shield integrated with other electronic circuits. It is easily customizable for different clocks, since the Arduino IDE allows development of the features required for the rubidium clock used in a specific application.
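    The timestamp correction that such a drift measurement enables can be sketched as follows. This is a minimal illustration, assuming the drift accumulates linearly between the deployment synchronization and the recovery measurement; the function and parameter names are hypothetical, not part of the device described in the abstract.

    ```python
    def corrected_timestamp(t_clock, t_sync, drift_at_recovery, recovery_elapsed):
        """Map a raw rubidium-clock timestamp to GPS time, assuming the drift
        measured at recovery accumulated linearly since the deployment sync.

        t_clock           -- raw clock timestamp (seconds since sync)
        t_sync            -- GPS epoch at deployment synchronization (s)
        drift_at_recovery -- measured offset (clock minus GPS) at recovery (s)
        recovery_elapsed  -- clock time elapsed at recovery (s)
        """
        drift_rate = drift_at_recovery / recovery_elapsed  # seconds of drift per second
        return t_sync + t_clock - drift_rate * t_clock

    # Example: 10 ms of drift accumulated over a one-year (31,536,000 s)
    # deployment; a sample taken halfway through carries ~5 ms of drift.
    t = corrected_timestamp(15_768_000.0, 0.0, 0.010, 31_536_000.0)
    ```

    With millisecond-precision requirements, even a few milliseconds of uncorrected drift over a year-long deployment would misalign the recorded series, which is why the recovery-time drift measurement matters.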

    Developmental excitatory-to-inhibitory GABA-polarity switch is disrupted in 22q11.2 deletion syndrome: a potential target for clinical therapeutics.

    Individuals with 22q11.2 microdeletion syndrome (22q11.2 DS) show cognitive and behavioral dysfunctions, developmental delays in childhood, and a risk of developing schizophrenia and autism. Despite extensive previous studies in adult animal models, a possible embryonic root of this syndrome has not been determined. Here, in neurons from a 22q11.2 DS mouse model (Lgdel +/-), we found premature embryonic alterations in the neuronal chloride cotransporters, indicated by dysregulated NKCC1 and KCC2 protein expression levels. With large-scale spiking activity recordings, we demonstrate a concurrent deregulation of spontaneous network activity and homeostatic network plasticity. Additionally, Lgdel +/- networks at early development show abnormal neuritogenesis and are devoid of synchronized spontaneous activity. Furthermore, parallel experiments on Dgcr8 +/- mouse cultures reveal a significant, yet not exclusive, contribution of the Dgcr8 gene to the phenotypes of Lgdel +/- networks. Finally, we show that application of bumetanide, an inhibitor of NKCC1, significantly decreases the hyper-excitable action of GABAA receptor signaling and restores network homeostatic plasticity in Lgdel +/- networks. Overall, by exploiting an on-a-chip 22q11.2 DS model, our results suggest a delayed GABA switch in Lgdel +/- neurons, which may contribute to delayed embryonic development. Prospectively, acting on the GABA-polarity switch offers a potential target for 22q11.2 DS therapeutic intervention.

    NEMO-SN1 Abyssal Cabled Observatory in the Western Ionian Sea

    The NEutrino Mediterranean Observatory—Submarine Network 1 (NEMO-SN1) seafloor observatory is located in the central Mediterranean Sea (Western Ionian Sea, off Eastern Sicily, Southern Italy) at 2100-m water depth, 25 km from the harbor of the city of Catania. It is a prototype cabled deep-sea multiparameter observatory and the first in Europe to operate with real-time data transmission, doing so since 2005. NEMO-SN1 is also the first-established node of the European Multidisciplinary Seafloor Observatory (EMSO), one of the upcoming European large-scale research infrastructures, included in the Roadmap of the European Strategy Forum on Research Infrastructures (ESFRI) since 2006. EMSO will specifically address long-term monitoring of environmental processes related to marine ecosystems, marine mammals, climate change, and geohazards.

    The EMSO Generic Instrument Module (EGIM): Standardized and interoperable instrumentation for ocean observation

    The oceans are fundamental to climate balance, the sustainability of resources, and life on Earth; society therefore has a strong and pressing interest in maintaining and, where possible, restoring the health of marine ecosystems. Effective, integrated ocean observation is key to suggesting actions that reduce anthropogenic impact from coastal to deep-sea environments and to addressing the main challenges of the 21st century, as summarized in the UN Sustainable Development Goals and Blue Growth strategies. The European Multidisciplinary Seafloor and water column Observatory (EMSO) is a European Research Infrastructure Consortium (ERIC) with the aim of providing long-term observations via fixed-point ocean observatories in key environmental locations across European seas, from the Arctic to the Black Sea. These may be supported by ship-based observations and autonomous systems such as gliders. In this paper, we present the EMSO Generic Instrument Module (EGIM), a deployment-ready multi-sensor instrumentation module designed to measure physical, biogeochemical, biological, and ecosystem variables consistently, in a range of marine environments, over long periods of time. Here, we describe the system, its features, configuration, operation, and data management. We demonstrate, through a series of coastal and oceanic pilot experiments, that the EGIM is a valuable standard ocean observation module that can significantly improve the capacity of existing ocean observatories and provide the basis for new ones. The diverse examples of use included monitoring of fish activity responses to oceanographic variability, hydrothermal vent fluids and particle dispersion, passive acoustic monitoring of marine mammals, and time series of environmental variation in the water column.
    With the EGIM available to all the EMSO Regional Facilities, EMSO will reach a milestone in standardization and interoperability, marking a key capability advancement in addressing issues of sustainability in resource and habitat management of the oceans.

    S-MARL: An Algorithm for Single-To-Multi-Agent Reinforcement Learning : Case Study: Formula 1 Race Strategies

    A Multi-Agent System is a group of autonomous, intelligent, interacting agents that share an environment, which they observe through sensors and act upon with actuators. The behaviors of these agents can either be defined upfront by programmers or learned by trial and error using Reinforcement Learning. In the latter context, the approaches proposed in the literature can be categorized as either Single-Agent or Multi-Agent. The former enjoy more stable training at the cost of defining upfront the policies of all the agents that are not learning, with the risk of limiting the performance of the learned policy. The latter have no such limitation but suffer from higher training instability. We therefore propose a new approach, based on the transition from Single-Agent to Multi-Agent Reinforcement Learning, that exploits the benefits of both: higher stability at the beginning of training, to learn the environment's dynamics, and unconstrained agents in the later phases. To conduct this study, we chose Formula 1 as the Multi-Agent System, a complex environment with more than two interacting agents. We designed a realistic racing simulation environment, framed as a Markov Decision Process, able to reproduce the core dynamics of races. We then trained three agents based on Semi-Gradient Q-Learning with different frameworks: pure Single-Agent, pure Multi-Agent, and Single-to-Multi-Agent. The results established that, given the same initial conditions and training episodes, our approach outperforms both the Single-Agent and Multi-Agent frameworks, obtaining higher scores in the proposed benchmarks.
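    The Semi-Gradient Q-Learning component mentioned in the abstract can be sketched as follows. This is a minimal illustration with linear function approximation; the feature map, action set, reward, and transition function are hypothetical stand-ins, not the authors' racing simulator.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N_FEATURES, N_ACTIONS = 4, 3
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

    # One weight vector per action: Q(s, a) = w[a] . x(s)
    w = np.zeros((N_ACTIONS, N_FEATURES))

    def q_values(x):
        """Action values for the state with feature vector x."""
        return w @ x

    def step(x, a):
        """Toy stand-in for one environment transition (reward, next state)."""
        reward = float(x[a % N_FEATURES])   # arbitrary bounded toy reward
        x_next = rng.random(N_FEATURES)     # arbitrary next-state features
        return reward, x_next

    x = rng.random(N_FEATURES)
    for _ in range(1000):
        # Epsilon-greedy action selection.
        if rng.random() < EPSILON:
            a = int(rng.integers(N_ACTIONS))
        else:
            a = int(np.argmax(q_values(x)))
        r, x_next = step(x, a)
        # Semi-gradient update: bootstrap from max_a' Q(s', a'), but take the
        # gradient only through Q(s, a), not through the bootstrapped target.
        td_error = r + GAMMA * np.max(q_values(x_next)) - q_values(x)[a]
        w[a] += ALPHA * td_error * x
        x = x_next
    ```

    The "semi-gradient" in the name refers precisely to treating the bootstrapped target as a constant during the update, which is the standard trade-off that keeps the update cheap at the cost of weaker convergence guarantees under function approximation.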

    DGCR8 Promotes Neural Progenitor Expansion and Represses Neurogenesis in the Mouse Embryonic Neocortex

    DGCR8 and DROSHA are the minimal functional core of the Microprocessor complex, essential for the biogenesis of canonical microRNAs and for the processing of other RNAs. Conditional deletion of Dgcr8 and Drosha in the murine telencephalon indicated that these proteins exert crucial functions in corticogenesis. The identification of mechanisms of DGCR8- or DROSHA-dependent regulation of gene expression in conditional knockout mice is often complicated by massive apoptosis. Here, to investigate DGCR8 functions in the amplification/differentiation of neural progenitor cells (NPCs) in corticogenesis, we overexpress Dgcr8 in the mouse telencephalon by in utero electroporation (IUEp). We find that DGCR8 promotes the expansion of NPC pools and represses neurogenesis in the absence of apoptosis, thus overcoming the usual limitations of Dgcr8 knockout-based approaches. Interestingly, DGCR8 selectively promotes basal progenitor amplification at later developmental stages, entailing intriguing implications for neocortical expansion in evolution. Finally, despite a 3- to 5-fold increase of the DGCR8 level in the mouse telencephalon, the composition, target preference, and function of the DROSHA-dependent Microprocessor complex remain unaltered. Thus, we propose that DGCR8-dependent modulation of gene expression in corticogenesis is more complex than previously known, and possibly DROSHA-independent.