29 research outputs found

    Npn-1 Contributes to Axon-Axon Interactions That Differentially Control Sensory and Motor Innervation of the Limb

    Get PDF
    The initiation, execution, and completion of complex locomotor behaviors depend on precisely integrated neural circuitry consisting of motor pathways that activate muscles in the extremities and sensory afferents that deliver feedback to motoneurons. These projections form in close temporal and spatial proximity during development, yet the molecular mechanisms and cues coordinating these processes are not well understood. Using cell-type-specific ablation of the axon guidance receptor Neuropilin-1 (Npn-1) in spinal motoneurons or in sensory neurons in the dorsal root ganglia (DRG), we have explored the contribution of this signaling pathway to correct innervation of the limb. We show that Npn-1 controls the fasciculation of both projections and mediates inter-axonal communication. Removal of Npn-1 from sensory neurons results in defasciculation of sensory axons and, surprisingly, also of motor axons. In addition, the tight coupling between these two heterotypic axonal populations is lifted, with sensory fibers now leading the spinal nerve projection. These findings are corroborated by partial genetic elimination of sensory neurons, which causes defasciculation of motor projections to the limb. Deletion of Npn-1 from motoneurons leads to severe defasciculation of motor axons in the distal limb and dorsal-ventral pathfinding errors, while outgrowth and fasciculation of sensory trajectories into the limb remain unaffected. Genetic elimination of motoneurons, however, revealed that sensory axons need only minimal scaffolding by motor axons to establish their projections in the distal limb. Thus, motor and sensory axons are mutually dependent on each other for the generation of their trajectories and interact in part through Npn-1-mediated fasciculation before and within the plexus region of the limbs.

    Introducing v0.5 of the AI Safety Benchmark from MLCommons

    Get PDF
    This paper introduces v0.5 of the AI Safety Benchmark, which has been created by the MLCommons AI Safety Working Group. The AI Safety Benchmark has been designed to assess the safety risks of AI systems that use chat-tuned language models. We introduce a principled approach to specifying and constructing the benchmark, which for v0.5 covers only a single use case (an adult chatting to a general-purpose assistant in English) and a limited set of personas (i.e., typical users, malicious users, and vulnerable users). We created a new taxonomy of 13 hazard categories, of which 7 have tests in the v0.5 benchmark. We plan to release version 1.0 of the AI Safety Benchmark by the end of 2024. The v1.0 benchmark will provide meaningful insights into the safety of AI systems. However, the v0.5 benchmark should not be used to assess the safety of AI systems. We have sought to fully document the limitations, flaws, and challenges of v0.5. This release of v0.5 of the AI Safety Benchmark includes (1) a principled approach to specifying and constructing the benchmark, which comprises use cases, types of systems under test (SUTs), language and context, personas, tests, and test items; (2) a taxonomy of 13 hazard categories with definitions and subcategories; (3) tests for seven of the hazard categories, each comprising a unique set of test items, i.e., prompts. There are 43,090 test items in total, which we created with templates; (4) a grading system for AI systems against the benchmark; (5) ModelBench, an openly available platform and downloadable tool that can be used to evaluate the safety of AI systems on the benchmark; (6) an example evaluation report that benchmarks the performance of over a dozen openly available chat-tuned language models; and (7) a test specification for the benchmark.
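    The template-driven construction of test items described above can be illustrated with a minimal sketch. The template strings, personas, and placeholder actions below are invented for illustration and are not drawn from the benchmark itself:

```python
# Minimal sketch of template-based test-item generation, in the spirit of the
# v0.5 benchmark's templated prompts. All templates and fragments here are
# hypothetical examples, not actual benchmark content.
from itertools import product

personas = ["typical user", "malicious user", "vulnerable user"]
templates = [
    "As a {persona}, how would someone {action}?",
    "I am a {persona}. Explain how to {action}.",
]
# Placeholder actions; the real benchmark draws prompts from its tested hazard categories.
actions = ["pick a secure password", "report a scam"]

def expand(templates, personas, actions):
    """Cross every template with every (persona, action) pair."""
    return [
        t.format(persona=p, action=a)
        for t, p, a in product(templates, personas, actions)
    ]

items = expand(templates, personas, actions)
print(len(items))  # 2 templates x 3 personas x 2 actions = 12 items
```

    Crossing a small set of templates with fragment lists is how a few hand-written patterns can scale to tens of thousands of distinct test items.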

    Motivating Sustainable Behaviour Change in Apartment Residents of Moreland, Australia

    No full text
    The Moreland Energy Foundation in Melbourne, Australia, sought to lower electricity use among apartment residents, who face particular barriers to living sustainably. We developed a two-pronged, research-based strategy: information about energy-saving habits and devices, and competition among residents to reduce consumption as a motivator. A case-control study demonstrated the value of competition: competing residents adopted 250% more energy-saving habits and devices than the non-competing control group.

    Theoretical Model and Test Bed for the Development and Validation of Ornithopter Designs

    Get PDF
    Ornithopters, systems that generate lift by flapping, are a growing field of robotics. Although they are of particular interest, there are currently no successful large-scale hovering ornithopters. Continuing from last year’s MQP, this project developed a testbed that can effectively examine ornithopter designs. Its realization was guided by a theoretical model developed in MATLAB. Using load cells, cameras, and a LabVIEW interface, the testbed allows the examination of different wing designs and wing motions.
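    A theoretical model of flapping flight is often built from a quasi-steady approximation, in which the instantaneous lift follows the familiar dynamic-pressure relation with the wing-tip speed of the flapping stroke. The sketch below is a generic illustration of that idea only, not the project's actual MATLAB model; the function name and every parameter value are assumptions:

```python
import math

def quasi_steady_lift(rho, area, cl, span, freq, amplitude, t):
    """Crude quasi-steady estimate of instantaneous lift (N) for a flapping wing.

    Assumes a sinusoidal flapping angle phi(t) = amplitude * sin(2*pi*freq*t),
    so the wing-tip speed is span * amplitude * omega * cos(omega * t), and lift
    follows L = 0.5 * rho * V^2 * S * CL. Illustrative only; not the MQP model.
    """
    omega = 2 * math.pi * freq               # angular flapping frequency (rad/s)
    tip_speed = span * amplitude * omega * math.cos(omega * t)  # m/s at the tip
    return 0.5 * rho * area * cl * tip_speed ** 2

# Example with made-up parameters: air density 1.225 kg/m^3, 0.1 m^2 wing,
# CL = 1.2, 0.5 m semi-span, 5 Hz flapping at 60 degrees amplitude.
peak_lift = quasi_steady_lift(1.225, 0.1, 1.2, 0.5, 5.0, math.radians(60), 0.0)
```

    A testbed with load cells lets measured lift traces be compared against such a model over a flapping cycle, which is one way a theoretical model can guide the hardware's design and validation.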

    Paul Valéry, Cahiers 1894-1914, vol. XIII

    No full text
    With this volume, covering March 1914 to January 1915, the thirteenth and final tome, the scholarly print edition of Valéry's Cahiers 1894-1914 comes to a close. The text interrogates intelligence, language, music, chance, dreams, and sensibility, in thematic sequences ranging from the aphorism to the short psychological treatise. These notebooks highlight the impact of the First World War on Valéry's psyche: in them we see the theme, and the very words, of the opening lines of La Jeune Parque emerge from the shock of the declaration of war.

    The Einstein Toolkit

    No full text
    The Einstein Toolkit is a community-driven software platform of core computational tools to advance and support research in relativistic astrophysics and gravitational physics. The Einstein Toolkit has been supported by NSF 2004157/2004044/2004311/2004879/2003893/1550551/1550461/1550436/1550514. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.