10 research outputs found

    Learning to Run challenge solutions: Adapting reinforcement learning methods for neuromusculoskeletal environments

    In the NIPS 2017 Learning to Run challenge, participants were tasked with building a controller for a musculoskeletal model to make it run as fast as possible through an obstacle course. Top participants were invited to describe their algorithms. In this work, we present eight solutions that used deep reinforcement learning approaches, based on algorithms such as Deep Deterministic Policy Gradient, Proximal Policy Optimization, and Trust Region Policy Optimization. Many solutions use similar relaxations and heuristics, such as reward shaping, frame skipping, discretization of the action space, symmetry, and policy blending. However, each of the eight teams implemented different modifications of the known algorithms. Comment: 27 pages, 17 figures.
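The heuristics listed above can be made concrete in a few lines. Below is a minimal sketch of two of them, frame skipping and action-space discretization; the Gym-style `env.step` interface, the excitation levels, and the skip factor are illustrative assumptions, not any particular team's settings.

```python
# Hedged sketch of two common heuristics from the challenge solutions.
# Assumes a Gym-style environment whose step() returns (obs, reward, done, info).

def discretize_action(continuous_action, levels=(0.0, 0.5, 1.0)):
    """Snap each muscle excitation to the nearest allowed level."""
    return [min(levels, key=lambda v: abs(v - a)) for a in continuous_action]

def step_with_frame_skip(env, action, skip=4):
    """Repeat one action for `skip` simulation frames, accumulating reward.

    Frame skipping reduces the effective decision frequency, which both
    speeds up wall-clock training and smooths the control signal.
    """
    total_reward, done = 0.0, False
    for _ in range(skip):
        obs, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    return obs, total_reward, done, info
```

Discretization shrinks the continuous excitation space to a small grid, which some teams found made exploration tractable at the cost of fine motor control.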

    Deep learning for understanding satellite imagery: an experimental survey

    Translating satellite imagery into maps requires intensive effort and time, often leading to inaccurate maps of affected regions during disasters and conflicts. The combination of recently available datasets and advances in computer vision made through deep learning paved the way toward automated satellite image translation. To facilitate research in this direction, we introduce the Satellite Imagery Competition using a modified SpaceNet dataset. Participants had to come up with different segmentation models to detect positions of buildings on satellite images. In this work, we present five approaches based on improvements of the U-Net and Mask R-CNN (Region-based Convolutional Neural Network) models, coupled with unique training adaptations using boosting algorithms, morphological filters, Conditional Random Fields, and custom losses. The good results, as high as AP = 0.937 and AR = 0.959, from these models demonstrate the feasibility of deep learning in automated satellite image annotation.
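As context for the AP/AR figures quoted above, a minimal sketch of the pixel-wise intersection-over-union (IoU) score that typically decides whether a predicted building footprint counts as a match; the 0.5 threshold and the flat 0/1 mask encoding are assumptions for illustration, not the competition's exact evaluation protocol.

```python
# Hedged sketch: IoU-based matching of predicted vs. ground-truth masks.
# Masks are flat lists of 0/1 pixel labels of equal length.

def iou(pred, truth):
    """Pixel-wise intersection over union between two binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # two empty masks match trivially

def is_true_positive(pred, truth, threshold=0.5):
    """A predicted building counts as detected if its IoU clears the threshold."""
    return iou(pred, truth) >= threshold
```

Average precision (AP) and average recall (AR) are then aggregates over many such per-building matches across the dataset.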

    On the Design of a Youth-Led, Issue-Based, Crowdsourced Global Monitoring Framework for the SDGs

    In this paper, we propose a novel methodology and design to contribute towards the achievement of the 17 Sustainable Development Goals (SDGs) adopted by member states of the United Nations for a better and more sustainable future for all. We particularly focus on achieving SDG 4.7—using education to ensure all learners acquire the knowledge and skills needed to promote sustainable development. We describe the design of a crowdsourced approach to monitor issues at a local level, and then use the insights gained to indicate how learning can be achieved by the entire community. We begin by encouraging local communities to identify issues that they are concerned about, with the assumption that any issue identified will fall within the purview of the 17 SDGs. Each issue is then tagged with a plurality of actions taken to address it. Finally, we tag the positive or negative changes in the issue as perceived by members of the local community. This data is used to broadly indicate quantitative measures of community learning when solving a societal problem, in turn telling us how SDG 4.7 is being achieved. The paper describes the design of a unique, youth-led, technology-based, bottom-up approach, applicable to communities across the globe, which can potentially ensure transgressive learning through participation of, and monitoring by, the local community, leading to sustainable development.
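The issue/action/change tagging pipeline described above can be sketched as a small data structure. The `Issue` class and the `learning_score` aggregation below are hypothetical illustrations of the idea, not the paper's actual schema or its defined measure of community learning.

```python
# Hedged sketch of the crowdsourced tagging scheme: each community issue
# carries SDG tags, actions taken, and perceived changes (+1 improved,
# -1 worsened) reported by community members.

from dataclasses import dataclass, field

@dataclass
class Issue:
    description: str
    sdg_tags: list = field(default_factory=list)        # e.g. ["SDG 11"]
    actions: list = field(default_factory=list)         # actions taken so far
    perceived_changes: list = field(default_factory=list)  # +1 / -1 votes

    def learning_score(self):
        """Fraction of community reports perceiving improvement (illustrative)."""
        if not self.perceived_changes:
            return 0.0
        improved = sum(1 for c in self.perceived_changes if c > 0)
        return improved / len(self.perceived_changes)
```

Aggregating such scores across a community's issues is one plausible way to produce the quantitative learning indicators the design calls for.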

    Using 2D video-based pose estimation for automated prediction of autism spectrum disorders in young children

    Clinical research in autism has recently witnessed promising digital phenotyping results, mainly focused on single-feature extraction, such as gaze, head turning when called by name, or visual tracking of a moving object. The main drawback of these studies is the focus on relatively isolated behaviors elicited by largely controlled prompts. We recognize that while the diagnostic process relies on indexing specific behaviors, ASD also comes with broad impairments that often transcend single behavioral acts. For instance, atypical nonverbal behaviors manifest through global patterns of atypical postures and movements, and through fewer gestures, often decoupled from visual contact, facial affect, and speech. Here, we tested the hypothesis that a deep neural network trained on the nonverbal aspects of social interaction can effectively differentiate between children with ASD and their typically developing peers. Our model achieves an accuracy of 80.9% (F1 score: 0.818; precision: 0.784; recall: 0.854), with the prediction probability positively correlated to the overall level of autism symptoms in the social affect and repetitive and restricted behaviors domains. Given the non-invasive and affordable nature of computer vision, our approach carries reasonable promise that reliable machine-learning-based ASD screening may become a reality in the not-too-distant future.
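The reported metrics are internally consistent: F1 is the harmonic mean of precision and recall, so 2 × 0.784 × 0.854 / (0.784 + 0.854) ≈ 0.818. A minimal sketch of computing all three from confusion counts; the counts in the usage example below are illustrative, not the study's actual confusion matrix.

```python
# Hedged sketch: standard screening metrics from confusion counts
# (true positives, false positives, false negatives).

def precision_recall_f1(tp, fp, fn):
    """Return (precision, recall, F1) from confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Illustrative usage with made-up counts:
p, r, f = precision_recall_f1(tp=3, fp=1, fn=1)
```

For a screening tool, recall (sensitivity) is usually the metric to protect, since a missed case is costlier than a false alarm that a clinician can later rule out.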

    Deep Learning for Understanding Satellite Imagery: An Experimental Survey

    Mohanty SP, Czakon J, Kaczmarek KA, et al. Deep Learning for Understanding Satellite Imagery: An Experimental Survey. Frontiers in Artificial Intelligence. 2020;3:534696.

    Learning to Run challenge solutions: Adapting reinforcement learning methods for neuromusculoskeletal environments

    Kidziński Ł, Mohanty SP, Ong C, et al. Learning to Run challenge solutions: Adapting reinforcement learning methods for neuromusculoskeletal environments. Presented at the 31st Conference on Neural Information Processing Systems (NIPS 2017), Competition Track, Long Beach, USA.