2,517 research outputs found

    Credit assignment in multiple goal embodied visuomotor behavior

    The intrinsic complexity of the brain can lead one to set aside issues related to its relationships with the body, but the field of embodied cognition emphasizes that understanding brain function at the system level requires one to address the role of the brain-body interface. It has only recently been appreciated that this interface performs huge amounts of computation that does not have to be repeated by the brain, and thus affords the brain great simplifications in its representations. In effect, the brain’s abstract states can refer to coded representations of the world created by the body. But even if the brain can communicate with the world through abstractions, the severe speed limitations in its neural circuitry mean that vast amounts of indexing must be performed during development so that appropriate behavioral responses can be rapidly accessed. One way this could happen would be if the brain used a decomposition whereby behavioral primitives could be quickly accessed and combined. This realization motivates our study of independent sensorimotor task solvers, which we call modules, in directing behavior. The issue we focus on herein is how an embodied agent can learn to calibrate such individual visuomotor modules while pursuing multiple goals. The biologically plausible standard for module programming is that of reinforcement given during exploration of the environment. However, this formulation contains a substantial issue when sensorimotor modules are used in combination: the credit for their overall performance must be divided amongst them. We show that this problem can be solved and that diverse task combinations are beneficial in learning, not a complication, as usually assumed. Our simulations show that fast algorithms are available that allot credit correctly and are insensitive to measurement noise.
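
    As an illustration of the credit-assignment problem this abstract describes, the sketch below splits one global reward among concurrently active modules in proportion to each module's current estimate of the reward it should earn, then lets each module run an ordinary temporal-difference update on its share. This is a minimal sketch under assumed names and parameters, not the authors' published algorithm.

    ```python
    import numpy as np

    class Module:
        """One sensorimotor task solver with its own tabular Q-function."""
        def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95):
            self.Q = np.zeros((n_states, n_actions))
            self.alpha, self.gamma = alpha, gamma

        def expected_reward(self, state, action):
            return self.Q[state, action]

        def update(self, state, action, credited_r, next_state):
            # ordinary TD(0) update on this module's share of the reward
            target = credited_r + self.gamma * self.Q[next_state].max()
            self.Q[state, action] += self.alpha * (target - self.Q[state, action])

    def assign_credit(modules, states, actions, global_reward):
        """Split one global reward among active modules in proportion to
        each module's current (non-negative) reward estimate."""
        estimates = np.maximum(
            [m.expected_reward(s, a) for m, s, a in zip(modules, states, actions)],
            0.0)
        total = estimates.sum()
        if total == 0.0:  # no information yet: split the reward evenly
            return np.full(len(modules), global_reward / len(modules))
        return global_reward * estimates / total
    ```

    Each module then calls update() with its credited share, so modules that already predict reward well keep refining their estimates while uninformed modules still receive some signal.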

    A Review of Shared Control for Automated Vehicles: Theory and Applications

    The last decade has shown an increasing interest in advanced driver assistance systems (ADAS) based on shared control, where automation continuously supports the driver at the control level with an adaptive authority. A first look at the literature offers two main research directions: 1) an ongoing effort to advance the theoretical comprehension of shared control, and 2) a diversity of automotive system applications with an increasing number of works in recent years. Yet, a global synthesis of these efforts is not available. To this end, this article covers the complete field of shared control in automated vehicles with an emphasis on these aspects: 1) concept, 2) categories, 3) algorithms, and 4) status of technology. Articles from the literature are classified into theory- and application-oriented contributions. From these, a clear distinction is found between coupled and uncoupled shared control. Also, model-based and model-free algorithms from these two categories are evaluated separately with a focus on systems using the steering wheel as the control interface. Model-based controllers tested by at least one real driver are tabulated to evaluate the performance of such systems. Results show that the inclusion of a driver model helps to reduce conflicts at the steering wheel. Also, variables such as driver state, driver effort, and safety indicators have a high impact on the calculation of the authority. Concerning the evaluation, driver-in-the-loop simulators are the most common platforms, with few works performed in real vehicles. Implementation in experimental vehicles is expected in the upcoming years. This work was supported in part by the ECSEL Joint Undertaking, which funded the PRYSTINE project under Grant 783190, and in part by the AUTOLIB project (ELKARTEK 2019 ref. KK-2019/00035; Gobierno Vasco Dpto. Desarrollo económico e infraestructuras).
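
    To make the notion of "adaptive authority" concrete, the sketch below blends driver and automation steering torques with a weight computed from driver state and a safety indicator. The indicator names, the blending law, and all constants are illustrative assumptions, not a law taken from any controller surveyed in the article.

    ```python
    # Coupled shared control at the steering wheel: the commanded torque
    # is a weighted blend of driver and automation torques. All names
    # and constants below are assumptions for illustration.

    def authority(driver_attention, driver_effort, time_to_lane_crossing,
                  ttlc_critical=1.0):
        """Automation authority in [0, 1]: higher when the driver is
        inattentive or passive, or when a lane departure is imminent.
        driver_attention and driver_effort are normalized to [0, 1]."""
        risk = max(0.0, 1.0 - time_to_lane_crossing / ttlc_critical)
        disengagement = 1.0 - min(driver_attention, driver_effort)
        return min(1.0, max(risk, disengagement))

    def blended_torque(u_driver, u_automation, lam):
        """lam is the automation authority: 0 leaves the driver in full
        control, 1 hands the steering torque entirely to the automation."""
        return (1.0 - lam) * u_driver + lam * u_automation
    ```

    A controller of this shape re-evaluates the authority at every control step, which is what allows the automation's share to grow smoothly as risk rises rather than switching abruptly.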

    Human-robot collaborative multi-agent path planning using Monte Carlo tree search and social reward sources

    The collaboration between humans and robots in an object search task requires shared plans obtained through communication and negotiation. In this work, we assume that the robot computes, as a first step, a multi-agent plan for both itself and the human. Both plans are then submitted to the human's scrutiny; the human either agrees or modifies them, forcing the robot to adapt its own restrictions or preferences. This process is repeated throughout the search task as many times as the human requires. Our planner is based on a decentralized variant of Monte Carlo Tree Search (MCTS), with one robot and one human as agents. Moreover, our algorithm allows the robot and the human to optimize their own actions by maintaining a probability distribution over the plans in a joint action space. The method allows an objective function to be defined over action sequences, assumes intermittent communication, is anytime, and is suitable for on-line replanning. To test it, we have developed a human-robot communication mobile phone interface. Validation is provided by real-life search experiments for a Parcheesi token in an urban space, including an acceptability study. This work was supported by the Spanish State Research Agency through the Maria de Maeztu Seal of Excellence to IRI (MDM-2016-0656), the ROCOTRANSP project (PID2019-106702RB-C21 / AEI / 10.13039/501100011033), TERRINet (H2020-INFRAIA-2017-1-two-stage, 730994), and AI4EU (H2020-ICT-2018-2, 825619).
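
    The planner's core loop can be illustrated with a generic Monte Carlo Tree Search over the joint (robot, human) action space. The environment interface (step, reward), the rollout policy, and all constants below are assumptions for illustration; the paper's decentralized variant, its social reward sources, and its maintained plan distribution are not reproduced here.

    ```python
    import math, random

    class Node:
        def __init__(self, state, parent=None, action=None):
            self.state, self.parent, self.action = state, parent, action
            self.children, self.visits, self.value = [], 0, 0.0

    def ucb(node, c=1.4):
        if node.visits == 0:
            return float("inf")
        return (node.value / node.visits
                + c * math.sqrt(math.log(node.parent.visits) / node.visits))

    def mcts(root_state, joint_actions, step, reward, horizon=20, iters=500):
        """joint_actions: list of (robot_action, human_action) pairs.
        step(state, joint_action) -> next state; reward(state) -> float."""
        root = Node(root_state)
        for _ in range(iters):
            node = root
            # selection: descend by UCB while the node is fully expanded
            while node.children and len(node.children) == len(joint_actions):
                node = max(node.children, key=ucb)
            # expansion: try one untried joint action
            tried = {c.action for c in node.children}
            untried = [a for a in joint_actions if a not in tried]
            if untried:
                a = random.choice(untried)
                node = Node(step(node.state, a), parent=node, action=a)
                node.parent.children.append(node)
            # rollout with uniformly random joint actions
            state, ret = node.state, 0.0
            for _ in range(horizon):
                state = step(state, random.choice(joint_actions))
                ret += reward(state)
            # backpropagation
            while node:
                node.visits += 1
                node.value += ret
                node = node.parent
        return max(root.children, key=lambda c: c.visits).action
    ```

    Because the statistics accumulate per joint action, the same tree lets the robot reason about what the human is likely to do while choosing its own move, which is the essence of planning in a joint action space.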

    A Foveated Silicon Retina for Two-Dimensional Tracking

    A silicon retina chip with a central foveal region for smooth-pursuit tracking and a peripheral region for saccadic target acquisition is presented. The foveal region contains a 9 x 9 dense array of large-dynamic-range photoreceptors and edge detectors. The two-dimensional direction of foveal motion is computed outside the imaging array. The peripheral region contains a sparse array of 19 x 17 similar, but larger, photoreceptors with in-pixel edge and temporal ON-set detection. The coordinates of moving or flashing targets are computed with two one-dimensional centroid localization circuits located on the outskirts of the peripheral region. The chip is operational for ambient intensities ranging over six orders of magnitude, target contrasts as low as 10%, foveal speeds ranging from 1.5 to 10K pixels/s, and peripheral ON-set frequencies from <0.1 to 800 kHz. The chip is implemented in a 2-μm N-well CMOS process and consumes 15 mW (Vdd = 4 V) in normal indoor light (25 μW/cm2). It has been used as a person tracker in a smart surveillance system and as a road follower in an autonomous navigation system.
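
    The two one-dimensional centroid circuits reduce a 2-D localization problem to a pair of 1-D ones: the target's x coordinate is the centroid of the column-wise activity profile and its y coordinate the centroid of the row-wise profile. Below is a minimal software sketch of that computation, with an assumed activity-array encoding standing in for the chip's analog signals.

    ```python
    import numpy as np

    def centroid_2d(activity):
        """activity: 2D array of non-negative pixel responses (e.g.
        ON-set events). Returns the (row, col) centroid, computed as
        two independent one-dimensional centroids."""
        total = activity.sum()
        if total == 0:
            return None                      # no target visible
        col_profile = activity.sum(axis=0)   # collapse rows -> x profile
        row_profile = activity.sum(axis=1)   # collapse cols -> y profile
        x = (np.arange(len(col_profile)) * col_profile).sum() / total
        y = (np.arange(len(row_profile)) * row_profile).sum() / total
        return y, x
    ```

    Collapsing to marginals first is what makes the hardware cheap: each 1-D circuit only needs one resistive line along an edge of the array rather than access to every pixel individually.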

    Automatic human face detection in color images

    Automatic human face detection in digital images has been an active area of research over the past decade. Among its numerous applications, face detection plays a key role in face recognition systems for biometric personal identification, face tracking for intelligent human-computer interfaces (HCI), and face segmentation for object-based video coding. Despite significant progress in the field in recent years, detecting human faces in unconstrained and complex images remains a challenging problem in computer vision. An automatic system that possesses a capability similar to the human vision system in detecting faces is still a far-reaching goal. This thesis focuses on the problem of detecting human faces in color images. Although many early face detection algorithms were designed to work on gray-scale images, strong evidence exists to suggest face detection can be done more efficiently by taking into account color characteristics of the human face. In this thesis, we present a complete and systematic face detection algorithm that combines the strengths of both analytic and holistic approaches to face detection. The algorithm is developed to detect quasi-frontal faces in complex color images. This face class, which represents typical detection scenarios in most practical applications of face detection, covers a wide range of face poses including all in-plane rotations and some out-of-plane rotations. The algorithm is organized into a number of cascading stages including skin region segmentation, face candidate selection, and face verification. In each of these stages, various visual cues are utilized to narrow the search space for faces. In this thesis, we present a comprehensive analysis of skin detection using color pixel classification, and of the effects of factors such as the color space and the color classification algorithm on segmentation performance. We also propose a novel and efficient face candidate selection technique that is based on color-based eye region detection and a geometric face model. This candidate selection technique eliminates the computation-intensive step of window scanning often employed in holistic face detection, and simplifies the task of detecting rotated faces. Besides various heuristic techniques for face candidate verification, we develop face/nonface classifiers based on the naive Bayesian model, and investigate three feature extraction schemes, namely intensity, projection onto a face subspace, and edge-based features. Techniques for improving face/nonface classification are also proposed, including bootstrapping, classifier combination, and the use of contextual information. On a test set of face and nonface patterns, the combination of three Bayesian classifiers has a correct detection rate of 98.6% at a false positive rate of 10%. Extensive testing has shown that the proposed face detector achieves good performance in terms of both detection rate and alignment between the detected faces and the true faces. On a test set of 200 images containing 231 faces taken from the ECU face detection database, the proposed face detector has a correct detection rate of 90.04% and makes 10 false detections. We have found that the proposed face detector is more robust in detecting in-plane rotated faces than existing face detectors.
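
    The first and last stages of such a cascade can be sketched compactly: a color-based skin test prunes the search space, and a naive Bayes log-likelihood ratio makes the final face/nonface decision. The Cb/Cr box thresholds below are a commonly used illustrative rule and the likelihood tables are placeholders; neither is taken from the thesis itself.

    ```python
    import numpy as np

    def skin_mask(ycbcr):
        """ycbcr: H x W x 3 image in YCbCr. A simple box rule in the
        Cb-Cr plane; pixels inside the box are candidate skin."""
        cb, cr = ycbcr[..., 1], ycbcr[..., 2]
        return (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)

    def naive_bayes_is_face(features, p_face, p_nonface,
                            prior=0.5, thresh=0.0):
        """features: 1D vector of discrete feature values for a candidate
        window. p_face[i][v] and p_nonface[i][v] are per-feature class
        likelihood tables, assumed independent given the class (the
        naive Bayes assumption). Accept when the log-likelihood ratio
        exceeds the threshold."""
        llr = np.log(prior) - np.log(1.0 - prior)
        for i, v in enumerate(features):
            llr += np.log(p_face[i][v]) - np.log(p_nonface[i][v])
        return llr > thresh
    ```

    In a full cascade, the skin mask and a geometric face model would propose candidate windows, and several classifiers of this form (one per feature scheme) could be combined by summing their log-likelihood ratios.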

    Explainable shared control in assistive robotics

    Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, creating a misalignment between the respective internal models of the robot and the human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. There are two perspectives on transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot's perspective, in turn, requires an awareness of human "intent", so a clustering framework composed of a deep generative model is developed for human intention inference. Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair, and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
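
    The intention-inference idea can be sketched as: embed short windows of user control commands, cluster the embeddings, and read intent as cluster membership. The thesis learns the embedding with a deep generative model; in the sketch below a hand-crafted summary stands in for that learnt latent code, so the feature choice, window encoding, and cluster count are all illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def embed(window):
        """window: T x 2 array of (forward, turn) joystick commands.
        A crude hand-crafted stand-in for a learnt latent representation."""
        return np.array([window[:, 0].mean(), window[:, 1].mean(),
                         window[:, 0].std(),  window[:, 1].std()])

    def infer_intent_clusters(windows, n_intents=4, seed=0):
        """Cluster embedded command windows; each cluster label is
        treated as one discrete user intention."""
        z = np.stack([embed(w) for w in windows])
        km = KMeans(n_clusters=n_intents, n_init=10, random_state=seed).fit(z)
        return km.labels_, km.cluster_centers_
    ```

    Interpreting the clusters (e.g. "drive straight", "turn through doorway") is then a matter of inspecting the windows assigned to each label, which is how cluster meaningfulness can be assessed with real users.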

    Evidencing the goals of competition law in the People’s Republic of China: inside the merger laboratory

    In the analysis of competition law the most fundamental question to be asked of any regime is what the goals of that regime are. The goals of competition law will determine the outcomes of cases, and transparency in goals will permit robust analysis of decisions against a clear benchmark, and facilitate firms’ analysis of transactional risk. Mergers which are notified to multiple authorities provide a distinctive opportunity to compare the operation of the different regimes in respect of, in essence, the same case at the same time. Where divergent outcomes are identified these may simply indicate that in the face of complex sets of facts different conclusions are drawn, or that competitive conditions vary across the relevant regimes. More importantly, divergence may suggest that different goals are being applied. This article focusses on the approaches taken in the People’s Republic of China (PRC), the United States and the European Union – the three ‘key’ merger regimes, from each of which a clearance is a ‘must have’ – in a defined set of merger cases in which at least two of these jurisdictions applied, covering the years 2013–2016. Recognizing the limitations pertaining to any such analysis, I compare the approaches taken across this set of merger cases, seeking to explain and critique any divergence, focussing in particular on the more expansive approach to merger control demonstrated here to be applied in the PRC. The focus throughout is on the operation of the substantive test(s) of merger control, which provide a focal point for testing the goals of competition law and policy.