6,896 research outputs found

    Goal accomplishment tracking for automatic supervision of plan execution

    It is common practice to break plans down into a series of goals or sub-goals in order to facilitate plan execution, thereby burdening the individual agents responsible for their execution with only small, easily achievable objectives at any one time, or providing a simple way of sharing these objectives amongst a group of such agents. Ensuring that plans are executed correctly is an essential part of managing any team. To properly track an agent's progress through a pre-planned set of goals, it is imperative to keep track of which of these goals have already been accomplished. This centralised approach is essential when the agent is part of a team of humans and/or robots and goal accomplishment is not always tracked at a low level. This paper presents a framework for an automated supervision system that keeps track of changes in world states so as to chart progress through a pre-planned set of goals. An implementation of this framework on a mobile service robot is presented and applied in an experiment that demonstrates its feasibility.
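    To make the idea concrete, here is a minimal sketch (not the paper's implementation) of a supervisor that marks goals as accomplished by evaluating predicates over the observed world state; the `Goal`/`GoalSupervisor` classes and the example predicates are illustrative assumptions.

```python
# Minimal sketch of a goal-accomplishment supervisor: each goal carries a
# predicate over the observed world state; the supervisor re-evaluates the
# pending goals whenever the world state changes. All names are illustrative.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

WorldState = Dict[str, Any]

@dataclass
class Goal:
    name: str
    achieved_when: Callable[[WorldState], bool]  # predicate over the world state
    done: bool = False

@dataclass
class GoalSupervisor:
    goals: List[Goal] = field(default_factory=list)

    def on_world_update(self, state: WorldState) -> List[str]:
        """Mark any newly accomplished goals and return their names."""
        newly_done = []
        for goal in self.goals:
            if not goal.done and goal.achieved_when(state):
                goal.done = True
                newly_done.append(goal.name)
        return newly_done

    def progress(self) -> float:
        """Fraction of the pre-planned goals accomplished so far."""
        return sum(g.done for g in self.goals) / max(len(self.goals), 1)

# Example: a delivery plan broken into two sub-goals.
supervisor = GoalSupervisor([
    Goal("reach_kitchen", lambda s: s.get("robot_room") == "kitchen"),
    Goal("deliver_cup",   lambda s: s.get("cup_location") == "table_3"),
])
print(supervisor.on_world_update({"robot_room": "kitchen"}))  # ['reach_kitchen']
print(supervisor.progress())                                  # 0.5
```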

    Artificial Intelligence and Systems Theory: Applied to Cooperative Robots

    This paper describes an approach to the design of a population of cooperative robots based on concepts borrowed from Systems Theory and Artificial Intelligence. The research has been developed under the SocRob project, carried out by the Intelligent Systems Laboratory at the Institute for Systems and Robotics - Instituto Superior Técnico (ISR/IST) in Lisbon. The acronym of the project stands both for "Society of Robots" and "Soccer Robots", the case study where we are testing our population of robots. Designing soccer robots is a very challenging problem: the robots must act not only to shoot a ball towards the goal, but also to detect and avoid static obstacles (walls, stopped robots) and dynamic obstacles (moving robots). Furthermore, they must cooperate to defeat an opposing team. Our past and current research in soccer robotics includes cooperative sensor fusion for world modeling, object recognition and tracking, robot navigation, multi-robot distributed task planning and coordination (including cooperative reinforcement learning in cooperative and adversarial environments), and behavior-based architectures for real-time task execution by cooperating robot teams.
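    As a generic illustration of the behavior-based execution style mentioned above (this is not SocRob's actual architecture), the sketch below arbitrates among priority-ordered behaviors, where the first behavior whose activation condition holds commands the robot; all behavior names and world-state keys are invented.

```python
# Generic sketch of priority-ordered behavior arbitration: the first behavior
# whose activation condition holds gets to command the robot. Behaviors and
# world-state keys are invented for illustration only.
from typing import Any, Callable, Dict, List, Tuple

Behavior = Tuple[str, Callable[[Dict[str, Any]], bool], Callable[[Dict[str, Any]], str]]

BEHAVIORS: List[Behavior] = [
    ("avoid_obstacle", lambda s: s["nearest_obstacle_m"] < 0.3,        lambda s: "turn_away"),
    ("shoot",          lambda s: s["has_ball"] and s["goal_visible"],  lambda s: "kick"),
    ("go_to_ball",     lambda s: not s["has_ball"],                    lambda s: "drive_to_ball"),
    ("idle",           lambda s: True,                                 lambda s: "hold_position"),
]

def arbitrate(state: Dict[str, Any]) -> str:
    """Return the command of the highest-priority active behavior."""
    for name, active, act in BEHAVIORS:
        if active(state):
            return f"{name}: {act(state)}"
    return "idle: hold_position"

print(arbitrate({"nearest_obstacle_m": 1.2, "has_ball": True, "goal_visible": True}))
# -> "shoot: kick"
```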

    Vehicle recognition and tracking using a generic multi-sensor and multi-algorithm fusion approach

    This paper tackles the problem of improving the robustness of vehicle detection for Adaptive Cruise Control (ACC) applications. Our approach is based on multi-sensor, multi-algorithm data fusion for vehicle detection and recognition. Our architecture combines two sensors: a frontal camera and a laser scanner. The improvement in robustness stems from two aspects. First, we addressed vision-based detection by developing an original approach based on fine gradient analysis, enhanced with a genetic AdaBoost-based algorithm for vehicle recognition. Then, we used the theory of evidence as a fusion framework to combine the confidence levels delivered by the algorithms in order to improve the 'vehicle' versus 'non-vehicle' classification. The final architecture of the system is very modular, generic and flexible in that it could be used for other detection applications or with other sensors or algorithms providing the same outputs. The system was successfully implemented on a prototype vehicle and was evaluated under real conditions over various multi-sensor databases and test scenarios, demonstrating very good performance.
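    For readers unfamiliar with the evidential-fusion step, the following sketch applies Dempster's rule of combination over the frame {vehicle, non-vehicle} to merge the confidence levels of two sources; the mass values and the two-element frame are illustrative assumptions, not the authors' exact formulation.

```python
# Rough sketch of Dempster's rule of combination over the frame
# {vehicle, non_vehicle}: each source assigns mass to "vehicle",
# "non_vehicle", and the whole frame (ignorance). Mass values are invented.
from itertools import product

THETA = frozenset({"vehicle", "non_vehicle"})

def combine(m1, m2):
    """Combine two basic belief assignments (dicts: frozenset -> mass)."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    # Normalise by the non-conflicting mass (Dempster's rule).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Camera-based classifier: fairly confident it sees a vehicle.
m_camera = {frozenset({"vehicle"}): 0.7, frozenset({"non_vehicle"}): 0.1, THETA: 0.2}
# Laser-scanner classifier: weaker evidence, more ignorance.
m_laser = {frozenset({"vehicle"}): 0.5, frozenset({"non_vehicle"}): 0.2, THETA: 0.3}

fused = combine(m_camera, m_laser)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```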

    Context-based Information Fusion: A survey and discussion

    This survey aims to provide a comprehensive account of recent and current research on context-based Information Fusion (IF) systems, tracing back the roots of the original thinking behind the development of the concept of "context". It shows how the concept, after its success in the distributed computing world, eventually permeated the world of IF, discusses current strategies and techniques, and hints at possible future trends. IF processes can represent context at different levels (structural and physical constraints of the scenario, a priori known operational rules between entities and the environment, dynamic relationships modelled to interpret the system output, etc.). In addition to the survey, several novel context exploitation dynamics and architectural aspects peculiar to the fusion domain are presented and discussed.

    A Sports Technology Needs Assessment for Performance Monitoring in Swimming

    In recent years, technology has played an increasing role in many sports, including swimming. Far beyond the stopwatch and hand-marked events, detailed biomechanical attributes can now be measured using technology such as instrumented blocks, wire tethers and underwater/dolly cameras. With the advent of micro-technology, there has been an increasing trend toward wearable sensors such as heart rate monitors, cadence aids and – more recently – activity monitors. The micro-electromechanical system (MEMS)-based inertial sensor class of activity monitor is of particular interest to the CWMA (Centre for Wireless Monitoring and Applications) at Griffith University. Due to the intensely competitive nature of professional sport, the difference between winning and not winning can be as little as a few hundredths of a second, so an improvement to any single physiological or psychological parameter could potentially give one athlete a 'winning edge' over his or her competitors. This paper provides a context-driven needs assessment to illustrate the use of technology in various situational contexts related to swimming. The end goal is to improve training outcomes by allowing the strategies and requirements of stakeholders to be targeted.

    Robust Digital-Twin Localization via An RGBD-based Transformer Network and A Comprehensive Evaluation on a Mobile Dataset

    Digital-twin technology, which creates precise digital replicas of physical objects, has significant potential to reshape AR experiences in 3D object tracking and localization scenarios. However, enabling robust 3D object tracking in dynamic mobile AR environments remains a formidable challenge: these scenarios often require a more robust pose estimator capable of handling the inherent sensor-level measurement noise. In this paper, recognizing that comprehensive solutions remain a challenge in the existing literature, we propose a transformer-based 6DoF pose estimator designed to achieve state-of-the-art accuracy on real-world noisy data. To systematically validate the new solution's performance against the prior art, we also introduce a novel RGBD dataset, Digital Twin Tracking Dataset v2 (DTTD2), focused on digital-twin object tracking scenarios. Expanding on the existing DTTD v1 (DTTD1), the new dataset adds digital-twin data captured with a cutting-edge mobile RGBD sensor suite on the Apple iPhone 14 Pro, extending the applicability of our approach to iPhone sensor data. Through extensive experimentation and in-depth analysis, we illustrate the effectiveness of our methods under significant depth data errors, surpassing the performance of existing baselines. Code and dataset are made publicly available at: https://github.com/augcog/DTTD
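    As a rough sketch of what a transformer-based 6DoF pose regressor over RGB-D point features can look like (all layer sizes, the quaternion-plus-translation output and the input feature layout are assumptions, not the paper's network):

```python
# Very rough sketch of a transformer encoder over per-point RGB-D features
# followed by a 6DoF pose head: a unit quaternion for rotation and a 3-vector
# for translation. All sizes and the feature layout are arbitrary choices.
import torch
import torch.nn as nn

class PoseTransformer(nn.Module):
    def __init__(self, feat_dim=9, d_model=128, nhead=4, num_layers=3):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)            # per-point features -> tokens
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.pose_head = nn.Linear(d_model, 7)               # quaternion (4) + translation (3)

    def forward(self, points):                               # points: (B, N, feat_dim)
        tokens = self.embed(points)
        encoded = self.encoder(tokens)
        pooled = encoded.mean(dim=1)                          # global object feature
        out = self.pose_head(pooled)
        quat = nn.functional.normalize(out[:, :4], dim=-1)    # unit quaternion
        trans = out[:, 4:]
        return quat, trans

# Example: a batch of 2 objects, 1024 sampled points, each with XYZ + RGB + normal.
model = PoseTransformer()
quat, trans = model(torch.randn(2, 1024, 9))
print(quat.shape, trans.shape)   # torch.Size([2, 4]) torch.Size([2, 3])
```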

    XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera

    We present a real-time approach for multi-person 3D motion capture at over 30 fps using a single RGB camera. It operates successfully in generic scenes which may contain occlusions by objects and by other people. Our method operates in subsequent stages. The first stage is a convolutional neural network (CNN) that estimates 2D and 3D pose features along with identity assignments for all visible joints of all individuals. We contribute a new architecture for this CNN, called SelecSLS Net, that uses novel selective long- and short-range skip connections to improve information flow, allowing for a drastically faster network without compromising accuracy. In the second stage, a fully connected neural network turns the possibly partial (on account of occlusion) 2D and 3D pose features for each subject into a complete 3D pose estimate per individual. The third stage applies space-time skeletal model fitting to the predicted 2D and 3D pose per subject to further reconcile them and enforce temporal coherence. Our method returns the full skeletal pose in joint angles for each subject. This is a further key distinction from previous work, which does not produce joint-angle results for a coherent skeleton in real time in multi-person scenes. The proposed system runs on consumer hardware at a previously unseen speed of more than 30 fps given 512x320 images as input, while achieving state-of-the-art accuracy, which we demonstrate on a range of challenging real-world scenes. To appear in ACM Transactions on Graphics (SIGGRAPH) 2020.
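    A schematic sketch of the three-stage structure described above; each function is a placeholder for the corresponding component (the smoothing in stage 3 is a generic stand-in for enforcing temporal coherence, not XNect's space-time model fitting), and all shapes and names are invented.

```python
# Schematic per-frame pipeline mirroring the three stages described above
# (placeholder computations, not XNect's networks).
import numpy as np

NUM_JOINTS = 21

def stage1_cnn(frame):
    """Placeholder for the SelecSLS CNN: per-person 2D/3D joint features."""
    return [{"person_id": 0,
             "pose2d": np.random.rand(NUM_JOINTS, 2),
             "feat3d": np.random.rand(NUM_JOINTS, 3)}]

def stage2_complete(person):
    """Placeholder for the fully connected lifting network: full 3D pose."""
    return person["feat3d"]

def stage3_smooth(prev_angles, new_angles, alpha=0.8):
    """Stand-in for space-time model fitting: simple exponential smoothing."""
    if prev_angles is None:
        return new_angles
    return alpha * prev_angles + (1 - alpha) * new_angles

prev = None
for frame in range(3):                       # pretend video frames
    for person in stage1_cnn(frame):
        pose3d = stage2_complete(person)
        angles = np.random.rand(NUM_JOINTS)  # placeholder "joint angles" derived from pose3d
        prev = stage3_smooth(prev, angles)
print(prev.shape)                            # (21,)
```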