
    A Mobile Robot Project

    We are building a mobile robot that will roam around the AI lab, observing and later perhaps doing. Our approach to building the robot and its controlling software differs from that used in many other projects in a number of ways. (1) We model the world as three-dimensional rather than two-dimensional. (2) We build no special environment for our robot and insist that it must operate in the same real world that we inhabit. (3) To deal adequately with uncertainty of perception and control, we build relational maps rather than maps embedded in a coordinate system, and we maintain explicit models of all uncertainties. (4) We explicitly monitor the computational performance of the components of the control system, in order to refine the design of a real-time control system for mobile robots based on a special-purpose distributed computation engine. (5) We use vision as our primary sense and relegate acoustic sensors to local obstacle detection. (6) We use a new architecture for an intelligent system designed to provide integration of many early vision processes, and robust real-time performance even in cases of sensory overload, failure of certain early vision processes to deliver much information in particular situations, and computation-module failure. (MIT Artificial Intelligence Laboratory)
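The idea of a relational map with explicit uncertainty, as opposed to a single global coordinate frame, can be sketched minimally: landmarks become graph nodes, edges store a relative offset plus a variance, and chaining relations accumulates uncertainty. The class, names, and additive-variance model below are illustrative assumptions, not the project's actual representation.

```python
import math

# Hypothetical sketch of a relational map: edges hold a relative offset
# (dx, dy) and an explicit variance; composing relations along a path
# adds variances, making the growth of uncertainty explicit.
class RelationalMap:
    def __init__(self):
        self.edges = {}  # (a, b) -> (dx, dy, var)

    def relate(self, a, b, dx, dy, var):
        self.edges[(a, b)] = (dx, dy, var)
        self.edges[(b, a)] = (-dx, -dy, var)  # store the inverse relation too

    def chain(self, path):
        """Compose relations along a path of landmarks; variances add."""
        dx = dy = var = 0.0
        for a, b in zip(path, path[1:]):
            ex, ey, ev = self.edges[(a, b)]
            dx, dy, var = dx + ex, dy + ey, var + ev
        return dx, dy, var

m = RelationalMap()
m.relate("door", "desk", 2.0, 0.5, 0.04)
m.relate("desk", "window", 1.0, -1.5, 0.09)
print(m.chain(["door", "desk", "window"]))  # offset with accumulated variance
```

Because only relative relations are stored, a poorly observed landmark degrades only the paths through it, not a global coordinate estimate.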

    The PANOPTIC Camera: A Plenoptic Sensor with Real-Time Omnidirectional Capability

    A new biologically inspired vision sensor made of one hundred "eyes" is presented, which is suitable for real-time acquisition and processing of 3-D image sequences. This device, named the Panoptic camera, consists of a layered arrangement of approximately 100 classical CMOS imagers distributed over a hemisphere 13 cm in diameter. The Panoptic camera is a polydioptric system in which every imager has its own vision of the world, each with a distinct focal point, which is a specific feature of the Panoptic system. This enables the recording of 3-D information, such as omnidirectional stereoscopy or depth estimation, through specific signal processing. The algorithms dictating the image reconstruction of an omnidirectional observer located at any point inside the hemisphere are presented. A hardware architecture with the capability of handling these algorithms, and the flexibility to support additional image processing in real time, has been developed as a two-layer system based on FPGAs. The details of the hardware architecture, its internal blocks, the mapping of the algorithms onto these elements, and the device calibration procedure are presented, along with imaging results.
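One common way to reconstruct an omnidirectional view from many fixed imagers is, for each viewing direction, to blend the imagers whose optical axes point closest to that direction. The sketch below illustrates only that weighting step under simplified assumptions (cameras reduced to unit axis vectors, inverse-angle weights); it is not the actual Panoptic reconstruction pipeline.

```python
import math

# Illustrative angular-proximity blending: for a query viewing direction,
# weight the k imagers whose optical axes are nearest to it.
def blend_weights(query, axes, k=2):
    """Return (imager_index, normalized_weight) pairs for the k nearest axes."""
    def ang(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return math.acos(max(-1.0, min(1.0, dot)))  # clamp for float safety
    ranked = sorted(range(len(axes)), key=lambda i: ang(query, axes[i]))[:k]
    raw = [1.0 / (ang(query, axes[i]) + 1e-6) for i in ranked]
    total = sum(raw)
    return [(i, w / total) for i, w in zip(ranked, raw)]

# Three toy imagers pointing along the coordinate axes of the hemisphere.
axes = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(blend_weights((0, 0, 1), axes, k=2))
```

A direction aligned with one imager's axis receives nearly all of that imager's weight, while directions between axes blend neighboring contributions smoothly.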

    6G White Paper on Edge Intelligence

    In this white paper we provide a vision for 6G Edge Intelligence. Moving from 5G toward the future 6G networks, intelligent solutions utilizing data-driven machine learning and artificial intelligence become crucial for several real-world applications, including but not limited to more efficient manufacturing, novel personal smart-device environments and experiences, urban computing, and autonomous traffic settings. We present edge computing, along with other 6G enablers, as a key component in establishing the future 2030 intelligent Internet technologies, as shown in this series of 6G White Papers. In this white paper, we focus on the domains of edge computing infrastructure and platforms, data and edge network management, software development for the edge, and real-time and distributed training of ML/AI algorithms, along with security, privacy, pricing, and end-user aspects. We discuss the key enablers and challenges and identify the key research questions for the development of Intelligent Edge services. As the main outcome of this white paper, we envision a transition from the Internet of Things to the Intelligent Internet of Intelligent Things and provide a roadmap for the development of the 6G Intelligent Edge.

    Realistic Physical Simulation and Analysis of Shepherding Algorithms using Unmanned Aerial Vehicles

    Advancements in UAV technology have offered promising avenues for wildlife management, specifically in the herding of wild animals. However, existing algorithms frequently simulate two-dimensional scenarios with the unrealistic assumption of continuous knowledge of animal positions, or involve the use of a scouting UAV in addition to the herding UAV to localize the animals. Addressing this shortcoming, our research introduces a novel herding strategy using a single UAV, integrating a computer vision algorithm into a three-dimensional simulation built on the Gazebo platform with Robot Operating System 2 (ROS2) middleware. The UAV, simulated with a PX4 flight controller, detects animals using ArUco markers and uses their real-time positions to update their last known positions. The performance of our computer-vision-assisted herding algorithm was evaluated against conventional position-aware and dual-UAV herding strategies. Findings suggest that one of our vision-based strategies exhibits performance comparable to the baseline for smaller populations and loosely packed scenarios, albeit with sporadic herding failures and degraded performance in very tightly packed and very sparsely distributed flocking scenarios. The proposed algorithm demonstrates potential for future real-world applications, marking a significant stride toward realistic, autonomous wildlife management using UAVs in three-dimensional spaces.
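A core step in most shepherding algorithms is computing a drive point: a position behind the flock, on the line from the goal through the flock centroid, from which the UAV pushes the animals toward the goal. The sketch below shows a common textbook formulation of that geometry; it is not necessarily the strategy used in this paper, and the names and offset parameter are illustrative.

```python
import math

# Illustrative drive-point geometry for shepherding: place the UAV at a
# fixed offset behind the flock centroid, on the ray from the goal through
# the centroid, so its repulsive presence pushes the flock goal-ward.
def drive_point(positions, goal, offset):
    """Return the (x, y) steering point behind the flock relative to `goal`."""
    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    dx, dy = cx - goal[0], cy - goal[1]      # goal-to-centroid direction
    norm = math.hypot(dx, dy) or 1.0         # avoid division by zero
    return (cx + offset * dx / norm, cy + offset * dy / norm)

flock = [(9.0, 0.0), (11.0, 0.0), (10.0, 2.0)]
goal = (0.0, 0.0)
print(drive_point(flock, goal, offset=3.0))
```

In the vision-based setting described above, `positions` would be the last known positions maintained from ArUco detections rather than ground-truth animal locations.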

    Autonomic Road Transport Support Systems

    The work on Autonomic Road Transport Support (ARTS) presented here aims at meeting the challenge of engineering autonomic behavior in Intelligent Transportation Systems (ITS) by fusing research from the disciplines of traffic engineering and autonomic computing. Ideas and techniques from leading-edge artificial intelligence research have been adapted for ITS over recent years. Examples include adaptive control embedded in real-time traffic control systems, heuristic algorithms (e.g. in SAT-NAV systems), and image processing and computer vision (e.g. in automated surveillance interpretation). Autonomic computing, which is inspired by the biological example of the body's autonomic nervous system, is a more recent development. It allows for more efficient management of heterogeneous distributed computing systems. In the area of computing, autonomic systems are endowed with a number of properties generally referred to as self-X properties, including self-configuration, self-healing, self-optimization, self-protection and, more generally, self-management. Some isolated examples of autonomic properties, such as self-adaptation, have found their way into ITS technology and have already proved beneficial. This edited volume provides a comprehensive introduction to Autonomic Road Transport Support (ARTS) and describes the development of ARTS systems. It starts out with the visions, opportunities, and challenges; then presents the foundations of ARTS and the platforms and methods used; and closes with experiences from real-world applications and prototypes of emerging applications. This makes it suitable for researchers and practitioners in the fields of autonomic computing, traffic and transport management and engineering, AI, and software engineering. Graduate students will benefit from the state-of-the-art descriptions, the study of novel methods, and the case studies provided.
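The self-X properties mentioned above are typically realized as a monitor-analyze-plan-execute feedback loop around the managed resource. The toy below sketches a self-optimizing loop for a traffic signal; the thresholds, step sizes, and names are illustrative assumptions, not taken from the book.

```python
# Minimal sketch of an autonomic (self-optimizing) control step in the
# monitor-analyze-plan-execute style, applied to a toy traffic signal:
# the green-phase duration adapts itself toward a target queue length.
def autonomic_step(green_time, queue_len, target=10, step=5, lo=10, hi=120):
    """One adaptation cycle: lengthen green if queues build, shorten otherwise."""
    if queue_len > target:               # analyze: congestion building
        green_time += step               # plan: lengthen the green phase
    elif queue_len < target:
        green_time -= step               # plan: reclaim unused green time
    return max(lo, min(hi, green_time))  # execute, clamped to safe bounds

g = 30
for q in [18, 14, 9, 9]:                 # monitor: queue lengths per cycle
    g = autonomic_step(g, q)
print(g)
```

The clamp on the final line is a crude stand-in for self-protection: no matter what the adaptation logic proposes, the executed green time stays within safe operating bounds.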

    Developmental Robots - A New Paradigm

    It has proved extremely challenging for humans to program a robot to a sufficient degree that it acts properly in a typical unknown human environment. This is especially true for a humanoid robot, due to the very large number of redundant degrees of freedom and the large number of sensors required for a humanoid to work safely and effectively in the human environment. How can we address this fundamental problem? Motivated by human mental development from infancy to adulthood, we present a theory, an architecture, and some experimental results showing how to enable a robot to develop its mind automatically, through online, real-time interactions with its environment. Humans mentally "raise" the robot through "robot sitting" and "robot schools" instead of task-specific robot programming.
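The contrast with task-specific programming can be made concrete with a toy online learner: instead of hand-coding behavior, a teacher supplies sensation-action experiences during interaction, and the robot acts by recalling the closest stored sensation. This nearest-neighbor sketch is a deliberately simplified illustration, not the authors' actual developmental architecture.

```python
# Toy sketch of developmental, online learning: experiences accumulate
# open-endedly while the robot is "raised", and behavior is recall from
# memory rather than pre-programmed task logic.
class DevelopmentalLearner:
    def __init__(self):
        self.memory = []                 # grows during online interaction

    def teach(self, sensation, action):
        """One interactive teaching episode: store a sensation-action pair."""
        self.memory.append((sensation, action))

    def act(self, sensation):
        """Recall the action of the nearest stored sensation (if any)."""
        if not self.memory:
            return None                  # nothing learned yet
        best = min(self.memory,
                   key=lambda m: sum((a - b) ** 2
                                     for a, b in zip(m[0], sensation)))
        return best[1]

robot = DevelopmentalLearner()
robot.teach((1.0, 0.0), "turn_left")
robot.teach((0.0, 1.0), "turn_right")
print(robot.act((0.9, 0.1)))
```

The key property the sketch preserves is that the mapping from sensation to action is acquired entirely through interaction, so the same untouched code can be "raised" into different behaviors.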