9 research outputs found

    Integrated System for Stereoscopic Cognitive Vision, Localization, Mapping, and Communication with a Mobile Service Robot

    This paper describes a stereo-vision-based mobile robot that can navigate and explore its environment autonomously and safely while simultaneously building a three-dimensional virtual map of that environment. The control strategy is rule-based, and interaction with the robot takes place over Bluetooth. Stereoscopic vision allows the robot to recognize objects and to determine the distance to each analyzed object. The robot generates and simultaneously updates a full-colour 3D map of the environment being explored, in which the position and type of each detected and recognized object are marked. Furthermore, the robot will be able to use a gripper to collect detected objects and carry them to dedicated collecting bins, making it suitable for commercial waste cleanup applications. This application represents a successful integration of computer, control, and communication techniques in mobile service robot control.
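The distance determination the abstract refers to can be illustrated with the standard rectified-stereo relation Z = f·B/d. This is textbook stereo geometry, not code from the paper; the parameter values below are arbitrary examples.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z of a point from a rectified stereo pair: Z = f * B / d,
    where f is the focal length in pixels, B the camera baseline in
    metres, and d the horizontal disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# A 512 px focal length, 25 cm baseline, and 64 px disparity give 2.0 m.
print(stereo_depth(512.0, 0.25, 64.0))
```

Larger baselines and focal lengths improve depth resolution at a given disparity, which is why baseline choice matters for a mapping robot.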

    Computer Vision and Image Understanding xxx

    Abstract: This paper presents a panoramic virtual stereo vision approach to the problem of detecting and localizing multiple moving objects (e.g., humans) in an indoor scene. Two panoramic cameras, residing on different mobile platforms, compose a virtual stereo sensor with a flexible baseline. A novel "mutual calibration" algorithm is proposed, where panoramic cameras on two cooperative moving platforms are dynamically calibrated by looking at each other. A detailed numerical analysis of the error characteristics of the panoramic virtual stereo vision (mutual calibration error, stereo matching error, and triangulation error) is given to derive rules for optimal view planning. Experimental results are discussed for detecting and localizing multiple humans in motion using two cooperative robot platforms.
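The triangulation step in such a virtual stereo sensor can be sketched as intersecting two bearing rays from cameras at known positions. This is a generic two-ray triangulation, not the paper's algorithm, and the geometry below (planar, world-frame bearing angles) is an assumption.

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Locate a target seen from two cameras at known 2D positions p1, p2,
    given a world-frame bearing angle (radians) from each camera.
    Solves p1 + t1*d1 = p2 + t2*d2 for the ray intersection."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # 2x2 linear system [d1 | -d2] * (t1, t2) = p2 - p1, by Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel: no unique intersection")
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Cameras 2 m apart, each sighting a target 1 m ahead of the midpoint:
x, y = triangulate((0.0, 0.0), math.pi / 4, (2.0, 0.0), 3 * math.pi / 4)
```

The near-singular `det` check reflects the triangulation error the paper analyzes: as the rays approach parallel (short effective baseline), localization error grows rapidly.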

    Development of an Active Vision System for the Remote Identification of Multiple Targets

    This thesis introduces a centralized active vision system for the remote identification of multiple targets in applications where the targets may outnumber the active system resources. Design and implementation details of a modular active vision system are presented, from which a prototype has been constructed. The system employs two different, yet complementary, camera technologies: omnidirectional cameras detect and track targets at low resolution, while perspective cameras mounted on pan-tilt stages acquire high-resolution images suitable for identification. Five greedy scheduling policies have been developed and implemented to manage the active system resources in an attempt to achieve optimal target-to-camera assignments. System performance has been evaluated using both simulated and real-world experiments under different target and system configurations for all five scheduling policies. Parameters affecting performance include target entry conditions, congestion levels, target-to-camera speeds, target trajectories, and the number of active cameras. An overall trend in the relative performance of the scheduling algorithms was observed: the Least System Reconfiguration and Future Least System Reconfiguration policies performed best for the majority of conditions investigated, while the Load Sharing and First Come First Serve policies performed poorest. The performance of the Earliest Deadline First policy was found to be highly dependent on target predictability.
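One of the named policies, Earliest Deadline First, can be sketched as a greedy target-to-camera assignment. The data layout and the simple "sort by deadline" dispatch below are assumptions for illustration, not the thesis's implementation, which also models camera reconfiguration costs.

```python
def earliest_deadline_first(targets, n_cameras):
    """Greedy EDF sketch: assign each available pan-tilt camera to the
    unserved target with the earliest deadline.

    targets: list of (name, deadline) pairs; smaller deadline = sooner.
    Returns a dict mapping camera index -> target name."""
    queue = sorted(targets, key=lambda t: t[1])  # earliest deadline first
    assignment = {}
    for cam in range(n_cameras):
        if cam < len(queue):
            assignment[cam] = queue[cam][0]
    return assignment

# Three targets, two cameras: the target with deadline 9 must wait.
plan = earliest_deadline_first([("a", 5), ("b", 2), ("c", 9)], 2)
```

EDF's sensitivity to target predictability, noted in the abstract, is visible even here: the deadlines must be estimated from predicted target motion, so poor prediction directly corrupts the queue order.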

    Self-* properties of multi sensing entities in smart environments

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005. Includes bibliographical references (p. 78-87). Computers and sensors are increasingly embedded into everyday objects, woven into garments, "painted" onto architecture, or deployed directly into the environment. They monitor the environment, process the information, and extract knowledge that their designers and programmers hope will be interesting. As the number and variety of these sensors and their connections increase, so does the complexity of the networks in which they operate. Deployment, management, and repair become difficult to perform manually. It is therefore particularly appealing to design a software architecture that can achieve the necessary organizational structures without requiring human intervention. Focusing on image sensing and machine vision techniques, we propose to investigate how small, unspecialized, low-processing sensing entities can self-organize to create a scalable, fault-tolerant, decentralized, and easily reconfigurable system for smart environments, and how these entities self-adapt to optimize their contribution in the presence of constraints inherent to sensor networks. By Arnaud Pilpré. S.M.

    The role of groups in smart camera networks

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006. Includes bibliographical references (p. 103-111). Recent research in sensor networks has made it possible to deploy networks of sensors with significant local processing. These sensor networks are revolutionising information collection and processing in many different environments. Often the amount of local data produced by these devices, and their sheer number, makes centralised data processing infeasible. Smart camera networks represent a particular challenge in this regard, partly because of the amount of data produced by each camera, but also because many high-level vision algorithms require data from more than one camera. Many distributed algorithms exist that work locally to produce results from a collection of nodes, but as this number grows the algorithm's performance is quickly crippled by the resulting exponential increase in communication overhead. This thesis examines the limits this places on peer-to-peer cooperation between nodes, and demonstrates how for large networks these can only be circumvented by locally formed organisations of nodes. A local group-forming protocol is described that provides a method for nodes to create a bottom-up organisation based purely on local conditions. This allows the formation of a dynamic information network of cooperating nodes, in which a distributed algorithm can organise the communications of its nodes using purely local knowledge to maintain its global network performance. Building on recent work using SIFT feature detection, this protocol is demonstrated in a network of smart cameras. Local groups with shared views are established, which allow each camera to locally determine its relative position with respect to others in the network. The result partitions the network into groups of cameras with known visual relationships, which can then be used for further analysis. By Jacky Mallett. Ph.D.
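The partitioning of cameras into groups with shared views can be sketched as connected-component grouping over pairwise feature-match counts. The union-find structure and the match-count threshold below are illustrative choices, not the protocol described in the thesis, which operates with purely local, decentralized knowledge.

```python
def form_groups(cameras, matches, threshold=20):
    """Partition cameras into groups of shared views: link any two
    cameras whose pairwise SIFT match count reaches the threshold,
    then return the connected components (union-find with path halving).

    matches: dict mapping (cam_a, cam_b) -> number of feature matches."""
    parent = {c: c for c in cameras}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path halving
            c = parent[c]
        return c

    for (a, b), n_matches in matches.items():
        if n_matches >= threshold:
            parent[find(a)] = find(b)  # union the two components

    groups = {}
    for c in cameras:
        groups.setdefault(find(c), set()).add(c)
    return list(groups.values())

# A-B and B-C share views; D sees nothing in common with the others.
groups = form_groups(["A", "B", "C", "D"],
                     {("A", "B"): 30, ("B", "C"): 25, ("C", "D"): 3})
```

Grouping this way keeps subsequent cooperation (e.g. relative pose estimation) within small components, which is exactly the communication-scaling argument the thesis makes.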

    Myriad : a distributed machine vision application framework

    This thesis examines the potential for the application of distributed computing frameworks to industrial and also lightweight consumer-level Machine Vision (MV) applications. Traditional, stand-alone MV systems have many benefits in well-defined, tightly-controlled industrial settings, but expose limitations in interactive, de-localised and small-task applications that seek to utilise vision techniques. In these situations, single-computer solutions fail to suffice and greater flexibility in terms of system construction, interactivity and localisation is required. Network-connected and distributed vision systems are proposed as a remedy to these problems, providing dynamic, componentised systems that may optionally be independent of location, or take advantage of networked computing tools and techniques, such as web servers, databases, proxies, wireless networking, secure connectivity, distributed computing clusters, web services and load balancing. The thesis discusses a system named Myriad, a distributed computing framework for Machine Vision applications. Myriad is composed of components, such as image processing engines and equipment controllers, which behave as enhanced web servers and communicate using simple HTTP requests. The roles of HTTP-based distributed computing servers in simplifying rapid development of networked applications and integrating those applications with existing networked tools and business processes are explored. Prototypes of Myriad components, written in Java, along with supporting PHP, Perl and Prolog scripts and user interfaces in C#, Java, VB and C++/Qt are examined. Each component includes a scripting language named MCS, enabling remote clients (or other Myriad components) to issue single commands or execute sequences of commands locally to the component in a sustained session.
The advantages of server-side scripting in this manner for distributed computing tasks are outlined with emphasis on Machine Vision applications, as a means to overcome network connection issues and address problems where consistent processing is required. Furthermore, the opportunities to utilise scripting to form complex distributed computing network topologies and fully-autonomous federated networked applications are described, and examples are given of how to achieve functionality such as clusters of image processing nodes. Through the medium of experimentation involving the remote control of a model train set, cameras and lights, the ability of Myriad to perform the traditional roles of fixed, stand-alone Machine Vision systems is supported, along with discussion of opportunities to incorporate these elements into network-based dynamic collaborative inspection applications. In an example of 2D packing of remotely-acquired shapes, distributed computing extensions to Machine Vision tasks are explored, along with integration into larger business processes. Finally, the thesis examines the use of Machine Vision techniques and Myriad components to construct distributed computing applications with the addition of vision capabilities, leading to a new class of image-data-driven applications that exploit mobile computing and Pervasive Computing trends.
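The idea of a component executing a sequence of commands locally in a sustained session can be sketched with a toy dispatcher. The command names and line-oriented syntax below are invented placeholders, not actual MCS syntax, and the HTTP transport layer is omitted entirely.

```python
class ComponentSession:
    """Toy sketch of a vision component that runs a client-supplied
    script of commands locally, one session at a time. Command names
    ('capture', 'threshold') are illustrative, not real MCS commands."""

    def __init__(self):
        self.log = []
        # Dispatch table: command name -> handler taking the argument string.
        self.handlers = {
            "capture": lambda arg: self.log.append("captured frame"),
            "threshold": lambda arg: self.log.append(f"threshold {arg}"),
        }

    def run(self, script):
        """Execute each non-empty line as '<command> [argument]'."""
        for line in script.strip().splitlines():
            name, _, arg = line.strip().partition(" ")
            self.handlers[name](arg)
        return self.log

session = ComponentSession()
result = session.run("capture\nthreshold 128")
```

Running the whole script on the component, rather than issuing each command over the network, is the consistency-under-unreliable-connections argument the abstract makes for server-side scripting.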