
    Intelligent manipulation technique for multi-branch robotic systems

    A new analytical development in kinematics planning is reported. The INtelligent KInematics Planner (INKIP) combines kinematics spline theory with an adaptive logic annealing process. A novel framework for a robot learning mechanism is also introduced: the FUzzy LOgic Self Organized Neural Networks (FULOSONN) integrates fuzzy logic for commands, control, searching, and reasoning; an embedded expert system for implementing nominal robotics knowledge; and self-organized neural networks for the dynamic evolution of knowledge. Progress on the mechanical construction of the SRA Advanced Robotic System (SRAARS) and the real-time robot vision system is also reported, and a decision was made to incorporate Local Area Network (LAN) technology into the overall communication system.

    Self-Adaptive Architecture for Multi-sensor Embedded Vision System

    Architectural optimization for heterogeneous multi-sensor processing is a real technological challenge. Most vision systems involve only a single color sensor and do not address the challenge of heterogeneous sensors. However, more and more applications require additional sensor types, such as infrared or low-light sensors, so that the vision system can cope with varying luminosity conditions. These heterogeneous sensors can differ in spectral band, resolution, or even frame rate. Such sensor variety demands high computing performance, yet embedded systems face stringent area and power constraints. Reconfigurable architectures enable flexible computing while respecting these constraints. Many reconfigurable architectures for vision applications have been proposed in the past, yet few of them offer a real dynamic adaptation capability to manage sensor heterogeneity. In this paper, a self-adaptive architecture is proposed to deal with heterogeneous sensors dynamically. The architecture supports on-the-fly sensor switching and adapts itself by means of a system monitor and an adaptation controller. A stream-header concept is used to convey sensor information to the self-adaptive architecture. The proposed architecture was implemented on an Altera Cyclone V FPGA, where adaptation consists of Dynamic and Partial Reconfiguration of the FPGA. The self-adaptive ability of the architecture has been demonstrated with low resource overhead and an average global adaptation time of 75 ms.
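    A minimal software sketch of the stream-header idea follows, assuming hypothetical names and a plain Python stand-in for the FPGA fabric: each frame carries a header describing its source sensor, and an adaptation controller switches the processing configuration when the header changes (in the paper this switch is realized by Dynamic and Partial Reconfiguration).

```python
# Hypothetical sketch, not the paper's FPGA implementation: a stream header
# describes the source sensor of each frame, and an adaptation controller
# reconfigures the processing pipeline whenever the header changes.
from dataclasses import dataclass

@dataclass(frozen=True)
class StreamHeader:
    sensor_id: str        # e.g. "color", "infrared", "low_light"
    spectral_band: str    # e.g. "RGB", "LWIR"
    resolution: tuple     # (width, height)
    frame_rate: int       # frames per second

class AdaptationController:
    """Monitors incoming headers and swaps the processing configuration."""
    def __init__(self, configs):
        self.configs = configs          # sensor_id -> pipeline description
        self.active = None

    def on_frame(self, header: StreamHeader):
        if header.sensor_id != self.active:
            # In the paper this step triggers Dynamic and Partial
            # Reconfiguration of the FPGA; here we only switch a config.
            self.active = header.sensor_id
            print(f"reconfigured pipeline -> {self.configs[header.sensor_id]}")

controller = AdaptationController({"color": "demosaic+enhance",
                                   "infrared": "nuc+enhance"})
controller.on_frame(StreamHeader("color", "RGB", (1920, 1080), 60))
controller.on_frame(StreamHeader("infrared", "LWIR", (640, 480), 30))
```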

    An active stereo vision-based learning approach for robotic tracking, fixating and grasping control

    In this paper, an active stereo vision-based learning approach is proposed for a robot to track, fixate on, and grasp an object in unknown environments. First, the functional mapping relationships between the joint angles of the active stereo vision system and the spatial representation of the object are derived and expressed in a three-dimensional workspace frame. Next, self-adaptive resonance theory-based neural networks and feedforward neural networks are used to learn these mapping relationships in a self-organized way. The approach is then verified by simulation using models of an active stereo vision system mounted on the end-effector of a robot, and the simulation results confirm its effectiveness.
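    The following is a hedged sketch of the kind of joint-angle-to-position mapping such a system learns, for an idealized verging stereo head; the geometry, symbols, and function name are assumptions rather than the paper's exact derivation, and the paper learns the mapping with neural networks rather than computing it in closed form.

```python
# Assumed geometry: two cameras separated by a baseline, each panned so its
# optical axis points at the fixated object; a common tilt raises the plane.
import math

def fixation_point(baseline, pan_left, pan_right, tilt):
    """Return (X, Y, Z) of the fixated point in the head frame.

    pan_left/pan_right: inward pan angles (rad) of the two cameras.
    tilt: common tilt angle (rad) of the stereo head.
    """
    # Depth in the (untilted) horizontal plane from simple triangulation.
    z_plane = baseline / (math.tan(pan_left) + math.tan(pan_right))
    x = -baseline / 2.0 + z_plane * math.tan(pan_left)
    # Rotate the plane point by the head tilt to get height and true depth.
    y = z_plane * math.sin(tilt)
    z = z_plane * math.cos(tilt)
    return (x, y, z)

print(fixation_point(baseline=0.30, pan_left=math.radians(8),
                     pan_right=math.radians(10), tilt=math.radians(5)))
```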

    CONFIGR: A Vision-Based Model for Long-Range Figure Completion

    CONFIGR (CONtour FIgure GRound) is a computational model based on principles of biological vision that completes sparse and noisy image figures. Within an integrated vision/recognition system, CONFIGR posits an initial recognition stage which identifies figure pixels from spatially local input information. The resulting, and typically incomplete, figure is fed back to the “early vision” stage for long-range completion via filling-in. The reconstructed image is then re-presented to the recognition system for global functions such as object recognition. In the CONFIGR algorithm, the smallest independent image unit is the visible pixel, whose size defines a computational spatial scale. Once pixel size is fixed, the entire algorithm is fully determined, with no additional parameter choices. Multi-scale simulations illustrate the vision/recognition system. Open-source CONFIGR code is available online, but all examples can be derived analytically, and the design principles applied at each step are transparent. The model balances filling-in as figure against complementary filling-in as ground, which blocks spurious figure completions. Lobe computations occur on a subpixel spatial scale. Originally designed to fill in missing contours in an incomplete image such as a dashed line, the same CONFIGR system connects and segments sparse dots and unifies occluded objects from pieces locally identified as figure in the initial recognition stage. The model self-scales its completion distances, filling in across gaps of any length where unimpeded, while limiting connections among dense image-figure pixel groups that already have intrinsic form. Long-range image completion promises to play an important role in adaptive processors that reconstruct images from highly compressed video and still camera images. Supported by the Air Force Office of Scientific Research (F49620-01-1-0423), the National Geospatial-Intelligence Agency (NMA 201-01-1-0216), the National Science Foundation (SBE-0354378), and the Office of Naval Research (N000014-01-1-0624).
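    The toy snippet below is not the CONFIGR algorithm itself (which is available as open-source code); it only illustrates, under simplifying assumptions, the idea of completion that self-scales: background runs lying between figure pixels are filled in regardless of gap length, reconnecting a dashed line into a solid one.

```python
# Toy row-wise completion of a binary figure: fill every background run
# lying between figure pixels in a row, whatever its length.
import numpy as np

def complete_rows(figure):
    out = figure.copy()
    for r in range(figure.shape[0]):
        cols = np.flatnonzero(figure[r])
        if cols.size >= 2:
            out[r, cols.min():cols.max() + 1] = 1   # fill between extremes
    return out

dashed = np.zeros((3, 12), dtype=int)
dashed[1, [0, 1, 4, 5, 9, 10]] = 1                  # a dashed horizontal line
print(complete_rows(dashed)[1])                     # -> fully connected row
```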

    An efficient object recognition and self-localization system for humanoid soccer robot

    In the RoboCup soccer humanoid league competition, the vision system collects environment information used for object recognition, coordinate establishment, robot localization, robot tactics, obstacle avoidance, and other functions. A real-time object recognition and highly accurate self-localization system therefore becomes the key technology for improving the soccer robot's performance. In this work we propose an efficient object recognition and self-localization system for the 2009 RoboCup soccer humanoid league rules, based on two methods. 1) In the object recognition part, the real-time vision-based method is built on the adaptive resolution method (ARM), which selects the most suitable resolution for different situations during competition; ARM also reduces noise interference and makes the object recognition system more robust. 2) In the self-localization part, we propose a new approach, the adaptive vision-based self-localization system (AVBSLS), which uses trigonometric functions to find the coarse location of the robot and further adopts an artificial neural network technique to adjust the humanoid robot's position adaptively. The experimental results indicate that the proposed system is not easily affected by lighting conditions. The object recognition accuracy is more than 93% on average and the average frame rate reaches 32 fps (frames per second). The system not only maintains a high recognition accuracy for high-resolution frames, but also increases the average frame rate by about 11 fps compared to the conventional high-resolution approach, and the average localization accuracy is 92.3%.
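    As an illustration of the coarse trigonometric localization step, the sketch below trilaterates the robot from vision-estimated distances to two field landmarks with known positions; the landmark setup and function names are assumptions, and the neural-network refinement used by AVBSLS is omitted.

```python
# Hedged sketch of coarse self-localization by 2D trilateration from two
# known landmarks (e.g. goal posts) and vision-estimated distances.
import math

def coarse_position(p1, p2, d1, d2):
    """Return the two candidate robot positions consistent with d1, d2."""
    (x1, y1), (x2, y2) = p1, p2
    base = math.hypot(x2 - x1, y2 - y1)            # landmark separation
    # Distance from p1 to the foot of the perpendicular through the robot.
    a = (d1**2 - d2**2 + base**2) / (2 * base)
    h = math.sqrt(max(d1**2 - a**2, 0.0))          # perpendicular offset
    ux, uy = (x2 - x1) / base, (y2 - y1) / base    # unit vector p1 -> p2
    px, py = x1 + a * ux, y1 + a * uy
    return ((px - h * uy, py + h * ux), (px + h * uy, py - h * ux))

# Hypothetical goal posts 1.4 m apart on the goal line; distances from vision.
print(coarse_position((0.0, -0.7), (0.0, 0.7), d1=2.5, d2=2.2))
```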

    Adaptive Multi-sensor Perception for Driving Automation in Outdoor Contexts

    In this research, adaptive perception for driving automation is discussed with the aim of enabling a vehicle to automatically detect driveable areas and obstacles in the scene. It is especially designed for outdoor contexts where conventional perception systems that rely on a priori knowledge of the terrain's geometric properties, appearance properties, or both are prone to fail due to the variability of terrain properties and environmental conditions. In contrast, the proposed framework uses a self-learning approach to build a model of the ground class that is continuously adjusted online to reflect the latest ground appearance. The system is also highly flexible, as it can work with a single sensor modality or a multi-sensor combination. In the context of this research, different embodiments have been demonstrated using range data coming from either a radar or a stereo camera, and adopting self-supervised strategies in which monocular vision is automatically trained by radar or stereo vision. A comprehensive set of experimental results, obtained with different ground vehicles operating in the field, is presented to validate and assess the performance of the system.
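    A minimal sketch of the self-supervised idea, under assumptions: range data (stereo or radar) labels some image pixels as ground, a simple color model of the ground class is updated online from those labels, and the model then classifies the rest of the monocular image. The diagonal-Gaussian model, class names, and threshold below are illustrative choices, not the paper's.

```python
# Online ground-appearance model trained from range-labeled pixels and used
# to classify monocular pixels as ground / non-ground.
import numpy as np

class OnlineGroundModel:
    def __init__(self, alpha=0.05):
        self.alpha = alpha               # forgetting factor for online update
        self.mean = None
        self.var = None

    def update(self, ground_pixels):     # ground_pixels: (N, 3) RGB samples
        m, v = ground_pixels.mean(0), ground_pixels.var(0) + 1e-6
        if self.mean is None:
            self.mean, self.var = m, v
        else:                            # exponential moving update
            self.mean = (1 - self.alpha) * self.mean + self.alpha * m
            self.var = (1 - self.alpha) * self.var + self.alpha * v

    def is_ground(self, pixels, thresh=3.0):
        # Per-pixel Mahalanobis-like distance under a diagonal Gaussian.
        d2 = ((pixels - self.mean) ** 2 / self.var).sum(-1)
        return d2 < thresh ** 2

model = OnlineGroundModel()
model.update(np.random.normal([110, 100, 90], 10, size=(500, 3)))  # stereo-labeled patch
print(model.is_ground(np.array([[112.0, 98.0, 91.0], [30.0, 80.0, 200.0]])))
```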

    Towards the integration of process and quality control using multi-agent technology

    The paper introduces a vision for the design of distributed manufacturing control systems that use multi-agent principles to enhance the integration of production and quality control processes. It highlights how agent technology may enforce interaction between the manufacturing execution system and the distributed control system, enhancing the exploitation of the information available at the quality control and process control levels. A specific focus is placed on a suitable engineering methodology for the design and realization of this concept. Innovation is also presented at the level of adaptive process control and self-optimizing quality control, with examples related to a home appliance production line.

    Self-organization via active exploration in robotic applications

    We describe a neural network based robotic system. Unlike traditional robotic systems, our approach focuses on non-stationary problems. We argue that self-organization capability is necessary for any system to operate successfully in a non-stationary environment, and we suggest that self-organization should be based on an active exploration process. We investigated neural architectures with novelty sensitivity, selective attention, reinforcement learning, habit formation, and flexible-criteria categorization properties, and analyzed the resulting behavior (consisting of an intelligent initiation of exploration) through computer simulations. While various computer vision researchers have recently acknowledged the importance of active processes (Swain and Stricker, 1991), the approaches proposed within this new framework still suffer from a lack of self-organization (Aloimonos and Bandyopadhyay, 1987; Bajcsy, 1988). A self-organizing, neural network based robot (MAVIN) has recently been proposed (Baloch and Waxman, 1991); it is capable of position-, size-, and rotation-invariant pattern categorization, recognition, and Pavlovian conditioning. Our robot does not initially have invariant processing properties, because of the emphasis we put on active exploration. We maintain the point of view that such invariant properties emerge from an internalization of exploratory sensory-motor activity. Rather than coding the equilibria of such mental capabilities, we seek to capture their dynamics, to understand on the one hand how the emergence of such invariances is possible and on the other hand the dynamics that lead to them. The second point is crucial for an adaptive robot to acquire new invariances in non-stationary environments, as demonstrated by Helmholtz's inverting-glass experiments. In future work we will introduce Pavlovian conditioning circuits with the precise objective of achieving the generation, coordination, and internalization of sequences of actions.
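    A toy, assumption-laden illustration (not the MAVIN robot or the architectures described above) of novelty-driven active exploration: the agent keeps visit counts over a discrete set of sensory states and preferentially initiates exploration toward the states it has experienced least often.

```python
# Count-based novelty used to pick exploration targets; all names are
# hypothetical and stand in for the novelty-sensitivity property discussed.
import random
from collections import Counter

class NoveltyExplorer:
    def __init__(self, states):
        self.visits = Counter({s: 0 for s in states})

    def novelty(self, state):
        return 1.0 / (1.0 + self.visits[state])   # rarely seen -> high novelty

    def choose(self):
        # Pick the most novel state; ties broken at random.
        best = max(self.novelty(s) for s in self.visits)
        return random.choice([s for s in self.visits
                              if self.novelty(s) == best])

    def observe(self, state):
        self.visits[state] += 1

explorer = NoveltyExplorer(["left", "right", "ahead"])
for _ in range(5):
    target = explorer.choose()
    explorer.observe(target)                       # simulate reaching the target
print(explorer.visits)
```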