
    An Active Observer

    In this paper we present a framework for research into the development of an Active Observer. The components of such an observer are low- and intermediate-level visual processing modules. Some of these modules have been adapted from the community and some have been investigated in the GRASP laboratory, most notably modules for understanding surface reflections via color and multiple views, and for segmenting three-dimensional images into first- or second-order surfaces via superquadric/parametric volumetric models. The key problem in Active Observer research, however, is the control structure governing its behavior based on the task and situation. This control structure is modeled by a formalism called Discrete Event Dynamic Systems (DEDS).
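The DEDS formalism named above models behavior as a finite automaton driven by discrete events. As a minimal illustrative sketch (all states, events, and transitions here are hypothetical, not taken from the paper), an observer's task-driven control structure might look like:

```python
# Sketch of a discrete-event controller for an active observer.
# States and events are illustrative placeholders, not the paper's model.

TRANSITIONS = {
    ("idle", "task_assigned"): "exploring",
    ("exploring", "object_detected"): "tracking",
    ("tracking", "object_lost"): "exploring",
    ("tracking", "task_complete"): "idle",
}

def step(state, event):
    """Advance the automaton by one event.

    Unmodeled (state, event) pairs leave the state unchanged,
    standing in for events the controller ignores."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["task_assigned", "object_detected", "task_complete"]:
    state = step(state, event)
print(state)  # idle
```

In a DEDS treatment, properties such as controllability and observability of this event automaton are what get analyzed, rather than continuous dynamics.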

    Workshop on multisensor integration in manufacturing automation

    Many people helped make the Workshop a success, but special thanks must be given to Howard Moraff for his support, and to Vicky Jackson for her efforts in making things run smoothly. Finally, thanks to Jake Aggarwal for helping to start the ball rolling.

    Humanoid Robots

    For many years, human beings have tried in every way to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not entirely satisfactory. However, with increasing technological advances grounded in theoretical and experimental research, we have managed, to some extent, to copy or imitate some systems of the human body. This research aims not only to create humanoid robots, a large part of them autonomous systems, but also to offer deeper knowledge of the systems that form the human body, with possible applications in human rehabilitation technology, bringing together studies related not only to robotics but also to biomechanics, biomimetics, cybernetics, and other areas. This book presents a series of studies inspired by this ideal, carried out by researchers worldwide, which analyze and discuss diverse subjects related to humanoid robots. The contributions explore aspects of robotic hands, learning, language, vision, and locomotion.

    Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS 1994), volume 1

    The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed out of the strong belief that America's problems of global economic competitiveness and of job creation and preservation can be partly solved by the use of intelligent robotics, which is also required for human space exploration missions. Individual sessions addressed the nuclear industry, agile manufacturing, security and building monitoring, on-orbit applications, vision and sensing technologies, situated control and low-level control, robotic systems architecture, environmental restoration and waste management, robotic remanufacturing, and healthcare applications.

    Applied Instrumentation: student works

    Contains the students' work for the course: T1. Sensors and Electronic Instrumentation in the Present and Future of Gravitational Wave Astronomy. D. Canyameres, M. Font, J. Ruiz, D. Zafra. T2. Electrical signals and their physiological significance in plants. D. Gil, O. Rovira, A. Samaniego, F. Serra, J. Vilaró. T3. Eye tracking technology and its applications. A. Acitores, J. Brieva, A. Doñate, F. Poca. T4. Non-invasive ultrasound in humans. M. López, M. Massó, G. Morales, J. Navas, H. Sama. T5. Use of an activity meter for radiation measurements. M. Llano, M. López, M. Montaña, L. Pedro-Botet, R. Prats. T6. Use of an image-acquisition system: image quality of a PET tomograph. A. Lopera, O. Parera, J. Pérez, A. Ramos, M. Tomàs, P. Villén. T7. Usage of a Tomographic Gammacamera for Image Acquisition. J. Amigó, D. Garcín, M. Isern, M. Maroto, S. Moll. T8. Instrumentation in diagnostic radiology. Use of an image-acquisition system: CT tomograph. Quality of the radiation beam parameters. G. Comas, S. Vives, L. García, A. Cortés, M. Burjalès. T9. Radiation Oncology. M. Escolà, J. González, M. Jiménez, P. Montero, A. Valenzuela. 2022/202

    Informed Data Selection For Dynamic Multi-Camera Clusters

    Traditional multi-camera systems require a fixed calibration between cameras to provide a solution at the correct scale, which places many limitations on their performance. This thesis investigates the calibration of dynamic camera clusters (DCCs), where one or more of the cluster cameras is mounted on an actuated mechanism, such as a gimbal or robotic manipulator. Our novel calibration approach parameterizes the actuated mechanism using the Denavit-Hartenberg convention, then determines the calibration parameters that allow estimation of the time-varying extrinsic transformations between the static and dynamic camera frames. A degeneracy analysis is also presented, which identifies redundant parameters of the DCC calibration system. To automate the calibration process, this thesis also presents two information-theoretic methods that select optimal calibration viewpoints using a next-best-view strategy. The first strategy minimizes the entropy of the calibration parameters, while the second selects the viewpoints that maximize the mutual information between the joint-angle input and the calibration parameters. Finally, effective selection of key-frames is an essential aspect of robust visual navigation algorithms, as it ensures metrically consistent mapping solutions while reducing the computational complexity of the bundle adjustment process. To that end, we propose two entropy-based methods that aim to insert key-frames which directly improve the system's ability to localize. The first approach inserts key-frames based on the cumulative point-entropy reduction in the existing map, while the second uses the predicted point-flow discrepancy to select key-frames that best initialize new features for the camera to track against in the future.
The DCC calibration methods are verified both in simulation and on physical hardware consisting of a 5-DOF Fanuc manipulator and a 3-DOF Aeryon Skyranger gimbal. We demonstrate that the proposed methods achieve high-quality calibrations, as measured by RMSE pixel error metrics and by analysis of the estimator covariance matrix. The key-frame insertion methods are implemented within the Multi-Camera Parallel Tracking and Mapping (MCPTAM) framework, and we confirm the effectiveness of these approaches using high-quality ground truth collected with an indoor positioning system.
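The Denavit-Hartenberg parameterization mentioned in the abstract turns each actuated link into a homogeneous transform, so the time-varying extrinsic between the static and dynamic camera frames follows by chaining the per-link transforms at the measured joint angles. A minimal pure-Python sketch (the link parameters below are illustrative, not the thesis' calibrated values):

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul(A, B):
    """4x4 homogeneous matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def extrinsic(joint_angles, dh_params):
    """Chain per-link DH transforms at the current joint angles.

    dh_params holds the calibrated (d, a, alpha) per link; theta is the
    measured joint angle, giving the time-varying extrinsic transform."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = matmul(T, dh_matrix(theta, d, a, alpha))
    return T

# Two hypothetical gimbal links: (d, a, alpha), values purely illustrative.
links = [(0.05, 0.0, math.pi / 2), (0.0, 0.1, 0.0)]
T = extrinsic([0.0, math.pi / 2], links)
```

In the thesis' setting, the (d, a, alpha) entries are the unknowns estimated during calibration, while the joint angles are commanded or measured inputs; the degeneracy analysis identifies which of these entries are redundant.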

    The Sixth Annual Workshop on Space Operations Applications and Research (SOAR 1992)

    This document contains papers presented at the Space Operations, Applications, and Research Symposium (SOAR) hosted by the U.S. Air Force (USAF) on 4-6 Aug. 1992 and held at the JSC Gilruth Recreation Center. The symposium was cosponsored by the Air Force Materiel Command and by NASA/JSC. Key technical areas covered during the symposium were robotics and telepresence, automation and intelligent systems, human factors, life sciences, and space maintenance and servicing. SOAR differed from most other conferences in that it was concerned with government-sponsored research and development relevant to aerospace operations. The symposium's proceedings include papers covering these disciplines, presented by experts from NASA, the USAF, universities, and industry.

    Design and Development of Robotic Part Assembly System under Vision Guidance

    Robots are widely used for part assembly across manufacturing industries to attain high productivity through automation. Automated mechanical part assembly contributes a major share of the production process. An appropriate vision-guided robotic assembly system further minimizes lead time and improves the quality of the end product through suitable object detection methods and robot control strategies. This work develops a robotic part assembly system with the aid of an industrial vision system, in three phases. The first phase focuses on feature extraction and object detection techniques. A hybrid edge detection method is developed by combining fuzzy inference rules with the wavelet transform. The performance of this edge detector is quantitatively analyzed and compared with widely used edge detectors such as Canny, Sobel, Prewitt, Roberts, Laplacian of Gaussian, and mathematical-morphology- and wavelet-transform-based methods. A comparative study is performed to choose a suitable corner detection method; the corner detectors considered are curvature scale space, Wang-Brady, and the Harris method. Successful implementation of a vision-guided robotic system depends on the system configuration, such as eye-in-hand or eye-to-hand. In these configurations, the captured images of the parts may be corrupted by geometric transformations such as scaling, rotation, translation, and blurring due to camera or robot motion. To address this issue, an image reconstruction method is proposed using orthogonal Zernike moment invariants. The method selects the moment order used to reconstruct the affected image, which makes object detection more efficient. In the second phase, the proposed system is developed by integrating the vision system and the robot system.
The proposed feature extraction and object detection methods are tested and found efficient for the purpose. In the third phase, robot navigation based on visual feedback is proposed. The control scheme uses general moment invariants, Legendre moments, and Zernike moment invariants. The best combination of visual features is selected by measuring the Hamming distance between all possible combinations of visual features, yielding the combination that makes image-based visual servoing control efficient. An indirect method is employed to determine the Legendre and Zernike moment invariants, as these moments are robust to noise. The control laws, based on these three global image features, perform efficiently in navigating the robot in the desired environment.
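The "general moment invariants" used as visual features above build on geometric image moments; the simplest invariance is to translation, obtained by centering the moments on the image centroid. A minimal pure-Python sketch on toy images (this is generic textbook moment machinery, not the thesis' specific feature set):

```python
def raw_moment(img, p, q):
    """Geometric moment m_pq of a grayscale image given as a 2-D list."""
    return sum((x ** p) * (y ** q) * img[y][x]
               for y in range(len(img)) for x in range(len(img[0])))

def central_moment(img, p, q):
    """Translation-invariant central moment mu_pq, centered on the centroid."""
    m00 = raw_moment(img, 0, 0)
    xc = raw_moment(img, 1, 0) / m00  # centroid x
    yc = raw_moment(img, 0, 1) / m00  # centroid y
    return sum(((x - xc) ** p) * ((y - yc) ** q) * img[y][x]
               for y in range(len(img)) for x in range(len(img[0])))

# A 2x2 uniform blob and the same blob shifted inside a larger frame:
uniform = [[1, 1], [1, 1]]
shifted = [[0, 0, 0], [0, 1, 1], [0, 1, 1]]
# central_moment gives the same mu_20 for both, unlike raw_moment.
```

Legendre and Zernike moments project the image onto orthogonal polynomial bases instead of the monomials x^p * y^q used here, which is what gives them the noise robustness the abstract cites.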