3,371 research outputs found

    Application of advanced technology to space automation

    Automated operations in space provide the key to optimized mission design and data acquisition at minimum cost for future missions. The results of this study strongly reinforce this statement and should provide further incentive for immediate development of the specific automation technology defined herein. Essential automation technology requirements were identified for future programs. The study was undertaken to address the future role of automation in the space program, the potential benefits to be derived, and the technology efforts that should be directed toward obtaining these benefits.

    Towards High-Frequency Tracking and Fast Edge-Aware Optimization

    This dissertation advances the state of the art in AR/VR tracking systems by increasing the tracking frequency by orders of magnitude, and proposes an efficient algorithm for edge-aware optimization. AR/VR is a natural way of interacting with computers, where the physical and digital worlds coexist. We are on the cusp of a radical change in how humans perform and interact with computing. Humans are sensitive to small misalignments between the real and the virtual world, so tracking at kilohertz frequencies becomes essential. Current vision-based systems fall short, as their tracking frequency is implicitly limited by the frame rate of the camera. This thesis presents a prototype system that can track at frequencies orders of magnitude higher than state-of-the-art methods using multiple commodity cameras. The proposed system exploits characteristics of the camera traditionally considered flaws, namely rolling shutter and radial distortion. The experimental evaluation shows the effectiveness of the method for various degrees of motion. Furthermore, edge-aware optimization is an indispensable tool in the computer vision arsenal for accurate filtering of depth data and image-based rendering, which is increasingly being used for content creation and geometry processing for AR/VR. As applications increasingly demand higher resolution and speed, there is a need for methods that scale accordingly. This dissertation proposes such an edge-aware optimization framework, which is efficient, accurate, and algorithmically scalable, a combination of desirable traits not found jointly in the state of the art. The experiments show the effectiveness of the framework in a multitude of computer vision tasks such as computational photography and stereo. Comment: PhD thesis
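    The key idea behind tracking above the camera's frame rate is that a rolling-shutter sensor exposes each image row at a slightly different time, so per-row observations can be treated as individual timestamped measurements. The following is a minimal sketch of that idea only, not the dissertation's system; the frame rate, readout time, and row count are illustrative assumptions.

```python
# Minimal sketch: treat each rolling-shutter row as its own timestamped
# measurement, so a 30 fps sensor yields thousands of measurements per second.
import numpy as np

FRAME_RATE = 30.0          # frames per second (assumed)
READOUT_TIME = 1.0 / 32.0  # seconds to scan all rows (assumed)
NUM_ROWS = 480             # sensor rows (assumed)

def row_timestamp(frame_index, row):
    """Time at which a given row of a given frame was exposed."""
    frame_start = frame_index / FRAME_RATE
    return frame_start + (row / NUM_ROWS) * READOUT_TIME

def high_rate_updates(detections):
    """detections: iterable of (frame_index, row, measured_x).
    Yields (timestamp, measurement) pairs at sub-frame granularity."""
    for frame_index, row, x in detections:
        yield row_timestamp(frame_index, row), x

# Three features seen on different rows of frame 0 already give three
# distinct measurement times instead of one per frame.
dets = [(0, 10, 0.31), (0, 240, 0.29), (0, 470, 0.27)]
for t, x in high_rate_updates(dets):
    print(f"t = {t * 1e3:.2f} ms, x = {x}")
```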

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, followed by a brief discussion of why cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review innovative applications that have combined eye tracking and haptic devices to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production and to explore possible areas for new applications.

    Camera Marker Networks for Pose Estimation and Scene Understanding in Construction Automation and Robotics.

    The construction industry faces challenges that include high rates of workplace injuries and fatalities, stagnant productivity, and skill shortages. Automation and Robotics in Construction (ARC) has been proposed in the literature as a potential solution that makes machinery easier to collaborate with, facilitates better decision-making, or enables autonomous behavior. However, there are two primary technical challenges in ARC: 1) unstructured and featureless environments; and 2) differences between the as-designed and the as-built. It is therefore impossible to directly replicate on construction sites the conventional automation methods adopted in industries such as manufacturing. In particular, two fundamental problems, pose estimation and scene understanding, must be addressed to realize the full potential of ARC. This dissertation proposes a pose estimation and scene understanding framework that addresses the identified research gaps by exploiting cameras, markers, and planar structures to mitigate these technical challenges. A fast plane extraction algorithm is developed for efficient modeling and understanding of built environments. A marker registration algorithm is designed for robust, accurate, cost-efficient, and rapidly reconfigurable pose estimation in unstructured and featureless environments. Camera marker networks are then established for unified and systematic design, estimation, and uncertainty analysis in larger-scale applications. The proposed algorithms' efficiency has been validated through comprehensive experiments. Specifically, the speed, accuracy, and robustness of the fast plane extraction and the marker registration have been demonstrated to be superior to existing state-of-the-art algorithms. These algorithms have also been implemented in two groups of ARC applications to demonstrate the proposed framework's effectiveness, wherein the applications themselves have significant social and economic value. The first group relates to in-situ robotic machinery, including an autonomous manipulator for assembling digital architecture designs on construction sites to help improve productivity and quality, and an intelligent guidance and monitoring system for articulated machinery such as excavators to help improve safety. The second group emphasizes human-machine interaction to make ARC more effective, including a mobile Building Information Modeling and way-finding platform with discrete location recognition to increase indoor facility management efficiency, and a 3D scanning and modeling solution for rapid and cost-efficient dimension checking and concise as-built modeling. PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113481/1/cforrest_1.pd
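    Marker-based pose estimation of the kind described above ultimately reduces to solving a perspective-n-point problem between a marker's known corner geometry and its detected image corners. The following is a minimal sketch of that step, not the dissertation's registration algorithm; the marker size, camera intrinsics, and corner pixel coordinates are placeholders, and a real pipeline would obtain the corners from a fiducial detector such as OpenCV's ArUco module.

```python
# Minimal sketch: recover the camera-to-marker pose from one square marker.
import numpy as np
import cv2

MARKER_SIZE = 0.20  # marker edge length in metres (assumed)

# 3D corner coordinates of a square marker in its own plane (z = 0),
# ordered top-left, top-right, bottom-right, bottom-left.
object_points = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float64)

# Pixel locations of the same corners as seen by the camera (placeholder values).
image_points = np.array([
    [310.0, 215.0],
    [415.0, 218.0],
    [412.0, 322.0],
    [308.0, 319.0],
], dtype=np.float64)

# Pinhole intrinsics (placeholder calibration) and negligible lens distortion.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation from marker frame to camera frame
    print("Marker position in camera frame:", tvec.ravel())
```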

    Sense and avoid using hybrid convolutional and recurrent neural networks

    This work develops a Sense and Avoid strategy based on a deep learning approach for UAVs that sense the environment with a single electro-optical camera. Hybrid Convolutional and Recurrent Neural Networks (CRNN) are used for object detection, classification, and tracking, whereas an Extended Kalman Filter (EKF) is considered for relative range estimation. Probabilistic conflict detection and geometric avoidance trajectory generation form the last stage of this technique. The results show that the considered deep learning approach can work faster than other state-of-the-art computer vision methods. They also show that collisions can be successfully avoided, with design parameters that can be adjusted to adapt to different scenarios.
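    A common building block for the conflict-detection stage mentioned above is a closest-point-of-approach check on the estimated relative state of the intruder. The sketch below illustrates that geometric test only; it is not the paper's method, and the safety radius, look-ahead horizon, and example states are assumed values.

```python
# Minimal sketch: predict whether an intruder will violate a safety volume,
# assuming constant relative velocity over a short look-ahead horizon.
import numpy as np

SAFETY_RADIUS = 50.0  # metres (assumed)
LOOKAHEAD = 30.0      # seconds (assumed)

def closest_point_of_approach(rel_pos, rel_vel):
    """Return (time of closest approach, miss distance) for constant velocities."""
    rel_pos, rel_vel = np.asarray(rel_pos, float), np.asarray(rel_vel, float)
    v2 = rel_vel @ rel_vel
    t_cpa = 0.0 if v2 < 1e-9 else max(0.0, -(rel_pos @ rel_vel) / v2)
    miss = np.linalg.norm(rel_pos + t_cpa * rel_vel)
    return t_cpa, miss

def conflict_predicted(rel_pos, rel_vel):
    t_cpa, miss = closest_point_of_approach(rel_pos, rel_vel)
    return t_cpa <= LOOKAHEAD and miss <= SAFETY_RADIUS

# Intruder 400 m ahead with a 10 m lateral offset, closing at 20 m/s.
print(conflict_predicted([400.0, 10.0, 0.0], [-20.0, 0.0, 0.0]))  # True
```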

    A framework for flexible integration in robotics and its applications for calibration and error compensation

    Robotics has been considered a viable automation solution for the aerospace industry to address manufacturing cost. Many existing robot systems augmented with guidance from a large-volume metrology system have proved to meet the high dimensional accuracy requirements of aero-structure assembly. However, they have mainly been deployed as costly and dedicated systems, which might not be ideal for aerospace manufacturing, with its low production rates and long cycle times. The work described in this thesis provides technical solutions to improve the flexibility and cost-efficiency of such metrology-integrated robot systems. To address flexibility, a software framework that supports reconfigurable system integration is developed. The framework provides a design methodology to compose distributed software components which can be integrated dynamically at runtime. This allows the automation devices (robots, metrology, actuators, etc.) controlled by these software components to be assembled on demand for various assembly applications. To reduce the cost of deployment, this thesis proposes a two-stage error compensation scheme for industrial robots that requires only intermittent metrology input, thus allowing one expensive metrology system to be shared by a number of robots. Robot calibration is employed in the first stage to remove the majority of the robot's inaccuracy; the metrology system then corrects the residual errors. In this work, a new calibration model for serial robots with a parallelogram linkage is developed that takes into account both geometric errors and joint deflections induced by the link masses and the weight of the end-effector. Experiments are conducted to evaluate the two pieces of work presented above. The proposed framework is adopted to create a distributed control system that implements calibration and error compensation for a large industrial robot with a parallelogram linkage. The control system is formed by hot-plugging together the control applications of the robot and the metrology system. Experimental results show that the developed error model was able to improve the positional accuracy of the loaded robot from several millimetres to less than one millimetre, and to halve the time previously required to correct the errors using the metrology system alone. The experiments also demonstrate the capability of sharing one metrology system among multiple robots.
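    The calibration stage described above is, at its core, a nonlinear least-squares identification of kinematic error parameters from externally measured tool positions. The sketch below shows that pattern on a deliberately simple planar two-link arm with joint-offset errors and synthetic measurements; it is far simpler than the parallelogram-linkage model developed in the thesis and all quantities are illustrative assumptions.

```python
# Minimal sketch: identify joint-offset errors by least squares from
# "metrology" measurements of the tool position.
import numpy as np
from scipy.optimize import least_squares

L1, L2 = 1.2, 0.9  # nominal link lengths in metres (assumed)

def forward_kinematics(q, offsets):
    """Tool position of a planar 2R arm with joint-offset errors."""
    q1, q2 = q + offsets
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.array([x, y])

def residuals(offsets, joint_sets, measured):
    return np.concatenate([
        forward_kinematics(q, offsets) - m for q, m in zip(joint_sets, measured)
    ])

# Synthetic measurements generated with a known offset error of [0.01, -0.02] rad.
true_offsets = np.array([0.01, -0.02])
joint_sets = [np.array([a, b])
              for a in np.linspace(0.0, 1.2, 5)
              for b in np.linspace(0.3, 1.5, 5)]
measured = [forward_kinematics(q, true_offsets) for q in joint_sets]

fit = least_squares(residuals, x0=np.zeros(2), args=(joint_sets, measured))
print("identified joint offsets [rad]:", fit.x)  # ~ [0.01, -0.02]
```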

    A Multi-Robot Cooperation Framework for Sewing Personalized Stent Grafts

    This paper presents a multi-robot system for manufacturing personalized medical stent grafts. The proposed system adopts a modular design, which includes: a (personalized) mandrel module, a bimanual sewing module, and a vision module. The mandrel module incorporates the personalized geometry of patients, while the bimanual sewing module adopts a learning-by-demonstration approach to transfer human hand-sewing skills to the robots. The human demonstrations were first observed by the vision module and then encoded using a statistical model to generate the reference motion trajectories. During autonomous robot sewing, the vision module plays the role of coordinating multi-robot collaboration. Experimental results show that the robots can adapt to generalized stent designs. The proposed system can also be used for other manipulation tasks, especially for flexible production of customized products and where bimanual or multi-robot cooperation is required. Comment: 10 pages, 12 figures, accepted by IEEE Transactions on Industrial Informatics. Keywords: modularity, medical device customization, multi-robot system, robot learning, visual servoing, robot sewing
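    One standard way to encode demonstrations with a statistical model and regress a reference trajectory, as described above, is Gaussian mixture regression: fit a joint Gaussian mixture over (time, position) samples from several demonstrations, then condition on time. The sketch below illustrates that generic technique, not the paper's specific model; the 1-D synthetic demonstrations and component count are assumptions.

```python
# Minimal sketch: encode several noisy demonstrations with a GMM and recover a
# smoothed reference trajectory by conditioning on time (Gaussian mixture regression).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)

# Three noisy demonstrations of the same 1-D motion (synthetic).
demos = [np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size) for _ in range(3)]
data = np.column_stack([np.tile(t, 3), np.concatenate(demos)])  # columns: [time, pos]

gmm = GaussianMixture(n_components=6, covariance_type="full", random_state=0).fit(data)

def gmr(query_times):
    """Expected position given time, under the fitted joint GMM."""
    out = np.zeros_like(query_times)
    for i, tq in enumerate(query_times):
        # Responsibility of each component for this time (constants cancel on normalization).
        w = np.array([
            gmm.weights_[k]
            * np.exp(-0.5 * (tq - gmm.means_[k, 0]) ** 2 / gmm.covariances_[k, 0, 0])
            / np.sqrt(gmm.covariances_[k, 0, 0])
            for k in range(gmm.n_components)
        ])
        w /= w.sum()
        # Conditional mean of position given time, per component.
        cond = [gmm.means_[k, 1]
                + gmm.covariances_[k, 1, 0] / gmm.covariances_[k, 0, 0]
                * (tq - gmm.means_[k, 0])
                for k in range(gmm.n_components)]
        out[i] = w @ np.array(cond)
    return out

reference = gmr(t)  # smoothed reference motion for the robot controller
print(reference[:5])
```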