30 research outputs found

    Experimental modal analysis of an automobile tire under static load

    Get PDF

    Real-time centre detection of an OLED structure

    Get PDF
    The research presented in this paper focuses on real-time image processing for visual servoing, i.e. positioning an x-y table using only a camera instead of encoders. A camera image stream combined with real-time image processing determines the position for the next iteration of the table controller. At a frame rate of 1000 fps, a maximum processing time of only 1 millisecond is allowed for each image of 80x80 pixels. This visual servoing task is performed on an OLED (Organic Light Emitting Diode) substrate, as found in displays, with a typical OLED size of 100 by 200 µm. The presented algorithm detects the centre of an OLED well with sub-pixel accuracy (1 pixel equals 4 µm; the sub-pixel estimate is reliable to within ±1 µm) and a computation time of less than 1 millisecond.
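    A minimal sketch of one common way to get sub-pixel centre estimates within such a budget is an intensity-weighted centroid over the segmented well; the threshold and the dark-well-on-bright-substrate assumption below are illustrative, not the paper's exact algorithm.

```python
# Minimal sketch of sub-pixel centre detection via an intensity-weighted
# centroid; threshold and image layout are illustrative assumptions.
import numpy as np

def detect_centre(img: np.ndarray, threshold: int = 128) -> tuple[float, float]:
    """Estimate the (x, y) centre of an OLED well in an 80x80 frame."""
    mask = (img < threshold).astype(np.float64)  # segment the dark well
    total = mask.sum()
    if total == 0:
        raise ValueError("no well found below the threshold")
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    # The weighted centroid gives a fractional-pixel estimate; at
    # 4 um/pixel, +/-0.25 px corresponds to the reported +/-1 um.
    return (xs * mask).sum() / total, (ys * mask).sum() / total
```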

    Growth of Focal Nodular Hyperplasia is Not a Reason for Surgical Intervention, but Patients Should be Referred to a Tertiary Referral Centre

    Get PDF
    Background: When a liver lesion diagnosed as focal nodular hyperplasia (FNH) increases in size, it may cause doubt about the initial diagnosis. In many cases, additional investigations will follow to exclude hepatocellular adenoma or malignancy. This retrospective cohort study addresses the implications of growth of FNH for clinical management. Methods: We included patients diagnosed with FNH based on ≥2 imaging modalities between 2002 and 2015. Characteristics…

    Dutch Robotics 2011 adult-size team description

    Get PDF
    This document presents the 2011 edition of team Dutch Robotics from The Netherlands. Our team gathers three Dutch technical universities, namely Delft University of Technology, Eindhoven University of Technology and University of Twente, and the commercial company Philips. We contribute an adult-size humanoid robot, TUlip, designed based on the theory of limit cycle walking developed in our earlier research. The key insight of this theory is that stable periodic walking gaits can be achieved even without high-bandwidth robot position control. Our control approach is based on simultaneous position and force control. For accurate force control, we make use of Series Elastic Actuation. The control software of TUlip is based on Darmstadt's RoboFrame and runs on a PC104 computer with Linux Xenomai. The vision system consists of two wide-angle cameras, each interfaced with a dedicated Blackfin processor running vision algorithms, and a wireless networking interface.
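    For context, Series Elastic Actuation places a spring between motor and link, so joint torque can be measured from the spring deflection and regulated in closed loop. The sketch below illustrates that general idea under assumptions (linear spring, PI loop, illustrative gains); it is not TUlip's actual controller.

```python
# Minimal sketch of series-elastic force control: joint torque is estimated
# from the spring deflection between motor and link, and a PI loop drives
# that estimate to the desired torque. Gains and the linear spring model
# are illustrative assumptions, not TUlip's implementation.
class SeriesElasticForceControl:
    def __init__(self, k_spring: float, kp: float, ki: float, dt: float):
        self.k = k_spring            # spring stiffness [Nm/rad]
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, tau_desired: float, q_motor: float, q_link: float) -> float:
        tau_measured = self.k * (q_motor - q_link)  # deflection -> torque
        error = tau_desired - tau_measured
        self.integral += error * self.dt
        # Returned value is a motor velocity command; an inner motor loop
        # (not shown) is assumed to track it.
        return self.kp * error + self.ki * self.integral
```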

    Dutch Robotics 2010 adult-size team description

    Get PDF
    This document presents the 2010 edition of team Dutch Robotics from The Netherlands. Our team gathers three Dutch technical universities, namely Delft University of Technology, Eindhoven University of Technology and University of Twente, and the commercial company Philips. We contribute an adult-size humanoid robot, TUlip, designed based on the theory of limit cycle walking developed in our earlier research. The key insight of this theory is that stable periodic walking gaits can be achieved even without high-bandwidth robot position control. Our control approach is based on simultaneous position and force control. For accurate force control, we make use of Series Elastic Actuation. The control software of TUlip is based on Darmstadt's RoboFrame and runs on a PC104 computer with Linux Xenomai. The vision system consists of two wide-angle cameras, each interfaced with a dedicated Blackfin processor running vision algorithms, and a wireless networking interface.

    Product pattern-based camera calibration for microrobotics

    No full text
    Traditional macro-scale camera calibration techniques use multiple images at multiple orientations to obtain the camera's internal and external calibration parameters. Microscopic camera calibration differs in that it is limited to a single image at a fixed orientation (parallel to the image plane) and requires a special calibration pattern. We propose a method that uses the repetitive product pattern itself as the calibration object. A parametric model of an optical microscope validates this approach by comparing the results from the product pattern with those from an industrial high-accuracy calibration pattern.
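    One way to picture the idea is below: the product's regular well grid stands in for a calibration target in a single fronto-parallel view, recovering magnification and per-well residuals against the ideal lattice. The grid size, pitch and the OpenCV blob-grid detector are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: calibrate a fixed-orientation microscope view from the
# repetitive product pattern (a regular grid of wells). Grid size, pitch
# and the detector choice are assumptions for illustration only.
import numpy as np
import cv2

GRID = (10, 8)    # wells per row, rows visible in the image (assumed)
PITCH_UM = 100.0  # lattice pitch on the product in micrometers (assumed)

def calibrate_from_product(img: np.ndarray):
    found, centers = cv2.findCirclesGrid(img, GRID)
    if not found:
        raise RuntimeError("product grid not detected")
    centers = centers.reshape(-1, 2)
    # Ideal planar lattice in product coordinates [um], row by row.
    ideal = np.array([[x * PITCH_UM, y * PITCH_UM]
                      for y in range(GRID[1]) for x in range(GRID[0])],
                     np.float32)
    H, _ = cv2.findHomography(ideal, centers)
    proj = cv2.perspectiveTransform(ideal.reshape(-1, 1, 2), H).reshape(-1, 2)
    # Scale of the fitted homography approximates pixels per micrometer.
    px_per_um = 0.5 * (np.linalg.norm(H[:2, 0]) + np.linalg.norm(H[:2, 1]))
    residual_px = centers - proj  # lens distortion + detection noise
    return 1.0 / px_per_um, residual_px
```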

    Direct motion planning for vision-based control

    No full text
    This paper presents direct methods for vision-based control for the application of industrial inkjet printing. Here, visual control is designed with a direct coupling between camera measurements and joint motion. Traditional visual servoing commonly has a slow visual update rate and needs an additional local joint controller to guarantee stability. By using only the product as reference and sampling at a high update rate, direct visual measurements are sufficient for controlled positioning. Despite the tight real-time constraints, the proposed method is simpler and more reliable than standard motor-encoder feedback. This direct visual control method is experimentally verified with a 2D planar motion stage for micrometer positioning. To achieve accurate and fast motion, a balance is found between frame rate and image size. With a frame rate of 1600 fps and an image size of 160 × 100 pixels we show the effectiveness of the approach.
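    The loop structure this implies can be sketched as follows; `camera`, `stage` and `locate_pattern` are hypothetical interfaces used only for illustration, and the gain and deadline handling are assumptions rather than the paper's implementation.

```python
# Minimal sketch of the direct coupling described above: every frame, the
# product itself serves as the position reference, and the measured image
# offset drives the joint command with no intermediate encoder loop.
import time

DT = 1.0 / 1600.0   # frame budget at 1600 fps: 625 microseconds
KP = 0.8            # proportional gain on the visual error (assumed)

def direct_visual_loop(camera, stage, locate_pattern, target_xy):
    while True:
        t0 = time.perf_counter()
        frame = camera.grab()              # 160 x 100 pixel image
        x, y = locate_pattern(frame)       # direct visual measurement [um]
        # Joint command computed straight from the camera measurement.
        stage.command_velocity(KP * (target_xy[0] - x),
                               KP * (target_xy[1] - y))
        if time.perf_counter() - t0 > DT:
            # A missed deadline breaks the real-time stability argument.
            raise RuntimeError("missed the 625 us frame deadline")
```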

    Feed forward visual servoing for object exploration

    No full text
    A new visual servoing method is proposed which uses position-based visual servoing (PBVS) in combination with an additional image-based control layer on the target pose to maintain fixation on an object. The proposed method (denoted feed-forward PBVS) does not require trajectory generation but instead uses via-points to explore the object. It exploits the advantages of PBVS without the disadvantages of image-based visual servoing (IBVS) that occur in hybrid approaches. The proposed method is experimentally validated with a redundant 7-DOF manipulator. A comparison with existing visual servoing methods (PBVS and one partitioned approach) shows the effectiveness of the method.
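    A single control step of this scheme might look like the sketch below: a position-based term drives the camera toward the next via-point while an image-based layer keeps the object centred. The pose and feature representations and the gains are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of one feed-forward PBVS step: PBVS toward a via-point
# plus an image-based fixation correction. Representations and gains are
# assumptions for illustration only.
import numpy as np

LAMBDA = 1.0  # PBVS convergence gain (assumed)
K_FIX = 0.5   # gain of the image-based fixation layer (assumed)

def feedforward_pbvs_step(pose_cam, via_point, feature_offset_px):
    """pose_cam, via_point: camera position (x, y, z) in the object frame;
    feature_offset_px: object offset (u, v) from the image centre."""
    # Position-based term: exponential convergence toward the via-point,
    # so no full trajectory needs to be generated.
    v_pbvs = LAMBDA * (np.asarray(via_point) - np.asarray(pose_cam))
    # Image-based layer: pan/tilt rate that recentres the object,
    # maintaining fixation while the via-points are traversed.
    w_fix = -K_FIX * np.asarray(feature_offset_px)
    return v_pbvs, w_fix
```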

    1000 fps visual servoing on the reconfigurable wide SIMD processor

    No full text
    Visual servoing has been proven to obtain better performance than encoders at comparable cost. However, the often computationally intensive vision algorithms and the ever-growing demand for higher frame rates make its realization very challenging. This paper demonstrates the feasibility of achieving high-frame-rate visual servoing applications on wide Single-Instruction-Multiple-Data (SIMD) processors. An industrial application, Organic-Light-Emitting-Diode (OLED) substrate localization, is chosen as a typical example. First, we propose an improved vision pipeline for repetitive structure localization, which is more robust and also better suited to embedded processors and FPGA/ASIC realization. Then, a highly efficient SIMD processor for visual servoing is proposed. The number of processing elements (PEs) in this SIMD processor is dynamically reconfigurable, which (i) enables efficient processing for different vector lengths; (ii) can easily adjust to various performance requirements; and (iii) reduces energy consumption. For input frames of 120 × 45 resolution, the proposed vision pipeline can process a frame within 275 µs, sufficient to meet the throughput requirement of 1000 fps with a latency of less than 1 ms for the whole visual servoing system. Compared to the reference realization on MicroBlaze, the proposed SIMD processor achieves an 18× performance improvement.
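    The timing claim is easy to verify, and the PE-width selection can be sketched alongside it; the set of available widths below is an assumption used only to illustrate the reconfigurable-SIMD idea.

```python
# Worked timing budget behind the numbers above: at 1000 fps each frame
# has a 1000 us period, so a 275 us pipeline meets throughput and keeps
# end-to-end latency under 1 ms.
FRAME_RATE = 1000                   # target frames per second
PROCESS_US = 275                    # reported pipeline time per 120 x 45 frame
PERIOD_US = 1_000_000 / FRAME_RATE  # 1000 us per frame

assert PROCESS_US < PERIOD_US       # throughput requirement satisfied

def pick_pe_width(vector_len: int, widths=(16, 32, 64, 128)) -> int:
    """Pick the smallest PE width covering one image row in a single pass,
    so unused lanes can be powered down to save energy (widths assumed)."""
    for w in widths:
        if w >= vector_len:
            return w
    return max(widths)  # otherwise process the row in several passes

print(pick_pe_width(120))  # -> 128 lanes for 120-pixel rows
```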

    Proof of concept of a projection-based safety system for human-robot collaborative engine assembly

    Get PDF
    In the past years human-robot collaboration has gained interest in industry and production environments. While there is interest in the topic, there is a lack of industrially relevant cases utilizing novel methods and technologies. The feasibility of the implementation, worker safety and production efficiency are the key questions in the field. The aim of the proposed work is to provide a conceptual safety system for context-dependent, multi-modal communication in human-robot collaborative assembly, which will contribute to the safety and efficiency of the collaboration. The approach we propose offers an addition to traditional interfaces such as push buttons installed at fixed locations. We demonstrate the approach and a corresponding technical implementation of the system with projected safety zones based on a dynamically updated depth map and a graphical user interface (GUI). The proposed interaction is a simplified two-way communication between the human and the robot that allows both parties to notify each other and the human to coordinate the operations.
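    The depth-map side of such a system can be illustrated as below: an overhead depth sensor yields an occupancy mask, which is checked against the currently projected zone. Thresholds, sensor geometry and the stop policy are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a depth-map-driven safety zone: mark cells where
# something rises above the calibrated floor, then stop (or slow) the
# robot when occupancy overlaps the projected zone. All parameters are
# illustrative assumptions.
import numpy as np

def safety_zone_mask(depth_m: np.ndarray, floor_m: float,
                     min_height_m: float = 0.1) -> np.ndarray:
    """depth_m: top-down depth map [m] from an overhead sensor;
    floor_m: calibrated sensor-to-floor distance for the empty cell."""
    return depth_m < (floor_m - min_height_m)  # occupied cells

def robot_should_stop(occupied: np.ndarray, zone: np.ndarray) -> bool:
    # The zone mask corresponds to the region currently projected on the
    # floor; any overlap with dynamic occupancy triggers the safety stop.
    return bool(np.any(occupied & zone))
```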