42 research outputs found

    Experimental modal analysis of an automobile tire under static load


    Real-time centre detection of an OLED structure

    The research presented in this paper focuses on real-time image processing for visual servoing, i.e. positioning an x-y table using only a camera instead of encoders. A camera image stream combined with real-time image processing determines the position for the next iteration of the table controller. At a frame rate of 1000 fps, a maximum processing time of only 1 millisecond is allowed for each 80x80 pixel image. This visual servoing task is performed on an OLED (Organic Light Emitting Diode) substrate, as found in displays, with a typical feature size of 100 by 200 µm. The presented algorithm detects the centre of an OLED well with sub-pixel accuracy (1 pixel equals 4 µm; the sub-pixel accuracy is reliable up to ±1 µm) in a computation time of less than 1 millisecond.
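    The abstract does not spell out the detection algorithm, but the core idea of locating a structure with sub-pixel accuracy in an intensity image can be sketched with an intensity-weighted centroid. Everything below (blob model, function names, the ~0.01 px figure for this clean synthetic case) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def subpixel_center(image):
    """Intensity-weighted centroid of a bright structure, sub-pixel accurate."""
    img = image.astype(float)
    img -= img.min()                      # suppress the background offset
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# synthetic 80x80 frame with a Gaussian spot centred at (40.3, 39.7)
ys, xs = np.mgrid[0:80, 0:80]
frame = np.exp(-((xs - 40.3)**2 + (ys - 39.7)**2) / (2 * 3.0**2))
cx, cy = subpixel_center(frame)           # recovers the centre to ~0.01 px here
```

    A centroid over an 80x80 array is a handful of vectorized passes, which is consistent with a sub-millisecond budget at 1000 fps.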

    Distinct Responses to Menin Inhibition and Synergy with DOT1L Inhibition in KMT2A-Rearranged Acute Lymphoblastic and Myeloid Leukemia

    Pediatric acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL) exhibit favorable survival rates. However, for AML and ALL patients carrying KMT2A gene translocations, clinical outcome remains unsatisfactory. Key players in KMT2A-fusion-driven leukemogenesis include menin and DOT1L. Recently, menin inhibitors such as revumenib have garnered attention for their potential therapeutic efficacy in treating KMT2A-rearranged acute leukemias. However, resistance to menin inhibition poses challenges, and identifying which patients would benefit from revumenib treatment is crucial. Here, we investigated the in vitro response to revumenib in KMT2A-rearranged ALL and AML. While ALL samples show rapid, dose-dependent induction of leukemic cell death, AML responses are much slower and promote myeloid differentiation. Furthermore, we reveal that acquired resistance to revumenib in KMT2A-rearranged ALL cells can occur either through the acquisition of MEN1 mutations or independently of mutations in MEN1. Finally, we demonstrate significant synergy between revumenib and the DOT1L inhibitor pinometostat in KMT2A-rearranged ALL, suggesting that such drug combinations represent a potent therapeutic strategy for these patients. Collectively, our findings underscore the complexity of resistance mechanisms and advocate for precise patient stratification to optimize the use of menin inhibitors in KMT2A-rearranged acute leukemia.

    Growth of Focal Nodular Hyperplasia is Not a Reason for Surgical Intervention, but Patients Should be Referred to a Tertiary Referral Centre

    Background: When a liver lesion diagnosed as focal nodular hyperplasia (FNH) increases in size, it may cause doubt about the initial diagnosis. In many cases, additional investigations will follow to exclude hepatocellular adenoma or malignancy. This retrospective cohort study addresses the implications of growth of FNH for clinical management. Methods: We included patients diagnosed with FNH based on ≥2 imaging modalities between 2002 and 2015. Characteristics

    Dutch Robotics 2011 adult-size team description

    This document presents the 2011 edition of the team Dutch Robotics from the Netherlands. Our team gathers three Dutch technical universities, namely Delft University of Technology, Eindhoven University of Technology and the University of Twente, together with the company Philips. We contribute TUlip, an adult-size humanoid robot designed on the basis of the theory of limit cycle walking developed in our earlier research. The key insight of this theory is that stable periodic walking gaits can be achieved even without high-bandwidth robot position control. Our control approach is based on simultaneous position and force control; for accurate force control, we make use of Series Elastic Actuation. The control software of TUlip is based on Darmstadt's RoboFrame and runs on a PC104 computer with Linux Xenomai. The vision system consists of two wide-angle cameras, each interfaced with a dedicated Blackfin processor running vision algorithms, and a wireless networking interface.
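    Series Elastic Actuation, mentioned above, turns force control into a deflection-measurement problem: the torque through the joint is proportional to how far the motor has wound up the series spring. A minimal sketch of that idea follows; the stiffness value, gain and names are illustrative assumptions, not TUlip's actual parameters:

```python
# Hypothetical SEA sketch: torque is inferred from spring deflection,
# so commanding a torque reduces to positioning the motor side of the spring.
K_SPRING = 300.0  # assumed series-spring stiffness [Nm/rad]

def sea_torque(q_motor, q_joint):
    """Torque transmitted through the series spring."""
    return K_SPRING * (q_motor - q_joint)

def force_control_step(tau_desired, q_motor, q_joint, kp=0.003):
    """One proportional update of the motor setpoint so that the
    transmitted torque approaches tau_desired (stable while kp * K_SPRING < 2)."""
    tau_error = tau_desired - sea_torque(q_motor, q_joint)
    return q_motor + kp * tau_error
```

    With the joint held still, repeated calls converge the transmitted torque to the setpoint; on the real robot a loop of this kind runs alongside position control.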

    Dutch Robotics 2010 adult-size team description

    This document presents the 2010 edition of the team Dutch Robotics from the Netherlands. Our team gathers three Dutch technical universities, namely Delft University of Technology, Eindhoven University of Technology and the University of Twente, together with the company Philips. We contribute TUlip, an adult-size humanoid robot designed on the basis of the theory of limit cycle walking developed in our earlier research. The key insight of this theory is that stable periodic walking gaits can be achieved even without high-bandwidth robot position control. Our control approach is based on simultaneous position and force control; for accurate force control, we make use of Series Elastic Actuation. The control software of TUlip is based on Darmstadt's RoboFrame and runs on a PC104 computer with Linux Xenomai. The vision system consists of two wide-angle cameras, each interfaced with a dedicated Blackfin processor running vision algorithms, and a wireless networking interface.

    Direct methods for vision-based robot control : application and implementation

    With the growing interest in integrating robotics into everyday life and industry, the requirements on the quality and quantity of applications grow equally fast. This trend is particularly pronounced in applications involving visual perception. Whereas visual sensing in home environments tends to be used mainly for recognition and localization, safety becomes the driving factor for developing intelligent visual control algorithms. More specifically, a robot operating in a human environment should not collide with obstacles, and the executed motion should be as smooth as possible. Furthermore, as the environment is not known beforehand, high robustness of the visual processing is a necessity. In an industrial setting, on the other hand, the environment is known beforehand and safety is mainly guaranteed by excluding the human operator. Despite this, and despite the fact that visual servoing has gained much attention from industry as a standard solution for robotic automation tasks, applications remain highly simplified. For example, visual fault detection is already a mature technique in industrial manufacturing, where a fixed camera observes a product (e.g., on a conveyor belt) and checks whether it meets certain requirements. These operations can be executed at a fairly high rate due to the simplicity of the system (e.g., a static camera) and the simplification of the processing task (e.g., binary images). For both areas the identified difficulties are similar: foremost, the slow nature of (robust) visual processing with respect to the ever-growing demand for increasing speed and reducing delay. These two application areas with analogous limitations motivate the design of more direct approaches to vision in visual control systems.
Therefore, in order to meet the requirements for next-generation visual control systems, this thesis presents approaches that employ visual measurements as direct feedback to design constrained motion. First, for industrial robotics, obtaining the required positioning accuracy demands a highly rigid and well-designed measurement and fixation system, implying high cost and long design time. By measuring the position of objects directly with a camera, instead of indirectly via motor encoders, the requirements on the measurement and fixation system become less demanding. Moreover, this motivates the miniaturization of the complete control system. The approach is validated in experiments on a simplified 2D planar stage (i.e., considerable friction, poor fixation), which attains performance similar to encoder-based positioning systems. Second, in a human-centred environment, this direct sensing can improve traditional visual control systems when subject to certain disturbances. More specifically, a method is proposed that uses an image-based feedforward controller on top of traditional position-based visual servo control to overcome disturbances such as friction or poorly designed local motor controllers. This visual feedforward control action is only active when an image-based error is present and vanishes when that error goes to zero. The method is validated on an anthropomorphic robotic manipulator with 7 degrees of freedom, intended for operation in the human care environment. Third, sensing the product directly gives rise to designing motion directly. Whereas in traditional approaches the motion trajectory is designed offline and cannot be changed at runtime, direct trajectory generation computes the motion of the next step based on the current state and events. This means that at any instant in time, the trajectory of a motion system can be altered with respect to certain desired kinematic or dynamic constraints.
For industrial applications this makes manufacturing on near-repetitive or non-rigid structures (e.g., flexible displays) possible. When applied to a robotic manipulator, this allows obstacle avoidance to take place no longer at the path-planning level but at the trajectory-planning level, where kinematic and dynamic constraints can be taken into account. This results in motion that is smoother than when obstacle avoidance is handled by path planning. For both application areas this direct trajectory generation method is implemented and shows high flexibility in constrained motion trajectory design.
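The direct trajectory generation described above, computing only the next motion step from the current state under kinematic constraints, can be sketched as an online velocity planner. The limits, time step and target below are illustrative assumptions, not the thesis implementation:

```python
# Online trajectory generation sketch: each cycle computes only the next
# (pos, vel) toward a possibly moving target, respecting a velocity limit
# and an acceleration limit.

def next_step(pos, vel, target, v_max, a_max, dt):
    """Advance one control cycle; decelerates early enough that the
    stopping distance never exceeds the remaining distance."""
    dist = target - pos
    direction = 1.0 if dist >= 0 else -1.0
    v_brake = (2.0 * a_max * abs(dist)) ** 0.5   # max speed that can still stop
    v_des = direction * min(v_max, v_brake)
    dv = max(-a_max * dt, min(a_max * dt, v_des - vel))
    return pos + (vel + dv) * dt, vel + dv

pos, vel = 0.0, 0.0
for _ in range(2000):                     # the target could change at any cycle
    pos, vel = next_step(pos, vel, target=0.05, v_max=0.1, a_max=1.0, dt=0.001)
```

Because the target is re-read every cycle, the same loop handles a drifting product (near-repetitive structures) or an obstacle-adjusted setpoint without replanning a full path.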

    Product pattern-based camera calibration for microrobotics

    Traditional macro-scale camera calibration techniques use multiple images at multiple orientations to obtain the camera's internal and external calibration parameters. Microscopic camera calibration differs in that it is limited to a single image at a fixed orientation (parallel to the image plane) and requires a special calibration pattern. We propose a method that uses the repetitive product pattern itself as the calibration object. A parametric model of an optical microscope validates this approach by comparing results obtained with the product pattern against those from an industrial high-accuracy calibration pattern.
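    The abstract leaves out how the repetitive product pattern yields the calibration, but the central quantity is the magnification: if the physical pitch of the pattern is known, measuring its period in pixels fixes the µm-per-pixel scale. The 1-D FFT-based sketch below, including the synthetic profile and pitch value, is an illustrative assumption, not the paper's method:

```python
import numpy as np

def period_pixels(profile):
    """Dominant spatial period of a 1-D intensity profile, in pixels."""
    spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
    k = spectrum.argmax()                  # dominant non-DC frequency bin
    return len(profile) / k

# synthetic line profile: 512 pixels, pattern repeating every 16 pixels
x = np.arange(512)
profile = 1.0 + 0.5 * np.cos(2 * np.pi * x / 16)

pitch_um = 100.0                           # assumed known physical pattern pitch [um]
px = period_pixels(profile)                # -> 16.0 pixels
scale_um_per_pixel = pitch_um / px         # -> 6.25 um per pixel
```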

    High performance visual servoing for controlled μm-positioning

    The research presented in this paper focuses on high-performance visual servoing for controlled positioning on the micrometer scale. Visual servoing means that the positioning of an x-y table is performed with a camera instead of encoders: a single camera image plus image processing must determine the position for the next iteration. With a frame rate of 1000 fps, a maximum processing time of 1 millisecond is allowed for each image (132 × 45 pixels). The visual servoing task is performed on an OLED substrate, as found in displays, with a typical feature size of 80 by 220 µm. The repetitive pattern of the OLED substrate itself is used as an encoder, thus closing the control loop. We present the design choices made for the image processing algorithms and the experimental setup, as well as their limitations.
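    Using the repetitive OLED pattern itself as an encoder, as the abstract describes, amounts to tracking the phase of the pattern's fundamental spatial frequency: a lateral shift of the substrate appears as a phase shift. The 1-D sketch below is an assumed illustration of that principle, not the paper's algorithm:

```python
import numpy as np

def phase_shift(ref, cur, period):
    """Sub-pixel displacement between two profiles of a pattern whose
    period (in pixels) is known, via the phase of the fundamental bin."""
    k = len(ref) // period                          # fundamental frequency bin
    dphi = np.angle(np.fft.rfft(cur)[k]) - np.angle(np.fft.rfft(ref)[k])
    dphi = np.angle(np.exp(1j * dphi))              # wrap to (-pi, pi]
    return -dphi * period / (2 * np.pi)             # shift in pixels

x = np.arange(256)
ref = np.cos(2 * np.pi * x / 16)
cur = np.cos(2 * np.pi * (x - 2.3) / 16)            # substrate shifted by 2.3 px
dx = phase_shift(ref, cur, period=16)               # recovers 2.3
```

    Phase tracking of this kind is cheap (one FFT bin per axis), which fits a 1 millisecond budget, though it only yields position modulo one pattern period; whole periods must be counted separately by the controller.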

    Direct motion planning for vision-based control

    This paper presents direct methods for vision-based control, applied to industrial inkjet printing. Here, visual control is designed with a direct coupling between camera measurements and joint motion. Traditional visual servoing commonly has a slow visual update rate and needs an additional local joint controller to guarantee stability. By using only the product as reference and sampling at a high update rate, direct visual measurements suffice for controlled positioning. Despite the tight real-time constraints, the proposed method is simpler and more reliable than standard motor encoders. This direct visual control method is experimentally verified on a 2D planar motion stage for micrometer positioning. To achieve accurate and fast motion, a balance is found between frame rate and image size. With a frame rate of 1600 fps and an image size of 160 × 100 pixels we show the effectiveness of the approach.
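    Structurally, the direct coupling described above replaces the usual cascade (vision to setpoint to encoder loop to motor) with a single loop closed on the camera measurement. A toy simulation of that structure, with an ideal integrator stage and an illustrative gain (none of these numbers are from the paper):

```python
# Assumed single-loop structure: the camera error drives the motor command
# directly, sampled at the visual frame rate, with no inner encoder loop.
DT = 1.0 / 1600.0   # 1600 fps visual sampling
KP = 40.0           # proportional gain [1/s], illustrative

def simulate(target, steps):
    """Vision-in-the-loop P control of an ideal velocity-driven stage."""
    pos = 0.0
    for _ in range(steps):
        error = target - pos    # position error measured directly in the image
        pos += KP * error * DT  # velocity command integrated by the stage
    return pos

final = simulate(target=1.0, steps=4000)   # converges to the target
```

    Discrete-loop stability requires KP * DT < 2, so the high 1600 fps sampling is what allows a stiff loop without an inner encoder controller; at typical 25-60 Hz visual rates the gain would have to be far lower.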