
    Detection of leukocytes stained with acridine orange using unique spectral features acquired from an image-based spectrometer

    A leukocyte differential count can be used to diagnose a myriad of blood disorders, such as infections and allergies, and to monitor the efficacy of disease treatments. In recent years, attention has focused on developing point-of-care (POC) systems to provide this test in global health settings. Acridine orange (AO) is an amphipathic, vital dye that intercalates leukocyte nucleic acids and acidic vesicles. It has been utilized by POC systems to identify the three main leukocyte subtypes: granulocytes, monocytes, and lymphocytes. Leukocyte subtypes can be characterized using a fluorescence microscope, where AO has a 450 nm excitation wavelength and two peak emission wavelengths, 525 nm (green) and 650 nm (red), depending on the cellular content and the concentration of AO in the cells. The full spectra of AO-stained leukocytes have not been fully explored for POC applications. Optical instruments, such as a spectrometer that utilizes a diffraction grating, can provide specific spectral data by separating polychromatic light into distinct wavelengths. The spectral data from this setup can be used to create object-specific emission profiles. Yellow-green and crimson microspheres were used to model the emission peaks and profiles of AO-stained leukocytes. Whole blood was collected via finger stick and stained with AO to gather preliminary leukocyte emission profiles. A MATLAB algorithm was designed to analyze the spectral data within the images acquired using the image-based spectrometer. The algorithm utilized watershed segmentation and centroid location functions to isolate independent spectra from an image. The output spectra represent the average line intensity profiles for each pixel across a slice of an object. First steps were also taken in processing video frames of manually translated microspheres. The high-speed frame rate allowed objects to appear in multiple consecutive images. A function was applied to each image cycle to identify repeating centroid locations. The yellow-green (515 nm) and crimson (645 nm) microspheres exhibited a distinct separation in colorimetric emission, with a peak-to-peak difference of 36 pixels corresponding to the 130 nm difference in peak emission. Two AO-stained leukocytes exhibited distinct spectral profiles and peaks across different wavelengths. This could be due to variations in the staining method (incubation period and concentration) affecting the emissions, or to variations in cellular content indicating different leukocyte subtypes. The algorithm was also effective at isolating unique centroids between video frames. We have demonstrated the ability to extract spectral information from data acquired with the image-based spectrometer for both microspheres, as a control, and AO-stained leukocytes. We determined that the spectral information from yellow-green and crimson microspheres could be used to represent the wavelength range of AO-stained leukocytes, thus providing a calibration tool. In addition, preliminary spectral information was successfully extracted from yellow-green microspheres translated under the linear slit using both stationary images and video frames, demonstrating the feasibility of collecting data from a large number of objects.
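    The segmentation-and-calibration pipeline described above can be sketched compactly. Below is a minimal Python illustration (the authors used MATLAB) of watershed segmentation seeded from distance-map maxima, per-object mean line intensity profiles, and a two-point pixel-to-wavelength calibration from the reported microsphere peaks; the function names, thresholds, and the assumption that each object produces a horizontal dispersion streak are ours, not the paper's.

        # Hypothetical sketch: isolate each object's dispersion streak via
        # watershed segmentation, then average its rows into one spectrum.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import threshold_otsu
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def extract_spectra(image):
            mask = image > threshold_otsu(image)           # foreground objects
            distance = ndi.distance_transform_edt(mask)
            coords = peak_local_max(distance, labels=mask, min_distance=10)
            seeds = np.zeros(image.shape, dtype=bool)
            seeds[tuple(coords.T)] = True
            markers, _ = ndi.label(seeds)                  # one marker per object
            labels = watershed(-distance, markers, mask=mask)
            spectra = []
            for sl in ndi.find_objects(labels):
                if sl is not None:
                    # Average down the rows of the object's slice: one
                    # intensity-vs-column (i.e., wavelength) profile per object.
                    spectra.append(image[sl].mean(axis=0))
            return labels, spectra

        # Two-point calibration from the microsphere peaks reported above:
        # (645 nm - 515 nm) / 36 px, roughly 3.6 nm per pixel.
        NM_PER_PIXEL = (645.0 - 515.0) / 36.0

    With the 36-pixel separation anchored to the 130 nm difference in peak emission, each profile's pixel axis can be mapped to approximate wavelengths, which is what makes the microspheres a practical calibration tool.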

    Kitting in the Wild through Online Domain Adaptation

    Technological developments call for increasing the perception and action capabilities of robots. Among other skills, vision systems are needed that can adapt to any possible change in the working conditions. Since these conditions are unpredictable, we need benchmarks that allow us to assess the generalization and robustness of our visual recognition algorithms. In this work we focus on robotic kitting in unconstrained scenarios. As a first contribution, we present a new visual dataset for the kitting task. Unlike standard object recognition datasets, we provide images of the same objects acquired under various conditions in which the camera, illumination, and background change. This novel dataset allows testing the robustness of robot visual recognition algorithms to a series of different domain shifts, both in isolation and in combination. Our second contribution is a novel online adaptation algorithm for deep models, based on batch-normalization layers, which continuously adapts a model to the current working conditions. Unlike standard domain adaptation algorithms, it does not require any image from the target domain at training time. We benchmark the performance of the algorithm on the proposed dataset, showing its ability to close the gap between the performance of a standard architecture and that of its counterpart adapted offline to the given target domain.
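    The batch-normalization idea lends itself to a compact PyTorch sketch: freeze every learned weight, but leave the BN layers free to re-estimate their running statistics from the batches seen at deployment. This is an illustrative simplification in the spirit of test-time statistics updates, not the authors' exact procedure; the toy model and the momentum value are placeholders.

        import torch
        import torch.nn as nn

        def enable_online_bn(model, momentum=0.1):
            """Freeze all weights, but let BatchNorm layers keep updating
            their running mean/variance from incoming deployment batches."""
            model.eval()                        # inference mode everywhere...
            for p in model.parameters():
                p.requires_grad_(False)         # ...and no gradient updates
            for m in model.modules():
                if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
                    m.train()                   # use and update batch statistics
                    m.momentum = momentum       # how fast stats track the domain

        # Hypothetical usage on a stream of target-domain images.
        model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
        enable_online_bn(model)
        with torch.no_grad():
            for _ in range(5):                       # stand-in for a camera feed
                batch = torch.randn(16, 3, 32, 32)   # current working conditions
                preds = model(batch)                 # BN stats adapt as we predict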

    Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation

    We address the unsupervised learning of several interconnected problems in low-level vision: single view depth prediction, camera motion estimation, optical flow, and segmentation of a video into the static scene and moving regions. Our key insight is that these four fundamental vision problems are coupled through geometric constraints. Consequently, learning to solve them together simplifies the problem because the solutions can reinforce each other. We go beyond previous work by exploiting geometry more explicitly and segmenting the scene into static and moving regions. To that end, we introduce Competitive Collaboration, a framework that facilitates the coordinated training of multiple specialized neural networks to solve complex problems. Competitive Collaboration works much like expectation-maximization, but with neural networks that act as both competitors to explain pixels that correspond to static or moving regions, and as collaborators through a moderator that assigns pixels to be either static or independently moving. Our novel method integrates all these problems in a common framework and simultaneously reasons about the segmentation of the scene into moving objects and the static background, the camera motion, depth of the static scene structure, and the optical flow of moving objects. Our model is trained without any supervision and achieves state-of-the-art performance among joint unsupervised methods on all sub-problems. Comment: CVPR 2019
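    The alternation at the heart of the framework can be summarized with a toy PyTorch skeleton. Everything below is a placeholder: the two convolutions stand in for the depth/egomotion and optical-flow networks, and the residual function stands in for the photometric reconstruction losses; only the competition/collaboration structure is the point.

        import torch
        import torch.nn as nn

        static_net = nn.Conv2d(3, 3, 3, padding=1)  # stand-in: depth + egomotion
        moving_net = nn.Conv2d(3, 3, 3, padding=1)  # stand-in: optical flow
        moderator  = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())

        opt_players = torch.optim.Adam(
            list(static_net.parameters()) + list(moving_net.parameters()), lr=1e-4)
        opt_moderator = torch.optim.Adam(moderator.parameters(), lr=1e-4)

        def residual(pred, target):
            # Placeholder for each player's photometric reconstruction error.
            return (pred - target).abs().mean(dim=1, keepdim=True)

        for step in range(100):
            frame, target = torch.randn(2, 4, 3, 64, 64)  # toy image pair

            # Competition: moderator fixed, each player explains "its" pixels.
            with torch.no_grad():
                m = moderator(frame)                      # P(pixel is static)
            loss_players = (m * residual(static_net(frame), target)
                            + (1 - m) * residual(moving_net(frame), target)).mean()
            opt_players.zero_grad(); loss_players.backward(); opt_players.step()

            # Collaboration: players fixed, moderator learns the assignment.
            with torch.no_grad():
                r_static = residual(static_net(frame), target)
                r_moving = residual(moving_net(frame), target)
            m = moderator(frame)
            loss_mod = (m * r_static + (1 - m) * r_moving).mean()
            opt_moderator.zero_grad(); loss_mod.backward(); opt_moderator.step()

    Minimizing the moderator's loss pushes its mask toward whichever player explains each pixel better, which is the EM-like division of labor the abstract describes.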

    Vehicle detection and tracking using homography-based plane rectification and particle filtering

    This paper presents a full system for vision-based vehicle detection and tracking in non-stationary settings. The proposed detection method exploits the geometrical relations between the elements in the scene so that moving objects (i.e., vehicles) can be detected by analyzing motion parallax. Namely, the homography of the road plane between successive images is computed. Most remarkably, a novel probabilistic framework based on Kalman filtering is presented for reliable and accurate homography estimation. The estimated homography is used for image alignment, which in turn allows the moving vehicles in the image to be detected. Vehicle tracking is performed with a multidimensional particle filter, which also manages the exits and entries of objects. The filter involves a mixture likelihood model that allows better adaptation of the particles to the observed measurements. The system is specifically designed for highway environments, where it has been shown to yield excellent results.
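    The alignment step can be illustrated with a short OpenCV sketch, with plain RANSAC standing in for the paper's Kalman-filtered homography estimation; the feature detector, thresholds, and function names are illustrative assumptions rather than the authors' implementation.

        import cv2
        import numpy as np

        def motion_mask(prev_gray, curr_gray, diff_thresh=30):
            """Align two road frames via the dominant-plane homography and
            flag pixels whose parallax survives the alignment."""
            orb = cv2.ORB_create(1000)
            kp1, des1 = orb.detectAndCompute(prev_gray, None)
            kp2, des2 = orb.detectAndCompute(curr_gray, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

            src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

            # Road-plane homography; RANSAC discards matches on moving objects.
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

            # Warping the previous frame aligns the road plane; off-plane
            # objects (vehicles) remain misaligned and show up in the diff.
            h, w = curr_gray.shape[:2]
            aligned = cv2.warpPerspective(prev_gray, H, (w, h))
            diff = cv2.absdiff(curr_gray, aligned)
            _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
            return mask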