
    Real-time delay-multiply-and-sum beamforming with coherence factor for in vivo clinical photoacoustic imaging of humans

    In clinical photoacoustic (PA) imaging, ultrasound (US) array transducers are typically used to provide B-mode images in real time. To form a B-mode image, the delay-and-sum (DAS) beamforming algorithm is most commonly used because of its ease of implementation. However, it suffers from low image resolution and low contrast. To address this, the delay-multiply-and-sum (DMAS) beamforming algorithm was developed; it provides enhanced image quality with higher contrast and a narrower main lobe than DAS, but its imaging speed limits clinical applications. In this paper, we present an enhanced real-time DMAS algorithm with a modified coherence factor (CF) for clinical PA imaging of humans in vivo. Our algorithm improves the lateral resolution and signal-to-noise ratio (SNR) of the original DMAS beamformer by suppressing background noise and side lobes using the coherence of the received signals. We optimized the computations of the proposed DMAS with CF (DMAS-CF) to achieve real-time frame rates on a graphics processing unit (GPU). To evaluate the proposed algorithm, we implemented DAS and DMAS with and without CF on a clinical US/PA imaging system and quantitatively assessed their processing speed and image quality. The processing time to reconstruct one B-mode image using the DAS, DAS with CF (DAS-CF), DMAS, and DMAS-CF algorithms was 7.5, 7.6, 11.1, and 11.3 ms, respectively, all achieving real-time imaging frame rates. In terms of image quality, the proposed DMAS-CF algorithm improved the lateral resolution and SNR by 55.4% and 93.6 dB, respectively, compared to the DAS algorithm in phantom imaging experiments. We believe the proposed DMAS-CF algorithm and its real-time implementation contribute significantly to improving the image quality of clinical US/PA imaging systems.
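    The combination described above, pairwise delay-multiply-and-sum weighted by a coherence factor, can be sketched for a single pixel as follows. The function name, array shapes, and the per-pixel formulation are illustrative assumptions; the paper's optimized GPU implementation is not reproduced here.

    ```python
    import numpy as np

    def dmas_cf(delayed, eps=1e-12):
        """DMAS output with coherence-factor weighting for one pixel.

        `delayed` holds the per-channel samples after receive delays have
        been applied (shape: n_channels). Illustrative sketch only.
        """
        n = delayed.size
        # Signed square root preserves polarity while keeping the
        # products of pairs dimensionally consistent with the input.
        s = np.sign(delayed) * np.sqrt(np.abs(delayed))
        # DMAS: sum of all pairwise products, via the identity
        # sum_{i<j} s_i s_j = ((sum s)^2 - sum s^2) / 2.
        total = s.sum()
        dmas = 0.5 * (total**2 - (s**2).sum())
        # Coherence factor: coherent-to-incoherent energy ratio across
        # channels; near 1 for aligned signals, near 0 for noise.
        cf = (delayed.sum()**2) / (n * (delayed**2).sum() + eps)
        return dmas * cf
    ```

    Perfectly coherent channels keep the full DMAS output (CF ≈ 1), while incoherent channels are suppressed toward zero, which is the mechanism the abstract credits for the side-lobe and noise reduction.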

    Coagulation time detection by means of a real-time image processing

    Several techniques for semi-automatic or automatic detection of the coagulation time in blood or plasma analysis are available in the literature. However, these techniques are either complex and demand specialized equipment, or allow the analysis of only very few samples in parallel. In this paper a new system based on computer vision is presented. A simple image processing algorithm has been developed that leads to an accurate estimation of the coagulation time of several samples in parallel. The estimation can be performed in real time using a transputer architecture supported by a PC.
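    One simple way such a vision-based estimator can work is to track inter-frame activity in each sample's region of interest and report the time at which the activity settles (the clot stops moving). The criterion, function name, and threshold below are assumptions for illustration, not the paper's exact algorithm.

    ```python
    import numpy as np

    def coagulation_time(frames, times, roi, threshold=1.0):
        """Estimate coagulation time for one sample well.

        `frames` is a sequence of grayscale images, `times` the matching
        acquisition timestamps, and `roi` a (row-slice, col-slice) pair
        selecting one sample. Illustrative sketch under the assumption
        that coagulation shows up as vanishing inter-frame change.
        """
        activity = []
        prev = frames[0][roi].astype(float)
        for f in frames[1:]:
            cur = f[roi].astype(float)
            # Mean absolute inter-frame difference inside the ROI.
            activity.append(np.abs(cur - prev).mean())
            prev = cur
        # Coagulation time: first timestamp where activity drops
        # below the threshold (sample has stopped changing).
        for k, a in enumerate(activity):
            if a < threshold:
                return times[k + 1]
        return None
    ```

    Because each sample is just a different `roi` over the same frames, many samples can be analysed in parallel from one camera, which matches the throughput advantage the abstract claims.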

    Advantages of 3D time-of-flight range imaging cameras in machine vision applications

    Machine vision using image processing of traditional intensity images is in widespread use. In many situations, environmental conditions or object colours or shades cannot be controlled, leading to difficulties in correctly processing the images and requiring complicated processing algorithms. Many of these complications can be avoided by using range image data instead of intensity data, because range image data represent the physical properties of object location and shape, practically independently of object colour or shading. The advantages of range image processing are presented, along with three example applications that show how robust machine vision results can be obtained with relatively simple range image processing in real-time applications.

    Low-level processing for real-time image analysis

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map, and a microprocessor integrated into the system clusters the edges and represents them as chain codes. Image statistics, useful for higher-level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real-time image analysis that uses this system is given.
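    The two stages described above, an edge map followed by chain-code encoding, can be sketched in software. The Sobel operator stands in for the pipeline hardware, and the chain coder assumes an already-ordered 8-connected pixel path; both are illustrative assumptions rather than the system's actual algorithms.

    ```python
    import numpy as np

    # 8-neighbour (row, col) offsets indexed by Freeman chain-code
    # direction 0..7 (0 = east, counter-clockwise).
    DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
            (0, -1), (1, -1), (1, 0), (1, 1)]

    def sobel_edges(img, thresh=100.0):
        """Binary edge map from gradient magnitude (software stand-in
        for the paper's pipeline processor)."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
        ky = kx.T
        h, w = img.shape
        gx = np.zeros((h, w))
        gy = np.zeros((h, w))
        for r in range(1, h - 1):
            for c in range(1, w - 1):
                win = img[r - 1:r + 2, c - 1:c + 2]
                gx[r, c] = (win * kx).sum()
                gy[r, c] = (win * ky).sum()
        return np.hypot(gx, gy) > thresh

    def chain_code(points):
        """Freeman chain code of an ordered 8-connected pixel path."""
        return [DIRS.index((r1 - r0, c1 - c0))
                for (r0, c0), (r1, c1) in zip(points, points[1:])]
    ```

    Encoding an outline as one start point plus a list of 3-bit direction codes is what makes the representation compact enough for a microprocessor to handle in real time.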

    Optical processing for distributed sensors in control of flexible spacecraft

    The recent potential of distributed image processing is discussed, with emphasis on applications in the control of flexible spacecraft. Devices are currently being developed at NASA and in universities and industry that allow the real-time processing of holographic images. Within five years, it is expected that holographic images may be added or subtracted in real time at optical accuracy. Images are stored and processed in crystal media; the accuracy of their storage and processing is dictated by the grating level of laser holograms and is far greater than that achievable using current analog-to-digital, pixel-oriented image digitizing and computing techniques. Processors using image-processing algebra can conceptually be designed to mechanize Fourier transforms, least-squares lattice filters, and other complex control-system operations. Thus, actuator command inputs derived from complex control laws involving distributed holographic images can be generated by such an image processor. Plans are outlined for the development of a Conjugate Optics Processor for control of a flexible object.

    Synthetic generation of address-events for real-time image processing

    Address-event representation (AER) is a communication protocol that emulates the way neurons in the nervous system communicate, and is typically used for transferring images between chips. It was originally developed for bio-inspired, real-time image processing systems. Such systems may consist of a complicated hierarchical structure with many chips that transmit images among themselves in real time while performing some processing. In this paper, several software methods for generating AER streams from images stored in a computer's memory are presented. A hardware version that works in real time is also being studied. All of them have been evaluated and compared.
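    One common family of software methods for this task is rate-coded random generation: pixel addresses are emitted with probability proportional to intensity, so brighter pixels fire more events. The sketch below illustrates that idea only; the function name and event format are assumptions, not the authors' implementations.

    ```python
    import numpy as np

    def image_to_aer(img, n_events, rng=None):
        """Generate a synthetic AER stream from a grayscale image.

        Each event is a pixel address (x, y); pixels are sampled with
        probability proportional to intensity, an illustrative sketch
        of one rate-coded generation method.
        """
        rng = np.random.default_rng() if rng is None else rng
        flat = img.astype(float).ravel()
        p = flat / flat.sum()          # intensity-proportional firing rates
        idx = rng.choice(flat.size, size=n_events, p=p)
        ys, xs = np.unravel_index(idx, img.shape)
        return list(zip(xs.tolist(), ys.tolist()))
    ```

    The choice of generation method matters because the temporal ordering of addresses determines how faithfully a receiving chip can reconstruct the image from a short window of events.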

    Hierarchical stack filtering: a bitplane-based algorithm for massively parallel processors

    With the development of novel parallel architectures for image processing, the implementation of well-known image operators needs to be reformulated to take advantage of the so-called massive parallelism. In this work, we propose a general algorithm that implements a large class of nonlinear filters, called stack filters, with a 2D-array processor. The proposed method consists of decomposing an image into bitplanes with the bitwise decomposition and then processing every bitplane hierarchically. The filtered image is reconstructed by simply stacking the filtered bitplanes according to their order of significance. Owing to its hierarchical structure, our algorithm allows us to trade off image quality against processing time and to significantly reduce the computation time of low-entropy images. Experimental tests also show that the processing time of our method is substantially lower than that of classical methods when using large structuring elements. All these features are of interest to a variety of real-time applications based on morphological operations, such as video segmentation and video enhancement.
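    The stack-filter principle underlying this work can be shown with the classical threshold decomposition: filter each binary threshold slice with a positive Boolean function (here, the binary median) and sum the filtered slices to recover the grayscale result. This sketch shows only that underlying identity; the paper's bitplane-based hierarchy is an optimisation of it and is not reproduced here.

    ```python
    import numpy as np

    def stack_median(img, radius=1):
        """Grayscale median filter computed as a stack filter.

        Decomposes `img` into binary threshold slices, applies the
        binary median (majority vote) to each slice, and stacks the
        results. Borders are left unfiltered for brevity.
        """
        levels = int(img.max())
        h, w = img.shape
        k = 2 * radius + 1
        out = np.zeros((h, w), dtype=int)
        for t in range(1, levels + 1):
            bin_slice = (img >= t).astype(int)
            filt = np.zeros((h, w), dtype=int)
            for r in range(radius, h - radius):
                for c in range(radius, w - radius):
                    win = bin_slice[r - radius:r + radius + 1,
                                    c - radius:c + radius + 1]
                    # Binary median: output 1 iff a majority of the
                    # window is 1.
                    filt[r, c] = int(2 * win.sum() > k * k)
            out += filt   # stacking: sum of filtered binary slices
        return out
    ```

    Replacing the threshold slices with bitplanes, as the paper does, cuts the number of binary passes from the number of gray levels to the number of bits, which is where the reported speed-up on low-entropy images comes from.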

    Azimuth correlator for real-time synthetic aperture radar image processing

    An azimuth correlator architecture is defined in which a number of serial range-line buffer memories are cascaded such that the output stages of all buffer memories together form a complete and unique range bin in the azimuthal dimension at any given time. A range bin is automatically read out of the last stages of the registers, in parallel, on a range-line, sample-by-sample basis for subsequent range-migration correction and correlation. Range-migration correction is performed on the range bins by effectively varying the length of a delay register at the output of each range-line buffer memory. The corrected range-bin output from the delay registers is then correlated with a Doppler reference function to form an image element in real time.