14 research outputs found

    Motor starting with shunt capacitors: An alternate approach to voltage dip control

    Induction motors are known to cause undesirable voltage dips because of their high inrush current during starting, especially when fed from weak AC systems. For this reason, large motors are often started at reduced voltage. This thesis proposes full-voltage motor starting with shunt capacitors. Two types of capacitors are used: a power factor correction capacitor and a start capacitor. The start capacitor is sized to maximize the input impedance during starting in order to reduce the initial inrush current. The analysis shows that shunt capacitors significantly reduce the voltage dip and improve motor acceleration, but introduce some waveform distortion during each starting period. The start capacitor is found to be very effective for voltage control, but additional components, such as a damping resistor, must be added to effectively reduce the waveform distortion. A centrifugal switch replaces the start capacitor and damping resistor with a power factor correction capacitor once the motor reaches a predetermined speed. The feasibility of the proposed scheme is demonstrated through a mathematical model and associated computer simulations, and the simulation results are verified by laboratory experiments.
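The central sizing idea, choosing the start capacitor that maximizes the input impedance seen by the line during starting, can be sketched numerically. The locked-rotor impedance `Z_LR`, the 60 Hz supply, and the capacitor sweep range below are hypothetical illustration values, not figures from the thesis:

```python
import numpy as np

def parallel_impedance(z_motor, c_farads, f_hz=60.0):
    """|Z_in| of a shunt capacitor in parallel with the motor's
    locked-rotor impedance at the supply frequency."""
    z_cap = 1.0 / (1j * 2.0 * np.pi * f_hz * c_farads)
    return abs(z_motor * z_cap / (z_motor + z_cap))

# Hypothetical locked-rotor impedance (ohms): low magnitude, strongly inductive.
Z_LR = 0.5 + 2.0j

# Sweep candidate start-capacitor values and keep the one that
# maximizes the input impedance (and so minimizes inrush current).
candidates = np.linspace(100e-6, 5000e-6, 2000)
best_c = candidates[int(np.argmax([parallel_impedance(Z_LR, c) for c in candidates]))]
```

The maximum occurs near parallel resonance, where the capacitor's susceptance cancels the motor's inductive susceptance; for the values above that is roughly 1.25 mF, raising the input impedance magnitude from about 2.1 Ω to about 8.5 Ω.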

    Real-time delay-multiply-and-sum beamforming with coherence factor for in vivo clinical photoacoustic imaging of humans

    In clinical photoacoustic (PA) imaging, ultrasound (US) array transducers are typically used to provide B-mode images in real time. The delay-and-sum (DAS) beamforming algorithm is most commonly used to form B-mode images because of its ease of implementation, but it suffers from low image resolution and low contrast. To address this, the delay-multiply-and-sum (DMAS) beamforming algorithm was developed; it provides enhanced image quality with higher contrast and a narrower main lobe, but its imaging speed is too limited for clinical applications. In this paper, we present an enhanced real-time DMAS algorithm with a modified coherence factor (CF) for clinical PA imaging of humans in vivo. Our algorithm improves the lateral resolution and signal-to-noise ratio (SNR) of the original DMAS beamformer by suppressing background noise and side lobes using the coherence of the received signals. We optimized the computations of the proposed DMAS with CF (DMAS-CF) to achieve real-time frame rates on a graphics processing unit (GPU). To evaluate the proposed algorithm, we implemented DAS and DMAS with and without CF on a clinical US/PA imaging system and quantitatively assessed their processing speed and image quality. The processing time to reconstruct one B-mode image using the DAS, DAS with CF (DAS-CF), DMAS, and DMAS-CF algorithms was 7.5, 7.6, 11.1, and 11.3 ms, respectively, all achieving a real-time imaging frame rate. In terms of image quality, the proposed DMAS-CF algorithm improved the lateral resolution and SNR by 55.4% and 93.6 dB, respectively, compared to the DAS algorithm in phantom imaging experiments. We believe the proposed DMAS-CF algorithm and its real-time implementation contribute significantly to improving the image quality of clinical US/PA imaging systems.
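The per-pixel DMAS-with-CF combination described above can be sketched as follows. This is a simplified single-pixel illustration (the channel samples are assumed to be already delay-compensated), not the paper's optimized GPU implementation:

```python
import numpy as np

def dmas_cf(channel_samples):
    """DMAS with coherence factor for one image pixel.

    channel_samples: per-channel signals already delayed to the pixel.
    """
    s = np.asarray(channel_samples, dtype=float)
    n = len(s)
    # Signed square root keeps DMAS dimensionally consistent with DAS.
    r = np.sign(s) * np.sqrt(np.abs(s))
    # Sum of pairwise products over i < j, via the identity
    # (sum r)^2 = sum r^2 + 2 * sum_{i<j} r_i * r_j.
    total = r.sum()
    dmas = 0.5 * (total ** 2 - (r ** 2).sum())
    # Coherence factor: coherent energy over total channel energy.
    cf = (s.sum() ** 2) / (n * (s ** 2).sum() + 1e-12)
    return dmas * cf
```

For fully coherent channels the CF approaches 1 and the DMAS value passes through; for incoherent (sign-alternating) channels the CF drives the output toward zero, which is what suppresses side lobes and background noise.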

    Programmable ultrasound color flow system

    Thesis (Ph.D.), University of Washington, 2000.
    Ultrasound color flow systems are widely used in medical imaging because they are safe, noninvasive, relatively inexpensive, and display images in real time. However, to meet the real-time requirement, these systems have been built with fixed-function hardware, i.e., specialized electronic boards. This hardwired approach hinders the development of innovative algorithms to enhance image quality and of new applications to improve diagnostic capability, since incorporating a new application or algorithm is quite expensive, requiring redesigns ranging from hardware chips up to complete boards or sometimes even the complete system. A programmable system, on the other hand, could be quickly reprogrammed to adapt to new tasks and offers advantages such as reduced cost and shorter time-to-market for new ideas. Despite these benefits, a completely programmable color flow system that meets the real-time requirement has not been possible due to limited computing power, inadequate data-flow bandwidth or topology, and algorithms not optimized for the architecture of programmable processors. This research addresses these issues by developing a multiprocessor architecture capable of handling the computation and data-flow requirements of a real-time system using new-generation VLIW processors, and by designing efficient ultrasound algorithms tightly integrated with the underlying architecture. These VLIW processors can deliver increased computing performance through on-chip and data-level parallelism. Even with such a flexible and powerful architecture, achieving good performance requires the careful design and mapping of algorithms that can make good use of the available parallelism. We developed several algorithm-mapping techniques for the efficient implementation of ultrasound algorithms utilizing both on-chip and data-level parallelism.
    We then designed a low-cost, high-performance multiprocessor architecture capable of meeting the real-time requirements of an ultrasound color flow system. To demonstrate that this multiprocessor architecture and these algorithms meet the real-time requirements, we developed a multiprocessor simulation environment with a board-level VHDL simulator. Our simulation results indicate that a two-board system with four MAP1000s on each board is capable of supporting all the ultrasound color flow system requirements. Thus, we have demonstrated that a fully programmable ultrasound system can be built with a reasonable number of programmable processors.

    Three-Dimensional Ultrasound: From Acquisition to Visualization and From Algorithms to Systems


    Fast Adaptive Unsharp Masking with Programmable Mediaprocessors

    Unsharp masking is a widely used image-enhancement method in medical imaging. Hardware-based solutions can be developed to support the high computational demand of unsharp masking, but they suffer from limited flexibility. Software solutions can easily incorporate new features and modify key parameters, such as the filtering kernel size, but they have not been able to meet the fast-computing requirement. Modern programmable mediaprocessors can meet both the fast-computing and flexibility requirements, which benefits medical image computing. In this article, we present fast adaptive unsharp masking on two leading mediaprocessors, or high-end digital signal processors: the Hitachi/Equator Technologies MAP-CA and the Texas Instruments TMS320C64x. For a 2k × 2k 16-bit image, our adaptive unsharp masking with a 201 × 201 boxcar kernel takes 225 ms on a 300-MHz MAP-CA and 74 ms on a 600-MHz TMS320C64x. This fast unsharp masking enables technologists and/or physicians to adjust parameters interactively for optimal quality assurance and image viewing.
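The underlying operation, a large boxcar low-pass followed by a weighted high-boost, can be sketched in NumPy. The gain value below is an illustrative assumption (the article adapts the enhancement per image), and the mediaprocessor-specific optimizations are omitted:

```python
import numpy as np

def boxcar_lowpass(img, k):
    """Separable k x k boxcar (moving-average) filter with edge replication."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    # The boxcar kernel is separable: filter rows, then columns.
    rows = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="valid"), 0, rows)

def unsharp_mask(img, k=201, gain=0.6):
    """enhanced = image + gain * (image - lowpass)."""
    img = img.astype(np.float64)
    return img + gain * (img - boxcar_lowpass(img, k))
```

On a flat region the low-pass equals the input, so the enhancement leaves it unchanged; only local contrast around edges and detail is boosted.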

    Realistic tissue visualization using photoacoustic image

    Visualization methods are very important in biomedical imaging, which has the unique advantage of providing intuitive information about living tissue directly in the image. This advantage can be greatly enhanced by choosing an appropriate visualization method, and doing so is more challenging for volumetric data. Volume data has the advantage of containing 3D spatial information, but because images are always displayed in 2D, the data cannot convey its full value directly; visualization is therefore the key that creates the real value of volume data. However, visualizing 3D data requires complicated algorithms and carries a high computational burden, so specialized algorithms and computational optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of an organism is mainly determined by its light-absorbing components, photoacoustic data can provide tissue color information that is close to real tissue color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function that depends on the depth of the tissue from the skin. We used a direct ray-casting method and computed color while evaluating the shader parameters. In the rendered results, we succeeded in obtaining realistic texture from the photoacoustic data: rays reflected from the surface were visualized in white, and the color reflected from deep tissue was visualized in red, like skin tissue. We also implemented the CUDA algorithm in an OpenGL environment for real-time interactive imaging.
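A depth-dependent color transfer function of the kind described can be sketched per voxel as below. The white-to-red blend and the 20 mm depth scale are illustrative assumptions, not the parameters used in the study:

```python
import numpy as np

def depth_color(intensity, depth_mm, max_depth_mm=20.0):
    """Map a photoacoustic voxel to RGB: white at the surface,
    shifting toward red (skin-like) with depth."""
    t = np.clip(depth_mm / max_depth_mm, 0.0, 1.0)
    a = np.clip(intensity, 0.0, 1.0)
    # Blend from white (1,1,1) at the surface to red (1,0,0) at depth,
    # scaled by the voxel's photoacoustic intensity.
    return np.array([1.0, 1.0 - t, 1.0 - t]) * a
```

In a direct ray caster, a function like this would be evaluated per sample along each ray while accumulating the shaded color.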

    Multi-Channel Transfer Learning of Chest X-ray Images for Screening of COVID-19

    The 2019 novel coronavirus (COVID-19) has spread rapidly all over the world. The standard test for screening COVID-19 patients is the polymerase chain reaction test. As this method is time consuming, chest X-rays may be considered as an alternative for quick screening. However, specialization is required to read COVID-19 chest X-ray images, as their features vary. To address this, we present a multi-channel pre-trained ResNet architecture to facilitate the diagnosis of COVID-19 from chest X-rays. Three ResNet-based models were retrained to classify X-rays on a one-against-all basis as images from (a) normal or diseased, (b) pneumonia or non-pneumonia, and (c) COVID-19 or non-COVID-19 individuals. Finally, these three models were ensembled and fine-tuned using X-rays from 1579 normal, 4245 pneumonia, and 184 COVID-19 individuals to classify normal, pneumonia, and COVID-19 cases in a one-against-one framework. Our results show that the ensemble model is more accurate than a single model, as it extracts more relevant semantic features for each class. The method provides a precision of 94% and a recall of 100%. It could potentially help clinicians screen patients for COVID-19, thus facilitating immediate triaging and treatment for better outcomes.
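The final decision over the three one-against-all heads can be sketched as a simple score combination. The complement rule for the normal-vs-diseased head and the example probabilities are illustrative assumptions; the paper ensembles and fine-tunes the actual ResNet models end to end:

```python
import numpy as np

CLASSES = ["normal", "pneumonia", "covid19"]

def ensemble_predict(p_diseased, p_pneumonia, p_covid):
    """Combine the three binary heads' positive-class probabilities
    into a single three-class decision."""
    # The "normal" score is the complement of the normal-vs-diseased head;
    # the other two heads vote for their own class directly.
    scores = np.array([1.0 - p_diseased, p_pneumonia, p_covid])
    probs = scores / scores.sum()
    return CLASSES[int(np.argmax(probs))], probs
```

For example, a diseased-looking X-ray with a strong pneumonia score and a weak COVID-19 score resolves to "pneumonia", while uniformly weak positive scores resolve to "normal".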

    Improved real-time delay-multiply-and-sum beamforming with coherence factor

    Delay-and-sum (DAS) beamforming is the most commonly used algorithm to form photoacoustic (PA) and ultrasound (US) images because of its simple implementation. However, it has several drawbacks, such as low image resolution and contrast. To deal with this problem, the delay-multiply-and-sum (DMAS) beamforming algorithm was developed a few years ago. DMAS is known to improve image quality by providing higher contrast and a narrower main lobe compared to DAS, but its calculation speed is too slow for clinical applications. Herein, we introduce a DMAS improved in terms of both imaging speed and quality and demonstrate real-time clinical PA imaging. The proposed DMAS provides better lateral resolution and signal-to-noise ratio (SNR) than the original DMAS through a modified coherence factor. We accelerated its computation speed by optimizing the algorithm and parallelizing the processing on a graphics processing unit (GPU). We quantitatively compared the processing time and image quality of the proposed algorithm with those of the conventional algorithms. As a result, our proposed algorithm showed better spatial resolution and SNR while achieving a real-time imaging frame rate. Owing to these improvements, the proposed algorithm was successfully implemented on a programmable clinical PA/US imaging system and produced clearer real-time PA images than the conventional DAS.

    Deep learning‐based multimodal fusion network for segmentation and classification of breast cancers using B‐mode and elastography ultrasound images

    Ultrasonography is one of the key medical imaging modalities for evaluating breast lesions. For differentiating benign from malignant lesions, computer-aided diagnosis (CAD) systems have greatly assisted radiologists by automatically segmenting lesions and identifying their features. Here, we present deep learning (DL)-based methods to segment lesions and then classify them as benign or malignant, utilizing both B-mode and strain elastography (SE-mode) images. We propose a weighted multimodal U-Net (W-MM-U-Net) model for segmenting lesions, in which optimal weights are assigned to the different imaging modalities using a weighted skip-connection method to emphasize their importance. We design a multimodal fusion framework (MFF) on cropped B-mode and SE-mode ultrasound (US) lesion images to classify benign and malignant lesions. The MFF consists of an integrated feature network (IFN) and a decision network (DN). Unlike other recent fusion methods, the proposed MFF can simultaneously learn complementary information from convolutional neural networks (CNNs) trained on B-mode and SE-mode US images. The features from the CNNs are ensembled using the multimodal EmbraceNet model, and the DN classifies the images using those features. The experimental results (sensitivity of 100 ± 0.00% and specificity of 94.28 ± 7.00%) on real-world clinical data show that the proposed method outperforms existing single- and multimodal methods. The proposed method predicted seven benign patients as benign in three out of five trials and six malignant patients as malignant in five out of five trials. The proposed method could potentially enhance the classification accuracy of radiologists for breast cancer detection in US images. © 2022 The Authors. Bioengineering & Translational Medicine published by Wiley Periodicals LLC on behalf of American Institute of Chemical Engineers.
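The EmbraceNet-style feature ensembling can be sketched as follows. This is a simplified inference-time illustration (equal-length feature vectors and a fixed per-dimension selection probability are assumptions), not the trained model from the paper:

```python
import numpy as np

def embrace_fuse(feat_bmode, feat_se, p_bmode=0.5, rng=None):
    """EmbraceNet-style fusion: each fused feature dimension is drawn
    from exactly one modality, chosen at random per dimension."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Per-dimension coin flip: True -> take the B-mode feature,
    # False -> take the SE-mode feature.
    take_bmode = rng.random(len(feat_bmode)) < p_bmode
    return np.where(take_bmode, feat_bmode, feat_se)
```

The random per-dimension selection forces both modality branches to learn features that remain useful when mixed, which is what lets the fused representation carry complementary information.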