
    A survey of the state-of-the-art and focused research in range systems, task 1

    This final report presents the latest research activity in voice compression. We have designed a non-real-time simulation system built around the IBM-PC, which serves as a speech workstation for data acquisition and analysis of voice samples. A real-time implementation is also proposed. This real-time Voice Compression Board (VCB) is built around the Texas Instruments TMS-3220. The voice compression algorithm investigated here was described in an earlier report by the author, Low Cost Voice Compression for Mobile Digital Radios; we assume the reader is familiar with the algorithm discussed in that report. The VCB compresses speech waveforms at data rates ranging from 4.8 kbps to 16 kbps. The board interfaces to the IBM-PC 8-bit bus and plugs into a single expansion slot on the motherboard.
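
    For context, the compression ratios implied by the quoted output rates can be worked out under the assumption (not stated in the abstract) that the raw input is 8 kHz, 8-bit PCM telephone-quality speech, i.e. 64 kbps:

```python
# Rough bit-budget sketch: compression ratios implied by the VCB's
# 4.8-16 kbps output range, assuming (not stated in the abstract) a raw
# input of 8 kHz, 8-bit PCM speech, i.e. 64 kbps.
RAW_RATE_BPS = 8_000 * 8                         # assumed uncompressed rate

for out_rate in (4_800, 9_600, 16_000):          # representative VCB rates
    print(f"{out_rate / 1000:5.1f} kbps -> {RAW_RATE_BPS / out_rate:4.1f}:1 compression")
# 4.8 kbps -> 13.3:1,  9.6 kbps -> 6.7:1,  16.0 kbps -> 4.0:1
```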

    Enhancement Of Medical Image Compression Algorithm In Noisy WLANS Transmission

    Advances in telemedicine technology enable rapid medical diagnoses with visualization and quantitative assessment by medical practitioners. In healthcare and hospital networks, medical data exchange over wireless local area network (WLAN) transceivers remains challenging because of growing data sizes, real-time interaction with compressed images, and the range of bandwidths that transmission must support. Prior to transmission, medical data are compressed to minimize transmission bandwidth and save transmitting power. Researchers face many challenges in improving the performance of compression approaches, including poor energy compaction, high entropy values, low compression ratio (CR), and high computational complexity in real-time implementation. Thus, a new approach called Enhanced Independent Component Analysis (EICA) has been developed for the compression and decompression of medical images; it transforms the image data by block-based Independent Component Analysis (ICA). The proposed method uses the Fast Independent Component Analysis (FastICA) algorithm followed by a quantization architecture based on a zero quantized coefficients percentage (ZQCP) prediction model using an artificial neural network. For image reconstruction, decoding steps based on the developed quantization architecture are examined. EICA is particularly useful where the size of the transmitted data needs to be reduced to minimize the image transmission time. A comparative analysis is performed against existing compression techniques: discrete cosine transform (DCT), set partitioning in hierarchical trees (SPIHT), and JPEG 2000. Three main modules, namely the compression segment (CS), transceiver segment (TRS), and outcome segment (OTS), are developed to realize a fully computerized simulation tool for medical data compression with suitable and effective performance. The CS module compresses medical data using four approaches: DCT, SPIHT, JPEG 2000, and EICA. The TRS module models transmission over low-cost, low-bandwidth WLANs. Finally, the OTS module performs data decompression and visualizes the results. For the compression module, results show the benefits of applying EICA to medical data compression and transmission, and the developed system shows favorable outcomes in compressing and transmitting medical data. In conclusion, the three modules (CS, TRS, and OTS) are integrated into a computerized prototype, the Medical Data Simulation System (Medata-SIM), which combines medical data compression with a transceiver and visualization to aid medical practitioners in carrying out rapid diagnoses.
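
    The abstract gives no implementation details for EICA; as a rough sketch of the general idea of block-based ICA transform coding (not the authors' pipeline), the code below splits an image into 8x8 blocks, learns an ICA basis with scikit-learn's FastICA, and uniformly quantizes the coefficients. Block size, component count, and quantization step are assumptions, and the ZQCP neural-network stage is omitted.

```python
# Minimal sketch of block-based ICA transform coding (not the authors' EICA
# pipeline; block size, component count and quantization step are assumptions).
import numpy as np
from sklearn.decomposition import FastICA

def blockify(img, b=8):
    """Split a grayscale image into non-overlapping b x b blocks, flattened."""
    h, w = img.shape
    img = img[:h - h % b, :w - w % b]              # crop to a multiple of b
    return (img.reshape(h // b, b, -1, b)
               .swapaxes(1, 2)
               .reshape(-1, b * b)
               .astype(np.float64))

def encode(img, n_components=32, q_step=8.0):
    """Transform blocks with FastICA and uniformly quantize the coefficients."""
    blocks = blockify(img)
    ica = FastICA(n_components=n_components, max_iter=500, random_state=0)
    coeffs = ica.fit_transform(blocks)             # per-block ICA coefficients
    q = np.round(coeffs / q_step).astype(np.int16)
    return q, ica, q_step

def decode(q, ica, q_step):
    """Dequantize and invert the ICA transform to recover block pixels."""
    return ica.inverse_transform(q.astype(np.float64) * q_step)
```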

    On-the-Fly Calculation of Time-Averaged Acoustic Intensity in Time-Domain Ultrasound Simulations Using a k-Space Pseudospectral Method

    OBJECTIVE: This paper presents a method to calculate the average acoustic intensity during an ultrasound simulation using a new approach that exploits compression of intermediate results. METHODS: One application of high-intensity focused ultrasound (HIFU) simulations, carried out with a state-of-the-art k-space pseudospectral method, is the calculation of the thermal dose, which indicates the amount of tissue destroyed. The thermal simulation is preceded by the calculation of the average intensity within the acoustic simulation. Due to the time staggering between the particle velocity and the acoustic pressure used in such simulations, the average intensity calculation is typically executed offline after the acoustic simulation, consuming both disk space and time (the data can spread over terabytes). Our new approach calculates the average intensity during the acoustic simulation from the output coefficients of a new compression method, which enables resolving the time staggering on-the-fly with large disk space savings. To reduce RAM requirements, the article also presents a new 40-bit method for encoding the complex compression coefficients. RESULTS: Experimental numerical simulations with the proposed method have shown that disk space requirements are up to 99% lower. The simulation speed was not significantly affected by the approach, and the compression error did not affect the prediction accuracy of the thermal dose. CONCLUSION: From the standpoint of supercomputers, the new approach is significantly more economical. SIGNIFICANCE: Saving computing resources increases the chances of real use of acoustic simulations in practice. The method can be applied to signals of a similar character, e.g., electromagnetic radio waves.
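
    As a rough illustration of the underlying idea (not the paper's coefficient-based scheme), the sketch below accumulates the time-averaged intensity on-the-fly and resolves the half-step stagger between pressure and velocity with a simple two-point temporal average; field shapes and step counts are placeholders.

```python
# Minimal sketch of on-the-fly accumulation of time-averaged acoustic
# intensity I = <p * u>.  In k-space pseudospectral solvers the particle
# velocity u is staggered by half a time step relative to the pressure p;
# here the stagger is resolved with a two-point average in time, whereas
# the paper resolves it from the coefficients of its compression scheme.
import numpy as np

def time_averaged_intensity(pressure_steps, velocity_steps):
    """Accumulate <p * u> without storing the full time series.

    pressure_steps : iterable of p fields at t = n * dt
    velocity_steps : iterable of u fields at t = (n + 1/2) * dt
    """
    i_sum, n, u_prev = None, 0, None
    for p, u_half in zip(pressure_steps, velocity_steps):
        if u_prev is not None:
            u_at_p = 0.5 * (u_prev + u_half)     # undo the half-step stagger
            i_sum = p * u_at_p if i_sum is None else i_sum + p * u_at_p
            n += 1
        u_prev = u_half                          # only the last u field is kept
    return i_sum / n
```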

    An Unsupervised Approach to Ultrasound Elastography with End-to-end Strain Regularisation

    Quasi-static ultrasound elastography (USE) is an imaging modality that determines a measure of deformation (i.e. strain) of soft tissue in response to an applied mechanical force. The strain is generally determined by estimating the displacement between successive ultrasound frames acquired before and after applying manual compression. The computational efficiency and accuracy of the displacement prediction, also known as time-delay estimation, are key challenges for real-time USE applications. In this paper, we present a novel deep-learning method for efficient time-delay estimation between ultrasound radio-frequency (RF) data. The proposed method consists of a convolutional neural network (CNN) that predicts a displacement field between a pair of pre- and post-compression ultrasound RF frames. The network is trained in an unsupervised way, by optimizing a similarity metric between the reference and compressed image. We also introduce a new regularization term that preserves displacement continuity by directly optimizing the strain smoothness. We validated the performance of our method using both ultrasound simulation and in vivo data on healthy volunteers. We also compared the performance of our method with a state-of-the-art method called OVERWIND [17]. Average contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) of our method in 30 simulation and 3 in vivo image pairs are 7.70 and 6.95, 7 and 0.31, respectively. Our results suggest that our approach can effectively predict accurate strain images. The unsupervised aspect of our approach represents great potential for the application of deep learning to the analysis of clinical ultrasound data.
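
    A minimal PyTorch-style sketch of such an unsupervised objective is given below; the MSE similarity term, the finite-difference strain, and the weight lam are illustrative assumptions rather than the authors' exact formulation.

```python
# Minimal sketch of an unsupervised elastography loss: warp the
# pre-compression frame with the predicted axial displacement, compare it to
# the post-compression frame, and penalise non-smooth strain (the axial
# derivative of the displacement).  MSE and lam are placeholder assumptions.
import torch
import torch.nn.functional as F

def unsupervised_loss(pre, post, disp, lam=0.1):
    """pre, post: (B, 1, H, W) RF frames; disp: (B, 1, H, W) axial displacement in pixels."""
    b, _, h, w = pre.shape
    # Sampling grid shifted by the predicted axial displacement.
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    ys = ys.to(pre.device) + disp[:, 0]                  # axial shift per pixel
    xs = xs.to(pre.device).expand(b, h, w)
    grid = torch.stack((2 * xs / (w - 1) - 1,            # normalise to [-1, 1]
                        2 * ys / (h - 1) - 1), dim=-1)
    warped = F.grid_sample(pre, grid, align_corners=True)
    similarity = F.mse_loss(warped, post)
    strain = disp[:, :, 1:, :] - disp[:, :, :-1, :]      # axial strain (finite diff.)
    smoothness = (strain[:, :, 1:, :] - strain[:, :, :-1, :]).abs().mean()
    return similarity + lam * smoothness
```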

    A low complexity image compression algorithm for Bayer color filter array

    Digital images in their raw form require an excessive amount of storage capacity. Image compression reduces the cost of storing and transmitting image data by shrinking the file size so that it requires less storage or transmission bandwidth. This work presents a new color transformation and compression algorithm for Bayer color filter array (CFA) images. In a full color image, each pixel contains R, G, and B components. A CFA image contains only one color channel at each pixel position, so demosaicking is required to construct a full color image: for each pixel, demosaicking reconstructs the two missing color components from neighbouring pixels. Conventional CFA compression is applied after demosaicking. However, the Bayer CFA image can be compressed before demosaicking, known as the compression-first (or direct compression) method, and the algorithm proposed in this research follows this approach. The compression-first method applies the compression algorithm directly to the CFA data and shifts demosaicking to the other end of the transmission and storage chain; its advantage is that each pixel requires three times less transmission bandwidth than conventional compression. Direct compression of CFA data, however, must contend with spatial redundancy, artifacts, and false high frequencies, and it requires a color transformation whose components are less correlated than those of the Bayer RGB color space. This work analyzes the correlation coefficient, standard deviation, entropy, and intensity range of the Bayer RGB color components. The analysis yields two efficient color transformations whose components show lower correlation coefficients than the Bayer RGB color components; the transformations reduce both the spatial and spectral redundancies of the Bayer CFA image. After the color transformation, the components are independently encoded using differential pulse-code modulation (DPCM) in raster order. The DPCM residual error is mapped to a positive integer for the adaptive Golomb-Rice code, and the compression algorithm combines adaptive Golomb-Rice and unary coding to generate the bit stream. Extensive simulation analysis is performed on both simulated and real CFA datasets, and the analysis is extended to wireless capsule endoscopy (WCE) images; the compression algorithm is also evaluated on a simulated WCE CFA dataset. The results show that the proposed algorithm requires fewer bits per pixel than conventional CFA compression and outperforms recent CFA compression algorithms on both real and simulated CFA datasets.
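
    A minimal sketch of the raster-order DPCM stage, the mapping of residuals to non-negative integers, and a fixed-parameter Golomb-Rice code is given below; the adaptive parameter selection, unary fallback, and the proposed color transforms are omitted, and the details are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of raster-order DPCM, zigzag mapping of prediction residuals
# to non-negative integers, and a fixed-parameter Golomb-Rice code.
import numpy as np

def dpcm_residuals(channel):
    """Predict each pixel from the previous pixel in raster order and return
    the prediction residuals (the first pixel is predicted as zero)."""
    flat = channel.astype(np.int32).ravel()
    pred = np.empty_like(flat)
    pred[0] = 0
    pred[1:] = flat[:-1]
    return flat - pred

def map_to_nonnegative(residual):
    """Zigzag map: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return np.where(residual >= 0, 2 * residual, -2 * residual - 1)

def golomb_rice_encode(values, k=3):
    """Encode non-negative integers with a Rice code of fixed parameter k."""
    bits = []
    for v in values:
        q, r = divmod(int(v), 1 << k)
        bits.append("1" * q + "0" + format(r, f"0{k}b"))  # unary quotient + k-bit remainder
    return "".join(bits)
```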

    Real-time demonstration hardware for enhanced DPCM video compression algorithm

    The lack of available wideband digital links, as well as the complexity of implementing bandwidth-efficient digital video CODECs (encoder/decoder), has kept the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of, and trend toward, digital video compression techniques for transmission of high quality video from space and has therefore developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, a non-uniform quantizer, and a multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor, and non-uniform quantizer has been completed, providing a real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development, along with a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems, or cable television distribution to system headends and direct-to-the-home).
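
    A minimal sketch of intra-field DPCM with a non-uniform quantizer in the prediction loop is shown below; the reconstruction levels and the simple previous-pixel predictor are illustrative assumptions, not the values used in the NASA hardware, and the multilevel Huffman stage is omitted.

```python
# Minimal sketch of DPCM with a non-uniform quantizer: small prediction errors
# get fine reconstruction levels, large errors coarse ones.  The levels and
# the previous-pixel predictor are illustrative, not the hardware's values.
import numpy as np

# Symmetric, non-uniform reconstruction levels for the prediction error.
LEVELS = np.array([-60, -30, -14, -6, -2, 0, 2, 6, 14, 30, 60])

def quantize(error):
    """Map a prediction error to the index of the nearest reconstruction level."""
    return int(np.argmin(np.abs(LEVELS - error)))

def dpcm_encode_line(line):
    """Encode one scan line: predict each pixel from the previous reconstructed
    pixel and emit the quantizer index of the prediction error."""
    recon_prev = 128                       # mid-grey start-of-line predictor
    indices = []
    for pixel in line.astype(np.int32):
        idx = quantize(pixel - recon_prev)
        indices.append(idx)                # indices would feed the Huffman coder
        recon_prev = int(np.clip(recon_prev + LEVELS[idx], 0, 255))
    return indices
```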
