
    Lossy and Lossless Video Frame Compression: A Novel Approach for the High-Temporal Video Data Analytics

    The smart city concept has attracted considerable research attention in recent years across diverse application domains, such as crime suspect identification, border security, transportation, and aerospace. Specific focus has been on increased automation using data-driven approaches, while leveraging remote sensing and real-time streaming of heterogeneous data from various sources, including unmanned aerial vehicles, surveillance cameras, and low-earth-orbit satellites. One of the core challenges in the exploitation of such high-temporal data streams, specifically videos, is the trade-off between the quality of video streaming and the limited transmission bandwidth. An optimal compromise is needed between video quality on the one hand and the recognition, understanding, and efficient processing of large amounts of video data on the other. This research proposes a novel unified approach to lossy and lossless video frame compression, which is beneficial for the autonomous processing and enhanced representation of high-resolution video data in various domains. The proposed fast block matching motion estimation technique, namely mean predictive block matching, is based on the principle that general motion in any video frame is usually coherent. This coherence implies a high probability that a macroblock has the same direction of motion as the macroblocks surrounding it. The technique employs the partial distortion elimination algorithm to condense the search time: the partial sum of the matching distortion between the current macroblock and a candidate block is abandoned as soon as it surpasses the current lowest error. Experimental results demonstrate the superiority of the proposed approach over state-of-the-art techniques, including the four step search, three step search, diamond search, and new three step search.
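    The early-termination idea behind partial distortion elimination can be sketched as follows (a minimal Python illustration of the general technique, not the authors' implementation; function and variable names are hypothetical):

```python
# Partial distortion elimination (PDE) sketch: the SAD between the current
# macroblock and a candidate is accumulated row by row, and the candidate
# is rejected as soon as the partial sum reaches the best distortion so far.

def sad_with_pde(current, candidate, best_so_far):
    """Return the SAD of two blocks, or None on early termination."""
    partial = 0
    for row_c, row_r in zip(current, candidate):
        partial += sum(abs(a - b) for a, b in zip(row_c, row_r))
        if partial >= best_so_far:      # no need to finish the summation
            return None
    return partial

def find_best_match(current, candidates):
    """Test candidate blocks in order, keeping the lowest full SAD."""
    best, best_idx = float("inf"), -1
    for i, cand in enumerate(candidates):
        sad = sad_with_pde(current, cand, best)
        if sad is not None:
            best, best_idx = sad, i
    return best_idx, best
```

The earlier a good candidate is found, the tighter the bound and the more rows of subsequent candidates can be skipped, which is why candidate ordering (here, the mean predictive ordering) matters.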

    A novel numerical framework for simulation of multiscale spatio-temporally non-linear systems in additive manufacturing processes.

    New computationally efficient numerical techniques have been formulated for multi-scale analysis in order to bridge the mesoscopic and macroscopic scales of the thermal and mechanical responses of a material. These techniques reduce the computational effort required to simulate metal-based Additive Manufacturing (AM) processes. Given the availability of physics-based constitutive models for the response at mesoscopic scales, they help in evaluating the thermal response and mechanical properties during layer-by-layer processing in AM. Two classes of numerical techniques have been explored. The first class has been developed for evaluating the periodic spatiotemporal thermal response involving multiple time and spatial scales at the continuum level. The second class is targeted at modeling multi-scale, multi-energy dissipative phenomena during the solid-state Ultrasonic Consolidation process. This includes bridging the mesoscopic response of a crystal plasticity finite element framework at inter- and intragranular scales with a point at the macroscopic scale. This response has been used to develop an energy-dissipative constitutive model for a multi-surface interface at the macroscopic scale. As part of the first class of techniques, an adaptive dynamic meshing strategy has been developed which reduces computational cost through efficient node-element renumbering and assembly of stiffness matrices. This strategy reduced the computational cost of a thermal simulation of the Selective Laser Melting (SLM) process by ~100 times. The method is not limited to SLM processes and can be extended to any other fusion-based additive manufacturing process and, more generally, to any moving-energy-source finite element problem. Novel FEM-based beam theories have been formulated which are more general than traditional beam theories for solid deformation.
    These theories are the first to treat thermal problems in a manner analogous to solid beam analysis; they can simulate beams of general cross-section and match the results of a complete three-dimensional analysis. In addition, a traditional Cholesky decomposition algorithm has been modified to reduce the computational cost of solving the simultaneous equations involved in FEM simulations. Solid-state processes have been simulated with crystal-plasticity-based nonlinear finite element algorithms, further sped up by the introduction of an interfacial contact constitutive model formulation. This framework is supported by a novel methodology that solves contact problems without the additional computational overhead of incorporating constraint equations, avoiding the use of penalty springs.
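    For reference, the baseline that such a modified solver accelerates is the standard Cholesky factorization with two triangular solves, as used for symmetric positive-definite FEM stiffness systems (a minimal pure-Python sketch of the textbook algorithm, not the authors' modified method; names are hypothetical):

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L L^T (A symmetric positive definite)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

def solve_spd(A, b):
    """Solve A x = b via one factorization and two triangular solves."""
    L = cholesky(A)
    n = len(b)
    y = [0.0] * n                      # forward substitution: L y = b
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n                      # back substitution: L^T x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```

In FEM practice the factorization is done once per stiffness matrix and reused across load steps, which is where most of the savings from a cheaper decomposition accrue.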

    Robust Magnetic Resonance Imaging of Short T2 Tissues

    Tissues with short transverse relaxation times are defined as ‘short T2 tissues’, and they often appear dark on images generated by conventional magnetic resonance imaging techniques. Common short T2 tissues include tendons, menisci, and cortical bone. Ultrashort Echo Time (UTE) pulse sequences can provide morphologic contrast and quantitative maps for short T2 tissues by reducing the echo time to the system minimum (e.g., less than 100 μs). UTE sequences have therefore become a powerful imaging tool for visualizing and quantifying short T2 tissues in many applications. In this work, we developed a new Flexible Ultra Short time Echo (FUSE) pulse sequence employing a total of thirteen acquisition features with adjustable parameters, including optimized radiofrequency pulses, trajectories, a choice of two or three dimensions, and multiple long-T2 suppression techniques. Together with the FUSE sequence, an improved analytical density correction and an auto-deblurring algorithm were incorporated into a novel reconstruction pipeline for reducing imaging artifacts. Firstly, we evaluated the FUSE sequence using a phantom containing short T2 components. The results demonstrated that varying the UTE acquisition methods and improving the density correction and deblurring algorithms could reduce the various artifacts, improve the overall signal, and enhance short T2 contrast. Secondly, we applied the FUSE sequence to bovine stifle joints (similar to the human knee) for morphologic imaging and quantitative assessment. The results showed that it was feasible to use the FUSE sequence to create morphologic images that isolate signals from the various knee joint tissues and to carry out comprehensive quantitative assessments, using the meniscus as a model, including mappings of longitudinal relaxation (T1) times, quantitative magnetization transfer parameters, and effective transverse relaxation (T2*) times.
    Lastly, we utilized the FUSE sequence to image the human skull to evaluate its feasibility for synthetic computed tomography (CT) generation and radiation treatment planning. The results demonstrated that radiation treatment plans created using the FUSE-based synthetic CT and traditional CT data presented comparable dose calculations, with a mean dose difference of less than one percent. In summary, this thesis demonstrated the need for the FUSE sequence and its potential for robustly imaging short T2 tissues in various applications.
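    Quantitative T2* mapping of the kind described above is commonly performed by fitting a mono-exponential decay S(TE) = S0·exp(−TE/T2*) to multi-echo magnitudes; a minimal log-linear least-squares sketch (an illustration of the general fitting idea only, not the FUSE reconstruction pipeline; names are hypothetical):

```python
import math

def fit_t2star(tes_ms, signals):
    """Estimate T2* (same units as TE) from multi-echo magnitudes by a
    log-linear least-squares fit of S(TE) = S0 * exp(-TE / T2*)."""
    xs = tes_ms
    ys = [math.log(s) for s in signals]      # linearize the exponential
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope of the regression line ln(S) = ln(S0) - TE / T2*
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / slope
```

Log-linear fitting is fast and adequate for clean data; at low SNR a non-linear fit to the magnitudes is usually preferred because the log transform distorts the noise distribution.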

    Current video compression algorithms: Comparisons, optimizations, and improvements

    Compression algorithms have evolved significantly in recent years. Audio, still images, and video can be compressed substantially by taking advantage of the natural redundancies that occur within them. Video compression in particular has made significant advances. MPEG-1 and MPEG-2, two of the major video compression standards, allowed video to be compressed at very low bit rates compared to the original video. The compression ratio for video that is perceptually lossless (losses can't be visually perceived) can be as high as 40 or 50 to 1 for certain videos, and videos with a small degradation in quality can be compressed at 100 to 1 or more. Although the MPEG standards provided low bit rate compression, even higher quality compression is required for efficient transmission over limited-bandwidth networks, wireless networks, and broadcast media. Significant gains over the current MPEG-2 standard have been made in a newly developed standard called Advanced Video Coding, also known as H.264 and MPEG-4 Part 10. (Abstract shortened by UMI.)

    Comparison of the suitability of intra-oral scanning with conventional impression of edentulous maxilla in vivo. A preliminary study

    Aim According to recent literature, the accuracy of digital impressions is comparable with that of traditional impressions for most indications. However, little is known about their suitability for digitizing edentulous jaws in view of removable prosthetic rehabilitation. The aim of this study was to compare, in vivo, an intra-oral scanner (IOS) with conventional impressions of maxillary edentulous jaws. Material and methods Four subjects (1 male, 3 female) who had no previous experience with either conventional or digital impressions participated in this study. Digital impressions were taken using an intra-oral scanner. After that, conventional impressions of the maxillary edentulous jaws were taken with an irreversible hydrocolloid impression material. All IOS datasets were then loaded into a three-dimensional evaluation software (3DReshaper 2017, Hexagon), where they were superimposed on the model obtained from the conventional impression and compared. Results The mean difference between the two impression techniques ranged from 219 to 347 μm. The comparison of the models obtained with the two techniques showed that the compression exerted by the impression material on the peripheral areas, such as the oral vestibule and soft palate, accounted for the largest differences recorded. Conclusion Digitizing edentulous jaws with an IOS appeared to be feasible in vivo, although peripheral tissues were not effectively reproduced. On the basis of the results of this study, the authors cannot recommend the use of an IOS for the digitization of edentulous jaws in vivo in view of removable prosthetic rehabilitation, until a way is found to apply selective pressure to the peripheral areas, as occurs during the edging of an impression tray.

    Advanced Stereoscopy towards On-Machine Surface Metrology and Inspection

    With the goal of developing an integrated on-machine 3D machine vision inspection system that monitors part quality and extracts required patterns or structures during the manufacturing process, using low-cost hardware in a high-speed mode, this dissertation discusses a newly developed strobe-stereoscopy (SS) technique for examining in-motion targets. Stereoscopy is utilized for 3D reconstruction from recorded image pairs based on the triangulation of the display pixels, test target, and cameras. Stroboscopy is introduced to lock the moving target at different locations by matching the frequency of the light source to that of the controlled motor. Fluorescent fluid was then added to the SS system for the inspection of high-gloss reflective surfaces. Conventional stereoscopy is limited to diffuse surfaces because of its sensitivity to illumination dispersion; the fluorescent strobe-stereoscopy (FSS) technique overcomes this limitation, extends inspection to polished surfaces, and is applied to step-by-step fabrication process monitoring, thus closing the metrology loop for automated production. A surface filtering-based image selection and extraction (ISE) approach was created for quick pattern extraction from the freeform base structure and integrated into the built hardware configuration. In this dissertation, the performance of the inspection systems is analyzed and validated with comprehensive experimental results. Potential applications and future work of the proposed technique are discussed as well.
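    For a rectified camera pair, the triangulation step of stereoscopy reduces to the classic depth-from-disparity relation Z = f·B/d; a minimal sketch (an illustration under that rectified-geometry assumption, not the dissertation's calibrated system; names are hypothetical):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth Z = f * B / d for a rectified stereo pair:
    focal length f in pixels, baseline B in mm, disparity d in pixels.
    Returns depth in mm."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px
```

Because depth varies inversely with disparity, depth resolution degrades quadratically with distance, which is one reason such systems favor short working distances for surface metrology.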

    A method for the dynamic correction of B0-related distortions in single-echo EPI at 7 T

    We propose a method to calculate field maps from the phase of each EPI in an fMRI time series. These field maps can be used to correct the corresponding magnitude images for distortion caused by inhomogeneity in the static magnetic field. In contrast to conventional static distortion correction, in which one 'snapshot' field map is applied to all subsequent fMRI time points, our method also captures dynamic changes to B0 which arise due to motion and respiration. The approach is based on the assumption that the non-B0-related contribution to the phase measured by each radio-frequency coil, which is dominated by the coil sensitivity, is stable over time and can therefore be removed to yield a field map from EPI. Our solution addresses imaging with multi-channel coils at ultra-high field (7 T), where phase offsets vary rapidly in space, phase processing is non-trivial and distortions are comparatively large. We propose using a dual-echo gradient-echo reference scan for the phase offset calculation, which yields estimates with a high signal-to-noise ratio. An extrapolation method is proposed which yields reliable estimates of phase offsets even when motion is large, and a tailored phase unwrapping procedure for EPI is suggested which gives robust results in regions with disconnected tissue or strong signal decay. Phase offsets are shown to be stable during long measurements (40 min) and under large head motions. The dynamic distortion correction proposed here is found to work accurately in the presence of large motion (up to 8.1°), whereas a conventional method based on a single field map fails to correct distortions or even introduces them (up to 11.2 mm). Finally, we show that dynamic unwarping increases the temporal stability of EPI in the presence of motion. Our approach can be applied to any EPI measurement without the need for sequence modification.
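    The core of computing a field map from phase is the relation f = Δφ/(2π·ΔTE), applied per voxel to the wrapped phase difference between two echoes; a minimal sketch (an illustration of that relation only, not the authors' full pipeline with coil combination, offset removal, and tailored unwrapping; names are hypothetical):

```python
import math

def fieldmap_hz(phase1, phase2, dte_s):
    """Per-voxel off-resonance frequency (Hz) from two echo phases (rad):
    f = wrap(phi2 - phi1) / (2 * pi * dTE)."""
    out = []
    for p1, p2 in zip(phase1, phase2):
        d = p2 - p1
        d = (d + math.pi) % (2 * math.pi) - math.pi   # wrap to [-pi, pi)
        out.append(d / (2 * math.pi * dte_s))
    return out
```

The simple wrap to [−π, π) limits the unambiguous frequency range to ±1/(2·ΔTE), which is why a short echo spacing (or a spatial unwrapping step, as in the paper) is needed at 7 T.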

    Reconfigurable Architecture For H.264/avc Variable Block Size Motion Estimation Based On Motion Activity And Adaptive Search Range

    Motion Estimation (ME) plays a key role in video coding systems, achieving high compression ratios by removing temporal redundancies among video frames. In the H.264/AVC video coding standard especially, the ME engine demands a large amount of computational capability due to its support for a wide range of block sizes per macroblock, which increases the accuracy of finding the best matching block in previous frames. We propose a scalable architecture for H.264/AVC Variable Block Size (VBS) Motion Estimation with adaptive computing capability to support various search ranges, input video resolutions, and frame rates. The hardware architecture of the proposed ME consists of scalable Sum of Absolute Difference (SAD) arrays which can perform the Full Search Block Matching Algorithm (FSBMA) for smaller 4x4 blocks. It is also shown that by predicting motion activity and adaptively adjusting the Search Range (SR) on the reconfigurable hardware platform, the computational cost of ME required for inter-frame encoding in the H.264/AVC video coding standard can be reduced significantly. Dynamic Partial Reconfiguration is a unique feature of Field Programmable Gate Arrays (FPGAs) that makes the best use of hardware resources and power by allowing adaptive algorithms to be implemented at run-time. We exploit this feature of FPGAs to implement the proposed reconfigurable ME architecture and maximize its benefits through prediction of motion activity in the video sequences, adaptation of the SR at run-time, and fractional ME refinement. The implemented ME architecture can support real-time applications at a maximum frequency of 90 MHz with multiple reconfigurable regions. Compared to reconfiguration of the complete design, partial reconfiguration produces a smaller bitstream, which allows the FPGA to switch between configurations at higher speed.
    The proposed architecture has a modular structure, regular data flow, and efficient memory organization with fewer memory accesses. By increasing the number of active partial reconfigurable modules from one to four, there is a fourfold increase in data reuse. Also, by introducing an adaptive SR reduction algorithm at the frame level, the computational load of ME is reduced significantly with only a small degradation in PSNR (≤0.1 dB).
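    The Full Search Block Matching Algorithm that the SAD arrays implement can be sketched in software as follows (a minimal Python illustration of FSBMA with a configurable search range, not the hardware architecture; names are hypothetical):

```python
def full_search(ref, cur, bx, by, n, sr):
    """Full-search block matching: find the motion vector (dx, dy) within
    +/- sr pixels that minimises the SAD of the n x n block at (bx, by)
    in the current frame against the reference frame."""
    h, w = len(ref), len(ref[0])
    best_sad, best_mv = float("inf"), (0, 0)
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            rx, ry = bx + dx, by + dy
            if rx < 0 or ry < 0 or rx + n > w or ry + n > h:
                continue               # candidate falls outside the frame
            sad = sum(abs(cur[by + j][bx + i] - ref[ry + j][rx + i])
                      for j in range(n) for i in range(n))
            if sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad
```

The cost grows as (2·sr+1)² SAD evaluations per block, which makes the search-range adaptation described above the dominant lever on computational load.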

    Longitudinal in vivo MRI in a Huntington’s disease mouse model: global atrophy in the absence of white matter microstructural damage

    Huntington’s disease (HD) is a genetically determined neurodegenerative disease. Characterising neuropathology in mouse models of HD is commonly restricted to cross-sectional ex vivo analyses, which are beset by tissue fixation issues. In vivo longitudinal magnetic resonance imaging (MRI) allows disease progression to be probed non-invasively. In the HdhQ150 mouse model of HD, in vivo MRI was employed at two time points, before and after the onset of motor signs, to assess brain macrostructure and white matter microstructure. Ex vivo MRI, immunohistochemistry, transmission electron microscopy and behavioural testing were also conducted. Global brain atrophy was found in HdhQ150 mice at both time points, with no neuropathological progression across time and a relative sparing of the cerebellum. In contrast, no white matter abnormalities were detected in either the MRI or the electron microscopy images. The relationship between motor function and MR-based structural measurements differed between the HdhQ150 and wild-type mice, although there was no relationship between motor deficits and histopathology. Widespread neuropathology prior to symptom onset is consistent with patient studies, whereas the absence of white matter abnormalities conflicts with patient data. The myriad possible reasons for this inconsistency require further attention to improve the translatability of mouse models of disease.

    Efficient Motion Estimation and Mode Decision Algorithms for Advanced Video Coding

    The H.264/AVC video compression standard achieved significant improvements in coding efficiency, but the computational complexity of the H.264/AVC encoder is very high. The main complexity of the encoder comes from variable block size motion estimation (ME) and rate-distortion optimized (RDO) mode decision methods. This dissertation proposes three different methods to reduce the computation of motion estimation. Firstly, the computation of each distortion measure is reduced by a novel two-step edge-based partial distortion search (TS-EPDS) algorithm, in which the entire macroblock is divided into sub-blocks and the calculation order of the partial distortion is determined by the edge strength of the sub-blocks. Secondly, we have developed an early termination algorithm featuring an adaptive threshold based on the statistical characteristics of the rate-distortion (RD) cost of the current block and of previously processed blocks and modes. Thirdly, this dissertation presents a novel adaptive search area selection method that utilizes previously computed motion vector differences (MVDs). In H.264/AVC intra coding, the DC mode is used to predict regions with no unified direction; since all predicted pixel values are the same, smoothly varying regions are not well de-correlated. This dissertation proposes an improved DC prediction (IDCP) mode based on the distance between the predicted and reference pixels. On the other hand, signalling the nine prediction modes in intra 4x4 and 8x8 block units requires many overhead bits, so an intra mode bit rate reduction method is suggested. This dissertation also proposes an enhanced algorithm to estimate the most probable mode (MPM) of each block. The MPM is derived from the prediction mode directions of neighboring blocks, which are weighted according to their positions.
    This dissertation also suggests a fast enhanced cost function for the mode decision of the intra encoder. The enhanced cost function uses the sum of absolute Hadamard-transformed differences (SATD) and the mean absolute deviation of the residual block to estimate the distortion part of the cost function, and a thresholded count of large coefficients to estimate the bit-rate part.
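    The SATD distortion measure mentioned above applies a 4x4 Hadamard transform to the residual block before summing absolute values; a minimal sketch (an illustration only; the scaling convention, often a final division by 2, varies between implementations, and names are hypothetical):

```python
# 4x4 Hadamard matrix (entries +/-1, rows mutually orthogonal).
H = [[1, 1, 1, 1],
     [1, 1, -1, -1],
     [1, -1, -1, 1],
     [1, -1, 1, -1]]

def matmul4(A, B):
    """Multiply two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def satd4x4(residual):
    """Sum of absolute Hadamard-transformed differences of a 4x4 residual:
    SATD = sum(|H * D * H|), an H.264-style distortion measure."""
    t = matmul4(matmul4(H, residual), H)
    return sum(abs(v) for row in t for v in row)
```

Compared with plain SAD, the transform makes the measure track the coded bit cost of the residual more closely, at the price of a few extra additions per block.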