118 research outputs found

    Image Compression Techniques: A Survey in Lossless and Lossy algorithms

    The bandwidth of communication networks has increased continuously as a result of technological advances. However, the introduction of new services and the expansion of existing ones have resulted in even higher demand for bandwidth, which explains the many efforts currently being invested in data compression. The primary goal of these works is to develop techniques for coding information sources such as speech, image, and video so as to reduce the number of bits required to represent a source without significantly degrading its quality. With the large increase in the generation of digital image data, there has been a correspondingly large increase in research activity in the field of image compression. The goal is to represent an image in the fewest number of bits without losing the essential information content within. Images carry three main types of information: redundant, irrelevant, and useful. Redundant information is the deterministic part of the information, which can be reproduced without loss from other information contained in the image. Irrelevant information is the part containing detail beyond the limit of perceptual significance (i.e., psychovisual redundancy). Useful information, on the other hand, is the part that is neither redundant nor irrelevant. Humans usually view decompressed images, so their fidelity is subject to the capabilities and limitations of the Human Visual System. This paper provides a survey of various image compression techniques, their limitations and compression rates, and highlights current research in medical image compression.
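As a concrete illustration of removing redundant information (my sketch, not an example from the survey itself), a run-length encoder losslessly compresses the long runs of identical pixels typical of flat image regions:

```python
def rle_encode(data):
    """Run-length encode a byte sequence as (value, count) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

def rle_decode(runs):
    """Invert rle_encode: expand each (value, count) pair back to bytes."""
    return bytes(b for value, count in runs for b in [value] * count)

# A flat image region is highly redundant: long runs of identical pixels.
row = bytes([255] * 60 + [0] * 4)           # one 64-pixel row
runs = rle_encode(row)
ratio = len(row) / (2 * len(runs))          # 2 bytes per (value, count) pair
assert rle_decode(runs) == row              # lossless: exact reconstruction
```

Here 64 bytes shrink to two pairs (4 bytes), a 16:1 ratio; lossy schemes go further by additionally discarding the irrelevant (psychovisually insignificant) part.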

    Constructing sampling schemes via coupling: Markov semigroups and optimal transport

    In this paper we develop a general framework for constructing and analyzing coupled Markov chain Monte Carlo samplers, allowing for both (possibly degenerate) diffusion and piecewise deterministic Markov processes. For many performance criteria of interest, including the asymptotic variance, the task of finding efficient couplings can be phrased in terms of problems related to optimal transport theory. We investigate general structural properties, proving a singularity theorem that has both geometric and probabilistic interpretations. Moreover, we show that those problems can often be solved approximately, and we support our findings with numerical experiments. For the particular objective of estimating the variance of a Bayesian posterior, our analysis suggests using novel techniques in the spirit of antithetic variates. Addressing the convergence to equilibrium of coupled processes, we furthermore derive a modified Poincaré inequality.
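The antithetic-variates idea invoked in the abstract can be shown with a minimal Monte Carlo sketch (my illustration, not the paper's coupling construction): each Gaussian draw is paired with its reflection, and the negative correlation within the pair lowers the estimator's variance for monotone integrands.

```python
import math
import random

def antithetic_mean(f, n_pairs, seed=0):
    """Monte Carlo estimate of E[f(X)], X ~ N(0, 1), with antithetic pairs.

    Each draw x is paired with its reflection -x; for monotone f the two
    evaluations are negatively correlated, which lowers the variance of the
    estimator compared with 2 * n_pairs independent draws.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        x = rng.gauss(0.0, 1.0)
        total += 0.5 * (f(x) + f(-x))
    return total / n_pairs

# E[exp(X)] = exp(1/2) for X ~ N(0, 1); exp is monotone, so pairing helps.
est = antithetic_mean(math.exp, 10_000)
```

The paper's contribution is far more general (couplings of full Markov processes), but this is the variance-reduction mechanism "in the spirit of antithetic variates".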

    Non-acyclicity of coset lattices and generation of finite groups


    Development of Interatomic Potentials with Uncertainty Quantification: Applications to Two-dimensional Materials

    University of Minnesota Ph.D. dissertation. July 2019. Major: Aerospace Engineering and Mechanics. Advisor: Ellad Tadmor. 1 computer file (PDF); xiii, 198 pages. Atomistic simulation is a powerful computational tool to investigate materials on the microscopic scale and is widely employed to study a large variety of problems in science and engineering. Empirical interatomic potentials have proven to be an indispensable part of atomistic simulation due to their unrivaled computational efficiency in describing the interactions between atoms, which produce the forces governing atomic motion and deformation. Atomistic simulation with interatomic potentials, however, has historically been viewed as a tool limited to providing only qualitative insight. A key reason is that such simulations contain many sources of uncertainty that are difficult to quantify, so the obtained results come without confidence intervals. This thesis presents my research on the development of interatomic potentials with the ability to quantify the uncertainty in simulation results. The methods to train interatomic potentials and quantify the uncertainty are demonstrated throughout this thesis on two-dimensional materials and heterostructures, whose low-dimensional nature makes them distinct from their three-dimensional counterparts in many aspects. Both physics-based and machine-learning interatomic potentials are developed for MoS2 and multilayer graphene structures. The new potentials accurately model the interactions in these systems, reproducing a number of structural, energetic, elastic, and thermal properties obtained from first-principles calculations and experiments. For physics-based potentials, a method based on Fisher information theory is used to analyze the parametric sensitivity and the uncertainty in material properties obtained from phase averages.
We show that the dropout technique can be applied to train neural network potentials and demonstrate how to obtain the predictions and the associated uncertainties of material properties practically and efficiently from such potentials. Putting all these ingredients together, we create an open-source fitting framework to train interatomic potentials, in the hope of making the development and deployment of interatomic potentials easier and less error-prone for other researchers.
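The dropout-based uncertainty estimate described above can be sketched generically (a Monte Carlo dropout illustration with hypothetical fixed weights standing in for a trained potential; this is not code from the thesis framework):

```python
import numpy as np

def dropout_predict(x, W1, b1, W2, b2, p=0.2, n_samples=200, seed=0):
    """Monte Carlo dropout: keep dropout active at prediction time and treat
    the spread of the stochastic forward passes as a model uncertainty."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        h = np.tanh(W1 @ x + b1)
        mask = rng.random(h.shape) > p      # drop hidden units with prob. p
        h = h * mask / (1.0 - p)            # inverted-dropout rescaling
        preds.append(W2 @ h + b2)
    preds = np.asarray(preds)
    return preds.mean(axis=0), preds.std(axis=0)  # prediction, uncertainty

# Toy two-layer network with fixed (hypothetical) weights.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 3)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)
mean, std = dropout_predict(rng.normal(size=3), W1, b1, W2, b2)
```

Averaging many stochastic passes gives the prediction; their standard deviation is the per-output uncertainty attached to it.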

    CONTINUUM DAMAGE MODEL FOR NONLINEAR ANALYSIS OF MASONRY STRUCTURES

    The present work focuses on the formulation of a Continuum Damage Mechanics model for nonlinear analysis of masonry structural elements. The material is studied at the macro-level, i.e. it is modelled as a homogeneous orthotropic continuum. The orthotropic behaviour is simulated by means of an original methodology based on nonlinear damage constitutive laws and on the concept of mapped tensors from the anisotropic real space to the isotropic fictitious one. It rests on establishing a one-to-one mapping between the behaviour of an anisotropic real material and that of an isotropic fictitious one: the problem is solved in the isotropic fictitious space and the results are transported back to the real field. The application of this idea to strain-based Continuum Damage Models is rather innovative. The proposed theory is a generalization of classical theories and allows us to use the models and algorithms developed for isotropic materials. A first version of the model makes use of an isotropic scalar damage model. The adoption of such a simple constitutive model in the fictitious space, together with an appropriate definition of the mathematical transformation between the two spaces, provides a damage model for orthotropic materials able to reproduce the overall nonlinear behaviour, including stiffness degradation and strain-hardening/softening response. The relationship between the two spaces is expressed in terms of a transformation tensor which contains all the information concerning the real orthotropy of the material. A major advantage of this strategy lies in the possibility of adjusting an arbitrary isotropic criterion to the particular behaviour of the orthotropic material. Moreover, orthotropic elastic and inelastic behaviours can be modelled in such a way that totally different mechanical responses can be predicted along the material axes.
The aforementioned approach is then refined in order to account for the different behaviours of masonry in tension and compression. The aim of studying a real material via an equivalent fictitious solid is achieved by appropriately defining two transformation tensors, related to tensile and compressive states respectively. These assumptions make it possible to consider two individual damage criteria, according to the different failure mechanisms, i.e. cracking and crushing. The constitutive model adopted in the fictitious space makes use of two scalar variables, which monitor the local damage under tension and compression, respectively. Such a model is based on a split of the stress tensor into tensile and compressive contributions, which allows it to capture orthotropy-induced damage and also to account for masonry unilateral effects. The orthotropic nature of the Tension-Compression Damage Model adopted in the fictitious space is demonstrated. This feature, together with the assumption of two distinct damage criteria for tension and compression, means the fictitious space can no longer be termed "isotropic". Therefore, the proposed formulation turns the original concept of "mapping the real space into an isotropic fictitious one" into the innovative and more general one of "mapping the real space into a favourable (or convenient) fictitious one". Validation of the model is carried out by means of comparisons with experimental results on different types of orthotropic masonry. The model is fully formulated for the 2-dimensional case, but it can be easily extended to the 3-dimensional case. It provides high algorithmic efficiency, a feature of primary importance when analyses of large-scale masonry structures are carried out. To meet this requirement, it adopts a strain-driven formalism consistent with standard displacement-based finite element codes, and its implementation in finite element programs is straightforward.
Finally, a localized damage model for orthotropic materials is formulated. This is achieved by implementing a crack-tracking algorithm, which forces the crack to develop along a single row of finite elements. Compared with the smeared cracking approach, this approach shows a better capacity to predict realistic collapse mechanisms. The resulting damage in the ultimate condition appears localized in individual cracks. Moreover, the results do not suffer from spurious mesh-size or mesh-bias dependence. The numerical tool is finally validated via a finite element analysis of an in-plane loaded masonry shear wall.
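The kind of isotropic scalar damage model used in the fictitious space can be illustrated in one dimension (a generic sketch with an exponential softening law and illustrative parameter values; the mapped-tensor transformation itself is omitted):

```python
import numpy as np

def damaged_stress(strain, E=1.0, r0=0.01, A=0.9):
    """1-D scalar damage model sketch: sigma = (1 - d) * E * eps.

    Damage d grows once the strain-driven internal variable r exceeds the
    threshold r0, following an exponential softening law. E, r0 and A are
    illustrative values, not calibrated masonry parameters.
    """
    r = max(r0, abs(strain))                  # irreversible internal variable
    d = 0.0 if r <= r0 else 1.0 - (r0 / r) * np.exp(A * (1.0 - r / r0))
    return (1.0 - d) * E * strain, d

# Monotonic loading: elastic rise up to the threshold, then softening.
eps = np.linspace(0.0, 0.05, 200)
sigma = np.array([damaged_stress(e)[0] for e in eps])
```

The stress peaks at the damage threshold and then softens toward zero while `d` grows monotonically from 0 toward 1, which is the stiffness-degradation and strain-softening response the abstract refers to.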

    Simulation assisted performance optimization of large-scale multiparameter technical systems

    During the past two decades the role of dynamic process simulation in the research and development of process and control solutions has grown tremendously. As simulation-assisted working practices have become more popular, the accuracy requirements on simulation results have also tightened. Improving the accuracy of complex, plant-wide models via parameter tuning necessitates practical, scalable methods and tools operating on the correct level of abstraction. In modern integrated process plants, it is not only the performance of individual controllers but also their interactions that determine the overall performance of large-scale control systems. In practice, however, it has become customary to split large-scale problems into smaller pieces and to use traditional analytical control engineering approaches, which inevitably ends in suboptimal solutions. The performance optimization problems related to large control systems and to plant-wide process models are essentially connected in the context of new simulation-assisted process and control design practices. The accuracy of the model obtained with data-based parameter tuning determines the quality of the simulation-assisted controller tuning results. In this doctoral thesis both problems are formulated in the same framework, depicted in the title of the thesis. To solve the optimization problem, a novel method called Iterative Regression Tuning (IRT), applying numerical optimization and multivariate regression, is presented. The IRT method has been designed especially for large-scale systems, and it allows the incorporation of domain-area expertise into the optimization goals. The thesis introduces different variations on the IRT method, technical details related to their application, and various use cases of the algorithm.
The simulation-assisted use case is presented through a number of application examples of control performance and model accuracy optimization.
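One plausible reading of such an iterative regression tuning loop can be sketched as follows (my toy interpretation of the abstract, not the thesis's algorithm; the "plant" and all parameters are hypothetical): probe the simulator around the current parameters, fit a local multivariate linear regression, and step the parameters toward the target outputs.

```python
import numpy as np

def iterative_regression_tuning(simulate, theta, target, n_iter=10,
                                n_probe=20, delta=0.1, seed=0):
    """IRT-style loop sketch: regress simulator outputs on perturbed
    parameters, then invert the fitted local model to move toward target."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float)
    for _ in range(n_iter):
        # Probe the simulator in a neighbourhood of the current parameters.
        P = theta + delta * rng.standard_normal((n_probe, theta.size))
        Y = np.array([simulate(p) for p in P])
        y0 = simulate(theta)
        # Multivariate linear fit: Y - y0 ~ (P - theta) @ G, so G = J^T.
        G, *_ = np.linalg.lstsq(P - theta, Y - y0, rcond=None)
        # Solve the local model for the step toward the target outputs.
        step, *_ = np.linalg.lstsq(G.T, target - y0, rcond=None)
        theta = theta + step
    return theta

# Hypothetical 2-parameter, 2-output "plant" whose outputs we want to match.
simulate = lambda p: np.array([p[0] + 2 * p[1], p[0] * p[1]])
theta = iterative_regression_tuning(simulate, [1.0, 1.0],
                                    np.array([5.0, 3.0]))
```

The regression step is what makes the scheme scale: the local sensitivity model comes from batched simulation runs rather than analytical derivatives of the plant-wide model.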

    Induced seismicity analysis for reservoir characterization at a petroleum field in Oman

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 2005. Includes bibliographical references. This thesis presents the analysis and interpretation of passive seismic data collected over a 20-month monitoring period. The investigation is divided into four studies, each focusing on a different aspect of the seismic data to infer reservoir properties. First, I applied three different methods (the iterative linearized, nonlinear grid-search, and double-difference methods) to relocate 405 microearthquakes that occurred between October 1999 and June 2001 in a producing field in Oman. A numerical technique is applied to "collapse" the relocated hypocenters and to find the simplest structural interpretation consistent with the data. Comparing the methods, the applicability of waveform-correlation methods such as double-difference is limited in this case by the relatively large number of events with dissimilar waveforms. The nonlinear grid-search method gives the best results, with the smallest average rms error of the absolute locations, because unlike the iterative linearized method it avoids the local-minimum problem. The relocated hypocenters clearly delineate nearly vertical, northeast-southwest striking faults near the crest of the field, which is consistent with the graben fault system mapped by surface geologic surveys and reflection seismic interpretations. I also performed statistical tests to estimate location errors, and found that the station geometry is the major factor limiting the accuracy of focal depths. Secondly, this thesis presents a nonlinear wavelet-based approach to linear waveform inversion of high-frequency seismograms for the estimation of a point-source mechanism and its time function. For earthquake mechanism inversions, it is important to stabilize the problem by reducing the number of parameters to be determined.
Commonly, overlapping isosceles triangles or boxcar functions are used for the parameterization of the moment tensor rate functions (MTRFs). Here, I develop a wavelet-based strategy that allows us to construct an adaptive, problem-dependent parameterization for the MTRFs employing fractional spline wavelets. Synthetic results demonstrate that the adaptive parameterization improves the numerical approximation to the model space and therefore allows more accurate estimation of the MTRFs. The waveform inversion is performed in the wavelet domain and leads to a multiresolution sparse-matrix representation of the inverse problem. At each resolution level a regularized least-squares solution is obtained using the conjugate gradient method. The wavelet-based waveform inversion method has been applied successfully in three real-data examples: the April 22, 2002 Au Sable Forks, New York earthquake, the September 3, 2002 Yorba Linda, California earthquakes, and 11 M>1 microearthquakes in a producing field in Oman. In the Oman field, the dominant styles of focal mechanism are left-lateral strike-slip for events with focal depths less than 1.5 km, and dip-slip along an obliquely trending fault for those with focal depths greater than 2.0 km. Thirdly, the covariance matrix method of shear-wave splitting analysis is presented. Unlike conventional methods that usually analyze only two horizontal components, this method processes all three components of the seismogram simultaneously, allowing not only the orientation but also the dip of fractures to be resolved. Synthetic test results show that this method is stable even at high noise levels. The method is applied to the Oman microearthquake records, which display distinctive shear-wave splitting and polarization directions. From the polarizations, I estimate the predominant subsurface fracture directions and dipping angles.
From the time delays of the split waves I determine the fracture density distribution in the reservoir. Finally, I examine the spatio-temporal characteristics of the microseismicity in the producing reservoir. The frequency-magnitude distribution, measured by the b-value, is determined using the maximum likelihood method. I found that b-values are higher for events below the deeper Shuaiba oil reservoir than for those above it. The feasibility of monitoring the temporal change of b-values is also demonstrated. The analysis of production and injection well data shows that seismicity rates in the field are all strongly correlated with gas production from the shallower Natih Formation. Microseismicity, focal mechanisms, GPS analysis, and production/injection well data all suggest that the NE-SW bounding graben fault system responds elastically to the gas-production-induced stresses. Normal faulting is enhanced in the reservoirs by the compaction-related stresses acting on the graben fault system. by Edmond Kin-Man Sze. Ph.D.
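The maximum-likelihood b-value estimation mentioned above is commonly done with Aki's estimator; a minimal sketch on a synthetic Gutenberg-Richter catalog (my illustration, not the thesis code):

```python
import math
import random

def b_value_mle(mags, m_c):
    """Maximum-likelihood b-value (Aki's formula):
    b = log10(e) / (mean(M) - Mc), assuming magnitudes follow the
    Gutenberg-Richter law above the completeness magnitude m_c."""
    above = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)

# Synthetic catalog drawn from a Gutenberg-Richter law with true b = 1.0:
# magnitudes above Mc are exponential with rate beta = b * ln(10).
rng = random.Random(0)
beta = 1.0 * math.log(10)
mags = [1.0 + rng.expovariate(beta) for _ in range(50_000)]
b_est = b_value_mle(mags, m_c=1.0)
```

Comparing such estimates for event subsets above and below a reservoir horizon is the kind of spatial b-value contrast the thesis reports.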

    Diffusion tensor imaging of the brain: towards quantitative clinical tools

    Get PDF
    The thesis explores three major methodological questions in clinical brain DTI, in the context of a clinical study on HIV. The first question is how to improve DTI resolution. The second problem addressed in the thesis is how to create a multimodal population-specific atlas. The third question concerns the computation of statistics to compare white matter (WM) regions between controls and HIV patients. Clinical DTIs have low spatial resolution and signal-to-noise ratio, making it difficult to compute meaningful statistics. We propose a super-resolution reconstruction (SRR) algorithm for improving DTI resolution. The SRR is achieved using an anisotropic regularization prior. This method demonstrates improved fractional anisotropy and tractography. In order to spatially normalize all images in a consistent coordinate system, we create a multimodal population-specific brain atlas using the T1 and DTI images from an HIV dataset. We also transfer WM labels from an existing white matter parcellation map to create a probabilistic WM atlas. This atlas can be used for region-of-interest based statistics and for refining manual segmentations. On the statistical analysis side, we improve the existing tract-based spatial statistics (TBSS) by using DTI-based registration for spatial normalization. Contrary to traditional TBSS routines, we use multivariate statistics for detecting changes in WM tracts. With the improved method it is possible to detect differences in WM regions that were previously non-significant and to correlate them with the neuropsychological test scores of the subjects.
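The super-resolution step can be sketched in one dimension (a generic regularized least-squares illustration; a plain first-difference Tikhonov term stands in for the thesis's anisotropic prior, and the setup is hypothetical): several shifted, downsampled observations are fused into one high-resolution signal.

```python
import numpy as np

def srr_tikhonov(observations, shifts, n_hr, factor, lam=1e-3):
    """1-D super-resolution sketch: recover a high-resolution signal from
    shifted, downsampled copies by solving
    min ||A x - y||^2 + lam * ||L x||^2 with L a first-difference operator."""
    rows, rhs = [], []
    for y, s in zip(observations, shifts):
        for i, v in enumerate(y):
            row = np.zeros(n_hr)
            row[i * factor + s] = 1.0        # sampling operator: shift + decimate
            rows.append(row)
            rhs.append(v)
    A = np.array(rows)
    L = np.diff(np.eye(n_hr), axis=0)        # first-difference regularizer
    lhs = A.T @ A + lam * L.T @ L            # normal equations of the penalized fit
    return np.linalg.solve(lhs, A.T @ np.array(rhs))

# Ground-truth high-resolution signal and two shifted low-resolution copies.
x = np.sin(np.linspace(0, np.pi, 16))
obs = [x[0::2], x[1::2]]                     # shifts 0 and 1, factor 2
x_hat = srr_tikhonov(obs, [0, 1], n_hr=16, factor=2)
```

In the thesis the same idea operates on DTI volumes, with the regularizer made anisotropic so that smoothing follows the local fiber geometry rather than acting uniformly.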

    Fourth SIAM Conference on Applications of Dynamical Systems
