
    On optimal design and applications of linear transforms

    Linear transforms are encountered in many fields of applied science and engineering. In the past, conventional block transforms provided acceptable answers to different practical problems. But now, under increasing competitive pressures, with the growing reservoir of theory and a corresponding development of computing facilities, a real demand has been created for methods that systematically improve performance. As a result, the past two decades have seen the explosive growth of a class of linear transform theory known as multiresolution signal decomposition. The goal of this work is to design and apply these advanced signal processing techniques to several different problems. The optimal design of subband filter banks is considered first, and several design examples are presented for M-band filter banks. Conventional design approaches are found to present problems when the number of constraints increases. A novel optimization method is proposed using a step-by-step design of a hierarchical subband tree; this method is shown to yield performance improvements in applications such as subband image coding. The subband tree structuring is then discussed and generalized algorithms are presented. Next, attention is focused on the interference excision problem in direct sequence spread spectrum (DSSS) communications. The analytical and experimental performance of a DSSS receiver employing excision is presented. Different excision techniques are evaluated and ranked along with the proposed adaptive subband transform-based exciser. The robustness of the considered methods is investigated for both time-localized and frequency-localized interferers. A domain-switchable excision algorithm is also presented. Finally, some of the ideas associated with the interference excision problem are utilized in the spectral shaping of a particular biological signal, namely heart rate variability. The improvements in the spectral shaping process are shown via time-frequency analysis. In general, this dissertation demonstrates the proliferation of new tools for digital signal processing.
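The subband decomposition these designs build on can be illustrated with the simplest case: a two-band (Haar) analysis/synthesis filter bank with perfect reconstruction. This is a generic textbook sketch, not the dissertation's M-band or hierarchical-tree designs:

```python
import numpy as np

def haar_analysis(x):
    """Split a signal into low- and high-band subbands (2-band Haar filter bank)."""
    x = np.asarray(x, dtype=float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation subband
    high = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail subband
    return low, high

def haar_synthesis(low, high):
    """Reconstruct the signal from its subbands (perfect reconstruction)."""
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2)
    x[1::2] = (low - high) / np.sqrt(2)
    return x

x = np.array([4.0, 2.0, 5.0, 7.0])
low, high = haar_analysis(x)
assert np.allclose(haar_synthesis(low, high), x)  # perfect reconstruction holds
```

A hierarchical subband tree of the kind discussed above is obtained by applying such a split recursively to the low (or any chosen) subband.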

    Adaptive Constraint Solving for Information Flow Analysis

    In program analysis, unknown properties of terms are typically represented symbolically as variables. Bound constraints on these variables can then specify multiple optimisation goals for computer programs and find application in areas such as type theory, security, alias analysis and resource reasoning. Resolution of bound constraints is a problem steeped in graph theory; interdependencies between the variables are represented as a constraint graph. Additionally, constants are introduced into the system as concrete bounds over these variables, and the constants themselves are ordered over a lattice which is, once again, represented as a graph. Despite graph algorithms being central to bound constraint solving, most approaches to program optimisation that use bound constraint solving have treated their graph theoretic foundations as a black box. Little has been done to investigate the computational costs or to design efficient graph algorithms for constraint resolution. Emerging examples of these lattices and bound constraint graphs, particularly from the domain of language-based security, show that these graphs and lattices are structurally diverse and can be arbitrarily large. Therefore, there is a pressing need to investigate the graph theoretic foundations of bound constraint solving. In this thesis, we investigate the computational costs of bound constraint solving from a graph theoretic perspective for Information Flow Analysis (IFA); IFA is a subfield of language-based security which verifies whether the confidentiality and integrity of classified information are preserved as it is manipulated by a program. We present a novel framework based on graph decomposition for solving the (atomic) bound constraint problem for IFA. Our approach enables us to abstract away from connections between individual vertices to those between sets of vertices in both the constraint graph and an accompanying security lattice which defines an ordering over constants. Thereby, we are able to achieve significant speedups compared to state-of-the-art graph algorithms applied to bound constraint solving. More importantly, our algorithms are highly adaptive in nature and adapt seamlessly to the structure of the constraint graph and the lattice. The computational cost of our approach is a function of the latent scope of decomposition in the constraint graph and the lattice; therefore, we enjoy the fastest runtime for every point in the structure-spectrum of these graphs and lattices. While the techniques in this dissertation are developed with IFA in mind, they can be extended to other applications of the bound constraints problem, such as type inference and program analysis frameworks which use annotated type systems, where constants are ordered over a lattice.
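To make the baseline concrete: atomic bound constraint solving can be seen as a least-fixpoint propagation of lattice joins along the constraint graph. The sketch below is a plain worklist solver, not the decomposition-based framework the thesis proposes, and all names in it are illustrative:

```python
from collections import defaultdict, deque

def solve(edges, lower_bounds, join, bottom):
    """Least solution of atomic bound constraints x <= y over a finite lattice.

    edges: list of (x, y) meaning "label of x flows into label of y";
    lower_bounds: {variable: constant} concrete lower bounds;
    join: binary least-upper-bound on lattice elements; bottom: least element.
    """
    succ = defaultdict(list)
    for x, y in edges:
        succ[x].append(y)
    val = defaultdict(lambda: bottom)
    val.update(lower_bounds)
    work = deque(val.keys())          # seed with variables holding a bound
    while work:
        x = work.popleft()
        for y in succ[x]:             # propagate joins along constraint edges
            new = join(val[y], val[x])
            if new != val[y]:
                val[y] = new
                work.append(y)
    return dict(val)

# Two-point security lattice LOW <= HIGH, as in information flow analysis.
join = lambda a, b: 'HIGH' if 'HIGH' in (a, b) else 'LOW'
sol = solve([('a', 'b'), ('b', 'c')], {'a': 'HIGH'}, join, 'LOW')
# The HIGH label propagates transitively: sol['b'] and sol['c'] are 'HIGH'.
```

The thesis's contribution can be read against this baseline: rather than propagating vertex by vertex, it reasons about connections between whole sets of vertices in both graph and lattice.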

    A gravity interpretation of the Central North Sea

    A gravity investigation of the Central North Sea has been undertaken with the aim of supplementing a parallel seismic investigation (Arsenikos et al., 2015) by targeting those areas where the seismic information was sparse or of poor quality. By stripping the gravity effect of the Zechstein and younger sequence it was hoped that concealed Upper Palaeozoic basins could be identified in the residual gravity signatures and distinguished from anomalies associated with Late Caledonian granitic plutons. Density logs from a set of wells across the region were compiled and used to calibrate a density model for the cover sequence. This model employed a combination of compaction trends and burial anomalies in the post-Zechstein units, and a relationship between overall thickness and average density in the Zechstein unit. It was used, together with a depth-converted structural model from the seismic interpretation, to calculate the gravity effect down to base Zechstein. This, along with a long-wavelength background field, was subtracted from the observations to leave a residual gravity anomaly that was inverted to produce a 3D model of variations in the thickness of a pre-Zechstein layer incorporating the effects of both basins and granites. The modelling results were analysed in combination with magnetic imaging and available mapping of intra-Upper Palaeozoic seismic reflectors. Granites were often easy to identify on the basis of a low in the gravity inversion surface that coincided with a structural high defined seismically and, in some cases, a magnetic signature. There are, however, some more ambiguous features that cannot be confidently classified without further information. Relatively low-density rocks within the Lower Palaeozoic basement, and zones of high-density basement or pervasive high-density intrusive rocks, introduce distortion into the model, and the identification and separation of these influences requires more detailed combined seismic, gravity and magnetic modelling. Potential targets (areas of pre-Zechstein sedimentary thickening) were identified in Quads 19-20, Quads 26-28, and just to the north of a 150 km offshore extrapolation of the line which forms the southern margin of the Tweed Basin in the onshore area (the Pressen-Flodden-Ford faults). Geophysical anomalies in the Q36-37 area suggest a complex interplay between sedimentary and igneous features and would also benefit from further investigation. A ‘ramp’ in the gravity inversion surface appears to be linked, at least in part, to lateral density variations associated with overcompaction along the Sole Pit axis. The geophysical feature extends beyond previous mapping of that axis and is overlain by the Breagh gas field, so is an appropriate target for more detailed study (which could address the possibility of a basement influence on the observed anomalies). The results obtained indicate that gravity/magnetic interpretation provides a useful supplement to seismic reflection surveys, even where the latter form the primary exploration method. There are, for example, features at the southern margin of the Forth Approaches Basin and possible intra-basinal structures within the North Dogger Basin that could add to our understanding of those areas. The new government-funded seismic/gravity/magnetic surveys over the Central North Sea, which were conducted in 2015 and will be released in 2016, will provide the ideal resource with which to follow up the results of this investigation.
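The stripping step described above can be caricatured with the infinite-slab approximation g = 2πGρh; the study itself used a well-log-calibrated 3-D density model, and every number below is invented purely for illustration:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def slab_effect_mgal(density_kg_m3, thickness_m):
    """Gravity effect of an infinite horizontal slab, g = 2*pi*G*rho*h, in mGal."""
    return 2 * math.pi * G * density_kg_m3 * thickness_m * 1e5  # m/s^2 -> mGal

# Made-up numbers: strip a 100 m cover of 2300 kg/m^3 plus a background field.
observed = 25.0                                   # observed anomaly, mGal
cover = slab_effect_mgal(2300.0, 100.0)           # cover-sequence effect, ~9.6 mGal
background = 5.0                                  # long-wavelength background, mGal
residual = observed - cover - background          # residual anomaly to be inverted
```

The real workflow replaces the slab with a laterally varying density/thickness model and then inverts the residual for pre-Zechstein layer thickness.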

    Approaches to Conflict-free Replicated Data Types

    Conflict-free Replicated Data Types (CRDTs) allow optimistic replication in a principled way. Different replicas can proceed independently, remaining available even under network partitions, and always converging deterministically: replicas that have received the same updates will have equivalent state, even if the updates were received in different orders. After a historical tour of the evolution from sequential data types to CRDTs, we present in detail the two main approaches to CRDTs, operation-based and state-based, including two important variations, the pure operation-based and the delta-state based. Intended as a tutorial for prospective CRDT researchers and designers, it provides solid coverage of the essential concepts, clarifying some misconceptions which frequently occur, but also presents some novel insights gained from considerable experience in designing both specific CRDTs and approaches to CRDTs.
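The state-based approach can be illustrated with the simplest standard CRDT, a grow-only counter: each replica tracks per-replica counts and merge is a pointwise maximum (a lattice join), so replicas converge regardless of merge order. This is a generic textbook example, not code from the tutorial:

```python
class GCounter:
    """State-based grow-only counter CRDT."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # per-replica increment counts

    def increment(self):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def merge(self, other):
        # Pointwise max is a join: commutative, associative, idempotent,
        # so merges can arrive in any order and any number of times.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter('A'), GCounter('B')
a.increment(); a.increment(); b.increment()   # concurrent, independent updates
a.merge(b); b.merge(a)
assert a.value() == b.value() == 3            # replicas converge
```

An operation-based variant would instead ship the increments themselves and rely on reliable delivery; the merge-as-join structure above is what makes the state-based flavour tolerate duplicated and reordered messages.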

    Wavelets and multirate filter banks : theory, structure, design, and applications

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2004. Includes bibliographical references (p. 219-230) and index. Wavelets and filter banks have revolutionized signal processing with their ability to process data at multiple temporal and spatial resolutions. Fundamentally, continuous-time wavelets are governed by discrete-time filter banks with properties such as perfect reconstruction, linear phase and regularity. In this thesis, we study multi-channel filter bank factorization and parameterization strategies, which facilitate designs with specified properties that are enforced by the actual factorization structure. For M-channel filter banks (M ≥ 2), we develop a complete factorization, M-channel lifting factorization, using simple ladder-like structures as predictions between channels to provide robust and efficient implementation; perfect reconstruction is structurally enforced, even under finite precision arithmetic and quantization of lifting coefficients. With lifting, optimal low-complexity integer wavelet transforms can thus be designed using a simple and fast algorithm that incorporates prescribed limits on hardware operations for power-constrained environments. As filter bank regularity is important for a variety of reasons, an aspect of particular interest is the structural imposition of regularity onto factorizations based on the dyadic form uvᵀ. We derive the corresponding structural conditions for regularity, for which M-channel lifting factorization provides an essential parameterization. As a result, we are able to design filter banks that are exactly regular and amenable to fast implementations with perfect reconstruction, regardless of the choice of free parameters and possible finite precision effects. Further constraining u = v ensures regular orthogonal filter banks, whereas a special dyadic form is developed that guarantees linear phase. We achieve superior coding gains within 0.1% of the optimum, and benchmarks conducted on image compression applications show clear improvements in perceptual and objective performance. We also consider the problem of completing an M-channel filter bank, given only its scaling filter. M-channel lifting factorization can efficiently complete such biorthogonal filter banks. On the other hand, an improved scheme for completing paraunitary filter banks is made possible by a novel order-one factorization which allows greater design flexibility, resulting in improved frequency selectivity and energy compaction over existing state-of-the-art methods. In a dual setting, the technique can be applied to transmultiplexer design to achieve higher-rate data transmissions. by Ying-Jui Chen. Ph.D.
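The lifting idea, ladder-like predict/update steps whose inversion is exact even under rounding, can be sketched for the two-channel Haar case; the thesis develops the general M-channel factorization, of which this is only the simplest instance:

```python
def lift_forward(x):
    """Integer Haar transform via lifting: predict then update."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict: odd from even
    approx = [e + d // 2 for e, d in zip(even, detail)]  # update (integer floor div)
    return approx, detail

def lift_inverse(approx, detail):
    """Undo each ladder step in reverse order; rounding cancels exactly."""
    even = [a - d // 2 for a, d in zip(approx, detail)]  # undo update
    odd = [d + e for e, d in zip(even, detail)]          # undo predict
    x = [0] * (2 * len(approx))
    x[0::2], x[1::2] = even, odd
    return x

x = [5, 7, 3, 1]
a, d = lift_forward(x)
assert lift_inverse(a, d) == x  # reconstruction is structurally perfect
```

Because every ladder step is inverted by subtracting exactly what was added, including the rounded term, perfect reconstruction survives integer arithmetic and coefficient quantization, which is the structural property the factorization above enforces.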

    Numerical modelling of post-seismic rupture propagation after the Sumatra 26.12.2004 earthquake constrained by GRACE gravity data

    In recent decades, the development of surface and satellite geodetic and geophysical observations has brought new insights into the seismic cycle, documenting new features of inter-, co-, and post-seismic processes. In particular, since 2002 the GRACE satellite mission has provided monthly models of the global gravity field with unprecedented accuracy, showing temporal variations of the Earth's gravity field, including those caused by mass redistribution associated with earthquake processes. When combined with GPS measurements, these new data have made it possible to assess the relative importance of afterslip and viscoelastic relaxation after the Sumatra earthquake of 26 December 2004. Indeed, the observed post-seismic crustal displacements were fitted well by a viscoelastic relaxation model assuming a Burgers body rheology for the asthenosphere (60–220 km deep) with a transient viscosity as low as 4 × 10^17 Pa s and a constant ~10^19 Pa s steady-state viscosity in the 60–660 km depth range. However, even the low-viscosity asthenosphere provides an amplitude of strain whose gravity effect does not exceed 50 per cent of the GRACE gravity variations, so additional localized slip of about 1 m was suggested at the downdip extension of the coseismic rupture. Post-seismic slip at the coseismic rupture or its downdip extension has been suggested by several authors, but the mechanism of post-seismic fault propagation has never been investigated numerically. The depth and size of the localized slip area, as well as the rate and time decay during the post-seismic stage, were either assigned a priori or estimated by fitting real geodetic or gravity data. In this paper we investigate post-seismic rupture propagation by modelling two consecutive stages. First, we run a long-term geodynamic simulation to self-consistently produce the initial stress and temperature distribution. At the second stage, we simulate a seismic cycle using the results of the first step as initial conditions. The second, short-term simulation involves three substeps: additional stress accumulation after part of the subduction channel is locked; spontaneous coseismic slip; and the formation and development of damage zones producing afterslip. During the last substep, post-seismic stress leads to gradual ~1 m slip localized at three faults around the ~100-km downdip extension of the coseismic rupture. We used the displacement field caused by this slip to calculate pressure and density variations and to simulate gravity field variations. The wavelength of the calculated gravity anomaly fits that of the real data well, and its amplitude provides about 60 per cent of the observed GRACE anomaly. Importantly, the surface displacements caused by the estimated afterslip are much smaller than those registered by GPS networks. As a result, the cumulative effect of Burgers-rheology viscoelastic relaxation (which explains the measured GPS displacements and about half of the gravity variations) plus the post-seismic slip predicted by the damage rheology model (which causes much smaller surface displacements but provides the other half of the GRACE gravity variations) fits both sets of real data well. Hence, the presented numerical modelling based on damage rheology supports the process of post-seismic downdip rupture propagation previously hypothesized from the GRACE gravity data.
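As a rough consistency check on the quoted viscosities, the implied Maxwell relaxation times τ = η/μ can be computed; the shear modulus here is an assumed typical upper-mantle value (~70 GPa), not a figure taken from the paper:

```python
# Back-of-envelope Maxwell relaxation times tau = eta / mu for the
# viscosities quoted in the abstract. mu is an ASSUMED typical
# upper-mantle shear modulus, not a value from the paper.
mu = 7e10                        # Pa, assumed shear modulus (~70 GPa)
year = 365.25 * 24 * 3600        # seconds per Julian year

tau_transient = 4e17 / mu / year  # transient viscosity 4e17 Pa s -> ~0.2 yr
tau_steady = 1e19 / mu / year     # steady-state viscosity ~1e19 Pa s -> ~4.5 yr
# The short transient time drives the rapid early post-seismic signal;
# the longer steady-state time controls the multi-year relaxation.
```

The order-of-magnitude gap between the two time scales is what lets a Burgers body fit both the fast initial and the slower long-term post-seismic response.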
