218 research outputs found

    A MILP approach for designing robust variable-length codes based on exact free distance computation

    No full text
    This paper addresses the design of joint source-channel variable-length codes with maximal free distance for given codeword lengths. While previous design methods are mainly based on bounds on the free distance of the code, the proposed algorithm exploits an exact characterization of the free distance. The code optimization is cast in the framework of mixed-integer linear programming and can handle practical alphabet sizes in reasonable computing time.
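
    The abstract does not give the paper's MILP formulation. As a rough, hedged illustration of how variable-length code design can be cast as a mixed-integer program, the sketch below uses PuLP to pick integer codeword lengths minimizing the expected length under the Kraft inequality; the free-distance constraints that are the paper's actual subject are not modeled, and the symbol probabilities and max_len bound are made-up values.

```python
# Toy MILP sketch: choose binary codeword lengths minimizing expected length
# under the Kraft inequality. This is NOT the paper's formulation (which fixes
# the lengths and maximizes free distance); it only illustrates the MILP style.
# Requires: pip install pulp
import pulp

probs = [0.4, 0.3, 0.2, 0.1]   # hypothetical source symbol probabilities
max_len = 8                    # assumed upper bound on codeword length

model = pulp.LpProblem("vlc_length_design", pulp.LpMinimize)

# x[i][l-1] == 1 iff symbol i is assigned a codeword of length l.
x = [[pulp.LpVariable(f"x_{i}_{l}", cat="Binary") for l in range(1, max_len + 1)]
     for i in range(len(probs))]

for i in range(len(probs)):
    model += pulp.lpSum(x[i]) == 1            # each symbol gets exactly one length

# Kraft inequality: a binary prefix-free code with these lengths exists.
model += pulp.lpSum(x[i][l - 1] * 2.0 ** (-l)
                    for i in range(len(probs))
                    for l in range(1, max_len + 1)) <= 1

# Objective: expected codeword length.
model += pulp.lpSum(probs[i] * l * x[i][l - 1]
                    for i in range(len(probs))
                    for l in range(1, max_len + 1))

model.solve(pulp.PULP_CBC_CMD(msg=False))
lengths = [next(l for l in range(1, max_len + 1) if x[i][l - 1].value() > 0.5)
           for i in range(len(probs))]
print("chosen codeword lengths:", lengths)
```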

    Iterative Construction of Reversible Variable-Length Codes and Variable-Length Error-Correcting Codes

    Full text link

    New Free Distance Bounds and Design Techniques for Joint Source-Channel Variable-Length Codes

    No full text
    This paper proposes branch-and-prune algorithms for searching prefix-free joint source-channel codebooks with maximal free distance for given codeword lengths. For that purpose, it introduces improved techniques to bound the free distance of variable-length codes.
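
    The improved free-distance bounds are the paper's contribution and are not reproduced here. The sketch below only shows the generic branch-and-prune pattern over prefix-free codeword assignments, with a crude pairwise-distance stand-in for the true free distance and a matching (weak) pruning bound; the codeword lengths are made up.

```python
# Generic branch-and-prune skeleton for searching prefix-free codebooks with
# fixed codeword lengths. The distance measure and the pruning bound below are
# weak placeholders for the paper's exact free-distance bounding techniques.
from itertools import product

LENGTHS = [2, 3, 3]                      # hypothetical, fixed codeword lengths

def pairwise_min_distance(code):
    """Crude stand-in metric: minimum Hamming distance over equal-length pairs
    (the true free distance also accounts for concatenated codeword sequences)."""
    dists = [sum(a != b for a, b in zip(u, v))
             for i, u in enumerate(code) for v in code[i + 1:] if len(u) == len(v)]
    return min(dists) if dists else float("inf")

def distance_upper_bound(partial_code):
    """Optimistic bound: adding more codewords can only lower the minimum
    pairwise distance, so the current value bounds the final one from above."""
    return pairwise_min_distance(partial_code)

def is_prefix_free(code, word):
    return all(not w.startswith(word) and not word.startswith(w) for w in code)

def branch(code, remaining, best):
    if not remaining:
        d = pairwise_min_distance(code)
        return (d, list(code)) if d > best[0] else best
    for bits in product("01", repeat=remaining[0]):
        word = "".join(bits)
        if not is_prefix_free(code, word):
            continue                                  # prune: not prefix-free
        if distance_upper_bound(code + [word]) <= best[0]:
            continue                                  # prune: cannot beat incumbent
        best = branch(code + [word], remaining[1:], best)
    return best

best_distance, best_code = branch([], LENGTHS, (0, None))
print(best_distance, best_code)
```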

    Identification of Biomolecular Conformations from Incomplete Torsion Angle Observations by Hidden Markov Models

    Get PDF
    We present a novel method for the identification of the most important conformations of a biomolecular system from molecular dynamics or Metropolis Monte Carlo time series by means of Hidden Markov Models (HMMs). We show that identification is possible based on the observation sequences of some essential torsion or backbone angles. In particular, the method still provides good results even if the conformations have a strong overlap in these angles. To apply HMMs to angular data, we use von Mises output distributions. The performance of the resulting method is illustrated by numerical tests and by application to a hybrid Monte Carlo time series of trialanine and to MD simulation results of a DNA oligomer.
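
    As a hedged illustration of the modelling choice described above, the sketch below evaluates the likelihood of a torsion-angle sequence under a two-state HMM whose emission densities are von Mises distributions (via scipy.stats.vonmises). All parameters and observations are invented for illustration; the paper estimates them from simulation data.

```python
# Minimal sketch of an HMM with von Mises emission densities for angular
# observations, assuming known parameters (all numbers below are made up).
import numpy as np
from scipy.stats import vonmises

# Two hidden conformational states, one torsion angle observed (radians).
trans = np.array([[0.95, 0.05],
                  [0.10, 0.90]])          # state transition matrix
start = np.array([0.5, 0.5])              # initial state distribution
mus    = np.array([-1.0, 2.0])            # emission mean angle per state
kappas = np.array([4.0, 8.0])             # emission concentration per state

def forward_loglik(angles):
    """Log-likelihood of an angle sequence under the HMM (scaled forward pass)."""
    emit = np.stack([vonmises.pdf(angles, kappas[k], loc=mus[k])
                     for k in range(2)], axis=1)      # shape (T, 2)
    alpha = start * emit[0]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, len(angles)):
        alpha = (alpha @ trans) * emit[t]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

angles = np.array([-1.1, -0.9, -1.2, 2.1, 1.9, 2.2])  # toy observation sequence
print(forward_loglik(angles))
```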

    A general computational tool for structure synthesis

    Get PDF
    Synthesis of structures is a very difficult task even with only a small number of components forming a system, yet it is the catalyst of innovation. Molecular structures and nanostructures typically have a large number of similar components but different connections, which makes their synthesis even more challenging. This thesis presents a novel method, together with its related algorithms and computer programs, for the synthesis of structures. The method is based on two ideas: (1) the structure is represented by a graph and further by its adjacency matrix; and (2) instead of exploiting only the eigenvalues of the adjacency matrix, both the eigenvalues and the eigenvectors are exploited; in particular, the components of the eigenvectors prove very useful in algorithm development. This novel method is called the Eigensystem method. Its complexity is equal to that of the well-known Nauty program from the combinatorial world. However, the Eigensystem method works for weighted graphs, both directed and undirected, whereas the Nauty program works only for unweighted graphs, directed or undirected. The difference stems from the different philosophies underlying the two methods: the Nauty program is based on a recursive component-decomposition strategy, which could involve unmanageable complexity when dealing with weighted graphs, although no such attempt has been reported in the literature. It is worth noting that in practical applications of structure synthesis, weighted graphs are more useful than unweighted graphs for representing physical systems. Built around the Eigensystem method, this thesis presents algorithms and computer programs for the three fundamental problems in structure synthesis, namely isomorphism/automorphism detection, unique labeling, and the enumeration of structures or graphs.
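
    The thesis's Eigensystem algorithms are not reproduced here. As a small, hedged illustration of the underlying idea, namely using both eigenvalues and eigenvector components of a weighted adjacency matrix as invariants, the sketch below builds a spectral signature and checks that a relabelled copy of a weighted undirected graph yields the same signature; the matrices are made-up examples.

```python
# Illustrative sketch of using eigenvalues AND eigenvector components of a
# weighted adjacency matrix as graph invariants. This shows only the basic
# spectral idea, not the thesis's Eigensystem algorithm.
import numpy as np

def spectral_signature(adj, decimals=8):
    """Sorted eigenvalues plus, per vertex, the sorted absolute eigenvector
    components. Isomorphic graphs with non-degenerate spectra produce equal
    signatures, so this serves as a quick consistency check."""
    vals, vecs = np.linalg.eigh(adj)              # symmetric (undirected) case
    vertex_rows = np.sort(np.abs(vecs), axis=1)   # one row of |components| per vertex
    return (np.round(vals, decimals).tolist(),
            sorted(map(tuple, np.round(vertex_rows, decimals))))

# Two weighted undirected graphs given as adjacency matrices (made-up data).
A = np.array([[0, 2, 0],
              [2, 0, 1],
              [0, 1, 0]], dtype=float)
perm = np.array([[0, 0, 1],
                 [1, 0, 0],
                 [0, 1, 0]], dtype=float)
B = perm @ A @ perm.T                             # a relabelled copy of A

print(spectral_signature(A) == spectral_signature(B))   # True
```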

    Verification of communication protocols in web-services

    Get PDF
    The last decade has seen a massive migration towards the service-oriented paradigm, which has resulted in 1) resolving software interoperability issues, 2) increased re-usability of code, 3) easy inter-application communication, and 4) significant cost reduction. However, individual web-services seldom meet the business requirements of an application. Usually an application life-cycle involves interacting with several web-services based on its workflow. Considering that this might require 1) sharing data with multiple services, 2) tracking the response for each service request, 3) tracking and compensating for service failures, etc., a domain-specific language is usually used for service composition. Each service has an interface to outline its functionality, and services are composed based on these interfaces. Nevertheless, any error or omission in these exposed interfaces can result in a myriad of glitches in the composition and the overlying application. This is further exacerbated by dynamic service composition techniques wherein services can be added, removed or updated at runtime. Consequently, service-consuming applications depend heavily on verification techniques to vouch for their reliability and usability. The scope of applications based on service composition is rapidly expanding into critical domains where the stakes are high (e.g. stock markets), so their reliability cannot be based solely on testing, wherein educated guesses are involved. Model-checking is a formal method with an unprecedented ability to endorse the correctness of a system. It involves modeling a system and then verifying a set of properties using a model-checking tool. However, it has hitherto been used sparingly because of the associated time and memory requirements. This thesis proposes novel solutions to deal with these limitations in verifying a service composition. We propose a technique for modeling a service composition prior to verifying it with a model-checking tool. Compared to existing techniques, which are ad hoc and temporary, our solution streamlines the transformation by introducing a generic framework that transforms the composition into intermediate data transfer objects (DTOs) before the actual modeling. These DTOs help automate the transformation by making the required information accessible programmatically. Experimental results indicate that the framework takes less than a second (on average) to transform BPEL specifications. The solution is made more appealing by further reducing the aforementioned time and memory requirements for model-checking. The additional reduction in memory is achieved by storing each state as its difference from an adjoining state, and the reduction in time is realized by exploring the modules of a hierarchical model concurrently. These techniques offer up to 95% reduction in memory requirements and 86% reduction in time requirements. Furthermore, the time-reduction technique is also extended to non-hierarchical models by introducing hierarchy into a flat model in linear time before applying the time-reduction techniques. Compared to other techniques, our method ensures that the transformed model is equivalent to the original model.
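
    As a hedged illustration of the memory-reduction idea mentioned above (storing each state as its difference from an adjoining state), the sketch below implements a minimal delta-encoded state store in Python. The state variables are invented, and the thesis's actual model-checker integration is not reproduced.

```python
# Illustrative sketch: store each explored model state as a delta from the
# previous state instead of storing it in full. A generic reconstruction of
# the idea only, not the thesis's implementation.

def diff(base, state):
    """Record only the variables whose values changed relative to `base`."""
    return {k: v for k, v in state.items() if base.get(k) != v}

def apply_diff(base, delta):
    new_state = dict(base)
    new_state.update(delta)
    return new_state

class DeltaStateStore:
    def __init__(self, initial_state):
        self.base = dict(initial_state)   # one full snapshot
        self.deltas = []                  # each entry: delta from the previous state

    def push(self, state):
        prev = self.reconstruct(len(self.deltas))
        self.deltas.append(diff(prev, state))

    def reconstruct(self, index):
        # Replay deltas from the base snapshot; real tools would keep periodic
        # full snapshots to avoid the linear replay cost.
        state = self.base
        for d in self.deltas[:index]:
            state = apply_diff(state, d)
        return state

# Toy usage: states of a simple service-composition model (made-up variables).
store = DeltaStateStore({"order": "received", "payment": "pending", "stock": 5})
store.push({"order": "received", "payment": "charged", "stock": 5})
store.push({"order": "shipped", "payment": "charged", "stock": 4})
print(store.reconstruct(2))   # {'order': 'shipped', 'payment': 'charged', 'stock': 4}
```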

    The contour tree image encoding technique and file format

    Get PDF
    The process of contourization is presented, which converts a raster image into a discrete set of plateaux or contours. These contours can be grouped into a hierarchical structure, defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. Simplification of the contour tree has been undertaken by merging contour tree nodes, thus lowering the contour tree's entropy. This can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human vision system, lossy image compression can be achieved which minimises noticeable artifacts in the simplified image. The contour merging technique offers a complementary lossy compression system to the QDCT (Quantised Discrete Cosine Transform). The artifacts introduced by the two methods are very different: QDCT produces a general blurring and adds extra highlights in the form of overshoots, whereas contour merging sharpens edges, reduces highlights and introduces a degree of false contouring. A format based on the contourization technique which caters for most image types is defined, called the contour tree image format. Image operations directly on this compressed format have been studied, which for certain manipulations can offer significant speed increases over using a standard raster image format. A couple of examples of operations specific to the contour tree format are presented, showing some of the features of the new format.
    Science and Engineering Research Council
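
    As a hedged illustration of the contourization step only (the contour tree hierarchy and the coder are not reproduced), the sketch below groups a toy grey-level raster into connected plateaux of constant value using scipy.ndimage.label; the image data is made up.

```python
# Minimal sketch of contourization: split a raster image into connected
# plateaux of constant value. Building the spatial-inclusion hierarchy
# (the contour tree) and the contour coder are beyond this illustration.
import numpy as np
from scipy import ndimage

image = np.array([[3, 3, 1, 1],
                  [3, 2, 2, 1],
                  [3, 2, 2, 1],
                  [0, 0, 0, 1]])          # toy grey-level raster

plateaux = []
for value in np.unique(image):
    mask = image == value
    labels, n = ndimage.label(mask)       # connected components of equal value
    for region in range(1, n + 1):
        plateaux.append((int(value), np.argwhere(labels == region)))

for value, pixels in plateaux:
    print(f"plateau value={value}, size={len(pixels)} pixels")
```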