2,556 research outputs found

    Reconstruction of permutations distorted by single transposition errors

    This paper presents the reconstruction problem for permutations on $n$ elements from their erroneous patterns distorted by transpositions. It is shown that for any $n \geq 3$ an unknown permutation is uniquely reconstructible from 4 distinct permutations at transposition distance at most one from the unknown permutation. The {\it transposition distance} between two permutations is defined as the least number of transpositions needed to transform one into the other. The proposed approach is based on the investigation of structural properties of a corresponding Cayley graph. In the case of at most two transposition errors it is shown that $\frac{3}{2}(n-2)(n+1)$ erroneous patterns are required in order to reconstruct an unknown permutation. Similar results are obtained for two particular cases when permutations are distorted by given transpositions. These results confirm some bounds for regular graphs which are also presented in this paper. Comment: 5 pages, report of paper presented at ISIT-200
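As a concrete illustration of the distance in question (our sketch, not the paper's reconstruction algorithm): the transposition distance between two permutations equals $n$ minus the number of cycles of their quotient, since every transposition changes the cycle count by exactly one.

```python
# Sketch (assumption: 0-indexed permutations given as tuples): the least
# number of transpositions turning p into q is n minus the number of
# cycles of the permutation sigma = q o p^{-1}.
def transposition_distance(p, q):
    n = len(p)
    p_inv = [0] * n                 # position of each value in p, i.e. p^{-1}
    for i, v in enumerate(p):
        p_inv[v] = i
    sigma = [q[p_inv[i]] for i in range(n)]   # composition q o p^{-1}
    seen = [False] * n
    cycles = 0
    for i in range(n):              # count the cycles of sigma
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = sigma[j]
    return n - cycles

# a single swap is at distance 1 from the identity
assert transposition_distance((0, 1, 2, 3), (1, 0, 2, 3)) == 1
```

A 3-cycle, by contrast, needs two transpositions, which the cycle count reproduces directly.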

    Phase Retrieval for Sparse Signals: Uniqueness Conditions

    In a variety of fields, in particular those involving imaging and optics, we often measure signals whose phase is missing or has been irremediably distorted. Phase retrieval attempts the recovery of the phase information of a signal from the magnitude of its Fourier transform to enable the reconstruction of the original signal. A fundamental question then is: "Under which conditions can we uniquely recover the signal of interest from its measured magnitudes?" In this paper, we assume the measured signal to be sparse. This is a natural assumption in many applications, such as X-ray crystallography, speckle imaging and blind channel estimation. In this work, we derive a sufficient condition for the uniqueness of the solution of the phase retrieval (PR) problem for both discrete and continuous domains, and for one and multi-dimensional domains. More precisely, we show that there is a strong connection between PR and the turnpike problem, a classic combinatorial problem. We also prove that the existence of collisions in the autocorrelation function of the signal may preclude the uniqueness of the solution of PR. Then, assuming the absence of collisions, we prove that the solution is almost surely unique on 1-dimensional domains. Finally, we extend this result to multi-dimensional signals by solving a set of 1-dimensional problems. We show that the solution of the multi-dimensional problem is unique when the autocorrelation function has no collisions, significantly improving upon a previously known result. Comment: submitted to IEEE TI
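The central role of the autocorrelation here can be made concrete (a minimal sketch with a toy signal of our choosing, not the paper's method): the measured Fourier magnitudes determine the circular autocorrelation exactly, because $|X|^2$ is the Fourier transform of the autocorrelation, so PR uniqueness hinges on what the autocorrelation does or does not pin down.

```python
import numpy as np

# PR measures only the Fourier magnitudes of x; squaring them and
# inverse-transforming recovers the circular autocorrelation, with no
# phase information needed.
x = np.array([1.0, 0.0, 2.0, 0.0, 0.0, 3.0, 0.0, 0.0])  # a sparse test signal
mag = np.abs(np.fft.fft(x))                              # what PR actually measures
autocorr = np.real(np.fft.ifft(mag ** 2))                # phase-free quantity

# direct circular autocorrelation, for comparison
n = len(x)
direct = np.array([sum(x[k] * x[(k + m) % n] for k in range(n))
                   for m in range(n)])
assert np.allclose(autocorr, direct)
```

Two sparse supports whose pairwise differences "collide" produce the same autocorrelation, which is exactly the obstruction to uniqueness discussed in the abstract.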

    Two fluid space-time discontinuous Galerkin finite element method. Part I: numerical algorithm

    A novel numerical method for two fluid flow computations is presented, which combines the space-time discontinuous Galerkin finite element discretization with the level set method and cut-cell based interface tracking. The space-time discontinuous Galerkin (STDG) finite element method offers high accuracy, an inherent ability to handle discontinuities and a very local stencil, making it relatively easy to combine with local {\it hp}-refinement. The front tracking is incorporated via cut-cell mesh refinement to ensure a sharp interface between the fluids. To compute the interface dynamics the level set method (LSM) is used because of its ability to deal with merging and breakup. Also, the LSM is easy to extend to higher dimensions. Small cells arising from the cut-cell refinement are merged to improve the stability and performance. The interface conditions are incorporated in the numerical flux at the interface and the STDG discretization ensures that the scheme is conservative as long as the numerical fluxes are conservative.
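The coupling between the level set and the cut-cell refinement can be illustrated with a minimal sketch (function names are ours, not the paper's): a background element needs cut-cell refinement exactly when the level set function changes sign over its corners.

```python
# Illustrative sketch (not the paper's code): a square background cell is
# crossed by the 0-level set when phi takes both signs at its corners.
def cell_is_cut(phi_corners):
    """phi_corners: level set values sampled at the cell's corners."""
    return min(phi_corners) < 0.0 < max(phi_corners)

# level set of a circular interface of radius 0.5 centred at the origin
def phi(x, y):
    return (x * x + y * y) ** 0.5 - 0.5

# a unit cell with corners (0,0), (1,0), (0,1), (1,1) straddles the circle
corners = [phi(0, 0), phi(1, 0), phi(0, 1), phi(1, 1)]
assert cell_is_cut(corners)       # one corner inside, three outside
assert not cell_is_cut([phi(2, 2), phi(3, 2), phi(2, 3), phi(3, 3)])
```

Only cells flagged this way receive the local refinement, which is what keeps the approach compatible with the very local STDG stencil.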

    Cayley graphs and reconstruction problems

    EThOS - Electronic Theses Online Service, United Kingdom

    Gene order rearrangement methods for the reconstruction of phylogeny

    The study of phylogeny, i.e. the evolutionary history of species, is a central problem in biology and a key to understanding characteristics of contemporary species. Many problems in this area can be formulated as combinatorial optimisation problems, which makes the field particularly interesting for computer scientists. The reconstruction of the phylogeny of species can be based on various kinds of data, e.g. morphological properties or characteristics of the genetic information of the species. Maximum parsimony is a popular and widely used method for phylogenetic reconstruction, aiming for an explanation of the observed data that requires the fewest evolutionary changes. One property of the genetic information has gained much interest for the reconstruction of phylogeny in recent times: the organisation of the genomes of species, i.e. the arrangement of the genes on the chromosomes. But the idea of reconstructing phylogenetic information from gene arrangements has a long history. Dobzhansky and Sturtevant (1938) already pointed out that “a comparison of the different gene arrangements in the same chromosome may, in certain cases, throw light on the historical relationships of these structures, and consequently on the history of the species as a whole”. This kind of data is promising for the study of deep evolutionary relationships because gene arrangements are believed to evolve slowly (Rokas and Holland, 2000). This seems to be the case especially for mitochondrial genomes, which are available for a wide range of species (Boore, 1999). The development of methods for the reconstruction of phylogeny from gene arrangement data has made considerable progress during the last years. Prominent examples are the computation of parsimonious evolutionary scenarios, i.e. 
a shortest sequence of rearrangements transforming one arrangement of genes into another, or the length of such a minimal scenario (Hannenhalli and Pevzner, 1995b; Sankoff, 1992; Watterson et al., 1982); the reconstruction of parsimonious phylogenetic trees from gene arrangement data (Bader et al., 2008; Bernt et al., 2007b; Bourque and Pevzner, 2002; Moret et al., 2002a); or the computation of the similarities of gene arrangements (Bergeron et al., 2008a; Heber et al., 2009). The central theme of this work is to provide efficient algorithms for modified versions of fundamental genome rearrangement problems using more plausible rearrangement models. Two types of modified rearrangement models are explored. The first type restricts the set of allowed rearrangements as follows. It can be observed that certain groups of genes are preserved during evolution. This may be caused by functional constraints which prevented their destruction (Lathe et al., 2000; Sémon and Duret, 2006; Xie et al., 2003), by certain properties of the rearrangements which shaped the gene orders (Eisen et al., 2000; Sankoff, 2002; Tillier and Collins, 2000), or simply because no destructive rearrangement has happened since the speciation of the gene orders. It can be assumed that gene groups found in all studied gene orders were not acquired independently. Accordingly, these gene groups should be preserved in plausible reconstructions of the course of evolution; in particular, the gene groups should be present in the reconstructed putative ancestral gene orders. This can be achieved by restricting the set of rearrangements allowed for the reconstruction to those which preserve the gene groups of the given gene orders. Since it is difficult to determine functionally what a gene group is, it has been proposed to consider common combinatorial structures of the gene orders as gene groups (Marcotte et al., 1999; Overbeek et al., 1999). 
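The combinatorial structure most used below, the common interval, can be sketched directly (a quadratic brute force of ours, illustrating the definition rather than the thesis' algorithms): a set of genes is a common interval of two gene orders when its members appear consecutively, in any order, in both.

```python
# Sketch of the common-interval notion: enumerate the intervals of p and
# keep those whose gene set also occupies consecutive positions in q.
def common_intervals(p, q):
    pos_in_q = {g: i for i, g in enumerate(q)}
    result = []
    n = len(p)
    for i in range(n):
        lo = hi = pos_in_q[p[i]]
        for j in range(i + 1, n):
            lo = min(lo, pos_in_q[p[j]])
            hi = max(hi, pos_in_q[p[j]])
            if hi - lo == j - i:     # same gene set is consecutive in q
                result.append(set(p[i:j + 1]))
    return result

# {1,2} stays together under the reordering below; {3,4} does not
assert {1, 2} in common_intervals((1, 2, 3, 4), (3, 1, 2, 4))
```

Restricting rearrangements to those preserving every such set is what the preserving problems studied later formalise.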
The second considered modification of the rearrangement model is to extend the set of allowed rearrangement types. Different types of rearrangement operations have shuffled the gene orders during evolution. The reconstruction should attempt to use the same set of rearrangement operations; otherwise distorted or even wrong phylogenetic conclusions may be obtained in the worst case. Both possibilities have been considered for certain rearrangement problems before. Restricted sets of allowed rearrangements have been used successfully for the computation of parsimonious rearrangement scenarios consisting of inversions only, where the gene groups are identified as common intervals (Bérard et al., 2007; Figeac and Varré, 2004). Extending the set of allowed rearrangement operations is a delicate task. On the one hand, it is unknown which rearrangements have to be regarded because this is part of the phylogeny to be discovered. On the other hand, efficient exact rearrangement methods including several operations are still rare, in particular when transpositions should be included. For example, the problem of computing shortest rearrangement scenarios including transpositions is still of unknown computational complexity; currently, only efficient approximation algorithms are known (e.g. Bader and Ohlebusch, 2007; Elias and Hartman, 2006). Two problems have been studied with respect to one or even both of these possibilities in the scope of this work. The first one is the inversion median problem. Given the gene orders of some taxa, this problem asks for potential ancestral gene orders such that the corresponding inversion scenario is parsimonious, i.e. has a minimum length. Solving this problem is an essential component of algorithms for computing phylogenetic trees from gene arrangements (Bourque and Pevzner, 2002; Moret et al., 2002a, 2001). The unconstrained inversion median problem is NP-hard (Caprara, 2003). 
In Chapter 3 the inversion median problem is studied under the additional constraint of preserving gene groups of the input gene orders. Common intervals, i.e. sets of genes that appear consecutively in the gene orders, are used for modelling gene groups. The problem of finding such ancestral gene orders is called the preserving inversion median problem. Already the problem of finding a shortest preserving inversion scenario for two gene orders is NP-hard (Figeac and Varré, 2004). Mitochondrial gene orders are a rich source for phylogenetic investigations because they are known for more than 1,000 species. Four rearrangement operations are reported in the literature to be relevant for the study of mitochondrial gene order evolution (Boore, 1999): inversions, transpositions, inverse transpositions, and tandem duplication random loss (TDRL). Efficient methods for a plausible reconstruction of genome rearrangements for mitochondrial gene orders using all four operations are presented in Chapter 4. An important rearrangement operation, in particular for the study of mitochondrial gene orders, is the tandem duplication random loss operation (e.g. Boore, 2000; Mauro et al., 2006). This rearrangement duplicates a part of a gene order, followed by the random loss of one of the redundant copies of each gene. The gene order is rearranged depending on which copy is lost. This rearrangement should be considered when reconstructing phylogeny from gene order data, but its properties have rarely been studied (Bouvel and Rossin, 2009; Chaudhuri et al., 2006). The combinatorial properties of the TDRL operation are studied in Chapter 5. In particular, the enumeration and counting of sorting TDRLs, that is, TDRL operations reducing the distance, is studied. Closed formulas for computing the number of sorting TDRLs and methods for the enumeration are presented. Furthermore, TDRLs are one of the operations considered in Chapter 4. 
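The effect of a TDRL is easy to state operationally (our sketch of the standard characterisation, not the thesis' implementation): duplicating a gene order and then deleting one redundant copy of every gene is the same as splitting the genes into two subsequences, each kept in its original relative order, and concatenating them.

```python
# Sketch of a whole-genome TDRL: genes whose first copy survives come
# first, genes whose second copy survives follow, both in original order.
def tdrl(order, keep_first):
    """keep_first: set of genes whose first (left) copy is retained."""
    first = [g for g in order if g in keep_first]
    second = [g for g in order if g not in keep_first]
    return first + second

# a single TDRL can move gene 4 in front of genes 2 and 3
assert tdrl([1, 2, 3, 4], {1, 4}) == [1, 4, 2, 3]
```

Choosing `keep_first` differently yields every gene order reachable in one TDRL, which is what makes enumerating and counting the distance-reducing ("sorting") choices a meaningful combinatorial question.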
An interesting property of this rearrangement, distinguishing it from other rearrangements, is its asymmetry: the effects of a single TDRL can, in most cases, not be reversed with a single TDRL. The use of this property for phylogeny reconstruction is studied in Section 4.3. This thesis is structured as follows. The existing approaches obeying similar types of modified rearrangement models, as well as important concepts and computational methods for related problems, are reviewed in Chapter 2. The combinatorial structures of gene orders that have been proposed for identifying gene groups, in particular common intervals, as well as the approaches for their computation, are reviewed in Section 2.2. Approaches for computing parsimonious pairwise rearrangement scenarios are outlined in Section 2.3. Methods for the computation of genome rearrangement scenarios obeying biologically motivated constraints, as introduced above, are detailed in Section 2.4. The approaches for the inversion median problem are covered in Section 2.5. Methods for the reconstruction of phylogenetic trees from gene arrangement data are briefly outlined in Section 2.6. Chapter 3 introduces the new algorithms CIP, ECIP, and TCIP for solving the preserving inversion median problem. The efficiency of the algorithms is empirically studied for simulated as well as mitochondrial data. The description of algorithms CIP and ECIP is based on Bernt et al. (2006b). TCIP has been described in Bernt et al. (2007a, 2008b), but its theoretical foundation is extended significantly within this work in order to allow for more than three input permutations. Gene order rearrangement methods that have been developed for the reconstruction of the phylogeny of mitochondrial gene orders are presented in the fourth chapter. The presented algorithm CREx computes rearrangement scenarios for pairs of gene orders. 
CREx regards the four types of rearrangement operations which are important for mitochondrial gene orders. Based on CREx, the algorithm TreeREx for assigning rearrangement events to a given tree is developed. The quality of the CREx reconstructions is analysed in a large empirical study for simulated gene orders. The results of TreeREx are analysed for several mitochondrial data sets. Algorithms CREx and TreeREx have been published in Bernt et al. (2008a, 2007c). The analysis of the mitochondrial gene orders of Echinodermata was included in Perseke et al. (2008). Additionally, a new and simple method is presented to explore the potential of the CREx method; it is applied to the complete mitochondrial data set. The problem of enumerating and counting sorting TDRLs is studied in Chapter 5. The theoretical results are covered to a large extent by Bernt et al. (2009b). The missing combinatorial explanation for some of the presented formulas is given here for the first time. Therefore, a new method for the enumeration and counting of sorting TDRLs has been developed (Bernt et al., 2009a).

    Computing stationary free-surface shapes in microfluidics

    Full text link
    A finite-element algorithm for computing free-surface flows driven by arbitrary body forces is presented. The algorithm is primarily designed for the microfluidic parameter range where (i) the Reynolds number is small and (ii) force-driven pressure and flow fields compete with the surface tension for the shape of a stationary free surface. The free surface shape is represented by the boundaries of finite elements that move according to the stress applied by the adjacent fluid. Additionally, the surface tends to minimize its free energy and thereby adapts its curvature to balance the normal stress at the surface. The numerical approach consists of the iteration of two alternating steps: the solution of a fluidic problem in a prescribed domain with slip boundary conditions at the free surface, and a consecutive update of the domain driven by the previously determined pressure and velocity fields. ... Comment: Revised version
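The two alternating steps can be mimicked with a deliberately simple scalar stand-in (entirely our toy, not the paper's solver): a "flow solve" evaluates the pressure produced by the current surface position, and a surface update relaxes the surface toward the position where surface tension balances that pressure, until the update becomes negligible.

```python
# Toy scalar analogue of the alternating iteration: with these made-up
# parameters the pressure grows with surface height, so the only
# stationary height is zero and the iteration contracts toward it.
def stationary_surface(h0, rho_g=1.0, gamma=4.0, relax=0.5, tol=1e-10):
    h = h0
    for _ in range(1000):
        pressure = rho_g * h                        # step 1: "flow solve"
        h_new = h + relax * (pressure / gamma - h)  # step 2: surface update
        if abs(h_new - h) < tol:                    # surface stopped moving
            return h_new
        h = h_new
    raise RuntimeError("no stationary shape within the iteration budget")

assert abs(stationary_surface(1.0)) < 1e-8
```

In the real method both steps are full finite-element solves, but the convergence criterion, surface displacement below a tolerance, plays the same role.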

    Brain iron deposition is linked with cognitive severity in Parkinson’s disease

    Background: Dementia is common in Parkinson’s disease (PD) but measures that track cognitive change in PD are lacking. Brain tissue iron accumulates with age and co-localises with pathological proteins linked to PD dementia such as amyloid. We used quantitative susceptibility mapping (QSM) to detect changes related to cognitive change in PD. Methods: We assessed 100 patients with early-stage to mid-stage PD, and 37 age-matched controls using the Montreal Cognitive Assessment (MoCA), a validated clinical algorithm for risk of cognitive decline in PD, measures of visuoperceptual function and the Movement Disorders Society Unified Parkinson’s Disease Rating Scale part 3 (UPDRS-III). We investigated the association between these measures and QSM, an MRI technique sensitive to brain tissue iron content. Results: We found QSM increases (consistent with higher brain tissue iron content) in PD compared with controls in prefrontal cortex and putamen (p<0.05 corrected for multiple comparisons). Whole brain regression analyses within the PD group identified QSM increases covarying: (1) with lower MoCA scores in the hippocampus and thalamus, (2) with poorer visual function and with higher dementia risk scores in parietal, frontal and medial occipital cortices, (3) with higher UPDRS-III scores in the putamen (all p<0.05 corrected for multiple comparisons). In contrast, atrophy, measured using voxel-based morphometry, showed no differences between groups, or in association with clinical measures. Conclusions: Brain tissue iron, measured using QSM, can track cognitive involvement in PD. This may be useful to detect signs of early cognitive change, to stratify groups for clinical trials and to monitor disease progression.

    Functional architecture of the rat parasubiculum

    The parasubiculum is a major input structure of layer 2 of medial entorhinal cortex, where most grid cells are found. Here we investigated parasubicular circuits of the rat by anatomical analysis combined with juxtacellular recording/labeling and tetrode recordings during spatial exploration. In tangential sections, the parasubiculum appears as a linear structure flanking the medial entorhinal cortex mediodorsally. With a length of ∼5.2 mm and a width of only ∼0.3 mm (approximately one dendritic tree diameter), the parasubiculum is both one of the longest and narrowest cortical structures. Parasubicular neurons span the height of cortical layers 2 and 3, and we observed no obvious association of deep layers to this structure. The "superficial parasubiculum" (layers 2 and 1) divides into ∼15 patches, whereas deeper parasubicular sections (layer 3) form a continuous band of neurons. Anterograde tracing experiments show that parasubicular neurons extend long "circumcurrent" axons establishing a "global" internal connectivity. The parasubiculum is a prime target of GABAergic and cholinergic medial septal inputs. Other input structures include the subiculum, presubiculum, and anterior thalamus. Functional analysis of identified and unidentified parasubicular neurons shows strong theta rhythmicity of spiking, a large fraction of head-direction selectivity (50%, 34 of 68), and spatial responses (grid, border and irregular spatial cells; 57%, 39 of 68). Parasubicular output preferentially targets patches of calbindin-positive pyramidal neurons in layer 2 of medial entorhinal cortex, which might be relevant for grid cell function. These findings suggest the parasubiculum might shape entorhinal theta rhythmicity and the (dorsoventral) integration of information across grid scales.

    Space-time discontinuous Galerkin finite element method for two-fluid flows

    Multifluid and multiphase flows involve combinations of fluids and the interfaces which separate them. These flows are of importance in many natural and industrial processes, including fluidized beds and bubble columns. Often the interface is not static but moves with the fluid flow velocity. Also, interface topological changes due to breakup and coalescence processes may occur. Solutions typically have a discontinuous character at the interface between different fluids because of curvature and surface tension effects. In addition, the density and pressure differences across the interface can be very high, as in the case of liquid-gas flows. Also, the existence of shock or contact waves can introduce additional discontinuities into the problem. The aim of this research project was to develop a discontinuous Galerkin method for two-fluid flows which is accurate, versatile and can alleviate some of the problems commonly encountered with existing methods. A novel numerical method for two-fluid flow computations is presented, which combines the space-time discontinuous Galerkin finite element discretization with the level set method and cut-cell based interface tracking. The space-time discontinuous Galerkin (STDG) finite element method offers high accuracy, an inherent ability to handle discontinuities and a very local stencil, making it relatively easy to combine with local {\it hp}-refinement. A front tracking approach is chosen because these methods ensure a sharp interface between the fluids and are capable of high accuracy. The front tracking is incorporated by means of cut-cell mesh refinement, because this type of refinement is very local in nature and hence combines well with the STDG. To compute the interface dynamics the level set method (LSM) is chosen because of its ability to deal with merging and breakup, because it was expected that the LSM combines well with the cut-cell mesh refinement, and because the LSM is easy to extend to higher dimensions. 
The small cell problem caused by the cut-cell refinement is solved by using a merging procedure involving bounding box elements, which improves the stability and performance of the method. The interface conditions are incorporated in the numerical flux at the interface, and the STDG discretization ensures that the scheme is conservative as long as the numerical fluxes are conservative. All possible cuts the 0-level set can make with square and cube shaped background elements are identified, and for each cut an element refinement is defined explicitly. To ensure connectivity of the refined mesh, the $dim$-dimensional face refinements are defined to be equal to the $dim-1$-dimensional element refinements. It is expected that this scheme can accurately solve smaller scale problems where the interface shape is of importance and where complex interface physics are involved. To investigate the numerical properties and performance of the numerical algorithm it is applied to a number of one and two dimensional single and two-fluid test problems, including a magma - ideal gas shocktube and a helium cylinder - shock wave interaction problem. To remove oscillations in the flow field near the interface a novel interface flux is presented, which is based on the HLLC flux for a contact discontinuity and can compensate for small errors in the interface position by allowing for a small mass loss. Slope limiting was found to reduce spikes in the solution at the cost of a decrease in accuracy. It was found that the level set deformation restricted the simulation lengths. This problem can be solved by adding a level set reinitialization procedure. To improve the efficiency and stability of the two-fluid numerical algorithm it is advised to incorporate {\it hp}-refinement and a multigrid algorithm. Next, the Object Oriented Programming (OOP) design and implementation of the two-fluid method were discussed. 
The choice of the OOP language C++ was motivated by the general advantages of OOP such as reusability, reliability, robustness, extensibility and maintainability. In addition, the use of OOP allowed for a strong connection between the numerical method and its implementation. Furthermore, {\it hp}GEM, an OOP package for DG methods, was presented. The use of {\it hp}GEM allowed for a reduction of the development time and provided quality control and a coding standard, which benefitted the sharing and maintenance of the codes.
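The size of the cut classification mentioned above (all cuts the 0-level set can make with square and cube background elements) can be sanity-checked by a side computation of ours, classifying cuts by the sign of the level set at the element corners: every corner sign pattern except the two uniform ones corresponds to a cut element.

```python
from itertools import product

# A corner sign pattern describes a cut element when both signs occur.
def is_cut(signs):
    return len(set(signs)) > 1

# square element: 4 corners -> 2^4 = 16 patterns, 14 of them cut
square_patterns = list(product((-1, 1), repeat=4))
cut_squares = [s for s in square_patterns if is_cut(s)]
assert len(square_patterns) == 16 and len(cut_squares) == 14

# cube element: 8 corners -> 2^8 = 256 patterns, 254 of them cut
cube_patterns = list(product((-1, 1), repeat=8))
assert sum(is_cut(s) for s in cube_patterns) == 254
```

Symmetry (rotations and reflections of the element) collapses these raw sign patterns into the much smaller set of distinct refinement cases that need to be defined explicitly.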