
    Barycentric Subspace Analysis on the Sphere and Image Manifolds

    In this dissertation we present a generalization of Principal Component Analysis (PCA) to Riemannian manifolds, called Barycentric Subspace Analysis, and show some applications. The notion of barycentric subspaces was first introduced by X. Pennec. Since they lead to a hierarchy of properly embedded linear subspaces of increasing dimension, they define a generalization of PCA on manifolds called Barycentric Subspace Analysis (BSA). We present a detailed study of the method on the sphere, since the sphere can be considered as the finite-dimensional projection of a set of probability densities, which has many practical applications. We also show an application of the barycentric subspace method to the study of cardiac motion in the image registration problem, following the work of M.M. Rohé.
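Barycentric subspaces are built from weighted exponential barycenters: points x at which the weighted sum of log maps to the reference points vanishes. A minimal sketch of this building block on the unit sphere S², found by Riemannian gradient descent (plain NumPy; the iteration scheme and tolerances are illustrative choices, not the dissertation's implementation):

```python
import numpy as np

def sphere_log(p, q):
    """Log map at p on the unit sphere: tangent vector at p pointing toward q.
    Undefined for antipodal points (not handled here)."""
    d = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(d)
    if theta < 1e-12:
        return np.zeros_like(p)
    v = q - d * p                    # component of q orthogonal to p
    return theta * v / np.linalg.norm(v)

def sphere_exp(p, v):
    """Exp map at p: walk distance |v| along the geodesic in direction v."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return p
    return np.cos(n) * p + np.sin(n) * v / n

def frechet_mean(points, weights, iters=100):
    """Weighted barycenter: iterate x <- exp_x(sum_i w_i log_x(p_i)) until the
    weighted tangent-space gradient vanishes."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    x = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        g = sum(wi * sphere_log(x, p) for wi, p in zip(w, points))
        x = sphere_exp(x, g)
    return x
```

With equal weights on two points this converges to the geodesic midpoint; varying the weights traces out the barycentric subspace spanned by the reference points.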

    Defining the Pose of any 3D Rigid Object and an Associated Distance

    The pose of a rigid object is usually regarded as a rigid transformation, described by a translation and a rotation. However, equating the pose space with the space of rigid transformations is in general abusive, as it does not account for objects with proper symmetries, which are common among man-made objects. In this article, we define pose as a distinguishable static state of an object and equate a pose with a set of rigid transformations. Based solely on geometric considerations, we propose a frame-invariant metric on the space of possible poses, valid for any physical rigid object and requiring no arbitrary tuning. This distance can be evaluated efficiently using a representation of poses within a Euclidean space of at most 12 dimensions, depending on the object's symmetries. This makes it possible to perform neighborhood queries, such as radius searches or k-nearest-neighbor searches, efficiently within a large set of poses using off-the-shelf methods. Pose averaging under this metric can similarly be performed easily, using a projection function from the Euclidean space onto the pose space. The practical value of these theoretical developments is illustrated with an application to pose estimation of instances of a 3D rigid object given an input depth map, via a Mean Shift procedure.
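The central idea, that a pose is a set of rigid transformations related by the object's proper symmetry group, can be sketched as a toy distance: minimize a rotation-plus-translation discrepancy over the symmetry group. A minimal illustration for a hypothetical object with 4-fold symmetry about the z-axis (the Frobenius rotation term and the `radius` weighting are illustrative stand-ins for the paper's object-dependent metric, not its actual construction):

```python
import numpy as np

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Proper symmetry group of the object: here, hypothetically, a 4-fold
# rotational symmetry about z (e.g. a square peg).
SYMMETRIES = [rot_z(k * np.pi / 2) for k in range(4)]

def pose_distance(R1, t1, R2, t2, radius=1.0):
    """Distance between two poses of a symmetric object: minimum over the
    symmetry group of a combined rotation + translation discrepancy.
    `radius` weights rotation against translation."""
    t_term = np.linalg.norm(t1 - t2)
    best = np.inf
    for g in SYMMETRIES:
        rot_term = radius * np.linalg.norm(R1 @ g - R2, 'fro')
        best = min(best, np.hypot(rot_term, t_term))
    return best
```

Two poses that differ only by a symmetry rotation of the object get distance zero, which is exactly what identifying a pose with a set of transformations demands.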

    Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways

    Diverse classes of proteins function through large-scale conformational changes, and various sophisticated computational algorithms have been proposed to enhance sampling of these macromolecular transition paths. Because such paths are curves in a high-dimensional space, it has been difficult to quantitatively compare multiple paths, a necessary prerequisite to, for instance, assess the quality of different algorithms. We introduce a method named Path Similarity Analysis (PSA) that enables us to quantify the similarity between two arbitrary paths and extract the atomic-scale determinants responsible for their differences. PSA utilizes the full information available in 3N-dimensional configuration-space trajectories by employing the Hausdorff or Fréchet metrics (adopted from computational geometry) to quantify the degree of similarity between piecewise-linear curves. It thus completely avoids relying on projections into low-dimensional spaces, as used in traditional approaches. To elucidate the principles of PSA, we quantified the effect of path roughness induced by thermal fluctuations using a toy model system. Using, as an example, the closed-to-open transition of the enzyme adenylate kinase (AdK) in its substrate-free form, we compared a range of protein transition-path-generating algorithms. Molecular dynamics-based dynamic importance sampling MD (DIMS MD) and targeted MD (TMD) and the purely geometric FRODA (Framework Rigidity Optimized Dynamics Algorithm) were tested along with seven other methods publicly available on servers, including several based on the popular elastic network model (ENM). PSA with clustering revealed that paths produced by a given method are more similar to each other than to those from another method and, for instance, that the ENM-based methods produced relatively similar paths.
    PSA applied to ensembles of DIMS MD and FRODA trajectories of the conformational transition of diphtheria toxin, a particularly challenging example, showed that the geometry-based FRODA occasionally sampled the pathway space of force-field-based DIMS MD. For the AdK transition, the new concept of a Hausdorff-pair map enabled us to extract the molecular structural determinants responsible for differences in pathways, namely a set of conserved salt bridges whose charge-charge interactions are fully modelled in DIMS MD but not in FRODA. PSA has the potential to enhance our understanding of transition-path sampling methods, to validate them, and to provide a new approach to analyzing conformational transitions. The article is published at http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.100456
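The discrete Hausdorff metric at the core of PSA is straightforward to compute once each trajectory is flattened to a sequence of configuration-space points. A minimal NumPy sketch of the symmetric max-of-directed-distances form (standard computational geometry, not the PSA package itself):

```python
import numpy as np

def hausdorff(P, Q):
    """Discrete symmetric Hausdorff distance between two paths, given as
    (n_frames, D) arrays of configuration-space points (D = 3N for a
    system of N atoms)."""
    # Pairwise distances between every frame of P and every frame of Q.
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    # Directed distance: how far the worst-matched frame of one path
    # lies from the other path; symmetrize by taking the larger.
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

The frame pair realizing the maximum (the argmax of the same matrix) is the kind of information the Hausdorff-pair map exploits to localize where two pathways diverge.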

    Computational Approaches to Simulation and Analysis of Large Conformational Transitions in Proteins

    In a typical living cell, millions to billions of proteins, nanomachines that fluctuate and cycle among many conformational states, convert available free energy into mechanochemical work. A fundamental goal of biophysics is to ascertain how 3D protein structures encode specific functions, such as catalyzing chemical reactions or transporting nutrients into a cell. Protein dynamics span femtosecond timescales (i.e., covalent bond oscillations) to large conformational transition timescales in, and beyond, the millisecond regime (e.g., glucose transport across a phospholipid bilayer). Actual transition events are fast but rare, occurring orders of magnitude faster than typical metastable equilibrium waiting times. Equilibrium molecular dynamics (EqMD) can capture atomistic detail and solute-solvent interactions, but even the microseconds of sampling attainable nowadays still fall orders of magnitude short of transition timescales, especially for large systems, rendering observations of such "rare events" difficult or effectively impossible. Advanced path-sampling methods exploit reduced physical models or biasing to produce plausible transitions while balancing accuracy and efficiency, but quantifying their accuracy relative to other numerical and experimental data has been challenging. Indeed, new horizons in elucidating protein function necessitate that present methodologies be revised to more seamlessly and quantitatively integrate a spectrum of methods, both numerical and experimental. In this dissertation, experimental and computational methods are put into perspective using the enzyme adenylate kinase (AdK) as an illustrative example. We introduce Path Similarity Analysis (PSA), an integrative computational framework developed to quantify transition path similarity.
    PSA not only reliably distinguished AdK transitions by their originating method, but also traced pathway differences between two methods back to charge-charge interactions (neglected by the stereochemical model, but not by the all-atom force field) in several conserved salt bridges. Cryo-electron microscopy maps of the transporter Bor1p are directly incorporated into EqMD simulations using MD flexible fitting to produce viable structural models and infer a plausible transport mechanism. Conforming to the theme of integration, a short compendium of an exploratory project, developing a hybrid atomistic-continuum method, is presented, including initial results and a novel fluctuating hydrodynamics model and corresponding numerical code.
    Doctoral Dissertation, Physics, 201
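The companion path metric used by PSA, the discrete Fréchet distance, is computed by the classic Eiter–Mannila dynamic programme over couplings of the two frame sequences. A minimal sketch (illustrative, not the dissertation's code):

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two piecewise-linear paths,
    each an (n_frames, D) array, via the Eiter-Mannila recursion:
    ca[i,j] = max(d(P_i, Q_j), min(ca[i-1,j], ca[i-1,j-1], ca[i,j-1]))."""
    n, m = len(P), len(Q)
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    ca = np.full((n, m), np.inf)
    ca[0, 0] = D[0, 0]
    for i in range(1, n):                      # first column: forced coupling
        ca[i, 0] = max(ca[i - 1, 0], D[i, 0])
    for j in range(1, m):                      # first row: forced coupling
        ca[0, j] = max(ca[0, j - 1], D[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           D[i, j])
    return ca[-1, -1]
```

Unlike the Hausdorff distance, this "dog-walking" metric respects the ordering of frames along each path, so reversed or re-crossed segments are penalized.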

    Joint acoustic full-waveform and gravity inversion - development and synthetic application to a salt dome

    Seismic full-waveform inversion (FWI) has recently gained considerable popularity because of its high resolution and its ability to resolve even highly heterogeneous subsurface structures. Seismic velocity models can usually be determined reliably, whereas inversion for density remains very difficult. This is due, on the one hand, to the low sensitivity of seismic waves to density variations and, on the other, to trade-off effects between the parameters involved. Moreover, the interpretation of seismic amplitudes is fundamentally ambiguous, since alongside density, effects such as attenuation, anisotropy, and noise also play a role. Classical gravity inversion, in contrast, suffers from inherent non-uniqueness, because gravity is a potential field, and it only resolves long-wavelength features. To overcome these limitations, we perform a joint inversion of pressure seismograms and gravity signals, using neither empirical relations nor criteria based on structural similarity. As a test model for a reconstruction test we use a salt dome embedded in several sedimentary layers. The density model reconstructed by the joint inversion contains short-wavelength information about the sedimentary layers that is absent from the results of gravity-only inversion. Compared to pure FWI, the joint inversion suffers less from trade-offs between the different parameters. Moreover, the final density model explains the pseudo-observed gravity signals almost perfectly, which is not the case for pure FWI.
    In conclusion, integrating gravity data can indeed reduce the trade-off effects between P-wave velocity and density, and a higher resolution of the density model is achievable compared to gravity-only inversion. Our inversion strategy thus allows a more trustworthy interpretation of density distributions than the individual methods provide.
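The structure of such a joint inversion can be sketched as a combined objective: an L2 waveform misfit plus a weighted L2 gravity misfit, where the gravity data are predicted from the current density model by a forward operator. A toy point-mass sketch (the cell/station geometry, unit cell volumes, and weight `lam` are illustrative assumptions, not the thesis's discretization):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_forward(rho, cells, stations):
    """Toy point-mass gravity forward model: vertical attraction g_z at
    surface stations from density cells. `cells` is an (n, 2) array of
    (x, z) centres with z positive downward, `rho` the cell densities
    in kg/m^3; unit cell volume is assumed for simplicity."""
    gz = np.zeros(len(stations))
    for k, (xs, zs) in enumerate(stations):
        dx = cells[:, 0] - xs
        dz = cells[:, 1] - zs
        r3 = (dx ** 2 + dz ** 2) ** 1.5
        gz[k] = G * np.sum(rho * dz / r3)
    return gz

def joint_misfit(seis_syn, seis_obs, rho, cells, stations, g_obs, lam):
    """Joint objective: L2 waveform misfit plus lam-weighted L2 gravity
    misfit; lam balances the two data types."""
    chi_seis = 0.5 * np.sum((seis_syn - seis_obs) ** 2)
    chi_grav = 0.5 * np.sum((gravity_forward(rho, cells, stations) - g_obs) ** 2)
    return chi_seis + lam * chi_grav
```

In a real scheme both terms would be differentiated with respect to the model parameters (velocity and density) and minimized together, which is precisely where the gravity term constrains the long-wavelength density that the waveforms alone leave poorly resolved.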

    Sensitivity and background estimates towards Phase-I of the COMET muon-to-electron conversion search

    COMET is a future high-precision experiment searching for charged lepton flavour violation through the muon-to-electron conversion process. It aims to push the intensity frontier of particle physics by coupling an intense muon beam with cutting-edge detector technology. The first stage of the experiment, COMET Phase-I, is currently being assembled and will soon enter its data acquisition period. It plans to achieve a single event sensitivity to μ-e conversion in aluminium of 3.1×10⁻¹⁵. This thesis presents a study of the sensitivity and backgrounds of COMET Phase-I using the latest Monte Carlo simulation data produced. The background contribution from cosmic-ray-induced atmospheric muons is estimated using a backward Monte Carlo approach, which allows computational resources to be focused on the most critical signal-mimicking events. Analysis of a μ-e conversion simulation sample suggests that COMET Phase-I will reach a single event sensitivity of 3.6×10⁻¹⁵ within 146 days of data acquisition. Our results suggest that, in that period, on the order of 10³ atmospheric muons will enter the detector system and produce an event similar enough to the conversion signal to pass all the signal selection criteria. Most of these events will be rejected by the Cosmic Ray Veto system; however, we expect at least 2.2 background events to pass unnoticed. It is vital for the conversion search that these events be discriminated from conversion electrons, for instance by using Cherenkov threshold counters to distinguish between muons and electrons or, alternatively, by developing a direction-identification algorithm to reject some fraction of the μ⁺-induced events.
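Schematically, a single event sensitivity of this kind is the branching ratio at which one signal event would be expected: the reciprocal of the number of stopped muons times the fraction captured on the nucleus times the total signal acceptance. A toy sketch of that arithmetic (the factorization is generic and all numbers in the test are placeholders, not COMET's actual values):

```python
def single_event_sensitivity(n_stopped_muons, f_capture, acceptance):
    """SES = 1 / (stopped muons x nuclear capture fraction x total signal
    acceptance): the branching ratio yielding one expected signal event.
    All inputs are placeholders for illustration."""
    return 1.0 / (n_stopped_muons * f_capture * acceptance)
```

Doubling the integrated muon flux (or the acceptance) halves the SES, which is why the quoted sensitivity scales directly with the 146-day data-taking period.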

    Algorithms for Context-Aware Trajectory Analysis

