
    A Study of Yield Predictions for a Model of Homogeneous Self-Assembling Components

    Self-assembly of homogeneous components has the advantage of being a decentralised and highly parallel method for assembling multiple target structures, making it ideal for effective large-scale manufacturing. Yet assembly yield may be negatively affected by the formation of incompatible substructures that prevent the formation of complete target structures. In this work we present physical and theoretical analysis of a simple magnetomechanical self-assembling system exhibiting the problem of incompatible substructures in the formation of closed circular target structures out of eight homogeneous components. The assembly yield of physical experiments with 8 to 40 components is compared with the predictions of a computational model, and the model is found to accurately predict both the mean and standard deviation of the experimental yield.
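
The abstract does not detail the computational model, but a minimal Monte Carlo sketch of the underlying yield question (in Python; the random pairwise-merging rule and the treatment of over-long chains as dead substructures are assumptions for illustration, not the authors' model) shows how incompatible substructures depress yield and how a mean and standard deviation of yield arise across trials:

```python
import random
import statistics

def simulate_yield(n_components: int, ring_size: int = 8) -> float:
    """One trial: chains merge pairwise at random; a chain reaching
    exactly ring_size closes into a target ring, while longer chains
    are treated as dead, incompatible substructures."""
    chains = [1] * n_components      # chain lengths, all monomers at start
    rings = 0
    while len(chains) > 1:
        a, b = random.sample(range(len(chains)), 2)
        merged = chains[a] + chains[b]
        for i in sorted((a, b), reverse=True):
            chains.pop(i)
        if merged == ring_size:
            rings += 1               # complete circular target formed
        elif merged < ring_size:
            chains.append(merged)    # chain keeps growing
        # merged > ring_size: incompatible substructure, discarded
    return rings * ring_size / n_components

trials = [simulate_yield(40) for _ in range(1000)]
print(f"mean yield: {statistics.mean(trials):.3f}, "
      f"std: {statistics.stdev(trials):.3f}")
```

Running many such trials yields a predicted mean and standard deviation of yield that can be compared against experiment, which is the comparison the abstract describes.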

    From surfaces to objects: Recognizing objects using surface information and object models.

    This thesis describes research on recognizing partially obscured objects using surface information, like Marr's 2½D sketch ([MAR82]), and surface-based geometrical object models. The goal of the recognition process is to produce fully instantiated object hypotheses, with either image evidence for each feature or an explanation for its absence in terms of self-occlusion or external occlusion. The central point of the thesis is that using surface information should be an important part of the image understanding process, because surfaces are the features that directly link perception to the objects perceived (for normal "camera-like" sensing) and because surfaces make explicit the information needed to understand and cope with some visual problems (e.g. obscured features). Further, because surfaces are both the data and the model primitive, detailed recognition can be made both simpler and more complete.
    Recognition input is a surface image, which represents surface orientation and absolute depth. Segmentation criteria are proposed for forming surface patches with constant curvature character, based on surface shape discontinuities, which become labeled segmentation boundaries. Partially obscured object surfaces are reconstructed using stronger surface-based constraints. Surfaces are grouped to form surface clusters, which are 3D identity-independent solids that often correspond to model primitives. These are used here as a context within which to select models and find all object features. True three-dimensional properties of image boundaries, surfaces and surface clusters are estimated directly from the surface data.
    Models are invoked using a network formulation, where individual nodes represent potential identities for image structures. The links between nodes are defined by generic and structural relationships, and they define indirect evidence relationships for an identity. Direct evidence for the identities comes from the data properties. A plausibility computation is defined according to the constraints inherent in the evidence types. When a node acquires sufficient plausibility, the model is invoked for the corresponding image structure.
    Objects are primarily represented using a surface-based geometrical model. Assemblies are formed from subassemblies and surface primitives, which are defined using surface shape and boundaries. Variable affixments between assemblies allow flexibly connected objects. The initial object reference frame is estimated from model-data surface relationships, using correspondences suggested by invocation. With the reference frame, back-facing, tangential, partially self-obscured, totally self-obscured and fully visible image features are deduced. From these, the oriented model is used to find evidence for missing visible model features. If no evidence is found, the program attempts to find evidence that the features are obscured by an unrelated object. Structured objects are constructed using a hierarchical synthesis process. Fully completed hypotheses are verified using both existence and identity constraints based on surface evidence.
    Each of these processes is defined by its computational constraints and is demonstrated on two test images. These test scenes are interesting because they contain partially and fully obscured object features, a variety of surface and solid types, and flexibly connected objects. All modeled objects were fully identified, analyzed to the level represented in their models, and acceptably spatially located. Portions of this work have been reported elsewhere ([FIS83], [FIS85a], [FIS85b], [FIS86]) by the author.
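
To make the invocation idea concrete, here is a minimal hypothetical sketch (in Python; all node names, link weights, and the update rule are invented for illustration and are not taken from the thesis) of a network in which a node's plausibility combines direct evidence from data properties with indirect support propagated over structural links:

```python
# Hypothetical invocation network: each node pairs a model identity with
# an image structure; plausibility mixes direct evidence from data
# properties with indirect evidence over generic/structural links.
direct = {"cylinder/A": 0.7, "cup-body/A": 0.4, "cup-handle/B": 0.6}
links = {  # (node, supporting node) -> structural link weight (invented)
    ("cup-body/A", "cup-handle/B"): 0.5,
    ("cup-handle/B", "cup-body/A"): 0.5,
}

plaus = dict(direct)
for _ in range(10):  # relax toward a fixed point
    for node in plaus:
        support = sum(w * plaus[other]
                      for (n, other), w in links.items() if n == node)
        plaus[node] = min(1.0, 0.5 * direct.get(node, 0.0) + 0.5 * support)

for node, p in sorted(plaus.items(), key=lambda kv: -kv[1]):
    print(f"{node}: plausibility {p:.2f}")
    # a model would be invoked once plausibility crosses a threshold
```

A real implementation would derive the link weights from the model base and the evidence-type constraints described above rather than fixing them by hand.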

    Studying nonlinear optical properties of the plant light-harvesting protein LHCII

    Ultra-fast excitation energy transfer (EET) between the excited states of organic pigment molecules in photosynthetic antenna complexes is among the fastest biological processes observed so far. Such EET phenomena have been studied in particular for the main light-harvesting complex of higher plants (LHCII), which appears to play an exceptional role in the regulatory function (i.e. light adaptation) of the plant photosynthetic apparatus.
    The structure of this pigment-protein complex, which harbors more than 50 % of the total plant chlorophyll (Chl) content, is known at 3.4 Å resolution and reveals the binding sites of 5 Chl b and 7 Chl a per monomeric unit. Based on this structure, EET calculations are (in principle) possible on the molecular level under the assumption of Förster-type transfer. However, several molecular features, such as the mutual orientations of the pigments and the electronic interactions between their transition dipoles, are still rather uncertain. Since conventional spectroscopic techniques can hardly reveal the corresponding parameters, this work evaluates newer laser spectroscopic techniques, in particular nonlinear polarization spectroscopy in the frequency domain (NLPF), with respect to these questions. Initially, the suitability and significance of the method when applied to highly complex structures like pigment-protein complexes were studied by modeling heterogeneous, LHCII-like absorption systems in NLPF experiments. Building on recent improvements in NLPF theory from a parallel theoretical investigation [1], these simulations clarified the sensitivity of the NLPF method to numerous physical parameters. As a major consequence, unambiguous evaluation of NLPF measurements appears to require substantial additional information about the investigated system. Accordingly, several supplementary methods were employed: nonlinear absorption with fs pulses, intensity-dependent NLPF, single-molecule spectroscopy, and NLPF at low temperatures. These investigations revealed unique information about excitonic interactions between certain Chls, including implications for the overall EET scheme. The sub-structure model for the Qy absorption region of LHCII was further substantially improved by the analysis of in vitro reconstituted proteins with selectively modified Chl binding residues in the amino-acid sequence. The sum of all complementary investigations finally allowed the evaluation of room-temperature NLPF measurements of trimeric LHCII. Owing to the unique selectivity of the spectra to individual transition-dipole directions, several orientation parameters were obtained. In this respect, the NLPF method has indeed shown high potential compared with conventional techniques such as circular dichroism spectroscopy. Moreover, the understanding of nonlinear phenomena in the Qy absorption region of LHCII as a consequence of molecular interaction provides further groundwork for other nonlinear optical experiments. In conclusion, the implications of the obtained results for the structure-function relationship of intra- and inter-complex EET are elucidated.
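
For reference, the Förster-type transfer invoked above is governed by the standard point-dipole rate expression (a textbook relation, not a formula quoted from this thesis); it makes explicit why the mutual pigment orientations, entering through the factor $\kappa^2$, matter so much for EET calculations:

```latex
% Textbook Foerster rate between donor D and acceptor A:
% tau_D donor lifetime, Phi_D donor fluorescence quantum yield,
% n refractive index, F_D normalized donor emission spectrum,
% eps_A acceptor extinction coefficient, r pigment separation.
\begin{align*}
  k_F &= \frac{1}{\tau_D}\left(\frac{R_0}{r}\right)^{6},\\
  R_0^{6} &\propto \frac{\kappa^{2}\,\Phi_D}{n^{4}}
            \int_0^{\infty} F_D(\lambda)\,\epsilon_A(\lambda)\,\lambda^{4}\,\mathrm{d}\lambda,\\
  \kappa &= \hat{\mu}_D\cdot\hat{\mu}_A
            - 3\,(\hat{\mu}_D\cdot\hat{r})(\hat{\mu}_A\cdot\hat{r}).
\end{align*}
```

The $r^{-6}$ distance dependence and the orientation factor $\kappa$ are exactly the quantities that the transition-dipole directions extracted from the NLPF measurements constrain.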

    Cluster Lenses

    Clusters of galaxies are the most recently assembled, massive, bound structures in the Universe. As predicted by General Relativity, given their masses, clusters strongly deform space-time in their vicinity and act as some of the most powerful gravitational lenses in the Universe. Light rays traversing clusters from distant sources are deflected, so the images of these distant objects appear distorted and magnified. Lensing by clusters occurs in two regimes, each with unique observational signatures. The strong lensing regime is characterized by effects readily seen by eye, namely the production of giant arcs, multiple images, and arclets. The weak lensing regime is characterized by small deformations in the shapes of background galaxies that are only detectable statistically. Cluster lenses have been exploited successfully to address several important current questions in cosmology: (i) the study of the lenses themselves, understanding cluster mass distributions, issues pertaining to cluster formation and evolution, and constraints on the nature of dark matter; (ii) the study of the lensed objects, probing the properties of the background lensed galaxy population, which is statistically at higher redshift and of lower intrinsic luminosity, thus enabling the probing of galaxy formation at the earliest times right up to the Dark Ages; and (iii) the study of the geometry of the Universe: as the strength of lensing depends on the ratios of angular diameter distances between the lens, source, and observer, lens deflections are sensitive to the values of the cosmological parameters and offer a powerful geometric tool to probe Dark Energy. In this review, we present the basics of cluster lensing and provide a current status report of the field.
    Comment: About 120 pages. Published in Open Access at: http://www.springerlink.com/content/j183018170485723/
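
To illustrate point (iii), the Einstein radius of a simple point-mass lens (a textbook relation, not quoted from the review) shows the dependence on the ratio of angular diameter distances explicitly:

```latex
% Einstein radius of a point-mass lens of mass M: the lensing strength
% scales with D_ds / (D_d D_s), the combination of angular diameter
% distances (observer-lens D_d, observer-source D_s, lens-source D_ds)
% that carries the cosmological information.
\theta_E = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{ds}}{D_d\,D_s}}
```

Because the distance ratio depends on the expansion history, measuring lensing strength for sources at several redshifts behind the same cluster constrains Dark Energy geometrically.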

    Predicting the Most Tractable Protein Surfaces in the Human Proteome for Developing New Therapeutics

    A critical step in the target identification phase of drug discovery is evaluating druggability, i.e., whether a protein can be targeted with high affinity using drug-like ligands. The overarching goal of my PhD thesis is to build a machine learning model that predicts the binding affinity that can be attained when addressing a given protein surface. I begin by examining the lead optimization phase of drug development, where I find that in a test set of 297 examples, 41 of these (14%) change binding mode when a ligand is elaborated. My analysis shows that while certain ligand physicochemical properties predispose changes in binding mode, particularly those properties that define fragments, simple structure-based modeling proves far more effective for identifying substitutions that alter the binding mode. My proposed measure of RMAC (RMSD after minimization of the aligned complex) can help determine whether a given ligand can be reliably elaborated without changing binding mode, thus enabling straightforward interpretation of the resulting structure-activity relationships.
    Moving forward, I next noted that a very popular machine learning algorithm for regression tasks, random forest, has a systematic bias in the predictions it generates; this bias is present in both real-world and synthetic datasets. To address this, I define a numerical transformation that can be applied to the output of random forest models. This transformation fully removes the bias in the resulting predictions and yields improved predictions across all datasets.
    Finally, taking advantage of this improved machine learning approach, I describe a model that predicts the “attainable binding affinity” for a given binding pocket on a protein surface. This model uses 13 physicochemical and structural features calculated from the protein structure, without any information about the ligand. While details of the ligand must (of course) contribute somewhat to the binding affinity, I find that this model still recapitulates the binding affinity for 848 different protein-ligand complexes (across 230 different proteins) with a correlation coefficient of 0.57. I further find that this model is not limited to “traditional” drug targets, but rather works just as well for emerging “non-traditional” drug targets such as inhibitors of protein-protein interactions. Collectively, I anticipate that the tools and insights generated in the course of my PhD research will play an important role in facilitating the key target selection phase of drug discovery projects.
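
The abstract does not spell out the bias-removing transformation. As a hedged sketch of the general idea, one common way to remove a systematic regression bias is to calibrate a model's out-of-fold predictions against the true values and then apply the fitted mapping to new predictions; the use of isotonic regression below is my assumption for illustration, not necessarily the thesis' transformation.

```python
# Sketch: correcting a systematic bias in random-forest regression by
# calibrating out-of-fold predictions against true values.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import cross_val_predict

X, y = make_regression(n_samples=2000, n_features=10, noise=10.0,
                       random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0)

# Out-of-fold predictions expose the bias without leaking training data.
oof = cross_val_predict(rf, X, y, cv=5)

# Fit a monotone map from raw predictions to targets; RF regression
# typically pulls extreme values toward the mean, which this undoes.
calib = IsotonicRegression(out_of_bounds="clip").fit(oof, y)

rf.fit(X, y)
raw = rf.predict(X)
corrected = calib.predict(raw)
print("raw prediction range:", raw.min(), raw.max())
print("corrected range:", corrected.min(), corrected.max())
```

The printed ranges illustrate the effect: raw random-forest predictions span a narrower interval than the targets, and the calibration map stretches them back out.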

    Squeak and Rattle Prediction for Robust Product Development in the automotive industry

    Squeak and rattle are nonstationary, irregular, and impulsive sounds that are audible inside the car cabin. For decades, customer complaints about squeak and rattle have been, and still are, among the top quality issues in the automotive industry. These annoying sounds are perceived as indications of quality defects and impose warranty costs on car manufacturers. Today, quality improvements regarding the persistent types of sound in the car, as well as the increasing popularity of electric engines as green and quiet propulsion solutions, make attenuating annoying sounds like squeak and rattle more necessary than in the past. Economical and robust solutions to this problem are to be sought in the pre-design-freeze phases of product development and through design-concept-related practices. To achieve this goal, prediction and evaluation tools and methods are required to deal with squeak and rattle quality issues upfront in the product development process. The available tools and methods for the prediction of squeak and rattle sounds in the pre-design-freeze phases of a car development process are not yet sufficiently mature. The complexity of squeak and rattle events, the existing knowledge gap about the mechanisms behind these sounds, the lack of accurate simulation and post-processing methods, and the computational cost of complex simulations are some of the significant hurdles behind this immaturity. This research addresses the problem by identifying a framework for the prediction of squeak and rattle sounds based on a cause-and-effect diagram. The main domains of this framework, along with the elements and sub-contributors to the problem in each domain, are determined through literature studies, field explorations, and descriptive studies conducted on the subject. Further, improvements to the squeak and rattle evaluation and prediction methods are proposed through prescriptive studies, and the application of some of the proposed methods to industrial problems in the automotive industry is demonstrated and examined.
    The outcome of this study enhances the understanding of some of the parameters engaged in squeak and rattle generation. Simulation methods are proposed to actively involve the contributing factors studied in this work in squeak and rattle risk evaluation. To enhance the efficiency and accuracy of the risk evaluation process, methods were investigated and proposed for efficient system excitation, for modelling accuracy and efficiency, and for quantification of the response in the time and frequency domains. The demonstrated simulation methods, together with the improved understanding of the mechanisms behind the phenomenon, can facilitate a more accurate and robust prediction of squeak and rattle risk during the pre-design-freeze stages of car development.
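
As one concrete example of quantifying a response in the time and frequency domains, the sketch below (in Python; the synthetic signal, sampling rate, and the choice of crest factor as an impulsiveness indicator are my assumptions, not methods stated in the thesis) computes simple indicators from a contact-point vibration signal:

```python
# Illustrative only: rattle is impulsive, so a simple time-domain
# indicator is the crest factor of the response at a contact point;
# a frequency-domain view comes from the FFT of the same signal.
import numpy as np

fs = 2000                                   # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
signal = 0.1 * np.sin(2 * np.pi * 30 * t)   # underlying structural vibration
signal[::400] += 1.0                        # impulsive contact events

# Crest factor: peak over RMS; high values suggest rattle-like impacts.
crest = np.max(np.abs(signal)) / np.sqrt(np.mean(signal ** 2))
print(f"crest factor: {crest:.1f}")

# Magnitude spectrum of the same response (DC bin skipped).
spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
print("dominant frequency:", freqs[np.argmax(spectrum[1:]) + 1], "Hz")
```

In a simulation workflow, such indicators would be evaluated on the relative displacement or acceleration predicted at candidate squeak and rattle interfaces.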

    Design and analysis of geodesic tensegrity structures with agriculture applications.

    "This report aims to promulgate and elucidate the effective application of scientific principles in the design and optimisation of tensegrity structures for practical applications. By developing the intrinsic geometry of the geodesic dome and applying tensegrity design principles, a range of efficient, lightweight, modular structures are developed and broadly classified as geodesic tensegrity structures. Novel systems for clustering domes in two dimensions are considered and the analytical geometry required to generate various dome structures is derived from first principles. Computational methods for performing the design optimisation of tensegrity structures are reviewed and explained in detail. It is shown how an efficient, unified computational framework, suitable for the analysis of tensegrity structures in general, may be developed using computations which involve the equilibrium matrix of a structure. The importance of exploiting symmetry to simplify structural computations is highlighted throughout, as this is especially relevant in the analysis of large dome structures. A novel approach to generating the global equilibrium matrix of a structure from element vectors and implementing symmetry subspace methods is presented, which relies on the choice of an appropriate coordinate system to reflect the symmetry of a structure. A new algorithm is developed for implementing symmetry subspace methods in a computer program which enables the symmetry-adapted vector basis to be generated more efficiently. Methods for analysing kinematically indeterminate tensegrities and prestressed mechanisms and performing the prestress optimisation of a tensegrity structure are briefly reviewed and explained. Efficient tensegrity modular systems are developed for constructing a range of double-layer geodesic tensegrity domes and grids, based on the pioneering work of the artist, Kenneth Snelson. Finally, the cultural significance of tensegrity technology is illustrated by focusing on a range of novel applications in agriculture and sustainable development and adopting the holistic, "design science, "approach advocated by Buckminster Fuller.

    Sixth NASTRAN (R) Users' Colloquium

    Papers are presented on NASTRAN programming and substructuring methods, as well as on fluid and thermal applications. Specific applications and capabilities of NASTRAN are also delineated, along with general auxiliary programs.

    Twentieth NASTRAN (R) Users' Colloquium

    The proceedings of the conference are presented. Comprehensive general papers are presented on applications of finite elements in engineering, comparisons with other approaches, unique applications, pre- and post-processing with other auxiliary programs, and new methods of analysis with NASTRAN.