167 research outputs found

    Synergistic Solutions on MultiSets

    Karp et al. (1988) described Deferred Data Structures for Multisets as "lazy" data structures which partially sort data to support online rank and select queries, performing the minimum amount of work in the worst case over instances of fixed size n and fixed number of queries q. Barbay et al. (2016) refined this approach to take advantage of the gaps between the positions hit by the queries (i.e., the structure in the queries). We develop new techniques to further refine this approach and take advantage, all at once, of the structure (i.e., the multiplicities of the elements), of notions of local order (i.e., the number and sizes of runs) and of global order (i.e., the number and positions of existing pivots) in the input, as well as of the structure and order in the sequence of queries. Our main result is a synergistic deferred data structure which outperforms all solutions in the comparison model that take advantage of only a subset of these features. As intermediate results, we describe two new synergistic sorting algorithms, which take advantage of notions of structure and order (local and global) in the input, improving upon previous results which take advantage only of the structure (Munro and Spira 1979) or of the local order (Takaoka 1997) in the input; and one new multiselection algorithm which takes advantage not only of the order and structure in the input, but also of the structure in the queries.
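    To make the "lazy" partial-sorting idea concrete, here is a minimal Python sketch in the spirit of Karp et al. (1988): select(k) quickselect-partitions only the interval between the nearest pivots fixed by earlier queries, and memoizes every new pivot so later queries reuse the work. The class name, the fixed-pivot set and the randomized Lomuto partition are illustrative choices, not the papers' exact constructions.

```python
import random

class DeferredMultiset:
    """Sketch of a lazy deferred data structure: pivots fixed by earlier
    queries are memoized, so each select() only partitions inside the one
    still-unsorted interval that contains the queried rank."""

    def __init__(self, data):
        self.a = list(data)
        self.fixed = {-1, len(self.a)}   # sentinel "pivots" bracketing the array

    def _partition(self, lo, hi):
        # Standard randomized Lomuto partition on a[lo..hi].
        i = random.randint(lo, hi)
        self.a[i], self.a[hi] = self.a[hi], self.a[i]
        pivot, store = self.a[hi], lo
        for j in range(lo, hi):
            if self.a[j] < pivot:
                self.a[j], self.a[store] = self.a[store], self.a[j]
                store += 1
        self.a[store], self.a[hi] = self.a[hi], self.a[store]
        return store

    def select(self, k):
        """Return the k-th smallest element (0-based), sorting only as much
        as needed; every pivot found is reused by subsequent queries."""
        lo = max(p for p in self.fixed if p < k) + 1
        hi = min(p for p in self.fixed if p > k) - 1
        while lo < hi:
            p = self._partition(lo, hi)
            self.fixed.add(p)
            if p == k:
                return self.a[k]
            lo, hi = (p + 1, hi) if p < k else (lo, p - 1)
        self.fixed.add(k)                # interval of size 1: already in place
        return self.a[k]

dm = DeferredMultiset([5, 3, 8, 3, 1, 9, 2])
print(dm.select(0), dm.select(3))        # -> 1 3 (second query reuses pivots)
```

    Rank queries can be supported in the same lazy style; the synergistic refinements described in the abstract additionally exploit pre-existing runs, pivots and multiplicities, which this toy version ignores.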

    Structural Analysis Algorithms for Nanomaterials


    Self learning strategies for experimental design and response surface optimization

    Most preset RSM designs offer ease of implementation and good performance over a wide range of process and design optimization applications. However, these designs often lack the ability to adapt to the characteristics of the application and the experimental space so as to reduce the number of experiments necessary. Hence, they are not cost-effective for applications where the cost of experimentation is high or where experimentation resources are limited. In this dissertation, we present a number of self-learning strategies for the optimization of different types of response surfaces for industrial experiments with noise, high experimentation cost, and demanding design optimization performance requirements. The proposed approach is a sequential adaptive experimentation approach which combines concepts from nonlinear optimization, non-parametric regression, statistical analysis, and response surface optimization. The proposed strategies use the information gained from previous experiments to design the subsequent experiment, simultaneously reducing the region of interest and identifying factor combinations for new experiments. Their major advantage is experimentation efficiency: for a given response target, they identify the input factor combination (or a region containing it) in fewer experiments than the classical designs. Through extensive simulated experiments and real-world case studies, we show that the proposed ASRSM method clearly outperforms the classical CCD and BBD methods, performs better than A-, D- and V-optimal designs on average, and compares favorably with global optimization methods including Gaussian processes and RBF.
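    As a rough illustration of the sequential adaptive idea (a toy sketch, not the dissertation's ASRSM algorithm), the loop below runs a small design inside the current region of interest, recenters on the best observed response, and shrinks the region each stage, so later experiments concentrate near the optimum. The stage size, shrink factor and recentering rule are placeholder choices.

```python
import numpy as np

def adaptive_doe(f, bounds, n_per_stage=6, n_stages=10, shrink=0.6, seed=0):
    """Toy sequential adaptive experimentation: sample a small design in the
    region of interest, recenter on the incumbent best, shrink, repeat."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T          # bounds: [(lo, hi), ...]
    center, half = (lo + hi) / 2.0, (hi - lo) / 2.0
    best_x, best_y = center, f(center)
    for _ in range(n_stages):
        # Small design inside the current region of interest, clipped to bounds.
        X = rng.uniform(np.maximum(lo, center - half),
                        np.minimum(hi, center + half),
                        size=(n_per_stage, lo.size))
        for x in X:
            y = f(x)                                    # one (noisy) experiment
            if y < best_y:
                best_x, best_y = x, y
        center, half = best_x, half * shrink            # recenter and shrink ROI
    return best_x, best_y

# Example: minimize a noisy quadratic over [-5, 5]^2 with ~60 experiments.
noisy = lambda x: float(np.sum((x - 1.5) ** 2) + np.random.normal(0, 0.1))
print(adaptive_doe(noisy, [(-5, 5), (-5, 5)]))
```

    The dissertation's method replaces the naive "best observed point" rule above with nonparametric regression and statistical analysis of the responses; the region-shrinking structure is what this sketch is meant to convey.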

    Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though a huge volume of work has been done, many problems remain unsolved. Automation is one of the key focus areas in this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops, with complex rigorous boundaries and vertical walls that connect the rooftops to the ground. The graph cuts algorithm is used to separate vegetative elements from the rest of the scene content, based on a local analysis of the properties of the implicit surface patch around each point. The ground terrain and building rooftop footprints are then extracted using the developed strategy, a two-step hierarchical Euclidean clustering. The method presented here adopts a divide-and-conquer scheme: once the building footprints are segmented from the terrain and vegetative areas, the whole scene is divided into individual independent processing units representing potential rooftop regions. For each individual building region, significant features on the rooftop are further detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each building rooftop feature is calculated using a minimum-bounding-box fitting technique, and is used to guide the refinement of the shapes and boundaries of the rooftop parts. The boundaries of all these features are refined to produce a strict description. Once the description of the rooftops is achieved, polygonal mesh models are generated by creating surface patches with outlines defined by the detected vertices, producing triangulated mesh models. These triangulated mesh models are suitable for many applications, such as 3D mapping, urban planning and augmented reality.
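    A hedged sketch of the region-growing step with a surface-smoothness constraint: a neighbor joins the current region when its normal deviates from the current point's normal by less than an angular threshold, which groups planar rooftop facets together. The seeding order, thresholds and neighborhood radius below are illustrative; the thesis' specifically designed algorithm may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def region_grow(points, normals, angle_thresh_deg=10.0, radius=1.0, min_size=30):
    """Grow smooth regions over an (N, 3) point array with unit normals:
    returns a label per point (-2 marks regions too small to keep)."""
    tree = cKDTree(points)
    cos_thresh = np.cos(np.deg2rad(angle_thresh_deg))
    labels = np.full(len(points), -1)
    region_id = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        queue, members = [seed], [seed]
        labels[seed] = region_id
        while queue:
            i = queue.pop()
            for j in tree.query_ball_point(points[i], r=radius):
                # Smoothness constraint: nearly parallel normals only.
                if labels[j] == -1 and abs(normals[i] @ normals[j]) >= cos_thresh:
                    labels[j] = region_id
                    queue.append(j)
                    members.append(j)
        if len(members) < min_size:
            labels[np.array(members)] = -2   # reject tiny spurious regions
        else:
            region_id += 1
    return labels
```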

    A methodology for rapid vehicle scaling and configuration space exploration

    Drastic changes in aircraft operational requirements and the emergence of new enabling technologies often occur symbiotically, with advances in technology inducing new requirements and vice versa. These changes sometimes lead to the design of vehicle concepts for which no prior art exists; they lead to revolutionary concepts. In such cases the basic form of the vehicle geometry can no longer be determined through an ex ante survey of prior art as depicted by aircraft concepts in the historical domain. Ideally, baseline geometries for revolutionary concepts would be the result of exhaustive configuration space exploration and optimization: numerous component layouts and their implications for the minimum external dimensions of the resultant vehicle would be evaluated, and the dimensions of the minimum enclosing envelope for the best component layout(s) (as per the design need) would then be used as a basis for the selection of a baseline geometry. Unfortunately, layout design spaces are inherently large, and the key contributing analysis, i.e., collision detection, can be very expensive as well. Even when an appropriate baseline geometry has been identified, another hurdle, i.e., vehicle scaling, has to be overcome. Through the design of a notional Cessna C-172R powered by a liquid hydrogen Proton Exchange Membrane (PEM) fuel cell, it has been demonstrated that the various forms of vehicle scaling, i.e., photographic and historical-data-based scaling, can result in highly sub-optimal results even for very small O(10^-3) scale factors. There is therefore a need for higher-fidelity vehicle scaling laws, especially since emergent technologies tend to be volumetrically and/or gravimetrically constrained when compared to incumbents. The Configuration-space Exploration and Scaling Methodology (CESM) is postulated herein as a solution to the above-mentioned challenges. This bottom-up methodology entails the representation of component or sub-system geometries as matrices of points in 3D space. These typically large matrices are reduced using minimal convex sets, or convex hulls. This reduction leads to significant gains in collision detection speed at minimal approximation expense. (The Gilbert-Johnson-Keerthi algorithm is used for collision detection purposes in this methodology.) Once the components are laid out, their collective convex hull (from here on referred to as the super-hull) is used to approximate the inner mold line of the minimum enclosing envelope of the vehicle concept. A sectional slicing algorithm is used to extract the sectional dimensions of this envelope, and an offset is added to these dimensions to arrive at the sectional fuselage dimensions. Once the lift and control surfaces are added, vehicle-level objective functions can be evaluated and compared to other designs. For each design, changes in the super-hull dimensions in response to perturbations in requirements can be tracked and regressed to create custom geometric scaling laws. The regressions are based on dimensionally consistent parameter groups in order to yield dimensionally consistent and thus physically meaningful laws. CESM enables the designer to maintain design freedom by portably carrying multiple designs deeper into the design process. Also, since CESM is a bottom-up approach, all proposed baseline concepts are implicitly volumetrically feasible. Furthermore, the scaling laws developed from custom data for each concept are subject to less design noise than, say, regression-based approaches.
Through these laws, key physics-based characteristics of vehicle subsystems, such as energy density, can be mapped onto key system-level metrics such as fuselage volume or take-off gross weight. These laws can then substitute for some historical-data-based analyses, thereby improving the fidelity of the analyses and reducing design time.
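    The hull-reduction and slicing steps can be sketched as follows. This is a toy stand-in under stated assumptions: scipy's ConvexHull replaces the methodology's own reduction code, and a simple axis-binned slice with a fixed skin offset stands in for its sectional slicing algorithm.

```python
import numpy as np
from scipy.spatial import ConvexHull

def super_hull(component_point_sets):
    """Reduce each component's point matrix to its convex-hull vertices and
    pool them: an approximation of the CESM 'super-hull' point set."""
    hull_pts = []
    for pts in component_point_sets:
        pts = np.asarray(pts, dtype=float)
        hull = ConvexHull(pts)
        hull_pts.append(pts[hull.vertices])    # keep only hull vertices
    return np.vstack(hull_pts)

def sectional_dimensions(hull_points, axis=0, n_slices=20, offset=0.05):
    """Slice the super-hull along one axis and report per-section extents in
    the two remaining axes, padded by a skin offset (toy fuselage sections)."""
    lo, hi = hull_points[:, axis].min(), hull_points[:, axis].max()
    edges = np.linspace(lo, hi, n_slices + 1)
    other = [i for i in range(3) if i != axis]
    dims = []
    for a, b in zip(edges[:-1], edges[1:]):
        s = hull_points[(hull_points[:, axis] >= a) & (hull_points[:, axis] < b)]
        if len(s):
            dims.append(tuple(float(np.ptp(s[:, i])) + 2 * offset for i in other))
    return dims
```

    Collision detection between the hull-reduced components would run on these vertex sets (the methodology names GJK for that purpose); the per-section extents returned here are the quantities that the custom scaling regressions would track as requirements are perturbed.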

    Bending rigidity, supercoiling and knotting of ring polymers: models and simulations

    The first part of the thesis focused on the interplay between the knotting propensity and the bending rigidity of equilibrated ring polymers. We found a surprising result: the equilibrium incidence of knots has a strongly non-monotonic dependence on bending rigidity, with a maximum at intermediate flexural rigidities. We then provided a quantitative framework, based on the balance of bending energy and configurational entropy, that allowed us to rationalize this counter-intuitive effect. We next extended the investigation to rings with a much larger number of beads, via a heuristic model mapping between our semiflexible rings of beads and self-avoiding rings of cylinders. Via this mapping, we not only confirmed the unimodal knotting profile for chains of 1,000 beads, but further found that chains of > 20,000 beads are expected to feature a bi-modal profile. We believe it would be most interesting to direct future efforts to confirming this transition from uni- to bi-modality using advanced sampling techniques for very long polymer rings. The second part of the thesis focused on the interplay of DNA knots and supercoiling, which are typically simultaneously present in vivo. We first studied this interplay using oxDNA, an accurate mesoscopic DNA model, to study rings of thousands of base pairs tied in complex knots, with or without negative supercoiling (as appropriate for bacterial plasmids). By monitoring the dynamics of the DNA rings we found that the simultaneous presence of knots and supercoiling, and only their simultaneous presence, leads to a dramatic slowing down of the system's reconfiguration dynamics. In particular, the essential tangles in the knotted region acquire a very long-lived character that, we speculate, could aid their recognition and simplification by topoisomerases. Finally, motivated by the recent experimental breakthrough that detected knots in eukaryotic DNA, we investigated the relationship between compactness, writhe and knotting probability. The model was tuned to capture some of the salient properties of yeast minichromosomes, which were shown experimentally to become transiently highly knotted during transcription.
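    For the writhe analysis mentioned above, a standard midpoint-rule discretization of the Gauss double integral gives a quick numerical estimate for a closed ring of beads. This is a generic sketch, not the thesis' simulation code.

```python
import numpy as np

def writhe(ring):
    """Estimate the writhe of a closed polygonal ring via the discretized
    Gauss double integral; 'ring' is an (N, 3) array of bead positions,
    with bead N-1 connecting back to bead 0."""
    r = np.asarray(ring, dtype=float)
    seg = np.roll(r, -1, axis=0) - r          # bond vectors t_i
    mid = r + 0.5 * seg                       # bond midpoints
    n, wr = len(r), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = mid[i] - mid[j]
            dist = np.linalg.norm(d)
            if dist > 1e-12:
                # Factor 2: the integrand is symmetric under (i, j) swap.
                wr += 2.0 * np.dot(np.cross(seg[i], seg[j]), d) / dist**3
    return wr / (4.0 * np.pi)

# Example: a flat circle has writhe ~ 0.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
print(writhe(circle))
```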

    Le nuage de point intelligent (The Smart Point Cloud)

    Discrete spatial datasets known as point clouds often lay the groundwork for decision-making applications. For example, we can use such data as a reference for autonomous cars and robot navigation, as a layer for floor-plan creation and building construction, and as a digital asset for environment modelling and incident prediction. Applications are numerous, and potentially increasing if we consider point clouds as digital reality assets. Yet this expansion faces technical limitations, mainly the lack of semantic information within point ensembles. Connecting knowledge sources is still a very manual and time-consuming process that suffers from error-prone human interpretation. This highlights a strong need for domain-related data analysis to create coherent and structured information. The thesis addresses automation problems in point cloud processing to create intelligent environments, i.e. virtual copies that can be used and integrated in fully autonomous reasoning services. We tackle point cloud questions associated with knowledge extraction (particularly segmentation and classification), structuration, visualisation and interaction with cognitive decision systems. We propose to connect both point cloud properties and formalized knowledge to rapidly extract pertinent information using domain-centered graphs. The dissertation delivers the concept of a Smart Point Cloud (SPC) Infrastructure, which serves as an interoperable and modular architecture for unified processing. It permits easy integration into existing workflows and multi-domain specialization through device knowledge, analytic knowledge or domain knowledge. Concepts, algorithms, code and materials are given to replicate the findings and extend current applications.
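    As a minimal illustration of the domain-centered graph idea (all node names, attributes and relations below are ours, not the SPC Infrastructure's actual schema), point-cloud segments, device knowledge and domain concepts can live in one queryable graph, so a reasoning service asks about semantics instead of raw points:

```python
import networkx as nx

# Hypothetical graph linking a segmented point-cloud region to formalized
# knowledge: a domain concept from an ontology and the acquiring sensor.
g = nx.DiGraph()
g.add_node("segment_042", kind="segment", n_points=18250, planarity=0.97)
g.add_node("Wall", kind="domain_concept", source="building ontology")
g.add_node("TLS_scan_03", kind="device_knowledge", sensor="terrestrial lidar")
g.add_edge("segment_042", "Wall", relation="classified_as", confidence=0.91)
g.add_edge("segment_042", "TLS_scan_03", relation="acquired_by")

# Query: which segments are classified as walls with high confidence?
walls = [u for u, v, d in g.edges(data=True)
         if v == "Wall" and d.get("relation") == "classified_as"
         and d.get("confidence", 0) > 0.8]
print(walls)                                  # -> ['segment_042']
```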

    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? To reply to these questions properly and efficiently, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models via sensor transformation, recognition, matching and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling.