297 research outputs found

    Discovering Regularity in Point Clouds of Urban Scenes

    Full text link
    Despite the apparent chaos of the urban environment, cities are actually replete with regularity. From the grid of streets laid out over the earth, to the lattice of windows thrown up into the sky, periodic regularity abounds in the urban scene. Just as salient, though less uniform, are the self-similar branching patterns of trees and vegetation that line streets and fill parks. We propose novel methods for discovering these regularities in 3D range scans acquired by a time-of-flight laser sensor. The applications of this regularity information are broad, and we present two original algorithms. The first exploits the efficiency of the Fourier transform for the real-time detection of periodicity in building facades. Periodic regularity is discovered online by performing a plane sweep across the scene and analyzing the frequency space of each column in the sweep. The simplicity and online nature of this algorithm allow it to be embedded in scanner hardware, making periodicity detection a built-in feature of future 3D cameras. We demonstrate the usefulness of periodicity in view registration, compression, segmentation, and facade reconstruction. The second algorithm leverages the hierarchical decomposition and spatial locality of the wavelet transform to find stochastic parameters for procedural models that succinctly describe vegetation. These procedural models facilitate the generation of virtual worlds for architecture, gaming, and augmented reality. The self-similarity of vegetation can be inferred using multi-resolution analysis to discover the underlying branching patterns. We present a unified framework of these tools, enabling the modeling, transmission, and compression of high-resolution, accurate, and immersive 3D images.
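
    The plane-sweep detector itself is not given in the abstract, but its core step, locating a dominant peak in the frequency spectrum of one sweep column, can be sketched in a few lines. A minimal Python illustration (the function name, bin spacing, and peak-dominance threshold are our assumptions, not the authors' implementation):

```python
import numpy as np

def dominant_period(column, spacing=0.05, min_cycles=3):
    """Estimate the dominant spatial period of one plane-sweep column.

    `column` is a 1D occupancy signal sampled along the facade
    (e.g. point count per bin); `spacing` is the bin size in metres.
    Returns the period in metres, or None if no clear peak exists.
    All parameter values here are illustrative assumptions."""
    signal = column - column.mean()            # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=spacing)
    # Ignore frequencies too low to show at least min_cycles repetitions.
    valid = freqs >= min_cycles / (len(signal) * spacing)
    if not valid.any():
        return None
    k = int(np.argmax(np.where(valid, spectrum, 0.0)))
    # Accept only a peak that clearly dominates the spectral background.
    if spectrum[k] < 4.0 * spectrum[valid].mean():
        return None
    return 1.0 / freqs[k]
```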

    Impact of the inherent periodic structure on the effective medium description of left-handed and related meta-materials

    Full text link
    We study the frequency dependence of the effective electromagnetic parameters of left-handed and related meta-materials of the split ring resonator and wire type. We show that the reduced translational symmetry (periodic structure) inherent to these meta-materials influences their effective electromagnetic response. To account for this periodicity, we formulate a periodic effective medium model which enables us to distinguish the resonant behavior of the electromagnetic parameters from effects of the periodicity of the structure. We use this model to analyze numerical data for the transmission and reflection of periodic arrays of split ring resonators, thin metallic wires, and cut wires, as well as of left-handed structures. The present method enables us to identify the origin of the previously observed resonance/anti-resonance coupling, as well as the occurrence of negative imaginary parts in the effective permittivities and permeabilities of those materials. Our analysis shows that the periodicity of the structure can be neglected only when the wavelength of the electromagnetic wave is larger than 30 spatial periods of the investigated structure. Comment: 23 pages, 14 figures.
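
    For reference, the conventional homogeneous-slab retrieval that this periodic model refines can be sketched in a few lines: it inverts complex transmission/reflection data (S21, S11) for the refractive index n and impedance z, from which eps = n/z and mu = n*z follow. This is the standard textbook procedure whose artifacts (resonance/anti-resonance coupling, negative imaginary parts) the paper explains, not the authors' periodic effective medium model; branch handling is simplified and all names are ours:

```python
import numpy as np

def retrieve_nz(S11, S21, k0, d, branch=0):
    """Conventional homogeneous-slab retrieval of refractive index n and
    impedance z from complex S-parameters of a slab of thickness d.
    k0 is the free-space wavenumber; the arccos branch must be chosen so
    that n(omega) stays continuous across the band. This is the standard
    procedure that the paper's periodic model refines, not that model."""
    z = np.sqrt(((1 + S11)**2 - S21**2) / ((1 - S11)**2 - S21**2))
    if z.real < 0:
        z = -z                        # passivity requires Re(z) >= 0
    X = (1 - S11**2 + S21**2) / (2 * S21)
    n = (np.arccos(X) + 2 * np.pi * branch) / (k0 * d)
    if n.imag < 0:
        n = np.conj(n)                # losses require Im(n) >= 0
    eps_eff, mu_eff = n / z, n * z
    return n, z, eps_eff, mu_eff
```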

    Synthesis and Optimization of Reversible Circuits - A Survey

    Full text link
    Reversible logic circuits have been historically motivated by theoretical research in low-power electronics as well as practical improvements to bit-manipulation transforms in cryptography and computer graphics. Recently, reversible circuits have attracted interest as components of quantum algorithms, as well as in photonic and nano-computing technologies where some switching devices offer no signal gain. Research in generating reversible logic distinguishes between circuit synthesis, post-synthesis optimization, and technology mapping. In this survey, we review algorithmic paradigms --- search-based, cycle-based, transformation-based, and BDD-based --- as well as specific algorithms for reversible synthesis, both exact and heuristic. We conclude the survey by outlining key open challenges in the synthesis of reversible and quantum logic, as well as the most common misconceptions. Comment: 34 pages, 15 figures, 2 tables.
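
    To make the transformation-based paradigm concrete, here is a naive Python sketch in the spirit of the Miller-Maslov-Dueck method: it walks the truth table in increasing input order and appends multi-controlled Toffoli gates until every row is the identity. All names are ours, and practical implementations add bidirectional search and post-synthesis optimization:

```python
def mmd_synthesize(perm):
    """Naive transformation-based synthesis in the spirit of the
    Miller-Maslov-Dueck method. `perm` is the reversible truth table,
    a permutation of range(2**n); the result is a list of
    multi-controlled Toffoli gates (controls, target), with an empty
    control set denoting a NOT gate."""
    n = (len(perm) - 1).bit_length()
    f = list(perm)
    gates = []

    def ones(x):
        return [b for b in range(n) if (x >> b) & 1]

    def toffoli(controls, target):
        gates.append((tuple(controls), target))
        for idx, v in enumerate(f):     # apply the gate to every row
            if all((v >> c) & 1 for c in controls):
                f[idx] = v ^ (1 << target)

    for i in range(len(f)):
        # Set the bits that are 1 in i but 0 in f(i), controlling on the
        # current 1-bits of f(i); this never disturbs rows k < i.
        for b in ones(i & ~f[i]):
            toffoli(ones(f[i]), b)
        # Clear the bits that are 1 in f(i) but 0 in i, controlling on
        # the 1-bits of i.
        for b in ones(f[i] & ~i):
            toffoli(ones(i), b)
    return gates

# Example: a CNOT (control bit 1, target bit 0) is recovered exactly.
assert mmd_synthesize([0, 1, 3, 2]) == [((1,), 0)]
```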

    Observation of higher-order topological states on a quantum computer

    Full text link
    Programmable quantum simulators such as superconducting quantum processors and ultracold atomic lattices represent rapidly developing emergent technology that may one day qualitatively outperform existing classical computers. Yet, apart from a few breakthroughs, the range of viable computational applications with current-day noisy intermediate-scale quantum (NISQ) devices is still significantly limited by gate errors, quantum decoherence, and the number of high-quality qubits. In this work, we develop an approach that positions NISQ hardware as a particularly suitable platform for simulating multi-dimensional condensed matter systems, including lattices beyond three dimensions, which are difficult to realize or probe in other settings. By fully exploiting the exponentially large Hilbert space of a quantum chain, we encode a high-dimensional model in terms of non-local many-body interactions that can further be systematically transcribed into quantum gates. We demonstrate the power of our approach by realizing, on IBM transmon-based quantum computers, higher-order topological states in up to four dimensions, exotic phases that had never before been realized in any quantum setting. With the aid of in-house circuit compression and error mitigation techniques, we measured the topological state dynamics and their protected mid-gap spectra to a high degree of accuracy, as benchmarked against reference exact diagonalization data. The time and memory needed by our approach scale favorably with system size and dimensionality compared to exact diagonalization on classical computers. Comment: 21 pages, 8 figures in main text; 4 pages, 2 tables in supplementary information.
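
    The encoding principle, that the 2^n-dimensional Hilbert space of an n-qubit chain can hold the L^dim amplitudes of a high-dimensional lattice, can be illustrated with a toy single-particle model. The sketch below is our illustration of the general idea only; the paper's higher-order topological Hamiltonians and their gate transcription are not reproduced here:

```python
import numpy as np
from itertools import product

def hypercube_hamiltonian(L, dim, t=1.0):
    """Single-particle hopping Hamiltonian of a dim-dimensional
    hypercubic lattice with L sites per direction (open boundaries).
    With L a power of two, its L**dim amplitudes map one-to-one onto
    the computational basis of n = dim*log2(L) qubits, which is the
    kind of exponential encoding exploited in the paper (this toy
    model is our illustration, not the paper's Hamiltonian)."""
    N = L ** dim
    H = np.zeros((N, N))

    def index(site):                   # site tuple -> basis-state label
        k = 0
        for x in site:
            k = k * L + x
        return k

    for site in product(range(L), repeat=dim):
        for d in range(dim):
            if site[d] + 1 < L:        # hop to the +e_d neighbour
                nb = list(site)
                nb[d] += 1
                i, j = index(site), index(tuple(nb))
                H[i, j] = H[j, i] = -t
    return H

# A 4D lattice with L = 4 has 4**4 = 256 amplitudes: 8 qubits suffice.
H = hypercube_hamiltonian(L=4, dim=4)
reference_spectrum = np.linalg.eigvalsh(H)   # classical benchmark
```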

    Improving Usability in Procedural Modeling

    Get PDF
    This work presents new approaches and algorithms for procedural modeling geared towards user convenience and improved usability, in order to increase artists' productivity. Procedural models create geometry for 3D models from sets of rules. Existing approaches for modeling trees, buildings, and terrain are reviewed, and possible improvements are discussed. A new visual programming language for procedural modeling is presented, in which the user connects operators into visual programs called model graphs. These operators create geometry with textures, assign or evaluate variables, or control the sequence of operations. When the user moves control points using the mouse in 3D space, the model graph is executed to change the geometry interactively. Thus, model graphs combine the creativity of freehand modeling with the power of programmed modeling while displaying the program structure more clearly than text-based approaches. Usability is increased as a result of these advantages. An interactive editor for botanical trees is also demonstrated. In contrast to previous tree modeling systems, we propose linking rules, parameters and geometry to semantic entities. This has the advantage that problems of associating parameters and instances are completely avoided. When an entity is clicked in the viewport, its parameters are displayed immediately, changes are applied to selected entities, and viewport editing operations are reflected in the parameter set. Furthermore, we store the entities in a hierarchical data structure and allow the user to activate recursive traversal via selection options for all editing operations. The user may choose to apply viewport or parameter changes to a single entity or many entities at once, and only the geometry for the affected entities needs to be updated. The proposed user interface simplifies the modeling process and increases productivity. Interactive editing approaches for 3D models often allow more precise control over a model than a global set of parameters that is used to generate a shape. However, scripted procedural modeling usually generates shapes directly from a fixed set of parameters, and interactive editing mostly uses a fixed set of tools. We propose to use scripts not only to generate models, but also to manipulate them. A base script would set up the state of an object, and tool scripts would modify that state. The base script and the tool scripts generate geometry when necessary. Together, such a collection of scripts forms a template, and templates can be created for various types of objects. We examine how templates simplify the procedural modeling workflow by allowing for editing operations that are context-sensitive, flexible and powerful at the same time. Many algorithms have been published that produce geometry for fictional landscapes. There are algorithms which produce terrain with minimal setup time, allowing the level of detail to adapt as the user zooms into the landscape. However, these approaches lack plausible river networks, and algorithms that create eroded terrain with river networks require user supervision and minutes or hours of computation. In contrast, this work demonstrates an algorithm that creates terrain with plausible river networks and adaptive level of detail with no more than a few seconds of preprocessing. While the system can be configured using parameters, this text focuses on the algorithm that produces the rivers.
However, integrating more tools for user-controlled editing of terrain would be possible.
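
    A toy sketch of the model-graph idea: operators are connected into a graph, and the graph is re-executed whenever the user drags a control point. All operator names and the geometry format below are invented for illustration; the system described in the thesis is considerably richer:

```python
class Operator:
    """One node of a model graph: pulls geometry from its upstream
    operators and applies its own function when the graph is executed."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def evaluate(self, params):
        upstream = [op.evaluate(params) for op in self.inputs]
        return self.fn(params, *upstream)

# Hypothetical operators: a facade quad plus a repeated window cut-out
# whose count could be driven by an interactive control point.
def facade(params):
    w, h = params["width"], params["height"]
    return {"quads": [((0, 0), (w, 0), (w, h), (0, h))], "windows": []}

def repeat_windows(params, geom):
    n = params["window_count"]
    cell = params["width"] / max(n, 1)
    geom["windows"] = [(i * cell + 0.2 * cell, 0.3 * params["height"],
                        0.6 * cell, 0.4 * params["height"])
                       for i in range(n)]
    return geom

graph = Operator(repeat_windows, Operator(facade))

# Dragging a control point re-executes the graph with updated
# parameters, which is what keeps model-graph editing interactive.
model = graph.evaluate({"width": 12.0, "height": 9.0, "window_count": 4})
```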

    Heterogeneous parallel algorithms for computational fluid dynamics on unstructured meshes

    Get PDF
    Frontiers of computational fluid dynamics (CFD) are constantly expanding and eagerly demanding more computational resources. Currently, we are experiencing a rapid evolution in high performance computing (HPC) systems, driven by power consumption constraints. New HPC nodes incorporate accelerators that are used as math co-processors to increase throughput and the FLOP-per-watt ratio. On the other hand, multi-core CPUs have turned into energy-efficient system-on-chip architectures in which the main components of the node are fused and integrated into a single chip, reducing energy costs. Nowadays, several institutions and governments are investing in the research and development of different aspects of HPC that could lead to the next generations of supercomputers. These initiatives have branded the problem the exascale challenge. This goal can only be achieved by incorporating major changes in computer architecture, memory design and network interfaces. The CFD community faces an important challenge: keeping pace with the rapid changes in HPC resources. Codes and formulations need to be redesigned to exploit the different levels of parallelism and complex memory hierarchies of the new heterogeneous systems. The main characteristics demanded of the new CFD software are memory awareness, extreme concurrency, modularity and portability. This thesis is devoted to the study of a CFD algorithm refactoring for the adoption of new technologies. Our application context is the solution of incompressible flows (DNS or LES) on unstructured meshes. The first approach used GPUs to accelerate the Poisson solver, the most computationally intensive part of our application. The positive results obtained in this first step motivated us to port the complete time-integration phase of our application, which required a major redesign of the code. We propose a portable implementation model for CFD applications. The main idea was to substitute stencil data structures and kernels with algebraic storage formats and operators. By doing so, the algorithm was restructured into a minimal set of algebraic operations. The implementation strategy consisted of creating a low-level algebraic layer for computations on CPUs and GPUs, and a high-level user-friendly discretization layer for CPUs that is fully localized at the preprocessing stage, where performance does not play an important role. As a result, at the time-integration phase the code relies on only three algebraic kernels: sparse matrix-vector product (SpMV), linear combination of two vectors (AXPY) and dot product (DOT). Such a simple set of basic linear algebra operations naturally provides the desired portability to any computing architecture. Special attention was paid to the development of data structures compatible with the stream processing model. A detailed performance analysis was carried out for both sequential and parallel execution, engaging up to 128 GPUs in a hybrid CPU/GPU supercomputer. Moreover, we tested the portable implementation model of the TermoFluids code on the Mont-Blanc mobile-based supercomputer. The redesign of the kernels exploits a heterogeneous execution model using both computing devices, CPU and GPU, of the ARM-based nodes. The load balancing between the two computing devices exploits a tabu search strategy that tunes the workload distribution during the preprocessing stage.
A comparison of the Mont-Blanc prototypes with high-end supercomputers in terms of achieved net performance and energy consumption provided some guidelines on the behavior of CFD applications on ARM-based architectures. Finally, we present a memory-aware auto-tuned Poisson solver for problems with one Fourier-diagonalizable direction. This work was developed and tested on the BlueGene/Q Vesta supercomputer, and aims to demonstrate the relevance of vectorization and memory awareness for fully exploiting modern energy-efficient CPUs.
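
    The kernel reduction can be illustrated by composing a Krylov solve for the Poisson system entirely from the three kernels named above. A minimal sketch (the kernel wrappers and the conjugate-gradient composition are our illustration, not the TermoFluids implementation):

```python
import numpy as np
from scipy.sparse import csr_matrix

# The three algebraic kernels named in the abstract; everything in the
# time-integration phase is composed from them. These wrappers are our
# illustration, not the TermoFluids code.
def spmv(A, x):
    return A @ x                        # sparse matrix-vector product

def axpy(a, x, y):
    return a * x + y                    # linear combination a*x + y

def dot(x, y):
    return float(x @ y)                 # scalar product

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Poisson-type solve written purely in terms of SpMV/AXPY/DOT, so
    the same algorithm runs wherever those kernels are implemented."""
    x = np.zeros_like(b)
    r = axpy(-1.0, spmv(A, x), b)       # r = b - A x
    p = r.copy()
    rho = dot(r, r)
    for _ in range(max_iter):
        Ap = spmv(A, p)
        alpha = rho / dot(p, Ap)
        x = axpy(alpha, p, x)
        r = axpy(-alpha, Ap, r)
        rho_new = dot(r, r)
        if rho_new ** 0.5 < tol:
            break
        p = axpy(rho_new / rho, p, r)   # p = r + beta * p
        rho = rho_new
    return x

# 1D Laplacian as a stand-in for the pressure Poisson system.
n = 64
A = csr_matrix(np.diag(2.0 * np.ones(n)) +
               np.diag(-np.ones(n - 1), 1) +
               np.diag(-np.ones(n - 1), -1))
x = conjugate_gradient(A, np.ones(n))
```

    Swapping the NumPy wrappers for GPU implementations of the same three calls leaves the solver untouched, which is the portability argument the abstract makes.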

    Characterising delamination in composite materials : a combined genetic algorithm - finite element approach

    Get PDF
    A novel delamination identification technique based on a low-population genetic algorithm for the quantitative characterisation of a single delamination in composite laminated panels is developed and validated experimentally. The damage identification method is formulated as an inverse problem through which system parameters are identified. The input of the inverse problem, the central geometric moments (CGM), is calculated from the surface out-of-plane displacement measurements of a delaminated panel obtained from Digital Speckle Pattern Interferometry (DSPI). The output parameters, the planar location, size and depth of the flaw, are the solution to the inverse problem, characterising an idealised elliptical flaw. The inverse problem is then reduced to an optimisation problem in which the objective function is defined as the L2 norm of the difference between the CGM obtained from a finite element (FE) model with a trial delamination and the moments computed from the DSPI measurements. The optimum crack parameters are found by minimising the objective function through the use of a low-population real-coded genetic algorithm (LARGA). DSPI measurements of ten delaminated T700/LTM-45EL carbon/epoxy laminate panels with embedded delaminations are used to validate the methodology presented in this thesis.
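
    A compact sketch of the two ingredients, the moment signature and the low-population real-coded GA, is given below. The FE solver is assumed to be available as a black-box `fitness` callback over the trial parameters (x, y, a, b, depth); all names, operator choices and rates are illustrative, not the LARGA implementation:

```python
import numpy as np

def central_moments(w, orders=((2, 0), (0, 2), (1, 1), (3, 0), (0, 3))):
    """Central geometric moments of an out-of-plane displacement field
    w(x, y) sampled on a grid (assumed non-negative, e.g. magnitudes);
    these serve as the damage signature."""
    ny, nx = w.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    m00 = w.sum()
    xc, yc = (x * w).sum() / m00, (y * w).sum() / m00
    return np.array([((x - xc) ** p * (y - yc) ** q * w).sum()
                     for p, q in orders])

def low_population_ga(fitness, bounds, pop_size=10, gens=200, seed=0):
    """Bare-bones low-population real-coded GA: truncation selection,
    blend crossover, Gaussian mutation. `fitness(ind)` would wrap the
    L2 mismatch between FE moments for a trial delamination
    (x, y, a, b, depth) and the DSPI-derived moments."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(fit)][: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            u = rng.uniform()
            child = u * a + (1.0 - u) * b                # blend crossover
            child += rng.normal(0.0, 0.05 * (hi - lo))   # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, np.array(children)])
    fit = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(fit)]                           # best candidate
```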

    Quantification of Order in Point Patterns

    Get PDF
    Pattern attributes are important in many disciplines, e.g. developmental biology, but there are few objective measures of them. Here we concentrate on the attribute of order in point patterns and its objective measurement. We examine perception of order and develop analysis algorithms that quantify the attribute in accordance with perception of it. Based on pairwise ranking of point patterns by degree of order, we show that judgements are highly consistent across individuals and that the perceptual dimension has an interval scale structure, spanning roughly 10 just-noticeable differences (jnds) between disorder and order. We designed a geometric algorithm that estimates order to an accuracy of half a jnd by quantifying the variability of the spaces between points. By anchoring the output of the algorithm so that Poisson point processes score on average 0 and perfect lattices score 10, we constructed an absolute interval scale of order. We demonstrated its utility in biology by quantifying the order of the Drosophila dorsal thorax epithelium during development. The psychophysical scaling method used relies on the comparison of stimuli with similar levels of order, yielding a discrimination-based scale. As with other perceptual dimensions, an interesting question is whether supra-threshold perceptual differences are consistent with this scale. To test this, we collected discrimination data as well as data based on comparisons of perceptual differences. Although the judgements of perceptual differences were, like the discrimination judgements, consistent with an interval scale, no common interval scale could predict both sets of data. Point patterns are commonly displayed as arrangements of dots. To examine how presentation parameters (dot size, dot number, and pattern area) affect discrimination, we collected discrimination data for ten presentation conditions. We found that discrimination performance depends on the ratio ‘dot diameter / average dot spacing’.
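
    A crude stand-in for the geometric algorithm can convey the anchoring idea: measure the variability of nearest-neighbour spacings and rescale it so that simulated Poisson patterns score about 0 and a perfect lattice scores 10. The paper's algorithm is more refined and calibrated in jnds; the linear anchoring and all names below are our assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def spacing_cv(points):
    """Coefficient of variation of nearest-neighbour spacings: a simple
    proxy for the variability of the spaces between points."""
    d, _ = cKDTree(points).query(points, k=2)
    nn = d[:, 1]                    # column 0 is the point itself
    return nn.std() / nn.mean()

def order_score(points, trials=100, seed=0):
    """Anchored score: simulated Poisson patterns in the unit square
    score about 0 and a perfect lattice (cv = 0) scores 10. Clustered
    patterns can fall below 0 on this simple linear scale."""
    rng = np.random.default_rng(seed)
    n = len(points)
    cv_poisson = np.mean([spacing_cv(rng.uniform(size=(n, 2)))
                          for _ in range(trials)])
    return 10.0 * (1.0 - spacing_cv(points) / cv_poisson)
```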

    Towards a more efficient spectrum usage: spectrum sensing and cognitive radio techniques

    No full text
    The traditional approach to spectrum management in wireless communications has been to define a licensed user who is granted exclusive exploitation rights for a specific frequency.