
    Intelligent Computational Transportation

    Transportation is commonplace around our world, and numerous researchers dedicate great efforts to a vast range of transportation research topics. The purpose of this dissertation is to investigate and address three transportation problems, concerning geographic discretization, automatic pavement surface examination, and traffic flow simulation, using advanced computational technologies. Many applications require a discretized 2D geographic map so that local information can be accessed efficiently. For example, map matching, which aligns a sequence of observed positions to a real-world road network, needs to find all the road segments near each individual position. To this end, the map is discretized into cells, and each cell retains a list of the road segments coincident with it. An efficient method is proposed to form such lists for the cells without costly overlap tests. Furthermore, the method can be easily extended to 3D scenarios for fast triangle mesh voxelization. Pavement surface distress conditions are critical inputs for quantifying roadway infrastructure serviceability. Existing computer-aided automatic examination techniques are mainly based on 2D image analysis or 3D georeferenced data sets; information loss or extremely high cost impedes their effectiveness and applicability. In this study, a cost-effective Kinect-based approach is proposed for 3D pavement surface reconstruction and cracking recognition. Various cracking measurements, such as alligator cracking, transverse cracking, and longitudinal cracking, are identified and recognized, and their severity is examined based on associated geometric features. Smart transportation is one of the core components of modern urbanization processes. In this context, the Connected Autonomous Vehicle (CAV) system presents a promising path toward enhanced traffic safety and mobility through state-of-the-art wireless communications and autonomous driving techniques. Because CAVs differ fundamentally from conventional Human-Driven Vehicles (HDVs), it is believed that CAV-enabled transportation systems will revolutionize the existing understanding of network-wide traffic operations and re-establish traffic flow theory. This study presents a new continuum dynamics model for the future CAV-enabled traffic system, realized by encapsulating mutually coupled vehicle interactions using virtual internal and external forces. A Smoothed Particle Hydrodynamics (SPH)-based numerical simulation and an interactive traffic visualization framework are also developed.
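    To make the cell-list construction concrete, the following is a minimal sketch (an illustration under assumptions, not the dissertation's actual method): each road segment is registered with every grid cell it crosses using an Amanatides-Woo style grid walk, so no segment-cell overlap test is ever run, and a map-matching query only inspects the lists of a few nearby cells. The function names and the cell size are hypothetical.

        # Sketch only: build per-cell lists of road segments on a uniform grid.
        import math
        from collections import defaultdict

        def cells_on_segment(p, q, cell):
            """Yield the (i, j) indices of every grid cell the segment p->q crosses."""
            (x, y), (x2, y2) = p, q
            i, j = int(x // cell), int(y // cell)
            i2, j2 = int(x2 // cell), int(y2 // cell)
            dx, dy = x2 - x, y2 - y
            sx, sy = (1 if dx > 0 else -1), (1 if dy > 0 else -1)
            # Parametric distance to the next vertical / horizontal cell boundary.
            tx = ((i + (sx > 0)) * cell - x) / dx if dx else math.inf
            ty = ((j + (sy > 0)) * cell - y) / dy if dy else math.inf
            dtx = cell / abs(dx) if dx else math.inf
            dty = cell / abs(dy) if dy else math.inf
            yield i, j
            for _ in range(abs(i2 - i) + abs(j2 - j)):   # exact number of crossings
                if tx < ty:
                    tx, i = tx + dtx, i + sx
                else:
                    ty, j = ty + dty, j + sy
                yield i, j

        def build_cell_index(segments, cell=100.0):
            """Map each cell to the ids of the segments passing through it."""
            index = defaultdict(list)
            for sid, (p, q) in enumerate(segments):
                for ij in cells_on_segment(p, q, cell):
                    index[ij].append(sid)
            return index

        index = build_cell_index([((5, 5), (250, 120)), ((30, 200), (35, 40))])

    A nearby-segment query for an observed position then reduces to a dictionary lookup on the position's cell (and, if needed, its neighbours), which is what makes the discretization pay off for map matching.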

    Design and Implementation of Asymptotically Optimal Mesh Slicing Algorithms Using Parallel Processing

    Mesh slicing is the process of taking a three-dimensional model and reducing it to 2.5-dimensional layers that together form a layered representation of the model. The process is used in layered additive manufacturing, three-dimensional voxelization, and similar problems in computational geometry. Slicing is computationally expensive, and the time required to slice an object can inhibit the viability of layered manufacturing in some industries. We designed and developed a fast implementation of the slicing process, called Sunder, that uses new asymptotically optimal algorithms and takes advantage of parallel processing platforms. To our knowledge, no other slicing implementation leverages massively parallel execution hardware, such as graphics processing units (GPUs), leaving significant potential for improvement. Furthermore, no published set of slicing algorithms completes all three major steps in the slicing process (preprocessing, slicing, and contour assembly) in linear time, which our design achieves. Therefore, our implementation improves the current state of the art in mesh slicing.
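    As a hedged sketch of how slicing can avoid per-slice scans over the whole mesh (this illustrates the general linear-time bucketing idea, not Sunder's actual algorithms): each triangle is bucketed once at the slice where it first becomes active, and a single upward sweep maintains the active set, so total work is proportional to the mesh size plus the number of triangle-plane intersections.

        # Sketch only: slice a triangle mesh into contour segments per layer.
        def tri_plane(tri, z):
            """Intersect one triangle with the plane Z = z; degenerate touches ignored."""
            pts = []
            for a, b in ((0, 1), (1, 2), (2, 0)):
                (x1, y1, z1), (x2, y2, z2) = tri[a], tri[b]
                if (z1 - z) * (z2 - z) < 0:          # edge strictly straddles the plane
                    t = (z - z1) / (z2 - z1)
                    pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
            return tuple(pts) if len(pts) == 2 else None

        def slice_mesh(triangles, layer_h):
            """triangles: lists of three (x, y, z) vertices -> segments per layer."""
            zmin = min(v[2] for t in triangles for v in t)
            zmax = max(v[2] for t in triangles for v in t)
            n_layers = int((zmax - zmin) / layer_h) + 1
            buckets = [[] for _ in range(n_layers)]
            for tri in triangles:                     # O(n) preprocessing
                lo = int((min(v[2] for v in tri) - zmin) / layer_h)
                buckets[lo].append(tri)
            layers, active = [], []
            for k in range(n_layers):                 # one sweep over all slices
                z = zmin + k * layer_h
                active += buckets[k]
                active = [t for t in active if max(v[2] for v in t) >= z]
                layers.append([s for t in active if (s := tri_plane(t, z))])
            return layers

    Contour assembly, the third step, would chain these segments into closed polygons; a hash map over segment endpoints is one way to do that in linear time, omitted here for brevity.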

    Efficient voxelization using projected optimal scanline

    In this paper, we propose an efficient algorithm for the surface voxelization of geometrically complex 3D models. Unlike recent techniques relying on triangle-voxel intersection tests, our algorithm exploits the conventional parallel-scanline strategy. Observing that no optimal scanline interval exists in general 3D cases if one wants to cover the interior of a triangle with parallel voxelized scanlines, we subdivide a triangle into multiple axis-aligned slices and carry out the scanning within each polygonal slice. The theoretically optimal scanline interval can then be obtained to maximize the efficiency of the algorithm without missing any voxels on the triangle. Once the collection of scanlines is determined and voxelized, we obtain the surface voxelization. We fine-tune the algorithm so that it involves only a few integer additions and comparisons per generated voxel. Finally, we comprehensively compare our method with the state-of-the-art method in terms of theoretical complexity, runtime performance, and voxelization quality on both the CPU and GPU of a regular desktop PC, as well as on a mobile device. The results show that our method outperforms the existing method, especially when the voxelization resolution is high.
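    The per-scanline building block can be illustrated with a standard 3D integer grid walk (this voxelizes a single scanline only; the paper's actual contribution, the optimal scanline interval and per-slice scanning, is not reproduced here):

        # Sketch only: voxels traversed by one 3D scanline segment p -> q.
        import math

        def voxelize_line(p, q, h=1.0):
            """Return the (ix, iy, iz) voxels the segment passes through."""
            idx = [int(c // h) for c in p]
            end = [int(c // h) for c in q]
            d = [b - a for a, b in zip(p, q)]
            step = [1 if c > 0 else -1 for c in d]
            tmax = [((i + (s > 0)) * h - a) / c if c else math.inf
                    for i, s, a, c in zip(idx, step, p, d)]
            tdelta = [h / abs(c) if c else math.inf for c in d]
            out = [tuple(idx)]
            for _ in range(sum(abs(e - i) for e, i in zip(end, idx))):
                a = min(range(3), key=lambda k: tmax[k])   # nearest boundary axis
                tmax[a] += tdelta[a]
                idx[a] += step[a]
                out.append(tuple(idx))
            return out

        print(voxelize_line((0.2, 0.4, 0.1), (3.7, 1.9, 2.2)))

    The inner loop costs one comparison and one addition per voxel; a tuned implementation would replace the floating-point increments with incremental integer arithmetic, in line with the paper's stated goal of a few integer operations per generated voxel.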

    Procedural generation of features for volumetric terrains using a rule-based approach.

    Terrain generation is a fundamental requirement of many computer graphics simulations, including computer games, flight simulators, and environments in feature films. Volumetric representations of 3D terrains can create rich features that are impossible or very difficult to construct with other terrain generation techniques, such as overhangs, arches, and caves. While a considerable amount of literature has focused on procedural generation of terrains using heightmap-based implementations, there is little research on procedural terrains utilising a voxel-based approach. This thesis contributes two methods to procedurally generate features for terrains that utilise a volumetric representation. The first method is a novel grammar-based approach that generates overhangs and caves from a set of rules. This voxel grammar provides a flexible and intuitive method of manipulating voxels from a set of symbol/transform pairs that can produce a variety of feature shapes and sizes. The second method implements three parametric functions for overhangs, caves, and arches, generating a set of voxels procedurally based on the parameters of a function selected by the user. A small set of parameters for each generator function yields a widely varied set of features and provides the user with a high degree of expressivity. In order to analyse this expressivity, the thesis' third contribution is an original method of quantitatively scoring the result of a generator function. This research is a collaboration with Sony Interactive Entertainment, and the methods presented have been integrated into the terrain system of their proprietary game engine PhyreEngine™. There is thus a focus on real-time performance, so that the methods are feasible for game developers to use while adhering to the strict frame times of modern computer games.
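    A toy sketch of the symbol/transform idea follows (hypothetical rules and parameters, not the grammar or the PhyreEngine integration described in the thesis): each symbol names a rule that edits the voxel grid and may emit successor symbols, and generation rewrites symbols until none remain.

        # Sketch only: a tiny rule-based voxel grammar carving a winding cave.
        import random
        import numpy as np

        def carve_sphere(vox, c, r):
            """Set voxels within radius r of centre c to empty (0)."""
            axes = np.meshgrid(*(np.arange(n) for n in vox.shape), indexing="ij")
            vox[sum((g - ci) ** 2 for g, ci in zip(axes, c)) <= r * r] = 0

        def cave_rule(vox, pos, depth):
            """Carve a chamber, then (usually) recurse with a drifted centre."""
            carve_sphere(vox, pos, r=3)
            if depth > 0 and random.random() < 0.8:
                nxt = tuple(p + random.randint(-3, 3) for p in pos)
                return [("cave", nxt, depth - 1)]
            return []

        RULES = {"cave": cave_rule}        # symbol -> transform

        def generate(vox, axiom):
            agenda = [axiom]
            while agenda:
                sym, pos, depth = agenda.pop()
                agenda += RULES[sym](vox, pos, depth)

        random.seed(7)
        terrain = np.ones((32, 32, 32), dtype=np.uint8)   # 1 = solid rock
        generate(terrain, ("cave", (16, 16, 16), 6))

    Adding an "overhang" or "arch" symbol is just another entry in RULES, which illustrates the flexibility a symbol/transform formulation buys.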

    GPU data structures for graphics and vision

    Graphics hardware has in recent years become increasingly programmable, and its programming APIs use the stream processor model to expose massive parallelization to the programmer. Unfortunately, the inherent restrictions of the stream processor model, which the GPU relies on to maintain high performance, often pose a problem when porting CPU algorithms for video and volume processing to graphics hardware. Serial data dependencies, which accelerate CPU processing, are counterproductive for the data-parallel GPU. This thesis demonstrates new ways of tackling well-known problems of large-scale video/volume analysis. In some instances, we enable processing on the restricted hardware model by re-introducing algorithms from early computer graphics research. On other occasions, we use newly discovered hierarchical data structures to circumvent the random-access-read/fixed-write restriction that had previously kept sophisticated analysis algorithms from running solely on graphics hardware. For 3D processing, we apply known game graphics concepts such as mip-maps, projective texturing, and dependent texture lookups to show how video/volume processing can benefit algorithmically from being implemented in a graphics API. The novel GPU data structures provide drastically increased processing speed and lift processing-heavy operations to real-time performance levels, paving the way for new and interactive vision/graphics applications.
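    One of the named concepts is easy to demonstrate off-GPU. The sketch below (a CPU emulation, not code from the thesis) shows the mip-map reduction pattern: each pass combines 2x2 blocks, so a global maximum or average falls out after log2(n) data-parallel passes with no serial dependencies, exactly the shape of computation a fragment shader handles well.

        # Sketch only: mip-map style reduction of a square power-of-two image.
        import numpy as np

        def mip_reduce(img, op=np.max):
            """Halve the image per level by combining 2x2 blocks; return all levels."""
            levels = [img]
            while img.shape[0] > 1:
                h, w = img.shape[0] // 2, img.shape[1] // 2
                img = op(img[:2 * h, :2 * w].reshape(h, 2, w, 2), axis=(1, 3))
                levels.append(img)
            return levels

        pyramid = mip_reduce(np.random.rand(256, 256))
        print(pyramid[-1])            # global maximum, reached in 8 passes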

    Geometric algorithms for cavity detection on protein surfaces

    Macromolecular structures such as proteins underpin cellular processes and functions. These biological functions result from interactions between proteins and peptides, catalytic substrates, nucleotides, or even human-made chemicals. Thus, several kinds of interactions can be distinguished: protein-ligand, protein-protein, protein-DNA, and so on. Furthermore, those interactions only happen under chemical- and shape-complementarity conditions, and usually take place in regions known as binding sites. Typically, a protein is described at four structural levels. The primary structure of a protein is made up of its amino acid sequences (or chains). Its secondary structure essentially comprises α-helices and β-sheets, which are sub-sequences (or sub-domains) of amino acids of the primary structure. Its tertiary structure results from the composition of sub-domains into domains, which represent the geometric shape of the protein. Finally, the quaternary structure of a protein results from the aggregation of two or more tertiary structures, usually known as a protein complex. This thesis fits in the scope of structure-based drug design and protein docking. Specifically, it addresses the fundamental problem of detecting and identifying protein cavities, which are often seen as putative binding sites for ligands in protein-ligand interactions. In general, cavity prediction algorithms split into three main categories: energy-based, geometry-based, and evolution-based. Evolutionary methods build upon evolutionary sequence conservation estimates; that is, they detect functional sites by computing the evolutionary conservation of amino acid positions in proteins. Energy-based methods build upon the computation of interaction energies between protein and ligand atoms. In turn, geometry-based algorithms analyze the geometric shape of the protein (i.e., its tertiary structure) to identify cavities. This thesis focuses on geometric methods and introduces three new geometry-based algorithms for protein cavity detection. The main contribution of this thesis lies in the use of computer graphics techniques in the analysis and recognition of cavities in proteins, much in the spirit of molecular graphics and modeling. As seen further ahead, these techniques include field-of-view (FoV), voxel ray casting, back-face culling, shape diameter functions, Morse theory, and critical points. The leading idea is to arrive at a protein shape segmentation, much as is commonly done in mesh segmentation in computer graphics. In practice, protein cavity algorithms are nothing more than segmentation algorithms designed for proteins.
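    For flavour, here is a classic grid scan in the spirit of geometry-based detectors (akin to LIGSITE-style protein-solvent-protein scanning; it is not one of the three algorithms contributed by the thesis): voxelize the atoms, then flag empty voxels that are walled in by protein along every grid axis.

        # Sketch only: buriedness of empty voxels on a protein occupancy grid.
        import numpy as np

        def solid_grid(atom_centres, radius, shape):
            """Mark voxels inside any atom sphere (centres given in voxel units)."""
            grid = np.zeros(shape, dtype=bool)
            idx = np.indices(shape)
            for c in atom_centres:
                grid |= sum((idx[k] - c[k]) ** 2 for k in range(3)) <= radius ** 2
            return grid

        def buriedness(solid):
            """Count, per empty voxel, the axes blocked by protein on both sides."""
            blocked = np.zeros(solid.shape, dtype=np.int8)
            for ax in range(3):
                before = np.maximum.accumulate(solid, axis=ax)
                after = np.flip(np.maximum.accumulate(np.flip(solid, ax), axis=ax), ax)
                blocked += (before & after).astype(np.int8)
            blocked[solid] = 0
            return blocked        # 3 = enclosed along every axis: cavity candidate

        atoms = [(8, 8, 8), (12, 8, 8), (10, 12, 8), (10, 8, 12), (10, 10, 11)]
        cav = buriedness(solid_grid(atoms, radius=2.5, shape=(20, 20, 20)))
        print(np.argwhere(cav == 3))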

    Simulating RF Field Propagation with Stochastic Ray Tracing

    This work details the development of an application for fast simulation of the steady-state far-field electromagnetic (EM) field strength and power in arbitrary environments. These environments consist of radiating antennas and solid 3-dimensional (3D) occluding bodies. The simulation is accomplished using a variation of stochastic ray tracing that uses Monte Carlo integration to solve the light transport equation. The proposed variations to the standard algorithm are twofold. First, a grid acceleration structure is used to reduce the number of computationally expensive ray-triangle intersection tests that need to be performed. The grid is chosen over other acceleration structures because the requirement to compute field strength within a volume necessitates stepping the rays through the space whether or not a grid is used. The second variation is the implementation of diffraction. Existing ray tracers neglect diffraction, as they typically deal with light at optical frequencies above 400 THz, where the amount of light diffracted around any large-scale object is negligible. Since this application must handle much lower frequencies to simulate radio interactions, diffraction is implemented using a novel technique that extends the edges of triangles by constant-width "diffraction margins" and allows rays that hit the margins to bend inward probabilistically, according to the Heisenberg momentum uncertainty associated with the new positional information about the ray's "photon bundle" due to its closeness to the surface.
    Master of Science in Engineering, Computer Engineering, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/156108/1/Timothy Kleinow Final Thesis.pdf
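    The accumulation step can be sketched as follows (a simplification under stated assumptions, not the thesis' implementation: isotropic antenna, no occluders, no diffraction, and fixed-step marching in place of the grid DDA):

        # Sketch only: Monte Carlo deposition of radiated power onto a voxel grid.
        import numpy as np

        def trace_power(origin, p_tx, grid_shape, h, n_rays=20000, seed=0):
            rng = np.random.default_rng(seed)
            power = np.zeros(grid_shape)
            for _ in range(n_rays):
                d = rng.normal(size=3)
                d /= np.linalg.norm(d)       # uniform direction on the sphere
                for t in np.arange(h, max(grid_shape) * h, h):
                    ijk = tuple(int(i) for i in (origin + t * d) // h)
                    if any(i < 0 or i >= n for i, n in zip(ijk, grid_shape)):
                        break                # ray left the simulation volume
                    # Crude density estimate with 1/(4*pi*t^2) spreading folded in.
                    power[ijk] += p_tx / (n_rays * 4 * np.pi * t * t)
            return power

        field = trace_power(np.array([5.0, 5.0, 5.0]), p_tx=1.0,
                            grid_shape=(10, 10, 10), h=1.0)

    A production tracer would replace the fixed-step march with the grid walk the abstract describes and attenuate rays at surface hits; the diffraction margins would then act at the ray-triangle intersection stage.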

    System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging

    In the past decade, many new X-ray based imaging technologies have emerged for different diagnostic purposes or imaging tasks. However, each faces specific problems that prevent it from being employed effectively or efficiently. In this dissertation, four novel X-ray based imaging technologies are discussed: propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). For each modality, system characteristics are analyzed or optimized reconstruction methods are proposed. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high-frequency components collected in the tomosynthesis data can be reconstructed more reliably. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from few-view D-XPCT data sets. By introducing a proper mask, the high-frequency content of the image can theoretically be preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being over-smoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of projection-based dual-energy material decomposition. It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements achieves a stabilized numerical solution of the decomposition problem, overcoming the main disadvantage of the conventional approach, which is extremely sensitive to noise. In the final part, we describe modified filtered backprojection and iterative image reconstruction algorithms specifically developed for TBCT. Special parallelization strategies are designed to facilitate GPU computing, demonstrating the capability to produce high-quality volumetric reconstructions at very high computational speed. For all the investigations mentioned above, both simulation and experimental studies have been conducted to demonstrate the feasibility and effectiveness of the proposed methodologies.
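    The third part's constraint can be illustrated with a deliberately simplified monoenergetic model (a sketch under assumptions; the dissertation treats the full polyenergetic problem, and the attenuation coefficients below are made-up numbers): once the total path length L through the object is known, the second material thickness is L minus the first, so the two-unknown decomposition collapses to a single well-conditioned unknown.

        # Sketch only: dual-energy decomposition with a total-projection-length constraint.
        import numpy as np

        MU = np.array([[0.40, 0.20],     # low-kVp:  [mu_A, mu_B] in 1/cm (illustrative)
                       [0.25, 0.15]])    # high-kVp: [mu_A, mu_B]

        def decompose(log_low, log_high, L):
            """Least-squares estimate of t_A in [0, L] under t_A + t_B = L."""
            y = np.array([log_low, log_high]) - MU[:, 1] * L   # remove the all-B baseline
            a = MU[:, 0] - MU[:, 1]                            # sensitivity to t_A
            t_a = float(np.clip(a @ y / (a @ a), 0.0, L))
            return t_a, L - t_a

        t_true = np.array([3.0, 2.0])             # 3 cm of A, 2 cm of B
        logs = MU @ t_true                        # ideal log-attenuation measurements
        print(decompose(*logs, L=t_true.sum()))   # -> (3.0, 2.0)

    Without the constraint, the same two equations must be inverted directly, and the nearly parallel rows of MU amplify measurement noise, which is the instability the constrained formulation avoids.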

    Signal processing with Fourier analysis, novel algorithms and applications

    Fourier analysis is the study of how general functions may be represented or approximated by sums of simpler trigonometric functions, also known as sinusoidal modeling. Fourier's original idea had a profound impact on mathematical analysis, physics, and engineering because it diagonalizes time-invariant convolution operators. In the past, signal processing stayed almost exclusively within electrical engineering, where only experts could cancel noise or compress and reconstruct signals. Nowadays it is almost ubiquitous, as everyone deals with modern digital signals. Medical imaging, wireless communications, and the power systems of the future will face more demanding data processing conditions and a wider range of application requirements than the systems of today, and will require more powerful, efficient, and flexible signal processing algorithms designed to handle those needs. No matter how advanced our hardware technology becomes, we will still need intelligent and efficient algorithms to address the growing demands of signal processing. In this thesis, we investigate novel techniques to solve a suite of four fundamental signal processing problems with a wide range of applications. The relevant equations, the literature on signal processing applications, the analysis, and the final numerical algorithms/methods to solve them using Fourier analysis are discussed for different applications in electrical engineering and computer science. Four chapters cover the following topics of central importance in the field of signal processing:

    • Fast Phasor Estimation using Adaptive Signal Processing (Chapter 2)
    • Frequency Estimation from Nonuniform Samples (Chapter 3)
    • 2D Polar and 3D Spherical Polar Nonuniform Discrete Fourier Transform (Chapter 4)
    • Robust 3D Registration using Spherical Polar Discrete Fourier Transform and Spherical Harmonics (Chapter 5)

    Even though these four methods may seem completely disparate, the underlying motivation, more efficient processing by exploiting the Fourier-domain structure of signals, remains the same. The main contribution of this thesis is the innovation in the analysis, synthesis, and discretization of well-known problems such as phasor estimation, frequency estimation, the computation of particular non-uniform Fourier transforms, and signal registration in the transformed domain. We propose and evaluate application-relevant algorithms, such as a frequency estimation algorithm using non-uniform sampling and the polar and spherical polar Fourier transforms. The techniques proposed are also useful in computer vision and medical imaging. From a practical perspective, the proposed algorithms are shown to improve on existing solutions in the respective fields where they are applied and evaluated. The formulation and final propositions are shown to have a variety of benefits. Future work with potential in medical imaging, directional wavelets, volume rendering, video/3D object classification, and high-dimensional registration is also discussed in the final chapter. Finally, in the spirit of reproducible research, we release implementations of these algorithms to the public on GitHub.
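    As a taste of the Chapter 3 topic, a minimal frequency estimator for irregularly sampled data can be built from a nonuniform discrete Fourier transform evaluated on a dense frequency grid (a sketch of the general approach, not the thesis' algorithm):

        # Sketch only: single-tone frequency estimation from nonuniform samples.
        import numpy as np

        def estimate_frequency(t, x, f_grid):
            """Pick the frequency whose complex exponential best matches x(t)."""
            # Nonuniform DFT: X(f) = sum_n x[n] * exp(-2j*pi*f*t[n])
            ndft = np.exp(-2j * np.pi * np.outer(f_grid, t)) @ x
            return f_grid[np.argmax(np.abs(ndft))]

        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0.0, 10.0, 200))        # irregular sample times
        x = np.cos(2 * np.pi * 3.3 * t) + 0.1 * rng.normal(size=200)
        print(estimate_frequency(t, x, np.linspace(0.1, 5.0, 2000)))  # ~3.3 Hz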