
    GADGET: A code for collisionless and gasdynamical cosmological simulations

    We describe the newly written code GADGET, which is suitable both for cosmological simulations of structure formation and for the simulation of interacting galaxies. GADGET evolves self-gravitating collisionless fluids with the traditional N-body approach, and a collisional gas by smoothed particle hydrodynamics. Along with the serial version of the code, we discuss a parallel version that has been designed to run on massively parallel supercomputers with distributed memory. While both versions use a tree algorithm to compute gravitational forces, the serial version of GADGET can optionally employ the special-purpose hardware GRAPE instead of the tree. Periodic boundary conditions are supported by means of an Ewald summation technique. The code uses individual and adaptive timesteps for all particles, and it combines this with a scheme for dynamic tree updates. Due to its Lagrangian nature, GADGET allows a very large dynamic range to be bridged, both in space and time. So far, GADGET has been successfully used to run simulations with up to 7.5e7 particles, including cosmological studies of large-scale structure formation, high-resolution simulations of the formation of clusters of galaxies, as well as workstation-sized problems of interacting galaxies. In this study, we detail the numerical algorithms employed, and show various tests of the code. We publicly release both the serial and the massively parallel version of the code.
    Comment: 32 pages, 14 figures; replaced to match the published version in New Astronomy. For download of the code, see http://www.mpa-garching.mpg.de/gadget (new version 1.1 available).
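The individual, adaptive timesteps mentioned above can be illustrated with a toy integrator. The sketch below is not GADGET's implementation; it is a minimal kick-drift-kick leapfrog for a single particle in a fixed Kepler potential, with the step shrunk where the acceleration is large via a dt ~ sqrt(eps/|a|)-style criterion (the parameter names and values are illustrative assumptions):

```python
import math

def accel(x, y, GM=1.0):
    """Acceleration from a fixed point mass at the origin (Kepler problem)."""
    r2 = x * x + y * y
    r = math.sqrt(r2)
    return -GM * x / (r2 * r), -GM * y / (r2 * r)

def adaptive_leapfrog(x, y, vx, vy, t_end, eta=0.01, eps=0.05):
    """Kick-drift-kick leapfrog whose step shrinks where the acceleration
    is large, mimicking the dt ~ sqrt(eps/|a|) criterion used in tree codes."""
    t = 0.0
    while t < t_end:
        ax, ay = accel(x, y)
        dt = min(eta * math.sqrt(eps / math.hypot(ax, ay)), t_end - t)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx; y += dt * vy                 # drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        t += dt
    return x, y, vx, vy

# Circular orbit at radius 1 with GM = 1: specific energy is -0.5.
x, y, vx, vy = adaptive_leapfrog(1.0, 0.0, 0.0, 1.0, t_end=10.0)
energy = 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)
```

For a circular orbit the energy should stay very close to -0.5; in the full code every particle carries its own such timestep alongside the dynamic tree updates.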

    Multilateration index

    We present an alternative method for pre-processing and storing point data, particularly for Geospatial points, by storing multilateration distances to fixed points rather than coordinates such as Latitude and Longitude. We explore the use of this data to improve query performance for some distance-related queries such as nearest neighbor and query-within-radius (i.e. “find all points in a set P within distance d of query point q”). Further, we discuss the problem of “Network Adequacy” common to medical and communications businesses, to analyze questions such as “are at least 90% of patients living within 50 miles of a covered emergency room.” This is in fact the class of question that led to the creation of our pre-processing and algorithms, and is a generalization of a class of Nearest-Neighbor problems. We hypothesize that storing the distances from fixed points (typically three, as in trilateration) as an alternative to Latitude and Longitude can improve performance on distance functions when large numbers of points are involved, allowing algorithms that are efficient for Nearest Neighbor and Network Adequacy queries. This effectively creates a coordinate system where the coordinates are the trilateration distances. We explore this alternative coordinate system and the theoretical, technical, and practical implications of using it. Multilateration itself is a common technique in geo-location, widely used in cartography, surveying, and orienteering, although algorithmic use of these concepts for NN-style problems is scarce. GPS uses the concept of detecting the distance of a device to multiple satellites to determine the location of the device, a concept known as true-range multilateration. However, while the approach is common, the distance values from multilateration are typically converted immediately to Latitude/Longitude and then discarded. Here we attempt to use those intermediate distance values to computational benefit.
    Conceptually, our multilateration construction is applicable to metric spaces in any number of dimensions. Rather than requiring complex pre-calculated tree structures (as in Ball trees and KD-trees) (Liu, Moore, and Gray 2006), or high-cost pre-calculated nearest-neighbor graphs (as in FAISS) (Johnson, Douze, and Jégou 2017), we rely only on sorted arrays as indexes. This approach also allows computationally intensive distance queries (such as nearest-neighbor) to be processed in a way that is easily implemented with data manipulation languages such as SQL. We experiment with simple algorithms using the multilateration index to exploit these features. We set up experiments for Nearest Neighbor and Network Adequacy on high-computational-cost distance functions, on data sets of various sizes, to compare our performance to other existing algorithms. Our results include a roughly 10x performance improvement over existing query logic using SQL engines, and a 30x performance gain in Cython - compared to other NN algorithms using the popular ann-benchmark tool - when the cost of the atomic distance calculation itself is high, such as with geodesic distances on Earth requiring high precision. While we focus primarily on geospatial data, potential applications of this approach extend to any distance-measured n-dimensional metric space where the distance function itself has a high computational cost.
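To make the idea concrete, here is a minimal sketch of a query-within-radius over multilateration coordinates. This is not the authors' implementation: the anchor positions, the single sorted key, and the planar Euclidean distance are simplifying assumptions. The pruning rests on the triangle inequality: any point within distance d of the query must have each of its anchor distances within d of the query's corresponding anchor distance.

```python
import bisect, math

# Three fixed reference points (an arbitrary illustrative choice).
ANCHORS = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]

def lateration(p):
    """The alternative coordinates: distances from p to each anchor."""
    return tuple(math.dist(p, a) for a in ANCHORS)

def build_index(points):
    """Only a sorted array over the first anchor distance -- no tree needed."""
    coords = [lateration(p) for p in points]
    order = sorted(range(len(points)), key=lambda i: coords[i][0])
    keys = [coords[i][0] for i in order]
    return keys, order, coords

def within_radius(points, index, q, d):
    """Points within distance d of q. If dist(p, q) <= d, the triangle
    inequality forces |dist(p, A) - dist(q, A)| <= d for every anchor A,
    so one bisect range plus cheap per-anchor filters prunes candidates
    before the exact (possibly expensive) distance test."""
    keys, order, coords = index
    qc = lateration(q)
    lo = bisect.bisect_left(keys, qc[0] - d)
    hi = bisect.bisect_right(keys, qc[0] + d)
    hits = []
    for i in order[lo:hi]:
        if all(abs(coords[i][k] - qc[k]) <= d for k in range(len(ANCHORS))):
            if math.dist(points[i], q) <= d:
                hits.append(points[i])
    return hits

pts = [(float(i % 10), float(i // 10)) for i in range(100)]   # 10x10 grid
index = build_index(pts)
near = within_radius(pts, index, (5.0, 5.0), 1.5)
```

The same bisect-range-then-filter pattern maps directly onto a `BETWEEN` clause over sorted columns in SQL, which is the feature the abstract highlights.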

    The use of primitives in the calculation of radiative view factors

    Compilations of radiative view factors (often in closed analytical form) are readily available in the open literature for commonly encountered geometries. For more complex three-dimensional (3D) scenarios, however, the effort required to solve the requisite multi-dimensional integrations needed to estimate a required view factor can be daunting, to say the least. In such cases, a combination of finite element methods (where the geometry in question is sub-divided into a large number of uniform, often triangular, elements) and Monte Carlo Ray Tracing (MC-RT) has been developed, although frequently the software implementation is suitable only for a limited set of geometrical scenarios. Driven initially by a need to calculate the radiative heat transfer occurring within an operational fibre-drawing furnace, this research set out to examine options whereby MC-RT could be used to cost-effectively calculate any generic 3D radiative view factor using current vectorisation technologies.
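As a generic illustration of the MC-RT idea (not this project's vectorised implementation), the view factor between two coaxial parallel unit squares can be estimated by firing cosine-weighted rays from one square and counting arrivals on the other; the closed-form value for unit spacing is roughly 0.20:

```python
import math, random

def view_factor_parallel_squares(h=1.0, n=200_000, seed=1):
    """Monte Carlo estimate of F_12 between two coaxial unit squares in the
    planes z = 0 and z = h. Emit cosine-weighted rays from square 1; the
    fraction that land on square 2 estimates the view factor."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()        # emission point on square 1
        # Cosine-weighted direction: sin^2(theta) uniform, phi uniform.
        sin2 = rng.random()
        tan_t = math.sqrt(sin2 / (1.0 - sin2))   # tan(theta)
        phi = 2.0 * math.pi * rng.random()
        # Intersection of the ray with the plane z = h.
        xi = x + h * tan_t * math.cos(phi)
        yi = y + h * tan_t * math.sin(phi)
        if 0.0 <= xi <= 1.0 and 0.0 <= yi <= 1.0:
            hits += 1
    return hits / n

F = view_factor_parallel_squares()
```

Because the emission directions are cosine-weighted, no explicit cos(theta) factor appears in the estimator; the hit fraction alone converges to the view factor.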

    Image Stitching

    Final-year degree project carried out in collaboration with the University of Limerick, Department of Electronic and Computer Engineering. Image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output may be either an image or a set of characteristics or parameters related to the image. Most image processing techniques involve treating the image as a two-dimensional signal and applying standard signal processing techniques to it. Specifically, image stitching comprises several stages that render two or more overlapping images into a seamless stitched image, from the detection of features to blending in a final image. In this process, the Scale Invariant Feature Transform (SIFT) algorithm can be applied to perform the detection and matching of control points, owing to its good properties. Creating an automatic and effective end-to-end stitching process requires analysing the different methods available at each stitching stage. Several commercial and online software tools are available to perform the stitching process, offering diverse options in different situations. This analysis involves the creation of a script to deal with images and project data files. Once the whole script is generated, the stitching process achieves automatic execution with good-quality results in the final composite image.
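The control-point matching step described above is commonly paired with Lowe's ratio test to reject ambiguous matches before estimating the stitching transform. The sketch below (not the project's actual script) uses toy 2-D descriptors standing in for 128-D SIFT descriptors:

```python
import math

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Lowe's ratio test: accept a match only when the nearest descriptor in
    the other image is clearly closer than the second nearest. desc_* are
    lists of feature vectors (toy stand-ins for 128-D SIFT descriptors)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy descriptors: a0/b0 and a1/b2 correspond; b1 is a distractor far from both.
a = [(0.0, 0.0), (10.0, 10.0)]
b = [(0.1, 0.0), (5.0, 50.0), (10.0, 10.1)]
m = ratio_test_matches(a, b)
```

The surviving matches would then feed a homography estimation stage, typically itself robustified with RANSAC.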

    3D shape matching and registration: a probabilistic perspective

    Dense correspondence is a key area in computer vision and medical image analysis. It has applications in registration and shape analysis. In this thesis, we develop a technique to recover dense correspondences between the surfaces of neuroanatomical objects over heterogeneous populations of individuals. We recover dense correspondences based on 3D shape matching. In this thesis, the 3D shape matching problem is formulated under the framework of Markov Random Fields (MRFs). We represent the surfaces of neuroanatomical objects as genus-zero voxel-based meshes. The surface meshes are projected into a Markov random field space. The projection carries both geometric and topological information, in terms of Gaussian curvature and mesh neighbourhood, from the original space to the random field space. Gaussian curvature is projected to the nodes of the MRF, and the mesh neighbourhood structure is projected to the edges. 3D shape matching between two surface meshes is then performed by solving an energy function minimisation problem formulated with MRFs. The outcome of the 3D shape matching is dense point-to-point correspondences. However, the minimisation of the energy function is NP-hard. In this thesis, we use belief propagation to perform the probabilistic inference for 3D shape matching. A sparse update loopy belief propagation algorithm adapted to 3D shape matching is proposed to obtain an approximate global solution for the 3D shape matching problem. The sparse update loopy belief propagation algorithm demonstrates a significant efficiency gain compared to standard belief propagation. The computational complexity and convergence properties of the sparse update loopy belief propagation algorithm are also analysed in the thesis. We also investigate randomised algorithms to minimise the energy function. In order to enhance the shape matching rate and increase the inlier support set, we propose a novel clamping technique.
    The clamping technique is realised by combining the loopy belief propagation message-updating rule with feedback from 3D rigid body registration. By using this clamping technique, the correct shape matching rate is increased significantly. Finally, we investigate 3D shape registration techniques based on the 3D shape matching result. Based on the point-to-point dense correspondences obtained from the 3D shape matching, a three-point based transformation estimation technique is combined with the RANdom SAmple Consensus (RANSAC) algorithm to obtain the inlier support set. The global registration approach depends purely on point-wise correspondences between two meshed surfaces. It has the advantages that the need for orientation initialisation is eliminated and that it applies to all shapes of spherical topology. In the experiments, we compare our MRF-based 3D registration approach with a state-of-the-art registration algorithm, the first-order ellipsoid template. These experiments show dense correspondences for pairs of hippocampi from two different data sets, each of around 20 healthy individuals aged over 60.
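The minimal-sample-plus-inlier-support pattern of RANSAC described above can be sketched in two dimensions, where a rigid transform needs only two correspondences rather than three (a simplified planar analogue of the thesis's three-point 3-D estimator; all names and tolerances are illustrative):

```python
import math, random

def rigid_from_two(p1, p2, q1, q2):
    """2-D rigid transform (rotation + translation) mapping p1->q1, p2->q2;
    the planar analogue of a minimal three-point estimate in 3-D."""
    ang = (math.atan2(q2[1] - q1[1], q2[0] - q1[0])
           - math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    c, s = math.cos(ang), math.sin(ang)
    tx = q1[0] - (c * p1[0] - s * p1[1])
    ty = q1[1] - (s * p1[0] + c * p1[1])
    return c, s, tx, ty

def transform(T, p):
    c, s, tx, ty = T
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def ransac_rigid(P, Q, iters=200, tol=0.05, seed=0):
    """Repeatedly fit a minimal sample and keep the transform with the
    largest inlier support set."""
    rng = random.Random(seed)
    best_T, best_inliers = None, []
    for _ in range(iters):
        i, j = rng.sample(range(len(P)), 2)
        T = rigid_from_two(P[i], P[j], Q[i], Q[j])
        inliers = [k for k in range(len(P))
                   if math.dist(transform(T, P[k]), Q[k]) < tol]
        if len(inliers) > len(best_inliers):
            best_T, best_inliers = T, inliers
    return best_T, best_inliers

# Ground truth: rotate 30 degrees, shift by (1, 2); correspondence 5 is an outlier.
P = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0), (3.0, 1.0), (4.0, 4.0)]
ca, sa = math.cos(math.pi / 6), math.sin(math.pi / 6)
Q = [(ca * x - sa * y + 1.0, sa * x + ca * y + 2.0) for x, y in P]
Q[5] = (100.0, 100.0)   # corrupted match
T, inliers = ransac_rigid(P, Q)
```

The corrupted correspondence never enters the winning inlier support set, which is exactly the robustness that motivates combining RANSAC with point-wise shape-matching output.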

    A COLLISION AVOIDANCE SYSTEM FOR AUTONOMOUS UNDERWATER VEHICLES

    The work in this thesis is concerned with the development of a novel and practical collision avoidance system for autonomous underwater vehicles (AUVs). Synergistically, advanced stochastic motion planning methods, dynamics quantisation approaches, multivariable tracking controller designs, sonar data processing and workspace representation are combined to enhance significantly the survivability of modern AUVs. The recent proliferation of autonomous AUV deployments for various missions such as seafloor surveying, scientific data gathering and mine hunting has demanded a substantial increase in vehicle autonomy. One matching requirement of such missions is to allow the AUV to navigate safely in a dynamic and unstructured environment. Therefore, it is vital that a robust and effective collision avoidance system should be forthcoming in order to preserve the structural integrity of the vehicle whilst simultaneously increasing its autonomy. This thesis provides not only a holistic framework but also an arsenal of computational techniques for the design of a collision avoidance system for AUVs. The design of an obstacle avoidance system is first addressed. The core paradigm is the application of the Rapidly-exploring Random Tree (RRT) algorithm, and a newly developed version of it, as a motion planning tool. Later, this technique is merged with the Manoeuvre Automaton (MA) representation to address the inherent disadvantages of the RRT. A novel multi-node version, which can also address a time-varying final state, is suggested. Clearly, the reference trajectory generated by the aforementioned embedded planner must be tracked. Hence, the feasibility of employing the linear quadratic Gaussian (LQG) controller and the nonlinear kinematics-based state-dependent Riccati equation (SDRE) controller as trajectory trackers is explored.
    The obstacle detection module, which comprises sonar processing and workspace representation submodules, is developed and tested on actual sonar data acquired in a sea trial via a prototype forward-looking sonar (AT500). The sonar processing techniques applied are fundamentally derived from the image processing perspective. Likewise, a novel occupancy grid using a nonlinear function is proposed for the workspace representation of the AUV. Results are presented that demonstrate the ability of an AUV to navigate a complex environment. To the author's knowledge, it is the first time the above newly developed methodologies have been applied to an AUV collision avoidance system, and, therefore, it is considered that the work constitutes a contribution of knowledge in this area of work.
    J&S Marine Ltd
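The RRT at the core of the planner can be sketched as follows. This is a generic 2-D version, without the Manoeuvre Automaton merging or the multi-node extension developed in the thesis; the workspace bounds, obstacle, and parameters are illustrative:

```python
import math, random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, iters=4000, seed=2):
    """Minimal 2-D RRT: steer the nearest tree node one `step` toward each
    sample (with a 10% bias toward the goal); stop once a node lands within
    `goal_tol` of the goal. Returns the start-to-goal path, or None."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        nx, ny = nodes[near]
        d = math.dist((nx, ny), sample)
        if d == 0.0:
            continue
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        if not is_free(new):
            continue                      # reject nodes inside obstacles
        parent[len(nodes)] = near
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, i = [], len(nodes) - 1  # walk parents back to the root
            while i is not None:
                path.append(nodes[i]); i = parent[i]
            return path[::-1]
    return None

# Free space: a 10x10 box with a circular obstacle at (5, 5), radius 1.5.
free = lambda p: math.dist(p, (5.0, 5.0)) > 1.5
path = rrt((1.0, 1.0), (9.0, 9.0), free)
```

The fixed `step` plays the role that quantised manoeuvres play in the MA-merged planner: each tree edge is a short, executable motion primitive.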

    Ab-initio design of bulk materials assembled with silicon clusters

    Doctoral thesis, Physics, Universidade de Lisboa, Faculdade de Ciências, 2011. The fact that the founding papers of Density Functional Theory are among the most cited papers ever testifies to the importance of Quantum Mechanics and its (often) counter-intuitive features in characterizing many-particle systems at the nano and sub-nano scale. Density Functional Theory has enabled one to use the computer to predict quantitatively several of the properties of the aforementioned many-particle systems. The prediction of new materials, often exhibiting meta-stability, is one of its distinctive features. In this work we discuss a new class of meta-materials which, being silicon based, exhibit properties which in no way resemble those of their main constituent. We investigate the feasibility of assembling the exceptionally stable isovalent X@Si16 (X=Ti, Zr and Hf) nanoparticles to form new bulk materials. We use first-principles density functional theory. Our results predict the formation of stable, wide band-gap materials crystallizing in HCP structures in which the cages bind weakly, similar to fullerite. The present study suggests new pathways through which endohedral cage clusters may constitute viable means toward the production of synthetic materials with pre-defined physical and chemical properties. Within the same first-principles framework we investigate the vibrational modes and infrared spectra of the isovalent X@Si16 (X=Ti, Zr and Hf) nanoparticles. Our results predict the existence of high-intensity modes of low frequency. An estimate of the electron-phonon coupling strength is also provided, based on a single-molecule method introduced recently. The large value of this coupling, combined with the predicted stability of bulk materials assembled from these nanoparticles, suggests that these new materials, when appropriately doped, may exhibit high-temperature superconducting properties.

    Hierarchical High-Point Energy Flow Network for Jet Tagging

    The jet substructure observable basis is a systematic and powerful tool for analyzing the internal energy distribution of the constituent particles within a jet. In this work, we propose a novel method to insert neural networks into the jet substructure basis as a simple yet efficient, interpretable, IRC-safe deep learning framework to discover discriminative jet observables. An Energy Flow Polynomial (EFP) can be computed in a particular summation order, yielding a reorganized form which exhibits hierarchical IRC-safety. Inserting non-linear functions after the separate summations thus significantly extends the scope of IRC-safe jet substructure observables, and this is where neural networks can play an important role. Based on the structure of the simplest class of EFPs, corresponding to path graphs, we propose the Hierarchical Energy Flow Networks and the Local Hierarchical Energy Flow Networks. These two architectures exhibit remarkable discrimination performance on the top tagging dataset and the quark-gluon dataset compared to other benchmark algorithms, even when utilizing only the kinematic information of the constituent particles.
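The hierarchical reorganisation of a path-graph EFP can be made concrete with the simplest non-trivial case, the 3-node path: the direct O(N^3) sum factorises into a nested O(N^2) form, and a non-linearity g inserted after the inner sum is exactly where a network layer could act. This is a schematic sketch of that identity, not the paper's architecture:

```python
import math

def theta(a, b):
    """Angular distance between constituents a, b = (z, rapidity, phi)."""
    dphi = abs(a[2] - b[2]) % (2.0 * math.pi)
    dphi = min(dphi, 2.0 * math.pi - dphi)
    return math.hypot(a[1] - b[1], dphi)

def efp_path3_naive(parts):
    """3-node path-graph EFP as a direct O(N^3) sum:
    sum_{i,j,k} z_i z_j z_k theta_ij theta_jk."""
    return sum(a[0] * b[0] * c[0] * theta(a, b) * theta(b, c)
               for a in parts for b in parts for c in parts)

def efp_path3_hier(parts, g=lambda x: x):
    """Reorganised O(N^2) form: sum_j z_j * g(sum_i z_i theta_ij)^2.
    With g the identity this equals the EFP exactly; a non-linear g applied
    after the inner sum extends the observable while preserving the
    hierarchical IRC-safety discussed in the text."""
    return sum(b[0] * g(sum(a[0] * theta(a, b) for a in parts)) ** 2
               for b in parts)

# Toy jet: three constituents (momentum fraction z, rapidity, phi).
jet = [(0.5, 0.0, 0.0), (0.3, 0.1, -0.2), (0.2, -0.3, 0.4)]
naive = efp_path3_naive(jet)
hier = efp_path3_hier(jet)   # identical to the naive sum by construction
```

Replacing g with a small multilayer perceptron, applied per summation level, is the step that turns this identity into a trainable hierarchical energy flow observable.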