
    GRMHD prediction of coronal variability in accreting black holes

    On the basis of data from an energy-conserving 3D general relativistic MHD simulation, we predict the statistical character of variability in the coronal luminosity from accreting black holes. When the inner boundary of the corona is defined to be the electron scattering photosphere, its location depends only on the mass accretion rate in Eddington units (\dot{M}). Nearly independent of viewing angle and \dot{M}, the power spectrum over the range of frequencies from approximately the orbital frequency at the innermost stable circular orbit (ISCO) to ~100 times lower is well approximated by a power law with index -2, crudely consistent with the observed power spectra of hard X-ray fluctuations in AGN and the hard states of Galactic binary black holes. The underlying physical driver of variability in the light curve is variation in the accretion rate caused by the chaotic character of MHD turbulence, but the power spectrum of the coronal light output is significantly steeper. Part of this contrast is due to the fact that the mass accretion rate can be significantly modulated by radial epicyclic motions that do not result in dissipation and therefore do not drive luminosity fluctuations. The other part is due to the inward decrease of the characteristic inflow time, which leads to a decreasing radial coherence length with increasing fluctuation frequency.
    Comment: Accepted for publication in ApJ, 35 pages, 11 figures (8 color and 3 greyscale), AASTEX. High-resolution versions can be found at the following links: [PS] http://www.pha.jhu.edu/~scn/papers/grmhd_var.ps [PDF] http://www.pha.jhu.edu/~scn/papers/grmhd_var.pd
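
    The variability analysis summarized above rests on a standard procedure: compute the power spectral density (PSD) of a light curve and fit a power-law slope over the relevant frequency range. The Python below is a minimal, hedged sketch of that kind of analysis; the light-curve array, sampling interval, and fitting band are illustrative assumptions, not values from the paper.

    import numpy as np

    def power_law_slope(light_curve, dt, f_lo, f_hi):
        """Estimate the power-law index of a light curve's power spectrum.

        light_curve : 1D array of luminosity samples (arbitrary units)
        dt          : sampling interval (e.g. in units of the ISCO orbital period)
        f_lo, f_hi  : frequency band over which to fit the slope
        """
        lc = np.asarray(light_curve, dtype=float)
        lc = lc - lc.mean()                      # remove the DC component
        freqs = np.fft.rfftfreq(lc.size, d=dt)
        psd = np.abs(np.fft.rfft(lc)) ** 2       # periodogram estimate of the PSD

        band = (freqs >= f_lo) & (freqs <= f_hi)
        # Fit log P = slope * log f + const; slope ~ -2 would match the abstract.
        slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
        return slope

    # Illustrative usage with a synthetic red-noise light curve:
    rng = np.random.default_rng(0)
    lc = np.cumsum(rng.normal(size=4096))        # a random walk has a ~1/f^2 spectrum
    print(power_law_slope(lc, dt=1.0, f_lo=1e-3, f_hi=1e-1))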

    Sensitivity studies for the Cherenkov Telescope Array

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Faculty of Physical Sciences, Department of Atomic, Molecular and Nuclear Physics, defended on 28-09-2015.
    Since the creation of the first telescope in the 17th century, every major discovery in astrophysics has been the direct consequence of the development of novel observation techniques, opening new windows in the electromagnetic spectrum. After Karl Jansky serendipitously discovered the first radio source in 1933, Grote Reber built the first parabolic radio telescope in his backyard, planting the seed of a whole new field of astronomy. Similarly, new technologies in the 1950s allowed the establishment of other fields, such as infrared, ultraviolet, and X-ray astronomy. The highest-energy end of the electromagnetic spectrum, the gamma-ray range, represents the last unexplored window for astronomers and should reveal the most extreme phenomena that take place in the Universe. Given the technical complexity of gamma-ray detection and the extremely low fluxes involved, gamma-ray astronomy has developed more slowly than astronomy at other wavelengths. Nowadays, the success of consecutive space missions, together with the development and refinement of new ground-based detection techniques, has produced outstanding scientific results and has brought gamma-ray astronomy to a level on par with other fields of astronomy.
    This work is devoted to the study and improvement of the future Cherenkov Telescope Array (CTA), the next generation of ground-based gamma-ray detectors, designed to observe photons with the highest energies ever recorded from cosmic sources. The sensitivity studies performed here for the CTA collaboration evaluate the observatory's performance through the analysis of large-scale Monte Carlo (MC) simulations, along with an estimation of its future potential on specific physics cases. Together with the testing and development of the analysis tools employed, these results are critical for understanding CTA's future capabilities, the efficiency of different telescope placement approaches, and the effect of the construction site on performance through parameters such as altitude and the geomagnetic field. The proposed Northern Hemisphere construction sites were analyzed and evaluated, providing an accurate estimate of their suitability to host the observatory. As for the CTA layout candidates, an unbiased comparison of the different arrays proposed by the collaboration was performed, using Fermi-LAT catalogs to forecast the performance of each array on specific science cases. In addition, the application of machine learning algorithms to gamma-ray astronomy was studied, comparing alternative methods for energy reconstruction and background suppression and introducing new applications of these algorithms, such as the determination of gamma-ray source types from their spectral features.
    The analysis presented here of both CTA-N and CTA-S candidates represents the most comprehensive study of CTA capabilities performed by the collaboration to date. Experience gained from improving this software will guide future CTA analysis pipelines by comparing the sensitivity attained by alternative analysis chains. From these results, both the CTA-N and CTA-S candidates "2N" and "2Q" fulfill the requirements on sensitivity, angular and energy resolution, effective area, and off-axis performance. MC simulations provide a useful test bench for the different designs within the CTA project, and these results demonstrate that their correct implementation would attain the desired performance and potential scientific output.
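
    As a hedged illustration of the kind of machine-learning step mentioned above (background suppression via gamma/hadron separation), the sketch below trains a random-forest classifier on Hillas-style image parameters and ranks events by a "gammaness" score. The feature arrays, labels, and hyperparameters are placeholders, not the thesis's actual pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Placeholder training set: rows are shower images described by Hillas-style
    # parameters (length, width, size, ...); labels are 1 = gamma, 0 = hadron.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(5000, 4))            # stand-in for real image parameters
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)

    # Rank events by "gammaness" score; a cut on this score suppresses hadronic background.
    gammaness = clf.predict_proba(X_test)[:, 1]
    print("ROC AUC:", roc_auc_score(y_test, gammaness))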

    Higher Performance Traversal and Construction of Tree-Based Raytracing Acceleration Structures

    Ray tracing is an important computational primitive used in different algorithms including collision detection, line-of-sight computations, ray tracing-based sound propagation, and most prominently light transport algorithms. It computes the closest intersections for a given set of rays and geometry. The geometry is usually modeled with a set of geometric primitives such as triangles or quadrangles which define a scene. An efficient ray tracing implementation needs to rely on an acceleration structure to decouple ray tracing complexity from scene complexity as far as possible. The most common ray tracing acceleration structures are kd-trees and bounding volume hierarchies (BVHs), which have an O(log n) ray tracing complexity in the number of scene primitives. Both structures offer similar ray tracing performance in practice. This thesis presents theoretical insights and practical approaches for higher quality, improved graphics processing unit (GPU) ray tracing performance, and faster construction of BVHs and kd-trees, with the focus on BVHs.
    The chosen construction strategy for BVHs and kd-trees has a significant impact on final ray tracing performance. The most common measure for the quality of BVHs and kd-trees is the surface area metric (SAM). Using assumptions on the distribution of ray origins and directions, the SAM approximates the cost of traversing an acceleration structure without having to trace a single ray. High-quality construction algorithms aim at reducing the SAM cost. The most widespread high-quality greedy plane-sweep algorithm applies the surface area heuristic (SAH), which is a simplification of the SAM. Advances in research on quality metrics for BVHs have shown that greedy SAH-based plane-sweep builders often construct BVHs with superior traversal performance despite the fact that their SAM costs are higher than those of more sophisticated builders. Motivated by this observation, we examine different construction algorithms that use the SAM cost of temporarily constructed SAH-built BVHs to guide the construction to higher-quality BVHs. An extensive evaluation reveals that the resulting BVHs indeed achieve significantly higher trace performance for primary and secondary diffuse rays compared to BVHs constructed with standard plane-sweeping. Compared to the Spatial-BVH, a kd-tree/BVH hybrid, we still achieve an acceptable increase in performance. We show that the proposed algorithm has subquadratic computational complexity in the number of primitives, which renders it usable in practical applications.
    An alternative to the plane-sweep BVH builder is agglomerative clustering, which constructs BVHs in a bottom-up fashion. It clusters primitives with a SAM-inspired heuristic and gives BVHs of mixed quality compared to standard plane-sweeping construction. While related work focused only on the construction speed of this algorithm, we examine clustering heuristics that aim at higher hierarchy quality. We propose a fully SAM-based clustering heuristic which on average produces better-performing BVHs than original agglomerative clustering.
    The definitions of the SAM and SAH rely on assumptions about the distribution of ray origins and directions to define a conditional geometric probability for intersecting nodes in kd-trees and BVHs. We analyze this probability function and show that the assumptions allow for an alternative definition. Unlike the conventional probability, our definition accounts for the varying likelihood of intersecting objects from different directions. While the new probability does not result in improved practical tracing performance, it provides an interesting insight into the conventional probability: we show that the conventional probability function is directly linked to the one we examine and can be interpreted as covertly accounting for directional variation.
    The path tracing light transport algorithm can require tracing billions of rays. Thus, it can pay off to construct high-quality acceleration structures to reduce the cost of each ray. At the same time, the resulting number of trace operations offers a tremendous amount of data parallelism. With CPUs moving towards many-core architectures and GPUs becoming more general-purpose architectures, path tracing can now be parallelized well on commodity hardware. While parallelization is trivial in theory, properties of real hardware make efficient parallelization difficult, especially when tracing so-called incoherent rays. These rays cause execution flow divergence, which reduces the efficiency of SIMD-based parallelism, and incoherent memory accesses, which reduce memory read efficiency. We investigate how different BVH and node memory layouts, as well as storing the BVH in different memory areas, impact the ray tracing performance of a GPU path tracer. We also optimize the BVH layout using information gathered in a pre-processing pass by applying a number of different BVH reordering techniques, which results in increased ray tracing performance.
    Our final contribution is in the field of fast, high-quality BVH and kd-tree construction. Increased quality usually comes at the cost of higher construction time. To reduce construction time, several algorithms have been proposed to construct acceleration structures in parallel on GPUs. These are able to perform full rebuilds in real time for moderate scene sizes if all data fits completely into GPU memory. However, the sheer amount of data arising from the geometric detail used in production rendering makes construction on GPUs infeasible due to GPU memory limitations. Existing out-of-core GPU approaches perform hybrid bottom-up/top-down construction, which suffers from reduced acceleration structure quality in the critical upper levels of the tree. We present an out-of-core multi-GPU approach for full top-down SAH-based BVH and kd-tree construction, which is designed to work on larger scenes than conventional approaches and yields high-quality trees. The algorithm is evaluated for scenes consisting of up to 1 billion triangles, and performance scales with an increasing number of GPUs.
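
    The surface area heuristic discussed above scores a candidate split of a node by weighting the child intersection costs with the ratio of each child's bounding-box surface area to the parent's. The Python below is a minimal, hedged sketch of that cost evaluation for axis-aligned bounding boxes; the traversal and intersection cost constants are illustrative assumptions, not values from the thesis.

    import numpy as np

    def surface_area(lo, hi):
        """Surface area of an axis-aligned bounding box given min/max corners."""
        d = np.maximum(hi - lo, 0.0)
        return 2.0 * (d[0] * d[1] + d[1] * d[2] + d[0] * d[2])

    def sah_cost(parent_lo, parent_hi,
                 left_lo, left_hi, n_left,
                 right_lo, right_hi, n_right,
                 c_trav=1.0, c_isect=1.0):
        """Classic SAH cost of splitting a node into two children.

        cost = c_trav + (SA(L)/SA(P)) * n_left * c_isect
                      + (SA(R)/SA(P)) * n_right * c_isect
        """
        sa_p = surface_area(parent_lo, parent_hi)
        sa_l = surface_area(left_lo, left_hi)
        sa_r = surface_area(right_lo, right_hi)
        return c_trav + c_isect * (sa_l / sa_p * n_left + sa_r / sa_p * n_right)

    # Illustrative usage: split a unit cube containing 8 primitives in half along x.
    lo, hi = np.zeros(3), np.ones(3)
    left_hi = np.array([0.5, 1.0, 1.0])
    right_lo = np.array([0.5, 0.0, 0.0])
    print(sah_cost(lo, hi, lo, left_hi, 4, right_lo, hi, 4))

    A greedy plane-sweep builder would evaluate this cost for many candidate split planes per node and keep the cheapest one, which is the construction strategy the abstract refers to.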

    The automatic definition and generation of axial lines and axial maps


    QuadStack: An Efficient Representation and Direct Rendering of Layered Datasets

    We introduce QuadStack, a novel algorithm for volumetric data compression and direct rendering. Our algorithm exploits the data redundancy often found in layered datasets, which are common in science and engineering fields such as geology, biology, mechanical engineering, and medicine. QuadStack first compresses the volumetric data into vertical stacks, which are then compressed into a quadtree that identifies and represents the layered structures at its internal nodes. The associated data (color, material, density, etc.) and the shape of these layer structures are decoupled and encoded independently, leading to high compression ratios (4× to 54× relative to the original voxel model's memory footprint in our experiments). We also introduce an algorithm for value retrieval from the QuadStack representation and show that access has logarithmic complexity. Because of the fast access, QuadStack is suitable for efficient data representation and direct rendering. We show that our GPU implementation performs comparably in speed to state-of-the-art algorithms (18-79 MRays/s in our implementation) while maintaining a significantly smaller memory footprint.
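
    As a hedged sketch of the kind of layered representation described above (not the paper's actual data structure), the Python below stores a short stack of layer boundaries per quadtree leaf and answers point queries by descending the tree, giving access that is logarithmic in the quadtree depth. All class and field names are illustrative assumptions.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class StackNode:
        """Quadtree node over the (x, y) domain; leaves hold a vertical stack."""
        x0: float; y0: float; size: float
        children: Optional[List["StackNode"]] = None   # four children, or None for a leaf
        # Leaf payload: layer top heights (ascending) and the material of each layer.
        layer_tops: Optional[List[float]] = None
        materials: Optional[List[str]] = None

    def query(node: StackNode, x: float, y: float, z: float) -> Optional[str]:
        """Return the material at (x, y, z), descending the quadtree (O(depth))."""
        while node.children is not None:
            half = node.size / 2.0
            ix = 1 if x >= node.x0 + half else 0
            iy = 1 if y >= node.y0 + half else 0
            node = node.children[iy * 2 + ix]
        # Inside a leaf: scan the (short) stack of layer boundaries from bottom up.
        for top, mat in zip(node.layer_tops, node.materials):
            if z <= top:
                return mat
        return None   # above the highest layer

    # Illustrative usage: a single-leaf "tree" with two layers.
    leaf = StackNode(0.0, 0.0, 1.0, layer_tops=[0.3, 1.0], materials=["rock", "soil"])
    print(query(leaf, 0.5, 0.5, 0.2))   # -> "rock"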

    A Human-Centered Approach for the Design of Perimeter Office Spaces Based on Visual Environment Criteria

    With perimeter office spaces with large glazed facades being an indisputable trend in modern architecture, human comfort has come into the scope of building science. The necessity of improving occupants' satisfaction while maintaining sustainability has become apparent, as the productivity and even the well-being of occupants are connected with a pleasant indoor environment. While thermal comfort has been extensively studied, satisfaction with the visual environment still has aspects that are either inadequately explained or entirely absent from the literature. This thesis investigated most aspects of the visual environment, including visual comfort, lighting energy performance through the utilization of daylight, and connection to the outdoors, using experimental studies, simulation studies, and experiments with human subjects.

    Reformulating Space Syntax: The Automatic Definition and Generation of Axial Lines and Axial Maps

    Space syntax is a technique for measuring the relative accessibility of different locations in a spatial system which has been loosely partitioned into convex spaces. These spaces are approximated by straight lines, called axial lines, and the topological graph associated with their intersection is used to generate indices of distance, called integration, which are then used as proxies for accessibility. The most controversial problem in applying the technique involves the definition of these lines. There is no unique method for their generation; hence different users generate different sets of lines for the same application. In this paper, we explore this problem, arguing that to make progress, there need to be unambiguous, agreed procedures for generating such maps. The methods we suggest for generating such lines depend on defining viewsheds, called isovists, which can be approximated by their maximum diameters, these lengths being used to form axial maps similar to those used in space syntax. We propose a generic algorithm for sorting isovists according to various measures, approximating them by their diameters, and using the axial map as a summary of the extent to which isovists overlap (intersect) and are accessible to one another. We examine the fields created by these viewsheds and the statistical properties of the maps created. We demonstrate our techniques for the small French town of Gassin, used originally by Hillier and Hanson (1984) to illustrate the theory, exploring different criteria for sorting isovists and different axial maps generated by changing the scale of resolution. This paper throws up as many problems as it solves, but we believe it points the way to firmer foundations for space syntax.
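
    The procedure sketched above (approximate isovists by straight lines, build the graph of their intersections, and derive depth-based integration indices) can be illustrated with a small, hedged Python example. The intersection test, graph construction, and mean-depth measure below are generic stand-ins, not the authors' implementation.

    import itertools
    from collections import deque

    def segments_intersect(p1, p2, p3, p4):
        """2D segment intersection test via orientation signs (proper crossings only)."""
        def orient(a, b, c):
            return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
        d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
        return (d1 * d2 < 0) and (d3 * d4 < 0)

    def axial_graph(lines):
        """Adjacency sets: two axial lines are connected if they intersect."""
        graph = {i: set() for i in range(len(lines))}
        for i, j in itertools.combinations(range(len(lines)), 2):
            if segments_intersect(*lines[i], *lines[j]):
                graph[i].add(j)
                graph[j].add(i)
        return graph

    def mean_depth(graph, start):
        """Mean topological depth of all other lines from a given line (BFS)."""
        depth = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in depth:
                    depth[v] = depth[u] + 1
                    queue.append(v)
        others = [d for node, d in depth.items() if node != start]
        return sum(others) / len(others) if others else float("inf")

    # Illustrative usage: three axial lines, the first crossing the other two.
    lines = [((0, 0), (4, 0)), ((1, -1), (1, 1)), ((3, -1), (3, 1))]
    graph = axial_graph(lines)
    print({i: mean_depth(graph, i) for i in graph})   # lower mean depth = more integrated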

    Miniaturization of an optoelectronic holographic otoscope for measurement of nanodisplacements in tympanic membranes

    An optoelectronic holographic otoscope (OEHO) is currently in use in a major hospital. The OEHO allows for nanometer-displacement measurements of the deformation of the mammalian tympanic membrane (TM) under acoustic stimulation. The optical head of the current system is sufficient for laboratory use, but requires improved optical performance and miniaturization to be suitable for clinical use. A new optical head configuration is designed, aided by ray tracing analysis and research into the biomechanical and optical properties of the TM. A prototype is built and its optical performance is quantified with image processing algorithms developed for this purpose. The device is validated through comparison of analytical, computational, and experimental results and through interferometric measurements of chinchilla TMs.
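
    The nanometer sensitivity mentioned above comes from measuring optical phase rather than intensity. As a hedged illustration (not the OEHO's actual processing chain), the Python below converts four phase-stepped frames into a wrapped phase map and turns the phase change between a reference and a deformed state into an out-of-plane displacement map, assuming near-normal illumination and observation; the 532 nm wavelength and the synthetic inputs are placeholder values.

    import numpy as np

    def phase_from_four_steps(I0, I1, I2, I3):
        """Wrapped optical phase from four camera frames phase-stepped by pi/2.

        Standard four-step formula: phi = atan2(I3 - I1, I0 - I2).
        In practice this would produce the reference and deformed phase maps.
        """
        return np.arctan2(I3 - I1, I0 - I2)

    def out_of_plane_displacement(phase_ref, phase_def, wavelength_nm):
        """Out-of-plane displacement map (nm) from reference/deformed phase maps.

        Assumes near-normal illumination and observation, so a phase change of
        2*pi corresponds to lambda/2 of displacement: d = lambda * dphi / (4*pi).
        """
        dphi = np.angle(np.exp(1j * (phase_def - phase_ref)))   # wrap to (-pi, pi]
        return wavelength_nm * dphi / (4.0 * np.pi)

    # Illustrative usage with synthetic phase maps and a 532 nm laser.
    ref = np.zeros((4, 4))
    deformed = np.full((4, 4), 0.5)          # 0.5 rad of phase change everywhere
    print(out_of_plane_displacement(ref, deformed, wavelength_nm=532.0))
    # roughly 21 nm of displacement per pixel for this synthetic example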