
    Proton imaging of stochastic magnetic fields

    Recent laser-plasma experiments report the existence of dynamically significant magnetic fields, whose statistical characterisation is essential for understanding the physical processes these experiments are attempting to investigate. In this paper, we show how a proton imaging diagnostic can be used to determine a range of relevant magnetic field statistics, including the magnetic-energy spectrum. To achieve this goal, we explore the properties of an analytic relation between a stochastic magnetic field and the image-flux distribution created upon imaging that field. We conclude that features of the beam's final image-flux distribution often display a universal character determined by a single, field-scale-dependent parameter - the contrast parameter - which quantifies the relative size of the correlation length of the stochastic field, proton displacements due to magnetic deflections, and the image magnification. For stochastic magnetic fields, we establish the existence of four contrast regimes - linear, nonlinear injective, caustic and diffusive - under which proton-flux images relate to their parent fields in a qualitatively distinct manner. As a consequence, it is demonstrated that in the linear or nonlinear injective regimes, the path-integrated magnetic field experienced by the beam can be extracted uniquely, as can the magnetic-energy spectrum under a further statistical assumption of isotropy. This is no longer the case in the caustic or diffusive regimes. We also discuss complications to the contrast-regime characterisation arising for inhomogeneous, multi-scale stochastic fields, as well as limitations currently placed by experimental capabilities on extracting magnetic field statistics. The results presented in this paper provide a comprehensive description of proton images of stochastic magnetic fields, with applications to the improved analysis of given proton-flux images. Comment: Main paper pp. 1-29; appendices pp. 30-84. 24 figures, 2 tables.
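
    As a rough illustration of the kind of forward model discussed above (and not the paper's own analysis), the Python sketch below deflects a uniform proton sheet through a synthetic stochastic field and bins the resulting image flux. The power-law spectrum, deflection scale, detector distance and correlation length are all placeholder assumptions.

    # Illustrative sketch: deflect a uniform proton sheet through a synthetic 2D
    # stochastic field and bin the resulting image-flux distribution.
    import numpy as np

    rng = np.random.default_rng(0)
    n, L = 256, 1.0                      # grid size and domain side (arbitrary units)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # avoid division by zero at the mean mode

    # Random stream function with an assumed power-law spectrum; B = curl(phi z-hat)
    # is then automatically divergence-free in 2D.
    power = k2 ** (-11.0 / 6.0)
    power[0, 0] = 0.0
    phi_hat = np.sqrt(power) * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    phi = np.fft.ifft2(phi_hat).real
    Bx = np.gradient(phi, L / n, axis=1)   # d(phi)/dy
    By = -np.gradient(phi, L / n, axis=0)  # -d(phi)/dx

    # Small-angle deflections: angle taken proportional to the path-integrated
    # perpendicular field (placeholder proportionality constant).
    alpha = 1e-3 / np.sqrt(np.mean(Bx**2 + By**2))   # rms deflection ~ 1 mrad (assumed)
    rs = 20.0                                        # plasma-to-detector distance (assumed)
    x0, y0 = np.meshgrid(np.linspace(0, L, n, endpoint=False),
                         np.linspace(0, L, n, endpoint=False), indexing="ij")
    x1 = x0 + rs * alpha * By                        # v x B direction, up to a sign (sketch)
    y1 = y0 - rs * alpha * Bx

    # Bin the displaced protons (periodic wrap) to form the image-flux distribution.
    ix = np.mod(np.floor(x1 / L * n).astype(int), n)
    iy = np.mod(np.floor(y1 / L * n).astype(int), n)
    flux = np.zeros((n, n))
    np.add.at(flux, (ix, iy), 1.0)

    # A contrast-like parameter: rms proton displacement over an assumed correlation length.
    corr_len = L / 8.0
    mu = np.sqrt(np.mean((x1 - x0)**2 + (y1 - y0)**2)) / corr_len
    print(f"contrast-like parameter ~ {mu:.2f}; relative flux rms = {flux.std() / flux.mean():.2f}")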

    Light in Power: A General and Parameter-free Algorithm for Caustic Design

    We present in this paper a generic and parameter-free algorithm to efficiently build a wide variety of optical components, such as mirrors or lenses, that satisfy prescribed light energy constraints. In all of our problems, one is given a collimated or point light source and a desired illumination after reflection or refraction, and the goal is to design the geometry of a mirror or lens which transports exactly the light emitted by the source onto the target. We first propose a general framework and show that eight different optical component design problems amount to solving a light energy conservation equation that involves the computation of visibility diagrams. We then show that these diagrams all have the same structure and can be obtained by intersecting a 3D Power diagram with a planar or spherical domain. This allows us to propose an efficient and fully generic algorithm capable of solving these eight optical component design problems. The support of the prescribed target illumination can be a set of directions or a set of points located at a finite distance. Our solutions satisfy design constraints such as convexity or concavity. We show the effectiveness of our algorithm on simulated and fabricated examples.
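
    The light energy conservation idea can be illustrated with a deliberately simplified Python sketch: target-plane samples are assigned to power (Laguerre) cells by brute force, and the cell weights are adjusted until each cell captures its prescribed share of the light energy. This stands in for, and is far cruder than, the visibility-diagram and Power-diagram machinery of the paper; the site positions, target distribution and step size are arbitrary assumptions.

    # Minimal sketch of light-energy balancing via weighted (power) cells.
    import numpy as np

    rng = np.random.default_rng(1)
    m = 50                                   # number of sites (e.g. facets) - assumed
    sites = rng.random((m, 2))               # site positions in the target plane
    target = np.full(m, 1.0 / m)             # prescribed energy per site (uniform here)
    weights = np.zeros(m)

    # Dense sampling of the target illumination (uniform intensity assumed).
    g = 200
    xs, ys = np.meshgrid(np.linspace(0, 1, g), np.linspace(0, 1, g), indexing="ij")
    pts = np.column_stack([xs.ravel(), ys.ravel()])
    mass = 1.0 / len(pts)                    # energy carried by each sample

    def cell_energy(weights):
        """Energy captured by each power cell: argmin_i |x - p_i|^2 - w_i."""
        d2 = ((pts[:, None, :] - sites[None, :, :]) ** 2).sum(-1) - weights[None, :]
        owner = d2.argmin(axis=1)
        return np.bincount(owner, minlength=m) * mass

    # Fixed-point iteration: raise the weight of under-served cells, lower over-served ones.
    step = 0.2
    for it in range(200):
        err = cell_energy(weights) - target
        if np.abs(err).max() < 1e-3:
            break
        weights -= step * err                # simple gradient step on the dual objective

    print(f"iterations: {it}, max energy error: {np.abs(cell_energy(weights) - target).max():.2e}")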

    State of the Art on Stylized Fabrication

    Digital fabrication devices are powerful tools for creating tangible reproductions of 3D digital models. Most available printing technologies aim at producing an accurate copy of a three-dimensional shape. However, fabrication technologies can also be used to create a stylistic representation of a digital shape. We refer to this class of methods as ‘stylized fabrication methods’. These methods abstract geometric and physical features of a given shape to create an unconventional representation, to produce an optical illusion, or to devise a particular interaction with the fabricated model. In this state-of-the-art report, we classify and review this broad and emerging class of approaches and propose possible directions for future research.

    Optimal Survey Strategies and Predicted Planet Yields for the Korean Microlensing Telescope Network

    The Korean Microlensing Telescope Network (KMTNet) will consist of three 1.6m telescopes, each with a 4 deg^{2} field of view (FoV), and will be dedicated to monitoring the Galactic Bulge to detect exoplanets via gravitational microlensing. KMTNet's combination of aperture size, FoV, cadence, and longitudinal coverage will provide a unique opportunity to probe exoplanet demographics in an unbiased way. Here we present simulations that optimize the observing strategy for, and predict the planetary yields of, KMTNet. We find preferences for four target fields located in the central Bulge and an exposure time of t_{exp} = 120s, leading to the detection of ~2,200 microlensing events per year. We estimate detection rates for planets with mass and separation across the ranges 0.1 <= M_{p}/M_{Earth} <= 1000 and 0.4 <= a/AU <= 16, respectively. Normalizing these rates to the cool-planet mass function of Cassan (2012), we predict KMTNet will be approximately uniformly sensitive to planets with mass 5 <= M_{p}/M_{Earth} <= 1000 and will detect ~20 planets per year per dex in mass across that range. For lower-mass planets with mass 0.1 <= M_{p}/M_{Earth} < 5, we predict KMTNet will detect ~10 planets per year. We also compute the yields KMTNet will obtain for free-floating planets (FFPs) and predict KMTNet will detect ~1 Earth-mass FFP per year, assuming an underlying population of one such planet per star in the Galaxy. Lastly, we investigate the dependence of these detection rates on the number of observatories, the photometric precision limit, and optimistic assumptions regarding seeing, throughput, and flux measurement uncertainties. Comment: 29 pages, 31 figures, submitted to ApJ. For a brief video explaining the key results of this paper, please visit: https://www.youtube.com/watch?v=e5rWVjiO26
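
    The yield estimate quoted above folds a detection-efficiency curve into a planet mass function; the toy Python sketch below shows the shape of that calculation with entirely hypothetical efficiency and mass-function numbers (they are not the paper's or Cassan 2012's values).

    # Toy yield integration: detection efficiency x mass function, summed over mass.
    import numpy as np

    log_mass = np.linspace(-1, 3, 81)            # log10(M_p / M_Earth) from 0.1 to 1000
    dlogM = log_mass[1] - log_mass[0]

    # Hypothetical mass function: planets per star per dex in mass, falling with mass.
    mass_function = 0.3 * 10 ** (-0.3 * (log_mass - 2.0))

    # Hypothetical per-event detection efficiency, rising with planet mass then saturating.
    events_per_year = 2200.0
    efficiency = 0.03 / (1.0 + 10 ** (-(log_mass - 0.5)))

    yield_per_dex = events_per_year * efficiency * mass_function
    total_per_year = np.sum(yield_per_dex) * dlogM
    print(f"toy total yield: ~{total_per_year:.0f} planets per year")
    print(f"toy yield at 10 M_Earth: ~{np.interp(1.0, log_mass, yield_per_dex):.1f} per year per dex")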

    Utilising path-vertex data to improve Monte Carlo global illumination.

    Efficient techniques for photo-realistic rendering are in high demand across a wide array of industries. Notable applications include visual effects for film, entertainment and virtual reality. Less direct applications such as visualisation for architecture, lighting design and product development also rely on the synthesis of realistic and physically based illumination. Such applications place ever-increasing demands on light transport algorithms, requiring the computation of photo-realistic effects while handling complex geometry, light scattering models and illumination. Techniques based on Monte Carlo integration handle such scenarios elegantly and robustly, but despite seeing decades of focused research and wide commercial support, these methods and their derivatives still exhibit undesirable side effects that are yet to be resolved. In this thesis, Monte Carlo path tracing techniques are improved upon by utilising path-vertex data and intermediate radiance contributions readily available during rendering. This permits the development of novel progressive algorithms that render low-noise global illumination while striving to maintain the desirable accuracy and convergence properties of unbiased methods. The thesis starts by presenting a discussion of optical phenomena, physically based rendering and photo-realistic image synthesis. This is followed by an in-depth discussion of the published theoretical and practical research in this field, with a focus on stochastic methods and modern rendering methodologies. This provides insight into the issues surrounding Monte Carlo integration, both in general and in rendering-specific contexts, along with an appreciation for the complexities of solving global light transport. Alternative methods that aim to address these issues are discussed, providing an insight into modern rendering paradigms and their characteristics. Thus, an understanding of the key aspects is obtained that is necessary to build up and discuss the novel research and contributions to the field developed throughout this thesis. First, a path space filtering strategy is proposed that allows the path-based space of light transport to be classified into distinct subsets. This permits the novel combination of robust path tracing and recent progressive photon mapping algorithms to handle each subset based on the characteristics of the light transport in that space. This produces a hybrid progressive rendering technique that utilises the strengths of existing state-of-the-art Monte Carlo and photon mapping methods to provide efficient and consistent rendering of complex scenes with vanishing bias. The second original contribution is a probabilistic image-based filtering and sample clustering framework that provides high-quality previews of global illumination whilst remaining aware of high-frequency detail and features in geometry, materials and the incident illumination. As will be seen, the challenges of edge-aware noise reduction are numerous and long-standing, particularly when identifying high-frequency features in noisy illumination signals. Discontinuities such as hard shadows and glossy reflections are commonly overlooked by progressive filtering techniques; however, by dividing path space into multiple layers, once again by utilising path-vertex data, the overlapping illumination of varying intensities, colours and frequencies is handled more effectively. Noise is thus removed from each layer independently of the features present in the remaining path space, effectively preserving those features.
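
    The path-space classification underlying both contributions can be sketched as follows: each path is tagged by the interaction type at its vertices and routed to a separate image layer, so layers can later be filtered independently. The Heckbert-style patterns, the two-way diffuse/specular vertex labels and the renderer hooks below are illustrative assumptions, not the thesis's implementation.

    # Illustrative sketch: route each path's radiance into a layer chosen by its
    # vertex-type string, so layers can be denoised without blurring each other.
    import re
    from collections import defaultdict
    import numpy as np

    WIDTH, HEIGHT = 64, 64
    layers = defaultdict(lambda: np.zeros((HEIGHT, WIDTH, 3)))

    # Layer rules: a path whose vertex string matches the pattern goes to that layer.
    LAYER_RULES = [
        ("caustic",  re.compile(r"^LS+DE$")),    # light -> specular chain -> diffuse -> eye
        ("specular", re.compile(r"^LD?S+E$")),   # reflections/refractions seen directly
        ("diffuse",  re.compile(r"^L[DS]*DE$")), # everything ending in a diffuse bounce
    ]

    def add_path_sample(pixel, vertex_types, radiance):
        """Route one path's radiance to the first layer whose rule matches its type string."""
        path_string = "L" + "".join(vertex_types) + "E"
        for name, rule in LAYER_RULES:
            if rule.match(path_string):
                x, y = pixel
                layers[name][y, x] += radiance
                return name
        layers["residual"][pixel[1], pixel[0]] += radiance
        return "residual"

    # Toy usage: a caustic path (specular-specular-diffuse) and a plain diffuse path.
    print(add_path_sample((10, 12), ["S", "S", "D"], np.array([0.8, 0.7, 0.2])))
    print(add_path_sample((10, 12), ["D"], np.array([0.1, 0.1, 0.1])))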

    Master of Science

    Virtual point lights (VPLs) provide an effective solution to global illumination computation by converting the indirect illumination into direct illumination from many virtual light sources. This approach results in a less noisy image compared to Monte Carlo methods. In addition, the number of VPLs to generate can be specified in advance; therefore, it can be adjusted depending on the scene, desired quality, time budget, and the available computational power. In this thesis, we investigate a new technique that carefully places VPLs to improve the quality of global illumination computed with them. Our method consists of three passes. In the first pass, we randomly generate a large number of VPLs in the scene, starting from the camera, to place them in positions that can contribute to the final rendered image. Then, we remove a considerable number of these VPLs using a Poisson disk sample elimination method to obtain a subset of VPLs that is uniformly distributed over the part of the scene that is indirectly visible to the camera. The second pass estimates the radiant intensity of these VPLs by performing light tracing from the original light sources in the scene and scattering the radiance of light rays at each hit point to the VPLs close to that point. The final pass renders the scene, shading all points visible to the camera using the original light sources and the VPLs.
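
    The VPL-thinning step in the first pass can be sketched with a simple weighted sample-elimination routine (in the spirit of Poisson disk sample elimination): candidate VPLs are greedily removed where they crowd each other most. The candidate set, target count and radius heuristic below are placeholders, not the thesis's parameters.

    # Sketch: reduce a large candidate VPL set to a roughly uniform subset by
    # greedily eliminating the most crowded samples.
    import numpy as np
    from scipy.spatial import cKDTree

    def eliminate_samples(points, target_count, alpha=8.0):
        """Greedily remove the most crowded points until target_count remain."""
        n = len(points)
        # Heuristic Poisson-disk radius for target_count points in a unit-ish domain.
        r_max = (1.0 / target_count) ** (1.0 / points.shape[1])
        tree = cKDTree(points)
        pairs = tree.query_pairs(2.0 * r_max, output_type="ndarray")

        # Per-point crowding weight: sum over neighbours of (1 - d/(2 r_max))^alpha.
        d = np.linalg.norm(points[pairs[:, 0]] - points[pairs[:, 1]], axis=1)
        w = (1.0 - d / (2.0 * r_max)) ** alpha
        weight = np.zeros(n)
        np.add.at(weight, pairs[:, 0], w)
        np.add.at(weight, pairs[:, 1], w)

        alive = np.ones(n, dtype=bool)
        neighbours = [[] for _ in range(n)]
        for (i, j), wij in zip(pairs, w):
            neighbours[i].append((j, wij))
            neighbours[j].append((i, wij))

        for _ in range(n - target_count):
            victim = np.argmax(np.where(alive, weight, -np.inf))
            alive[victim] = False
            for j, wij in neighbours[victim]:   # removing a point un-crowds its neighbours
                weight[j] -= wij
        return points[alive]

    # Toy usage: thin 5000 candidate VPL positions down to 500 well-spread ones.
    candidates = np.random.default_rng(2).random((5000, 3))
    vpls = eliminate_samples(candidates, 500)
    print(vpls.shape)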

    The Fractal Geometry of the Cosmic Web and its Formation

    The cosmic web structure is studied with the concepts and methods of fractal geometry, employing the adhesion model of cosmological dynamics as a basic reference. The structures of matter clusters and cosmic voids in cosmological N-body simulations or the Sloan Digital Sky Survey are elucidated by means of multifractal geometry. A non-lacunar multifractal geometry can encompass three fundamental descriptions of the cosmic structure, namely, the web structure, hierarchical clustering, and halo distributions. Furthermore, it explains our present knowledge of cosmic voids. In this way, a unified theory of the large-scale structure of the universe seems to emerge. The multifractal spectrum that we obtain significantly differs from that of the adhesion model and conforms better to the laws of gravity. The formation of the cosmic web is best modeled as a type of turbulent dynamics, generalizing the known methods of Burgers turbulence. Comment: 35 pages, 8 figures; corrected typos, added references; further discussion of cosmic voids; accepted by Advances in Astronomy
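
    The multifractal analysis referred to above can be sketched with a standard box-counting estimate of the generalised dimensions D_q from the partition function Z(q, eps) = sum_i p_i^q; the clustered point set below is a synthetic stand-in for an N-body or SDSS catalogue, and the box sizes are arbitrary.

    # Box-counting sketch: generalised dimensions D_q of a clustered point set.
    import numpy as np

    rng = np.random.default_rng(3)
    # Synthetic hierarchical (clustered) point set: clusters of clusters of points.
    centres = rng.random((40, 2))
    sub = (centres[:, None, :] + 0.03 * rng.normal(size=(40, 30, 2))).reshape(-1, 2)
    points = (sub[:, None, :] + 0.004 * rng.normal(size=(len(sub), 20, 2))).reshape(-1, 2)
    points = np.mod(points, 1.0)                      # keep everything in the unit square

    def generalised_dimension(points, q, box_sizes):
        """Estimate D_q as the slope of log Z(q, eps)/(q-1) versus log eps."""
        logs_eps, logs_z = [], []
        for eps in box_sizes:
            nbox = int(np.ceil(1.0 / eps))
            idx = np.floor(points / eps).astype(int)
            counts = np.bincount(idx[:, 0] * nbox + idx[:, 1])
            p = counts[counts > 0] / len(points)      # box occupation probabilities
            if q == 1:                                # information dimension (limit q -> 1)
                z = np.sum(p * np.log(p))
            else:
                z = np.log(np.sum(p ** q)) / (q - 1.0)
            logs_eps.append(np.log(eps))
            logs_z.append(z)
        slope, _ = np.polyfit(logs_eps, logs_z, 1)
        return slope

    box_sizes = np.geomspace(0.01, 0.2, 8)
    for q in (0, 1, 2):
        print(f"D_{q} ~ {generalised_dimension(points, q, box_sizes):.2f}")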