The persistent cosmic web and its filamentary structure II: Illustrations
The recently introduced discrete persistent structure extractor (DisPerSE,
Sousbie 2010, paper I) is implemented on realistic 3D cosmological simulations
and observed redshift catalogues (SDSS); it is found that DisPerSE traces
equally well the observed filaments, walls, and voids in both cases. In either
setting, filaments are shown to connect onto halos, outskirt walls, which
circumvent voids. Indeed this algorithm operates directly on the particles
without assuming anything about the distribution, and yields a natural
(topologically motivated) self-consistent criterion for selecting the
significance level of the identified structures. It is shown that this
extraction is possible even for very sparsely sampled point processes, as a
function of the persistence ratio. Hence astrophysicists should be in a
position to trace and measure precisely the filaments, walls and voids from
such samples and assess the confidence of the post-processed sets as a function
of this threshold, which can be expressed relative to the expected amplitude of
shot noise. In a cosmic framework, this criterion is comparable to friend of
friend for the identifications of peaks, while it also identifies the connected
filaments and walls, and quantitatively recovers the full set of topological
invariants (Betti numbers) directly from the particles as a function of
the persistence threshold. This criterion is found to be sufficient even if one
particle out of two is noise, when the persistence ratio is set to 3-sigma or
more. The algorithm is also implemented on the SDSS catalogue and used to locate
interesting configurations of the filamentary structure. In this context we
carried out the identification of an "optically faint" cluster at the
intersection of filaments through the recent observation of its X-ray
counterpart by SUZAKU. The corresponding filament catalogue will be made
available online.
Comment: A higher resolution version is available at http://www.iap.fr/users/sousbie together with complementary material (movie and data). Submitted to MNRAS.
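The persistence selection described above can be illustrated with a toy example. The sketch below is not DisPerSE itself (which operates on the full 3D Morse-Smale complex of the density field); it is a minimal 0-dimensional persistence computation on a 1D sampled density, assuming a union-find pairing of maxima with their merge saddles and a hypothetical noise threshold in place of the shot-noise-calibrated persistence ratio.

```python
def superlevel_persistence(values):
    """0-dim persistence of superlevel sets of a 1D signal.

    Returns (birth, death) pairs; local maxima are born at their own
    height and die when their component merges into an older (higher)
    peak. Non-maxima yield zero-persistence pairs, which any positive
    threshold filters out. The global maximum is closed at min(values).
    """
    n = len(values)
    order = sorted(range(n), key=lambda i: -values[i])
    parent = [-1] * n          # union-find; -1 = not yet activated
    birth = {}                 # component root -> birth value
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in order:            # sweep down through density levels
        parent[i] = i
        birth[i] = values[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and parent[j] != -1:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # the younger (lower-birth) component dies here
                    old, young = (ri, rj) if birth[ri] >= birth[rj] else (rj, ri)
                    pairs.append((birth[young], values[i]))
                    parent[young] = old
    for r in {find(i) for i in range(n)}:   # surviving global maximum
        pairs.append((birth[r], min(values)))
    return pairs

signal = [0.1, 2.0, 0.3, 1.2, 0.2, 3.0, 0.1]
pairs = superlevel_persistence(signal)
# keep only peaks whose persistence exceeds an (assumed) noise level
significant = [(b, d) for b, d in pairs if b - d > 1.0]
```

With the threshold at 1.0, only the two prominent peaks survive; the small bump at height 1.2 is discarded as noise, which is the same logic the paper applies with a threshold expressed in units of the expected shot-noise amplitude.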
Exploring the impact of detection physics in X-ray CCD imagers and spectrometers
This thesis is concerned with exploring the way in which the physics of the detection process affects the quality of a CCD-based X-ray detector system. The physical processes which lead to the final images and spectra achieved with a CCD-based camera system are investigated through a combination of simulations and experimental techniques, with the aim of improving the detector performance and allowing future detectors to be designed with optimal characteristics. Techniques developed throughout the study and the results of the simulations have wide-ranging impacts on the areas concerned. The study is split into two main sections, the first regarding a high-resolution, high-energy, photon-counting X/γ-ray camera. In medical imaging, X-rays and gamma-rays are often used for the purposes of diagnostic imaging. In many synchrotron-based research programmes, such as protein crystallography and X-ray diffraction imaging, X-rays are used, once again, for imaging purposes. In both cases, a high-resolution detector with a high frame-rate is required such that images can be taken with a spatial resolution of the order of micrometers to tens of micrometers. If one is able to distinguish the energy of the incident X-rays and gamma-rays (with energies of 20-200 keV), then these spectral capabilities add to the functionality of the detector, allowing, for example, the removal of fluorescence X-rays. Chapter 2 reviews the relevant detector physics and theory before providing a critical review of current gamma-cameras. Chapter 3 outlines the feasibility study for the scintillator-coupled EM-CCD, detailing the development of a new energy discrimination methodology. Also described is the development of a full system simulation which can be used to troubleshoot problems found when calibrating and optimising the device. Chapter 4 details the characterisation and optimisation of the detector, making use of the aforementioned simulations where appropriate.
Chapter 5 presents the results of the study, showing how the resolution can be dramatically improved and how energy discrimination can be implemented. The second section of the thesis regards instrument background. The use of CCDs for space-borne X-ray detection in scientific satellites is widespread. While in orbit, the CCDs are subjected to an incident flux of high-energy particles. These particles may be detected, both as the primaries themselves and by means of secondaries produced in the detector shielding, and will produce a background level formed by components indistinguishable from the X-rays that the mission was designed to detect. Chapter 6 presents an introduction to the theory behind the instrument background experienced by CCD-based detector systems in orbit. A simulation has been developed which is in very good agreement with data received from the spacecraft, described in Chapter 7. Finally, Chapter 8 summarises the outcomes of these studies and provides insight into future work which will further aid the improvement of gamma-cameras for medical imaging and synchrotron-based research and will allow future CCD-based camera systems to be designed for increased sensitivity in orbit.
Development of a Low Cost Autopilot System for Unmanned Aerial Vehicles
The purpose of this thesis was to develop a low-cost autonomous flight control system for small unmanned aerial vehicles, with the aim of supporting collaborative systems. A low-cost hardware solution was achieved by careful selection of sensors, integration of hardware subsystems, and the use of new microcontroller technologies. Flight control algorithms to guide a vehicle through waypoint-based flight paths and loiter about a point were implemented using direction fields. A hardware-in-the-loop simulator was developed to ensure proper operation of all hardware and software components prior to flight testing. The resulting flight control system achieved stable and accurate flight while reducing the total system cost to less than $250.
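The abstract does not give the direction-field equations, but the common vector-field guidance formulation conveys the idea: the commanded heading is read off a field that converges onto the desired path or loiter orbit. In this sketch the function names and the gains `k` and `chi_inf` are assumptions for illustration, not the thesis's actual implementation.

```python
import math

def loiter_heading(x, y, cx, cy, radius, k=1.0):
    """Direction-field heading for loitering counter-clockwise about
    (cx, cy): far from the orbit the field points inward; on the orbit
    it is purely tangential."""
    d = math.hypot(x - cx, y - cy)        # distance from loiter center
    phi = math.atan2(y - cy, x - cx)      # bearing from center to vehicle
    # tangent direction plus a correction that grows with radial error
    return phi + math.pi / 2 + math.atan(k * (d - radius))

def waypoint_heading(x, y, wx, wy, px, py, chi_inf=math.pi / 2, k=0.05):
    """Direction-field heading for following the straight path from
    (px, py) to the waypoint (wx, wy): blend the path direction with a
    correction proportional to cross-track error."""
    chi_path = math.atan2(wy - py, wx - px)
    # signed cross-track error (positive to the left of the path)
    e = math.cos(chi_path) * (y - py) - math.sin(chi_path) * (x - px)
    return chi_path - chi_inf * (2 / math.pi) * math.atan(k * e)
```

On the loiter circle the command reduces to the tangent direction, and on the path centerline it reduces to the path heading, so the field is smooth and self-correcting, which is what makes it attractive for a small, cheap autopilot.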
Exploration of fault tolerance in Apache Spark
This thesis provides an exploration of two techniques for solving fault tolerance for batch processing in Apache Spark. We evaluate the benefits and challenges of these approaches.
Apache Spark is a cluster computing system comprised of three main components: the driver program, the cluster manager, and the worker nodes. Spark already tolerates the loss of worker nodes, and external tools already provide fault tolerance for the cluster manager; for example, a cluster manager deployed with Apache Mesos gains fault tolerance from Mesos. Spark does not, however, support driver fault tolerance for batch processing. The driver program stores critical state of the running job and maintains oversight of the workers; failure of the driver program always results in loss of all oversight over the worker nodes and is equivalent to catastrophic failure of the entire Spark application.
In this thesis, we explore two approaches to achieving fault tolerance in Apache Spark for batch processing, enabling reliable execution of long-running critical jobs and consistent performance while supporting high uptime. The first approach serializes the critical state of the driver program and relays that state to passive standby processes; upon failure, this state is loaded by a secondary driver and computation is resumed. The second approach narrows the scope of the problem and synchronizes block information between the primary and secondary drivers so that the locations of cached aggregated data are not lost after a primary driver failure; losing these locations leads to a state from which computation cannot be resumed. Both approaches propose considerable changes to the Apache Spark architecture in order to support high availability of batch processing jobs.
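The first approach can be sketched generically as checkpoint-and-resume. The class and field names below are hypothetical illustrations, not Spark's actual driver internals; the point is the shape of the mechanism: the primary periodically serializes its critical state, and a standby loads the latest checkpoint and continues.

```python
import os
import pickle
import tempfile

class DriverCheckpoint:
    """Minimal sketch of driver-state checkpointing: the primary
    serializes critical job state; a standby loads it on failover."""

    def __init__(self, path):
        self.path = path

    def save(self, state):
        # write to a temp file, then atomically rename, so a crash
        # mid-write never leaves a corrupt checkpoint behind
        tmp = self.path + ".tmp"
        with open(tmp, "wb") as f:
            pickle.dump(state, f)
        os.replace(tmp, self.path)

    def load(self):
        with open(self.path, "rb") as f:
            return pickle.load(f)

# primary driver: record which stages of the job have completed
ckpt = DriverCheckpoint(os.path.join(tempfile.gettempdir(), "driver.ckpt"))
ckpt.save({"job_id": "batch-42", "completed_stages": [0, 1, 2]})

# standby driver: recover the state and resume from the next stage
state = ckpt.load()
next_stage = max(state["completed_stages"]) + 1
```

The second approach would replace the full state dictionary with just the block-location metadata, trading recovery generality for a much smaller and cheaper synchronization payload.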
A systematic review of the literature on methods and technologies for teaching parallel and distributed computing in universities
There is a growing demand for software developers who have experience writing parallel programs rather than just "parallelizing" sequential systems as computer hardware gets more and more parallel. In order to develop the skills of future software engineers, it is crucial to teach students parallelism in introductory computer science courses. We searched the Scopus database for articles on "teaching parallel and distributed computing" and "parallel programming," published in English between 2008 and 2019. 26 papers were included in the study after quality review. The review found that a lab course using the C++ programming language and the MPI library serves as the primary teaching tool for parallel and distributed computing.
A search for muon neutrinos coincident with Gamma-ray Bursts with the IceCube 59-String detector
Gamma-Ray Bursts (GRBs) are believed to be prime candidates for producing the cosmic ray flux above 10^18 eV. Cosmic rays are deflected by galactic and intergalactic magnetic fields and do not point back to their source, so cosmic ray observations cannot confirm or rule out GRBs as a source. Leading theories predict that if GRBs are indeed responsible for the highest-energy cosmic rays, then they would produce a detectable TeV-scale neutrino flux in a km^3-sized neutrino detector. Neutrinos are not deflected by magnetic fields and point back to their source, making it possible to correlate a neutrino flux with its source. The detection of a neutrino flux from GRBs would be strong evidence that GRBs are a source of the highest-energy cosmic rays.
IceCube is the first km^3-sized neutrino detector in the world and is therefore sensitive to the predicted TeV neutrino flux from GRBs. The finished detector consists of 5160 light-sensitive Digital Optical Modules (DOMs) arranged on 86 strings, with 60 DOMs on a single string deployed at depths between 1450 and 2450 meters below the surface. The first IceCube string was deployed during the South Pole summer of 2004-2005, with construction of the detector finishing during the austral summer of 2011. The results presented here are from the 59-string detector, which operated from May 2009 to May 2010. IceCube detects charged particles moving through its instrumented volume near the speed of light via the Cherenkov light those particles give off. Muon and anti-muon neutrinos produce secondary muons when they interact with a nucleon; if this interaction happens in or near the instrumented volume, IceCube can detect those secondary muons. By searching for a neutrino signal coincident in time and space with satellite-detected gamma rays from GRBs, the analysis presented here pushes the sensitivity for neutrinos from GRBs to 0.46 times the theoretically predicted neutrino flux. The result is combined with the previous search, and a combined 90% upper limit of 0.22 times the theoretically predicted flux is set. The implications of this stringent limit on the model are discussed and future IceCube sensitivities are outlined.
IceCube is the largest neutrino detector in the world and, with this result, has entered the era of neutrino astrophysics by constraining long-standing astrophysical neutrino production models.
Identification and evolution of quantities of interest for a stochastic process view of complex space system development
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 115-116).
The objective of stochastic process design is to strategically identify, measure, and reduce sources of uncertainty to guide the development of complex systems. Fundamental to this design approach is the idea that system development is driven by measurable characteristics called quantities of interest. These quantities of interest collectively describe the state of system development and evolve as the system matures. This thesis provides context for the contributions of quantities of interest to a stochastic process view of complex system development using three space hardware development projects. The CASTOR satellite provides the opportunity for retrospective identification of quantities of interest and their evolution through time. As a complement to CASTOR, the preliminary design of the REXIS x-ray spectrometer provides the foundation for applying stochastic process approaches during the early phases of system development. Lastly, a spacecraft panel structural dynamics experiment is presented that illustrates analysis techniques commonly employed in stochastic process analysis.
by George Ralph Sondecker, IV. S.M.