
    Testing the methods to reconstruct and model the Baryonic Acoustic Oscillations of different tracers using N-body simulations

    The accelerated expansion of the Universe and the nature of dark energy are still open questions in cosmology. One of the most powerful ways to investigate these issues is to map the large-scale structure of the Universe, constraining its expansion history and the growth of structures. In particular, baryon acoustic oscillations (BAO), which occurred at recombination, imprint a peak in the galaxy correlation function at the characteristic scale of the sound horizon (a scale large enough to “protect” the signal from strong non-linearities), or equivalently a series of oscillations in the power spectrum. Since the sound horizon can be estimated with great precision from the position of the first peak in the angular power spectrum of the Cosmic Microwave Background (which has the same physical origin as the BAO: oscillations of the baryon-photon plasma), the BAO peak in the correlation function can be used as a standard ruler, providing paramount cosmological information. The aim of this thesis is to systematically test, and possibly improve, the state-of-the-art statistical methods for modelling the BAO peak, taking into account the non-linear evolution of matter overdensities, redshift-space distortions and the bias of cosmic tracers. To do so, we analyse mock samples of galaxies, quasars and galaxy clusters extracted from one of the largest available cosmological hydrodynamical simulations. We extract cosmological constraints from the BAO peak through different statistical tools in the redshift range 0.2 < z < 2. Although the BAO peak lies at large scales, non-linear growth and galaxy peculiar velocities smooth and broaden the BAO signal with respect to linear predictions, especially at low redshifts. A possible way to overcome these issues is the so-called reconstruction of the density field: one of the primary goals of this work is to implement a reconstruction method and to assess its performance as a function of sample selection and redshift
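As an illustration of the standard-ruler idea described in this abstract, the sketch below builds a toy correlation function consisting of a smooth power law plus a Gaussian BAO bump, and then locates the peak scale in the conventional r²ξ(r) weighting. All functional forms and parameter values here (including the ~105 Mpc/h sound-horizon scale and the search window) are illustrative assumptions, not the thesis's actual model or pipeline.

```python
import numpy as np

# Hypothetical sound-horizon scale in Mpc/h; real analyses calibrate this
# against the CMB acoustic peaks rather than fixing it by hand.
R_BAO = 105.0

def toy_correlation_function(r, r_bao=R_BAO, width=10.0, amp=0.002):
    """Toy two-point correlation function: a smooth power-law decline
    plus a Gaussian BAO bump at the sound-horizon scale."""
    smooth = 1e-2 * (r / 50.0) ** -1.8
    bump = amp * np.exp(-0.5 * ((r - r_bao) / width) ** 2)
    return smooth + bump

def locate_bao_peak(r, xi):
    """Estimate the peak scale from the maximum of r^2 * xi(r),
    the weighting commonly used to make the bump visible above
    the smooth component."""
    weighted = r ** 2 * xi
    # restrict to large scales where the bump dominates the power law
    mask = (r > 70.0) & (r < 140.0)
    return r[mask][np.argmax(weighted[mask])]

r = np.linspace(20.0, 160.0, 1401)
xi = toy_correlation_function(r)
peak = locate_bao_peak(r, xi)
```

In this toy setup the recovered peak sits close to the input sound-horizon scale; non-linear growth and redshift-space distortions, which the thesis models in detail, broaden and slightly shift the real feature.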

    Asteroid hazard mitigation: deflection models and mission analysis

    Small celestial bodies such as Near Earth Objects (NEOs) have become a common subject of study because of their importance in uncovering the mysteries of the composition, formation and evolution of the solar system. Among all asteroids, NEOs have stepped into prominence for two reasons: they are among the easiest celestial bodies to reach from Earth, in some cases via less demanding trajectories than a simple Earth-Moon transfer, and, even more significantly, they may pose a threat to our planet. The purpose of this thesis is to provide a comprehensive insight into the asteroid hazard problem and, in particular, into its mitigation. Six different concepts are fully described; specifically, models for the nuclear interceptor, kinetic impactor, low-thrust propulsion, mass driver, solar collector and gravity tug are developed, and their efficiency is assessed for a complete set of different types of hazardous celestial objects. A multi-criteria optimisation is then used to construct a set of Pareto-optimal asteroid deflection missions. Pareto-optimality is here achieved not only by maximising the deflection of the threatening object, but also by minimising the total mass of the deflection mission at launch and the warning time required to deflect the asteroid. A dominance criterion is also defined and used to compare the Pareto sets of all the mitigation strategies, and the Technology Readiness Level of each strategy is accounted for in the comparison. Finally, this thesis shows that impulsive deflection methods may easily disrupt an asteroid catastrophically if the energy required for a deflection exceeds a certain threshold. A statistical model is presented to approximate the number and size of the fragments and their initial velocity dispersion; it is then used to assess the potential risk to Earth posed by the fragmentation of an asteroid as a possible outcome of a hazard mitigation mission
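The notion of Pareto-optimality over the three objectives named in the abstract (maximise deflection, minimise launch mass, minimise warning time) can be sketched in a few lines. The `Mission` fields and the example values below are hypothetical; only the dominance definition itself is the standard one.

```python
from dataclasses import dataclass

@dataclass
class Mission:
    """Hypothetical deflection mission, characterised by the three
    objectives used in the thesis: achieved deflection (maximise),
    total mass at launch (minimise), warning time (minimise)."""
    deflection: float    # e.g. shift at closest approach, km
    launch_mass: float   # kg
    warning_time: float  # years

def dominates(a: Mission, b: Mission) -> bool:
    """a dominates b if a is no worse in every objective and
    strictly better in at least one."""
    no_worse = (a.deflection >= b.deflection and
                a.launch_mass <= b.launch_mass and
                a.warning_time <= b.warning_time)
    strictly_better = (a.deflection > b.deflection or
                       a.launch_mass < b.launch_mass or
                       a.warning_time < b.warning_time)
    return no_worse and strictly_better

def pareto_front(missions):
    """Keep only the missions not dominated by any other mission."""
    return [m for m in missions
            if not any(dominates(other, m) for other in missions)]

front = pareto_front([Mission(10.0, 5000.0, 5.0),
                      Mission(8.0, 6000.0, 6.0),
                      Mission(12.0, 4000.0, 4.0)])
```

Here the third mission deflects more with less mass and less warning time, so it dominates the other two and is the only member of the front. Comparing fronts across mitigation strategies, as the thesis does with its dominance criterion, then reduces to checking how one strategy's front dominates another's.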

    Advances in Grid Computing

    This book approaches grid computing from the perspective of the latest achievements in the field, providing an insight into current research trends and advances and presenting a broad range of innovative research papers. The topics covered include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence, genetic algorithms and quantum encryption are considered in order to address the two main aspects of grid computing: resource management and data management. The book also covers aspects of grid computing related to architecture and development, and includes a diverse range of grid-computing applications, such as a possible human grid computing system, the simulation of fusion reactions, ubiquitous healthcare service provisioning and complex water systems

    Love and Rayleigh waves in the microseismic noise field


    Towards AMR Simulations of Galaxy Formation

    Numerical simulations are a fundamental building block of our modern theoretical understanding of the Universe. As such, the work in this thesis is primarily concerned with understanding the fundamental differences between hydrodynamic schemes. In chapter 3 I outline the optimisations I make to the FLASH code to enable larger simulations to be run, including developing and testing a new FFT gravity solver. With these complete, in chapter 4 I present results from a collaborative code comparison project in which we test a series of different hydrodynamics codes against a suite of demanding test problems with astrophysical relevance. As the problems have known solutions, we can quantify the codes' performance and develop a resolution criterion that allows the two types of scheme to be reliably compared. In chapter 5 we develop an analytic model for the ram pressure stripping of the hot gaseous haloes of galaxies in groups and clusters. We test the model against a suite of hydrodynamic simulations with the SPH code GADGET-2. To ensure that the spurious suppression of hydrodynamic instabilities by SPH codes does not bias our results, I compare our findings to those obtained with the FLASH AMR code and find excellent agreement. Chapter 6 presents work in which we unambiguously determine the origin of the difference between the entropy cores formed in AMR and SPH codes. By running mergers of model clusters we systematically explore the various proposed mechanisms and determine that turbulent mixing generates the higher entropy cores in AMR codes but is suppressed in SPH codes. The startling differences between the two hydrodynamic schemes presented in chapter 6 lead me to investigate their effect on different sub-grid physical recipes. In chapter 7 I present the implementation of a sub-grid star formation recipe in FLASH and find strong differences in the way the two codes model pressure laws. I extend this work in chapter 8 by implementing a kinetic supernova feedback mechanism in FLASH and contrasting it with the results from the GADGET-2 code, finding that AMR codes dissipate energy much more efficiently than SPH codes
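The physics behind the ram pressure stripping work mentioned for chapter 5 can be illustrated with the classic Gunn & Gott (1972) condition: gas is removed where the ram pressure of the ambient medium exceeds the gravitational restoring force per unit area. Note that this is the standard disc-stripping criterion shown only for illustration; the thesis develops its own analytic model for the stripping of hot gaseous haloes, which differs in detail.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def is_stripped(rho_icm, v, sigma_star, sigma_gas, g=G):
    """Gunn & Gott criterion: gas at a given disc radius is stripped if
        rho_icm * v^2  >  2 * pi * G * sigma_star * sigma_gas,
    where rho_icm is the ambient gas density (kg/m^3), v the orbital
    speed through it (m/s), and sigma_star / sigma_gas the local stellar
    and gas surface densities (kg/m^2)."""
    ram_pressure = rho_icm * v ** 2
    restoring_force_per_area = 2.0 * math.pi * g * sigma_star * sigma_gas
    return ram_pressure > restoring_force_per_area
```

With ICM densities and surface densities typical of cluster galaxies, the criterion predicts stripping of diffuse outer gas while the dense inner disc survives, which is the qualitative behaviour the simulations in the thesis are used to test.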

    Models of rotationally symmetric, collision-dominated debris discs

    This work deals with models of the size distribution and spatial distribution of the material in rotationally symmetric circumstellar debris discs around main-sequence stars. These discs, which are considered remnants of the formation of planetary systems, are ensembles of objects ranging from sub-micron-sized dust to planetesimals with diameters of up to hundreds of kilometres. Mutual collisions and the ejection of very small dust grains by stellar radiation pressure lead to a steady decay of otherwise unperturbed debris discs. The models used are a numerical implementation of the kinetic theory of statistical physics, together with analytic approximations intended for verification and interpretation. Exemplified by the debris disc around Vega, the expected wavy size distribution in the dust regime is confirmed, and the production and loss rates are determined for the unbound micro-meteoroids that are ejected from the system by stellar radiation pressure. It is concluded that the high abundance of those unbound grains proposed elsewhere is incompatible with the numerical results presented here and with more fundamental considerations. A general conclusion is drawn concerning the radial distribution of dust produced by a planetesimal belt: it is dominated by barely bound grains on highly eccentric orbits.
The long-term evolution of a debris disc is shown to be dominated by the slow transition of the population of planetesimals from the size distribution set in the planet formation and growth phase to the steady-state size distribution defined by disruptive collisions. This transition is directly relevant to the temporal evolution of the observable dust masses and luminosities, and indirectly to the deduced total disc masses. The developed models are compatible with observational statistics. From numerics and analytics, scaling laws are derived for the dependence of the collisional timescales on the disc mass, the radial distance to the star, and the planetesimals' orbital eccentricities
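The scaling laws mentioned at the end of the abstract have the general form of a power law in disc mass, belt radius and mean eccentricity. The sketch below encodes that form. The inverse dependence on disc mass follows directly from the collision rate being proportional to the number density of colliders; the radial and eccentricity exponents used here are placeholder assumptions, not the values derived in the thesis.

```python
def collisional_timescale(m_disc, radius, ecc,
                          t0=1.0, m0=1.0, r0=100.0, e0=0.1,
                          alpha=-1.0, beta=4.3, gamma=-1.0):
    """Generic power-law scaling of the collisional timescale:
        t_c = t0 * (M/m0)^alpha * (r/r0)^beta * (e/e0)^gamma
    alpha = -1 reflects the collision rate scaling with number density;
    beta and gamma are illustrative placeholders (a steep positive radial
    exponent and a negative eccentricity exponent), with t0, m0, r0, e0
    setting arbitrary normalisation units."""
    return (t0 * (m_disc / m0) ** alpha
               * (radius / r0) ** beta
               * (ecc / e0) ** gamma)
```

With this form, doubling the disc mass halves the collisional lifetime, while moving a belt outwards lengthens it steeply, which is the qualitative behaviour such scaling laws are used to capture.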

    Aerospace Medicine and Biology: A continuing bibliography with indexes (supplement 258)

    This bibliography lists 308 reports, articles and other documents introduced into the NASA scientific and technical information system in April 1984

    Intelligent Sensor Networks

    In the last decade, wireless or wired sensor networks have attracted much attention. However, most designs target general sensor network issues including protocol stack (routing, MAC, etc.) and security issues. This book focuses on the close integration of sensing, networking, and smart signal processing via machine learning. Based on their world-class research, the authors present the fundamentals of intelligent sensor networks. They cover sensing and sampling, distributed signal processing, and intelligent signal learning. In addition, they present cutting-edge research results from leading experts

    Human perception capabilities for socially intelligent domestic service robots

    Daily living activities represent a continuous struggle for an increasing number of frail elderly people, as well as for their extended families. These people have difficulties coping at home alone but are still sufficiently fit not to need the round-the-clock care provided by a nursing home. Their struggle can be alleviated by the deployment of a mechanical helper in their home, i.e. a service robot that can execute a range of simple object manipulation tasks. Such a robotic application promises to extend the period of independent home living for elderly people while providing them with a better quality of life. However, despite recent technological advances in robotics, some challenges remain, mainly related to human factors. Arguably, the lack of consistently dependable human detection, localisation, position and pose tracking information, and the insufficiently refined processing of sensor information, make close-range physical interaction between a robot and a human a high-risk task. The work described in this thesis addresses these deficiencies in the processing of human-related information in today's service robots. This is achieved by proposing a new paradigm for the robot's situational awareness with regard to people, together with a collection of methods and techniques operating at the lower levels of the paradigm, i.e. the perception of new human information. The collection includes methods for obtaining and processing information about the presence, location and body pose of people. In addition to the availability of reliable human perception information, the integration between the separate levels of the paradigm is considered a critically important factor in achieving human-aware control of the robot.
Improving the cognition, judgement and decision-making links between the paradigm's layers enhances the robot's capability to engage in a natural and more meaningful interaction with people and, therefore, leads to a more enjoyable user experience. The proposed paradigm and methodology are thus envisioned to contribute to making the prolonged assisted living of elderly people at home a more feasible and realistic prospect. In particular, this thesis proposes a set of methods for human presence detection, localisation and body pose tracking that operate at the perception level of the paradigm. The problem of having only limited visibility of a person from the robot's on-board sensors is addressed by the proposed classifier fusion method, which combines information from several types of sensors. A method for improved real-time human body pose tracking is also investigated. Additionally, a method is proposed for estimating multiple human tracks from noisy detections, together with analysis of the computed tracks for cognition about the social interactions within a group, operating at the comprehension level of the robot's situational awareness paradigm. Finally, at the human-aware planning layer, a method is proposed that utilises the human-related information generated by the perception and comprehension layers to compute a minimally intrusive navigation path to a target person within a human group. This method demonstrates how the robot's improved human perception capabilities can, through its judgement activity, be utilised by the highest level of the paradigm, i.e. the decision-making layer, to achieve user-friendly human-robot interactions.
Overall, the research presented in this work, drawing on recent innovation in statistical learning, data fusion and optimisation methods, improves the overall situational awareness of the robot in regard to people with the main focus placed on human sensing capabilities of service robots. The improved overall situational awareness of the robot regarding people, as defined by the proposed paradigm, enables more meaningful human-robot interactions
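The idea of combining detection confidences from several sensor types, as the classifier fusion method above does, can be sketched with a simple weighted average in log-odds space. This is a generic fusion rule chosen for illustration; the function name, weighting scheme and clamping constant are assumptions, and the thesis's own classifier fusion method is a more elaborate technique.

```python
import math

def fuse_detections(probs, weights=None):
    """Fuse per-sensor human-detection probabilities into one confidence
    via a weighted average in log-odds space. `probs` are each sensor's
    detection probabilities in (0, 1); `weights` can encode per-sensor
    reliability (defaults to equal weights)."""
    if weights is None:
        weights = [1.0] * len(probs)
    eps = 1e-6

    def log_odds(p):
        p = max(min(p, 1.0 - eps), eps)  # clamp away from 0 and 1
        return math.log(p / (1.0 - p))

    fused = sum(w * log_odds(p)
                for p, w in zip(probs, weights)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-fused))  # back to a probability
```

For example, agreeing sensors reinforce each other (two 0.9 readings fuse to 0.9), while a confident camera detection combined with an uninformative laser reading yields an intermediate confidence, which is the behaviour wanted when a person is only partially visible to some sensors.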