Pushing 1D CCSNe to explosions: model and SN 1987A
We report on a method, PUSH, for triggering core-collapse supernova
explosions of massive stars in spherical symmetry. We explore basic explosion
properties and calibrate PUSH such that the observables of SN1987A are
reproduced. Our simulations are based on the general relativistic hydrodynamics
code AGILE combined with the detailed neutrino transport scheme IDSA for
electron neutrinos and ALS for the muon and tau neutrinos. To trigger
explosions in the otherwise non-exploding simulations, we rely on the
neutrino-driven mechanism. The PUSH method locally increases the energy
deposition in the gain region through energy deposition by the heavy neutrino
flavors. Our setup allows us to model the explosion for several seconds after
core bounce. We explore progenitors in the 18-21 M☉ range. Our studies
reveal a distinction between high-compactness (HC) and low-compactness (LC)
progenitor models, where LC models tend to explode earlier, with a lower
explosion energy and a lower remnant mass. HC models are needed to obtain
explosion energies around 1 Bethe, as observed for SN1987A. However, all the
models with sufficiently high explosion energy overproduce 56Ni. We
conclude that fallback is needed to reproduce the observed nucleosynthesis
yields. The nucleosynthesis yields of 56Ni depend sensitively on the
electron fraction and on the location of the mass cut with respect to the
initial shell structure of the progenitor star. We identify a progenitor and a
suitable set of PUSH parameters that fit the explosion properties of SN1987A
when assuming 0.1 M☉ of fallback. We predict a neutron star with a
gravitational mass of 1.50 M☉. We find correlations between explosion
properties and the compactness of the progenitor model in the explored
progenitors. However, a more complete analysis will require the exploration of
a larger set of progenitors with PUSH.
Comment: revised version as accepted by ApJ (results unchanged, text modified for clarification, a few references added); 26 pages, 20 figures
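The compactness referred to above is commonly defined, following O'Connor & Ott, as the ratio of an enclosed mass to the radius containing it. A minimal sketch, assuming that standard definition (the abstract itself does not state which enclosed mass is used, so the 1.75 M☉ example below is an assumption):

```python
# Illustrative sketch, not code from the paper. The progenitor "compactness"
# is commonly defined as xi_M = (M / Msun) / (R(M) / 1000 km), evaluated at
# a chosen enclosed baryonic mass M (often 1.75 or 2.0 Msun).

def compactness(enclosed_mass_msun: float, radius_km_at_mass: float) -> float:
    """xi_M = (M / Msun) / (R(M) / 1000 km)."""
    return enclosed_mass_msun / (radius_km_at_mass / 1000.0)

# Example: a progenitor whose innermost 1.75 Msun lies within 1750 km
# has xi_1.75 = 1.0; larger xi means a more compact core.
xi = compactness(1.75, 1750.0)
```

Higher-compactness progenitors concentrate more mass at small radii, which in the abstract's terminology distinguishes the HC from the LC models.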
Temporal frame interpolation with pixel-based refined motion estimation and halo-effect reduction
In this work, after a review of the state of the art, a new temporal frame interpolation with halo reduction is proposed. First, for standard-definition television, a motion estimation with pixel resolution is proposed. The estimation is performed by block matching and is followed by a pixel-based refinement that considers neighbouring motion vectors. The halo reduction, carried out with a sliding window of adaptive shape, does not require explicit detection of occlusion regions. Then, for high-definition television, in order to reduce complexity, the pixel-resolution motion estimation as well as the halo reduction are generalized within a hierarchical decomposition. The proposed final interpolation is generic and depends on both the position of the frame and the reliability of the estimation. Several post-processing steps to improve image quality are also suggested. The proposed algorithm, integrated into an ASIC in a contemporary integrated-circuit technology, runs in real time
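The block-matching stage mentioned above can be sketched as follows. This is an illustrative toy implementation, not the thesis code: the block size, search range, and SAD (sum of absolute differences) criterion are assumptions chosen for the example.

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Exhaustive block-matching motion estimation: for each block of the
    current frame, find the offset (dy, dx) into the previous frame that
    minimizes the sum of absolute differences (SAD)."""
    h, w = curr.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr[by:by + block, bx:bx + block].astype(int)
            best, best_sad = (0, 0), float("inf")
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    # Skip candidate positions that fall outside the frame.
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = prev[y:y + block, x:x + block].astype(int)
                        sad = np.abs(target - cand).sum()
                        if sad < best_sad:
                            best, best_sad = (dy, dx), sad
            vectors[(by, bx)] = best
    return vectors
```

In the approach described above, this block-level field would then be refined to pixel resolution by considering the motion vectors of neighbouring blocks; that refinement and the halo reduction are omitted here.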
Performance Comparison of Dual Connectivity and Hard Handover for LTE-5G Tight Integration in mmWave Cellular Networks
MmWave communications are expected to play a major role in the Fifth
generation of mobile networks. They offer a potential multi-gigabit throughput
and an ultra-low radio latency, but at the same time suffer from high isotropic
pathloss, and a coverage area much smaller than the one of LTE macrocells. In
order to address these issues, highly directional beamforming and a very
high-density deployment of mmWave base stations were proposed. This Thesis aims
to improve the reliability and performance of the 5G network by studying its
tight and seamless integration with the current LTE cellular network. In
particular, the LTE base stations can provide a coverage layer for 5G mobile
terminals, because they operate on microWave frequencies, which are less
sensitive to blockage and have a lower pathloss. This document is a copy of the
Master's Thesis carried out by Mr. Michele Polese under the supervision of Dr.
Marco Mezzavilla and Prof. Michele Zorzi. It will propose an LTE-5G tight
integration architecture, based on mobile terminals' dual connectivity to LTE
and 5G radio access networks, and will evaluate which are the new network
procedures that will be needed to support it. Moreover, this new architecture
will be implemented in the ns-3 simulator, and a thorough simulation campaign
will be conducted in order to evaluate its performance, with respect to the
baseline of handover between LTE and 5G.
Comment: Master's Thesis carried out by Mr. Michele Polese under the supervision of Dr. Marco Mezzavilla and Prof. Michele Zorzi
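The core idea of dual connectivity, as opposed to a hard handover, can be sketched as a per-interval link selection: the terminal keeps both links and steers traffic to mmWave while its signal quality is adequate, falling back to the LTE coverage layer otherwise. This is an illustrative toy model, not the thesis architecture or the ns-3 implementation, and the threshold value is an arbitrary assumption.

```python
# Hypothetical switching threshold (dB); an assumption for illustration only.
MMWAVE_SINR_THRESHOLD_DB = -5.0

def select_link(mmwave_sinr_db: float, lte_available: bool = True) -> str:
    """Pick the serving link for the next scheduling interval.

    With dual connectivity, the fallback to LTE needs no handover
    signalling: both links are already established."""
    if mmwave_sinr_db >= MMWAVE_SINR_THRESHOLD_DB:
        return "mmwave"
    return "lte" if lte_available else "mmwave"
```

For example, `select_link(10.0)` keeps the terminal on mmWave, while `select_link(-20.0)` steers traffic to LTE, e.g. during a blockage event.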
Multimedia
The ubiquitous and effortless digital data capture and processing capabilities offered by the majority of today's devices lead to an unprecedented penetration of multimedia content in our everyday life. To make the most of this phenomenon, the rapidly increasing volume and usage of digitised content require constant re-evaluation and adaptation of multimedia methodologies, in order to meet the relentless change of requirements from both the user and system perspectives. Advances in Multimedia provides readers with an overview of the ever-growing field of multimedia by bringing together various research studies and surveys from different subfields that point out such important aspects. Some of the main topics that this book deals with include: multimedia management in peer-to-peer structures and wireless networks, security characteristics in multimedia, semantic-gap bridging for multimedia content, and novel multimedia applications
Study and development of innovative strategies for energy-efficient cross-layer design of digital VLSI systems based on Approximate Computing
The increasing demand on requirements for high performance and energy efficiency in modern digital systems has led to the research of new design approaches that are able to go beyond the established energy-performance tradeoff. Looking at scientific literature, the Approximate Computing paradigm has been particularly prolific. Many applications in the domain of signal processing, multimedia, computer vision, machine learning are known to be particularly resilient to errors occurring on their input data and during computation, producing outputs that, although degraded, are still largely acceptable from the point of view of quality. The Approximate Computing design paradigm leverages the characteristics of this group of applications to develop circuits, architectures, algorithms that, by relaxing design constraints, perform their computations in an approximate or inexact manner reducing energy consumption. This PhD research aims to explore the design of hardware/software architectures based on Approximate Computing techniques, filling the gap in literature regarding effective applicability and deriving a systematic methodology to characterize its benefits and tradeoffs. 
The main contributions of this work are:
- the introduction of approximate memory management inside the Linux OS, allowing dynamic allocation and de-allocation of approximate memory at user level, as for normal exact memory;
- the development of an emulation environment for platforms with approximate memory units, where faults are injected during the simulation based on models that reproduce the effects on memory cells of circuital and architectural techniques for approximate memories;
- the implementation and analysis of the impact of approximate memory hardware on real applications: the H.264 video encoder, internally modified to allocate selected data buffers in approximate memory, and signal processing applications (digital filters) using approximate memory for input/output buffers and tap registers;
- the development of a fully reconfigurable and combinatorial floating-point unit, which can work with reduced-precision formats
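The fault-injection idea behind the emulation environment described above can be sketched as follows. This is an illustrative toy model, not the thesis code: a uniform per-bit error rate stands in for the circuit-level models, and the function name and interface are assumptions for the example.

```python
import random

def inject_faults(buf: bytearray, bit_error_rate: float, seed=None) -> bytearray:
    """Emulate an approximate memory region by flipping each bit of `buf`
    independently with probability `bit_error_rate` (a toy uniform model;
    real models would reflect specific circuit techniques)."""
    rng = random.Random(seed)
    for i in range(len(buf)):
        for bit in range(8):
            if rng.random() < bit_error_rate:
                buf[i] ^= 1 << bit
    return buf
```

An application buffer allocated "approximately" would pass through such a model during simulation; error-resilient data (e.g. pixel or filter-tap values) tolerates the resulting bit flips with bounded quality loss.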
Perceptual models for high-refresh-rate rendering
Rendering realistic images requires substantial computational power. With new high-refresh-rate displays as well as the renaissance of virtual reality (VR) and augmented reality (AR), one cannot expect that GPU performance will scale fast enough to meet the requirements of immersive photo-realistic rendering with current rendering techniques.
In this dissertation, I follow the dual of the well-known computer vision maxim that vision is inverse graphics: to improve graphics algorithms, I consider the operation of the human visual system. I propose to model and exploit the limitations of the visual system in the context of novel high-refresh-rate displays; specifically, I focus on spatio-temporal perception, a topic that has so far received remarkably less attention than spatial-only perception.
I present three main contributions. First, I demonstrate the validity of the perceptual approach by presenting a conceptually simple rendering technique, motivated by our eyes' limited sensitivity to high spatio-temporal change, which reduces the rendering load and transmission requirement of current-generation VR headsets without introducing perceivable visual artefacts. Second, I present two visual models related to motion perception: (a) a metric for detecting flicker; and (b) a comprehensive visual model to predict perceived motion quality on monitors with arbitrary refresh rates and resolutions. Third, I propose an adaptive rendering algorithm that utilises the proposed models. All algorithms operate on physical colorimetric units (instead of display-referenced pixel values), for which I provide the appropriate display measurements and models. All proposed algorithms and visual models are calibrated and validated with psychophysical experiments