Data access layer optimization of the Gaia data processing in Barcelona for spatially arranged data
Gaia is an ambitious astrometric space mission adopted within the scientific programme
of the European Space Agency (ESA) in October 2000. It measures with very high
accuracy the positions and velocities of a large number of stars and astronomical objects.
At the end of the mission, a detailed three-dimensional map of more than one billion
stars will be obtained. The spacecraft is currently orbiting around the L2 Lagrangian
Point, 1.5 million kilometers from the Earth. It is providing a complete survey down to
the 20th magnitude. The two telescopes of Gaia will observe each object 85 times on
average during the 5 years of the mission, recording each time its brightness, color and,
most important, its position. This leads to an enormous quantity of complex, extremely
precise data, representing the multiple observations of a billion different objects by an
instrument that is spinning and precessing. The Gaia data challenge, processing raw
satellite telemetry to produce valuable science products, is a huge task in terms of
expertise, effort and computing power. To handle the reduction of the data, an iterative
process between several systems has been designed, each solving different aspects of the
mission.
The Data Analysis and Processing Consortium (DPAC), a large team of scientists and
software developers, is in charge of processing the Gaia data with the aim of producing
the Gaia Catalogue. It is organized into Coordination Units (CUs), responsible for science
and software development and validation, and Data Processing Centers (DPCs), which
actually operate and execute the software systems developed by the CUs. This project
has been developed within the frame of the Core Processing Unit (CU3) and the Data
Processing Center of Barcelona (DPCB).
One of the most important DPAC systems is the Intermediate Data Updating (IDU),
executed on the MareNostrum supercomputer hosted by the Barcelona Supercomputing
Center (BSC), which is the core of the DPCB hardware framework. It must reprocess,
once every few months, all raw data accumulated up to that moment, giving a higher coherence to the scientific results and correcting any possible errors or wrong approximations
from previous iterations. It has two main objectives: to refine the image
parameters from the astrometric images acquired by the instrument, and to refine the
Cross Match (XM) for all the detections. In particular, the XM will have to handle an enormous
number of detections by the end of the mission, so it will clearly not be possible to
process them in a single run. Moreover, one should also consider some limitations
and constraints imposed by the features of the execution environment (the MareNostrum
supercomputer). Therefore, it is necessary to optimize the Data Access Layer (DAL) in
order to efficiently store the huge amount of data coming from the spacecraft, and to
access it in a smart manner. This is the main scope of this project. We have developed
and implemented an efficient and flexible file format based on Hierarchical Data Format
version 5 (HDF5), arranging the detections by a spatial index such as Hierarchical Equal
Area isoLatitude Pixelization (HEALPix) to tessellate the sphere. In this way it is possible
to distribute and process the detections separately and in parallel, according to
their distribution on the sky. Moreover, the HEALPix library and the framework implemented
here allow considering the data at different resolution levels, according to the
desired precision. In this project we consider up to level 12, that is, about 201 million pixels
on the sphere.
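The pixel count quoted above follows directly from the HEALPix construction: the sphere is split into 12 base pixels, each subdivided into 4 children per level. A minimal sketch of this arithmetic (the class name is ours, purely illustrative):

```java
// HEALPix tessellates the sphere into 12 base pixels, each subdivided into
// 4 children per level, so a level-k grid has Npix = 12 * 4^k equal-area pixels.
public class HealpixLevels {
    /** Pixels on the sphere at HEALPix level k (Nside = 2^k). */
    static long npix(int level) {
        long nside = 1L << level;   // Nside doubles at every level
        return 12L * nside * nside; // Npix = 12 * Nside^2 = 12 * 4^k
    }

    public static void main(String[] args) {
        // Level 12 gives the ~201 million pixels mentioned in the text.
        System.out.println(npix(12)); // prints 201326592
    }
}
```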
Two different alternatives have been designed and developed, namely, a Flat solution
and a Hierarchical solution; the names refer to how the data is distributed within the file.
In the first case, the whole dataset is contained inside a single group, whereas the
Hierarchical solution stores the groups of data in a hierarchical way, following the
HEALPix hierarchy.
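To make the two layouts concrete, the following sketch builds the HDF5 group path a pixel's data could live under in each alternative, exploiting the nested HEALPix scheme in which the parent of a pixel is its index shifted right by two bits. Group and method names here are hypothetical, not the actual DpcbTools layout:

```java
// Flat layout: every pixel group hangs directly from one root group.
// Hierarchical layout: one group per ancestor pixel, mirroring the HEALPix
// nesting (parent index = child index >> 2 in the nested scheme).
public class LayoutPaths {
    static String flatPath(int level, long pixel) {
        return "/level" + level + "/pix" + pixel;
    }

    static String hierarchicalPath(int level, long pixel) {
        StringBuilder path = new StringBuilder();
        for (int l = 0; l <= level; l++) {
            // Ancestor of `pixel` at level l: drop two bits per level climbed.
            path.append("/pix").append(pixel >> (2 * (level - l)));
        }
        return path.toString();
    }

    public static void main(String[] args) {
        System.out.println(flatPath(3, 421));         // /level3/pix421
        System.out.println(hierarchicalPath(3, 421)); // /pix6/pix26/pix105/pix421
    }
}
```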
The Gaia DPAC software is implemented in Java, where HDF5 Application Programming
Interface (API) support is quite limited. Thus, it has also been necessary
to use the Java Native Interface (JNI) to bridge to the software developed in this project
in C, which follows the HDF5 C API. On the Java side, two main classes
have been implemented to read and write the data: FileHdf5Archiver and FileArchiveHdf5FileReader.
The Java part of this project has been integrated into an existing
operational software library, DpcbTools, in coordination with the Barcelona IDU/DPCB
team. This has made it possible to fit the work done in this project into the existing DAL
architecture in the most efficient way.
Prior to testing the operational code, we first evaluated the time required
to create the whole empty structure of the file. This has been done with a simple
program written in C which, depending on the HEALPix level requested, creates the
skeleton of the file, and it has been implemented for both alternatives previously mentioned.
Up to HEALPix level 6 there is no relevant difference. From level 7 onwards the difference becomes more and more important, especially from level
9, where the creation time becomes unmanageable for the Flat solution. In any case, creating
the whole file upfront is not convenient in the real use case. Therefore, in order to evaluate the
most suitable alternative, we have focused on the Input/Output performance.
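A plausible arithmetic illustration of why the Flat skeleton degrades so quickly (assuming, as a working hypothesis rather than a measurement from this project, that the dominant cost scales with the number of links held by a single HDF5 group):

```java
// Links per group in each skeleton: the Flat file keeps all 12 * 4^k pixel
// groups under a single root, while the Hierarchical file never stores more
// than 4 children per group (12 at the root), at the price of more groups.
public class SkeletonSize {
    /** Links the Flat root group must hold at HEALPix level k. */
    static long linksUnderFlatRoot(int level) {
        return 12L * (1L << (2 * level)); // 12 * 4^level
    }

    /** Total number of groups in the Hierarchical skeleton (levels 0..k). */
    static long totalHierarchicalGroups(int level) {
        long total = 0;
        for (int l = 0; l <= level; l++) total += 12L * (1L << (2 * l));
        return total; // geometric sum: 12 * (4^(level+1) - 1) / 3
    }

    public static void main(String[] args) {
        // At level 9 the Flat root already holds over 3 million links.
        System.out.println(linksUnderFlatRoot(9));
        System.out.println(totalHierarchicalGroups(9));
    }
}
```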
Finally, we have run the performance tests in order to evaluate how the two solutions
perform when actually dealing with data contents. The TAR and ZIP solutions have also
been tested in order to compare and appraise the speedup and the efficiency of our
two new alternatives. The analysis of the results has been based on the time to write and read
data, the compression ratio and the read/write rate. Moreover, the different alternatives
have been evaluated on two systems with different sets of data as input. The speedup
and the compression ratio improvement over the previously adopted solutions
are considerable for both HDF5-based alternatives, whereas the difference between the
two alternatives themselves is much smaller. The integration of one of these two solutions will allow the Gaia
IDU software to handle the data in a more efficient manner, remarkably increasing the final I/O
performance.
X-ray analysis of the accreting supermassive black hole in the radio galaxy PKS 2251+11
We investigate the dichotomy between jetted and non-jetted Active Galactic
Nuclei (AGNs), focusing on the fundamental differences of these two classes in
the accretion physics onto the central supermassive black hole (SMBH). Our aim
is to study and constrain the structure, kinematics and physical state of the
nuclear environment in the Broad Line Radio Galaxy (BLRG) PKS 2251+11. The high
X-ray luminosity and the relative proximity make this AGN an ideal candidate
for a detailed analysis of the accretion regions in radio galaxies. We
performed a spectral and timing analysis of a 64 ks observation of PKS
2251+11 in the X-ray band with XMM-Newton. We modeled the spectrum considering
an absorbed power law superimposed on a reflection component. We performed a
time-resolved spectral analysis to search for variability of the X-ray flux and
of the individual spectral components. We found that the power law, with photon
index Γ, is absorbed by an ionized partial covering medium with
a column density N_H (in cm⁻²), an ionization
parameter ξ (in erg s⁻¹ cm) and a covering factor
C_f. Considering a density of the absorber typical of the Broad Line
Region (BLR), its distance from the central SMBH is of the order of
the sub-pc BLR scale. An Fe Kα emission line is found at 6.4 keV, whose intensity shows
variability on time scales of hours. We derived the distance of the
reflecting material in units of the Schwarzschild
radius. Concerning the X-ray properties, we found that PKS 2251+11 does not
differ significantly from the non-jetted AGNs, confirming the validity of the
unified model in describing the inner regions around the central SMBH; however, the
lack of information regarding the state of the very innermost disk and the SMBH
spin still leaves the origin of the jet unconstrained.
Predicting Students’ Financial Knowledge from Attitude towards Finance
Attitude towards finance and financial attitude are very different constructs. Despite the popularity of the latter, it has recently been subject to criticism. Following Di Martino & Zan (2010), the former explicitly considers emotions and beliefs (about self and finance) and the mutual relationship between them. At present, there is a paucity of evidence on how ‘attitude toward finance’ may impact financial knowledge: this is a new area of inquiry in academic literature. Research is at a preliminary stage, although the jigsaw of financial literacy is receiving greater attention worldwide. This study measures individual attitudes towards finance and determines the effects of this profile on financial knowledge level. It uses about 500 economics students in Italy as sample respondents. It is based on a structured questionnaire survey as a data collection method. The data is analysed using Structural Equation Modeling. A significant positive correlation is found between financial knowledge and attitude toward finance. The direction of causality is found to be from attitude toward finance to financial knowledge, and this finding suggests that attitude toward finance can play an important role in financial education. Among the various dimensions of attitude toward finance, emotional disposition towards finance, and secondly, the self-confidence level, are the most influential factors on economics students’ financial knowledge level. Gender is also found to be closely correlated to both financial knowledge and attitude toward finance. Findings mainly suggest the importance of attitude toward finance on financial knowledge. For policymakers, the results of this study could indicate new ways of solving the financial illiteracy problem.
Sound event localization and detection based on CRNN using rectangular filters and channel rotation data augmentation
Sound Event Localization and Detection refers to the problem of identifying
the presence of independent or temporally-overlapped sound sources, correctly
identifying the sound class to which each of them belongs, and estimating their spatial
directions while they are active. In recent years, neural networks have
become the prevailing method for the Sound Event Localization and Detection task,
with convolutional recurrent neural networks being among the most used systems.
This paper presents a system submitted to the Detection and Classification of
Acoustic Scenes and Events 2020 Challenge Task 3. The algorithm consists of a
convolutional recurrent neural network using rectangular filters, specialized
in recognizing significant spectral features related to the task. In order to
further improve the score and to generalize the system performance to unseen
data, the training dataset size has been increased using data augmentation. The
technique used for that is based on channel rotations and reflection on the xy
plane in the First Order Ambisonics domain, which allows transforming the Direction of
Arrival labels consistently while keeping the physical relationships between channels. Evaluation
results on the development dataset show that the proposed system outperforms
the baseline, considerably improving Error Rate and F-score for
location-aware detection.
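The channel transformations described above can be sketched as follows. This is a minimal illustration under the assumption of ACN channel ordering {W, Y, Z, X}, not the authors' code: a 90-degree scene rotation about the z axis swaps the two horizontal dipoles with one sign flip (shifting the azimuth label by 90 degrees), and a reflection on the xy plane flips the Z channel (flipping the sign of the elevation label).

```java
// First Order Ambisonics augmentation: a +90-degree rotation about z maps
// the horizontal dipoles as X' = -Y, Y' = X, while a reflection on the xy
// plane only changes the sign of the Z channel.
public class FoaAugment {
    /** Rotate the scene by +90 degrees around z; channels in ACN order {W,Y,Z,X}. */
    static double[][] rotate90(double[][] foa) {
        int n = foa[0].length;
        double[] yNew = new double[n], xNew = new double[n];
        for (int i = 0; i < n; i++) {
            xNew[i] = -foa[1][i]; // X' = -Y, since cos(az+90) = -sin(az)
            yNew[i] =  foa[3][i]; // Y' =  X, since sin(az+90) =  cos(az)
        }
        return new double[][] { foa[0], yNew, foa[2], xNew };
    }

    /** Reflect the scene on the xy plane: only the Z channel changes sign. */
    static double[][] reflectXY(double[][] foa) {
        int n = foa[2].length;
        double[] zNew = new double[n];
        for (int i = 0; i < n; i++) zNew[i] = -foa[2][i];
        return new double[][] { foa[0], foa[1], zNew, foa[3] };
    }

    public static void main(String[] args) {
        // A unit source at azimuth 90, elevation 0: W=1, Y=1, Z=0, X=0.
        double[][] foa = { {1.0}, {1.0}, {0.0}, {0.0} };
        double[][] rot = rotate90(foa); // now at azimuth 180: Y=0, X=-1
        System.out.println(rot[1][0] + " " + rot[3][0]); // prints 0.0 -1.0
    }
}
```

Composing the rotation four times returns the original channels, which is why such transforms enlarge the training set without breaking the physical consistency between signals and labels.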
A benchmark of state-of-the-art sound event detection systems evaluated on synthetic soundscapes
This paper proposes a benchmark of submissions to the Detection and Classification of Acoustic Scenes and Events 2021 Challenge (DCASE) Task 4, representing a sampling of the state-of-the-art in the Sound Event Detection task. The submissions are evaluated according to the two polyphonic sound detection score scenarios proposed for the DCASE 2021 Challenge Task 4, which allow analysing whether submissions are designed to perform fine-grained temporal segmentation, coarse-grained temporal segmentation, or have been designed to be polyvalent across the proposed scenarios. We study the solutions proposed by participants to analyze their robustness to a varying target to non-target signal-to-noise ratio and to the temporal localization of target sound events. A last experiment is proposed in order to study the impact of non-target events on systems' outputs. Results show that systems adapted to provide coarse segmentation outputs are more robust to different target to non-target signal-to-noise ratios and, with the help of specific data augmentation methods, they are more robust to the time localization of the original event. Results of the last experiment show that systems tend to spuriously predict short events when non-target events are present. This is particularly true for systems that are tailored to provide a fine segmentation.
The impact of non-target events in synthetic soundscapes for sound event detection
The Detection and Classification of Acoustic Scenes and Events Challenge 2021 Task 4 uses a heterogeneous dataset that includes both recorded and synthetic soundscapes. Until recently, only target sound events were considered when synthesizing the soundscapes. However, recorded soundscapes often contain a substantial amount of non-target events that may affect the performance. In this paper, we focus on the impact of these non-target events in the synthetic soundscapes. Firstly, we investigate to what extent using non-target events alternatively during the training or validation phase (or in neither of them) helps the system to correctly detect target events. Secondly, we analyze to what extent adjusting the signal-to-noise ratio between target and non-target events at training improves the sound event detection performance. The results show that using both target and non-target events for only one of the phases (validation or training) helps the system to properly detect sound events, outperforming the baseline (which uses non-target events in both phases). The paper also reports the results of a preliminary study on evaluating the system on clips that contain only non-target events. This opens questions for future work on the non-target subset and on the acoustic similarity between target and non-target events, which might confuse the system.
Description and analysis of novelties introduced in DCASE Task 4 2022 on the baseline system
The aim of the Detection and Classification of Acoustic Scenes and Events
Challenge Task 4 is to evaluate systems for the detection of sound events in
domestic environments using a heterogeneous dataset. The systems need to be
able to correctly detect the sound events present in a recorded audio clip, as
well as localize the events in time. This year's task is a follow-up of DCASE
2021 Task 4, with some important novelties. The goal of this paper is to
describe and motivate these new additions, and report an analysis of their
impact on the baseline system. We introduced three main novelties: the use of
external datasets, including recently released strongly annotated clips from
AudioSet, the possibility of leveraging pre-trained models, and a new energy
consumption metric to raise awareness about the ecological impact of training
sound event detectors. The results on the baseline system show that leveraging
open-source models pre-trained on AudioSet improves the results significantly in terms
of event classification, but not in terms of event segmentation.
Lunar Gravitational-Wave Antenna
Monitoring of vibrational eigenmodes of an elastic body excited by
gravitational waves was one of the first concepts proposed for the detection of
gravitational waves. At laboratory scale, these experiments became known as
resonant-bar detectors first developed by Joseph Weber in the 1960s. Due to the
dimensions of these bars, the targeted signal frequencies were in the kHz
range. Weber also pointed out that monitoring of vibrations of Earth or Moon
could reveal gravitational waves in the mHz band. His Lunar Surface Gravimeter
experiment deployed on the Moon by the Apollo 17 crew had a technical failure
rendering the data useless. In this article, we revisit the idea and propose a
Lunar Gravitational-Wave Antenna (LGWA). We find that LGWA could become an
important partner observatory for joint observations with the space-borne,
laser-interferometric detector LISA, and at the same time contribute an
independent science case due to LGWA's unique features. Technical challenges
need to be overcome for the deployment of the experiment, and development of
inertial vibration sensor technology lays out a future path for this exciting
detector concept.
Science with the Einstein Telescope: a comparison of different designs
The Einstein Telescope (ET), the European project for a third-generation
gravitational-wave detector, has a reference configuration based on a
triangular shape consisting of three nested detectors with 10 km arms, where in
each arm there is a `xylophone' configuration made of an interferometer tuned
toward high frequencies, and an interferometer tuned toward low frequencies and
working at cryogenic temperature. Here, we examine the scientific perspectives
under possible variations of this reference design. We perform a detailed
evaluation of the science case for a single triangular geometry observatory,
and we compare it with the results obtained for a network of two L-shaped
detectors (either parallel or misaligned) located in Europe, considering
different choices of arm-length for both the triangle and the 2L geometries. We
also study how the science output changes in the absence of the low-frequency
instrument, both for the triangle and the 2L configurations. We examine a broad
class of simple `metrics' that quantify the science output, related to compact
binary coalescences, multi-messenger astronomy and stochastic backgrounds, and
we then examine the impact of different detector designs on a more specific set
of scientific objectives.
Prevalence, associated factors and outcomes of pressure injuries in adult intensive care unit patients: the DecubICUs study
Funders: European Society of Intensive Care Medicine (doi: http://dx.doi.org/10.13039/501100013347); Flemish Society for Critical Care Nurses. Purpose: Intensive care unit (ICU) patients are particularly susceptible to developing pressure injuries. Epidemiologic data are, however, unavailable. We aimed to provide an international picture of the extent of pressure injuries and of the factors associated with ICU-acquired pressure injuries in adult ICU patients. Methods: International 1-day point-prevalence study; follow-up for outcome assessment until hospital discharge (maximum 12 weeks). Factors associated with ICU-acquired pressure injury and hospital mortality were assessed by generalised linear mixed-effects regression analysis. Results: Data from 13,254 patients in 1117 ICUs (90 countries) revealed 6747 pressure injuries; 3997 (59.2%) were ICU-acquired. Overall prevalence was 26.6% (95% confidence interval [CI] 25.9–27.3). ICU-acquired prevalence was 16.2% (95% CI 15.6–16.8). Sacrum (37%) and heels (19.5%) were most affected. Factors independently associated with ICU-acquired pressure injuries were older age, male sex, being underweight, emergency surgery, higher Simplified Acute Physiology Score II, lower Braden score, ICU stay longer than 3 days, comorbidities (chronic obstructive pulmonary disease, immunodeficiency), organ support (renal replacement, mechanical ventilation on ICU admission), and being in a low or lower-middle income economy. Gradually increasing associations with mortality were identified for increasing severity of pressure injury: stage I (odds ratio [OR] 1.5; 95% CI 1.2–1.8), stage II (OR 1.6; 95% CI 1.4–1.9), and stage III or worse (OR 2.8; 95% CI 2.3–3.3). Conclusion: Pressure injuries are common in adult ICU patients. ICU-acquired pressure injuries are associated with mainly intrinsic factors and with mortality.
Optimal care standards, increased awareness, appropriate resource allocation, and further research into optimal prevention are pivotal to tackle this important patient safety threat.