X-ray analysis of the accreting supermassive black hole in the radio galaxy PKS 2251+11
We investigate the dichotomy between jetted and non-jetted Active Galactic
Nuclei (AGNs), focusing on the fundamental differences of these two classes in
the accretion physics onto the central supermassive black hole (SMBH). Our aim
is to study and constrain the structure, kinematics and physical state of the
nuclear environment in the Broad Line Radio Galaxy (BLRG) PKS 2251+11. The high
X-ray luminosity and the relative proximity of this AGN make it an ideal candidate
for a detailed analysis of the accretion regions in radio galaxies. We
performed a spectral and timing analysis of a 64 ks observation of PKS
2251+11 in the X-ray band with XMM-Newton. We modeled the spectrum as an absorbed
power law superimposed on a reflection component. We performed a
time-resolved spectral analysis to search for variability of the X-ray flux and
of the individual spectral components. We found that the power law has a photon
index Γ and is absorbed by an ionized partial covering medium, characterized by
a column density N_H (in cm⁻²), an ionization parameter ξ (in erg s⁻¹ cm) and a
covering factor. Assuming for the absorber a density typical of the Broad Line
Region (BLR), its distance from the central SMBH, derived from the ionization
parameter, is of the order of the typical BLR distances. An Fe Kα emission line
is found at 6.4 keV, whose intensity varies on time scales of hours; from this
variability we constrained the distance of the reflecting material from the SMBH
in units of the Schwarzschild radius R_s. Concerning the X-ray properties, we
found that PKS 2251+11 does not differ significantly from non-jetted AGNs,
confirming the validity of the unified model in describing the inner regions
around the central SMBH; however, the lack of information on the state of the
very innermost disk and on the SMBH spin still leaves the origin of the jet
unconstrained.
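Both distance estimates rest on standard relations, sketched below for context (a reconstruction, since the abstract's numerical values are not shown; L_ion is the ionizing luminosity, n the absorber density, Δt the Fe Kα variability time scale, M the black-hole mass):

    \xi = \frac{L_{\rm ion}}{n r^2}
        \quad\Longrightarrow\quad
        r = \left( \frac{L_{\rm ion}}{n \xi} \right)^{1/2}
        \qquad \text{(absorber distance, for an assumed BLR-like density)}

    r_{\rm refl} \lesssim c \, \Delta t ,
        \qquad
        R_s = \frac{2GM}{c^2}
        \qquad \text{(light-crossing bound on the reflector, in units of } R_s\text{)}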
Data access layer optimization of the Gaia data processing in Barcelona for spatially arranged data
Gaia is an ambitious astrometric space mission adopted within the scientific programme
of the European Space Agency (ESA) in October 2000. It measures with very high
accuracy the positions and velocities of a large number of stars and astronomical objects.
At the end of the mission, a detailed three-dimensional map of more than one billion
stars will be obtained. The spacecraft is currently orbiting around the L2 Lagrangian
Point, 1.5 million kilometers from the Earth. It is providing a complete survey down to
the 20th magnitude. The two telescopes of Gaia will observe each object 85 times on
average during the 5 years of the mission, recording each time its brightness, color and,
most important, its position. This leads to an enormous quantity of complex, extremely
precise data, representing the multiple observations of a billion different objects by an
instrument that is spinning and precessing. The Gaia data challenge, processing raw
satellite telemetry to produce valuable science products, is a huge task in terms of
expertise, effort and computing power. To handle the reduction of the data, an iterative
process between several systems has been designed, each solving different aspects of the
mission.
The Data Processing and Analysis Consortium (DPAC), a large team of scientists and
software developers, is in charge of processing the Gaia data with the aim of producing
the Gaia Catalogue. It is organized in Coordination Units (CUs), responsible for science
and software development and validation, and Data Processing Centers (DPCs), which
actually operate and execute the software systems developed by the CUs. This project
has been developed within the frame of the Core Processing Unit (CU3) and the Data
Processing Center of Barcelona (DPCB).
One of the most important DPAC systems is the Intermediate Data Updating (IDU),
executed at the MareNostrum supercomputer hosted by the Barcelona Supercomputing
Center (BSC), which is the core of the DPCB hardware framework. It must reprocess,
once every few months, all raw data accumulated up to that moment, giving a higher coherence to the scientific results and correcting any possible errors or wrong approximations
from previous iterations. It has two main objectives: to refine the image
parameters from the astrometric images acquired by the instrument, and to refine the
Cross Match (XM) for all the detections. In particular, the XM will have to handle an
enormous number of detections by the end of the mission, so it will not be possible to
process them in a single run. Moreover, one should also consider some limitations
and constraints imposed by the features of the execution environment (the Marenostrum
supercomputer). Therefore, it is necessary to optimize the Data Access Layer (DAL) in
order to efficiently store the huge amount of data coming from the spacecraft, and to
access it in a smart manner. This is the main aim of this project. We have developed
and implemented an efficient and flexible file format based on Hierarchical Data Format
version 5 (HDF5), arranging the detections by a spatial index based on the Hierarchical
Equal Area isoLatitude Pixelization (HEALPix), which tessellates the sphere. In this way it is possible
to distribute and process the detections separately and in parallel, according to
their distribution on the sky. Moreover, the HEALPix library and the framework implemented
here allow the data to be considered at different resolution levels, according to the
desired precision. In this project we consider levels up to 12, that is, about 201 million
pixels on the sphere.
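As a quick check on the pixel count (a minimal Python sketch for illustration; the project code itself is written in Java and C):

    # HEALPix resolution: Nside = 2^level, Npix = 12 * Nside^2
    def healpix_npix(level: int) -> int:
        nside = 2 ** level
        return 12 * nside * nside

    print(healpix_npix(12))  # 201326592, i.e. about 201 million pixels at level 12
    print(healpix_npix(6))   # 49152 pixels at level 6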
Two different alternatives have been designed and developed, namely a Flat solution
and a Hierarchical solution, the names referring to how the data are distributed through
the file. In the first case, the whole dataset is contained inside a single group, whereas
the Hierarchical solution stores the data groups in a hierarchical way, following the
HEALPix hierarchy.
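A minimal sketch of the two layouts, using Python's h5py purely for illustration (the actual implementation follows the HDF5 C API through JNI, and the group names here are hypothetical):

    import h5py
    import numpy as np

    detections = np.zeros((10, 3))  # placeholder detection records
    pix, level = 123456, 12         # HEALPix pixel index in the NESTED scheme

    # Flat layout: every pixel dataset lives inside a single group.
    with h5py.File("flat.h5", "w") as f:
        f.create_dataset(f"detections/pix{pix:09d}", data=detections)

    # Hierarchical layout: nested groups follow the HEALPix hierarchy -- each
    # pixel at level N has four children at level N+1, so the base-4 digits of
    # the pixel index give the group path from the base pixel down.
    digits, p = [], pix
    for _ in range(level):
        digits.append(p % 4)   # child index (0..3) within the parent pixel
        p //= 4
    path = "/".join([str(p)] + [str(d) for d in reversed(digits)])  # p = base pixel (0..11)
    with h5py.File("hier.h5", "w") as f:
        f.create_dataset(path + "/data", data=detections)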
The Gaia DPAC software is implemented in Java, where the HDF5 Application Programming
Interface (API) support is quite limited. Thus, it has also been necessary
to use the Java Native Interface (JNI) to adapt the software developed in this project
(in C language), which follows the HDF5 C API. On the Java side, two main classes
have been implemented to read and write the data: FileHdf5Archiver and FileArchiveHdf5FileReader.
The Java part of this project has been integrated into an existing
operational software library, DpcbTools, in coordination with the Barcelona IDU/DPCB
team. This has allowed the work done in this project to be integrated into the existing
DAL architecture in the most efficient way.
Prior to testing the operational code, we first evaluated the time required to create the
whole empty structure of the file. This has been done with a simple program written in C
which, depending on the HEALPix level requested, creates the skeleton of the file; it has
been implemented for both alternatives previously mentioned. Up to HEALPix level 6
there is no relevant difference. From level 7 onwards the difference becomes more and
more important, especially from level 9, where the creation time becomes unmanageable
for the Flat solution. In any case, creating the whole file up front is not convenient in
the real use case. Therefore, in order to evaluate the most suitable alternative, we have
simply considered the Input/Output performance.
Finally, we have run the performance tests in order to evaluate how the two solutions
perform when actually dealing with data contents. The TAR and ZIP solutions have also
been tested, in order to compare and appraise the speedup and the efficiency of our two
new alternatives. The analysis of the results has been based on the time to write and read
data, the compression ratio, and the read/write rate. Moreover, the different alternatives
have been evaluated on two systems with different sets of data as input. The speedup
and the compression ratio improvement compared to the previously adopted solutions
are considerable for both HDF5-based alternatives, whereas the difference between the
two HDF5 alternatives themselves is small. The integration of one of these two solutions
will allow the Gaia IDU software to handle the data in a more efficient manner, increasing
the final I/O performance remarkably.
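For reference, the basic quantities behind this comparison (a trivial sketch with hypothetical numbers, for illustration only):

    def speedup(t_ref: float, t_new: float) -> float:
        return t_ref / t_new              # >1 means the new solution is faster

    def compression_ratio(raw_bytes: int, stored_bytes: int) -> float:
        return raw_bytes / stored_bytes   # >1 means the stored file is smaller

    print(speedup(120.0, 30.0))                       # e.g. 4.0x faster
    print(compression_ratio(10 * 2**30, 4 * 2**30))   # e.g. 2.5x smaller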
Predicting Students’ Financial Knowledge from Attitude towards Finance
Attitude towards finance and financial attitude are very different constructs. Despite the popularity of the latter, it has recently been subject to criticism. Following Di Martino & Zan (2010), the former explicitly considers emotions and beliefs (about self and finance) and the mutual relationship between them. At present, there is a paucity of evidence on how 'attitude toward finance' may impact financial knowledge: this is a new area of inquiry in the academic literature. Research is at a preliminary stage, although the jigsaw of financial literacy is receiving greater attention worldwide. This study measures individual attitudes towards finance and determines the effects of this profile on financial knowledge level. It uses about 500 economics students in Italy as sample respondents, with a structured questionnaire survey as the data collection method. The data are analysed using Structural Equation Modeling. A significant positive correlation is found between financial knowledge and attitude toward finance. The direction of causality is found to run from attitude toward finance to financial knowledge, suggesting that attitude toward finance can play an important role in financial education. Among the various dimensions of attitude toward finance, the emotional disposition towards finance and, secondly, the level of self-confidence are the most influential factors on economics students' financial knowledge level. Gender is also found to be closely correlated with both financial knowledge and attitude toward finance. The findings mainly point to the importance of attitude toward finance for financial knowledge. For policymakers, the results of this study could indicate new ways of tackling the financial illiteracy problem.
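The abstract does not give the model specification; as an illustrative sketch only, a structural equation model of this general shape could be written in Python with semopy (all variable names are hypothetical, not the study's actual items):

    import pandas as pd
    import semopy

    # Hypothetical model in lavaan-style syntax: latent attitude dimensions
    # measured by questionnaire items, predicting a financial knowledge score.
    desc = """
    Emotion =~ emo1 + emo2 + emo3
    SelfConfidence =~ conf1 + conf2 + conf3
    fin_knowledge ~ Emotion + SelfConfidence + gender
    """

    data = pd.read_csv("survey.csv")  # hypothetical questionnaire data
    model = semopy.Model(desc)
    model.fit(data)
    print(model.inspect())            # path coefficients and significance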
Synthetic Training Set Generation using Text-To-Audio Models for Environmental Sound Classification
In recent years, text-to-audio models have revolutionized the field of automatic audio generation. This paper investigates their application in generating synthetic datasets for training data-driven models. Specifically, this study analyzes the performance of two environmental sound classification systems trained with data generated from text-to-audio models. We considered three scenarios: a) augmenting the training dataset with data generated by text-to-audio models; b) using a mixed training dataset combining real and synthetic text-driven generated data; and c) using a training dataset composed entirely of synthetic audio. In all cases, the performance of the classification models was tested on real data. Results indicate that text-to-audio models are effective for dataset augmentation, with consistent performance when replacing a subset of the recorded dataset. However, the performance of the audio recognition models drops when relying entirely on generated audio.
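A minimal sketch of the three training-set compositions (file names and the replacement ratio are hypothetical):

    import random

    real = [f"real_{i}.wav" for i in range(1000)]   # recorded clips
    synth = [f"tta_{i}.wav" for i in range(1000)]   # text-to-audio generated clips

    # a) augmentation: all recorded data plus the generated data
    train_a = real + synth

    # b) mixed: replace a subset of the recorded clips with synthetic ones
    k = len(real) // 2
    train_b = random.sample(real, len(real) - k) + random.sample(synth, k)

    # c) fully synthetic training set
    train_c = synth

    # In all three cases, evaluation uses real recordings only.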
Sound event localization and detection based on CRNN using rectangular filters and channel rotation data augmentation
Sound Event Localization and Detection refers to the problem of identifying
the presence of independent or temporally overlapping sound sources, correctly
identifying the sound class to which each belongs, and estimating their spatial
directions while they are active. In recent years, neural networks have become
the prevailing method for the Sound Event Localization and Detection task, with
convolutional recurrent neural networks being among the most used systems.
This paper presents a system submitted to the Detection and Classification of
Acoustic Scenes and Events 2020 Challenge Task 3. The algorithm consists of a
convolutional recurrent neural network using rectangular filters, specialized
in recognizing significant spectral features related to the task. In order to
further improve the score and to generalize the system performance to unseen
data, the training dataset size has been increased using data augmentation. The
technique used for this is based on channel rotations and reflection on the xy
plane in the First Order Ambisonics domain, which allows new Direction of
Arrival labels to be generated while keeping the physical relationships between
channels. Evaluation results on the development dataset show that the proposed
system outperforms the baseline, considerably improving the Error Rate and
F-score for location-aware detection.
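A minimal sketch of this kind of First Order Ambisonics augmentation (a reconstruction of the general technique, not the submission's code; ACN channel order W, Y, Z, X is assumed):

    import numpy as np

    # foa: array of shape (4, n_samples) in ACN order [W, Y, Z, X]
    def rotate_z_90(foa, azimuth_deg):
        """Rotate the whole sound scene by +90 degrees around the z axis."""
        w, y, z, x = foa
        return np.stack([w, x, z, -y]), azimuth_deg + 90.0  # X' = -Y, Y' = X

    def reflect_xy(foa, elevation_deg):
        """Reflect the scene with respect to the xy plane (z -> -z)."""
        w, y, z, x = foa
        return np.stack([w, y, -z, x]), -elevation_deg      # Z changes sign

Combining the four 90-degree rotations with the reflection yields eight label-consistent variants of each recording, without any new measurements.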
A benchmark of state-of-the-art sound event detection systems evaluated on synthetic soundscapes
This paper proposes a benchmark of the submissions to the Detection and Classification of Acoustic Scenes and Events 2021 Challenge (DCASE) Task 4, representing a sampling of the state of the art in the Sound Event Detection task. The submissions are evaluated according to the two polyphonic sound detection score scenarios proposed for the DCASE 2021 Challenge Task 4, which allow an analysis of whether submissions are designed to perform fine-grained temporal segmentation, coarse-grained temporal segmentation, or to be polyvalent across the proposed scenarios. We study the solutions proposed by participants to analyze their robustness to varying target to non-target signal-to-noise ratios and to the temporal localization of target sound events. A final experiment studies the impact of non-target events on system outputs. Results show that systems adapted to provide coarse segmentation outputs are more robust to different target to non-target signal-to-noise ratios and, with the help of specific data augmentation methods, more robust to the time localization of the original event. The results of the final experiment show that systems tend to spuriously predict short events when non-target events are present; this is particularly true for systems tailored to produce a fine segmentation.
Room Transfer Function Reconstruction Using Complex-valued Neural Networks and Irregularly Distributed Microphones
Reconstructing the room transfer functions needed to calculate the complex
sound field in a room has several important real-world applications. However,
an impractical number of microphones is often required. Recently, in addition
to classical signal processing methods, deep learning techniques have been
applied to reconstruct the room transfer function starting from a very limited
set of measurements at scattered points in the room. In this paper, we employ
complex-valued neural networks to estimate room transfer functions in the
frequency range of the first room resonances, using a few irregularly
distributed microphones. To the best of our knowledge, this is the first time
that complex-valued neural networks are used to estimate room transfer
functions. To analyze the benefits of applying complex-valued optimization to
the considered task, we compare the proposed technique with a state-of-the-art
kernel-based signal processing approach for sound field reconstruction, showing
that the proposed technique exhibits relevant advantages in terms of phase
accuracy and overall quality of the reconstructed sound field. For informative
purposes, we also compare the model with a similarly structured data-driven
approach that, however, applies a real-valued neural network to reconstruct
only the magnitude of the sound field.
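For illustration, a minimal sketch of the complex-valued building block involved (NumPy, forward pass only; not the paper's architecture): a dense layer with complex weights followed by a CReLU-style activation that rectifies the real and imaginary parts independently.

    import numpy as np

    rng = np.random.default_rng(0)

    def complex_dense(x, w, b):
        """Affine layer with complex-valued input, weights and bias."""
        return x @ w + b

    def crelu(z):
        """CReLU: ReLU applied separately to real and imaginary parts."""
        return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

    # toy forward pass: 8 complex microphone measurements -> 16 hidden units
    x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    w = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))
    b = np.zeros(16, dtype=complex)
    h = crelu(complex_dense(x, w, b))

Keeping the computation complex end to end preserves the coupling between magnitude and phase, which is the motivation for the reported gains in phase accuracy.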
Description and analysis of novelties introduced in DCASE Task 4 2022 on the baseline system
The aim of the Detection and Classification of Acoustic Scenes and Events
Challenge Task 4 is to evaluate systems for the detection of sound events in
domestic environments using a heterogeneous dataset. The systems need to be
able to correctly detect the sound events present in a recorded audio clip, as
well as localize the events in time. This year's task is a follow-up of DCASE
2021 Task 4, with some important novelties. The goal of this paper is to
describe and motivate these new additions, and report an analysis of their
impact on the baseline system. We introduced three main novelties: the use of
external datasets, including recently released strongly annotated clips from
AudioSet; the possibility of leveraging pre-trained models; and a new energy
consumption metric to raise awareness about the ecological impact of training
sound event detectors. The results on the baseline system show that leveraging
open-source models pre-trained on AudioSet improves the results significantly
in terms of event classification but not in terms of event segmentation.
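The abstract does not detail how the energy metric is computed; one plausible minimal sketch (hypothetical numbers) normalizes a system's training energy by the baseline's energy measured on the same hardware:

    def kwh(mean_power_watts: float, hours: float) -> float:
        """Rough training energy estimate from mean power draw and duration."""
        return mean_power_watts * hours / 1000.0

    system = kwh(mean_power_watts=300.0, hours=12.0)    # 3.6 kWh (hypothetical)
    baseline = kwh(mean_power_watts=300.0, hours=8.0)   # 2.4 kWh (hypothetical)
    print(system / baseline)  # 1.5 -> the system costs 1.5x the baseline energy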
The impact of non-target events in synthetic soundscapes for sound event detection
The Detection and Classification of Acoustic Scenes and Events Challenge 2021 Task 4 uses a heterogeneous dataset that includes both recorded and synthetic soundscapes. Until recently, only target sound events were considered when synthesizing the soundscapes. However, recorded soundscapes often contain a substantial amount of non-target events that may affect the performance. In this paper, we focus on the impact of these non-target events in the synthetic soundscapes. Firstly, we investigate to what extent using non-target events during only the training phase, only the validation phase, or neither of them helps the system to correctly detect target events. Secondly, we analyze to what extent adjusting the signal-to-noise ratio between target and non-target events at training time improves the sound event detection performance. The results show that using both target and non-target events for only one of the phases (validation or training) helps the system to properly detect sound events, outperforming the baseline (which uses non-target events in both phases). The paper also reports the results of a preliminary study on evaluating the system on clips that contain only non-target events. This opens questions for future work on the non-target subset and on the acoustic similarity between target and non-target events, which might confuse the system.
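A minimal sketch of the SNR adjustment described above (illustrative; the non-target event is scaled against the target event before mixing):

    import numpy as np

    def rms(x):
        return np.sqrt(np.mean(x ** 2))

    def mix_at_snr(target, non_target, snr_db):
        """Scale non_target so the target-to-non-target SNR equals snr_db."""
        gain = rms(target) / (rms(non_target) * 10 ** (snr_db / 20.0))
        return target + gain * non_target

    rng = np.random.default_rng(0)
    t = rng.standard_normal(16000)   # 1 s target event at 16 kHz (toy signal)
    n = rng.standard_normal(16000)   # 1 s non-target event (toy signal)
    mixture = mix_at_snr(t, n, snr_db=6.0)  # target sits 6 dB above non-target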
