A Dynamic Clustering and Resource Allocation Algorithm for Downlink CoMP Systems with Multiple Antenna UEs
Coordinated multi-point (CoMP) schemes have been widely studied in recent
years to tackle inter-cell interference. In practice, latency and
throughput constraints on the backhaul allow the organization of only small
clusters of base stations (BSs) where joint processing (JP) can be implemented.
In this work we focus on downlink CoMP-JP with multiple antenna user equipments
(UEs) and propose a novel dynamic clustering algorithm. The additional degrees
of freedom at the UE can be used to suppress residual interference through an
interference rejection combiner (IRC) and to enable multistream transmission.
In our proposal we first define a set of candidate clusters depending on
long-term channel conditions. Then, in each time block, we develop a resource
allocation scheme by jointly optimizing transmitter and receiver where: a)
within each candidate cluster a weighted sum rate is estimated and then b) a
set of clusters is scheduled in order to maximize the system weighted sum rate.
Numerical results show that much higher rates are achieved when UEs are
equipped with multiple antennas. Moreover, as this performance improvement is
mainly due to the IRC, the gain achieved by the proposed approach with respect
to the non-cooperative scheme decreases as the number of UE antennas increases.
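
To make the scheduling step concrete, here is a minimal Python sketch of stage b): given candidate clusters and the weighted sum rates estimated in stage a), disjoint clusters are selected so as to maximize the system weighted sum rate. The greedy selection rule and all names (schedule_clusters, the example rates) are illustrative assumptions, not the paper's actual scheduler.

```python
def schedule_clusters(candidates):
    """candidates: list of (bs_set, weighted_sum_rate) pairs."""
    # Consider the most valuable candidate clusters first.
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    used_bs, scheduled, total_rate = set(), [], 0.0
    for bs_set, rate in ranked:
        # A base station can serve only one cluster per time block.
        if used_bs.isdisjoint(bs_set):
            scheduled.append(bs_set)
            used_bs |= bs_set
            total_rate += rate
    return scheduled, total_rate

# Example: three candidate clusters over five BSs.
cands = [({0, 1}, 5.2), ({1, 2}, 4.8), ({3, 4}, 3.1)]
print(schedule_clusters(cands))  # -> ([{0, 1}, {3, 4}], ~8.3)
```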
A hybrid supervised/unsupervised machine learning approach to solar flare prediction
We introduce a hybrid approach to solar flare prediction, whereby a
supervised regularization method is used to realize feature importance and an
unsupervised clustering method is used to realize the binary flare/no-flare
decision. The approach is validated against NOAA SWPC data.
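
As an illustration of the two-stage pipeline, here is a minimal sketch assuming tabular active-region features X and binary flare labels y. The concrete estimators (an l1-penalized logistic regression for feature importance, k-means for the flare/no-flare decision) and the function hybrid_flare_predict are stand-ins for details the abstract leaves unspecified.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def hybrid_flare_predict(X, y, n_features=5):
    """X: (n_samples, n_feats) numpy array; y: binary numpy labels."""
    # Supervised regularization step: sparse weights rank the features.
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    lasso.fit(X, y)
    top = np.argsort(np.abs(lasso.coef_[0]))[::-1][:n_features]
    # Unsupervised step: two clusters yield the binary decision; labels
    # are used afterwards only to decide which cluster means "flare".
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[:, top])
    flare = int(np.argmax([y[km.labels_ == k].mean() for k in (0, 1)]))
    return (km.labels_ == flare).astype(int), top
```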
Expectation Maximization for Hard X-ray Count Modulation Profiles
This paper is concerned with the image reconstruction problem when the
measured data are solar hard X-ray modulation profiles obtained from the Reuven
Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Our goal is
to demonstrate that a statistical iterative method classically applied to the
image deconvolution problem is very effective when utilized for the analysis of
count modulation profiles in solar hard X-ray imaging based on Rotating
Modulation Collimators. The algorithm described in this paper solves the
maximum likelihood problem iteratively, encoding a positivity constraint
into the optimization scheme. The result is therefore a classical
Expectation Maximization method, this time applied not to an image
deconvolution problem but to image reconstruction from count modulation profiles. The
technical reason that makes our implementation particularly effective in this
application is the use of a very reliable stopping rule that regularizes the
solution while, at the same time, yielding a very satisfactory Cash statistic
(C-statistic). The method is applied both to reproduce synthetic flaring
configurations and to reconstruct images from experimental data corresponding
to three real events. In the second case, the performance of
Expectation Maximization, when compared to Pixon image reconstruction, shows
comparable accuracy with a notably reduced computational burden; compared to
CLEAN, it shows better fidelity to the measurements with comparable
computational cost. If optimally stopped, Expectation
Maximization represents a very reliable method for image reconstruction in the
RHESSI context when count modulation profiles are used as input data.
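
The core iteration can be sketched as follows: the classical ML-EM (Richardson-Lucy) multiplicative update for a Poisson model y ~ Poisson(Hx), which preserves positivity by construction, stopped once the C-statistic stabilizes. The response matrix H, the tolerance, and the exact form of the stopping rule are illustrative assumptions; the paper's rule may differ.

```python
import numpy as np

def cash_statistic(y, m):
    # Reduced Cash statistic between observed counts y and model counts m.
    m = np.clip(m, 1e-12, None)
    t = (m - y).astype(float)
    nz = y > 0
    t[nz] += y[nz] * np.log(y[nz] / m[nz])
    return 2.0 * t.mean()

def em_reconstruct(H, y, n_iter=500, tol=1e-4):
    """H: (n_counts, n_pixels) response matrix; y: measured counts."""
    x = np.ones(H.shape[1])                      # strictly positive start
    sens = np.clip(H.sum(axis=0), 1e-12, None)   # H^T 1 sensitivity term
    c_prev = np.inf
    for _ in range(n_iter):
        m = np.clip(H @ x, 1e-12, None)
        x *= (H.T @ (y / m)) / sens              # update keeps x >= 0
        c = cash_statistic(y, H @ x)
        # Stop once the C-statistic stabilizes: early stopping acts as
        # the regularization described in the abstract.
        if abs(c_prev - c) < tol * max(abs(c), 1.0):
            break
        c_prev = c
    return x
```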
A consistent and numerically efficient variable selection method for sparse Poisson regression with applications to learning and signal recovery
We propose an adaptive ℓ1-penalized estimator in the framework of Generalized Linear Models with identity link and Poisson
data, by taking advantage of a globally quadratic approximation of the Kullback-Leibler divergence. We prove that this
approximation is asymptotically unbiased and that the proposed estimator has the variable selection consistency property in
a deterministic matrix design framework. Moreover, we present a numerically efficient strategy for the computation of the
proposed estimator, making it suitable for the analysis of massive count datasets. We show with two numerical experiments
that the method can be applied both to statistical learning and signal recovery problems.
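
A minimal sketch of the construction, assuming counts y with identity link E[y] = X beta: the Kullback-Leibler divergence is replaced by a quadratic (weighted least squares) surrogate, and the adaptive penalty weights come from a preliminary unpenalized fit. Function names and the variance-based weighting are illustrative, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

def adaptive_l1_poisson(X, y, alpha=0.1):
    # Quadratic surrogate of the KL divergence: weight each squared
    # residual by an estimate of the inverse Poisson standard deviation.
    w = 1.0 / np.sqrt(np.clip(y, 1.0, None))
    Xw, yw = X * w[:, None], y * w
    # Preliminary estimate gives the adaptive penalty weights.
    beta0, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    adapt = 1.0 / np.clip(np.abs(beta0), 1e-6, None)
    # Adaptive lasso as an ordinary lasso on rescaled columns;
    # positive=True keeps the fitted Poisson intensities nonnegative.
    Xs = Xw / adapt[None, :]
    fit = Lasso(alpha=alpha, fit_intercept=False, positive=True).fit(Xs, yw)
    return fit.coef_ / adapt
```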
Bad and good errors: value-weighted skill scores in deep ensemble learning
In this paper we propose a novel approach to realize forecast verification.
Specifically, we introduce a strategy for assessing the severity of forecast
errors based on the observation that, on the one hand, a false alarm just
anticipating an occurring event is better than one in the middle of consecutive
non-occurring events and that, on the other hand, a miss of an isolated event
has a worse impact than a miss of a single event that is part of several
consecutive occurrences. Relying on this idea, we introduce a novel definition
of confusion matrix and skill scores giving greater importance to the value of
the prediction rather than to its quality. Then, we introduce a deep ensemble
learning procedure for binary classification, in which the probabilistic
outcomes of a neural network are clustered via optimization of these
value-weighted skill scores. We finally show the performance of this approach
in three applications concerned with pollution, space weather, and stock
price forecasting.
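
A minimal sketch of a value-weighted confusion matrix in this spirit: a false alarm that shortly precedes an actual event is discounted, while the miss of an isolated event is penalized more heavily than the miss of an event inside a run of occurrences. The window size and the 0.5/2.0 weights are illustrative choices, not the paper's calibrated definitions.

```python
import numpy as np

def value_weighted_confusion(y_true, y_pred, window=2):
    """y_true, y_pred: binary sequences over consecutive time steps."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = fp = fn = tn = 0.0
    for t in range(len(y_true)):
        if y_true[t] == 1 and y_pred[t] == 1:
            tp += 1.0
        elif y_true[t] == 0 and y_pred[t] == 0:
            tn += 1.0
        elif y_true[t] == 0 and y_pred[t] == 1:
            # Cheaper false alarm if an event follows shortly after.
            fp += 0.5 if y_true[t + 1:t + 1 + window].any() else 1.0
        else:
            # Costlier miss if the missed event is isolated.
            isolated = y_true[max(0, t - 1):t + 2].sum() == 1
            fn += 2.0 if isolated else 1.0
    return tp, fp, fn, tn

def value_weighted_tss(y_true, y_pred):
    # True skill statistic computed from the value-weighted counts.
    tp, fp, fn, tn = value_weighted_confusion(y_true, y_pred)
    return tp / (tp + fn + 1e-12) - fp / (fp + tn + 1e-12)
```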
Inverse diffraction for the Atmospheric Imaging Assembly in the Solar Dynamics Observatory
The Atmospheric Imaging Assembly in the Solar Dynamics Observatory provides
full-Sun images every 12 seconds in each of 7 Extreme Ultraviolet passbands.
However, for a significant fraction of these images, saturation affects their
most intense core, preventing scientists from fully exploiting their
physical meaning. In this paper we describe a mathematical and automatic
procedure for the recovery of information in the primary saturation region
based on a correlation/inversion analysis of the diffraction pattern associated
with the telescope observations. Further, we suggest an interpolation-based
method for determining the image background that allows the recovery of
information also in the region of secondary saturation (blooming).
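
The interpolation-based background step can be sketched as follows, assuming saturated pixels are identified by a threshold: their values are replaced by interpolation from the surrounding unsaturated pixels. The correlation/inversion analysis for the primary saturation region is not reproduced here, and interpolate_background is a hypothetical name.

```python
import numpy as np
from scipy.interpolate import griddata

def interpolate_background(image, saturation_level):
    """Replace saturated pixels by values interpolated from their
    unsaturated surroundings (background estimate for the blooming zone)."""
    mask = image >= saturation_level
    yy, xx = np.indices(image.shape)
    known = ~mask
    filled = image.astype(float).copy()
    filled[mask] = griddata(
        (yy[known], xx[known]),     # coordinates of reliable pixels
        filled[known],              # their intensities
        (yy[mask], xx[mask]),       # saturated coordinates to fill
        method="linear")
    return filled
```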
A hybrid time-frequency parametric modelling of medical ultrasound signal transmission
Medical ultrasound imaging is the most widespread real-time non-invasive
imaging system and its formulation comprises signal transmission, signal
reception, and image formation. Ultrasound signal transmission modelling has
been formalized over the years through different approaches by exploiting the
physics of the associated wave problem. This work proposes a novel
computational framework for modelling the ultrasound signal transmission step
in the time-frequency domain for a linear-array probe. More specifically, from
the impulse response theory defined in the time domain, we derived a parametric
model in the corresponding frequency domain, with appropriate approximations
for the narrowband case. To validate the model, we implemented a numerical
simulator and tested it with synthetic data. Numerical experiments demonstrate
that the proposed model is computationally feasible, efficient, and compatible
with realistic measurements and existing state-of-the-art simulators. The
formulated model can be employed for analyzing how the involved parameters
affect the generated beam pattern, and ultimately for optimizing measurement
settings in an automatic and systematic way.
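
To give a flavor of the narrowband approximation, here is a minimal monochromatic sketch for a linear array: the transmit field at a point is the sum of delayed spherical-wave contributions from the element centres. This is the textbook single-frequency picture that the parametric model builds on; geometry, parameters, and function names are illustrative assumptions.

```python
import numpy as np

C = 1540.0  # assumed speed of sound in soft tissue [m/s]

def narrowband_field(points, elem_x, delays, f0):
    """|field| at (x, z) points [m] for a linear array on the z = 0 line.
    points: (N, 2) array; elem_x: element x-positions [m]; delays: [s]."""
    k = 2.0 * np.pi * f0 / C
    field = np.zeros(len(points), dtype=complex)
    for xe, tau in zip(elem_x, delays):
        r = np.hypot(points[:, 0] - xe, points[:, 1])
        # Spherical wave from the element centre, phase-shifted by its delay.
        field += np.exp(1j * (k * r + 2.0 * np.pi * f0 * tau)) / np.clip(r, 1e-6, None)
    return np.abs(field)

def focus_delays(elem_x, z_focus):
    # Standard focusing law: the farthest element fires first.
    r = np.hypot(elem_x, z_focus)
    return (r.max() - r) / C
```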
A stochastic approach to delays optimization for narrowband transmit beam pattern in medical ultrasound
Ultrasound imaging is extensively employed in clinical settings due to its
non-ionizing nature and real-time capabilities. The beamformer represents a
crucial component of an ultrasound machine, playing a significant role in
shaping the ultimate quality of the reconstructed image. Therefore, Transmit
Beam Pattern (TBP) optimization is an important task in medical ultrasound, but
state-of-the-art TBP optimization has well-known drawbacks such as non-uniform
beam width over depth, significant side lobes, and a quick energy drop-off
after the focal depth. To overcome these limitations, we developed a
novel optimization approach for TBP by focusing the analysis on its narrowband
approximation, particularly suited for Acoustic Radiation Force Impulse (ARFI)
elastography, and considering transmit delays as free variables instead of
linked to a specific focal depth. We formulate the problem as a nonlinear
least squares problem to minimize the difference between the TBP corresponding
to a set of delays and the desired one, modeled as a 2D rectangular shape
elongated in the direction of the beam axis. In order to quantitatively
evaluate the results, we define three quality metrics based on main lobe width,
side lobe level, and central line power. Results obtained by our synthetic
software simulation show that the main lobe is considerably more intense and
uniform in width over the whole depth range than with classical focused beam
patterns, and that our optimized delay profile results in a combination of
standard delay profiles at different focal depths. The application of the
proposed method to ARFI elastography shows improvements in the concentration
of the ultrasound energy along a desired axis.
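
A minimal sketch of the optimization loop, assuming a monochromatic field model like the one sketched for the previous abstract: transmit delays are the free variables, and scipy.optimize.least_squares minimizes the mismatch between the narrowband beam pattern they generate and a rectangular target elongated along the beam axis. The array geometry, grid, target width, and solver settings are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.optimize import least_squares

C, F0 = 1540.0, 5e6                        # sound speed [m/s], frequency [Hz]
elem_x = (np.arange(64) - 31.5) * 0.3e-3   # 64 elements, 0.3 mm pitch
xs = np.linspace(-5e-3, 5e-3, 41)          # lateral grid [m]
zs = np.linspace(10e-3, 50e-3, 41)         # axial grid [m]
X, Z = np.meshgrid(xs, zs)

def beam_pattern(delays):
    # Narrowband field magnitude on the grid, normalized to its peak.
    k = 2.0 * np.pi * F0 / C
    field = np.zeros_like(X, dtype=complex)
    for xe, tau in zip(elem_x, delays):
        r = np.hypot(X - xe, Z)
        field += np.exp(1j * (k * r + 2.0 * np.pi * F0 * tau)) / r
    p = np.abs(field)
    return p / p.max()

# Target: a 2D rectangle elongated along the beam axis (x = 0).
target = (np.abs(X) < 0.5e-3).astype(float)

def residuals(delays):
    return (beam_pattern(delays) - target).ravel()

# Start from standard focusing delays at a mid-range focal depth.
r0 = np.hypot(elem_x, 30e-3)
tau0 = (r0.max() - r0) / C
sol = least_squares(residuals, tau0, method="lm", max_nfev=50)
optimized_delays = sol.x
```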
A comprehensive theoretical framework for the optimization of neural networks classification performance with respect to weighted metrics
In many contexts, customized and weighted classification scores are designed
in order to evaluate the goodness of the predictions carried out by neural
networks. However, there exists a discrepancy between the maximization of such
scores and the minimization of the loss function in the training phase. In this
paper, we provide a complete theoretical setting that formalizes weighted
classification metrics and then allows the construction of losses that drive
the model to optimize these metrics of interest. After a detailed theoretical
analysis, we show that our framework includes as particular instances
well-established approaches such as classical cost-sensitive learning, weighted
cross-entropy loss functions, and value-weighted skill scores.
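
One way to picture the framework: replace the hard counts of a weighted confusion matrix with probabilistic counts, so that any metric built from the matrix becomes a differentiable training loss; weighted cross entropy then appears as a special case. The specific weights and the CSI-style metric below are illustrative stand-ins, not the paper's general construction.

```python
import numpy as np

def soft_confusion(y_true, p_pred):
    # Probabilistic counts: each sample contributes its predicted
    # probability to the "predicted positive" column.
    tp = np.sum(y_true * p_pred)
    fn = np.sum(y_true * (1.0 - p_pred))
    fp = np.sum((1.0 - y_true) * p_pred)
    tn = np.sum((1.0 - y_true) * (1.0 - p_pred))
    return tp, fp, fn, tn

def weighted_metric_loss(y_true, p_pred, w_fn=2.0, w_fp=1.0):
    # A weighted CSI-style metric made smooth; minimizing its negation
    # during training directly maximizes the metric.
    tp, fp, fn, _ = soft_confusion(y_true, p_pred)
    return -tp / (tp + w_fn * fn + w_fp * fp + 1e-12)

def weighted_cross_entropy(y_true, p_pred, w_pos=2.0):
    # The classical cost-sensitive instance of the same framework.
    p = np.clip(p_pred, 1e-12, 1.0 - 1e-12)
    return -np.mean(w_pos * y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))
```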