Binary domain generalization for sparsifying binary neural networks
Binary neural networks (BNNs) are an attractive solution for developing and deploying deep neural network (DNN)-based applications on resource-constrained devices. Despite their success, BNNs still suffer from a fixed and limited compression factor, which may be explained by the fact that existing pruning methods for full-precision DNNs cannot be directly applied to BNNs. Indeed, weight pruning of BNNs leads to performance degradation, which suggests that the standard binarization domain of BNNs is not well adapted to the task. This work proposes a novel, more general binary domain that extends the standard binary one and is more robust to pruning techniques, thus guaranteeing improved compression and avoiding severe performance losses. We derive a closed-form solution for quantizing the weights of a full-precision network into the proposed binary domain. Finally, we show the flexibility of our method, which can be combined with other pruning strategies. Experiments on CIFAR-10 and CIFAR-100 demonstrate that the novel approach is able to generate efficient sparse networks with reduced memory usage and run-time latency, while maintaining performance.

Comment: Accepted as conference paper at ECML PKDD 202
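The paper's extended binary domain is not reproduced here, but the standard domain it generalizes has a well-known closed-form quantizer (as in XNOR-Net): approximating real weights W by alpha * sign(W), the least-squares optimal scale is alpha = mean(|W|). A minimal sketch of that baseline, with illustrative weights:

```python
# Hedged sketch of standard binary quantization, NOT the paper's proposed
# extended domain: map each weight to {-alpha, +alpha} with the
# closed-form least-squares scale alpha = mean(|w|).

def binarize(weights):
    """Quantize weights to alpha * sign(w), alpha = mean(|w|)."""
    alpha = sum(abs(w) for w in weights) / len(weights)
    return [alpha if w >= 0 else -alpha for w in weights], alpha

w = [0.4, -0.2, 0.9, -0.7]          # illustrative weights, not from the paper
q, alpha = binarize(w)
# alpha = (0.4 + 0.2 + 0.9 + 0.7) / 4 = 0.55 (up to float rounding)
```

Pruning in this domain forces entries to exact zero, which breaks the two-level {-alpha, +alpha} structure; that tension is what motivates a more general binary domain.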
Elastic Registration of Geodesic Vascular Graphs
Vascular graphs can embed a number of high-level features, from morphological parameters to functional biomarkers, and represent an invaluable tool for longitudinal and cross-sectional clinical inference. This, however, is only feasible when graphs are co-registered, allowing coherent multiple comparisons. The robust registration of vascular topologies therefore stands as a key enabling technology for group-wise analyses. In this work, we present an end-to-end vascular graph registration approach that aligns networks with non-linear geometries and topological deformations by introducing a novel overconnected geodesic vascular graph formulation, without enforcing any anatomical prior constraint. The 3D elastic graph registration is then performed with state-of-the-art graph matching methods used in computer vision. Promising vascular matching results are obtained using graphs from synthetic and real angiographies. Observations and future designs are discussed towards potential clinical applications.
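The graph matching step can be illustrated with a deliberately simplified toy: given 3D node coordinates of two small vascular graphs, find the node correspondence minimizing total Euclidean distance. Real graph matching methods also score edge and topology agreement and scale far beyond brute force; the coordinates below are invented for illustration.

```python
import itertools
import math

def match_nodes(coords_a, coords_b):
    """Toy node matching: the permutation of graph B's nodes that
    minimizes total Euclidean distance to graph A's nodes (brute force;
    practical graph matching also rewards edge/topology agreement)."""
    n = len(coords_a)
    best_perm, best_cost = None, math.inf
    for perm in itertools.permutations(range(n)):
        cost = sum(math.dist(coords_a[i], coords_b[perm[i]]) for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
b = [(1.1, 0, 0), (0, 0.9, 0), (0.05, 0, 0)]  # graph A's nodes, permuted and jittered
perm, cost = match_nodes(a, b)
# perm maps node i of A to node perm[i] of B: (2, 0, 1)
```

An elastic registration would then use these correspondences to drive a non-linear deformation of one graph onto the other.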
In vitro and in vivo comparison of the anti-staphylococcal efficacy of generic products and the innovator of oxacillin
Background: Oxacillin continues to be an important agent in the treatment of staphylococcal infections; many generic products are available, and the only requirement for their approval is demonstration of pharmaceutical equivalence. We tested the assumption that pharmaceutical equivalence predicts therapeutic equivalence by comparing 11 generics with the innovator product in terms of concentration of the active pharmaceutical ingredient (API), minimal inhibitory (MIC) and bactericidal (MBC) concentrations, and antibacterial efficacy in the neutropenic mouse thigh infection model.

Methods: The API in each product was measured by a validated microbiological assay and compared by slope (potency) and intercept (concentration) analysis of linear regressions. MIC and MBC were determined by broth microdilution according to Clinical and Laboratory Standards Institute (CLSI) guidelines. For in vivo efficacy, neutropenic ICR mice were inoculated with a clinical strain of Staphylococcus aureus. The animals had 4.14 ± 0.18 log10 CFU/thigh when treatment started. Groups of 10 mice per product received a total dose ranging from 2.93 to 750 mg/kg per day administered q1h. Sigmoidal dose-response curves were generated by nonlinear regression fitted to the Hill equation to compute the maximum effect (Emax), slope (N), and the effective dose reaching 50% of the Emax (ED50). Based on these results, the bacteriostatic dose (BD) and the dose needed to kill the first log of bacteria (1LKD) were also determined.

Results: Four generic products failed pharmaceutical equivalence due to significant differences in potency; however, all products were indistinguishable from the innovator in terms of MIC and MBC. Independently of their status with respect to pharmaceutical equivalence or in vitro activity, all generics failed therapeutic equivalence in vivo, displaying significantly lower Emax and requiring greater BD and 1LKD, or fitting a non-sigmoidal model.

Conclusions: Pharmaceutical or in vitro equivalence did not entail therapeutic equivalence for oxacillin generic products, indicating that criteria for approval deserve review to include evaluation of in vivo efficacy.
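The Hill (sigmoidal Emax) model used for the dose-response fits can be written as E(D) = Emax * D^N / (ED50^N + D^N), where by construction the effect at D = ED50 is exactly half of Emax. A minimal sketch with purely illustrative parameter values (not the study's fitted estimates):

```python
def hill_effect(dose, e_max, ed50, n):
    """Sigmoidal (Hill) dose-response model: effect at a given dose."""
    return e_max * dose**n / (ed50**n + dose**n)

# Illustrative, made-up parameters: e_max = 5 (log10 CFU reduction),
# ed50 = 100 mg/kg per day, slope n = 2.
e = hill_effect(100, e_max=5.0, ed50=100, n=2)
# at dose == ED50, the effect is e_max / 2 = 2.5
```

In the study, derived quantities such as BD and 1LKD follow from inverting this fitted curve at the effect levels of interest.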
Do Deep Neural Networks Contribute to Multivariate Time Series Anomaly Detection?
Anomaly detection in time series is a complex task that has been widely studied. In recent years, unsupervised anomaly detection algorithms have received much attention. This trend has led researchers to compare only learning-based methods in their articles, abandoning some more conventional approaches. As a result, the community in this field has been encouraged to propose increasingly complex learning-based models, mainly based on deep neural networks. To our knowledge, there are no comparative studies between conventional, machine learning-based, and deep neural network methods for the detection of anomalies in multivariate time series. In this work, we study the anomaly detection performance of sixteen conventional, machine learning-based, and deep neural network approaches on five real-world open datasets. By analyzing and comparing the performance of each of the sixteen methods, we show that no family of methods outperforms the others. Therefore, we encourage the community to reincorporate all three categories of methods in multivariate time series anomaly detection benchmarks.
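As a reminder of how simple a "conventional" baseline can be, the sketch below flags time steps whose per-dimension z-score exceeds a threshold. This is a generic statistical detector of our own construction for illustration, not one of the sixteen methods compared in the paper; the toy data are invented.

```python
import statistics

def zscore_anomalies(series, threshold=3.0):
    """Conventional baseline: flag time steps whose maximum per-dimension
    z-score exceeds a threshold. series is a list of equal-length tuples
    (one tuple per time step, one entry per variable)."""
    dims = list(zip(*series))
    means = [statistics.fmean(d) for d in dims]
    # guard against zero variance in constant dimensions
    stds = [statistics.stdev(d) or 1e-9 for d in dims]
    flagged = []
    for t, x in enumerate(series):
        score = max(abs(x[j] - means[j]) / stds[j] for j in range(len(x)))
        if score > threshold:
            flagged.append(t)
    return flagged

data = [(1.0, 2.0)] * 20 + [(10.0, 2.0)] + [(1.0, 2.0)] * 20
print(zscore_anomalies(data))  # the spike at index 20 is flagged: [20]
```

Baselines like this are cheap to run and tune, which is exactly why the paper argues they belong alongside learning-based methods in benchmarks.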
Maximum Roaming Multi-Task Learning
Multi-task learning has gained popularity due to the advantages it provides with respect to resource usage and performance. Nonetheless, the joint optimization of parameters with respect to multiple tasks remains an active research topic. Sub-partitioning the parameters between different tasks has proven to be an efficient way to relax the optimization constraints over the shared weights, whether the partitions are disjoint or overlapping. However, one drawback of this approach is that it can weaken the inductive bias generally set up by the joint task optimization. In this work, we present a novel way to partition the parameter space without weakening the inductive bias. Specifically, we propose Maximum Roaming, a method inspired by dropout that randomly varies the parameter partitioning, while forcing the parameters to visit as many tasks as possible at a regulated frequency, so that the network fully adapts to each update. We study the properties of our method through experiments on a variety of visual multi-task data sets. Experimental results suggest that the regularization brought by roaming has more impact on performance than usual partitioning optimization strategies. The overall method is flexible, easily applicable, provides superior regularization, and consistently achieves improved performance compared to recent multi-task learning formulations.

Comment: Accepted at the 35th AAAI Conference on Artificial Intelligence (AAAI 2021)
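The roaming idea, abstracted away from the paper's actual algorithm and training details, can be sketched as follows: a task holds a binary mask over shared units, and at a regulated frequency a few active units are released and replaced by the least-visited inactive ones, so that every unit eventually serves every task. All names and the update rule below are a simplified illustration, not the published method.

```python
import random

def roam_step(active, n_units, visits, k, rng):
    """One roaming update for a single task: drop k randomly chosen active
    units and activate the k least-visited inactive units, so the task's
    partition gradually visits every shared unit. Mutates `visits`."""
    dropped = set(rng.sample(sorted(active), k))
    inactive = [u for u in range(n_units) if u not in active]
    inactive.sort(key=lambda u: visits[u])   # least-visited first
    added = inactive[:k]
    for u in added:
        visits[u] += 1
    return (active - dropped) | set(added)

rng = random.Random(0)
visits = [0] * 8                 # per-unit visit counts for this task
active = {0, 1, 2, 3}            # initial partition: half of 8 shared units
for u in active:
    visits[u] += 1
for _ in range(10):
    active = roam_step(active, 8, visits, k=1, rng=rng)
# after enough steps, every unit has been visited at least once,
# while the partition size stays constant
```

Varying `k` and the update frequency plays the role of the "regulated frequency" in the abstract: slower roaming gives the network more time to adapt to each partition change.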