Non-adaptive pooling strategies for detection of rare faulty items
We study non-adaptive pooling strategies for detection of rare faulty items.
Given a sparse binary N-dimensional signal x, how can one construct a sparse binary
MxN pooling matrix F such that the signal can be reconstructed from the
smallest possible number M of measurements y=Fx? We show that a very small number
of measurements suffices for a random spatially coupled design of the pools F. Our
design might find application in genetic screening or compressed genotyping. We
show that our results are robust with respect to the uncertainty in the matrix
F when some of its elements are mistaken. Comment: 5 pages
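To fix notation, here is a minimal sketch of the measurement model with a plain random sparse design; the paper's spatially coupled construction of F is not reproduced, and all problem sizes are illustrative assumptions:

```python
# A minimal sketch of non-adaptive pooling, assuming a plain random sparse
# pooling matrix; the spatially coupled design of F is not reproduced here,
# and all problem sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 1000, 120, 10                      # items, pools, faulty items

x = np.zeros(N, dtype=int)
x[rng.choice(N, size=K, replace=False)] = 1  # sparse binary signal

F = (rng.random((M, N)) < 0.02).astype(int)  # sparse binary pooling matrix
y = F @ x                                    # measurements y = Fx

# Trivial decoding step: every item appearing in a pool with y = 0 is
# certainly not faulty (the basis of "COMP"-style decoders).
candidate = np.ones(N, dtype=bool)
for m in np.flatnonzero(y == 0):
    candidate &= (F[m] == 0)
print(candidate.sum(), "candidates remain;", K, "items are truly faulty")
```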
Sparse Estimation with the Swept Approximated Message-Passing Algorithm
Approximate Message Passing (AMP) has been shown to be a superior method for
inference problems, such as the recovery of signals from sets of noisy,
lower-dimensionality measurements, both in terms of reconstruction accuracy and
in computational efficiency. However, AMP suffers from serious convergence
issues in contexts that do not exactly match its assumptions. We propose a new
approach to stabilizing AMP in these contexts by applying AMP updates to
individual coefficients rather than in parallel. Our results show that this
change to the AMP iteration can provide theoretically expected, but hitherto
unobtainable, performance for problems on which the standard AMP iteration
diverges. Additionally, we find that the computational cost of this swept
coefficient update scheme is not unduly burdensome, allowing it to be applied
efficiently to signals of large dimensionality. Comment: 11 pages, 3 figures, implementation available at
https://github.com/eric-tramel/SwAMP-Dem
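The ordering change is easiest to see on a simpler algorithm. The sketch below contrasts parallel and swept (one-coefficient-at-a-time) updates using plain iterative soft-thresholding rather than AMP itself; the point is the update schedule, not the specific iteration, and all parameter values are illustrative:

```python
# Parallel vs. swept coordinate updates, illustrated on iterative
# soft-thresholding (ISTA), not on AMP itself.
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def ista_parallel(A, y, lam, L, iters=200):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # all coefficients updated simultaneously from the same residual
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

def ista_swept(A, y, lam, L, iters=200):
    x = np.zeros(A.shape[1])
    r = y - A @ x
    for _ in range(iters):
        for j in np.random.permutation(A.shape[1]):  # one coefficient at a time
            old = x[j]
            x[j] = soft(old + A[:, j] @ r / L, lam / L)
            r += A[:, j] * (old - x[j])              # keep residual in sync
    return x

# Tiny usage example: recover a 5-sparse vector from 50 noise-free measurements.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 100)) / np.sqrt(50)
x_true = np.zeros(100); x_true[:5] = 1.0
x_hat = ista_swept(A, A @ x_true, lam=0.01, L=1.5)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```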
Intelligent fault detection and classification based on hybrid deep learning methods for Hardware-in-the-Loop test of automotive software systems
Hardware-in-the-Loop (HIL) testing has been recommended by ISO 26262 as an essential test bench for determining the safety and reliability characteristics of automotive software systems (ASSs). However, due to the complexity and the huge amount of data recorded by the HIL platform during the testing process, conventional data analysis methods that rely on human experts to detect and classify faults are not practical. Therefore, effective means based on historical data sets are required to analyze the records of the testing process efficiently. Even though data-driven fault diagnosis is superior to other approaches, selecting the appropriate technique from the wide range of Deep Learning (DL) techniques is challenging. Moreover, training data containing automotive faults are rare and considered highly confidential by the automotive industry. Using hybrid DL techniques, this study proposes a novel intelligent fault detection and classification (FDC) model to be utilized during the V-cycle development process, i.e., the system integration testing phase. To this end, an HIL-based real-time fault injection framework is used to generate faulty data without altering the original system model. In addition, a combination of a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) is employed to build the model structure. In this study, eight types of sensor faults are considered, covering the most common potential faults in the signals of ASSs. As a case study, a gasoline engine system model is used to demonstrate the capabilities and advantages of the proposed method and to verify the performance of the model. The results show that the proposed method achieves better detection and classification performance than other standalone DL methods. Specifically, the overall detection accuracies of the proposed structure in terms of precision, recall and F1-score are 98.86%, 98.90% and 98.88%, respectively. For classification, the experimental results also demonstrate its superiority under unseen test data, with an average accuracy of 98.8%.
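For orientation, a minimal sketch of a hybrid CNN-LSTM classifier of the kind described is given below, assuming TensorFlow/Keras; the window length, channel count, layer sizes, and class count are illustrative placeholders rather than the paper's actual architecture:

```python
# Illustrative hybrid CNN-LSTM classifier for windowed multivariate sensor
# traces; sizes and class count are placeholder assumptions.
import tensorflow as tf
from tensorflow.keras import layers

N_CHANNELS = 4    # number of recorded HIL signals (assumption)
WINDOW = 256      # samples per sliding window (assumption)
N_CLASSES = 9     # 8 sensor-fault types + healthy class (assumption)

model = tf.keras.Sequential([
    layers.Input(shape=(WINDOW, N_CHANNELS)),
    # CNN front end: extract local features from the raw signal windows
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    # LSTM back end: model temporal dependencies between extracted features
    layers.LSTM(64),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```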
New Constructions for Competitive and Minimal-Adaptive Group Testing
Group testing (GT) was originally proposed during World War II in an attempt to minimize the \emph{cost} and \emph{waiting time} of performing identical blood tests on soldiers for a low-prevalence disease.
Formally, the GT problem asks to find the $d$ \emph{defective} elements out of $n$ elements by querying subsets (pools) for the presence of defectives.
By the information-theoretic lower bound, essentially $\log_2\binom{n}{d}$ queries are needed in the worst case.
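For reference, the counting argument behind this lower bound can be written out explicitly (with $n$ items and $d$ defectives):

```latex
% T binary test outcomes distinguish at most 2^T configurations,
% while there are \binom{n}{d} possible defective sets, hence
\[
  2^{T} \;\ge\; \binom{n}{d}
  \quad\Longrightarrow\quad
  T \;\ge\; \log_2 \binom{n}{d} \;\ge\; d \log_2 \frac{n}{d}.
\]
```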
An \emph{adaptive} strategy proceeds sequentially by performing one query at a time, and it can achieve the lower bound. In various applications, nothing is known about $d$ beforehand, and a strategy for this scenario is called \emph{competitive}. Such strategies are usually adaptive and achieve query optimality within a constant factor called the \emph{competitive ratio}.
In many applications, queries are time-consuming. Therefore, \emph{minimal-adaptive} strategies which run in a small number of stages of parallel
queries are favorable.
This work is mainly devoted to the design of minimal-adaptive strategies combined with other demands of both theoretical and practical interest. First we target the case of unknown $d$ and show that competitive GT is possible in only a small constant number of stages.
The main ingredient is our randomized estimate of the previously unknown $d$ using nonadaptive queries.
In addition, we have developed a systematic approach to obtain optimal competitive ratios for our strategies.
When an upper bound on $d$ is known,
we propose randomized GT strategies which asymptotically achieve query optimality in just a small number of stages, depending upon the growth of $d$ versus $n$.
Inspired by application settings, such as at the American Red Cross, where in most cases GT is applied to small instances, we extended our study of query-optimal GT strategies to solve a given problem instance with fixed values of $n$ and $d$. We also considered the situation when the
elements to test cannot be divided physically (e.g., electronic devices), so the pools must be disjoint. For GT with \emph{disjoint} simultaneous pools, we determine the number of tests that is sufficient, and also necessary, for certain ranges of the parameters.
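As a concrete, classical illustration of an adaptive strategy that attains the lower bound up to constant factors, here is a minimal binary-splitting sketch; it is the textbook baseline, not the thesis's randomized multi-stage constructions:

```python
# Adaptive group testing by binary splitting: each positive pool is halved
# until single defectives are isolated; negative pools are discarded whole.
def test(pool, defectives):
    """One query: is there at least one defective in the pool?"""
    return any(i in defectives for i in pool)

def find_defectives(items, defectives):
    found = []
    stack = [list(items)]
    while stack:
        pool = stack.pop()
        if not test(pool, defectives):
            continue                       # negative pool: discard all items
        if len(pool) == 1:
            found.append(pool[0])          # isolated one defective
            continue
        mid = len(pool) // 2
        stack += [pool[:mid], pool[mid:]]  # split a positive pool in half
    return sorted(found)

print(find_defectives(range(32), {3, 17}))  # -> [3, 17]
```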
Inference of the sparse kinetic Ising model using the decimation method
In this paper we study the inference of the kinetic Ising model on sparse
graphs by the decimation method. The decimation method, which was first
proposed in [Phys. Rev. Lett. 112, 070603] for the static inverse Ising
problem, tries to recover the topology of the inferred system by setting the
weakest couplings to zero iteratively. During the decimation process the
likelihood function is maximized over the remaining couplings. Unlike the
$\ell_1$-optimization based methods, the decimation method does not use the
Laplace distribution as a heuristic choice of prior to select a sparse
solution. In our case, the whole process can be done automatically without
fixing any parameters by hand. We show that in the dynamical inference problem,
where the task is to reconstruct the couplings of an Ising model given the
data, the decimation process can be incorporated naturally into a maximum-likelihood
optimization algorithm, as opposed to the static case, where the pseudo-likelihood
method needs to be adopted. We also use extensive numerical studies to validate
the accuracy of our methods in dynamical inference problems. Our results
illustrate that on various topologies and with different distributions of
couplings, the decimation method outperforms the widely used $\ell_1$-optimization based methods. Comment: 11 pages, 5 figures
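To make the procedure concrete, below is a minimal, self-contained sketch of maximum-likelihood decimation for a kinetic Ising model; the gradient is the standard one for parallel Glauber-type dynamics, while the paper's stopping criterion and other refinements are omitted, and all parameter values are illustrative:

```python
# Decimation sketch: maximize the dynamical log-likelihood by gradient
# ascent, then repeatedly zero out the weakest couplings and re-maximize.
import numpy as np

rng = np.random.default_rng(0)
N, T = 10, 5000

# Ground truth: a sparse coupling matrix and a sampled spin trajectory,
# with P(s_i(t+1) = +1 | s(t)) = 1 / (1 + exp(-2 * sum_j J_ij s_j(t))).
J_true = rng.normal(0, 1 / np.sqrt(N), (N, N)) * (rng.random((N, N)) < 0.3)
s = np.empty((T, N)); s[0] = rng.choice([-1, 1], N)
for t in range(T - 1):
    p = 1 / (1 + np.exp(-2 * (s[t] @ J_true.T)))
    s[t + 1] = np.where(rng.random(N) < p, 1, -1)

def fit(mask, iters=300, lr=0.1):
    """Maximum-likelihood couplings restricted to the unmasked entries."""
    J = np.zeros((N, N))
    for _ in range(iters):
        theta = s[:-1] @ J.T
        # dL/dJ_ij = sum_t [s_i(t+1) - tanh(theta_i(t))] s_j(t)
        grad = (s[1:] - np.tanh(theta)).T @ s[:-1] / (T - 1)
        J += lr * grad * mask
    return J

mask = np.ones((N, N), dtype=bool)
for _ in range(5):                    # a few decimation rounds
    J = fit(mask)
    cut = np.quantile(np.abs(J[mask]), 0.2)
    mask &= np.abs(J) > cut           # decimate the weakest 20%
print("kept couplings:", mask.sum(), "| true nonzeros:", (J_true != 0).sum())
```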
Group testing: an information theory perspective
The group testing problem concerns discovering a small number of defective
items within a large population by performing tests on pools of items. A test
is positive if the pool contains at least one defective, and negative if it
contains no defectives. This is a sparse inference problem with a combinatorial
flavour, with applications in medical testing, biology, telecommunications,
information technology, data science, and more. In this monograph, we survey
recent developments in the group testing problem from an information-theoretic
perspective. We cover several related developments: efficient algorithms with
practical storage and computation requirements, achievability bounds for
optimal decoding methods, and algorithm-independent converse bounds. We assess
the theoretical guarantees not only in terms of scaling laws, but also in terms
of the constant factors, leading to the notion of the {\em rate} of group
testing, indicating the amount of information learned per test. Considering
both noiseless and noisy settings, we identify several regimes where existing
algorithms are provably optimal or near-optimal, as well as regimes where there
remains greater potential for improvement. In addition, we survey results
concerning a number of variations on the standard group testing problem,
including partial recovery criteria, adaptive algorithms with a limited number
of stages, constrained test designs, and sublinear-time algorithms. Comment: Survey paper, 140 pages, 19 figures. To be published in Foundations
and Trends in Communications and Information Theory
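The rate just mentioned has a compact definition; in the notation usual for this literature (with $n$ items, $k$ defectives, and $T$ tests), it is the number of bits of information learned per test:

```latex
\[
  \text{rate} \;=\; \frac{\log_2 \binom{n}{k}}{T},
\]
% a rate of 1 would mean each test yields a full bit of information;
% converse bounds cap the achievable rate in each regime.
```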
Reliability and Security Assessment of Modern Embedded Devices
The abstract is provided in the attachment.
Automatic performance optimisation of component-based enterprise systems via redundancy
Component technologies, such as J2EE and .NET, have been extensively adopted for building complex enterprise applications. These technologies help address complex functionality and flexibility problems and reduce development and maintenance costs. Nonetheless, current component technologies provide little support for predicting and controlling the emergent performance of software systems assembled from distinct components.
Static component testing and tuning procedures provide insufficient performance guarantees for components deployed and run in diverse assemblies, under unpredictable workloads and on different platforms. Often, there is no single component implementation or deployment configuration that can yield optimal performance in all possible conditions under which a component may run. Manually optimising and adapting complex applications to changes in their running environment is a costly and error-prone management task.
The thesis presents a solution for automatically optimising the performance of component-based enterprise systems. The proposed approach is based on the alternate usage of multiple component variants with equivalent functional characteristics, each one optimised for a different execution environment. A management framework automatically administers the available redundant variants and adapts the system to external changes. The framework uses runtime monitoring data to detect performance anomalies and significant variations in the application's execution environment. It automatically adapts the application so as to use the optimal component configuration under the current running conditions. An automatic clustering mechanism analyses monitoring data and infers information about the components' performance characteristics. System administrators use decision policies to state high-level performance goals and configure system management processes.
A framework prototype has been implemented and tested by automatically managing a J2EE application. The obtained results demonstrate the framework's capability to successfully manage a software system without human intervention. The management overhead induced during normal system execution and through management operations indicates the framework's feasibility.
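Schematically, the adaptation loop described above might look as follows; every name here (VariantManager, the latency threshold, the scoring rule) is a hypothetical placeholder rather than the actual framework's interface:

```python
# Hypothetical sketch of a monitor-detect-adapt loop over redundant
# component variants; names and thresholds are illustrative placeholders.
import statistics

class VariantManager:
    def __init__(self, variants):
        self.variants = variants
        self.history = {v: [] for v in variants}  # variant -> observed latencies
        self.active = variants[0]

    def record(self, latency_ms):
        self.history[self.active].append(latency_ms)

    def anomalous(self, threshold_ms, window):
        recent = self.history[self.active][-window:]
        return len(recent) == window and statistics.mean(recent) > threshold_ms

    def adapt(self):
        # Switch to the variant with the best observed mean latency;
        # unexplored variants are tried optimistically (score 0).
        def score(v):
            h = self.history[v]
            return statistics.mean(h) if h else 0.0
        self.active = min(self.variants, key=score)

mgr = VariantManager(["variant_a", "variant_b"])
for latency in [90, 110, 250, 260, 270]:
    mgr.record(latency)
    if mgr.anomalous(threshold_ms=150, window=3):
        mgr.adapt()
print("active variant:", mgr.active)
```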
Dynamics and termination cost of spatially coupled mean-field models
This work is motivated by recent progress in information theory and signal
processing where the so-called `spatially coupled' design of systems leads to
considerably better performance. We address relevant open questions about
spatially coupled systems through the study of a simple Ising model. In
particular, we consider a chain of Curie-Weiss models that are coupled by
interactions up to a certain range. Indeed, it is well known that the pure
(uncoupled) Curie-Weiss model undergoes a first order phase transition driven
by the magnetic field, and furthermore, in the spinodal region such systems are
unable to reach equilibrium in sub-exponential time if initialized in the
metastable state. By contrast, the spatially coupled system is able
to reach equilibrium even when initialized in the metastable state. The
equilibrium phase propagates along the chain in the form of a travelling wave.
Here we study the speed of the wave-front and the so-called `termination
cost', \textit{i.e.}, the conditions necessary for the propagation to occur.
We reach several interesting conclusions about optimization of the speed and
the cost. Comment: 12 pages, 11 figures
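The travelling-wave phenomenon is easy to reproduce at the mean-field level. Below is a minimal sketch iterating the magnetizations of a chain of coupled Curie-Weiss blocks; the coupling window, inverse temperature, and field are illustrative assumptions, not the values studied in the paper:

```python
# Mean-field dynamics of a chain of coupled Curie-Weiss blocks: a seed
# pinned in the stable phase nucleates a wave that converts the
# metastable blocks along the chain.
import numpy as np

L, W = 60, 3                # chain length, coupling range (assumptions)
beta, h = 1.4, 0.05         # inverse temperature, small field (assumptions)
g = np.ones(2 * W + 1) / (2 * W + 1)  # uniform coupling window

m = -0.9 * np.ones(L)       # chain initialized on the metastable branch
m[:W] = 1.0                 # seed: pin the first blocks in the stable phase

for t in range(400):
    # effective field on each block: windowed average of neighbours plus h
    m_pad = np.pad(m, W, mode="edge")
    eff = np.convolve(m_pad, g, mode="valid")
    m = np.tanh(beta * (eff + h))
    m[:W] = 1.0             # keep the seed pinned

print("blocks converted to the stable phase:", int((m > 0).sum()))
```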