Conformant Planning via Symbolic Model Checking
We tackle the problem of planning in nondeterministic domains, by presenting
a new approach to conformant planning. Conformant planning is the problem of
finding a sequence of actions that is guaranteed to achieve the goal despite
the nondeterminism of the domain. Our approach is based on the representation
of the planning domain as a finite state automaton. We use Symbolic Model
Checking techniques, in particular Binary Decision Diagrams, to compactly
represent and efficiently search the automaton. In this paper we make the
following contributions. First, we present a general planning algorithm for
conformant planning, which applies to fully nondeterministic domains, with
uncertainty in the initial condition and in action effects. The algorithm is
based on a breadth-first, backward search, and returns conformant plans of
minimal length if a solution to the planning problem exists; otherwise, it
terminates, concluding that the problem admits no conformant solution. Second,
we provide a symbolic representation of the search space based on Binary
Decision Diagrams (BDDs), which is the basis for search techniques derived from
symbolic model checking. The symbolic representation makes it possible to
analyze potentially large sets of states and transitions in a single
computation step, thus providing for an efficient implementation. Third, we
present CMBP (Conformant Model Based Planner), an efficient implementation of
the data structures and algorithm described above, directly based on BDD
manipulations, which allows for a compact representation of the search layers
and an efficient implementation of the search steps. Finally, we present an
experimental comparison of our approach with the state-of-the-art conformant
planners CGP, QBFPLAN and GPT. Our analysis includes all the planning problems
from the distribution packages of these systems, plus other problems defined to
stress a number of specific factors. Our approach appears to be the most
effective: CMBP is strictly more expressive than QBFPLAN and CGP and, in all
the problems where a comparison is possible, CMBP outperforms its competitors,
sometimes by orders of magnitude.
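The belief-space search the abstract describes can be illustrated with a minimal sketch. Note the hedges: explicit Python sets stand in for the BDD-based symbolic representation, the search runs forward over belief states rather than backward as in CMBP (the conformance condition is the same), and the toy domain is hypothetical, not from the paper.

```python
from collections import deque

def conformant_plan(init_states, goal_states, actions):
    """Breadth-first search in belief space.

    init_states: set of possible initial states (uncertain initial condition)
    goal_states: set of goal states
    actions: dict mapping an action name to a function from a state to the
             set of its possible successors (nondeterministic effects)

    Returns a minimal-length action sequence that reaches the goal from
    *every* initial state, or None if no conformant plan exists.
    """
    start = frozenset(init_states)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        belief, plan = queue.popleft()
        if belief <= goal_states:          # goal is certain in every world
            return plan
        for name, effect in actions.items():
            succ = frozenset(t for s in belief for t in effect(s))
            if succ and succ not in seen:
                seen.add(succ)
                queue.append((succ, plan + [name]))
    return None

# Toy domain: position in {0,1,2,3}, goal is 3; "move" advances by 1 or 2
# steps nondeterministically, "reset" returns to 0 deterministically.
actions = {
    "move": lambda s: {min(s + 1, 3), min(s + 2, 3)},
    "reset": lambda s: {0},
}
plan = conformant_plan({0, 1}, {3}, actions)
```

Because the queue is explored breadth-first, the first belief contained in the goal set yields a plan of minimal length, mirroring the optimality guarantee stated in the abstract.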
Learning Discrete-Time Markov Chains Under Concept Drift
Learning under concept drift is a novel and promising research area aimed at designing learning algorithms able to deal with nonstationary data-generating processes. In this research field, most of the literature focuses on learning nonstationary probabilistic frameworks, while a few extensions address learning graphs and signals under concept drift. For the first time in the literature, this paper addresses the problem of learning discrete-time Markov chains (DTMCs) under concept drift. More specifically, following a hybrid active/passive approach, this paper introduces both a family of change-detection mechanisms (CDMs) for detecting changes in DTMCs, differing in their required assumptions and performance, and an adaptive learning algorithm able to deal with DTMCs under concept drift. The effectiveness of both the proposed CDMs and the adaptive learning algorithm has been extensively tested in synthetically generated experiments and on real data sets.
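The core idea of change detection on a DTMC can be sketched as follows. This is a generic window-based detector, not one of the paper's CDMs: it re-estimates the transition matrix on successive windows and flags drift when the L1 distance from a reference estimate exceeds a threshold (all names and parameter values are illustrative).

```python
import numpy as np

def transition_matrix(seq, n_states):
    """Maximum-likelihood transition matrix with Laplace smoothing."""
    counts = np.ones((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def detect_change(seq, n_states, window, threshold):
    """Return the start index of the first window whose estimated
    transition matrix drifts (in averaged L1 distance) from the
    reference matrix estimated on the first window, else None."""
    ref = transition_matrix(seq[:window], n_states)
    for start in range(window, len(seq) - window + 1, window):
        cur = transition_matrix(seq[start:start + window], n_states)
        if np.abs(cur - ref).sum() / n_states > threshold:
            return start
    return None

# Synthetic experiment: the chain abruptly switches from a persistent
# to an anti-persistent transition matrix halfway through.
rng = np.random.default_rng(0)

def sample(P, n, s=0):
    seq = [s]
    for _ in range(n - 1):
        s = rng.choice(len(P), p=P[s])
        seq.append(int(s))
    return seq

P1 = [[0.9, 0.1], [0.1, 0.9]]
P2 = [[0.1, 0.9], [0.9, 0.1]]
seq = sample(P1, 500) + sample(P2, 500)
t = detect_change(seq, n_states=2, window=100, threshold=0.8)
```

With the change injected at index 500, the first window lying entirely in the post-change regime is flagged; real CDMs differ in their assumptions and in how they trade detection delay against false alarms.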
Privacy-Preserving Deep Learning With Homomorphic Encryption: An Introduction
Privacy-preserving deep learning with homomorphic encryption (HE) is a novel and promising research area aimed at designing deep learning solutions that operate while guaranteeing the privacy of user data. Designing privacy-preserving deep learning solutions requires one to completely rethink and redesign deep learning models and algorithms to match the severe technological and algorithmic constraints of HE. This paper provides an introduction to this complex research area as well as a methodology for designing privacy-preserving convolutional neural networks (CNNs). This methodology was applied to the design of a privacy-preserving version of the well-known LeNet-1 CNN, which was successfully operated on two benchmark datasets for image classification. Furthermore, this paper details and comments on the research challenges and software resources available for privacy-preserving deep learning with HE.
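One concrete HE constraint the abstract alludes to is that ciphertexts support only additions and multiplications, so non-polynomial activations such as ReLU must be replaced, commonly by a square activation. The sketch below illustrates this on plaintext values, without any HE library; the function name and shapes are illustrative, not the paper's design.

```python
import numpy as np

def he_friendly_conv_block(x, kernel, bias):
    """One HE-compatible CNN block: a valid 2D convolution (additions and
    multiplications only) followed by a square activation, a common
    polynomial stand-in for ReLU under homomorphic encryption."""
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * kernel).sum() + bias
    return out ** 2  # squaring costs one ciphertext-ciphertext multiplication

x = np.arange(9, dtype=float).reshape(3, 3)
out = he_friendly_conv_block(x, np.ones((2, 2)), 0.0)
```

Since every operation in the block is a polynomial in the inputs, the same computation could in principle be evaluated on encrypted data, at the cost of a bounded multiplicative depth.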
TinyML for UWB-radar based presence detection
Tiny Machine Learning (TinyML) is a novel research area aimed at designing machine and deep learning models and algorithms that can be executed on tiny devices such as Internet-of-Things units, edge devices, or embedded systems. In this paper we introduce, for the first time in the literature, a TinyML solution for presence detection based on Ultra-wideband (UWB) radar, a particularly promising radar technology for pervasive systems. To achieve this goal, we introduce a novel family of tiny convolutional neural networks for the processing of UWB-radar data, characterized by a reduced memory footprint and computational demand so as to satisfy the severe technological constraints of tiny devices. From this technological perspective, UWB radars are particularly relevant in the presence-detection scenario since they do not acquire sensitive information about users (e.g., images, videos, or audio), hence preserving their privacy. The proposed solution has been successfully tested on a publicly available benchmark for indoor presence detection and on a real-world application of in-car presence detection.
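The abstract does not specify how the memory footprint is reduced; one standard technique in tiny CNN design is replacing standard convolutions with depthwise-separable ones. The sketch below, with purely hypothetical layer sizes, shows the parameter-count arithmetic behind that choice.

```python
def conv2d_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (weights + biases)."""
    return c_in * c_out * k * k + c_out

def separable_conv2d_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a pointwise 1 x 1
    convolution, each with its own biases."""
    depthwise = c_in * k * k + c_in
    pointwise = c_in * c_out + c_out
    return depthwise + pointwise

# Hypothetical layer: 16 input channels, 32 output channels, 3 x 3 kernel.
standard = conv2d_params(16, 32, 3)
separable = separable_conv2d_params(16, 32, 3)
```

For this layer the separable variant needs 704 parameters against 4640 for the standard one, the kind of reduction that makes a model fit the few hundred kilobytes of RAM typical of tiny devices.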
The record of the Paleocene-Eocene thermal maximum in the Ager Basin (Central Pyrenees, Spain)
The sedimentary record straddling the Paleocene/Eocene boundary in the Ager Basin (southern Central Pyrenees) was investigated by combining facies analysis, sequence stratigraphy and stable isotope data, within an interval characterized by a great variability of depositional environments. The occurrence of the Paleocene-Eocene Thermal Maximum (PETM) climatic anomaly is tentatively constrained by analogy with its stratigraphic range in the adjacent Tremp-Graus Basin. The main body of the carbon isotope excursion associated with the PETM may be recorded by lacustrine carbonates characterized by a ~ −3‰ shift in δ13C with respect to analogous deposits of Thanetian age; a similar shift is recorded between in situ and resedimented pedogenic carbonates, a feature that suggests the partial erosion of the P/E boundary in the Ager Basin.
Are fluid inclusions in gypsum reliable paleoenvironmental indicators? An assessment of the evidence from the Messinian evaporites
The paleosalinity of water from which the gypsum precipitated during the Messinian
salinity crisis is a controversial issue. Recent microthermometry studies on primary fluid
inclusions in gypsum provided very low salinity values not compatible with precipitation from
seawater, and suggested strong mixing between seawater and nonmarine waters enriched in
calcium sulfate. We applied a new microthermometric protocol to gypsum crystals from nine
Mediterranean sections, experimentally stretching the crystals to measure a larger population
of fluid inclusions. The results show salinities ranging from 9 to 238 wt‰ NaCl equivalent,
largely falling within the evaporation path of normal seawater. The data from previous studies
were obtained mostly from those fluid inclusions capable of nucleating a stable bubble
after a weak stretching, which probably correspond to those having a lower salinity acquired
through post-depositional crack-and-seal processes. Our data suggest instead that the primary
gypsum precipitated from a marine brine, later modified by post-trapping processes
during tectonics and exhumation.
A Network Architecture for Point Cloud Classification via Automatic Depth Images Generation
We propose a novel neural network architecture for point cloud classification. Our key idea is to automatically transform the 3D unordered input data into a set of useful 2D depth images, and to classify them by exploiting well-performing image classification CNNs. We present new differentiable module designs to generate depth images from a point cloud. These modules can be combined with any network architecture for processing point clouds. We utilize them in combination with state-of-the-art classification networks, and obtain results competitive with the state of the art in point cloud classification. Furthermore, our architecture automatically produces informative images representing the input point cloud, which could be used for further applications such as point cloud visualization.
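The point-cloud-to-depth-image step can be sketched as below. Caveat: the paper's generation modules are learned and differentiable, whereas this is a fixed, non-differentiable orthographic projection, included only to make the underlying idea concrete; all names and the resolution are illustrative.

```python
import numpy as np

def depth_image(points, res=32):
    """Project an (N, 3) point cloud orthographically onto the XY plane:
    each pixel keeps the depth (normalized z) of its nearest point,
    with a background depth of 1.0 where no point projects."""
    pts = points - points.min(axis=0)
    pts = pts / pts.max()                       # normalize into [0, 1]
    img = np.full((res, res), 1.0)              # background = max depth
    ix = np.clip((pts[:, 0] * (res - 1)).astype(int), 0, res - 1)
    iy = np.clip((pts[:, 1] * (res - 1)).astype(int), 0, res - 1)
    for x, y, z in zip(ix, iy, pts[:, 2]):
        img[y, x] = min(img[y, x], z)           # keep the closest point
    return img

rng = np.random.default_rng(1)
img = depth_image(rng.random((200, 3)), res=16)
```

The resulting 2D array can be fed to any standard image-classification CNN, which is precisely what makes the depth-image representation attractive as an interface between unordered 3D data and mature 2D architectures.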
Guest Editorial Special Issue on Recent Advances in Theory, Methodology, and Applications of Imbalanced Learning
Imbalanced learning is a challenging task in machine learning, faced by practitioners and intensively investigated by researchers from a wide range of communities. However, as pointed out in the book titled “Imbalanced Learning: Foundations, Algorithms, and Applications”, collectively authored by experts in the field, many, if not most, of the approaches to imbalanced learning are heuristic and ad hoc in nature, hence leaving many questions unanswered. To fill this gap, the aim of this Special Issue is to collect recent research works that focus on the theory, methodology, and applications of imbalanced learning. After carefully reviewing a large number of submissions, we selected 15 works to be included in this Special Issue. These works can be roughly categorized into three types: deep-learning-based methods (6), methods based on other machine-learning paradigms (7), and empirical comparative studies (2).
Formal methods for industrial critical systems, preface to the special section
[EN] This special issue contains improved versions of selected papers from the workshops
on Formal Methods for Industrial Critical Systems (FMICS) held in Eindhoven,
The Netherlands, in November 2009 and in Antwerp, Belgium, in September
2010. These were, respectively, the 14th and 15th of a series of international
workshops organized by an open working group supported by ERCIM (European
Research Consortium for Informatics and Mathematics) that promotes research in
all aspects of formal methods (see details in http://www.inrialpes.fr/vasy/fmics/).
The FMICS workshops that have produced this special issue considered papers
describing original, previously unpublished research and not simultaneously submitted
for publication elsewhere, and dealing with the following themes:
Design, specification, code generation and testing based on formal methods.
Methods, techniques and tools to support automated analysis, certification,
debugging, learning, optimization and transformation of complex, distributed, real-time and embedded systems.
Verification and validation methods that address shortcomings of existing
methods with respect to their industrial applicability (e.g., scalability and
usability issues).
Tools for the development of formal design descriptions.
Case studies and experience reports on industrial applications of formal
methods, focusing on lessons learned or new research directions.
Impact and costs of the adoption of formal methods.
Application of formal methods in standardization and industrial forums.
The selected papers are the result of several evaluation steps. In response to the
call for papers, FMICS 2009 received 24 papers and FMICS 2010 received 33
papers, with 10 and 14 accepted, respectively, which were published by Springer-
Verlag in the series Lecture Notes in Computer Science (volumes 5825 [1] and
6371 [2]). Each paper was reviewed by at least three anonymous referees, who
provided full written evaluations. After the workshops, the authors of 10 papers
were invited to submit extended journal versions to this special issue. These papers
passed two review phases, and finally 7 were accepted for inclusion in the
journal. This work has been partially supported by the EU (FEDER) and the Spanish MEC TIN2010-21062-C02-02 project, the MICINN INNCORPORA-PTQ program, and by Generalitat Valenciana, ref. PROMETEO2011/052. Alpuente Frasnedo, M.; Joubert, C.; Kowalewski, S.; Roveri, M. (2013). Formal methods for industrial critical systems, preface to the special section. Science of Computer Programming 78(7):775-777. doi:10.1016/j.scico.2012.05.005