Computer Aided Verification
This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking; program analysis using polyhedra; synthesis; learning; runtime verification; hybrid and timed systems; tools; probabilistic systems; static analysis; theory and security; SAT, SMT, and decision procedures; concurrency; and CPS, hardware, and industrial applications.
Aspects of emergent cyclicity in language and computation
This thesis has four parts, which correspond to the presentation and development of a theoretical
framework for the study of cognitive capacities qua physical phenomena, and a case study of locality conditions over natural languages.
Part I deals with computational considerations, setting the tone for the rest of the thesis, and introducing and defining critical concepts like ‘grammar’, ‘automaton’, and the relations between them. Fundamental questions concerning the place of formal language theory in
linguistic inquiry, as well as the expressibility of linguistic and computational concepts in
common terms, are raised in this part.
Part II further explores the issues addressed in Part I with particular emphasis on how
grammars are implemented by means of automata, and the properties of the formal languages
that these automata generate. We will argue against the equation between effective computation
and function-based computation, and introduce examples of computable procedures which are
nevertheless impossible to capture using traditional function-based theories. The connection
with cognition will be made in the light of dynamical frustrations: the irreconcilable tension
between mutually incompatible tendencies that hold for a given dynamical system. We will
provide arguments in favour of analyzing natural language as emerging from a tension between
different systems (essentially, semantics and morpho-phonology) which impose orthogonal
requirements over admissible outputs. The concept of level of organization or scale comes to
the foreground here; and apparent contradictions and incommensurabilities between concepts
and theories are revisited in a new light: that of dynamical nonlinear systems which are
fundamentally frustrated. We will also characterize the computational system that emerges from
such an architecture: the goal is to get a syntactic component which assigns to sub-strings the
simplest possible structural description in terms of computational complexity. A
system which can oscillate back and forth in the hierarchy of formal languages in assigning
structural representations to local domains will be referred to as a computationally mixed
system.
Part III is where the really fun stuff starts. Field theory is introduced, and its applicability to
neurocognitive phenomena is made explicit, with all due scale considerations. Physical and
mathematical concepts are permanently interacting as we analyze phrase structure in terms of
pseudo-fractals (in Mandelbrot’s sense) and define syntax as a (possibly unary) set of
topological operations over completely Hausdorff (CH) ultrametric spaces. These operations, which make field perturbations interfere, transform that initial completely Hausdorff
ultrametric space into a metric Hausdorff space with a weaker separation axiom. Syntax, in this
proposal, is not ‘generative’ in any traditional sense (except the ‘fully explicit theory’ one):
rather, it partitions (technically, ‘parametrizes’) a topological space. Syntactic dependencies are
defined as interferences between perturbations over a field, which reduce the total entropy of
the system per cycle, at the cost of introducing further dimensions where attractors
corresponding to interpretations for a phrase marker can be found.
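The strong triangle inequality that defines an ultrametric, d(x, z) ≤ max(d(x, y), d(y, z)), is the property that the proposed syntactic operations weaken into an ordinary metric. As a minimal illustration of what ultrametricity amounts to (a standard textbook construction, not taken from the thesis), the longest-common-prefix distance on strings is ultrametric:

```python
# Toy ultrametric on strings: distance 2^-k, where k is the length of
# the longest common prefix (a standard ultrametric example; not the
# thesis's construction, which works over field perturbations).
def lcp_len(a: str, b: str) -> int:
    k = 0
    while k < min(len(a), len(b)) and a[k] == b[k]:
        k += 1
    return k

def d(a: str, b: str) -> float:
    return 0.0 if a == b else 2.0 ** (-lcp_len(a, b))

# The strong triangle inequality d(x,z) <= max(d(x,y), d(y,z)) holds
# for every triple, which is what makes this space ultrametric.
words = ["cat", "car", "cart", "dog", ""]
for x in words:
    for y in words:
        for z in words:
            assert d(x, z) <= max(d(x, y), d(y, z)) + 1e-12
```

Under such a metric every triangle is isosceles and balls are nested or disjoint, which is why ultrametric spaces have the tree-like (hierarchical) structure the abstract connects to phrase structure.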
Part IV is a sample of what we can gain by further pursuing the physics of language approach,
both in terms of empirical adequacy and theoretical elegance, not to mention the unlimited
possibilities of interdisciplinary collaboration. In this section we set our focus on island
phenomena as defined by Ross (1967), critically revisiting the most relevant literature on this
topic, and establishing a typology of constructions that are strong islands, which cannot be
violated. These constructions are particularly interesting because they limit the phase space of
what is expressible via natural language, and thus reveal crucial aspects of its underlying
dynamics. We will argue that a dynamically frustrated system which is characterized by
displaying mixed computational dependencies can provide straightforward characterizations of
cyclicity in terms of changes in dependencies in local domains.
Harmonic analysis of music using combinatory categorial grammar
FP7 grant 249520 (GRAMPLUS)
Various patterns of the organization of Western tonal music exhibit hierarchical structure,
among them the harmonic progressions underlying melodies and the metre underlying
rhythmic patterns. Recognizing these structures is an important part of unconscious
human cognitive processing of music. Since the prosody and syntax of natural
languages are commonly analysed with similar hierarchical structures, it is reasonable
to expect that the techniques used to identify these structures automatically in natural
language might also be applied to the automatic interpretation of music.
In natural language processing (NLP), analysing the syntactic structure of a sentence
is prerequisite to semantic interpretation. The analysis is made difficult by the
high degree of ambiguity in even moderately long sentences. In music, a similar sort of
structural analysis, with a similar degree of ambiguity, is fundamental to tasks such as
key identification and score transcription. These and other tasks depend on harmonic
and rhythmic analyses. There is a long history of applying linguistic analysis techniques
to musical analysis. In recent years, statistical modelling, in particular in the
form of probabilistic models, has become ubiquitous in NLP for large-scale practical
analysis of language. The focus of the present work is the application of statistical
parsing to automatic harmonic analysis of music.
This thesis demonstrates that statistical parsing techniques, adapted from NLP with
little modification, can be successfully applied to recovering the harmonic structure
underlying music. It shows first how a type of formal grammar based on one used
for linguistic syntactic processing, Combinatory Categorial Grammar (CCG), can be
used to analyse the hierarchical structure of chord sequences. I introduce a formal
language similar to first-order predicate logic to express the hierarchical tonal harmonic
relationships between chords. The syntactic grammar formalism then serves as
a mechanism to map an unstructured chord sequence onto its structured analysis.
In NLP, the high degree of ambiguity of the analysis means that a parser must
consider a huge number of possible structures. Chart parsing provides an efficient
mechanism to explore them. Statistical models allow the parser to use information
about structures seen before in a training corpus to eliminate improbable interpretations
early on in the process and to rank the final analyses by plausibility. To apply the
same techniques to harmonic analysis of chord sequences, a corpus of tonal jazz chord
sequences annotated by hand with harmonic analyses is constructed. Two statistical
parsing techniques are adapted to the present task and evaluated on their success at recovering the annotated structures. The experiments show that parsing using a statistical
model of syntactic derivations is more successful than a Markovian baseline
model at recovering harmonic structure. In addition, the practical technique of statistical
supertagging serves to speed up parsing without any loss in accuracy.
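The chart-parsing mechanism described above can be sketched with a probabilistic CKY parser over a toy chord grammar. Every category name, rule, and probability below is invented for illustration only; the thesis's actual CCG formalism and harmonic logic are considerably richer:

```python
import math
from collections import defaultdict

# Toy probabilistic grammar standing in for the harmonic grammar.
# Categories ("Dom", "Ton") and probabilities are invented.
LEXICON = {            # chord symbol -> list of (category, log-prob)
    "G7": [("Dom", math.log(0.9))],
    "C":  [("Ton", math.log(0.8)), ("Dom", math.log(0.2))],
}
RULES = {              # (left, right) children -> list of (parent, log-prob)
    ("Dom", "Ton"): [("Ton", math.log(0.7))],   # V -> I resolution
    ("Ton", "Ton"): [("Ton", math.log(0.3))],
}

def cky(chords):
    """Probabilistic CKY: chart[(i, j)] maps each derivable category
    to the best log-probability over the span chords[i:j]."""
    n = len(chords)
    chart = defaultdict(dict)
    for i, c in enumerate(chords):                 # fill lexical spans
        for cat, lp in LEXICON.get(c, []):
            chart[(i, i + 1)][cat] = lp
    for width in range(2, n + 1):                  # combine adjacent spans
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for lc, lp1 in chart[(i, k)].items():
                    for rc, lp2 in chart[(k, j)].items():
                        for parent, lp in RULES.get((lc, rc), []):
                            score = lp1 + lp2 + lp
                            if score > chart[(i, j)].get(parent, -math.inf):
                                chart[(i, j)][parent] = score
    return chart[(0, n)]

print(cky(["G7", "C"]))   # best-scoring analyses of the full span
```

The chart shares sub-analyses across competing parses, which is what keeps the search over the hugely ambiguous space tractable; a statistical model (or supertagger) then prunes the low-probability entries.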
This approach to recovering harmonic structure can be extended to the analysis of
performance data symbolically represented as notes. Experiments using some simple
proof-of-concept extensions of the above parsing models demonstrate one probabilistic
approach to this. The results reported provide a baseline for future work on the task of
harmonic analysis of performances.
Temporal Information in Data Science: An Integrated Framework and its Applications
Data science is a well-known buzzword that is in fact composed of two distinct keywords, i.e., data and science. Data itself is of great importance: each analysis task begins from a set of examples. Based on this consideration, the present work starts with the analysis of a real-world scenario, considering the development of a data warehouse-based decision support system for an Italian contact center company. Then, relying on the information collected in the developed system, a set of machine learning-based analysis tasks were developed to answer specific business questions, such as employee work anomaly detection and automatic call classification. Although these initial applications rely on already available algorithms, as we shall see, some clever analysis workflows also had to be developed. Afterwards, continuously driven by real data and real-world applications, we turned to the question of how to handle temporal information within classical decision tree models. Our research led us to develop J48SS, a decision tree induction algorithm based on Quinlan's C4.5 learner, which is capable of dealing with temporal (e.g., sequential and time series) as well as atemporal (e.g., numerical and categorical) data during the same execution cycle. The decision tree has been applied to some real-world analysis tasks, proving its worth. A key characteristic of J48SS is its interpretability, an aspect that we specifically addressed through the study of an evolutionary decision tree pruning technique. Next, since a great deal of work on the management of temporal information has already been done in the automated reasoning and formal verification fields, a natural direction in which to proceed was to investigate how such solutions may be combined with machine learning, following two main tracks.
First, we show, through the development of an enriched decision tree capable of encoding temporal information by means of interval temporal logic formulas, how a machine learning algorithm can successfully exploit temporal logic to perform data analysis. Then, we focus on the opposite direction, i.e., that of employing machine learning techniques to generate temporal logic formulas, considering a natural language processing scenario. Finally, as a concluding development, the architecture of a system is proposed in which formal methods and machine learning techniques are seamlessly combined to perform anomaly detection and predictive maintenance tasks. Such an integration represents an original, thrilling research direction that may open up new ways of dealing with complex, real-world problems.
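One common way a decision-tree node can test a time-series attribute is a shapelet-style split: threshold the minimum distance between a short candidate pattern and any equal-length window of the series. The sketch below is a generic illustration of such a temporal split, not the actual J48SS algorithm; all names and values are invented:

```python
import math

def min_subsequence_dist(series, shapelet):
    """Smallest Euclidean distance between `shapelet` and any
    equal-length window of `series` (the classic shapelet test)."""
    m = len(shapelet)
    best = math.inf
    for i in range(len(series) - m + 1):
        dist = math.sqrt(sum((series[i + j] - shapelet[j]) ** 2 for j in range(m)))
        best = min(best, dist)
    return best

def temporal_split(series, shapelet, threshold):
    """A single decision-tree node: route the example left when the
    pattern occurs somewhere (distance below threshold), else right."""
    return "left" if min_subsequence_dist(series, shapelet) < threshold else "right"

# A spike pattern is present in the first series but not the second.
print(temporal_split([0, 0, 1, 5, 1, 0], [1, 5, 1], threshold=0.5))  # -> left
print(temporal_split([0, 0, 0, 0, 0, 0], [1, 5, 1], threshold=0.5))  # -> right
```

A split of this kind remains interpretable in the sense the abstract emphasizes: the node can be read as "does this characteristic sub-pattern occur in the signal?", alongside ordinary numerical and categorical tests in the same tree.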
BNAIC 2008: Proceedings of BNAIC 2008, the Twentieth Belgian-Dutch Artificial Intelligence Conference
Automatic Pain Assessment by Learning from Multiple Biopotentials
Accurate pain assessment plays an important role in proper pain management, especially among hospitalized patients experiencing acute pain. Pain is subjective in nature: it is not only a sensory feeling but may also combine affective factors. Self-report pain scales are therefore the main assessment tools, as long as patients are able to self-report. However, it remains a challenge to assess pain in patients who cannot self-report. In clinical practice, physiological parameters like heart rate, and pain behaviors including facial expressions, are observed as empirical references from which to infer pain objectively. The main aim of this study is to automate this process by leveraging machine learning methods and biosignal processing.
To achieve this goal, biopotentials reflecting autonomic nervous system activity, including the electrocardiogram and galvanic skin response, together with facial expressions measured with facial electromyograms, were recorded from healthy volunteers undergoing experimental pain stimuli. IoT-enabled biopotential acquisition systems were developed to build the database, aiming to provide compact and wearable solutions. Using the database, a biosignal processing flow was developed for continuous pain estimation. Signal features were extracted with customized time window lengths and updated every second. The extracted features were visualized and fed into multiple classifiers trained to estimate the presence of pain and pain intensity separately. Among the tested classifiers, the best pain presence estimation achieved 90% sensitivity (84% specificity), and the best pain intensity estimation achieved 62.5% accuracy.
The results show the validity of the proposed processing flow, especially in pain presence estimation at the window level. This study adds one more piece of evidence for the feasibility of developing an automatic pain assessment tool from biopotentials, providing the confidence to move forward to real pain cases. In addition to the method development, the similarities and differences between automatic pain assessment studies were compared and summarized. It was found that, in addition to the diversity of signals, the estimation goals also differed as a result of different study designs, which made cross-dataset comparison challenging. We also discuss which parts of the classical processing flow limit or boost prediction performance, and whether optimization can bring a breakthrough from the system's perspective.
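The reported figures are confusion-matrix rates computed over per-window binary predictions. A minimal computation of sensitivity and specificity for a pain/no-pain labelling (the labels and predictions below are invented for illustration):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for a
    binary labelling (1 = pain present, 0 = no pain)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Ten one-second windows with invented ground truth and predictions.
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # -> 0.8 0.8
```

Reporting both rates matters here because pain-presence windows may be rare: a classifier that always predicts "no pain" would score high accuracy but zero sensitivity.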