Deterministic Autopoietic Automata
This paper studies two issues related to the paper "Computing by Self-reproduction: Autopoietic Automata" by Jiri Wiedermann. It is shown that all results presented there extend to deterministic computations. In particular, nondeterminism is not needed for a lineage to generate all autopoietic automata.
The complexity of Presburger arithmetic with bounded quantifier alternation depth
It is shown how the method of Fischer and Rabin can be extended to get good lower bounds for Presburger arithmetic with a bounded number of quantifier alternations. In this case, the complexity is one exponential lower than in the unbounded case. This situation is typical for first-order theories.
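For orientation, here is a schematic LaTeX sketch of the shape of these bounds under the usual reading of the Fischer and Rabin result; the constants $c$, $c'$ and the exact machine model are placeholders, not the paper's precise statement:

    % Unbounded quantifier alternation (Fischer and Rabin): doubly
    % exponential lower bound for any decision procedure.
    \[
      T_{\mathrm{unbounded}}(n) \;\geq\; 2^{2^{c n}}
      \quad\text{for some } c > 0 \text{ and infinitely many sentence lengths } n.
    \]
    % Alternation depth bounded by a constant d: the lower bound is one
    % exponential lower.
    \[
      T_{\mathrm{depth} \leq d}(n) \;\geq\; 2^{c' n}
      \quad\text{for some } c' = c'(d) > 0 \text{ and infinitely many } n.
    \]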
An Improvement of Reed's Treewidth Approximation
We present a new approximation algorithm for the treewidth problem which
constructs a corresponding tree decomposition as well. Our algorithm is a
faster variation of Reed's classical algorithm. For the benefit of the reader,
and to be able to compare these two algorithms, we start with a detailed time
analysis for Reed's algorithm. We fill in many details that have been omitted
in Reed's paper. Computing tree decompositions parameterized by the treewidth $k$ is fixed parameter tractable (FPT), meaning that there are algorithms running in time $\mathcal{O}(f(k)\, g(n))$, where $f$ is a computable function, $g$ is a polynomial function, and $n$ is the number of vertices. An analysis of Reed's algorithm shows $f(k) = 2^{\mathcal{O}(k \log k)}$ and $g(n) = n \log n$ for a 5-approximation. Reed simply claims time $\mathcal{O}(n \log n)$ for bounded $k$ for his constant factor approximation algorithm, but the bound of $2^{\Omega(k \log k)}\, n \log n$ is well known. From a practical point of view, we notice that the time of Reed's algorithm also contains a term that is singly exponential in $k$ with a large constant in the exponent, which for small $k$ is much worse than the asymptotically leading term of $2^{\mathcal{O}(k \log k)}\, n \log n$. We analyze $f(k)$ more precisely, because the purpose of this paper is to improve the running times for all reasonably small values of $k$.
Our algorithm runs in time $f(k)\, n \log n$ too, but with a much smaller dependence on $k$; in our case, $f(k) = 2^{\mathcal{O}(k)}$. This algorithm is simple and fast, especially for small values of $k$. We should mention that Bodlaender et al. [2016] have an asymptotically faster algorithm running in time $2^{\mathcal{O}(k)}\, n$. It relies on a very sophisticated data structure and does not claim to be useful for small values of $k$.
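To make the "small $k$" point concrete, here is a purely illustrative Python sketch contrasting a $2^{k \log k}$-type dependence with a single-exponential $2^{k}$-type dependence on top of the $n \log n$ part; the constants and hidden factors are placeholders, not values taken from the paper:

    # Illustrative only: compare two FPT running-time shapes as functions
    # of k, each multiplied by n log n. Constants are placeholders.
    import math

    def reed_style(k, n):
        # roughly 2^(k * log2 k) * n * log2 n  (placeholder shape)
        return 2 ** (k * math.log2(max(k, 2))) * n * math.log2(n)

    def single_exp(k, n):
        # roughly 2^k * n * log2 n  (placeholder shape)
        return 2 ** k * n * math.log2(n)

    if __name__ == "__main__":
        n = 10_000
        for k in (4, 8, 12, 16):
            print(k, f"{reed_style(k, n):.3e}", f"{single_exp(k, n):.3e}")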
Approximately Counting Embeddings into Random Graphs
Let H be a graph, and let C_H(G) be the number of (subgraph isomorphic)
copies of H contained in a graph G. We investigate the fundamental problem of
estimating C_H(G). Previous results cover only a few specific instances of this
general problem, for example, the case when H has degree at most one
(monomer-dimer problem). In this paper, we present the first general subcase of
the subgraph isomorphism counting problem which is almost always efficiently
approximable. The results rely on a new graph decomposition technique.
Informally, the decomposition is a labeling of the vertices such that every
edge is between vertices with different labels and for every vertex all
neighbors with a higher label have identical labels. The labeling implicitly
generates a sequence of bipartite graphs which permits us to break the problem
of counting embeddings of large subgraphs into that of counting embeddings of
small subgraphs. Using this method, we present a simple randomized algorithm
for the counting problem. For all decomposable graphs H and all graphs G, the
algorithm is an unbiased estimator. Furthermore, for all graphs H having a
decomposition where each of the bipartite graphs generated is small and almost
all graphs G, the algorithm is a fully polynomial randomized approximation
scheme.
We show that the graph classes of H for which we obtain a fully polynomial randomized approximation scheme for almost all G include graphs of degree at most two, bounded-degree forests, bounded-length grid graphs, subdivisions of bounded-degree graphs, and major subclasses of outerplanar graphs, series-parallel graphs, and planar graphs, whereas unbounded-length grid graphs are excluded.
Comment: An earlier version appeared in Random 2008. Fixed a typo in Definition 3.
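As a hedged reading of the informal decomposition above (the function and variable names here are illustrative, not the paper's formal notation), the two stated labeling properties can be checked as follows in Python:

    # Sketch: check the informally described labeling of a graph H.
    # Property 1: every edge joins vertices with different labels.
    # Property 2: for every vertex, all neighbors with a strictly higher
    # label carry one and the same label.
    # 'adj' maps each vertex to its set of neighbors; 'label' maps each
    # vertex to an integer label.

    def is_valid_labeling(adj, label):
        for v, neighbors in adj.items():
            for w in neighbors:
                if label[v] == label[w]:          # violates property 1
                    return False
            higher = {label[w] for w in neighbors if label[w] > label[v]}
            if len(higher) > 1:                   # violates property 2
                return False
        return True

    # Example: a path a-b-c labeled 0, 1, 2 satisfies both properties.
    adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
    print(is_valid_labeling(adj, {"a": 0, "b": 1, "c": 2}))  # True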
Supervised Speaker Diarization Using Random Forests: A Tool for Psychotherapy Process Research
Speaker diarization is the practice of determining who speaks when in audio recordings. Psychotherapy research often relies on labor-intensive manual diarization. Unsupervised methods are available but yield higher error rates. We present a method for supervised speaker diarization based on random forests. It can be considered a compromise between commonly used labor-intensive manual coding and fully automated procedures. The method is validated using the EMRAI synthetic speech corpus and is made publicly available. It yields low diarization error rates (M: 5.61%, STD: 2.19). Supervised speaker diarization is a promising method for psychotherapy research and similar fields.
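A minimal sketch of the supervised, frame-level idea, assuming MFCC features extracted with librosa and per-frame speaker labels; the feature choice, window handling, and corpus loading are assumptions, not the authors' published pipeline:

    # Frame-level supervised speaker diarization with a random forest.
    import librosa
    from sklearn.ensemble import RandomForestClassifier

    def frame_features(wav_path, sr=16000):
        # MFCCs per analysis frame (assumed features, 13 coefficients).
        y, sr = librosa.load(wav_path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, n_frames)
        return mfcc.T                                       # (n_frames, 13)

    def train_diarizer(X_train, y_train):
        # X_train: stacked frame features from labeled sessions;
        # y_train: speaker id per frame (e.g. 0 = therapist, 1 = patient).
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train, y_train)
        return clf

    # Usage sketch (paths and labels are hypothetical):
    # labels = train_diarizer(X_train, y_train).predict(frame_features("session.wav"))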
Withdrawal ruptures in adolescents with borderline personality disorder psychotherapy are marked by increased speech pauses – can minimal responses be automatically detected?
Alliance ruptures of the withdrawal type are prevalent in adolescents with borderline personality disorder (BPD). Longer speech pauses are negatively perceived by these patients. Safran and Muran's rupture model is promising, but its application is very work-intensive. This workload makes research costly and limits clinical usage. We hypothesised that pauses can be used to automatically detect one of the markers of the rupture model, i.e., the minimal response marker. Additionally, the association of withdrawal ruptures with pauses was investigated. A total of 516 ruptures occurring in 242 psychotherapy sessions collected in 22 psychotherapies of adolescent patients with BPD and subthreshold BPD were investigated. Trained observers detected ruptures based on video and audio recordings. In contrast, pauses were automatically marked in the audio recordings of the psychotherapy sessions, and automatic speaker diarisation was used to determine the speaker-switching patterns in which the pauses occur. A random forest classifier detected time frames in which ruptures with the minimal response marker occurred, based on the quantity of pauses. Performance was very good, with an area under the ROC curve of 0.89. Pauses which were both preceded and followed by therapist speech were the most important predictors for minimal response ruptures. Research costs can be reduced by using machine learning techniques instead of manual rating for rupture detection. In combination with other video- and audio-derived features like movement analysis or automatic facial emotion detection, more complete rupture detection might be possible in the future. These innovative machine learning techniques help to narrow down the mechanisms of change of psychotherapy, here specifically of the therapeutic alliance. They might also be used to technologically augment psychotherapy training and supervision.
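A hedged sketch of the classification step described above, with random placeholder data standing in for the real pause features; the feature layout (pause counts split by speaker-switch pattern per time frame) is an assumption based on the abstract, not the study's exact design:

    # Detect time frames containing a minimal-response rupture from pause
    # counts using a random forest, evaluated by ROC AUC.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # One row per time frame; hypothetical columns: counts of pauses that
    # are therapist->therapist, therapist->patient, patient->therapist,
    # and patient->patient within the frame.
    X = np.random.rand(500, 4)              # placeholder features
    y = np.random.randint(0, 2, size=500)   # 1 = frame contains the marker

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]
    print("ROC AUC:", roc_auc_score(y_te, scores))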