311 research outputs found
Beyond Bayesian Model Averaging over Paths in Probabilistic Programs with Stochastic Support
The posterior in probabilistic programs with stochastic support decomposes as
a weighted sum of the local posterior distributions associated with each
possible program path. We show that making predictions with this full posterior
implicitly performs a Bayesian model averaging (BMA) over paths. This is
potentially problematic, as model misspecification can cause the BMA weights to
prematurely collapse onto a single path, leading to sub-optimal predictions in
turn. To remedy this issue, we propose alternative mechanisms for path
weighting: one based on stacking and one based on ideas from PAC-Bayes. We show
how both can be implemented as a cheap post-processing step on top of existing
inference engines. In our experiments, we find them to be more robust and to
lead to better predictions than the default BMA weights.
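The stacking alternative admits a compact illustration: treat each path's predictive log-densities on held-out data as fixed, and optimize mixture weights on the probability simplex. The following is a minimal sketch with made-up numbers, not the paper's implementation, and the PAC-Bayes variant is omitted:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax

# Hypothetical log predictive densities: rows are held-out points,
# columns are program paths (all numbers are illustrative only).
log_p = np.array([
    [-1.0, -2.5, -3.0],
    [-1.2, -0.8, -2.0],
    [-3.0, -0.9, -1.5],
    [-1.1, -2.2, -0.7],
])

def neg_log_score(z):
    """Negative held-out log score of the mixture with weights softmax(z)."""
    w = softmax(z)  # map unconstrained z onto the probability simplex
    return -logsumexp(log_p + np.log(w), axis=1).sum()

# Stacking as cheap post-processing: optimize the weights, starting uniform.
res = minimize(neg_log_score, np.zeros(log_p.shape[1]))
w_stack = softmax(res.x)
```

Because the score is evaluated on held-out data, the weights cannot collapse onto a single path unless that path actually predicts best everywhere.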
Accessing spoken interaction through dialogue processing [online]
Summary
Our lives, our achievements, and our environment are all currently
documented in written language. The rapid advance of technologies for
recording, storing, and replaying audio, images, and video can be used
to support, supplement, or even replace the written documentation of
human communication, for example of meetings. These new technologies
can enable us to capture information that would otherwise be lost, to
lower the cost of documentation, and to enrich high-quality documents
with audiovisual material. Indexing such recordings is the key
technology for realizing this potential. This work presents effective
alternatives to keyword-based indices that restrict the search space
and can in part be computed with very simple means.
Speech documents can be indexed at various levels: stylistically, a
document belongs to a particular database, which can be determined
automatically with high accuracy from very simple features. This kind
of classification can reduce the search space by a factor on the order
of 4 to 10. Applying topical features to text classification on a news
database yields a reduction by a factor of 18. Since speech documents
can be very long, they must be divided into topical segments. A new
probabilistic approach and new features (speaker initiative and style)
deliver results comparable to or better than traditional keyword-based
approaches. These topical segments can be characterized by their
predominant activity (storytelling, discussing, planning, ...), which
can be detected by a neural network. The detection rates are limited,
however, since even humans identify these activities only imprecisely.
A maximum search-space reduction by a factor of 6 is theoretically
possible on the data used. A topical classification of these segments
was also carried out on one database, but the detection rates for this
index are low.
At the level of individual utterances, dialogue acts such as
statements, questions, backchannels (aha, oh yes, really?, ...), etc.
can be recognized with a discriminatively trained hidden Markov model.
This method can be extended to recognize short sequences such as
question/answer games (dialogue games). Dialogue acts and games can be
used to build classifiers for global speaking styles. Likewise, a user
might remember a particular dialogue-act sequence and try to find it
again in a graphical representation.
In a study with very pessimistic assumptions, users were able to
identify one of four similar and equally probable meetings with an
accuracy of about 43% from a graphical representation of activity.
Dialogue acts might be equally useful in this scenario, but the user
study could not give a definitive answer because of the small amount
of data. However, the study could not show any effect for detailed
basic features such as formality and speaker identity.
Abstract
Written language is one of our primary means for documenting our
lives, achievements, and environment. Our capabilities to
record, store and retrieve audio, still pictures, and video are
undergoing a revolution and may support, supplement or even
replace written documentation. This technology enables us to
record information that would otherwise be lost, lower the cost
of documentation and enhance high-quality documents with
original audiovisual material.
The indexing of the audio material is the key technology to
realize those benefits. This work presents effective
alternatives to keyword-based indices, which restrict the search
space and may in part be calculated with very limited resources.
Indexing speech documents can be done at various levels:
Stylistically, a document belongs to a certain database, which
can be determined automatically with high accuracy using very
simple features. The resulting search-space reduction factor is
on the order of 4 to 10, while topic classification yielded a
factor of 18 in a news domain.
Since documents can be very long, they need to be segmented into
topical regions. A new probabilistic segmentation framework as
well as new features (speaker initiative and style) prove to be
very effective compared to traditional keyword-based methods. At
the topical segment level, activities (storytelling, discussing,
planning, ...) can be detected using a machine learning approach
with limited accuracy; however, even human annotators do not
annotate them very reliably. A maximum search-space reduction
factor of 6 is theoretically possible on the databases used. A
topical classification of these regions has been attempted on
one database; the detection accuracy for that index, however,
was very low.
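The idea of segmenting a long recording into topical regions can be illustrated with a much simpler lexical-cohesion heuristic (TextTiling-style), not the thesis's probabilistic framework or its speaker-initiative and style features: place a boundary wherever adjacent utterances share too little vocabulary.

```python
import numpy as np
from collections import Counter

def cosine(a, b):
    # cosine similarity between two bag-of-words Counters
    keys = set(a) | set(b)
    va = np.array([a.get(k, 0) for k in keys], dtype=float)
    vb = np.array([b.get(k, 0) for k in keys], dtype=float)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def boundaries(utterances, threshold=0.1):
    """Indices i such that a topic boundary is placed after utterance i."""
    bows = [Counter(u.lower().split()) for u in utterances]
    return [i for i in range(len(bows) - 1)
            if cosine(bows[i], bows[i + 1]) < threshold]

talk = [
    "the budget for the project is too high",
    "we should cut the budget by ten percent",
    "lunch today was really nice",
    "yes the food was great",
]
print(boundaries(talk))  # -> [1]: a topic shift after the budget discussion
```

Real segmenters smooth similarity over windows of several utterances; the single-utterance comparison here is kept only to make the cohesion idea visible.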
At the utterance level, dialogue acts such as statements,
questions, backchannels (aha, yeah, ...), etc. are recognized
using a novel discriminatively trained HMM procedure. The
procedure can be extended to recognize short sequences such as
question/answer pairs, so-called dialogue games. Dialogue acts
and games are useful for building classifiers for speaking
style. Similarly, a user may remember a certain dialogue act
sequence and search for it in a graphical representation.
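The HMM recognition step can be sketched with a standard Viterbi decoder over dialogue-act states. All probabilities below are invented for illustration, and the thesis's discriminative training procedure is not shown:

```python
import numpy as np

# Toy dialogue-act HMM: states, initial/transition/emission log-probs.
acts = ["statement", "question", "backchannel"]
log_init = np.log([0.6, 0.3, 0.1])
log_trans = np.log([
    [0.6, 0.3, 0.1],   # from statement
    [0.2, 0.2, 0.6],   # questions are often followed by backchannels
    [0.7, 0.2, 0.1],   # from backchannel
])
# log p(observed utterance features | act) for a 3-utterance exchange
log_emit = np.log([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.1, 0.7],
])

def viterbi(log_init, log_trans, log_emit):
    """Most likely state sequence under the HMM (max-product recursion)."""
    T, K = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans      # cand[i, j]: best arrival at j via i
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

print([acts[k] for k in viterbi(log_init, log_trans, log_emit)])
# -> ['statement', 'question', 'backchannel']
```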
In a study with very pessimistic assumptions, users were able to
pick one out of four similar and equiprobable meetings with an
accuracy of ~43% using graphical activity information. Dialogue
acts may be useful in this situation as well, but the sample
size did not allow final conclusions to be drawn. The user
study, however, failed to show any effect for detailed basic
features such as formality or speaker identity.
Benchopt: Reproducible, efficient and collaborative optimization benchmarks
Numerical validation is at the core of machine learning research, as it allows
researchers to assess the actual impact of new methods and to confirm the agreement
between theory and practice. Yet, the rapid development of the field poses
several challenges: researchers are confronted with a profusion of methods to
compare, limited transparency and consensus on best practices, as well as
tedious re-implementation work. As a result, validation is often very partial,
which can lead to wrong conclusions that slow down the progress of research. We
propose Benchopt, a collaborative framework to automate, reproduce and publish
optimization benchmarks in machine learning across programming languages and
hardware architectures. Benchopt simplifies benchmarking for the community by
providing an off-the-shelf tool for running, sharing and extending experiments.
To demonstrate its broad usability, we showcase benchmarks on three standard
learning tasks: ℓ2-regularized logistic regression, Lasso, and ResNet18
training for image classification. These benchmarks highlight key practical
findings that give a more nuanced view of the state-of-the-art for these
problems, showing that for practical evaluation, the devil is in the details.
We hope that Benchopt will foster collaborative work in the community, hence
improving the reproducibility of research findings.
Comment: Accepted in the proceedings of NeurIPS 22; the Benchopt library
documentation is available at https://benchopt.github.io
Path dependence, its critics and the quest for 'historical economics'
The concept of path dependence refers to a property of contingent, non-reversible dynamical processes, including a wide array of biological and social processes that can properly be described as 'evolutionary.' To dispel existing confusions in the literature, and to clarify the meaning and significance of path dependence for economists, the paper formulates definitions that relate the phenomenon to the property of non-ergodicity in stochastic processes; it examines the nature of the relationship between path dependence and 'market failure,' and discusses the meaning of 'lock-in.' Unlike tests for the presence of non-ergodicity, assessments of the economic significance of path dependence are shown to involve difficult issues of counterfactual specification, and the welfare evaluation of alternative dynamic paths rather than terminal states. The policy implications of the existence of path dependence are shown to be more subtle and, as a rule, quite different from those which have been presumed by critics of the concept. A concluding section applies the notion of 'lock-in' reflexively to the evolution of economic analysis, suggesting that resistance to historical economics is a manifestation of 'sunk cost hysteresis' in the sphere of human cognitive development.
Keywords: path dependence, non-ergodicity, irreversibility, lock-in, counterfactual analysis
Taming Model Uncertainty in Self-adaptive Systems Using Bayesian Model Averaging
Research on uncertainty quantification and mitigation in software-intensive systems and (self-)adaptive systems is gaining momentum, especially with the availability of statistical inference techniques (such as Bayesian reasoning) that make it possible to mitigate uncertain (quality) attributes of the system under scrutiny, often encoded in the system model in terms of model parameters. However, to the best of our knowledge, the uncertainty about the choice of a specific system model has not received the attention it deserves. This paper focuses on self-adaptive systems and investigates how to mitigate the uncertainty related to the model selection process, that is, whenever one model is chosen over plausible alternative and competing models to represent the understanding of a system and make predictions about future observations. In particular, we propose to enhance the classical feedback loop of a self-adaptive system with the ability to tame the model uncertainty using Bayesian Model Averaging. This method improves the predictions made by the analyze component as well as the plan component, which adopts metaheuristic optimizing search to guide the adaptation decisions. Our empirical evaluation demonstrates the cost-effectiveness of our approach using an exemplar case study in the robotics domain.
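The averaging step itself can be sketched outside any feedback loop. The models, likelihoods, and predictions below are hypothetical, and the paper's feedback-loop integration and metaheuristic planning are not shown:

```python
import numpy as np

# Three candidate system models; each assigns a likelihood to every
# incoming observation and offers a point prediction (made-up numbers).
prior = np.full(3, 1 / 3)            # uniform prior over models
obs_lik = np.array([                 # p(y_t | M_k) for three observations
    [0.30, 0.10, 0.05],
    [0.25, 0.12, 0.04],
    [0.28, 0.15, 0.06],
])
preds = np.array([1.9, 2.6, 3.4])    # each model's next-step prediction

# Sequential Bayesian update of the model probabilities.
post = prior.copy()
for lik in obs_lik:
    post = post * lik
    post /= post.sum()

# BMA prediction: probability-weighted average instead of a single model.
y_bma = float(post @ preds)
```

Rather than committing to the single best model, the prediction is hedged across all plausible models in proportion to how well each one explains the observations so far.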
- âŠ