Public Reason and Teaching Science in a Multicultural World: a Comment on Cobern and Loving's "An Essay for Educators" in the Light of John Rawls' Political Philosophy
This is a comment on the article "An Essay for Educators: Epistemological Realism Really is Common Sense" written by Cobern and Loving in Science & Education. The skillful analysis of the two authors concerning the problematic role of scientism in school science is fully appreciated, as is their diagnosis that it is scientism, not universal scientific realism, which is the cause of epistemological imperialism. But how should science teachers deal with scientism in the concrete, everyday situation of the science classroom and in contact with classes and students? John Rawls' concept of public reason offers three "cardinal strategies" to achieve this aim: proviso, declaration and conjecture. The theoretical framework is provided, the three strategies are described, and their relevance is fleshed out with a concrete example.
RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition
We demonstrate the fast training and decoding speed of RETURNN for attention models for translation, enabled by fast CUDA LSTM kernels and a fast pure TensorFlow beam search decoder. We show that a layer-wise pretraining scheme for recurrent attention models gives an absolute BLEU improvement of over 1% and allows training deeper recurrent encoder networks. Promising preliminary results on maximum expected BLEU training are presented. We are able to train state-of-the-art models for translation and end-to-end models for speech recognition and show results on WMT 2017 and Switchboard. The flexibility of RETURNN allows a fast research feedback loop for experimenting with alternative architectures, and its generality allows it to be used in a wide range of applications.
Comment: accepted as demo paper at ACL 201
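For illustration, here is a minimal sketch of the layer-wise pretraining idea mentioned above: training starts with a shallow recurrent encoder whose depth is grown in stages, keeping the already-trained layers and initializing only the new top layer from scratch. This is a hypothetical PyTorch-style reconstruction, not RETURNN's actual configuration mechanism; the layer sizes and the growth schedule are placeholders.

```python
import torch
import torch.nn as nn

class GrowingLSTMEncoder(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.hidden_dim = hidden_dim
        # Start shallow: a single bidirectional LSTM layer.
        self.layers = nn.ModuleList([
            nn.LSTM(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        ])

    def add_layer(self) -> None:
        # Append one more (randomly initialized) LSTM layer on top;
        # the previously trained layers keep their weights.
        self.layers.append(
            nn.LSTM(2 * self.hidden_dim, self.hidden_dim,
                    batch_first=True, bidirectional=True)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for lstm in self.layers:
            x, _ = lstm(x)
        return x

# Hypothetical growth schedule: 1 -> 4 bidirectional LSTM layers over pretraining stages.
encoder = GrowingLSTMEncoder(input_dim=40, hidden_dim=512)
for stage in range(4):
    # ... train for a few epochs at the current depth, then grow ...
    if stage < 3:
        encoder.add_layer()
```

The intended effect is that the deeper encoder configurations are only trained once the shallower ones have already reached a reasonable state.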
Language Modeling with Deep Transformers
We explore deep autoregressive Transformer models in language modeling for
speech recognition. We focus on two aspects. First, we revisit Transformer
model configurations specifically for language modeling. We show that well-configured Transformer models outperform our baseline models based on a shallow stack of LSTM recurrent neural network layers. We carry out experiments on the open-source LibriSpeech 960hr task, for both 200K vocabulary word-level and 10K byte-pair encoding subword-level language modeling. We apply our word-level models to conventional hybrid speech recognition by lattice rescoring, and the subword-level models to attention-based encoder-decoder models by shallow fusion. Second, we show that deep Transformer language models do not require positional encoding. The positional encoding is an essential augmentation for the self-attention mechanism, which is invariant to sequence ordering. However, in the autoregressive setup, as is the case for language modeling, the amount of information increases along the position dimension, which is a positional signal in its own right. The analysis of attention weights shows that deep autoregressive self-attention models can automatically make use of such positional information. We find that removing the positional encoding even slightly improves the performance of these models.
Comment: To appear in the proceedings of INTERSPEECH 201
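To illustrate the positional-encoding observation above, the following is a minimal decoder-only Transformer language model in which no positional encoding is added to the token embeddings; the causal attention mask alone makes the model autoregressive, and the growing amount of visible left context provides an implicit positional signal. This is a generic PyTorch sketch with placeholder dimensions and vocabulary size, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TransformerLMNoPosEnc(nn.Module):
    """Autoregressive Transformer LM with no positional encoding added."""

    def __init__(self, vocab_size: int = 10000, d_model: int = 512,
                 nhead: int = 8, num_layers: int = 6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           dim_feedforward=2048, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq). Note: no positional encoding is added here.
        x = self.embed(tokens)
        seq_len = tokens.size(1)
        # Causal mask (True = blocked): position t only attends to positions <= t.
        # The growing amount of visible context is itself a positional signal.
        causal_mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
        h = self.body(x, mask=causal_mask)
        return self.out(h)  # next-token logits, (batch, seq, vocab)

lm = TransformerLMNoPosEnc()
logits = lm(torch.randint(0, 10000, (2, 16)))
```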
Improved training of end-to-end attention models for speech recognition
Sequence-to-sequence attention-based models on subword units allow simple
open-vocabulary end-to-end speech recognition. In this work, we show that such
models can achieve competitive results on the Switchboard 300h and LibriSpeech
1000h tasks. In particular, we report the state-of-the-art word error rates
(WER) of 3.54% on the dev-clean and 3.82% on the test-clean evaluation subsets
of LibriSpeech. We introduce a new pretraining scheme by starting with a high
time reduction factor and lowering it during training, which is crucial both
for convergence and final performance. In some experiments, we also use an auxiliary CTC loss function to help convergence. In addition, we train long short-term memory (LSTM) language models on subword units. By shallow fusion, we report up to 27% relative improvement in WER over the attention baseline without a language model.
Comment: submitted to Interspeech 201
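As an illustration of the shallow fusion mentioned above, the sketch below combines, at a single beam-search step, the attention model's output log-probabilities with those of an external subword-level LSTM language model using an interpolation weight. The weight value, tensor shapes, and function name are assumptions made for this example, not the settings used in the work.

```python
import torch

def shallow_fusion_scores(am_log_probs: torch.Tensor,
                          lm_log_probs: torch.Tensor,
                          lm_weight: float = 0.3) -> torch.Tensor:
    # am_log_probs: (beam, vocab) log p_AM(y_t | y_<t, x) from the attention decoder
    # lm_log_probs: (beam, vocab) log p_LM(y_t | y_<t)    from the external LSTM LM
    # Returns the fused scores used to rank candidate beam expansions.
    return am_log_probs + lm_weight * lm_log_probs

# One (toy) beam-search step with beam size 4 and a 1000-subword vocabulary:
am = torch.log_softmax(torch.randn(4, 1000), dim=-1)
lm = torch.log_softmax(torch.randn(4, 1000), dim=-1)
fused = shallow_fusion_scores(am, lm)
topk_scores, topk_tokens = fused.topk(k=8, dim=-1)  # candidate expansions per beam
```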
A mirror of society: a discourse analytic study of 15- to 16-year-old Swiss students' talk about environment and environmental protection
Environment and environmental protection are at the forefront of political concerns globally. But how are the media and political discourses concerning these issues mirrored in the public more generally, and in the discourses of school science students more specifically? In this study, we analyze the discourse mobilized in whole-class conversations of, and interviews with, 15- to 16-year-old Swiss junior high school students. We identify two core interpretive repertoires (each unfolding into two second-order repertoires) that turn out to be the building blocks of an environmental discourse characteristic not only of these students but also of Swiss society more generally. The analysis of our students' discourse demonstrates how their use of interpretive repertoires locks them into belief talk according to which they have no control over ecological issues, which puts them in danger of falling prey to ecological passivity. As a consequence of our findings, we suggest that teachers should be encouraged to interpret their teaching of environmental issues in terms of enriching and enlarging their students' interpretive repertoires.
RWTH ASR Systems for LibriSpeech: Hybrid vs Attention -- w/o Data Augmentation
We present state-of-the-art automatic speech recognition (ASR) systems
employing a standard hybrid DNN/HMM architecture compared to an attention-based
encoder-decoder design for the LibriSpeech task. Detailed descriptions of the
system development, including model design, pretraining schemes, training
schedules, and optimization approaches are provided for both system
architectures. Both hybrid DNN/HMM and attention-based systems employ
bi-directional LSTMs for acoustic modeling/encoding. For language modeling, we
employ both LSTM- and Transformer-based architectures. All our systems are built using RWTH's open-source toolkits RASR and RETURNN. To the best of the authors' knowledge, the results obtained when training on the full LibriSpeech training set are currently the best published, both for the hybrid DNN/HMM and the attention-based systems. Our single hybrid system even outperforms previous results obtained from combining eight single systems. Our comparison shows that on the LibriSpeech 960h task, the hybrid DNN/HMM system outperforms the attention-based system by 15% relative on the clean test sets and by 40% relative on the other test sets in terms of word error rate. Moreover, experiments on a reduced 100h subset of the LibriSpeech training corpus show an even more pronounced margin between the hybrid DNN/HMM and attention-based architectures.
Comment: Proceedings of INTERSPEECH 201
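To make the "hybrid" part of the comparison concrete, the sketch below shows the standard hybrid NN/HMM construction in generic form: a bidirectional LSTM acoustic model outputs per-frame posteriors over tied HMM states, which are converted into scaled likelihoods by subtracting the log state priors before HMM decoding. This is an illustrative PyTorch sketch with placeholder sizes, not the actual RASR/RETURNN setup described in the paper.

```python
import torch
import torch.nn as nn

class HybridBLSTMAcousticModel(nn.Module):
    """Bidirectional LSTM acoustic model producing per-frame state posteriors."""

    def __init__(self, feat_dim: int = 40, hidden_dim: int = 512,
                 num_layers: int = 6, num_states: int = 12000):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden_dim, num_layers=num_layers,
                             batch_first=True, bidirectional=True)
        self.output = nn.Linear(2 * hidden_dim, num_states)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        h, _ = self.blstm(features)
        return torch.log_softmax(self.output(h), dim=-1)  # log p(state | frame)

# Hybrid trick: convert posteriors into scaled likelihoods for the HMM decoder,
#   log p(x_t | s) ~ log p(s | x_t) - log p(s)
model = HybridBLSTMAcousticModel()
log_posteriors = model(torch.randn(1, 200, 40))            # (batch, frames, states)
log_priors = torch.log_softmax(torch.randn(12000), dim=0)  # e.g. from alignment counts
scaled_log_likelihoods = log_posteriors - log_priors
```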
Equivalence of Segmental and Neural Transducer Modeling: A Proof of Concept
With the advent of direct models in automatic speech recognition (ASR), the
formerly prevalent frame-wise acoustic modeling based on hidden Markov models
(HMM) diversified into a number of modeling architectures like encoder-decoder
attention models, transducer models and segmental models (direct HMM). While
transducer models stay with a frame-level model definition, segmental models
are defined on the level of label segments directly. While
(soft-)attention-based models avoid explicit alignment, the transducer and segmental approaches do model alignment internally, either via segment hypotheses or, more implicitly, via the emission of so-called blank symbols. In this work, we prove that the widely used class of RNN-Transducer models and segmental models (direct HMM) are equivalent and therefore have equal modeling power. It is shown that blank probabilities translate into segment length probabilities and vice versa. In addition, we provide initial experiments investigating decoding and beam pruning, comparing time-synchronous and label-/segment-synchronous search strategies and their properties using the same underlying model.
Comment: accepted at Interspeech 202
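As a numeric illustration of the blank/segment-length correspondence stated above, consider a simplified single-segment view in which each frame either continues the segment with a blank (probability b_t) or ends it by emitting the label. Under this simplification, the per-frame blank probabilities and the segment-length distribution determine each other, as in the sketch below; the function names and the simplification itself are assumptions made for illustration, not the paper's exact derivation.

```python
import numpy as np

def blanks_to_length_dist(blank_probs):
    # Per-frame blank probabilities -> segment-length distribution:
    #   P(length = d) = (prod_{t<d} b_t) * (1 - b_d)
    # i.e. emit blank for d-1 frames, then emit the label at frame d.
    b = np.asarray(blank_probs, dtype=float)
    survive = np.concatenate(([1.0], np.cumprod(b)[:-1]))
    return survive * (1.0 - b)

def length_dist_to_blanks(length_probs):
    # Segment-length distribution -> per-frame blank probabilities:
    #   b_d = P(length > d) / P(length >= d)   (complement of the discrete hazard)
    p = np.asarray(length_probs, dtype=float)
    tail = 1.0 - np.cumsum(p)                        # P(length > d)
    tail_prev = np.concatenate(([1.0], tail[:-1]))   # P(length >= d)
    return tail / tail_prev

# Round trip: both parameterizations carry the same information.
b = np.array([0.9, 0.7, 0.4, 0.0])   # blank prob per frame; label forced by the last frame
p = blanks_to_length_dist(b)          # [0.1, 0.27, 0.378, 0.252]
assert np.isclose(p.sum(), 1.0)
assert np.allclose(length_dist_to_blanks(p), b)
```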
Clinical Assessment Basic Knowledge for Nurses and Midwives: Abdomen Workbook
The students
can carry out a targeted, symptom-focused history and the physical examination, then summarize and analyze the collected data and plan the further course of action, according to the SOAP scheme (Subjective - Objective - Assessment - Plan)
can take a targeted and systematic history of the abdomen, including basic data, chief complaints, a symptom-focused history based on the key abdominal symptoms, an extended abdominal history, past medical history, family history, and social history
perform a systematic physical examination of the abdomen in the following order, using examination aids: general condition (AZ), vital signs (VZ), important system-relevant parameters, inspection, auscultation, percussion, palpation
carry out additional examinations, such as tests for appendicitis signs
recognize physiological findings and / or deviations from them
interpret these findings and formulate a working hypothesis
assess the urgency and plan further interventions
report the findings in professional terminology to the interprofessional team (physician, nurses / midwives) according to the Identification - Situation - Background - Assessment - Recommendation (ISBAR) reporting framework, and
document the results of the clinical assessment in professional terminology
- …