1,125 research outputs found

    Facing affect / synthetic interface & meaning.

    Get PDF
    This paper describes a practice-led research project that addresses issues of emotional creativity and affect. A series of three-dimensional works was developed to discuss and demonstrate an exciting moment of new tangentiality, and an understanding of the emotional face as interface evolved. The research explored a crossover zone where computer technology affects the material realm and where digitally driven processes interact with traditional ones, describing a hybrid practice. The practice aims to reflect on an interdisciplinary research process, including the study of creativity and synthetic emotions. This research was carried out in collaboration with the Digital Media Research Innovation Institute at OCAD University in Toronto and the Rapidform Print Research department at the Royal College of Art in London.

    e_motions in process

    Get PDF
    This research project maps virtual emotions. Rauch uses 3D surface-capturing devices to scan facial expressions of (stuffed) animals and humans, which she then sculpts with the Phantom Arm/SensAble FreeForm device in 3D virtual space. The results are rapidform-printed objects and 3D animations of morphing faces and gestures. Building on her research into consciousness studies and emotions, she has developed a new artwork that reveals characteristic aspects of human emotions (e.g. laughing, crying, frowning, sneering), using new technology, in particular digital scanning devices and special-effects animation software. The proposal is to use a 3D high-resolution laser scanner to capture animal faces and, using the data of these faces, animate and then combine them with human emotional facial expressions. The morphing of the human and animal facial data is not merely a layering of the different scans; rather, an algorithmic programme merges crucial landmarks in the animal face to match those of the human. The results are morphs of the physical characteristics of animals with the emotional characteristics of the human face in 3D. The focus of this interdisciplinary research project is a collaborative practice that brings together researchers from UCL in London and researchers at OCAD University's data and information visualization lab. Rauch uses Darwin's metatheory of the continuity of species and other theories on evolution and internal physiology (Ekman et al.) to re-examine previous and new theories with the use of new technologies, including the SensAble FreeForm device, which, as an interface, allows for haptic feedback from digital data.
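    As a toy illustration of the landmark-merging step described above, the Python sketch below linearly blends two sets of corresponding 3D landmarks. It is a minimal sketch only: the actual pipeline (scanner SDKs, FreeForm, the algorithmic programme itself) is not published, and every name here is illustrative rather than the project's own code.

        # Landmark-driven morphing between two face scans, assuming
        # corresponding landmarks have already been identified on both
        # meshes. All names are illustrative stand-ins.
        import numpy as np

        def morph_landmarks(human_pts, animal_pts, t):
            """Blend two (N, 3) arrays of corresponding 3D landmarks.

            t = 0.0 returns the human landmarks, t = 1.0 the animal ones;
            intermediate values yield the frames of a morph animation.
            """
            assert human_pts.shape == animal_pts.shape
            return (1.0 - t) * human_pts + t * animal_pts

        # Ten frames of a human-to-animal morph (random stand-in data).
        human = np.random.rand(68, 3)
        animal = np.random.rand(68, 3)
        frames = [morph_landmarks(human, animal, t)
                  for t in np.linspace(0.0, 1.0, 10)]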

    Modelling the nonstationarity of speech in the maximum negentropy beamformer

    Get PDF
    State-of-the-art automatic speech recognition (ASR) systems can achieve very low word error rates (WERs) of below 5% on data recorded with headsets. However, in many situations, such as ASR at meetings or in the car, far-field microphones on the table, walls or devices such as laptops are preferable to microphones that have to be worn close to the users' mouths. Unfortunately, the distance between speakers and microphones introduces significant noise and reverberation, and as a consequence the WERs of current ASR systems on such data tend to be unacceptably high (30-50% upwards). The use of a microphone array, i.e. several microphones, can alleviate the problem somewhat by performing spatial filtering: beamforming techniques combine the sensors' outputs in a way that focuses the processing on a particular direction. Assuming that the signal of interest comes from a different direction than the noise, this can improve the signal quality and reduce the WER by filtering out sounds coming from non-relevant directions. Historically, array processing techniques developed from research on non-speech data, e.g. in the fields of sonar and radar, and as a consequence most techniques were not created to specifically address beamforming in the context of ASR. While this generality can be seen as an advantage in theory, it also means that these methods ignore characteristics which could be used to improve the process in a way that benefits ASR. An example of beamforming adapted to speech processing is the recently proposed maximum negentropy beamformer (MNB), which exploits the statistical characteristics of speech as follows. "Clean" headset speech differs from noisy or reverberant speech in its statistical distribution, which is much less Gaussian in the clean case. Since negentropy is a measure of non-Gaussianity, choosing beamformer weights that maximise the negentropy of the output leads to speech that is closer to clean speech in its distribution, and this in turn has been shown to lead to improved WERs [Kumatani et al., 2009]. In this thesis several refinements of the MNB algorithm are proposed and evaluated. Firstly, a number of modifications to the original MNB configuration are proposed based on theoretical or practical concerns; these changes concern the probability density function (pdf) used to model speech, the estimation of the pdf parameters, and the method of calculating the negentropy. Secondly, a further step is taken to reflect the characteristics of speech by introducing time-varying pdf parameters. The original MNB uses fixed estimates per utterance, which do not account for the nonstationarity of speech. Several time-dependent variance estimates are therefore proposed, beginning with a simple moving-average window and including the HMM-MNB, which derives the variance estimate from a set of auxiliary hidden Markov models. All beamformer algorithms presented in this thesis are evaluated through far-field ASR experiments on the Multi-Channel Wall Street Journal Audio-Visual Corpus, a database of utterances captured with real far-field sensors, in a realistic acoustic environment, and spoken by real speakers. While the proposed methods do not lead to an improvement in ASR performance, a more efficient MNB algorithm is developed, and it is shown that comparable results can be achieved with significantly less data than all frames of the utterance, a result which is of particular relevance for real-time implementations.
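    To make the negentropy criterion concrete, the following is a minimal statement of the objective using standard definitions; the exact pdf family, subband structure and constraints used in the thesis may differ.

        % Subband beamformer output for weight vector w and sensor snapshot X
        Y = \mathbf{w}^{H}\mathbf{X}
        % Negentropy: distance of Y from a Gaussian Y_G of equal variance
        % (H denotes differential entropy); J(Y) = 0 iff Y is Gaussian
        J(Y) = H(Y_{G}) - H(Y) \geq 0
        % MNB weight choice, typically under a distortionless constraint
        % towards the estimated source direction
        \mathbf{w}^{\star} = \arg\max_{\mathbf{w}} \, J\!\left(\mathbf{w}^{H}\mathbf{X}\right)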

    On the Velocity Field and the 3D Structure of the Galactic Soccer Ball Abell 43

    Full text link
    Planetary nebulae (PNe) and their central stars (CSs) are ideal tools to test evolutionary theory: photospheric properties of their exciting stars give stringent constraints for theoretical predictions of stellar evolution. The nebular abundances reflect the star's photosphere at the time of the nebula's ejection, which allows us to look back into the history of stellar evolution; more importantly, they even provide a possibility to investigate the chemical evolution of our Galaxy, because most of the nuclear-processed material goes back into the interstellar medium via PNe. The recent developments in observation techniques and a new three-dimensional photoionization code, MOCASSIN, enable us to analyze PNe properties precisely by the construction of consistent models of PNe and CSs. In addition to PNe imaging and spectroscopy, detailed information about the velocity field within the PNe is a prerequisite to employ de-projection techniques in modeling the physical structure of the PNe.
    Comment: 1 page, 1 figure
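    A common simplifying assumption in such de-projection work, stated here only as illustration (the paper's modeling may adopt a different velocity law), is homologous, Hubble-like expansion, under which the Doppler-measured line-of-sight velocity of a shell element directly encodes its depth along the line of sight:

        % Homologous expansion with kinematic age \tau:
        % velocity grows linearly with radius
        \vec{v}(\vec{r}) = \vec{r}/\tau
        % hence the line-of-sight component maps Doppler shift to depth z
        v_{\mathrm{los}} = z/\tau \quad\Rightarrow\quad z = \tau \, v_{\mathrm{los}}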

    Playshops: Workshop series exploring play

    Get PDF
    Playshops was a collaborative project between OCAD University research labs, faculty, and Symon Oliver of ALSO Collective. Expanding on the conventional model of academic workshops, Playshops incorporated both theory and practice in an attempt to investigate methods of play and how they relate to research and innovation. The workshop itself was composed of practice-based exercises followed by discussion periods. The documentation and post-workshop writing were gathered and designed into the Playshops publication. The publication can be read cover to cover as a conventional book; however, the signatures fold out into posters that relate to the individual exercises on play.

    Empirical Essays on Fiscal Federalism

    Get PDF
    This dissertation contains four empirical papers on fiscal federalism. It puts established economic principles and tools into the federal context by employing data on German municipalities.

    NAIP proteins are required for cytosolic detection of specific bacterial ligands in vivo.

    Get PDF
    NLRs (nucleotide-binding domain [NBD] leucine-rich repeat [LRR]-containing proteins) exhibit diverse functions in innate and adaptive immunity. NAIPs (NLR family, apoptosis inhibitory proteins) are NLRs that appear to function as cytosolic immunoreceptors for specific bacterial proteins, including flagellin and the inner rod and needle proteins of bacterial type III secretion systems (T3SSs). Despite strong biochemical evidence implicating NAIPs in specific detection of bacterial ligands, genetic evidence has been lacking. Here we report the use of CRISPR/Cas9 to generate Naip1(-/-) and Naip2(-/-) mice, as well as Naip1-6(Δ/Δ) mice lacking all functional Naip genes. By challenging Naip1(-/-) or Naip2(-/-) mice with specific bacterial ligands in vivo, we demonstrate that Naip1 is uniquely required to detect T3SS needle protein and Naip2 is uniquely required to detect T3SS inner rod protein, but neither Naip1 nor Naip2 is required for detection of flagellin. Previously generated Naip5(-/-) mice retain some residual responsiveness to flagellin in vivo, whereas Naip1-6(Δ/Δ) mice fail to respond to cytosolic flagellin, consistent with previous biochemical data implicating NAIP6 in flagellin detection. Our results provide genetic evidence that specific NAIP proteins function to detect specific bacterial proteins in vivo.

    This is Research; Rauch and Gay: A Hundred Thousand Lousy Cats

    Get PDF
    This practice-led research project adopts some feminist principles of data visualization proposed by Catherine D'Ignazio and Lauren Klein. We are interested in the context of data, the methods and politics of data collection, and the resulting visualizations/materializations. Data, design, and community use of the data are all intertwined. We use those preliminary principles to structure our research process and findings. We explore machine-human creative collaborations and the act of training AI systems, with some consideration of the socio-political implications of classifications and categorizations. Using the Google QuickDraw dataset and platform, we explore the potential differences between algorithmic "machine", or digitally constructed, drawings and fictional associative "hand" drawings and collages, questioning what it means to draw and to work within classification systems in an algorithm-leaning world. We ask: what is data, and what is its politics? How do critical arts practices work to complicate algorithmic logic and question practices of optimization through data?
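    For readers who want to examine the same source material, the sketch below loads one category of the public QuickDraw dataset (the simplified-drawing NDJSON files distributed via Google's quickdraw-dataset repository). The file path is an assumption, made only for illustration; this is not the project's own tooling.

        # Load the "cat" category of the public QuickDraw dataset from a
        # simplified NDJSON file (one JSON record per line). The path is
        # an assumption; the file must be downloaded beforehand.
        import json

        drawings = []
        with open("cat.ndjson") as f:
            for line in f:
                rec = json.loads(line)
                # rec["drawing"] is a list of strokes, each stroke a
                # pair of lists: [x-coordinates, y-coordinates]
                drawings.append(rec["drawing"])

        print(f"loaded {len(drawings)} cat drawings; "
              f"first one has {len(drawings[0])} strokes")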

    Acquired von Willebrand Syndrome in Patients With Ventricular Assist Device

    Get PDF
    During the last decade the use of ventricular assist devices (VADs) for patients with severe heart failure has increased tremendously. However, flow disturbances, mainly the high shear induced by the device, are associated with bleeding complications. Shear-stress-induced changes in VWF conformation are associated with a loss of high-molecular-weight (HMW) multimers of VWF and an increased risk of bleeding. This phenomenon and its cause are elaborated and reviewed in what follows.