1,996 research outputs found

    Modeling tissue-specific structural patterns in human and mouse promoters

    Sets of genes expressed in the same tissue are believed to be under the regulation of a similar set of transcription factors, and can thus be assumed to contain similar structural patterns in their regulatory regions. Here we present a study of the structural patterns in promoters of genes expressed specifically in 26 human and 34 mouse tissues. For each tissue we constructed promoter structure models, taking into account the presence of motifs, their positioning relative to the transcription start site, and the pairwise positioning of motifs. We found that 35 out of 60 models (58%) were able to distinguish positive test promoter sequences from control promoter sequences with statistical significance. Models with high performance include those for liver, skeletal muscle, kidney and tongue. Many of the important structural patterns in these models involve transcription factors of known importance in the tissues in question, and structural patterns tend to be conserved between human and mouse. In addition, promoter models for related tissues tend to have high inter-tissue performance, indicating that their promoters share common structural patterns. Together, these results illustrate the validity of our models, but also indicate that the promoter structures of some tissues are easier to model than those of others.
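The three structural features the abstract names (motif presence, position relative to the transcription start site, and pairwise motif spacing) can be sketched as a toy additive score. The motifs, preferred offsets, weights and scoring scheme below are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical promoter-structure score over three features:
# (1) motif presence, (2) TSS-relative position, (3) pairwise spacing.

def find_motif(seq, motif):
    """Return all start positions of `motif` in `seq`."""
    hits, i = [], seq.find(motif)
    while i != -1:
        hits.append(i)
        i = seq.find(motif, i + 1)
    return hits

def score_promoter(seq, tss, motifs, preferred_pos, pair_pref):
    """Toy additive score; higher means a better structural match."""
    score, positions = 0.0, {}
    for m in motifs:
        hits = find_motif(seq, m)
        if hits:
            score += 1.0                       # presence term
            best = min(hits, key=lambda p: abs((p - tss) - preferred_pos[m]))
            positions[m] = best
            # reward hits near the preferred TSS-relative offset
            score += 1.0 / (1 + abs((best - tss) - preferred_pos[m]))
    for (m1, m2), pref_dist in pair_pref.items():
        if m1 in positions and m2 in positions:
            d = abs(positions[m1] - positions[m2])
            score += 1.0 / (1 + abs(d - pref_dist))  # pairwise term
    return score

promoter = "GGGTATAAAGGCCAATGGG"
s = score_promoter(
    promoter, tss=len(promoter),
    motifs=["TATAAA", "CCAAT"],
    preferred_pos={"TATAAA": -16, "CCAAT": -8},
    pair_pref={("TATAAA", "CCAAT"): 8},
)
```

In a real model such scores would be fitted per tissue and the 58% significance figure comes from comparing positive against control promoters; this sketch only illustrates the feature types involved.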

    Interactive speech-driven facial animation

    One of the fastest developing areas in the entertainment industry is digital animation. Television programmes and movies frequently use 3D animations to enhance or replace actors and scenery. With the increase in computing power, research is also being done to apply these animations in an interactive manner. Two of the biggest obstacles to the success of these undertakings are control (manipulating the models) and realism. This text describes many of the ways to improve the control and realism aspects, in such a way that interactive animation becomes possible. Specifically, lip-synchronisation (driven by human speech), and various modeling and rendering techniques are discussed. A prototype showing that interactive animation is feasible is also described.
    Mr. A. Hardy; Prof. S. von Solm

    DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

    There is an undeniable communication barrier between deaf people and people with normal hearing ability. Although innovations in sign language translation technology aim to tear down this communication barrier, the majority of existing sign language translation systems are either intrusive or constrained by resolution or ambient lighting conditions. Moreover, these existing systems can only perform single-sign ASL translation rather than sentence-level translation, making them much less useful in daily-life communication scenarios. In this work, we fill this critical gap by presenting DeepASL, a transformative deep learning-based sign language translation technology that enables ubiquitous and non-intrusive American Sign Language (ASL) translation at both word and sentence levels. DeepASL uses infrared light as its sensing mechanism to non-intrusively capture ASL signs. It incorporates a novel hierarchical bidirectional deep recurrent neural network (HB-RNN) and a probabilistic framework based on Connectionist Temporal Classification (CTC) for word-level and sentence-level ASL translation, respectively. To evaluate its performance, we have collected 7,306 samples from 11 participants, covering 56 commonly used ASL words and 100 ASL sentences. DeepASL achieves an average 94.5% word-level translation accuracy and an average 8.2% word error rate on translating unseen ASL sentences. Given its promising performance, we believe DeepASL represents a significant step towards breaking the communication barrier between deaf people and the hearing majority, and thus has significant potential to fundamentally change deaf people's lives.
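The CTC framework mentioned for sentence-level translation can be sketched at its simplest as greedy (best-path) decoding: take the most likely label per frame, collapse consecutive repeats, and drop the blank symbol. The per-frame probabilities and two-word vocabulary below are made up for illustration; DeepASL's actual network and decoder are not reproduced here.

```python
# Minimal CTC greedy (best-path) decoding sketch.
BLANK = 0

def ctc_greedy_decode(frame_probs):
    """Argmax label per frame, collapse repeats, remove blanks."""
    path = [max(range(len(p)), key=p.__getitem__) for p in frame_probs]
    decoded, prev = [], None
    for label in path:
        if label != prev and label != BLANK:
            decoded.append(label)
        prev = label
    return decoded

# 3 labels: 0 = blank, 1 = "HELLO", 2 = "WORLD" (illustrative vocabulary)
probs = [
    [0.10, 0.80, 0.10],   # frame -> 1
    [0.10, 0.70, 0.20],   # frame -> 1 (repeat, collapsed)
    [0.90, 0.05, 0.05],   # frame -> blank
    [0.10, 0.10, 0.80],   # frame -> 2
]
words = {1: "HELLO", 2: "WORLD"}
sentence = [words[i] for i in ctc_greedy_decode(probs)]
```

The blank symbol is what lets CTC handle sequences without frame-level alignment: repeated signs are separated by blanks, so collapsing repeats does not merge genuinely repeated words.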

    Automatic signal and image-based assessments of spinal cord injury and treatments.

    Spinal cord injury (SCI) is one of the most common sources of motor disabilities in humans, and it often deeply impacts quality of life in individuals with severe and chronic SCI. In this dissertation, we have developed advanced engineering tools to address three distinct problems that researchers, clinicians and patients face in SCI research. In particular, we have proposed a fully automated stochastic framework to quantify the effects of SCI on muscle size and adipose tissue distribution in skeletal muscles by volumetric segmentation of 3-D MRI scans in individuals with chronic SCI as well as non-disabled individuals. We also developed a novel framework for robust and automatic activation detection, feature extraction and visualization of spinal cord epidural stimulation (scES) effects across a high number of scES parameters, to build individualized maps of the muscle recruitment patterns of scES. Finally, in the last part of this dissertation, we introduced an EMG time-frequency analysis framework that implements EMG spectral analysis and machine learning tools to characterize EMG patterns resulting in independent or assisted standing enabled by scES, and to identify the stimulation parameters that promote muscle activation patterns more effective for standing. The neurotechnological advancements proposed in this dissertation have greatly benefited SCI research by accelerating efforts to quantify the effects of SCI on muscle size and functionality, expanding knowledge of the neurophysiological mechanisms involved in re-enabling motor function with epidural stimulation and in the selection of stimulation parameters, and helping patients with complete paralysis achieve faster motor recovery.
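The EMG spectral analysis step described above can be sketched as a band-power feature: the power of an EMG window within a chosen frequency band, computed via the FFT. The sampling rate, band edges and synthetic signal below are illustrative assumptions, not the dissertation's actual pipeline.

```python
# Hypothetical band-power feature for EMG spectral analysis.
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power of `signal` within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

fs = 1000                               # 1 kHz sampling rate (assumed)
t = np.arange(fs) / fs                  # one second of samples
emg = np.sin(2 * np.pi * 80 * t)        # synthetic 80 Hz component
# By construction the 60-100 Hz band dominates the 200-300 Hz band;
# a classifier would compare such features across stimulation parameters.
p_low = band_power(emg, fs, 60, 100)
p_high = band_power(emg, fs, 200, 300)
```

Features of this kind, computed per muscle and per scES parameter set, are the sort of inputs a machine learning model could use to separate EMG patterns that do and do not support standing.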