
    User Behavior-Based Implicit Authentication

    In this work, we proposed dynamic retraining (RU), a wind vane module (WVM), BubbleMap (BMap), and reinforcement authentication (RA) to improve the efficacy of implicit authentication (IA). Motivated by the great potential of implicit and seamless user authentication, we built an implicit authentication system with adaptive sampling that automatically selects dynamic sets of activities for user-behavior extraction. Activities such as user location, application usage, user motion, and battery usage have been popular choices for generating behaviors (soft biometrics) for implicit authentication. Unlike password-based or hard-biometric authentication, implicit authentication does not require explicit user action or expensive hardware. However, user behaviors can change unpredictably, which makes systems that depend on them more challenging to develop. In addition to dynamic behavior extraction, the proposed implicit authentication system differs from existing systems in its energy efficiency on battery-powered mobile devices. Because implicit authentication systems rely on machine learning, the expensive training process must be outsourced to a remote server, yet mobile devices may not always have reliable network connections for sending real-time data to the server for training. Moreover, IA systems are still in their infancy and exhibit many limitations, one of which is determining the best retraining frequency when updating the user behavior model. Another is how to gracefully degrade user privileges when authentication fails to identify legitimate users (i.e., false negatives) in a practical IA system.

    To address the retraining problem, we proposed an algorithm that uses the Jensen-Shannon (JS) distance to determine the optimal retraining frequency, discussed in Chapter 2. We overcame the limitations of traditional IA by proposing the W-layer, an overlay that provides a practical and energy-efficient solution for implicit authentication on mobile devices; the W-layer is discussed in Chapters 3 and 4. In Chapter 5, a novel privilege-control mechanism, BubbleMap (BMap), is introduced to provide fine-grained privileges to users based on their behavioral scores. In the same chapter, we describe reinforcement authentication (RA), which achieves more reliable authentication.
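
    A minimal sketch of the retraining-trigger idea, assuming user behaviors are summarized as normalized histograms and using SciPy's Jensen-Shannon distance; the threshold is a hypothetical tuning parameter, not a value from the dissertation:

```python
# A sketch, not the dissertation's implementation: trigger model retraining
# when the user's recent behavior distribution drifts too far (in Jensen-
# Shannon distance) from the distribution the model was trained on.
import numpy as np
from scipy.spatial.distance import jensenshannon  # JS *distance*, in [0, 1] for base=2

def needs_retraining(train_hist, recent_hist, threshold=0.15):
    """Both inputs: normalized histograms over the same behavior bins
    (e.g., discretized app-usage or location features). `threshold` is a
    hypothetical tuning knob, not a value from the dissertation."""
    return jensenshannon(train_hist, recent_hist, base=2) > threshold

train = np.array([0.5, 0.3, 0.2])     # behavior profile at training time
recent = np.array([0.2, 0.3, 0.5])    # behavior profile in the latest window
print(needs_retraining(train, recent))  # True: drift is large enough to retrain
```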

    Freeform User Interfaces for Graphical Computing

    Report number: 甲15222; Date of degree conferral: 2000-03-29; Degree type: Course doctorate; Degree: Doctor of Engineering; Degree registration number: 博工第4717号; Graduate school / department: Graduate School of Engineering, Department of Information Engineering

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, and management. Because machine learning networks are built from data, sensors are one of the most important enabling technologies. In turn, machine learning networks can contribute to improving sensor performance and creating new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, camera calibration for intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest-growing stem volume estimation, road management, image denoising, and touchscreens.

    Evaluation of SpliceAI for Improved Genetic Variant Classification in Inherited Ophthalmic Disease Genes

    A dissertation by Melissa Jean Reeves, Ph.D., submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at Virginia Commonwealth University, 2023. Major Director: Melissa Jamerson, PhD, MLS(ASCP), Associate Professor, Department of Medical Laboratory Sciences.

    Inherited ophthalmic diseases affect individuals around the globe. Inherited retinal diseases (IRDs) are the leading cause of blindness in individuals aged 15 to 45, and the personal, social, and economic impact of vision loss is profound. Because of individual differences, symptoms can be variable, and some diseases are difficult to diagnose from phenotype alone. Clinicians often seek genetic testing to confirm clinical diagnoses when other avenues have failed. Clinical laboratories use all available data, such as frequency, population, or computational data, to evaluate genetic variants and determine their classification. When there is not enough evidence to classify a genetic variant as pathogenic or benign at the time of testing, it may be classified as a variant of uncertain significance. Because inherited retinal diseases are considered rare, limited treatments are available, and most treatment is offered through clinical trials. Clinical trials often have stringent inclusion and exclusion criteria to ensure the best outcome for the study, and patients typically must have definitive genetic results to qualify; a variant of uncertain significance would likely disqualify an individual from a clinical trial. Functional assays, such as the minigene assay, have been used extensively across multiple genes and diseases.

    This study aimed to investigate a novel methodology for the minigene assay and to establish the sensitivity of SpliceAI for predicting synonymous splice effects in variants with a SpliceAI change (∆) score ≥ 0.8 in inherited ophthalmic disease genes. It used the "P" (process) component of Donabedian's Structure-Process-Outcome (SPO) model to evaluate the addition of the minigene assay to the clinical testing workflow, and it highlights the importance of using a well-validated framework, such as Donabedian's, in conjunction with clinical laboratory quality improvements.

    Of the 617 synonymous variants in the 20 ophthalmic disease genes targeted in the database, 86 synonymous variants in 14 genes scored ≥ 0.8. Twenty synonymous variants in two ophthalmic disease genes (ABCA4 and CHD7) were selected for this preliminary study, and twenty wildtype/variant pairs were assessed using the novel minigene test to review splice outcomes. This study established that the novel minigene test could be used in a clinical laboratory as part of the clinical testing pipeline. Of the 20 variants targeted, 14 could be evaluated by minigene; six did not produce high-quality data and will need to be repeated. Eleven of the 14 evaluable variants showed aberrant splice effects in the minigene assay, matching the SpliceAI prediction; three matched the wildtype transcript and were therefore considered discordant. Based on these results, the sensitivity of SpliceAI for predicting splice effects in synonymous variants in inherited ophthalmic diseases is approximately 79%, slightly less than the expected 80%. The shortfall in sensitivity is likely due to the small sample size. A Fisher's exact test evaluating the concordance between minigene outcomes and SpliceAI predictions gave a p value of 0.2222, indicating no statistically significant difference between them. These results indicate that SpliceAI has a predictive efficiency of about 79% in ophthalmic disease genes, well below what a clinical laboratory would need (> 95%) to rely on it alone for variant classification. Though the predictive efficiency is lower than expected, this preliminary study offers insight into the predictive value of SpliceAI for synonymous variants in inherited ophthalmic disease genes, and it introduces a novel minigene method that other clinical laboratories can reliably use across other diseases and genes.
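
    A back-of-the-envelope check of the reported figure (not code from the study): with 11 of 14 evaluable variants concordant with the SpliceAI prediction, the estimated sensitivity is 11/14 ≈ 78.6%, reported as approximately 79%:

```python
# Back-of-the-envelope check of the reported sensitivity (not study code).
concordant = 11  # minigene splice outcome matched the SpliceAI prediction
evaluable = 14   # variants that produced high-quality minigene data
sensitivity = concordant / evaluable
print(f"sensitivity = {sensitivity:.1%}")  # 78.6%, reported as ~79%
```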

    Methods For Robust Quantification Of Rna Alternative Splicing In Heterogeneous Rna-Seq Datasets

    RNA alternative splicing is primarily responsible for transcriptome diversity and is relevant to human development and disease. However, current approaches to splicing quantification make simplifying assumptions which are violated when RNA sequencing data are heterogeneous. Influences from genetic and environmental background contribute to variability within a group of samples purported to represent the same biological condition. This work describes three methods which account for data heterogeneity when detecting differential RNA splicing between sample groups. First, a robust model is implemented for outlier detection within a group of purported replicates. Next, large RNA-seq datasets with high within-group variability are addressed with a statistical approach which retains power to detect changing splice junctions without sacrificing specificity. Finally, applying these tools to call sQTLs in GTEx tissues has identified splicing variations associated with risk loci for cardiovascular disease and anomalous skeletal development. Each of these methods correctly handles the properties of heterogeneous RNA-seq data to improve precision and reduce false discovery rate.
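
    An illustrative sketch of the first method's general idea, flagging outlier replicates with a robust median/MAD z-score on per-junction percent-spliced-in (PSI) values; the cutoff is a conventional choice, not one taken from this work:

```python
# Illustrative sketch, in the spirit of (not identical to) the outlier model
# described above: flag replicates whose percent-spliced-in (PSI) value for a
# junction deviates strongly under a robust median/MAD z-score.
import numpy as np

def flag_outliers(psi, z_cutoff=3.5):
    """psi: PSI values for one junction across purported replicates."""
    median = np.median(psi)
    mad = np.median(np.abs(psi - median))
    if mad == 0:
        return np.zeros_like(psi, dtype=bool)  # no spread, nothing to flag
    robust_z = 0.6745 * (psi - median) / mad   # 0.6745 rescales MAD to ~sigma
    return np.abs(robust_z) > z_cutoff

psi = np.array([0.72, 0.70, 0.74, 0.15, 0.71])  # fourth sample clearly deviates
print(flag_outliers(psi))  # [False False False  True False]
```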

    Computational and chemical approaches to drug repurposing

    Drug repurposing, which entails discovering novel therapeutic applications for already existing drugs, provides numerous benefits compared with conventional drug discovery methods. This strategy can be pursued through two primary approaches: computational and chemical. Computational methods use data mining and bioinformatics techniques to identify potential drug candidates, while chemical approaches involve experimental screens aimed at finding new potential treatments based on existing drugs. Both computational and chemical methods have proven successful in uncovering novel therapeutic uses for established drugs. During my PhD, I participated in several experimental drug repurposing screens based on high-throughput phenotypic approaches. Finally, attracted by the potential of computational drug repurposing pipelines, I contributed a web platform focused on the use of transcriptional signatures to identify potential new treatments for human disease. A summary of these studies follows.

    In Study I, we utilized the tetracycline repressor (tetR)-regulated mechanism to create a human osteosarcoma cell line (U2OS) able to express TAR DNA-binding protein 43 (TDP-43), a protein associated with several neurodegenerative diseases, upon induction. We implemented a chemical screening with this system as part of our efforts to repurpose approved drugs. While the screening failed to identify modulators of TDP-43 toxicity, it revealed compounds capable of inhibiting doxycycline-dependent TDP-43 expression. Furthermore, a complementary CRISPR/Cas9 screening using the same cell system identified additional regulators of doxycycline-dependent TDP-43 expression. This investigation identifies new chemical and genetic modulators of the tetR system and highlights potential limitations of using this system for chemical or genetic screenings in mammalian cells.

    In Study II, our objective was to reposition compounds that could reduce the toxic effects of a fragment of the Huntingtin (HTT) protein containing a 94-amino-acid-long glutamine stretch (Htt-Q94), a feature of Huntington's disease (HD). To achieve this, we carried out a high-throughput chemical screening using a varied collection of 1,214 drugs, largely sourced from a drug repurposing library. Through this screening we singled out clofazimine, an FDA-approved anti-leprosy drug, as a potential therapeutic candidate. Its effectiveness was validated in several in vitro models as well as a zebrafish model of polyglutamine (polyQ) toxicity. Employing a combination of computational analysis of transcriptional signatures, molecular modeling, and biochemical assays, we deduced that clofazimine is an agonist of the peroxisome proliferator-activated receptor gamma (PPARγ), a receptor previously suggested as a viable therapeutic target for HD due to its role in promoting mitochondrial biogenesis. Notably, clofazimine alleviated the mitochondrial dysfunction triggered by expression of Htt-Q94. These findings lend substantial support to clofazimine as a viable drug repurposing candidate for the treatment of polyQ diseases.

    In Study III, we explored the molecular mechanism of a previously identified repurposing example, the use of the diethyldithiocarbamate-copper complex (CuET), a disulfiram metabolite, for cancer treatment. We found that CuET effectively inhibits cancer cell growth by targeting the NPL4 adapter of the p97/VCP segregase, leading to translational arrest and stress in tumor cells. CuET also activates ribosomal biogenesis and autophagy in cancer cells, and its cytotoxicity can be enhanced by inhibiting these pathways. Thus, CuET shows promise as a cancer treatment, especially in combination therapies.

    In Study IV, we capitalized on the Molecular Signatures Database (MSigDB), one of the largest signature repositories, and drug transcriptomic profiles from the Connectivity Map (CMap) to construct a comprehensive and interactive drug-repurposing database called the Drug Repurposing Encyclopedia (DRE). Housing over 39.7 million pre-computed drug-signature associations across 20 species, the DRE allows users to conduct real-time drug-repurposing analysis: comparing user-supplied gene signatures with existing ones in the DRE, carrying out drug-gene set enrichment analyses (drug-GSEA) on submitted drug transcriptomic profiles, or conducting similarity analyses across all database signatures using user-provided gene sets. Overall, the DRE is an exhaustive database aimed at promoting drug repurposing based on transcriptional signatures, offering deep-dive comparisons across molecular signatures and species.

    Drug repurposing presents a valuable strategy for discovering new therapeutic applications for existing drugs, offering numerous benefits compared with conventional drug discovery. The studies conducted in this thesis underscore the potential of drug repurposing and highlight the complementary roles of computational and chemical approaches. They enhance our understanding of the mechanistic properties of repurposed drugs, such as clofazimine and disulfiram, and reveal novel mechanisms for targeting specific disease pathways. Additionally, the DRE platform provides a comprehensive tool to support researchers in conducting drug-repositioning analyses, further facilitating the advancement of drug repurposing studies.
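
    A minimal sketch of the signature-matching idea behind platforms like the DRE, assuming both the disease signature and the drug profile are given as gene-to-log-fold-change maps; the Spearman-correlation scoring here is an illustrative stand-in, not the DRE's actual method (which uses drug-GSEA and related analyses):

```python
# A minimal sketch of signature-based matching in the spirit of the DRE
# (the platform's actual scoring, e.g., drug-GSEA, is more involved).
# Both inputs: dicts mapping gene symbol -> log-fold-change; all values
# below are toy numbers.
from scipy.stats import spearmanr

def reversal_score(disease_sig, drug_profile):
    """Spearman correlation over shared genes; strongly negative values
    suggest the drug's profile reverses the disease signature."""
    shared = sorted(set(disease_sig) & set(drug_profile))
    rho, _ = spearmanr([disease_sig[g] for g in shared],
                       [drug_profile[g] for g in shared])
    return rho

disease = {"TP53": 2.1, "MYC": 1.8, "CDKN1A": -1.5, "BAX": -0.9}
drug = {"TP53": -1.9, "MYC": -1.2, "CDKN1A": 1.1, "BAX": 0.7}
print(reversal_score(disease, drug))  # -1.0: a perfect "reverser" in this toy
```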

    Enhanced Virtuality: Increasing the Usability and Productivity of Virtual Environments

    With steadily increasing display resolution, more accurate tracking, and falling prices, virtual reality (VR) systems are on the verge of establishing themselves in the market. Various tools help developers create complex multi-user interactions within adaptive virtual environments. However, the spread of VR systems also brings additional challenges: diverse input devices with unfamiliar shapes and button layouts hinder intuitive interaction. Moreover, the limited functionality of existing software forces users to fall back on conventional PC- or touch-based systems. Collaborating with other users at the same location poses further challenges regarding the calibration of different tracking systems and collision avoidance, and in remote collaboration, interaction is additionally affected by latency and connection losses. Finally, users have different requirements for the visualization of content, e.g., size, orientation, color, or contrast, within the virtual worlds. Strictly replicating real environments in VR wastes potential and cannot accommodate users' individual needs. To address these problems, this thesis presents solutions in the areas of input, collaboration, and augmentation of virtual worlds and users, aimed at increasing the usability and productivity of VR. First, PC-based hardware and software are transferred into the virtual world to preserve the familiarity and functionality of existing applications in VR. Virtual proxies of physical devices, e.g., keyboard and tablet, and a VR mode for applications allow users to carry real-world skills over into the virtual world. Furthermore, an algorithm is presented that enables the calibration of multiple co-located VR devices with high accuracy, low hardware requirements, and little effort. Since VR headsets occlude the user's real surroundings, the relevance of full-body avatar visualization for collision avoidance and remote collaboration is demonstrated. In addition, personalized spatial or temporal modifications are presented that increase users' usability, work performance, and social presence. Discrepancies between the virtual worlds that arise from personal adaptations are compensated by avatar redirection methods. Finally, some of the methods and findings are integrated into an exemplary application to demonstrate their practical applicability. This thesis shows that virtual environments can build on real skills and experiences to ensure familiar and easy interaction and collaboration among users. Moreover, individual augmentations of virtual content and avatars make it possible to overcome real-world limitations and enhance the experience of VR environments.
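
    An illustrative sketch of one way to calibrate two co-located tracking systems (not necessarily the algorithm developed in this thesis): given matched 3-D points observed in both coordinate frames, e.g., one controller moved through the shared space, the Kabsch algorithm recovers the rigid transform between the frames:

```python
# Illustrative co-location calibration via the Kabsch algorithm, under the
# assumption that matched 3-D point pairs from both tracking systems exist.
import numpy as np

def kabsch(P, Q):
    """Find rotation R and translation t minimizing ||R @ P_i + t - Q_i||.
    P, Q: (N, 3) arrays of corresponding points in frames A and B."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Toy usage: frame B is frame A rotated 90 degrees about z and shifted.
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
P = np.random.rand(10, 3)
Q = P @ Rz.T + np.array([0.5, -0.2, 1.0])
R, t = kabsch(P, Q)
print(np.allclose(R, Rz), np.allclose(t, [0.5, -0.2, 1.0]))  # True True
```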

    Functional genome-wide siRNA screen identifies KIAA0586 as mutated in Joubert syndrome

    Defective primary ciliogenesis or cilium stability forms the basis of human ciliopathies, including Joubert syndrome (JS), which features defective cerebellar vermis development. We performed a high-content genome-wide siRNA screen to identify genes regulating ciliogenesis as candidates for JS. We analyzed the results with a supervised learning approach, using the SYSCILIA gold standard, Cildb3.0, a centriole siRNA screen, and the GTEx project, identifying 591 likely candidates. Intersecting these data with whole-exome results from 145 individuals with unexplained JS identified six families with predominantly compound heterozygous mutations in KIAA0586. A c.428del base deletion, present in 0.1% of the general population, was found in trans with a second mutation in an additional 9 of 163 unexplained JS patients. KIAA0586 is an orthologue of chick Talpid3, required for ciliogenesis and sonic hedgehog signaling. Our results uncover a relatively high-frequency cause of JS and contribute a list of candidates for future gene discoveries in ciliopathies.
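
    A hedged sketch of the general supervised-ranking idea (not the study's actual pipeline): train a classifier with known ciliary genes (e.g., SYSCILIA gold-standard members) as positives over per-gene screen and expression features, then rank all genes by the cross-validated score. The features and numbers below are synthetic placeholders:

```python
# Synthetic, hedged sketch of supervised candidate ranking (not the study's
# pipeline). Positives are genes in a gold standard (e.g., SYSCILIA); features
# per gene could be siRNA phenotype scores and expression values. All data
# below are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_genes = 1000
X = rng.normal(size=(n_genes, 3))     # e.g., cilia count, cilium length, expression
is_gold = rng.random(n_genes) < 0.05  # stand-in for gold-standard membership

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# Cross-validated probability that each gene belongs to the positive class.
scores = cross_val_predict(clf, X, is_gold, cv=5, method="predict_proba")[:, 1]
ranking = np.argsort(scores)[::-1]    # highest-scoring genes first
print(ranking[:10])                   # top candidates under this toy model
```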

    Video interaction using pen-based technology

    Dissertation submitted for the degree of Doctor in Informatics. Video can be considered one of the most complete and complex media, and manipulating it is still a difficult and tedious task. This research applies pen-based technology to video manipulation with the goal of improving this interaction. Despite people's familiarity with pen-based devices, how they can be used for video interaction, making it more natural while fostering the user's creativity, remains an open question. Two types of interaction with video were considered in this work: video annotation and video editing. Each interaction type allows the study of one of the modes of using pen-based technology: indirectly, through digital ink, or directly, through pen gestures or pressure. This research contributes two approaches to pen-based video interaction: pen-based video annotations and video as ink. The first combines pen-based annotations with motion tracking algorithms to augment video content with sketches or handwritten notes; it studies how pen-based technology can be used to annotate moving objects and how to maintain the association between a pen-based annotation and the annotated moving object. The second concept replaces digital ink with video content, studying how pen gestures and pressure can be used in video editing and what kinds of changes are needed in the interface to provide a more familiar and creative interaction in this usage context. This work was partially funded by the UT Austin-Portugal Digital Media Program (Ph.D. grant SFRH/BD/42662/2007 - FCT/MCTES); by the HP Technology for Teaching Grant Initiative 2006; by the project "TKB - A Transmedia Knowledge Base for contemporary dance" (PTDC/EAT/AVP/098220/2008, funded by FCT/MCTES); and by CITI/DI/FCT/UNL (PEst-OE/EEI/UI0527/2011).
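
    A minimal sketch of the first approach's core idea, keeping a hand-drawn annotation attached to a moving object, assuming OpenCV's CSRT tracker (opencv-contrib-python) and a hypothetical file "input.mp4"; stroke points are stored relative to the tracked bounding box and re-drawn each frame:

```python
# A sketch of the core idea, not the thesis system: keep a pen-drawn stroke
# attached to a moving object by storing its points relative to a tracked
# bounding box. Assumes opencv-contrib-python and a hypothetical input file.
import cv2

cap = cv2.VideoCapture("input.mp4")
ok, frame = cap.read()
bbox = cv2.selectROI("annotate", frame)   # user marks the object once
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, bbox)

stroke = [(10, 12), (14, 18), (20, 22)]   # pen points, relative to the box

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, (x, y, w, h) = tracker.update(frame)
    if found:
        cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)),
                      (0, 255, 0), 1)     # tracked object
        # Re-anchor the hand-drawn stroke to the object's current position.
        pts = [(int(x) + dx, int(y) + dy) for dx, dy in stroke]
        for p, q in zip(pts, pts[1:]):
            cv2.line(frame, p, q, (0, 0, 255), 2)
    cv2.imshow("annotate", frame)
    if cv2.waitKey(30) & 0xFF == 27:      # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```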

    A study, exploration and development of the interaction of music production techniques in a contemporary desktop setting

    As with all computer-based technologies, music production is advancing at a rate comparable to 'Moore's law'. Developments within the discipline are gathering momentum exponentially: stretching the boundaries of the field, deepening the levels to which mediation can be applied, concatenating previously discrete hardware technologies into the desktop domain, demanding greater insight from practitioners to master these technologies, and even defining new genres of music through the increasing potential for sonic creativity to evolve. This DMus project will draw on the implications of the above developments and study the application of technologies currently available in the desktop environment, from emulations of what was traditionally hardware to the latest spectrally based audio-manipulation tools. It will investigate the interaction of these technologies and explore creative possibilities that were unattainable only a few years ago, all exemplified through the production of two contrasting albums of music. In addition, new software will be developed to actively contribute to the evolution of music production as we know it. The focus will be on extended production technique and innovation, through both development and context. The commentary will frame the practical work. It will offer a research context organized around a number of foci rather than literal research questions, qualify the methodology, and then form a literature and practice review. It will then present a series of frameworks that analyse music production contexts and technologies in historical perspective. By setting such a trajectory, the current state of the art can be best placed, and a number of the progressive production techniques associated with the submitted artefacts can then be contextualised. It will conclude with a discussion of the work that moves from the specific to the general.