853 research outputs found
Multi-modal and multi-dimensional biomedical image data analysis using deep learning
There is a growing need for computational methods and tools for automated, objective, and quantitative analysis of biomedical signal and image data to facilitate disease and treatment monitoring, early diagnosis, and scientific discovery. Recent advances in artificial intelligence and machine learning, particularly in deep learning, have revolutionized computer vision and image analysis across many application areas. While deep learning methods have been very successful on non-biomedical signal, image, and video data, high-stakes biomedical applications present unique challenges that must be addressed, such as heterogeneous image modalities, limited training data, and the need for explainability and interpretability. In this dissertation, we developed novel, explainable, attention-based deep learning frameworks for objective, automated, and quantitative analysis of biomedical signal, image, and video data. The proposed solutions involve multi-scale signal analysis for oral diadochokinesis studies; an ensemble of deep learning cascades using global soft attention mechanisms for segmentation of meningeal vascular networks in confocal microscopy; spatial attention and spatio-temporal data fusion for detection of rare, short-duration events in laryngeal endoscopy videos; and a novel discrete Fourier transform driven class activation map for explainable AI and weakly supervised object localization and segmentation, enabling detailed vocal fold motion analysis in laryngeal endoscopy videos. Experiments on the proposed methods showed robust and promising results towards automated, objective, and quantitative analysis of biomedical data, which is of great value for early diagnosis and effective monitoring of disease progression and treatment.
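The class-activation-map idea underlying the last contribution can be illustrated with a minimal sketch of the standard CAM computation (weighted sum of the final convolutional feature maps, normalized for visualization); the DFT-driven variant is the dissertation's own contribution and is not reproduced here, and the array shapes below are hypothetical:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Standard CAM: weighted sum of the last conv layer's feature maps.

    feature_maps:  (C, H, W) activations from the final conv layer
    class_weights: (C,) weights of the target class in the linear
                   classifier that follows global average pooling
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()  # normalize to [0, 1] for visualization
    return cam

# toy example: 4 channels of 8x8 activations
rng = np.random.default_rng(0)
cam = class_activation_map(rng.random((4, 8, 8)), rng.random(4))
print(cam.shape)  # (8, 8)
```

Thresholding such a map yields the coarse object localization mask that weakly supervised segmentation approaches start from.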
Speech Recognition
Chapters in the first part of the book cover the essential speech processing techniques for building robust automatic speech recognition systems: the representation of speech signals and methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other speech processing applications that can use the information from automatic speech recognition for speaker identification and tracking, for prosody modeling in emotion-detection systems, and in other speech processing applications able to operate in real-world environments, such as mobile communication services and smart homes.
Effects of errorless learning on the acquisition of velopharyngeal movement control
Session 1pSC - Speech Communication: Cross-Linguistic Studies of Speech Sound Learning of the Languages of Hong Kong (Poster Session). The implicit motor learning literature suggests a benefit for learning if errors are minimized during practice. This study investigated whether the same principle holds for learning velopharyngeal movement control. Normal-speaking participants learned to produce hypernasal speech in either an errorless learning condition (in which the possibility for errors was limited) or an errorful learning condition (in which it was not). Nasality level of the participants' speech was measured by nasometer and reflected in nasalance scores (in %). Errorless learners began practicing hypernasal speech with a threshold nasalance score of 10%, which gradually increased to a threshold of 50% by the end. The same set of threshold targets was presented to errorful learners, but in reversed order. Errors were defined as the proportion of speech with a nasalance score below the threshold. The results showed that, relative to errorful learners, errorless learners displayed fewer errors (errorless 17.7% vs. errorful 50.7%) and a higher mean nasalance score (errorless 46.7% vs. errorful 31.3%) during the acquisition phase. Furthermore, errorless learners outperformed errorful learners in both retention and novel transfer tests. Acknowledgment: Supported by The University of Hong Kong Strategic Research Theme for Sciences of Learning. © 2012 Acoustical Society of America
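The error measure described above (the proportion of speech falling below the current nasalance threshold) is simple enough to sketch; the threshold schedule mirrors the one in the abstract, while the per-trial nasalance values are hypothetical:

```python
def error_proportion(nasalance_scores, threshold):
    """Fraction of nasalance samples (in %) that fall below the threshold,
    i.e. productions not hypernasal enough to count as correct."""
    below = sum(1 for s in nasalance_scores if s < threshold)
    return below / len(nasalance_scores)

# errorless schedule starts easy (10%) and ends hard (50%);
# errorful learners receive the same thresholds in reversed order
errorless_schedule = [10, 20, 30, 40, 50]
errorful_schedule = list(reversed(errorless_schedule))

trial_scores = [12.0, 28.5, 35.0, 41.2, 48.9]  # hypothetical nasalance (%)
print(error_proportion(trial_scores, threshold=30))  # 0.4
```

Starting with a low threshold makes early errors rare by construction, which is exactly how the errorless condition limits the possibility for errors.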
Face Image and Video Analysis in Biometrics and Health Applications
Computer Vision (CV) enables computers and systems to derive meaningful information from acquired visual inputs, such as images and videos, and to make decisions based on the extracted information. Its goal is to acquire, process, analyze, and understand this information by developing theoretical and algorithmic models. Biometrics are distinctive and measurable human characteristics used to label or describe individuals, combining computer vision with knowledge of human physiology (e.g., face, iris, fingerprint) and behavior (e.g., gait, gaze, voice). The face is one of the most informative biometric traits, and many studies have investigated it from the perspectives of different disciplines, ranging from computer vision and deep learning to neuroscience and biometrics. In this work, we analyze face characteristics in digital images and videos in the areas of morphing attack and defense and autism diagnosis. For face morphing attack generation, we proposed a transformer-based generative adversarial network that produces more visually realistic morphing attacks by combining several losses: face matching distance, a facial-landmark-based loss, perceptual loss, and pixel-wise mean squared error. For face morphing attack detection, we designed a fusion-based few-shot learning (FSL) method to learn discriminative features from face images for few-shot morphing attack detection (FS-MAD), and extended binary detection to multiclass classification, namely few-shot morphing attack fingerprinting (FS-MAF). For autism diagnosis, we developed a discriminative few-shot learning method to analyze hour-long video data and explored the fusion of facial dynamics for classifying autism spectrum disorder (ASD) at three severity levels. The results show outstanding performance of the proposed fusion-based few-shot framework on the dataset.
In addition, we explored the possibility of performing face micro-expression spotting and feature analysis on the autism video data to classify ASD and control groups. The results indicate the effectiveness of subtle facial expression changes for autism diagnosis.
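The few-shot decision step that methods like the ones described rely on can be sketched with a simple nearest-prototype rule over precomputed embeddings (in the style of prototypical networks); the actual architectures, losses, and features are the authors' own, and the embeddings below are toy values:

```python
import numpy as np

def classify_few_shot(support, support_labels, query):
    """Nearest-prototype few-shot classification.

    support: (N, D) embeddings of the few labeled examples
    support_labels: (N,) integer class labels
    query: (D,) embedding to classify
    """
    classes = np.unique(support_labels)
    # class prototype = mean embedding of its support examples
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in classes])
    dists = np.linalg.norm(prototypes - query, axis=1)
    return classes[np.argmin(dists)]

# toy 2-way 2-shot episode in a 2-D embedding space
support = np.array([[0.0, 0.0], [0.2, 0.1],   # class 0
                    [1.0, 1.0], [0.9, 1.1]])  # class 1
labels = np.array([0, 0, 1, 1])
print(classify_few_shot(support, labels, np.array([0.95, 1.0])))  # 1
```

Because only class prototypes are estimated, such a classifier needs just a handful of labeled examples per class, which is what makes few-shot settings like FS-MAD feasible.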
Biometrics
Biometrics comprises methods for uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits. In computer science in particular, biometrics is used as a form of identity and access management and access control. It is also used to identify individuals in groups that are under surveillance. The book consists of 13 chapters, each focusing on a certain aspect of the problem. The chapters are divided into three sections: physical biometrics, behavioral biometrics, and medical biometrics. The key objective of the book is to provide a comprehensive reference and text on human authentication and identity verification from physiological, behavioral, and other points of view. It aims to publish new insights into current innovations in computer systems and technology for biometrics development and its applications. The book was reviewed by the editor, Dr. Jucheng Yang, and by guest editors including Dr. Girija Chetty, Dr. Norman Poh, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park, and Dr. Sook Yoon, who also made significant contributions to the book.
QUEST Hierarchy for Hyperspectral Face Recognition
Face recognition is an attractive biometric due to the ease with which photographs of the human face can be acquired and processed. The non-intrusive nature of many surveillance systems permits face recognition applications to be used in a myriad of environments. Despite decades of impressive research in this area, face recognition still struggles with variations in illumination, pose, and expression, not to mention the larger challenge of willful circumvention. The integration of supporting contextual information in a fusion hierarchy known as QUalia Exploitation of Sensor Technology (QUEST) is a novel approach to hyperspectral face recognition that yields performance advantages and a robustness not seen in leading face recognition methodologies. This research demonstrates a method for the exploitation of hyperspectral imagery and the intelligent processing of contextual layers of spatial, spectral, and temporal information. The approach illustrates the benefit of integrating the spatial and spectral domains of imagery for the automatic extraction and integration of novel soft biometric features. The QUEST methodology for face recognition yields an engineering advantage in both performance and efficiency compared to leading and classical face recognition techniques. An interactive environment for testing and extending this recognition framework is also provided.
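A fusion hierarchy of this kind ultimately combines evidence from multiple information layers; a minimal, hypothetical sketch of one common building block, weighted score-level fusion, is shown below (the weights and match scores are invented for illustration, and the actual QUEST architecture is far richer):

```python
def fuse_scores(scores, weights):
    """Weighted score-level fusion of per-layer match scores in [0, 1]."""
    assert len(scores) == len(weights)
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

# hypothetical match scores from spatial, spectral, and temporal layers
fused = fuse_scores([0.82, 0.64, 0.71], weights=[0.5, 0.3, 0.2])
print(round(fused, 3))  # 0.744
```

Weighting lets a more reliable layer (here, the spatial one) dominate the fused decision without discarding the corroborating layers.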
Cultural Context-Aware Models and IT Applications for the Exploitation of Musical Heritage
Information engineering has always expanded its scope by inspiring innovation in different scientific disciplines. In particular, over the last sixty years, music and engineering have forged a strong connection in the discipline known as "Sound and Music Computing". Musical heritage is a paradigmatic case that includes several multi-faceted cultural artefacts and traditions. Several issues arise from the analog-to-digital transfer of cultural objects, concerning their creation, preservation, access, analysis, and experiencing. The keystone is the relationship of these digitized cultural objects with their carrier and cultural context. The terms "cultural context" and "cultural context awareness" are delineated, alongside the concepts of contextual information and metadata. Because contextual information and metadata maintain the integrity of the object, its meaning, and its cultural context, their role is critical. This thesis explores three main case studies concerning historical audio recordings and ancient musical instruments, aiming to delineate models to preserve, analyze, access, and experience the digital versions of these three prominent examples of musical heritage.
The first case study concerns analog magnetic tapes and, in particular, tape music, an experimental genre born in the second half of the 20th century. This case study has relevant implications from the musicological, philological, and archival points of view, since the carrier has a paramount role and the tight connection with its content can easily break during the digitization process or the access phase. To help musicologists and audio technicians in their work, several tools based on Artificial Intelligence are evaluated on tasks such as discontinuity detection and equalization recognition. Considering the peculiarities of tape music, the philological problem of stemmatics for digitized audio documents is tackled: an algorithm based on phylogenetic techniques is proposed and assessed, confirming the suitability of these techniques for the task. Then, a methodology for historically faithful access to digitized tape music recordings is introduced, taking into account contextual information and its relationship with the carrier and the replay device. Based on this methodology, an Android app that virtualizes a tape recorder is presented, together with its assessment. Furthermore, two web applications are proposed to faithfully experience digitized 78 rpm discs and magnetic tape recordings, respectively. Finally, a prototype web application for musicological analysis is presented, which aims to concentrate relevant parts of the knowledge acquired in this work into a single interface.
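The stemmatic step can be sketched in miniature: given pairwise distances between digitized copies of a recording, a simple average-linkage (UPGMA-style) clustering groups copies that likely share an ancestor. This is only a stand-in for the phylogenetic algorithm actually proposed in the thesis, and the copy names and distance values are hypothetical:

```python
import numpy as np

def upgma(dist, names):
    """Minimal UPGMA clustering over a symmetric distance matrix.

    Repeatedly merges the two closest groups of copies until a single
    tree (as nested tuples) remains.
    """
    dist = dist.astype(float)
    clusters = [(n, 1) for n in names]  # (subtree, leaf count)
    while len(clusters) > 1:
        n = len(clusters)
        # find the closest pair (i < j)
        i, j = min(((a, b) for a in range(n) for b in range(a + 1, n)),
                   key=lambda p: dist[p[0], p[1]])
        (ta, ca), (tb, cb) = clusters[i], clusters[j]
        merged = ((ta, tb), ca + cb)
        # average-linkage update of distances to the merged cluster
        new_row = [(ca * dist[i, k] + cb * dist[j, k]) / (ca + cb)
                   for k in range(n) if k not in (i, j)]
        keep = [k for k in range(n) if k not in (i, j)]
        dist = np.pad(dist[np.ix_(keep, keep)], ((0, 1), (0, 1)))
        dist[-1, :-1] = new_row
        dist[:-1, -1] = new_row
        clusters = [clusters[k] for k in keep] + [merged]
    return clusters[0][0]

# hypothetical pairwise distances between four digitized copies
names = ["master", "copy_A", "copy_B", "copy_C"]
D = np.array([[0, 1, 4, 4],
              [1, 0, 4, 4],
              [4, 4, 0, 2],
              [4, 4, 2, 0]])
print(upgma(D, names))  # -> (('master', 'copy_A'), ('copy_B', 'copy_C'))
```

The resulting tree topology is what a philologist would read as a candidate stemma: copies that merge early are the most likely to descend from a common source.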
The second case study is a corpus of Arab-Andalusian music suitable for computational research, which opens new opportunities for musicological studies through data-driven analysis. The description of the corpus is based on the five criteria formalized in the CompMusic project of the Universitat Pompeu Fabra in Barcelona: purpose, coverage, completeness, quality, and re-usability. Four Jupyter notebooks were developed to provide computational musicologists with a useful tool for analyzing and using the data and metadata of the corpus.
The third case study concerns an exceptional historical musical instrument: an ancient Pan flute exhibited at the Museum of Archaeological Sciences and Art of the University of Padova. The final objective was the creation of a multimedia installation to valorize this precious artifact and to allow visitors to interact with the archaeological find and learn its history. The case study provided the opportunity to develop a methodology suitable for the valorization of this ancient musical instrument but also extensible to other artifacts or museum collections. Both the methodology and the resulting multimedia installation are presented, followed by an assessment carried out by a multidisciplinary group of experts.