
    A Comparison of Open-Source Segmentation Architectures for Dealing with Imperfect Data from the Media in Speech Synthesis

    Proceedings of the 15th Annual Conference of the International Speech Communication Association, Singapore, September 14-18, 2014. Traditional Text-To-Speech (TTS) systems have been developed using specially designed, non-expressive scripted recordings. In order to develop a new generation of expressive TTS systems in the Simple4All project, real recordings from the media should be used to train new voices with a whole new range of speaking styles. However, to process this more spontaneous material, the new systems must be able to deal with imperfect data (multi-speaker recordings, background and foreground music and noise), filtering out low-quality audio segments and creating mono-speaker clusters. In this paper we compare several architectures for combining speaker diarization with music and noise detection, which improve the precision and overall quality of the segmentation. This work was carried out during the research stay of A. Gallardo-Antolín and J. M. Montero at the Centre for Speech Technology Research (CSTR), University of Edinburgh, supported by the Spanish Ministry of Education, Culture and Sports under the National Program of Human Resources Mobility from the I+D+i 2008-2011 National Program, extended by agreement of the Council of Ministers on October 7th, 2011. The work leading to these results has received funding from the European Union under grant agreement No 287678. It has also been supported by EPSRC Programme Grant EP/I031022/1 (Natural Speech Technology, NST) and Spanish Government grants TEC2011-26807 and DPI2010-21247-C02-02.
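
The paper's specific architectures are not reproduced here, but a minimal sketch of the kind of pipeline it compares — filtering out music, noise, and low-quality audio before clustering the remaining speech into mono-speaker groups — might look as follows. Both helpers (`detect_speech_regions`, `diarize`) are hypothetical placeholders standing in for whatever detectors are plugged in, not the paper's code.

```python
# Hypothetical sketch of a media-segmentation pipeline of the kind compared
# in the paper: detect usable speech first, then cluster it by speaker.
# `detect_speech_regions` and `diarize` are placeholder names, not a real API.

def detect_speech_regions(audio):
    """Return (start, end) spans judged to be clean speech
    (i.e. not music, noise, or overlapping speakers)."""
    raise NotImplementedError  # plug in any music/noise detector here

def diarize(audio, regions):
    """Assign a speaker label to each clean-speech span."""
    raise NotImplementedError  # plug in any speaker-diarization system here

def build_mono_speaker_clusters(audio):
    regions = detect_speech_regions(audio)      # 1. filter low-quality segments
    labeled = diarize(audio, regions)           # 2. decide who spoke when
    clusters = {}
    for (start, end), speaker in labeled:       # 3. group spans per speaker
        clusters.setdefault(speaker, []).append((start, end))
    return clusters
```

The architectures compared in the paper differ mainly in how steps 1 and 2 are ordered and combined; the sketch shows only the simplest cascade.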

    Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions

    Multimodal machine learning is a vibrant multi-disciplinary research field that aims to design computer agents with intelligent capabilities such as understanding, reasoning, and learning through the integration of multiple communicative modalities, including linguistic, acoustic, visual, tactile, and physiological messages. With the recent interest in video understanding, embodied autonomous agents, text-to-image generation, and multisensor fusion in application domains such as healthcare and robotics, multimodal machine learning has brought unique computational and theoretical challenges to the machine learning community, given the heterogeneity of data sources and the interconnections often found between modalities. However, the breadth of progress in multimodal research has made it difficult to identify the common themes and open questions in the field. By synthesizing a broad range of application domains and theoretical frameworks from both historical and recent perspectives, this paper provides an overview of the computational and theoretical foundations of multimodal machine learning. We start by defining two key principles, modality heterogeneity and modality interconnections, that have driven subsequent innovations, and propose a taxonomy of six core technical challenges covering historical and recent trends: representation, alignment, reasoning, generation, transference, and quantification. Recent technical achievements are presented through the lens of this taxonomy, allowing researchers to understand the similarities and differences across new approaches. We end by motivating several open problems for future research as identified by our taxonomy.

    Spoken content retrieval: A survey of techniques and technologies

    Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside the studio and in conversational settings. This survey provides an overview of the field of SCR, encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.
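
As a rough illustration of the ASR-plus-IR combination the survey covers, here is a minimal keyword-search sketch over transcripts. `transcribe` is a hypothetical stand-in for any speech recognizer; the index itself is a classic inverted index from text IR.

```python
from collections import defaultdict

def transcribe(audio_file):
    """Hypothetical ASR stand-in: return the spoken words in the file."""
    raise NotImplementedError  # plug in any speech recognizer here

def build_index(audio_files):
    # Inverted index: term -> set of files whose transcript contains it.
    index = defaultdict(set)
    for f in audio_files:
        for word in transcribe(f):
            index[word.lower()].add(f)
    return index

def search(index, query):
    # Return files whose transcripts contain every query term.
    terms = [t.lower() for t in query.split()]
    hits = [index.get(t, set()) for t in terms]
    return set.intersection(*hits) if hits else set()
```

Real SCR systems must additionally cope with recognition errors and segment boundaries, which is precisely where the survey's combination of the two fields becomes non-trivial.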

    NASA JSC neural network survey results

    A survey of artificial neural systems was conducted in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control Research Program. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    Using vision transformer to synthesize computed tomography perfusion images in ischemic stroke patients

    Computed tomography perfusion (CTP) imaging is crucial for diagnosing and determining the extent of damage in cerebral stroke patients [1]. Automatic segmentation of ischemic core and penumbra regions in CTP images is desirable, given the limitations of manual examination. Self-supervised segmentation has gained attention [2], but it requires a large training set, which can be obtained by synthesizing CTP images. Deep convolutional generative adversarial networks (DCGANs) have been used for this purpose [3], but high-resolution image synthesis remains a challenge. To address this, we propose to tailor the high-resolution transformer-based generative adversarial network (HiT-GAN) model of Zhao et al. [4], which uses vision transformers and self-attention mechanisms, to generate high-quality CTP data. Our model was trained on CTP images from 157 patients, categorized by vessel occlusion. The dataset consisted of 70,050 raw images, which were normalized and downsampled. In a comparative evaluation, HiT-GAN achieved a significantly lower Fréchet inception distance (FID) of 77.4, versus 143.0 for the DCGAN, indicating superior image generation performance. The generated images were visually compared with real samples, demonstrating promising results. While the current focus is on generating 2D images, future work aims to extend the model to generate 3D CTP data conditioned on labeled brain slices. Overall, our study highlights the potential of HiT-GAN for synthesizing high-resolution CTP images, although its significance for advancing automatic segmentation in ischemic stroke analysis remains to be examined.
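
For reference, the FID scores quoted above compare Gaussian fits to the feature distributions of real and generated images. A minimal sketch of the standard definition, assuming Inception-style feature vectors have already been extracted for both sets (this is the textbook formula, not the paper's exact evaluation code):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(real_feats, gen_feats):
    """FID between two sets of feature vectors (rows = samples).

    FID = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^{1/2})
    Lower is better.
    """
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):   # numerical noise can yield tiny
        covmean = covmean.real     # imaginary parts; discard them
    diff = mu_r - mu_g
    return diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean)
```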

    Natural image processing and synthesis using deep learning

    In this thesis, we study how deep neural networks can be applied to various tasks in computer vision. Computer vision is an interdisciplinary field that deals with the understanding of digital images and video. Traditionally, the problems arising in this domain were tackled with heavily hand-engineered, ad hoc methods: until recently, a typical computer vision system consisted of a sequence of independent modules that barely talked to each other. Such an approach is quite reasonable in the case of limited data, as it takes maximum advantage of the researcher's domain expertise, but this strength turns into a weakness if some input scenarios are overlooked in the algorithm design process. With rapidly increasing volumes and varieties of data, and the advent of cheaper and faster computational resources, end-to-end deep neural networks have become an appealing alternative to traditional computer vision pipelines. We demonstrate this in a series of research articles, each of which considers a particular image analysis or synthesis task and presents a solution built on a "deep" backbone. In the first article, we deal with the classic low-level vision problem of edge detection. Inspired by a top-performing non-neural approach, we take a step towards building an end-to-end system by combining feature extraction and description in a single convolutional network. The resulting fully data-driven method matches or surpasses the detection quality of existing conventional approaches in the settings for which they were designed, while being significantly more usable in out-of-domain situations. In the second article, we introduce a custom architecture for image manipulation based on the idea that most of the pixels in the output image can be copied directly from the input. This technique has several significant advantages over the naive black-box neural approach: it retains the level of detail of the original images, does not introduce artifacts due to insufficient capacity of the underlying network, and simplifies the training process, to name a few. We demonstrate the efficiency of the proposed architecture on the challenging gaze correction task, where our system achieves excellent results. In the third article, we diverge slightly from pure computer vision to study the more general problem of domain adaptation. We introduce a novel training-time algorithm in which adaptation is attained by using an auxiliary objective in addition to the main one: we seek to extract features that are useful for the task at hand while maximally confusing a dedicated network called the domain classifier. The domain classifier is learned simultaneously with the features and attempts to tell whether those features come from the source or the target domain. The proposed technique is easy to implement, yet results in superior performance on all the standard benchmarks. Finally, the fourth article presents a new kind of generative model for image data. Unlike conventional neural-network-based approaches, our system, dubbed SPIRAL, describes images in terms of concise low-level programs executed by the off-the-shelf rendering software humans use to create visual content. Among other things, this allows SPIRAL not to waste its capacity on the minutiae of datasets and to focus instead on their global structure. The latent space of our model is interpretable by design and provides means for predictable image manipulation. We test our approach on several popular datasets and demonstrate its power and flexibility.
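
The domain-adaptation article summarized above describes domain-adversarial training: features are optimized to be useful for the main task while confusing a jointly trained domain classifier. Its core mechanism can be sketched as a gradient-reversal layer in a few lines of PyTorch (a minimal illustration of the idea, not the thesis code; `backbone` and `domain_classifier` in the usage comment are hypothetical modules):

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; multiplies the gradient by -lambda
    in the backward pass, so the feature extractor below this layer is
    pushed to *confuse* the domain classifier sitting on top of it."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # no gradient for lambd

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch (hypothetical modules):
#   features      = backbone(x)
#   domain_logits = domain_classifier(grad_reverse(features))
#   loss = task_loss + domain_loss
# Minimizing domain_loss through the reversed gradient maximizes domain
# confusion in the backbone while the classifier itself still learns normally.
```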

    Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods

    This Special Issue is a book that collects peer-reviewed papers on various advanced technologies related to the applications and theory of signal processing for multimedia systems using machine learning or other advanced methods. Multimedia signals include image, video, audio, character recognition, and the optimization of communication channels for networks. The specific topics covered in this book are data hiding, encryption, object detection, image classification, and character recognition. Academics and colleagues interested in these topics should find it a rewarding read.

    Addressing the data bottleneck in medical deep learning models using a human-in-the-loop machine learning approach

    Any machine learning (ML) model is highly dependent on the data it uses for learning, and this is even more true of deep learning models. The problem is a data bottleneck, i.e. the difficulty of obtaining an adequate number of high-quality cases. Another issue is improving the learning process itself, which can be done by actively introducing experts into the learning loop, in what is known as human-in-the-loop (HITL) ML. We describe an ML model based on a neural network in which HITL techniques were used to resolve the data bottleneck problem for the treatment of pancreatic cancer. We first augmented the dataset using synthetic cases created by a generative adversarial network. We then launched an active learning (AL) process involving human experts as oracles to label both new cases and cases the network found to be suspect. This AL process was carried out simultaneously with an interactive ML process in which feedback was obtained from humans in order to develop better synthetic cases for each iteration of training. We discuss the challenges involved in including humans in the learning process, especially in relation to human–computer interaction, which is acquiring great importance in building ML models and can determine the success of a HITL approach. This paper also discusses the methodological approach adopted to address these challenges. This work has been supported by the State Research Agency of the Spanish Government (Grant PID2019-107194GB-I00/AEI/10.13039/501100011033) and by the Xunta de Galicia (Grant ED431C 2022/44), supported in turn by the EU European Regional Development Fund. We wish to acknowledge support received from the Centro de Investigación de Galicia CITIC, funded by the Xunta de Galicia and the European Regional Development Fund (Galicia 2014–2020 Program; Grant ED431G 2019/01).
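
A minimal skeleton of the combined loop the abstract describes — GAN-based augmentation plus active learning with human oracles and interactive feedback into the generator. Every object and method name below (`gan.generate`, `expert.label`, `model.uncertainty`, and so on) is a hypothetical placeholder, not the paper's code:

```python
# Hypothetical skeleton of a human-in-the-loop training cycle like the one
# described above; every helper is a placeholder, not the paper's API.

def hitl_training_loop(model, labeled, unlabeled, gan, expert, rounds=10):
    for _ in range(rounds):
        # 1. Augment scarce data with synthetic cases from the GAN.
        synthetic = gan.generate(n=100)

        # 2. Active learning: query the cases the model is least sure about.
        scored = sorted(unlabeled + synthetic,
                        key=model.uncertainty, reverse=True)
        queries = scored[:20]

        # 3. Human experts act as oracles, labeling the queried cases
        #    (new cases as well as suspect ones).
        labeled += [(case, expert.label(case)) for case in queries]

        # 4. Interactive ML: expert feedback steers the next GAN iteration,
        #    so synthetic cases improve round over round.
        gan.update_from_feedback(expert.critique(synthetic))

        model.fit(labeled)
    return model
```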
