
    Asian female facial beauty prediction using deep neural networks via transfer learning and multi-channel feature fusion

    Facial beauty plays an important role in many fields today, such as digital entertainment and facial beautification surgery. However, the facial beauty prediction (FBP) task faces several challenges: insufficient training data, the low performance of traditional methods, and the fact that the feature-learning capability of Convolutional Neural Networks (CNNs) is rarely exploited. In this paper, a transfer-learning-based CNN method that integrates multi-channel features is applied to the Asian female facial beauty prediction task. Firstly, a Large-Scale Asian Female Beauty Dataset (LSAFBD) with a more reasonable distribution is established. Secondly, to improve the CNN's self-learning ability on the facial beauty prediction task, an effective CNN using a novel Softmax-MSE loss function and a double activation layer is proposed. Then, a data augmentation method and a transfer learning strategy are used to mitigate the impact of insufficient data on the proposed CNN's performance. Finally, a multi-channel feature fusion method is explored to further optimize the proposed CNN model. Experimental results show that the proposed method is superior to traditional learning methods on the Asian female FBP task. Compared with other state-of-the-art CNN models, the proposed model improves the rank-1 recognition rate from 60.40% to 64.85% and the Pearson correlation coefficient from 0.8594 to 0.8829 on the LSAFBD, and obtains a regression prediction result of 0.9200 on the SCUT dataset.
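The abstract names a "Softmax-MSE" loss but does not define it; one plausible reading, sketched below with NumPy, is to apply a softmax to the class scores and penalise the squared error against one-hot targets. The function names, shapes, and the five-class example are illustrative assumptions, not the paper's code.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def softmax_mse_loss(logits, labels, num_classes):
    # One plausible "Softmax-MSE": turn raw scores into class
    # probabilities, then take mean squared error against one-hot targets.
    probs = softmax(logits)
    one_hot = np.eye(num_classes)[labels]
    return np.mean((probs - one_hot) ** 2)

# Toy example: 2 samples, 5 hypothetical beauty-score classes.
logits = np.array([[2.0, 0.5, 0.1, -1.0, 0.0],
                   [0.1, 0.2, 3.0, 0.0, -0.5]])
labels = np.array([0, 2])
loss = softmax_mse_loss(logits, labels, num_classes=5)
```

Unlike cross-entropy, this formulation penalises every class probability quadratically, which may be why the authors pair it with a regression-style evaluation (Pearson correlation).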

    Learning Transferable Features From Different Domains

    Recent machine learning progress usually assumes that the data for training and testing come from the same distribution. In practice, however, the data may be gathered separately as different datasets. Learning from data drawn from several source domains and generalizing to another domain is a crucial problem in machine learning. We tackle this kind of problem in the context of Transfer Learning (TL), including Domain Adaptation (DA), Domain Generalization (DG) and Multi-task Learning (MTL), with the aim of transferring common invariant features to new domains. We have investigated this kind of transfer learning from several aspects, including conditional shift problems in domain adaptation, and semantic misalignment and label shift problems in domain generalization and multi-task learning, with several accomplishments. Concretely, we first explore conditional shift problems in DA with an active learning strategy that queries the most informative instances in the target domain to mitigate the disagreement term between the source and target domain labelling functions. 
We then explore category similarities in DG problems via optimal-transport-based adversarial training with a metric similarity learning objective, enhancing both domain-level and class-level matching. After that, we investigate the label and semantic relations in MTL, where we provide a first theoretical understanding of how to control the label and semantic distribution divergences. Lastly, we extend this theoretical analysis to DG, providing a first analysis of the generalization properties obtained by controlling the label and semantic distribution divergences. For each work reflected in this thesis, we conduct intensive experiments to demonstrate the effectiveness of the learning objectives. The experimental results confirm that our methods achieve the performance indicated by the analysis and learning principles, which validates the contributions of this thesis.

    The role of time in video understanding


    Core Challenges in Embodied Vision-Language Planning

    Recent advances in multimodal machine learning and artificial intelligence (AI) have led to the development of challenging tasks at the intersection of Computer Vision, Natural Language Processing, and Embodied AI. Whereas many approaches and previous surveys have characterised one or two of these dimensions, there has not been a holistic analysis at the center of all three. Moreover, even when combinations of these topics are considered, more focus is placed on describing, e.g., current architectural methods, as opposed to also illustrating high-level challenges and opportunities for the field. In this survey paper, we discuss Embodied Vision-Language Planning (EVLP) tasks, a family of prominent embodied navigation and manipulation problems that jointly use computer vision and natural language. We propose a taxonomy to unify these tasks and provide an in-depth analysis and comparison of new and current algorithmic approaches, metrics, simulated environments, and the datasets used for EVLP tasks. Finally, we present the core challenges that we believe new EVLP works should seek to address, and we advocate for task construction that enables model generalizability and furthers real-world deployment.

    Neural Natural Language Generation: A Survey on Multilinguality, Multimodality, Controllability and Learning

    Developing artificial learning systems that can understand and generate natural language has been one of the long-standing goals of artificial intelligence. Recent decades have witnessed impressive progress on both of these problems, giving rise to a new family of approaches. In particular, advances in deep learning over the past several years have led to neural approaches to natural language generation (NLG). These methods combine generative language learning techniques with neural-network-based frameworks. With a wide range of applications in natural language processing, neural NLG (NNLG) is a new and fast-growing field of research. In this state-of-the-art report, we investigate the recent developments and applications of NNLG in full from a multidimensional view, covering critical perspectives such as multimodality, multilinguality, controllability and learning strategies. We summarize the fundamental building blocks of NNLG approaches from these aspects and provide detailed reviews of commonly used preprocessing steps and basic neural architectures. This report also covers the seminal applications of these NNLG models, such as machine translation, description generation, automatic speech recognition, abstractive summarization, text simplification, question answering and generation, and dialogue generation. Finally, we conclude with a thorough discussion of the described frameworks by pointing out some open research directions.

    This work has been partially supported by the European Commission ICT COST Action "Multi-task, Multilingual, Multi-modal Language Generation" (CA18231). AE was supported by the BAGEP 2021 Award of the Science Academy. EE was supported in part by the TUBA GEBIP 2018 Award. BP is funded in part by Independent Research Fund Denmark (DFF) grant 9063-00077B. IC has received funding from the European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement No 838188. EL is partly funded by Generalitat Valenciana and the Spanish Government through projects PROMETEU/2018/089 and RTI2018-094649-B-I00, respectively. SMI is partly funded by UNIRI project uniri-drustv-18-20. GB is partly supported by the Ministry of Innovation and the National Research, Development and Innovation Office within the framework of the Hungarian Artificial Intelligence National Laboratory Programme. COT is partially funded by the Romanian Ministry of European Investments and Projects through the Competitiveness Operational Program (POC) project "HOLOTRAIN" (grant no. 29/221 ap2/07.04.2020, SMIS code: 129077) and by the German Academic Exchange Service (DAAD) through the project "AWAKEN: content-Aware and netWork-Aware faKE News mitigation" (grant no. 91809005). ESA is partially funded by the German Academic Exchange Service (DAAD) through the project "Deep-Learning Anomaly Detection for Human and Automated Users Behavior" (grant no. 91809358).
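Most NNLG systems surveyed above share one mechanical core: autoregressive decoding, where the model repeatedly scores the vocabulary given the tokens generated so far and a search strategy picks the next token. A minimal greedy-search sketch follows; the toy "model" and token ids are invented purely for illustration and stand in for any real scoring network.

```python
import numpy as np

def greedy_decode(step_fn, bos_id, eos_id, max_len=20):
    # Greedy search: at each step, feed the tokens generated so far to
    # the model and append the highest-scoring next token.
    tokens = [bos_id]
    for _ in range(max_len):
        logits = step_fn(tokens)      # scores over the vocabulary
        next_id = int(np.argmax(logits))
        tokens.append(next_id)
        if next_id == eos_id:         # stop once end-of-sequence is emitted
            break
    return tokens

# Toy stand-in model with a 5-token vocabulary: always prefers the
# token (last + 1), so it counts up until it reaches EOS (= 4).
def toy_step(tokens):
    vocab = 5
    logits = np.zeros(vocab)
    logits[min(tokens[-1] + 1, vocab - 1)] = 1.0
    return logits

out = greedy_decode(toy_step, bos_id=0, eos_id=4)  # -> [0, 1, 2, 3, 4]
```

Beam search, sampling, and the controllability techniques the report surveys all vary the selection step inside this same loop.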

    Computer Science 2019 APR Self-Study & Documents

    UNM Computer Science APR self-study report and review team report for Spring 2019, fulfilling requirements of the Higher Learning Commission

    From pixels to people: recovering location, shape and pose of humans in images

    Humans are at the centre of a significant amount of research in computer vision. Endowing machines with the ability to perceive people from visual data is an immense scientific challenge with a high degree of direct practical relevance. Success in automatic perception can be measured at different levels of abstraction, depending on which intelligent behaviour we are trying to replicate: the ability to localise persons in an image or in the environment, understanding how persons are moving at the skeleton and at the surface level, interpreting their interactions with the environment including with other people, and perhaps even anticipating future actions. In this thesis we tackle different sub-problems of the broad research area referred to as "looking at people", aiming to perceive humans in images at different levels of granularity.

    We start with bounding box-level pedestrian detection: we present a retrospective analysis of methods published in the decade preceding our work, identifying various strands of research that have advanced the state of the art. With quantitative experiments, we demonstrate the critical role of developing better feature representations and having the right training distribution. We then contribute two methods based on the insights derived from our analysis: one that combines the strongest aspects of past detectors and another that focuses purely on learning representations. The latter method outperforms more complicated approaches, especially those based on hand-crafted features. We conclude our work on pedestrian detection with a forward-looking analysis that maps out potential avenues for future research.

    We then turn to pixel-level methods: perceiving humans requires us to both separate them precisely from the background and identify their surroundings. To this end, we introduce Cityscapes, a large-scale dataset for street scene understanding, which has since established itself as a go-to benchmark for segmentation and detection. We additionally develop methods that relax the requirement for expensive pixel-level annotations, focusing on the task of boundary detection, i.e. identifying the outlines of relevant objects and surfaces.

    Next, we make the jump from pixels to 3D surfaces, from localising and labelling to fine-grained spatial understanding. We contribute a method for recovering 3D human shape and pose, which marries the advantages of learning-based and model-based approaches. We conclude the thesis with a detailed discussion of benchmarking practices in computer vision. Among other things, we argue that the design of future datasets should be driven by the general goal of combinatorial robustness besides task-specific considerations.
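The detection experiments described in this abstract rest on a standard matching criterion: a predicted box counts as correct when its intersection-over-union (IoU) with a ground-truth box exceeds a threshold. A minimal sketch of that computation follows; the corner-coordinate box format is the common convention, not a specific of the thesis.

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2) corner coordinates.
    # IoU = intersection area / union area, the standard overlap
    # criterion for matching detections to ground truth.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes offset by 5 in each direction:
# intersection 5*5 = 25, union 100 + 100 - 25 = 175.
score = iou((0, 0, 10, 10), (5, 5, 15, 15))  # -> 25/175, about 0.143
```

Benchmarks such as the pedestrian-detection protocols and Cityscapes evaluations discussed above differ mainly in the IoU threshold and in how unmatched boxes are counted.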