
    Symbol Emergence in Robotics: A Survey

    Humans learn to use language through physical interaction with their environment and semiotic communication with other people. It is therefore important to obtain a computational understanding of how humans form a symbol system and acquire semiotic skills through autonomous mental development. Recently, many studies have been conducted on robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and with other systems. Understanding human social interactions and developing a robot that can smoothly communicate with human users over the long term both require an understanding of the dynamics of symbol systems. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system, which is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe state-of-the-art research topics in SER, e.g., multimodal categorization, word discovery, and double articulation analysis, which enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual, haptic, and auditory information and acoustic speech signals, in a fully unsupervised manner. Finally, we suggest future directions for research in SER. (Comment: submitted to Advanced Robotics)

    Characterising Eye Movement Events with an Unsupervised Hidden Markov Model

    Eye-tracking allows researchers to infer cognitive processes from eye movements that are classified into distinct events. Parsing these events is typically done by algorithms. Here we aim to develop an unsupervised, generative model that can be fitted to eye-movement data using maximum likelihood estimation. This approach allows hypothesis testing about fitted models, in addition to serving as a classification method. We developed gazeHMM, an algorithm that uses a hidden Markov model as a generative model, has few critical parameters to be set by users, and does not require human-coded data as input. The algorithm classifies gaze data into fixations, saccades, and, optionally, postsaccadic oscillations and smooth pursuits. We evaluated gazeHMM's performance in a simulation study, showing that it successfully recovered hidden Markov model parameters and hidden states. Parameters were recovered less well when we included a smooth-pursuit state and/or added even small noise to the simulated data. We applied generative models with different numbers of events to benchmark data; comparing them indicated that hidden Markov models with more events than expected had most likely generated the data. We also applied the full algorithm to benchmark data and assessed its similarity to human coding and to other algorithms. For static stimuli, gazeHMM showed high similarity and outperformed other algorithms in this regard. For dynamic stimuli, gazeHMM tended to switch rapidly between fixations and smooth pursuits but still displayed higher similarity than most other algorithms. Concluding that gazeHMM can be used in practice, we recommend parsing smooth pursuits only for exploratory purposes. Future hidden Markov model algorithms could use covariates to better capture eye-movement processes and could explicitly model event durations to classify smooth pursuits more accurately.
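
The core of such an HMM-based event parser can be sketched with a Viterbi decode: given per-state emission log-likelihoods for each gaze sample, recover the most likely sequence of hidden events. The two-state model below (fixation vs. saccade, distinguished by velocity) and all of its parameters are illustrative assumptions, not gazeHMM's actual parameterization:

```python
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """Most likely hidden-state path for one observation sequence.

    log_pi: (S,)   log initial-state probabilities
    log_A:  (S, S) log transition matrix
    log_B:  (T, S) log emission likelihood of each observation per state
    """
    T, S = log_B.shape
    delta = log_pi + log_B[0]              # best log-prob ending in each state
    back = np.zeros((T, S), dtype=int)     # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_A    # (from-state, to-state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):         # backtrack
        path[t] = back[t + 1, path[t + 1]]
    return path

# Hypothetical 2-state model: 0 = fixation (low velocity), 1 = saccade (high)
velocities = np.array([5., 4., 6., 300., 280., 7., 5.])   # deg/s, made up
means, stds = np.array([10., 250.]), np.array([20., 60.])
log_B = -0.5 * ((velocities[:, None] - means) / stds) ** 2 - np.log(stds)
log_pi = np.log([0.9, 0.1])
log_A = np.log([[0.95, 0.05], [0.30, 0.70]])

states = viterbi(log_pi, log_A, log_B)
print(states.tolist())   # → [0, 0, 0, 1, 1, 0, 0]
```

A full model like gazeHMM additionally fits these parameters by maximum likelihood rather than fixing them by hand, and uses richer emission distributions over velocity and direction.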

    White Matter Hyperintensity and Multi-region Brain MRI Segmentation Using Convolutional Neural Network

    Accurate segmentation of white matter hyperintensity (WMH) from magnetic resonance images is a prerequisite for many precise medical procedures, especially the diagnosis of vascular dementia. Brain segmentation has important research significance and clinical application prospects, especially for the early detection of Alzheimer's disease. To perform accurate segmentation according to the MRI characteristics of different brain regions, this thesis proposes an optimized 3D U-Net and uses WMH segmentation as a pre-experiment to select good hyperparameters (i.e., network depth, image fusion method, and the choice of loss function) for constructing an image feature-learning network with both long and short skip connections. Soft voting is used as the postprocessing procedure. Our model is evaluated by 10-fold cross-validation and achieves a Dice score of 0.78 for binary segmentation (WMH segmentation) and an accuracy of 0.96 for multi-class segmentation (139-region brain segmentation), outperforming other methods.
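
The soft-voting postprocessing mentioned above can be sketched as averaging the per-class probability maps produced by several models and taking the per-voxel argmax. The shapes and numbers below are illustrative toy values, not the thesis's actual configuration:

```python
import numpy as np

def soft_vote(prob_maps):
    """Average per-class probability maps from several models, then argmax.

    prob_maps: (n_models, n_classes, ...) softmax outputs, one per model.
    Returns an integer label map of shape (...,).
    """
    mean_probs = np.mean(prob_maps, axis=0)   # (n_classes, ...)
    return np.argmax(mean_probs, axis=0)

# Two toy models scoring three voxels over three classes
m1 = np.array([[0.7, 0.2, 0.6],
               [0.2, 0.5, 0.3],
               [0.1, 0.3, 0.1]])
m2 = np.array([[0.6, 0.1, 0.2],
               [0.3, 0.2, 0.3],
               [0.1, 0.7, 0.5]])
labels = soft_vote(np.stack([m1, m2]))
print(labels.tolist())   # → [0, 2, 0]
```

Averaging probabilities (soft voting) rather than predicted labels (hard voting) lets a confident model outweigh an uncertain one, as in the middle voxel above.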

    A deep learning approach to bone segmentation in CT scans

    This thesis proposes a deep learning approach to bone segmentation in abdominal CT scans. Segmentation is a common initial step in medical image analysis, often fundamental for computer-aided detection and diagnosis systems. The extraction of bones from CT scans is a challenging task: done manually by experts it is time consuming, and it has no broadly recognized automatic solution today. The method presented is based on a convolutional neural network, inspired by the U-Net and trained end-to-end, that performs a semantic segmentation of the data. The training dataset is made up of 21 abdominal CT scans, each containing between 403 and 994 2D transverse images. Those images are at full resolution, 512x512 voxels, and each voxel is classified by the network into one of the following classes: background, femoral bones, hips, sacrum, sternum, spine, and ribs. The output is therefore a bone mask in which the bones are recognized and divided into six different classes. On the testing dataset, labeled by experts, the best model achieves a Dice coefficient, averaged over all bone classes, of 0.93. This work demonstrates, to the best of my knowledge for the first time, the feasibility of automatic bone segmentation and classification in CT scans using a convolutional neural network.
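
The evaluation metric, a Dice coefficient averaged over bone classes, can be sketched as follows; the label maps are toy 1D examples, not the thesis's data:

```python
import numpy as np

def dice(pred, truth, label):
    """Dice coefficient for one class label in two integer label maps."""
    p, t = (pred == label), (truth == label)
    denom = p.sum() + t.sum()
    # Convention: perfect score when the class is absent from both maps
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# Toy 1D "slices": label 0 = background, labels 1 and 2 = two bone classes
truth = np.array([0, 1, 1, 2, 2, 0])
pred  = np.array([0, 1, 1, 2, 0, 0])
scores = [dice(pred, truth, c) for c in (1, 2)]   # bone classes only
print(round(float(np.mean(scores)), 3))   # → 0.833
```

Averaging per-class Dice scores, rather than pooling all bone voxels into one binary mask, prevents large classes such as the spine from masking errors on small ones such as the sternum.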

    Applying Text Mining Techniques to Forecast the Stock Market Fluctuations of Large IT Companies with Twitter Data: Descriptive and Predictive Approaches to Enhance the Research of Stock Market Predictions with Textual and Semantic Data

    Project work presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. This research project applies advanced text mining techniques to predict stock market fluctuations by merging published tweets with daily stock market prices for a set of American information technology companies. The project takes a systematic approach, implemented mainly in R code, to investigate two main questions: i) which descriptive criteria, patterns, and variables are correlated with stock fluctuations, and ii) whether tweets alone provide a sufficient signal to predict stock market fluctuations with high accuracy. The main expected output of the research is findings about the significance and predictive power of Twitter text, indicating the importance of social media content for stock market fluctuations, using descriptive and predictive data mining approaches such as natural language processing, topic modelling, sentiment analysis, and binary classification with neural networks.

    An Artificial Intelligence Method to Describe the Onset and Transition from Stochastic to Coordinated Neural Activity in the Danionella translucida Embryo

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science. In recent years, deep learning has become increasingly successful when applied to problems in a variety of fields. In bioimage analysis it has been used to extract meaningful information from microscopy images; here, we apply deep learning to light-sheet microscopy data to understand the early development of the nervous system. The brain is known to be responsible for most of our voluntary and involuntary actions and to regulate physiological processes throughout the body. However, technical barriers have left many open questions regarding the development and function of neuronal circuits. Imaging has proven to be a powerful technique for answering these questions, although difficulties in segmenting and tracking individual neurons have slowed progress. Danionella translucida was recently introduced as a powerful model organism for neuroscientific studies, because it has the smallest known vertebrate brain and does not develop a complete skull in adulthood, making it easily accessible for imaging studies. However, the emergence of neural activity and the subsequent assembly of neural circuits in the early development of the embryo had not yet been characterized. This dissertation aims to provide an initial description of the whole process at cellular resolution, using advanced microscopy techniques and an artificial intelligence method to segment and analyze the data. We used light-sheet fluorescence microscopy to image the onset and coordination of neuronal activity in the spinal cord of Danionella translucida with high temporal resolution and over long periods of time. We then analyzed the data with a deep-learning-based algorithm to detect, segment, and track the signal of each neuron in space and time. We focused our analysis on the intensity peaks of the signal, i.e., the moments when the neurons were firing, and found more activity in the lower region of the embryo, suggesting a correspondence with tail extension. This work demonstrates that the combination of methods used was able to successfully image and analyze the data. It opens possibilities for further study of the neuronal network of Danionella translucida, and for studying signals from crowded images at single-cell resolution that would otherwise be too complex to analyze.
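
The analysis of intensity peaks can be illustrated with a minimal threshold-based local-maximum detector over a single neuron's fluorescence trace; this is a hypothetical sketch of the idea, not the deep-learning pipeline the dissertation used:

```python
import numpy as np

def find_peaks(trace, threshold):
    """Indices of local maxima above threshold in a 1D fluorescence trace."""
    t = np.asarray(trace, dtype=float)
    # A peak rises above its left neighbor, is >= its right neighbor,
    # and exceeds the threshold; endpoints are excluded.
    is_peak = (t[1:-1] > t[:-2]) & (t[1:-1] >= t[2:]) & (t[1:-1] > threshold)
    return np.flatnonzero(is_peak) + 1

# Hypothetical normalized intensity trace for one segmented neuron
trace = [0.1, 0.2, 0.9, 0.3, 0.2, 0.8, 0.7, 0.1]
print(find_peaks(trace, threshold=0.5).tolist())   # → [2, 5]
```

Counting such peaks per neuron and per body region is one simple way to compare activity levels, e.g., between the upper and lower regions of the embryo.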

    Learning Algorithms for Fat Quantification and Tumor Characterization

    Obesity is one of the most prevalent health conditions. About 30% of the world's adult population, and over 70% of the United States', is either overweight or obese, causing an increased risk of cardiovascular disease, diabetes, and certain types of cancer. Among all cancers, lung cancer is the leading cause of death, whereas pancreatic cancer has the poorest prognosis of all major cancers. Early diagnosis of these cancers can save lives. This dissertation contributes towards the development of computer-aided diagnosis tools to aid clinicians in establishing the quantitative relationship between obesity and cancers. With respect to obesity and metabolism, the first part of the dissertation focuses on the segmentation and quantification of white and brown adipose tissue. For cancer diagnosis, we perform analysis on two important cases: lung cancer and Intraductal Papillary Mucinous Neoplasm (IPMN), a precursor to pancreatic cancer. The dissertation proposes an automatic body-region detection method trained with only a single example. A new fat quantification approach is then proposed, based on geometric and appearance characteristics. For the segmentation of brown fat, a PET-guided CT co-segmentation method is presented. With different variants of convolutional neural networks (CNNs), supervised learning strategies are proposed for the automatic diagnosis of lung nodules and IPMN. To address the unavailability of the large number of labeled examples required for training, unsupervised learning approaches for cancer diagnosis without explicit labeling are also proposed. We evaluate our proposed approaches (both supervised and unsupervised) on two different tumor diagnosis challenges, lung and pancreas, with 1018 CT and 171 MRI scans respectively. The proposed segmentation, quantification, and diagnosis approaches explore the important adiposity-cancer association and help pave the way towards improved diagnostic decision making in routine clinical practice.