
    Teaching Computer Programming Through Hands-on Labs on Cognitive Computing

    In this work we report on a long-running educational project that we have been carrying out for several years. In particular, we summarize the results achieved by students in the most recent year, when they worked on the collaborative development of small yet full-featured software projects. At the same time, building on recent findings, we seek to lay the foundations of a pragmatic model for teaching cognitive-computing programming. The experience took place in a Programming course at the Universities of Naples “Federico II” and Genoa, in Italy, and fostered the use of a PaaS (Platform as a Service) environment for a cooperative learning activity used to consolidate the theoretical concepts acquired in the course, including by means of cognitive-computing tools. From its inception, the project has involved a considerable number of students. Initially planned to conclude within one year, the experiment has instead continued to evolve with new projects as new tools and services became available, bringing new opportunities. In its most recent edition, this evolution has led to the use of the IBM Bluemix platform with its wide range of components, including Watson. This work contributes to the development of the smart-university model by using innovative and intelligent services both to help build a new generation of applications and to promote and disseminate a new way of designing and constructing them.

    Consortia optimization for European Space Agency proposals based on cognitive computing

    Master's project thesis, Applied Mathematics for Economics and Management, Universidade de Lisboa, Faculdade de Ciências, 2019. This master's thesis studies the relations between the words written in the abstracts of European Space Agency (ESA) Invitations to Tender (ITTs) and, in particular, whether there is any correlation between those words and the chance of a given country winning a bid. An intermediate task was to compile and organize a proper dataset, created from the ESA dashboards of tender statuses and the information on the ESA Emits website, covering 2013 to 2016. We then developed the code needed to analyze this dataset in R. We constructed matrices and graphical representations of the relations between winning countries, ESA offices, and the different ESA programmes; on that basis, our first observations were raised and analyzed. Five countries were then selected for statistical modelling, based on the number of awarded ITTs and their representation in the ESA offices: Germany, France, Great Britain (United Kingdom), Italy, and Belgium. These countries were scrutinized using text-mining techniques and statistical models. Using R text-mining packages such as tm, the original abstracts were cleaned to remove irrelevant information that could hinder the analysis: numbers, white space, and the most frequent words were removed, and all text was converted to lower case. After these steps a document-term matrix (DTM) was constructed, in which each row is a document (an ITT abstract) and each column a variable (one of the most frequent words in the dataset). The DTM was the basis for the entire textual-analysis study.
    Logistic regression models were then created for each of the five countries, with stepwise methods used for variable selection. The resulting models relate words to the chance of a given country winning an ITT. The validity of the models was assessed using statistical measures such as the sensitivity-specificity curve (cut-off point), the area under the ROC curve, odds ratios, and fitted values. Afterwards, we investigated whether the ITTs cluster in the space defined by the DTM. Several clustering methods were tried, both in the word-frequency space and in a space transformed by principal component analysis (PCA), and the silhouette coefficient was used for validation; however, the results were not satisfactory, and the PCA results likewise showed no clear agglomeration, suggesting that more advanced techniques are needed to determine the true cluster structure. Finally, we conclude that there do seem to be relations between the words in ITT abstracts and the winning countries; the reasons for this remain to be studied in future work.
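The text cleaning and DTM construction described above can be sketched in a few lines. The thesis used R's tm package; the snippet below is a hypothetical Python equivalent using only the standard library, and its stop-word list and sample abstracts are illustrative, not taken from the ESA dataset:

```python
import re
from collections import Counter

# Illustrative stop-word list; the thesis removed the most frequent words instead.
STOPWORDS = {"the", "and", "of", "for", "a", "to", "in", "on"}

def build_dtm(docs):
    """Lowercase, drop numbers and stop words, and return (vocab, matrix).

    Rows of the matrix are documents (here, ITT abstracts) and columns are
    word counts, mirroring the document-term matrix used in the thesis.
    """
    counts_per_doc = []
    for doc in docs:
        # re.findall(r"[a-z]+", ...) keeps only alphabetic runs, so numbers
        # and punctuation are discarded along the way.
        tokens = [t for t in re.findall(r"[a-z]+", doc.lower())
                  if t not in STOPWORDS]
        counts_per_doc.append(Counter(tokens))
    vocab = sorted(set().union(*counts_per_doc))
    matrix = [[counts[w] for w in vocab] for counts in counts_per_doc]
    return vocab, matrix

vocab, dtm = build_dtm([
    "Satellite propulsion study for ESA",
    "Ground segment software study",
])
```

A per-country logistic regression would then be fitted on such a matrix, with stepwise selection choosing among the word columns.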

    Multimodal Sentiment Sensing and Emotion Recognition Based on Cognitive Computing Using Hidden Markov Model with Extreme Learning Machine

    In today's competitive business environment, the exponential increase in multimodal content produces a massive amount of unstructured data, which has no fixed format or organisation and can take any form, including text, audio, photos, and video. According to the literature, recognizing different emotions generally requires many assumptions and algorithms, and most emotion-recognition work focuses on a single modality, such as voice, facial expression, or bio-signals. This paper proposes a novel artificial-intelligence technique for multimodal sentiment sensing with emotion recognition. Audio and visual data were collected from social-media reviews and classified using a hidden Markov model based extreme learning machine (HMM_ExLM), which is also used to train the features while the emotional traits of speech are suitably maximised. For expression images, a region-splitting strategy is employed: the face is divided into areas, each given a different weight for feature extraction. Speech and facial-expression features are then merged by decision-level fusion, and the speech properties of each expression region of the face are used for classification. Experiments show that combining speech and expression features greatly improves recognition compared with using either modality alone. A parametric comparison was made in terms of accuracy, recall, precision, and optimization level.
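Decision-level fusion of the two modalities can be illustrated with a minimal sketch. The emotion labels, probabilities, and the fixed modality weight below are hypothetical, not taken from the paper:

```python
def fuse_decisions(audio_probs, visual_probs, w_audio=0.5):
    """Decision-level fusion: combine per-class probabilities from an
    audio classifier and a visual classifier with a fixed weight, then
    pick the highest-scoring emotion."""
    fused = {emotion: w_audio * audio_probs[emotion]
                      + (1.0 - w_audio) * visual_probs[emotion]
             for emotion in audio_probs}
    return max(fused, key=fused.get)

# The visual cue outweighs the audio cue here, so the fused label is "sad".
label = fuse_decisions({"happy": 0.6, "sad": 0.4},
                       {"happy": 0.2, "sad": 0.8})
```

In practice the weight would itself be tuned, or the fusion replaced by a learned combiner over the per-modality decisions.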

    Visual Semantic SLAM with Landmarks for Large-Scale Outdoor Environment

    Semantic SLAM is an important field in autonomous driving and intelligent agents: it can enable robots to carry out high-level navigation tasks, acquire simple cognition or reasoning abilities, and support language-based human-robot interaction. In this paper, we build a system that creates a semantic 3D map for large-scale environments by combining the 3D point cloud from ORB-SLAM with semantic segmentation information from the convolutional neural network model PSPNet-101. In addition, a new dataset has been built for the KITTI sequences, containing GPS information and landmark labels from Google Maps for the streets covered by the sequences. Moreover, we present a way to associate real-world landmarks with the point-cloud map and build a topological map on top of the semantic map. Comment: Accepted by the 2019 China Symposium on Cognitive Computing and Hybrid Intelligence (CCHI'19).
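Attaching semantic labels to map points typically works by projecting each 3D point into the camera image and reading the segmentation label at that pixel. The sketch below assumes a pinhole camera model with intrinsics K; it is a generic illustration of the idea, not the authors' implementation:

```python
def project_point(K, point_cam):
    """Project a 3D point in camera coordinates to pixel coordinates
    using pinhole intrinsics K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
    x, y, z = point_cam
    u = K[0][0] * x / z + K[0][2]
    v = K[1][1] * y / z + K[1][2]
    return int(round(u)), int(round(v))

def semantic_label(point_cam, K, seg_mask):
    """Look up the segmentation label for a 3D point; returns None if
    the point lies behind the camera or projects outside the image."""
    x, y, z = point_cam
    if z <= 0:
        return None
    u, v = project_point(K, point_cam)
    if 0 <= v < len(seg_mask) and 0 <= u < len(seg_mask[0]):
        return seg_mask[v][u]  # seg_mask is indexed [row][col]
    return None
```

Aggregating such labels over every frame that observes a point yields the semantic map; GPS-tagged landmarks can then anchor a topological graph on top of it.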

    Autonomic computing architecture for SCADA cyber security

    Cognitive computing refers to intelligent computing platforms based on the disciplines of artificial intelligence, machine learning, and other innovative technologies. These technologies can be used to design systems that mimic the human brain, learning about their environment and autonomously predicting impending anomalous situations. IBM first used the term ‘Autonomic Computing’ in 2001 to combat the looming complexity crisis (Ganek and Corbi, 2003). The concept is inspired by the human biological autonomic system: an autonomic system is self-healing, self-regulating, self-optimising, and self-protecting (Ganek and Corbi, 2003). Such a system should therefore be able to protect itself against both malicious attacks and unintended operator mistakes.
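The autonomic-computing concept is commonly summarized as a MAPE loop (Monitor, Analyze, Plan, Execute). A toy self-protecting controller in that style, with a purely illustrative threshold rule rather than any real SCADA logic, might look like:

```python
class AutonomicManager:
    """Minimal MAPE-style loop: monitor a sensor reading, analyze it
    against a threshold, and execute a self-protecting action."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.isolated = False  # self-protection state

    def step(self, reading):
        # Monitor + Analyze: is the current reading anomalous?
        if reading > self.threshold and not self.isolated:
            # Plan + Execute: isolate the affected component.
            self.isolated = True
            return "isolate"
        return "normal" if not self.isolated else "isolated"

manager = AutonomicManager(threshold=100.0)
```

A real autonomic SCADA defence would replace the threshold test with learned anomaly detection and the isolation flag with concrete network or process actions.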