743 research outputs found

    Management and Security of IoT systems using Microservices

    Get PDF
    Devices that assist the user with a task or help them make an informed decision are called smart devices. A network of such devices connected to the internet is collectively called the Internet of Things (IoT). The applications of IoT are expanding exponentially and are becoming part of our day-to-day lives. The rise of IoT has led to new security and management issues. In this project, we propose a solution for some major problems faced by IoT devices, including the complexity caused by heterogeneous platforms and the lack of IoT device monitoring for security and fault tolerance. We aim to solve these issues with a microservice architecture. We build a data pipeline in which IoT devices send data through the Kafka messaging platform, and we monitor the devices using the collected data via real-time dashboards and a machine learning model that provides better insights into the data. As a proof of concept, we test the proposed solution on a heterogeneous cluster including Raspberry Pis and IoT devices from different vendors. We validate our design by presenting some simple experimental results.
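    The pipeline described above sends device readings through Kafka. As a rough illustration (not the authors' code), the message a device publishes might be built as below, assuming a JSON payload; the `iot-telemetry` topic name and field names are invented for the sketch.

```python
import json
import time

def build_reading(device_id, sensor, value):
    """Serialize one sensor reading as a JSON payload for Kafka.

    The schema (device_id / sensor / value / ts) is a hypothetical
    example, not the schema used in the paper.
    """
    return json.dumps({
        "device_id": device_id,
        "sensor": sensor,
        "value": value,
        "ts": time.time(),  # epoch timestamp taken at publish time
    }).encode("utf-8")

# Publishing would use the kafka-python client and a running broker:
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers="localhost:9092")
# producer.send("iot-telemetry", build_reading("rpi-01", "temperature", 21.4))
```

    Keeping serialization in one small function makes it easy for a dashboard or ML consumer on the other side of the topic to rely on a single message shape.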

    HOLMeS: eHealth in the Big Data and Deep Learning Era

    Get PDF
    Data collection and analysis are becoming more and more important in a variety of application domains as novel technologies advance. At the same time, we are experiencing a growing need for human-machine interaction with expert systems, pushing research toward new knowledge representation models and interaction paradigms. In particular, in the last few years, eHealth (which usually denotes all the healthcare practices supported by electronic processing and remote communications) has called for the availability of a smart environment and big computational resources able to offer more and more advanced analytics and new human-computer interaction paradigms. The aim of this paper is to introduce the HOLMeS (health online medical suggestions) system: a big data platform aimed at supporting several eHealth applications. As its main novelty, HOLMeS exploits a machine learning algorithm, deployed on a cluster-computing environment, to provide medical suggestions via both chat-bot and web-app modules, especially for prevention purposes. The chat-bot, suitably trained using a deep learning approach, helps to overcome the limitations of a cold interaction between users and software, exhibiting a more human-like behavior. The obtained results demonstrate the effectiveness of the machine learning algorithms, showing an area under the ROC (receiver operating characteristic) curve (AUC) of 74.65% when first-level features are used to assess the occurrence of different chronic diseases within specific prevention pathways. When disease-specific features are added, HOLMeS shows an AUC of 86.78%, achieving greater effectiveness in supporting clinical decisions.
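    The reported AUC values (74.65% and 86.78%) summarize the ROC curve. As a minimal illustration of the metric itself (not the HOLMeS code), AUC can be computed directly as the probability that a randomly chosen positive case is ranked above a randomly chosen negative one:

```python
def auc(labels, scores):
    """AUC as a rank probability: P(score of a positive > score of a negative)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Ties count as half a "win", matching the standard definition.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# e.g. auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]) -> 0.75
```

    This O(n_pos * n_neg) pairwise form is only for clarity; production libraries compute the same quantity from sorted ranks.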

    UNDERSTANDING THE ROLE OF OVERT AND COVERT ONLINE COMMUNICATION IN INFORMATION OPERATIONS

    Get PDF
    This thesis combines regression, sentiment, and social network analysis to explore how Russian online media agencies, both overt and covert, affect online communication on Twitter when North Atlantic Treaty Organization (NATO) exercises occur. It explores the relations between the average sentiment of tweets and the activities of Russia's overt and covert online media agencies. The data sources for this research are the Naval Postgraduate School's licensed Twitter archive and open-source information about the NATO exercises timeline. Publicly available lexicons of positive and negative terms helped to measure the sentiment in tweets. The thesis finds that Russia's covert media agencies, such as the Internet Research Agency, have a greater impact on, and likelihood of changing, the sentiment of network users about NATO than do the overt Russian media outlets. The sentiment during NATO exercises becomes more negative as the activity of Russian media organizations, whether covert or overt, increases. These conclusions suggest that close tracking and examination of the activities of Russia's online media agencies provide a necessary basis for detecting ongoing information operations. Further refinement of the analytical methods could deliver more comprehensive outcomes; such refinements could employ machine learning or natural language processing algorithms to increase the precision of sentiment measurement and enable timely identification of troll accounts.
    Podpolkovnik, Bulgarian Air Force
    Approved for public release. Distribution is unlimited
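    The thesis scores tweets against public lexicons of positive and negative terms. A minimal sketch of that lexicon-based scoring idea follows; the two toy word sets are invented for illustration and are not the lexicons actually used.

```python
# Illustrative toy lexicons (stand-ins for the public lexicons used in the thesis).
POSITIVE = {"good", "great", "success", "strong"}
NEGATIVE = {"bad", "threat", "fail", "weak"}

def tweet_sentiment(text):
    """Length-normalized sentiment: (#positive tokens - #negative tokens) / #tokens."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / max(len(tokens), 1)
```

    Averaging such per-tweet scores over a time window is what allows sentiment to be regressed against media-agency activity.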

    Deep Learning in the Automotive Industry: Applications and Tools

    Full text link
    Deep learning refers to a set of machine learning techniques that utilize neural networks with many hidden layers for tasks such as image classification, speech recognition, and language understanding. Deep learning has proven to be very effective in these domains and is pervasively used by many Internet services. In this paper, we describe different automotive use cases for deep learning, in particular in the domain of computer vision. We survey the current state of the art in libraries, tools, and infrastructures (e.g., GPUs and clouds) for implementing, training, and deploying deep neural networks. We particularly focus on convolutional neural networks and computer vision use cases, such as the visual inspection process in manufacturing plants and the analysis of social media data. To train neural networks, curated and labeled datasets are essential; in particular, both the availability and scope of such datasets are typically very limited. A main contribution of this paper is the creation of an automotive dataset that allows us to learn and automatically recognize different vehicle properties. We describe an end-to-end deep learning application utilizing a mobile app for data collection and process support, and an Amazon-based cloud backend for storage and training. For training, we evaluate the use of cloud and on-premises infrastructures (including multiple GPUs) in conjunction with different neural network architectures and frameworks. We assess both the training times and the accuracy of the classifier. Finally, we demonstrate the effectiveness of the trained classifier in a real-world setting during the manufacturing process.
    Comment: 10 pages
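    The convolutional networks surveyed above rest on one core operation. As a pure-Python sketch (illustrative only, and nothing like the optimized GPU kernels the paper evaluates), a single "valid" 2-D convolution over a nested-list image looks like this:

```python
def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core op of a CNN layer.

    Frameworks commonly implement "convolution" as cross-correlation,
    i.e. the kernel is not flipped; that convention is used here.
    """
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1      # output shrinks by kernel size - 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]
```

    Stacking many such kernels, plus nonlinearities and pooling, gives the image classifiers trained on the automotive dataset.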

    Nip it in the Bud: Moderation Strategies in Open Source Software Projects and the Role of Bots

    Full text link
    Much of our modern digital infrastructure relies critically upon open source software. The communities responsible for building this cyberinfrastructure require maintenance and moderation, which is often supported by volunteer efforts. Moderation, as a non-technical form of labor, is a necessary but often overlooked task that maintainers undertake to sustain the community around an OSS project. This study examines the various structures and norms that support community moderation, describes the strategies moderators use to mitigate conflicts, and assesses how bots can assist these processes. We interviewed 14 practitioners to uncover existing moderation practices and ways that automation can provide assistance. Our main contributions include a characterization of moderated content in OSS projects and moderation techniques, as well as perceptions of and recommendations for improving the automation of moderation tasks. We hope that these findings will inform the implementation of more effective moderation practices in open source communities.

    Evaluating NLP toxicity tools: Towards the ethical limits

    Get PDF
    In recent years we have seen a significant evolution in the field of neural networks and the field of natural language processing (NLP). Solutions such as voice assistants, writing assistance, or chatbots are present, ever more often, in our daily work. In addition, these techniques are used for more sophisticated analysis, such as sentiment classification or hate-speech detection. However, gender and racial biases detected in these solutions have created problems, opening a debate around their limitations and potential. The goal of this work is to evaluate the sentiment-analysis tools available at the time of writing. To achieve this, we have selected a set of tools and compared their usability over a specific dataset focused on bias detection. In addition, we have developed a tool to evaluate these models in a real-world application by integrating them into content management systems. The developed tool aims to help moderate content in the CMS and is built on a popular CMS distribution (Drupal). Finally, we present a debate around ethics and fairness in sentiment analysis using NLP.
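    Comparing several toxicity tools over one labeled dataset, as done in this work, boils down to a loop like the following generic sketch. The two toy models and the tiny dataset are invented stand-ins for the real classifiers and the bias-detection dataset, assuming only a common text-to-label interface.

```python
def evaluate(models, dataset):
    """Accuracy of each candidate model over (text, label) pairs."""
    results = {}
    for name, predict in models.items():
        correct = sum(predict(text) == label for text, label in dataset)
        results[name] = correct / len(dataset)
    return results

# Toy stand-ins for real toxicity classifiers (assumed interface: text -> bool).
models = {
    "keyword": lambda t: "hate" in t.lower(),
    "always_clean": lambda t: False,
}
dataset = [
    ("I hate this group", True),
    ("Nice weather today", False),
]
```

    Wrapping each tool behind the same callable interface is what makes a side-by-side comparison, and later CMS integration, straightforward.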

    Prototype of a Conversational Assistant for Satellite Mission Operations

    Get PDF
    The very first artificial satellite, Sputnik, was launched in 1957, marking a new era. Concurrently, satellite mission operations emerged. These start at launch and finish at the end of the mission, when the spacecraft is decommissioned. Running a satellite mission requires the monitoring and control of telemetry data, to verify and maintain satellite health, reconfigure and command the spacecraft, detect, identify, and resolve anomalies, and perform launch and early-orbit operations. The very first chatbot, ELIZA, was created in 1966 and also marked a new era, of Artificial Intelligence systems. Such systems answer users' questions in the most diverse domains, interpreting human-language input and responding in the same manner. Nowadays, these systems are everywhere, and the list of possible applications seems endless. The goal of the present master's dissertation is to develop a prototype of a chatbot for mission operations. For this purpose, a Natural Language Processing (NLP) model for satellite missions is implemented, allied to a dialogue-flow model. The performance of the conversational assistant is evaluated through its implementation on a mission operated by the European Space Agency (ESA), implying the generation of the spacecraft's Database Knowledge Graph (KG). Throughout the years, many tools have been developed and added to the systems used to monitor and control spacecraft, helping Flight Control Teams (FCT) by maintaining a comprehensive overview of the spacecraft's status and health, speeding up failure investigation, or allowing them to easily correlate time series of telemetry data. However, despite all the advances that facilitate daily tasks, the teams still need to navigate through thousands of parameters and events spanning years of data, using purpose-built user interfaces and relying on filters and time-series plots.
    The solution presented in this dissertation and proposed by VisionSpace Technologies focuses on improving operational efficiency while dealing with the mission's complex and extensive databases.
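    The assistant pairs an NLP model with a dialogue-flow model. At its simplest, routing an operator's question to an intent can be sketched as keyword overlap against a few hypothetical intents; this is illustrative only, since the dissertation uses a trained NLP pipeline rather than keyword rules, and the intent names below are invented.

```python
# Hypothetical intents for a mission-operations assistant.
INTENTS = {
    "telemetry_lookup": {"telemetry", "parameter", "value"},
    "anomaly_report": {"anomaly", "failure", "alarm"},
}

def route(question):
    """Pick the intent sharing the most keywords with the question; None if no overlap."""
    tokens = set(question.lower().split())
    best = max(INTENTS, key=lambda name: len(INTENTS[name] & tokens))
    return best if INTENTS[best] & tokens else None
```

    A real dialogue-flow model would then fill slots (which parameter, which time range) before querying the mission database or knowledge graph.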

    Super-forecasting the 'technological singularity' risks from artificial intelligence

    Get PDF
    This article investigates cybersecurity (and risk) in the context of a 'technological singularity' arising from artificial intelligence. The investigation constructs multiple risk forecasts that are synthesised in a new framework for counteracting risks from artificial intelligence (AI) itself. In other words, the research in this article is not just concerned with securing a system, but also with analysing how the system responds when (internal and external) failures and compromises occur. This is an important methodological principle because not all systems can be secured, and totally securing a system is not feasible. Thus, we need to construct algorithms that enable systems to continue operating even when parts of the system have been compromised. Furthermore, the article forecasts emerging cyber-risks from the integration of AI in cybersecurity. Based on the forecasts, the article concentrates on creating synergies between the existing literature, the data sources identified in the survey, and the forecasts. The forecasts are used to increase the feasibility of the overall research and enable the development of novel methodologies that use AI to defend against cyber risks. The methodology focuses on addressing the risk of AI attacks, as well as on forecasting the value of AI in defence and in the prevention of rogue AI devices acting independently.

    Automatically responding to customers

    Get PDF
    • …