3,618 research outputs found

    HOLMeS: eHealth in the Big Data and Deep Learning Era

    Today, data collection and analysis are becoming ever more important across a variety of application domains as novel technologies advance. At the same time, we are experiencing a growing need for human–machine interaction with expert systems, pushing research toward new knowledge representation models and interaction paradigms. In particular, in the last few years, eHealth (which usually denotes all healthcare practices supported by electronic processing and remote communications) has called for smart environments and large computational resources able to offer increasingly advanced analytics and new human–computer interaction paradigms. The aim of this paper is to introduce the HOLMeS (health online medical suggestions) system: a big data platform aimed at supporting several eHealth applications. As its main novelty, HOLMeS exploits a machine learning algorithm, deployed on a cluster-computing environment, to provide medical suggestions via both chat-bot and web-app modules, especially for prevention purposes. The chat-bot, trained with a deep learning approach, helps overcome the limitations of a cold interaction between users and software, exhibiting a more human-like behavior. The obtained results demonstrate the effectiveness of the machine learning algorithms, showing an area under the ROC (receiver operating characteristic) curve (AUC) of 74.65% when first-level features are used to assess the occurrence of different chronic diseases within specific prevention pathways. When disease-specific features are added, HOLMeS shows an AUC of 86.78%, achieving greater effectiveness in supporting clinical decisions.
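    As a minimal sketch of what an AUC figure like the ones reported above measures, the statistic can be computed as the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. The labels and scores below are made-up illustrative data, not output of the actual HOLMeS platform.

    ```python
    def auc_score(y_true, y_score):
        """AUC via the Mann-Whitney U formulation: the probability that a
        randomly chosen positive case is ranked above a negative one."""
        pos = [s for t, s in zip(y_true, y_score) if t == 1]
        neg = [s for t, s in zip(y_true, y_score) if t == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    y_true = [0, 0, 1, 1, 0, 1, 1, 0]                            # disease occurred?
    y_score = [0.10, 0.40, 0.35, 0.80, 0.20, 0.70, 0.90, 0.30]   # model risk scores
    print(auc_score(y_true, y_score))  # 0.9375 for this toy data
    ```

    An AUC of 0.5 corresponds to random guessing, which is why the jump from 74.65% to 86.78% with disease-specific features represents a substantial gain in discriminative power.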

    Cognitive computing in education

    Cognitive computing is the new wave of Artificial Intelligence (AI), relying on traditional techniques based on expert systems while also exploiting statistics and mathematical models. In particular, cognitive computing systems can be regarded as a "more human" artificial intelligence: they mimic human reasoning methodologies, showing special capabilities in dealing with uncertainty and in solving problems that typically entail computation-intensive processes. Moreover, they can evolve, exploiting accumulated experience to learn from the past, both from errors and from successful findings. From a theoretical point of view, cognitive computing could replace existing computing systems in many fields of application, but hardware requirements are still high. Cloud infrastructure, which is expected to sustain the field's rapid growth in the near future, can nevertheless support the diffusion of this novel variety of systems and ease their adoption, fostering new services as well as changes in many settled paradigms. In this paper, we focus on the benefits that this technology can bring when applied in the education field, and we present a short review of relevant experiences.

    Novel proposal for prediction of CO2 course and occupancy recognition in Intelligent Buildings within IoT

    Many direct and indirect methods, processes, and sensors available on the market today are used to monitor the occupancy of selected Intelligent Building (IB) premises and the living activities of IB residents. By recognizing the occupancy of individual spaces, an IB can be optimally automated in conjunction with energy savings. This article proposes a novel method of indirect occupancy monitoring using CO2, temperature, and relative humidity measured through standard operational measurements with KNX (Konnex; standard EN 50090, ISO/IEC 14543) technology to monitor laboratory room occupancy in an intelligent building within the Internet of Things (IoT). The article further describes the design and creation of a software (SW) tool that connects the KNX technology to the IBM Watson IoT platform in real time, storing and visualizing the measured values via the Message Queuing Telemetry Transport (MQTT) protocol and a CouchDB database. As part of the proposed occupancy-determination method, the course of CO2 concentration was predicted from the measured temperature and relative humidity values using the mathematical methods of Linear Regression, Neural Networks, and Random Tree (in IBM SPSS Modeler), with an accuracy higher than 90%. To increase the prediction accuracy, additive noise was suppressed from the predicted CO2 signal using the Least Mean Squares (LMS) algorithm, an adaptive filtering (AF) method, within the newly designed approach. In selected experiments, the prediction accuracy with LMS adaptive filtering was better than 95%.
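    The LMS adaptive-filtering step described above can be sketched as follows. The signals here are synthetic stand-ins (a smooth sinusoid for the true CO2 course and an additive-noise version for the raw prediction); the filter order and step size are illustrative assumptions, not the parameters used in the article.

    ```python
    import numpy as np

    def lms_filter(desired, x, mu=0.005, order=8):
        """Least-mean-squares adaptive filter: adapts FIR weights so that
        the filtered input tracks the desired signal, suppressing noise
        that is uncorrelated with it."""
        w = np.zeros(order)
        y = np.zeros(len(desired))
        for i in range(order, len(desired)):
            xi = x[i - order:i][::-1]        # most recent input samples first
            y[i] = w @ xi                    # filter output
            e = desired[i] - y[i]            # estimation error
            w += 2 * mu * e * xi             # LMS weight update
        return y

    # Synthetic stand-ins: a smooth "true CO2 course" and a noisy prediction.
    rng = np.random.default_rng(0)
    t = np.arange(3000)
    clean = np.sin(2 * np.pi * t / 200)
    noisy = clean + 0.3 * rng.standard_normal(t.size)
    y = lms_filter(clean, noisy)
    ```

    After the weights converge, the filtered output tracks the clean course much more closely than the raw noisy input, which is the mechanism behind the accuracy improvement from 90% to better than 95%.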

    Review of Artificial Intelligence with Retailing Sector

    This research provides an original perspective on how artificial intelligence (AI) is making its way into the retail sector. Retail has entered a new era in which e-commerce and technology bellwethers such as Alibaba, Amazon, Apple, Baidu, Facebook, Google, Microsoft, and Tencent have raised consumers' expectations. AI enables automated decision-making with accuracy and speed, based on data analytics coupled with self-learning abilities. The retail sector has witnessed a dramatic evolution with the rapid digitalization of communication (i.e., the Internet) and the spread of smartphones and smart devices. Customers are no longer the same: empowered by smart devices, their expectations, habits, and ways of shopping and researching stores have changed entirely. This article outlines the significant innovations that have helped retail evolve, such as artificial intelligence (AI), big data, the Internet of Things (IoT), chatbots, and robots. It further discusses the views of various authors on how AI has become more profitable and a close asset to customers and retailers.

    Ocular attention-sensing interface system

    The purpose of the research was to develop an innovative human-computer interface based on eye movement and voice control. By eliminating a manual interface (keyboard, joystick, etc.), OASIS provides a control mechanism that is natural, efficient, accurate, and low in workload.

    Contributions to chatbots and digital analytics in industry

    This cumulative dissertation comprises ten scientific articles contributing to research on digital analytics, the measurement of technology acceptance, and chatbots. The aim of the articles is to simplify and support the development, implementation, and management of these technologies. Models are developed that describe the most important steps and, among other things, list relevant related questions, name the stakeholders to be involved, and present suitable tools that should be considered. Chatbot taxonomies are developed and presented that show the range of currently existing design options, while identified archetypes reveal commonly observed combinations. The identification of the most frequent reasons for failure and the development of critical success factors likewise contribute to the goal of facilitating the development and management process. Since end users decide on the acceptance and use, and thus the success, of a technology, approaches are presented for measuring user acceptance of technologies and for involving users in the development process at an early stage.

    CLOUD-BASED MACHINE LEARNING AND SENTIMENT ANALYSIS

    The role of the data scientist is becoming increasingly ubiquitous as companies and institutions see the need to gain additional insights from data to make better decisions and improve the quality of service delivered to customers. This thesis document covers three data science projects aimed at improving the tools and techniques used in analyzing and evaluating data. The first study used a standard cybersecurity dataset; cloud-based automated machine learning algorithms were applied to detect vulnerabilities in the network traffic data, and their performance was measured and compared using standard evaluation metrics. The second study involved text-mining social media, specifically Reddit. We mined up to 100,000 comments in multiple subreddits and tested for hate speech via a custom-designed version of the Python VADER sentiment analysis package. Our work integrated standard sentiment analysis with Hatebase.org, and we demonstrate that our new method can better detect hate speech in social media. Following sentiment analysis and hate speech detection, the third project applied statistical techniques to evaluate significant differences in text analytics, specifically the sentiment categories produced by lexicon-based software and cloud-based tools. We compared the three big cloud providers, AWS, Azure, and GCP, with the standard Python VADER sentiment analysis library, used statistical analysis to determine significant differences between the cloud platforms and VADER, and demonstrated that each platform is unique in its scoring mechanism.
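    The lexicon-plus-hate-list approach the second study describes can be sketched roughly as below. The word valences mimic the style of VADER's lexicon and the hate-term set stands in for Hatebase.org entries; both lists and the `score_comment` function are illustrative assumptions, not the real resources or the thesis's actual code.

    ```python
    import math

    # Toy valence lexicon and hate-term list (illustrative stand-ins for the
    # VADER lexicon and Hatebase.org data used in the thesis).
    LEXICON = {"good": 1.9, "great": 3.1, "love": 3.2, "bad": -2.5, "terrible": -3.4}
    HATE_TERMS = {"hateterm1", "hateterm2"}   # placeholders for flagged slurs

    def score_comment(text):
        """Return a VADER-style compound score in [-1, 1] plus a hate flag."""
        tokens = text.lower().split()
        raw = sum(LEXICON.get(tok, 0.0) for tok in tokens)
        compound = raw / math.sqrt(raw * raw + 15)   # VADER-style normalization
        return {"compound": round(compound, 4),
                "hate": any(tok in HATE_TERMS for tok in tokens)}

    print(score_comment("what a great service"))
    print(score_comment("terrible and full of hateterm1"))
    ```

    Integrating a separate hate-term list, as the thesis does, lets a comment be flagged even when its overall sentiment score is neutral, which is one reason purely lexicon-based sentiment alone under-detects hate speech.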

    How can Big Data from Social Media be used in Emergency Management? A case study of Twitter during the Paris attacks

    Over the past years, social media have impacted emergency management and disaster response in numerous ways. Access to live, continuous updates from the public brings new opportunities for detecting, coordinating, and aiding in an emergency situation. This thesis presents research on social media use during an emergency. The goal of the study is to discover how data from social media can be used for emergency management and to determine whether existing analysis services can prove useful for that purpose. To achieve this goal, a dataset from Twitter during the 2015 Paris attacks was collected and analyzed using three different analysis tools: the IBM Watson Discovery service, Microsoft Azure Text Analytics, and a custom-developed keyword-frequency script. The results indicate that data from social media can be used for emergency management, in the form of detecting and providing important information. Additional testing with larger datasets is needed to fully demonstrate the usefulness, in addition to interviews with emergency responders and social media users. Master's thesis in information science.
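    A keyword-frequency script of the kind the thesis describes can be sketched in a few lines. The stop-word list and sample tweets below are invented for illustration and are not the thesis's actual implementation or dataset.

    ```python
    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "for", "rt"}

    def keyword_frequencies(tweets, top_n=5):
        """Count the most frequent non-stop-word tokens (hashtags included)."""
        counts = Counter()
        for tweet in tweets:
            for token in re.findall(r"#?\w+", tweet.lower()):
                if token not in STOPWORDS:
                    counts[token] += 1
        return counts.most_common(top_n)

    sample = ["Explosions reported near the stadium #paris",
              "Praying for Paris tonight",
              "#paris stay safe everyone"]
    print(keyword_frequencies(sample, top_n=3))
    ```

    Even such a simple frequency view can surface an emerging event's location and hashtags quickly, which is the "detecting and providing important information" role the results point to.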