126 research outputs found

    Affective computing for smart operations: a survey and comparative analysis of the available tools, libraries and web services

    In this paper, we conduct an in-depth survey of the Sentiment Analysis tools currently available on the market. Our aim is to optimize the human response in Datacenter Operations by using a combination of research tools that help decrease human error in the general operation and management of complex infrastructures. Adopting Sentiment Analysis tools is a first step towards extending our capability to optimize the human interface. Using data collections from a variety of sources, our research yields a notable outcome: in our final testing, the three main commercial platforms (IBM Watson, Google Cloud and Microsoft Azure), all based on Artificial Neural Network and Deep Learning techniques, achieve the same accuracy (89-90%) across the datasets tested. Stand-alone applications and APIs such as VADER or MeaningCloud reach a similar accuracy level on some of the datasets using a different approach, semantic networks such as ConceptNet, and their models can be pushed above 90% accuracy simply by adjusting some parameters of the semantic model. This paper points to future directions for optimizing Datacenter Operations Management and decreasing human error in complex environments.
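    As a hedged illustration of the kind of stand-alone tool the survey compares, the sketch below scores a few operations-style messages with the open-source VADER library (vaderSentiment); the sample messages and the 0.05 compound-score threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sentiment-scoring sketch with VADER (pip install vaderSentiment).
# The sample messages and the 0.05 threshold are illustrative assumptions.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

messages = [
    "Backup job finished without any issues, great work by the on-call team.",
    "Storage array latency is terrible again, this keeps breaking every night.",
]

for text in messages:
    scores = analyzer.polarity_scores(text)  # dict with neg/neu/pos/compound
    label = ("positive" if scores["compound"] >= 0.05
             else "negative" if scores["compound"] <= -0.05
             else "neutral")
    print(f"{label:8s} compound={scores['compound']:+.3f}  {text}")
```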

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding in computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Multimodal Approach for Big Data Analytics and Applications

    The thesis presents multimodal conceptual frameworks and their applications in improving the robustness and the performance of big data analytics through cross-modal interaction and integration. A joint interpretation of several knowledge renderings such as stream, batch, linguistics, visuals and metadata creates a unified view that can provide a more accurate and holistic approach to data analytics than a single stand-alone knowledge base. Novel approaches in the thesis integrate the multimodal framework with state-of-the-art computational models for big data, cloud computing, natural language processing, image processing, video processing, and contextual metadata. The integration of these disparate fields has the potential to improve computational tools and techniques dramatically. The contributions thus place multimodality at the forefront of big data analytics; the research aims at mapping and understanding multimodal correspondence between different modalities. The primary contribution of the thesis is the Multimodal Analytics Framework (MAF), a collaborative ensemble framework for stream and batch processing that combines cues from multiple input modalities such as language, visuals and metadata to obtain the benefits of both low latency and high throughput. The framework is a five-step process:
    1. Data ingestion. As a first step towards Big Data analytics, a high-velocity, fault-tolerant streaming data acquisition pipeline is proposed on a distributed big data setup, with patterns mined and searched while the data is still in transit. The data ingestion methods are demonstrated using Hadoop ecosystem tools such as Kafka and Flume as sample implementations.
    2. Decision making on the ingested data to select the best-fit tools and methods. In Big Data analytics, the primary challenge often lies in processing heterogeneous data pools with a one-method-fits-all approach. The research introduces a decision-making system, based on a fuzzy graph method, to select the best-fit solutions for the incoming data stream in both real-time and offline settings; this is the second step of the data processing pipeline presented in the thesis.
    3. Lifelong incremental machine learning. In the third step, the thesis describes a lifelong learning model at the processing layer of the analytical pipeline, following data acquisition and decision making, for downstream processing. Lifelong learning iteratively increments the training model using the proposed Multi-agent Lambda Architecture (MALA), a collaborative ensemble architecture between stream and batch data. As part of the proposed MAF, MALA is one of the primary contributions of the research. The work introduces a general-purpose, comprehensive approach to hybrid learning over batch and stream processing to achieve lifelong learning objectives.
    4. Improving machine learning results through ensemble learning. As an extension of the lifelong learning model, the thesis proposes a boosting-based ensemble method as the fourth step of the framework, improving lifelong learning results by reducing the learning error in each iteration of a streaming window. The strategy is to incrementally boost the learning accuracy on each mini-batch iteration, enabling the model to accumulate knowledge faster; the base learners adapt more quickly over the smaller intervals of a sliding window, improving the machine learning accuracy by countering concept drift.
    5. Cross-modal integration between text, image, video and metadata for more comprehensive data coverage than a text-only dataset. The final contribution of the thesis is a new multimodal method in which three different modalities (text, visuals comprising image and video, and metadata) are intertwined along with real-time and batch data for more comprehensive input coverage than text-only data. The model is validated through a detailed case study on the contemporary and relevant topic of the COVID-19 pandemic. While the remainder of the thesis deals with text-only input, the COVID-19 dataset is analysed with textual and visual information in integration.
    Following completion of this research work, multimodal machine learning is identified as a future research direction and an extension of the current framework.
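    The ingestion step above names Kafka as one sample implementation. The following minimal sketch, assuming a local broker at localhost:9092, a hypothetical topic name "sensor-events", and a hypothetical "severity" field in the payload, shows how a fault-tolerant streaming consumer could pull and filter records while data is in transit using the kafka-python client; it illustrates the general approach rather than the thesis's actual pipeline.

```python
# Minimal streaming-ingestion sketch with kafka-python (pip install kafka-python).
# Broker address, topic name, group id, and payload fields are illustrative assumptions.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "sensor-events",                       # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # assumed local broker
    group_id="maf-ingestion",              # consumer group enables fault-tolerant rebalancing
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Mine a simple pattern while records are still in transit.
for record in consumer:
    event = record.value
    if event.get("severity") == "critical":  # assumed field in the payload
        print(f"critical event at offset {record.offset}: {event}")
```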

    A DEEP LEARNING APPROACH FOR SENTIMENT ANALYSIS

    Sentiment Analysis refers to the process of computationally identifying and categorizing opinions expressed in a piece of text, in order to determine whether the writer's attitude towards a particular topic or product is positive, negative, or neutral. The views expressed, and related concepts such as feelings, judgments, and emotions, have recently become a subject of study and research in both academic and industrial settings. Unfortunately, comprehension of user comments, especially in social networks, is inherently complex for computers. The ways in which humans express themselves in natural language are nearly unlimited, and informal text is riddled with typos, misspellings, badly constructed syntax and platform-specific symbols (e.g. hashtags on Twitter), which greatly complicates the task. Recently, deep learning approaches have emerged as powerful computational models that discover intricate semantic representations of text automatically from data, without hand-crafted feature engineering. These approaches have improved the state of the art in many Sentiment Analysis tasks, including sentiment classification of sentences or documents, sentiment lexicon learning, and more complex problems such as cyberbullying detection. The contributions of this work are twofold. First, for the general Sentiment Analysis problem, we propose a semi-supervised neural network model, based on Deep Belief Networks, able to deal with data uncertainty in text sentences in the Italian language. We test this model on several movie-review datasets from the literature, adopting a vectorized representation of text (Word2Vec) and applying Natural Language Processing (NLP) pre-processing methods. Second, assuming that the cyberbullying phenomenon can be treated as a particular Sentiment Analysis problem, we propose an unsupervised approach to automatic cyberbullying detection in social networks, based both on a Growing Hierarchical Self-Organizing Map (GHSOM) and on a new, task-specific feature model, showing that our solution achieves interesting results with respect to classical supervised approaches.
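    As a hedged illustration of the vectorized text representation the abstract mentions, the sketch below trains a small Word2Vec model with gensim on a toy corpus of tokenized review sentences; the corpus and hyperparameters are illustrative assumptions and not the thesis's actual configuration.

```python
# Minimal Word2Vec sketch with gensim (pip install gensim).
# Toy corpus and hyperparameters are illustrative assumptions.
from gensim.models import Word2Vec

# Pre-tokenized, lowercased sentences (in practice, the output of NLP pre-processing).
corpus = [
    ["the", "film", "was", "wonderful", "and", "moving"],
    ["a", "boring", "plot", "and", "terrible", "acting"],
    ["wonderful", "performances", "despite", "a", "slow", "plot"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the word embeddings
    window=3,         # context window size
    min_count=1,      # keep every token in this tiny corpus
    epochs=50,
)

vec = model.wv["wonderful"]  # 50-dimensional embedding for one token
print(vec.shape, model.wv.most_similar("wonderful", topn=2))
```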

    Atas das Oitavas Jornadas de Informática da Universidade de Évora

    Proceedings of the Eighth Jornadas de Informática of the Universidade de Évora, held in March 2018.

    Building Blocks for IoT Analytics: Internet-of-Things Analytics

    Internet-of-Things (IoT) analytics is an integral element of most IoT applications, as it provides the means to extract knowledge, drive actuation services and optimize decision making. IoT analytics will be a major contributor to IoT business value in the coming years, as it will enable organizations to process and fully leverage large amounts of IoT data, which are nowadays largely underutilized. This book is devoted to presenting the main technology building blocks that comprise advanced IoT analytics systems. It introduces IoT analytics as a special case of Big Data analytics and accordingly presents leading-edge technologies that can be deployed to successfully confront the main challenges of IoT analytics applications. Special emphasis is placed on technologies for IoT streaming and on semantic interoperability across diverse IoT streams. Furthermore, the role of cloud computing and Big Data technologies in IoT analytics is presented, along with practical tools for implementing, deploying and operating non-trivial IoT applications. Alongside the main building blocks of IoT analytics systems and applications, the book presents a series of practical applications which illustrate the use of these technologies in pragmatic settings. Technical topics discussed in the book include:
    - Cloud Computing and Big Data for IoT analytics
    - Searching the Internet of Things
    - Development Tools for IoT Analytics Applications
    - IoT Analytics-as-a-Service
    - Semantic Modelling and Reasoning for IoT Analytics
    - IoT analytics for Smart Buildings
    - IoT analytics for Smart Cities
    - Operationalization of IoT analytics
    - Ethical aspects of IoT analytics
    The book contains both research-oriented and applied articles on IoT analytics, including several that reflect work undertaken in recent European Commission funded projects under the FP7 and H2020 programmes; these articles present project results on IoT analytics platforms and applications. Even though the articles have been contributed by different authors, they are structured in a well-thought-out order that allows the reader either to follow the evolution of the book or to focus on specific topics depending on their background and interest in IoT and IoT analytics technologies. The compilation of these articles in this edited volume was largely motivated by the close collaboration of the co-authors in working groups and IoT events organized by the Internet-of-Things Research Cluster (IERC), which is currently part of the EU's Alliance for Internet of Things Innovation (AIOTI).

    The Camera in conservation: determining photography’s place in the preservation of wildlife

    This MA by Research study reflects on photography’s past, current and future role within wildlife conservation, and on whether it remains necessary going forward. The investigation and analysis that follow seek to show how the photographic medium can be both beneficial and harmful to the preservation of wildlife, and how photographers can best use it in future conservation projects. The study engages with several significant aspects of photography and its external influences. It first investigates the importance of empathy within wildlife conservation and how empathy can be elicited through imagery and photographic methods. I then examine the other side of conservation photography’s success, analysing the negative or neutral impacts it can bring, before researching the role that social media plays, and could play, in conservation, and how photography can adapt to it to maximise its success. Lastly, I explore alternative visual media such as the moving image, and how photography can best apply successful techniques learned from them to reshape how conservation photography is perceived. Finally, drawing on the research across the thesis, I have produced a ‘guide’ to how conservation photography can be shaped to achieve its full potential, drawing upon the previous successes and failures of other conservation efforts and photographers.

    On the Combination of Game-Theoretic Learning and Multi Model Adaptive Filters

    This paper casts coordination of a team of robots within the framework of game-theoretic learning algorithms. In particular, a novel variant of fictitious play is proposed in which multi-model adaptive filters are used to estimate the other players’ strategies. The proposed algorithm can serve as a coordination mechanism between players when they must make decisions under uncertainty. Each player chooses an action after taking into account the actions of the other players as well as the uncertainty, which can arise either from noisy observations or from different types of other players. In addition, in contrast to other game-theoretic and heuristic algorithms for distributed optimisation, it is not necessary to find the optimal parameters a priori: various parameter values can be used initially as inputs to different models, so the resulting decisions are an aggregate over all the parameter values. Simulations are used to test the performance of the proposed methodology against other game-theoretic learning algorithms.
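    As a hedged illustration of the classical fictitious play baseline that the paper builds on (without the multi-model adaptive filter extension), the sketch below runs standard two-player fictitious play on a small coordination game; the payoff matrix and iteration count are illustrative assumptions.

```python
# Classical two-player fictitious play sketch (NumPy only).
# The 2x2 coordination payoffs and iteration count are illustrative assumptions;
# the paper's variant additionally estimates opponent strategies with
# multi-model adaptive filters, which is not reproduced here.
import numpy as np

# Identical payoffs for both players: coordinating on the same action pays 1.
payoff = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

counts = [np.ones(2), np.ones(2)]   # smoothed counts of each player's past actions
actions = [0, 1]                    # arbitrary initial actions

for t in range(200):
    new_actions = []
    for i in (0, 1):
        j = 1 - i
        belief = counts[j] / counts[j].sum()          # empirical mixed strategy of the opponent
        expected = payoff @ belief                    # expected payoff of each own action
        new_actions.append(int(np.argmax(expected)))  # best response to the belief
    for i in (0, 1):
        counts[i][new_actions[i]] += 1                # update the observed action history
    actions = new_actions

print("final joint action:", actions)
print("empirical strategies:", [c / c.sum() for c in counts])
```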