212 research outputs found

    Till data do us part: Understanding data-based value creation in data-intensive infrastructures

    Much of the literature on value creation in social media-based infrastructures has neglected the pivotal role of data and their processes. This paper moves beyond this limitation and discusses data-based value creation in data-intensive infrastructures, such as social media, by focusing on processes of data generation, use, and reuse, and on infrastructure development activities. Building on current debates in value theory, the paper develops a multidimensional value framework to interrogate the data collected in an embedded ethnographic case study of the development of PatientsLikeMe, a social media network for patients. It asks when and where value is created from the data, and what kinds of value are created from them, as they move through the data infrastructure; and how infrastructure evolution relates to, and shapes, existing data-based value creation practices. The findings show that infrastructure development can have unpredictable consequences for data-based value creation, shaping shared practices in complex ways and through a web of interdependent situations. The paper argues for an understanding of infrastructural innovation that accounts for the situational interdependencies of data use and reuse. Uniquely positioned, the paper demonstrates the importance of research that looks critically into processes of data use in infrastructures to keep abreast of the social consequences of developments in big data and data analytics aimed at exploiting all kinds of digital traces for multiple purposes. This research is funded by the European Research Council (ERC) under the European Union's Seventh Framework Programme (FP7/2007–2013), ERC grant agreement number 335925.

    Exploring the motivation behind cybersecurity insider threat and proposed research agenda

    Cyber exploitation and malicious activities have become more sophisticated. Insider threat is one of the most significant cyber security threat vectors and poses a great concern to corporations and governments. An overview of the fundamental motivating forces and of motivation theory is provided to identify the motivations that lead trusted employees to become insider threats in the context of cyber security. A research agenda with two sequential experimental research studies is outlined to address the challenge of insider threat mitigation through prototype development. The first proposed study will classify data intake feeds, as recognized and weighted by cyber security experts, in an effort to establish predictive analytics of novel correlations of activities that may lead to cyber security incidents. It will also develop an approach to identify how user activities can be compared against an established baseline – the user's network cyber security "pulse" – with visualization of simulated users' activities. Additionally, the second study will explain the process of assessing the usability of the developed visualization prototype, which is intended to present correlated suspicious activities requiring immediate action. Successfully developing the proposed prototype via feed aggregation and advanced visualization could assist in the mitigation of malicious insider threats.
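    As a rough sketch of the baseline-comparison idea – scoring a user's current activity against their own historical "pulse" – the following Python fragment computes a mean absolute z-score over a few activity features. The feature names, data, and scoring are illustrative assumptions, not the study's actual feeds or expert weights.

# Minimal sketch: score a user's daily activity vector against that
# user's own historical baseline. Features and data are placeholders.
import numpy as np

FEATURES = ["logins", "files_accessed", "mb_uploaded", "after_hours_events"]

def pulse_score(history: np.ndarray, today: np.ndarray) -> float:
    """Mean absolute z-score of today's activity vs. the user's baseline.

    history: shape (days, len(FEATURES)) of past per-day counts.
    today:   shape (len(FEATURES),) of the current day's counts.
    """
    mu = history.mean(axis=0)
    sigma = history.std(axis=0) + 1e-9          # avoid division by zero
    z = np.abs((today - mu) / sigma)            # per-feature deviation
    return float(z.mean())                      # one "pulse" number

rng = np.random.default_rng(0)
baseline = rng.poisson(lam=[20, 50, 5, 1], size=(30, 4)).astype(float)
normal_day = np.array([22, 48, 6, 1], dtype=float)
odd_day = np.array([21, 300, 80, 9], dtype=float)   # bulk access + uploads

print(pulse_score(baseline, normal_day))  # low score, near baseline
print(pulse_score(baseline, odd_day))     # high score, flag for review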

    Open-world Story Generation with Structured Knowledge Enhancement: A Comprehensive Survey

    Storytelling and narrative are fundamental to human experience, intertwined with our social and cultural engagement. As such, researchers have long attempted to create systems that can generate stories automatically. In recent years, powered by deep learning and massive data resources, automatic story generation has shown significant advances. However, considerable challenges, like the need for global coherence in generated stories, still keep generative models from reaching the storytelling ability of human narrators. To tackle these challenges, many studies seek to inject structured knowledge into the generation process, an approach referred to as structured knowledge-enhanced story generation. Incorporating external knowledge can enhance the logical coherence among story events, achieve better knowledge grounding, and alleviate over-generalization and repetition problems in stories. This survey provides an up-to-date and comprehensive review of this research field: (i) we present a systematic taxonomy of how existing methods integrate structured knowledge into story generation; (ii) we summarize the story corpora, structured knowledge datasets, and evaluation metrics involved; (iii) we give multidimensional insights into the challenges of knowledge-enhanced story generation and cast light on promising directions for future study.
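    As a toy illustration of knowledge injection (not any specific surveyed system), the sketch below retrieves triples about entities mentioned in a story prompt and prepends them as grounding context. The tiny triple store and prompt format are invented, and the downstream language model is assumed; real systems draw on large knowledge graphs such as ConceptNet.

# Minimal sketch of structured knowledge injection: look up triples for
# entities in the prompt and prepend them as explicit grounding context.
TRIPLES = [
    ("knight", "capable_of", "ride a horse"),
    ("dragon", "at_location", "mountain cave"),
    ("dragon", "desires", "gold"),
]

def retrieve(prompt: str):
    """Return triples whose head entity is mentioned in the prompt."""
    return [t for t in TRIPLES if t[0] in prompt.lower()]

def knowledge_enhanced_prompt(prompt: str) -> str:
    facts = "; ".join(f"{h} {r.replace('_', ' ')} {t}"
                      for h, r, t in retrieve(prompt))
    return f"Known facts: {facts}.\nStory: {prompt}"

print(knowledge_enhanced_prompt("The knight searched for the dragon."))
# The facts precede the story prompt, giving an assumed downstream
# generator explicit grounding to keep story events logically coherent.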

    90th Annual Meeting of the Virginia Academy of Science: Proceedings

    Full proceedings of the 90th Annual Meeting of the Virginia Academy of Science, Norfolk State University, Norfolk, Virginia, May 23-25, 2012.

    Multiscale Modeling of the Ventricles: From Cellular Electrophysiology to Body Surface Electrocardiograms

    This work focuses on different aspects within the loop of multiscale modeling. On the cellular level, the effects of adrenergic regulation and of the long-QT syndrome were investigated. On the organ level, a model of the excitation conduction system was developed and the role of electrophysiological heterogeneities was analyzed. On the torso level, a dynamic model of a deforming heart was created and the effects of tissue conductivities on the solution of the forward problem were evaluated.
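    At the cellular level, such models are systems of coupled ordinary differential equations. The thesis works with detailed ionic models of ventricular myocytes; purely as a minimal stand-in, the sketch below integrates the two-variable FitzHugh-Nagumo model of an excitable cell with forward Euler (parameters are textbook defaults, not the thesis's settings).

# Minimal stand-in for the cellular level: FitzHugh-Nagumo excitable
# cell, integrated with forward Euler. Not the thesis's ionic model.
import numpy as np

def fitzhugh_nagumo(t_end=200.0, dt=0.01, I_ext=0.5,
                    a=0.7, b=0.8, tau=12.5):
    n = int(t_end / dt)
    v, w = np.empty(n), np.empty(n)     # membrane potential, recovery
    v[0], w[0] = -1.0, 1.0
    for i in range(n - 1):
        dv = v[i] - v[i] ** 3 / 3 - w[i] + I_ext
        dw = (v[i] + a - b * w[i]) / tau
        v[i + 1] = v[i] + dt * dv
        w[i + 1] = w[i] + dt * dw
    return v, w

v, _ = fitzhugh_nagumo()
print(f"min/max potential: {v.min():.2f} / {v.max():.2f}")  # repeated APs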

    Big Data Analytics for Complex Systems

    The evolution of technology in all fields has led modern systems to generate vast amounts of data. Using data to extract information, make predictions, and make decisions is the current trend in artificial intelligence. Big data analytics tools have made accessing and storing data easier and faster than ever, and machine learning algorithms help identify patterns in, and extract information from, data. Current tools and machines in healthcare, computing, and manufacturing can generate massive amounts of raw data about their products or samples. The author of this work proposes a modern integrative system that combines big data analytics, machine learning, supercomputer resources, and industrial health machines' measurements to build a smart system that mimics the human intelligence skills of observation, detection, prediction, and decision-making. Applications of the proposed smart systems are included as case studies to highlight the contributions of each system.

    The first contribution is the ability to apply big data and deep learning technologies on production lines to diagnose incidents and take the proper action. In the current era of digital industrial transformation, Industry 4.0 has been receiving researcher attention because it can be used to automate production-line decisions. Reconfigurable manufacturing systems (RMS) have been widely used to reduce the setup cost of restructuring production lines. However, current RMS modules are not linked to the cloud for online decision-making; these modules must connect to an online server (supercomputer) with big data analytics and machine learning capabilities. "Online" here means that data is centralized in the cloud (on the supercomputer) and accessible in real time. In this study, deep neural networks are used to detect the decisive features of a product and build a prediction model with which the iFactory makes the necessary decisions for defective products. The Spark ecosystem manages the access, processing, and storage of the streaming big data. This contribution is implemented as a closed cycle which, to the best of our knowledge, is the first in the literature to apply big data analysis with deep learning to a real-time application in a manufacturing system. The model achieves a high accuracy of 97% in classifying normal versus defective items.

    The second contribution, in bioinformatics, is the ability to build supervised machine learning approaches that use patients' gene expression to predict the proper treatment for breast cancer. To personalize treatment, the machine learns which genes are active in the patient cohort with a five-year survival period, under the initial condition that each group undergoes only one specific treatment. After learning about each group (or class), the machine can personalize the treatment of a new patient by diagnosing the patient's gene expression. The proposed model will help in the diagnosis and treatment of patients. Future work in this area involves building a protein-protein interaction network with the selected genes for each treatment, in order to analyze the motifs of the genes and target them with the proper drug molecules. In the learning phase, several feature-selection techniques and standard supervised classifiers are used to build the prediction model. Most of the nodes show high performance, with accuracy, sensitivity, specificity, and F-measure around 100%.

    The third contribution is the ability to build semi-supervised learning for breast cancer survival treatment, advancing the second contribution. By understanding the relations between the classes, the machine learning phase can be designed around the similarities between classes. Here, the Euclidean distance matrix among the survival treatment classes is used to build a hierarchical learning model. The distance information, learned through an unsupervised approach, helps the prediction model select classes that are far from each other, maximizing the between-class distance and yielding wider class groups. The performance of this approach shows a slight improvement over the second model, and the model reduces the number of discriminative genes from 47 to 37. The model in the second contribution studies each class individually, while this model focuses on the relationships between classes and uses this information in the learning phase. Hierarchical clustering is performed to draw the borders between groups of classes before building the classification models, and several distance measures are tested to identify the best linkages between classes. Most of the nodes show high performance, with accuracy, sensitivity, specificity, and F-measure ranging from 90% to 100%.

    All the case-study models showed high performance in the prediction phase. These models can be replicated for different problems in different domains. The comprehensive models are reconfigurable and modular: a new learning phase can be plugged in at either end of an existing one, so the output of one learning system can be the input of another, and new features can be added to the input for the next learning phase.
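    The hierarchical step of the third contribution can be pictured with a short sketch: compute each treatment class's centroid in gene-expression space, take Euclidean distances between centroids, and build a linkage hierarchy from them. Everything below (the class names, the random expression matrix, the "average" linkage) is an invented placeholder for illustration, not the thesis's data or exact procedure.

# Minimal sketch: Euclidean distances between class centroids in
# gene-expression space, then a hierarchy of treatment classes.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
# placeholder expression matrix: 40 patients x 37 genes, 10 per class
X = rng.normal(size=(40, 37)) + np.repeat(np.arange(4), 10)[:, None]
y = np.repeat(np.arange(4), 10)          # hypothetical treatment labels

centroids = np.vstack([X[y == c].mean(axis=0) for c in range(4)])
condensed = pdist(centroids, metric="euclidean")  # pairwise class distances
Z = linkage(condensed, method="average")          # one of several linkages

# Z encodes which classes merge first; classes that separate early are
# far apart, which is used to order the hierarchical classification.
print(Z)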

    Management and visualization of heterogeneous multidimensional data: a PLM application to neuroimaging

    The neuroimaging domain is confronted with issues in analyzing and reusing the growing amount of heterogeneous data it produces. Data provenance is complex – multi-subject, multi-method, multi-temporality – and the data are only partially stored, restricting multimodal and longitudinal studies. In particular, functional brain connectivity is studied to understand how areas of the brain work together. Raw and derived imaging data must be properly managed along several dimensions, such as acquisition time, time between two acquisitions, or subjects and their characteristics. The objective of the thesis is to allow exploration of complex relationships between heterogeneous data, which is addressed in two parts: (1) how to manage data and provenance, and (2) how to visualize structures of multidimensional data. The contributions follow a logical sequence of three propositions, presented after a survey of research in heterogeneous data management and graph visualization.

    The BMI-LM (Bio-Medical Imaging – Lifecycle Management) data model organizes the management of neuroimaging data according to the phases of a study and accounts for the evolving nature of research through specific classes associated with generic objects. The implementation of this model in a PLM (Product Lifecycle Management) system shows that concepts developed twenty years ago for the manufacturing industry can be reused to manage neuroimaging data. Dynamic multidimensional graphs (GMDs) are introduced to represent complex, evolving relationships between data, and the JGEX (Json Graph EXchange) format was created to store and exchange GMDs between software applications. The OCL (Overview Constraint Layout) method allows interactive and visual exploration of GMDs; it is based on preserving the user's mental map and on alternating complete and reduced views of the data. The OCL method is applied to the study of resting-state functional brain connectivity in 231 subjects, represented as a GMD – brain areas are the nodes and connectivity measures the edges – according to age, gender, and laterality; the GMDs are computed by processing workflows on MRI acquisitions within the PLM system. The results show two main benefits of the OCL method: (1) identification of global trends along one or several dimensions, and (2) highlighting of local changes between GMD states.
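    To make the GMD idea concrete, the sketch below (hypothetical, not the thesis's implementation) represents each state of a dynamic multidimensional graph as a weighted graph of brain areas, indexed by its position along the dimensions. The area names, dimensions, and connectivity values are invented, and neither the JGEX format nor the OCL layout is reproduced.

# Minimal sketch of a dynamic multidimensional graph: one weighted graph
# of brain areas per combination of dimension values.
import itertools
import networkx as nx

AREAS = ["precuneus", "insula", "amygdala", "hippocampus"]
DIMENSIONS = {"age": ["20-40", "40-60"], "gender": ["F", "M"]}

def build_state(connectivity: dict) -> nx.Graph:
    """One graph state: nodes are brain areas, edges carry connectivity."""
    g = nx.Graph()
    g.add_nodes_from(AREAS)
    for (a, b), w in connectivity.items():
        g.add_edge(a, b, weight=w)
    return g

# index graph states by their position along each dimension
gmd = {}
for i, (age, gender) in enumerate(itertools.product(*DIMENSIONS.values())):
    # placeholder measures; in the thesis these come from MRI workflows
    fake = {("precuneus", "insula"): 0.4 + 0.05 * i,
            ("amygdala", "hippocampus"): 0.7 - 0.05 * i}
    gmd[(age, gender)] = build_state(fake)

state = gmd[("20-40", "F")]
print(state.edges(data=True))  # inspect one state's connectivity measures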

    Advances on Mechanics, Design Engineering and Manufacturing III

    This open access book gathers contributions presented at the International Joint Conference on Mechanics, Design Engineering and Advanced Manufacturing (JCM 2020), held as a web conference on June 2–4, 2020. It reports on cutting-edge topics in product design and manufacturing, such as industrial methods for integrated product and process design; innovative design; and computer-aided design. Further topics covered include virtual simulation and reverse engineering; additive manufacturing; product manufacturing; engineering methods in medicine and education; representation techniques; and nautical, aeronautics and aerospace design and modeling. The book is organized into four main parts, reflecting the focus and primary themes of the conference. The contributions presented here not only provide researchers, engineers and experts in a range of industrial engineering subfields with extensive information to support their daily work; they are also intended to stimulate new research directions, advanced applications of the methods discussed and future interdisciplinary collaborations

    JIDOKA. Integration of Human and AI within Industry 4.0 Cyber Physical Manufacturing Systems

    This book is about JIDOKA, a Japanese management technique coined by Toyota that consists of imbuing machines with human intelligence. The purpose of this compilation of research articles is to show industrial leaders innovative cases of digitization of value-creation processes that have allowed them to improve their performance in a sustainable way. The book presents several applications of JIDOKA in the quest to integrate human and AI within Industry 4.0 cyber-physical manufacturing systems. From artificial intelligence to advanced mathematical models and quantum computing, all of these paths are valid ways to advance human–machine integration.

    Corporate Venture Management in SMEs: evidence from the German IT consulting industry

    Nowadays, many corporations continuously need to renew their business portfolios strategically in anticipation of changes in the business environment (e.g., technological change). The ongoing boom in the founding of international start-ups suggests that small entrepreneurial teams are an effective means of developing new businesses. Corporations should be able to benefit from this form of self-organized innovation when entering novel business domains for strategic renewal. However, corporations that establish small entrepreneurial teams (corporate ventures) face two obstacles. First, corporate ventures often fail for reasons that are not well explored. Second, it remains unclear how partial successes may be turned into large ones. Although the key success factors remain ambiguous, there is little hope that corporate ventures will be successful without effective management. Since no empirical model for corporate venture management exists so far, the thesis formulates and answers the following problem statement: how can corporate management effectively manage corporate ventures? Building on qualitative and quantitative research methodologies, a model for effective corporate venture management is developed and tested statistically in the German IT consulting industry. The research results reveal some of the essential management principles through which corporate management can systematically increase corporate venture success.