
    The future of social is personal: the potential of the personal data store

    This chapter argues that technical architectures that facilitate the longitudinal, decentralised and individual-centric personal collection and curation of data will be an important, but partial, response to the pressing problem of the autonomy of the data subject and the asymmetry of power between the subject and large-scale service providers/data consumers. Towards framing the scope and role of such Personal Data Stores (PDSes), the legalistic notion of personal data is examined, and it is argued that a more inclusive, intuitive notion expresses more accurately what individuals require in order to preserve their autonomy in a data-driven world of large aggregators. Six challenges towards realising the PDS vision are set out: the requirement to store data for long periods; the difficulties of managing data for individuals; the need to reconsider the regulatory basis for third-party access to data; the need to comply with international data handling standards; the need to integrate privacy-enhancing technologies; and the need to future-proof data gathering against the evolution of social norms. The open experimental PDS platform INDX is introduced and described as a means of beginning to address at least some of these six challenges.
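    To make the individual-centric curation model concrete, the following minimal Python sketch shows one way a PDS record could pair subject-held data with explicit, revocable access grants; the record layout, field names and grant methods are illustrative assumptions, not the INDX data model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PdsRecord:
    """One item in a hypothetical personal data store (not the INDX schema)."""
    subject_id: str                      # the individual who owns and curates the record
    source: str                          # where the data originally came from
    payload: dict                        # the data itself, kept under the subject's control
    collected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    shared_with: set = field(default_factory=set)   # third parties granted access

    def grant_access(self, consumer: str) -> None:
        """Record an explicit, revocable grant to a third-party data consumer."""
        self.shared_with.add(consumer)

    def revoke_access(self, consumer: str) -> None:
        """Withdraw a previously granted access right."""
        self.shared_with.discard(consumer)

# Usage: the subject, not the service provider, decides who may read each record.
record = PdsRecord(subject_id="alice", source="fitness-tracker",
                   payload={"steps": 9120, "date": "2014-05-01"})
record.grant_access("research-study-42")
record.revoke_access("research-study-42")
```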

    Resolving semantic conflicts through ontological layering

    We examine the problem of semantic interoperability in modern software systems, which are pervasive and exhibit a range of heterogeneities, in particular the semantic heterogeneity of data models built upon ubiquitous data repositories. We investigate whether we can build ontologies upon heterogeneous data repositories in order to resolve semantic conflicts in them and achieve their semantic interoperability. We propose a layered software architecture, which accommodates ontological layering at its core, resulting in a Generic ontology for Context aware, Interoperable and Data sharing (Go-CID) software applications. The software architecture supports retrievals from various data repositories and resolves semantic conflicts which arise from the heterogeneities inherent in them. It allows the extension of heterogeneous data repositories through ontological layering, whilst preserving the autonomy of their individual elements. Our specific ontological layering for interoperable data repositories is based on clearly defined reasoning mechanisms in order to perform ontology mappings. The reasoning mechanisms depend on the user's involvement in retrievals and on the types of semantic conflicts that have to be resolved after semantically related data has been identified. Ontologies are described in terms of ontological concepts and their semantic roles, which make the types of semantic conflicts explicit. We contextualise semantically related data through our own categorisation of semantic conflicts and their degrees of similarity. Our software architecture has been tested through a case study of retrievals of semantically related data across repositories in pervasive healthcare and deployed with Semantic Web technology. Extensions of the research results include the applicability of our ontological layering and reasoning mechanisms in various problem domains and in environments where we need to (i) establish if and when we have overlapping “semantics”, and (ii) infer/assert a correct set of “semantics” which can support decision-making in such domains.
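    As an illustration of how an ontological layer can make semantic conflicts explicit, the sketch below classifies conflicts between two toy healthcare repositories using a hand-written mapping table; the conflict categories, repository names and mappings are hypothetical examples, not the Go-CID reasoning mechanisms themselves.

```python
from enum import Enum, auto

class ConflictType(Enum):
    """Illustrative categories of semantic conflict between data repositories."""
    NAMING = auto()        # same concept, different labels (e.g. "patient" vs "subject")
    SCALING = auto()       # same concept, different units; detecting this would need unit metadata not modelled here
    CONFOUNDING = auto()   # same label, different meanings in each repository

# A toy ontological layer: repository-specific terms lifted onto shared concepts.
SHARED_CONCEPTS = {
    ("hospital_db", "patient"): "Person",
    ("clinic_db", "subject"):   "Person",
    ("hospital_db", "temp_c"):  "BodyTemperature",
    ("clinic_db", "temp_f"):    "BodyTemperature",
}

def detect_conflict(term_a, term_b):
    """Classify how two repository terms relate once lifted to shared concepts."""
    concept_a = SHARED_CONCEPTS.get(term_a)
    concept_b = SHARED_CONCEPTS.get(term_b)
    if concept_a is None or concept_b is None:
        return None  # at least one term is not yet described in the shared layer
    if concept_a == concept_b and term_a[1] != term_b[1]:
        return ConflictType.NAMING        # same concept, different labels
    if concept_a != concept_b and term_a[1] == term_b[1]:
        return ConflictType.CONFOUNDING   # same label, different concepts
    return None

print(detect_conflict(("hospital_db", "patient"), ("clinic_db", "subject")))  # ConflictType.NAMING
```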

    Architectures for integration of information systems under conditions of dynamic reconfiguration of virtual enterprises

    Doctoral thesis, Doctoral Programme in Industrial and Systems Engineering. The aim of this thesis is to explore architectures of information systems integration under conditions of dynamic reconfiguration of Virtual Enterprises. The main challenge we identify, and which formed the basis of the research, is that information technologies alone cannot efficiently and effectively support human knowledge and the natural human way of interacting. Already Saussure (1916) argued that part of knowledge resides in the person, and the very attempt to model it is enough for it to be misrepresented. This is the motto of the whole work: to enhance the capabilities of emerging technologies, but in a way that allows human-to-human interaction, with the information system merely a means of making this possible. We therefore argue that a communicational architecture for information systems integration, in which pragmatic mechanisms are enabled, serves virtual enterprises in dynamic reconfiguration scenarios better than the existing transactional architectures. We propose such a communicational architecture, capable of achieving an effective integration of information systems, and design its logical and functional model. We also define the semiotic framework needed for a communicational integration architecture to be efficient and effective. We implemented two prototypes to demonstrate the applicability of the proposed architecture. The research hypothesis was demonstrated through two experiments in which ontologies proved unable to resolve the disagreements or absences of opinion inherent in the people collaborating; this was overcome by implementing mechanisms that allow co-creation among the members of the group that took part in the trial.

    Complex adaptive systems-based data integration: theory and applications

    Data Definition Languages (DDLs) have been created and used to represent data in programming languages and in database dictionaries. This representation includes descriptions in the form of data fields and relations in the form of a hierarchy, with the common exception of relational databases, where relations are flat. Network computing created an environment that enables relatively easy and inexpensive exchange of data. What followed was the creation of new DDLs claiming better support for automatic data integration. It is unclear from the literature whether any real progress has been made toward achieving an ideal state or limit condition of automatic data integration. This research asserts that difficulties in accomplishing integration are characteristic of socio-cultural systems in general and are caused by some measurable attributes common to DDLs. The main contributions of this research are: (1) a theory of the data integration requirements needed to fully support automatic data integration from autonomous heterogeneous data sources; (2) the identification of related, measurable abstract attributes (Variety, Tension, and Entropy); and (3) the development of tools to measure them. The research uses a multi-theoretic lens to define and articulate these attributes and their measurements. The proposed theory is founded on the Law of Requisite Variety, Information Theory, Complex Adaptive Systems (CAS) theory, Sowa’s Meaning Preservation framework and Zipf distributions of words and meanings. Using the theory, the attributes, and their measures, this research proposes a framework for objectively evaluating the suitability of any data definition language with respect to degrees of automatic data integration. The research examines thirteen data structures constructed with various DDLs from the 1960s to date. No DDL examined (and therefore no DDL similar to those examined) is designed to satisfy the Law of Requisite Variety. No DDL examined is designed to support the CAS evolutionary processes that could result in fully automated integration of heterogeneous data sources. There is no significant difference in the measures of Variety, Tension, and Entropy among the DDLs investigated in this research. A direction for overcoming these common limitations is suggested and tested by proposing GlossoMote, a theoretical, mathematically sound description language that satisfies the requirements of the data integration theory. GlossoMote is not merely a new syntax; it is a drastic departure from existing DDL constructs. The feasibility of the approach is demonstrated with a small-scale experiment and evaluated using the proposed assessment framework and other means. The promising results call for additional research to evaluate the commercial potential of GlossoMote’s approach.
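    As a hedged illustration of what an entropy-style measurement over a data definition might look like, the sketch below computes Shannon entropy and a crude variety count over tokens from a toy schema; it is not the dissertation's actual measurement instrument, whose constructs (Variety, Tension, Entropy) are defined more richly.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (bits per token) of a token stream: one simple way to
    quantify an 'Entropy' attribute of a data definition."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Tokens drawn from a toy schema definition (field and type names).
ddl_tokens = ["customer", "id", "int", "name", "string", "order", "id", "int",
              "customer", "id", "int", "amount", "decimal"]

print(f"entropy = {shannon_entropy(ddl_tokens):.3f} bits/token")
print("variety =", len(set(ddl_tokens)), "distinct tokens")   # a crude 'Variety' proxy
```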

    Skilling up for CRM: qualifications for CRM professionals in the Fourth Industrial Revolution

    The Fourth Industrial Revolution (4IR) describes a series of innovations in artificial intelligence, ubiquitous internet connectivity, and robotics, along with the consequent disruption to the means of production. The impact of 4IR on industry is captured by a construct called Industry 4.0. Higher education, too, is called to transform in response to the disruption of 4IR, to meet the needs of industry, and to maximize human flourishing. Education 4.0 describes 4IR’s actual, predicted, or intended impact on higher education, including prescriptions for higher education’s transformation to meet these challenges. Industry 4.0 requires a highly skilled workforce, and a 4IR world raises questions about skills portability, durability, and lifespan. Every vertical within industry will be affected by 4IR, and this impact will manifest as a need for diverse employees possessing distinct competencies. Customer relationship management (CRM) describes the use of information systems to implement a customer-centric strategy and to practice relationship marketing (RM). Salesforce, a market-leading CRM vendor, projects that its products alone will generate 9 million new jobs and $1.6 trillion in new revenues for Salesforce customers by 2024. Despite the strong market for CRM skills, a recent paper in a prominent IS journal claims that higher education is not preparing students for CRM careers. In order to supply the CRM domain with skilled workers, it is imperative that higher education develop curricula oriented toward the CRM professional. Assessing the skills needed for specific industry roles has long been an important task in IS pedagogy, but our literature review did not find a paper that explored the Salesforce administrator role. In this paper, we report the background, methodology, and results of a content analysis of Salesforce Administrator job postings retrieved from popular job sites. We further report the results of semi-structured interviews with industry experts, which served to validate, revise, and extend the content analysis framework. The resulting skills framework serves as a foundation for CRM curriculum development, and the resulting analysis incorporates elements of Education 4.0 to provide a roadmap for educating students to be successful with CRM in a 4IR world.
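    To illustrate the kind of content analysis reported here, the sketch below counts how often terms from a small skills codebook appear in job-posting text; the terms and sample postings are invented for illustration and are not the paper's coding scheme or data.

```python
from collections import Counter

# An illustrative codebook of skill terms; the paper's actual framework was
# derived from its content analysis and expert interviews, not from this list.
SKILL_TERMS = ["apex", "flow", "reports", "dashboards", "data model",
               "security", "integration", "soql", "stakeholder"]

postings = [
    "Administer Salesforce org, build reports and dashboards, manage security.",
    "Maintain flows, support integration projects, write SOQL for data loads.",
]

def count_skills(texts, terms):
    """Count how many postings mention each skill term (case-insensitive substring match)."""
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for term in terms:
            if term in lowered:
                counts[term] += 1
    return counts

for skill, n in count_skills(postings, SKILL_TERMS).most_common():
    print(f"{skill}: {n}/{len(postings)} postings")
```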

    Localizing the media, locating ourselves: a critical comparative analysis of socio-spatial sorting in locative media platforms (Google AND Flickr 2009-2011)

    In this thesis I explore media geocoding (i.e., geotagging or georeferencing), the process of inscribing media with geographic information, a process that enables distinct forms of producing, storing, and distributing information based on location. Historically, geographic information technologies have served a biopolitical function, producing knowledge of populations. In their current guise as locative media platforms, these systems build rich databases of places facilitated by user-generated geocoded media. This thesis argues that these geoindexes render places, and the users of these services, subject to novel forms of computational modelling and economic capture. Thus, the possibility of tying information, people and objects to location sets the conditions for the emergence of new communicative practices as well as new forms of governmentality (the management of populations). This project is an attempt to develop an understanding of the socio-economic forces and media regimes structuring contemporary forms of location-aware communication by carrying out a comparative analysis of two of the main current location-enabled platforms: Google and Flickr. Drawing on the medium-specific approach to media analysis characteristic of the subfield of Software Studies, together with the methodological apparatus of Cultural Analytics (data mining and visualization methods), the thesis focuses on examining how social space is coded and computed in these systems. In particular, it looks at the databases’ underlying ontologies supporting the platforms’ geocoding capabilities and their respective algorithmic logics. In the final analysis, the thesis argues that the way social space is translated in the form of POIs (Points of Interest) and business-biased categorizations, as well as the geodemographical ordering underpinning the way it is computed, is pivotal if we are to understand what kind of socio-spatial relations are actualized in these systems, and what modalities of governing urban mobility are enabled.
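    As a small, hedged illustration of the kind of socio-spatial sorting the thesis analyses, the sketch below tabulates the category distribution of a few invented POI records; real analyses would draw on the platforms' own APIs and category ontologies rather than this toy data.

```python
from collections import Counter

# Toy geocoded records; real platform data (Google, Flickr) would come from
# their own services and carry their own category ontologies.
poi_records = [
    {"lat": 51.514, "lon": -0.098, "category": "restaurant"},
    {"lat": 51.515, "lon": -0.099, "category": "restaurant"},
    {"lat": 51.516, "lon": -0.100, "category": "bank"},
    {"lat": 51.517, "lon": -0.101, "category": "park"},
]

def category_share(records):
    """Share of each POI category: a crude lens on how a platform's ontology
    weights commercial versus public places."""
    counts = Counter(r["category"] for r in records)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

print(category_share(poi_records))   # e.g. {'restaurant': 0.5, 'bank': 0.25, 'park': 0.25}
```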

    Ontology-based Consistent Specification and Scalable Execution of Sensor Data Acquisition Plans in Cross-Domain IoT Platforms

    A growing number of vertical Internet of Things (IoT) applications are developed within IoT platforms that often do not interact with each other because they adopt different standards and formats. Several efforts are devoted to the construction of software infrastructures that facilitate interoperability among heterogeneous, cross-domain IoT platforms for the realization of horizontal applications. Although their realization poses challenges across all layers of the network stack, in this thesis we focus on the interoperability issues that arise at the data management layer. Starting from a flexible, multi-granular Spatio-Temporal-Thematic data model according to which events generated by different kinds of sensors can be represented, we propose a Semantic Virtualization approach in which the sensors belonging to different IoT platforms and the schemas of the produced event streams are described in a Domain Ontology, obtained by extending well-known ontologies (SSN and IoT-Lite) to the needs of a specific domain. These sensors can then be exploited for the creation of Data Acquisition Plans (DAPs) by means of which the streams of events can be filtered, merged, and aggregated in a meaningful way. Notions of soundness and consistency are introduced to bind the output streams of the services contained in a DAP to the Domain Ontology, providing a semantic description of its final output. Finally, the facilities of the StreamLoader prototype are presented, which support domain experts in the Semantic Virtualization of sensors and in the construction of meaningful DAPs. Several graphical facilities have been developed to support domain experts in the development of complex DAPs. The system also provides facilities for their syntax-based translation into the Apache Spark Streaming language and for real-time execution on a distributed cluster of machines.
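    The filter/merge/aggregate character of a DAP can be sketched in plain Python as below; the event fields and operations are invented for illustration and do not reflect the StreamLoader syntax or its Spark Streaming translation.

```python
from statistics import mean

# Toy event streams from two sensors assumed to be registered in a domain ontology.
temperature_events = [
    {"sensor": "t1", "domain": "weather", "value": 21.5},
    {"sensor": "t1", "domain": "weather", "value": 35.0},
]
humidity_events = [
    {"sensor": "h1", "domain": "weather", "value": 0.45},
]

def dap(temp_stream, hum_stream):
    """A tiny Data Acquisition Plan: filter, merge, then aggregate event streams.
    StreamLoader expresses such plans graphically and translates them for Spark
    Streaming; this is only a plain-Python illustration of the idea."""
    filtered = [e for e in temp_stream if e["value"] < 30]      # filter out implausible readings
    merged = filtered + list(hum_stream)                        # merge the two streams
    return {
        "avg_temperature": mean(e["value"] for e in filtered),  # aggregate
        "event_count": len(merged),
    }

print(dap(temperature_events, humidity_events))
```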

    Governance of Platform Data: From Canonical Data Models to Federative Interoperability

    As the volume of data generated every day is constantly increasing and, at the same time, ever more complex business networks are using this voluminous data, there is a clear need for better data governance. So far, the academic and practical literature has focused mainly on data governance for intra-organisational purposes. However, in the era of multifaceted business networks and with a rising number of data-driven platforms, the scope of data governance needs to be widened to address inter-organisational contexts. From a practical point of view, data on the same subjects are scattered across numerous information systems, and attempts to integrate them are often unsuccessful. Thus, new approaches are needed. The objective of this dissertation is to study data governance in platform contexts. The goal is to use existing data governance frameworks as a basis for creating a new framework that encompasses aspects of networked business, network and platform business models, and the specific features related to data sharing on platforms. Three independent qualitative case studies were conducted to collect empirical evidence for this dissertation. The first case concerned the federation of breast cancer patient and treatment data. The other two cases were conducted in the maritime industry to find out how data should be governed on platforms and what aspects affect the willingness of participating organisations to share data on platforms. The qualitative data consist of interviews with 29 people, material from several workshops, group discussions, and interview journals kept during data collection. The theory building is based on the existing literature on data governance, networks and platforms and on the results of the case studies conducted. The key theoretical contributions of this dissertation are threefold. First, a federative approach to data interoperability is presented, together with related tools. The federative approach enables the preservation of the original and other contexts of data and is based on the use of metadata to explain the various meanings of the data. Second, a platform data governance model that includes business model aspects for networks is proposed. This model considers data federation as a means of joining the network and the platform, and has a special focus on data access and ownership. Third, and most importantly, this dissertation joins the discussion on whether data is universally or contextually defined by presenting new views supporting the latter position. The idea is that, instead of aiming at objective definitions of data, data should be seen as representations of facts in contexts that affect how those facts should be understood. In data-sharing situations, there should be agreements on what the data means and how it should be understood, and these agreements can be stored as metadata for the data entries. Three types of metadata are needed to give a thorough explanation of the meaning of the data: information system (IS) technical, information processing and socio-contextual metadata. This dissertation provides new insights into how data sharing in networked business environments could be arranged, taking into account the ownership of and access to data, especially in the platform context. This study also widens the understanding of the ontological nature of data in more complex IS environments. So far, the academic literature on platform data governance is sparse, and that on data governance in general has focused more on data management functions and has considered data to have universal definitions. On the practical side, the framework presents methods for implementing data governance policies on platforms and offers federative tools that are feasible for actual data integration.
    KEYWORDS: Data, Data governance, Platforms, Networks, Data federation, Data interoperability
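    A minimal sketch of the idea of carrying context with data is given below: a data entry bundles its value with the three metadata types named in the dissertation (IS technical, information processing and socio-contextual); the class name and field contents are illustrative assumptions, not artefacts from the case studies.

```python
from dataclasses import dataclass, field

@dataclass
class FederatedDataEntry:
    """A data entry that keeps its context via the three metadata types the
    dissertation proposes; the concrete contents here are illustrative."""
    value: object
    is_technical: dict = field(default_factory=dict)        # e.g. source system, schema, encoding
    info_processing: dict = field(default_factory=dict)     # e.g. how the value was derived or cleaned
    socio_contextual: dict = field(default_factory=dict)    # e.g. who recorded it and under what agreement

entry = FederatedDataEntry(
    value=42.0,
    is_technical={"source_system": "lab-LIS", "unit": "mm"},
    info_processing={"derivation": "mean of three measurements"},
    socio_contextual={"recorded_by": "oncology clinic", "sharing_agreement": "consortium-2020"},
)
print(entry.socio_contextual["sharing_agreement"])
```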