846 research outputs found

    Review of Materialized Views Selection Algorithm for Cyber Manufacturing

    Get PDF
    Technological advancement in data transfer and connectivity has driven massive data growth. Within the semiconductor cyber manufacturing environment, rapid query processing becomes a priority in order to cope with the rapid data transfer enabled by Internet of Things (IoT) technology. In particular, in the era of Industry 4.0, semiconductor manufacturing that operates within cyber-physical systems (CPS) relies heavily on the reporting function to monitor delicate wafer processing. Thus, delays in reporting, usually caused by slow query processing, are intolerable. Materialized views (MVs) are commonly used to improve query processing speed. Nevertheless, as MVs require database space and maintenance, the decision to use them is not determined by the time factor alone. MV selection is therefore a problem that calls for an efficient selection algorithm that can deal with several constraints at a time. In this paper, we identify the criteria of the optimisation algorithms that have been proposed for the MV selection problem. In particular, this paper evaluates the coverage and limitations of the algorithms under study.
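The constrained selection problem the abstract describes can be sketched generically. This is not any surveyed algorithm, just a minimal greedy heuristic under a space budget; the candidate views and their benefit/size numbers are illustrative assumptions.

```python
# Greedy sketch of materialized-view selection: pick views with the best
# query-time benefit per unit of storage until a space budget is exhausted.
# Candidate names and numbers are hypothetical, not from any surveyed paper.

def select_views(candidates, space_budget):
    """candidates: {name: (benefit, size)}; returns (chosen names, space used)."""
    chosen, used = [], 0
    # Rank candidates by benefit density (benefit per unit of space).
    ranked = sorted(candidates.items(),
                    key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    for name, (benefit, size) in ranked:
        if used + size <= space_budget:  # respect the storage constraint
            chosen.append(name)
            used += size
    return chosen, used
```

Real MV-selection algorithms must also weigh maintenance cost, which is one reason the problem calls for multi-constraint optimisation rather than a single-factor greedy pass.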

    A Web Based Fuzzy Data Mining Using Combs Inference Method And Decision Predictor

    Get PDF
    Fuzzy logic has become a very popular method of reasoning about a system with approximate rather than precise inputs. When qualitative variables are used to determine decisions, specific membership functions must be created in which the membership value of an input can be any number between 0 and 1, instead of the strict 0 or 1 used in binary logic. As the number of input attributes increases, the number of combinatorial rules grows exponentially, which diminishes the performance of the system. This problem is generally known as "combinatorial rule explosion". The Information Technology Department of Minnesota State University, Mankato has been developing a system to analyze and mine historical data. This paper presents a methodology to reduce the number of rules used in the application and to create a data prediction system from a partially incomplete data set.
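The rule reduction the abstract refers to is the idea behind the Combs union-rule configuration (URC): instead of one rule per combination of input sets (exponential), each input set maps independently to a consequent (linear). A minimal sketch, with assumed membership functions and consequent values, not the paper's actual system:

```python
# Combs union-rule configuration (URC) sketch. A conventional
# intersection-rule configuration for n inputs with m sets each needs
# m**n rules; URC needs only m*n. All names and values are illustrative.

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def rule_counts(n_inputs, n_sets):
    """(intersection-rule count, union-rule count) for comparison."""
    return n_sets ** n_inputs, n_sets * n_inputs

def urc_infer(x1, x2):
    """Each (input, fuzzy set) pair fires its own rule; the output is a
    membership-weighted mean of the consequent centers (assumed values)."""
    consequents = {"low": 0.2, "med": 0.5, "high": 0.8}
    rules = [
        (tri(x1, 0, 0, 5), "low"), (tri(x1, 0, 5, 10), "med"), (tri(x1, 5, 10, 10), "high"),
        (tri(x2, 0, 0, 5), "low"), (tri(x2, 0, 5, 10), "med"), (tri(x2, 5, 10, 10), "high"),
    ]
    num = sum(w * consequents[c] for w, c in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

With two inputs and three sets each, the rule count drops from 9 to 6; with six inputs it drops from 729 to 18, which is the "explosion" being avoided.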

    Power efficiency through tuple ranking in wireless sensor network monitoring

    Get PDF
    In this paper, we present an innovative framework for efficiently monitoring Wireless Sensor Networks (WSNs). Our framework, coined KSpot, utilizes a novel top-k query processing algorithm we developed, in conjunction with the concept of in-network views, in order to minimize the cost of query execution. For ease of exposition, consider a set of sensors acquiring data from their environment at a given time instance. The generated information can conceptually be thought of as a horizontally fragmented base relation R. Furthermore, the results of a user-defined query Q, registered at some sink point, can conceptually be thought of as a view V. Maintaining consistency between V and R is very expensive in terms of communication and energy. Thus, KSpot focuses on a subset V′ (⊆ V) that unveils only the k highest-ranked answers at the sink, for some user-defined parameter k. To illustrate the efficiency of our framework, we have implemented a real system in nesC, which combines the traditional advantages of declarative acquisition frameworks, like TinyDB, with the ideas presented in this work. Extensive real-world testing and experimentation with traces from the University of California, Berkeley, the University of Washington, and Intel Research Berkeley show that KSpot provides up to 66% energy savings compared to TinyDB, minimizes both the size and number of packets transmitted over the network (by up to 77%), and prolongs the longevity of a WSN deployment to new scales.
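The V′ ⊆ V idea above can be sketched as a sink that keeps only the k highest-ranked answers and a filter that suppresses readings which cannot enter the top-k. This is a simplified model of the concept, not KSpot's actual nesC implementation; class and function names are illustrative.

```python
import heapq

class TopKSink:
    """Sink-side view V': only the k highest-ranked answers are kept."""

    def __init__(self, k):
        self.k = k
        self.heap = []  # min-heap of (value, sensor_id); root is the k-th ranked

    def threshold(self):
        """Current k-th value; readings at or below it need not be sent."""
        return self.heap[0][0] if len(self.heap) >= self.k else float("-inf")

    def report(self, sensor_id, value):
        """Replace the sensor's previous answer, then trim back to k entries."""
        self.heap = [(v, s) for v, s in self.heap if s != sensor_id]
        heapq.heapify(self.heap)
        heapq.heappush(self.heap, (value, sensor_id))
        if len(self.heap) > self.k:
            heapq.heappop(self.heap)

def simulate(readings, k):
    """Count transmissions saved by filtering against the sink's threshold."""
    sink, sent, suppressed = TopKSink(k), 0, 0
    for sensor_id, value in readings:
        if value > sink.threshold():
            sink.report(sensor_id, value)
            sent += 1
        else:
            suppressed += 1  # reading cannot enter the top-k view
    return sorted(sink.heap, reverse=True), sent, suppressed
```

In the real system the threshold must be disseminated into the network so that suppression happens at (or near) the sensors; that dissemination is where the communication savings come from.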

    Metadata-driven data integration

    Get PDF
    Joint doctorate (cotutelle) between Universitat Politècnica de Catalunya and Université Libre de Bruxelles, IT4BI-DC programme for the joint Ph.D. degree in computer science. Data has an undoubtable impact on society. Storing and processing large amounts of available data is currently one of the key success factors for an organization. Nonetheless, we are recently witnessing a change represented by huge and heterogeneous amounts of data. Indeed, 90% of the data in the world has been generated in the last two years. Thus, in order to carry out these data exploitation tasks, organizations must first perform data integration, combining data from multiple sources to yield a unified view over them. Yet the integration of massive and heterogeneous amounts of data requires revisiting the traditional integration assumptions to cope with the new requirements posed by such data-intensive settings. This PhD thesis aims to provide a novel framework for data integration in the context of data-intensive ecosystems, which entails dealing with vast amounts of heterogeneous data, from multiple sources and in their original format. To this end, we advocate for an integration process consisting of sequential activities governed by a semantic layer, implemented via a shared repository of metadata. From a stewardship perspective, these activities are the deployment of a data integration architecture, followed by the population of the shared metadata. From a data consumption perspective, the activities are virtual and materialized data integration, the former an exploratory task and the latter a consolidation one. Following the proposed framework, we focus on providing contributions to each of the four activities. We begin by proposing a software reference architecture for semantic-aware data-intensive systems. Such an architecture serves as a blueprint to deploy a stack of systems, its core being the metadata repository. Next, we propose a graph-based metadata model as a formalism for metadata management. We focus on supporting schema and data source evolution, a predominant factor in the heterogeneous sources at hand. For virtual integration, we propose query rewriting algorithms that rely on the previously proposed metadata model. We additionally consider semantic heterogeneities in the data sources, which the proposed algorithms are capable of automatically resolving. Finally, the thesis focuses on the materialized integration activity and, to this end, proposes a method to select intermediate results to materialize in data-intensive flows. Overall, the results of this thesis serve as a contribution to the field of data integration in contemporary data-intensive ecosystems.
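The interplay between the metadata model and query rewriting described above can be illustrated with a toy sketch: mappings from source-local attributes to a unified vocabulary act as the metadata graph's edges, and a global query is rewritten into one plan per source, tolerating schema evolution. Everything here (source names, attributes, the flat-dict encoding) is an illustrative assumption, not the thesis's actual formalism.

```python
# Toy metadata-driven virtual integration: per-source mappings play the
# role of edges in a metadata graph linking local attributes to a global
# vocabulary. An evolved schema (sensors_v2) is handled by its own mapping.

MAPPINGS = {
    "sensors_v1": {"temp": "temperature", "ts": "time"},
    "sensors_v2": {"temp_c": "temperature", "epoch": "time"},  # evolved schema
}

def rewrite(global_attrs):
    """Rewrite a query over the unified view into one plan per source,
    translating each global attribute to the source-local name."""
    plans = {}
    for source, mapping in MAPPINGS.items():
        inverse = {g: local for local, g in mapping.items()}
        plans[source] = [inverse[a] for a in global_attrs]
    return plans
```

The point of routing every rewrite through shared metadata is that when a source's schema evolves, only its mapping changes; queries over the unified vocabulary stay untouched.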

    The Work of Communication

    Get PDF
    The Work of Communication: Relational Perspectives on Working and Organizing in Contemporary Capitalism revolves around a two-part question: "What have work and organization become under contemporary capitalism—and how should organization studies approach them?" Changes in the texture of capitalism, heralded by social and organizational theorists alike, increasingly focus attention on communication as both vital to the conduct of work and imperative to organizational performance. Yet most accounts of communication in organization studies fail to grasp an alternate sense of the "work of communication" in the constitution of organizations, work practices, and economies. This book responds to that lack by portraying communicative practices—as opposed to individuals, interests, technologies, structures, organizations, or institutions—as the focal units of analysis in studies of the social and organizational problems occasioned by contemporary capitalism. Rather than suggesting that there exists a canonically "correct" route that communicative analyses must follow, the book explores the value of transcending longstanding divides between symbolic and material factors in studies of working and organizing. The recognition of dramatic shifts in technological, economic, and political forces, along with deep interconnections among the myriad factors shaping working and organizing, sows doubts about whether organization studies is up to the vital task of addressing the social problems capitalism now creates. Kuhn, Ashcraft, and Cooren argue that novel insights into those social problems are possible if we tell different stories about working and organizing. To aid authors of those stories, they develop a set of conceptual resources that they capture under the mantle of communicative relationality. 
These resources allow analysts to profit from burgeoning interest in notions such as sociomateriality, posthumanism, performativity, and affect. The book goes on to illustrate the benefits that investigations of work and organization can realize from communicative relationality by presenting case studies that analyze (a) the becoming of an idea, from its inception to solidification, (b) the emergence of what is taken to be "the product" in high-tech startup entrepreneurship, and (c) the branding of work (in this case, academic writing and commercial aviation) through affective economies. Taken together, the book portrays "the work of communication" as simultaneously about how work in the "new economy" revolves around communicative practice and about how communication serves as a mode of explanation with the potential to cultivate novel stories about working and organizing. Aimed at academics, researchers, and policy makers, this book's goal is to make tangible the contributions of communication for thinking about contemporary social and organizational problems.

    Web ontology reasoning with logic databases [online]

    Get PDF

    Data warehousing technologies for large-scale and right-time data

    Get PDF

    The ‘Meanings’ and ‘Enactments’ of Science and Technology: ANT-Mobilities’ Analysis of Two Cases

    Get PDF
    In this work I study two cases involving practices of science and technology against the backdrop of related and recent curricular reforms in both settings. The first case study is based on the 2005 South Asian earthquake in Muzaffarabad, Pakistan, which led to massive losses including large-scale injuries and disabilities. This led to reforms at many levels, ranging from disaster management to action plans on disability, including educational reforms in rehabilitation sciences. Local efforts to deal with this disaster led to innovative approaches such as the formation of a Community Based Rehabilitation (CBR) model by a local NGO, which I study in detail. The second case study is based on the recent reform of the science and technology curriculum in Ontario, which is related to the release of the 2007 Intergovernmental Panel on Climate Change (IPCC) reports. With climate change science driving this reform through curricular demands for students to learn 'what scientists do', my second case study details the formation of the Canadian CloudSat CALIPSO Validation Project (C3VP) and scientific practices that depict cutting-edge science related to climate change. To contend with the complexity inherent in these cases, I have developed a hybrid framework based on Actor-Network Theory (ANT) and the mobilities paradigm, while drawing on some aspects of the Annales school of historians. The resulting historical sociology, or historiography, depicts how these various networks were formed via mobilities of various actor-networks and vice versa. The practices involved in both cases evolved over time and required innovation in times of crises and challenges, and are far more than simple applications of method as required by the biomedical and positivist representations of science inherent in both educational reforms. Non-human agency in the form of crisis and disaster also emerges as a key reason for the formation of these networks. 
Drawing from both cases, I introduce the concept of "transectionalities" as a metaphor that represents configurations of actor-networks in science and technology geared towards dealing with crisis and disaster scenarios. Based on these findings, I also extend the idea of "multiple ontologies" by Mol (2002) to "Epistemic-Ontologic-Techne-" configurations, which are sensitive to considerations of time. Moreover, I find that mathematics is a key mobilizing actor and material semiotic that mediates communication between humans and non-humans, and I term these dynamics "mathematical mobilities." Based on case study one, I also suggest the notion of "affective care" in clinical reasoning, which is based on enhancing the beneficial effect of human-to-human relationships in these engagements.