    Context for Ubiquitous Data Management

    In response to the advance of ubiquitous computing technologies, we believe that for computer systems to be ubiquitous, they must be context-aware. In this paper, we address the impact of context-awareness on ubiquitous data management. To do this, we give an overview of different characteristics of context in order to develop a clear understanding of context, as well as its implications and requirements for context-aware data management. References to recent research activities and applicable techniques are also provided.

    Context Data Management for Large Scale Context-Aware Ubiquitous Systems

    Ph.D. (Doctor of Philosophy)

    Integrated context management for multi-domain pervasive environments

    An important part of the value of ubiquitous computing environments lies in their ability to interact with external domains. This paper addresses the issue of cross-domain context management in a scenario of seamless integration between a user's home domain and a ubiquitous computing environment. The work is based on the broader concept of Value ADded Environment (Vade), where multiple integration possibilities are explored. This paper focuses on location context management issues and describes two integration paths for sharing context data: providing local applications with access to the context of a visiting user, and providing global applications with access to context sources that are specific to the visited location.
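A minimal way to picture these two integration paths is a broker in the visited environment that both imports a visiting user's context from the home domain for local applications and exports location-specific sources to applications in the home domain. The sketch below is purely illustrative; the class, attribute, and domain names are assumptions and do not come from Vade.

```python
# Hypothetical sketch of the two integration paths described above:
# (a) local applications read a visiting user's home-domain context,
# (b) home-domain ("global") applications read visited-location sources.

class ContextBroker:
    def __init__(self, domain: str):
        self.domain = domain
        self.local_sources = {}   # source name -> callable returning a value
        self.remote_users = {}    # user id -> callable fetching home context

    # Path 1: expose a visiting user's home context to local applications.
    def register_visitor(self, user_id, home_fetch):
        self.remote_users[user_id] = home_fetch

    def visitor_context(self, user_id, key):
        return self.remote_users[user_id](key)

    # Path 2: expose visited-location context sources to global applications.
    def register_local_source(self, name, provider):
        self.local_sources[name] = provider

    def export_source(self, name):
        return self.local_sources[name]()


# Stand-in for the user's home-domain context management system.
home_context = {"alice": {"preferred_temp": 21.0, "locale": "pt-PT"}}

broker = ContextBroker("smart-building-42")
broker.register_visitor("alice", lambda key: home_context["alice"][key])
broker.register_local_source("room-occupancy", lambda: 3)

print(broker.visitor_context("alice", "preferred_temp"))  # 21.0  (path 1)
print(broker.export_source("room-occupancy"))             # 3     (path 2)
```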

    Study on the Context-Aware Middleware for Ubiquitous Greenhouses Using Wireless Sensor Networks

    Wireless Sensor Network (WSN) technology is one of the key technologies for realizing the ubiquitous society; if successfully applied to the agricultural sector, it could increase the productivity of agricultural and livestock products and make distribution channels more transparent. Middleware that can connect WSN hardware, applications, and enterprise systems is required to build a ubiquitous agriculture environment combining WSN technology with agricultural applications, but compared to other industries there have been few studies of WSN middleware for the agricultural environment. This paper proposes a context-aware middleware that efficiently processes data collected from ubiquitous greenhouses via WSN technology and uses them to implement combined services through organic connectivity of the data. The proposed middleware abstracts heterogeneous sensor nodes to integrate different forms of data, and provides intelligent context-awareness, event service, and filtering functions to maximize the operability and scalability of the middleware. To evaluate its performance, an integrated management system for ubiquitous greenhouses was implemented by applying the proposed middleware to an existing greenhouse and tested by measuring CPU load and the response time to users’ requests while the system was running.
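As a rough sketch of two of the middleware roles described above, abstracting heterogeneous sensor nodes and filtering readings through context rules that raise events, the fragment below normalizes a vendor-specific reading into a common record and passes it through a simple rule. All class names, units, and thresholds are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: (1) adapt a heterogeneous sensor node to a common
# reading format, (2) run readings through context rules that emit events.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Reading:
    node_id: str
    quantity: str   # e.g. "temperature", "humidity"
    value: float    # normalized unit, e.g. degrees Celsius

class TemperatureNodeAdapter:
    """Adapts a vendor-specific node that reports tenths of a degree."""
    def __init__(self, node_id: str):
        self.node_id = node_id
    def read(self, raw_tenths_celsius: int) -> Reading:
        return Reading(self.node_id, "temperature", raw_tenths_celsius / 10.0)

class ContextEngine:
    """Applies registered rules to readings and collects raised events."""
    def __init__(self):
        self.rules: List[Callable[[Reading], Optional[str]]] = []
        self.events: List[str] = []
    def add_rule(self, rule: Callable[[Reading], Optional[str]]) -> None:
        self.rules.append(rule)
    def process(self, reading: Reading) -> None:
        for rule in self.rules:
            event = rule(reading)
            if event is not None:
                self.events.append(event)

engine = ContextEngine()
# Hypothetical rule: flag greenhouse overheating above 35 C.
engine.add_rule(lambda r: f"ALERT {r.node_id}: {r.value} C"
                if r.quantity == "temperature" and r.value > 35.0 else None)

adapter = TemperatureNodeAdapter("gh1-node-07")
engine.process(adapter.read(372))   # 37.2 C -> event raised
print(engine.events)                # ['ALERT gh1-node-07: 37.2 C']
```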

    Semantic-Based Storage QoS Management Methodology -- Case Study for Distributed Environments

    Distributed computing environments, e.g. clouds, often deal with huge and constantly growing amounts of data. The global growth of data is driven by ubiquitous personal devices, enterprise and scientific applications, etc. As the size of data grows, new challenges emerge in the context of storage management. Modern data and storage resource management systems need to face a wide range of problems: minimizing energy consumption (green data centers), optimizing resource usage, throughput and capacity, data availability, security and legal issues, and scalability. In addition, users or their applications can have QoS (Quality of Service) requirements concerning storage access, which further complicates the management. To cope with this problem, a common mass storage system model that takes the performance aspects of a storage system into account becomes a necessity. Describing the model with semantic technologies brings semantic interoperability between the system components. In this paper we describe our approach to data management with QoS based on the developed models, as a case study for distributed environments.

    Survey of context provisioning middleware

    In the scope of ubiquitous computing, one of the key issues is awareness of context, which covers diverse aspects of the user's situation: their activities, physical surroundings, location, emotions and social relations, device and network characteristics, and the interaction of these aspects with each other. This contextual knowledge is typically acquired from physical, virtual or logical sensors. To overcome problems of heterogeneity and hide complexity, a significant number of middleware approaches have been proposed for systematic and coherent access to manifold context parameters. These frameworks deal particularly with context representation, context management and reasoning, i.e. deriving abstract knowledge from raw sensor data. This article surveys not only related work in these three categories but also the required evaluation principles. © 2009-2012 IEEE
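The reasoning step mentioned here, deriving abstract knowledge from raw sensor data, can be pictured as a rule that fuses a few low-level readings into a higher-level context value. The attribute names and thresholds below are purely illustrative assumptions, not taken from any of the surveyed frameworks.

```python
# Illustrative only: derive an abstract context value ("in_meeting") from
# raw readings of physical and virtual sensors.

def infer_situation(raw: dict) -> str:
    """raw: dict of low-level readings, e.g. location, calendar, noise."""
    in_office = raw.get("location") == "office"
    calendar_busy = raw.get("calendar_status") == "busy"
    quiet = raw.get("noise_db", 0) < 45
    if in_office and calendar_busy:
        return "in_meeting"
    if in_office and quiet:
        return "working_alone"
    return "unknown"

print(infer_situation({"location": "office",
                       "calendar_status": "busy",
                       "noise_db": 52}))   # in_meeting
```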

    Economics of Data Systematic Review for Planning Strategies in the InsurTech industry

    The knowledge of data enables exploring how value is created from data. Organizations’ strategic planning becomes easier if the value of data is understood and adopted; unless managers know how to use data, its exploitable value remains limited. Previous studies assessed either data dimensions such as volume, variety, velocity, veracity and granularity, or data management processes. However, many of these topics have been treated with a technical approach, and only a few focused on the value of data in management, strategy, and planning. The ubiquity of data has allowed insurance incumbents and startups to exploit technologies, giving rise to InsurTech, which leverages a unique data-driven proposition and often gains a competitive advantage. The paper aims to explore the economics of data in order to enable strategic planning of data management practices. It contributes to the management and strategy literature with an evidence-based systematic literature review that embraces the value generated by knowing data sources, data types, and extended data dimensions, analyzes enabling technologies, and extends data management practices for reaching organizations’ objectives in the InsurTech empirical context. In addition to further avenues of research, it provides managers with a theoretical data-valorization framework for strategic data planning, and gives institutions an overview for guiding the digital transformation. The novelty of this paper is its comprehensive focus on the economics of data at the intersection between traditional and emerging business models.

    Identity in research infrastructure and scientific communication: Report from the 1st IRISC workshop, Helsinki Sep 12-13, 2011

    Motivation for the IRISC workshop came from the observation that identity and digital identification are increasingly important factors in modern scientific research, especially with the now near-ubiquitous use of the Internet as a global medium for dissemination and debate of scientific knowledge and data, and as a platform for scientific collaborations and large-scale e-science activities.

The 1 1/2 day IRISC2011 workshop sought to explore a series of interrelated topics under two main themes: i) unambiguously identifying authors/creators & attributing their scholarly works, and ii) individual identification and access management in the context of identity federations. Specific aims of the workshop included:

• Raising overall awareness of key technical and non-technical challenges, opportunities and developments.
• Facilitating a dialogue, cross-pollination of ideas, collaboration and coordination between diverse – and largely unconnected – communities.
• Identifying & discussing existing/emerging technologies, best practices and requirements for researcher identification.

This report provides background information on key identification-related concepts & projects, describes the workshop proceedings, and summarizes the key workshop findings.

    Parallel Sort-Based Matching for Data Distribution Management on Shared-Memory Multiprocessors

    In this paper we consider the problem of identifying intersections between two sets of d-dimensional axis-parallel rectangles. This is a common problem that arises in many agent-based simulation studies, and is of central importance in the context of the High Level Architecture (HLA), where it is at the core of the Data Distribution Management (DDM) service. Several realizations of the DDM service have been proposed; however, many of them are either inefficient or inherently sequential. These are serious limitations since multicore processors are now ubiquitous, and DDM algorithms -- being CPU-intensive -- could benefit from additional computing power. We propose a parallel version of the Sort-Based Matching algorithm for shared-memory multiprocessors. Sort-Based Matching is one of the most efficient serial algorithms for the DDM problem, but is quite difficult to parallelize due to data dependencies. We describe the algorithm and compute its asymptotic running time; we complete the analysis by assessing its performance and scalability through extensive experiments on two commodity multicore systems based on a dual-socket Intel Xeon processor and a single-socket Intel Core i7 processor.

    Comment: Proceedings of the 21st ACM/IEEE International Symposium on Distributed Simulation and Real Time Applications (DS-RT 2017). Best Paper Award at DS-RT 2017.
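The core of Sort-Based Matching is easiest to see in one dimension, where each dimension of the d-dimensional extents is handled the same way: sort all interval endpoints once, then sweep over them while maintaining the sets of currently open subscription and update extents, recording an overlap whenever a new extent opens. The sketch below is a minimal serial Python illustration of that sort-and-sweep idea, not the parallel shared-memory algorithm of the paper; all names are illustrative.

```python
# Minimal 1-D Sort-Based Matching sketch: find all overlapping
# (subscription, update) interval pairs by sorting endpoints and sweeping.
# Intervals are (lo, hi) tuples with lo <= hi; overlap is closed-interval.

def sort_based_matching(subscriptions, updates):
    events = []  # (coordinate, is_end, kind, index)
    for i, (lo, hi) in enumerate(subscriptions):
        events.append((lo, 0, 'S', i))
        events.append((hi, 1, 'S', i))
    for j, (lo, hi) in enumerate(updates):
        events.append((lo, 0, 'U', j))
        events.append((hi, 1, 'U', j))
    # Sort by coordinate; open events (is_end == 0) come before close events
    # at the same coordinate, so touching intervals count as overlapping.
    events.sort()

    open_subs, open_upds = set(), set()
    matches = set()  # pairs (subscription index, update index)
    for _, is_end, kind, idx in events:
        if is_end:                      # an extent closes
            (open_subs if kind == 'S' else open_upds).discard(idx)
        elif kind == 'S':               # a subscription extent opens
            open_subs.add(idx)
            matches.update((idx, j) for j in open_upds)
        else:                           # an update extent opens
            open_upds.add(idx)
            matches.update((i, idx) for i in open_subs)
    return matches


if __name__ == "__main__":
    subs = [(0, 4), (6, 9)]
    upds = [(3, 7), (10, 12)]
    print(sorted(sort_based_matching(subs, upds)))  # [(0, 0), (1, 0)]
```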