
    Internet of Things Cloud: Architecture and Implementation

    The Internet of Things (IoT), which enables common objects to be intelligent and interactive, is considered the next evolution of the Internet. Its pervasiveness and its ability to collect and analyze data that can be converted into information have motivated a plethora of IoT applications. For the successful deployment and management of these applications, cloud computing techniques are indispensable, since they provide high computational capability as well as large storage capacity. This paper aims to provide insight into the architecture, implementation, and performance of the IoT cloud. Several potential application scenarios of the IoT cloud are studied, and an architecture is discussed with regard to the functionality of each component. Moreover, the implementation details of the IoT cloud are presented, along with the services it offers. The main contributions of this paper lie in the combination of Hypertext Transfer Protocol (HTTP) and Message Queuing Telemetry Transport (MQTT) servers to offer IoT services within the IoT cloud architecture, together with various techniques to guarantee high performance. Finally, experimental results are given to demonstrate the service capabilities of the IoT cloud under certain conditions. Comment: 19 pages, 4 figures, IEEE Communications Magazine.
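    As a rough illustration of how an HTTP server and an MQTT server can be combined to offer IoT services, the sketch below bridges MQTT telemetry into an HTTP-queryable store. It is a minimal sketch assuming the paho-mqtt and Flask libraries; the topic layout, port numbers, and payload format are illustrative assumptions, not details of the paper's implementation.

```python
# Minimal sketch of an HTTP + MQTT front end for an IoT cloud.
# Hypothetical: topic names, ports, and payload format are assumptions,
# not the architecture described in the paper.
import json
import threading

import paho.mqtt.client as mqtt   # MQTT ingestion path
from flask import Flask, jsonify  # HTTP query path

latest = {}          # device_id -> last reported reading
lock = threading.Lock()

def on_message(client, userdata, msg):
    # Devices publish JSON readings to sensors/<device_id>.
    device_id = msg.topic.split("/")[-1]
    with lock:
        latest[device_id] = json.loads(msg.payload)

mqttc = mqtt.Client()
mqttc.on_message = on_message
mqttc.connect("localhost", 1883)
mqttc.subscribe("sensors/#")
mqttc.loop_start()   # background thread handles MQTT traffic

app = Flask(__name__)

@app.route("/devices/<device_id>")
def read_device(device_id):
    # HTTP clients poll the most recent reading for a device.
    with lock:
        return jsonify(latest.get(device_id, {}))

if __name__ == "__main__":
    app.run(port=8080)
```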

    Evaluating cloud database migration options using workload models

    A key challenge in porting enterprise software systems to the cloud is the migration of their database. Choosing a cloud provider and service option (e.g., a database-as-a-service or a manually configured set of virtual machines) typically requires the estimation of the cost and migration duration for each option considered. Many organisations also require this information for budgeting and planning purposes. Existing cloud migration research focuses on the software components and therefore does not address this need. We introduce a two-stage approach which accurately estimates the migration cost, migration duration, and cloud running costs of relational databases. The first stage of our approach obtains workload and structure models of the database to be migrated from database logs and the database schema. The second stage performs a discrete-event simulation using these models to obtain the cost and duration estimates. We implemented software tools that automate both stages of our approach. An extensive evaluation compares the estimates from our approach against results from real-world cloud database migrations.
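    The second stage lends itself to a back-of-envelope illustration. The sketch below is a hypothetical simplification that replaces the paper's discrete-event simulation with a closed-form estimate from a structure model; all throughput and pricing figures are invented assumptions, not the authors' models.

```python
# Illustrative sketch of estimating migration duration and cost from a
# database structure model. All figures (throughput, prices) are made-up
# assumptions; the paper itself uses a discrete-event simulation.
from dataclasses import dataclass

@dataclass
class TableModel:
    name: str
    size_gb: float       # from the database schema / statistics
    rows: int

def estimate_migration(tables, upload_mbps=100.0,
                       transfer_cost_per_gb=0.09,
                       running_cost_per_hour=0.25):
    total_gb = sum(t.size_gb for t in tables)
    # Duration: bulk-transfer time at the assumed sustained upload rate.
    hours = (total_gb * 8 * 1024) / upload_mbps / 3600
    migration_cost = total_gb * transfer_cost_per_gb
    monthly_running = running_cost_per_hour * 24 * 30
    return hours, migration_cost, monthly_running

tables = [TableModel("orders", 120.0, 450_000_000),
          TableModel("customers", 8.5, 12_000_000)]
h, mc, rc = estimate_migration(tables)
print(f"~{h:.1f} h transfer, ${mc:.2f} one-off, ${rc:.2f}/month")
```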

    SoK: Cryptographically Protected Database Search

    Protected database search systems cryptographically isolate the roles of reading from, writing to, and administering the database. This separation limits unnecessary administrator access and protects data in the case of system breaches. Since protected search was introduced in 2000, the area has grown rapidly; systems are offered by academia, start-ups, and established companies. However, there is no best protected search system or set of techniques. Design of such systems is a balancing act between security, functionality, performance, and usability. This challenge is made more difficult by ongoing database specialization, as some users will want the functionality of SQL, NoSQL, or NewSQL databases. This database evolution will continue, and the protected search community should be able to quickly provide functionality consistent with newly invented databases. At the same time, the community must accurately and clearly characterize the tradeoffs between different approaches. To address these challenges, we provide the following contributions: 1) An identification of the important primitive operations across database paradigms. We find there are a small number of base operations that can be used and combined to support a large number of database paradigms. 2) An evaluation of the current state of protected search systems in implementing these base operations. This evaluation describes the main approaches and tradeoffs for each base operation. Furthermore, it puts protected search in the context of unprotected search, identifying key gaps in functionality. 3) An analysis of attacks against protected search for different base queries. 4) A roadmap and tools for transforming a protected search system into a protected database, including an open-source performance evaluation platform and initial user opinions of protected search. Comment: 20 pages, to appear in IEEE Security and Privacy.
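    One of the base operations such systems must support is keyword equality search. The toy sketch below shows a common way to realize it: the client derives search tokens with a keyed PRF (HMAC-SHA256 here), so the server can match tokens without seeing plaintext keywords. This is illustrative only, not any specific system from the paper; real protected search systems also address the access- and result-pattern leakage this sketch ignores.

```python
# Toy sketch of one protected-search base operation: keyword equality
# search over a server-side index, with search tokens derived from a
# keyed PRF (HMAC-SHA256). Illustrative only.
import hmac
import hashlib
from collections import defaultdict

KEY = b"client-held secret key"   # never leaves the client

def token(keyword: str) -> bytes:
    # Deterministic PRF output stands in for the plaintext keyword.
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

# --- client builds the index, server stores it ---
index = defaultdict(list)         # token -> document ids
documents = {1: "cloud database search", 2: "graph database"}
for doc_id, text in documents.items():
    for word in set(text.split()):
        index[token(word)].append(doc_id)

# --- query: client sends only the token, server matches blindly ---
print(index.get(token("database"), []))   # -> [1, 2]
```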

    Distributed data service for data management in internet of things middleware

    The development of the Internet of Things (IoT) is closely related to a considerable increase in the number and variety of devices connected to the Internet. Sensors have become a regular component of our environment, as have smartphones and other devices that continuously collect data about our lives, even without our intervention. With such connected devices, a broad range of applications has been developed and deployed, including those dealing with massive volumes of data. In this paper, we introduce a Distributed Data Service (DDS) to collect and process data for IoT environments. One central goal of this DDS is to enable multiple, distinct IoT middleware systems to share common data services from a loosely-coupled provider. In this context, we propose a new specification of functionalities for a DDS and the conception of the corresponding techniques for collecting, filtering, and storing data conveniently and efficiently in this environment. Another contribution is a data aggregation component that is proposed to support efficient real-time data querying. To validate its data collecting and querying functionalities and performance, the proposed DDS is evaluated in two case studies involving a simulated smart home system: the first devoted to evaluating data collection and aggregation when the DDS interacts with the UIoT middleware, and the second comparing DDS data collection with the same functionality implemented within the Kaa middleware.
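    To make the aggregation idea concrete, here is a hypothetical sketch of a time-window aggregation component: raw readings are folded into fixed-size window summaries so that real-time queries read a small aggregate rather than every sample. Class and parameter names are assumptions, not the DDS specification.

```python
# Hypothetical sketch of the kind of aggregation component the DDS
# proposes: raw readings are folded into fixed time-window summaries so
# real-time queries hit a small aggregate instead of every sample.
from collections import defaultdict

WINDOW_S = 60  # one-minute buckets (an assumption, not the paper's value)

class Aggregator:
    def __init__(self):
        # (sensor_id, window_start) -> [count, total, minimum, maximum]
        self.windows = defaultdict(lambda: [0, 0.0, float("inf"), float("-inf")])

    def collect(self, sensor_id, timestamp, value):
        bucket = int(timestamp) - int(timestamp) % WINDOW_S
        w = self.windows[(sensor_id, bucket)]
        w[0] += 1
        w[1] += value
        w[2] = min(w[2], value)
        w[3] = max(w[3], value)

    def query(self, sensor_id, bucket):
        # Assumes the window exists; a real component would handle misses.
        count, total, lo, hi = self.windows[(sensor_id, bucket)]
        return {"avg": total / count, "min": lo, "max": hi, "n": count}

agg = Aggregator()
for t, v in [(0, 21.0), (10, 23.0), (50, 22.0)]:
    agg.collect("living-room", t, v)
print(agg.query("living-room", 0))  # {'avg': 22.0, 'min': 21.0, 'max': 23.0, 'n': 3}
```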

    Curriculum Guidelines for Undergraduate Programs in Data Science

    The Park City Math Institute (PCMI) 2016 Summer Undergraduate Faculty Program met for the purpose of composing guidelines for undergraduate programs in Data Science. The group consisted of 25 undergraduate faculty from a variety of institutions in the U.S., primarily from the disciplines of mathematics, statistics, and computer science. These guidelines are meant to provide some structure for institutions planning for or revising a major in Data Science.

    Security Implications of Adopting a New Data Storage and Access Model in Big Data and Cloud Computing

    This article examines the security implications of using cloud computing and Big Data. It employs a mixed methodology of qualitative and quantitative research and takes a critical realist epistemological approach. The objective is to identify the components of a theory for predicting and explaining [1, 4] the security implications associated with adopting the services provided by cloud computing and Big Data. The integration of various information sources and the widespread use of computing across diverse fields have resulted in a significant increase in data volume, scale, and diversity. Consequently, data management, storage, retrieval, and access have undergone significant changes. The latest developments in IT have brought forth novel technologies such as Cloud Computing and Big Data. Big Data comprises technologies that rely on NoSQL (Not only SQL) databases, which enable data volumes, numbers, and types to grow at large scale. The new NoSQL systems are seen as solutions for meeting the scalability requirements of large IT firms. Multiple NoSQL systems are available, under both open-source and pay-as-you-go models.

    CERN openlab Whitepaper on Future IT Challenges in Scientific Research

    This whitepaper describes the major IT challenges in scientific research at CERN and several other European and international research laboratories and projects. Each challenge is exemplified through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts of the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.

    Relational Database Design and Multi-Objective Database Queries for Position Navigation and Timing Data

    Performing flight tests is a natural part of researching cutting-edge sensors and filters for sensor integration. Unfortunately, tests are expensive and typically take many months of planning. A sensible goal is to make previously collected data readily available to researchers for future development. The Air Force Institute of Technology (AFIT) has hundreds of data logs potentially available to aid in facilitating further research in the area of navigation. A database would provide a common location where older and newer data sets are available. Such a database must be able to store the sensor data, metadata about the sensors, and affiliated metadata of interest. This thesis proposes a standard approach for sensor and metadata schemas and three different design approaches that organize this data in relational databases. Queries proposed by members of the Autonomy and Navigation Technology (ANT) Center at AFIT are the foundation of the testing experiments. These tests fall into two categories: data downloads and queries that return a list of missions. Test databases of 100 and 1000 missions are created for the three design approaches to simulate AFIT's present and future volume of data logs. After testing, this thesis recommends one specific approach to the ANT Center as its database solution. To enable more complex queries, a genetic algorithm and a hill-climbing algorithm are developed as solutions to queries in the combined knapsack/set-covering problem domain. These algorithms are tested against the two test databases for the recommended database approach. Each algorithm returned solutions in under two minutes and may be a valuable tool for researchers when the database becomes operational.
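    To illustrate the flavor of the combined knapsack/set-covering queries, the sketch below is a hypothetical hill climber that selects missions covering as many requested sensor types as possible (set covering) without exceeding a download-size budget (knapsack). The mission data and scoring are invented for illustration and do not reflect the thesis's databases or algorithms.

```python
# Hypothetical hill climber for a combined knapsack/set-covering query:
# choose missions that cover as many requested sensor types as possible
# without exceeding a download budget. Data and parameters are invented.
import random

missions = {  # mission id -> (size in GB, sensor types carried)
    "m1": (4.0, {"gps", "imu"}),
    "m2": (6.0, {"imu", "lidar"}),
    "m3": (3.0, {"gps", "camera"}),
    "m4": (5.0, {"lidar", "camera"}),
}
BUDGET_GB = 10.0
WANTED = {"gps", "imu", "lidar", "camera"}

def score(selected):
    size = sum(missions[m][0] for m in selected)
    if size > BUDGET_GB:
        return -1                      # infeasible: over budget
    covered = set().union(*(missions[m][1] for m in selected)) if selected else set()
    return len(covered & WANTED)       # sensor types covered

def hill_climb(steps=1000):
    current = set()
    for _ in range(steps):
        # Neighbor: flip membership of one random mission.
        m = random.choice(list(missions))
        neighbor = current ^ {m}
        if score(neighbor) >= score(current):
            current = neighbor
    return current, score(current)

random.seed(1)
print(hill_climb())   # e.g. ({'m2', 'm3'}, 4): full coverage in 9 GB
```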