
    Platforms and Protocols for the Internet of Things

    Building a general architecture for the Internet of Things (IoT) is a very complex task, exacerbated by the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we identify the main blocks of a generic IoT architecture, describing their features and requirements, and analyze the most common approaches proposed in the literature for each block. In particular, we compare three of the most important communication technologies for IoT purposes, i.e., REST, MQTT, and AMQP, and we also analyze three IoT platforms: openHAB, Sentilo, and Parse. The analysis demonstrates the importance of adopting an integrated approach that jointly addresses several issues and can flexibly accommodate the requirements of the various elements of the system. We also discuss a use case that illustrates the design challenges and the choices to make when selecting which protocols and technologies to use.
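    The core contrast among the compared technologies is interaction style: REST is pull-based request/response, while MQTT and AMQP are push-based publish/subscribe. A minimal in-memory sketch of that difference (the `Broker` and `Resource` classes and the topic name are illustrative, not from the paper or any real protocol stack):

```python
# Toy contrast of the two interaction styles compared in the paper:
# REST-style request/response (pull) vs. MQTT/AMQP-style publish/subscribe (push).
from collections import defaultdict
from typing import Callable, Dict, List


class Broker:
    """Toy publish/subscribe broker (MQTT/AMQP style)."""

    def __init__(self) -> None:
        self.subscribers: Dict[str, List[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[str], None]) -> None:
        self.subscribers[topic].append(callback)

    def publish(self, topic: str, payload: str) -> None:
        # Push-based: the broker forwards the message to every subscriber.
        for callback in self.subscribers[topic]:
            callback(payload)


class Resource:
    """Toy REST-style resource: the client pulls the current state on demand."""

    def __init__(self, state: str) -> None:
        self.state = state

    def get(self) -> str:
        return self.state


received: List[str] = []
broker = Broker()
broker.subscribe("home/temperature", received.append)
broker.publish("home/temperature", "21.5")   # push: subscribers are notified

sensor = Resource("21.5")
polled = sensor.get()                        # pull: client asks when it wants
```

    In a real deployment the choice between the two styles drives many of the architectural decisions the paper discusses, e.g., whether constrained devices must hold connections open to a broker or can be polled through a gateway.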

    Middleware-based Database Replication: The Gaps between Theory and Practice

    The need for high availability and performance in data management systems has been fueling a long-running interest in database replication from both academia and industry. However, academic groups often attack replication problems in isolation, overlooking the need for completeness in their solutions, while commercial teams take a holistic approach that often misses opportunities for fundamental innovation. Over time, this has created a gap between academic research and industrial practice. This paper aims to characterize the gap along three axes: performance, availability, and administration. We build on our own experience developing and deploying replication systems in commercial and academic settings, as well as on a large body of prior related work. We sift through representative examples from the last decade of open-source, academic, and commercial database replication systems and combine this material with case studies from real systems deployed at Fortune 500 customers. We propose two agendas, one for academic research and one for industrial R&D, which we believe can bridge the gap within 5-10 years. This way, we hope to both motivate and help researchers in making the theory and practice of middleware-based database replication more relevant to each other. Comment: 14 pages. Appears in Proc. ACM SIGMOD International Conference on Management of Data, Vancouver, Canada, June 200

    Software-implemented attack tolerance for critical information retrieval

    The fast-growing reliance of our daily life upon online information services often demands an appropriate level of privacy protection as well as highly available service provision. However, most existing solutions have attempted to address these problems separately. This thesis investigates and presents a solution that provides both privacy protection and fault tolerance for online information retrieval. A new approach to Attack-Tolerant Information Retrieval (ATIR) is developed based on an extension of existing theoretical results for Private Information Retrieval (PIR). ATIR uses replicated services to protect a user's privacy and to ensure service availability. In particular, ATIR can tolerate any collusion of up to t servers for privacy violation and up to f faulty (either crashed or malicious) servers in a system with k replicated servers, provided that k ≥ t + f + 1, where t ≥ 1 and f ≤ t. In contrast to other related approaches, ATIR relies on neither enforced trust assumptions, such as the use of tamper-resistant hardware and trusted third parties, nor an increased number of replicated servers. While the best solution known so far requires k (≥ 3t + 1) replicated servers to cope with t malicious servers and any collusion of up to t servers with an O(n^*) communication complexity, ATIR uses fewer servers with a much improved communication cost of O(n^(1/2)) (where n is the size of the database managed by a server). The majority of current PIR research remains at a theoretical level. This thesis provides both theoretical schemes and their practical implementations with good performance results. In a LAN environment, it takes well under half a second to use an ATIR service for calculations over data sets of up to 1 MB. The performance of the ATIR systems remains at the same level even in the presence of server crashes and malicious attacks. Both analytical results and experimental evaluation show that ATIR offers an attractive and practical solution for ever-increasing online information applications.

    ATTACKS AND COUNTERMEASURES FOR WEBVIEW ON MOBILE SYSTEMS

    All mainstream mobile operating systems provide a web container, called "WebView". This Web-based interface can be included as part of a mobile application to retrieve and display web contents from remote servers. WebView not only provides the same functionality as a web browser but, more importantly, enables rich two-way interactions, through its APIs, between mobile apps and the webpages loaded inside it. However, the design of WebView changes the landscape of the Web, especially from the security perspective. This dissertation conducts a comprehensive and systematic study of WebView's impact on web security, with a particular focus on identifying its fundamental causes. It presents multiple attacks discovered on WebView and proposes new protection models to enhance WebView's security. The design principles of these models are described, along with a prototype implementation on the Android platform. Evaluations demonstrate the effectiveness and performance of these protection models.

    Show Me the (Data About the) Money!

    Information about consumers, their money, and what they do with it is the lifeblood of the flourishing financial technology (“FinTech”) sector. Historically, highly regulated banks jealously protected this data. However, consumers themselves now share their data with businesses more than ever before. These businesses monetize and use the data for countless prospects, often without the consumers’ actual consent. Understanding the dimensions of this recent phenomenon, more and more consumer groups, scholars, and lawmakers have started advocating for consumers to have the ability to control their data as a modern imperative. This ability is tightly linked to the concept of open banking—an initiative that allows consumers to control and share their banking data with service providers as they see fit. But in the U.S., banks have threatened to block the servers of tech companies and data aggregators—business entities that serve as the middlemen connecting FinTech companies and banks, enabling consumers to get more financial services—from accessing their customers’ data even if the customers agree to it. With no regulation or accepted standards for the ethical gathering and use of data, banks argue that limiting access helps them protect their clients’ privacy, improve their accounts’ safety, and promote consumer protection principles. Banks claim that FinTech apps collect more data than needed, store it insecurely, and sell it to others. But the motivation of the big banks in advocating for such limitations may not be so pure. Banks do not want to relinquish competitive advantages, lose customers, or be held liable for data or fund losses. Witnessing resistance, tech companies are not sitting idly by waiting for banks to limit their data access. Instead, they are working on ways to outsmart banks’ blocking technology and use data aggregation services as a middleman. 
They also extended the fight into Washington, where regulators such as the Federal Trade Commission (FTC) and the Consumer Financial Protection Bureau (CFPB) are noticing how technology impacts consumer data flows and credit reporting issues. Advocating for consumers’ rights to control their data, tech companies lobby for open banking.

    The Risks of Sourcing Software as a Service – An Empirical Analysis of Adopters and Non-Adopters

    Software-as-a-Service (SaaS) is said to be becoming an important cornerstone of the Internet of Services. However, while some market research and IT provider firms fervently support this point of view, others already predict the failure of this on-demand sourcing option due to considerable risks associated with SaaS. Although there is a substantial body of research at the intersection of traditional and on-demand IT outsourcing and risk management, existing research is virtually silent on the risks of SaaS. This study thus seeks to deepen the understanding of a comprehensive set of risk factors affecting the adoption of SaaS and to discriminate between SaaS adopters and non-adopters. Grounded in perceived risk theory, we developed a research model that was analyzed with survey data from 379 firms in Germany. Our analysis revealed that security risk was the dominant factor influencing companies’ overall risk perceptions of SaaS-based sourcing. Moreover, we found significant differences between adopters’ and non-adopters’ perceptions of performance and financial risks. Overall, this study provides relevant findings that potential and actual SaaS clients may use to better assess SaaS-based offerings. For SaaS providers, our study identifies important factors to emphasize when offering SaaS services to companies in different stages of the technology adoption lifecycle.

    Data Linking - Linking survey data with geospatial, social media, and sensor data (Version 1.0)

    Survey data are still the most commonly used type of data in the quantitative social sciences. However, as not everything that is of interest to social scientists can be measured via surveys, and the self-report data they provide have certain limitations, such as recollection or social desirability bias, researchers have increasingly used other types of data that are not specifically created for research. These data are often called "found data" or "non-designed data" and encompass a variety of different data types. Naturally, these data have their own sets of limitations. One way of combining the unique strengths of survey data and these other data types and dealing with some of their respective limitations is to link them. This guideline first describes why linking survey data with other types of data can be useful for researchers. After that, it focuses on the linking of survey data with three types of data that are becoming increasingly popular in the social sciences: geospatial data, social media data, and sensor data. Following a discussion of the advantages and challenges associated with linking survey data with these types of data, the guideline concludes by comparing their similarities, presenting some general recommendations regarding linking surveys with other types of (found/non-designed) data, and providing an outlook on current developments in survey research with regard to data linking.
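    At its simplest, linking survey data with another data source reduces to a join on a shared identifier. A minimal sketch (all field names, keys, and values are invented for illustration; real linkage also involves consent, anonymization, and fuzzy-matching concerns the guideline discusses):

```python
# Toy example: left-join survey responses with sensor data on a shared
# respondent ID. All names and values are illustrative only.
survey = [
    {"respondent_id": 1, "self_reported_activity": "high"},
    {"respondent_id": 2, "self_reported_activity": "low"},
]
sensor = {
    1: {"avg_daily_steps": 11200},
    2: {"avg_daily_steps": 3400},
}

# Keep every survey record; attach sensor measurements where available.
linked = [
    {**row, **sensor.get(row["respondent_id"], {})}
    for row in survey
]
```

    The linked records then allow self-reports to be compared against designed-for-nothing measurements, which is exactly the kind of validation the combination of survey and found data enables.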