
    Furthering the Growth of Cloud Computing by Providing Privacy as a Service

    The evolution of Cloud Computing as a viable business solution for providing hardware and software has created many security concerns. Among these security concerns, privacy is often overlooked. If Cloud Computing is to continue its growth, this privacy concern will need to be addressed. In this work we discuss the current growth of Cloud Computing and the impact that the public sector and privacy can have in furthering this growth. To begin to provide privacy protection for Cloud Computing, we introduce privacy constraints that outline privacy preferences. We propose the expansion of Cloud Service Level Agreements (SLAs) to include these privacy constraints as Quality of Service (QoS) levels. This privacy QoS must be agreed upon, along with the rest of the QoS terms within the SLA, by the Cloud consumer and provider. Finally, we introduce Privacy as a Service (PraaS) to monitor the agreement and provide enforcement if necessary.
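    The abstract does not spell out a concrete schema for these privacy constraints, so the following Python sketch is purely illustrative: a hypothetical SLA structure in which privacy preferences sit alongside ordinary QoS terms, plus the kind of compliance check a PraaS monitor might perform. All names here (PrivacyConstraint, CloudSLA, praas_check) are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class PrivacyConstraint:
    """One privacy preference expressed as a QoS term (hypothetical schema)."""
    name: str      # e.g. "data_residency"
    required: str  # the consumer's required level, e.g. "EU-only"

@dataclass
class CloudSLA:
    provider: str
    consumer: str
    qos_terms: dict = field(default_factory=dict)      # ordinary QoS terms (uptime, latency, ...)
    privacy_terms: list = field(default_factory=list)  # PrivacyConstraint entries

def praas_check(sla: CloudSLA, observed: dict) -> list:
    """Return the privacy terms the provider currently violates:
    the kind of monitoring a PraaS component would perform."""
    return [c for c in sla.privacy_terms if observed.get(c.name) != c.required]

# A consumer requires EU data residency and no third-party sharing.
sla = CloudSLA(
    provider="ExampleCloud",
    consumer="Acme Corp",
    qos_terms={"uptime": "99.9%"},
    privacy_terms=[
        PrivacyConstraint("data_residency", "EU-only"),
        PrivacyConstraint("third_party_sharing", "forbidden"),
    ],
)
violations = praas_check(sla, {"data_residency": "US", "third_party_sharing": "forbidden"})
print([v.name for v in violations])  # ['data_residency']
```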

    Cold Storage Data Archives: More Than Just a Bunch of Tapes

    The abundance of available sensor and derived data from large scientific experiments, such as earth observation programs, radio astronomy sky surveys, and high-energy physics, already exceeds the storage hardware globally fabricated per year. To that end, cold storage data archives are the often overlooked spearheads of modern big data analytics in scientific, data-intensive application domains. While high-performance data analytics has received much attention from the research community, the growing number of problems in designing and deploying cold storage archives has received very little attention. In this paper, we take the first step towards bridging this gap in knowledge by presenting an analysis of four real-world cold storage archives from three different application domains. In doing so, we highlight (i) workload characteristics that differentiate these archives from traditional, performance-sensitive data analytics, (ii) design trade-offs involved in building cold storage systems for these archives, and (iii) deployment trade-offs with respect to migration to the public cloud. Based on our analysis, we discuss several other important research challenges that need to be addressed by the data management community.

    Mapping the Current Landscape of Research Library Engagement with Emerging Technologies in Research and Learning: Final Report

    The generation, dissemination, and analysis of digital information is a significant driver, and consequence, of technological change. As data and information stewards in physical and virtual space, research libraries are thoroughly entangled in the challenges presented by the Fourth Industrial Revolution: a societal shift powered not by steam or electricity, but by data, and characterized by a fusion of the physical and digital worlds. Organizing, structuring, preserving, and providing access to growing volumes of the digital data generated and required by research and industry will become a critically important function. As partners with the community of researchers and scholars, research libraries are also recognizing and adapting to the consequences of technological change in the practices of scholarship and scholarly communication. Technologies that have emerged or become ubiquitous within the last decade have accelerated information production and have catalyzed profound changes in the ways scholars, students, and the general public create and engage with information.
    The production of an unprecedented volume and diversity of digital artifacts, the proliferation of machine learning (ML) technologies, and the emergence of data as the “world’s most valuable resource,” among other trends, present compelling opportunities for research libraries to contribute in new and significant ways to the research and learning enterprise. Librarians are all too familiar with predictions of the research library’s demise in an era when researchers have so much information at their fingertips. A growing body of evidence provides a resounding counterpoint: the skills, experience, and values of librarians, and the persistence of libraries as an institution, will become more important than ever as researchers contend with the data deluge and the ephemerality and fragility of much digital content.
    This report identifies strategic opportunities for research libraries to adopt and engage with emerging technologies, with a roughly five-year time horizon. It considers the ways in which research library values and professional expertise inform and shape this engagement, the ways library and library worker roles will be reconceptualized, and the implications of a range of technologies for how the library fulfills its mission. The report builds on a literature review covering the last five years of published scholarship, primarily North American information science literature, and interviews with a dozen library field experts, completed in fall 2019. It begins with a discussion of four cross-cutting opportunities that permeate many or all aspects of research library services. Next, specific opportunities are identified in each of five core research library service areas: facilitating information discovery, stewarding the scholarly and cultural record, advancing digital scholarship, furthering student learning and success, and creating learning and collaboration spaces. Each section identifies key technologies shaping user behaviors and library services, and highlights exemplary initiatives.
    Underlying much of the discussion in this report is the idea that “digital transformation is increasingly about change management”: adoption of or engagement with emerging technologies must be part of a broader strategy for organizational change, for “moving emerging work from the periphery to the core,” and part of a broader shift in conceptualizing the research library and its services. Above all, libraries are benefitting from the ways in which emerging technologies offer opportunities to center users and move from a centralized and often siloed service model to embedded, collaborative engagement with the research and learning enterprise.

    TinyML: Tools, Applications, Challenges, and Future Research Directions

    In recent years, Artificial Intelligence (AI) and Machine Learning (ML) have gained significant interest from both industry and academia. Notably, conventional ML techniques require enormous amounts of power to meet the desired accuracy, which has limited their use mainly to high-capability devices such as network nodes. However, with many advancements in technologies such as the Internet of Things (IoT) and edge computing, it is desirable to incorporate ML techniques into resource-constrained embedded devices for distributed and ubiquitous intelligence. This has motivated the emergence of the TinyML paradigm, an embedded ML technique that enables ML applications on multiple cheap, resource- and power-constrained devices. However, during this transition towards appropriate implementation of the TinyML technology, multiple challenges such as processing capacity optimization, improved reliability, and maintenance of learning models' accuracy require timely solutions. In this article, various avenues available for TinyML implementation are reviewed. Firstly, a background of TinyML is provided, followed by detailed discussions on various tools supporting TinyML. Then, state-of-the-art applications of TinyML using advanced technologies are detailed. Lastly, various research challenges and future directions are identified.
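    As a concrete illustration of the deployment pattern the abstract describes, the sketch below runs inference with a quantized model through the tflite-runtime interpreter, the desktop sibling of TensorFlow Lite for Microcontrollers (one widely used TinyML stack). The model file name (model.tflite) and the zeroed input are placeholders, not anything taken from the article.

```python
# Minimal TinyML-style inference loop using the tflite-runtime package.
# "model.tflite" is a hypothetical quantized model file.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

# Fabricate one input of the right shape and dtype; on a real device
# this would come from a sensor. int8 I/O is typical after quantization.
sample = np.zeros(input_info["shape"], dtype=input_info["dtype"])

interpreter.set_tensor(input_info["index"], sample)
interpreter.invoke()
prediction = interpreter.get_tensor(output_info["index"])
print(prediction)
```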

    The Internet of Things (IoT) in Disaster Response

    Disaster management is a complex practice that relies on access to and the usability of critical information to develop strategies for effective decision-making. The emergence of wearable internet of things (IoT) technology has attracted the interest of several major industries, making it one of the fastest-growing technologies to date. This thesis asks: how can disaster management incorporate wearable IoT technology in operations and decision-making practices in disaster response? How IoT is applied in other prominent industries, including construction, manufacturing and distribution, the Department of Defense, and public safety, provides a basis for furthering its application to challenges affecting agency coordination. The critical needs of disaster intelligence in the context of hurricanes, structural collapses, and wildfires are scrutinized to identify gaps that wearable technology could address in terms of information-sharing in multi-agency coordination and the decision-making practices that routinely occur in disaster response. Lastly, the specifics of wearable technology from the perspective of the private consumer and commercial industry illustrate its potential to improve disaster response, while also acknowledging certain limitations, including technical capabilities and information privacy and security.
    Civilian, Virginia Beach Fire Department / FEMA - USAR VATF-2
    Approved for public release. Distribution is unlimited.

    Dropping Dropbox in your Law Practice to Maintain your Duty of Confidentiality


    Toward Business Integrity Modeling and Analysis Framework for Risk Measurement and Analysis

    Financialization has contributed to economic growth but has also caused scandals, mis-selling, rogue trading, tax evasion, and market speculation. To a certain extent, it has also created problems of social and economic instability. It is an important aspect of Enterprise Security, Privacy, and Risk (ESPR), particularly in risk research and analysis. In order to minimize the damaging impacts caused by the lack of regulatory compliance, governance, ethical responsibility, and trust, we propose a Business Integrity Modeling and Analysis (BIMA) framework to unify business integrity with performance using big data predictive analytics and business intelligence. Comprehensive services include modeling risk and asset prices and, consequently, aligning them with business strategies, making our services, according to market trend analysis, both transparent and fair. The BIMA framework uses Monte Carlo simulation, the Black–Scholes–Merton model, and the Heston model to perform financial, operational, and liquidity risk analysis, and presents outputs in the form of analytics and visualization. Our results and analysis demonstrate supplier bankruptcy modeling, risk pricing, high-frequency pricing simulations, London Interbank Offered Rate (LIBOR) rate simulation, and speculation detection, providing a variety of critical risk analyses. Our approaches to tackling problems caused by financial services and operational risk clearly demonstrate that the BIMA framework, as the output of our data analytics research, can effectively combine integrity and risk analysis with overall business performance and can contribute to operational risk research.
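    To make the quantitative machinery concrete: the abstract names Monte Carlo simulation and the Black–Scholes–Merton model, and the minimal Python sketch below prices a European call both ways, by the closed-form formula and by simulating terminal prices under geometric Brownian motion. The parameter values are invented for illustration; the paper's own models (including Heston) are richer than this.

```python
# Illustrative only: price a European call with the closed-form
# Black-Scholes-Merton formula and with plain Monte Carlo under GBM.
import math
import numpy as np

S, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0  # assumed inputs

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bsm_call(S, K, r, sigma, T):
    """Closed-form Black-Scholes-Merton price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def mc_call(S, K, r, sigma, T, n=1_000_000, seed=0):
    """Monte Carlo price: simulate S_T under GBM, discount the mean payoff."""
    z = np.random.default_rng(seed).standard_normal(n)
    S_T = S * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    return math.exp(-r * T) * np.maximum(S_T - K, 0.0).mean()

print(f"closed form: {bsm_call(S, K, r, sigma, T):.4f}")
print(f"Monte Carlo: {mc_call(S, K, r, sigma, T):.4f}")
```

    The two prices should agree to a few decimal places, which is the usual sanity check before applying the same simulation machinery to quantities that lack a closed form.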

    Personal Jurisdiction and Choice of Law in the Cloud

    Cloud computing has revolutionized how society interacts with, and via, technology. Though some early detractors criticized the cloud as being nothing more than an empty industry buzzword, we contend that by dovetailing communications and calculating processes for the first time in history, cloud computing is, both practically and legally, a shift in prevailing paradigms. As a practical matter, the cloud brings with it a previously undreamt-of sense of location independence for both suppliers and consumers. And legally, the shift toward deploying computing ability as a service, rather than as a product, represents an evolution to a contractual foundation for interacting. Already, substantive cloud-based disputes have erupted in a variety of legal fields, including personal privacy, intellectual property, and antitrust, to name a few. Yet before courts can confront such issues, they must first address the two fundamental procedural questions of a lawsuit that form the bases of this Article: whether any law applies in the cloud and, if so, which law ought to apply. Drawing upon novel analyses of analogous Internet jurisprudence, as well as concepts borrowed from disciplines ranging from economics to anthropology, this Article seeks to supply answers to these questions. To do so, we first identify a set of normative goals that jurisdictional and choice-of-law methodologies ought to achieve in the unique context of cloud computing. With these goals in mind, we then lay out structured analytical guidelines and suggested policy reforms to guide the continued development of jurisdiction and choice of law in the cloud.

    SoK: Cryptographically Protected Database Search

    Protected database search systems cryptographically isolate the roles of reading from, writing to, and administering the database. This separation limits unnecessary administrator access and protects data in the case of system breaches. Since protected search was introduced in 2000, the area has grown rapidly; systems are offered by academia, start-ups, and established companies. However, there is no best protected search system or set of techniques. The design of such systems is a balancing act between security, functionality, performance, and usability. This challenge is made more difficult by ongoing database specialization, as some users will want the functionality of SQL, NoSQL, or NewSQL databases. This database evolution will continue, and the protected search community should be able to quickly provide functionality consistent with newly invented databases. At the same time, the community must accurately and clearly characterize the tradeoffs between different approaches. To address these challenges, we provide the following contributions: 1) An identification of the important primitive operations across database paradigms. We find there is a small number of base operations that can be used and combined to support a large number of database paradigms. 2) An evaluation of the current state of protected search systems in implementing these base operations. This evaluation describes the main approaches and tradeoffs for each base operation. Furthermore, it puts protected search in the context of unprotected search, identifying key gaps in functionality. 3) An analysis of attacks against protected search for different base queries. 4) A roadmap and tools for transforming a protected search system into a protected database, including an open-source performance evaluation platform and initial user opinions of protected search.
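    To ground the notion of a "base operation": the simplest protected search primitive is equality search over keywords. The Python sketch below, illustrative only and not any surveyed system's design, indexes documents under deterministic HMAC tokens so the server can answer queries without seeing plaintext keywords; its determinism also exhibits the kind of leakage (query repetition) that the attack analysis mentioned in the abstract concerns.

```python
# Toy protected equality search: the server stores and matches only
# HMAC tokens, never plaintext keywords. Deterministic tokens leak
# query repetition; real systems trade this off against performance.
import hashlib
import hmac

KEY = b"client-secret-key"  # held by the data owner, never the server

def token(keyword: str) -> bytes:
    """Deterministic search token for one keyword."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

# Client-side indexing: map each keyword token to matching record ids.
index: dict[bytes, list[int]] = {}
for doc_id, words in {1: ["cloud", "privacy"], 2: ["cloud", "tape"]}.items():
    for w in words:
        index.setdefault(token(w), []).append(doc_id)

# Server-side lookup: the server only ever sees the token.
print(index.get(token("cloud"), []))  # [1, 2]
```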