3,712 research outputs found

    Confidential Boosting with Random Linear Classifiers for Outsourced User-generated Data

    Full text link
    User-generated data is crucial to predictive modeling in many applications. With a web/mobile/wearable interface, a data owner can continuously record data generated by distributed users and build various predictive models from the data to improve their operations, services, and revenue. Due to the large size and evolving nature of users' data, data owners may rely on public cloud service providers (Cloud) for storage and computation scalability. Exposing sensitive user-generated data and advanced analytic models to the Cloud raises privacy concerns. We present a confidential learning framework, SecureBoost, for data owners who want to learn predictive models from aggregated user-generated data but offload the storage and computational burden to the Cloud without having to worry about protecting the sensitive data. SecureBoost allows users to submit encrypted or randomly masked data to the designated Cloud directly. Our framework utilizes random linear classifiers (RLCs) as the base classifiers in the boosting framework to dramatically simplify the design of the proposed confidential boosting protocols, yet still preserve the model quality. A Cryptographic Service Provider (CSP) is used to assist the Cloud's processing, reducing the complexity of the protocol constructions. We present two constructions of SecureBoost: HE+GC and SecSh+GC, using combinations of homomorphic encryption, garbled circuits, and random masking to achieve both security and efficiency. For a boosted model, the Cloud learns only the RLCs and the CSP learns only the weights of the RLCs. Finally, the data owner collects the two parts to get the complete model. We conduct extensive experiments to understand the quality of the RLC-based boosting and the cost distribution of the constructions. Our results show that SecureBoost can efficiently learn high-quality boosting models from protected user-generated data.
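
    As a rough illustration of the learning side only (not the confidential HE/GC protocols), the sketch below shows AdaBoost-style boosting with randomly sampled linear classifiers as base learners, in plaintext. All function names, parameters, and the candidate-selection heuristic are assumptions for illustration, not the authors' implementation.

```python
# Plaintext sketch: AdaBoost-style boosting with random linear classifiers (RLCs)
# as base learners. Illustrates the model-quality idea only; the confidential
# protocols (HE, garbled circuits, random masking) are not modeled here.
import numpy as np

def random_linear_classifier(X, rng):
    """Sample a random hyperplane; returns a +/-1 predictor."""
    w = rng.normal(size=X.shape[1])
    b = rng.normal()
    return lambda Z: np.sign(Z @ w + b)

def boost_rlc(X, y, rounds=50, candidates=20, seed=0):
    """y in {-1, +1}. Returns (base classifiers, boosting weights)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    D = np.full(n, 1.0 / n)               # per-sample boosting weights
    clfs, alphas = [], []
    for _ in range(rounds):
        # Pick the best of several random hyperplanes under the current weights.
        best, best_err = None, np.inf
        for _ in range(candidates):
            h = random_linear_classifier(X, rng)
            err = np.sum(D * (h(X) != y))
            if err < best_err:
                best, best_err = h, err
        best_err = np.clip(best_err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - best_err) / best_err)
        D *= np.exp(-alpha * y * best(X))  # re-weight misclassified samples
        D /= D.sum()
        clfs.append(best)
        alphas.append(alpha)
    return clfs, alphas

def predict(clfs, alphas, X):
    return np.sign(sum(a * h(X) for h, a in zip(clfs, alphas)))
```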

    A comprehensive meta-analysis of cryptographic security mechanisms for cloud computing

    Get PDF
    The concept of cloud computing offers measurable computational and information resources as a service over the Internet. The major motivation behind the cloud setup is economic benefit, because it promises a reduction in operational and infrastructural expenditure. To make this a reality, several impediments and hurdles must be tackled, the most profound of which are security, privacy, and reliability issues. As user data is handed over to the cloud, it leaves the protection sphere of the data owner, which raises partly new security and privacy concerns. This work focuses on these issues across the various cloud service and deployment models by spotlighting their major challenges. While classical cryptography is an ancient discipline, modern cryptography, developed mostly over the last few decades, provides the mechanisms that must be deployed to ensure strong security and privacy in today's real-world scenarios. The technological solutions and the short- and long-term research goals of cloud security are described and addressed using classical as well as modern cryptographic mechanisms. This work explores new directions in cloud computing security while highlighting the correct selection of these fundamental technologies from a cryptographic point of view.
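
    As a minimal sketch of one baseline mechanism such surveys cover, the snippet below shows data-owner-side (client-side) authenticated encryption before upload, so the provider stores only ciphertext. It assumes the third-party Python `cryptography` package; the helper names are illustrative.

```python
# Client-side encryption before cloud upload: confidentiality does not depend on
# the provider, since only ciphertext leaves the data owner. Uses AES-256-GCM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_cloud(key: bytes, plaintext: bytes, object_id: bytes) -> bytes:
    """Authenticated encryption; the object id is bound as associated data."""
    nonce = os.urandom(12)                    # 96-bit nonce, never reused per key
    ct = AESGCM(key).encrypt(nonce, plaintext, object_id)
    return nonce + ct                         # store the nonce with the ciphertext

def decrypt_from_cloud(key: bytes, blob: bytes, object_id: bytes) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, object_id)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)  # kept by the data owner, never uploaded
    blob = encrypt_for_cloud(key, b"sensitive user record", b"object-0001")
    assert decrypt_from_cloud(key, blob, b"object-0001") == b"sensitive user record"
```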

    Determining Training Needs for Cloud Infrastructure Investigations using I-STRIDE

    Full text link
    As more businesses and users adopt cloud computing services, security vulnerabilities will increasingly be found and exploited. There are many technological and political challenges where the investigation of potentially criminal incidents in the cloud is concerned. Security experts, however, must still be able to acquire and analyze data in a methodical, rigorous, and forensically sound manner. This work applies the STRIDE asset-based risk assessment method to cloud computing infrastructure for the purpose of identifying and assessing an organization's ability to respond to and investigate breaches in cloud computing environments. An extension to the STRIDE risk assessment model is proposed to help organizations quickly respond to incidents while ensuring the acquisition and integrity of the largest amount of digital evidence possible. Further, the proposed model allows organizations to assess the needs and capacity of their incident responders before an incident occurs. Comment: 13 pages, 3 figures, 3 tables, 5th International Conference on Digital Forensics and Cyber Crime; Digital Forensics and Cyber Crime, pp. 223-236, 201
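
    As a hedged sketch of what an asset-based STRIDE worksheet extended with an investigation-readiness dimension might look like, the snippet below enumerates STRIDE threat categories per cloud asset and flags high-risk threats for which no evidence can currently be acquired. The fields, scoring scale, and example data are assumptions for illustration, not the authors' I-STRIDE model.

```python
# Asset-based STRIDE worksheet extended with evidence-acquisition fields,
# in the spirit of the proposed I-STRIDE extension (details assumed).
from dataclasses import dataclass, field
from enum import Enum

class Stride(Enum):
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFO_DISCLOSURE = "Information disclosure"
    DOS = "Denial of service"
    ELEVATION = "Elevation of privilege"

@dataclass
class Threat:
    category: Stride
    description: str
    risk: int                                           # assumed 1 (low) .. 5 (high)
    evidence_sources: list = field(default_factory=list)  # logs, snapshots, ...
    acquirable_by_responder: bool = False               # can our team collect them?

@dataclass
class CloudAsset:
    name: str
    threats: list

def investigation_gaps(assets):
    """High-risk threats for which no digital evidence can currently be acquired."""
    return [(a.name, t.category.value, t.description)
            for a in assets for t in a.threats
            if t.risk >= 4 and not t.acquirable_by_responder]

vm_store = CloudAsset("VM image store", [
    Threat(Stride.TAMPERING, "Modified base image", 5,
           ["hypervisor audit log"], acquirable_by_responder=False),
])
print(investigation_gaps([vm_store]))
```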

    Guidance Notes for Cloud Research Users

    No full text
    There is a rapidly increasing range of research activities that involve outsourcing computing and storage resources to public Cloud Service Providers (CSPs), who provide managed and scalable resources virtualised as a single service. For example, Amazon Elastic Compute Cloud (EC2) and Simple Storage Service (S3) are two widely adopted public cloud solutions, which aim at providing pooled computing and storage services and charge users according to their weighted resource usage. Other examples include the use of Google App Engine and Microsoft Azure as development platforms for research applications. Despite a lot of activity and publication on cloud computing, the term itself and the technologies that underpin it are still confusing to many. This note, one of the deliverables of the TeciRes project, provides guidance to researchers who are potential end users of public CSPs for research activities. The note provides researchers with information on:
    • The differences between, and relation to, current research computing models
    • The considerations that have to be taken into account before moving to cloud-aided research
    • The issues associated with cloud computing for research that are currently being investigated
    • Tips and tricks when using cloud computing
    Readers who are interested in provisioning cloud capabilities for research should also refer to our guidance notes for cloud infrastructure service providers. These guidance notes focus on technical aspects only. Readers who are interested in non-technical guidance should refer to the briefing paper produced by the "using cloud computing for research" project. A toy illustration of the weighted-resource-usage charging model is sketched below.
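
    The sketch below illustrates the "weighted resource usage" charging model mentioned above: each metered resource is billed at its own unit rate and the totals are summed. The rates are made-up placeholders, not real CSP prices.

```python
# Toy model of usage-weighted cloud billing (illustrative rates only).
RATES = {                       # assumed unit prices, not real provider pricing
    "vm_hours": 0.10,           # per instance-hour
    "storage_gb_month": 0.02,   # per GB-month stored
    "egress_gb": 0.09,          # per GB transferred out
}

def monthly_cost(usage: dict) -> float:
    """usage maps each metered resource to the quantity consumed this month."""
    return sum(RATES[r] * q for r, q in usage.items())

print(monthly_cost({"vm_hours": 720, "storage_gb_month": 500, "egress_gb": 50}))
```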

    Adaptive and Resilient Revenue Maximizing Dynamic Resource Allocation and Pricing for Cloud-Enabled IoT Systems

    Full text link
    Cloud computing is becoming an essential component of modern computer and communication systems. The available resources at the cloud, such as computing nodes, storage, and databases, are often packaged in the form of virtual machines (VMs) to be used by remotely located client applications for computational tasks. However, the cloud has a limited number of VMs available, which have to be efficiently utilized to generate higher productivity and, subsequently, maximum revenue. Client applications generate requests with computational tasks at random times with random complexity to be processed by the cloud. The cloud service provider (CSP) has to decide whether to allocate a VM to a task at hand or to wait for a higher-complexity task in the future. We propose a threshold-based mechanism to optimally decide the allocation and pricing of VMs to sequentially arriving requests in order to maximize the revenue of the CSP over a finite time horizon. Moreover, we develop an adaptive and resilient framework that can counter the effect of real-time changes in the number of available VMs at the cloud server and in the frequency and nature of arriving tasks on the revenue of the CSP. Comment: American Control Conference (ACC 2018)
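
    As a minimal sketch of a finite-horizon, threshold-based allocation policy of the kind described above, the snippet below computes accept/reject thresholds by backward induction in a deliberately simplified model: one request per slot, request revenue drawn from a known discrete distribution, and an allocated VM assumed busy for the remainder of the horizon. The model simplifications, distributions, and names are assumptions for illustration, not the paper's mechanism.

```python
# Backward induction for a threshold-based VM allocation policy (simplified model).
import numpy as np

def thresholds(T, M, values, probs):
    """values/probs: discrete revenue distribution of an arriving request.
    Returns V[t, m] (value-to-go) and thr[t, m]: at slot t with m free VMs,
    accept the request iff its offered revenue >= thr[t, m]."""
    values, probs = np.asarray(values, float), np.asarray(probs, float)
    V = np.zeros((T + 1, M + 1))            # V[T, m] = 0: no revenue after the horizon
    thr = np.zeros((T, M + 1))
    for t in range(T - 1, -1, -1):
        for m in range(M + 1):
            if m == 0:
                V[t, m] = V[t + 1, 0]
                thr[t, m] = np.inf          # nothing left to allocate
                continue
            # Opportunity cost of committing one VM now.
            c = V[t + 1, m] - V[t + 1, m - 1]
            thr[t, m] = c
            accept = values >= c
            V[t, m] = np.sum(probs * np.where(accept,
                                              values + V[t + 1, m - 1],
                                              V[t + 1, m]))
    return V, thr

# Example: 10 slots, 3 VMs, request revenue uniform on {1, 2, 3, 4}.
V, thr = thresholds(T=10, M=3, values=[1, 2, 3, 4], probs=[0.25] * 4)
print("accept-threshold at slot 0 with all VMs free:", thr[0, 3])
```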