19 research outputs found

    A Password-Based Access Control Framework for Time-Sequence Aware Media Cloudization

    Time-sequence-based outsourcing creates steadily growing demands for related access control in cloud computing. In this paper, we propose a practical password-based access control framework for such media cloudization, relying on content control based on the time-sequence attribute and designed over prime-order groups. First, the scheme supports simultaneous multi-keyword search over any monotonic boolean formula and enables the media owner to control the content encryption key for different time periods using an updatable password. Second, the scheme supports self-retrievability of the content encryption key, which makes it better suited to cloud-based media applications with massive numbers of users. We then show that the proposed scheme is provably secure in the standard model. Finally, a detailed performance evaluation shows that the proposed scheme is efficient and practical for cloud-based media applications.
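    The abstract does not spell out how the updatable password governs per-period content encryption keys; the short sketch below (plain Python, standard library only) illustrates one plausible reading of that idea under assumptions of my own. The function name, the PBKDF2/SHA-256 choices, and the period labels are illustrative and are not the paper's pairing-based construction over prime-order groups.

```python
import hashlib
import os

def derive_period_key(password: str, salt: bytes, period: str) -> bytes:
    """Illustrative only: derive a per-time-period content encryption key
    from an owner-controlled, updatable password (not the paper's scheme)."""
    # Stretch the password first (PBKDF2 from the standard library).
    master = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Bind the key to the time period so each period gets a distinct key.
    return hashlib.sha256(master + period.encode()).digest()

salt = os.urandom(16)
k_jan = derive_period_key("owner-passw0rd", salt, "2024-01")
k_feb = derive_period_key("owner-passw0rd", salt, "2024-02")
# Updating the password changes the keys derived for subsequent periods.
k_feb_new = derive_period_key("new-passw0rd", salt, "2024-02")
assert k_jan != k_feb and k_feb != k_feb_new
```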

    Data Sharing and Access Using Aggregate Key Concept

    Cloud storage keeps data online in the cloud, where it is accessible from multiple, connected resources. Cloud storage can provide high availability and reliability, strong protection, disaster recovery, and reduced cost. An essential capability of cloud storage is sharing data with others securely, efficiently, and flexibly. Data privacy is essential in the cloud to ensure that the user’s identity is not leaked to unauthorized persons. Using the cloud, anyone can share and store as much data as they want. Cryptography is very useful for sharing data in a secure way: by using different encryption techniques, a user can store data in the cloud, with encryption and decryption keys created for the specific data the user provides, and only a particular set of decryption keys is shared so that the data can be decrypted. A public-key encryption system called a key-aggregate cryptosystem (KAC) is presented. This system produces constant-size ciphertexts. Any set of secret keys can be aggregated into a single key that has the same power as the keys it combines. This aggregate key can then be sent to others for decryption of a chosen ciphertext set, while the remaining encrypted documents outside the set stay private. The project presented in this paper is an implementation of the proposed system.
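    As a reading aid, here is a toy Python sketch of the delegation workflow that a key-aggregate cryptosystem enables: encrypt each document under a ciphertext class, hand a delegate one key object covering a chosen set of classes, and let decryption succeed only inside that set. The ToyKAC class, its method names, and the use of the cryptography package's Fernet are illustrative choices of mine; unlike the real KAC, this toy does not achieve constant-size aggregate keys (that property comes from the pairing-based construction).

```python
# Toy illustration of the key-aggregate workflow; mirrors the API shape but
# NOT the constant-size key property of the real KAC construction.
from cryptography.fernet import Fernet  # pip install cryptography

class ToyKAC:
    def __init__(self, num_classes: int):
        # One symmetric key per ciphertext class, held by the data owner.
        self._class_keys = {i: Fernet.generate_key() for i in range(num_classes)}

    def encrypt(self, cls: int, plaintext: bytes) -> bytes:
        return Fernet(self._class_keys[cls]).encrypt(plaintext)

    def extract(self, classes: set[int]) -> dict[int, bytes]:
        # "Aggregate key" for a chosen subset; here it is simply the subset's
        # keys, whereas real KAC compresses this into one constant-size element.
        return {c: self._class_keys[c] for c in classes}

def decrypt(agg_key: dict[int, bytes], cls: int, ciphertext: bytes) -> bytes:
    if cls not in agg_key:
        raise PermissionError("class not covered by this aggregate key")
    return Fernet(agg_key[cls]).decrypt(ciphertext)

owner = ToyKAC(num_classes=4)
ct = owner.encrypt(2, b"quarterly report")
delegated = owner.extract({1, 2})   # share only documents of classes 1 and 2
print(decrypt(delegated, 2, ct))    # succeeds; class 3 documents stay private
```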

    Health Participatory Sensing Networks for Mobile Device Public Health Data Collection and Intervention

    The pervasive availability and increasingly sophisticated functionality of smartphones and their connected external sensors or wearable devices can provide new data collection capabilities relevant to public health. Current research and commercial efforts have concentrated on sensor-based collection of health data for personal fitness and personal healthcare feedback purposes. However, to date there has not been a detailed investigation of how such smartphones and sensors can be utilized for public health data collection. Unlike most sensing applications, public health does not require the capture of comprehensive and detailed data, as aggregate data alone is in many cases sufficient for public health purposes. As such, public health data can be captured without infringing privacy, because the detailed individual data that might allow re-identification is not needed; only aggregate, de-identified and non-unique data about an individual is required. This type of public health data collection poses the challenge of being flexible enough to answer a range of public health queries while ensuring that the level of detail returned preserves privacy. Additionally, distributing public health data collection requests and other information to participants without identifying individuals is a core requirement. A further requirement for health participatory sensing networks is the ability to perform public health interventions; as with data collection, these need to be carried out in a non-identifying and privacy-preserving manner. This thesis proposes a solution to these challenges in which a form of query assurance provides private and secure distribution of data collection requests and public health interventions to participants, while an additional privacy-preserving threshold approach to local processing of data prior to submission provides re-identification protection for the participant. The evaluation finds that, with manageable overheads, minimal reduction in the detail of collected data, and strict communication privacy, privacy and anonymity can be preserved. This is significant for the field of participatory health sensing, as a major concern of participants is most often the real or perceived privacy risk of contributing.
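    The thesis' threshold approach to local processing is not detailed in the abstract; the Python sketch below shows one common form such client-side pre-processing can take, coarsening a reading into a bucket and suppressing answers that would be too fine-grained. The Query fields and the bucketing rule are assumptions made purely for illustration.

```python
# Illustrative sketch (not the thesis' exact algorithm): a participant locally
# coarsens a reading and suppresses it if the requested detail is too identifying.
from dataclasses import dataclass

@dataclass
class Query:
    metric: str            # e.g. "resting_heart_rate"
    bucket_width: int      # granularity of the answer the collector asks for
    min_bucket_width: int  # coarsest detail this participant will accept

def local_threshold_response(query: Query, raw_value: float):
    # Refuse queries that demand finer detail than the participant allows.
    if query.bucket_width < query.min_bucket_width:
        return None        # suppress the response entirely
    # Otherwise return only the bucket the value falls in, never the raw value.
    lower = int(raw_value // query.bucket_width) * query.bucket_width
    return (lower, lower + query.bucket_width)

q = Query(metric="resting_heart_rate", bucket_width=10, min_bucket_width=5)
print(local_threshold_response(q, 67.0))   # -> (60, 70), aggregate-friendly
```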

    Efficient and secure document similarity search cloud utilizing MapReduce

    Document similarity has important real-life applications such as finding duplicate web sites and identifying plagiarism. While basic techniques such as k-similarity algorithms have long been known, the overwhelming amounts of data being collected, as in big data settings, call for novel algorithms that find highly similar documents in a reasonably short amount of time. In particular, pairwise comparison of documents sharing a common feature necessitates prohibitively high storage and computation power. The widespread availability of cloud computing provides users with easy access to high storage and processing power. Furthermore, outsourcing their data to the cloud guarantees reliability and availability, while privacy and security concerns are not always properly addressed. This leads to the problem of protecting the privacy of sensitive data against adversaries, including the cloud operator. Traditional document similarity algorithms tend to compare all documents in a data set that share the same terms (words) with the query document. In our work, we propose a new filtering technique that works on plaintext data and decreases the number of comparisons between the query set and the search set needed to find highly similar documents. The technique, referred to as the ZOLIP algorithm, is efficient and scalable, but does not provide security. We also design and implement three secure similarity search algorithms for text documents, namely Secure Sketch Search, Secure Minhash Search and Secure ZOLIP. The first algorithm utilizes locality-sensitive hashing techniques and cosine similarity. The second algorithm uses the Minhash algorithm, and the last one uses the encrypted ZOLIP signature, the secure version of the ZOLIP algorithm. We utilize the Hadoop distributed file system and the MapReduce parallel programming model to scale our techniques to the big data setting. Our experimental results on real data show that some of the proposed methods perform better than previous work in the literature in terms of the number of joins, and therefore speed.
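    For readers unfamiliar with MinHash, the following plain, single-machine Python sketch shows the similarity estimation that the secure, MapReduce-scaled algorithms above build on: documents are reduced to shingle sets, compressed into MinHash signatures, and the fraction of matching signature positions estimates their Jaccard similarity. The shingle size, the number of hash functions, and the use of SHA-1 as the hash family are illustrative choices, not parameters from the thesis.

```python
import hashlib

def shingles(text: str, k: int = 3) -> set[str]:
    # Word-level k-shingles of the document.
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def minhash_signature(items: set[str], num_hashes: int = 128) -> list[int]:
    # One seeded hash per signature position; keep the minimum value seen.
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int(hashlib.sha1(f"{seed}:{x}".encode()).hexdigest(), 16)
            for x in items))
    return sig

def estimated_jaccard(sig_a: list[int], sig_b: list[int]) -> float:
    matches = sum(a == b for a, b in zip(sig_a, sig_b))
    return matches / len(sig_a)

doc1 = "the quick brown fox jumps over the lazy dog near the river bank"
doc2 = "the quick brown fox jumps over the lazy cat near the river bank"
s1, s2 = minhash_signature(shingles(doc1)), minhash_signature(shingles(doc2))
print(round(estimated_jaccard(s1, s2), 2))  # approximates the true Jaccard similarity
```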

    Tunable Security for Deployable Data Outsourcing

    Security mechanisms like encryption negatively affect other software quality characteristics like efficiency. To cope with such trade-offs, it is preferable to build approaches that allow the trade-offs to be tuned after the design and implementation phases. This book introduces a methodology that can be used to build such tunable approaches and shows how it can be applied in the domains of database outsourcing, identity management, and credential management.

    Smurf: A reliable method for contextualising social media artefacts

    This research aims to evaluate whether artefacts other than the content of user communication on social media can be used to attribute actions or relationships to a user. Social media has enhanced the way users communicate on the Internet, providing the means for users to share content in real time and to establish connections and social relationships with like-minded individuals. However, as with all technology, social media can be leveraged for disagreeable and/or unlawful activities such as cyberbullying, trolling, grooming, or luring. There are reported cases where evidence from social media was used to secure convictions, for example the tragic cases of Ashleigh Hall in 2009 and Kayleigh Haywood in 2015. The social media evidence, e.g. the messages sent to the victim to arrange a meeting, was used to link the suspect to the victim and to attribute actions to the suspect, in addition to other physical evidence presented as part of the case. Investigations with elements of social media are growing within digital forensics. This reinforces the need for a technique that can be used to make inferences about user actions and relationships, especially during a live triage investigation, where the information needs to be obtained as quickly as possible. This research evaluates the use of live triage in the investigation of social media interactions, in order to determine the reliability of such a technique as a means of contextualising user activity and attributing relationships or actions to a user. It also evaluates the reliability of artefacts other than the actual content exchanged on social media, in the event that the content of communication is not immediately accessible or available to the investigator. To achieve this, it was important to break down the events that occur before, during and after user activity on social media, followed by a determination of what constitutes communication content in the context of this research. This research makes the following contributions: it establishes a method for the categorisation of social media artefacts based on perceived user activity; it characterises communication content, thus highlighting evidential data of interest from user social media activity; it proposes criteria for assessing the reliability of social media artefacts in a live triage investigation; and it develops a novel framework for social media investigation, with a Proof of Concept (PoC) to test its viability. The PoC demonstrates that it is possible to attribute actions or relationships to a user using artefacts other than the actual content exchanged on social media.

    Tackling the Challenges of Information Security Incident Reporting: A Decentralized Approach

    Information security incident under-reporting is unambiguously a business problem, as identified by a variety of sources such as ENISA (2012), Symantec (2016), Newman (2018) and others. This research project identified the underlying issues that cause this problem and proposed a solution, in the form of an innovative artefact, which confronts a number of these issues. The project was conducted according to the requirements of the Design Science Research Methodology (DSRM) by Peffers et al. (2007). The research question set at the beginning of the project probed the feasibility of forming an incident reporting solution that would increase users' motivation to report incidents by utilizing the positive features offered by existing solutions on the one hand, while providing added value to users on the other. The comprehensive literature review chapter set the stage and identified the reasons for incident under-reporting, while also evaluating the existing solutions and determining their advantages and disadvantages. The objectives of the proposed artefact were then set, and the artefact was designed and developed. The output of this development endeavour is "IRDA", the first decentralized incident reporting application (DApp), built on "Quorum", a permissioned blockchain implementation of Ethereum. Its effectiveness was demonstrated when six organizations agreed to use the developed artefact and performed a series of pre-defined actions in order to confirm the platform's intended functionality. The platform was also evaluated using Venable et al.'s (2012) evaluation framework for DSR projects. This research project contributes to knowledge in various ways. It investigates blockchain and incident reporting, two domains which have not been extensively examined and for which the available literature is rather limited. Furthermore, it identifies, compares, and evaluates the conventional reporting platforms available to date. In line with previous findings (e.g. Humphrey, 2017), it also confirms the lack of standard taxonomies for information security incidents. This work also contributes by creating a functional, practical artefact in the blockchain domain, a domain where, according to Taylor et al. (2019), most studies are either experimental proposals or theoretical concepts with limited practicality in solving real-world problems. Through the evaluation activity, and by conducting a series of non-parametric significance tests, it also suggests that IRDA can potentially increase users' motivation to report incidents. This thesis describes an original attempt at utilizing the newly emergent blockchain technology, and its inherent characteristics, to address the concerns which actively contribute to the business problem. To the best of the researcher's knowledge, there is currently no other solution offering similar benefits to users and organizations for incident reporting purposes. Through the accomplishment of the project's pre-set objectives, the developed artefact provides a positive answer to the research question. The artefact, featuring increased anonymity, availability, immutability and transparency, as well as an overall lower cost, has the potential to increase organizations' motivation to report incidents, thus improving the currently dismaying statistics of incident under-reporting.
The structure of this document follows the flow of activities described in the DSRM by Peffers et al. (2007), while also borrowing some elements from the nominal structure of an empirical research process, including the literature review chapter, the description of the selected research methodology, and the "discussion and conclusion" chapter.
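    IRDA itself is a DApp on Quorum, and its contracts are not reproduced in the abstract; the standalone Python sketch below only illustrates the tamper-evidence property, an append-only, hash-linked record of reports, that motivates putting incident reports on a blockchain. All names and fields here are hypothetical and are not the IRDA implementation.

```python
# Standalone illustration of tamper evidence via a hash-linked report log.
import hashlib, json, time

class IncidentLog:
    def __init__(self):
        self.entries = []   # each entry links to the hash of the previous one

    def report(self, reporter_pseudonym: str, description: str) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"reporter": reporter_pseudonym, "description": description,
                "timestamp": time.time(), "prev_hash": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps({k: v for k, v in body.items() if k != "hash"},
                       sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body["hash"]

    def verify(self) -> bool:
        # Any retroactive edit breaks the hash chain from that point onwards.
        prev = "0" * 64
        for e in self.entries:
            recomputed = hashlib.sha256(
                json.dumps({k: v for k, v in e.items() if k != "hash"},
                           sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != recomputed:
                return False
            prev = e["hash"]
        return True

log = IncidentLog()
log.report("org-17", "phishing e-mail led to credential disclosure")
print(log.verify())   # True; tampering with an earlier entry would make this False
```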

    Nature-inspired survivability: Prey-inspired survivability countermeasures for cloud computing security challenges

    As cloud computing environments become more complex, adversaries have become highly sophisticated and unpredictable; they can easily increase attack power and persist longer before detection. Uncertain malicious actions, latent risks, and unobserved or unobservable risks (UUURs) characterise this new threat domain. This thesis proposes prey-inspired survivability to address the unpredictable security challenges borne of UUURs. While survivability is a well-studied phenomenon in non-extinct prey animals, applying prey survivability directly to cloud computing is challenging due to contradicting end goals, and managing evolving survivability goals and requirements under contradicting environmental conditions adds to the challenge. To address these challenges, this thesis proposes a holistic taxonomy which integrates multiple, disparate perspectives on cloud security challenges. In addition, it applies TRIZ (Teoriya Resheniya Izobretatelskikh Zadach, the theory of inventive problem solving) to derive prey-inspired solutions by resolving contradictions. First, it develops a three-step process to facilitate the inter-domain transfer of concepts from nature to the cloud, with TRIZ's generic approach suggesting specific solutions for cloud computing survivability. The thesis then presents the conceptual prey-inspired cloud computing survivability framework (Pi-CCSF), built upon the TRIZ-derived solutions. The framework's run-time is pushed to user space to support evolving survivability design goals. Furthermore, a target-based decision-making technique (TBDM) is proposed to manage survivability decisions. To evaluate the prey-inspired survivability concept, a Pi-CCSF simulator is developed and implemented. Evaluation results show that escalating survivability actions improves the vitality of vulnerable and compromised virtual machines (VMs) by 5% and dramatically improves their overall survivability. Hypothesis testing supports the hypothesis that the escalation mechanisms can be applied to enhance the survivability of cloud computing systems. Numeric analysis of TBDM shows that, by considering survivability preferences and attitudes (which directly impact survivability actions), the TBDM method brings unpredictable survivability information closer to decision processes. This enables efficient execution of variable escalating survivability actions, allowing Pi-CCSF's decision system (DS) to focus on decisions that achieve survivability outcomes under the unpredictability imposed by UUURs.
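    Neither the TBDM technique nor the escalation mechanism is specified in the abstract; purely as an illustration of escalating survivability actions as a VM's vitality drops, weighted by a survivability attitude, one might sketch something like the following. The action list, the pressure formula, and the parameter names are assumptions of mine, not the Pi-CCSF design.

```python
# Illustrative only: pick a stronger survivability action as vitality falls,
# with a risk-aversion weight shifting escalation earlier.
ACTIONS = ["monitor", "isolate", "migrate", "restore_from_snapshot"]  # escalating order

def select_action(vitality: float, risk_aversion: float) -> str:
    """vitality in [0, 1] (1 = healthy); risk_aversion in [0, 1] biases the
    decision towards stronger actions at higher vitality levels."""
    pressure = (1.0 - vitality) * (0.5 + 0.5 * risk_aversion)
    index = min(int(pressure * len(ACTIONS)), len(ACTIONS) - 1)
    return ACTIONS[index]

print(select_action(vitality=0.9, risk_aversion=0.3))  # healthy VM -> "monitor"
print(select_action(vitality=0.2, risk_aversion=0.8))  # compromised VM -> escalates
```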

    Information support for a user-centred product development organisation – Reflections based on the implementation and evaluation of the "Knowledge Storage" information management environment

    User-centred development of interactive systems and devices has gained increasing importance in product development organisations. So far, the answer from the usability engineering community has been to offer different types of methods that can be applied at different stages of the development process. As the number of applicable methods increases and the stakeholder population utilising these methods broadens, support for the management of this usability engineering work becomes important. This includes considerations about the arrangement of the organisation performing the development work, as well as the tools and methods that support it. The objective of this study is the modelling and construction of information support for product development that takes user-centred issues into consideration. The main research question of the study is "What kind of information management system can provide support for user-oriented product development?" Boundaries for the main research question are set by the focus areas of the work, which highlight insights from the organisational standpoint as well as from the standpoint of methods and tools. The methodological standpoint has a longer tradition in the HCI field than the organisational one. In the literature review part of this thesis, methods in rather widespread use are introduced in order to identify the characteristics and requirements they impose on the surrounding information support systems and the hosting organisation. The organisational standpoint is studied and reflected in three empirical studies that illustrate the contemporary arrangement and organisation of product development; product development is also addressed in the literature review. Building on the findings, a framework of the characteristics of a hosting organisation is presented. The framework consists of five levels: organisational orientation (values, attitude), life-cycle considerations (business/process/product), generic development support (methods and tools), quality instructions (organisation-specific adjustments of the generic level), and information support (operative level). This framework establishes the position of an information support system in its organisational surroundings. After this 'positioning', the more detailed modelling, design and implementation of the supporting information support system, the "Knowledge Storage", is presented. The results from the construction and evaluation of the Knowledge Storage point out needs for information support applications (developer community, roles, awareness, contribution evaluation). The results also reveal difficulties in attaching these kinds of applications to real development projects and activities (migration of the existing knowledge base, a 'suitable' project, application integration, implementation of baseline functionality vs. value-added features).
The importance of user-centredness as a design perspective in product development has grown significantly over the past decade. The offering of the research community emphasising user-centred design to practical design work has traditionally been a set of design methods that can be applied at different stages of design. The number of methods, as well as the number of design parties applying them, has consequently grown. This has created a situation in which the application of user-centred methods, and the information gathered with them, needs to be managed by increasingly systematic means. The objective of this study is the modelling, development and evaluation of an information support environment suitable for supporting user-centred product development. The main research question of the study is: "What kind of information management arrangement can provide support for user-oriented product development?" The refinements of the main question raise two perspectives: the organisational and the instrumental. The method-oriented perspective has a stronger tradition in the field of human-computer interaction research than the organisation-oriented one. The literature review therefore examines commonly applied user-centred design methods, looking for the characteristics and requirements that their application places on the surrounding organisation. The organisational perspective is examined in the literature review through the general characteristics of product development activity and, in addition, through three empirical studies of contemporary product development. Based on the results of these studies, a framework is developed for examining the preconditions of an organisation to apply user-centred design and the related information support. The framework identifies five elements: organisational orientation (values, attitude), life-cycle considerations (business/processes/products), generic-level tooling of the activity (methods), quality instructions (organisation-specific application of the methods), and information support at the operative level of the activity. The framework thus places information support solutions in a wider organisational environment. After presenting the framework, the development of the "Knowledge Storage" information support solution is described. The results of this development work indicate requirements for information support solutions operating within the organisational framework (community, division of roles, awareness, feedback on individual contributions). The results also highlight significant challenges and difficulties in trialling and evaluating such solutions in connection with real practical projects (utilisation of the existing knowledge base, a project of a suitable nature, the requirement for baseline functionality vs. new value-adding functionality).