
    Towards a secure and efficient search over encrypted cloud data

    Includes bibliographical references. 2016 Summer. Cloud computing enables new types of services in which computational and network resources are available online through the Internet. One of the most popular of these services is data outsourcing. For reasons of cost and convenience, public as well as private organizations can now outsource large amounts of data to the cloud and enjoy the benefits of remote storage and management. At the same time, the confidentiality of data stored remotely on an untrusted cloud server is a major concern. To reduce these concerns, sensitive data such as personal health records, emails, and income tax and financial reports are usually outsourced in encrypted form using well-known cryptographic techniques. Although encrypted storage protects remote data from unauthorized access, it complicates basic yet essential data utilization services such as plaintext keyword search. The simple solution of downloading the data, decrypting it, and searching locally is clearly inefficient: storing data in the cloud is of little value unless it can be easily searched and utilized. Thus, cloud services should enable efficient search over encrypted data to provide the benefits of a first-class cloud computing environment. This dissertation is concerned with developing novel searchable encryption techniques that allow the cloud server to perform multi-keyword ranked search as well as substring search incorporating position information. We present our results in this area, including a comprehensive evaluation of existing solutions and searchable encryption schemes for ranked search and substring position search.
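
    To make the core mechanism concrete, here is a minimal Python sketch of one common searchable-encryption pattern (an inverted index queried with deterministic tokens); this is illustrative only, not the dissertation's actual constructions. The client derives per-keyword trapdoors with an HMAC, so the server can match and rank documents without ever seeing plaintext keywords. The key value and the plaintext relevance scores are simplifications; a real scheme would also protect the scores.

```python
import hmac, hashlib
from collections import defaultdict

def trapdoor(key: bytes, keyword: str) -> bytes:
    """Deterministic search token; the server never sees the plaintext keyword."""
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).digest()

class EncryptedIndex:
    """Server-side index: token -> [(doc_id, score)]. Scores are kept in
    plaintext here for brevity; a real scheme would protect them too."""
    def __init__(self):
        self.postings = defaultdict(list)

    def add(self, token: bytes, doc_id: str, score: float):
        self.postings[token].append((doc_id, score))

    def ranked_search(self, tokens, top_k=5):
        """Multi-keyword ranked search: sum per-keyword scores per document."""
        totals = defaultdict(float)
        for t in tokens:
            for doc_id, score in self.postings.get(t, []):
                totals[doc_id] += score
        return sorted(totals.items(), key=lambda kv: -kv[1])[:top_k]

# Client side: build the index from term relevance, then query with trapdoors.
key = b"client-secret-key"          # hypothetical client key
index = EncryptedIndex()
index.add(trapdoor(key, "cloud"), "doc1", 0.8)
index.add(trapdoor(key, "cloud"), "doc2", 0.3)
index.add(trapdoor(key, "search"), "doc1", 0.5)
print(index.ranked_search([trapdoor(key, "cloud"), trapdoor(key, "search")]))
```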

    Top 10 technologies and their impact on CPA's


    Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement

    Volume measurement plays an important role in the production and processing of food products. Various methods based on 3D reconstruction have been proposed to measure the volume of irregularly shaped food products. However, 3D reconstruction carries a high computational cost, and some reconstruction-based volume measurement methods have low accuracy. An alternative is the Monte Carlo method, which measures volume using random points: it requires only the information of whether each random point falls inside or outside the object, and needs no 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with a heuristic adjustment. Five images of each food product were captured by five cameras and processed into binary images. Monte Carlo integration with heuristic adjustment was then performed to measure the volume from the information extracted from the binary images. The experimental results show that the proposed method achieves high accuracy and precision compared to the water displacement method, and is both more accurate and faster than the space carving method.
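
    A minimal sketch of the underlying Monte Carlo idea, independent of the paper's camera setup (the heuristic adjustment is not reproduced here): sample uniform points in a bounding box and multiply the box volume by the fraction that lands inside the object. In the paper, the inside/outside test would project each point into the five binary silhouette images rather than use an analytic predicate as below.

```python
import numpy as np

def mc_volume(is_inside, bounds, n_samples=200_000, rng=None):
    """Monte Carlo volume estimate: sample random points in a bounding box
    and scale the box volume by the fraction that falls inside the object.
    is_inside: vectorized predicate mapping (n, 3) points -> boolean array.
    bounds: (lo, hi) corners of the axis-aligned bounding box."""
    rng = rng or np.random.default_rng(0)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    pts = rng.uniform(lo, hi, size=(n_samples, 3))
    frac = is_inside(pts).mean()          # fraction of hits
    return frac * np.prod(hi - lo)        # hit fraction x box volume

# Sanity check on a unit sphere (true volume = 4/3 * pi ~ 4.18879).
sphere = lambda p: (p ** 2).sum(axis=1) <= 1.0
print(mc_volume(sphere, bounds=([-1, -1, -1], [1, 1, 1])))
```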

    US export controls on encryption technology

    Includes bibliographical references (p. 111-118). Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Political Science, 2004. This thesis seeks to explain why the U.S. government export controls on encryption technologies instituted during the 1970s remained in place until 1999, even though the widespread availability of similar products internationally had rendered the regulations largely without national security benefit by the late 1980s and early 1990s. The second part of the thesis explores the processes and reasons behind the eventual liberalization of encryption policies in 1999. Underlying the study is a values tradeoff between national security, economic interests, and civil liberties, in which the relative gains and losses to each value shift through the three decades of the study as a result of technological advances in commercial and civilian cryptography, the growing popularity of electronic communications, the rise of the computer software industry, and the end of the Cold War. The explanation rests upon a combination of political science and organization theories. Structural obstacles to adaptation within the legislative process and interest group politics help account for some of the inertia in the policy adaptation process. In particular, regulatory capture of the Presidency and critical Congressional committees by the National Security Agency helped lock in the NSA's preferred policies even after technological advancements in the commercial sector began to cut into the national security benefits resulting from export controls. Interest group politics also helps explain the rise and eventual success of the lobby for liberalization of encryption regulations. A combination of the software industry and civil liberties activists intent on preserving the right to privacy and the First Amendment allied to lobby Congress to change encryption regulations, an effort that eventually paid off in 1999. Interest group politics also factors into the actions of the national security establishment, which likewise lobbied the Presidency and Congress to maintain restrictive encryption regulations. The study uses organizational culture to explain the motivations and some of the actions of the NSA, particularly with regard to its preference for secrecy, its placement of national security above other values, and its efforts to maintain control over all cryptology, whether government or civilian. By Shirley K. Hung. S.M.

    Connected Information Management

    Society is currently inundated with more information than ever, making efficient management a necessity. Alas, most current information management suffers from several kinds of disconnectedness: applications partition data into segregated islands, small notes don't fit into traditional application categories, navigating the data is different for each kind of data, and data is either available on a certain computer or only online, but rarely both. Connected information management (CoIM) is an approach to information management that avoids these kinds of disconnectedness. The core idea of CoIM is to keep all information in a central repository, with generic means of organization such as tagging. The heterogeneity of the data is accommodated by offering specialized editors. The central repository eliminates the islands of application-specific data and is formally grounded by a CoIM model. The foundation for structured data is an RDF repository. The RDF editing meta-model (REMM) enables form-based editing of this data, similar to database applications such as MS Access. Further kinds of data are supported by extending RDF, as follows. Wiki text is stored as RDF and can both contain structured text and be combined with structured data. Files are also covered by the CoIM model but are kept externally. Notes can be quickly captured and annotated with metadata. Generic means of organization and navigation apply to all kinds of data. Ubiquitous availability of data is ensured via two CoIM implementations, the web application HYENA/Web and the desktop application HYENA/Eclipse; all data can be synchronized between them. The applications were used to validate the CoIM ideas.
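
    To illustrate the central-repository idea, a small sketch using the rdflib Python library: notes and file metadata become triples in one RDF graph, so a single tag query spans every kind of data. The EX namespace and its property names are hypothetical stand-ins, not REMM's or HYENA's actual vocabulary.

```python
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/coim/")   # hypothetical vocabulary
g = Graph()  # the central repository: every kind of item becomes RDF triples

# A quickly captured note, annotated with tags as generic metadata.
note = URIRef(EX["note/42"])
g.add((note, EX.content, Literal("Call the dentist on Monday")))
g.add((note, EX.tag, Literal("todo")))
g.add((note, EX.tag, Literal("health")))

# A file: the bytes stay external, only its metadata lives in the graph.
doc = URIRef(EX["file/report.pdf"])
g.add((doc, EX.tag, Literal("todo")))

# Generic navigation: one tag query returns notes and files alike.
for item in g.subjects(EX.tag, Literal("todo")):
    print(item)
```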

    A Human-Centric Approach to Software Vulnerability Discovery

    Software security bugs, referred to as vulnerabilities, persist as an important and costly challenge. Significant effort has been exerted toward automatic vulnerability discovery, but human intelligence generally remains required and will remain necessary for the foreseeable future. Therefore, many companies have turned to internal and external (e.g., penetration testing, bug bounties) security experts to manually analyze their code for vulnerabilities. Unfortunately, there are a limited number of qualified experts. To improve software security, then, we must understand how experts search for vulnerabilities and how their processes could be made more efficient, by improving tool usability and targeting the most common vulnerabilities. Additionally, we seek to understand how to improve training to increase the number of experts. To answer these questions, I begin with an in-depth qualitative analysis of secure development competition submissions to identify common vulnerabilities developers introduce. I found developers struggle to understand and implement complex security concepts, not recognizing how nuanced development decisions could lead to vulnerabilities. Next, using a cognitive task analysis to investigate experts' and non-experts' vulnerability discovery processes, I observed that they use the same process but differ in the variety of security experiences that inform their searches. Together, these results suggest that exposure to an in-depth understanding of potential vulnerabilities is essential for vulnerability discovery. As a first step to leverage both experts and non-experts, I pursued two lines of work: education to support experience development, and improvements to interaction with vulnerability discovery automation. To improve vulnerability discovery tool interaction, I conducted observational interviews of experts' reverse engineering process, an essential and time-consuming component of vulnerability discovery. From this, I provide guidelines for more usable interaction design. For security education, I began with a pedagogical review of security exercises to identify their current strengths and weaknesses. I also developed a psychometric measure of secure software development self-efficacy to support comparisons between educational interventions.

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
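
    As a reminder of the basic quantity behind the Special Issue's title, a short sketch computing the Shannon entropy of an image's gray-level histogram, H = -Σ p_i log2 p_i; the collected articles of course employ far more sophisticated entropy-based measures than this.

```python
import numpy as np

def shannon_entropy(image: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of an image's gray-level histogram, in bits:
    H = -sum(p_i * log2(p_i)) over the non-empty histogram bins."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty bins; 0 * log(0) := 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128)                   # uniform image: H = 0 bits
noisy = rng.integers(0, 256, size=(64, 64))     # uniform noise: H near 8 bits
print(shannon_entropy(flat), shannon_entropy(noisy))
```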

    From online social network analysis to a user-centric private sharing system

    Online social networks (OSNs) have become a massive repository of data constructed from individuals' inputs: posts, photos, feedback, locations, etc. By analyzing such data, meaningful knowledge is generated that can affect individuals' beliefs, desires, happiness, and choices: a data circulation that starts with individuals and ends with individuals. The OSN owners, as the sole authority with full control over the stored data, make the data available for research, advertisement, and other purposes. However, the individuals who generate the data and shape the OSN structure are left out of this circle. In this thesis, we start by introducing approximation algorithms for finding the most influential individuals in a social graph and for modeling the spread of information, taking into account the communities of individuals that form within the graph. The social graph is extracted from data stored and controlled centrally, which can cause privacy breaches and raise individuals' concerns. We therefore introduce UPSS, a user-centric private sharing system that treats individuals as the real data owners and provides secure and private data sharing on untrusted servers. UPSS's public API allows application developers to implement applications as diverse as OSNs, document redaction systems with integrity properties, censorship-resistant systems, health care auditing systems, distributed version control systems with flexible access controls, and a filesystem in userspace. Accessing users' data is possible only with explicit user consent. We implemented the latter two cases to show the applicability of UPSS. UPSS's support for different storage models gives us local, remote, and global filesystems in userspace with one core filesystem implementation, mounted over different block stores. By designing and implementing UPSS, we show that security and privacy can be addressed together in systems that need selective, secure, and collaborative information sharing without requiring complete trust.
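
    One idea that recurs in designs of this kind, sketched here under stated assumptions (this is not UPSS's actual API): a content-addressed block store, where a block is named by the hash of its contents, so the same filesystem core can be mounted over a local, remote, or global backend simply by swapping the store. In a UPSS-like system the blocks would be encrypted before storage, so untrusted servers hold only ciphertext.

```python
import hashlib

class BlockStore:
    """Content-addressed block store: blocks are retrieved by the SHA-256
    hash of their (ideally encrypted) content, so any backend -- a local
    dict, a remote server, a global network -- can serve the same data."""
    def __init__(self):
        self._blocks = {}          # stand-in backend; swap for remote storage

    def put(self, block: bytes) -> str:
        """Store a block and return its content-derived name."""
        name = hashlib.sha256(block).hexdigest()
        self._blocks[name] = block
        return name

    def get(self, name: str) -> bytes:
        """Fetch a block by name; the name also lets clients verify integrity."""
        return self._blocks[name]

store = BlockStore()
ref = store.put(b"encrypted file contents")   # encryption elided in this sketch
assert store.get(ref) == b"encrypted file contents"
```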

    A Secure keyword ordered multiuser searchable encryption framework

    Recent trends in information technology have triggered a shift in various sectors from traditional methods of operation and data management to web-based solutions. Cloud computing provides the best alternative, offering storage as a service and ensuring efficient data operations. Data outsourcing reduces cost and increases efficiency, but it is vulnerable to leakage and manipulation, rendering it unusable for most practical applications. Data encryption makes outsourced data safe but limits the scope for search and multiuser access. The proposed model supports efficient data encryption, search over the encrypted data by legitimate users, and multiuser access to the data. Multiuser searchable encryption allows multiple users to access the data in both read and write mode with their distinct keys, and facilitates the addition and revocation of users with little overhead. The proposed scheme is based on data encryption, keyword generation, and search based on bilinear pairing. The system provides a proxy server that manipulates user queries, making the system faster and more secure. The search is performed on a keyword-ordered list, which makes it faster than the traditional method of comparing a trapdoor with the keywords associated with each file.
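
    A toy sketch of why a keyword-ordered list helps, with HMACs standing in for the paper's pairing-based trapdoors (bilinear pairings are not reproduced here): keeping the server's tokens sorted turns each query into a binary search, O(log n) in the number of distinct keywords, rather than a scan over every file's keyword set.

```python
import bisect, hashlib, hmac

def trapdoor(key: bytes, keyword: str) -> bytes:
    """Stand-in trapdoor (HMAC); the paper derives trapdoors from
    bilinear pairings instead."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

# Server side: keyword tokens kept sorted, each mapped to its file ids.
key = b"shared-user-key"             # hypothetical key shared via the scheme
entries = sorted(
    (trapdoor(key, kw), files)
    for kw, files in {"cloud": ["f1", "f3"], "search": ["f2"]}.items()
)
tokens = [t for t, _ in entries]

def search(token: bytes):
    """Binary search over the ordered token list: O(log n) per query,
    versus comparing the trapdoor against every file's keywords."""
    i = bisect.bisect_left(tokens, token)
    if i < len(tokens) and tokens[i] == token:
        return entries[i][1]
    return []

print(search(trapdoor(key, "cloud")))   # -> ['f1', 'f3']
```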