
    SoK: Cryptographically Protected Database Search

    Protected database search systems cryptographically isolate the roles of reading from, writing to, and administering the database. This separation limits unnecessary administrator access and protects data in the event of system breaches. Since protected search was introduced in 2000, the area has grown rapidly; systems are offered by academia, start-ups, and established companies. However, there is no single best protected search system or set of techniques. Design of such systems is a balancing act between security, functionality, performance, and usability. This challenge is made more difficult by ongoing database specialization, as some users will want the functionality of SQL, NoSQL, or NewSQL databases. This database evolution will continue, and the protected search community should be able to quickly provide functionality consistent with newly invented databases. At the same time, the community must accurately and clearly characterize the tradeoffs between different approaches. To address these challenges, we provide the following contributions: 1) An identification of the important primitive operations across database paradigms. We find there are a small number of base operations that can be used and combined to support a large number of database paradigms. 2) An evaluation of the current state of protected search systems in implementing these base operations. This evaluation describes the main approaches and tradeoffs for each base operation. Furthermore, it puts protected search in the context of unprotected search, identifying key gaps in functionality. 3) An analysis of attacks against protected search for different base queries. 4) A roadmap and tools for transforming a protected search system into a protected database, including an open-source performance evaluation platform and initial user opinions of protected search. (Comment: 20 pages; to appear in IEEE Security and Privacy.)
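    To make the notion of a "base operation" concrete, the sketch below illustrates one common equality-search approach found in the literature such a survey covers: keywords are mapped to keyed pseudorandom tokens so the server can match them without seeing plaintexts. This is a minimal illustrative sketch, not the paper's own construction; the key handling and index layout are assumptions, and note that the query-repetition leakage of deterministic tokens is exactly the kind of tradeoff an attack analysis must weigh.

        # Minimal sketch of an equality base operation over an encrypted index.
        # Keywords become keyed tokens (HMAC-SHA256), so the server matches
        # tokens blindly. Key handling and index layout are illustrative.
        import hmac, hashlib
        from collections import defaultdict

        KEY = b"client-secret-key"      # held by writer/reader, never the server

        def token(keyword: str) -> bytes:
            """Deterministic keyed token for a keyword."""
            return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

        # Writer: build an inverted index from tokens to record ids.
        index = defaultdict(list)
        for doc_id, words in {"doc1": ["cloud", "search"], "doc2": ["search"]}.items():
            for w in words:
                index[token(w)].append(doc_id)

        # Reader: issues a token; the server looks it up without learning the keyword.
        print(index[token("search")])   # ['doc1', 'doc2']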

    Reducing fuzzy answer set programming to model finding in fuzzy logics

    In recent years, answer set programming (ASP) has been extended to deal with multivalued predicates. The resulting formalisms allow continuous problems to be modeled as elegantly as ASP models discrete problems, by combining the stable model semantics underlying ASP with fuzzy logics. However, in contrast to classical ASP, for which many efficient solvers have been constructed, to date there is no efficient fuzzy ASP solver. A well-known technique for classical ASP consists of translating an ASP program P to a propositional theory whose models exactly correspond to the answer sets of P. In this paper, we show how this idea can be extended to fuzzy ASP, paving the way for efficient fuzzy ASP solvers that can take advantage of existing fuzzy logic reasoners.
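    For readers unfamiliar with the underlying semantics, the sketch below shows what "model finding in a fuzzy logic" means at its simplest: atoms take truth degrees in [0, 1], and a rule is satisfied to the degree its body implies its head. This is a minimal sketch assuming Łukasiewicz connectives; the operator names and the toy rule are illustrative and do not reproduce the paper's actual translation, which handles full fuzzy ASP programs.

        # Minimal sketch of fuzzy model checking under Lukasiewicz semantics.
        def luk_and(a: float, b: float) -> float:
            """Lukasiewicz t-norm (strong conjunction)."""
            return max(0.0, a + b - 1.0)

        def luk_implies(a: float, b: float) -> float:
            """Lukasiewicz residual implication."""
            return min(1.0, 1.0 - a + b)

        def rule_degree(body: list, head: float) -> float:
            """Degree to which a rule is satisfied: body conjunction implies head."""
            b = 1.0
            for lit in body:
                b = luk_and(b, lit)
            return luk_implies(b, head)

        # Interpretation assigning truth degrees to atoms p, q, r.
        I = {"p": 0.8, "q": 0.6, "r": 0.4}

        # Rule r <- p, q: a fuzzy model must satisfy every rule to degree 1.
        print(rule_degree([I["p"], I["q"]], I["r"]))  # 1.0, since luk_and(0.8, 0.6) = 0.4 <= 0.4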

    Using Search Term Positions for Determining Document Relevance

    The technological advancements in computer networks and the substantial reduction of their production costs have caused a massive explosion of digitally stored information. In particular, textual information is becoming increasingly available in electronic form. Finding text documents dealing with a certain topic is not a simple task: users need tools to sift through non-relevant information and retrieve only the pieces of information relevant to their needs. The traditional methods of information retrieval (IR) based on search term frequency have largely reached their limits, and novel ranking methods based on hyperlink information are not applicable to unlinked documents. Retrieving documents based on the positions of search terms in a document has the potential to yield improvements, because the other terms in the environment where a search term appears (i.e. its neighborhood) are considered: the grammatical type, position, and frequency of surrounding words help to clarify and specify the meaning of a given search term. However, the required additional analysis makes position-based methods slower than frequency-based methods, and storing term positions requires more space. These drawbacks directly affect the most user-critical phase of the retrieval process, namely query evaluation time, which explains the scarce use of positional information in contemporary retrieval systems. This thesis explores the possibility of extending traditional information retrieval systems with positional information in an efficient manner that permits optimizing retrieval performance by handling term positions at query evaluation time. To this end, several abstract representations of term positions that efficiently store and operate on positional data are investigated. The Gauss model uses descriptive statistics to estimate term positional information, because such statistics minimize the effect of outliers and irregularities in the data. The Fourier model represents positional information with Fourier series. The Hilbert model uses methods from functional analysis to provide reliable term position estimates and simple mathematical operators for handling positional data. The proposed models are experimentally evaluated using standard resources of the IR research community (Text Retrieval Conference). All experiments demonstrate that the use of positional information can enhance the quality of search results, and the suggested models outperform state-of-the-art retrieval utilities. The term position models open new possibilities for analyzing and handling textual data; for instance, document clustering and compression of positional data based on these models could be interesting topics for future research.
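    The core idea behind the Gauss model can be illustrated compactly: compress each term's occurrence positions into a (mean, spread) summary, then score how closely two query terms co-occur. The sketch below is a minimal illustration of that idea under assumed details; the function names, the fallback spread for single occurrences, and the proximity formula are illustrative, not the thesis's actual estimators.

        # Sketch of the "Gauss model" idea: summarize a term's positions by
        # (mean, std) and score proximity of two query terms from the summaries.
        from statistics import mean, stdev

        def positions(doc_tokens, term):
            """Normalized positions of a term within a token list."""
            n = len(doc_tokens)
            return [i / n for i, t in enumerate(doc_tokens) if t == term]

        def gauss_summary(pos):
            """Compress a position list into (mean, std); a small constant
            spread stands in for single occurrences."""
            return mean(pos), (stdev(pos) if len(pos) > 1 else 0.05)

        def proximity(doc_tokens, term_a, term_b):
            """Higher when the two terms tend to occur near each other."""
            (ma, sa), (mb, sb) = (gauss_summary(positions(doc_tokens, t))
                                  for t in (term_a, term_b))
            # Distance between means, normalized by the combined spread.
            return 1.0 / (1.0 + abs(ma - mb) / (sa + sb))

        doc = ("positional ranking weighs search term positions "
               "when scoring each document for relevance").split()
        print(proximity(doc, "search", "term"))       # adjacent terms -> ~0.55
        print(proximity(doc, "search", "relevance"))  # distant terms -> ~0.13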

    Computational Methods in Systems Biology. 17th International Conference, CMSB 2019, Trieste, Italy, September 18–20, 2019, Proceedings

    This volume contains the papers presented at CMSB 2019, the 17th Conference on Computational Methods in Systems Biology, held during September 18–20, 2019, at the University of Trieste, Italy. The CMSB annual conference series, initiated in 2003, provides a unique discussion forum for computer scientists, biologists, mathematicians, engineers, and physicists interested in a system-level understanding of biological processes. Topics covered by the CMSB proceedings include: formalisms for modeling biological processes; models and their biological applications; frameworks for model verification, validation, analysis, and simulation of biological systems; high-performance computational systems biology and parallel implementations; model inference from experimental data; model integration from biological databases; multi-scale modeling and analysis methods; computational approaches for synthetic biology; and case studies in systems and synthetic biology. This year there were 53 submissions in total for the 4 conference tracks. Each regular and tool paper submission was reviewed by at least three Program Committee members; tool papers were additionally reviewed by members of the Tool Evaluation Committee, who tested the usability of the software and the reproducibility of the results. For the proceedings, the Program Committee decided to accept 14 regular papers, 7 tool papers, and 11 short papers. This rich program of talks was complemented by a poster session, providing an opportunity for informal discussion of preliminary results and results in related fields. In view of the broad scope of the CMSB conference series, we selected the following five high-profile invited speakers: Kobi Benenson (ETH Zurich, Switzerland), Trevor Graham (Barts Cancer Hospital, London, UK), Gaspar Tkacik (IST Austria), Adelinde Uhrmacher (Rostock University, Germany), and Manuel Zimmer (University of Vienna, Austria). Their invited talks covered a broad area within the technical and applicative domains of the conference and stimulated fruitful discussions among the conference attendees. Further details on CMSB 2019 are available on the following website: https://cmsb2019.units.it. Finally, as the program co-chairs, we are extremely grateful to the members of the Program Committee and the external reviewers for their peer reviews and the valuable feedback they provided to the authors. Our special thanks go to Laura Nenzi as local organization co-chair, Dimitrios Milios as chair of the Tool Evaluation Committee, and to François Fages and all the members of the CMSB Steering Committee for their advice on organizing and running the conference. We acknowledge the support of the EasyChair conference system during the reviewing process and the production of these proceedings. We also thank Springer for publishing the CMSB proceedings in its Lecture Notes in Computer Science series. Additionally, we would like to thank the Department of Mathematics and Geosciences of the University of Trieste for sponsoring and hosting this event, and Confindustria Venezia Giulia for supporting this event and providing administrative help. Finally, we would like to thank all the participants of the conference. It was the quality of their presentations and their contribution to the discussions that made the meeting a scientific success.

    Secure and Reliable Data Outsourcing in Cloud Computing

    The many advantages of cloud computing are increasingly attracting individuals and organizations to outsource their data from local storage to remote cloud servers. In addition to cloud infrastructure and platform providers, such as Amazon, Google, and Microsoft, more and more cloud application providers are emerging that are dedicated to offering more accessible and user-friendly data storage services to cloud customers. It is a clear trend that cloud data outsourcing is becoming a pervasive service. Along with the widespread enthusiasm for cloud computing, however, concerns about the reliability and privacy of cloud data storage are arising, and these concerns are the primary obstacles to cloud adoption. To address these challenging issues, this dissertation explores the problem of secure and reliable data outsourcing in cloud computing. We focus on deploying the most fundamental data services, e.g., data management and data utilization, while providing reliability and privacy assurance. The first part of this dissertation discusses secure and reliable cloud data management to guarantee data correctness and availability, given that data are no longer locally possessed by their owners. We design a secure cloud storage service that addresses the reliability issue with near-optimal overall performance. By allowing a third party to perform public integrity verification, data owners are significantly relieved of the onerous work of periodically checking data integrity. To completely free the data owner from the burden of being online after data outsourcing, we propose an exact repair solution so that no metadata needs to be generated on the fly for repaired data. The second part presents our privacy-preserving data utilization solutions supporting two categories of semantics: keyword search and graph query. To protect data privacy, sensitive data has to be encrypted before outsourcing, which obsoletes traditional data utilization based on plaintext keyword search. We define and solve the challenging problem of privacy-preserving multi-keyword ranked search over encrypted data in cloud computing, and establish a set of strict privacy requirements for such a secure cloud data utilization system to become a reality. We first propose a basic idea for keyword search based on secure inner product computation, and then give two improved schemes that achieve various stringent privacy requirements under two different threat models. We also investigate further enhancements of our ranked search mechanism, including support for richer search semantics, i.e., TF × IDF, and dynamic data operations. As a general data structure for describing relations between entities, the graph is increasingly used to model complicated structures and schemaless data, such as personal social networks, relational databases, XML documents, and chemical compounds. When such data contain sensitive information and need to be encrypted before outsourcing to the cloud, effectively utilizing the graph-structured data after encryption is a very challenging task. We define and solve the problem of privacy-preserving query over encrypted graph-structured data in cloud computing. Utilizing the principle of filtering-and-verification, we pre-build a feature-based index to provide feature-related information about each encrypted data graph, and then choose the efficient inner product as the pruning tool to carry out the filtering procedure.
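    The secure inner-product primitive mentioned above can be illustrated with the classic secure kNN trick: an invertible secret matrix encrypts index vectors one way and query vectors the other, so their dot product survives encryption. This is a minimal sketch of that idea under assumed parameters (the toy dimension, the random key matrix, and the binary keyword vectors are illustrative), not the dissertation's full scheme, which adds dimension splitting and randomization to meet its stated privacy requirements.

        # Sketch of secure inner-product computation (the secure kNN idea).
        # Index vectors are encrypted as M^T p and query vectors as M^{-1} q,
        # so (M^T p) . (M^{-1} q) = p^T M M^{-1} q = p . q.
        import numpy as np

        rng = np.random.default_rng(42)
        d = 5                                  # toy dictionary size
        M = rng.normal(size=(d, d))            # secret invertible key matrix
        M_inv = np.linalg.inv(M)

        p = np.array([1., 0., 1., 1., 0.])     # document index vector (keyword flags)
        q = np.array([0., 0., 1., 1., 0.])     # query vector (searched keywords)

        enc_index = M.T @ p                    # stored at the cloud server
        enc_query = M_inv @ q                  # trapdoor sent with the query

        # The server ranks documents by this score without seeing p or q.
        print(enc_index @ enc_query, p @ q)    # both ~2.0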