
    Multilevel Hashing based Access Control for Authentication and Security in Relational Database Management System

    Relational databases are in wide use today, so protecting them from the many kinds of threats and attacks they face is vital. Their security matters because almost all data and information are now stored as database objects, including users' personal information such as credit/debit card details, usernames, and passwords, as well as the confidential data of businesses and other organizations. Much research has been done to protect relational databases from these vulnerabilities, but existing security methodologies cannot protect against every security issue. User authentication is the issue that most needs to be addressed, since authentication can grant full access to database objects. In this paper we present the security issues that can degrade relational database security, identify problems in current database security policies, and propose a multilevel hashing based access control mechanism for authentication and security in relational databases.
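
    The abstract does not spell out the exact scheme, but a minimal sketch of what multilevel (iterated, salted) hashing for credential verification could look like is shown below; the level count, salt size, and function names are illustrative assumptions, not the authors' design:

```python
import hashlib
import hmac
import os

LEVELS = 3  # hypothetical: one hashing round per access tier

def derive(password: str, salt: bytes, levels: int = LEVELS) -> bytes:
    """Chain the hash once per level, mixing the salt and level index
    back in, so each additional tier adds one more hashing round."""
    digest = password.encode()
    for level in range(levels):
        digest = hashlib.sha256(salt + digest + bytes([level])).digest()
    return digest

def register(password: str) -> tuple[bytes, bytes]:
    """Return (salt, verifier) to store alongside the user record."""
    salt = os.urandom(16)  # per-user random salt
    return salt, derive(password, salt)

def authenticate(password: str, salt: bytes, verifier: bytes) -> bool:
    """Recompute the multilevel hash and compare in constant time."""
    return hmac.compare_digest(derive(password, salt), verifier)

salt, verifier = register("s3cret")
assert authenticate("s3cret", salt, verifier)
assert not authenticate("wrong", salt, verifier)
```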

    A Detailed Study on Aggregation Methods used in Natural Language Interface to Databases (NLIDB)

    Historically, databases have been a crucial topic in the study of information systems, and they constitute an essential part of all information management systems. Querying them is complicated, however, which restricts the number of potential users: non-expert database users must comprehend the database structure before they can submit such queries. A natural language interface (NLI), the simplest method to retrieve information, is one possibility for interacting with the database. The transformation of a natural language query into a Structured Query Language (SQL) query over a database is known as a "Natural Language Interface to Database" (NLIDB). This study examines how NLIDB systems handle aggregation, covering aggregate functions, the grouping (GROUP BY) clause, and the having (HAVING) clause. It carefully reviews the systematic aggregation approaches used in NLIDB, and it provides extensive information about the main families of methods, including query-based, pattern-based, general, keyword-based, and grammar-based systems, for extracting data from a generic module for use in systems that support query execution with aggregations.
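
    As a hedged illustration of the pattern-based family surveyed here, the sketch below maps aggregation keywords in an English question to a SQL aggregate query; the table and column names, keyword list, and regex are hypothetical, not drawn from any system in the review:

```python
import re

# Hypothetical mapping from question keywords to SQL aggregate functions.
AGG_KEYWORDS = {
    "how many": "COUNT(*)",
    "average": "AVG",
    "total": "SUM",
    "highest": "MAX",
    "lowest": "MIN",
}

def to_sql(question: str, table: str, measure: str,
           group_col: str | None = None) -> str | None:
    """Tiny pattern-based NLIDB step: pick an aggregate from a keyword,
    then add GROUP BY when the question says 'per', 'by', or 'each'."""
    q = question.lower()
    for keyword, func in AGG_KEYWORDS.items():
        if keyword in q:
            select = func if func == "COUNT(*)" else f"{func}({measure})"
            if group_col and re.search(r"\b(per|by|each)\b", q):
                return (f"SELECT {group_col}, {select} FROM {table} "
                        f"GROUP BY {group_col}")
            return f"SELECT {select} FROM {table}"
    return None  # no pattern matched; a real system would fall back

print(to_sql("What is the average salary per department?",
             "employees", "salary", "department"))
# SELECT department, AVG(salary) FROM employees GROUP BY department
```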

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference an author-supplied abstract, a number of keywords, and a classification are provided. In some cases our own comments are added; their purpose is to show where, how, and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication), and the language of the document. After a description of the scope of the review, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification, and comments.
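
    Since decision tables are themselves a simple executable structure, a minimal sketch may help; the conditions, entries, and actions below form a made-up access policy, not an example from the surveyed literature:

```python
# A decision table as data: each rule maps condition entries to an action.
# 'Y'/'N' test a condition; '-' means "don't care".
CONDITIONS = ["is_admin", "owns_record", "read_only_request"]
RULES = [
    # is_admin  owns_record  read_only   ->  action
    (("Y",      "-",         "-"),           "allow"),
    (("N",      "Y",         "-"),           "allow"),
    (("N",      "N",         "Y"),           "allow_read"),
    (("N",      "N",         "N"),           "deny"),
]

def decide(facts: dict) -> str:
    """Return the action of the first rule whose entries all match."""
    for entries, action in RULES:
        if all(e == "-" or facts[c] == (e == "Y")
               for c, e in zip(CONDITIONS, entries)):
            return action
    raise ValueError("incomplete decision table")

print(decide({"is_admin": False, "owns_record": False,
              "read_only_request": True}))
# allow_read
```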

    Access Control for Data Integration in Presence of Data Dependencies

    Defining access control policies in a data integration scenario is a challenging task. In such a scenario, each source typically specifies its local access control policy and cannot anticipate the data inferences that can arise when data is integrated at the mediator level. Inferences, e.g., using functional dependencies, can allow malicious users to obtain prohibited information at the mediator level by linking multiple queries, thus violating the local policies. In this paper, we propose a framework, i.e., a methodology and a set of algorithms, to prevent such violations. First, we use a graph-based approach to identify sets of queries, called violating transactions; we then propose an approach to forbid the execution of those transactions by identifying additional access control rules that should be added to the mediator. We also state the complexity of the algorithms and discuss a set of experiments we conducted using both real and synthetic datasets. The tests also confirm the worst-case complexity and upper bounds of the proposed algorithms.
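
    To make the linking threat concrete, here is a heavily simplified sketch of detecting a violating transaction; sharing an attribute stands in for the dependency-based joinability the paper analyses with its graph-based algorithms, and the schema and prohibited association are invented:

```python
# Association the mediator-level policy forbids users to reconstruct.
PROHIBITED = [{"name", "disease"}]

def linked_components(queries):
    """Greedily merge queries that share an attribute (a potential join
    key); each merged set is what one transaction can link together."""
    components = []
    for q in queries:
        q = set(q)
        overlapping = [c for c in components if c & q]
        for c in overlapping:
            q |= c
            components.remove(c)
        components.append(q)
    return components

def violating_transaction(queries) -> bool:
    """True if some linked component covers a prohibited association."""
    return any(assoc <= comp
               for comp in linked_components(queries)
               for assoc in PROHIBITED)

# Each query is harmless alone, but joining on ssn links name to disease:
print(violating_transaction([{"name", "ssn"}, {"ssn", "disease"}]))  # True
print(violating_transaction([{"name", "ssn"}, {"dob", "disease"}]))  # False
```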

    VerdictDB: Universalizing Approximate Query Processing

    Despite 25 years of research in academia, approximate query processing (AQP) has had little industrial adoption. One of the major causes of this slow adoption is the reluctance of traditional vendors to make radical changes to their legacy codebases, and the preoccupation of newer vendors (e.g., SQL-on-Hadoop products) with implementing standard features. Additionally, the few AQP engines that are available are each tied to a specific platform and require users to completely abandon their existing databases---an unrealistic expectation given the infancy of AQP technology. Therefore, we argue that a universal solution is needed: a database-agnostic approximation engine that will widen the reach of this emerging technology across various platforms. Our proposal, called VerdictDB, uses a middleware architecture that requires no changes to the backend database and thus can work with all off-the-shelf engines. Operating at the driver level, VerdictDB intercepts analytical queries issued to the database and rewrites them into another query that, if executed by any standard relational engine, will yield sufficient information for computing an approximate answer. VerdictDB uses the returned result set to compute an approximate answer and error estimates, which are then passed on to the user or application. However, the lack of access to the query execution layer introduces significant challenges in terms of generality, correctness, and efficiency. This paper shows how VerdictDB overcomes these challenges and delivers up to 171× speedup (18.45× on average) for a variety of existing engines, such as Impala, Spark SQL, and Amazon Redshift, while incurring less than 2.6% relative error. VerdictDB is open-sourced under the Apache License. Comment: Extended technical report of the paper that appeared in Proceedings of the 2018 International Conference on Management of Data, pp. 1461-1476, ACM, 2018.
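
    The rewrite-then-scale idea can be shown in miniature. The sketch below only mirrors the arithmetic behind such a rewrite for SUM under uniform sampling (VerdictDB itself rewrites SQL against prepared sample tables and covers far more cases); the sampling ratio and confidence level are arbitrary choices:

```python
import math
import random

def approx_sum(values, ratio=0.01, z=1.96):
    """Estimate SUM(values) from a uniform Bernoulli sample taken at
    `ratio`, scaling the sample sum by 1/ratio, and attach a simple
    normal-approximation error bound (z = 1.96 for ~95% confidence)."""
    sample = [v for v in values if random.random() < ratio]
    n = len(sample)
    if n < 2:
        raise ValueError("sample too small; raise the sampling ratio")
    estimate = sum(sample) / ratio
    mean = sum(sample) / n
    var = sum((v - mean) ** 2 for v in sample) / (n - 1)
    error = z * math.sqrt(n * var) / ratio  # std. error of the scaled sum
    return estimate, error

data = list(range(1_000_000))
est, err = approx_sum(data)
print(f"estimate {est:,.0f} +/- {err:,.0f} (exact {sum(data):,})")
```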

    Securing Databases from Probabilistic Inference

    Databases can leak confidential information when users combine query results with probabilistic data dependencies and prior knowledge. Current research offers mechanisms that either handle a limited class of dependencies or lack tractable enforcement algorithms. We propose a foundation for Database Inference Control based on ProbLog, a probabilistic logic programming language. We leverage this foundation to develop Angerona, a provably secure enforcement mechanism that prevents information leakage in the presence of probabilistic dependencies. We then provide a tractable inference algorithm for a practically relevant fragment of ProbLog. We empirically evaluate Angerona's performance, showing that it scales to relevant security-critical problems. Comment: A short version of this paper has been accepted at the 30th IEEE Computer Security Foundations Symposium (CSF 2017).
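
    The core enforcement idea, deciding whether releasing an answer would raise an attacker's belief in a secret above a policy threshold, can be sketched in plain Python (the paper works with full ProbLog programs; the threshold, prior, and likelihoods here are invented):

```python
THRESHOLD = 0.8  # policy: deny if P(secret | released answer) exceeds this

def posterior(prior: float, p_ans_given_secret: float,
              p_ans_given_no_secret: float) -> float:
    """Bayes' rule: the attacker's belief in the secret after the answer."""
    num = p_ans_given_secret * prior
    return num / (num + p_ans_given_no_secret * (1 - prior))

def enforce(prior, p_ans_given_secret, p_ans_given_no_secret):
    """Release the answer only if the posterior stays below the threshold."""
    p = posterior(prior, p_ans_given_secret, p_ans_given_no_secret)
    return ("deny" if p > THRESHOLD else "allow", round(p, 3))

# An answer nine times likelier under the secret pushes belief past 0.8:
print(enforce(prior=0.4, p_ans_given_secret=0.9, p_ans_given_no_secret=0.1))
# ('deny', 0.857)
```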

    Open world reasoning in semantics-aware access control: A preliminary study

    We address the relationships between the theoretical foundations of Description Logics and practical applications of security-oriented Semantic Web techniques. We first describe the advantages of semantics-aware Access Control and review the state of the art; we also introduce the basics of Description Logics and the novel semantics they share. We then translate the principle underlying the Little House Problem of DL into a real-world use case: by applying Open World Reasoning to a Knowledge Base modelling a Virtual Organization, we derive information not achievable with traditional Access Control methodologies. With this example, we also show that a general problem such as ontology mapping can take advantage of the enhanced semantics underlying OWL Lite and OWL DL to handle under-specified concepts.
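
    The open-world point is that absence of an assertion is not a denial. A minimal three-valued sketch of that distinction, with invented Virtual Organization membership facts (a real system would run a DL reasoner over OWL, not a string lookup), follows:

```python
from enum import Enum

class Truth(Enum):
    TRUE = "provably true"
    FALSE = "provably false"
    UNKNOWN = "unknown"  # open world: no proof is not the same as denial

def entailed(asserted: set, denied: set, fact: str) -> Truth:
    """Open-world lookup: FALSE only when the fact is explicitly denied;
    a fact that is merely absent stays UNKNOWN rather than becoming FALSE."""
    if fact in asserted:
        return Truth.TRUE
    if fact in denied:
        return Truth.FALSE
    return Truth.UNKNOWN

asserted = {"memberOf(alice, VO1)"}
denied = {"memberOf(mallory, VO1)"}
for user in ("alice", "bob", "mallory"):
    status = entailed(asserted, denied, f"memberOf({user}, VO1)")
    print(user, "->", status.value)
# alice -> provably true; bob -> unknown (a closed world would say false);
# mallory -> provably false
```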

    Extending Complex Event Processing for Advanced Applications

    Recently, numerous emerging applications, ranging from on-line financial transactions, RFID-based supply chain management, and traffic monitoring to real-time object monitoring, generate high-volume event streams. To meet the needs of processing event data streams in real time, Complex Event Processing (CEP) technology has been developed, with a focus on detecting occurrences of particular composite patterns of events. By analyzing and constructing several real-world CEP applications, we found that CEP needs to be extended with advanced services beyond detecting pattern queries. We summarize these emerging needs in three orthogonal directions. First, for applications that require access to both streaming and stored data, we need to provide clear semantics and efficient schedulers in the face of concurrent access and failures. Second, when a CEP system is deployed in a sensitive environment such as health care, we wish to mitigate possible privacy leaks. Third, when input events do not carry the identification of the object being monitored, we need to infer the probabilistic identification of events before feeding them to a CEP engine. This dissertation therefore discusses the construction of a framework for extending CEP to support these critical services.

    First, existing CEP technology is limited in its capability of reacting to opportunities and risks detected by pattern queries. We propose to tackle this unsolved problem by embedding active rule support within the CEP engine. The main challenge is to handle interactions between queries and reactions to queries in high-volume stream execution. We therefore introduce a novel stream-oriented transactional model, along with a family of stream transaction scheduling algorithms that ensure the correctness of concurrent stream execution. We then demonstrate the proposed technology by applying it to a real-world healthcare system and evaluate the stream transaction scheduling algorithms extensively using a real-world workload.

    Second, we are the first to study the privacy implications of CEP systems. Specifically, we consider how to suppress events on a stream to reduce the disclosure of sensitive patterns, while ensuring that non-sensitive patterns continue to be reported by the CEP engine. We formally define the problem of utility-maximizing event suppression for privacy preservation, and we design a suite of real-time solutions that eliminate private pattern matches while maximizing overall utility. Our first solution optimally solves the problem at the event-type level. The second solution, at the event-instance level, further optimizes the event-type-level solution by exploiting runtime event distributions using advanced pattern-match cardinality estimation techniques. Our experimental evaluation over both real-world and synthetic event streams shows that our algorithms are effective in maximizing utility, yet still efficient enough to offer near-real-time system responsiveness.

    Third, we observe that in many real-world object monitoring applications where CEP technology is adopted, not all sensed events carry the identification of the object whose action they report on; we call these "non-ID-ed" events. Such non-ID-ed events prevent us from performing object-based analytics, such as tracking, alerting, and pattern matching. We propose a probabilistic inference framework to tackle this problem by inferring the missing object identification associated with an event. Specifically, as a foundation we design a time-varying graphical model to capture correspondences between sensed events and objects. Upon this model, we elaborate how to adapt the state-of-the-art forward-backward inference algorithm to continuously infer probabilistic identifications for non-ID-ed events. More importantly, we propose a suite of strategies for optimizing the performance of inference. Our experimental results, using large-volume streams from a real-world health care application, demonstrate the accuracy, efficiency, and scalability of the proposed technology.
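
    The forward-backward step named above is standard enough to sketch. Below, a tiny static version infers which of two objects produced each of three non-ID-ed events; the transition, prior, and emission numbers are invented, and the dissertation's model is time-varying and streaming, which this sketch omits:

```python
import numpy as np

# Two candidate objects; transitions favour an object staying where it is.
trans = np.array([[0.9, 0.1],
                  [0.1, 0.9]])   # P(object at t | object at t-1)
prior = np.array([0.5, 0.5])
emit = np.array([[0.8, 0.2],     # P(sensed event at t | object), per step
                 [0.6, 0.4],
                 [0.1, 0.9]])

def forward_backward(prior, trans, emit):
    """Classic forward-backward pass: returns, for every time step, the
    posterior probability of each object given the whole event stream."""
    T, K = emit.shape
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    alpha[0] = prior * emit[0]
    for t in range(1, T):                  # forward pass
        alpha[t] = emit[t] * (alpha[t - 1] @ trans)
    for t in range(T - 2, -1, -1):         # backward pass
        beta[t] = trans @ (emit[t + 1] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

print(forward_backward(prior, trans, emit).round(3))
```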