
    A parallelized database damage assessment approach after cyberattack for healthcare systems

    In the current Internet of Things era, companies have shifted from paper-based data to electronic formats. Although this shift has increased the efficiency of data processing, it has security drawbacks. Healthcare databases are a precious target for attackers because they facilitate identity theft and cybercrime. This paper presents an approach to database damage assessment for healthcare systems. Inspired by the behavior of COVID-19 infections, our approach views the damage assessment problem the same way: malicious transactions are treated like COVID-19 infections and traced from the point of infection onward. The challenge of this research is to discover the infected transactions in minimal time. The proposed parallel algorithm is based on the transaction-dependency paradigm, with a time complexity of O((M + NQ + N^3)/L), where M is the total number of transactions under scrutiny, N is the number of malicious and affected transactions in the testing list, Q is the time for a dependency check, and L is the number of threads used. The memory complexity of the algorithm is O(N + KL), where N is the number of malicious and affected transactions, K is the number of transactions in one area handled by one thread, and L is the number of threads. Since the damage assessment time is directly proportional to the denial-of-service time, the proposed algorithm minimizes execution time. Our algorithm is a novel approach that outperforms existing algorithms in this domain in both time and memory, running up to four times faster and using about 120,000 fewer bytes of memory.
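    A minimal sketch of the kind of dependency-based, multi-threaded damage assessment described above, assuming each logged transaction records its read and write sets; the area split and names such as assess_damage are illustrative, not the paper's implementation.

```python
# Sketch: parallel damage assessment over a transaction log.
# Assumption: a transaction is affected if it read an item written by a
# malicious or already-affected transaction (commit-order checks omitted
# for brevity).
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field

@dataclass
class Txn:
    tid: int
    reads: set = field(default_factory=set)
    writes: set = field(default_factory=set)

def scan_area(area, dirty_items):
    """Scan one area (chunk) of the log against the currently dirty data items."""
    return [t for t in area if t.reads & dirty_items]

def assess_damage(log, malicious_ids, threads=4):
    affected = set(malicious_ids)
    dirty = set()                                        # data items known to be corrupted
    for t in log:
        if t.tid in malicious_ids:
            dirty |= t.writes
    areas = [log[i::threads] for i in range(threads)]    # one area per thread
    changed = True
    while changed:                                       # iterate to a fixed point
        changed = False
        snapshot = frozenset(dirty)                      # stable view for all threads
        with ThreadPoolExecutor(max_workers=threads) as pool:
            results = list(pool.map(scan_area, areas, [snapshot] * threads))
        for found in results:
            for t in found:
                if t.tid not in affected:
                    affected.add(t.tid)
                    dirty |= t.writes
                    changed = True
    return affected

log = [Txn(1, writes={"a"}), Txn(2, reads={"a"}, writes={"b"}),
       Txn(3, reads={"b"}), Txn(4, reads={"c"})]
print(assess_damage(log, {1}))   # {1, 2, 3}
```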

    Provenance-Aware Tracing of Worm Break-in and Contaminations: A Process Coloring Approach

    To investigate the exploitation and contamination by self-propagating Internet worms, a provenance-aware tracing mechanism is highly desirable. Provenance unawareness causes difficulties in fast and accurate identification of a worm's break-in point (namely, a remotely-accessible vulnerable service running in the infected host), and incurs significant log data inspection overhead. This paper presents the design, implementation, and evaluation of process coloring, an efficient provenance-aware approach to worm break-in and contamination tracing. More specifically, process coloring assigns a "color", a unique system-wide identifier, to each remotely-accessible server or process. The color is then either inherited by spawned child processes or diffused indirectly through process actions (e.g., read or write operations). Process coloring brings two major advantages: (1) it enables fast color-based identification of the break-in point exploited by a worm even before detailed log analysis; (2) it naturally partitions log data according to their associated colors, effectively reducing the volume of log data that needs to be examined and, correspondingly, the log processing overhead for worm investigation. A tamper-resistant log collection method is developed based on the virtual machine introspection technique. Our experiments with a number of real-world worms demonstrate the advantages of process coloring. For example, to reveal detaile
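    A minimal sketch of the color-propagation idea, assuming a simplified model in which processes and files carry color sets and colors flow on fork, write, and read events; the event names and data structures below are illustrative, not the paper's kernel-level implementation.

```python
# Sketch: color inheritance and diffusion in the spirit of process coloring.
from collections import defaultdict

proc_colors = defaultdict(set)   # pid  -> set of colors
file_colors = defaultdict(set)   # path -> set of colors

def color_service(pid, color):
    """Assign a unique color to a remotely-accessible service process."""
    proc_colors[pid].add(color)

def on_fork(parent, child):
    proc_colors[child] |= proc_colors[parent]   # inheritance by spawned children

def on_write(pid, path):
    file_colors[path] |= proc_colors[pid]       # diffusion: process -> object

def on_read(pid, path):
    proc_colors[pid] |= file_colors[path]       # diffusion: object -> process

# Usage: color the web server, replay a few events from the log, then any
# process or file carrying that color is part of the contamination.
color_service(100, "httpd")
on_fork(100, 101)
on_write(101, "/tmp/payload")
on_read(200, "/tmp/payload")
print(proc_colors[200])   # {'httpd'} -> contamination traced back to the httpd color
```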

    Intrusion recovery for database-backed web applications

    Warp is a system that helps users and administrators of web applications recover from intrusions such as SQL injection, cross-site scripting, and clickjacking attacks, while preserving legitimate user changes. Warp repairs the effects of an intrusion by rolling back parts of the database to a version before the attack and replaying subsequent legitimate actions. Warp allows administrators to retroactively patch security vulnerabilities (i.e., apply new security patches to past executions) to recover from intrusions without requiring the administrator to track down or even detect attacks. Warp's time-travel database allows fine-grained rollback of database rows and enables repair to proceed concurrently with normal operation of a web application. Finally, Warp captures and replays user input at the level of a browser's DOM, to recover from attacks that involve a user's browser. For a web server running MediaWiki, Warp requires no application source code changes to recover from a range of common web application vulnerabilities with minimal user input, at a cost of 24--27% in throughput and 2--3.2 GB/day in storage. United States. Defense Advanced Research Projects Agency, Clean-slate Design of Resilient, Adaptive, Secure Hosts (Contract N66001-10-2-4089); National Science Foundation (U.S.) (Award CNS-1053143); Quanta Computer (Firm); Google (Firm); Samsung Scholarship Foundation
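    A minimal sketch of rollback-and-replay repair in the spirit of Warp's time-travel database, using a versioned key/value store as a stand-in for per-row versions and assuming the attack's writes have already been identified; real Warp re-executes application actions rather than replaying recorded values, and all names here are illustrative.

```python
# Sketch: fine-grained rollback of attacked rows plus replay of later
# legitimate writes to those rows.
import bisect

class TimeTravelStore:
    def __init__(self):
        self.history = {}                                  # key -> [(version, value), ...]

    def write(self, key, value, version):
        self.history.setdefault(key, []).append((version, value))

    def read(self, key, version=None):
        versions = self.history.get(key, [])
        if not versions:
            return None
        if version is None:
            return versions[-1][1]
        i = bisect.bisect_right([v for v, _ in versions], version)
        return versions[i - 1][1] if i else None

    def rollback(self, key, version):
        """Discard versions of a single row written after `version`."""
        self.history[key] = [(v, val) for v, val in self.history.get(key, []) if v <= version]

def repair(store, log, attack_versions):
    """Roll back rows touched by the attack, then replay later legitimate writes to them."""
    first_bad = min(attack_versions)
    touched = {key for version, key, _ in log if version in attack_versions}
    for key in touched:
        store.rollback(key, first_bad - 1)
    for version, key, value in log:
        if key in touched and version >= first_bad and version not in attack_versions:
            store.write(key, value, version)

store = TimeTravelStore()
log = [(1, "page", "v1"), (2, "page", "defaced"), (3, "comments", "nice post")]
for version, key, value in log:
    store.write(key, value, version)
repair(store, log, attack_versions={2})
print(store.read("page"), store.read("comments"))          # v1 nice post
```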

    Managing malicious transactions in mobile database systems

    Title from PDF of title page, viewed on March 15, 2013. Thesis advisor: Vijay Kumar. Vita. Includes bibliographic references (p. 53-55). Thesis (M.S.)--School of Computing and Engineering, University of Missouri--Kansas City, 2012. Database security is one of the most important issues for any organization, especially for financial institutions such as banks. Protecting a database from external threats is relatively easy, and a number of effective security schemes are available to organizations. Unfortunately, this is not so for threats from insiders. Existing security schemes for such threats are variations of the external schemes and are not able to provide the desired security level. As a result, authorized users (insiders) still manage to misuse their privileges to fulfill their malicious intent. In fact, most external security breaches succeed with the help of insiders. An example of insider abuse is the Enron scandal of 2001, which led to the bankruptcy of Enron Corporation. The firm was widely regarded as one of the most innovative, fastest growing, and best managed businesses in the United States. When Enron filed for bankruptcy, its share price fell from US$90 to $1, causing a loss of nearly $11 billion to its stakeholders. Financial officers and executives misled outside investors, auditors, and Enron's board of directors about the corporation's net income and liabilities. These insiders kept reported income and reported cash flow up, asset values inflated, and liabilities off the books to meet Wall Street expectations. Enron's $63.4 billion in assets made it the largest corporate bankruptcy in American history at that time. Existing security policies are inadequate to prevent attacks from insiders. Current database protection mechanisms do not fully prevent such malicious transactions and require human intervention in some form to detect them. In a database, one transaction can affect the execution of subsequent transactions, thereby spreading the damage and making attack recovery more complex. The problem of malicious attacks becomes more pronounced when dealing with mobile database systems. This thesis proposes a solution to mitigate insider attacks by identifying such malicious transactions. It develops a formal framework for characterizing mobile transactions by identifying essential components such as order of data access, order of operations, and user profile. Introduction -- Mobile database system -- Research problem -- Solution and scheme -- Simulation and results -- Future work -- Conclusion
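    A minimal sketch of how a transaction profile built from order of data access, order of operations, and user identity might flag a deviating transaction; the data structure and exact-match comparison below are illustrative assumptions, not the thesis' formal framework.

```python
# Sketch: flag transactions whose access pattern deviates from a stored profile.
from dataclasses import dataclass

@dataclass
class TxnProfile:
    user: str
    data_order: tuple      # expected order of data items accessed
    op_order: tuple        # expected order of operations, e.g. ('read', 'read', 'write')

def is_suspicious(profile, user, observed_data, observed_ops):
    """Flag a transaction whose issuer or access pattern deviates from the profile."""
    return (user != profile.user or
            tuple(observed_data) != profile.data_order or
            tuple(observed_ops) != profile.op_order)

teller_withdrawal = TxnProfile(
    user="teller",
    data_order=("account.balance", "account.limit", "account.balance"),
    op_order=("read", "read", "write"),
)

# A transaction that writes the credit limit, which the profile never allows, is flagged.
print(is_suspicious(teller_withdrawal, "teller",
                    ["account.balance", "account.limit"],
                    ["read", "write"]))      # True
```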

    Design and Development of Techniques to Ensure Integrity in Fog Computing Based Databases

    The advancement of information technology in the coming years will bring significant changes to the way sensitive data is processed, and the volume of generated data is growing rapidly worldwide. Technologies such as cloud computing, fog computing, and the Internet of Things (IoT) will offer business service providers and consumers opportunities to obtain effective and efficient services and to enhance their experiences; increased availability and higher-quality services via real-time data processing add value to everyday experiences and improve quality of life. As promising as these technological innovations are, however, they are prone to security issues such as data integrity and data consistency violations: systems may be infiltrated by malicious transactions and, as a result, data can be corrupted, which is a cause for concern. Once an attacker damages a set of data items, the damage can spread through the database, because valid transactions that read corrupted data may update other data items based on the values read. Given the sensitive nature of important data and the critical need to provide real-time access for decision-making, any damage done by a malicious transaction and spread by valid transactions must be corrected immediately and accurately. In this research, we develop three novel models for employing fog computing technology in critical systems such as healthcare, intelligent government, and critical infrastructure systems. In the first model, we present two sub-models for using fog computing in healthcare: an architecture using fog modules with heterogeneous data, and another using fog modules with homogeneous data. We propose a unique approach for each module to assess the damage caused by malicious transactions, so that the original data may be recovered and affected transactions may be identified for future investigation. The second model uses fog computing in smart cities to manage utility service companies and consumer data; we propose a novel technique to assess damage to data caused by an attack, so that the original data can be recovered and the database can be returned to a consistent state, as if no attack had occurred. The last model focuses on a novel technique for an intelligent government system that uses fog computing technology to control and manage data. Unique algorithms for sustaining the integrity of system data in the event of a cyberattack are proposed in this segment of the research; they are designed to maintain the security of systems attacked by malicious transactions or subjected to fog node data modifications. A transaction-dependency graph is implemented in this model to observe and monitor the activities of every transaction: once an intrusion detection system detects malicious activity, the system promptly identifies all affected transactions. Finally, we conducted a simulation study to demonstrate the applicability and efficacy of the proposed models; the evaluation showed them to be practicable and effective.
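    A minimal sketch of the transaction-dependency graph idea: build read-from edges from a log and walk them from the transactions flagged by the intrusion detection system to find everything affected. The log format and function names are illustrative assumptions, not the dissertation's code.

```python
# Sketch: find all transactions affected by flagged malicious transactions.
from collections import defaultdict, deque

def build_dependency_graph(log):
    """log: ordered list of (txn_id, read_set, write_set).
    Adds an edge u -> v when v reads an item last written by u."""
    graph = defaultdict(set)
    last_writer = {}
    for tid, reads, writes in log:
        for item in reads:
            if item in last_writer:
                graph[last_writer[item]].add(tid)
        for item in writes:
            last_writer[item] = tid
    return graph

def affected_transactions(graph, malicious):
    """Breadth-first walk over read-from edges starting at the malicious transactions."""
    seen, queue = set(malicious), deque(malicious)
    while queue:
        tid = queue.popleft()
        for dependent in graph.get(tid, ()):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

log = [("T1", set(), {"x"}), ("T2", {"x"}, {"y"}), ("T3", {"y"}, {"z"}), ("T4", {"w"}, set())]
print(affected_transactions(build_dependency_graph(log), {"T1"}))  # {'T1', 'T2', 'T3'}
```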

    Automated intrusion recovery for web applications

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (pages 93-97). In this dissertation, we develop recovery techniques for web applications and demonstrate that automated recovery from intrusions and user mistakes is practical as well as effective. Web applications play a critical role in users' lives today, making them an attractive target for attackers. New vulnerabilities are routinely found in web application software, and even if the software is bug-free, administrators may make security mistakes such as misconfiguring permissions; these bugs and mistakes virtually guarantee that every application will eventually be compromised. To clean up after a successful attack, administrators need to find its entry point, track down its effects, and undo the attack's corruptions while preserving legitimate changes. Today this is all done manually, which results in days of wasted effort with no guarantee that all traces of the attack have been found or that no legitimate changes were lost. To address this problem, we propose that automated intrusion recovery should be an integral part of web application platforms. This work develops several ideas (retroactive patching, automated UI replay, dependency tracking, patch-based auditing, and distributed repair) that together recover from past attacks that exploited a vulnerability, by retroactively fixing the vulnerability and repairing the system state to make it appear as if the vulnerability never existed. Repair tracks down and reverts the effects of the attack on other users within the same application and on other applications, while preserving legitimate changes. Using techniques resulting from these ideas, an administrator can easily recover from past attacks that exploited a bug using nothing more than a patch fixing the bug, with no manual effort on her part to find the attack or track its effects. The same techniques can also recover from attacks that exploit past configuration mistakes; the administrator only has to point out the past request that resulted in the mistake. We built three prototype systems, WARP, POIROT, and AIRE, to explore these ideas. Using these systems, we demonstrate that we can recover from challenging attacks in real distributed web applications with little or no changes to application source code; that recovery time is a fraction of the original execution time for attacks with a few affected requests; and that support for recovery adds modest runtime overhead during the application's normal operation. By Ramesh Chandra. Ph.D.
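    A minimal sketch of the retroactive patching and patch-based auditing idea: re-run past requests against the patched handler and flag those whose outcome changes, since only requests that exploited the fixed bug should differ. The handlers and request log below are illustrative assumptions, not code from the WARP, POIROT, or AIRE prototypes.

```python
# Sketch: patch-based auditing of past requests against a retroactive patch.

def vulnerable_handler(request):
    # Original code: no input validation, so a crafted "role" escalates privileges.
    return {"user": request["user"], "role": request.get("role", "member")}

def patched_handler(request):
    # Security patch: the role is always assigned server-side, never taken from input.
    return {"user": request["user"], "role": "member"}

def patch_based_audit(request_log, old_handler, new_handler):
    """Return the past requests whose responses change under the patched code."""
    suspicious = []
    for request in request_log:
        if old_handler(request) != new_handler(request):
            suspicious.append(request)
    return suspicious

log = [{"user": "alice"}, {"user": "mallory", "role": "admin"}]
print(patch_based_audit(log, vulnerable_handler, patched_handler))
# [{'user': 'mallory', 'role': 'admin'}]  -> candidate intrusion to repair
```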

    Distributed authorization in loosely coupled data federation

    The underlying data model of many integrated information systems is a collection of inter-operable and autonomous database systems, namely, a loosely coupled data federation. A challenging security issue in designing such a data federation is to ensure the integrity and confidentiality of data stored in remote databases through distributed authorization of users. Existing solutions for centralized databases are not directly applicable here due to the lack of a centralized authority, and most solutions designed for outsourced databases cannot easily support the frequent updates essential to a data federation. In this thesis, we provide a solution in three steps. First, we devise an architecture to support fully distributed, fine-grained, and data-dependent authorization in loosely coupled data federations. For this purpose, we adapt the integrity-lock architecture originally designed for multilevel secure databases to data federations. Second, we propose an integrity mechanism to detect, localize, and verify updates of data stored in remote databases while reducing communication overhead and limiting the impact of unauthorized updates. We realize the mechanism as a three-stage procedure based on a grid of Merkle Hash Trees built on relational tables. Third, we present a confidentiality mechanism to control remote users' access to sensitive data while allowing authorization policies to be frequently updated. We achieve this objective through a new over-encryption scheme based on secret sharing. Finally, we evaluate the proposed architecture and mechanisms through experiments.
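    A minimal sketch of detecting and localizing an unauthorized row update with a Merkle Hash Tree built over a relational table; a single tree is used here for brevity rather than the thesis' grid of trees, and all names are illustrative.

```python
# Sketch: Merkle root over table rows for detection, leaf comparison for localization.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hashes(rows):
    return [h(repr(row).encode()) for row in rows]

def merkle_root(nodes):
    if len(nodes) == 1:
        return nodes[0]
    if len(nodes) % 2:
        nodes = nodes + [nodes[-1]]          # duplicate last node on odd levels
    return merkle_root([h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)])

# The data owner keeps the root computed over the remote table's rows.
table = [("alice", 100), ("bob", 200), ("carol", 300), ("dave", 400)]
trusted_root = merkle_root(leaf_hashes(table))

# Later, an unauthorized update changes bob's balance at the remote site.
tampered = [("alice", 100), ("bob", 999), ("carol", 300), ("dave", 400)]
print(merkle_root(leaf_hashes(tampered)) != trusted_root)   # True -> detection

# Localization: compare leaf hashes to narrow the change down to one row.
diffs = [i for i, (a, b) in enumerate(zip(leaf_hashes(table), leaf_hashes(tampered))) if a != b]
print(diffs)   # [1] -> the second row was modified
```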

    Using message authentication functions to detect and recover from integrity violations in access to relational tables

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação, Florianópolis, 2014. This work proposes methods for verifying data integrity in real time, simplifying and automating an auditing process. The proposed methods use low-cost cryptographic functions, are independent of the database system, and allow the detection of malicious updates, deletions, and insertions, as well as verification of the freshness of records and recovery of improperly modified tuples. Malicious changes to the data are detected through MAC functions computed over the concatenation of the attributes of each tuple in the tables. Deletions are detected through a newly proposed algorithm, called CMAC, which chains together all the rows of a table so that rows cannot be inserted or removed without knowledge of the key used to compute the CMAC. This work also explores different architectures for implementing the proposed methods, presenting the advantages and disadvantages of each. In addition, the proposed methods are evaluated, showing that the cost of computing and storing the data needed to control integrity is very small. Abstract: Unauthorized changes to database content can result in significant losses for organizations and individuals. As a result, there is a need for mechanisms capable of assuring the integrity of stored data. Meanwhile, existing solutions have requirements that are difficult to meet in real-world environments, such as modifications to the database engine or the use of costly cryptographic functions. In this master's thesis, we propose a technique that uses low-cost cryptographic functions and is independent of the database engine. Our approach allows for the detection of malicious update operations, as well as insertion and deletion operations. This is achieved by inserting a small amount of protection data into the database. The protection data is used by the data owner to secure data access by applying Message Authentication Codes. In addition, our experiments have shown that the overhead of calculating and storing the protection data is very low.
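    A minimal sketch of per-tuple MACs plus row chaining for tamper detection, in the spirit of the MAC/CMAC scheme described above; the chaining construction and key handling below are simplified illustrations, not the dissertation's exact algorithm.

```python
# Sketch: per-tuple HMACs detect updates; a chained MAC over all rows detects
# insertions and deletions.
import hmac, hashlib

KEY = b"owner-secret-key"   # illustrative key; real deployments need key management

def tuple_mac(row):
    """MAC over the concatenation of a tuple's attributes (detects updates)."""
    msg = "|".join(str(attr) for attr in row).encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def chain_mac(rows):
    """Chain all rows of the table so that inserted or removed rows are detectable."""
    digest = b"\x00" * 32
    for row in rows:
        digest = hmac.new(KEY, digest + tuple_mac(row).encode(), hashlib.sha256).digest()
    return digest.hex()

table = [(1, "alice", 100.0), (2, "bob", 250.0)]
stored_macs = [tuple_mac(r) for r in table]
stored_chain = chain_mac(table)

# An unauthorized update to bob's balance is caught by the per-tuple MAC...
table[1] = (2, "bob", 9999.0)
print(tuple_mac(table[1]) == stored_macs[1])   # False

# ...and a deleted row is caught by the chained MAC over the whole table.
print(chain_mac(table[:1]) == stored_chain)    # False
```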