Prevention of SQL Injection Attacks using AWS WAF
SQL injection is one of several code injection techniques used to attack data-driven applications. The attacker injects input into a query in a way the application's programmer did not intend, gaining access to the database and potentially reading, modifying, or deleting users' data. These vulnerabilities stem from a lack of input validation, the most critical aspect of software security, which is often not properly addressed in the design phase of the software development lifecycle. This paper presents different techniques and countermeasures for the detection and prevention of SQL injection attacks. The procedure proposed in the paper is to place a database firewall between the client (user) side and the database server through AWS to block the malicious code injected by attackers.
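The input-validation failure the abstract describes, and the standard application-side fix (parameterized queries), can be sketched as follows. This is an illustrative example, not the paper's AWS WAF approach:

```python
import sqlite3

# Illustrative sketch: how missing input validation enables SQL injection,
# and how parameterized queries prevent it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query.
vulnerable = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())  # leaks every row: [('admin',)]

# Safe: the placeholder keeps the payload as literal data, never as SQL.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```

A database firewall, as proposed in the paper, adds a second line of defense by inspecting queries in transit, but parameterized queries remove the injection point at the source.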
Malware Analysis with Hardware Support (Análise de malware com suporte de hardware)
Advisors: Paulo Lício de Geus, André Ricardo Abed Grégio. Dissertation (master's), Universidade Estadual de Campinas, Instituto de Computação.
Abstract: Today's world is driven by the usage of computer systems, which are present in all aspects of everyday life.
Therefore, the correct working of these systems is essential to ensure the maintenance of the possibilities brought about by technological developments. However, ensuring the correct working of such systems is not an easy task, as many people attempt to subvert systems for their own benefit. The most common kind of subversion against computer systems is the malware attack, which can give an attacker complete control of a machine. The fight against this kind of threat is based on analysis procedures applied to the collected malicious artifacts, allowing incident response and the development of future countermeasures. However, attackers have specialized in circumventing analysis systems and thus keeping their operations active. For this purpose, they employ a series of techniques called anti-analysis, able to prevent the inspection of their malicious code. Among these techniques, I highlight analysis evasion, that is, the usage of samples able to detect the presence of an analysis solution and then hide their malicious behavior. Evasive samples have become popular, and their impact on systems security is considerable, since automatic analyses now require human supervision to find evasion signs, which significantly raises the cost of maintaining a protected system. The most common ways of detecting an analysis environment are: (i) injected code detection, since injection is used by analysts to inspect applications; (ii) virtual machine detection, since virtual machines are used in analysis environments for scalability; (iii) execution side-effect detection, such side effects usually being caused by emulators, which are also used by analysts. To handle evasive malware, analysts have relied on so-called transparent techniques, that is, those which neither require code injection nor cause execution side effects. A way to achieve transparency in an analysis process is to rely on hardware support.
This work therefore covers the application of hardware support to the analysis of evasive threats. In the course of this text, I present an assessment of existing hardware support technologies, including hardware virtual machines, BIOS support, performance monitors, and PCI cards. My critical evaluation of such technologies provides a basis for comparing different usage cases. In addition, I pinpoint development gaps that currently exist. More than that, I fill one of these gaps by proposing to expand the usage of performance monitors for malware monitoring purposes. More specifically, I propose the usage of the BTS monitor to build a tracer and a debugger. The framework proposed and developed in this work is also able to deal with ROP attacks, one of the techniques most commonly used for remote vulnerability exploitation. The framework evaluation shows that no side effects are introduced, thus allowing transparent analysis. Making use of this capability, I demonstrate how protected applications can be inspected and how evasion techniques can be identified.
Transparent and Precise Malware Analysis Using Virtualization: From Theory to Practice
Dynamic analysis is an important technique used in malware analysis and is complementary to static analysis. Thus far, virtualization has been widely adopted for building fine-grained dynamic analysis tools and this trend is expected to continue. Unlike User/Kernel space malware analysis platforms that essentially co-exist with malware, virtualization based platforms benefit from isolation and fine-grained instrumentation support. Isolation makes it more difficult for malware samples to disrupt analysis and fine-grained instrumentation provides analysts with low level details, such as those at the machine instruction level. This in turn supports the development of advanced analysis tools such as dynamic taint analysis and symbolic execution for automatic path exploration.
The major disadvantage of virtualization based malware analysis is the loss of semantic information, also known as the semantic gap problem. To put it differently, since analysis takes place at the virtual machine monitor where only the raw system state (e.g., CPU and memory) is visible, higher level constructs such as processes and files must be reconstructed using the low level information. The collection of techniques used to bridge semantic gaps is known as Virtual Machine Introspection.
Virtualization based analysis platforms can be further separated into emulation and hardware virtualization. Emulators have the advantages of flexibility of analysis tool development and efficiency for fine-grained analysis; however, emulators suffer from the transparency problem. That is, malware can employ methods to determine whether it is executing in an emulated environment versus real hardware and cease operations to disrupt analysis if the machine is emulated. In brief, emulation based dynamic analysis has advantages over User/Kernel space and hardware virtualization based techniques, but it suffers from semantic gap and transparency problems.
These problems have been exacerbated by recent discoveries of anti-emulation malware that detects emulators and Android malware with two semantic gaps, Java and native. Also, it is foreseeable that malware authors will have a similar response to taint analysis. In other words, once taint analysis becomes widely used to understand how malware operates, the authors will create new malware that attacks the imprecisions in taint analysis implementations and induce false-positives and false-negatives in an effort to frustrate analysts.
This dissertation addresses these problems by presenting concepts, methods and techniques that can be used to transparently and precisely analyze both desktop and mobile malware using virtualization. This is achieved in three parts. First, precise heterogeneous record and replay is presented as a means to help emulators benefit from the transparency characteristics of hardware virtualization. This technique is implemented in a tool called V2E that uses KVM for recording and TEMU for replaying and analysis. It was successfully used to analyze real-world anti-emulation malware that evaded analysis using TEMU alone. Second, the design of an emulation based Android malware analysis platform that uses virtual machine introspection to bridge both the Java and native level semantic gaps as well as seamlessly bind the two views together into a single view is presented. The core introspection and instrumentation techniques were implemented in a new analysis platform called DroidScope that is based on the Android emulator. It was successfully used to analyze two real-world Android malware samples that have cooperating Java and native level components. Taint analysis was also used to study their information exfiltration behaviors. Third, formal methods for studying the sources of false-positives and false-negatives in dynamic taint analysis designs and for verifying the correctness of manually defined taint propagation rules are presented. These definitions and methods were successfully used to analyze and compare previously published taint analysis platforms in terms of false-positives and false-negatives.
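The taint-propagation semantics discussed above can be illustrated with a minimal sketch (not V2E's or DroidScope's actual machinery, which operates at the machine-instruction level):

```python
# Minimal sketch of dynamic taint propagation over named values.
class TaintTracker:
    def __init__(self):
        self.shadow = {}  # value name -> set of taint labels

    def taint(self, var, label):
        self.shadow[var] = {label}

    def assign(self, dst, *srcs):
        # Propagation rule: the destination's taint is the union of the
        # sources' taint. Over-approximating this rule produces
        # false positives; under-approximating produces false negatives,
        # which is exactly the imprecision the dissertation formalizes.
        self.shadow[dst] = set().union(
            *(self.shadow.get(s, set()) for s in srcs)
        )

t = TaintTracker()
t.taint("imei", "SENSITIVE")         # source: a device identifier
t.assign("msg", "imei", "greeting")  # msg = greeting + imei
t.assign("pkt", "msg")               # pkt is sent over the network
print(t.shadow["pkt"])               # {'SENSITIVE'} -> exfiltration flagged
```

A sink check (e.g., on network writes) then flags any value whose taint set contains a sensitive label.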
Stronger secrecy for network-facing applications through privilege reduction
Despite significant effort in improving software quality, vulnerabilities and bugs persist in applications. Attackers remotely exploit vulnerabilities in network-facing applications and then disclose and corrupt users' sensitive information that these applications process. Reducing the privilege of application components helps to limit the harm that an attacker may cause by exploiting an application. Privilege reduction, i.e., the Principle of Least Privilege, is a fundamental technique for containing possible exploits of error-prone software components: it entails granting a software component the minimal privilege that it needs to operate. Applying this principle ensures that sensitive data is given only to those software components that indeed require it. This thesis explores how to reduce the privilege of network-facing applications to provide stronger confidentiality and integrity guarantees for sensitive data. First, we look into applying privilege reduction to cryptographic protocol implementations. We address the vital and largely unexamined problem of how to structure implementations of cryptographic protocols to protect sensitive data even when an attacker compromises untrusted components of a protocol implementation. As evidence that the problem is poorly understood, we identified two attacks which succeed in disclosing sensitive data in two state-of-the-art, exploit-resistant cryptographic protocol implementations: the privilege-separated OpenSSH server and the HiStar/DStar DIFC-based SSL web server. We propose practical, general, system-independent principles for structuring protocol implementations to defend against these two attacks. We apply our principles to protect sensitive data from disclosure in the implementations of both the server and client sides of OpenSSH and of the OpenSSL library.
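The privilege-separation pattern described above can be sketched with two components and a deliberately narrow interface. This is an illustration of the principle, not the OpenSSH design itself; in a real implementation the two components run as separate processes (e.g., fork plus dropped privileges) communicating over a pipe:

```python
import hashlib
import hmac

class KeyMonitor:
    """Privileged component: the only holder of the secret key.
    In a real privilege-separated server this runs as a separate,
    minimal process; the untrusted worker can never read the key."""
    def __init__(self, key: bytes):
        self._key = key

    def sign(self, message: bytes) -> str:
        # Narrow interface: only the signing operation is exposed.
        return hmac.new(self._key, message, hashlib.sha256).hexdigest()

class Worker:
    """Untrusted component: parses attacker-controlled input. Even if
    exploited, it holds no key material, only a handle to the monitor."""
    def __init__(self, monitor: KeyMonitor):
        self._monitor = monitor

    def handle(self, request: bytes) -> str:
        return self._monitor.sign(request)

monitor = KeyMonitor(b"hypothetical-secret")  # illustrative key, not real
worker = Worker(monitor)
print(worker.handle(b"client hello"))
```

The design choice is that compromise of `Worker` yields only the ability to request signatures, never to disclose the key, which mirrors the thesis's goal of containing exploits in error-prone components.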
Next, we explore how to reduce the privilege of language runtimes, e.g., the JavaScript language runtime, so as to minimize the risk of their compromise, and thus of the disclosure and corruption of sensitive information. Modern language runtimes are complex software involving such advanced techniques as just-in-time compilation, native-code support routines, garbage collection, and dynamic runtime optimizations. This complexity makes it hard to guarantee the safety of language runtimes, as evidenced by the frequency of the discovery of vulnerabilities in them. We provide new mechanisms that allow sandboxing language runtimes using Software-based Fault Isolation (SFI). In particular, we enable sandboxing of runtime code modification, which modern language runtimes depend on heavily for achieving high performance. We have applied our sandboxing techniques to the V8 JavaScript engine on both the x86-32 and x86-64 architectures, and found that the techniques incur only moderate performance overhead. Finally, we apply privilege reduction within the web browser to secure sensitive data within web applications. Web browsers have become an attractive target for attackers because of their widespread use. There are two principal threats to a user's sensitive data in the browser environment: untrusted third-party extensions and untrusted web pages. Extensions execute with elevated privilege which allows them to read content within all web applications. Thus, a malicious extension author may write extension code that reads sensitive page content and sends it to a remote server he controls. Alternatively, a malicious page author may exploit an honest but buggy extension, thus leveraging its elevated privilege to disclose sensitive information from other origins. We propose enforcing privilege reduction policies on extension JavaScript code to protect web applications' sensitive data from malicious extensions and malicious pages.
We designed ScriptPolice, a policy system for the Chrome browser's V8 JavaScript language runtime, to enforce flexible security policies on JavaScript execution. We restrict the privileges of a variety of extensions and contain any malicious activity, whether introduced by design or injected by a malicious page. The overhead ScriptPolice incurs on extension execution is acceptable: the added page-load latency is so short as to be virtually indistinguishable to users.
Using Virtualisation to Protect Against Zero-Day Attacks
Bal, H.E. [Promotor]; Bos, H.J. [Copromotor]
Identity Management and Authorization Infrastructure in Secure Mobile Access to Electronic Health Records
We live in an age of the mobile paradigm of anytime/anywhere access, as the mobile device is the most ubiquitous device that people now hold. Due to their portability, availability, ease of use, and capacity for communication and for accessing and sharing information across the various domains and areas of our daily lives, the acceptance and adoption of these devices is still growing. However, due to their potential and rising numbers, mobile devices are a growing target for attackers and, like other technologies, mobile applications remain vulnerable.
Health information systems are composed of tools and software to collect, manage, analyze, and process medical information (such as electronic health records and personal health records). Such systems can therefore improve the performance and maintenance of health services, promoting the availability, readability, accessibility, and sharing of vital information about a patient's overall medical history between geographically fragmented health services. Quick access to information is of great importance in the health sector, as it accelerates work processes, resulting in better use of time; additionally, it may increase the quality of care.
However, health information systems store and manage highly sensitive data, which raises serious concerns regarding patients' privacy and safety and may explain the still-increasing number of reports of malicious incidents within the health domain. Data related to health information systems are highly sensitive and subject to severe legal and regulatory restrictions that aim to protect the individual rights and privacy of patients. Alongside this legislation, security requirements must be analyzed and measures implemented. Among the security requirements for accessing health data, secure authentication, identity management, and access control are essential to provide adequate means of protecting data from unauthorized access. However, besides relying on simple authentication models, traditional access control models are commonly based on predefined access policies and roles, and are inflexible. This results in uniform access control decisions across people, device types, environments, situational conditions, enterprises, locations, and time.
Although existing models can meet the needs of health care systems, they still lack components for dynamicity and privacy protection, which leads to undesirable levels of security and prevents the patient from having full and easy control over his privacy. Within this master's thesis, after deep research and review of the state of the art, a novel dynamic access control model was published: the Socio-Technical Risk-Adaptable Access Control modEl (SoTRAACE), which can model the inherent differences and security requirements presented in this thesis. To do this, SoTRAACE aggregates attributes from various domains to help perform a risk assessment at the moment of the request. The assessment of the risk factors identified in this work is based on a Delphi study: a set of security experts from various domains was selected to classify the impact of each attribute that SoTRAACE aggregates on the risk assessment. SoTRAACE was integrated into an architecture with well-founded requirements, based on the best recommendations and standards (OWASP, NIST 800-53, NIST 800-57) as well as on a deep review of the state of the art. The architecture is further subjected to the essential security analysis and threat modeling. As proof of concept, the proposed access control model was implemented within the user-centric architecture, with two mobile prototypes for several types of access by patients and healthcare professionals, as well as the web servers that handle access requests, authentication, and identity management.
The proof of concept shows that the model works as expected, with transparency, assuring privacy and data control to the user without impact on user experience and interaction. Because it is modular, the model can clearly be extended to other industry domains, and new risk levels or attributes can be added. The architecture also works as expected, assuring secure multifactor authentication and secure data sharing/access based on SoTRAACE decisions. The communication channel that SoTRAACE uses was also protected with a digital certificate. Finally, the architecture was tested on different Android versions, with static and dynamic analysis, and with security tools. Future work includes the integration of health data standards and evaluation of the proposed system
by collecting users' opinions after releasing the system to the real world.
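The risk-adaptive idea behind SoTRAACE can be sketched as a weighted scoring of request attributes. The attribute names and weights below are hypothetical, purely for illustration; the thesis derives the real factors and their impacts from a Delphi study:

```python
# Hypothetical sketch of a risk-adaptive access decision in the spirit of
# SoTRAACE: each request attribute contributes a weighted risk score, and
# the decision adapts to the total instead of a fixed role check.
RISK_WEIGHTS = {             # illustrative weights, not from the thesis
    "unmanaged_device": 0.4,
    "public_network":   0.3,
    "outside_hours":    0.2,
    "new_location":     0.1,
}

def assess(request_attrs):
    score = sum(w for attr, w in RISK_WEIGHTS.items()
                if request_attrs.get(attr))
    if score < 0.3:
        return "allow"
    if score < 0.6:
        return "allow_with_mfa"  # step-up (multifactor) authentication
    return "deny"

print(assess({"public_network": True}))                           # allow_with_mfa
print(assess({"unmanaged_device": True, "outside_hours": True}))  # deny
```

The step-up outcome mirrors the architecture's multifactor authentication: moderate risk does not block access outright but demands stronger proof of identity.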
Testing And Verification For The Open Source Release Of The Horizon Simulation Framework
Modeling and simulation tools are exceptionally useful for designing aerospace systems because they allow engineers to test and iterate designs before committing the massive resources required for system realization. The Horizon Simulation Framework (HSF) is a time-driven modeling and simulation tool which attempts to optimize how a modeled system could perform a mission profile. After 15 years of development, the HSF team aims to achieve a wider user and developer base by releasing the software open source. To ensure a successful release, the software required extensive testing, and the main scheduling algorithm required protection against new code breaking old functionality. The goal of the work presented in this thesis is to satisfy these requirements and officially release the software open source. The software was tested with over 80% coverage, and a continuous integration pipeline was set up to run builds and unit/integration tests on every new commit. Finally, supporting documentation and user resources were created and organized to promote community adoption of the software, making Horizon ready for an open source release.
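The regression protection described above boils down to pinning known scheduler behavior in a test that CI runs on every commit. The sketch below is purely illustrative (a toy greedy scheduler, not HSF's actual algorithm or test suite):

```python
# Hypothetical sketch of a scheduler regression test: pin the output of a
# scheduling routine so that new commits cannot silently change old behavior.
def schedule(tasks, capacity):
    """Greedy schedule: pick the highest-value tasks that fit the capacity.

    tasks: list of (name, cost, value) tuples.
    """
    chosen, used = [], 0
    for name, cost, value in sorted(tasks, key=lambda t: -t[2]):
        if used + cost <= capacity:
            chosen.append(name)
            used += cost
    return chosen

def test_schedule_regression():
    tasks = [("img", 3, 10), ("comms", 2, 8), ("cal", 4, 6)]
    # Pinned expected result: if a refactor changes this, CI fails the build.
    assert schedule(tasks, 5) == ["img", "comms"]

test_schedule_regression()
print("regression test passed")
```

Running such tests on every commit is what turns the ">80% coverage" figure into an ongoing guarantee rather than a one-time measurement.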
Automated Testing and Debugging for Big Data Analytics
The prevalence of big data analytics in almost every large-scale software system has generated a substantial push to build data-intensive scalable computing (DISC) frameworks such as Google MapReduce and Apache Spark that can fully harness the power of existing data centers. However, frameworks once used by domain experts are now being leveraged by data scientists, business analysts, and researchers. This shift in user demographics calls for immediate advancements in the development, debugging, and testing practices of big data applications, which are falling behind compared to DISC framework design and implementation. In practice, big data applications often fail because users are unable to test all behaviors emerging from interleaving dataflow operators, user-defined functions, and the framework's code. "Testing based on a random sample" rarely guarantees reliability, and "trial and error" and "print" debugging methods are expensive and time-consuming. Thus, the current practice of developing a big data application must be improved, and the tools built to enhance the developer's productivity must adapt to the distinct characteristics of data-intensive scalable computing. By synthesizing ideas from software engineering and database systems, our hypothesis is that we can design effective and scalable testing and debugging algorithms for big data analytics without compromising the performance and efficiency of the underlying DISC framework. To design such techniques, we investigate how we can build interactive and responsive debugging primitives that significantly reduce the debugging time, yet do not pose much performance overhead on big data applications. Furthermore, we investigate how we can leverage data provenance techniques from databases and fault-isolation algorithms from software engineering to pinpoint the minimal subset of failure-inducing inputs efficiently.
To improve the reliability of big data analytics, we investigate how we can abstract the semantics of dataflow operators and use them in tandem with the semantics of user-defined functions to generate a minimum set of synthetic test inputs capable of revealing more defects than the entire input dataset. To examine the first hypothesis, we introduce interactive, real-time debugging primitives for big data analytics through innovative and scalable debugging features such as simulated breakpoints, dynamic watchpoints, and crash culprit identification. Second, we design a new automated fault localization approach that combines insights from both the software engineering and database literature to bring delta debugging closer to reality in big data applications, by leveraging data provenance and by constructing system optimizations for debugging provenance queries. Lastly, we devise a new symbolic-execution-based white-box testing algorithm for big data applications that abstracts dataflow operators using logical specifications instead of modeling their implementations, and combines them with the semantics of any arbitrary user-defined function. We instantiate the idea of an interactive debugging algorithm as BigDebug, the idea of an automated debugging algorithm as BigSift, and the idea of symbolic-execution-based testing as BigTest. Our investigation shows that the interactive debugging primitives can scale to terabytes: our record-level tracing incurs less than 25% overhead on average and provides up to 100% time saving compared to the baseline replay debugger. Second, we observe that by combining data provenance with delta debugging, we can identify the minimum faulty input in just under 30% of the original job execution time.
Lastly, we verify that by abstracting dataflow operators using logical specifications, we can efficiently generate the most concise test data suitable for local testing while revealing twice as many faults as prior approaches. Our investigations collectively demonstrate that developer productivity can be significantly improved through effective and scalable testing and debugging techniques for big data analytics, without impacting the DISC framework's performance. This dissertation affirms the feasibility of automated debugging and testing techniques for big data analytics, techniques that were previously considered infeasible for large-scale data processing.
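The delta debugging mentioned above can be sketched with a ddmin-style minimizer over input records. This is a simplified illustration of the classic algorithm, not BigSift itself, which additionally prunes the search with data provenance:

```python
# Sketch of delta debugging (ddmin-style): shrink a failing input dataset
# to a small subset on which the failure still reproduces.
def ddmin(records, fails):
    """Return a minimized subset of `records` for which `fails` holds."""
    n = 2
    while len(records) >= 2:
        chunk = max(1, len(records) // n)
        subsets = [records[i:i + chunk] for i in range(0, len(records), chunk)]
        for sub in subsets:
            complement = [r for r in records if r not in sub]
            # If the failure persists without this chunk, discard the chunk.
            if complement and fails(complement):
                records, n = complement, max(n - 1, 2)
                break
        else:
            if n >= len(records):
                break
            n = min(n * 2, len(records))  # refine: try smaller chunks
    return records

# Toy failure: the job "crashes" whenever a negative value is present.
data = [4, 7, -1, 9, 3, 8]
print(ddmin(data, lambda rs: any(r < 0 for r in rs)))  # [-1]
```

Each iteration reruns the (potentially expensive) job on a candidate subset, which is why combining this search with provenance, as the dissertation does, matters for bringing the isolation time down.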