302 research outputs found
Container security in a continuous development environment
The rise of the DevOps movement and the transition from a product economy
to a service economy drove significant changes in the software development
life cycle paradigm, among them the abandonment of the waterfall model in
favor of agile methods. Since DevOps is itself an agile method, it allows us to
monitor current releases, receive constant feedback from clients, and improve
subsequent releases. Despite its extraordinary growth, DevOps still presents
limitations concerning security, which needs to be included in the Continuous
Integration and Continuous Deployment (CI/CD) pipelines used in software
development.
The massive adoption of cloud services and open-source software, the wide
spread of containers and related orchestration, as well as microservice
architectures, broke all conventional models of software development. Thanks
to these technologies, packaging and shipping new software is now done in
short cycles, and releases become available almost instantly to users worldwide.
The usual approach of attaching security at the end of the software development
life cycle (SDLC) is becoming obsolete, pushing the adoption of DevSecOps
or SecDevOps, which injects security into SDLC processes earlier and
prevents security defects or issues from entering production.
This dissertation aims to reduce the impact of microservices’ vulnerabilities by
examining the respective images and containers through a flexible and adaptable
set of analysis tools running in dedicated CI/CD pipelines. This approach
intends to provide a clean and secure collection of microservices for later release
in cloud production environments. To achieve this purpose, we have
developed a solution that allows programming and orchestrating a battery of
tests. A form lets us select several security analysis tools, and the solution
performs this set of tests in a controlled way according to the defined
dependencies. To demonstrate the solution’s effectiveness, we programmed
batteries of tests for different scenarios, defining the security analysis pipeline
to incorporate various tools. Finally, we show security tools working locally,
which, subsequently integrated into our solution, return the same results.
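As a hedged sketch of the orchestration idea the abstract describes (the actual solution is form-driven and runs inside dedicated CI/CD pipelines; the tool names below are made up), a battery of security-analysis tools can be run in an order that respects user-defined dependencies:

```python
# Hedged sketch only: tool names ("image-scan", "lint", ...) are hypothetical.
# It illustrates the core idea: run a battery of security-analysis tools in an
# order that respects user-defined dependencies between them.
from graphlib import TopologicalSorter  # Python 3.9+

def plan_battery(dependencies):
    """dependencies: {tool: set of prerequisite tools}; returns run order."""
    return list(TopologicalSorter(dependencies).static_order())

def run_battery(dependencies, runners):
    """Run each tool after its prerequisites; collect results per tool."""
    results = {}
    for tool in plan_battery(dependencies):
        results[tool] = runners[tool]()  # e.g. invoke a scanner, parse output
    return results
```

In a real pipeline each runner would shell out to an image or dependency scanner and fail the build on findings.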
Hardware-supported malware analysis
Advisors: Paulo Lício de Geus, André Ricardo Abed Grégio. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.
Abstract: Today's world is driven by the usage of computer systems, which are present in all aspects of everyday life.
Therefore, the correct working of these systems is essential to ensure the maintenance of the possibilities brought about by technological developments. However, ensuring the correct working of such systems is not an easy task, as many people attempt to subvert systems for their own benefit. The most common kind of subversion against computer systems is the malware attack, which can give an attacker complete control over a machine. The fight against this kind of threat is based on analysis of the collected malicious artifacts, allowing incident response and the development of future countermeasures. However, attackers have specialized in circumventing analysis systems and thus keeping their operations active. For this purpose, they employ a series of techniques called anti-analysis, able to prevent the inspection of their malicious code. Among these techniques, I highlight evasion of the analysis procedure, that is, the use of samples able to detect the presence of an analysis solution and then hide their malicious behavior. Evasive samples have become popular, and their impact on systems security is considerable, since analyses once fully automatic now require human supervision in order to find evasion signs, which significantly raises the cost of maintaining a protected system. The most common ways of detecting an analysis environment are: (i) injected-code detection, since injection is used by analysts to inspect applications; (ii) virtual-machine detection, since virtual machines are used in analysis environments for scalability; (iii) execution side-effect detection, such side effects usually being caused by emulators, also used by analysts. To handle evasive malware, analysts have relied on so-called transparent techniques, that is, those which neither require code injection nor cause execution side effects. One way to achieve transparency in an analysis process is to rely on hardware support.
In this vein, this work covers the application of hardware support to the analysis of evasive threats. In the course of this text, I present an assessment of existing hardware-support technologies, including hardware virtual machines, BIOS support, performance monitors and PCI cards. My critical evaluation of these technologies provides a basis for comparing different usage cases. In addition, I pinpoint development gaps that currently exist, and I fill one of these gaps by proposing to expand the usage of performance monitors for malware-monitoring purposes. More specifically, I propose the usage of the BTS monitor to build a tracer and a debugger. The proposed framework is also able to deal with ROP attacks, one of the techniques most commonly used for remote vulnerability exploitation. The framework evaluation shows that no side effects are introduced, thus allowing transparent analysis. Making use of this capability, I demonstrate how protected applications can be inspected and how evasion techniques can be identified.
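As a toy illustration only (not the thesis' actual framework), Intel's Branch Trace Store logs (source, target) address pairs for taken branches, and a common ROP heuristic over such records checks that every `ret` lands at a known return site, i.e. an address just after a call instruction. All addresses below are synthetic:

```python
# Toy model of a BTS-based ROP check; record layout and addresses are
# assumptions made for illustration.

def find_rop_suspects(branch_records, valid_return_sites):
    """branch_records: (kind, src, dst) tuples; flag rets to bogus targets."""
    return [
        (src, dst)
        for kind, src, dst in branch_records
        if kind == "ret" and dst not in valid_return_sites
    ]
```

A return to an address that no call precedes is the hallmark of a ROP gadget chain, which is why shadow-stack-style checks over branch records catch it.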
Spatial Hypermedia as a programming environment
This thesis investigates the possibilities opened to a programmer when their programming environment not only utilises Spatial Hypermedia functionality, but embraces it as a core component. Designed and built to explore these possibilities, SpIDER (standing for Spatial Integrated Development Environment Research) is an IDE featuring not only traditional functionality such as content assist and debugging support but also multimedia integration and free-form spatial code layout. Such functionality allows programmers to visually communicate aspects of the intent and structure of their code that would be tedious—and in some cases impossible—to achieve in conventional IDEs.
Drawing from literature on Spatial Memory, the design of SpIDER has been driven by the desire to improve the programming experience while also providing a flexible authoring environment for software development. The programmer’s use of Spatial Memory is promoted, in particular, by: utilising fixed sized authoring canvases; providing the capacity for landmarks; exploiting a hierarchical linking system; and having well defined occlusion and spatial stability of authored code.
The key challenge in implementing SpIDER was to devise an algorithm to bridge the gap between spatially expressed source code, and the serial text forms required by compilers. This challenge was met by developing an algorithm that we have called the flow walker. We validated this algorithm through user testing to establish that participants’ interpretation of the meaning of spatially laid out code matched the flow walker’s implementation.
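As a rough intuition only (SpIDER's actual flow walker is more sophisticated, and this toy version and its parameters are assumptions), a minimal serializer might order code fragments placed on a 2D canvas by grouping them into rows of similar vertical position and reading each row left to right, yielding the linear text a compiler needs:

```python
# Toy "flow walker": serialize spatially laid-out code fragments into linear
# source text by row (similar y), then left to right within each row.

def flow_walk(fragments, row_height=40):
    """fragments: list of (x, y, text); returns serialized source text."""
    rows = {}
    for x, y, text in fragments:
        rows.setdefault(y // row_height, []).append((x, text))
    lines = []
    for row_key in sorted(rows):
        lines.extend(text for _, text in sorted(rows[row_key]))
    return "\n".join(lines)
```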
SpIDER can be obtained at: https://sourceforge.net/projects/spatial-ide-research-spide
NPC AI System Based on Gameplay Recordings
A well-optimized non-player character (NPC), as an opponent or a teammate, is a major part of multiplayer games. Most game bots are built upon a rigid system with a limited number of decisions and animations.
Experienced players can distinguish bots from human players and can predict bot movements and strategies, which reduces the quality of the gameplay experience. Therefore, multiplayer gamers favour playing against human players rather than NPCs. The VR game market and VR gamers are still a small fraction of the game industry, and multiplayer VR games suffer loss of their player base when game owners cannot find other players to play with. This study demonstrates the applicability of an Artificial Intelligence (AI) system based on gameplay recordings for a Virtual Reality (VR) First-Person Shooter (FPS) game called Vrena. The subject game has an uncommon way of movement, in which players use grappling hooks to navigate. To imitate VR players’ movements and gestures, an AI system is developed which uses gameplay recordings as navigation data. The system contains three major functionalities: gameplay recording, data refinement, and navigation. The game environment is sliced into cubic sectors to reduce the number of positional states, and gameplay is recorded by time intervals and actions. Produced game logs are segmented into log sections, which are used for creating a look-up table. The look-up table is used for navigating the NPC agent, and the decision mechanism follows the state-action-reward concept. The success of the developed tool was tested via a survey, which provided substantial feedback for improving the system.
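A hedged sketch of the pipeline the abstract describes (function and field names are assumptions): quantize continuous positions into cubic sectors to shrink the positional state space, then build a look-up table mapping each sector to the actions recorded there, which an NPC agent can later query while navigating:

```python
# Illustrative sketch only: sector size and record layout are assumptions.

def to_sector(pos, size=2.0):
    """Map a continuous (x, y, z) position to a discrete cubic sector."""
    return tuple(int(c // size) for c in pos)

def build_lookup(recordings, size=2.0):
    """recordings: list of ((x, y, z), action); returns sector -> [actions]."""
    table = {}
    for pos, action in recordings:
        table.setdefault(to_sector(pos, size), []).append(action)
    return table
```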
A Primer on Architectural Level Fault Tolerance
This paper introduces the fundamental concepts of fault-tolerant computing. Key topics covered are voting, fault detection, clock synchronization, Byzantine Agreement, diagnosis, and reliability analysis. Low-level mechanisms such as Hamming codes or low-level communications protocols are not covered. The paper is tutorial in nature and does not cover any topic in detail; the focus is on rationale and approach rather than detailed exposition.
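A minimal sketch of the voting concept such primers cover: a majority voter masks a faulty replica by returning the value a strict majority of replicas agrees on.

```python
# Majority voting over replica outputs; masks a single faulty replica in a
# triple-modular-redundancy setup.
from collections import Counter

def majority_vote(replica_outputs):
    """Return the strict-majority value, or None when no majority exists."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    return value if count > len(replica_outputs) // 2 else None
```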
The Disappearing Frame: A practice-based investigation into composing virtual environment artworks
Through creative art-making practice, this research seeks to contribute a body of
knowledge to an under researched area by examining how key concepts germane to
computer based, interactive, three-dimensional, virtual environment artworks might
be explicated, potential compositional issues characterised, and possible production
strategies identified and/or proposed. Initial research summarises a range of
classifications pertaining to the function of interactivity within virtual space,
leading to an identification and analysis of a predominant model for composing
virtual environment media, characterised as the "world as model": a methodological
approach to devising interactive and spatial contexts employing visual and
behavioural modes based on the physical world. Following this, alternative forms of
environmental organisation are examined through the development of a series of
artworks beginning with Bodies and Bethlem, and culminating with Reconnoitre: a
networked environment, spatially manifest through performative user input.
Theoretical corollaries to the project are identified, placing it within a wider critical
context concerned with distinguishing between the virtual as a condition of
simulation: a representation of something pre-existing, and the virtual as potential
structure: a phenomenon in itself requiring creative actualisation and orientated
toward change. This distinction is further developed through an analysis of some
existing typologies of interactive computer based art, and used to generalise two
base conditions between which various possibilities for practice might be situated:
the "fluid" and "formatted" virtual.
On the Dissection of Evasive Malware
Complex malware samples feature measures to impede automatic and manual analyses, making their investigation cumbersome. While automatic characterization of malware benefits from recently proposed designs for passive monitoring, the subsequent dissection process still sees human analysts struggling with adversarial behaviors, many of which also closely resemble those studied for automatic systems. This gap affects the day-to-day analysis of complex samples, and researchers have not yet attempted to bridge it. We make a first step down this road by proposing a design that can reconcile transparency requirements with the manipulation capabilities required for dissection. Our open-source prototype BluePill (i) offers a customizable execution environment that remains stealthy when analysts intervene to alter instructions and data or run third-party tools, (ii) is extensible to counteract newly encountered anti-analysis measures using insights from the dissection, and (iii) can accommodate program analyses that aid analysts, as we explore for taint analysis. On a set of highly evasive samples, BluePill proved as stealthy as commercial sandboxes while offering new intervention and customization capabilities for dissection.
InSight2: An Interactive Web Based Platform for Modeling and Analysis of Large Scale Argus Network Flow Data
Monitoring systems are paramount to the proactive detection and mitigation of problems in computer networks related to performance and security. Degraded performance and compromised end-nodes can cost computer networks downtime, data loss and reputation. InSight2 is a platform that models, analyzes and visualizes large-scale Argus network flow data using up-to-date geographical data, organizational information, and emerging threats. It is engineered to meet the needs of network administrators, with flexibility and modularity in mind. Scalability is ensured through multi-core processing built on a robust software architecture. Extendibility is achieved by enabling the end user to enrich flow records with additional user-provided databases. Deployment is streamlined by an automated installation script. State-of-the-art visualizations are presented in a secure, user-friendly web interface, giving the end user greater insight into the network.
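A hedged illustration of the extendibility idea (record and database field names below are assumptions, not InSight2's actual schema): flow records can be enriched with attributes from a user-provided database keyed by source IP address:

```python
# Enrich Argus-style flow records from a per-IP attribute database;
# database attributes override flow fields on key clashes.

def enrich_flows(flows, ip_db):
    """flows: list of dicts with 'src_ip'; ip_db: ip -> extra attributes."""
    return [{**flow, **ip_db.get(flow["src_ip"], {})} for flow in flows]
```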