Automated Network Exploitation Utilizing Bayesian Decision Networks
Computer Network Exploitation (CNE) is the process of using tactics and techniques to penetrate computer systems and networks in order to achieve desired effects. It is currently a manual process requiring significant experience and time, both of which are in limited supply. This thesis presents the Automated Network Discovery and Exploitation System (ANDES), which demonstrates that it is feasible to automate the CNE process. The uniqueness of ANDES lies in its use of Bayesian decision networks to represent the CNE domain and subject-matter-expert knowledge. ANDES conducts multiple execution cycles, each building upon previous action results. A cycle begins by modeling the current belief state using Bayesian decision networks. ANDES uses these networks to select and execute the expected best action. Observed results are used to update the system's current belief state before the next cycle begins. ANDES was tested in a live-execution event taking place within a virtual network environment, where it successfully performed a series of information-gathering and remote-exploit actions across multiple network hosts to gain access to the target.
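The execute-observe-update loop described above can be sketched as follows. The action names, prior probabilities, utilities, and the two-valued observation model are invented placeholders; the thesis's actual Bayesian decision networks encode far richer structure.

```python
# Minimal sketch of an ANDES-style execution cycle (hypothetical names and
# probabilities). Beliefs map candidate actions to P(success); utilities
# weigh their payoff if they succeed.

def expected_best_action(beliefs, utilities):
    """Pick the action maximising expected utility under current beliefs."""
    return max(beliefs, key=lambda a: beliefs[a] * utilities[a])

def bayes_update(prior, likelihood_true, likelihood_false):
    """Posterior P(action succeeds | observation) via Bayes' rule."""
    num = likelihood_true * prior
    return num / (num + likelihood_false * (1.0 - prior))

def run_cycles(beliefs, utilities, observe, cycles=3):
    """Each cycle: select the expected best action, execute, update beliefs."""
    history = []
    for _ in range(cycles):
        action = expected_best_action(beliefs, utilities)
        succeeded = observe(action)          # live result of the action
        # Toy observation model: a success is strong evidence the action works.
        beliefs[action] = bayes_update(beliefs[action],
                                       0.9 if succeeded else 0.1,
                                       0.1 if succeeded else 0.9)
        history.append((action, succeeded))
    return history

beliefs   = {"port_scan": 0.9, "smb_exploit": 0.4, "ssh_bruteforce": 0.2}
utilities = {"port_scan": 1.0, "smb_exploit": 5.0, "ssh_bruteforce": 3.0}
log = run_cycles(beliefs, utilities, observe=lambda a: a == "port_scan")
```

Note how the failed high-utility exploit is demoted after one cycle, and the selector falls back to the reliable action, mirroring the "build upon previous action results" behaviour described above.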
MusA: Using Indoor Positioning and Navigation to Enhance Cultural Experiences in a museum
In recent years there has been a growing interest in the use of multimedia mobile guides in museum environments. Mobile devices have the capabilities to detect the user context and to provide pieces of information suitable to help visitors discover and follow the logical and emotional connections that develop during the visit. In this scenario, location based services (LBS) currently represent an asset, and the choice of the technology used to determine users' position and the definition of methods that can effectively convey information become key issues in the design process. In this work, we present MusA (Museum Assistant), a general framework for the development of multimedia interactive guides for mobile devices. Its main feature is a vision-based indoor positioning system that allows the provision of several LBS, from way-finding to the contextualized communication of cultural contents, aimed at providing a meaningful exploration of exhibits according to visitors' personal interest and curiosity. Starting from a thorough description of the system architecture, the article presents the implementation of two mobile guides, developed to address adults and children respectively, and discusses the evaluation of the user experience and the visitors' appreciation of these applications.
Effects of Dynamic Goals on Agent Performance
Autonomous systems are increasingly being used for complex tasks in dynamic environments. Robust automation needs to be able to establish its current goal and determine when that goal has changed. In human-machine teams, autonomous goal detection is an important component of maintaining shared situational awareness between both parties. This research investigates how different categories of goals affect autonomous change detection in a dynamic environment. To this end, a set of autonomous agents was developed to perform within an environment with multiple possible goals. The agents perform the environmental task while monitoring for goal changes. The experiment tests the agents over a range of goal changes to determine how detection performance is affected by the different categories of goals. Results show that detection is highly dependent on which goals are being switched to and from; the point similarity between goals is the most significant factor in determining change detection time. An additional experiment improved upon the goal agent and demonstrated the importance of having proper perception mechanics for feedback within the environment.
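One way to picture the detection problem is a sliding-window monitor over the agent's feedback. Everything below — the scoring function, the window size, and the one-dimensional "goals" — is a hypothetical toy, not the paper's agents; it illustrates why similar goals are harder to tell apart.

```python
# Hypothetical sketch of goal-change detection via feedback monitoring.
# The agent assumes a goal, scores incoming observations against it, and
# flags a change when a running average of the score drops below a
# threshold; goals that lie close together take longer to distinguish.

def detect_change(observations, goal, score, window=3, threshold=0.5):
    """Return the index at which a goal change is detected, or None."""
    recent = []
    for i, obs in enumerate(observations):
        recent.append(score(obs, goal))
        if len(recent) > window:
            recent.pop(0)
        if len(recent) == window and sum(recent) / window < threshold:
            return i
    return None

# Toy environment: observations are points, a goal is the point they cluster at.
score = lambda obs, goal: 1.0 if abs(obs - goal) < 1.0 else 0.0
stream = [5.0, 5.2, 4.9, 9.8, 10.1, 9.9, 10.2]   # goal switches from 5 to 10
idx = detect_change(stream, goal=5.0, score=score)
```

With a coarser scoring function (e.g. a wider tolerance band), the same switch from 5 to 10 would take more observations to detect, which is the "point similarity" effect the abstract describes.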
Capability-based access control for cyber physical systems
Cyber Physical Systems (CPS) couple digital systems with the physical environment, creating technical, usability, and economic security challenges beyond those of information systems. Their distributed and hierarchical nature, real-time and safety-critical requirements, and limited resources create new vulnerability classes and severely constrain the security solution space. This dissertation explores these challenges, focusing on Industrial Control Systems (ICS), but demonstrating broader applicability to the whole domain.
We begin by systematising the usability and economic challenges to secure ICS. We fingerprint and track more than 10,000 Internet-connected devices over four years and show the population is growing, continuously-connected, and unpatched. We then explore adversarial interest in this vulnerable population. We track 150,000 botnet hosts, sift 70 million underground forum posts, and perform the largest ICS honeypot study to date to demonstrate that the cybercrime community has little competence or interest in the domain. We show that the current heterogeneity, cost, and level of expertise required for large-scale attacks on ICS are economic deterrents when targets in the IoT domain are available.
The ICS landscape is changing, however, and we demonstrate the imminent convergence with the IoT domain as inexpensive hardware, commodity operating systems, and wireless connectivity become standard. Industry's security solution is boundary defence, pushing privilege to firewalls and anomaly detectors; however, this propagates rather than minimises privilege and leaves the hierarchy vulnerable to a single boundary compromise.
In contrast, we propose, implement, and evaluate a security architecture based on distributed capabilities. Specifically, we show that object capabilities, representing physical resources, can be constructed, delegated, and used anywhere in a distributed CPS by composing hardware-enforced architectural capabilities and cryptographic network tokens. Our architecture provides defence-in-depth, minimising privilege at every level of the CPS hierarchy, and both supports and adds integrity protection to legacy CPS protocols. We implement distributed capabilities in robotics and ICS demonstrators, and we show that our architecture adds negligible overhead to realistic integrations and can be implemented without significant modification to existing source code.
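The cryptographic-network-token half of such an architecture can be sketched with a chained-MAC construction (in the spirit of macaroon-style tokens). The key names, field layout, and caveat grammar below are illustrative assumptions, not the dissertation's actual wire format.

```python
import hmac, hashlib

# Sketch: a device mints a capability over one of its physical resources;
# holders delegate it by appending attenuating caveats; only the minting
# device (holding the root key) can verify, by recomputing the MAC chain.

def mint(root_key: bytes, resource: str):
    """Mint a capability for a physical resource (hypothetical format)."""
    caveats = [f"resource={resource}"]
    sig = hmac.new(root_key, caveats[0].encode(), hashlib.sha256).digest()
    return caveats, sig

def attenuate(token, caveat: str):
    """Delegate with reduced privilege by appending a caveat.
    The new MAC is keyed on the old one, so caveats cannot be stripped."""
    caveats, sig = token
    new_sig = hmac.new(sig, caveat.encode(), hashlib.sha256).digest()
    return caveats + [caveat], new_sig

def verify(root_key: bytes, token) -> bool:
    """The resource owner recomputes the full MAC chain from its root key."""
    caveats, sig = token
    cur = hmac.new(root_key, caveats[0].encode(), hashlib.sha256).digest()
    for caveat in caveats[1:]:
        cur = hmac.new(cur, caveat.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(cur, sig)

key = b"plc-root-key"                  # hypothetical device root key
cap = mint(key, "valve-7")
cap = attenuate(cap, "op=read")        # delegated onward, now read-only
assert verify(key, cap)
```

Because each delegation rekeys the MAC on the previous signature, a holder can only ever narrow a capability, never widen it — the "minimising privilege at every level" property the abstract claims, realised here in network-token form only (the hardware-enforced architectural capabilities are out of scope for a sketch).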
Toolflows for Mapping Convolutional Neural Networks on FPGAs: A Survey and Future Directions
In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated into the existing deep learning ecosystem to provide a tunable balance between performance, power consumption, and programmability. In this paper, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods, and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete, and in-depth evaluation of CNN-to-FPGA toolflows.
Comment: Accepted for publication at the ACM Computing Surveys (CSUR) journal, 201
A machine learning approach for smart computer security audit
This thesis presents a novel application of machine learning technology to automate network security audit and, in particular, penetration testing processes. A model-free reinforcement learning approach is presented, characterized by the absence of an environmental model: the model is derived autonomously by the audit system while acting in the tested computer network. The penetration testing process is specified as a Markov decision process (MDP) without defining reward and transition functions for every state/action pair. The presented approach includes the application of traditional and modified Q-learning algorithms. The traditional Q-learning algorithm learns the action-value function, stored in a table, which gives the expected utility of executing a particular action in a particular state of the penetration testing process. The modified Q-learning algorithm differs by incorporating a state-space approximator and representing the action-value function as a linear combination of features. Two deep architectures of the approximator are presented: an autoencoder joined with an artificial neural network (ANN), and an autoencoder joined with a recurrent neural network (RNN). The autoencoder is used to derive the feature set describing audited hosts. The ANN is intended to approximate the state space of the audit process based on the derived features. The RNN is a more advanced version of the approximator and differs by having additional loop connections from the hidden to the input layers of the neural network. Such an architecture incorporates previously executed actions into new inputs, giving the audit system the opportunity to learn sequences of actions leading to the goal of the audit, which is defined as obtaining administrator rights on the host. The model-free reinforcement learning approach based on traditional Q-learning algorithms was also applied to reveal new vulnerabilities, buffer overflows in particular.
The penetration testing system showed the ability to discover a string exploiting a potential vulnerability by learning its formation process on the fly.
To prove the concept and test the efficiency of the approach, an audit tool was developed. The presented results demonstrate the adaptivity of the approach and the performance of the algorithms and deep machine learning architectures. Different sets of hyperparameters are compared graphically to test the ability to converge to the optimal action policy. An action policy is a sequence of actions leading to the audit goal (getting admin rights on the remote host). The testing environment is also presented. It consists of more than 80 virtual machines based on a vSphere virtualization platform. This combination of hosts represents a typical corporate network with a users segment, a demilitarized zone (DMZ), and an external segment (the Internet). The network has typical corporate services available: a web server, a mail server, a file server, SSH, and an SQL server. During the testing process, the audit system acts as an attacker from the Internet.
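The tabular Q-learning baseline described above can be illustrated on a toy three-state "audit" MDP. The states, actions, transitions, and rewards below are invented stand-ins (the real system learns over live hosts), but the update rule Q(s,a) ← Q(s,a) + α(r + γ·max_a' Q(s',a') − Q(s,a)) is the standard one.

```python
import random

# Toy pentest MDP: recon -> foothold -> user -> admin (the audit goal).
# Wrong actions leave the state unchanged with a small penalty.
random.seed(0)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
ACTIONS = ["scan", "exploit", "escalate"]
ENV = {
    ("recon",    "scan"):     ("foothold", 0.0),
    ("foothold", "exploit"):  ("user",     0.0),
    ("user",     "escalate"): ("admin",    1.0),   # admin rights: reward 1
}

Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

for episode in range(200):
    s = "recon"
    while s != "admin":
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: q(s, a))
        s2, r = ENV.get((s, a), (s, -0.1))
        # standard Q-learning update
        target = r + GAMMA * max(q(s2, b) for b in ACTIONS)
        Q[(s, a)] = q(s, a) + ALPHA * (target - q(s, a))
        s = s2

# Greedy policy after learning: the action sequence leading to the goal.
policy = {s: max(ACTIONS, key=lambda a: q(s, a))
          for s in ("recon", "foothold", "user")}
```

The learned greedy policy is exactly the "action policy" the thesis defines: the sequence of actions leading to admin rights. The function-approximation variants replace the `Q` table with autoencoder-derived features fed to an ANN or RNN.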
Best practices in cloud-based Penetration Testing
This thesis addresses and defines best practices in cloud-based penetration testing. Its aim is to give penetration testers guidance on how cloud-based penetration testing differs from traditional penetration testing and on how certain aspects are limited in comparison. In addition, this thesis gives the reader an adequate level of knowledge of the most important topics to consider when an organisation is ordering a penetration test of its cloud-based systems or applications. The focus of this thesis is on the three major cloud service providers (Microsoft Azure, Amazon AWS, and Google Cloud Platform). The purpose of this research is to fill the gap in the scientific literature regarding guidance on cloud-based penetration testing, both for testers and for organisations ordering penetration tests. This thesis uses both theoretical and empirical methods. The result is a focused collection of best practices for a penetration tester conducting penetration testing of cloud-based systems, consisting of topics focused on the planning and execution of penetration testing activities.
Generating Threat Intelligence based on OSINT and a Cyber Threat Unified Taxonomy
Master's thesis in Information Security, Universidade de Lisboa, Faculdade de Ciências, 2020. Today's cyber threats use multiple means of propagation, such as social engineering and e-mail and application vulnerabilities, and often operate in different phases, such as the compromise of a single device, lateral movement across the network, and data exfiltration. These threats are complex and rely on well-advanced tactics to pass unnoticed through traditional security defences such as firewalls. One type of threat that has had a significant impact on the rise of cybercrime is the advanced persistent threat (APT), which has clear objectives, is highly organised, has access to practically unlimited resources, and tends to carry out stealthy attacks over long periods and with multiple attempts. As organisations have become aware that cyberattacks are increasing in quantity and complexity, the use of cyber threat intelligence to combat such attacks is gaining popularity. This trend has followed the evolution of APTs, since they demand a different, more organisation-specific level of response. Cyber threat intelligence can be obtained from diverse sources and in different formats, with open source intelligence (OSINT) being one of the most common. It can also be obtained through dedicated threat intelligence platforms (TIPs), which help to consume, produce, and share cyber threat intelligence. TIPs have multiple advantages that allow organisations to easily carry out the core processes of collecting, enriching, and sharing threat-related information.
However, due to the high volume of OSINT received each day and to the several existing taxonomies for classifying the cyber threats it describes, current TIPs are limited in their ability to process it into quality threat intelligence (TI) useful for combating cyberattacks, which hinders their mass adoption. In turn, security analysts waste considerable time analysing OSINT and classifying it with different taxonomies that sometimes correspond to threats of the same category. This dissertation proposes a solution, named the Automated Event Classification and Correlation Platform (AECCP), for some of the aforementioned TIP limitations, related to threat knowledge management, threat triage, the high volume of shared information, data quality, advanced analytics capabilities, and task automation. The solution seeks to increase the quality of the TI produced by TIPs by classifying it according to a common classification scheme, removing irrelevant (i.e., low-value) information, enriching it with important and relevant data from OSINT sources, and aggregating it into events with similar information. The common classification scheme, named the Unified Taxonomy, was defined in the scope of this dissertation based on an analysis of other public taxonomies known and used in TI sharing. AECCP is a platform composed of components that can work together or individually: a classifier (Classifier), a remover of irrelevant information (Trimmer), an OSINT-based information enricher (Enricher), and an aggregator of events about the same threat, i.e., events containing similar information (Clusterer).
The Classifier analyses events and, based on their information, classifies them under the Unified Taxonomy, in order to catalogue events not yet classified and to eliminate the duplication of same-meaning taxonomies in previously classified events. The Trimmer removes the less pertinent information from events based on their classification. The Enricher enriches events with external, OSINT-derived data that may carry important information related to, but not yet contained in, the event. Finally, the Clusterer aggregates events that share the same context, given their classifications and the information they contain, producing clusters of events that are combined into a single event. This new information gives security analysts easy access and visibility into events similar to the ones they are analysing. The design of the AECCP architecture was grounded in an analysis of three public information feeds containing more than 1,100 cyber security threat events shared by 24 external entities and collected between 2016 and 2019. The Unified Taxonomy used by the Classifier was produced from a detailed analysis of the taxonomies used by these events and of the taxonomies most used in the cyber threat TI sharing community. During this analysis, the most pertinent and relevant attributes for each category of the Unified Taxonomy were also identified, by aggregating the information into groups with similar context and carefully analysing the information contained in each of the more than 1,100 events. The dissertation also presents the algorithms used in the implementation of each of the components that make up AECCP, as well as the evaluation of these components and of the platform.
The evaluation used the same three OSINT feeds as the initial analysis, but with 64 events created and shared more recently than those used in that analysis. The results showed a 72% increase in event classification and an average increase of 54 attributes per event, with a reduction in low-value attributes and a larger increase in higher-value ones, after the events were processed by AECCP. It was also possible to produce 24 events aggregated, enriched, and classified by the other AECCP components. Finally, 6 events with a large volume of information, produced by an external platform named PURE, were processed by AECCP, showing that AECCP can handle large events originating from other platforms. In summary, the dissertation presents four contributions: a common classification scheme, the Unified Taxonomy; the most pertinent attributes for each of the Unified Taxonomy's categories; the design of the AECCP architecture, composed of four modules (Classifier, Trimmer, Enricher, and Clusterer), which seeks to solve five limitations of current TIPs (threat knowledge management, threat triage, the high volume of shared information, data quality, and advanced analytics capabilities and task automation); and its implementation and evaluation.
Today's threats use multiple means of propagation, such as social engineering, email, and application vulnerabilities, and often operate in different phases, such as single-device compromise, network lateral movement, and data exfiltration. These complex threats rely on well-advanced tactics to remain unknown to traditional security defences.
One type that has had a major impact on the rise of cybercrime is the advanced persistent threat (APT), which has clear objectives, is highly organized and well-resourced, and tends to perform long-term stealthy campaigns with repeated attempts. As organizations realize that attacks are increasing in size and complexity, threat intelligence (TI) is growing in popularity and use among them. This trend has followed the evolution of APTs, as they require a different level of response that is more specific to the organization. TI can be obtained in many formats, with open source intelligence (OSINT) being one of the most common, and via threat intelligence platforms (TIPs) that aid organizations in consuming, producing, and sharing TI. TIPs have multiple advantages that enable organisations to easily bootstrap the core processes of collecting, normalising, enriching, correlating, analysing, disseminating, and sharing threat-related information. However, current TIPs have some limitations that prevent their mass adoption. This dissertation proposes a solution to some of these limitations, related to threat knowledge management, limited technology enablement in threat triage, the high volume of shared threat information, data quality, and limited advanced analytics capabilities and task automation. Overall, our solution improves the quality of TI by classifying it according to a common taxonomy, removing information with low value, enriching it with valuable information from OSINT sources, and aggregating it into clusters of events with similar information. This dissertation offers a complete data analysis of three OSINT feeds and the results that led us to design our solution, a detailed description of the architecture of our solution, its implementation, and its validation, including the processing of events from other academic solutions.
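The four-stage AECCP pipeline (Classifier → Trimmer → Enricher → Clusterer) can be sketched as a composition of event transformers. The event fields, taxonomy labels, and matching rules below are invented placeholders, far simpler than the dissertation's components.

```python
# Toy AECCP-style pipeline over dict-shaped events (hypothetical schema).

def classifier(event):
    """Map the event onto a single (toy) Unified Taxonomy category."""
    text = " ".join(event["attributes"]).lower()
    event["category"] = "malware" if "trojan" in text else "other"
    return event

def trimmer(event, low_value=("whitespace", "comment")):
    """Drop attributes with low value for the assigned category."""
    event["attributes"] = [a for a in event["attributes"]
                           if a not in low_value]
    return event

def enricher(event, osint):
    """Append related OSINT attributes not already in the event."""
    extra = osint.get(event["category"], [])
    event["attributes"] += [a for a in extra if a not in event["attributes"]]
    return event

def clusterer(events):
    """Group events sharing a category into merged events."""
    clusters = {}
    for e in events:
        c = clusters.setdefault(e["category"],
                                {"category": e["category"], "attributes": []})
        c["attributes"] = sorted(set(c["attributes"]) | set(e["attributes"]))
    return list(clusters.values())

osint = {"malware": ["hash:abc123"]}          # hypothetical OSINT feed
raw = [{"attributes": ["trojan dropper", "comment"]},
       {"attributes": ["trojan loader"]}]
processed = [enricher(trimmer(classifier(e)), osint) for e in raw]
merged = clusterer(processed)
```

Because each stage takes and returns an event, the components can also run individually, matching the dissertation's claim that the modules work together or on their own.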