112 research outputs found

    Vulnerabilities detection at runtime and continuous auditing

    Master's thesis, Segurança Informática, Universidade de Lisboa, Faculdade de Ciências, 2020. Integrating functionality and security in applications is a challenge today. There is a notion that security is a heavyweight process that requires expertise and consumes developers' time, in contrast with how functionality is perceived. Regardless of these challenges, it is important that organizations address security in their agile processes, because the organization's critical assets must be protected against potential attacks. One way to prevent attacks from succeeding is to integrate tools that can help identify security vulnerabilities during the application development phase and suggest methods for correcting them. According to the Gartner Institute, more than 75% of Internet security problems are due to vulnerabilities exploitable through Web Applications (Web Apps). Most Web Apps are inherently vulnerable because of the technologies adopted in their conception, the way they are designed and developed, the use of various objects and resources, and the integration of other systems. It is frequently observed that the functional aspects serving the business area are prioritized, while security requirements are relegated to the background. Attacks on Web Apps can cause problems with varying levels of impact, for example: service interruption or performance degradation; unauthorized access to confidential and strategic data; theft of information and customers; fraud and modification of data in the flow of operations; direct and indirect financial losses; damage to the company's brand image; loss of customer loyalty; and extraordinary expenses from security incidents. The most common attack risks are generally known and can be anticipated, since they are listed by the Open Web Application Security Project (OWASP); three of the main ones are SQL Injection (SQLi), Cross-Site Scripting (XSS), and Broken Authentication and Session Management. The most serious attacks are those that, when carried out against vulnerabilities of a Web App, are not detected immediately and result in access to sensitive business, infrastructure, or customer data, which can later be leveraged to mount an attack of greater impact, or a fraud. In this context, a new paradigm arises for auditing in web environments. The concept of Continuous Auditing (CA) emerges as a new auditing solution that responds to new needs, a recent topic that has been the object of research and of investment by organizations. The traditional auditing model, based on point-in-time, discontinuous analyses, is increasingly inadequate for the current dynamics of information and the systems that manage it. Constant application updates and changes to system configurations can introduce vulnerabilities and leave an organization susceptible to attack. Therefore, to keep data secure, systems and devices must be checked continuously so that vulnerabilities are identified and reported as they are discovered. This concept represents an enormous shift from the traditional auditing philosophy to a CA paradigm that makes earlier intervention and corrective action possible.
    Organizations therefore need to adopt a methodology that allows independent auditors to provide assurance, through reports, about the occurrence of events throughout the life of a system. These events, when monitored in real time, allow deviations to be detected and reported, increasing the speed and effectiveness of the response by those responsible for decision-making. Organizations are subject to various types of audits with different purposes, such as quality, environment, operations, or management. These processes follow a set time period to validate and analyze what has already been done and the current state of the organization. In information security, CA aims to ensure real-time monitoring of the system and of the risk to the company's assets. In addition, it makes it possible to assess the current security level of the system and to monitor it in real time, increasing the efficiency of vulnerability discovery and mitigation. Penetration tests are generally a complement to CA. In a continuous process, where that invasive behavior is absent, vulnerability analyses are performed with the aid of automatic tools over time to observe and monitor the state of the system and the corrective actions to be taken. The objective of this thesis is to propose an approach and develop a tool to detect Injection Attacks (IA) and Cross-Site Request Forgery (CSRF) in Web Apps, in case the latter rely on the Cross-Origin Resource Sharing (CORS) mechanism. To detect IA, the tool analyzes the external links, passed in the href attribute, to which a Web App connects, in order to verify whether they are compromised. To detect CORS, the tool analyzes all internal links passed in the src attribute to verify whether they invoke the XMLHttpRequest methods used for CORS calls. These two types of attacks are always associated, together contributing to a successful IA. IA is a class of attacks that relies on injecting data into a Web App so that malicious data is executed or interpreted in an unexpected way. Examples of attacks in this class include SQLi, HTML Injection, XSS, Header Injection, Log Injection, and Full Path Disclosure. These are the most common and most successful attacks on the Internet, owing to their numerous variants, large attack surface, and the complexity required to protect against them. CORS is a browser mechanism that allows controlled access to resources located outside a given domain. It extends and adds flexibility to the Same Origin Policy (SOP). However, this mechanism also opens the door to cross-domain attacks if a site's CORS policy is misconfigured or poorly implemented. CORS is not intended to be a protection against cross-request attacks such as CSRF. Given the above regarding IA and CORS, the developed tool detects vulnerabilities in Web Apps under CA. Its fundamental focus is on the external and internal links of the Web App. It runs on a web server, making the service available to users on the Internet and allowing the external and internal links of a given Web App to be analyzed. For external links it detects evidence of IA, assigning a benign or malicious classification to each external link identified.
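    As a concrete illustration of the misconfigured CORS policy mentioned above, the sketch below shows an Express-style middleware that reflects any Origin while also allowing credentials. This is an assumed, illustrative handler, not code from the thesis, but it reproduces the class of misconfiguration that lets an attacker's page read authenticated cross-origin responses.

```ts
// Hypothetical Express middleware illustrating a dangerous CORS policy.
import express from "express";

const app = express();

app.use((req, res, next) => {
  // Dangerous: reflects every Origin, including attacker-controlled ones,
  // while also permitting cookies to be sent with the request.
  res.setHeader("Access-Control-Allow-Origin", req.headers.origin ?? "*");
  res.setHeader("Access-Control-Allow-Credentials", "true");
  next();
});

// Any authenticated endpoint behind this middleware becomes readable from a
// malicious page via fetch(url, { credentials: "include" }).
app.get("/account", (_req, res) => res.json({ balance: 1234 }));

app.listen(3000);
```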
    For internal links, it checks whether there are Cross-Origin calls, more specifically CORS. A user can thus submit the URL of a Web App, which is analyzed by the Vulnerabilities Detector at Runtime and Continuous Auditing (VuDRuCA) tool using a CA mechanism. VuDRuCA employs crawling techniques to navigate the Web App's pages and gather the desired information. It also uses the VirusTotal API to analyze URLs, identifying malicious content detectable by antivirus engines and Web App scanners. As a backend, the tool uses a relational database that stores all collected data so that it can be analyzed, feeding the presentation of indicators. In the evaluation phase, the tool was tested on a sample of 100 Web App URLs that use AJAX technology. For these, the numbers of external and internal sites of each Web App were counted. After a first analysis, 30 Web Apps were chosen for categorization, for measuring the execution times of external- and internal-link detection, and for several other execution-time metrics. Finally, to test the CA engine, 10 Web App URLs were selected, most of which use CORS. For these 10 Web Apps, the Content Management System (CMS) technology in use was identified. The CA module also ran an analysis over a period of 5 days, at 24-hour intervals, to check whether new external links had been introduced or whether any of them had been compromised. For internal links, it checked whether new internal links had appeared and whether they used CORS.

    Nowadays, integrating application agility and security is an extremely challenging process. There is a notion that security is a heavy process, requiring knowledge and consuming the development teams' time. On the other hand, Web Applications (Web Apps) are often acquired through contracted services because companies do not have the necessary software developers. Taking this fact into account, the risk of obtaining a product implemented by poorly qualified developers is a reality. The main objective of this thesis is to propose a solution and develop a tool that will detect some forms of Injection Attacks (IA) or Cross-Site Request Forgery (CSRF) attacks in Web Apps, the latter because Web Apps sometimes employ Cross-Origin Resource Sharing (CORS). Statistics show that these attacks are among the most common security risks in Web Apps. IA is a class of attacks that relies on inputting data into a Web App to make it execute or interpret malicious information unexpectedly. Examples of attacks in this class include SQL Injection (SQLi), Header Injection, Log Injection, and Full Path Disclosure. CORS is used by browsers to allow controlled access to resources located outside a given domain. It extends and adds flexibility to the Same Origin Policy (SOP). However, this mechanism also offers the potential for cross-domain attacks if a site's CORS policy is misconfigured. CORS is not intended to be a protection against cross-request attacks such as CSRF. The developed tool, called VuDRuCA, detects vulnerabilities associated with IA and CORS in Web Apps. It runs on a web server, providing this service to users on the internet and allowing them to analyse the external and internal links of a particular Web App.
    For the external links, it detects evidence of IA, assigning a benign or malicious classification to each identified external link. For internal links, it checks for Cross-Origin calls, specifically CORS. VuDRuCA uses crawling techniques to navigate through the pages of the Web App and obtain the desired information. It also uses the VirusTotal API, a free online service that analyses URLs, enabling the discovery of malicious content detectable by antivirus engines and website scanners. As a backend, it uses a relational database to store the collected data so that it can be retrieved and analysed, reporting the presence of security indicators.
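    The crawl-then-classify loop the abstracts describe can be pictured with the short sketch below. It is a minimal illustration, not VuDRuCA's code, and it assumes Node 18+ (for the global fetch) and a VirusTotal v3 API key supplied in the VT_API_KEY environment variable.

```ts
// Minimal sketch of VuDRuCA-style link triage (illustrative, not from the thesis).

// Pull href (external-candidate) and src (internal-candidate) targets out of
// raw HTML with regexes; a production crawler would use a real DOM parser.
function extractLinks(html: string): { hrefs: string[]; srcs: string[] } {
  const grab = (attr: string): string[] =>
    [...html.matchAll(new RegExp(`${attr}\\s*=\\s*"([^"]+)"`, "gi"))].map(m => m[1]);
  return { hrefs: grab("href"), srcs: grab("src") };
}

// Ask VirusTotal v3 for a URL report; the report id is the unpadded
// base64url encoding of the URL itself.
async function vtVerdict(url: string): Promise<"benign" | "malicious" | "unknown"> {
  const id = Buffer.from(url).toString("base64url");
  const res = await fetch(`https://www.virustotal.com/api/v3/urls/${id}`, {
    headers: { "x-apikey": process.env.VT_API_KEY ?? "" },
  });
  if (!res.ok) return "unknown"; // e.g. URL never scanned, or quota exceeded
  const stats = (await res.json()).data.attributes.last_analysis_stats;
  return stats.malicious > 0 ? "malicious" : "benign";
}

// Per the abstracts: href targets flagged "malicious" count as evidence of IA,
// while src targets would additionally be checked for XMLHttpRequest/CORS use.
```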

    Behind the Code: Identifying Zero-Day Exploits in WordPress

    The rising awareness of cybersecurity among governments and the public underscores the importance of effectively managing security incidents, especially zero-day attacks that exploit previously unknown software vulnerabilities. These zero-day attacks are particularly challenging because they exploit flaws that neither the public nor developers are aware of. In our study, we focused on dynamic application security testing (DAST) to investigate cross-site scripting (XSS) attacks. We closely examined 23 popular WordPress plugins, especially those requiring user or admin interactions, as these are frequent targets for XSS attacks. Our testing uncovered previously unknown zero-day vulnerabilities in three of these plugins. Through controlled environment testing, we accurately identified and thoroughly analyzed these XSS vulnerabilities, revealing their mechanisms, potential impacts, and the conditions under which they could be exploited. One of the most concerning findings was the potential for admin-side attacks, which could lead to multi-site insider threats. Specifically, we found vulnerabilities that allow for the insertion of malicious scripts, creating backdoors that unauthorized users can exploit. We demonstrated the severity of these vulnerabilities by employing a keylogger-based attack vector capable of silently capturing and extracting user data from the compromised plugins. Additionally, we tested a zero-click download strategy, allowing malware to be delivered without any user interaction, further highlighting the risks posed by these vulnerabilities. The National Institute of Standards and Technology (NIST) recognized these vulnerabilities and assigned them CVE numbers: CVE-2023-5119 for the Forminator plugin, CVE-2023-5228 for user registration and contact form issues, and CVE-2023-5955 for another critical plugin flaw. Our study emphasizes the critical importance of proactive security measures, such as rigorous input validation, regular security testing, and timely updates, to mitigate the risks posed by zero-day vulnerabilities. It also highlights the need for developers and administrators to stay vigilant and adopt strong security practices to defend against evolving threats
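    The stored-XSS pattern described above can be made concrete with a small sketch. This is an illustrative example of the vulnerability class, not code from the audited plugins, and the function names are invented for the illustration.

```ts
// Vulnerable pattern: a user-submitted form field is echoed into an admin
// page verbatim, so a value like "<script>exfiltrate(document)</script>"
// executes in the administrator's browser when the submission is viewed.
const renderUnsafe = (field: string): string => `<td>${field}</td>`;

// Mitigation: encode HTML metacharacters before output, so the payload is
// displayed as inert text instead of being parsed as markup.
function escapeHtml(s: string): string {
  const map: Record<string, string> = {
    "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;",
  };
  return s.replace(/[&<>"']/g, c => map[c]);
}

const renderSafe = (field: string): string => `<td>${escapeHtml(field)}</td>`;
```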

    BioMeRSA: The Biology media repository with semantic augmentation

    With computers now capable of easily handling all kinds of multimedia files in vast quantities, and with the Internet well suited to exchanging these files, we face the challenge of organizing this data so as to make the information most useful and accessible. This holds true for media pertaining to the field of biology, where multimedia is particularly useful in education as well as in research. To help address this, a software system with a Web-based interface has been developed that improves the accuracy and specificity of multimedia searching and browsing by integrating semantic data pertaining to the field of biology from the Unified Medical Language System (UMLS). Using the Biology Media Repository with Semantic Augmentation (BioMeRSA) system, users who are considered 'experts' can associate concepts from UMLS with multimedia files submitted by other users, providing semantic context for the files. These annotations are used to retrieve relevant files in the searching and browsing interfaces. A wide variety of image files is currently supported, with limited support for video and audio files.
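    A minimal sketch of the expert-annotation model described above, with assumed type and field names (only the UMLS notion of a Concept Unique Identifier, CUI, comes from the abstract):

```ts
// One expert-vouched association between a UMLS concept and a media file.
interface Annotation {
  cui: string;      // UMLS Concept Unique Identifier
  mediaId: string;  // submitted image/video/audio file
  expertId: string; // expert who made the association
}

// Semantic search and browsing reduce to an index from concept to media.
function indexByConcept(annotations: Annotation[]): Map<string, string[]> {
  const index = new Map<string, string[]>();
  for (const a of annotations) {
    const files = index.get(a.cui) ?? [];
    files.push(a.mediaId);
    index.set(a.cui, files);
  }
  return index;
}
```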

    Detection and Diagnosis of Memory Leaks in Web Applications

    Memory leaks -- the existence of unused memory on the heap of applications -- result in low performance and may, in the worst case, cause applications to crash. The migration of application logic to the client side of modern web applications and the use of JavaScript as the main language for client-side development have made memory leaks in JavaScript an issue for web applications. Significant portions of modern web applications are executed in the client browser, with the server acting only as a data store. Client-side web applications communicate with the server asynchronously, remaining on the same web page during their lifetime. Thus, even minor memory leaks can eventually lead to excessive memory usage, negatively affecting user-perceived response time and possibly causing page crashes. This thesis demonstrates the existence of memory leaks in the client side of large and popular web applications, and develops prototype tools to solve this problem. The first approach taken to address memory leaks in web applications is to detect, diagnose, and fix them during application development. This approach prevents such leaks from happening by finding and removing their causes. To achieve this goal, this thesis introduces LeakSpot, a tool that creates a runtime heap model of JavaScript applications by modifying web-application code in a browser-agnostic way to record object allocations, accesses, and references created on objects. LeakSpot reports the locations in the code that allocate leaked objects, i.e., leaky allocation sites. It also identifies accumulation sites, the points in the program where references are created on objects but are not removed, e.g., where objects are added to a data structure but never taken out. To facilitate debugging and fixing the code, LeakSpot narrows down the space that must be searched to find the cause of the leaks in two ways: first, it refines the list of leaky allocation sites and reports those allocation sites that are the main cause of the leaks; in addition, for every leaked object, it reports all the locations in the program that create a reference to that object. To confirm its usefulness and efficacy experimentally, LeakSpot is used to find and fix memory leaks in JavaScript benchmarks and open-source web applications. In addition, the potential causes of the leaks in large and popular web applications are identified. The performance overhead of LeakSpot in large and popular web applications is also measured, which indirectly demonstrates its scalability. The second approach taken to address memory leaks assumes that leaks may still be present after development. This approach aims to reduce the effects of leaked memory at runtime and improve the memory efficiency of web applications by removing leaked objects or by triggering garbage collection early, using a new tool, MemRed. MemRed automatically detects excessive use of memory at runtime and then takes actions to reduce memory usage. It detects excessive memory use by tracking the size of all objects on the heap. If a problem is detected, MemRed applies recovery actions to reduce the overall size of the heap and hide the effects of excessive memory usage from users. MemRed is implemented as an extension for the Chrome browser. Evaluation demonstrates the effectiveness of MemRed in reducing the memory usage of web applications.
    In summary, the first tool provided in this thesis, LeakSpot, can be used by developers to find and fix memory leaks in JavaScript applications. Using both tools improves the experience of web-application users.
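    To make the allocation-site/accumulation-site vocabulary concrete, here is a hedged sketch of the kind of leak such a tool reports; it is an invented example, not output or code from LeakSpot.

```ts
// A classic single-page-app leak: the allocation site creates an object on
// every server push, and the accumulation site stores a reference that no
// code path ever removes, so the garbage collector can never reclaim it.

declare function render(record: object): void; // stand-in for real UI code

const history: object[] = []; // accumulation site: grows without bound

function onServerPush(payload: string): void {
  const record = { payload, at: Date.now() }; // leaky allocation site
  history.push(record);                       // reference created, never removed
  render(record);
}

// A LeakSpot-style report would point at both commented lines; the fix is to
// bound the structure, e.g. drop the oldest entry once a size limit is hit.
```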

    TOWARDS REDESIGNING WEB BROWSERS WITH SECURITY PRINCIPLES

    Ph.D., Doctor of Philosophy

    Mobile Big Data Analytics in Healthcare

    Mobile and ubiquitous devices are everywhere around us, generating considerable amounts of data. The concept of mobile computing and analytics is expanding because we use mobile devices day in and day out without even realizing it. These mobile devices use Wi-Fi, Bluetooth, or mobile data to be intermittently connected to the world, generating, sending, and receiving data on the move. Modern mobile applications incorporating graphics, video, and audio are the main sources of load on mobile devices, consuming battery, memory, and processing power. Mobile big data analytics covers, for instance, big health data, big location data, big social media data, and big heterogeneous data. Healthcare is undoubtedly one of the most data-intensive industries today, and the challenge lies not only in acquiring, storing, processing, and accessing data, but also in generating useful insights from it. Insights generated from health data may reduce health-monitoring costs, enrich disease diagnosis, therapy, and care, and even save human lives. The challenge in mobile data and big data analytics is how to meet the growing performance demands of these activities while minimizing mobile resource consumption. This thesis proposes a scalable architecture for mobile big data analytics implementing three new algorithms (mobile resources optimization, mobile analytics customization, and mobile offloading) for the effective usage of resources in performing mobile data analytics. The mobile resources optimization algorithm monitors the resources and switches off unused network connections and application services whenever resources are limited. The analytics customization algorithm attempts to save energy by customizing the analytics process using data-aware techniques. Finally, the mobile offloading algorithm decides on the fly whether to process data locally or delegate it to a cloud back-end server. The ultimate goal of this research is to provide healthcare decision makers with the advancements in mobile big data analytics and support them in handling large and heterogeneous health datasets effectively on the move.
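    The on-the-fly offloading decision lends itself to a small sketch; the cost model, thresholds, and names below are assumptions for illustration, not the thesis's actual algorithm.

```ts
// Decide whether a job should run locally or be delegated to the cloud,
// using rough estimates of local compute time vs. data-upload time.
interface DeviceState {
  batteryPct: number;  // remaining battery, 0..100
  uplinkMbps: number;  // current upload bandwidth
  cpuGhzEff: number;   // effective local CPU speed in GHz
}

function shouldOffload(jobBytes: number, jobCycles: number, d: DeviceState): boolean {
  const localSeconds = jobCycles / (d.cpuGhzEff * 1e9);        // compute locally
  const uploadSeconds = (jobBytes * 8) / (d.uplinkMbps * 1e6); // ship the data
  // Offload when shipping the data is cheaper than computing locally, or when
  // the battery is too low to risk a long on-device computation.
  return uploadSeconds < localSeconds || d.batteryPct < 20;
}

// Example: a 2 MB analytics job needing ~4e9 cycles on a 1.5 GHz device with a
// 10 Mbps uplink -> upload (~1.6 s) beats local compute (~2.7 s), so offload.
console.log(shouldOffload(2_000_000, 4e9, { batteryPct: 80, uplinkMbps: 10, cpuGhzEff: 1.5 }));
```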

    Internet of Things. Information Processing in an Increasingly Connected World

    This open access book constitutes the refereed post-conference proceedings of the First IFIP International Cross-Domain Conference on Internet of Things, IFIPIoT 2018, held at the 24th IFIP World Computer Congress, WCC 2018, in Poznan, Poland, in September 2018. The 12 full papers presented were carefully reviewed and selected from 24 submissions. Also included in this volume are 4 WCC 2018 plenary contributions, an invited talk and a position paper from the IFIP domain committee on IoT. The papers cover a wide range of topics from a technology to a business perspective and include among others hardware, software and management aspects, process innovation, privacy, power consumption, architecture, applications

    Efficient algorithms for passive network measurement

    Network monitoring has become a necessity to aid in the management and operation of large networks. Passive network monitoring consists of extracting metrics (or any information of interest) by analyzing the traffic that traverses one or more network links. Extracting information from a high-speed network link is challenging, given the great data volumes and short packet inter-arrival times. These difficulties can be alleviated by using extremely efficient algorithms or by sampling the incoming traffic. This work improves the state of the art in both approaches. For one-way packet delay measurement, we propose a series of improvements over a recently introduced technique called the Lossy Difference Aggregator. A main limitation of this technique is that it does not provide per-flow measurements. We propose a data structure called the Lossy Difference Sketch that is capable of providing such per-flow delay measurements and, unlike recent related works, does not rely on any model of packet delays. In the problem of collecting measurements under the sliding-window model, we focus on estimating the number of active flows and on traffic filtering. Using a common approach, we propose one algorithm for each problem; both obtain high accuracy with significant resource savings. In the traffic sampling area, the selection of the sampling rate is a crucial aspect. The most sensible approach involves dynamically adjusting sampling rates according to network traffic conditions, which is known as adaptive sampling. We propose an algorithm called Cuckoo Sampling that can operate with a fixed memory budget and perform adaptive flow-wise packet sampling. It is based on a very simple data structure and is computationally extremely lightweight. The techniques presented in this work are thoroughly evaluated through a combination of theoretical and experimental analysis.
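    Flow-wise adaptive sampling of the general kind the abstract describes can be sketched as below; this is a generic hash-based illustration of the problem setting under an assumed memory budget, not the Cuckoo Sampling data structure itself.

```ts
// Flow-wise sampling: a flow is either fully tracked or fully ignored,
// decided by hashing its key into [0, 1) and comparing against the rate.
const tracked = new Map<string, number>(); // flowKey -> sampled packet count
let samplingRate = 1.0;                    // fraction of flows selected
const MEMORY_BUDGET = 10_000;              // max flows we can afford to track

// Cheap deterministic FNV-1a hash mapped into [0, 1).
function hash01(key: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < key.length; i++) {
    h = Math.imul(h ^ key.charCodeAt(i), 0x01000193);
  }
  return (h >>> 0) / 2 ** 32;
}

function onPacket(flowKey: string): void {
  if (hash01(flowKey) >= samplingRate) return; // flow not selected
  tracked.set(flowKey, (tracked.get(flowKey) ?? 0) + 1);
  if (tracked.size > MEMORY_BUDGET) {
    samplingRate /= 2; // adapt: select fewer flows under memory pressure
    for (const key of tracked.keys()) {
      if (hash01(key) >= samplingRate) tracked.delete(key); // evict deselected flows
    }
  }
}
```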

    Spartan Daily, January 26, 1996

    Get PDF
    Volume 106, Issue 2