81 research outputs found

    Reaction to New Security Threat Class

    Full text link
    Each newly identified security threat class triggers new research and development efforts by the scientific and professional communities. In this study, we investigate the rate at which these communities react to newly identified threat classes, as reflected in the number of patents, scientific articles, and professional publications over a long period of time. The following threat classes were studied: Phishing; SQL Injection; BotNet; Distributed Denial of Service; and Advanced Persistent Threat. Our findings suggest that in most cases it takes about a year for the scientific community to react to a new threat class and more than two years for industry to react with patents. Since new products follow patents, it is reasonable to expect a window of approximately two to three years in which no effective product is available to cope with the new threat class.
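    As a minimal sketch of how such a reaction lag can be computed from yearly document counts, the example below takes the year a threat class was first identified and the first year in which related articles and patents appear; all years and counts are hypothetical placeholders, not the study's data.

```python
# Sketch: estimating the reaction lag (in years) between the first identification of
# a threat class and the first related articles/patents. The years and counts below
# are hypothetical placeholders, not the data reported in the study.
first_identified = {"Phishing": 1996, "SQL Injection": 1998, "BotNet": 1999}

# yearly_counts[threat][source] -> {year: number of documents in that year}
yearly_counts = {
    "Phishing":      {"articles": {1997: 2, 1998: 5}, "patents": {1999: 1, 2000: 4}},
    "SQL Injection": {"articles": {1999: 1, 2000: 3}, "patents": {2001: 2}},
    "BotNet":        {"articles": {2000: 1},          "patents": {2002: 1}},
}

def reaction_lag(threat: str, source: str) -> int:
    """Years between first identification and the first document of a given type."""
    first_doc_year = min(year for year, n in yearly_counts[threat][source].items() if n > 0)
    return first_doc_year - first_identified[threat]

for threat in first_identified:
    print(threat,
          "articles lag:", reaction_lag(threat, "articles"),
          "patents lag:", reaction_lag(threat, "patents"))
```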

    Understanding Software Obfuscation and Diversification as Defensive Measures for the Cybersecurity of Internet of Things

    Get PDF
    Internet of Things (IoT) has emerged as an umbrella term for connecting smart everyday objects (such as washing machines, toilets and sound systems), sensors, and industrial machines to the internet. While IoT devices hold the potential to greatly enhance quality of life by automating and optimizing mundane tasks, they also raise a great many security and privacy challenges. For this reason, practitioners and academics have explored various ways to enhance the multi-layered security of IoT devices. One of these methods is obfuscation, which has been successfully applied to make accessing devices more difficult for adversaries. In this study, we systematically processed the literature on applying obfuscation and diversification to improve IoT cybersecurity (81 articles) and clustered this research according to the obfuscation target (code, data, interface, location, traffic). We then conducted a follow-up bibliometric review of the entire research profile of IoT cybersecurity (3,682 articles) to understand how these obfuscation and diversification approaches relate to the general cybersecurity landscape and solutions of IoT. We also derive a comprehensive list of benefits and shortcomings of enhancing IoT security through diversification, and present points of departure for future research.

    BIBLIOMETRIC STUDY ON THE IMPORTANCE OF ENDPOINT SECURITY IN COMPANIES

    Get PDF
    This bibliometric study addresses the importance of endpoint security in companies, considering the growing use of information technologies for both business and personal purposes. It highlights the need to protect endpoints such as computers, mobile devices, servers, and IoT devices. Endpoint security encompasses measures such as monitoring the files and binaries stored and running on the machine using antivirus, data encryption, and threat detection solutions. The literature review underscores the importance of terminology and best practices, and highlights the application of graph-based approaches to strengthen security in medical information networks. Tools such as EDR are cited as essential, especially for small and medium-sized companies. The study emphasizes the importance of business continuity in the face of cyber threats, highlighting the role of artificial intelligence, machine learning, and frameworks. It takes a bibliometric approach, using a specific database to collect bibliometric data on scientific publications published between 2017 and 2023. As a basis for the study, the words “cybersecurity”, “endpoint security”, “business continuity”, and “business” were used. Various analyses of the bibliometric results are also presented, including the number of publications by type of document, the scientific journals with the highest number of publications, the countries with the highest number of publications, the number of publications per author, the most cited articles, and the occurrence of identified keywords.
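    The kinds of counts reported in such a bibliometric study can be computed with pandas over a bibliographic export, as in the sketch below; the file and column names follow a Scopus-style CSV export and are assumptions, not the study's actual dataset or schema.

```python
# Sketch: typical bibliometric counts computed with pandas over a bibliographic
# export. The file name and column names are Scopus-style assumptions, not the
# study's actual schema.
import pandas as pd

df = pd.read_csv("endpoint_security_2017_2023.csv")  # hypothetical export file

# Restrict to the study period.
df = df[df["Year"].between(2017, 2023)]

# Publications by document type (article, conference paper, review, ...).
by_type = df["Document Type"].value_counts()

# Journals and countries with the most publications.
top_sources = df["Source title"].value_counts().head(10)
top_countries = df["Country"].value_counts().head(10)

# Most cited articles.
most_cited = df.sort_values("Cited by", ascending=False).head(10)[["Title", "Cited by"]]

# Keyword occurrence (semicolon-separated author keywords).
keywords = (
    df["Author Keywords"].dropna()
      .str.split(";").explode()
      .str.strip().str.lower()
      .value_counts()
)

print(by_type, top_sources, top_countries, most_cited, keywords.head(20), sep="\n\n")
```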

    Dynamic and bibliometric analysis of terms identifying the combating financial and cyber fraud system

    Get PDF
    The main purpose of this study is to conduct a dynamic and bibliometric analysis of the main terms that identify the system for combating financial and cyber fraud, in order to identify trends in the formation of social and scientific thought. The review of the scientific literature indicates an increase in the number of scientific publications over the past ten years. It was revealed that the most cited works cover problems associated with cyber threats in everyday life, among which are botnets, cyberbullying, and financial fraud implemented through cryptocurrencies, smart contracts, and the black market on the Internet. Cloud forensics and technical and intellectual analysis are proposed as countermeasures. The research tools were a dynamic analysis of global network user requests, implemented using Google Trends, and a bibliometric analysis of scientific publications by the world’s leading scientists, performed using the VOSviewer analytical package. The search terms “Fraud”, “Finance Fraud”, “Cyber Fraud”, “Finance Cyber Fraud”, “Money Laundering”, “Anti-Money Laundering” and “Anti-Fraud” were analyzed for the period from 08/07/2017 to 08/07/2022. For the bibliometric analysis, two datasets of 2,000 observations each were formed based on queries in the Scopus database for the terms “Cyber Crime” and “Anti-money Laundering”. The results of the dynamic analysis revealed a decrease in the level of interest in fraud and financial fraud since the beginning of 2021, while the trend for cyber fraud is increasing. This led to the conclusion that the pandemic had an impact, causing an increase in cybercrime. The analysis of requests for “Fraud” and “Finance Fraud” by geographical distribution showed that these terms interested users from countries with a significant difference in economic development; that is, representatives of poor countries are potential cyber fraudsters, and developed countries are potential victims of fraud. The bibliometric analysis made it possible to obtain clusters of promising areas of scientific research in the field of cybercrime, among which mathematical and network tools for combating it, general concepts, digitalization and digital forensics, cyber protection, data protection, and authentication and encryption of data are highlighted. At the same time, the focus of research is shifting towards methods of countering cybercrime. Promising directions in the field of money laundering are mathematical methods and information technologies, cryptocurrencies and blockchains, corruption, and financial terrorism, with the greatest potential belonging to money laundering through cryptocurrencies and blockchains. The lessons learned can be useful for improving the strategy of combating financial and cybercrime and forming an analytical basis for the scientific community and practitioners.
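    The dynamic (search-interest) part of such an analysis could be approximated with the unofficial pytrends client for Google Trends, as in the sketch below; this is an assumption about tooling, not the authors' code, and Google Trends accepts at most five terms per request, so the term list is truncated here.

```python
# Sketch: retrieving Google Trends interest for some of the study's search terms
# using the unofficial pytrends client (an assumed stand-in, not the authors' tooling).
# Google Trends allows at most five terms per payload.
from pytrends.request import TrendReq

terms = ["Fraud", "Finance Fraud", "Cyber Fraud",
         "Money Laundering", "Anti-Money Laundering"]

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(terms, timeframe="2017-08-07 2022-08-07")

interest = pytrends.interest_over_time()    # weekly relative interest per term
by_region = pytrends.interest_by_region()   # geographical distribution of interest

print(interest.tail())
print(by_region.sort_values("Cyber Fraud", ascending=False).head(10))
```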

    Enterprise information security risks: a systematic review of the literature

    Get PDF
    Currently, computer security, or cybersecurity, is a relevant aspect of a company's networks and communications; it is therefore important to know the risks and computer security policies that allow unified management of cyber threats that seek only to damage the reputation of organizations in the business sector or profit from their confidential information. The objective of this research is to conduct a systematic review of the literature through articles published in databases such as Scopus and Dimensions. In order to perform a complete documentary analysis, inclusion and exclusion criteria were applied to evaluate the quality of each article. Then, using a quantitative scale, articles were filtered according to author, period, and country of publication, leaving a total of 86 articles from both databases. The methodology used was the one proposed by Kitchenham, and the conclusion reached was that the vast majority of companies do not make a major investment in the purchase of equipment and the improvement of information technology (IT) infrastructure, exposing themselves to cyber-attacks that continue to grow every day. This research provides a reference for researchers, companies, and entrepreneurs so that they can protect their organizations' most important assets.
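    A minimal sketch of how the screening step of a Kitchenham-style review could be implemented is shown below: exports from the two databases are merged, duplicates dropped, and inclusion/exclusion criteria applied. The file names, column names, and thresholds are illustrative assumptions, not the study's actual criteria.

```python
# Sketch: screening step of a Kitchenham-style systematic review. File names,
# column names, and the quality threshold are illustrative assumptions.
import pandas as pd

scopus = pd.read_csv("scopus_export.csv")          # hypothetical export
dimensions = pd.read_csv("dimensions_export.csv")  # hypothetical export

merged = pd.concat([scopus, dimensions], ignore_index=True)
merged = merged.drop_duplicates(subset="DOI")

# Inclusion criteria: publication period and language.
included = merged[merged["Year"].between(2015, 2023) & (merged["Language"] == "English")]

# Exclusion by a quantitative quality scale (e.g. each paper scored 0-5 during review).
included = included[included["Quality score"] >= 3]

print(len(included), "articles retained after screening")
```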

    Selecting Root Exploit Features Using Flying Animal-Inspired Decision

    Get PDF
    Malware is an application that performs malicious activities on a computer system, including mobile devices. Root exploits cause the most damage among all types of malware because they are able to run in stealth mode. A root exploit compromises the core of the operating system, known as the kernel, to bypass the Android security mechanisms. Once it attacks and resides in the kernel, it is able to install other types of malware on the Android device. In order to detect root exploits, it is important to investigate their features to help machine learning predict them accurately. This study proposes flying-animal-inspired methods (bat, firefly, and bee) to automatically search for the exclusive features, and then utilizes these flying-animal-inspired decision features to improve machine learning prediction. Furthermore, a boosting method (AdaBoost) boosts the multilayer perceptron (MLP) into a stronger classifier. The evaluation showed that the best result came from the bee search, which recorded 91.48 percent accuracy, an 82.2 percent true positive rate, and a 0.1 percent false positive rate.
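    As a rough illustration of the wrapper idea behind such flying-animal-inspired feature selection, the sketch below scores candidate feature subsets with a cross-validated MLP and explores neighbouring subsets in a bee-like local search. It uses synthetic data in place of the root-exploit feature set, omits the AdaBoost stage, and is not the authors' algorithm.

```python
# Sketch: a heavily simplified bee-inspired wrapper feature selection. Each candidate
# feature subset is scored with a cross-validated MLP; "bees" explore neighbouring
# subsets by flipping one feature. Synthetic data stands in for the root-exploit
# feature set; this is an illustration of the idea, not the authors' exact method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=30, n_informative=8, random_state=0)

def fitness(mask):
    """Cross-validated MLP accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def neighbor(mask):
    """Flip one feature in or out of the subset (a bee exploring a nearby source)."""
    new = mask.copy()
    new[rng.integers(len(new))] ^= True
    return new

# Scout bees: random initial subsets, followed by a few rounds of local exploration.
bees = [rng.random(X.shape[1]) < 0.3 for _ in range(5)]
best = max(bees, key=fitness)
for _ in range(10):
    candidate = neighbor(best)
    if fitness(candidate) > fitness(best):
        best = candidate

print("selected features:", np.flatnonzero(best))
print("cross-validated accuracy:", round(fitness(best), 3))
```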

    On the subspace learning for network attack detection

    Get PDF
    Doctoral thesis, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2019. The cost of all types of cyberattacks is increasing for global organizations. The White House estimates that malicious cyber activity cost the U.S. economy between US$57 billion and US$109 billion in 2016. Recently, it is possible to observe an increase in the number of Denial of Service (DoS), botnet, malicious insider, and ransomware attacks. Accenture argues that 89% of survey respondents believe breakthrough technologies, like artificial intelligence, machine learning, and user behavior analytics, are essential for securing their organizations. To face adversarial models, novel network attacks, and the countermeasures attackers use to avoid detection, it is possible to adopt unsupervised or semi-supervised approaches for network anomaly detection by means of behavioral analysis, where known anomalies are not necessary for training models. Signal processing schemes have been applied to detect malicious traffic in computer networks through unsupervised approaches, showing advances in network traffic analysis, network attack detection, and network intrusion detection systems. Anomalies can be hard to identify and separate from normal data due to their rare occurrence in comparison to normal events. Imbalanced data can compromise the performance of most standard learning algorithms, creating a bias toward the majority class and reducing the capacity to detect anomalies, which are characterized by the minority class. Therefore, anomaly detection algorithms have to be highly discriminating, robust to corruption, and able to deal with the imbalanced data problem. Some widely adopted algorithms for anomaly detection assume Gaussian-distributed data for legitimate observations; however, this assumption may not hold for network traffic, which is usually characterized by skewed and heavy-tailed distributions. As a first important contribution, we propose Eigensimilarity, an approach based on signal processing concepts applied to the detection of malicious traffic in computer networks. We evaluate the accuracy and performance of the proposed framework on a simulated scenario and on the DARPA 1998 data set. The experiments show that synflood, fraggle, and port scan attacks can be detected accurately and in great detail by Eigensimilarity, in an automatic and blind fashion, i.e. in an unsupervised approach. Considering that accounting for skewness improves anomaly detection in imbalanced and skewed data, such as network traffic, we propose the Moment-based Robust Principal Component Analysis (mRPCA) for network attack detection. The mRPCA is a framework based on distances between contaminated observations and moments computed from a robust subspace learned by Robust Principal Component Analysis (RPCA), in order to detect anomalies in skewed data and network traffic. We evaluate the accuracy of the mRPCA for anomaly detection on simulated data sets with skewed and heavy-tailed distributions, as well as on the CTU-13 data set. The experimental evaluation compares our proposal to widely adopted algorithms for anomaly detection and shows that the distance between robust estimates and contaminated observations can improve anomaly detection on skewed data and network attack detection. Moreover, we propose an architecture and approach to evaluate a proof of concept of Eigensimilarity for malicious behavior detection in mobile applications, in order to detect possible threats in offline corporate mobile clients. We propose scenarios, features, and approaches for threat analysis by means of Eigensimilarity, and evaluate the processing time required to execute Eigensimilarity on mobile devices.
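    The following sketch only illustrates the general subspace idea behind these contributions: learn a low-dimensional subspace from (mostly legitimate) traffic features and flag observations that lie far from it. It uses ordinary PCA and reconstruction error on synthetic data as a simplified stand-in; the thesis's mRPCA relies on Robust PCA and moment-based distances, and Eigensimilarity on signal-processing similarity measures.

```python
# Sketch: subspace-based anomaly detection. A low-dimensional subspace is learned
# from legitimate observations and points far from it are flagged. Ordinary PCA and
# reconstruction error are used here as a simplified stand-in for the thesis's
# RPCA/moment-based approach; the data are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic "legitimate" traffic: points near a 3-dimensional subspace of R^10.
basis = rng.normal(size=(3, 10))
normal = rng.normal(size=(2000, 3)) @ basis + 0.05 * rng.normal(size=(2000, 10))

# A few "attack" observations that leave the legitimate subspace.
attacks = rng.normal(size=(20, 10)) * 3.0
X = np.vstack([normal, attacks])

pca = PCA(n_components=3).fit(normal)

# Distance of each observation to the learned subspace (reconstruction error).
reconstruction = pca.inverse_transform(pca.transform(X))
score = np.linalg.norm(X - reconstruction, axis=1)

# Flag observations whose distance exceeds a high quantile of the normal scores.
threshold = np.quantile(score[: len(normal)], 0.99)
flagged = np.flatnonzero(score > threshold)
print("flagged observations:", flagged)
```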

    Beyond Futures - Festival of Research & Innovation 2024

    Full text link
    © 2024 The Authors. Published by the University of Wolverhampton. This is an open access book available under a Creative Commons licence. Book of abstracts and full paper proceedings for Beyond Futures - Festival of Research and Innovation, the Research Student Conference held Tuesday 16 – Thursday 18 July 2024.