On the Detection Capabilities of Signature-Based Intrusion Detection Systems in the Context of Web Attacks
This work has been partly funded by the research grant PID2020-115199RB-I00 provided by the Spanish Ministry of Science and Innovation under contract MICIN/AEI/10.13039/501100011033, and also by FEDER/Junta de Andalucía-Consejería de Transformación Económica, Industria, Conocimiento y Universidades under project PYC20-RE-087-USE.
Signature-based Intrusion Detection Systems (SIDS) play a crucial role within the arsenal
of security components of most organizations. They can find traces of known attacks in the network
traffic or host events for which patterns or signatures have been pre-established. SIDS include
standard packages of detection rulesets, but only those rules suited to the operational environment
should be activated for optimal performance. However, some organizations might skip this tuning
process and instead activate default off-the-shelf rulesets without understanding their implications and
trade-offs. In this work, we help gain insight into the consequences of using predefined rulesets in the
performance of SIDS. We experimentally explore the performance of three SIDS in the context of web
attacks. In particular, we gauge the detection rate obtained with predefined subsets of rules for Snort,
ModSecurity and Nemesida using seven attack datasets. We also determine the precision and rate of
alerts generated by each detector in a real-life case using a large trace from a public webserver. Results
show that the maximum detection rate achieved by the SIDS under test is insufficient to protect
systems effectively and is lower than expected for known attacks. Our results also indicate that the
choice of predefined settings activated on each detector strongly influences its detection capability
and false alarm rate. Snort and ModSecurity scored either a very poor detection rate (activating
the least-sensitive predefined ruleset) or a very poor precision (activating the full ruleset). We also
found that using various SIDS for a cooperative decision can improve the precision or the detection
rate, but not both. Consequently, it is necessary to reflect upon the role of these open-source SIDS
with default configurations as core elements for protection in the context of web attacks. Finally, we
provide an efficient method for systematically determining which rules to deactivate from a ruleset to
significantly reduce the false alarm rate for a target operational environment. We tested our approach
using Snort’s ruleset in our real-life trace, increasing the precision from 0.015 to 1 in less than 16 h
of work.
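The abstract does not detail the rule-deactivation method. As a rough illustration only (not the authors' actual algorithm; all names and the false-positive threshold are hypothetical), one plausible greedy criterion is to deactivate rules whose alerts on a labeled trace from the target environment are almost exclusively false positives:

```python
from collections import Counter

def rules_to_deactivate(alerts, max_tp_share=0.01):
    """Suggest rules to deactivate: those whose alerts on a labeled
    trace are (almost) all false positives.

    alerts: list of (rule_id, is_false_positive) tuples collected by
    replaying a labeled trace through the detector.
    """
    fired = Counter(rule for rule, _ in alerts)
    false = Counter(rule for rule, fp in alerts if fp)
    deactivate = []
    for rule, total in fired.items():
        # Deactivate rules whose false-positive share is near 100%.
        if false[rule] / total >= 1.0 - max_tp_share:
            deactivate.append(rule)
    return sorted(deactivate)
```

In practice one would also weigh the rule's importance (a rule covering a critical attack may be kept even if noisy), which a frequency-only criterion ignores.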
How Much Training Data Is Enough? A Case Study for HTTP Anomaly-Based Intrusion Detection
Most anomaly-based intrusion detectors rely on models that learn from training datasets whose
quality is crucial to their performance. Although the properties of suitable datasets have been formulated,
the influence of the dataset size on the performance of the anomaly-based detector has received scarce
attention so far. In this work, we investigate the optimal size of a training dataset. This size should be
large enough so that training data is representative of normal behavior, but after that point, collecting more
data may result in unnecessary waste of time and computational resources, not to mention an increased
risk of overtraining. In this spirit, we provide a method to determine when the amount of data collected in
the production environment is representative of normal behavior in the context of a detector of HTTP URI
attacks based on 1-grams. Our approach is founded on a set of indicators related to the statistical properties
of the data. These indicators are periodically calculated during data collection, producing time series that
stabilize when more training data is not expected to translate to better system performance, which indicates
that data collection can be stopped. We present a case study with real-life datasets collected at the University
of Seville (Spain) and a public dataset from the University of Saskatchewan. The application of our method
to these datasets showed that more than 42% of one trace, and almost 20% of another were unnecessarily
collected, thereby showing that our proposed method can be an efficient approach for collecting training
data in the production environment.
This work was supported in part by the Corporación Tecnológica de Andalucía and the University of Seville through projects under Grant CTA 1669/22/2017, Grant PI-1786/22/2018, and Grant PI-1736/22/2017.
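The stopping criterion described above (indicator time series that stabilize) can be sketched minimally as follows; this is an illustration under assumptions, not the paper's actual indicators, and the window and tolerance values are hypothetical:

```python
def collection_can_stop(indicator_series, window=5, tol=0.01):
    """Decide whether a periodically computed indicator (e.g. the
    number of distinct 1-grams observed in the training URIs so far)
    has stabilized: its relative variation over the last `window`
    samples is below `tol`.
    """
    if len(indicator_series) < window:
        return False
    recent = indicator_series[-window:]
    lo, hi = min(recent), max(recent)
    # Relative spread of the recent samples; guard against division by zero.
    return hi == 0 or (hi - lo) / hi <= tol
```

Several such indicators would be tracked in parallel, and collection stopped only when all of them stabilize.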
Monitoring and selection of network security incidents using EDA
One of the greatest challenges faced by network security monitoring systems is the large volume of data, of diverse nature and relevance, that must be processed for adequate presentation to the system administration team, trying to incorporate the most relevant semantic information. This article proposes applying tools derived from exploratory data analysis (EDA) techniques to select the critical events on which the administrator should focus attention. In addition, these tools can provide semantic information about the elements involved and their degree of implication in the selected events. The proposal is presented and evaluated using the VAST 2012 challenge as a case study, obtaining highly satisfactory results.
This work has been partially funded by MICINN through project TEC2011-22579.
Fusing Information from Tickets and Alerts to Improve the Incident Resolution Process
In the context of network incident monitoring, alerts are useful notifications
that provide IT management staff with information about incidents. They are
usually triggered in an automatic manner by network equipment and monitoring systems, thus containing only technical information available to the systems
that are generating them. On the other hand, ticketing systems play a different
role in this context. Tickets represent the business point of view of incidents.
They are usually generated by human intervention and contain enriched semantic information about ongoing and past incidents. In this article, our main
hypothesis is that incorporating ticket information into the alert correlation
process will be beneficial to the incident resolution life-cycle in terms of accuracy, timing, and overall incident description. We propose a methodology to
validate this hypothesis and suggest a solution to the main challenges that appear. The proposed correlation approach is based on the time alignment of the
events (alerts and tickets) that affect common elements in the network. For this
we use real alert and ticket datasets obtained from a large telecommunications
network. The results have shown that using ticket information enhances the
incident resolution process, mainly by reducing and aggregating a higher percentage of alerts compared with standard alert correlation systems that only use
alerts as the main source of information. Finally, we also show the applicability
and usability of this model by applying it to a case study where we analyze the
performance of the management staff.
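The time-alignment idea, grouping alerts and tickets that affect common network elements within a time window, can be sketched as follows. This is a simplified illustration under assumed data shapes, not the paper's actual correlation engine; the field names and window size are hypothetical:

```python
from datetime import datetime, timedelta

def correlate(alerts, tickets, window=timedelta(minutes=30)):
    """Attach each alert to a ticket that refers to the same network
    element and whose opening time lies within `window` of the alert.

    alerts:  list of dicts {"time": datetime, "element": str, ...}
    tickets: list of dicts {"id": str, "time": datetime, "element": str}
    Returns (groups keyed by ticket id, alerts left unmatched).
    """
    groups = {t["id"]: [] for t in tickets}
    unmatched = []
    for a in alerts:
        hit = next((t for t in tickets
                    if t["element"] == a["element"]
                    and abs(a["time"] - t["time"]) <= window), None)
        if hit:
            groups[hit["id"]].append(a)
        else:
            unmatched.append(a)
    return groups, unmatched
```

Alerts absorbed into ticket groups are the ones "reduced and aggregated"; the unmatched remainder is what the operator still has to triage.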
Defense techniques for low-rate DoS attacks against application servers
Low-rate denial of service (DoS) attacks have recently emerged as new strategies for denying networking services. Such attacks are capable of discovering vulnerabilities in protocols or application behavior to carry out a DoS with low-rate traffic. In this paper, we focus on a specific attack: the low-rate DoS attack against application servers, and address the task of finding an effective defense against this attack. Different approaches are explored and four alternatives to defeat these attacks are suggested. The techniques proposed are based on modifying the way in which an application server accepts incoming requests. They focus on protective measures aimed at (i) preventing an attacker from capturing all the positions in the incoming queues of applications, and (ii) randomizing the server operation to eliminate possible vulnerabilities due to predictable behaviors. We extensively describe the suggested techniques, discussing the benefits and drawbacks of each under two criteria: the attack efficiency reduction obtained, and the impact on the normal operation of the server. We evaluate the proposed solutions in both a simulated and a real environment, and provide guidelines for their implementation in a production system.
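Measure (ii), randomizing server operation, might be sketched as a randomized admission policy. This is a hypothetical illustration of the general idea, not one of the four techniques as actually specified in the paper; the parameter name and value are assumptions:

```python
import random

def accept_request(queue, capacity, reserve_prob=0.2):
    """Randomized admission: with probability `reserve_prob`, reject an
    incoming request even when the queue has room, so an attacker
    timing its bursts cannot deterministically occupy every slot the
    instant one frees up. Returns True if the request is admitted.
    """
    if len(queue) >= capacity:
        return False          # queue full: always reject
    if random.random() < reserve_prob:
        return False          # randomized rejection breaks predictability
    return True
```

The trade-off is exactly the one the abstract names: randomization reduces attack efficiency but also rejects a fraction of legitimate requests, impacting normal server operation.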
Association study of meat quality traits and SNP markers in regions related to differentially expressed genes in two muscles of the Avileña-Negra Ibérica cattle breed
In order to identify genomic regions associated with meat quality traits in the Avileña Negra-Ibérica (ANI) breed in two muscles, the genomic regions containing the 205 genes differentially expressed (DE) between Psoas major and Flexor digitorum in this breed, together with 500 kb flanking windows, were taken into account. Genotypes of 397 ANI calves obtained with the Illumina BovineSNP50 platform were available. Markers with a call rate below 98%, or fixed, were excluded from the analysis. A total of 3110 markers lay within the DE genes (104 markers) or within the flanking regions of those genes. The traits included in the analysis were two sensory traits (flavor and tenderness) and two laboratory measurements (intramuscular fat and instrumental tenderness). The association analysis was performed with QXPak 5.0. Twenty markers were found to be associated with the traits studied: twelve with intramuscular fat, five with sensory tenderness, and three with instrumental tenderness; no markers were associated with flavor. None of the markers was associated with more than one trait, nor with the same trait in both muscles. These markers correspond to 23 DE genes: 22 markers lay in flanking regions and one was contained within a DE gene.
Adding help mechanisms to an automatic online judge to support academic mentoring
All online judges suffer from a user feedback problem, including, in its early days, ¡Acepta el reto! (https://www.aceptaelreto.com), developed by UCM professors: when a visitor makes an incorrect submission, the system cannot provide specific information about the error.
This document is the final report of a Teaching Innovation and Quality Improvement Project (PIMCD) from 2017/2018 in which a hint system was deployed in the judge.
Gamification in university education: application to programming courses
Report on the experience of applying gamification techniques in the course "Estructura de Datos y Algoritmos", a compulsory second-year subject in the degree programs taught at the Facultad de Informática of the UCM.
Smart home anomaly-based IDS: Architecture proposal and case study
The complexity and diversity of the technologies involved in the Internet of Things (IoT)
challenge the generalization of security solutions based on anomaly detection, which should
fit the particularities of each context and deployment and allow for performance comparison.
In this work, we provide a flexible architecture based on building blocks suited for detecting
anomalies in the network traffic and the application-layer data exchanged by IoT devices in
the context of Smart Home. Following this architecture, we have defined a particular Intrusion
Detection System (IDS) for a case study that uses a public dataset with the electrical consumption
of 21 home devices over one year. In particular, we have defined ten Indicators of Compromise
(IoC) to detect network attacks and two anomaly detectors to detect false command or data
injection attacks. We have also included a signature-based IDS (Snort) to extend the detection
range to known attacks. We have reproduced eight network attacks (e.g., DoS, scanning) and
four False Command or Data Injection attacks to test our IDS performance. The results show that
all attacks were successfully detected by our IoCs and anomaly detectors with a false positive
rate lower than 0.3%. Signature detection was able to detect only 4 out of 12 attacks. Our
architecture and the IDS developed can be a reference for developing future IDS suited to
different contexts or use cases. Given that we use a public dataset, our contribution can also
serve as a baseline for comparison with new techniques that improve detection performance.
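A minimal sketch of what an anomaly detector over per-device consumption readings could look like (the ten IoCs and the two detectors actually used in the paper are not specified in the abstract; the window length and deviation threshold here are assumptions):

```python
import statistics

def consumption_anomalies(readings, history_days=30, k=3.0):
    """Flag consumption samples that deviate more than `k` standard
    deviations from the rolling mean of the previous `history_days`
    readings: a possible indicator of false-data-injection.

    readings: chronological list of numeric consumption values.
    Returns the indices of the flagged samples.
    """
    flagged = []
    for i in range(history_days, len(readings)):
        hist = readings[i - history_days:i]
        mu = statistics.fmean(hist)
        sigma = statistics.pstdev(hist)
        # Skip constant histories (sigma == 0) to avoid division issues.
        if sigma and abs(readings[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged
```

Such a per-device statistical baseline complements signature detection, which, as the results above show, missed most of the attacks reproduced in the case study.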