10 research outputs found
Application of FEAST (Feature Selection Toolbox) in IDS (Intrusion Detection Systems)
Security in computer networks has become a critical concern for many organizations, but preserving data integrity demands time and significant economic investment. Consequently, several hardware and software solutions have been proposed, although these have sometimes proved inefficient at detecting attacks. This paper presents research results obtained by implementing algorithms from FEAST, a Matlab toolbox, with the purpose of selecting the method with the best precision for detecting different attacks using the fewest features. The NSL-KDD dataset was taken as a reference. The Relief method obtained the best precision levels for attack detection: 86.20% (NORMAL), 85.71% (DOS), 88.42% (PROBE), 93.11% (U2R) and 90.07% (R2L), which makes it a promising technique for feature selection in data network intrusion detection
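The paper's Relief scores were produced with the FEAST Matlab toolbox; as a rough illustration of the idea behind Relief, the following is a minimal Python sketch (the function name and synthetic data are illustrative, not the paper's code). A feature's weight grows when the nearest neighbor of the other class differs on it and shrinks when the nearest same-class neighbor does:

```python
import numpy as np

def relief_scores(X, y, n_iters=None, rng=None):
    """Minimal Relief feature scoring sketch (binary classes).

    For each sampled instance, each feature's weight is increased by its
    distance to the nearest miss (other class) and decreased by its
    distance to the nearest hit (same class), normalized by feature span.
    """
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    n_iters = n_iters or n
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0  # avoid division by zero for constant features
    w = np.zeros(d)
    for i in rng.integers(0, n, size=n_iters):
        dist = np.abs(X - X[i]).sum(axis=1)  # L1 distance to every instance
        dist[i] = np.inf                     # exclude the instance itself
        same, other = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dist, np.inf))
        miss = np.argmin(np.where(other, dist, np.inf))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / (span * n_iters)
    return w
```

On synthetic data where only the first feature separates the classes, the first feature should receive the highest weight.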
Feature selection, learning metrics and dimension reduction in training and classification processes in intrusion detection systems
This research presents an IDS prototype in Matlab that assesses the network traffic connections contained in the NSL-KDD dataset, comparing feature selection techniques available in the FEAST toolbox and refining prior results by applying the dimension reduction technique ISOMAP. The classification process used a supervised learning technique called Support Vector Machines (SVM). The comparative analysis of detection rates by attack category shows that the combination MRMR+PCA+SVM (selection, reduction and classification techniques) obtained the most promising results, using only 5 of the 41 features available in the dataset. The results obtained were: 85.42% normal traffic, 80.77% DoS, 90.41% Probe, 91.78% U2R and 83.25% R2L
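A selection-reduction-classification chain like the MRMR+PCA+SVM combination above can be sketched as a scikit-learn pipeline. This is a hypothetical sketch, not the paper's implementation: mRMR is not available in scikit-learn, so mutual-information ranking stands in for it, and the toy data merely mimics the 41-feature shape of NSL-KDD:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy data standing in for the 41-feature NSL-KDD connection records.
X, y = make_classification(n_samples=400, n_features=41, n_informative=8,
                           random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(mutual_info_classif, k=5)),  # stand-in for mRMR
    ("reduce", PCA(n_components=3)),                    # dimension reduction
    ("classify", SVC(kernel="rbf")),                    # SVM classifier
])
acc = cross_val_score(pipe, X, y, cv=5).mean()
```

The pipeline keeps selection, reduction and classification in one object, so cross-validation refits all three stages on each training fold and avoids leaking test data into the feature selection.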
Classification and features selection method for obesity level prediction
Obesity has become one of the world's largest health issues; rich and poor countries alike have, each year, larger populations with this condition. According to the World Health Organization (WHO), obesity and overweight are defined as abnormal or excessive fat accumulation that may impair health, and their prevalence has nearly tripled since 1975. Data mining and its techniques have become a strong scientific field for analyzing huge data sources and providing new information about patterns and behaviors in the population. This study uses data mining techniques to build a model for obesity prediction, using a dataset based on a survey of college students in several countries. After cleaning and transforming the data, a set of classification methods was implemented (Logistic Model Tree - LMT, Random Forest - RF, Multi-Layer Perceptron - MLP and Support Vector Machines - SVM), together with the feature selection methods InfoGain, GainRatio, Chi-Square and Relief; finally, cross-validation was performed for the training and testing processes. The data showed that LMT had the best precision, obtaining 96.65%, compared to Random Forest (95.62%), MLP (94.41%) and SMO (83.89%), so this study shows that LMT can be used with confidence to analyze obesity and similar data
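A classifier comparison with cross-validation like the one described above can be sketched with scikit-learn. This is an illustrative sketch, not the study's code: LMT is a Weka algorithm with no scikit-learn counterpart, so only Random Forest, MLP and SVM analogues are compared, on toy data standing in for the survey:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy stand-in for the cleaned survey data (features already numeric).
X, y = make_classification(n_samples=300, n_features=16, n_informative=6,
                           random_state=0)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(max_iter=1000, random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVC()),
}
# 10-fold cross-validation gives each model a mean accuracy to compare.
scores = {name: cross_val_score(m, X, y, cv=10).mean()
          for name, m in models.items()}
```

Scaling is wrapped into the MLP and SVM pipelines because both are sensitive to feature magnitudes, while tree ensembles are not.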
A set of software tools to build an author assessment package on Moodle: Implementing the AEEA proposal
A set of new types of assessment is required for learning management systems (LMSs), and there is a need for a way to assess lifelong adaptive competencies. Proposed solutions to these problems need to preserve the interoperability, reusability, efficiency and abstract modeling already present in LMSs. This paper introduces a set of software tools for an author assessment package on the LMS Moodle, developed as part of the adaptive e-learning engine architecture (AEEA). The principal features of this set are: 1) the set avoids editing items for a 360-degree feedback evaluation, 2) whole items and tests are linked to levels of competency acquisition, 3) the competency-based e-assessment data model is based on e-learning specifications and complemented with XML data on the appraised competencies, 4) items and tests are stored in repositories, and 5) the tools are integrated within Moodle to facilitate the design of an assessment plan
Agile testing practices in software quality: State of the art review
This paper reviews articles related to agile testing practices in software quality, looking for theoretical information and real cases of testing applied in a modern context, and comparing them with standard procedures while taking into account their advantages and relevant features. As a final result, we determine that agile practices in software quality enjoy wide acceptance, and many companies have chosen to use them for their benefits and impact on software development processes in several real applications, not necessarily IT governance ones, since other kinds of technical applications have also shown excellent results in testing
Implementation of MoProSoft levels I and II in software development companies of the Colombian Caribbean: a regional commitment to software product quality
Currently, over 90% of the world's software development market consists of SMEs. These organizations usually see the implementation of methodologies as 'too heavy' to be adopted in their daily operations. Thus, a model adapted to the needs of SME software developers should focus primarily on continuous improvement, both of the software development processes and of other fundamental aspects of the organization that support quality, resulting in high-quality products that are competitive in national and international markets. In accordance with the above, this research paper first gives context on the different quality models in the area of software development, then outlines the features of the MoProSoft model, then describes the process of implementing the methodology in organizations, proposes improvements for more efficient results, and presents the conclusions of the research project
Workshops on algorithms and data structures based on UML
The term algorithm comes from the Latin algorithmus, whose root in turn derives from the name of the Persian mathematician, astronomer and geographer Abu Abdallah Muhammad ibn Músa al-Khwarizmi, commonly known as Al-Khwarizmi, who lived approximately between 780 and 850 AD. An algorithm is an ordered, finite sequence of logical steps, free of ambiguity, used to solve a given problem. Data structures, for their part, make it possible to store and manage sets of data that can be processed to generate information. Algorithms are complemented by data structures, making it possible to solve problems arising in different disciplines by applying techniques from computer science
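The interplay described above, a finite, unambiguous procedure operating on a stored collection of data, can be illustrated with a classic textbook example (a sketch added for illustration, not material from the workshops themselves):

```python
def binary_search(items, target):
    """Classic algorithm over a data structure: each step halves the
    search range of a sorted list, so the procedure is finite and
    unambiguous, and it either finds the target's index or returns -1."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1   # target can only be in the upper half
        else:
            high = mid - 1  # target can only be in the lower half
    return -1
```

The algorithm relies on the data structure's property (a sorted sequence) to guarantee correctness, illustrating how the two complement each other.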
Semi-supervised adaptive method for human activities recognition (HAR)
Using sensors and mobile devices integrated with hardware and software tools for Human Activity Recognition (HAR) is a growing scientific field; analysis based on this information has promising benefits for detecting regular and irregular behaviors in individuals during their daily activities. In this study, the Van Kasteren dataset was used for the experimental stage, and all data was processed using the data mining classification methods Decision Trees (DT), Support Vector Machines (SVM) and Naïve Bayes (NB). These methods were applied during the training and validation processes of the proposed methodology, and the results showed that all three methods successfully identified the clusters associated with the activities contained in the Van Kasteren dataset. The Support Vector Machines (SVM) method showed the best results on the evaluation metrics: True Positive Rate (TPR) 99.2%, False Positive Rate (FPR) 0.6%, precision (99.2%), coverage (99.2%) and F-Measure (98.8%)
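The evaluation metrics reported above follow their standard binary-classification definitions ('coverage' corresponding to recall, i.e. TPR). A minimal sketch, assuming a positive class labeled 1 and at least one instance of each class:

```python
def binary_metrics(y_true, y_pred):
    """TPR, FPR, precision and F-measure from binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tpr = tp / (tp + fn)        # true positive rate = recall = coverage
    fpr = fp / (fp + tn)        # false positive rate
    precision = tp / (tp + fp)  # fraction of positive predictions that are right
    f1 = 2 * precision * tpr / (precision + tpr)  # harmonic mean
    return {"TPR": tpr, "FPR": fpr, "precision": precision, "F1": f1}
```

For instance, predictions [1, 1, 0, 1, 0, 0] against truth [1, 1, 1, 0, 0, 0] give TP=2, FP=1, FN=1, TN=2, so TPR, FPR differ even though precision equals TPR here.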
Final Projects Book, 2021, first semester
Undergraduate programs: Civil Engineering, Systems Engineering, Electrical Engineering, Electronic Engineering, Industrial Engineering, Mechanical Engineering