Breaking Data Encryption Standard with a Reduced Number of Rounds Using Metaheuristics Differential Cryptanalysis
This article presents the author's own metaheuristic cryptanalytic attack based on differential cryptanalysis (DC) methods and memetic algorithms (MA) that improve the local search process through simulated annealing (SA). The suggested attack is verified on a set of ciphertexts generated with the well-known DES (Data Encryption Standard) reduced to six rounds. The aim of the attack is to guess the last encryption subkey for each of the two characteristics Ω. Knowing the last subkey, it is possible to recreate the complete encryption key and thus decrypt the cryptogram. The suggested approach makes it possible to automatically reject solutions (keys) with the worst fitness values, which significantly reduces the attack search space. The memetic algorithm created in this way (MASA) is compared with other metaheuristic techniques suggested in the literature, in particular the genetic algorithm (NGA) and the classical differential cryptanalysis attack, in terms of the memory and time needed to guess the key. The article also investigates the entropy of the MASA and NGA attacks.
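The abstract's pairing of a population search with simulated-annealing refinement can be sketched compactly. The following is a minimal illustration under assumptions, not the paper's MASA: the fitness function count_matching_pairs is a hypothetical stand-in for scoring a 48-bit candidate subkey against ciphertext pairs and the differential characteristic.

```python
import math
import random

SUBKEY_BITS = 48  # a DES round subkey is 48 bits

def count_matching_pairs(subkey):
    """Hypothetical fitness stand-in: the real attack scores how many
    ciphertext pairs are consistent with the differential characteristic."""
    return random.Random(subkey).random()

def mutate(subkey, n_bits=1):
    """Flip n random bits of the candidate subkey."""
    for _ in range(n_bits):
        subkey ^= 1 << random.randrange(SUBKEY_BITS)
    return subkey

def anneal(subkey, steps=200, t0=1.0, alpha=0.98):
    """Simulated-annealing local search: the memetic refinement step."""
    best, best_fit = subkey, count_matching_pairs(subkey)
    cur, cur_fit, t = best, best_fit, t0
    for _ in range(steps):
        cand = mutate(cur)
        cand_fit = count_matching_pairs(cand)
        if cand_fit > cur_fit or random.random() < math.exp((cand_fit - cur_fit) / t):
            cur, cur_fit = cand, cand_fit
            if cur_fit > best_fit:
                best, best_fit = cur, cur_fit
        t *= alpha  # cool down
    return best, best_fit

def memetic_search(pop_size=20, generations=50):
    pop = [random.getrandbits(SUBKEY_BITS) for _ in range(pop_size)]
    for _ in range(generations):
        refined = sorted((anneal(k) for k in pop), key=lambda kf: kf[1], reverse=True)
        elite = [k for k, _ in refined[: pop_size // 2]]  # reject the worst keys
        pop = elite + [mutate(random.choice(elite), 3) for _ in range(pop_size - len(elite))]
    return max(((k, count_matching_pairs(k)) for k in pop), key=lambda kf: kf[1])

print(memetic_search())
```

The elite-truncation step mirrors the abstract's point about automatically rejecting the worst-scoring keys to shrink the search space.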
Towards Optimal Copyright Protection Using Neural Networks Based Digital Image Watermarking
In the field of digital watermarking, digital image watermarking for copyright protection has attracted a lot of attention in the research community. Digital watermarking comprises various techniques for protecting digital content. Among those techniques, the Discrete Wavelet Transform (DWT) provides higher image imperceptibility and robustness. Over the years, researchers have designed watermarking techniques with robustness in mind, so that the watermark is resistant to image processing operations. Furthermore, a good watermarking technique requires a trade-off between robustness, image quality (imperceptibility), and capacity. In this paper, we present an extensive literature review of existing DWT techniques and of those combined with other techniques, such as neural networks. In addition, we discuss the contribution of neural networks to copyright protection. Finally, we identify the research gaps in current watermarking schemes, making it easier to arrive at optimal techniques that keep the watermark robust to attacks while maintaining imperceptibility, thereby enhancing copyright protection.
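To make the DWT mechanism concrete, here is a minimal sketch of one common embedding scheme, assuming NumPy and PyWavelets (pywt); it spreads a ±1 watermark over the LL sub-band and detects it non-blindly by correlation. It illustrates the transform-domain idea only, not any specific neural-network-assisted scheme from the review.

```python
import numpy as np
import pywt

def embed_watermark(image, watermark, alpha=0.05):
    """image: 2-D float array; watermark: 2-D array of +/-1 matching the LL shape."""
    cA, (cH, cV, cD) = pywt.dwt2(image, "haar")   # one-level 2-D DWT
    cA_marked = cA * (1 + alpha * watermark)      # multiplicative spread-spectrum embed
    return pywt.idwt2((cA_marked, (cH, cV, cD)), "haar")

def detect_watermark(image, original, watermark, alpha=0.05):
    """Non-blind correlation detector: requires the original image."""
    cA_w, _ = pywt.dwt2(image, "haar")
    cA_o, _ = pywt.dwt2(original, "haar")
    residual = (cA_w - cA_o) / (alpha * cA_o + 1e-12)
    return float(np.mean(residual * watermark))   # high value => watermark present

rng = np.random.default_rng(0)
img = rng.random((512, 512))
wm = rng.choice([-1.0, 1.0], size=(256, 256))     # LL sub-band is half-size
marked = embed_watermark(img, wm)
print(detect_watermark(marked, img, wm))          # about 1.0 for the embedded mark
```

Embedding in the LL (approximation) band is the common robustness-oriented choice the survey discusses; perceptual scaling of alpha is where learned components are typically added.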
Image multi-level-thresholding with Mayfly optimization
Image thresholding is a well-established pre-processing methodology, and enhancing image information based on a chosen threshold is always preferred. This research implements mayfly optimization algorithm (MOA) based image multi-level thresholding on a class of benchmark images of dimension 512x512x1. The MOA is a novel methodology with the following algorithm phases: i) initialization, ii) exploration with male mayflies (MM), iii) exploration with female mayflies (FM), iv) offspring generation, and v) termination. The algorithm implements a strict two-step search procedure in which every mayfly is forced to attain the global best solution. The proposed research considers threshold levels from 2 to 5, and the superiority of the results is confirmed by computing essential image quality measures (IQM). The performance of MOA is also compared and validated against other procedures, such as particle swarm optimization (PSO), bacterial foraging optimization (BFO), the firefly algorithm (FA), the bat algorithm (BA), cuckoo search (CS), and moth-flame optimization (MFO); the attained p-values of the Wilcoxon rank test confirm the superiority of the MOA over the other algorithms considered in this work.
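The underlying optimization problem can be stated independently of the optimizer: choose k thresholds that maximize a criterion such as Otsu's between-class variance. The sketch below, assuming NumPy, uses a plain random search as a stand-in where MOA (or PSO, BFO, FA, BA, CS, MFO) would plug in; the mayfly velocity and attraction updates are not reproduced here.

```python
import numpy as np

def otsu_fitness(thresholds, hist):
    """Between-class variance for a set of thresholds over a 256-bin histogram."""
    p = hist / hist.sum()
    levels = np.arange(256)
    edges = [0, *sorted(int(t) for t in thresholds), 256]
    mu_total = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                         # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def random_search(hist, k=3, iters=5000, seed=0):
    """Toy optimizer standing in for the population-based metaheuristic."""
    rng = np.random.default_rng(seed)
    best, best_fit = None, -1.0
    for _ in range(iters):
        cand = rng.choice(np.arange(1, 255), size=k, replace=False)
        fit = otsu_fitness(cand, hist)
        if fit > best_fit:
            best, best_fit = np.sort(cand), fit
    return best, best_fit

pixels = np.random.default_rng(1).integers(0, 256, 512 * 512)
hist = np.histogram(pixels, bins=256, range=(0, 256))[0]
print(random_search(hist, k=3))
```

Any of the compared algorithms optimizes exactly this fitness; only the candidate-update rule changes.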
Reconstruction and classification of unknown DNA sequences
The continuous advances in DNA sequencing technologies and techniques in metagenomics require reliable reconstruction and accurate classification methodologies in order to increase the diversity of the natural repository while contributing to the description and organization of organisms. However, after sequencing and de-novo assembly, one of the most complex challenges comes from DNA sequences that do not match or resemble any biological sequence in the literature. Three main reasons contribute to this exception: the organism's sequence diverges highly from the known organisms in the literature, an irregularity was introduced in the reconstruction process, or a new organism has been sequenced. The inability to efficiently classify these unknown sequences increases the uncertainty of the sample's constitution and becomes a wasted opportunity to discover new species, since they are often discarded.
In this context, the main objective of this thesis is the development and validation of a tool that provides an efficient computational solution to these three challenges based on an ensemble of experts, namely compression-based predictors, the distribution of sequence content, and normalized sequence lengths. The method uses both DNA and amino-acid sequences and provides efficient classification beyond standard referential comparisons. Unusually, it classifies DNA sequences without resorting directly to the reference genomes but rather to features that the species' biological sequences share. Specifically, it only makes use of features extracted individually from each genome, without using sequence comparisons. Moreover, the pipeline is fully automatic and enables reference-free reconstruction of genomes from FASTQ reads, with the additional guarantee of secure storage of sensitive information.
RFSC was then created as a machine-learning classification pipeline that relies on an ensemble of experts to provide efficient classification in metagenomic contexts. This pipeline was tested on synthetic and real data, in both cases achieving precise and accurate results that, at the time of the development of this thesis, had not been reported in the state of the art. Specifically, it achieved an accuracy of approximately 97% in the domain/type classification. (Mestrado em Engenharia de Computadores e Telemática)
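One of the ensemble's experts, compression-based prediction, has a compact classical core: the Normalized Compression Distance (NCD). The sketch below uses zlib as a stand-in for the specialized DNA compressors the thesis relies on, and, unlike the thesis pipeline, it compares against reference sequences directly; it only illustrates the building block.

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size as an approximation of Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for similar, near 1 for unrelated."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(unknown: bytes, references: dict) -> str:
    """Assign the unknown sequence to the class with the smallest NCD."""
    return min(references, key=lambda name: ncd(unknown, references[name]))

refs = {
    "virus_like": b"ATGCGT" * 500,        # toy stand-ins for class exemplars
    "bacteria_like": b"GGCCTTAA" * 400,
}
print(classify(b"ATGCGT" * 120 + b"ATGCGA" * 5, refs))   # -> "virus_like"
```

In the reference-free setting the thesis describes, per-genome features (compressibility profiles, content distributions, normalized lengths) would replace these direct pairwise comparisons.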
Applied Metaheuristic Computing
For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optimality that impairs traditional computational methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
Algorithms based on spider daddy long legs for finding the optimal route in securing mobile ad hoc networks
Mobile ad hoc networks (MANETs) are wireless networks that are subject to severe attacks, such as the black hole attack. One of the goals of this research is to find a method to prevent black hole attacks without decreasing network throughput or increasing routing overhead. The routing mechanism in AODV uses route requests (RREQs, for discovering routes) and route replies (RREPs, for returning paths). However, this mechanism is vulnerable to attacks by malicious black hole nodes. The mechanism is developed to find the shortest secure path and to reduce overhead, using the information available in the routing tables as input to a more complex nature-inspired algorithm. The new method, called the Daddy Long-Legs Algorithm (PGO-DLLA), modifies standard AODV and optimizes the routing process. It avoids exclusive dependency on the hop counts and destination sequence numbers (DSNs) that are exploited by malicious nodes in the standard AODV protocol. The experiments compare the end-to-end delay and packet delivery ratio metrics in order to evaluate best-effort traffic. The results show that PGO-DLLA improves shortest-path routing and secures it against black hole attacks in MANETs. In addition, the results indicate better performance than the related algorithms with respect to all metrics except throughput, for which AntNet routes best when the pause time exceeds 40 seconds. PGO-DLLA is able to improve route discovery against black hole attacks in AODV. Experiments in this thesis have shown that PGO-DLLA is able to reduce the normalized routing load, end-to-end delay, and packet loss, and has a good throughput and packet delivery ratio when compared with the standard AODV protocol, the BAODV protocol, and the current related protocols that enhance the routing security of the AODV protocol.
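The weakness the abstract targets can be illustrated in a few lines: a black hole node wins routes by advertising a fresh (inflated) DSN and a short hop count. The sketch below scores RREPs on both features with a plausibility check instead of trusting the highest DSN; the weights and threshold are illustrative assumptions, not the PGO-DLLA update rules.

```python
from dataclasses import dataclass

@dataclass
class RouteReply:
    next_hop: str
    hop_count: int
    dsn: int            # destination sequence number advertised in the RREP

def plausible(rrep: RouteReply, last_known_dsn: int, max_jump: int = 50) -> bool:
    """Flag replies whose DSN leaps implausibly far ahead of the last known value."""
    return rrep.dsn - last_known_dsn <= max_jump

def score(rrep: RouteReply, last_known_dsn: int, w_hops: float = 1.0) -> float:
    """Lower is better; implausible DSNs are pushed to the back of the queue."""
    penalty = 0.0 if plausible(rrep, last_known_dsn) else 1e6
    return w_hops * rrep.hop_count + penalty

replies = [
    RouteReply("node_a", hop_count=4, dsn=112),
    RouteReply("node_b", hop_count=1, dsn=9000),   # black-hole-style inflated DSN
]
best = min(replies, key=lambda r: score(r, last_known_dsn=110))
print(best.next_hop)   # -> "node_a": the short but suspicious route is rejected
```

Standard AODV would pick node_b here on DSN freshness alone, which is exactly the exploit the abstract describes.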
Biopsychosocial Assessment and Ergonomics Intervention for Sustainable Living: A Case Study on Flats
This study proposes an ergonomics-based approach for those living in small housing units (known as flats) in Indonesia. With regard to human capability and limitation, this research shows how the basic needs of human beings are captured and analyzed, followed by proposed designs of facilities and living standards for small housing. Ninety participants were involved in the study through in-depth interviews and face-to-face questionnaires. The results show that several modifications of critical facilities (such as a multifunction ironing workstation, bed furniture, and a clothesline) were proposed and validated through usability testing. Overall, it is hoped that the proposed designs will support biopsychosocial needs and sustainability.
Determining Additional Modulus of Subgrade Reaction Based on Tolerable Settlement for the Nailed-Slab System Resting on Soft Clay
The Nailed-Slab System is a proposed alternative solution for rigid pavement problems on soft soils. An equivalent modulus of subgrade reaction (k') can be used in designing the nailed-slab system. This modulus is the sum of the modulus of subgrade reaction from a plate load test (k) and the additional modulus of subgrade reaction due to pile installation (Δk). A recent method used a pile-resistance reduction approach to determine Δk, identifying the relative displacement between pile and soil and the reduction of pile resistance. In practice, however, the reduction of pile resistance is difficult to determine. This paper proposes an approach based on the tolerable settlement of rigid pavement. Validation is carried out against a loading test of nailed-slab models, which represent a strip section of rigid pavement. The theory of beams on elastic foundation is used to calculate the slab deflection from k'. The proposed approach yields deflection predictions close to the observed ones. In practice, the Nailed-Slab System would be constructed with multiple rows of piles; designing it based on a one-pile-row analysis gives a more conservative design and consumes less time.
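As a hedged worked example of the calculation chain the abstract describes, the sketch below forms k' = k + Δk and feeds it into the classical (Hetényi) solution for an infinite beam on an elastic foundation under a point load; all numbers are illustrative, not the paper's test values.

```python
import math

def deflection(P, x, E, I, k_eq, b):
    """Hetenyi solution for an infinite beam on an elastic foundation.
    P: point load (kN); x: distance from the load (m); E: modulus (kPa);
    I: section inertia (m^4); k_eq: equivalent subgrade modulus (kN/m^3);
    b: strip width (m)."""
    k_line = k_eq * b                        # foundation stiffness per unit length
    lam = (k_line / (4 * E * I)) ** 0.25     # characteristic parameter (1/m)
    return (P * lam / (2 * k_line)) * math.exp(-lam * x) * (
        math.cos(lam * x) + math.sin(lam * x)
    )

k_plate, delta_k = 30_000.0, 12_000.0        # kN/m^3, illustrative values
k_eq = k_plate + delta_k                     # k' = k + Δk
E = 25e6                                     # concrete modulus, kPa
b, h = 1.0, 0.15                             # strip width and slab thickness, m
I = b * h ** 3 / 12                          # inertia of the strip section
w0 = deflection(50.0, 0.0, E, I, k_eq, b)    # deflection under the load
print(f"deflection under load: {w0 * 1000:.2f} mm")
```

Comparing w0 against a tolerable-settlement limit is the inverse of the paper's idea: there, the tolerable settlement is fixed and Δk is backed out from it.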
Linear subspace methods in face recognition
Despite over 30 years of research, face recognition is still one of the most difficult problems in the field of Computer Vision. The challenge comes from many factors affecting the performance of a face recognition system: noisy input, training data collection, speed-accuracy trade-off, variations in expression, illumination, pose, or ageing. Although relatively successful attempts have been made for special cases, such as frontal faces, no satisfactory methods exist that work under completely unconstrained conditions. This thesis proposes solutions to three important problems: lack of training data, speed-accuracy requirement, and unconstrained environments.
The problem of lacking training data is addressed in the worst case: a single sample per person. Whitened Principal Component Analysis is proposed as a simple but effective solution; Whitened PCA performs consistently well on multiple face datasets.
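A minimal sketch of Whitened PCA on flattened face vectors, assuming NumPy: after projection, each principal component is divided by its standard deviation so that no single direction dominates a cosine match. Details such as the component count are illustrative.

```python
import numpy as np

def fit_whitened_pca(X, n_components, eps=1e-8):
    """X: (n_samples, n_features). Returns the mean and whitened projection matrix."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                        # principal directions
    scale = S[:n_components] / np.sqrt(len(X))     # per-component std deviations
    W = V / (scale + eps)                          # divide out the variance: whitening
    return mean, W

def project(x, mean, W):
    return (x - mean) @ W

rng = np.random.default_rng(0)
X = rng.random((40, 64 * 64))                      # stand-in for training faces
mean, W = fit_whitened_pca(X, n_components=20)
gallery = project(X[0], mean, W)                   # one sample per person
probe = project(X[0] + 0.01 * rng.random(64 * 64), mean, W)
cos = gallery @ probe / (np.linalg.norm(gallery) * np.linalg.norm(probe))
print(f"cosine similarity: {cos:.3f}")             # high for the same person
```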
The speed-accuracy trade-off is the second focus of this thesis. Two solutions are proposed to tackle this problem. The first solution is a new feature extraction method called Compact Binary Patterns, which is about three times faster than Local Binary Patterns. The second solution is a multi-patch classifier which performs much better than a single classifier without compromising speed.
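Compact Binary Patterns itself is the thesis's contribution; as a point of reference, this sketch computes the classic Local Binary Patterns baseline it is compared against, encoding each pixel by thresholding its eight neighbours.

```python
import numpy as np

def lbp(image):
    """image: 2-D uint8 array; returns the LBP code map of the interior pixels."""
    c = image[1:-1, 1:-1]
    neighbours = [
        image[:-2, :-2], image[:-2, 1:-1], image[:-2, 2:],
        image[1:-1, 2:], image[2:, 2:], image[2:, 1:-1],
        image[2:, :-2], image[1:-1, :-2],
    ]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):           # one bit per neighbour comparison
        code |= (n >= c).astype(np.uint8) << bit
    return code

img = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
hist = np.bincount(lbp(img).ravel(), minlength=256)   # LBP histogram = descriptor
print(hist.sum())   # 36 interior pixels for an 8x8 image
```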
Two metric learning methods are introduced to solve the problem of unconstrained face recognition. The first method, called Indirect Neighbourhood Component Analysis, combines the best ideas from Neighbourhood Component Analysis and one-shot learning. The second method, Cosine Similarity Metric Learning, uses cosine similarity instead of the more popular Euclidean distance to form the objective function in the learning process. This Cosine Similarity Metric Learning method produces the best result in the literature on the state-of-the-art face dataset, Labeled Faces in the Wild.
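The CSML idea reduces to maximizing cosine similarity for matching pairs and minimizing it for non-matching ones under a learned linear map A. The sketch below, assuming NumPy, uses a numerical gradient for brevity; the actual method derives the analytic gradient, and the pair data here are synthetic.

```python
import numpy as np

def cs(A, x, y):
    """Cosine similarity after the learned linear map A."""
    ax, ay = A @ x, A @ y
    return ax @ ay / (np.linalg.norm(ax) * np.linalg.norm(ay))

def objective(A, pos, neg, alpha=1.0):
    """Pull same-person pairs together, push different-person pairs apart."""
    return sum(cs(A, x, y) for x, y in pos) - alpha * sum(cs(A, x, y) for x, y in neg)

def train(pos, neg, dim, steps=40, lr=0.1, h=1e-4):
    A = np.eye(dim)
    for _ in range(steps):
        f0 = objective(A, pos, neg)
        grad = np.zeros_like(A)
        for i in range(dim):                  # forward-difference numerical gradient
            for j in range(dim):
                Ah = A.copy()
                Ah[i, j] += h
                grad[i, j] = (objective(Ah, pos, neg) - f0) / h
        A += lr * grad                        # gradient ascent on the objective
    return A

rng = np.random.default_rng(0)
ids = [rng.random(4) for _ in range(3)]       # three "identities" with 4-D features
pos = [(v, v + 0.05 * rng.random(4)) for v in ids]    # same-identity pairs
neg = [(ids[0], ids[1]), (ids[1], ids[2])]            # different-identity pairs
A = train(pos, neg, dim=4)
print(objective(np.eye(4), pos, neg), "->", objective(A, pos, neg))
```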
Finally, a full face verification system based on our real experience taking part in the ICPR 2010 Face Verification contest is described, and many practical points are discussed.
Applied Cognitive Sciences
Cognitive science is an interdisciplinary field studying the mind and intelligence. The term cognition refers to a variety of mental processes, including perception, problem solving, learning, decision making, language use, and emotional experience. The basis of the cognitive sciences is the contribution of philosophy and computing to the study of cognition. Computing is very important in the study of cognition because computer-aided research helps to model mental processes, and computers are used to test scientific hypotheses about mental organization and functioning. This book provides a platform for reviewing these disciplines and presenting cognitive research as a separate discipline.