13 research outputs found

    Network Activity Monitoring Against Malware in Android Operating System

    Google’s Android is the most widely used operating system on mobile devices, but as its popularity has grown, attackers have taken advantage of that momentum to plague Google Play (Android’s application store) with multipurpose malware capable of stealing private information and, in the worst cases, giving the attacker remote control of the smartphone’s features. This work presents a methodology that supports malware detection on the Android operating system and addresses the problem from a perspective that even popular anti-malware software has left aside: the analysis of a characteristic common to all kinds of malware, namely the need for network communication so that the victim device can interact with the attacker. To improve the security level of Android, this methodology should be considered as part of the malware-detection process. Its main characteristic is that it requires neither installing additional kernel modules nor rooting the Android device; in addition, it is simple enough for non-expert users.
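
    As an illustration of the kind of monitoring the abstract describes, the following Python sketch inspects a device's active TCP connections over adb and flags remote endpoints found on a blocklist. The adb-based collection, the /proc/net/tcp parsing and the blocklist addresses are assumptions made for this sketch (newer Android releases may restrict /proc/net access from the shell); the paper's actual methodology is only summarized above.

        #!/usr/bin/env python3
        """Minimal sketch: flag outbound connections from an Android device
        without root, by reading /proc/net/tcp over adb. The blocklist and
        the adb-based collection are illustrative assumptions, not the
        paper's exact procedure."""

        import subprocess

        # Hypothetical command-and-control addresses (documentation ranges).
        BLOCKLIST = {"203.0.113.7", "198.51.100.23"}


        def hex_to_ipv4(hex_ip: str) -> str:
            """Convert the little-endian hex IPv4 used in /proc/net/tcp
            (e.g. '0100007F') to dotted-quad notation ('127.0.0.1')."""
            octets = [str(int(hex_ip[i:i + 2], 16)) for i in range(0, 8, 2)]
            return ".".join(reversed(octets))


        def remote_endpoints():
            """Yield (remote_ip, remote_port, uid) for every TCP socket."""
            out = subprocess.run(
                ["adb", "shell", "cat", "/proc/net/tcp"],
                capture_output=True, text=True, check=True,
            ).stdout
            for line in out.splitlines()[1:]:            # skip header row
                fields = line.split()
                if len(fields) < 8:
                    continue
                ip_hex, port_hex = fields[2].split(":")  # rem_address column
                yield hex_to_ipv4(ip_hex), int(port_hex, 16), fields[7]


        if __name__ == "__main__":
            for ip, port, uid in remote_endpoints():
                if ip in BLOCKLIST:
                    print(f"suspicious connection to {ip}:{port} (app uid {uid})")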

    A New Procedure to Detect Low Interaction Honeypots

    Honeypot systems are an important piece of the network security infrastructure and can be deployed for different purposes, such as network sensing, capturing and learning about 0-day exploits, capturing and analyzing black-hat techniques, deterring black hats, and gathering data for statistical analysis of Internet traffic, among others. Nevertheless, a honeypot needs to look like a real system, because once it is unmasked it loses its value. This paper presents a new procedure to detect low-interaction honeypots through HTTP requests, regardless of the honeypot architecture. Low-interaction honeypot network services need to be improved in order to yield trustworthy information; otherwise, the data they produce should be treated as inaccurate and unreliable. DOI: http://dx.doi.org/10.11591/ijece.v4i6.688
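
    The abstract does not spell out the fingerprinting requests, so the following Python sketch only illustrates the general idea under stated assumptions: it probes a web service with one valid, one unknown-method and one malformed HTTP request, and treats near-identical canned replies as a hint of a low-interaction emulation. The probes, the target address and the comparison heuristic are illustrative choices, not the paper's procedure.

        #!/usr/bin/env python3
        """Illustrative sketch of fingerprinting a web service with raw HTTP
        requests. The specific probes and the 'identical response' heuristic
        are assumptions for illustration; the paper's detection procedure may
        use different requests and criteria."""

        import socket


        def http_probe(host: str, port: int, request: bytes, timeout: float = 5.0) -> bytes:
            """Send a raw HTTP request and return whatever the service answers."""
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.sendall(request)
                chunks = []
                try:
                    while True:
                        data = sock.recv(4096)
                        if not data:
                            break
                        chunks.append(data)
                except socket.timeout:
                    pass
                return b"".join(chunks)


        def looks_like_low_interaction(host: str, port: int = 80) -> bool:
            """Heuristic: a real HTTP server reacts differently to a valid
            request, an unknown method and a malformed request line, while a
            simple emulation tends to return the same canned reply."""
            probes = [
                b"GET / HTTP/1.1\r\nHost: %b\r\nConnection: close\r\n\r\n" % host.encode(),
                b"FOO / HTTP/1.1\r\nHost: %b\r\nConnection: close\r\n\r\n" % host.encode(),
                b"GET / NOT-HTTP/9.9\r\n\r\n",
            ]
            replies = [http_probe(host, port, p) for p in probes]
            return len({r[:200] for r in replies}) <= 1   # identical canned answers


        if __name__ == "__main__":
            target = "192.0.2.10"   # hypothetical target address
            print("possible low-interaction honeypot" if looks_like_low_interaction(target)
                  else "behaves like a real web server")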

    Los Triángulos de Delaunay como Procesamiento Previo para Extractores Difusos

    Abstract: The biometric information extracted from fingerprints tends to differ at each acquisition, given the uncertainty inherent in the measurements and the noise present in the samples. This can cause the codewords generated inside a fuzzy extractor to contain more errors than the error-correcting code can handle; as a consequence, fingerprints from the same person may be classified as non-matching during verification, or fingerprints from different individuals may appear too similar. To mitigate these effects and overcome the difficulties of fingerprint pre-alignment, the use of Delaunay triangles was proposed, which provides local structural stability to the spatial representation of the biometric information. In that proposal, the fingerprint minutiae are used as the vertices of the triangulations, and the resulting mesh is tolerant to distortion, rotation and translation. However, that proposal treats the dispersion of fingerprint minutiae as non-degenerate and therefore does not specify the thresholds or criteria needed to form those triangulations, which affects the performance of the fuzzy extractors. On this basis, this article presents the results obtained by testing the formation of Delaunay triangulations on fingerprint images, applying geometric thresholds and criteria, counting the coincident triangles between the resulting structures, and determining the thresholds that maximize those coincidences. Keywords: Fuzzy Extractors, Fingerprint, Delaunay Triangles
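
    A minimal Python sketch of the triangle-matching step described above, using SciPy's Delaunay triangulation: minutiae coordinates are triangulated, each triangle is summarized by its sorted side lengths, and triangles coincident within a tolerance are counted. The synthetic minutiae, the 5-pixel tolerance and the side-length representation are assumptions for illustration; the paper's goal is precisely to determine which thresholds and geometric criteria maximize the coincidences.

        #!/usr/bin/env python3
        """Minimal sketch: build Delaunay triangulations over fingerprint
        minutiae and count triangles that match within a length tolerance.
        The minutiae arrays and the 5-pixel tolerance are illustrative
        assumptions only."""

        import numpy as np
        from scipy.spatial import Delaunay


        def triangle_signatures(minutiae: np.ndarray) -> np.ndarray:
            """Return, for each Delaunay triangle, its side lengths sorted in
            ascending order (a representation tolerant to rotation/translation)."""
            tri = Delaunay(minutiae)
            sides = []
            for i, j, k in tri.simplices:
                a, b, c = minutiae[i], minutiae[j], minutiae[k]
                sides.append(sorted([np.linalg.norm(a - b),
                                     np.linalg.norm(b - c),
                                     np.linalg.norm(c - a)]))
            return np.array(sides)


        def coincident_triangles(sig1: np.ndarray, sig2: np.ndarray, tol: float = 5.0) -> int:
            """Count triangles of the first structure that have a counterpart in
            the second one with all three sides within `tol` pixels."""
            count = 0
            for s in sig1:
                if np.any(np.all(np.abs(sig2 - s) <= tol, axis=1)):
                    count += 1
            return count


        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            enrolled = rng.uniform(0, 300, size=(30, 2))          # x, y of 30 minutiae
            query = enrolled + rng.normal(0, 2.0, size=(30, 2))   # noisy re-acquisition
            m1, m2 = triangle_signatures(enrolled), triangle_signatures(query)
            print(f"{coincident_triangles(m1, m2)} of {len(m1)} triangles matched")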

    Finishing the euchromatic sequence of the human genome

    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome, including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.

    Electronic Voting Protocol Using Identity-Based Cryptography

    Electronic voting protocols proposed to date meet their properties based on Public Key Cryptography (PKC), which offers high flexibility through key agreement protocols and authentication mechanisms. However, when PKC is used, a Certification Authority (CA) must be implemented to provide certificates that bind public keys to entities and enable verification of those bindings, which notably increases the number of components in the protocol. An alternative is to use Identity-Based Encryption (IBE). With this kind of cryptography, it is possible to obtain all the benefits offered by PKC without the need for certificates or for all the core components of a Public Key Infrastructure (PKI). With this in mind, in this paper we propose an electronic voting protocol that meets the privacy and robustness properties by using bilinear maps.
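
    For reference, the bilinearity property that identity-based schemes exploit can be seen in the textbook Boneh-Franklin BasicIdent scheme sketched below; the paper's own construction over bilinear maps is not necessarily this one, and the symbols here follow the standard textbook notation rather than the paper's.

        Setup:    groups G_1, G_T of prime order q, generator P \in G_1,
                  bilinear map e : G_1 \times G_1 \to G_T,
                  master secret s \in Z_q^*, public key P_{pub} = sP,
                  hash functions H_1 : \{0,1\}^* \to G_1,  H_2 : G_T \to \{0,1\}^n
        Extract:  d_{ID} = s \cdot H_1(ID)            (private key for identity ID)
        Encrypt:  choose random r \in Z_q^*,
                  C = (c_1, c_2) = ( rP,\; m \oplus H_2( e(H_1(ID), P_{pub})^r ) )
        Decrypt:  m = c_2 \oplus H_2( e(d_{ID}, c_1) )
        Correctness follows from bilinearity:
                  e(d_{ID}, rP) = e(s H_1(ID), rP) = e(H_1(ID), P)^{rs}
                                = e(H_1(ID), sP)^r = e(H_1(ID), P_{pub})^r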

    Teletraffic Analysis of DoS and Malware Cyber Attacks on P2P Networks under Exponential Assumptions

    Peer-to-peer (P2P) networks are distributed systems with a communication model in which no central authority governs the behavior of individual peers. These networks currently account for a considerable percentage of all bandwidth worldwide. However, this communication model also has a clear disadvantage: it carries a multitude of vulnerabilities and security threats. The nature of the P2P philosophy itself means that there is no centralized server responsible for uploading, storing, and verifying the authenticity of the shared files and packets. A direct consequence of this is that P2P networks are a good choice for attackers to spread malicious software, or malware in general, since there is no mechanism to control what content is shared. In this paper, we present a mathematical model for P2P networks to study the effect of two different attacks on these systems, namely malware and denial of service (DoS). To analyze the behavior of the cyber attacks and identify important weaknesses, we develop different Markov chains that reflect the main dynamics of the system and the attacks. Specifically, our model considers the case in which a certain number of nodes are infected with a cyber worm that spreads throughout the network as the file is shared among peers. This allows observation of the final number of infected peers when an initial number of malicious nodes (we evaluate the system for 1 to 14 initial nodes) infect the system. For the DoS attack, our model considers the portion of peers that are unable to communicate and the average attack duration to study the performance degradation caused by such an attack. A two-pronged approach was used to study the impact of the attacks on P2P networks: the first prong focused only on the P2P network, and the second focused on the attacks together with the network.
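
    The following Python sketch illustrates the malware-spread side of such a model with a simple discrete-time susceptible/infected chain over file-sharing rounds, evaluated for 1 to 14 initial malicious peers. The swarm size, infection probability and number of rounds are assumptions for illustration only; the paper derives its own Markov chains under exponential assumptions and also covers the DoS case.

        #!/usr/bin/env python3
        """Illustrative sketch of a discrete-time susceptible/infected Markov
        chain for malware spread in a P2P file-sharing network. All numeric
        parameters below are assumptions for illustration, not values taken
        from the paper."""

        import random

        N_PEERS = 100     # total peers in the swarm (assumption)
        P_INFECT = 0.05   # prob. a download from an infected peer infects (assumption)
        ROUNDS = 50       # file-sharing rounds simulated (assumption)


        def final_infected(initial_infected: int, trials: int = 200) -> float:
            """Average number of infected peers after ROUNDS sharing rounds,
            starting from `initial_infected` malicious nodes."""
            totals = 0
            for _ in range(trials):
                infected = initial_infected
                for _ in range(ROUNDS):
                    susceptible = N_PEERS - infected
                    # Each susceptible peer downloads from one random peer this
                    # round; it becomes infected if that peer is infected and
                    # the malware executes successfully.
                    newly = sum(
                        1 for _ in range(susceptible)
                        if random.random() < (infected / N_PEERS) * P_INFECT
                    )
                    infected += newly
                totals += infected
            return totals / trials


        if __name__ == "__main__":
            for seeds in range(1, 15):   # 1 to 14 initial malicious nodes
                print(f"{seeds:2d} initial malicious peers -> "
                      f"{final_infected(seeds):6.1f} infected on average")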

    (Universal) Unconditional Verifiability in E-Voting without Trusted Parties

    In traditional e-voting protocols, privacy is often provided by a trusted authority that learns the votes and computes the tally. Some protocols replace the trusted authority by a set of authorities, and privacy is guaranteed if fewer than a threshold number of authorities are corrupt. For verifiability, stronger security guarantees are demanded: typically, corrupt authorities that try to fake the result of the tally must always be detected. To provide verifiability, many e-voting protocols use Non-Interactive Zero-Knowledge proofs (NIZKs). Thanks to their non-interactive nature, NIZKs allow anybody, including third parties that do not participate in the protocol, to verify the correctness of the tally; therefore, NIZKs can be used to obtain universal verifiability. Additionally, NIZKs also improve usability because they allow voters to cast a vote using a non-interactive protocol. The disadvantage of NIZKs is that their security is based on setup assumptions such as the common reference string (CRS) or the random oracle (RO) model. The former requires a trusted party for the generation of a common reference string. The latter, though a popular methodology for designing secure protocols, has been shown to be unsound. In this paper, we address the design of an e-voting protocol that provides verifiability without any trust assumptions, where verifiability here does not include eligibility verification. We show that Non-Interactive Witness-Indistinguishable proofs (NIWI) can be used for this purpose. The e-voting scheme is private under the Decision Linear assumption, while verifiability holds unconditionally. To our knowledge, this is the first private e-voting scheme that achieves perfect universal verifiability, i.e. one in which the probability of a fake tally going undetected is 0, uses non-interactive protocols, and does not rely on trust assumptions.
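
    For reference, the Decision Linear (DLIN) assumption on which the scheme's privacy rests is usually stated as follows; this is the standard formulation, independent of the paper's concrete groups:

        Decision Linear (DLIN): for a cyclic group G of prime order p with
        random generators u, v, h \in G and random exponents a, b \in Z_p,
        the distributions

            (u, v, h, u^a, v^b, h^{a+b})   and   (u, v, h, u^a, v^b, h^c),  c \leftarrow Z_p uniformly,

        are computationally indistinguishable.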