19 research outputs found

    ANT colony optimization based optimal path selection and data gathering in WSN

    Data aggregation is an essential process in wireless sensor networks for delivering data to the base station and sink node. In current data gathering mechanisms, the nodes nearest to the sink receive data from all the other nodes and forward it to the sink. Data aggregation is used to increase the capability and efficiency of the existing system. In the existing technique, the probability of data loss is high, which can lead to energy loss and therefore degrade efficiency and performance. To overcome these issues, an effective cluster-based data gathering technique is developed: optimal cluster heads are selected and used for transmission with low energy consumption, and the optimal path for the mobile sink (MS) is computed with the Ant Colony Optimization (ACO) algorithm, which provides an efficient path along which the MS collects data at the cluster centroids. The performance of the proposed method is analysed in terms of delay, throughput, lifetime, etc.
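
    The abstract does not detail how the ACO tour over the selected cluster heads is built, so the sketch below is only a generic illustration of ant-colony tour construction over a set of hypothetical cluster-centroid coordinates; the parameters (alpha, beta, rho, ant and iteration counts) are made-up assumptions, not the authors' algorithm.

```python
import numpy as np

# Generic ACO tour construction over hypothetical cluster centroids (illustrative only).
rng = np.random.default_rng(0)
centroids = rng.uniform(0, 100, size=(8, 2))        # assumed cluster-head positions
n = len(centroids)
dist = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=2)
pheromone = np.ones((n, n))
alpha, beta, rho, n_ants, n_iters = 1.0, 2.0, 0.5, 20, 50

best_tour, best_len = None, np.inf
for _ in range(n_iters):
    tours = []
    for _ in range(n_ants):
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            i = tour[-1]
            cand = np.array(sorted(unvisited))
            # Transition rule: favour strong pheromone and short hops.
            weights = pheromone[i, cand] ** alpha * (1.0 / dist[i, cand]) ** beta
            nxt = int(rng.choice(cand, p=weights / weights.sum()))
            tour.append(nxt)
            unvisited.remove(nxt)
        length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
        tours.append((tour, length))
        if length < best_len:
            best_tour, best_len = tour, length
    pheromone *= (1 - rho)                           # evaporation
    for tour, length in tours:                       # deposit proportional to tour quality
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            pheromone[a, b] += 1.0 / length
            pheromone[b, a] += 1.0 / length

print("best mobile-sink tour:", best_tour, "length:", round(best_len, 1))
```

    In the cluster-based scheme described above, the centroid coordinates would correspond to the positions of the elected cluster heads, and the best tour found would serve as the mobile sink's data-collection route.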

    Design and implementation of an individual-based model for simulating malware propagation in wireless sensor networks

    Wireless sensor networks consist of a set of devices called sensors deployed over a given area; they form an ad-hoc network with no pre-established architecture, i.e., decentralised networks that require no pre-existing infrastructure such as routing devices or wireless access points. In recent years, malware has become a serious threat exploiting the vulnerabilities of the Internet of Things, and these constantly evolving threats also affect wireless sensor networks. Malware propagation in wireless sensor networks has been studied from different perspectives in order to understand how these attacks occur and to define specialised security measures. These studies rely on mathematical models built with different modelling tools, such as systems of ordinary differential equations, systems of partial differential equations, Markov chains, cellular automata, or agents. However, most of the proposed models ignore the individual characteristics of the network's main components. The goal of this doctoral thesis is to define an individual, agent-based model and to develop a computational environment for simulating and analysing the propagation of different types of malware in wireless sensor networks. The methodology used in this work begins with a review of the state of the art of the main topics, including wireless sensor networks, malware, and agent-based models. Next, a literature review of the mathematical models proposed for simulating malware in wireless sensor networks was carried out. The most significant characteristics of wireless sensor networks were then extracted in order to build a mathematical model under the agent-based paradigm, detailing the agents involved, the coefficients that affect them, and the transition rules they follow. Finally, the model was implemented computationally using the Mesa framework, developed in Python, which made it possible to analyse the results for different scenarios and topologies. The conclusion is that both the environment and the topology influence the malware propagation process. Moreover, the computational characteristics of the sensors can help prevent rapid propagation, since sensors with high computational capabilities may already have some kind of security mechanism implemented.
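
    The thesis states that the model was implemented with the Mesa framework in Python. As a rough illustration of that modelling style only, here is a minimal susceptible/infected sketch written against Mesa's pre-3.0 API (Agent(unique_id, model), RandomActivation); the two-state sensors, the random geometric topology, and the infection probability beta are illustrative assumptions, not the agents, coefficients, or transition rules defined in the thesis.

```python
import networkx as nx
from mesa import Agent, Model
from mesa.time import RandomActivation

class SensorAgent(Agent):
    """A sensor node that is either susceptible or infected (illustrative states)."""
    def __init__(self, unique_id, model, infected=False):
        super().__init__(unique_id, model)
        self.infected = infected

    def step(self):
        # Hypothetical transition rule: an infected sensor tries to infect each
        # susceptible neighbour with probability beta.
        if self.infected:
            for nbr_id in self.model.graph.neighbors(self.unique_id):
                nbr = self.model.agents_by_id[nbr_id]
                if not nbr.infected and self.model.random.random() < self.model.beta:
                    nbr.infected = True

class MalwareModel(Model):
    """Minimal WSN malware-propagation model on a random geometric topology."""
    def __init__(self, n=100, radius=0.15, beta=0.05, seed=None):
        super().__init__()
        self.beta = beta
        self.graph = nx.random_geometric_graph(n, radius, seed=seed)
        self.schedule = RandomActivation(self)
        self.agents_by_id = {}
        for node_id in self.graph.nodes:
            agent = SensorAgent(node_id, self, infected=(node_id == 0))
            self.agents_by_id[node_id] = agent
            self.schedule.add(agent)

    def step(self):
        self.schedule.step()

model = MalwareModel(n=100, radius=0.15, beta=0.05, seed=1)
for _ in range(50):
    model.step()
print("infected sensors:", sum(a.infected for a in model.agents_by_id.values()))
```

    Repeating a run of this kind over different scenarios and topologies is the sort of comparison the abstract refers to, albeit at a much simpler level than the thesis model.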

    Computer Science & Technology Series : XVIII Argentine Congress of Computer Science. Selected papers

    CACIC’12 was the eighteenth Congress in the CACIC series. It was organized by the School of Computer Science and Engineering at the Universidad Nacional del Sur. The Congress included 13 Workshops with 178 accepted papers, 5 Conferences, 2 invited tutorials, different meetings related to Computer Science Education (Professors, PhD students, Curricula), and an International School with 5 courses. CACIC 2012 was organized following the traditional Congress format, with 13 Workshops covering a diversity of dimensions of Computer Science research. Each topic was supervised by a committee of 3-5 chairs from different Universities. The call for papers attracted a total of 302 submissions. An average of 2.5 review reports were collected for each paper, for a grand total of 752 review reports involving about 410 different reviewers. A total of 178 full papers, involving 496 authors and 83 Universities, were accepted, and 27 of them were selected for this book. Red de Universidades con Carreras en Informática (RedUNCI).

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
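
    To make the mapping of a Life generation onto MR concrete, here is a small local stand-in that mimics what a streaming mapper and reducer would compute: the map phase emits neighbour contributions keyed by cell, and the reduce phase applies the B3/S23 rule per key. It is only a sketch of the general pattern and does not reproduce the authors' optimised streaming algorithms or the strip-partitioning technique.

```python
from collections import defaultdict

def map_phase(live_cells):
    """Emit (cell, value) pairs: an 'ALIVE' marker for each live cell, plus a 1 per neighbour."""
    for (x, y) in live_cells:
        yield (x, y), "ALIVE"
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    yield (x + dx, y + dy), 1

def reduce_phase(grouped):
    """Apply the B3/S23 rule to each cell's grouped contributions."""
    next_gen = set()
    for cell, values in grouped.items():
        alive = "ALIVE" in values
        neighbours = sum(v for v in values if v != "ALIVE")
        if neighbours == 3 or (alive and neighbours == 2):
            next_gen.add(cell)
    return next_gen

def step(live_cells):
    grouped = defaultdict(list)
    for key, value in map_phase(live_cells):   # local stand-in for the shuffle/sort phase
        grouped[key].append(value)
    return reduce_phase(grouped)

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(step(glider))  # the glider after one generation
```

    In an actual Hadoop streaming job, the map and reduce stages would instead read and write tab-separated key/value lines on standard input/output, with the framework performing the grouping between the two phases.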

    Mathematical modelling of malware propagation: a new approach based on information security

    This thesis studies models that simulate malware propagation. One goal of these models is to predict whether an epidemic dies out or persists over time. To this end, the stability of the model is analysed and the basic reproductive number, denoted R0, is computed. Stability is studied using the eigenvalues of the Jacobian matrices, Lyapunov functions, and the geometric approach, while the basic reproductive number is obtained with the next-generation method. In this way it is shown, among other properties, that the epidemic dies out if R0 is less than or equal to 1 and persists if R0 > 1. Based on the analysis of these models, three improvements are proposed in this thesis: 1. The creation of a family of models that includes the carrier compartment, i.e., devices that are infected but not affected by the malware. 2. The study of the basic reproductive number in several variables. 3. The redefinition of the model parameters taking the characteristics of the malware into account.
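
    For readers unfamiliar with the next-generation method mentioned above, the following is a standard textbook computation of R0 for a simple SIR model with vital dynamics; it illustrates the method only and is not the carrier-compartment model proposed in the thesis.

```latex
% Next-generation computation of R_0 for an SIR model with vital dynamics
% (textbook example; not the thesis's carrier model).
\begin{align*}
  S' &= \Lambda - \beta S I - \mu S, &
  I' &= \beta S I - (\mu + \gamma) I, &
  R' &= \gamma I - \mu R .
\end{align*}
% Disease-free equilibrium: (S^*, I^*, R^*) = (\Lambda/\mu, 0, 0).
% Linearising the infected compartment at the DFE into new infections F
% and transitions V gives
\[
  F = \beta S^* = \frac{\beta \Lambda}{\mu}, \qquad
  V = \mu + \gamma, \qquad
  R_0 = \rho\!\left(F V^{-1}\right) = \frac{\beta \Lambda}{\mu(\mu + \gamma)},
\]
% so the malware dies out when R_0 \le 1 and persists when R_0 > 1.
```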

    Identifying and Mitigating Security Risks in Multi-Level Systems-of-Systems Environments

    In recent years, organisations, governments, and cities have taken advantage of the many benefits and automated processes Information and Communication Technology (ICT) offers, evolving their existing systems and infrastructures into highly connected and complex Systems-of-Systems (SoS). These infrastructures endeavour to increase robustness and offer some resilience against single points of failure. The Internet, Wireless Sensor Networks, the Internet of Things, critical infrastructures, the human body, etc., can all be broadly categorised as SoS, as they encompass a wide range of differing systems that collaborate to fulfil objectives that the distinct systems could not fulfil on their own. ICT-constructed SoS face the same dangers, limitations, and challenges as traditional cyber-based networks, and while monitoring the security of small networks can be difficult, the dynamic nature, size, and complexity of SoS make securing these infrastructures even more taxing. Solutions that attempt to identify risks and vulnerabilities and to model the topologies of SoS have failed to evolve at the same pace as SoS adoption. As a result, attacks against these infrastructures have grown more prevalent, as unidentified vulnerabilities and exploits provide unguarded opportunities for attackers. In addition, the new collaborative relations introduce new cyber interdependencies and unforeseen cascading failures, and increase complexity. This thesis presents an innovative approach to identifying and mitigating risks and to securing SoS environments. Our security framework incorporates a number of novel techniques that allow us to calculate the security level of the entire SoS infrastructure using vulnerability analysis, node properties, topology data, and other factors, and to mitigate risks without adding additional resources to the SoS infrastructure. Other risk factors we examine include risks associated with different properties and the likelihood of violating access control requirements. Extending the principles of the framework, we also apply the approach to multi-level SoS in order to improve both SoS security and the overall robustness of the network. In addition, the identified risks, vulnerabilities, and interdependent links are modelled by extending network modelling and attack graph generation methods. The proposed SeCurity Risk Analysis and Mitigation Framework and its principal techniques have been researched, developed, implemented, and evaluated through numerous experiments and case studies. The results show that the framework can observe an SoS and produce an accurate security level for the entire SoS in all instances, visualising identified vulnerabilities, interdependencies, high-risk nodes, data access violations, and security grades in a series of reports and undirected graphs. The framework's evolutionary approach to mitigating risks, and its robustness function for determining the appropriateness of an SoS, produced promising results, with the framework and its principal techniques identifying SoS topologies, quantifying their associated security levels, and distinguishing SoS that are either optimally structured (in terms of communication security) or cannot be evolved because the applied processes would negatively affect the security and robustness of the SoS. Likewise, through its evolvement methods the framework can identify SoS communication configurations that improve communication security and protect data as it traverses an unsecured and unencrypted SoS, reporting enhanced configurations that mitigate risks in a series of undirected graphs and reports that visualise and detail the SoS topology and its vulnerabilities. These reported candidates and optimal solutions improve the security and robustness of the SoS, and will support the maintenance of acceptably low centrality factors should the recommended configurations be applied to the evaluated SoS infrastructure.
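
    As a purely hypothetical illustration of combining per-node vulnerability scores with topology data into a single security level, consider the toy computation below; the node names, scores, and weighting are invented for illustration and do not reproduce the framework's actual scoring, access-control analysis, or attack-graph generation.

```python
import networkx as nx

# Toy SoS: four interconnected constituent systems (names are invented).
sos = nx.Graph([("gateway", "sensor_net"), ("gateway", "scada"),
                ("scada", "historian"), ("sensor_net", "historian")])

# Assumed per-node vulnerability scores in [0, 1] (e.g. normalised CVSS-like values).
vulnerability = {"gateway": 0.7, "sensor_net": 0.4, "scada": 0.8, "historian": 0.2}

# Topology factor: nodes that are both highly vulnerable and highly connected
# dominate the risk of the whole infrastructure.
centrality = nx.degree_centrality(sos)
risk = {n: vulnerability[n] * (0.5 + 0.5 * centrality[n]) for n in sos}
security_level = 1.0 - max(risk.values())   # the worst node bounds the whole SoS

print({n: round(r, 2) for n, r in risk.items()}, "security level:", round(security_level, 2))
```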

    Calibration of a maritime anomaly detection algorithm based on satellite data fusion

    The fusion of different data sources provides significant support for decision making. This article describes the development of a platform that detects maritime anomalies by fusing data from the Automatic Identification System (AIS) for vessel tracking with Synthetic Aperture Radar (SAR) satellite imagery. These anomalies are presented to the operator as a set of detections that must be monitored to determine their nature. The detection process first identifies objects in the SAR images by applying CFAR algorithms and then correlates the detected objects with the data reported through the AIS. In this work we report tests carried out with different parameter configurations for the detection and association algorithms, analyse the platform's response, and report the parameter combination that yields the best results for the images used. This is a first step towards our future goal of developing a system that adjusts the parameters dynamically depending on the available images. XVI Workshop Computación Gráfica, Imágenes y Visualización (WCGIV). Red de Universidades con Carreras en Informática (RedUNCI).
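
    The abstract does not specify the CFAR variant or its parameters, so the following is a minimal cell-averaging CFAR (CA-CFAR) sketch over a synthetic intensity image; the guard/training window sizes and the threshold factor are illustrative assumptions rather than the configurations calibrated in the article.

```python
import numpy as np

# Minimal 2-D cell-averaging CFAR (CA-CFAR): for each pixel, estimate the clutter
# level from a ring of training cells surrounding a guard window, and declare a
# detection when the cell under test exceeds that estimate by a fixed factor.
# Window sizes and the factor below are illustrative assumptions.
def ca_cfar(img, guard=2, train=6, factor=10.0):
    detections = np.zeros_like(img, dtype=bool)
    half = guard + train
    for r in range(half, img.shape[0] - half):
        for c in range(half, img.shape[1] - half):
            window = img[r - half:r + half + 1, c - half:c + half + 1].copy()
            # Exclude the cell under test and its guard cells from the clutter estimate.
            window[train:train + 2 * guard + 1, train:train + 2 * guard + 1] = np.nan
            clutter = np.nanmean(window)
            detections[r, c] = img[r, c] > factor * clutter
    return detections

rng = np.random.default_rng(0)
sar = rng.exponential(1.0, size=(128, 128))   # speckle-like clutter stand-in for a SAR tile
sar[60:63, 60:63] += 40.0                     # a bright "vessel" target
hits = ca_cfar(sar)
print(hits.sum(), "detections at", np.argwhere(hits)[:5].tolist())
```

    In the platform described above, detections of this kind are then correlated with the AIS reports, and the resulting anomalies are presented to the operator for monitoring.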

    XXIII Argentine Congress of Computer Science - CACIC 2017: Proceedings

    Papers presented at the XXIII Argentine Congress of Computer Science (CACIC), held in the city of La Plata from 9 to 13 October 2017, organized by the Red de Universidades con Carreras en Informática (RedUNCI) and the Facultad de Informática of the Universidad Nacional de La Plata (UNLP).

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex, data-intensive, continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to give better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. High Performance Computing, on the other hand, entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex, data-intensive applications in distinct scientific and technical domains. A seamless interaction between High Performance Computing and Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest to these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.