
    An Optimal Task Scheduling Algorithm in Wireless Sensor Networks

    Sensing tasks should be allocated and processed among sensor nodes in minimal time so that users can draw useful conclusions by analyzing the sensed data. Furthermore, finishing sensing tasks faster saves energy, which is critical in the system design of wireless sensor networks. To minimize the execution time (makespan) of a given task, an optimal task scheduling algorithm (OTSA-WSN) for a clustered wireless sensor network is proposed based on divisible load theory. The algorithm consists of two phases: intra-cluster task scheduling and inter-cluster task scheduling. Intra-cluster task scheduling deals with allocating different fractions of sensing tasks among the sensor nodes in each cluster; inter-cluster task scheduling involves assigning sensing tasks among all clusters in multiple rounds to improve the overlap of communication with computation. OTSA-WSN builds on eliminating transmission collisions and idle gaps between two successive data transmissions. By removing the performance degradation caused by communication interference and idle time, a reduced finish time and improved network resource utilization are achieved. With the proposed algorithm, the optimal number of rounds and the most reasonable load allocation ratio for each node can be derived. Finally, simulation results are presented to demonstrate the impact of different network parameters, such as the number of clusters, computation/communication latency, and measurement/communication speed, on the number of rounds, makespan, and energy consumption.
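
    The abstract does not spell out the divisible-load recurrence, but the classic single-level-tree form of divisible load theory gives the flavor of the intra-cluster phase. Below is a minimal Python sketch, assuming a single round with sequential transmission and no result collection; `w` (per-unit computation time) and `z` (per-unit transmission time) are illustrative parameters, not values from the paper.

```python
def dlt_fractions(w, z):
    """Load fractions for a single-level tree under divisible load theory:
    the sink transmits to nodes one at a time, and optimality requires
    that all nodes finish computing at the same instant."""
    alpha = [1.0]
    for i in range(len(w) - 1):
        # Node i+1 starts receiving when node i's transmission ends;
        # equal finish times give alpha[i+1]*(z[i+1]+w[i+1]) = alpha[i]*w[i].
        alpha.append(alpha[i] * w[i] / (z[i + 1] + w[i + 1]))
    total = sum(alpha)
    return [a / total for a in alpha]

# Three nodes: per-unit compute times w and per-unit transmit times z.
w, z = [2.0, 3.0, 4.0], [0.5, 0.5, 0.5]
alpha = dlt_fractions(w, z)
makespan = alpha[0] * (z[0] + w[0])  # identical for every node by construction
print(alpha, makespan)
```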

    A Multi-objective Optimization Algorithm of Task Scheduling in WSN

    Sensing tasks should be allocated and processed among sensor nodes in minimal time so that users can draw useful conclusions by analyzing the sensed data. Furthermore, finishing sensing tasks faster saves energy. These needs stand in contrast to the lower task-performing efficiency caused by failure-prone sensors. To solve this problem, a multi-objective optimization algorithm of task scheduling for wireless sensor networks (MTWSN) is proposed. The algorithm strives to minimize makespan while also paying close attention to the probability of task completion and the lifetime of the network. MTWSN avoids assigning tasks to failure-prone sensors, which effectively reduces the effect of failed nodes on task performance. Simulation results show that the proposed algorithm trades off these three objectives well and obtains better results than traditional task scheduling algorithms.
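
    MTWSN's exact objective functions are not given in the abstract; purely as a sketch of the general idea, here is a hypothetical weighted-sum scoring that balances speed (makespan), reliability (avoiding failure-prone nodes), and residual energy (network lifetime) when assigning tasks. All field names and weights are illustrative assumptions.

```python
def assign_tasks(tasks, nodes, weights=(0.4, 0.3, 0.3)):
    """Greedily assign each task to the node with the best weighted score
    over speed, reliability, and remaining energy (all normalized to [0, 1])."""
    w_speed, w_rel, w_energy = weights
    max_speed = max(n["speed"] for n in nodes)
    max_energy = max(n["energy"] for n in nodes)
    plan = {}
    for task in tasks:
        def score(n):
            return (w_speed * n["speed"] / max_speed
                    + w_rel * (1.0 - n["fail_prob"])        # shun failure-prone nodes
                    + w_energy * n["energy"] / max_energy)  # preserve network lifetime
        best = max(nodes, key=score)
        plan[task] = best["id"]
        best["energy"] -= 1.0  # assigned work drains the chosen node
    return plan

nodes = [{"id": "s1", "speed": 5.0, "fail_prob": 0.05, "energy": 10.0},
         {"id": "s2", "speed": 8.0, "fail_prob": 0.30, "energy": 10.0}]
print(assign_tasks(["t1", "t2", "t3"], nodes))
```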

    A Non-cooperative Game Algorithm for Task Scheduling in Wireless Sensor Networks

    Scheduling tasks in wireless sensor networks is one of the most challenging problems. Sensing tasks should be allocated and processed among sensors in minimal time, so that users can draw prompt and effective conclusions by analyzing the sensed data. Furthermore, finishing sensing tasks faster saves energy, which is critical in the system design of wireless sensor networks. But sensors may refuse to take pains to carry out tasks because of their limited energy. To solve this potential selfishness problem, a non-cooperative game algorithm (NGTSA) for task scheduling in wireless sensor networks is proposed. In the proposed algorithm, according to divisible load theory, tasks are distributed reasonably from the sink to every node based on processing capability and communication capability. By removing the performance degradation caused by communication interference and idle time, a reduced task completion time and improved network resource utilization are achieved. A strategyproof mechanism provides incentives for the sensors to obey the prescribed algorithm and to truthfully report their parameters, leading to efficient task scheduling and execution. A utility function related to the total task completion time and the task allocation scheme is designed, and the Nash equilibrium of the game is proved. Simulation results show that, with this mechanism, selfish nodes are forced to report their true processing capability and endeavor to participate in the measurement, so that the total time for accomplishing the task is minimized and the energy consumption of the nodes is balanced.
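
    The paper's utility function is not reproduced in the abstract, but the Nash-equilibrium claim can be illustrated with generic best-response dynamics: a strategy profile from which no player gains by deviating unilaterally is a Nash equilibrium. The toy utility below (a reward for work done minus a cost for overstating capability, under an allocation proportional to reported capability) is an assumption for illustration, not NGTSA's actual mechanism.

```python
def best_response_dynamics(utility, strategies, n_players, max_iter=100):
    """Iterate unilateral best responses; a fixed point is a Nash equilibrium."""
    profile = [strategies[0]] * n_players
    for _ in range(max_iter):
        changed = False
        for i in range(n_players):
            def u_of(s):
                return utility(i, profile[:i] + [s] + profile[i + 1:])
            best = max(strategies, key=u_of)
            if u_of(best) > utility(i, profile) + 1e-12:
                profile[i] = best
                changed = True
        if not changed:
            break  # no player can improve: Nash equilibrium reached
    return profile

# Toy game: each sensor reports a capability; its load share is proportional
# to the report, reward is proportional to share, and a penalty grows when
# the report exceeds the node's true capability (here 3.0).
TRUE_CAP = 3.0
def utility(i, profile):
    share = profile[i] / sum(profile)
    return 10.0 * share - 5.0 * max(0.0, profile[i] - TRUE_CAP) * share

strategies = [1.0, 2.0, 3.0, 4.0, 5.0]
print(best_response_dynamics(utility, strategies, n_players=3))  # -> truthful 3.0s
```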

    Enhancing Security of Advanced Metering Infrastructure by Introducing Threshold Attendance Protocol, Journal of Telecommunications and Information Technology, 2014, no. 2

    The industry is pushing towards Smart grid systems in order to resolve the current limitations of the unidirectional legacy power grid infrastructure. By introducing Advanced Metering Infrastructure (AMI) as an integral part of the Smart grid solution, the utility company obtains an invaluable tool to optimize its network, lower operational costs, and improve quality of service. Unfortunately, introducing two-way communication poses a security risk to the power grid infrastructure. In this paper the authors consider a Threshold Attendance Protocol (TAP) acting in a reverted security paradigm. Its main idea is to keep the network load at a predictable level at all times. To achieve that, TAP is embedded in the AMI environment and the solution is validated using real-life simulation parameters.
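
    The abstract does not detail TAP's message flow; purely as an illustration of keeping AMI traffic at a predictable level, the sketch below spreads meter check-ins deterministically across reporting slots so that expected per-slot load stays near n/slots in every round. The function and parameters are hypothetical, not part of the published protocol.

```python
import hashlib
from collections import Counter

def report_slot(meter_id: str, round_no: int, n_slots: int) -> int:
    """Deterministic per-round slot assignment: hashing spreads check-ins
    evenly, so per-slot traffic stays predictable without coordination."""
    digest = hashlib.sha256(f"{meter_id}:{round_no}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_slots

meters = [f"meter-{i:04d}" for i in range(1000)]
load = Counter(report_slot(m, round_no=7, n_slots=20) for m in meters)
print(load.most_common(3))  # heaviest slots stay close to 1000/20 = 50
```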

    Internet Early Warning Systems - Overview and Architecture

    In the last two decades the Internet has become more and more important to our lives and economy. The number of threats to the Internet is also rising. The current security systems used to protect the infrastructure are insufficient. For this reason, Internet Early Warning Systems have gained an increasingly important position in research. Such systems have many aspects, both technical and organisational, that must be borne in mind. In this work we give an overview of these aspects to define the term Internet Early Warning System in detail.

    Determining the Influence of the Network Time Protocol (NTP) on the Domain Name Service Security Extension (DNSSEC) Protocol

    Recent hacking events against Sony Entertainment, Target, Home Depot, and bank Automated Teller Machines (ATMs) foster a growing perception that the Internet is an insecure environment. While Internet Privacy Concerns (IPCs) continue to grow out of a general concern for personal privacy, the availability of inexpensive Internet-capable mobile devices expands the Internet of Things (IoT), a network of everyday items embedded with the ability to connect and exchange data. Domain Name Services (DNS) has been an integral part of the Internet for name resolution since the beginning. DNS has several documented vulnerabilities, for example cache poisoning. The solution adopted by the Internet Engineering Task Force (IETF) to strengthen DNS is DNS Security Extensions (DNSSEC), which adds support for cryptographically signed name resolution responses; the cryptography used by DNSSEC is the Public Key Infrastructure (PKI). Some researchers have suggested that the time stamp used in the public certificate of the name resolution response influences DNSSEC's vulnerability to a Man-in-the-Middle (MiTM) attack. This quantitative study determined the efficacy of using the default relative Unix epoch time stamp versus an absolute time stamp provided by the Network Time Protocol (NTP). Both a two-proportion test and Fisher's exact test were used on a large sample to show a statistically significant improvement in security behavior when using NTP absolute time instead of the traditional relative Unix epoch time with DNSSEC.
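
    The study's raw counts are not in the abstract, so the numbers below are placeholders; the sketch only shows the shape of the two statistical tests named, using SciPy and statsmodels.

```python
from scipy.stats import fisher_exact
from statsmodels.stats.proportion import proportions_ztest

# Placeholder counts (NOT the study's data): successful MiTM attacks
# observed out of n name-resolution trials under each timestamp scheme.
epoch_hits, epoch_n = 130, 1000   # relative Unix epoch time
ntp_hits, ntp_n = 80, 1000        # NTP absolute time

# Two-proportion z-test on attack success rates.
z, p_z = proportions_ztest([epoch_hits, ntp_hits], [epoch_n, ntp_n])

# Fisher's exact test on the same 2x2 contingency table.
table = [[epoch_hits, epoch_n - epoch_hits],
         [ntp_hits, ntp_n - ntp_hits]]
odds_ratio, p_f = fisher_exact(table)

print(f"z = {z:.2f}, p = {p_z:.4f}; Fisher OR = {odds_ratio:.2f}, p = {p_f:.4f}")
```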

    Applying Mutable Object Snapshots to a High-level Object-Oriented Language

    Software engineers are familiar with mutable and immutable object state. Mutable objects shared across modules may lead to unexpected results, as changes to the object in one module are visible to other modules sharing the object. When provided a mutable object as input in Java, it is common practice to defensively create a new private copy of the object bearing the same state via cloning, serialization/de-serialization, a specialized object constructor, or a third-party library. No universal approach exists for all scenarios, and each common solution has well-known problems. This research explores the applicability of snapshot concepts from the computer engineering storage field. The exploration results in a simplified method of memory snapshotting implemented within OpenJDK 10. A novel runtime-managed method is proposed for declaring, within the method signature, the intent for object state to be unshared. Preliminary experiments evaluate the attributes of this approach. A path for future research is proposed, including differential snapshots, alternative block sizes, improving performance, and exploring a tree of snapshots as a foundation for reasoning about changes to object state over time.
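
    The dissertation works in Java on OpenJDK 10; as a language-neutral illustration of the defensive-copy problem it targets, here is a small Python sketch in which a deep copy plays the role of the snapshot. The `Schedule` class is invented for illustration.

```python
import copy

class Schedule:
    """Keeps a private snapshot of a mutable input so later mutations
    by the caller cannot leak into this object's state."""
    def __init__(self, entries: list):
        self._entries = copy.deepcopy(entries)  # defensive copy on the way in

    def entries(self) -> list:
        return copy.deepcopy(self._entries)     # defensive copy on the way out

entries = [["mon", "build"]]
s = Schedule(entries)
entries[0][1] = "hacked"                  # caller mutates the shared input
assert s.entries() == [["mon", "build"]]  # the snapshot is unaffected
```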

    An extensive study on iterative solver resilience: characterization, detection and prediction

    Soft errors caused by transient bit flips have the potential to significantly impact an application's behavior. This has motivated the design of an array of techniques to detect, isolate, and correct soft errors using microarchitectural, architectural, compilation-based, or application-level techniques to minimize their impact on the executing application. The first step toward the design of good error detection/correction techniques involves an understanding of an application's vulnerability to soft errors. This work focuses on the effects of silent data corruption on iterative solvers and on efforts to mitigate those effects. In this thesis, we first present the first comprehensive characterization of the impact of soft errors on the convergence characteristics of six iterative methods using application-level fault injection. We analyze the impact of soft errors in terms of the type of error (single- vs. multi-bit), the distribution and location of bits affected, the data structure and statement impacted, and variation with time. We create a public-access database with more than 1.5 million fault injection results. We then analyze the performance of soft error detection mechanisms and present comparative results. Motivated by our observations, we evaluate a machine-learning-based detector that takes as features the runtime features observed by the individual detectors to arrive at their conclusions. Our evaluation demonstrates improved results over the individual detectors. We then propose a machine-learning-based method to predict a program's error behavior, making fault injection studies more efficient. We demonstrate this method by assessing the performance of soft error detectors. We show that our method maintains 84% accuracy on average with up to 53% less cost. We also show that, once a model is trained, further fault injection tests would cost 10% of the expected full fault injection runs.
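
    The thesis's fault-injection framework is not shown in the abstract; below is a minimal sketch of application-level single-bit injection on an IEEE-754 double, the kind of primitive such a campaign builds on. Bit positions follow the standard double layout (0-51 mantissa, 52-62 exponent, 63 sign); in a campaign like the one described, such flips would be injected into solver data structures at chosen iterations and the convergence behavior recorded.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of an IEEE-754 double to emulate a single-bit soft error."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return flipped

x = 1.0
print(flip_bit(x, 51))  # top mantissa bit: mild perturbation (1.0 -> 1.5)
print(flip_bit(x, 62))  # top exponent bit: catastrophic (1.0 -> inf)
```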