
    Department of Computer Science Activity 1998-2004

    This report summarizes much of the research and teaching activity of the Department of Computer Science at Dartmouth College between late 1998 and late 2004. The material for this report was collected as part of the final report for NSF Institutional Infrastructure award EIA-9802068, which funded equipment and technical staff during that six-year period. This equipment and staff supported essentially all of the department's research activity during that period.

    Methodologies synthesis

    This deliverable deals with the modelling and analysis of interdependencies between critical infrastructures, focussing attention on two interdependent infrastructures studied in the context of CRUTIAL: the electric power infrastructure and the information infrastructures supporting management, control and maintenance functionality. The main objectives are: 1) investigate the main challenges to be addressed for the analysis and modelling of interdependencies, 2) review the modelling methodologies and tools that can be used to address these challenges and support the evaluation of the impact of interdependencies on the dependability and resilience of the service delivered to the users, and 3) present the preliminary directions investigated so far by the CRUTIAL consortium for describing and modelling interdependencies.
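
    As a rough illustration of the kind of interdependency analysis discussed above, the sketch below propagates a failure across two coupled infrastructures, a power grid and an ICT network. The topology, node names, and dependency map are purely hypothetical and are not taken from the CRUTIAL deliverable.

    # Minimal cascading-failure sketch over two interdependent
    # infrastructures (hypothetical topology, for illustration only).

    # Intra-infrastructure edges: node -> nodes it supplies.
    power = {"P1": {"P2"}, "P2": {"P3"}, "P3": set()}
    ict = {"C1": {"C2"}, "C2": set()}

    # Cross-infrastructure dependencies: node -> nodes in the other
    # infrastructure that fail when this node fails.
    depends = {"P1": {"C1"},   # C1 draws power from P1
               "C2": {"P3"}}   # P3 is controlled through C2

    def cascade(initial_failures):
        """Propagate failures through both infrastructures."""
        failed, frontier = set(), list(initial_failures)
        while frontier:
            node = frontier.pop()
            if node in failed:
                continue
            failed.add(node)
            for graph in (power, ict):
                frontier.extend(graph.get(node, ()))  # downstream nodes
            frontier.extend(depends.get(node, ()))    # cross-infrastructure

        return failed

    print(cascade({"P1"}))  # all five nodes fail via the coupling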

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided that covers, inter alia, rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, and a range of techniques is reviewed, covering model-based approaches, 'programmed' AI, and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other in order to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation, and adaptation are more readily facilitated.
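
    The hybrid direction described above can be made concrete with a small sketch that combines a programmed, rule-based screen for known misuses with a learned profile of normal behaviour for unknown ones. The event vocabulary, the single rule, and the rarity threshold below are illustrative assumptions, not elements of any of the surveyed systems.

    # Hedged sketch of a hybrid misuse detector: a rule-based screen
    # for known misuse patterns plus a learned frequency profile of
    # normal events to flag unseen behaviour. All names are invented.

    from collections import Counter

    KNOWN_MISUSE_RULES = [
        # a rule fires when all listed event types occur in one window
        {"login_fail", "login_ok", "config_change"},
    ]

    class HybridDetector:
        def __init__(self, threshold=0.05):
            self.profile = Counter()    # frequencies of normal events
            self.total = 0
            self.threshold = threshold  # rarity cut-off for anomalies

        def train(self, normal_events):
            self.profile.update(normal_events)
            self.total += len(normal_events)

        def check(self, window):
            # 1) programmed component: screen against known misuse rules
            if any(rule <= set(window) for rule in KNOWN_MISUSE_RULES):
                return "known misuse"
            # 2) learned component: flag events rare under the profile
            for ev in window:
                if self.profile[ev] / max(self.total, 1) < self.threshold:
                    return "possible unknown misuse"
            return "ok"

    det = HybridDetector()
    det.train(["login_ok"] * 95 + ["logout"] * 5)
    print(det.check(["login_fail", "login_ok", "config_change"]))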

    A Real-Time Communication Framework for Wireless Sensor Networks

    Recent advances in miniaturization and low-power design have led to a flurry of activity in wireless sensor networks. Sensor networks have different constraints than traditional wired networks. A wireless sensor network is a special network with large numbers of nodes equipped with embedded processors, sensors, and radios. These nodes collaborate to accomplish a common task such as environment monitoring or asset tracking. In many applications, sensor nodes will be deployed in an ad-hoc fashion without careful planning. They must organize themselves to form a multihop, wireless communication network. In sensor network environments, much research has been conducted in areas such as power consumption, self-organisation techniques, routing between the sensors, and communication between the sensors and the sink. On the other hand, real-time communication with the Quality of Service (QoS) concept in wireless sensor networks is still an open research field. Most protocols either ignore real time or simply attempt to process as fast as possible and hope that this speed is sufficient to meet the deadline. However, the introduction of real-time communication has created additional challenges in this area. The sensor node spends most of its life routing packets from one node to another until the packet reaches the sink; therefore, the node functions as a small router most of the time. Since sensor networks deal with time-critical applications, it is often necessary for communication to meet real-time constraints. However, research that deals with providing QoS guarantees for real-time traffic in sensor networks is still in its infancy.

    This thesis presents a real-time communication framework to provide quality of service in sensor network environments. The proposed framework consists of four components. First, it presents an analytical model for implementing Priority Queuing (PQ) in a sensor node to calculate the queuing delay. The exact packet delay for the corresponding classes is calculated, and the analytical results are validated through an extensive simulation study. Second, it reports on a novel analytical model based on a limited-service polling discipline. The model is based on an M/D/1 queuing system (a special class of M/G/1 queuing systems), which takes into account two different classes of traffic in a sensor node. The proposed model implements two queues in a sensor node that are served in a round-robin fashion. The exact queuing delay in a sensor node for the corresponding classes is calculated, and the analytical results are again validated through an extensive simulation study. Third, it introduces a novel packet delivery mechanism, the Multiple Level Stateless Protocol (MLSP), as a real-time protocol for guaranteeing traffic in wireless sensor networks. MLSP improves on its counterpart, MMSPEED, in packet loss rate and in the handling of holes in the sensor network, and it introduces the k-limited polling model for the first time. In addition, the total number of packets sent drops significantly compared to MMSPEED, which reduces power consumption. Fourth, it describes a new framework for moving data from the sink to the user, at low cost and low power, using the Universal Mobile Telecommunication System (UMTS), the standard for the Third Generation Mobile System (3G). The integration of sensor networks with the 3G mobile network infrastructure will reduce the cost of building new infrastructures and enable the large-scale deployment of sensor networks.
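
    For the priority-queuing component, a hedged numerical sketch follows, using Cobham's classical mean-waiting-time formulas for a non-preemptive two-class M/G/1 queue with deterministic (M/D/1) service. The thesis derives its own exact model, so this only conveys the flavour of the calculation; the arrival rates and service time are invented for the example.

    # Mean per-class waiting times in a two-class, non-preemptive
    # priority M/D/1 queue (Cobham's formulas; illustrative numbers).

    def md1_priority_delays(lam_hi, lam_lo, service):
        """Return (W_hi, W_lo), the mean waiting time per class."""
        rho_hi, rho_lo = lam_hi * service, lam_lo * service
        assert rho_hi + rho_lo < 1, "queue must be stable"
        # Mean residual service: sum_i lambda_i * E[S_i^2] / 2,
        # with E[S^2] = service**2 for deterministic service.
        resid = (lam_hi + lam_lo) * service ** 2 / 2
        w_hi = resid / (1 - rho_hi)
        w_lo = resid / ((1 - rho_hi) * (1 - rho_hi - rho_lo))
        return w_hi, w_lo

    # e.g. 20 high- and 30 low-priority packets/s, 10 ms per packet
    print(md1_priority_delays(20.0, 30.0, 0.010))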

    Feature interaction in composed systems. Proceedings. ECOOP 2001 Workshop #08 in association with the 15th European Conference on Object-Oriented Programming, Budapest, Hungary, June 18-22, 2001

    Feature interaction is nothing new and is not limited to computer science. The problem of undesirable feature interaction (the feature interaction problem) has already been investigated in the telecommunication domain. Our goal is the investigation of feature interaction in component-based systems beyond telecommunication. This Technical Report embraces all position papers accepted at the ECOOP 2001 workshop no. 08 on "Feature Interaction in Composed Systems". The workshop was held on June 18, 2001 in Budapest, Hungary.

    Adaptive Mechanisms to Improve Message Dissemination in Vehicular Networks

    In the past, many resources were devoted to building better roads and highways. Over time, the focus shifted to improving the vehicles themselves, producing ever faster vehicles with greater range. Later, with the introduction of electronics into the automotive market, vehicles were equipped with sensors, communication equipment, and other technological advances that have enabled more efficient, safe, and comfortable cars. The applications that Vehicular Networks (VNs) enable in terms of safety and efficiency are numerous, which justifies the research effort and resources devoted to them in recent years. This thesis focuses on Vehicular Ad-hoc Networks (VANETs), a subclass of Vehicular Networks concerned with communication between vehicles, without the need for infrastructure elements. To improve the dissemination of warning messages, which are essential for safety-related applications, an adaptive broadcast scheme is proposed that automatically selects the optimal dissemination mechanism based on map complexity and the current vehicle density. The main objective is to maximise the effectiveness of message dissemination while minimising the number of messages required, thereby avoiding or mitigating broadcast storms. Current proposals in the VANET area focus mainly on scenarios with typical or average densities. However, due to the characteristics of these networks, situations with extreme (high and low) densities often arise. Considering the problems these can cause in the dissemination of emergency messages, two new dissemination schemes for low densities are proposed: Junction Store and Forward (JSF) and Neighbor Store and Forward (NSF). In addition, for high-density situations, the Nearest Junction Located (NJL) scheme was designed, which markedly reduces the number of messages sent without sacrificing performance. Finally, we classify the most important dissemination schemes for VANETs, analysing the characteristics used in their design, and compare all of them in the same simulation environment and scenarios, showing which dissemination scheme is best to use in each situation.
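
    A minimal sketch of the adaptive selection idea is given below: a dissemination scheme is chosen from the current vehicle density and a simple proxy for map complexity. The thresholds and the default scheme are invented for illustration; only the scheme names JSF, NSF, and NJL come from the thesis.

    # Hedged sketch: pick a dissemination scheme from density and a
    # junction-count proxy for map complexity (thresholds invented).

    def select_scheme(vehicles_per_km2, junctions_per_km2):
        """Return the dissemination scheme suited to current conditions."""
        if vehicles_per_km2 < 25:          # sparse: store-and-forward
            # complex maps favour junction-based forwarding
            return "JSF" if junctions_per_km2 > 10 else "NSF"
        if vehicles_per_km2 > 150:         # dense: avoid broadcast storms
            return "NJL"                   # rebroadcast only near junctions
        return "counter-based flooding"    # moderate density: plain scheme

    print(select_scheme(vehicles_per_km2=12, junctions_per_km2=18))  # JSF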

    Modelling and Design of Resilient Networks under Challenges

    Communication networks, in particular the Internet, face a variety of challenges that can disrupt our daily lives, resulting in the worst cases in the loss of human lives and significant financial costs. We define challenges as external events that trigger faults that eventually result in service failures. Understanding these challenges is essential for improving current networks and for designing Future Internet architectures. This dissertation presents a taxonomy of challenges that can help evaluate design choices for the current and Future Internet. Graph models to analyse critical infrastructures are examined, and a multilevel graph model is developed to study interdependencies between different networks. Furthermore, graph-theoretic heuristic optimisation algorithms are developed. These heuristic algorithms add links to increase the resilience of networks in the least costly manner, and they are computationally less expensive than an exhaustive search algorithm. The performance of networks under random failures, targeted attacks, and correlated area-based challenges is evaluated by the challenge simulation module that we developed. The GpENI Future Internet testbed is used to conduct experiments evaluating the performance of the heuristic algorithms developed.
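
    The abstract does not specify the link-addition heuristics, so the following is only a plausible greedy sketch in the same spirit: repeatedly add the candidate link with the best resilience gain per unit cost, here using algebraic connectivity from networkx as the resilience proxy. The graph, the uniform cost, and the budget are illustrative assumptions.

    # Hedged greedy link-addition sketch (requires networkx + scipy).

    import itertools
    import networkx as nx

    def greedy_resilience_links(G, budget, cost=lambda u, v: 1.0):
        """Add links maximising connectivity gain per unit cost."""
        G = G.copy()
        spent = 0.0
        while True:
            base = nx.algebraic_connectivity(G)
            best, best_score = None, 0.0
            for u, v in itertools.combinations(G.nodes, 2):
                if G.has_edge(u, v) or spent + cost(u, v) > budget:
                    continue
                G.add_edge(u, v)                  # try the candidate link
                gain = nx.algebraic_connectivity(G) - base
                G.remove_edge(u, v)
                if gain / cost(u, v) > best_score:
                    best, best_score = (u, v), gain / cost(u, v)
            if best is None:                      # budget spent or no gain
                return G
            G.add_edge(*best)
            spent += cost(*best)

    ring = nx.cycle_graph(6)                      # a fragile ring topology
    improved = greedy_resilience_links(ring, budget=2)
    print(sorted(improved.edges))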

    Workload Interleaving with Performance Guarantees in Data Centers

    In the era of global, large-scale data centers residing in clouds, many applications and users share the same pool of resources for the purposes of reducing energy and operating costs and of improving availability and reliability. Along with these benefits, resource sharing also introduces performance challenges: when multiple workloads access the same resources concurrently, contention may occur and introduce delays in the performance of individual workloads. Providing performance isolation to individual workloads requires effective management methodologies. The challenge in deriving such methodologies lies in finding accurate, robust, compact metrics and models to drive algorithms that can meet different performance objectives while achieving efficient utilization of resources. This dissertation proposes a set of methodologies aimed at solving the challenging performance isolation problem of workload interleaving in data centers, focusing on both storage components and computing components. At the storage node level, we focus on methodologies for better interleaving user traffic with background workloads, such as tasks for improving reliability, availability, and power savings. More specifically, a scheduling policy for background workload based on the statistical characteristics of the system busy periods and a methodology that quantitatively estimates the performance impact of power savings are developed. At the storage cluster level, we consider methodologies for how to efficiently conduct work consolidation and schedule asynchronous updates without violating user performance targets. More specifically, we develop a framework that can estimate beforehand the benefits and overheads of each option in order to automate the process of reaching intelligent consolidation decisions while achieving faster eventual consistency. At the computing node level, we focus on improving workload interleaving at off-the-shelf servers, as they are the basic building blocks of large-scale data centers. We develop priority scheduling middleware that employs different policies to schedule background tasks based on the instantaneous resource requirements of the high-priority applications running on the server node. Finally, at the computing cluster level, we investigate popular computing frameworks for large-scale data-intensive distributed processing, such as MapReduce and its Hadoop implementation. We develop a new Hadoop scheduler called DyScale to exploit the capabilities offered by heterogeneous cores in order to achieve a variety of performance objectives.
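
    As a rough sketch of the node-level priority-scheduling idea, the middleware below admits a background task only when the high-priority workload leaves instantaneous headroom. The probe function and threshold are placeholders, not the dissertation's actual policy or the DyScale scheduler.

    # Hedged sketch of priority-aware background-task admission.

    from collections import deque

    def cpu_headroom():
        """Placeholder probe; a real system might sample /proc or psutil."""
        return 0.5   # fraction of CPU currently idle (hypothetical)

    class BackgroundScheduler:
        def __init__(self, min_headroom=0.3):
            self.min_headroom = min_headroom
            self.tasks = deque()

        def submit(self, task):
            self.tasks.append(task)

        def run_once(self):
            # Dispatch at most one background task, and only when the
            # foreground workload leaves enough instantaneous headroom.
            if self.tasks and cpu_headroom() >= self.min_headroom:
                self.tasks.popleft()()

    sched = BackgroundScheduler()
    sched.submit(lambda: print("scrubbing disk block..."))
    sched.run_once()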