
    Quality-of-service management and optimal planning of wireless multimedia sensor networks

    RÉSUMÉ A wireless sensor network (WSN) consists of a number of geographically dispersed entities (sensors) of small size, with limited autonomy and processing power. These devices are used to carry out, in an autonomous manner, tasks such as surveillance, industrial process control, and so on. Advances in microelectronics have led to the emergence of small, affordable cameras (CMOS type) and microphones. These audio-visual sensors can be integrated into a WSN to form a wireless multimedia sensor network (WMSN). In some applications, such as border surveillance, a large number of such sensors is likely to be deployed over vast terrains. A considerable volume of audio-visual streams (in addition to scalar data) must then be transmitted to the control centre (the collector, or sink) for analysis and decision making. There is therefore a strong demand for bandwidth, together with tight constraints on transmission delay and other WSN parameters. Routing solutions have been developed for WSNs, but these protocols do not take into account the large-scale generation of multimedia data and are consequently ill-suited to WMSNs. Sensors are typically omnidirectional, that is, they can capture signals coming from every direction around them. Multimedia sensors, in particular video sensors, are directional: their sensing area is limited to a given sector of three-dimensional space. Unfortunately, the mathematical models developed for the placement of conventional WSNs cannot be applied to the configuration and planning of directional sensor networks. New optimization models are therefore needed to capture the main parameters characterizing directional sensors. In this thesis, we address the following key problems: the routing of heterogeneous data (scalar and multimedia) from the nodes of a WMSN in order to provide better QoS to users; and the optimized deployment of the directional sensors of a WMSN in a three-dimensional space, with the goal of covering a set of points of interest defined in that space. Our thesis consists of three scientific papers, each addressing a specific problem. The first paper deals with QoS-based routing in WMSNs. We propose a new protocol, AntSensNet, based on the ant colony heuristic, which uses several QoS metrics to find good routes for multimedia data and scalar information. In practice, the protocol first establishes a hierarchical structure over the network before choosing appropriate paths to meet the various QoS requirements of the different types of traffic circulating in the network. This maximizes the use of network resources while improving the performance of information transmission. In addition, AntSensNet can use an efficient packet scheduling mechanism over multiple paths in order to obtain minimal distortion when an application transmits video over the network.
In the second paper we continue with the topic of QoS in WMSNs and, more specifically, we address the problem of admission control for this type of network. Admission control makes it possible to determine whether a network is able to support a new data flow. Without admission control in a WMSN, network performance is compromised, because the resources available in the network will not suffice for all the accepted flows, leading to problems such as packet loss. We propose a new scheme for admitting new multimedia flows into a WMSN. The proposed system is able to determine whether a data flow can be admitted into the network, given the current state of the communication links and the energy of the nodes. The admission decision is taken in a distributed manner, without using a central entity. Moreover, our scheme behaves like a plug-in and can be adapted to whatever routing and MAC protocols are used for data transmission in the WMSN. Our simulation results show the effectiveness of our approach in meeting the QoS requirements of new data flows. Finally, our third paper deals with the problem of optimal deployment of multimedia sensors in a 3D space. As mentioned above, most multimedia sensors are directional. Moreover, these sensors are more expensive and more specialized than scalar sensors. Consequently, the random deployments that are typical for scalar sensors are neither desirable nor adequate for multimedia sensors. To this end, we propose an optimal 3D deployment model for directional sensors. The model aims to determine the minimum number of connected directional sensors, together with their locations and configurations, needed to cover a set of control points in a given 3D space. The configuration of each deployed sensor is determined by three parameters: the sensing range, the field of view (FoV) and the orientation. We present an Integer Linear Programming (ILP) formulation to find the exact solution of the problem, as well as a greedy algorithm capable of finding an approximate (but efficient) solution. We also evaluate various properties of the proposed solutions through extensive simulations. With these three papers we have resolved, in a manner that is both innovative and practical, the problems of QoS-based routing for WMSNs and of directional sensor deployment, which constitute the main objective of our research.
    ABSTRACT A Wireless Sensor Network (WSN) consists of a set of embedded processing units, called sensors, communicating via wireless links, whose main function is the collection of parameters related to the surrounding environment, such as temperature, pressure or the presence/motion of objects. WSNs are expected to have many applications in various fields, such as industrial processes, military surveillance, and habitat observation and monitoring.
    The availability of inexpensive hardware such as CMOS cameras and microphones that are able to ubiquitously capture multimedia content from the environment has fostered the development of Wireless Multimedia Sensor Networks (WMSNs), i.e., networks of wirelessly interconnected devices that allow retrieving video and audio streams, still images, and scalar sensor data. In addition to the ability to retrieve multimedia data, WMSNs will be able to store, process in real time, correlate and fuse multimedia data originating from heterogeneous sources, and perform actions on the environment based on the content gathered. Many applications require the sensor network paradigm to be rethought in view of the need for mechanisms that deliver multimedia content with a certain level of quality of service (QoS). Due to high bandwidth, processing, and stringent QoS requirements, existing solutions are not feasible for WMSNs. Because research in sensor networks has so far been driven mostly by the need to minimize energy consumption, mechanisms are still needed to efficiently deliver application-level QoS and to map these requirements onto network-layer metrics such as latency. Additionally, in WSNs an omnidirectional sensing model is often assumed, where each sensor can detect its environment equally in every direction. Multimedia sensors, especially video sensors, are instead directional. A directional sensor is characterized by its sensing region, which can be viewed as a sector in three-dimensional space; it can therefore only choose one active sector (or direction) at any time instant. Unfortunately, the many methods developed for deploying traditional WSNs cannot directly be used for optimizing and configuring directional WMSNs, due to the different parameters involved. New optimization models which capture the primary parameters characterizing directional sensors are therefore necessary. The aforementioned issues are crucial challenges for the development of WMSNs. In this thesis, we are interested in the following aspects: routing of heterogeneous data (scalar and multimedia) from the nodes of a WMSN to the sink in order to provide a better QoS experience to users; and an optimized deployment of the directional sensors of a WMSN in three-dimensional space with the objective of covering a set of control points defined in that space. Our thesis comprises three scientific papers, each addressing a specific problem. In our first paper, we address the problem of data routing based on different QoS metrics in a WMSN. We propose a new protocol, AntSensNet, based on the ant colony heuristic. The AntSensNet protocol builds a hierarchical structure on the network before choosing suitable paths to meet the various QoS requirements of different kinds of traffic, thus maximizing network utilization while improving its performance. In addition, AntSensNet is able to use an efficient multipath video packet scheduling mechanism in order to achieve minimum-distortion video transmission. In the second paper, we address the problem of connection admission control for WMSNs. With admission control, it is possible to determine whether a network is capable of supporting a new data stream. Without admission control in a WMSN, network performance will be compromised, because the resources available in the network may not be sufficient for all the accepted flows, causing problems such as packet loss and congestion.
Taking multiple parameters into account, we propose a novel connection admission control scheme for the multimedia traffic circulating in the network. The proposed scheme is able to determine whether a new flow can be admitted into the network, considering the current link states and the energy of the nodes. The admission decision is made in a distributed way, without relying on a central entity. In addition, our scheme works like a plug-in and is easily adaptable to any routing and MAC protocols. Our simulation results show the effectiveness of our approach in satisfying the QoS requirements of flows while achieving fair bandwidth utilization and low jitter. Finally, in the third paper, we address the problem of optimal deployment of directional sensors in a 3D space. We have already mentioned that conventional methods for deploying omnidirectional sensors are not suitable for deploying directional sensors. To remedy this deficiency, we propose a mathematical model which aims to determine the minimum number of connected directional multimedia sensor nodes, and their configuration, needed to cover a set of control points in a given 3D space. The configuration of each deployed sensor is determined by three parameters: sensing range, field of view and orientation. We present an exact ILP formulation of the problem and an approximate (but computationally efficient) greedy algorithm. We also evaluate different properties of the proposed solutions through extensive simulations. Overall, the solutions proposed in this thesis are both innovative and practical. With these three papers, we have successfully addressed the problems of QoS-based routing for WMSNs and of optimal deployment of directional sensors in 3D space, which together constitute the main objective of this thesis.
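    As a rough, hypothetical illustration of the greedy coverage idea mentioned in the abstract above (not the thesis's actual ILP or greedy algorithm), the following Python sketch assumes that candidate sensor configurations (position, orientation, sensing range, FoV) and the control points each one covers have already been precomputed; connectivity constraints are deliberately left out.

        from typing import Dict, FrozenSet, Hashable, List, Set

        def greedy_cover(candidates: Dict[Hashable, FrozenSet[int]],
                         control_points: Set[int]) -> List[Hashable]:
            """Repeatedly pick the candidate configuration that covers the most
            still-uncovered control points, until every point is covered."""
            uncovered = set(control_points)
            chosen: List[Hashable] = []
            while uncovered:
                best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
                gained = candidates[best] & uncovered
                if not gained:                      # remaining points are uncoverable
                    raise ValueError("some control points cannot be covered")
                chosen.append(best)
                uncovered -= gained
            return chosen

        # Toy example: 4 control points, 3 hypothetical (position, orientation) configs.
        cands = {"cam1@(0,0,0)/east":  frozenset({1, 2}),
                 "cam2@(5,0,2)/north": frozenset({2, 3}),
                 "cam3@(5,5,2)/west":  frozenset({3, 4})}
        print(greedy_cover(cands, {1, 2, 3, 4}))    # e.g. picks cam1 and cam3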

    Interference management in impulse-radio ultra-wide band networks

    We consider networks of impulse-radio ultra-wide band (IR-UWB) devices. We are interested in the architecture, design, and performance evaluation of these networks in a low data-rate, self-organized, and multi-hop setting. IR-UWB is a potential physical layer for sensor networks and emerging pervasive wireless networks. These networks are likely to have no particular infrastructure, might have nodes embedded in everyday life objects and have a size ranging from a few dozen nodes to large-scale networks composed of hundreds of nodes. Their average data-rate is low, on the order of a few megabits per second. IR-UWB physical layers are attractive for these networks because they potentially combine low-power consumption, robustness to multipath fading and to interference, and location/ranging capability. The features of an IR-UWB physical layer greatly differ from the features of the narrow-band physical layers used in existing wireless networks. First, the bandwidth of an IR-UWB physical layer is at least 500 MHz, which is easily two orders of magnitude larger than the bandwidth used by a typical narrow-band physical layer. Second, this large bandwidth implies stringent radio spectrum regulations because UWB systems might occupy a portion of the spectrum that is already in use. Consequently, UWB systems exhibit extremely low power spectral densities. Finally IR-UWB physical layers offer multi-channel capabilities for multiple and concurrent access to the physical layer. Hence, the architecture and design of IR-UWB networks are likely to differ significantly from narrow-band wireless networks. For the network to operate efficiently, it must be designed and implemented to take into account the features of IR-UWB and to take advantage of them. In this thesis, we focus on both the medium access control (MAC) layer and the physical layer. Our main objectives are to understand and determine (1) the architecture and design principles of IR-UWB networks, and (2) how to implement them in practical schemes. In the first part of this thesis, we explore the design space of IR-UWB networks and analyze the fundamental design choices. We show that interference from concurrent transmissions should not be prevented as in protocols that use mutual exclusion (for instance, IEEE 802.11). Instead, interference must be managed with rate adaptation, and an interference mitigation scheme should be used at the physical layer. Power control is useless. Based on these findings, we develop a practical PHY-aware MAC protocol that takes into account the specific nature of IR-UWB and that is able to adapt its rate to interference. We evaluate the performance obtained with this design: It clearly outperforms traditional designs that, instead, use mutual exclusion or power control. One crucial aspect of IR-UWB networks is packet detection and timing acquisition. In this context, a network design choice is whether to use a common or private acquisition preamble for timing acquisition. Therefore, we evaluate how this network design issue affects the network throughput. Our analysis shows that a private acquisition preamble yields a tremendous increase in throughput, compared with a common acquisition preamble. In addition, simulations on multi-hop topologies with TCP flows demonstrate that a network using private acquisition preambles has a stable throughput. 
On the contrary, using a common acquisition preamble exhibits an effect similar to exposed terminal issues in 802.11 networks: the throughput is severely degraded and flow starvation might occur. In the second part of this thesis, we are interested in IEEE 802.15.4a, a standard for low data-rate, low complexity networks that employs an IR-UWB physical layer. Due to its low complexity, energy detection is appealing for the implementation of practical receivers. But it is less robust to multi-user interference (MUI) than a coherent receiver. Hence, we evaluate the performance of an IEEE 802.15.4a physical layer with an energy detection receiver to find out whether a satisfactory performance is still obtained. Our results show that MUI severely degrades the performance in this case. The energy detection receiver significantly diminishes one of the most appealing benefits of UWB, specifically its robustness to MUI and thus the possibility of allowing for parallel transmissions. This performance analysis leads to the development of an IR-UWB receiver architecture, based on energy detection, that is robust to MUI and adapted to the peculiarities of IEEE 802.15.4a. This architecture greatly improves the performance and entails only a moderate increase in complexity. Finally, we present the architecture of an IR-UWB physical layer implementation in ns-2, a well-known network simulator. This architecture is generic and allows for the simulation of several multiple-access physical layers. In addition, it comprises a model of packet detection and timing acquisition. Network simulators also need to have efficient algorithms to accurately compute bit or packet error rates. Hence, we present a fast algorithm to compute the bit error rate of an IR-UWB physical layer in a network setting with MUI. It is based on a novel combination of large deviation theory and importance sampling
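    As a hedged, self-contained illustration of the importance-sampling principle only (the thesis's actual bit-error-rate algorithm combines large-deviation theory with importance sampling in a multi-user-interference setting, which is far more involved), the sketch below estimates a small Gaussian tail probability, standing in for a rare bit-error event, by sampling from a distribution shifted onto the error region and re-weighting with the likelihood ratio.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)

        def tail_prob_is(gamma: float, n: int = 100_000) -> float:
            """Estimate P(N > gamma) for N ~ N(0, 1) -- a stand-in for a rare
            bit-error event -- by drawing from N(gamma, 1) and re-weighting
            each sample with the likelihood ratio phi(x) / phi(x - gamma)."""
            x = rng.normal(loc=gamma, scale=1.0, size=n)
            w = np.exp(-gamma * x + 0.5 * gamma ** 2)
            return float(np.mean((x > gamma) * w))

        gamma = 4.0
        print("importance sampling:", tail_prob_is(gamma))
        print("exact Gaussian tail:", norm.sf(gamma))   # for comparison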

    Multi-hop relaying networks in TDD-CDMA systems

    The communications phenomena at the end of the 20th century were the Internet and mobile telephony. Now, entering the new millennium, an effective combination of the two should become a similarly everyday experience. Current limitations include scarce, exorbitantly priced bandwidth and considerable power consumption at higher data rates. Relaying systems use several shorter communications links instead of the conventional point-to-point transmission. This can allow for a lower power requirement and, due to the shorter broadcast range, bandwidth re-use may be more efficiently exploited. Code division multiple access (CDMA) is emerging as one of the most common methods for multi user access. Combining CDMA with time division duplexing (TDD) provides a system that supports asymmetric communications and relaying cost-effectively. The capacity of CDMA may be reduced by interference from other users, hence it is important that the routing of relays is performed to minimise interference at receivers. This thesis analyses relaying within the context of TDD-CDMA systems. Such a system was included in the initial draft of the European 3G specifications as opportunity driven multiple access (ODMA). Results are presented which demonstrate that ODMA allows for a more flexible capacity coverage trade-off than non-relaying systems. An investigation into the interference characteristics of ODMA shows that most interference occurs close to the base station (BS). Hence it is possible that in-cell routing to avoid the BS may increase capacity. As a result, a novel hybrid network topology is presented. ODMA uses path loss as a metric for routing. This technique does not avoid interference, and hence ODMA shows no capacity increase with the hybrid network. Consequently, a novel interference based routing algorithm and admission control are developed. When at least half the network is engaged in in-cell transmission, the interference based system allows for a higher capacity than a conventional cellular system. In an attempt to reduce transmitted power, a novel congestion based routing algorithm is introduced. This system is shown to have lower power requirement than any other analysed system and, when more than 2 hops are allowed, the highest capacity. The allocation of time slots affects system performance through co-channel interference. To attempt to minimise this, a novel dynamic channel allocation (DCA) algorithm is developed based on the congestion routing algorithm. By combining the global minimisation of system congestion in both time slots and routing, the DCA further increases throughput. Implementing congestion routed relaying, especially with DCA, in any TDD-CDMA system with in-cell calls can show significant performance improvements over conventional cellular systems
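    To show the general shape of metric-driven route selection (only a sketch; the thesis's interference- and congestion-based routing and admission control are considerably richer), the following Python fragment runs Dijkstra over hypothetical per-hop interference costs, so that the chosen relay path minimises total added interference rather than path loss. The topology and cost values are invented.

        import heapq
        from typing import Dict, Hashable, List, Tuple

        Graph = Dict[Hashable, List[Tuple[Hashable, float]]]   # node -> [(next hop, cost)]

        def min_interference_route(graph: Graph, src: Hashable, dst: Hashable) -> List[Hashable]:
            """Dijkstra over per-hop costs; with cost = interference raised at the
            receiver, the route minimises total added interference."""
            dist, prev = {src: 0.0}, {}
            heap, seen = [(0.0, src)], set()
            while heap:
                d, u = heapq.heappop(heap)
                if u in seen:
                    continue
                seen.add(u)
                if u == dst:
                    break
                for v, cost in graph.get(u, []):
                    nd = d + cost
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            if dst not in dist:
                return []                                   # unreachable
            path, node = [dst], dst
            while node != src:
                node = prev[node]
                path.append(node)
            return path[::-1]

        # Toy topology: user equipment, two relays, base station; costs are
        # hypothetical interference estimates at each receiver (linear scale).
        g = {"UE": [("R1", 0.20), ("BS", 1.00)],
             "R1": [("R2", 0.10), ("BS", 0.50)],
             "R2": [("BS", 0.15)]}
        print(min_interference_route(g, "UE", "BS"))        # ['UE', 'R1', 'R2', 'BS']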

    Deployment and operational aspects of rural broadband wireless access networks

    Broadband speeds, Internet literacy and digital technologies have been steadily evolving over the last decade. Broadband infrastructure has become a key asset in today’s society, enabling innovation, driving economic efficiency and stimulating cultural inclusion. However, populations living in remote and rural communities are unable to take advantage of these trends. Globally, a significant part of the world population is still deprived of basic access to the Internet. Broadband Wireless Access (BWA) networks are regarded as a viable solution for providing Internet access to populations living in rural regions. In recent years, Wireless Internet Service Providers (WISPs) and community organizations around the world have proved that rural BWA networks can be an effective strategy and a profitable business. This research began by deploying a BWA network testbed, which also provides Internet access to several remote communities in the harsh environment of the Scottish Highlands and Islands. The experience of deploying and operating this network pointed out three unresolved research challenges that need to be addressed to ease the path towards widespread deployment of rural BWA networks, thereby bridging the rural-urban broadband divide. Below, our research contributions are outlined with respect to these challenges. First, an effective planning paradigm for deploying BWA networks is proposed: incremental planning. Incremental planning allows operators to anticipate return on investment and to work around the limited network infrastructure (e.g., backhaul fibre links) in rural areas. I have developed a software tool called IncrEase, with underlying network planning algorithms that consider a varied set of operational metrics, to guide the operator in identifying the regions that would benefit the most from a network upgrade, automatically suggesting the best long-term strategy to the network administrator. Second, we recognize that rural and community networks present additional issues for network management. As the Internet uplink is often the most expensive part of the operational expenses for such deployments, it is desirable to minimize the overhead of network management. Also, unreliable connectivity between the network operation centre and the network being managed can render traditional centralized management approaches ineffective. Finally, the number of skilled personnel available to maintain such networks is limited. I have developed a distributed network management platform for BWA networks called Stix, to make such networks easy to manage for rural/community deployments and WISPs alike, while keeping the network management infrastructure scalable and flexible. Our approach is based on the notions of goal-oriented and in-network management: administrators graphically specify network management activities as workflows, which are run in the network on a distributed set of agents that cooperate in executing those workflows and storing management information. The Stix system was implemented on low-cost and small form-factor embedded boards and shown to have a low memory footprint. Third, the research focus moves to the problem of assessing broadband coverage and quality in a given geographic region. The outcome is BSense, a flexible framework that combines data provided by ISPs with measurements gathered by distributed software agents. The result is a census (presented as maps and tables) of the coverage and quality of broadband connections available in the region of interest.
Such information can be exploited by ISPs to drive their growth, and by regulators and policy makers to get the true picture of broadband availability in the region and make informed decisions. In exchange for installing the multi-platform measurement software (that runs in the background) on their computers, users can get statistics about their Internet connection and those in their neighbourhood. Finally, the lessons learned through this research are summarised. The outcome is a set of suggestions about how the deployment and operation of rural BWA networks, including our own testbed, can be made more efficient by using the proper tools. The software systems presented in this thesis have been evaluated in lab settings and in real networks, and are available as open-source software
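    Purely as a hypothetical sketch of the incremental-planning idea (IncrEase's real algorithms and metrics are not reproduced here), the snippet below ranks candidate regions by demand served per unit upgrade cost and greedily picks upgrades within a budget; the region names, fields and figures are invented.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Region:
            name: str
            demand: float         # e.g. aggregate user demand (Mbit/s)
            upgrade_cost: float   # cost of extending the network to this region

        def plan_increment(regions: List[Region], budget: float) -> List[str]:
            """Rank candidate regions by demand served per unit cost, then pick
            upgrades greedily until the budget runs out."""
            ranked = sorted(regions, key=lambda r: r.demand / r.upgrade_cost, reverse=True)
            plan, spent = [], 0.0
            for r in ranked:
                if spent + r.upgrade_cost <= budget:
                    plan.append(r.name)
                    spent += r.upgrade_cost
            return plan

        regions = [Region("Glen A",   demand=40, upgrade_cost=10_000),
                   Region("Isle B",   demand=25, upgrade_cost=4_000),
                   Region("Strath C", demand=60, upgrade_cost=30_000)]
        print(plan_increment(regions, budget=15_000))       # ['Isle B', 'Glen A']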

    Dynamic Resource Management of Network-on-Chip Platforms for Multi-stream Video Processing

    This thesis considers resource management in the context of parallel multiple video stream decoding on multicore/many-core platforms. Such platforms have tens or hundreds of on-chip processing elements which are connected via a Network-on-Chip (NoC). Inefficient task allocation configurations can negatively affect the communication cost and resource contention in the platform, leading to predictability and performance issues. Efficient resource management for large-scale complex workloads is considered a challenging research problem, especially when applications such as video streaming and decoding have dynamic and unpredictable workload characteristics. For these types of applications, runtime heuristic-based task mapping techniques are required. As the application and platform size increase, decentralised resource management techniques are more desirable to overcome the reliability and performance bottlenecks of centralised management. In this work, several heuristic-based runtime resource management techniques targeting real-time video decoding workloads are proposed. Firstly, two admission control approaches are proposed: one fully deterministic and highly predictable, the other heuristic-based, balancing predictability and performance. Secondly, a pair of runtime task mapping schemes is presented, which make use of limited known application properties, communication cost and blocking-aware heuristics. Combined with the proposed deterministic admission controller, these techniques can provide strict timing guarantees for hard real-time streams whilst improving resource usage. The third contribution in this thesis is a distributed, bio-inspired, low-overhead task re-allocation technique, which is used to further improve the timeliness and workload distribution of admitted soft real-time streams. Finally, this thesis explores parallelisation and resource management issues surrounding soft real-time video streams that have been encoded using complex encoding tools and modern codecs such as High Efficiency Video Coding (HEVC). Properties of real streams and decoding trace data are analysed to statistically model and generate synthetic HEVC video decoding workloads. These workloads are shown to have complex and varying task dependency structures and resource requirements. To address these challenges, two novel runtime task clustering and mapping techniques for tile-parallel HEVC decoding are proposed. These strategies consider the workload communication-to-computation ratio and stream-specific characteristics to balance predictability improvement and communication energy reduction. Lastly, several task-to-memory-controller port assignment schemes are explored to alleviate performance bottlenecks resulting from memory traffic contention.
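    As a minimal sketch of a communication-cost-aware mapping heuristic of the general kind discussed above (not the thesis's actual schemes, which also handle admission control, blocking and timing guarantees), the following Python fragment places tasks one by one on a mesh NoC so as to minimise traffic volume weighted by Manhattan hop distance; the task graph and traffic volumes are hypothetical.

        import itertools
        from typing import Dict, Tuple

        Task = str
        Tile = Tuple[int, int]                # (x, y) coordinates on the mesh NoC

        def map_tasks(comm: Dict[Tuple[Task, Task], float], mesh: int) -> Dict[Task, Tile]:
            """Place tasks one by one (heaviest communicators first) on the free
            tile minimising traffic volume x Manhattan hop distance to the tasks
            already mapped (Manhattan distance approximates XY-routed NoC hops)."""
            tasks = sorted({t for pair in comm for t in pair},
                           key=lambda t: -sum(v for p, v in comm.items() if t in p))
            free = set(itertools.product(range(mesh), range(mesh)))
            placement: Dict[Task, Tile] = {}
            for t in tasks:
                def cost(tile: Tile) -> float:
                    total = 0.0
                    for (a, b), v in comm.items():
                        if t not in (a, b):
                            continue
                        other = b if a == t else a
                        if other in placement:
                            ox, oy = placement[other]
                            total += v * (abs(tile[0] - ox) + abs(tile[1] - oy))
                    return total
                best = min(sorted(free), key=cost)          # sorted() for determinism
                placement[t] = best
                free.remove(best)
            return placement

        # Hypothetical traffic volumes (flits) between decoder tasks of one stream.
        comm = {("parse", "dec0"): 50, ("parse", "dec1"): 50,
                ("dec0", "filter"): 80, ("dec1", "filter"): 80}
        print(map_tasks(comm, mesh=4))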

    Aerial Vehicles

    This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space.

    Minimum-energy image compression and transmission: application to wireless sensors

    A wireless image sensor network (WISN) is an ad hoc network formed by a set of autonomous nodes, each equipped with a small camera, communicating with one another without wired links and without relying on an established infrastructure or centralized network management. Such networks appear highly useful in several domains, notably medicine and environmental monitoring. Designing a compression and wireless transmission chain for a WISN raises real challenges, mainly because of the limited resources of the sensors (weak battery, limited processing capacity and memory). The objective of this thesis is to explore strategies for improving the energy efficiency of WISNs, in particular during image compression and transmission. Applying the usual standards such as JPEG or JPEG2000 as-is is inevitably energy hungry and thus limits the lifetime of WISNs; these standards must therefore be adapted to the constraints imposed by WISNs. To this end, we first analysed the feasibility of adapting JPEG to a context where energy resources are very limited. This work led us to propose three solutions. The first solution is based on the energy-compaction property of the Discrete Cosine Transform (DCT), which allows redundancy to be removed from an image without degrading its quality too much, while saving energy. Reducing energy through the use of regions of interest is the second solution explored in this thesis. Finally, we proposed a scheme based on progressive compression and transmission, which gives a general idea of the target image without sending its entire content. In addition, for energy-frugal transmission, we adopted the following approach: only the low frequencies and the regions of interest of an image are sent reliably, while the high frequencies and the regions of lesser interest are sent unreliably, since their loss only slightly degrades image quality. For this purpose, several prioritization models were compared and then adapted to our needs. Secondly, we studied the wavelet approach. More precisely, we analysed several wavelet filters and determined the wavelets best suited to ensuring low energy consumption while preserving a good quality of the image reconstructed at the base station. To estimate the energy consumed by a sensor during each stage of compression, a mathematical model is developed for each transform (DCT or wavelet). These models, which do not account for implementation complexity, are based on the number of elementary operations executed at each compression stage.
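    To make the energy-compaction property of the DCT concrete (this is only a numpy/scipy illustration, not the thesis's adapted JPEG chain or its energy model), the sketch below keeps only the low-frequency corner of an 8x8 block's DCT coefficients and reconstructs the block; fewer coefficients to code and transmit means fewer operations and less radio energy, at a small quality cost.

        import numpy as np
        from scipy.fft import dctn, idctn

        def compress_block(block: np.ndarray, keep: int = 4) -> np.ndarray:
            """Keep only the keep x keep lowest-frequency DCT coefficients of an
            8x8 block (where most of the energy is compacted) and reconstruct."""
            coeffs = dctn(block, norm="ortho")
            mask = np.zeros_like(coeffs)
            mask[:keep, :keep] = 1.0
            return idctn(coeffs * mask, norm="ortho")

        rng = np.random.default_rng(1)
        x, y = np.meshgrid(np.arange(8), np.arange(8))
        block = 10.0 * x + 5.0 * y + rng.normal(scale=2.0, size=(8, 8))  # smooth toy block
        rec = compress_block(block, keep=4)
        print("coefficients kept:", 4 * 4, "of", 8 * 8)
        print("reconstruction RMSE:", round(float(np.sqrt(np.mean((block - rec) ** 2))), 3))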

    An Embryonics Inspired Architecture for Resilient Decentralised Cloud Service Delivery

    Data-driven artificial intelligence applications arising from Internet of Things technologies can have profound, wide-reaching societal benefits at the cross-section of the cyber and physical domains. Use-cases are expanding rapidly. For example, smart-homes and smart-buildings provide intelligent monitoring, resource optimisation, safety, and security for their inhabitants. Smart cities can manage transport, waste, energy, and crime on large scales, whilst smart-manufacturing can autonomously produce goods through the self-management of factories and logistics. As these use-cases expand further, the requirement to ensure data is processed accurately and timely becomes ever more crucial, as many of these applications are safety critical, where loss of life and economic damage are likely in the event of system failure. While the typical service delivery paradigm, cloud computing, is strong because it operates upon economies of scale, its physical distance from these applications creates network latency that is incompatible with safety-critical applications. To complicate matters further, the environments these applications operate in are becoming increasingly hostile, with resource-constrained and mobile wireless networking commonplace. These issues drive the need for new service delivery architectures which operate closer to, or even upon, the network devices, sensors and actuators which compose these IoT applications at the network edge. These hostile and resource-constrained environments require the adaptation of traditional cloud service delivery models to decentralised mobile and wireless environments. Such architectures need to provide persistent service delivery in the face of a variety of internal and external changes: resilient decentralised cloud service delivery. While the current state of the art proposes numerous techniques to enhance the resilience of services in this manner, none provides an architecture capable of delivering data processing services, in a cloud manner, that is inherently resilient. Adopting techniques from autonomic computing, whose characteristics are resilient by nature, this thesis presents a biologically-inspired platform modelled on embryonics. Embryonic systems have an ability to self-heal and self-organise whilst showing the capacity to support decentralised data processing. An initial model for embryonics-inspired resilient decentralised cloud service delivery is derived according to both the decentralised cloud and the resilience requirements given for this work. Next, this model is simulated using cellular automata, which illustrate the embryonic concept’s ability to provide self-healing service delivery under varying levels of system component loss. This highlights optimisation techniques, including: application complexity bounds, differentiation optimisation, self-healing aggression, and varying system starting conditions. All of these attributes can be adjusted to vary the resilience performance of the system depending upon different resource capabilities and environmental hostilities. Next, a proof-of-concept implementation is developed and validated which illustrates the efficacy of the solution. This proof-of-concept is evaluated on a larger scale, where batches of tests highlighted the different performance criteria and constraints of the system. One key finding was the considerable quantity of redundant messages produced under successful scenarios, which were helpful in terms of enabling resilience yet could increase network contention.
Therefore, balancing these attributes is important for each use-case. Finally, graph-based resilience algorithms were executed across all tests to understand the structural resilience of the system and whether this enabled suitable measurement or prediction of the application’s resilience. Interestingly, this study highlighted that although the system was not considered to be structurally resilient, the applications were still being executed in the face of many continued component failures. This highlights that the autonomic embryonic functionality developed was succeeding in executing applications resiliently, illustrating that structural and application resilience do not necessarily coincide. Additionally, one graph metric, assortativity, was highlighted as being predictive of application resilience, although not of structural resilience.
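    As a toy illustration of the kind of graph-based resilience measurement mentioned above (the graphs here are invented stand-ins, not the thesis's application or platform graphs), the snippet below computes degree assortativity and connectivity with networkx before and after simulated component loss.

        import networkx as nx

        # Toy stand-in for an inter-cell communication graph: a regular 4x4 mesh,
        # before and after a few simulated cell failures.
        healthy = nx.grid_2d_graph(4, 4)
        failed = healthy.copy()
        failed.remove_nodes_from([(0, 0), (1, 1), (2, 2)])

        for label, g in [("healthy", healthy), ("after failures", failed)]:
            r = nx.degree_assortativity_coefficient(g)
            print(f"{label:>15}: assortativity = {r:+.3f}, connected = {nx.is_connected(g)}")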

    Robust Analysis of Sensor Coverage and Location for Real-Time Traffic Estimation and Prediction in Large-Scale Networks

    The growing need of agencies to obtain real-time information on the traffic state of key facilities in the systems they manage is driving interest in cost-effective deployment of sensor technologies across their networks. This has led to greater interest in the sensor location problem. Finding a set of optimal sensor locations is a network design problem. This dissertation addresses a series of critical and challenging issues in the robustness analysis of sensor coverage and location under different traffic conditions, in the context of real-time traffic estimation and prediction in a large-scale traffic network. The research presented in this dissertation represents an important step towards the optimization of sensor locations based on dynamic traffic assignment methodology. It proposes an effective methodology to find optimal sensor coverage and locations, for a specified number of sensors, through an iterative mathematical bi-level optimization framework. The proposed methods help transportation planners locate a minimal number of sensors to completely cover all or a subset of OD pairs in a network without budgetary constraints, or optimally locate a limited number of sensors by considering link information gains (the weight each link contributes to correcting a-priori origin-destination flows) and flow coverage under budgetary constraints. Network uncertainties associated with the sensor location problem are considered in the mathematical formulation. The model is formulated as a two-stage stochastic model. The first-stage decision denotes a strategic sensor location plan made before any random events are observed, while the recourse function associated with the second stage denotes the expected cost of taking corrective actions to the first-stage solution after the occurrence of the random events. Recognizing the location problem as an NP-hard problem, a hybrid Greedy Randomized Adaptive Search Procedure (GRASP) is employed to circumvent the difficulty of exhaustively exploring the feasible solutions and to find a near-optimal solution. The proposed solution procedure operates in two stages. In stage one, a restricted candidate list (RCL) is generated by choosing a set of top candidate locations sorted by link flows; a predetermined number of links is then randomly selected from the RCL according to a link-independence rule. In stage two, the candidate locations generated in stage one are evaluated in terms of the magnitude of flow variation reduction and coverage of the origin-destination flows, using archived historical and simulated traffic data. The proposed approaches are tested on several actual networks and the results are analysed.
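    As a hedged sketch of the GRASP construction phase only (the dissertation's hybrid GRASP also evaluates flow-variation reduction and OD coverage from archived and simulated data, and classical GRASP adds a local-search phase), the following Python fragment builds a restricted candidate list from the highest-flow links, samples placements from it and keeps the best-scoring one; the flows and the evaluator below are invented.

        import random
        from typing import Callable, Dict, List, Tuple

        def grasp_locations(link_flows: Dict[str, float], n_sensors: int, rcl_size: int,
                            evaluate: Callable[[List[str]], float],
                            iterations: int = 50, seed: int = 0) -> Tuple[List[str], float]:
            """GRASP-style construction: build a restricted candidate list (RCL) of
            the highest-flow links, randomly sample sensor placements from it,
            score each placement with an external evaluator, and keep the best."""
            rng = random.Random(seed)
            rcl = [l for l, _ in sorted(link_flows.items(), key=lambda kv: -kv[1])][:rcl_size]
            best_plan, best_score = [], float("-inf")
            for _ in range(iterations):
                plan = rng.sample(rcl, k=min(n_sensors, len(rcl)))   # construction phase
                score = evaluate(plan)                               # e.g. OD coverage gain
                if score > best_score:
                    best_plan, best_score = plan, score
            return best_plan, best_score

        flows = {"L1": 900, "L2": 850, "L3": 400, "L4": 820, "L5": 120, "L6": 700}
        plan, score = grasp_locations(flows, n_sensors=3, rcl_size=4,
                                      evaluate=lambda p: sum(flows[l] for l in p))
        print(sorted(plan), score)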

    Performance modelling with adaptive hidden Markov models and discriminatory processor sharing queues

    In modern computer systems, workload varies at different times and locations. It is important to model the performance of such systems via workload models that are both representative and efficient. For example, model-generated workloads represent realistic system behaviour, especially during peak times, when it is crucial to predict and address performance bottlenecks. In this thesis, we model performance, namely throughput and delay, using adaptive models and discrete queues. Hidden Markov models (HMMs) parsimoniously capture the correlation and burstiness of workloads with spatiotemporal characteristics. By adapting the batch training of standard HMMs to incremental learning, online HMMs act as benchmarks on workloads obtained from live systems (i.e. storage systems and financial markets) and reduce the time complexity of the Baum-Welch algorithm. Similarly, by extending HMM capabilities to train on multiple traces simultaneously, workloads of different types can be modelled in parallel by a multi-input HMM. Typically, the HMM-generated traces verify the throughput and burstiness of the real data. Applications of adaptive HMMs include predicting user behaviour in social networks and performance-energy measurements in smartphone applications. Equally important is measuring system delay through response times. For example, workloads such as Internet traffic arriving at routers are affected by queueing delays. To meet quality-of-service needs, queueing delays must be minimised and, hence, it is important to model and predict such queueing delays in an efficient and cost-effective manner. Therefore, we propose a class of discrete, processor-sharing queues for approximating queueing delay as response time distributions, which represent service level agreements at specific spatiotemporal levels. We adapt discrete queues to model job arrivals with distributions given by a Markov-modulated Poisson process (MMPP) and served under discriminatory processor-sharing scheduling. Further, we propose a dynamic strategy of service allocation to minimise delays in UDP traffic flows whilst maximising a utility function.
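    As a minimal discrete-time simulation of discriminatory processor sharing (not the thesis's analytical model, which approximates response-time distributions under MMPP arrivals), the sketch below shows how per-class weights differentiate mean response times between two job classes; all parameter values are made up.

        import random
        from typing import Dict

        def dps_mean_response(weights: Dict[str, float], arrival_p: Dict[str, float],
                              mean_size: Dict[str, float], steps: int = 200_000,
                              seed: int = 0) -> Dict[str, float]:
            """Discrete-time discriminatory processor sharing: in every slot each
            class-c job receives a service share g_c / sum_k(g_k * n_k), where n_k
            is the number of class-k jobs present. Bernoulli arrivals and
            exponentially distributed job sizes keep this toy model simple."""
            rng = random.Random(seed)
            jobs = {c: [] for c in weights}                 # [remaining work, arrival slot]
            done = {c: 0 for c in weights}
            resp = {c: 0.0 for c in weights}
            for t in range(steps):
                for c, p in arrival_p.items():              # Bernoulli arrivals
                    if rng.random() < p:
                        jobs[c].append([rng.expovariate(1.0 / mean_size[c]), t])
                total = sum(weights[c] * len(jobs[c]) for c in weights)
                if total == 0:
                    continue
                for c in weights:
                    per_job = weights[c] / total            # DPS share of one class-c job
                    still_running = []
                    for work, arrived in jobs[c]:
                        work -= per_job
                        if work <= 0:
                            done[c] += 1
                            resp[c] += t - arrived
                        else:
                            still_running.append([work, arrived])
                    jobs[c] = still_running
            return {c: resp[c] / max(done[c], 1) for c in weights}

        print(dps_mean_response(weights={"gold": 4.0, "best_effort": 1.0},
                                arrival_p={"gold": 0.05, "best_effort": 0.05},
                                mean_size={"gold": 4.0, "best_effort": 4.0}))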