10 research outputs found

    Lease based addressing for event-driven wireless sensor networks

    Full text link
    Sensor networks have applications in diverse fields. They can be deployed for habitat modeling, temperature monitoring, and industrial sensing, and they also find use in battlefield awareness and emergency (first) response situations. While unique addressing is not a requirement of many data-collecting applications of wireless sensor networks, it is vital for the success of applications such as emergency response: data that cannot be associated with a specific node is useless in such situations. In this work we propose an addressing mechanism for event-driven wireless sensor networks. The proposed scheme eliminates the need for network-wide Duplicate Address Detection (DAD) and enables reuse of addresses.
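
The abstract does not detail the lease mechanism itself, but the general idea can be sketched as follows: a coordinator grants each node an address with an expiry time, and expired addresses return to the pool for reuse, with no network-wide DAD exchange. All names below are hypothetical; this is an illustration of lease-based allocation in general, not the paper's protocol.

```python
import heapq

class LeaseManager:
    """Toy lease-based address allocator (illustrative only).

    A coordinator hands out addresses with an expiry time; expired
    leases are reclaimed, so addresses can be reused without
    network-wide duplicate address detection.
    """

    def __init__(self, pool_size, lease_duration):
        self.free = list(range(pool_size))   # unallocated addresses
        self.lease_duration = lease_duration
        self.active = []                     # min-heap of (expiry, addr)

    def acquire(self, now):
        """Grant an address lease at time `now`, or None if exhausted."""
        self._reclaim(now)
        if not self.free:
            return None
        addr = self.free.pop()
        heapq.heappush(self.active, (now + self.lease_duration, addr))
        return addr

    def _reclaim(self, now):
        # Return every expired address to the free pool for reuse.
        while self.active and self.active[0][0] <= now:
            _, addr = heapq.heappop(self.active)
            self.free.append(addr)
```

With a pool of two addresses and a lease of ten time units, a third request fails until an earlier lease expires, at which point the freed address is silently reused.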

    Contributions to Wireless multi-hop networks : Quality of Services and Security concerns

    Get PDF
    This document summarizes my research work conducted over the last six years. The main subject of my contributions is the design and evaluation of solutions for multi-hop wireless networks, in particular mobile ad hoc networks (MANETs), vehicular ad hoc networks (VANETs), and wireless sensor networks (WSNs). The key question of my research is: "how can data transport be made efficient in terms of quality of service (QoS), energy resources, and security in multi-hop wireless networks?" To answer this question, I worked in particular on the MAC and network layers and used a cross-layer approach. Multi-hop wireless networks raise several problems related to resource management and to data transport able to support a large number of nodes while ensuring a high level of quality of service and security. In MANETs, the absence of infrastructure rules out a centralized approach to managing shared resources, such as channel access. Unlike WLANs (infrastructure-based wireless networks), in ad hoc networks neighbouring nodes become competitors, and it is difficult to ensure fairness and throughput optimization. The IEEE 802.11 standard does not take fairness between nodes into account in the MANET context. Although this standard offers different transmission rates, it does not specify how to allocate these rates efficiently. Moreover, MANETs are based on the concept of cooperation between nodes to form and manage a network: a lack of cooperation between nodes means the absence of the entire network. It is therefore essential to find solutions for non-cooperative or selfish nodes. Finally, multi-hop wireless communication can help extend radio coverage. 
Edge nodes must cooperate to forward the packets of neighbouring nodes located outside the base station's coverage area. In VANETs, data dissemination for safety applications is a real challenge. To ensure fast and global distribution of information, the transmission method used is broadcast. This method has several drawbacks: massive data loss due to collisions, no acknowledgement of packet reception, uncontrolled transmission delay, and information redundancy. Moreover, safety applications transmit critical information whose reliability and authenticity must be guaranteed. In WSNs, limited resources (bandwidth, memory, energy, and computing capacity), together with the wireless link and mobility, make the design of an efficient communication protocol difficult. Some applications require substantial resources (throughput, energy, etc.) as well as security services such as data confidentiality, data integrity, and mutual authentication. These requirements conflict, and reconciling them is a genuine challenge. In addition, to transmit information, some applications need to know the position of the nodes in the network. Localization techniques suffer from a lack of precision, especially in indoor environments, and cannot locate nodes within a bounded time. Finally, node localization is necessary for tracking objects, whether communicating or not; object tracking is an energy-hungry process that requires precision. To address these challenges, we have proposed and evaluated solutions, organized as follows: the contributions dedicated to MANETs are presented in the second chapter. 
The third chapter describes the solutions for VANETs. Finally, the contributions related to WSNs are presented in the fourth chapter.

    Cautiously Optimistic Program Analyses for Secure and Reliable Software

    Full text link
    Modern computer systems still have various security and reliability vulnerabilities. Well-known dynamic analyses can mitigate them using runtime monitors that serve as lifeguards, but the additional work of enforcing these security and safety properties incurs exorbitant performance costs, and such tools are rarely used in practice. Our work addresses this problem with a novel technique: Cautiously Optimistic Program Analysis (COPA). COPA is optimistic: it infers likely program invariants from dynamic observations and assumes them in its static reasoning to precisely identify and elide wasteful runtime monitors. The resulting system is fast, but it also ensures soundness by recovering to a conservatively optimized analysis when a likely invariant rarely fails at runtime. COPA is also cautious: by carefully restricting optimizations to only safe elisions, the recovery is greatly simplified. It avoids unbounded rollbacks upon recovery, thereby enabling analysis of live production software. We demonstrate the effectiveness of Cautiously Optimistic Program Analyses in three areas. Information-Flow Tracking (IFT) can help prevent security breaches and information leaks, but it is rarely used in practice due to its high performance overhead (>500% for web/email servers); COPA dramatically reduces this cost by eliding wasteful IFT monitors, making it practical (9% overhead, 4x speedup). Automatic Garbage Collection (GC) in managed languages (e.g. Java) simplifies programming tasks while ensuring memory safety; however, there is no correct GC for weakly-typed languages (e.g. C/C++), and manual memory management is prone to errors that have been exploited in high-profile attacks. We develop the first sound GC for C/C++, and use COPA to optimize its performance (16% overhead). Sequential Consistency (SC) provides intuitive semantics to concurrent programs that simplifies reasoning about their correctness. 
However, ensuring SC behavior on commodity hardware remains expensive. We use COPA to ensure SC for Java at the language level efficiently, significantly reducing its cost (from 24% down to 5% on x86). COPA provides a way to realize strong software security, reliability, and semantic guarantees at practical costs.
PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/170027/1/subarno_1.pd
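
The optimistic/cautious pattern the abstract describes can be sketched in miniature: while a dynamically observed likely invariant holds, the expensive monitor is skipped; on the first violation the system recovers by falling back permanently to the conservative, always-sound path, with no rollback. This is a hypothetical illustration of the control flow only; the real COPA operates on static program analyses, not Python callables.

```python
def make_copa_check(invariant, slow_monitor):
    """Sketch of cautiously-optimistic monitor elision (names hypothetical).

    `invariant` is a cheap predicate inferred from dynamic observations;
    `slow_monitor` is the conservative, always-sound runtime monitor.
    """
    state = {"optimistic": True}

    def check(value):
        if state["optimistic"]:
            if invariant(value):
                return value             # fast path: monitor elided
            state["optimistic"] = False  # invariant violated: recover
        return slow_monitor(value)       # conservative path from here on
    return check
```

For example, with `invariant = lambda v: v >= 0` standing in for "this value is never tainted", the slow monitor runs only after the first negative value is seen.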

    Similarity and diversity: two sides of the same coin in the evaluation of data streams

    Get PDF
    Information Systems represent the primary instrument of growth for companies that operate in the so-called e-commerce environment. The data streams generated by users interacting with their websites are the primary source for defining user behavioral models. Main examples of services integrated into these websites are Recommender Systems, where these models are exploited to generate recommendations of items of potential interest to users; User Segmentation Systems, where the models are used to group users on the basis of their preferences; and Fraud Detection Systems, where these models are exploited to determine the legitimacy of a financial transaction. Even though in the literature diversity and similarity are considered two sides of the same coin, almost all approaches take them into account in a mutually exclusive manner rather than jointly. The aim of this thesis is to demonstrate that considering both sides of this coin is instead essential to overcome some well-known problems that afflict the state-of-the-art approaches used to implement these services, improving their performance. 
Its contributions are the following: with regard to recommender systems, the detection of diversity in a user profile is used to discard incoherent items, improving accuracy, while the exploitation of the similarity of the predicted items is used to re-rank the recommendations, improving their effectiveness; with regard to user segmentation systems, the detection of diversity overcomes the problem of the non-reliability of the data source, while the exploitation of similarity reduces the problems of understandability and triviality of the obtained segments; lastly, concerning fraud detection systems, the joint use of both diversity and similarity in the evaluation of a new transaction overcomes the problems of data scarcity, and those of non-stationary and unbalanced class distributions.
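
The standard way the literature combines the two "sides of the coin" in re-ranking is Maximal Marginal Relevance, which greedily prefers items that are relevant yet dissimilar to those already chosen. The sketch below illustrates that textbook technique, not necessarily the thesis's own method; `relevance` and `similarity` are caller-supplied functions and the weight `lam` trades one side against the other.

```python
def mmr_rerank(candidates, relevance, similarity, lam=0.5, k=3):
    """Maximal Marginal Relevance re-ranking (illustrative sketch).

    Greedily selects up to k items, scoring each as a weighted balance
    of relevance against redundancy with the items already selected.
    """
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(item):
            # Redundancy: closest similarity to anything already picked.
            redundancy = max((similarity(item, s) for s in selected), default=0.0)
            return lam * relevance(item) - (1 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected
```

With equally relevant but mutually similar items, the second pick jumps to a less relevant yet more diverse candidate, which is exactly the joint-use behaviour the abstract argues for.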

    Algorithmic skeletons for exact combinatorial search at scale

    Get PDF
    Exact combinatorial search is essential to a wide range of application areas, including constraint optimisation, graph matching, and computer algebra. Solutions to combinatorial problems are found by systematically exploring a search space, either to enumerate solutions, to determine whether a specific solution exists, or to find an optimal solution. Combinatorial searches are computationally hard in both theory and practice, and efficiently exploring the huge number of combinations is a real challenge, often addressed using approximate search algorithms. Alternatively, exact search can be parallelised to reduce execution time. However, parallel search is challenging due to both highly irregular search trees and sensitivity to search order, leading to anomalies that can cause unexpected speedups and slowdowns. As core counts continue to grow, parallel search becomes increasingly useful for improving the performance of existing searches and for allowing larger instances to be solved. A high-level approach to parallel search allows non-expert users to benefit from increasing core counts. Algorithmic skeletons provide reusable implementations of common parallelism patterns, parameterised with user code that determines the specific computation, e.g. a particular search. We define a set of skeletons for exact search, requiring the user to provide, in the minimal case, a single class that specifies how the search tree is generated and a parameter that specifies the type of search required. The five are: Sequential search; three general-purpose parallel search methods, Depth-Bounded, Stack-Stealing, and Budget; and a specific parallel search method, Ordered, that guarantees replicable performance. We implement and evaluate the skeletons in a new C++ parallel search framework, YewPar. YewPar provides both high-level skeletons and low-level search-specific schedulers and utilities to deal with the irregularity of search and knowledge exchange between workers. 
YewPar is based on the HPX library for distributed task-parallelism, potentially allowing search to execute on multi-cores, clusters, clouds, and high-performance computing systems. Underpinning the skeleton design is a novel formal model, MT^3, a parallel operational semantics that describes multi-threaded tree traversals, allowing reasoning about parallel search, e.g. describing common parallel search phenomena such as performance anomalies. YewPar is evaluated using seven different search applications (and over 25 specific instances): Maximum Clique, k-Clique, Subgraph Isomorphism, Travelling Salesperson, Binary Knapsack, Enumerating Numerical Semigroups, and the Unbalanced Tree Search Benchmark. The search instances are evaluated at multiple scales, from 1 to 255 workers, on a 17-host, 272-core Beowulf cluster. The overheads of the skeletons are low, with a mean 6.1% slowdown compared to a hand-coded sequential implementation. Crucially, for all search applications YewPar reduces search times by an order of magnitude, i.e. hours/minutes to minutes/seconds, and we commonly see greater than 60% (average) parallel efficiency for up to 255 workers. Comparing skeleton performance reveals that no one skeleton is best for all searches, highlighting a benefit of the skeleton approach: multiple parallelisations can be explored with minimal refactoring. The Ordered skeleton avoids slowdown anomalies where, because search knowledge is order dependent, a parallel search takes longer than a sequential search. Analysis of Ordered shows that, while it is 41% slower on average (73% worst-case) than Depth-Bounded, in nearly all cases it maintains the following replicable performance properties: 1) parallel executions are no slower than one-worker sequential executions; 2) runtimes do not increase as workers are added; and 3) variance between repeated runs is low. 
In particular, where Ordered maintains a relative standard deviation (RSD) of less than 15%, Depth-Bounded suffers from an RSD greater than 50%, showing the importance of carefully controlling search orders for repeatability.
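
The skeleton idea itself (user supplies only tree generation plus a search type) can be sketched in a few lines. This is a hypothetical sequential toy, not YewPar (which is C++ and adds the parallel scheduling, work-stealing, and knowledge exchange discussed above); `children`, `goal`, and `value` are the user-supplied parameters.

```python
def tree_search(root, children, mode="enumerate", goal=None, value=None):
    """Minimal sequential search 'skeleton' (illustrative sketch).

    The user provides how the tree is generated (`children`) and the
    kind of search wanted: enumerate all nodes, decide whether a node
    satisfying `goal` exists, or optimise `value` over all nodes.
    """
    stack, best, count = [root], None, 0
    while stack:
        node = stack.pop()
        if mode == "enumerate":
            count += 1                       # count every node visited
        elif mode == "decision" and goal(node):
            return node                      # stop at first satisfying node
        elif mode == "optimise":
            if best is None or value(node) > value(best):
                best = node                  # track the incumbent
        stack.extend(children(node))
    return count if mode == "enumerate" else best
```

The same user-supplied tree thus serves all three search types, which is the refactoring benefit the skeleton approach claims: swapping search strategies changes a parameter, not the tree-generation code.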

    ECOS 2012

    Get PDF
    The 8-volume set contains the Proceedings of the 25th ECOS 2012 International Conference, Perugia, Italy, June 26th to June 29th, 2012. ECOS is an acronym for Efficiency, Cost, Optimization and Simulation (of energy conversion systems and processes), summarizing the topics covered in ECOS: Thermodynamics, Heat and Mass Transfer, Exergy and Second Law Analysis, Process Integration and Heat Exchanger Networks, Fluid Dynamics and Power Plant Components, Fuel Cells, Simulation of Energy Conversion Systems, Renewable Energies, Thermo-Economic Analysis and Optimisation, Combustion, Chemical Reactors, Carbon Capture and Sequestration, Building/Urban/Complex Energy Systems, Water Desalination and Use of Water Resources, Energy Systems: Environmental and Sustainability Issues, System Operation/Control/Diagnosis and Prognosis, Industrial Ecology.

    Applied Metaheuristic Computing

    Get PDF
    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, by contrast, guides the course of low-level heuristics to search beyond the local optimality that impairs the capability of traditional computational methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
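
The defining move of a metaheuristic, guiding a low-level neighbourhood heuristic so the search can escape local optima, is illustrated by the textbook simulated-annealing pattern below. This is a generic sketch, not drawn from any paper in the collection; `cost` and `neighbour` are caller-supplied.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.95,
                        steps=200, rng=None):
    """Generic simulated annealing (illustrative sketch).

    Occasionally accepting worse moves, with probability e^(-delta/t),
    lets the search escape local optima that trap a purely greedy
    low-level heuristic; the temperature t decays so the search
    eventually settles.
    """
    rng = rng or random.Random(0)
    x, t = x0, t0
    best = x
    for _ in range(steps):
        y = neighbour(x, rng)
        delta = cost(y) - cost(x)
        # Always accept improvements; sometimes accept worse moves.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
        t *= cooling       # cool down: become greedier over time
    return best
```

Minimising `(x - 3)**2` with a unit-step neighbour from a distant start quickly homes in on the optimum, while the acceptance rule keeps the walk from stalling on plateaus.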
