
    A cluster based communication architecture for distributed applications in mobile ad hoc networks

    Thesis (Master) -- Izmir Institute of Technology, Computer Engineering, Izmir, 2006. Includes bibliographical references (leaves 63-69). Text in English; abstract in Turkish and English. x, 85 leaves. In this thesis, we aim to design and implement three protocols on a hierarchical architecture to solve the balanced clustering, backbone formation and distributed mutual exclusion problems for mobile ad hoc networks (MANETs). Our first goal is to cluster the MANET into balanced partitions. Clustering is a widely used approach to ease the implementation of various problems such as routing and resource management in MANETs. We propose the Merging Clustering Algorithm (MCA) for clustering in MANETs, which merges clusters into higher-level clusters by increasing their levels. Secondly, we aim to construct a directed ring topology across the clusterheads selected by MCA. Lastly, we implement a distributed mutual exclusion algorithm for MANETs based on the Ricart-Agrawala algorithm (Mobile RA). Each cluster is represented by a coordinator node on the ring, which runs the distributed mutual exclusion algorithm on behalf of any member of the cluster it represents. We show the operation of the algorithms, analyze their time and message complexities, and provide results from the ns-2 simulation environment.
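    The abstract describes a cluster-level mutual exclusion service built on Ricart-Agrawala. As a rough illustration only (not the thesis code), the sketch below shows the classic Ricart-Agrawala request/reply logic that each cluster coordinator would run on behalf of its members; the message format and the send callback are hypothetical.

```python
# Minimal sketch of the classic Ricart-Agrawala mutual exclusion logic that
# Mobile RA builds on. Names and message transport are hypothetical; in the
# thesis, one coordinator per cluster runs this on behalf of its members.

class RicartAgrawalaNode:
    def __init__(self, node_id, peers, send):
        self.id = node_id            # unique coordinator identifier
        self.peers = peers           # ids of the other coordinators on the ring
        self.send = send             # callback: send(dest_id, message_dict)
        self.clock = 0               # Lamport logical clock
        self.requesting = False
        self.request_ts = None
        self.replies_pending = set()
        self.deferred = []           # requests answered only after leaving the CS

    def request_cs(self):
        self.clock += 1
        self.requesting = True
        self.request_ts = (self.clock, self.id)
        self.replies_pending = set(self.peers)
        for p in self.peers:
            self.send(p, {"type": "REQUEST", "ts": self.request_ts, "from": self.id})

    def on_request(self, msg):
        ts, sender = msg["ts"], msg["from"]
        self.clock = max(self.clock, ts[0]) + 1
        # Defer the reply if we hold or want the CS with an earlier timestamp.
        if self.requesting and self.request_ts < ts:
            self.deferred.append(sender)
        else:
            self.send(sender, {"type": "REPLY", "from": self.id})

    def on_reply(self, msg):
        self.replies_pending.discard(msg["from"])
        # The critical section may be entered once replies_pending is empty.

    def release_cs(self):
        self.requesting = False
        for p in self.deferred:
            self.send(p, {"type": "REPLY", "from": self.id})
        self.deferred.clear()
```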

    A mechanism for solving conflicts in ambient intelligent environments

    Ambient Intelligence scenarios describe situations in which a multitude of devices and agents coexist. In such scenarios, conflicts frequently arise when modifying the state of a device, for example a lamp. These problems are less about sharing resources than about conflicting orders coming from different agents. This coexistence must also respect the users' desire for privacy over personal information such as where they are, what their preferences are, or to whom this information should be available. When facing incompatible orders over the state of a device, a decision must be made. In this paper we propose a centralised mechanism based on prioritized FIFO queues to decide the order in which control of a device is granted. The priority of a command is calculated following a policy that considers issues such as the commander's role, the command's type, the context's state, and the commander-context and commander-resource relations. Finally, we propose a set of particular policies for those resources that do not fit the general policy. In addition, we present a model intended to integrate privacy by limiting and protecting contextual information.
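    As a rough illustration of the mechanism described above (not the paper's implementation), the sketch below grants control of a device from a prioritized queue with FIFO tie-breaking; the Command fields and the priority weights are hypothetical stand-ins for the paper's policy.

```python
import heapq
import itertools

# Illustrative sketch of a prioritized FIFO queue for device commands.
# The priority() weights and the Command fields are hypothetical, not the
# policy defined in the paper.

class Command:
    def __init__(self, commander_role, command_type, context_state):
        self.commander_role = commander_role      # e.g. "admin", "guest"
        self.command_type = command_type          # e.g. "switch_on", "dim"
        self.context_state = context_state        # e.g. "emergency", "normal"

ROLE_WEIGHT = {"admin": 0, "resident": 1, "guest": 2}
CONTEXT_WEIGHT = {"emergency": 0, "normal": 1}

def priority(cmd):
    # Lower value = higher priority; combines role and context state.
    return ROLE_WEIGHT.get(cmd.commander_role, 3) + CONTEXT_WEIGHT.get(cmd.context_state, 1)

class DeviceArbiter:
    """Decides the order in which control of one device is granted."""
    def __init__(self):
        self._queue = []
        self._fifo = itertools.count()  # tie-breaker preserves arrival order

    def submit(self, cmd):
        heapq.heappush(self._queue, (priority(cmd), next(self._fifo), cmd))

    def grant_next(self):
        if not self._queue:
            return None
        _, _, cmd = heapq.heappop(self._queue)
        return cmd
```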

    Reliable Resource Provisioning using Bankers’ Deadlock Avoidance Algorithm in MEC for Industrial IoT

    Multi-Access Edge Computing (MEC) is a new 5G enabling technology proposed to reduce latency by bringing cloud computing capability closer to IoT and mobile device users. MEC may be prone to unreliable communication as a result of deadlock during resource provisioning. Deadlock may occur when a huge number of devices contend for a limited amount of resources and adequate measures are not put in place. It is crucial to eradicate deadlock while scheduling and provisioning resources on MEC to achieve a highly reliable and available system. In this paper, a deadlock avoidance resource provisioning algorithm is proposed for Industrial IoT devices using MEC platforms to ensure higher reliability of network interactions. The proposed scheme incorporates the Banker's resource-request algorithm and uses SDN to reduce communication overhead. Simulation results show that system deadlock can be prevented by applying the proposed algorithm, which ultimately leads to more reliable network interaction between mobile stations and MEC platforms.
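    For reference, the sketch below shows the textbook Banker's safety check and resource-request step that the proposed scheme builds on; the data layout is illustrative and the paper's SDN signalling is not modelled.

```python
# Textbook Banker's algorithm: grant a request only if the resulting state is
# safe (some completion order exists for all requesters). Illustrative layout,
# not the paper's SDN-based implementation.

def is_safe(available, allocation, need):
    work = available[:]
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(need[i][r] <= work[r] for r in range(len(work))):
                # Process i can finish; release its resources back to the pool.
                work = [work[r] + allocation[i][r] for r in range(len(work))]
                finished[i] = True
                progress = True
    return all(finished)

def request_resources(pid, request, available, allocation, need):
    """Banker's resource-request step: tentatively grant, then test safety."""
    if any(request[r] > need[pid][r] for r in range(len(request))):
        raise ValueError("request exceeds declared maximum need")
    if any(request[r] > available[r] for r in range(len(request))):
        return False  # must wait: not enough resources right now
    # Pretend to allocate, then check whether the state is still safe.
    new_available = [available[r] - request[r] for r in range(len(request))]
    new_allocation = [row[:] for row in allocation]
    new_need = [row[:] for row in need]
    new_allocation[pid] = [new_allocation[pid][r] + request[r] for r in range(len(request))]
    new_need[pid] = [new_need[pid][r] - request[r] for r in range(len(request))]
    if is_safe(new_available, new_allocation, new_need):
        available[:], allocation[:], need[:] = new_available, new_allocation, new_need
        return True
    return False  # unsafe: deny the request and keep the old state
```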

    Resource Management in Multi-Access Edge Computing (MEC)

    This PhD thesis investigates effective ways of managing the resources of a Multi-Access Edge Computing (MEC) platform in 5th Generation Mobile Communication (5G) networks. The main characteristics of MEC include its distributed nature, proximity to users, and high availability, and the proposed solutions for effective resource management build on these key features. Two aspects of resource management in MEC are addressed: the computational resources and the caching resources, which correspond to the services provided by the MEC. MEC is a new 5G enabling technology proposed to reduce latency by bringing cloud computing capability closer to end-user Internet of Things (IoT) and mobile devices. MEC would support latency-critical user applications such as driverless cars and e-health, which will depend on resources and services provided by the MEC. However, MEC has limited computational and storage resources compared to the cloud.

    It is therefore important to ensure reliable MEC network communication during resource provisioning by eradicating the chances of deadlock. Deadlock may occur when a huge number of devices contend for a limited amount of resources and adequate measures are not put in place. It is crucial to eradicate deadlock while scheduling and provisioning resources on MEC to achieve a highly reliable and readily available system that supports latency-critical applications. In this research, a deadlock avoidance resource provisioning algorithm is proposed for industrial IoT devices using MEC platforms to ensure higher reliability of network interactions. The proposed scheme incorporates the Banker's resource-request algorithm and uses Software Defined Networking (SDN) to reduce communication overhead. Simulation and experimental results show that system deadlock can be prevented by applying the proposed algorithm, which ultimately leads to more reliable network interaction between mobile stations and MEC platforms.

    Additionally, this research explores the use of MEC as a caching platform, as caching is proclaimed a key technology for reducing service processing delays in 5G networks. Caching on MEC decreases service latency and improves data content access by allowing direct content delivery through the edge without fetching data from the remote server. Caching on MEC is also deemed an effective approach that guarantees more reachability due to proximity to end-users. In this regard, a novel hybrid content caching algorithm is proposed for MEC platforms to increase their caching efficiency. The proposed algorithm unifies a modified Belady's algorithm with a distributed cooperative caching algorithm to improve data access while reducing latency; a polynomial fit with Lagrange interpolation is employed to predict the future request references needed by Belady's algorithm. Experimental results show that the proposed algorithm obtains 4% more cache hits than the case-study algorithms due to its selective caching approach, and that the use of a cooperative algorithm can improve the total cache hits up to 80%.

    Furthermore, this thesis explores another predictive caching scheme to further improve caching efficiency, the motivation being to investigate a predictive caching approach that improves on the former. The resulting Predictive Collaborative Replacement (PCR) caching framework consists of three schemes, each addressing a particular problem. The proactive predictive scheme addresses the problem of continuous change in cache popularity trends; the collaborative scheme addresses cache redundancy in the collaborative space; and the replacement scheme evicts cold cache blocks to increase the hit ratio. Simulation experiments show that the replacement scheme achieves 3% more cache hits than existing replacement algorithms such as Least Recently Used, Multi Queue and Frequency-Based Replacement. The PCR algorithm has been tested on a real dataset (the MovieLens 20M dataset) and compared with an existing contemporary predictive algorithm; results show that PCR performs better, with a 25% increase in hit ratio and a 10% CPU utilization overhead.
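    The modified Belady's algorithm mentioned above relies on predicted future references. As a point of reference only, the sketch below shows the classic Belady (MIN) replacement policy on a known request trace, which is the idea the thesis adapts; it is not the thesis's hybrid or cooperative algorithm.

```python
# Classic Belady (MIN) replacement: evict the block whose next use lies
# farthest in the future. The thesis predicts future references (polynomial
# fit with Lagrange interpolation); this sketch assumes the trace is known.

def belady_hits(trace, cache_size):
    cache = set()
    hits = 0
    for i, block in enumerate(trace):
        if block in cache:
            hits += 1
            continue
        if len(cache) >= cache_size:
            # Evict the cached block re-used farthest in the future (or never).
            def next_use(b):
                try:
                    return trace.index(b, i + 1)
                except ValueError:
                    return float("inf")
            cache.discard(max(cache, key=next_use))
        cache.add(block)
    return hits

# Example: 4 distinct blocks, cache of size 2 -> 3 hits on this trace.
print(belady_hits(["a", "b", "a", "c", "b", "a", "d", "a"], 2))
```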

    PORTING OF FREERTOS ON A PYTHON VIRTUAL MACHINE FOR EMBEDDED AND IOT DEVICES

    The fourth industrial revolution, Industry 4.0, puts emphasis on the need for "Smart" and "Connected" objects that use services provided by the Internet of Things, cyber-physical systems and cloud computing to optimize cost, development time and remote connectivity. Developing highly scalable and flexible IoT applications is therefore a pressing need: such solutions require connectivity, short development time and fast time-to-market, while at the same time offering high performance and great reliability. Zerynth, a small company, provides a full stack for IoT solutions. The Zerynth Virtual Machine is the core component of this stack; it allows programmers to code in Python, or in hybrid C/Python, on top of a multithreaded real-time OS with a negligible memory footprint. The Python (application) layer is totally agnostic of the underlying RTOS and hardware abstraction layers, and this layered software architecture makes the Zerynth VM compatible with the new Industry 4.0 standard. The hardware abstraction layer, VHAL, abstracts the hardware features of the supported MCU and its peripherals, while the RTOS layer, VOSAL, uses the features of the underlying real-time OS. The Zerynth VM can therefore be ported to different real-time OSes and various hardware platforms depending on the application's cost, features and other relevant parameters. Configuring the Kinetis MCU (MK64FN1M0VDC12) with the existing VM was the first objective of my thesis; this configuration covers the clock, boot loading and peripheral support from scratch. Since the previous version of the Zerynth VM supported only the Chibi2 OS, which has certain dependencies on the hardware layer underneath, another objective was to separate the Chibi2 OS from the VHAL layer for total independence. Finally, porting FreeRTOS on the Zerynth VM with Hexiwear as the target board makes room for another RTOS, enhancing the features and support of the currently available VM. This thesis report describes all porting steps, procedures and testing methodologies, from configuring the new hardware platform (Hexiwear) to porting FreeRTOS on the Zerynth VM.
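    The layering described above (application code agnostic of the RTOS and hardware layers) can be illustrated with a small, purely hypothetical Python sketch; the interfaces below are NOT the Zerynth VOSAL/VHAL API, just the same separation of concerns expressed with abstract base classes.

```python
import threading
import time
from abc import ABC, abstractmethod

# Hypothetical sketch of the layered idea: application code talks only to
# abstract RTOS and hardware interfaces, so either layer can be swapped
# (e.g. Chibi2 -> FreeRTOS, or one board for another) without touching it.
# These names are illustrative, not the Zerynth VOSAL/VHAL API.

class RTOSLayer(ABC):
    """Role played by a VOSAL-like layer: wrap whatever RTOS sits underneath."""
    @abstractmethod
    def start_thread(self, fn): ...
    @abstractmethod
    def sleep_ms(self, ms): ...

class HardwareLayer(ABC):
    """Role played by a VHAL-like layer: wrap the MCU peripherals."""
    @abstractmethod
    def gpio_write(self, pin, value): ...

class HostRTOS(RTOSLayer):
    def start_thread(self, fn):
        t = threading.Thread(target=fn)
        t.start()
        return t
    def sleep_ms(self, ms):
        time.sleep(ms / 1000)

class FakeBoard(HardwareLayer):
    def gpio_write(self, pin, value):
        print(f"pin {pin} <- {value}")

def blink(rtos: RTOSLayer, hw: HardwareLayer, pin=13):
    # "Application layer": knows nothing about which RTOS or board is below.
    def loop():
        for level in (1, 0, 1, 0):
            hw.gpio_write(pin, level)
            rtos.sleep_ms(100)
    return rtos.start_thread(loop)

blink(HostRTOS(), FakeBoard()).join()
```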

    Transparent distributed data management in large scale distributed systems

    In this chapter, we deal with transparent resource sharing in large distributed systems. By using Data Handover (DHO) together with a peer-to-peer system, we provide an easy-to-use architecture for claiming resources in dynamic environments: data resources are distributed over a set of peers that may appear and disappear. By means of DHO functions, users request the mapping of data into local memory (for reading or writing) without prior knowledge of the location of that data, of the underlying structure, or of the mobility of peers. This abstraction level is ensured by three managers that interact within our three-level architecture. Two algorithms, Exclusive Locks with Mobile Processes (ELMP) and Read-Write Locks with Mobile Processes (RW-LMP), are introduced at the lowest level of the architecture. They ensure data access consistency despite the dynamicity of the environment, and both satisfy safety and liveness properties. Experimental studies show good performance as well as the stability of our approach.
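    The chapter's ELMP and RW-LMP protocols are distributed; as a single-machine illustration of the user-facing lifecycle only (request a mapping for reading or writing, use the data, release it), the sketch below uses a plain readers-writer lock. All names are hypothetical and are not the actual DHO functions.

```python
import threading
from contextlib import contextmanager

# Hypothetical, single-machine stand-in for the DHO lifecycle described above:
# request a mapping of data for reading or writing, work on it, release it.
# A simple readers-writer lock replaces the distributed ELMP / RW-LMP
# protocols; none of these names are the real DHO API.

class LocalDataHandle:
    def __init__(self, initial):
        self._data = initial
        self._lock = threading.Lock()
        self._readers = 0
        self._readers_lock = threading.Lock()

    @contextmanager
    def map_read(self):
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:
                self._lock.acquire()      # first reader blocks writers
        try:
            yield self._data              # caller reads the mapped data
        finally:
            with self._readers_lock:
                self._readers -= 1
                if self._readers == 0:
                    self._lock.release()  # last reader lets writers in

    @contextmanager
    def map_write(self):
        with self._lock:                  # exclusive access, as in ELMP
            yield self._data

# Usage sketch
handle = LocalDataHandle({"counter": 0})
with handle.map_write() as d:
    d["counter"] += 1
with handle.map_read() as d:
    print(d["counter"])
```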

    Predictable migration and communication in the Quest-V multikernel

    Quest-V is a system we have been developing from the ground up, with objectives focusing on safety, predictability and efficiency. It is designed to work on emerging multicore processors with hardware virtualization support. Quest-V is implemented as a "distributed system on a chip" and comprises multiple sandbox kernels. Sandbox kernels are isolated from one another in separate regions of physical memory, having access to a subset of processing cores and I/O devices. This partitioning prevents system failures in one sandbox from affecting the operation of other sandboxes. Shared memory channels managed by system monitors enable inter-sandbox communication. The distributed nature of Quest-V means each sandbox has a separate physical clock, with all event timings being managed by per-core local timers. Each sandbox is responsible for its own scheduling and I/O management, without requiring the intervention of a hypervisor. In this paper, we formulate bounds on inter-sandbox communication in the absence of a global scheduler or global system clock. We also describe how address space migration between sandboxes can be guaranteed without violating service constraints. Experimental results on a working system show the conditions under which Quest-V performs real-time communication and migration. National Science Foundation (1117025)
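    As a conceptual illustration of the shared-memory channels mentioned above (not Quest-V's in-kernel C implementation), the sketch below is a fixed-size single-producer/single-consumer ring buffer that two sandboxes could use without a global lock or global clock; the message fields are made up.

```python
# Conceptual sketch of a shared-memory message channel for inter-sandbox
# communication: a fixed-size, single-producer/single-consumer ring buffer.
# Illustration only, not Quest-V's implementation.

class RingChannel:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0   # advanced only by the consumer
        self.tail = 0   # advanced only by the producer

    def send(self, msg):
        nxt = (self.tail + 1) % self.capacity
        if nxt == self.head:
            return False          # channel full: sender retries later
        self.buf[self.tail] = msg
        self.tail = nxt           # publish after the slot is written
        return True

    def recv(self):
        if self.head == self.tail:
            return None           # channel empty
        msg = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        return msg

# One sandbox sends, the other polls its end of the channel.
ch = RingChannel(capacity=4)
ch.send({"type": "MIGRATION_REQUEST", "addr_space": 0x1000})
print(ch.recv())
```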

    Reducing Power Consumption and Latency in Mobile Devices using a Push Event Stream Model, Kernel Display Server, and GUI Scheduler

    The power consumed by mobile devices can be dramatically reduced by improving how mobile operating systems handle events and display management. Currently, mobile operating systems use a pull model that employs a polling loop to constantly ask the operating system if an event exists. This constant querying prevents the CPU from entering a deep sleep, which unnecessarily consumes power. We improve this process by switching to a push model, which we refer to as the event stream model (ESM). This model leverages modern device interrupt controllers, which automatically notify an application when events occur, thus removing the need to constantly rouse the CPU in order to poll for events. Since the CPU rests while no events are occurring, power consumption is reduced. Furthermore, an application is immediately notified when an event occurs, as opposed to waiting for a polling loop to recognize that an event has occurred. This immediate notification reduces latency, which is the elapsed time between the occurrence of an event and the beginning of its processing by an application. We further improve the benefits of the ESM by moving the display server, a central piece of the graphical user interface (GUI), into the kernel. Existing display servers duplicate some of the kernel code. They contain important information about an application that can assist the kernel with scheduling, such as whether the application is visible and able to receive events; however, they do not share such information with the kernel. Our new kernel-level display server (KDS) interacts directly with the process scheduler to determine when applications are allowed to use the CPU. For example, when an application is idle and not visible on the screen, the KDS prevents that application from using the CPU, thus conserving power. These combined improvements reduce power consumption by up to 31.2% and latency by up to 17.1 milliseconds in our experimental applications. The improvement in power consumption roughly increases battery life by one to four hours when the device is being actively used, or by fifty to three hundred hours when the device is idle.
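    The pull-versus-push contrast above can be illustrated at user level with Python's standard selectors module: the process blocks until the kernel reports an event instead of polling for one. This is only an analogy for the ESM idea, not the thesis's kernel-level implementation.

```python
import os
import selectors

# User-level illustration of the push idea behind the event stream model:
# instead of a busy loop that keeps asking "is there an event?", the process
# blocks in select() and the kernel wakes it only when an event arrives.

r_fd, w_fd = os.pipe()            # stands in for an input-event source

sel = selectors.DefaultSelector()
sel.register(r_fd, selectors.EVENT_READ)

os.write(w_fd, b"touch-event")    # an "event" is produced

# Pull model (wasteful): loop forever, checking for data and burning CPU.
# Push model: sleep until the kernel reports readiness, then handle it.
for key, _ in sel.select():       # blocks; the CPU can enter a deep sleep
    event = os.read(key.fd, 64)
    print("handled", event)

sel.close()
os.close(r_fd)
os.close(w_fd)
```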