11,258 research outputs found

    On Efficiency and Validity of Previous Homeplug MAC Performance Analysis

    The Medium Access Control (MAC) protocol of Power Line Communication networks (defined in the Homeplug and IEEE 1901 standards) has received relatively modest attention from the research community. As a consequence, there is only one analytic model that complies with the standardised MAC procedures and considers unsaturated conditions. We identify two important limitations of this existing analytic model: it is computationally expensive, and the results it predicts just prior to the predicted saturation point do not correspond to long-term network performance. In this work, we present a simplification of the previously defined analytic model of the Homeplug MAC that substantially reduces its complexity, and we demonstrate that the previous performance results just before predicted saturation correspond to a transitory phase. We determine that the causes of the previous misprediction are common analytical assumptions together with the potential occurrence of a transitory phase, which we show to be of extremely long duration under certain circumstances. We also provide techniques, both analytical and experimental, to correctly predict long-term behaviour, and analyse the effect of specific Homeplug/IEEE 1901 features on the magnitude of the misprediction errors.
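
    One Homeplug/IEEE 1901 feature whose effect on misprediction is analysed here is the deferral counter, which makes a station back off more aggressively after sensing busy slots even if it never collides. Below is a minimal slot-level sketch of that mechanism under saturation; the per-stage values and the handling of busy slots are assumptions made for illustration, not a reproduction of the paper's model.

    ```python
    import random

    # Per-stage contention window and deferral-counter values. These follow the
    # Homeplug 1.0 / IEEE 1901 numbers commonly cited for the lower priority
    # classes (CW = 8, 16, 32, 64 and D = 0, 1, 3, 15), but treat them as an
    # assumption of this sketch rather than a normative statement.
    CW = [8, 16, 32, 64]
    D = [0, 1, 3, 15]

    def simulate(n_stations=10, n_slots=200_000, seed=0):
        """Slot-level sketch of saturated stations with a 1901-style deferral
        counter: a station that senses the medium busy spends a deferral and,
        once the counter is exhausted, jumps to the next backoff stage without
        transmitting, as if it had collided."""
        random.seed(seed)
        stage = [0] * n_stations
        bc = [random.randrange(CW[0]) for _ in range(n_stations)]
        dc = [D[0]] * n_stations
        successes = collisions = 0

        for _ in range(n_slots):
            txers = [i for i in range(n_stations) if bc[i] == 0]
            if len(txers) == 1:
                successes += 1
            elif len(txers) > 1:
                collisions += 1
            for i in range(n_stations):
                if i in txers:
                    # Redraw after an attempt: stage resets on success and
                    # increases on collision (saturated stations always have
                    # another frame to send).
                    stage[i] = 0 if len(txers) == 1 else min(stage[i] + 1, len(CW) - 1)
                    bc[i] = random.randrange(CW[stage[i]])
                    dc[i] = D[stage[i]]
                elif txers:
                    # Medium sensed busy: spend a deferral, or act as if a
                    # collision occurred once the deferral counter is exhausted.
                    # (Whether BC also counts down during busy slots is a detail
                    # this sketch deliberately simplifies away.)
                    if dc[i] > 0:
                        dc[i] -= 1
                    else:
                        stage[i] = min(stage[i] + 1, len(CW) - 1)
                        bc[i] = random.randrange(CW[stage[i]])
                        dc[i] = D[stage[i]]
                else:
                    bc[i] -= 1
        return successes / n_slots, collisions / n_slots

    if __name__ == "__main__":
        print(simulate())
    ```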

    Interference Calculation in Asynchronous Random Access Protocols using Diversity

    The use of Aloha-based Random Access protocols is attractive when channel sensing is either not possible or not convenient and the traffic from terminals is unpredictable and sporadic. In this paper, an analytic model for packet interference calculation in asynchronous Random Access protocols using diversity is presented. The aim is to provide a tool that avoids time-consuming simulations when evaluating packet loss and throughput in the case where a packet remains decodable as long as a certain interference threshold is not exceeded. Moreover, the same model provides the groundwork for further studies in which iterative Interference Cancellation is applied to received frames.
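
    For intuition about the quantity such a model targets, the Monte Carlo sketch below estimates packet loss for an asynchronous scheme with a configurable number of replicas per packet, under the simplifying assumption that a replica is decodable whenever at most `max_interferers` replicas of other packets overlap it in time. The parameter names and the decodability rule are illustrative; this is a brute-force stand-in for the analytic model, not a reproduction of it.

    ```python
    import bisect
    import random

    def packet_loss(load=0.4, replicas=2, max_interferers=1,
                    t_pkt=1.0, vf=50.0, sim_time=5000.0, seed=1):
        """Monte Carlo estimate of packet loss for asynchronous random access
        with `replicas` copies per packet, placed at random offsets within a
        virtual frame of length `vf` after the packet's arrival. A packet is
        lost when none of its replicas is decodable."""
        random.seed(seed)
        n_packets = int(load * sim_time / t_pkt)   # offered load in packets per packet duration
        starts, owner = [], []
        for pkt in range(n_packets):
            arrival = random.uniform(0.0, sim_time)
            for _ in range(replicas):
                starts.append(arrival + random.uniform(0.0, vf - t_pkt))
                owner.append(pkt)
        order = sorted(range(len(starts)), key=lambda k: starts[k])
        s_sorted = [starts[k] for k in order]
        o_sorted = [owner[k] for k in order]

        decodable = [False] * n_packets
        for idx, s in enumerate(s_sorted):
            # Replicas overlap when their start times differ by less than t_pkt.
            lo = bisect.bisect_left(s_sorted, s - t_pkt)
            hi = bisect.bisect_right(s_sorted, s + t_pkt)
            interferers = sum(1 for j in range(lo, hi)
                              if j != idx and o_sorted[j] != o_sorted[idx])
            if interferers <= max_interferers:
                decodable[o_sorted[idx]] = True
        return 1.0 - sum(decodable) / n_packets

    if __name__ == "__main__":
        for g in (0.2, 0.4, 0.8):
            print(g, packet_loss(load=g))
    ```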

    Delay and energy consumption analysis of frame slotted ALOHA variants for massive data collection in internet-of-things scenarios

    This paper models and evaluates three FSA-based (Frame Slotted ALOHA) MAC (Medium Access Control) protocols, namely FSA-ACK (FSA with ACKnowledgements), FSA-FBP (FSA with FeedBack Packets) and DFSA (Dynamic FSA). The protocols are modeled using an AMC (Absorbing Markov Chain), which allows analytic expressions to be derived for the average packet delay, as well as for the energy consumption of both the network coordinator and the end-devices. The results, validated against computer simulations, show that the analytic model is accurate and outline the benefits of DFSA. In terms of delay, DFSA provides a reduction of 17% with respect to FSA-FBP and 32% with respect to FSA-ACK, whereas in terms of energy consumption DFSA provides savings of 23% (FSA-FBP) and 28% (FSA-ACK) for the coordinator, and savings of 50% (FSA-FBP) and 24% (FSA-ACK) for the end-devices. Finally, the paper provides insights on how to configure each FSA variant depending on the network parameters, i.e., on the number of end-devices, to minimize delay and energy expenditure. This is especially interesting for massive data collection in IoT (Internet-of-Things) scenarios, which typically rely on FSA-based protocols and where operation has to be optimized to support a large number of devices with stringent energy consumption requirements.
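
    The standard reasoning behind DFSA's frame-size adaptation fits in a few lines: if n devices each pick one of L slots uniformly at random, the expected number of successful (singleton) slots is n(1 - 1/L)^(n-1), and the per-slot efficiency (n/L)(1 - 1/L)^(n-1) is maximized when L is close to n, approaching 1/e for large n. The snippet below evaluates this by brute force; it is only the textbook argument, not the estimator used in the paper.

    ```python
    def expected_successes(n, L):
        """Expected number of singleton (successful) slots when n devices each
        pick one of L slots uniformly at random: n * (1 - 1/L)**(n - 1)."""
        return n * (1 - 1 / L) ** (n - 1)

    def best_frame_length(n, l_max=4096):
        """Frame length maximizing per-slot efficiency E[successes] / L; for
        large n the optimum sits at L close to n, with efficiency near 1/e."""
        return max(range(1, l_max + 1), key=lambda L: expected_successes(n, L) / L)

    if __name__ == "__main__":
        for n in (10, 50, 200):
            L = best_frame_length(n)
            print(n, L, round(expected_successes(n, L) / L, 3))
    ```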

    Modeling, Analysis and Impact of a Long Transitory Phase in Random Access Protocols

    In random access protocols, the service rate depends on the number of stations with a packet buffered for transmission. We demonstrate via numerical analysis that this state-dependent rate, together with the consideration of Poisson traffic and infinite (or large enough to be considered infinite) buffer size, may cause a high-throughput and extremely long (on the order of hours) transitory phase when traffic arrivals are just above the stability limit. We also perform an experimental evaluation to provide further insight into the characterisation of this transitory phase of the network by analysing statistical properties of its duration. Identifying the presence of this behaviour, as well as characterising it, is crucial to avoid misprediction, which has a significant potential impact on network performance and optimisation. Furthermore, we discuss practical implications of this finding and propose a distributed and low-complexity mechanism to keep the network operating in the high-throughput phase.
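
    A toy slotted-ALOHA-style model (not the Homeplug model the authors build on) is enough to reproduce the phenomenon qualitatively: with infinite buffers and arrivals just above the stability limit, the total backlog can stay small, and throughput high, for a long time before drifting into saturation. All names and parameter values below are illustrative.

    ```python
    import random

    def slots_to_saturation(n=10, p=0.15, load_factor=1.05,
                            backlog_limit=200, max_slots=500_000, seed=0):
        """Toy slotted model: n stations with infinite buffers; every backlogged
        station transmits with probability p in a slot, and a slot succeeds only
        when exactly one station transmits. Arrivals are Bernoulli per slot with
        probability just above the saturated per-station service rate, so the
        backlog must eventually drift away; the return value is how many slots
        the low-backlog, high-throughput phase lasted."""
        random.seed(seed)
        sat_rate = p * (1 - p) ** (n - 1)   # per-station service rate at saturation
        arr_p = load_factor * sat_rate      # offered load just above the limit
        q = [0] * n
        for t in range(max_slots):
            for i in range(n):
                if random.random() < arr_p:
                    q[i] += 1
            txers = [i for i in range(n) if q[i] > 0 and random.random() < p]
            if len(txers) == 1:
                q[txers[0]] -= 1
            if sum(q) > backlog_limit:
                return t
        return None   # still in the transitory (high-throughput) phase

    if __name__ == "__main__":
        print(slots_to_saturation())
    ```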

    Analysis of concurrency control protocols for real-time database systems

    This paper provides an approximate analytic solution method for evaluating the performance of concurrency control protocols developed for real-time database systems (RTDBSs). Transactions processed in an RTDBS are associated with timing constraints, typically in the form of deadlines. The primary consideration in developing an RTDBS concurrency control protocol is that satisfying the timing constraints of transactions is as important as maintaining the consistency of the underlying database. The proposed solution method evaluates the performance of concurrency control protocols in terms of the satisfaction rate of timing constraints. As a case study, an RTDBS concurrency control protocol, called High Priority, is analyzed using the proposed method. The accuracy of the performance results obtained is ascertained via simulation. The solution method is also used to investigate the real-time performance benefits of High Priority over ordinary Two-Phase Locking.
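
    The High Priority protocol is commonly described as resolving lock conflicts in favour of the transaction with the higher priority (e.g., the earlier deadline): a higher-priority requester aborts and restarts the holder, while a lower-priority requester blocks. The sketch below is only that conflict-resolution rule, with exclusive locks and earliest-deadline priorities assumed for illustration; restart handling and the rest of the paper's model are not reproduced.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Txn:
        tid: int
        deadline: float            # earlier deadline => higher priority
        state: str = "running"     # "running", "blocked" or "restarted"

    class HighPriorityLockManager:
        """Schematic High Priority (HP) conflict resolution: on a lock conflict
        the lower-priority transaction loses, so priority inversion cannot occur."""
        def __init__(self):
            self.locks = {}        # item -> holding Txn (exclusive locks only)

        def request(self, txn, item):
            holder = self.locks.get(item)
            if holder is None or holder is txn:
                self.locks[item] = txn
                return "granted"
            if txn.deadline < holder.deadline:
                # Requester has higher priority: abort/restart the holder.
                holder.state = "restarted"
                self.locks[item] = txn
                return "granted-after-abort"
            txn.state = "blocked"  # lower-priority requester waits
            return "blocked"

    if __name__ == "__main__":
        lm = HighPriorityLockManager()
        t1, t2 = Txn(1, deadline=100.0), Txn(2, deadline=50.0)
        print(lm.request(t1, "x"))   # granted
        print(lm.request(t2, "x"))   # granted-after-abort (t2 has the earlier deadline)
    ```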

    Tuning the Level of Concurrency in Software Transactional Memory: An Overview of Recent Analytical, Machine Learning and Mixed Approaches

    Synchronization transparency offered by Software Transactional Memory (STM) must not come at the expense of run-time efficiency, thus demanding from the STM designer the inclusion of mechanisms properly oriented to performance and other quality indexes. In particular, one core issue to cope with in STM is exploiting parallelism while avoiding thrashing phenomena due to excessive transaction rollbacks, caused by excessively high levels of contention on logical resources, namely concurrently accessed data portions. One means of addressing run-time efficiency is to dynamically determine the best-suited level of concurrency (number of threads) to be employed for running the application (or specific application phases) on top of the STM layer. For too low levels of concurrency, parallelism can be hampered. Conversely, over-dimensioning the concurrency level may give rise to the aforementioned thrashing phenomena caused by excessive data contention, an aspect which also has repercussions on energy efficiency. In this chapter we overview a set of recent techniques aimed at building "application-specific" performance models that can be exploited to dynamically tune the level of concurrency to the best-suited value. Although these techniques share some base concepts in modeling system performance versus the degree of concurrency, they rely on disparate methods, such as machine learning or analytic methods (or combinations of the two), and achieve different tradeoffs in terms of the relation between the precision of the performance model and the latency of model instantiation. Implications of the different tradeoffs in real-life scenarios are also discussed.
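
    The simplest illustration of the underlying idea is a model-free hill-climbing controller that periodically measures commit throughput at the current thread count and moves the concurrency level up or down depending on whether the last change helped. The sketch below is a hypothetical controller of that kind, not any specific technique surveyed in the chapter.

    ```python
    class ConcurrencyTuner:
        """Hypothetical hill-climbing tuner: raise or lower the number of active
        STM threads according to whether the previous adjustment improved the
        measured commit throughput."""
        def __init__(self, min_threads=1, max_threads=16):
            self.min_threads, self.max_threads = min_threads, max_threads
            self.threads = min_threads
            self.direction = +1
            self.prev_throughput = None

        def update(self, throughput):
            """Call once per sampling interval with the measured commits/second;
            returns the thread count to use for the next interval."""
            if self.prev_throughput is not None and throughput < self.prev_throughput:
                self.direction = -self.direction   # last move hurt: reverse course
            self.prev_throughput = throughput
            self.threads = max(self.min_threads,
                               min(self.max_threads, self.threads + self.direction))
            return self.threads

    if __name__ == "__main__":
        # Toy workload whose throughput peaks at 6 threads (data contention beyond that).
        def measured(t):
            return t * 100 if t <= 6 else 600 - (t - 6) * 120

        tuner = ConcurrencyTuner()
        t = tuner.threads
        for _ in range(12):
            t = tuner.update(measured(t))
            print(t, measured(t))
    ```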

    JiTS: Just-in-Time Scheduling for Real-Time Sensor Data Dissemination

    We consider the problem of real-time data dissemination in wireless sensor networks, in which data are associated with deadlines and it is desired that data reach the sink(s) by their deadlines. To this end, existing real-time data dissemination works have developed packet scheduling schemes that prioritize packets according to their deadlines. In this paper, we first demonstrate that not only the scheduling discipline but also the routing protocol has a significant impact on the success of real-time sensor data dissemination. We show that shortest path routing using the minimum number of hops leads to considerably better performance than Geographical Forwarding, which has often been used in existing real-time data dissemination work. We also observe that packet prioritization by itself is not enough for real-time data dissemination, since many high-priority packets may simultaneously contend for network resources, degrading network performance. Instead, real-time packets can be judiciously delayed to avoid severe contention as long as their deadlines can still be met. Based on this observation, we propose a Just-in-Time Scheduling (JiTS) algorithm for scheduling data transmissions that alleviates the shortcomings of the existing solutions. We explore several policies for non-uniformly delaying data at different intermediate nodes to account for the higher expected contention as the packet gets closer to the sink(s). Through an extensive simulation study, we demonstrate that JiTS significantly improves the deadline miss ratio and packet drop ratio compared to existing approaches in various situations. Notably, JiTS achieves this improvement without requiring lower-layer support or synchronization among the sensor nodes.
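
    The core of the just-in-time idea is that each intermediate node holds a packet only as long as its remaining slack allows, instead of forwarding it immediately and contending. The sketch below splits the remaining slack across the remaining hops with a per-hop weight; the weight (i.e., how the paper's non-uniform policies distribute slack along the path) is a placeholder, and the function and parameter names are illustrative.

    ```python
    import time

    def jit_delay(deadline, now, hops_remaining, per_hop_tx_time, weight=1.0):
        """Return how long to hold a packet at this node before forwarding it.

        slack = time to the deadline minus the transmission time still needed;
        a (weight / hops_remaining) share of that slack is spent here. `weight`
        stands in for the non-uniform per-hop policies studied in the paper."""
        if hops_remaining <= 0:
            return 0.0
        slack = (deadline - now) - hops_remaining * per_hop_tx_time
        if slack <= 0:
            return 0.0              # no slack left: forward immediately
        return min(slack, weight * slack / hops_remaining)

    if __name__ == "__main__":
        now = time.time()
        # Packet with 2 s to its deadline, 4 hops to the sink, ~50 ms per hop.
        print(round(jit_delay(now + 2.0, now, hops_remaining=4,
                              per_hop_tx_time=0.05), 3))
    ```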

    Performance and evaluation of real-time multicomputer control systems

    Three experiments on fault-tolerant multiprocessors (FTMP) were begun: (1) measurement of fault latency in FTMP; (2) validation and analysis of FTMP synchronization protocols; and (3) investigation of error propagation in FTMP.

    Reducing false wake-up in contention-based wake-up control of wireless LANs

    This paper studies the potential problems and the performance obtained when tightly integrating a low-power wake-up radio (WuR) and a power-hungry wireless LAN (WLAN) module for energy-efficient channel access. In this model, a WuR monitors the channel, performs carrier sensing, and activates its co-located WLAN module when the channel becomes ready for transmission. Differently from previous methods, the node to be activated is not decided in advance but is decided by distributed contention. Because of the wake-up latency of WLAN modules, multiple nodes may be falsely activated in addition to the node that will actually transmit. This is called the false wake-up problem, and it is addressed from three aspects in this work: (i) resetting the backoff counter of each node as if it had been frozen during the wake-up period, (ii) reducing the false wake-up time by immediately putting a WLAN module back to sleep once a false wake-up is inferred, and (iii) reducing the false wake-up probability by adjusting the contention window. Analysis shows that false wake-ups, instead of collisions, become the dominant energy overhead. Extensive simulations confirm that the proposed method (WuR-ESOC) effectively reduces the energy overhead, by up to 60% compared with state-of-the-art methods, achieving a better tradeoff between throughput and energy consumption.
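
    A simplified picture of why false wake-ups occur in this architecture: all contending WuRs count down their backoff, the node whose counter expires first starts waking its WLAN module, but because its transmission only begins after the wake-up latency, every other node whose counter also expires within that latency starts waking as well, and those activations are false. The Monte Carlo sketch below estimates the expected number of false wake-ups per contention round under that simplified model; the contention window and latency values are illustrative, and this is not the paper's analysis.

    ```python
    import random

    def avg_false_wakeups(n_nodes=20, cw=64, wakeup_latency_slots=8,
                          rounds=20_000, seed=3):
        """Monte Carlo estimate of falsely activated nodes per contention round:
        nodes draw backoff counters uniformly in [0, cw); the minimum wins, and
        any other node whose counter falls within the wake-up latency window
        after the winner also (falsely) activates its WLAN module."""
        random.seed(seed)
        total_false = 0
        for _ in range(rounds):
            counters = [random.randrange(cw) for _ in range(n_nodes)]
            winner = min(counters)
            total_false += sum(1 for c in counters
                               if winner < c < winner + wakeup_latency_slots)
        return total_false / rounds

    if __name__ == "__main__":
        for latency in (2, 8, 16):
            print(latency, avg_false_wakeups(wakeup_latency_slots=latency))
    ```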