15,623 research outputs found

    Control and Communication Protocols that Enable Smart Building Microgrids

    Full text link
    Recent communication, computation, and technology advances, coupled with climate change concerns, have transformed the near-future prospects of electricity transmission and, more notably, of distribution systems and microgrids. Distributed resources (wind and solar generation, combined heat and power) and flexible loads (storage, computing, EVs, HVAC) make it imperative to increase investment and improve operational efficiency. Commercial and residential buildings, the largest energy consumption group among flexible loads in microgrids, have the largest potential and flexibility to provide demand-side management. Recent advances in networked systems and the anticipated breakthroughs of the Internet of Things will enable significant advances in the demand response capabilities of intelligent networks of power-consuming loads such as HVAC components, water heaters, and buildings. In this paper, a new operating framework, called packetized direct load control (PDLC), is proposed based on the notion of quantization of energy demand. This control protocol is built on top of two communication protocols that carry either complete or binary information about the operation status of the appliances. We discuss the optimal demand-side operation for both protocols and analytically derive the performance differences between them. We propose an optimal reservation strategy for traditional and renewable energy for the PDLC in both the day-ahead and real-time markets. In the end we discuss the fundamental trade-off between achieving controllability and endowing flexibility.
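
    The packetized direct load control (PDLC) idea rests on quantizing continuous demand into fixed-size energy packets that appliances request for a fixed duration. The Python sketch below only illustrates that quantization and the two information regimes mentioned above (complete operating status versus a single binary request bit); the packet size, field names, and grant rule are assumptions made for the example, not the paper's specification.

    import math
    from dataclasses import dataclass

    PACKET_KW = 1.0  # illustrative power quantum carried by one energy packet

    @dataclass
    class ApplianceRequest:
        appliance_id: str
        demand_kw: float    # visible to the controller only under the complete-information protocol
        wants_packet: bool  # the single bit carried by the binary protocol

    def packets_needed(demand_kw: float) -> int:
        # Quantize a continuous power demand into a whole number of energy packets.
        return math.ceil(demand_kw / PACKET_KW)

    def grant_packets(requests: list[ApplianceRequest], capacity_packets: int,
                      complete_info: bool) -> dict[str, int]:
        """Toy controller for one epoch: serve requests in order until the reserved
        capacity is exhausted. With complete information the grant is sized to the
        reported demand; with binary information only one packet is granted per request."""
        grants: dict[str, int] = {}
        remaining = capacity_packets
        for req in requests:
            if not req.wants_packet or remaining == 0:
                continue
            ask = packets_needed(req.demand_kw) if complete_info else 1
            granted = min(ask, remaining)
            grants[req.appliance_id] = granted
            remaining -= granted
        return grants

    reqs = [ApplianceRequest("hvac-1", 3.4, True), ApplianceRequest("wh-2", 1.2, True)]
    print(grant_packets(reqs, capacity_packets=4, complete_info=True))   # {'hvac-1': 4}
    print(grant_packets(reqs, capacity_packets=4, complete_info=False))  # {'hvac-1': 1, 'wh-2': 1}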

    Evaluation Study for Delay and Link Utilization with the New-Additive Increase Multiplicative Decrease Congestion Avoidance and Control Algorithm

    Get PDF
    As the Internet becomes increasingly heterogeneous, congestion avoidance and control become ever more important, and queue length, end-to-end delay, and link utilization are among the key metrics for evaluating congestion avoidance and control mechanisms. In this work we continue to study the performance of the New-AIMD (Additive Increase Multiplicative Decrease) mechanism, a development of one of the core algorithms for TCP congestion avoidance and control. We evaluate the effect of the modified algorithm, which we call New-AIMD, by measuring queue length, delay, and bottleneck link utilization, using the NCTUns simulator to obtain results after implementing the modification, with DropTail as the active queue management (AQM) mechanism in the bottleneck router. After implementing our new approach with different numbers of flows, we expect the delay, measured against the throughput of the whole system, to decrease, and we also expect lower end-to-end delay. We also measure a second type of delay, the queuing delay, as shown in Figure 1 below. Finally, we measure the bottleneck link utilization, and we expect this mechanism to achieve high utilization of the bottleneck link while avoiding collisions on the link.
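
    For reference, the baseline AIMD rule that New-AIMD builds on is the standard TCP-style window adjustment: grow the congestion window additively while no loss is detected and cut it multiplicatively on a loss event. The sketch below uses the classic parameters (alpha = 1 segment, beta = 0.5) purely for illustration; the abstract does not state the New-AIMD parameter values.

    def aimd_update(cwnd: float, loss: bool, alpha: float = 1.0, beta: float = 0.5) -> float:
        """One AIMD congestion-window update (classic parameters, used here only
        for illustration; New-AIMD's own parameters are not given above)."""
        if loss:
            # Multiplicative decrease on a loss event, never below one segment.
            return max(1.0, cwnd * beta)
        # Additive increase of one segment per round-trip time.
        return cwnd + alpha

    # Example trace: the window grows linearly, then halves when a loss is seen.
    cwnd = 10.0
    for rtt, loss in enumerate([False, False, True, False, False]):
        cwnd = aimd_update(cwnd, loss)
        print(f"RTT {rtt}: cwnd = {cwnd}")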

    Study and simulation of low rate video coding schemes

    Get PDF
    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color-mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token ring networks, and joint source/channel coder design.

    A Priority-based Fair Queuing (PFQ) Model for Wireless Healthcare System

    Get PDF
    Healthcare is a very active research area, primarily because the growing elderly population leads to an increasing number of emergency situations that require urgent action. In recent years, wireless networked medical devices have been equipped with sensors that measure and report a patient's vital signs remotely; the most important are the heart rate (ECG), pressure, and glucose sensors. The strict requirements and real-time nature of medical applications make appropriate Quality of Service (QoS) and fast, accurate delivery of a patient's measurements essential to a reliable e-health ecosystem. As the older adult population (65 years and above) grows, driven by advances in medicine and medical care over the last two decades, high QoS and a reliable e-health ecosystem have become a major challenge in healthcare, especially for patients who require continuous monitoring and attention. Predictions indicate that the elderly population in developing countries will reach approximately 2 billion by 2050, and the availability of medical staff will be unable to cope with this growth and with emergency cases that need immediate intervention. At the same time, limited communication network capacity, congestion, and the enormous increase in devices, applications, and IoT traffic on the available networks add a further layer of challenges for the e-health ecosystem, such as time constraints and the quality of measurements and signals reaching healthcare centres. This research therefore tackles the delay and jitter of e-health M2M wireless communication and succeeds in reducing them compared with currently available models. Its main novelty is a new priority queuing model, Priority-based Fair Queuing (PFQ), in which a new priority level based on the concept of a Patient's Health Record (PHR) is integrated with the Priority Parameter (PP) values of each sensor to add a second level of priority. Analysis of the PFQ model under different scenarios simulating a real M2M e-health environment shows that it outperforms widely used models such as First In First Out (FIFO) and Weighted Fair Queuing (WFQ). The PFQ model improves the transmission of ECG sensor data in emergency cases, decreasing delay and jitter by 83.32% and 75.88% respectively compared with FIFO, and by 46.65% and 60.13% compared with WFQ. For the pressure sensor, the improvements are 82.41% and 71.5% compared with FIFO, and 68.43% and 73.36% compared with WFQ. Glucose sensor data transmission also improves, by 80.85% and 64.7% compared with FIFO, and by 92.1% and 83.17% compared with WFQ. However, data transmission for non-emergency cases under the PFQ model is negatively affected and shows higher delay and jitter than FIFO and WFQ, since PFQ gives higher priority to emergency cases. A derivative of the PFQ model, Priority-based Fair Queuing-Tolerated Delay (PFQ-TD), is therefore developed to balance data transmission between emergency and non-emergency cases by allowing a tolerated delay for emergency cases. PFQ-TD balances this trade-off and reduces the total average delay and jitter of emergency and non-emergency cases across all sensors, keeping them within acceptable standards. PFQ-TD improves the overall average delay and jitter of emergency and non-emergency cases across all sensors by 41% and 84% respectively compared with the PFQ model.
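
    The scheduling idea described above, combining per-sensor Priority Parameter (PP) values with a patient-level score derived from the Patient's Health Record (PHR) to form a second priority tier, can be sketched as a weighted dequeue decision. The field names, weights, and combination rule below are illustrative assumptions, not the thesis's actual PFQ formulation.

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Packet:
        priority: float                          # lower value is served first
        seq: int                                 # tie-breaker keeps FIFO order within a class
        sensor: str = field(compare=False, default="")

    def effective_priority(sensor_pp: float, phr_risk: float, emergency: bool) -> float:
        """Combine the per-sensor Priority Parameter (PP) with a PHR-derived patient
        risk score; the weights are illustrative, not the PFQ model's values."""
        base = sensor_pp - 2.0 * phr_risk        # higher patient risk is served sooner
        return base - 10.0 if emergency else base

    queue: list[Packet] = []
    arrivals = [("ECG", 1.0, 0.9, True), ("pressure", 2.0, 0.4, False), ("glucose", 3.0, 0.4, False)]
    for seq, (sensor, pp, phr, emergency) in enumerate(arrivals):
        heapq.heappush(queue, Packet(effective_priority(pp, phr, emergency), seq, sensor))

    while queue:
        print(heapq.heappop(queue).sensor)       # the ECG emergency packet is dequeued first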

    Mechanistic modeling of architectural vulnerability factor

    Get PDF
    Reliability to soft errors is a significant design challenge in modern microprocessors, owing to the exponential increase in the number of transistors on a chip and the reduction in operating voltages with each process generation. Architectural Vulnerability Factor (AVF) modeling using microarchitectural simulators enables architects to make informed performance, power, and reliability trade-offs. However, such simulators are time-consuming and do not reveal the microarchitectural mechanisms that influence AVF. In this article, we present an accurate first-order mechanistic analytical model to compute AVF, developed from the first principles of out-of-order superscalar execution. This model provides insight into the fundamental interactions between the workload and the microarchitecture that together influence AVF. We use the model to perform design space exploration, parametric sweeps, and workload characterization for AVF.
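
    For context on what such a model estimates: AVF is conventionally defined through ACE (Architecturally Correct Execution) analysis, where a structure's AVF is the average fraction of its bits whose corruption would change the program outcome, i.e. the average number of resident ACE bits divided by the structure's total bit count. The sketch below evaluates that occupancy-based definition; the numbers are illustrative and this is not the article's mechanistic model itself.

    def structure_avf(ace_bit_residency: float, total_bits: int, total_cycles: int) -> float:
        """Occupancy-based AVF estimate for one hardware structure: the average
        number of ACE bits resident per cycle divided by the structure's bit count."""
        avg_ace_bits = ace_bit_residency / total_cycles
        return avg_ace_bits / total_bits

    # Example: a 64-entry, 32-bit buffer observed for 1e6 cycles, during which
    # ACE bits accumulated 4.1e8 bit-cycles of residency -> AVF of about 0.20.
    print(structure_avf(ace_bit_residency=4.1e8, total_bits=64 * 32, total_cycles=1_000_000))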

    On the Behavior of the Distributed Coordination Function of IEEE 802.11 with Multirate Capability under General Transmission Conditions

    Full text link
    The aim of this paper is threefold. First, it presents a multi-dimensional Markovian state transition model characterizing the behavior of the IEEE 802.11 protocol at the Medium Access Control layer, which accounts for packet transmission failures due to channel errors and models both saturated and non-saturated traffic conditions. Second, it provides a throughput analysis of the IEEE 802.11 protocol at the data link layer in both saturated and non-saturated traffic conditions, taking into account the impact of both the physical propagation channel and multirate transmission in a Rayleigh fading environment. The general traffic model assumed is M/M/1/K. Finally, it shows that the throughput in non-saturated traffic conditions is a linear combination of two system parameters: the payload size and the packet rate $\lambda^{(s)}$ of each contending station. The validity interval of the proposed model is also derived. Simulation results closely match the theoretical derivations, confirming the effectiveness of the proposed models.
    Comment: Submitted to IEEE Transactions on Wireless Communications, October 21, 200
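
    The stated result, that non-saturated throughput is linear in the payload size and the per-station packet rates, can be read as an offered-load approximation below saturation: each station contributes roughly its packet rate times its payload. The sketch below evaluates only that linear form; it is an illustrative reading, not the paper's derived closed-form expression.

    def nonsaturated_throughput(payload_bits: list[float], packet_rates: list[float]) -> float:
        """Aggregate offered throughput (bits/s) when every station is below
        saturation: station s contributes payload_bits[s] * packet_rates[s], so the
        total is linear in the payload sizes and packet rates."""
        return sum(size * rate for size, rate in zip(payload_bits, packet_rates))

    # Example: three stations, 8000-bit payloads, rates of 10, 20 and 5 packets/s.
    print(nonsaturated_throughput([8000, 8000, 8000], [10.0, 20.0, 5.0]))  # 280000.0 bits/s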