Supporting Real-Time Communication in CSMA-Based Networks: The VTP-CSMA Virtual Token Passing Approach
Doctoral thesis. Electrical and Computer Engineering. Faculdade de Engenharia. Universidade do Porto. 200
Interference caused by the insertion of an h-BEB station in standard shared-Ethernet networks: simulation analysis
This paper presents a simulation analysis of an enhanced collision-resolution algorithm for shared Ethernet networks. The algorithm, referred to as high-priority Binary Exponential Backoff (h-BEB), separates high-priority traffic, enabling support for real-time communication. One of the main features of h-BEB is that it allows standard Ethernet stations and h-BEB-modified stations to coexist in the same network segment, by imposing higher priority on the transfer of h-BEB messages (privileged traffic). The simulation analysis shows that the proposed traffic separation guarantees a predictable and significantly smaller access delay for the h-BEB station than for standard Ethernet stations. It also shows that h-BEB traffic must be tightly controlled, as it interferes heavily with non-real-time traffic: if the load generated by the h-BEB station is not closely controlled, the standard Ethernet stations may experience extended access delays
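The priority mechanism described above can be sketched as the difference between two backoff rules (a minimal sketch under assumed parameters; the paper's actual simulation model is not reproduced here):

```python
import random

# Assumed contention-window bounds, in slots (illustrative values).
CW_MIN, CW_MAX = 16, 1024

def standard_backoff(collisions):
    """Backoff slots drawn by a standard Ethernet station after
    `collisions` consecutive collisions (truncated BEB: the window
    doubles on each collision up to CW_MAX)."""
    cw = min(CW_MIN * 2 ** collisions, CW_MAX)
    return random.randrange(cw)

def hbeb_backoff(collisions):
    """An h-BEB station always contends in the first slot regardless of
    its collision history, which is what gives its traffic priority."""
    return 0
```

Because the h-BEB station always draws backoff 0, it wins every contention round against standard stations still counting down, which yields the predictable access delay (and, if its load is unchecked, the starvation of standard stations) reported above.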
Analysis of Hardware Accelerated Deep Learning and the Effects of Degradation on Performance
As convolutional neural networks become more prevalent in research and real-world applications, making them faster and more robust will be a constant battle. This thesis investigates the effect of degradation introduced into an image prior to object recognition with a convolutional neural network, and experiments with methods to reduce the degradation and improve performance. Gaussian smoothing and additive Gaussian noise are the two degradation models analyzed in this thesis; they are reduced with Gaussian and Butterworth masks using unsharp masking and smoothing, respectively. The results show that each degradation disrupts the performance of YOLOv3, with Gaussian smoothing producing a mean average precision below 20% and Gaussian noise producing a mean average precision as low as 0%. The reduction methods applied to the data yield a 1%-21% increase in mean average precision over the baseline, varying with the degradation model. These methods are also applied to an 8-bit quantized implementation of YOLOv3, intended to run on a Xilinx ZCU104 FPGA, which proved as robust as the floating-point network, with results within 2% mean average precision of it. With the ZCU104 able to process 416x416 images at 25 frames per second, comparable to an NVIDIA RTX 2080, FPGAs are a viable solution for object detection at the edge. In conclusion, this thesis shows that degradation causes a convolutional neural network (quantized or floating point) to lose accuracy to the point that it can no longer predict objects reliably. However, the degradation can be reduced, and in most cases the network's performance can be raised by using computer vision techniques to reduce the noise within the image
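The degradation/restoration pairing described above can be illustrated on a 1-D signal instead of an image (an assumption made to keep the sketch dependency-free): Gaussian smoothing flattens a sharp edge, and unsharp masking partially restores the high-frequency content.

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Convolve with edge clamping (repeat border samples)."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def unsharp_mask(signal, sigma=1.0, radius=3, amount=1.0):
    """sharpened = original + amount * (original - blurred)."""
    blurred = convolve(signal, gaussian_kernel(sigma, radius))
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]
```

Applied to a step edge that has been Gaussian-smoothed, the unsharp mask steepens the edge again, which is the restoration effect the thesis measures (there via mean average precision of the detector, not edge slope).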
A Real-Time Service-Oriented Architecture for Industrial Automation
Industrial automation platforms are experiencing a paradigm shift. New technologies are making their way in the area, including embedded real-time systems, standard local area networks like Ethernet, Wi-Fi and ZigBee, IP-based communication protocols, standard service oriented architectures (SOAs) and Web services. An automation system will be composed of flexible autonomous components with plug & play functionality, self configuration and diagnostics, and autonomic local control that communicate through standard networking technologies. However, the introduction of these new technologies raises important problems that need to be properly solved, one of these being the need to support real-time and quality-of-service (QoS) for real-time applications. This paper describes a SOA enhanced with real-time capabilities for industrial automation. The proposed architecture allows for negotiation of the QoS requested by clients from Web services, and provides temporal encapsulation of individual activities. This way, it is possible to perform an a priori analysis of the temporal behavior of each service, and to avoid unwanted interference among them. After describing the architecture, experimental results gathered on a real implementation of the framework (which leverages a soft real-time scheduler for the Linux kernel) are presented, showing the effectiveness of the proposed solution. The experiments were performed on simple case studies designed in the context of industrial automation applications
From Measurements to Modeling: The Trade-off between Energy Efficiency and System Performance
The work in this thesis is based on experimental and modeling methodologies. I started from measurements of Asymmetric Digital Subscriber Line (ADSL) technology, the preferred high-speed access to the Internet. The work focuses on the problem of determining the maximum stable bandwidth that an Internet Service Provider (ISP) can offer. This task was accomplished by monitoring and analyzing a large set of ADSL end-users. The huge amount of collected data allowed us to perform a detailed statistical analysis of the behavior of ADSL lines, covering the distribution, variable dependencies, and correlation of the effective bitrate with the physical measurements exposed by network devices. The analysis suggests that there is no clear trend for predicting the downstream bandwidth of an ADSL line from line conditions, so an intelligent way of analyzing the lines is required. A Neural Network (NN), an intelligent machine learning tool, was therefore employed to (i) learn the behavior of ADSL lines, (ii) extract useful information from the huge set of measurements, and (iii) automatically suggest a maximum stable bandwidth. The results indicate that the NN performs well in predicting end-users' available bandwidth; however, it must be properly trained, and its design parameters must be carefully selected.

Later, the work focused on the energy efficiency of telecommunication systems using a mathematical modeling approach. The motivation of the first study was to determine how energy efficient a Voice over Internet Protocol (VoIP) architecture is compared to the traditional Public-Switched Telephone Network (PSTN) architecture used for voice communication. To answer this, the two architectures already deployed at our campus were extensively examined by measuring real power consumption. To generalize and estimate the power consumption for any given number of users, a mathematical model of power consumption was built for both architectures. The results indicate that the VoIP architecture consumes considerable power, yet a VoIP system has the flexibility to be made energy efficient by adding energy-aware schemes.

The last part of the work investigates the interaction between the Transmission Control Protocol (TCP) and rate adaptation, an approach that saves energy by adapting transmission rates or capacity to traffic dynamics. To gain deep insight into this interaction, a mathematical model based on a fluid framework was built to capture the behavior of TCP and the rate adaptation scheme. The model was then tested for accuracy and stability through simulation experiments and steady-state analysis, and later used to study the impact of parameter tuning on system performance. The results suggest that a rate adaptation scheme can be effective, but careful parameter setting is needed to avoid undesired disruptive interactions among controllers at different levels, which impair QoS
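The per-user power comparison in the second part can be sketched with a linear model (the linear form and every coefficient below are illustrative assumptions, not the thesis's measured or fitted values): total power is a fixed baseline for the core equipment plus a per-user increment.

```python
# Hypothetical linear power model: the shape is a common modeling
# choice, but the numbers here are invented for illustration only.

def power_model(n_users, p_fixed, p_per_user):
    """Estimated power draw (watts) for n_users subscribers."""
    return p_fixed + n_users * p_per_user

# Assumed scenario: a PSTN exchange with a large fixed cost versus a
# VoIP deployment whose per-user cost (IP phone + switch port) dominates.
pstn = lambda n: power_model(n, p_fixed=2000.0, p_per_user=1.0)
voip = lambda n: power_model(n, p_fixed=300.0, p_per_user=12.0)
```

Under these assumed coefficients the two curves cross near 155 users, which is the kind of crossover such a model lets one estimate for "any given number of users" as the abstract describes.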
AirSync: Enabling Distributed Multiuser MIMO with Full Spatial Multiplexing
The enormous success of advanced wireless devices is pushing the demand for
higher wireless data rates. Denser spectrum reuse through the deployment of
more access points per square mile has the potential to successfully meet the
increasing demand for more bandwidth. In theory, the best approach to density
increase is via distributed multiuser MIMO, where several access points are
connected to a central server and operate as a large distributed multi-antenna
access point, ensuring that all transmitted signal power serves the purpose of
data transmission, rather than creating "interference." In practice, while
enterprise networks offer a natural setup in which distributed MIMO might be
possible, there are serious implementation difficulties, the primary one being
the need to eliminate phase and timing offsets between the jointly coordinated
access points.
In this paper we propose AirSync, a novel scheme which provides not only time
but also phase synchronization, thus enabling distributed MIMO with full
spatial multiplexing gains. AirSync locks the phase of all access points using
a common reference broadcasted over the air in conjunction with a Kalman filter
which closely tracks the phase drift. We have implemented AirSync as a digital
circuit in the FPGA of the WARP radio platform. Our experimental testbed,
comprised of two access points and two clients, shows that AirSync is able to
achieve phase synchronization within a few degrees, and allows the system to
nearly achieve the theoretical optimal multiplexing gain. We also discuss MAC
and higher layer aspects of a practical deployment. To the best of our
knowledge, AirSync offers the first ever realization of the full multiuser MIMO
gain, namely the ability to increase the number of wireless clients linearly
with the number of jointly coordinated access points, without reducing the per
client rate.

Comment: Submitted to Transactions on Networking
Modular software architecture for flexible reservation mechanisms on heterogeneous resources
Management, allocation and scheduling of heterogeneous resources for complex distributed real-time applications is a challenging problem. Timing constraints of applications may be fulfilled by a proper use of real-time scheduling policies, admission
control and enforcement of timing constraints. However, it is not easy to design basic infrastructure services that allow for an easy
access to the allocation of multiple heterogeneous resources in a distributed environment.
In this paper, we present a middleware for providing distributed soft real-time applications with a uniform API for reserving
heterogeneous resources with real-time scheduling capabilities in a distributed environment. The architecture relies on standard
POSIX OS facilities, such as time management and standard TCP/IP networking services, and it is designed around CORBA, in
order to facilitate modularity, flexibility and portability of the applications using it. However, real-time scheduling is supported
by proper extensions at the kernel-level, plugged within the framework by means of dedicated resource managers. Our current implementation on Linux supports reservation of CPU, disk and network bandwidth. However, additional resource managers supporting alternative real-time schedulers for these resources, as well as additional types of resources, may be easily added.
We present experimental results gathered on both synthetic applications and a real multimedia video streaming case study, showing advantages deriving from the use of the proposed middleware. Finally, overhead figures are reported, showing sustainability of the approach for a wide class of complex, distributed, soft real-time applications
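A uniform reservation API over per-resource managers, as described above, might look like the following sketch (all class and method names are hypothetical; the middleware's real CORBA interfaces and kernel-level schedulers are not reproduced here):

```python
class ResourceManager:
    """Admission control for one resource type (CPU, disk, network...)."""
    def __init__(self, capacity):
        self.capacity = capacity       # e.g. 1.0 = 100% utilization
        self.reserved = 0.0

    def admit(self, budget, period):
        """Utilization-based admission test for a (budget, period)
        reservation; a real manager would also program the scheduler."""
        share = budget / period
        if self.reserved + share > self.capacity:
            return False               # request rejected
        self.reserved += share
        return True

class Middleware:
    """Single entry point hiding the heterogeneous resource managers."""
    def __init__(self):
        self.managers = {}

    def register(self, name, capacity):
        self.managers[name] = ResourceManager(capacity)

    def reserve(self, name, budget, period):
        return self.managers[name].admit(budget, period)
```

The design point mirrored here is the one the abstract emphasizes: applications see one `reserve`-style call, while new resource types or alternative schedulers plug in as additional managers behind it.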
Buffer Sizing for 802.11 Based Networks
We consider the sizing of network buffers in 802.11 based networks. Wireless
networks face a number of fundamental issues that do not arise in wired
networks. We demonstrate that the use of fixed size buffers in 802.11 networks
inevitably leads to either undesirable channel under-utilization or unnecessary
high delays. We present two novel dynamic buffer sizing algorithms that achieve
high throughput while maintaining low delay across a wide range of network
conditions. Experimental measurements demonstrate the utility of the proposed
algorithms in a production WLAN and a lab testbed.

Comment: 14 pages, to appear in IEEE/ACM Transactions on Networking
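The core idea of delay-based dynamic buffer sizing can be sketched as follows (an illustrative stand-in under assumed bounds, not the paper's actual algorithms): size the buffer to drain in a target time at the currently measured service rate, so the limit shrinks when the channel slows down and grows when it speeds up.

```python
def buffer_limit(measured_rate_pps, target_delay_s,
                 min_pkts=2, max_pkts=400):
    """Packets of buffering that drain in ~target_delay_s at the
    measured service rate, clamped to assumed sane bounds."""
    limit = round(measured_rate_pps * target_delay_s)
    return max(min_pkts, min(limit, max_pkts))
```

This is the opposite of a fixed-size buffer: when the 802.11 channel degrades, the limit falls to keep delay bounded, and when it improves, the limit rises to avoid under-utilization, which is the trade-off the abstract identifies.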