Open Networking Lab: online practical learning of computer networking
Learning to configure computer networks requires a substantial practical component, suggesting a pedagogic approach that foregrounds experiential learning. However, providing appropriate computer networking hardware is expensive for classroom labs and is not viable for individual distance learners.
Simulation offers an alternative basis for practical learning and supports a range of modes, from individual distance learning to in-class blended learning. Sophisticated network simulation packages, such as Cisco's Packet Tracer, have high fidelity to networking devices and can simulate complex network scenarios. Unfortunately, their complex interfaces make it difficult for a novice student to engage productively.
The Open Networking Lab (ONL) will provide online resources for students of introductory computer networking. It will take an activity-centred approach, supported with video and screencasts, in preference to lengthy text. Practical activity is based on PT Anywhere, a network simulator that provides students with an easy-to-use, browser-based interface over Cisco's Packet Tracer. PT Anywhere thus provides fully authentic simulation but, by only revealing a subset of features, supports a carefully scaffolded approach to teaching and learning.
We report at an early stage in the development of the ONL. Material is being piloted with students at UK Further Education colleges. Evaluation will include observation, surveys and interviews with students and staff; PT Anywhere also provides learning analytics. A further stage of development will culminate in a badged open course on the Open University's OpenLearn platform.
The ONL will provide vocational learning at scale in educational institutions, in employment contexts and for individual learners.
Online experimentation and interactive learning resources for teaching network engineering
This paper presents a case study on teaching network engineering in conjunction with interactive learning resources. The case study has been developed in collaboration with the Cisco Networking Academy in the context of the FORGE project, which promotes online learning and experimentation by offering access to virtual and remote labs. The main goal of this work is to allow learners and educators to perform network simulations within a web browser or an interactive eBook on any mobile, tablet or desktop device. Learning analytics are employed to monitor learning behaviour for further analysis of the learning experience offered to students.
Using Data Compression for Delay Constrained Applications in Wireless Sensor Networks
Data compression is a technique used to save energy in Wireless Sensor Networks (WSNs) by reducing the quantity of data transmitted and the number of transmissions. Indeed, data transmission is the main cause of energy consumption in WSNs. There are critical applications, such as delay-constrained activities, in which the data have to reach the sink quickly for rapid analysis. In this article, we explore the use of data compression algorithms for delay-constrained applications by evaluating a recent data compression algorithm for WSNs, named K-RLE, with optimal parameters on an ultra-low-power microcontroller from the TI MSP430 series. The importance of the parameter K for the lossy algorithm K-RLE led us to propose and compare two methods to characterize K: the standard deviation and the Allan deviation. The latter allows us to control the percentage of data modified. Experimental results show that data compression is an energy-efficient technique that, in certain cases, also improves the global data-transfer time (compression plus transmission time) compared to direct transmission.
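As a rough illustration of the K-RLE idea (a minimal sketch; the function names and the exact run-representative rule are assumptions, not the authors' implementation), consecutive samples within K of the first value of the current run are folded into that run, so K=0 degenerates to plain run-length encoding while larger K trades accuracy for compression:

```python
def k_rle_compress(samples, k):
    """Lossy run-length encoding in the spirit of K-RLE: a sample
    within +/-k of the current run's reference value joins the run;
    k=0 reduces to exact RLE. Returns (value, count) pairs."""
    if not samples:
        return []
    runs = []
    ref, count = samples[0], 1
    for v in samples[1:]:
        if abs(v - ref) <= k:
            count += 1          # fold near-equal sample into run
        else:
            runs.append((ref, count))
            ref, count = v, 1   # start a new run
    runs.append((ref, count))
    return runs

def k_rle_decompress(runs):
    """Expand (value, count) pairs; lossy for k > 0 because every
    sample in a run is replaced by the run's reference value."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out
```

With k=2, the sequence [10, 11, 12, 20, 20] collapses to [(10, 3), (20, 2)]: fewer symbols to transmit at the cost of reporting 11 and 12 as 10.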
SSthreshless Start: A Sender-Side TCP Intelligence for Long Fat Network
Measurements show that 85% of TCP flows in the Internet are short-lived flows that spend most of their lifetime in the TCP startup phase. However, many previous studies indicate that the traditional TCP Slow Start algorithm does not perform well, especially in long fat networks. Two problems are known to impair Slow Start performance: the blind initial setting of the Slow Start threshold, and the aggressive increase of the probing rate during the startup phase regardless of the buffer sizes along the path. Existing efforts that tune the Slow Start threshold and/or the probing rate during the startup phase have not proved very effective, which prompted us to investigate a different approach. In this paper, we present a novel TCP startup method, called threshold-less slow start or SSthreshless Start, which does not need a Slow Start threshold to operate. Instead, SSthreshless Start uses the backlog status at the bottleneck buffer to adaptively adjust the probing rate, which allows it to better seize the available bandwidth. Compared to the traditional startup method and other major modified startup methods, our simulation results show that SSthreshless Start achieves significant performance improvements during the startup phase. Moreover, SSthreshless Start scales well across a wide range of buffer sizes, propagation delays and network bandwidths, and it shows excellent friendliness when operating simultaneously with the currently popular TCP NewReno connections.
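The "blind ssthresh" problem can be seen in a toy per-RTT model of classic Slow Start (a sketch of standard TCP behaviour for intuition, not of the SSthreshless algorithm): cwnd doubles each RTT up to ssthresh, then grows by one segment per RTT, so a low blind threshold in a long fat network forces a long linear climb to the path's bandwidth-delay product.

```python
def slow_start_trajectory(ssthresh, target_cwnd):
    """Toy per-RTT model of classic TCP startup (no losses): cwnd
    doubles each RTT while below ssthresh (Slow Start), then grows
    by 1 segment per RTT (congestion avoidance). Returns the cwnd
    value at each RTT until it first reaches target_cwnd, e.g. the
    path's bandwidth-delay product in segments."""
    cwnd, trajectory = 1, [1]
    while cwnd < target_cwnd:
        if cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)  # exponential probing
        else:
            cwnd += 1                       # linear growth
        trajectory.append(cwnd)
    return trajectory

# A well-chosen threshold reaches a 64-segment BDP in 6 RTTs,
# while a blind ssthresh of 8 needs dozens of extra linear RTTs.
fast = slow_start_trajectory(64, 64)
slow = slow_start_trajectory(8, 64)
```

Here `len(fast)` is 7 RTT samples versus 60 for `slow`: the same 64-segment pipe takes roughly eight times as many round trips to fill when the threshold is guessed too low, which is exactly the cost SSthreshless Start is designed to avoid.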
Enhancement of Adaptive Forward Error Correction Mechanism for Video Transmission Over Wireless Local Area Network
Video transmission over wireless networks faces many challenges, the most critical of which is packet loss. To overcome packet loss, Forward Error Correction (FEC) is used, adding extra packets known as redundant or parity packets. FEC mechanisms are currently adopted together with the Automatic Repeat reQuest (ARQ) mechanism to overcome packet losses and avoid network congestion under various wireless network conditions. Because wireless networks usually have varying conditions, the number of FEC packets needs to be chosen effectively. Adaptive FEC mechanisms have been proposed to suit the network condition by generating FEC packets adaptively; in the current Adaptive FEC mechanism, the number of FEC packets is decided by the average queue length and the average packet retransmission time. However, the current Adaptive FEC mechanism has major drawbacks, such as reduced recovery performance caused by injecting too many FEC packets into the network, and it is not flexible enough to adapt to varying wireless network conditions. Therefore, an enhancement of the Adaptive FEC mechanism (AFEC), known as Enhanced Adaptive FEC (EnAFEC), has been proposed. The aim is to improve the recovery performance of the current Adaptive FEC mechanism by injecting FEC packets dynamically based on varying wireless network conditions. The EnAFEC mechanism is implemented in a simulation environment using Network Simulator 2 (NS-2), and performance evaluations are carried out; EnAFEC was tested with a random uniform error model. The results of the experiments and performance analyses show that the EnAFEC mechanism outperforms the other Adaptive FEC mechanisms in terms of recovery efficiency. Based on the findings, the optimal number of FEC packets generated by the EnAFEC mechanism can recover from high packet loss and produce good video quality.
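The redundant-packet principle underlying any FEC scheme (illustrative only; EnAFEC's actual contribution is the adaptive rule for how many parity packets to generate) can be sketched with a single XOR parity packet, which suffices to recover exactly one lost data packet:

```python
def xor_parity(packets):
    """Build one parity packet as the byte-wise XOR of equal-length
    data packets: the simplest possible FEC redundancy."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Reconstruct one missing packet: XORing the parity with every
    packet that did arrive cancels them out, leaving the lost one."""
    missing = bytearray(parity)
    for pkt in received:
        for i, b in enumerate(pkt):
            missing[i] ^= b
    return bytes(missing)
```

Sending N data packets plus one parity packet tolerates any single loss per block; adaptive schemes vary the amount of such redundancy with the observed loss conditions, since too much parity wastes bandwidth and can itself worsen congestion.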
Analytical Investigation of On-Path Caching Performance in Information Centric Networks
Information Centric Networking (ICN) architectures are proposed as a solution to address the shift from a host-centric model toward an information-centric model in the Internet. In these architectures, routing nodes have caching functionality that can influence network traffic and communication quality, since data items can be served from nodes far closer to the requesting users. To realize effective caching networks, it is therefore important to grasp the cache characteristics of each node and to manage system resources, taking into account network metrics (e.g., higher hit ratio) as well as user metrics (e.g., shorter delay). This thesis studies methodologies for improving the performance of cache management in ICNs. As individual sub-problems, it investigates the LRU-2 and 2-LRU algorithms, geographical locality in the distribution of users' requests, and efficient caching in ICNs.
As the first contribution of this thesis, a mathematical model approximating the behaviour of the LRU-2 algorithm is proposed, and the 2-LRU and LRU-2 cache replacement algorithms are analyzed. The 2-LRU caching strategy has been shown to outperform LRU. The main idea behind 2-LRU and LRU-2 is to consider both frequency (the metric used in LFU) and recency (the metric used in LRU) in the cache replacement process. Simulation as well as numerical results show that the proposed model precisely approximates the miss rate of the LRU-2 algorithm.
Next, the influence of geographical locality in users' requests on the performance of networks of caches is investigated. Geographically localized and global request patterns have both been observed to possess Zipf properties (a power-law distribution in which a few data items have high request frequencies while most have low request frequencies), although the local distributions are poorly correlated with the global distribution. This suggests that several independent Zipf distributions combine to form an emergent Zipf distribution in real client request scenarios. An algorithm is proposed that generates realistic synthetic traffic to regional caches that possesses Zipf properties locally while also producing a global Zipf distribution. The simulation results show that caching performance can behave differently depending on the distribution the users' requests follow.
Finally, the efficiency of cache replacement and replication algorithms in ICNs is studied, since the ICN literature still lacks a deep empirical and analytical understanding of the benefits brought by in-network caching. An analytical model is proposed that optimally distributes a total cache budget among the nodes of an ICN for the LRU cache replacement and LCE cache replication algorithms. The results show how much user-centric and system-centric benefit can be gained through in-network caching compared to the benefit obtained through caching facilities provided only at the edge of the network.
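The 2-LRU filtering idea can be sketched as follows (a minimal illustrative model under the common description of 2-LRU, not the thesis's analytical treatment): a small LRU list of recently requested names acts as a filter, and an item enters the data cache only on its second request, so one-hit wonders never displace popular content.

```python
from collections import OrderedDict

class TwoLRU:
    """Sketch of 2-LRU: a name cache (LRU of recently requested
    keys) gates admission to the data cache, combining recency
    with a crude frequency signal (seen at least twice)."""
    def __init__(self, size):
        self.names = OrderedDict()   # filter: recently seen names
        self.cache = OrderedDict()   # admitted data items
        self.size = size

    def request(self, key):
        if key in self.cache:            # hit: refresh LRU position
            self.cache.move_to_end(key)
            return True
        if key in self.names:            # second request: admit item
            self.cache[key] = True
            if len(self.cache) > self.size:
                self.cache.popitem(last=False)   # evict LRU item
        else:                            # first request: note name only
            self.names[key] = True
            if len(self.names) > self.size:
                self.names.popitem(last=False)   # forget oldest name
        return False                     # either way, this was a miss
```

A key requested once is only remembered by name; it is cached, and can finally hit, from its second request onwards.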
FavorQueue: A parameterless active queue management to improve TCP traffic performance
This paper presents and analyzes the implementation of a novel active queue management (AQM) scheme named FavorQueue, which aims to improve the transfer delay of short-lived TCP flows over best-effort networks. The idea is to dequeue first those packets that do not belong to a flow already enqueued. The rationale is to mitigate the delay that long-lived TCP flows impose on short TCP data requests, and to prevent packet drops at the beginning of a connection and during the recovery period. Although the main target of this AQM is to accelerate short TCP traffic, we show that FavorQueue improves the performance not only of short TCP traffic but of all TCP traffic, in terms of drop ratio and latency, whatever the flow size. In particular, we demonstrate that FavorQueue reduces the loss of retransmitted packets, decreases the number of dropped packets recovered by RTO, and improves latency by up to 30% compared to DropTail. Finally, we show that the scheme remains compliant with recent TCP updates such as the increase of the initial slow-start value.
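The favoring rule can be sketched with a simplified two-tier queue model (an assumption-laden sketch, not the paper's implementation: here a packet is "favored" when no other packet of its flow is currently queued, and favored packets are always served first):

```python
from collections import deque

class FavorQueue:
    """Sketch of the FavorQueue idea: packets of flows with nothing
    else in the queue (typically new or short flows) jump ahead of
    packets from flows that already occupy the buffer."""
    def __init__(self):
        self.favored = deque()   # served first
        self.normal = deque()    # served when no favored packet waits
        self.queued = {}         # flow id -> packets currently queued

    def enqueue(self, flow, pkt):
        if self.queued.get(flow, 0) == 0:
            self.favored.append((flow, pkt))   # flow not in queue: favor
        else:
            self.normal.append((flow, pkt))
        self.queued[flow] = self.queued.get(flow, 0) + 1

    def dequeue(self):
        flow, pkt = (self.favored or self.normal).popleft()
        self.queued[flow] -= 1
        return flow, pkt
```

With a long flow occupying the queue, a single packet from a short flow overtakes the long flow's backlog, which is the effect the paper exploits to speed up connection setup and loss recovery.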
A multi-layer data fusion system for Wi-Fi attack detection using automatic belief assignment
Wireless networks are increasingly susceptible to sophisticated threats. An attacker may spoof the identity of legitimate users before mounting more serious attacks. Most current Intrusion Detection Systems (IDS) that employ a multi-layer approach to mitigating network attacks offer high detection accuracy rates and low numbers of false alarms. Dempster-Shafer theory has been used to combine beliefs from different metric measurements across multiple layers. However, an important step remains open: finding an automatic and self-adaptive process for Basic Probability Assignment (BPA). This paper describes a novel BPA methodology able to automatically adapt its detection capabilities to the currently measured characteristics, with a lightweight process for generating a baseline profile of normal utilisation and without intervention from the IDS administrator. We have developed a multi-layer application able to classify individual network frames as normal or malicious.
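The belief-fusion step that Dempster-Shafer theory provides can be sketched with Dempster's rule of combination for two BPAs over focal sets (a generic sketch of the standard rule; the paper's contribution is assigning the BPAs themselves automatically, which is not shown here):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments, each a dict mapping frozenset focal elements to
    mass. Mass falling on empty intersections (conflict) is
    discarded and the rest renormalised."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2            # contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: BPAs cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two layers each assign some mass to 'attack' and the rest to
# ignorance (the full frame {'normal', 'attack'}); combining them
# concentrates belief on 'attack'.
theta = frozenset({'normal', 'attack'})
layer1 = {frozenset({'attack'}): 0.7, theta: 0.3}
layer2 = {frozenset({'attack'}): 0.6, theta: 0.4}
fused = dempster_combine(layer1, layer2)
```

Here the combined mass on {'attack'} is 0.88, higher than either layer alone: agreement between independent layers reinforces the belief, which is why the rule suits multi-layer IDS fusion.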
- …