Performance evaluation of an open distributed platform for realistic traffic generation
Network researchers have dedicated a notable part of their efforts to modeling traffic and to implementing efficient traffic generators. We feel there is strong demand for traffic generators capable of reproducing realistic traffic patterns according to theoretical models while also achieving high performance. This work presents an open distributed platform for traffic generation that we call the Distributed Internet Traffic Generator (D-ITG), capable of producing traffic at packet level (network, transport, and application layer) and of accurately replicating appropriate stochastic processes for both the inter-departure time (IDT) and packet size (PS) random variables. We implemented two different versions of our distributed generator. In the first, a log server records the information transmitted by senders and receivers, with these communications based on either TCP or UDP. In the second, senders and receivers use the MPI library. A complete performance comparison among the centralized version and the two distributed versions of D-ITG is presented.
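As a rough illustration of the IDT/PS modelling the abstract describes, the sketch below draws exponentially distributed inter-departure times (a Poisson-like stream) with a constant packet size; the function and parameter names are illustrative assumptions, not D-ITG's actual API.

```python
import random

def generate_packets(n, rate_pps=1000.0, payload_bytes=512, seed=42):
    """Draw n (timestamp, size) pairs with exponentially distributed
    inter-departure times (IDT), i.e. a Poisson-like stream at
    rate_pps packets/s, and a constant packet size (PS).
    Names and defaults are illustrative, not D-ITG's API."""
    rng = random.Random(seed)
    t, schedule = 0.0, []
    for _ in range(n):
        t += rng.expovariate(rate_pps)  # mean IDT = 1/rate_pps seconds
        schedule.append((t, payload_bytes))
    return schedule

sched = generate_packets(5)
```

Other IDT/PS distributions (e.g. Pareto, constant) would slot in by replacing the `expovariate` draw.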
The Dynamics of Internet Traffic: Self-Similarity, Self-Organization, and Complex Phenomena
The Internet is the most complex system ever created in human history.
Therefore, its dynamics and traffic unsurprisingly exhibit a rich variety of complex behaviors, self-organization, and other phenomena that have been researched for years. This paper is a review of the complex dynamics of Internet traffic. Departing from usual treatments, we take a view from both the network-engineering and physics perspectives, showing the strengths, weaknesses, and insights of each. In addition, we cover many less-discussed phenomena such as traffic oscillations, large-scale effects of worm traffic, and comparisons between the Internet and biological models.
Comment: 63 pages, 7 figures, 7 tables; submitted to Advances in Complex Systems
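Self-similarity in traffic traces is commonly quantified by the Hurst exponent H. A minimal sketch of the classic aggregated-variance estimator (one of several methods such a review would cover) follows, assuming the variance of the m-aggregated series scales as m^(2H-2):

```python
import math
import random
import statistics

def hurst_aggvar(series, scales=(1, 2, 4, 8, 16)):
    """Aggregated-variance estimate of the Hurst exponent H.
    For a self-similar process, Var(X^(m)) ~ m^(2H-2), so a log-log
    fit of block-mean variance vs. aggregation level m has slope
    2H - 2."""
    xs, ys = [], []
    for m in scales:
        blocks = [statistics.fmean(series[i:i + m])
                  for i in range(0, len(series) - m + 1, m)]
        v = statistics.pvariance(blocks)
        if v > 0:
            xs.append(math.log(m))
            ys.append(math.log(v))
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1 + slope / 2

# Sanity check: i.i.d. noise has no long-range dependence, so H ~ 0.5;
# heavy long-range-dependent traffic would give H closer to 1.
rng = random.Random(0)
h = hurst_aggvar([rng.random() for _ in range(4096)])
```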
Using Transcoding for Hidden Communication in IP Telephony
The paper presents a new steganographic method for IP telephony called TranSteg (Transcoding Steganography). Typically, in steganographic communication the covert data are compressed to limit their size. In TranSteg it is the overt data that are compressed, to make space for the steganogram. The main innovation of TranSteg is, for a chosen voice stream, to find a codec that yields similar voice quality but a smaller voice payload than the one originally selected. The voice stream is then transcoded. At this step the original voice payload size is intentionally preserved and the change of codec is not indicated. Instead, after the transcoded voice payload is placed, the remaining free space is filled with hidden data. A TranSteg proof-of-concept implementation was designed and developed. The experimental results obtained, enclosed in this paper, prove that the proposed method is feasible and offers a high steganographic bandwidth. TranSteg is difficult to detect when inspection is performed at a single network location.
Comment: 17 pages, 16 figures, 4 tables
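The payload-reuse idea at the core of TranSteg can be sketched as follows. Sizes and names are illustrative assumptions: the transcoded voice is simply placed first and the freed bytes carry the steganogram, whereas the real method operates on RTP packets and leaves codec signalling unchanged.

```python
def transteg_payload(original_len, transcoded_voice, hidden_data):
    """Assemble a TranSteg-style payload: the voice has been transcoded
    to a lower-bitrate codec, but the payload keeps its original length;
    the freed bytes carry the steganogram (zero-padded here).
    Layout and sizes are illustrative only."""
    free = original_len - len(transcoded_voice)
    if free <= 0:
        raise ValueError("the new codec must yield a smaller voice payload")
    stego = hidden_data[:free].ljust(free, b"\x00")
    return transcoded_voice + stego  # same length as the original payload

# e.g. 160-byte original payload, 100 bytes after transcoding:
# 60 bytes of steganographic bandwidth per packet.
payload = transteg_payload(160, b"v" * 100, b"secret")
```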
Block the Root Takeover: Validating Devices Using Blockchain Protocol
This study addresses a vulnerability in the trust-based STP protocol that allows malicious users to target an Ethernet LAN with an STP Root-Takeover Attack. This subject is relevant because an STP Root-Takeover Attack is a gateway to unauthorized control over the entire network stack of a personal or enterprise network. This study aims to address this problem with a potentially trustless research solution called the STP DApp. The STP DApp combines a kernel /net modification called stpverify with a Hyperledger Fabric blockchain framework running in a NodeJS runtime environment in userland. The STP DApp works as an intrusion prevention system (IPS) by intercepting Ethernet traffic and blocking forged Ethernet frames sent by STP Root-Takeover attackers. This study's research methodology is a quantitative pre-experimental design that provides conclusive results through empirical data and analysis using experimental control groups. Data collection was based on active RAM utilization and CPU usage during a performance evaluation of the STP DApp as it blocked an STP Root-Takeover Attack launched by the Yersinia attack tool installed on a virtual machine running the Kali operating system. The research solution is a test blockchain framework using Hyperledger Fabric, consisting of an experimental test network of nodes on a host virtual machine, used to validate Ethernet frames extracted by stpverify.
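As a sketch of the kind of check an IPS like stpverify might perform (an assumption for illustration; the study's actual implementation is not shown here), the following parses the Root Bridge ID from an IEEE 802.1D Configuration BPDU and flags frames that advertise a "better" root than the trusted one:

```python
def bpdu_root_id(frame):
    """Extract (priority, MAC) of the Root Bridge ID from an 802.1D
    Configuration BPDU in an 802.3 frame (LLC DSAP/SSAP 0x42).
    Layout: 14-byte MAC header, 3-byte LLC, then BPDU = protocol id (2),
    version (1), type (1), flags (1), Root Identifier (8)."""
    llc = frame[14:17]
    if llc[:2] != b"\x42\x42":
        raise ValueError("not an STP frame")
    bpdu = frame[17:]
    if bpdu[3:4] != b"\x00":
        raise ValueError("not a Configuration BPDU")
    root_id = bpdu[5:13]
    priority = int.from_bytes(root_id[:2], "big")
    mac = root_id[2:8].hex(":")
    return priority, mac

def is_takeover(frame, trusted_priority, trusted_mac):
    """Flag a frame advertising a 'better' (numerically lower) root
    priority than the trusted root bridge: the signature of an STP
    Root-Takeover attempt such as those Yersinia generates."""
    prio, mac = bpdu_root_id(frame)
    return prio < trusted_priority or (prio == trusted_priority
                                       and mac != trusted_mac)
```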
Effectiveness of OPC for systems integration in the process control information architecture
A process is defined as the progression to some particular end or objective through a logical and orderly sequence of events. Various devices (e.g., actuators, limit switches, motors, sensors) play a significant role in making sure that the process attains its objective (e.g., maintaining the furnace temperature within an acceptable limit). To do this effectively, manufacturers need to access data from the plant floor or devices and integrate those data into their control applications, which may be one of the off-the-shelf tools such as Supervisory Control and Data Acquisition (SCADA), Distributed Control Systems (DCS), or Programmable Logic Controllers (PLC). A number of vendors have devised their own data acquisition networks or process control architectures (e.g., PROFIBUS, DeviceNet, INTERBUS, EtherNet/IP) that claim to be open to, or interoperable with, a number of third-party devices or products that make process data available at the process or business management level. In reality this is far from what is claimed. Due to the problem of interoperability, a manufacturer is forced to be bound either to the solutions provided by a single vendor or to writing a driver for each hardware device that is accessed by a process application. Today's manufacturers are looking for advanced distributed object technologies that allow seamless exchange of information across plant networks as a means of integrating the islands of automation that exist in their manufacturing operations. OLE for Process Control (OPC) works to significantly reduce the time, cost, and effort required to write custom interfaces for the hundreds of different intelligent devices and networks in use today.
The objective of this thesis is to explore OLE for Process Control (OPC) technology in depth by highlighting its need in industry and by using OPC in an application in which data from a process controlled by a Siemens Simatic S7 PLC are shared with a client application running in LabVIEW 6i.
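The integration problem OPC addresses, one common contract instead of one driver per vendor, can be illustrated with a hypothetical unified read interface. The class and method names below are invented for illustration and are not the OPC API:

```python
from abc import ABC, abstractmethod

class ProcessDataSource(ABC):
    """Hypothetical unified read contract, in the spirit of an OPC
    server interface: client applications code against this one class
    instead of writing a custom driver per vendor device."""

    @abstractmethod
    def read(self, tag: str) -> float:
        ...

class SimulatedPLC(ProcessDataSource):
    """Stand-in for any vendor device exposed behind the contract."""

    def __init__(self, tags):
        self._tags = dict(tags)

    def read(self, tag):
        return self._tags[tag]

# A SCADA/DCS client needs only the interface, not the vendor protocol.
plc = SimulatedPLC({"furnace.temperature": 812.5})
reading = plc.read("furnace.temperature")
```

Swapping `SimulatedPLC` for another device class leaves the client code untouched, which is the point of the shared interface.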
A Modular Approach to Adaptive Reactive Streaming Systems
The latest generations of FPGA devices offer large resource counts that provide the headroom to implement large-scale and complex systems. However, there are increasing challenges for the designer, not just because of pure size and complexity, but also in harnessing effectively the flexibility and programmability of the FPGA. A central issue is the need to integrate modules from diverse sources to promote modular design and reuse. Further, the capability to perform dynamic partial reconfiguration (DPR) of FPGA devices means that implemented systems can be made reconfigurable, allowing components to be changed during operation. However, use of DPR typically requires low-level planning of the system implementation, adding to the design challenge. This dissertation presents ReShape: a high-level approach for designing systems by interconnecting modules, which gives a "plug and play" look and feel to the designer, is supported by tools that carry out implementation and verification functions, and is carried through to support system reconfiguration during operation. The emphasis is on the inter-module connections and on abstracting the communication patterns that are typical between modules, for example the streaming of data that is common in many FPGA-based systems, or the reading and writing of data to and from memory modules. ShapeUp is also presented as the static precursor to ReShape. In both, the details of wiring and signaling are hidden from view, via metadata associated with individual modules. ReShape allows system reconfiguration at the module level, by supporting type checking of replacement modules and by managing the overall system implementation, via metadata associated with its FPGA floorplan. The methodology and tools have been implemented in a prototype for a broad domain-specific setting (networking systems) and have been validated on real telecommunications design projects.
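The module-level type checking described for ReShape might look like the following sketch, where a replacement module is accepted only if its port metadata matches what the slot exposes. The metadata schema here is an assumption for illustration, not ReShape's actual format:

```python
def compatible(slot_meta, module_meta):
    """Accept a replacement module only if every port the slot exposes
    exists in the module with the same direction and bit width, so a
    DPR swap cannot wire mismatched interfaces together."""
    for port, (direction, width) in slot_meta["ports"].items():
        if module_meta["ports"].get(port) != (direction, width):
            return False
    return True

# Illustrative streaming-style port metadata: (direction, width in bits).
slot = {"ports": {"s_axis": ("in", 32), "m_axis": ("out", 32)}}
ok_mod = {"ports": {"s_axis": ("in", 32), "m_axis": ("out", 32)}}
bad_mod = {"ports": {"s_axis": ("in", 64), "m_axis": ("out", 32)}}
```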
The Two-Step P2P Simulation Approach
In this article a framework is introduced that can be used to analyse the effects and requirements of P2P applications on both the application and the network layer. Because P2P applications are complex and deployed on a large scale, pure packet-level simulations do not scale well enough to analyse P2P applications in a large network with thousands of peers. It is also difficult to assess the effect of application-level behaviour on the communication system. We therefore propose an approach that starts with a more abstract, and therefore scalable, application-level simulation. For the application layer a specific simulation framework was developed. In a second step, the results of the application-layer simulations, plus some estimated background traffic, are fed into a packet-level simulator such as NS2 (or our lab testbed) to perform detailed packet-level analysis such as loss and delay measurements. This can be done for a subnetwork of the original network to avoid scalability problems.
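The two-step approach can be sketched as an abstract flow-level simulation whose output is then restricted to the subnetwork handed to the packet-level simulator; all names and distributions below are illustrative assumptions, not the article's framework:

```python
import random

def app_level_sim(n_peers, n_requests, seed=1):
    """Step 1 (sketch): an abstract application-layer simulation that
    emits flows (src, dst, bytes) rather than individual packets, so it
    scales to thousands of peers."""
    rng = random.Random(seed)
    flows = []
    for _ in range(n_requests):
        src, dst = rng.sample(range(n_peers), 2)  # distinct peers
        flows.append((src, dst, rng.randint(10_000, 1_000_000)))
    return flows

def restrict_to_subnet(flows, subnet):
    """Step 2 prep (sketch): keep only flows touching the subnetwork
    that the packet-level simulator (e.g. NS2) models in detail."""
    return [f for f in flows if f[0] in subnet or f[1] in subnet]

flows = app_level_sim(1000, 200)
detailed = restrict_to_subnet(flows, set(range(50)))
```

In the article's workflow, `detailed` plus estimated background traffic would be translated into the packet simulator's input format for loss and delay measurements.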
Performance evaluation of information and communications technology infrastructure for smart distribution network applications
This thesis was submitted for the degree of Master of Philosophy and awarded by Brunel University.
Current electrical networks require secure, scalable and cost-effective Information and Communications Technology (ICT) solutions to facilitate the novel functionalities required by Smart Grids. Countries around the globe are investigating alternative energy sources to mitigate the current energy crisis and the environmental issues caused by global warming, rapid population growth, inefficient energy management, dwindling fossil fuel resources, etc. Alternative or renewable energy sources, such as wind, solar, hydro, and combined heat and power, are therefore required to mitigate this crisis, and such sources will also need to be integrated into the power grid in a distributed manner. These distributed energy sources are mainly connected to the distribution networks and introduce substantial challenges for the distribution network operator (DNO). Many of these challenges cannot be dealt with effectively using existing network operation mechanisms; therefore, research and development of novel ICT solutions to support smart distribution network operation is required.
This research investigated suitable ICT solutions to enable the Smart Grid to tackle these challenges and proposes ICT infrastructure models that can be used in simulation studies to investigate cost-effective, scalable and secure solutions for DNOs. Initially, a Quality of Service (QoS) monitoring test-bed was proposed to evaluate the performance of bandwidth-intensive applications such as smart meter data transmission. Simulation studies for different communication technologies, cellular and Power Line Communication (PLC), were also carried out, and the simulation models were verified using experimental test results. Finally, the modelling and analysis of smart metering infrastructure was carried out using simulation, and extensive studies were performed to evaluate the data transmission rate performance for different configurations of smart meters and concentrators.
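The kind of sizing question such smart-metering studies address can be illustrated with a back-of-the-envelope uplink-load calculation; the figures below are assumed for illustration and are not the thesis's data:

```python
def concentrator_load(meters, reading_bytes=250, interval_s=900,
                      overhead=1.3):
    """Average uplink rate in bit/s at a concentrator serving `meters`
    smart meters, each reporting `reading_bytes` every `interval_s`
    seconds, inflated by a protocol overhead factor. All defaults are
    illustrative assumptions."""
    return meters * reading_bytes * 8 * overhead / interval_s

# 500 meters reporting every 15 minutes: ~1.4 kbit/s average uplink.
rate = concentrator_load(500)
```

Average load scales linearly with the meter count, which is why configurations of meters per concentrator drive the data-rate studies.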
An Innovative Malaxer Equipped with SCADA Platform for Improving Extra Virgin Olive Oil Quality
Agriculture 4.0 is gaining attention, and companies are looking to innovate their machines to increase income and improve the quality of the final products. In the agro-food sector there is ample space for innovation, as it lags far behind the industrial sector. This paper reports an industrial-scale study on the application of an innovative system for the extraction of Sicilian EVOO (extra virgin olive oil) to improve both process management and the quality of the product. Based on previous studies, the authors propose an innovative machine equipped with a SCADA (supervisory control and data acquisition) system for monitoring and controlling oxygen and process duration. The objective of the research was thus to discuss the development of a SCADA platform applied to the malaxer and the establishment of an optimized approach to control the main process parameters for obtaining high-quality EVOO. The application of the SCADA system in the EVOO extraction process allowed a qualitative improvement of the Sicilian EVOO of the Nocellara del Belice and Cerasuola cultivars. The use of the innovative system made it possible to increase the values of tocopherols (by about 25%) in Cerasuola cultivar EVOOs and the total phenol content (by about 30%) in Nocellara del Belice cultivar EVOOs.
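A SCADA rule for the malaxation monitoring described might be sketched as below; the set-points and action names are illustrative assumptions, not values or interfaces from the study:

```python
def malaxer_actions(o2_pct, elapsed_min, o2_max=5.0, time_max_min=40):
    """Return the control actions a malaxer SCADA rule might raise when
    headspace oxygen or malaxation time exceeds its set-point.
    Thresholds are illustrative assumptions, not the study's values."""
    actions = []
    if o2_pct > o2_max:
        actions.append("inject_inert_gas")  # limit oxidation of phenols
    if elapsed_min >= time_max_min:
        actions.append("stop_malaxation")   # cap process duration
    return actions
```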