697 research outputs found
Proposal of a clean slate network architecture for ubiquitous services provisioning
The Pervasive Computing field is almost always addressed from an application, middleware, sensing or Human-Computer Interaction perspective. Thus, solutions are usually designed at the application level or involve developing new hardware.
Although current layered network architectures (mainly the TCP/IP stack) have enabled the internetworking of many different devices and services, they are neither well suited nor optimized for pervasive computing applications. Hence, we firmly believe that an underlying network architecture should provide the flexible, context-aware and adaptable communication infrastructure required to ease the development of ubiquitous services and applications. Herein, we propose a clean slate network architecture to deploy ubiquitous services in a Pervasive and Ubiquitous Computing environment. The architecture is designed to avoid hierarchical layering, so we propose a service-oriented approach for a flow-oriented, context-aware network architecture in which communications are composed on the fly (using reusable components) according to the needs and requirements of the consumed service.
Satellite-based delivery of educational content to geographically isolated communities: A service based approach
Enabling learning for members of geographically
isolated communities presents benefits in terms of
promoting regional development and cost savings for governments and companies. However, notwithstanding recent advances in e-Learning, from both technological and pedagogical perspectives, there are very few, if any,
recognised methodologies for user-led design of satellite-based e-learning infrastructures. In this paper, we present a methodology for designing a satellite- and wireless-based network infrastructure and learning services to support distance learning for such isolated communities. This methodology entails (a) the involvement of community members in the development of targeted learning services from an early stage, and (b) a service-oriented approach to learning solution deployment. Results show that, while the technological premises of distance learning can be accommodated by hybrid satellite/wireless infrastructures, this has to be complemented with (a) high-quality audio-visual educational material, and (b) the opportunity for community members to interact with other community members either as groups (common-room oriented scenarios) or individuals (home-based scenarios), thus providing an impetus for learner engagement in both formal and informal activities.
Physically Dense Server Architectures.
Distributed, in-memory key-value stores have emerged as one of today's most
important data center workloads. Being critical for the scalability of modern
web services, vast resources are dedicated to key-value stores in order
to ensure that quality of service guarantees are met. These resources include:
many server racks to store terabytes of key-value data, the power necessary to
run all of the machines, networking equipment and bandwidth, and the data center
warehouses used to house the racks.
There is, however, a mismatch between the key-value store software and the
commodity servers on which it is run, leading to inefficient use of resources.
The primary cause of inefficiency is the overhead incurred from processing
individual network packets, which typically carry small payloads, and require
minimal compute resources. Thus, one of the key challenges as we enter the
exascale era is how to best adjust to the paradigm shift from compute-centric
to storage-centric data centers.
This dissertation presents a hardware/software solution that addresses the
inefficiency issues present in the modern data centers on which key-value
stores are currently deployed. First, it proposes two physical server
designs, both of which use 3D-stacking technology and low-power CPUs to improve
density and efficiency. The first 3D architecture---Mercury---consists of stacks
of low-power CPUs with 3D-stacked DRAM. The second
architecture---Iridium---replaces DRAM with 3D NAND Flash to improve density.
The second portion of this dissertation proposes an enhanced version of the
Mercury server design---called KeyVault---that incorporates integrated,
zero-copy network interfaces along with an integrated switching fabric. In order
to utilize the integrated networking hardware, as well as reduce the
response time of requests, a custom networking protocol is proposed. Unlike
prior works on accelerating key-value stores---e.g., by completely bypassing the
CPU and OS when processing requests---this work only bypasses the CPU and OS
when placing network payloads into a process' memory. The insight behind this is
that because most of the overhead comes from processing packets in the OS
kernel---and not the request processing itself---direct placement of a packet's
payload is sufficient to provide higher throughput and lower latency than prior
approaches.
PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111414/1/atgutier_1.pd
Converged IP-over-standard Ethernet process control networks for hydrocarbon process automation applications controllers
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The maturity level of the Internet Protocol (IP) and the emergence of standard Ethernet interfaces for Hydrocarbon Process Automation Application (HPAA) systems present a real opportunity to combine independent industrial applications onto an integrated IP-based network platform. Quality of Service (QoS) for IP over Ethernet has the strength to regulate a traffic mix and support timely delivery. The combination of these technologies provides a platform to support HPAA applications across Local Area Networks (LANs) and Wide Area Networks (WANs). HPAA systems are composed of sensors, actuators, and logic solvers networked together to form independent control system network platforms. They support hydrocarbon plants operating under critical conditions that, if not controlled, could become dangerous to people, assets and the environment. This demands high-speed networking, triggered by the need to capture data at a higher frequency and a finer granularity. Nevertheless, existing HPAA network infrastructure is based on unique autonomous systems, which has resulted in multiple, parallel and separate networks with limited interconnectivity supporting different functions. This has increased the complexity of integrating various applications and raised the total cost of ownership over the technology life cycle. To date, the concept of consolidating HPAA into a converged IP network over standard Ethernet has not been explored. This research aims to explore and develop HPAA Process Control Systems (PCS) in a Converged Internet Protocol (CIP) setting, using experimental and simulated network case studies. Results from the experimental and simulation work showed encouraging outcomes and provided a good argument for supporting the co-existence of HPAA and non-HPAA applications, taking into consideration timeliness and reliability requirements.
This was achieved by invoking priority-based scheduling, with the highest priority awarded to PCS among other supported services such as voice, multimedia streams and other applications. HPAA can benefit from utilizing CIP over Ethernet by reducing the number of interdependent HPAA PCS networks to a single uniform and standard network. In addition, this integrated infrastructure offers a platform for additional support services such as multimedia streaming, voice, and data. This network-based model lends itself to integration with remote control system platform capabilities at the end user's desktop, independent of space and time, resulting in the concept of plant virtualization.
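The priority-based scheduling described above can be sketched as a strict-priority queue. The class names and numeric ranks below are illustrative assumptions, not values taken from the thesis; the only property carried over is that PCS traffic is always served first.

```python
import heapq

# Illustrative traffic classes: PCS (process control) always outranks the rest.
# The ranks are assumptions for this sketch, not values from the thesis.
PRIORITY = {"PCS": 0, "voice": 1, "multimedia": 2, "data": 3}

class StrictPriorityQueue:
    """Strict-priority packet queue with FIFO order within each class."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserving arrival order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        # Returns the oldest packet of the highest-priority non-empty class.
        return heapq.heappop(self._heap)[2]
```

Under this discipline a PCS packet that arrives while voice or data packets are waiting is still dequeued first, which is the behaviour the thesis relies on to meet control-system timeliness requirements.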
Network Access in a Diversified Internet
There is a growing interest in virtualized network infrastructures as a means to enable experimental evaluation of new network architectures on a realistic scale. The National Science Foundation's GENI initiative seeks to develop a national experimental facility that would include virtualized network platforms that can support many concurrent experimental networks. Some researchers seek to make virtualization a central architectural component of a future Internet, so that new network architectures can be introduced at any time, without the barriers to entry that currently make this difficult. This paper focuses on how to extend the concept of virtualized networking through LAN-based access networks to the end systems. Our objective is to allow virtual networks that support new network services to make those services directly available to applications, rather than force applications to access them indirectly through existing network protocols. We demonstrate that this approach can improve performance by an order of magnitude over other approaches and can enable virtual networks that provide end-to-end quality of service.
QoS provisioning in multimedia streaming
Multimedia consists of voice, video, and data. Sample applications include video conferencing, video on demand, distance learning, distributed games, and movies on demand. Providing Quality of Service (QoS) for multimedia streaming has been a difficult and challenging problem. When multimedia traffic is transported over a network, video traffic, though usually compressed/encoded for bandwidth reduction, still consumes most of the bandwidth. In addition, compressed video streams typically exhibit highly variable bit rates as well as long range dependence properties, thus exacerbating the challenge in meeting the stringent QoS requirements of multimedia streaming with high network utilization. Dynamic bandwidth allocation in which video traffic prediction can play an important role is thus needed.
Prediction of the variation of the I frame size using Least Mean Square (LMS) is first proposed. Owing to the smoother sequence, better prediction is achieved compared to the composite MPEG video traffic prediction scheme. One problem with the LMS algorithm is its slow convergence. In Variable Bit Rate (VBR) videos characterized by frequent scene changes, the LMS algorithm may result in extended periods of poor tracking, and thus may experience excessive cell loss during scene changes. A fast-convergent non-linear predictor called the Variable Step-size Algorithm (VSA) is subsequently proposed to overcome this drawback. The VSA algorithm not only incurs small prediction errors but, more importantly, achieves fast convergence; it tracks scene changes better than LMS. Bandwidth is then assigned based on the predicted I frame size, which is usually the largest in a Group of Pictures (GOP). Hence, the Cell Loss Ratio (CLR) can be kept small. By reserving bandwidth at least equal to the predicted size, only prediction errors need to be buffered. Since the prediction error was demonstrated to resemble white noise, or to exhibit at most short-term memory, smaller buffers, lower delay, and higher bandwidth utilization can be achieved. To further improve network bandwidth utilization, a QoS-guaranteed online bandwidth allocation scheme is proposed. This method allocates bandwidth based on the predicted GOP size and the required QoS. Simulation and analytical results demonstrate that this scheme provides guaranteed delay and achieves higher bandwidth utilization.
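As a rough sketch of the kind of LMS predictor described above (the filter order and step size are illustrative choices, and the thesis's VSA variant would additionally adapt the step size):

```python
import numpy as np

def lms_predict(frames, order=4, mu=0.01):
    """One-step LMS prediction of an I-frame size sequence.

    frames: 1-D array of observed I-frame sizes (assumed normalized);
    order:  number of past frames fed to the linear predictor;
    mu:     fixed step size (the VSA variant adapts this instead).
    """
    frames = np.asarray(frames, dtype=float)
    w = np.zeros(order)                # adaptive filter taps
    preds = np.zeros(len(frames))
    for n in range(order, len(frames)):
        x = frames[n - order:n][::-1]  # most recent frame first
        preds[n] = w @ x               # predicted next I-frame size
        e = frames[n] - preds[n]       # prediction error
        w += mu * e * x                # LMS tap update
    return preds
```

Reserving at least the predicted size per GOP then means only the small, noise-like error term has to be absorbed by buffering, which is what keeps the CLR low.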
Network traffic is generally accepted to be self-similar. Aggregating self-similar traffic can actually intensify rather than diminish burstiness. Thus, traffic prediction plays an important role in network management. Least Mean Kurtosis (LMK), which uses the negated kurtosis of the error signal as the cost function, is proposed to predict self-similar traffic. Simulation results show that the prediction performance is greatly improved compared to the LMS algorithm. Thus, it can be used to effectively predict real-time network traffic.
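The self-similarity claim is commonly checked with a variance-time statistic: for short-range-dependent traffic, the variance of the m-aggregated mean process decays like m^-1, while self-similar traffic with Hurst parameter H > 0.5 decays more slowly, like m^(2H-2). A minimal sketch of that statistic (not code from the dissertation):

```python
import numpy as np

def variance_time(series, scales):
    """Variance of the m-aggregated mean process for each m in `scales`.

    On a log-log plot, a slope near -1 indicates short-range dependence;
    a shallower slope of 2H - 2 indicates self-similarity with Hurst H.
    """
    series = np.asarray(series, dtype=float)
    out = []
    for m in scales:
        k = len(series) // m
        # Average the series in non-overlapping blocks of length m.
        agg = series[:k * m].reshape(k, m).mean(axis=1)
        out.append(agg.var())
    return np.array(out)
```

For i.i.d. input the fitted slope is close to -1; the measured traces in the self-similar traffic literature show distinctly shallower slopes, which is why aggregation fails to smooth the burstiness away.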
The Differentiated Services (DiffServ) model is a less complex and more scalable solution for providing QoS over IP than the Integrated Services (IntServ) model. We propose to transport MPEG frames through various DiffServ service classes according to the MPEG video characteristics. Performance analysis and simulation results show that our proposed approach can not only guarantee QoS but also achieve high bandwidth utilization. As the end video quality is determined not only by the network QoS but also by the encoded video quality, we consider video quality from these two aspects and further propose to transport spatially scalable encoded videos over DiffServ. Performance analysis and simulation results show that this can provision QoS guarantees. The dropping policy we propose at the egress router can reduce the traffic load as well as the risk of congestion in other domains.
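A frame-type-to-class mapping of the kind described could look like the following sketch. The abstract does not give the exact assignment, so the AF classes chosen here are assumptions; the DSCP code points themselves are the standard assured-forwarding values. The rationale is that losing an I frame corrupts a whole GOP, so it gets the lowest drop precedence, while B frames are the most expendable.

```python
# Standard DSCP code points for the AF4x assured-forwarding classes (RFC 2597).
DSCP = {"AF41": 34, "AF42": 36, "AF43": 38}

# Hypothetical mapping by frame importance: I (reference for the whole GOP),
# then P (forward-predicted), then B (bidirectionally predicted).
FRAME_CLASS = {"I": "AF41", "P": "AF42", "B": "AF43"}

def dscp_for_frame(frame_type):
    """Return the illustrative DSCP value for an MPEG frame type."""
    return DSCP[FRAME_CLASS[frame_type]]
```

Routers then drop B-frame packets first under congestion, degrading quality gracefully instead of breaking decoding.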
VoIP WeatherMap - a VoIP QoS collection, analysis and dissemination system
Current trends point to VoIP as a cheaper and more effective long-term solution than possible future PSTN upgrades. To move towards greater adoption of VoIP, the future converged digital network is moving towards a service-level management and control regime. To ensure that VoIP services provide an acceptable quality of service (QoS), a measurement solution is needed. The research outcome presented in this thesis is a new system for testing, analysing and presenting the call quality of Voice over Internet Protocol (VoIP) calls. The system is called the VoIP WeatherMap. Information about the current status of the Internet for VoIP calls is currently limited, and a recognised approach to identifying network status has not been adopted. An important consideration is the difficulty of assessing network conditions across links that include network segments belonging to different telecommunication companies and Internet Service Providers. The VoIP WeatherMap uses probes to simulate voice calls by implementing RTP/RTCP stacks. VoIP packets are sent from a probe to a server over the Internet. Important characteristics of VoIP calls, such as delay and packet loss rate, are collected by the server, analysed, stored in a database and presented through a web-based interface. The collected voice call session data is analysed using the E-model algorithm described in ITU-T G.107. The VoIP WeatherMap presentation system includes a geographic display in which Internet connection links are coloured to represent their Quality of Service ranking.
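The G.107 E-model combines delay and loss impairments into a rating factor R, which maps to an estimated Mean Opinion Score. A widely used simplified form (the Cole-Rosenbluth reduction, with defaults approximating a G.711 codec under random loss) is sketched below; the thesis may use the full G.107 computation rather than these particular coefficients.

```python
def r_factor(delay_ms, loss_pct, ie=0.0, bpl=4.3):
    """Simplified E-model rating: R = 93.2 - Id(delay) - Ie_eff(loss).

    ie (equipment impairment) and bpl (loss robustness) default to values
    approximating G.711 with random packet loss.
    """
    id_ = 0.024 * delay_ms
    if delay_ms > 177.3:
        id_ += 0.11 * (delay_ms - 177.3)  # extra penalty past 177.3 ms one-way
    ie_eff = ie + (95.0 - ie) * loss_pct / (loss_pct + bpl)
    return 93.2 - id_ - ie_eff

def mos(r):
    """Map a rating factor R to an estimated Mean Opinion Score (G.107)."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6
```

A WeatherMap-style display could then colour a link by thresholding R (for instance, R above 80 shown green, below 60 shown red); those particular thresholds are illustrative, not from the thesis.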
Mission-Critical Mobile Communications over LTE Networks (Comunicaciones Móviles de Misión Crítica sobre Redes LTE)
Mission Critical Communications (MCC) have typically been provided by proprietary radio technologies but, in recent years, interest in using commercial off-the-shelf mobile technologies has increased. In this thesis, we explore the use of LTE to support MCC. We analyse the feasibility of LTE networks using an experimental platform, PerformNetworks. To do so, we extend the testbed to increase the number of possible scenarios and the tooling available. After exploring the Key Performance Indicators (KPIs) of LTE, we propose different architectures to support the performance and functional requirements demanded by MCC.
We have identified latency as one of the KPIs to improve, and we have made several proposals to reduce it. These proposals follow the Mobile Edge Computing (MEC) paradigm, locating the services in what we call the fog, close to the base station, to avoid the backhaul and transport networks. Our first proposal is the Fog Gateway, a MEC solution fully compatible with standard LTE networks that analyses the traffic coming from the base station to decide whether it has to be routed to the fog or processed normally by the SGW. Our second proposal is its natural evolution, the GTP Gateway, which requires modifications to the base station. With this proposal, the base station transports over GTP only the traffic not going to the fog.
Both proposals have been validated in emulated scenarios and, in the case of the Fog Gateway, also with the implementation of different prototypes, proving its compatibility with standard LTE networks and its performance. The gateways can drastically reduce end-to-end latency, as they avoid the time consumed by the backhaul and transport networks, with a very low trade-off.
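At its core, the Fog Gateway's forwarding decision reduces to a destination check on traffic leaving the base station. This sketch uses made-up service addresses, and it abstracts away the GTP encapsulation the real prototypes have to inspect:

```python
# Hypothetical addresses of services deployed in the fog, next to the eNB.
FOG_SERVICES = {"10.0.0.5", "10.0.0.6"}

def route(dst_ip):
    """Divert traffic for fog-hosted services; send the rest on to the SGW."""
    return "fog" if dst_ip in FOG_SERVICES else "sgw"
```

Diverted packets never traverse the backhaul and transport networks, which is where the latency savings reported above come from.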
- …