QoSatAr: a cross-layer architecture for E2E QoS provisioning over DVB-S2 broadband satellite systems
This article presents QoSatAr, a cross-layer architecture developed to provide end-to-end quality of service (QoS) guarantees for Internet Protocol (IP) traffic over Digital Video Broadcasting - Second Generation (DVB-S2) satellite systems. The architecture design is based on a cross-layer optimization between the physical layer and the network layer to provide QoS provisioning based on the bandwidth available in the DVB-S2 satellite channel. Our design is developed at the satellite-independent layers, in compliance with the ETSI-BSM-QoS standards. The architecture is set up inside the gateway; it includes a Re-Queuing Mechanism (RQM) to enhance the goodput of the EF and AF traffic classes and an adaptive IP scheduler to guarantee the high-priority traffic classes, taking into account the channel conditions affected by rain events. One of the most important aspects of the architecture design is that QoSatAr is able to guarantee the QoS requirements for specific traffic flows using a single parameter: the bandwidth availability, which is set at the physical layer (taking adaptive coding and modulation into account) and sent to the network layer by means of a cross-layer optimization. The architecture has been evaluated using the NS-2 simulator. In this article, we present evaluation metrics, extensive simulation results and conclusions about the performance of the proposed QoSatAr when it is evaluated over a DVB-S2 satellite scenario. The key results show that the implementation of this architecture makes it possible to keep the satellite system load under control while guaranteeing the QoS levels for the high-priority traffic classes, even when bandwidth variations due to rain events are experienced. Moreover, using the RQM mechanism, the user's quality of experience is improved while delay and jitter are kept low for the high-priority traffic classes. In particular, the AF goodput is enhanced by around 33% on average over the drop-tail scheme.
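To make the cross-layer signal concrete, the sketch below shows, under broad assumptions, how an adaptive IP scheduler of the kind described above might serve DiffServ queues within the bandwidth budget reported by the physical layer. The class names (EF, AF, BE) are standard DiffServ classes; everything else in the code is hypothetical and not taken from QoSatAr.

```python
"""Minimal sketch (not the authors' implementation) of an adaptive IP
scheduler in the spirit of QoSatAr: the physical layer reports the
currently available DVB-S2 bandwidth (which shrinks under rain fading
through adaptive coding and modulation), and the network-layer scheduler
serves the DiffServ queues in strict priority within that budget."""

from collections import deque

class AdaptiveIPScheduler:
    PRIORITY = ("EF", "AF", "BE")          # high to low priority

    def __init__(self):
        self.queues = {c: deque() for c in self.PRIORITY}

    def enqueue(self, traffic_class, packet_bytes):
        self.queues[traffic_class].append(packet_bytes)

    def service_round(self, available_bandwidth_bytes):
        """Serve queues in priority order, never exceeding the budget
        signalled by the physical layer (the cross-layer parameter)."""
        sent = {c: 0 for c in self.PRIORITY}
        budget = available_bandwidth_bytes
        for c in self.PRIORITY:
            q = self.queues[c]
            while q and q[0] <= budget:
                pkt = q.popleft()
                budget -= pkt
                sent[c] += pkt
        return sent

# Example: a rain fade reduces the budget, but EF is still fully served.
sched = AdaptiveIPScheduler()
for _ in range(5):
    sched.enqueue("EF", 1000)
    sched.enqueue("AF", 1500)
    sched.enqueue("BE", 1500)
print(sched.service_round(8000))   # {'EF': 5000, 'AF': 3000, 'BE': 0}
```

When the budget passed to service_round shrinks, lower-priority traffic is held back first while EF keeps its guarantee, which mirrors the load-control behaviour the abstract describes.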
A cross layer multi hop network architecture for wireless Ad Hoc networks
In this paper, a novel decentralized cross-layer multi-hop cooperative network architecture is presented. Our architecture involves the design of a simple yet efficient cooperative flooding scheme, two decentralized opportunistic cooperative forwarding mechanisms, as well as the Routing Enabled Cooperative Medium Access Control (RECOMAC) protocol, which spans and incorporates the physical, medium access control (MAC) and routing layers to improve the performance of multi-hop communication. The proposed architecture exploits randomized coding at the physical layer to realize cooperative diversity. Randomized coding alleviates the need for relay selection and actuation mechanisms, and therefore reduces coordination among the relays. The coded packets are forwarded via opportunistically formed cooperative sets within a region, without communication among the relays and without establishing a route in advance. In our architecture, routing-layer functionality is submerged into the MAC layer to provide seamless cooperative communication, while the messaging overhead to set up routes and to select and actuate relays is minimized. RECOMAC is shown to provide dramatic performance improvements, such as eight times higher throughput and ten times lower end-to-end delay as well as reduced overhead, compared to networks based on the well-known IEEE 802.11 and Ad hoc On-Demand Distance Vector (AODV) protocols.
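As a rough illustration of the opportunistic forwarding idea, the following sketch forms a cooperative set from the nodes that lie in the forwarding region toward the destination and happen to decode the packet; each forwards with an independently drawn code signature, so no relay selection or route setup is involved. The geometry and decode model are hypothetical, not RECOMAC's actual mechanism.

```python
"""Illustrative sketch of opportunistic cooperative forwarding: any in-range
node that advances the packet toward the destination and decodes it joins
the cooperative set and retransmits with its own randomized-coding signature."""

import math, random

random.seed(1)

def advances(src, dst, node):
    """Forwarding-region test: the node must be closer to the destination than the sender."""
    return math.dist(node, dst) < math.dist(src, dst)

def cooperative_hop(src, dst, nodes, radio_range=120.0, p_decode=0.7):
    """One cooperative hop: nodes that decode the packet form the cooperative
    set opportunistically and forward it concurrently (no relay selection,
    no pre-established route)."""
    coop_set = []
    for node in nodes:
        in_range = math.dist(src, node) <= radio_range
        if in_range and advances(src, dst, node) and random.random() < p_decode:
            signature = random.getrandbits(16)   # each relay picks its own code
            coop_set.append((node, signature))
    return coop_set

src, dst = (0.0, 0.0), (400.0, 0.0)
nodes = [(random.uniform(0, 400), random.uniform(-60, 60)) for _ in range(30)]
print(cooperative_hop(src, dst, nodes))          # the opportunistic cooperative set
```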
RECOMAC: a cross-layer cooperative network protocol for wireless ad hoc networks
A novel decentralized cross-layer multi-hop cooperative protocol, namely the Routing Enabled Cooperative Medium Access Control (RECOMAC) protocol, is proposed for wireless ad hoc networks. The protocol architecture makes use of cooperative forwarding methods, in which coded packets are forwarded via opportunistically formed cooperative sets within a region, as RECOMAC spans the physical, medium access control (MAC) and routing layers. Randomized coding is exploited at the physical layer to realize cooperative transmissions, and cooperative forwarding is implemented for routing functionality, which is submerged into the MAC layer, while the overhead for MAC and route set-up is minimized. RECOMAC is shown to provide dramatic performance improvements, achieving eight times the throughput and one tenth of the end-to-end delay of the conventional architecture in practical wireless mesh networks.
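The randomized-coding step itself can be pictured as each relay weighting the same coded block with a privately drawn coefficient, so the receiver observes a single random linear combination over independently fading paths with no coordination among relays. The numpy sketch below illustrates this in a simplified scalar form; it is not the paper's space-time coding scheme, and all parameters are made up.

```python
"""Simplified picture of randomized cooperative transmission: every relay in
the opportunistic set retransmits the same block weighted by its own random
complex coefficient; the receiver sees one combined random channel."""

import numpy as np

rng = np.random.default_rng(7)

def randomized_cooperative_tx(symbols, n_relays=4, snr_db=15.0):
    """Each relay applies a private random weight; channels fade independently."""
    weights = (rng.standard_normal(n_relays) + 1j * rng.standard_normal(n_relays)) / np.sqrt(2)
    fading = (rng.standard_normal(n_relays) + 1j * rng.standard_normal(n_relays)) / np.sqrt(2)
    effective = np.sum(weights * fading) / np.sqrt(n_relays)   # combined random channel
    noise_std = 10 ** (-snr_db / 20)
    noise = noise_std * (rng.standard_normal(len(symbols))
                         + 1j * rng.standard_normal(len(symbols))) / np.sqrt(2)
    received = effective * symbols + noise
    return received / effective                                # zero-forcing estimate

bpsk = np.array([1, -1, 1, 1, -1], dtype=complex)
print(np.sign(randomized_cooperative_tx(bpsk).real))           # hard-decision BPSK estimate
```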
Network layer security: Design for a cross layer architecture
Traditional modular layering schemes have played a major part in the development of a variety of protocols. However, as physical-layer impairments become more unpredictable, a cross-layer design (CLD), which is dynamic in nature, provides better performance. CLD introduces new challenges in protocol design as well as in the area of security. Using numerical analysis, we show that a link-layer design employing header compression and cross-layer signalling to protect protocol headers can limit packet discarding. This paper also reviews the IPsec protocol and describes how IPsec can be modified for a cross-layer architecture. © 2007 IEEE
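The back-of-the-envelope calculation below reproduces the flavour of that argument: if any bit error in an unprotected header forces a discard, then shrinking or protecting the header directly limits the discard rate. The numbers (a 40-byte IP/UDP/RTP header versus a few bytes after compression) are illustrative and not taken from the paper's analysis.

```python
"""Sketch of why header compression/protection limits packet discarding:
a packet is dropped whenever at least one header bit is corrupted."""

def header_discard_prob(bit_error_rate, header_bits):
    # Probability that the header contains at least one bit error.
    return 1.0 - (1.0 - bit_error_rate) ** header_bits

ber = 1e-4                                   # a noisy wireless link
full = header_discard_prob(ber, 40 * 8)      # uncompressed IP/UDP/RTP header
rohc = header_discard_prob(ber, 3 * 8)       # compressed header
print(f"uncompressed: {full:.3%}, compressed: {rohc:.3%}")
# roughly 3.1% vs 0.24%: header compression alone cuts discards by about 13x
```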
DDLSTM: Dual-Domain LSTM for Cross-Dataset Action Recognition
Domain alignment in convolutional networks aims to learn the degree of layer-specific feature alignment beneficial to the joint learning of source and target datasets. While increasingly popular in convolutional networks, there have been no previous attempts to achieve domain alignment in recurrent networks. Similar to spatial features, both source and target domains are likely to exhibit temporal dependencies that can be jointly learnt and aligned. In this paper we introduce Dual-Domain LSTM (DDLSTM), an architecture that is able to learn temporal dependencies from two domains concurrently. It performs cross-contaminated batch normalisation on both input-to-hidden and hidden-to-hidden weights, and learns the parameters for cross-contamination, for both single-layer and multi-layer LSTM architectures. We evaluate DDLSTM on frame-level action recognition using three datasets, taking a pair at a time, and report an average increase in accuracy of 3.5%. The proposed DDLSTM architecture outperforms standard, fine-tuned, and batch-normalised LSTMs. Comment: To appear in CVPR 2019
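A rough reading of cross-contaminated batch normalisation is that statistics are computed per domain and each domain normalises with a learned mixture of its own and the other domain's statistics. The sketch below illustrates that reading on a single pre-activation tensor; DDLSTM applies it to the input-to-hidden and hidden-to-hidden pre-activations of an LSTM, and the mixing weight alpha here is a hypothetical stand-in for the learned cross-contamination parameters, not the paper's code.

```python
"""Sketch of cross-contaminated batch normalisation for two domains."""

import numpy as np

def cross_contaminated_bn(x_src, x_tgt, alpha=0.3, gamma=1.0, beta=0.0, eps=1e-5):
    """alpha = share of the *other* domain's statistics used by each domain."""
    stats = lambda x: (x.mean(axis=0), x.var(axis=0))
    mu_s, var_s = stats(x_src)
    mu_t, var_t = stats(x_tgt)

    # Each domain mixes its own statistics with the other domain's.
    mu_for_src = (1 - alpha) * mu_s + alpha * mu_t
    var_for_src = (1 - alpha) * var_s + alpha * var_t
    mu_for_tgt = (1 - alpha) * mu_t + alpha * mu_s
    var_for_tgt = (1 - alpha) * var_t + alpha * var_s

    norm = lambda x, mu, var: gamma * (x - mu) / np.sqrt(var + eps) + beta
    return norm(x_src, mu_for_src, var_for_src), norm(x_tgt, mu_for_tgt, var_for_tgt)

# Toy batch: 8 source and 8 target feature vectors of width 4.
rng = np.random.default_rng(0)
src, tgt = rng.normal(0, 1, (8, 4)), rng.normal(2, 3, (8, 4))
print([a.shape for a in cross_contaminated_bn(src, tgt)])   # [(8, 4), (8, 4)]
```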
Cross-Layer Designs in Coded Wireless Fading Networks with Multicast
A cross-layer design along with an optimal resource allocation framework is formulated for wireless fading networks, where the nodes are allowed to perform network coding. The aim is to jointly optimize end-to-end transport layer rates, network code design variables, broadcast link flows, link capacities, average power consumption, and short-term power allocation policies. As in the routing paradigm where nodes simply forward packets, the cross-layer optimization problem with network coding is non-convex in general. It is proved, however, that with network coding, dual decomposition for multicast is optimal so long as the fading at each wireless link is a continuous random variable. This lends itself to provably convergent subgradient algorithms, which not only admit a layered-architecture interpretation but also optimally integrate network coding in the protocol stack. The dual algorithm is also paired with a scheme that yields near-optimal network design variables, namely multicast end-to-end rates, network code design quantities, flows over the broadcast links, link capacities, and average power consumption. Finally, an asynchronous subgradient method is developed, whereby the dual updates at the physical layer can be affordably performed with a certain delay with respect to the resource allocation tasks in upper layers. This attractive feature is motivated by the complexity of the physical layer subproblem, and is an adaptation of the subgradient method suitable for network control. Comment: Accepted in IEEE/ACM Transactions on Networking; revision pending
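For readers unfamiliar with dual decomposition, the toy below shows the subgradient machinery on a single shared link (maximise the sum of log-rates subject to a capacity constraint): the link price is the dual variable, the primal rates follow from the price in closed form, and the price is updated with a projected subgradient step. It is a didactic stand-in, not the paper's multicast/network-coding formulation.

```python
"""Dual subgradient method on a toy network utility maximisation problem:
maximise sum_i log(r_i) subject to sum_i r_i <= c on one shared link."""

def dual_subgradient(capacity=10.0, n_flows=3, step=0.05, iters=500):
    price = 1.0                                        # dual variable (link price)
    for _ in range(iters):
        rates = [1.0 / price for _ in range(n_flows)]  # argmax_r log(r) - price*r
        subgrad = capacity - sum(rates)                # subgradient of the dual
        price = max(1e-6, price - step * subgrad)      # projected subgradient step
    return price, rates

price, rates = dual_subgradient()
print(price, rates)   # converges near price = n/c = 0.3, each rate near c/n = 3.33
```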
Spatio-Temporal Deep Learning Models for Tip Force Estimation During Needle Insertion
Purpose. Precise placement of needles is a challenge in a number of clinical applications such as brachytherapy or biopsy. Forces acting at the needle cause tissue deformation and needle deflection which in turn may lead to misplacement or injury. Hence, a number of approaches to estimate the forces at the needle have been proposed. Yet, integrating sensors into the needle tip is challenging and a careful calibration is required to obtain good force estimates. Methods. We describe a fiber-optical needle tip force sensor design using a single OCT fiber for measurement. The fiber images the deformation of an epoxy layer placed below the needle tip, which results in a stream of 1D depth profiles. We study different deep learning approaches to facilitate calibration between this spatio-temporal image data and the related forces. In particular, we propose a novel convGRU-CNN architecture for simultaneous spatial and temporal data processing. Results. The needle can be adapted to different operating ranges by changing the stiffness of the epoxy layer. Likewise, calibration can be adapted by training the deep learning models. Our novel convGRU-CNN architecture results in the lowest mean absolute error of 1.59 ± 1.3 mN and a cross-correlation coefficient of 0.9997, and clearly outperforms the other methods. Ex vivo experiments in human prostate tissue demonstrate the needle's application. Conclusions. Our OCT-based fiber-optical sensor presents a viable alternative for needle tip force estimation. The results indicate that the rich spatio-temporal information included in the stream of images showing the deformation throughout the epoxy layer can be effectively used by deep learning models. Particularly, we demonstrate that the convGRU-CNN architecture performs favorably, making it a promising approach for other spatio-temporal learning problems. Comment: Accepted for publication in the International Journal of Computer Assisted Radiology and Surgery
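As an architectural outline only, the PyTorch sketch below encodes each 1D OCT depth profile with a small CNN, aggregates the sequence with a GRU, and regresses a scalar force. The paper's model uses convolutional GRU cells, so this plain-GRU-over-CNN-features stand-in and all layer sizes are assumptions, not the authors' implementation.

```python
"""Simplified spatio-temporal force-regression sketch (CNN encoder + GRU)."""

import torch
import torch.nn as nn

class TipForceNet(nn.Module):
    def __init__(self, profile_len=64, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(              # per-frame 1D CNN encoder
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.gru = nn.GRU(input_size=16, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)           # scalar force estimate

    def forward(self, x):                          # x: (batch, time, profile_len)
        b, t, n = x.shape
        feats = self.encoder(x.reshape(b * t, 1, n)).reshape(b, t, 16)
        _, h = self.gru(feats)                     # temporal aggregation
        return self.head(h[-1])                    # force for the last frame

model = TipForceNet()
profiles = torch.randn(4, 20, 64)                  # 4 sequences of 20 depth profiles
print(model(profiles).shape)                       # torch.Size([4, 1])
```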
Controlling Concurrent Change - A Multiview Approach Toward Updatable Vehicle Automation Systems
The development of SAE Level 3+ vehicles [SAE, 2014] poses new challenges not only for functional development, but also for design and development processes. Such systems consist of a growing number of interconnected functional as well as hardware and software components, making safety design increasingly difficult. In order to cope with emergent behavior at the vehicle level, thorough systems engineering becomes a key requirement, which enables traceability between different design viewpoints. Ensuring traceability is a key factor towards efficient validation and verification of such systems. Formal models can in turn assist in keeping track of how the different viewpoints relate to each other and how the interplay of components affects the overall system behavior. Based on experience from the project Controlling Concurrent Change, this paper presents an approach towards model-based integration and verification of a cause-effect chain for a component-based vehicle automation system. It reasons on a cross-layer model of the resulting system, which covers the necessary aspects of a design in individual architectural views, e.g., safety and timing. In the synthesis stage of integration, our approach is capable of inserting enforcement mechanisms into the design to ensure adherence to the model. We present a use case description for an environment perception system, starting with a functional architecture, which is the basis for componentization of the cause-effect chain. By tying the vehicle architecture to the cross-layer integration model, we are able to map the reasoning done during verification to vehicle behavior.
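As a hypothetical illustration of what such cross-layer reasoning can look like in the timing view, the sketch below models a cause-effect chain as components with declared worst-case latencies, checks the end-to-end budget, and derives one runtime budget monitor per component as the enforcement mechanism inserted at synthesis time. All component names and numbers are invented.

```python
"""Toy cross-layer timing check plus enforcement synthesis for a cause-effect chain."""

from dataclasses import dataclass

@dataclass
class Component:
    name: str
    wcet_ms: float          # declared worst-case latency in the timing view

CHAIN = [Component("camera_driver", 8.0),
         Component("object_detection", 35.0),
         Component("sensor_fusion", 12.0),
         Component("trajectory_planner", 20.0)]

E2E_BUDGET_MS = 100.0

def verify_chain(chain, budget_ms):
    """End-to-end latency of the chain must fit within the requirement."""
    total = sum(c.wcet_ms for c in chain)
    return total <= budget_ms, total

def synthesize_enforcers(chain):
    # One runtime budget monitor per component, parameterised from the model.
    return {c.name: {"monitor": "latency_watchdog", "limit_ms": c.wcet_ms}
            for c in chain}

ok, total = verify_chain(CHAIN, E2E_BUDGET_MS)
print(f"end-to-end latency {total} ms within budget: {ok}")
print(synthesize_enforcers(CHAIN)["object_detection"])
```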
