8 research outputs found

    Real-Time Monitoring of Video Quality in IP Networks

    This paper investigates the problem of assessing the quality of video transmitted over IP networks. Our goal is to develop a methodology that is both reasonably accurate and simple enough to support the large-scale deployments that the increasing use of video over IP is likely to demand. For that purpose, we focus on developing an approach that is capable of mapping network statistics, e.g., packet losses, available from simple measurements, to the quality of video sequences reconstructed by receivers. A first step in that direction is a loss-distortion model that accounts for the impact of network losses on video quality, as a function of application-specific parameters such as video codec, loss recovery technique, coded bit rate, packetization, video characteristics, etc. The model, although accurate, is poorly suited to large-scale, on-line monitoring because of its dependency on parameters that are difficult to estimate in real time. As a result, we introduce a relative quality metric (rPSNR) that bypasses this problem by measuring video quality against a quality benchmark that the network is expected to provide. The approach offers a lightweight video quality monitoring solution that is suitable for large-scale deployments. We assess its feasibility and accuracy through extensive simulations and experiments.
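    A minimal sketch of the relative-quality idea in Python: it assumes a purely illustrative loss-distortion model (the sensitivity and base-MSE parameters below are hypothetical, not the paper's), and defines rPSNR as the PSNR under the observed loss rate minus the PSNR under the loss rate the network is expected to provide.

```python
import math

def psnr_from_mse(mse: float, peak: float = 255.0) -> float:
    """Standard PSNR in dB for a given mean squared error."""
    return 10.0 * math.log10(peak ** 2 / mse)

def loss_distortion(loss_rate: float, sensitivity: float, base_mse: float) -> float:
    """Illustrative loss-distortion model: distortion grows roughly linearly
    with the packet loss rate, scaled by a codec/sequence dependent
    sensitivity term (both values are assumptions, not the paper's model)."""
    return base_mse * (1.0 + sensitivity * loss_rate)

def rpsnr(observed_loss: float, benchmark_loss: float,
          sensitivity: float = 50.0, base_mse: float = 30.0) -> float:
    """Relative quality: PSNR under the observed losses minus PSNR under
    the loss level the network is expected to provide (the benchmark)."""
    q_observed = psnr_from_mse(loss_distortion(observed_loss, sensitivity, base_mse))
    q_benchmark = psnr_from_mse(loss_distortion(benchmark_loss, sensitivity, base_mse))
    return q_observed - q_benchmark

if __name__ == "__main__":
    # Negative values mean delivered quality falls short of the benchmark.
    print(f"rPSNR = {rpsnr(observed_loss=0.02, benchmark_loss=0.005):.2f} dB")
```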

    Minimum cost mirror sites using network coding: Replication vs. coding at the source nodes

    Content distribution over networks is often achieved by using mirror sites that hold copies of files or portions thereof to avoid congestion and delay issues arising from excessive demands to a single location. Accordingly, there are distributed storage solutions that divide the file into pieces and place copies of the pieces (replication) or coded versions of the pieces (coding) at multiple source nodes. We consider a network which uses network coding for multicasting the file. There is a set of source nodes that contains either subsets or coded versions of the pieces of the file. The cost of a given storage solution is defined as the sum of the storage cost and the cost of the flows required to support the multicast. Our interest is in finding the storage capacities and flows at minimum combined cost. We formulate the corresponding optimization problems by using the theory of information measures. In particular, we show that when there are two source nodes, there is no loss in considering subset sources. For three source nodes, we derive a tight upper bound on the cost gap between the coded and uncoded cases. We also present algorithms for determining the content of the source nodes. Comment: IEEE Trans. on Information Theory (to appear), 201
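    A toy illustration of the combined storage-plus-flow cost objective described above, under assumptions not taken from the paper: a hypothetical five-node topology with unit-cost edges, subset (replication) placements only, and flow cost approximated by shortest-path hops, so the coding gains analysed in the paper are ignored.

```python
from itertools import product
from collections import deque

# Hypothetical topology: adjacency lists over unit-cost, undirected edges.
GRAPH = {
    "s1": ["a"], "s2": ["b"],
    "a": ["s1", "b", "t1"], "b": ["s2", "a", "t2"],
    "t1": ["a"], "t2": ["b"],
}
SOURCES, SINKS, PIECES = ["s1", "s2"], ["t1", "t2"], ["p1", "p2"]
STORAGE_COST = 1.0   # assumed cost per stored piece
FLOW_COST = 0.5      # assumed cost per piece per hop

def hops(src: str, dst: str) -> int:
    """Breadth-first hop count on the toy graph."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        for nxt in GRAPH[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    raise ValueError("disconnected")

def combined_cost(placement: dict) -> float:
    """Storage cost plus (approximate) flow cost: each sink pulls every
    piece from the cheapest source holding it."""
    storage = STORAGE_COST * sum(len(p) for p in placement.values())
    flow = 0.0
    for sink in SINKS:
        for piece in PIECES:
            holders = [s for s in SOURCES if piece in placement[s]]
            if not holders:
                return float("inf")   # infeasible: piece stored nowhere
            flow += FLOW_COST * min(hops(s, sink) for s in holders)
    return storage + flow

# Brute-force search over all subset (replication) placements.
subsets = [frozenset(c) for c in [(), ("p1",), ("p2",), ("p1", "p2")]]
best = min((dict(zip(SOURCES, combo)) for combo in product(subsets, repeat=2)),
           key=combined_cost)
print(best, combined_cost(best))
```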

    TRIDNT: The Trust-Based Routing Protocol with Controlled Degree of Node Selfishness for MANET


    Comparative study of the scientific publications of UPC and the ETSETB vs. other universities (2006-2016)

    The report focuses on scientific publication in the subject area covered by the ETSETB: telecommunications engineering and electronics. Bibliometric indicators for UPC and the ETSETB are compared with those of other national, European and international universities with notable research activity in the field of telecommunications and electronics. Postprint (published version)

    Improving Multicast Communications Over Wireless Mesh Networks

    In wireless mesh networks (WMNs), the traditional approach to shortest path tree based multicasting is to cater for the needs of the poorest performing node, i.e., the maximum permitted multicast line rate is limited to the lowest line rate used by the individual Child nodes on a branch. In general, this means fixing the line rate to its minimum value and fixing the transmit power to its maximum permitted value. This simplistic approach of applying a single multicast rate for all nodes in the multicast group results in a sub-optimal trade-off between the mean network throughput and coverage area that does not allow high-bandwidth multimedia applications to be supported. By relaxing this constraint and allowing multiple line rates to be used, the mean network throughput can be improved. This thesis presents two methods that aim to increase the mean network throughput through the use of multiple line rates by the forwarding nodes. This is achieved by identifying the Child nodes responsible for reducing the multicast group rate. The first method identifies specific locations for the placement of relay nodes, which allows higher multicast branch line rates to be used. The second method uses a power control algorithm to tune the transmit power to allow higher multicast branch line rates. The use of power control also helps to reduce the interference caused to neighbouring nodes. Through extensive computer simulation it is shown that these two methods can lead to a four-fold gain in the mean network throughput under typical WMN operating conditions compared with the single line rate case.
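    A small sketch of the limiting-child and power-control idea in Python. The SNR thresholds, noise floor, and per-child path losses are all assumptions for illustration (802.11-style rates, not the thesis's parameters): the branch rate is set by the worst-placed child, and transmit power is raised only as far as needed to reach a target branch rate.

```python
# Hypothetical SNR (dB) thresholds required for each line rate (Mb/s).
RATE_THRESHOLDS = [(6, 5.0), (12, 8.0), (24, 14.0), (48, 20.0), (54, 23.0)]
NOISE_DBM = -90.0  # assumed noise floor

def snr_at_child(tx_power_dbm: float, path_loss_db: float) -> float:
    return tx_power_dbm - path_loss_db - NOISE_DBM

def best_rate(snr_db: float) -> int:
    """Highest line rate whose SNR threshold is met (0 if none)."""
    return max((rate for rate, thr in RATE_THRESHOLDS if snr_db >= thr), default=0)

def branch_rate(tx_power_dbm: float, child_path_loss: dict) -> tuple:
    """The multicast branch rate is limited by the worst-placed child;
    return that rate together with the limiting child."""
    rates = {c: best_rate(snr_at_child(tx_power_dbm, pl))
             for c, pl in child_path_loss.items()}
    limiting = min(rates, key=rates.get)
    return rates[limiting], limiting

def tune_power(child_path_loss: dict, target_rate: int,
               p_min: float = 0.0, p_max: float = 20.0, step: float = 1.0):
    """Raise transmit power just enough (never above p_max) so every child
    supports the target branch rate; returns None if that is impossible."""
    power = p_min
    while power <= p_max:
        if branch_rate(power, child_path_loss)[0] >= target_rate:
            return power
        power += step
    return None

children = {"c1": 88.0, "c2": 92.0, "c3": 95.0}  # assumed path losses in dB
print(branch_rate(15.0, children))        # branch rate limited by the poorest child
print(tune_power(children, target_rate=24))  # smallest power reaching 24 Mb/s
```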

    Defense and traceback mechanisms in opportunistic wireless networks

    In this thesis, we have identified a novel attack in OppNets, a special type of packet dropping attack where the malicious node(s) drops one or more packets (not all the packets) and then injects new fake packets instead. We name this novel attack the Catabolism attack and propose a novel attack detection and traceback approach against it, referred to as the Anabolism defence. As part of the Anabolism defence we propose three techniques for attack detection and malicious node traceback: time-based, Merkle-tree-based and hash-chain-based. We provide mathematical models showing that our detection and traceback mechanisms are effective, and detailed simulation results show that our defence mechanisms achieve a very high accuracy and detection rate.
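    A minimal sketch in the spirit of the hash-chain technique mentioned above; the shared seed, tag layout, and per-packet chaining are assumptions for illustration, not the thesis's exact construction. A drop-and-inject replacement breaks the chain at the point of tampering, which is what lets the receiver localise the misbehaviour.

```python
import hashlib

def chain(packets, seed=b"shared-seed"):
    """Sender side: each packet's tag is the hash of (previous tag || payload),
    so replacing any packet invalidates its tag and every later one."""
    tags, prev = [], seed
    for payload in packets:
        prev = hashlib.sha256(prev + payload).digest()
        tags.append(prev)
    return tags

def first_tampered(packets, tags, seed=b"shared-seed"):
    """Receiver side: recompute the chain and report the first index whose
    tag no longer matches, i.e. where a fake packet was injected."""
    prev = seed
    for i, (payload, tag) in enumerate(zip(packets, tags)):
        prev = hashlib.sha256(prev + payload).digest()
        if prev != tag:
            return i
    return None

sent = [b"pkt-0", b"pkt-1", b"pkt-2", b"pkt-3"]
tags = chain(sent)

received = list(sent)
received[2] = b"fake-pkt"               # malicious node drops pkt-2, injects a fake
print(first_tampered(received, tags))   # -> 2
```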