
    μNap: Practical Micro-Sleeps for 802.11 WLANs

    In this paper, we revisit the idea of putting interfaces to sleep during 'packet overhearing' (i.e., when there are ongoing transmissions addressed to other stations) from a practical standpoint. To this end, we perform a robust experimental characterisation of the timing and consumption behaviour of a commercial 802.11 card. We design μNap, a local, standard-compliant energy-saving mechanism that leverages micro-sleep opportunities inherent to the CSMA operation of 802.11 WLANs. This mechanism is backwards compatible and incrementally deployable, and takes into account the timing limitations of existing hardware, as well as practical CSMA-related issues (e.g., capture effect). According to the performance assessment carried out through trace-based simulation, the use of our scheme would result in a 57% reduction in the time spent in overhearing, thus leading to an energy saving of 15.8% of the activity time. Comment: 15 pages, 12 figures
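    To make the mechanism concrete, the toy decision rule below naps through an overheard frame only when the residual airtime exceeds the radio's sleep/wake transition cost. This is a minimal sketch with assumed timing constants and made-up function names, not the paper's interface or its measured values.

        # Hypothetical sketch of a micro-sleep decision: sleep through an overheard
        # frame only if the residual airtime exceeds the sleep/wake transition
        # overhead. All timing constants are illustrative assumptions.

        SLEEP_ENTER_US = 200   # assumed time to power the interface down
        WAKE_UP_US = 250       # assumed time to power it back up and resynchronise

        def residual_airtime_us(frame_duration_us: float, header_decode_us: float) -> float:
            """Airtime left once the MAC header of an overheard frame has been decoded."""
            return frame_duration_us - header_decode_us

        def should_micro_sleep(frame_duration_us: float,
                               header_decode_us: float,
                               addressed_to_me: bool) -> bool:
            """Decide whether napping for the rest of an overheard frame saves energy."""
            if addressed_to_me:
                return False  # the frame is for this station: keep receiving
            residual = residual_airtime_us(frame_duration_us, header_decode_us)
            # Only worthwhile if the nap outlasts the transition overhead
            return residual > (SLEEP_ENTER_US + WAKE_UP_US)

        if __name__ == "__main__":
            # A long frame leaves ample time to nap; a short one does not.
            print(should_micro_sleep(2000, 50, addressed_to_me=False))  # True
            print(should_micro_sleep(300, 50, addressed_to_me=False))   # False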

    Survey of timing/synchronization of operating wideband digital communications networks

    In order to benefit from experience gained from the synchronization of operational wideband digital networks, a survey was made of three such systems: Data Transmission Company, Western Union Telegraph Company, and the Computer Communications Group of the Trans-Canada Telephone System. The focus of the survey was on deployment and operational experience from a practical (as opposed to theoretical) viewpoint. The objective was to report on the results of deployment: how the systems performed, and where the performance differed from that predicted or intended in the design. The survey also attempted to determine how the various system designers would use the benefit of hindsight if they could design those same systems today.

    FPGA Implementation of Convolutional Neural Networks with Fixed-Point Calculations

    Neural network-based methods for image processing are becoming widely used in practical applications. Modern neural networks are computationally expensive and require specialized hardware, such as graphics processing units. Since such hardware is not always available in real-life applications, there is a compelling need for the design of neural networks for mobile devices. Mobile neural networks typically have a reduced number of parameters and require a relatively small number of arithmetic operations. However, they are usually still executed at the software level and use floating-point calculations. The use of mobile networks without further optimization may not provide sufficient performance when high processing speed is required, for example, in real-time video processing (30 frames per second). In this study, we suggest optimizations to speed up computations in order to efficiently use already trained neural networks on a mobile device. Specifically, we propose an approach for speeding up neural networks by moving computation from software to hardware and by using fixed-point calculations instead of floating-point. We propose a number of methods for neural network architecture design to improve the performance with fixed-point calculations. We also show an example of how existing datasets can be modified and adapted for the recognition task at hand. Finally, we present the design and the implementation of an FPGA-based device to solve the practical problem of real-time handwritten digit classification from a mobile camera video feed.
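    The gist of trading floating-point for fixed-point arithmetic can be illustrated with a toy Q-format dot product: weights and activations are scaled to integers, multiply-accumulates run in integer arithmetic, and the result is rescaled once at the end. The sketch below only illustrates this general idea under an assumed 8-bit fractional scale; it is not the paper's FPGA implementation, and the names are made up.

        # Toy fixed-point dot product with an assumed Q*.8 scale (8 fractional bits).

        FRAC_BITS = 8  # assumed number of fractional bits

        def to_fixed(x: float) -> int:
            """Quantize a float to a fixed-point integer with FRAC_BITS fractional bits."""
            return int(round(x * (1 << FRAC_BITS)))

        def fixed_dot(weights, activations) -> float:
            """Dot product computed entirely in integer arithmetic, then rescaled."""
            acc = 0
            for w, a in zip(weights, activations):
                acc += to_fixed(w) * to_fixed(a)          # integer multiply-accumulate
            return acc / float(1 << (2 * FRAC_BITS))      # undo both scale factors

        if __name__ == "__main__":
            w = [0.25, -0.5, 0.125]
            a = [1.0, 2.0, 4.0]
            print(fixed_dot(w, a))                        # -0.25, matching the float result
            print(sum(wi * ai for wi, ai in zip(w, a)))   # float reference: -0.25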

    Impact of topology on layer 2 switched QoS sensitive services

    High-bandwidth QoS-sensitive services such as large-scale video surveillance generally depend on provisioned capacity delivered by circuit-switched technology such as SONET/SDH. Yet developments in layer 2 protocol sets and manageability extensions to Ethernet standards position layer 2 packet switching technology as a viable, cheaper alternative to SONET/SDH. Layer 2 switched networks traditionally offer more complex topologies; in this paper we explain general QoS issues with layer 2 switching and show the impact of topology choice on service performance.
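    As a rough illustration of why topology matters for such services, the back-of-the-envelope sketch below (not taken from the paper) estimates per-frame latency over a chain of store-and-forward switches; the link speed, frame size, and per-hop queueing delay are assumed values.

        # Back-of-the-envelope estimate: each store-and-forward hop adds at least
        # the frame serialization time plus some queueing delay, so deeper
        # topologies cost more latency. All constants are assumptions.

        FRAME_BITS = 1500 * 8          # assumed full-size Ethernet frame
        LINK_BPS = 1_000_000_000       # assumed gigabit links
        QUEUE_DELAY_US = 20.0          # assumed average queueing delay per hop

        def path_latency_us(hops: int) -> float:
            """Rough per-frame latency over a chain of store-and-forward switches."""
            serialization_us = FRAME_BITS / LINK_BPS * 1e6
            return hops * (serialization_us + QUEUE_DELAY_US)

        if __name__ == "__main__":
            for hops in (2, 5, 10):    # e.g. flat star vs. deeper tree vs. daisy chain
                print(f"{hops} hops: ~{path_latency_us(hops):.1f} us per frame")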

    Ubiquitous Cell-Free Massive MIMO Communications

    Since the first cellular networks were trialled in the 1970s, we have witnessed an incredible wireless revolution. From 1G to 4G, the massive traffic growth has been managed by a combination of wider bandwidths, refined radio interfaces, and network densification, namely increasing the number of antennas per site. Due to its cost-efficiency, the latter has contributed the most. Massive MIMO (multiple-input multiple-output) is a key 5G technology that uses massive antenna arrays to provide very high beamforming gain and spatial multiplexing of users, and hence increases the spectral and energy efficiency. It constitutes a centralized solution to densify a network, and its performance is limited by the inter-cell interference inherent in its cell-centric design. Conversely, ubiquitous cell-free Massive MIMO refers to a distributed Massive MIMO system implementing coherent user-centric transmission to overcome the inter-cell interference limitation in cellular networks and provide additional macro-diversity. These features, combined with the system scalability inherent in the Massive MIMO design, distinguish ubiquitous cell-free Massive MIMO from prior coordinated distributed wireless systems. In this article, we investigate the enormous potential of this promising technology while addressing practical deployment issues to deal with the increased back/front-hauling overhead deriving from the signal co-processing. Comment: Published in EURASIP Journal on Wireless Communications and Networking on August 5, 201
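    The coherent, user-centric transmission idea can be made concrete with a small numerical sketch: distributed single-antenna access points apply conjugate (maximum-ratio) precoding toward each user, and the resulting per-user SINRs are computed. The parameters and the i.i.d. Rayleigh channel with perfect CSI are assumptions chosen for illustration, not the system model of the article.

        import numpy as np

        # Toy cell-free downlink: L single-antenna APs jointly serve K users with
        # unit-norm conjugate (maximum-ratio) precoding. All values are assumed.
        rng = np.random.default_rng(0)
        L, K = 16, 4                 # assumed numbers of access points and users
        p, noise = 1.0, 0.1          # assumed per-user transmit power and noise power

        # i.i.d. Rayleigh fading channels from every AP to every user (perfect CSI)
        H = (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))) / np.sqrt(2)

        # One unit-norm conjugate precoder per user, spanning all APs coherently
        W = np.conj(H) / np.linalg.norm(H, axis=0, keepdims=True)

        for k in range(K):
            gains = H[:, k] @ W                      # effective gain of each precoder at user k
            desired = p * np.abs(gains[k]) ** 2
            interference = p * np.sum(np.abs(np.delete(gains, k)) ** 2)
            sinr = desired / (interference + noise)
            print(f"user {k}: SINR ≈ {10 * np.log10(sinr):.1f} dB")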