
    Applications of Soft Computing in Mobile and Wireless Communications

    Soft computing is a synergistic combination of artificial intelligence methodologies for modeling and solving real-world problems that are either impossible or too difficult to model mathematically. Conventional modeling techniques demand rigor, precision and certainty, which carry a computational cost. Soft computing, by contrast, exploits tolerance for imprecision, uncertainty, partial truth and approximation in its computation, reasoning and inference, thereby reducing computational cost. Beyond these savings, soft computing is an excellent platform for autonomic computing, owing to its roots in artificial intelligence. Wireless communication networks involve considerable uncertainty and imprecision due to a number of stochastic processes, such as the escalating number of access points, constantly changing propagation channels, sudden variations in network load and the random mobility of users. This reality has fuelled numerous applications of soft computing techniques in mobile and wireless communications. This paper reviews applications of the core soft computing methodologies in mobile and wireless communications.
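    As a concrete illustration of the kind of technique such a review covers (this example is mine, not the paper's): a tiny Sugeno-style fuzzy controller that turns two imprecise inputs, signal strength and access-point load, into a handover decision. All membership functions and rule weights below are invented for illustration.

```python
# Illustrative sketch (not from the paper): a tiny Sugeno-style fuzzy
# controller deciding whether to hand a mobile user over to a new access
# point, based on imprecise inputs -- signal strength and AP load.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def handover_score(rssi_dbm, load_pct):
    # Fuzzify the crisp inputs.
    weak   = tri(rssi_dbm, -100, -85, -70)
    strong = tri(rssi_dbm, -80, -60, -40)
    light  = tri(load_pct, 0, 20, 60)
    heavy  = tri(load_pct, 40, 80, 100)

    # Rule base: (firing strength, crisp consequent in [0, 1]).
    rules = [
        (min(weak, heavy),   0.9),   # weak signal, busy AP -> hand over
        (min(weak, light),   0.6),   # weak signal, idle AP -> probably
        (min(strong, heavy), 0.4),   # strong signal, busy  -> maybe
        (min(strong, light), 0.1),   # strong signal, idle  -> stay
    ]
    # Weighted-average defuzzification.
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(handover_score(-88, 75))  # weak signal on a loaded cell -> 0.9
```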

    A 64-point Fourier transform chip for high-speed wireless LAN application using OFDM

    In this article, we present a novel fixed-point 16-bit word-width 64-point FFT/IFFT processor developed primarily for application in the OFDM-based IEEE 802.11a Wireless LAN (WLAN) baseband processor. The 64-point FFT is realized by decomposing it into a 2-D structure of 8-point FFTs. This approach reduces the number of required complex multiplications compared to the conventional radix-2 64-point FFT algorithm. The complex multiplications are realized using shift-and-add operations, so the processor uses no 2-input digital multiplier, nor any RAM or ROM for internal storage of coefficients. The proposed 64-point FFT/IFFT processor has been fabricated and tested successfully using our in-house 0.25 µm BiCMOS technology. The core area of the chip is 6.8 mm². The average dynamic power consumption is 41 mW at a 20 MHz operating frequency and 1.8 V supply voltage. The processor completes one parallel-to-parallel (i.e., all input data available in parallel and all output data generated in parallel) 64-point FFT computation in 23 cycles. These features show that, though developed primarily for the IEEE 802.11a standard, it can be used in any application requiring fast operation as well as low power consumption.
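    The 8×8 decomposition the abstract describes is the standard Cooley-Tukey factorization of N = 64 into N₁ × N₂ = 8 × 8. A minimal NumPy sketch of that dataflow follows (floating point, using library FFTs for the 8-point stages; the chip's fixed-point, multiplier-less datapath is not modeled here):

```python
import numpy as np

def fft64_via_8x8(x):
    """64-point FFT by the Cooley-Tukey 8x8 decomposition: two stages of
    8-point FFTs with one twiddle-factor multiplication in between."""
    assert len(x) == 64
    # Stage 1: view the input as an 8x8 array, x[8*n1 + n2] -> a[n1, n2],
    # and take 8-point FFTs down each column (over n1).
    a = np.asarray(x, dtype=complex).reshape(8, 8)   # a[n1, n2]
    s = np.fft.fft(a, axis=0)                        # s[k1, n2]
    # Twiddle factors W_64^(n2*k1) between the two stages.
    k1 = np.arange(8).reshape(8, 1)
    n2 = np.arange(8).reshape(1, 8)
    s *= np.exp(-2j * np.pi * k1 * n2 / 64)
    # Stage 2: 8-point FFTs along the rows (over n2); the output index is
    # X[8*k2 + k1], hence the transpose before flattening.
    X = np.fft.fft(s, axis=1)                        # X[k1, k2]
    return X.T.reshape(64)

x = np.random.randn(64) + 1j * np.random.randn(64)
print(np.allclose(fft64_via_8x8(x), np.fft.fft(x)))  # True
```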

    A brief network analysis of Artificial Intelligence publication

    In this paper, we present an illustrated history of Artificial Intelligence (AI) through a statistical analysis of publications since 1940. We collected and mined the IEEE publication database to analyze the geographical and chronological variation in the activeness of AI research. The connections between different institutes are shown. The results show that the leading communities of AI research are mainly in the USA, China, Europe and Japan. The key institutes, authors and research hotspots are revealed. We find that the research institutes in fields such as Data Mining, Computer Vision, Pattern Recognition and other fields of Machine Learning are quite consistent, implying strong interaction between the communities of these fields. It is also shown that research in Electronic Engineering and in industrial or commercial applications is very active in California, and that Japan publishes many papers in robotics. Owing to the limitations of the data source, the results may be overly influenced by raw publication counts; we mitigate this as best we can by applying network key-node analysis to the research community instead of merely counting publications.
    Comment: 18 pages, 7 figures
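    The "network key-node analysis" mentioned at the end can be illustrated in a few lines of networkx; the graph below is a made-up toy, not the paper's data. Centrality scores rank nodes by structural importance rather than by raw publication counts:

```python
# Toy sketch of key-node analysis: rank institutes in a collaboration
# graph by betweenness centrality, which rewards nodes bridging
# communities rather than those with the most publications.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("MIT", "Stanford"), ("MIT", "Tsinghua"), ("Stanford", "Tsinghua"),
    ("Tsinghua", "Univ. of Tokyo"), ("Univ. of Tokyo", "ETH Zurich"),
    ("ETH Zurich", "MIT"), ("Stanford", "CMU"), ("CMU", "MIT"),
])

for node, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node:16s} {score:.3f}")
```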

    Modeling the Internet of Things: a simulation perspective

    This paper deals with the problem of properly simulating the Internet of Things (IoT). Simulating an IoT allows evaluating strategies that can be employed to deploy smart services over different kinds of territories. However, the heterogeneity of scenarios seriously complicates this task and imposes the use of sophisticated modeling and simulation techniques. We discuss novel approaches for the provision of scalable simulation scenarios that enable the real-time execution of massively populated IoT environments. Attention is given to novel hybrid and multi-level simulation techniques that, when combined with agent-based, adaptive Parallel and Distributed Simulation (PADS) approaches, provide the means to perform highly detailed simulations on demand. To support this claim, we detail a use case concerned with the simulation of vehicular transportation systems.
    Comment: Proceedings of the IEEE 2017 International Conference on High Performance Computing and Simulation (HPCS 2017)
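    To make the multi-level idea concrete, here is a toy sketch (my own, far simpler than the paper's PADS machinery): a cheap aggregate model advances every region, and an agent-based model is instantiated on demand only where activity crosses a threshold.

```python
# Toy multi-level simulation: coarse everywhere, detailed on demand.
import random

class CoarseRegion:
    """Aggregate flow model: one load value per region, cheap to advance."""
    def __init__(self, name):
        self.name, self.load = name, 0.0
    def step(self):
        self.load = max(0.0, self.load + random.uniform(-1.0, 2.0))

def detailed_step(region, devices_per_load=10):
    """Agent-based refinement, instantiated on demand: each simulated
    device transmits with its own probability this tick."""
    devices = [random.random() for _ in range(int(region.load) * devices_per_load)]
    return sum(1 for p in devices if random.random() < p)

THRESHOLD = 5.0
regions = [CoarseRegion(n) for n in ("north", "center", "south")]
for t in range(20):
    for r in regions:
        r.step()
        if r.load > THRESHOLD:            # zoom in only where it matters
            tx = detailed_step(r)
            print(f"t={t:2d} {r.name}: load={r.load:.1f}, {tx} transmissions")
```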

    An efficient genetic algorithm for large-scale transmit power control of dense and robust wireless networks in harsh industrial environments

    The industrial wireless local area network (IWLAN) is increasingly dense, due not only to the penetration of wireless applications into shop floors and warehouses, but also to the rising need for redundancy to ensure robust wireless coverage. Instead of simply powering on all access points (APs), there is an unavoidable need to dynamically control the transmit power of APs on a large scale, in order to minimize interference and adapt the coverage to the latest shadowing effects of dominant obstacles in an industrial indoor environment. To fulfill this need, this paper formulates a transmit power control (TPC) model that enables both powering APs on/off and calibrating the transmit power of each AP that is powered on. This TPC model uses an empirical one-slope path loss model that accounts for three-dimensional obstacle shadowing effects, enabling accurate yet simple coverage prediction. An efficient genetic algorithm (GA), named GATPC, is designed to solve this TPC model even on a large scale. To this end, it leverages repair mechanism-based population initialization, crossover and mutation, parallelism, as well as dedicated speedup measures. GATPC was experimentally validated in a small-scale IWLAN deployed in a real industrial indoor environment. It was further numerically demonstrated and benchmarked at both small and large scales, regarding the effectiveness and scalability of TPC. Moreover, sensitivity analysis was performed to reveal the produced interference and the qualification rate of GATPC as a function of the target coverage percentage as well as the number and placement direction of dominant obstacles.
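    For illustration, here is a drastically simplified GA in the spirit of GATPC; the AP layout, power levels, path loss constants and fitness weighting are assumptions of this sketch, and the paper's repair mechanisms, 3-D obstacle shadowing and speedup measures are omitted. Each genome assigns one power level per AP, with 0 meaning the AP is powered off:

```python
# Simplified GA for AP transmit power control (illustrative only).
# One-slope path loss: PL(d) = PL0 + 10*n*log10(d/d0), no shadowing term.
import math, random

APS = [(10, 10), (40, 10), (25, 35)]          # AP positions in m, hypothetical
POWERS = [0, 10, 14, 17, 20]                  # dBm levels; 0 means "AP off"
PL0, N_EXP, D0, SENSITIVITY = 40.0, 2.5, 1.0, -70.0

def rx_power(p_tx, d):
    if p_tx == 0:
        return -math.inf
    return p_tx - (PL0 + 10 * N_EXP * math.log10(max(d, D0) / D0))

def fitness(genome, grid=10):
    covered = total = 0
    for gx in range(grid):
        for gy in range(grid):
            x, y = gx * 5, gy * 5             # 50 m x 50 m evaluation grid
            best = max(rx_power(p, math.dist((x, y), ap))
                       for p, ap in zip(genome, APS))
            total += 1
            covered += best >= SENSITIVITY
    # Reward coverage, lightly penalize radiated power (interference proxy).
    return covered / total - 0.002 * sum(genome)

def evolve(pop_size=30, gens=60):
    pop = [[random.choice(POWERS) for _ in APS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                                   # elitism
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:10], 2)                # truncation selection
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            if random.random() < 0.2:                        # mutation
                child[random.randrange(len(child))] = random.choice(POWERS)
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = evolve()
print("power levels (dBm):", best, "fitness:", round(fitness(best), 3))
```

    On this toy instance the penalty term drives the GA to switch off redundant APs whenever a single AP already covers the grid, which is exactly the powering-on/off behaviour the TPC model is meant to capture.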

    WLAN Location Sharing through a Privacy Observant Architecture

    In the last few years, WLAN has seen immense growth, and it will continue this trend because it provides convenient connectivity as well as high-speed links. Furthermore, the infrastructure already exists in most public places and is cheap to extend. These advantages, together with the fact that WLAN covers a large area and is not restricted to line of sight, have led to many WLAN localization techniques and applications built on them. In this paper we present a novel calibration-free localization technique using the existing WLAN infrastructure that enables conference participants to determine their location without the need for a centralized system. The evaluation results illustrate the superiority of our technique compared to existing methods. In addition, we present a privacy observant architecture for sharing location information. We treat both the locations of people and the resources in the infrastructure as services, which can be easily discovered and used. An important design goal was to avoid tracking people and to give users control over whom they share their location information with and under which conditions.
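    The paper's calibration-free technique is not reproduced here, but the general model-based flavour of WLAN positioning can be sketched: invert a log-distance path loss model to turn RSSI readings into distance estimates, then search for the position that best fits all of them. AP positions, model constants and RSSI values below are hypothetical.

```python
# Illustrative model-based WLAN positioning (not the paper's method).
import math

APS = {"ap1": (0.0, 0.0), "ap2": (20.0, 0.0), "ap3": (10.0, 15.0)}  # known AP spots
P_TX, PL0, N_EXP = 15.0, 40.0, 2.2   # assumed transmit power and model constants

def rssi_to_distance(rssi):
    """Invert rssi = P_TX - PL0 - 10*n*log10(d) for the log-distance model."""
    return 10 ** ((P_TX - PL0 - rssi) / (10 * N_EXP))

def locate(rssi_by_ap, step=0.5):
    """Coarse grid search over a 30 m x 30 m floor for the least-squares fit."""
    dists = {ap: rssi_to_distance(r) for ap, r in rssi_by_ap.items()}
    best, best_err = None, math.inf
    for i in range(61):
        for j in range(61):
            x, y = i * step, j * step
            err = sum((math.dist((x, y), APS[ap]) - d) ** 2
                      for ap, d in dists.items())
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Readings consistent with a device near (8, 6) under the model above.
print(locate({"ap1": -47.0, "ap2": -49.8, "ap3": -46.2}))
```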

    Hotspot wireless LANs to enhance the performance of 3G and beyond cellular networks


    Anticipatory Buffer Control and Quality Selection for Wireless Video Streaming

    Video streaming is in high demand by mobile users, as recent studies indicate. In cellular networks, however, the unreliable wireless channel leads to two major problems. Poor channel states degrade video quality and interrupt playback when a user cannot keep its local playout buffer sufficiently filled: buffer underruns occur. Good channel conditions, in contrast, cause common greedy buffering schemes to pile up very long buffers; such over-buffering wastes expensive wireless channel capacity. To keep buffering in balance, we employ a novel approach: assuming that data rates can be predicted, we plan the quality and download time of video segments ahead of time. This anticipatory scheduling avoids buffer underruns by downloading a large number of segments before a channel outage occurs, without wasting wireless capacity through excessive buffering. We formalize this approach as an optimization problem and derive practical heuristics for segmented video streaming protocols (e.g., HLS or MPEG-DASH). Simulation results and testbed measurements show that our solution essentially eliminates playback interruptions without significantly decreasing video quality.
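    A toy version of such a heuristic (my own simplification of the idea; the paper formalizes it as an optimization problem): given a per-slot rate forecast that contains an outage, greedily pick for each segment the highest quality that still lets every later segment meet its playback deadline at minimum quality.

```python
# Anticipatory quality planning against a predicted rate trace (toy sketch).
SEG_SLOTS = 2                                     # each segment plays for 2 slots
STARTUP = 1                                       # prebuffering slots before playback
SIZES_MBIT = {"low": 2, "med": 6, "high": 12}     # one segment at each quality
PREDICTED_MBPS = [8, 8, 6, 0, 0, 0, 7, 8, 9, 8]   # forecast; slots 3-5 are an outage

def deadline(i):
    """Slot by which segment i must be fully downloaded."""
    return STARTUP + i * SEG_SLOTS

def feasible(plan, n):
    """All n segments meet their deadlines if the unplanned tail is 'low'."""
    need = 0
    for i in range(n):
        q = plan[i] if i < len(plan) else "low"
        need += SIZES_MBIT[q]
        if need > sum(PREDICTED_MBPS[:deadline(i)]):
            return False
    return True

def plan_qualities(n):
    plan = []
    for i in range(n):
        for q in ("high", "med", "low"):          # highest feasible quality wins
            if feasible(plan + [q], n):
                plan.append(q)
                break
        else:
            raise RuntimeError("no feasible schedule")  # an unavoidable stall
    return plan

# Pre-buffers ahead of the outage instead of stalling during it:
print(plan_qualities(5))   # ['med', 'high', 'low', 'med', 'high']
```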