3,220 research outputs found

    Efficient Approximation Algorithms for Multi-Antennae Largest Weight Data Retrieval

    Full text link
    In a mobile network, wireless data broadcast over $m$ channels (frequencies) is a powerful means for distributed dissemination of data to clients who access the channels through multiple antennae equipped on their mobile devices. The $\delta$-antennae largest weight data retrieval ($\delta$ALWDR) problem is to compute a schedule for downloading a subset of data items that has a maximum total weight using $\delta$ antennae in a given time interval. In this paper, we propose a ratio $1-\frac{1}{e}-\epsilon$ approximation algorithm for the $\delta$ALWDR problem that has the same ratio as the known result but a significantly improved time complexity of $O(2^{\frac{1}{\epsilon}}\frac{1}{\epsilon}m^{7}T^{3.5}L)$, down from $O(\epsilon^{3.5}m^{\frac{3.5}{\epsilon}}T^{3.5}L)$ when $\delta=1$ \cite{lu2014data}. To our knowledge, our algorithm represents the first ratio $1-\frac{1}{e}-\epsilon$ approximation solution to $\delta$ALWDR for the general case of arbitrary $\delta$. To achieve this, we first give a ratio $1-\frac{1}{e}$ algorithm for the $\gamma$-separated $\delta$ALWDR ($\delta$A$\gamma$LWDR) with runtime $O(m^{7}T^{3.5}L)$, under the assumption that every data item appears at most once in each segment of $\delta$A$\gamma$LWDR, for any input of maximum length $L$ on $m$ channels in $T$ time slots. Then, we show that we can retain the same ratio for $\delta$A$\gamma$LWDR without this assumption at the cost of increased time complexity, $O(2^{\gamma}m^{7}T^{3.5}L)$. This result immediately yields an approximation solution of the same ratio and time complexity for $\delta$ALWDR, a significant improvement on the known time complexity of ratio $1-\frac{1}{e}-\epsilon$ approximation to the problem.
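    The algorithm above is LP-based, but the $1-\frac{1}{e}$ target ratio is the one famously attained by the classic greedy heuristic for weighted maximum coverage. As a hedged illustration of that ratio only (not the authors' method; all names are hypothetical), a minimal Python sketch:

```python
# Classic greedy for weighted max coverage: attains the 1 - 1/e ratio.
# Purely illustrative of the approximation target, not the paper's LP algorithm.

def greedy_max_weight_coverage(slots, weights, k):
    """Pick up to k item sets (e.g., antenna/slot choices) maximizing covered weight.

    slots   : list of sets of item ids (one set = items retrievable by one choice)
    weights : dict mapping item id -> weight
    k       : budget on the number of choices
    """
    covered, picked = set(), []
    for _ in range(k):
        best, best_gain = None, 0.0
        for s in slots:
            gain = sum(weights[i] for i in s - covered)  # marginal weight gained
            if gain > best_gain:
                best, best_gain = s, gain
        if best is None:          # no remaining set adds weight
            break
        picked.append(best)
        covered |= best
    return picked, sum(weights[i] for i in covered)

# Example: three slot choices over four weighted items, budget of two picks.
print(greedy_max_weight_coverage(
    [{1, 2}, {2, 3}, {3, 4}], {1: 5.0, 2: 1.0, 3: 4.0, 4: 2.0}, 2))
```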

    Optimization Algorithms for Information Retrieval and Transmission in Distributed Ad Hoc Networks

    Get PDF
    An ad hoc network is formed by a group of self-configuring nodes, typically deployed in two- or three-dimensional space, that communicate with each other through wireless or other media. The distinct characteristics of ad hoc networks include the lack of pre-designed infrastructure, the natural correlation between network topology and geometry, and limited communication and computation resources. These characteristics introduce new challenges and opportunities for designing ad hoc network applications. This dissertation studies various optimization problems in ad hoc network information retrieval and transmission. Information stored in ad hoc networks is naturally associated with its location. To effectively retrieve such information, we study two fundamental problems, range search and object locating, from a distance-sensitive point of view, where the retrieval cost depends on the distance between the user and the target information. We develop a general framework, applicable to both problems, for optimizing the storage overhead while maintaining the distance-sensitive retrieval requirement. In addition, we derive a lower-bound result for the object locating problem which shows that logarithmic storage overhead is asymptotically optimal to achieve linear retrieval cost for growth-bounded networks. Bandwidth is a scarce resource in wireless ad hoc networks, and its proper utilization is crucial to effective information transmission. To avoid conflicts among wireless transmissions, links need to be carefully scheduled to satisfy various constraints. In this part of the study, we first consider an optimization problem of end-to-end on-demand bandwidth allocation with the single-transceiver constraint; we study its complexity and present a 2-approximation algorithm. We then discuss how to estimate the end-to-end throughput under a widely adopted model for radio signal interference. A method based on identifying certain clique patterns is proposed and shown to have good practical performance.
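    The single-transceiver constraint mentioned above means two links sharing a node can never be active in the same time slot, so each slot's active links must form a matching. A minimal sketch, with hypothetical names, of a feasible greedy slot-by-slot schedule (not the dissertation's 2-approximation):

```python
# Greedy slot filling under the single-transceiver constraint:
# within one slot, no node may appear on two active links (a matching).
# Illustrative only; the dissertation's 2-approximation is more involved.

def greedy_slot_schedule(links):
    """links: list of (u, v) node pairs with pending traffic.
    Returns a list of slots, each a list of node-disjoint links."""
    remaining = list(links)
    slots = []
    while remaining:
        busy, slot, leftover = set(), [], []
        for (u, v) in remaining:
            if u in busy or v in busy:       # would violate single transceiver
                leftover.append((u, v))
            else:
                slot.append((u, v))
                busy.update((u, v))
        slots.append(slot)
        remaining = leftover
    return slots

# Example: the three links of a path a-b-c-d need at least two slots.
print(greedy_slot_schedule([("a", "b"), ("b", "c"), ("c", "d")]))
```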

    Energy conservation in wireless sensor networks: a rule-based approach

    Get PDF

    Information Centric Networking in the IoT: Experiments with NDN in the Wild

    Get PDF
    This paper explores the feasibility, advantages, and challenges of an ICN-based approach in the Internet of Things. We report on the first NDN experiments in a life-size IoT deployment, spread over tens of rooms on several floors of a building. Based on the insights gained from these experiments, the paper analyses the shortcomings of CCN applied to the IoT. Several interoperable CCN enhancements are then proposed and evaluated: they significantly decrease control traffic (i.e., Interest messages) and leverage data-path caching to match IoT requirements in terms of energy and bandwidth constraints. Our optimizations increase content availability in the case of IoT nodes with intermittent activity. This paper also provides the first experimental comparison of CCN with the common IoT standards 6LoWPAN/RPL/UDP.
    Comment: 10 pages, 10 figures and tables, ACM ICN-2014 conference
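    The caching the enhancements build on is the generic NDN data-path cache: a node answers an Interest from its Content Store when it can, so fewer Interests propagate upstream. A minimal, assumed-names sketch of that mechanism (not the paper's implementation):

```python
# Generic NDN-style node: serve Interests from the in-network cache when
# possible, otherwise forward upstream and cache the returned Data.
# Sketch only; real NDN adds PIT/FIB state, scopes, and freshness handling.

class NdnNode:
    def __init__(self, forward_upstream):
        self.content_store = {}                    # name -> data (in-network cache)
        self.forward_upstream = forward_upstream   # callable(name) -> data or None

    def on_interest(self, name):
        if name in self.content_store:       # cache hit: no upstream Interest
            return self.content_store[name]
        data = self.forward_upstream(name)   # cache miss: forward the Interest
        if data is not None:
            self.content_store[name] = data  # cache Data on the return path
        return data

# Example: the second request for the same name is served locally.
node = NdnNode(lambda name: f"reading-for-{name}")
node.on_interest("/building/room12/temp")
print(node.on_interest("/building/room12/temp"))   # served from Content Store
```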

    Random Neural Networks and Optimisation

    Get PDF
    In this thesis we introduce new models and learning algorithms for the Random Neural Network (RNN), and we develop RNN-based and other approaches for the solution of emergency management optimisation problems. With respect to RNN developments, two novel supervised learning algorithms are proposed. The first is a gradient descent algorithm for an RNN extension model that we have introduced, the RNN with synchronised interactions (RNNSI), inspired by the synchronised firing activity observed in brain neural circuits. The second algorithm is based on modelling the signal-flow equations in the RNN as a nonnegative least squares (NNLS) problem; the NNLS problem is solved using a limited-memory quasi-Newton algorithm specifically designed for the RNN case. Regarding emergency management optimisation problems, we examine combinatorial assignment problems that require fast, distributed and close-to-optimal solutions under information uncertainty. We consider three different problems with the above characteristics: the assignment of emergency units to incidents with injured civilians (AEUI), the assignment of assets to tasks under execution uncertainty (ATAU), and the deployment of a robotic network to establish communication with trapped civilians (DRNCTC). AEUI is solved by training an RNN tool with instances of the optimisation problem and then using the trained RNN for decision making; training is achieved using the developed learning algorithms. For the solution of the ATAU problem, we introduce two different approaches: the first is based on mapping parameters of the optimisation problem to RNN parameters, and the second on solving a sequence of minimum cost flow problems on appropriately constructed networks with estimated arc costs. For the exact solution of the DRNCTC problem, we develop a mixed-integer linear programming formulation based on network flows. Finally, we design and implement distributed heuristic algorithms for the deployment of robots when the civilian locations are known or uncertain.
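    The NNLS formulation behind the second learning algorithm has the standard shape: minimize $\|Aq - b\|_2$ subject to $q \ge 0$. The thesis solves it with a custom limited-memory quasi-Newton method; the sketch below only illustrates the problem shape using SciPy's reference solver, with synthetic stand-in data:

```python
# Illustrating the NNLS problem shape: minimize ||A q - b||_2 subject to q >= 0.
# SciPy's solver is a stand-in for the thesis's limited-memory quasi-Newton
# method; A and b are synthetic, not actual RNN signal-flow quantities.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = rng.random((20, 5))   # stand-in for RNN signal-flow coefficients
b = rng.random(20)        # stand-in for target quantities

q, residual = nnls(A, b)  # q is componentwise nonnegative
print(q, residual)
```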

    Quality of Information in Mobile Crowdsensing: Survey and Research Challenges

    Full text link
    Smartphones have become the most pervasive devices in people's lives, and are clearly transforming the way we live and perceive technology. Today's smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as the accelerometer, gyroscope, microphone, and camera. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time road traffic monitoring, air and noise pollution monitoring, crime control, and wildlife monitoring, to name a few. Differently from prior sensing paradigms, humans are now the primary actors of the sensing process, since they are fundamental in retrieving reliable and up-to-date information about the event being monitored. As humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this paper, we provide a new framework for defining and enforcing QoI in mobile crowdsensing, and analyze in depth the current state of the art on the topic. We also outline novel research challenges, along with possible directions for future work.
    Comment: To appear in ACM Transactions on Sensor Networks (TOSN)

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Full text link
    Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks.
    Comment: 46 pages, 22 figures

    Genetic algorithms for satellite scheduling problems

    Get PDF
    Recently there has been growing interest in the mission operations scheduling problem. The problem, in a variety of formulations, arises in the management of satellite/space missions requiring efficient allocation of user requests to make communication between operations teams and spacecraft systems possible. Not only large space agencies, such as ESA (the European Space Agency) and NASA, but also smaller research institutions and universities can nowadays establish their own satellite missions, and thus need intelligent systems to automate the allocation of ground station services to space missions. In this paper, we present some relevant formulations of satellite scheduling viewed as a family of problems and identify various forms of optimization objectives. The main complexities, due to the highly constrained nature of the problem, window accessibility and visibility, and multiple conflicting objectives, are examined. Then, we discuss the resolution of the problem through different heuristic methods. In particular, we focus on the version of ground station scheduling for which we present computational results obtained with Genetic Algorithms using the STK simulation toolkit.
    Peer reviewed. Postprint (published version).
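    As a rough illustration of the GA approach (a toy encoding, not the paper's STK-based setup), a chromosome can assign each request to one of its visibility windows, with fitness counting conflict-free assignments:

```python
# Toy GA for ground station scheduling: chromosome[r] picks a visibility
# window index for request r (or None = unscheduled); fitness counts
# requests scheduled without slot conflicts. Hypothetical encoding only.
import random

def fitness(chrom, windows):
    used, score = set(), 0
    for req, w in enumerate(chrom):
        if w is not None and windows[req][w] not in used:
            used.add(windows[req][w])   # claim the ground-station slot
            score += 1
    return score

def ga(windows, pop_size=30, gens=100):
    n = len(windows)
    rand = lambda r: random.choice([None] + list(range(len(windows[r]))))
    pop = [[rand(r) for r in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: -fitness(c, windows))
        parents = pop[: pop_size // 2]        # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(n)
            child = a[:cut] + b[cut:]         # one-point crossover
            m = random.randrange(n)
            child[m] = rand(m)                # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, windows))

# Example: 3 requests; shared slot ids across requests create conflicts.
windows = [[0, 1], [1], [0, 2]]
best = ga(windows)
print(best, fitness(best, windows))
```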

    Compression-based Data Reduction Technique for IoT Sensor Networks

    Get PDF
    In IoT sensor networks, saving energy is very important since IoT sensor nodes operate on their own limited batteries. Data transmission is very costly in IoT sensor nodes and wastes most of the energy, while energy consumption is much lower for data processing. There are several energy-saving strategies and principles, mostly dedicated to reducing data transmission. Therefore, by minimizing data transfers in IoT sensor networks, we can conserve a considerable amount of energy. In this research, we propose a Compression-Based Data Reduction (CBDR) technique which works at the level of the IoT sensor nodes. CBDR includes two stages of compression: a lossy SAX quantization stage, which reduces the dynamic range of the sensor data readings, followed by a lossless LZW compression stage, which compresses the output of the first stage. Quantizing the sensor node readings down to the SAX alphabet size shrinks the readings into a small symbol range, which allows the LZW stage to achieve greater compression. We also propose a further improvement to the CBDR technique, adding a Dynamic Transmission mechanism (DT-CBDR) to decrease both the total amount of data sent to the gateway and the processing required. The OMNeT++ simulator, along with real sensory data collected at the Intel Lab, is used to demonstrate the performance of the proposed technique. The simulation experiments illustrate that the proposed CBDR technique provides better performance than the other techniques in the literature.
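    The two CBDR stages described above map naturally to a short sketch: SAX-style quantization of the readings (lossy), then LZW over the resulting symbol string (lossless). The breakpoints and 4-symbol alphabet below are illustrative assumptions, not the paper's exact parameters:

```python
# Hedged sketch of the CBDR pipeline: lossy SAX quantization, then lossless LZW.
# Alphabet size and breakpoints are illustrative, not the paper's parameters.

def sax_quantize(values, alphabet="abcd"):
    """Map z-normalized readings onto a small symbol alphabet (lossy stage)."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
    breakpoints = [-0.67, 0.0, 0.67]   # Gaussian quartiles for 4 symbols
    symbols = []
    for v in values:
        z = (v - mean) / std
        symbols.append(alphabet[sum(z > bp for bp in breakpoints)])
    return "".join(symbols)

def lzw_compress(text):
    """Classic LZW over the quantized symbol string (lossless stage)."""
    table = {chr(i): i for i in range(256)}
    w, codes = "", []
    for ch in text:
        if w + ch in table:
            w += ch
        else:
            codes.append(table[w])
            table[w + ch] = len(table)   # extend the dictionary
            w = ch
    if w:
        codes.append(table[w])
    return codes

readings = [21.1, 21.2, 21.2, 25.0, 25.1, 21.0]   # made-up temperatures
symbols = sax_quantize(readings)
print(symbols, lzw_compress(symbols))   # small symbol range -> fewer LZW codes
```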