
    Channel and noise variance estimation and tracking algorithms for unique-word based single-carrier systems


    Energy aware optimization for low power radio technologies

    The explosive growth of the IoT is pushing the market towards cheap, very low power devices with a strong focus on miniaturization, for applications such as in-body sensors, personal health monitoring and microrobots. Proposing procedures for energy efficiency in the IoT is a difficult task, as it is a rapidly growing market comprising many very diverse product categories, built on technologies that are not yet stable and evolve at a high pace. Research in this field proposes solutions that range from physical-layer optimization up to the network layer, and the sensor network designer has to select the techniques best suited to the application-specific architecture and the radio technology used. This work focuses on exploring new techniques for enhancing the energy efficiency and user experience of IoT networks. We divide the proposed techniques into frame-level and chip-level optimization techniques. While the frame-level techniques are meant to improve the performance of existing radio technologies, the chip-level techniques aim at replacing them with crystal-free architectures. The identified frame-level techniques are the use of preamble authentication and packet fragmentation, advisable for Low Power Wide Area Networks (LPWANs), a technology that offers the lowest energy consumption per provided service but is vulnerable to energy exhaustion attacks and does not perform well in dense networks. The use of authenticated preambles between the sensors and gateways becomes a defence mechanism against the battery draining intended by attackers. We show experimentally that this approach reduces the effect of an exhaustion attack by 91%, increasing the device's lifetime from less than 0.24 years to 2.6 years. The experiments were conducted using Loadsensing sensor nodes, commercially used for critical infrastructure control and monitoring. Although exemplified on LoRaWAN, the use of preamble authentication is extensible to any wireless protocol. 
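    The preamble-authentication idea can be sketched as follows. This is a minimal illustration only: the tag construction, field sizes, tag length, and key handling are assumptions for exposition, not the exact scheme evaluated in the work.

    ```python
    import hmac
    import hashlib

    KEY = b"shared-network-key"  # placeholder; real key management is out of scope here

    def preamble_tag(dev_addr: int, counter: int) -> bytes:
        # 2-byte truncated HMAC over the device address and a frame counter
        # (field widths and tag length are illustrative assumptions).
        msg = dev_addr.to_bytes(4, "big") + counter.to_bytes(4, "big")
        return hmac.new(KEY, msg, hashlib.sha256).digest()[:2]

    def accept_preamble(rx_tag: bytes, dev_addr: int, counter: int) -> bool:
        # Constant-time comparison; on mismatch the radio can abort reception
        # immediately instead of staying awake for the whole bogus frame.
        return hmac.compare_digest(rx_tag, preamble_tag(dev_addr, counter))
    ```

    Because an attacker without the key cannot forge a valid tag, a node spends only the short tag-check time on each spoofed preamble before returning to sleep, which is the mechanism behind the reported lifetime improvement.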
The use of packet fragmentation, even when the packet fits within a single frame, is shown to reduce the probability of collisions as the number of users in the duty-cycle-restricted network increases. Using custom-made Matlab simulations, an important goodput improvement was obtained with fragmentation, with higher impact in slower and denser networks. Using NS3 simulations, we showed that combining packet fragmentation with group NACK can increase the network reliability while reducing the energy consumed for retransmissions, at the cost of adding small headers to each fragment. This strategy proves effective only in dense duty-cycle-restricted networks, where the header overhead is negligible compared to the network traffic. As a chip-level technique, we consider using radios for communication that do not rely on external frequency references such as crystal oscillators. This would enable having all of the sensor's elements on a single piece of silicon, rendering it up to ten times more energy efficient due to the compactness of the chip. The immediate consequence is the loss of communication accuracy and of the ability to easily switch communication channels. In this sense, we propose a sequence of frequency synchronization algorithms and phases that a crystal-free device has to follow so that it can join a network by finding the beacon channel, synthesize all communication channels, and then maintain their accuracy against temperature changes. The proposed algorithms need no additional network overhead, as they use the existing network signaling. The evaluation is made in simulation and experimentally on a prototype implementation of an IEEE 802.15.4 crystal-free radio. 
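A frequency-maintenance loop of this kind can be sketched as a simple feedback controller. The DCO code range, the gain, and the assumed linear code-to-frequency map are illustrative stand-ins for the prototype's actual calibration interface, not the thesis's implementation.

```python
def track_frequency(measure_error_ppm, initial_code=2048, gain=2, steps=20):
    """Nudge a digitally controlled oscillator (DCO) tuning code until the
    carrier error measured against a network beacon is within tolerance.
    `measure_error_ppm(code)` is assumed to return the offset in ppm."""
    code = initial_code
    for _ in range(steps):
        err = measure_error_ppm(code)     # e.g. estimated from a beacon frame
        if abs(err) <= 40:                # IEEE 802.15.4 allows +/- 40 ppm
            break
        code -= int(err * gain)           # gain in DCO codes per ppm (assumed linear)
    return code
```

With a roughly linear DCO, such a loop converges within a few beacons, and the same error estimate can seed the synthesis of the remaining channels without extra signaling.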
While in simulations we are able to change to another communication channel with very good frequency accuracy, the results obtained experimentally show an initial accuracy slightly above 40 ppm, which is later corrected by the chip to below 40 ppm.
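The energy argument for fragmentation with group NACK can be illustrated with a toy retransmission model: a single interfering burst hits the packet at a random position, and only the damaged airtime must be resent. The numbers and the model itself are illustrative assumptions, not results from the thesis.

```python
import random

def retransmit_airtime(airtime_s, n_fragments, burst_s, trials=5000, seed=7):
    """Mean airtime (seconds) spent on retransmissions when one interfering
    burst overlaps the packet at a uniformly random offset. Unfragmented,
    the whole packet is resent; with fragments and a group NACK, only the
    overlapped fragments are. Per-fragment header overhead is ignored."""
    rng = random.Random(seed)
    frag = airtime_s / n_fragments
    total = 0.0
    for _ in range(trials):
        start = rng.uniform(-burst_s, airtime_s)  # burst position
        end = start + burst_s
        lost = sum(1 for k in range(n_fragments)
                   if k * frag < end and (k + 1) * frag > start)
        total += lost * frag
    return total / trials
```

In this model, splitting a 2 s packet into 8 fragments cuts the expected retransmission airtime substantially, at the cost of the per-fragment headers the abstract mentions; that trade is only worthwhile in dense, duty-cycle-restricted traffic.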

    Spatio-Temporal processing for Optimum Uplink-Downlink WCDMA Systems

    The capacity of a cellular system is limited by two different phenomena, namely multipath fading and multiple access interference (MAI). A two-dimensional (2-D) receiver combats both of these by processing the signal in both the spatial and temporal domains. An ideal 2-D receiver would perform joint space-time processing, but at the price of high computational complexity. In this research we investigate a computationally simpler technique termed the Beamformer-Rake. In a Beamformer-Rake, the output of a beamformer is fed into a succeeding temporal processor to take advantage of both the beamformer and the Rake receiver. Wireless service providers throughout the world are working to introduce third generation (3G) and beyond-3G cellular service that will provide higher data rates and better spectral efficiency. Wideband CDMA (WCDMA) has been widely accepted as one of the air interfaces for 3G. A Beamformer-Rake receiver can be an effective solution to provide the enhanced receiver capabilities needed to achieve the required performance of a WCDMA system. We consider three different Pilot Symbol Assisted (PSA) beamforming techniques: the Direct Matrix Inversion (DMI), Least Mean Square (LMS) and Recursive Least Squares (RLS) adaptive algorithms. A Geometrically Based Single Bounce (GBSB) statistical circular channel model is considered, which is more suitable for array processing and conducive to Rake combining. The performance of the Beamformer-Rake receiver is evaluated in this channel model as a function of the number of antenna elements and Rake fingers, for the uplink WCDMA system. It is shown that the Beamformer-Rake receiver outperforms the conventional Rake receiver and the conventional beamformer by a significant margin. We also optimize and develop a mathematical formulation for the output Signal to Interference plus Noise Ratio (SINR) of a Beamformer-Rake receiver. 
In this research we also develop, simulate and evaluate the SINR and Signal to Noise Ratio (Eb/N0) performance of an adaptive beamforming technique for the WCDMA downlink. The performance is then compared with an omnidirectional antenna system. Simulation shows that the best performance is achieved when all mobiles with the same Angle of Arrival (AoA) and different distances from the base station are served by one beam.
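The kind of output-SINR formulation discussed above can be sketched for a simple narrowband case. As a stand-in (this is not the thesis's Beamformer-Rake derivation), the sketch uses a standard minimum-variance (MVDR) beamformer on a uniform linear array, with assumed element spacing and powers.

```python
import numpy as np

def steering(theta_deg, n=4, spacing=0.5):
    # Uniform linear array response; element spacing in wavelengths (assumed).
    phase = 2 * np.pi * spacing * np.sin(np.radians(theta_deg))
    return np.exp(1j * phase * np.arange(n))

def mvdr_output_sinr_db(theta_sig, theta_int, snr_db=10.0, inr_db=10.0, n=4):
    a_s = steering(theta_sig, n)
    a_i = steering(theta_int, n)
    p_s = 10 ** (snr_db / 10)   # signal power (unit noise power per element)
    p_i = 10 ** (inr_db / 10)   # interferer power
    # interference-plus-noise spatial covariance
    R = p_i * np.outer(a_i, a_i.conj()) + np.eye(n)
    w = np.linalg.solve(R, a_s)              # MVDR weights (up to scaling)
    num = p_s * abs(w.conj() @ a_s) ** 2
    den = (w.conj() @ R @ w).real
    return 10 * np.log10(num / den)
```

A well-separated interferer is nulled with little loss of array gain, while an interferer near the desired angle of arrival costs output SINR; this spatial trade is what a beamformer stage buys before temporal Rake combining.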

    A study of adaptive beamforming techniques using smart antenna for mobile communication

    Mobile radio networks with a cellular structure demand high spectral efficiency to maximize the number of connections in a given bandwidth. One of the promising technologies is the “smart antenna”. A smart antenna is a combination of an array of individual antenna elements and a dedicated signal processing algorithm. Such a system can distinguish signal combinations arriving from different directions and subsequently increase the received power from the desired user. Wireless systems that enable higher data rates and higher capacities have become the need of the hour. Smart antenna technology offers a significantly improved solution to reduce interference levels and improve system capacity. With this technology, each user’s signal is transmitted and received by the base station only in the direction of that particular user. Smart antenna technology attempts to address the interference problem via an advanced signal processing technique called beamforming. The advent of powerful low-cost digital signal processors (DSPs), general-purpose processors (and ASICs), as well as innovative software-based signal processing techniques (algorithms), has made intelligent antennas practical for cellular communications systems and makes them a promising new technology. Through adaptive beamforming, a base station can form a narrower beam toward a user and nulls toward interfering users. In this thesis, both block adaptive and sample-by-sample methods are used to update the weights of the smart antenna. The block adaptive beamformer employs a block of data to estimate the optimum weight vector and is known as the sample matrix inversion (SMI) algorithm. The sample-by-sample method updates the weight vector with each sample. The sample-by-sample methods attempted in the present study are the least mean square (LMS) algorithm, the constant modulus algorithm (CMA), the least squares constant modulus algorithm (LS-CMA) and the recursive least squares (RLS) algorithm. 
In the presence of two interfering signals and noise, the amplitude and phase comparison between the desired signal and the estimated output, the beam patterns of the smart antennas, and the learning characteristics of the above-mentioned algorithms are compared and analyzed. The recursive least squares algorithm has the fastest convergence rate; however, this improvement is achieved at the expense of increased computational complexity. The smart antenna technology suggested in the present work offers a significantly improved solution to reduce interference levels and improve system capacity. With this technology, each user’s signal is transmitted and received by the base station only in the direction of that particular user, which drastically reduces the overall interference in the system. Further, through adaptive beamforming, the base station can form narrower beams towards the desired user and nulls towards interfering users, considerably improving the signal-to-interference-plus-noise ratio. It also provides better range and coverage by focusing the energy sent out into the cell, and multipath rejection by minimizing fading and other undesirable effects of multipath propagation.
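A minimal sample-by-sample LMS weight update of the kind compared in this study can be sketched as follows; the array snapshots, step size, and pilot-style reference signal in the test are illustrative assumptions rather than the thesis's exact setup.

```python
import numpy as np

def lms_beamformer(x, d, mu=0.05):
    """Sample-by-sample LMS weight update for an N-element array.
    x: (T, N) complex array snapshots; d: (T,) known reference signal.
    Returns the final weights and the per-sample squared error
    (the learning curve used to compare algorithms)."""
    T, N = x.shape
    w = np.zeros(N, dtype=complex)
    err = np.empty(T)
    for t in range(T):
        y = w.conj() @ x[t]           # array output with current weights
        e = d[t] - y                  # error against the reference
        w += mu * x[t] * np.conj(e)   # stochastic steepest-descent step
        err[t] = abs(e) ** 2
    return w, err
```

The step size `mu` trades convergence speed against steady-state misadjustment, which is exactly the axis on which LMS loses to RLS in the comparison above.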

    Evaluation of multi-antenna reception techniques in a DS-CDMA system


    Aperture-Level Simultaneous Transmit and Receive (STAR) with Digital Phased Arrays

    In the signal processing community, it has long been assumed that transmitting and receiving useful signals at the same time in the same frequency band at the same physical location was impossible. A number of insights in antenna design, analog hardware, and digital signal processing have allowed researchers to achieve simultaneous transmit and receive (STAR) capability, sometimes also referred to as in-band full-duplex (IBFD). All STAR systems must mitigate the interference in the receive channel caused by the signals emitted by the system. This poses a significant challenge because of the immense disparity in the power of the transmitted and received signals. As an analogy, imagine a person who wants to hear a whisper from across the room while screaming at the top of their lungs. The sound of their own voice would completely drown out the whisper. Approaches to increasing the isolation between the transmit and receive channels of a system attempt to successively reduce the magnitude of the transmitted interference at various points in the received signal processing chain. Many researchers believe that STAR cannot be achieved practically without some combination of modified antennas, analog self-interference cancellation hardware, digital adaptive beamforming, and digital self-interference cancellation. The aperture-level simultaneous transmit and receive (ALSTAR) paradigm confronts that assumption by creating isolation between transmit and receive subarrays in a phased array using only digital adaptive transmit and receive beamforming and digital self-interference cancellation. This dissertation explores the boundaries of performance for the ALSTAR architecture both in terms of isolation and in terms of spatial imaging resolution. 
    It also makes significant strides towards practical ALSTAR implementation by determining the performance capabilities and computational costs of an adaptive beamforming and self-interference cancellation implementation inspired by the mathematical structure of the isolation performance limits and designed for real-time operation.
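The digital self-interference cancellation stage can be sketched as least-squares estimation of an assumed linear, time-invariant transmit-to-receive coupling, followed by subtraction of the reconstructed interference. The real ALSTAR chain is adaptive and operates jointly with beamforming; this block-wise batch solve is a simplified stand-in.

```python
import numpy as np

def digital_sic(tx, rx, taps=8):
    """Estimate the self-interference channel over `taps` delays by least
    squares, reconstruct the coupled interference, and subtract it from
    the received samples. Returns the residual (interference-cancelled)
    signal. Assumes a linear, time-invariant coupling within the block."""
    T = len(rx)
    # convolution matrix of delayed transmit samples
    X = np.zeros((T, taps), dtype=complex)
    for k in range(taps):
        X[k:, k] = tx[:T - k]
    h, *_ = np.linalg.lstsq(X, rx, rcond=None)  # channel estimate
    return rx - X @ h
```

The achievable isolation of such a stage is bounded by how well the linear model matches the true coupling (and by transmitter noise), which is why the dissertation pairs it with adaptive beamforming that shapes the coupling itself.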