
    Spectrum Sensing Algorithms for Cognitive Radio Applications

    Future wireless communication systems are expected to be extremely dynamic, smart, and capable of interacting with the surrounding radio environment. To implement such advanced devices, cognitive radio (CR) is a promising paradigm, focusing on strategies for acquiring information and learning. The first task of a cognitive system is spectrum sensing, which has mainly been studied in the context of opportunistic spectrum access, where cognitive nodes must implement signal detection techniques to identify unused bands for transmission. In the present work, we study different spectrum sensing algorithms, focusing on their statistical description and on the evaluation of their detection performance. Moving beyond traditional sensing approaches, we consider the presence of practical impairments and analyze algorithm design. Without aspiring to cover the broad spectrum of spectrum sensing, we aim to contribute to the main classes of sensing techniques. In particular, in the context of energy detection we study the practical design of the test, considering the case in which the noise power is estimated at the receiver. This analysis deepens the understanding of the SNR wall phenomenon, providing the conditions for its existence and showing that the presence of the SNR wall is determined by the accuracy of the noise power estimation process. In the context of eigenvalue-based detectors, which can be adopted by multiple-sensor systems, we study the practical situation in which the noise powers at the receivers are unbalanced. We then shift the focus from single-band detectors to wideband sensing, proposing a new approach based on information-theoretic criteria. This technique is blind and, requiring no threshold setting, can be adopted even if the statistical distribution of the observed data is not known exactly. In the last part of the thesis we analyze some simple cooperative localization techniques based on weighted centroid strategies.
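    The interplay between energy detection and noise-power estimation described in this abstract can be illustrated with a small sketch. This is a toy, not the thesis's actual derivation: the threshold rule, signal powers, and sample sizes below are assumptions chosen only to caricature the SNR-wall effect.

    ```python
    import numpy as np

    def energy_detector(samples, noise_power_est, z_pfa=2.326):
        """Toy energy detector that relies on an *estimated* noise power.

        Declares 'signal present' when the mean sample energy exceeds a
        threshold derived from the estimated noise power; z_pfa is the
        Gaussian quantile for the target false-alarm rate (2.326 ~ 1%).
        """
        n = len(samples)
        energy = np.mean(samples ** 2)
        # For real Gaussian noise, the mean energy under H0 has standard
        # deviation sigma^2 * sqrt(2/n), hence the sqrt(2/n) term.
        threshold = noise_power_est * (1.0 + z_pfa * np.sqrt(2.0 / n))
        return bool(energy > threshold)

    rng = np.random.default_rng(0)
    n = 100_000
    noise = rng.normal(0.0, 1.0, n)          # unit noise power
    signal = 0.3 * rng.normal(0.0, 1.0, n)   # weak signal, power 0.09

    # With perfect noise-power knowledge the weak signal is detected.
    print(energy_detector(noise + signal, noise_power_est=1.0))   # True

    # A 10% over-estimate of the noise power lifts the threshold above the
    # total received power (~1.09), so detection fails no matter how many
    # samples are collected: a caricature of the SNR wall.
    print(energy_detector(noise + signal, noise_power_est=1.1))   # False
    ```

    The point of the second call is that longer observation cannot compensate for a biased noise-power estimate, which is exactly why the accuracy of the estimation process governs whether an SNR wall exists.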

    Secondary spectrum usage in TV white space

    Currently, the use of TV frequencies is exclusively license-based, with the area not covered by licensed TV transmitters being known as TV white space. In TV white space, the spectrum can be reused by a secondary user. This thesis studies how TV white space can be used by a cellular system. The study addresses the problems of how access to the spectrum is arranged, how the spectrum usage is constrained, and how much capacity a secondary system will have. Access to TV white space can be arranged by using spectrum sensing or a geolocation database. Spectrum sensing relies on the performance of the signal detection algorithm. The detector has to operate in a fading environment where it should identify very low signal levels. In this thesis, the detector performance in slow and fast fading environments is modeled. The model indicates that, for a sufficiently long measurement time, the impact of fast fading can be averaged out. Unfortunately, simple single-antenna detectors are not able to operate at a low enough signal-to-noise level. We propose a novel multi-antenna detection algorithm that is specially designed to operate in a fading environment. TV white space is characterized by the amount of spectrum available for secondary usage. Because of signal detection errors, a system using sensing-based access is not able to use the entire available spectrum. This dissertation provides a method for estimating the spectrum utilization efficiency. The method illustrates how the detection error level affects the amount of available spectrum. One of the central questions studied in this thesis is how to describe the interference generated by the secondary transmitters. In the conventional model, the interference is computed as the sum of the interfering powers from individual transmitters. An alternative approach, pursued here, is to characterize the transmitters by their transmission power density per area. With such a model, the interference computation is done by integrating over the secondary system deployment area. The proposed method simplifies the interference estimation process. In data communication systems, the attractiveness of spectrum depends on the data rate the system can provide. Within the scope of this work, the achievable data rate is computed for a cellular system. The computation is formulated as an optimization problem, whose solution is found by searching for the optimal power allocation among the co-channels and adjacent channels of a nearby TV transmitter.
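    The density-based interference model lends itself to a very small numerical sketch. The density value, path-loss exponent, and deployment radii below are arbitrary assumptions, not figures from the dissertation; the sketch only shows the area-integration idea and checks it against the closed-form power-law integral.

    ```python
    import numpy as np

    def aggregate_interference(rho, r_min, r_max, alpha=3.5, n_grid=2000):
        """Interference at the origin from secondary transmitters modelled as
        a uniform power density rho (power per unit area) over an annulus
        r_min <= r <= r_max, with power-law path loss r**(-alpha).

        Integrates in polar coordinates:
            I = int_0^{2pi} int_{r_min}^{r_max} rho * r**(-alpha) * r dr dtheta
        """
        r = np.geomspace(r_min, r_max, n_grid)       # log-spaced radial grid
        f = rho * r ** (1.0 - alpha)                 # integrand incl. Jacobian r
        # Trapezoidal rule on the non-uniform grid
        radial = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))
        return 2.0 * np.pi * radial

    # Closed form for comparison: 2*pi*rho*(r_min^(2-a) - r_max^(2-a))/(a-2)
    rho, r_min, r_max, a = 1e-3, 1.0, 100.0, 3.5
    closed = 2.0 * np.pi * rho * (r_min ** (2 - a) - r_max ** (2 - a)) / (a - 2)
    print(aggregate_interference(rho, r_min, r_max, a), closed)
    ```

    Replacing a sum over individual transmitter positions with one such integral is what makes the density model cheap: the result depends only on the deployment geometry and the density, not on where each transmitter happens to sit.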

    Untangling the Web: A Guide To Internet Research

    [Excerpt] Untangling the Web for 2007 is the twelfth edition of a book that started as a small handout. After more than a decade of researching, reading about, using, and trying to understand the Internet, I have come to accept that it is indeed a Sisyphean task. Sometimes I feel that all I can do is push the rock up to the top of that virtual hill, then stand back and watch as it rolls down again. The Internet—in all its glory of information and misinformation—is for all practical purposes limitless, which of course means we can never know it all, see it all, understand it all, or even imagine all it is and will be. The more we know about the Internet, the more acute is our awareness of what we do not know. The Internet emphasizes the depth of our ignorance because our knowledge can only be finite, while our ignorance must necessarily be infinite. My hope is that Untangling the Web will add to our knowledge of the Internet and the world while recognizing that the rock will always roll back down the hill at the end of the day.

    A Temporal Approach to Defining Place Types based on User-Contributed Geosocial Content

    Place is one of the foundational concepts on which the field of Geography has been built. Traditionally, GIScience research into place has been approached from a spatial perspective. While space is an integral feature of place, it represents only a single dimension (or, to be exact, a combination of three dimensions) in the complex, multidimensional concept that is place. Though existing research has shown that both the spatial and thematic dimensions are valuable, time has historically been under-utilized in its ability to describe and define places and their types. The recent availability of and access to user-generated geosocial content has allowed for a much deeper investigation of the temporal dimension of place. Multi-resolution temporal signatures are constructed from these data, permitting both place instances and place types to be compared through a robust set of (dis)similarity measures. The primary contribution of this work lies in demonstrating how places are defined through a better understanding of temporal user behavior. Furthermore, the results of this research support the argument that the temporal dimension is the most indicative placial dimension for classifying places by type.
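    The temporal-signature idea can be illustrated with a toy example. The check-in hours and place types below are invented, and the thesis's own multi-resolution signatures and (dis)similarity measures may differ; this sketch only shows how an hour-of-day histogram can separate place types.

    ```python
    import numpy as np

    def temporal_signature(checkin_hours, bins=24):
        """Normalized hour-of-day histogram: a coarse temporal signature."""
        hist = np.bincount(np.asarray(checkin_hours) % bins, minlength=bins)
        return hist / hist.sum()

    def cosine_similarity(a, b):
        """One simple similarity measure between two signatures."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Invented check-ins: a 'coffee shop' peaks in the morning,
    # a 'bar' in the late evening.
    coffee = temporal_signature([7, 8, 8, 9, 9, 9, 10, 15])
    bar = temporal_signature([20, 21, 21, 22, 22, 23, 23, 0])

    print(cosine_similarity(coffee, bar))   # 0.0 -- disjoint active hours
    ```

    Aggregating such signatures over many instances of a place type, and comparing them at several temporal resolutions (hour-of-day, day-of-week), is the kind of comparison the abstract refers to.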

    2019 EC3 July 10-12, 2019 Chania, Crete, Greece


    Exploring the ethical, technical and legal issues of voice assistants (2020)


    Quantifying Quality of Life

    Describes technological methods and tools for objective and quantitative assessment of QoL. Appraises technology-enabled methods for incorporating QoL measurements in medicine. Highlights the success factors for adoption and scaling of technology-enabled methods. This open access book presents the rise of technology-enabled methods and tools for objective, quantitative assessment of Quality of Life (QoL), following the WHOQOL model. It is an in-depth resource describing and examining state-of-the-art, minimally obtrusive, ubiquitous technologies. Highlighting the factors required for adoption and scaling of technology-enabled methods and tools for QoL assessment, it also describes how these technologies can be leveraged for behavior change, disease prevention, health management and long-term QoL enhancement in populations at large. Quantifying Quality of Life: Incorporating Daily Life into Medicine fills a gap in the field of QoL by providing assessment methods, techniques and tools. These assessments differ from current methods, which are mostly infrequent, subjective, qualitative, memory-based, context-poor and sparse. It is therefore an ideal resource for physicians, physicians in training, software and hardware developers, computer scientists, data scientists, behavioral scientists, entrepreneurs, and healthcare leaders and administrators seeking an up-to-date resource on this subject.

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals, in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to evaluate the quality of the interaction quantitatively, using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in a setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal to be considered.