
    Compressive Sensing of Multiband Spectrum towards Real-World Wideband Applications.

    PhD Thesis. Spectrum scarcity is a major challenge in wireless communication systems as they rapidly evolve towards greater capacity and bandwidth. The fact that the real-world spectrum, a finite resource, is sparsely utilized in certain bands has spurred the proposal of spectrum sharing. In wideband scenarios, accurate real-time spectrum sensing, as an enabler of spectrum sharing, can become inefficient because it naturally requires the sampling rate of the analog-to-digital conversion to exceed the Nyquist rate, which is resource-costly and energy-consuming. Compressive sensing techniques have been applied in wideband spectrum sensing to achieve sub-Nyquist-rate sampling of frequency-sparse signals and alleviate this burden. A major challenge of compressive spectrum sensing (CSS) is the complexity of the sparse recovery algorithm. Greedy algorithms achieve sparse recovery with low complexity but require prior knowledge of the signal sparsity. A practical spectrum sparsity estimation scheme is therefore proposed. Furthermore, it is proposed to reduce the dimension of the sparse recovery problem, which further lowers the complexity and achieves signal denoising that improves recovery fidelity. The robust detection of incumbent radios is also a fundamental problem in CSS. To address energy detection in CSS, the spectrum statistics of the recovered signals are investigated and a practical threshold adaptation scheme for energy detection is proposed. Moreover, it is of particular interest to identify the challenges and opportunities in implementing real-world CSS for systems with large bandwidth. Initial research on the practical issues towards the real-world realization of a wideband CSS system based on the multicoset sampler architecture is presented.
In all, this thesis provides insights into two critical challenges - low-complexity sparse recovery and robust energy detection - in the general CSS context, while also looking into particular issues towards a real-world CSS implementation based on the multicoset sampler.
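The greedy sparse recovery the abstract refers to can be illustrated with a minimal Orthogonal Matching Pursuit (OMP) sketch; the measurement matrix, dimensions, and sparsity level below are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np

def omp(A, y, k):
    """Greedy Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x.
    Note that the sparsity k must be known in advance, which is exactly the
    prior knowledge a sparsity-estimation scheme would have to supply."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        # Least-squares fit of y on the columns selected so far
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy demo: a 3-sparse "spectrum" of 128 bins, 40 sub-Nyquist measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 128)) / np.sqrt(40)
x_true = np.zeros(128)
x_true[[5, 60, 100]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = omp(A, y, k=3)
```

In the noiseless toy case above, OMP recovers the three occupied bins from far fewer measurements than the 128 Nyquist-rate samples would require.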

    Integration of TV White Space and Femtocell Networks.

    PhD Thesis. The Femtocell is an effective approach to increasing system capacity in cellular networks. Since traditional Femtocells use the same frequency band as the cellular network, cross-tier and co-tier interference exist in such Femtocell networks and significantly deteriorate system throughput. In order to tackle these challenges, interference mitigation has drawn attention from both academia and industry. TV White Space (TVWS) is a newly opened portion of spectrum, arising from the spare spectrum created by the transition from analogue to digital TV. It can be utilized through cognitive radio technology according to the policies of telecommunications regulators. This thesis considers using locally available TVWS to reduce interference in Femtocell networks. The objective of this research is to mitigate the downlink cross-tier and co-tier interference in different Femtocell deployment scenarios and to increase the throughput of the overall system. A Geo-location database model to obtain locally available TVWS information in the UK is developed in this research. The database is designed using a power control method to calculate the available TVWS channels and the maximum allowable transmit power, based on digital TV transmitter information in the UK and the regulations on unlicensed use of TVWS. The proposed database model is first combined with a grid-based resource allocation scheme and investigated in a simplified Femtocell network to demonstrate the gains of using TVWS in Femtocell networks. Furthermore, two Femtocell deployment scenarios are studied in this research. In the suburban Femtocell deployment scenario, a novel system architecture consisting of the Geo-location database and a resource allocation scheme using TVWS is proposed to mitigate cross-tier interference between the Macrocell and Femtocells.
In the dense Femtocell deployment scenario, a power-efficient resource allocation scheme is proposed to maximize the throughput of Femtocells while limiting the co-tier interference among Femtocells. The optimization problem in the power-efficient scheme is solved using the sequential quadratic programming method. The simulation results show that the proposed schemes can effectively mitigate the interference in Femtocell networks in practical deployment scenarios.
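A throughput-maximizing power allocation of the kind solved here by sequential quadratic programming can be sketched with a toy problem; the channel gains, noise power, and power budget below are invented for illustration, and SciPy's SLSQP routine stands in for the thesis's specific solver.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: split a power budget across 4 Femtocell channels to
# maximize the total throughput sum(log2(1 + g_i * p_i / n0)).
gains = np.array([1.0, 0.6, 0.3, 0.1])  # illustrative channel gains
n0 = 0.1                                # illustrative noise power
p_max = 2.0                             # total power budget

def neg_throughput(p):
    # Negated sum-rate, since minimize() minimizes
    return -np.sum(np.log2(1.0 + gains * p / n0))

result = minimize(
    neg_throughput,
    x0=np.full(4, p_max / 4),                  # start from an equal split
    method="SLSQP",                            # an SQP-family solver
    bounds=[(0.0, p_max)] * 4,                 # per-channel power is non-negative
    constraints={"type": "ineq", "fun": lambda p: p_max - p.sum()},
)
p_opt = result.x
```

As expected for this water-filling-style objective, the solver pours most of the budget into the strongest channels and starves the weakest one.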

    Alternative and Mainstream Media

    This book is available as open access through the Bloomsbury Open Access programme and is available on www.bloomsburycollections.com. Historically, alternative media have been viewed as fundamental, albeit at times culturally peripheral, forces in social change. In this book, however, Kenix argues that these media do not uniformly subvert the hierarchies of access that are so central to mainstream media - in fact, their journalistic norms and routines have always been based on the professional standards of the mainstream. Kenix goes on to posit the perception of 'mainstream' and 'alternative' as a misconception. She argues that, although alternative media can - and do - construct distinct alternative communications, they have always existed on the same continuum as the mainstream and the two will continue to converge. Through comparative analysis, this book argues that many alternative and mainstream media are merging to create a continuous spectrum rooted in commercial ideology. Indeed, much of what is now considered alternative media actually draws very little from principles of the independent press, whereas many contemporary mainstream media now use communication techniques more commonly associated with media that do not operate for financial gain. This book puts forward a controversial but convincing argument about the relationship between alternative and mainstream media, drawing on examples from the UK, US, Australia and New Zealand to strengthen and develop the central premise.

    Applications In Sentiment Analysis And Machine Learning For Identifying Public Health Variables Across Social Media

    Twitter, a popular social media outlet, has evolved into a vast source of linguistic data, rich with opinion, sentiment, and discussion. We mined data from several public Twitter endpoints to identify content relevant to healthcare providers and public health regulatory professionals. We began by compiling content related to electronic nicotine delivery systems (or e-cigarettes), as these had become popular alternatives to tobacco products. There was an apparent need to remove high-frequency tweeting entities, called bots, that would spam messages and advertisements and fabricate testimonials. Algorithms were constructed using natural language processing and machine learning to sift human responses from automated accounts with high degrees of accuracy. We found that the average number of hyperlinks per tweet, the average character dissimilarity between each individual's tweets, and the rate of introduction of unique words were valuable attributes for identifying automated accounts. We performed 10-fold cross-validation and measured the performance of each set of tweet features at various bin sizes, the best of which performed with 97% accuracy. These methods were used to isolate automated content related to the advertising of electronic cigarettes. A rich taxonomy of automated entities, including robots, cyborgs, and spammers, each with different measurable linguistic features, was categorized. Electronic cigarette related posts were classified as automated or organic, and their content was investigated with a hedonometric sentiment analysis. The overwhelming majority (≈ 80%) were automated, many of which were commercial in nature. Others used false testimonials that were sent directly to individuals as a personalized form of targeted marketing.
Many tweets advertised nicotine vaporizer fluid (or e-liquid) in various “kid-friendly” flavors, including 'Fudge Brownie', 'Hot Chocolate', and 'Circus Cotton Candy', along with every imaginable flavor of fruit, all of which were long ago banned for traditional tobacco products. Others offered free trials, as well as incentives to retweet and spread the post among their own networks. Free prize giveaways were also hosted, with raffle tickets issued for sharing the tweet. Given the large youth presence on the public social media platform, this was evidence that the marketing of electronic cigarettes needed considerable regulation. Twitter has since officially banned all electronic cigarette advertising on its platform. Social media has the capacity to provide the healthcare industry with valuable feedback from patients who reveal and express their medical decision-making process, as well as self-reported quality of life indicators, both during and after treatment. We have studied several active cancer patient populations discussing their experiences with the disease as well as survivorship. We experimented with a Convolutional Neural Network (CNN) as well as logistic regression to classify tweets as patient-related. This led to a sample of 845 breast cancer survivor accounts to study over 16 months. We found positive sentiments regarding patient treatment, raising support, and spreading awareness. A large portion of negative sentiments concerned political legislation that could result in loss of healthcare coverage. We refer to these online public testimonies as “Invisible Patient Reported Outcomes” (iPROs), because they carry relevant indicators yet are difficult to capture by conventional means of self-reporting. Our methods can be readily applied across disciplines to obtain insights into a particular group of public opinions.
Capturing iPROs and public sentiments from online communication can help inform healthcare professionals and regulators, leading to more connected and personalized treatment regimens. Social listening can provide valuable insights into public health surveillance strategies.
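The account-level bot features named above (hyperlinks per tweet, pairwise content dissimilarity, and the rate at which new words appear) can be sketched as simple aggregates; the dissimilarity measure and the sample tweets below are invented for illustration and are not the study's actual definitions.

```python
import re

def bot_features(tweets):
    """Compute three account-level features of the kind the abstract describes."""
    # Average number of hyperlinks per tweet
    links = sum(len(re.findall(r"https?://\S+", t)) for t in tweets) / len(tweets)

    # Character-level dissimilarity between tweets, here 1 - Jaccard overlap
    # of character sets (an illustrative stand-in for the study's measure)
    def dissim(a, b):
        sa, sb = set(a), set(b)
        return 1.0 - len(sa & sb) / len(sa | sb) if sa | sb else 0.0

    pairs = [(a, b) for i, a in enumerate(tweets) for b in tweets[i + 1:]]
    avg_dissim = sum(dissim(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

    # Rate at which previously unseen words are introduced
    seen, new_words = set(), 0
    for t in tweets:
        words = set(t.lower().split())
        new_words += len(words - seen)
        seen |= words
    intro_rate = new_words / len(tweets)

    return {"links_per_tweet": links, "avg_dissim": avg_dissim, "intro_rate": intro_rate}

# A spam-like account repeating one promotional tweet scores low on
# dissimilarity and vocabulary growth but high on links per tweet
spammy = ["Buy e-liquid now https://example.com"] * 5
feats = bot_features(spammy)
```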

    Winter/Spring 2022


    A geo-database for potentially polluting marine sites and associated risk index

    The increasing availability of geospatial marine data provides an opportunity for hydrographic offices to contribute to the identification of Potentially Polluting Marine Sites (PPMS). To adequately manage these sites, a PPMS Geospatial Database (GeoDB) application was developed to collect and store relevant information suitable for site inventory and geospatial analysis. The benefits of structuring the data to conform to the Universal Hydrographic Data Model (IHO S-100) and of using the Geography Markup Language (GML) for encoding are presented. A storage solution is proposed using a GML-enabled spatial relational database management system (RDBMS). In addition, an example risk index methodology is provided based on the defined data structure. This example was implemented with scripts containing SQL statements, executed from a cross-platform C++ application based on open-source libraries and called PPMS GeoDB Manager.

    Early Phases of Corporate Venturing

    This work is submitted for PhD assessment at the Doctoral School of Knowledge and Management, Department of Management, Politics and Philosophy, Copenhagen Business School, in partial fulfilment of the requirements for the degree of PhD. The aim of this thesis is to put forward a combination of new theoretical perspectives and management methods which together provide better insight into the early stages of corporate venturing. This includes new perspectives on corporate venturing, as the thesis further develops academic and practical tools for decision-making processes. The thesis makes two overall contributions to the current corporate venturing literature. First, it focuses on the important but overlooked early phases of the venturing process. These comprise the conditions necessary for developing new innovative venture opportunities (the venture base), the discovery of investment opportunities, and finally the preparation for evaluating investment opportunities. The venture base consists of the characteristics and conditions of a firm and its environment that can serve as resources for starting new ventures. Given the innovative nature of ventures, discovering entrepreneurial opportunities becomes a central challenge involving a diverse group of actors. The early phase also includes specific knowledge-creating activities that must be carried out in order to evaluate the many investment opportunities. Second, the thesis brings new perspectives on how the activities of the early phases are connected in the value chain. In contrast to earlier literature, where venturing processes are presented as linear and predictable, this work demonstrates that a more dynamic approach is needed, one particularly focused on how knowledge processes and learning-enhancing activities drive the venturing process, from the development of new ideas until their significance is evaluated.
These contributions draw on theoretical perspectives from the current corporate venturing literature (such as Block and MacMillan, 1993; Burgelman, 1984, 1996; Chesbrough, 2000; Zahra, 1991) and on complementary literature that provides a network and knowledge perspective (such as Gibbons et al., 1994; Kline and Rosenberg, 1986; Powell et al., 1996). These perspectives are particularly powerful in their reasoning about innovation processes and evolutionary development. They also bring new insight into the type of learning process that corporate ventures are part of when they develop and evaluate new venture opportunities. In contrast to a traditional monographic PhD thesis, this thesis presents its results in five independent but interconnected studies, published in international peer-reviewed journals and book chapters. In addition to these studies, the thesis also contains a theoretical introduction and methodology, a literature review, and a conclusion.

    Question Answering using Syntactic Patterns in a Contextual Search Engine

    Question Answering (QA) systems promise to enhance both usability and accuracy when searching for knowledge. This thesis presents a prototype QA system built to leverage the extraction capabilities of a modern, context-aware search platform, Fast ESP. Questions in plain English are transformed into queries that target specific entities in the text corresponding to the identified answer types. A small set of unified patterns is demonstrated to be adequate for classifying a wide variety of syntactic constructs. To verify the answers, a semantic lexicon is compiled using an automated procedure. The whole solution is based on pattern matching and is presented as a viable alternative to deeper linguistic methods.
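Pattern-based classification of questions into answer types, as described, can be illustrated with a few regular expressions; the patterns and type names below are invented for illustration and are not the prototype's actual rule set.

```python
import re

# A few unified syntactic patterns mapping plain-English question forms
# to the entity type a downstream query would target (illustrative only)
PATTERNS = [
    (re.compile(r"^who\b", re.I), "PERSON"),
    (re.compile(r"^where\b", re.I), "LOCATION"),
    (re.compile(r"^when\b", re.I), "DATE"),
    (re.compile(r"^how (?:many|much)\b", re.I), "QUANTITY"),
    (re.compile(r"^what\b", re.I), "THING"),
]

def answer_type(question):
    """Return the expected answer type for a question, or UNKNOWN."""
    for pattern, etype in PATTERNS:
        if pattern.search(question):
            return etype
    return "UNKNOWN"

qtype = answer_type("Who invented the telephone?")
```

The returned type would then constrain the search to matching extracted entities, rather than plain keyword hits.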