5,019 research outputs found

    An Online Decoding Schedule Generating Algorithm for Successive Cancellation Decoder of Polar Codes

    Successive cancellation (SC) is the first and most widely known decoder of polar codes, and it has received a lot of attention recently. However, its decoding schedule generating algorithms are still primitive: they are not only complex but also offline. This paper proposes a simple, online algorithm to generate the decoding schedule of the SC decoder. First, the dependencies among likelihood ratios (LRs) are explored, which leads to the discovery of a sharing factor. Second, based on the online calculation of the sharing factor, the proposed algorithm is presented; it is based neither on a depth-first traversal of the scheduling tree nor on a recursive construction. Comparisons with existing algorithms show that the proposed algorithm has the advantages of being online and of requiring far less memory for the decoding schedule.
    Comment: 16 pages, 2 figures
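
    As an illustration of what an online schedule means here, the activation pattern of a textbook SC decoder can be read directly off the bit index: for bit i > 0, the highest stage to refresh is set by the number of trailing zeros of i, with a g-node at that stage and f-nodes below it. The Python sketch below generates such a schedule online, with no scheduling tree and no recursion; it shows the general principle only and is not the sharing-factor algorithm proposed in the paper.

```python
def sc_schedule(n):
    """Yield, for each bit index i of a length-2**n polar code, the list of
    (stage, op) pairs an SC decoder evaluates before deciding bit i.
    Stages run from n (channel side) down to 1 (bit side); op is 'f' or 'g'.
    The schedule is computed online from the bit index alone."""
    N = 1 << n
    for i in range(N):
        if i == 0:
            # first bit: propagate f-computations from the channel stage down
            yield i, [(s, 'f') for s in range(n, 0, -1)]
        else:
            t = (i & -i).bit_length() - 1      # number of trailing zeros of i
            ops = [(t + 1, 'g')]               # highest refreshed stage uses g
            ops += [(s, 'f') for s in range(t, 0, -1)]
            yield i, ops

# Example: schedule for an N = 8 SC decoder
for i, ops in sc_schedule(3):
    print(i, ops)
```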

    Estimating HIV Epidemics for Sub-National Areas

    As the global HIV pandemic enters its fourth decade, increasing numbers of surveillance sites have been established, allowing countries to examine the epidemic at a finer scale, e.g., at sub-national levels. Currently, epidemic models are applied independently to the sub-national areas within a country. However, the availability and quality of the data vary widely across areas, which leads to biased and unreliable estimates for areas with very little data. We propose to overcome this issue by introducing dependence of the parameters across areas in a mixture model. The joint distribution of the parameters in multiple areas can be approximated directly from the results of independent fits, without needing to refit the data or unpack the software. As a result, the mixture model has better predictive ability than the independent model, as shown in examples from multiple countries in Sub-Saharan Africa.
    Comment: arXiv admin note: substantial text overlap with arXiv:1411.421
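
    To make the "reuse independent fits without refitting" idea concrete, the toy sketch below pools stored posterior samples from hypothetical per-area fits into a weighted mixture. The area names, Beta posteriors, and population weights are all invented for illustration; this is not the paper's mixture model, only a picture of how stored independent-fit output can be recombined without touching the fitting software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Posterior samples of a parameter (e.g. HIV prevalence) from *independent*
# per-area fits -- hypothetical numbers standing in for real model output.
area_samples = {
    "area_A": rng.beta(20, 180, size=4000),   # data-rich area, tight posterior
    "area_B": rng.beta(3, 27, size=4000),     # data-poor area, wide posterior
    "area_C": rng.beta(12, 108, size=4000),
}
weights = {"area_A": 0.5, "area_B": 0.2, "area_C": 0.3}  # e.g. population shares

def mixture_draws(area_samples, weights, n_draws, rng):
    """Approximate a country-level distribution as a weighted mixture of the
    per-area posteriors, using only the stored samples -- no refitting."""
    areas = list(area_samples)
    p = np.array([weights[a] for a in areas])
    picks = rng.choice(len(areas), size=n_draws, p=p / p.sum())
    return np.array([rng.choice(area_samples[areas[k]]) for k in picks])

draws = mixture_draws(area_samples, weights, 10000, rng)
print("country-level prevalence: mean %.3f, 95%% interval (%.3f, %.3f)"
      % (draws.mean(), *np.quantile(draws, [0.025, 0.975])))
```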

    A Hierarchical Model for Estimating HIV Epidemics

    As the global HIV pandemic enters its fourth decade, increasing numbers of surveillance sites have been established, allowing countries to examine the epidemic at a finer scale, e.g., at the sub-national level. However, epidemic models have been applied independently to the sub-national areas within countries. An important technical barrier is that the availability and quality of the data vary widely from area to area, and many areas lack the data needed to derive stable and reliable results. To improve the accuracy of the results in areas with little data, we propose a hierarchical model that uses information efficiently by assuming that the epidemics share similar characteristics across areas within one country. The joint distribution of the parameters in the hierarchical model can be approximated directly from the results of independent fits without needing to refit the data. As a result, the hierarchical model has better predictive ability than the independent model, as shown in examples from multiple countries in Sub-Saharan Africa.
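
    A minimal sketch of the borrowing of strength a hierarchical model provides, built only from the outputs of independent fits: assuming the independent fits used flat priors, each area's posterior samples are importance-reweighted by an empirical country-level normal density, which shrinks data-poor areas toward the common mean. The numbers and the normal-normal form are assumptions for illustration, not the paper's model or its approximation scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Posterior samples from independent per-area fits (hypothetical), assumed to
# have used flat priors so each sample cloud is proportional to the likelihood.
fits = {
    "area_A": rng.normal(0.10, 0.01, 5000),   # lots of data -> tight posterior
    "area_B": rng.normal(0.22, 0.08, 5000),   # little data  -> wide posterior
    "area_C": rng.normal(0.12, 0.02, 5000),
}

# Empirical country-level distribution from the per-area point estimates.
means = np.array([s.mean() for s in fits.values()])
mu, tau = means.mean(), means.std(ddof=1)

def partially_pooled(samples, mu, tau):
    """Reweight an area's independent-fit samples by the country-level normal
    density, approximating a hierarchical posterior without refitting."""
    w = np.exp(-0.5 * ((samples - mu) / tau) ** 2)
    w /= w.sum()
    mean = np.sum(w * samples)
    sd = np.sqrt(np.sum(w * (samples - mean) ** 2))
    return mean, sd

for area, s in fits.items():
    m, sd = partially_pooled(s, mu, tau)
    print(f"{area}: independent {s.mean():.3f} -> pooled {m:.3f} (sd {sd:.3f})")
```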

    Efficient Bit Sifting Scheme of Post-processing in Quantum Key Distribution

    Bit sifting is an important step in the post-processing of Quantum Key Distribution (QKD); its function is to sift out the undetected original keys. The communication traffic of bit sifting has an essential impact on the net secure key rate of a practical QKD system, and it faces unprecedented challenges as the repetition frequency of the quantum channel increases rapidly. In this paper, we present an efficient bit sifting scheme whose core is a lossless source coding algorithm. Both theoretical analysis and experimental results demonstrate that the performance of our scheme approaches the Shannon limit. Our scheme can greatly decrease the communication traffic of the post-processing of a QKD system, which means it can decrease the secure key consumption for classical channel authentication and increase the net secure key rate of the QKD system. Meanwhile, it can greatly relieve the storage pressure of the system, especially for the device on Alice's side. Some recommendations on applying our scheme to representative practical QKD systems are also provided.
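
    The reason a lossless source coder helps is that detections are sparse: the sifting information is essentially a Bernoulli(p) stream with small detection probability p, whose entropy H(p) is far below one bit per pulse. The toy sketch below encodes the gaps between detected pulses with Elias-gamma codes and compares the cost with the raw bitmap and the Shannon limit; it only indicates the order of magnitude of the savings and is not the coding algorithm used in the paper.

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy of a Bernoulli(p) source, in bits per pulse."""
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def elias_gamma(n):
    """Elias-gamma code word (as a bit string) for a positive integer n."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def sift_encode(detected):
    """Encode the positions of detected pulses as Elias-gamma coded gaps."""
    idx = np.flatnonzero(detected)
    gaps = np.diff(np.concatenate(([-1], idx)))   # every gap is >= 1
    return "".join(elias_gamma(int(g)) for g in gaps)

rng = np.random.default_rng(2)
p = 0.005                                  # detection probability per pulse
pulses = 1_000_000
detected = rng.random(pulses) < p

code = sift_encode(detected)
print("raw bitmap    :", pulses, "bits")
print("gap code      :", len(code), "bits")
print("Shannon limit :", int(pulses * entropy_bits(p)), "bits")
```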

    Improved DC Programming Approaches for Solving the Quadratic Eigenvalue Complementarity Problem

    In this paper, we discuss the solution of the Quadratic Eigenvalue Complementarity Problem (QEiCP) using Difference of Convex (DC) programming approaches. We first show that QEiCP can be represented as a DC programming problem. Then we investigate different DC programming formulations of QEiCP and discuss their DC algorithms based on a well-known method, DCA. A new local DC decomposition is proposed, which aims at constructing a better DC decomposition with respect to the specific features of the target problem in neighborhoods of the iterates. This new procedure yields faster convergence and better precision of the computed solution. Numerical results illustrate the efficiency of the new DC algorithms in practice.
    Comment: 23 pages
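
    For readers unfamiliar with DCA, the sketch below runs the generic iteration on a toy one-dimensional DC program, minimizing x^4/4 - x^2 with g(x) = x^4/4 and h(x) = x^2: linearize h at the current iterate, then solve the resulting convex subproblem. The toy problem and its closed-form subproblem are chosen purely for illustration and are unrelated to the QEiCP formulations studied in the paper.

```python
import numpy as np

# Toy DC program: minimize f(x) = x**4/4 - x**2, written as g(x) - h(x)
# with g(x) = x**4/4 and h(x) = x**2, both convex.
def dca(x0, iters=30):
    """Generic DCA loop: take a subgradient of h at x_k, then solve the convex
    subproblem argmin_x g(x) - h'(x_k)*x, which here reduces to x**3 = h'(x_k)."""
    x = x0
    for _ in range(iters):
        y = 2.0 * x                 # subgradient of h at x_k
        x = np.cbrt(y)              # closed-form minimizer of g(x) - y*x
    return x

print(dca(0.5))    # -> 1.4142...  (a stationary point at sqrt(2))
print(dca(-3.0))   # -> -1.4142... (DCA is a local method)
```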

    Attention-based Temporal Weighted Convolutional Neural Network for Action Recognition

    Research in human action recognition has accelerated significantly since the introduction of powerful machine learning tools such as Convolutional Neural Networks (CNNs). However, effective and efficient methods for incorporating temporal information into CNNs are still being actively explored in the recent literature. Motivated by the popular recurrent attention models in natural language processing, we propose the Attention-based Temporal Weighted CNN (ATW), which embeds a visual attention model into a temporal weighted multi-stream CNN. The attention model is implemented simply as temporal weighting, yet it effectively boosts the recognition performance of video representations. Moreover, each stream in the proposed ATW framework can be trained end to end, with both the network parameters and the temporal weights optimized by stochastic gradient descent (SGD) with backpropagation. Our experiments show that the proposed attention mechanism contributes substantially to the performance gains by focusing on the more discriminative snippets, i.e., the more relevant video segments.
    Comment: 14th International Conference on Artificial Intelligence Applications and Innovations (AIAI 2018), May 25-27, 2018, Rhodes, Greece
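
    The temporal weighting itself is a one-line operation: softmax-normalized weights over the snippets, applied to the per-snippet class scores. The numpy sketch below shows this fusion step in isolation; the shapes, logits, and scores are made up, and in the actual ATW the weights are learned jointly with the CNN streams by SGD rather than fixed by hand.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_pool(snippet_scores, temporal_logits):
    """Fuse per-snippet class scores with temporal attention weights.
    snippet_scores : (T, C) class scores for T video snippets
    temporal_logits: (T,) logits, softmax-normalized into attention weights."""
    w = softmax(temporal_logits)             # attention weights, sum to 1
    return w @ snippet_scores                # (C,) video-level scores

rng = np.random.default_rng(3)
scores = rng.normal(size=(5, 10))            # 5 snippets, 10 action classes
logits = np.array([0.1, 2.0, 0.1, 0.1, 0.1]) # attends mostly to snippet 2
print(attention_pool(scores, logits))
```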

    What Can We Learn from the Travelers Data in Detecting Disease Outbreaks -- A Case Study of the COVID-19 Epidemic

    Background: Travel is a potent force in the emergence of disease. We discuss how traveler case reports can aid in the timely detection of a disease outbreak. Methods: Using the traveler data, we estimated several indicators of the epidemic that affect decision making and policy, including the exponential growth rate, the doubling time, and the probability of severe cases exceeding hospital capacity, in the initial phase of the COVID-19 epidemic in multiple countries. We imputed arrival dates when they were missing, compared the estimates from the traveler data with those from domestic data, and quantitatively evaluated the influence of each case report, and of knowing the arrival date, on the estimation. Findings: We estimated the travel origin's daily exponential growth rate and identified the date from which the growth rate was consistently above 0.1 (equivalent to a doubling time of less than 7 days). We found that those dates were very close to the dates on which critical decisions were made, such as city lock-downs and national emergency announcements. Using only the traveler data, if the assumed epidemic start date was reasonably accurate and the traveler sample was representative of the general population, the growth rate estimated from the traveler data was consistent with that from the domestic data. We also discuss situations in which the traveler data could lead to biased estimates. From the data influence study, we found that more recent travel cases had a larger influence on each day's estimate, and that the influence of each case report became smaller as more cases accumulated. We provide the minimum number of exported cases needed to determine whether the local epidemic growth rate is above a certain level, and we developed a user-friendly Shiny App to accommodate various scenarios.
    Comment: 25 pages, 6 figures
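
    The two headline indicators are simple to compute once case counts are lined up by day: a log-linear fit gives the exponential growth rate r, and the doubling time is ln(2)/r. The sketch below uses hypothetical counts and ignores the imputation of arrival dates and the case-influence analysis described in the paper.

```python
import numpy as np

def growth_rate(daily_cases):
    """Exponential growth rate r from a log-linear fit of daily case counts,
    and the implied doubling time ln(2)/r."""
    days = np.arange(len(daily_cases))
    r, _ = np.polyfit(days, np.log(daily_cases), 1)
    return r, np.log(2) / r

# Hypothetical early-epidemic counts from exported (traveler) cases
cases = np.array([2, 2, 3, 3, 4, 5, 6, 7, 9, 11, 13, 16])
r, t2 = growth_rate(cases)
print(f"growth rate {r:.3f}/day, doubling time {t2:.1f} days "
      f"({'above' if r > 0.1 else 'below'} the 0.1/day threshold)")
```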

    Thirty Years of the Network Scale-up Method

    Estimating the size of hard-to-reach populations is an important problem in many fields. The Network Scale-up Method (NSUM) is a relatively new approach that estimates the size of these hard-to-reach populations by asking respondents the question, "How many X's do you know?", where X is the population of interest (e.g., "How many female sex workers do you know?"). The answers to these questions form Aggregated Relational Data (ARD). The NSUM has been used to estimate the size of a variety of subpopulations, including female sex workers, drug users, and even children who have been hospitalized for choking. Within the network scale-up methodology, there is a multitude of estimators for the size of the hidden population, including direct estimators, maximum likelihood estimators, and Bayesian estimators. In this article, we first provide an in-depth analysis of ARD properties and the techniques used to collect the data. Then, we comprehensively review the different estimation methods in terms of the assumptions behind each model, the relationships between the estimators, and the practical considerations of implementing the methods. Finally, we summarize the dominant methods, give an extensive list of applications, and discuss the open problems and potential research directions in this area.
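
    The simplest of the direct estimators mentioned above is the Killworth-style ratio estimator: estimate each respondent's personal network size from questions about subpopulations of known size, then scale up the reported counts of the hidden population. The sketch below uses hypothetical survey numbers.

```python
import numpy as np

def nsum_direct(y_hidden, y_known, known_sizes, N):
    """Killworth-style direct NSUM estimator.
    y_hidden[i]    : respondent i's answer to "How many X's do you know?"
    y_known[i, k]  : respondent i's count for known subpopulation k
    known_sizes[k] : true size of known subpopulation k
    N              : total population size
    Returns the estimated size of the hidden population."""
    # Step 1: estimate each respondent's personal network size (degree)
    degrees = N * y_known.sum(axis=1) / known_sizes.sum()
    # Step 2: scale up the hidden-population counts by the summed degrees
    return N * y_hidden.sum() / degrees.sum()

# Hypothetical survey of 4 respondents and 3 known subpopulations
y_hidden = np.array([1, 0, 2, 1])
y_known = np.array([[3, 1, 5],
                    [1, 0, 2],
                    [4, 2, 6],
                    [2, 1, 3]])
known_sizes = np.array([20_000, 5_000, 50_000])
print(nsum_direct(y_hidden, y_known, known_sizes, N=1_000_000))  # -> 10000.0
```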

    Evaluating the relative contribution of data sources in a Bayesian analysis with the application of estimating the size of hard to reach populations

    When using multiple data sources in an analysis, it is important to understand the influence of each data source on the analysis, and the consistency of the data sources with each other and with the model. We suggest the use of a retrospective value-of-information framework to address such concerns. Value-of-information methods can be computationally difficult; we illustrate computational methods that allow them to be applied even in relatively complicated settings. In illustrating the proposed methods, we focus on an application to estimating the size of hard-to-reach populations. Specifically, we consider estimating the number of injection drug users in Ukraine by combining all available data sources, spanning over half a decade and numerous sub-national areas of Ukraine. This application is of interest to public health researchers because this hard-to-reach population plays a large role in the spread of HIV. We apply a Bayesian hierarchical model and evaluate the contribution of each data source in terms of absolute influence, expected influence, and level of surprise. Finally, we apply value-of-information methods to inform suggestions on future data collection.
    Comment: 24 pages, 7 figures, 2 tables
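
    One simple way to probe the influence of a single data source, given samples from the full posterior and independent likelihood contributions, is to approximate the leave-one-source-out posterior by importance-reweighting the samples with the reciprocal of that source's likelihood. The sketch below does this for a toy log-scale size parameter with invented source summaries; it is a generic illustration (and can be unstable when a source is highly informative), not the retrospective value-of-information computation developed in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Posterior samples of log population size from a hypothetical full-data fit.
theta = rng.normal(10.0, 0.15, 20000)

# Each data source is summarized as an independent normal likelihood on the
# log scale: (observed value, sd). Purely illustrative numbers.
sources = {"survey": (10.1, 0.2), "program_data": (9.7, 0.3), "mapping": (10.0, 0.4)}

def mean_without(theta, obs, sd):
    """Posterior mean with one source removed, via importance reweighting of
    the full-posterior samples by 1 / that source's likelihood contribution."""
    w = 1.0 / norm.pdf(obs, loc=theta, scale=sd)
    w /= w.sum()
    return np.sum(w * theta)

print(f"full posterior mean: {theta.mean():.3f}")
for name, (obs, sd) in sources.items():
    print(f"without {name:13s}: {mean_without(theta, obs, sd):.3f}")
```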

    Modeling the Marked Presence-only Data: A Case Study of Estimating the Female Sex Worker Size in Malawi

    Certain subpopulations, such as female sex workers (FSW), men who have sex with men (MSM), and people who inject drugs (PWID), often have a higher prevalence of HIV/AIDS and are difficult to map directly due to stigma, discrimination, and criminalization. Fine-scale mapping of these populations contributes to the progress towards reducing inequalities and ending the AIDS epidemic. In 2016 and 2017, the PLACE surveys were conducted at 3,290 venues in 20 of the 28 districts in Malawi to estimate FSW sizes. These venues represent a presence-only data set in which, instead of knowing both where people live and where they do not (presence-absence data), only information about visited locations is available. In this study, we develop a Bayesian model for presence-only data and use the PLACE data to estimate the FSW size and its uncertainty interval at a 1.5 km × 1.5 km resolution for all of Malawi. The estimates can also be aggregated to any desired level (city/district/region) for implementing targeted HIV prevention and treatment programs in FSW communities, which have been successful in lowering the incidence of HIV and other sexually transmitted infections.
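
    The aggregation step mentioned at the end is straightforward once posterior draws of the count in each grid cell are available: sum the draws over the cells belonging to a district, draw by draw, so the district totals inherit the posterior uncertainty. The sketch below uses invented cell draws and district assignments, not the fitted Malawi surface.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical posterior draws of FSW counts for six 1.5 km x 1.5 km grid cells
# (rows: 4000 posterior draws, columns: cells), e.g. from a fitted intensity surface.
cell_draws = rng.poisson(lam=[5, 12, 3, 40, 8, 21], size=(4000, 6))
district_of_cell = np.array([0, 0, 0, 1, 1, 1])   # which district each cell is in

# Aggregate: sum draws over the cells in each district, draw by draw, so the
# district-level uncertainty intervals come from the same posterior.
for d in np.unique(district_of_cell):
    totals = cell_draws[:, district_of_cell == d].sum(axis=1)
    lo, hi = np.quantile(totals, [0.025, 0.975])
    print(f"district {d}: median {np.median(totals):.0f}, 95% CI ({lo:.0f}, {hi:.0f})")
```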