
    Coded Parity Packet Transmission Method for Two Group Resource Allocation

    Gap value control is investigated when the numbers of source and parity packets are adjusted in a concatenated coding scheme while keeping the overall coding rate fixed. Packet-based outer codes, generated from bit-wise XOR combinations of the source packets, are used to adjust the numbers of source and parity packets. Given the source packets, the number of parity packets (bit-wise XOR combinations of the source packets) can be adjusted so that the gap value, which measures the difference between the theoretical and the required signal-to-noise ratio (SNR), is controlled without changing the actual coding rate. Consequently, the required SNR is reduced, yielding a lower energy requirement to realize the transmission data rate. Integrating this coding technique with a two-group resource allocation scheme yields efficient utilization of the total energy to further improve the data rates. With a relatively small set of discrete data rates, the system throughput achieved by the proposed two-group loading scheme is observed to be approximately equal to that of the existing loading scheme, which operates with a much larger set of discrete data rates. The gain obtained by the proposed scheme over the existing equal-rate and equal-energy loading scheme is approximately 5 dB. Furthermore, a successive interference cancellation scheme is also integrated with this coding technique, which can be used to decode and provide consecutive symbols for inter-symbol interference (ISI) and multiple access interference (MAI) mitigation. With this integrated scheme, the computational complexity is significantly reduced by eliminating matrix inversions. In the same manner, the proposed coding scheme is also incorporated into a novel fixed energy loading, which distributes packets over parallel channels, to control the gap value of the data rates even though the SNR varies from one code channel to another.
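
    To illustrate the parity-generation step described above, the following minimal Python sketch forms parity packets as bit-wise XOR combinations of source packets while keeping the overall rate fixed; the packet sizes, the pairwise combination pattern, and the function names are illustrative assumptions, not the paper's actual construction.

    import itertools

    def xor_packets(packets):
        # Bit-wise XOR of equal-length byte packets.
        out = bytearray(len(packets[0]))
        for p in packets:
            for i, b in enumerate(p):
                out[i] ^= b
        return bytes(out)

    def make_parity_packets(source_packets, num_parity):
        # Each parity packet here XORs a different pair of source packets;
        # the combination pattern used in the paper is assumed, not quoted.
        parity = []
        for combo in itertools.combinations(range(len(source_packets)), 2):
            if len(parity) == num_parity:
                break
            parity.append(xor_packets([source_packets[i] for i in combo]))
        return parity

    # Example: 4 source packets plus 2 parity packets gives an overall rate of 4/6.
    src = [bytes([i]) * 8 for i in range(4)]
    par = make_parity_packets(src, num_parity=2)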

    Mixture of latent trait analyzers for model-based clustering of categorical data

    Model-based clustering methods for continuous data are well established and commonly used in a wide range of applications. However, model-based clustering methods for categorical data are less standard. Latent class analysis is a commonly used method for model-based clustering of binary and/or categorical data, but due to an assumed local independence structure there may not be a correspondence between the estimated latent classes and groups in the population of interest. The mixture of latent trait analyzers model extends latent class analysis by assuming a model for the categorical response variables that depends on both a categorical latent class and a continuous latent trait variable; the discrete latent class accommodates group structure and the continuous latent trait accommodates dependence within these groups. Fitting the mixture of latent trait analyzers model is potentially difficult because the likelihood function involves an integral that cannot be evaluated analytically. We develop a variational approach for fitting the mixture of latent trait analyzers model, which provides an efficient model-fitting strategy. The mixture of latent trait analyzers model is demonstrated on the analysis of data from the National Long Term Care Survey (NLTCS) and voting in the U.S. Congress. The model is shown to yield intuitive clustering results and it gives a much better fit than either latent class analysis or latent trait analysis alone.
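
    For concreteness, the binary-response form of the mixture of latent trait analyzers model can be sketched as follows; the notation is assumed here, following the standard presentation of such models rather than quoting the paper:

    p(y_n) = \sum_{g=1}^{G} \eta_g \int \prod_{m=1}^{M} \pi_{gm}(\theta_n)^{y_{nm}} \bigl(1 - \pi_{gm}(\theta_n)\bigr)^{1 - y_{nm}} \, \phi(\theta_n) \, d\theta_n,
    \qquad \mathrm{logit}\, \pi_{gm}(\theta_n) = b_{gm} + w_{gm}^{\top} \theta_n,

    where \eta_g are the latent class proportions, \theta_n is the continuous latent trait with standard normal density \phi, and the integral over \theta_n is exactly the term that cannot be evaluated analytically, which is what motivates the variational approximation.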

    A muon-track reconstruction exploiting stochastic losses for large-scale Cherenkov detectors

    IceCube is a cubic-kilometer Cherenkov telescope operating at the South Pole. The main goal of IceCube is the detection of astrophysical neutrinos and the identification of their sources. High-energy muon neutrinos are observed via the secondary muons produced in charged-current interactions with nuclei in the ice. Currently, the best-performing muon track directional reconstruction is based on a maximum likelihood method using the arrival time distribution of Cherenkov photons registered by the experiment's photomultipliers. A known systematic shortcoming of the prevailing method is the assumption of a continuous energy loss along the muon track. However, at energies above 1 TeV the light yield from muons is dominated by stochastic showers. This paper discusses a generalized ansatz in which the expected arrival time distribution is parametrized by a stochastic muon energy loss pattern. This more realistic parametrization of the loss profile improves the muon angular resolution by up to 20% for through-going tracks and by up to a factor of 2 for starting tracks over existing algorithms. Additionally, the procedure to estimate the directional reconstruction uncertainty has been improved to be more robust against numerical errors.
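
    The following toy Python sketch conveys the idea of parametrizing the expected light yield by a per-segment stochastic energy loss pattern rather than a single continuous-loss term; the light-propagation model, constants, and function names are simplified assumptions and are not taken from the actual IceCube reconstruction.

    import numpy as np

    def expected_charge(sensor_pos, track_vertices, segment_losses, yield_per_gev=1.0):
        # Expected photoelectrons at one sensor as a sum over discrete loss segments.
        # track_vertices: (K+1, 3) points splitting the track into K segments.
        # segment_losses: (K,) energy lost in each segment (GeV); values are free parameters.
        mids = 0.5 * (track_vertices[:-1] + track_vertices[1:])
        dists = np.linalg.norm(mids - sensor_pos, axis=1)
        # Toy light propagation: 1/r falloff with exponential absorption (assumed).
        return np.sum(yield_per_gev * segment_losses * np.exp(-dists / 100.0) / np.maximum(dists, 1.0))

    def neg_log_likelihood(observed_charges, sensor_positions, track_vertices, segment_losses):
        # Poisson likelihood of the observed charges given the loss pattern;
        # minimizing this over track geometry and losses is the reconstruction idea.
        nll = 0.0
        for q, pos in zip(observed_charges, sensor_positions):
            mu = expected_charge(pos, track_vertices, segment_losses)
            nll += mu - q * np.log(mu + 1e-12)
        return nll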

    “Secure” Log-Linear and Logistic Regression Analysis of Distributed Databases

    The machine learning community has focused on confidentiality problems associated with statistical analyses that “integrate” data stored in multiple, distributed databases where there are barriers to simply integrating the databases. This paper discusses various techniques which can be used to perform statistical analysis for categorical data, especially in the form of log-linear analysis and logistic regression over partitioned databases, while limiting confidentiality concerns. We show how ideas from the current literature that focus on “secure” summations and secure regression analysis can be adapted or generalized to the categorical data setting.
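
    As a concrete picture of the “secure” summation idea that such analyses build on, here is a toy Python sketch in which each party masks its local count with a random value before contributing to a running total, and the masks cancel at the end so only the sum is revealed; the protocol shown is a simplified illustration, not the paper's exact construction.

    import secrets

    MODULUS = 2**61 - 1  # work modulo a large value so masked contributions reveal nothing

    def secure_sum(party_values):
        # Each party holds one local cell count; only the total becomes known.
        masks = [secrets.randbelow(MODULUS) for _ in party_values]
        running = 0
        # First pass: each party adds (value + mask) to the running total ...
        for v, m in zip(party_values, masks):
            running = (running + v + m) % MODULUS
        # ... second pass: each party removes its own mask.
        for m in masks:
            running = (running - m) % MODULUS
        return running

    # e.g. three sites contributing counts for one contingency-table cell
    print(secure_sum([12, 7, 30]))  # 49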