83 research outputs found

    Iteratively Decoded Irregular Variable Length Coding and Sphere-Packing Modulation-Aided Differential Space-Time Spreading

    No full text
    In this paper, we consider serially concatenated and iteratively decoded Irregular Variable Length Coding (IrVLC) combined with precoded Differential Space-Time Spreading (DSTS) aided multidimensional Sphere Packing (SP) modulation, designed for near-capacity joint source and channel coding. The IrVLC scheme comprises a number of component Variable Length Coding (VLC) codebooks with different coding rates, each encoding a particular fraction of the input source symbol stream. The relative lengths of these source-stream fractions can be chosen with the aid of EXtrinsic Information Transfer (EXIT) charts in order to shape the EXIT curve of the IrVLC codec, so that an open EXIT chart tunnel may be created even at low Eb/N0 values close to the capacity bound of the channel. These schemes are shown to be capable of operating within 0.9 dB of the DSTS-SP channel's capacity bound, using an average interleaver length of 113,100 bits and an effective bandwidth efficiency of 1 bit/s/Hz, assuming ideal Nyquist filtering. By contrast, the equivalent-rate regular VLC-based benchmarker scheme was found to operate at 1.4 dB from the capacity bound, which is about 1.56 times the corresponding discrepancy of the proposed IrVLC-aided scheme.
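The EXIT-curve shaping step described above can be sketched numerically: treat the irregular outer code's EXIT curve as a weighted superposition of its component curves, and search for source-stream fractions that keep the inner (demapper) curve above it. The curves below are assumed illustrative shapes, not measured DSTS-SP or VLC characteristics, and the overall-rate constraint on the weights is omitted for brevity.

```python
import numpy as np

# Illustrative EXIT curves (extrinsic information I_E versus a priori I_A)
# for three hypothetical component VLC codebooks and for the inner demapper
# at some fixed Eb/N0 -- the shapes are assumptions, not measured data.
I_A = np.linspace(0.0, 1.0, 101)
components = np.array([
    I_A ** 0.5,   # low-rate component: rises early
    I_A,          # medium-rate component
    I_A ** 2,     # high-rate component: rises late
])
inner = 0.3 + 0.7 * I_A  # inner demapper curve, drawn in the outer code's axes

def outer_curve(weights):
    """EXIT curve of the irregular code: the superposition of the component
    curves, weighted by the source-stream fractions."""
    return weights @ components

def tunnel_open(weights):
    """The iterative decoder can converge when the inner curve stays strictly
    above the outer curve everywhere before the (1, 1) point."""
    return bool(np.all(inner[:-1] > outer_curve(weights)[:-1]))

print(tunnel_open(np.array([1.0, 0.0, 0.0])))  # single low-rate code: False
print(tunnel_open(np.array([0.2, 0.3, 0.5])))  # shaped mixture: True
```

The point of the irregular construction is visible even in this toy setting: no single component opens the tunnel as well as a mixture whose superposed curve hugs the inner curve from below.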

    Coreset Clustering on Small Quantum Computers

    Full text link
    Many quantum algorithms for machine learning require access to classical data in superposition. However, for many natural data sets and algorithms, the overhead required to load the data set in superposition can erase any potential quantum speedup over classical algorithms. Recent work by Harrow introduces a new paradigm in hybrid quantum-classical computing to address this issue, relying on coresets to minimize the data loading overhead of quantum algorithms. We investigate using this paradigm to perform k-means clustering on near-term quantum computers, by casting it as a QAOA optimization instance over a small coreset. We compare the performance of this approach to classical k-means clustering both numerically and experimentally on IBM Q hardware. We are able to find data sets where coresets work well relative to random sampling and where QAOA could potentially outperform standard k-means on a coreset. However, finding data sets where both coresets and QAOA work well -- which is necessary for a quantum advantage over k-means on the entire data set -- appears to be challenging.
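The coreset idea can be sketched classically: cluster a small weighted subsample instead of the full data set, then evaluate the resulting centroids on all the data. This sketch substitutes a uniform subsample for the sensitivity-sampling coresets used in the paper, and weighted Lloyd's k-means for the QAOA step; all data is synthetic.

```python
import numpy as np

def kmeans(points, weights, k, iters=50, seed=0):
    """Weighted Lloyd's k-means (a classical stand-in for the QAOA step)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            mask = labels == j
            if mask.any():
                centroids[j] = np.average(points[mask], axis=0,
                                          weights=weights[mask])
    return centroids

def cost(points, centroids):
    """Standard k-means objective evaluated on the full data set."""
    dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
    return float((dists.min(axis=1) ** 2).sum())

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.5, (500, 2)),   # blob A
                  rng.normal(5.0, 0.5, (500, 2))])  # blob B

# "Coreset": a uniform subsample with equal weights -- a simple stand-in
# for the sensitivity-sampling construction referenced in the abstract.
idx = rng.choice(len(data), 20, replace=False)
coreset, w = data[idx], np.full(20, len(data) / 20)

full_centroids = kmeans(data, np.ones(len(data)), k=2)
coreset_centroids = kmeans(coreset, w, k=2)
# On well-separated blobs the coreset solution is typically near-optimal.
print(cost(data, coreset_centroids) / cost(data, full_centroids))
```

The ratio printed at the end is the quantity of interest: how much clustering quality is lost by only ever optimizing over the 20-point summary.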

    Goal-Oriented Quantization: Analysis, Design, and Application to Resource Allocation

    Full text link
    In this paper, the situation in which a receiver has to execute a task from a quantized version of the information source of interest is considered. The task is modeled by the minimization problem of a general goal function f(x; g), for which the decision x has to be taken from a quantized version of the parameters g. This problem is relevant in many applications, e.g., radio resource allocation (RA), high-spectral-efficiency communications, controlled systems, or data clustering in the smart grid. By resorting to high-resolution (HR) analysis, it is shown how to design a quantizer that minimizes the gap between the minimum of f (which would be reached by knowing g perfectly) and what is effectively reached with a quantized g. The formal analysis provides quantization strategies in the HR regime, offers insights for the general regime, and allows a practical algorithm to be designed. It also sheds light on the new and fundamental problem of the relationship between the regularity properties of the goal function and the hardness of quantizing its parameters. The derived results are supported by a rich numerical performance analysis of known RA goal functions, which exhibits very significant improvements obtained by tailoring the quantization operation to the final task.
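The optimality-gap notion at the heart of this setting can be illustrated with a toy goal function. The choice below, f(x; g) = x - log(1 + gx) (a power-cost-minus-rate trade-off with closed-form minimizer x*(g) = max(0, (g - 1)/g)), is an assumption for illustration; the sketch measures the average goal gap left by deciding from a uniformly quantized g, not the paper's HR-optimized quantizers.

```python
import numpy as np

def f(x, g):
    """Illustrative goal function: power cost minus achievable rate."""
    return x - np.log(1.0 + g * x)

def x_star(g):
    """Closed-form minimizer of f(.; g): df/dx = 0 gives x* = max(0, (g-1)/g)."""
    return np.maximum(0.0, (g - 1.0) / g)

def goal_gap(x, g):
    """Loss from deciding x instead of the optimum x*(g); nonnegative."""
    return f(x, g) - f(x_star(g), g)

def uniform_quantize(v, lo, hi, levels):
    """Midpoint uniform quantizer on [lo, hi]."""
    step = (hi - lo) / levels
    idx = np.clip(np.floor((v - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

g = np.linspace(0.5, 4.0, 10_000)   # parameter realizations
for levels in (4, 8, 32):
    g_hat = uniform_quantize(g, 0.5, 4.0, levels)
    # Decide optimally for the quantized parameter, pay the gap on the true one.
    print(levels, goal_gap(x_star(g_hat), g).mean())
```

The printed averages shrink as the number of levels grows; the paper's contribution, beyond this baseline, is shaping the quantization cells themselves to the goal function rather than to the parameter distribution.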

    A Peridynamics-Based Micromechanical Modeling Approach for Random Heterogeneous Structural Materials

    Get PDF
    This paper presents a peridynamics-based micromechanical analysis framework that can efficiently handle material failure for random heterogeneous structural materials. In contrast to conventional continuum-based approaches, this method can handle discontinuities such as fracture without requiring supplemental mathematical relations. The framework generates representative unit cells based on microstructural information about the material and assigns distinct material behavior to the constituent phases in the random heterogeneous microstructures. It incorporates spontaneous failure initiation/propagation based on the critical stretch criterion in peridynamics and predicts the effective constitutive response of the material. The framework is applied to a metallic particulate-reinforced cementitious composite. The simulated mechanical responses show excellent agreement with experimental observations, signifying the efficacy of the peridynamics-based micromechanical framework for heterogeneous composites. Thus, the multiscale peridynamics-based framework can efficiently facilitate microstructure-guided material design for a large class of inclusion-modified random heterogeneous materials.
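The critical stretch criterion mentioned above is simple to state: a bond between two material points fails irreversibly once its relative elongation exceeds a phase-dependent threshold s_c. A minimal sketch, with purely illustrative critical-stretch values (not the paper's calibrated ones):

```python
import numpy as np

def bond_stretch(xi, eta):
    """Bond stretch s = (|xi + eta| - |xi|) / |xi|, where xi is the bond
    vector in the reference configuration and eta the relative displacement."""
    ref = np.linalg.norm(xi)
    return (np.linalg.norm(xi + eta) - ref) / ref

# Hypothetical per-phase critical stretches for a two-phase microstructure;
# the values are illustrative assumptions, not calibrated material data.
S_CRIT = {"matrix": 0.002, "inclusion": 0.02}

def bond_intact(xi, eta, phase):
    """Critical-stretch criterion: the bond survives only while its
    stretch stays at or below the phase's critical stretch s_c."""
    return bond_stretch(xi, eta) <= S_CRIT[phase]

xi = np.array([1.0, 0.0])      # reference bond
eta = np.array([0.005, 0.0])   # 0.5% extension along the bond
print(bond_intact(xi, eta, "matrix"))     # False: 0.5% exceeds s_c = 0.2%
print(bond_intact(xi, eta, "inclusion"))  # True: 0.5% is below s_c = 2%
```

Assigning distinct s_c values per phase is what lets the same stretch level break brittle matrix bonds while ductile inclusion bonds survive, which is how the framework encodes distinct constituent behavior.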

    Representation learning for minority and subtle activities in a smart home environment

    Get PDF
    Daily human activity recognition using sensor data is a fundamental task for many real-world applications, such as home monitoring and assisted living. One of the challenges in human activity recognition is distinguishing activities that occur infrequently and have less distinctive patterns. We propose a dissimilarity representation-based hierarchical classifier that performs two-phase learning. In the first phase, the classifier learns general features to recognise the majority classes; in the second phase, it groups the minority and subtle classes and identifies the fine differences between them. We compare our approach with a collection of state-of-the-art classification techniques on a real-world third-party dataset collected in a two-user home setting. Our results demonstrate that our hierarchical classifier outperforms the existing techniques in distinguishing users performing the same type of activities. The key novelty of our approach is the exploration of dissimilarity representations and hierarchical classifiers, which allows us to highlight the differences between activities that differ only subtly, and thus enables the identification of well-discriminating features.
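The two-phase idea can be sketched on synthetic data: a coarse first stage separates the majority class from a pooled minority group, and a fine second stage is reached only for samples routed into that group. Nearest-centroid rules stand in here for the paper's learned classifiers and full dissimilarity representation; the class layout is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic sensor features: one majority class and two subtle minority
# classes that differ only by a small shift in one feature.
maj  = rng.normal([0.0, 0.0], 1.0, (200, 2))
min1 = rng.normal([5.0, 5.0], 0.1, (20, 2))
min2 = rng.normal([5.3, 5.0], 0.1, (20, 2))

def centroid_label(x, centroids):
    """Index of the nearest centroid (a minimal stand-in classifier)."""
    return int(np.argmin([np.linalg.norm(x - c) for c in centroids]))

# Phase 1: coarse centroids -- majority versus the pooled minority group.
coarse = [maj.mean(0), np.vstack([min1, min2]).mean(0)]
# Phase 2: fine centroids inside the minority group only.
fine = [min1.mean(0), min2.mean(0)]

def classify(x):
    """Hierarchical decision: coarse routing first, fine stage on demand."""
    if centroid_label(x, coarse) == 0:
        return "majority"
    return ["minority-1", "minority-2"][centroid_label(x, fine)]

print(classify(np.array([0.2, -0.1])))   # majority
print(classify(np.array([5.0, 5.0])))    # minority-1
print(classify(np.array([5.3, 5.0])))    # minority-2
```

The benefit mirrors the paper's argument: the fine stage only ever has to model the small differences inside the minority group, instead of competing with the dominant majority class for representational capacity.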