
    Simple implementations of mutually orthogonal complementary sets of sequences

    This paper presents simple software and hardware implementations for a class of mutually orthogonal complementary sets of sequences, based on their closed-form construction formula. Following a brief review of the Golay-paired Hadamard matrix concept, a flow graph for constructing mutually orthogonal Golay-paired Hadamard matrices, which represent scalable complete complementary sets of sequences, is proposed. Their scalability and completeness are then summarized. Finally, C and Matlab functions and a logic schematic diagram are given for easily generating these complementary sequences.
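    A rough sketch of the kind of routine such a paper supplies is shown below. It uses the classic Golay doubling recursion rather than the paper's closed-form formula, and the function names are only illustrative; it builds a complementary pair and checks that the sum of the two aperiodic autocorrelations is an ideal delta.

```python
import numpy as np

def golay_pair(m):
    """Length-2**m Golay complementary pair via the standard doubling recursion."""
    a, b = np.array([1]), np.array([1])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def aperiodic_acf(x):
    """Aperiodic autocorrelation of x for all non-negative lags."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) for k in range(n)])

a, b = golay_pair(4)                                  # length-16 pair
acf_sum = aperiodic_acf(a) + aperiodic_acf(b)
# single peak of 2 * length at zero lag, exactly zero at every other lag
assert acf_sum[0] == 2 * len(a) and not acf_sum[1:].any()
```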

    Low-Complexity Generation of Scalable Complete Complementary Sets of Sequences

    This paper presents extremely low-complexity Boolean logic for generating the coefficients needed to filter or correlate scalable complete complementary sets of sequences (SCCSS). As the unique auto- and cross-correlation properties of SCCSS are of broad interest, the simplicity of the proposed coefficient-generation technique allows arbitrarily long SCCSS to be used in resource-constrained applications.
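    As an assumed illustration of index-driven coefficient generation (the standard Rudin-Shapiro/Golay rule, not necessarily the paper's exact Boolean logic), each ±1 chip below is computed from the bits of its index with a few AND/XOR operations, so no coefficient table needs to be stored.

```python
def golay_chip(i, m):
    """Chip i (0 <= i < 2**m) of a length-2**m Golay sequence, computed directly
    from the index: the sign is (-1) raised to the number of adjacent '11' pairs
    in the m-bit binary representation of i."""
    bits = [(i >> k) & 1 for k in range(m)]
    parity = 0
    for k in range(m - 1):
        parity ^= bits[k] & bits[k + 1]   # AND adjacent index bits, XOR into a parity
    return 1 - 2 * parity                 # parity 0 -> +1, parity 1 -> -1

# the whole length-16 sequence, generated chip by chip on the fly
seq = [golay_chip(i, 4) for i in range(16)]
```
    For reference, this rule reproduces the first sequence returned by the doubling recursion in the previous sketch.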

    Bandwidth efficient multi-station wireless streaming based on complete complementary sequences

    Data streaming from multiple base stations to a client is recognized as a robust technique for multimedia streaming. However, the resulting parallel transmission over wireless channels poses serious challenges, notably multiple-access interference, multipath fading, noise, and synchronization. Spread-spectrum techniques seem the obvious choice to mitigate these effects, but at the cost of increased bandwidth requirements. This paper proposes a solution that jointly exploits complete complementary spectrum spreading and data compression to resolve the communication challenges while ensuring efficient use of spectrum and an acceptable bit error rate. The proposed spreading scheme reduces the required transmission bandwidth by exploiting correlation among the information present at the multiple base stations. Results show a 1.75 Mchip/s (25%) reduction in transmission rate, with at most 6 dB loss in a frequency-selective channel compared to a straightforward solution based solely on complete complementary spectrum spreading.
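    The following minimal sketch, with made-up parameters, shows why complementary spreading is attractive here: a symbol spread over both members of a Golay pair and matched-filtered per branch yields a combined correlation with no sidelobes in an ideal, noiseless channel. It is not the paper's scheme, which additionally exploits inter-station correlation and compression.

```python
import numpy as np

# a known length-4 Golay complementary pair (illustrative; real systems use longer sets)
a = np.array([1, 1, 1, -1])
b = np.array([1, 1, -1, 1])

d = -1                             # one BPSK data symbol
tx_a, tx_b = d * a, d * b          # the symbol is spread over both sequences in parallel

# matched-filter each branch and combine: the autocorrelation sidelobes cancel,
# leaving a single peak of d * 2 * len(a) at zero lag
combined = np.correlate(tx_a, a, mode="full") + np.correlate(tx_b, b, mode="full")
peak_lag = len(a) - 1
assert combined[peak_lag] == d * 2 * len(a)
assert not np.delete(combined, peak_lag).any()
```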

    Knowledge-aware Complementary Product Representation Learning

    Learning product representations that reflect complementary relationships plays a central role in e-commerce recommender systems. In the absence of a product-relationship graph, which existing methods rely on, complementary relationships must be detected directly from noisy and sparse customer purchase activity. Furthermore, unlike simple relationships such as similarity, complementariness is asymmetric and non-transitive. Standard representation learning uses only one set of embeddings, which is problematic for modelling these properties of complementariness. We propose knowledge-aware learning with dual product embeddings to address these challenges. We encode contextual knowledge into the product representation via multi-task learning to alleviate the sparsity issue. By explicitly modelling user bias terms, we separate the noise of customer-specific preferences from complementariness. Furthermore, we adopt a dual embedding framework to capture the intrinsic properties of complementariness and provide a geometric interpretation motivated by classic separating-hyperplane theory. Finally, we propose a Bayesian network structure that unifies all the components and subsumes several popular models as special cases. The proposed method compares favourably to state-of-the-art methods in downstream classification and recommendation tasks. We also develop an implementation that scales efficiently to a dataset with millions of items and customers.
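    A minimal sketch of the dual-embedding idea follows, with made-up names and dimensions; it omits the paper's multi-task learning and Bayesian network components. Each item gets a separate "source" and "target" embedding plus a per-user bias, so the score of (i, j) generally differs from that of (j, i), matching the asymmetry of complementariness.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_users, dim = 1000, 500, 32

# dual embeddings: separate "source" and "target" tables per item, so that
# score(i -> j) != score(j -> i)
src_emb = rng.normal(size=(n_items, dim))
tgt_emb = rng.normal(size=(n_items, dim))
user_bias = rng.normal(size=n_users)        # soaks up customer-specific preference noise

def complementarity_score(i, j, u):
    """Probability-like score that item j complements item i for user u."""
    logit = src_emb[i] @ tgt_emb[j] + user_bias[u]
    return 1.0 / (1.0 + np.exp(-logit))     # logistic link

print(complementarity_score(3, 42, 7), complementarity_score(42, 3, 7))  # generally differ
```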

    Functional Dependencies Unleashed for Scalable Data Exchange

    We address the problem of efficiently evaluating target functional dependencies (fds) in the Data Exchange (DE) process. Target fds naturally occur in many DE scenarios, including those in the Life Sciences in which multiple source relations need to be structured under a constrained target schema. Despite their wide use, however, evaluating target fds is still a bottleneck in state-of-the-art DE engines. Systems relying on an all-SQL approach typically do not support target fds unless additional information is provided, while DE engines that do include these dependencies typically pay the price of a significant drop in performance and scalability. In this paper, we present a novel chase-based algorithm that can efficiently handle arbitrary fds on the target. Our approach relies on exploiting the interactions between source-to-target (s-t) tuple-generating dependencies (tgds) and target fds. This allows us to tame the size of the intermediate chase results by carefully ordering chase steps that interleave fds and (chosen) tgds. As a direct consequence, we significantly reduce the scope of fd application, often a central cause of the dramatic overhead induced by target fds. Moreover, reasoning on dependency interaction leads to interesting parallelization opportunities, yielding additional scalability gains. We provide a proof-of-concept implementation of our chase-based algorithm and an experimental study gauging its scalability with respect to a number of parameters, among them the size of the source instances and the number of dependencies in each tested scenario. Finally, we compare empirically with the latest DE engines and show that our algorithm outperforms them.
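    The toy function below illustrates only the basic ingredient the paper builds on: a chase step that enforces a single target fd by equating right-hand-side values within each left-hand-side group, promoting labelled nulls to constants and failing on constant clashes. The representation of nulls and the function names are assumptions for illustration; the paper's contribution is the careful interleaving of such steps with s-t tgds, which is not shown here.

```python
def is_null(v):
    """Labelled nulls are strings starting with '_N' in this toy encoding."""
    return isinstance(v, str) and v.startswith("_N")

def resolve(v, subst):
    """Follow the substitution chain until a representative value is reached."""
    while v in subst:
        v = subst[v]
    return v

def chase_fd(tuples, lhs, rhs):
    """Enforce one fd lhs -> rhs on a list of dict tuples: within every lhs-group,
    equate rhs values, promoting labelled nulls to constants where possible.
    Raises ValueError if two distinct constants clash (a hard fd violation)."""
    subst, groups = {}, {}
    for t in tuples:
        key = tuple(resolve(t[a], subst) for a in lhs)
        v = resolve(t[rhs], subst)
        if key not in groups:
            groups[key] = v
            continue
        w = resolve(groups[key], subst)
        if v == w:
            continue
        if is_null(v):
            subst[v] = w
        elif is_null(w):
            subst[w] = v
            groups[key] = v
        else:
            raise ValueError(f"fd violation on {lhs} -> {rhs}: {v} != {w}")
    return [{a: resolve(t[a], subst) for a in t} for t in tuples]

# toy usage: the second tuple's null _N1 is promoted to the constant 'NYC'
rows = [{"emp": "ann", "dept": "d1", "city": "NYC"},
        {"emp": "bob", "dept": "d1", "city": "_N1"}]
print(chase_fd(rows, lhs=("dept",), rhs="city"))
```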

    A Systematic Framework for the Construction of Optimal Complete Complementary Codes

    The complete complementary code (CCC) is a sequence family with ideal correlation sums, proposed by Suehiro and Hatori. Numerous studies show its application to direct-spread code-division multiple access (DS-CDMA) systems for inter-channel interference (ICI)-free communication with improved spectral efficiency. In this paper, we propose a systematic framework for the construction of CCCs based on N-shift cross-orthogonal sequence families (N-CO-SFs). We show theoretical bounds on the size of N-CO-SFs and CCCs, and give a set of four algorithms for their generation and extension. The algorithms are optimal in the sense that the size of the resulting sequence families achieves the theoretical bounds; with them, we can construct an optimal CCC consisting of sequences whose lengths are not only almost arbitrary but can even vary between sequence families. We also discuss the family size, alphabet size, and lengths of the CCCs constructible with the proposed algorithms.
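    Constructing a CCC is beyond a short sketch, but its defining property is easy to state in code: within each family the aperiodic autocorrelations must sum to a single peak, and across distinct families the like-indexed cross-correlations must sum to zero at every lag. The hypothetical checker below verifies this on the smallest non-trivial example.

```python
import numpy as np

def xcorr_sum(set_x, set_y):
    """Sum over like-indexed sequence pairs of the full aperiodic cross-correlations."""
    return sum(np.correlate(x, y, mode="full") for x, y in zip(set_x, set_y))

def is_ccc(families):
    """True iff every family has an ideal autocorrelation sum (one nonzero peak) and
    every pair of distinct families has an all-zero cross-correlation sum."""
    for i, fi in enumerate(families):
        for j, fj in enumerate(families):
            s = xcorr_sum(fi, fj)
            peak = len(fi[0]) - 1                     # zero-lag index
            if i == j:
                ok = s[peak] != 0 and not np.delete(s, peak).any()
            else:
                ok = not s.any()
            if not ok:
                return False
    return True

# the smallest non-trivial example: two families of two length-2 sequences
C1 = [np.array([1, 1]), np.array([1, -1])]
C2 = [np.array([1, -1]), np.array([1, 1])]
print(is_ccc([C1, C2]))   # True
```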

    The STRESS Method for Boundary-point Performance Analysis of End-to-end Multicast Timer-Suppression Mechanisms

    Evaluation of Internet protocols usually uses random scenarios or scenarios based on designers' intuition. Such an approach may be useful for average-case analysis but does not cover boundary-point (worst- or best-case) scenarios. To synthesize boundary-point scenarios, a more systematic approach is needed. In this paper, we present a method for the automatic synthesis of worst- and best-case scenarios for protocol boundary-point evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent the delays of the underlying multicast distribution tree. The algorithms used in our method employ implicit backward search with branch-and-bound techniques, starting from given target events, aiming to drastically reduce the search complexity. As a case study, we use our method to evaluate variants of the timer-suppression mechanism, used in various multicast protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way to synthesize worst-case scenarios automatically. Results obtained using stress scenarios differ dramatically from those obtained through average-case analyses. We hope our method will serve as a model for applying systematic scenario generation to other multicast protocols.
    Comment: 24 pages, 10 figures, IEEE/ACM Transactions on Networking (ToN) [To appear]
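    For context, a toy model of the mechanism under test (the timer-suppression scheme itself, not the FOTG synthesis algorithm) is sketched below. It assumes, purely for illustration, that every receiver hears every other receiver after the same one-way delay; receivers whose random timer fires before the first response reaches them still respond, and that duplicate-response overhead is exactly what the worst-case scenarios stress.

```python
import random

def timer_suppression(n_receivers, max_timer, one_way_delay):
    """Toy model of one multicast timer-suppression round: every receiver draws a
    random timer and suppresses its response if another response (plus the fixed
    propagation delay) arrives before its own timer fires. Returns the number of
    responses actually sent."""
    timers = sorted(random.uniform(0, max_timer) for _ in range(n_receivers))
    first = timers[0]
    # receivers whose timer fires before the first response reaches them respond anyway
    return sum(1 for t in timers if t < first + one_way_delay)

random.seed(1)
print(timer_suppression(n_receivers=50, max_timer=1.0, one_way_delay=0.05))
```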