
    Asymptotically Good Additive Cyclic Codes Exist

    Long quasi-cyclic codes of any fixed index > 1 have been shown to be asymptotically good, conditionally on the Artin primitive root conjecture (A. Alahmadi, C. Güneri, H. Shoaib, P. Solé, 2017). We use this recent result to construct good long additive cyclic codes over any extension of fixed degree of the base field. Similarly, self-dual double circulant codes and self-dual four-circulant codes have been shown to be good, also conditionally on the Artin primitive root conjecture, in (A. Alahmadi, F. Özdemir, P. Solé, 2017) and (M. Shi, H. Zhu, P. Solé, 2017), respectively. Building on these recent results, we show that long cyclic codes are good over F_q for many classes of q. This is a partial solution to a fifty-year-old open problem.
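
    For reference, "asymptotically good" here carries its standard meaning, which the abstract presupposes: a family of codes C_i with parameters [n_i, k_i, d_i]_q and n_i growing without bound is asymptotically good if both the rate and the relative distance stay bounded away from zero,

        \liminf_{i \to \infty} \frac{k_i}{n_i} > 0
        \quad\text{and}\quad
        \liminf_{i \to \infty} \frac{d_i}{n_i} > 0 .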

    Stable and verifiable state estimation methods and systems with spacecraft applications

    The stability of a recursive estimator process (e.g., a Kalman filter) is assured for long time periods by periodically resetting an error covariance P(t_n) of the system to a predetermined reset value P_r. The recursive process is thus repetitively forced to start from a selected covariance and continue for a time period that is short compared to the system's total operational time period. The time period over which the process must maintain its numerical stability is significantly reduced, as is the demand on the system's numerical stability. The process stability for an extended operational time period T_o is verified by performing the resetting step at the end of at least one reset time period T_r whose duration is less than the operational time period T_o, and then confirming stability of the process over the reset time period T_r. Because the recursive process starts from a selected covariance at the beginning of each reset time period T_r, confirming stability of the process over at least one reset time period substantially confirms stability over the longer operational time period T_o.
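
    A minimal sketch of the resetting idea, in Python with NumPy. The filter model, gains, synthetic measurements, and the reset value P_r below are hypothetical placeholders; only the periodic covariance reset reflects the abstract.

        import numpy as np

        # Hypothetical 1-D constant-velocity model; F, H, Q, R are placeholders.
        F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
        H = np.array([[1.0, 0.0]])               # measurement matrix
        Q = 1e-4 * np.eye(2)                     # process noise covariance
        R = np.array([[0.25]])                   # measurement noise covariance

        P_r = np.eye(2)   # predetermined reset value P_r (assumed)
        T_r = 1000        # reset period in filter steps (assumed)

        def kf_step(x, P, z):
            # Standard Kalman predict/update cycle.
            x = F @ x
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(2) - K @ H) @ P
            return x, P

        x, P = np.zeros((2, 1)), P_r.copy()
        rng = np.random.default_rng(0)
        for n in range(5000):
            z = np.array([[0.1 * n + rng.normal(0.0, 0.5)]])  # synthetic measurement
            x, P = kf_step(x, P, z)
            if (n + 1) % T_r == 0:
                # Periodic reset: numerical-stability demands now span only T_r
                # steps rather than the whole operational period T_o.
                P = P_r.copy()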

    On q-ary shortened-1-perfect-like codes

    We study codes with parameters of q-ary shortened Hamming codes, i.e., (n=(q^m-q)/(q-1), q^{n-m}, 3)_q. First, we prove the fact mentioned in [A. E. Brouwer et al. Bounds on mixed binary/ternary codes. IEEE Trans. Inf. Theory 44 (1998) 140-161] that such codes are optimal, generalizing it to a bound for multifold packings of radius-1 balls, with a corollary for multiple coverings. In particular, we show that the punctured Hamming code is an optimal q-fold packing with minimum distance 2. Second, we show the existence of 4-ary codes with parameters of shortened 1-perfect codes that cannot be obtained by shortening a 1-perfect code. Keywords: Hamming graph; multifold packings; multiple coverings; perfect codes.
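
    As a quick sanity check on the stated parameters, the short Python snippet below (my own illustration, not from the paper) computes them and compares the code size with the classical sphere-packing bound for radius-1 balls. Note the paper proves optimality via a sharper multifold-packing bound; the sphere-packing bound shown here is only a weaker upper estimate.

        def shortened_hamming_params(q, m):
            # Parameters (n, M, d)_q of a q-ary shortened Hamming code:
            # n = (q^m - q)/(q - 1), M = q^(n - m), d = 3.
            n = (q**m - q) // (q - 1)
            M = q**(n - m)
            return n, M, 3

        def sphere_packing_bound(q, n):
            # Upper bound on a code of length n and distance 3 over an
            # alphabet of size q: M <= q^n / |radius-1 ball|.
            return q**n // (1 + n * (q - 1))

        for q, m in [(2, 4), (3, 3), (4, 3)]:
            n, M, d = shortened_hamming_params(q, m)
            print(f"q={q}, m={m}: ({n}, {M}, {d})_{q},",
                  "sphere-packing bound:", sphere_packing_bound(q, n))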

    System for star catalog equalization to enhance attitude determination

    An apparatus for star catalog equalization to enhance attitude determination includes a star tracker, a star catalog, and a controller. The star tracker is used to sense the positions of stars and generate signals corresponding to the positions of the stars as seen in its field of view. The star catalog contains star location data that is stored using a primary array and multiple secondary arrays, sorted by declination (DEC) and right ascension (RA), respectively. The star location data stored in the star catalog is predetermined by calculating a plurality of desired star locations and associating one of a plurality of stars with each desired star location, based upon a neighborhood association angle, to generate an associated plurality of star locations. If an artificial star gap occurs during association, the neighborhood association angle is increased and the association is repeated. The controller uses the star catalog to determine which stars to select to provide star measurement residuals for correcting gyroscope bias and spacecraft attitude.
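
    A rough Python sketch of the association step just described. The angular-separation helper, the growth factor, and the retry limit are assumptions for illustration; the abstract specifies only "associate a star within a neighborhood association angle, and widen the angle when a gap occurs."

        import numpy as np

        def ang_sep(a, b):
            # Angular separation between two unit vectors, in radians.
            return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

        def equalize_catalog(desired_dirs, star_dirs, assoc_angle,
                             grow=1.5, max_tries=5):
            # Associate one real star with each desired (equalized) location;
            # widen the neighborhood angle whenever no star falls inside it.
            used, catalog = set(), []
            for d in desired_dirs:
                angle = assoc_angle
                for _ in range(max_tries):
                    cands = [(ang_sep(d, s), i) for i, s in enumerate(star_dirs)
                             if i not in used]
                    if cands:
                        sep, i = min(cands)
                        if sep <= angle:
                            used.add(i)
                            catalog.append(star_dirs[i])
                            break
                    angle *= grow  # "artificial star gap": re-associate wider
            return catalog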

    Spacecraft attitude control systems with dynamic methods and structures for processing star tracker signals

    Methods are provided for dynamically processing successively-generated star tracker data frames and associated valid flags to generate processed star tracker signals that have reduced noise and a probability greater than a selected probability P_slctd of being valid. These methods maintain accurate spacecraft attitude control in the presence of spurious inputs (e.g., impinging protons) that corrupt collected charges in spacecraft star trackers. The methods of the invention enhance the probability of generating valid star tracker signals because they respond to a current frame probability P_frm by dynamically selecting the largest valid frame combination whose combination probability P_cmb satisfies the selected probability P_slctd. Noise is thus reduced while the probability of finding a valid frame combination is enhanced. Spacecraft structures are also provided for practicing the methods of the invention.
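
    One plausible reading of the frame-selection rule, sketched in Python. The independence assumption behind the product P_cmb and the simple averaging over chosen frames are mine, not the patent's; the sketch only illustrates "take the largest run of valid frames whose combined probability still clears P_slctd."

        def select_frame_combination(frames, p_slctd):
            # frames: list of (measurement, valid_flag, p_frm), oldest first.
            # Returns the noise-reduced (averaged) measurement over the largest
            # suffix of valid frames whose combined validity probability
            # (product, assuming independence) still exceeds p_slctd.
            chosen, p_cmb = [], 1.0
            for meas, valid, p_frm in reversed(frames):
                if not valid:
                    break              # combinations use consecutive valid frames
                if p_cmb * p_frm < p_slctd:
                    break              # adding this frame would violate P_slctd
                p_cmb *= p_frm
                chosen.append(meas)
            if not chosen:
                return None, 0.0       # no valid combination this cycle
            avg = sum(chosen) / len(chosen)   # averaging reduces frame noise
            return avg, p_cmb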

    System and method for calibrating inter-star-tracker misalignments in a stellar inertial attitude determination system

    A method and apparatus for determining star tracker misalignments is disclosed. The method comprises the steps of defining a reference frame for the star tracker assembly according to a boresight of the primary star tracker and a boresight of a second star tracker, wherein the boresight of the primary star tracker and the plane spanned by the two boresights at least partially define a datum for the reference frame; and determining the misalignment of the at least one star tracker as a rotation of the defined reference frame.
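
    The frame construction described here resembles the classical TRIAD-style construction from two reference directions. A minimal NumPy sketch under my own axis conventions (the patent does not fix them):

        import numpy as np

        def assembly_frame(b1, b2):
            # Reference frame from two star tracker boresights (unit vectors):
            # x-axis along the primary boresight, z-axis normal to the plane
            # spanned by the two boresights, y-axis completing the triad.
            x = b1 / np.linalg.norm(b1)
            z = np.cross(b1, b2)
            z = z / np.linalg.norm(z)
            y = np.cross(z, x)
            return np.vstack([x, y, z])   # rows are the frame axes

        # A tracker misalignment then appears as a rotation of this frame,
        # e.g. R_mis = assembly_frame(b1_meas, b2_meas)
        #            @ assembly_frame(b1_ref, b2_ref).T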

    Neighborhood-based Hard Negative Mining for Sequential Recommendation

    Negative sampling plays a crucial role in training successful sequential recommendation models. Instead of merely employing random negative sample selection, numerous strategies have been proposed to mine informative negative samples to enhance training and performance. However, few of these approaches utilize structural information. In this work, we observe that as training progresses, the distributions of node-pair similarities in different groups with varying degrees of neighborhood overlap change significantly, suggesting that item pairs in distinct groups may possess different negative relationships. Motivated by this observation, we propose a Graph-based Negative sampling approach based on Neighborhood Overlap (GNNO) to exploit structural information hidden in user behaviors for negative mining. GNNO first constructs a global weighted item transition graph using training sequences. Subsequently, it mines hard negative samples based on the degree of overlap with the target item on the graph. Furthermore, GNNO employs curriculum learning to control the hardness of negative samples, progressing from easy to difficult. Extensive experiments on three Amazon benchmarks demonstrate GNNO's effectiveness in consistently enhancing the performance of various state-of-the-art models and surpassing existing negative sampling strategies. The code will be released at https://github.com/floatSDSDS/GNNO.
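
    A compressed Python sketch of the two ingredients the abstract names: the global weighted item-transition graph and overlap-based hard-negative mining with a curriculum. The specific overlap measure (weighted common-neighbor count) and the hardness schedule are my guesses, not GNNO's actual implementation; see the released code for the real one.

        from collections import defaultdict
        import random

        def build_transition_graph(sequences):
            # Global weighted item-transition graph from training sequences:
            # edge weight = co-occurrence count of consecutive items.
            g = defaultdict(lambda: defaultdict(int))
            for seq in sequences:
                for a, b in zip(seq, seq[1:]):
                    g[a][b] += 1
                    g[b][a] += 1
            return g

        def neighborhood_overlap(g, u, v):
            # Shared-neighbor weight between items u and v (assumed measure).
            common = set(g[u]) & set(g[v])
            return sum(min(g[u][w], g[v][w]) for w in common)

        def sample_hard_negatives(g, target, candidates, k, hardness):
            # Rank candidates by overlap with the target; `hardness` in [0, 1]
            # is annealed from easy to hard over training (curriculum).
            ranked = sorted(candidates,
                            key=lambda c: neighborhood_overlap(g, target, c))
            cutoff = min(int(len(ranked) * hardness), len(ranked) - 1)
            pool = ranked[cutoff:]   # higher overlap = harder negative
            return random.sample(pool, min(k, len(pool)))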

    Examining the Effect of Pre-training on Time Series Classification

    Although the pre-training followed by fine-tuning paradigm is used extensively in many fields, there is still some controversy surrounding the impact of pre-training on the fine-tuning process. Currently, experimental findings based on text and image data lack consensus. To delve deeper into the unsupervised pre-training followed by fine-tuning paradigm, we have extended previous research to a new modality: time series. In this study, we conducted a thorough examination of 150 classification datasets derived from the Univariate Time Series (UTS) and Multivariate Time Series (MTS) benchmarks. Our analysis reveals several key conclusions. (i) Pre-training helps the optimization process only for models that fit the data poorly, not for those that already fit it well. (ii) Pre-training does not exhibit a regularization effect when sufficient training time is given. (iii) Pre-training can speed up convergence only if the model has sufficient ability to fit the data. (iv) Adding more pre-training data does not improve generalization, but it can strengthen the advantage of pre-training on the original data volume, such as faster convergence. (v) While both the pre-training task and the model structure determine the effectiveness of the paradigm on a given dataset, the model structure plays a more significant role.
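
    For readers unfamiliar with the paradigm under study, a hypothetical PyTorch skeleton of unsupervised pre-training (masked reconstruction) followed by supervised fine-tuning on time series. The architecture and losses are placeholders, not the paper's models; the controlled comparison the study runs at scale amounts to fine-tuning with and without the pre-training step.

        import torch
        import torch.nn as nn

        class Encoder(nn.Module):
            def __init__(self, d_in=1, d_hid=64):
                super().__init__()
                self.rnn = nn.GRU(d_in, d_hid, batch_first=True)
            def forward(self, x):          # x: (batch, time, d_in)
                out, _ = self.rnn(x)
                return out                 # (batch, time, d_hid)

        def pretrain(encoder, series, steps=1000, mask_p=0.15):
            # Unsupervised stage: reconstruct randomly masked time steps.
            head = nn.Linear(64, series.size(-1))
            opt = torch.optim.Adam(list(encoder.parameters())
                                   + list(head.parameters()))
            for _ in range(steps):
                mask = (torch.rand(series.shape[:2]) < mask_p).unsqueeze(-1)
                x = series.masked_fill(mask, 0.0)
                err = head(encoder(x)) - series
                loss = (err[mask.expand_as(series)] ** 2).mean()
                opt.zero_grad(); loss.backward(); opt.step()

        def finetune(encoder, series, labels, n_classes, steps=1000):
            # Supervised stage: classify from mean-pooled encoder features.
            clf = nn.Linear(64, n_classes)
            opt = torch.optim.Adam(list(encoder.parameters())
                                   + list(clf.parameters()))
            ce = nn.CrossEntropyLoss()
            for _ in range(steps):
                logits = clf(encoder(series).mean(dim=1))
                loss = ce(logits, labels)
                opt.zero_grad(); loss.backward(); opt.step()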