
    New binary and ternary LCD codes

    LCD codes are linear codes with important cryptographic applications. Recently, a method has been presented to transform any linear code into an LCD code with the same parameters whenever the underlying finite field has more than three elements. Hence, the study of LCD codes is mainly open for the binary and ternary fields. Subfield-subcodes of J-affine variety codes are a generalization of BCH codes which have been successfully used for constructing good quantum codes. We describe binary and ternary LCD codes constructed as subfield-subcodes of J-affine variety codes and provide some new and good LCD codes coming from this construction.
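    As a quick illustration of the LCD property itself (not of the paper's construction), the sketch below applies Massey's criterion over GF(2): a code with generator matrix G is LCD, i.e. its hull C ∩ C⊥ is trivial, exactly when G·Gᵀ is nonsingular. The repetition codes used are standard textbook examples.

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix via Gaussian elimination over GF(2)."""
    M = np.array(M, dtype=int) % 2
    rank, rows, cols = 0, *M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]      # move pivot row into place
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                  # clear column c elsewhere
        rank += 1
    return rank

def is_lcd_gf2(G):
    """True iff the binary code generated by G is LCD (G·Gᵀ invertible)."""
    G = np.array(G, dtype=int) % 2
    gram = (G @ G.T) % 2
    return rank_gf2(gram) == G.shape[0]

print(is_lcd_gf2([[1, 1, 1]]))   # [3,1] repetition code: True, hull is {0}
print(is_lcd_gf2([[1, 1]]))      # [2,1] repetition code: False, here C = C⊥
```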

    Twisted skew G-codes

    In this paper we investigate left ideals as codes in twisted skew group rings. The rings considered, which are often algebras over a finite field, allow us to detect many of the well-known codes. The presentation given here unifies the concepts of group codes, twisted group codes and skew group codes.
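    For orientation, the untwisted, non-skew special case is classical: for a cyclic group C_n, the ideals of the group algebra F_2[C_n] ≅ F_2[x]/(x^n − 1) are exactly the binary cyclic codes of length n. The Python sketch below (a textbook illustration, not an example from the paper) enumerates the ideal generated by g(x) = 1 + x + x^3 for n = 7, which is the [7,4,3] Hamming code.

```python
from itertools import product

n = 7
g = (1, 1, 0, 1, 0, 0, 0)       # coefficients of g(x) = 1 + x + x^3, a divisor of x^7 - 1

def shift(v, s):
    """Multiply by x^s in F_2[x]/(x^n - 1): a cyclic shift of the coefficients."""
    return tuple(v[(i - s) % n] for i in range(n))

def add(u, v):
    """Coefficient-wise addition over F_2."""
    return tuple((a + b) % 2 for a, b in zip(u, v))

# Since g divides x^7 - 1, the ideal (g) is spanned by g, x*g, ..., x^(k-1)*g
# with k = n - deg(g) = 4.
basis = [shift(g, s) for s in range(4)]
code = set()
for coeffs in product((0, 1), repeat=len(basis)):
    word = (0,) * n
    for c, b in zip(coeffs, basis):
        if c:
            word = add(word, b)
    code.add(word)

print(len(code))                              # 16 codewords: a [7,4] code
print(min(sum(w) for w in code if any(w)))    # minimum weight 3
```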

    The metric structure of linear codes

    The bilinear form whose Gram matrix is the identity is used in coding theory to define the dual code of a linear code; it also endows linear codes with a metric structure. This metric structure was studied for generalized toric codes, where a characteristic decomposition was obtained that led to several applications, such as the construction of stabilizer quantum codes and LCD codes. In this work, we use the study of bilinear forms over a finite field to give a decomposition of an arbitrary linear code similar to the one obtained for generalized toric codes. Such a decomposition, called the geometric decomposition of a linear code, can be obtained in a constructive way; it allows us to easily express the dual code of a linear code and provides a method to construct stabilizer quantum codes, LCD codes and, in some cases, a method to estimate their minimum distance. The proofs for characteristic 2 are different, but they are developed in parallel. The author gratefully acknowledges the support from RYC-2016-20208 (AEI/FSE/UE), The Danish Council for Independent Research (Grant No. DFF-4002-00367), and the Spanish MINECO/FEDER (Grants No. MTM2015-65764-C3-2-P and MTM2015-69138-REDT).
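    For reference, the bilinear form and dual code the abstract builds on are the standard ones (textbook definitions, not notation specific to this paper):

```latex
% Bilinear form with identity Gram matrix on \mathbb{F}_q^n, and the dual code it defines.
\[
  \langle x, y \rangle \;=\; \sum_{i=1}^{n} x_i y_i ,
  \qquad
  C^{\perp} \;=\; \{\, x \in \mathbb{F}_q^{\,n} \;:\; \langle x, c \rangle = 0 \ \text{for all } c \in C \,\}.
\]
```

    In this language, a code is LCD when its hull C ∩ C⊥ is trivial, and dual-containing codes (C⊥ ⊆ C) are the input to the standard CSS-type stabilizer construction mentioned in the abstract.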

    Subfield subcodes of projective Reed-Muller codes

    Explicit bases for the subfield subcodes of projective Reed-Muller codes over the projective plane, and for their duals, are obtained. In particular, we provide a formula for the dimension of these codes. For the general case over the projective space, we generalize the necessary tools to deal with this case as well: we obtain a universal Gröbner basis for the vanishing ideal of the set of standard representatives of the projective space, and we are able to reduce any monomial with respect to this Gröbner basis. With respect to the parameters of these codes, by considering subfield subcodes of projective Reed-Muller codes we obtain long linear codes with good parameters over a small finite field.
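    For context, the subfield subcode operation used here is the standard one (textbook definition, not notation particular to this paper): starting from a code C of length n over the extension field, one keeps only the codewords with all coordinates in the small field.

```latex
% Subfield subcode of C \subseteq \mathbb{F}_{q^m}^{\,n} with respect to \mathbb{F}_q.
\[
  C|_{\mathbb{F}_q} \;=\; C \cap \mathbb{F}_q^{\,n}.
\]
```

    The subfield subcode keeps the length and at least the minimum distance of C, while its dimension may drop; Delsarte's bound guarantees dim C|_{F_q} ≥ n − m(n − dim C), which is why this operation can produce long codes with good parameters over a small field.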

    On some codes from rank 3 primitive actions of the simple Chevalley group G2(q)

    Please read the abstract in the article. The National Research Foundation of South Africa. http://aimsciences.org/journals/amc/index.htm. Mathematics and Applied Mathematics.

    Subject Index Volumes 1–200


    Local features for view matching across independently moving cameras.

    PhD Thesis. Moving platforms, such as wearable and robotic cameras, need to recognise the same place observed from different viewpoints in order to collaboratively reconstruct a 3D scene and to support augmented reality or autonomous navigation. However, matching views is challenging for independently moving cameras that directly interact with each other, because severe geometric and photometric differences, such as viewpoint, scale, and illumination changes, can considerably decrease the matching performance. This thesis proposes novel, compact, local features that can cope with scale and viewpoint variations. We extract and describe an image patch at different scales of an image pyramid by comparing intensity values between learnt pixel pairs (binary test), and employ a cross-scale distance when matching these features. We capture, at multiple scales, the temporal changes of a 3D point, as observed in the image sequence of a camera, by tracking local binary descriptors. After validating the feature-point trajectories through 3D reconstruction, we reduce, for each scale, the sequence of binary features to a compact, fixed-length descriptor that identifies the most frequent and the most stable binary tests over time. We then propose XC-PR, a cross-camera place recognition approach that stores locally, for each uncalibrated camera, spatio-temporal descriptors, extracted at a single scale, in a tree that is selectively updated as the camera moves. Cameras exchange descriptors selected from previous frames within an adaptive temporal window and with the highest number of local features corresponding to the descriptors. The other camera locally searches and matches the received descriptors to identify and geometrically validate a previously seen place. Experiments on different scenarios show the improved matching accuracy of the joint multi-scale extraction and temporal reduction, through comparisons with different temporal reduction strategies and with a cross-camera matching strategy based on Bag of Binary Words, and through the application to several binary descriptors. We also show that XC-PR achieves similar accuracy but is faster, on average, than a baseline consisting of an incremental list of spatio-temporal descriptors. Moreover, XC-PR achieves accuracy similar to a frame-based Bag of Binary Words approach adapted to our setting, while avoiding matching features that cannot be informative, e.g. for 3D reconstruction.
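    As a concrete, minimal sketch of the two ingredients named above, the code below builds a BRIEF-style binary descriptor from intensity comparisons at fixed pixel pairs and matches descriptors by Hamming distance. The pair positions here are random placeholders (the thesis learns them), and the cross-scale distance and temporal reduction are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
PATCH = 32                                      # patch side length (an assumption)
PAIRS = rng.integers(0, PATCH, size=(256, 4))   # 256 tests: (y1, x1, y2, x2) per row

def describe(patch):
    """256-bit binary descriptor: bit i is 1 if I(p1_i) < I(p2_i)."""
    y1, x1, y2, x2 = PAIRS.T
    return (patch[y1, x1] < patch[y2, x2]).astype(np.uint8)

def hamming(d1, d2):
    """Number of differing bits between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))

# Two noisy observations of the same patch should match far better than a
# patch from elsewhere (whose distance is about 128 bits on average).
base = rng.integers(0, 256, size=(PATCH, PATCH)).astype(float)
noisy = base + rng.normal(0.0, 2.0, size=base.shape)
other = rng.integers(0, 256, size=(PATCH, PATCH)).astype(float)
print(hamming(describe(base), describe(noisy)),
      hamming(describe(base), describe(other)))
```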

    Aspirational toilet user experiences: translating latent user needs into aspirational user experiences.

    What makes a product user experience aspirational? What do people truly want from their products? The aim of this research is to assess the implementation of latent needs in the design of an innovative, aspirational product user experience. The thesis details reflective, action-based research on the design of an aspirational toilet user experience; a taboo subject that has little to no aspiration attributed to it. Toilets have not changed in the past 200 years, and arguably the user experience is not considered aspirational. Reflections on an admittedly extreme case could in turn have implications for other practitioners. Latent needs were elicited from 77 households in Kumasi, Ghana, to understand the motivations for acquiring a toilet, while latent needs of the user experience were gathered from hackers online. The results suggest that the negative ‘shut away’ nature of a toilet means people do not attribute value to toilets, while there is a universal fear of the invisibility of disease. The study resulted in the construction of a wellbeing-monitoring toilet prototype that would change the meaning people attribute to toilets while beginning to address the fear of disease. A final test was arranged in which the improved user experience was shown by questionnaire to be more valuable and aspirational to users, because the new concept affords new meaning beyond the utility that toilets currently provide. The reflections on the case study suggest that when implementing latent needs in the design of an aspirational product user experience, it is worth considering that what users say is not what they do, and that meaning is a dimension of innovation that is as important as technology. PhD in Wate

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, the advances in deep learning and computational imaging contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensics capabilities that relate to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.