
    Dynamic Gradient Reactivation for Backward Compatible Person Re-identification

    We study the backward compatibility problem for person re-identification (Re-ID), which aims to constrain the features of an updated new model to be comparable with the existing gallery features from the old model. Most existing works adopt distillation-based methods, which push the new features to imitate the distribution of the old ones. However, distillation-based methods are intrinsically sub-optimal, since they force the new feature space to imitate an inferior old feature space. To address this issue, we propose Ranking-based Backward Compatible Learning (RBCL), which directly optimizes the ranking metric between new and old features. Unlike previous methods, RBCL only pushes the new features to find best-ranking positions in the old feature space rather than enforcing strict alignment, which is in line with the ultimate goal of backward retrieval. However, the sharp sigmoid function used to make the ranking metric differentiable also incurs a gradient vanishing issue, which stalls ranking refinement during the later period of training. To address this, we propose Dynamic Gradient Reactivation (DGR), which reactivates the suppressed gradients by adding a dynamically computed constant during the forward step. To further help target the best-ranking positions, we include Neighbor Context Agents (NCAs) to approximate the entire old feature space during training. Unlike previous works, which only test on in-domain settings, we make the first attempt to introduce cross-domain settings (both supervised and unsupervised), which are more meaningful and more difficult. The experimental results show that the proposed RBCL outperforms previous state-of-the-art methods by large margins under all five settings. Comment: Submitted to Pattern Recognition on Dec 06, 2021. Under review.
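    The abstract does not spell out the exact reactivation rule, so the sketch below is a minimal NumPy illustration of the two ingredients it describes: a sharp sigmoid whose gradient vanishes once score differences are well separated, and a dynamically computed constant applied in the forward step to pull suppressed pairs back into the sigmoid's steep region. The temperature `tau`, the `target_grad_zone` parameter, and the particular shift rule in `dgr_forward` are assumptions of this sketch, not the paper's method.

    ```python
    import numpy as np

    def sharp_sigmoid(x, tau=0.1):
        """Sharp sigmoid used to make a ranking indicator differentiable.
        Small tau -> closer to a step function, but the gradient vanishes
        as soon as |x| is much larger than tau."""
        return 1.0 / (1.0 + np.exp(-x / tau))

    def sigmoid_grad(x, tau=0.1):
        s = sharp_sigmoid(x, tau)
        return s * (1.0 - s) / tau

    # Gradient vanishing: once a score difference is far from zero, the
    # gradient is essentially zero, so ranking refinement stalls late in
    # training.
    g_before = sigmoid_grad(2.0)

    # Illustrative reactivation: shift the forward-pass input by a
    # dynamically computed constant so a suppressed pair re-enters the
    # high-gradient region of the sigmoid.
    def dgr_forward(x, tau=0.1, target_grad_zone=0.5):
        shift = np.sign(x) * max(abs(x) - target_grad_zone * tau, 0.0)
        return x - shift  # moved back toward the steep region

    g_after = sigmoid_grad(dgr_forward(2.0))
    print(g_before < 1e-6, g_after > g_before)
    ```

    The shift leaves the sign of the score difference unchanged, so the ranking objective still pushes the pair in the correct direction; only the magnitude of the gradient is restored.
    
    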

    Retinal Vascular Network Topology Reconstruction and Artery/Vein Classification via Dominant Set Clustering

    The estimation of vascular network topology in complex networks is important in understanding the relationship between vascular changes and a wide spectrum of diseases. Automatic classification of the retinal vascular trees into arteries and veins is of direct assistance to the ophthalmologist in the diagnosis and treatment of eye disease. However, it is challenging due to their projective ambiguity and subtle changes in appearance, contrast, and geometry during the imaging process. In this paper, we propose a novel method that is capable of making the artery/vein (A/V) distinction in retinal color fundus images based on vascular network topological properties. To this end, we adapt the concept of dominant set clustering and formalize the retinal blood vessel topology estimation and the A/V classification as a pairwise clustering problem. The graph is constructed through image segmentation, skeletonization, and identification of significant nodes. The edge weight is defined as the inverse Euclidean distance between its two end points in the feature space of intensity, orientation, curvature, diameter, and entropy. The reconstructed vascular network is classified into arteries and veins based on their intensity and morphology. The proposed approach has been applied to five public databases, INSPIRE, IOSTAR, VICAVR, DRIVE, and WIDE, and achieved high accuracies of 95.1%, 94.2%, 93.8%, 91.1%, and 91.0%, respectively. Furthermore, we have made manual annotations of the blood vessel topologies for the INSPIRE, IOSTAR, VICAVR, and DRIVE datasets, and these annotations are released for public access to facilitate researchers in the community.
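    As a concrete illustration of the graph construction described above, the snippet below computes an edge weight as the inverse Euclidean distance between two end points in the five-dimensional feature space of intensity, orientation, curvature, diameter, and entropy. The feature values and the small `eps` guard against division by zero are assumptions of this sketch.

    ```python
    import numpy as np

    # Each graph node (a significant point on the vessel skeleton) carries a
    # feature vector: (intensity, orientation, curvature, diameter, entropy).
    # The edge weight is the inverse Euclidean distance between the two end
    # points in this feature space; eps avoids division by zero for identical
    # feature vectors and is an assumption of this sketch.

    def edge_weight(f_a, f_b, eps=1e-8):
        return 1.0 / (np.linalg.norm(np.asarray(f_a, float) - np.asarray(f_b, float)) + eps)

    node_a = [0.62, 1.30, 0.05, 4.1, 2.7]   # illustrative feature values
    node_b = [0.60, 1.28, 0.06, 4.0, 2.6]   # a nearby, similar point
    node_c = [0.30, 0.10, 0.40, 1.5, 1.1]   # a dissimilar point

    # Similar points get a large weight, dissimilar ones a small weight,
    # which is the affinity structure dominant set clustering operates on.
    print(edge_weight(node_a, node_b) > edge_weight(node_a, node_c))
    ```

    In practice the individual features would be normalized to comparable scales before taking the distance; that detail is omitted here.
    
    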

    Correlation between inflammatory markers over time and disease severity in status epilepticus: a preliminary study

    Objectives: Convulsive status epilepticus (CSE) is a major subtype of status epilepticus that is known to be closely associated with systemic inflammation. Important inflammatory biomarkers of this disorder include the neutrophil-to-lymphocyte ratio (NLR), platelet-to-lymphocyte ratio (PLR), monocyte-to-lymphocyte ratio (MLR), systemic immune inflammation index (SII), and pan-immune inflammation value (PIV). This study aimed to determine the NLR, PLR, MLR, SII, and PIV levels before and after treatment in adult patients with CSE and investigated the relationship of these parameters with disease severity.
    Methods: This retrospective study analyzed data from 103 adult patients with CSE and 103 healthy controls. The neutrophil, monocyte, platelet, and lymphocyte counts, as well as the NLR, PLR, MLR, SII, and PIV, were compared in adult patients with CSE during acute seizures (within 2 h of admission) and after treatment relief (1–2 weeks of complete seizure control). Furthermore, multivariate linear regression analysis investigated the relationship of the NLR, PLR, MLR, SII, and PIV with the Status Epilepticus Severity Score (STESS).
    Results: The data revealed significant differences (p < 0.05) in neutrophils, monocytes, lymphocytes, NLR, PLR, MLR, SII, and PIV between adult patients with CSE during acute seizures and after treatment relief. The average neutrophil count was high during acute seizures in the patient group and decreased after remission. In contrast, the average lymphocyte count was lower after remission (p < 0.05). Furthermore, significant differences (p < 0.05) were observed in monocyte, lymphocyte, platelet, NLR, PLR, MLR, and PIV levels between adult patients with CSE after remission and the healthy control group. Multivariate linear regression analysis showed no significant correlation of the NLR, PLR, MLR, SII, and PIV with the STESS.
    Conclusion: The results of this study indicate that adult patients with CSE experience a transient systemic inflammatory response during acute seizures, which gradually returns to baseline levels after remission. However, there is a lack of robust clinical evidence correlating the severity of adult CSE with the systemic inflammatory response.
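    The five indices studied here are simple ratios of complete-blood-count values; the helper below computes them following their common definitions in the literature (SII = platelets × neutrophils / lymphocytes; PIV = neutrophils × monocytes × platelets / lymphocytes). The counts used in the example are illustrative, not values from the study.

    ```python
    # Counts are in units of 10^9 cells/L, as on a standard CBC report.
    def inflammatory_indices(neutrophils, lymphocytes, monocytes, platelets):
        return {
            "NLR": neutrophils / lymphocytes,                       # neutrophil-to-lymphocyte ratio
            "PLR": platelets / lymphocytes,                         # platelet-to-lymphocyte ratio
            "MLR": monocytes / lymphocytes,                         # monocyte-to-lymphocyte ratio
            "SII": platelets * neutrophils / lymphocytes,           # systemic immune inflammation index
            "PIV": neutrophils * monocytes * platelets / lymphocytes,  # pan-immune inflammation value
        }

    # Illustrative values: neutrophilia with relative lymphopenia during an
    # acute seizure, drifting back toward baseline after remission.
    acute = inflammatory_indices(neutrophils=9.0, lymphocytes=1.0,
                                 monocytes=0.8, platelets=250.0)
    remission = inflammatory_indices(neutrophils=4.0, lymphocytes=2.0,
                                     monocytes=0.5, platelets=250.0)
    print(acute["NLR"], remission["NLR"])
    ```

    Because all five indices share the lymphocyte count in the denominator, a transient lymphopenia during the acute phase raises every index at once, which is consistent with the pattern the study reports.
    
    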

    Topology reconstruction of tree-like structure in images via structural similarity measure and dominant set clustering

    The reconstruction and analysis of tree-like topological structures in biomedical images is crucial for biologists and surgeons to understand biomedical conditions and plan surgical procedures. The underlying tree-structure topology reveals how different curvilinear components are anatomically connected to each other. Existing automated topology reconstruction methods have great difficulty in identifying the connectivity when two or more curvilinear components cross or bifurcate, due to projection ambiguity, imaging noise, and low contrast. In this paper, we propose a novel curvilinear structural similarity measure to guide a dominant-set clustering approach to address this issue. The similarity measure takes into account both intensity and geometric properties in representing the curvilinear structure locally and globally, and groups curvilinear objects at crossover points into different connected branches by dominant-set clustering. The proposed method is applicable to different imaging modalities, and quantitative and qualitative results on retinal vessel, plant root, and neuronal network datasets show that our methodology is capable of advancing the current state-of-the-art techniques.
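    Dominant-set clustering, the grouping step used in both of the abstracts above, can be sketched with the standard replicator-dynamics iteration that extracts a dominant set from a pairwise similarity matrix. The similarity values below are illustrative toy numbers; the paper's actual structural similarity measure is more elaborate.

    ```python
    import numpy as np

    def dominant_set(A, iters=200, tol=1e-8):
        """Extract a dominant set from a non-negative similarity matrix A
        (zero diagonal) using discrete replicator dynamics: the support of
        the converged weight vector x is the dominant set."""
        n = A.shape[0]
        x = np.full(n, 1.0 / n)           # start from the simplex barycenter
        for _ in range(iters):
            Ax = A @ x
            x_new = x * Ax / (x @ Ax)     # replicator update
            if np.linalg.norm(x_new - x) < tol:
                x = x_new
                break
            x = x_new
        return x

    # Two branches crossing: objects 0-2 are mutually similar, objects 3-4
    # form another branch, and cross-branch similarities are weak.
    A = np.array([[0.0, 0.9, 0.8, 0.1, 0.1],
                  [0.9, 0.0, 0.85, 0.1, 0.1],
                  [0.8, 0.85, 0.0, 0.1, 0.1],
                  [0.1, 0.1, 0.1, 0.0, 0.9],
                  [0.1, 0.1, 0.1, 0.9, 0.0]])
    x = dominant_set(A)
    print(np.argsort(-x)[:3])  # the coherent branch {0, 1, 2} dominates
    ```

    Peeling off the dominant set and re-running on the remaining objects recovers the other branch, which is how pairwise clustering separates branches at crossover points.
    
    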

    Implementing and Proving the TLS 1.3 Record Layer

    The record layer is the main bridge between TLS applications and internal sub-protocols. Its core functionality is an elaborate form of authenticated encryption: streams of messages for each sub-protocol (handshake, alert, and application data) are fragmented, multiplexed, and encrypted with optional padding to hide their lengths. Conversely, the sub-protocols may provide fresh keys or signal stream termination to the record layer. Compared to prior versions, TLS 1.3 discards obsolete schemes in favor of a common construction for Authenticated Encryption with Associated Data (AEAD), instantiated with algorithms such as AES-GCM and ChaCha20-Poly1305. It differs from TLS 1.2 in its use of padding, associated data, and nonces. It also encrypts the content type used to multiplex between sub-protocols. New protocol features such as early application data (0-RTT and 0.5-RTT) and late handshake messages require additional keys and a more general model of stateful encryption. 
    We build and verify a reference implementation of the TLS record layer and its cryptographic algorithms in F*, a dependently typed language where security and functional guarantees can be specified as pre- and post-conditions. We reduce the high-level security of the record layer to cryptographic assumptions on its ciphers. Each step in the reduction is verified by typing an F* module; for each step that involves a cryptographic assumption, this module precisely captures the corresponding game. We first verify the functional correctness and injectivity properties of our implementations of one-time MAC algorithms (Poly1305 and GHASH) and provide a generic proof of their security given these two properties. We show the security of a generic AEAD construction built from any secure one-time MAC and PRF. We extend AEAD, first to stream encryption, then to length-hiding, multiplexed encryption. Finally, we build a security model of the record layer against an adversary that controls the TLS sub-protocols. 
    We compute concrete security bounds for the AES_128_GCM, AES_256_GCM, and CHACHA20_POLY1305 ciphersuites, and derive recommended limits on sent data before re-keying. We plug our implementation of the record layer into the miTLS library, confirm that it interoperates with Chrome and Firefox, and report initial performance results. Combining our functional correctness, security, and experimental results, we conclude that the new TLS record layer (as described in RFCs and cryptographic standards) is provably secure, and we provide its first verified implementation.
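    One concrete piece of the record layer's stateful encryption can be shown directly: TLS 1.3 derives each record's AEAD nonce by XOR-ing the record sequence number, left-padded to the IV length, into the static write IV (RFC 8446, Section 5.3). The IV bytes below are illustrative, not real key material.

    ```python
    # Per-record nonce construction from RFC 8446, Section 5.3.
    IV_LEN = 12  # nonce length for AES-GCM and ChaCha20-Poly1305 in TLS 1.3

    def per_record_nonce(static_iv: bytes, seq: int) -> bytes:
        assert len(static_iv) == IV_LEN
        # The 64-bit record sequence number, left-padded with zeros to 12 bytes.
        padded_seq = seq.to_bytes(IV_LEN, "big")
        return bytes(a ^ b for a, b in zip(static_iv, padded_seq))

    static_iv = bytes(range(12))               # illustrative IV: 00 01 ... 0b
    n0 = per_record_nonce(static_iv, 0)
    n1 = per_record_nonce(static_iv, 1)
    assert n0 == static_iv                     # sequence 0 leaves the IV unchanged
    assert n0 != n1                            # every record gets a unique nonce
    print(n1.hex())
    ```

    Because the sequence number is strictly increasing and never reused under one key, every record gets a distinct nonce; the recommended re-keying limits in the paper bound how many records may be sent before the counter (and the cipher's concrete security margin) requires a key update.
    
    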

    Everest: Towards a Verified, Drop-in Replacement of HTTPS

    The HTTPS ecosystem is the foundation on which Internet security is built. At the heart of this ecosystem is the Transport Layer Security (TLS) protocol, which in turn uses the X.509 public-key infrastructure and numerous cryptographic constructions and algorithms. Unfortunately, this ecosystem is extremely brittle, with headline-grabbing attacks and emergency patches many times a year. We describe our ongoing efforts in Everest (the Everest VERified End-to-end Secure Transport), a project that aims to build and deploy a verified version of TLS and other components of HTTPS, replacing the current infrastructure with proven, secure software. Aiming at both full verification and usability, we conduct high-level code-based, game-playing proofs of security on cryptographic implementations that yield efficient, deployable code at the level of C and assembly. Concretely, we use F*, a dependently typed language for programming, meta-programming, and proving at a high level, while relying on low-level DSLs embedded within F* for programming low-level components when necessary for performance and, sometimes, side-channel resistance. To compose the pieces, we compile all our code to source-like C and assembly, suitable for deployment and integration with existing code bases, as well as audit by independent security experts. Our main results so far include (1) the design of Low*, a subset of F* designed for C-like imperative programming but with high-level verification support, and KreMLin, a compiler that extracts Low* programs to C; (2) an implementation of the TLS 1.3 record layer in Low*, together with a proof of its concrete cryptographic security; (3) Vale, a new DSL for verified assembly language, and several optimized cryptographic primitives proven functionally correct and side-channel resistant. In an early deployment, all our verified software is integrated and deployed within libcurl, a widely used library of networking protocols.

    Performance and characterization of the SPT-3G digital frequency-domain multiplexed readout system using an improved noise and crosstalk model

    The third-generation South Pole Telescope camera (SPT-3G) improves upon its predecessor (SPTpol) with an order-of-magnitude increase in detectors on the focal plane. The technology used to read out and control these detectors, digital frequency-domain multiplexing (DfMUX), is conceptually the same as used for SPTpol, but extended to accommodate more detectors. A nearly 5× expansion in the readout operating bandwidth has enabled the use of this large focal plane, and SPT-3G performance meets the forecasting targets relevant to its science objectives. However, the electrical dynamics of the higher-bandwidth readout differ from predictions based on models of the SPTpol system, due to the higher frequencies used and parasitic impedances associated with the new cryogenic electronic architecture. To address this, we present an updated derivation for electrical crosstalk in higher-bandwidth DfMUX systems and identify two previously uncharacterized contributions to readout noise, which become dominant at high bias frequency. The updated crosstalk and noise models successfully describe the measured crosstalk and readout noise performance of SPT-3G. These results also suggest specific changes to warm electronics component values, wire-harness properties, and SQUID parameters to improve the readout system for future experiments using DfMUX, such as the LiteBIRD space telescope.
