    Microscopy with undetected photons in the mid-infrared

    Owing to its capacity for unique (bio)-chemical specificity, microscopy with mid-IR illumination holds tremendous promise for a wide range of biomedical and industrial applications. The primary limitation, however, remains detection, with current mid-IR detection technology often marrying inferior technical capabilities with prohibitive costs. This has led to approaches that shift detection to wavelengths in the visible regime, where vastly superior silicon-based camera technology is available. Here, we experimentally show how nonlinear interferometry with entangled light can provide a powerful tool for mid-IR microscopy, while only requiring near-infrared detection with a standard CMOS camera. In this proof-of-principle implementation, we demonstrate intensity imaging over a broad wavelength range covering 3.4-4.3 μm and a spatial resolution of 35 μm for images containing 650 resolved elements. Moreover, we demonstrate that our technique is fit for purpose, acquiring microscopic images of biological tissue samples in the mid-IR. These results open a new perspective on the potential relevance of quantum imaging techniques in the life sciences. Comment: back-to-back submission with arXiv:2002.05956, Anna V. Paterova, Sivakumar M. Maniam, Hongzhi Yang, Gianluca Grenci, and Leonid A. Krivitsky, "Hyperspectral Infrared Microscopy With Visible Light"
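
    The quoted figures can be sanity-checked with a quick calculation, assuming a roughly square field of view (the abstract does not state the geometry): 650 resolved elements at 35 μm per element imply a field of view a bit under 1 mm across.

```python
import math

resolution_um = 35.0     # reported spatial resolution
resolved_elements = 650  # reported number of resolved image elements

# For a roughly square field of view, the side spans sqrt(N) resolution cells.
fov_side_um = math.sqrt(resolved_elements) * resolution_um
print(round(fov_side_um / 1000, 2))  # field-of-view side in mm -> 0.89
```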

    Synthetic Data for Model Selection

    Recent breakthroughs in synthetic data generation have made it possible to produce highly photorealistic images that are hardly distinguishable from real ones. Furthermore, synthetic generation pipelines can produce an unlimited number of images. The combination of high photorealism and scale turns synthetic data into a promising candidate for improving various machine learning (ML) pipelines. Thus far, a large body of research in this field has focused on using synthetic images for training, by augmenting and enlarging training data. In contrast, in this work we explore whether synthetic data can be beneficial for model selection. Considering the task of image classification, we demonstrate that when data is scarce, synthetic data can replace the held-out validation set, allowing training on a larger dataset. We also introduce a novel method to calibrate the synthetic error estimation to fit that of the real domain. We show that such calibration significantly improves the usefulness of synthetic data for model selection.
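
    The abstract does not specify the calibration method; one minimal sketch of the idea is a least-squares linear map from synthetic-set error to real-domain error, fit on a few reference models (all error values below are hypothetical).

```python
import numpy as np

# Hypothetical error estimates for a few reference checkpoints:
# error measured on a synthetic validation set vs. on real held-out data.
synthetic_err = np.array([0.30, 0.25, 0.22, 0.18])
real_err      = np.array([0.34, 0.27, 0.25, 0.20])

# Fit a linear calibration real ~ a * synthetic + b by least squares.
a, b = np.polyfit(synthetic_err, real_err, deg=1)

def calibrate(err_syn: float) -> float:
    """Map a synthetic-set error estimate onto the real-domain scale."""
    return a * err_syn + b

# Select among candidate models using calibrated synthetic error alone.
candidates = {"model_A": 0.24, "model_B": 0.21}
best = min(candidates, key=lambda m: calibrate(candidates[m]))
print(best)  # model_B
```

    Because the fitted map is monotone here, the ranking is unchanged; the calibrated values matter when the absolute error estimate (not just the ranking) is used.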

    Asymmetric Image Retrieval with Cross Model Compatible Ensembles

    The asymmetrical retrieval setting is a well-suited solution for resource-constrained applications such as face recognition and image retrieval. In this setting, a large model is used for indexing the gallery while a lightweight model is used for querying. The key principle in such systems is ensuring that both models share the same embedding space. Most methods in this domain are based on knowledge distillation. While useful, they suffer from several drawbacks: they are upper-bounded by the performance of the single best model found and cannot be extended to use an ensemble of models in a straightforward manner. In this paper we present an approach that does not rely on knowledge distillation; rather, it utilizes embedding transformation models. This allows the use of N independently trained and diverse gallery models (e.g., trained on different datasets or having different architectures) and a single query model. As a result, we improve the overall accuracy beyond that of any single model while maintaining a low computational budget for querying. Additionally, we propose a gallery image rejection method that utilizes the diversity between multiple transformed embeddings to estimate the uncertainty of gallery images.
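
    A minimal sketch of the setup, with random placeholders standing in for the learned pieces: each of the N gallery models gets a transformation into a shared embedding space, the transformed embeddings are pooled for retrieval, and their disagreement serves as the rejection signal. The transforms and embeddings below are hypothetical stand-ins, not the paper's learned models.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # embedding dimension

# Hypothetical embeddings of the same gallery image from N=3 independently
# trained gallery models (in reality produced by different networks).
gallery_embs = [rng.normal(size=d) for _ in range(3)]

# Per-model transformation into a shared space. The paper learns these;
# fixed random orthogonal matrices are used here purely as placeholders.
transforms = [np.linalg.qr(rng.normal(size=(d, d)))[0] for _ in range(3)]

def to_shared(emb, T):
    """Map one model's embedding into the shared space and L2-normalize."""
    v = T @ emb
    return v / np.linalg.norm(v)

transformed = np.stack([to_shared(e, T) for e, T in zip(gallery_embs, transforms)])

# Ensemble gallery representation: mean of the transformed embeddings.
ensemble = transformed.mean(axis=0)

# Uncertainty estimate: disagreement (variance) between the N transformed
# embeddings; high disagreement suggests rejecting this gallery image.
uncertainty = transformed.var(axis=0).mean()
print(ensemble.shape, uncertainty > 0)
```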

    FDG-PET-positive lower-extremity sebaceous-gland carcinoma in a patient with Muir-Torre syndrome

    Sebaceous-gland carcinoma can occur alone or as one of the defining features of the Muir-Torre syndrome. Cases occurring below the head and neck are extremely rare. Here we describe the case of a 70-year-old male with Muir-Torre syndrome who had a recurrent sebaceous-gland carcinoma in the left lower extremity that demonstrated 18F-FDG avidity. An 18F-FDG-avid lower-extremity sebaceous-gland carcinoma has not been previously reported.

    FPGAN-Control: A Controllable Fingerprint Generator for Training with Synthetic Data

    Training fingerprint recognition models using synthetic data has recently gained increased attention in the biometric community, as it alleviates the dependency on sensitive personal data. Existing approaches for fingerprint generation are limited in their ability to generate diverse impressions of the same finger, a key property for providing effective data for training recognition models. To address this gap, we present FPGAN-Control, an identity-preserving image generation framework which enables control over the image appearance (e.g., fingerprint type, acquisition device, pressure level) of generated fingerprints. We introduce a novel appearance loss that encourages disentanglement between the fingerprint's identity and appearance properties. In our experiments, we used the publicly available NIST SD302 (N2N) dataset for training the FPGAN-Control model. We demonstrate the merits of FPGAN-Control, both quantitatively and qualitatively, in terms of identity preservation level, degree of appearance control, and low synthetic-to-real domain gap. Finally, training recognition models using only synthetic datasets generated by FPGAN-Control leads to recognition accuracies that are on par with or even surpass those of models trained using real data. To the best of our knowledge, this is the first work to demonstrate this.
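
    The paper's appearance loss is learned; as a loudly hypothetical stand-in, the sketch below combines an identity-preservation term (embedding distance between two impressions of the same finger) with a crude appearance term over intensity statistics. Both functions and the weighting `lam` are illustrative assumptions, not FPGAN-Control's actual objective.

```python
import numpy as np

def identity_loss(emb_a, emb_b):
    """Cosine distance between recognition-network embeddings of two
    generated impressions of the SAME finger; small = identity preserved."""
    cos = np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return 1.0 - cos

def appearance_loss(img, target_img):
    """Crude placeholder for the learned appearance term: match first- and
    second-order intensity statistics of a target appearance."""
    return (img.mean() - target_img.mean()) ** 2 + (img.std() - target_img.std()) ** 2

def total_loss(emb_a, emb_b, img, target_img, lam=0.5):
    # Same identity, controlled appearance; lam is a hypothetical weight.
    return identity_loss(emb_a, emb_b) + lam * appearance_loss(img, target_img)

emb = np.array([0.2, 0.9, -0.3])
img = np.full((8, 8), 0.5)
print(round(abs(total_loss(emb, emb, img, img)), 6))  # 0.0
```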

    Early Detection of COVID-19 in Female Athletes Using Wearable Technology

    Background: Heart rate variability (HRV), respiratory rate (RR), and resting heart rate (RHR) are common variables measured by wrist-worn activity trackers to monitor health, fitness, and recovery in athletes. Variations in RR are observed in lower-respiratory infections, and preliminary data suggest changes in HRV and RR are linked to early detection of COVID-19 infection in nonathletes. Hypothesis: Wearable technology measuring HRV, RR, RHR, and recovery will be successful for early detection of COVID-19 in NCAA Division I female athletes. Study Design: Cohort study. Level of Evidence: Level 2. Methods: Female athletes wore WHOOP, Inc. bands through the 2020 to 2021 competitive season. Of the athletes who tested positive for COVID (n = 33), 14 had enough data to be assessed (N = 14; 20.0 ± 1.3 years; 69.8 ± 7.2 kg; 172.0 ± 8.3 cm). Roughly 2 weeks of noninfected days were used to set baseline levels of HRV, RR, recovery, and RHR to compare with -3, -2, and -1 days before a positive COVID-19 result. Results: Increases in RR (P = 0.02) were detected on day -3. RHR (P < 0.01) and RR increased (P < 0.01), while HRV decreased (P < 0.05) on day -1, compared with baseline. Differences were noted in all variables on the day of the positive COVID-19 result: decreased HRV (P < 0.05) and recovery scores (P < 0.01), and increased RHR (P < 0.01) and RR (P < 0.01). Conclusion: In female athletes, wearable technology was successful in predicting COVID-19 infection through changes in RR 3 days before a positive test, and also HRV and RHR the day before a positive test. Clinical Relevance: Wearable technology may be used, as part of a multifaceted approach, for the early detection of COVID-19 in elite athletes through monitoring of HRV, RR, and RHR for overall team health.
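
    The monitoring logic the study implies — compare each day's reading against the athlete's own roughly two-week baseline — can be sketched as a simple z-score alert. All readings and the threshold below are hypothetical illustrations, not the study's statistical method.

```python
import statistics

# Hypothetical ~2 weeks of baseline respiratory-rate readings (breaths/min).
baseline_rr = [14.1, 14.5, 13.9, 14.2, 14.6, 14.0, 14.3,
               14.4, 13.8, 14.2, 14.5, 14.1, 14.0, 14.3]

mu = statistics.mean(baseline_rr)
sigma = statistics.stdev(baseline_rr)

def flag(reading, threshold=2.0):
    """Flag a reading whose z-score against the athlete's own baseline
    exceeds the threshold (direction matters: RR and RHR rise, HRV falls)."""
    return (reading - mu) / sigma > threshold

print(flag(15.4))  # an elevated RR reading, e.g. on day -3 -> True
```

    The same per-athlete baseline comparison would apply to HRV and RHR, with the sign of the deviation flipped for HRV.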

    Hierarchical transfer learning for online recognition of compound actions

    Recognising human actions in real time can provide users with a natural user interface (NUI), enabling a range of innovative and immersive applications. A NUI application should not restrict users’ movements; it should allow users to transition between actions in quick succession, which we term compound actions. However, the majority of action recognition researchers have focused on individual actions, so their approaches are limited to recognising single actions or multiple actions that are temporally separated. This paper proposes a novel online action recognition method for fast detection of compound actions. A key contribution is our hierarchical body model, which can be automatically configured to detect actions based on the low-level body parts that are the most discriminative for a particular action. Another key contribution is a transfer learning strategy that allows the tasks of action segmentation and whole-body modelling to be performed on a related but simpler dataset, combined with automatic hierarchical body model adaptation on a more complex target dataset. Experimental results on a challenging and realistic dataset show an improvement in action recognition performance of 16% due to the introduction of our hierarchical transfer learning. The proposed algorithm is fast, with an average latency of just 2 frames (66 ms), and outperforms state-of-the-art action recognition algorithms that are capable of fast online action recognition.
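
    The automatic configuration step — picking the body-part node whose features best separate a given action — can be sketched as a simple score-based selection. The part names and accuracy scores below are hypothetical; the paper's hierarchy and selection criterion may differ.

```python
# Hypothetical per-body-part validation accuracies for one action class;
# the hierarchical body model is configured from scores like these.
part_scores = {
    "full_body":  0.71,
    "upper_body": 0.78,
    "left_arm":   0.84,
    "right_arm":  0.69,
    "legs":       0.55,
}

def most_discriminative(scores):
    """Pick the body-part node whose features best separate this action."""
    return max(scores, key=scores.get)

print(most_discriminative(part_scores))  # left_arm
```

    Note the latency figure is self-consistent: 2 frames at a 30 fps capture rate is about 66 ms.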