
    Hierarchical Data Representation Model - Multi-layer NMF

    In this paper, we propose a data representation model that performs hierarchical feature learning using nonsmooth nonnegative matrix factorization (nsNMF). We extend the single-layer algorithm into several stacked layers. Experiments with document and image data successfully discovered feature hierarchies. We also show that the proposed method yields much better classification and reconstruction performance, especially when the number of features is small.
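
    Below is a minimal sketch of the multi-layer factorization idea described above, using scikit-learn's standard NMF as a stand-in for nsNMF and illustrative layer sizes: each layer re-factorizes the coefficient matrix of the previous one. It is an assumption-laden illustration, not the authors' implementation.

```python
# Multi-layer NMF sketch: standard NMF used in place of nsNMF (assumption).
import numpy as np
from sklearn.decomposition import NMF

def multilayer_nmf(X, layer_sizes, random_state=0):
    """Factorize X layer by layer: X ~ W1 H1, then H1 ~ W2 H2, and so on.

    Returns the per-layer basis matrices W_l and the final coefficient matrix.
    """
    bases, H = [], X
    for k in layer_sizes:
        model = NMF(n_components=k, init="nndsvda", max_iter=500,
                    random_state=random_state)
        W = model.fit_transform(H)   # basis (features) learned at this layer
        H = model.components_        # coefficients, re-factorized by the next layer
        bases.append(W)
    return bases, H

# Toy usage: 200 nonnegative "documents" over 50 terms, a 3-layer hierarchy.
X = np.abs(np.random.default_rng(0).normal(size=(200, 50)))
bases, top_level = multilayer_nmf(X, layer_sizes=[40, 20, 10])
print([W.shape for W in bases], top_level.shape)
# [(200, 40), (40, 20), (20, 10)] (10, 50)
```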

    Signatures of unconventional pairing in near-vortex electronic structure of LiFeAs

    A major open question for Fe-based superconductors is the structure of the pairing, in particular whether it is of unconventional nature. The electronic structure near vortices can serve as a platform for phase-sensitive measurements to answer this question. By solving the Bogoliubov-de Gennes equations for LiFeAs, we calculate the energy-dependent local electronic structure near a vortex for different nodeless gap-structure possibilities. At low energies, the local density of states (LDOS) around a vortex is determined by the normal-state electronic structure. However, at energies closer to the gap value, the LDOS can distinguish an anisotropic from a conventional isotropic s-wave gap. Within our self-consistent calculation, we further show that the local gap profile differs between conventional and unconventional pairing, which we explain through the admixture of a secondary order parameter within Ginzburg-Landau theory. In-field scanning tunneling spectroscopy near vortices can therefore be used as a real-space probe of the gap structure.
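
    The energy-resolved LDOS quoted above is typically assembled from the Bogoliubov-de Gennes eigenstates (u_n, v_n, E_n). The sketch below shows only that bookkeeping step with Lorentzian broadening; the array shapes, the broadening width eta, and the function name ldos are illustrative assumptions, not the paper's self-consistent multi-band LiFeAs calculation.

```python
# LDOS(r, E) = sum_n |u_n(r)|^2 delta(E - E_n) + |v_n(r)|^2 delta(E + E_n),
# with the delta functions broadened into Lorentzians of width eta (assumption).
import numpy as np

def ldos(E_n, u, v, E_grid, eta=1e-4):
    def lorentzian(x):
        return (eta / np.pi) / (x ** 2 + eta ** 2)
    rho = np.zeros((u.shape[1], E_grid.size))
    for En, un, vn in zip(E_n, u, v):
        rho += np.abs(un)[:, None] ** 2 * lorentzian(E_grid[None, :] - En)
        rho += np.abs(vn)[:, None] ** 2 * lorentzian(E_grid[None, :] + En)
    return rho

# Toy usage with random "eigenstates" on 100 lattice sites.
rng = np.random.default_rng(0)
n_states, n_sites = 50, 100
E_n = rng.uniform(0, 5e-3, n_states)            # positive BdG eigenvalues
u = rng.normal(size=(n_states, n_sites)) + 0j   # u_n(r)
v = rng.normal(size=(n_states, n_sites)) + 0j   # v_n(r)
rho = ldos(E_n, u, v, E_grid=np.linspace(-6e-3, 6e-3, 241))
print(rho.shape)                                # (100, 241): sites x energies
```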

    Sub-pixel resolving optofluidic microscope for on-chip cell imaging

    We report the implementation of a fully on-chip, lensless, sub-pixel resolving optofluidic microscope (SROFM). The device uses microfluidic flow to deliver specimens directly across a complementary metal oxide semiconductor (CMOS) sensor, generating a sequence of low-resolution (LR) projection images whose resolution is limited by the sensor's pixel size. This image sequence is then processed with a pixel super-resolution algorithm to reconstruct a single high-resolution (HR) image, in which features beyond the Nyquist rate of the LR images are resolved. We demonstrate the device's capabilities by imaging microspheres, the protist Euglena gracilis, and Entamoeba invadens cysts with sub-cellular resolution, and establish that our prototype has a resolution limit of 0.75 microns. Furthermore, we apply the same pixel super-resolution algorithm to reconstruct HR videos in which the dynamic interaction between the fluid and the sample, including the in-plane and out-of-plane rotation of the sample within the flow, can be monitored at high resolution. We believe that the combination of pixel super-resolution and optofluidic microscopy within our SROFM is a significant step toward a simple, cost-effective, high-throughput and highly compact imaging solution for biomedical and bioscience needs.
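
    A minimal shift-and-add sketch of pixel super-resolution follows, assuming the sub-pixel shift of each LR frame is already known (for example, estimated from the flow). The function name, the nearest-neighbour placement, and the upsampling factor are illustrative; the SROFM reconstruction algorithm may differ in its details.

```python
# Shift-and-add pixel super-resolution sketch (known shifts assumed).
import numpy as np

def shift_and_add(lr_frames, shifts, factor):
    """Place each LR pixel onto an HR grid 'factor' times finer and average.

    lr_frames : list of 2-D arrays of identical shape (H, W)
    shifts    : list of (dy, dx) sub-pixel shifts in LR-pixel units
    factor    : integer upsampling factor
    """
    H, W = lr_frames[0].shape
    acc = np.zeros((H * factor, W * factor))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:H, 0:W]
    for frame, (dy, dx) in zip(lr_frames, shifts):
        hy = np.clip(np.round((ys + dy) * factor).astype(int), 0, H * factor - 1)
        hx = np.clip(np.round((xs + dx) * factor).astype(int), 0, W * factor - 1)
        np.add.at(acc, (hy, hx), frame)   # accumulate intensities on the HR grid
        np.add.at(cnt, (hy, hx), 1)       # count contributions per HR pixel
    return acc / np.maximum(cnt, 1)       # HR pixels with no samples stay zero

# Toy usage: four LR frames (here just noise) at 4x upsampling.
rng = np.random.default_rng(0)
frames = [rng.random((32, 32)) for _ in range(4)]
shifts = [(0.0, 0.0), (0.0, 0.25), (0.25, 0.0), (0.25, 0.25)]
hr = shift_and_add(frames, shifts, factor=4)
print(hr.shape)   # (128, 128)
```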

    MildInt: Deep Learning-Based Multimodal Longitudinal Data Integration Framework

    As large amounts of heterogeneous biomedical data become available, numerous methods for integrating such datasets have been developed to extract complementary knowledge from multiple source domains. Recently, deep learning approaches have shown promising results in a variety of research areas. However, applying deep learning requires expertise in constructing an architecture that can take multimodal longitudinal data. Thus, in this paper, a deep learning-based Python package for data integration is developed. The Python package, a deep learning-based multimodal longitudinal data integration framework (MildInt), provides a preconstructed deep learning architecture for a classification task. MildInt comprises two learning phases: learning a feature representation from each modality of data, and training a classifier for the final decision. Adopting a deep architecture in the first phase leads to more task-relevant feature representations than a linear model. In the second phase, a linear regression classifier is used for detecting and investigating biomarkers from the multimodal data. Thus, by combining the linear model and the deep learning model, higher accuracy and better interpretability can be achieved. We validated the performance of our package using simulated and real data. For the real data, as a pilot study, we used clinical and multimodal neuroimaging datasets in Alzheimer's disease to predict disease progression. MildInt is capable of integrating multiple forms of numerical data, including time-series and non-time-series data, to extract complementary features from the multimodal dataset.
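
    A minimal PyTorch sketch of the two-phase design described above follows: one recurrent encoder per modality, then a single linear classification head on the concatenated representations. The module names, layer sizes, and the choice of a GRU encoder are assumptions for illustration and do not reflect MildInt's actual API.

```python
# Two-phase multimodal pipeline sketch: per-modality encoders + linear head.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Phase 1: encode one longitudinal modality into a fixed-length vector."""
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)

    def forward(self, x):                 # x: (batch, time, n_features)
        _, h = self.gru(x)
        return h[-1]                      # (batch, hidden)

class MultimodalClassifier(nn.Module):
    """Phase 2: linear classifier on the concatenated modality representations."""
    def __init__(self, feature_dims, hidden=16):
        super().__init__()
        self.encoders = nn.ModuleList(ModalityEncoder(d, hidden) for d in feature_dims)
        self.head = nn.Linear(hidden * len(feature_dims), 1)

    def forward(self, modalities):        # list of (batch, time, n_features_i)
        z = torch.cat([enc(x) for enc, x in zip(self.encoders, modalities)], dim=1)
        return torch.sigmoid(self.head(z))

# Toy usage: two modalities with different feature counts and visit counts.
model = MultimodalClassifier(feature_dims=[5, 3])
x1 = torch.randn(8, 4, 5)     # e.g. cognitive scores over 4 visits (hypothetical)
x2 = torch.randn(8, 6, 3)     # e.g. imaging summaries over 6 visits (hypothetical)
print(model([x1, x2]).shape)  # torch.Size([8, 1])
```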