118 research outputs found

    Identifying Light-curve Signals with a Deep Learning Based Object Detection Algorithm. II. A General Light Curve Classification Framework

    Full text link
    Vast amounts of astronomical photometric data are generated by various projects, requiring significant effort to identify variable stars and other object classes. In light of this, a general, widely applicable classification framework would simplify the task of designing custom classifiers. We present a novel deep learning framework for classifying light curves using a weakly supervised object detection model. Our framework identifies the optimal windows for both light curves and power spectra automatically, and zooms in on the corresponding data. This allows for automatic feature extraction from both the time and frequency domains, enabling our model to handle data across different scales and sampling intervals. We train our model on datasets obtained from both space-based and ground-based multi-band observations of variable stars and transients. We achieve an accuracy of 87% for combined variables and transient events, which is comparable to the performance of previous feature-based models. Our trained model can be applied directly to other missions, such as ASAS-SN, without any retraining or fine-tuning. To address known issues with miscalibrated predictive probabilities, we apply conformal prediction to generate robust predictive sets that guarantee true-label coverage with a given probability. Additionally, we incorporate various anomaly detection algorithms to empower our model with the ability to identify out-of-distribution objects. Our framework is implemented in the Deep-LC toolkit, an open-source Python package hosted on GitHub and PyPI. Comment: 26 pages, 19 figures, 6 tables. Submitted to AAS Journal. Code is available at https://github.com/ckm3/Deep-L
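    The conformal-prediction step is independent of the light-curve model itself and can be sketched generically. Below is a minimal split conformal prediction routine for any probabilistic classifier, not the Deep-LC API; the array shapes and the alpha level are illustrative assumptions.

        import numpy as np

        def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
            """Split conformal prediction for a K-class classifier.

            cal_probs:  (n, K) softmax outputs on a held-out calibration set
            cal_labels: (n,)   true labels for the calibration set
            test_probs: (m, K) softmax outputs on new objects
            Returns one label set per test object; each set contains the
            true label with probability at least 1 - alpha.
            """
            n = len(cal_labels)
            # Nonconformity score: one minus the probability of the true class.
            scores = 1.0 - cal_probs[np.arange(n), cal_labels]
            # Finite-sample-corrected quantile of the calibration scores.
            level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
            q = np.quantile(scores, level)
            # Keep every label whose nonconformity score is below the threshold.
            return [set(np.where(1.0 - p <= q)[0].tolist()) for p in test_probs]

    With alpha = 0.1, the returned sets contain the true class for at least 90% of exchangeable test objects, which is the coverage guarantee the abstract refers to.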

    Modeling the Resource Requirements of Convolutional Neural Networks on Mobile Devices

    Full text link
    Convolutional Neural Networks (CNNs) have revolutionized research in computer vision, due to their ability to capture complex patterns, resulting in high inference accuracies. However, the increasingly complex nature of these neural networks means that they are particularly suited to server computers with powerful GPUs. We envision that deep learning applications will eventually be widely deployed on mobile devices, e.g., smartphones, self-driving cars, and drones. Therefore, in this paper, we aim to understand the resource requirements (time, memory) of CNNs on mobile devices. First, by deploying several popular CNNs on mobile CPUs and GPUs, we measure and analyze the performance and resource usage for every layer of the CNNs. Our findings point out potential ways of optimizing performance on mobile devices. Second, we model the resource requirements of the different CNN computations. Finally, based on the measurement, profiling, and modeling, we build and evaluate our modeling tool, Augur, which takes a CNN configuration (descriptor) as input and estimates the compute time and resource usage of the CNN, to give insights about whether and how efficiently a CNN can be run on a given mobile platform. In doing so, Augur tackles several challenges: (i) how to overcome profiling and measurement overhead; (ii) how to capture the variance in different mobile platforms with different processors, memory, and cache sizes; and (iii) how to account for the variance in the number, type, and size of layers of the different CNN configurations.
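    The per-layer modeling idea can be sketched independently of the tool: estimate each layer's cost analytically from its descriptor, then calibrate against measurements. The function below is a hypothetical stand-in, not Augur's actual model.

        def conv_cost(h, w, c_in, c_out, k, stride=1, bytes_per_val=4):
            """Rough per-layer compute and memory model for a convolution."""
            h_out, w_out = h // stride, w // stride
            macs = h_out * w_out * c_out * c_in * k * k        # multiply-accumulates
            act_mem = h_out * w_out * c_out * bytes_per_val    # output activations
            weight_mem = c_out * c_in * k * k * bytes_per_val  # kernel weights
            return macs, act_mem + weight_mem

        # Example: the first layer of a VGG-style network on a 224x224 RGB image.
        macs, mem = conv_cost(224, 224, 3, 64, 3)
        print(f"{macs / 1e6:.0f} MMACs, {mem / 1e6:.1f} MB")

    Summing such estimates over a network's descriptor, with per-platform calibration constants, gives the kind of whole-model time and memory prediction the abstract describes.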

    Fully Point-wise Convolutional Neural Network for Modeling Statistical Regularities in Natural Images

    Full text link
    Modeling statistical regularity plays an essential role in ill-posed image processing problems. Recently, deep learning based methods have been presented to implicitly learn the statistical representation of pixel distributions in natural images and leverage it as a constraint to facilitate subsequent tasks, such as color constancy and image dehazing. However, existing CNN architectures are sensitive to the variability and diversity of pixel intensities within and between local regions, which may result in inaccurate statistical representations. To address this problem, this paper presents a novel fully point-wise CNN architecture for modeling statistical regularities in natural images. Specifically, we propose to randomly shuffle the pixels in the original images and feed the shuffled image to the CNN, making the network focus on statistical properties. Moreover, since the pixels in the shuffled image are independent and identically distributed, we can replace all the large convolution kernels in the CNN with point-wise (1×1) convolution kernels while maintaining its representation ability. Experimental results on two applications, color constancy and image dehazing, demonstrate the superiority of our proposed network over existing architectures, i.e., using 1/10 to 1/100 of the network parameters and computational cost while achieving comparable performance. Comment: 9 pages, 7 figures. To appear in ACM MM 201
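    A minimal PyTorch sketch of the core idea follows; the layer widths and depth are free choices, not the paper's architecture.

        import torch
        import torch.nn as nn

        class PointwiseStatNet(nn.Module):
            """Sketch: global pixel shuffling followed by 1x1 convolutions only."""
            def __init__(self, in_ch=3, hidden=64, out_ch=3):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(in_ch, hidden, kernel_size=1), nn.ReLU(),
                    nn.Conv2d(hidden, hidden, kernel_size=1), nn.ReLU(),
                    nn.Conv2d(hidden, out_ch, kernel_size=1),
                )

            def forward(self, x):                        # x: (B, C, H, W)
                b, c, h, w = x.shape
                # Shuffling destroys spatial structure, so only the pixel
                # statistics remain; 1x1 kernels then suffice.
                perm = torch.randperm(h * w, device=x.device)
                shuffled = x.flatten(2)[:, :, perm].view(b, c, h, w)
                return self.net(shuffled)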

    Seismic system reliability analysis of bridges using the multiplicative dimensional reduction method

    Get PDF
    A combined method of finite element reliability analysis and the multiplicative dimensional reduction method (M-DRM) is proposed for the system reliability analysis of practical bridge structures. The probability distribution function of a structural response is derived based on the maximum entropy principle. To illustrate the accuracy and efficiency of the proposed approach, a simply supported bridge structure is adopted and the failure probabilities obtained are compared with those from the Monte Carlo simulation method. The validated method is then applied to the system reliability analysis of a practical high-pier rigid-frame railway bridge located in a seismic-prone region. The finite element model of the bridge is developed using OpenSees, and the M-DRM method is used to analyse the structural system reliability under earthquake loading.
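    The Monte Carlo reference the M-DRM results are validated against is straightforward to sketch. In the paper the response comes from an OpenSees finite element model; the limit-state function and distributions below are hypothetical placeholders.

        import numpy as np

        rng = np.random.default_rng(0)

        def limit_state(resistance, demand):
            """Hypothetical limit state: g <= 0 means failure."""
            return resistance - demand

        n = 1_000_000
        resistance = rng.normal(5.0, 0.5, n)     # illustrative capacity
        demand = rng.lognormal(1.0, 0.4, n)      # illustrative seismic demand
        pf = np.mean(limit_state(resistance, demand) <= 0.0)
        print(f"Monte Carlo failure probability: {pf:.2e}")

    The appeal of M-DRM is precisely that it approximates this probability from far fewer response evaluations than the million-sample brute force above, which matters when each evaluation is a full finite element run.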

    A face recognition system for assistive robots

    Get PDF
    Assistive robots collaborating with people demand strong human-robot interaction capabilities. Recognizing the person the robot has to interact with is therefore paramount to providing a personalized service and reaching a satisfactory end-user experience. To this end, face recognition, a non-intrusive, automatic mechanism of identification using biometric identifiers from a user's face, has gained relevance in recent years, as advances in machine learning and the creation of huge public datasets have considerably improved state-of-the-art performance. In this work we study different open-source implementations of the typical components of state-of-the-art face recognition pipelines, including face detection, feature extraction, and classification, and propose a recognition system integrating the methods most suitable for use in assistive robots. Concretely, for face detection we have considered MTCNN, OpenCV's DNN, and OpenPose, while for feature extraction we have analyzed InsightFace and Facenet. We have made public an implementation of the proposed recognition framework, ready to be used by any robot running the Robot Operating System (ROS). The methods in the spotlight have been compared in terms of accuracy and performance on common benchmark datasets, namely FDDB and LFW, to aid the choice of the final system implementation, which has been tested on a real robotic platform. This work is supported by the Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech; the research projects WISER (DPI2017-84827-R), funded by the Spanish Government and financed by European Regional Development Funds (FEDER), and MoveCare (ICT-26-2016b-GA-732158), funded by the European H2020 programme; and by a postdoc contract from the I-PPIT-UMA program financed by the University of Málaga.
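    As a sketch of the classification stage only (the detector and embedder would be whichever of the compared methods is chosen; the gallery layout here is an assumption):

        import numpy as np

        def recognize(embedding, gallery, threshold=0.5):
            """Match a face embedding against a gallery of known identities.

            gallery: dict mapping person name -> unit-norm reference embedding.
            Returns the best-matching name, or None if no cosine similarity
            exceeds the threshold (an unknown person).
            """
            query = embedding / np.linalg.norm(embedding)
            best_name, best_sim = None, threshold
            for name, ref in gallery.items():
                sim = float(query @ ref)   # cosine similarity of unit vectors
                if sim > best_sim:
                    best_name, best_sim = name, sim
            return best_name

    The threshold trades false accepts against false rejects, which is why benchmarking on LFW-style verification pairs matters before deploying on the robot.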

    Simple algebras of Weyl type

    Full text link
    Over a field F of any characteristic, for a commutative associative algebra A with an identity element and for the polynomial algebra F[D] of a commutative derivation subalgebra D of A, the associative and Lie algebras of Weyl type on the same vector space A[D] = A ⊗ F[D] are defined. It is proved that A[D], as a Lie algebra (modulo its center) or as an associative algebra, is simple if and only if A is D-simple and A[D] acts faithfully on A. This yields a large family of simple algebras. Comment: 9 pages, LaTeX
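    A minimal instance, standard though not spelled out in the abstract, recovers the classical Weyl algebra; in LaTeX notation:

        % Take A = F[t] and D = F\partial, where \partial = d/dt.
        % Multiplication in A[D] = A \otimes F[D] is determined by the Leibniz rule
        \partial \cdot a \;=\; a\,\partial + \partial(a), \qquad a \in A,
        % which gives the defining relation of the first Weyl algebra:
        [\partial, t] \;=\; \partial(t) \;=\; 1 .

    In characteristic 0, A = F[t] is D-simple and A[D] acts faithfully on A, so the criterion recovers the well-known simplicity of the first Weyl algebra.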

    Temporal Cross-Media Retrieval with Soft-Smoothing

    Full text link
    Multimedia information has strong temporal correlations that shape the way modalities co-occur over time. In this paper we study the dynamic nature of multimedia and social-media information, where the temporal dimension emerges as a strong source of evidence for learning the temporal correlations across visual and textual modalities. So far, cross-media retrieval models have explored the correlations between different modalities (e.g., text and image) to learn a common subspace in which semantically similar instances lie in the same neighbourhood. Building on such knowledge, we propose a novel temporal cross-media neural architecture that departs from standard cross-media methods by explicitly accounting for the temporal dimension through temporal subspace learning. The model is softly constrained with temporal and inter-modality constraints that guide the new subspace learning task by favouring temporal correlations between semantically similar and temporally close instances. Experiments on three distinct datasets show that accounting for time turns out to be important for cross-media retrieval: the proposed method outperforms a set of baselines on the task of temporal cross-media retrieval, demonstrating its effectiveness at temporal subspace learning. Comment: To appear in ACM MM 201
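    The abstract does not give the loss, so the sketch below shows just one plausible form of a soft temporal constraint: the pull between matched image/text pairs is down-weighted as their timestamps drift apart. The exponential kernel and the tau scale are assumptions.

        import torch

        def soft_temporal_loss(img_emb, txt_emb, t_img, t_txt, tau=30.0):
            """One plausible softly time-weighted alignment loss.

            img_emb, txt_emb: (N, d) embeddings of matched image/text pairs
            t_img, t_txt:     (N,) timestamps, e.g. in days
            """
            img = torch.nn.functional.normalize(img_emb, dim=1)
            txt = torch.nn.functional.normalize(txt_emb, dim=1)
            dist = 1.0 - (img * txt).sum(dim=1)             # cosine distance
            w = torch.exp(-torch.abs(t_img - t_txt) / tau)  # temporal proximity
            return (w * dist).mean()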

    Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness

    Full text link
    In recent years, the notion of local robustness (or robustness for short) has emerged as a desirable property of deep neural networks. Intuitively, robustness means that small perturbations to an input do not cause the network to perform misclassifications. In this paper, we present a novel algorithm for verifying robustness properties of neural networks. Our method synergistically combines gradient-based optimization methods for counterexample search with abstraction-based proof search to obtain a sound and (δ-)complete decision procedure. Our method also employs a data-driven approach to learn a verification policy that guides abstract interpretation during proof search. We have implemented the proposed approach in a tool called Charon and experimentally evaluated it on hundreds of benchmarks. Our experiments show that the proposed approach significantly outperforms three state-of-the-art tools, namely AI^2, Reluplex, and ReluVal.
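    The counterexample-search half of such a procedure is essentially projected gradient ascent inside the perturbation ball; below is a generic sketch, not Charon's code, which additionally falls back to abstraction-based proof search.

        import torch
        import torch.nn.functional as F

        def pgd_counterexample(model, x, label, eps=0.03, steps=40, lr=0.01):
            """Search the L-inf ball of radius eps for a misclassified input.

            x: a single example of shape (1, C, H, W); label: shape (1,).
            Returns an adversarial input, or None if the search fails
            (in which case a verifier would switch to proof search).
            """
            delta = torch.zeros_like(x, requires_grad=True)
            for _ in range(steps):
                loss = F.cross_entropy(model(x + delta), label)
                loss.backward()
                with torch.no_grad():
                    delta += lr * delta.grad.sign()   # ascend the loss
                    delta.clamp_(-eps, eps)           # project into the ball
                    delta.grad.zero_()
                    if model(x + delta).argmax(dim=1).item() != label.item():
                        return (x + delta).detach()   # counterexample found
            return None

    Optimization alone can only refute robustness; proving it requires the abstraction side, which is why the two searches are combined.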

    Real-time detection of DNA interactions with long-period fiber-grating-based biosensor

    Get PDF
    Using an optical biosensor based on a dual-peak long-period fiber grating, we have demonstrated the detection of interactions between biomolecules in real time. Silanization of the grating surface was successfully realized for the covalent immobilization of probe DNA, which was subsequently hybridized with the complementary target DNA sequence. Notably, the biosensor was reusable once the hybridized target DNA had been stripped from the grating surface, demonstrating that it can be used multiple times. © 2007 Optical Society of America
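    Readout for such a grating sensor amounts to tracking the shift of the resonance dip in the transmission spectrum over time; an illustrative numpy sketch, not the paper's instrumentation code:

        import numpy as np

        def resonance_shift(wavelengths, spectrum_before, spectrum_after):
            """Estimate the shift of a transmission dip, in the units of the
            wavelength axis. A shift of the resonance wavelength signals a
            refractive-index change at the grating surface, i.e.
            probe-target hybridization."""
            lam_before = wavelengths[np.argmin(spectrum_before)]
            lam_after = wavelengths[np.argmin(spectrum_after)]
            return lam_after - lam_before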

    Multi-view Face Detection Using Deep Convolutional Neural Networks

    Full text link
    In this paper we consider the problem of multi-view face detection. While there has been significant research on this problem, current state-of-the-art approaches for this task require annotation of facial landmarks, e.g. TSM [25], or annotation of face poses [28, 22]. They also require training dozens of models to fully capture faces in all orientations, e.g. 22 models in the HeadHunter method [22]. In this paper we propose Deep Dense Face Detector (DDFD), a method that does not require pose/landmark annotation and is able to detect faces in a wide range of orientations using a single model based on deep convolutional neural networks. The proposed method has minimal complexity; unlike other recent deep learning object detection methods [9], it does not require additional components such as segmentation, bounding-box regression, or SVM classifiers. Furthermore, we analyzed scores of the proposed face detector for faces in different orientations and found that 1) the proposed method is able to detect faces from different angles and can handle occlusion to some extent, and 2) there seems to be a correlation between the distribution of positive examples in the training set and the scores of the proposed face detector. The latter suggests that the proposed method's performance can be further improved by using better sampling strategies and more sophisticated data augmentation techniques. Evaluations on popular face detection benchmark datasets show that our single-model face detector has similar or better performance compared to previous methods, which are more complex and require annotations of either different poses or facial landmarks. Comment: in International Conference on Multimedia Retrieval 2015 (ICMR)
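    The single-model strategy can be pictured as scoring every window of an image pyramid with one fully convolutional network and thresholding the resulting heatmaps; a hedged PyTorch sketch with a placeholder score network:

        import torch
        import torch.nn.functional as F

        def detect_faces(score_net, image, scales=(1.0, 0.7, 0.5), thresh=0.9):
            """Run one fully convolutional face scorer over an image pyramid.

            score_net: maps a (1, 3, H, W) image to a (1, 1, H', W') map of
                       per-window face probabilities (placeholder for the
                       trained detector).
            Returns (scale, row, col) tuples for windows scoring above thresh;
            a real detector would add non-maximum suppression.
            """
            hits = []
            for s in scales:
                h = int(image.shape[2] * s)
                w = int(image.shape[3] * s)
                scaled = F.interpolate(image, size=(h, w), mode="bilinear",
                                       align_corners=False)
                scores = score_net(scaled)[0, 0]          # dense score map
                for r, c in torch.nonzero(scores > thresh):
                    hits.append((s, int(r), int(c)))
            return hits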