
    Portable Biometric System of High Sensitivity Absorption Detection


    A Large Panel Two-CCD Camera Coordinate System with an Alternate-Eight-Matrix Look-Up Table Algorithm

    In this study, a novel positioning model for a dual-CCD camera calibration system with an Alternate-Eight-Matrix (AEM) Look-Up Table (LUT) was proposed. Two CCD cameras were fixed on either side of a large screen to overcome Field-Of-View (FOV) limitations. The first through fourth AEM-LUTs were used to compute the corresponding positions of intermediate blocks on the screen captured by the right-side camera: two of these matrices stored the coordinate-mapping data of the target in a specific space, while the others stored the gray-level threshold values at different positions. Similarly, the fifth through eighth AEM-LUTs were used to compute the corresponding positions of intermediate blocks on the screen captured by the left-side camera. Experimental results showed that the problems of dead angles and non-uniform light fields were solved. In addition, rapid and precise positioning results can be obtained by the proposed method.
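    The LUT idea in this abstract — per-pixel matrices mapping camera pixels to screen coordinates, paired with per-pixel gray-level thresholds — can be sketched as follows. This is a hypothetical miniature illustration, not the paper's AEM algorithm: the toy resolution, the `lut_x`/`lut_y`/`lut_thr` names, and the synthetic mapping values are all assumptions.

    ```python
    import numpy as np

    # Toy 4x4 "camera": each pixel has a precomputed screen coordinate
    # (lut_x, lut_y) and a per-pixel gray-level detection threshold,
    # standing in for the calibration matrices described in the abstract.
    H, W = 4, 4
    lut_x = np.arange(H * W, dtype=float).reshape(H, W)        # screen X per pixel
    lut_y = 2.0 * np.arange(H * W, dtype=float).reshape(H, W)  # screen Y per pixel
    lut_thr = np.full((H, W), 128.0)                           # gray-level threshold

    def locate(frame):
        """Return screen coordinates of pixels brighter than their threshold."""
        mask = frame > lut_thr
        return np.stack([lut_x[mask], lut_y[mask]], axis=1)
    ```

    Storing a threshold per position is what lets such a scheme tolerate a non-uniform light field: bright and dim regions of the screen each get their own cutoff.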

    Systematic Design of Myopic Ophthalmic Lens

    The aim of this research is to design a myopic ophthalmic lens using ZEMAX optical design software. The lens is designed by integrating the effects of the myopic eye. The first design, "eyeglass 1", a -0.93D ophthalmic lens for -1D myopia, started from adjusting a design based purely on the lens-maker's formula via ZEMAX simulation. It shows that a lighter-power ophthalmic lens is more suitable for -1D myopia, verifying that spectacle design should take into account the combined effects of the ophthalmic lens and the eye. When the lens is matched to three viewing configurations (far, middle, and near view points) according to the human eye structure, an aspheric surface is introduced to the lens ("eyeglass 2"), which gives more freedom to correct the aberrations. Finally, we show that choosing the "far distance" configuration gives an even more suitable ophthalmic lens for myopia, as demonstrated by "eyeglass 3 & 4". The MTF value is about 0.3 at a spatial frequency of 83 lp/mm (20/20 vision).

    The Life and Death of Dense Molecular Clumps in the Large Magellanic Cloud

    We report the results of a high spatial (parsec) resolution HCO+ (J = 1-0) and HCN (J = 1-0) emission survey toward the giant molecular clouds of the star formation regions N105, N113, N159, and N44 in the Large Magellanic Cloud. The HCO+ and HCN observations, at 89.2 and 88.6 GHz respectively, were conducted in the compact configuration of the Australia Telescope Compact Array. The emission is imaged into individual clumps with masses between 10^2 and 10^4 solar masses and radii of <1 pc to ~2 pc. Many of the clumps are coincident with indicators of current massive star formation, indicating that many are associated with deeply embedded forming stars and star clusters. We find that massive YSO-bearing clumps tend to be larger (>1 pc), more massive (M > 10^3 solar masses), and have higher surface densities (~1 g cm^-2), while clumps without signs of star formation are smaller (<1 pc), less massive (M < 10^3 solar masses), and have lower surface densities (~0.1 g cm^-2). The dearth of massive (M > 10^3 solar masses) clumps not bearing massive YSOs suggests that the onset of star formation occurs rapidly once a clump has attained physical properties favorable to massive star formation. Using a large sample of LMC massive YSO mid-IR spectra, we estimate that ~2/3 of the massive YSOs for which there are Spitzer mid-IR spectra are no longer located in molecular clumps; we estimate that these young stars/clusters have destroyed their natal clumps on a time scale of at least 3 x 10^5 yrs.
    Comment: Accepted to ApJ 3-19-201
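    The surface densities quoted above follow the usual mean-column definition Σ = M / (π R²). A minimal unit-conversion sketch (the function name and the use of a simple mean over π R² are my assumptions; the paper's quoted values come from its own measurements, not from this formula):

    ```python
    import math

    M_SUN_G = 1.989e33  # solar mass in grams (standard value)
    PC_CM = 3.086e18    # parsec in centimeters (standard value)

    def surface_density(mass_msun, radius_pc):
        """Mean surface density Sigma = M / (pi R^2), returned in g cm^-2."""
        mass_g = mass_msun * M_SUN_G
        radius_cm = radius_pc * PC_CM
        return mass_g / (math.pi * radius_cm ** 2)
    ```

    For example, a 10^3 solar-mass clump of radius 0.5 pc has a mean surface density of a few tenths of a g cm^-2 under this definition.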

    Multitask Learning for Time Series Data with 2D Convolution

    Multitask learning (MTL) aims to develop a unified model that can handle a set of closely related tasks simultaneously. By optimizing the model across multiple tasks, MTL generally surpasses its non-MTL counterparts in terms of generalizability. Although MTL has been extensively researched in various domains such as computer vision, natural language processing, and recommendation systems, its application to time series data has received limited attention. In this paper, we investigate the application of MTL to the time series classification (TSC) problem. However, when we integrate the state-of-the-art 1D convolution-based TSC model with MTL, the performance of the TSC model actually deteriorates. By comparing the 1D convolution-based models with the Dynamic Time Warping (DTW) distance function, it appears that the underwhelming results stem from the limited expressive power of the 1D convolutional layers. To overcome this challenge, we propose a novel design for a 2D convolution-based model that enhances the model's expressiveness. Leveraging this advantage, our proposed method outperforms competing approaches on both the UCR Archive and an industrial transaction TSC dataset.
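    The DTW distance used as the comparison baseline above is the classic dynamic-programming alignment between two series. A minimal sketch of that standard algorithm (not the paper's 2D-convolution model; the absolute-difference local cost is the common textbook choice):

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic Time Warping distance via the standard O(n*m) DP table.

        D[i, j] holds the cost of the best alignment of a[:i] with b[:j];
        each cell extends one of the three neighboring partial alignments.
        """
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]
    ```

    Because the warping path can stretch one series against the other, sequences that differ only by local time shifts get a distance of zero — the elasticity that 1D convolutions struggle to express.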

    Toward a Foundation Model for Time Series Data

    A foundation model is a machine learning model trained on a large and diverse set of data, typically using self-supervised learning-based pre-training techniques, that can be adapted to various downstream tasks. However, current research on time series pre-training has predominantly focused on models trained exclusively on data from a single domain. As a result, these models possess domain-specific knowledge that may not be easily transferable to time series from other domains. In this paper, we aim to develop an effective time series foundation model by leveraging unlabeled samples from multiple domains. To achieve this, we repurposed the publicly available UCR Archive and evaluated four existing self-supervised learning-based pre-training methods, along with a novel method, on the datasets. We tested these methods using four popular neural network architectures for time series to understand how the pre-training methods interact with different network designs. Our experimental results show that pre-training improves downstream classification tasks by enhancing the convergence of the fine-tuning process. Furthermore, we found that the proposed pre-training method, when combined with the Transformer model, outperforms the alternatives.
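    A common self-supervised pre-training objective of the kind surveyed above is masked reconstruction: hide random timesteps and train the model to recover them from context. A hypothetical toy version of that objective (the function name is my invention, and a trivial neighbor-average baseline stands in for the neural network that would actually be trained):

    ```python
    import numpy as np

    def masked_pretrain_loss(series, mask_ratio=0.15, rng=None):
        """Illustrative masked-reconstruction objective for one series.

        Random interior timesteps are 'masked'; a stand-in predictor
        (the average of the two neighbors) tries to recover them, and the
        loss is the mean squared error on the masked positions only.
        """
        rng = np.random.default_rng(0) if rng is None else rng
        series = np.asarray(series, dtype=float)
        n = len(series)
        n_masked = max(1, int(mask_ratio * n))
        idx = rng.choice(n - 2, size=n_masked, replace=False) + 1  # interior only
        pred = 0.5 * (series[idx - 1] + series[idx + 1])
        return float(np.mean((pred - series[idx]) ** 2))
    ```

    In an actual pre-training setup this loss would drive gradient updates of an encoder; the point of the sketch is only the shape of the objective, which needs no labels and so can pool series from many domains.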