415 research outputs found

    Optimal surface profile design of deployable mesh reflectors via a force density strategy

    Based on a force density method coupled with optimal design of node positions, a novel approach for the optimal surface profile design of mesh reflectors is presented. Uniform tension is achieved by iterating on the force density coefficients. The positions of the net nodes are recalculated in each iteration so that the faceting RMS error of the reflector surface is minimized. Applications to both prime-focus and offset configurations are demonstrated, and the simulation results show the effectiveness of the proposed approach.
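    The following is a minimal, hedged sketch of a force-density iteration of the kind described above: node positions are solved from the standard force-density equations, and the densities are then rescaled toward a uniform target tension. The connectivity matrices, loads, and target tension are illustrative placeholders, and the paper's RMS-error-driven repositioning of nodes onto the working paraboloid is not reproduced.

    import numpy as np

    def fdm_positions(C_free, C_fix, q, X_fix, P):
        # Standard force-density step: solve free-node coordinates for given densities q.
        # C_free: (m, n_free), C_fix: (m, n_fix) branch-node matrices; X_fix: (n_fix, 3); P: (n_free, 3) loads.
        Q = np.diag(q)
        D = C_free.T @ Q @ C_free          # (n_free, n_free) force-density matrix
        D_fix = C_free.T @ Q @ C_fix       # coupling to the fixed (boundary) nodes
        return np.linalg.solve(D, P - D_fix @ X_fix)

    def uniform_tension_fdm(C_free, C_fix, q0, X_fix, P, t_target, n_iter=50):
        # Iterate: positions from the current densities, then rescale densities so that
        # every member tension t_j = q_j * l_j approaches the uniform target t_target.
        C = np.hstack([C_free, C_fix])     # full connectivity, free nodes listed first
        q = np.asarray(q0, dtype=float)
        for _ in range(n_iter):
            X_free = fdm_positions(C_free, C_fix, q, X_fix, P)
            X = np.vstack([X_free, X_fix])
            lengths = np.linalg.norm(C @ X, axis=1)   # member lengths
            q = t_target / lengths
        return X_free, q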

    Towards End-to-end Car License Plate Location and Recognition in Unconstrained Scenarios

    Benefiting from the rapid development of convolutional neural networks, the performance of car license plate detection and recognition has been greatly improved. Nonetheless, challenges still exist, especially for real-world applications. In this paper, we present an efficient and accurate framework to solve the license plate detection and recognition tasks simultaneously. It is a lightweight and unified deep neural network that can be optimized end-to-end and runs in real time. Specifically, for unconstrained scenarios, an anchor-free method is adopted to efficiently detect the bounding box and four corners of a license plate, which are used to extract and rectify the target region features. Then, a novel convolutional neural network branch is designed to further extract features of characters without segmentation. Finally, the recognition task is treated as a sequence labelling problem, which is solved directly by Connectionist Temporal Classification (CTC). Several public datasets, including images collected from different scenarios under various conditions, are chosen for evaluation. Extensive experiments indicate that the proposed method significantly outperforms previous state-of-the-art methods in both speed and precision.
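    As a rough illustration of the two post-detection steps described above, the sketch below rectifies a plate region from four predicted corner points and greedy-decodes per-timestep character logits with CTC (collapse repeats, drop blanks). The corner predictions, the recognition branch, and the character set are assumed to come from elsewhere; this is not the paper's network.

    import cv2
    import numpy as np

    def rectify_plate(image, corners, out_w=160, out_h=48):
        # Warp the quadrilateral given by `corners` (4x2, ordered tl-tr-br-bl) to a rectangle.
        dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
        M = cv2.getPerspectiveTransform(np.float32(corners), dst)
        return cv2.warpPerspective(image, M, (out_w, out_h))

    def ctc_greedy_decode(logits, charset, blank=0):
        # logits: (T, num_classes) per-timestep scores from the recognition branch.
        best = logits.argmax(axis=1)
        chars, prev = [], blank
        for k in best:
            if k != blank and k != prev:      # collapse repeats, skip blanks
                chars.append(charset[k - 1])  # assumes index 0 is the CTC blank
            prev = k
        return "".join(chars)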

    PRL1, an RNA-Binding Protein, Positively Regulates the Accumulation of miRNAs and siRNAs in Arabidopsis

    The evolutionarily conserved WD-40 protein PRL1 plays important roles in immunity and development. Here we show that PRL1 is required for the accumulation of microRNAs (miRNAs) and small interfering RNAs (siRNAs). PRL1 positively influences the processing of miRNA primary transcripts (pri-miRNAs) and double-stranded RNAs (dsRNAs). Furthermore, PRL1 interacts with the pri-miRNA processor, DCL1, and the dsRNA processors (DCL3 and DCL4). These results suggest that PRL1 may function as a general factor to promote the production of miRNAs and siRNAs. We also show that PRL1 is an RNA-binding protein and associates with pri-miRNAs in vivo. In addition, the prl1 mutation reduces pri-miRNA levels without affecting pri-miRNA transcription. These results suggest that PRL1 may stabilize pri-miRNAs and function as a co-factor to enhance DCL1 activity. We further reveal a genetic interaction between PRL1 and CDC5, which interacts with PRL1 and regulates the transcription and processing of pri-miRNAs. Both miRNA and pri-miRNA levels are lower in cdc5 prl1 than in either cdc5 or prl1. However, the processing efficiency of pri-miRNAs in cdc5 prl1 is similar to that in cdc5 and slightly lower than that in prl1. Based on these results, we propose that CDC5 and PRL1 cooperatively regulate pri-miRNA levels, which results in their synergistic effects on miRNA accumulation, while they function together as a complex to enhance DCL1 activity.

    Capacity Control of ReLU Neural Networks by Basis-path Norm

    Recently, the path norm was proposed as a new capacity measure for neural networks with the Rectified Linear Unit (ReLU) activation function, which takes the rescaling-invariant property of ReLU into account. It has been shown that the generalization error bound in terms of the path norm explains the empirical generalization behaviors of ReLU neural networks better than those of other capacity measures. Moreover, optimization algorithms that add the path norm to the loss function as a regularization term, such as Path-SGD, have been shown to achieve better generalization performance. However, the path norm counts the values of all paths, and hence the capacity measure based on it could be improperly influenced by dependencies among different paths. It is also known that each path of a ReLU network can be represented by a small group of linearly independent basis paths through multiplication and division operations, which indicates that the generalization behavior of the network depends on only a few basis paths. Motivated by this, we propose a new norm, the Basis-path Norm, based on a group of linearly independent paths, to measure the capacity of neural networks more accurately. We establish a generalization error bound based on this basis-path norm, and show via extensive experiments that it explains the generalization behaviors of ReLU networks more accurately than previous capacity measures. In addition, we develop optimization algorithms which minimize the empirical risk regularized by the basis-path norm. Our experiments on benchmark datasets demonstrate that the proposed regularization method achieves clearly better performance on the test set than previous regularization approaches.
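    For orientation, the sketch below computes the plain path norm of a stack of linear ReLU layers (the square root of the sum, over all input-output paths, of the products of squared weights) and adds it to the training loss as a regularizer. The construction that distinguishes the paper's norm, i.e. restricting the sum to a linearly independent set of basis paths, is not reproduced here; the network, data, and regularization weight are placeholders.

    import torch
    import torch.nn as nn

    def path_norm(linears):
        # Propagate element-wise squared weights input->output; the final sum over
        # output units equals the sum over all paths of products of squared weights.
        v = None
        for layer in linears:                 # list of nn.Linear, in forward order
            W2 = layer.weight.pow(2)          # (out, in)
            v = W2.sum(dim=1) if v is None else W2 @ v
        return v.sum().sqrt()

    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    linears = [m for m in model if isinstance(m, nn.Linear)]
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    loss = nn.functional.cross_entropy(model(x), y) + 1e-4 * path_norm(linears)
    loss.backward()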

    Nonlinear dynamics of full-range CNNs with time-varying delays and variable coefficients

    In this article, the dynamical behaviours of full-range cellular neural networks (FRCNNs) with variable coefficients and time-varying delays are considered. First, an improved model of the FRCNNs is proposed, and the existence and uniqueness of the solution are studied by means of differential inclusions and set-valued analysis. Second, by using the Hardy inequality, matrix analysis, and the Lyapunov functional method, we derive criteria for global exponential stability (GES). Finally, some examples are provided to verify the correctness of the theoretical results.
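    For reference, a representative delayed full-range CNN can be written as the differential inclusion below, with the state confined to the hypercube \(\mathcal{H} = [-1,1]^n\) through the normal cone \(N_{\mathcal{H}}\); the exact coefficient structure and delay assumptions of the paper may differ. The second display recalls the usual form of a global exponential stability estimate.

    % Representative delayed FRCNN as a differential inclusion (time-varying
    % coefficient matrices D, A, B and delay tau; f is the activation):
    \[
      \dot{x}(t) \in -D(t)\,x(t) + A(t)\,f\bigl(x(t)\bigr)
                   + B(t)\,f\bigl(x(t-\tau(t))\bigr) + u(t)
                   - N_{\mathcal{H}}\bigl(x(t)\bigr),
      \qquad \mathcal{H} = [-1,1]^{n}.
    \]
    % Global exponential stability (GES) of an equilibrium x^* in the usual sense:
    \[
      \|x(t)-x^{*}\| \le M\,e^{-\varepsilon (t-t_{0})}
      \sup_{s\in[t_{0}-\bar{\tau},\,t_{0}]}\|x(s)-x^{*}\|,
      \qquad M \ge 1,\ \varepsilon > 0 .
    \]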

    Invertible Rescaling Network and Its Extensions

    Image rescaling is a commonly used bidirectional operation, which first downscales high-resolution images to fit various display screens or to be storage- and bandwidth-friendly, and afterward upscales the corresponding low-resolution images to recover the original resolution or the details in zoomed-in images. However, the non-injective downscaling mapping discards high-frequency content, leading to an ill-posed inverse restoration task. This can be abstracted as a general image degradation-restoration problem with information loss. In this work, we propose a novel invertible framework to handle this general problem, which models the bidirectional degradation and restoration from a new perspective, i.e., as an invertible bijective transformation. The invertibility enables the framework to model the information loss of the degradation in the form of a distribution, which mitigates the ill-posed problem during restoration. To be specific, we develop invertible models that generate valid degraded images and meanwhile transform the distribution of the lost content into a fixed distribution over a latent variable during the forward degradation. Restoration is then made tractable by applying the inverse transformation to the generated degraded image together with a randomly drawn latent variable. We start from image rescaling and instantiate the model as the Invertible Rescaling Network (IRN), which can be easily extended to the similar decolorization-colorization task. We further propose to combine the invertible framework with existing degradation methods such as image compression for wider applications. Experimental results demonstrate the significant improvement of our model over existing methods in terms of both quantitative and qualitative evaluations of upscaling and colorizing reconstruction from downscaled and decolorized images, and the rate-distortion of image compression. Comment: Accepted by IJC
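    The toy block below illustrates only the bijective forward/inverse mechanics behind this idea: a space-to-depth step makes downscaling lossless, an additive coupling layer mixes channels invertibly, three channels are kept as the "low-resolution" image, and the remaining channels play the role of the latent z that is re-drawn from a Gaussian at restoration time. Layer sizes and the coupling network are arbitrary placeholders, not the IRN architecture or its training losses.

    import torch
    import torch.nn as nn

    class ToyInvertibleRescaler(nn.Module):
        def __init__(self, channels=3, scale=2):
            super().__init__()
            self.down = nn.PixelUnshuffle(scale)   # lossless space-to-depth: (3,H,W) -> (12,H/2,W/2)
            self.up = nn.PixelShuffle(scale)
            self.c_lr = channels                               # channels kept as the "LR" image
            self.c_z = channels * scale * scale - channels     # channels treated as the latent z
            self.net = nn.Sequential(              # predicts the additive shift for the latent part
                nn.Conv2d(self.c_lr, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, self.c_z, 3, padding=1))

        def forward(self, hr):
            u = self.down(hr)
            u_lr, u_z = u[:, :self.c_lr], u[:, self.c_lr:]
            z = u_z + self.net(u_lr)               # additive coupling: exactly invertible
            return u_lr, z                         # training would push z toward N(0, I)

        def inverse(self, lr, z=None):
            if z is None:                          # lost detail is re-drawn from the prior
                z = torch.randn(lr.size(0), self.c_z, *lr.shape[2:], device=lr.device)
            u_z = z - self.net(lr)
            return self.up(torch.cat([lr, u_z], dim=1))

    block = ToyInvertibleRescaler()
    hr = torch.randn(1, 3, 64, 64)
    lr, z = block(hr)
    assert torch.allclose(block.inverse(lr, z), hr, atol=1e-5)   # exact reconstruction with the true z
    sr = block.inverse(lr)                                       # restoration with a resampled z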