9,129 research outputs found

    Spontaneous Formation of Stable Capillary Bridges for Firming Compact Colloidal Microstructures in Phase Separating Liquids: A Computational Study

    Computer modeling and simulations are performed to investigate capillary bridges spontaneously formed between closely packed colloidal particles in phase-separating liquids. The simulations reveal a self-stabilization mechanism that operates through diffusive equilibrium of two-phase liquid morphologies. This mechanism gives the capillary bridges that form spontaneously during liquid-solution phase separation the desired microstructural stability and uniformity, in contrast to conventional coarsening processes during phase separation. The volume-fraction limit of the separated liquid phases, as well as the adhesion strength and thermodynamic stability of the capillary bridges, are discussed, and capillary bridge formation in various compact colloid assemblies is considered. The study sheds light on a promising route to in-situ (in-liquid) firming of fragile colloidal crystals and other compact colloidal microstructures via capillary bridges.

    Analysis of ship registration system and study on selection of ship registration system for China


    An oil painters recognition method based on cluster multiple kernel learning algorithm

    Much image-processing research focuses on natural images, for tasks such as classification and clustering, while research on recognizing artworks such as oil paintings, from feature extraction to classifier design, is comparatively scarce. This paper focuses on oil painter recognition and works toward a mobile application that recognizes the painter. We propose a cluster multiple kernel learning algorithm that extracts oil-painting features from three aspects: color, texture, and spatial layout, and generates multiple candidate kernels with different kernel functions. By clustering the numerous candidate kernels, we select the sub-kernels with better classification performance and then use the traditional multiple kernel learning algorithm to carry out multi-feature fusion classification. The algorithm achieves better results on the Painting91 dataset than applying traditional multiple kernel learning directly.
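    The kernel-clustering step described above can be illustrated with a minimal numpy sketch. The RBF kernels, the alignment-style similarity, and the 0.95 threshold below are assumptions chosen for illustration, not the paper's actual features or pipeline.

    ```python
    import numpy as np

    def rbf_kernel(X, gamma):
        # Gram matrix of a Gaussian (RBF) kernel with bandwidth parameter gamma
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    def alignment(K1, K2):
        # kernel-alignment-style similarity between two Gram matrices
        return (K1 * K2).sum() / (np.linalg.norm(K1) * np.linalg.norm(K2))

    rng = np.random.default_rng(1)
    X = rng.normal(size=(30, 4))          # toy data standing in for image features
    gammas = [0.01, 0.02, 1.0, 2.0]       # hypothetical candidate bandwidths
    kernels = [rbf_kernel(X, g) for g in gammas]

    # pairwise similarity over the candidate kernels
    S = np.array([[alignment(Ki, Kj) for Kj in kernels] for Ki in kernels])

    # greedy clustering: group kernels whose alignment exceeds a threshold
    clusters = []
    for i in range(len(kernels)):
        for c in clusters:
            if S[i, c[0]] > 0.95:
                c.append(i)
                break
        else:
            clusters.append([i])

    # keep one representative per cluster, then combine them uniformly;
    # a full method would instead learn the combination weights (MKL)
    reps = [kernels[c[0]] for c in clusters]
    K_combined = sum(reps) / len(reps)
    ```

    Selecting one representative per cluster of near-duplicate kernels is what keeps the subsequent multiple-kernel-learning step small; the uniform combination here is only a placeholder for the learned fusion the paper describes.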

    Silver Nanoparticle Aggregates as Highly Efficient Plasmonic Antennas for Fluorescence Enhancement

    The enhanced local fields around plasmonic structures can enhance the excitation and modify the emission quantum yield of nearby fluorophores. So far, high enhancement of fluorescence intensity from dye molecules has been demonstrated using bow-tie gap antennas made by e-beam lithography. However, the high manufacturing cost, and the fact that there is currently no effective way to place fluorophores only at the gap, prevent the use of these structures for enhancing fluorescence-based biochemical assays. We report the simultaneous modification of the fluorescence intensity and lifetime of dye-labeled DNA in the presence of aggregated silver nanoparticles. The nanoparticle aggregates act as efficient plasmonic antennas, yielding more than two orders of magnitude enhancement of the average fluorescence. This is comparable to the best reported fluorescence enhancement for a single molecule, but here it applies to the average signal detected from all fluorophores in the system, highlighting the remarkable efficiency of this system for surface-enhanced fluorescence. Moreover, we show that the fluorescence intensity enhancement varies with the plasmon resonance position, and we measure a significant (300×) reduction of the fluorescence lifetime. Both observations agree with the electromagnetic model of surface-enhanced fluorescence.

    Dropout Drops Double Descent

    In this paper, we find and show that double descent can be easily suppressed by adding a single dropout layer before the fully connected linear layer. The surprising double-descent phenomenon has drawn public attention in recent years: the prediction error rises and then drops as either the sample size or the model size increases. We show, both theoretically and empirically, that this phenomenon can be alleviated by using optimal dropout in the linear regression model $y = X\beta^0 + \epsilon$ with $X \in \mathbb{R}^{n \times p}$, and in nonlinear random feature regression. We obtain the optimal dropout hyperparameter by estimating the ground truth $\beta^0$ with the generalized ridge-type estimator $\hat{\beta} = (X^{T}X + \alpha \cdot \mathrm{diag}(X^{T}X))^{-1}X^{T}y$. Moreover, we empirically show that optimal dropout achieves a monotonic test-error curve in nonlinear neural networks on Fashion-MNIST and CIFAR-10. Our results suggest considering dropout to reshape the risk curve when the peak phenomenon arises. In addition, we explain why earlier deep learning models did not encounter double descent: they already applied a standard regularization approach such as dropout. To the best of our knowledge, this paper is the first to analyze the relationship between dropout and double descent.
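    The generalized ridge-type estimator stated in the abstract can be sketched in a few lines of numpy. The data, noise level, and the `alpha` value below are hypothetical; this illustrates only the closed-form formula, not the authors' analysis or code.

    ```python
    import numpy as np

    def generalized_ridge(X, y, alpha):
        # beta_hat = (X^T X + alpha * diag(X^T X))^{-1} X^T y,
        # the estimator the abstract links to the optimal dropout hyperparameter
        XtX = X.T @ X
        D = np.diag(np.diag(XtX))  # keep only the diagonal of X^T X
        return np.linalg.solve(XtX + alpha * D, X.T @ y)

    # toy usage with synthetic data following y = X beta^0 + eps
    rng = np.random.default_rng(0)
    n, p = 50, 10
    X = rng.normal(size=(n, p))
    beta0 = rng.normal(size=p)
    y = X @ beta0 + 0.1 * rng.normal(size=n)
    beta_hat = generalized_ridge(X, y, alpha=0.1)
    ```

    Note that with `alpha = 0` the estimator reduces to ordinary least squares, and the `alpha * diag(X^T X)` term scales each coordinate's penalty by that feature's energy, which is the connection to per-feature dropout regularization.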