
    Hilbert Statistics of Vorticity Scaling in Two-Dimensional Turbulence

    In this paper, the scaling properties of the inverse energy cascade and the forward enstrophy cascade of the vorticity field $\omega(x,y)$ in two-dimensional (2D) turbulence are analyzed. This is accomplished by applying a Hilbert-based technique, namely the Hilbert-Huang transform, to a vorticity field obtained from an $8192^2$ grid-point direct numerical simulation of 2D turbulence with a forcing scale $k_f=100$ and an Ekman friction. The measured joint probability density function $p(C,k)$ of the modes $C_i(x)$ of the vorticity $\omega$ and the instantaneous wavenumber $k(x)$ is separated by the forcing scale $k_f$ into two parts, corresponding to the inverse energy cascade and the forward enstrophy cascade. It is found that all conditional pdfs $p(C\vert k)$ at given wavenumber $k$ have an exponential tail. In the inverse energy cascade, the shapes of $p(C\vert k)$ collapse onto each other, indicating a nonintermittent cascade. The measured scaling exponent $\zeta_{\omega}^I(q)$ is linear in the statistical order $q$, i.e., $\zeta_{\omega}^I(q)=-q/3$, confirming the nonintermittent cascade process. In the forward enstrophy cascade, the core part of $p(C\vert k)$ changes with wavenumber $k$, indicating an intermittent forward cascade. The measured scaling exponent $\zeta_{\omega}^F(q)$ is nonlinear in $q$ and is described very well by a log-Poisson fit: $\zeta_{\omega}^F(q)=\frac{1}{3}q+0.45\left(1-0.43^{q}\right)$. However, the extracted vorticity scaling exponents $\zeta_{\omega}(q)$ for both the inverse energy cascade and the forward enstrophy cascade are not consistent with Kraichnan's theoretical prediction. A new theory for the vorticity field in 2D turbulence is required to interpret the observed scaling behavior.Comment: 13 pages with 10 figures
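The two scaling laws quoted in the abstract are easy to tabulate side by side. A minimal sketch, using only the coefficients reported above (the function names are illustrative, not from the paper's code):

```python
import numpy as np

def zeta_forward(q):
    """Log-Poisson scaling exponent reported for the forward enstrophy cascade."""
    return q / 3.0 + 0.45 * (1.0 - 0.43 ** q)

def zeta_inverse(q):
    """Linear (nonintermittent) exponent reported for the inverse energy cascade."""
    return -q / 3.0

# Tabulate both exponents over the first few statistical orders q.
q = np.arange(0, 9)
print("q   inverse   forward")
for qi in q:
    print(f"{qi}  {zeta_inverse(qi):8.3f}  {zeta_forward(qi):8.3f}")
```

The nonlinearity of `zeta_forward` in `q` (versus the exactly linear `zeta_inverse`) is what the abstract identifies as the signature of intermittency in the forward cascade.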

    LGLG-WPCA: An Effective Texture-based Method for Face Recognition

    In this paper, we propose an effective face feature extraction method, Learning Gabor Log-Euclidean Gaussian with Whitening Principal Component Analysis (WPCA), called LGLG-WPCA. The proposed method learns face features from the embedded multivariate Gaussian in the Gabor wavelet domain; it is robust to adverse conditions such as varying poses, skin aging, and uneven illumination. Because the space of Gaussians is a Riemannian manifold, it is difficult to incorporate a learning mechanism into the model. To address this issue, we use L2EMG to map the multidimensional Gaussian model to a linear space, and then use WPCA to learn face features. We also implemented a key-point-based version of LGLG-WPCA, called LGLG(KP)-WPCA. Experiments show that the proposed methods are effective and promising for face texture feature extraction, and that combining their features with those of a Deep Convolutional Network (DCNN) achieved the best recognition accuracies on the FERET database compared to the state-of-the-art methods. In the next version of this paper, we will test the performance of the proposed methods on databases with large pose variation.

    The refined BPS index from stable pair invariants

    A refinement of the stable pair invariants of Pandharipande and Thomas for non-compact Calabi-Yau spaces is introduced, based on a virtual Bialynicki-Birula decomposition with respect to a C* action on the stable pair moduli space, or alternatively on the equivariant index of Nekrasov and Okounkov. This effectively calculates the refined index for M-theory reduced on these Calabi-Yau geometries. Based on physical expectations, we propose a product formula for the refined invariants extending the motivic product formula of Morrison, Mozgovoy, Nagao, and Szendroi for local P^1. We explicitly compute refined invariants in low degree for local P^2 and local P^1 x P^1 and check that they agree with the predictions of the direct integration of the generalized holomorphic anomaly and with the product formula. The modularity of the expressions obtained in the direct integration approach allows us to relate the generating function of refined PT invariants on appropriate geometries to Nekrasov's partition function and a refinement of Chern-Simons theory on a lens space. We also relate our product formula to wall-crossing.Comment: 60 pages, 1 eps figure; reference updated; minor typos corrected

    Cross-frequency interactions during diffusion on complex brain networks are facilitated by scale-free properties

    We studied the interactions between different temporal scales of diffusion processes on complex networks and found them to be stronger in scale-free (SF) than in Erdos-Renyi (ER) networks, especially for the case of phase-amplitude coupling (PAC), the phenomenon where the phase of an oscillatory mode modulates the amplitude of another oscillation. We found that SF networks facilitate PAC between slow and fast frequency components of the diffusion process, whereas ER networks enable PAC between slow-frequency components. The nodes contributing most to the generation of PAC in SF networks were non-hubs that connected with high probability to hubs. Additionally, brain networks from healthy controls (HC) and Alzheimer's disease (AD) patients presented weaker PAC between slow and fast frequencies than SF networks, but stronger than ER networks. We found that PAC decreased in AD compared to HC and was more strongly correlated with the scores of two different cognitive tests than the strength of functional connectivity was, suggesting a link between cognitive impairment and multi-scale information flow in the brain.Comment: 38 pages, 8 figures, 3 supplementary figures
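The PAC quantity discussed above can be illustrated with the mean-vector-length index (Canolty-style), used here as a stand-in; the paper's exact estimator may differ, and the signals are synthetic:

```python
import numpy as np
from scipy.signal import hilbert

def pac_mvl(slow, fast):
    """Mean-vector-length PAC: slow signal's phase vs. fast signal's amplitude envelope."""
    phase = np.angle(hilbert(slow))      # instantaneous phase of the slow oscillation
    amp = np.abs(hilbert(fast))          # amplitude envelope of the fast oscillation
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 500.0
t = np.arange(0, 10, 1 / fs)
slow = np.sin(2 * np.pi * 2 * t)                      # 2 Hz phase carrier
amp_mod = 1 + 0.8 * np.sin(2 * np.pi * 2 * t)         # envelope locked to the slow phase
coupled = amp_mod * np.sin(2 * np.pi * 40 * t)        # 40 Hz oscillation with PAC
uncoupled = np.sin(2 * np.pi * 40 * t)                # 40 Hz oscillation, no PAC

print(pac_mvl(slow, coupled), pac_mvl(slow, uncoupled))
```

A phase-locked envelope yields a large index, while an unmodulated fast oscillation yields an index near zero, which is the contrast the study measures between network topologies.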

    Learning Domain-Invariant Subspace using Domain Features and Independence Maximization

    Domain adaptation algorithms are useful when the distributions of the training and the test data are different. In this paper, we focus on the problem of instrumental variation and time-varying drift in the field of sensors and measurement, which can be viewed as discrete and continuous distributional change in the feature space. We propose maximum independence domain adaptation (MIDA) and semi-supervised MIDA (SMIDA) to address this problem. Domain features are first defined to describe the background information of a sample, such as the device label and acquisition time. Then, MIDA learns a subspace which has maximum independence with the domain features, so as to reduce the inter-domain discrepancy in distributions. A feature augmentation strategy is also designed to project samples according to their backgrounds so as to improve the adaptation. The proposed algorithms are flexible and fast. Their effectiveness is verified by experiments on synthetic datasets and four real-world ones on sensors, measurement, and computer vision. They can greatly enhance the practicability of sensor systems, as well as extend the application scope of existing domain adaptation algorithms by uniformly handling different kinds of distributional change.Comment: Accepted
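The independence measure at the core of MIDA is the Hilbert-Schmidt Independence Criterion (HSIC) between the projected samples and the domain features. A minimal sketch of the empirical HSIC, with linear kernels and synthetic data as illustrative assumptions (MIDA additionally optimizes a projection and variance/label terms omitted here):

```python
import numpy as np

def linear_kernel(A):
    return A @ A.T

def hsic(K, L):
    """Empirical HSIC: trace(K H L H) / (n-1)^2, with H the centering matrix."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(1)
n = 100
domain = rng.integers(0, 2, size=n)                       # device label per sample
D = np.eye(2)[domain]                                     # one-hot domain features
X_dep = domain[:, None] + 0.1 * rng.normal(size=(n, 3))   # domain-dependent features
X_indep = rng.normal(size=(n, 3))                         # domain-independent features

print(hsic(linear_kernel(X_dep), linear_kernel(D)))       # large: strong dependence
print(hsic(linear_kernel(X_indep), linear_kernel(D)))     # near zero: independent
```

MIDA would search for a projection of the features that drives this quantity toward zero, which is how it removes the device- and time-related variation described above.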

    Dynamic SPECT reconstruction with temporal edge correlation

    In dynamic imaging, a key challenge is to reconstruct image sequences with high temporal resolution from strongly undersampled projections caused by a relatively slow data acquisition speed. In this paper, we propose a variational model using the infimal convolution of the Bregman distance with respect to total variation to model the edge dependence of sequential frames. The proposed model is solved via an alternating iterative scheme, in which each subproblem is convex and can be solved by existing algorithms. The proposed model is formulated under both Gaussian and Poisson noise assumptions, and simulations on two sets of dynamic images show the advantage of the proposed method over previous methods.Comment: 24 pages

    Harnessing Sparsity over the Continuum: Atomic Norm Minimization for Super Resolution

    Convex optimization has recently emerged as a compelling framework for performing super resolution, garnering significant attention from multiple communities spanning signal processing, applied mathematics, and optimization. This article offers a friendly exposition of atomic norm minimization as a canonical convex approach to solving super resolution problems. The mathematical foundations and performance guarantees of this approach are presented, and its application to super resolution image reconstruction for single-molecule fluorescence microscopy is highlighted.
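For a *finite* atomic set, the atomic norm reduces to a small linear program, which makes the idea concrete before the continuum case. A sketch under that simplifying assumption (the continuum setting of the article, e.g. line spectra, instead requires semidefinite programming); with atoms chosen as the signed canonical basis vectors, the atomic norm recovers the familiar l1 norm:

```python
import numpy as np
from scipy.optimize import linprog

def atomic_norm(x, atoms):
    """||x||_A = min { sum_a c_a : x = sum_a c_a * a, c_a >= 0 } over rows of `atoms`."""
    m = atoms.shape[0]
    res = linprog(c=np.ones(m),            # minimize total atomic weight
                  A_eq=atoms.T, b_eq=x,    # x must be a conic combination of atoms
                  bounds=[(0, None)] * m,  # nonnegative weights
                  method="highs")
    return res.fun

d = 3
e = np.eye(d)
atoms = np.vstack([e, -e])           # atomic set {±e_i}: atomic norm = l1 norm
x = np.array([1.0, -2.0, 0.5])
print(atomic_norm(x, atoms))         # equals ||x||_1 = 3.5
```

Replacing the finite dictionary with a continuously parameterized family of atoms (e.g. complex sinusoids of arbitrary frequency) is exactly the "sparsity over the continuum" that the article's title refers to.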

    Molecular access to multi-dimensionally encoded information

    Polymer scientists have only recently realized that information storage at the molecular level is not restricted to DNA-based systems. Similar encoding and decoding of data have been demonstrated on synthetic polymers, which could overcome some of the drawbacks associated with DNA, such as by making use of a larger monomer alphabet. This feature article describes some of the data storage strategies recently investigated, ranging from writing information on linear sequence-defined macromolecules up to layer-by-layer cast surfaces and QR codes. In addition, some strategies to increase storage density are elaborated, and some trends regarding future perspectives on molecular data storage from the literature are critically evaluated. This work ends by highlighting the demand for new strategies that set up reliable solutions for future data management technologies.