
    A uniformly accurate (UA) multiscale time integrator pseudospectral method for the Dirac equation in the nonrelativistic limit regime

    We propose and rigorously analyze a multiscale time integrator Fourier pseudospectral (MTI-FP) method for the Dirac equation with a dimensionless parameter ε ∈ (0,1] that is inversely proportional to the speed of light. In the nonrelativistic limit regime, i.e. 0 < ε ≪ 1, the solution exhibits highly oscillatory propagating waves with wavelength O(ε²) in time and O(1) in space. Due to the rapid temporal oscillation, it is quite challenging to design and analyze numerical methods with error bounds that are uniform in ε ∈ (0,1]. The MTI-FP method is based on a suitable multiscale decomposition of the solution of the Dirac equation combined with an exponential wave integrator and appropriate numerical quadratures. By a careful study of the error propagation and the energy method, we establish two independent error estimates via two different mathematical approaches: h^{m₀} + τ²/ε² and h^{m₀} + τ² + ε², where h is the mesh size, τ is the time step, and m₀ depends on the regularity of the solution. Together, these two bounds imply that the MTI-FP method converges uniformly and optimally in space, with exponential convergence rate if the solution is smooth, and uniformly in time with linear convergence rate O(τ) for all ε ∈ (0,1], and optimally with quadratic convergence rate O(τ²) in the regimes where either ε = O(1) or 0 < ε ≲ τ. Numerical results are reported to demonstrate that our error estimates are optimal and sharp. Finally, the MTI-FP method is applied to study numerically the convergence rates of the solutions of the Dirac equation to those of its limiting models as ε → 0⁺. Comment: 25 pages, 1 figure
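    The uniform O(τ) rate claimed above follows from taking the minimum of the two temporal error bounds, τ²/ε² and τ² + ε²: for ε ≥ √τ the first is at most τ, and for ε ≤ √τ the second is at most τ² + τ ≤ 2τ. A minimal numerical check of this min-of-two-bounds argument (a sketch only: the spatial term h^{m₀} is dropped, and the particular τ and the grid of ε values are illustrative choices, not from the paper):

    ```python
    import numpy as np

    tau = 1e-3                          # illustrative time step
    eps = np.logspace(-6, 0, 1000)      # sample of epsilon in (0, 1]

    bound1 = tau**2 / eps**2            # temporal part of the first estimate
    bound2 = tau**2 + eps**2            # temporal part of the second estimate
    combined = np.minimum(bound1, bound2)

    # The smaller of the two bounds is uniformly O(tau) over all epsilon:
    assert combined.max() <= 2 * tau
    ```

    The crossover occurs near ε² ≈ τ, where both bounds are of size O(τ), which is why neither estimate alone is uniform but their minimum is.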

    Modeling Emotion Influence from Images in Social Networks

    Images have become an important and prevalent way for users to express their activities, opinions, and emotions. In a social network, an individual's emotions may be influenced by others, in particular by close friends. We focus on understanding how users embed emotions in the images they upload to social websites, and how social influence plays a role in changing users' emotions. We first verify the existence of emotion influence in image networks, and then propose a probabilistic factor-graph-based emotion influence model to answer the question of "who influences whom". Using a real network from Flickr as experimental data, we study the effectiveness of the factors in the proposed model through in-depth data analysis. Our experiments also show that, by incorporating emotion influence, our model significantly improves the accuracy (+5%) of predicting emotions from images. Finally, a case study provides anecdotal evidence that further demonstrates the effectiveness of the proposed model.

    Auto-Encoding Scene Graphs for Image Captioning

    We propose the Scene Graph Auto-Encoder (SGAE), which incorporates the language inductive bias into the encoder-decoder image captioning framework to produce more human-like captions. Intuitively, we humans use this inductive bias to compose collocations and perform contextual inference in discourse. For example, when we see the relation `person on bike', it is natural to replace `on' with `ride' and infer `person riding bike on a road', even though the `road' is not evident. Exploiting such bias as a language prior is therefore expected to make conventional encoder-decoder models less likely to overfit to dataset bias and more focused on reasoning. Specifically, we use the scene graph, a directed graph G in which an object node is connected to adjective nodes and relationship nodes, to represent the complex structural layout of both the image I and the sentence S. In the textual domain, we use SGAE to learn a dictionary D that helps to reconstruct sentences in the S → G → D → S pipeline, where D encodes the desired language prior; in the vision-language domain, we use the shared D to guide the encoder-decoder in the I → G → D → S pipeline. Thanks to the scene graph representation and the shared dictionary, the inductive bias is transferred across domains in principle. We validate the effectiveness of SGAE on the challenging MS-COCO image captioning benchmark: our SGAE-based single model achieves a new state-of-the-art 127.8 CIDEr-D on the Karpathy split, and a competitive 125.5 CIDEr-D (c40) on the official server, even compared with ensemble models.