
    A General Theory for Direct Quantitative Analysis of Antigen

    A theory for the direct quantitative analysis of an antigen is proposed. It is based on a potential homogeneous immunoreaction system and establishes an equation describing the concentration change of the antigen-antibody complex. A maximum point is found in the concentration profile of the complex, which can be used to calculate the concentration of the antigen. An experimental scheme was designed for a commercial time-resolved fluoroimmunoassay kit for HBsAg, which is based on a heterogeneous immunoreaction. The results showed that the theory is practically applicable.
    Comment: 7 pages, 2 figures
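    The abstract does not reproduce the paper's equation, so the following is only a minimal numerical illustration of the qualitative behaviour it describes: in a toy kinetic model where antigen and antibody associate into a complex that is subsequently consumed, the complex concentration passes through a maximum whose height grows with the initial antigen concentration. All rate constants and concentrations below are hypothetical.

        import numpy as np
        from scipy.integrate import solve_ivp

        k1, k2 = 1.0, 0.3  # assumed association and consumption rate constants

        def rhs(t, y):
            # y = ([Ag], [Ab], [C]); d[C]/dt = k1*[Ag]*[Ab] - k2*[C]
            ag, ab, c = y
            form = k1 * ag * ab
            return [-form, -form, form - k2 * c]

        for ag0 in (0.5, 1.0, 2.0):  # hypothetical initial antigen concentrations
            sol = solve_ivp(rhs, (0.0, 30.0), [ag0, 1.0, 0.0], dense_output=True)
            t = np.linspace(0.0, 30.0, 600)
            c = sol.sol(t)[2]
            i = int(np.argmax(c))
            print(f"[Ag]0 = {ag0:.1f}: max [C] = {c[i]:.3f} at t = {t[i]:.1f}")

    Reading the antigen concentration off such a maximum is the idea the abstract points to; the actual functional relationship is derived in the paper itself.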

    Lepton-Jet Correlations in Deep Inelastic Scattering at the Electron-Ion Collider.

    We propose the lepton-jet correlation in deep inelastic scattering as a unique tool for the tomography of nucleons and nuclei at the electron-ion collider (EIC). The azimuthal angular correlation between the final-state lepton and jet depends on the transverse-momentum-dependent quark distributions. We take the example of single transverse spin asymmetries to show the sensitivity to the quark Sivers function. When the correlation is studied in lepton-nucleus collisions, transverse momentum broadening effects can be used to explore cold nuclear matter effects. These features make lepton-jet correlations an important new hard probe at the EIC.
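    As a purely illustrative sketch of how such an azimuthal asymmetry is extracted (not the authors' analysis, and with a made-up asymmetry value), the following toy Monte Carlo injects a Sivers-like sin(phi) modulation into lepton-jet events for the two transverse-spin states and recovers it from the sin(phi) moment:

        import numpy as np

        rng = np.random.default_rng(0)
        A_UT_true = 0.05           # injected single-spin asymmetry (hypothetical)
        n_events = 200_000

        # jet azimuthal angle relative to the transverse spin direction
        phi = rng.uniform(-np.pi, np.pi, n_events)
        spin = rng.choice([+1, -1], n_events)      # +1 = spin up, -1 = spin down
        # accept events according to the spin-dependent modulation 1 + S*A*sin(phi)
        accept = rng.uniform(0.0, 1.0, n_events) < 0.5 * (1.0 + spin * A_UT_true * np.sin(phi))
        phi, spin = phi[accept], spin[accept]

        # the sin(phi) moment of the spin-weighted yield returns A_UT/2
        A_UT_est = 2.0 * np.mean(spin * np.sin(phi))
        print(f"injected A_UT = {A_UT_true:.3f}, extracted A_UT = {A_UT_est:.3f}")

    In a real measurement, the size of this modulation is what carries the sensitivity to the quark Sivers function mentioned in the abstract.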

    Double-Real-Virtual and Double-Virtual-Real Corrections to the Three-Loop Thrust Soft Function

    We compute the $\mathcal{O}(\alpha_s^3)$ double-real-virtual (RRV) and double-virtual-real (VVR) soft contributions to the thrust/zero-jettiness event shape. The result clears up one of the most stubborn obstacles toward the complete $\mathcal{O}(\alpha_s^3)$ thrust soft function. The results presented here serve as the key input to realize the next-to-next-to-next-to-leading logarithmic prime (N$^3$LL$'$) and even the next-to-next-to-next-to-next-to-leading logarithmic (N$^4$LL) resummation of the thrust event shape. The obtained results also constitute important ingredients of the $N$-jettiness subtraction scheme at next-to-next-to-next-to-leading order (N$^3$LO).
    Comment: Updated version to simplify the results. Now the complete RRV results are presented by including the $q\bar{q}$ channel. Full agreement was found with a recent calculation in arXiv:2401.0524
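    For orientation only (this decomposition is standard bookkeeping rather than a formula quoted from the paper, and the normalisation of the coupling is an assumption), the fixed-order expansion of the thrust soft function can be organised by the number of real emissions and loops in each contribution, with the RRV and VVR pieces computed here entering at third order:

        S(\tau,\mu) = 1
          + \frac{\alpha_s}{4\pi}\, S^{(1)}
          + \left(\frac{\alpha_s}{4\pi}\right)^{2} S^{(2)}
          + \left(\frac{\alpha_s}{4\pi}\right)^{3}
            \left[ S^{(3)}_{\mathrm{VVV}} + S^{(3)}_{\mathrm{VVR}} + S^{(3)}_{\mathrm{RRV}} + S^{(3)}_{\mathrm{RRR}} \right]
          + \mathcal{O}(\alpha_s^{4})

    The contribution reported in this paper is the RRV and VVR part of the bracket; how the remaining third-order pieces are obtained and combined is discussed in the paper itself.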

    Semantic Object Parsing with Local-Global Long Short-Term Memory

    Semantic object parsing is a fundamental task for understanding objects in detail in the computer vision community, where incorporating multi-level contextual information is critical for achieving such fine-grained pixel-level recognition. Prior methods often leverage contextual information through post-processing of predicted confidence maps. In this work, we propose a novel deep Local-Global Long Short-Term Memory (LG-LSTM) architecture to seamlessly incorporate short-distance and long-distance spatial dependencies into the feature learning over all pixel positions. In each LG-LSTM layer, local guidance from neighboring positions and global guidance from the whole image are imposed on each position to better exploit complex local and global contextual information. Individual LSTMs for distinct spatial dimensions are also utilized to intrinsically capture various spatial layouts of semantic parts in the images, yielding distinct hidden and memory cells for each position and each dimension. In our parsing approach, several LG-LSTM layers are stacked and appended to the intermediate convolutional layers to directly enhance visual features, allowing the network parameters to be learned in an end-to-end way. The long chains of sequential computation by stacked LG-LSTM layers also enable each pixel to sense a much larger region for inference, benefiting from the memorization of previous dependencies over all positions along all dimensions. Comprehensive evaluations on three public datasets demonstrate the significant superiority of our LG-LSTM over other state-of-the-art methods.
    Comment: 10 pages
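    To make the "local guidance plus global guidance" idea concrete, here is a minimal sketch in PyTorch, not the authors' implementation: it uses only a 4-neighbourhood and a single LSTM cell shared across positions, whereas the paper uses richer neighbourhoods and separate LSTMs per spatial dimension. Every name below (ToyLGLSTMLayer, shift, the layer sizes) is introduced here for illustration.

        import torch
        import torch.nn as nn

        def shift(x, dy, dx):
            # Shift a (B, D, H, W) map by (dy, dx) with zero padding, so that
            # out[..., y, x] holds the hidden state of the neighbour at (y - dy, x - dx).
            B, D, H, W = x.shape
            out = torch.zeros_like(x)
            out[:, :, max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)] = \
                x[:, :, max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
            return out

        class ToyLGLSTMLayer(nn.Module):
            def __init__(self, feat_dim, hidden_dim):
                super().__init__()
                # input = pixel feature + 4 neighbour hidden states + 1 global hidden state
                self.cell = nn.LSTMCell(feat_dim + 5 * hidden_dim, hidden_dim)
                self.hidden_dim = hidden_dim

            def forward(self, feats, h_prev, c_prev):
                # feats: (B, C, H, W) convolutional features
                # h_prev, c_prev: (B, D, H, W) hidden / memory maps from the previous layer
                B, C, H, W = feats.shape
                D = self.hidden_dim
                # local guidance: hidden states of the 4-neighbourhood of every pixel
                neigh = [shift(h_prev, dy, dx) for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                # global guidance: one hidden state pooled over the whole map
                glob = h_prev.mean(dim=(2, 3), keepdim=True).expand(-1, -1, H, W)
                x = torch.cat([feats, *neigh, glob], dim=1)              # (B, C + 5D, H, W)
                # run one LSTM step independently at every pixel position
                x = x.permute(0, 2, 3, 1).reshape(B * H * W, -1)
                h = h_prev.permute(0, 2, 3, 1).reshape(B * H * W, D)
                c = c_prev.permute(0, 2, 3, 1).reshape(B * H * W, D)
                h, c = self.cell(x, (h, c))
                return (h.reshape(B, H, W, D).permute(0, 3, 1, 2),
                        c.reshape(B, H, W, D).permute(0, 3, 1, 2))

        # usage with hypothetical sizes
        layer = ToyLGLSTMLayer(feat_dim=64, hidden_dim=32)
        feats = torch.randn(2, 64, 16, 16)
        h = c = torch.zeros(2, 32, 16, 16)
        h, c = layer(feats, h, c)   # (2, 32, 16, 16) enhanced feature maps

    Stacking several such layers, as the abstract describes, lets information propagate over progressively larger regions of the image.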

    Interpretable Structure-Evolving LSTM

    This paper develops a general framework for learning interpretable data representations via Long Short-Term Memory (LSTM) recurrent neural networks over hierarchical graph structures. Instead of learning LSTM models over pre-fixed structures, we propose to further learn the intermediate interpretable multi-level graph structures in a progressive and stochastic way from data during the LSTM network optimization. We thus call this model the structure-evolving LSTM. In particular, starting with an initial element-level graph representation where each node is a small data element, the structure-evolving LSTM gradually evolves the multi-level graph representations by stochastically merging graph nodes with high compatibilities along the stacked LSTM layers. In each LSTM layer, we estimate the compatibility of two connected nodes from their corresponding LSTM gate outputs, which is used to generate a merging probability. Candidate graph structures are accordingly generated, with the nodes grouped into cliques according to their merging probabilities. We then produce the new graph structure with a Metropolis-Hastings algorithm, which alleviates the risk of getting stuck in local optima by stochastic sampling with an acceptance probability. Once a graph structure is accepted, a higher-level graph is then constructed by taking the partitioned cliques as its nodes. During the evolving process, the representation becomes more abstract at higher levels, where redundant information is filtered out, allowing more efficient propagation of long-range data dependencies. We evaluate the effectiveness of the structure-evolving LSTM on the task of semantic object parsing and demonstrate its advantage over state-of-the-art LSTM models on standard benchmarks.
    Comment: To appear in CVPR 2017 as a spotlight paper
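    The stochastic merging step can be illustrated with a small toy version (not the paper's exact procedure: the compatibilities here are hard-coded rather than read from LSTM gates, and the acceptance ratio uses a made-up score instead of the model's objective). Edges of a toy graph are toggled between merged and unmerged, each proposal is accepted with a Metropolis-Hastings probability, and the accepted merges define the cliques that become the nodes of the higher-level graph.

        import math
        import random

        random.seed(0)

        # toy chain graph on nodes 0..5 with made-up pairwise compatibilities
        edges = {(0, 1): 0.9, (1, 2): 0.2, (2, 3): 0.8, (3, 4): 0.85, (4, 5): 0.1}

        def score(merged):
            # toy objective: reward merging compatible pairs, penalise incompatible ones
            return sum(edges[e] - 0.5 for e in merged)

        merged, T = set(), 0.1
        for _ in range(200):
            e = random.choice(list(edges))                    # propose toggling one edge
            proposal = merged ^ {e}
            delta = score(proposal) - score(merged)
            if random.random() < min(1.0, math.exp(delta / T)):   # MH acceptance
                merged = proposal

        # collapse the accepted merges into cliques (union-find); the cliques
        # become the nodes of the next, higher-level graph
        parent = list(range(6))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for a, b in merged:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra
        cliques = {}
        for node in range(6):
            cliques.setdefault(find(node), []).append(node)
        print("higher-level nodes:", sorted(cliques.values()))

    The choice of acceptance probability in the actual model is described in the paper; the point of this sketch is only the toggle-and-accept mechanism.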

    Numerical simulation for the solidification of magnesium alloy under ultrasonic
