r-Process Nucleosynthesis in Shocked Surface Layers of O-Ne-Mg Cores
We demonstrate that rapid expansion of the shocked surface layers of an
O-Ne-Mg core following its collapse can result in r-process nucleosynthesis. As
the supernova shock accelerates through these layers, it makes them expand so
rapidly that free nucleons remain in disequilibrium with alpha-particles
throughout most of the expansion. This allows heavy r-process isotopes
including the actinides to form in spite of the very low initial neutron excess
of the matter. We estimate that yields of heavy r-process nuclei from this site
may be sufficient to explain the Galactic inventory of these isotopes.
Comment: 11 pages, 1 figure, to appear in the Astrophysical Journal Letters
Collaborative Inference of Coexisting Information Diffusions
Recently, diffusion history inference has emerged as a research topic with great
benefits for various applications; its purpose is to reconstruct the missing
histories of information diffusion traces from incomplete observations. Existing
methods, however, often focus on a single information diffusion trace, whereas
in a real-world social network, multiple information diffusions often coexist over the same
network. In this paper, we propose a novel approach called Collaborative
Inference Model (CIM) for inferring coexisting information diffusions. By
exploiting the synergy among the coexisting
information diffusions, CIM holistically models multiple information diffusions
as a sparse 4th-order tensor called Coexisting Diffusions Tensor (CDT) without
any prior assumption of diffusion models, and collaboratively infers the
histories of the coexisting information diffusions via a low-rank approximation
of CDT with a fusion of heterogeneous constraints generated from additional
data sources. To improve efficiency, we further propose an optimized
algorithm, the Time Window based Parallel Decomposition Algorithm (TWPDA),
which speeds up the inference without compromising accuracy by exploiting the
temporal locality of information diffusions. Extensive experiments on
real-world and synthetic datasets verify the effectiveness and efficiency of
CIM and TWPDA.
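The core mechanism described above can be sketched numerically: treat the coexisting diffusions as a sparse 4th-order tensor and recover its missing entries with a low-rank (CP) approximation fitted only on observed entries. All shapes, the rank, the learning rate, and the factor names below are illustrative assumptions; the paper's actual decomposition and constraint fusion are more elaborate.

```python
import numpy as np

# Hypothetical "Coexisting Diffusions Tensor": (node, time, diffusion, state).
# We synthesize a low-rank tensor, hide ~half the entries, and recover the
# CP factors by gradient descent on the observed entries only.
rng = np.random.default_rng(0)
dims, R = (8, 6, 3, 2), 4                    # tensor dimensions and CP rank

true_factors = [rng.random((n, R)) for n in dims]
X = np.einsum('ur,tr,dr,sr->utds', *true_factors)
mask = rng.random(X.shape) < 0.5             # observed-entry indicator

A = [rng.random((n, R)) * 0.5 for n in dims] # CP factors to fit
lr = 0.005

def observed_loss():
    Xhat = np.einsum('ur,tr,dr,sr->utds', *A)
    return 0.5 * np.sum(((Xhat - X) * mask) ** 2)

loss_init = observed_loss()
for _ in range(2000):
    # Residual on observed entries; unobserved entries contribute no gradient.
    E = (np.einsum('ur,tr,dr,sr->utds', *A) - X) * mask
    grads = [
        np.einsum('utds,tr,dr,sr->ur', E, A[1], A[2], A[3]),
        np.einsum('utds,ur,dr,sr->tr', E, A[0], A[2], A[3]),
        np.einsum('utds,ur,tr,sr->dr', E, A[0], A[1], A[3]),
        np.einsum('utds,ur,tr,dr->sr', E, A[0], A[1], A[2]),
    ]
    for a, g in zip(A, grads):
        a -= lr * g
loss_final = observed_loss()
# The fitted factors also impute the unobserved entries of X via the CP model.
```

The low-rank structure is what lets the observed portions of each diffusion inform the missing portions of the others.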
Ultraprecise Rydberg atomic localization using optical vortices
We propose a robust scheme for localizing highly excited Rydberg atoms
interacting with doughnut-shaped optical vortices. Compared with earlier
standing-wave (SW)-based localization methods, a vortex beam can provide an
ultrahigh-precision two-dimensional localization solely in the zero-intensity
center, within a confined excitation region down to the nanometer scale. We
show that the presence of the Rydberg-Rydberg interaction counter-intuitively
permits much stronger confinement, and hence higher spatial resolution, when
the interaction is partially compensated by a suitable detuning. In addition, applying
an auxiliary SW modulation to the two-photon detuning allows a
three-dimensional confinement of Rydberg atoms. In this case, the vortex field
provides a transverse confinement while the SW modulation of the two-photon
detuning localizes the Rydberg atoms longitudinally. Our results bring
subwavelength localization one step closer to excitation volumes of a few
nanometers, representing a feasible implementation for future experimental
applications.
Comment: OE, in press
A Learning-Style Theory for Understanding Autistic Behaviors
Understanding autism's ever-expanding array of behaviors, from sensation to cognition, is a major challenge. We posit that autistic and typically developing brains implement different algorithms that are better suited to learn, represent, and process different tasks; consequently, they develop different interests and behaviors. Computationally, a continuum of algorithms exists, from lookup table (LUT) learning, which aims to store experiences precisely, to interpolation (INT) learning, which focuses on extracting underlying statistical structure (regularities) from experiences. We hypothesize that autistic and typical brains, respectively, are biased toward LUT and INT learning, in low- and high-dimensional feature spaces, possibly because of their narrow and broad tuning functions. The LUT style is good at learning relationships that are local, precise, rigid, and contain little regularity for generalization (e.g., the name–number association in a phonebook). However, it is poor at learning relationships that are context dependent, noisy, flexible, and do contain regularities for generalization (e.g., associations between gaze direction and intention, language and meaning, sensory input and interpretation, motor-control signal and movement, and social situation and proper response). The LUT style poorly compresses information, resulting in inefficiency, sensory overload (overwhelm), restricted interests, and resistance to change. It also leads to poor prediction and anticipation, frequent surprises and over-reaction (hyper-sensitivity), impaired attentional selection and switching, concreteness, strong local focus, weak adaptation, and superior and inferior performance on simple and complex tasks, respectively. The spectrum nature of autism can be explained by different degrees of LUT learning among different individuals, and in different systems of the same individual. Our theory suggests that therapy should focus on training the autistic LUT algorithm to learn regularities.
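The LUT/INT contrast at the heart of the theory can be illustrated with a toy task: a lookup-table learner memorizes each training pair exactly, while an interpolation learner extracts the underlying regularity (here, a line) from noisy examples. The task and both learners are illustrative assumptions, not models from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0.0, 1.0, 20)
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, x_train.size)  # regularity + noise

def lut_predict(x):
    """LUT style: return the stored answer of the nearest memorized input."""
    return y_train[np.abs(x_train - x[:, None]).argmin(axis=1)]

# INT style: a least-squares line compresses 20 noisy pairs into 2 numbers.
slope, intercept = np.polyfit(x_train, y_train, 1)

x_test = np.linspace(0.025, 0.975, 50)
y_true = 2.0 * x_test
err_lut = np.mean((lut_predict(x_test) - y_true) ** 2)
err_int = np.mean((slope * x_test + intercept - y_true) ** 2)
# The LUT learner reproduces the training noise verbatim; the INT learner
# averages it out, so its test error is lower on this regularity-rich task.
```

On a phonebook-like task with no regularity, the comparison reverses: only exact storage works, which is the asymmetry the theory uses to explain superior and inferior performance on different task types.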
Learning and Adaptation in a Recurrent Model of V1 Orientation Selectivity
Learning and adaptation in the domain of orientation processing are among the most studied topics in the literature. However, little effort has been devoted to explaining the diverse array of experimental findings via a physiologically based model. We have started to address this issue in the framework of the recurrent model of V1 orientation selectivity and found that reported changes in V1 orientation tuning curves after learning and adaptation can both be explained with the model. Specifically, the sharpening of orientation tuning curves near the trained orientation after learning can be accounted for by slightly reducing net excitatory connections to cells around the trained orientation, while the broadening and peak shift of the tuning curves after adaptation can be reproduced by appropriately scaling down both excitation and inhibition around the adapted orientation. In addition, we investigated the perceptual consequences of the tuning curve changes induced by learning and adaptation using signal detection theory. We found that in the case of learning, the physiological changes can account for the psychophysical data well. In the case of adaptation, however, there is a clear discrepancy between the psychophysical data from alert human subjects and the physiological data from anesthetized animals. Instead, human adaptation studies can be better accounted for by the learning data from behaving animals. Our work suggests that adaptation in behaving subjects may be viewed as a short-term form of learning.
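The signal-detection link between tuning curves and perception used in the study can be sketched as follows: with roughly Poisson spiking, a cell's discriminability d' for a small orientation change dtheta scales as |f'(theta)| * dtheta / sqrt(f(theta)), so sharpening a tuning curve near the trained orientation raises d' there. The Gaussian tuning curve and all parameters below are illustrative assumptions.

```python
import numpy as np

def tuning(theta, pref=0.0, width=20.0, rmax=30.0, base=2.0):
    """Gaussian orientation tuning curve (rate in spikes/s, angle in degrees)."""
    return base + rmax * np.exp(-0.5 * ((theta - pref) / width) ** 2)

def dprime(theta, dtheta=1.0, eps=1e-3):
    """Single-cell discriminability for a small orientation change dtheta."""
    slope = (tuning(theta + eps) - tuning(theta - eps)) / (2 * eps)
    return abs(slope) * dtheta / np.sqrt(tuning(theta))

# Discrimination is best on the flank of the tuning curve (steepest slope),
# not at the preferred orientation, where the slope vanishes.
d_peak, d_flank = dprime(0.0), dprime(20.0)
```

This is why slope changes near the trained orientation, rather than peak-rate changes, carry the perceptual consequences analyzed in the paper.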
V1 orientation plasticity is explained by broadly tuned feedforward inputs and intracortical sharpening
Orientation adaptation and perceptual learning change orientation tuning curves of V1 cells. Adaptation shifts tuning curve peaks away from the adapted orientation, reduces tuning curve slopes near the adapted orientation, and increases the responses on the far flank of tuning curves. Learning an orientation discrimination task increases tuning curve slopes near the trained orientation. These changes have been explained previously in a recurrent model (RM) of orientation selectivity. However, the RM generates only complex cells when they are well tuned, so that there is currently no model of orientation plasticity for simple cells. In addition, some feedforward models, such as the modified feedforward model (MFM), also contain recurrent cortical excitation, and it is unknown whether they can explain plasticity. Here, we compare plasticity in the MFM, which simulates simple cells, and a recent modification of the RM (MRM), which displays a continuum of simple-to-complex characteristics. Both pre- and postsynaptic-based modifications of the recurrent and feedforward connections in the models are investigated. The MRM can account for all the learning- and adaptation-induced plasticity, for both simple and complex cells, while the MFM cannot. The key features from the MRM required for explaining plasticity are broadly tuned feedforward inputs and sharpening by a Mexican hat intracortical interaction profile. The mere presence of recurrent cortical interactions in feedforward models like the MFM is insufficient; such models have more rigid tuning curves. We predict that these plastic properties must be absent for cells whose orientation tuning arises from a feedforward mechanism.
Existence Result for Impulsive Differential Equations with Integral Boundary Conditions
We investigate the following differential equations: $-(y^{[1]}(x))' + q(x)\,y(x) = \lambda f(x, y(x))$, with impulsive and integral boundary conditions $-\Delta(y^{[1]}(x_i)) = I_i(y(x_i))$, $i = 1, 2, \dots, m$, $y(0) - a\,y^{[1]}(0) = \int_0^{\omega} g_0(s)\,y(s)\,ds$, $y(\omega) - b\,y^{[1]}(\omega) = \int_0^{\omega} g_1(s)\,y(s)\,ds$, where $y^{[1]}(x) = p(x)\,y'(x)$. The expression of the Green's function and the existence of a positive solution for the system are obtained. Upper and lower bounds for positive solutions are also given. When $p(x)$, $I(\cdot)$, $g_0(s)$, and $g_1(s)$ take different values, the system reduces to forms that have been studied in the works of Guo and Lakshmikantham (1988), Guo et al. (1995), Boucherif (2009), He et al. (2011), and Atici and Guseinov (2001). Our discussion is based on the fixed point index theory in cones.
Semantic Segmentation on VSPW Dataset through Contrastive Loss and Multi-dataset Training Approach
Video scene parsing incorporates temporal information, which can enhance the
consistency and accuracy of predictions compared to image scene parsing. The
added temporal dimension enables a more comprehensive understanding of the
scene, leading to more reliable results. This paper presents the winning
solution of the CVPR2023 workshop for video semantic segmentation, focusing on
enhancing Spatial-Temporal correlations with contrastive loss. We also explore
the influence of multi-dataset training by utilizing a label-mapping technique.
The final result aggregates the outputs of these two models. Our approach
achieves 65.95% mIoU on the VSPW dataset, ranking 1st in the VSPW challenge at
CVPR 2023.
Comment: 1st Place Solution for CVPR 2023 PVUW VSS Track
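The contrastive objective mentioned above can be sketched as a pixel-wise InfoNCE-style loss: embeddings of pixels that share a semantic label (possibly across frames) are pulled together, and all others pushed apart. The shapes, temperature, and sampling below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 12, 16                        # sampled pixel embeddings, embedding dim
emb = rng.normal(size=(N, C))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # unit-normalize
labels = rng.integers(0, 3, size=N)  # semantic class of each sampled pixel

def pixel_contrastive_loss(emb, labels, tau=0.1):
    """Mean InfoNCE loss over anchors that have at least one positive."""
    n = len(emb)
    sim = emb @ emb.T / tau                       # scaled cosine similarities
    total, count = 0.0, 0
    for i in range(n):
        pos = (labels == labels[i]) & (np.arange(n) != i)
        if not pos.any():
            continue                              # anchor has no positive
        others = np.delete(sim[i], i)             # exclude self-similarity
        m = others.max()                          # stable log-sum-exp
        log_den = m + np.log(np.exp(others - m).sum())
        total += np.mean(log_den - sim[i][pos])   # -log softmax of positives
        count += 1
    return total / count

loss = pixel_contrastive_loss(emb, labels)
```

Drawing positives from the same class in neighboring frames is one way such a loss can encourage the spatial-temporal consistency that video scene parsing benefits from.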