Weak, Quiet Magnetic Fields Seen in the Venus Atmosphere.
The existence of a strong internal magnetic field allows probing of the interior through both long-term changes of, and short-period fluctuations in, that magnetic field. Venus, while Earth's twin in many ways, lacks such a strong intrinsic magnetic field, but perhaps short-period fluctuations can still be used to probe the electrical conductivity of the interior. Toward the end of the Venus Express mission, an aerobraking campaign took the spacecraft below the ionosphere into the very weakly electrically conducting atmosphere. As the spacecraft descended from 150 to 140 km altitude, the magnetic field became weaker on average and less noisy. Below 140 km, the median field strength became steady but the short-period fluctuations continued to weaken. The weakness of the fluctuations indicates they might not be useful for electromagnetic sounding of the atmosphere from a high-altitude platform such as a plane or balloon, but possibly could be attempted on a lander.
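As a minimal sketch of the kind of analysis described above, the Python snippet below bins magnetometer samples by altitude and reports the median field strength and a simple short-period fluctuation level per bin. The function name, arguments and binning are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def field_statistics_by_altitude(altitude_km, b_total_nT, bin_edges_km):
    """Median field strength and short-period fluctuation level per altitude bin.

    altitude_km, b_total_nT : 1-D arrays of spacecraft altitude and |B| samples
    bin_edges_km            : altitude bin edges, e.g. np.arange(130, 152, 2)
    """
    medians, fluctuations = [], []
    for lo, hi in zip(bin_edges_km[:-1], bin_edges_km[1:]):
        b = b_total_nT[(altitude_km >= lo) & (altitude_km < hi)]
        if b.size < 3:
            medians.append(np.nan)
            fluctuations.append(np.nan)
            continue
        medians.append(np.median(b))
        # crude fluctuation level: RMS scatter about the bin median
        fluctuations.append(np.std(b - np.median(b)))
    return np.array(medians), np.array(fluctuations)
```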
sscMap: An extensible Java application for connecting small-molecule drugs using gene-expression signatures
Background: Connectivity mapping is a process for recognizing novel
pharmacological and toxicological properties of small molecules by comparing
their gene-expression signatures with others in a database. A simple and robust
method for connectivity mapping with increased specificity and sensitivity was
recently developed, and its utility demonstrated using experimentally derived
gene signatures.
Results: This paper introduces sscMap (statistically significant connections'
map), a Java application designed to undertake connectivity mapping tasks using
the recently published method. The software is bundled with a default
collection of reference gene-expression profiles based on the publicly
available dataset from the Broad Institute Connectivity Map 02, which includes
data from over 7000 Affymetrix microarrays, for over 1000 small-molecule
compounds, and 6100 treatment instances in 5 human cell lines. In addition, the
application allows users to add their custom collections of reference profiles
and is applicable to a wide range of other 'omics technologies.
Conclusions: The utility of sscMap is twofold. First, it serves to make
statistically significant connections between a user-supplied gene signature
and the 6100 core reference profiles based on the Broad Institute expanded
dataset. Second, it allows users to apply the same improved method to
custom-built reference profiles which can be added to the database for future
referencing. The software can be freely downloaded from
http://purl.oclc.org/NET/sscMap
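The toy scoring below illustrates the general connectivity-mapping idea, i.e. scoring a signed gene signature against a rank-ordered reference profile and attaching a permutation p-value. It is not the published sscMap statistic; the function names and the signed-rank convention are assumptions made for this sketch.

```python
import numpy as np

def connection_score(signature, reference_ranks):
    """Connection strength between a signed gene signature and one reference profile.

    signature       : dict gene id -> +1 (up-regulated) or -1 (down-regulated)
    reference_ranks : dict gene id -> signed rank in the reference profile, where
                      the most extremely regulated gene has magnitude n_ref and the
                      least regulated has magnitude 1; sign gives the direction.
    Returns a score in [-1, 1]; positive values indicate a 'connection'.
    """
    n_ref = len(reference_ranks)
    raw = sum(sign * reference_ranks.get(gene, 0.0) for gene, sign in signature.items())
    # normalise by the largest score attainable for a signature of this size
    max_raw = sum(n_ref - i for i in range(len(signature)))
    return raw / max_raw

def empirical_p_value(signature, reference_ranks, n_perm=1000, seed=0):
    """Permutation p-value: how often a random signature of equal size scores as high."""
    rng = np.random.default_rng(seed)
    genes = list(reference_ranks)
    observed = abs(connection_score(signature, reference_ranks))
    hits = 0
    for _ in range(n_perm):
        picked = rng.choice(genes, size=len(signature), replace=False)
        random_sig = {g: rng.choice([-1, 1]) for g in picked}
        if abs(connection_score(random_sig, reference_ranks)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```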
Thermal field over Tibetan Plateau and Indian summer monsoon rainfall
The interannual variability of the temperature anomalies over the Tibetan Plateau (25-45 °N, 75-105 °E) is examined in relation to the Indian summer monsoon rainfall (ISMR: June to September total rainfall). For this purpose, the temperature anomaly data of the central-eastern Tibetan Plateau is divided into three regions using principal component analysis and the ISMR data for the period 1957-89 have been used. It is found that the January temperature anomaly of Region 2 has a significant negative relationship (r = -0.67) with the ISMR of the subsequent season. This region is located over the northeastern part of the Tibetan Plateau, mostly in Qinghai province, including the Bayan Har Mountain range and the Qaidam Basin. This relationship is consistent and robust during the period of analysis and can be used to predict the strength of the Indian summer monsoon in the subsequent season. It was found that the January temperature anomaly in this region was associated with a persistent winter circulation pattern over the Eurasian continent during January through to March. Finally, the variation patterns of the temperature anomalies in all three regions over the central-eastern Tibetan Plateau during extreme years of the ISMR are examined. It is concluded that the January temperature anomaly over the northeastern Tibetan Plateau can be useful in forecasting the drought and flood conditions over India, especially in predicting the monsoon rainfall over the areas lying along the monsoon trough.
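A minimal sketch of the core statistical step, assuming one value per year for a regional January temperature-anomaly series and the following season's ISMR totals; the function and variable names are illustrative, and the linear fit is only one simple way such a relationship could be turned into a forecast.

```python
import numpy as np
from scipy import stats

def anomaly_rainfall_relationship(jan_temp_anomaly, ismr_total):
    """Pearson correlation between a regional January temperature-anomaly series
    and the following June-September ISMR totals (1-D arrays, one value per year)."""
    r, p = stats.pearsonr(jan_temp_anomaly, ismr_total)
    # a simple linear fit that could be used to anticipate the coming monsoon season
    slope, intercept = np.polyfit(jan_temp_anomaly, ismr_total, deg=1)
    return r, p, slope, intercept
```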
Measuring Time-Sensitive and Topic-Specific Influence in Social Networks with LSTM and Self-Attention.
Influence measurement in social networks is vital to various real-world applications, such as online marketing and political campaigns. In this paper, we investigate the problem of measuring time-sensitive and topic-specific influence based on streaming texts and dynamic social networks. A user's influence can change rapidly in response to a new event and vary on different topics. For example, the political influence of Douglas Jones increased dramatically after winning the Alabama special election, and then rapidly decreased after the election week. During the same period, however, Douglas Jones' influence on sports remained low. Most existing approaches can only model the influence based on static social network structures and topic distributions. Furthermore, as popular social networking services embody many features to connect their users, multi-typed interactions make it hard to learn the roles that different interactions play when propagating information. To address these challenges, we propose a Time-sensitive and Topic-specific Influence Measurement (TTIM) method, to jointly model the streaming texts and dynamic social networks. We simulate the influence propagation process with a self-attention mechanism to learn the contributions of different interactions and track the influence dynamics with a matrix-adaptive long short-term memory. To the best of our knowledge, this is the first attempt to measure time-sensitive and topic-specific influence. Furthermore, the TTIM model can be easily adapted to support online learning, which consumes constant training time on newly arrived data for each timestamp. We comprehensively evaluate the proposed TTIM model on five datasets from Twitter and Reddit. The experimental results demonstrate promising performance compared to the state-of-the-art social influence analysis models and the potential of TTIM in visualizing influence dynamics and topic distribution.
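The PyTorch sketch below captures only the general idea described above: attention over multiple interaction types at each timestamp, followed by a recurrent layer that tracks per-topic influence over time. It is not the authors' TTIM architecture; every layer size, name and input shape is an assumption.

```python
import torch
import torch.nn as nn

class ToyInfluenceTracker(nn.Module):
    """Minimal sketch of the idea behind TTIM (not the authors' implementation):
    self-attention weighs different interaction types at each timestamp, and a
    recurrent layer tracks how the resulting influence representation evolves."""

    def __init__(self, n_interaction_types, feat_dim, hidden_dim, n_topics):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=1, batch_first=True)
        self.rnn = nn.LSTM(input_size=feat_dim, hidden_size=hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_topics)  # per-topic influence score

    def forward(self, interactions):
        # interactions: (batch, time, n_interaction_types, feat_dim)
        b, t, k, d = interactions.shape
        x = interactions.reshape(b * t, k, d)
        # attention across interaction types learns how much each type contributes
        attended, _ = self.attn(x, x, x)
        pooled = attended.mean(dim=1).reshape(b, t, d)
        # recurrence tracks the influence dynamics over time
        out, _ = self.rnn(pooled)
        return self.head(out)  # (batch, time, n_topics)

# model = ToyInfluenceTracker(n_interaction_types=3, feat_dim=16, hidden_dim=32, n_topics=5)
# scores = model(torch.randn(2, 10, 3, 16))  # influence trajectory: 2 users, 10 timestamps
```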
The mass area of jets
We introduce a new characteristic of jets called mass area. It is defined so
as to measure the susceptibility of the jet's mass to contamination from soft
background. The mass area is a close relative of the recently introduced
catchment area of jets. We define it also in two variants: passive and active.
As a preparatory step, we generalise the results for passive and active areas
of two-particle jets to the case where the two constituent particles have
arbitrary transverse momenta. As a main part of our study, we use the mass area
to analyse a range of modern jet algorithms acting on simple one- and
two-particle systems. We find a whole variety of behaviours of passive and
active mass areas depending on the algorithm, relative hardness of particles or
their separation. We also study mass areas of jets from Monte Carlo simulations
as well as give an example of how the concept of mass area can be used to
correct jets for contamination from pileup. Our results show that the
information provided by the mass area can be very useful in a range of
jet-based analyses.
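As a rough numerical illustration of the intuition behind the mass area (how much a jet's mass responds to uniform soft contamination), the toy below sprinkles soft ghosts over a circular region around the jet axis and returns the shift in squared jet mass per unit of soft transverse-momentum density. It approximates the jet's catchment region by a circle of radius R, ignores clustering details, and is not the paper's formal passive or active definition.

```python
import numpy as np

def jet_mass2(pt, y, phi):
    """Squared invariant mass of a set of massless constituents given (pt, y, phi)."""
    px, py = pt * np.cos(phi), pt * np.sin(phi)
    pz, e = pt * np.sinh(y), pt * np.cosh(y)
    return e.sum()**2 - px.sum()**2 - py.sum()**2 - pz.sum()**2

def mass_sensitivity_to_ghosts(pt, y, phi, jet_y, jet_phi, R,
                               ghost_pt=1e-3, n_ghost=2000, seed=0):
    """Toy estimate of how the squared jet mass responds to uniform soft radiation:
    sprinkle soft ghosts over the disc of radius R around the jet axis and return
    the change in m^2 per unit of added soft transverse-momentum density."""
    rng = np.random.default_rng(seed)
    # uniform ghosts in the (y, phi) disc of radius R centred on the jet axis
    r = R * np.sqrt(rng.random(n_ghost))
    theta = 2 * np.pi * rng.random(n_ghost)
    gy, gphi = jet_y + r * np.cos(theta), jet_phi + r * np.sin(theta)
    gpt = np.full(n_ghost, ghost_pt)

    m2_before = jet_mass2(pt, y, phi)
    m2_after = jet_mass2(np.concatenate([pt, gpt]), np.concatenate([y, gy]),
                         np.concatenate([phi, gphi]))
    ghost_density = (n_ghost * ghost_pt) / (np.pi * R**2)  # soft pt per unit area
    return (m2_after - m2_before) / ghost_density
```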
Non-Gaussian states for continuous variable quantum computation via Gaussian maps
We investigate non-Gaussian states of light as ancillary inputs for
generating nonlinear transformations required for quantum computing with
continuous variables. We consider a recent proposal for preparing a cubic phase
state, find the exact form of the prepared state and perform a detailed
comparison to the ideal cubic phase state. We thereby identify the main
challenges to preparing an ideal cubic phase state and describe the gates
implemented with the non-ideal prepared state. We also find the general form of
operations that can be implemented with ancilla Fock states, together with
Gaussian input states, linear optics and squeezing transformations, and
homodyne detection with feed forward, and discuss the feasibility of continuous
variable quantum computing using ancilla Fock states.
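A minimal numerical sketch of the comparison described above: building an ideal cubic phase state exp(i*gamma*x^3) acting on a squeezed vacuum in a truncated Fock basis, then computing its overlap fidelity with any prepared state vector. The truncation dimension, the squeezing convention and the variable `prepared_state_vector` are assumptions of the sketch, not the paper's construction.

```python
import numpy as np
from scipy.linalg import expm

def cubic_phase_state(gamma, r, dim=60):
    """Ideal cubic phase state exp(i*gamma*x^3) S(r)|0> in a truncated Fock basis.
    The truncation 'dim' must be large enough for the chosen gamma and r."""
    n = np.arange(1, dim)
    a = np.diag(np.sqrt(n), k=1)                  # annihilation operator
    x = (a + a.conj().T) / np.sqrt(2)             # quadrature operator
    vac = np.zeros(dim)
    vac[0] = 1.0
    squeeze = expm(0.5 * r * (a @ a - a.conj().T @ a.conj().T))   # squeezing S(r)
    return expm(1j * gamma * np.linalg.matrix_power(x, 3)) @ (squeeze @ vac)

def fidelity(psi, phi):
    """Overlap fidelity |<psi|phi>|^2 between two pure states (normalised first)."""
    psi = psi / np.linalg.norm(psi)
    phi = phi / np.linalg.norm(phi)
    return abs(np.vdot(psi, phi)) ** 2

# e.g. compare the ideal state with an approximately prepared one (hypothetical vector):
# f = fidelity(cubic_phase_state(0.1, 1.0), prepared_state_vector)
```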
Probing Shadowed Nuclear Sea with Massive Gauge Bosons in the Future Heavy-Ion Collisions
The production of the massive gauge bosons W and Z could provide an
excellent tool to study cold nuclear matter effects and the modifications of
nuclear parton distribution functions (nPDFs) relative to the parton
distribution functions (PDFs) of a free proton in high-energy nuclear
reactions at the LHC, as well as in heavy-ion collisions (HIC) at the much
higher center-of-mass energies available at future colliders. In this paper
we calculate the rapidity and transverse momentum distributions of the vector
bosons and their nuclear modification factors in p+Pb and Pb+Pb collisions at
the corresponding TeV-scale energies in the framework of perturbative QCD,
utilizing three parametrization sets of nPDFs: EPS09, DSSZ and nCTEQ. It is
found that in heavy-ion collisions at such high colliding energies, both the
rapidity distribution and the transverse momentum spectrum of the vector
bosons are considerably suppressed in wide kinematic regions with respect to
p+p reactions due to the large nuclear shadowing effect. We demonstrate that
in massive vector boson production, processes with sea quarks in the initial
state may contribute more than those with valence quarks in the initial
state; therefore, in future heavy-ion collisions the isospin effect is less
pronounced and the charge asymmetry of the W boson will be reduced
significantly compared to that at the LHC. A large difference between the
nCTEQ results and the EPS09 and DSSZ results is observed in the nuclear
modifications of both the rapidity and transverse momentum distributions of
W and Z in future HIC.
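A small helper illustrating the nuclear modification factor the abstract refers to, written here per nucleon pair as R = dsigma_AB / (A * B * dsigma_pp), bin by bin in rapidity or transverse momentum. The scaling convention and array names are assumptions made for this sketch.

```python
import numpy as np

def nuclear_modification_factor(dsigma_AB, dsigma_pp, A, B=1):
    """Per-nucleon-pair nuclear modification factor for a differential cross section,
    R_AB = dsigma_AB / (A * B * dsigma_pp), evaluated bin by bin (e.g. in y or pT).
    Values below 1 indicate suppression, as expected from nuclear shadowing."""
    return np.asarray(dsigma_AB, dtype=float) / (A * B * np.asarray(dsigma_pp, dtype=float))

# hypothetical usage: rapidity spectra of Z bosons in Pb+Pb (A = B = 208) vs p+p
# r_AA = nuclear_modification_factor(dsigma_dy_PbPb, dsigma_dy_pp, A=208, B=208)
```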
Robust Re-Identification by Multiple Views Knowledge Distillation
To achieve robustness in Re-Identification, standard methods leverage tracking information in a Video-To-Video fashion. However, these solutions face a large drop in performance for single image queries (e.g., Image-To-Video setting). Recent works address this severe degradation by transferring temporal information from a Video-based network to an Image-based one. In this work, we devise a training strategy that allows the transfer of superior knowledge arising from a set of views depicting the target object. Our proposal - Views Knowledge Distillation (VKD) - pins this visual variety as a supervision signal within a teacher-student framework, where the teacher educates a student who observes fewer views. As a result, the student outperforms not only its teacher but also the current state-of-the-art in Image-To-Video by a wide margin (6.3% mAP on MARS, 8.6% on Duke-Video-ReId and 5% on VeRi-776). A thorough analysis - on Person, Vehicle and Animal Re-ID - investigates the properties of VKD from a qualitative and quantitative perspective.
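The snippet below is a toy distillation objective in the spirit of the teacher-student setup described above, not the authors' exact VKD loss: the student, which observes fewer views, matches the teacher's softened predictions while still learning the identity labels. The temperature and weighting values are illustrative.

```python
import torch
import torch.nn.functional as F

def views_distillation_loss(student_logits, teacher_logits, labels,
                            temperature=4.0, alpha=0.5):
    """Toy teacher-student objective (a sketch, not the VKD paper's loss).

    student_logits, teacher_logits : (batch, n_identities) classification logits
    labels                         : (batch,) ground-truth identity indices
    """
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    # distillation term: match the teacher's softened multi-view predictions
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # supervised term: standard identity classification
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```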
Towards Accurate Estimation of the Proportion of True Null Hypotheses in Multiple Testing
BACKGROUND: Biomedical researchers are now often faced with situations where it is necessary to test a large number of hypotheses simultaneously, e.g., in comparative gene expression studies using high-throughput microarray technology. To properly control false positive errors the FDR (false discovery rate) approach has become widely used in multiple testing. The accurate estimation of FDR requires that the proportion of true null hypotheses be accurately estimated. To date many methods for estimating this quantity have been proposed. Typically when a new method is introduced, some simulations are carried out to show the improved accuracy of the new method. However, the simulations are often very limited, covering only a few points in the parameter space. RESULTS: Here I have carried out extensive in silico experiments to compare some commonly used methods for estimating the proportion of true null hypotheses. The coverage of these simulations is unprecedentedly thorough over the parameter space compared to typical simulation studies in the literature. Thus this work enables us to draw conclusions globally as to the performance of these different methods. It was found that a very simple method gives the most accurate estimation in a dominantly large area of the parameter space. Given its simplicity and its overall superior accuracy I recommend its use as the first choice for estimating the proportion of true null hypotheses in multiple testing.
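The abstract does not name the simple estimator it recommends; purely for illustration, here is one widely used simple estimator of the proportion of true null hypotheses (Storey's lambda estimator), shown as a sketch rather than as the method evaluated in the paper.

```python
import numpy as np

def pi0_storey(p_values, lam=0.5):
    """Simple estimator of the proportion of true null hypotheses (Storey, 2002):
    p-values above the threshold 'lam' are assumed to come (almost) only from true
    nulls, whose p-values are uniform on [0, 1], so

        pi0_hat = #{p_i > lam} / ((1 - lam) * m)
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    return min(1.0, np.sum(p > lam) / ((1.0 - lam) * m))

# e.g. pi0 = pi0_storey(p_values); FDR estimates then scale with pi0
```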
Optimization of Brownian ratchets for the manipulation of charged components within supported lipid bilayers
In probability theory, there is a counter-intuitive result that it is possible to construct a winning strategy from two individually losing (or at most breaking-even) "games" by alternating between them. The work presented here demonstrates the application of this principle to supported lipid bilayers (SLBs) in order to create directed motion of charged lipid components in the membrane, which was achieved through the use of "Brownian ratchets" in patterned SLBs. Both a finite element analysis model and an experimental setup have been used to investigate the role of key parameters for the operation of these ratchets: (1) the asymmetry of the ratchet teeth and (2) the relation of the ratchet height to the period of the applied electric field. Importantly, we find that the efficiency of the ratchet for a given charged species depends on its diffusion coefficient. This opens the possibility for separation of membrane species according to their size or viscous drag coefficient within the membrane.
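The toy below is a generic 1D flashing-ratchet simulation, not the paper's 2D patterned-SLB geometry or its finite element model; it only illustrates why the drift depends on the diffusion coefficient. An overdamped particle in an asymmetric sawtooth potential that is periodically switched on and off acquires a net drift whose magnitude varies with D; all parameters, units and names are illustrative.

```python
import numpy as np

def sawtooth_force(x, period=1.0, asymmetry=0.2, height=1.0):
    """Force from an asymmetric sawtooth ('ratchet tooth') potential of given period.
    'asymmetry' is the fraction of the period occupied by the steep side."""
    xi = np.mod(x, period)
    steep = xi < asymmetry * period
    # constant slopes of opposite sign on the two sides of each tooth
    return np.where(steep, -height / (asymmetry * period),
                    height / ((1 - asymmetry) * period))

def simulate_ratchet(D, n_steps=100_000, dt=1e-4, field_period=0.05, seed=0):
    """Overdamped Brownian dynamics of a particle in a sawtooth potential that is
    switched on and off with period 'field_period' (a flashing ratchet).
    Returns the mean drift velocity; its dependence on D enables separation."""
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    mobility = D  # Einstein relation with k_B*T = 1 (toy units)
    for _ in range(n_steps):
        field_on = (t % field_period) < (field_period / 2)
        f = sawtooth_force(x) if field_on else 0.0
        x += mobility * f * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
        t += dt
    return x / t

# e.g. compare drift of species with different diffusion coefficients:
# v_fast, v_slow = simulate_ratchet(D=1.0), simulate_ratchet(D=0.2)
```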