15,585 research outputs found
General relativistic X-ray (UV) polarization rotations as a quantitative test for black holes
It is now 11 years since a potentially easily observable, quantitative test for black holes using general relativistic polarization rotations was proposed (Stark and Connors 1977; Connors and Stark 1977). General relativistic rotations of the X-ray polarization plane of 10 to 100 degrees with X-ray energy (between 1 and 100 keV) are predicted for black hole X-ray binaries. (Classically, by symmetry, there is no rotation.) Unfortunately, X-ray polarimetry has not been taken sufficiently seriously during this period, and this test has not yet been performed. A similar (though probably less clean) effect is expected in the UV for supermassive black holes in some quasars/active galactic nuclei. Summarizing: (1) a quantitative test (proposed in 1977) for black holes exists; (2) X-ray polarimetry of galactic X-ray binaries sensitive to at least 1/2 percent between 1 keV and 100 keV is needed (polarimetry in the UV of quasars and AGN will also be of interest); and (3) proportional counters using rise-time discrimination were shown in laboratory experiments to be able to perform X-ray polarimetry, and this and other methods need to be developed.
Gravitational radiation from rotating gravitational collapse
The efficiency of gravitational wave emission from axisymmetric rotating collapse to a black hole was found to be very low: ΔE/Mc² < 7 × 10^-4. The main waveform shape is well defined and nearly independent of the details of the collapse. Such a signature will allow pattern-recognition techniques to be used when searching experimental data. These results (which can be scaled in mass) were obtained using a fully general relativistic computer code that evolves rotating axisymmetric configurations and directly computes their gravitational radiation emission.
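To get a feel for the scale of this bound, the efficiency limit ΔE/Mc² < 7 × 10^-4 can be turned into an absolute energy figure. A minimal sketch (the 10-solar-mass progenitor is an illustrative choice, not a value from the abstract):

```python
# Toy calculation of the maximum gravitational-wave energy implied by the
# efficiency bound Delta E / (M c^2) < 7e-4, which scales linearly in mass.
# Physical constants are standard; the example mass is illustrative only.

C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
EFFICIENCY_BOUND = 7e-4

def max_radiated_energy(mass_kg):
    """Upper bound on energy radiated as gravitational waves, in joules."""
    return EFFICIENCY_BOUND * mass_kg * C**2

e_max = max_radiated_energy(10 * M_SUN)
print(f"Max GW energy for a 10 M_sun collapse: {e_max:.2e} J")
```

Because the bound is a fraction of the rest-mass energy, the result scales trivially to any progenitor mass, consistent with the abstract's remark that the results "can be scaled in mass".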
Gated rotation mechanism of site-specific recombination by φC31 integrase
Integrases, such as that of the Streptomyces temperate bacteriophage φC31, promote site-specific recombination between DNA sequences in the bacteriophage and bacterial genomes to integrate or excise the phage DNA. φC31 integrase belongs to the serine recombinase family, a large group of structurally related enzymes with diverse biological functions. It has been proposed that serine integrases use a "subunit rotation" mechanism to exchange DNA strands after double-strand DNA cleavage at the two recombining att sites, and that many rounds of subunit rotation can occur before the strands are religated. We have analyzed the mechanism of φC31 integrase-mediated recombination in a topologically constrained experimental system using hybrid "phes" recombination sites, each of which comprises a φC31 att site positioned adjacent to a regulatory sequence recognized by Tn3 resolvase. The topologies of reaction products from circular substrates containing two phes sites support a right-handed subunit rotation mechanism for catalysis of both integrative and excisive recombination. Strand exchange usually terminates after a single round of 180° rotation. However, multiple processive "360° rotation" rounds of strand exchange can be observed if the recombining sites have nonidentical base pairs at their centers. We propose that a regulatory "gating" mechanism normally blocks multiple rounds of strand exchange and triggers product release after a single round.
The Sender-Excited Secret Key Agreement Model: Capacity, Reliability and Secrecy Exponents
We consider the secret key generation problem when sources are randomly
excited by the sender and there is a noiseless public discussion channel. Our
setting is thus similar to recent works on channels with action-dependent
states where the channel state may be influenced by some of the parties
involved. We derive single-letter expressions for the secret key capacity
through a type of source emulation analysis. We also derive lower bounds on the
achievable reliability and secrecy exponents, i.e., the exponential rates of
decay of the probability of decoding error and of the information leakage.
These exponents allow us to determine a set of strongly-achievable secret key
rates. For degraded eavesdroppers the maximum strongly-achievable rate equals
the secret key capacity; our exponents can also be specialized to previously
known results.
In deriving our strong achievability results we introduce a coding scheme
that combines wiretap coding (to excite the channel) and key extraction (to
distill keys from residual randomness). The secret key capacity is naturally
seen to be a combination of both source- and channel-type randomness. Through
examples we illustrate a fundamental interplay between the portion of the
secret key rate due to each type of randomness. We also illustrate inherent
tradeoffs between the achievable reliability and secrecy exponents. Our new
scheme also naturally accommodates rate limits on the public discussion. We
show that under rate constraints we are able to achieve larger rates than those
that can be attained through a pure source emulation strategy.

Comment: 18 pages, 8 figures; Submitted to the IEEE Transactions on
Information Theory; Revised in Oct 201
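The "key extraction (to distill keys from residual randomness)" step can be illustrated with a toy 2-universal hash: both parties apply the same publicly seeded random binary matrix to the bits they agree on, so identical keys emerge without leaking extra information beyond the public seed. The function names and parameters below are our own illustrative choices, not the paper's construction:

```python
import random

def extract_key(shared_bits, out_len, seed):
    """Toy 2-universal hash: random binary matrix times bit vector, mod 2.

    Sketches only the key-extraction step; the wiretap-coding step that
    excites the channel is not modeled here.
    """
    rng = random.Random(seed)  # the seed plays the role of public discussion
    n = len(shared_bits)
    key = []
    for _ in range(out_len):
        row = [rng.randrange(2) for _ in range(n)]
        key.append(sum(r & b for r, b in zip(row, shared_bits)) % 2)
    return key

# Both parties hold the same residual randomness and see the public seed,
# so they derive identical keys with no further communication.
alice_bits = [1, 0, 1, 1, 0, 0, 1, 0]
bob_bits = list(alice_bits)            # agreed upon during the discussion phase
k_a = extract_key(alice_bits, 4, seed=42)
k_b = extract_key(bob_bits, 4, seed=42)
assert k_a == k_b
```

Compressing the shared bits down to a shorter key is what trades residual randomness for secrecy; the paper's exponents quantify how fast the leakage of such schemes can be driven to zero.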
Towards electron transport measurements in chemically modified graphene: The effect of a solvent
Chemical functionalization of graphene modifies the local electron density of
the carbon atoms and hence electron transport. Measuring these changes allows
for a closer understanding of the chemical interaction and the influence of
functionalization on the graphene lattice. However, chemistry (in this case
diazonium chemistry) is not the only factor affecting electron transport: it is
also influenced by defects and dopants resulting from different processing
steps. Here, we show that solvents used in the chemical reaction process change
the transport properties. In more detail, the investigated combination of
isopropanol and heating treatment reduces the doping concentration and
significantly increases the mobility of graphene. Furthermore, the isopropanol
treatment alone increases the concentration of dopants and introduces an
asymmetry between electron and hole transport which might be difficult to
distinguish from the effect of functionalization. The results shown in this
work demand a closer look at the solvents used for chemical modification in
order to understand their influence.
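The mobility changes discussed above are typically extracted from the gate-voltage dependence of the sheet conductivity, μ = (1/C_g) dσ/dV_g. A minimal sketch of that extraction; the gate-capacitance value and the synthetic linear trace are illustrative assumptions, not data from this work:

```python
# Field-effect mobility estimate from a conductivity-vs-gate-voltage trace,
# mu = (1/C_g) * d(sigma)/dV_g. C_G below is a typical value for graphene on
# 300 nm SiO2; the two-point trace is synthetic, for illustration only.

C_G = 1.15e-4  # gate capacitance per unit area, F/m^2 (assumed)

def field_effect_mobility(vg, sigma, c_g=C_G):
    """Slope of sigma(Vg) on the linear branch, divided by gate capacitance.

    Returns mobility in m^2/(V s).
    """
    dsigma = sigma[-1] - sigma[0]
    dvg = vg[-1] - vg[0]
    return (dsigma / dvg) / c_g

# Synthetic linear regime: sheet conductivity rises 1 mS per 10 V of gate bias.
vg = [10.0, 20.0]           # gate voltages, V
sigma = [1.0e-3, 2.0e-3]    # sheet conductivity, S (per square)
mu = field_effect_mobility(vg, sigma)
print(f"mobility ~ {mu * 1e4:.0f} cm^2/Vs")
```

Doping shifts and electron-hole asymmetry of the kind described in the abstract would show up as a displaced Dirac point and different slopes on the two branches of the same σ(V_g) trace.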
Rank Minimization over Finite Fields: Fundamental Limits and Coding-Theoretic Interpretations
This paper establishes information-theoretic limits in estimating a finite
field low-rank matrix given random linear measurements of it. These linear
measurements are obtained by taking inner products of the low-rank matrix with
random sensing matrices. Necessary and sufficient conditions on the number of
measurements required are provided. It is shown that these conditions are sharp
and the minimum-rank decoder is asymptotically optimal. The reliability
function of this decoder is also derived by appealing to de Caen's lower bound
on the probability of a union. The sufficient condition also holds when the
sensing matrices are sparse - a scenario that may be amenable to efficient
decoding. More precisely, it is shown that if the n × n sensing matrices
contain, on average, Ω(n log n) entries, the number of measurements
required is the same as that when the sensing matrices are dense and contain
entries drawn uniformly at random from the field. Analogies are drawn between
the above results and rank-metric codes in the coding theory literature. In
fact, we are also strongly motivated by understanding when minimum rank
distance decoding of random rank-metric codes succeeds. To this end, we derive
distance properties of equiprobable and sparse rank-metric codes. These
distance properties provide a precise geometric interpretation of the fact that
the sparse ensemble requires as few measurements as the dense one. Finally, we
provide a non-exhaustive procedure to search for the unknown low-rank matrix.

Comment: Accepted to the IEEE Transactions on Information Theory; Presented at
the IEEE International Symposium on Information Theory (ISIT) 201
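A toy instance may make the setup concrete: each measurement is an inner product ⟨A_i, X⟩ over GF(2) (sum of elementwise products mod 2), and the minimum-rank decoder returns the consistent matrix of lowest rank. The tiny 2 × 2 case and the single-entry (maximally sparse) sensing matrices are our own illustrative choices, not the paper's ensembles:

```python
import itertools

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = [row[:] for row in M]
    rank, ncols = 0, len(M[0])
    for col in range(ncols):
        pivot = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def inner(A, X):
    """<A, X> over GF(2): sum of elementwise products, mod 2."""
    return sum(a & x for ra, rx in zip(A, X) for a, x in zip(ra, rx)) % 2

def min_rank_decode(sensing, y, n):
    """Exhaustive minimum-rank decoder over all n x n binary matrices."""
    best, best_rank = None, n + 1
    for bits in itertools.product([0, 1], repeat=n * n):
        X = [list(bits[i * n:(i + 1) * n]) for i in range(n)]
        if all(inner(A, X) == yi for A, yi in zip(sensing, y)):
            r = gf2_rank(X)
            if r < best_rank:
                best, best_rank = X, r
    return best

n = 2
X_true = [[1, 1], [1, 1]]                       # rank 1 over GF(2)
# four single-entry sensing matrices: a maximally sparse ensemble
sensing = [[[1, 0], [0, 0]], [[0, 1], [0, 0]],
           [[0, 0], [1, 0]], [[0, 0], [0, 1]]]
y = [inner(A, X_true) for A in sensing]         # noiseless measurements
print(min_rank_decode(sensing, y, n) == X_true)  # prints True
```

The exhaustive search is exponential in n², which is why the paper's sparse-ensemble result (the same measurement count sufficing as for dense sensing matrices) matters for any hope of efficient decoding.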
Protocluster Discovery in Tomographic Lyα Forest Flux Maps
We present a new method of finding protoclusters using tomographic maps of
Lyα Forest flux. We review our method of creating tomographic flux maps
and discuss our new high performance implementation, which makes large
reconstructions computationally feasible. Using a large N-body simulation, we
illustrate how protoclusters create large-scale flux decrements, roughly 10
Mpc across, and how we can use this signal to find them in flux maps.
We test the performance of our protocluster finding method by running it on the
ideal, noiseless map and tomographic reconstructions from mock surveys, and
comparing to the halo catalog. Using the noiseless map, we find protocluster
candidates with about 90% purity, and recover about 75% of the protoclusters
that form massive clusters (). We
construct mock surveys similar to the ongoing COSMOS Lyman-Alpha Mapping And
Tomography Observations (CLAMATO) survey. While the existing data has an
average sightline separation of 2.3 Mpc, we test separations of 2 - 6
Mpc to see what can be tolerated for our application. Using
reconstructed maps from small-separation mock surveys, the protocluster
candidate purity and completeness are very close to what was found in the
noiseless case. As the sightline separation increases, the purity and
completeness decrease, although they remain much higher than we initially
expected. We extended our test cases to mock surveys with an average separation
of 15 Mpc, meant to reproduce high source density areas of the BOSS
survey. We find that even with such a large sightline separation, the method
can still be used to find some of the largest protoclusters.

Comment: 18 pages, 12 figure
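The purity and completeness figures quoted above come from matching a candidate catalog against the true protoclusters in the simulation. A minimal sketch of such a matching; the positions, the matching radius, and the 1D geometry are invented for illustration:

```python
# Toy purity/completeness computation: match detected flux-decrement
# candidates to a truth catalog within a fixed radius. Catalogs and the
# 5 Mpc matching radius are illustrative, not values from the paper.

def match(candidates, truths, radius):
    """Greedy 1D matching: each candidate may claim one unclaimed truth."""
    matched_truths = set()
    matched_cands = 0
    for c in candidates:
        hit = next((i for i, t in enumerate(truths)
                    if i not in matched_truths and abs(c - t) <= radius), None)
        if hit is not None:
            matched_truths.add(hit)
            matched_cands += 1
    purity = matched_cands / len(candidates)        # fraction of real detections
    completeness = len(matched_truths) / len(truths)  # fraction recovered
    return purity, completeness

truths = [12.0, 55.0, 130.0, 210.0]   # true protocluster positions, Mpc
candidates = [11.0, 57.0, 300.0]      # detected large-scale flux decrements
p, c = match(candidates, truths, radius=5.0)
print(f"purity {p:.2f}, completeness {c:.2f}")  # prints purity 0.67, completeness 0.50
```

In the paper's 3D setting the same two numbers summarize the trade-off as the sightline separation grows: wider spacing degrades the reconstruction, lowering both scores.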