Secure thermal infrared communications using engineered blackbody radiation
The thermal (emitted) infrared frequency bands, from 20–40 THz and 60–100 THz, are best known for applications in thermography. This underused and unregulated part of the spectral range offers opportunities for the development of secure communications. The 'THz Torch' concept was recently presented by the authors. This technology fundamentally exploits engineered blackbody radiation, by partitioning thermally generated spectral noise power into pre-defined frequency channels; the energy in each channel is then independently pulse modulated, and multiplexing schemes are introduced to create a robust form of short-range secure communications in the far/mid infrared. To date, octave-bandwidth (25–50 THz) single-channel links have been demonstrated at 380 bps. Multi-channel 'THz Torch' frequency-division multiplexing (FDM) and frequency-hopping spread-spectrum (FHSS) schemes have been proposed, but only a slow 40 bps FDM scheme has been demonstrated experimentally. Here, we report a much faster 1,280 bps FDM implementation. In addition, an experimental proof-of-concept FHSS scheme is demonstrated for the first time, having a 320 bps data rate. With both 4-channel multiplexing schemes, measured bit error rates (BERs) of < 10^-6 are achieved over a distance of 2.5 cm. Our approach represents a new paradigm in the way niche secure communications can be established over short links.
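As a rough illustration of the channelised pulse modulation described above, the toy simulation below models a 4-channel on-off-keyed FDM link with additive Gaussian noise and counts bit errors; the channel count matches the paper, but the noise model, SNR value and decision threshold are illustrative assumptions, not the authors' hardware implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 4          # one thermal emitter/detector pair per pre-defined band
N_BITS = 10_000         # bits per channel in this toy run
SNR_LINEAR = 100.0      # assumed received signal-to-noise ratio (illustrative)

# Each channel carries an independent on-off-keyed (pulse-modulated) bit stream.
tx_bits = rng.integers(0, 2, size=(N_CHANNELS, N_BITS))

# Received power: bit value plus additive Gaussian noise, a crude stand-in for
# detector and background noise on each frequency channel.
noise = rng.normal(0.0, 1.0 / np.sqrt(SNR_LINEAR), size=tx_bits.shape)
rx_power = tx_bits + noise

# Simple threshold detection, done independently per channel (the actual FDM
# separation is performed by optical band-pass filters, not in this code).
rx_bits = (rx_power > 0.5).astype(int)

ber = np.mean(rx_bits != tx_bits)
aggregate_rate_bps = N_CHANNELS * 320   # e.g. 4 x 320 bps -> 1,280 bps as in the paper
print(f"toy BER = {ber:.2e}, aggregate rate = {aggregate_rate_bps} bps")
```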
An agent-based DDM for high level architecture
The Data Distribution Management (DDM) service is one of the six services provided in the Runtime Infrastructure (RTI) of the High Level Architecture (HLA). Its purpose is to perform data filtering and reduce the irrelevant data communicated between federates. The two DDM schemes proposed for the RTI, region-based and grid-based DDM, aim to send as little irrelevant data to subscribers as possible, but they only manage to filter part of this information, and some irrelevant data is still communicated. Previously (G. Tan et al., 2000), we employed intelligent agents to perform data filtering in HLA, implemented an agent-based DDM in RTI (ARTI) and compared it with the other two filtering mechanisms. This paper reports on additional experiments, results and analysis using two scenarios: the AWACS sensing-aircraft simulation and the air traffic control simulation scenario. Experimental results show that, compared with the other mechanisms, the agent-based approach communicates only relevant data and minimizes network communication, while remaining comparable in terms of time efficiency. Some guidelines on when the agent-based scheme can be used are also given.
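To make the filtering comparison concrete, here is a minimal sketch contrasting coarse grid-based matching with an exact, agent-evaluated interest test; the Region class and the 2-D rectangular routing space are illustrative stand-ins under assumed, simplified semantics, not the RTI or ARTI API.

```python
import math
from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned 2-D extent in routing space (illustrative model only)."""
    x0: float
    x1: float
    y0: float
    y1: float

    def overlaps(self, other: "Region") -> bool:
        return (self.x0 < other.x1 and other.x0 < self.x1 and
                self.y0 < other.y1 and other.y0 < self.y1)

def grid_cells(r: Region, cell: float) -> set:
    """Grid cells touched by a region under grid-based DDM (coarse filtering)."""
    xs = range(int(r.x0 // cell), int(math.ceil(r.x1 / cell)))
    ys = range(int(r.y0 // cell), int(math.ceil(r.y1 / cell)))
    return {(x, y) for x in xs for y in ys}

def grid_based_match(pub: Region, sub: Region, cell: float) -> bool:
    # Coarse test: any shared grid cell routes the update, so some
    # irrelevant data can still reach the subscriber.
    return bool(grid_cells(pub, cell) & grid_cells(sub, cell))

def agent_based_match(pub: Region, sub: Region) -> bool:
    # The "agent" evaluates the subscriber's exact interest predicate,
    # so only genuinely relevant updates are forwarded.
    return pub.overlaps(sub)

pub = Region(0.0, 0.4, 0.0, 1.0)
sub = Region(0.6, 1.0, 0.0, 1.0)   # shares a grid cell with pub but never overlaps it
print(grid_based_match(pub, sub, cell=1.0))  # True  -> irrelevant data still sent
print(agent_based_match(pub, sub))           # False -> exact filtering drops it
```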
Quantum Phase Transition in Finite-Size Lipkin-Meshkov-Glick Model
The Lipkin model of arbitrary particle number N is studied in terms of an exact differential-operator representation of the spin operators, from which we obtain the low-lying energy spectrum with the instanton method of quantum tunneling. Our new observation is that the well-known quantum phase transition can also occur in the finite-N model, but only if N is an odd number. We furthermore demonstrate a new type of quantum phase transition, characterized by level crossing, which is induced by geometric-phase interference and is remarkably periodic with respect to the coupling parameter. Finally, the conventional quantum phase transition is understood intuitively from the tunneling formulation in the thermodynamic limit.
Comment: 4 figures
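For orientation, one common parametrisation of the Lipkin-Meshkov-Glick Hamiltonian in terms of collective spin operators is (the normalisation and sign conventions here are an assumption; the paper may use a different form)

$$ H \;=\; -\frac{\lambda}{N}\left(S_x^{2} + \gamma\,S_y^{2}\right) - h\,S_z, \qquad S_\alpha = \tfrac{1}{2}\sum_{i=1}^{N}\sigma_\alpha^{(i)}, $$

with the conventional quantum phase transition appearing in the thermodynamic limit N → ∞ (at h = λ in the isotropic case γ = 1).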
The Luminosity–E_p Relation within Gamma-Ray Bursts and Implications for Fireball Models
Using a sample of 2408 time-resolved spectra for 91 BATSE gamma-ray bursts (GRBs) presented by Preece et al., we show that the relation between the isotropic-equivalent luminosity (L_iso) and the spectral peak energy (E_p) in the cosmological rest frame, L_iso \propto E_p^2, not only holds within these bursts, but also holds among these GRBs, assuming that the burst rate as a function of redshift is proportional to the star formation rate. The possible implications of this relation for the emission models of GRBs are discussed. We suggest that both the kinetic-energy-dominated internal shock model and the magnetic-dissipation-dominated external shock model can well interpret this relation. We constrain the parameters of these two models, and find that they are in good agreement with the parameters from fits to the afterglow data (abridged).
Comment: 3 pages plus 5 figures, emulateapj style, accepted for publication in ApJ Letters
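In the notation of the abstract, and under the standard isotropic-emission bookkeeping (not specific to this paper), the quantities entering the relation are

$$ L_{\rm iso} = 4\pi\, d_L^{2}(z)\, F, \qquad E_p = (1+z)\, E_p^{\rm obs}, \qquad L_{\rm iso} \propto E_p^{2}, $$

where F is the observed bolometric energy flux, d_L the luminosity distance, and E_p^{obs} the observed νF_ν peak energy of the time-resolved spectrum.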
Experimental and Numerical Investigation on Progressive Collapse Resistance of Post-tensioned Precast Concrete Beam-Column Sub-assemblages
In this paper, four half-scale precast concrete (PC) beam-column sub-assemblages with high-performance connections were tested under a push-down loading procedure to study the load-resisting mechanisms of PC frames subjected to different column-removal scenarios. The parameters investigated include the location of the removed column and the effective prestress in the tendons. The test results indicated that the failure modes of the unbonded post-tensioned precast concrete (PTPC) frames differed from those of reinforced concrete (RC) frames: no cracks formed in the beams, and wide openings formed near the beam-to-column interfaces. For specimens without overhanging beams, the side column failed in eccentric compression. Moreover, the load-resisting mechanisms in PC frames were significantly different from those of RC frames: the compressive arch action (CAA) that developed in the concrete during column removal was mainly due to the actively applied pre-compressive stress in the concrete, and CAA did not vanish even when severe crushing of the concrete occurred; thus, it may make a negative contribution to load resistance once the displacement exceeds one beam depth. The tensile force developed in the tendons, in turn, could provide catenary action from the beginning of the test. Furthermore, to better understand the behavior of the tested specimens, numerical analyses were carried out. The effects of concrete strength, the axial compression ratio at the side columns, and the loading approach on the behavior of the sub-assemblages were also investigated based on the validated numerical models.
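As a back-of-the-envelope illustration of the catenary contribution mentioned above (a textbook small-rotation estimate, not a result from the paper), a tendon tension T acting across the joint above a removed column at chord rotation θ supplies a vertical resistance of roughly

$$ R_{\rm cat} \;\approx\; 2\,T\sin\theta \;\approx\; 2\,T\,\frac{\delta}{l_n}, $$

where δ is the vertical displacement of the joint and l_n the clear beam span; this is why pre-tensioned tendons can contribute catenary-type resistance from the very beginning of the push-down test.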
Comparison between the Torquato-Rintoul theory of the interface effect in composite media and elementary results
We show that the interface effect on the properties of composite media recently proposed by Torquato and Rintoul (TR) [Phys. Rev. Lett. 75, 4067 (1995)] is in fact elementary, and follows directly from taking the limit in the dipolar polarizability of a coated sphere: the TR "critical values" are simply those that make the dipolar polarizability vanish. Furthermore, the new bounds developed by TR either coincide with the Clausius-Mossotti (CM) relation or provide poor estimates. Finally, we show that the new bounds of TR do not agree particularly well with the original experimental data that they quote.
Comment: 13 pages, RevTeX, 8 PostScript figures
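For context, the quasi-static dipolar polarizability of a coated sphere (core permittivity ε1 and radius a, coating ε2 and outer radius b, host ε_m; quoted here from the standard textbook result as an assumption about the limit being referred to) is

$$ \alpha \;\propto\; b^{3}\, \frac{(\varepsilon_2-\varepsilon_m)(\varepsilon_1+2\varepsilon_2) + f\,(\varepsilon_1-\varepsilon_2)(\varepsilon_m+2\varepsilon_2)} {(\varepsilon_2+2\varepsilon_m)(\varepsilon_1+2\varepsilon_2) + 2f\,(\varepsilon_1-\varepsilon_2)(\varepsilon_2-\varepsilon_m)}, \qquad f=\left(\frac{a}{b}\right)^{3}, $$

so the "critical values" are simply the parameter combinations that make the numerator, and hence the dipolar response, vanish.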
Multi-wavelength variability properties of Fermi blazar S5 0716+714
S5 0716+714 is a typical BL Lacertae object. In this paper we present the analysis and results of long-term simultaneous observations in the radio, near-infrared, optical, X-ray and gamma-ray bands, together with our own photometric observations of this source. The light curves show that the variability amplitudes in the gamma-ray and optical bands are larger than those in the hard X-ray and radio bands, and that the spectral energy distribution (SED) peaks move to shorter wavelengths when the source becomes brighter; both behaviours are similar to other blazars, i.e., the source is more variable at wavelengths shorter than the SED peak frequencies. Analysis shows that the characteristic variability timescales in the 14.5 GHz, optical, X-ray, and gamma-ray bands are comparable to each other. The variations of the hard X-ray and 14.5 GHz emissions are correlated with zero lag, as are the V-band and gamma-ray variations, which is consistent with leptonic models. Coincidences of gamma-ray and optical flares with a dramatic change of the optical polarization are detected. Hadronic models do not offer as natural an explanation for these observations as the leptonic models. A strong optical flare correlated with a gamma-ray flare whose peak flux is lower than the average flux is detected; a leptonic model can explain this variability phenomenon through simultaneous SED modeling. Different leptonic models are distinguished by average SED modeling. The synchrotron plus synchrotron self-Compton (SSC) model is ruled out due to the extreme input parameters it requires. Scattering of external seed photons, such as hot dust or broad-line-region emission, together with the SSC process, is probably needed to explain the gamma-ray emission of S5 0716+714.
Comment: 43 pages, 13 figures, 3 tables, to appear in ApJ
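The zero-lag correlations quoted above are the kind of result a cross-correlation analysis of binned light curves recovers; the sketch below is a generic normalised cross-correlation on evenly sampled, already-binned light curves, with placeholder data standing in for the optical and gamma-ray fluxes (function names and data are assumptions, not the paper's pipeline or data set).

```python
import numpy as np

def normalized_ccf(a: np.ndarray, b: np.ndarray, max_lag: int):
    """Normalised cross-correlation of two evenly sampled light curves.

    Positive lag means `b` lags behind `a` by that many bins.
    """
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    ccf = []
    for lag in lags:
        if lag >= 0:
            x, y = a[: len(a) - lag], b[lag:]
        else:
            x, y = a[-lag:], b[: len(b) + lag]
        ccf.append(np.mean(x * y))
    return lags, np.asarray(ccf)

# Placeholder light curves: a common flaring component plus independent noise,
# standing in for (say) the V-band and gamma-ray fluxes of the blazar.
rng = np.random.default_rng(1)
t = np.arange(500)
flare = np.exp(-0.5 * ((t - 250) / 30.0) ** 2)
optical = flare + 0.1 * rng.normal(size=t.size)
gamma_ray = flare + 0.1 * rng.normal(size=t.size)

lags, ccf = normalized_ccf(optical, gamma_ray, max_lag=50)
print("peak correlation at lag:", lags[np.argmax(ccf)])  # ~0 bins for zero-lag correlated bands
```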
Isospin breaking and f0(980)–a0(980) mixing in the η(1405) → π0 f0(980) reaction
We make a theoretical study of the η(1405) → π0 f0(980) and η(1405) → π0 a0(980) reactions with the aim of determining the isospin violation and the mixing of the f0(980) and a0(980) resonances. We make use of the chiral unitary approach, where these two resonances appear as composite states of two mesons, dynamically generated by the meson-meson interaction provided by chiral Lagrangians. We obtain a very narrow shape for the f0(980) production, in agreement with a BES experiment. As to the amount of isospin violation, or f0(980) and a0(980) mixing, assuming constant vertices for the primary π0 f0(980) and π0 a0(980) production, we find results which are much smaller than found in the recent experimental BES paper, but consistent with results found in two other related BES experiments. We have tried to understand this anomaly by assuming an I=1 mixture in the f0(980) wave function, but this leads to a much bigger width of the mass distribution than observed experimentally. The problem is solved by using the primary production driven by η(1405) → K*K̄, followed by K* → Kπ, which induces an extra singularity in the loop functions needed to produce the f0(980) and a0(980) resonances. Improving upon earlier work along the same lines, and using the chiral unitary approach, we can now predict absolute values for the ratio of the two production rates which are in fair agreement with experiment. We also show that the same results hold if we had the η(1475) resonance or a mixture of these two states, as seems to be the case in the BES experiment.
Semantic Object Parsing with Graph LSTM
By taking the semantic object parsing task as an exemplar application scenario, we propose the Graph Long Short-Term Memory (Graph LSTM) network, which is the generalization of LSTM from sequential data or multi-dimensional data to general graph-structured data. In particular, instead of evenly and fixedly dividing an image into pixels or patches, as in existing multi-dimensional LSTM structures (e.g., Row, Grid and Diagonal LSTMs), we take each arbitrarily shaped superpixel as a semantically consistent node and adaptively construct an undirected graph for each image, where the spatial relations of the superpixels are naturally used as edges. Constructed on such an adaptive graph topology, the Graph LSTM is more naturally aligned with the visual patterns in the image (e.g., object boundaries or appearance similarities) and provides a more economical information-propagation route. Furthermore, for each optimization step over the Graph LSTM, we propose a confidence-driven scheme to update the hidden and memory states of nodes progressively until all nodes are updated. In addition, for each node, the forget gates are adaptively learned to capture different degrees of semantic correlation with neighboring nodes. Comprehensive evaluations on four diverse semantic object parsing datasets demonstrate the significant superiority of our Graph LSTM over other state-of-the-art solutions.
Comment: 18 pages
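A minimal NumPy sketch of the per-node update the abstract describes: each superpixel node aggregates the hidden states of its neighbours (already-updated neighbours contribute fresh states, mimicking the confidence-driven ordering) and applies a separately learned forget gate per neighbour. Shapes, gate layout and the fixed update order are assumptions for illustration, not the authors' released implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ToyGraphLSTMCell:
    """One Graph LSTM update sweep over an undirected superpixel graph (toy)."""

    def __init__(self, in_dim: int, hid_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Gates: input (i), output (o), cell candidate (g), self forget (f_self),
        # plus a neighbour forget gate (f_nbr) evaluated per neighbouring node.
        self.W = {k: rng.normal(0.0, 0.1, size=(hid_dim, in_dim + hid_dim))
                  for k in ("i", "o", "g", "f_self", "f_nbr")}
        self.b = {k: np.zeros(hid_dim) for k in self.W}

    def _gate(self, name, x_v, h_ctx):
        return self.W[name] @ np.concatenate([x_v, h_ctx]) + self.b[name]

    def step(self, x, h, c, adjacency, order):
        """Update every node once, in the given (e.g. confidence-driven) order."""
        h, c = h.copy(), c.copy()
        for v in order:
            nbrs = adjacency[v]
            # Aggregate neighbour hidden states; nodes updated earlier in the
            # sweep contribute their fresh states (progressive updating).
            h_nbr = (np.mean([h[u] for u in nbrs], axis=0)
                     if nbrs else np.zeros_like(h[v]))
            i = sigmoid(self._gate("i", x[v], h_nbr))
            o = sigmoid(self._gate("o", x[v], h_nbr))
            g = np.tanh(self._gate("g", x[v], h_nbr))
            f_self = sigmoid(self._gate("f_self", x[v], h[v]))
            # Adaptive forget gate per neighbour, weighting that neighbour's memory.
            c_nbr = np.zeros_like(c[v])
            for u in nbrs:
                f_u = sigmoid(self._gate("f_nbr", x[v], h[u]))
                c_nbr += f_u * c[u] / len(nbrs)
            c[v] = f_self * c[v] + c_nbr + i * g
            h[v] = o * np.tanh(c[v])
        return h, c

# Tiny usage example: 4 superpixel nodes on a path graph; the update order is a
# placeholder for the confidence ranking produced by the network.
rng = np.random.default_rng(1)
in_dim, hid_dim, n_nodes = 8, 16, 4
cell = ToyGraphLSTMCell(in_dim, hid_dim)
x = rng.normal(size=(n_nodes, in_dim))
h = np.zeros((n_nodes, hid_dim))
c = np.zeros((n_nodes, hid_dim))
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
order = [2, 1, 3, 0]
h, c = cell.step(x, h, c, adjacency, order)
print(h.shape)  # (4, 16)
```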