Space Missions for Automation and Robotics Technologies (SMART) Program
NASA is currently considering the establishment of a Space Missions for Automation and Robotics Technologies (SMART) Program to define, develop, integrate, test, and operate a spaceborne national research facility for the validation of advanced automation and robotics technologies. Initially, the concept is envisioned to be implemented through a series of shuttle-based flight experiments which will utilize telepresence technologies and real-time operation concepts. However, the facility will eventually be capable of a more autonomous role and will be supported by either the shuttle or the space station. To ensure incorporation of leading-edge technology in the facility, performance capability will be periodically and systematically upgraded through the solicitation of recommendations from a user advisory group. The facility will be managed by NASA but will be available to all potential investigators. Experiments for each flight will be selected by a peer review group. Detailed definition and design are proposed to take place during FY 86, with the first SMART flight projected for FY 89.
Frequency-modulated continuous-wave LiDAR compressive depth-mapping
We present an inexpensive architecture for converting a frequency-modulated
continuous-wave LiDAR system into a compressive-sensing based depth-mapping
camera. Instead of raster scanning to obtain depth-maps, compressive sensing
is used to significantly reduce the number of measurements. Ideally, our
approach requires two difference detectors. Due to the large flux entering
the detectors, the signal amplification from heterodyne detection, and the
effects of background subtraction from compressive sensing, the system can
obtain higher signal-to-noise ratios than detector-array based schemes while
scanning a scene faster than is possible through raster-scanning. Moreover,
by efficiently storing only data points from measurements of a pixel scene,
we can easily extract depths by solving only two linear equations with
efficient convex-optimization methods.
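The sparse-recovery step behind such a compressive depth-mapping scheme can be illustrated with a generic reconstruction. The sketch below uses orthogonal matching pursuit with random ±1 measurement patterns as a stand-in for the paper's convex-optimization recovery; the matrix sizes, sparsity level, and the `omp` helper are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit restricted to the selected columns.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 24, 3                     # scene size, measurements, sparsity
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # random +/-1 patterns
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -0.5, 2.0]  # a 3-sparse "scene"
y = A @ x_true                          # m << n compressive measurements
x_hat = omp(A, y, k)                    # sparse recovery from few samples
```

Each iteration adds the pattern most correlated with the residual and re-solves a small least-squares problem, which is why far fewer measurements than pixels suffice when the scene is sparse.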
Fast Hadamard transforms for compressive sensing of joint systems: measurement of a 3.2 million-dimensional bi-photon probability distribution
We demonstrate how to efficiently implement extremely high-dimensional
compressive imaging of a bi-photon probability distribution. Our method uses
fast-Hadamard-transform Kronecker-based compressive sensing to acquire the
joint space distribution. We list, in detail, the operations necessary to
enable fast-transform-based matrix-vector operations in the joint space to
reconstruct a 16.8 million-dimensional image in less than 10 minutes. Within a
subspace of that image exists a 3.2 million-dimensional bi-photon probability
distribution. In addition, we demonstrate how the marginal distributions can
aid in the accuracy of joint space distribution reconstructions.
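The fast-transform matrix-vector products that make such reconstructions tractable rest on the fast Walsh-Hadamard transform, which applies an n-by-n Hadamard matrix in O(n log n) operations instead of O(n^2). A minimal sketch (not the authors' implementation):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform: computes H_n @ x in O(n log n)."""
    x = np.asarray(x, dtype=float).copy()
    n = x.size
    assert n and (n & (n - 1)) == 0, "length must be a power of two"
    h = 1
    while h < n:
        # Butterfly stage: combine pairs separated by stride h.
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x
```

Because the unnormalized Hadamard matrix satisfies H_n @ H_n = n * I, applying `fwht` twice returns the input scaled by its length, so the inverse transform is essentially free; Kronecker structure then lets a joint-space transform be built from transforms on each marginal space.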
Compressive Direct Imaging of a Billion-Dimensional Optical Phase-Space
Optical phase-spaces represent fields of any spatial coherence, and are
typically measured through phase-retrieval methods involving a computational
inversion, interference, or a resolution-limiting lenslet array. Recently, a
weak-values technique demonstrated that a beam's Dirac phase-space is
proportional to the measurable complex weak-value, regardless of coherence.
These direct measurements require scanning through all possible
position-polarization couplings, limiting their dimensionality to less than
100,000. We circumvent these limitations using compressive sensing, a numerical
protocol that allows us to undersample, yet efficiently measure
high-dimensional phase-spaces. We also propose an improved technique that
allows us to directly measure phase-spaces with high spatial resolution and
scalable frequency resolution. With this method, we are able to easily measure
a 1.07-billion-dimensional phase-space. The distributions are numerically
propagated to an object placed in the beam path, with excellent agreement. This
protocol has broad implications in signal processing and imaging, including
recovery of Fourier amplitudes in any dimension with linear algorithmic
solutions and ultra-high-dimensional phase-space imaging.
Comment: 7 pages, 5 figures. Added new larger dataset and fixed typo
Position-Momentum Bell-Nonlocality with Entangled Photon Pairs
Witnessing continuous-variable Bell nonlocality is a challenging endeavor,
but Bell himself showed how one might demonstrate this nonlocality. Though Bell
nearly showed a violation using the CHSH inequality with sign-binned
position-momentum statistics of entangled pairs of particles measured at
different times, his demonstration is subject to approximations not realizable
in a laboratory setting. Moreover, he does not give a quantitative estimate of
the maximum achievable violation for the wavefunction he considers. In this
article, we show how his strategy can be reimagined using the transverse
positions and momenta of entangled photon pairs measured at different
propagation distances, and we find that the maximum achievable violation for
the state he considers is actually very small relative to the upper limit of
2√2. Although Bell's wavefunction does not produce a large violation of
the CHSH inequality, other states may yet do so.
Comment: 6 pages, 3 figures
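The CHSH combination and its bounds are easy to state in code. The sketch below evaluates S = E(a,b) - E(a,b') + E(a',b) + E(a',b') for an idealized two-outcome correlator E(x,y) = -cos(x - y), as for a spin singlet; this correlator and the angle settings are assumptions for illustration, not Bell's position-momentum wavefunction:

```python
import math

def chsh(E, a, ap, b, bp):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Idealized sign-binned correlator (illustrative assumption).
E = lambda x, y: -math.cos(x - y)

# Settings that reach the quantum (Tsirelson) bound |S| = 2*sqrt(2).
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = chsh(E, a, ap, b, bp)
```

Any local hidden-variable model obeys |S| <= 2, so |S| = 2√2 here exhibits the maximum quantum violation; the article's point is that Bell's own wavefunction falls far short of this limit under sign binning.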
Flight elements: Fault detection and fault management
Fault management for an intelligent computational system must be developed using a top-down, integrated engineering approach. The proposed approach integrates the overall environment involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models, including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. Implementation of the concept to achieve a real-time intelligent fault detection and management system will be accomplished via several objectives: development of fault-tolerant/FDIR requirements and specifications from a systems level, carrying through from conceptual design to implementation and mission operations; implementation of monitoring, diagnosis, and reconfiguration at all system levels, providing fault isolation and system integration; optimization of system operations to manage degraded system performance through system integration; and lowering of development and operations costs through the implementation of an intelligent real-time fault detection and fault management system and an information management system.
Compressively characterizing high-dimensional entangled states with complementary, random filtering
The resources needed to conventionally characterize a quantum system are
overwhelmingly large for high-dimensional systems. This obstacle may be
overcome by abandoning traditional cornerstones of quantum measurement, such as
general quantum states, strong projective measurement, and assumption-free
characterization. Following this reasoning, we demonstrate an efficient
technique for characterizing high-dimensional, spatial entanglement with one
set of measurements. We recover sharp distributions with local, random
filtering of the same ensemble in momentum followed by position---something the
uncertainty principle forbids for projective measurements. Exploiting the
expectation that entangled signals are highly correlated, we use fewer than
5,000 measurements to characterize a 65,536-dimensional state. Finally, we use
entropic inequalities to witness entanglement without a density matrix. Our
method represents the sea change unfolding in quantum measurement, where
methods influenced by the information-theory and signal-processing communities
replace unscalable, brute-force techniques---a progression previously followed
by classical sensing.
Comment: 13 pages, 7 figures
Virus Propagation in Multiple Profile Networks
Suppose we have a virus or one competing idea/product that propagates over a
multiple profile (e.g., social) network. Can we predict what proportion of the
network will actually get "infected" (e.g., spread the idea or buy the
competing product), when the nodes of the network appear to have different
sensitivity based on their profile? For example, if there are two profiles A
and B in a network, and the nodes of profile A and profile B are susceptible
to a highly spreading virus with probabilities p_A and p_B respectively, what
percentage of both profiles will actually get infected from the virus at the
end? To reverse the question, what are the necessary
conditions so that a predefined percentage of the network is infected? We
assume that nodes of different profiles can infect one another and we prove
that under realistic conditions, apart from the weak profile (high
sensitivity), the stronger profile (low sensitivity) will get infected as well.
First, we focus on cliques with the goal to provide exact theoretical results
as well as to get some intuition as to how a virus affects such a multiple
profile network. Then, we move to the theoretical analysis of arbitrary
networks. We provide bounds on certain properties of the network based on the
probabilities of infection of each node in it when it reaches the steady state.
Finally, we provide extensive experimental results that verify our theoretical
results and at the same time provide more insight on the problem.
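The two-profile infection dynamics on a clique can be illustrated with a small SIS-style simulation. The update rule, parameter values, and the `simulate_sis_clique` helper below are assumptions for illustration, not the paper's exact model or bounds:

```python
import random

def simulate_sis_clique(n_a, n_b, p_a, p_b, delta, steps, seed=0):
    """SIS epidemic on a clique with two profiles, A and B.

    A susceptible profile-A node is infected by each infectious neighbor
    with probability p_a per step (p_b for profile B); infected nodes
    recover with probability delta per step. Returns the final infected
    fraction of each profile.
    """
    rng = random.Random(seed)
    profiles = ["A"] * n_a + ["B"] * n_b
    infected = [False] * (n_a + n_b)
    infected[0] = True  # a single initially infected node
    for _ in range(steps):
        n_inf = sum(infected)
        nxt = infected[:]
        for i, inf in enumerate(infected):
            if inf:
                if rng.random() < delta:
                    nxt[i] = False
            else:
                p = p_a if profiles[i] == "A" else p_b
                # Probability of catching it from at least one neighbor.
                if rng.random() < 1 - (1 - p) ** n_inf:
                    nxt[i] = True
        infected = nxt
    return sum(infected[:n_a]) / n_a, sum(infected[n_a:]) / n_b

frac_a, frac_b = simulate_sis_clique(50, 50, 0.2, 0.05, delta=0.01, steps=200)
```

With these (assumed) settings the epidemic typically spreads well into both profiles, matching the qualitative claim above that the less sensitive profile does not escape infection.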