A Testbed for Developing and Evaluating GNSS Signal Authentication Techniques
An experimental testbed has been created for developing
and evaluating Global Navigation Satellite System (GNSS)
signal authentication techniques. The testbed advances the state
of the art in GNSS signal authentication by subjecting candidate
techniques to the strongest publicly-acknowledged GNSS spoofing
attacks. The testbed consists of a real-time phase-coherent GNSS
signal simulator that acts as the spoofer, a real-time software-defined
GNSS receiver that plays the role of defender, and
post-processing versions of both the spoofer and defender. Two
recently-proposed authentication techniques are analytically and
experimentally evaluated: (1) a defense based on anomalous
received power in a GNSS band, and (2) a cryptographic
defense against estimation-and-replay-type spoofing attacks. The
evaluation reveals weaknesses in both techniques; nonetheless,
both significantly complicate a successful GNSS spoofing attack.
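The received-power defense in (1) can be illustrated with a minimal sketch. The function names, impedance, and 3 dB threshold below are illustrative assumptions, not the paper's implementation: the idea is simply to raise an alarm when measured in-band power exceeds the expected nominal level by more than a margin, since a spoofer must typically overpower the authentic signals.

```python
import numpy as np

def received_power_dbw(iq_samples, impedance_ohms=50.0):
    """Mean received power of complex baseband samples, in dBW."""
    power_watts = np.mean(np.abs(iq_samples) ** 2) / impedance_ohms
    return 10.0 * np.log10(power_watts)

def power_anomaly(iq_samples, nominal_dbw, threshold_db=3.0):
    """Flag a possible spoofing attack when received power exceeds
    the nominal level by more than threshold_db."""
    return received_power_dbw(iq_samples) - nominal_dbw > threshold_db
```

A defense this simple is easy to evade by a power-matched spoofer, which is consistent with the weaknesses the testbed evaluation reveals.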
Beyond Transmitting Bits: Context, Semantics, and Task-Oriented Communications
Communication systems to date primarily aim at reliably communicating bit
sequences. Such an approach provides efficient engineering designs that are
agnostic to the meanings of the messages or to the goal that the message
exchange aims to achieve. Next generation systems, however, can be potentially
enriched by folding message semantics and goals of communication into their
design. Further, these systems can be made cognizant of the context in which
communication exchange takes place, providing avenues for novel design
insights. This tutorial summarizes the efforts to date, from early adaptations through semantic-aware and task-oriented communications, covering the foundations, algorithms, and potential implementations. The focus is on approaches that use information theory to provide the foundations, as well as on the significant role of learning in semantic and task-aware communications.
Guidance, Navigation and Control for UAV Close Formation Flight and Airborne Docking
Unmanned aerial vehicle (UAV) capability is currently limited by the amount of energy that can be stored onboard or the small amount that can be gathered from the environment. This has historically led to large, expensive vehicles with considerable fuel capacity. Airborne docking, for aerial refueling, is a viable solution that has been proven through decades of implementation with manned aircraft, but had not been successfully tested or demonstrated with UAVs. The prohibitive challenge is the highly accurate and reliable relative positioning performance required to dock with a small target, in the air, amidst external disturbances. GNSS-based navigation systems are well suited to reliable absolute positioning, but fall short for accurate relative positioning. Direct, relative sensor measurements are precise, but can be unreliable in dynamic environments. This work proposes an experimentally verified guidance, navigation, and control solution that enables a UAV to autonomously rendezvous and dock with a drogue being towed by another autonomous UAV. A nonlinear estimation framework uses precise air-to-air visual observations to correct onboard sensor measurements and produce an accurate relative state estimate. The state of the drogue is estimated using known geometric and inertial characteristics together with air-to-air observations. Setpoint augmentation algorithms compensate for leader turn dynamics during formation flight, and for drogue physical constraints during docking. Vision-aided close formation flight has been demonstrated over extended periods: as close as 4 m, in wind speeds in excess of 25 km/h, and at altitudes as low as 15 m. Docking flight tests achieved numerous airborne connections over multiple flights, including five successful docking manoeuvres in seven minutes of a single flight. To the best of our knowledge, these are the closest formation flights performed outdoors and the first UAV airborne docking.
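The core idea of correcting drifting inertial measurements with precise visual observations can be sketched with a toy one-dimensional Kalman filter. This is a simplified illustration under stated assumptions (linear dynamics, scalar range observations, made-up noise parameters), not the paper's nonlinear estimation framework:

```python
import numpy as np

def kf_relative_position(accel_rel, z_vis, dt=0.02, q=0.5, r=0.01):
    """Toy 1-D filter: integrate relative acceleration (inertial
    prediction) and correct with air-to-air visual range
    measurements when available (NaN means no vision fix)."""
    x = np.zeros(2)                 # [relative position, relative velocity]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    H = np.array([[1.0, 0.0]])      # vision observes relative position only
    Q = q * np.outer(B, B)
    est = []
    for a, z in zip(accel_rel, z_vis):
        x = F @ x + B * a           # inertial propagation
        P = F @ P @ F.T + Q
        if not np.isnan(z):         # visual correction step
            y = z - H @ x
            S = H @ P @ H.T + r
            K = (P @ H.T) / S
            x = x + (K * y).ravel()
            P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)
```

With repeated visual fixes the estimate converges to the true relative position even though pure acceleration integration alone would drift.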
Navigation Facility for High Accuracy Offline Trajectory and Attitude Estimation in Airborne Applications
The paper focuses on a navigation facility, relying on commercial-off-the-shelf (COTS) technology, developed to generate high-accuracy attitude and trajectory measurements in postprocessing. Target performance is cm-level positioning with tenth-of-a-degree attitude accuracy. The facility is based on the concept of GPS-aided inertial navigation but comprises carrier-phase differential GPS (CDGPS) processing and attitude estimation based on multiantenna GPS configurations. Expected applications of the system include: (a) performance assessment of integrated navigation systems developed for general aviation aircraft and medium-size unmanned aircraft systems (UAS); (b) generation of reference measurements to evaluate the flight performance of airborne sensors (e.g., radar or laser); and (c) generation of reference trajectory and attitude for improving the imaging quality of airborne remote sensing data. The paper describes the system architecture, the selected algorithms for data processing and integration, and a theoretical performance evaluation. Experimental results are also presented, confirming the effectiveness of the implemented approach.
Real-time topology optimization via learnable mappings
In traditional topology optimization, the computing time required to
iteratively update the material distribution within a design domain strongly
depends on the complexity or size of the problem, limiting its application in
real engineering contexts. This work proposes a multi-stage machine learning
strategy that aims to predict an optimal topology and the related stress fields
of interest, either in 2D or 3D, without resorting to any iterative analysis
and design process. The overall topology optimization is treated as a regression
task in a low-dimensional latent space that encodes the variability of the
target designs. First, a fully-connected model is employed to surrogate the
functional link between the parametric input space characterizing the design
problem and the latent space representation of the corresponding optimal
topology. The decoder branch of an autoencoder is then exploited to reconstruct
the desired optimal topology from its latent representation. The deep learning
models are trained on a dataset generated through a standard method of topology
optimization implementing the solid isotropic material with penalization, for
varying boundary and loading conditions. The underlying hypothesis behind the
proposed strategy is that optimal topologies share enough common patterns to be
compressed into small latent space representations without significant
information loss. Results for a 2D Messerschmitt-Bölkow-Blohm beam
and a 3D bridge case demonstrate the capability of the proposed framework to
provide accurate optimal topology predictions in a fraction of a second.
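The two-stage inference pipeline described above can be sketched as follows. All layer sizes, the 8-dimensional latent code, the 32x32 density grid, and the random placeholder weights are illustrative assumptions; in the actual framework the weights would come from training on SIMP-generated topologies:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Stage 1: fully connected surrogate from the parametric input space
# (e.g., load position and boundary conditions) to the latent code.
# Random weights stand in for trained values.
W1, b1 = rng.normal(size=(64, 4)), np.zeros(64)
W2, b2 = rng.normal(size=(8, 64)), np.zeros(8)

def params_to_latent(p):
    return W2 @ relu(W1 @ p + b1) + b2

# Stage 2: decoder branch of a (pre-trained) autoencoder mapping the
# latent code back to a material-density field on the design domain.
Wd, bd = rng.normal(size=(32 * 32, 8)), np.zeros(32 * 32)

def latent_to_topology(z):
    densities = 1.0 / (1.0 + np.exp(-(Wd @ z + bd)))  # densities in [0, 1]
    return densities.reshape(32, 32)

topology = latent_to_topology(params_to_latent(np.array([1.0, 0.5, 0.0, 2.0])))
```

Because both stages are single forward passes, prediction cost is essentially constant, which is what enables sub-second inference regardless of how expensive the original iterative optimization was.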
HRMobile: A lightweight, local architecture for heart rate measurement
Heart rate and heart rate variability (HRV) are important metrics in the study of numerous physical and psychiatric conditions. Previously, measurement of heart rate was relegated to clinical settings, and was neither convenient nor representative of a patient's typical resting state. In effect, this made gathering heart rate data costly and introduced noise. The current prevalence of mobile phone technology and Internet access has increased the viability of remote health monitoring, presenting an opportunity to substantially improve the speed, convenience, and reliability of heart rate readings. Recent attention has focused on different methods for remote, non-contact heart rate measurement. Of these methods, video presents perhaps the best option for optimizing cost and convenience. This thesis introduces a lightweight architecture for estimating heart rate and HRV using a smartphone camera. The system presented here runs locally on a smartphone, requiring only a phone camera and 15 s or more of continuous video of a subject's face. No Internet connection or networking is necessary. Building the system to run locally in this manner confers benefits such as greater user privacy, offline availability, reliability, cost effectiveness, and speed. However, it also introduces added constraints on computational complexity. With these tradeoffs in mind, the system presented here is implemented within an Android mobile app. The performance of our approach fell short of existing state-of-the-art methods in terms of mean absolute error (MAE) of heart rate estimation, achieving a validation MAE greater than that of other existing approaches. A number of factors may contribute to this performance discrepancy, including limitations in the diversity of the data used with respect to gender, age, skin tone, and heart rate intensity.
Further, the remote photoplethysmography (rPPG) signal generated by this architecture contains a large number of noise artifacts that are difficult to consistently remove through signal processing. This noise is the primary reason for the underperformance of this architecture, and could potentially be explained by model and feature engineering decisions made to address the risk of overfitting on the limited dataset used in this work.
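The basic principle behind video-based heart rate estimation can be sketched in a few lines: subtle blood-volume changes modulate the mean skin-pixel intensity, so the dominant spectral peak of that signal in the physiological band gives the pulse rate. This is a generic rPPG baseline under stated assumptions (per-frame green-channel means already extracted, a 0.7-4 Hz band), not the thesis's architecture:

```python
import numpy as np

def estimate_bpm(green_means, fps):
    """Estimate heart rate from the per-frame mean green-channel
    intensity of a face region, via the dominant spectral peak in
    the physiological band (0.7-4 Hz, i.e. 42-240 bpm)."""
    x = green_means - np.mean(green_means)      # remove DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_hz
```

On real video the raw intensity trace is dominated by motion and illumination artifacts, which is exactly why the noise sensitivity discussed above matters so much in practice.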
Resource Allocation Framework: Validation of Numerical Models of Complex Engineering Systems against Physical Experiments
An increasing reliance on complex numerical simulations for high consequence decision making is the motivation for experiment-based validation and uncertainty quantification to assess, and when needed, to improve the predictive capabilities of numerical models. Uncertainties and biases in model predictions can be reduced by taking two distinct actions: (i) increasing the number of experiments in the model calibration process, and/or (ii) improving the physics sophistication of the numerical model. Therefore, decision makers must select between further code development and experimentation while allocating the finite amount of available resources. This dissertation presents a novel framework to assist in this selection between experimentation and code development for model validation strictly from the perspective of predictive capability. The reduction and convergence of discrepancy bias between model prediction and observation, computed using a suitable convergence metric, play a key role in the conceptual formulation of the framework. The proposed framework is demonstrated using two non-trivial case study applications on the Preston-Tonks-Wallace (PTW) code, which is a continuum-based plasticity approach to modeling metals, and the ViscoPlastic Self-Consistent (VPSC) code which is a mesoscopic plasticity approach to modeling crystalline materials. Results show that the developed resource allocation framework is effective and efficient in path selection (i.e. experimentation and/or code development) resulting in a reduction in both model uncertainties and discrepancy bias. The framework developed herein goes beyond path selection in the validation of numerical models by providing a methodology for the prioritization of optimal experimental settings and an algorithm for prioritization of code development. 
If the path selection algorithm selects the experimental path, optimal selection of the settings at which these physical experiments are conducted, as well as the sequence of these experiments, is vital to maximize the gain in predictive capability of a model. Batch Sequential Design (BSD) is the methodology used in this work to select the optimal experimental settings. A new BSD selection criterion, Coverage Augmented Expected Improvement for Predictive Stability (C-EIPS), is developed to jointly account for the reduction in model discrepancy bias and the coverage of the experiments within the domain of applicability. The new criterion, C-EIPS, is demonstrated to outperform its predecessor, the EIPS criterion, and the distance-based criterion when discrepancy bias is high and coverage is low, while exhibiting performance comparable to the distance-based criterion in efficiently maximizing the predictive capability of the VPSC model as discrepancy decreases and coverage increases. If the path selection algorithm selects the code development path, the developed framework provides an algorithm for the prioritization of code development efforts. In coupled systems, the predictive accuracy of the simulation hinges on the accuracy of the individual constituent models. The potential improvement in the predictive accuracy of the simulation that can be gained by improving a constituent model depends not only on that constituent's relative importance, but also on its inherent uncertainty and inaccuracy. As such, a unique and quantitative code prioritization index (CPI) is proposed to prioritize code development efforts, and its application is demonstrated on a case study of a steel frame with semi-rigid connections.
Findings show that the CPI is effective in identifying the most critical constituent of the coupled system, whose improvement leads to the greatest overall enhancement of the predictive capability of the coupled model.
Taming and Leveraging Directionality and Blockage in Millimeter Wave Communications
To cope with the challenge of high-rate data transmission, Millimeter Wave (mmWave) communication is one potential solution. The short wavelength has opened the era of directional mobile communication, and this semi-optical form of communication requires revolutionary thinking. To assist the research and evaluate various algorithms, we built a motion-sensitive mmWave testbed with two degrees of freedom for environmental sensing and general wireless communication. The first part of this thesis contains two approaches to maintaining a connection in mmWave mobile communication. The first seeks to solve the beam tracking problem using the motion sensor within the mobile device. A tracking algorithm is given and integrated into the tracking protocol. Detailed experiments and numerical simulations compared several compensation schemes against an optical benchmark and demonstrated the efficiency of the overhead reduction. The second strategy, which attempts to mitigate intermittent connections during roaming, is multi-connectivity. Taking advantage of the properties of rateless erasure codes, a fountain-code-based multi-connectivity mechanism is proposed to increase link reliability with a simplified backhaul mechanism. Simulation with a multi-link channel record demonstrates the efficiency and robustness of the system design. The second topic in this thesis explores various techniques for blockage mitigation. A fast, heartbeat-like channel pattern with heavy blockage loss was identified in an mmWave Unmanned Aerial Vehicle (UAV) communication experiment, caused by propeller blockage. These blockage patterns are detected through Holm's procedure, cast as a problem of multi-time-series edge detection. To reduce the blockage effect, an adaptive modulation and coding scheme is designed. The simulation results show that it can greatly improve throughput given appropriately predicted patterns.
Last but not least, the blockage inherent in directional communication also appears as a blessing, because the geometrical information and blockage events of ancillary signal paths can be used to predict the blockage timing for the current transmission path. A geometrical model and a prediction algorithm are derived to resolve the blockage time and initiate proactive handovers. An experiment provides solid proof of the multi-path properties, and numerical simulation demonstrates the efficiency of the proposed algorithm.
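The intuition of predicting main-path blockage from an ancillary-path event can be reduced to a one-line kinematic sketch. This assumes a blocker moving at constant speed along a straight track that crosses the ancillary path before the main path; it is a deliberately simplified model, not the geometrical algorithm derived in the thesis:

```python
def predict_main_blockage(t_aux, path_separation_m, blocker_speed_mps):
    """Given the time t_aux at which an ancillary signal path was cut,
    the separation between the ancillary and main paths measured along
    the blocker's track, and the blocker's speed, predict when the main
    path will be blocked so a handover can be initiated in advance."""
    return t_aux + path_separation_m / blocker_speed_mps
```

For example, a pedestrian moving at 1 m/s who cuts an ancillary path 1.5 m ahead of the main beam at t = 2 s is predicted to block the main path at t = 3.5 s, leaving the link 1.5 s to hand over proactively.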
Context Aided Tracking with Adaptive Hyperspectral Imagery
A methodology for the context-aided tracking of ground vehicles in remote airborne imagery is developed, in which a background model is inferred from hyperspectral imagery. The materials comprising the background of a scene are remotely identified and lead to this model. Two model-formation processes are developed: a manual method, and a method that exploits an emerging adaptive multiple-object-spectrometer instrument. A semi-automated background modeling approach is shown to arrive at a reasonable background model with minimal operator intervention. A novel, adaptive, and autonomous approach uses a new type of adaptive hyperspectral sensor and converges to a 66%-correct background model in 5% of the baseline's sensor acquisition time (a 95% reduction). A multiple-hypothesis tracker is incorporated, which uses background statistics to form track costs and the associated track maintenance thresholds. The context-aided system is demonstrated in a high-fidelity tracking testbed and reduces track identity error by 30%.