An Adaptive Design Methodology for Reduction of Product Development Risk
The interaction of embedded systems with their environment inherently complicates the understanding of requirements and their correct implementation. Moreover, product uncertainty is highest during the early stages of development. Design verification is an essential step in the development of any system, especially for embedded systems. This paper introduces a novel adaptive design methodology that incorporates step-wise prototyping and verification. With each adaptive step, the product-realization level is enhanced while the level of product uncertainty decreases, thereby reducing overall costs. The backbone of this framework is the development of a Domain Specific Operational (DOP) Model and the associated Verification Instrumentation for Test and Evaluation, developed on the basis of the DOP model. Together they generate functionally valid test sequences for carrying out prototype evaluation. The application of this method is sketched with the help of a case study, a 'Multimode Detection Subsystem'. Design methodologies can be compared by defining and computing a generic performance criterion such as Average design-cycle Risk. For the case study, computation of the Average design-cycle Risk shows that the adaptive method reduces product development risk for a small increase in the total design-cycle time.
Comment: 21 pages, 9 figures
Neural Network-Based Multi-Target Detection within Correlated Heavy-Tailed Clutter
This work addresses the problem of range-Doppler multiple target detection in
a radar system in the presence of slow-time correlated and heavy-tailed
distributed clutter. Conventional target detection algorithms assume
Gaussian-distributed clutter, but their performance is significantly degraded
in the presence of correlated heavy-tailed distributed clutter. Derivation of
optimal detection algorithms with heavy-tailed distributed clutter is
analytically intractable. Furthermore, the clutter distribution is frequently
unknown. This work proposes a deep learning-based approach for multiple target
detection in the range-Doppler domain. The proposed approach is based on a
unified NN model to process the time-domain radar signal for a variety of
signal-to-clutter-plus-noise ratios (SCNRs) and clutter distributions,
simplifying the detector architecture and the neural network training
procedure. The performance of the proposed approach is evaluated in various
experiments using recorded radar echoes, and via simulations, it is shown that
the proposed method outperforms the conventional cell-averaging constant
false-alarm rate (CA-CFAR), the ordered-statistic CFAR (OS-CFAR), and the
adaptive normalized matched-filter (ANMF) detectors in terms of probability of
detection in the majority of tested SCNRs and clutter scenarios.
Comment: Accepted to IEEE Transactions on Aerospace and Electronic Systems
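For reference, the cell-averaging CFAR (CA-CFAR) used as a baseline above can be sketched in a few lines. This is a generic one-dimensional illustration, not the detector from the paper; the training/guard window sizes and the false-alarm probability are illustrative choices, and the threshold scaling assumes exponentially distributed (Gaussian-clutter) power.

```python
import numpy as np

def ca_cfar(x, num_train=16, num_guard=2, pfa=1e-3):
    """Cell-averaging CFAR over a 1-D power sequence.

    For each cell under test (CUT), the noise level is estimated as the
    mean of the training cells around it (excluding guard cells), and the
    threshold is that estimate scaled to meet the desired false-alarm rate.
    """
    n = len(x)
    half = num_train // 2
    # Scale factor for the target Pfa under the exponential-power assumption:
    # Pfa = (1 + alpha/N)^(-N)  =>  alpha = N * (Pfa^(-1/N) - 1)
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for cut in range(half + num_guard, n - half - num_guard):
        left = x[cut - num_guard - half : cut - num_guard]
        right = x[cut + num_guard + 1 : cut + num_guard + 1 + half]
        noise = np.mean(np.concatenate([left, right]))
        detections[cut] = x[cut] > alpha * noise
    return detections
```

The OS-CFAR variant mentioned above differs only in replacing the mean of the training cells with an order statistic (e.g. the k-th largest sample), which makes it more robust to interfering targets in the training window.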
A location scale based CFAR detection framework for FOPEN SAR images
The problem of target detection in a complex clutter environment, under a Constant False Alarm Rate (CFAR) constraint, is addressed in this paper. In particular, an algorithm for CFAR target detection is applied in the context of FOliage PENetrating (FOPEN) Synthetic Aperture Radar (SAR) imaging. The extreme value distribution family is used to model the data and, by exploiting the location-scale property of this family of distributions, a multi-model CFAR algorithm is derived. Performance analysis on real data confirms the capability of the developed framework to control the false-alarm probability.
Multi-Target Detection Capability of Linear Fusion Approach Under Different Swerling Models of Target Fluctuation
In evolving radar systems, detection is a fundamental stage of the receiving end. Consequently, enhancing the detection performance of a CFAR variant is a basic requirement of these systems, since the CFAR strategy plays a key role in the automatic detection process. Most existing CFAR variants need to estimate the background level before constructing the detection threshold. In a multi-target situation, the presence of spurious targets can cause inaccurate estimation of the background level, which severely degrades the performance of the CFAR algorithm. A great deal of research on CFAR design has been carried out. However, the gap in previous work is that no single CFAR technique can operate across all, or even most, environmental varieties. To overcome this challenge, the linear fusion (LF) architecture, which can operate in most environmental and target situations, is presented.
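The abstract does not detail the fusion rule, but the idea of linearly combining classical background-level estimators can be illustrated. The sketch below fuses the cell-averaging (CA), greatest-of (GO), and smallest-of (SO) estimates with hypothetical weights; the weight values are placeholders, not numbers from the paper.

```python
import numpy as np

def fused_noise_estimate(left, right, weights=(0.4, 0.3, 0.3)):
    """Hypothetical linear fusion of three classical CFAR level estimates.

    left/right: training-cell powers on each side of the cell under test.
    CA averages all cells; GO (greatest-of) is robust at clutter edges;
    SO (smallest-of) is robust to interfering targets in one half-window.
    """
    mean_l, mean_r = np.mean(left), np.mean(right)
    ca = 0.5 * (mean_l + mean_r)   # cell averaging
    go = max(mean_l, mean_r)       # greatest-of half-window means
    so = min(mean_l, mean_r)       # smallest-of half-window means
    w_ca, w_go, w_so = weights
    return w_ca * ca + w_go * go + w_so * so
```

A weighted combination like this inherits some robustness from each constituent estimator, which is one plausible reading of why a linear-fusion architecture can cover more environments than any single variant.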
Embedded System Optimization of Radar Post-processing in an ARM CPU Core
Algorithms executed on the radar processor system contribute a significant performance bottleneck to the overall radar system. One key performance concern is the latency of target detection when dealing with hard-deadline systems. Research has shown software optimization to be a major contributor to radar system performance improvements. This thesis aims at software optimization using manual and automatic approaches, and at analyzing the results to make informed future decisions while working with an ARM processor system. To ascertain an optimized implementation, one question put forward was whether the algorithms on the ARM processor could work with a 6-antenna implementation without a decline in performance. An answer would also help project how many additional algorithms can be added without performance decline.
The manual optimization was based on a quantitative analysis of software execution time. The manual approach used a vectorization strategy built on the NEON vector registers of the ARM CPU to reimplement the initial Constant False Alarm Rate (CFAR) detection algorithm. An additional optimization was the elimination of redundant loops while iterating through the range gates and Doppler filters. To determine the best compiler for automatic code optimization of the radar algorithms on the ARM processor, the GCC and Clang compilers were used to compile both the initial algorithms and the optimized implementation of the radar post-processing stage.
Analysis of the optimization results showed that it is possible to run the radar post-processing algorithms on the ARM processor with the 6-antenna implementation without stressing the system load. In addition, the results show an excellent headroom margin for the defined scenario. The analysis further revealed that the effect of dynamic memory allocation cannot be underrated in situations where performance is a significant concern. The results also demonstrated that the GCC and Clang compilers each have their strengths and weaknesses. One limiting factor to note for the NEON-register optimization is the effect of the sample size on the implementation: although it fits the test samples used in the defined scenario, results may vary for other window-cell sizes and might not necessarily improve the time constraints.
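The vectorization idea behind the NEON reimplementation can be shown at a high level: the per-cell training-window loop of CA-CFAR noise estimation is replaced by batched operations over all cells at once. The NumPy sketch below uses a prefix-sum trick to compute every training mean in one pass; it is an analogy to the thesis's SIMD strategy, not the actual ARM/NEON code.

```python
import numpy as np

def ca_noise_loop(x, half=8, guard=2):
    """Scalar reference: per-cell training mean, looping over range cells."""
    n = len(x)
    out = np.zeros(n)
    for cut in range(half + guard, n - half - guard):
        left = x[cut - guard - half : cut - guard]
        right = x[cut + guard + 1 : cut + guard + 1 + half]
        out[cut] = (left.sum() + right.sum()) / (2 * half)
    return out

def ca_noise_vec(x, half=8, guard=2):
    """Vectorized equivalent: prefix sums replace the inner loop, the way
    SIMD lanes replace per-element work in the NEON reimplementation."""
    n = len(x)
    c = np.concatenate(([0.0], np.cumsum(x)))  # c[i] = sum of x[:i]
    out = np.zeros(n)
    cuts = np.arange(half + guard, n - half - guard)
    left = c[cuts - guard] - c[cuts - guard - half]
    right = c[cuts + guard + 1 + half] - c[cuts + guard + 1]
    out[cuts] = (left + right) / (2 * half)
    return out
```

The vectorized form also avoids per-iteration allocations (the slices and `concatenate` of the scalar version), which echoes the thesis's observation about the cost of dynamic memory allocation in performance-critical paths.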
Improve Performance of False Alarm Detection by using CFAR and Low Pass Filter
A Cyber-Physical System (CPS) is an integration of physical systems with computation, communication, and control. CPS has various applications, such as power networks, transportation networks, healthcare, infrastructure, and industrial processes. CPS connects the virtual world with the physical world. Wireless Sensor Networks (WSNs) are a vital part of CPS because of their strong sensing capabilities. In CPS healthcare applications, various sensors are used to collect data from patients. These sensors often generate a large number of false alarms, which create confusion and reduce the efficiency of overall healthcare services. There are still many challenges in healthcare, such as interoperability, security and privacy, autonomy, and device verifiability. In this paper, we improve the performance of false alarm detection by using CFAR (constant false alarm rate) processing and a low-pass filter. A low-pass filter is used because the actual values lie in the lower frequency region, while noise occupies higher frequencies and is therefore removed by the filter.
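The two-stage idea (suppress high-frequency noise, then apply an adaptive threshold) can be sketched minimally. This is an illustrative pipeline, not the paper's implementation: the moving-average window length and the threshold multiplier k are assumed values, and a simple mean-plus-k-sigma rule stands in for the CFAR stage.

```python
import numpy as np

def lowpass_then_threshold(samples, window=5, k=3.0):
    """Illustrative pipeline: moving-average low-pass filter followed by a
    simple adaptive (CFAR-style) threshold on the smoothed signal."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(samples, kernel, mode="same")  # attenuate HF noise
    level = np.mean(smoothed)                             # background level
    spread = np.std(smoothed)                             # background spread
    threshold = level + k * spread                        # adaptive threshold
    return smoothed > threshold, smoothed
```

Filtering first reduces isolated high-frequency spikes before thresholding, which is the mechanism by which the combination lowers the false-alarm count relative to thresholding the raw sensor stream.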
Towards a Common Software/Hardware Methodology for Future Advanced Driver Assistance Systems
The European research project DESERVE (DEvelopment platform for Safe and Efficient dRiVE, 2012-2015) had the aim of designing and developing a platform tool to cope with the continuously increasing complexity, and the simultaneous need to reduce cost, of future embedded Advanced Driver Assistance Systems (ADAS). For this purpose, the DESERVE platform profits from cross-domain software reuse, standardization of automotive software component interfaces, and easy but safety-compliant integration of heterogeneous modules. This enables the development of a new generation of ADAS applications, which combine different functions, sensors, actuators, hardware platforms, and Human Machine Interfaces (HMI) in challenging ways. This book presents the different results of the DESERVE project concerning the ADAS development platform, test case functions, and the validation and evaluation of different approaches. The reader is invited to substantiate the content of this book with the deliverables published during the DESERVE project. Technical topics discussed in this book include: modern ADAS development platforms; design space exploration; driving modelling; video-based and radar-based ADAS functions; HMI for ADAS; vehicle-hardware-in-the-loop validation systems.
Multi-model CFAR detection in FOliage PENetrating SAR images
A multi-model approach for Constant False Alarm Rate (CFAR) detection of vehicles through foliage in FOliage PENetrating (FOPEN) SAR images is presented. Extreme value distributions and their location-scale properties are exploited to derive an adaptive CFAR approach that is able to cope with different forest densities. Performance analysis on real data is carried out to estimate the detection and false-alarm probabilities in the presence of ground truth.
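The location-scale property that both FOPEN abstracts exploit has a compact threshold form: if the clutter follows a location-scale family F((x - mu)/sigma), the threshold achieving false-alarm probability Pfa is t = mu + q * sigma, where q is the (1 - Pfa) quantile of the standard member of the family. The sketch below assumes a Gumbel (extreme value) standard distribution for illustration and estimates mu and sigma by the method of moments; the paper's actual estimators may differ.

```python
import numpy as np

def location_scale_cfar_threshold(training, pfa=1e-4):
    """CFAR threshold via the location-scale property, assuming Gumbel clutter.

    Method-of-moments Gumbel estimates:
      mean = mu + gamma * sigma,   var = (pi^2 / 6) * sigma^2
    """
    gamma = 0.5772156649015329                    # Euler-Mascheroni constant
    sigma = np.std(training) * np.sqrt(6.0) / np.pi
    mu = np.mean(training) - gamma * sigma
    q = -np.log(-np.log(1.0 - pfa))               # standard Gumbel quantile
    return mu + q * sigma
```

Because q depends only on the family and the desired Pfa, the same rule adapts across clutter conditions (e.g. forest densities) simply by re-estimating mu and sigma locally, which is what makes the multi-model scheme practical.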
Space-time reduced rank methods and CFAR signal detection algorithms with applications to HPRF radar
In radar applications, the statistical properties (covariance matrix) of the interference are typically unknown a priori and are estimated from a dataset with limited sample support. Often, the limited sample support leads to numerically ill-conditioned radar detectors. Under such circumstances, classical interference cancellation methods such as sample matrix inversion (SMI) do not perform satisfactorily. In these cases, innovative reduced-rank space-time adaptive processing (STAP) techniques outperform full-rank techniques. The high pulse repetition frequency (HPRF) radar problem is analyzed, and it is shown to be in the class of adaptive radar with limited sample support. Reduced-rank methods are studied for the HPRF radar problem. In particular, the method known as diagonally loaded covariance matrix SMI (L-SMI) is closely investigated. Diagonal loading improves the numerical conditioning of the estimated covariance matrix and hence is well suited to a limited sample support environment. The performance of L-SMI is obtained through a theoretical distribution of the output conditioned signal-to-noise ratio of the space-time array. Reduced-rank techniques are extended to constant false alarm rate (CFAR) detectors based on the generalized likelihood ratio test (GLRT). Two new modified CFAR GLRT detectors are considered and analyzed. The first is a subspace-based GLRT detector where subspace-based transformations are applied to the data prior to detection. A subspace transformation adds statistical stability, which tends to improve performance at the expense of an additional SNR loss. The second detector is a modified GLRT detector that incorporates a diagonally loaded covariance matrix. Both detectors show improved performance over the traditional GLRT.
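The diagonal-loading step at the heart of L-SMI can be sketched as follows. This is a generic illustration under stated assumptions, not the detector analyzed above: the loading level (set here to a multiple of the average diagonal of the sample covariance) is a heuristic placeholder, and an MVDR-style normalization is assumed.

```python
import numpy as np

def lsmi_weights(snapshots, steering, loading=None):
    """Diagonally loaded SMI (L-SMI) adaptive weight sketch.

    snapshots: (N, K) complex array of K training snapshots from an
    N-element space-time array; steering: (N,) target steering vector.
    With K < N the sample covariance is singular; the loading term keeps
    it invertible and well conditioned under limited sample support.
    """
    n, k = snapshots.shape
    r_hat = snapshots @ snapshots.conj().T / k     # sample covariance
    if loading is None:
        loading = 10.0 * np.trace(r_hat).real / n  # heuristic load level
    r_loaded = r_hat + loading * np.eye(n)         # diagonal loading
    w = np.linalg.solve(r_loaded, steering)        # w proportional to R^-1 s
    return w / (steering.conj() @ w)               # unit response on target
```

Note that with fewer snapshots than array elements the unloaded sample covariance cannot be inverted at all, which is exactly the limited-sample-support regime the abstract describes; the loading term is what restores numerical conditioning.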