Unmanned ground operations using semantic image segmentation through a Bayesian network
This paper discusses the machine vision element of a system designed to allow automated taxiing for Unmanned Aerial Systems (UAS) around civil aerodromes. The purpose of the computer vision system is to provide direct sensor data which can be used to validate vehicle position, in addition to detecting potential collision risks. This is achieved through the use of a single monocular sensor. Unsupervised clustering is used to segment the visual feed, before descriptors of each cluster (primarily colour and texture) are used to estimate its class. As the competency of each individual estimate can vary with multiple factors (number of pixels, lighting conditions and even surface type), a Bayesian network is used to perform probabilistic data fusion in order to improve the classification results. The system is shown to perform accurate image segmentation in real-world conditions, providing information viable for map matching.
Machine vision for UAS ground operations: using semantic segmentation with a Bayesian network classifier
This paper discusses the machine vision element of a system designed to allow an Unmanned Aerial System (UAS) to perform automated taxiing around civil aerodromes using only a monocular camera. The purpose of the computer vision system is to provide direct sensor data which can be used to validate vehicle position, in addition to detecting potential collision risks. In practice, unsupervised clustering is used to segment the visual feed, before descriptors of each cluster (primarily colour and texture) are used to estimate its class. As the competency of each individual estimate can vary depending on multiple factors (number of pixels, lighting conditions and even surface type), a Bayesian network is used to perform probabilistic data fusion in order to improve the classification results. The system is shown to perform accurate image segmentation in real-world conditions, providing information viable for localisation and obstacle detection.
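The probabilistic fusion step described above can be pictured as a naive-Bayes-style combination of per-cue class likelihoods. This is only a minimal sketch: the paper's Bayesian network also conditions on factors such as pixel count and lighting, and the class names and numbers below are illustrative assumptions, not values from the paper.

```python
def fuse_cue_likelihoods(likelihoods, prior):
    """Naive-Bayes-style fusion: multiply per-cue class likelihoods
    P(cue | class) into the prior P(class), then normalise."""
    posterior = list(prior)
    for lik in likelihoods:
        posterior = [p * l for p, l in zip(posterior, lik)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Hypothetical cluster over classes [tarmac, grass, obstacle]:
# colour strongly suggests grass, texture is more ambiguous.
colour_lik = [0.10, 0.80, 0.10]
texture_lik = [0.40, 0.45, 0.15]
posterior = fuse_cue_likelihoods([colour_lik, texture_lik], [1 / 3] * 3)
# The fused posterior favours "grass" (index 1), since both cues lean
# that way even though neither is decisive alone.
```

The normalised product is what a Bayesian network reduces to when the cues are modelled as conditionally independent given the class; the full network relaxes that assumption.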
Developing 3D Virtual Safety Risk Terrain for UAS Operations in Complex Urban Environments
Unmanned Aerial Systems (UAS), an integral part of the Advanced Air Mobility (AAM) vision, are capable of performing a wide spectrum of tasks in urban environments. The societal integration of UAS is a pivotal challenge, as these systems must operate harmoniously within the constraints imposed by regulations and societal concerns. In complex urban environments, UAS safety has been a perennial obstacle to their large-scale deployment. To mitigate UAS safety risk and facilitate risk-aware UAS operations planning, we propose a novel concept called the 3D virtual risk terrain. This concept converts public risk constraints in an urban environment into 3D exclusion zones that UAS operations should avoid in order to adequately reduce risk to Entities of Value (EoV). To implement the 3D virtual risk terrain, we develop a conditional probability framework that comprehensively integrates most existing basic models for UAS ground risk. To demonstrate the concept, we build risk terrains on a Chicago downtown model and observe their characteristics under different conditions. We believe that the 3D virtual risk terrain has the potential to become a new routine tool for risk-aware UAS operations planning, urban airspace management, and policy development. The same idea can also be extended to other forms of societal impacts, such as noise, privacy, and perceived risk.
Comment: 33 pages, 19 figures
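The core of such a framework can be read as a conditional-probability chain evaluated per voxel of a discretised airspace. The sketch below is illustrative only: the factor names, grid values and target level are assumptions, not the paper's models.

```python
def voxel_risk(p_failure, p_impact_given_failure, p_casualty_given_impact):
    """Conditional-probability chain for one airspace voxel:
    risk = P(failure) * P(impact on an EoV | failure)
                      * P(casualty | impact)."""
    return p_failure * p_impact_given_failure * p_casualty_given_impact

def exclusion_zone(grid, target_level):
    """Return the voxels whose ground risk exceeds the target level.
    grid maps (x, y, z) -> (p_failure, p_impact, p_casualty)."""
    return {v for v, ps in grid.items() if voxel_risk(*ps) > target_level}

# Illustrative two-voxel grid: at low altitude a failure is likely to
# strike an EoV; higher up, impact on an EoV is far less likely.
grid = {
    (0, 0, 50): (1e-4, 0.50, 0.9),
    (0, 0, 120): (1e-4, 0.05, 0.9),
}
zones = exclusion_zone(grid, target_level=1e-5)
# Only the low-altitude voxel (0, 0, 50) ends up inside the 3D
# exclusion zone; a route planner would then avoid it.
```

The union of all voxels exceeding the target level is the "virtual risk terrain" a planner treats as terrain to fly over or around.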
Multi-sensor data fusion techniques for RPAS detect, track and avoid
Accurate and robust tracking of objects is of growing interest amongst the computer vision scientific community. The ability of a multi-sensor system to detect and track objects, and accurately predict their future trajectory, is critical in the context of mission- and safety-critical applications. Remotely Piloted Aircraft Systems (RPAS) are currently not equipped to routinely access all classes of airspace since certified Detect-and-Avoid (DAA) systems are yet to be developed. Such capabilities can be achieved by incorporating both cooperative and non-cooperative DAA functions, as well as providing enhanced communications, navigation and surveillance (CNS) services. DAA is highly dependent on the performance of CNS systems for Detection, Tracking and Avoidance (DTA) tasks and maneuvers. In order to perform an effective detection of objects, a number of high-performance, reliable and accurate avionics sensors and systems are adopted, including non-cooperative sensors (visual and thermal cameras, Light Detection and Ranging (LIDAR) and acoustic sensors) and cooperative systems (Automatic Dependent Surveillance-Broadcast (ADS-B) and Traffic Collision Avoidance System (TCAS)). In this paper the sensor and system information candidates are fully exploited in a Multi-Sensor Data Fusion (MSDF) architecture. An Unscented Kalman Filter (UKF) and a more advanced Particle Filter (PF) are adopted to estimate the state vector of the objects for maneuvering and non-maneuvering DTA tasks. Furthermore, an artificial neural network is conceptualised and adopted to exploit the use of statistical learning methods, acting to combine the information obtained from the UKF and PF. After describing the MSDF architecture, the key mathematical models for data fusion are presented. Conceptual studies are carried out on visual and thermal image fusion architectures.
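The paper combines UKF and PF outputs through a learned (neural) combiner. As a much simpler stand-in that conveys the basic fusion idea, the sketch below fuses two independent scalar Gaussian state estimates by inverse-variance weighting, so the more certain estimator receives the larger weight. The numbers are illustrative, not from the paper.

```python
def fuse_estimates(x1, var1, x2, var2):
    """Information-form fusion of two independent scalar Gaussian
    estimates: each estimate is weighted by its inverse variance,
    and the fused variance is never larger than either input's."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused_var = 1.0 / (w1 + w2)
    fused_x = fused_var * (w1 * x1 + w2 * x2)
    return fused_x, fused_var

# e.g. a UKF track at range 10.0 (variance 4.0) and a PF track at
# range 12.0 (variance 1.0): the fused range lands nearer the more
# confident PF estimate.
x, var = fuse_estimates(10.0, 4.0, 12.0, 1.0)  # -> (11.6, 0.8)
```

A learned combiner generalises this by letting the weights depend on context (maneuvering vs. non-maneuvering, sensor conditions) rather than on reported variances alone.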
Multi-target detection and recognition by UAVs using online POMDPs
This paper tackles high-level decision-making techniques for robotic missions, which involve both active sensing and symbolic goal reaching, under uncertain probabilistic environments and strong time constraints. Our case study is a POMDP model of an online multi-target detection and recognition mission by an autonomous UAV. The POMDP model of the multi-target detection and recognition problem is generated online from a list of areas of interest, which are automatically extracted at the beginning of the flight from a coarse-grained high-altitude observation of the scene. The POMDP observation model relies on a statistical abstraction of the output of an image processing algorithm used to detect targets. As the POMDP problem cannot be known, and thus optimized, before the beginning of the flight, our main contribution is an "optimize-while-execute" algorithmic framework: it drives a POMDP sub-planner to optimize and execute the POMDP policy in parallel under action duration constraints. We present new results from real outdoor flights and SAIL simulations, which highlight both the benefits of using POMDPs in multi-target detection and recognition missions, and of our "optimize-while-execute" paradigm.
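The "optimize-while-execute" idea — keep improving the policy in a background planner while the vehicle executes the best action found so far — can be sketched as below. Class, action names and timings are illustrative assumptions; a real implementation would expand a belief tree under action-duration deadlines rather than iterate a fixed list.

```python
import threading
import time

class AnytimePlanner:
    """Minimal optimize-while-execute loop: a worker thread publishes
    progressively better actions; the execution loop never blocks on
    planning, falling back to a safe default if needed."""

    def __init__(self):
        self.best_action = "observe"        # safe default action
        self._lock = threading.Lock()
        self._stop = threading.Event()

    def _optimize(self, candidate_actions):
        # Stand-in for POMDP policy optimisation: periodically publish
        # an improved action while execution proceeds in parallel.
        for action in candidate_actions:
            if self._stop.is_set():
                return
            with self._lock:
                self.best_action = action
            time.sleep(0.01)

    def run(self, candidate_actions, n_steps):
        worker = threading.Thread(target=self._optimize,
                                  args=(candidate_actions,))
        worker.start()
        executed = []
        for _ in range(n_steps):
            time.sleep(0.02)                # action duration constraint
            with self._lock:
                executed.append(self.best_action)
        self._stop.set()
        worker.join()
        return executed

executed = AnytimePlanner().run(["goto_area_1", "classify_target"], 3)
```

The key property is that execution proceeds at its own rate: early steps may use the default or an early policy, later steps benefit from whatever the optimiser has produced by then.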
An investigation into hazard-centric analysis of complex autonomous systems
This thesis proposes the hypothesis that a conventional, and essentially manual, HAZOP process can be improved with information obtained from model-based dynamic simulation, using a Monte Carlo approach to update a Bayesian belief model representing the expected relations between causes and effects, and thereby produce an enhanced HAZOP. The work considers how the expertise of a hazard and operability study team might be augmented with access to behavioural models, simulations and belief inference models. This incorporates models of dynamically complex system behaviour, considering where these might contribute to the expertise of a hazard and operability study team, and how they might bolster trust in the portrayal of system behaviour. Using a questionnaire containing behavioural outputs from a representative system model, responses were collected from a group with relevant domain expertise. From this it is argued that the quality of analysis depends upon the experience and expertise of the participants, but might be artificially augmented using probabilistic data derived from a system dynamics model. Consequently, Monte Carlo simulations of an improved exemplar system dynamics model are used to condition a behavioural inference model, and also to generate measures of emergence associated with the deviation parameter used in the study. A Bayesian approach towards probability is adopted where particular events and combinations of circumstances are effectively unique or hypothetical, and perhaps irreproducible in practice. It is therefore shown that a Bayesian model, representing beliefs expressed in a hazard and operability study, conditioned by the likely occurrence of flaw events causing specific deviant behaviour from evidence observed in the system's dynamic behaviour, may combine intuitive estimates based upon experience and expertise with quantitative statistical information representing plausible evidence of safety constraint violation. A further behavioural measure identifies potential emergent behaviour by way of a Lyapunov exponent. Together these improvements enhance the awareness of potential hazard cases.
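One way to picture "conditioning a belief model with Monte Carlo output" is a conjugate Beta-Bernoulli update: simulation runs count how often a deviation violates a safety constraint, and those counts condition the study team's prior belief about the violation probability. The model and numbers below are an illustrative sketch, not the thesis's actual system dynamics model.

```python
import random

def run_monte_carlo(n_runs, true_violation_rate, seed=0):
    """Stand-in for n_runs simulations of a deviated system: each run
    reports whether the safety constraint was violated."""
    rng = random.Random(seed)
    return sum(rng.random() < true_violation_rate for _ in range(n_runs))

def condition_belief(alpha, beta, n_runs, violations):
    """Conjugate Beta-Bernoulli update of the prior belief
    Beta(alpha, beta) on the violation probability."""
    return alpha + violations, beta + (n_runs - violations)

def belief_mean(alpha, beta):
    return alpha / (alpha + beta)

# Expert prior Beta(1, 9): the team believes violation is unlikely
# (prior mean 0.1). Simulated evidence at rate 0.3 shifts the belief.
violations = run_monte_carlo(200, true_violation_rate=0.3)
alpha, beta = condition_belief(1, 9, 200, violations)
# belief_mean(alpha, beta) now sits close to the simulated rate.
```

This is the mechanism in miniature: the team's intuitive estimate survives as the prior, while simulation evidence pulls the posterior toward the observed violation frequency.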
Automated taxiing for unmanned aircraft systems
Over the last few years, the concept of civil Unmanned Aircraft Systems (UAS) has been realised, with small UAS commonly used in industries such as law enforcement, agriculture and mapping. With increased development in other areas, such as logistics and advertisement, the size and range of civil UAS are likely to grow. Taken to the logical conclusion, it is likely that large-scale UAS will be operating in civil airspace within the next decade.
Although the airborne operations of civil UAS have already gathered much research attention, work is also required to determine how UAS will function when on the ground. Motivated by the assumption that large UAS will share ground facilities with manned aircraft, this thesis describes the preliminary development of an Automated Taxiing System (ATS) for UAS operating at civil aerodromes.
To allow the ATS to function on the majority of UAS without the need for additional hardware, a visual sensing approach has been chosen, with the majority of the work focusing on monocular image processing techniques. The purpose of the computer vision system is to provide direct sensor data which can be used to validate the vehicle's position, in addition to detecting potential collision risks. As aerospace regulations require the most robust and reliable algorithms for control, any methods which are not fully definable or explainable are not suitable for real-world use. Therefore, non-deterministic methods and algorithms with hidden components (such as Artificial Neural Networks (ANNs)) have not been used. Instead, visual sensing is achieved through semantic segmentation, with separate segmentation and classification stages. Segmentation is performed using superpixels and reachability clustering to divide the image into single-content clusters. Each cluster is then classified using multiple types of image data, probabilistically fused within a Bayesian network.
The dataset for testing has been provided by BAE Systems, allowing the system to be trained and tested on real-world aerodrome data. The system has demonstrated good performance on this limited dataset, accurately detecting both collision risks and terrain features for use in navigation.