
    Modelling and tracking objects with a topology preserving self-organising neural network

    Human gestures form an integral part of our everyday communication. We use gestures not only to reinforce meaning, but also to describe the shape of objects, to play games, and to communicate in noisy environments. Vision systems that exploit gestures are often limited by inaccuracies inherent in handcrafted models. These models are generated from a collection of training examples which requires segmentation and alignment. Segmentation in gesture recognition typically involves manual intervention, a time-consuming process that is feasible only for a limited set of gestures. Ideally, gesture models should be acquired automatically via a learning scheme that enables detailed behavioural knowledge to be obtained from topological and temporal observation alone. The research described in this thesis is motivated by a desire to provide a framework for the unsupervised acquisition and tracking of gesture models. In any learning framework, the initialisation of the shapes is crucial, so it is beneficial to have a robust, noise-tolerant model that can automatically establish correspondences across the set of shapes. In the first part of this thesis, we develop a framework for building statistical 2D shape models by extracting, labelling and corresponding landmark points using only topological relations derived from competitive Hebbian learning. The method is based on the assumption that correspondence can be addressed as an unsupervised classification problem in which landmark points are the cluster centres (nodes) in a high-dimensional vector space. The approach is novel in that the network can be used in cases where the topological structure of the input pattern is not known a priori; no topology of fixed dimensionality is imposed on the network. In the second part, we propose an approach that minimises user intervention in the adaptation process, which would otherwise require specifying a priori the number of nodes needed to represent an object, by utilising an automatic criterion for maximum node growth. Furthermore, this model is used to represent motion in image sequences by initialising a suitable segmentation that separates the object of interest from the background. The segmentation system tolerates moderate illumination changes, takes its input from ordinary cameras and webcams, handles low to moderately cluttered backgrounds (extremely cluttered backgrounds are avoided), and assumes that objects are at close range from the camera. In the final part, we extend the framework to the automatic modelling and unsupervised tracking of 2D hand gestures in a sequence of k frames. The aim is to use the tracked frames as training examples in order to build the model and maintain correspondences. To do this we add an active step to the Growing Neural Gas (GNG) network, which we call Active Growing Neural Gas (A-GNG), that takes into consideration not only the geometrical position of the nodes, but also the underlying local feature structure of the image and the distance vector between successive images. The quality of our model is measured through the calculation of the topographic product, a topology-preservation measure that quantifies how well neighbourhood relations are maintained. In our system we apply specific restrictions on the velocity and appearance of the gestures to simplify the motion analysis involved in the gesture representation. The proposed framework has been validated on applications related to sign language. The work has great potential in Virtual Reality (VR) applications, where the learning and representation of gestures becomes natural without the need for expensive wearable cable sensors.
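    As background, the sketch below shows the standard Growing Neural Gas adaptation step with competitive Hebbian edge creation, the baseline that A-GNG extends. The learning rates, edge-age threshold and two-node initialisation are illustrative choices, and the A-GNG feature and inter-frame terms described above are not included.

```python
# A minimal sketch of a standard GNG adaptation step (not the A-GNG variant).
import numpy as np

class GNGSketch:
    def __init__(self, dim, eps_winner=0.05, eps_neighbour=0.005, max_age=50):
        self.w = [np.random.rand(dim), np.random.rand(dim)]  # node positions
        self.error = [0.0, 0.0]                               # accumulated errors
        self.edges = {}                                       # (i, j), i < j -> age
        self.eps_winner, self.eps_neighbour, self.max_age = eps_winner, eps_neighbour, max_age

    def adapt(self, x):
        # 1. Find the two nodes closest to the input sample x.
        d = [np.linalg.norm(x - w) for w in self.w]
        s1, s2 = np.argsort(d)[:2]
        # 2. Accumulate squared error at the winner (used to decide where to grow).
        self.error[s1] += d[s1] ** 2
        # 3. Move the winner and its topological neighbours towards x, ageing
        #    every edge incident on the winner.
        self.w[s1] += self.eps_winner * (x - self.w[s1])
        for (i, j), age in list(self.edges.items()):
            if s1 in (i, j):
                n = j if i == s1 else i
                self.w[n] += self.eps_neighbour * (x - self.w[n])
                self.edges[(i, j)] = age + 1
        # 4. Competitive Hebbian learning: connect (or refresh) the edge between
        #    the two closest nodes.
        self.edges[(min(s1, s2), max(s1, s2))] = 0
        # 5. Remove edges that have grown too old.
        self.edges = {k: a for k, a in self.edges.items() if a <= self.max_age}
```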

    Linear Regression and Unsupervised Learning For Tracking and Embodied Robot Control.

    Computer vision problems, such as tracking and robot navigation, tend to be solved using models of the objects of interest to the problem. These models are often either hard-coded or learned in a supervised manner. In either case, an engineer is required to identify the visual information that is important to the task, which is both time consuming and problematic. Issues with these engineered systems relate to the ungrounded nature of the knowledge imparted by the engineer, where the systems have no meaning attached to the representations. This leads to systems that are brittle and prone to failure when expected to act in environments not envisaged by the engineer. The work presented in this thesis removes the need for hard-coded or engineered models of either visual information representations or behaviour. This is achieved by developing novel approaches for learning from example, in both input (percept) and output (action) spaces. This approach leads to the development of novel feature tracking algorithms and methods for robot control. Applying this approach to feature tracking, unsupervised learning is employed, in real time, to build appearance models of the target that represent the input space structure, and this structure is exploited to partition banks of computationally efficient, linear regression-based target displacement estimators. This thesis presents the first application of regression-based methods to the problem of simultaneously modelling and tracking a target object. The computationally efficient Linear Predictor (LP) tracker is investigated, along with methods for combining and weighting flocks of LPs. The tracking algorithms developed operate with accuracy comparable to other state-of-the-art online approaches and with a significant gain in computational efficiency. This is achieved as a result of two specific contributions. First, novel online approaches for the unsupervised learning of modes of target appearance that identify aspects of the target are introduced. Second, a general tracking framework is developed within which the identified aspects of the target are adaptively associated with subsets of a bank of LP trackers. This results in the partitioning of LPs and the online creation of aspect-specific LP flocks that facilitate tracking through significant appearance changes. Applying the approach to the percept-action domain, unsupervised learning is employed to discover the structure of the action space, and this structure is used in the formation of meaningful perceptual categories and to facilitate the use of localised input-output (percept-action) mappings. This approach provides a realisation of an embodied and embedded agent that organises its perceptual space, and hence its cognitive process, based on interactions with its environment. Central to the proposed approach is the technique of clustering an input-output exemplar set based on output similarity, and using the resultant input exemplar groupings to characterise a perceptual category. All input exemplars that are coupled to a certain class of outputs form a category: the category of a given affordance, action or function. In this sense the formed perceptual categories have meaning and are grounded in the embodiment of the agent. The approach is shown to identify the relative importance of perceptual features and is able to solve percept-action tasks, defined only by demonstration, in previously unseen situations. Within this percept-action learning framework, two alternative approaches are developed. The first employs hierarchical output-space clustering of point-to-point mappings to achieve search efficiency and input- and output-space generalisation, as well as a mechanism for identifying the important variance and invariance in the input space. The exemplar hierarchy provides, in a single structure, a mechanism for classifying previously unseen inputs and generating appropriate outputs. The second approach integrates the regression mappings used in the feature tracking domain with the action-space clustering and imitation learning techniques developed in the percept-action domain. These components are utilised within a novel percept-action data mining methodology that is able to discover the visual entities important to a specific problem and to map from these entities onto the action space. Applied to the robot control task, this approach allows real-time generation of continuous action signals, without any supervision or any definition of representations or rules of behaviour.
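    The core Linear Predictor idea, estimating target displacement as a linear function of intensity differences at a sparse set of support pixels, can be sketched as below. This is a hedged, simplified version (a single predictor trained by synthetic shifts of one reference image, with illustrative sample counts and shift ranges), not the full flock-based formulation developed in the thesis.

```python
# Training and applying a single Linear Predictor by least squares.
import numpy as np

def train_linear_predictor(image, support_pixels, n_samples=500, max_disp=10):
    """image: 2D grey-level array; support_pixels: (k, 2) int array of (row, col)."""
    ref = image[support_pixels[:, 0], support_pixels[:, 1]].astype(float)
    h, w = image.shape
    diffs, disps = [], []
    for _ in range(n_samples):
        dx = np.random.randint(-max_disp, max_disp + 1, size=2)      # synthetic shift
        shifted = np.clip(support_pixels + dx, [0, 0], [h - 1, w - 1])
        diffs.append(image[shifted[:, 0], shifted[:, 1]].astype(float) - ref)
        disps.append(dx)
    D, X = np.array(diffs), np.array(disps, dtype=float)
    # Least-squares fit of P such that displacement ≈ P @ intensity_difference.
    P, *_ = np.linalg.lstsq(D, X, rcond=None)
    return P.T, ref                                                  # P is (2, k)

def predict_displacement(P, ref, image, support_pixels):
    cur = image[support_pixels[:, 0], support_pixels[:, 1]].astype(float)
    return P @ (cur - ref)                                           # (d_row, d_col)
```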

    Exploring space situational awareness using neuromorphic event-based cameras

    The orbits around Earth are a limited natural resource, and one that hosts a vast range of vital space-based systems used by commercial industry, civil organisations and national defence. The availability of this space resource is rapidly diminishing due to the ever-growing presence of space debris and rampant overcrowding, especially in the limited and highly desirable slots in geosynchronous orbit. The field of Space Situational Awareness encompasses tasks aimed at mitigating these hazards to on-orbit systems through the monitoring of satellite traffic. Essential to this task is the collection of accurate and timely observation data. This thesis explores the use of a novel sensor paradigm to optically collect and process sensor data to enhance and improve space situational awareness tasks. Solving this issue is critical to ensuring that we can continue to utilise the space environment in a sustainable way. However, these tasks pose significant engineering challenges that involve the detection and characterisation of faint, highly distant, and high-speed targets. Recent advances in neuromorphic engineering have led to the availability of high-quality neuromorphic event-based cameras that provide a promising alternative to the conventional cameras used in space imaging. These cameras offer the potential to improve the capabilities of existing space tracking systems and have been shown to detect and track satellites, or 'Resident Space Objects', at low data rates, high temporal resolutions, and in conditions typically unsuitable for conventional optical cameras. This thesis presents a thorough exploration of neuromorphic event-based cameras for space situational awareness tasks and establishes a rigorous foundation for event-based space imaging. The work conducted in this project demonstrates how to build event-based space imaging systems that serve the goals of space situational awareness by providing accurate and timely information on the space domain. By developing and implementing event-based processing techniques, the asynchronous operation, high temporal resolution and dynamic range of these novel sensors are leveraged to provide low-latency target acquisition and rapid reaction to challenging satellite tracking scenarios. The algorithms and experiments developed in this thesis study the properties and trade-offs of event-based space imaging and provide comparisons with traditional observing methods and conventional frame-based sensors. The outcomes of this thesis demonstrate the viability of event-based cameras for tracking and space imaging tasks, and thereby contribute to the growing efforts of the international space situational awareness community and to the development of event-based technology in astronomy and space science applications.
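    As a generic illustration of the event-based processing paradigm discussed above (not the specific algorithms developed in the thesis), the sketch below accumulates an asynchronous event stream into an exponentially decaying time surface and thresholds persistent activity to flag candidate point targets; the decay constant and threshold are illustrative.

```python
# Event stream -> decaying time surface -> simple target detection.
import numpy as np

def update_time_surface(surface, last_t, events, tau=0.1):
    """events: iterable of (t, x, y, polarity) tuples, t in seconds."""
    for t, x, y, _ in events:
        surface *= np.exp(-(t - last_t) / tau)   # decay to the current event time
        surface[y, x] += 1.0                     # bump the pixel that fired
        last_t = t
    return surface, last_t

def detect_targets(surface, threshold=3.0):
    # Pixels with persistently high event activity are candidate targets.
    ys, xs = np.nonzero(surface > threshold)
    return list(zip(xs, ys))

# Usage with a synthetic stream of events at one pixel (a stationary target).
surface = np.zeros((128, 128))
events = [(0.01 * i, 64, 32, 1) for i in range(100)]
surface, _ = update_time_surface(surface, 0.0, events)
print(detect_targets(surface))                   # -> [(64, 32)]
```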

    Dense Vision in Image-guided Surgery

    Image-guided surgery needs an efficient and effective camera tracking system in order to perform augmented reality for overlaying preoperative models or labelling cancerous tissue on the 2D video images of the surgical scene. Tracking in endoscopic/laparoscopic scenes, however, is an extremely difficult task, primarily due to tissue deformation, instrument intrusion into the surgical scene and the presence of specular highlights. State-of-the-art feature-based SLAM systems such as PTAM fail in tracking such scenes because the number of good features to track is very limited, and smoke or instrument motion causes feature-based tracking to fail immediately. The work of this thesis provides a systematic approach to this problem using dense vision. We initially attempted to register a 3D preoperative model with multiple 2D endoscopic/laparoscopic images using a dense method, but this approach did not perform well. We subsequently proposed stereo reconstruction to directly obtain the 3D structure of the scene. By using the dense reconstructed model together with robust estimation, we demonstrate that dense stereo tracking can be remarkably robust even in extremely challenging endoscopic/laparoscopic scenes. Several validation experiments have been conducted in this thesis. The proposed stereo reconstruction algorithm achieves state-of-the-art results on several publicly available ground-truth datasets. Furthermore, the proposed robust dense stereo tracking algorithm has proved highly accurate in a synthetic environment (< 0.1 mm RMSE) and qualitatively very robust when applied to real scenes from robot-assisted laparoscopic prostatectomy (RALP) surgery. This is an important step toward achieving accurate image-guided laparoscopic surgery.
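    The robust-estimation component can be illustrated with a generic sketch (not the thesis's exact formulation): given dense 3D point correspondences from the stereo reconstruction, a rigid transform is re-estimated by iteratively reweighted least squares so that outlying points, such as specular highlights or instrument pixels, carry little weight. The Huber threshold and iteration count below are illustrative.

```python
# Robust rigid alignment of dense 3D correspondences via IRLS with Huber weights.
import numpy as np

def weighted_rigid_transform(src, dst, w):
    """Weighted Kabsch: R, t minimising sum_i w_i * ||R src_i + t - dst_i||^2."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, mu_d - R @ mu_s

def robust_rigid_transform(src, dst, huber_delta=0.01, n_iters=10):
    w = np.ones(len(src))
    for _ in range(n_iters):
        R, t = weighted_rigid_transform(src, dst, w)
        residuals = np.linalg.norm(src @ R.T + t - dst, axis=1)
        # Huber-style weights: inliers keep full weight, outliers are down-weighted.
        w = np.where(residuals <= huber_delta, 1.0,
                     huber_delta / np.maximum(residuals, 1e-12))
    return R, t
```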

    Audio-coupled video content understanding of unconstrained video sequences

    Unconstrained video understanding is a difficult task. The main aim of this thesis is to recognise the nature of objects, activities and environment in a given video clip using both audio and video information. Traditionally, audio and video information has not been applied together for solving such a complex task, and for the first time we propose, develop, implement and test a new framework of multi-modal (audio and video) data analysis for context understanding and labelling of unconstrained videos. The framework relies on feature selection techniques and introduces a novel algorithm (PCFS) that is faster than the well-established SFFS algorithm. We use the framework to study the benefits of combining audio and video information in a number of different problems. We begin by developing two independent content recognition modules. The first is based on image sequence analysis alone, and uses a range of colour, shape, texture and statistical features from image regions with a trained classifier to recognise the identity of objects, activities and the environment present. The second module uses audio information only, and recognises activities and environment. Both approaches are preceded by detailed pre-processing to ensure that correct video segments containing both audio and video content are present, and that the developed system is robust to changes in camera movement, illumination, random object behaviour and so on. For both audio and video analysis, we use a hierarchical, multi-stage classification approach so that difficult classification tasks can be decomposed into simpler and smaller ones. When combining the two modalities, we compare fusion techniques at different levels of integration and propose a novel algorithm that combines the advantages of both feature-level and decision-level fusion. The analysis is evaluated on a large amount of test data comprising unconstrained videos collected for this work. Finally, we propose a decision correction algorithm which shows that further steps towards effectively combining multi-modal classification information with semantic knowledge generate the best results.
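    As a point of reference for the feature-selection component, the sketch below shows plain sequential forward selection, a simpler relative of the SFFS baseline mentioned above; the thesis's PCFS algorithm is not reproduced here, and the classifier and scoring choices are illustrative.

```python
# Greedy forward feature selection scored by cross-validated accuracy.
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def forward_select(X, y, n_features, estimator=None):
    estimator = estimator or KNeighborsClassifier(n_neighbors=3)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_features and remaining:
        # Add the single feature that most improves cross-validated accuracy.
        scores = {f: cross_val_score(estimator, X[:, selected + [f]], y, cv=3).mean()
                  for f in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

# Example: X, y = sklearn.datasets.load_iris(return_X_y=True); forward_select(X, y, 2)
```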

    Development of situation recognition, environment monitoring and patient condition monitoring service modules for hospital robots

    An ageing society and economic pressure have caused an increase in the patient-to-staff ratio, leading to a reduction in healthcare quality. In order to combat the deficiencies in the delivery of patient healthcare, the European Commission approved, under the FP6 scheme, the financing of a research project for the development of an Intelligent Robot Swarm for Attendance, Recognition, Cleaning and Delivery (iWARD). Each iWARD robot contained a mobile, self-navigating platform and several modules attached to it to perform their specific tasks. As part of the iWARD project, the research described in this thesis develops hospital robot modules able to perform surveillance and patient monitoring in a hospital environment for four scenarios: intruder detection, patient behavioural analysis, patient physical condition monitoring, and environment monitoring. Since the intruder detection and patient behavioural analysis scenarios require the same equipment, they are combined into one physical module, the situation recognition module. The other two scenarios are served by separate modules: the environment monitoring module and the patient condition monitoring module. The situation recognition module uses non-intrusive machine vision-based concepts. The system includes an RGB video camera and a 3D laser sensor, which monitor the environment in order to detect an intruder or a patient lying on the floor, and employs various image-processing and sensor-fusion techniques. The environment monitoring module monitors several parameters of the hospital environment: temperature, humidity and smoke. The patient condition monitoring system remotely monitors body conditions such as body temperature, heart rate and respiratory rate, using sensors attached to the patient's body. The system algorithms and module software are implemented in C/C++ using the OpenCV image analysis and processing library, and have been successfully tested on the Linux (Ubuntu) platform. The outcome of this research makes a significant contribution to robotics applications in the hospital environment.
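    A minimal sketch of the kind of machine-vision building block such a module relies on, background subtraction on the RGB stream to flag unexpected motion, is given below using OpenCV's Python bindings; the camera index, thresholds and alarm rule are illustrative and are not the iWARD implementation.

```python
# Flag large moving regions in the camera stream as a possible intruder.
import cv2

cap = cv2.VideoCapture(0)                              # RGB camera on the robot
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                     # foreground (moving) pixels
    if cv2.countNonZero(mask) > 0.05 * mask.size:      # large moving region -> alert
        print("Motion detected: possible intruder")
    if cv2.waitKey(30) & 0xFF == 27:                   # Esc to quit
        break

cap.release()
```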

    A salad bowl, salt and not a drop to drink: recipe for disaster? Three essays on the economic value of agricultural land in a changing environment

    This dissertation contains three core chapters which share a common theme of natural resource management in Australia and a common analytical technique: Ricardian hedonic price theory applied to agricultural land values. Chapter 1: This chapter presents a Ricardian analysis of the impact of projected climate change on Australian broadacre agricultural land values. Using several years of farm-level sales data, we estimate the value of agricultural land as a function of climate attributes. We leverage satellite imagery-based land use data to separate our analysis into cropping and grazing land. Making this distinction is particularly important due to choice-based sampling (a consequence of land sale frequency) that would otherwise severely bias our land value estimates. We base our damage estimates on CSIRO climate projections for the 21st century, as used by the Intergovernmental Panel on Climate Change. We find that projected climate change would erode agricultural land values by around 10 per cent by 2050, and nearly 40 per cent by the end of the century, and would negatively impact at least 95 per cent of the existing agricultural resource base. This damage is unlikely to occur suddenly; rather, it would be equivalent to taxing agricultural productivity by about 0.6 per cent per year for the next 85 years. Chapter 2: Australia's north has vast but largely undeveloped land that would be arable if irrigated. We analyse the net economic benefits of allocating northern Australia's divertible surface water to irrigation, a scheme that would require significant investment in infrastructure for dam and canal construction. We estimate the benefits to northern Australia using a Ricardian hedonic approach to forecast the economic value of constructing major new irrigation schemes, a value that would be capitalised into agricultural land values. We use publicly available information from existing and potential Australian irrigation schemes to define the cost of constructing large water storages and distribution infrastructure, as well as on-farm irrigation infrastructure. We find that the costs of turning northern Australia into an irrigated food bowl are likely to exceed even the most optimistic benefits that would be capitalised into land prices, by a multiple of between 1.1 and 3.2. Chapter 3: This chapter explores the damage wrought on broadacre agricultural property values by dryland salinity in the south-west agricultural region of Western Australia, one of Australia's most productive wheat growing areas. We use a Ricardian hedonic approach based on 20 years of farm sales data to estimate salinity damages. We find that the damage caused by salinity in the south-west varies from approximately 20 per cent for land that is slightly affected to as much as 87 per cent for land that is extremely saline. Using these estimates, we project that the upside from eliminating existing salinity on 5.3 million hectares of currently affected land would be worth approximately $2.6 billion. Conversely, if left unchecked, we find that an additional 3.75 million hectares of land, worth approximately $5.85 billion, is likely to become saline at some point in the future.
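    The Ricardian hedonic regression common to the three chapters can be sketched as below: land value (in logs) is regressed on climate and land-use attributes, so that projected climate shifts can be mapped to changes in value. The data, column names and quadratic climate term are illustrative assumptions, not the dissertation's actual specification.

```python
# Toy Ricardian hedonic regression of log land value on climate attributes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

sales = pd.DataFrame({                                  # made-up farm-level sales
    "value_per_ha": [2500, 3100, 1800, 4200, 2900, 2200, 3600, 1500],
    "temp": [17.2, 16.8, 19.5, 15.9, 18.1, 19.0, 16.2, 20.3],   # mean annual temp (C)
    "rain": [430, 520, 310, 610, 450, 350, 560, 280],           # mean annual rain (mm)
    "is_cropping": [1, 1, 0, 1, 0, 0, 1, 0],                    # cropping vs grazing
})

# A quadratic temperature term allows an interior climate optimum.
model = smf.ols("np.log(value_per_ha) ~ temp + I(temp**2) + rain + is_cropping",
                data=sales).fit()

# Damage from a projected climate shift: change in predicted log land value.
warmer = sales.assign(temp=sales["temp"] + 2.0, rain=sales["rain"] * 0.9)
damage = model.predict(warmer) - model.predict(sales)
print(damage.mean())        # approximate average proportional change in value
```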

    Interactive Evolutionary Algorithms for Image Enhancement and Creation

    Image enhancement and creation, particularly for aesthetic purposes, are tasks for which the use of interactive evolutionary algorithms would seem to be well suited. Previous work has concentrated on the development of various aspects of interactive evolutionary algorithms and their application to various image enhancement and creation problems. Robust evaluation of algorithmic design options in interactive evolutionary algorithms, and the comparison of interactive evolutionary algorithms with alternative approaches to achieving the same goals, are generally less well addressed. The work presented in this thesis is primarily concerned with different interactive evolutionary algorithms, search spaces, and operators for setting the input values required by image processing and image creation tasks. A secondary concern is determining when the use of the interactive evolutionary algorithm approach to image enhancement problems is warranted and how it compares with alternative approaches. Various interactive evolutionary algorithms were implemented and compared in a number of specifically devised experiments using tasks of varying complexity. A novel aspect of this thesis, relative to other work on interactive evolutionary algorithms, is that statistical analysis was performed on the data gathered from the experiments. This analysis demonstrated, contrary to popular assumption, that the choice of algorithm parameters, operators, search spaces, and even the underlying evolutionary algorithm has little effect on the quality of the resulting images or the time it takes to develop them. It was found that the interaction methods chosen when implementing the user interface of the interactive evolutionary algorithms had a greater influence on the performance of the algorithms.
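    The basic interactive evolutionary loop studied in the thesis can be sketched as below: the fitness of each candidate parameter vector (for example, settings for an image enhancement filter) is a rating supplied by the user rather than a computed objective. The population size, mutation scale and console-based rating prompt are illustrative choices.

```python
# A console-driven interactive evolutionary algorithm over [0, 1] parameter vectors.
import random

def interactive_ea(n_params, pop_size=6, generations=5, mutation_scale=0.1):
    population = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for gen in range(generations):
        # The user scores each candidate; this rating is the fitness function.
        ratings = [float(input(f"gen {gen}, candidate {i} {c}: rating 0-10? "))
                   for i, c in enumerate(population)]
        # Keep the better half, refill by mutating randomly chosen survivors.
        ranked = [p for _, p in sorted(zip(ratings, population), reverse=True)]
        survivors = ranked[: pop_size // 2]
        population = survivors + [
            [min(1.0, max(0.0, g + random.gauss(0.0, mutation_scale)))
             for g in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return population[0]
```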