Feature Extraction and Classification of Flaws in Radio Graphical Weld Images Using ANN
In this paper, a novel approach for the detection and classification of flaws in weld images is presented. Computer-based weld image analysis is among the most significant methods for this task. The method has been applied to detecting and discriminating weld flaws that may correspond to false alarms or to any of nine types of weld defect (slag inclusion, wormhole, porosity, incomplete penetration, undercut, crack, lack of fusion, weaving fault, slag line). It was successfully tested on 80 radiographic images obtained from EURECTEST, International Scientific Association, Brussels, Belgium, and on 24 radiographs of ship welds provided by Technic Control Co. (Poland), obtained from Ioannis Valavanis, Greece. Flaw detection and feature extraction are implemented by a segmentation algorithm that keeps the computational complexity manageable. Our work focuses on high-performance classification through optimisation of the feature set with selection algorithms such as sequential forward search (SFS), sequential backward search (SBS) and sequential forward floating search (SFFS). Features are the measured parameters that lead towards an understanding of the image. We introduce 23 geometric features and 14 texture features. Experimental results show that the proposed method performs well on radiographic images.
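The sequential search strategies named in the abstract follow a simple greedy pattern. Below is a minimal, illustrative sketch of sequential forward search (SFS); the feature names and the additive scoring function are hypothetical stand-ins for classifier accuracy on a validation set, not the paper's actual features.

```python
# Hedged sketch of sequential forward search (SFS) for feature selection.
# The real criterion would be classifier accuracy; here an additive score
# stands in so the behaviour is easy to follow.

def sequential_forward_search(features, score, k):
    """Greedily grow a feature subset: at each step add the single
    feature that most improves the score, until k features are chosen."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage with hypothetical feature weights standing in for accuracy.
weights = {"area": 0.3, "perimeter": 0.1, "contrast": 0.4, "entropy": 0.2}
chosen = sequential_forward_search(list(weights),
                                   lambda s: sum(weights[f] for f in s), 2)
# With an additive score, SFS picks the two highest-weight features.
```

SBS runs the same loop in reverse (start full, drop the least useful feature), and SFFS interleaves forward and backward steps to escape the nesting effect of the purely greedy variants.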
Modelling and tracking objects with a topology preserving self-organising neural network
Human gestures form an integral part in our everyday communication. We use
gestures not only to reinforce meaning, but also to describe the shape of objects,
to play games, and to communicate in noisy environments. Vision systems that
exploit gestures are often limited by inaccuracies inherent in handcrafted models.
These models are generated from a collection of training examples which requires
segmentation and alignment. Segmentation in gesture recognition typically involves manual intervention, a time-consuming process that is feasible only for a
limited set of gestures. Ideally gesture models should be automatically acquired
via a learning scheme that enables the acquisition of detailed behavioural knowledge only from topological and temporal observation.
The research described in this thesis is motivated by a desire to provide a framework for the unsupervised acquisition and tracking of gesture models. In any
learning framework, the initialisation of the shapes is crucial. Hence, it would
be beneficial to have a model that is robust to noise and can automatically establish correspondences across the set of shapes. In the first part of this thesis, we develop a framework
for building statistical 2D shape models by extracting, labelling and corresponding
landmark points using only topological relations derived from competitive Hebbian learning. The method is based on the assumption that correspondences can
be addressed as an unsupervised classification problem where landmark points
are the cluster centres (nodes) in a high-dimensional vector space. The approach
is novel in that the network can be used in cases where the topological structure of
the input pattern is not known a priori; thus, no topology of fixed dimensionality is imposed on the network.
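The competitive Hebbian learning rule behind this topology learning admits a very small sketch: for each input sample, an edge is created between the two nearest nodes, so the resulting graph reflects the structure of the data without any imposed dimensionality. The fixed node positions below are illustrative only, not learned landmarks from the thesis.

```python
# Minimal sketch of competitive Hebbian learning: each input connects
# the two nearest nodes (cluster centres) with an undirected edge.

import math

def nearest_two(nodes, x):
    """Indices of the two nodes closest to sample x."""
    order = sorted(range(len(nodes)), key=lambda i: math.dist(nodes[i], x))
    return order[0], order[1]

def competitive_hebbian_edges(nodes, samples):
    """Build the edge set induced by the samples over fixed nodes."""
    edges = set()
    for x in samples:
        i, j = nearest_two(nodes, x)
        edges.add((min(i, j), max(i, j)))  # store edges undirected
    return edges

# Three collinear nodes; samples between neighbours create a chain.
nodes = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
edges = competitive_hebbian_edges(nodes, [(0.4, 0.0), (1.6, 0.0)])
```

Because edges appear only where data actually falls between node pairs, the learned graph approximates the topology of the input distribution rather than a predefined grid.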
In the second part, we propose an approach to minimise the user intervention
in the adaptation process, which otherwise requires specifying a priori the number of nodes
needed to represent an object, by utilising an automatic criterion for maximum
node growth. Furthermore, this model is used to represent motion in image sequences by initialising a suitable segmentation that separates the object of interest
from the background. The segmentation system assumes some tolerance to illumination changes, input images from ordinary cameras and webcams, backgrounds with low to medium clutter (extremely cluttered backgrounds are avoided), and objects at close range from the camera.
In the final part, we extend the framework for the automatic modelling and
unsupervised tracking of 2D hand gestures in a sequence of k frames. The aim
is to use the tracked frames as training examples in order to build the model and
maintain correspondences. To do that we add an active step to the Growing Neural Gas (GNG) network, which we call Active Growing Neural Gas (A-GNG) that
takes into consideration not only the geometrical position of the nodes, but also the
underlying local feature structure of the image, and the distance vector between
successive images. The quality of our model is measured through the topographic product, a topology-preserving measure that quantifies neighbourhood preservation.
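The topographic product is commonly defined following Bauer and Pawelzik: for each node, neighbour rankings in the input (weight) space and the output (lattice) space are compared. The sketch below follows that standard definition and is not the thesis's exact implementation; the distance matrices are assumed precomputed.

```python
# Hedged sketch of the topographic product (after Bauer and Pawelzik).
# A value near zero indicates good neighbourhood preservation.

import math

def neighbour_ranks(dist, j):
    """All other indices ordered by distance from point j."""
    return sorted((i for i in range(len(dist)) if i != j),
                  key=lambda i: dist[j][i])

def topographic_product(d_in, d_out):
    """d_in: pairwise distances between node weights (input space);
       d_out: pairwise distances in the output/lattice space."""
    n = len(d_in)
    total = 0.0
    for j in range(n):
        rin = neighbour_ranks(d_in, j)    # k-th nearest in input space
        rout = neighbour_ranks(d_out, j)  # k-th nearest in output space
        prod = 1.0
        for k in range(1, n):
            # Ratio of distances to the k-th neighbour under each ordering,
            # accumulated in both spaces (Q1 * Q2 in the usual notation).
            prod *= (d_in[j][rout[k - 1]] / d_in[j][rin[k - 1]]) * \
                    (d_out[j][rout[k - 1]] / d_out[j][rin[k - 1]])
            total += math.log(prod) / (2 * k)
    return total / (n * (n - 1))
```

When the two orderings agree perfectly (identical distance matrices), every ratio is one and the measure is exactly zero; systematic deviations push it away from zero in either direction.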
In our system we have applied specific restrictions on the velocity and appearance of the gestures to reduce the difficulty of motion analysis in the gesture representation. The proposed framework has been validated on applications
related to sign language. The work has great potential in Virtual Reality (VR) applications, where the learning and representation of gestures become natural
without the need for expensive wearable cable sensors.
Robust modelling and tracking of non-rigid objects using Active-GNG
This paper presents a robust approach to non-rigid modelling and tracking. The contour of the object is described by an active growing neural gas (A-GNG) network which allows the model to re-deform locally. The approach is novel in that the nodes of the network are described by their geometrical position, the underlying local feature structure of the image, and the distance vector between the model image and any successive images. A second contribution is the correspondence of the nodes, which is measured through the calculation of the topographic product, a topology-preserving objective function that quantifies the neighbourhood preservation before and after the mapping. As a result, we can achieve the automatic modelling and tracking of objects without using any annotated training sets. Experimental results have shown the superiority of our proposed method over the original growing neural gas (GNG) network.
Visual region understanding: unsupervised extraction and abstraction
The ability to gain a conceptual understanding of the world in uncontrolled environments is the ultimate goal of vision-based computer systems. Technological
societies today are heavily reliant on surveillance and security infrastructure, robotics, medical image analysis, visual data categorisation and search, and smart device user interaction, to name a few. Out of all the complex problems tackled
by computer vision today in the context of these technologies, that which lies closest to the original goals of the field is the subarea of unsupervised scene analysis or scene modelling. However, its common use of low level features does not provide
a good balance between generality and discriminative ability, both a result and a symptom of the sensory and semantic gaps existing between low level computer
representations and high level human descriptions.
In this research we explore a general framework that addresses the fundamental
problem of universal unsupervised extraction of semantically meaningful visual
regions and their behaviours. For this purpose we address issues related to
(i) spatial and spatiotemporal segmentation for region extraction, (ii) region shape modelling, and (iii) the online categorisation of visual object classes and the spatiotemporal analysis of their behaviours. Under this framework we propose (a)
a unified region merging method and spatiotemporal region reduction, (b) shape
representation by the optimisation and novel simplification of contour-based growing neural gases, and (c) a foundation for the analysis of visual object motion properties using a shape- and appearance-based nearest-centroid classification algorithm
and trajectory plots for the obtained region classes.
Specifically, we formulate a region merging spatial segmentation mechanism
that combines and adapts features shown previously to be individually useful,
namely parallel region growing, the best merge criterion, a time adaptive threshold, and region reduction techniques. For spatiotemporal region refinement we
consider both scalar intensity differences and vector optical flow. To model the shapes of the visual regions thus obtained, we adapt the growing neural gas for
rapid region contour representation and propose a contour simplification technique. A fast unsupervised nearest-centroid online learning technique next groups observed region instances into classes, for which we are then able to analyse spatial
presence and spatiotemporal trajectories. The analysis results show semantic correlations to real-world object behaviour. Evaluation of all steps across
standard metrics and datasets validates their performance.
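The unsupervised nearest-centroid online learning step can be sketched compactly: each class keeps a running centroid of its feature vectors, and a new region instance joins the nearest class if it is close enough, otherwise it founds a new class. The distance threshold and 2-D feature vectors below are hypothetical choices for illustration, not the thesis's parameters.

```python
# Hedged sketch of online nearest-centroid classification with class
# discovery. Feature vectors are tuples; the threshold is hypothetical.

import math

class OnlineNearestCentroid:
    def __init__(self, threshold):
        self.threshold = threshold
        self.centroids = []  # running mean feature vector per class
        self.counts = []     # instances seen per class

    def assign(self, x):
        """Return the class index for x, creating a new class if x is
        farther than the threshold from every existing centroid."""
        if self.centroids:
            c = min(range(len(self.centroids)),
                    key=lambda i: math.dist(self.centroids[i], x))
            if math.dist(self.centroids[c], x) <= self.threshold:
                # incremental mean update of the winning centroid
                n = self.counts[c]
                self.centroids[c] = tuple((n * m + v) / (n + 1)
                                          for m, v in zip(self.centroids[c], x))
                self.counts[c] = n + 1
                return c
        self.centroids.append(tuple(x))
        self.counts.append(1)
        return len(self.centroids) - 1
```

Because assignment and centroid update happen per instance, the classifier runs online over a video stream without a separate training phase, matching the unsupervised setting described above.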
Fast 2D/3D object representation with growing neural gas
This work presents the design of a real-time system to model visual objects with the use of self-organising networks. The architecture of the system addresses multiple computer vision tasks such as image segmentation, optimal parameter estimation and object representation. We first develop a framework for building non-rigid shapes using the growth mechanism of the self-organising maps, and then we define an optimal number of nodes without overfitting or underfitting the network based on the knowledge obtained from information-theoretic considerations. We present experimental results for hands and faces, and we quantitatively evaluate the matching capabilities of the proposed method with the topographic product. The proposed method is easily extensible to 3D objects, as it offers similar features for efficient mesh reconstruction
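The growth mechanism at the heart of these self-organising networks can be illustrated with a simplified growing neural gas: nodes accumulate quantisation error, and a new node is periodically inserted between the highest-error node and its worst neighbour. The sketch below uses fixed hypothetical parameters and omits edge ageing for brevity; it is not the paper's optimised implementation.

```python
# Simplified growing neural gas (GNG) growth loop: competitive Hebbian
# edges, winner adaptation, and periodic node insertion at the region
# of highest accumulated error. Parameters are illustrative defaults.

import math
import random

def gng_fit(samples, max_nodes, steps=2000, eps_b=0.05, eps_n=0.005,
            insert_every=100, alpha=0.5, seed=0):
    rng = random.Random(seed)
    nodes = [list(rng.choice(samples)) for _ in range(2)]
    error = [0.0, 0.0]
    edges = {(0, 1)}
    for t in range(1, steps + 1):
        x = rng.choice(samples)
        # find the two nearest nodes and connect them (Hebbian edge)
        order = sorted(range(len(nodes)), key=lambda i: math.dist(nodes[i], x))
        s1, s2 = order[0], order[1]
        edges.add((min(s1, s2), max(s1, s2)))
        error[s1] += math.dist(nodes[s1], x) ** 2
        # move the winner and its topological neighbours towards x
        nodes[s1] = [w + eps_b * (v - w) for w, v in zip(nodes[s1], x)]
        for a, b in edges:
            if s1 in (a, b):
                nb = b if a == s1 else a
                nodes[nb] = [w + eps_n * (v - w) for w, v in zip(nodes[nb], x)]
        # periodic insertion between the highest-error node q and its
        # highest-error neighbour f
        if t % insert_every == 0 and len(nodes) < max_nodes:
            q = max(range(len(nodes)), key=lambda i: error[i])
            nbrs = [b if a == q else a for a, b in edges if q in (a, b)]
            f = max(nbrs, key=lambda i: error[i])
            nodes.append([(a + b) / 2 for a, b in zip(nodes[q], nodes[f])])
            error[q] *= alpha
            error[f] *= alpha
            error.append(error[q])
            edges.discard((min(q, f), max(q, f)))
            r = len(nodes) - 1
            edges.add((min(q, r), max(q, r)))
            edges.add((min(f, r), max(f, r)))
    return nodes, edges

# Toy usage: fit the network to points sampled around the unit circle.
pts = [(math.cos(0.1 * i), math.sin(0.1 * i)) for i in range(63)]
nodes, edges = gng_fit(pts, max_nodes=10)
```

Stopping growth at an "optimal" node count, as the abstract describes, would replace the fixed `max_nodes` cap with an information-theoretic criterion balancing quantisation error against model complexity.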