
    Identification and tracking of dairy cows using computer vision (Lypsylehmien tunnistaminen ja seuranta konenäön avulla)

    Improving the monitoring of the health and well-being of dairy cows through computer-vision-based systems is a topic of ongoing research. A reliable and low-cost method for identifying individual cows would enable automatic detection of stress, sickness or injury, and would make the daily observation of the animals easier. Neural networks have been used successfully to identify individual cows, but methods are needed that do not require constant annotation work to generate training datasets whenever the composition of a group changes. Methods for person re-identification and tracking have been researched extensively, with the aim of generalizing beyond the training set, and they have been found suitable also for re-identifying and tracking previously unseen dairy cows in video frames. In this thesis, a metric-learning-based re-identification model pre-trained on an existing cow dataset is compared to a similar model trained on new video data recorded at the Luke Maaninka research farm in spring 2021, containing 24 individually labelled cows. The models are evaluated in a tracking context, as appearance descriptors in a Kalman-filter-based tracking algorithm, on test video footage from a separate enclosure in Maaninka with a group of 24 previously unseen cows. In addition, a simple procedure is proposed for the automatic labeling of cow identities in images, based on RFID data collected from cow ear tags and feeding stations, and the known feeding station locations.
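
    To make the appearance-descriptor role concrete, the following is a minimal sketch of how re-identification embeddings can drive the data-association step of a Kalman-filter tracker. The embedding model, the cosine-distance gate of 0.4, and the cost construction are illustrative assumptions, not the thesis's exact implementation.

```python
# Sketch: associating detections to tracks with re-ID appearance descriptors.
# Assumes each track and detection already has an L2-normalised embedding
# vector from a metric-learning re-ID network (hypothetical here); the 0.4
# cosine-distance gate is an illustrative choice.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_embeddings, detection_embeddings, max_cost=0.4):
    """Match tracks to detections by cosine distance between embeddings."""
    T = np.stack(track_embeddings)        # (num_tracks, d), L2-normalised
    D = np.stack(detection_embeddings)    # (num_dets, d), L2-normalised
    cost = 1.0 - T @ D.T                  # cosine distance
    rows, cols = linear_sum_assignment(cost)  # Hungarian assignment
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]
    return matches  # unmatched detections would seed new tracks
```

    In a full tracker, this appearance cost would be combined with the Kalman filter's motion gating so that only spatially plausible matches are considered.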

    Efficient Methods for Continuous and Discrete Shape Analysis

    When interpreting an image of a given object, humans are able to abstract from the presented color information in order to really see the presented object. This abstraction is also known as shape. The concept of shape is not defined exactly in computer vision, and in this work we use three different definitions of shape in order to acquire and analyze shapes. This work is devoted to improving the efficiency of methods that solve important applications of shape analysis. The most important prerequisite for analyzing shapes is shape acquisition. To simplify this very challenging problem, numerous researchers have incorporated prior knowledge into the acquisition of shapes. We present the first approach to shape acquisition under a given shape prior that always computes the global minimum of the involved functional, which combines a Mumford-Shah-like functional with a certain class of shape priors, including statistical and dynamical shape priors. Analyzing shapes requires not only acquiring them but also classifying them. In this work, we follow the concept of defining a distance function that measures the dissimilarity of two given shapes, and we address two different ways of obtaining such a distance function. Firstly, we model the set of all shapes as a metric space induced by the shortest path on an orbifold. The shortest path provides a shape morphing, i.e., a continuous transformation from one shape into another. Secondly, we address the problem of shape matching, which finds corresponding points on two shapes with respect to a preselected feature. Our main contribution to shape morphing lies in the immense acceleration of the morphing computation. Instead of solving partial or ordinary differential equations, we solve this problem via a gradient-descent approach that successively shortens the length of a path on the given manifold. In our experiments, we observed run-time accelerations of up to a factor of 1000. Shape matching is a classical discrete problem. If each shape is discretized by N shape points, most computer vision methods require cubic run-time. We provide two approaches that reduce this worst-case complexity to O(N² log N). One approach exploits the planarity of the involved graph to efficiently compute N shortest paths in a graph of O(N²) vertices. The other computes a minimal cut in a planar graph in O(N log N) time. In order to make this approach applicable to shape matching, we improved the run-time of a recently developed graph-cut approach by an empirical factor of 2–4.
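
    The path-shortening idea can be illustrated with a small sketch: represent a morphing as a discrete path of shapes and iteratively move each intermediate shape toward the midpoint of its neighbours, projecting back onto the shape manifold after each step. The unit-sphere projection below is a toy stand-in for the orbifold geometry treated properly in the thesis.

```python
# Sketch: shortening a discrete path between two shapes by gradient descent.
# Each shape is a flattened landmark vector; normalising to the unit sphere
# is a toy stand-in for the actual manifold constraint.
import numpy as np

def project(shape_vec):
    """Toy manifold projection: normalise to the unit sphere."""
    return shape_vec / np.linalg.norm(shape_vec)

def shorten_path(path, steps=200, lr=0.5):
    """Gradient descent on the discrete path energy sum ||p[k+1] - p[k]||^2."""
    path = [p.copy() for p in path]
    for _ in range(steps):
        for k in range(1, len(path) - 1):        # endpoints stay fixed
            # The energy gradient pulls p[k] toward its neighbours' midpoint.
            midpoint = 0.5 * (path[k - 1] + path[k + 1])
            path[k] = project(path[k] + lr * (midpoint - path[k]))
    return path
```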

    Unsupervised maritime target detection

    The unsupervised detection of maritime targets in grey-scale video is a difficult problem in maritime video surveillance. Most approaches assume that the camera is static and employ pixel-wise background-modelling techniques for foreground detection; other methods rely on colour or thermal information to detect targets. These methods fail in real-world situations when the static-camera assumption is violated or when colour or thermal data are unavailable. In defence and security applications, prior information and training samples of targets may be unavailable for training a classifier, and learning a one-class classifier for the background may be impossible as well. Thus, an unsupervised online approach that attempts to learn from the scene data is highly desirable. In this thesis, the characteristics of the maritime scene and the ocean texture are exploited for foreground detection, and two fast and effective methods are investigated for target detection. Firstly, online region-based background texture models are explored for describing the appearance of the ocean. This approach avoids the need for frame registration because the model is built spatially rather than temporally. The texture appearance of the ocean is described using Local Binary Pattern (LBP) descriptors. Two models are proposed: one is a Gaussian Mixture Model (GMM) and the other, referred to as a Sparse Texture Model (STM), is a set of histogram texture distributions. The foreground detections are optimized using a Graph Cut (GC) that enforces spatial coherence. Secondly, feature tracking is investigated as a means of detecting stable features in an image frame that typically correspond to maritime targets, while unstable features correspond to background regions. This approach follows a Track-Before-Detect (TBD) concept and is implemented using a hierarchical scheme for motion estimation and matching of Scale-Invariant Feature Transform (SIFT) appearance features. The experimental results show that these approaches are feasible for foreground detection in maritime video when the camera is either static or moving. Receiver Operating Characteristic (ROC) curves were generated for five test sequences and the Area Under the ROC Curve (AUC) was analyzed to assess the performance of the proposed methods. The texture models, without GC optimization, achieved an AUC of 0.85 or greater on four of the five test videos. At a 50% True Positive Rate (TPR), these four test scenarios had a False Positive Rate (FPR) of less than 2%. With the GC optimization, an AUC of greater than 0.8 was achieved for all test cases, and the FPR was reduced in all cases compared to the results without the GC. In comparison to the state of the art in background modelling for maritime scenes, our texture-model methods achieved the best or comparable performance, and the two texture models executed at a reasonable processing frame rate. The experimental results for TBD show that target features can be detected using a simple track score based on track length: at a 50% TPR, an FPR of less than 4% is achieved for four of the five test scenarios. These results are very promising for maritime target detection.
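
    The sparse-texture-model idea can be sketched as follows: describe each image block by an LBP histogram and flag a block as foreground if it matches no background histogram well enough. The block size, LBP parameters, and the 0.6 histogram-intersection threshold are illustrative assumptions, not the thesis's tuned values.

```python
# Sketch: a sparse texture model of the ocean background using LBP histograms.
# Block size, LBP parameters, and the intersection threshold are illustrative.
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1                                  # LBP neighbours and radius
N_BINS = P + 2                               # 'uniform' LBP has P + 2 codes

def block_histograms(gray, block=32):
    """LBP histogram for each non-overlapping block of a grey-scale frame."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hists = {}
    for y in range(0, gray.shape[0] - block + 1, block):
        for x in range(0, gray.shape[1] - block + 1, block):
            h, _ = np.histogram(lbp[y:y + block, x:x + block],
                                bins=N_BINS, range=(0, N_BINS), density=True)
            hists[(y, x)] = h
    return hists

def foreground_blocks(frame_hists, model_hists, thresh=0.6):
    """A block is foreground if no background texture matches it well enough."""
    fg = []
    for pos, h in frame_hists.items():
        best = max(np.minimum(h, m).sum() for m in model_hists)  # intersection
        if best < thresh:
            fg.append(pos)
    return fg
```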

    Constraints as Features

    In this paper, we introduce a new approach to constrained clustering which treats the constraints as features. Our method augments the original feature space with additional dimensions, each of which is derived from a given cannot-link constraint. The specified cannot-link pair gets extreme coordinate values, and the rest of the points get coordinate values that express their spatial relation to the constrained pair. After all the new features have been added, a standard unconstrained clustering algorithm, such as k-means or spectral clustering, can be applied. We demonstrate the efficacy of our method for active semi-supervised learning applied to image segmentation and compare it to alternative methods. We also evaluate the performance of our method on the four most commonly evaluated datasets from the UCI machine learning repository.
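
    One plausible reading of this augmentation is sketched below: for each cannot-link pair, an extra coordinate places the two constrained points at opposite extremes and positions every other point by its relative distance to the pair. The specific formula and scaling are assumptions, not necessarily the paper's exact construction.

```python
# Sketch: turning cannot-link constraints into extra feature dimensions,
# then running standard k-means. The relative-distance formula is one
# plausible instantiation of "spatial influence", not the paper's exact one.
import numpy as np
from sklearn.cluster import KMeans

def augment(X, cannot_links, scale=1.0):
    """Append one coordinate per cannot-link pair (a, b)."""
    extras = []
    for a, b in cannot_links:
        da = np.linalg.norm(X - X[a], axis=1)
        db = np.linalg.norm(X - X[b], axis=1)
        # a maps to -1, b maps to +1, others in between by relative distance.
        coord = (da - db) / (da + db + 1e-12)
        extras.append(scale * coord)
    return np.hstack([X, np.stack(extras, axis=1)])

# Usage: cluster with the constraint that points 0 and 5 should separate.
X = np.random.rand(100, 2)
X_aug = augment(X, cannot_links=[(0, 5)], scale=2.0)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X_aug)
```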

    Automatic video segmentation employing object/camera modeling techniques

    Established video-compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not reflected in the technical system, making it difficult to manipulate the video at the object level. The realization of object-based manipulation will introduce many new possibilities for working with videos, like composing new scenes from pre-existing video objects or enabling user interaction with the scene. Moreover, object-based video compression, as defined in the MPEG-4 standard, can provide high compression ratios because the foreground objects can be sent independently from the background. If the scene background is static, the background views can even be combined into a large panoramic sprite image, from which the current camera view is extracted. This results in a higher compression ratio since the sprite image for each scene only has to be sent once. A prerequisite for employing object-based video processing is automatic (or at least user-assisted semi-automatic) segmentation of the input video into semantic units, the video objects. This segmentation is a difficult problem because the computer does not have the vast amount of pre-knowledge that humans subconsciously use for object detection. Thus, even the simple definition of the desired output of a segmentation system is difficult. The subject of this thesis is to provide segmentation algorithms that are applicable to common video material and that are computationally efficient. The thesis is conceptually separated into three parts. In Part I, an automatic segmentation system for general video content is described in detail. Part II introduces object models as a tool to incorporate user-defined knowledge about the objects to be extracted into the segmentation process. Part III concentrates on the modeling of camera motion in order to relate the observed camera motion to real-world camera parameters. The segmentation system described in Part I is based on a background-subtraction technique. The pure background image required for this technique is synthesized from the input video itself. Sequences that contain rotational camera motion can also be processed, since the camera motion is estimated and the input images are aligned into a panoramic scene background. This approach is fully compatible with the MPEG-4 video-encoding framework, such that the segmentation system can be easily combined with an object-based MPEG-4 video codec. After an introduction to the theory of projective geometry in Chapter 2, which is required for the derivation of camera-motion models, the estimation of camera motion is discussed in Chapters 3 and 4. It is important that the camera-motion estimation is not influenced by foreground object motion. At the same time, the estimation should provide accurate motion parameters such that all input frames can be combined seamlessly into a background image. The core motion estimation is based on a feature-based approach in which the motion parameters are determined with a robust estimation algorithm (RANSAC) in order to distinguish the camera motion from simultaneously visible object motion. Our experiments showed that the robustness of the original RANSAC algorithm in practice does not reach the theoretically predicted performance.
An analysis of the problem has revealed that this is caused by numerical instabilities, which can be significantly reduced by a modification that we describe in Chapter 4. The synthesis of static-background images is discussed in Chapter 5. In particular, we present a new algorithm for removing the foreground objects from the background image such that a pure scene background remains. The proposed algorithm is optimized to synthesize the background even for difficult scenes in which the background is only visible for short periods of time. The problem is solved by clustering the image content for each region over time, such that each cluster comprises static content. Furthermore, it is exploited that the times at which foreground objects appear in an image region are similar to the corresponding times of neighboring image areas. The reconstructed background could be used directly as the sprite image in an MPEG-4 video coder. However, we have discovered that the counterintuitive approach of splitting the background into several independent parts can reduce the overall amount of data. In the case of general camera motion, the construction of a single sprite image is even impossible. In Chapter 6, a multi-sprite partitioning algorithm is presented, which separates the video sequence into a number of segments for which independent sprites are synthesized. The partitioning is computed in such a way that the total area of the resulting sprites is minimized, while simultaneously satisfying additional constraints. These include a limited sprite-buffer size at the decoder and the restriction that the image resolution in the sprite should never fall below the input-image resolution. The described multi-sprite approach is fully compatible with the MPEG-4 standard, but provides three advantages. First, arbitrary rotational camera motion can be processed. Second, the coding cost for transmitting the sprite images is lower, and finally, the quality of the decoded sprite images is better than in previously proposed sprite-generation algorithms. Segmentation masks for the foreground objects are computed with a change-detection algorithm that compares the pure background image with the input images. A particular problem that occurs in change detection is image misregistration. Since the change detection compares co-located image pixels in the camera-motion-compensated images, a small error in the motion estimation can introduce segmentation errors because non-corresponding pixels are compared. We approach this problem in Chapter 7 by integrating risk maps into the segmentation algorithm that identify pixels for which misregistration would probably result in errors. For these image areas, the change-detection algorithm is modified to disregard the difference values for the pixels marked in the risk map. This modification significantly reduces the number of false object detections in fine-textured image areas. The algorithmic building blocks described above can be combined into a segmentation system in various ways, depending on whether camera motion has to be considered or whether real-time execution is required. These different systems and example applications are discussed in Chapter 8. Part II of the thesis extends the described segmentation system to consider object models in the analysis. Object models allow the user to specify which objects should be extracted from the video.
In Chapters 9 and 10, a graph-based object model is presented in which the features of the main object regions are summarized in the graph nodes, and the spatial relations between these regions are expressed with the graph edges. The segmentation algorithm is extended by an object-detection algorithm that searches the input image for the user-defined object model. We provide two object-detection algorithms: the first is specific to cartoon sequences and uses an efficient sub-graph matching algorithm, whereas the second processes natural video sequences. With the object-model extension, the segmentation system can be controlled to extract individual objects, even if the input sequence comprises many objects. Chapter 11 proposes an alternative approach to incorporating object models into a segmentation algorithm. The chapter describes a semi-automatic segmentation algorithm in which the user coarsely marks the object and the computer refines this to the exact object boundary. Afterwards, the object is tracked automatically through the sequence. In this algorithm, the object model is defined as the texture along the object contour. This texture is extracted in the first frame and then used during the object tracking to localize the original object. The core of the algorithm uses a graph representation of the image and a newly developed algorithm for computing shortest circular paths in planar graphs. The proposed algorithm is faster than the currently known algorithms for this problem, and it can also be applied to many related problems like shape matching. Part III of the thesis elaborates on different techniques to derive information about the physical 3-D world from the camera motion. In the segmentation system, we employ camera-motion estimation, but the obtained parameters have no direct physical meaning. Chapter 12 discusses an extension to the camera-motion estimation that factorizes the motion parameters into physically meaningful parameters (rotation angles, focal length) using camera autocalibration techniques. The special feature of this algorithm is that it can process camera motion that spans several sprites by employing the above multi-sprite technique; consequently, it can be applied to arbitrary rotational camera motion. For the analysis of video sequences, it is often required to determine and follow the position of the objects. Clearly, the object position in image coordinates provides little information if the viewing direction of the camera is not known. Chapter 13 provides a new algorithm to deduce the transformation between image coordinates and real-world coordinates for the special application of sport-video analysis. In sport videos, the camera view can be derived from markings on the playing field. For this reason, we employ a model of the playing field that describes the arrangement of lines. After detecting significant lines in the input image, a combinatorial search is carried out to establish correspondences between lines in the input image and lines in the model. The algorithm requires no information about the specific color of the playing field and it is very robust to occlusions or poor lighting conditions. Moreover, the algorithm is generic in the sense that it can be applied to any type of sport by simply exchanging the model of the playing field. In Chapter 14, we again consider panoramic background images and particularly focus on their visualization.
Apart from the planar background sprites discussed previously, a frequently used visualization technique for panoramic images is the projection onto a cylinder surface, which is unwrapped into a rectangular image. However, the disadvantage of this approach is that the viewer has no good orientation in the panoramic image because it shows all viewing directions at the same time. In order to provide a more intuitive presentation of wide-angle views, we have developed a visualization technique specialized for indoor environments. We present an algorithm to determine the 3-D shape of the room in which the image was captured, or, more generally, to compute a complete floor plan if several panoramic images captured in each of the rooms are provided. Based on the obtained 3-D geometry, a graphical model of the rooms is constructed, in which the walls are displayed with textures extracted from the panoramic images. This representation enables virtual walk-throughs in the reconstructed rooms and therefore provides a better orientation for the user. Summarizing, we can conclude that all segmentation techniques employ some definition of foreground objects. These definitions are either explicit, using object models as in Part II of this thesis, or implicit, as in the background synthesis in Part I. The results of this thesis show that implicit descriptions, which extract their definition from the video content, work well when the sequence is long enough to extract this information reliably. However, high-level semantics are difficult to integrate into segmentation approaches that are based on implicit models; instead, those semantics should be added as post-processing steps. On the other hand, explicit object models apply semantic pre-knowledge at early stages of the segmentation. Moreover, they can be applied to short video sequences or even still pictures, since no background model has to be extracted from the video. The definition of a general object-modeling technique that is widely applicable and that also enables accurate segmentation remains an important yet challenging problem for further research.
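
    The robust camera-motion estimation at the heart of Part I can be illustrated with a compact RANSAC sketch for fitting a homography to feature correspondences, treating foreground object motion as outliers. The iteration count and pixel threshold are illustrative, and the numerical modification of Chapter 4 is not reproduced here.

```python
# Sketch: RANSAC estimation of a homography from point correspondences,
# separating camera motion (inliers) from object motion (outliers).
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform from >= 4 correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 3)           # null-space vector as 3x3 matrix

def reproj_error(H, src, dst):
    pts = np.column_stack([src, np.ones(len(src))]) @ H.T
    pts = pts[:, :2] / pts[:, 2:3]        # dehomogenise
    return np.linalg.norm(pts - dst, axis=1)

def ransac_homography(src, dst, iters=500, thresh=2.0, rng=None):
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        inliers = reproj_error(H, src, dst) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers; these correspondences belong to the camera motion.
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```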

    Machine Learning

    Machine Learning can be defined in various ways, but broadly it refers to a scientific domain concerned with the design and development of theoretical and implementation tools that allow building systems with some human-like intelligent behavior. More specifically, machine learning addresses the ability of such systems to improve automatically through experience.

    A wearable system that learns a kinematic model and finds structure in everyday manipulation by using absolute orientation sensors and a camera

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 215-220). This thesis presents Duo, the first wearable system to autonomously learn a kinematic model of the wearer via body-mounted absolute orientation sensors and a head-mounted camera. With Duo, we demonstrate the significant benefits of endowing a wearable system with the ability to sense the kinematic configuration of the wearer's body. We also show that a kinematic model can be autonomously estimated offline from less than an hour of recorded video and orientation data from a wearer performing unconstrained, unscripted household activities within a real, unaltered home environment. We demonstrate that our system for autonomously estimating this kinematic model places very few constraints on the wearer's body, the placement of the sensors, and the appearance of the hand, which, for example, allows it to automatically discover a left-handed kinematic model for a left-handed wearer and to automatically compensate for distinct camera mounts and sensor configurations. Furthermore, we show that this learned kinematic model efficiently and robustly predicts the location of the dominant hand within video from the head-mounted camera, even in situations where vision-based hand detectors would be likely to fail. Additionally, we show ways in which the learned kinematic model can facilitate highly efficient processing of large databases of first-person experience. Finally, we show that the kinematic model can efficiently direct visual processing so as to acquire a large number of high-quality segments of the wearer's hand and the manipulated objects. In the course of justifying these claims, we present methods for estimating global image motion, segmenting foreground motion, segmenting manipulation events, finding and representing significant hand postures, segmenting visual regions, and detecting visual points of interest with associated shape descriptors. We also describe our architecture and user-level application for machine-augmented annotation and browsing of first-person video and absolute orientations. Additionally, we present a real-time application in which the human and wearable cooperate through tightly integrated behaviors coordinated by the wearable's kinematic perception, and together acquire high-quality visual segments of manipulable objects that interest the wearable. by Charles Clark Kemp. Ph.D.
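
    One way to see how a kinematic model can be estimated from orientation data is the following sketch: if each body segment contributes a fixed but unknown offset vector rotated by its sensor's absolute orientation, the hand position is linear in those offsets and can be recovered by least squares. The two-segment chain and the availability of hand observations for training are simplifying assumptions, not Duo's actual estimation procedure.

```python
# Sketch: learning fixed limb-offset vectors of a kinematic chain by least
# squares. R_torso[i], R_arm[i] are 3x3 absolute orientation matrices at
# sample i (world frame); hand_pos[i] is an observed 3-D hand position.
# A two-segment chain is a simplifying assumption.
import numpy as np

def learn_offsets(R_torso, R_arm, hand_pos):
    """Solve hand = R_torso @ t1 + R_arm @ t2 for the fixed offsets t1, t2."""
    A = np.concatenate(
        [np.hstack([Rt, Ra]) for Rt, Ra in zip(R_torso, R_arm)], axis=0
    )                                    # (3N, 6) stacked rotation blocks
    b = np.concatenate(hand_pos)         # (3N,) stacked observations
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t[:3], t[3:]                  # limb offsets t1, t2

def predict_hand(Rt, Ra, t1, t2):
    """Forward kinematics: predict the hand position from orientations alone."""
    return Rt @ t1 + Ra @ t2
```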

    Spectral Target Detection using Physics-Based Modeling and a Manifold Learning Technique

    Identification of materials from calibrated radiance data collected by an airborne imaging spectrometer depends strongly on the atmospheric and illumination conditions at the time of collection. This thesis demonstrates a methodology for identifying material spectra using the assumption that each unique material class forms a lower-dimensional manifold (surface) in the higher-dimensional spectral radiance space and that all image spectra reside on, or near, these theoretical manifolds. Using a physical model, a manifold characteristic of the target material exposed to varying illumination and atmospheric conditions is formed. A graph-based model is then applied to the radiance data to capture the intricate structure of each material manifold, followed by the application of the commute time distance (CTD) transformation to separate the target manifold from the background. Detection algorithms are then applied in the CTD subspace. This nonlinear transformation is based on a random walk on a graph and is derived from an eigendecomposition of the pseudoinverse of the graph Laplacian matrix. This work provides a geometric interpretation of the CTD transformation, describes its algebraic properties and the atmospheric and illumination parameters varied in the physics-based model, and analyzes the influence the target manifold samples have on the orientation of the coordinate axes in the transformed space. The thesis concludes by demonstrating improved detection results in the CTD subspace as compared to detection in the original spectral radiance space.
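
    The CTD transformation can be sketched directly from its definition: an eigendecomposition of the graph Laplacian yields an embedding in which Euclidean distances equal commute times of the random walk. The Gaussian affinity and its bandwidth are illustrative choices, not the thesis's specific graph construction.

```python
# Sketch: commute-time-distance (CTD) embedding of spectral samples.
# Builds a Gaussian affinity graph, forms the graph Laplacian L = D - W,
# and embeds points so that Euclidean distance equals commute time.
import numpy as np

def ctd_embedding(X, sigma=1.0):
    """X: (n, d) spectra. Returns (n, n-1) CTD coordinates."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma**2))              # Gaussian affinities
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W                     # graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    vals, vecs = vals[1:], vecs[:, 1:]            # drop the zero eigenvalue
    vol = W.sum()                                 # graph volume
    # Coordinates sqrt(vol) * Lambda^{-1/2} U^T give
    # ||y_i - y_j||^2 = vol * (L+_ii + L+_jj - 2 L+_ij), the commute time,
    # where L+ is the pseudoinverse of the Laplacian.
    return np.sqrt(vol) * vecs / np.sqrt(vals)
```

    Target detection would then proceed with ordinary detectors applied to these transformed coordinates, as the abstract describes for the CTD subspace.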

    Machine learning to simplify the use of cardiac image databases (Apprentissage automatique pour simplifier l'utilisation de banques d'images cardiaques)

    The recent growth of data in cardiac databases has been phenomenal. Clever use of these databases could help find supporting evidence for better diagnosis and treatment planning. In addition to the challenges inherent in the large quantity of data, the databases are difficult to use in their current state: data coming from multiple sources are often unstructured, the image content is variable, and the metadata are not standardised. The objective of this thesis is therefore to simplify the use of large databases for cardiology specialists with automated image processing, analysis and interpretation tools. The proposed tools are largely based on supervised machine-learning techniques, i.e. algorithms which can learn from large quantities of cardiac images with ground-truth annotations and which automatically find the best representations. First, the inconsistent metadata are cleaned, and interpretation and visualisation of images are improved, by automatically recognising commonly used cardiac magnetic resonance imaging views from image content. The method is based on decision forests and convolutional neural networks trained on a large image dataset. Second, the thesis explores ways to use machine learning for the extraction of relevant clinical measures (e.g. volumes and masses) from 3D and 3D+t cardiac images. New spatio-temporal image features are designed, and classification forests are trained to automatically segment the main cardiac structures (left ventricle and left atrium) from voxel-wise label maps. Third, a web interface is designed to collect pairwise image comparisons and to learn how to describe hearts with semantic attributes (e.g. dilation, kineticity). In the last part of the thesis, a forest-based machine-learning technique is used to map cardiac images in order to establish distances and neighbourhoods between images. One application is the retrieval of the images most similar to that of a new patient.
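
    As a flavour of the forest-based pipeline, here is a minimal sketch of view recognition as supervised classification of image features. The downsampled-intensity features, the view labels, and the forest size are illustrative stand-ins for the features and classes actually used in the thesis.

```python
# Sketch: recognising cardiac MRI views with a random forest. Block-averaged
# pixel intensities stand in for the thesis's engineered features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(image, size=16):
    """Crude feature vector: block-averaged intensities on a size x size grid."""
    h, w = image.shape
    ys = np.linspace(0, h, size + 1, dtype=int)
    xs = np.linspace(0, w, size + 1, dtype=int)
    return np.array([image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                     for i in range(size) for j in range(size)])

def train_view_classifier(images, labels):
    """labels: view names such as '4-chamber' or 'short-axis' (illustrative)."""
    X = np.stack([features(im) for im in images])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return clf.fit(X, labels)

# Usage: predicted_view = clf.predict([features(new_image)])[0]
```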