
    Modeling and tracking relative movement of object parts

    Video surveillance systems play an important role in many civilian and military applications for security purposes. Object detection is an important component of a video surveillance system, used to identify possible objects of interest and to generate data for tracking and analysis. Little work has been done on tracking the moving parts of an object that is itself being tracked. Promising techniques such as the Kalman filter, the mean-shift algorithm, eigenspace matching, the discrete wavelet transform, the curvelet transform and distance metric learning have shown good performance for tracking moving objects, but most of the existing work focuses on studying and analyzing the available object tracking techniques, and most of those techniques have heavy computational requirements. The intention of this research is to design a technique that is not computationally intensive and can track the relative movements of object parts in real time. The research applies foreground detection (also known as background subtraction) for tracking the object, as it is not computationally intensive, and a skeletonization technique for tracking the relative movement of object parts. During implementation, it was found that extracting the object parts using the skeletonization technique is difficult.
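
    A minimal sketch of the pipeline this abstract describes — background subtraction to isolate the moving object, followed by skeletonization of the foreground silhouette — might look as follows; the video file name, the 3x3 opening kernel and the display loop are illustrative assumptions, not details from the thesis.

```python
# Sketch: background subtraction isolates the moving object, then the
# binary silhouette is skeletonized to expose the part structure.
# "video.avi" and the morphological kernel are illustrative assumptions.
import cv2
import numpy as np
from skimage.morphology import skeletonize

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
cap = cv2.VideoCapture("video.avi")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                      # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            np.ones((3, 3), np.uint8))  # remove speckle
    skeleton = skeletonize(mask > 0)                    # thin to 1 px
    cv2.imshow("skeleton", skeleton.astype(np.uint8) * 255)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

    The MOG2 subtractor updates its per-pixel background model incrementally from each frame, which is what keeps this approach computationally light compared with the heavier trackers listed above.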

    Automated Video Analysis of Animal Movements Using Gabor Orientation Filters

    To quantify locomotory behavior, tools for determining the location and shape of an animal’s body are a first requirement. Video recording is a convenient technology to store raw movement data, but extracting body coordinates from video recordings is a nontrivial task. The algorithm described in this paper solves this task for videos of leeches or other quasi-linear animals in a manner inspired by the mammalian visual processing system: the video frames are fed through a bank of Gabor filters, which locally detect segments of the animal at a particular orientation. The algorithm assumes that the image location with maximal filter output lies on the animal’s body and traces its shape out in both directions from there. The algorithm successfully extracted location and shape information from video clips of swimming leeches, as well as from still photographs of swimming and crawling snakes. A Matlab implementation with a graphical user interface is available online, and should make this algorithm conveniently usable in many other contexts.
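
    The core orientation-filter stage — a bank of Gabor filters over several orientations, with the location of maximal response taken as a seed point on the body — can be sketched as below. The paper's implementation is in Matlab; this Python version with scikit-image, and its frequency and orientation counts, are illustrative assumptions. Tracing the body outward from the seed point, as the paper does, is omitted here.

```python
# Sketch of the Gabor-bank stage: for each orientation, filter the frame
# and keep the per-pixel maximum magnitude; the argmax is assumed to lie
# on the animal's body. Frequency/orientation counts are assumptions.
import numpy as np
from skimage.filters import gabor

def strongest_body_point(frame, frequency=0.1, n_orientations=8):
    """Return (row, col) of maximal Gabor response and the winning angle."""
    frame = frame.astype(float)
    best_mag = np.zeros_like(frame)
    best_theta = np.zeros_like(frame)
    for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
        real, imag = gabor(frame, frequency=frequency, theta=theta)
        mag = np.hypot(real, imag)            # local filter energy
        update = mag > best_mag
        best_mag[update] = mag[update]
        best_theta[update] = theta
    r, c = np.unravel_index(np.argmax(best_mag), best_mag.shape)
    return (r, c), best_theta[r, c]
```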

    ASKIT: Approximate Skeletonization Kernel-Independent Treecode in High Dimensions

    We present a fast algorithm for kernel summation problems in high dimensions. These problems appear in computational physics, numerical approximation, non-parametric statistics, and machine learning. In our context, the sums depend on a kernel function that is a pair potential defined on a dataset of points in a high-dimensional Euclidean space. A direct evaluation of the sum scales quadratically with the number of points. Fast kernel summation methods can reduce this cost to linear complexity, but the constants involved do not scale well with the dimensionality of the dataset. The main algorithmic components of fast kernel summation algorithms are the separation of the kernel sum between near and far field (which is the basis for pruning) and the efficient and accurate approximation of the far field. We introduce novel methods for pruning and for approximating the far field. Our far-field approximation requires only kernel evaluations and does not use analytic expansions. Pruning is not done using bounding boxes but rather combinatorially, using a sparsified nearest-neighbor graph of the input. The time complexity of our algorithm depends linearly on the ambient dimension. The error of the algorithm depends on the low-rank approximability of the far field, which in turn depends on the kernel function and on the intrinsic dimensionality of the distribution of the points. The error of the far-field approximation does not depend on the ambient dimension. We present the new algorithm along with experimental results that demonstrate its performance. We report results for Gaussian kernel sums for 100 million points in 64 dimensions, for one million points in 1000 dimensions, and for problems in which the Gaussian kernel has a variable bandwidth. To the best of our knowledge, all of these experiments are impossible or prohibitively expensive with existing fast kernel summation methods.
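
    For intuition, the quadratic-cost baseline and the flavour of a kernel-evaluation-only far-field compression can be sketched as below. This toy uses a Nyström-style subsampling of "skeleton" sources for a single well-separated block; it is a stand-in for the general idea, not ASKIT's actual pruning or skeletonization.

```python
# Toy illustration: the O(N*M) direct Gaussian kernel sum, plus a low-rank
# compression of a well-separated ("far field") block that uses only kernel
# evaluations — no analytic expansions — via sampled skeleton sources.
import numpy as np

def gauss_kernel(X, Y, h=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h * h))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))        # target points
Y = rng.normal(size=(500, 8)) + 5.0  # sources, well separated from targets
w = rng.normal(size=500)             # source weights

u_exact = gauss_kernel(X, Y) @ w     # direct sum: quadratic cost

# Pick k skeleton sources and solve for equivalent weights so that
# k kernel evaluations per target replace all M of them.
k = 25
idx = rng.choice(Y.shape[0], size=k, replace=False)
rhs = gauss_kernel(Y[idx], Y) @ w                        # k-vector
w_eq = np.linalg.lstsq(gauss_kernel(Y[idx], Y[idx]), rhs, rcond=None)[0]
u_approx = gauss_kernel(X, Y[idx]) @ w_eq

print(np.linalg.norm(u_exact - u_approx) / np.linalg.norm(u_exact))
```

    The separation between the two point clouds is what makes the kernel block numerically low-rank, mirroring the paper's point that the far-field error depends on low-rank approximability rather than on the ambient dimension.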

    Characterization of Posidonia Oceanica Seagrass Aerenchyma through Whole Slide Imaging: A Pilot Study

    Characterizing the tissue morphology and anatomy of seagrasses is essential to predicting their acoustic behavior. In this pilot study, we use histology techniques and whole slide imaging (WSI) to describe the composition and topology of the aerenchyma of an entire leaf blade in an automatic way, combining the advantages of X-ray microtomography and optical microscopy. Paraffin blocks are prepared in such a way that microtome slices contain an arbitrarily large number of cross sections distributed along the full length of a blade. The sample organization in the paraffin block, coupled with whole slide image analysis, allows high-throughput data extraction and an exhaustive characterization along the whole blade length. The core of the work is a set of image processing algorithms that can identify cells and air lacunae (voids) and distinguish them from fiber strands, epidermis, mesophyll and the vascular system. A set of specific features is developed to adequately describe the convexity of cells and voids where standard descriptors fail. The features scrutinize the local curvature of the object borders to allow an accurate discrimination between void and cell through machine learning. The algorithm makes it possible to reconstruct the cell and cell membrane features that are relevant to tissue density, compressibility and rigidity. Size distributions of the different cell types and gas spaces, total biomass and total void volume fraction are then extracted from the high-resolution slices to provide a complete characterization of the tissue along the leaf from its base to the apex.
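
    A minimal sketch of border-curvature descriptors of the kind described — computed per segmented object from its outline — might look as follows; the exact feature definitions here (solidity plus turning-angle statistics) are illustrative assumptions, not the paper's features.

```python
# Sketch: convexity and local-curvature statistics for one segmented object
# given as a boolean mask. These simple descriptors stand in for the
# paper's curvature features and are illustrative assumptions.
import numpy as np
from skimage.measure import find_contours, label, regionprops

def border_curvature_features(mask, step=5):
    """Shape features for a mask containing a single object."""
    props = regionprops(label(mask.astype(int)))[0]
    contour = find_contours(mask.astype(float), 0.5)[0]  # (row, col) points
    pts = contour[::step]                                # subsample border
    d = np.diff(pts, axis=0)
    angles = np.unwrap(np.arctan2(d[:, 0], d[:, 1]))     # tangent direction
    curvature = np.diff(angles)                          # turning angle/step
    return {
        "solidity": props.solidity,                      # area / hull area
        "mean_abs_curvature": float(np.abs(curvature).mean()),
        "curvature_std": float(curvature.std()),
    }
```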

    A Unified Multiscale Framework for Planar, Surface, and Curve Skeletonization

    Computing skeletons of 2D shapes, and medial surface and curve skeletons of 3D shapes, is a challenging task. In particular, there is no unified framework that detects all types of skeletons using a single model and also produces a multiscale representation in which all skeleton types can be progressively simplified, or regularized. In this paper, we present such a framework. We model skeleton detection and regularization by a conservative mass transport process from a shape's boundary to its surface skeleton, then to its curve skeleton, and finally to the shape center. The resulting density field can be thresholded to obtain a multiscale representation of progressively simplified surface, or curve, skeletons. We detail a numerical implementation of our framework which is demonstrably stable and has high computational efficiency. We demonstrate our framework on several complex 2D and 3D shapes.
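
    The multiscale thresholding mechanism can be illustrated in 2D as below. The paper's importance measure comes from a mass-transport density; the medial-axis distance used here is only a crude stand-in to show how thresholding a per-pixel importance field yields progressively simplified skeletons.

```python
# 2D illustration of the multiscale idea: attach an importance value to
# every skeleton pixel and threshold it. The distance-transform value used
# as importance here is a proxy, not the paper's mass-transport density.
import numpy as np
from skimage.morphology import medial_axis

def multiscale_skeleton(shape_mask, tau):
    """Keep skeleton pixels whose normalized importance exceeds tau."""
    skel, dist = medial_axis(shape_mask, return_distance=True)
    importance = dist * skel            # importance lives on skeleton pixels
    importance = importance / importance.max()
    return importance > tau             # larger tau -> simpler skeleton
```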

    Image processing platform for the analysis of brain vascular patterns

    This project consists of the development of a web application to support medical professionals in the analysis of cerebrovascular image data. The objective is to build an open and modular prototype that can serve as an example or template for the development of other projects, and to provide an open alternative to the commercial data-analysis tools currently available in the health industry. The application is developed in Python. It allows the user to load medical images contained in DICOM files; those images are processed for noise removal and binarization in order to build the result graphs. The results are three graphs: an image called an "isochronal map" reflecting the temporal evolution of the blood flow, an image showing the skeleton of the vascular network, and a box plot representing the numerical branch data extracted from the skeleton. The Dash framework is used to construct the user interface and to implement the user interaction. The user can load two different samples at the same time and run the analysis to compare the results for both samples on the same screen. Finally, the application is containerized with Docker to package it and make it multi-platform. The application was tested with real sample images provided by the Hospital Sant Joan de Déu, and the results are satisfactory: the application and its image processing algorithms work properly. Despite its limitations, the work done serves as a starting point for future developments.
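
    A minimal sketch of the processing chain — read a DICOM frame, denoise, binarize the vessels, skeletonize, and extract simple branch statistics — is shown below; the file name, the median-filter size and the Otsu thresholding are illustrative assumptions rather than the project's exact algorithms.

```python
# Sketch: DICOM frame -> denoise -> binarize vessels -> skeletonize ->
# count branch points. "sample.dcm" and filter choices are assumptions.
import numpy as np
import pydicom
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

img = pydicom.dcmread("sample.dcm").pixel_array.astype(float)
img = ndimage.median_filter(img, size=3)        # noise removal
vessels = img > threshold_otsu(img)             # crude vessel binarization
skel = skeletonize(vessels)

# A skeleton pixel with three or more skeleton neighbours is a branch point.
counts = ndimage.convolve(skel.astype(int), np.ones((3, 3)), mode="constant")
branch_points = skel & ((counts - 1) >= 3)      # subtract the pixel itself
print("skeleton pixels:", int(skel.sum()),
      "branch points:", int(branch_points.sum()))
```

    Branch counts and lengths derived from the skeleton are the kind of numerical data the application summarizes in its box-plot graph.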

    A new approach for centerline extraction in handwritten strokes: an application to the constitution of a code book

    We present in this paper a new method for the analysis and decomposition of handwritten documents into glyphs (graphemes) and their associated code book. The techniques involved are inspired by image processing methods in a broad sense and by mathematical models involving graph coloring. Our approach provides, first, a rapid and detailed characterization of handwritten shapes based on dynamic tracking of the handwriting (curvature, thickness, direction, etc.) and, second, a very efficient analysis method for the categorization of basic shapes (graphemes). The tools we have produced enable paleographers to study a large volume of manuscripts quickly and more accurately and to extract a large number of characteristics that are specific to an individual or an era.
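
    In the spirit of the dynamic tracking described (e.g. thickness along the stroke), a minimal sketch follows: skeletonize a binarized stroke and read the local thickness off the distance transform along the centerline. This illustrates the general idea only, not the authors' method; the Otsu binarization is an assumption.

```python
# Sketch: centerline-based stroke measurement. The distance transform of
# the ink mask gives the radius to the stroke border at every pixel;
# sampling it along the skeleton yields a thickness profile.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def stroke_thickness_profile(gray):
    """Local stroke thickness at every centerline pixel of a gray image."""
    ink = gray < threshold_otsu(gray)       # dark ink on light background
    centerline = skeletonize(ink)
    radius = distance_transform_edt(ink)    # distance to the stroke border
    return 2.0 * radius[centerline]         # diameter ~ local thickness
```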

    Development of Ti-Fe-based powders for laser additive manufacturing of ultrafine lamellar eutectics

    Years of academic research have gone into developing Ti-Fe-based ultrafine eutectic and near-eutectic alloys with remarkable mechanical properties. Cast ingots (a few mm in dimensions) have demonstrated high compressive strengths (> 2 GPa), similar to bulk metallic glasses (BMGs), while retaining more than 15 % plasticity at room temperature [1–3]. However, conventional casting methods cannot provide the uniform, high cooling rates necessary for growing such ultrafine microstructures over large dimensions without introducing significant heterogeneities. On the other hand, laser-based Additive Manufacturing (AM) techniques with inherently very high cooling rates, such as Selective Laser Melting (SLM) (around 10⁶ K/s) or Laser Metal Deposition (LMD) (around 10⁴–10⁵ K/s), are appropriate for such microstructural growth, and their track- and layer-wise building approach maintains an almost constant cooling rate throughout the bulk. This strongly motivates the development of high-quality powders for SLM and LMD trials. In this work, a pre-alloyed powder of the Fe-rich near-eutectic composition Fe82.4Ti17.6 (at. %) was developed for LMD, while powders of two Ti-rich compositions, near-eutectic Ti66Fe27Nb3Sn4 (at. %) and off-eutectic Ti73.5Fe23Nb1.5Sn2 (at. %), were explored for SLM trials. Three gas atomisation methods, namely Crucible-based Gas Atomisation (CGA), Crucible-Free Atomisation (CFA) and Arc-melting Atomisation (AMA), were investigated to optimise powder production. In addition to conventional techniques, a novel methodology was proposed for one-step screening of the powders' key features based on advanced image analysis of X-Ray Computed Tomography (XCT) data. The methodology generated volume-weighted particle size distributions (validated against conventional laser diffraction), provided accurate estimates of internal porosity and quantitatively evaluated the 3D morphology of the powders. To create a solidification knowledge dataset and further optimise the processing of powders under high cooling rates, in-depth microstructural studies were performed on these powders sieved into different particle size ranges (which experience different solidification rates during atomisation). The results revealed that powder particle size is clearly related to, and can possibly predict, the solidification pathway followed during gas atomisation as well as its degree of completion. The ultrafine interlamellar spacing λ (< 190 μm) of the lamellar eutectics observed in powders of near-eutectic compositions increased almost linearly with particle size and revealed solidification rates similar to those encountered during SLM/LMD processing of the same or similar compositions. This work therefore highlights the potential of gas atomisation as a method to study rapid solidification and laser-AM processing. Finally, two alloys were consolidated by AM from the pre-alloyed powders and characterised mechanically: LMD-built Fe82.4Ti17.6 with a lamellar eutectic microstructure, and SLM-built Ti73.5Fe23Nb1.5Sn2 (off-eutectic), which showed a unique "composite" microstructure of α-Ti and β-Ti grains strengthened by FeTi dispersoids that partially arranged themselves as fine lamellae. Both alloys showed high compressive yield strengths (≈ 1.8 GPa and ≈ 1.9 GPa) at room temperature, with Ti73.5Fe23Nb1.5Sn2 showing high plasticity of up to 20 %.
    The latter alloy showed higher tensile yield strength and elongation at intermediate temperatures (450 °C to 600 °C) than popular (α+β) aerospace alloys such as laser-AM-built Ti-6Al-4V [4–6]. LMD-built Fe82.4Ti17.6 remained largely brittle below 500 °C, but outperformed similar induction-cast [7] and sintered alloys in compressive yield strength, making it a promising candidate for compression-based applications (such as tooling) in the intermediate temperature range.
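
    A minimal sketch of the XCT screening step — label particles in a binarized 3D volume and build a volume-weighted particle size distribution from equivalent-sphere diameters — is given below; the voxel size and the bin count are illustrative assumptions.

```python
# Sketch: label particles in a binarized XCT volume, convert voxel counts
# to equivalent-sphere diameters, and weight the size histogram by particle
# volume (as laser diffraction effectively does). Voxel size is assumed.
import numpy as np
from scipy import ndimage

def volume_weighted_psd(binary_volume, voxel_um=2.0, bins=30):
    labels, n = ndimage.label(binary_volume)
    voxels = np.bincount(labels.ravel())[1:]            # voxels per particle
    volumes = voxels * voxel_um ** 3                    # um^3 per particle
    diameters = (6.0 * volumes / np.pi) ** (1.0 / 3.0)  # equivalent sphere
    hist, edges = np.histogram(diameters, bins=bins, weights=volumes)
    return hist / hist.sum(), edges                     # volume fractions
```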

    Fast extraction of neuron morphologies from large-scale SBFSEM image stacks

    Neuron morphology is frequently used to classify cell types in the mammalian cortex. Apart from the shape of the soma and the axonal projections, morphological classification is largely defined by the dendrites of a neuron and their subcellular compartments, referred to as dendritic spines. The dimensions of a neuron’s dendritic compartment, including its spines, are also a major determinant of the passive and active electrical excitability of dendrites. Furthermore, the dimensions of dendritic branches and spines change during postnatal development and, possibly, in response to some types of neuronal activity patterns. Due to their small size, accurate quantitation of spine number and structure is difficult to achieve (Larkman, J Comp Neurol 306:332, 1991). Here we follow an analysis approach using high-resolution EM techniques. Serial block-face scanning electron microscopy (SBFSEM) enables automated imaging of large specimen volumes at high resolution, but the large data sets generated by this technique make manual reconstruction of neuronal structure laborious. We present NeuroStruct, a reconstruction environment developed for fast and automated analysis of large SBFSEM data sets containing individual stained neurons, using algorithms optimized for CPU and GPU hardware. NeuroStruct is based on 3D operators and integrates image information from image stacks of individual neurons filled with biocytin and stained with osmium tetroxide. The focus of the presented work is the reconstruction of dendritic branches with a detailed representation of spines. NeuroStruct delivers both a 3D surface model of the reconstructed structures and a 1D geometrical model corresponding to their skeleton. Both representations are a prerequisite for analyzing morphological characteristics and for simulating signalling within a neuron in a way that captures the influence of spines.
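
    A minimal sketch of deriving a 1D geometrical model from a segmented stack — a voxel centerline plus local radius estimates — is shown below, assuming the stained neuron has already been binarized into a boolean volume; it illustrates the representation, not NeuroStruct's own optimized CPU/GPU pipeline.

```python
# Sketch: skeletonize a segmented 3D neuron volume to get a voxel centerline,
# and estimate the local dendrite radius from the distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def centerline_with_radii(neuron_volume):
    """Centerline voxel coordinates and local radii for a 3D boolean mask."""
    skel = skeletonize(neuron_volume)  # 3D input works in recent scikit-image
    radius = distance_transform_edt(neuron_volume)
    coords = np.argwhere(skel)         # (z, y, x) of centerline voxels
    return coords, radius[skel]
```

    The (coordinate, radius) pairs are essentially the information needed for a 1D cable-style model of the kind used to simulate dendritic signalling.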