    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This is evident from the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.

    Contribution à l'analyse de la dynamique des écritures anciennes pour l'aide à l'expertise paléographique

    My thesis work is part of the ANR project GRAPHEM (Grapheme-based Retrieval and Analysis for PaleograpHic Expertise of Middle Age Manuscripts). It presents a methodological contribution applicable to the automatic analysis of ancient writing, intended to assist experts in paleography in the delicate work of studying and deciphering these scripts. The main objective is to contribute to an instrumentation of the corpus of medieval manuscripts held by the Institut de Recherche en Histoire des Textes (IRHT, Paris) by helping the paleographers specialized in this field in their work of understanding the evolution of written forms, through effective methods of accessing the content of the manuscripts based on a fine analysis of the shapes, described as small fragments (graphemes). In my PhD work, I chose to study the dynamics of the most basic element of writing, the ductus, which according to the paleographers carries a great deal of information on the style of the writing and the period in which the manuscript was produced.
    My major contributions lie at two levels. The first is a preprocessing step for severely degraded images that ensures an optimal decomposition of the shapes into graphemes carrying the ductus information. For this decomposition of the manuscripts, we developed a complete stroke-tracking methodology based on a skeleton extracted after contrast enhancement and gradient diffusion. Complete tracking of the strokes was obtained by applying the fundamental stroke-execution rules taught to the scribes of the Middle Ages: dynamic information on stroke formation, consisting essentially of indications of preferred directions. In a second step, we characterized these graphemes with visual shape descriptors understandable by both paleographers and computer scientists, guaranteeing as complete a representation of the writing as possible from a geometrical and morphological point of view. From this characterization, we proposed a clustering approach that groups the graphemes into homogeneous classes using an unsupervised classification algorithm based on graph coloring. The clustering of the graphemes yields codebooks of shapes that characterize each processed manuscript in an individual and discriminating way. We also studied the discriminating power of these descriptors in order to obtain the best codebook representation of a manuscript, exploiting genetic algorithms for their ability to produce good feature selections.
    These contributions were tested through a CBIR application on three manuscript databases: two medieval ones (manuscripts from the Oxford database and from the IRHT, the main database of the project) and one containing contemporary manuscripts used in the writer identification contest of ICDAR 2011. Our description and classification method was applied to the contemporary database in order to position our contribution with respect to other work in the field of writer identification and to study how well it generalizes to other types of documents. The very encouraging results obtained on both the medieval and the contemporary databases show the robustness of our approach to variations in shape and style and its readily generalizable character to all types of handwritten documents.
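
    The clustering step lends itself to a short sketch. The Python below illustrates one common reading of graph-coloring clustering: graphemes are vertices, an edge joins two graphemes whose shape descriptors are too dissimilar, and a greedy proper coloring then puts mutually dissimilar graphemes in different classes, so each color class gathers similar shapes. The descriptors, distance and threshold are illustrative placeholders, not the features used in the thesis.

```python
# Sketch of graph-coloring clustering of shape descriptors (illustrative only).
import numpy as np


def color_clusters(descriptors: np.ndarray, threshold: float) -> np.ndarray:
    """Return a cluster (color) label for each row of `descriptors`."""
    n = len(descriptors)
    # Pairwise Euclidean distances between descriptor vectors.
    dist = np.linalg.norm(descriptors[:, None, :] - descriptors[None, :, :], axis=-1)
    dissimilar = dist > threshold                 # adjacency of the "conflict" graph
    labels = np.full(n, -1, dtype=int)
    order = np.argsort(-dissimilar.sum(axis=1))   # greedy: highest degree first
    for v in order:
        used = {labels[u] for u in np.flatnonzero(dissimilar[v]) if labels[u] >= 0}
        color = 0
        while color in used:                      # smallest color unused by neighbors
            color += 1
        labels[v] = color
    return labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic groups of 2-D "shape descriptors" (placeholder data).
    descs = np.vstack([rng.normal(0.0, 0.1, (5, 2)), rng.normal(1.0, 0.1, (5, 2))])
    print(color_clusters(descs, threshold=0.5))   # expect two color classes
```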

    Mineralogical Limitations for X-Ray Tomography of Crystalline Cumulate Rocks

    The use of X-ray computed tomography (XRCT) on igneous rocks enables the visualisation and quantification of the 3D texture of the rock and of the crystal population, as opposed to the more traditional 2D approach using thin sections and stereological conversions to 3D. Although still in its infancy, the application of XRCT to igneous rocks provides a 3D map of the distribution of each mineral phase, the overall dimensional metrology of crystals (size, area, volume, shape, orientation and geometry) and potentially their crystallographic orientation. The precision of crystal size distributions (CSD), which are often used for describing rocks and understanding igneous processes, is enhanced by 3D analysis of crystal size. XRCT shows promising results when applied to volcanic rocks, but has seen only limited application to intrusive igneous rocks. In this study, we compare the 2D and 3D dimensional metrology of crystals in a Tugtutoq peridotite sample, using a thin section and 3D tomography data, to investigate how much the 3D data differ from the 2D data and to test the utility of XRCT on a dense cumulate. The tomography data are processed in four steps: i) post-processing, which includes filtering noise and correcting artefacts linked to the XRCT acquisition; ii) segmentation of the cumulus phase in the sample (in this case olivine); iii) separation of the segmented olivine into realistic, discrete crystals; and iv) extraction of the data from the 3D-separated olivine crystals (size, location and distribution in the sample, shape, orientation, …). The results of the tomography scan of the peridotite sample are close to the thin section data, but the error associated with the tomography data is difficult to quantify. That error is likely high because of the low contrast in attenuation coefficients between the crystal populations, linked to the similar densities of the mineral phases in the sample, and the close proximity and extensive contact between the olivine crystals. When applied to cumulate rocks, the method cannot be automated to produce reproducible and objective results, but we suggest applying it to tomography data from either volcanic rocks or cumulates solidified from a more permeable mush, where there is minimal contact between the crystals of interest and the density contrast between phenocrysts and groundmass is high.
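
    The four processing steps lend themselves to a compact sketch. The Python below, using SciPy and scikit-image, mirrors the sequence (denoise, segment, separate touching crystals with a distance-transform watershed, extract per-crystal metrics) on a synthetic volume; the particular filters, the Otsu threshold and the watershed parameters are generic stand-ins, not the settings used in this study.

```python
# Sketch of the four XRCT processing steps on a synthetic volume (illustrative only).
import numpy as np
from scipy import ndimage as ndi
from skimage import feature, filters, measure, segmentation


def process_volume(volume: np.ndarray) -> list:
    # (i) Post-processing: a Gaussian filter stands in for noise/artefact removal.
    smooth = filters.gaussian(volume, sigma=1)
    # (ii) Segmentation of the cumulus phase by Otsu thresholding.
    binary = smooth > filters.threshold_otsu(smooth)
    # (iii) Separation of touching crystals: watershed on the distance transform.
    distance = ndi.distance_transform_edt(binary)
    peaks = feature.peak_local_max(distance, min_distance=3, labels=measure.label(binary))
    markers = np.zeros(binary.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    crystals = segmentation.watershed(-distance, markers, mask=binary)
    # (iv) Extraction of per-crystal metrics (voxel counts, equivalent diameters).
    return [{"label": r.label,
             "voxels": int(r.area),
             "equiv_diameter": (6.0 * r.area / np.pi) ** (1.0 / 3.0)}
            for r in measure.regionprops(crystals)]


if __name__ == "__main__":
    # Two overlapping synthetic spheres stand in for touching olivine crystals.
    z, y, x = np.ogrid[:40, :40, :40]
    sphere1 = (z - 15) ** 2 + (y - 20) ** 2 + (x - 20) ** 2 < 100
    sphere2 = (z - 25) ** 2 + (y - 20) ** 2 + (x - 20) ** 2 < 100
    print(process_volume((sphere1 | sphere2).astype(float)))
```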

    Methods and algorithms for quantitative analysis of metallomic images to assess traumatic brain injury

    The primary aim of this thesis is to develop image processing algorithms to quantitatively determine the link between traumatic brain injury (TBI) severity and chronic traumatic encephalopathy (CTE) neuropathology, specifically looking into the role of blood-brain barrier disruption following TBI. In order to causally investigate the relationship between the tau protein neurodegenerative disease CTE and TBI, mouse models of blast neurotrauma (BNT) and impact neurotrauma (INT) are investigated. First, a high-speed video tracking algorithm is developed based on K-means clustering, active contours and Kalman filtering to comparatively study head kinematics in blast and impact experiments. Then, to compare BNT and INT neuropathology, methods for quantitative analysis of macroscopic optical images and fluorescent images are described. The secondary aim of this thesis focuses on developing methods for a novel application of metallomic imaging mass spectrometry (MIMS) to biological tissue. Unlike traditional modalities used to assess neuropathology, which suffer from limited sensitivity and analytical capacity, MIMS uses a mass spectrometer, an analytical instrument for measuring elements and isotopes with high dynamic range, sensitivity and specificity, as the imaging sensor to generate spatial maps with spectral (vector-valued) data per pixel. Given the vector nature of MIMS data, a unique end-to-end processing pipeline is designed to support data acquisition, visualization and interpretation. A novel multi-modal and multi-channel image registration (MMMCIR) method, using multivariate mutual information as a similarity metric, is developed in order to establish correspondence between two images of arbitrary modality. The MMMCIR method is then used to automatically segment MIMS images of the mouse brain and systematically evaluate the levels of relevant elements and isotopes after experimental closed-head impact injury on the impact (ipsilateral) and opposing (contralateral) sides of the brain. This method quantifiably confirms observed differences in gadolinium levels for a cohort of images. Finally, MIMS images of human lacrimal sac biopsy samples are used for preliminary clinicopathological assessments, supporting the utility of the unique insights MIMS provides by correlating areas of inflammation with areas of elevated toxic metals. The image processing methods developed in this work demonstrate the significant capabilities of MIMS and its role in enhancing our understanding of the underlying pathological mechanisms of TBI and other medical conditions.
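
    To illustrate the registration idea behind MMMCIR, the sketch below scores a candidate alignment of two images of different modalities with mutual information computed from their joint intensity histogram, and searches integer translations exhaustively. It is a toy single-channel stand-in; the multivariate metric and the optimization used in the thesis are not reproduced here.

```python
# Sketch of mutual information as a registration similarity metric (illustrative only).
import numpy as np


def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Mutual information of two equally shaped images via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))


def best_shift(fixed: np.ndarray, moving: np.ndarray, max_shift: int = 5):
    """Exhaustive search over integer translations that maximize MI (toy optimizer)."""
    best, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(fixed, shifted)
            if mi > best_mi:
                best, best_mi = (dy, dx), mi
    return best, best_mi


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fixed = rng.normal(size=(64, 64))
    # A second "modality": a nonlinear remapping of the fixed image, shifted by 3 rows.
    moving = np.roll(np.tanh(fixed) + 0.05 * rng.normal(size=(64, 64)), 3, axis=0)
    print(best_shift(fixed, moving))               # expect a shift close to (-3, 0)
```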

    Statistical Modelling and Variability of the Subtropical Front, New Zealand

    Ocean fronts are narrow zones of intense dynamic activity that play an important role in global ocean-atmosphere interactions. Of particular significance is the circumglobal frontal system of the Southern Ocean, where intermediate water masses are formed; heat, salt, nutrients and momentum are redistributed; and carbon dioxide is absorbed. The northern limit of this frontal band is marked by the Subtropical Front, where subtropical gyre water converges with colder subantarctic water. Owing to their highly variable nature in both space and time, ocean fronts are notoriously difficult features to sample adequately using traditional in-situ techniques. We therefore propose a new statistical modelling approach to detecting and monitoring ocean fronts from AVHRR SST images. Weighted local likelihood is used to provide a nonparametric description of spatial variations in the position and strength of individual fronts within an image. Although we use the new algorithm on AVHRR data, it is suitable for other satellite data or model output. The algorithm is used to study the spatial and temporal variability of a localized section of the Subtropical Front past New Zealand, known locally as the Southland Front. Twenty-one years (January 1985 to December 2005) of estimates of the front's position, temperature and strength are examined using cross-correlation and wavelet analysis to investigate the role that remote atmospheric and oceanic forcing related to the El Niño-Southern Oscillation may play in interannual frontal variability. Cold (warm) anomalies are observed at the Southland Front three to four months after peak El Niño (La Niña) events. The gradient of the front changes one to two seasons in advance of extreme ENSO events, suggesting that it may be used as a precursor to changes in the Southern Oscillation. There are strong seasonal dependencies in the correlation between ENSO indices and frontal characteristics. In addition, the frequency and phase relationships are inconsistent, indicating that no single physical mechanism or mode of climate variability is responsible for the teleconnection.
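
    The lag analysis can be illustrated with a plain lagged cross-correlation between an ENSO index and a frontal anomaly series, as in the Python sketch below; the synthetic series and the built-in four-month lag are invented for illustration and are not the study's data.

```python
# Sketch of a lagged cross-correlation between an ENSO index and a frontal SST series.
import numpy as np


def lagged_correlation(index: np.ndarray, front: np.ndarray, max_lag: int = 12):
    """Pearson correlation of `front` against `index` leading by 0..max_lag months."""
    out = []
    for lag in range(max_lag + 1):
        x = index[: len(index) - lag]              # index leads `front` by `lag` months
        y = front[lag:]
        out.append((lag, float(np.corrcoef(x, y)[0, 1])))
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    months = 252                                   # 21 years of monthly data
    enso = rng.normal(size=months)                 # stand-in ENSO index
    # Synthetic frontal anomaly responding 4 months after the index, plus noise.
    front = -0.8 * np.roll(enso, 4) + 0.3 * rng.normal(size=months)
    front[:4] = rng.normal(size=4)                 # discard values wrapped by np.roll
    lags = lagged_correlation(enso, front)
    print(min(lags, key=lambda t: t[1]))           # strongest negative correlation at lag 4
```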

    In situ particle size instrumentation for improved parameterisation and validation of estuarine sediment transport models.

    In estuaries containing cohesive sediment, flocculation and break-up of the suspended particles during the tidal cycle have implications for the monitoring and modelling of sediment transport. Monitoring of suspended sediment concentration using in situ optical or acoustic instruments is problematic, since the amount of light or sound scattered from the suspended sediment is proportional to both the suspended concentration and the size of the particles. Numerical sediment transport models are heavily reliant upon such concentration data. Particle size variation also directly affects model parameterisation by influencing the settling velocity. A critical review of current particle sizing techniques shows that in situ imaging offers the best option in terms of cost, accuracy and versatility. This thesis presents a new, low-cost video-based instrument for measuring the in situ particle size distribution. The system uses two CCTV cameras to view a total size range of 4 to 3000 μm. Illumination is provided by miniature microsecond flash units. These allow blur-free images of particles to be obtained in current speeds of up to 1.4 m/s, which are saved to hard disk at frame rates of up to 10 frames per second. The instrument package is designed for small-boat operation and deployment in profile mode. Calculation of size and shape parameters is accomplished in software using automated image-processing algorithms. An efficient and accurate edge coincidence technique is developed to detect in-focus particles. Instrument performance is evaluated through a case study of the Blyth estuary (Suffolk, UK). Particle size data from a small reach of the estuary are presented for both a spring and a neap tide. The process of flocculation is clearly shown, and a semi-empirical model of particle size variation is derived based on turbulent intensity and suspended sediment concentration. The modelled sizes are used to derive settling velocity data for a 2DH model of sediment transport using a simplified model of floc density. Model output is improved compared with using a fixed value of settling velocity. Two distinct particle size subpopulations are observed, which affect both the settling velocity and the calibration of ADCP backscatter data for sediment concentration between flood and ebb. In addition, rapid resuspension of bed material at the beginning of the flood tide is successfully simulated using a two-layer bed model. It is concluded that the new instrument is a valuable aid to the monitoring and modelling of sediment transport.
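
    A common way to turn measured floc sizes into settling velocities for a transport model is Stokes' law combined with an excess density that decreases with floc size (fractal scaling), sketched below; the fractal dimension, primary-particle size and fluid properties are generic values, not the simplified floc-density model calibrated for the Blyth estuary.

```python
# Sketch of Stokes settling velocity with a fractal floc density (generic values).
GRAVITY = 9.81          # m s^-2
VISCOSITY = 1.0e-3      # dynamic viscosity of water, Pa s
RHO_WATER = 1000.0      # kg m^-3
RHO_MINERAL = 2650.0    # kg m^-3, quartz-like primary particles
D_PRIMARY = 4e-6        # primary particle diameter, m
FRACTAL_DIM = 2.0       # typical floc fractal dimension


def floc_excess_density(d: float) -> float:
    """Excess density (kg m^-3) of a floc of diameter d (m) under fractal scaling."""
    return (RHO_MINERAL - RHO_WATER) * (D_PRIMARY / d) ** (3.0 - FRACTAL_DIM)


def settling_velocity(d: float) -> float:
    """Stokes settling velocity (m/s) for a floc of diameter d (m)."""
    return floc_excess_density(d) * GRAVITY * d ** 2 / (18.0 * VISCOSITY)


if __name__ == "__main__":
    for d_um in (20, 100, 500):                    # sizes within the instrument's range
        ws = settling_velocity(d_um * 1e-6)
        print(f"{d_um:4d} um -> {ws * 1000:.3f} mm/s")
```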

    Proceedings of ICMMB2014

    Ocean Remote Sensing with Synthetic Aperture Radar

    The ocean covers approximately 71% of the Earth's surface, 90% of the biosphere and contains 97% of Earth's water. Synthetic aperture radar (SAR) can image the ocean surface in all weather conditions, day or night. SAR remote sensing for ocean and coastal monitoring has become a research hotspot in geoscience and remote sensing. This book, Progress in SAR Oceanography, provides an update on the current state of the science of ocean remote sensing with SAR. Overall, the book presents a variety of marine applications, such as oceanic surface and internal waves, wind, bathymetry, oil spills, coastline and intertidal zone classification, detection of ships and other man-made objects, and the assimilation of remotely sensed data. The book is aimed at a wide audience, ranging from graduate students, university teachers and working scientists to policy makers and managers. Efforts have been made to highlight general principles as well as state-of-the-art technologies in the field of SAR Oceanography.