
    Phytoplankton functional types from Space.

    The concept of phytoplankton functional types has emerged as a useful approach to classifying phytoplankton. It finds many applications in addressing some serious contemporary issues facing science and society. Its use is not without challenges, however. As noted earlier, there is no universally accepted set of functional types, and the types used have to be carefully selected to suit the particular problem being addressed. It is important that the sum total of all functional types matches all phytoplankton under consideration. For example, if in a biogeochemical study we classify phytoplankton as silicifiers, calcifiers, DMS-producers and nitrogen fixers, then there is a danger that the study may neglect phytoplankton that do not contribute in any significant way to those functions, but may nevertheless be significant contributors to, say, primary production. Such considerations often lead to the adoption of a category of “other phytoplankton” in models, with no clear defining traits assigned to them, but which are nevertheless necessary to close budgets on phytoplankton processes. Since this group is a collection of all phytoplankton that defy classification according to a set of traits, it is difficult to model their physiological processes. Our understanding of the diverse functions of phytoplankton is still growing, and as we recognize more functions, there will be a need to balance the desire to incorporate the increasing number of functional types in models against the observational challenges of identifying and mapping them adequately. Modelling approaches to dealing with increasing functional diversity have been proposed, for example, using complex adaptive systems theory and a system of infinite diversity, as in the work of Bruggemann and Kooijman (2007). But it is unlikely that remote-sensing approaches will be able to deal with anything but a few prominent functional types. As long as these challenges are explicitly addressed, the functional-type concept should continue to fill a real need to capture, in an economical fashion, the diversity in phytoplankton, and remote sensing should continue to be a useful tool to map them. Remote sensing of phytoplankton functional types is an emerging field whose potential is not fully realised, nor its limitations clearly established. In this report, we provide an overview of progress to date, examine the advantages and limitations of various methods, and outline suggestions for further development. The overview provided in this chapter is intended to set the stage for detailed considerations of remote-sensing applications in later chapters. In the next chapter, we examine various in situ methods that exist for observing phytoplankton functional types, and how they relate to remote-sensing techniques. In the subsequent chapters, we review the theoretical and empirical bases for the existing and emerging remote-sensing approaches; assess knowledge about the limitations, assumptions, and likely accuracy or predictive skill of the approaches; provide some preliminary comparative analyses; and look towards future prospects with respect to algorithm development, validation studies, and new satellite missions.

    The relationship between early and intermediate level spatial vision during typical development and in autism spectrum disorder

    Most studies investigating visual perception in typically developing populations and in Autism Spectrum Disorder (ASD) have assessed lower (local) and higher (global) levels of processing in isolation. However, much less is known about the developmental interactions between mechanisms mediating early- and intermediate-level vision in both typically developing populations and in ASD. Based on this premise, the present thesis had two main objectives. The first objective (Study 1) was to evaluate the developmental interplay between low and intermediate levels of visual analysis at different periods of typical development (school age, adolescence and adulthood). The second objective (Study 2) was to evaluate the functional relationship between low and intermediate levels of visual analysis in adolescents and adults diagnosed with ASD. Common methodologies were used to assess both objectives. Specifically, sensitivity to slightly curved circles (Radial Frequency Patterns, or RFPs), defined by luminance or texture information, was measured using a two-alternative temporal forced-choice procedure. Results obtained in Study 1 demonstrated that the local information defining an RFP (mediated by intermediate visual mechanisms) differentially affected sensitivity at different periods of development. Specifically, when the contour was luminance-defined, children performed worse than adolescents and adults only when RFPs targeted a global processing style (few deformations along the RFP's contour). When RFPs were texture-defined, children's sensitivity was worse than that of adolescents and adults for both local and global conditions. Therefore, the timing of adult-like sensitivity to RFPs depends on the type of local physical elements defining their global shape. Poor visual integration between low- and intermediate-level visual mechanisms, which could be attributed to immature feedback and horizontal connections as well as under-developed visual cortical areas, may account for this reduced sensitivity in children. Results obtained from Study 2 demonstrated that manipulating the local physical elements of RFPs impacts visual sensitivity in ASD. Specifically, sensitivity to RFPs is unaffected in ASD only when visual analysis depends on local deformations of luminance-defined contours. However, sensitivity is reduced for both local and global visual analysis when shapes are texture-defined. These results suggest that intermediate-level shape perception in ASD is functionally related to the efficacy with which local physical elements (luminance versus texture) are processed. It is possible that abnormal lateral or feed-forward / feedback connectivity within primary visual areas in ASD, possibly arising from an excitatory / inhibitory signalling imbalance, accounts for the differential efficacy with which luminance and texture information is processed in ASD. These results support the hypothesis that atypical higher-level perception in ASD, when present, may have early (local) visual origins.
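
    A note on how sensitivity of this kind is typically quantified: in a two-alternative forced-choice task, proportion correct rises from chance (50%) with stimulus strength, and a threshold is read off a fitted psychometric function. The sketch below is a generic illustration of that step; the Weibull form, the amplitudes, the data, and the 81.6%-correct criterion are assumptions for the example, not the thesis's exact procedure.

```python
# Hypothetical sketch: estimating a radial-deformation threshold from
# two-alternative forced-choice (2AFC) data with a Weibull psychometric fit.
# The amplitudes, the proportion-correct data, and the threshold criterion
# are illustrative assumptions, not the procedure used in the thesis.
import numpy as np
from scipy.optimize import curve_fit

def weibull_2afc(x, alpha, beta):
    """Proportion correct in 2AFC: guesses at 0.5, saturates near 1."""
    return 0.5 + 0.5 * (1.0 - np.exp(-(x / alpha) ** beta))

# Deformation amplitudes (arbitrary units) and observed proportion correct.
amplitude = np.array([0.002, 0.004, 0.008, 0.016, 0.032, 0.064])
p_correct = np.array([0.52, 0.58, 0.70, 0.86, 0.95, 0.99])

(alpha, beta), _ = curve_fit(weibull_2afc, amplitude, p_correct,
                             p0=[0.01, 2.0], bounds=([1e-4, 0.5], [1.0, 10.0]))

# For this parameterisation, alpha is the amplitude at ~81.6% correct;
# sensitivity is usually reported as its reciprocal.
print(f"threshold amplitude: {alpha:.4f}, sensitivity: {1.0 / alpha:.1f}")
```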

    Mechanotransduction impairment in adolescent idiopathic scoliosis

    Adolescent idiopathic scoliosis (AIS) is a three-dimensional spinal curvature that affects up to 4% of children. As a complex disorder, the cause of AIS is still poorly understood. However, multiple categories of biological factors have been found to be associated with its etiology. The role of biomechanics has been acknowledged by clinicians both in the description of the deformity and in relation to bracing treatments. Bone responses to routinely applied forces are an important part of a tightly regulated network that is necessary for the optimal function of the skeletal system. However, little is known about the mechanotransduction of musculoskeletal tissues in AIS. The main goal of this dissertation was to investigate the contribution of mechanotransduction to the etiology of AIS at the cellular and molecular level. We studied primary osteoblasts obtained intraoperatively from AIS patients and compared them to samples from trauma cases as controls. Oscillatory fluid flow was applied for mechanical induction. Immunofluorescence staining and confocal microscopy were used to assess cilia, actin and cellular function, and molecular changes were followed using RT-PCR or ELISA. We also performed whole exome sequencing (WES) on a cohort of 73 AIS patients and 70 matched controls to test the hypothesis that an accumulation of rare variants in genes involved in cellular mechanotransduction could contribute to AIS etiology. We found abnormal cilia elongation in AIS osteoblasts, whose cilia grew significantly longer than those of controls under ciliogenic conditions. After fluid flow application, AIS cells failed to adjust their cilia length in proportion to the applied force, and under both short- and long-term flow their cilia length adjustment differed significantly from controls. Notably, the elevation in the expression of osteogenic factors normally observed in control osteoblasts was significantly reduced in AIS osteoblasts, suggesting a decrease in their mechanosensitivity. Moreover, transcriptomic analysis following the applied forces revealed altered expression of genes involved in the canonical Wnt pathway. The strain-induced increase in secreted VEGF-A seen in control osteoblasts was not detected in AIS flow-conditioned media. At the genomic level, our SKAT-O analysis of the WES data also supported the involvement of heterogeneous defects in genes pertaining to the cellular mechanotransduction machinery. We tested the consequences of these mechanotransduction abnormalities in a series of functional cellular studies. As expected, and unlike controls, AIS osteoblasts failed to position or elongate themselves in proportion to the bidirectional applied flow. The strain-induced rearrangement of actin filaments was compromised in AIS osteoblasts. Finally, fluid flow was shown to have an inhibitory effect on their migration, contrasting with control cells, which migrated significantly faster under flow. In summary, our data strongly suggest impaired mechanotransduction in AIS osteoblasts that affects cilia, downstream signalling pathways, the cytoskeleton and, finally, the behaviour of the whole cell in response to flow. Fluid flow is one of the main mechanical forces applied physiologically to bone cells, and cellular responses to these stimulations play a critical role in the structure, strength, shape and optimal performance of the skeletal system. Mapping the impaired response profile of scoliotic bone cells can help in designing more efficient therapeutic approaches and in explaining the mechanisms behind less-than-optimal bracing outcomes.
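
    As a simple illustration of the kind of group comparison reported above (cilia length in AIS versus control osteoblasts), the sketch below runs a two-sample test on simulated length measurements. The data and the choice of Welch's t-test are assumptions for the example; the thesis's actual statistical analysis may differ.

```python
# Illustrative sketch only: comparing cilia lengths between AIS and control
# osteoblasts with Welch's t-test. The simulated measurements and the choice
# of test are assumptions for the example, not the thesis's actual analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control_lengths = rng.normal(2.5, 0.4, 60)   # cilium length, µm (simulated)
ais_lengths = rng.normal(3.2, 0.5, 60)       # abnormally elongated cilia (simulated)

t, p = stats.ttest_ind(ais_lengths, control_lengths, equal_var=False)
print(f"mean control {control_lengths.mean():.2f} µm, "
      f"mean AIS {ais_lengths.mean():.2f} µm, p = {p:.2e}")
```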

    Automated shape analysis and visualization of the human back.

    Spinal and back deformities can lead to pain and discomfort, disrupting productivity, and may require prolonged treatment. The conventional method of assessing and monitoring the deformity using radiographs has known radiation hazards. An alternative approach for monitoring the deformity is to base the assessment on the shape of the back surface. Though three-dimensional data acquisition methods exist, techniques to extract relevant information for clinical use have not been widely developed. This thesis presents the content and progression of research into automated analysis and visualization of three-dimensional laser scans of the human back. Using mathematical shape analysis, methods have been developed to compute stable curvature of the back surface and to detect the anatomic landmarks from the curvature maps. Compared with manual palpation, the landmarks have been detected to within an accuracy of 1.15 mm and a precision of 0.81 mm. Based on the detected spinous process landmarks, the back midline, which is the closest surface approximation of the spine, has been derived using constrained polynomial fitting and statistical techniques. Three-dimensional geometric measurements based on the midline were then computed to quantify the deformity. Visualization plays a crucial role in back shape analysis since it enables the exploration of back deformities without the need for physical manipulation of the subject. In the third phase, various visualization techniques have been developed, namely continuous and discrete colour maps, contour maps and three-dimensional views. In the last phase of the research, a software system has been developed for automating the tasks involved in analysing, visualizing and quantifying the back shape. The novel aspects of this research lie in the development of effective noise smoothing methods for stable curvature computation; an improved shape analysis and landmark detection algorithm; effective techniques for visualizing the shape of the back; derivation of the back midline using constrained polynomials; and computation of three-dimensional surface measurements.
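
    To make the curvature-map step concrete, the following sketch computes two standard descriptors, shape index and curvedness, from a height map of a surface, with Gaussian pre-smoothing for stability. The Monge-patch formulas and the smoothing scale are generic textbook choices assumed for illustration, not the thesis's exact algorithm.

```python
# Minimal sketch: curvature maps (shape index, curvedness) of a back-surface
# height map, with Gaussian pre-smoothing for stability. Generic formulas,
# assumed for illustration only.
import numpy as np
from scipy.ndimage import gaussian_filter

def curvature_maps(z, sigma=2.0):
    z = gaussian_filter(z, sigma)            # noise smoothing
    zy, zx = np.gradient(z)                  # first derivatives
    zxy, zxx = np.gradient(zx)               # second derivatives
    zyy, _ = np.gradient(zy)
    # Mean (H) and Gaussian (K) curvature of the Monge patch z = f(x, y).
    denom = 1.0 + zx**2 + zy**2
    H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy) / (2 * denom**1.5)
    K = (zxx * zyy - zxy**2) / denom**2
    # Principal curvatures, then Koenderink shape index and curvedness.
    disc = np.sqrt(np.maximum(H**2 - K, 0.0))
    k1, k2 = H + disc, H - disc               # k1 >= k2
    # Sign convention varies between authors; this is one common form.
    shape_index = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
    curvedness = np.sqrt((k1**2 + k2**2) / 2.0)
    return shape_index, curvedness

z = np.random.rand(256, 256)                  # stand-in for a scanned back surface
si, cv = curvature_maps(z)
```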

    Enhanced clustering analysis pipeline for performance analysis of parallel applications

    Clustering analysis is widely used to group data into the same cluster when they are similar according to specific metrics. We can use cluster analysis to group the CPU bursts of a parallel application, i.e., the regions on each process in between communication calls or calls to the parallel runtime. The resulting clusters are the different computational trends or phases that appear in the application. These clusters are useful for understanding the behaviour of the computational part of the application and for focusing the analyses on those that present performance issues. Although density-based clustering algorithms are a powerful and efficient tool to summarize this type of information, their traditional user-guided clustering methodology has many shortcomings in dealing with the complexity of data, the diversity of data structures, the high dimensionality of data, and the dramatic increase in the amount of data. Consequently, the majority of DBSCAN-like algorithms struggle to handle high-dimensional and/or multi-density data, and they are sensitive to their hyper-parameter configuration. Furthermore, extracting insight from the obtained clusters remains an intuition-driven, manual task. To mitigate these weaknesses, we have proposed a new unified approach that replaces user-guided clustering with an automated clustering analysis pipeline, called the Enhanced Cluster Identification and Interpretation (ECII) pipeline. To build the pipeline, we propose novel techniques, including Robust Independent Feature Selection, Feature Space Curvature Map, Organization Component Analysis, and hyper-parameter tuning, which address feature selection, density homogenization, cluster interpretation, and model selection, the main components of our machine learning pipeline. This thesis contributes four new techniques to the machine learning field, with a particular use case in the performance analytics field. The first contribution is a novel unsupervised approach for feature selection on noisy data, called Robust Independent Feature Selection (RIFS). Specifically, we choose a feature subset that contains most of the underlying information, using the same criteria as independent component analysis; simultaneously, the noise is separated as an independent component. The second contribution of the thesis is a parametric multilinear transformation method to homogenize cluster densities while preserving the topological structure of the dataset, called the Feature Space Curvature Map (FSCM). We present a new Gravitational Self-Organizing Map to model the feature-space curvature by plugging the concepts of gravity and the fabric of space into the Self-Organizing Map algorithm to mathematically describe the density structure of the data. To homogenize the cluster density, we introduce a novel mapping mechanism to project the data from the non-Euclidean curved space to a new Euclidean flat space. The third contribution is a novel topology-based method to study potentially complex, high-dimensional categorized data by quantifying their shapes and extracting fine-grained insights from them to interpret the clustering result. We introduce our Organization Component Analysis (OCA) method for the automatic study of arbitrary cluster shapes without an assumption about the data distribution. Finally, to tune the DBSCAN hyper-parameters, we propose a new tuning mechanism that combines techniques from the machine learning and optimization domains, and we embed it in the ECII pipeline.
Using this cluster-analysis pipeline with the CPU burst data of a parallel application, we provide the developer/analyst with a high-quality detection of the SPMD computation structure, with the added value that it reflects the fine grain of the computation regions.
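
    For context, the sketch below shows the conventional, user-guided step that such a pipeline automates: scaling CPU-burst metrics and clustering them with DBSCAN. The feature names and the eps/min_samples values are assumptions made for the example; choosing them well is exactly the hyper-parameter problem the thesis addresses.

```python
# Illustrative sketch of user-guided density-based clustering of CPU-burst
# metrics with DBSCAN. The features and hyper-parameter values are assumed
# for the example, not taken from the thesis.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# One row per CPU burst: e.g. duration, instructions, IPC (stand-in data).
bursts = np.vstack([
    rng.normal([5.0, 2e8, 1.2], [0.5, 1e7, 0.05], size=(300, 3)),
    rng.normal([20.0, 9e8, 0.7], [2.0, 5e7, 0.05], size=(300, 3)),
])

X = StandardScaler().fit_transform(bursts)        # put metrics on a common scale
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X)

for lab in sorted(set(labels)):
    name = "noise" if lab == -1 else f"cluster {lab}"
    print(f"{name}: {np.sum(labels == lab)} bursts")
```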

    Efficient and Accurate Segmentation of Defects in Industrial CT Scans

    Industrial computed tomography (CT) is an elementary tool for the non-destructive inspection of cast light-metal or plastic parts. Comprehensive testing not only helps to ensure the stability and durability of a part; it also allows reducing the rejection rate by supporting the optimization of the casting process, and saving material (and weight) by producing equivalent but more filigree structures. With a CT scan it is theoretically possible to locate any defect in the part under examination and to exactly determine its shape, which in turn helps to draw conclusions about its harmfulness. However, most of the time the data quality is not good enough to allow segmenting the defects with simple filter-based methods which operate directly on the gray values—especially when the inspection is expanded to the entire production. In such in-line inspection scenarios the tight cycle times further limit the available time for the acquisition of the CT scan, which renders the scans noisy and prone to various artifacts. In recent years, dramatic advances in deep learning (and convolutional neural networks in particular) have made even the reliable detection of small objects in cluttered scenes possible. These methods are a promising approach to quickly yield a reliable and accurate defect segmentation even in unfavorable CT scans. The huge drawback: a lot of precisely labeled training data is required, which is utterly challenging to obtain—particularly in the case of the detection of tiny defects in huge, highly artifact-afflicted, three-dimensional voxel data sets. Hence, a significant part of this work deals with the acquisition of precisely labeled training data. Firstly, we consider facilitating the manual labeling process: our experts annotate on high-quality CT scans with a high spatial resolution and a high contrast resolution, and we then transfer these labels to an aligned "normal" CT scan of the same part, which holds all the challenging aspects we expect in production use. Nonetheless, due to the indecisiveness of the labeling experts about what to annotate as defective, the labels remain fuzzy. Thus, we additionally explore different approaches to generate artificial training data, for which a precise ground truth can be computed. We find accurate labeling to be crucial for proper training. We evaluate (i) domain randomization, which simulates a super-set of reality with simple transformations, (ii) generative models, which are trained to produce samples of the real-world data distribution, and (iii) realistic simulations, which capture the essential aspects of real CT scans. Here, we develop a fully automated simulation pipeline which provides us with an arbitrary amount of precisely labeled training data. First, we procedurally generate virtual cast parts in which we place reasonable artificial casting defects. Then, we realistically simulate CT scans which include typical CT artifacts like scatter, noise, cupping, and ring artifacts. Finally, we compute a precise ground truth by determining for each voxel the overlap with the defect mesh. To determine whether our realistically simulated CT data is eligible to serve as training data for machine learning methods, we compare the prediction performance of learning-based and non-learning-based defect recognition algorithms on the simulated data and on real CT scans. In an extensive evaluation, we compare our novel deep learning method to a baseline of image processing and traditional machine learning algorithms.
This evaluation shows how much defect detection benefits from learning-based approaches. In particular, we compare (i) a filter-based anomaly detection method which finds defect indications by subtracting the original CT data from a generated "defect-free" version, (ii) a pixel-classification method which, based on densely extracted hand-designed features, lets a random forest decide whether an image element is part of a defect or not, and (iii) a novel deep learning method which combines a U-Net-like encoder-decoder pair of three-dimensional convolutions with an additional refinement step. The encoder-decoder pair yields a high recall, which allows us to detect even very small defect instances. The refinement step yields a high precision by sorting out the false positive responses. We extensively evaluate these models on our realistically simulated CT scans as well as on real CT scans in terms of their probability of detection, which tells us at which probability a defect of a given size can be found in a CT scan of a given quality, and their intersection over union, which gives us information about how precise our segmentation mask is in general. While the learning-based methods clearly outperform the image processing method, the deep learning method in particular stands out for its inference speed and its prediction performance on challenging CT scans, such as those occurring in in-line scenarios. Finally, we further explore the possibilities and the limitations of the combination of our fully automated simulation pipeline and our deep learning model. With the deep learning method yielding reliable results for CT scans of low data quality, we examine by how much we can reduce the scan time while still maintaining proper segmentation results. Then, we take a look at the transferability of the promising results to CT scans of parts of different materials and different manufacturing techniques, including plastic injection molding, iron casting, additive manufacturing, and composed multi-material parts. Each of these tasks comes with its own challenges, such as an increased artifact level or different types of defects that are occasionally hard to detect even for the human eye. We tackle these challenges by employing our simulation pipeline to produce virtual counterparts that capture the tricky aspects, and by fine-tuning the deep learning method on this additional training data. With that, we can tailor our approach towards specific tasks, achieving reliable and robust segmentation results even for challenging data. Lastly, we examine whether the deep learning method, based on our realistically simulated training data, can be trained to distinguish between different types of defects—the reason why we require a precise segmentation in the first place—and whether it can detect out-of-distribution data, where its predictions become less trustworthy, i.e., provide an uncertainty estimation.
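
    The two evaluation measures mentioned above can be made concrete with a short sketch: a voxel-wise intersection over union and a per-defect hit rate as a simplified stand-in for the probability of detection. The 50%-overlap criterion for counting a defect as found is an assumption for the example, not necessarily the criterion used in this work.

```python
# Simplified sketch of voxel-wise IoU and a per-defect hit rate for 3D
# segmentation masks. The overlap criterion is an assumption for the example.
import numpy as np
from scipy import ndimage

def voxel_iou(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def defect_hit_rate(pred, truth, min_overlap=0.5):
    labeled, n = ndimage.label(truth)          # connected ground-truth defects
    hits = 0
    for i in range(1, n + 1):
        defect = labeled == i
        if np.logical_and(pred, defect).sum() / defect.sum() >= min_overlap:
            hits += 1
    return hits / n if n else 1.0

truth = np.zeros((64, 64, 64), dtype=bool)
truth[10:14, 10:14, 10:14] = True              # a small synthetic defect
pred = np.roll(truth, 1, axis=0)               # slightly offset prediction
print(voxel_iou(pred, truth), defect_hit_rate(pred, truth))
```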

    A feature-based reverse engineering system using artificial neural networks

    Reverse Engineering (RE) is the process of reconstructing CAD models from scanned data of a physical part acquired using 3D scanners. RE has attracted a great deal of research interest over the last decade. However, a review of the literature reveals that most research has focused on the creation of free-form surfaces from point cloud data. Representing geometry in terms of surface patches is adequate for capturing positional information, but it cannot capture any of the higher-level structure of the part. Reconstructing solid models is important since the resulting solid models can be directly imported into commercial solid modellers for various manufacturing activities such as process planning, integral property computation, assembly analysis, and other applications. This research presents a novel methodology for extracting geometric features directly from a data set of 3D scanned points, which utilises the concepts of artificial neural networks (ANNs). In order to design and develop a generic feature-based RE system for prismatic parts, the following five main tasks were investigated: (1) point data processing algorithms; (2) edge detection strategies; (3) a feature recogniser using ANNs; (4) a feature extraction module; (5) a module for exchanging CAD models with other CAD/CAM systems via IGES. A key feature of this research is the incorporation of ANNs in feature recognition. The use of the ANN approach has enabled the development of a flexible feature-based RE methodology that can be trained to deal with new features. ANNs require parallel input patterns. In this research, four geometric attributes extracted from a point set are input to the ANN module for feature recognition: a chain-code attribute, a convex/concave attribute, a circular/rectangular attribute, and an open/closed attribute. Recognising each feature requires the determination of these attributes, and new and robust algorithms are developed for determining them for each of the features. This feature-based approach currently focuses on solving the feature recognition problem for 2.5D shapes such as block pocket, step, slot, hole, and boss, which are common and crucial in mechanical engineering products. The approach is validated using a set of industrial components, and the test results show that the strategy for recognising features is reliable.
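
    As a toy illustration of the recognition step described above, the sketch below encodes the four geometric attributes as a numeric vector and trains a small neural network to map them to feature classes. The encoding, the training rows, and the class set are invented for illustration; the thesis's actual ANN architecture and attribute representation may differ.

```python
# Toy sketch: a small neural network mapping geometric attributes to 2.5D
# feature classes. The attribute encoding and training rows are invented for
# illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier

# [chain-code curvature score (normalised), convex=1/concave=0,
#  circular=1/rectangular=0, open=1/closed=0]
X = np.array([
    [0.2, 0, 0, 0],   # rectangular, closed, concave depression -> pocket
    [0.9, 0, 1, 0],   # circular, closed, concave depression    -> hole
    [0.1, 0, 0, 1],   # rectangular, open, concave depression   -> slot
    [0.9, 1, 1, 0],   # circular, closed, convex protrusion     -> boss
])
y = np.array(["pocket", "hole", "slot", "boss"])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
clf.fit(X, y)
print(clf.predict([[0.85, 0, 1, 0]]))   # likely "hole"
```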

    Application of Advanced MRI to Fetal Medicine and Surgery

    Robust imaging is essential for comprehensive preoperative evaluation, prognostication, and surgical planning in the field of fetal medicine and surgery. This is a challenging task given the small fetal size and the increased fetal and maternal motion, which affect MRI spatial resolution. This thesis explores the clinical applicability of post-acquisition processing using MRI advances such as super-resolution reconstruction (SRR) to generate optimal 3D isotropic volumes of anatomical structures by mitigating unpredictable fetal and maternal motion artefact. It paves the way for automated, robust, accurate and rapid segmentation of the fetal brain. This enables a hierarchical analysis of volume, followed by a local surface-based shape analysis (joint spectral matching) using mathematical markers (curvedness, shape index) that infer gyrification. This allows for more precise quantitative measurements and the calculation of longitudinal correspondences in cortical brain development. I explore the potential of these MRI advances in three clinical settings: fetal brain development in the context of fetal surgery for spina bifida, airway assessment in fetal tracheolaryngeal obstruction, and the placental-myometrial-bladder interface in placenta accreta spectrum (PAS). For the fetal brain, these MRI advances provided an understanding of the impact of intervention on cortical development, which may improve fetal candidate selection, neurocognitive prognostication, and parental counselling. This is of critical importance given that spina bifida fetal surgery is now a clinical reality and is routinely performed globally. For the fetal trachea, SRR can provide improved anatomical information to better select those pregnancies where an EXIT procedure is required to enable the fetal airway to be secured in a timely manner. This would improve maternal and fetal morbidity outcomes associated with haemorrhage and hypoxic brain injury. Similarly, in PAS, SRR may assist surgical planning by providing enhanced anatomical assessment and prediction of adverse peri-operative maternal outcomes such as bladder injury, catastrophic obstetric haemorrhage and maternal death.

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate the study of vascular development in the neonatal period, a set of image analysis algorithms has been developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed; this causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is assessed with a detailed landmark study. To facilitate the study of cortical development, a registration algorithm for aligning cortical surfaces is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
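
    For orientation, the sketch below shows a generic EM-based intensity classification with a Gaussian mixture, the starting point that the dissertation's method extends with an explicit correction for mislabelled partial-volume voxels. The simulated intensities and three-class setup are assumptions for the example.

```python
# Generic sketch of EM-based intensity classification: fit a Gaussian mixture
# to voxel intensities and label each voxel with its most probable class.
# This omits spatial priors and the explicit partial-volume correction that
# the dissertation adds for the neonatal (inverted-contrast) case.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in intensities for three tissue classes (e.g. CSF, GM, WM).
intensities = np.concatenate([
    rng.normal(30, 5, 4000),
    rng.normal(70, 6, 6000),
    rng.normal(110, 7, 5000),
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)
labels = gmm.predict(intensities)              # hard label per voxel
posteriors = gmm.predict_proba(intensities)    # soft memberships (EM E-step)
print(np.sort(gmm.means_.ravel()))             # recovered class means
```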