274 research outputs found

    Depth-based Multi-View 3D Video Coding


    Characterization of Carotid Plaques with Ultrasound Non-Invasive Vascular Elastography (NIVE): Feasibility and Correlation with High-Resolution Magnetic Resonance Imaging

    Stroke is a leading cause of death and morbidity worldwide, and a significant proportion of strokes are caused by carotid atherosclerotic plaque rupture. Prevention of stroke in patients with carotid plaque poses a significant challenge to physicians, as the risks and benefits of surgical or medical treatments remain equivocal. Many imaging techniques have been developed to identify and study vulnerable (high-risk) atherosclerotic plaques, but none is sufficiently validated or accessible for population screening. Non-invasive vascular elastography (NIVE) is a novel ultrasonic technique that maps carotid plaque strain (elasticity) characteristics to detect plaque vulnerability; it has not yet been clinically validated. The goal of this project was to evaluate the ability of ultrasound NIVE strain analysis to characterize carotid plaque composition and vulnerability in vivo in patients with significant plaque burden, using high-resolution MRI as the reference standard. Undertaking this study required a thorough understanding of stroke, atherosclerosis, vulnerable plaque, and current non-invasive carotid plaque imaging techniques. Thirty-one subjects underwent NIVE and high-resolution MRI of the internal carotid arteries. Of the 31 plaques, 9 were symptomatic, 17 contained lipid, and 7 were vulnerable on MRI. Strains were significantly lower in plaques containing a lipid core than in those without lipid, with high sensitivity and moderate specificity. A quadratic relationship was found between strain and lipid content. Strains did not discriminate symptomatic patients or vulnerable plaques. In conclusion, ultrasound NIVE is feasible in patients with significant carotid stenosis and can detect the presence of a lipid core. Further studies of plaque progression with NIVE are required to identify vulnerable plaques.
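    The quadratic relationship between strain and lipid content reported above can be illustrated with an ordinary least-squares polynomial fit. The sketch below is purely illustrative: the variable names and data points are hypothetical, not values from the study.

```python
import numpy as np

# Hypothetical measurements (illustrative only): lipid content (% of plaque
# area) and mean axial strain (%) for a handful of plaques.
lipid_pct = np.array([0.0, 5.0, 10.0, 20.0, 35.0, 50.0])
strain_pct = np.array([1.20, 0.95, 0.70, 0.45, 0.35, 0.40])

# Second-order (quadratic) least-squares fit: strain ~ a*lipid^2 + b*lipid + c
a, b, c = np.polyfit(lipid_pct, strain_pct, deg=2)
print(f"strain ~ {a:.4f}*lipid^2 + {b:.4f}*lipid + {c:.4f}")

# Predicted strain for a plaque with 25% lipid content (hypothetical input).
print(np.polyval([a, b, c], 25.0))
```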

    Health systems data interoperability and implementation

    Objective: The objective of this study was to use machine learning and health standards to address the problem of clinical data interoperability across healthcare institutions. Addressing this problem has the potential to make clinical data comparable, searchable and exchangeable between healthcare providers.

    Data sources: Structured and unstructured data were used to conduct the experiments in this study. The data were collected from two disparate sources, MIMIC-III and NHANES. The MIMIC-III database stores data from two electronic health record systems, CareVue and MetaVision. The data recorded in these systems do not follow the same standards and are therefore not directly comparable: some values conflict, one system stores an abbreviation of a clinical concept while the other stores the full concept name, and some attributes contain missing information. These issues make this data a good candidate for this study. From the identified sources, laboratory, physical examination, vital signs, and behavioural data were used.

    Methods: This research employed the CRISP-DM framework as a guideline for all stages of data mining. Two sets of classification experiments were conducted, one for structured data and the other for unstructured data. For the first experiment, edit distance, TF-IDF and Jaro-Winkler were used to calculate similarity weights between two datasets, one coded with the LOINC terminology standard and the other uncoded. Similar record pairs were classified as matches while dissimilar pairs were classified as non-matches, and the Soundex indexing method was used to reduce the number of candidate comparisons. Three classification algorithms were then trained and tested, and the performance of each was evaluated with the ROC curve. The second experiment aimed to extract patients' smoking status from a clinical corpus. A sequence-labelling algorithm, conditional random fields (CRF), was used to learn the relevant concepts from the corpus, with word embedding, random indexing, and word-shape features used to capture its meaning.

    Results: After optimizing all model parameters through v-fold cross-validation on a sampled training set of structured data, only 8 of the 24 features were selected for the classification task. RapidMiner was used to train and test all the classification algorithms. In the final run of the classification process, the remaining contenders were the SVM and the decision tree classifier; the SVM yielded an accuracy of 92.5% with its parameters tuned. These results were obtained after more relevant features were identified, since the classifiers had been observed to be biased on the initial data. The unstructured data were annotated with the UIMA Ruta scripting language and then used to train a model with CRFsuite, which ships with the CLAMP toolkit. The CRF classifier obtained an F-measure of 94.8% for the "nonsmoker" class, 83.0% for "currentsmoker", and 65.7% for "pastsmoker"; performance improved as more relevant data were added. The results point to the need for FHIR resources for exchanging clinical data between healthcare institutions: FHIR is free, and it uses profiles to extend coding standards, a RESTful API to exchange messages, and JSON, XML and Turtle to represent messages. Data could be stored in JSON format in a NoSQL database such as CouchDB, making them available for further post-extraction exploration.

    Conclusion: This study has provided a method by which a computer algorithm can learn a clinical coding standard and then apply that learned standard to unstandardized data, making the data exchangeable, comparable and searchable and ultimately achieving data interoperability. Although the study was applied on a limited scale, future work would explore the standardization of patients' long-lived data from multiple sources using the SHARPn open-source tools and data-scaling platforms. Information Science; M. Sc. (Computing).
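    As an illustration of the structured-data matching step described above (similarity scores between coded and uncoded entries, thresholded into matches and non-matches), here is a minimal sketch using a hand-rolled normalized edit distance. The lab-test strings, threshold and helper names are hypothetical; the thesis pipeline also used TF-IDF, Jaro-Winkler and Soundex blocking, which are omitted here.

```python
# Minimal sketch (not the thesis pipeline): uncoded lab-test names are compared
# against LOINC-coded display names with a normalized edit-distance score, and
# pairs above a threshold are treated as candidate matches.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Edit distance normalized to a 0..1 similarity score."""
    a, b = a.lower().strip(), b.lower().strip()
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

# Hypothetical uncoded entries vs. LOINC-coded display names.
uncoded = ["hemoglobin a1c", "serum potasium"]
loinc_coded = ["Hemoglobin A1c", "Potassium, Serum"]

THRESHOLD = 0.45  # illustrative cut-off between "match" and "non-match"
for u in uncoded:
    best = max(loinc_coded, key=lambda c: similarity(u, c))
    label = "match" if similarity(u, best) >= THRESHOLD else "non-match"
    print(f"{u!r} -> {best!r} ({similarity(u, best):.2f}, {label})")
```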

    Afar, Ethiopia: a local seismic survey

    A network of four independently-recording seismic stations was operated by the University of Durham in South-Central Afar during 1973 and 1974. Each station consisted of a three-component set of seismometers whose signals were recorded on magnetic tape. This study concerns local earthquakes recorded from February to September 1974. 250 earthquakes were located from relative arrival times of P and S phases using an optimized, laterally homogeneous, 4-layer structural model. Upper crustal P-wave velocities are found to be 4.4±0.2 km s⁻¹ (0 to 4.5 km depth) and 6.2±0.1 km s⁻¹ (4.5 to 11 km). Deeper structure is poorly constrained. Anomalous upper mantle exists, with low seismic velocity (Vp about 7.4 km s⁻¹) and raised Poisson's ratio (0.31). Sn is transmitted; 8.0 km s⁻¹ upper mantle cannot exist above about 43 km depth. Earthquake focal depths within Afar do not exceed 5 km. Epicentres correlate well with Recent axial volcanism. Spatial epicentral patterns reflect intense regional NW-SE extensional faulting. One line of epicentres shows the NNE-SSW trend of the Main Ethiopian rift. Focal mechanisms are very poorly constrained, but are consistent with NW-SE strike-slip or normal faulting, or with NE-SW dextral transcurrent faulting. Signal-duration magnitude and Richter local magnitude scales are defined for Afar; frequency-magnitude b-coefficient values are 0.87±0.05. The three-component records are polarization filtered, a technique previously applied only to teleseisms, and the performance of the filters is discussed. Azimuths and apparent angles of incidence of events are determined from their first arrivals at a single recording station; hypocentres are then obtained by ray tracing. Earthquake frequency spectra are computed through the fast Fourier transform. The spectra are dominated by the effects of the superficial crust below the receivers; crustal transfer ratios are discussed. Increased attenuation is demonstrated below the Tendaho graben. Seismic source parameters are calculated using Brune's (1970) method. All results are consistent with diffuse NE-SW crustal extension. It is concluded that well-defined spreading axes do not yet exist.
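    For context on the frequency-magnitude b-coefficient quoted above: it is the slope b of the Gutenberg-Richter relation log10 N = a − bM, commonly estimated with Aki's (1965) maximum-likelihood formula. The sketch below applies that estimator to a synthetic catalogue, not the Afar data.

```python
import numpy as np

# Gutenberg-Richter relation: log10(N) = a - b*M, where N is the number of
# events with magnitude >= M. The Aki (1965) maximum-likelihood estimate is
# b = log10(e) / (mean(M) - Mc), with Mc the catalogue completeness magnitude.

def b_value(magnitudes: np.ndarray, mc: float) -> float:
    """Maximum-likelihood b-value for events at or above completeness Mc."""
    m = magnitudes[magnitudes >= mc]
    return np.log10(np.e) / (m.mean() - mc)

# Illustrative synthetic catalogue (not the Afar data): exponentially
# distributed magnitudes above Mc = 2.0, generated to correspond to b ~ 0.9.
rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(scale=np.log10(np.e) / 0.9, size=500)
print(f"estimated b ~ {b_value(mags, mc=2.0):.2f}")
```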

    Exploiting random projections and sparsity with random forests and gradient boosting methods - Application to multi-label and multi-output learning, random forest model compression and leveraging input sparsity

    Within machine learning, the supervised learning field aims at modeling the input-output relationship of a system from past observations of its behavior. Decision trees characterize the input-output relationship through a series of nested "if-then-else" questions, the testing nodes, leading to a set of predictions, the leaf nodes. Several such trees are often combined for state-of-the-art performance: random forest ensembles average the predictions of randomized decision trees trained independently in parallel, while tree boosting ensembles train decision trees sequentially to refine the predictions made by the previous ones. The emergence of new applications requires supervised learning algorithms that scale, in computational power and memory, with the number of inputs, outputs, and observations, without sacrificing accuracy. In this thesis, we identify three main areas where decision tree methods could be improved, for which we provide and evaluate original algorithmic solutions: (i) learning over high-dimensional output spaces, (ii) learning with large sample datasets and stringent memory constraints at prediction time, and (iii) learning over high-dimensional sparse input spaces. A first approach to learning tasks with a high-dimensional output space, called binary relevance or single target, is to train one decision tree ensemble per output. However, it completely neglects the potential correlations between the outputs. An alternative approach, called multi-output decision trees, fits a single decision tree ensemble targeting all outputs simultaneously, assuming that all outputs are correlated. Nevertheless, both approaches (i) have exactly the same computational complexity and (ii) target extreme output correlation structures. In our first contribution, we show how to combine random projection of the output space, a dimensionality reduction method, with the random forest algorithm, decreasing the learning time complexity. Accuracy is preserved, and may even be improved by reaching a different bias-variance tradeoff. In our second contribution, we first formally adapt the gradient boosting ensemble method to multi-output supervised learning tasks such as multi-output regression and multi-label classification, and then propose to combine single random projections of the output space with gradient boosting on such tasks to adapt automatically to the output correlation structure. The random forest algorithm often generates large ensembles of complex models thanks to the availability of a large number of observations. However, the space complexity of such models, proportional to their total number of nodes, is often prohibitive, so these models are not well suited to stringent memory constraints at prediction time. In our third contribution, we propose to compress these ensembles by solving an L1-based regularization problem over the set of indicator functions defined by all their nodes. Some supervised learning tasks have a high-dimensional but sparse input space, where each observation has non-zero values for only a few input variables. Standard decision tree implementations are not well adapted to sparse input spaces, unlike other supervised learning techniques such as support vector machines or linear models. In our fourth contribution, we show how to exploit input-space sparsity algorithmically within decision tree methods. Our implementation yields a significant speed-up on both synthetic and real datasets while producing exactly the same model, and it reduces the memory required to grow such models by using sparse rather than dense storage for the input matrix.
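    A rough sketch of the idea behind the first contribution (random projections of the output space combined with random forests) is shown below. It is a simplification, not the thesis algorithm, which randomizes projections inside the tree-growing procedure: here the outputs are projected once with a Gaussian random projection, a multi-output forest is trained on the projected targets, and predictions are decoded back through the pseudo-inverse of the projection matrix. Dataset sizes and parameters are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.random_projection import GaussianRandomProjection

# Hypothetical multi-output regression problem with a high-dimensional output space.
X, Y = make_regression(n_samples=500, n_features=20, n_targets=100, random_state=0)

# Project the 100-dimensional output space onto 10 random Gaussian directions.
proj = GaussianRandomProjection(n_components=10, random_state=0)
Z = proj.fit_transform(Y)                      # shape (n_samples, 10)

# Fit a multi-output random forest on the projected targets (cheaper than on Y).
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, Z)

# Decode predictions back to the original output space with the pseudo-inverse
# of the projection matrix (components_ has shape (10, 100)).
Z_hat = forest.predict(X)
Y_hat = Z_hat @ np.linalg.pinv(proj.components_.T)
print(Y_hat.shape)                             # (500, 100)
```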

    Numerical Simulation in Biomechanics and Biomedical Engineering

    In the first contribution, Morbiducci and co-workers discuss the theoretical and methodological bases supporting Lagrangian- and Euler-based methods, highlighting their application to cardiovascular flows. The second contribution, by the Ansón and van Lenthe groups, proposes an automated virtual bench test for evaluating the stability of custom shoulder implants without the need for mechanical testing. In the third paper, Urdeitx and Doweidar also adopt the finite element method, developing a computational model aimed at studying cardiac cell behavior under mechano-electric stimulation. In the fourth contribution, Ayensa-Jiménez et al. develop a methodology to approximate the multidimensional probability density function of the parametric analysis obtained from a mathematical model of cancer evolution. The fifth paper is oriented towards topological data analysis: the group of Cueto and Chinesta designs a predictive model capable of estimating the state of drivers using data collected from motion sensors. In the sixth contribution, the Ohayon and Finet group uses wall shear stress-derived descriptors to study the role of recirculation in arterial restenosis under different malapposed and overlapping stent conditions. In the seventh contribution, the research group of Antón demonstrates that simulation time for cardiovascular numerical analysis can be reduced by an adequate geometry-reduction strategy applicable to truncated patient-specific arteries. In the eighth paper, Grasa and Calvo present a numerical model based on the finite element method for simulating extraocular muscle dynamics. The ninth paper, authored by Kahla et al., presents a mathematical mechano-pharmaco-biological model for bone remodeling. In the tenth paper, Martínez, Peña, and co-workers propose a methodology to calibrate the dissection properties of the aortic layers, with the aim of providing useful information for reliable numerical tools. In the eleventh contribution, Martínez-Bocanegra et al. present the structural behavior of a foot model using a detailed finite element model. The twelfth contribution is centered on the methodology for building a finite element-based numerical model of a hydroxyapatite 3D-printed bone scaffold. In the thirteenth paper, Talygin and Gorodkov present analytical expressions describing swirling jets for cardiovascular applications. In the fourteenth contribution, Schenkel and Halliday propose a novel non-Newtonian particle transport model for red blood cells. Finally, Zurita et al. propose a parametric numerical tool for analyzing a customized silicone 3D-printable tracheobronchial prosthesis.

    Contact Damage on Ceramic Laminates

    The use of ceramic materials in many industrial fields is widespread and ever-increasing, owing to their excellent mechanical, thermal, tribological and biological properties. However, their intrinsic brittleness and lack of reliability are obstacles to their further adoption in applications where structural resistance is required. Building multilayered composite structures is a promising way to increase the reliability of ceramics. As is common in composites, layered materials can achieve mechanical properties superior to those of the constituent materials, in the case studied here due to compressive residual stresses at the surface caused by the thermal expansion mismatch between layers. The best applications for such materials are those related to surface properties; for this reason the response to contact loading is especially important to characterize the mechanical properties and to assist in the design of advanced ceramic composites. Hertzian indentation techniques provide a powerful tool to study this type of loading, which is otherwise difficult to characterize with traditional mechanical testing methodologies. Contact damage in brittle materials appears mainly as surface ring cracks, which can develop into the characteristic cone crack. Such cracking is detrimental to the functionality of the material and can lead to the failure of the component. Tough ceramics often present another type of damage, so-called quasi-plasticity, generated as subsurface microcracking and a cause of inelastic deformation. In this thesis, alumina-based ceramic laminates were characterized with respect to their resistance to contact damage in all its aspects, from the appearance of surface fissures, to the propagation of brittle cracks in the first layer and its influence on the material strength, to contact-loading-induced failure. Experimental measurements were coupled with Finite Element analysis of the parameters involved, which assisted in formulating comprehensive guidelines for the correct characterization and design of advanced multilayered ceramics. The presence of residual stress in ceramic laminates proved effective in improving the resistance to ring cracking under monotonic, cyclic and static loading. The high resistance under static loading revealed the existence of grain bridging hindering crack formation, unexpected in fine-grained alumina and attributed to the small-crack character of the ring crack relative to the microstructure. Long-duration cyclic tests showed that more severe surface damage appears in the multilayered materials than in the monolithic one, suggesting a shift of the predominant damage mode towards quasi-plasticity-derived surface degradation. Propagation of long cone cracks is affected by residual stress in both their length and angle. An automated Finite Element model of crack propagation made it possible to predict crack growth as a function of both the extrinsic residual stresses and microstructural parameters, and helped address the long-open question of the cone crack angle in polycrystalline materials. The response to remote loading of indented materials, in other words the strength degradation, is conditioned by the cone crack geometry as well as by other factors deriving from the laminated structure, such as the residual stress itself and the load redistribution due to the elastic mismatch between layers. Similarly, the contact strength, i.e. the resistance to local blunt compression, is improved in the laminated materials as a consequence of the residual stresses. Nevertheless, the risk of high stress in the inner tensile layers was highlighted for both types of loading, and general considerations on the design of laminated materials were proposed. Overall, a comprehensive characterization of the contact-mechanical properties of the studied materials was achieved, and the understanding of crack propagation in brittle polycrystalline materials was broadened and improved.
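    For reference, the Hertzian (blunt, spherical) indentation discussed above is governed by the classical Hertz contact relations. The sketch below evaluates the contact radius and peak contact pressure for illustrative values; the load, sphere radius and elastic constants are made up, not taken from the thesis.

```python
import math

# Classical Hertz relations for a sphere (radius R) pressed with load P on a
# flat half-space: 1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2,
# contact radius a = (3*P*R / (4*E*))^(1/3), peak pressure p0 = 3*P / (2*pi*a^2).
# The numbers below are purely illustrative (e.g. a WC sphere on alumina).

def hertz_contact(P, R, E1, nu1, E2, nu2):
    """Return contact radius a [m] and peak contact pressure p0 [Pa]."""
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    a = (3.0 * P * R / (4.0 * E_star)) ** (1.0 / 3.0)
    p0 = 3.0 * P / (2.0 * math.pi * a**2)
    return a, p0

a, p0 = hertz_contact(P=500.0,              # applied load, N
                      R=2.5e-3,             # sphere radius, m
                      E1=600e9, nu1=0.22,   # indenter (e.g. WC)
                      E2=380e9, nu2=0.22)   # alumina substrate
print(f"contact radius ~ {a*1e6:.0f} um, peak pressure ~ {p0/1e9:.1f} GPa")
```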