7 research outputs found

    Cause-Effect Inference in Location-Scale Noise Models: Maximum Likelihood vs. Independence Testing

    A fundamental problem of causal discovery is cause-effect inference: learning the correct causal direction between two random variables. Significant progress has been made by modelling the effect as a function of its cause and a noise term, which allows us to leverage assumptions about the class of generating functions. The recently introduced heteroscedastic location-scale noise functional models (LSNMs) combine expressive power with identifiability guarantees. LSNM model selection based on maximizing likelihood achieves state-of-the-art accuracy when the noise distributions are correctly specified. However, through an extensive empirical evaluation, we demonstrate that accuracy deteriorates sharply when the form of the noise distribution is misspecified by the user. Our analysis shows that the failure occurs mainly when the conditional variance in the anti-causal direction is smaller than that in the causal direction. As an alternative, we find that causal model selection through residual independence testing is much more robust to noise misspecification and misleading conditional variance.
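    The contrast between the two selection criteria can be illustrated with a small sketch: fit a flexible regression in each candidate direction and keep the direction whose residuals are most independent of the putative cause. This is not the paper's implementation; the kernel-ridge regressor, the biased HSIC statistic, and the simulated location-scale data below are illustrative assumptions.

        # Illustrative sketch only (not the paper's method): causal direction via
        # residual independence testing with a simple HSIC score.
        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        def rbf_gram(x, sigma):
            d2 = (x[:, None] - x[None, :]) ** 2
            return np.exp(-d2 / (2 * sigma ** 2))

        def hsic(x, y):
            """Biased HSIC estimate with median-heuristic RBF kernels (smaller = more independent)."""
            n = len(x)
            sx = np.median(np.abs(x[:, None] - x[None, :])) + 1e-12
            sy = np.median(np.abs(y[:, None] - y[None, :])) + 1e-12
            K, L = rbf_gram(x, sx), rbf_gram(y, sy)
            H = np.eye(n) - np.ones((n, n)) / n
            return np.trace(K @ H @ L @ H) / (n - 1) ** 2

        def direction_score(cause, effect):
            """Regress effect on cause and score independence between cause and residuals."""
            model = KernelRidge(kernel="rbf", alpha=0.1).fit(cause[:, None], effect)
            resid = effect - model.predict(cause[:, None])
            return hsic(cause, resid)

        rng = np.random.default_rng(0)
        x = rng.normal(size=300)
        y = np.tanh(x) + 0.2 * (1 + 0.5 * np.abs(x)) * rng.normal(size=300)  # location-scale noise

        s_xy, s_yx = direction_score(x, y), direction_score(y, x)
        print("inferred direction:", "X -> Y" if s_xy < s_yx else "Y -> X")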

    Design of large polyphase filters in the Quadratic Residue Number System


    Temperature aware power optimization for multicore floating-point units


    Robust density modelling using the Student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly reduce recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM that uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
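    The robustness argument can be seen in a few lines: fitting a Gaussian and a Student's t-distribution to the same outlier-contaminated feature values shows how strongly the Gaussian estimate is pulled away. This is only a sketch of the motivation, not the paper's t-mixture HMM; the synthetic data and contamination levels below are assumptions.

        # Minimal sketch: Gaussian vs Student's t fit on outlier-contaminated data.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        clean = rng.normal(loc=5.0, scale=1.0, size=500)      # "true" feature distribution
        outliers = rng.normal(loc=50.0, scale=5.0, size=25)   # extraction/tracking failures
        data = np.concatenate([clean, outliers])

        mu_gauss, sigma_gauss = stats.norm.fit(data)
        df_t, loc_t, scale_t = stats.t.fit(data)

        print(f"Gaussian mean estimate : {mu_gauss:.2f}")  # pulled towards the outliers
        print(f"Student-t location     : {loc_t:.2f}")     # stays close to 5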

    Machine learning for the subsurface characterization at core, well, and reservoir scales

    The development of machine learning techniques and the digitization of subsurface geophysical/petrophysical measurements provide a new opportunity for industries focused on the exploration and extraction of subsurface earth resources, such as oil, gas, coal, geothermal energy, mining, and sequestration. With more data and more computational power, the traditional methods for subsurface characterization and engineering adopted by these industries can be automated and improved. New phenomena can be discovered, and new understanding may be acquired from the analysis of big data. The studies conducted in this dissertation explore the possibility of applying machine learning to improve the characterization of geological materials and geomaterials. Accurate characterization of subsurface hydrocarbon reservoirs is essential for economical oil and gas reservoir development. The characterization of reservoir formations requires the integrated interpretation of data from different sources. Large-scale seismic measurements, intermediate-scale well logging measurements, and small-scale core sample measurements help engineers understand the characteristics of hydrocarbon reservoirs. Seismic data acquisition is expensive, and core samples are sparse and have limited volume. Consequently, well log acquisition provides essential information that improves seismic analysis and core analysis. However, well logging data may be missing due to financial or operational challenges or may be contaminated by the complex downhole environment. At the near-wellbore scale, I address the data-constraint problem in reservoir characterization by applying machine learning models to generate synthetic sonic traveltime and NMR logs that are crucial for geomechanical and pore-scale characterization, respectively. At the core scale, I address fracture characterization by processing multipoint sonic wave propagation measurements with machine learning to characterize the dispersion, orientation, and distribution of cracks embedded in a material. At the reservoir scale, I use reinforcement learning models to achieve automatic history matching, using a fast-marching-based reservoir simulator to estimate the reservoir permeability that controls the pressure transient response of the well. The application of machine learning provides new insights into traditional subsurface characterization techniques. First, by applying shallow and deep machine learning models, sonic logs and NMR T2 logs can be predicted from other easy-to-acquire well logs with high accuracy. Second, the development of the sonic wave propagation simulator enables the characterization of crack-bearing materials from simple wavefront arrival times. Third, the combination of reinforcement learning algorithms and encapsulated reservoir simulation provides a possible solution for automatic history matching.
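    As a rough illustration of the near-wellbore task (not the dissertation's actual models or data), a regression model can be trained to predict a sonic log from other easy-to-acquire logs; the synthetic features standing in for gamma-ray, density, neutron-porosity, and resistivity logs below are hypothetical.

        # Illustrative sketch: predict a sonic (DTC-like) log from other logs.
        # Feature/target names and the synthetic relationship are assumptions.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(2)
        n = 2000
        X = rng.normal(size=(n, 4))                  # stand-ins for GR, RHOB, NPHI, log10(RT)
        dtc = 80 - 10 * X[:, 1] + 5 * X[:, 2] + rng.normal(scale=2, size=n)  # synthetic sonic log

        X_tr, X_te, y_tr, y_te = train_test_split(X, dtc, test_size=0.3, random_state=0)
        model = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X_tr, y_tr)
        print("held-out R^2:", round(r2_score(y_te, model.predict(X_te)), 3))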

    Courbure discrète : théorie et applications

    The present volume contains the proceedings of the 2013 Meeting on discrete curvature, held at CIRM, Luminy, France. The aim of this meeting was to bring together researchers from various backgrounds, ranging from mathematics to computer science, with a focus on both theory and applications. With 27 invited talks and 8 posters, the conference attracted 70 researchers from all over the world. The challenge of finding common ground on the topic of discrete curvature was met with success, and these proceedings are a testimony to this work.

    Advanced acquisition and reconstruction techniques in magnetic resonance imaging

    Magnetic Resonance Imaging (MRI) is a biomedical imaging modality with outstanding features such as excellent soft tissue contrast and very high spatial resolution. Despite these great properties, MRI suffers from some drawbacks, such as low sensitivity and long acquisition times. This thesis focuses on providing solutions for the second drawback through the use of compressed sensing methodologies. Compressed sensing is a novel technique that enables the reduction of acquisition times and can also improve spatiotemporal resolution and image quality. Compressed sensing surpasses the traditional limits of Nyquist sampling theory by enabling the reconstruction of images from an incomplete number of acquired samples, provided that 1) the images to reconstruct have a sparse representation in a certain domain, 2) the undersampling applied is random, and 3) specific non-linear reconstruction algorithms are used. Cardiovascular MRI has to overcome many limitations derived from the respiratory and cardiac cycles, and has very strict requirements in terms of spatiotemporal resolution. Hence, any improvement in terms of reducing acquisition times or increasing image quality by means of compressed sensing will be highly beneficial. This thesis aims to investigate the benefits that compressed sensing may provide in two cardiovascular MR applications: the acquisition of small-animal cardiac cine images and the visualization of human coronary atherosclerotic plaques. Cardiac cine in small animals is a widely used approach to assess cardiovascular function. In this work we proposed a new compressed sensing methodology to reduce acquisition times in self-gated cardiac cine sequences. This methodology was developed as a modification of the Split Bregman reconstruction algorithm to include the minimization of Total Variation across both spatial and temporal dimensions. We simulated compressed sensing acquisitions by retrospectively undersampling complete acquisitions. The accuracy of the results was evaluated with functional measurements in both healthy animals and animals with myocardial infarction. The method reached acceleration rates of 10-14 for healthy animals and acceleration rates of around 10 for unhealthy animals. We verified these theoretically feasible acceleration factors in practice by implementing a real compressed sensing acquisition in a 7 T small-animal MR scanner. We demonstrated that acceleration factors around 10 are achievable in practice, close to those obtained in the previous simulations. However, we found some small differences in image quality between simulated and real undersampled compressed sensing reconstructions at high acceleration rates; this might be explained by differences in their sensitivity to motion contamination during acquisition. The second cardiovascular application explored in this thesis is the visualization of atherosclerotic plaques in coronary arteries in humans. Nowadays, in vivo visualization and classification of plaques by MRI is not yet technically feasible. Acceleration techniques such as compressed sensing may greatly contribute to the feasibility of this application in vivo. However, it is advisable to carry out a systematic study of the basic technical requirements for coronary plaque visualization prior to designing specific acquisition techniques.
In simulation studies, we assessed the spatial resolution, SNR, and motion limits required for proper visualization of coronary plaques, and we proposed a new hybrid acquisition scheme that reduces sensitivity to motion. To evaluate the benefits that acceleration techniques might provide, we compared different parallel imaging algorithms and also implemented a compressed sensing methodology that incorporates information from the coil sensitivity profiles of the phased-array coil used. We found that, with the coil setup analyzed, the acceleration benefits were greatly limited by the small size of the FOV of interest. Thus, dedicated phased arrays need to be designed to enhance the benefits that acceleration techniques may provide for coronary artery plaque imaging in vivo.
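    The retrospective-undersampling experiments described above can be sketched in a few lines: mask a fully sampled k-space randomly and reconstruct with a simple iterative soft-thresholding scheme. This sketch assumes a synthetic sparse image and an l1 prior in the image domain purely for illustration; the thesis itself uses a Split Bregman reconstruction with spatial and temporal Total Variation, which is not reproduced here.

        # Minimal sketch of retrospective undersampling + ISTA reconstruction.
        # Assumptions: sparse synthetic "phantom", l1 prior in image domain.
        import numpy as np

        rng = np.random.default_rng(3)
        N = 128
        image = np.zeros((N, N))
        image[rng.integers(0, N, 60), rng.integers(0, N, 60)] = 1.0   # sparse phantom

        k_full = np.fft.fft2(image, norm="ortho")
        mask = rng.random((N, N)) < 0.25                              # ~4x random undersampling
        k_under = k_full * mask                                       # retrospective undersampling

        def soft(z, t):
            """Complex soft-thresholding, the proximal operator of the l1 norm."""
            mag = np.abs(z)
            return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * z, 0)

        x = np.zeros((N, N), dtype=complex)
        lam = 0.05
        for _ in range(100):                                          # ISTA iterations
            grad = np.fft.ifft2(mask * np.fft.fft2(x, norm="ortho") - k_under, norm="ortho")
            x = soft(x - grad, lam)

        zero_filled = np.fft.ifft2(k_under, norm="ortho")
        print("zero-filled error:", np.linalg.norm(np.abs(zero_filled) - image))
        print("CS recon error   :", np.linalg.norm(np.abs(x) - image))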