
    Automating the Reconstruction of Neuron Morphological Models: the Rivulet Algorithm Suite

    The automatic reconstruction of single neuron cells is essential to enable large-scale data-driven investigations in computational neuroscience. The problem remains an open challenge due to the various imaging artefacts caused by the fundamental limits of light microscopic imaging. Few previous methods have been able to generate satisfactory neuron reconstruction models automatically, without human intervention. Manual tracing of neuron models is labour-heavy and time-consuming, making the collection of large-scale neuron morphology databases one of the major bottlenecks in morphological neuroscience. This thesis presents a suite of algorithms developed to address the challenge of automatically reconstructing neuron morphological models with minimal human intervention. We first propose the Rivulet algorithm, which iteratively backtracks the neuron fibres from their terminus points to the soma centre. By refining many details of Rivulet, we then propose the Rivulet2 algorithm, which not only eliminates several hyper-parameters but also improves robustness on noisy images. A soma surface reconstruction method is also proposed to make the neuron models biologically plausible around the soma body. The tracing algorithms, including Rivulet and Rivulet2, normally need one or more hyper-parameters for segmenting the neuron body out of the noisy background. To make this pipeline fully automatic, we propose to train a 2.5D neural network to enhance the curvilinear structures of the neuron fibres. The trained networks quickly highlight the fibres of interest and suppress background noise for the neuron tracing algorithms. We evaluated the proposed methods on the data released by both the DIADEM and the BigNeuron challenges. The experimental results show that our proposed tracing algorithms achieve state-of-the-art results.
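    The backtracking idea at the heart of Rivulet can be illustrated with a short sketch. This is a hypothetical simplification, not the thesis implementation: it assumes a precomputed travel-time map tmap (for example, from fast marching seeded at the soma) and simply descends its gradient from a detected terminus, whereas the actual Rivulet/Rivulet2 algorithms add branch erasure, online confidence checks, and stopping criteria.

    ```python
    import numpy as np

    def backtrack(tmap, terminus, soma, step=0.5, tol=1.5, max_iter=10000):
        """Trace one fibre by descending a travel-time map from a terminus to the soma.

        tmap     : 2D array of travel times from the soma (e.g. fast marching output)
        terminus : (row, col) starting point, a detected fibre end
        soma     : (row, col) soma centre; tracing stops once within tol of it
        """
        gy, gx = np.gradient(tmap)                 # finite-difference gradient field
        p = np.asarray(terminus, dtype=float)
        path = [p.copy()]
        for _ in range(max_iter):
            r, c = np.clip(p.round().astype(int), 0, np.array(tmap.shape) - 1)
            g = np.array([gy[r, c], gx[r, c]])
            if np.linalg.norm(g) < 1e-12:          # flat region: cannot descend further
                break
            p = p - step * g / np.linalg.norm(g)   # step against the gradient, toward the soma
            path.append(p.copy())
            if np.linalg.norm(p - np.asarray(soma)) < tol:
                break                              # reached the soma neighbourhood
        return np.array(path)

    # Toy field: a radial travel-time map whose minimum plays the role of the soma.
    yy, xx = np.mgrid[0:64, 0:64]
    tmap = np.hypot(yy - 32.0, xx - 32.0)
    trace = backtrack(tmap, terminus=(5, 60), soma=(32.0, 32.0))
    print(len(trace), "points traced")
    ```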

    Data Fusion of Surface Meshes and Volumetric Representations

    The term Data Fusion refers to integrating knowledge from at least two independent sources of information such that the result is more than merely the sum of all inputs. In our project, the knowledge about a given specimen comprises its acquisitions from optical 3D scans and Computed Tomography, with a special focus on limited-angle artifacts. In industrial quality inspection, these imaging techniques are commonly used for non-destructive testing. Additional sources of information are digital descriptions for manufacturing, or tactile measurements of the specimen. Hence, we have several representations comprising the object as a whole, each with certain shortcomings and unique insights. We strive to combine all their strengths and compensate their weaknesses in order to create an enhanced representation of the acquired object. To achieve this, the first task is to identify correspondences between the representations. We extract a subset with prominent exterior features from each input, because all acquisitions include these features. To this end, regional queries from random seeds on an enclosing hull are employed. Subsequently, the relative orientation of the original data sets is calculated based on their subsets, as those comprise the potentially defective areas of overlap. We consider global features such as principal components and barycenters for the alignment, since in this specific case classical point-to-point comparisons are prone to error. Our alignment scheme outperforms traditional approaches and can be further enhanced by considering limited-angle artifacts in the reconstruction process of Computed Tomography. An analysis of local gradients in the resulting volumetric representation allows us to distinguish between reliable observations and defects. Lastly, tactile measurements are extremely accurate but lack a suitable 3D representation. Thus, we also present an approach for converting them into a 3D surface that suits our workflow. As a result, the respective inputs are aligned with each other, indicate the quality of the included information, and are in a compatible format to be combined in a subsequent step. The data fusion result permits more accurate metrological tasks and increases the precision of detecting production flaws or indications of wear. The final step of combining the data sets is briefly presented here along with the resulting augmented representation, but its entirety and details are the subject of another PhD thesis within our joint project.
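    As a minimal sketch of the alignment step, assuming two roughly overlapping point clouds already extracted from the exterior-feature subsets (the regional-query extraction and the limited-angle artifact handling are omitted), a coarse rigid registration from barycenters and principal components could look like this:

    ```python
    import numpy as np

    def pca_align(source, target):
        """Coarse rigid alignment of two (N x 3) point clouds using
        barycenters and principal axes instead of point-to-point matching."""
        mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
        # Principal axes from the SVD of the centred clouds.
        _, _, vs = np.linalg.svd(source - mu_s, full_matrices=False)
        _, _, vt = np.linalg.svd(target - mu_t, full_matrices=False)
        rot = vt.T @ vs                      # rotate source axes onto target axes
        if np.linalg.det(rot) < 0:           # keep a proper rotation, no reflection
            vs[-1] *= -1
            rot = vt.T @ vs
        trans = mu_t - rot @ mu_s            # translation matching the barycenters
        return rot, trans

    # Usage: aligned = source @ rot.T + trans
    ```

    Principal axes carry sign ambiguities, so a practical variant would score the candidate axis orientations against the resulting overlap before accepting one.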

    Novel Cardiac Mapping Approaches and Multimodal Techniques to Unravel Multidomain Dynamics of Complex Arrhythmias Towards a Framework for Translational Mechanistic-Based Therapeutic Strategies

    Cardiac arrhythmias are a major problem for health systems in the developed world due to their high incidence and prevalence as the population ages. Atrial fibrillation (AF) and ventricular fibrillation (VF) are amongst the most complex arrhythmias seen in clinical practice. Clinical consequences of such arrhythmic disturbances include complex cardio-embolic events in AF, and dramatic repercussions due to sustained life-threatening fibrillatory processes with subsequent neurological damage under VF, leading to cardiac arrest and sudden cardiac death (SCD). However, despite the technological advances of the last decades, their intrinsic mechanisms are incompletely understood and, to date, therapeutic strategies lack a sufficient mechanistic basis and have low success rates. Most of the progress in developing optimal biomarkers and novel therapeutic strategies in this field has come from valuable techniques in the research of arrhythmia mechanisms. Amongst the mechanisms involved in the induction and perpetuation of cardiac arrhythmias such as AF, dynamic high-frequency re-entrant and focal sources, in their different modalities, are thought to be the primary sources underlying the arrhythmia. However, little is known about the attractors and spatiotemporal dynamics of such primary fibrillatory sources, specifically the dominant rotational or focal sources maintaining the arrhythmia. Therefore, a computational platform was developed for understanding the active, passive and structural determinants, and the modulators, of such dynamics. This allowed establishing a framework for understanding the complex multidomain dynamics of rotors, with emphasis on their deterministic properties, to develop mechanistic approaches for diagnostic aid and therapy. Understanding fibrillatory processes is key to developing physiologically and clinically relevant scores and tools for early diagnostic aid. Specifically, spectral and time-frequency properties of fibrillatory processes have been shown to highlight the major deterministic behaviour of the intrinsic mechanisms underlying the arrhythmias and the impact of such arrhythmic events. Using prior knowledge, signal processing, machine learning techniques and data analytics, we aimed at developing a reliable mechanistic risk score for comatose survivors of cardiac arrest due to VF. Cardiac optical mapping and electrophysiological mapping techniques have proven to be invaluable resources for shaping new hypotheses and developing novel mechanistic approaches and therapeutic strategies. For many years this technology has allowed testing new pharmacological or ablative therapeutic strategies, and developing multidomain methods to accurately track arrhythmia dynamics, identifying dominant sources and attractors. Even though panoramic mapping is the primary method for simultaneously tracking electrophysiological parameters, its adoption by the multidisciplinary cardiovascular research community is limited mainly by the cost of the technology. Taking advantage of recent technological advances, we focus on developing and validating low-cost, high-resolution optical mapping systems for panoramic cardiac imaging, using clinically relevant models for basic research and bioengineering.

    Calvo Saiz, CJ. (2022). Novel Cardiac Mapping Approaches and Multimodal Techniques to Unravel Multidomain Dynamics of Complex Arrhythmias Towards a Framework for Translational Mechanistic-Based Therapeutic Strategies [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/182329
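    One spectral biomarker recurring in this line of work is the dominant frequency of fibrillatory activity. A minimal, hypothetical sketch (not the thesis code) of extracting it from a single optical-mapping or electrogram trace with Welch's method:

    ```python
    import numpy as np
    from scipy.signal import welch

    def dominant_frequency(trace, fs, fmin=1.0, fmax=30.0):
        """Dominant frequency (Hz) of a fibrillatory trace via Welch's periodogram."""
        f, pxx = welch(trace, fs=fs, nperseg=min(len(trace), 2048))
        band = (f >= fmin) & (f <= fmax)     # restrict to a plausible fibrillation band
        return f[band][np.argmax(pxx[band])]

    # Toy trace: a 7 Hz rotor-like component buried in noise, sampled at 500 Hz.
    fs = 500.0
    t = np.arange(0, 4, 1 / fs)
    trace = np.sin(2 * np.pi * 7.0 * t) + 0.5 * np.random.randn(t.size)
    print(dominant_frequency(trace, fs))     # expected: close to 7.0
    ```

    Applied pixel-wise to panoramic optical-mapping recordings, such a per-trace measure yields dominant-frequency maps that help localize candidate driver sources.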

    Resolving Ambiguities in Monocular 3D Reconstruction of Deformable Surfaces

    In this thesis, we focus on the problem of recovering the 3D shape of deformable surfaces from a single camera. This problem is known to be ill-posed, since for a given 2D input image there exist many 3D shapes that give visually identical projections. We present three methods that make headway towards resolving these ambiguities, and we believe that our work represents a significant step towards making surface reconstruction methods practical. First, we propose a surface reconstruction method that overcomes the limitations of state-of-the-art template-based and non-rigid structure-from-motion methods. We neither track points over many frames, nor require a sophisticated deformation model, nor depend on a reference image. In our method, we establish correspondences between pairs of frames in which the shape is different and unknown. We then estimate homographies between corresponding local planar patches in both images. These yield approximate 3D reconstructions of the points within each patch up to a scale factor. Since we consider overlapping patches, we can enforce consistency over the whole surface. Finally, a local deformation model is used to fit a triangulated mesh to the 3D point cloud, which makes the reconstruction robust to both noise and outliers in the image data. Second, we propose a novel approach to recovering the 3D shape of a deformable surface from monocular input by taking advantage of shading information in more generic contexts than conventional Shape-from-Shading (SfS) methods. This includes surfaces that may be fully or partially textured and lit by arbitrarily many light sources. To this end, given a lighting model, we learn the relationship between a shading pattern and the corresponding local surface shape. At run time, we first use this knowledge to recover the shape of surface patches and then enforce spatial consistency between the patches to produce a global 3D shape. Instead of treating texture as noise as many SfS approaches do, we exploit it as an additional source of information. We validate our approach quantitatively and qualitatively using both synthetic and real data. Third, we introduce a constrained latent variable model that inherently accounts for geometric constraints, such as inextensibility, defined on the mesh model. To this end, we learn a non-linear mapping from the latent space to the output space, which corresponds to the vertex positions of a mesh model, such that the generated outputs comply with equality and inequality constraints expressed in terms of the problem variables. Since its output is encouraged to satisfy such constraints inherently, our model removes the need for computationally expensive methods that enforce these constraints at run time. In addition, our approach is completely generic and could be used in many other contexts, such as image classification, to impose separation of the classes, and articulated tracking, to constrain the space of possible poses.
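    The per-patch homography step of the first method can be sketched with OpenCV. This is an illustrative fragment under stated assumptions (matched keypoints pts1/pts2 inside one roughly planar patch and an intrinsic matrix K are given; candidate pruning and the cross-patch consistency enforcement described above are omitted):

    ```python
    import cv2
    import numpy as np

    def patch_motion(pts1, pts2, K):
        """Estimate the up-to-scale motion and normal of one planar patch.

        pts1, pts2 : (N x 2) float32 arrays of matched keypoints in two frames
        K          : (3 x 3) camera intrinsic matrix
        """
        H, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
        # The decomposition returns up to four (R, t, n) candidates; t is only
        # defined up to scale, mirroring the scale ambiguity discussed above.
        n_sol, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
        return Rs, ts, normals
    ```

    In a full pipeline, visibility (cheirality) tests and the overlap between neighbouring patches are what disambiguate and rescale these candidates.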

    Learning From Multi-Frame Data

    Multi-frame data-driven methods bear the promise that aggregating multiple observations leads to better estimates of target quantities than a single (still) observation. This thesis examines how data-driven approaches such as deep neural networks should be constructed to improve over their single-frame counterparts. Besides algorithmic changes, for example in the design of artificial neural network architectures or in the algorithm itself, such an examination is inextricably linked to the synthesis of synthetic training data of meaningful size (even when no annotations are available) and quality (when real ground-truth acquisition is not possible) that captures all temporal effects with high fidelity. We start by introducing a new algorithm that accelerates a non-parametric learning algorithm with a GPU-adapted implementation of nearest-neighbour search. It clearly surpasses previously known approaches and empirically shows that the generated data can be managed within reasonable time and that several inputs can be processed in parallel, even under hardware restrictions. Building on a learning-based solution, we introduce a novel training protocol that bridges the need for carefully curated training data and demonstrates better performance and robustness than non-parametric nearest-neighbour search via temporal video alignments. Effective learning in the absence of labels is required when dealing with larger amounts of data that are easy to capture but infeasible, or at least costly, to label. In addition, we show new ways to generate plausible and realistic synthesized data, and their indispensability when it comes to closing the gap to expensive and almost infeasible real-world acquisition. These eventually achieve state-of-the-art results in classical image processing tasks such as reflection removal and video deblurring.
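    A minimal sketch of the GPU-accelerated nearest-neighbour idea, written here as a brute-force search in PyTorch rather than the thesis's own implementation:

    ```python
    import torch

    def gpu_knn(queries, database, k=1):
        """Brute-force k-nearest-neighbour search, on the GPU when available."""
        device = "cuda" if torch.cuda.is_available() else "cpu"
        q = torch.as_tensor(queries, dtype=torch.float32, device=device)
        db = torch.as_tensor(database, dtype=torch.float32, device=device)
        dists = torch.cdist(q, db)                       # (Q x N) pairwise distances
        dist, idx = dists.topk(k, dim=1, largest=False)  # k smallest per query
        return dist.cpu(), idx.cpu()

    # Usage: dist, idx = gpu_knn(frame_descriptors, reference_descriptors, k=5)
    # Batching the queries keeps several inputs in flight in parallel, which is
    # the property exploited above even under hardware restrictions.
    ```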

    NASA Tech Briefs, April 2000

    Topics covered include: Imaging/Video/Display Technology; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Bio-Medical; Test and Measurement; Mathematics and Information Sciences; Books and Reports

    Preclinical MRI of the Kidney

    This Open Access volume provides readers with a protocol collection and wide-ranging recommendations for preclinical renal MRI used in translational research. The chapters in this book are interdisciplinary in nature and bridge the gaps between physics, physiology, and medicine. They are designed to enhance training in renal MRI sciences and improve the reproducibility of renal imaging research. Chapters provide guidance for exploring, using and developing small animal renal MRI in your laboratory as a unique tool for advanced in vivo phenotyping, diagnostic imaging, and research into potential new therapies. Written in the highly successful Methods in Molecular Biology series format, chapters include introductions to their respective topics, lists of the necessary materials and reagents, step-by-step, readily reproducible laboratory protocols, and tips on troubleshooting and avoiding known pitfalls. Cutting-edge and thorough, Preclinical MRI of the Kidney: Methods and Protocols is a valuable resource for anyone interested in the preclinical aspects of renal and cardiorenal diseases in the fields of physiology, nephrology, radiology, and cardiology. This publication is based upon work from COST Action PARENCHIMA, supported by European Cooperation in Science and Technology (COST). COST (www.cost.eu) is a funding agency for research and innovation networks. COST Actions help connect research initiatives across Europe and enable scientists to grow their ideas by sharing them with their peers, boosting their research, careers and innovation. PARENCHIMA (renalmri.org) is a community-driven Action in the COST program of the European Union, which unites more than 200 experts in renal MRI from 30 countries with the aim of improving the reproducibility and standardization of renal MRI biomarkers.


    [18F]Fluorination of biorelevant arylboronic acid pinacol ester scaffolds synthesized by convergence techniques

    Aim: The development of small molecules through convergent multicomponent reactions (MCR) has been boosted during the last decade due to the ability to synthesize, virtually without any side-products, numerous small drug-like molecules with several degrees of structural diversity.(1) Combining positron emission tomography (PET) labeling techniques with the "one-pot" development of biologically active compounds has the potential to become relevant not only for the evaluation and characterization of those MCR products through molecular imaging, but also for increasing the library of available radiotracers. Therefore, since the [18F]fluorination of arylboronic acid pinacol ester derivatives tolerates electron-poor and electron-rich arenes and various functional groups,(2) the main goal of this research work was to achieve the 18F-radiolabeling of several different molecules synthesized through MCR. Materials and Methods: [18F]Fluorination of boronic acid pinacol esters was first extensively optimized using a benzaldehyde derivative, with respect to the ideal amounts of Cu(II) catalyst and precursor as well as the reaction solvent. Radiochemical conversion (RCC) yields were assessed by TLC-SG. The optimized radiolabeling conditions were subsequently applied to several structurally different MCR scaffolds comprising biologically relevant pharmacophores (e.g. β-lactam, morpholine, tetrazole, oxazole) that were synthesized to specifically contain a boronic acid pinacol ester group. Results: Radiolabeling with fluorine-18 was achieved with volumes (800 μl) and activities (≤ 2 GBq) compatible with most radiochemistry techniques and modules. In summary, an increase in the quantity of precursor or Cu(II) catalyst led to higher conversion yields. Optimal amounts of precursor (0.06 mmol) and Cu(OTf)2(py)4 (0.04 mmol) were defined for further reactions, with DMA being the preferred solvent over DMF. RCC yields from 15% to 76%, depending on the scaffold, were reproducibly achieved. Interestingly, the structure of the scaffolds beyond the arylboronic acid exerts some influence on the final RCC, with electron-withdrawing groups in the para position apparently enhancing the radiolabeling yield. Conclusion: The developed method, with high RCC and reproducibility, has the potential to be applied in line with MCR and could also be incorporated at a later stage of this convergent "one-pot" synthesis strategy. Further studies are currently ongoing to apply this radiolabeling concept to fluorine-containing approved drugs whose boronic acid pinacol ester precursors can be synthesized through MCR (e.g. atorvastatin).

    Annual Report of the Board of Regents of the Smithsonian Institution, showing the operations, expenditures, and condition of the Institution for the year ending June 30, 1898.

    Annual Report of the Smithsonian Institution. 4 Mar. HD 309 (pts. 1 and 2), 55-3, v91-92, 2042p. [3833-3834] Research related to the American Indian