17 research outputs found

    Algorithms for Fluorescence Lifetime Microscopy and Optical Coherence Tomography Data Analysis: Applications for Diagnosis of Atherosclerosis and Oral Cancer

    Get PDF
    With significant progress made in the design and instrumentation of optical imaging systems, it is now possible to perform high-resolution tissue imaging in near real-time. The prohibitively large amount of data obtained from such high-speed imaging systems precludes the possibility of manual data analysis by an expert. The paucity of algorithms for automated data analysis has been a major roadblock in both evaluating and harnessing the full potential of optical imaging modalities for diagnostic applications. This consideration forms the central theme of the research presented in this dissertation. Specifically, we investigate the potential of automated analysis of data acquired from a multimodal imaging system that combines fluorescence lifetime imaging (FLIM) with optical coherence tomography (OCT) for the diagnosis of atherosclerosis and oral cancer. FLIM is a fluorescence imaging technique that is capable of providing information about autofluorescent tissue biomolecules. OCT, on the other hand, is a structural imaging modality that exploits the intrinsic reflectivity of tissue samples to provide high-resolution 3-D tomographic images. Since FLIM and OCT provide complementary information about tissue biochemistry and structure, respectively, we hypothesize that the combined information from the multimodal system would increase the sensitivity and specificity of the diagnosis of atherosclerosis and oral cancer. The research presented in this dissertation can be divided into two main parts. The first part concerns the development and application of algorithms for providing a quantitative description of FLIM and OCT images. The quantitative FLIM and OCT features obtained in the first part of the research are subsequently used to perform automated tissue diagnosis based on statistical classification models. The results of the research presented in this dissertation show the feasibility of using automated algorithms for FLIM and OCT data analysis to perform tissue diagnosis.
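    As an illustration of the kind of quantitative FLIM feature such a pipeline might extract, the sketch below fits a bi-exponential decay model and reports the amplitude-weighted average lifetime. This is a minimal sketch, not the dissertation's actual algorithm; the decay model, initial guesses, and time scale are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Bi-exponential fluorescence decay model."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def average_lifetime(t, decay):
    """Fit a bi-exponential decay and return the amplitude-weighted
    average lifetime, a common scalar FLIM feature."""
    p0 = [decay.max(), 1.0, 0.5 * decay.max(), 5.0]  # illustrative guess (ns)
    (a1, tau1, a2, tau2), _ = curve_fit(biexp, t, decay, p0=p0, maxfev=5000)
    return (a1 * tau1 + a2 * tau2) / (a1 + a2)

# Usage example: a synthetic decay sampled over 25 ns.
t = np.linspace(0.0, 25.0, 256)
decay = biexp(t, 1.0, 0.8, 0.4, 4.2) + np.random.normal(0.0, 0.01, t.size)
print(f"average lifetime ~ {average_lifetime(t, decay):.2f} ns")
```

    A vector of such per-pixel lifetimes, together with OCT-derived structural features, could then feed a standard statistical classifier.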

    Characterization of intraventricular vortices by ultrafast Doppler echocardiography

    Full text link
    Heart disease is one of the leading causes of death in the world (the first cause in North America [192]) and generates high health-care costs for society. The prevalence of heart failure increases dramatically with age and, due to the ageing of the population, will remain a major concern in the future, not only for developed countries but also for developing countries. It is therefore crucial to have a good understanding of its mechanism to obtain an early diagnosis and a better prognosis for patients. Diastolic dysfunction is one form of heart failure and leads to insufficient filling of the ventricle. To better understand the dysfunction, several studies have examined blood motion in the ventricle. It is known that at the beginning of diastole, the filling flow creates a vortex pattern known as a vortex ring. The development of this ring by blood flow after passage through a valve was first described in 1513 by Leonardo da Vinci (Fig. 0.1). After molding a glass phantom of an aorta and adding seeds to visually observe the flow through the phantom, he described the vortex ring developing as blood exits the aortic valve. His work was confirmed 500 years later with the emergence of MRI [66]. The same pattern can be observed in the left ventricle when flow emerges from the mitral valve; this is referred to as the diastolic vortex. The motion of a fluid (in our case, blood) is directly related to its environment: the shape of the ventricle, the shape of the valve, the stiffness of the walls, and so on. There is therefore growing interest in further study of this diastolic vortex, which could yield valuable information on diastolic function. The imaging modalities that can be used to visualize the vortex are MRI and ultrasound. This thesis presents the work carried out to enable a better characterization of the diastolic vortex in the left ventricle by Doppler ultrasound imaging. For temporal monitoring of vortex dynamics, high temporal resolution is required, since ventricular diastole lasts about 0.5 s on average for a resting human heart. The quality of the Doppler signals is also of utmost importance to obtain an accurate estimate of blood flow velocity in the ventricle.
    To study this vortex, we focused on estimating the vorticity at its core and especially on its evolution over time. The work is divided into three parts, and for each of them an article was written: 1. Ultrafast Doppler sequence: The sequence is based on diverging waves, which yield a high frame rate. In combination with vortography, a method to locate the vortex core and derive its vorticity, the vortex dynamics could be tracked over time. This sequence established a proof of concept based on in vitro acquisitions and in vivo acquisitions on healthy human volunteers. 2. Triplex sequence: Building on the ultrafast Doppler sequence, we added information on wall motion. The triplex sequence recovers not only the blood motion at a high frame rate but also tissue Doppler. In the end, we could derive color, tissue, and spectral Doppler, along with a high-quality B-mode image by using motion compensation. The interdependence between vortex and wall dynamics could be highlighted by acquiring all the required parameters over a single cardiac cycle. 3. Automatic clutter filter: Vorticity quantification depends directly on the estimated Doppler velocities. However, due to their low amplitude, blood signals must be filtered: the acquired signals are in fact a sum of tissue and blood signals. Filtering is a critical step for an unbiased and accurate velocity estimation. The last part of this doctoral thesis therefore focused on the design of an efficient filter that exploits the temporal and spatial dimensions of the acquisitions, removing tissue clutter as well as noise. Particular care was taken to automate the filter by applying information criteria from information theory.
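    The automatic clutter filter described above builds on the standard spatiotemporal SVD approach to separating tissue and blood in ultrafast Doppler data. The sketch below shows that baseline with fixed rank cutoffs; the thesis's contribution of choosing the cutoffs automatically via information criteria is not reproduced here, and the function and parameter names are illustrative.

```python
import numpy as np

def svd_clutter_filter(iq, n_clutter, n_noise=0):
    """Spatiotemporal SVD clutter filter for ultrafast Doppler.

    iq        : complex array of shape (nz, nx, nt), beamformed IQ data
    n_clutter : number of leading singular components removed (tissue)
    n_noise   : number of trailing singular components removed (noise)
    """
    nz, nx, nt = iq.shape
    casorati = iq.reshape(nz * nx, nt)            # space x time matrix
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    keep = slice(n_clutter, nt - n_noise if n_noise else nt)
    blood = (u[:, keep] * s[keep]) @ vh[keep, :]  # rank-truncated rebuild
    return blood.reshape(nz, nx, nt)
```

    Tissue concentrates in the first singular components (high amplitude and high spatiotemporal coherence), so removing them leaves the blood signal from which the Doppler velocities, and hence the vorticity, are estimated.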

    Feature based estimation of myocardial motion from tagged MR images

    Get PDF
    In the past few years we witnessed an increase in mortality due to cancer relative to mortality due to cardiovascular diseases. In 2008, the Netherlands Statistics Agency reported that 33,900 people died of cancer against 33,100 deaths due to cardiovascular diseases, making cancer the number one cause of death in the Netherlands [33]. Even though the rate of people affected by heart disease is continually rising, they "simply don't die of it", according to research director Prof. Mat Daemen of research institute CARIM of the University of Maastricht [50]. The reason for this is early diagnosis, and the treatment of people with identified risk factors for diseases like ischemic heart disease, hypertrophic cardiomyopathy, thoracic aortic disease, pericardial (sac around the heart) disease, cardiac tumors, pulmonary artery disease, valvular disease, and congenital heart disease before and after surgical repair. Cardiac imaging plays a crucial role in early diagnosis, since it allows the accurate investigation of a large amount of imaging data in a small amount of time. Moreover, cardiac imaging reduces the costs of inpatient care, as has been shown in recent studies [77]. With this in mind, in this work we have provided several tools to help investigate cardiac motion. In chapters 2 and 3 we have explored a novel variational optic flow methodology based on multi-scale feature points to extract cardiac motion from tagged MR images. Compared to constant-brightness methods, this new approach exhibits several advantages. Although the intensity of critical points is also influenced by fading, critical points retain their characteristics even in the presence of intensity changes, such as in MR imaging. In an experiment in section 5.4 we applied this optic flow approach directly to tagged MR images. A visual inspection confirmed that the extracted motion fields realistically depicted the cardiac wall motion. The method also exploits the advantages of the multi-scale framework. Because the sparse velocity formulas 2.9, 3.7, 6.21, and 7.5 provide a number of equations equal to the number of unknowns, the method does not suffer from the aperture problem in retrieving the velocities associated with the critical points. In chapters 2 and 3 we have moreover introduced a smoothness component of the optic flow equation described by means of covariant derivatives, a novelty in the optic flow literature. Many variational optic flow methods have a smoothness component that penalizes deviations from global assumptions such as isotropic or anisotropic smoothness; in the smoothness term proposed here, deviations from a predefined motion model are penalized. Moreover, the proposed optic flow equation has been decomposed into rotation-free and divergence-free components. This decomposition allows independent tuning of the two components during the vector field reconstruction. The experiments and the table of errors provided in section 3.8 showed that the combination of the smoothness term, influenced by a predefined motion model, and the Helmholtz decomposition in the optic flow equation reduces the average angular error substantially (20%-25%) with respect to a similar technique that employs only standard derivatives in the smoothness term.
    In section 5.3 we extracted the motion field of a phantom for which the ground truth is known and compared the performance of this optic flow method with that of other well-known optic flow methods, such as the Horn and Schunck approach [76], the Lucas and Kanade technique [111], and the tuple-image multi-scale optic flow constraint equation of Van Assen et al. [163]. Tests showed that the proposed optic flow methodology provides the smallest average angular error (AAE = 3.84 degrees) and an L2 norm of 0.1. In this work we also employed the Helmholtz decomposition to study cardiac behavior, since the vector field decomposition allows cardiac contraction and cardiac rotation to be investigated independently. In chapter 4 we carried out an analysis of the cardiac motion of ten volunteers and one patient, in which we estimated the kinetic energy of the different components. This decomposition is useful since it allows the contribution of each single vector field component to the heartbeat to be visualized and quantified. Local measurements of the kinetic energy have also been used to detect areas of the cardiac walls with little movement. Experiments on a patient, and a comparison between a late-enhancement cardiac image and an illustration of the cardiac kinetic energy on a bull's-eye plot, showed that a correspondence exists between an infarcted area and an area with very small kinetic energy. With the aim of extending the proposed optic flow equation to a 3D approach in the future, in chapter 6 we investigated the 3D winding number approach as a tool to locate critical points in volume images. We simplified the mathematics involved with respect to a previous work [150] and provided several examples and applications, such as cardiac motion estimation from 3-dimensional tagged images and follicle and neuronal cell counting. Finally, in chapter 7 we continued our investigation of volume tagged MR images, retrieving the cardiac motion field using a simple 3-dimensional version of the proposed optic flow equation based on standard derivatives. We showed that the retrieved motion fields display the contracting and rotating behavior of the cardiac muscle. We moreover extracted the through-plane component, which provides a realistic illustration of the vector field and is missed by 2-dimensional approaches.
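    Since the Helmholtz decomposition carries much of the analysis above, a minimal sketch may help. The version below projects a 2D motion field onto its curl-free and divergence-free parts in the Fourier domain, assuming periodic boundaries; the thesis's variational implementation with covariant derivatives is not reproduced here.

```python
import numpy as np

def helmholtz_decompose(vx, vy):
    """Split a 2D vector field into curl-free and divergence-free parts
    by spectral projection (periodic boundaries assumed)."""
    ny, nx = vx.shape
    kx = np.fft.fftfreq(nx).reshape(1, nx)
    ky = np.fft.fftfreq(ny).reshape(ny, 1)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                # avoid division by zero
    fx, fy = np.fft.fft2(vx), np.fft.fft2(vy)
    proj = (kx * fx + ky * fy) / k2               # component along k
    cx, cy = kx * proj, ky * proj                 # curl-free part (spectral)
    cx[0, 0], cy[0, 0] = fx[0, 0], fy[0, 0]       # assign mean flow here (convention)
    curl_free = np.real(np.fft.ifft2(cx)), np.real(np.fft.ifft2(cy))
    div_free = vx - curl_free[0], vy - curl_free[1]
    return curl_free, div_free
```

    The curl-free part captures contraction and expansion while the divergence-free part captures rotation, which is what allows the kinetic energy of cardiac contraction and rotation to be quantified separately.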

    Computational modelling of diastole for human ventricle

    Get PDF
    Diastolic heart failure (DHF) with normal systolic pump function is typically observed in the majority of HF patients. DHF changes the regular diastolic behaviour of the left ventricle (LV) and increases the ventricular wall stress. Therefore, normalisation of increased LV wall stress is the cornerstone of many existing and new therapeutic treatments. However, information regarding such regional stress-strain distribution for the human LV is extremely limited in the literature. Thus, the study aimed at estimating the normal range and regional variation of the diastolic stress-strain field in healthy human LVs, and at exploring the influence of fibre structure, geometrical heterogeneity and material properties on passive inflation of the LV. It is envisaged that such information could be used as targets for future in-silico studies to design optimised HF treatments. FE modelling of passive diastolic mechanics was carried out using personalised ventricular geometry, constructed from magnetic resonance imaging (MRI), and a structure-based orthotropic constitutive law. A Laplace-Dirichlet-Region growing-Finite element (LDRF) algorithm was developed to assign the myocardium fibre map on the ventricular geometry. The effect of right ventricle (RV) deformation, which has not been taken into account by the majority of researchers due to modelling simplification, was investigated for the first time by comparing the results predicted by bi-ventricle (BV) and single-LV models constructed from the aforementioned MRI data. In addition, personalised in-vivo measurement of fibre structure, which may differ between individual subjects and in diseased conditions, is still an open question. Therefore, a sensitivity analysis of LV diastolic mechanics to the details of the fibre structure was carried out for the first time using eight different fibre orientations. In-vivo passive orthotropic properties of healthy human myocardium, indispensable for personalised LV wall stress estimation, were identified, and subsequently the regional variations of LV wall stress-strain were investigated by incorporating geometrical heterogeneity, personalised myocardium properties and LV base movement in the FE models. RV deformation increased the average fibre and sheet stress-strain in the LV wall during diastole; the effect should therefore always be included in cardiac biomechanics studies. Any pathological remodelling that increased the transmural fibre angle led to additional LV inflation. The study indicates that a change in fibre orientation may contribute to the development of heart failure with preserved ejection fraction (HFpEF). Future therapeutic interventions should consider the effect of altered fibre orientation for a better outcome. Due to the ill-posed nature of the inverse optimisation problem, the average myocardial stiffness was extracted by identifying the normal ranges of the parameters. A novel method was developed by combining FE modelling, the response surface method (RSM) and a genetic algorithm (GA) to identify the passive orthotropic properties of healthy human myocardium using routinely acquired clinical data. These myocardium properties can be directly utilised in future computational studies. Although the regional stress-strain distribution of the LV wall was highly heterogeneous amongst the individuals, it was observed that the inner wall of the LV experienced higher fibre stress than the outer wall.
    The LV wall near the base and the lateral region experienced greater stress-strain than the other regions. The incorporation of LV base movement (not addressed in the literature) improved the FE model predictions, and it is therefore recommended for consideration in later studies. In addition, normal ranges of various stress-strain components in different regions of the LV wall were reported for five healthy human ventricles, considering RV deformation, LV base movement, and subject-specific myocardium properties. This information could be used as a reference map for future studies. The study revealed that FE modelling can be employed to analyse the effect of geometry, fibre structure and material properties on normal ventricular mechanics, and can therefore provide greater insight into the underlying mechanics of the failing heart and help plan optimised surgical interventions. Hence, the research has impact on computational cardiac biomechanics as well as clinical cardiac physiology.
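    For reference, a widely used structure-based orthotropic constitutive law of the kind the abstract describes is the Holzapfel-Ogden strain-energy function; whether this exact form was adopted in the thesis is an assumption here. With right Cauchy-Green tensor C and fibre/sheet directions f0, s0:

```latex
\Psi = \frac{a}{2b}\Bigl(e^{\,b(I_1-3)}-1\Bigr)
     + \sum_{i\in\{f,s\}} \frac{a_i}{2b_i}\Bigl(e^{\,b_i(I_{4i}-1)^2}-1\Bigr)
     + \frac{a_{fs}}{2b_{fs}}\Bigl(e^{\,b_{fs}\,I_{8fs}^2}-1\Bigr),
\qquad
I_1=\operatorname{tr}\mathbf{C},\quad
I_{4f}=\mathbf{f}_0\cdot\mathbf{C}\mathbf{f}_0,\quad
I_{4s}=\mathbf{s}_0\cdot\mathbf{C}\mathbf{s}_0,\quad
I_{8fs}=\mathbf{f}_0\cdot\mathbf{C}\mathbf{s}_0.
```

    The eight parameters (a, b, a_f, b_f, a_s, b_s, a_fs, b_fs) are exactly the kind of passive orthotropic myocardium properties that an RSM/GA inverse scheme would identify from clinical data.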

    Sparse and low-rank techniques for the efficient restoration of images

    Get PDF
    Image reconstruction is a key problem in numerous applications of computer vision and medical imaging. By removing noise and artifacts from corrupted images, or by enhancing the quality of low-resolution images, reconstruction methods are essential to provide high-quality images for these applications. Over the years, extensive research efforts have been invested in the development of accurate and efficient approaches for this problem. Recently, considerable improvements have been achieved by exploiting the principles of sparse representation and nonlocal self-similarity. However, techniques based on these principles often suffer from important limitations that impede their use in high-quality and large-scale applications. For instance, sparse representation approaches consider local patches during reconstruction but ignore the global structure of the image. Likewise, because they average over groups of similar patches, nonlocal self-similarity methods tend to over-smooth images. Such methods can also be computationally expensive, requiring an hour or more to reconstruct a single image. Furthermore, existing reconstruction approaches consider either local patch-based regularization or global structure regularization, due to the complexity of combining both regularization strategies in a single model. Yet such a combined model could improve upon existing techniques by removing noise or reconstruction artifacts while preserving both local details and global structure in the image. Similarly, current approaches rarely consider external information during the reconstruction process. When the structure to reconstruct is known, external information like statistical atlases or geometrical priors could also improve performance by guiding the reconstruction. This thesis addresses limitations of the prior art through three distinct contributions. The first contribution investigates the histogram of image gradients as a powerful prior for image reconstruction. Due to the trade-off between noise removal and smoothing, image reconstruction techniques based on global or local regularization often over-smooth the image, leading to the loss of edges and textures. To alleviate this problem, we propose a novel prior for preserving the distribution of image gradients modeled as a histogram. This prior is combined with low-rank patch regularization in a single efficient model, which is then shown to improve reconstruction accuracy for the problems of denoising and deblurring. The second contribution explores the joint modeling of local and global structure regularization for image restoration. Toward this goal, groups of similar patches are reconstructed simultaneously using an adaptive regularization technique based on the weighted nuclear norm. An innovative strategy, which decomposes the image into a smooth component and a sparse residual, is proposed to preserve global image structure. This strategy is shown to better exploit the property of structure sparsity than standard techniques like total variation. The proposed model is evaluated on the problems of completion and super-resolution, outperforming state-of-the-art approaches for these tasks. Lastly, the third contribution of this thesis proposes an atlas-based prior for the efficient reconstruction of MR data. Although popular, image priors based on total variation and nonlocal patch similarity often over-smooth edges and textures in the image due to the uniform regularization of gradients.
    Unlike natural images, the spatial characteristics of medical images are often restricted by the target anatomical structure and imaging modality. Based on this principle, we propose a novel MRI reconstruction method that leverages external information in the form of a probabilistic atlas. This atlas controls the level of gradient regularization at each image location via a weighted total-variation prior. The proposed method also exploits the redundancy of nonlocal similar patches through a sparse representation model. Experiments on a large-scale dataset of T1-weighted images show this method to be highly competitive with the state of the art.
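    The adaptive patch-group regularization in the second contribution is typically realized as weighted singular-value shrinkage, the proximal operator of the weighted nuclear norm (exact when the weights are non-decreasing, as here). The sketch below follows the common WNNM-style reweighting; the constant, weight formula, and interface are assumptions, not the thesis's exact model.

```python
import numpy as np

def wnn_shrink_group(patch_group, noise_var, c=2.0 * 2.0**0.5, eps=1e-8):
    """Denoise a group of similar patches (stacked as columns) by
    weighted singular-value shrinkage: large singular values (structure)
    are shrunk less than small ones (mostly noise)."""
    u, s, vh = np.linalg.svd(patch_group, full_matrices=False)
    n = patch_group.shape[1]                      # number of patches in group
    w = c * np.sqrt(n) * noise_var / (s + eps)    # WNNM-style weights
    s_shrunk = np.maximum(s - w, 0.0)             # soft-threshold each value
    return (u * s_shrunk) @ vh
```

    Applied group by group and aggregated back into the image, this is the low-rank half of the model; the smooth-plus-sparse-residual decomposition supplies the global structure term.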

    Fabrication-Aware Design with Performative Criteria

    Get PDF
    Artists and architects often need to handle multiple constraints during the design of physical constructions. We define a performative constraint as any constraint on a design that is tied to the performance of the model, whether during fabrication, construction, daily use, or destruction. Even for small to medium scale models, there are functional criteria such as the ease of fabrication and assembly, or even the interplay of light with the material. Computational tools can greatly aid this process, assisting with the lower-level performative constraints while the designer handles the high-level artistic decisions. Additionally, using new fabrication methods, our tools can lower the difficulty of building complex constructions, making them accessible to hobbyists. In this thesis, we present three computational design methods with different approaches, each with a different material, fabrication method, and use case. The first method is a construction with intersecting planar pieces that can be laser cut or milled. These 3D forms are assembled by sliding pieces into each other along straight slits, and do not require other support such as glue or screws. We present a mathematical abstraction that formalizes the constraints between pieces as a graph, including fabrication and assembly constraints, and ensures global rigidity of the sculpture. We also propose an optimization algorithm to guide the user using automatic constraint satisfaction based on analysis of the constraint relation graph. We demonstrate our approach by creating several small- to medium-scale examples, including functional furniture. The second method presents a solution to building a 3D sculpture out of existing building blocks that can be found in many homes. Starting from the voxelization of a 3D mesh, we merge voxels to form larger bricks (see the sketch below), and then analyze and repair structural problems based on a graph representation of the block connections. We then output layer-by-layer building instructions to allow a user to quickly and easily build the model. We also present extensions such as hollowing the models to use fewer bricks, limiting the number of bricks of each size, and including color constraints. We present both real and virtual brick constructions and associated timings, showing improvements over previous work. The final case tackles the inverse design problem of finding a surface that produces a target caustic on a receiver plane when light is refracted or reflected. This is an example where the performative constraint is the principal driver of the design. We introduce an optimal transport formulation to find a correspondence between the incoming light and the target output light distribution. We then show a 3D optimization that finds the surface that transports light according to the correspondence map. Our approach supports piecewise smooth surfaces that are as smooth as possible but allow for creases, which greatly reduces artifacts while allowing light to be completely diverted, producing fully black regions. We show how this leads to a very large space of high-contrast, high-resolution caustic images, including point and line singularities of infinite light density as well as photo-realistic images. Our approach leads to surfaces that can be milled using standard CNC milling. We demonstrate the approach with both simulated and fabricated examples.
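    The voxel-merging step referenced above can be sketched as a greedy scan that tries the largest available brick footprint first. This is a minimal per-layer sketch under assumed brick sizes; the thesis additionally analyzes and repairs the connection graph, which is not shown.

```python
import numpy as np

# Assumed available brick footprints (rows x cols), tried largest first.
BRICK_SIZES = [(2, 4), (4, 2), (2, 2), (1, 2), (2, 1), (1, 1)]

def merge_layer(occupied):
    """Greedily merge the occupied voxels of one layer into bricks.

    occupied : 2D bool array marking filled voxels of the layer.
    Returns a list of (row, col, height, width) brick placements."""
    free = occupied.copy()
    bricks = []
    for r in range(free.shape[0]):
        for c in range(free.shape[1]):
            if not free[r, c]:
                continue
            for h, w in BRICK_SIZES:                # largest footprint first
                block = free[r:r + h, c:c + w]
                if block.shape == (h, w) and block.all():
                    bricks.append((r, c, h, w))
                    free[r:r + h, c:c + w] = False  # consume the voxels
                    break
    return bricks
```

    Offsetting the scan between layers (or randomizing tie-breaks) helps interleave bricks across layers, so the subsequent connectivity analysis has fewer weak seams to repair.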

    Differential geometry methods for biomedical image processing : from segmentation to 2D/3D registration

    Get PDF
    This thesis establishes a biomedical image analysis framework for the advanced visualization of biological structures. It consists of two important parts: 1) the segmentation of structures of interest in 3D medical scans, and 2) the registration of patient-specific 3D models with 2D interventional images. Segmenting biological structures results in 3D computational models that are simple to visualize and that can be analyzed quantitatively. Registering a 3D model with interventional images makes it possible to position the 3D model within the physical world. By combining the information from a 3D model and 2D interventional images, the proposed framework can improve the guidance of surgical interventions by reducing the ambiguities inherent in the interpretation of 2D images. Two specific segmentation problems are considered: 1) the segmentation of large structures with low-frequency intensity nonuniformity, and 2) the detection of fine curvilinear structures. First, we directed our attention toward the segmentation of relatively large structures with low-frequency intensity nonuniformity. Such structures are important in medical imaging since they are commonly encountered in MRI. Also, the nonuniform diffusion of the contrast agent in some other modalities, such as CTA, leads to structures of nonuniform appearance. A level-set method that uses a local-linear region model is defined and applied to the challenging problem of segmenting brain tissues in MRI. The unique characteristics of the proposed method make it possible to account for important image nonuniformity implicitly. To the best of our knowledge, this is the first time a region-based level-set model has been used to perform the segmentation of real-world MRI brain scans with convincing results. The second segmentation problem considered is the detection of fine curvilinear structures in 3D medical images. Detecting those structures is crucial since they can represent veins, arteries, bronchi or other important tissues. Unfortunately, most currently available curvilinear structure detection filters incur significant signal loss at bifurcations of two structures. This peculiarity limits the performance of all subsequent processing, whether it is interpreting an angiographic acquisition, computing an accurate tractography, or automatically classifying the image voxels. This thesis presents a new curvilinear structure detection filter that is robust to the presence of X- and Y-junctions. At the same time, it is conceptually simple and deterministic, and allows for an intuitive representation of the structure's principal directions. Once a 3D computational model is available, it can be used to enhance surgical guidance. A 2D/3D non-rigid method is proposed that brings a 3D centerline model of the coronary arteries into correspondence with bi-plane fluoroscopic angiograms. The registered model is overlaid on top of the interventional angiograms to provide surgical assistance during image-guided chronic total occlusion procedures, which reduces the uncertainty inherent in 2D interventional images. A fully non-rigid registration model is proposed and used to compensate for any local shape discrepancy. This method is based on a variational framework and uses a simultaneous matching and reconstruction process. With a typical run time of less than 3 seconds, the algorithms are fast enough for interactive applications.
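    For context on the bifurcation problem, the sketch below implements a classical Hessian-eigenvalue (Frangi-style) vesselness measure in 2D, the kind of filter whose response collapses at X- and Y-junctions; the thesis's junction-robust detector is not reproduced here, and the parameter values are conventional defaults rather than values from the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(image, sigma, beta=0.5, c=15.0):
    """Frangi-style vesselness from Hessian eigenvalues at scale sigma.

    image : 2D float array; bright curvilinear structures on dark background."""
    # Scale-normalized second-order Gaussian derivatives.
    hxx = gaussian_filter(image, sigma, order=(0, 2)) * sigma**2
    hyy = gaussian_filter(image, sigma, order=(2, 0)) * sigma**2
    hxy = gaussian_filter(image, sigma, order=(1, 1)) * sigma**2
    # Eigenvalues of the 2x2 Hessian, ordered so |l1| <= |l2|.
    root = np.sqrt((hxx - hyy) ** 2 + 4.0 * hxy**2)
    l1 = 0.5 * (hxx + hyy - root)
    l2 = 0.5 * (hxx + hyy + root)
    swap = np.abs(l1) > np.abs(l2)
    l1[swap], l2[swap] = l2[swap], l1[swap]
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)        # blob-vs-line ratio
    s2 = l1**2 + l2**2                            # structureness
    v = np.exp(-rb**2 / (2 * beta**2)) * (1.0 - np.exp(-s2 / (2 * c**2)))
    return np.where(l2 < 0, v, 0.0)               # ridge must be bright
```

    At a junction the local structure is no longer line-like, so |l1| grows, the ratio rb approaches 1, and the response is suppressed, which is precisely the signal loss the proposed filter avoids.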

    An Efficient Compression Method for Multiplanar Reformulated Biomedical Images

    No full text

    Advanced Automation for Space Missions

    Get PDF
    The feasibility of using machine intelligence, including automation and robotics, in future space missions was studied.