Improved 3D MR Image Acquisition and Processing in Congenital Heart Disease
Congenital heart disease (CHD) is the most common type of birth defect, affecting about 1% of the population. MRI is an essential tool in the assessment of CHD, including diagnosis, intervention planning and follow-up. Three-dimensional MRI can provide particularly rich visualization and information. However, it is often complicated by long scan times, cardiorespiratory motion, injection of contrast agents, and complex and time-consuming postprocessing. This thesis comprises four pieces of work that attempt to respond to some of these challenges.
The first piece of work aims to enable fast acquisition of 3D time-resolved cardiac imaging during free breathing. Rapid imaging was achieved using an efficient spiral sequence and a sparse parallel imaging reconstruction. The feasibility of this approach was demonstrated in a population of 10 patients with CHD, and areas for improvement were identified.
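Although the abstract does not detail the reconstruction, sparsity-regularized reconstruction of undersampled k-space data is commonly solved with iterative shrinkage; below is a minimal single-coil Cartesian sketch (illustrative names and parameters; the actual method also handles spiral sampling and coil sensitivities).

```python
# Minimal sketch of a sparsity-regularized MR reconstruction: ISTA applied to
# min_x ||M F(x) - y||^2 + lam ||x||_1. Illustrative only, not the thesis method.
import numpy as np

def ista_recon(y, mask, lam=0.01, step=1.0, n_iter=50):
    """y: undersampled k-space (zeros off-mask); mask: sampling pattern (1 = sampled)."""
    x = np.zeros_like(y)
    for _ in range(n_iter):
        grad = np.fft.ifft2(mask * np.fft.fft2(x) - y)   # data-consistency gradient
        z = x - step * grad
        mag = np.abs(z)
        # complex soft-thresholding promotes sparsity (here directly in image space)
        x = z / np.maximum(mag, 1e-12) * np.maximum(mag - step * lam, 0)
    return x

# Usage sketch: retrospectively undersample a synthetic image
img = np.random.rand(64, 64)
mask = (np.random.rand(64, 64) < 0.4).astype(float)
y = mask * np.fft.fft2(img)
recon = np.abs(ista_recon(y, mask))
```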
The second piece of work is an integrated software tool designed to simplify and accelerate the development of machine learning (ML) applications in MRI research. It also exploits the strengths of recently developed ML libraries for efficient MR image reconstruction and processing.
The third piece of work aims to reduce contrast dose in contrast-enhanced MR angiography (MRA). This would reduce risks and costs associated with contrast agents. A deep learning-based contrast enhancement technique was developed and shown to improve image quality in real low-dose MRA in a population of 40 children and adults with CHD.
The fourth and final piece of work aims to simplify the creation of computational models for hemodynamic assessment of the great arteries. A deep learning technique for 3D segmentation of the aorta and the pulmonary arteries was developed and shown to enable accurate calculation of clinically relevant biomarkers in a population of 10 patients with CHD.
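As an illustration of how a biomarker might be derived from such a segmentation, here is a hedged sketch computing a simple slice-wise equivalent diameter from a binary 3D mask; the function name, voxel-size parameter, and biomarker choice are assumptions for illustration, not the thesis's actual measures.

```python
# Hedged sketch: one simple vessel-calibre biomarker from a binary 3D segmentation.
import numpy as np

def max_equivalent_diameter(seg, voxel_mm=(1.0, 1.0, 1.0)):
    """seg: boolean array (z, y, x); returns max per-slice equivalent diameter in mm."""
    dz, dy, dx = voxel_mm
    areas = seg.sum(axis=(1, 2)) * dy * dx            # cross-sectional area per slice
    return float(np.max(2.0 * np.sqrt(areas / np.pi)))  # diameter of equal-area circle
```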
Detection of Power Line Supporting Towers via Interpretable Semantic Segmentation of 3D Point Clouds
The inspection and maintenance of energy transmission networks are demanding and crucial tasks for any transmission system operator. They rely on a combination of on-the-ground staff and costly low-flying helicopters to visually inspect the power grid structure. Recently, LiDAR-based inspections have shown the potential to accelerate inspections and increase their precision. These high-resolution sensors allow an environment to be scanned and stored as a 3D point cloud for further processing and analysis by maintenance specialists, who aim to prevent fires and damage to the electrical system. However, this task is especially demanding to complete in a timely manner given the extensive area that the transmission network covers. Nonetheless, the transition to point cloud data allows us to take advantage of Deep Learning to automate these inspections by detecting collisions between the grid and the surrounding scene.
Deep Learning is a recent and powerful tool that has been successfully applied to a myriad of real-life problems, such as image recognition and speech generation. With the introduction of affordable LiDAR sensors, the application of Deep Learning to 3D data emerged, with numerous methods proposed every day to address difficult problems, from 3D object detection to 3D point cloud segmentation. Alas, state-of-the-art methods are remarkably complex: composed of millions of trainable parameters, they take several weeks, if not months, to train on specialized hardware, which makes it difficult for traditional companies, like utilities, to employ them.
Therefore, we explore a novel mathematical framework that allows us to define tailored operators that incorporate prior knowledge about our problem. These operators are then integrated into a learning agent, called SCENE-Net, that detects power line supporting towers in 3D point clouds. SCENE-Net offers interpretability of its results, which is not possible with conventional models, and it achieves efficient training and inference times of 85 minutes and 20 ms, respectively, on a regular laptop. Our model is composed of 11 trainable geometrical parameters, such as the height of a cylinder, and achieves a Precision gain of 24% over a comparable CNN with 2190 parameters.
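To make the idea of trainable geometric parameters concrete, below is a minimal sketch of a soft cylinder-membership operator whose radius and height receive gradients; this is an illustrative stand-in, not SCENE-Net's actual operator definition.

```python
# Illustrative geometrically parameterized operator: soft occupancy of points
# inside a vertical cylinder with trainable radius and height.
import torch

def cylinder_score(points, center, radius, height, sharpness=10.0):
    """points: (N, 3); center: (3,); radius/height: scalar tensors with gradients."""
    d_xy = torch.linalg.norm(points[:, :2] - center[:2], dim=1)
    d_z = torch.abs(points[:, 2] - center[2])
    in_r = torch.sigmoid(sharpness * (radius - d_xy))      # soft radial membership
    in_h = torch.sigmoid(sharpness * (height / 2 - d_z))   # soft height membership
    return (in_r * in_h).mean()   # differentiable fraction of points inside

# Usage sketch: gradients flow to the geometric parameters
pts = torch.randn(1000, 3)
radius = torch.tensor(0.5, requires_grad=True)
height = torch.tensor(2.0, requires_grad=True)
cylinder_score(pts, torch.zeros(3), radius, height).backward()
```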
Integration of a Minimalistic Set of Sensors for Mapping and Localization of Agricultural Robots
Robots have recently become ubiquitous in many aspects of daily life. For in-house applications there are vacuuming, mopping, and lawn-mowing robots. Swarms of robots have been used in Amazon warehouses for several years. Autonomous cars, despite being set back by several safety issues, are undeniably becoming the standard of the automobile industry. Beyond commercial applications, robots can perform various tasks, such as inspecting hazardous sites and taking part in search-and-rescue missions. Regardless of the end-user application, autonomy plays a crucial role in modern robots. The essential capabilities required for autonomous operation are mapping, localization, and navigation. The goal of this thesis is to develop a new approach to solve the problems of mapping, localization, and navigation for autonomous robots in agriculture. This type of environment poses unique challenges, such as repetitive patterns and large-scale, feature-sparse areas, in comparison to urban scenarios, where good features such as pavements, buildings, road lanes, and traffic signs abound.
In outdoor agricultural environments, a robot can rely on a Global Navigation Satellite System (GNSS) to determine its whereabouts. However, this limits the robot's activities to areas with accessible GNSS signals and fails altogether indoors. In such cases, different types of exteroceptive sensors, such as (RGB, depth, thermal) cameras, laser scanners, and Light Detection and Ranging (LiDAR), and proprioceptive sensors, such as an Inertial Measurement Unit (IMU) and wheel encoders, can be fused to better estimate the robot's state. Generic approaches that combine several different sensors often yield superior estimation results, but they are not always optimal in terms of cost-effectiveness, modularity, reusability, and interchangeability. For agricultural robots, robustness in long-term operation is as important as cost-effectiveness for mass production.
We tackle this challenge by exploring and selectively using a handful of sensors, such as RGB-D cameras, LiDAR, and an IMU, in representative agricultural environments. The sensor fusion algorithms provide high precision and robustness for mapping and localization, while assuring cost-effectiveness by employing only the sensors necessary for the task at hand. In this thesis, we extend LiDAR mapping and localization methods designed for urban scenarios to cope with agricultural environments, where slopes, vegetation, and trees cause traditional approaches to fail. Our mapping method substantially reduces the memory footprint of map storage, which is important for large-scale farms. We show how to handle the localization problem in dynamically growing strawberry polytunnels by using only a stereo visual-inertial (VI) and depth sensor to extract and track only invariant features. This eliminates the need for remapping to deal with dynamic scenes. Also, as a demonstration of the minimalistic requirements of autonomous agricultural robots, we show the ability to autonomously traverse between rows in a difficult, zigzag-like polytunnel environment using only a laser scanner. Furthermore, we present an autonomous navigation capability using only a camera, without explicitly performing mapping or localization. Finally, our mapping and localization methods are generic and platform-agnostic and can be applied to different types of agricultural robots.
All contributions presented in this thesis have been tested and validated on real robots in real agricultural environments. All approaches have been published in, or submitted to, peer-reviewed conference proceedings and journals.
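As one concrete example of reducing a map's memory footprint (not necessarily the method used in the thesis), the following sketch performs voxel-grid downsampling of a LiDAR point cloud; the function name and voxel size are illustrative.

```python
# Hedged sketch: voxel-grid downsampling, one common way to shrink the memory
# footprint of a stored LiDAR point-cloud map.
import numpy as np

def voxel_downsample(points, voxel=0.2):
    """Keep one representative point (centroid) per voxel-sized cell."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inv, points)     # sum the points falling in each cell
    return centroids / counts[:, None]    # cell centroids

cloud = np.random.rand(100_000, 3) * 50.0   # synthetic 50 m x 50 m scan
compact = voxel_downsample(cloud, voxel=0.5)
print(f"{cloud.shape[0]} -> {compact.shape[0]} points")
```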
A Novel Approach to Shadow Boundary Detection Based on an Adaptive Direction-Tracking Filter for Brain-Machine Interface Applications
In this paper, a Brain-Machine Interface (BMI) system is proposed to automatically control the navigation of wheelchairs by detecting the shadows on their route. In this context, a new algorithm to detect shadows in a single image is proposed. Specifically, a novel adaptive direction tracking filter (ADT) is developed to extract feature information along the direction of shadow boundaries. The proposed algorithm avoids extracting features along all directions around each pixel, which significantly improves the efficiency and accuracy of shadow feature extraction. Higher-order statistics (HOS) features such as skewness and kurtosis, in addition to other optical features, are used as input to different Machine Learning (ML) based classifiers, specifically a Multilayer Perceptron (MLP), Autoencoder (AE), 1D-Convolutional Neural Network (1D-CNN) and Support Vector Machine (SVM), to perform the shadow boundary detection task. Comparative results demonstrate that the proposed MLP-based system outperforms all the other state-of-the-art approaches, reporting accuracy rates of up to 84.63%.
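A minimal sketch of the HOS feature idea follows: skewness and kurtosis of local intensity profiles fed to an MLP classifier. The ADT filter itself is not reproduced; the profile extraction, feature set, and synthetic data below are simplified assumptions.

```python
# Illustrative HOS feature extraction + MLP classification for boundary candidates.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.neural_network import MLPClassifier

def hos_features(profiles):
    """profiles: (N, L) intensity samples along candidate boundary directions."""
    return np.column_stack([
        skew(profiles, axis=1),
        kurtosis(profiles, axis=1),
        profiles.mean(axis=1),   # simple additional optical features
        profiles.std(axis=1),
    ])

# Usage sketch with synthetic labeled profiles (1 = shadow boundary)
rng = np.random.default_rng(0)
X = hos_features(rng.normal(size=(200, 32)))
y = rng.integers(0, 2, size=200)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
```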
Creation and maintenance of visual incremental maps and hierarchical localization.
Over the last few years, the presence of mobile robotics has increased considerably in a wide variety of environments. It is common to find robots that carry out repetitive and specific applications; they can also be used to work in dangerous environments and to perform precise tasks. These robots can be found in a variety of social settings, such as industrial, household, educational, and health scenarios. For that reason, they require specific and continuous research and improvement. In particular, autonomous mobile robots require very precise technology to perform tasks without human assistance.
To perform tasks autonomously, robots must be able to navigate in an unknown environment. For that reason, autonomous mobile robots must be able to address the mapping and localization tasks: they must create a model of the environment and estimate their position and orientation within it.
This PhD thesis proposes and analyses different methods to carry out the map creation and localization tasks in indoor environments. To address these tasks, only visual information is used; specifically, omnidirectional images with a 360º field of view. Throughout the chapters of this document, solutions for autonomous navigation tasks are proposed; they are solved using transformations of the images captured by a vision system mounted on the robot.
Firstly, the thesis focuses on the study of global appearance descriptors for the localization task. Global appearance descriptors are algorithms that transform an image globally into a unique vector. In these works, a deep comparative study is performed: different global appearance descriptors are used along with omnidirectional images and the results are compared. The main goal is to obtain an optimized algorithm to estimate the robot's position and orientation in real indoor environments. The experiments take place under real conditions, so visual changes in the scenes can occur, such as camera defects, movements of furniture or people, and changes in the lighting conditions. The computational cost is also studied: the robot has to localize itself accurately, but it also has to be fast enough.
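To make the global-appearance pipeline concrete, here is a minimal sketch in which each image is reduced to a single normalized vector and localization is nearest-neighbour retrieval; the particular descriptor (a gradient-orientation histogram) is an illustrative stand-in for the descriptors compared in the thesis.

```python
# Illustrative global appearance descriptor + nearest-neighbour localization.
import numpy as np

def global_descriptor(img, bins=32):
    """img: 2D grayscale array -> one L2-normalized gradient-orientation histogram."""
    gy, gx = np.gradient(img.astype(float))
    hist, _ = np.histogram(np.arctan2(gy, gx), bins=bins, range=(-np.pi, np.pi),
                           weights=np.hypot(gx, gy))
    return hist / (np.linalg.norm(hist) + 1e-12)

def localize(query, map_descriptors):
    """Return the index of the most similar map image."""
    d = np.linalg.norm(map_descriptors - global_descriptor(query), axis=1)
    return int(np.argmin(d))
```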
Additionally, a second application, whose goal is to carry out incremental mapping in indoor environments, is presented. This application uses the best global appearance descriptors from the localization task, but this time they are employed to solve the mapping problem using an incremental clustering technique. The application clusters batches of images that are visually similar; every group of images, or cluster, is expected to identify a zone of the environment. The shape and size of the clusters can vary while the robot visits the different rooms. Nowadays, different algorithms can be used to obtain the clusters, but these solutions usually only work properly 'offline', starting from the whole set of data to cluster. The main idea of this study is to build the map incrementally while the robot explores the new environment. Carrying out the mapping incrementally while the robot is still visiting the area is very interesting, since having the map separated into nodes with similarity relationships between them can subsequently be used for hierarchical localization tasks and to recognize environments already visited in the model.
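The following is a minimal sketch of the incremental idea: each new descriptor either joins the nearest existing node or opens a new one. The threshold rule and class names are illustrative assumptions, not the thesis's exact clustering method.

```python
# Illustrative online (incremental) clustering of image descriptors into map nodes.
import numpy as np

class IncrementalMap:
    def __init__(self, threshold=0.5):
        self.centroids, self.counts, self.threshold = [], [], threshold

    def add(self, desc):
        """Assign a descriptor to a node online; returns the node index."""
        if self.centroids:
            d = [np.linalg.norm(c - desc) for c in self.centroids]
            i = int(np.argmin(d))
            if d[i] < self.threshold:   # similar enough: update node's running mean
                self.counts[i] += 1
                self.centroids[i] += (desc - self.centroids[i]) / self.counts[i]
                return i
        self.centroids.append(desc.astype(float).copy())  # new zone of the environment
        self.counts.append(1)
        return len(self.centroids) - 1
```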
Finally, this PhD thesis includes an analysis of deep learning techniques for localization tasks. In particular, siamese networks have been studied. Siamese networks are based on classic convolutional networks, but they permit evaluating two images simultaneously. These networks output a similarity value between the input images, and that information can be used for localization tasks. Throughout this work, the technique is presented, the possible architectures are analysed, and the experimental results are shown and compared. Using siamese networks, localization under real operating conditions and environments is solved, with a focus on improving robustness against illumination changes in the scene. In the experiments, the room retrieval, hierarchical localization, and absolute localization problems are solved.
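A minimal sketch of a siamese similarity network follows: two weight-shared convolutional branches and a head that maps the absolute feature difference to a similarity score in (0, 1). Architecture sizes are illustrative, not those analysed in the thesis.

```python
# Illustrative siamese network: shared encoder, similarity from |f(a) - f(b)|.
import torch
import torch.nn as nn

class Siamese(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch = nn.Sequential(               # shared convolutional encoder
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, 64),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, a, b):
        fa, fb = self.branch(a), self.branch(b)    # same weights for both images
        return torch.sigmoid(self.head(torch.abs(fa - fb)))  # similarity in (0, 1)

# Usage sketch: similarity of two 64x64 grayscale images
net = Siamese()
s = net(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
```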
Vehicle make and model recognition for intelligent transportation monitoring and surveillance.
Vehicle Make and Model Recognition (VMMR) has evolved into a significant subject of study due to its importance in numerous Intelligent Transportation Systems (ITS), such as autonomous navigation, traffic analysis, traffic surveillance, and security systems. A highly accurate, real-time VMMR system significantly reduces the overhead cost of the resources otherwise required. The VMMR problem is a multi-class classification task with a peculiar set of issues and challenges, such as multiplicity and inter- and intra-make ambiguity among various vehicle makes and models, which need to be solved in an efficient and reliable manner to achieve a highly robust VMMR system. In this dissertation, facing the growing importance of make and model recognition of vehicles, we present a VMMR system that provides very high accuracy rates and is robust to several challenges. We demonstrate that the VMMR problem can be addressed by locating discriminative parts, where the most significant appearance variations occur in each category, and learning expressive appearance descriptors. Given these insights, we consider two data-driven frameworks: a Multiple-Instance Learning (MIL) based system using hand-crafted features and an extended application of deep neural networks using MIL. Our approach requires only image-level class labels, and the discriminative parts of each target class are selected in a fully unsupervised manner, without any use of part annotations or segmentation masks, which may be costly to obtain. This advantage makes our system more intelligent, scalable, and applicable to other fine-grained recognition tasks. We constructed a dataset with 291,752 images representing 9,170 different vehicles to validate and evaluate our approach. Experimental results demonstrate that localizing parts and distinguishing their discriminative powers for categorization improve the performance of fine-grained categorization. Extensive experiments conducted using our approaches yield superior results for images that were occluded, under low illumination, from partial camera views, or even from non-frontal views, as available in our real-world VMMR dataset. The approaches presented herewith provide a highly accurate VMMR system for real-time applications in realistic environments. We also validate our system with a significant application of VMMR to ITS that involves automated vehicular surveillance. We show that our application can provide law enforcement agencies with efficient tools to search for a specific vehicle type, make, or model, and to track the path of a given vehicle using the positions of multiple cameras.
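To illustrate the MIL formulation with image-level labels only, here is a minimal sketch in which an image is a bag of candidate part features and the bag prediction is a max over per-instance scores; dimensions and names are illustrative assumptions.

```python
# Illustrative MIL head: max-pooling over instance scores selects the most
# discriminative part for each class, supervised only by the bag (image) label.
import torch
import torch.nn as nn

class MILHead(nn.Module):
    def __init__(self, feat_dim=256, n_classes=10):
        super().__init__()
        self.score = nn.Linear(feat_dim, n_classes)   # per-instance class scores

    def forward(self, bag):                           # bag: (n_instances, feat_dim)
        inst = self.score(bag)                        # (n_instances, n_classes)
        return inst.max(dim=0).values                 # bag logits = best instance

# Usage sketch: 12 candidate part features -> one make/model prediction
logits = MILHead()(torch.randn(12, 256))
```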
Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping
The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The presented TAN system utilises salient visual features representing semantic or human-interpretable objects (roads, forest and water boundaries) from onboard aerial imagery and associates them with a database of reference features created a priori through application of the same feature detection algorithms to satellite imagery. Correlation of the detected features with the reference features via a series of robust data association steps allows a localisation solution to be achieved with a finite absolute precision bound defined by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets. The extension of the system's application into the mapping domain, in turn, has been based on real (not simulated) flight data and imagery. In the mapping study, the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery has been demonstrated. Not only have visual features such as road networks, shorelines, and water bodies been used to obtain a position 'fix'; they have also been used in reverse for accurate mapping of vehicles detected on the roads into an inertial space with improved precision. Combined correction of geo-coding errors and improved aircraft localisation formed a robust solution to the defence mapping application. A system of the proposed design will provide a complete independent navigation solution to an autonomous UAV and additionally give it object tracking capability.
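As a simplified illustration of one association step (the thesis uses a richer series of robust steps), the following sketch performs gated nearest-neighbour matching between detected and reference feature positions; the gate value and function name are assumptions.

```python
# Illustrative gated nearest-neighbour data association between detected
# visual features and geo-referenced reference features.
import numpy as np

def associate(detected, reference, gate=15.0):
    """detected: (N, 2), reference: (M, 2) planar positions in metres.
    Returns (detected_idx, reference_idx) pairs whose distance is within the gate."""
    pairs = []
    for i, p in enumerate(detected):
        d = np.linalg.norm(reference - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < gate:               # the gate rejects implausible associations
            pairs.append((i, j))
    return pairs
```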
Machine Intelligence for Advanced Medical Data Analysis: Manifold Learning Approach
In the current work, linear and non-linear manifold learning techniques, specifically Principal Component Analysis (PCA) and Laplacian Eigenmaps, are studied in detail. Their applications in medical image and shape analysis are investigated.
In the first contribution, a manifold learning-based multi-modal image registration technique is developed, which results in a unified intensity system through intensity transformation between the reference and sensed images. The transformation eliminates intensity variations in multi-modal medical scans and hence facilitates employing well-studied mono-modal registration techniques. The method can be used for registering multi-modal images with full and partial data.
Next, a manifold learning-based, scale-invariant global shape descriptor is introduced. The proposed descriptor benefits from the capability of Laplacian Eigenmaps to deal with high-dimensional data by introducing an exponential weighting scheme. It eliminates the limitations tied to the well-known cotangent weighting scheme, namely its dependency on triangular mesh representations and on the high intra-class quality of 3D models.
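For concreteness, here is a minimal sketch of Laplacian Eigenmaps with exponential (heat-kernel) weights on a k-nearest-neighbour graph; the exact weighting scheme of the proposed descriptor and its application to 3D shapes are not reproduced.

```python
# Illustrative Laplacian Eigenmaps with exponential (heat-kernel) edge weights.
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.linalg import eigh

def laplacian_eigenmap(X, n_components=2, t=1.0, k=10):
    """X: (n, d) samples -> (n, n_components) low-dimensional embedding."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-d2 / t)                                    # exponential weights
    np.fill_diagonal(W, 0.0)                               # no self-loops
    idx = np.argsort(d2, axis=1)[:, k + 1:]                # beyond self + k nearest
    for i, cols in enumerate(idx):
        W[i, cols] = 0.0                                   # sparsify to kNN graph
    W = np.maximum(W, W.T)                                 # symmetrize
    L = laplacian(W, normed=True)
    vals, vecs = eigh(L)                                   # ascending eigenvalues
    return vecs[:, 1:n_components + 1]                     # skip trivial eigenvector
```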
In the end, a novel descriptive model for diagnostic classification of pulmonary nodules is presented. The descriptive model exploits structural differences between benign and malignant nodules for the automatic and accurate classification of a candidate nodule. It extracts concise and discriminative features automatically from the 3D surface structure of a nodule, using the spectral features studied in the previous work combined with a point cloud-based deep learning network.
Extensive experiments have been conducted and have shown that the proposed algorithms based on manifold learning outperform several state-of-the-art methods. Advanced computational techniques combining manifold learning and deep networks can play a vital role in effective healthcare delivery by providing a framework for several fundamental tasks in image and shape processing, namely registration, classification, and detection of features of interest.