Penalized Orthogonal Iteration for Sparse Estimation of Generalized Eigenvalue Problem
We propose a new algorithm for sparse estimation of eigenvectors in
generalized eigenvalue problems (GEP). The GEP arises in a number of modern
data-analytic situations and statistical methods, including principal component
analysis (PCA), multiclass linear discriminant analysis (LDA), canonical
correlation analysis (CCA), sufficient dimension reduction (SDR) and invariant
coordinate selection. We propose to modify the standard generalized orthogonal
iteration with a sparsity-inducing penalty for the eigenvectors. To achieve
this goal, we generalize the equation-solving step of orthogonal iteration to a
penalized convex optimization problem. The resulting algorithm, called
penalized orthogonal iteration, provides accurate estimation of the true
eigenspace, when it is sparse. Also proposed is a computationally more
efficient alternative, which works well for PCA and LDA problems. Numerical
studies reveal that the proposed algorithms are competitive, and that our
tuning procedure works well. We demonstrate applications of the proposed
algorithm to obtain sparse estimates for PCA, multiclass LDA, CCA and SDR.
Supplementary materials are available online.
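The iteration described above can be sketched as follows. This is a simplified illustration, not the authors' exact formulation: the penalized convex equation-solving step is approximated here by a plain linear solve followed by soft-thresholding (a proximal step), and the function names are hypothetical.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding, the proximal map of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def penalized_orthogonal_iteration(A, B, k, lam=0.1, n_iter=100):
    """Sketch of orthogonal iteration for the GEP A v = mu B v with an l1 penalty.

    Each step solves the linear system B V = A U, applies soft-thresholding
    to promote sparsity, then re-orthonormalizes the columns via QR.
    """
    p = A.shape[0]
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((p, k)))
    for _ in range(n_iter):
        V = np.linalg.solve(B, A @ U)   # power step for B^{-1} A
        V = soft_threshold(V, lam)      # sparsity-inducing shrinkage
        U, _ = np.linalg.qr(V)          # restore orthonormal columns
    return U
```

When B = I this reduces to a sparse variant of ordinary orthogonal iteration for PCA; the shrinkage step is what drives small loadings exactly to zero.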
Quantitative Screening of Cervical Cancers for Low-Resource Settings: Pilot Study of Smartphone-Based Endoscopic Visual Inspection After Acetic Acid Using Machine Learning Techniques
Background: Approximately 90% of cervical cancer (CC) cases occur in low- and middle-income countries. In most cases, CC can be detected early through routine screening programs, including cytology-based tests. However, it is logistically difficult to offer such programs in low-resource settings due to limited resources and infrastructure and few trained experts. Visual inspection after application of acetic acid (VIA) has been widely promoted and is routinely recommended as a viable form of CC screening in resource-constrained countries. Digital images of the cervix have been acquired during the VIA procedure with better quality assurance and visualization, leading to higher diagnostic accuracy and reduced variability in detection rates. However, a colposcope is bulky, expensive, electricity-dependent, and needs routine maintenance, and a specialist must be present to confirm the grade of abnormality from its images. Recently, smartphone-based imaging systems have made a significant impact on the practice of medicine by offering a cost-effective, rapid, and noninvasive method of evaluation. Furthermore, computer-aided analyses, including image processing-based methods and machine learning techniques, have also shown great potential for high impact on medical evaluation.
Pushing the Limits of 3D Color Printing: Error Diffusion with Translucent Materials
Accurate color reproduction is important in many applications of 3D printing,
from design prototypes to 3D color copies or portraits. Although full color is
available via other technologies, multi-jet printers have greater potential for
graphical 3D printing, in terms of reproducing complex appearance properties.
However, to date these printers cannot produce full color, and doing so poses
substantial technical challenges, from the sheer amount of data to the
translucency of the available color materials. In this paper, we propose an
error diffusion halftoning approach to achieve full color with multi-jet
printers, which operates on multiple isosurfaces or layers within the object.
We propose a novel traversal algorithm for voxel surfaces, which allows the
transfer of existing error diffusion algorithms from 2D printing. The resulting
prints faithfully reproduce colors, color gradients and fine-scale details.
Comment: 15 pages, 14 figures; includes supplemental figure
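The 2D error diffusion schemes that the paper transfers to voxel-surface traversal follow the classic raster-scan pattern. As background, a minimal grayscale Floyd-Steinberg sketch (binary output rather than the paper's multi-material color halftoning; the function name is hypothetical):

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a grayscale image in [0, 1] by Floyd-Steinberg error diffusion.

    Each pixel is quantized to 0 or 1 and the quantization error is pushed
    to the unvisited neighbours with the classic 7/16, 3/16, 5/16, 1/16 weights.
    """
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16       # right
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16  # below-left
                out[y + 1, x] += err * 5 / 16          # below
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16  # below-right
    return out
```

The key property exploited in 3D is that the scheme only needs a traversal order with a well-defined set of "unvisited neighbours", which is what the paper's voxel-surface traversal algorithm supplies.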
From 3D Models to 3D Prints: an Overview of the Processing Pipeline
Due to the wide diffusion of 3D printing technologies, geometric algorithms
for Additive Manufacturing are being invented at an impressive speed. Each
single step, in particular along the Process Planning pipeline, can now count
on dozens of methods that prepare the 3D model for fabrication, while analysing
and optimizing geometry and machine instructions for various objectives. This
report provides a classification of this huge state of the art, and elicits the
relation between each single algorithm and a list of desirable objectives
during Process Planning. The objectives themselves are listed and discussed,
along with possible needs for tradeoffs. Additive Manufacturing technologies
are broadly categorized to explicitly relate classes of devices and supported
features. Finally, this report offers an analysis of the state of the art while
discussing open and challenging problems from both an academic and an
industrial perspective.
Comment: European Union (EU); Horizon 2020; H2020-FoF-2015; RIA - Research and Innovation action; Grant agreement N. 68044
Brain MR Image Segmentation: From Multi-Atlas Method To Deep Learning Models
Quantitative analysis of the brain structures on magnetic resonance (MR) images plays a crucial role in examining brain development and abnormality, as well as in aiding treatment planning. Although manual delineation is commonly considered the gold standard, it suffers from low efficiency and inter-rater variability. Therefore, developing automatic anatomical segmentation of the human brain is important for providing a tool for quantitative analysis (e.g., volume measurement, shape analysis, cortical surface mapping). Despite a large number of existing techniques, the automatic segmentation of brain MR images remains a challenging task due to the complexity of the brain anatomical structures and the great inter- and intra-individual variability among these structures. To address the existing challenges, four methods are proposed in this thesis. The first work proposes a novel label fusion scheme for multi-atlas segmentation. A two-stage majority voting scheme is developed to address the over-segmentation problem in the hippocampus segmentation of brain MR images. The second work of the thesis develops a supervoxel graphical model for whole brain segmentation, in order to relieve the dependence on complicated pairwise registration in multi-atlas segmentation methods. Based on the assumption that voxels within a supervoxel share the same label, the proposed method converts the voxel labeling problem to a supervoxel labeling problem, which is solved by maximum-a-posteriori (MAP) inference in a Markov random field (MRF) defined on supervoxels. The third work incorporates an attention mechanism into convolutional neural networks (CNN), aiming at learning the spatial dependencies between the shallow layers and the deep layers in the CNN and producing an aggregation of attended local features and high-level features to obtain more precise segmentation results.
The fourth method takes advantage of the success of CNN in computer vision, combines the strength of the graphical model with CNN, and integrates them into an end-to-end training network. The proposed methods are evaluated on public MR image datasets, such as MICCAI2012, LPBA40, and IBSR. Extensive experiments demonstrate the effectiveness and superior performance of the four proposed methods compared with other state-of-the-art methods.
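As background for the first work, plain (single-stage) majority-voting label fusion over registered atlases can be sketched as follows. This is a generic illustration, not the thesis's two-stage scheme, and the function name is hypothetical.

```python
import numpy as np

def majority_vote_fusion(propagated_labels):
    """Fuse labels from several registered atlases by per-voxel majority vote.

    propagated_labels: array of shape (n_atlases, ...) holding integer label
    maps warped into the target space. Returns the modal label at each voxel.
    """
    labels = np.asarray(propagated_labels)
    candidates = np.unique(labels)
    # Count, for each candidate label, how many atlases vote for it per voxel.
    votes = np.stack([(labels == c).sum(axis=0) for c in candidates])
    return candidates[np.argmax(votes, axis=0)]
```

A two-stage variant, as described in the abstract, would re-run a restricted vote in regions flagged as over-segmented by the first pass.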
Development of a metrology method for the alignment of optical systems
Tese de mestrado integrado (integrated master's thesis), Engenharia Física, Universidade de Lisboa, Faculdade de Ciências, 2022.
The alignment of optical systems is a fundamental step during metrological tests to obtain
meaningful and traceable results. Depending on the application, several approaches can be used to
achieve the alignment of the components. However, for large and complex systems, the typical approach
relies on the materialization of the optical axis using a Gaussian-like light source. The work developed
in this thesis focuses on this approach. Using a Gaussian-like beam that serves as a reference source for
axis materialization, along with a two-dimensional image sensor to monitor this light source, it is possible
to align an optical system. Another requirement is an image processing technique. Depending on the
application, several methods are commonly used to characterise and determine the center of a spot.
The first step of this thesis was a brief study of commonly used beam alignment techniques,
to gain a better grasp of the state of the art associated with this process. The ideal
beam processing method must be applicable to large images in real-time to allow the alignment to occur
in an iterative way, while being as precise as possible, with all the necessary information displayed on a
computer screen to aid the operator in positioning optical components in the setups. To verify the
potential of the proposed method, other commonly used methods were tested alongside it, using
generated images with gaussian spots, benchmarking their performance and, consequently, obtaining
positive results. Afterwards, a setup was constructed to test the proposed method under a wide variety of extreme
conditions, comparing its results to those of another commonly used method for the same application. The
automation of the process was achieved in LabVIEW 2017, software capable of controlling the
equipment and acquiring data automatically. The results obtained show that the method performed better
than the other method tested, and further analysis can be done to fully characterise its performance. Some
improvements can be made in the future; nonetheless, the method is ready to be employed in a lab
environment and achieve satisfactory results.
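As an illustration of the spot-characterisation step, one commonly used method for determining the centre of a Gaussian-like spot is the intensity-weighted centroid. A minimal sketch follows (the helper name is hypothetical, and this is not necessarily the thesis's proposed method):

```python
import numpy as np

def spot_centroid(img, background=0.0):
    """Estimate the centre of a Gaussian-like spot by its intensity centroid.

    Subtracts a background level, clips negatives, and returns the
    intensity-weighted centre of mass (row, col) in pixel coordinates.
    """
    w = np.clip(img.astype(float) - background, 0.0, None)
    total = w.sum()
    ys, xs = np.indices(w.shape)
    return (ys * w).sum() / total, (xs * w).sum() / total
```

Because it is a single vectorized pass over the image, the centroid is fast enough for the real-time, iterative alignment loop described above; its weakness is sensitivity to uncorrected background and stray light.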
Registration of pre-operative lung cancer PET/CT scans with post-operative histopathology images
Non-invasive imaging modalities used in the diagnosis of lung cancer, such as Positron Emission Tomography (PET) or Computed Tomography (CT), currently provide insufficient information about the cellular make-up of the lesion microenvironment, unless they are compared against the gold standard of histopathology. The aim of this retrospective study was to build a robust imaging framework for registering in vivo and post-operative scans from lung cancer patients, in order to have a global, pathology-validated multimodality map of the tumour and its surroundings. Initial experiments were performed on tissue-mimicking phantoms, to test different shape reconstruction methods. The choice of interpolator and slice thickness were found to affect the algorithm's output, in terms of overall volume and local feature recovery. In the second phase of the study, nine lung cancer patients referred for radical lobectomy were recruited. Resected specimens were inflated with agar, sliced at 5 mm intervals, and each cross-section was photographed. The tumour area was delineated on the block-face pathology images and on the preoperative PET/CT scans. Airway segments were also added to the reconstructed models, to act as anatomical fiducials. Binary shapes were pre-registered by aligning their minimal bounding box axes, and subsequently transformed using rigid registration. In addition, histopathology slides were matched to the block-face photographs using a moving least squares algorithm. A two-step validation process was used to evaluate the performance of the proposed method against manual registration carried out by experienced consultants.
In two out of three cases, experts rated the results generated by the algorithm as the best output, suggesting that the developed framework outperforms the current standard practice.
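The pre-registration step, aligning axes of binary shapes, can be sketched as follows. This illustration uses principal axes (via SVD of the voxel coordinates) as a stand-in for the minimal bounding-box axes, ignores the axis sign and ordering ambiguities that a practical implementation must resolve, and uses hypothetical function names.

```python
import numpy as np

def principal_axes(mask):
    """Centroid and principal axes of a binary shape.

    The axes are the right singular vectors of the centred voxel coordinates,
    ordered by extent; a common proxy for the minimal bounding-box axes.
    """
    pts = np.argwhere(mask).astype(float)
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c, full_matrices=False)
    return c, vt  # rows of vt are the axes

def prealign(moving_mask, fixed_mask):
    """Rigid pre-registration mapping the moving shape's centroid and axes
    onto the fixed shape's. Returns (R, t) such that a moving point p maps
    to R @ p + t."""
    c_mov, ax_mov = principal_axes(moving_mask)
    c_fix, ax_fix = principal_axes(fixed_mask)
    R = ax_fix.T @ ax_mov            # rotate moving axes onto fixed axes
    t = c_fix - R @ c_mov            # then match the centroids
    return R, t
```

Such a coarse initialization is typically followed, as in the abstract, by a full rigid registration that refines R and t against image intensities or surface distances.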