
    Shared Nearest-Neighbor Quantum Game-Based Attribute Reduction with Hierarchical Coevolutionary Spark and Its Application in Consistent Segmentation of Neonatal Cerebral Cortical Surfaces

    The unprecedented increase in data volume has become a severe challenge for conventional patterns of data mining and learning systems tasked with handling big data. The recently introduced Spark platform is a new processing method for big data analysis and related learning systems, which has attracted increasing attention from both the scientific community and industry. In this paper, we propose a shared nearest-neighbor quantum game-based attribute reduction (SNNQGAR) algorithm that incorporates the hierarchical coevolutionary Spark model. We first present a shared coevolutionary nearest-neighbor hierarchy with self-evolving compensation that considers the features of nearest-neighborhood attribute subsets and calculates the similarity between attribute subsets according to the shared-neighbor information of attribute sample points. We then present a novel attribute weight tensor model to generate ranking vectors of attributes and apply them to balance the relative contributions of different neighborhood attribute subsets. To optimize the model, we propose an embedded quantum equilibrium game paradigm (QEGP) to ensure that noisy attributes do not degrade the big data reduction results. A combination of the hierarchical coevolutionary Spark model and an improved MapReduce framework is then constructed so that the SNNQGAR can be better parallelized to efficiently determine the preferred reduction solutions of the distributed attribute subsets. The experimental comparisons demonstrate the superior performance of the SNNQGAR, which outperforms most of the state-of-the-art attribute reduction algorithms. Moreover, the results indicate that the SNNQGAR can be successfully applied to segment overlapping and interdependent fuzzy cerebral tissues, and it exhibits stable and consistent segmentation performance for neonatal cerebral cortical surfaces.
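The shared-neighbor similarity at the heart of this abstract can be pictured with a minimal sketch: two items are similar when their k-nearest-neighbor sets overlap. This is a generic Python illustration, not the authors' Spark implementation; the data and the choice of k are placeholders, and the similarity is shown between plain sample points rather than between attribute subsets for brevity.

```python
import numpy as np

def knn_indices(X, k):
    """Indices of the k nearest neighbors of each row of X
    (excluding the point itself), by Euclidean distance."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    return np.argsort(d, axis=1)[:, :k]

def shared_nn_similarity(X, k=5):
    """Similarity of items i and j = overlap of their k-nearest-neighbor
    sets, normalized to [0, 1]."""
    nn = [set(row) for row in knn_indices(X, k)]
    n = len(nn)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            S[i, j] = S[j, i] = len(nn[i] & nn[j]) / k
    return S

# Toy usage: 20 sample points with 4 attributes each.
rng = np.random.default_rng(0)
S = shared_nn_similarity(rng.normal(size=(20, 4)), k=5)
print(S.shape)  # (20, 20)
```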

    IMAGE PROCESSING, SEGMENTATION AND MACHINE LEARNING MODELS TO CLASSIFY AND DELINEATE TUMOR VOLUMES TO SUPPORT MEDICAL DECISION

    Techniques for processing and analysing images and medical data have become the main translational applications and research directions in clinical and pre-clinical environments. The advantage of these techniques is the efficient improvement of diagnostic accuracy and the assessment of treatment response by means of quantitative biomarkers. In the era of personalized medicine, early and effective prediction of therapy response in patients is still a critical issue. In radiation therapy planning, Magnetic Resonance Imaging (MRI) provides high-quality detailed images and excellent soft-tissue contrast, while Computerized Tomography (CT) images provide attenuation maps and very good hard-tissue contrast. In this context, Positron Emission Tomography (PET) is a non-invasive imaging technique which has the advantage, over morphological imaging techniques, of providing functional information about the patient's disease. In the last few years, several criteria to assess therapy response in oncological patients have been proposed, ranging from anatomical to functional assessments. Changes in tumour size are not necessarily correlated with changes in tumour viability and outcome. In addition, morphological changes resulting from therapy occur more slowly than functional changes. Inclusion of PET images in radiotherapy protocols is desirable because it is predictive of treatment response and provides crucial information to accurately target the oncological lesion and to escalate the radiation dose without increasing normal tissue injury. For this reason, PET may be used for improving the Planning Treatment Volume (PTV). Nevertheless, due to the nature of PET images (low spatial resolution, high noise and weak boundaries), metabolic image processing is a critical task. The aim of this Ph.D. thesis is to develop smart methodologies applied to the medical imaging field to analyse different kinds of problems related to medical images and data analysis, working closely with radiologist physicians. Various issues in the clinical environment have been addressed and improvements have been produced in various fields, such as organ and tissue segmentation and classification to delineate tumor volumes using machine learning techniques to support medical decisions. In particular, the following topics have been the object of this study:
    • Technique for Crohn's Disease Classification using a Kernel Support Vector Machine;
    • Automatic Multi-Seed Detection for MR Breast Image Segmentation;
    • Tissue Classification in PET Oncological Studies;
    • KSVM-Based System for the Definition, Validation and Identification of the Incisional Hernia Recurrence Risk Factors;
    • A smart and operator-independent system to delineate tumours in Positron Emission Tomography scans;
    • Active Contour Algorithm with Discriminant Analysis for Delineating Tumors in Positron Emission Tomography;
    • K-Nearest Neighbor driving Active Contours to Delineate Biological Tumor Volumes;
    • Tissue Classification to Support Local Active Delineation of Brain Tumors;
    • A fully automatic system for Positron Emission Tomography study segmentation.
This work has been developed in collaboration with the medical staff and colleagues at:
    • Dipartimento di Biopatologia e Biotecnologie Mediche e Forensi (DIBIMED), University of Palermo
    • Cannizzaro Hospital of Catania
    • Istituto di Bioimmagini e Fisiologia Molecolare (IBFM), Consiglio Nazionale delle Ricerche (CNR) of Cefalù
    • School of Electrical and Computer Engineering at Georgia Institute of Technology
The proposed contributions have produced scientific publications in indexed computer science and medical journals and conferences. They are very useful in terms of PET and MRI image segmentation and may be used daily as Medical Decision Support Systems to enhance the current methodology performed by healthcare operators in radiotherapy treatments. Future developments of this research concern the integration of data acquired by image analysis with the management and processing of big data coming from a wide range of heterogeneous sources.
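Several of the listed contributions (the Crohn's disease classifier, the KSVM-based hernia study, the tissue classification work) are built on kernel SVMs. A minimal sketch of that shared building block follows, using scikit-learn; the synthetic features, labels, and hyperparameters are placeholders, not the thesis pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: rows stand in for per-voxel (or per-region) feature
# vectors extracted from PET/MRI images; labels 0 = background, 1 = lesion.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM: the "KSVM" family named in the listed studies.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```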

    Computer aided process planning for multi-axis CNC machining using feature free polygonal CAD models

    This dissertation provides new methods for the general area of Computer Aided Process Planning, often referred to as CAPP. It specifically focuses on three challenging problems in multi-axis CNC machining processes that use feature-free polygonal CAD models. The first research problem involves a new method for the rapid machining of multi-surface parts. These types of parts typically have different requirements for each surface, for example surface finish, accuracy, or functionality. The CAPP algorithms developed for this problem ensure the complete rapid machining of multi-surface parts by providing better setup orientations to machine each surface. The second research problem is related to a new method for discrete multi-axis CNC machining of part models using feature-free polygonal CAD models. This problem specifically considers a generic 3-axis CNC machining process for which CAPP algorithms are developed. These algorithms allow the rapid machining of a wide variety of parts with higher geometric accuracy by enabling access to visible surfaces through the choice of appropriate machine tool configurations (i.e., number of axes). The third research problem addresses challenges with geometric singularities that can occur when 2D slice models are used in process planning. The conversion from CAD to slice model results in the loss of model surface information, the consequence of which could be suboptimal or incorrect process planning. The algorithms developed here facilitate the transfer of complete surface geometry information from CAD to slice models. The work of this dissertation will aid in developing the next generation of CAPP tools and result in lower-cost and more accurately machined components.
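The CAD-to-slice conversion behind the third research problem can be pictured with a minimal sketch that intersects one mesh triangle with a horizontal plane. This is an illustration, not the dissertation's algorithm; note that a vertex lying exactly on the slicing plane is deliberately ignored here, and such degenerate cases are precisely the kind of geometric singularity the dissertation addresses.

```python
import numpy as np

def slice_triangle(tri, z):
    """Intersect one triangle (3x3 array of vertices) with the plane
    Z = z; return a 2x3 segment, or None if the triangle does not cross."""
    pts = []
    for a, b in ((0, 1), (1, 2), (2, 0)):
        za, zb = tri[a, 2], tri[b, 2]
        if (za - z) * (zb - z) < 0:      # edge strictly crosses the plane
            t = (z - za) / (zb - za)
            pts.append(tri[a] + t * (tri[b] - tri[a]))
    return np.array(pts) if len(pts) == 2 else None

# One triangle spanning z = 0..1, sliced at z = 0.5.
tri = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])
print(slice_triangle(tri, 0.5))
```

A full slicer repeats this over every triangle and chains the segments into closed contours; the dissertation's point is that surface attributes must be carried along with those segments rather than discarded.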

    Kernel Methods for Machine Learning with Life Science Applications


    Featured Anomaly Detection Methods and Applications

    Anomaly detection is a fundamental research topic that has been widely investigated. From critical industrial systems, e.g., network intrusion detection systems, to people's daily activities, e.g., mobile fraud detection, anomaly detection has become the first vital resort to protect and secure public and personal property. Although anomaly detection methods have been under consistent development over the years, the explosive growth of data volume and the continued dramatic variation of data patterns pose great challenges to anomaly detection systems and fuel a great demand for more intelligent anomaly detection methods with distinct characteristics to cope with various needs. To this end, this thesis starts by presenting a thorough review of existing anomaly detection strategies and methods, elaborating their advantages and disadvantages. Afterward, four distinctive anomaly detection methods, especially for time series, are proposed, aiming at resolving specific needs of anomaly detection under different scenarios, e.g., enhanced accuracy, interpretable results, and self-evolving models. Experiments are presented and analysed to offer a better understanding of the performance of the methods and their distinct features. More specifically, the key contributions of this thesis are as follows:
    1) Support Vector Data Description (SVDD) is investigated as a primary method to achieve accurate anomaly detection. The applicability of SVDD over noisy time series datasets is carefully examined, and it is demonstrated that relaxing the decision boundary of SVDD always results in better accuracy in network time series anomaly detection. Theoretical analysis of the parameter utilised in the model is also presented to ensure the validity of the relaxation of the decision boundary.
    2) To support a clear explanation of detected time series anomalies, i.e., anomaly interpretation, the periodic pattern of time series data is considered as contextual information to be integrated into SVDD for anomaly detection. The formulation of SVDD with contextual information maintains multiple discriminants which help in distinguishing the root causes of the anomalies.
    3) In an attempt to further analyse a dataset for anomaly detection and interpretation, Convex Hull Data Description (CHDD) is developed for realising one-class classification together with data clustering. CHDD approximates the convex hull of a given dataset with the extreme points, which constitute a dictionary of data representatives. According to the dictionary, CHDD is capable of representing and clustering all the normal data instances, so that anomaly detection is realised with a degree of interpretation.
    4) Besides better anomaly detection accuracy and interpretability, better solutions for anomaly detection over streaming data with evolving patterns are also researched. Under the framework of Reinforcement Learning (RL), a time series anomaly detector that is consistently trained to cope with evolving patterns is designed. Because the anomaly detector is trained with labeled time series, it avoids the cumbersome work of threshold setting and the uncertain definitions of anomalies in time series anomaly detection tasks.
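A minimal sketch of the boundary-relaxation idea in 1): with an RBF kernel, scikit-learn's OneClassSVM is equivalent to SVDD, and its nu parameter (an upper bound on the fraction of training points left outside the boundary) loosely plays the role of the relaxation parameter the thesis analyses. The sliding-window features and the injected anomaly below are placeholders, not the thesis's network data.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Placeholder "network time series": sliding windows over a noisy signal.
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 40, 2000)) + 0.1 * rng.normal(size=2000)
window = 20
X = np.lib.stride_tricks.sliding_window_view(signal, window)

# Larger nu tightens the enclosing boundary (more training points may fall
# outside); smaller nu relaxes it, the trade-off studied in the thesis.
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X)

test = signal.copy()
test[1500:1520] += 3.0                    # inject an anomaly
X_test = np.lib.stride_tricks.sliding_window_view(test, window)
flags = detector.predict(X_test)          # -1 = anomaly, +1 = normal
print("flagged windows:", np.where(flags == -1)[0][:10])
```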

    NASA Tech Briefs, June 2006

    Topics covered include: Magnetic-Field-Response Measurement-Acquisition System; Platform for Testing Robotic Vehicles on Simulated Terrain; Interferometer for Low-Uncertainty Vector Metrology; Rayleigh Scattering for Measuring Flow in a Nozzle Testing Facility; "Virtual Feel" Capaciflectors; FETs Based on Doped Polyaniline/Polyethylene Oxide Fibers; Miniature Housings for Electronics With Standard Interfaces; Integrated Modeling Environment; Modified Recursive Hierarchical Segmentation of Data; Sizing Structures and Predicting Weight of a Spacecraft; Stress Testing of Data-Communication Networks; Framework for Flexible Security in Group Communications; Software for Collaborative Use of Large Interactive Displays; Microsphere Insulation Panels; Single-Wall Carbon Nanotube Anodes for Lithium Cells; Tantalum-Based Ceramics for Refractory Composites; Integral Flexure Mounts for Metal Mirrors for Cryogenic Use; Templates for Fabricating Nanowire/Nanoconduit-Based Devices; Measuring Vapors To Monitor the State of Cure of a Resin; Partial-Vacuum-Gasketed Electrochemical Corrosion Cell; Theodolite Ring Lights; Integrating Terrain Maps Into a Reactive Navigation Strategy; Reducing Centroid Error Through Model-Based Noise Reduction; Adaptive Modeling Language and Its Derivatives; Stable Satellite Orbits for Global Coverage of the Moon; and Low-Cost Propellant Launch From a Tethered Balloon.

    Buried RF Sensors for Smart Road Infrastructure: Empirical Communication Range Testing, Propagation by Line of Sight, Diffraction and Reflection Model and Technology Comparison for 868 MHz–2.4 GHz

    Updating the road infrastructure may require the mass adoption of the road studs currently used in car detection, speed monitoring, and path marking. Road studs commonly include RF transceivers connecting the buried sensors to an offsite base station for centralized data management. Since traffic monitoring experiments with buried sensors are resource-expensive and difficult, the literature detailing them is insufficient and inaccessible for various strategic reasons. Moreover, as the main RF frequencies adopted for stud communication are either 868/915 MHz or 2.4 GHz, the radio coverage differs, and it is not readily predictable due to the low-power communication in close proximity to the ground. This work delivers a reference study on low-power RF communication ranging for the two above frequencies up to 60 m. The experimental setup employs successive measurements and repositioning of a base station at three different heights of 0.5, 1 and 1.5 m, and is accompanied by an extensive theoretical analysis of propagation, including line of sight, diffraction, and wall reflection. Enhancing the tutorial value of this work, a correlation analysis using Pearson's coefficient and root mean square error is performed between the field test and simulation results.
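A minimal sketch of the comparison pipeline described above: a two-ray ground-reflection model stands in for the paper's fuller line-of-sight/diffraction/reflection analysis (the stud height of 0.02 m and the reflection coefficient of -1 are assumptions), and Pearson's coefficient and RMSE are then computed as the abstract describes, here against placeholder "measured" values rather than real field data.

```python
import numpy as np
from scipy.stats import pearsonr

c = 3e8                                   # speed of light, m/s

def two_ray_gain_db(d, f, h_tx, h_rx, refl=-1.0):
    """Relative received power (dB) under the two-ray ground-reflection
    model: a direct path plus an interfering ground-reflected path."""
    lam = c / f
    d_los = np.hypot(d, h_tx - h_rx)      # direct path length
    d_ref = np.hypot(d, h_tx + h_rx)      # reflected path length
    phase = 2 * np.pi * (d_ref - d_los) / lam
    field = 1.0 / d_los + refl * np.exp(-1j * phase) / d_ref
    return 20 * np.log10(np.abs(field) * lam / (4 * np.pi))

d = np.arange(1.0, 60.0, 1.0)             # up to 60 m, as in the study
sim = two_ray_gain_db(d, f=868e6, h_tx=0.02, h_rx=1.0)  # base station at 1 m

# Placeholder "measured" values; real data would come from the field test.
rng = np.random.default_rng(7)
meas = sim + rng.normal(scale=2.0, size=sim.size)

r, _ = pearsonr(sim, meas)
rmse = np.sqrt(np.mean((sim - meas) ** 2))
print(f"Pearson r = {r:.3f}, RMSE = {rmse:.2f} dB")
```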

    Feature-based hybrid inspection planning for complex mechanical parts

    Globalization and emerging new powers in the manufacturing world are among the many challenges major manufacturing enterprises are facing. This has resulted in increased alternatives to satisfy customers' growing needs regarding products' aesthetic and functional requirements. The complexity of part designs and the engineering specifications needed to satisfy such needs often require better use of advanced and more accurate tools to achieve good quality. Inspection is a crucial manufacturing function that should be further improved to cope with such challenges. Intelligent planning for the inspection of parts with complex geometric shapes and free-form surfaces using contact or non-contact devices is still a major challenge. Research in segmentation and localization techniques should also enable inspection systems to utilize modern measurement technologies capable of collecting huge numbers of measured points. Advanced digitization tools can be classified as contact or non-contact sensors. The purpose of this thesis is to develop a hybrid inspection planning system that benefits from the advantages of both techniques. Moreover, the minimization of the deviation of the measured part from the original CAD model is not the only characteristic that should be considered when implementing the localization process in order to accept or reject the part; geometric tolerances must also be considered. A segmentation technique that deals directly with the individual points is a necessary step in the developed inspection system, where the output is the actual measured points, not a tessellated model as commonly produced by current segmentation tools. The contribution of this work is threefold. First, a knowledge-based system was developed for selecting the most suitable sensor using an inspection-specific feature taxonomy in the form of a 3D matrix, where each cell includes the corresponding knowledge rules and generates inspection tasks. A Traveling Salesperson Problem (TSP) formulation has been applied to sequence these hybrid inspection tasks. Second, a novel region-based segmentation algorithm was developed which deals directly with the measured point cloud and generates sub-point clouds, each of which represents a feature to be inspected and includes the original measured points. Finally, a new tolerance-based localization algorithm was developed to verify the functional requirements; it was applied and tested using form tolerance specifications. This research enhances existing inspection planning systems for complex mechanical parts with a hybrid inspection planning model. The main benefit of the developed segmentation and tolerance-based localization algorithms is the improvement of inspection decisions, so as not to reject good parts that would otherwise have been rejected due to misleading results from currently available localization techniques. The better and more accurate inspection decisions achieved will lead to less scrap, which, in turn, will reduce product cost and improve the company's potential in the market.
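The abstract does not name the TSP solver used for sequencing the hybrid inspection tasks, so the following is only an assumed greedy nearest-neighbor baseline, sketched in Python over placeholder task locations.

```python
import numpy as np

def nearest_neighbor_tour(points, start=0):
    """Greedy TSP heuristic: from each task location, visit the closest
    unvisited one.  Returns the visiting order of the inspection tasks."""
    n = len(points)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Toy usage: 8 inspection-task locations on the part (x, y, z in mm).
rng = np.random.default_rng(3)
tasks = rng.uniform(0, 100, size=(8, 3))
print(nearest_neighbor_tour(tasks))
```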

    A Study on Improved Image Segmentation Using Numerical Models and Graph Theory: Application to Lung Images

    Doctoral dissertation, Interdisciplinary Program in Bioengineering, College of Engineering, Seoul National University Graduate School, February 2016. Advisor: Hee Chan Kim.
This dissertation presents a thoracic cavity segmentation algorithm and a method for decomposing the pulmonary artery and vein from volumetric chest CT, and evaluates their performance. The main contribution of this research is the development of an automated algorithm for segmentation of clinically meaningful organs. Although there are several methods to improve organ segmentation accuracy, such as the morphological method based on a threshold algorithm or the object selection method based on connectivity information, our novel algorithm uses numerical algorithms and graph theory from the computer engineering field. This dissertation presents the new method through the following two examples and evaluates its results. The first study aimed at thoracic cavity segmentation. The thoracic cavity is the organ enclosed by the thoracic wall and the diaphragm surface. The thoracic wall has no clear boundary. Moreover, since the diaphragm is a thin surface, this organ might have lost parts of its surface in the chest CT. In previous research, a method which found the mediastinum on the 2D axial view was reported, and a thoracic wall extraction method and several diaphragm segmentation methods were also reported independently, but a thoracic cavity volume segmentation method is proposed in this thesis for the first time. In terms of thoracic cavity volumetry, the mean±SD volumetric overlap ratio (VOR), false positive ratio on VOR (FPRV), and false negative ratio on VOR (FNRV) of the proposed method were 98.17±0.84%, 0.49±0.23%, and 1.34±0.83%, respectively. The proposed semi-automatic thoracic cavity segmentation method, which extracts multiple organs (namely, the rib, thoracic wall, diaphragm, and heart), performed with high accuracy and may be useful for clinical purposes. The second study proposed a method to decompose the pulmonary vessels into vessel subtrees for separation of the arteries and veins. The volume images of the separated artery and vein could be used as simulation support data in lung cancer. Although a clinician can mentally separate the vessels into arteries and veins and perform the separation manually, an automatic separation method is preferable. In the previous semi-automatic method, root marking of 30 to 40 points was needed while tracing vessels in the 2D slice view, and this procedure took approximately an hour and a half. After optimization of the feature value set, the accuracy of the arterial and venous decomposition was 89.71 ± 3.76% in comparison with the gold standard.
This framework could be clinically useful for studies on the effects of the pulmonary arteries and veins on lung diseases.
Contents: Chapter 1, General Introduction (1.1 Image Informatics using Open Source; 1.2 History of the segmentation algorithm; 1.3 Goal of Thesis Work). Chapter 2, Thoracic cavity segmentation algorithm using multi-organ extraction and surface fitting in volumetric CT (2.1 Introduction; 2.2 Related Studies; 2.3 The Proposed Thoracic Cavity Segmentation Method; 2.4 Experimental Results; 2.5 Discussion; 2.6 Conclusion). Chapter 3, Semi-automatic decomposition method of pulmonary artery and vein using two-level minimum spanning tree constructions for non-enhanced volumetric CT (3.1 Introduction; 3.2 Related Studies; 3.3 Artery and Vein Decomposition; 3.4 An Efficient Decomposition Method; 3.5 Evaluation; 3.6 Discussion and Conclusion). References. Abstract in Korean.
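The minimum-spanning-tree construction named in Chapter 3 can be pictured with a minimal sketch that builds a single Euclidean MST over placeholder vessel points using SciPy; the thesis's two-level, feature-weighted variant and the actual artery/vein labeling are not reproduced here.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

# Placeholder vessel centerline points (x, y, z); real input would be
# extracted from the segmented pulmonary vessel volume.
rng = np.random.default_rng(5)
pts = rng.uniform(0, 50, size=(30, 3))

# Fully connected graph weighted by Euclidean distance, then its MST.
W = squareform(pdist(pts))
mst = minimum_spanning_tree(W)            # sparse matrix of the kept edges

edges = np.transpose(mst.nonzero())
print(f"{len(edges)} MST edges for {len(pts)} points")  # n - 1 edges
```

Decomposition into subtrees would then proceed by cutting selected MST edges (e.g., at marked roots) so that each connected component is assigned to an artery or vein subtree.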