
    A proposal of quantum-inspired machine learning for medical purposes: An application case

    Learning tasks are implemented via mappings of the sampled data set within both a classical and a quantum framework. Biomedical data characterizing complex diseases such as cancer typically require algorithmic support for clinical decisions, especially for the early-stage tumors that typify breast cancer patients and are still controllable therapeutically and surgically. Our case study is the pre-operative prediction of lymph node metastasis in breast cancer patients who received a negative diagnosis after clinical and radiological exams. The classifier adopted to establish a baseline is invariant to permutations of the input feature order and exploits stratification in the training procedure. The quantum classifier mimics the support vector machine mapping into a high-dimensional feature space, obtained here by encoding the features into qubits, at the cost of additional complexity. Feature selection is used to study the performance attainable with a small number of features, so that the classifiers can be trained in feasible time. Wide variations in sensitivity and specificity are observed for the selected optimal classifiers during cross-validation for both types of classification system, with negative or positive cases detected more easily depending on which of the two training schemes is chosen. Clinical practice is still far from being reached, even if the flexible structure of quantum-inspired classifier circuits leaves room for further developments that model interactions among features: this preliminary study is intended solely to provide an overview of a particular tree tensor network scheme in a simplified version adopting only product states, and to introduce the typical machine learning procedures of feature selection and classifier performance evaluation.
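
    To make the product-state encoding concrete, the sketch below shows how such a feature map might look: each feature is encoded into one qubit by a rotation, the per-feature states are combined by a tensor product, and classification is done by overlap against a learned weight vector. The function names, the encoding angle, and the overlap threshold are illustrative assumptions, not the authors' implementation.

        # Hypothetical product-state encoding in the spirit of the simplified
        # tree tensor network classifier described above (names are assumptions).
        import numpy as np

        def qubit_encode(x):
            """Map one feature scaled to [0, 1] onto a single-qubit state."""
            return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

        def product_state(features):
            """Tensor product of the per-feature qubit states (a product state)."""
            state = np.array([1.0])
            for x in features:
                state = np.kron(state, qubit_encode(x))
            return state  # length 2**n_features, unit norm

        def predict(features, weight_vector):
            """Classify by the squared overlap with a learned, normalized weight state."""
            amplitude = np.dot(weight_vector, product_state(features))
            return 1 if amplitude ** 2 >= 0.5 else 0

        # Toy usage with three features and random (untrained) weights.
        rng = np.random.default_rng(0)
        w = rng.normal(size=2 ** 3)
        w /= np.linalg.norm(w)
        print(predict([0.2, 0.7, 0.5], w))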

    2018 Faculty Excellence Showcase, AFIT Graduate School of Engineering & Management

    Excerpt: As an academic institution, we strive to meet and exceed the expectations for graduate programs and laud our values and contributions to the academic community. At the same time, we must recognize, appreciate, and promote the unique non-academic values and accomplishments that our faculty team brings to the national defense, which is a priority of the Federal Government. In this respect, through our diverse and multi-faceted contributions, our faculty as a whole excels not only by the metrics of civilian academic expectations, but also by the metrics of military requirements and national priorities.

    Academic Year 2019-2020 Faculty Excellence Showcase, AFIT Graduate School of Engineering & Management

    An excerpt from the Dean’s Message: There is no place like the Air Force Institute of Technology (AFIT). There is no academic group like AFIT’s Graduate School of Engineering and Management. Although we run an educational institution similar to many other institutions of higher learning, we are different and unique because of our defense-focused, graduate-research-based academic programs. Our programs are designed to be relevant and responsive to national defense needs. Our programs are aligned with the prevailing priorities of the US Air Force and the US Department of Defense. Our faculty team has the requisite critical mass of service-tested faculty members. The unique composition of pure civilian faculty, military faculty, and service-retired civilian faculty makes AFIT truly unlike any other academic institution anywhere.

    Air Force Institute of Technology Research Report 2011

    This report summarizes the research activities of the Air Force Institute of Technology’s Graduate School of Engineering and Management. It describes research interests and faculty expertise; lists student theses/dissertations; identifies research sponsors and contributions; and outlines the procedures for contacting the school. Included in the report are: faculty publications, conference presentations, consultations, and funded research projects. Research was conducted in the areas of Aeronautical and Astronautical Engineering, Electrical Engineering and Electro-Optics, Computer Engineering and Computer Science, Systems and Engineering Management, Operational Sciences, Mathematics, Statistics, and Engineering Physics.

    Quantifying Performance Bias in Label Fusion

    Classification systems are employed to remotely assess whether an element of interest falls into a target class or a non-target class. These systems have uses in fields ranging from biostatistics to search engine keyword analysis. The performance of a system is often summarized as a trade-off between the proportion of elements correctly labeled as target plotted against the proportion of elements incorrectly labeled as target. These are empirical estimates of the true positive and false positive rates. These rates are often plotted to create a receiver operating characteristic (ROC) curve that acts as a visual tool for assessing classification system performance. The research contained in this thesis focuses on the label fusion technique and the bias that can occur when incorrect assumptions are made regarding the partitioning of the event set. This partitioning may be defined in terms of what will be called within and across label fusion. The major goals of this work are the formulaic development and quantification of the performance bias between across and within label fusion, and the analysis of how individual classification system performance, correlation, and the target environment affect the magnitude of the bias between these two types of label fusion.
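
    As a concrete illustration of the quantities involved, the sketch below computes empirical true positive and false positive rates for two simulated classifiers and for a simple Boolean "or" fusion of their labels; the within/across partitioning of the event set analyzed in the thesis is not reproduced here, and the simulation parameters are assumptions.

        # Empirical TPR/FPR for two simulated classifiers and an "or" label fusion.
        import numpy as np

        def rates(labels, truth):
            """Empirical true positive and false positive rates."""
            tpr = labels[truth == 1].mean()
            fpr = labels[truth == 0].mean()
            return tpr, fpr

        rng = np.random.default_rng(1)
        truth = rng.integers(0, 2, size=10_000)          # 1 = target, 0 = non-target
        # Two imperfect classifiers that label correctly with probability 0.8 and 0.7.
        sys_a = np.where(rng.random(truth.size) < 0.8, truth, 1 - truth)
        sys_b = np.where(rng.random(truth.size) < 0.7, truth, 1 - truth)

        fused = np.maximum(sys_a, sys_b)                 # label target if either system does
        for name, labels in [("A", sys_a), ("B", sys_b), ("A or B", fused)]:
            tpr, fpr = rates(labels, truth)
            print(f"{name}: TPR = {tpr:.3f}, FPR = {fpr:.3f}")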

    Statistical Inference on Optimal Points to Evaluate Multi-State Classification Systems

    In decision making, an optimal point represents the settings at which a classification system should be operated to achieve maximum performance. Clearly, these optimal points are of great importance in classification theory. Not only is the selection of the optimal point of interest, but quantifying the uncertainty in the optimal point and its performance is also important. The Youden index is a metric currently employed for the selection and performance quantification of optimal points for families of classification systems. The Youden index quantifies the correct classification rates of a classification system, and its confidence interval quantifies the uncertainty in this measurement. This metric currently focuses on two or three classes and only allows the utility of correct classifications and the cost of total misclassifications to be considered. An alternative to this metric for three or more classes is a cost function that considers the sum of incorrect classification rates. This new metric is preferable because it can include class prevalences and the costs associated with every classification, which in multi-class settings informs better decisions and inferences on optimal points. The work in this dissertation develops theory and methods for confidence intervals on a metric based on misclassification rates, Bayes Cost, and, where possible, on the thresholds found for an optimal point using Bayes Cost. Hypothesis tests for Bayes Cost are also developed to test a classification system's performance or to compare systems, with an emphasis on classification systems involving three or more classes. The performance of the newly proposed methods is demonstrated with simulation.
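
    The contrast between the two metrics can be made concrete with a small sketch: the Youden index combines sensitivity and specificity for two classes, while a Bayes-Cost-style metric sums every off-diagonal (misclassification) rate weighted by class prevalence and cost. The exact formulation in the dissertation may differ, so the functions and toy numbers below are assumptions for illustration only.

        # Youden index versus a prevalence- and cost-weighted misclassification sum.
        import numpy as np

        def youden_index(sensitivity, specificity):
            """J = sensitivity + specificity - 1 for a two-class system."""
            return sensitivity + specificity - 1

        def bayes_cost(rates, prevalences, costs):
            """Sum of misclassification rates weighted by prevalence and cost.

            rates[i, j] = P(decide class j | true class i); the diagonal
            (correct decisions) carries no cost.
            """
            off_diagonal = ~np.eye(rates.shape[0], dtype=bool)
            return np.sum((prevalences[:, None] * costs * rates)[off_diagonal])

        # Toy three-class example with uniform prevalences and unit costs.
        rates = np.array([[0.90, 0.06, 0.04],
                          [0.10, 0.85, 0.05],
                          [0.05, 0.10, 0.85]])
        prev = np.full(3, 1 / 3)
        cost = 1 - np.eye(3)
        print(youden_index(0.90, 0.85), bayes_cost(rates, prev, cost))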

    Analyzing a Method to Determine the Utility of Adding a Classification System to a Sequence for Improved Accuracy

    Frequently, ensembles of classification systems are combined into a sequence in order to enhance the accuracy of classifying objects of interest. However, there is a point at which adding an additional system to a sequence no longer enhances the ensemble, because either the increase in operational costs exceeds the benefit of the improvement in classification or the addition of the system does not increase accuracy at all. This research will examine a utility measure, based on the ratio of the change in accuracy to the increase in operational costs, to determine whether adding a classification system to a sequence of such systems is valid or invalid. Three general classification sequence strategies defined on a two-class population outcome will be examined: Believe the Positive, Believe the Negative, and Believe the Extreme. Through simulation, this research will identify which characteristics of the individual classification systems and of the sequence have the greatest impact on the utility measure, and will provide guidance on the threshold value of the utility measure that differentiates between when adding a system to the sequence may be useful (valid) and when it is not (invalid). This work expands upon known accuracy and cost equations for each of the sequential strategies in order to generalize them to any fixed number of classification systems in a sequence. From these accuracy and cost calculations, the utility measure can be computed for different scenarios, and recommendations are made as to the characteristics that enhance the utility of adding additional systems to a sequence in order to improve classification accuracy.
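
    To illustrate the utility ratio described above, the sketch below computes the accuracy of a Believe the Positive sequence of independent classification systems before and after appending one more system, and divides the accuracy change by the added operational cost. Independence between systems, the toy sensitivities and specificities, and the cost figures are all assumptions for illustration, not values from the research.

        # Accuracy gain per unit of added cost for a Believe the Positive sequence.
        def bp_accuracy(sensitivities, specificities, prevalence):
            """Accuracy of a Believe the Positive sequence of independent systems."""
            miss = 1.0   # a target is missed only if every system misses it
            spec = 1.0   # a non-target is kept only if every system rejects it
            for se, sp in zip(sensitivities, specificities):
                miss *= 1 - se
                spec *= sp
            return prevalence * (1 - miss) + (1 - prevalence) * spec

        def utility(acc_before, acc_after, cost_before, cost_after):
            """Change in accuracy per unit increase in operational cost."""
            return (acc_after - acc_before) / (cost_after - cost_before)

        # Toy example: is adding a third system (operational cost 2.0) worthwhile?
        prevalence = 0.3
        acc_two = bp_accuracy([0.85, 0.80], [0.90, 0.88], prevalence)
        acc_three = bp_accuracy([0.85, 0.80, 0.75], [0.90, 0.88, 0.92], prevalence)
        print(acc_two, acc_three, utility(acc_two, acc_three, 3.0, 5.0))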