22 research outputs found

    Solving Algorithmic Problems in Finitely Presented Groups via Machine Learning

    Full text link
    Machine learning and pattern recognition techniques have been successfully applied to algorithmic problems in free groups. In this dissertation, we seek to extend these techniques to finitely presented non-free groups, in particular to polycyclic and metabelian groups that are of interest to non-commutative cryptography. As a prototypical example, we utilize supervised learning methods to construct classifiers that can solve the conjugacy decision problem, i.e., determine whether or not two elements of a specified group are conjugate. The accuracies of classifiers created using decision trees, random forests, and N-tuple neural network models are evaluated for several non-free groups. The very high accuracy of these classifiers suggests an underlying mathematical relationship with respect to conjugacy in the tested groups. In addition to testing these techniques on several well-known finitely presented groups, we introduce a new family of metabelian groups for which we analyze the computational complexity of the conjugacy search problem. We prove that for the family in general the time complexity of the conjugacy search problem is exponential, while for a subfamily it is polynomial. We also show that for some of these groups the conjugacy search problem is an instance of the discrete logarithm problem. Finally, we apply machine learning techniques to solving the conjugacy search problem itself. For each platform group, we train an N-tuple regression network that produces a candidate conjugator for a pair of conjugate elements. This candidate is then used as the initial state of a local search for a conjugator in the Cayley graph, in what we call regression-based conjugacy search (RBCS). RBCS can be applied to groups, such as polycyclic groups, for which other heuristic approaches, such as the length-based attack, are ineffective.
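
    As an illustration of the local-search stage of RBCS, the sketch below (Python, using SymPy) runs a greedy walk on the Cayley graph of a small permutation group. The cost function, the greedy neighbourhood rule, and the choice of S_5 are illustrative assumptions rather than the dissertation's construction, and the identity is used as the starting point where RBCS would instead use the output of the trained N-tuple regression network.

        from sympy.combinatorics import Permutation

        def cost(g, x, y, n):
            """Number of points on which g**-1 * x * g and y disagree (0 means g is a conjugator)."""
            z = g**-1 * x * g
            return sum(1 for i in range(n) if z(i) != y(i))

        def local_search(x, y, gens, start, n, max_steps=200):
            """Greedy walk on the Cayley graph: repeatedly move to the neighbour g*s
            (s a generator or its inverse) that lowers the cost, until a conjugator
            is found or no neighbour improves."""
            g = start
            for _ in range(max_steps):
                c = cost(g, x, y, n)
                if c == 0:
                    return g
                neighbours = [g * s for s in gens] + [g * s**-1 for s in gens]
                best = min(neighbours, key=lambda h: cost(h, x, y, n))
                if cost(best, x, y, n) >= c:
                    return None  # stuck in a local minimum; a real search would restart
                g = best
            return None

        # Toy instance in S_5: y is a known conjugate of x, and we search for a conjugator.
        n = 5
        gens = [Permutation([1, 0, 2, 3, 4]), Permutation([1, 2, 3, 4, 0])]  # generators of S_5
        x = Permutation([1, 2, 0, 4, 3])
        true_g = Permutation([0, 2, 3, 1, 4])
        y = true_g**-1 * x * true_g
        found = local_search(x, y, gens, start=Permutation(list(range(n))), n=n)
        print("conjugator found:", found,
              "| verifies:", found is not None and found**-1 * x * found == y)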

    Tensor Denoising via Amplification and Stable Rank Methods

    Full text link
    Tensors in the form of multilinear arrays are ubiquitous in data science applications. Captured real-world data, including video, hyperspectral images, and discretized physical systems, naturally occur as tensors and often come with attendant noise. Under the additive noise model and with the assumption that the underlying clean tensor has low rank, many denoising methods have been created that utilize tensor decomposition to effect denoising through low rank tensor approximation. However, all such decomposition methods require estimating the tensor rank, or related measures such as the tensor spectral and nuclear norms, all of which are NP-hard problems. In this work we leverage our previously developed framework of tensor amplification, which provides good approximations of the spectral and nuclear tensor norms, to denoise synthetic tensors of various sizes, ranks, and noise levels, along with real-world tensors derived from physiological signals. We also introduce two new notions of tensor rank -- stable slice rank and stable X-rank -- and new denoising methods based on their estimation. The experimental results show that in the low rank context, tensor-based amplification provides comparable denoising performance in high signal-to-noise ratio (SNR) settings and superior performance in noisy (i.e., low SNR) settings, while the stable X-rank method achieves superior denoising performance on the physiological signal data.
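
    For context, the sketch below implements the generic baseline this line of work builds on: denoising an additive-noise tensor by low multilinear rank approximation (a truncated higher-order SVD). The paper's tensor amplification framework and stable-rank estimators are not reproduced here; the tensor sizes, rank choice, and noise level are assumptions for illustration.

        import numpy as np

        def unfold(T, mode):
            """Mode-n unfolding of a tensor into a matrix."""
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def truncated_hosvd(T, ranks):
            """Project each mode onto its leading singular vectors, then reconstruct."""
            factors = []
            for mode, r in enumerate(ranks):
                U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
                factors.append(U[:, :r])
            # Core tensor: contract T with the transposed factors on every mode.
            core = T
            for U in factors:
                core = np.tensordot(core, U.T, axes=([0], [1]))  # compressed mode moves to the end
            # Reconstruct: contract the core back with the factors in the same order.
            rec = core
            for U in factors:
                rec = np.tensordot(rec, U, axes=([0], [1]))
            return rec

        rng = np.random.default_rng(0)
        # Synthetic rank-1 clean tensor plus Gaussian noise.
        a, b, c = rng.normal(size=(3, 20))
        clean = np.einsum('i,j,k->ijk', a, b, c)
        noisy = clean + 0.05 * rng.normal(size=clean.shape)
        denoised = truncated_hosvd(noisy, ranks=(1, 1, 1))
        print("noisy error:   ", np.linalg.norm(noisy - clean))
        print("denoised error:", np.linalg.norm(denoised - clean))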

    A Novel Tropical Geometry-based Interpretable Machine Learning Method: Pilot Application to Delivery of Advanced Heart Failure Therapies

    Full text link
    A model’s interpretability is essential to many practical applications such as clinical decision support systems. In this paper, a novel interpretable machine learning method is presented that can model the relationship between input variables and responses as humanly understandable rules. The method is built by applying tropical geometry to fuzzy inference systems, wherein variable encoding functions and salient rules can be discovered by supervised learning. Experiments using synthetic datasets were conducted to demonstrate the performance and capacity of the proposed algorithm in classification and rule discovery. Furthermore, we present a pilot application in identifying heart failure patients who are eligible for advanced therapies as a proof of principle. In this application, the proposed network achieved the highest F1 score among the methods evaluated. The network is capable of learning rules that can be interpreted and used by clinical providers. In addition, existing fuzzy domain knowledge can be easily transferred into the network to facilitate model training; in our application, incorporating this existing knowledge improved the F1 score by over 5%. These characteristics make the proposed network promising for applications requiring model reliability and justification.
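
    The sketch below illustrates the tropical-algebra reading of a fuzzy rule base that this approach rests on: in log space, a rule's firing strength is a tropical product (a sum of log-memberships) and aggregation over rules is a tropical sum (a max). The membership functions, rules, and class labels are illustrative assumptions, not the learned network from the pilot study.

        import numpy as np

        def log_membership(x, center, width):
            """Log of a Gaussian membership function; acts as a tropical 'weight'."""
            return -((x - center) ** 2) / (2 * width ** 2)

        # Two inputs, each encoded as {low, high}; centers/widths are assumptions.
        ENCODINGS = {"low": (0.0, 1.0), "high": (1.0, 1.0)}

        # Each rule: (encoding for input 0, encoding for input 1, predicted class).
        RULES = [
            ("high", "high", 1),  # e.g. "if x0 is high and x1 is high then class 1"
            ("low", "low", 0),
            ("high", "low", 0),
            ("low", "high", 1),
        ]

        def classify(x):
            """Tropical inference: product over antecedents = sum of logs,
            aggregation over rules of the same class = max."""
            scores = {}
            for enc0, enc1, label in RULES:
                strength = (log_membership(x[0], *ENCODINGS[enc0])
                            + log_membership(x[1], *ENCODINGS[enc1]))
                scores[label] = max(scores.get(label, -np.inf), strength)
            return max(scores, key=scores.get), scores

        print(classify(np.array([0.9, 0.8])))   # should favour class 1
        print(classify(np.array([0.1, 0.2])))   # should favour class 0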

    Quadratic Multilinear Discriminant Analysis for Tensorial Data Classification

    No full text
    Over the past decades, there has been increasing attention to adapting machine learning methods to fully exploit the higher-order structure of tensorial data. One problem of great interest is tensor classification, and in particular the extension of linear discriminant analysis to the multilinear setting. We propose a novel method for multilinear discriminant analysis (MDA) that is radically different from the ones considered so far and is the first extension of quadratic discriminant analysis to tensors. Our proposed approach uses invariant theory to extend the nearest Mahalanobis distance classifier to the higher-order setting and to formulate a well-behaved optimization problem. We extensively test our method on a variety of synthetic data, outperforming previously proposed MDA techniques. We also show how to leverage multi-lead ECG data by constructing tensors via taut string, and use our method to classify healthy signals versus unhealthy ones; our method outperforms state-of-the-art MDA methods, especially after adding significant levels of noise to the signals. Our approach reached an AUC of 0.95(0.03) on clean signals, where the second best method reached 0.91(0.03), and an AUC of 0.89(0.03) after adding noise to the signals (with a signal-to-noise ratio of −30), where the second best method reached 0.85(0.05). Our approach is fundamentally different from previous work in this direction and proves to be faster, more stable, and more accurate on the tests we performed.
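
    As a point of reference, the sketch below implements the classical vector-valued nearest-Mahalanobis (quadratic discriminant) classifier that the abstract generalizes to tensors; the invariant-theoretic multilinear extension itself is not reproduced, and the synthetic two-class data is an assumption for illustration.

        import numpy as np

        def fit_qda(X, y):
            """Per-class mean and covariance (the quadratic, class-specific model)."""
            params = {}
            for c in np.unique(y):
                Xc = X[y == c]
                mu = Xc.mean(axis=0)
                cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # ridge for stability
                params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
            return params

        def predict_qda(X, params):
            """Assign each sample to the class minimizing Mahalanobis distance + log-det."""
            preds = []
            for x in X:
                scores = {c: (x - mu) @ P @ (x - mu) + logdet
                          for c, (mu, P, logdet) in params.items()}
                preds.append(min(scores, key=scores.get))
            return np.array(preds)

        rng = np.random.default_rng(1)
        # Two classes with identical means but different covariances (where QDA beats LDA).
        X0 = rng.multivariate_normal([0, 0], [[1.0, 0.0], [0.0, 0.1]], size=200)
        X1 = rng.multivariate_normal([0, 0], [[0.1, 0.0], [0.0, 1.0]], size=200)
        X = np.vstack([X0, X1]); y = np.array([0] * 200 + [1] * 200)
        params = fit_qda(X, y)
        print("train accuracy:", (predict_qda(X, params) == y).mean())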

    A deep learning framework for automated detection and quantitative assessment of liver trauma

    Full text link
    Background: Both early detection and severity assessment of liver trauma are critical for optimal triage and management of trauma patients. Current trauma protocols utilize computed tomography (CT) assessment of injuries in a subjective and qualitative (vs. quantitative) fashion, shortcomings that could both be addressed by automated computer-aided systems capable of generating real-time, reproducible, and quantitative information. This study outlines an end-to-end pipeline to calculate the percentage of the liver parenchyma disrupted by trauma, an important component of the American Association for the Surgery of Trauma (AAST) liver injury scale, the primary tool to assess liver trauma severity at CT. Methods: This framework comprises deep convolutional neural networks that first generate initial masks of both the liver parenchyma (including normal and affected liver) and the regions affected by trauma from three-dimensional contrast-enhanced CT scans. Next, during the post-processing step, human domain knowledge about the location and intensity distribution of liver trauma is integrated into the model to avoid false positive regions. After generating the liver parenchyma and trauma masks, the corresponding volumes are calculated, and liver parenchymal disruption is computed as the volume of the liver parenchyma that is disrupted by trauma. Results: The proposed model was trained and validated on an internal dataset from the University of Michigan Health System (UMHS) including 77 CT scans (34 with and 43 without liver parenchymal trauma). The proposed segmentation models achieve Dice/recall/precision of 96.13/96.00/96.35% for liver parenchyma and 51.21/53.20/56.76% for liver trauma regions. In volume-based severity analysis, the proposed model yields a linear regression relation of 0.95 in estimating the percentage of liver parenchyma disrupted by trauma. The model avoids false positives for patients without any liver parenchymal trauma, indicating that it generalizes to patients with pre-existing liver conditions, including fatty livers and congestive hepatopathy. Conclusion: The proposed algorithms are able to accurately segment the liver and the regions affected by trauma, and the pipeline accurately estimates the percentage of liver parenchyma affected by trauma. Such a system can aid critical care medical personnel by providing a reproducible, quantitative assessment of liver trauma as an alternative to the sometimes subjective AAST grading system used currently.
    http://deepblue.lib.umich.edu/bitstream/2027.42/173589/1/12880_2022_Article_759.pd
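
    The final, volume-based step of such a pipeline is straightforward to sketch: given binary masks for the whole liver parenchyma and for the trauma region (produced in the paper by 3D CNNs plus post-processing), the percentage of parenchyma disrupted by trauma follows from voxel counts and voxel spacing. The mask shapes and voxel spacing below are illustrative assumptions.

        import numpy as np

        def percent_disruption(parenchyma_mask, trauma_mask, voxel_spacing_mm):
            """Volume of trauma inside the parenchyma, as a percentage of total parenchyma volume."""
            voxel_volume_ml = np.prod(voxel_spacing_mm) / 1000.0  # mm^3 -> mL
            parenchyma_ml = parenchyma_mask.sum() * voxel_volume_ml
            trauma_ml = np.logical_and(parenchyma_mask, trauma_mask).sum() * voxel_volume_ml
            return 100.0 * trauma_ml / parenchyma_ml, parenchyma_ml, trauma_ml

        # Toy 3D masks standing in for CNN outputs on a CT volume.
        parenchyma = np.zeros((64, 64, 64), dtype=bool)
        parenchyma[10:50, 10:50, 10:50] = True
        trauma = np.zeros_like(parenchyma)
        trauma[20:30, 20:30, 20:30] = True
        pct, liver_ml, trauma_ml = percent_disruption(parenchyma, trauma, voxel_spacing_mm=(0.8, 0.8, 2.5))
        print(f"liver {liver_ml:.1f} mL, trauma {trauma_ml:.1f} mL, disrupted {pct:.1f}%")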

    Automated Spleen Injury Detection Using 3D Active Contours and Machine Learning

    No full text
    The spleen is one of the most frequently injured organs in blunt abdominal trauma. Computed tomography (CT) is the imaging modality of choice to assess patients with blunt spleen trauma, which may include lacerations, subcapsular or parenchymal hematomas, active hemorrhage, and vascular injuries. While computer-assisted diagnosis systems exist for other conditions assessed using CT scans, the current method to detect spleen injuries involves manual review of scans by radiologists, which is a time-consuming and repetitive process. In this study, we propose an automated spleen injury detection method using machine learning. CT scans from patients experiencing traumatic injuries were collected from Michigan Medicine and the Crash Injury Research Engineering Network (CIREN) dataset. Ninety-nine scans of healthy and lacerated spleens were split into disjoint training and test sets, with random forest (RF), naive Bayes, support vector machine (SVM), k-nearest neighbors (k-NN) ensemble, and subspace discriminant ensemble models trained via 5-fold cross-validation. Of these models, the random forest performed best, achieving an area under the receiver operating characteristic curve (AUC) of 0.91 and an F1 score of 0.80 on the test set. These results suggest that an automated, quantitative assessment of traumatic spleen injury has the potential to enable faster triage and improve patient outcomes.
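
    A minimal sketch of the model-selection and evaluation stage described above: a random forest assessed via 5-fold cross-validation and then scored with AUC and F1 on a disjoint test set (scikit-learn). The synthetic features stand in for whatever descriptors are extracted from the segmented spleens, which the abstract does not enumerate.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import f1_score, roc_auc_score
        from sklearn.model_selection import cross_val_score, train_test_split

        # Stand-in features: 99 "scans" with binary healthy-vs-lacerated labels.
        X, y = make_classification(n_samples=99, n_features=20, n_informative=6, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        cv_auc = cross_val_score(rf, X_train, y_train, cv=5, scoring="roc_auc")
        print("5-fold CV AUC:", cv_auc.mean().round(3))

        rf.fit(X_train, y_train)
        proba = rf.predict_proba(X_test)[:, 1]
        print("test AUC:", round(roc_auc_score(y_test, proba), 3))
        print("test F1: ", round(f1_score(y_test, rf.predict(X_test)), 3))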