108 research outputs found

    A Comparison of RF-DNA Fingerprinting Using High/Low Value Receivers with ZigBee Devices

    Get PDF
    The ZigBee specification provides a niche capability, extending the IEEE 802.15.4 standard to provide a wireless mesh networking solution. ZigBee-based devices require minimal power and provide a relatively long-distance, inexpensive, and secure means of networking. The technology is heavily utilized for energy management, Industrial Control System (ICS) automation, and remote monitoring of Critical Infrastructure (CI) operations; it also supports applications in the military and civilian health care sectors. ZigBee networks lack security below the Network layer of the OSI model, leaving them vulnerable to open-source hacking tools that allow malicious attacks such as MAC spoofing or Denial of Service (DoS). A method known as RF-DNA Fingerprinting provides an additional level of security at the Physical (PHY) layer, where the transmitted waveform of a device is examined rather than its bit-level credentials, which can be easily manipulated. RF-DNA fingerprinting allows a unique, human-like signature to be obtained for a device and a subsequent decision made whether to grant or deny access to a secure network. Two National Instruments (NI) receivers were used here to simultaneously collect RF emissions from six Atmel AT86RF230 transceivers. The time-domain response of each device was used to extract features and generate unique RF-DNA fingerprints. These fingerprints were used to perform Device Classification using two discrimination processes known as MDA/ML and GRLVQI. Each process (classifier) was used to examine both the Full-Dimensional (FD) and reduced-dimensional feature sets for the high-value PXIe and low-value USRP receivers. The reduced feature sets were determined using Dimensional Reduction Analysis (DRA) for both quantitative and qualitative subsets. Additionally, each classifier performed Device Classification using a hybrid interleaved set of fingerprints from both receivers.
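The core of RF-DNA fingerprinting as described above is extracting statistical features from regions of a device's time-domain response. A minimal sketch of that idea (illustrative only: the region count, the choice of variance/skewness/kurtosis, and the synthetic signal are assumptions, not details from the thesis):

```python
import math

def region_stats(samples):
    """Variance, skewness, and kurtosis of one signal region."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    sd = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in samples) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in samples) / (n * var ** 2)
    return [var, skew, kurt]

def rf_dna_fingerprint(amplitude, n_regions=4):
    """Split a time-domain amplitude response into equal regions and
    concatenate per-region statistics into one fingerprint vector."""
    step = len(amplitude) // n_regions
    fp = []
    for i in range(n_regions):
        fp.extend(region_stats(amplitude[i * step:(i + 1) * step]))
    return fp

# Synthetic stand-in for a captured transceiver emission
signal = [math.sin(0.1 * t) + 0.01 * (t % 7) for t in range(400)]
fp = rf_dna_fingerprint(signal)
print(len(fp))  # 4 regions x 3 statistics = 12 features
```

The resulting feature vectors would then be fed to a classifier (MDA/ML, GRLVQI, or similar) for device discrimination.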

    Local feature selection for multiple instance learning with applications.

    Get PDF
    Feature selection is a data processing approach that has been successfully and effectively used in developing machine learning algorithms for various applications. It has been proven to effectively reduce the dimensionality of the data and increase the accuracy and interpretability of machine learning algorithms. Conventional feature selection algorithms assume that there is an optimal global subset of features for the whole sample space. Thus, only one global subset of relevant features is learned. An alternative approach is based on the concept of Local Feature Selection (LFS), where each training sample can have its own subset of relevant features. Multiple Instance Learning (MIL) is a variation of traditional supervised learning, also known as single instance learning. In MIL, each object is represented by a set of instances, or a bag. While bags are labeled, the labels of their instances are unknown. The ambiguity of the instance labels makes the feature selection for MIL challenging. Although feature selection in traditional supervised learning has been researched extensively, there are only a few methods for the MIL framework. Moreover, localized feature selection for MIL has not been researched. This dissertation focuses on developing a local feature selection method for the MIL framework. Our algorithm, called Multiple Instance Local Salient Feature Selection (MI-LSFS), searches the feature space to find the relevant features within each bag. We also propose a new multiple instance classification algorithm, called MILES-LFS, that integrates information learned by MI-LSFS during the feature selection process to identify a reduced subset of representative bags and instances. We show that using a more focused subset of prototypes can improve the performance while significantly reducing the computational complexity. 
Other applications of the proposed MI-LSFS include: a new method that uses our MI-LSFS algorithm to explore and investigate the features learned by a Convolutional Neural Network (CNN) model; a visualization method for CNN models, called Gradient-weighted Sample Activation Map (Grad-SAM), that uses the locally learned features of each sample to highlight its relevant and salient parts; and a novel explanation method, called Classifier Explanation by Local Feature Selection (CE-LFS), to explain the decisions of trained models. The proposed MI-LSFS and its applications are validated using several synthetic and real data sets. We report and compare quantitative measures such as Rand Index, Area Under Curve (AUC), and accuracy. We also provide qualitative measures by visualizing and interpreting the selected features and their effects.
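The bag-of-instances representation and the notion of a per-bag feature subset can be sketched as follows. This is an illustrative stand-in for the local feature selection idea, not the MI-LSFS search itself; the scoring heuristic and the toy data are invented:

```python
def local_feature_scores(bag, negative_bags):
    """Illustrative per-bag feature relevance: how far the bag's most
    extreme instance value lies from the mean over negative instances,
    per feature. A stand-in for MI-LSFS's search, not the real method."""
    n_feat = len(bag[0])
    neg = [inst for b in negative_bags for inst in b]
    scores = []
    for j in range(n_feat):
        neg_mean = sum(x[j] for x in neg) / len(neg)
        scores.append(max(abs(x[j] - neg_mean) for x in bag))
    return scores

# A bag is a list of instance vectors; only the bag is labeled.
pos_bag = [[0.1, 5.0], [0.2, 4.8], [0.1, 0.2]]   # feature 1 is salient here
neg_bags = [[[0.1, 0.1], [0.3, 0.2]], [[0.2, 0.0]]]
scores = local_feature_scores(pos_bag, neg_bags)
best = scores.index(max(scores))
print(best)  # feature index 1 is selected as locally relevant for this bag
```

The point of the localized view is that a different bag could select a different feature subset, unlike a global feature selector.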

    Improving random forests by feature dependence analysis

    Get PDF
    Random forests (RFs) have been widely used for supervised learning tasks because of their high prediction accuracy, good model interpretability, and fast training process. However, they are not able to learn from local structures as convolutional neural networks (CNNs) do when there is high dependency among features. They also cannot utilize features that are jointly dependent on the label but marginally independent of it. In this dissertation we present two approaches that address these two problems, respectively, through dependence analysis. First, a local feature sampling (LFS) approach is proposed to learn and use the locality information of features to group dependent/correlated features for training each tree. For image data, the local information of features (pixels) is defined by the 2-D grid of the image. For non-image data, we provide multiple ways of estimating this local structure. Our experiments show that RF with LFS has reduced correlation and improved accuracy on multiple UCI datasets. To address the second issue, we propose a way to categorize features as marginally dependent features and jointly dependent features, the latter defined by minimum dependence sets (MDS's) or by stronger dependence sets (SDS's). Algorithms to identify MDS's and SDS's are provided. We then present a feature dependence mapping (FDM) approach to map the jointly dependent features to another feature space where they are marginally dependent. We show that by using FDM, decision trees and RFs have improved prediction performance on artificial datasets and a protein expression dataset.
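For image data, the local feature sampling idea amounts to giving each tree a spatially contiguous block of pixels instead of a uniform random feature subset. A minimal sketch (the grid size, patch size, and sampling scheme are assumptions for illustration, not the dissertation's exact procedure):

```python
import random

def local_patch_features(width, height, patch):
    """Sample a contiguous patch of pixel indices from a 2-D grid --
    the LFS idea of training each tree on spatially dependent features,
    in place of the uniform feature subsampling of a standard RF."""
    x0 = random.randrange(width - patch + 1)
    y0 = random.randrange(height - patch + 1)
    return [(y0 + dy) * width + (x0 + dx)
            for dy in range(patch) for dx in range(patch)]

random.seed(0)
# Each tree in the forest would draw its own patch of a 28x28 image.
feats = local_patch_features(28, 28, 5)
print(len(feats))  # one tree trains on a 5x5 = 25-feature local block
```

Drawing a fresh patch per tree keeps trees decorrelated while letting each one exploit local pixel dependence.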

    Vector Quantization Codebook Design and Application Based on the Clonal Selection Algorithm

    Get PDF
    In the area of digital image compression, vector quantization is a simple, effective, and attractive method. After introducing the basic principle of vector quantization and the classical algorithms for vector quantization codebook design, the paper presents a manifold-distance-based clonal selection codebook design algorithm (MDCSA): a splitting method produces the initial codebook, which is then optimized with a clonal selection clustering method based on manifold distance. In experiments, MDCSA is compared with a genetic codebook design algorithm and the LBG algorithm. The results show that MDCSA is the more suitable evolutionary algorithm for image compression.
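The classical LBG baseline that MDCSA is compared against can be sketched compactly: start from the global centroid, split each codeword, and refine by nearest-neighbour reassignment. This is the textbook algorithm, not code from the paper; the toy data and split perturbation `eps` are illustrative:

```python
def lbg_codebook(vectors, size, eps=1e-3, iters=20):
    """Classical LBG codebook design: split codewords starting from the
    global centroid, then refine each level with Lloyd iterations."""
    dim = len(vectors[0])
    centroid = [sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]
    book = [centroid]
    while len(book) < size:
        # Split every codeword into a +eps / -eps pair
        book = [[c + s * eps for c in w] for w in book for s in (1, -1)]
        for _ in range(iters):                      # Lloyd refinement
            cells = [[] for _ in book]
            for v in vectors:
                i = min(range(len(book)),
                        key=lambda i: sum((a - b) ** 2
                                          for a, b in zip(v, book[i])))
                cells[i].append(v)
            book = [[sum(v[d] for v in cell) / len(cell) for d in range(dim)]
                    if cell else w for cell, w in zip(cells, book)]
    return book

data = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]]
book = lbg_codebook(data, 2)
print(sorted(round(w[0], 2) for w in book))  # [0.05, 0.95]: one codeword per cluster
```

The clonal selection approach replaces the Lloyd refinement with an immune-inspired population search under a manifold distance, aiming to escape the local optima LBG is prone to.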

    Object detection for big data

    Get PDF
    "May 2014."Dissertation supervisor: Dr. Tony X. Han.Includes vita.We have observed significant advances in object detection over the past few decades and gladly seen the related research has began to contribute to the world: Vehicles could automatically stop before hitting any pedestrian; Face detectors have been integrated into smart phones and tablets; Video surveillance systems could locate the suspects and stop crimes. All these applications demonstrate the substantial research progress on object detection. However learning a robust object detector is still quite challenging due to the fact that object detection is a very unbalanced big data problem. In this dissertation, we aim at improving the object detector's performance from different aspects. For object detection, the state-of-the-art performance is achieved through supervised learning. The performances of object detectors of this kind are mainly determined by two factors: features and underlying classification algorithms. We have done thorough research on both of these factors. Our contribution involves model adaption, local learning, contextual boosting, template learning and feature development. 
Since object detection is an unbalanced problem, in which positive examples are hard to collect, we propose to adapt a general object detector to a specific scenario with a few positive examples. To handle the large intra-class variation inherent in the object detection task, we propose a local adaptation method that learns a set of efficient and effective detectors for a single object category. To extract effective context from the huge amount of negative data in object detection, we introduce a novel contextual descriptor to iteratively improve the detector. To detect objects with a depth sensor, we design an effective depth descriptor. To distinguish object categories with similar appearance, we propose a local feature embedding and template selection algorithm, which has been successfully incorporated into a real-world fine-grained object recognition application. All of the proposed algorithms and features are detailed in the dissertation. Includes bibliographical references (pages 117-130).
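A standard way to cope with the huge negative set the abstract mentions is hard-negative mining: after each training round, keep only the most confidently misclassified negatives. A generic sketch of that bootstrapping step (an illustrative stand-in, not the dissertation's exact procedure; the scores, labels, and `keep` parameter are invented):

```python
def mine_hard_negatives(scores, labels, keep):
    """Select the highest-scoring negatives (confident false alarms)
    to emphasize in the next detector training round."""
    negs = [(s, i) for i, (s, y) in enumerate(zip(scores, labels)) if y == 0]
    negs.sort(reverse=True)           # most confident false positives first
    return [i for _, i in negs[:keep]]

scores = [0.9, 0.2, 0.8, 0.1, 0.7]   # detector confidences per window
labels = [1, 0, 0, 0, 0]             # one positive, many negatives
hard = mine_hard_negatives(scores, labels, keep=2)
print(hard)  # [2, 4]: the two most confident false positives
```

Iterating this loop concentrates training effort on the small fraction of negatives the current detector actually confuses.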

    Precision spectroscopy of the 2S-6P transition in atomic deuterium

    Get PDF
    Quantum electrodynamics (QED) forms the basis for all other quantum field theories, upon which the Standard Model of particle physics is constructed. Currently, it is clear that our fundamental understanding of nature is incomplete, such that the Standard Model is expected to be modified or extended by new particles or interactions. One way to explore these frontiers of fundamental physics is to perform precision measurements. This thesis studies precision laser spectroscopy of deuterium, where the transition energies between different energy states of the electron bound to the nucleus can be accurately measured with techniques such as ultra-stable lasers and the frequency comb. Due to the simplicity of hydrogen-like atoms, their energy levels can be precisely calculated from bound-state QED and confronted with experiment at a relative accuracy on the order of $10^{-12}$. Such a comparison between theory and experiment is linked to the determination of fundamental constants, which enter the theory as parameters. Only if more independent measurements are available than there are parameters can the theory be tested. The comparison between theory and laser spectroscopy in deuterium concerns the Rydberg constant $R_\infty$ and the deuteron charge radius $r_d$. This requires at least two different transition frequency measurements to determine those constants, and more measurements to test the theory.
Contrary to hydrogen, only a few sufficiently accurate transition frequency measurements are available in deuterium. This thesis presents the first study of the 2S-6P transition in deuterium, which can be combined with the existing 1S-2S transition frequency measurement to obtain $R_\infty$ and $r_d$. Together with the 2S-2P transition measurement from muonic deuterium, this determination provides a theory test. Such a comparison is important to shine light on the persisting discrepancy between the result from muonic deuterium and the average of previous data from electronic deuterium, as well as tensions between recent results from hydrogen spectroscopy. In contrast to hydrogen, precision spectroscopy of the 2S-6P transition in deuterium is complicated by the simultaneous excitation of unresolved hyperfine components, possibly leading to unresolved quantum interference. This thesis studies the possible systematic effects associated with this complication. Along with analytical perturbative models, supercomputer simulations are performed to analyze these effects. It is shown that quantum interference is strongly suppressed for all 2S-$n$P transitions in deuterium, making precision measurements of these transitions possible. Furthermore, another effect is studied in deuterium compared to hydrogen, which arises from the light force acting on the atoms in the standing wave of the spectroscopy light. Despite additional state manifolds from the simultaneous excitation of unresolved hyperfine components, it is shown that this so-called "light force shift" is comparable to the well-understood effect in hydrogen. The main challenge in measuring the one-photon 2S-6P transition in deuterium is the first-order Doppler shift. Therefore, a large part of this thesis contributes to the improved active fiber-based retroreflector (AFR), which is a technique to suppress this shift.
The central part of the AFR is the fiber collimator, which is required to produce high-quality counter-propagating laser beams. Designing and characterizing such a collimator for the near-ultraviolet wavelength of the 2S-6P transition is one of the main achievements of the improved AFR. The results of this work can be of interest for other applications where a high beam quality or wavefront-retracing beams are important. Furthermore, the limitations of the AFR arising from single-mode polarization-maintaining fibers are investigated. Along with other improvements, polarization monitoring of the spectroscopy laser beams has been implemented. Various characterization measurements are presented to demonstrate the performance of the improved AFR. Finally, this thesis presents a preliminary measurement of the 2S-6P transition in deuterium. For this measurement, a new cryostat has been installed in the apparatus, which improves the stability of the spectroscopy signal through reduced temperature fluctuations. The generation of the cryogenic deuterium atomic beam has been analyzed as a function of the nozzle temperature, an important study for future spectroscopy measurements. Furthermore, different systematic effects have been investigated for the precision measurement, including atomic beam misalignment and stray electric fields. It is demonstrated that a precision measurement of the 2S-6P transition in deuterium with an uncertainty similar to that in hydrogen is feasible. According to the preliminary uncertainty budget, the 2S$_{1/2}$-6P$_{1/2}$ transition frequency in deuterium can be determined to 1.7 kHz, which corresponds to a relative accuracy of $2.3 \times 10^{-12}$.
Together with the 1S-2S measurement, this result can already enable the most accurate determinations of the deuteron radius and the Rydberg constant from electronic deuterium, with uncertainties of $\delta R_\infty \simeq 5\times 10^{-5}\,\mathrm{m}^{-1}$ and $\delta r_d \simeq 0.002\,\mathrm{fm}$, respectively. This result sets the stage for a future precision measurement, where the 2S-6P transition frequency is expected to be determined with an accuracy similar to that in hydrogen, which would correspond to $\delta R_\infty \simeq 2\times 10^{-5}\,\mathrm{m}^{-1}$ and $\delta r_d \simeq 0.0007\,\mathrm{fm}$. The comparison to the result from muonic deuterium would then allow bound-state QED to be tested at the level of $9 \times 10^{-13}$.
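Schematically, the determination of $R_\infty$ and $r_d$ from two measured transitions amounts to solving a two-by-two linear system, since each transition frequency depends linearly on $R_\infty$ and, to leading order through the nuclear-size shift, on $r_d^2$ (the coefficients $f_{12}$, $f_{26}$, $C_{12}$, $C_{26}$ are schematic placeholders, not values from the thesis):

```latex
% Two measured transitions, two unknowns (R_infinity and r_d):
\nu_{1\mathrm{S}\text{-}2\mathrm{S}} \approx R_\infty f_{12} + C_{12}\, r_d^2,
\qquad
\nu_{2\mathrm{S}\text{-}6\mathrm{P}} \approx R_\infty f_{26} + C_{26}\, r_d^2 .
```

Two measurements fix the two unknowns; each additional independent measurement, such as the 2S-2P transition in muonic deuterium, overdetermines the system and thereby tests bound-state QED.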

    Symmetry and Topology in Superconductors - Odd-frequency pairing and edge states -

    Full text link
    Superconductivity is a phenomenon in which macroscopic quantum coherence appears due to the pairing of electrons. This offers a fascinating arena for studying the physics of broken gauge symmetry. However, gauge invariance is not the only important symmetry in superconductors: the symmetry properties of the pairing, i.e., the parity and spin-singlet/spin-triplet character, determine the physical properties of the superconducting state. Recently it has been recognized that there is an important third symmetry of the pair amplitude, namely even or odd parity with respect to frequency. Conventional uniform superconducting states correspond to even-frequency pairing, but a recent finding is that an odd-frequency pair amplitude arises quite ubiquitously in spatially non-uniform situations. In particular, this is the case for the Andreev bound state (ABS) appearing at the surface/interface of a sample. The other important recent development concerns the nontrivial topological aspects of superconductors. Just as band insulators are classified by topological indices into (i) conventional insulators, (ii) quantum Hall insulators, and (iii) topological insulators, so are gapped superconductors. The influence of the nontrivial topology of the bulk states appears at the edge or surface of the sample; in superconductors, this leads to the formation of zero-energy ABSs (ZEABSs). Therefore, the ABSs of superconductors are where symmetry and topology meet, offering a stage for rich physics. In this review, we discuss the physics of ABSs from the viewpoint of odd-frequency pairing, the topological bulk-edge correspondence, and the interplay of these two issues. We describe how the symmetry of the pairing and the topological indices determine the absence/presence of the ZEABS, its energy dispersion, and its properties as Majorana fermions. Comment: 91 pages, 38 figures, Review article, references added

    Advances in SCA and RF-DNA Fingerprinting Through Enhanced Linear Regression Attacks and Application of Random Forest Classifiers

    Get PDF
    Radio Frequency (RF) emissions from electronic devices expose security vulnerabilities that can be used by an attacker to extract otherwise unobtainable information. Two realms of study were investigated here: the exploitation of 1) unintentional RF emissions in the field of Side Channel Analysis (SCA), and 2) intentional RF emissions from physical devices in the field of RF-Distinct Native Attribute (RF-DNA) fingerprinting. Statistical analysis of the linear model fit to measured SCA data in Linear Regression Attacks (LRA) improved performance, achieving a 98% success rate for AES key-byte identification from unintentional emissions. However, the presence of non-Gaussian noise required the use of a non-parametric classifier to further improve key guessing attacks. Random Forest (RndF) based profiling attacks were successful on very high-dimensional data sets, correctly guessing all 16 bytes of the AES key with a 50,000-variable dataset. With variable reduction, Random Forest still outperformed the Template Attack for this data set, requiring fewer traces and achieving higher success rates with a lower misclassification rate. Finally, the use of a RndF classifier is examined for intentional RF emissions from ZigBee devices to enhance security using RF-DNA fingerprinting. RndF outperformed the parametric MDA/ML and non-parametric GRLVQI classifiers, providing up to GS = 18.0 dB improvement (reduction in required SNR). Network penetration tests, measured using rogue ZigBee devices, show that the RndF method improved rogue rejection in noisier environments, with gains of up to GS = 18.0 dB realized over previous methods.
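The key-byte recovery idea behind such attacks can be illustrated with a toy model: rank key guesses by how well a Hamming-weight leakage model fits the observed traces. This is a simplified, noise-free sketch of the regression-style approach, not the thesis's LRA (which fits a bitwise linear model per guess); the identity S-box stands in for the real AES S-box, and the plaintext set is invented:

```python
SBOX = list(range(256))          # identity stand-in for the AES S-box

def hw(x):
    """Hamming weight: the usual power-leakage model."""
    return bin(x).count("1")

def best_key(plaintexts, leakages):
    """Return the key byte whose predicted leakage best fits the traces."""
    def fit(k):
        model = [hw(SBOX[p ^ k]) for p in plaintexts]
        return sum((m - l) ** 2 for m, l in zip(model, leakages))
    return min(range(256), key=fit)

secret = 0x3A
pts = list(range(0, 256, 7))                    # 37 known plaintext bytes
traces = [hw(SBOX[p ^ secret]) for p in pts]    # noise-free toy "traces"
print(hex(best_key(pts, traces)))               # a key matching every trace
```

With real measurements the fit is over noisy traces, which is where the statistical analysis of the regression residuals, and ultimately the non-parametric RndF classifier, earns its keep.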

    Quantum Inspired Machine Learning Algorithms for Adaptive Radiotherapy

    Full text link
    Adaptive radiotherapy (ART) refers to the modification of radiotherapy treatment plans in response to patient anatomical and physiological changes over the course of treatment and has been recognized as an important step towards maximizing the curative potential of radiation therapy through personalized medicine. This dissertation explores the novel application of quantum physics principles and deep machine learning techniques to address three challenges towards the clinical implementation of ART: (1) efficient calculation of optimal treatment parameters, (2) adaptation to geometrical changes over the treatment period while mitigating associated uncertainties, and (3) understanding the relationship between individual patient characteristics and clinical outcomes. Applications of quantum and machine learning modeling in other fields support the potential of this novel approach. For efficient optimization, we developed and tested a quantum-inspired stochastic algorithm for intensity-modulated radiotherapy: quantum tunnel annealing (QTA). By modeling the probability of accepting a higher-energy solution on that of a particle tunneling through a potential energy barrier, QTA features an additional degree of freedom not shared by traditional stochastic optimization methods such as simulated annealing (SA). QTA achieved convergence up to 46.6% (26.8%) faster than SA for beamlet weight optimization and direct aperture optimization, respectively. The results of this study suggest that the additional degree of freedom provided by QTA can improve convergence rates and achieve a more efficient and, potentially, effective treatment planning process. For geometrical adaptation, we investigated the feasibility of predicting patient changes across a fractionated treatment schedule using two approaches. The first was based on a joint framework (referred to as QRNN) employing quantum mechanics in combination with deep recurrent neural networks (RNNs).
The second approach was developed based on a classical framework (MRNN), which modelled patient anatomical changes as a Markov process. We evaluated and compared these two approaches’ performance characteristics using a dataset of 125 head and neck cancer patients who received fractionated radiotherapy. The MRNN framework exhibited slightly better performance than the QRNN framework, with MRNN(QRNN) validation area under the receiver operating characteristic curve (AUC) scores [95% CI] of 0.742 [0.721-0.763] (0.675 [0.64-0.71]), 0.709 [0.683-0.735] (0.656 [0.634-0.677]), 0.724 [0.688-0.76] (0.652 [0.608-0.696]), and 0.698 [0.682-0.714] (0.605 [0.57-0.64]) for system state vector sizes of 4, 6, 8, and 10, respectively. A similar trend was also observed when the fully trained models were applied to an external testing dataset of 20 patients. These results suggest that these stochastic models provide added value in predicting patient changes during the course of adaptive radiotherapy. Towards understanding the relationship between patient characteristics and clinical outcomes, we performed a series of studies which investigated the use of quantitative patient features for predicting clinical outcomes in laryngeal cancer patients who underwent treatment in a bioselection paradigm based on surgeon-assessed response to induction chemotherapy. Among the features investigated from CT scans taken before and after induction chemotherapy, two (gross tumor volume change between pre- and post-induction chemotherapy, and nodal stage) had prognostic value for predicting patient outcomes using standard regression models. Artificial neural networks did not improve predictive performance in this case. 
Taken together, the significance of these studies lies in their contribution to the body of knowledge of medical physics and in their demonstration of the use of novel techniques which incorporate quantum mechanics and machine learning as a joint framework for treatment planning optimization and prediction of anatomical patient changes over time. PhD, Applied Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169954/1/jpakela_1.pd
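The simulated annealing baseline that QTA extends can be sketched in a few lines: accept a worse solution with probability exp(-dE/T) under a cooling schedule. This is the textbook SA algorithm on an invented toy objective; per the abstract, QTA adds a tunneling-width degree of freedom to this acceptance rule, whose exact form is not reproduced here:

```python
import math
import random

def simulated_annealing(cost, x0, steps=2000, t0=1.0):
    """Baseline stochastic optimizer: Metropolis acceptance
    exp(-dE/T) with a 1/step cooling schedule."""
    x, best = x0, x0
    for step in range(1, steps + 1):
        t = t0 / step                          # cooling schedule
        cand = x + random.uniform(-0.5, 0.5)   # local proposal
        de = cost(cand) - cost(x)
        if de < 0 or random.random() < math.exp(-de / t):
            x = cand                           # accept (always if better)
        if cost(x) < cost(best):
            best = x
    return best

random.seed(1)
best = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=0.0)
print(round(best, 2))  # converges near the minimum at x = 3
```

In treatment planning, the cost would be the plan objective over beamlet weights or apertures; the claimed advantage of QTA is faster convergence of exactly this loop.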