Information limits of imaging through highly diffusive materials using spatiotemporal measurements of diffuse photons
Conventional medical imaging instruments are bulky, expensive, and use harmful ionising radiation. Combining ultrafast single-photon detectors with pulsed laser sources at optical wavelengths has the potential to offer inexpensive, safe, and potentially wearable alternatives. However, photons at optical wavelengths are strongly scattered by biological tissue, which corrupts direct imaging information about regions of absorbing interactions below the tissue surface. The work in this thesis studies the potential of measuring indirect imaging information by resolving diffuse photon measurements in space and time. The practical limits of imaging through highly diffusive material, e.g., biological tissue, are explored and validated with experimental measurements. The ill-posed problem of using the information in diffuse photon measurements to reconstruct images at the limits of the highly diffusive regime is tackled using probabilistic machine learning, demonstrating the potential of migrating diffuse optical imaging techniques beyond the currently accepted limits and underlining the importance of uncertainty quantification in reconstructions. The thesis concludes with a challenging biomedical optics experiment to transmit photons diametrically through an adult human head. This problem was tackled experimentally and numerically using an anatomically accurate Monte Carlo simulation, which uncovered key practical considerations when detecting photons at the extreme limits of the highly diffusive regime. Although the experimental measurements were inconclusive, comparisons with the numerical results were promising. More in-depth numerical simulations indicated that light can be guided in regions of low scattering and absorption to reach deep areas inside the head, and that photons can, in principle, be transmitted through the entire diameter of the head.
The collective evidence presented in this thesis reveals the potential of diffuse optical imaging to extend beyond the currently accepted limits and non-invasively image deep regions of the human body and brain using optical wavelengths.
Studies of hybrid pixel detectors for use in Transmission Electron Microscopy
Hybrid pixel detectors (HPDs) are a class of direct electron detectors that have been adopted for use in a wide variety of experimental modalities across all branches of electron microscopy. Nevertheless, this does not preclude the possibility of further improvement and optimisation of their performance for specific applications and increasing the range of experiments for which they are suitable. The aims of this thesis are two-fold. Firstly, to develop a more comprehensive understanding of the current generation HPDs using Si sensors, with a view to optimising their design. Secondly, to determine the advantages of alternative sensor materials that, in principle, should improve the performance of HPDs in transmission electron microscopy (TEM) due to their increased stopping power.
The first three chapters review the relevant theoretical background. Chapter 1 covers the physics underpinning the performance of semiconductor-based sensors in electron microscopy, the operation of detectors more generally, and the theory underlying the metrics used to evaluate detector performance. Chapter 2 introduces TEM as a key tool in the study of nano- and atomic-scale systems, along with an overview of the detector technologies used in TEM. Chapter 3 completes the background material with a description of the experimental methods and software packages used to acquire the results presented in the latter half of the thesis.
Chapter 4, the first results chapter, presents a comparison of the performance of Medipix3 detectors with Si sensors with various combinations of pixel pitch and sensor thickness for 60 keV and 200 keV electrons. In Chapter 5, simulations of the interactions of electrons with energies ranging from 30-300 keV with GaAs:Cr and CdTe/CZT, two of the most viable alternatives to Si for use in the sensors of HPDs, are compared with simulations of the interactions of electrons with Si. A comparative study of the performance of a Medipix3 device with a GaAs:Cr sensor against that of a Si sensor of the same thickness and pixel pitch, for electrons with energies ranging from 60-300 keV, is presented in Chapter 6. Also included in this chapter are the results of investigations into the defects present in the GaAs:Cr sensor material and how these affect device performance. These consist of confocal scanning transmission electron microscopy scans used to estimate the size and shape of individual pixels and how these relate to the linearity of the pixels' response, as well as studies of how the efficacy of a standard flat-field correction depends on the incident electron flux. In the final results chapter, the focus shifts to preliminary measurements of the response of an integrating detector with a GaAs:Cr sensor to electrons. These initial experimental measurements prompted further simulations investigating how the backside contact of GaAs:Cr sensors can be improved for use with electrons.
The mass composition of massive early-type galaxies
It is thought that most galaxies in the local universe are the outcome of several generations of hierarchical mergers of progenitor galaxies. Massive early-type galaxies (ETGs) occupy the top ranks of this hierarchy. They also harbour the biggest supermassive black holes (SMBHs) in the local universe.
The merger framework can explain many of the observed properties of different kinds of ETGs. However, the exact mass composition of these objects remains elusive. For one, the local SMBH mass function is poorly understood and barely sampled at the high-mass end. We also do not know how much galaxy mass is contributed by stars and how much by dark matter, because an unknown fraction of stars are low-luminosity dwarf stars, and another unknown fraction of more massive stars have turned into remnants; both of these contribute a significant amount of mass to galaxies, but little or no light. The stellar initial mass function (IMF) underlying the stellar population(s) of a galaxy encompasses this information. Different studies, using different methods, have claimed that the IMF in massive ETGs is different from that of less massive galaxies like the Milky Way, but these results have thus far remained overwhelmingly contradictory at the level of individual galaxies.
Accurate measurements of non-parametric line-of-sight velocity distributions (LOSVDs) in ETGs can be analysed with Schwarzschild orbit models to produce precise galaxy mass decompositions. In this thesis, I measure the full non-parametric shape of LOSVDs all the way to the escape velocity of each galaxy's gravitational potential for a total of 9 + 1 massive ETGs using our kinematic fitting code WINGFIT. For eight of the galaxies I construct Schwarzschild models based on these kinematics. I present the discovery of one of so far only four SMBHs more massive than 10^10 M_sun with direct dynamical detections, and two new SMBH-host scaling relations between M_BH and the central surface brightness, as well as the central surface mass density, of massive galaxies. In the future, these empirical relations can be used for a targeted sampling of the high-mass end of the local SMBH mass function. For seven of the ETGs, I present dynamical evidence for internal radial gradients of the IMF. Such gradients can potentially explain the contradictions between previous IMF measurements from different methods. These measurements suggest that the centers of ETGs contain very spatially concentrated regions (r < 1 kpc) of stellar populations with an enhanced fraction of either low-luminosity dwarfs or remnants relative to stellar populations in the rest of the universe.
Amélioration et désagrégation des données GRACE et GRACE-FO pour l'estimation des variations de stock d'eau terrestre et d'eau souterraine à fine échelle
Abstract : Groundwater is an essential natural resource for domestic, industrial and agricultural uses worldwide. Unfortunately, climate change, excess withdrawal, population growth and other human impacts can affect its dynamics and availability. These excessive demands can lead to lower groundwater levels and depletion of aquifers, and potentially to increased water scarcity. Despite the abundance of lakes and rivers in many parts of Canada, the potential depletion of groundwater remains a major concern, particularly in the southern Prairie. Groundwater is traditionally monitored through in-situ piezometric wells, which are scarcely distributed in Canada and many parts of the world. Consequently, its quantities, distribution and availability are not well known, both spatially and temporally. Fortunately, the launch of the twin satellite systems of Gravity Recovery And Climate Experiment (GRACE) in 2002 and its successor, GRACE Follow-On in 2018 (GRACE-FO) opened up new ways to study groundwater changes. These platforms measure the variations of the Earth's gravity field, which in turn can be related to terrestrial water storage (TWS). The main objective of this thesis is to improve the estimation and spatial resolution of TWS and related groundwater storage changes (GWS), using GRACE and GRACE-FO data. This challenge was addressed through four specific objectives, where original approaches were developed in each case. The first objective was to understand and better take into account the uncertainties associated with the hydrological models (the Global Land Data Assimilation System (GLDAS), and the Water Global Assessment Prognosis hydrological model (WGHM)), generally used in the processing of GRACE or GRACE-FO data. The thesis proposes a new approach based on the Gauss-Markov model to estimate the optimal hydrological parameters from GLDAS, considering six different surface schemes. 
The Förstner estimator and the best quadratic unbiased estimator of the variance components were used with a least-squares method to estimate the optimal hydrological parameters and their errors. The comparison of the optimal TWS derived from GLDAS to the TWS derived from WGHM showed a very significant correlation of r = 0.91. The correlation obtained with GRACE was r = 0.71, which increased to r = 0.81 when the groundwater component was removed from GRACE. Compared to WGHM and GRACE, the optimal TWS calculated from GLDAS had much smaller errors (RMSE = 7 to 8.5 mm) than those obtained when individual surface schemes were considered (RMSE = 10 to 21 mm), demonstrating the performance of the proposed approach. The second specific objective was to understand regional variations in TWS and their uncertainties. The approach was applied over the Canadian landmass. To achieve this, the thesis proposes a new modeling of glacial isostatic adjustment (GIA) uplift in Canada. The comparison of the results of the proposed model and three other existing models with data from 149 very high precision GPS stations demonstrated its superiority in the region considered. The proposed regional approach was then used to extract TWS by correcting for the effects of GIA and leakage. The analyses showed patterns of significant seasonal variations in TWS, with values ranging between -160 mm and 80 mm. Overall, TWS showed a positive slope of temporal variations over the Canadian landmass (+6.6 mm/year) with GRACE and GRACE-FO combined. The slope reached up to 45 mm/year in the Hudson Bay region. The third objective was to extract the GWS component using a comprehensive, rigorous approach to reconstruct, refine and map the variations of GWS and its associated uncertainties. The approach used the methods proposed in the two previous objectives.
Moreover, a new filtering approach called Gaussian-Han-Fan (GHF) was developed and integrated into the process in order to have a more robust procedure for extracting information from GRACE and GRACE-FO data. The performance and merits of the proposed filter compared to previous filters were analyzed. Then, the groundwater signal was reconstructed by taking into account all the other components, including surface water variations (estimated using satellite altimetry data). The results showed that the average variations of GWS are between -200 mm and +230 mm in the Canadian Prairies. The maximum and minimum GWS trends were found around the Hudson Bay region (approximately 55 mm/year) and the southern Prairies (approximately -20 mm/year), respectively. The error on GWS was around 10% (about 19 mm). The estimated GWS changes were validated using data from 116 in-situ wells. This validation showed a significant level of correlation (r > |0.70|, and in some cases r > |0.90|, P < 10^-4, RMSE < 30 mm). Finally, the last objective was to improve the spatial resolution of the results extracted from GRACE data from 1° to 0.25°. To this end, a new approach based on condition adjustment was first proposed to estimate the optimal hydrological parameters and their errors; it differs slightly from the method proposed in the first objective. The corrections required to rigorously extract the TWS anomalies and their uncertainties were then performed following the methodology presented in objective 3. Next, a new method based on spectral-spatial combination was developed to derive TWS anomalies at the finer scale (0.25°), by optimally combining the GRACE models and the hydrological parameters. Finally, the groundwater anomalies were derived using the estimated TWS anomalies.
Validations were carried out using data from 75 wells in an unconfined aquifer in Alberta. They demonstrate the potential of the proposed approach, with a highly significant correlation of r = 0.80 and an RMSE of 11 mm. The research presented in this thesis thus makes important advances in the extraction of information on total water storage and groundwater from GRACE and GRACE-FO gravimetric satellite data. It proposes and validates several original new approaches against in-situ data. It also opens several new research avenues, which will facilitate a more operational use of these types of data at the regional, and even local, scale.
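The inverse-variance logic behind combining several surface schemes into one optimal TWS estimate can be illustrated with a deliberately simplified sketch. The thesis uses the Förstner and best quadratic unbiased estimators of the variance components; the iterated residual-based reweighting below is a toy stand-in for that machinery, and all names are illustrative:

```python
import numpy as np

def combine_schemes(estimates, n_iter=20):
    """Combine several model estimates of the same signal by iterated
    inverse-variance weighting (a simplified sketch of variance-component
    estimation, not the Forstner/BQUE estimators used in the thesis).

    estimates: array of shape (n_models, n_epochs)
    Returns the combined series and the estimated per-model variances.
    """
    n_models, n_epochs = estimates.shape
    sigma2 = np.ones(n_models)               # initial variance components
    for _ in range(n_iter):
        w = 1.0 / sigma2                     # inverse-variance weights
        combined = (w[:, None] * estimates).sum(0) / w.sum()
        resid = estimates - combined         # per-model residuals
        sigma2 = (resid**2).mean(axis=1) + 1e-12  # re-estimate variances
    return combined, sigma2
```

Schemes that disagree with the consensus receive a large variance component and hence a small weight, which is the effect that shrinks the combined RMSE relative to any individual scheme.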
Three Dimensional Widefield Imaging with Coherent Nonlinear Scattering Optical Tomography
A full derivation of the recently introduced technique of Harmonic Optical Tomography (HOT), which is based on a sequence of nonlinear harmonic holographic field measurements, is presented. The rigorous theory of harmonic holography is developed, and the image transfer theory used for HOT is demonstrated. A novel treatment of phase matching in homogeneous and inhomogeneous samples is presented; this approach provides a simple and intuitive interpretation of coherent nonlinear scattering. The detailed derivation is aimed at an introductory level, to allow anyone with an optics background to understand the details of coherent imaging of linear and nonlinear scattered fields, holographic image transfer models, and harmonic optical tomography.
Laser-plasma interactions as tools for studying processes in quantum electrodynamics
Conventional particle accelerators and astronomical observations have long been among the only tools for studying processes in high-energy physics. The development of laser-plasma sources and high-gradient accelerators will therefore be a key asset to these studies. In particular, laser-plasma accelerators have favourable spatial and temporal properties for studies of intense processes, and can be readily coupled to a wide array of other laser-plasma sources, creating unique environments. Here, coupling to an X-ray source and an intense laser focus were used to study processes in quantum electrodynamics.
To study the linear Breit-Wheeler process, a 40 ps laser was used to drive a volumetric X-ray emitter. Line emission from a thin-foil Ge target produced a highly efficient (3.4%), dense source of 1.3-1.9 keV X-rays, with 3 ± 1 (stat.) ± 0.4 (sys.) × 10^{12} photons/eV/sphere. These X-rays were collided with bremsstrahlung gamma rays (with energies up to 800 MeV) to investigate electron-positron pair production. The X-ray source was well-optimised for studying this interaction, and would allow the detection of Breit-Wheeler pairs if used with a moderately improved electron beam for generating bremsstrahlung (3× the highest electron energy and 5× the total charge achieved previously). This would constitute the first laser-plasma photon-photon collider with low virtuality (energy off mass-shell ~ 10^{-20} MeV^2).
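As a back-of-the-envelope check (not taken from the thesis), head-on two-photon kinematics show why gamma rays of a few hundred MeV suffice for pair production against keV X-rays:

```python
# Linear Breit-Wheeler process: gamma + gamma -> e+ + e-.
# For a head-on collision, s = 4*E1*E2 must exceed (2*m_e*c^2)^2,
# i.e. E_gamma * E_x >= (m_e c^2)^2.
M_E_C2 = 0.511  # electron rest energy, MeV

def bw_threshold_mev(e_x_mev):
    """Minimum gamma-ray energy (MeV) for pair production against a
    head-on photon of energy e_x_mev (in MeV)."""
    return M_E_C2**2 / e_x_mev

# For the 1.3-1.9 keV Ge line emission described above:
print(bw_threshold_mev(1.9e-3))  # ~137 MeV
print(bw_threshold_mev(1.3e-3))  # ~201 MeV, well below the 800 MeV
                                 # bremsstrahlung energy cut-off
```

So the whole 1.3-1.9 keV line-emission band is kinematically accessible to the upper part of the bremsstrahlung spectrum, consistent with the feasibility argument in the abstract.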
In order to differentiate between competing models of electron radiation reaction in strong-field quantum electrodynamics, a narrow energy-spread electron beam was studied. By utilising shock injection into a laser wakefield accelerator, a high-energy (1260 ± 40 MeV), narrow energy-spread (4.1 ± 0.9%) beam was generated. This is one of only a few studies that have successfully achieved these electron beam properties. While the shot-to-shot reproducibility of the electron beam was limited to 60%, the relative energy spread was sufficiently small that differentiation of radiation reaction models could be readily achieved in future experiments.
With the upcoming commissioning of many multi-PW laser facilities, these studies demonstrate how active research into quantum electrodynamics can be achieved on the smaller, more accessible, laser-laboratory scale.
Stable Isotopes in Tree Rings
This Open Access volume highlights how tree-ring stable isotopes have been used to address a range of environmental issues, from paleoclimatology to forest management and anthropogenic impacts on forest growth. It further evaluates the weaknesses and strengths of isotope applications in tree rings. In contrast to older tree-ring studies, which predominantly applied a pure statistical approach, this book focuses on the physiological mechanisms that influence isotopic signals and reflect environmental impacts. Focusing on connections between physiological responses and drivers of isotope variation also clarifies why environmental impacts are not linearly reflected in isotope ratios and tree-ring widths. This volume will be of interest to any researcher and educator who uses tree rings (and other organic matter proxies) to reconstruct paleoclimate, as well as to understand contemporary functional processes and anthropogenic influences on native ecosystems. The use of stable isotopes in biogeochemical studies has expanded greatly in recent years, making this volume a valuable resource for a growing and vibrant community of researchers.
X-ray dark-field and phase retrieval without optics, via the Fokker-Planck equation
Emerging methods of x-ray imaging that capture phase and dark-field effects
are equipping medicine with complementary sensitivity to conventional
radiography. These methods are being applied over a wide range of scales, from
virtual histology to clinical chest imaging, and typically require the
introduction of optics such as gratings. Here, we consider extracting x-ray
phase and dark-field signals from bright-field images collected using nothing
more than a coherent x-ray source and detector. Our approach is based on the
Fokker--Planck equation for paraxial imaging, which is the diffusive
generalization of the transport-of-intensity equation. Specifically, we utilize
the Fokker--Planck equation in the context of propagation-based phase-contrast
imaging, where we show that two intensity images are sufficient for successful
retrieval of the projected thickness and dark-field signals associated with the
sample. We show the results of our algorithm using both a simulated dataset and
an experimental dataset. These demonstrate that the x-ray dark-field signal can
be extracted from propagation-based images, and that x-ray phase can be
retrieved with better spatial resolution when dark-field effects are taken into
account. We anticipate the proposed algorithm will be of benefit in biomedical
imaging, industrial settings, and other non-invasive imaging applications.
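Schematically, the retrieval builds on the following pair of equations (a sketch using common conventions: k is the wavenumber, phi the phase, I the intensity, and D a position-dependent diffusion coefficient encoding the local dark-field blur; the exact form and normalisation of D may differ from the work's own conventions):

```latex
% Transport-of-intensity equation (fully coherent, no diffusion):
\frac{\partial I}{\partial z}
  = -\frac{1}{k}\,\nabla_{\perp}\cdot\left( I \,\nabla_{\perp}\phi \right)

% Fokker--Planck generalization: an additional diffusive term
% accounts for sample-induced small-angle scattering (dark field):
\frac{\partial I}{\partial z}
  = -\frac{1}{k}\,\nabla_{\perp}\cdot\left( I \,\nabla_{\perp}\phi \right)
  + \nabla_{\perp}^{2}\left[ D(x,y)\, I \right]
```

With two unknown sample signals (projected thickness driving the phase term, and D driving the diffusive term), two intensity images at different propagation distances give the two constraints needed for the retrieval described above.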
Spatio-temporal generation of large-scale hazardous events that may cause flooding
This thesis treats the generation of weather scenarios that may cause flooding, generally referred to as events, within the context of flood risk. A key concept of the risk approach is that not only are hazardous events generated, but realistic probabilities, or frequencies, are attached to them. The purpose of this thesis is to provide some methodological improvements on flood hazard generators and to bring into discussion some useful concepts with regard to spatio-temporal challenges.
The flood risk approach is largely data-driven. The methods used in this thesis start from data sets provided by others. General computer algorithms and statistical models are used to generate hypothetical, or 'synthetic', data sets. These synthetic data sets are investigated and analysed using computational code.
The starting point is the generation of a large synthetic set of pan-European river discharge events, spread out over several hundreds of locations in Europe. Some methodological advances are provided, which allow moving discharge waves to be tracked throughout all major river basins in Europe. A key point in the used methodology is to capture the spatial dependence between events occurring at different locations, which will be referred to as the static spatio-temporal approach. Compared to a local approach, where each location is considered individually, the gains of considering spatial dependence seem rather clear. However, it appears that the static spatio-temporal approach does not work well with an event-based approach, since this methodology implicitly puts boundaries on the procedure of spatio-temporal event identification.
Therefore, the next step was the development of a generator that could provide large synthetic sets of precipitation events over the entire Atlantic Ocean and Europe. A key point in the methodology used here is that not only the events are dynamic, as their movement is tracked, but also the event descriptors are dynamic. Hence, the dynamic spatio-temporal approach is introduced here, which means the generator methodology moves beyond generation at a fixed set of locations. The methodological framework is established for a first version of such a dynamic generator, with the potential to be applied globally. Some exploration is also provided of methodological extensions that allow multiple variables to be treated concurrently in a compound framework.
With the newly introduced complexity of the dynamic generator, the final step was to start generating big synthetic data sets and to thoroughly test what the generator produced. A comprehensive sensitivity analysis provides some first insights into the behaviour of the generator, which clarify two key concepts. First, with the dynamic spatio-temporal approach, spatial coherence of extremes comes naturally. This contrasts with the static spatio-temporal approach, where spatial coherence of extremes has to be assumed prior to the modelling, typically by the application of a spatial process. Second, for any location, the dynamic spatio-temporal approach allows events occurring in the surrounding area to be included more directly when computing extremes at that location. Relatively short data records are a standard limitation for the risk approach, and this 'dynamic expansion of information' may help to overcome it.
The methodological advances and new concepts provided here may help pave the way toward the generation of global hazard events that may cause flooding. Large-scale, coherent event sets allow interactions and system behaviour to be studied, which is a main requirement for computing the system risk. In addition, an interesting outlook is provided for future research. The dynamic spatio-temporal approach may in the future be able to provide not only spatially coherent extremes, but also temporally coherent extremes. This could be a first step towards credible global flood risk time series, which would be a very useful tool in the current predicament of global climate change.
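The 'dynamic expansion of information' idea, pooling events from the surrounding area to estimate extremes at a single location, can be illustrated with a toy regional-pooling sketch (an illustration of the concept only, not the thesis methodology; the Gumbel distribution choice and all names are assumptions):

```python
import numpy as np
from scipy.stats import gumbel_r

def pooled_return_level(annual_maxima_by_cell, T=100):
    """Estimate the T-year return level at a target location by pooling
    annual maxima from surrounding cells, assumed to share the target
    location's climate. A toy illustration of borrowing information
    from the surrounding area when local records are short.

    annual_maxima_by_cell: list of 1-D arrays (one per nearby cell).
    """
    pooled = np.concatenate(annual_maxima_by_cell)
    loc, scale = gumbel_r.fit(pooled)           # fit a Gumbel (EV1) law
    # Return level exceeded with probability 1/T in any given year:
    return gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
```

Pooling three 30-year records gives 90 effective years of data, which stabilises the tail fit compared with fitting the target cell's 30 years alone; the cost is the homogeneity assumption across the pooled cells.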
Improving Interaction in Visual Analytics using Machine Learning
Interaction is one of the most fundamental components of visual analytics systems; it transforms people from mere viewers into active participants in the process of analyzing and understanding data. Fast and accurate interaction techniques are therefore key to establishing a successful human-computer dialogue, enabling smooth visual data exploration. Machine learning is a branch of artificial intelligence that gives systems the ability to learn and improve from experience without being explicitly programmed. It has been utilized in a wide variety of fields where it is not straightforward to develop a conventional algorithm to perform a task effectively. Inspired by this, we see the opportunity to improve current interactions in visual analytics using machine learning methods.
In this thesis, we address the need for interaction techniques that are both fast, enabling fluid interaction in visual data exploration and analysis, and accurate, i.e., enabling the user to effectively select specific data subsets. First, we present a new, fast and accurate brushing technique for scatterplots, based on the Mahalanobis brush, which we have optimized using data from a user study. Further, we present a new solution for a near-perfect sketch-based brushing technique, where we exploit a convolutional neural network (CNN) to estimate the intended data selection from a fast and simple click-and-drag interaction and from the data distribution in the visualization. Next, we propose an innovative framework that offers the user opportunities to improve the brushing technique while using it. We tested this framework with CNN-based brushing, and the results show that the underlying model can be refined (achieving better accuracy) and personalized with very little retraining time. In addition, to investigate to what degree the human should be involved in the model design, and how good an empirical model can be with a more careful design, we extended our Mahalanobis brush (the most accurate current empirical model for brushing points in a scatterplot) by further incorporating the data distribution information, captured by kernel density estimation (KDE). Based on this work, we then provide a detailed comparison between empirical modeling and implicit modeling by machine learning (deep learning). Lastly, we introduce a new, machine-learning-based approach that enables fast and accurate querying of time series data based on a swift sketching interaction. To achieve this, we build upon existing LSTM technology (long short-term memory) to encode both the sketch and the time series data in two networks with shared parameters.
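To make the idea of a Mahalanobis brush concrete, here is a minimal sketch (our own simplified illustration, not the thesis implementation; the local-neighbourhood heuristic and the threshold value are assumptions):

```python
import numpy as np

def mahalanobis_brush(points, click, radius=1.0):
    """Select scatterplot points near a click, with the selection shape
    adapted to the local data covariance (simplified Mahalanobis brush).

    points: (n, 2) array of scatterplot positions; click: (2,) position.
    Returns a boolean mask of selected points.
    """
    # Estimate covariance from points in a generous Euclidean
    # neighbourhood of the click, so the brush follows local structure.
    d = np.linalg.norm(points - click, axis=1)
    local = points[d < np.percentile(d, 25)]
    cov = np.cov(local.T) + 1e-9 * np.eye(2)      # regularise
    inv = np.linalg.inv(cov)
    diff = points - click
    # Squared Mahalanobis distance of every point to the click.
    m2 = np.einsum('ij,jk,ik->i', diff, inv, diff)
    return m2 < radius**2
```

Unlike a circular brush, the selected region stretches along the direction in which the local point cloud is elongated, which is what makes a single click capture a coherent data subset.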
All the proposed interaction techniques in this thesis were demonstrated by application examples and evaluated via user studies. The integration of machine learning knowledge into visualization opens further possible research directions.