Commissioning, Benchmarking and Clinical Application of a Novel Fiber Optic CT Scanner for Precise Three-Dimensional Radiation Dosimetry
Radiotherapy is a prominent cancer treatment modality in medicine, aiming to deliver adequate doses to the target while minimizing harm to healthy tissue. Recent advancements in computer technology, machine engineering, and imaging have facilitated intricate treatment planning and accurate radiation administration. These advancements have allowed for more precise dose distributions to be delivered to cancer patients. However, even small discrepancies in setup or delivery can result in significant dose variations. While treatment planning systems provide 3D dose calculations, there is currently a lack of 3D measurement tools in the clinic to verify the accuracy of dose calculation and delivery. Presently, medical physicists rely on 2D dose plane comparisons with treatment planning calculations using gamma index analyses. However, these results do not directly correlate with clinical dose-volume constraints, and detecting delivery errors using 1D or 2D dosimetry is challenging. The implementation of 3D dosimetry not only ensures the safety of radiation treatment but also facilitates the development of new emerging radiation treatment techniques. This study aims to commission and validate a clinically viable optical scanner for 3D dosimetry and apply the developed system to address current clinical and pre-clinical challenges, thereby advancing our understanding of treatment uncertainties in modern radiotherapy.
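The gamma-index comparison mentioned above can be illustrated with a minimal one-dimensional sketch (illustrative only; the function name, tolerances, and test profile are assumptions, not the clinical tooling described in the thesis):

```python
import numpy as np

def gamma_index_1d(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol=3.0):
    """Global 1D gamma: for each reference point, minimise the combined
    dose-difference / distance-to-agreement metric over all evaluated points."""
    dmax = ref_dose.max()
    gam = np.empty_like(ref_dose, dtype=float)
    for i, (x, d) in enumerate(zip(positions, ref_dose)):
        dd = (eval_dose - d) / (dose_tol * dmax)   # dose-difference term
        dx = (positions - x) / dist_tol            # distance term (mm)
        gam[i] = np.sqrt(dd**2 + dx**2).min()
    return gam

# identical distributions -> gamma = 0 everywhere, passing rate 100%
x = np.linspace(0, 50, 101)
dose = np.exp(-((x - 25) / 8) ** 2)
g = gamma_index_1d(dose, dose, x)
passing = (g <= 1).mean() * 100
```

A point passes when its gamma value is at most 1; the passing rate is the fraction of such points.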
The optical CT scanner that was developed comprises four key components: an LED illuminator, an aquarium with matching fluid, a fiber optic taper, and a CCD camera. The LED illuminator emits uniform, parallel red light at a peak wavelength of 625 nm with a full width at half maximum (FWHM) of 20 nm in continuous mode. The aquarium is constructed with transparent acrylic walls and is designed to accommodate the 3D dosimeter PRESAGE, which can be fixed on a rotation stage inside the tank. Clear acrylic has excellent optical clarity and light transmission, with a refractive index of 1.49 that is close to the average refractive index (1.54) of PRESAGE. To match the refractive index of the 3D dosimeters, the tank is filled with a matching liquid composed of 90% octyl salicylate and 10% octyl p-methoxycinnamate. The fiber optic taper serves two functions: first, it demagnifies the projection images while preserving their shape, and second, it effectively reduces the acceptance angle of the light reaching the CCD camera. The CCD camera used in the system is an Allied Vision model with a resolution of 0.016 mm, capable of acquiring 2D projection images from various angles. The principle of the optical CT scanner follows that of CT imaging, where 2D projection images from different angles are used to reconstruct volumetric 3D dose images using the filtered back projection technique. To validate the dosimetric measurements and assess the uncertainties of the 3D dosimetry system, 21 benchmark experiments, including mechanical, imaging, and dosimetry tests, were conducted. Furthermore, the developed system was employed for various applications, including patient-specific IMRT QA, small field dosimetry using kilovoltage and megavoltage beams, and end-to-end testing of stereotactic radiosurgery.
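Filtered back projection itself can be sketched in a few lines (a toy reconstruction under assumed conventions, not the scanner's actual pipeline; the phantom, angles, and function names are invented for illustration):

```python
import numpy as np
from scipy.ndimage import rotate

def fbp_reconstruct(sinogram, angles):
    """Ramp-filter each projection in Fourier space, then smear
    (back-project) it across the image grid at its acquisition angle."""
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))          # ideal ramp filter
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(sinogram, angles):
        filtered = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
        smear = np.tile(filtered, (n_det, 1))     # constant along rays
        recon += rotate(smear, -theta, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles))

# toy phantom: a centred disc, projected at 60 angles
N = 64
yy, xx = np.mgrid[:N, :N] - N // 2
phantom = (xx**2 + yy**2 < 12**2).astype(float)
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sino = np.stack([rotate(phantom, a, reshape=False, order=1).sum(axis=0)
                 for a in angles])
recon = fbp_reconstruct(sino, angles)
```

The forward projection here is simulated by rotating the phantom and summing along one axis; a real scanner measures these projections optically.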
A comprehensive analysis assessed uncertainties in each scanner component. Mechanical tests showed maximum uncertainties below 1%. By employing background subtraction and calibration techniques, measurement uncertainty was reduced to <1% in the optimal dose range. Background subtraction resulted in a remarkable 77% reduction in uncertainty by mitigating artifacts, ambient light, and refracted light. Reproducibility was excellent, with mean and standard deviation of dose differences below 0.4% and 1.1%, respectively, in three repeat scans. Dose distribution measurements exhibited strong agreement (passing rates: 98%-100%) between 3D measurements, treatment planning calculations, and EBT3 film dosimetry. Results confirm the optical CT scanner's robustness and accuracy for clinical 3D radiation dosimetry. The study also demonstrates that the developed 3D dosimetry system surpasses the limitations of traditional 2D gamma tests by providing clinicians with more clinically relevant information. This includes measured dose-volume histograms (DVHs) and the evaluation of gamma failing points in 3D space, enabling a comprehensive assessment of individual treatment plans. Furthermore, the study showcased the feasibility of utilizing this system to characterize a radiosurgery platform. It successfully assessed mechanical and dosimetric errors in off-axis delivery and evaluated the accuracy of treatment planning dose calculations, including modeling small fields, out-of-field dose, and multi-leaf collimator (MLC) characteristics. In addition, compelling evidence was presented that the high-resolution 3D dosimeter used in this study is capable of accurate dosimetry for both megavoltage and kilovoltage small fields. Importantly, the dosimeter exhibits no energy or dose rate dependence, further supporting its reliability and suitability for precise dosimetry measurements.
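A cumulative DVH of the kind referred to above is straightforward to compute from a 3D dose grid (a hedged sketch on synthetic data; the dose distribution and target mask are invented for illustration):

```python
import numpy as np

def cumulative_dvh(dose, mask, bins=100):
    """Cumulative DVH: fraction of the structure volume receiving
    at least each dose level."""
    d = dose[mask]
    levels = np.linspace(0, d.max(), bins)
    volume_frac = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_frac

# synthetic spherical target inside a Gaussian-like dose blob
g = np.indices((32, 32, 32)) - 16
r2 = (g**2).sum(axis=0)
dose = 60 * np.exp(-r2 / 200)          # Gy, peaking at 60 in the centre
target = r2 < 8**2
lv, vf = cumulative_dvh(dose, target)
```

By construction the curve starts at 100% of the volume and decreases monotonically with dose level.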
The intricate and three-dimensional nature of dose distributions in modern radiotherapy necessitated the development of 3D dosimetry measurements, particularly for treatments with precise margins, such as SRS and SBRT. The newly developed 3D dosimetry system offers significant enhancements to current QA practices, delivering more clinically relevant comparison results and bolstering patient safety. Furthermore, it can be utilized for independent inspections across multiple institutions or remote dosimetry verification. Beyond its applications in clinical settings, the presented 3D dosimetry system holds the potential to expedite the development and utilization of novel radiation platforms.
Tipping Points and Early Warning Signals in the Climate-Carbon System
This is a thesis about tipping points and early warning signals. The tipping points investigated are related to various components of the climate-carbon system. In contrast, the work on early warning signals has more generic applications; in this thesis, however, they are analysed in the context of the climate-carbon system. The thesis begins with an introduction to the climate-carbon system as well as a discussion of tipping points in the Earth system. Then a more mathematical summary of tipping points and early warning signals is given. An investigation into the ‘compost bomb’ is undertaken, in which the spatial structure of soils is accounted for. It is found that a hot summer could cause a compost bomb. The effect of biogeochemical heating on the stability of the global carbon cycle is investigated and is found to play only a small role. The potential for instabilities in the climate-carbon cycle is further investigated when the dynamic behaviour of the ocean carbon cycle is accounted for. It is found that some CMIP6 models may be close to having an unstable carbon cycle. Spatial early warning signals are investigated in the context of more rapidly forced systems. It is found that spatial early warning signals perform better when the system is rapidly forced than time-series-based early warning signals do. The typical assumptions about white noise made when using early warning signals are also studied. It is found that time-correlated noise may mask the early warning signal. It is shown that a spectral analysis can avoid this problem.
European Commission
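The classic time-series early warning signals discussed here, rising variance and lag-1 autocorrelation under critical slowing down, can be sketched as follows (a generic illustration on a synthetic AR(1) process; not the thesis's spatial or spectral methods):

```python
import numpy as np

def rolling_ews(x, window=200):
    """Variance and lag-1 autocorrelation in sliding windows --
    both are expected to rise on the approach to a tipping point."""
    var, ac1 = [], []
    for i in range(len(x) - window):
        w = x[i:i + window]
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

# AR(1) process whose memory slowly increases (critical slowing down)
rng = np.random.default_rng(0)
n = 3000
phi = np.linspace(0.1, 0.95, n)   # autocorrelation parameter ramps up
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.standard_normal()
var, ac1 = rolling_ews(x)
```

Both indicators should trend upward along the record as the system loses stability; time-correlated driving noise, as the thesis notes, can obscure exactly this trend.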
Exact steady states of minimal models of nonequilibrium statistical mechanics
Systems out of equilibrium with their environment are ubiquitous in nature. Of particular relevance to biological applications are models in which each microscopic component spontaneously generates its own motion. Known collectively as active matter, such models are natural effective descriptions of many biological systems, from subcellular motors to flocks of birds. One would like to understand such phenomena using the tools of statistical mechanics, yet the inherent nonequilibrium setting means that the most powerful classical results of that field cannot be applied. This circumstance has fuelled interest in exactly solvable models of active matter. The aim in studying such models is twofold. Firstly, as exactly solvable models are often minimal, they are good candidates as generic coarse-grained descriptions of real-world processes. Secondly, even if the model in question does not correspond directly to some situation realizable in experiment, its exact solution may suggest general principles, which could also apply to more complex phenomena.
A typical tool for investigating the properties of a large system is to study the behaviour of a probe particle placed in such an environment. In this context, the cases of interest are both an active particle in a passive environment and an active particle in an active environment. One model that has attracted much attention in this regard is the asymmetric simple exclusion process (ASEP), a prototypical minimal model of driven diffusive transport. In this thesis, I consider two variations of the ASEP on a ring geometry. The first is a system of symmetrically diffusing particles with one totally asymmetric (driven) defect particle. The second is a system of partially asymmetric particles, with one defect that may overtake the other particles. I analyze the steady states of these systems using two exact methods: the matrix product ansatz and, for the second model, the Bethe ansatz. This allows me to derive the exact density profiles and mean currents for these models and, for the second model, the diffusion constant. Moreover, I use the Yang-Baxter formalism to study the general class of two-species partially asymmetric processes with overtaking. This allows me to determine conditions under which such models can be solved using the Bethe ansatz.
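The exclusion dynamics described above can be illustrated with a minimal simulation (a plain single-species totally asymmetric process on a ring under random-sequential updates; the thesis's defect particles and exact ansätze are beyond this sketch):

```python
import numpy as np

def tasep_ring_sweep(lat, rng):
    """One Monte-Carlo sweep of a totally asymmetric exclusion process
    on a ring: each attempt picks a random site and hops its particle
    one site clockwise if the target site is empty."""
    L = len(lat)
    for _ in range(L):
        i = rng.integers(L)
        j = (i + 1) % L
        if lat[i] == 1 and lat[j] == 0:
            lat[i], lat[j] = 0, 1
    return lat

rng = np.random.default_rng(1)
L, N = 100, 40                       # ring size, particle number
lat = np.zeros(L, dtype=int)
lat[rng.choice(L, N, replace=False)] = 1
for _ in range(500):
    tasep_ring_sweep(lat, rng)
```

The hard-core exclusion constraint conserves the particle number exactly; on a ring the stationary state of this plain TASEP is uniform, which is what makes defect-induced density profiles interesting.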
Data-assisted modeling of complex chemical and biological systems
Complex systems are abundant in chemistry and biology; they can be multiscale, possibly high-dimensional or stochastic, with nonlinear dynamics and interacting components. It is often nontrivial (and sometimes impossible) to determine and study the macroscopic quantities of interest and the equations they obey. One can only (judiciously or randomly) probe the system, gather observations and study trends. In this thesis, Machine Learning is used as a complement to traditional modeling and numerical methods to enable data-assisted (or data-driven) dynamical systems. As case studies, three complex systems are sourced from diverse fields. The first is a high-dimensional computational neuroscience model of the Suprachiasmatic Nucleus of the human brain, where bifurcation analysis is performed by simply probing the system. Then, manifold learning is employed to discover a latent space of neuronal heterogeneity. Second, Machine Learning surrogate models are used to optimize dynamically operated catalytic reactors. An algorithmic pipeline is presented through which it is possible to program catalysts with active learning. Third, Machine Learning is employed to extract laws of Partial Differential Equations describing bacterial chemotaxis. It is demonstrated how Machine Learning manages to capture the rules of bacterial motility at the macroscopic level, starting from diverse data sources (including real-world experimental data). More importantly, a framework is constructed through which already existing, partial knowledge of the system can be exploited.
These applications showcase how Machine Learning can be used synergistically with traditional simulations in different scenarios: (i) equations are available but the overall system is so high-dimensional that efficiency and explainability suffer; (ii) equations are available but lead to highly nonlinear black-box responses; (iii) only data are available (of varying source and quality) and equations need to be discovered. For such data-assisted dynamical systems, we can perform fundamental tasks, such as integration, steady-state location, continuation and optimization. This work aims to unify traditional scientific computing and Machine Learning in an efficient, data-economical, generalizable way, where both the physical system and the algorithm matter.
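Extracting PDE terms from data, as in scenario (iii), can be sketched with sequentially thresholded least squares, one common sparse-regression approach (the candidate library and synthetic diffusion data are assumptions for illustration, not the thesis's actual pipeline):

```python
import numpy as np

def sparse_fit(theta, dudt, threshold=0.05, iters=10):
    """Sequentially thresholded least squares: regress du/dt onto a
    library of candidate terms, repeatedly zeroing small coefficients."""
    xi = np.linalg.lstsq(theta, dudt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        if (~small).any():
            xi[~small] = np.linalg.lstsq(theta[:, ~small], dudt, rcond=None)[0]
    return xi

# synthetic data obeying the diffusion law du/dt = 2 * u_xx
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(2 * x)
u_x = np.cos(x) + np.cos(2 * x)
u_xx = -np.sin(x) - 2 * np.sin(2 * x)
dudt = 2.0 * u_xx
theta = np.stack([u, u_x, u_xx, u * u_x], axis=1)   # candidate library
xi = sparse_fit(theta, dudt)
```

With clean data the fit recovers a single active library term, the diffusion coefficient 2 on u_xx; real experimental data require noise-robust derivative estimates.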
Synthesis of multifunctional glyco-pseudodendrimers and glyco-dendrimers and their investigation as anti-Alzheimer agents
As the world population is aging, cases of Alzheimer’s Disease (AD) are increasing. AD is a disorder of the brain characterized by the aggregation of amyloid beta (Aβ) plaques. This leads to the death of numerous brain cells, affecting the cognitive and motor functions of the individual. To date, no cure for the disease is available. Aβ peptides consist of 40 or 42 amino acid residues, but their exact mechanism(s) of action in AD is under debate. Their varied amino acid residues make them prone to forming hydrogen bonds. Dendrimers with sugar units, often referred to as glycopolymers, have been shown to have potential anti-amyloidogenic activity. However, they also have drawbacks: the synthesis involves multiple tedious steps, and dendrimers themselves offer only a limited number of functional units. Pseudodendrimers are another class of branched polymers, based on hyperbranched polymers. Unlike dendrimers, they are easy to synthesize with a dense shell of functional units on the surface. One of the main goals of this dissertation is the synthesis and characterization of pseudodendrimers and dendrimers based on 2,2-bis(hydroxymethyl)-propionic acid (bis-MPA), an aliphatic polyester scaffold, as it offers biocompatibility and easy degradability. Furthermore, they are decorated with mannose units on the surface using a ‘click’ reaction, forming glyco-pseudodendrimers and glyco-dendrimers. A detailed characterization of their structures and physical properties was undertaken using techniques such as size exclusion chromatography, asymmetric flow field flow fractionation (AF4), and dynamic light scattering.
The second main focus of this work has been to investigate the interaction of the synthesized glyco-pseudodendrimers and glyco-dendrimers with Aβ 40 peptides. For this task, five different concentrations of the synthesized glycopolymers were tested with Aβ 40 using the Thioflavin T assay. The polymers showing the strongest anti-aggregation behavior against Aβ 40 were further examined with circular dichroism spectroscopy. AF4 was also used to investigate Aβ 40-glycopolymer aggregates, which has never been done before and constitutes the highlight of this dissertation. Atomic force microscopy was used to image Aβ 40-glyco-pseudodendrimer aggregates.
A basic but important step in the development of drug delivery platforms is to evaluate the toxicity of the synthesized drugs. In this work, preliminary studies of the cytotoxicity of glyco-pseudodendrimers were performed in two different cell lines. Thus, this study comprises a preliminary investigation of the anti-amyloidogenic activity of glyco-pseudodendrimers synthesized on an aliphatic polyester backbone.

Abstract
List of Tables
List of Figures
Abbreviations
1 Introduction
1.1 Objectives of the work
1.2 Thesis overview
2 Fundamentals and Literature
2.1 Alzheimer’s Disease and its impact
2.1.1 Neurological diagnosis of AD
2.1.2 Histopathology of AD
2.1.3 Amyloid precursor protein (APP) and its role in AD
2.2 Amyloid Beta (Aβ) peptide
2.2.1 Aβ peptide
2.2.2 Location and function
2.2.3 Amyloid hypothesis
2.2.4 The mechanism of Aβ aggregation
2.2.5 Amyloid fibrils
2.2.6 Toxicity of Aβ
2.3 Research methods to study Aβ aggregates
2.3.1 Models to study the mode of action of aggregates
2.3.2 Endogenous Aβ aggregates and synthetic aggregates
2.3.3 Strategies to alter aggregation of amyloids
2.4 Treatment and therapeutics
2.4.1 Current therapeutics
2.4.2 Current therapeutic research
2.4.2.1 Reduction of Aβ production
2.4.2.2 Reduction of Aβ plaque accumulation
2.4.2.2.1 Anti-amyloid aggregation agents
2.4.2.2.2 Metals
2.4.2.2.3 Immunotherapy
2.4.2.2.4 Dendrimers as potential anti-amyloidogenic agents
2.6 Dendrimers
2.6.1 Definition
2.6.2 Structure
2.6.3 Synthesis
2.6.4 Properties
2.7 Pseudodendrimers - a sub-class of hyperbranched polymer
2.7.1 Definition
2.7.2 Structure
2.7.3 Synthesis
3 Analytical Techniques
3.1 Size Exclusion Chromatography Coupled to Light Scattering (SEC-MALS)
3.2 Asymmetric Flow Field Flow Fractionation (AF4)
3.3 Dynamic Light Scattering
3.4 Molecular Dynamics Simulation
3.5 Nuclear Magnetic Resonance Spectroscopy
3.6 Thioflavin T fluorescence
3.6.1 Kinetic analysis
3.7 Circular Dichroism Spectroscopy
3.8 Atomic Force Microscopy
3.9 Cytotoxic assay
3.9.1 MTT assay
3.9.2 Determining the level of reactive oxygen species
3.9.3 Changes in mitochondrial transmembrane potential
3.9.4 Flow cytometric detection of phosphatidyl serine exposure
4 Experimental Details and Methodology
4.1 Details of chemicals/components used
4.1.1 Other materials
4.1.2 Peptide preparation
4.1.3 Buffer preparation
4.1.4 Fibril growth conditions
4.2 Synthesis and characterization of polymers
4.2.1 Synthesis and characterization of pseudodendrimers and dendrimers
4.2.1.1 Synthesis of hyperbranched polymer (1)
4.2.1.2 Synthesis of protected monomer
4.2.1.2.1 bis-MPA acetonide (2)
4.2.1.2.2 bis-MPA-acetonide anhydride (3)
4.2.1.3 Synthesis of protected pseudodendrimers (4, 6 and 8) and protected dendrimers (10, 12, and 14)
4.2.1.4 Deprotection of pseudodendrimers (5, 7, and 9) and dendrimers (11, 13 and 15)
4.2.2 Synthesis of glyco-pseudodendrimers and glyco-dendrimers
4.2.2.1 Pentynoic anhydride (16)
4.2.2.2 Synthesis of pentinate modified pseudodendrimers (17, 18 and 19) and dendrimers (20, 21 and 22)
4.2.2.3 3-Azido-1-propanol (23)
4.2.2.4 Mannose propyl azide tetraacetate (24)
4.2.2.5 Mannosepropylazide (25)
4.2.2.6 Glyco-pseudodendrimers (Gl-P) (26, 27 and 28) and glyco-dendrimers (Gl-D) (29, 30 and 31)
4.3 Analytical techniques and their general details
4.3.1 SEC-MALS - Instrumentation, software and analysis
4.3.2 AF4 - Instrumentation, software and analysis
4.3.2.1 Sample preparation
4.3.2.2 Method development for analysis of Gl-P and Gl-D
4.3.2.3 Method development for analysis of Aβ 40 and its interaction with Gl-P and Gl-D
4.3.3 Batch DLS - Instrumentation, software and analysis
4.3.3.1 Sample preparation
4.3.4 Theoretical calculations and molecular dynamics simulations
4.3.4.1 Ab-initio calculations
4.3.4.2 Modelling of the polymer structures
4.3.4.2.1 Pseudodendrimers
4.3.4.2.2 Dendrimers
4.3.4.2.3 Modification of the polymers with special end groups
4.3.4.2.4 Preparing of the THF solvent box
4.3.4.2.5 Solvation of the polymer structures
4.3.4.3 Molecular dynamics simulations
4.3.4.3.1 Evaluation of the simulation trajectories
4.4 Investigation of interaction of Gl-P and Gl-D with amyloid beta (Aβ 40)
4.4.1 ThT Assay - Instrumentation and software
4.4.1.1 Sample preparation
4.4.1.2 Kinetics based on ThT assay- software and data analysis
4.4.2 CD spectroscopy - Instrumentation and software
4.4.2.1 Sample preparation
4.4.3 AFM - Instrumentation and software
4.4.3.1 Substrate and sample preparation
4.4.3.2 Height determination and counting procedures
4.4.3.3 Topography and diameter
4.5 Cytotoxicity
4.5.1 Zeta potential
4.5.2 Cell culturing
4.5.3 Sample preparation
4.5.4 MTT assay
4.5.5 Changes in mitochondrial transmembrane potential (JC-1 method)
4.5.6 Flow cytometric detection of phosphatidyl serine exposure (Annexin V and PI method)
5 Results and Discussion
5.1 Synthesis and characterization of glyco-pseudodendrimers and glyco-dendrimers
5.1.1 Synthesis and characterization of hyperbranched polyester
5.1.2 Synthesis and characterization of pseudodendrimers P-G1-OH, P-G2-OH and P-G3-OH
5.1.3 Synthesis and characterization of dendrimers D-G4-OH, D-G5-OH and D-G6-OH
5.1.4 Synthesis and characterization of Gl-P and Gl-D
5.1.4.1 Molecular size determination of Gl-P and Gl-D using SEC
5.1.4.2 Particle size determination using batch DLS
5.1.4.3 Apparent densities
5.1.4.4 Molecular size determination of Gl-P and Gl-D using AF4
5.1.5 Molecular dynamics simulation
5.2 Investigation of interaction of Gl-P and Gl-D with amyloid beta (Aβ 40)
5.2.1 ThT Assay
5.2.1.1 Kinetics based on ThT assay
5.2.2 CD spectroscopy
5.2.3 Time dependent AF4
5.2.3.1 Separation of Aβ 40 by AF4
5.2.3.2 Aβ 40 amyloid aggregation in the presence of Gl-P and Gl-D
5.2.4 AFM
5.2.4.1 Height
5.2.4.2 Topography and diameter
5.2.4.3 Length
5.2.4.4 Morphology
5.2.5 Cytotoxicity
5.2.5.1 MTT assay
5.2.5.2 Changes in mitochondrial transmembrane potential
5.2.5.3 Flow cytometric detection of phosphatidyl serine exposure
6 Conclusions and Outlook
7 Bibliography
Appendix
Acknowledgement
Laser Technologies for Applications in Quantum Information Science
Scientific progress in experimental physics is inevitably dependent on continuing advances in the underlying technologies. Laser technologies enable controlled coherent and dissipative atom-light interactions and micro-optical technologies allow for the implementation of versatile optical systems not accessible with standard optics.
This thesis reports on important advances in both technologies with targeted applications ranging from Rydberg-state mediated quantum simulation and computation with individual atoms in arrays of optical tweezers to high-resolution spectroscopy of highly-charged ions.
A wide range of advances in laser technologies are reported: The long-term stability and maintainability of external-cavity diode laser systems is improved significantly by introducing a mechanically adjustable lens mount. Tapered-amplifier modules based on a similar lens mount are developed. The diode laser systems are complemented by digital controllers for laser frequency and intensity stabilisation. The controllers offer a bandwidth of up to 1.25 MHz and a noise performance set by the commercial STEMlab platform. In addition, shot-noise limited photodetectors optimised for intensity stabilisation and Pound-Drever-Hall frequency stabilisation, as well as a fiber-based detector for beat notes in the MHz regime, are developed. The capabilities of the presented techniques are demonstrated by analysing the performance of a laser system used for laser cooling of Rb85 at a wavelength of 780 nm. A reference laser system is stabilised to a spectroscopic reference provided by modulation transfer spectroscopy. This spectroscopy scheme is analysed, finding optimal operation at high modulation indices. A suitable signal is generated with a compact and cost-efficient module. A scheme for laser offset-frequency stabilisation based on an optical phase-locked loop is realised. All frequency locks derived from the reference laser system offer a Lorentzian linewidth of 60 kHz (FWHM) in combination with a long-term stability of 130 kHz peak-to-peak within 10 days. Intensity stabilisation based on acousto-optic modulators in combination with the digital controller allows for real-time intensity control on microsecond time scales, complemented by a sample-and-hold feature with a response time of 150 ns.
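The digital feedback controllers described here can be caricatured by a discrete-time PI loop (an illustrative toy with invented gains and a first-order plant, not the STEMlab-based firmware of the thesis):

```python
def pi_controller(setpoint, measure, kp, ki, dt, integ):
    """One update of a discrete-time PI loop; the integral term is
    carried in `integ` and returned alongside the control output."""
    err = setpoint - measure
    integ += ki * err * dt          # accumulate the integral of the error
    return kp * err + integ, integ

# close the loop around a toy first-order plant y' = (u - y) / tau
tau, dt = 1e-3, 1e-5                # plant time constant, sample period
y, integ = 0.0, 0.0
for _ in range(20000):              # 0.2 s of closed-loop settling
    u, integ = pi_controller(1.0, y, kp=2.0, ki=500.0, dt=dt, integ=integ)
    y += dt * (u - y) / tau
```

The integral term drives the steady-state error to zero; in practice the loop bandwidth quoted in the thesis is set by the sample rate and the latency of the digital platform.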
High demands on the spectral properties of the laser systems are put forward for the coherent excitation of quantum states. In this thesis, the performance of active frequency stabilisation is enhanced by introducing a novel current modulation technique for diode lasers. A flat response from DC to 100 MHz and a phase lag below 90° up to 25 MHz are achieved, extending the bandwidth available for laser frequency stabilisation. Applying this technique in combination with a fast proportional-derivative controller, two laser fields with a relative phase noise of 42 mrad for driving rubidium ground state transitions are realised. A laser system for coherent Rydberg excitation via a two-photon scheme provides light at 780 nm and at 480 nm via frequency-doubling from 960 nm. An output power of 0.6 W at 480 nm from a single-mode optical fiber is obtained. The frequencies of both laser systems are stabilised to a high-finesse reference cavity resulting in a linewidth of 1.02 kHz (FWHM) at 960 nm. Numerical simulations quantify the effect of the finite linewidth on the coherence of Rydberg Rabi oscillations. A laser system similar to the 480 nm Rydberg system is developed for spectroscopy on highly charged bismuth.
Advanced optical technologies are also at the heart of the micro-optical generation of tweezer arrays that offer unprecedented scalability of the system size. By using an optimised lens system in combination with an automatic evaluation routine, a tweezer array with several thousand sites and trap waists below 1 μm is demonstrated. A similar performance is achieved with a microlens array produced in an additive manufacturing process. The microlens design is optimised for the manufacturing process. Furthermore, scattering rates in dipole traps due to suppressed resonant light are analysed, proving the feasibility of dipole trap generation using tapered-amplifier systems.
Monomial warm inflation revisited
We revisit the idea that the inflaton may have dissipated part of its energy into a thermal bath during inflation, considering monomial inflationary potentials and three different forms of dissipation rate. Using a numerical Fokker-Planck approach to describe the stochastic dynamics of inflationary fluctuations, we confront this scenario with current bounds on the spectrum of curvature fluctuations and primordial gravitational waves. We also obtain analytical approximations that outperform those frequently used in previous analyses. We show that only our numerical Fokker-Planck method is accurate, fast and precise enough to test these models against current data. We advocate its use in future studies of warm inflation. We also apply the stochastic inflation formalism to this scenario, finding that the resulting spectrum is the same as the one obtained with standard perturbation theory. We discuss the origin and convenience of using a commonly implemented large thermal correction to the primordial spectrum and the implications of such a term for a specific scenario. Improved bounds on the scalar spectral index will further constrain warm inflation in the near future.
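A numerical Fokker-Planck approach of this general kind can be sketched for a toy Ornstein-Uhlenbeck problem (a minimal assumed example, standing in for the stochastic inflaton dynamics; the paper's potentials and dissipation rates are not modelled):

```python
import numpy as np

def fokker_planck_step(P, x, D, dt):
    """Explicit finite-difference update of the 1D Fokker-Planck
    equation dP/dt = d/dx (x * P) + D * d2P/dx2 (Ornstein-Uhlenbeck)."""
    dx = x[1] - x[0]
    drift = np.gradient(x * P, dx)
    diffusion = D * (np.roll(P, 1) - 2 * P + np.roll(P, -1)) / dx**2
    return P + dt * (drift + diffusion)

x = np.linspace(-5, 5, 201)
dx = x[1] - x[0]
P = np.exp(-x**2 / 0.5)          # start narrower than the stationary state
P /= P.sum() * dx                # normalise to unit probability
for _ in range(2000):            # evolve toward the stationary Gaussian
    P = fokker_planck_step(P, x, D=1.0, dt=1e-4)
```

The explicit scheme conserves probability to good accuracy and relaxes the distribution toward the stationary Gaussian of variance D; production codes would use implicit or spectral schemes for stiffer problems.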
Efficient Deep Learning for Real-time Classification of Astronomical Transients
A new golden age in astronomy is upon us, dominated by data. Large astronomical surveys are broadcasting unprecedented rates of information, demanding machine learning as a critical component in modern scientific pipelines to handle the deluge of data. The upcoming Legacy Survey of Space and Time (LSST) of the Vera C. Rubin Observatory will raise the big-data bar for time-domain astronomy, with an expected 10 million alerts per night and many petabytes of data generated over the lifetime of the survey. Fast and efficient classification algorithms that can operate in real time, yet robustly and accurately, are needed for time-critical events where additional resources can be sought for follow-up analyses. To handle such data, state-of-the-art deep learning architectures coupled with tools that leverage modern hardware accelerators are essential.
The work contained in this thesis seeks to address the big-data challenges of LSST by proposing novel efficient deep learning architectures for multivariate time-series classification that can provide state-of-the-art classification of astronomical transients at a fraction of the computational cost of other deep learning approaches. This thesis introduces the depthwise-separable convolution and the notion of convolutional embeddings to the task of time-series classification, achieving gains in classification performance with far fewer model parameters than similar methods. It also introduces the attention mechanism to time-series classification, improving performance even further with a significant improvement in computational efficiency and a further reduction in model size. Finally, this thesis pioneers the use of modern model compression techniques in the field of photometric classification for efficient deep learning deployment. These insights informed the final architecture, which was deployed in a live production machine learning system, demonstrating the capability to operate efficiently and robustly in real time, at LSST scale and beyond, ready for the new era of data-intensive astronomy.
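The parameter savings from the depthwise-separable convolution can be made concrete (a plain NumPy sketch under assumed channel counts and kernel size; the deployed architecture is considerably more elaborate):

```python
import numpy as np

def conv1d_params(c_in, c_out, k):
    """Parameter count of a standard 1-D convolution (no bias)."""
    return c_in * c_out * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise stage (one k-tap filter per input channel) followed by
    a pointwise 1x1 convolution that mixes channels."""
    return c_in * k + c_in * c_out

def depthwise_separable_conv1d(x, depth_w, point_w):
    """x: (c_in, t); depth_w: (c_in, k); point_w: (c_out, c_in)."""
    dw = np.stack([np.convolve(x[c], depth_w[c], mode="valid")
                   for c in range(x.shape[0])])   # depthwise stage
    return point_w @ dw                           # pointwise stage

# e.g. 64 -> 128 channels with kernel size 9
full = conv1d_params(64, 128, 9)
sep = depthwise_separable_params(64, 128, 9)
ratio = full / sep                                # roughly 8x fewer weights
```

Factoring the convolution into per-channel filtering plus channel mixing is what yields the reduction in model size the thesis exploits.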
Molecular kinetic modelling of non-equilibrium transport of confined van der Waals fluids
A thermodynamically consistent kinetic model is proposed for the non-equilibrium transport of confined van der Waals fluids, where the long-range molecular attraction is considered by a mean-field term in the transport equation, and the transport coefficients are tuned to match the experimental data. The equation of state of the van der Waals fluids can be obtained from an appropriate choice of the pair correlation function. By contrast, the modified Enskog theory predicts non-physical negative transport coefficients near the critical temperature and may not be able to recover the Boltzmann equation in the dilute limit. In addition, the shear viscosity and thermal conductivity are predicted more accurately by taking gas molecular attraction into account, while the softened Enskog formula for hard-sphere molecules performs better in predicting the bulk viscosity. The present kinetic model agrees with the Boltzmann model in the dilute limit and with the Navier-Stokes equations in the continuum limit, indicating its capability to model dilute-to-dense and continuum-to-non-equilibrium flows. The new model is examined thoroughly and validated by comparison with molecular dynamics simulation results. In contrast to previous studies, our simulation results reveal the importance of molecular attraction even at high temperatures, which holds the molecules to the bulk, while the hard-sphere model significantly overestimates the density near the wall. Because the long-range molecular attraction is considered appropriately in the present model, the velocity slip and temperature jump at the surface for the more realistic van der Waals fluids can be predicted accurately.
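The van der Waals equation of state underlying the model can be written down directly in terms of the number density (a minimal sketch; the constants below are illustrative, roughly CO2-like, and not taken from the thesis):

```python
def vdw_pressure(n, T, a, b, R=8.314):
    """van der Waals equation of state, p = n R T / (1 - n b) - a n^2,
    written in terms of the molar number density n = 1 / V_m.
    The b term models hard-core repulsion, the a term long-range attraction."""
    return n * R * T / (1 - n * b) - a * n**2

# illustrative CO2-like constants (SI, molar units)
a, b = 0.364, 4.27e-5       # Pa m^6 mol^-2, m^3 mol^-1
T = 300.0
p_vdw = vdw_pressure(1.0, T, a, b)   # dilute: 1 mol / m^3
p_ideal = 1.0 * 8.314 * T
```

In the dilute limit the attraction and excluded-volume corrections vanish and the ideal-gas law is recovered, mirroring the model's agreement with the Boltzmann equation there; at high density the attractive term lowers the pressure below the ideal value.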