
    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    Data-driven deep-learning methods for the accelerated simulation of Eulerian fluid dynamics

    Deep-learning (DL) methods for the fast inference of the temporal evolution of fluid-dynamics systems, based on the prior recognition of features underlying large sets of fluid-dynamics data, have been studied. Specifically, models based on convolutional neural networks (CNNs) and graph neural networks (GNNs) were proposed and discussed. A U-Net, a popular fully convolutional architecture, was trained to infer wave dynamics on liquid surfaces surrounded by walls, given the system state at previous time points as input. A term penalising the error of the spatial derivatives was added to the loss function, which suppressed spurious oscillations and yielded more accurate locations and lengths of the predicted wavefronts. This model proved to generalise accurately to complex wall geometries not seen during training.

    As opposed to the image data structures processed by CNNs, graphs offer greater freedom in how data is organised and processed. This motivated the use of graphs to represent the state of fluid-dynamics systems discretised by unstructured sets of nodes, and of GNNs to process such graphs. Graphs enabled more accurate representations of curvilinear geometries and higher-resolution node placement exclusively in areas where the physics is more challenging to resolve. Two novel GNN architectures were designed for fluid-dynamics inference: the MuS-GNN, a multi-scale GNN, and the REMuS-GNN, a rotation-equivariant multi-scale GNN. Both architectures work by repeatedly passing messages from each node to its nearest nodes in the graph. Additionally, lower-resolution graphs, with a reduced number of nodes, are derived from the original graph, and messages are also passed from finer to coarser graphs and vice versa. The low-resolution graphs allow physics spanning a range of length scales to be captured efficiently.

    Advection and fluid flow, modelled by the incompressible Navier-Stokes equations, were the two types of problem used to assess the proposed GNNs. Whereas a single-scale GNN was sufficient to achieve high generalisation accuracy in advection simulations, flow simulations benefited greatly from an increasing number of low-resolution graphs. The generalisation and long-term accuracy of these simulations were further improved by the REMuS-GNN architecture, which processes the system state independently of the orientation of the coordinate system thanks to a rotation-invariant representation and carefully designed components. To the best of the author's knowledge, the REMuS-GNN was the first rotation-equivariant and multi-scale GNN. The simulations were accelerated by between one (on a CPU) and three (on a GPU) orders of magnitude with respect to a CPU-based numerical solver. Additionally, the parallelisation of multi-scale GNNs resulted in a close-to-linear speedup with the number of CPU cores or GPUs.
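    As an illustration of the multi-scale message passing described above, the following is a minimal two-scale sketch in PyTorch. The module names, MLP widths, and the sum-pool/broadcast scheme between scales are illustrative assumptions, not the MuS-GNN or REMuS-GNN architectures themselves.

```python
import torch
import torch.nn as nn

class MessagePassing(nn.Module):
    """One round of message passing along directed edges of a graph."""
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.upd = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # h: [N, dim] node features; edges: [E, 2] (source, destination) pairs
        src, dst = edges[:, 0], edges[:, 1]
        m = self.msg(torch.cat([h[src], h[dst]], dim=-1))   # per-edge messages
        agg = torch.zeros_like(h).index_add_(0, dst, m)     # sum at receivers
        return self.upd(torch.cat([h, agg], dim=-1))        # node update

class TwoScaleGNN(nn.Module):
    """Fine-scale pass, pool to a coarse graph, coarse pass, broadcast back."""
    def __init__(self, dim: int):
        super().__init__()
        self.fine = MessagePassing(dim)
        self.coarse = MessagePassing(dim)

    def forward(self, h, fine_edges, coarse_edges, assign):
        # assign[i] = index of the coarse node that fine node i maps to
        h = self.fine(h, fine_edges)                        # short-range physics
        num_coarse = int(assign.max()) + 1
        hc = torch.zeros(num_coarse, h.shape[1]).index_add_(0, assign, h)
        hc = self.coarse(hc, coarse_edges)                  # long-range physics
        return h + hc[assign]                               # fine + upsampled coarse

# Toy usage: six fine nodes pooled into two coarse nodes.
h = torch.randn(6, 16)
fine_edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [4, 5]])
coarse_edges = torch.tensor([[0, 1], [1, 0]])
assign = torch.tensor([0, 0, 0, 1, 1, 1])
out = TwoScaleGNN(16)(h, fine_edges, coarse_edges, assign)  # -> [6, 16]
```

    Stacking further coarse levels, with messages passed both up and down the hierarchy, gives the kind of multi-scale structure the abstract reports as increasingly beneficial for flow simulations.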

    TURBOMOLE: Today and Tomorrow

    TURBOMOLE is a highly optimized software suite for large-scale quantum-chemical and materials science simulations of molecules, clusters, extended systems, and periodic solids. TURBOMOLE uses Gaussian basis sets and has been designed with robust and fast quantum-chemical applications in mind, ranging from homogeneous and heterogeneous catalysis to inorganic and organic chemistry and various types of spectroscopy, light–matter interactions, and biochemistry. This Perspective briefly surveys TURBOMOLE’s functionality and highlights recent developments that have taken place between 2020 and 2023, comprising new electronic structure methods for molecules and solids, previously unavailable molecular properties, embedding, and molecular dynamics approaches. Select features under development are reviewed to illustrate the continuous growth of the program suite, including nuclear electronic orbital methods, Hartree–Fock-based adiabatic connection models, simplified time-dependent density functional theory, relativistic effects and magnetic properties, and multiscale modeling of optical properties.

    Quantum computation of stopping power for inertial fusion target design

    Stopping power is the rate at which a material absorbs the kinetic energy of a charged particle passing through it -- one of many properties needed over a wide range of thermodynamic conditions in modeling inertial fusion implosions. First-principles stopping calculations are classically challenging because they involve the dynamics of large electronic systems far from equilibrium, with accuracies that are particularly difficult to constrain and assess in the warm-dense conditions preceding ignition. Here, we describe a protocol for using a fault-tolerant quantum computer to calculate stopping power from a first-quantized representation of the electrons and projectile. Our approach builds upon the electronic structure block encodings of Su et al. [PRX Quantum 2, 040332 (2021)], adapting and optimizing those algorithms to estimate observables of interest from the non-Born-Oppenheimer dynamics of multiple particle species at finite temperature. Ultimately, we report logical qubit requirements and leading-order Toffoli costs for computing the stopping power of various projectile/target combinations relevant to interpreting and designing inertial fusion experiments. We estimate that scientifically interesting and classically intractable stopping power calculations can be quantum simulated with roughly the same number of logical qubits and about one hundred times more Toffoli gates than is required for state-of-the-art quantum simulations of industrially relevant molecules such as FeMoCo or P450.
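    For reference, the target quantity has a standard textbook definition, independent of the quantum protocol described above: the projectile's kinetic-energy loss per unit path length,

```latex
% Stopping power of a material for a projectile with kinetic energy E,
% position x along its path, and velocity v:
S(E) = -\frac{\mathrm{d}E}{\mathrm{d}x} = -\frac{1}{v}\,\frac{\mathrm{d}E}{\mathrm{d}t}
```

    The rate form on the right suggests why observables of the projectile's real-time dynamics, as in the protocol above, suffice to estimate it.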

    Algorithms for sparse convolution and sublinear edit distance

    In this PhD thesis on fine-grained algorithm design and complexity, we investigate output-sensitive and sublinear-time algorithms for two important problems.

    (1) Sparse Convolution: Computing the convolution of two vectors is a basic algorithmic primitive with applications across all of Computer Science and Engineering. In the sparse convolution problem we assume that the input and output vectors have at most t nonzero entries, and the goal is to design algorithms with running times dependent on t. For the special case where all entries are nonnegative, which is particularly important for algorithm design, it has been known for twenty years that sparse convolutions can be computed in near-linear randomized time O(t log^2 n). In this thesis we develop a randomized algorithm with running time O(t log t), which is optimal (under some mild assumptions), and the first near-linear deterministic algorithm for sparse nonnegative convolution. We also present an application of these results, leading to seemingly unrelated fine-grained lower bounds against distance oracles in graphs.

    (2) Sublinear Edit Distance: The edit distance of two strings is a well-studied similarity measure with numerous applications in computational biology. While computing the edit distance exactly provably requires quadratic time, a long line of research has led to a constant-factor approximation algorithm in almost-linear time. Perhaps surprisingly, an edit distance of k can also be approximated within a large factor O(k) in sublinear time Õ(n/k + poly(k)). We drastically improve the approximation factor of the known sublinear algorithms from O(k) to k^{o(1)} while preserving the Õ(n/k + poly(k)) running time.
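    To make the problem statement concrete, a naive output-sensitive sparse convolution can be written directly from the definition; this baseline runs in O(t^2) time, far from the O(t log t) algorithm above, and is purely illustrative.

```python
from collections import defaultdict

def sparse_convolve(a: dict, b: dict) -> dict:
    """Convolve vectors given as {index: value} maps of their nonzero entries:
    (a * b)[k] = sum over i + j = k of a[i] * b[j]."""
    out = defaultdict(float)
    for i, x in a.items():
        for j, y in b.items():
            out[i + j] += x * y
    return {k: v for k, v in out.items() if v != 0.0}

# Two 3-sparse vectors; the output has at most 9 nonzero entries.
print(sparse_convolve({0: 1.0, 5: 2.0, 9: 1.0}, {1: 3.0, 4: 1.0, 8: 2.0}))
```

    Likewise, the exact quadratic-time edit distance computation that the sublinear approximation algorithms sidestep is the classical dynamic program:

```python
def edit_distance(a: str, b: str) -> int:
    """Classical O(|a| * |b|) dynamic program over insert/delete/substitute."""
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]                           # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute or match
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))   # -> 3
```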

    Assessing sustainable development in industrial regions towards smart built environment management using Earth observation big data

    This thesis investigates the sustainability of nationwide industrial regions using Earth observation big data, from environmental and socio-economic perspectives. The research contributes to spatial methodology design and decision-making support. New spatial methods, including the robust geographical detector and the concept of geocomplexity, are proposed to characterise the spatial properties of industrial sustainability. The study delivers scientific decision-making advice to industry stakeholders and policymakers for the post-construction assessment and future planning phases. The research has been published in prestigious geography journals.

    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    N-type Organic Materials for Thermoelectric Applications

    Harvesting waste heat as a renewable energy source could allow us to power small devices in everyday life, from medical devices to wireless networks, using heat sources all around us. In particular, the use of organic materials as the active thermoelectric component opens up the possibility of flexible, printed electronics and cheap mass production. In this work, three topics are explored: (i) the use of graphene-based materials for thermoelectric applications; (ii) how heat moves through polymer thin films with topographical features, in particular P3HT; and (iii) the effect of ladderisation on the polymer BBB and the resulting thermal and structural properties of the laddered structure, BBL.

    Graphene is a versatile material with intrinsically high carrier mobility. However, impressive electronic properties are not wholly advantageous for thermoelectric energy harvesting, as they usually lead to a high electronic thermal conductivity, which reduces a material's zT.
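    For context, zT here is the standard dimensionless thermoelectric figure of merit (a textbook definition, not specific to this thesis):

```latex
% Dimensionless thermoelectric figure of merit at absolute temperature T:
%   S      : Seebeck coefficient
%   \sigma : electrical conductivity
%   \kappa : thermal conductivity = electronic (kappa_e) + lattice (kappa_l)
zT = \frac{S^{2}\,\sigma\,T}{\kappa_{e} + \kappa_{l}}
```

    A high electronic thermal conductivity therefore suppresses zT directly, even when the electrical conductivity and Seebeck coefficient are favourable, which is the trade-off the functionalisation strategy below targets.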
    In an attempt to remedy this, many researchers have successfully merged graphene with polymers. One approach is to covalently bond functional groups onto the graphene surface and then polymerise a layer of monomers on top. The functional groups interrupt the repetitive structure of graphene, inducing more phonon-scattering events, which reduces the lattice thermal conductivity. This process also reduces carrier mobility, an unwanted but inevitable effect that lowers both the electronic thermal conductivity and the electrical conductivity. A layer of polymer is hence grown on top to restore some electronic pathways. This was the approach deployed in this work: the polymer of choice was PEDOT, so a sulfonate group was chosen as the functional group.

    While the thermoelectric properties of pure graphene measured in this work agree with the literature, the substrate choice clearly played a noticeable part. For graphene on a silicon nitride (Si3N4)-terminated substrate, a higher Seebeck coefficient of ~25 μV/K was measured, compared with ~17 μV/K for graphene on aluminium oxide (Al2O3)-terminated substrates. This led to zT reaching a maximum value of ~3x10^-3. The difference may be explained by a possible band-gap opening of 0.22 eV, observed with UPS for graphene on Si3N4 but not for graphene on Al2O3. Raman spectroscopy showed that the D-band, associated with disorder and defects within the graphene lattice, was present for graphene on Al2O3 but not for graphene on Si3N4, which could also explain the lower Seebeck coefficient, as this parameter also depends on carrier mobility. For functionalised graphene/PEDOT films, again, samples on Si3N4 performed better than films on Al2O3. XPS showed that a larger concentration of functional groups was bonded to the graphene surface for films on Al2O3, which could be due to the different Fermi levels of graphene on the respective substrate materials, and also to the presence of more graphene edges in graphene/Al2O3 films (as shown by the D-band in the Raman spectrum). The surface functionalisation succeeded in reducing the thermal conductivity; however, the electrical conductivity was heavily degraded, in particular for films on Al2O3, where the electrical conductivity was almost an order of magnitude lower and the thermal conductivity approximately half the values seen for films on Si3N4. The lower concentration of functional groups in the films on Si3N4 was hence beneficial to the system. Raman spectroscopy also revealed a different morphology between the two sample types: a higher degree of crystallinity, due to shorter chains, is seen in the films on Si3N4 and can also contribute to their better electronic properties. Overall, graphene is shown to work well as a base material for thermoelectrics, and parameters such as substrate choice and functionalisation efficiency can be exploited to tune the thermoelectric properties.

    All in-plane thermal conductivity measurements throughout my work rely on the simple assumption that the heat flux is homogeneous and one-dimensional. However, for thin films, where topographical roughness is inevitable, the heat flux deviates from this ideal scenario and measured values begin to deviate from the true intrinsic values of the material. My second project focuses on understanding how measured values are affected by a simple rectangular dip or trough on the surface of a thin film, and on whether modelled scenarios can represent realistic ones. Finite element modelling was used to represent a segment of a doped P3HT thin film with a thermal conductivity of 0.4 W/mK, a thickness of 300 nm, a width of 1500 nm, and a single surface feature. The film is modelled on top of a membrane layer of thickness 144 nm, representing a substrate, with a thermal conductivity of 2.6 W/mK. Fourier's law was then used to extract a thermal conductivity value representing what would be measured in practice. Fourier's law states that the local heat flux is proportional to the area through which it travels; a measured thermal conductivity is therefore expected to deviate from the material's intrinsic value only if the film's cross-section perpendicular to the heat flux changes. By comparing the extracted thermal conductivity with the intrinsic thermal conductivity defined in the model, it is shown that as the feature grows deeper (at constant width), the extracted value deviates superlinearly from the intrinsic value. However, with a full 500 nm wide crack in the film, the extracted value is only 40% lower than the intrinsic thermal conductivity, which is only possible because of the more thermally conductive membrane (ten times more conductive than the film). Colour-scaled images show that heat is redirected into the membrane, allowing continuous heat flow to the other side of the crack. When the membrane is less thermally conductive than the film, it cannot redirect the heat effectively enough and the extracted thermal conductivity drops by almost 100% (film twenty times more conductive than the membrane). A much less significant effect is seen when keeping the depth of the feature constant while varying the width, because the change is then parallel to the heat flow. For this section, the feature depth was kept constant at 210 nm. The peak deviation from the intrinsic thermal conductivity occurs when the width of the feature is ~66% of the full simulated length; beyond this point the extracted thermal conductivity converges back towards the intrinsic value, with a maximum deviation of ~25% below the intrinsic thermal conductivity. When the width of the surface feature reaches 50% of the segment width, the regions at the original film height effectively become peaks; the asymmetry of this curve shows that heat is redirected much more efficiently around the constricted areas than around the peaks. This is, again, due to the aid of the more thermally conductive membrane. When the film is more thermally conductive than the membrane, no such aid is given and the deviation of the extracted thermal conductivity peaks symmetrically when the feature occupies 50% of the simulated segment; in this case the maximum deviation is much higher, at ~35%. The same behaviour is therefore seen for a suspended film.
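    A minimal sketch of this extraction procedure is shown below, using finite differences rather than the finite element modelling of the thesis; the grid, trough geometry, and boundary conditions are assumptions only loosely based on the values above. It solves steady-state conduction over a film-on-membrane cross-section containing a surface trough, integrates the heat flux through a mid-plane, and backs out an effective conductivity via Fourier's law.

```python
import numpy as np

# Assumed geometry (10 nm cells): 140 nm membrane under a ~310 nm film,
# 1500 nm long, with a 100 nm wide, 90 nm deep trough in the film surface.
ny, nx = 45, 150
k = np.full((ny, nx), 0.4)          # film conductivity [W/mK] (from the abstract)
k[:14, :] = 2.6                     # bottom rows: membrane (more conductive)
k[-9:, 70:80] = 1e-9                # trough: near-zero conductivity (void)

def face(a, b):                     # harmonic-mean conductivity at a cell face
    return 2.0 * a * b / (a + b)

kp = np.pad(k, 1, mode="edge")
kE = face(kp[1:-1, 1:-1], kp[1:-1, 2:])
kW = face(kp[1:-1, 1:-1], kp[1:-1, :-2])
kN = face(kp[1:-1, 1:-1], kp[:-2, 1:-1])
kS = face(kp[1:-1, 1:-1], kp[2:, 1:-1])

T = np.tile(np.linspace(1.0, 0.0, nx), (ny, 1))   # hot left end, cold right end
for _ in range(30000):              # Jacobi relaxation of div(k grad T) = 0
    Tp = np.pad(T, 1, mode="edge")  # edge padding = insulated top/bottom faces
    T = (kE * Tp[1:-1, 2:] + kW * Tp[1:-1, :-2] +
         kN * Tp[:-2, 1:-1] + kS * Tp[2:, 1:-1]) / (kE + kW + kN + kS)
    T[:, 0], T[:, -1] = 1.0, 0.0    # re-impose the fixed-temperature ends

# Total flux through a mid-plane cross-section (per unit depth): the cell size
# cancels between the gradient and the face area, leaving k_face * dT per row.
j = nx // 2
q = np.sum(face(k[:, j], k[:, j + 1]) * (T[:, j] - T[:, j + 1]))

# Fourier's law for the whole segment: q = k_eff * (H / L) * dT, with dT = 1.
k_eff = q * nx / ny
print(f"extracted k_eff = {k_eff:.3f} W/mK vs intrinsic film k = 0.4 W/mK")
```

    With the membrane more conductive than the film, heat detours through the membrane beneath the trough, so the extracted value sits well above what the constriction alone would allow; this is the redirection effect described above.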
    My final project explores the effect that ladderising a polymer from a single strand to a double strand has on thermal conductivity. The two polymers studied were the single-stranded polymer BBB and the double-stranded ladder polymer BBL, which are very similar in structure. The thermal conductivity of BBB was 0.28 W/mK, typical of an amorphous polymer. The thermal conductivity of BBL was significantly higher, at ~1 W/mK. The high thermal conductivity can be attributed to ladder polymers being much stiffer, owing to the double bonds between units, and possibly to a higher degree of crystallinity compared with an amorphous polymer. The amorphous and semi-crystalline natures of BBB and BBL, respectively, are confirmed by GIWAXS data. This is interesting given the significant percentage increase in thermal conductivity in doped BBB, where a small degree of crystallinity might be expected. BBB had an activation energy of 0.25 eV, whereas the activation energy of BBL was lower, at 0.10 eV. This suggests that the BBL structure has shallower traps associated with disorder, which promotes carrier mobility and phonon propagation, in agreement with the thermal conductivity data. The structures of these polymers were analysed further using FTIR, which made clear that doping affected the two structures differently. Doped BBL shows a slight peak shift associated with the neutral C=O bond, which can suggest stronger electron-phonon coupling. The spectra also suggest that the dopant affects the C=N, C-N, C=O and C-C units within BBL, whereas in BBB the dopants reside locally near the C=O units. The more delocalised nature of polarons within doped BBL may explain the wider polaron band seen in the UV-VIS spectrum.