
    Machine learning in solar physics

    The application of machine learning in solar physics has the potential to greatly enhance our understanding of the complex processes that take place in the atmosphere of the Sun. Using techniques such as deep learning, we are now in a position to analyze large amounts of data from solar observations and identify patterns and trends that may not be apparent with traditional methods. This can improve our understanding of explosive events such as solar flares, which can strongly affect the Earth's environment; predicting such hazardous events is crucial for our technological society. Machine learning can also improve our understanding of the inner workings of the Sun itself, allowing us to go deeper into the data and to propose more complex models to explain them. Additionally, machine learning can help automate the analysis of solar data, reducing the need for manual labor and increasing the efficiency of research in this field.
    Comment: 100 pages, 13 figures, 286 references; accepted for publication as a Living Review in Solar Physics (LRSP).

    Integrated Optical Fiber Sensor for Simultaneous Monitoring of Temperature, Vibration, and Strain in High Temperature Environment

    Important high-temperature parts of an aero-engine, especially the power-related fuel system and rotor system, directly determine the reliability and service life of the engine. These parts operate in extremely harsh environments, where high temperature, vibration, and strain are the main factors leading to failure. Simultaneous measurement of high temperature, vibration, and strain is therefore essential to monitor and ensure the safe operation of an aero-engine. In my thesis work, I have focused on the research and development of two new sensors for the fuel and rotor systems of an aero-engine, which must withstand the same high-temperature conditions, typically 900 °C or above, but have different requirements for vibration and strain measurement.

    Firstly, to meet the demand for high-temperature operation, high vibration sensitivity, and high strain resolution in fuel systems, an integrated sensor based on two fiber Bragg gratings in series (Bi-FBG sensor) is proposed and demonstrated for the simultaneous measurement of temperature, strain, and vibration. In this sensor, an L-shaped cantilever is introduced to improve vibration sensitivity: by converting the displacement of its free end into stress on the FBG, the L-shaped cantilever achieves a sensitivity about 400% higher than that of straight cantilevers. To compensate for the low intrinsic strain sensitivity of FBGs, a spring-beam sensitization structure is designed that concentrates strain deformation, increasing the sensitivity to 5.44 pm/με. A novel decoupling method, Steps Decoupling and Temperature Compensation (SDTC), is proposed to address the cross-interference between temperature, vibration, and strain, and a model of the sensing characteristics and inter-parameter interference is established to achieve accurate signal decoupling. Experimental tests demonstrate the good performance of the sensor.

    Secondly, for engine rotor systems, which demand measurement of higher vibration frequencies and larger strains, a sensor based on three cascaded fiber Fabry-Pérot interferometers in series (Tri-FFPI sensor) is designed and demonststrated for multiparameter measurement. The cascaded-FFPI structure enables simultaneous measurement of high temperature and large strain. A miniaturized FFPI with a cantilever is designed for high-frequency vibration measurement, and an optimization model for its geometric parameters is established to investigate the factors influencing its sensing characteristics. A cascaded-FFPI fabrication method combining chemical etching and offset fusion is proposed to maintain the flatness and high reflectivity of the FFPI surfaces, which improves measurement accuracy. A new high-precision cavity-length demodulation method is developed based on vector matching and clustering-competition particle swarm optimization (CCPSO) to improve the demodulation accuracy of the cascaded-FFPI cavity lengths. By relating the cascaded-FFPI spectrum to a multidimensional space, cavity-length demodulation is transformed into a search for the highest correlation value in that space, overcoming the limit otherwise imposed on demodulation accuracy by the spectral wavelength resolution. Clustering and competition behaviours designed into CCPSO reduce the demodulation error by 87.2% compared with the commonly used particle swarm optimization method. Good performance and multiparameter decoupling have been successfully demonstrated in experimental tests.
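    To illustrate the kind of multiparameter decoupling such sensors rely on, below is a minimal sketch of a linear sensitivity-matrix inversion for a dual-FBG sensor. The matrix entries are illustrative assumptions (only the 5.44 pm/με strain sensitivity is taken from the abstract); the thesis's SDTC method additionally handles vibration and is more elaborate.

```python
import numpy as np

# Hypothetical 2x2 sensitivity matrix for a dual-FBG sensor: wavelength
# shifts [dlam1, dlam2] (pm) respond linearly to [dT (°C), deps (με)].
# Temperature coefficients are illustrative; FBG2's strain coefficient uses
# the 5.44 pm/με figure quoted in the abstract, and FBG1 is assumed to be a
# strain-isolated temperature reference.
K = np.array([
    [10.0, 0.00],   # FBG1: temperature-only reference grating (assumed)
    [10.0, 5.44],   # FBG2: strain-sensitized grating
])

def decouple(dlam1_pm, dlam2_pm):
    """Recover (dT, deps) from measured wavelength shifts by inverting K."""
    dT, deps = np.linalg.solve(K, np.array([dlam1_pm, dlam2_pm]))
    return dT, deps

# Example: a 50 °C rise plus 100 με of strain produces shifts K @ [50, 100];
# inverting the matrix recovers both parameters.
shifts = K @ np.array([50.0, 100.0])
print(decouple(*shifts))   # -> (50.0, 100.0)
```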

    Towards A Practical High-Assurance Systems Programming Language

    Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity, requiring considerable expertise in both systems programming and formal verification. Without appropriate tools providing abstraction and automation, development can be extremely costly due to the sheer complexity of the systems and the nuances within them. Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code. To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which provides users with a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for greater expressiveness and better integration of systems programmers into the verification process.
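    As a loose illustration of the property-based testing idea mentioned above, the sketch below (in Python with the hypothesis library, not Cogent's actual framework) checks that a stateful, low-level-style implementation agrees with a purely functional specification on randomly generated inputs.

```python
# Toy version of property-based refinement testing: a purely functional
# "specification" and a lower-level, stateful "implementation" are checked
# to agree on random inputs. Cogent's framework relates Cogent specs to C
# code; this Python sketch only illustrates the method.
from hypothesis import given, strategies as st

def spec_sum(xs):
    """Functional specification: the sum of a list."""
    return sum(xs)

def impl_sum(xs):
    """'Low-level' implementation with explicit state and iteration."""
    acc = 0
    i = 0
    while i < len(xs):
        acc += xs[i]
        i += 1
    return acc

@given(st.lists(st.integers()))
def test_impl_refines_spec(xs):
    # The implementation must produce the specified result on every input.
    assert impl_sum(xs) == spec_sum(xs)

if __name__ == "__main__":
    test_impl_refines_spec()  # hypothesis generates and checks many cases
```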

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment requires full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
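    For intuition on fixed-frequency beam scanning, the sketch below uses the standard leaky-wave relation sin(theta) ≈ beta/k0 with an assumed TE10-like dispersion: biasing the LC changes its permittivity, which changes beta and steers the beam. The frequency, waveguide width, and permittivity range are illustrative assumptions, not the paper's design values.

```python
# Illustrative fast-wave leaky-wave antenna scan calculation. A leaky-wave
# antenna radiates near sin(theta) = beta/k0; tuning the LC permittivity
# eps_r shifts beta at fixed frequency, steering the beam. All numbers below
# are assumptions chosen so the mode stays fast (beta < k0).
import numpy as np

c = 3e8          # speed of light (m/s)
f = 28e9         # operating frequency (Hz), assumed
a = 4e-3         # effective broad-wall width (m), assumed
k0 = 2 * np.pi * f / c

for eps_r in (2.4, 2.5, 2.6, 2.7):   # assumed LC tuning range
    beta = np.sqrt(eps_r * k0**2 - (np.pi / a)**2)  # TE10-like dispersion
    theta = np.degrees(np.arcsin(beta / k0))
    print(f"eps_r = {eps_r:.1f} -> beam at {theta:.1f} deg from broadside")
```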

    Advanced Nanomaterials for Electrochemical Energy Conversion and Storage

    This book focuses on advanced nanomaterials for energy conversion and storage, covering their design, synthesis, properties, and applications in various fields. Developing advanced nanomaterials for high-performance, low-cost energy conversion and storage devices and technologies is of great significance for addressing the energy crisis and environmental pollution. In this book, various advanced nanomaterials for batteries, capacitors, electrocatalysis, and nanogenerators, as well as magnetic nanomaterials, are presented.

    Describing Faces for Identification: Getting the Message, But Not The Picture

    Although humans rely on faces and language for social communication, the role of language in communicating about faces is poorly understood. Describing faces and identifying faces from verbal descriptions are important tasks in social and criminal justice settings. Prior research indicates that people have difficulty relaying face identity to others via verbal description; however, little is known about the process, correlates, or content of communication about faces (hereafter ‘face communication’). In Chapter Two, I investigated face communication accuracy and its relationship with an individual’s perceptual face skill. I also examined the efficacy of a brief training intervention for improving face description ability. I found that individuals could complete face communication tasks with above-chance accuracy, in both interactive and non-interactive conditions, and that abilities in describing faces and using face descriptions for identification were related to an individual’s perceptual face skill. However, training was not effective for improving face description ability. In Chapter Three, I investigated qualitative attributes of face descriptions. I found no evidence of qualitative differences in face descriptions as a function of the describer’s perceptual skill with faces, the identification utility of the descriptions, or the describer’s familiarity with the face. In Chapters Two and Three, the reliability of the measures may have limited the ability to detect relationships between face communication accuracy and potential correlates of performance. Consequently, in Chapter Four, I examined face communication accuracy when using constrained face descriptions derived from a rating scale, and the relationship between the identification utility of such descriptions and their reliability (test-retest and multi-rater). I found that constrained face descriptions were less useful for identification than free descriptions, and that the reliability of a description was unrelated to its identification utility. Together, the findings in this thesis indicate that face communication is very challenging, both for individuals undertaking the task and for researchers seeking to measure performance reliably. Given that the mechanisms contributing to variance in face communication accuracy remain largely elusive, legal stakeholders would be wise to use caution when relying on evidence involving face descriptions.

    Information Theory for Complex Systems Scientists

    In the 21st century, many of the crucial scientific and technical issues facing humanity can be understood as problems associated with understanding, modelling, and ultimately controlling complex systems: systems comprised of a large number of non-trivially interacting components whose collective behaviour can be difficult to predict. Information theory, a branch of mathematics historically associated with questions about encoding and decoding messages, has emerged as something of a lingua franca for those studying complex systems, far exceeding its original narrow domain of communication systems engineering. In the context of complexity science, information theory provides a set of tools for uncovering statistical and effective dependencies between interacting components, relationships between systems and their environment, and mereological whole-part relationships, and it is sensitive to non-linearities missed by commonly used parametric statistical models. In this review, we aim to provide an accessible introduction to the core of modern information theory, aimed specifically at aspiring (and established) complex systems scientists. We begin with standard measures, such as Shannon entropy, relative entropy, and mutual information, before building to more advanced topics, including information dynamics, measures of statistical complexity, information decomposition, and effective network inference. In addition to detailing the formal definitions, we make an effort to discuss how information theory can be interpreted, and to develop the intuition behind abstract concepts like "entropy," in the hope that this will enable interested readers to understand what information is, and how it is used, at a more fundamental level.
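    As a concrete starting point, the sketch below computes two of the standard measures named above, Shannon entropy and mutual information, from a toy joint distribution (standard base-2 definitions; the probability values are arbitrary).

```python
# Shannon entropy H(X) = -sum_x p(x) log2 p(x), and mutual information
# via the identity I(X;Y) = H(X) + H(Y) - H(X,Y).
import numpy as np

def entropy(p):
    """Entropy in bits of a probability vector, skipping zero entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Toy joint distribution p(x, y) of two correlated binary variables.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)   # marginal of X
p_y = p_xy.sum(axis=0)   # marginal of Y

mi = entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())
print(f"H(X) = {entropy(p_x):.3f} bits, I(X;Y) = {mi:.3f} bits")
# -> H(X) = 1.000 bits, I(X;Y) = 0.278 bits
```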

    Application of deep learning methods in materials microscopy for the quality assessment of lithium-ion batteries and sintered NdFeB magnets

    Quality control focuses on detecting product defects and monitoring activities to verify that products meet the desired quality standard. Many approaches to quality control use specialized image processing software based on hand-crafted features, designed by domain experts to detect objects and analyze images. These models, however, are tedious and costly to develop and difficult to maintain, and the resulting solutions are often brittle, requiring considerable adaptation for even slightly different use cases. For these reasons, quality control in industry is still frequently performed manually, which is time-consuming and error-prone. We therefore propose a more general, data-driven approach that builds on recent advances in computer vision and uses convolutional neural networks to learn representative features directly from the data. While conventional methods use hand-crafted features to detect individual objects, deep learning approaches learn generalizable features directly from the training samples in order to detect a variety of objects. In this dissertation, models and techniques are developed for the automated detection of defects in light microscopy images of materialographically prepared cross-sections. We develop defect detection models that can be broadly divided into supervised and unsupervised deep learning techniques. In particular, several supervised deep learning models are developed for detecting defects in the microstructure of lithium-ion batteries, ranging from binary classification models based on a sliding-window approach with limited training data to complex defect detection and localization models based on one- and two-stage detectors. Our final model can detect and localize multiple classes of defects in large microscopy images with high accuracy and in near real time. However, successfully training supervised deep learning models typically requires a sufficiently large number of labeled training examples, which are often not readily available and can be very costly to obtain. We therefore propose two approaches based on unsupervised deep learning for detecting anomalies in the microstructure of sintered NdFeB magnets without the need for labeled training data. These models are able to detect defects by learning indicative features of only "normal" microstructure patterns from the training data. We present experimental results of the proposed defect detection systems by performing a quality assessment on commercial samples of lithium-ion batteries and sintered NdFeB magnets.
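    As an illustration of the sliding-window classification approach described above, the sketch below slides a small binary CNN over a grayscale micrograph and scores each patch as defect / no-defect. The architecture, patch size, and stride are assumptions for illustration, not the dissertation's trained models.

```python
# Minimal sliding-window defect scoring with a small binary CNN (PyTorch).
# An untrained model gives meaningless scores; this only shows the mechanics.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 1)  # for 64x64 input patches

    def forward(self, x):
        h = self.features(x)
        return torch.sigmoid(self.head(h.flatten(1)))  # defect probability

def sliding_window_scores(image, model, patch=64, stride=32):
    """Score every patch of a 2D grayscale image; returns a probability grid."""
    model.eval()
    scores = []
    with torch.no_grad():
        for top in range(0, image.shape[0] - patch + 1, stride):
            row = []
            for left in range(0, image.shape[1] - patch + 1, stride):
                p = image[top:top + patch, left:left + patch]
                row.append(model(p[None, None]).item())  # add batch/channel dims
            scores.append(row)
    return torch.tensor(scores)

# Example on a random stand-in 'micrograph'.
img = torch.rand(256, 256)
print(sliding_window_scores(img, PatchClassifier()).shape)  # -> (7, 7)
```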

    Machine-learning models for analysis of biomass reactions and prediction of reaction energies

    Biomass and derived compounds have the potential to form the basis of a sustainable economy by providing a renewable source of many chemicals. The selective synthesis and conversion of biomass compounds are often catalyzed by transition metal catalysts. Computational screening has emerged as a promising tool for the discovery and optimization of active and selective catalysts, but most existing examples focus on small-molecule reactions. In this study, the density functional theory (DFT) approach is first validated by comparing computational results to experiments for ethanol conversion over molybdenum oxide. Subsequently, DFT is combined with machine-learning approaches to identify and overcome challenges associated with the computational screening of biomass catalysts. A recursive algorithm is used to enumerate possible intermediates and chemical bond cleavage reactions for linear biomass molecules containing up to six carbons. Machine-learning algorithms based on the Mol2Vec embedding are applied to classify reaction types and to predict gas-phase reaction energies and adsorption energies on Rh(111) (MAE ~0.4 eV). With this workflow, we are able to combine the physics-based density functional tight binding method with the machine-learning model to identify stable binding geometries of biomass intermediates on the Rh(111) surface. Finally, we show preliminary results toward the development of a neural network force field based on the Gaussian multipole feature approach. The results indicate that this strategy is a promising route toward fast and accurate predictions of both energies and forces of hydrocarbons on a range of transition-metal surfaces. The results of this thesis demonstrate the utility of machine-learning techniques for studying biomass reactions and indicate the potential for further developments in this field.
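    The embed-and-regress workflow can be sketched as below. The abstract uses Mol2Vec embeddings; here RDKit Morgan fingerprints stand in as the molecular representation, and the (SMILES, energy) pairs are placeholders rather than the thesis's DFT data.

```python
# Hedged sketch: featurize molecules, then regress reaction energies.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

def featurize(smiles, n_bits=1024):
    """Morgan fingerprint as a stand-in for the Mol2Vec embedding."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    return np.array(fp)

# Placeholder (SMILES, energy / eV) pairs; real work would use DFT-computed
# reaction energies for biomass intermediates.
data = [("CCO", -0.5), ("CC(O)CO", -0.8), ("OCC(O)CO", -1.1), ("CCCCCO", -0.6)]
X = np.array([featurize(s) for s, _ in data])
y = np.array([e for _, e in data])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict(X[:1]))  # predicted energy for the first molecule
```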