
    Machine learning in solar physics

    The application of machine learning in solar physics has the potential to greatly enhance our understanding of the complex processes that take place in the atmosphere of the Sun. Using techniques such as deep learning, we are now in a position to analyze large amounts of data from solar observations and identify patterns and trends that may not have been apparent with traditional methods. This can help us improve our understanding of explosive events like solar flares, which can have a strong effect on the Earth's environment. Predicting such hazardous events is crucial for our technological society. Machine learning can also improve our understanding of the inner workings of the Sun itself by allowing us to go deeper into the data and to propose more complex models to explain them. Additionally, machine learning can help automate the analysis of solar data, reducing the need for manual labor and increasing the efficiency of research in this field. Comment: 100 pages, 13 figures, 286 references; accepted for publication as a Living Review in Solar Physics (LRSP).

    On Capacity Optimality of OAMP: Beyond IID Sensing Matrices and Gaussian Signaling

    This paper investigates a large unitarily invariant system (LUIS) involving a unitarily invariant sensing matrix, an arbitrarily fixed signal distribution, and forward error control (FEC) coding. A universal Gram-Schmidt orthogonalization is considered for the construction of orthogonal approximate message passing (OAMP), which makes the results applicable to general prototypes without a differentiability restriction. For OAMP with Lipschitz-continuous local estimators, we develop two variational single-input single-output transfer functions, based on which we analyze the achievable rate of OAMP. Furthermore, when the state evolution of OAMP has a unique fixed point, we reveal that, with matched FEC coding, OAMP reaches the constrained capacity of the LUIS predicted by the replica method for an arbitrary signal distribution. The replica method is rigorous for LUIS with Gaussian signaling and for certain sub-classes of LUIS with arbitrary signal distributions. Several area properties are established based on the variational transfer functions of OAMP. Meanwhile, we elaborate a replica constrained capacity-achieving coding principle for LUIS, based on which irregular low-density parity-check (LDPC) codes are optimized for binary signaling in the simulation results. We show that OAMP with the optimized codes achieves significant performance improvements over the un-optimized ones and the well-known Turbo linear MMSE algorithm. For quadrature phase-shift keying (QPSK) modulation, bit error rate (BER) performance approaching the replica constrained capacity is observed under various channel conditions. Comment: single column, 34 pages, 9 figures.
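To make the OAMP recursion concrete, here is a minimal numpy sketch of the textbook OAMP iteration (a de-correlated LMMSE linear estimator followed by a divergence-free MMSE denoiser) for BPSK signaling. It only illustrates the general structure; it does not reproduce the paper's Gram-Schmidt-based construction, its transfer-function analysis, or the LDPC code optimization.

```python
import numpy as np

def oamp_bpsk(y, A, sigma2, n_iter=20):
    """Textbook OAMP for y = A x + n with x in {-1, +1}:
    de-correlated LMMSE linear estimator + divergence-free MMSE denoiser."""
    M, N = A.shape
    s = np.zeros(N)      # current nonlinear-estimator output (estimate of x)
    v = 1.0              # its error variance (unit-power BPSK prior)
    for _ in range(n_iter):
        # LE: LMMSE matrix, rescaled so that tr(W A) = N (error orthogonality)
        W_hat = v * A.T @ np.linalg.inv(v * (A @ A.T) + sigma2 * np.eye(M))
        W = (N / np.trace(W_hat @ A)) * W_hat
        r = s + W @ (y - A @ s)
        # error variance at the denoiser input (standard OAMP estimate)
        B = np.eye(N) - W @ A
        tau = (np.trace(B @ B.T) * v + np.trace(W @ W.T) * sigma2) / N
        # NLE: MMSE denoiser for BPSK, then made divergence-free
        x_post = np.tanh(r / tau)                # posterior mean E[x | r]
        mmse = np.mean(1.0 - x_post ** 2)        # average posterior variance
        gap = max(tau - mmse, 1e-12)
        s = (tau / gap) * (x_post - (mmse / tau) * r)
        v = max(mmse * tau / gap, 1e-12)         # error variance of the new s
    return np.sign(x_post)
```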

    Twenty-five years of sensor array and multichannel signal processing: a review of progress to date and potential research directions

    In this article, a general introduction to the area of sensor array and multichannel signal processing is provided, including associated activities of the IEEE Signal Processing Society (SPS) Sensor Array and Multichannel (SAM) Technical Committee (TC). The main technological advances in five SAM subareas made in the past 25 years are then presented in detail, including beamforming, direction-of-arrival (DOA) estimation, sensor location optimization, target/source localization based on sensor arrays, and multiple-input multiple-output (MIMO) arrays. Six recent developments are also provided at the end to indicate possible promising directions for future SAM research, which are graph signal processing (GSP) for sensor networks; tensor-based array signal processing; quaternion-valued array signal processing; 1-bit and noncoherent sensor array signal processing; machine learning and artificial intelligence (AI) for sensor arrays; and array signal processing for next-generation communication systems.
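As a concrete example of the direction-of-arrival estimation subarea mentioned above, the following is a minimal numpy sketch of the classical MUSIC estimator for a uniform linear array. It is a standard textbook method included for illustration, not code from the article.

```python
import numpy as np

def music_doa(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """MUSIC pseudo-spectrum for a uniform linear array.
    X: (n_sensors, n_snapshots) complex snapshots; d: element spacing in wavelengths."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)         # eigenvalues in ascending order
    En = eigvecs[:, : n_sensors - n_sources]     # noise-subspace eigenvectors
    spectrum = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * np.arange(n_sensors) * np.sin(theta))
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(spectrum)            # peaks indicate source directions
```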

    Integrated Optical Fiber Sensor for Simultaneous Monitoring of Temperature, Vibration, and Strain in High Temperature Environment

    Important high-temperature parts of an aero-engine, especially the power-related fuel system and rotor system, directly determine the reliability and service life of the engine. The working environment of these parts is extremely harsh, typically combining high temperature, vibration, and strain, which are the main factors leading to their failure. Therefore, simultaneous measurement of high temperature, vibration, and strain is essential to monitor and ensure the safe operation of an aero-engine. In my thesis work, I have focused on the research and development of two new sensors for the fuel and rotor systems of an aero-engine; both must withstand the same high-temperature conditions, typically 900 °C or above, but they have different requirements for vibration and strain measurement. Firstly, to meet the demand for high-temperature operation, high vibration sensitivity, and high strain resolution in fuel systems, an integrated sensor based on two fiber Bragg gratings in series (Bi-FBG sensor) is proposed and demonstrated for the simultaneous measurement of temperature, strain, and vibration. In this sensor, an L-shaped cantilever is introduced to improve the vibration sensitivity. By converting its free-end displacement into stress on the FBG, the sensitivity of the L-shaped cantilever is improved by about 400% compared with that of straight cantilevers. To improve the strain sensitivity of the FBGs, a spring-beam strain-sensitization structure is designed, increasing the sensitivity to 5.44 pm/Όε by concentrating the strain deformation. A novel decoupling method, 'Steps Decoupling and Temperature Compensation' (SDTC), is proposed to address the cross-interference between temperature, vibration, and strain, and a model of the sensing characteristics and inter-parameter interference is established to achieve accurate signal decoupling. Experimental tests have demonstrated the good performance of the sensor. Secondly, a sensor based on three cascaded fiber Fabry-PĂ©rot interferometers (Tri-FFPI sensor) is designed and demonstrated for multiparameter measurement in engine rotor systems, which demand higher vibration frequencies and larger strain measurement ranges. In this sensor, the cascaded-FFPI structure enables simultaneous high-temperature and large-strain measurement. An FFPI with a miniaturized cantilever is designed for high-frequency vibration measurement, and a geometric-parameter optimization model is established to investigate the factors influencing its sensing characteristics. A cascaded-FFPI fabrication method combining chemical etching and offset fusion is proposed to maintain the flatness and high reflectivity of the FFPI surfaces, which improves measurement accuracy. A new high-precision cavity-length demodulation method based on vector matching and clustering-competition particle swarm optimization (CCPSO) is developed to improve the demodulation accuracy of the cascaded-FFPI cavity lengths. By relating the cascaded-FFPI spectrum to a multidimensional space, cavity-length demodulation is transformed into a search for the highest correlation value in that space, overcoming the limit that demodulation accuracy is otherwise set by the spectral wavelength resolution. Different clustering and competition characteristics are designed in CCPSO to reduce the demodulation error by 87.2% compared with the commonly used particle swarm optimization method. Good performance and multiparameter decoupling have been successfully demonstrated in experimental tests.
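The temperature/strain cross-sensitivity decoupling described above can be illustrated with the standard FBG sensitivity-matrix approach. The sketch below uses hypothetical calibration coefficients (only the 5.44 pm/Όε strain sensitivity appears in the abstract) and is not the SDTC method itself.

```python
import numpy as np

# Hypothetical sensitivity matrix: rows are the two FBGs, columns are the
# temperature (pm/degC) and strain (pm/microstrain) responses. Only the
# 5.44 pm/microstrain figure comes from the abstract; the rest is illustrative.
K = np.array([[10.0, 1.2],
              [ 9.5, 5.44]])

def decouple(delta_lambda_pm):
    """Recover (delta_T, delta_strain) from the two measured Bragg-wavelength shifts (pm)."""
    return np.linalg.solve(K, np.asarray(delta_lambda_pm, dtype=float))

# e.g. decouple([55.0, 120.0]) -> estimated temperature change (degC) and strain change (microstrain)
```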

    The State of the Art in Deep Learning Applications, Challenges, and Future Prospects: A Comprehensive Review of Flood Forecasting and Management

    Floods are a devastating natural calamity that can seriously harm both infrastructure and people. Accurate flood forecasting and control are essential to lessen these effects and safeguard populations. With its capacity to handle massive amounts of data and provide accurate forecasts, deep learning has emerged as a potent tool for improving flood prediction and control. This work thoroughly reviews the current state of deep learning applications in flood forecasting and management. The review covers a variety of subjects, such as the data sources utilized, the deep learning models used, and the assessment measures adopted to judge their efficacy. It critically assesses current approaches and points out their advantages and disadvantages. The article also examines challenges with data accessibility, the interpretability of deep learning models, and ethical considerations in flood prediction. It further describes potential directions for deep learning research to enhance flood prediction and control, including incorporating uncertainty estimates into forecasts, integrating multiple data sources, developing hybrid models that combine deep learning with other methodologies, and improving the interpretability of deep learning models. Pursuing these research goals can make deep learning models more precise and effective, leading to better flood control plans and forecasts. Overall, this review is a useful resource for academics and professionals working on flood forecasting and management: by surveying the current state of the art, emphasizing difficulties, and outlining areas for future study, it lays a solid foundation. By adopting cutting-edge deep learning algorithms, communities may better prepare for and lessen the destructive effects of floods, thereby protecting people and infrastructure.
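As an illustration of the kind of deep learning flood-forecasting model surveyed in the review, here is a minimal PyTorch sketch of an LSTM that maps a window of hydro-meteorological features to a next-step water-level prediction. The architecture, feature set, and dimensions are assumptions for illustration, not taken from the article.

```python
import torch
import torch.nn as nn

class FloodLSTM(nn.Module):
    """Illustrative LSTM flood forecaster: a window of hydro-meteorological
    features (e.g. rainfall, upstream level, soil moisture) -> next-step level."""
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # forecast for the next time step

# usage: model = FloodLSTM(); y_hat = model(torch.randn(8, 48, 4))
```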

    Encoder-Decoder Networks for Self-Supervised Pretraining and Downstream Signal Bandwidth Regression on Digital Antenna Arrays

    This work presents the first application of self-supervised learning to data from digital antenna arrays. Encoder-decoder networks are pretrained on digital array data to perform a self-supervised noisy-reconstruction task called channel in-painting, in which the network infers the contents of array data that has been masked with zeros. The self-supervised step requires no human-labeled data. The encoder architecture and weights from pretraining are then transferred to a new network with a task-specific decoder, and the new network is trained on a small volume of labeled data. We show that pretraining on the unlabeled data allows the new network to perform bandwidth regression on the digital array data better than an equivalent network trained on the same labeled data from random initialization.
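A minimal sketch of the channel in-painting pretext task described above might look as follows. The masking ratio, toy convolutional encoder-decoder, and reconstruction loss are illustrative assumptions, since the paper's exact architecture is not given in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mask_channels(x, p=0.3):
    """Channel in-painting pretext task: zero out a random subset of array channels.
    x: (batch, channels, samples) real-valued array snapshots."""
    keep = (torch.rand(x.shape[0], x.shape[1], 1, device=x.device) > p).float()
    return x * keep, keep

class InpaintAE(nn.Module):
    """Toy 1-D convolutional encoder-decoder used only to illustrate the pretext task."""
    def __init__(self, channels=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv1d(channels, 32, 9, padding=4), nn.ReLU(),
                                 nn.Conv1d(32, 64, 9, padding=4), nn.ReLU())
        self.dec = nn.Sequential(nn.Conv1d(64, 32, 9, padding=4), nn.ReLU(),
                                 nn.Conv1d(32, channels, 9, padding=4))

    def forward(self, x):
        return self.dec(self.enc(x))

# pretraining step: reconstruct the unmasked snapshots from the masked ones
# x_masked, _ = mask_channels(x); loss = F.mse_loss(InpaintAE()(x_masked), x)
```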

    Magnetic Resonance Parameter Mapping using Self-supervised Deep Learning with Model Reinforcement

    This paper proposes a novel self-supervised learning method, RELAX-MORE, for quantitative MRI (qMRI) reconstruction. The proposed method uses an optimization algorithm to unroll a model-based qMRI reconstruction into a deep learning framework, enabling the generation of highly accurate and robust MR parameter maps under imaging acceleration. Unlike conventional deep learning methods that require a large amount of training data, RELAX-MORE is a subject-specific method that can be trained on single-subject data through self-supervised learning, making it accessible and practically applicable to many qMRI studies. Using quantitative T1 mapping as an example in brain, knee, and phantom experiments, the proposed method demonstrates excellent performance in reconstructing MR parameters, correcting imaging artifacts, removing noise, and recovering image features under imperfect imaging conditions. Compared with other state-of-the-art conventional and deep learning methods, RELAX-MORE significantly improves efficiency, accuracy, robustness, and generalizability for rapid MR parameter mapping. This work demonstrates the feasibility of a new self-supervised learning method for rapid MR parameter mapping, with great potential to enhance the clinical translation of qMRI.
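For context, conventional qMRI parameter mapping fits a signal model voxel by voxel; the sketch below shows such a baseline T1 fit from inversion-recovery data. This is the kind of model-based reconstruction that RELAX-MORE unrolls into a self-supervised deep learning framework; the simplified signal model and inversion times here are generic assumptions, not the paper's protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(TI, M0, T1):
    """Simplified magnitude inversion-recovery model |M0 * (1 - 2*exp(-TI/T1))|."""
    return np.abs(M0 * (1.0 - 2.0 * np.exp(-TI / T1)))

TIs = np.array([50.0, 200.0, 500.0, 1000.0, 2000.0, 4000.0])  # inversion times in ms (assumed)

def fit_t1(signal):
    """Voxel-wise nonlinear least-squares fit; returns T1 in ms."""
    (M0, T1), _ = curve_fit(ir_signal, TIs, signal,
                            p0=[signal.max(), 1000.0],
                            bounds=([0.0, 10.0], [np.inf, 6000.0]))
    return T1
```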

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for use in 2D scanning through the lateral alignment of several such antennas. The 2D array environment requires full decoupling of adjacent 1D antennas, which often conflicts with the DC biasing required by the LCs; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide, with radiating slots etched in the upper broad wall so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs while the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulations employing the actual properties of a commercial LC medium.
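The fixed-frequency scanning mechanism can be sketched with the usual leaky-wave relation theta ~ arcsin(beta/k0): biasing the LC changes its permittivity, which changes the guided phase constant beta and therefore the beam direction. The Python sketch below uses a simple closed-form TE10-like dispersion and purely illustrative frequency, effective-width, and permittivity values; none of these numbers come from the paper.

```python
import numpy as np

C0 = 3.0e8  # speed of light, m/s

def scan_angle_deg(f, eps_r, a_eff):
    """Leaky-wave scan angle theta ~ asin(beta/k0) for a TE10-like SIW mode,
    with beta = sqrt(eps_r*k0^2 - (pi/a_eff)^2). All values are illustrative."""
    k0 = 2.0 * np.pi * f / C0
    beta_sq = eps_r * k0**2 - (np.pi / a_eff) ** 2
    if beta_sq <= 0.0:
        return None                       # below cutoff, no propagating mode
    return np.degrees(np.arcsin(min(np.sqrt(beta_sq) / k0, 1.0)))

# Sweeping a hypothetical LC permittivity range steers the beam at a fixed 28 GHz:
for eps in (2.4, 2.8, 3.2):
    print(eps, scan_angle_deg(28e9, eps, a_eff=3.5e-3))
```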

    Is attention all you need in medical image analysis? A review

    Medical imaging is a key component of clinical diagnosis, treatment planning and clinical trial design, accounting for almost 90% of all healthcare data. Convolutional neural networks (CNNs) have achieved performance gains in medical image analysis (MIA) in recent years: they can efficiently model local pixel interactions and can be trained on small-scale MI data. The main disadvantage of typical CNN models is that they ignore global pixel relationships within images, which limits their ability to generalise to out-of-distribution data with different 'global' information. Recent progress in artificial intelligence has given rise to Transformers, which can learn global relationships from data; however, full Transformer models need to be trained on large-scale data and involve tremendous computational complexity. Attention and Transformer components (Transf/Attention), which retain the ability to model global relationships, have been proposed as lighter alternatives to full Transformers. Recently, there has been an increasing trend to cross-pollinate complementary local-global properties from CNN and Transf/Attention architectures, which has led to a new era of hybrid models. The past years have witnessed substantial growth in hybrid CNN-Transf/Attention models across diverse MIA problems. In this systematic review, we survey existing hybrid CNN-Transf/Attention models, review and unravel key architectural designs, analyse breakthroughs, and evaluate current and future opportunities as well as challenges. We also introduce a comprehensive analysis framework on generalisation opportunities of scientific and clinical impact, based on which new data-driven domain generalisation and adaptation methods can be stimulated.
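As a toy illustration of the hybrid CNN-Transf/Attention designs surveyed in the review, the following PyTorch sketch combines a small convolutional stem (local features) with a multi-head self-attention layer over the resulting spatial tokens (global context). It is a generic pattern for illustration, not any specific model from the review.

```python
import torch
import torch.nn as nn

class ConvAttnBlock(nn.Module):
    """Generic hybrid block: a convolutional stem for local features followed by
    multi-head self-attention over the resulting spatial tokens for global context."""
    def __init__(self, in_ch=1, dim=64, heads=4):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.GELU(),
                                  nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.GELU())
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                          # x: (B, in_ch, H, W) medical image
        f = self.stem(x)                           # (B, dim, H/4, W/4) local CNN features
        tokens = f.flatten(2).transpose(1, 2)      # (B, H*W/16, dim) spatial tokens
        attn_out, _ = self.attn(tokens, tokens, tokens)
        return self.norm(tokens + attn_out)        # local features enriched with global context
```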
    • 
