
    In-situ crack and keyhole pore detection in laser directed energy deposition through acoustic signal and deep learning

    Full text link
    Cracks and keyhole pores are detrimental defects in alloys produced by laser directed energy deposition (LDED). Laser-material interaction sound may hold information about underlying complex physical events such as crack propagation and pore formation. However, due to the noisy environment and intricate signal content, acoustic-based monitoring in LDED has received little attention. This paper proposes a novel acoustic-based in-situ defect detection strategy for LDED. The key contribution of this study is an in-situ acoustic signal denoising, feature extraction, and sound classification pipeline that incorporates convolutional neural networks (CNN) for online defect prediction. Microscope images are used to identify the locations of cracks and keyhole pores within a part. The defect locations are spatiotemporally registered with the acoustic signal. Various acoustic features corresponding to defect-free regions, cracks, and keyhole pores are extracted and analysed in time-domain, frequency-domain, and time-frequency representations. The CNN model is trained to predict defect occurrences using the Mel-Frequency Cepstral Coefficients (MFCCs) of the laser-material interaction sound. The CNN model is compared to various classic machine learning models trained on the denoised and raw acoustic datasets. The validation results show that the CNN model trained on the denoised dataset outperforms the others, with the highest overall accuracy (89%), keyhole pore prediction accuracy (93%), and AUC-ROC score (98%). Furthermore, the trained CNN model can be deployed into an in-house developed software platform for online quality monitoring. The proposed strategy is the first study to use acoustic signals with deep learning for in-situ defect detection in the LDED process.
    Comment: 36 Pages, 16 Figures, accepted at journal Additive Manufacturing
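The MFCC features that feed a CNN like the one described above come from standard signal-processing steps. The sketch below is a minimal numpy-only MFCC implementation (framing, power spectrum, mel filterbank, DCT); the sampling rate, frame sizes, and coefficient counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def mfcc(signal, sr=44100, n_fft=1024, hop=512, n_mels=26, n_mfcc=13):
    """Compute a simple MFCC matrix (n_mfcc x n_frames) from a 1-D signal."""
    # Frame the signal and apply a Hann window
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hanning(n_fft)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Triangular mel filterbank
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_mel = np.log(power @ fb.T + 1e-10)
    # DCT-II over the mel bands yields the cepstral coefficients
    k = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * k + 1) / (2 * n_mels)))
    return (log_mel @ dct.T).T  # shape (n_mfcc, n_frames)
```

The resulting coefficient-by-frame matrix is the image-like input a CNN classifier would consume.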

    Passive Radio Frequency-based 3D Indoor Positioning System via Ensemble Learning

    Full text link
    Passive radio frequency (PRF)-based indoor positioning systems (IPS) have attracted researchers' attention due to their low price, easy and customizable configuration, and non-invasive design. This paper proposes a PRF-based three-dimensional (3D) indoor positioning system (PIPS), which is able to use signals of opportunity (SoOP) for positioning and also capture a scenario signature. PIPS passively monitors SoOPs containing scenario signatures through a single receiver. Moreover, PIPS leverages the Dynamic Data Driven Applications System (DDDAS) framework to devise and customize the sampling frequency, enabling the system to use the most impacted frequency band as the rated frequency band. Various regression methods within three ensemble learning strategies are used to train and predict the receiver position. The PRF spectrum of 60 positions is collected in the experimental scenario, and three criteria are applied to evaluate the performance of PIPS. Experimental results show that the proposed PIPS possesses the advantages of high accuracy, configurability, and robustness.
    Comment: DDDAS 202
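Ensemble position regression of the kind described can be illustrated with a simple bagging scheme: each base regressor is fit on a bootstrap resample of the spectrum features, and predictions are averaged. Everything below (the synthetic data, the closed-form ridge base learner, feature dimensions) is a hypothetical stand-in, not the paper's setup; only the 60-position count echoes the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 60 positions, 16 spectrum features per sample
X = rng.normal(size=(60, 16))
true_w = rng.normal(size=(16, 3))
y = X @ true_w + 0.01 * rng.normal(size=(60, 3))   # 3-D coordinates

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Bagging: train each base regressor on a bootstrap resample, average predictions
n_models = 10
weights = []
for _ in range(n_models):
    idx = rng.integers(0, len(X), size=len(X))
    weights.append(fit_ridge(X[idx], y[idx]))
pred = np.mean([X @ w for w in weights], axis=0)
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

Averaging over bootstrap models reduces the variance of any single regressor, which is the core idea behind the bagging family of ensemble strategies.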

    Neural Architecture Search: Insights from 1000 Papers

    Full text link
    In the past decade, advances in deep learning have resulted in breakthroughs in a variety of areas, including computer vision, natural language understanding, speech recognition, and reinforcement learning. Specialized, high-performing neural architectures are crucial to the success of deep learning in these areas. Neural architecture search (NAS), the process of automating the design of neural architectures for a given task, is an inevitable next step in automating machine learning and has already outpaced the best human-designed architectures on many tasks. In the past few years, research in NAS has been progressing rapidly, with over 1000 papers released since 2020 (Deng and Lindauer, 2021). In this survey, we provide an organized and comprehensive guide to neural architecture search. We give a taxonomy of search spaces, algorithms, and speedup techniques, and we discuss resources such as benchmarks, best practices, other surveys, and open-source libraries.

    High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent

    Full text link
    In this paper, we study differentially private empirical risk minimization (DP-ERM). It has been shown that the worst-case utility of DP-ERM degrades polynomially as the dimension increases. This is a major obstacle to privately learning large machine learning models. In high dimension, it is common for some of a model's parameters to carry more information than others. To exploit this, we propose a differentially private greedy coordinate descent (DP-GCD) algorithm. At each iteration, DP-GCD privately performs a coordinate-wise gradient step along the (approximately) greatest entry of the gradient. We show theoretically that DP-GCD can achieve a logarithmic dependence on the dimension for a wide range of problems by naturally exploiting their structural properties (such as quasi-sparse solutions). We illustrate this behavior numerically, both on synthetic and real datasets.
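The greedy coordinate step described above can be sketched in a few lines. This is a generic illustration on a quasi-sparse quadratic, not the paper's algorithm or privacy calibration: noise scales, step size, and the report-noisy-max-style selection are illustrative assumptions.

```python
import numpy as np

def dp_gcd(grad_fn, x0, steps=100, lr=0.1, noise_scale=0.01, rng=None):
    """Differentially private greedy coordinate descent (illustrative sketch).

    At each step, Laplace noise is added to the gradient, the coordinate with
    the largest noisy magnitude is selected, and only that coordinate is
    updated with a noisy gradient step.
    """
    rng = rng or np.random.default_rng(0)
    x = x0.astype(float).copy()
    for _ in range(steps):
        g = grad_fn(x)
        noisy = g + rng.laplace(scale=noise_scale, size=g.shape)
        j = int(np.argmax(np.abs(noisy)))                      # private selection
        x[j] -= lr * (g[j] + rng.laplace(scale=noise_scale))   # private update
    return x

# Quasi-sparse quadratic: only a few coordinates matter
d = 50
w = np.zeros(d); w[:3] = [5.0, -3.0, 2.0]
grad = lambda x: x - w                   # gradient of 0.5 * ||x - w||^2
x_hat = dp_gcd(grad, np.zeros(d), steps=300, lr=0.5, noise_scale=0.01)
```

Because updates concentrate on the few informative coordinates, the noise cost scales with the number of useful directions rather than the full dimension, which is the intuition behind the logarithmic dependence claimed above.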

    Vegetation responses to variations in climate: A combined ordinary differential equation and sequential Monte Carlo estimation approach

    Get PDF
    Vegetation responses to variation in climate are a current research priority in the context of accelerated shifts generated by climate change. However, the interactions between environmental and biological factors still represent one of the largest uncertainties in projections of future scenarios, since the relationship between drivers and ecosystem responses is complex and nonlinear. We aimed to develop a model to study the dynamic response of vegetation primary productivity to temporal variations in climatic conditions as measured by rainfall, temperature and radiation. We propose a new way to estimate the vegetation response to climate via a non-autonomous version of a classical growth curve, with time-varying growth rate and carrying capacity parameters driven by climate variables; a Sequential Monte Carlo estimation accounts for complexities in the climate-vegetation relationship while minimizing the number of parameters. The model was applied to six key sites identified in a previous study, consisting of different arid and semiarid rangelands from North Patagonia, Argentina. For each site, we selected the time series of MODIS NDVI and climate data from the ERA5 Copernicus hourly reanalysis from 2000 to 2021. After calculating the time series of the a posteriori distribution of parameters, we analyzed the explanatory capacity of the model in terms of the linear coefficient of determination and the variation in the parameter distributions. Results showed that most rangelands recorded changes over time in their sensitivity to climatic factors, but vegetation responses were heterogeneous and influenced by different drivers.
    Differences in this climate-vegetation relationship were recorded among different cases: (1) a marginal and decreasing sensitivity to temperature and radiation, respectively, but a high sensitivity to water availability; (2) high and increasing sensitivity to temperature and water availability, respectively; and (3) a case with an abrupt shift in vegetation dynamics driven by a progressively decreasing sensitivity to water availability, without any changes in the sensitivity either to temperature or radiation. Finally, we also found that the time scale over which the ecosystem integrated rainfall, in terms of the width of the window function used to convolve the rainfall series into a water availability variable, was also variable in time. This approach allows us to estimate the degree of connection between ecosystem productivity and climatic variables. The capacity of the model to identify changes over time in the vegetation-climate relationship might inform decision-makers about ecological transitions and the differential impact of climatic drivers on ecosystems.
    Affiliations: Bruzzone, Octavio Augusto (INTA, Estación Experimental Agropecuaria Bariloche; CONICET, Instituto de Investigaciones Forestales y Agropecuarias Bariloche, Argentina); Perri, Daiana Vanesa (INTA, Estación Experimental Agropecuaria Bariloche, Área de Recursos Naturales; CONICET, Instituto de Investigaciones Forestales y Agropecuarias Bariloche, Argentina); Easdale, Marcos Horacio (INTA, Estación Experimental Agropecuaria Bariloche, Área de Recursos Naturales; CONICET, Instituto de Investigaciones Forestales y Agropecuarias Bariloche, Argentina)
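A sequential Monte Carlo treatment of a growth model with time-varying parameters, as in the approach above, can be sketched with a bootstrap particle filter on synthetic data. This is a generic toy logistic series, not the authors' model; all constants (noise levels, particle count, the drifting growth rate) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic NDVI-like series from logistic growth with a drifting growth
# rate r_t (a stand-in for climate forcing); K is the carrying capacity
T, K = 200, 1.0
r_true = 0.3 + 0.2 * np.sin(np.linspace(0, 4 * np.pi, T))
x = np.empty(T); x[0] = 0.1
for t in range(1, T):
    x[t] = x[t-1] + r_true[t] * x[t-1] * (1 - x[t-1] / K)
obs = x + 0.02 * rng.normal(size=T)

# Bootstrap particle filter: each particle carries (state, growth rate),
# with a random walk on the rate so it can vary in time
N = 2000
state = np.full(N, 0.1)
rate = rng.uniform(0.0, 1.0, N)
s_est = np.empty(T)
for t in range(T):
    state = state + rate * state * (1 - state / K) + 0.005 * rng.normal(size=N)
    rate = np.clip(rate + 0.02 * rng.normal(size=N), 0.0, 1.0)
    # Importance weights from the Gaussian observation likelihood
    w = np.exp(-0.5 * ((obs[t] - state) / 0.02) ** 2) + 1e-12
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)   # multinomial resampling
    state, rate = state[idx], rate[idx]
    s_est[t] = state.mean()
```

The surviving particles at each step form an empirical a posteriori distribution over both the productivity state and the time-varying growth-rate parameter.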

    Trainable Variational Quantum-Multiblock ADMM Algorithm for Generation Scheduling

    Full text link
    The advent of quantum computing can potentially revolutionize how complex problems are solved. This paper proposes a two-loop quantum-classical solution algorithm for generation scheduling by infusing quantum computing, machine learning, and distributed optimization. The aim is to facilitate employing noisy near-term quantum machines with a limited number of qubits to solve practical power system optimization problems such as generation scheduling. The outer loop is a 3-block quantum alternating direction method of multipliers (QADMM) algorithm that decomposes the generation scheduling problem into three subproblems, including one quadratically unconstrained binary optimization (QUBO) and two non-QUBOs. The inner loop is a trainable quantum approximate optimization algorithm (T-QAOA) for solving QUBO on a quantum computer. The proposed T-QAOA translates interactions of quantum-classical machines as sequential information and uses a recurrent neural network to estimate variational parameters of the quantum circuit with a proper sampling technique. T-QAOA determines the QUBO solution in a few quantum-learner iterations instead of the hundreds of iterations needed by a quantum-classical solver. The outer 3-block ADMM coordinates QUBO and non-QUBO solutions to obtain the solution to the original problem. The conditions under which the proposed QADMM is guaranteed to converge are discussed. Two mathematical and three generation scheduling cases are studied. Analyses performed on quantum simulators and classical computers show the effectiveness of the proposed algorithm. The advantages of T-QAOA are discussed and numerically compared with QAOA, which uses a stochastic gradient descent-based optimizer.
    Comment: 11 pages

    Plateau-reduced Differentiable Path Tracing

    Full text link
    Current differentiable renderers provide light transport gradients with respect to arbitrary scene parameters. However, the mere existence of these gradients does not guarantee useful update steps in an optimization. Instead, inverse rendering might not converge due to inherent plateaus, i.e., regions of zero gradient, in the objective function. We propose to alleviate this by convolving the high-dimensional rendering function that maps scene parameters to images with an additional kernel that blurs the parameter space. We describe two Monte Carlo estimators to compute plateau-free gradients efficiently, i.e., with low variance, and show that these translate into net gains in optimization error and runtime performance. Our approach is a straightforward extension to both black-box and differentiable renderers and enables optimization of problems with intricate light transport, such as caustics or global illumination, that existing differentiable renderers do not converge on.
    Comment: Accepted to CVPR 2023. Project page and interactive demos at https://mfischer-ucl.github.io/prdpt
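The plateau-smoothing idea above can be demonstrated in one dimension. The sketch below is a generic score-function Monte Carlo estimator on a toy step objective, not the paper's renderer-specific estimators; the kernel width, step size, and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_loss(x):
    """A plateaued objective: its gradient is zero almost everywhere."""
    return (np.abs(x) > 1.0).astype(float)

def smoothed_grad(f, x, sigma=0.5, n=4000):
    """Monte Carlo gradient of the objective convolved with a Gaussian kernel.

    Blurring the parameter space turns the plateau into a slope; the
    score-function identity gives
    d/dx E[f(x + sigma * eps)] = E[f(x + sigma * eps) * eps] / sigma.
    """
    eps = rng.normal(size=n)
    return float(np.mean(f(x + sigma * eps) * eps) / sigma)

# Plain gradient descent stalls on the plateau; the smoothed gradient does not
x = 2.0
for _ in range(300):
    x -= 0.2 * smoothed_grad(step_loss, x)
```

Starting at x = 2.0, where the raw gradient is exactly zero, descent on the smoothed objective still reaches the low-loss region inside [-1, 1].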

    Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data

    Full text link
    We propose Compressed Vertical Federated Learning (C-VFL) for communication-efficient training on vertically partitioned data. In C-VFL, a server and multiple parties collaboratively train a model on their respective features, utilizing several local iterations and periodically sharing compressed intermediate results. Our work provides the first theoretical analysis of the effect message compression has on distributed training over vertically partitioned data. We prove convergence of non-convex objectives at a rate of O(1/√T) when the compression error is bounded over the course of training. We provide specific requirements for convergence with common compression techniques, such as quantization and top-k sparsification. Finally, we experimentally show compression can reduce communication by over 90% without a significant decrease in accuracy over VFL without compression.
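The top-k sparsification mentioned above is one of the standard compression operators with bounded error. A minimal sketch (generic operator, not C-VFL's implementation):

```python
import numpy as np

def top_k_sparsify(v, k):
    """Top-k compression: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

v = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
c = top_k_sparsify(v, 2)   # only -3.0 and 2.0 survive
```

The standard bound for this operator, ||v - C(v)||² ≤ (1 - k/d)||v||², is the kind of bounded compression error the convergence analysis above relies on.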

    Latent Dirichlet Allocation Method-based Nowcasting Approach for Prediction of Silver Price

    Get PDF
    Silver is a metal that offers significant value to both investors and companies. The purpose of this study is to estimate the price of silver. The estimation incorporates the frequency of Google Trends searches for words that affect the silver price, with the aim of obtaining a more accurate estimate. First, using the Latent Dirichlet Allocation (LDA) method, the keywords to be analyzed in Google Trends were collected from various articles on the Internet. Mining data from Google Trends combined with the information obtained by LDA is the new approach this study took to predict the price of silver. No study has been found in the literature that has adopted this approach to estimate the price of silver. The estimation was carried out with Random Forest Regression, Gaussian Process Regression, Support Vector Machine, Regression Trees and Artificial Neural Networks methods. In addition, ARIMA, one of the traditional methods widely used in time series analysis, was also used to benchmark the accuracy of the methodology. The best MSE was obtained as 0.000227131 ± 0.0000235205 by the Regression Trees method. This score indicates that estimating the price of silver from Google Trends data selected via the LDA method is a valid technique.

    Assessing performance of artificial neural networks and re-sampling techniques for healthcare datasets.

    Get PDF
    Re-sampling methods for class imbalance problems have been shown to improve classification accuracy by mitigating the bias introduced by differences in class size. However, a model that uses a specific re-sampling technique prior to artificial neural network (ANN) training may not be suitable for classifying varied datasets from the healthcare industry. Five healthcare-related datasets were used across three re-sampling conditions: under-sampling, over-sampling and combi-sampling. Within each condition, different algorithmic approaches were applied to the dataset and the results were statistically analysed for a significant difference in ANN performance. In the combi-sampling condition, four out of the five datasets did not show consistency in the optimal re-sampling technique between the f1-score and Area Under the Receiver Operating Characteristic Curve evaluation methods. Contrarily, in the over-sampling and under-sampling conditions, all five datasets put forward the same optimal algorithmic approach across performance evaluation methods. Furthermore, the optimal combi-sampling technique (under-, over-sampling and convergence point) was found to be consistent across evaluation measures in only two of the five datasets. This study exemplifies how discrete ANN performances on datasets from the same industry can arise in two ways: the same re-sampling technique can generate varying ANN performance on different datasets, and different re-sampling techniques can generate varying ANN performance on the same dataset.
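The simplest of the re-sampling families compared above is random over-sampling. A minimal numpy sketch (generic technique, not the study's exact pre-processing):

```python
import numpy as np

def random_oversample(X, y, rng=None):
    """Duplicate minority-class samples until all classes are balanced."""
    rng = rng or np.random.default_rng(0)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xs, ys = [], []
    for c, n in zip(classes, counts):
        idx = np.flatnonzero(y == c)
        extra = rng.choice(idx, size=n_max - n, replace=True)
        keep = np.concatenate([idx, extra])
        Xs.append(X[keep])
        ys.append(y[keep])
    return np.concatenate(Xs), np.concatenate(ys)

# Imbalanced toy data: 8 majority samples, 2 minority samples
X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)
Xb, yb = random_oversample(X, y)
```

Under-sampling instead discards majority samples, and combi-sampling mixes both; all three were compared as pre-processing steps before ANN training in the study above.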