
    A factor augmented vector autoregressive model and a stacked de-noising auto-encoders forecast combination to predict the price of oil

    The following dissertation aims to show the benefits of a forecast combination between an econometric and a deep learning approach. On one side, a Factor Augmented Vector Autoregressive model (FAVAR) with naming variables identification following Stock and Watson (2016); on the other, a Stacked De-noising Auto-Encoder with Bagging (SDAE-B) following Zhao, Li and Yu (2017). From January 2010 to September 2018, 281 monthly series are used to predict the price of West Texas Intermediate (WTI) crude oil. Model performance is analysed with the Root Mean Squared Error (RMSE), the Mean Absolute Percentage Error (MAPE) and Directional Accuracy (DA). The combination benefits both from the SDAE-B's high accuracy and from the FAVAR's interpretability through impulse response functions (IRFs) and forecast error variance decomposition (FEVD).
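    Since the abstract scores forecasts with RMSE, MAPE and directional accuracy, a minimal sketch of how such a comparison could be run is shown below; the series, the noise levels and the equal-weight combination are illustrative assumptions, not the dissertation's data or weights.

```python
# Score two stand-in forecasts and their combination with the three
# metrics named in the abstract: RMSE, MAPE and directional accuracy.
import numpy as np

def rmse(actual, forecast):
    return np.sqrt(np.mean((actual - forecast) ** 2))

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100

def directional_accuracy(actual, forecast):
    # Share of periods in which the forecast gets the sign of the change right.
    return np.mean(np.sign(np.diff(actual)) == np.sign(np.diff(forecast)))

rng = np.random.default_rng(0)
actual = 60 + np.cumsum(rng.normal(0, 2, 100))   # hypothetical WTI path
favar = actual + rng.normal(0, 3.0, 100)          # interpretable but noisier
sdae_b = actual + rng.normal(0, 1.5, 100)         # more accurate black box

combo = 0.5 * favar + 0.5 * sdae_b                # equal-weight combination
for name, f in [("FAVAR", favar), ("SDAE-B", sdae_b), ("Combo", combo)]:
    print(name, rmse(actual, f), mape(actual, f), directional_accuracy(actual, f))
```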

    Compression Boosts Differentially Private Federated Learning

    Federated Learning allows distributed entities to train a common model collaboratively without sharing their own data. Although it prevents data collection and aggregation by exchanging only parameter updates, it remains vulnerable to various inference and reconstruction attacks in which a malicious entity can learn private information about the participants' training data from the captured gradients. Differential Privacy provides theoretically sound privacy guarantees against such inference attacks by adding noise to the exchanged update vectors. However, the added noise is proportional to the model size, which can be very large with modern neural networks, and this can result in poor model quality. In this paper, compressive sensing is used to reduce the model size and hence increase model quality without sacrificing privacy. We show experimentally, using two datasets, that our privacy-preserving proposal can reduce communication costs by up to 95% with only a negligible performance penalty compared to traditional non-private federated learning schemes.
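    A minimal sketch of the core idea, under simplified assumptions (a random Gaussian measurement matrix and a plain Gaussian mechanism; the paper's actual scheme and noise calibration may differ):

```python
# Compress an update vector with a random measurement matrix, then apply
# the Gaussian mechanism to the much shorter compressed vector, so the
# privacy noise is paid on m coordinates instead of d.
import numpy as np

def compress_and_privatize(update, m, clip_norm=1.0, noise_mult=1.0, seed=0):
    """Project a d-dim update down to m dims, clip, and add Gaussian noise."""
    d = update.shape[0]
    # Compressive-sensing-style random measurement matrix; the seed is shared
    # with the server so the same matrix can be used for reconstruction.
    A = np.random.default_rng(seed).normal(0.0, 1.0 / np.sqrt(m), size=(m, d))
    z = A @ update
    # Clip so each participant's contribution has bounded L2 sensitivity.
    z *= min(1.0, clip_norm / (np.linalg.norm(z) + 1e-12))
    # Independent (non-shared) randomness for the privacy noise itself.
    noise = np.random.default_rng().normal(0.0, noise_mult * clip_norm, size=m)
    return z + noise

grad = np.random.default_rng(1).normal(size=10_000)  # toy model update
print(compress_and_privatize(grad, m=500).shape)     # (500,) -- ~95% smaller
```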

    Soft-Sensor for Class Prediction of the Percentage of Pentanes in Butane at a Debutanizer Column

    Refineries are complex industrial systems that transform crude oil into more valuable subproducts. Thanks to advances in sensors, easily measurable variables are continuously monitored, and several data-driven soft-sensors have been proposed to control the distillation process and the quality of the resulting subproducts. However, data preprocessing and soft-sensor modelling are still complex and time-consuming tasks that are expected to be automated in the context of Industry 4.0. Although several automated machine learning (autoML) approaches have recently been proposed, these rely on model configuration and hyper-parameter optimisation alone. This paper advances the state-of-the-art by proposing an autoML approach that selects, among different normalisation and feature weighting preprocessing techniques and various well-known Machine Learning (ML) algorithms, the best configuration to create a reliable soft-sensor for the problem at hand. As shown in this research, each normalisation method transforms a given dataset differently, which ultimately affects the ML algorithm's performance. The presented autoML approach therefore treats feature preprocessing, alongside algorithm selection and configuration, as a fundamental stage of the methodology. The proposed approach is applied to real data from a refinery in the Basque Country to create a soft-sensor that complements the operators' decision-making: based on the operational variables of a distillation process, it detects 400 min in advance, with 98.925% precision, whether the resulting product will fail to reach the quality standards. This research received no external funding.
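    A minimal scikit-learn stand-in for the joint search the abstract describes, over normalisation methods and ML algorithms; the synthetic dataset, the candidate lists and the precision scoring are illustrative assumptions, not the authors' system:

```python
# Search jointly over normalisation methods and ML algorithms, keeping the
# pipeline with the best cross-validated precision as the soft-sensor.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, RobustScaler, StandardScaler

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

scalers = [MinMaxScaler(), StandardScaler(), RobustScaler()]
models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(random_state=0)]

# Each scaler transforms the data differently, so it is searched over
# together with the model rather than fixed in advance.
best = max(
    (Pipeline([("scale", s), ("model", m)]) for s in scalers for m in models),
    key=lambda p: cross_val_score(p, X, y, cv=5, scoring="precision").mean(),
)
print(best)
```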

    Wavelet Theory

    The wavelet is a powerful mathematical tool that plays an important role in science and technology. This book looks at some of the most creative and popular applications of wavelets, including biomedical signal processing, image processing, communication signal processing, the Internet of Things (IoT), acoustical signal processing, financial market data analysis, energy and power management, and COVID-19 pandemic measurements and calculations. The editor's personal interest lies in applying the wavelet transform to identify time-domain changes in signals and their corresponding frequency components, and in improving power amplifier behavior.
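    As a small illustration of the editor's stated interest, the following sketch (not from the book) uses the PyWavelets library to locate a time-domain change in a signal and inspect its frequency content via a multilevel discrete wavelet decomposition:

```python
# A high-frequency component switches on halfway through the signal; the
# fine-scale detail coefficients of the wavelet decomposition light up there.
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 8 * t)
signal[512:] += np.sin(2 * np.pi * 64 * t[512:])  # change appears mid-signal

# coeffs = [cA5, cD5, cD4, cD3, cD2, cD1]: approximation, then details
# from the coarsest scale (level 5) down to the finest (level 1).
coeffs = pywt.wavedec(signal, "db4", level=5)
for band, d in zip(["cD5", "cD4", "cD3", "cD2", "cD1"], coeffs[1:]):
    print(f"{band}: max |detail| = {np.max(np.abs(d)):.3f}")
```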

    Development and application of molecular and computational tools to image copper in cells

    Copper is a trace element that is essential for many biological processes. A deficiency or excess of copper(I) ions, copper's main oxidation state in the cellular environment, is increasingly linked to the development of neurodegenerative diseases such as Parkinson's and Alzheimer's disease (PD and AD). The regulatory mechanisms for copper(I) are under active investigation, and lysosomes, best known as cellular "incinerators", have been found to play an important role in the trafficking of copper inside the cell. It is therefore important to develop reliable experimental methods to detect, monitor and visualise this metal in cells, and to develop tools that improve the data quality of microscopy recordings. This would enable the detailed exploration of cellular processes related to copper trafficking through lysosomes. The research presented in this thesis aimed to develop chemical and computational tools to investigate concentration changes of copper(I) in cells (particularly in lysosomes), and it presents a preliminary case study that uses the microscopy image quality enhancement tools developed here to investigate lysosomal mobility changes upon treatment of cells with different PD or AD drugs.

    Chapter I first reports the synthesis of a previously reported copper(I) probe (CS3). The photophysical properties of this probe and its functionality in different cell lines were tested; this copper(I) sensor predominantly localized in lipid droplets, and its photostability and quantum yield proved insufficient for long-term investigations of cellular copper trafficking. Based on these insights, a new copper(I)-selective fluorescent probe (FLCS1) was designed, synthesized and characterized, which showed superior photophysical properties (photostability, quantum yield) over CS3. The probe was selective for copper(I) over other physiologically relevant metals and showed strong colocalization with lysosomes in SH-SY5Y cells. This probe was then used to study and monitor lysosomal copper(I) levels via fluorescence lifetime imaging microscopy (FLIM); to the best of my knowledge this is the first copper(I) probe based on emission lifetime.

    Chapter II explores different computational deep learning approaches for improving the quality of recorded microscopy images. In total, two existing networks were tested (fNET, CARE) and four new networks were implemented, tested and benchmarked for their ability to improve the signal-to-noise ratio, upscale the image size (GMFN, SRFBN-S, Zooming SlowMo) and interpolate image sequences (DAIN, Zooming SlowMo) in the z- and t-dimensions of multidimensional simulated and real-world datasets. The best-performing networks of each category were then tested in combination by applying them sequentially to a low signal-to-noise ratio, low-resolution, low-frame-rate image sequence, establishing an image enhancement workstream for investigating lysosomal mobility. Additionally, the new frame interpolation networks were implemented in user-friendly Google Colab notebooks and made publicly available to the scientific community on the ZeroCostDL4Mic platform.

    Chapter III provides a preliminary case study in which the newly developed fluorescent copper(I) probe, in combination with the computational enhancement algorithms, was used to investigate the effects of five potential Parkinson's disease drugs (rapamycin, digoxin, curcumin, trehalose, bafilomycin A1) on the mobility of lysosomes in live cells.
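    To illustrate how the Chapter II stages compose, here is a minimal stand-in pipeline in which classical scikit-image operations replace the trained networks (wavelet denoising for the CARE/fNET role, rescaling for GMFN/SRFBN-S, frame blending for DAIN/Zooming SlowMo). This shows only the order of operations, not the thesis implementation:

```python
# Sequential enhancement of an image stack: denoise -> upscale -> interpolate
# in t. Classical operations stand in for the trained networks.
import numpy as np
from skimage.restoration import denoise_wavelet
from skimage.transform import rescale

def enhance_sequence(frames, scale=2):
    # Stages 1 and 2, per frame: denoise, then upscale.
    enhanced = [rescale(denoise_wavelet(f), scale) for f in frames]
    # Stage 3: insert a blended frame between neighbours as a naive
    # proxy for learned frame interpolation in the t-dimension.
    out = []
    for a, b in zip(enhanced, enhanced[1:]):
        out += [a, 0.5 * (a + b)]
    out.append(enhanced[-1])
    return np.stack(out)

movie = np.random.default_rng(0).random((4, 64, 64))  # toy low-SNR stack
print(enhance_sequence(movie).shape)                  # (7, 128, 128)
```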

    Machine learning-based algorithms to knowledge extraction from time series data: A review

    To predict the future behavior of a system, we can exploit the information collected in the past, trying to identify recurring structures in what has happened in order to predict what could happen, if the same structures repeat themselves in the future as well. A time series is a temporal sequence of numerical values of a measurable variable observed in the past. The values are sampled at equidistant time intervals, according to an appropriate granular frequency such as the day, week, or month, and are measured in physical units. In machine learning-based algorithms, the information underlying the knowledge is extracted from the data themselves, which are explored and analyzed in search of recurring patterns or to discover hidden causal associations or relationships. The prediction model extracts knowledge through an inductive process: the input is the data and, possibly, a first example of the expected output, and the machine then learns the procedure to follow to obtain the same result. This paper reviews the most recent work that has used machine learning-based techniques to extract knowledge from time series data.
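    A minimal sketch of the inductive setup the review describes, framing a univariate series as (input window, next value) examples from which a model can learn; the toy series and the window length are assumptions:

```python
# Frame a univariate time series as supervised (X, y) pairs: each example
# maps a window of past values to the value that follows it.
import numpy as np

def sliding_windows(series, window):
    """Return (X, y) where X[i] is a lag window and y[i] the next value."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

series = np.sin(np.linspace(0, 20, 200))  # toy monthly-style series
X, y = sliding_windows(series, window=12)  # one "year" of lags per example
print(X.shape, y.shape)                    # (188, 12) (188,)
```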

    Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement

    Volume measurement plays an important role in the production and processing of food products. Various methods based on 3D reconstruction have been proposed to measure the volume of food products with irregular shapes. However, 3D reconstruction comes at a high computational cost, and some volume measurement methods based on it have low accuracy. An alternative is the Monte Carlo method, which measures volume using random points: it only requires information on whether a random point falls inside or outside the object, and needs no 3D reconstruction. This paper proposes volume measurement for irregularly shaped food products using a computer vision system, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured with five cameras and processed into binary images. Monte Carlo integration with heuristic adjustment was then performed to measure the volume from the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
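    A minimal sketch of the Monte Carlo idea: sample random points in a bounding box and estimate the volume from the fraction that falls inside the object. In the paper the inside/outside test comes from the five binary silhouette images; here an analytic sphere stands in for that test, as an assumption for brevity:

```python
# Monte Carlo volume estimation: volume ~= (inside fraction) * (box volume).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
box = 2.0  # side length of a bounding cube centred on the object

pts = rng.uniform(-box / 2, box / 2, size=(n, 3))
# Stand-in inside/outside test: a sphere of radius 0.5. In the paper this
# test checks each point's projection against the five binary silhouettes.
inside = np.sum(pts**2, axis=1) <= 0.5**2
volume = inside.mean() * box**3

print(volume, 4 / 3 * np.pi * 0.5**3)  # estimate vs. true value ~0.5236
```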

    Untangling hotel industry’s inefficiency: An SFA approach applied to a renowned Portuguese hotel chain

    The present paper explores the technical efficiency of four hotels from the Teixeira Duarte Group, a renowned Portuguese hotel chain. An efficiency ranking of these four hotel units located in Portugal is established using Stochastic Frontier Analysis. This methodology makes it possible to discriminate between measurement error and systematic inefficiencies in the estimation process, enabling investigation of the main causes of inefficiency. Several suggestions for efficiency improvement are put forward for each hotel studied.
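    For reference, a minimal sketch of the stochastic frontier model underlying such an analysis: output = Xβ + v − u, where v is symmetric measurement error and u ≥ 0 is systematic inefficiency. The half-normal composed-error log-likelihood of Aigner, Lovell and Schmidt (1977) is maximised on simulated data (not the paper's dataset or code):

```python
# Maximum-likelihood estimation of a half-normal stochastic production
# frontier: y = X @ beta + v - u, v ~ N(0, sv^2), u ~ |N(0, su^2)|.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # inputs (in logs)
y = X @ [1.0, 0.7] + rng.normal(0, 0.2, n) - np.abs(rng.normal(0, 0.4, n))

def neg_loglik(theta):
    beta, sv, su = theta[:2], np.exp(theta[2]), np.exp(theta[3])
    sigma, lam = np.hypot(sv, su), su / sv
    eps = y - X @ beta
    # ALS (1977) density of the composed error eps = v - u.
    return -np.sum(np.log(2 / sigma) + norm.logpdf(eps / sigma)
                   + norm.logcdf(-eps * lam / sigma))

res = minimize(neg_loglik, x0=[0.0, 0.0, -1.0, -1.0], method="BFGS")
print(res.x[:2], np.exp(res.x[2:]))  # beta estimates; (sigma_v, sigma_u)
```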