
    Data-driven model reduction and transfer operator approximation

    In this review paper, we will present different data-driven dimension reduction techniques for dynamical systems that are based on transfer operator theory as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out similarities and differences between methods developed independently by the dynamical systems, fluid dynamics, and molecular dynamics communities such as time-lagged independent component analysis (TICA), dynamic mode decomposition (DMD), and their respective generalizations. As a result, extensions and best practices developed for one particular method can be carried over to other related methods.
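    As a rough illustration of one family of methods the review compares, the sketch below implements exact DMD with NumPy: it fits a low-rank linear operator A with Y ≈ AX from snapshot pairs and returns its eigenvalues and modes. The function name `dmd`, the truncation `rank`, and the synthetic snapshot data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dmd(X, Y, rank):
    """Exact dynamic mode decomposition (DMD) sketch.

    X, Y: snapshot matrices whose columns are x_k and x_{k+1} (Y is X
    shifted one step in time); rank: SVD truncation level.
    Returns approximate eigenvalues and modes of the operator A with Y ~= A X.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Projected operator: A_tilde = U* Y V S^{-1}
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    # Exact DMD modes lifted back to the full state space.
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Illustrative use: snapshots of a random linear system.
rng = np.random.default_rng(0)
A_true = 0.3 * rng.standard_normal((10, 10))
x = rng.standard_normal(10)
snapshots = [x]
for _ in range(50):
    x = A_true @ x
    snapshots.append(x)
X = np.stack(snapshots[:-1], axis=1)
Y = np.stack(snapshots[1:], axis=1)
eigvals, modes = dmd(X, Y, rank=5)
```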

    An Efficient Classification Model using Fuzzy Rough Set Theory and Random Weight Neural Network

    In the area of fuzzy rough set theory (FRST), researchers have shown considerable interest in handling high-dimensional data. Rough set theory (RST) is an important tool for pre-processing data and helps to obtain a better predictive model, but in RST the discretization step may lose useful information. Fuzzy rough set theory therefore deals better with real-valued data. In this paper, an efficient technique based on FRST is presented to pre-process large-scale data sets and increase the efficacy of the predictive model. A fuzzy rough set-based feature selection (FRSFS) technique is combined with a random weight neural network (RWNN) classifier to obtain better generalization ability. Results on different datasets show that the proposed technique performs well and provides better speed and accuracy than associating FRSFS with other machine learning classifiers (i.e., KNN, naive Bayes, SVM, decision tree and backpropagation neural network).
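    For context on the classifier side, here is a minimal sketch of a random weight neural network in the ELM style: the hidden-layer weights are drawn at random and left fixed, and only the output weights are fitted by regularized least squares. The class name, the tanh activation, and the ridge parameter are assumptions for illustration; the fuzzy rough feature selection (FRSFS) step is not sketched here.

```python
import numpy as np

class RandomWeightNN:
    """Minimal random weight neural network (RWNN) classifier sketch."""

    def __init__(self, n_hidden=100, reg=1e-3, seed=0):
        self.n_hidden = n_hidden
        self.reg = reg
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        classes = np.unique(y)
        self.classes_ = classes
        # One-hot encode the labels.
        T = (y[:, None] == classes[None, :]).astype(float)
        # Random, fixed hidden-layer weights and biases.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # Ridge (regularized least squares) solution for the output weights.
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ T)
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return self.classes_[np.argmax(H @ self.beta, axis=1)]
```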

    Integrated intelligent systems for industrial automation: the challenges of Industry 4.0, information granulation and understanding agents

    The objective of this paper is to consider the challenges of the new automation paradigm Industry 4.0 and to review the state of the art in its enabling information and communication technologies, including cyber-physical systems, cloud computing, the Internet of Things and Big Data. Some ways of representing and analysing multi-dimensional, multi-faceted industrial Big Data are suggested. The fundamentals of Big Data processing using Granular Computing techniques are developed. The problem of constructing special cognitive tools to build artificial understanding agents for Integrated Intelligent Enterprises is addressed.

    Recent Developments in the Econometrics of Program Evaluation

    Many empirical questions in economics and other social sciences depend on causal effects of programs or policies. In the last two decades much research has been done on the econometric and statistical analysis of the effects of such programs or treatments. This recent theoretical literature has built on, and combined features of, earlier work in both the statistics and econometrics literatures. It has by now reached a level of maturity that makes it an important tool in many areas of empirical research in economics, including labor economics, public finance, development economics, industrial organization and other areas of empirical micro-economics. In this review we discuss some of the recent developments. We focus primarily on practical issues for empirical researchers, as well as provide a historical overview of the area and give references to more technical research. Keywords: program evaluation, causality, unconfoundedness, Rubin Causal Model, potential outcomes, instrumental variables.
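    As a toy illustration of estimation under unconfoundedness in the potential-outcomes framework, the sketch below computes an inverse-propensity-weighted average treatment effect. The simple logistic propensity model, the gradient step size, and the function name `ipw_ate` are assumptions for illustration and are not specific to this review.

```python
import numpy as np

def ipw_ate(X, treatment, outcome):
    """Average treatment effect under unconfoundedness via inverse
    propensity weighting (IPW); a textbook-style sketch.

    X: covariate matrix, treatment: 0/1 array, outcome: observed outcomes.
    """
    # Fit a logistic-regression propensity score with plain gradient ascent.
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += 0.01 * Xb.T @ (treatment - p) / len(X)
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    # Horvitz-Thompson style weighting of treated and control outcomes.
    return np.mean(treatment * outcome / p
                   - (1 - treatment) * outcome / (1 - p))
```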

    A systematic review on multi-criteria group decision-making methods based on weights: analysis and classification scheme

    Interest in group decision-making (GDM) has been increasing prominently over the last decade. Access to global databases, sophisticated sensors that can obtain multiple inputs, and complex problems requiring opinions from several experts have driven interest in data aggregation. Consequently, the field has been widely studied from several viewpoints and multiple approaches have been proposed. Nevertheless, there is a lack of a general framework. Moreover, this problem is exacerbated in the case of experts' weighting methods, one of the most widely used techniques for dealing with multiple-source aggregation. This lack of a general classification scheme, or of a guide to assist expert knowledge, leads to ambiguity or misreading for readers, who may be overwhelmed by the large amount of unclassified information currently available. To remedy this situation, a general GDM framework is presented which divides and classifies all data aggregation techniques, focusing on and expanding the classification of experts' weighting methods in terms of analysis type by carrying out an in-depth literature review. Results are not only classified but also analysed and discussed with regard to multiple characteristics, such as the MCDMs in which they are applied, the type of data used, the ideal solutions considered, or when they are applied. Furthermore, general requirements supplement this analysis, such as initial influence or component division considerations. As a result, this paper provides not only a general classification scheme and a detailed analysis of experts' weighting methods but also a road map for researchers working on GDM topics and a guide for experts who use these methods. Furthermore, six significant contributions for future research pathways are provided in the conclusions. The first author acknowledges support from the Spanish Ministry of Universities [grant number FPU18/01471]. The second and third authors acknowledge support from the Serra Hunter programme. Finally, this work was supported by the Catalan agency AGAUR through its research group support programme (2017SGR00227). This research is part of the R&D project IAQ4EDU, reference no. PID2020-117366RB-I00, funded by MCIN/AEI/10.13039/501100011033.
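    To make the idea of experts' weighting concrete, here is a small sketch of one similarity-based scheme: experts whose decision matrices lie closer to the collective average receive larger weights before aggregation. The function `similarity_based_expert_weights` and the specific similarity formula are illustrative assumptions, not a particular method surveyed in the paper.

```python
import numpy as np

def similarity_based_expert_weights(matrices):
    """Derive expert weights from the similarity of each expert's
    decision matrix to the group average, then aggregate.

    matrices: array of shape (n_experts, n_alternatives, n_criteria).
    """
    matrices = np.asarray(matrices, dtype=float)
    mean = matrices.mean(axis=0)
    # Closer to the collective opinion -> larger weight.
    distances = np.linalg.norm(matrices - mean, axis=(1, 2))
    similarities = 1.0 / (1.0 + distances)
    weights = similarities / similarities.sum()
    aggregated = np.tensordot(weights, matrices, axes=1)
    return weights, aggregated

# Three experts rating 2 alternatives on 2 criteria.
experts = [
    [[0.7, 0.6], [0.4, 0.5]],
    [[0.8, 0.5], [0.3, 0.6]],
    [[0.2, 0.9], [0.9, 0.1]],   # outlier expert receives a lower weight
]
w, agg = similarity_based_expert_weights(experts)
```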

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in the search for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
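    For a concrete view of the linear mixing model underlying much of this literature, the sketch below estimates per-pixel abundances with nonnegative least squares, approximating the sum-to-one constraint by appending a heavily weighted row (a common FCLS-style trick). It assumes SciPy is available; the endmember matrix `M`, the penalty `delta`, and the synthetic data are illustrative assumptions, not drawn from the paper.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(M, y, delta=1e3):
    """Linear mixing model abundance estimate for one pixel.

    y ~= M a with a >= 0 and sum(a) = 1, the latter enforced approximately
    by a heavily weighted sum-to-one row appended to the system.
    M: endmember signatures as columns (bands x endmembers); y: pixel spectrum.
    """
    bands, p = M.shape
    M_aug = np.vstack([M, delta * np.ones((1, p))])
    y_aug = np.append(y, delta)
    a, _ = nnls(M_aug, y_aug)
    return a

# Illustrative use with synthetic endmembers and a mixed pixel.
rng = np.random.default_rng(0)
M = np.abs(rng.standard_normal((50, 3)))        # 50 bands, 3 endmembers
true_a = np.array([0.6, 0.3, 0.1])
y = M @ true_a + 0.01 * rng.standard_normal(50)
a_hat = unmix_pixel(M, y)
```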