
    Computational Intelligence Based Electronic Healthcare Data Analytics Using Feature Selection with Classification by Deep Learning Architecture

    Electronic health records (EHRs) are a source of big data that offers a wealth of clinical patient health data. However, because clinical notes are free-form text whose writing formats and styles vary greatly among records, text data from EHRs, such as discharge notes, present analysis challenges. This research proposes a novel technique for electronic healthcare data analysis based on feature selection and classification using deep learning (DL) methods. The input EHR data are first processed for dimensionality reduction and noise removal. Dimensionality reduction is a popular pre-processing method for high-dimensional EHR (HD-EHR) data: it aims to minimize the number of representational features while enhancing the effectiveness of subsequent data analysis, such as classification. Features of the processed data are then selected using weighted-curvature-based feature selection with a support vector machine, and the selected deep features are classified using sparse-encoder transfer learning. Experimental analysis was carried out on various EHR datasets, yielding an accuracy of 96%, precision of 92%, recall of 77%, F1-score of 72%, and MAP of 65%
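The pipeline the abstract describes (dimensionality reduction, SVM-guided feature selection, then a learned classifier) can be sketched with generic stand-ins. The sketch below is illustrative only: it substitutes standard scikit-learn components (PCA for the reduction step, `SelectFromModel` over a linear SVM for the weighted feature selection, and a small MLP in place of the sparse-encoder transfer-learning classifier), and uses synthetic data rather than EHR text features.

```python
# Illustrative sketch of the described pipeline; all components are
# stand-ins, not the authors' implementation.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for high-dimensional EHR feature vectors.
X, y = make_classification(n_samples=500, n_features=100,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("reduce", PCA(n_components=50)),            # dimensionality reduction
    ("select", SelectFromModel(                  # keep features with large
        LinearSVC(C=0.1, dual=False))),          # SVM weight magnitudes
    ("clf", MLPClassifier(hidden_layer_sizes=(32,),
                          max_iter=500, random_state=0)),
])
pipe.fit(X_tr, y_tr)
print(round(accuracy_score(y_te, pipe.predict(X_te)), 2))
```

Chaining the steps in one `Pipeline` keeps the selection fitted on training data only, avoiding leakage into the test split.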

    Transfers by force and deception lead to stability in an evolutionary learning process when controlled by net profit but not by turnover

    An evolutionary process is characterized by heritable variation through random mutation, positive selection of the fittest, and random genetic drift. A learning process can be similarly organized and does not need insight or understanding: instructions are changed randomly, evaluated, and better instructions are propagated. While evolution of an enzyme or a company is a long-lasting process (change of hardware), learning is a fast process (change of software). In my model the basic ensemble consists of a source and a sink. Both have saturating benefit functions (b) and linear cost functions (c). In cost domination (b-c<0) the source keeps it; in benefit domination (b-c>0) the sink takes it - both at free will - thus creating a basic superadditivity. It is not reasonable to give when b-c>0 or take when b-c<0. However, with force and deception the source and sink of an ensemble can be overcome to give or take although it is not reasonable for them. This leads to further superadditivity within the ensemble, but now subadditivity will also appear in certain regions of the transfer space. I observe organisms or companies learning by trial and error to optimize superadditivity without changing the characteristics of the benefit function or the cost function. The role of a third-party master of an ensemble in creating superadditivity by force and deception, in the absence of cost domination in the source or benefit domination in the sink, is investigated in connected and unconnected ensembles. Employees and companies can be rated according to turnover or net profit. My model confirms the superiority of the benchmark net profit as a self-limiting, sustainable incentive in an evolutionary learning process
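The contrast between turnover and net profit as benchmarks can be made concrete with a small numeric sketch. The functional forms below (a saturating benefit b(x) = b_max·x/(x+k) and a linear cost c(x) = rate·x) and all parameter values are assumptions for illustration, not taken from the model: they show only that turnover (benefit alone) grows monotonically with the transferred amount, while net profit (b-c) peaks and then declines, which is the self-limiting property the abstract attributes to the net-profit benchmark.

```python
# Hedged numeric sketch with hypothetical parameters: a saturating
# benefit and a linear cost, evaluated over transfer amounts x.
def benefit(x, b_max=10.0, k=2.0):
    return b_max * x / (x + k)   # saturating benefit b(x)

def cost(x, rate=0.5):
    return rate * x              # linear cost c(x)

amounts = [0.5 * i for i in range(1, 41)]
turnover = [benefit(x) for x in amounts]
net = [benefit(x) - cost(x) for x in amounts]

# Turnover rises with every additional unit transferred...
assert all(t2 > t1 for t1, t2 in zip(turnover, turnover[1:]))

# ...but net profit has an interior maximum and eventually turns negative.
peak = max(range(len(net)), key=net.__getitem__)
print("net profit peaks at x =", amounts[peak])
```

With these parameters, rewarding turnover would push transfers ever higher, while rewarding net profit stops them near the interior optimum.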

    Towards learning free naive bayes nearest neighbor-based domain adaptation

    As of today, object categorization algorithms are not able to achieve the level of robustness and generality necessary to work reliably in the real world. Even the most powerful convolutional neural network we can train fails to perform satisfactorily when trained and tested on data from different databases. This issue, known as domain adaptation and/or dataset bias in the literature, is due to a distribution mismatch between data collections. Methods addressing it range from max-margin classifiers to learning how to modify the features and obtain a more robust representation. Recent work showed that by casting the problem into the image-to-class recognition framework, the domain adaptation problem is significantly alleviated [23]. Here we follow this approach, and show how a very simple, learning-free Naive Bayes Nearest Neighbor (NBNN)-based domain adaptation algorithm can significantly alleviate the distribution mismatch between source and target data, especially as the number of classes and the number of sources grow. Experiments on standard benchmarks used in the literature show that our approach (a) is competitive with the current state of the art on small-scale problems, and (b) achieves the current state of the art as the number of classes and sources grows, with minimal computational requirements. © Springer International Publishing Switzerland 2015
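The image-to-class idea behind plain NBNN (not the paper's domain-adaptation variant, whose details are not in the abstract) can be sketched in a few lines: an image is a bag of local descriptors, each class keeps a pool of descriptors, and an image is assigned to the class whose pool minimizes the summed nearest-neighbor distance. The data and dimensions below are synthetic assumptions for illustration.

```python
# Minimal NBNN classifier sketch: image-to-class distance is the sum,
# over the image's local descriptors, of the squared distance to the
# nearest descriptor in each class's pool. Learning-free: no training.
import numpy as np

def nbnn_classify(image_descs, class_pools):
    scores = {}
    for label, pool in class_pools.items():
        # Pairwise squared distances, shape (n_image_descs, n_pool_descs).
        d2 = ((image_descs[:, None, :] - pool[None, :, :]) ** 2).sum(-1)
        scores[label] = d2.min(axis=1).sum()   # image-to-class distance
    return min(scores, key=scores.get)

rng = np.random.default_rng(0)
# Two hypothetical classes with descriptor pools around different centers.
pools = {"cat": rng.normal(0.0, 0.3, (50, 8)),
         "dog": rng.normal(1.0, 0.3, (50, 8))}
query = rng.normal(1.0, 0.3, (20, 8))          # descriptors near "dog"
print(nbnn_classify(query, pools))             # expected: dog
```

Because no classifier is trained, adapting to a new source only requires adding its descriptors to the class pools, which is what makes the approach attractive as classes and sources grow.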