
    The Pseudo-Pascal Triangle of Maximum Deng Entropy

    The Pascal triangle (known as the Yang Hui triangle in Chinese) is an important model in mathematics, while entropy has been heavily studied in physics and as an uncertainty measure in information science. How to construct a connection between the Pascal triangle and uncertainty measures is an interesting topic. One of the most widely used entropies, Tsallis entropy, has been modelled with the Pascal triangle, but the relationship of other entropy functions to the Pascal triangle is still an open issue. Dempster-Shafer evidence theory has an advantage over probability theory in dealing with uncertainty, since the probability distribution is generalized to a basic probability assignment, which is more efficient for modelling and handling uncertain information. Given a basic probability assignment, its uncertainty can be measured by Deng entropy, a generalization of Shannon entropy. In this paper, a pseudo-Pascal triangle based on the maximum Deng entropy is constructed. Similar to the Pascal triangle modelling of Tsallis entropy, this work suggests a possible role for Deng entropy in physics and information theory.
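    Deng entropy has a closed form, which makes the construction easy to reproduce. Below is a minimal Python sketch, assuming the standard definition E_d(m) = -Σ_A m(A) log2( m(A) / (2^|A| - 1) ) and the known result that the maximum is attained when each focal element A carries mass proportional to 2^|A| - 1; the function names are illustrative, not from the paper.

```python
from itertools import combinations
from math import log2

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment (BPA).

    bpa maps each focal element (a frozenset of hypotheses) to its mass:
    E_d(m) = -sum m(A) * log2( m(A) / (2**|A| - 1) ).
    """
    return -sum(m * log2(m / (2 ** len(A) - 1))
                for A, m in bpa.items() if m > 0)

def max_deng_entropy_bpa(frame):
    """BPA that maximises Deng entropy: mass of A proportional to 2**|A| - 1."""
    subsets = [frozenset(c) for r in range(1, len(frame) + 1)
               for c in combinations(frame, r)]
    total = sum(2 ** len(A) - 1 for A in subsets)
    return {A: (2 ** len(A) - 1) / total for A in subsets}

frame = {"a", "b", "c"}
bpa = max_deng_entropy_bpa(frame)
# For a 3-element frame this equals log2(19) ~ 4.248.
print(deng_entropy(bpa))
```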

    Classification of aerial laser scanning point clouds using machine learning: a comparison between Random Forest and Tensorflow

    This investigation presents a comparison between two machine learning (ML) models for semantic classification of an aerial laser scanning point cloud. One model is Random Forest (RF); the other is a multi-layer neural network implemented in TensorFlow (TF). Accuracy was compared over a growing set of training data, using stratified independent sampling over classes from 5% to 50% of the total dataset. Results show RF reaching an average F1 = 0.823 over the 9 classes considered, whereas TF reached an average F1 = 0.450. F1 values were higher for RF than for TF, likely because of the difficulty of determining a suitable composition of the neural network's hidden layers in TF, so the TF results can probably be improved to reach higher accuracy. Further study in this direction is planned.
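    The experimental setup (two classifiers evaluated over growing stratified training fractions, scored by F1) can be sketched as follows. This is a hedged illustration with synthetic placeholder features, using scikit-learn's MLPClassifier as a stand-in for the paper's TensorFlow network; the paper's actual features, architecture, and F1 averaging are not specified here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic stand-ins: per-point features (e.g. height, intensity) and labels.
rng = np.random.RandomState(0)
X = rng.rand(5000, 8)
y = rng.randint(0, 9, size=5000)  # 9 classes, as in the study

for frac in (0.05, 0.10, 0.25, 0.50):
    # Stratified sampling over classes, growing from 5% of the dataset.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=frac, stratify=y, random_state=0)
    for name, model in (("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
                        ("NN", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500))):
        model.fit(X_tr, y_tr)
        score = f1_score(y_te, model.predict(X_te), average="macro")
        print(f"{name} train={frac:.0%}  macro-F1={score:.3f}")
```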

    Evidential Identification of New Target based on Residual

    Both an incomplete frame of discernment and interference in the data can lead to conflicting evidence and wrong fusion. Identifying a new target outside the frame of discernment is important but difficult when the data may have been interfered with. In this paper, evidential identification based on residuals is proposed to identify such new targets under possible data interference. By finding the numerical relations between different attributes, regression equations are established among the attributes in the frame of discernment. The collected data are then adjusted according to three mean values. Finally, based on the weighted residual, it can be decided whether the target to be identified is a new target. Numerical examples are used to verify the method.
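    The pipeline described (regression equations among attributes, then a weighted-residual test) might be sketched as follows. This is only an illustration of the idea: ordinary least squares stands in for the paper's regression equations, the data-adjustment step is omitted, and the percentile threshold is an assumption, not the paper's decision rule.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_attribute_models(X):
    """Fit one regression per attribute, predicting it from the others."""
    models = []
    for j in range(X.shape[1]):
        other = np.delete(X, j, axis=1)
        models.append(LinearRegression().fit(other, X[:, j]))
    return models

def weighted_residual(models, x, weights):
    """Weighted sum of absolute prediction residuals for one sample x."""
    res = []
    for j, model in enumerate(models):
        other = np.delete(x, j).reshape(1, -1)
        res.append(abs(model.predict(other)[0] - x[j]))
    return float(np.dot(weights, res))

# Known-target samples (rows) over three related attributes; synthetic here.
rng = np.random.RandomState(1)
t = rng.rand(200, 1)
X_known = np.hstack([t, 2 * t + 0.01 * rng.randn(200, 1),
                     3 * t + 0.01 * rng.randn(200, 1)])

models = fit_attribute_models(X_known)
weights = np.ones(X_known.shape[1]) / X_known.shape[1]
# Threshold set from known targets' residuals (an assumed decision rule).
threshold = np.percentile(
    [weighted_residual(models, x, weights) for x in X_known], 99)

query = np.array([0.5, 5.0, -1.0])  # violates the learned attribute relations
print(weighted_residual(models, query, weights) > threshold)  # True -> new target
```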

    New Failure Mode and Effects Analysis based on D Numbers Downscaling Method

    Failure mode and effects analysis (FMEA) is extensively applied to assess potential faults in systems, designs, and products. Nevertheless, the classical risk priority number (RPN) of traditional FMEA, obtained by multiplying the ratings of occurrence, severity, and detection, is not effective at handling the uncertainty in FMEA. Many methods have been proposed to address this, but deficiencies remain, such as a heavy computational burden and the required mutual exclusivity of propositions. In fact, because of the subjectivity of experts, the boundary between two adjacent evaluation ratings is fuzzy, so the propositions are not mutually exclusive. To address these issues, this paper proposes a new method for evaluating risk in FMEA based on D numbers and an evidential downscaling method, named the D numbers downscaling method. In the proposed method, D numbers constructed from the data are used to process uncertain information and aggregate the assessments of the risk factors, since D numbers permit propositions that are not mutually exclusive. The evidential downscaling method decreases the number of ratings from 10 to 3, and the frame of discernment from 2^10 to 2^3, which greatly reduces the computational complexity. A numerical example illustrates the efficiency and feasibility of the proposed method.
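    The two quantitative ingredients are easy to illustrate: the classical RPN product that the paper criticises, and the downscaling of 1-10 ratings to 3 grades that shrinks the frame of discernment from 2^10 to 2^3 subsets. The bin boundaries below are assumptions for illustration; the paper's exact downscaling rule may differ.

```python
def rpn(occurrence, severity, detection):
    """Classical risk priority number: product of the three 1-10 ratings."""
    return occurrence * severity * detection

def downscale(rating, bins=((1, 3), (4, 6), (7, 10))):
    """Map a 1-10 rating onto 3 coarse grades (bin boundaries are assumed),
    shrinking the frame of discernment from 2**10 to 2**3 subsets."""
    for grade, (lo, hi) in enumerate(bins, start=1):
        if lo <= rating <= hi:
            return grade
    raise ValueError(f"rating {rating} outside 1-10")

print(rpn(7, 8, 4))                       # 224
print([downscale(r) for r in (2, 5, 9)])  # [1, 2, 3]
print(2 ** 10, "->", 2 ** 3)              # power-set sizes before and after
```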

    Data-driven modelling for characterising viscose quality: a machine learning approach

    Demand for textile fibres is increasing, and cellulosic man-made fibres can serve as an alternative to oil-based end products in the textile industry. To compete with oil-based products, more accessible quality characterisation would be helpful. The aim of this study is to examine the possibilities of the machine learning method Random Forest in viscose fibre production and to find out whether Random Forest is applicable to viscose quality modelling. This approach was chosen because traditional regression methods, such as linear regression, have not been successfully applied to the quality characterisation. The study consists of a literature review and an applied part. The literature review covers dissolving pulp and viscose production as well as machine learning, and more precisely the Random Forest algorithm. The applied part consists of data analysis, data handling, and the other methods required to achieve the most accurate Random Forest model. The study shows that the Random Forest algorithm has the potential to model the quality behaviour, especially in comparison to traditional linear regression. The Random Forest model can predict with 95% confidence whether the viscose quality classifies as good or bad, but the numerical prediction of the quality parameter has a large error margin at 95% confidence. It is suggested that the error margin could be lower if the data were more complete and the number of data points larger.
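    The two modelling tasks the thesis reports (a good/bad classification and a numerical quality prediction with a confidence margin) can be sketched with scikit-learn's Random Forest implementations. The data below are synthetic placeholders, since the thesis's process variables are not given here, and the per-tree interval is a common heuristic rather than the thesis's exact confidence procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Synthetic stand-ins for process measurements (X) and a quality parameter (y).
rng = np.random.RandomState(0)
X = rng.rand(400, 6)
y = X @ rng.rand(6) + 0.1 * rng.randn(400)
good = (y > np.median(y)).astype(int)  # binarised good/bad quality label

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:300], good[:300])
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:300], y[:300])

x_new = X[300:301]
print("P(good):", clf.predict_proba(x_new)[0, 1])

# Rough 95% interval from the spread of per-tree predictions (a common
# heuristic, not the thesis's exact confidence procedure).
per_tree = np.array([t.predict(x_new)[0] for t in reg.estimators_])
print("prediction:", per_tree.mean(),
      "interval:", np.percentile(per_tree, [2.5, 97.5]))
```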