
    Relative rationality: Is machine rationality subjective?

    Rational decision making, in its plain linguistic sense, means making logical decisions. In essence, a rational agent optimally processes all relevant information to achieve its goal. Rationality therefore has two elements: the use of relevant information and the efficient processing of that information. In reality, the relevant information is incomplete and imperfect, and the processing engine, which for humans is the brain, is suboptimal; humans are risk averse rather than utility maximizers. Real-world problems are also predominantly non-convex, which makes fully rational decision-making fundamentally unachievable, a limitation Herbert Simon called bounded rationality. There is thus a trade-off between the amount of information used for decision-making and the complexity of the decision model. This paper explores whether machine rationality is subjective and concludes that it is.
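
    To make the non-convexity point concrete, the toy sketch below (an illustration assumed here, not taken from the paper) runs a greedy hill climber on a one-dimensional non-convex utility function; depending on where it starts, the boundedly rational search settles on different local optima with very different utilities.

        # Toy illustration (not from the paper): a boundedly rational greedy
        # hill climber on a non-convex utility function settles for whichever
        # local optimum its starting point leads to.
        import numpy as np

        def utility(x):
            # Non-convex utility with several local maxima.
            return np.sin(x) + 0.5 * np.sin(3 * x)

        def hill_climb(x0, step=0.05, iters=1000):
            """Greedy local search: move only if utility improves."""
            x = x0
            for _ in range(iters):
                for candidate in (x - step, x + step):
                    if utility(candidate) > utility(x):
                        x = candidate
            return x

        for start in (-2.0, 1.0):
            x_star = hill_climb(start)
            print(f"start={start:+.1f} -> stops at x={x_star:.2f}, "
                  f"utility={utility(x_star):.3f}")

    Run from the two starting points, the climber halts at different local optima, which is the sense in which a bounded, greedy decision-maker cannot guarantee the rational (globally optimal) choice on a non-convex problem.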

    Impact of Artificial Intelligence on Economic Theory

    Artificial intelligence has impacted many aspects of human life. This paper studies the impact of artificial intelligence on economic theory, in particular on the theory of bounded rationality, the efficient market hypothesis and prospect theory.

    On Robot Revolution and Taxation

    Advances in artificial intelligence are resulting in the rapid automation of the workforce, and the tools used to automate it are called robots. Bill Gates proposed that, to deal with job losses and the resulting reduction in tax revenue, we ought to tax robots. The difficulty with taxing robots is that it is not easy to pin down what a robot is. This article studies the definition of a robot and the implications of advances in robotics for taxation. It shows that establishing what is and what is not a robot is a difficult task, and it concludes that taxing robots amounts to increasing corporate tax.

    Bayesian Approach to Neuro-Rough Models

    This paper proposes a neuro-rough model based on the multi-layered perceptron (MLP) and rough sets. The neuro-rough model is tested on modelling the risk of HIV from demographic data. The model is formulated in a Bayesian framework and trained using the Monte Carlo method with the Metropolis criterion. When tested on estimating the risk of HIV infection from the demographic data, the model achieved an accuracy of 62%. The proposed model combines the accuracy of the Bayesian MLP model with the transparency of the Bayesian rough set model. Comment: 24 pages, 5 figures, 1 table
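
    As a rough illustration of the kind of training loop described, the sketch below samples the weights of a small MLP with random-walk Metropolis under a Gaussian prior. The data, network size and sampler settings are assumptions for the example only, not the paper's.

        # Minimal sketch (assumptions, not the paper's code): random-walk
        # Metropolis sampling of the weights of a one-hidden-layer MLP.
        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic stand-in for demographic features X and a binary label y.
        X = rng.normal(size=(200, 4))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)

        H = 5                                  # hidden units
        n_w = 4 * H + H + H + 1                # W1, b1, W2, b2 flattened

        def unpack(w):
            W1 = w[:4 * H].reshape(4, H)
            b1 = w[4 * H:4 * H + H]
            W2 = w[4 * H + H:4 * H + 2 * H]
            b2 = w[-1]
            return W1, b1, W2, b2

        def forward(w, X):
            W1, b1, W2, b2 = unpack(w)
            h = np.tanh(X @ W1 + b1)
            return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # class probability

        def log_posterior(w):
            p = np.clip(forward(w, X), 1e-9, 1 - 1e-9)
            log_lik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
            log_prior = -0.5 * np.sum(w ** 2)             # zero-mean Gaussian prior
            return log_lik + log_prior

        # Random-walk Metropolis: propose a perturbation, accept with the
        # usual min(1, posterior ratio) rule, done here in log space.
        w = rng.normal(scale=0.1, size=n_w)
        samples = []
        for step in range(5000):
            proposal = w + rng.normal(scale=0.05, size=n_w)
            if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(w):
                w = proposal
            if step > 2000 and step % 10 == 0:            # thin after burn-in
                samples.append(w.copy())

        # Posterior-mean prediction, then accuracy on the training data.
        probs = np.mean([forward(s, X) for s in samples], axis=0)
        print("accuracy:", np.mean((probs > 0.5) == y))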

    Evaluating the Impact of Missing Data Imputation through the use of the Random Forest Algorithm

    This paper presents an impact assessment for the imputation of missing data. The data set used is HIV seroprevalence data from an antenatal clinic survey performed in 2001. Data imputation is performed with five methods: Random Forests, autoassociative neural networks with genetic algorithms, autoassociative neuro-fuzzy configurations, and two Random Forest and neural network based hybrids. Results indicate that Random Forests are superior in imputing missing data in terms of both accuracy and computation time, with accuracy increases of up to 32% on average for certain variables when compared with the autoassociative networks. While the hybrid systems show significant promise, they are hindered by their neural network components. The imputed data are used to test for impact in three ways: statistical analysis, HIV status classification, and probability prediction with logistic regression. Results indicate that these methods are fairly immune to imputed data and that the impact is not highly significant, with linear correlations of 96% between the HIV probability prediction and a set of two imputed variables in the logistic regression analysis.
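
    For readers unfamiliar with forest-based imputation, the sketch below shows one common way to do it, iterative imputation with a Random Forest regressor in the spirit of missForest, on a purely synthetic stand-in dataset; it is not the paper's pipeline or data.

        # Hedged sketch: iterative Random-Forest imputation (missForest-style),
        # not the paper's exact method or survey data.
        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(1)

        # Synthetic stand-in for survey data: a few correlated numeric columns.
        n = 500
        age = rng.uniform(15, 49, n)
        parity = np.round(age / 10 + rng.normal(scale=0.8, size=n)).clip(0, 10)
        education = np.round(12 - 0.1 * parity + rng.normal(scale=2, size=n)).clip(0, 13)
        data = np.column_stack([age, parity, education])

        # Knock out 20% of the entries at random to create missing values.
        mask = rng.uniform(size=data.shape) < 0.2
        incomplete = data.copy()
        incomplete[mask] = np.nan

        imputer = IterativeImputer(
            estimator=RandomForestRegressor(n_estimators=100, random_state=0),
            max_iter=5,
            random_state=0,
        )
        imputed = imputer.fit_transform(incomplete)

        # Simple quality check: mean absolute error on the removed entries.
        print("MAE on imputed cells:", np.abs(imputed[mask] - data[mask]).mean())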

    Bayesian approach to rough set

    This paper proposes an approach to training rough set models within a Bayesian framework using the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. MCMC sampling is conducted in the rough set granule space, with the Metropolis algorithm used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results show that the proposed approach achieves an average accuracy of 58%, with the accuracy reaching up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as linguistic rules describing how the demographic parameters drive the risk of HIV. Comment: 20 pages, 3 figures
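
    The sketch below illustrates the rough-set side of the model: lower and upper approximations of a decision class computed from indiscernibility granules in a tiny, made-up decision table. It is an assumed illustration of the underlying concepts, not the paper's Bayesian training procedure.

        # Illustrative sketch (assumed, not the paper's code): lower and upper
        # rough-set approximations of a decision class from a toy decision table.
        from collections import defaultdict

        # Toy records: (condition attributes) -> decision. Purely synthetic.
        table = [
            ({"age": "young", "educ": "low"},  "high_risk"),
            ({"age": "young", "educ": "low"},  "low_risk"),   # conflicts with the row above
            ({"age": "young", "educ": "high"}, "low_risk"),
            ({"age": "old",   "educ": "low"},  "high_risk"),
            ({"age": "old",   "educ": "high"}, "low_risk"),
        ]

        # Indiscernibility: group records that share identical condition
        # attributes; these groups are the granules of the rough set model.
        granules = defaultdict(list)
        for conditions, decision in table:
            key = tuple(sorted(conditions.items()))
            granules[key].append(decision)

        target = "high_risk"
        lower = [k for k, ds in granules.items() if all(d == target for d in ds)]
        upper = [k for k, ds in granules.items() if any(d == target for d in ds)]

        print("lower approximation (certain rules):", lower)
        print("upper approximation (possible rules):", upper)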

    Blockchain and Artificial Intelligence

    It is undeniable that artificial intelligence (AI) and blockchain concepts are spreading at a phenomenal rate. Both technologies have distinct degrees of technological complexity and multi-dimensional business implications. A common misunderstanding about the blockchain concept, in particular, is that a blockchain is decentralized and not controlled by anyone; in practice, the underlying development of a blockchain system is still attributable to a cluster of core developers. Take the smart contract as an example: it is essentially a collection of code (functions) and data (states) programmed and deployed on a blockchain (say, Ethereum) by human programmers, and it is therefore unlikely to be entirely free of loopholes and flaws. In this article, through a brief overview of how artificial intelligence could be used to deliver bug-free smart contracts and so achieve the goal of blockchain 2.0, we emphasize that blockchain implementations can be assisted or enhanced by various AI techniques. The alliance of AI and blockchain is expected to create numerous possibilities.
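
    As a purely hypothetical miniature of what AI-assisted contract auditing could look like, the sketch below trains a tiny decision tree on hand-made lexical features of a few synthetic Solidity-like snippets to flag a reentrancy-style ordering (external call before the state update). None of this comes from the article; it only illustrates the idea.

        # Toy sketch only (an assumption, not anything described in the article):
        # a tiny classifier over lexical features flags contract fragments whose
        # external call precedes the state update, a reentrancy-style smell.
        from sklearn.tree import DecisionTreeClassifier

        snippets = [
            "msg.sender.call{value: amount}(''); balances[msg.sender] -= amount;",  # call first
            "balances[msg.sender] -= amount; msg.sender.call{value: amount}('');",  # update first
            "payable(msg.sender).transfer(amount); balances[msg.sender] = 0;",
            "balances[msg.sender] = 0; payable(msg.sender).transfer(amount);",
        ]
        labels = [1, 0, 1, 0]   # 1 = suspicious ordering, 0 = state updated before the call

        def features(src):
            # Position of the first external call vs. the first balance update.
            call_pos = min([p for p in (src.find(".call"), src.find(".transfer")) if p >= 0],
                           default=len(src))
            update_pos = src.find("balances[")
            return [int(call_pos < update_pos), src.count(".call"), src.count("require(")]

        clf = DecisionTreeClassifier(random_state=0).fit(
            [features(s) for s in snippets], labels)
        print(clf.predict([features("x.call{value: v}(''); balances[x] -= v;")]))  # -> [1]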

    Creativity and Artificial Intelligence: A Digital Art Perspective

    This paper describes the application of artificial intelligence to the creation of digital art. AI is a computational paradigm that codifies intelligence into machines. There are generally three types of artificial intelligence: machine learning, evolutionary programming and soft computing. Machine learning is the statistical approach to building intelligent systems. Evolutionary programming is the use of naturally inspired evolutionary processes to design intelligent machines; examples include the genetic algorithm, inspired by the principles of evolution, and swarm optimization, inspired by the swarming of birds, fish, ants and other animals. Soft computing includes techniques such as agent-based modelling and fuzzy logic. Opportunities for applying these techniques to digital art are explored. Comment: 5 pages
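
    The sketch below is a small, assumed example (not from the paper) of the evolutionary loop such generative systems build on: a genetic algorithm with selection, crossover and mutation evolving a tiny grayscale image toward a target gradient.

        # Small illustrative sketch (not from the paper): a genetic algorithm
        # evolves a tiny grayscale "image" toward a target gradient.
        import numpy as np

        rng = np.random.default_rng(42)
        SIZE = 64                                     # pixels in the toy image
        target = np.linspace(0.0, 1.0, SIZE)          # target: a smooth gradient

        def fitness(individual):
            return -np.mean((individual - target) ** 2)   # higher is better

        population = rng.uniform(size=(40, SIZE))
        for generation in range(300):
            scores = np.array([fitness(ind) for ind in population])
            parents = population[np.argsort(scores)[-20:]]    # keep the best half
            children = []
            for _ in range(20):
                # Crossover: splice two random parents at a random cut point.
                a, b = parents[rng.integers(20)], parents[rng.integers(20)]
                cut = rng.integers(1, SIZE)
                child = np.concatenate([a[:cut], b[cut:]])
                # Mutation: small random perturbation of a few pixels.
                mask = rng.uniform(size=SIZE) < 0.05
                child[mask] += rng.normal(scale=0.1, size=mask.sum())
                children.append(np.clip(child, 0.0, 1.0))
            population = np.vstack([parents, children])

        print("best fitness:", max(fitness(ind) for ind in population))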

    A note on the separability index

    In discriminating between objects from different classes, the more separable the classes are, the simpler and less computationally expensive the classifier that can be used. One thus seeks a measure that quickly captures this notion of class separability while having an intuitive interpretation of what it quantifies. A previously proposed measure, the separability index (SI), has been shown to capture the class separability property very well. This short note highlights the limitations of that measure and proposes a slight variation by combining it with another form of separability measure that captures a quantity the separability index does not.
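
    For reference, the sketch below computes one common formulation of the separability index, the nearest-neighbour version attributed to Thornton: the fraction of points whose nearest neighbour shares their class label. The note's proposed variation is not reproduced; this is only an assumed baseline implementation.

        # Sketch of the nearest-neighbour separability index (assumed baseline,
        # not the note's proposed variation).
        import numpy as np

        def separability_index(X, y):
            X, y = np.asarray(X, dtype=float), np.asarray(y)
            # Pairwise squared Euclidean distances, diagonal masked out so a
            # point cannot be its own nearest neighbour.
            d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
            np.fill_diagonal(d, np.inf)
            nearest = d.argmin(axis=1)
            return float(np.mean(y[nearest] == y))

        # Two well-separated Gaussian blobs should score close to 1.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
        y = np.array([0] * 50 + [1] * 50)
        print("SI:", separability_index(X, y))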

    Comparison of Data Imputation Techniques and their Impact

    Missing and incomplete information in surveys or databases can be imputed using different statistical and soft-computing techniques. This paper comprehensively compares auto-associative neural networks (NN), neuro-fuzzy (NF) systems and hybrid combinations of these methods with hot-deck imputation. The tests are conducted on an eight-category antenatal survey and also under principal component analysis (PCA) conditions. The neural network outperforms the neuro-fuzzy system in all tests by an average of 5.8%, while the hybrid method is on average 15.9% more accurate yet 50% less computationally efficient than the NN or NF systems acting alone. The global impact of the imputed data is assessed with several statistical tests. It is found that although the imputation accuracy is high, the imputed data alters the PCA inter-relationships within the dataset. The standard deviation of the imputed dataset is on average 36.7% lower than that of the actual dataset, which may lead to an incorrect interpretation of the results. Comment: 7 pages
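
    The sketch below illustrates the hot-deck component used in the hybrids: nearest-neighbour hot-deck imputation that fills a record's gaps from the most similar fully observed record. The data are synthetic and the details are assumptions for illustration, not the paper's procedure.

        # Hedged sketch of nearest-neighbour hot-deck imputation on synthetic
        # numeric data (not the antenatal survey).
        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic numeric survey with roughly 15% of entries missing.
        complete = rng.normal(size=(300, 5))
        data = complete.copy()
        data[rng.uniform(size=data.shape) < 0.15] = np.nan

        donors = data[~np.isnan(data).any(axis=1)]          # fully observed records

        def hot_deck_impute(row, donors):
            """Fill a record's gaps from the closest donor on the observed fields."""
            observed = ~np.isnan(row)
            if observed.all():
                return row
            distances = np.sqrt(((donors[:, observed] - row[observed]) ** 2).sum(axis=1))
            donor = donors[distances.argmin()]
            filled = row.copy()
            filled[~observed] = donor[~observed]
            return filled

        imputed = np.array([hot_deck_impute(r, donors) for r in data])
        print("remaining NaNs:", np.isnan(imputed).sum())   # 0 if at least one donor exists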