267 research outputs found

    On Robot Revolution and Taxation

    Full text link
    Advances in artificial intelligence are resulting in the rapid automation of the workforce. The tools used to automate are called robots. Bill Gates proposed that, in order to deal with the loss of jobs and the reduction of tax revenue, we ought to tax the robots. The problem with taxing robots is that it is not easy to establish what a robot is. This article studies the definition of a robot and the implications of advances in robotics for taxation. It shows that establishing what is and is not a robot is a difficult task, and it concludes that taxing robots amounts to increasing corporate tax.

    Impact of Artificial Intelligence on Economic Theory

    Full text link
    Artificial intelligence has impacted many aspects of human life. This paper studies the impact of artificial intelligence on economic theory. In particular, we study its impact on the theory of bounded rationality, the efficient market hypothesis, and prospect theory.

    Relative rationality: Is machine rationality subjective?

    Full text link
    Rational decision making, in its linguistic description, means making logical decisions. In essence, a rational agent optimally processes all relevant information to achieve its goal. Rationality has two elements: the use of relevant information and the efficient processing of that information. In reality, relevant information is incomplete and imperfect, and the processing engine, which for humans is the brain, is suboptimal. Humans are risk averse rather than utility maximizers. In the real world, problems are predominantly non-convex, which makes the idea of fully rational decision-making fundamentally unachievable; Herbert Simon called this bounded rationality. There is a trade-off between the amount of information used for decision-making and the complexity of the decision model used. This paper explores whether machine rationality is subjective and concludes that indeed it is.

    Bayesian approach to rough set

    Full text link
    This paper proposes an approach to training rough set models within a Bayesian framework, using the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. MCMC sampling is conducted in the rough set granule space, and the Metropolis algorithm is used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results show that the proposed approach achieves an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV. Comment: 20 pages, 3 figures
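
    As a rough illustration of the training scheme above, the sketch below runs a Metropolis sampler over a toy rule-based model with a prior that penalises the number of rules. The rule representation, the synthetic data, and the fit score are placeholder assumptions, not the paper's rough set granulation or its HIV data.

        # Metropolis sampling over a toy rule-based model (placeholder for a rough set model).
        import numpy as np

        rng = np.random.default_rng(0)

        def log_prior(rules):
            # Prior knowledge from the paper: good rough set models have fewer rules.
            return -0.5 * len(rules)

        def fit_score(rules, X, y):
            # Placeholder fit term standing in for a proper log-likelihood:
            # rules are (threshold, label) pairs on a single synthetic attribute.
            preds = np.zeros_like(y)
            for threshold, label in rules:
                preds[X[:, 0] > threshold] = label
            return float(np.sum(preds == y))

        def propose(rules):
            # Perturb one rule boundary, or add a new rule.
            new = list(rules)
            if new and rng.random() < 0.5:
                i = rng.integers(len(new))
                t, lab = new[i]
                new[i] = (t + rng.normal(scale=0.1), lab)
            else:
                new.append((rng.random(), int(rng.integers(2))))
            return new

        def metropolis(X, y, n_iter=2000):
            rules = [(0.5, 1)]
            current = fit_score(rules, X, y) + log_prior(rules)
            for _ in range(n_iter):
                cand = propose(rules)
                score = fit_score(cand, X, y) + log_prior(cand)
                if np.log(rng.random()) < score - current:   # Metropolis acceptance
                    rules, current = cand, score
            return rules

        # Synthetic stand-in for the demographic data used in the paper.
        X = rng.random((200, 1))
        y = (X[:, 0] > 0.6).astype(int)
        print(metropolis(X, y))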

    Bayesian Approach to Neuro-Rough Models

    Full text link
    This paper proposes a neuro-rough model based on a multi-layered perceptron and rough set theory. The neuro-rough model is then tested on modelling the risk of HIV from demographic data. The model is formulated within a Bayesian framework and trained using the Monte Carlo method with the Metropolis criterion. When tested on estimating the risk of HIV infection given the demographic data, the model achieved an accuracy of 62%. The proposed model combines the accuracy of the Bayesian MLP model with the transparency of the Bayesian rough set model. Comment: 24 pages, 5 figures, 1 table
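
    The sketch below illustrates the Bayesian training component only: sampling the weights of a small MLP with the Metropolis criterion and averaging predictions over the retained samples. The network size, prior, and synthetic data are assumptions, and the rough set part of the model is omitted.

        # Metropolis sampling of MLP weights; predictions averaged over retained samples.
        import numpy as np

        rng = np.random.default_rng(1)

        def forward(w, X):
            W1, b1, W2, b2 = w
            h = np.tanh(X @ W1 + b1)
            return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

        def log_posterior(w, X, y):
            p = forward(w, X).ravel()
            log_lik = np.sum(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
            log_prior = -0.5 * sum(np.sum(v ** 2) for v in w)   # Gaussian prior on weights
            return log_lik + log_prior

        def init(n_in, n_hid):
            return [rng.normal(size=(n_in, n_hid)), np.zeros(n_hid),
                    rng.normal(size=(n_hid, 1)), np.zeros(1)]

        # Synthetic stand-in for the demographic data.
        X = rng.random((150, 3))
        y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

        w = init(3, 4)
        current = log_posterior(w, X, y)
        samples = []
        for step in range(3000):
            cand = [v + rng.normal(scale=0.05, size=v.shape) for v in w]
            score = log_posterior(cand, X, y)
            if np.log(rng.random()) < score - current:   # Metropolis acceptance
                w, current = cand, score
            if step % 100 == 0:
                samples.append([v.copy() for v in w])

        # Posterior predictive mean over the retained weight samples.
        pred = np.mean([forward(s, X).ravel() for s in samples], axis=0)
        print("training accuracy:", np.mean((pred > 0.5) == y))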

    Evaluating the Impact of Missing Data Imputation through the use of the Random Forest Algorithm

    Full text link
    This paper presents an impact assessment for the imputation of missing data. The data set used is HIV seroprevalence data from an antenatal clinic survey performed in 2001. Data imputation is performed with five methods: Random Forests, Autoassociative Neural Networks with Genetic Algorithms, Autoassociative Neuro-Fuzzy configurations, and two hybrids of Random Forests and Neural Networks. Results indicate that Random Forests are superior at imputing missing data in terms of both accuracy and computation time, with accuracy increases of up to 32% on average for certain variables when compared with autoassociative networks. While the hybrid systems show significant promise, they are hindered by their Neural Network components. The imputed data are used to test for impact in three ways: through statistical analysis, HIV status classification, and probability prediction with Logistic Regression. Results indicate that these methods are fairly immune to imputed data and that the impact is not highly significant, with linear correlations of 96% between the HIV probability predictions and a set of two imputed variables in the logistic regression analysis.
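
    The sketch below shows one way the random-forest imputation step could look: a forest is fitted on the complete cases and used to fill in a single variable. The column names and the synthetic data frame are illustrative assumptions, not the antenatal survey data.

        # Random-forest imputation of one variable from the remaining columns.
        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(2)
        df = pd.DataFrame({
            "age": rng.integers(15, 50, size=500),
            "gravidity": rng.integers(1, 8, size=500),
            "parity": rng.integers(0, 7, size=500),
            "education": rng.integers(0, 13, size=500).astype(float),
        })
        # Knock out 20% of the 'education' values to simulate missingness.
        missing = rng.random(len(df)) < 0.2
        df.loc[missing, "education"] = np.nan

        features = ["age", "gravidity", "parity"]
        observed = df[~df["education"].isna()]
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(observed[features], observed["education"])

        # Predict the missing entries from the observed columns.
        df.loc[missing, "education"] = model.predict(df.loc[missing, features])
        print(df["education"].isna().sum(), "missing values remain")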

    A note on the separability index

    Full text link
    In discriminating between objects from different classes, the more separable the classes are, the less computationally expensive and complex a classifier needs to be. One thus seeks a measure that quickly captures this notion of separability between classes while having an intuitive interpretation of what it is quantifying. A previously proposed separability measure, the separability index (SI), has been shown to capture the class separability property intuitively and well. This short note highlights the limitations of this measure and proposes a slight variation of it by combining it with another separability measure that captures a quantity not covered by the Separability Index.
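
    For reference, the sketch below computes a nearest-neighbour separability index, i.e. the fraction of points whose nearest neighbour shares their class label, which is one common formulation of the SI; the variation proposed in the note is not reproduced, and the two-cluster data are synthetic.

        # Nearest-neighbour separability index on synthetic two-class data.
        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def separability_index(X, y):
            nn = NearestNeighbors(n_neighbors=2).fit(X)   # neighbour 0 is the point itself
            _, idx = nn.kneighbors(X)
            return np.mean(y[idx[:, 1]] == y)             # agreement with nearest neighbour

        rng = np.random.default_rng(3)
        X0 = rng.normal(loc=0.0, size=(100, 2))
        X1 = rng.normal(loc=3.0, size=(100, 2))           # well-separated second class
        X = np.vstack([X0, X1])
        y = np.array([0] * 100 + [1] * 100)
        print("SI:", separability_index(X, y))            # close to 1 for separable classes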

    Artificial Intelligence Techniques for Steam Generator Modelling

    Full text link
    This paper investigates the use of different Artificial Intelligence methods to predict the values of several continuous variables from a steam generator. The objective was to determine how the different artificial intelligence methods performed in making predictions on the given dataset. The methods evaluated were Neural Networks, Support Vector Machines, and Adaptive Neuro-Fuzzy Inference Systems. The types of neural networks investigated were Multi-Layer Perceptrons and Radial Basis Functions, with Bayesian and committee techniques applied to them. Each of the AI methods considered was simulated in Matlab. The results of the simulations showed that all the AI methods were capable of predicting the steam generator data reasonably accurately. However, the Adaptive Neuro-Fuzzy Inference System outperformed the other methods in terms of accuracy and ease of implementation, while still achieving a fast execution time as well as a reasonable training time. Comment: 23 pages
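
    As a rough Python analogue of this kind of comparison (the paper's implementation is in Matlab), the sketch below fits an MLP and a support vector regressor to synthetic data standing in for the steam generator variables and reports the test error for each.

        # Compare two regressors on synthetic stand-in data for the steam generator task.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.svm import SVR
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(4)
        X = rng.random((400, 4))                     # placeholder input variables
        y = 2 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=400)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        models = [
            ("MLP", MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)),
            ("SVR", SVR(kernel="rbf", C=10.0)),
        ]
        for name, model in models:
            model.fit(X_tr, y_tr)
            mse = mean_squared_error(y_te, model.predict(X_te))
            print(f"{name} test MSE: {mse:.4f}")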

    Artificial Intelligence and Asymmetric Information Theory

    Full text link
    When human agents come together to make decisions, it is often the case that one agent has more information than the other. This phenomenon is called information asymmetry, and it distorts the market. If one agent intends to manipulate a decision in its favor, it can signal false or true information; alternatively, an agent can screen for information to reduce the impact of asymmetric information on decisions. With the advent of artificial intelligence, signaling and screening have become easier. This paper studies the impact of artificial intelligence on the theory of asymmetric information. It is surmised that artificially intelligent agents reduce the degree of information asymmetry, and thus the markets where these agents are deployed become more efficient. It is also postulated that the more artificially intelligent agents are deployed in a market, the lower the volume of trades in that market, because many trades depend on an asymmetry of information about the goods and services being traded, which creates a sense of arbitrage.

    Modeling and Controlling Interstate Conflict

    Full text link
    Bayesian neural networks were used to model the relationship between the input parameters, Democracy, Allies, Contingency, Distance, Capability, Dependency and Major Power, and the output parameter, which is either peace or conflict. Automatic relevance determination was used to rank the importance of the input variables. A control theory approach was used to identify input variables that would give a peaceful outcome. It was found that using all four controllable variables, Democracy, Allies, Capability and Dependency, or using only Dependency or only Capability, avoids all the predicted conflicts. Comment: 6 pages
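
    The sketch below illustrates the control idea on synthetic data: a classifier is trained on dyadic features and the controllable inputs are then searched for settings predicted to give peace. A plain scikit-learn MLP stands in for the paper's Bayesian neural network, and the random data and labelling rule are stand-ins for the dyadic conflict data.

        # Search controllable inputs of a trained classifier for a predicted peaceful outcome.
        import itertools
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(5)
        cols = ["Democracy", "Allies", "Contingency", "Distance",
                "Capability", "Dependency", "MajorPower"]
        X = rng.random((500, len(cols)))
        # Synthetic rule: high Democracy and Dependency make conflict (label 1) unlikely.
        y = ((X[:, 0] + X[:, 5]) < 0.8).astype(int)

        model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
        model.fit(X, y)

        controllable = [0, 1, 4, 5]                  # Democracy, Allies, Capability, Dependency
        base = X[y == 1][0].copy()                   # a dyad labelled as conflict
        for values in itertools.product(np.linspace(0, 1, 5), repeat=len(controllable)):
            trial = base.copy()
            trial[controllable] = values
            if model.predict(trial.reshape(1, -1))[0] == 0:   # 0 = peace
                found = dict(zip([cols[i] for i in controllable], values))
                print("peaceful setting found:", found)
                break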