90,588 research outputs found
Algorithms & Fiduciaries: Existing and Proposed Regulatory Approaches to Artificially Intelligent Financial Planners
Artificial intelligence is no longer solely the realm of science fiction. Today, basic forms of machine learning algorithms are commonly used by a variety of companies, and advanced forms of machine learning are increasingly making their way into the consumer sphere, promising to optimize existing markets. For financial advising, machine learning algorithms promise to make advice available 24/7 and to significantly reduce costs, thereby opening the market for financial advice to lower-income individuals. However, the use of machine learning algorithms also raises concerns, among them whether these algorithms can meet the existing fiduciary standard imposed on human financial advisers, and how responsibility and liability should be apportioned when an autonomous algorithm falls short of that standard and harms a client. After summarizing the applicable law regulating investment advisers and the current state of robo-advising, this Note evaluates whether robo-advisers can meet the fiduciary standard and proposes alternative liability schemes for dealing with increasingly sophisticated machine learning algorithms.
Is it ethical to avoid error analysis?
Machine learning algorithms tend to create more accurate models with the
availability of large datasets. In some cases, highly accurate models can hide
the presence of bias in the data. There are several studies published that
tackle the development of discriminatory-aware machine learning algorithms. We
center on the further evaluation of machine learning models by doing error
analysis, to understand under what conditions the model is not working as
expected. We focus on the ethical implications of avoiding error analysis, from
a falsification of results and discrimination perspective. Finally, we show
different ways to approach error analysis in non-interpretable machine learning
algorithms such as deep learning.
Comment: Presented as a poster at the 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017).
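The subgroup error analysis the abstract advocates can be sketched in a few lines. This is an illustrative example with toy data, not code from the paper: an overall accuracy figure can hide a model that fails on one subgroup, so the error rate is broken down per group.

```python
# Sketch: error analysis by subgroup (hypothetical data, not from the paper).

def error_rate(y_true, y_pred):
    """Fraction of misclassified examples."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def error_by_group(y_true, y_pred, groups):
    """Error rate computed separately for each group label."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = error_rate([y_true[i] for i in idx],
                              [y_pred[i] for i in idx])
    return rates

# Toy example: one error out of eight overall, but every error
# falls on group "b", which the aggregate figure hides.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(error_rate(y_true, y_pred))            # 0.125 overall
print(error_by_group(y_true, y_pred, groups))  # {"a": 0.0, "b": 0.25}
```

The same breakdown applies to any slice of the data (feature ranges, demographic attributes), which is what makes avoiding it an ethical as well as a methodological issue.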
Machine learning to analyze single-case data: a proof of concept
Visual analysis is the most commonly used method for interpreting data from single-case designs, but levels of interrater agreement remain a concern. Although structured
aids to visual analysis such as the dual-criteria (DC) method may increase interrater
agreement, the accuracy of the analyses may still benefit from improvements. Thus, the
purpose of our study was to (a) examine correspondence between visual analysis and
models derived from different machine learning algorithms, and (b) compare the
accuracy, Type I error rate, and power of each of our models with those produced by
the DC method. We trained our models on a previously published dataset and then
conducted analyses on both nonsimulated and simulated graphs. All our models
derived from machine learning algorithms matched the interpretation of the visual
analysts more frequently than the DC method. Furthermore, the machine learning
algorithms outperformed the DC method on accuracy, Type I error rate, and power.
Our results support the somewhat unorthodox proposition that behavior analysts may
use machine learning algorithms to supplement their visual analysis of single-case data,
but more research is needed to examine the potential benefits and drawbacks of such an
approach
Practical feature subset selection for machine learning
Machine learning algorithms automatically extract knowledge from machine-readable information. Unfortunately, their success is usually dependent on the quality of the data that they operate on. If the data is inadequate, or contains extraneous and irrelevant information, machine learning algorithms may produce less accurate and less understandable results, or may fail to discover anything of use at all. Feature subset selection can result in enhanced performance, a reduced hypothesis search space, and, in some cases, reduced storage requirements. This paper describes a new feature selection algorithm that uses a correlation-based heuristic to determine the "goodness" of feature subsets, and evaluates its effectiveness with three common machine learning algorithms. Experiments using a number of standard machine learning data sets are presented. Feature subset selection gave significant improvement for all three algorithms.
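The flavor of a correlation-based "goodness" heuristic can be sketched with the well-known CFS merit formula; the paper's exact scoring may differ, so treat this as an illustration of the idea rather than its implementation. A subset scores well when its features correlate with the class but not with each other.

```python
# Sketch of a correlation-based subset merit (the standard CFS formula,
# shown for illustration; not necessarily the paper's exact heuristic).
import math

def subset_merit(k, avg_feat_class, avg_feat_feat):
    """Merit of a k-feature subset.

    avg_feat_class: mean |correlation| between features and the class.
    avg_feat_feat:  mean |correlation| between pairs of features.
    """
    return (k * avg_feat_class) / math.sqrt(k + k * (k - 1) * avg_feat_feat)

# A single predictive feature scores its own class correlation.
print(subset_merit(1, 0.6, 0.0))   # 0.6

# For a fixed class correlation, redundancy among features lowers merit.
print(subset_merit(3, 0.6, 0.2) > subset_merit(3, 0.6, 0.8))  # True
```

A greedy forward search over subsets guided by this score gives a practical selection procedure without training a model per candidate subset.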
Learning Multiple Defaults for Machine Learning Algorithms
The performance of modern machine learning methods highly depends on their
hyperparameter configurations. One simple way of selecting a configuration is
to use default settings, often proposed along with the publication and
implementation of a new algorithm. Those default values are usually chosen in
an ad-hoc manner to work well enough on a wide variety of datasets. To address
this problem, different automatic hyperparameter configuration algorithms have
been proposed, which select an optimal configuration per dataset. This
principled approach usually improves performance, but adds additional
algorithmic complexity and computational costs to the training procedure. As an
alternative to this, we propose learning a set of complementary default values
from a large database of prior empirical results. Selecting an appropriate
configuration on a new dataset then requires only a simple, efficient and
embarrassingly parallel search over this set. We demonstrate the effectiveness
and efficiency of the approach we propose in comparison to random search and
Bayesian Optimization
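The search over a learned set of defaults reduces to evaluating each candidate configuration and keeping the best, which is why it is embarrassingly parallel. The configurations and scoring function below are hypothetical stand-ins, not the paper's learned set:

```python
# Sketch: selecting from a fixed set of complementary default
# configurations (hypothetical values, for illustration only).
DEFAULTS = [
    {"learning_rate": 0.1,  "depth": 3},
    {"learning_rate": 0.01, "depth": 8},
    {"learning_rate": 0.3,  "depth": 2},
]

def best_default(defaults, evaluate):
    """Return the configuration with the highest validation score.

    Each call to evaluate() is independent, so the loop inside max()
    could run across workers with no coordination.
    """
    return max(defaults, key=evaluate)

# Toy stand-in for "train the model and measure validation accuracy".
def toy_score(cfg):
    return cfg["learning_rate"] * cfg["depth"]

print(best_default(DEFAULTS, toy_score))
```

With a handful of complementary defaults, this costs a few training runs per dataset, versus the sequential model fitting that Bayesian optimization requires.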
Automatic generation of hardware Tree Classifiers
Machine learning is growing in popularity and spreading across different fields for various applications. Following this trend, machine learning algorithms are being run on a variety of hardware platforms in pursuit of high test accuracy and throughput. FPGAs are a well-suited hardware platform for machine learning because of their re-programmability and lower power consumption. However, programming FPGAs for machine learning algorithms requires substantial engineering time and effort compared to a software implementation. We propose a software-assisted design flow to program FPGAs for machine learning algorithms using our hardware library. The hardware library is highly parameterized and accommodates tree classifiers; at present, it consists of the components required to implement decision trees and random forests. The whole automation is wrapped in a Python script that takes the user from the first step, a dataset and design choices, to the last step, hardware description code for the trained machine learning model.
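A miniature of the code-generation step in such a flow might look like the following. The tree here is a hand-written dict and the emitted string is a C/Verilog-style ternary expression; both are hypothetical illustrations, not the authors' library format:

```python
# Hypothetical sketch: turn a trained decision tree into a nested
# conditional expression that could be emitted as part of an HDL design.
TREE = {
    "feature": "x0", "threshold": 5,
    "left":  {"label": 0},
    "right": {"feature": "x1", "threshold": 2,
              "left":  {"label": 0},
              "right": {"label": 1}},
}

def emit(node):
    """Recursively emit a C/Verilog-style ternary expression."""
    if "label" in node:
        return str(node["label"])
    return "(({f} <= {t}) ? {l} : {r})".format(
        f=node["feature"], t=node["threshold"],
        l=emit(node["left"]), r=emit(node["right"]))

print(emit(TREE))  # ((x0 <= 5) ? 0 : ((x1 <= 2) ? 0 : 1))
```

Because a decision tree is just nested comparisons, every level can evaluate in parallel combinational logic, which is what makes FPGAs attractive for tree classifiers.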
A comparison of addressee detection methods for multiparty conversations
Several algorithms have recently been proposed for recognizing addressees in a group conversational setting. These algorithms can rely on a variety of factors, including previous conversational roles, gaze, and type of dialogue act. Both statistical supervised machine learning algorithms and rule-based methods have been developed. In this paper, we compare several algorithms developed for different genres of multiparty dialogue, and propose a new synthesis algorithm that matches the performance of machine learning algorithms while maintaining the transparency of semantically meaningful rule-based algorithms.
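A rule-based addressee detector of the kind being compared can be sketched as an ordered cascade of rules. The rules and names below are hypothetical illustrations of the approach, not the paper's algorithm:

```python
# Sketch of a transparent rule-based addressee detector (illustrative
# rules only; the paper's rule set is richer, e.g. gaze and dialogue acts).

def addressee(utterance, speaker, participants, prev_speaker=None):
    """Guess who an utterance is addressed to.

    Rule 1: an explicit participant name in the utterance wins.
    Rule 2: otherwise, treat it as a reply to the previous speaker.
    Rule 3: otherwise, address the whole group.
    """
    for p in participants:
        if p != speaker and p.lower() in utterance.lower():
            return p
    if prev_speaker and prev_speaker != speaker:
        return prev_speaker
    return "GROUP"

people = ["Alice", "Bob", "Carol"]
print(addressee("Bob, what do you think?", "Alice", people))        # Bob
print(addressee("I agree.", "Carol", people, prev_speaker="Alice")) # Alice
print(addressee("Let's start.", "Alice", people))                   # GROUP
```

Each rule is semantically meaningful and inspectable, which is the transparency advantage the abstract contrasts against statistical classifiers.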
- …