
    Algorithms & Fiduciaries: Existing and Proposed Regulatory Approaches to Artificially Intelligent Financial Planners

    Artificial intelligence is no longer solely the stuff of science fiction. Today, basic machine learning algorithms are commonly used by a variety of companies, and more advanced forms are increasingly making their way into the consumer sphere, promising to optimize existing markets. In financial advising, machine learning algorithms promise to make advice available 24/7 and to significantly reduce costs, thereby opening the market for financial advice to lower-income individuals. However, the use of machine learning algorithms also raises concerns, among them whether these algorithms can meet the existing fiduciary standard imposed on human financial advisers, and how responsibility and liability should be apportioned when an autonomous algorithm falls short of that standard and harms a client. After summarizing the applicable law regulating investment advisers and the current state of robo-advising, this Note evaluates whether robo-advisers can meet the fiduciary standard and proposes alternative liability schemes for dealing with increasingly sophisticated machine learning algorithms.

    Study and Observation of the Variation of Accuracies of KNN, SVM, LMNN, ENN Algorithms on Eleven Different Datasets from UCI Machine Learning Repository

    Machine learning enables computers to learn from data without being explicitly programmed [1, 2]. Machine learning can be classified into supervised and unsupervised learning. In supervised learning, computers learn an objective function that maps an input to an output based on training input-output pairs [3]. Among the most efficient and widely used supervised learning algorithms are K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Large Margin Nearest Neighbor (LMNN), and Extended Nearest Neighbor (ENN). The main contribution of this paper is to implement these learning algorithms on eleven different datasets from the UCI machine learning repository and observe how the accuracy of each algorithm varies across datasets. Analyzing the accuracies gives a brief idea of the relationship between the machine learning algorithms and data dimensionality. All the algorithms are implemented in Matlab. From these accuracy observations, KNN, SVM, LMNN, and ENN can be compared in terms of their performance on each dataset. Comment: To be published in the 4th IEEE International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT 2018).
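
    As a rough illustration of the kind of comparison this paper performs (its own implementation is in Matlab, not reproduced here), the following Python sketch scores KNN and SVM on one UCI dataset. LMNN and ENN are omitted because they have no standard scikit-learn implementation; the dataset choice and hyperparameters are illustrative assumptions, not the paper's.

```python
# Minimal sketch: compare KNN and SVM test accuracy on one UCI dataset.
from sklearn.datasets import load_wine          # Wine is a UCI dataset
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Distance- and margin-based models both benefit from feature scaling.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_train, y_train)
    print(name, accuracy_score(y_test, clf.predict(X_test)))
```

    Repeating this loop over all eleven datasets would yield the accuracy table the paper analyzes.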

    Machine learning algorithms applied to weed management in integrated crop-livestock systems: a systematic literature review.

    In recent times, there has been environmental pressure to reduce the amount of pesticides applied to crops and, consequently, crop production costs. Investments have therefore been made in technologies that could reduce the use of herbicides on weeds. Among such technologies, machine learning approaches are growing in both number of applications and potential impact. This article aims to identify the main machine learning algorithms used for weed management in integrated crop-livestock systems. Through a systematic literature review, it was possible to determine where the selected studies were performed and which crop types were most common. The main search terms were "machine learning algorithms" + "weed management" + "integrated crop-livestock system". Although no results were found for all three terms together, the combinations "weed management" + "integrated crop-livestock system" and "machine learning algorithms" + "weed management" returned a significant number of studies, which were then subjected to a second layer of refinement by applying eligibility criteria. The results show that most of the studies came from the United States and from nations in Asia. Machine vision and deep learning were the most used machine learning models, representing 28% and 19% of all cases, respectively. These systems were applied to different practical solutions, the most prevalent being smart sprayers, which allow for site-specific herbicide application.

    Is it ethical to avoid error analysis?

    Machine learning algorithms tend to produce more accurate models when large datasets are available. In some cases, highly accurate models can hide the presence of bias in the data. Several published studies tackle the development of discrimination-aware machine learning algorithms. We center on the further evaluation of machine learning models through error analysis, to understand under what conditions a model does not work as expected. We focus on the ethical implications of avoiding error analysis, from the perspectives of falsification of results and discrimination. Finally, we show different ways to approach error analysis in non-interpretable machine learning algorithms such as deep learning. Comment: Presented as a poster at the 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017).
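
    To make the concern concrete, here is a minimal, hypothetical sketch of the kind of error analysis the paper advocates: a model whose overall accuracy looks acceptable while a per-group breakdown reveals that it fails on one subgroup. The data and group attribute are synthetic inventions for illustration only.

```python
# Sketch: overall accuracy can mask poor performance on a subgroup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                  # e.g. a protected attribute
X = rng.normal(size=(n, 5))
# The signal is much weaker for group 1, so one global model underserves it.
y = ((X[:, 0] * np.where(group == 0, 2.0, 0.3)
      + rng.normal(size=n)) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.5, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

print("overall accuracy:", (pred == y_te).mean())
for g in (0, 1):
    mask = g_te == g                           # slice errors by group
    print(f"group {g} accuracy:", (pred[mask] == y_te[mask]).mean())
```

    Reporting only the first number would hide the disparity that the per-group slices expose.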

    Learning Multiple Defaults for Machine Learning Algorithms

    The performance of modern machine learning methods highly depends on their hyperparameter configurations. One simple way of selecting a configuration is to use default settings, often proposed along with the publication and implementation of a new algorithm. Those default values are usually chosen in an ad-hoc manner to work well enough on a wide variety of datasets. To address this problem, different automatic hyperparameter configuration algorithms have been proposed, which select an optimal configuration per dataset. This principled approach usually improves performance, but adds algorithmic complexity and computational cost to the training procedure. As an alternative, we propose learning a set of complementary default values from a large database of prior empirical results. Selecting an appropriate configuration for a new dataset then requires only a simple, efficient, and embarrassingly parallel search over this set. We demonstrate the effectiveness and efficiency of the proposed approach in comparison to random search and Bayesian optimization.
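
    A minimal sketch of the core idea, under the assumption that the prior results are available as a matrix of configuration-by-dataset scores (the paper's actual database and selection algorithm are not reproduced here): greedily pick configurations that complement each other, so that for every dataset at least one chosen default scores well.

```python
# Sketch: greedy selection of complementary defaults from prior results.
# perf[i, j] = score of candidate configuration i on prior dataset j.
import numpy as np

rng = np.random.default_rng(0)
perf = rng.random((50, 30))       # 50 candidate configs x 30 prior datasets

def learn_defaults(perf, k=5):
    chosen = []
    best_so_far = np.full(perf.shape[1], -np.inf)
    for _ in range(k):
        # Gain of adding each config: mean over datasets of the improved
        # "best score achievable with the chosen set" per dataset.
        gains = np.maximum(best_so_far, perf).mean(axis=1)
        gains[chosen] = -np.inf    # never re-pick a chosen config
        i = int(np.argmax(gains))
        chosen.append(i)
        best_so_far = np.maximum(best_so_far, perf[i])
    return chosen

defaults = learn_defaults(perf, k=5)
print("complementary default set:", defaults)
# On a new dataset, one simply evaluates these k configs (embarrassingly
# parallel) and keeps the best -- no per-dataset optimization loop.
```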

    Automatic generation of hardware Tree Classifiers

    Machine learning is growing in popularity and spreading across different fields for various applications. Driven by this trend, machine learning algorithms are implemented on different hardware platforms and evaluated for test accuracy and throughput. FPGAs are a well-suited hardware platform for machine learning because of their re-programmability and lower power consumption. However, programming FPGAs for machine learning algorithms requires substantial engineering time and effort compared to a software implementation. We propose a software-assisted design flow that programs FPGAs for machine learning algorithms using our hardware library. The hardware library is highly parameterized and accommodates tree classifiers; at present, it consists of the components required to implement decision trees and random forests. The whole automation is wrapped in a Python script that takes the user from the first step, a dataset and a set of design choices, to the last step, hardware description code for the trained machine learning model.
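
    As a hypothetical illustration of the central step in such a flow, the sketch below walks a trained scikit-learn decision tree and emits nested if/else logic, shown here as C-like pseudocode; the paper's own library emits hardware description code and its parameterization is not reproduced.

```python
# Sketch: turn a trained decision tree into combinational if/else logic.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
t = tree.tree_

def emit(node=0, indent="  "):
    if t.children_left[node] == -1:            # leaf: output majority class
        cls = int(t.value[node].argmax())
        print(f"{indent}out = {cls};")
        return
    f, thr = t.feature[node], t.threshold[node]
    print(f"{indent}if (x[{f}] <= {thr:.4f}) {{")
    emit(t.children_left[node], indent + "  ")
    print(f"{indent}}} else {{")
    emit(t.children_right[node], indent + "  ")
    print(f"{indent}}}")

emit()
```

    Because each comparison is independent, such a tree maps naturally onto parallel comparators on an FPGA, which is what makes tree classifiers attractive targets for hardware generation.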

    Machine learning algorithms applied to the estimation of liquidity: the 10-year United States treasury bond

    Purpose: Having defined liquidity, the aim is to assess the predictive capacity of its representative variables, so that economic fluctuations may be better understood.
    Design/methodology/approach: Conceptual variables representative of liquidity are used to formulate the predictions. The results of various machine learning models are compared, leading to some reflections on the predictive value of the liquidity variables, with a view to guiding their selection.
    Findings: The predictive capacity of the model was found to vary depending on the source of the liquidity, in so far as data on liquidity within the private sector contributed more than data on public-sector liquidity to the prediction of economic fluctuations. International liquidity proved a more diffuse concept, and the standardization of its definition could be the focus of future studies. A benchmarking process was also performed when applying the state-of-the-art machine learning models.
    Originality/value: A better understanding of these variables may contribute to a deeper understanding of the operation of financial markets. Liquidity, one of the key financial market variables, is neither well defined nor standardized in the existing literature, which calls for further study. Hence, an applied study employing modern data science techniques can provide a fresh perspective on financial markets.
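
    Purely as a hypothetical illustration of the benchmarking step described above, the sketch below compares two regressors on synthetic stand-ins for the liquidity variables; the paper's actual variables, data, and model suite are not reproduced here.

```python
# Sketch: benchmark regressors predicting a target from liquidity features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

rng = np.random.default_rng(0)
n = 500
# Invented stand-ins for private-sector, public-sector and international
# liquidity; the private-sector column carries most of the signal.
X = rng.normal(size=(n, 3))
y = 0.8 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.5, size=n)

cv = TimeSeriesSplit(n_splits=5)               # respect temporal ordering
for name, model in [("linear", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(name, scores.mean().round(3))
```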