
    Is technical analysis in the foreign exchange market profitable? a genetic programming approach

    Using genetic programming techniques to find technical trading rules, we find strong evidence of economically significant out-of-sample excess returns to those rules for each of six exchange rates over the period 1981-1995. Further, when the dollar/deutschemark rules are allowed to determine trades in the other markets, there is a significant improvement in performance in all cases except the deutschemark/yen. Betas calculated for the returns with respect to various benchmark portfolios provide no evidence that the returns to these rules are compensation for bearing systematic risk. Bootstrapping results on the dollar/deutschemark indicate that the trading rules are detecting patterns in the data that are not captured by standard statistical models.
    Keywords: Programming (Mathematics); Foreign exchange
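    As a rough illustration of what evaluating such a rule involves, the sketch below applies a simple moving-average rule, one of the building blocks genetic programming typically combines, to a synthetic exchange-rate series and computes its return. The rule, the data, and the return calculation are hypothetical stand-ins; they ignore interest differentials and transaction costs and are not the paper's evolved rules.

```python
# Minimal sketch (not the paper's method): evaluating a simple moving-average
# trading rule of the kind genetic programming might evolve. The exchange-rate
# series is synthetic and the rule is a hand-picked stand-in.
import numpy as np

rng = np.random.default_rng(0)
log_rate = np.cumsum(rng.normal(0, 0.005, 2000))  # synthetic log exchange rate

def ma_rule(series, short=5, long=50):
    """Signal +1 (long foreign currency) when the short MA is above the long MA."""
    signals = np.zeros(len(series))
    for t in range(long, len(series)):
        s = series[t - short:t].mean()
        l = series[t - long:t].mean()
        signals[t] = 1.0 if s > l else -1.0
    return signals

signals = ma_rule(log_rate)
daily_log_returns = np.diff(log_rate)
# The position taken at time t earns the log return from t to t+1.
strategy_returns = signals[:-1] * daily_log_returns
print("approximate annualized return:", 252 * strategy_returns.mean())
```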

    Gradient-free activation maximization for identifying effective stimuli

    A fundamental question for understanding brain function is what types of stimuli drive neurons to fire. In visual neuroscience, this question has also been posed as characterizing the receptive field of a neuron. The search for effective stimuli has traditionally been based on a combination of insights from previous studies, intuition, and luck. Recently, the same question has emerged in the study of units in convolutional neural networks (ConvNets), and together with it a family of solutions was developed, generally referred to as "feature visualization by activation maximization." We sought to bring tools and techniques developed for studying ConvNets to the study of biological neural networks. However, one key difference that impedes direct translation of these tools is that gradients can be obtained from ConvNets using backpropagation, but such gradients are not available from the brain. To circumvent this problem, we developed a method for gradient-free activation maximization by combining a generative neural network with a genetic algorithm. We termed this method XDream (EXtending DeepDream with real-time evolution for activation maximization), and we have shown that it can reliably create strong stimuli for neurons in the macaque visual cortex (Ponce et al., 2019). In this paper, we describe extensive experiments characterizing the XDream method by using ConvNet units as in silico models of neurons. We show that XDream is applicable across network layers, architectures, and training sets; examine design choices in the algorithm; and provide practical guidance for choosing its hyperparameters. XDream is an efficient algorithm for uncovering neuronal tuning preferences in black-box networks using a vast and diverse stimulus space.
    Comment: 16 pages, 8 figures, 3 tables
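    The core loop the abstract describes, propose images with a generator, score them with a black-box unit, and evolve the latent codes with a genetic algorithm, can be sketched with toy stand-ins. In the sketch below the "generator" and the scored "neuron" are random linear maps chosen purely for illustration; the actual XDream method uses a pretrained generative network and scores images with ConvNet units or recorded neural responses.

```python
# Toy sketch of gradient-free activation maximization in the spirit of XDream:
# a genetic algorithm optimizes generator latent codes against a black-box unit.
# Both the generator and the scored unit are random stand-ins, not the real method.
import numpy as np

rng = np.random.default_rng(1)
LATENT_DIM, POP, GENERATIONS = 32, 40, 200

G = rng.normal(size=(LATENT_DIM, 64))  # toy "generator": latent code -> image features
w = rng.normal(size=64)                # toy "neuron" weights (opaque to the optimizer)

def score(codes):
    """Black-box activation: only function values are available, no gradients."""
    images = np.tanh(codes @ G)
    return images @ w

population = rng.normal(size=(POP, LATENT_DIM))
for gen in range(GENERATIONS):
    fitness = score(population)
    elite = population[np.argsort(fitness)[-POP // 4:]]      # keep the top 25%
    parents = elite[rng.integers(0, len(elite), size=(POP, 2))]
    mask = rng.random((POP, LATENT_DIM)) < 0.5               # uniform crossover
    children = np.where(mask, parents[:, 0], parents[:, 1])
    children += rng.normal(0, 0.1, size=children.shape)      # Gaussian mutation
    population = children

print("best activation found:", score(population).max())
```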

    Exploring a New ExpAce: The Complementarities between Experimental Economics and Agent-based Computational Economics

    What is the relationship, if any, between Experimental Economics and Agent-based Computational Economics? Experimental Economics (EXP) investigates individual behaviour (and the emergence of aggregate regularities) by means of human-subject experiments. Agent-based Computational Economics (ACE), on the other hand, studies the relationships between the micro and the macro level with the aid of artificial experiments. Note that the way ACE makes use of experiments to formulate theories is indeed similar to the way EXP does. The question we want to address is whether the two can complement and integrate with each other. What can Agent-based Computational Economics give to, and take from, Experimental Economics? Can they help and sustain each other, and ultimately gain ground beyond their respective restricted niches of practitioners? We believe that the answer to all these questions is yes: there can be, and there should be, profitable “contaminations” in both directions, of which we provide a first comprehensive discussion.
    Keywords: Experimental Economics; Agent-based Computational Economics; Agent-Based Models; Simulation

    DATA MINING: A SEGMENTATION ANALYSIS OF U.S. GROCERY SHOPPERS

    Consumers make choices about where to shop based on their preferences for a shopping environment and experience as well as the selection of products at a particular store. This study illustrates how retail firms and marketing analysts can use data mining techniques to better understand customer profiles and behavior. Among the key areas where data mining can produce new knowledge is the segmentation of customer databases according to demographics, buying patterns, geographics, attitudes, and other variables. This paper builds profiles of grocery shoppers based on their preferences for 33 retail grocery store characteristics. The data are from a representative, nationwide sample of 900 supermarket shoppers collected in 1999. Six customer profiles are found to exist: (1) "Time Pressed Meat Eaters", (2) "Back to Nature Shoppers", (3) "Discriminating Leisure Shoppers", (4) "No Nonsense Shoppers", (5) "The One Stop Socialites", and (6) "Middle of the Road Shoppers". Each of the customer profiles is described with respect to its underlying demographics and income. Consumer shopping segments cut across most demographic groups but are somewhat correlated with income. Hierarchical lists of preferences reveal that low price is not among the top five most important store characteristics. Experience with and preferences for internet shopping show that, of the 44% of shoppers who have access to the internet, only 3% have used it to order food.
    Keywords: Consumer/Household Economics; Food Consumption/Nutrition/Food Safety
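    The abstract does not specify the segmentation algorithm, so the sketch below uses k-means clustering on synthetic preference ratings purely as an illustrative stand-in for how six shopper segments might be derived from ratings of 33 store characteristics.

```python
# Illustrative stand-in (not the study's exact procedure): cluster shoppers by
# their ratings of store characteristics and summarize each segment. The data
# are synthetic; the study used a 1999 survey of 900 shoppers and 33 characteristics.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
ratings = rng.integers(1, 6, size=(900, 33)).astype(float)  # 1-5 importance ratings

X = StandardScaler().fit_transform(ratings)
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)

# Profile each segment by its highest-rated characteristics.
for k in range(6):
    members = ratings[kmeans.labels_ == k]
    top = np.argsort(members.mean(axis=0))[::-1][:5]
    print(f"segment {k}: n={len(members)}, top characteristic indices: {top}")
```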

    Prospect patents, data markets, and the commons in data-driven medicine : openness and the political economy of intellectual property rights

    Scholars who point to political influences and the regulatory function of patent courts in the USA have long questioned the courts’ subjective interpretation of what ‘things’ can be claimed as inventions. The present article sheds light on a different but related facet: the role of the courts in regulating knowledge production. I argue that the recent cases decided by the US Supreme Court and the Federal Circuit, which made diagnostics and software very difficult to patent and which attracted criticism for a wealth of different reasons, are fine case studies of the current debate over the proper role of the state in regulating the marketplace and knowledge production in the emerging information economy. The article explains that these patents are prospect patents that may be used by a monopolist to collect data that everybody else needs in order to compete effectively. As such, they raise familiar concerns about coordination failures that emerge when a monopolist controls a resource, such as datasets, that others need and cannot replicate. In effect, the courts regulated the market, primarily focusing on ensuring the free flow of data in the emerging marketplace, very much in the spirit of the ‘free the data’ language in various policy initiatives, yet at the same time with an eye to boosting downstream innovation. In doing so, these decisions essentially endorse practices of personal information processing that constitute a new type of public domain: a source of raw materials which are there for the taking and which have become among the most important inputs to commercial activity. From this vantage point, the legal interpretation of the private and the shared legitimizes a model of data extraction from individuals, the raw material of information capitalism, that will fuel the next generation of data-intensive therapeutics in the field of data-driven medicine.

    How costly is sustained low inflation for the U.S. economy?

    The authors study the welfare cost of inflation in a general equilibrium life-cycle model that includes households that live for many periods, production and capital, simple monetary and financial sectors, and a fairly elaborate government sector. The government’s taxation of capital income is not indexed for inflation. They find that a plausibly calibrated version of this model has a steady state that matches a variety of facts about the postwar U.S. economy. They use the model to estimate the welfare cost of permanent, policy-induced changes in the inflation rate and find that most of the costs of inflation are direct and indirect consequences of the fact that inflation increases the effective tax rate on capital income. The cost estimates are an order of magnitude larger than other estimates in the literature.
    Keywords: Economic conditions - United States; Inflation (Finance)
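    The mechanism the abstract emphasizes, that non-indexed taxation of nominal capital income makes inflation raise the effective tax rate on real returns, can be shown with a small numerical example. The parameter values below are hypothetical and are not the paper's calibration.

```python
# Numerical illustration (hypothetical numbers, not the paper's calibration) of
# how taxing the nominal return r + pi at a fixed statutory rate raises the
# effective tax rate on the real return to capital as inflation rises.
def effective_real_tax_rate(real_return, inflation, statutory_rate):
    """Real after-tax return is (1 - tau) * (r + pi) - pi; the effective rate
    is the share of the real return taxed away."""
    nominal = real_return + inflation
    after_tax_real = (1 - statutory_rate) * nominal - inflation
    return 1 - after_tax_real / real_return

for pi in (0.0, 0.02, 0.04):
    rate = effective_real_tax_rate(real_return=0.04, inflation=pi, statutory_rate=0.30)
    print(f"inflation {pi:.0%}: effective tax rate on real capital income = {rate:.1%}")
```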

    Assessing hyper parameter optimization and speedup for convolutional neural networks

    The increased processing power of graphical processing units (GPUs) and the availability of large image datasets have fostered a renewed interest in extracting semantic information from images. Promising results for complex image categorization problems have been achieved using deep learning, with neural networks composed of many layers. Convolutional neural networks (CNNs) are one such architecture, offering further opportunities for image classification. Advances in CNNs enable the development of training models using large labelled image datasets, but the hyperparameters need to be specified, which is challenging and complex due to their large number. A substantial amount of computational power and processing time is required to determine the optimal hyperparameters for a model that yields good results. This article provides a survey of hyperparameter search and optimization methods for CNN architectures.
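    As a concrete example of one family of methods such a survey covers, the sketch below implements plain random search over a CNN hyperparameter space. The search space and the train-and-validate objective are placeholders, not taken from the article.

```python
# Minimal sketch of random search over CNN hyperparameters. The search space
# and the objective are placeholders; substitute a real training loop to use it.
import random

SEARCH_SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "batch_size": [32, 64, 128],
    "num_conv_layers": [2, 3, 4, 5],
    "dropout": [0.0, 0.25, 0.5],
}

def train_and_validate(config):
    """Placeholder objective: replace with training a CNN under `config`
    and returning its validation accuracy."""
    return random.random()

def random_search(n_trials=20, seed=0):
    random.seed(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {name: random.choice(values) for name, values in SEARCH_SPACE.items()}
        score = train_and_validate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

config, score = random_search()
print("best configuration:", config, "validation accuracy:", round(score, 3))
```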