
    Non-convex Optimization for Machine Learning

    A vast majority of machine learning algorithms train their models and perform inference by solving optimization problems. In order to capture the learning and prediction problems accurately, structural constraints such as sparsity or low rank are frequently imposed, or else the objective itself is designed to be a non-convex function. This is especially true of algorithms that operate in high-dimensional spaces or that train non-linear models such as tensor models and deep networks. The freedom to express the learning problem as a non-convex optimization problem gives immense modeling power to the algorithm designer, but such problems are often NP-hard to solve. A popular workaround has been to relax non-convex problems to convex ones and use traditional methods to solve the (convex) relaxed optimization problems. However, this approach may be lossy and nevertheless presents significant challenges for large-scale optimization. On the other hand, direct approaches to non-convex optimization have met with resounding success in several domains and remain the methods of choice for the practitioner, as they frequently outperform relaxation-based techniques; popular heuristics include projected gradient descent and alternating minimization. However, these heuristics are often poorly understood in terms of their convergence and other properties. This monograph presents a selection of recent advances that bridge a long-standing gap in our understanding of these heuristics. It leads the reader through several widely used non-convex optimization techniques, as well as applications thereof. The goal of this monograph is both to introduce the rich literature in this area and to equip the reader with the tools and techniques needed to analyze these simple procedures for non-convex problems. Comment: The official publication is available from now publishers via http://dx.doi.org/10.1561/220000005
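    As a concrete illustration of one of the heuristics named above, the sketch below implements projected gradient descent for a sparsity-constrained least-squares problem, with the projection step keeping the k largest-magnitude entries. This is not taken from the monograph; the objective, step size, and synthetic data are illustrative assumptions.

```python
import numpy as np

def project_sparse(x, k):
    """Project onto k-sparse vectors by keeping the k largest-magnitude entries."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def projected_gradient_descent(A, b, k, step=None, iters=200):
    """Minimise ||Ax - b||^2 subject to x having at most k non-zero entries."""
    if step is None:
        step = 1.0 / (np.linalg.norm(A, 2) ** 2)   # conservative step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                   # gradient of the least-squares objective
        x = project_sparse(x - step * grad, k)     # gradient step, then projection
    return x

# Toy usage on synthetic data with a 5-sparse ground truth (hypothetical example).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
x_true = np.zeros(50)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true
x_hat = projected_gradient_descent(A, b, k=5)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```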

    Developing Real Estate Automated Valuation Models by Learning from Heterogeneous Data Sources

    In this paper we propose a data acquisition methodology and a Machine Learning solution for the partially automated valuation of real estate properties. The novelty and importance of the approach lie in two aspects: (1) compared to Automated Valuation Models (AVMs) as available to real estate operators, it is highly adaptive and non-parametric, and it integrates diverse data sources; (2) compared to the Machine Learning literature that has addressed real estate applications, it is more directly linked to the actual business processes of appraisal companies: in this context, prices advertised online are normally not the most relevant source of information, while an appraisal document must be proposed by an expert and approved by a validator, possibly with the help of technological tools. We describe a case study using a set of 7988 appraisal documents for residential properties in Turin, Italy. Open data were also used, including location, nearby points of interest, comparable property prices, and the Italian revenue service area code. The observed mean error as measured on an independent test set was around 21 K€, for an average property value of about 190 K€. The AVM described here can help the stakeholders in this process (experts, appraisal company) by providing a reference price to be used by the expert, by allowing the appraisal company to validate its evaluations in a faster and cheaper way, and by helping the expert list the set of comparable properties that need to be included in the appraisal document.
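    The abstract does not specify the model family; as a rough sketch of the kind of adaptive, non-parametric AVM it describes, the following trains a gradient-boosting regressor on a hypothetical feature table built from appraisal and open data, and reports the mean absolute error on a held-out test set. The file name, column names, and hyperparameters are invented for illustration.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical table combining appraisal documents with open data sources.
df = pd.read_csv("appraisals.csv")  # placeholder file name
features = ["floor_area_m2", "latitude", "longitude",
            "n_pois_500m", "mean_comparable_price", "revenue_area_code"]
X = pd.get_dummies(df[features], columns=["revenue_area_code"])
y = df["appraised_value_eur"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Non-parametric model: no functional form is assumed for the price surface.
model = GradientBoostingRegressor(n_estimators=500, max_depth=3, learning_rate=0.05)
model.fit(X_train, y_train)

# Evaluate on an independent test set, mirroring the paper's reported mean error.
print("MAE (EUR):", mean_absolute_error(y_test, model.predict(X_test)))
```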

    Non Linear Modelling of Financial Data Using Topologically Evolved Neural Network Committees

    Most artificial neural network modelling methods are difficult to use, as maximising or minimising an objective function in a non-linear context involves complex optimisation algorithms. Problems related to the efficiency of these algorithms are often compounded by the difficulty of estimating a priori a network's fixed topology for a specific problem, making it even harder to appreciate the real power of neural networks. In this thesis, we propose a method that overcomes these issues by using genetic algorithms to optimise a network's weights and topology simultaneously. The proposed method searches for virtually any kind of network, whether it is a simple feed-forward, recurrent, or even an adaptive network. When the data is high-dimensional, modelling its often sophisticated behaviour is a very complex task that requires the optimisation of thousands of parameters. To help optimisation techniques overcome their limitations or failures, practitioners use methods to reduce the dimensionality of the data space. However, some of these methods are forced to make unrealistic assumptions when applied to non-linear data, while others are very complex and require a priori knowledge of the intrinsic dimension of the system, which is usually unknown and very difficult to estimate. The proposed method is non-linear and reduces the dimensionality of the input space without any information on the system's intrinsic dimension. This is achieved by first searching in a low-dimensional space of simple networks, and gradually making them more complex as the search progresses by elaborating on existing solutions. The high-dimensional space of the final solution is only encountered at the very end of the search. This increases the system's efficiency by guaranteeing that the network becomes no more complex than necessary. The modelling performance of the system is further improved by searching not just for one network as the ideal solution to a specific problem, but for a combination of networks. These committees of networks are formed by combining a diverse selection of network species from a population of networks derived by the proposed method. This approach automatically exploits the strengths and weaknesses of each member of the committee while avoiding having all members give the same bad judgements at the same time. In this thesis, the proposed method is used in the context of non-linear modelling of high-dimensional financial data. Experimental results are encouraging as far as both robustness and complexity are concerned.
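    As a highly simplified sketch of the core idea of evolving both weights and topology with a genetic algorithm (growing networks from a minimal hidden layer and averaging a small committee of survivors), the toy example below fits a one-dimensional regression task. The fitness function, mutation rates, committee size and data are all assumptions and not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])                                  # toy non-linear target

def init_individual(hidden=1):
    # Start from the simplest topology: a single hidden unit.
    return {"W1": 0.5 * rng.standard_normal((1, hidden)), "b1": np.zeros(hidden),
            "W2": 0.5 * rng.standard_normal((hidden, 1)), "b2": np.zeros(1)}

def predict(ind, X):
    h = np.tanh(X @ ind["W1"] + ind["b1"])
    return (h @ ind["W2"] + ind["b2"])[:, 0]

def fitness(ind):
    return -np.mean((predict(ind, X) - y) ** 2)          # negative MSE: larger is better

def mutate(ind):
    child = {k: v.copy() for k, v in ind.items()}
    for k in child:                                      # perturb all weights
        child[k] = child[k] + 0.1 * rng.standard_normal(child[k].shape)
    if rng.random() < 0.1:                               # occasionally grow the topology
        child["W1"] = np.hstack([child["W1"], 0.1 * rng.standard_normal((1, 1))])
        child["b1"] = np.append(child["b1"], 0.0)
        child["W2"] = np.vstack([child["W2"], 0.1 * rng.standard_normal((1, 1))])
    return child

pop = [init_individual() for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                                 # truncation selection
    pop = survivors + [mutate(survivors[rng.integers(len(survivors))]) for _ in range(20)]

pop.sort(key=fitness, reverse=True)
committee = pop[:5]                                      # a small, diverse committee
ensemble = np.mean([predict(m, X) for m in committee], axis=0)
print("best MSE:", -fitness(pop[0]), "committee MSE:", np.mean((ensemble - y) ** 2))
```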

    Text-mining in macroeconomics: the wealth of words

    The coming to life of the Royal Society in 1660 surely represented an important milestone in the history of science, not least in Economics. Yet its founding motto, "Nullius in verba", could be somewhat misleading. Words may in fact play an important role in Economics. In order to extract the relevant information that words provide, this thesis relies on state-of-the-art methods from the information retrieval and computer science communities. Chapter 1 shows how policy uncertainty indices can be constructed via unsupervised machine learning models. Using unsupervised algorithms proves useful in terms of the time and resources needed to compute these indices. The unsupervised machine learning algorithm, Latent Dirichlet Allocation (LDA), allows the different themes in documents to be obtained without any prior information about their content. Given that this algorithm is widely used throughout this thesis, this chapter offers a detailed yet intuitive description of its underlying mechanics. Chapter 2 uses the LDA algorithm to categorise the political uncertainty embedded in the Scottish media. In particular, it models the uncertainty regarding Brexit and the Scottish independence referendum. These referendum-related indices are compared with the Google search queries "Scottish independence" and "Brexit", showing strong similarities. The second part of the chapter examines the relationship between these indices and investment in a longitudinal panel dataset of 2,589 Scottish firms over the period 2008-2017. It presents evidence of greater sensitivity for firms that are financially constrained or whose investment is to a greater degree irreversible. Additionally, it is found that Scottish companies located on the border with England show a stronger negative correlation between investment and Scottish political uncertainty than those operating in the rest of the country. Contrary to expectations, investment by manufacturing companies appears less sensitive to political uncertainty. Chapter 3 builds eight different policy-related uncertainty indicators for the four largest euro area countries using press media in German, French, Italian and Spanish from January 2000 until May 2019. This is done in two steps. First, a continuous bag-of-words model is used to obtain words semantically similar to "economy" and "uncertainty" across the four languages and contexts. This allows for the retrieval of all news articles relevant to economic uncertainty. Second, LDA is again employed to model the different sources of uncertainty for each country, highlighting how easily LDA can adapt to different languages and contexts. Using a Bayesian Structural Vector Autoregressive (BSVAR) set-up, strong heterogeneity in the relationship between uncertainty and investment in machinery and equipment is then documented. For example, while investment in France, Italy and Spain reacts heavily to political uncertainty shocks, in Germany it is more sensitive to trade uncertainty shocks. Finally, Chapter 4 analyses English-language media from Europe, India and the United States, augmented by a sentiment analysis, to study how different narratives concerning cryptocurrencies influence their prices. The time span ranges from April 2013 to December 2018, a period in which cryptocurrency prices exhibited parabolic behaviour. In addition, this case study is motivated by Shiller's belief that narratives around cryptocurrencies might have led to this price behaviour. Nonetheless, the relationship between narratives and prices is likely to be driven by complex interactions. For example, articles written in the media about a specific phenomenon will attract or deter new investors depending on their content and tone (sentiment). Moreover, the press might also react to price changes by increasing the coverage of a given topic. For this reason, a recent causal model, Convergent Cross Mapping (CCM), suited to discovering causal relationships in complex dynamical ecosystems, is used. I find bidirectional causal relationships between prices and narratives concerning investment and regulation, while a mild unidirectional causal association exists for narratives that relate technology and security to prices.
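    Since LDA is the workhorse of the thesis, a minimal sketch of the topic-extraction step with scikit-learn is shown below. The toy corpus, the number of topics, and the idea of averaging topic shares into a periodic index are placeholders, not the author's actual configuration.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder corpus: in the thesis these would be newspaper articles.
articles = [
    "uncertainty over brexit weighs on scottish firms and investment",
    "referendum on independence raises political uncertainty in scotland",
    "central bank policy and trade tensions cloud the economic outlook",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(articles)

# Fit LDA with a hypothetical number of topics; no labels or context are supplied.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(doc_term)        # per-document topic proportions

# Inspect the top words of each topic in order to label it (e.g. "Brexit").
terms = vectorizer.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-5:][::-1]]
    print(f"topic {k}:", top)

# An uncertainty index can then be built by averaging the share of
# uncertainty-related topics over all articles published in each period.
```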

    Beyond Spatial Auto-Regressive Models: Predicting Housing Prices with Satellite Imagery

    When modeling geo-spatial data, it is critical to capture spatial correlations in order to achieve high accuracy. Spatial Auto-Regression (SAR) is a common tool used to model such data, where the spatial contiguity matrix (W) encodes the spatial correlations. However, the efficacy of SAR is limited by two factors. First, it depends on the choice of contiguity matrix, which is typically not learnt from data but is instead assumed to be known a priori. Second, it assumes that the observations can be explained by linear models. In this paper, we propose a Convolutional Neural Network (CNN) framework to model geo-spatial data (specifically housing prices) and to learn the spatial correlations automatically. We show that neighborhood information embedded in satellite imagery can be leveraged to achieve the desired spatial smoothing. An additional upside of our framework is the relaxation of the linearity assumption on the data. Specific challenges we tackle while implementing our framework include: (i) how much of the neighborhood is relevant when estimating housing prices? (ii) what is the right approach to capture multiple resolutions of satellite imagery? and (iii) what other data sources can help improve the estimation of spatial correlations? We demonstrate a marked improvement of 57% over the SAR baseline through the use of features from deep neural networks for the cities of London, Birmingham and Liverpool. Comment: 10 pages, 5 figures
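    The paper's exact architecture is not described in the abstract; the following is a minimal sketch of the general idea of regressing a housing price directly from a satellite image patch of a property's neighbourhood. The patch size, channel counts, and layer widths are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class PriceCNN(nn.Module):
    """Regress a housing price from a satellite image patch of the neighbourhood."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),           # pool to a fixed-size neighbourhood embedding
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        z = self.features(x).flatten(1)        # (batch, 32)
        return self.head(z).squeeze(1)         # predicted price per sample

# Hypothetical batch: 64x64 RGB patches centred on each property, with dummy prices.
model = PriceCNN()
patches = torch.randn(8, 3, 64, 64)
prices = torch.rand(8) * 1e6
loss = nn.functional.mse_loss(model(patches), prices)
loss.backward()
print(loss.item())
```

    Multiple imagery resolutions (one of the challenges listed in the abstract) could, for instance, be handled by passing patches of different ground footprints through parallel branches of such a network and concatenating their embeddings before the regression head.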