
    Structure from motion systems for architectural heritage. A survey of the internal loggia courtyard of Palazzo dei Capitani, Ascoli Piceno, Italy

    We present the results of a point-cloud-based survey derived from the use of image-based techniques, in particular multi-image monoscopic digital photogrammetry systems and software: the so-called “structure-from-motion” technique. The aim is to evaluate the advantages and limitations of such procedures in architectural surveying, particularly in conditions that are “at the limit”. A particular case study was chosen: the courtyard of Palazzo dei Capitani del Popolo in Ascoli Piceno, Italy, which can be considered an ideal example due to its notable vertical, rather than horizontal, layout. In this context, by comparing and evaluating the different results, we present experimentation on this single case study with the aim of identifying the best workflow to realise a complex, articulated set of representations—using 3D modelling and 2D processing—necessary to correctly document the particular characteristics of such an architectural object.
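    For readers unfamiliar with structure-from-motion, the following is a minimal two-view sketch of the technique using OpenCV. It is not the authors' survey pipeline; the camera intrinsics and image file names are assumed placeholders, and a real architectural survey would use many overlapping photographs with bundle adjustment and dense reconstruction.

```python
# Hedged two-view structure-from-motion sketch (OpenCV); intrinsics K and image
# paths are illustrative assumptions, not values from the survey.
import cv2
import numpy as np

K = np.array([[2500.0, 0.0, 1500.0],    # assumed focal length and principal point
              [0.0, 2500.0, 1000.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("courtyard_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("courtyard_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match SIFT features between two overlapping photographs.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the relative camera pose from the essential matrix, then triangulate
# a sparse point cloud of the facade.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T   # homogeneous -> Euclidean 3D points
```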

    Search Heuristics, Case-Based Reasoning and Software Project Effort Prediction

    This paper reports on the use of search techniques to help optimise a case-based reasoning (CBR) system for predicting software project effort. A major problem, common to ML techniques in general, has been dealing with large numbers of case features, some of which can hinder the prediction process. Unfortunately, searching for the optimal feature subset is a combinatorial problem and therefore NP-hard. This paper examines the use of random searching, hill climbing and forward sequential selection (FSS) to tackle this problem. Results from examining a set of real software project data show that even random searching was better than using all available features (average error 35.6% rather than 50.8%). Hill climbing and FSS both produced results substantially better than the random search (15.3% and 13.1% respectively), but FSS was more computationally efficient. Providing a description of the fitness landscape of a problem along with search results is a step towards the classification of search problems and their assignment to optimum search techniques. This paper attempts to describe the fitness landscape of this problem by combining the results from random searches and hill climbing, as well as using multi-dimensional scaling to aid visualisation. Amongst other findings, the visualisation results suggest that some form of heuristic-based initialisation might prove useful for this problem.
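    As an illustration of the FSS idea, here is a hedged sketch of forward sequential selection wrapped around a simple CBR-style nearest-neighbour effort predictor. The synthetic data, the MMRE error measure and the cross-validation split are illustrative assumptions, not the paper's exact experimental setup.

```python
# Forward sequential selection (FSS) sketch around a k-NN analogy-based predictor.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_predict

def mmre(y_true, y_pred):
    """Mean magnitude of relative error, a common effort-prediction metric."""
    return np.mean(np.abs(y_true - y_pred) / y_true)

def evaluate(X, y, features):
    """Fitness of a feature subset: cross-validated MMRE of a k-NN analogue model."""
    preds = cross_val_predict(KNeighborsRegressor(n_neighbors=3),
                              X[:, features], y, cv=5)
    return mmre(y, preds)

def forward_sequential_selection(X, y):
    remaining = list(range(X.shape[1]))
    selected, best_err = [], np.inf
    while remaining:
        # Try adding each remaining feature; keep the single best addition.
        err, f = min((evaluate(X, y, selected + [f]), f) for f in remaining)
        if err >= best_err:          # stop when no addition improves fitness
            break
        selected.append(f)
        remaining.remove(f)
        best_err = err
    return selected, best_err

# Usage with synthetic data standing in for real project records:
rng = np.random.default_rng(0)
X = rng.random((60, 10))
y = 5 + 20 * X[:, 0] + 10 * X[:, 3] + rng.random(60)
subset, err = forward_sequential_selection(X, y)
print(subset, round(err, 3))
```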

    Rethinking macroeconomics: how G5 currency markets have responded to unconventional monetary policy

    The G5 carry trade, where high interest rate currencies appreciate and low interest rate currencies depreciate, had been a persistent anomaly in financial markets since the collapse of Bretton Woods in 1971. Conventional economics said that the reverse should happen: low interest rates were supposed to stimulate the domestic economy, leading to growth and currency appreciation, rather than fund cross-border positions in search of higher yields. The Global Financial Crisis resulted in a major dislocation of currency markets, after which the G5 carry trade reversed. This paper is an empirical study of this reversal and its implications for macroeconomic theory. Using overnight and one-month carry trades as a proxy for market reactions to monetary policy, we find that the period leading up to the Global Financial Crisis and this reversal was increasingly subdued: the so-called ‘Great Moderation’. Financial crises show up as outliers in the data: temporary reversals of the carry trade, during which central banks provide additional liquidity in the form of lower interest rates. These results suggest that, prior to 2008, conventional monetary policy – using high/low interest rates to dampen/boost growth and inflation – was being counteracted by capital flows in the opposite direction, in search of high yields. Only since 2008, with unconventional monetary policy – QE, negative interest rates and a reduction in banks’ proprietary trading – have G5 currencies responded as predicted by conventional economics: G5 currencies with low/high interest rates have appreciated/depreciated. The results suggest that macroeconomic theories need to be reconsidered, to take account of cross-border capital flows in search of yield, and the effectiveness of unconventional monetary policy.
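    As a rough illustration of the return the carry trade targets, the sketch below adds the interest rate differential to the change in the spot exchange rate over the holding period. This is a generic textbook approximation, not the paper's exact specification of the overnight or one-month trades.

```python
# Illustrative one-month carry-trade return: borrow the low-yield funding
# currency, invest in the high-yield target currency, add the spot-rate change.
def carry_trade_return(i_target, i_funding, spot_start, spot_end):
    """
    i_target, i_funding : annualised one-month interest rates (decimals)
    spot_start, spot_end: target-currency price in funding-currency units
    Returns the approximate one-month excess return on the carry position.
    """
    rate_differential = (i_target - i_funding) / 12.0    # one month of carry
    fx_change = (spot_end - spot_start) / spot_start     # target-currency appreciation
    return rate_differential + fx_change

# Example: 4% vs 0.5% rates, with the high-yield currency appreciating 1%.
print(carry_trade_return(0.04, 0.005, 1.00, 1.01))   # ~0.013, i.e. roughly 1.3%
```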

    Visualising Fe speciation diversity in ocean particulate samples by micro X-ray absorption near-edge spectroscopy

    This paper is not subject to U.S. copyright. The definitive version was published in Environmental Chemistry 11 (2014): 10-17, doi:10.1071/EN13075.

    It is a well-known truism that natural materials are inhomogeneous, so analysing them on a point-by-point basis can generate a large volume of data, from which it becomes challenging to extract understanding. In this paper, we show an example in which particles taken from the ocean in two different regions (the Western Subarctic Pacific and the Australian sector of the Southern Ocean, south of Tasmania) are studied by Fe K-edge micro X-ray absorption near-edge spectroscopy (μXANES). The resulting set of data consists of 209 spectra from the Western Subarctic Pacific and 126 from the Southern Ocean. We show the use of principal components analysis with an interactive projection visualisation tool to reduce the complexity of the data to something manageable. The Western Subarctic Pacific particles were grouped into four main populations, each of which was characterised by spectra consistent with mixtures of 1–3 minerals: (1) Fe3+ oxyhydroxides + Fe3+ clays + Fe2+ phyllosilicates, (2) Fe3+ clays, (3) mixed-valence phyllosilicates and (4) magnetite + Fe3+ clays + Fe2+ silicates, listed in order of abundance. The Southern Ocean particles break into three clusters: (1) Fe3+-bearing clays + Fe3+ oxyhydroxides, (2) Fe2+ silicates + Fe3+ oxyhydroxides and (3) Fe3+ oxides + Fe3+-bearing clays + Fe2+ silicates, in abundance order. Although there was some overlap between the two regions, this analysis shows that the particulate Fe mineral assemblage is distinct between the Western Subarctic Pacific and the Southern Ocean, with potential implications for the bioavailability of particulate Fe in these two iron-limited regions. We then discuss possible advances in the methods, including automatic methods for characterising the structure of the data.

    The operations of the Advanced Light Source at Lawrence Berkeley National Laboratory are supported by the Director, Office of Science, Office of Basic Energy Sciences, US Department of Energy under contract number DE-AC02-05CH11231. Collection of samples for the VERTIGO project was supported by the US National Science Foundation Program in Chemical Oceanography to Ken Buesseler and the US Department of Energy, Office of Science, Biological and Environmental Research Program to Jim Bishop. The SAZ-SENSE project was supported by the Australian Government Cooperative Research Centres Programme. Collection of spectroscopic data by PJL was supported through the WHOI Postdoctoral Scholar Program, WHOI Independent Study Award and NSF Chemical Oceanography.
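    The kind of dimensionality reduction and grouping described can be sketched as follows. The file name, the number of components and the cluster count are assumptions for illustration; the paper itself used an interactive projection visualisation tool rather than an automatic clustering step.

```python
# Sketch: reduce a matrix of Fe K-edge XANES spectra with PCA and group the
# particles into candidate populations for comparison with reference minerals.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# spectra: one row per particle spectrum, one column per energy point
spectra = np.loadtxt("xanes_spectra.csv", delimiter=",")   # assumed file name

# PCA reduces each spectrum to a few scores that can be inspected in an
# interactive projection / scatterplot tool.
pca = PCA(n_components=3)
scores = pca.fit_transform(spectra)
print("variance explained:", pca.explained_variance_ratio_)

# Group particles into candidate populations; cluster means can then be compared
# against reference spectra (oxyhydroxides, clays, phyllosilicates, magnetite...).
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
for k in np.unique(labels):
    print(f"cluster {k}: {np.sum(labels == k)} particles")
```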

    Simple and Effective Visual Models for Gene Expression Cancer Diagnostics

    In this paper we show that diagnostic classes in cancer gene expression data sets, which most often include thousands of features (genes), may be effectively separated with simple two-dimensional plots such as the scatterplot and the radviz graph. The principal innovation proposed in the paper is a method called VizRank, which is able to score and identify the best among possibly millions of candidate projections for visualisation. Compared to techniques recently much applied in the field of cancer genomics, including neural networks, support vector machines and various ensemble-based approaches, VizRank is fast and finds visualization models that can be easily examined and interpreted by domain experts. Our experiments on a number of gene expression data sets show that VizRank was always able to find data visualizations with a small number of genes (two to seven) and excellent class separation. In addition to providing grounds for gene expression cancer diagnosis, VizRank and its visualizations also identify small sets of relevant genes, uncover interesting gene interactions and point to outliers and potential misclassifications in cancer data sets.
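    The core idea of scoring candidate projections can be sketched as below: rate each two-gene scatterplot by how well a simple classifier separates the diagnostic classes in that 2D view, and keep the best-scoring pairs. The exhaustive pair enumeration, the k-NN scorer and the synthetic data are illustrative choices only; the actual VizRank method searches a far larger projection space heuristically.

```python
# VizRank-style ranking of two-gene scatterplot projections by class separation.
from itertools import combinations
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def projection_score(X, y, pair):
    """Class separation of one 2D projection, via cross-validated k-NN accuracy."""
    return cross_val_score(KNeighborsClassifier(n_neighbors=5),
                           X[:, list(pair)], y, cv=5).mean()

def rank_projections(X, y, top=10):
    scored = [(projection_score(X, y, pair), pair)
              for pair in combinations(range(X.shape[1]), 2)]
    return sorted(scored, reverse=True)[:top]

# Synthetic stand-in for an expression matrix (samples x genes) and class labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 30))
y = (X[:, 4] + X[:, 17] > 0).astype(int)   # two "informative" genes by construction
for score, pair in rank_projections(X, y, top=3):
    print(f"genes {pair}: separation score {score:.2f}")
```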

    Unravelling black box machine learning methods using biplots

    Following the development of new mathematical techniques, the improvement of computer processing power and the increased availability of possible explanatory variables, the financial services industry is moving toward the use of new machine learning methods, such as neural networks, and away from older methods such as generalised linear models. However, their use is currently limited because they are seen as “black box” models, which give predictions without justification and are therefore not understood and cannot be trusted. The goal of this dissertation is to expand on the theory and use of biplots to visualise the impact of the various input factors on the output of the machine learning black box. Biplots are used because they give an optimal two-dimensional representation of the data set on which the machine learning model is based. The biplot allows every point on the biplot plane to be converted back to the original dimensions – in the same format as is used by the machine learning model. This allows the output of the model to be represented by colour coding each point on the biplot plane according to the output of an independently calibrated machine learning model. The interaction of the changing prediction probabilities – represented by the coloured output – in relation to the data points and the variable axes and category level points represented on the biplot allows the machine learning model to be interpreted both globally and locally. By visualising the models and their predictions, this dissertation aims to remove the stigma of calling non-linear models “black box” models and to encourage their wider application in the financial services industry.
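    A minimal sketch of the colour-coding idea is given below: build a 2D plane from the data, map a grid of points on that plane back to the original feature space, and colour each grid point by the prediction of an independently trained model. PCA is used here as a stand-in for the dissertation's biplot construction, and the data set and classifier are synthetic assumptions.

```python
# Colour a 2D projection plane by the output of a "black box" model.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=6, random_state=0)

# Independently calibrated model whose behaviour we want to visualise.
model = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X, y)

pca = PCA(n_components=2).fit(X)          # the 2D "biplot plane" (PCA stand-in)
Z = pca.transform(X)

# Grid over the plane, projected back to all original dimensions, then scored.
xx, yy = np.meshgrid(np.linspace(Z[:, 0].min(), Z[:, 0].max(), 200),
                     np.linspace(Z[:, 1].min(), Z[:, 1].max(), 200))
grid_plane = np.c_[xx.ravel(), yy.ravel()]
grid_original = pca.inverse_transform(grid_plane)      # back to the model's input format
proba = model.predict_proba(grid_original)[:, 1].reshape(xx.shape)

plt.contourf(xx, yy, proba, levels=20, cmap="RdYlBu", alpha=0.7)
plt.scatter(Z[:, 0], Z[:, 1], c=y, cmap="RdYlBu", edgecolor="k", s=15)
plt.xlabel("plane axis 1"); plt.ylabel("plane axis 2")
plt.title("Model output over the projection plane")
plt.show()
```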

    Applying blended conceptual spaces to variable choice and aesthetics in data visualisation

    Computational creativity is an active area of research within the artificial intelligence domain that investigates which aspects of computing can be considered an analogue to the human creative process. Computers can be programmed to emulate the types of things that the human mind can do. Artificial creativity is worthy of study for two reasons: firstly, it can help in understanding human creativity, and secondly, it can help with the design of computer programs that appear to be creative. Although the implementation of creativity in computer algorithms is an active field, much of the research fails to specify which of the known theories of creativity it is aligning with. The combination of computational creativity with computer-generated visualisations has the potential to produce visualisations that are context sensitive with respect to the data and could solve some of the current automation problems that computers experience. In addition, theories of creativity could theoretically be used to compute unusual data combinations, or to introduce graphical elements that draw attention to the patterns in the data. More could also be learned about the creativity involved as humans go about the task of generating a visualisation. The purpose of this dissertation was to develop a computer program that can automate the generation of a visualisation, for a suitably chosen visualisation type over a small domain of knowledge, using a subset of the computational creativity criteria, in order to explore the effects of introducing conceptual blending techniques. The problem is that existing computer programs that generate visualisations lack the creativity, intuition, background information, and visual perception that enable a human to decide which aspects of the visualisation will expose patterns that are useful to the consumer of the visualisation. The main research question that guided this dissertation was: “How can criteria derived from theories of creativity be used in the generation of visualisations?”. In order to answer this question, an analysis was done to determine which creativity theories and artificial intelligence techniques could potentially be used to implement those theories in the context of computer-generated visualisations. Measurable attributes and criteria that are sufficient for an algorithm that claims to model creativity were explored. The parts of the visualisation pipeline were identified, and the aspects of visualisation generation that humans are better at than computers were explored. Themes that emerged in both the computational creativity and the visualisation literature were highlighted. Finally, a prototype was built that started to investigate the use of computational creativity methods in the ‘variable choice’ and ‘aesthetics’ stages of the data visualisation pipeline.

    School of Computing. M. Sc. (Computing)