
    Explaining the Adaptive Generalisation Gap

    We conjecture that the inherent difference in generalisation between adaptive and non-adaptive gradient methods stems from the increased estimation noise in the flattest directions of the true loss surface. We demonstrate that typical schedules used for adaptive methods (with low numerical stability or damping constants) serve to bias movement towards flat directions relative to sharp ones, effectively amplifying the noise-to-signal ratio and harming generalisation. We further demonstrate that the numerical stability/damping constant used in these methods can be decomposed into a learning rate reduction and a linear shrinkage of the estimated curvature matrix. We then demonstrate significant generalisation improvements by increasing the shrinkage coefficient, closing the generalisation gap entirely in both logistic regression and deep neural network experiments. Finally, we show that other popular modifications to adaptive methods, such as decoupled weight decay and partial adaptivity, calibrate parameter updates to make better use of sharper, more reliable directions.
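
    A worked form of the decomposition mentioned above (a sketch derived from the stated relationship, not necessarily the paper's exact notation): for an estimated curvature matrix \hat{H} and damping constant \delta, the damped preconditioner factors into a global learning-rate reduction and a linear shrinkage of \hat{H} towards the identity,

        (\hat{H} + \delta I)^{-1}
          = \frac{1}{1+\delta}\left(\tfrac{1}{1+\delta}\hat{H} + \tfrac{\delta}{1+\delta} I\right)^{-1}
          = \frac{1}{1+\delta}\bigl((1-\rho)\hat{H} + \rho I\bigr)^{-1},
        \qquad \rho = \frac{\delta}{1+\delta}.

    Under this reading, raising the shrinkage coefficient \rho (equivalently, the damping constant, up to the learning-rate factor) pulls the curvature estimate towards the identity, which is the quantity the experiments above vary.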

    Assessment of fundamental strategic issues in structural change in United Kingdom and South African ports by systemic scenarios

    The future complexity of strategic issues in international structural change was demonstrated for UK and South African ports. This complexity arose from the likely extent of structural constraints and the effects of stakeholder power. From a review of emerging Advanced Systems Theory, a new boundary-spanning perspective on strategy was developed, which led to the specification of conceptual circumstances of potential outcomes of change. Since existing systems methodologies could not accommodate future power relationships, a new methodology and data collection technique was developed. The circumstances were developed into multiple scenarios, which were judged by international decision-makers. These judgements were subjected to quantitative and qualitative analysis from a Strategic Choice Perspective. The outcome was a boundary-spanning 'Long-term Strategic Service Industry' model, which proposed the outlines of the future strategy and organisational structure that ought to be adopted to meet 'public interest' constraints. A dual subject and methodological contribution was made.

    Dimension-adaptive bounds on compressive FLD Classification

    Efficient dimensionality reduction by random projection (RP) is gaining popularity, hence the learning guarantees achievable in RP spaces are of great interest. In the finite-dimensional setting, it has been shown for the compressive Fisher Linear Discriminant (FLD) classifier that, for good generalisation, the required target dimension grows only as the log of the number of classes and is not adversely affected by the number of projected data points. However, these bounds depend on the dimensionality d of the original data space. In this paper we give further guarantees that remove d from the bounds under certain conditions of regularity on the data density structure. In particular, if the data density does not fill the ambient space, then the error of compressive FLD is independent of the ambient dimension and depends only on a notion of 'intrinsic dimension'.
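
    A minimal sketch of the compressive FLD pipeline discussed above (illustrative only; the Gaussian projection matrix, the target dimension k and the use of scikit-learn's LinearDiscriminantAnalysis are assumptions, not details taken from the paper):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def compressive_fld(X_train, y_train, X_test, k, seed=0):
            """Project to k dimensions with a Gaussian random matrix, then fit FLD."""
            rng = np.random.default_rng(seed)
            d = X_train.shape[1]
            R = rng.normal(size=(d, k)) / np.sqrt(k)   # random projection matrix
            clf = LinearDiscriminantAnalysis()
            clf.fit(X_train @ R, y_train)              # learn FLD in the k-dimensional RP space
            return clf.predict(X_test @ R)             # classify projected test points

    The bounds in question concern how small k can be while keeping the error of this classifier close to that of FLD trained in the original d-dimensional space.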

    Network industries in the new economy

    In this paper we discuss two propositions: the supply and demand of knowledge, and network externalities. We outline the characteristics that distinguish knowledge-intensive industries from the general run of manufacturing and service businesses. Knowledge intensity and knowledge specialisation have developed as markets and globalisation have grown, leading to progressive incentives to outsource and for industries to deconstruct. The outcome has been more intensive competition. The paper looks at what is potentially the most powerful economic mechanism: positive feedback, alternatively known as demand-side increasing returns, network effects, or network externalities. We present alternative demand curves that incorporate positive feedback and discuss their potential economic and strategic consequences. We argue that knowledge supply and demand, and the dynamics of network externalities, create new situations for our traditional industrial economy, such that new types of economies of scale are emerging and "winner takes all" strategies are having more influence. This is the first of a pair of papers. A second paper will take the argument further and look at the nature of firms' strategies in the new world, arguing that technology standards, technical platforms, consumer networks, and supply chain strategies are making a significant contribution to relevant strategies within the new economy.
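
    As a toy illustration of the positive-feedback demand curves mentioned above (the linear willingness-to-pay model below is an assumption for illustration, not a curve from the paper): when the value of joining grows with the expected network size, the price the marginal consumer will pay can rise over part of the demand curve before falling back to zero.

        import numpy as np

        def fulfilled_expectations_demand(n, N=1000, a=1.0, b=0.01):
            """Willingness to pay of the marginal buyer when n buyers are expected.

            Consumers have heterogeneous stand-alone tastes (uniform types) and the
            value of joining rises linearly with the expected network size.
            """
            marginal_type = 1.0 - n / N   # the n-th keenest consumer's taste
            network_value = a + b * n     # value grows with the expected network
            return marginal_type * network_value

        sizes = np.arange(0, 1001)
        prices = fulfilled_expectations_demand(sizes)
        # prices rise with n at small n (positive feedback) before declining to zero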

    Cartographic generalization

    This short paper gives a subjective view on cartographic generalization, its achievements in the past, and the challenges it faces in the future.

    A Bayesian formulation of search, control and the exploration/exploitation trade-off

    A new approach to optimisation is introduced, based on a precise probabilistic statement of what is ideally required of an optimisation method. It is convenient to express the formalism in terms of the control of a stationary environment. This leads to an objective function for the controller which unifies the objectives of exploration and exploitation, thereby providing a quantitative principle for managing this trade-off. This is demonstrated using a variant of the multi-armed bandit problem. The approach opens new possibilities for optimisation algorithms, particularly by using neural networks or other adaptive methods for the adaptive controller. It also opens possibilities for deepening the understanding of existing methods. The realisation of these possibilities requires research into practical approximations of the exact formalism.
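
    The abstract does not spell out the exact formalism; as a standard, loosely related illustration of treating the exploration/exploitation trade-off probabilistically on a multi-armed bandit, here is a minimal Beta-Bernoulli Thompson-sampling sketch (all names and the choice of model are assumptions for illustration, not the paper's method):

        import numpy as np

        def thompson_bandit(true_probs, n_rounds=1000, seed=0):
            """Beta-Bernoulli Thompson sampling on a Bernoulli multi-armed bandit."""
            rng = np.random.default_rng(seed)
            k = len(true_probs)
            successes = np.ones(k)   # Beta(1, 1) priors for each arm
            failures = np.ones(k)
            total_reward = 0
            for _ in range(n_rounds):
                # Draw a plausible success rate for each arm from its posterior and
                # play the best draw: posterior spread drives exploration, posterior
                # means drive exploitation.
                samples = rng.beta(successes, failures)
                arm = int(np.argmax(samples))
                reward = int(rng.random() < true_probs[arm])
                successes[arm] += reward
                failures[arm] += 1 - reward
                total_reward += reward
            return total_reward

        print(thompson_bandit([0.2, 0.5, 0.7]))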

    Building generalization using deep learning

    Cartographic generalization is a problem which poses interesting challenges to automation. Whereas plenty of algorithms have been developed for the different sub-problems of generalization (e.g. simplification, displacement, aggregation), there are still cases which are not generalized adequately or in a satisfactory way. The main problem is the interplay between different operators. In those cases the benchmark is the human operator, who is able to design an aesthetic and correct representation of the physical reality. Deep learning methods have shown tremendous success on interpretation problems for which algorithmic methods have deficits. A prominent example is the classification and interpretation of images, where deep learning approaches outperform traditional computer vision methods. In both domains – computer vision and cartography – humans are able to produce a solution; a prerequisite for learning from them is the possibility of generating many training examples for the different cases. Thus, the idea in this paper is to employ deep learning for cartographic generalization tasks, especially for the task of building generalization. An advantage of this task is the fact that many training data sets are available from given map series. The approach is a first attempt using an existing network. In the paper, the details of the implementation will be reported, together with an in-depth analysis of the results. An outlook on future work will be given.
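
    A minimal sketch of how such a learning setup could look (the abstract does not give the network; the raster framing, the tiny encoder-decoder and all layer sizes below are assumptions for illustration): rasterised building footprints at the source scale are mapped to their generalised counterparts taken from an existing map series.

        import torch
        import torch.nn as nn

        class BuildingGeneralizer(nn.Module):
            """Tiny encoder-decoder: source-scale building raster -> generalised raster."""
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                    nn.ConvTranspose2d(16, 1, 2, stride=2),  # per-pixel logits
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        # Training pairs would come from an existing map series: the same building
        # rasterised at the source scale (input) and at the target scale (label).
        model = BuildingGeneralizer()
        x = torch.rand(8, 1, 128, 128)                        # source-scale tiles
        target = (torch.rand(8, 1, 128, 128) > 0.5).float()   # placeholder labels
        loss = nn.BCEWithLogitsLoss()(model(x), target)
        loss.backward()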

    Study of processes and tools applicable to the vector generalization of linear entities.

    A study is presented of algorithms that offer optimal results for the vector generalization of linear entities. The study falls within the framework of the CENIT España Virtual project for research into new cartographic processing algorithms. Generalization is one of the most complex cartographic processes, and it becomes most important when deriving maps from others at larger scales. The need for generalization is evident given the impossibility of representing reality in its entirety: reality must be limited or reduced for the subsequent preparation of the map, while preserving the essential characteristics of the mapped geographical space. The aim, therefore, is to obtain a simplified but representative image of reality. Because nearly eighty per cent of vector cartography is composed of linear elements, the research focuses on algorithms capable of processing and acting on them, and shows that their application can also be extended to the treatment of areal elements, since these are handled via the closed line that defines them. The study also examines the processes involved in linear exaggeration, which aim to highlight or emphasise those features of linear entities without which the representativeness of the map would be diminished. These tools, together with better-known ones such as line simplification and smoothing, can offer satisfactory results within a generalization process.
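
    The abstract does not name the algorithms studied; as an illustration of the line-simplification family it refers to, here is a minimal Douglas-Peucker sketch (a standard simplification algorithm in vector generalization; the tolerance in the example is arbitrary):

        import numpy as np

        def douglas_peucker(points, tolerance):
            """Simplify a polyline, keeping points farther than `tolerance` from the chord."""
            points = np.asarray(points, dtype=float)
            if len(points) < 3:
                return points
            start, end = points[0], points[-1]
            chord = end - start
            norm = np.hypot(chord[0], chord[1]) or 1.0
            # Perpendicular distance of every point to the start-end chord.
            dists = np.abs(chord[0] * (points[:, 1] - start[1])
                           - chord[1] * (points[:, 0] - start[0])) / norm
            idx = int(np.argmax(dists))
            if dists[idx] <= tolerance:
                return np.array([start, end])        # whole span is within tolerance
            left = douglas_peucker(points[:idx + 1], tolerance)
            right = douglas_peucker(points[idx:], tolerance)
            return np.vstack([left[:-1], right])     # drop the duplicated split point

        line = [(0, 0), (1, 0.2), (2, -0.1), (3, 2.0), (4, 0.1), (5, 0)]
        print(douglas_peucker(line, tolerance=0.5))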

    CartAGen: an Open Source Research Platform for Map Generalization

    Automatic map generalization is a complex task that is still a research problem and requires the development of research prototypes before being usable in production map processes. In the meantime, reproducible research principles are becoming a standard. Publishing reproducible research means that researchers share their code and their data so that other researchers are able to reproduce the published experiments, in order to check them, extend them, or compare them to their own experiments. Open source software is a key tool for sharing code and software, and CartAGen is the first open source research platform that tackles the overall map generalization problem: not only the building blocks that are generalization algorithms, but also methods to chain them and the spatial analysis tools necessary for data enrichment. This paper presents the CartAGen platform, its architecture and its components. The main component of the platform is the implementation of several multi-agent based models from the literature, such as AGENT, CartACom, GAEL, CollaGen and DIOGEN. The paper also explains and discusses the different ways in which a researcher can use or contribute to CartAGen.
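
    To give a flavour of the agent-based paradigm behind models such as AGENT (a generic sketch; this is not CartAGen's actual API, and every class and method name below is invented for illustration): each map object acts as an agent that evaluates its cartographic constraints, tries the actions proposed by the least satisfied constraint, and keeps the best state found.

        class ConstrainedAgent:
            """Generic agent-based generalization loop (illustration, not CartAGen code)."""

            def __init__(self, geometry, constraints):
                self.geometry = geometry
                self.constraints = constraints  # each offers satisfaction(geom) and actions(geom)

            def satisfaction(self, geom):
                # Overall satisfaction is limited by the worst constraint.
                return min(c.satisfaction(geom) for c in self.constraints)

            def generalize(self):
                best = self.geometry
                improved = True
                while improved:
                    improved = False
                    worst = min(self.constraints, key=lambda c: c.satisfaction(best))
                    for action in worst.actions(best):   # e.g. simplify, enlarge, displace
                        candidate = action(best)
                        if self.satisfaction(candidate) > self.satisfaction(best):
                            best, improved = candidate, True
                            break
                self.geometry = best
                return best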