
    Proactive Personality and the Big Five as Predictors of Motivation to Learn

    In an environment of changing psychological contracts, corporate downsizing, and increases in alternative self-paced training delivery channels, motivation to learn is believed to represent a key variable in employee self-development that distinguishes employees who will thrive from those who will not. Predicting this variable, then, becomes an important step in managing workforce development and helping employees help themselves. Therefore, the efficacy of relevant personality characteristics as predictors of motivation to learn was investigated. Proactive personality and the Big Five factors of personality were hypothesized to be predictive of motivation to learn. These personality variables are relevant because they have been demonstrated to have important impacts on similar work-related outcomes. Results indicated that proactive personality had a significant positive relationship with motivation to learn. Extraversion, openness, and conscientiousness also had significant positive relationships with motivation to learn. Neuroticism and agreeableness were not significantly related to motivation to learn. No evidence was found for the hypothesis that motivation to learn partially mediates the relationship between personality and participation in developmental activities. Results suggest that personality can be employed as a useful predictor of motivation to learn, which in turn predicts development behaviors. Application of these findings includes the design of employee selection for learning organizations and building an employee development process to fit the needs of employees with differing personalities and motivation to learn. Additional implications of motivation to learn in the workplace are discussed.

    Optimising Distributions with Natural Gradient Surrogates

    Natural gradient methods have been used to optimise the parameters of probability distributions in a variety of settings, often resulting in fast-converging procedures. Unfortunately, for many distributions of interest, computing the natural gradient presents a number of challenges. In this work we propose a novel technique for tackling such issues, which involves reframing the optimisation as one with respect to the parameters of a surrogate distribution, for which computing the natural gradient is easy. We give several examples of existing methods that can be interpreted as applying this technique, and propose a new method for applying it to a wide variety of problems. Our method expands the set of distributions that can be efficiently targeted with natural gradients. Furthermore, it is fast, easy to understand, simple to implement using standard autodiff software, and does not require lengthy model-specific derivations. We demonstrate our method on maximum likelihood estimation and variational inference tasks.
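    As a rough illustration of the "easy" case this technique relies on (a minimal sketch under simplifying assumptions, not the paper's method): for a one-dimensional Gaussian parameterised by mean and log standard deviation, the Fisher information matrix is known in closed form, so a natural-gradient step is just the ordinary gradient preconditioned by that matrix.

```python
import numpy as np

# Toy example (not the paper's code): natural-gradient descent on the negative
# log-likelihood of a 1-D Gaussian, parameterised by (mu, log_sigma), using the
# closed-form Fisher information of this parameterisation.

def fisher(mu, log_sigma):
    # Fisher information of N(mu, sigma^2) in (mu, log_sigma) coordinates.
    sigma2 = np.exp(2.0 * log_sigma)
    return np.diag([1.0 / sigma2, 2.0])

def loss_and_grad(params, data):
    mu, log_sigma = params
    sigma2 = np.exp(2.0 * log_sigma)
    resid = data - mu
    nll = 0.5 * np.mean(resid ** 2) / sigma2 + log_sigma
    d_mu = -np.mean(resid) / sigma2
    d_log_sigma = 1.0 - np.mean(resid ** 2) / sigma2
    return nll, np.array([d_mu, d_log_sigma])

def natural_gradient_step(params, data, lr=0.1):
    nll, grad = loss_and_grad(params, data)
    # Precondition the ordinary gradient with the inverse Fisher matrix.
    return params - lr * np.linalg.solve(fisher(*params), grad), nll

rng = np.random.default_rng(0)
data = rng.normal(2.0, 0.5, size=1000)
params = np.array([0.0, 0.0])
for _ in range(200):
    params, nll = natural_gradient_step(params, data)
print(params)  # approaches mu ~ 2.0 and log_sigma ~ log(0.5)
```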

    ClassBench: A Packet Classification Benchmark

    Due to the importance and complexity of the packet classification problem, a myriad of algorithms and resulting implementations exist. The performance and capacity of many algorithms and classification devices, including TCAMs, depend upon properties of the filter set and query patterns. Unlike microprocessors in the field of computer architecture, there are no standard performance evaluation tools or techniques available to evaluate packet classification algorithms and products. Network service providers are reluctant to distribute copies of real filter sets for security and confidentiality reasons, hence realistic test vectors are a scarce commodity. The small subset of the research community who obtain real filter sets either limit performance evaluation to the small sample space or employ ad hoc methods of modifying those filter sets. In response to this problem, we present ClassBench, a suite of tools for benchmarking packet classification algorithms and devices. ClassBench includes a Filter Set Generator that produces synthetic filter sets that accurately model the characteristics of real filter sets. Along with varying the size of the filter sets, we provide high-level control over the composition of the filters in the resulting filter set. The tool suite also includes a Trace Generator that produces a sequence of packet headers to exercise the synthetic filter set. Along with specifying the relative size of the trace, we provide a simple mechanism for controlling locality of reference in the trace. While we have already found ClassBench to be very useful in our own research, we seek to initiate a broader discussion and solicit input from the community to guide the refinement of the tools and codification of a formal benchmarking methodology.
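    The following toy sketch (hypothetical; not ClassBench's actual generator, parameters, or file format) illustrates the flavour of a filter set generator: synthetic filters whose address-prefix lengths follow a chosen distribution, which is the kind of structural control over composition described above.

```python
import random

# Toy sketch only (not ClassBench's generator). Emits synthetic filters whose
# prefix lengths follow an illustrative target distribution.

PROTOCOLS = [6, 17, 1]                     # TCP, UDP, ICMP
PREFIX_LENGTHS = [8, 16, 24, 32]
PREFIX_WEIGHTS = [0.1, 0.2, 0.4, 0.3]      # hypothetical length distribution

def random_prefix(rng):
    length = rng.choices(PREFIX_LENGTHS, weights=PREFIX_WEIGHTS)[0]
    addr = rng.getrandbits(32) & ((0xFFFFFFFF << (32 - length)) & 0xFFFFFFFF)
    return addr, length

def make_filter(rng):
    return {
        "src": random_prefix(rng),
        "dst": random_prefix(rng),
        "proto": rng.choice(PROTOCOLS),
        # Half the filters match any destination port, half match port 80 only.
        "dst_port": (0, 65535) if rng.random() < 0.5 else (80, 80),
    }

def make_filter_set(n, seed=0):
    rng = random.Random(seed)
    return [make_filter(rng) for _ in range(n)]

if __name__ == "__main__":
    for f in make_filter_set(5):
        print(f)
```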

    Towards a Packet Classification Benchmark

    Packet classification is the enabling technology for next generation network services and often the primary bottleneck in high-performance routers. Due to the importance and complexity of the problem, a myriad of algorithms and resulting implementations exist. The performance and capacity of many algorithms and classification devices, including TCAMs, depend upon properties of the filter set and query patterns. Unlike microprocessors in the field of computer architecture, there are no standard performance evaluation tools or techniques available to evaluate packet classification algorithms and products. Network service providers are reluctant to distribute copies of real filter databases for security and confidentiality reasons, hence realistic test vectors are a scarce commodity. The small subset of the research community who obtain real databases either limit performance evaluation to the small sample space or employ ad hoc methods of modifying those databases. We present a tool for creating synthetic filter databases that retain characteristics of a seed database and provide systematic mechanisms for varying the number and composition of the filters. We propose a benchmarking methodology based on this tool that provides a mechanism for evaluating packet classification performance on a uniform scale. We seek to initiate a broader discussion within the community that will result in a standard packet classification benchmark.
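    For context on what such a benchmark exercises, here is a hedged sketch (field names and formats are illustrative only) of the simplest possible packet classifier: a linear search over the filter list that returns the first, highest-priority match. The algorithms being benchmarked aim to beat exactly this baseline.

```python
# Baseline linear-search classifier. Addresses are (value, prefix_length)
# pairs, ports are inclusive ranges; these conventions are illustrative.

def prefix_match(addr, prefix):
    value, length = prefix
    mask = (0xFFFFFFFF << (32 - length)) & 0xFFFFFFFF if length else 0
    return (addr & mask) == (value & mask)

def matches(header, flt):
    lo, hi = flt["dst_port"]
    return (prefix_match(header["src"], flt["src"])
            and prefix_match(header["dst"], flt["dst"])
            and header["proto"] == flt["proto"]
            and lo <= header["dst_port"] <= hi)

def classify(header, filters):
    # Return the index of the first (highest-priority) matching filter, if any.
    for i, flt in enumerate(filters):
        if matches(header, flt):
            return i
    return None

# Example: a single filter admitting TCP traffic to 10.0.0.0/8 on port 80.
filters = [{"src": (0, 0), "dst": (0x0A000000, 8), "proto": 6, "dst_port": (80, 80)}]
header = {"src": 0xC0A80001, "dst": 0x0A010203, "proto": 6, "dst_port": 80}
print(classify(header, filters))   # -> 0
```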

    Identifiable Feature Learning for Spatial Data with Nonlinear ICA

    Recently, nonlinear ICA has surfaced as a popular alternative to the many heuristic models used in deep representation learning and disentanglement. An advantage of nonlinear ICA is that a sophisticated identifiability theory has been developed; in particular, it has been proven that the original components can be recovered under sufficiently strong latent dependencies. Despite this general theory, practical nonlinear ICA algorithms have so far been mainly limited to data with one-dimensional latent dependencies, especially time-series data. In this paper, we introduce a new nonlinear ICA framework that employs t-process (TP) latent components, which apply naturally to data with higher-dimensional dependency structures, such as spatial and spatio-temporal data. In particular, we develop a new learning and inference algorithm that extends variational inference methods to handle the combination of a deep neural network mixing function with the TP prior, and employs the method of inducing points for computational efficiency. On the theoretical side, we show that such TP independent components are identifiable under very general conditions. Further, Gaussian Process (GP) nonlinear ICA is established as a limit of the TP nonlinear ICA model, and we prove that the identifiability of the latent components at this GP limit is more restricted. Namely, those components are identifiable if and only if they have distinctly different covariance kernels. Our algorithm and identifiability theorems are explored on simulated spatial data and real-world spatio-temporal data.
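    A hedged sketch of the generative structure only (not the paper's model or its inference code, and with an arbitrary toy nonlinearity standing in for the learned deep mixing network): independent latent components drawn from a Student-t process over two-dimensional locations, then mixed nonlinearly to produce the observations.

```python
import numpy as np

# Generative-structure sketch only, under simplifying assumptions.

def rbf_kernel(X, lengthscale):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def sample_tp(K, df, rng):
    # Multivariate Student-t draw as a Gaussian scale mixture: a Gaussian
    # sample rescaled by sqrt(df / chi^2_df), giving t-process marginals.
    g = rng.multivariate_normal(np.zeros(K.shape[0]), K)
    return g * np.sqrt(df / rng.chisquare(df))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))             # spatial locations
jitter = 1e-6 * np.eye(200)

# Two independent TP components with distinct kernels (cf. the GP-limit
# condition of distinctly different covariance kernels).
z1 = sample_tp(rbf_kernel(X, 0.3) + jitter, df=4.0, rng=rng)
z2 = sample_tp(rbf_kernel(X, 0.1) + jitter, df=4.0, rng=rng)
Z = np.stack([z1, z2], axis=1)                        # (200, 2) latent components

# Toy nonlinear mixing x = f(z) in place of the learned deep network.
W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 2))
X_obs = np.tanh(Z @ W1) @ W2
print(X_obs.shape)                                    # (200, 2) observations
```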

    Project Zeus: Design of a Broadband Network and its Application on a University Campus

    This is a report on the results of the initial step in a plan for the design, deployment, and operation of a high-speed campus network at Washington University. The network is based on ATM switching technology that has been developed here during the last several years. It will support ubiquitous multimedia workstations with high-resolution graphics and video capabilities, opening up a wide range of new applications in research and education, and will support aggregate throughputs of hundreds of gigabits per second. Ports of 100 Mb/s are now in operation; the next phase of network implementation will operate at 155 Mb/s port rates, with higher rates introduced as demand arises and economics permit. We propose to move this technology quickly into a production setting where the objectives of network use and network research can be pursued concurrently.

    Meta-Learning Probabilistic Inference For Prediction

    This paper introduces a new framework for data-efficient and versatile learning. Specifically: 1) We develop ML-PIP, a general framework for Meta-Learning approximate Probabilistic Inference for Prediction. ML-PIP extends existing probabilistic interpretations of meta-learning to cover a broad class of methods. 2) We introduce VERSA, an instance of the framework employing a flexible and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass. VERSA substitutes optimization at test time with forward passes through inference networks, amortizing the cost of inference and relieving the need for second derivatives during training. 3) We evaluate VERSA on benchmark datasets where the method sets new state-of-the-art results, handles arbitrary numbers of shots, and for classification, arbitrary numbers of classes at train and test time. The power of the approach is then demonstrated through a challenging few-shot ShapeNet view reconstruction task.
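    A minimal sketch of the amortized-inference idea (not the released VERSA code, and assuming pre-computed support-set features with simple per-class mean pooling): a single forward pass maps the support set to a mean and log-variance for each class's linear-classifier weights, which are then sampled by reparameterization instead of being optimized.

```python
import torch
import torch.nn as nn

# Hedged sketch: amortized distribution over task-specific classifier weights.

class AmortizedHead(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # Linear maps from pooled per-class features to the parameters of the
        # distribution over that class's weight vector (weights plus a bias).
        self.to_mu = nn.Linear(feat_dim, feat_dim + 1)
        self.to_logvar = nn.Linear(feat_dim, feat_dim + 1)

    def forward(self, support_feats, support_labels, n_classes):
        mus, logvars = [], []
        for c in range(n_classes):
            pooled = support_feats[support_labels == c].mean(dim=0)  # any number of shots
            mus.append(self.to_mu(pooled))
            logvars.append(self.to_logvar(pooled))
        return torch.stack(mus), torch.stack(logvars)                # (n_classes, feat_dim + 1)

def sample_classifier(mu, logvar):
    # Reparameterized sample of the task-specific weights.
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

# Illustrative usage: 5-way, 1-shot, 64-dimensional features.
feats, labels = torch.randn(5, 64), torch.arange(5)
mu, logvar = AmortizedHead()(feats, labels, n_classes=5)
w = sample_classifier(mu, logvar)                 # (5, 65)
query = torch.randn(3, 64)
logits = query @ w[:, :64].t() + w[:, 64]         # (3, 5) class logits
```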

    Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes

    The goal of this paper is to design image classification systems that, after an initial multi-task training phase, can automatically adapt to new tasks encountered at test time. We introduce a conditional neural process based approach to the multi-task classification setting for this purpose, and establish connections to the meta-learning and few-shot learning literature. The resulting approach, called CNAPs, comprises a classifier whose parameters are modulated by an adaptation network that takes the current task's dataset as input. We demonstrate that CNAPs achieves state-of-the-art results on the challenging Meta-Dataset benchmark, indicating high-quality transfer learning. We show that the approach is robust, avoiding both over-fitting in low-shot regimes and under-fitting in high-shot regimes. Timing experiments reveal that CNAPs is computationally efficient at test time as it does not involve gradient-based adaptation. Finally, we show that trained models are immediately deployable to continual learning and active learning where they can outperform existing approaches that do not leverage transfer learning.
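    A sketch of the conditional-adaptation idea under simplifying assumptions (not the authors' CNAPs implementation; layer sizes and the nearest-prototype classifier head are illustrative): an adaptation network pools the task's support features into a task embedding and emits per-dimension scale and shift parameters that modulate a fixed feature extractor, so adapting to a new task requires only forward passes rather than gradient steps.

```python
import torch
import torch.nn as nn

# Hedged sketch: task-conditioned feature modulation for multi-task classification.

class TaskAdapter(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.set_encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        self.to_gamma = nn.Linear(128, feat_dim)
        self.to_beta = nn.Linear(128, feat_dim)

    def forward(self, support_feats):
        task_embedding = self.set_encoder(support_feats).mean(dim=0)  # pool over the set
        return self.to_gamma(task_embedding), self.to_beta(task_embedding)

def modulate(feats, gamma, beta):
    # Per-dimension scaling and shifting of the fixed extractor's features.
    return feats * (1.0 + gamma) + beta

def classify(query_feats, support_feats, support_labels, adapter, n_classes):
    gamma, beta = adapter(support_feats)
    s = modulate(support_feats, gamma, beta)
    q = modulate(query_feats, gamma, beta)
    prototypes = torch.stack([s[support_labels == c].mean(0) for c in range(n_classes)])
    return -torch.cdist(q, prototypes)            # logits as negative distances

# Illustrative usage: 5-way task with two support examples per class.
adapter = TaskAdapter()
support, labels = torch.randn(10, 64), torch.arange(5).repeat(2)
query = torch.randn(4, 64)
logits = classify(query, support, labels, adapter, n_classes=5)   # (4, 5)
```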