
    You get what you pay for: Incentives and Selection in the Education System

    We analyse worker self-selection, with a special focus on teachers. The point of the paper is that worker composition is generally endogenous, due to worker self-selection. As a first step, we analyse lab experimental data to provide causal evidence on particular sorting patterns. This evidence sets the stage for our field data analysis, which focuses specifically on selection patterns of teachers. We find that teachers are more risk averse than employees in other professions, which indicates that relatively risk averse individuals sort into teaching occupations under the current system. Using survey measures of trust and reciprocity, we also find that teachers trust more and are less negatively reciprocal than other employees. Finally, we establish differences in personality based on the Big Five concept.
    Keywords: education, training and the labour market

    Pre-Training LiDAR-Based 3D Object Detectors Through Colorization

    Accurate 3D object detection and understanding for self-driving cars relies heavily on LiDAR point clouds, which require large amounts of labeled data for training. In this work, we introduce an innovative pre-training approach, Grounded Point Colorization (GPC), to bridge the gap between data and labels by teaching the model to colorize LiDAR point clouds, equipping it with valuable semantic cues. To tackle challenges arising from color variations and selection bias, we incorporate color as "context" by providing ground-truth colors as hints during colorization. Experimental results on the KITTI and Waymo datasets demonstrate GPC's remarkable effectiveness. Even with limited labeled data, GPC significantly improves fine-tuning performance; notably, on just 20% of the KITTI dataset, GPC outperforms training from scratch with the entire dataset. In sum, we introduce a fresh perspective on pre-training for 3D object detection, aligning the objective with the model's intended role and ultimately advancing the accuracy and efficiency of 3D object detection for autonomous vehicles.
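The hint mechanism described in the abstract can be sketched as a data-preparation step: reveal the ground-truth color of a random subset of points as context, and ask the model to predict colors for the rest. This is a minimal illustration, not the paper's actual pipeline; the function name, hint ratio, and zero-masking scheme are assumptions.

```python
import numpy as np

def make_colorization_sample(points, colors, hint_ratio=0.3, rng=None):
    """Build an (input, target) pair for hint-based colorization pre-training.

    points: (N, 3) LiDAR coordinates; colors: (N, 3) ground-truth RGB in [0, 1].
    A random subset of points keeps its ground-truth color as a "hint";
    the remaining points have their color channels zeroed out in the input.
    The target is always the full ground-truth color array.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = points.shape[0]
    hint_mask = rng.random(n) < hint_ratio              # True where color is revealed
    hinted = np.where(hint_mask[:, None], colors, 0.0)  # zero out non-hinted colors
    inputs = np.concatenate([points, hinted], axis=1)   # (N, 6): xyz + hinted rgb
    return inputs, colors, hint_mask
```

A colorization network trained on such pairs must infer the missing colors from geometry plus the revealed hints, which is where the semantic pre-training signal comes from.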

    Predicting Token Impact Towards Efficient Vision Transformer

    Token filtering, which removes irrelevant tokens before self-attention, is a straightforward way to enable an efficient vision Transformer. This is the first work to view token filtering from a feature-selection perspective: we weigh the importance of a token by how much the loss changes once that token is masked. If the loss changes greatly after masking a token of interest, the token has a significant impact on the final decision and is thus relevant; otherwise it is less important and can be filtered out. After applying the token filtering module, generalized from the whole training data, the number of tokens fed to the self-attention module can be substantially reduced in the inference phase, leading to far fewer computations in all subsequent self-attention layers. The token filter can be realized with a very simple network, a multi-layer perceptron. Besides performing token filtering only once, at the very beginning prior to self-attention, the other core feature that distinguishes our method from other token filters is the predictability of token impact from a feature-selection point of view. The experiments show that the proposed method provides an efficient way to obtain a lightweight model by fine-tuning an optimized backbone, which is easier to deploy than existing methods based on training from scratch.
    Comment: 10 pages
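The masking criterion can be made concrete: score a token by the absolute change in the loss when that token is zero-masked. The toy NumPy sketch below uses assumed names and a brute-force loop; the paper instead trains an MLP to predict these scores, so no per-token forward passes are needed at inference time.

```python
import numpy as np

def token_importance(tokens, loss_fn):
    """Score each token by how much the loss changes when it is masked out.

    tokens:  (T, D) array of token embeddings.
    loss_fn: maps a (T, D) token array to a scalar loss.
    Returns a (T,) array; larger scores mean more impact on the decision.
    """
    base = loss_fn(tokens)
    scores = np.empty(tokens.shape[0])
    for t in range(tokens.shape[0]):
        masked = tokens.copy()
        masked[t] = 0.0                         # zero-mask token t
        scores[t] = abs(loss_fn(masked) - base)  # loss change = importance
    return scores
```

Tokens can then be ranked by score and only the top-k kept before the first self-attention layer.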

    Immune network algorithm in monthly streamflow prediction at Johor river

    This study proposes an alternative method for generating future streamflow data from a single-point river stage. Prediction of streamflow data is important in water resources engineering for planning and design purposes, in order to enable long-term forecasting. This paper utilizes an Artificial Immune System (AIS) to model the streamflow of one station of the Johor River. AIS has the abilities of self-organization, memory, recognition, adaptation and learning, inspired by the immune system. The Immune Network Algorithm is one of the three main algorithms in AIS; the model used in this study is aiNet. The training process in aiNet is partly inspired by the clonal selection principle, while the other part uses antibody interactions for removing redundancy and finding data patterns. The results of this study show that, like traditional statistical and stochastic techniques, the Immune Network Algorithm is capable of producing future streamflow data at a monthly duration, with various advantages.
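The two aiNet ingredients named above, clonal selection and network suppression, can be reduced to one illustrative iteration: clone each antibody, mutate the clones with a magnitude that shrinks as affinity to the antigen grows, keep the best candidate, and then drop near-duplicate antibodies. This is a simplified sketch, not the actual aiNet algorithm; the function name, affinity measure, and suppression rule are assumptions.

```python
import numpy as np

def ainet_step(antibodies, antigen, n_clones=5, sigma=0.1, supp_thresh=0.05, rng=None):
    """One simplified aiNet-style iteration for a single antigen.

    Clonal selection: mutate clones inversely to affinity, keep the best.
    Network suppression: remove antibodies too similar to an already-kept one.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    survivors = []
    for ab in antibodies:
        affinity = 1.0 / (1.0 + np.linalg.norm(ab - antigen))
        noise = rng.normal(0.0, sigma * (1.0 - affinity), size=(n_clones, ab.shape[0]))
        candidates = np.vstack([ab[None, :], ab + noise])  # keep the parent as a candidate
        best = candidates[np.argmin(np.linalg.norm(candidates - antigen, axis=1))]
        survivors.append(best)
    kept = []  # suppression pass: enforce diversity among surviving antibodies
    for ab in survivors:
        if all(np.linalg.norm(ab - k) > supp_thresh for k in kept):
            kept.append(ab)
    return np.array(kept)
```

Iterating this step moves the antibody population toward the data (the antigens) while the suppression pass keeps the network compact, which is what lets aiNet find patterns without redundancy.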

    Self-generated neural activity : models and perspective

    Poster presentation: The brain is autonomously active, and this self-sustained neural activity is in general modulated, but not driven, by the sensory input data stream [1,2]. Traditionally one has regarded this eigendynamics as resulting from inter-modular recurrent neural activity [3]. Understanding the basic modules for cognitive computation is, in this view, the primary focus of research, and the overall neural dynamics would be determined by the topology of the inter-modular pathways. Here we examine an alternative point of view, asking whether certain aspects of the neural eigendynamics have a central functional role for overall cognitive computation [4,5]. Transiently stable neural activity is regularly observed on the cognitive time scale of 80–100 ms, with indications that neural competition [6] plays an important role in the selection of the transiently stable neural ensembles [7], also denoted winning coalitions [8]. We report on a theory approach which implements these two principles, transient-state dynamics and neural competition, in terms of an associative neural network with clique encoding [9]. A cognitive system [10] with a non-trivial internal eigendynamics has two seemingly contrasting tasks to fulfill: the internal processes need to be regular and not chaotic on the one hand, but sensitive to the afferent sensory stimuli on the other. We show that these two contrasting demands can be reconciled within our approach based on competitive transient-state dynamics, when the sensory stimuli are allowed to modulate the competition for the next winning coalition. By testing the system with the bars problem, we find an emerging cognitive capability: based only on the two basic architectural principles, neural competition and transient-state dynamics, with no explicit algorithmic encoding, the system performs on its own a non-linear independent component analysis of the input data stream. The system has rudimentary biological features.
    All learning is local, Hebbian-style, unsupervised and online. It exhibits an ever-ongoing eigendynamics, and at no time is the state or the value of the synaptic strengths reset or the system restarted; there is no separation between training and performance. We believe this kind of approach, cognitive computation with autonomously active neural networks, to be an emerging field, relevant both for system neuroscience and for synthetic cognitive systems.
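The interplay of the two principles, competition plus forced transience, can be reduced to a minimal selection rule: internal drives compete, the sensory input biases the competition, and the current winner is suppressed so that the activity keeps moving from coalition to coalition. This is an illustrative sketch with assumed names, not the clique network of the poster.

```python
import numpy as np

def next_winning_coalition(internal_drive, sensory_bias, current=None, inhibition=1.0):
    """Select the next winning coalition by competition.

    internal_drive: baseline activation of each candidate coalition.
    sensory_bias:   modulation from the afferent sensory input (same shape).
    The current winner is inhibited, which forces the activity to move on:
    transient-state dynamics rather than a fixed-point attractor.
    """
    drive = np.asarray(internal_drive, dtype=float) + np.asarray(sensory_bias, dtype=float)
    if current is not None:
        drive[current] -= inhibition  # suppress the current winner
    return int(np.argmax(drive))
```

In this picture the input never dictates the state directly; it only tilts an ongoing internal competition, which is the sense in which the eigendynamics is modulated but not driven.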