
    Naming Racism: A Conceptual Look at Internalized Racism in US Schools

    Internalized racism describes the conscious and unconscious acceptance of a racial hierarchy in which whites are consistently ranked above People of Color. Although scholars across multiple disciplines have discussed this concept, the role of schools in instilling and perpetuating internalized racism in Students of Color has rarely been examined. This conceptual paper uses a Critical Race Theory framework to examine racialized experiences within classroom pedagogy, curriculum, and unequal school resources, and how these factors can negatively affect racial group identity and contribute to internalized racism among Students of Color. Because internalized racism works to sustain educational and social inequity, the paper also explores ways that schools can function to break this cycle.

    A Unified View of Piecewise Linear Neural Network Verification

    The success of Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network (NN) models. Despite the reputation of learned NN models for behaving as black boxes and the theoretical hardness of proving their properties, researchers have succeeded in verifying some classes of models by exploiting their piecewise linear structure and drawing on formal methods such as Satisfiability Modulo Theories. These methods are, however, still far from scaling to realistic neural networks. To facilitate progress in this crucial area, we make two key contributions. First, we present a unified framework that encompasses previous methods. This analysis leads to the identification of new methods that combine the strengths of multiple existing approaches, achieving a speedup of two orders of magnitude over the previous state of the art. Second, we propose a new dataset of benchmarks that includes a collection of previously released test cases. We use the benchmark to provide the first experimental comparison of existing algorithms and to identify the factors affecting the hardness of verification problems. Comment: updated version of "Piecewise Linear Neural Network Verification: A Comparative Study".
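
    The verifiers this abstract surveys exploit the piecewise linear structure of ReLU networks. As a minimal illustration of that idea (a toy sketch, not the paper's unified framework), the snippet below propagates an input box through affine + ReLU layers by splitting each weight matrix into positive and negative parts, yielding sound but loose output bounds of the kind branch-and-bound verifiers then tighten:

```python
import numpy as np

def relu_interval_bounds(layers, lo, hi):
    """Propagate an elementwise input box [lo, hi] through a stack of
    affine + ReLU layers. Splitting each weight matrix into its positive
    and negative parts gives sound (worst-case) output bounds."""
    for W, b in layers:
        pos, neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = pos @ lo + neg @ hi + b  # smallest achievable pre-activation
        new_hi = pos @ hi + neg @ lo + b  # largest achievable pre-activation
        lo, hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)  # ReLU
    return lo, hi
```

    For a one-layer network computing relu(x1 - x2 + 0.5) on the unit box, this returns the exact range [0, 1.5]; on deeper networks the bounds remain sound but grow looser, which is why complete methods add case splitting on the ReLUs.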

    Adaptive Neural Compilation

    This paper proposes an adaptive neural-compilation framework to address the problem of efficient program learning. Traditional code-optimisation strategies used in compilers apply a pre-specified set of transformations that make the code faster to execute without changing its semantics. In contrast, our work adapts programs to make them more efficient while considering correctness only on a target input distribution. Our approach is inspired by recent work on differentiable representations of programs. We show that programs written in a low-level language can be compiled to a differentiable representation, and that programs in this representation can be optimised to make them efficient on a target distribution of inputs. Experimental results demonstrate that our approach enables learning specifically-tuned algorithms for given data distributions with a high success rate. Comment: submitted to NIPS 2016; code and supplementary materials will be available on the author's page.

    Value Propagation Networks

    We present Value Propagation (VProp), a set of parameter-efficient differentiable planning modules built on Value Iteration which can be trained with reinforcement learning to solve unseen tasks, can generalize to larger map sizes, and can learn to navigate in dynamic environments. We show that the modules enable learning to plan when the environment also includes stochastic elements, providing a cost-efficient learning system for building low-level, size-invariant planners for a variety of interactive navigation problems. We evaluate on static and dynamic configurations of MazeBase grid-worlds, with randomly generated environments of several different sizes, and on a StarCraft navigation scenario with more complex dynamics and pixels as input. Comment: updated to match the ICLR 2019 OpenReview version.
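
    VProp builds on classic Value Iteration, whose Bellman backup is the update such modules unroll inside a differentiable network. A minimal sketch of that backup (a toy assumption, not the paper's module) on a deterministic grid world where the agent may stay put or step to a 4-neighbour:

```python
import numpy as np

def value_iteration(rewards, gamma=0.9, iters=50):
    """Value iteration on a deterministic 2-D grid: from each cell the
    agent may stay or move to a 4-neighbour, and V(s) is updated to
    r(s) + gamma * max over reachable cells of V."""
    H, W = rewards.shape
    V = np.zeros((H, W))
    for _ in range(iters):
        best = V.copy()  # staying put is always an option
        best[:-1, :] = np.maximum(best[:-1, :], V[1:, :])   # neighbour below
        best[1:, :] = np.maximum(best[1:, :], V[:-1, :])    # neighbour above
        best[:, :-1] = np.maximum(best[:, :-1], V[:, 1:])   # neighbour right
        best[:, 1:] = np.maximum(best[:, 1:], V[:, :-1])    # neighbour left
        V = rewards + gamma * best
    return V
```

    With a single goal reward, the resulting value field decays with distance from the goal, so greedy ascent on V navigates toward it; VProp-style modules learn the rewards and transition weights of such an update from experience rather than fixing them by hand.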

    Efficient Relaxations for Dense CRFs with Sparse Higher Order Potentials

    Dense conditional random fields (CRFs) have become a popular framework for modelling several problems in computer vision, such as stereo correspondence and multi-class semantic segmentation. By modelling long-range interactions, dense CRFs provide a labelling that captures finer detail than their sparse counterparts. Currently, the state-of-the-art algorithm performs mean-field inference using a filter-based method but fails to provide a strong theoretical guarantee on the quality of the solution. A question naturally arises as to whether a maximum a posteriori (MAP) estimate of a dense CRF can be obtained by a principled method. In this paper, we show that it can. We show that, by using a filter-based method, continuous relaxations of the MAP problem can be optimised efficiently using state-of-the-art algorithms. Specifically, we solve a quadratic programming (QP) relaxation using the Frank-Wolfe algorithm and a linear programming (LP) relaxation by developing a proximal minimisation framework. By exploiting labelling consistency in the higher-order potentials and utilising the filter-based method, we formulate the above algorithms such that each iteration has a complexity linear in the number of classes and random variables. The presented algorithms can be applied to any labelling problem using a dense CRF with sparse higher-order potentials. We use semantic segmentation as an example application, as it demonstrates the ability of the algorithms to scale to dense CRFs with large dimensions. Experiments on the Pascal dataset indicate that the presented algorithms attain lower energies than the mean-field inference method.
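
    A useful property behind the QP relaxation above is that Frank-Wolfe only ever needs a linear minimisation oracle over the feasible set; for per-variable label distributions that set is a probability simplex, whose oracle is just a vertex pick. A generic sketch of this (illustrative only, with the filter-based gradient computation of the paper replaced by an explicit gradient):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=2000):
    """Minimise a smooth convex function over the probability simplex.
    Each iteration picks the simplex vertex minimising <grad, s> (the
    linear oracle) and moves toward it with the standard 2/(t+2) step."""
    x = x0.astype(float)
    for t in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0           # vertex minimising the linear model
        x += 2.0 / (t + 2.0) * (s - x)  # convex combination stays feasible
    return x

# Toy QP: minimise ||x - c||^2 over the simplex; since c already lies
# on the simplex, the minimiser is c itself.
c = np.array([0.2, 0.3, 0.5])
x = frank_wolfe_simplex(lambda x: 2 * (x - c), np.array([1.0, 0.0, 0.0]))
```

    Because every iterate is a convex combination of feasible points, no projection step is needed; the paper's contribution is making each such gradient evaluation linear in the number of classes and variables via filtering.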

    Superpixel-based Two-view Deterministic Fitting for Multiple-structure Data

    This paper proposes a two-view deterministic geometric model fitting method, termed Superpixel-based Deterministic Fitting (SDF), for multiple-structure data. SDF starts from superpixel segmentation, which effectively captures prior information about feature appearances. The feature appearances help reduce the computational complexity of deterministic fitting methods. SDF also includes two original elements: a deterministic sampling algorithm and a novel model selection algorithm. The two algorithms are tightly coupled to boost the performance of SDF in both speed and accuracy. Specifically, the proposed sampling algorithm leverages the grouping cues of superpixels to generate reliable and consistent hypotheses. The proposed model selection algorithm further exploits desirable properties of the generated hypotheses to improve the conventional fit-and-remove framework for more efficient and effective performance. The key characteristic of SDF is that it can efficiently and deterministically estimate the parameters of model instances in multiple-structure data. Experimental results demonstrate that SDF outperforms several state-of-the-art fitting methods on real images with single-structure and multiple-structure data. Comment: accepted by the European Conference on Computer Vision (ECCV).
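
    For context on the baseline SDF improves: the conventional fit-and-remove framework can be sketched with sequential random-sampling line fitting in 2-D (a toy stand-in; the paper's superpixel-guided sampling and model selection are not shown):

```python
import random

def fit_and_remove(points, n_models=2, trials=500, tol=0.05, seed=0):
    """Conventional fit-and-remove multi-structure fitting: repeatedly
    fit one line y = a*x + b by random minimal sampling, keep the
    hypothesis with the most inliers, then remove those inliers and
    repeat for the next structure."""
    rng = random.Random(seed)
    remaining, models = list(points), []
    for _ in range(n_models):
        best, best_inliers = None, []
        for _ in range(trials):
            (x1, y1), (x2, y2) = rng.sample(remaining, 2)
            if x1 == x2:
                continue  # degenerate minimal sample
            a = (y2 - y1) / (x2 - x1)
            b = y1 - a * x1
            inliers = [p for p in remaining
                       if abs(p[1] - (a * p[0] + b)) < tol]
            if len(inliers) > len(best_inliers):
                best, best_inliers = (a, b), inliers
        models.append(best)
        remaining = [p for p in remaining if p not in best_inliers]
    return models
```

    The weakness this exposes, and which SDF's coupled sampling and selection address, is that an error in an early fit corrupts the residual data every later fit sees, and the random sampling makes the whole pipeline non-deterministic.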

    An inventory of multipurpose Avenue trees of Urban Chandigarh, India

    Trees in urban ecosystems play a very significant role in environmental protection by checking air and noise pollutants, abating wind, and serving many other functions. In India, Chandigarh is the most modern and environmentally safe city and qualifies to be called a GREEN CITY because of its rich tree component. This is so in spite of its high population density, currently over 9,443 people per square km, perhaps the highest in the country. It has nearly 42,000 trees growing along the roads in a systematic manner, and the drives are identified by the type of multipurpose tree species planted along them. Nearly 66 tree species (over half indigenous) are seen along the roadsides; these trees provide shade, timber, fuel, fodder, fruit, medicine, and other benefits. In addition, the city is decorated with 11 gardens harbouring over 200 types of trees.

    Need to establish long-term ecological research network in India

    The paper discusses the purpose, significance, and need for establishing Long-Term Ecological Research (LTER) in India. LTER was first established in 1980 by the National Science Foundation of the USA. Its main mission was to understand ecological phenomena on a long-term basis through cooperation and collaboration among scientists from different parts of the world. It also emphasizes training, sharing research data, and helping scientists manage ecosystems throughout the world through personal visits and electronic linkages. Many countries have become members of the international LTER programme, while many more are in the process of joining the network. In India, there is an urgent need to recognize LTER sites, as many ecosystems require long-term research and monitoring. A national meeting of ecologists held in December 2002 discussed the objectives of an LTER Network-India. Further, an ad hoc committee of ecologists was proposed under the leadership of Prof. J.S. Singh during the national seminar at Kurukshetra in January 2004.