
    Effective City Planning: A Data Driven Analysis of Infrastructure and Citizen Feedback in Bangalore

    Leveraging civic data, divided into three categories (spending, infrastructure, and citizen feedback), can present a clear picture of the priorities, performance, and pain points of a city. Data-driven insights highlight the current issues faced by citizens as well as the disparity between government spending and quality of work, and can aid in providing effective solutions. City infrastructure, such as footpaths, lighting, and parks, describes the living quality of citizens and can be compared with annual spending in these sectors to track effectiveness. Analyzing complaints ensures that citizen feedback is taken into account during both long-term planning and short-term solutions, pinpointing critical areas of improvement. Integrating an analysis loop and data-driven dashboards can help improve the performance of municipal corporations while adding transparency between citizens and city officials. In the paper, constituency rankings across city infrastructure indicated a low priority given to greenery in terms of parks, with each constituency having less than 2% of its area as parkland. As populations in these areas are already high and increasing, this is likely to worsen in the coming years. Comparing the results with complaints, the rankings of footpaths in constituencies were, surprisingly, contrary to the number of complaints in these constituencies, with high-ranking constituencies receiving the highest number of complaints, which would require further analysis. In terms of street lights, the areas with low-quality lighting were associated with a large number of complaints from citizens, indicating that action needs to be taken immediately. Overall, a text analysis of complaints across constituencies reflected the everyday struggles of the city, with the top keywords 'roads' and 'vehicles', followed by 'footpaths' and 'garbage', both of which are critical problems in Bangalore City today.
    Comment: 5 pages, Technical Article, Report originally written in 201
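    The text analysis of complaints described above amounts to a keyword-frequency count; the sketch below illustrates the idea on hypothetical sample complaints. The sample texts and stopword list are placeholders for illustration, not material from the paper.

        # Minimal sketch of a complaint keyword analysis, similar in spirit to the
        # text analysis described above. The sample complaints and stopword list
        # are illustrative placeholders, not data from the paper.
        from collections import Counter
        import re

        complaints = [
            "Potholes on main roads, vehicles get damaged daily",
            "Footpaths blocked by parked vehicles near the market",
            "Garbage not collected for a week, roads are littered",
            "Street lights not working, footpaths unsafe at night",
        ]

        stopwords = {"on", "the", "by", "for", "a", "not", "are", "get", "at", "near"}

        def top_keywords(texts, k=5):
            """Count word frequencies across complaint texts, ignoring stopwords."""
            words = []
            for text in texts:
                tokens = re.findall(r"[a-z]+", text.lower())
                words.extend(t for t in tokens if t not in stopwords)
            return Counter(words).most_common(k)

        print(top_keywords(complaints))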

    Automatic Task Parallelization of Dataflow Graphs in ML/DL models

    Several methods exist today to accelerate Machine Learning (ML) or Deep Learning (DL) model performance for training and inference. However, modern techniques based on various graph and operator parallelism methodologies rely on search-space optimizations that are costly in terms of power and hardware usage. Especially in the case of inference, when the batch size is 1 and execution is on CPUs or on power-constrained edge devices, current techniques can become costly, complicated, or inapplicable. To ameliorate this, we present a Critical-Path-based Linear Clustering approach to exploit inherent parallel paths in ML dataflow graphs. Our task parallelization approach further optimizes the structure of graphs via cloning and prunes them via constant propagation and dead-code elimination. Contrary to other work, we generate readable and executable parallel PyTorch+Python code from input ML models in ONNX format via a new tool that we have built called Ramiel. This allows us to benefit from other downstream acceleration techniques like intra-op parallelism and potentially pipeline parallelism. Our preliminary results on several ML graphs demonstrate up to 1.9× speedup over serial execution and outperform some of the current mechanisms in both compile time and runtime. Lastly, our methods are lightweight and fast enough to be used effectively on power- and resource-constrained devices, while still enabling downstream optimizations.
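    To illustrate the general idea of exposing parallel paths in a dataflow graph, the sketch below groups operators of a toy ONNX-like graph by topological level, so that operators in the same level have no mutual dependency and could be dispatched concurrently. This is only an assumption-laden illustration, not the paper's Critical-Path-based Linear Clustering algorithm or the Ramiel tool.

        # Minimal sketch: group operators of a small dataflow graph by topological
        # level; nodes sharing a level are independent and may run in parallel.
        # The graph below is a hypothetical example, not taken from the paper.
        from collections import defaultdict

        graph = {
            "input": ["conv1", "conv2"],
            "conv1": ["relu1"],
            "conv2": ["relu2"],
            "relu1": ["concat"],
            "relu2": ["concat"],
            "concat": [],
        }

        def topological_levels(g):
            """Assign each node the length of its longest path from a source node."""
            indegree = defaultdict(int)
            for node, succs in g.items():
                for s in succs:
                    indegree[s] += 1
            level = {n: 0 for n in g if indegree[n] == 0}
            frontier = list(level)
            while frontier:
                node = frontier.pop()
                for s in g[node]:
                    level[s] = max(level.get(s, 0), level[node] + 1)
                    indegree[s] -= 1
                    if indegree[s] == 0:
                        frontier.append(s)
            groups = defaultdict(list)
            for node, lvl in level.items():
                groups[lvl].append(node)
            return dict(groups)

        print(topological_levels(graph))  # conv1 and conv2 end up in the same level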

    Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation, and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation, and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
    Comment: (Under review
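    For reference, the sketch below shows the conventional, discrete CD-1 weight update that the event-driven variant replaces. Network sizes and the learning rate are illustrative assumptions, and the spiking dynamics and STDP machinery of the paper are not reproduced here.

        # Minimal sketch of one standard CD-1 weight update for a tiny RBM in NumPy,
        # shown for reference; the paper replaces these discrete steps with
        # spike-driven dynamics and STDP, which are not reproduced here.
        import numpy as np

        rng = np.random.default_rng(0)
        n_visible, n_hidden, lr = 6, 4, 0.1
        W = rng.normal(0, 0.01, size=(n_visible, n_hidden))

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def cd1_step(v0, W):
            """One Contrastive Divergence step: positive phase, reconstruction,
            negative phase, then the weight gradient."""
            h0_prob = sigmoid(v0 @ W)
            h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
            v1_prob = sigmoid(h0 @ W.T)
            h1_prob = sigmoid(v1_prob @ W)
            return np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob)

        v0 = rng.integers(0, 2, size=n_visible).astype(float)
        W += lr * cd1_step(v0, W)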

    Gibbs Sampling with Low-Power Spiking Digital Neurons

    Restricted Boltzmann Machines and Deep Belief Networks have been successfully used in a wide variety of applications, including image classification and speech recognition. Inference and learning in these algorithms use a Markov Chain Monte Carlo procedure called Gibbs sampling. A sigmoidal function forms the kernel of this sampler, which can be realized from the firing statistics of noisy integrate-and-fire neurons on a neuromorphic VLSI substrate. This paper demonstrates such an implementation on an array of digital spiking neurons with stochastic leak and threshold properties for inference tasks, and presents some key performance metrics for such a hardware-based sampler in both the generative and discriminative contexts.
    Comment: Accepted at ISCAS 201
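    As an illustration of the sigmoidal sampling kernel mentioned above, the sketch below shows how a unit with a noise-jittered threshold fires with sigmoid probability of its input drive. The logistic noise model and parameters are assumptions for illustration, not the actual noise statistics of the hardware described in the paper.

        # Minimal sketch: a unit whose threshold is jittered by logistic noise fires
        # with probability sigmoid(drive / noise_scale), i.e. it realizes the
        # sigmoidal kernel of a Gibbs sampler. Parameters are illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def noisy_unit_fires(drive, noise_scale=1.0, trials=100000):
            """Empirical firing probability of a noise-jittered threshold unit."""
            noise = rng.logistic(0.0, noise_scale, size=trials)
            return np.mean(drive + noise > 0)

        for drive in (-2.0, 0.0, 2.0):
            print(drive, noisy_unit_fires(drive), sigmoid(drive))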