1,045 research outputs found
Protecting Privacy in Indian Schools: Regulating AI-based Technologies' Design, Development and Deployment
Education is a priority area for the Indian government, where Artificial Intelligence (AI) technologies are touted to bring digital transformation. Several Indian states have started deploying facial recognition-enabled CCTV cameras, emotion recognition technologies, fingerprint scanners, and radio-frequency identification (RFID) tags in their schools to provide personalised recommendations, ensure student security, and predict student drop-out rates, but also to build 360-degree profiles of students. Further, integrating Aadhaar (a digital identity card that works on biometric data) across AI technologies and learning management systems (LMS) renders schools a "panopticon".
Certain technologies and systems, such as Aadhaar, CCTV cameras, GPS systems, RFID tags, and learning management systems, are used primarily for continuous data collection, storage, and retention. Though they cannot be termed AI technologies per se, they are fundamental to designing and developing AI systems such as facial, fingerprint, and emotion recognition technologies. The large volumes of student data collected rapidly through the former technologies are used to train the algorithms underlying the latter AI systems. Once these algorithms are trained using machine learning (ML) techniques, they learn correlations across multiple datasets to predict each student's identity, decisions, grades, learning growth, tendency to drop out, and other behavioural characteristics. Such autonomous and repetitive collection, processing, storage, and retention of student data without effective data protection legislation endangers student privacy.
The algorithmic predictions of AI technologies are an avatar of the data fed into the system. An AI technology is only as good as the person collecting the data, processing it into a relevant and valuable output, and regularly evaluating the inputs going into the AI model. An AI model can produce inaccurate predictions if that person overlooks relevant data. However, the state's, school administrations', and parents' belief in AI technologies as a panacea for student security and educational development overlooks the context in which "data practices" are conducted. The right to privacy in the AI age is inextricably connected to the data practices through which data gets "cooked". Thus, data protection legislation that operates without understanding and regulating such data practices will remain ineffective in safeguarding privacy.
The thesis undertakes interdisciplinary research to better understand the interplay between the data practices of AI technologies and the social practices of an Indian school, which present Indian data protection legislation overlooks, endangering students' privacy from the design and development stages through to the deployment of an AI model. The thesis recommends that the Indian legislature frame legislation better equipped for the AI/ML age, and offers guidance to the Indian judiciary on evaluating the legality and reasonableness of designing, developing, and deploying such technologies in schools.
Guided rewriting and constraint satisfaction for parallel GPU code generation
Graphics Processing Units (GPUs) are notoriously hard to optimise for manually due to their scheduling and memory hierarchies. What is needed are good automatic code generators and optimisers for such parallel hardware. Functional approaches such as Accelerate, Futhark and LIFT leverage a high-level algorithmic Intermediate Representation (IR) to expose parallelism and abstract implementation details away from the user. However, producing efficient code for a given accelerator remains challenging. Existing code generators depend either on user input to choose from a subset of hard-coded optimisations or on automated exploration of the implementation search space. The former suffers from a lack of extensibility, while the latter is too costly due to the size of the search space. A hybrid approach is needed, where a space of valid implementations is built automatically and explored with the aid of human expertise.
This thesis presents a solution combining user-guided rewriting and automatically generated constraints to produce high-performance code. The first contribution is an automatic tuning technique to find a balance between performance and memory consumption. Leveraging its functional patterns, the LIFT compiler is empowered to infer tuning constraints and limit the search to valid tuning combinations only.
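As an illustration of the idea, not LIFT's actual interface, the following Python sketch prunes a tuning space with an inferred memory constraint: candidate tile and workgroup sizes whose local-memory footprint would exceed the device budget are rejected before any benchmarking. The parameter names and the 48 KiB limit are assumptions.

```python
from itertools import product

# Illustrative tuning parameters; names and the limit are assumptions, not LIFT's.
TILE_SIZES = [4, 8, 16, 32]
WORKGROUP_SIZES = [32, 64, 128, 256]
LOCAL_MEM_LIMIT = 48 * 1024  # bytes of GPU local memory (typical budget)

def local_mem_footprint(tile: int, wg: int, elem_bytes: int = 4) -> int:
    """Bytes of local memory a candidate allocates: one tile*tile buffer per work-item."""
    return tile * tile * wg * elem_bytes

def valid_tuning_points():
    """Enumerate only the combinations satisfying the inferred memory constraint."""
    for tile, wg in product(TILE_SIZES, WORKGROUP_SIZES):
        if local_mem_footprint(tile, wg) <= LOCAL_MEM_LIMIT:
            yield tile, wg

print(list(valid_tuning_points()))  # the pruned space handed to the tuner
```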
Next, the thesis reframes parallelisation as a constraint satisfaction problem. Parallelisation constraints are extracted automatically from the input expression, and a solver is used to identify valid rewritings. The constraints prune the search space to valid parallel mappings only by capturing the scheduling restrictions of the GPU in the context of a given program. A synchronisation barrier insertion technique is proposed to prevent data races and improve the efficiency of the generated parallel mappings.
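To make the constraint-satisfaction framing concrete, here is a minimal sketch assuming hand-written toy constraints in place of the ones the thesis extracts automatically from LIFT expressions: nested loops are assigned to GPU mapping levels by a backtracking solver that discards any assignment violating the scheduling rules.

```python
# Each nested loop must be mapped to one GPU level (illustrative constraint model).
LEVELS = ["workgroup", "local", "sequential"]
LOOPS = ["outer", "middle", "inner"]

def violates(assignment: dict) -> bool:
    """Toy scheduling rules standing in for real GPU restrictions (assumptions):
    - 'workgroup' may not appear nested inside 'local'
    - at most one loop per parallel level
    """
    order = [assignment[l] for l in LOOPS if l in assignment]
    for i, outer in enumerate(order):
        for inner in order[i + 1:]:
            if outer == "local" and inner == "workgroup":
                return True
    return any(order.count(level) > 1 for level in ("workgroup", "local"))

def solve(assignment=None, i=0):
    """Backtracking search enumerating only valid parallel mappings."""
    assignment = assignment or {}
    if i == len(LOOPS):
        yield dict(assignment)
        return
    for level in LEVELS:
        assignment[LOOPS[i]] = level
        if not violates(assignment):
            yield from solve(assignment, i + 1)
        del assignment[LOOPS[i]]

print(sum(1 for _ in solve()), "valid mappings")
```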
The final contribution of this thesis is the guided rewriting method, where the user encodes a design space of structural transformations using high-level IR nodes called rewrite points. These strongly typed pragmas express macro rewrites and expose design choices as explorable parameters. The thesis proposes a small set of reusable rewrite points to achieve tiling, cache locality, data reuse and memory optimisation.
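As a hedged sketch only (the names TileRewritePoint, candidate_sizes and apply are invented, and the real rewrite points are nodes of LIFT's high-level IR, not Python objects), the following conveys the flavour of a strongly typed rewrite point: a tiling macro-rewrite whose tile size is a declared, explorable parameter rather than a hard-coded constant.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class TileRewritePoint:
    """A macro-rewrite exposing the tile size as an explorable design parameter."""
    candidate_sizes: List[int]

    def apply(self, xs: list, tile: int) -> list:
        assert tile in self.candidate_sizes, "tile size outside declared design space"
        # Structural transformation: split a flat sequence into tiles.
        return [xs[i:i + tile] for i in range(0, len(xs), tile)]

rp = TileRewritePoint(candidate_sizes=[2, 4, 8])
for tile in rp.candidate_sizes:          # the explorer walks the declared space
    print(tile, rp.apply(list(range(8)), tile))
```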
A comparison with the vendor-provided handwritten kernels of the ARM Compute Library and the TVM code generator demonstrates the effectiveness of this thesis's contributions. With convolution as a use case, LIFT-generated direct and GEMM-based convolution implementations are shown to perform on par with state-of-the-art solutions on a mobile GPU. Overall, this thesis demonstrates that a functional IR lends itself well to user-guided and automatic rewriting for high-performance code generation.
Tools for efficient Deep Learning
In the era of Deep Learning (DL), there is fast-growing demand for building and deploying Deep Neural Networks (DNNs) on various platforms. This thesis proposes five tools to address the challenges of designing DNNs that are efficient in time, resources, and power consumption.
We first present Aegis and SPGC to address the challenges of improving the memory efficiency of DL training and inference. Aegis makes mixed precision training (MPT) more stable through layer-wise gradient scaling. Empirical experiments show that Aegis can improve MPT accuracy by up to 4%. SPGC focuses on structured pruning: replacing standard convolution with group convolution (GConv) to avoid irregular sparsity. SPGC formulates GConv pruning as a channel permutation problem and proposes a novel heuristic polynomial-time algorithm. Common DNNs pruned by SPGC achieve up to 1% higher accuracy than prior work.
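SPGC's actual algorithm is not reproduced here; the following sketch, with invented helper names, merely illustrates the channel permutation formulation: pruning a standard convolution to GConv keeps only block-diagonal weights, so the goal is a channel ordering that concentrates weight magnitude inside those blocks.

```python
import numpy as np

def block_diag_mass(W: np.ndarray, perm: np.ndarray, groups: int) -> float:
    """Fraction of total weight magnitude kept when, after permuting output
    channels by `perm`, everything outside the block-diagonal groups is pruned."""
    out_ch, in_ch = W.shape
    ro, ci = out_ch // groups, in_ch // groups
    kept = sum(
        np.abs(W[perm[g * ro:(g + 1) * ro]][:, g * ci:(g + 1) * ci]).sum()
        for g in range(groups)
    )
    return kept / np.abs(W).sum()

# Toy heuristic (not SPGC's): cluster output channels by strongest input connection.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))                      # an 8x8 layer, illustrative size
heuristic_perm = np.argsort(np.abs(W).argmax(axis=1))
identity_perm = np.arange(8)

print("identity :", round(block_diag_mass(W, identity_perm, groups=2), 3))
print("heuristic:", round(block_diag_mass(W, heuristic_perm, groups=2), 3))
```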
This thesis also addresses the challenges in the gap between DNN descriptions and executables, with Polygeist for software and POLSCA for hardware. Novel techniques, e.g. statement splitting and memory partitioning, are explored and used to extend polyhedral optimisation. Polygeist speeds up software execution by 2.53 times sequentially and 9.47 times in parallel on Polybench/C. POLSCA achieves a 1.5 times speedup over hardware designs directly generated by high-level synthesis on Polybench/C.
Moreover, this thesis presents Deacon, a framework that generates FPGA-based DNN accelerators with streaming architectures and advanced pipelining techniques to address the challenges posed by heterogeneous convolutions and residual connections. Deacon provides fine-grained pipelining, graph-level optimisation, and heuristic exploration by graph colouring. Compared with prior designs, Deacon shows resource/power consumption efficiency improvements of 1.2x/3.5x for MobileNets and 1.0x/2.8x for SqueezeNets.
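The abstract does not detail how Deacon applies graph colouring; as a hedged sketch of one plausible use (an assumption, not the thesis's method), the snippet below greedily colours a conflict graph so that layers whose outputs are alive at the same time never share an on-chip buffer.

```python
# Minimal greedy graph colouring. Colours stand for shared on-chip buffers; an
# edge joins two layers whose outputs are alive simultaneously (assumed model).
CONFLICTS = {
    "conv1": {"conv2", "res_add"},
    "conv2": {"conv1", "conv3"},
    "conv3": {"conv2", "res_add"},
    "res_add": {"conv1", "conv3"},
}

def greedy_colouring(conflicts: dict) -> dict:
    colour = {}
    for node in sorted(conflicts, key=lambda n: -len(conflicts[n])):
        used = {colour[n] for n in conflicts[node] if n in colour}
        colour[node] = next(c for c in range(len(conflicts)) if c not in used)
    return colour

print(greedy_colouring(CONFLICTS))  # layers sharing a colour share a buffer
```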
All these tools are open source, and some have already gained public engagement. We believe they can make efficient deep learning applications easier to build and deploy.
Data ethics: building trust: how digital technologies can serve humanity
Data is the magic word of the 21st century, as oil was in the 20th century and electricity in the 19th. For citizens, data means support in daily life in almost all activities, from watch to laptop, from kitchen to car, from mobile phone to politics. For business and politics, data means power, dominance, winning the race. Data can be used for good and bad, for services and hacking, for medicine and the arms race. How can we build trust in this complex and ambiguous data world? How can digital technologies serve humanity? The 45 articles in this book represent a broad range of ethical reflections and recommendations in eight sections: a) Values, Trust and Law; b) AI, Robots and Humans; c) Health and Neuroscience; d) Religions for Digital Justice; e) Farming, Business, Finance; f) Security, War, Peace; g) Data Governance, Geopolitics; h) Media, Education, Communication. The authors and institutions come from all continents. The book serves as reading material for teachers, students, policy makers, politicians, businesses, hospitals, NGOs and religious organisations alike. It is an invitation to dialogue, debate and building trust!
The book is a continuation of the volume "Cyber Ethics 4.0", published in 2018 by the same editors.
Stabilization and Resuscitation of Newborns
The majority of newborns do not need medical interventions to manage the neonatal transition after birth. However, every year millions of newborns worldwide require respiratory support immediately after birth, and another considerable number of newborns additionally require extensive resuscitation including chest compressions and drug administration. Despite a significant increase in knowledge and development of enhanced therapy strategies over the past few years, morbidity and mortality caused by failures in neonatal transition remain an important health issue. The purpose of this reprint is to support or introduce novel concepts and add information in the area of "Stabilization and Resuscitation of Newborns", aiming to improve neonatal care and, as the major objective, to enhance neuro-developmental outcomes.
Long-term post-harvest field storage of sugar beet (Beta vulgaris subsp. vulgaris)
The post-harvest storage of the sugar beet crop in Sweden occurs in the field. The harvest of roots generally ends with the month of November, but the processing campaign can continue into February. The loss of quality of the stored roots during this period is economically important. This thesis groups the main mechanisms that result in loss of quality during post-harvest storage into two categories: plant health and the storage environment. It focuses on the plant-health dimension of mechanical properties, and the storage-environment dimensions of moisture and temperature. The relationship between key agronomic inputs and the mechanical properties and storability of sugar beet roots was investigated. Available nitrogen and water during the growing season were found to have little impact on mechanical properties. The storability of roots was found to decrease significantly when irrigation gave optimal soil water availability throughout the season, likely the result of an interaction with an unspecified dimension of plant health. The quantification of sugar beet root mechanical properties with a traditional handheld penetrometer applied in-field was found to be reliable. It was also found that the methods used in the analysis of mechanical properties could be expanded to include the apparent modulus of elasticity, and that fall tests can be used to assess dynamic impacts. A short, intense period of forced ventilation of a sugar beet bulk was found to dehydrate sugar beet roots in a predictable manner, increasing sucrose concentrations in a way that would lead to greater gross income. Computational Fluid Dynamics modelling of the temperature within a clamp proved to be possible and insightful; the fluid dynamics within the clamp are important to include in such modelling.
Short-term forecast techniques for energy management systems in microgrid applications
A Dissertation Submitted in Partial Fulfilment of the Requirements for the Degree of Doctor of Philosophy in Sustainable Energy Science and Engineering of the Nelson Mandela African Institution of Science and Technology.
In the 2015 Paris Agreement, 195 countries adopted a global climate agreement to limit the global average temperature rise to less than 2°C. Achieving the set targets involves increasing energy efficiency and embracing cleaner energy solutions. Although advances in computing and Internet of Things (IoT) technologies have been made, there is limited scientific research in this arena that tackles the challenges of implementing a low-cost IoT-based Energy Management System (EMS) with energy forecasting and user engagement, suitable for adoption by a layman in either an off-grid microgrid or a microgrid tied to a weak grid.
This study proposes an EMS approach for short-term forecasting and monitoring in hybrid microgrids in emerging countries. It addresses the typical submodules of an EMS, namely the load forecast, blackout forecast, and energy monitoring modules. A short-term load forecast model framework consisting of a hybrid feature selection and prediction model was developed. The prediction error of the developed model was evaluated by varying input predictors and using the principal subset features to perform supervised training of 20 different conventional prediction models and their hybrid variants. The proposed principal k-features subset union approach registered lower error values than standard feature selection methods when used with the "linear Support Vector Machine (SVM)" prediction model for load forecasting. The hybrid regression model formed by fusing the best two models ("linearSVM" and "cubicSVM") showed better prediction performance than the individual regression models, reducing the Mean Absolute Error (MAE) by 5.4%.
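As a hedged illustration of such a fusion (the abstract does not specify the exact scheme; the synthetic data, the simple prediction average, and the reading of "cubicSVM" as a degree-3 polynomial kernel are all assumptions), a minimal scikit-learn sketch:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

# Synthetic hourly load series (stand-in for real microgrid data).
rng = np.random.default_rng(42)
hours = np.arange(24 * 60)
load = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Lagged features: the previous 24 hours predict the next hour's load.
X = np.stack([load[i:i + 24] for i in range(load.size - 24)])
y = load[24:]
split = int(0.8 * len(y))

linear_svm = SVR(kernel="linear").fit(X[:split], y[:split])
cubic_svm = SVR(kernel="poly", degree=3).fit(X[:split], y[:split])

# Hybrid: average the two predictions (the thesis's fusion rule is an assumption).
pred_lin = linear_svm.predict(X[split:])
pred_cub = cubic_svm.predict(X[split:])
pred_hyb = (pred_lin + pred_cub) / 2

for name, pred in [("linearSVM", pred_lin), ("cubicSVM", pred_cub), ("hybrid", pred_hyb)]:
    print(f"{name:9s} MAE = {mean_absolute_error(y[split:], pred):.2f}")
```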
For the EMS blackout prediction aspect, a hybrid Adaptive Similar Day (ASD) and Random Forest (RF) model for short-term power outage prediction was proposed. It accurately predicted almost half of the blackouts (49.16%), performing slightly better than the stand-alone RF (32.23%) and ASD (46.57%) models. Additionally, a low-cost EMS smart meter was developed to realise the implemented energy forecast and offer user engagement through monitoring and control of the microgrid, towards the goal of increasing energy efficiency.
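The following sketch shows one plausible ASD/RF hybrid on synthetic data; the combination rule, the similar-day distance metric, and all names here are assumptions, not the thesis's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic day-level data: weather/load features plus a daily outage label.
rng = np.random.default_rng(7)
X = rng.normal(size=(365, 4))                               # daily features (stand-in)
y = (X[:, 0] + rng.normal(0, 0.5, 365) > 1.2).astype(int)   # 1 = blackout day

train, test = slice(0, 300), slice(300, 365)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[train], y[train])

def similar_day_vote(x, X_hist, y_hist, k=5):
    """Naive similar-day rule: majority outage label of the k nearest past days."""
    nearest = np.argsort(np.linalg.norm(X_hist - x, axis=1))[:k]
    return int(y_hist[nearest].mean() >= 0.5)

asd_pred = np.array([similar_day_vote(x, X[train], y[train]) for x in X[test]])
rf_pred = rf.predict(X[test])
hybrid = np.maximum(asd_pred, rf_pred)   # flag an outage if either model does

for name, p in [("ASD", asd_pred), ("RF", rf_pred), ("hybrid", hybrid)]:
    hits = (p[y[test] == 1] == 1).mean()  # share of actual blackouts predicted
    print(f"{name:6s} share of blackouts predicted: {hits:.2%}")
```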
Critical Heritage Studies and the Futures of Europe
Cultural and natural heritage are central to "Europe" and "the European project". They were bound up in the emergence of nation-states in the eighteenth and nineteenth centuries, where they were used to justify differences over which border conflicts were fought. Later, the idea of a "common European heritage" provided a rationale for the development of the European Union. Now, the emergence of "new" populist nationalisms shows how the imagined past continues to play a role in cultural and social governance, while a series of interlinked social and ecological crises are changing the ways that heritage operates. New discourses and ontologies are emerging to reconfigure heritage for the circumstances of the present and the uncertainties of the future.
Taking the current role of heritage in Europe as its starting point, Critical Heritage Studies and the Futures of Europe presents a number of case studies that explore key themes in this transformation. Contributors draw on a range of disciplinary perspectives to consider, variously, the role of heritage and museums in the migration and climate "emergencies"; approaches to urban heritage conservation and practices of curating cities; digital and digitised heritage; the use of heritage as a therapeutic resource; and critical approaches to heritage and its management. Taken together, the chapters explore the multiple ontologies through which cultural and natural heritage have actively intervened in redrawing the futures of Europe and the world.
- …