LIPIcs, Volume 251, ITCS 2023, Complete Volume
Scaling Up Task Execution on Resource-Constrained Systems
The ubiquity of machine learning tasks on embedded systems has made efficient execution of neural networks under tight CPU, memory, and energy constraints increasingly important. Unlike high-end computing systems, where resources are abundant and reliable, resource-constrained systems have only limited computational capability, limited memory, and a limited energy supply. This dissertation focuses on taking full advantage of the limited resources of these systems to improve task execution efficiency across different stages of the execution pipeline. While the existing literature primarily shrinks the model size to fit the resource constraints, this dissertation improves execution efficiency for a given set of tasks in two ways. First, we propose SmartON, the first batteryless active event detection system that considers both the event arrival pattern and the harvested energy to determine when the system should wake up and what the duty cycle should be. Second, we propose Antler, which exploits the affinity between all pairs of tasks in a multitask inference system to construct a compact graph representation of the task set for a given overall size budget. To support these algorithmic proposals, we propose two hardware solutions: a controllable capacitor array that can expand the system’s energy storage on the fly, and a FRAM array that can accommodate multiple neural networks running on one system.
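The pairwise task-affinity idea behind Antler can be pictured with a minimal sketch. Everything below is an illustrative assumption, not the dissertation's actual design: affinity is modelled here as the length of the layer prefix two per-task networks share, and all names are hypothetical.

```python
# Hypothetical sketch of the pairwise task-affinity idea: tasks whose
# networks share a long common prefix of layers are good candidates for
# sharing a backbone in a compact multitask graph. Illustrative only.

def shared_prefix_len(layers_a, layers_b):
    """Count leading layers two task networks have in common."""
    n = 0
    for a, b in zip(layers_a, layers_b):
        if a != b:
            break
        n += 1
    return n

def affinity_matrix(tasks):
    """tasks: dict mapping task name -> list of layer signatures."""
    names = list(tasks)
    return {
        (i, j): shared_prefix_len(tasks[i], tasks[j])
        for i in names for j in names if i < j
    }

# Example: three tasks whose networks share early feature extractors.
tasks = {
    "kws":  ["conv1", "conv2", "fc_kws"],    # keyword spotting
    "vad":  ["conv1", "conv2", "fc_vad"],    # voice activity detection
    "gest": ["conv1", "conv2b", "fc_gest"],  # gesture recognition
}
print(affinity_matrix(tasks))
# {('gest', 'kws'): 1, ('gest', 'vad'): 1, ('kws', 'vad'): 2}
```

High-affinity pairs would be merged first when constructing the task graph under the overall size budget.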
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume were either published or presented at international conferences, seminars, and workshops or in journals after the fourth volume was disseminated in 2015, or they are new. The contributions in each part of this volume are ordered chronologically.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of the (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
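For readers unfamiliar with the PCR rules mentioned above, the core idea of PCR5 is to redistribute each piece of conflicting mass back to the two focal elements that produced it, proportionally to their masses. Below is a minimal, unofficial Python sketch for two sources over a small frame; the frozenset representation and all names are assumptions made here for illustration (the book itself provides Matlab codes).

```python
# Minimal sketch of the PCR5 combination rule for two sources of evidence.
# Focal elements are frozensets over the frame of discernment.
from itertools import product

def pcr5(m1, m2):
    out = {}
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            # Consensus: mass goes to the intersection (conjunctive rule).
            out[C] = out.get(C, 0.0) + a * b
        elif a + b > 0:
            # Conflict: redistribute a*b back to A and B proportionally,
            # i.e. A gets a^2*b/(a+b) and B gets a*b^2/(a+b).
            out[A] = out.get(A, 0.0) + a * a * b / (a + b)
            out[B] = out.get(B, 0.0) + a * b * b / (a + b)
    return out

# Two conflicting sources over the frame {t, f}.
t, f, tf = frozenset("t"), frozenset("f"), frozenset("tf")
m1 = {t: 0.6, tf: 0.4}
m2 = {f: 0.7, tf: 0.3}
print(pcr5(m1, m2))  # conflicting mass flows back to t and f, not to tf
```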
Because more applications of DSmT have emerged since the appearance of the fourth DSmT book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and a network for ship classification.
Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions relate to decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of a belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
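Among the probability transformations mentioned above, the classical pignistic transform BetP converts a belief mass assignment into a probability distribution by splitting each focal set's mass equally among its elements. A minimal sketch, assuming a normalized mass assignment with no mass on the empty set and reusing the frozenset representation from the PCR5 sketch above:

```python
# Pignistic transformation: BetP(x) = sum over focal sets A containing x
# of m(A) / |A|, assuming m is normalized and m(empty set) = 0.
def betp(m):
    p = {}
    for A, mass in m.items():
        for x in A:
            p[x] = p.get(x, 0.0) + mass / len(A)
    return p

m = {frozenset("a"): 0.5, frozenset("ab"): 0.3, frozenset("abc"): 0.2}
print(betp(m))  # {'a': ~0.717, 'b': ~0.217, 'c': ~0.067}
```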
Developmental Bootstrapping of AIs
Although some current AIs surpass human abilities in closed artificial worlds such as board games, their abilities in the real world are limited. They make strange mistakes and do not notice them. They cannot be instructed easily, fail to use common sense, and lack curiosity. They do not make good collaborators. Mainstream approaches for creating AIs are the traditional manually constructed symbolic AI approach and generative and deep learning AI approaches, including large language models (LLMs). These systems are not well suited for creating robust and trustworthy AIs. Although it is outside of the mainstream, the developmental bootstrapping approach has more potential. In developmental bootstrapping, AIs develop competences like human children do. They start with innate competences. They interact with the environment and learn from their interactions. They incrementally extend their innate competences with self-developed competences. They interact with and learn from people and establish perceptual, cognitive, and common grounding. They acquire the competences they need through bootstrapping. However, developmental robotics has not yet produced AIs with robust adult-level competences. Projects have typically stopped at the Toddler Barrier, corresponding to human infant development at about two years of age, before speech is fluent. They also do not bridge the Reading Barrier, to skillfully and skeptically draw on the socially developed information resources that power current LLMs. The next competences in human cognitive development involve intrinsic motivation, imitation learning, imagination, coordination, and communication. This position paper lays out the logic, prospects, gaps, and challenges for extending the practice of developmental bootstrapping to acquire further competences and create robust, resilient, and human-compatible AIs.
Pushing the boundaries of photoconductive sampling in solids
The advent of laser-based optical tools featuring few-cycle pulses with durations of less than a hundred femtoseconds in the late 1980s enabled scientists to initiate and observe the evolution of chemical reactions. This powerful approach combined the interactions of light and matter and unleashed an unprecedented metrology concept that tracks the interactions of atoms and molecules on their natural timescales. Electron wavepacket dynamics take place in the attosecond range, a thousand times faster than molecular dynamics. In optical terms, such durations are shorter than the half-cycle of optical fields. Consequently, the investigation of such electronic processes necessitates measurement techniques capable of resolving the oscillations of the electric field of light. The primary objective of this thesis is to develop and advance novel field characterisation techniques based on photoconductive sampling.
The first portion of this thesis addresses broadband field characterisation based on nonlinear photoconductive sampling. A theoretical analysis of current formation and localisation in solids is presented, prompting the fabrication of a heterostructured sample with the aim of enhancing the magnitude of the signal obtained from the measurement technique. A thorough proof-of-principle experiment is performed, whereby a significant enhancement in signal magnitude is established. As a consequence of signal improvement, the heterostructured sample reaches the desired stability regime earlier than its traditional bulk counterparts. Moreover, the performance of the heterostructured sample for field characterisation is compared to fused silica and benchmarked against the well-established technique of electro-optic sampling. These results pave the way towards field sampling in low pulse energy systems.
The following section details broadband field characterisation based on linear photoconductive sampling employing tailored pulses from a waveform synthesiser. Visible-ultraviolet pulses are utilised to inject carriers into a common semiconducting material (gallium phosphide), enabling the complete characterisation of a mid-infrared test field. The technique is validated against electro-optic sampling. In contrast to electro-optic sampling, the response function of linear photoconductive sampling depends on the intensity envelope of the gating field rather than its temporal phase, relaxing the strict requirements on the gate. The demonstrated results represent a significant achievement in extending field sampling techniques beyond 100 THz and towards the visible range.
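The intensity-envelope response described above can be pictured with a toy numerical model: the measured signal versus delay is the cross-correlation of the test field with the gate-pulse intensity envelope. The pulse parameters below are arbitrary assumptions chosen for illustration, not the experimental values from the thesis.

```python
# Toy model of linear photoconductive sampling: the delay-dependent signal
# S(tau) = integral E(t) * I_gate(t - tau) dt is the cross-correlation of
# the test field with the gate intensity envelope. Parameters are
# illustrative assumptions only.
import numpy as np

t = np.linspace(-200e-15, 200e-15, 4000)       # time grid (s)
f_test = 30e12                                 # 30 THz mid-infrared test field
test_field = np.exp(-(t / 80e-15) ** 2) * np.cos(2 * np.pi * f_test * t)
gate_intensity = np.exp(-(t / 5e-15) ** 2)     # few-fs gate envelope

signal = np.correlate(test_field, gate_intensity, mode="same")
# When the gate is much shorter than the field's half-cycle (~17 fs here),
# S(tau) faithfully tracks the oscillations of E(tau) itself.
```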
Finally, a machine learning-based algorithm for denoising waveforms obtained in a laboratory setting is developed and implemented. The algorithm is based on a one-dimensional convolutional neural network, which is well suited to data sampled on an evenly spaced grid. The model is compared with well-established methodologies, namely denoising via the fast Fourier transform and wavelet analysis, and exhibits excellent performance, extending the repertoire of tools typically used for combating noise.
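A minimal PyTorch sketch of a one-dimensional convolutional denoiser of the kind described above is given below; the layer sizes, training loop, and all names are assumptions for illustration, not the architecture from the thesis.

```python
# Hypothetical 1D convolutional denoiser: maps a noisy sampled waveform
# to a cleaned one of the same length. Illustrative sketch only.
import torch
import torch.nn as nn

class WaveformDenoiser(nn.Module):
    def __init__(self, channels=16, kernel=9):
        super().__init__()
        pad = kernel // 2  # same-padding keeps the grid length fixed
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel, padding=pad),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=pad),
            nn.ReLU(),
            nn.Conv1d(channels, 1, kernel, padding=pad),
        )

    def forward(self, x):  # x: (batch, 1, samples)
        return self.net(x)

# Train on synthetic noisy/clean waveform pairs (illustrative only).
model = WaveformDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.sin(torch.linspace(0, 20, 512)).reshape(1, 1, -1)
noisy = clean + 0.1 * torch.randn_like(clean)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    opt.step()
```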
The field characterisation methodologies presented in this thesis pave the way towards accessible and cost-effective field sampling techniques, enabling researchers to study field-induced electron dynamics in matter and pushing ultrafast optoelectronic signal processing towards the PHz range. The techniques presented occupy a small footprint, and the measurements take place in ambient air, facilitating their integration into existing experimental infrastructure. With the aid of AI-accelerator chips, the machine learning tool developed in this thesis could be deployed during laboratory measurements as a concurrent denoising technique.