Data literacy in the smart university approach
Equipping classrooms with inexpensive sensors for data collection can give students and teachers the opportunity to interact with the classroom in a smart way. In this paper, two approaches to acquiring contextual data from a classroom environment are presented. We further present our approach to analysing the collected room usage data on site, using low-cost single-board computers such as Raspberry Pi and Arduino units to perform a significant part of the data analysis. We demonstrate how the usage data was used to model specific room usage situations as cases in a case-based reasoning (CBR) system. The room usage data was then integrated into a room recommender system that reasons on the formalised usage data, allowing for a convenient and intuitive end-user experience based on the collected raw sensor data. Having implemented and tested our approaches, we are currently investigating the possibility of using (XML) Schema-informed compression to enhance the security and efficiency of transmitting the large number of sensor reports, generated by interpreting the raw data on site, to our central data sink. We are investigating this new approach to usage data transmission as we aim to integrate our ongoing work into our vision of the Smart University, ensuring and enhancing the Smart University's data literacy.
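The CBR step described above can be illustrated with a minimal retrieval sketch: stored room-usage situations act as cases, and a new sensor reading is matched to its nearest case. The feature names, case values, and labels below are invented for illustration, not taken from the paper.

```python
# Minimal case-based reasoning sketch: retrieve the stored room-usage case
# closest to a new sensor reading (1-nearest-neighbour retrieval).
# Features, values, and labels are illustrative only.
import math

case_base = [
    ({"temperature": 22.5, "co2": 900, "motion": 1}, "lecture in progress"),
    ({"temperature": 20.0, "co2": 450, "motion": 0}, "room empty"),
    ({"temperature": 23.0, "co2": 700, "motion": 1}, "small group meeting"),
]

def distance(a, b):
    # Euclidean distance over the shared numeric features.
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def retrieve(query):
    # Return the label of the nearest stored case.
    return min(case_base, key=lambda case: distance(query, case[0]))[1]

print(retrieve({"temperature": 22.0, "co2": 860, "motion": 1}))
# → lecture in progress
```

A real system would normalise feature scales (here the CO2 reading dominates the distance) and add the adaptation and retention phases of the full CBR cycle.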
Context-aware Collaborative Neuro-Symbolic Inference in Internet of Battlefield Things
IoBTs must feature collaborative, context-aware, multi-modal fusion for real-time, robust decision-making in adversarial environments. The integration of machine learning (ML) models into IoBTs has been successful at solving these problems at a small scale (e.g., AiTR), but state-of-the-art ML models grow exponentially with the increasing temporal and spatial scale of modeled phenomena, and can thus become brittle, untrustworthy, and vulnerable when interpreting large-scale tactical edge data. To address this challenge, we need to develop principles and methodologies for uncertainty-quantified neuro-symbolic ML, where learning and inference exploit symbolic knowledge and reasoning in addition to multi-modal and multi-vantage sensor data. The approach features integrated neuro-symbolic inference, where symbolic context is used by deep learning, and deep learning models provide atomic concepts for symbolic reasoning. The incorporation of high-level symbolic reasoning improves data efficiency during training and makes inference more robust, interpretable, and resource-efficient. In this paper, we identify the key challenges in developing context-aware collaborative neuro-symbolic inference in IoBTs and review some recent progress in addressing these gaps.
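The division of labour described above, where deep models ground atomic concepts and a symbolic layer reasons over them, can be sketched very roughly as follows. The detector scores and the rule are invented stand-ins; a real system would use trained neural detectors and a proper logic engine with uncertainty quantification.

```python
# Illustrative neuro-symbolic sketch: stand-in "neural" detectors emit
# probabilities for atomic concepts, and a symbolic rule combines them.
# Concept names, scores, and the rule are invented for illustration.

def detect_concepts(sensor_frame):
    # Placeholder for deep-learning models that ground atomic symbols
    # from multi-modal sensor data.
    return {"vehicle": 0.92, "moving": 0.85, "friendly_iff": 0.10}

def threat_rule(concepts, threshold=0.5):
    # Symbolic layer, roughly: threat(x) :- vehicle(x), moving(x),
    # not friendly_iff(x), with each atom thresholded.
    return (concepts["vehicle"] > threshold
            and concepts["moving"] > threshold
            and concepts["friendly_iff"] <= threshold)

print(threat_rule(detect_concepts(None)))  # → True
```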
Visual analysis of sensor logs in smart spaces: Activities vs. situations
Models of human habits in smart spaces can be expressed using a multitude of representations, whose readability influences the possibility of their being validated by human experts. Our research is focused on developing a visual analysis pipeline (service) that allows human habits to be graphically visualized starting from the sensor log of a smart space. The basic assumption is to apply techniques borrowed from the area of business process automation and mining to a version of the sensor log preprocessed so as to translate raw sensor measurements into human actions. The proposed pipeline is employed to automatically extract models to be reused for ambient intelligence. In this paper, we present a user evaluation aimed at demonstrating the effectiveness of the approach, comparing it with a relevant state-of-the-art visual tool, namely SITUVIS.
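The preprocessing step mentioned above, translating raw sensor measurements into human actions before applying process-mining techniques, can be sketched as a simple event-to-action mapping. The sensor names, action labels, and mapping below are hypothetical, not taken from the paper.

```python
# Sketch of sensor-log preprocessing: map raw sensor events to human
# actions, discarding events with no action-level meaning.
# Sensor names and the mapping are illustrative only.

EVENT_TO_ACTION = {
    ("kettle_plug", "on"): "prepare hot drink",
    ("fridge_door", "open"): "get food",
    ("bed_pressure", "on"): "go to sleep",
}

def to_action_log(sensor_log):
    # Keep only events that map to a known human action.
    return [(ts, EVENT_TO_ACTION[(sensor, value)])
            for ts, sensor, value in sensor_log
            if (sensor, value) in EVENT_TO_ACTION]

log = [("08:01", "kettle_plug", "on"),
       ("08:02", "hallway_pir", "on"),
       ("08:05", "fridge_door", "open")]
print(to_action_log(log))
# → [('08:01', 'prepare hot drink'), ('08:05', 'get food')]
```

The resulting action log is in a form that process-mining tools can consume directly as an event log.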
Validating the INTERPRETOR Software Architecture for the Interpretation of Large and Noisy Data Sets
In this chapter, the authors validate the INTERPRETOR software architecture as a dataflow model of computation for filtering, abstracting, and interpreting large and noisy datasets, with two detailed empirical studies from the authors' former research endeavours. Also discussed are five further recent and distinct systems that can be tailored or adapted to use the software architecture. The detailed case studies presented are from two disparate domains: intensive care unit data and building sensor data. By performing pattern mining on these five further systems in the way suggested herein, the authors argue that the INTERPRETOR software architecture has been validated.
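A dataflow architecture of the filter/abstract/interpret kind named above can be sketched as three chained stages. The stages, thresholds, and data are invented for illustration and are not the INTERPRETOR implementation itself.

```python
# Minimal dataflow sketch in the spirit of a filter -> abstract ->
# interpret pipeline over noisy sensor readings.
# Thresholds and labels are illustrative only.

def filter_stage(samples, lo=0, hi=50):
    # Drop noisy or out-of-range readings.
    return [s for s in samples if lo <= s <= hi]

def abstract_stage(samples):
    # Abstract raw values into qualitative labels.
    return ["high" if s > 30 else "normal" for s in samples]

def interpret_stage(labels):
    # Interpret the abstracted stream: flag repeated "high" readings.
    return "alert" if labels.count("high") >= 3 else "ok"

raw = [21, 35, 999, 36, 34, -5, 22]
print(interpret_stage(abstract_stage(filter_stage(raw))))  # → alert
```

Because each stage consumes only the previous stage's output, the stages can be reimplemented or redistributed independently, which is the property the dataflow model of computation provides.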
Analyzing spacecraft configurations through specialization and default reasoning
For an intelligent system to describe a real-world situation using as few statements as possible, it is necessary to make inferences based on observed data and to incorporate general knowledge of the reasoning domain into the description. These reasoning processes must reduce several levels of specific descriptions into only those few that most precisely describe the situation. Moreover, the system must be able to generate descriptions in the absence of data, as instructed by certain rules of inference. The deductions applied by the system, then, generate a high-level description from the low-level evidence provided by the real and default data sources. An implementation of these ideas in a real-world situation is described. The application concerns evaluation of Space Shuttle electromechanical system configurations by console operators in the Mission Control Center. A production system provides the reasoning mechanism through which the default assignments and specializations occur. Examples of each type of inference are provided within this domain, and the suitability of each toward achieving the goal of describing a situation in the fewest statements possible is discussed. Finally, several enhancements are suggested that will further increase the intelligence of similar spacecraft monitoring applications.
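The default-assignment idea above, generating descriptions in the absence of data, can be sketched as observed telemetry overriding a table of defaults. The subsystem names and values are hypothetical and chosen only to illustrate the pattern.

```python
# Sketch of default reasoning: observed telemetry overrides defaults,
# and defaults fill in for missing data. Names/values are illustrative.

DEFAULTS = {"valve_a": "closed", "bus_b_power": "on", "heater": "off"}

def describe_configuration(observed):
    # Start from the default assignments, then apply observed data.
    config = dict(DEFAULTS)
    config.update(observed)
    return config

print(describe_configuration({"valve_a": "open"}))
# → {'valve_a': 'open', 'bus_b_power': 'on', 'heater': 'off'}
```

This keeps the description short in exactly the sense the abstract describes: only deviations from the defaults need to be stated explicitly.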
Learning Models for Following Natural Language Directions in Unknown Environments
Natural language offers an intuitive and flexible means for humans to
communicate with the robots that we will increasingly work alongside in our
homes and workplaces. Recent advancements have given rise to robots that are
able to interpret natural language manipulation and navigation commands, but
these methods require a prior map of the robot's environment. In this paper, we
propose a novel learning framework that enables robots to successfully follow
natural language route directions without any previous knowledge of the
environment. The algorithm utilizes spatial and semantic information that the
human conveys through the command to learn a distribution over the metric and
semantic properties of spatially extended environments. Our method uses this
distribution in place of the latent world model and interprets the natural
language instruction as a distribution over the intended behavior. A novel
belief space planner reasons directly over the map and behavior distributions
to solve for a policy using imitation learning. We evaluate our framework on a
voice-commandable wheelchair. The results demonstrate that by learning and
performing inference over a latent environment model, the algorithm is able to
successfully follow natural language route directions within novel, extended
environments.
Comment: ICRA 201
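The core idea of maintaining a distribution over environment models, updated by the semantic content of the command, can be illustrated with a toy belief update. The candidate maps, the mentioned room, and the likelihood value are all invented; the paper's actual model is far richer than this two-hypothesis sketch.

```python
# Toy sketch: a spoken direction mentioning a room updates a belief
# distribution over hypothesized semantic maps of an unknown environment.
# Maps, rooms, and the likelihood value are illustrative only.

maps = {
    "map_with_kitchen": {"kitchen", "hallway"},
    "map_without_kitchen": {"office", "hallway"},
}
belief = {name: 0.5 for name in maps}  # uniform prior over hypotheses

def update_belief(mentioned_room, likelihood=0.9):
    # Bayesian update: maps containing the mentioned room become likelier.
    for name, rooms in maps.items():
        belief[name] *= likelihood if mentioned_room in rooms else (1 - likelihood)
    total = sum(belief.values())
    for name in belief:
        belief[name] /= total  # renormalize to a probability distribution

update_belief("kitchen")
print(belief["map_with_kitchen"])  # ≈ 0.9
```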