Enhancing Usability and Explainability of Data Systems
The recent growth of data science has expanded its reach to an ever-growing user base of nonexperts, increasing the need for usability, understandability, and explainability in these systems. Enhancing usability makes data systems accessible to people with different skills and backgrounds alike, leading to the democratization of data systems. Furthermore, a proper understanding of data and data-driven systems is necessary for users to trust the function of systems that learn from data. Finally, data systems should be transparent: when a data system behaves unexpectedly or malfunctions, its users deserve a proper explanation of what caused the observed incident. Unfortunately, most existing data systems offer limited usability and support for explanations: these systems are usable only by experts with sound technical skills, and even expert users are hindered by the lack of transparency into the systems' inner workings and functions. The aim of my thesis is to bridge the usability gap between nonexpert users and complex data systems, to aid all sorts of users, including expert ones, in understanding data and systems, and to provide explanations that help reason about unexpected outcomes involving data systems. Specifically, my thesis has the following three goals: (1) enhancing the usability of data systems for nonexperts, (2) enabling data understanding that can assist users in a variety of tasks, such as achieving trust in data-driven machine learning and cleaning data, and (3) explaining the causes of unexpected outcomes involving data and data systems.
For enhancing usability, we focus on example-driven user intent discovery. We develop systems based on example-driven interactions in two different settings: querying relational databases and personalized document summarization. Towards data understanding, we develop a new data-profiling primitive that can characterize tuples for which a machine-learned model is likely to produce untrustworthy predictions. We also develop an explanation framework to explain causes of such untrustworthy predictions. Additionally, this new data-profiling primitive enables interactive data cleaning. Finally, we develop two explanation frameworks, tailored to provide explanations in debugging data system components, including the data itself. The explanation frameworks focus on explaining the root cause of a concurrent application's intermittent failure and exposing issues in the data that cause a data-driven system to malfunction.
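The abstract does not spell out the data-profiling primitive itself, but the underlying idea can be illustrated. The following Python sketch is a hypothetical, minimal version: it profiles the value ranges observed in training data and flags serving tuples that fall outside the profile as candidates for untrustworthy model predictions. The function names and data are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch: profile training data, then flag serving tuples
# on which a model trained on that data may be extrapolating.
import pandas as pd

def profile_ranges(train: pd.DataFrame) -> dict:
    """Record the observed [min, max] range of every numeric column."""
    return {col: (train[col].min(), train[col].max())
            for col in train.select_dtypes("number").columns}

def trust_check(profile: dict, tuples: pd.DataFrame) -> pd.Series:
    """Return True for tuples that conform to the profile, False for
    tuples where the model's prediction may be untrustworthy."""
    ok = pd.Series(True, index=tuples.index)
    for col, (lo, hi) in profile.items():
        ok &= tuples[col].between(lo, hi)
    return ok

train = pd.DataFrame({"age": [25, 40, 61], "income": [30e3, 55e3, 90e3]})
serving = pd.DataFrame({"age": [35, 95], "income": [50e3, 20e3]})
print(trust_check(profile_ranges(train), serving))  # second tuple is flagged
```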
Developing a distributed electronic health-record store for India
The DIGHT project addresses the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the more than one billion citizens of India.
NASA space station automation: AI-based technology review
Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for extravehicular activity (EVA), increase crew productivity through the reduction of routine operations, increase Space Station autonomy, and augment Space Station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.
Analysis and Development of Instrument Software Paradigms: Conception and Implementation of a New Instrument Control and Data Acquisition System, Proven by Material Scientific Applications
During the last 50 years, the quality of analysis methods in many scientific disciplines has been enhanced by electronic applications, automation, and data processing. While the features, performance, and usability of these processes have been continually enhanced, it is conspicuous that the majority of institutes operate their own proprietary software. This situation arises for both historical and financial reasons, as well as from a wish to retain autonomy, fuelled by the requirement for a system that remains compatible with both new and legacy hardware.
This thesis reviews commonly used scientific software systems and their stakeholders and tries to identify generic problems. The demands on instrument systems are summarized in a requirement specification. Based on these requirements, a basic concept is developed that reflects the current state of the art in software design and may provide a blueprint for instrument system architectures. The results are used to create a proof-of-concept implementation. Core to this approach is an application server that comes with a container, which uses the Inversion-of-Control pattern to loosely couple and execute components. These components do not need to implement fixed interfaces and are thus decoupled from a specific use case. Components can, for example, be proxies that control and acquire data from legacy hardware, perform calculations, provide a human-machine interface, or act as storage. They are dynamically wired into experiments using XML-based Assembly files. Both Assemblies and Components can be published using a central store on a collaboration platform and shared by the community. This increases reusability and allows existing Assemblies to be used with new hardware by simply replacing the hardware proxy modules.
Example components are provided for access to legacy and new instrument hardware, the storage of results in the NeXus format, data reduction, simulation with McStas, the execution of customizable scans, and the visualization of data.
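The Assembly mechanism described above can be illustrated with a deliberately small, hypothetical Python sketch (the thesis's own implementation is not shown in this abstract): an XML Assembly file names component classes, and a container instantiates and wires them at runtime, so the components themselves never decide what they are connected to (Inversion of Control). All class and tag names below are illustrative assumptions.

```python
# Hypothetical sketch of XML-driven component wiring (Inversion of Control).
import xml.etree.ElementTree as ET

class MotorProxy:                       # proxy controlling (legacy) hardware
    def move(self, position):
        print(f"motor -> {position}")

class NexusWriter:                      # storage component
    def save(self, data):
        print(f"writing {data} to a NeXus file")

REGISTRY = {"MotorProxy": MotorProxy, "NexusWriter": NexusWriter}

ASSEMBLY = """
<assembly>
  <component id="motor" class="MotorProxy"/>
  <component id="storage" class="NexusWriter"/>
</assembly>
"""

def load_assembly(xml_text):
    """The container, not the components, decides what gets
    instantiated and wired together for an experiment."""
    root = ET.fromstring(xml_text)
    return {c.get("id"): REGISTRY[c.get("class")]()
            for c in root.findall("component")}

experiment = load_assembly(ASSEMBLY)
experiment["motor"].move(42.0)
experiment["storage"].save({"position": 42.0})
```

Swapping the hardware proxy for a different class in the Assembly file, without touching the other components, mirrors how existing Assemblies can be reused with new hardware.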
Operating System Support for Redundant Multithreading
Failing hardware is a fact, and trends in microprocessor design indicate that the fraction of hardware suffering from permanent and transient faults will continue to increase in future chip generations. Researchers have proposed various solutions to this issue, each with different downsides: specialized hardware components make hardware more expensive to produce and consume additional energy at runtime; fault-tolerant algorithms and libraries enforce specific programming models on the developer; and compiler-based fault tolerance requires the source code of all applications to be available for recompilation. In this thesis I present ASTEROID, an operating system architecture that integrates applications with different reliability needs.
ASTEROID is built on top of the L4/Fiasco.OC microkernel and extends the system with Romain, an operating system service that transparently replicates user applications. Romain supports single- and multi-threaded applications without requiring access to the application's source code. Romain replicates applications and their resources completely and thereby does not rely on hardware extensions, such as ECC-protected memory. In my thesis I describe how to efficiently implement replication as a form of redundant multithreading in software. I develop mechanisms to manage replica resources and to make multi-threaded programs behave deterministically for replication.
I furthermore present an approach to handling applications that use shared-memory channels to communicate with other programs. My evaluation shows that Romain provides 100% error detection and more than 99.6% error correction for single-bit flips in memory and general-purpose registers. At the same time, Romain's execution-time overhead is below 14% for single-threaded applications running in triple-modular redundant mode. The last part of my thesis acknowledges that software-implemented fault-tolerance methods often rely on the correct functioning of a certain set of hardware and software components, the Reliable Computing Base (RCB).
I introduce the concept of the RCB and discuss what constitutes the RCB of the ASTEROID system and of other fault-tolerance mechanisms. Thereafter, I present three case studies that evaluate approaches to protecting RCB components, aiming at a software stack that is fully protected against hardware errors.
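As a rough illustration of the redundant-multithreading principle (not Romain's actual OS-level mechanism, which replicates whole processes and compares them at system-call boundaries), the following hypothetical Python sketch runs a computation in three replicas and majority-votes the result, so a single faulty replica is detected and outvoted.

```python
# Hypothetical sketch of software-implemented triple-modular redundancy.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def run_tmr(fn, *args):
    """Execute fn in three replicas and return the majority result.
    One diverging replica (e.g. due to a transient bit flip) is
    detected and outvoted; total disagreement raises an error."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(fn, *args) for _ in range(3)]
        results = [f.result() for f in futures]
    winner, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("replicas disagree: uncorrectable error")
    if votes == 2:
        print("transient fault detected and corrected by majority vote")
    return winner

print(run_tmr(lambda x: x * x, 7))  # 49, tolerating one faulty replica
```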
NBS monograph
From the Introduction: "This is the third in a planned series of reports involving selective literature reviews of research and development requirements and areas of continuing R&D concern in the computer and information sciences and technologies."
Security of Cyber-Physical Systems
Cyber-physical system (CPS) innovations, in conjunction with related computational and technological advancements, have positively impacted our society, establishing new horizons of service excellence in a variety of application domains. With the rapid increase in the use of CPSs in safety-critical infrastructures, their safety and security are top priorities for next-generation designs. The potential consequences of CPS insecurity are severe enough to make CPS security one of the core elements of the CPS research agenda. Faults, failures, and cyber-physical attacks lead to variations in the dynamics of CPSs and cause the instability and malfunction of normal operations. This reprint discusses existing vulnerabilities and focuses on detection, prevention, and compensation techniques to improve the security of safety-critical systems.
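The abstract names detection techniques without detailing them; one widely used approach in this area is residual-based detection, sketched below in hypothetical Python: each sensor reading is compared against a prediction from a model of the plant's dynamics, and readings whose residual exceeds a threshold are flagged as possible faults or attacks. The plant model, threshold, and data are illustrative assumptions.

```python
# Hypothetical sketch of a residual-based anomaly detector for a CPS.
def detect_anomalies(readings, predict, threshold):
    """Flag time steps where |measured - predicted| > threshold,
    i.e. where the observed dynamics deviate from the model."""
    alarms = []
    state = readings[0]
    for t, measured in enumerate(readings[1:], start=1):
        expected = predict(state)
        if abs(measured - expected) > threshold:
            alarms.append(t)            # dynamics deviated: fault or attack?
        state = measured
    return alarms

# Toy plant model: temperature decays 10% toward 0 each step.
predict = lambda x: 0.9 * x
readings = [100.0, 90.1, 81.0, 55.0, 49.5]   # step 3 looks tampered with
print(detect_anomalies(readings, predict, threshold=2.0))  # -> [3]
```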