Meeting Analytics: Creative Activity Support Based on Knowledge Discovery from Discussions
We are researching a mechanism to promote innovation by supporting discussions, based on the premise that innovation results from discussion. Ideas are created and developed mainly through conversation in creative meetings such as brainstorming sessions, and are refined through repeated discussion. Our previous research, called discussion mining, collected various data on meetings: statements and their relationships, presentation materials such as slides, audio and video, and participants' evaluations of statements. Using the collected data, we extracted important statements that warranted particular attention after the meetings, along with the actions undertaken in relation to them, such as investigations and implementations. Here, we present high-probability statements that should lead to innovations during meetings and facilitate creative discussions. We also propose a creative activity support system that helps users discover and execute essential tasks.
Improving fairness in machine learning systems: What do industry practitioners need?
The potential for machine learning (ML) systems to amplify social inequities
and unfairness is receiving increasing popular and academic attention. A surge
of recent work has focused on the development of algorithmic tools to assess
and mitigate such unfairness. If these tools are to have a positive impact on
industry practice, however, it is crucial that their design be informed by an
understanding of real-world needs. Through 35 semi-structured interviews and an
anonymous survey of 267 ML practitioners, we conduct the first systematic
investigation of commercial product teams' challenges and needs for support in
developing fairer ML systems. We identify areas of alignment and disconnect
between the challenges faced by industry practitioners and solutions proposed
in the fair ML research literature. Based on these findings, we highlight
directions for future ML and HCI research that will better address industry
practitioners' needs.
Comment: To appear in the 2019 ACM CHI Conference on Human Factors in Computing Systems (CHI 2019).
Robust Regression for Safe Exploration in Control
We study the problem of safe learning and exploration in sequential control problems. The goal is to safely collect data samples from an operating environment to learn an optimal controller. A central challenge in this setting is how to quantify uncertainty in order to choose provably-safe actions that allow us to collect useful data and reduce uncertainty, thereby achieving both improved safety and optimality. To address this challenge, we present a deep robust regression model that is trained to directly predict the uncertainty bounds for safe exploration. We then show how to integrate our robust regression approach with model-based control methods by learning a dynamic model with robustness bounds. We derive generalization bounds under domain shifts for learning and connect them with safety and stability bounds in control. We demonstrate empirically that our robust regression approach can outperform conventional Gaussian process (GP) based safe exploration in settings where it is difficult to specify a good GP prior.
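A minimal sketch of the safe-action-selection idea the abstract describes, assuming a model that outputs a predicted cost and an uncertainty bound per candidate action (the paper trains a deep robust regression model to produce such bounds directly; the stand-in values and function names below are illustrative, not the paper's implementation):

```python
# Sketch: an action is treated as provably safe only if its
# worst-case predicted cost (prediction + uncertainty bound)
# stays below a safety threshold.

def safe_actions(pred_cost, bound, threshold):
    """Indices of actions whose worst-case cost is within the threshold."""
    return [i for i, (c, b) in enumerate(zip(pred_cost, bound))
            if c + b <= threshold]

def pick_exploratory_action(pred_cost, bound, threshold):
    """Among safe actions, prefer the most uncertain one, so the
    collected sample reduces uncertainty the most."""
    safe = safe_actions(pred_cost, bound, threshold)
    if not safe:
        return None  # fall back to a known-safe default controller
    return max(safe, key=lambda i: bound[i])

# Hypothetical predictions for 4 candidate actions.
pred_cost = [0.2, 0.5, 0.9, 0.4]
bound = [0.3, 0.2, 0.4, 0.5]
print(pick_exploratory_action(pred_cost, bound, threshold=1.0))
```

Action 2 is excluded because its worst-case cost (0.9 + 0.4) exceeds the threshold; among the remaining safe actions, action 3 carries the largest uncertainty and is queried.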
Fairness for Robust Log Loss Classification
Developing classification methods with high accuracy that also avoid unfair
treatment of different groups has become increasingly important for data-driven
decision making in social applications. Following the first principles of
distributional robustness, we derive a new classifier that incorporates
fairness criteria into its worst-case logarithmic loss minimization. This
construction takes the form of a minimax game and produces a parametric
exponential family conditional distribution that resembles truncated logistic
regression. We demonstrate the advantages of our approach on three benchmark
fairness datasets.
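As an illustrative sketch of the worst-case flavor of this objective (an assumed simplification, not the paper's minimax construction or its truncated-logistic-regression form), one can evaluate the log loss of the worst-off group, which a fairness-aware classifier would minimize instead of the average loss:

```python
import math

def log_loss(y_true, p_pred):
    """Average binary log loss for true labels and predicted probabilities."""
    eps = 1e-12
    return -sum(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
                for y, p in zip(y_true, p_pred)) / len(y_true)

def worst_group_log_loss(y_true, p_pred, groups):
    """Log loss of the worst-off group: a worst-case fairness objective
    minimizes this quantity rather than the population average."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        per_group[g] = log_loss([y_true[i] for i in idx],
                                [p_pred[i] for i in idx])
    return max(per_group.values())
```

For example, with two groups where the classifier is better calibrated on group "a" than on group "b", the objective reports group "b"'s loss, exposing the disparity the average loss would hide.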
Shift-Pessimistic Active Learning Using Robust Bias-Aware Prediction
Existing approaches to active learning are generally optimistic about their certainty with respect to data shift between labeled and unlabeled data. They assume that unknown datapoint labels follow the inductive biases of the active learner. As a result, the most useful datapoint labels, those that refute current inductive biases, are rarely solicited. We propose a shift-pessimistic approach to active learning that assumes the worst case about the unknown conditional label distribution. This closely aligns model uncertainty with generalization error, enabling more useful label solicitation. We investigate the theoretical benefits of this approach and demonstrate its empirical advantages on probabilistic binary classification tasks.
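A toy sketch of the pessimistic stance described above (an assumed query rule for illustration, not the paper's estimator): instead of trusting the model's predicted label, score each unlabeled point by its worst-case log loss over the two possible binary labels, and query the point with the largest score.

```python
import math

def worst_case_loss(p):
    """Worst-case binary log loss over both possible labels, for a
    predicted positive-class probability p. Confident predictions
    (p near 0 or 1) incur the largest loss if the true label refutes them."""
    eps = 1e-12
    return max(-math.log(max(p, eps)), -math.log(max(1 - p, eps)))

def pessimistic_query(probs):
    """Index of the unlabeled point whose worst-case loss is largest,
    i.e. the label solicitation that most stresses the model's
    current inductive biases."""
    return max(range(len(probs)), key=lambda i: worst_case_loss(probs[i]))

# Hypothetical predicted probabilities for 3 unlabeled points.
print(pessimistic_query([0.5, 0.9, 0.7]))
```

Note the contrast with optimistic uncertainty sampling, which would query the p = 0.5 point; under the worst-case view, the confident p = 0.9 prediction is the riskiest if its label turns out to refute the model.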