ASSISTANT AUTOMATIC MULTIMODAL SUGGESTIONS

Abstract

A computing device (e.g., a smartphone, a laptop computer, a tablet computer, a smartwatch, etc.) may provide a user with multimodal suggestions in response to an ambiguous user request. The computing device may evaluate a user request to an assistant and determine that the request is ambiguous as to the user's intent. The computing device may determine, based on the evaluation, multimodal suggestions to prompt the user to clarify the intent of the user request. The computing device may output the suggestions to the user via various combinations of graphical user interface elements, text displayed via a user interface, and spoken indications. The user may provide an input (e.g., touch, stylus, mouse, keyboard, voice, etc.) to select one of the suggestions. Based on the selected suggestion, the computing device may perform the desired action and may train a user-specific machine learning model to perform the desired action when the user provides the same or similar ambiguous user request in the future.
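The flow described above — detect ambiguity, prompt with clarification suggestions, then remember the user's selection so the same request is resolved directly next time — can be sketched as follows. This is a minimal illustrative sketch, not the disclosed implementation: the `Assistant` class, the keyword-based candidate table, and the dictionary used in place of a trained user-specific model are all hypothetical simplifications.

```python
from dataclasses import dataclass, field


@dataclass
class Assistant:
    # Hypothetical stand-in for a user-specific machine learning model:
    # maps a previously clarified request to its resolved intent.
    learned_intents: dict = field(default_factory=dict)

    # Hypothetical candidate-intent table; a real assistant would use an
    # intent classifier rather than keyword lookup.
    CANDIDATES = {
        "play": ["play music", "play a video", "play a podcast"],
    }

    def handle(self, request: str):
        # If this user already clarified this request, act directly.
        if request in self.learned_intents:
            return ("action", self.learned_intents[request])
        # Otherwise look up candidate intents; more than one candidate
        # means the request is ambiguous, so return suggestions to be
        # rendered multimodally (GUI elements, text, and/or speech).
        for keyword, intents in self.CANDIDATES.items():
            if keyword in request:
                if len(intents) > 1:
                    return ("suggest", intents)
                return ("action", intents[0])
        return ("suggest", [])

    def select(self, request: str, chosen: str):
        # Record the user's selection so a future identical request
        # skips the clarification prompt entirely.
        self.learned_intents[request] = chosen
        return ("action", chosen)
```

In use, an ambiguous request like "play something" first yields a `("suggest", ...)` result; after the user picks a suggestion via any input modality, the same request resolves straight to an action.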
