
    Did You Mean...? Confidence-based Trade-offs in Semantic Parsing

    We illustrate how a calibrated model can help balance common trade-offs in task-oriented parsing. In a simulated annotator-in-the-loop experiment, we show that well-calibrated confidence scores allow us to balance cost with annotator load, improving accuracy with a small number of interactions. We then examine how confidence scores can help optimize the trade-off between usability and safety. We show that confidence-based thresholding can substantially reduce the number of incorrect low-confidence programs executed; however, this comes at a cost to usability. We propose the DidYouMean system, which better balances usability and safety.

    Comment: 9 pages. arXiv admin note: substantial text overlap with arXiv:2211.0744
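
    As a rough illustration of the thresholding idea the abstract describes, here is a minimal Python sketch. It assumes a parser that returns a (program, calibrated confidence) pair; the names parse_with_confidence and guard_execution and the 0.8 cutoff are hypothetical choices for illustration, not the paper's implementation.

        from typing import Callable, Optional, Tuple

        # Hypothetical cutoff; raising it blocks more incorrect programs
        # (safety) but defers more requests (usability).
        THRESHOLD = 0.8

        def guard_execution(
            parse_with_confidence: Callable[[str], Tuple[str, float]],
            utterance: str,
        ) -> Optional[str]:
            # Parse the utterance and read off the model's calibrated confidence.
            program, confidence = parse_with_confidence(utterance)
            if confidence >= THRESHOLD:
                return program  # high confidence: allow the program to execute
            # Low confidence: withhold execution and defer, e.g. to a
            # "Did you mean...?" clarification prompt or a human annotator.
            return None

        # Usage with a stub parser that always returns low confidence:
        if __name__ == "__main__":
            stub = lambda u: ("(call reminder ...)", 0.35)
            print(guard_execution(stub, "remind me tmrw"))  # -> None (deferred)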