Should Conversational User Interfaces Make Human 'Errors'?
In case you missed this session on "Should Conversational User Interfaces Make Human 'Errors'?", you can now explore our walkthrough, where we summarize the main takeaways for you.
In this webinar we explored the intricate relationship between conversational user interfaces (CUIs) and human errors. The discussion, led by Elizabeth Stokoe, Saul Albert, and Cathy Pearl, aimed to contribute valuable insights to their respective fields, with an emphasis on practical applications and clarity.
The conversation dives into the challenges of designing and producing turns in conversational user interfaces, particularly how turns respond to one another. Traditional error handling in conversation design is contrasted with the distinct challenges posed by large language models (LLMs), which can generate plausible but incorrect responses.
Stokoe discusses the conversation analysis (CA) perspective on these challenges, shedding light on how they shape user expectations. The concept of repair in conversation analysis is introduced as a self-righting mechanism for addressing troubles in speaking, hearing, and understanding. Disfluencies in conversation are highlighted as valuable resources rather than errors to be eliminated: Google's Duplex project, for instance, strategically incorporates 'um's and 'uh's to enhance the naturalness of its calls. The importance of disfluencies is demonstrated in a conversation where repair is initiated after a misunderstanding.
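To make the idea concrete, here is a minimal sketch of how a designer might inject such fillers into a response before it reaches text-to-speech. The filler list, insertion points, and probability are our own illustrative assumptions, not details taken from Duplex or the webinar.

```python
import random

# A minimal sketch, assuming a pipeline where response text is post-processed
# before text-to-speech. The fillers, insertion points, and probability are
# illustrative choices, not anything prescribed in the webinar.

FILLERS = ["um,", "uh,"]

def add_disfluencies(text: str, probability: float = 0.2) -> str:
    """Randomly prefix some comma-separated clauses with a filler."""
    clauses = text.split(", ")
    decorated = [
        f"{random.choice(FILLERS)} {clause}"
        if random.random() < probability else clause
        for clause in clauses
    ]
    return ", ".join(decorated)

print(add_disfluencies("Your table is booked for seven, under the name Alex."))
```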
Further, the intricacies of speech repair in human interaction are explored, with examples of individuals initiating self-repair by repeating prior terms or identifying troubles through categorical descriptions. Challenges arise, however, when speech recognition systems fail to keep pace with such repairs, leading to interruptions in conversations. Examining language imperfections in customer service calls, the discussion also reveals differences in call structure and intent between mystery shoppers and real callers.
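As a toy illustration of the kind of repair a speech pipeline has to cope with, the sketch below resolves a user's explicit self-repair before intent parsing. The marker list and the keep-what-follows-the-repair heuristic are hypothetical simplifications; real repairs are far more varied than this.

```python
import re

# A rough sketch of resolving a user's self-repair before intent parsing.
# The marker list and the heuristic of keeping only what follows the last
# repair marker are assumptions made for illustration.

REPAIR_MARKERS = re.compile(
    r"\b(?:no wait|I mean|actually|sorry)\b[,\s]*", re.IGNORECASE
)

def resolve_self_repair(utterance: str) -> str:
    """Keep only the segment after the last repair marker, if one is present."""
    segments = REPAIR_MARKERS.split(utterance)
    return segments[-1].strip()

print(resolve_self_repair("Book it for Tuesday, no wait, Wednesday at noon"))
# -> "Wednesday at noon"
```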
The application of conversation analysis and repair to AI language models is scrutinized, with an emphasis on balancing error correction against task completion. Various types of repair, including embedded and otherwise subtle corrections, are discussed. The potential psychological effect of deliberate speech errors, as observed in public figures like Boris Johnson, on how confident a CUI appears when presenting information is explored. Moreover, the importance of explainability in AI is emphasized, with suggestions to design systems that are intuitively comprehensible to humans, and ethical concerns are raised about intentionally introducing errors to make AI systems appear more human-like.
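Returning to embedded corrections, here is one way such a move might look in code: the system simply adopts the corrected form in its next turn rather than making the error the topic of the conversation. The city-name table is an invented example, not something from the webinar.

```python
# An illustrative sketch of an "embedded correction": the system uses the
# corrected form in its confirmation instead of flagging the user's error.
# The canonical-name table below is an invented example.

CANONICAL = {
    "frisco": "San Francisco",
    "nyc": "New York",
}

def confirm_destination(city: str) -> str:
    """Confirm the booking, quietly substituting the canonical city name."""
    name = CANONICAL.get(city.strip().lower(), city)
    return f"Okay, one ticket to {name}."

print(confirm_destination("Frisco"))  # -> "Okay, one ticket to San Francisco."
```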
The discussion then touches on the use of text-to-speech (TTS) to read conversation-analytic transcripts aloud, exploring how pitch and breathiness could convey uncertainty. The limitations of large language models in understanding the content they generate are also acknowledged.
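For readers curious what such prosodic signaling could look like in practice, here is a hedged sketch using the standard SSML `<prosody>` element to lower pitch and slow the rate. Breathiness controls are vendor-specific extensions, so they are deliberately left out; the specific attribute values are our own assumptions.

```python
# A hedged sketch of signaling uncertainty through prosody in TTS output.
# <prosody> with relative pitch and rate values is standard SSML;
# breathiness controls vary by vendor and are omitted here.

def uncertain_ssml(text: str) -> str:
    """Wrap text in SSML that lowers pitch and slows the rate slightly."""
    return (
        "<speak>"
        '<prosody pitch="-10%" rate="90%">'
        f"um, {text}"
        "</prosody>"
        "</speak>"
    )

print(uncertain_ssml("I think that appointment is at three."))
```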
In the final segment, the panelists return to these limitations, suggest tools for accessing the relevant literature, and discuss efforts to train LLMs on spoken language, acknowledging the challenges posed by the scarcity of data for this task.
Curious? Watch the full webinar below.
Want to know how CDI can help you optimize your virtual assistant? Reach out to our team now and schedule your free expert review for a limited time only.