Conversational Design — UXTALK #25
What is meant by Conversational Design? What are the boundaries of this “new” discipline? How does the Conversational Design of an object change depending on its scope?
We addressed the professional implications of this discipline and its impact on professionals at both the organisational and the design level, sharing real cases.
The event was opened by Ilaria Vitali, PhD in Smart Products & Conversational Design @Politecnico di Milano and UX researcher and designer @Habits, who investigates Conversational Design and its appropriate use cases.
Following her talk, Paolo Boni, UX/UI Designer @Roboze, considered how UX practices should be involved in the design process of any object meant to facilitate human life.
Watch the full stream of the event here (in Italian)
Conversational User Interfaces (CUI) and how to use them — Dr Ilaria Vitali
By definition, a conversational interface uses conversation as a form of interaction, but not all interfaces that use language can be defined as “conversational”.
For example, it is sometimes inappropriate to label a single user request to Alexa as a conversation. This human-computer interaction consists of a simple question followed by a single answer from Alexa. Such an exchange is closer to a one-off command given to an AI system, which effectively makes Alexa a Voice User Interface for controlling devices around the house. This is why the definition of “conversational” is much debated: it is often impossible to go beyond these limited back-and-forth vocal exchanges with artificial systems.
Indeed, in such cases it is more appropriate to talk about a “Voice User Interface” (VUI), a sub-category of the wider area of “Natural User Interfaces” (NUI): UIs that make use of human language.
VUIs use language as an interface, but users do not automatically perceive them as “conversational”. As a case in point, consider, from the world of IoT devices, a bin that opens when the user says “open”: it cannot be regarded as a truly conversational object, since “open” is plausibly the only word it understands. In this case we are simply looking at a Voice User Interface.
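The distinction above can be sketched in code. This is a minimal, purely illustrative example (all names and behaviours here are assumptions, not any real product's API): a single-keyword VUI reacts to one recognised word, while a conversational agent keeps state across turns so each exchange can build on the last.

```python
def keyword_vui(utterance: str) -> str:
    """A single-keyword VUI, like the bin: 'open' is the only word it understands."""
    return "opening lid" if utterance.strip().lower() == "open" else "no response"


class ConversationalAgent:
    """A toy conversational agent: it keeps the dialogue history as context."""

    def __init__(self) -> None:
        self.history: list[str] = []

    def reply(self, utterance: str) -> str:
        # A real agent would apply natural-language understanding here;
        # this sketch only shows that context accumulates across turns.
        self.history.append(utterance)
        return f"(turn {len(self.history)}) you said: {utterance}"
```

The VUI maps one fixed input to one fixed reaction; the agent's replies depend on everything said so far, which is what makes an exchange start to feel conversational.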
So, when can AI systems be considered “Conversational Objects”?
“Conversational Objects” were born as Chatbots
Chatbots are computer programs that recognise a textual input produced by users and then generate a contextualised textual output. They were first created around the 1970s, following the birth of the AI discipline.
Currently, service providers use chatbots to guarantee 24/7 online assistance. Chatbots can replace human intervention in certain operations, especially repetitive tasks such as information retrieval, which significantly benefits enterprise customer care. Thanks to their customer-engagement skills, bots also become elements that characterise a company’s brand identity: chatbots allow companies to create their own textual experiences with a dedicated avatar.
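A minimal sketch of the pattern described above: a rule-based customer-care bot that matches keywords in the textual input and returns a contextualised textual output, falling back to a human agent otherwise. The rules and replies here are invented for illustration only.

```python
# Hypothetical keyword rules for a customer-care chatbot (illustrative only).
RULES = {
    "opening hours": "We are open online 24/7.",
    "order status": "Please provide your order number.",
    "refund": "Refunds are processed within 5 working days.",
}


def chatbot_reply(user_input: str) -> str:
    """Recognise a textual input and generate a contextualised textual output."""
    text = user_input.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # Repetitive tasks are automated; everything else is escalated.
    return "Sorry, let me connect you to a human agent."
```

For example, `chatbot_reply("What are your opening hours?")` returns the canned answer, while an unrecognised request is escalated, which is exactly the division of labour between bot and human described above.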
Voice User Interfaces & Personal Assistants
Virtual assistants are conversational agents like chatbots but much more powerful. They are cloud-based to be accessed by multiple devices and to be compatible with other connected objects. Consequently, the exceptional opportunity we have with these assistants is that they work as bridges to let the user interact by voice with other devices interconnected through the personal assistant’s cloud.
Embodied Conversational Agents (ECA)
ECAs are animated characters on screens. They can use voice or text, but most importantly, they can use non-verbal language, expressions through gesture, gaze, intonation, and body posture. This line of study was born in the 2000s.
Conversational smart products (ConvSP)
ConvSPs are smart products that embed or embody conversational interfaces. Their smartness derives from the use of ICT technologies. Their form and behaviour display or suggest that they are conversationally enabled. — Dr Ilaria Vitali in ‘A Conversation on Conversational Smart Products’
ConvSPs are devices that embed a conversational agent, but it is fundamentally important to understand when a CUI is appropriate: depending on the use case and task, a graphical user interface can be added alongside the conversational user interface to enhance usability.
How is the UX of a CUI different from a Graphical Interface?
- GUIs are spatial while CUIs are temporal and sequential: talking takes time, and speech is produced one word at a time in a specific order. This is why conversation may not always be the quickest way to reach a certain goal.
- CUIs/VUIs are dynamic and transient: Speech sequences are always changing, and they may leave no permanent recording of what was said. Navigation may be an issue as it relies on user memory.
What are the different ways of embedding the agent? What types of products should embody or embed a Voice Personal Assistant (VPA)? What is its impact on the physical product? How should the agent’s presence be communicated?
Watch the full keynote (in Italian)
Studying different materials through 3D Printing to improve the UX of running for marathoners — Paolo Boni
Despite being far from what we usually call UX, this project is a splendid example of how the incorporation of User Experience practice into the design process of any object can improve its usability, and most importantly, how this changes the mechanisms of its realisation.
Paolo Boni worked on the design of a pair of shoes for Puma, analysing the role of the changing state of material depending on the physical needs of the marathoner. The materials and shapes have been prototyped with a 3D printer.
The phenomenon from which this user need arises is that of “hitting the wall”, a physical and psychological state encountered by athletes around the 30th km of running.
After intensive user research carried out through interviews, observations, and quantitative data on users’ foot pressure, the problem area was identified in a series of moments related to the dynamics of force, psychology, and the extreme effort required of the marathoner. The phenomenon of “hitting the wall” was therefore placed at the core of the design: the technology of the shoe plays an enabling role, helping the runner face the challenge.
Finally, project solutions are conceived by investigating the behaviour, qualities and shape of the material in its different states.
Puma’s aim is to change the characteristics of the shoe’s material while the marathoner is running, at around the 30th km, so that it becomes softer and more usable during the complex phenomenon of “hitting the wall”. So, the research question is: can a semi-stable structure be programmed to exhibit both soft and stiff properties? The answer is yes!
Comfort42 is a vivid example of the pervasive impact of UX methodologies on the development of heterogeneous products. It helps us understand how, by focusing on the right user need and experimenting with the technologies that best fit the goal, great innovation can be generated.
Watch the full keynote (in Italian)
These lectures were held during the 25th UX Talk organised for the students of the Master in User Experience Psychology by Politecnico di Milano & Università Cattolica del Sacro Cuore and of the Higher Education Course in User Experience Design by POLI.design. Follow us on LinkedIn and Facebook to be updated about the upcoming UX Talks, always open to the public.
Curated by Alice Paracolli