Beware of the “Mr. Know-It-All” syndrome
One question about your AI will need to be answered: should you personify it?
For years now, assistants with human names and voices have been popping up: Alexa, Siri… Even ChatGPT is humanized simply because it responds in natural language, with well-constructed sentences and paragraphs, rather than with command-style language or a list of results, as Google Search does.
At first sight, wanting to humanize an artificial intelligence seems logical, even natural: the intelligence the program imitates is, after all, human. Bear in mind, though, that this choice affects the degree of authority users perceive. If you create a “new coworker” for them, it will assume a position within the hierarchy, and it is up to you to design your AI to take on the right role. Will it be an intern or junior coworker whose output needs to be checked and corrected, or a business expert to be trusted? By defining the AI’s register and tone, you can steer your audience’s perception toward the desired result.
Providing context: a critical point for personified AI
You also need to bear in mind that your AI does not understand, in the true sense of the word, what it is producing. It simply follows the rules it has been given, which, if well defined, produce a convincing result. Your AI therefore needs context and specific instructions to calibrate its output correctly. How will your users provide these? Part of your job as a UX designer is to come up with a method and help users push the AI toward relevant results.
A few days ago, two tech entrepreneurs shared a story with me about their use of ChatGPT. They use it to brainstorm concepts and write code, and they observed that the AI has an answer for everything, but of varying relevance. Some proposals simply don’t work or are far-fetched. One of them then had the idea of stating the rules of the game to ChatGPT: “You are allowed to tell me if you don’t know.” From that point on, the AI applied the rule and stopped answering at all costs.
This example is interesting because setting rules for the conversation immediately made the tool more effective. It also shows that ChatGPT’s “human” image leads people to assume the program already knows such rules; without that image, the need to spell out explicit rules before each interaction would be far more obvious.
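In the chat interface, that rule has to be retyped at the start of every conversation. In a product built on top of a model API, the same idea can be baked in once as a system message, so users never have to state it themselves. Here is a minimal sketch, assuming the OpenAI Node SDK; the model name and the exact “I don’t know” sentinel string are illustrative choices, not prescriptions:

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function ask(question: string): Promise<string | null> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o", // illustrative model name
    messages: [
      // The "rule of the game" is set once, up front, instead of
      // relying on each user to restate it in every conversation.
      {
        role: "system",
        content:
          "You are allowed to say you don't know. If you are not confident " +
          "in an answer, reply exactly with: I don't know.",
      },
      { role: "user", content: question },
    ],
  });

  const answer = completion.choices[0].message.content;
  // Surface "no answer" as null so the UI can treat it explicitly,
  // rather than presenting a forced answer.
  return answer === "I don't know." ? null : answer;
}
```

The point of the sketch is the division of labor: the rule lives in the product, and the interface gets an explicit “no answer” state to design around.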
Design has a role to play in counteracting this “human” bias. For example, move away from the instant-messaging conversation model, which echoes the spontaneity of human interactions. Depending on your AI’s area of application, could setting the tool’s context be easier and faster with selectors and buttons? Consider reusing the error or “no results found” messages common in traditional web tools when reliability scores are low, to clearly mark the limits of the AI. Limiting AI to the status of a non-humanized tool is, I feel, an area worth exploring.
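To make that last pattern concrete: if the system exposes some reliability score alongside each answer, the interface can fall back to a classic “no results found” message below a threshold instead of presenting a shaky answer with full confidence. A minimal sketch; the ModelResult shape, the threshold value, and renderAnswer are all hypothetical:

```ts
// Hypothetical shape of a model response that carries a reliability score.
interface ModelResult {
  text: string;
  confidence: number; // assumed to lie in [0, 1]
}

const CONFIDENCE_THRESHOLD = 0.7; // illustrative cut-off, to be tuned per product

// Returns the message to display: the answer itself, or the same kind of
// fallback a traditional web tool would show, making the AI's limits explicit.
function renderAnswer(result: ModelResult): string {
  if (result.confidence < CONFIDENCE_THRESHOLD) {
    return "No reliable result found. Try rephrasing or narrowing your request.";
  }
  return result.text;
}
```

The threshold then becomes a deliberate product decision about when the tool should admit its limits, rather than something left to the model’s conversational habits.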
Some companies, such as RTE with its project ‘Origami’, have decided to do just that, developing platforms or plugins that present AI as a way to augment or accelerate knowledge. Some have even gone so far as to generate not a final result but a pool of heterogeneous knowledge from clearly identified sources, leaving room for human expertise.