AI in corporate environments: a major challenge for UX Designers to deliver effective, transparent, and ethical Service Design

On 30 November 2022, artificial intelligence (AI) arrived in our everyday lives with a bang, in the form of ChatGPT. Users felt a mixture of reluctance, excitement and hope. And professionals from all sectors started to imagine a goldmine of opportunities for their business due to this meteoric rise of generative AI.

If they have not already done so, almost all major companies* are preparing their R&D departments for the race to create new tools integrating AI, through recruitment drives for data analysts, data scientists and developers, or by turning to specialist service providers.

User experience is the foundation of any new tool: it determines whether the tool is adopted and used effectively. It is therefore crucial to think about the UX of these intelligent business solutions. What are the specific UX challenges at the centre of this technology boom, which remains obscure to many?

UX Designer: your checklist to begin designing an AI service


Right from the outset, identify the technological capacity that the company has for the solution

If the initial brief for your project is to use AI to enhance a service or product, and the launch meeting is attended mainly by data and developer profiles, then it is highly likely that the desire to provide a solution integrating generative AI is driven more by R&D and business teams than by users in the field!

While the golden rule in user-centred design is to begin with needs rather than solutions, AI projects are often approached as a technological innovation challenge: play the game!

It may be up to you to identify use cases, or at least to refine them. Ask the teams to explain what they have already explored in terms of technical PoCs, what they believe they can achieve and from which data sources and, above all, encourage them to explain it all in layman’s terms! This matters because you will then have to explain the models, one way or another, in your interfaces.


Focus attention on ethics in this specific AI project

This topic should be nothing new to R&D team members and AI experts, but that does not mean they have fully integrated ethics into their own project. Aspects related to data quality and bias management have likely been considered, but have they covered:

  • The risk of delegating user responsibility to AI?
  • The risk of losing end user skills in the long term?

These questions do not only apply to self-driving vehicles! Consider these aspects early on, to position AI in a suitable role: is it deployed in the user’s core function or for auxiliary tasks? Will the interface encourage users to reread and double-check the results against their own knowledge or other sources? Will results be presented as facts or as suggestions? Will reliability or sensitivity scores be displayed? And so on.

Far from being incidental, these questions enhance the project’s added value and, in any event, help prepare for the arrival of the EU’s AI Act, currently under discussion. The act aims to define what kind of AI is acceptable within the European sphere: AI that is fair, explainable, respectful of privacy, robust and transparent. With this in mind, ethics, responsibility and strong governance should be integrated into the solution’s design, to uphold the fundamental principles our continent would like to see emerge.


If necessary, push to make sure that design work is not limited to results interfaces

Generative AI needs large amounts of data to learn and develop: it is a bit like a child growing up, with or without your help! Today, much software is developed within a set project timeframe, then put into production and left untouched for years. AI models, by contrast, are designed to evolve with the data they absorb, at the risk of going completely off the rails in the absence of regular monitoring and retraining on high-quality data.

To achieve a comprehensive service design, as a UX designer, examine all the tools needed to deploy the AI and develop it over time, including the back-office to work on the algorithm or, as a minimum, functionalities that are specific to retraining by users in the front office.

These tools may be new to many people, even those familiar with the traditional web, and even to technical experts. You will probably need to work hard at simplifying the design, both to demystify complex interfaces and so that, once in production, all or part of the model’s training can be delegated to business experts with little or no technical knowledge. Bear in mind the positive side effects:

  • Creating user trust in the AI: it is demystified because I know how it is trained and by whom (me!)
  • Taking the project out of a purely technical environment frees up the IT team to focus on new projects, rather than maintenance
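As an illustration only, the front-office retraining functionality mentioned above could be as simple as letting business experts validate or correct each result, then queuing the validated pairs for the next retraining run. A minimal sketch, in which all names and structures are hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class FeedbackRecord:
    """One end-user judgement on an AI result, stored for later retraining."""
    input_text: str
    model_output: str
    corrected_output: str  # what the business expert says it should have been
    approved: bool         # did the expert validate the corrected output?


@dataclass
class RetrainingQueue:
    """Collects front-office corrections until a retraining batch is ready."""
    records: list[FeedbackRecord] = field(default_factory=list)

    def submit(self, record: FeedbackRecord) -> None:
        self.records.append(record)

    def training_batch(self) -> list[tuple[str, str]]:
        # Only expert-validated pairs become new training examples.
        return [(r.input_text, r.corrected_output)
                for r in self.records if r.approved]
```

The point is less the code than the design decision it encodes: the expert in the front office, not the data scientist, decides which examples are good enough to feed back into the model.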

For the rest, an AI project is similar to other digital projects: you will need use cases broken down into tasks and goals, user research to formulate or confirm assumptions, and user tests to assess the proposed solutions.

Beware of the “Mr. Know-It-All” syndrome

One question about your AI will need to be answered: should we personify it?

For years now, assistants with human names and voices have been popping up: Alexa, Siri… Even ChatGPT is humanized simply by the fact that it responds in natural language, with well-constructed sentences and paragraphs, rather than with operative language or a list of results, like Google Search.

At first sight, wanting to humanize an artificial intelligence seems logical and even natural, because the intelligence that the program imitates is ultimately human. You should nevertheless bear in mind that this choice will affect the degree of authority perceived by users: if you create a “new coworker” for them, it will assume a position within the hierarchy. It is up to you to design your AI to take on the right role. Will it be an intern or junior coworker whose output needs to be checked and corrected, or a business expert to be trusted? By defining the AI’s level of expression and tone, you can influence its audience’s perception to achieve the desired result.

Providing context: a critical point for personified AI

You also need to bear in mind that your AI does not understand, in the true sense of the word, what it is producing. It simply follows set rules which, if well defined, produce a convincing result. Your AI will need context and specific instructions to calibrate itself correctly. How will your users provide these? Part of your job as a UX designer is to come up with a method and help users push the AI to produce relevant results.

A few days ago, two tech entrepreneurs shared a story with me about their use of ChatGPT. Using it to brainstorm concepts and write code, they observed that the AI has an answer for everything, but that the relevance varies: some proposals simply don’t work or are far-fetched. One of them then had the idea of spelling out the rules of the game to ChatGPT: “You are allowed to tell me if you don’t know.” From that point on, the AI applied the rule and stopped giving an answer at all costs.

This example is interesting because setting rules for the conversation immediately enabled more effective use of the tool. It also demonstrates that if ChatGPT did not have such a “human” image, which leads people to assume that the program knows these rules, it would be more obvious that explicit rules need to be specified before each interaction.
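Baking such a ground rule into the service itself, rather than expecting each user to type it before every interaction, is straightforward: most chat-style interfaces send a system message ahead of the user’s input. A minimal sketch, assuming a generic chat-message format (the rule wording is illustrative):

```python
def build_messages(user_question: str) -> list[dict]:
    """Prepend the ground rules so the model may admit uncertainty."""
    system_rule = (
        "You are allowed to say 'I don't know' when you are not sure, "
        "instead of giving an answer at all costs."
    )
    return [
        {"role": "system", "content": system_rule},
        {"role": "user", "content": user_question},
    ]
```

With the rule injected by the interface, users benefit from it without ever having to know it exists.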

Design has a role to play in avoiding this “human” bias. For example, move away from the instant-messaging conversation model, which echoes the spontaneity of human interactions. Depending on your AI’s area of application, perhaps setting a context for the tool would be easier and faster with selectors and buttons? Consider reusing the error or “no results found” messages common in traditional web tools when reliability scores are low, to clearly signal the limits of the AI. Limiting AI to the status of a non-humanized tool is, I feel, an area worth exploring.
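The “no results found” idea can be sketched as a simple confidence gate; the 0.6 threshold, the wording and the function name are all illustrative assumptions:

```python
def render_result(answer: str, confidence: float, threshold: float = 0.6) -> str:
    """Fall back to a familiar 'no results' message when confidence is low."""
    if confidence < threshold:
        return "No reliable result found. Try rephrasing or narrowing your request."
    # Phrase the output as a suggestion, not a fact, and expose the score.
    return f"Suggested answer (confidence {confidence:.0%}): {answer}"
```

Notice that even above the threshold, the result is framed as a suggestion with its score visible, echoing the earlier questions about facts versus suggestions.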

Some companies, such as RTE with its project ‘Origami’, have decided to do just that, developing platforms or plugins that present AI as a way to augment or accelerate knowledge. Some have even chosen not to generate a final result, but rather a pool of heterogeneous knowledge from clearly identified sources, leaving room for human expertise.

To conclude

The arrival of these AIs is a big leap forward, and it has significantly changed the way people work in all sectors. Designers will experience something similar to the 2000s and 2010s wave of companies that wanted to “get on the web” before truly knowing what they wanted to do there. Starting from a technology and looking for relevant applications, rather than starting from needs and finding a solution without limiting yourself to any one technology, is a real challenge.

For the optimists, let’s see this as a great way to highlight the “impactful creative” dimension of our jobs as UX designers. It looks like there will be a long adaptation period for everybody. It is up to us to remain committed champions of users’ interests, so that emerging projects make sense, respect ethical dimensions (social, environmental and systemic), and make the black box that is today’s AI something comprehensible. This will require us, more than anyone, to avoid being afraid or in awe of AI, and to treat it in a suitable and relevant way.


* According to the 2023 edition of the OECD Employment Outlook, section 2.2.2, page 36, 82% of big companies in OECD countries initiated AI projects in 2023, compared with 75% in 2022. The most advanced sectors in terms of AI adoption are finance, health and manufacturing.
