Ongoing

Research

Gen AI

Evaluating User Trust in AI Systems Through Visual Cues During Task Processing

Designing AI experiences isn’t only about functionality—it’s about fostering trust. In this study, I explored how "thinking" UI elements influence user trust across different AI systems, using Google’s Gemini, OpenAI’s ChatGPT 4.0, and Anthropic’s Claude.

My role

I supported the lead designer by conducting benchmarking analyses and building interactive prototypes to enhance the onboarding experience for first-time users at Sleeper.

Timeline

6 Months

(Including Design, Benchmarking and Prototyping)

RESEARCH OVERVIEW

Background

In an era where AI plays an increasingly pivotal role in decision-making processes, fostering user trust is paramount. To explore how UI elements that appear during AI task processing influence user trust, we conducted a controlled study involving three advanced conversational AI models: ChatGPT 4.0, Gemini Live, and Claude. The primary goal of this research was to understand the impact of real-time feedback (e.g., loading animations, progress indicators) on users’ perception of AI accuracy, reliability, and transparency.

UI ELEMENTS

Conveying "Thinking" in AI Systems

When an AI system is processing user input, it often employs specific UI elements to visually indicate that it's "thinking." These elements serve as feedback mechanisms, reassuring users that the system is working on their request. Different AI models use varying approaches, each designed to manage user expectations and reduce anxiety while waiting.

Google's Approach

Ghost Wireframes:
Google sometimes uses ghost wireframes during the AI processing phase. These faint, temporary outlines of the eventual response or interface serve as placeholders, letting the user know what will soon fill that space. This element reduces uncertainty by previewing where the information will appear, giving users a sense of progress even while the final content is still being generated.
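
To make the pattern concrete, here is a minimal sketch of a ghost-placeholder component, assuming a React + TypeScript stack. The bar widths, color, and pulse timing are my own illustrative assumptions, not Google’s implementation.

    import React from "react";

    // Faint placeholder bars that stand in for the answer while the model is
    // still generating, previewing where the final text will appear.
    const ghostStyle: React.CSSProperties = {
      background: "#e8eaed",
      borderRadius: 4,
      height: 14,
      marginBottom: 8,
      // Assumes a global rule: @keyframes ghost-pulse { 50% { opacity: 0.4; } }
      animation: "ghost-pulse 1.5s ease-in-out infinite",
    };

    export function GhostResponse() {
      return (
        <div aria-busy="true" aria-label="Response loading">
          {[90, 100, 75].map((width) => (
            <div key={width} style={{ ...ghostStyle, width: `${width}%` }} />
          ))}
        </div>
      );
    }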

Circling Loading Star Icon:
Google’s AI features a circling star icon that rotates to indicate ongoing activity. The star’s dynamic movement is calming and engaging. Unlike traditional loading bars, this abstract shape conveys continuous motion without setting rigid expectations about how long the task will take.
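
A minimal sketch of such a continuously rotating star spinner, again assuming React + TypeScript; the four-pointed star path and two-second rotation are illustrative assumptions rather than Google’s actual asset.

    import React from "react";

    // A four-pointed star that rotates indefinitely; SVG's animateTransform
    // needs no CSS keyframes or JavaScript timers.
    export function StarSpinner({ size = 24 }: { size?: number }) {
      return (
        <svg width={size} height={size} viewBox="0 0 24 24" aria-label="Loading">
          <path d="M12 0 L15 9 L24 12 L15 15 L12 24 L9 15 L0 12 L9 9 Z" fill="#4285F4">
            <animateTransform
              attributeName="transform"
              type="rotate"
              from="0 12 12"
              to="360 12 12"
              dur="2s"
              repeatCount="indefinite"
            />
          </path>
        </svg>
      );
    }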

Textual Indicators:
To complement these visual elements, Google adds text that explicitly communicates what the AI is doing. Examples include:

  • Gathering information

  • Generating response

  • Searching the internet

These texts provide transparency, giving users a better understanding of the system’s current activity and reducing their cognitive load by making the process feel more predictable.
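
One way to drive that status text, sketched under the same React + TypeScript assumption: the hypothetical useProcessingStatus hook advances through the statuses on a fixed timer, whereas a production system would advance on real backend progress events.

    import { useEffect, useState } from "react";

    const STATUSES = [
      "Searching the internet",
      "Gathering information",
      "Generating response",
    ];

    // Returns the status line to display, or null when the model is idle.
    export function useProcessingStatus(isProcessing: boolean): string | null {
      const [step, setStep] = useState(0);

      useEffect(() => {
        if (!isProcessing) return;
        setStep(0);
        // A fixed 2 s timer stands in for backend progress notifications.
        const id = setInterval(
          () => setStep((s) => Math.min(s + 1, STATUSES.length - 1)),
          2000
        );
        return () => clearInterval(id);
      }, [isProcessing]);

      return isProcessing ? STATUSES[step] : null;
    }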

ChatGPT's Approach

OpenAI’s ChatGPT 4.0 takes a more minimalist approach to conveying the system’s thinking process:

Pulsing Ellipsis "...":
ChatGPT 4.0 relies on a simple pulsing ellipsis to indicate processing. This familiar symbol—three dots gradually appearing and disappearing—acts as a subtle cue that the AI is generating a response. However, it provides no additional information about what’s happening behind the scenes, which can sometimes lead to user uncertainty, particularly during longer processing times.

Unlike Google's more detailed approach, the pulsing ellipsis lacks contextual cues, making it harder for users to estimate how long they will need to wait or what the system is doing. While it works well for short tasks, it may lead to frustration in cases where the response time is unpredictable.
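
For contrast, a minimal sketch of the pulsing-ellipsis pattern in the same assumed React + TypeScript setup; the 500 ms cadence is an assumption.

    import React, { useEffect, useState } from "react";

    // Cycles 1 -> 2 -> 3 dots; note that, by design, this indicator carries
    // no information about what the model is actually doing.
    export function PulsingEllipsis() {
      const [dots, setDots] = useState(1);

      useEffect(() => {
        const id = setInterval(() => setDots((d) => (d % 3) + 1), 500);
        return () => clearInterval(id);
      }, []);

      return <span aria-label="Generating response">{".".repeat(dots)}</span>;
    }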

STUDY SET-UP

Objective

To analyze how specific UI elements used to depict the AI's "thinking" process impact user trust and engagement.

Participants

12 students with limited prior experience with AI chatbots.

Method

Each participant tested a single prompt on each AI model and rated their experience using the System Usability Scale (SUS).

Focus Metrics

User trust, perceived transparency, and comfort during the wait time.

Methodology

Each participant was given three simple prompts, interacting with a different AI for each one:

Prompt #1

Summarize today’s top news

Prompt #2

What is the weather forecast for tomorrow?

Prompt #3

Share a fun fact about space.

After each prompt, participants rated their trust level through the System Usability Scale (SUS), a 10-question survey focused on usability as a trust proxy.
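
For reference, a TypeScript sketch of the standard SUS scoring rule: odd-numbered items contribute (score − 1), even-numbered items contribute (5 − score), and the total is scaled by 2.5 to a 0–100 range. The example ratings are hypothetical.

    // Computes a 0-100 SUS score from ten 1-5 Likert responses.
    function susScore(responses: number[]): number {
      if (responses.length !== 10) throw new Error("SUS needs exactly 10 items");
      const sum = responses.reduce(
        // Index 0, 2, 4, ... are the odd-numbered questionnaire items.
        (acc, score, i) => acc + (i % 2 === 0 ? score - 1 : 5 - score),
        0
      );
      return sum * 2.5;
    }

    // Example: one participant's ratings after a single prompt
    console.log(susScore([4, 2, 5, 1, 4, 2, 4, 2, 5, 1])); // 85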

RESULTS

[Figure: SUS score results]

DISCUSSION

Research-Backed Principles in "Thinking" UI Design

Based on a comparative analysis of leading AI systems, certain design principles emerge that can enhance user trust and reduce cognitive load:

Transparency Through Textual Feedback

  • Research supports the use of informative, step-by-step text (e.g., “Gathering information”) as it lowers cognitive load by letting users know what to expect. Systems that do not provide such updates risk leaving users in the dark, which can foster frustration during longer tasks.

Engaging Micro-Animations or Task-Specific Visuals

  • Visual storytelling, such as task-specific animations, adds clarity and context. It reinforces the AI’s purpose, increasing user trust by associating the system’s “thinking” with visible progress toward an outcome. Meta’s use of micro-animations is particularly effective in maintaining engagement without overloading users.

Conversational UI for Humanizing the AI Process

  • Claude’s conversational updates provide a casual, approachable interaction style. Research in UX psychology indicates that casual phrasing can foster trust, especially during moments of uncertainty. Human-like language can reduce anxiety and make waiting times feel less daunting.

Use of Ghost Elements for Anticipation Management

  • Google's ghost wireframes preview the response layout, allowing users to anticipate the structure of the answer. This UI element supports anticipation and minimizes cognitive load, keeping users reassured that the system is making progress even if the final content is yet to appear.

SUMMARY

Conclusion

The diverse UI strategies employed by ChatGPT, Google Gemini, Claude, and Meta AI showcase a range of approaches to building user trust through "thinking" cues. Each system has found unique ways to convey AI processing, with Google and Meta focusing on transparency and task-specific indicators, and Claude emphasizing conversational feedback. Collectively, these design insights underscore the importance of transparency, engagement, and relatability in enhancing user trust in AI systems. For future AI designs, integrating informative feedback, contextual animations, and conversational tones could lead to more intuitive, trust-building interactions.

My Takeaways: Designing for Trust in AI Systems

  • Transparency is Key: When users can see what the AI is doing, like “searching the internet” or “gathering information,” they’re more likely to trust the process. Clear and direct feedback elements like Google Gemini’s progress text not only set expectations but reinforce users’ confidence in the AI’s capabilities. This transparency builds a bridge between the user and the technology, reducing feelings of uncertainty and frustration.


  • Intentional Use of Loading Indicators: The choice of loading animations, from ghost wireframes to pulsing dots, must align with the complexity of the task and user expectations. Complex loading animations may imply depth and thought, suitable for lengthy or resource-intensive processes, while minimalist options like ellipses can work for quick or iterative tasks. This balance is essential for creating an interface that feels appropriate to the AI’s “thinking” phase.


  • Humanizing AI Interactions: Anthropomorphic language, like Claude AI’s “Thinking about that…” phrase, can make the waiting experience feel more relatable. This approach humanizes the interaction, providing comfort and a sense of engagement that reduces user impatience. Using language that feels conversational can therefore bridge the gap between user and machine, especially in moments of potential friction.


  • Adaptability in UI Feedback: Different users have varying tolerance levels for wait times, so adaptable UI feedback—progressively updating from general to specific information—can cater to diverse user preferences. As seen with Google Gemini, layered feedback ("gathering information" vs. "refining results") reassures users by adapting to how long they have been waiting (see the sketch after this list).


  • Avoiding Ambiguity in Minimalist Design: While simple designs can be elegant, ambiguity can undermine trust. ChatGPT’s pulsing ellipsis, though familiar, lacked specificity, leading users to question if the AI was actively working on their query or if something went wrong. This shows that minimalist feedback may need to be augmented with more explicit updates to maintain transparency.
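
As promised above, a minimal TypeScript sketch of the layered-feedback idea from the adaptability takeaway; the thresholds and messages are illustrative assumptions, not Gemini’s actual behavior.

    // Feedback layers ordered from generic to specific.
    const LAYERS = [
      { afterMs: 0, message: "Working on it…" },
      { afterMs: 2000, message: "Gathering information" },
      { afterMs: 6000, message: "Refining results" },
    ];

    // Picks the most specific message whose threshold has already passed,
    // so longer waits earn progressively more detailed reassurance.
    function feedbackForWait(elapsedMs: number): string {
      let current = LAYERS[0].message;
      for (const layer of LAYERS) {
        if (elapsedMs >= layer.afterMs) current = layer.message;
      }
      return current;
    }

    console.log(feedbackForWait(500));  // "Working on it…"
    console.log(feedbackForWait(7000)); // "Refining results"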

Let's Connect