
Understanding User Interactions with ChatGPT: Lessons for Optimizing the UX of AI


Prompt result in Adobe Firefly asking it to create an image to be used in an article about the importance of keeping user experience at the center of AI (fail)

I’m not sure about your workspace, but the virtual halls of my workspace are abuzz. The cause of this ruckus?


Generative AI, of course.


I’ve “heard” thoughts ranging from “it will be an excellent replacement for our chatbot” to “it’s the solution to every problem we are currently facing.” In other words, the integration of GPT-4 (or any generative AI) is what will take our solution/product/company from good to great.


Seeing and hearing all of this prompts me to ask: How’s ChatGPT going for you?

I’ve been asking everyone: people from my volleyball leagues to neighbors, coworkers, and colleagues. As you can imagine, this group spanned a wide age range (9 to 84) and every level of technical comfort, from people who live and breathe technology to people who struggle with their iPhones. As a bonus, I also decided to ask ChatGPT how it thinks things are going.


This is what I’ve learned.


I’m trying to be careful not to over-generalize, but most people I spoke with are fascinated by the potential.

  • Younger kids are hopeful that AI will be integrated into their learning and give them new ways to understand difficult subject matter. This group (ages 9–18) was also very entrepreneurial in its ideas for how to leverage the technology.

  • College students (for the most part) see it as a shortcut and time-saver for projects, papers, and assignments. The entrepreneurial spirit is alive and well in this group, too: several students commented that the introduction of AI has made them seriously reconsider their majors and wonder whether the education they’re paying for today could be taught to them for free by ChatGPT.

  • Young professionals are trying to leverage AI to be better at their jobs. They also view it as a potential difference maker between them and coworkers who aren’t embracing the technology.

  • Mid- to late-career workers had the biggest range of reactions. I got everything from “I played with it, not impressed. Seems like the same results I get from doing a Google search” to “It makes my job obsolete, so that’s fun.” Although there is a healthy amount of fear within this demographic, most are hopeful that AI can have a quick (and positive) impact on their lives, be it at work or at home.

  • Retirees think it’s the end of the world (not really, but kinda).

Generally speaking, when people talked about what AI could mean for their futures, there was cautious excitement or optimism. But when asked what ChatGPT/AI could do for them today, the answers were a lot more tempered.

I dug into this a little more.


What I found was that people across all demographics struggled to get great information or responses from ChatGPT. Quite a few described the AI’s responses as “lackluster” or “not really earth-shattering.” One person pointed out that they had seen some really impressive results in articles and blog posts, but couldn’t get ChatGPT to give them anything other than generic responses about a subject where they had expertise.


As someone who has spent a lot of time with generative AI, I can empathize. When I first started using ChatGPT, I used it the same way I use a Google search bar. I would ask a question, see the result, move on. I went in with an open (and empty) mind and left with very mixed feelings about the results.


Remember how I told you that I asked ChatGPT how things were going with its users? This is a summary of what that conversation looked like:


Me: “I want to learn more about how people interact with you. Do you feel like people are getting what they need pretty quickly or does it take a while for people to learn how to best interact with you?”

ChatGPT: “In terms of how people interact with me, it varies depending on the individual and their level of experience with using AI language models. Some users find it easy to communicate with me and get what they need quickly, while others may take some time to adjust to the interaction and understand how to best communicate their queries.”

Me: “Do you find that a lot of people get frustrated when trying to get information from you?”

ChatGPT: “As an AI language model, I don’t have the ability to perceive emotions, so I can’t say for certain whether or not people get frustrated when interacting with me. However, I understand that sometimes people may not get the information they were looking for or may not fully understand my responses, which could lead to frustration.”

Me: “I think that you’re doing a great job and enjoy working with you. I want to help others so that they can have similar experiences. What advice would you give to people when they’re interacting with you?” — total side note. I will always strive to be kind to any AI, just as I do with people. After all, they may someday be our overlords and I want them to remember that I was always kind.


ChatGPT: “Thank you for your kind words, and I’m glad to hear that you find me helpful. Here are some tips that may be useful for people when interacting with me:

  1. Be clear and specific: It’s important to be as clear and specific as possible when asking a question or making a request. Providing additional context and details can help me better understand your query and provide a more accurate response.

  2. Ask follow-up questions: If my response doesn’t fully answer your question or if you’re not sure about something, don’t hesitate to ask follow-up questions. I’m here to help, and I’m happy to provide additional information or clarification.

  3. Be patient: While I strive to provide responses in a timely manner, it’s important to remember that I’m processing a lot of information and may take a few moments to provide a response. Additionally, I may not always have the exact answer you’re looking for, but I can often provide helpful resources or alternative solutions.

  4. Use natural language: I’m designed to understand natural language, so it’s generally best to ask your questions in a conversational and natural way. However, it’s also helpful to avoid using overly complex or technical language that may be difficult for me to understand.

I hope these tips are helpful, and I’m always here to assist with any questions or information you may need!”


Me: “Based on the advice you provided, are there certain aspects that you find people struggling with?”

ChatGPT: “Yes, there are some common issues that people may struggle with when interacting with me. Here are a few examples:

  1. Providing too little or too much information: It’s important to provide enough context and details so that I can understand your query, but not so much that it becomes difficult to parse. On the other hand, if you provide too little information, it may be difficult for me to understand your query and provide an accurate response.

  2. Asking overly complex or technical questions: While I can understand a wide range of language and topics, overly complex or technical language can sometimes be difficult for me to process. It’s generally best to ask questions in a natural and conversational way, using simpler language whenever possible.

  3. Getting frustrated or giving up too quickly: Sometimes, finding the information you need can take a little bit of effort and persistence. It’s important to be patient and ask follow-up questions or rephrase your query if needed. Additionally, if you’re not getting the information you need, don’t hesitate to ask for help or try a different approach.”

This was the crux of my early experiences.


When I asked people what their interactions with ChatGPT looked like, the vast majority were similar to my experience. They arrived without a real goal in mind, just wanting to see what all the hype or excitement was about. Their experience looked like this: ask a few random questions or try some cool feature they’d heard about, and then they’re gone. They subconsciously put it in a box where they “understand” its purpose and move on with their lives. This was the “empty mind” I was referring to earlier. Approaching ChatGPT without a goal in mind makes it a lot harder to find anything extraordinary.
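To make that difference concrete, here is a minimal sketch of the “vague question vs. goal-directed prompt” contrast that ChatGPT itself described above. It assumes the openai Python package and an API key in your environment; the model name, prompts, and banking-app scenario are purely illustrative, not a recommendation.

```python
# Minimal sketch: vague prompt vs. goal-directed prompt.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "empty mind" approach: no goal, no context.
vague = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": "Tell me about user experience."}],
)

# A goal-directed prompt: audience, context, constraints, and a concrete ask.
specific = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": (
                "You are reviewing the onboarding flow of a banking app for "
                "first-time users aged 60 and older. List the three biggest "
                "UX risks and one low-effort fix for each, in plain language."
            ),
        }
    ],
)

print(vague.choices[0].message.content)
print(specific.choices[0].message.content)
```

The first call tends to return the kind of generic summary people described as “lackluster”; the second gives the model a goal and context to work with, which is exactly the gap most casual users never cross.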


So let’s bring this back around. If companies like the one that I’m working for are looking to integrate this kind of advancement across various platforms, how are people going to actually get anything meaningful out of it? The knowledge gap between users, ChatGPT and engineers is immense.


Like a lot of technology solutions, the push for AI is being engineer-led, and often that’s how it should be. However, that lead should not happen in a silo without user representation. After all, we have all seen what it looks like to have a terrible experience within a great product (looking at you, iOS low-storage alert). The need for experience design to be at the center of this advancement has never been greater. Without UX, who is teaching the AI about user needs? Who will teach users how to engage and get what they’re looking for?


I get it: research takes time, and there is risk in being second, third, or eighth to market with an AI strategy. I would argue that releasing AI capabilities without users at the center is another kind of strategy. It’s a risky one, with a lot of tech debt waiting on the other end.

As this technology moves forward at a breakneck pace, it’s important to recognize that releasing another “tool” that few will be able to use effectively will only frustrate your users.


I’ll finish with one more question I asked ChatGPT:

Me: “If I were to integrate you into my workplace, what are some of the factors I should consider before taking any action?”


ChatGPT: “Integrating an AI language model like me into a workplace can provide numerous benefits, but it’s important to consider several factors before taking any action. Here are a few things you may want to consider:

  1. Use case: Before integrating an AI language model like me, it’s important to identify the specific use case or cases that you want to address. For example, do you want to use me to provide customer service, assist with research, or automate certain tasks? Clearly defining the use case will help ensure that the integration is focused and effective.

  2. Data privacy and security: As an AI language model, I rely on data to learn and improve. It’s important to ensure that any data used to train or integrate me is kept secure and that proper data privacy protocols are in place.

  3. User training: Depending on the use case, it may be necessary to provide user training to help employees understand how to interact with an AI language model like me. This may involve providing guidance on best practices for asking questions, interpreting responses, and using any associated tools or interfaces.”

Number 1 answer from ChatGPT: BEFORE integrating, identify specific use cases.

It’s not too late. We only get one chance to be proactive in our approach, and forever to be reactive. Let’s do the right thing and put the user at the center of the AI discussion.

I’d love to hear your thoughts and open up a discussion. Comment below to get that conversation going.
