Even trillion-dollar companies can get AI wrong.

I was looking to buy a selfie stick and, instead of bothering my teenage sons for advice (like I usually do, and since all of us are supposed to be doing AI first 😀), I decided to ask Rufus, Amazon's shiny new AI assistant.

Me - “can you compare the 3 selfie sticks on my cart?”

Rufus - “Here’s how the selfie sticks in your cart compare”

It gave me three generic answers, and when I clicked on option 3, it took me to a general search page.

I was confused now…

Me - “are you searching for stuff on Amazon or are these the 3 selfie sticks on my cart?”

Rufus - “I don’t have access to your cart, but I can provide a general comparison based on common features of selfie sticks”

Wait, what?

So here’s the irony: Rufus pretended to understand, gave me a generic answer, and only when challenged admitted it couldn’t actually do what I asked.

Here’s the takeaway: If you’re building AI tools, context is everything. Don’t fake helpfulness—earn trust by being transparent. Especially at the final decision-making moment.

I’m still shopping on Amazon. I just won’t be asking Rufus for advice anytime soon.