It’s ok to be skeptical about AI.

I was reviewing proposals for Amazon’s internal design conference and saw one that infuriated me beyond any rational reason. According to the author, it’s “past time for skepticism” regarding AI, and the contrarian in me replied, “There’s always time for skepticism!”

Skepticism isn’t reflexive dismissal. It’s the discipline to not take a tool at face value. The last time we felt this sure that a technology would change everything, the smartphone did – just not in the ways we predicted. We need to consider the realities of what it takes to build these tools and what it means when we use them in our day-to-day tasks.

It’s hard to ignore the problems with much of generative AI – the ethical concerns around training on copyrighted data, the ongoing behavior of AI companies, the circular economics propping up the industry, and the devaluing of skill and expertise. The majority of the benefits will almost certainly go to the executive class. You can argue that reliance on AI erodes decision-making ability and dampens users’ desire (or ability) to absorb knowledge. Perhaps you feel that it’s a regression to the mean, or that it cheapens expertise and craft (all of which are true).

And yet, I design AI tools at my job, letting people automate tasks where hands-on engagement isn’t needed, or where I can take advantage of generative AI’s pattern-matching skills. My personal work is about stretching my skills and gaining expertise in new areas, so I use it in targeted ways: to generate the underlying CSS grid for my website, and as a first-pass editor that flags low-quality or redundant content according to patterns I defined. Did I use any of the suggestions blindly? No – the editing itself was pretty bad (it frequently suggested removing important information), but it was good at identifying problems.

In creative art, I respect Matthew Inman’s and Jim Lee’s perspectives on AI but struggle to draw the line on what is acceptable. Is algorithmic art acceptable if I write the algorithm? What if I use a “smart” tool in Photoshop to isolate an object from the background, or retouch a background? Do the special effects used to enhance The Wizard of Oz for The Sphere hurt film, or allow VFX artists to accomplish the previously impossible? What separates The Archive In Between, which uses AI to animate original artwork according to the creator’s own scripts, from the AI slop that pervades social media?

The only difference I can quantify is intent: augmenting human capability without replacing choice and meaning.

Parallels

Within recent memory we saw a similar transition, with similar levels of hype: the iPhone and the popularization of the smartphone.

I recently had a conversation with a co-worker about the need to explore different UIs for interacting with algorithms. Chat (and voice) is great for the many cases where the precision of a dedicated UI isn’t required or isn’t possible. Despite its ubiquity, we identified many places where it simply failed, and we needed to provide additional UI affordances to accomplish things that were straightforward in more structured UIs.

When asked if I thought AI would kill UI, or cause a huge change in how people interact with technology, I mentioned how we all thought the smartphone (and, to a lesser degree, tablets and other touch devices) would change computing forever.

Now, we have access to the internet at all times. We’re connected through ubiquitous social networking. Every news outlet and information source is just a tap away. We can (and do) consume content anywhere, anytime. Cloud storage disconnected us from our own data. Many thought mobile would take over personal computing, and in many ways, it did.

Those of us who were skeptics (as I was, even as I loved my iPhone) never saw the “proper computer” dying. The question was always: what remains? What needs the bigger screen, the greater power, and the more precise input? I’ll consume media on my smaller devices, and occasionally game or paint. But as soon as I sit down for a dedicated task – writing this post, editing code, or doing anything that requires side-by-side comparison or precision – I (and many others) return to a more traditional computer.

I have no doubt AI will bring some changes to how we engage with software and hardware, but I don’t think we know enough now to claim that it will kill UI, or creativity, or anything else. Skepticism will keep us honest and keep these tools working to our benefit.