AI is catching up to the hype
One of the major trends I’ve been thinking about a lot this year is that the reality of AI has started catching up to the hype. This thought has been percolating in my brain for a while, and finally bubbled out into a comment on Metafilter. I’m publishing a lightly-edited version here, for posterity.
For context, I’m responding to another poster asking, about a roundup of AI research published in 2020, “where’s the intelligence?” It’s a totally fair question, and one that I can’t really answer, but it did give me a great jumping-off point to explain why I think the “AI” label is appropriate.
I mean, to some extent this is the central debate around “AI” – what is “intelligence”, really? There’s a semi-joke I’ve heard: you call it “ML” (machine learning) when you’re talking to other engineers, and “AI” when you’re talking to investors. This sorta captures the fact that, to some degree, the “I” in AI is hype and bullshit.
But… it’s not just hype. There’s more to it than that. In the last few years, there really has been a dramatic expansion of what’s possible to do algorithmically, into the realm of what used to be only possible to do by hand.
An example from my day job: we have a database of ratings, each of which includes a numeric score (1-5) and a free-form written component. We want to find the ratings where the numeric score disagrees with the written one – i.e., where someone has given a low score but written positive words, or vice versa. Today, this is easy, almost trivial, to do using sentiment analysis, and it’s highly accurate.
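To make that workflow concrete, here’s a minimal sketch of the mismatch-finding logic. In practice you’d call a pretrained sentiment model; the toy word-count lexicon below just stands in for one so the example is self-contained, and every name and wordlist here is illustrative, not our actual system.

```python
import re

# Illustrative stand-in for a real sentiment model.
POSITIVE = {"love", "great", "excellent", "helpful", "friendly"}
NEGATIVE = {"terrible", "rude", "broken", "slow", "awful"}

def sentiment(text: str) -> int:
    """Crude polarity score: +1 per positive word, -1 per negative word."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mismatched(ratings):
    """Yield ratings whose numeric score disagrees with the text's polarity."""
    for stars, text in ratings:
        score = sentiment(text)
        if stars <= 2 and score > 0:    # low score, positive words
            yield stars, text
        elif stars >= 4 and score < 0:  # high score, negative words
            yield stars, text

ratings = [
    (1, "Love this place, the staff were so friendly"),  # mismatch
    (5, "Terrible service, everything was broken"),      # mismatch
    (2, "Slow and rude, awful experience"),              # consistent
]
for stars, text in mismatched(ratings):
    print(stars, text)
```

The point isn’t the lexicon (a real model handles negation, sarcasm, and context far better); it’s that the surrounding plumbing is this simple once the model exists.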
This feels quite different from the kind of software development I’ve done for most of my career, in two main ways:
- it’s performing a task that seemingly requires human-level language recognition (namely, reading a chunk of text and telling how positive or negative it is)
- it’s doing so in a way that’s highly accurate and incredibly fast (it’s as accurate as human classification, but can process millions of ratings in seconds)
For me, this is the “intelligence” part. This is why I’m comfortable calling it AI. Not because I believe it’s “real” intelligence in the actually-has-consciousness sense, but because it’s doing a thing that until very recently only humans could do, with (seemingly) surprisingly deep understanding.
One more example from these papers: the paper on transferring clothes between humans. The authors of this paper built an algorithm that, given a picture of Person A wearing some outfit, and Person B wearing something else, can generate a totally synthetic picture of Person B wearing Person A’s clothes. (Watch the first bit of the video - it’s pretty impressive.)
This wouldn’t be particularly hard for a human, assuming they could draw accurately – they’d look at both pictures and … draw the synthesized person/clothes. But for a computer, this requires understanding:
- which parts of a picture are clothes, and which are human
- body shapes and clothing, enough to “guess” what the occluded parts of an outfit might look like
- how bodies move such that the clothes from one body can be repositioned accurately onto another
- and probably more
This is something different from what computers were capable of until just a little while ago. I’m not a philosopher, and not super interested in the “but what is intelligence really” part of this discussion. But I am a software developer, and there is something new about this. I roll my eyes at the ridiculous levels of AI hype, too, but I also don’t have a better name that captures just how different this all is from what we had just a few years ago.