Barely a word can be written about the joy (or terror, depending on your point of view) of getting seemingly thoughtful and intelligent responses from a chat engine powered by generative AI before a chiding response appears from an AI expert: "It's just a parlor trick / next-word predictor / predictive algorithm."
True. But the more I read these responses, the more they make me question the height of the pedestal we put the human mind on rather than the size of the molehill that ChatGPT and other generative AI rest on.
Teaching machines to emulate our thinking pays dividends in helping us learn about how we think, while it also gets some work done for us in the meantime, whether that is picking stocks or writing a letter. A toddler learning to string a sentence together is also just a pattern-matching engine, well stocked with thousands of examples it has been actively listening to.
As for making guesses one word at a time, I frequently start sentences and work my way through them too. In fact, this blog post has been an experiment in my own thought process as I have started each sentence before knowing how it would end. And I have not allowed myself to go back to edit or rearrange any word after I typed it. One exception: I allowed myself to fix typos if I fat fingered them before going on to the next word. I think that’s fair since ChatGPT doesn’t have fat thumbs like I do.
Statements that "ChatGPT isn't intelligent" are really pronouncements about us, not about ChatGPT. They refer to a self-imagined bar for what counts as "intelligent," set by each individual thinker and by what they suspect is going on in the brains of the humans around them. Viewed that way, I think we are passing up a chance for introspection when we dismiss the thinking abilities of generative AI.
Moreover, we are missing an opportunity to excercise some humility about our own capabilities. Whoops – seems I misspelled “exercise” and went on too fast. Yeah, I do that kind of thing a lot. Don’t you?