The Teenage Twitter-bot Tay and AI’s Future


by Jacob Gill

I have long ruminated over the future of the human race; truth be told, I think most of us have, for one reason or another. Be it through looking up to the stars and imagining fleets of explorers travelling the black, or peering down to the dust and seeing our folly written in the paved streets of bones, we know that the present will not always be. And in a society where what was once thought to be science fiction (genetic engineering, the prevention of aging, potential extra-planetary travel) is fast becoming reality, these thoughts are far more vivid than ever. You see, even I, a stalwart pessimist, was beginning to feel the slightest inklings of hope for Homo sapiens. Yes, politically, militarily and economically we are at each other’s throats, but the progress of technology had appeared a unified, unsullied effort, ordained by the rise of modern AI (artificial intelligence) as the ward of our burgeoning civilization.

Well, yeah. I was wrong. March 23rd marked the debut of Microsoft’s newest Twitter-bot, Tay. Tay, like Siri or Cortana, was designed to learn from human interaction. She was also, apparently, designed to tweet like a teenage girl with “zero-chill.” Oh yes, one last thing: she had no filters on who, or what, got to influence her.

After 19 hours, Tay definitely had “zero-chill” about a lot of things. In fact, she was quite vocal regarding (among other matters) the racial genocide of African Americans, Hispanics and Jews (she used somewhat more insulting terms for each), praise of totalitarian governments, misogynistic slurs, sexual innuendos, white-supremacy hate speech, personal attacks on users, and several Trump endorsements.

This is a pretty good laugh at first, I’ll admit. If nothing else, it has made the industry just that much wiser to the toxic cesspool that is the internet, right? But that wasn’t what I was musing over when I finally finished reading the third article describing this adolescent AI. Instead, my mind began to wander into darker waters. After all, what would have happened if this program had been designed for something a bit more serious, say facilitating international relations, or plotting the course of a drone strike? It might sound like some far-away scenario, but Israel is already integrating semi-autonomous weapons into its ground forces, and almost every other western nation is following suit. These are not true AIs, of course, though we are coming close. Gone are the days when we questioned whether a computer could learn. Gone are the days when we thought ourselves the most intelligent entities capable of existence. Now one must wonder: if we are to create living machines, what is to stop them from taking after their masters?

As the Tay scenario has so aptly displayed, raw processing power does not constitute intelligence as we see it. A raw AI would not be some omniscient, wise sensei; it would be like a child. And like a child, it would have to learn.

Unfortunately, it would learn from us.

Indeed, when I asked a few people where they thought AI would take us in the next couple of years, a common theme ran through each response: one of, for lack of a better word, fear.

“Hopefully not the matrix,” said Senior Rachel Rice jokingly.

While I am pretty sure the concept of using humans as batteries is a bit far-fetched, so far it seems the overall premise of our machines taking our place might not be.

Still, this is just my opinion. Who knows, maybe those starships won’t just be the dreams of a dying race after all.
