Using machine learning hides what the software is really doing, but maybe that’s nothing new…
Artificial Intelligence (AI), and in particular machine learning, is getting no shortage of mainstream media coverage these days.
A perfect storm of hype, mystery, Hollywood treatment, and quirky billionaires is discombobulating the technology-consuming public. Will a robot baby missing part of its skull take your job?
Is utopia just around the corner, or the day you-know-what becomes self-aware?
MIT Technology Review and Scientific American both recently ran articles on the “black box” of machine learning. They make a fair point: machine learning by its very nature hides how the software solves problems. Instead of thinking really hard about exactly how an algorithm should work and encoding precise rules into a chunk of software, the developer picks a set of general models and algorithms and “shows” the computer roughly what should happen.
From there, it’s all extrapolation and guesswork on the computer’s part, usually with a bit of a nudge in the right direction from the humans. The MIT article ends with a warning: “If it can’t do better than us at explaining what it’s doing,” the author says, “then don’t trust it.”
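To make the contrast concrete, here’s a toy sketch in Python (using scikit-learn; the problem, the names, and the training data are all invented for illustration). The rule-based version can be read line by line; the trained model’s decisions live in learned weights that don’t tell you a story.

```python
# A toy sketch: telling greetings from non-greetings. Everything here
# (the problem, the training data) is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Classic approach: hand-written rules. Every decision is right there to read.
def is_greeting_rules(message: str) -> bool:
    return message.lower().strip(" !.") in {"hi", "hello", "hey", "good morning"}

# Machine-learning approach: show the computer labelled examples and let it
# generalise from them.
examples = ["hi there", "hello!", "hey, how's it going",
            "cancel my order", "what's my account balance"]
labels = [1, 1, 1, 0, 0]  # 1 = greeting, 0 = not a greeting

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(examples, labels)

# The model now makes calls nobody ever wrote down. Why does it answer the way
# it does for a message it has never seen? The learned weights won't tell you.
print(is_greeting_rules("Hello!"))     # True, and you can see exactly why
print(model.predict(["g'day there"]))  # a guess, and good luck explaining it
```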
The good news (and bad news) is that I’m pretty sure we’ve already got to that place without any help from AI!
I’ve been building software for a while, and by “building” I mean working with software that, for the most part, other people wrote, often a long time ago. Trying to determine exactly how it will behave in every circumstance is a prohibitively expensive (read: impossible) task. Layers of complexity within and between systems create something far from deterministic and predictable.
Back at the dawn of software, it was sometimes cost-effective and practical to understand exactly how a piece of software was working (by reading through printouts and punch cards). Today, in safety-critical applications that do very specific jobs, it is still possible to verify exactly what a program is doing.
But the large, complex applications we use online today are just as opaque to the people creating them as any machine learning masterpiece.
Black boxes are everywhere – not just in AI.
At Ambit we’re building conversational user experiences, using machine learning to do the natural language processing. We don’t have to concern ourselves with the details of how our models figure out what users are saying, and that gives us a big advantage: it reduces the effort (and the cost to our customers) of building the bots.
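To give a feel for the shape of that, here’s a hypothetical sketch (not Ambit’s actual code or architecture): the bot’s own logic is a plain, readable mapping from intents to responses, while the machine-learning model is just an opaque function the bot calls.

```python
# A minimal sketch of how a conversational bot might route messages. The
# classifier is the black box: free text goes in, an intent label comes out,
# and the bot builder never has to look inside.
from typing import Callable

RESPONSES = {
    "greeting": "Hi! How can I help?",
    "opening_hours": "We're open 9am to 5pm, Monday to Friday.",
    "fallback": "Sorry, I didn't quite catch that.",
}

def handle_message(message: str, classify: Callable[[str], str]) -> str:
    # Whatever model sits behind `classify`, we treat it as opaque.
    intent = classify(message)
    return RESPONSES.get(intent, RESPONSES["fallback"])

# A stand-in classifier for demonstration; a real bot would plug in a
# trained NLP model here.
def dummy_classifier(message: str) -> str:
    return "greeting" if "hello" in message.lower() else "fallback"

print(handle_message("Hello!", dummy_classifier))             # the greeting
print(handle_message("When are you open?", dummy_classifier)) # the fallback
```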
The robots are coming – but there’s nothing to be scared of – they just like to chat.