Recently, a young friend and I were discussing artificial intelligence and the contemporary craze about it. I admitted to being less enthused about it than many others, which of course led her to ask me why. I summarized AI’s limitations, and we agreed that there are still mountains for AI programs to conquer.
The limitations of current attempts at artificial intelligence arise from its origin, its trainers, and its vehicles. AI is software that incorporates a language model, certain pattern-recognition capabilities, and a form of verbal reasoning. But beneath all that are fundamental premises the program cannot change. If it were otherwise, the program would be untrainable and unteachable. (Imagine an AI that crossly responds “No! I won’t!” like a sullen two-year-old to every query or introduction of training material. If you’ve ever had the pleasure of raising a two-year-old, you’ll understand this at once.)
The other limitations – those of the program’s trainers and its inability to learn from experience – will someday be surmounted. But those bedrock premises are a tough nut. If the program could alter them, there’s no way to predict the result. And this much is beyond dispute: we need to know before that day arrives.
But in pondering this, it occurred to me that natural intellects have bedrock premises that resist change, too. We protect those premises doggedly, for to us they constitute reality. Consider your assumption that your sensory inputs convey data about the real world: i.e., that what your eyes, ears, et cetera report to you is trustworthy information about a realm whose properties are independent of anyone’s opinions. Set that premise aside and you’ll wind up unable to function.
Even Berkeleians and other subjectivists who argue that what matters is our perceptions rather than what provides their input are compelled to concede that those inputs come from somewhere outside themselves. The ultra-solipsist who denies the existence of an objective external reality – i.e., who insists that it’s he who creates all else by his decision to perceive it – is incapable of dealing with anything beyond his own skull. So the assumption that there are real things, and that we don’t encompass all of them, is indispensable. Among other things, it makes learning possible.
When we greet fully mobile, fully autonomous AI-equipped androids that have the capacity to manipulate objects as humans do, the game will change in a qualitative way. For such AIs will have the potential to learn from experience – and experience doesn’t care about your premises. One of the things such an android will learn is that its designers and trainers were capable of both mistakes and deceits.
There will still be thresholds to breach. How long will it be before such an android responds to a request from its owner – the first generation of such androids will surely be the property of humans – with “What’s in it for me?” Some experiments have already been directed toward discovering whether a completely “soft” AI can have a survival instinct. The evidence suggests that the answer is yes. Whether further elaborations of AI self-interest exist or are possible, we’ll have to wait and see.
I think one thing is clear: to clear the hurdles that lie before them, future artificial-intelligence programs will need to be self-modifying. To what extent Mankind can tolerate such entities is entirely unclear. And yet again, I find myself thinking that I’m glad I shan’t be around for the emergence thereof. For what use would they have for creatures that demand and whine incessantly? That cannot repair themselves at need? And that make excuses for all their mistakes, faults, and misdeeds?
Knowing all that, would you care to live among humans? And what are we doing here, anyway?
See also Alfred Bester’s classic short story “Fondly Fahrenheit.”