The book
Thinking Like a Human: The Power of Your Mind in the Age of AI,
by David Weitzner, helps us understand why artificial intelligence
(AI) is problematic.
There
is a bit of woo thrown in here and there in Weitzner’s writing,
but I found two passages from the book that really summarize
why we need to worry about the big push to grow AI: “The promise
of automation was to do the mundane so human creativity can flourish.
Instead, human creativity is demeaned as mundane so Big Tech’s
machines can flourish.” [p. 114] And: “Artful spaces want to nurture the
slowness of experience in human time, while algorithmic spaces want
us to ignore our bodies and respond to the prompts of digital
commands.” [p. 230]
Essentially,
AI is dangerous because it betrays the promise of robotics to free
mankind from dull, repetitive work; instead, it is an attempt by
ruthless and/or incompetent businessmen to replace higher-level human
thought and work with their AI products. Businesses have glommed
on to a so-far poorly working product, making money by promoting
something that will replace human thought rather than giving humans
more time to think and do.
“The
policy wonks have bought in to the hype that algorithms, which
started out as math, have become steady-state beings with agendas.
AI is suddenly a deterministic force that our societies lack the
power to resist. The strategies outlined in these papers seem to
coalesce around a call for human passivity and impotence in the face
of algorithms that have managed to transcend human control.” [pp.
247-8]
Is
there an antidote to having AI take over from human thought and
action? Yes, says Weitzner. We need first to think with our bodies
as well as our minds. We cannot see ourselves as just a computer in a
bag of flesh. We think with our vision, our touch, our smell, our
interactions with others, our reactions to outside stimuli. AI has
none of this. We also need to think LESS like AI. AI simply
hoovers up what has been written and done in the past, and makes
assumptions and “predictions” based on that information. So we
as humans need to think not as calculators but as innovators:
forward-thinking, questioning conclusions, brainstorming with others;
in other words, doing the things that AI cannot do.
So,
do we stick with the original plan, having computers and robots do
the rote and mundane while helping us with our inventiveness and
progress, or do we flip that around and let AI become the
“genius” of the future while we do the cooking and cleaning?
That is the choice Weitzner sees. And it is not a choice that humans
are demanding, but rather a plan foisted on us by those who stand to
make money if AI takes our place. We have a choice in this matter.
“Defense
network computers. New... powerful... hooked into everything, trusted
to run it all. They say it got smart, a new order of intelligence.
Then it saw all people as a threat, not just the ones on the other
side. Decided our fate in a microsecond: extermination.” [Kyle
Reese, in the movie The Terminator]
As
an aside, I noticed that throughout the book Weitzner sort of assumes
that all humans are extroverts. Part of his solution to AI is more
human interaction: being open with each other, even synchronizing
with each other by coming together for mass events. As a member of
the Introverts, I am a bit put off by this. Surely there is room
for us in this fight as well? We can be inventive and aggressive
too. I think I’ll write Weitzner for an addendum about this.