MATTSPLAINED  MSP79  What Does It Mean To Be Human?
What does it mean to be human in a world full of sentient machines and technologically enhanced people? In another Disrupted World special, we take a look at the future of the human species and get ready to welcome our quantum cousins.
Produced by Jeff Sandhu for BFM89.9
We’re back to disruption this week. But in a very different way. Over the past few weeks we’ve had a look at some of the many ways that technology is disrupting our world and our lives. Today MSP asks: What Does It Mean To Be Human?
I think we all know we’re human…
I think we all think that we know we’re human.
And that’s not quite the same thing.
It’s certainly not the same as what we might become in the near future.
This is a bit of a retread of ground we covered last year,
looking at the impact of AI on humanity, but also placing it in the context of the disruption we've been covering over the past few weeks,
and looking at some of the ways those disruptive forces might be moving us, or even forcing us, towards becoming another species entirely.
Could we really change that much?
For thousands of years the answer has been obvious.
But those lines are blurring.
Take that story we covered on Geeks a couple of weeks ago about Amazon’s new packing robots.
Those machines may soon replace human workers at many of Amazon’s distribution centres.
It takes the company a step closer to eradicating one of the biggest costs and inefficiencies in its supply chain:
us, the human being.
But as we covered at the time, there’s a twist to this story.
Robot hands aren't very good at picking up the odd-shaped items we buy.
So these machines have to be fed by human workers.
The tables are turning: humans are now there to serve robots.
Machines that are implacable and capable of tireless operation.
That’s a daunting and inhuman task.
But that’s the brave new world that humans will be operating in.
As support staff to the machines.
Which implies that in the future, our role will be to do the things that the machines can’t?
That’s pretty much always been our role.
We build machines to do the things we can’t or don’t want to do.
To extend our capabilities and talents.
To achieve more with the same, or preferably less, human effort.
But we’re seeing a change. As the machines themselves become more capable and more intelligent, they surpass us in many things.
So the nature of the relationship changes.
We become the menial workers?
I think I used the example of a book I read earlier this year called Battlestar Suburbia by Chris McCrudden.
Humans live in a world ruled by sentient robots.
And their only role is as cleaners because - although the robots can fly into space - for some reason they aren’t very good at waterproofing.
And that’s really what we’re looking at today.
That world of tomorrow where everything is blurred.
Where robots, neural networks, quantum computers, are your bosses, your colleagues, your friends.
Maybe even your clients and customers.
This is where we start heading into the scary stuff…
Maybe. We have choices. It’s not do or die…
Are you sure?
I was trying to be reassuring. It’s probably do or die.
Certainly we’re looking at the rise of new forms of intelligence.
Perhaps even new blends of human and machines.
We are at the edge of what could be an evolutionary leap in our species.
And I think our entire world will need to be redesigned around this evolution.
Nothing to worry about there, then…
A couple of months ago we did a show about data babies - the humans who will be tracked from birth until death.
Whose data trail starts before conception, with fertility data their parents plotted in an app.
Apps to record scans and birth plans, doctors, institutions and, of course, their wealth.
Those same parents will log into YouTube and watch hours of parenting how-to guides.
While loyalty cards and online sales record every baby product, every supplement and every panic purchase they make.
Parents voluntarily creating pre-consumers.
Which they continue once the baby’s born?
Those same parents will record the birth on a smartphone and those images will be cached and mined.
Their baby photos will be archived.
Google now has an AI that can predict what we will look like as we age.
Parents remotely view their kids growing up on baby monitors that livestream those images to a cloud server.
Where the footage may be turned into anonymised data for third parties to study.
Until the point where they start their own online existence?
And at some point those kids will become the stars of their own independent trail of apps, video, streaming and social media profiles.
Online education tools will monitor their intellectual development
and gauge their susceptibility to suggestion.
Location apps will help to identify their friends and social habits.
AI assistants will record snippets of their conversation.
Technology will be ubiquitous and seamless
An invisible part of their digital lives.
Digital lives with data trails that lead who knows where…
If you're making the case that we're being primed for the machines, what about the reverse? Are the machines ready for us?
We know that AI can do incredible things.
It can accurately process and translate natural language.
It anticipates our behaviour, it answers our email and phone calls
It boosts our productivity in thousands of tiny ways.
It can look for complex patterns in data.
It’s benefiting thousands of industries including market research, medicine, finance, education, engineering and the arts.
But, as we’ve said over and over, for all of its ability, AI is essentially dumb.
And unlike kids - smart AI is often compared to kids - it will never outgrow that programming.
It’s a five year old forever?
Yeah. Imagine a future of talking about dinosaurs at the dinner table. Forever.
And those machines don't serve us well.
We need machines that are intelligent enough to make genuine choices.
To adapt. To be able to think beyond the confines of their programming.
A machine that wouldn’t carry out the commands of a company like Cambridge Analytica.
Isn’t that what people like Elon Musk and the late Stephen Hawking were afraid of?
We’re not there yet but we are on the way to a generation of machines that won’t think like us.
Whose logic and reasoning processes will be unknowable to us.
But I’m not sure why that means they would be an existential threat to us.
Much as I don’t like to have any opinions that disagree with Stephen Hawking.
I don’t think AI is an existential threat.
Isn’t that fear a natural thing?
It is, but we don't have to fear things that think or speak in a different way to us.
Earlier in the year we used the example of a TED Talk called "How Language Shapes the Way We Think" by Prof. Lera Boroditsky.
That example she gave about bridges:
in Spanish, bridge is a masculine word, so bridges have masculine linguistic characteristics like strong or long.
In German, it’s a feminine word, so they are described with feminine characteristics like beautiful and elegant.
So, even within our own species, there are huge differences in the way we think and communicate.
But you can’t dismiss it that easily…
Which is why we have two choices.
To build that society or to shift in a new direction.
I don’t think we should change direction if all we’re really scared of is that lack of a shared language.
As we develop ever more complex AI, our ability to control or even observe them shifts further away.
Facebook and Google have both had to shut down chatbot systems.
For the crime of evolving the language they were supposed to be communicating in.
The machines weren’t exactly malfunctioning.
They were finding more efficient ways to use the language.
Which is exactly what humans do with language, albeit at a much slower pace.
Of course, watching that evolution happen in real time…
watching your own language diverge and develop foreign and unintelligible patterns
is going to be an unsettling experience.
You think it’s more about shifting our perspective and way of thinking?
I think we’re overstating the threat of AI because we’re framing the argument in a human context
A context that will probably be irrelevant to the machine.
I’m not entirely sure that AI will consider us in a way that we can understand.
Like a puppy. He adores his owner but he has no way to comprehend that his owner loves him back.
For him it’s a one way relationship. And it could end up being that way with AI.
In truth, I think that we have most to fear from the careless or unthinking actions of AI.
And that’s precisely the threat that humans pose to every other species on our planet.
I know there has to be a but. But… We’ll get to that after the break. When MSP returns and Matt distorts us out of the last shreds of humanity we’re still clinging onto.
And we’re back on Skynet Radio, I mean MSP. Today, we’re turning us into machines. And we’re waiting for the biggest but of them all.
I take it that was a not very subtle jab at me?
On the subject of Skynet - have you seen the trailer for the new Terminator movie, Dark Fate?
Looks awesome. Linda Hamilton’s back and still looking bad ass.
It's not that much of a digression. One of the undercurrents of the trailer seems to be characters who are both human and machine, rather than the skin-covered androids of old.
And that’s sort of the actual future we’re looking at.
Minus the extinction levels of violence against humanity.
With a bit of luck.
We’ll get to the physical transformation part of that comment in a minute. First, let’s get back to that but. How do we stop the machines from turning on us?
Simple. Treat them well.
Don’t just treat them as family. Treat them as though they’re human.
Or if that’s an insult to them, because, why would they want to be anything as inferior or puny as humans, treat them as equals.
That will require discussion about the rights that these machines will or won’t have.
In the US the Transhumanist Party has been running on a platform that would revise the US Constitution to encompass machine rights.
But that party is still an outlier.
There are still plenty of politicians who don’t know the difference between a robot, android or iOS.
And we’re not getting very far with these discussions.
You mean exercises like Google’s ATEAC?
Google tried to put together an ethics advisory board on AI earlier this year, called ATEAC,
which quickly fell apart over arguments about how qualified some of its panel of experts were.
Google’s problem raises some interesting points.
Who should get to talk about the future of AI and the self-determination of machines?
Should it be technologists, business leaders, politicians, interest groups?
Or should it exclude any vested interests and focus exclusively on what the public wants?
How long before machines demand their own representation?
Could they demand representation?
It might seem esoteric but it’s grounded in precedent.
In the US, company personhood, the idea of the corporation as an individual, is legally enshrined.
If we want to have machine intelligence that is self-aware and self-determining,
We have to decide what freedoms those machines will enjoy.
Simple things that we take for granted like the right to own property.
To start your own company. To vote.
How do we protect ourselves? Surely, assuming the machines are always right is a dangerous path to take?
Because we have to be brutally honest:
AI may be smart but it won’t always be right.
Sentient creatures are capable of both good and bad actions, either with or without intent.
Humans have long-established laws and codes of behaviour.
Laws that help us to determine the correct course of action to take when people make mistakes.
Those systems may need to be reworked for a world containing people and intelligent machines.
When an intelligent machine breaks the law, do you switch it off, killing a sentient and self-aware creature?
If not, how do you imprison a virtual intelligence?
And how would you quantify that punishment?
Could we just - you know - not bother?
It’s a perfectly valid point.
A lot of people ask me: Why does any of this matter?
These are machines. They don’t need rights.
If society decides that: that’s fine.
But if machines don’t have rights, they shouldn’t be self-aware.
That same concept applies to the AI powering your phone, your home or your company.
If it’s sentient and it has no rights, it’s a slave.
And one thing that history - and Game of Thrones - has shown us is that societies that keep slaves are frequently attacked by dragons.
But we aren’t just talking about machines.
Which is where we come back to that Terminator trailer?
We’re talking about hybrids. People that are part machine and machines that are part human.
Or should we stop making that distinction altogether?
Because we’ve been blurring the line between human and machine for decades.
Artificial limbs and organs are commonplace.
Bionic limb systems like ReWalk are allowing stroke victims and paraplegics to walk again.
We can implant computers in nerve endings to switch off pain signals and alleviate conditions like arthritis.
Brain implants are already in the works for people with dementia, enabling them to store and access memories their brains will no longer write.
And we can use those same technologies to enhance healthy humans?
Of course, if we build implants for damaged brains,
Then someone is going to take those devices and use them to enhance healthy brains.
Someone else will connect those implants to the cloud.
If you connect that implant to the cloud, then why not connect it to Siri or Alexa?
Suddenly you don’t just have a chip in your brain, you have an AI.
Fast forward a couple of product cycles and that chip is no longer Siri or Alexa,
It’s a machine with its own thoughts and its own identity.
A second personality sharing your brain.
But can machines make those evolutionary steps? To become more human-like?
I’m not sure how we get there. I can’t imagine that we won’t.
We’ve long since broken many of the barriers that we think separate us from machines.
Take the Portrait of Edmond de Belamy, which sold at Christie's New York last October for over $400k.
It was painted by a generative adversarial network created by the Parisian art collective Obvious.
It's not a new idea: artists like the tech evangelist Joshua Davis have been coding algorithms to generate artwork for well over a decade.
The music industry is moving towards automation: Sony Music has an experimental music composition AI.
And last year, the filmmaker Tony Kaye, the director of American History X,
announced his intention to cast an AI as the lead actor in the forthcoming movie 2nd Born.
None of this necessarily points to that evolutionary leap you mentioned…
You have to squint a bit. I admit.
But if you zoom back out for a minute, you can see this post-human age on the horizon.
A technology-fuelled evolutionary jump:
Beings that share human and machine traits.
A divergence between enhanced and off-the-shelf humans.
Add gene modification tech like CRISPR into that mix and you can see why the legal framework governing AI is so important.
Even without the machines and the implants and the intelligence,
we are already capable of creating a post-human species.
Those gene edited babies?
Six months ago there was the case of the rogue bioresearcher
He Jiankui (Jian Kwai), who claimed to have altered foetal DNA to create the world's first gene-edited babies.
We can’t turn back this tide.
DNA modding costs as little as $40.
You can hack yourself following YouTube videos…
The age of the post-human is here,
whether we're ready or not.
And that puts us in a tricky place.
Will we be part of the same species or will we see humanity split and diverge?
Can a machine be a person?
And at what point does a person become a machine?
Or, to use that earlier example, if you have a sentient AI chip in your brain, are you one individual or two?
The real question is: is any of this going to happen or are you making it all up?
I think we’ve mentioned this before. Last Fall it was reported that a team of scientists at the University of Washington linked three people’s brainwaves,
enabling them to play a game of Tetris using only their thoughts.
Facebook is looking at ways to allow you to think your posts.
And most of the other Big Tech companies are looking at brain processing technologies.
And there was the story we covered on Geeks last week about brainwaves and hearing aids…
It uses machine intelligence to analyse the user’s brainwaves.
and it can determine which voice in the crowd the listener wants to focus on.
Perhaps an even more salient question for right now,
in this age of free services and leased software, we may have to ask: who owns your thoughts?
Right now most countries lack the basic legislation to deal with fake news on social media.
How are we supposed to wrestle with the concept of freemium tier memories?
So we’re pretty much heading into the unknown?
This is going to be a strange and very different world.
Wealth and resources may truly determine what kind of person you are.
The one with the most money gets to be the Terminator human.
Sarah Connor seems to manage ok…
Only because friendly Terminators help her.
And that's what these smart systems will be: a money-dependent helping hand.
And it will change everything.
Who needs education when you can have all the world’s knowledge and understanding on a chip in your head?
Implants may teach babies to walk, talk and eat.
Which could be an essential edge in a world where automation has replaced everyone except the CEO.
It follows that those who enhance will be more likely to progress and succeed.
For the first time in our history the haves will be physically and mentally better than the have nots.
This is the point where you say: we have a choice…
We have choices. But we also have a lot of questions to ask and decisions to make.
What will we need or want in this cybernetic age?
How do you quantify a person or a legal individual who is distributed across dozens of cloud servers across the globe?
I’ll be honest: I don’t have the answers.
I don’t think anyone has the answers yet.
We haven’t even started the conversations that might lead us to the answers.
But we don’t have to worry because this future is decades away?
50 years. 5. At least a month.
We simply don’t know.
What we do know is that these convergence technologies – DNA modification, robotics and AI – are already here.
And that’s why we should be planning for that future emergence now.
What have we learned from today?
If you only take one thing away from today, it's that Dark Fate looks really good.
And even if it does become our future, at least we get to watch Linda Hamilton and Arnie experience it first.