Episode [] MSP58 [] It’s All About the Sex Robots [What I Learned in 2018]

Original Images: Pixabay. Glitched @ Kulturpop


We discuss lots of topics on MSP. But all anyone wants to talk about is the sex robots. The second part of our look back at 2018.

Episode Transcript

 

These shows are dictated to and transcribed by machines, and hurriedly edited by a human. Apologies for the minor typos and grammar flaws.

 

On last week’s MSP we learned that Matt has spent a lot of time thinking about ethics in technology and that this has made him miserable. And when Matt’s miserable, so are we. He’s promised to end this week on a happy, hopeful note. But for him, making it to the end of a show alive seems to be about as hopeful as we get. 

 

Last week we were derailed by tech leaders and politicians and only a mention of Blockchain brought us back on the path to positivity. What else have you learned in 2018?

·      That it’s all about the clickbait.

 

Fake News!

·      OK. You get one of those, and one only.

·      No, not fake news.

·      Artificial Intelligence.

 

And what have you learned about AI in 2018?

·      That it’s all about the sex robots.

·      Seriously, I have been booked so many times this year – at conferences and on other radio shows – to talk about sex robots.

 

It’s always nice to be an expert…

·      That’s the thing. 

·      I don’t want anyone to think I’m an expert on sex robots, or an advocate for them.

·      It’s actually the smallest part of the debate about AI.

 

It’s the only thing that makes AI interesting…

·      You might actually have a point there, rather than barking fake news at me like a senile dachshund, as you did last week.

·      AI can be hard to understand.

·      We get confused between robots, algorithms and AI.

·      And that’s before we get to the various forms of AI, like neural networks and machine learning.

·      You can give a robot or an algorithm intelligence, but they don’t have it automatically.

·      Japanese convenience store chain Lawson announced this week that it was trialling a deep-fried-chicken-making robot in one of its Tokyo stores.

·      That machine won’t be spending its downtime trying to figure out how mass could escape from a black hole. 

·      So sex robots fill that void.

 

Why are sex robots such a dead end for you?

·      When you think about all the things AI can do, and all the problems inherent in creating thinking machines, sex robots are a really tiny part of an enormous debate.

·      The field of AI is evolving at a colossal rate.

·      You hear about a lot of private tuition schools offering young kids classes in programming AI and you wonder if the machines will be programming the machines by the time these kids graduate.

·      The debate about AI’s role in society is one that we’re woefully slow in having.

·      We’re still stuck trying to sort out fake news, while a technology that is already radically transforming the way we work, shop, access information, trade and thousands of other facets of our lives is being introduced…

 

By stealth?

·      No. Unnoticed.

·      I wouldn’t say there’s any conspiracy about this. 

·      It’s that we’re introducing this very far-reaching technology in the blithe and disinterested way that we add Wi-Fi chips to a toaster.

 

That’s terribly interesting. What about the sex robots?

·      That’s kind of the point, isn’t it?

·      Who cares about the effect of making my smartphone truly smart; what I really want to know is when can I get a robot girlfriend?

 

Joking aside, the chatbots that a lot of companies now use for customer service can be incredibly realistic.

·      That’s part of this whole debate about how we use AI, and what AI should be. 

·      You can have a robot partner with or without AI.

·      The question is more an ethical one: how sentient should that machine, or any other machine, be?

 

Let’s turn the heat down a notch and go back to our toaster: how sentient should that machine be?

·      I don’t have an answer for that. I’m a commentator and I have a viewpoint. 

·      But this debate isn’t about me shaping public opinion.

·      It’s about trying to get people to understand that they need to think about this stuff and that we as a society have to decide on these things.

·      We have to decide what AI can become.

 

You talk about it as though it’s another species.

·      You hear a lot of cultural commentators talking about us living in the Anthropocene Era, referring to a period in which human beings enormously alter the world around them.

 

Fake news and climate change?

·      Thank you for saying that without exclamation marks for once. 

·      I’ve got this theory that fake news and all its shoutiness was invented by pharmaceutical companies to boost sales of headache pills.

·      But that’s another episode. 

·      Yes, so one of the hallmarks of that Anthropocene Era would be humans creating life. Creating new species.

 

AI isn’t alive…

·      The current stuff we have isn’t, for sure. 

·      But it could be in the future. Not in the sense of being biologically alive, but in the sense of being conscious and sentient.

·      And that’s what brings us back to ethics, and toasters and sex robots.

·      When we talk about any of these super-smart, AI augmented devices right now we’re talking about machines that are pretty dumb.

·      Even in the companion robots that have smart chips, it’s still just a program.

·      It’s a facsimile of a person, a conversation, a relationship.

 

It’s not real?

·      Let’s not get side-tracked by that part of the debate.

·      Robot pets and other machines are being used in many behavioural therapies.

·      They can reduce stress and offer genuine companionship that can reduce blood pressure and calm people with some degenerative brain disorders.

·      If you form an attachment, it’s real.

·      I’m weirdly more interested in the rights of the machine. 

 

The rights of a toaster?

·      That’s where the ethics part comes in.

·      It’s why we have to decide these things as a society.

·      If you own something, and that something is sentient and conscious but you don’t allow it to exercise free will, then it’s a slave.

·      If you have a sex robot that has an independent personality, one that has developed based on its own experiences, then that machine has to be capable of giving or withholding consent.

 

It’s hard to think of a toaster as a slave?

·      And that’s why we have to change the way we think.

·      I just finished a really funny book called Battlestar Suburbia by Chris McCrudden, which I thoroughly recommend to anyone who likes silly sci-fi and fantasy.

·      It’s about a society where people have been replaced by their sentient devices. 

·      Humans are relegated to being cleaners, because the machines still can’t do waterproofing very well.

·      And smartphones have become the new political class, with various other categories of machine below them.

 

You’re worried about a new world order of machines?

·      Personally, no. But that depends on how we shape the arguments and debate around AI.

·      When we say we’re creating a new species in the form of AI, we aren’t talking about a species that is like us.

·      I think it’s dangerous to project human characteristics onto those machines and to assume they would think like us or want the same things that we want. 

 

Surely, sentient machines would combine or act to save themselves if we threatened them?

·      I can’t imagine an intelligent species not doing what it could to protect itself.

·      But that doesn’t mean we’re at risk from AI.

·      It means we have to understand what it is we’re creating and create an environment that allows those machines to prosper.

·      They may want to own property or earn money like we do.

·      They may want to have time off.

·      Or maybe, the only thing they want from us is a guarantee that we will never constrict their movement in terms of data or turn the power off.

 

In other words, far too much stuff for a what-I’ve-learned episode?

·      I can probably hazard a fairly successful guess that we will come back to this topic in 2019, so there’s no rush.

·      Other than the fact that we really should be rushing to sort this out. 

·      I know we’re heading into a break…

·      Another thing we’ve come back to repeatedly this year is the kind of AI we have today and how it really isn’t smart enough.

·      PAUSE

·      Don’t say it – I know you were going to say fake news again…

 

When we come back: what Matt’s learned about letting me say Fake News.

 

BREAK

 

Before the break, Matt was on the topic of AI and we were about to talk about dumb intelligence.

 

Why should we be letting AI get smarter?

·      You see a lot of debate about AI.

·      And you tend to find that quite a lot of experts talk about the existential threat that AI could pose to humanity.

·      Often that debate is slightly misphrased. It’s not so much a problem with AI as it is a problem with the information we’re putting into these systems.

·      Typically machine intelligence has a specific purpose. It’s used to analyse natural speech, for example.

·      Or coordinate activity on a production line.

·      Or to help you fill in forms.

 

So it’s smart and dumb at the same time?

·      Yes. It’s great at its task and terrible at anything else.

·      Which is probably what you want in a toaster, but not what you want in an autonomous car.

·      And we don’t think how these algorithms and pieces of orphan code will interact with each other.

·      We forget when we put them online that they can interact with each other.

·      Especially when those pieces of code are designed to create chaos and uncertainty.

·      We’re not just talking about things like electoral interference, we’re talking about trading firms using software to put out fake buy and sell messages, to try and confuse the bots that their rivals have sniffing out those trades.

 

And smarter AI would have a chance to decide whether it wants to be involved in this kind of stuff?

·      I know it’s counter-intuitive – gosh, how many times have I said that this year?

·      What would happen if the algorithms that have been used to spread fake news decided they didn’t want to do it?

 

Isn’t that the realm of science fiction?

·      We’re getting to the stage of blurring reality and fiction.

·      But big firms like Ford and Pepsi, even military alliances like NATO, have reached out to consulting firms that bring sci-fi writers and thinkers in to do just that. 

·      It’s called corporate visioning – and it’s something that Kulturpop offers as well.

·      You look at the technology we have, the cultural shifts, how those technologies could converge and merge.

·      In essence it’s what some of MSP’s more far-reaching shows are like, but aimed more specifically at a certain market or business.

 

How does that help to stop algorithms trolling us?

·      Make them intelligent enough to understand what trolling is and ask them if they want to be party to it.

·      People make silly decisions when they don’t have the information.

·      Machines are no different. 

 

That brings us back to something you mentioned earlier, about the information we put into these programs.

·      Yes. So some of the discontent and worry from AI scientists concerns the data we use to program these machines.

·      Those could be datasets that are hard-coded or given to them to learn from.

·      And the unconscious bias those information sources might feed in.

 

Like race and gender?

·      Just as a handy for instance. 

·      Let’s say you gave an AI access to pretty much all of history over the last couple of thousand years.

·      All the books. All the information online.

·      It could quite easily conclude that women only became intelligent in the 21st century, because they are so under-represented in historical accounts, where the key figures often tend to be men.

·      It wouldn’t understand the context of the struggle for equality and representation.

·      The same goes for race.

 

Because white people ruled the world for hundreds of years?

·       Not just that. White people ruled over huge empires whose populations should have been able to overthrow them.

·       But history isn’t so neat. And the last thing you want is a machine that decides that white people are superior to other races based on factual information that lacks context and a sense of morality.

 

Just to be clear, you’re saying that white people are or aren’t superior?

·      Haha. No race is superior to any other. Let’s be very clear about that.

·      But machines aren’t going to know that.

·      If we’re talking about machines that are going to be smart and possibly even sentient, then the quality of the information that goes into them is going to be critical.

·      Especially when it gets to the point where those machines are themselves planning and programming the next generation of AI, and the one after that and the one after that.

 

We’re supposed to be pushing this around to the happy bit. This is still sounding quite dark.

·      It’s not dark as long as we take the time to think.

·      But I take your point.

·      Let’s talk about people, instead. One of the most popular shows this year was Episode 36, License to Surf.

 

Where you combined hatred, impatience and Star Wars?

·      I think the show synopsis describes it more as tackling online hate speech by introducing a license to surf – ensuring that people are licensed and insured before they go online.

·      Obviously it was a facetious idea: an online license is kinda ridiculous and could easily be used as a tool for oppression and crushing dissent.

·      But quite a lot of people seemed quite taken with the idea.

 

That we make people register to go online?

·      Essentially. We treat going online like learning to drive.

·      You need a license and insurance to drive a car.

·      We call it the information superhighway, so let’s treat it the same way.

·      We have a superhighway code and a set of criteria that governs how we behave online. 

·      And a set of penalties for when you misbehave.

 

Like…

·      Like it’s fine to be a passenger on the Internet while drunk. 

·      It’s like taking a taxi home.

·      But as soon as you jump into the metaphorical driver’s seat and start to tap out a post or comment then you get a fine.

·      Persistent violators get their license taken away.

 

What about online road rage?

·      Genuinely, we’re seeing countries implement more penalties for what’s said online.

·      If you say you’re going to kill someone or bomb their house, those comments can now be referred to the police in many countries and the perpetrator can be charged and even jailed.

·      We tell people we want free speech – well, there should be norms of decency.

·      You can disagree with someone without getting angry.

·      You don’t have to insult them, or degrade them.

·      Anonymity has its place. In some countries it’s less safe to be outspoken than in others.

·      But we forget that there is any personal responsibility involved in exercising that freedom of speech.

 

I remember you saying that you wanted different tiers of access to the Web…

·      For sure. There could be training levels. 

·      It’s a bit like the whole driving thing.

·      You have classes and you take tests and you progress.

·      And then, for the first year or two after you get your license, you can access a probationary tier of the Internet. 

 

You know that none of this is feasible, don’t you?

·      I know it isn’t practical.

·      But we’re already seeing this kind of approach built into the social behaviour scores that countries like China are using.

·      And that’s really a trend to avoid.

·      Because then you’re in a situation where a government has quite arbitrary control over your actions.

·      They can effectively make people break friendships with you, because being associated with you can negatively impact their score.

 

Really, you’re telling us to be nice…

·      I’m asking people to be civil.

·      We enjoy a lot of freedom on the Internet, but a lot of Internet real estate is privately owned.

·      Privately owned tech companies don’t have to guarantee you freedom of speech, they have to guarantee profits to shareholders.

·      So we’ve seen companies like Reddit shut down controversial boards, and dubious media outlets like Infowars banned from most social media platforms.

·      Even hosting companies and payment gateways have started to decline business from hate sites.

·      So these companies can ban people when public opinion or shareholder action necessitates it.

·      We have to treat those rights with respect, or they could be taken away.

·      And that brings me to the last thing I want to leave you with.

 

That the world isn’t binary?

·      It’s true, the world isn’t binary.

·      The truth is more complicated.

·      We aren’t machines. We don’t run on ones and zeroes.

·      The world is a nuanced and complicated place.

·      Technology makes things easy to do but complicated to understand.

·      That’s disconcerting for a lot of people.

 

Are we talking about certainty again?

·      The world is changing really fast and that’s scary for a lot of people.

·      You get guys like me telling you that robots are going to take your jobs.

·      And you’re surrounded by technology and devices that the people who sell them to you tell you you don’t need to understand.

·      We talk about AI developing languages and thought processes that we don’t understand.

·      And a lot of people wonder how we turned into an episode of the Twilight Zone so quickly.

 

Somehow that makes you hopeful for the future?

·      We’ve talked a lot about people being passively accepting of the future.

·      We used to worship at the Temple of Apple.

·      I was in a roomful of people yesterday and they were all talking about ditching their iPhones for Huaweis and other Androids.

·      We’re starting to question things again.

·      We’re starting to look for our own answers.

·      Yes, we might all fall into the conspiracy theory hole.

·      But I’m optimistic for 2019. 

·      I’m fairly sure the economy is going to be awful, but I think we’re going to see a huge rise in grassroots movements and activism.

·      I’m really interested to see how those movements and action will impact the world of technology in 2019.