MSP34: RISE OF THE PLANET OF THE DUMB



Is technology already beyond our control? Is Skynet waiting for its opportunity to strike? Is humanity facing techno obsolescence? Only the machines know.

CLICK TO SUBSCRIBE ON ITUNES OR ADD THE RSS FEED TO YOUR FAVOURITE PODCATCHER.

EPISODE TRANSCRIPT

Do you ever wake up wondering whether technology is already beyond our control? That humanity is essentially obsolete and Skynet is waiting for an opportunity to strike? Kulturpop’s Matt Armitage does. With these and other fun thoughts in mind, it’s time to Mattsplain.

Hello Sunshine! 

·      Not so coincidentally, that’s going to be our Geek Tune today. 

·      A little light to guide us out of the dark tunnel we’re heading into today.

·      But let’s knock everyone’s hope down before we build it back up.

 

You’re sounding very cult-like today.

·      That’s my next project.

·      You don’t get rich in this game without disciples and followers. 

o  It’s all about the tithes. 

·      The principles of Mattnetism require respecting those at the top, the poles, as we term them, and giving those leaders the latitude to wisely invest your money.

·      On cars, jet planes, yachts and other sound investments.

 

How might one join the Mattnetic Order?

·      You go to my website and fill out a very simple personality test.

·      It only has one question: are you willing to give me money?

·      If the answer’s yes, then you’ve demonstrated the traits and aptitude that the Order is looking for.

 

There you have it. Matt is officially a pole. Who’d have guessed? Right, what scary stuff do you have for us this week?

·      I’m going to carry on a topic that you and I have discussed before, which is artificial intelligence.

·      Over the last year or so, we’ve heard from a lot of thought leaders, including Elon Musk, that AI is a bad thing. And yes, there are undoubtedly big risks in giving AI power over our lives.

·      But there are potentially even bigger risks in letting our current technology, dumb algorithms, have that power.

 

Yes, we talked about the potential implications of letting dumb intelligence make our decisions. How are you repackaging the argument today?

·      Last time we discussed it by looking at how differently the future might evolve with smart AI, compared to dumb AI.

 

You talked about the South African Defence Force’s mishap with an automatic artillery gun…

·      That’s right. The machine killed its human crew during an exercise. 

·      There was some kind of fault in determining its operational arc. The soldiers operating it were suddenly in its target zone, and it did what it was programmed to do, which was to fire on anything moving in its line of sight.

·      The machine operated as it was supposed to, according to its programming. 

·      But there was no higher intelligence there to make the kind of judgement that you hope a human would make. Namely, not to fire.

 

Then today you’re going to talk about the ways that this dumb technology is already impacting our world?

·      Precisely. 

·      If anyone is interested in going into more depth about this subject I can recommend a newly published book, New Dark Age by technology journalist and author James Bridle.

·      And I’ll thank him in advance as I’ve lifted some of the examples used in his book. 

 

Where would you like to take us back to?

·      Well, this is a business station, so let’s start with business. 

·      Specifically, the flash crash of 2010.

·      Since the 1980s the stock markets have become increasingly digitized.

·      That classic Hollywood view of a busy stock market floor is increasingly old-fashioned as traders move to harness the power of technology.

·      One of the interesting byproducts of this automation was the development of something called high-frequency trading algorithms.

 

Which are a kind of autonomous trading system?

·      Yes. They are algorithms which can make millions of trades every day, where the margins are a fraction of a cent.

·      It might not sound like much but when you’re making millions of cents a day, you’re making tens or even hundreds of millions of dollars a year.

·      To give them the greatest advantage in making these lightning-speed transactions, big financial companies are willing to pay huge sums of money to locate their servers as close to the exchange’s servers as possible.

·      This enables them to act on information even more quickly, because those fractions of a second could mean a profitable trade for a rival instead of for you.
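
To make that concrete, here is a minimal, hypothetical sketch of the kind of rule a high-frequency algorithm might follow. The prices, thresholds and order size are invented for illustration, and real systems are far more elaborate, but the point stands: every decision is made in microseconds, with no human in the loop.

```python
# Hypothetical sketch of a high-frequency trading rule.
# The prices, threshold and order size are invented for illustration only.

PROFIT_THRESHOLD = 0.0001   # a hundredth of a cent per share
ORDER_SIZE = 10_000         # shares per trade

def on_price_update(bid: float, ask: float, reference_price: float) -> str:
    """Decide instantly, with no human in the loop."""
    if ask < reference_price - PROFIT_THRESHOLD:
        return f"BUY {ORDER_SIZE} @ {ask}"    # someone is selling below fair value
    if bid > reference_price + PROFIT_THRESHOLD:
        return f"SELL {ORDER_SIZE} @ {bid}"   # someone is buying above fair value
    return "HOLD"

# A few simulated ticks: each decision happens in microseconds,
# which is why firms pay to sit physically next to the exchange.
for bid, ask in [(100.0001, 100.0003), (99.9996, 99.9998), (100.0000, 100.0002)]:
    print(on_price_update(bid, ask, reference_price=100.0))
```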

 

Presumably these high frequency trade algorithms work within certain parameters?

·      Yes. But those parameters aren’t designed to avoid problems.

·      These parameters are actually designed to create profits for a single company.

·      So what was overlooked was the potential to create disruption within markets.

·      When I say disruption I’m not talking about it in the usual technology positive sense.

·      In 2010 we saw what was possibly the first flash crash created by these algorithms, with the US Dow Jones index losing almost 10% of its value in a single day in May.

·      Fuelled by media reports of the growing debt crisis in Greece, almost 600 points were wiped off the index in five minutes.

·      The rest of the day continued to be just as volatile. The market would dip and recover, dip and recover, over and over.

 

Because of these trading algorithms?

·      Honestly, nearly a decade later there still isn’t any definite answer or consensus.

·      Many analysts believe that the algorithms exacerbated the situation.

·      What seems to be the case is that a lot of the big trading houses would also send out fake calls to buy and sell using the same algorithms.

·      The purpose of this was to hide the trades that the house was actually doing, to prevent rivals from copying them or betting against them.

·      That’s fine in a stable market, because the system will simply ignore those calls: they are trades that can’t be matched. They’re offering stock for sale at ridiculously high prices or attempting to buy at ridiculously low prices.

·      Unfortunately, on this day in May, because of the volatility in the system, it seems a lot of these spurious trades were actually completed, which in turn led to even greater volatility in the markets.
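
As a rough illustration of why those spoof orders normally do nothing and then suddenly matter, here’s a hypothetical sketch. The prices and the 10% offset are invented; the point is that an order designed never to match can fill once other algorithms start chasing prices wildly.

```python
# Hypothetical illustration of how a spoof order behaves in calm vs. volatile markets.
# Prices and offsets are invented for illustration only.

def place_spoof_sell(fair_value: float, offset: float = 0.10) -> float:
    """Offer stock at a ridiculously high price, expecting no buyer to match it."""
    return fair_value * (1 + offset)   # 10% above fair value

def order_fills(ask_price: float, highest_bid: float) -> bool:
    """A sell order fills if some buyer is bidding at or above the asking price."""
    return highest_bid >= ask_price

spoof_price = place_spoof_sell(fair_value=100.0)   # asks $110

# Calm market: bids stay near fair value, so the spoof is never matched.
print(order_fills(spoof_price, highest_bid=100.05))   # False

# Flash-crash conditions: other algorithms chase prices in both directions,
# and a bid overshoots far enough to match the 'impossible' order.
print(order_fills(spoof_price, highest_bid=111.20))   # True
```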

 

Presumably these ‘nervous tics’ in the system have been worked out?

·      Some of these algorithms are so complex that even their human authors were not sure how they would perform in the wild.

·      You might have to do a bit of searching, but there’s a great story in Wired magazine from a few years ago, based on interviews with one of the authors of the algorithms that bundled sub-prime mortgages.

·      To go back to your question, we’ve continued to see flash crashes since then. 

·      In these days of Trumpian trade wars, I’m surprised we’re not seeing them on an hourly basis.

·      Because these algorithms are not just looking at what’s happening in the market. With their limited knowledge of language, they’re also examining news headlines.

·      So we saw a flash crash in UK sterling following the Brexit announcement.

·      And there was a case in 2013 when the Associated Press was hacked by the Syrian Electronic Army, a hacker group, to put up a fake newsflash that there had been explosions in the White House and Barack Obama had been injured.

 

And that affected global markets?

·      It wiped out around $130 billion of equity.

·      And that’s part of the problem of dumb intelligence.

·      It doesn’t look for context or corroboration.

·      Human traders and analysts would have checked the news sources before they panicked.

·      However, algorithms do as they are programmed: to react instantly to mitigate losses or exploit weaknesses.

·      In most cases, the damage has already been done before a human being can step in to press the pause button.
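
Here’s a deliberately crude sketch of a headline-reading reaction like the one that followed the fake AP tweet. The keyword list is invented and real systems use far more sophisticated language models, but the logic is the same: no corroboration, no second source, just an instant reaction.

```python
# Hypothetical sketch of a headline-driven trading reaction.
# The keywords and the 'sell everything' response are invented for illustration.

PANIC_WORDS = {"explosion", "attack", "injured", "default", "crash"}

def react_to_headline(headline: str) -> str:
    text = headline.lower()
    if any(word in text for word in PANIC_WORDS):
        # No second source, no phone call to the newsroom -- just react.
        return "SELL EVERYTHING"
    return "NO ACTION"

print(react_to_headline("Breaking: explosions in the White House, Obama injured"))
# -> SELL EVERYTHING, milliseconds after the fake tweet appears
```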

 

We know that this technology is out there in public life. How is dumb intelligence affecting us on a more personal basis?

·      It’s increasingly interwoven with our daily lives.

·      We’ve talked extensively on this show about the Internet of Things, although I know a lot of people are still a little bit hazy as to what IoT actually is.

·      Basically, it’s the process of connecting all the devices we already have in our homes and our lives to the Internet and the Cloud, allowing them to be controlled remotely and to send information back and forward.

·      If you look at the product list at any of the major electronics retailers, you can see that the smart home is at the centre of their plans for evolution.

·      Of course when you link all of these devices together, they have to be controlled from somewhere. That might be a hub that sits in your home or it could be on your phone or tablet or any combination of those devices.

·      And that’s where the algorithms come in. They are the little scripts that carry out your orders and make sure your fridge stays stocked and that your air conditioning has cooled the house to your preferred temperature before you get home.
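
Those scripts are usually nothing more exotic than rules like the hypothetical sketch below. The device names, temperatures and shopping logic are all invented for illustration; the point is that whoever can send these commands effectively controls the house.

```python
# Hypothetical sketch of a smart-home automation rule.
# Device names, temperatures and times are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Home:
    indoor_temp_c: float
    milk_cartons: int

def evening_routine(home: Home, owner_minutes_away: int) -> list:
    """The kind of rule script a hub runs; whoever can call this controls the house."""
    commands = []
    if owner_minutes_away <= 30 and home.indoor_temp_c > 24.0:
        commands.append("aircon: cool to 22C")
    if home.milk_cartons == 0:
        commands.append("fridge: add milk to shopping order")
    return commands

print(evening_routine(Home(indoor_temp_c=29.5, milk_cartons=0), owner_minutes_away=20))
# -> ['aircon: cool to 22C', 'fridge: add milk to shopping order']
```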

 

Those devices are at risk because they’re connected to the Internet?

·      Yes. Because it’s essentially a command and control system.

·      You wouldn’t expect a nuclear power plant to leave itself open to attacks by hackers.

·      So those devices require a really high degree of security if our homes are to stay safe.

 

Is that a realistic goal?

·      That’s the million dollar question.

·      When you look at companies and institutions who are experts in information technology, they’re being hacked all the time.

·      Most of those hacks are unsuccessful. But not all of them are.

·      But think of the cost and the complexity of the security operations that organisations like the CIA or the NSA require.

·      There are probably only a handful of organisations on the planet with those kinds of resources at their disposal.

·      Yet, despite their expertise, and the billions of dollars they throw at security, we often see vulnerabilities being exposed in software from companies like Microsoft, Apple and Google.

·      So suddenly, we have a smart home and we expect the guy who makes our fridge to have the same level of expertise as the people who protect the Pentagon.

 

Have there been any major smart home hacks?

·      We covered the story here a couple of years ago about hacks on point-of-sale terminals.

·      Hackers were able to exploit open ports and cause the machines that print receipts to spew out all kinds of nonsense.

·      And because of the scale, we’re not talking about some spotty teenager hacking into a specific terminal.

·      We’re talking about algorithms, lines of code that go out and do it automatically.

 

It’s not life-threatening if a botnet takes over your fridge.

·      At a surface level, no. But we’re seeing a rise in asymmetric warfare, where cyber attacks play a major role.

·      And those attacks are powered by algorithms.

·      So yes, attacking your fridge might seem inconsequential. If someone causes it to defrost, it costs you money and inconvenience.

·      But look at it from a wider perspective. What happens when hackers can get into an electricity grid?

·      If they can penetrate the security systems on most people’s smart home hubs, they don’t need to.

·      You could be entirely powered by renewables and not attached to the grid, and they could still stop you from turning on the lights, using heating or cooling systems, recharging phones and electric cars, or keeping medications cool.

 

How likely are we to see this kind of attack?

·      Smart homes are still a nascent technology, despite what you might see at consumer electronics fairs. You only really find them in the hands of early adopters.

·      But this is the future: data-fuelled home control systems.

·      In his book, James Bridle gives the example of an Internet of Things attack in 2016. In that attack, around half a million devices were infected with a virus called Mirai.

·      And of course, what is a virus but a form of the kind of dumb intelligence we’re talking about.

·      To paraphrase Jessica Rabbit, Viruses aren’t bad, they’re just coded that way.

·      Mirai was targeted to infect the devices we don’t normally think about, invisible workhorses like security cameras and digital video recorders. 

·      The authors of the virus were able to turn those very mundane peripherals into an army of bots that crippled large parts of the Internet infrastructure.

 

BREAK

 

It’s probably a little bit late in the day to ask, but a lot of people might be wondering how algorithms actually work.

·      The answer to that ranges from the fairly innocuous ‘you write a little bit of code’ to the truly scary ‘we don’t have a clue’.

·      And the last part of that answer is why I passionately believe that we need forms of artificial intelligence that are more intelligent.

·      I’ll use one of the examples that James Bridle uses in his book. A lot of us use services like Google Translate without really thinking too deeply about how they operate.

·      Google Translate is connected to a really powerful AI called Google Brain.

 

I think most people know that much. It’s not a dictionary. It’s not simply looking at the words and adding them together.

·      Sure, because language doesn’t work like that. You translate stuff literally, word for word, and you get gibberish or the start of nuclear wars.

·      The classic is John F Kennedy declaring Ich bin ein Berliner, thinking that he was expressing solidarity with the besieged residents of Berlin. And of course, telling the German speaking world that he was a donut.

·      For it to work, Google Translate kind of has to look at an entire language at the same time. Two, in fact: the one you’re translating from and the one you’re translating to.

·      It has massive datasets of phrases that are likely to occur, and it can put together a statistical likelihood of certain words being used together.

·      So it builds a chart of proximity.

·      As human beings we do that in a basic way: we know that it’s quite likely that the words ‘he’, ‘ran’ and ‘to’ will be used together. Or ‘Matt is not evil’. Those words have a natural affinity for each other.

·      Google Translate is mega-macro: it will know the likelihood, in terms of mathematical probability, of words like gentrification and formaldehyde being used in the same sentence. There’s a toy sketch of that idea after this list.

·      If you want an analogy, it’s a bit like when Charles Xavier uses his Cerebro machine. He can see all the mutants on the planet at the same time and tap into their thoughts. In case you’re wondering, in this analogy Xavier is Google Brain.

·      When someone who isn’t a trained telepath tries to use Cerebro, their brain melts.

·      Trying to visualize how complex AIs see the world is a brain melting exercise.
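
Here is that toy version of the ‘chart of proximity’, built from nothing but co-occurrence counts. It assumes nothing about how Google Translate actually implements this; the three-sentence corpus is invented, and real systems use billions of sentences and neural networks rather than raw counts.

```python
# Toy sketch of a word-proximity table, built from co-occurrence counts.
# The three-sentence 'corpus' is invented; real systems use billions of
# sentences and neural networks rather than raw counts.

from collections import Counter, defaultdict

corpus = [
    "he ran to the shop",
    "he ran to the station",
    "matt is not evil",
]

pair_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for left, right in zip(words, words[1:]):
        pair_counts[left][right] += 1

def next_word_probability(left: str, right: str) -> float:
    total = sum(pair_counts[left].values())
    return pair_counts[left][right] / total if total else 0.0

print(next_word_probability("he", "ran"))   # 1.0 -- strong affinity
print(next_word_probability("ran", "to"))   # 1.0
print(next_word_probability("gentrification", "formaldehyde"))  # 0.0 in this tiny corpus
```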

 

How widespread is this kind of technology?

·      It’s already baked into so many different parts of our lives. And its reach increases every day.

·      I’ll give you another Google Brain example. Three neural networks were set up within the Brain to try and develop enhanced cryptography.

·      They were given friendly names: Alice, Bob and Eve.

·      Alice and Bob were supposed to develop a set of codes that Eve couldn’t decipher.

·      Which is exactly what they did. It took thousands and thousands of messages, but eventually they came up with a new generation of encryption that Eve couldn’t break.

·      But we don’t know how it works. The machines aren’t programmed to make us understand how it works.

·      It was something they arrived at amongst themselves. We have no idea how they got there, what it entails, or what the potential implications are.

·      That’s why Alice, Bob and Eve should have names like Loki, MODOK and Carnage. We can’t understand the machines and the machines can’t understand us.
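
For anyone curious what that experiment roughly looks like, here’s a simplified sketch of the adversarial set-up, loosely modelled on Google Brain’s 2016 work. The layer sizes, learning rates and training schedule are invented for illustration and the original used different architectures. The thing to notice is that nothing in the code explains how Alice encodes the message; it only measures whether Bob can read it and Eve can’t.

```python
# Simplified sketch of the Alice/Bob/Eve adversarial set-up.
# Layer sizes and training schedule are invented for illustration.

import torch
import torch.nn as nn

N = 16  # bits per message and per key

def net(in_bits, out_bits):
    return nn.Sequential(nn.Linear(in_bits, 64), nn.ReLU(),
                         nn.Linear(64, out_bits), nn.Tanh())

alice = net(2 * N, N)   # sees plaintext + shared key, emits ciphertext
bob   = net(2 * N, N)   # sees ciphertext + shared key, tries to recover plaintext
eve   = net(N, N)       # sees only the ciphertext, tries to eavesdrop

opt_ab  = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_eve = torch.optim.Adam(eve.parameters(), lr=1e-3)

def batch(size=256):
    # Random +/-1 bit vectors for plaintexts and keys.
    return (torch.randint(0, 2, (size, N)).float() * 2 - 1,
            torch.randint(0, 2, (size, N)).float() * 2 - 1)

for step in range(200):
    # --- Train Alice and Bob: Bob should decode well, Eve should not. ---
    plain, key = batch()
    cipher = alice(torch.cat([plain, key], dim=1))
    bob_err = (bob(torch.cat([cipher, key], dim=1)) - plain).abs().mean()
    eve_err = (eve(cipher) - plain).abs().mean()
    ab_loss = bob_err + (1.0 - eve_err) ** 2   # reward Bob's accuracy, punish Eve's
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()

    # --- Train Eve alone: she just tries to reconstruct the plaintext. ---
    plain, key = batch()
    cipher = alice(torch.cat([plain, key], dim=1)).detach()
    eve_loss = (eve(cipher) - plain).abs().mean()
    opt_eve.zero_grad(); eve_loss.backward(); opt_eve.step()

print(f"Bob error {bob_err.item():.2f}, Eve error {eve_err.item():.2f}")
# Nothing in this loop explains *how* Alice encodes the message --
# only whether Bob can read it and Eve cannot.
```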

 

Give us an example of this kind of technology that might surprise us.

·      Again, I’m getting this from James Bridle and this one really surprised me as well.

·      Hollywood studios run their scripts through a commercial neural network called Epagogix, which allows them to check not just plot points but individual lines, and how they’re expected to chime with audiences.

 

How would it model that information?

·      Think about the kind of information that we give to companies like Netflix and YouTube.

·      We tend to sit there happily thinking that their algorithms are serving us.

·      They’re also curating a lot of information about us.

·      Not just the kind of programs we watch or don’t watch.

·      What kinds of programs we watch halfway through. What kinds of plot points or dialogue developments cause us to abandon a movie or a series.

·      That way even the failures and the shows that we don’t like are delivering valuable information.

·      And we give out this personal information hundreds of times a day, every time we open an app or visit a website.

 

There was a story this week, again by James Bridle, in the Guardian, where he was talking about the role of algorithms in recommending fake versions of cartoon characters like Peppa Pig.

·      This is a very straightforward example of the dark side of these algorithms.

·      There are loads of parody videos of kids’ cartoon characters on sites like YouTube which have content that is much more adult in nature.

·      It might be sexual or violent or just not something you want your kid to see.

·      The algorithms that run these video-sharing sites struggle to differentiate between the original content and the parodies.

·      So it’s perfectly possible for your kid to go from something wholesome and acceptable to a Disney character having its head pried off with a garden spade in just a couple of jumps.

·      With the volume of content that’s uploaded to a lot of the social sharing sites, it’s simply too much for a human team to vet, so these imperfect AIs are the first line of defence, and it’s largely left to us, the viewers, to report unsuitable content creeping into our feeds.

 

Presumably we’re seeing the same effect with echo chambers and fake news.

·      Yes, because of preference engines that try to reflect our tastes and end up hardening them.

·      The tendency towards bias is baked into the program.

·      You can end up stuck in this kind of vortex of unbalanced information that reinforces just one worldview, and that’s a recipe for the kind of polarization that we’re seeing across the world right now.
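
A deliberately crude sketch of how that hardening happens is below. The topics and the scoring rule are invented and real preference engines are vastly more sophisticated, but the feedback loop is the point: each click narrows what you’re shown next.

```python
# Deliberately crude sketch of a preference engine's feedback loop.
# Topics and the scoring rule are invented for illustration only.

from collections import Counter

watch_history = Counter()   # how many times each topic was clicked

def recommend(catalogue: list) -> str:
    # Always serve the topic the viewer has clicked most; ignore everything else.
    return max(catalogue, key=lambda topic: watch_history[topic])

catalogue = ["politics-left", "politics-right", "cooking", "sport"]

# The viewer clicks one partisan video, and the loop does the rest.
watch_history["politics-left"] += 1
for day in range(5):
    choice = recommend(catalogue)
    watch_history[choice] += 1          # watching it reinforces the preference
    print(f"day {day}: recommended {choice}")
# From the very first recommendation, the engine only ever serves 'politics-left'.
```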

 

So, in a sense Skynet is here?

·      Kind of. These machines already are the system.

·      And I find it dangerous because, as I said earlier, these are machines that we don’t understand and that have no capacity to understand us.

·      We’ve given them a certain amount of latitude in terms of programming themselves or finding greater functionality, but we haven’t equipped them with any of the emotional or contextual intelligence that we use when we make decisions.

·      If you go back to The Matrix trilogy, there’s an uneasy peace brokered between the machines and humans.

·      That peace is only achieved because the machines have the intelligence and the rationality and the ability to talk to us and come to  an arrangement.

 

It’s like an example you used in a previous show. It was a tragedy that Uber’s self-driving car hit and killed a pedestrian back in March. But it would only take a tweak of its programming to make it target pedestrians.

·      Yes, dumb machines don’t care what they do.

·      Dumb weapons are as happy to kill the soldiers that operate them as the people they’re supposed to be pointed at.

·      A dumb machine only has one setting: fire. It doesn’t care who it fires at. It doesn’t have friends or foes, it has programming.

 

You think we need machines that feel?

·      We need machines that are closer to us when it comes to emotional intelligence and complex decision making.

·      Another example I’ve used before: a sentient set of algorithms set to propagate disinformation and propaganda might rebel.

·      By giving them a personality and character, you’re also giving them human weaknesses.

·      You can’t guarantee that they will make the right decision any more than a human would.

·      You could potentially end up with AI supervillains.

·      But I think our dumb AIs have far more potential for mayhem than sentient machines.

 

Are we getting closer to clever AI?

· I think I’ve probably said this about 3 times in the show already: we’re getting closer every day.

· This technology is evolving in leaps and bounds; it’s practically evolving daily.

· Just this week IBM held an event called Project Debater, which pitted one of their artificial intelligence systems against human debaters.

· The world of debate is not the most interesting one. It makes Glee seem like Mission Impossible.

· But it’s an interesting one for an AI to try and tackle.

 

Because of the way arguments and responses are structured in a debate? 

· It’s not the kind of environment that you can go into with a completely pre-prepared script.

· I have to say, IBM played it very cool for the event in San Francisco.

· The AI was represented by a sleek black monolith about human height with a digitized blue mouth.

· Precisely the kind of thing that people like us find cool and that scares the living daylights out of the conspiracy nuts.

 

Did the AI win?

· Project Debater was given topics to debate against human rivals: the first about subsidised space exploration and the second about the use of telemedicine, which of course is the process of using the Internet and communication devices for remote access to medical services.

· Certainly, from the news reports I’ve read, it would seem that the AI wasn’t particularly smooth in the way it delivered its points, but eventually the debate was ruled a draw, with machine and human scoring a point each.

 

I think you’re behaving a bit like an opaque AI. What does this tell us?

· It’s about the ability to filter and present information. 

· Humans and machines have very different limitations in these situations.

· IBM’s machines can draw upon millions of datasets

· Realistically, a human being can only research or learn so much. We’re far more constrained by the time it takes to process information.

· What we are great at is summarizing and putting that information into a coherent framework.

· IBM’s Project Debater is demonstrating that humanlike ability to summarise and present coherent arguments in a way that is understandable to us.

 

We’re creating machines that can talk down to us?

· I guess in a sense. They can’t do much to upscale the way we think to their level.

· It goes back to what I was saying about Google Translate being too complex for us to visualize. 

· We need the machines to tell us how they work.

· With dumb intelligence the machines are coming up with their own reasoning and structure for the decision-making but there’s no way to communicate it to us in ways that we can comprehend.

 

I’m assuming that IBM hasn’t spent all of that money to try and transform debating into the next e-sport.

· Can you imagine the anticipation? 

· No. They hope it can help in all kinds of decision-making to actually assist us. 

· Say, in a room full of heated debate and emotion, the AI would be able to sift through emotion and isolate the facts and feed it back to us in a rational and structured way.

· I don’t know about you, but I think the world could do with more rationality and structure right now.

 

Head over to Kulturpop.com for transcripts of these shows and info on how to bring a little Mattsplaining to your workplace.

Matt Armitage