Original Images: Pixabay. Glitched by Kulturpop.



Wipe your feet and mind your head as we enter a universe where mindless code decides whether we live or die. A dystopia on your doorstep? It’s time to Mattsplain.




About three months ago, Kulturpop’s Matt Armitage claimed that we will soon be living in the Age of Amazon. A world ruled by mega-companies who control every facet of global trade and our economic activity. Has he had a change of heart? Has the algorithm running his thought process short-circuited? To find out, we’ll have to let him Mattsplain.


Now, we’re not actually in the studio together today.

·      No. Your new studios have this really amazing facial recognition software.

·      And they won’t let me in. I’ve had this problem for years, but for some reason facial recognition software identifies me as a basket of kittens. 

·      So, I’m recording my side of the show from Kulturpop’s bunker inside a dormant volcano.


So, you’ve changed your mind about Amazon and the mega corporations?

·      Would you be annoyed if I started yet another episode by saying yes and no?

·      I think it’s too early to say that I’m wrong about Amazon, just as it’s too early to say that I’m right.

·      What I’d like to do today is introduce new variables into the argument, and listeners can draw their own conclusions from there.

·      You mentioned algorithms in the introduction, and that’s really what I’d like to talk about today.

·      The intersection of our digital world, these massive IT-powered corporate colossi, and the growing, uncertain power of code.


This is something that you’ve come back to quite often this year. The idea that we’re under threat from machine intelligence.

·      Yes. And thank you for not calling it artificial intelligence.

·      It’s probably quite important to make a distinction here.

·      We often use terms like artificial intelligence, machine intelligence, machine learning, deep learning and others quite interchangeably.

·      I’m not going to go into that side of the argument too much today.

·      If you’re a bit confused about the terminology, there’s an article by a guy called Calum McClelland called The Difference Between Artificial Intelligence, Machine Learning, and Deep Learning.

·      It’s a quick read and a great primer.

·      We’ll post the link with the podcast.


Why is it important that I didn’t call it artificial intelligence?

·      Because we’ve had various people calling artificial intelligence a threat over the last couple of years.

·      Including one of our old friends, the man who likes electric cars and space rockets and makes weird comments about cave rescuers.

·      If you’ve been following this show this year,  you’ll know that it’s not artificial intelligence in itself that concerns me.

·      It’s the AI or machine intelligence or deep learning – whatever you want to call it –

·      It’s the fact that the systems we have now are not intelligent enough that worries me.

·      Systems that we are releasing into the world that have an incredible ability to affect our lives but have no ability to reason or to question.


A bit like a robot army in one of those old 1950s B-movies?

·      Yes. As I’ve mentioned on the show already, I’ve been reading a lot of Philip K Dick’s short fiction this year, which is absolutely breathtaking in its scope.

·      You can see that a lot of it is him indulging in thought experiments.

·      And the danger posed by mindless automatons recurs fairly frequently.

·      I’m sure he would be absolutely fascinated to see the world we live in today.

·      And I’m wondering what he would think of the algorithms that are increasingly operating in the background of our lives.


We’ve covered quite a lot of this ground, this year. What are you bringing that’s new to the table?

·      Just some additional insight, I guess.

·      I love the way that these subjects are always evolving, as is our knowledge of them. 

·      I’m not a machine learning expert. But I do enjoy adding this knowledge to the evolving equation that is the world we live in.

·      That’s the thing about trying to imagine what the world of tomorrow looks like. 

·      It’s like a reverse butterfly effect.

·      Every time you think you’ve got a handle on it, you learn something that distorts the picture,

·      or there’s another technological or social revolution that upends your mental image.

·      It’s constantly fascinating…


If you’re the one watching it, rather than being at the receiving end…

·      That’s a fair point.

·      It’s like when people ask me if I’m worried about machines replacing all our jobs and I say no.

·      It’s not because I think I’ve got the resources to survive or some amazing latent talent for exotic dancing that will make me immune.

·      I know my way of life is at risk as much as anyone else’s. 

·      But I think my curiosity overwhelms my fear. 


Then why bother to change things? Why not sit back and let it happen?

·      Because that’s not the nature of the way we live.

·      Human beings aren’t very good at being passive.

·      I’m rereading Terry Pratchett’s Thud! at the moment, which is all about two races of people who periodically get together to refight ancient battles in a thoroughly deadly spirit.

·      We can only be passive for so long. We like to tinker and change things.

·      Sometimes we do a good job and sometimes we don’t.

·      And machine and artificial intelligences are part of that evolving process. 


Can we change things? Can we influence the way the world is turning?

·      I guess that’s a big part of the reason for today’s show.

·      We talk about feeling impotent or powerless in the face of big corporations.

·      And I’ve used this show many many times to say that if we take a more active role we can influence these companies.

·      We can make Facebook and Twitter listen to us.

·      Simply by either threatening to not use or physically not using their services.


And you’re beginning to doubt that?

·      No. I still think we can make the companies move.

·      It’s not clear how much power we would have to do that in a world controlled by a giant mega-company like Amazon, but for now we still have power and agency.


What’s changed?

·      I think the growing realisation that the companies themselves are not really aware of what they’re doing when it comes to algorithms and machine intelligence.

·      That they’re potentially unleashing forces that can only be controlled if we decide to switch off all the systems that connect us.

·      We had our first inkling of the power of these systems to disrupt our world during the financial crash of the noughties and early teens, as the finance industry started to roll out high-frequency trading software.

·      Ten years on, economists and analysts are still arguing about the causes and the role that technology played in the economic upheaval.


You think that that confusion is an indication of the power that code has?

·      Yes. Because we’re now at the first stage of our technological evolution where we’re designing systems that we don’t fully understand.

·      If you think about a typical tech-fuelled disaster – maybe an air crash or a failure at a nuclear power plant.

·      There are systems in place to prevent those disasters.

·      Each circuit or system has its own sensors and detectors, fail-safes and shut offs.
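That chain of sensors and shut-offs can be sketched in a few lines of Python. Everything here – the subsystem names, readings and limits – is invented purely for illustration; real plant-control software is vastly more involved.

```python
# A hedged sketch of the layered fail-safe idea: each subsystem has its
# own sensor check and shut-off, so a fault trips the layer that owns it.
class Subsystem:
    def __init__(self, name, reading, limit):
        self.name, self.reading, self.limit = name, reading, limit

    def check(self):
        # Sensor and fail-safe in one: is the reading within its limit?
        return self.reading <= self.limit

def run_safety_chain(subsystems):
    for s in subsystems:
        if not s.check():
            return f"shutdown: {s.name} out of range"
    return "all systems nominal"

plant = [Subsystem("coolant temp", 85, 100),
         Subsystem("core pressure", 130, 120)]
print(run_safety_chain(plant))
```

The point being made above is that every layer here was designed by a person, so when the pressure check trips we can read the code and see exactly why.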


Yet we still have the occasional meltdown?

·      No system is foolproof.

·      But at least those systems are designed by us.

·      If something goes wrong we may not be able to prevent the disaster, but we can usually figure out what went wrong and try to prevent it from happening in future.


And you think we’ve passed that point with the machine intelligence we have today?

·      Very much so.

·      Like with the last financial crash, when something goes wrong, we don’t really have a way to understand what the algorithms did and what went wrong.

·      We like to think of software programs as being very straightforward and black-and-white.

·      Ultimately, for a lot of us, it helps to have that image of switch on or switch off.


Yes, we like things to be boiled down to a few points. Is there anything wrong with that?

·      No. But we have to accept that the overview isn’t the picture. It’s the thumbnail.

·      It’s the link that gets you started. The real image has far more depth and clarity.

·      Like the article I mentioned earlier. We like to have things boiled down to a few points.

·      That article is a four or five minute read. 

·      It can give you the basics, and it certainly does so a lot better than I could.

·      At the same time, this is an area so complex that thousands of the world’s finest minds spend their entire working lives exploring it.

·      And even they can’t tell you with any certainty what happened during that financial crash.


Where does that leave us?

·      We’ll get into that more after the break.

·      But before that, think back to all the breadcrumbs.

·      Like Mark Zuckerberg announcing that he had no idea that Facebook’s algorithms could be used to hoodwink a country or sway an election.

·      We know that those algorithms have behavioural characteristics.


You believe him?

·      On the whole, yes.

·      We think of software and programs and algorithms as being linear and limited.

·      But a lot of the systems we use today are designed to adapt to changing conditions.

·      They’re dynamic. They shift with our taste. They analyse news and cultural events.

·      But they don’t interpret them as we do. They’re designing a machine zeitgeist and not a human one.


Surely the designers know the parameters they’re programming?

·      Right now we’re still in control.

·      We can still look at the code – even though it’s a mammoth task. 

·      Facebook has been able to examine its code and make changes that they claim will protect us.

·      Where we have even greater problems is when these competing bits of code interact, such as in the financial system.

·      Those codes are designed to create profits for their authors not to create stability.

·      And we’re on the cusp of the next step already. Where far more powerful and freely operating algorithms are about to be unleashed.

·      If you go back to the nuclear power plant analogy – when the meltdown happens, there’s no way for us to figure out what went wrong.

·      The machines are operating with a language and a code of conduct that we have no way to comprehend.


After the Break, we enter the world of weaponised code.




Before the break we were talking about some of the examples of algorithms behaving badly. I think we touched on the financial crisis and the high-frequency trading algorithms you talked about with Richard a few shows back…

·      That’s right. Before the break I was saying that companies like Google and Facebook are still in control of their algorithms.

·      The HFT algorithms which are still a major part of the financial and trading sector are an early indicator of where this technology is headed.

·      These are pieces of code that are specifically designed to thrive on volatility.

·      They are designed to go head-to-head with other algorithms and try to outwit them.

·      So their role is to try and distort reality and exploit any confusion they create.

·      That’s how they’re designed.
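The head-to-head dynamic described above can be caricatured in a few lines of Python. This is a toy, not real trading code – the prices and the undercutting rule are invented – but it shows how two adaptive algorithms reacting to each other can spiral away from any sensible value.

```python
import random

# Toy sketch of two adversarial quoting bots: each one undercuts the
# other's last price to win the next trade, creating a feedback loop.
def adversarial_quotes(rounds=10, seed=0):
    rng = random.Random(seed)
    b_price = 100.0                      # B's opening quote
    history = []
    for _ in range(rounds):
        # Bot A undercuts B slightly to capture the next trade
        a_price = b_price - rng.uniform(0.01, 0.05)
        # Bot B reacts to A's move with its own undercut
        b_price = a_price - rng.uniform(0.01, 0.05)
        history.append((round(a_price, 2), round(b_price, 2)))
    return history

print(adversarial_quotes())
```

Each bot is behaving exactly as designed, yet the joint behaviour is a one-way ratchet – a miniature of the flash-crash dynamics the episode refers to.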


And because they’re adaptive we don’t know where they’re headed?

·      And neither do their authors.

·      And we’re suddenly finding that there’s a gap between the code that we can create and control, and the code we need to control new and complex systems.


That might need a bit more explaining.

·      We are in a really very strange place right now.

·      Take the fatal autonomous car accident earlier this year which we’ve talked about many times.

·      The lady who was hit was probably killed because the algorithm couldn’t correctly identify her and predict her likely behaviour.

·      Ideally, the human driver should have then taken control and prevented the accident.

·      The software running the car simply wasn’t clever enough. 


Which brings us to where?

·      There’s a really cool article in the Guardian, written by the technology journalist and author Andrew Smith, called Franken-algorithms.

·      We’ll post the link here. Definitely one of the most interesting reads I’ve come across this year. Well worth half an hour of your time.

·      According to the article, we are about to enter a demimonde of powerful and unpredictable code,

·      code that is capable of determining its own rules and laws but is dumb by any measure of human intelligence.

·      Neil Johnson, a physicist at the University of Miami who specialises in complexity, especially with regard to volatility in the financial markets, has identified a new breed of programs that he categorises as being like genetic algorithms.

·      These have the ability to rewrite their own code and Johnson claims that they are already in action on sites like Facebook.
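To make the "genetic algorithm" idea concrete, here is a minimal sketch of the basic loop: vary, score, keep the fittest, repeat. The "rules" here are just numbers and the target is invented; real systems evolve far richer structures, but the shape of the process is the same.

```python
import random

# Minimal genetic-algorithm sketch (illustrative only): candidate rules
# are mutated and selected toward a hidden target behaviour.
def evolve(target=42.0, pop_size=20, generations=50, seed=1):
    rng = random.Random(seed)
    population = [rng.uniform(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Score each candidate by closeness to the target behaviour
        scored = sorted(population, key=lambda x: abs(x - target))
        survivors = scored[: pop_size // 2]
        # "Rewrite" the rules: keep survivors, add mutated copies
        population = survivors + [s + rng.gauss(0, 1.0) for s in survivors]
    return min(population, key=lambda x: abs(x - target))

print(evolve())
```

Note what is missing: nothing in the loop records *why* the winning rule works, which is exactly the opacity problem Johnson is pointing at.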


What are they doing?

·      We don’t really know. What concerns Johnson and his colleagues is what conclusions the machines are drawing at a macro level.

·      When you look at this stuff at the micro level…

·      It may be something straightforward that identifies your face in a photograph.

·      What is less clear is the set of conclusions these genetic algorithms are drawing from the billions of interactions they make.

·      Because those conclusions can be fed back into the system at the micro level.

·      And unless you understand the decision-making process the program has followed at the very top of that decision tree,

·      it’s very hard to identify and unpick those conclusions, or even identify how they might be skewing the system.


And these adaptive algorithms are becoming more widespread?

·      Yes, no, maybe. Experts like Johnson and the author Cathy O’Neil certainly think so.

·      There is a constant push and pull.

·      For example, companies like Amazon have to be on guard for bad actors who are trying to game the pricing system.

·      The easiest way to do that – for both parties – is with an algorithm.

·      So it’s in the interests of both parties to have adaptive algorithms. You block me, I change. I block you, you change.

·      The game goes on. And it can be fought at incredible speed.
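The "you block me, I change" loop can be sketched as a tiny cat-and-mouse game. The price floor, step size and numbers below are made up for the example; the point is only the shape of the interaction, in which each block triggers an adaptation and a retry.

```python
# Hypothetical sketch of the block-and-adapt game: a platform blocks
# prices below a floor, and a bad actor's bot nudges its price upward
# after every block until one gets through.
def block_and_adapt(start_price=10.0, floor=12.0, step=0.5, max_rounds=20):
    price = start_price
    log = []
    for _ in range(max_rounds):
        blocked = price < floor          # the platform's current rule
        log.append((round(price, 2), blocked))
        if not blocked:
            break
        price += step                    # the evader adapts and retries
    return log

for price, blocked in block_and_adapt():
    print(price, "blocked" if blocked else "accepted")
```

In this toy version the game ends; in the real thing both sides keep changing their rules, which is why it can be fought indefinitely and at machine speed.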


Let’s go back a step. Why the yes, no, maybe?

·      Because nearly all of these algorithms are proprietary.

·      Companies like Google don’t want to publish the code because it makes it easier for developers to find ways to exploit the system.

·      And of course they don’t want competitors cutting and pasting big lumps of it and building alternatives.

·      But increasingly there are calls for software companies’ algorithms to be audited.

·      That countries should have bodies, whether government-controlled or independent, to provide oversight.


Are we moving in that direction?

·      The EU introduced its data protection rules earlier this year, but those really don’t go far enough.

·      Interestingly, there is increasing bipartisan support in the US for regulation of the technology and software companies.

·      President Trump has been issuing bombastic tweets targeting Google, Facebook and the rest over the past couple of weeks, alleging that the algorithms these companies use are discriminating against conservatives.

·      At the same time Democrats are moving forwards with proposals and legislation of their own based on the interference in the last election.

·      You have the two sides moving towards a similar solution, albeit for different reasons.


Obviously, this is conjecture, but how much difference would oversight make to these algorithms?

·      Initially it might have some limited impact.

·      I don’t think it would have the kind of impact that either the Republicans or the Democrats would hope for.

·      Even in the most extreme scenarios, if some of these companies were broken up into smaller entities, other monoliths would probably rise from those companies.

·      We use Facebook because everybody else is on Facebook. Ditto Twitter, Instagram, Snapchat.

·      We use Google’s search engine because it returns the best results.

·      Breaking those companies up is not going to change the nature of consumer behaviour.

·      We gravitate towards what we consider to be the best solution.


Could legislation control the algorithms?

·      Certainly it could have some impact.

·      For example, in terms of creating a framework of responsibility.

·      That would prevent companies from hiding behind secret code and claiming that they’re not responsible for its effects.

·      There was a case a while ago where a Toyota Camry appeared to accelerate for no reason and the driver eventually ran off the road and died.

·      It took nearly two years of investigation by software experts from NASA to figure out that a jumble of competing code and algorithms had crashed against one another, resulting in weird and unpredictable effects.


And that was in an autonomous car?

·      That was in a normal car. Controlled by a human.

·      You can only imagine what might happen in a self-driving car, where there may be more than 100 million lines of code,

·      some of which is designed to be dynamic and adapt.

·      If we go back to the Franken-algorithms article, Andrew Smith revisits the arguments of George Dyson, who wrote about much of what we’re experiencing more than 20 years ago in his book Darwin Among the Machines.

·      We’re building systems that are much too complicated for us to control.

·      And the technology that controls them has to be at least as sophisticated as the systems themselves.

·      In effect, we’re building a world that relies on dumb algorithms.


Wouldn’t government or third party oversight help to make the process safer?

·      It’s possible that it could even make it worse.

·      Billions of lines of code making billions of interactions per day.

·      It isn’t the kind of thing that humans can oversee.

·      Realistically, you would probably have to build machines to check the algorithms.

·      And those machines would have to be as sophisticated or more sophisticated than the algorithm they are checking.


Yep. I’ve spotted the flaw there…

·      Yes. In order to police algorithms with intentions and logic processes we can’t comprehend, we have to build a bunch of algorithms with intentions and logic processes we can’t comprehend.


It seems like we have two choices: either we step back from the technology or we race forward to create more sentient and rational machines?

·      Let’s face it. There is probably zero chance of us going backwards unless some calamitous disaster, economic crash or meteorite impact forces us to.

·      But we’re still a long way from those sentient machines.

·      Andrew Smith quotes Toby Walsh, an AI professor at the University of New South Wales.

·      We have a lot of problems with creating AI that is truly capable of independent decisions.

·      He questions whether we really will have self-driving cars in great numbers any time soon, for a simple reason:

·      No one knows how to write a piece of code that will get a machine to recognise a stop sign.


Isn’t that one of the most basic requirements?

·      Go back to the Uber crash earlier this year.

·      The software wasn’t able to recognise what the woman was in time, or to predict the behaviours that go with that identity.

·      Toby Walsh claims that we can’t program a machine to recognise a stop sign.

·      And that goes for everything from stop signs to translating languages.


We don’t have the ability to break those processes down into a sufficient number of steps for a machine to understand?

·      So we design algorithms that allow the machine to learn around the issue.

·      For example, the machine may analyse traffic flows, identify the fact that cars stop at an intersection, and conclude that there must be a stop sign there.

·      So it makes a note on its map.

·      But it isn’t seeing or processing the information in the same way that we do; it’s looking at datasets and making assumptions.
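The learn-around-the-issue workaround just described can be sketched like this. The intersections, counts and threshold are all invented for the example; the point is that the system infers a stop sign from behaviour in the data rather than ever "seeing" one.

```python
# Illustrative sketch: infer a stop sign at an intersection if nearly
# all observed cars came to a halt there. No vision involved at all.
def infer_stop_signs(stop_events, passes_per_intersection, threshold=0.9):
    inferred = {}
    for intersection, stops in stop_events.items():
        rate = stops / passes_per_intersection[intersection]
        inferred[intersection] = rate >= threshold
    return inferred

observations = {"5th & Main": 96, "Oak & 2nd": 12}   # cars seen stopping
passes = {"5th & Main": 100, "Oak & 2nd": 100}       # cars seen passing
print(infer_stop_signs(observations, passes))
```

A 96% stop rate gets labelled as a stop sign and a 12% rate does not – which works until the data is unusual, say a junction where drivers habitually roll through, and the assumption quietly fails.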


What’s the solution?

·      Usually I do my best to leave us with, if not a silver lining, then at least a couple of shiny trinkets as a panacea.

·      I don’t know if I have any to give away today.

·      One of the experts Andrew Smith quotes is a guy called Paul Wilmott, who specialises in quantitative analysis.

·      He suggests that we all learn to shoot, make jam and knit.

·      Neil Johnson, the physicist and financial analyst, is a little more optimistic.

·      We may have to change the way we view the problem and adapt the way we program algorithms accordingly.


Will that work?

·      As Johnson points out, we don’t even have the scientific language to do that yet.

·      At the moment, we program machines to create optimal outcomes.

·      What we’re not addressing is what the worst possible outcome might be.

·      We have to find a way to predict the unpredictable, and calculate the likelihood that the collision of who knows how many algorithms will create that outcome.

·      We need a new science and we need it fast. 

·      Before the police robot identifies your breadsticks as batons and puts you down hard.
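Johnson's shift from optimising the best outcome to bounding the worst one can be sketched in miniature. The payoff formula and the strategy values below are entirely invented; the idea is simply to enumerate how combinations of algorithms might interact and ask how bad the worst collision could get, not just how good the average one is.

```python
import itertools

# Toy interaction model: aggressive strategies amplify each other badly.
# The numbers are invented purely for illustration.
def joint_outcome(actions):
    aggression = sum(actions)
    return -aggression ** 2 + 5 * len(actions)

def worst_case(strategy_sets):
    # Enumerate every combination of strategies and bound the outcomes
    outcomes = [joint_outcome(combo)
                for combo in itertools.product(*strategy_sets)]
    return min(outcomes), max(outcomes)

# Three algorithms, each able to play calm (0) or aggressive (3)
worst, best = worst_case([[0, 3]] * 3)
print("best:", best, "worst:", worst)
```

Each algorithm optimised alone looks fine; it is the enumerated collisions that reveal how far below the best case the worst case sits – which is the calculation we currently have no scientific language for at real-world scale.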