The recent AI Summits in the UK and South Korea purport to be about ensuring AI technology is developed safely and responsibly, yet their outputs are banal statements and platitudes. If states are serious about regulating AI in a way that benefits humanity as a whole, they must undercut the lobbying efforts of the private sector and hold it accountable, argues Baroness Kidron.
Baroness Kidron will be expanding on some of her arguments here in her upcoming LSE public lecture “Tech tantrums – when tech meets humanity”.
Rishi Sunak’s AI Summit last November was, depending on one’s perspective, either a coup for the UK – hosting an important global conversation – or an excuse for the tech bros to scream existential threat whilst stealing the wallet from your pocket. In the rooms and corridors of Bletchley Park, to which only the select were invited – including the perpetrators of current inequities and iniquities such as AI-generated child sexual abuse material (CSAM), misinformation, scams, discrimination, and workers’ surveillance, but largely excluding critical or cautious voices – a draft declaration was being formed. As one journalist said to me, “everything important is happening out of our sight”.
Meanwhile Elon Musk, in a strange interview with the Prime Minister, declared that “There will come a point where no job is needed – you can have a job if you want one for personal satisfaction, but AI will do everything.” Everything, that is, but work out how to meet the needs of those who had been replaced. And while the quote was arresting for the press, there was something quite galling about the lack of commentary on two men who don’t need a job to put food on the table being cavalier about widespread joblessness, or about how – if what Musk was saying is even partially true – the manifest and increasing inequalities of our time were about to get bigger. In spite of the urgency expressed by participants, and to the dismay of the excluded, the outcome of the summit was the Bletchley Declaration, which said – I paraphrase – that AI is complicated, and all concerned agreed to meet again in South Korea in May 2024.
Meanwhile a more serious attempt at an international treaty was being drafted by the Council of Europe. The Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law is an international instrument open to ratification by Council of Europe member states and non-member states alike – a chance at a truly global agreement. Among its provisions is a requirement that AI systems are not used to undermine democratic institutions and processes, including the principle of separation of powers, respect for judicial independence and access to justice. But then the US delegation demanded changes, insisting that signatory countries should not be required to apply treaty obligations to private companies, except when those companies are acting on behalf of a public authority.
Without question, AI legislation can usefully be applied to state actors. The Netherlands, the UK and Australia have all already fallen foul of biased algorithmic decision-making, with thousands of people wrongly accused of benefit fraud and cast into poverty, some rendered homeless and some losing their lives. As AI becomes more embedded in justice, immigration, education, health and welfare systems, it is inevitable that similar issues will arise, so there is a good argument for states following AI treaty obligations.
But what human decision-making process could possibly determine that upholding human rights, democracy and the rule of law in the development and deployment of AI technology should not also apply to private companies – the very companies whose hands are on the technology that will determine the future of society?
Canada, Israel and Japan – all observer rather than member countries – and the United Kingdom backed the US; the EU did not. Nonetheless, the US, where most of the AI companies are headquartered or owned, and where it is anticipated most financial benefit will flow, got its way. The Council of Europe’s announcement confirming that individual countries can decide whether or not to apply the treaty principles to private companies outside their public sector activities was as painful as it was mad. The argument surely is that if we want machines to work in the best interests of society, then the rules we make about how we live with AI must apply to the private sector, and we must keep some powers in public hands.
We have allowed global corporations in general, and global tech companies in particular, to hold unaccountable power over all areas of public and private life – and in doing so have created a system that is unanswerable to the governments we elect, which in turn makes our governments unable to be answerable to us. An unaccountable sector, already acting with impunity, is being handed the keys to the systems and processes that will affect all aspects of human life.
And yet, like Bletchley, the Seoul Declaration, announced in May, failed to embody the urgency felt by observers or even participants. It can be summarised by the collective commitment to “encourage all relevant actors to foster an enabling environment in which AI is designed, developed, deployed and used in a safe, secure and trustworthy manner, for the good of all and in line with applicable domestic and international frameworks”.
It is fashionable to say that digital tech moves too fast to legislate. But that view has emerged, in large part, because of the extraordinary efforts of corporations to frustrate legislation, water down its efficacy and challenge its validity. And while companies trumpet their desire to be regulated, they spend their billions frustrating it. The AI sector is often described as diverse, young and disruptive, but the majority of foundation AI models are owned directly by, or are in the hands of, the Big Tech players, and the data needs of creating a new model are so great that they are already considered a barrier to entry. Meanwhile, across the globe, competition laws designed to contain the excesses of previous generations of technological and business innovation have failed, and are continuing to fail, to prevent the concentration of monopoly power.
Most questions of technology end up being about uses and abuses. The tech exceptionalism of the last two decades has outsourced governance and ethical decisions to tech companies that seek to profit by putting their business interests in direct conflict with the needs of society – whether by refusing children a few modest changes to services that they loved, or by commoditising everything from our footsteps to our democratic rights.
And rather than making the same mistake again by treating AI as if it were something other and new, just as the companies are using the well-worn tools of lobbying, the courts and policy capture, so the well-worn tools of democracy should be used to ensure that AI develops to benefit humanity, not just a handful of winners of the Silicon Valley Gold Rush.
All articles posted on this blog give the views of the author(s), and not the position of LSE British Politics and Policy, nor of the London School of Economics and Political Science.
Image credit: Ascannio on Shutterstock