Even those who are enthusiastic about the impact of artificial intelligence (AI) recognise that it needs regulating. But what exactly needs regulating, and how?

One danger must be avoided. Governments must not allow the regulatory debate to be dominated by big tech companies and speculation about possible risks of future AI systems. The recent AI Safety Summit organised by the UK Government, for example, was criticised for inviting CEOs of big technology firms – which create the risks – and leaving out working people, who suffer the consequences.

What are AI’s immediate risks for workers?

The jury may still be out on the impact AI has on the number of jobs available, but other impacts are already being felt in the workplace. The OECD warns that, while AI can reduce dangerous tasks, it may also lead to a higher pace and intensity of work, increasing stress levels and posing risks to the physical safety of workers. The OECD also highlights the mental health risks posed by algorithmic management, noting that the ‘constant and pervasive monitoring and data-driven performance evaluations made possible by AI’ can make workers ‘feel constantly scrutinised and under pressure to perform.’ It is little wonder then that workers subjected to algorithmic management are less positive about AI than workers who interact with it in other ways.

AI-enabled surveillance and monitoring tools also pose risks to workers’ privacy. According to the ETUC, much of the employee-monitoring software marketed to companies, in some cases promising ‘total control over employees’ computers’, is very likely illegal under the EU’s data protection rules. Reports of AI being used to identify signs of workers organising, for the purposes of union-busting, are also worrying.

A further risk is that automating tasks that previously required human know-how could de-skill occupations, making them more routine and enabling greater use of temporary contracts and the hiring of less qualified, lower-paid workers. The OECD cautions that ‘wages could decline for workers who find themselves squeezed into a diminished share of tasks due to automation.’

AI is increasingly being used for recruitment, from identifying candidates to final interviews. Bias can be built into AI systems through the choice of parameters and training data, and through flaws in that data (data that is incomplete, incorrect, outdated or unrepresentative of the population as a whole). The lack of transparency in such algorithmic decision-making makes discrimination harder to detect and leaves workers poorly placed to rely on the protections of non-discrimination law.

How can regulation deal with the risks faced by workers today? 

Firstly, all policy and regulation should be guided by the principle of humans in control. AI is a tool that should assist people, not take responsibility away from them. The use of AI should, at the very least, respect human dignity and human rights. Suppliers and employers should be required to be transparent to workers, service users and consumers about the use of AI, and organisations should be accountable for decisions made by it. AI makes it imperative to improve people’s access to and control over their data held by others, whether social media platforms, employers, government or other organisations.

Secondly, and more specifically, workers should have an enforceable right to be informed and consulted, and to negotiate with management, about the use of digital technology. Employers should be obliged to reveal what technology is being used and how, and what data about workers is being collected and how it is being used. Workers should be empowered to collectively bargain with employers on restructuring, changes in work practices and working conditions resulting from AI, and on control over their data.

Thirdly, there should be a clear commitment to the principles of a socially just transition. Developed to guide countries in managing the transition to low-carbon economies, these principles should also be applied to managing technology development and deployment. Governments need to accept that they have a major role in managing change and ensuring high levels of employment and living standards, rather than simply abandoning communities and regions to market forces. When an IMF official warns governments to prepare for ‘substantial disruptions in labour markets’ as a result of AI, it is important not only to take note but to act.

The OECD has a major role in shaping the rules and policies that are needed. It must not only help ensure AI ‘drives innovation’ and ‘respects human rights’, as the recent Ministerial Council meeting declared, but also insist that AI leads to sustainable and inclusive prosperity.

Governments are beginning to act, but more needs to be done. The EU’s recent and much-heralded Artificial Intelligence Act does recognise that workplace applications of AI are high-risk and requires workers to be informed when AI systems are deployed in the workplace. But trade unions in Europe are concerned that the obligations for high-risk systems are still subject only to self-assessment by the provider, and they call for a dedicated Directive on algorithmic systems in the workplace to uphold the principle of humans in control and to empower workers and trade unions to influence decisions about how AI is implemented.

President Biden’s recent Executive Order emphasises that the ‘next steps in AI development should be built on the views of workers, labour unions, educators and employers to support responsible uses of AI’ and that ‘all workers need a seat at the table, including through collective bargaining’. This is a positive step forward, and other governments should take similar action, recognising that collective bargaining is essential to shape AI, and laying the foundation for the rules and policies that are needed to protect workers’ rights and ensure all people are able to reap the benefits of AI innovation.

With the pace of technological innovation, there is no time to lose.