AI adopted without consideration for workers, MPs told

The rapid adoption of artificial intelligence (AI) by businesses during the pandemic has left workers across the UK vulnerable to a range of algorithm-induced harms, including invasive surveillance, discrimination, bullying and severe work intensification, MPs have been told.

In a session examining how AI and digital technology are changing the workplace, the Business, Energy and Industrial Strategy (BEIS) Committee was told by Andrew Pakes, deputy general secretary of the Prospect union, that the rapid introduction of new technologies into workplaces across the UK has helped many businesses stay afloat during the pandemic.

But he said the rapid deployment of AI-based technologies in the workplace – including systems for recruitment, emotion sensing, surveillance, productivity monitoring and more – meant the downsides were not properly taken into account, creating a situation in which employment laws are no longer adequate to cope with how people are managed via digital technologies.

“What we’ve seen during the pandemic is the acceleration of digital technology that keeps us safe, connected and well – but we’ve also seen in that acceleration less time spent looking at it,” Pakes said.

Giving the example of task-allocation software that can help bosses monitor or micromanage their staff, Pakes added: “You log in and it tells you how much time you have to complete a task. What we don’t yet have is clarity on how that data is then used in the management process to determine the speed of a job, or whether you are a ‘good’ worker or a ‘bad’ worker.”

“What’s behind AI is the use of our data to make choices about everyone’s work life and where they fit in the workplace. Many of our laws are currently based on physical presence – health and safety laws, for example, deal with physical harm and risk. We do not yet have a legal language or framework that adequately represents the harm or risk created by the use of our data.”

In March 2021, the Trades Union Congress (TUC) also warned that there are huge gaps in UK law regarding the use of AI in the workplace, which could lead to discrimination and unfair treatment of workers, and called for “urgent legislative changes”.

A year later, in March 2022, the TUC said the intrusive and growing use of surveillance technology in the workplace – often powered by AI – was “out of control”, and pushed for workers to be consulted on the implementation of new technologies at work.

Referring to a report by Robin Allen QC, which concluded that UK law does not adequately cover the discrimination and equality risks that could arise in the workplace as a result of AI, Carly Kind, director of the Ada Lovelace Institute, told MPs that many of the AI tools being deployed were not only on the edge of legality, but also on the edge of scientific veracity.

“Things like emotion recognition or classification, which is when interviewees are asked to talk to an automated interviewer or otherwise on screen, and some form of image recognition is applied that tries to distil from their facial movements whether or not they are reliable or trustworthy,” she said, adding that there is a real “legal vacuum” in the use of AI for emotion classification.

Speaking about how AI-powered tools such as emotion recognition could impact neurodivergent people, for example, Kind said inequality was a “real concern with AI in general”, because it “uses existing datasets to make predictions about the future, and tends to optimise for homogeneity and for the status quo – it’s not very good at optimising for difference or diversity, for example”.

Regarding the accountability of AI, Anna Thomas, director of the Institute for the Future of Work, said that while auditing tools are generally seen as a way to address the harms caused by AI, they are often insufficient to ensure compliance with UK employment and equality laws.

“In particular, the audit tools themselves will rarely be explicit about the purpose of the audit, or key definitions such as equality and equity, and assumptions from the US have been imported,” she said, adding that policymakers should look to implement a broader socio-technical auditing regime to address the harms caused by AI. “The tools were generally not designed or equipped to actually address the problems that were discovered,” she said.

The importation of cultural assumptions via technology was also raised by Pakes, who said problems with AI in the workplace are exacerbated by the fact that most companies do not develop their own AI management systems in-house, and therefore rely on off-the-shelf products produced elsewhere in the world, where management practices and labour rights may be very different.

Giving the example of Microsoft Teams and Office365 – which contain tools that allow employers to covertly read staff emails and monitor their computer usage at work – Pakes said that while the software was useful at the start of the pandemic, the subsequent introduction of an automated “productivity score” created a multitude of problems.

“If all of a sudden, as we found, six months later, people are pulled into disciplinary action, and managers say ‘we looked at your email traffic, we looked at the software you have been using, we’ve reviewed your websites, we don’t think you’re a productive worker’ – we think that goes into the scarier uses of this technology,” Pakes said.

But he added that the problem is not the technology itself, “it’s the management practice of how the technology is applied that needs to be fixed.”

Case Study: AI-Powered Automation at Amazon

On the benefits of AI for productivity, Brian Palmer, head of public policy for Europe at Amazon, told MPs that the e-commerce giant’s use of automation in its fulfilment centres is not designed to replace existing jobs, but rather to take on mundane or repetitive tasks on behalf of workers.

“In terms of improved outcomes for people, what we see is improved safety, reduced rates of things like repetitive strain injuries or musculoskeletal disorders, improved employee retention, and jobs that are more sustainable,” he said.

Reiterating recent testimony given to the Digital, Culture, Media and Sport (DCMS) Committee by Matthew Cole, a postdoctoral researcher at the Oxford Internet Institute, Labour MP Andy McDonald said the technologies were far from empowering – they lead to overwork, extreme stress and anxiety, and joint and other health issues.

Asked how data is used to track employee behaviour and productivity, Palmer denied that Amazon seeks to surveil or monitor its employees.

“Their privacy is something we respect,” he said. “The software and hardware we’ve talked about focus on the goods, not the people themselves.” Palmer added that the performance data collected is accessible to the employee through internal systems.

When questioned by committee chairman Darren Jones, who told Palmer he was “incorrect” in his characterization, Palmer said the primary and secondary purposes of Amazon’s systems were to monitor “network health” and “inventory control”, respectively.

Telling the story of a 63-year-old constituent who works for Amazon, Jones said it was a given that the company was monitoring individual workers’ productivity, because the constituent had already received two strikes for being too slow at packing items, and could be dismissed by his manager on a third.

Following that exchange, Palmer admitted that Amazon workers could be fired for not meeting productivity targets. However, he maintained that there would always be a “human in the loop” and that any performance issue usually results in the worker being moved to a different “function” within the company.

Other witnesses also took issue with Palmer’s characterization of automation at Amazon. Laurence Turner, head of research and policy at the GMB union, said its members had reported an increase in “work intensity” due to ever-higher productivity targets managed via an algorithm.

Turner said algorithmic surveillance also had an impact on people’s mental health, with workers reporting a “sense of betrayal” to the GMB when it became clear an employer had been covertly monitoring them: “Members report all too often that they will be called into a disciplinary and presented with a set of numbers, or a set of metrics, which they did not know were being collected about them, and which they do not feel confident to challenge.”

Pakes said members of the Prospect union have also reported similar and “considerable” concerns about the effect of AI on work intensity.

“There is a danger, in our view, that AI becomes a new form of modern Taylorism – that algorithms will be used for short-term productivity gains at the long-term expense of the employer,” Turner said, adding that Palmer’s testimony was “a pretty extraordinary body of evidence that doesn’t reflect what our members are telling us about what’s going on in those warehouses”.

On the role of AI in work intensification, Thomas said systems need to be designed with worker outcomes in mind. “If the goal was not just to increase the number of bags someone had to pack in a minute, but it was done with a more holistic understanding of the impacts on people – on their wellbeing, their dignity, their autonomy, on participation – the outcomes are more likely to be successful,” she said.

The committee opened its inquiry into post-pandemic economic growth and UK labour markets in June 2020, with the stated aim of understanding issues relating to the UK workforce, including the impact of new technologies.

A parliamentary inquiry into AI-powered workplace surveillance previously found that AI was being used to monitor and control workers with little accountability or transparency, and called for legislation to create accountability for algorithms used in the workplace.
