AI adopted without due consideration for workers, MPs told

MPs have been told that the rapid adoption of artificial intelligence (AI) by businesses during the pandemic has left workers across the UK vulnerable to the dangers posed by algorithms, including surveillance, discrimination and a serious intensification of work.

In a session examining how AI and digital technologies more generally are changing the workplace, the Business, Energy and Industrial Strategy (BEIS) Committee was told by Andrew Pakes, deputy general secretary of the Prospect union, that the rapid introduction of new technologies into workplaces across the UK had helped many businesses keep running during the pandemic.

But he said the rapid deployment of AI-enabled technologies in the workplace, including recruitment systems, emotion detection, surveillance, productivity monitoring and more, means the downsides are not being properly considered, creating a situation where labour laws are no longer adequate to deal with changes in how people are managed through digital technology.

“What we saw during the pandemic was an acceleration of digital technology that allowed us to stay safe, connected and working well, but in that acceleration we also saw less time being taken to check it out,” Pakes said.

Giving the example of task allocation software that can enable bosses to surveil or micro-manage their employees, Pakes added: “You log in and it tells you when you have to do something. What we do not yet have is clarity on how that data is used in the management process to determine the speed of work, or whether you are a ‘good’ or a ‘bad’ worker.

“What sits behind AI is the use of our data to make decisions about individuals’ working lives and where they sit in the workplace. Much of our current law is based on physical presence, so health and safety law is about danger and harm to the body. We do not yet have sufficient language, or a legal framework, to represent the risks created by the use of our data.”

In March 2021, the Trades Union Congress (TUC) warned similarly that huge gaps in UK law around the use of AI in the workplace could lead to discrimination and unfair treatment of workers, and called for “urgent legislative changes”.

A year later, in March 2022, the TUC said the intrusive and escalating use of surveillance technology in the workplace, often powered by AI, was “spiralling out of control”, and pushed for workers to be consulted on the application of new technologies at work.


Citing a report by Robin Allen QC, Carly Kind, director of the Ada Lovelace Institute, told MPs that UK law does not adequately cover the risk of discrimination and inequality in the workplace arising from AI, and that many of the AI tools being deployed are not only on the outer edges of legality, but also of scientific accuracy.

“Things like emotion recognition or classification, which is when an interviewee is recorded on screen during an automated interview and a form of image recognition is applied to try to discern from their facial expressions whether or not they are trustworthy,” she said, adding that there is a real “legal gap” around using AI for emotional classification.

Speaking about how AI-powered tools such as emotion recognition can affect people with neurodivergent conditions, for example, Kind said inequality is a “real concern with AI in general” because it “uses existing datasets to make predictions about the future, and tends to optimise for conformity”.

On auditing AI, Anna Thomas, director of the Institute for the Future of Work, said that while auditing tools are usually seen as a means of addressing AI harms, they are often insufficient to guarantee compliance with UK labour and equality law.

“In particular, self-auditing tools rarely clearly state the purpose of the audit or key definitions, including of equality and fairness, and assumptions from the United States are imported,” she said, adding that policymakers should look to implement a more comprehensive system of socio-technical audits to address AI-induced harms. “The tools are generally not designed or equipped to remedy the problems that are detected either,” she said.

The importation of cultural assumptions through technology was also touched on by Pakes, who said problems surrounding AI in the workplace are exacerbated by the fact that most businesses do not develop their own systems in-house, and therefore rely on off-the-shelf products made elsewhere in the world, where management practices and labour rights can be very different.

Giving the example of Microsoft Teams and Office 365 – which include tools that allow employers to covertly read employees’ emails and track their computer use at work – Pakes said that while such tools may be helpful to start with, the introduction of automated “productivity scoring” later creates a host of problems.


“If all of a sudden, as we have found, six months down the line people are pulled into a disciplinary and the manager is saying, ‘We looked at your email traffic, we looked at the software you used, we looked at the websites you visited, and we do not believe you are a productive employee’ – we think that leads to worse uses of this technology,” Pakes said.

But he added that the problem is not the technology itself; it is the management of how the technology is applied that must be addressed.

Case Study: AI-Powered Automation at Amazon

On the benefits of AI for productivity, Brian Palmer, head of European public policy at Amazon, told MPs that the company’s use of automation in its workplaces is not designed to replace existing jobs, and is instead used to set individual or rotating work goals for workers.

“In terms of improving outcomes for people, what we see is improved safety – reducing things like injuries from repetitive movements or musculoskeletal disorders – improved staff retention, and more sustainable work,” he said.

Echoing recent testimony given to the Digital, Culture, Media and Sport (DCMS) Committee by Matthew Cole, a postdoctoral researcher at the Oxford Internet Institute, Labour MP Andy McDonald said: “The overwhelming evidence is that the technologies Amazon uses are not empowering – they lead to overwork, stress and anxiety, and joint and health problems.”

When asked how data is used to track employee behaviour and productivity, Palmer denied that Amazon was looking to surveil or track its employees.

“Their privacy is something we respect,” he said. “The focus of the software and hardware we are discussing is on the goods, not on the people themselves.” Palmer added that the transactional data collected is accessible to employees through an internal system.

When challenged by committee chair Darren Jones, who told Palmer his characterisation was “inaccurate”, Palmer said the primary and secondary purposes of Amazon’s systems were to monitor the “health of the network” and “inventory management”, respectively.

Referring to the story of a 63-year-old worker at Amazon, Jones said it must be true that the company monitors each employee’s productivity, because the worker, having already received two strikes for delays in packaging, could be fired by his manager on a third strike.

Following this exchange, Palmer admitted that Amazon employees could be fired for failing to meet productivity targets. However, he maintained that there would always be “people in the loop”, and that any performance issues would usually lead to workers being moved to different “functions” within the company.

Other witnesses also took issue with Palmer’s characterisation of Amazon’s automation. Laurence Turner, head of research and policy at the GMB union, said its members reported an increase in “work intensity” as a result of unprecedentedly high productivity targets managed through algorithms.

Algorithmic monitoring is also affecting people’s mental health, with workers reporting “feelings of betrayal” to the GMB “when it became clear that employers were secretly monitoring them”, Turner said. Workers “often report that they will be called into a disciplinary and presented with a set of numbers or metrics that they did not know were being collected on them, and that they do not feel confident in contesting”.

Pakes said Prospect union members also reported similar concerns about the impact of AI on work intensity.

“There is a danger that AI is becoming a new form of modern Taylorism – a regime used for short-term productivity gains that rebounds on employers in the long run,” Turner said, adding that the account given by Palmer was “a remarkable piece of evidence that does not reflect what our members are telling us about what is going on in those warehouses”.

On the role of AI in job design, Thomas said systems must be designed with outcomes for workers in mind. “If the purpose is not just to increase the number of bags that someone has to pack in a minute, but is informed by a more comprehensive understanding of the impacts on people – on their wellbeing, their dignity, their autonomy and participation – the outcome is more likely to be successful.”

The committee launched its inquiry into the post-pandemic growth of the UK labour market in June 2020, with a remit covering issues facing the UK workforce, including the impact of new technologies.

A previous parliamentary inquiry into AI-driven workplace surveillance found that AI was being used to monitor and control workers with little accountability or transparency, and called for the establishment of an accountability-for-algorithms act.
