Labour law in the age of AI

Guest contribution by Britta Redmann | 04.09.2025

From judging the past to shaping the future

Two years ago, artificial intelligence (AI) still sounded like a distant dream to many people. Today, it has become a reality not only in the tech industry, but also in the world of work and in our personal everyday lives. Whether it’s chatbots for customer enquiries, automatic CV analysis, project planning tools or translation apps, AI is all around us. And it is changing a lot of things. However, rapid technological progress is not only turning processes upside down, but also presenting new challenges for labour law.

Legal regulations such as the General Equal Treatment Act (AGG), the General Data Protection Regulation (GDPR) and the Works Constitution Act (BetrVG) were not prepared for this speed of change. At the same time, the EU and the German government are working on supplementary regulations, foremost among them the AI Regulation.

For me, it is not a question of whether we ‘need’ AI in labour law, but rather one of dosage and competence: how much AI can our working world tolerate and how much can we humans tolerate?

Points of contact between AI and labour law

Artificial intelligence is essentially a system that learns from data. Algorithms analyse large amounts of data, recognise patterns and use them to make predictions or suggestions. The better and more diverse the data, the more reliable the results and the lower the risk of bias.
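The pattern-learning principle described above, and how skewed training data carries past bias into future predictions, can be illustrated with a deliberately simplified sketch (all records and qualification names here are invented for illustration):

```python
from collections import Counter

# Hypothetical historical hiring decisions the "model" learns from.
# Each record: (qualification, hired). One qualification is over-
# represented simply because it was favoured in the past.
history = [
    ("mech_eng", True), ("mech_eng", True), ("mech_eng", True),
    ("mech_eng", False),
    ("industrial_eng", True), ("industrial_eng", False),
    ("industrial_eng", False), ("industrial_eng", False),
]

def learn_hire_rates(records):
    """Learn a hire probability per qualification from past decisions."""
    hired, total = Counter(), Counter()
    for qualification, was_hired in records:
        total[qualification] += 1
        hired[qualification] += was_hired
    return {q: hired[q] / total[q] for q in total}

rates = learn_hire_rates(history)
# The model simply reproduces the historical pattern: mech_eng scores
# 0.75, industrial_eng only 0.25. A past preference becomes a future
# prediction -- which is exactly the bias risk named in the text.
```

The sketch shows why "better and more diverse data" matters: the model has no notion of fairness, it only extrapolates whatever pattern the data contains.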

The law is structurally retrospective. It almost always decides on facts that have already occurred and derives its assessment from existing experience, norms and precedents. Legislation is often reactive rather than proactive, and even where the law is intended to be ‘future-oriented’, it is based on existing patterns and known risks.

This is precisely where AI’s particular compatibility with the law lies:

  • AI in the legal field does not need to develop radically new concepts, but can draw on existing legal norms, judgements, commentaries and literature.
  • AI can quickly filter out similarities and differences between new cases and precedents.
  • While we humans spend hours or days researching, AI can compile and weigh up legal texts, case law and opinions in seconds.
  • AI can immediately check whether a new judgement or law conflicts with existing law or which regulatory gaps arise.

Paradoxically, it is precisely this capability that harbours considerable risks. The promise of greater efficiency, more objective decisions and faster information processing stands opposite the risks of discrimination caused by flawed training data, opaque decision-making logic and data protection violations.

A complex legal framework applies here:

  • The General Data Protection Regulation (GDPR) [1] and Section 26 of the Federal Data Protection Act (BDSG) [2] ensure employee data protection. Employers may only process personal data if this is necessary for the employment relationship or if consent has been obtained.
  • European AI Regulation (AI Regulation / AI Act) [3]: Adopted in 2024, it obliges employers to use AI systems in a way that is safe, transparent and traceable. Art. 4 also stipulates that employees must be equipped with sufficient AI literacy.
  • Works Constitution Act (BetrVG) [4]: In the case of monitoring, performance or behaviour control by AI, co-determination applies in accordance with Section 87 (1) No. 6 BetrVG. Section 80 (3) BetrVG requires experts to be consulted when AI is used.

It must be clear that AI in companies is not (purely) an IT issue. It is a labour law task that requires the intertwining of technology, culture and law.

Areas of application for AI in business

The possible applications for AI in business are wide-ranging, from simple automation to complex decision-making systems that can transform entire processes. In the administrative area, for example, AI-supported tools can help with scheduling, expense reports or the creation of standardised documents. New dimensions are also opening up in knowledge management: AI can automatically research, structure and translate information from internal and external sources and make it available in an understandable form.

The potential is particularly evident in the area of human resources. When selecting applicants, AI systems analyse documents, filter them according to defined criteria and thus shorten the selection process. In social recruiting, AI identifies potential candidates on social networks and approaches them in a targeted manner. Even initial interviews can be handled by AI-supported chatbots, which collect structured information and prepare the selection process.

In human resources management, the spectrum ranges from automated performance analysis and personalised onboarding programmes to continuous employee development, in which AI recognises learning needs and suggests individual learning paths. For personnel requirements planning, AI can derive trends from historical and current data and recommend the optimal deployment of personnel.

AI is also changing processes in work organisation: it creates needs-based duty rosters that take into account both operational requirements and individual preferences, and takes on routine tasks such as data entry, evaluation and reporting.
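A needs-based duty roster of the kind described above can be approximated with a simple greedy assignment. This is a minimal sketch under stated assumptions; the shift names and the preference model are invented for illustration:

```python
# Hypothetical shifts to fill (shift -> open slots) and each
# employee's shift preferences, in descending order of preference.
shifts_needed = {"early": 2, "late": 1}
preferences = {
    "Anna": ["early", "late"],
    "Ben": ["late", "early"],
    "Cem": ["early"],
}

def build_roster(shifts, prefs):
    """Greedily give each person their most-preferred shift that
    still has an open slot, balancing operational requirements
    against individual preferences."""
    open_slots = dict(shifts)
    roster = {}
    for person, ranked in prefs.items():
        for shift in ranked:
            if open_slots.get(shift, 0) > 0:
                roster[person] = shift
                open_slots[shift] -= 1
                break
    return roster

roster = build_roster(shifts_needed, preferences)
# Anna -> early, Ben -> late, Cem -> early: everyone gets a
# preferred shift and all slots are filled.
```

Real rostering tools use far richer constraint solvers, but the principle is the same: operational demand and individual preference are both encoded as data and reconciled automatically.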

This diversity shows that artificial intelligence can increase efficiency and at the same time enable more individualised solutions, provided that its application is transparent, fair and in line with labour law requirements.

Three scenarios with labour law implications

Let’s take a look at three scenarios in which artificial intelligence is used and labour law is affected:

Scenario 1 – Recruiting with AI

A medium-sized mechanical engineering company is urgently looking for skilled workers. To cope with the mountain of applications, HR uses an AI-supported tool. It scans CVs, recognises key qualifications and creates a ranking list. The HR department (if there is one) saves weeks of manual screening.

The selection process becomes faster, more objective and more transparent – at least at first glance.

If the AI’s training data reflects historical application decisions, unconscious biases may be perpetuated (AGG). In addition, applicants must be able to find out why they were not considered (GDPR).

Focus on labour law: The works council must be involved (Section 87 (1) No. 6 BetrVG), processes must be documented and the final decision lies with humans, not machines.
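How such a screening tool might rank applications while keeping every step documented and reviewable by a human, as the GDPR transparency duty and Section 87 BetrVG demand, can be sketched roughly as follows (the criteria, weights and application IDs are invented for illustration):

```python
# Hypothetical screening criteria with weights, e.g. agreed with the
# works council in a works agreement.
criteria = {"cnc_programming": 3, "welding_cert": 2, "german_b2": 1}

applications = {
    "A-101": {"cnc_programming", "german_b2"},
    "A-102": {"welding_cert"},
    "A-103": {"cnc_programming", "welding_cert", "german_b2"},
}

def rank_applications(apps, weights):
    """Score each application and record WHY, so rejected applicants
    can be told which criteria were (not) met and a human can review
    the ranking before any decision is made."""
    ranked = []
    for app_id, skills in apps.items():
        score = sum(w for c, w in weights.items() if c in skills)
        reasons = {c: (c in skills) for c in weights}
        ranked.append({"id": app_id, "score": score, "reasons": reasons})
    ranked.sort(key=lambda r: r["score"], reverse=True)
    return ranked  # a suggestion only -- the final decision stays human

ranking = rank_applications(applications, criteria)
```

The point of the `reasons` record is legal, not technical: without it, neither the applicant's right to an explanation nor the documentation duty named above can be met.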

Scenario 2 – AI in team collaboration

A globally distributed project team uses an AI tool that automatically transcribes meetings, assigns tasks and sets priorities. In the morning, everyone opens the app and knows what to do.

The opportunities lie in reduced coordination effort, clear structures and avoiding to-do lists that get ‘lost’ in the minutes.

However, permanent logging can also create a feeling of constant surveillance, which affects personal rights. AI can also imperceptibly undermine self-organisation if the team no longer distributes tasks independently.

From an employment law perspective, its use is only sensible and possible with clear rules in a works agreement, transparency about how it works and voluntary participation when it comes to sensitive data.

Scenario 3 – AI in works council work

The works council of a large retail company works with a specialised AI tool that scans legislative changes, summarises relevant judgements and makes suggestions for works agreements. Instead of spending days researching specialist portals, there is more time available for discussions with employees.

However, confidential information must not be allowed to enter insecure systems. The works council’s right to training and secure resources (Section 80 (2) BetrVG) must be upheld.

Employers should finance secure platforms and offer training. AI cannot replace co-determination, but it can significantly strengthen it, which is in the interests of all employees and employers.

These three scenarios show how AI can be used in a variety of ways in the world of work and that opportunities and risks are often directly related. Efficiency gains, clear structures and new freedom on the one hand, legal obligations, ethical questions and co-determination rights on the other.

This is precisely where it becomes clear that artificial intelligence is not simply ‘there’. It must be designed so that technology and humans complement each other rather than competing with each other. Labour law guidelines help to secure the framework in which innovation can really take effect and humans remain active decision-makers rather than mere subjects of the technology.

As HR professionals (and not just as labour lawyers), we must therefore ask: How well does this framework already serve a technology that is developing so rapidly? And where do we need to make adjustments so that it will still be viable tomorrow?

Legal framework and unfinished business

The EU AI Regulation (AI Act, Regulation 2024/1689) will apply in full from August 2026 and regulates AI systems on a risk-based basis. High-risk applications – for example in recruiting, performance evaluation or personnel management – are subject to strict obligations regarding transparency, data quality, human oversight and data protection impact assessments.

At national level, the GDPR and Section 26 of the Federal Data Protection Act (BDSG) as well as the Works Constitution Act (BetrVG) apply:

  • Section 80 BetrVG obliges employers to inform the works council in a timely and comprehensive manner.
  • Section 87 (1) No. 6 BetrVG grants a right of co-determination in the case of technical monitoring.
  • Section 90 BetrVG requires consultation before decisions on AI implementations are made.

These requirements create clear rules that promote trust, increase acceptance and minimise risks such as discrimination or data protection violations. They also touch on liability issues: employers must take responsibility for incorrect AI decisions, such as incorrect contract content or discriminatory selection processes. Terminations and termination agreements based on AI analyses require particular transparency and final decision-making authority by humans. In copyright law, it is necessary to clarify who owns AI-generated content and what rights of use exist.

In addition to complying with legal requirements, companies should develop their own practical guidelines for the use of AI, for example on transparency, data access, feedback processes and the limits of automated decisions. A general ban can fuel mistrust. Clear internal standards, on the other hand, signal competence and responsibility.

Labour law as a guard rail – people remain in control

Artificial intelligence in everyday working life is a tool. And like any tool, AI only unleashes its potential when we use it wisely, fairly and responsibly. Labour law provides the guard rails for this: it protects against discrimination, ensures data protection, provides transparency and allows room for co-determination. These rules do not hinder innovation, but rather strengthen trust. They create acceptance among all parties involved.

At the same time, the needs of employers, employees and works councils must be equally taken into account in the design. After all, technology can analyse and structure many things, but creative solutions and relationship building are still deeply human.

Legal assessment is not just information processing, but evaluation. Law thrives on context, ethical considerations and interpretation. AI can simulate these considerations, but cannot (yet) take responsibility for them independently. That is why it can provide enormously efficient support in law, but cannot make the final decision. That must continue to be the responsibility of a human being.

Companies need openness to dialogue, the courage to experiment and the will to use AI in a way that relieves people rather than replacing them. With clear rules, transparent processes and an agile, learning-oriented attitude, we can turn AI into a productivity partner that helps us make the world of work safer, fairer and more humane, rather than a control mechanism.


Notes (mainly in German):

Britta Redmann has published a number of highly readable books. ‘Agile Arbeit rechtssicher gestalten’, ‘Lebensphasenorientiertes Leadership’ and ‘Der erfolgreiche Anwalt: Mediation’ are just a few of them. Here you will find an overview of Ms Redmann’s German-language works.

[1] Datenschutz-Grundverordnung (DSGVO)
[2] Bundesdatenschutzgesetz (BDSG)
[3] Verordnung über künstliche Intelligenz (KI-VO)
[4] Betriebsverfassungsgesetz (BetrVG)


Britta Redmann has published more articles on the t2informatik Blog, including:

    t2informatik Blog: Parental leave - laws, feelings, chances of winning

    t2informatik Blog: Life-stage oriented leadership

    t2informatik Blog: Polywork: The new versatility in the world of work

    Britta Redmann

    Britta Redmann is an independent lawyer, mediator and coach and is responsible for HR & Corporate Development at a software manufacturer. She is the author of various specialist books. As a human resources manager, she has accompanied, managed and implemented organisational developments in various industries. Her special expertise lies in developing organisations towards agile and networked forms of cooperation. She shapes and implements modern concepts such as agility, Work 4.0 and digitalisation in labour law terms.

    In the t2informatik Blog, we publish articles for people in organisations. For these people, we develop and modernise software. Pragmatic. Personal. Professional.