How AI works in Graylark

Graylark is a labour-relations AI platform. The AI inside it - GrAI - helps users draft, translate, summarise, analyse changes, and explore agreements. This page sets out, in plain terms, what GrAI does, what it deliberately does not do, where it runs, and the controls we apply.

Policy version: 06/05/2026  

What GrAI is  

GrAI is the set of AI services that sits inside Graylark. It helps users work with the documents and consultation records they already have in front of them - drafting, translating, summarising, analysing changes, preparing for meetings, and exploring agreements through chat.

GrAI is not a people-analytics system. It does not score, rank, profile, or evaluate individual employees. There is no code path inside Graylark that takes an employee record as input and produces a judgement about that person.

What GrAI actually does

These are the concrete features that ship today, named so you can map them to what you see in the product. Each describes the inputs the model receives and what it returns.

Change Proposal Insights

Generates an analysis of a proposed change. Inputs: the proposal text, the assessment description, the relevant business unit and country context, summaries of related agreements, aggregated Works Council feedback summaries, previous advisories on similar proposals, and any related legal advice notes. Output: structured insight fields (considerations, risks, regulatory context) presented as draft text the user can use as a starting point.

Engagement Chat

A preparation assistant for an EWC or consultation engagement. Inputs: engagement metadata, agenda, uploaded materials, optional EWC agreement text, aggregated feedback themes, opinion summaries - all bounded by character limits at the prompt layer. Output: answers to the user's question about this engagement, e.g. "summarise the open questions across all meetings in this round". Conversational; users can follow up.
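The "bounded by character limits at the prompt layer" behaviour can be pictured as a simple truncation step before anything reaches the model. This is an illustrative sketch only - the section names and limits below are hypothetical, not Graylark's actual values:

```python
# Hypothetical per-section character budgets (illustrative values).
MAX_CHARS = {"agenda": 4_000, "materials": 20_000, "feedback_themes": 8_000}

def bound_inputs(sections: dict[str, str]) -> dict[str, str]:
    """Truncate each prompt section to its configured character budget."""
    return {
        name: text[: MAX_CHARS.get(name, 2_000)]  # default budget for unlisted sections
        for name, text in sections.items()
    }
```

The point of the sketch: no input section can grow without bound, so a single oversized upload cannot crowd the rest of the engagement context out of the prompt.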

Agreement Chat

A document-grounded chat over a specific agreement (CBA, EWC agreement, etc.). Inputs: the agreement text, the document kind, the country, an anonymised insight excerpt. Output: answers to user questions about that agreement, e.g. "what does Article 12 say about consultation timelines".

Aggregated Feedback Summary

Summarises aggregated feedback collected during a consultation round or engagement meeting. Inputs are prepared so that the model receives comments without submitter identifiers. Output: a short aggregate summary for the round. This feature is not used to score, classify, or infer emotions about any individual person.

Assisted Authoring

Drafting help across the platform - for example, drafting a response to a Works Council question in the context of a business change, or drafting the text of an agreement section. Inputs: the open record. Output: a draft for the user to review and edit.

Translation

Two flavours: inline text translation (a snippet, a chat message, a comment) and full document translation (entire DOCX). Inputs: the text or document plus source and target language. Output: the translation in the target language. Used across the languages the platform supports.

AI-Enhanced Reports

Generates narrative sections of structured reports from platform data. Inputs: aggregated platform data the user has the right to see. Output: report narrative the user can edit before publishing.

KAI (internal user support)

A separate product-support assistant that helps users use the platform itself ("how do I add a Works Council member"). KAI is not automatically connected to tenant business records and is intended to answer questions about how to use the platform.

What GrAI does not do

We have made deliberate architectural and product-design choices to prevent the following uses in the product as shipped today:

  • No per-employee scoring, ranking, classification, profiling, or attrition prediction. There is no employee-master-data feed into GrAI; sentiment is anonymised before the model sees it.

  • GrAI is designed as a decision-support tool. Its outputs are drafts for human review and are not used by the platform to make fully automated decisions that by themselves produce legal or similarly significant effects on a data subject.

  • No call to OpenAI, Anthropic, Google Gemini, AWS Bedrock, Azure OpenAI, Vertex, Mistral cloud, or any other third-party hosted LLM.

  • Customer tenant data is not used to train Graylark’s deployed production models.

Where GrAI runs

GrAI inference runs on self-hosted models managed by Graylark inside Graylark’s Google Cloud environment in the EU.

How GrAI inherits the platform's access controls

GrAI is not a side door. A user can only invoke a GrAI feature on records they were already authorised to see. The same privilege checks that gate the underlying modules - change proposals, change advisory gateway, agreements, legal advice, engagements - gate the AI feature on top. Country and business-unit scoping carry through. There is no separate AI permission model.
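In code terms, "no separate AI permission model" means the AI entry point simply defers to the module's existing authorisation check. The sketch below is hypothetical - the class and function names are illustrative, not Graylark's implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    countries: frozenset[str]  # country scoping carried by the user's role

@dataclass(frozen=True)
class Record:
    country: str

class ModuleAcl:
    """Stand-in for a module's existing privilege check (e.g. agreements)."""
    def can_view(self, user: User, record: Record) -> bool:
        return record.country in user.countries

def can_invoke_ai(user: User, record: Record, acl: ModuleAcl) -> bool:
    # The AI feature grants no permissions of its own: it reuses the
    # same check that gates viewing the record in its module.
    return acl.can_view(user, record)
```

A user who cannot open a record in its module gets the same refusal from the AI feature, because both paths run the identical check.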

Safety controls

GrAI services apply two standard safety layers:

  • Prompt safety: enforces the assistant identity, refuses jailbreak / "ignore previous instructions" attacks, applies length and content checks.

  • Response safety: rewrites any leaked model identity in the output, enforces consistent attribution, applies content filtering.

Graylark treats these as required engineering controls and tests them in the GrAI service layer.
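To make the two layers concrete, here is a minimal sketch of what such checks can look like. The patterns, names, and replacement behaviour are illustrative assumptions, not Graylark's actual safety code:

```python
import re

# Hypothetical prompt-injection pattern (illustrative, not exhaustive).
JAILBREAK = re.compile(r"ignore (all )?previous instructions", re.IGNORECASE)

def prompt_safety(user_text: str, max_len: int = 8_000) -> str:
    """Prompt layer: refuse injection attempts, enforce a length cap."""
    if JAILBREAK.search(user_text):
        raise ValueError("refused: prompt-injection pattern detected")
    return user_text[:max_len]

def response_safety(model_text: str) -> str:
    """Response layer: rewrite any leaked model identity to the assistant's name."""
    return re.sub(r"\b(GPT-\d+|Claude|Gemini)\b", "GrAI", model_text)
```

Real prompt-safety layers use broader pattern sets and classifiers than a single regex; the sketch only shows where in the request/response flow each control sits.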

Customer controls

Graylark customers are the data controllers. They control whether AI is used in their tenant at all:

  • AI features can be turned off per tenant (master switch in Settings → AI).

  • Individual capabilities (the GrAI Assistant chat, Translation, KAI) can be turned off independently. Cascade rule: if the master is off, the sub-features are off regardless of their own switch.

  • Customers can control which GrAI features are enabled for their tenant. For proposal-insight generation, customers can also control selected source classes, including agreements and client legal advice.

  • Customers can require a per-tenant in-app AI Use Notice that describes the tenant's specific position on AI use to data subjects.

  • Tenant Administrators have access to the AI request audit log under a dedicated privilege - every GrAI invocation is captured with user, feature, model, hashed prompt and response, redacted preview, latency, and the entity (proposal, engagement, agreement) the AI was invoked on.
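Two of the controls above have a precise shape worth sketching: the cascade rule for feature switches, and the audit record captured per invocation. The function and field names below are illustrative assumptions, not Graylark's schema:

```python
import hashlib
from dataclasses import dataclass

def feature_enabled(master_on: bool, feature_on: bool) -> bool:
    """Cascade rule: if the master switch is off, every sub-feature is off
    regardless of its own switch."""
    return master_on and feature_on

def sha256_hex(text: str) -> str:
    """Prompts and responses are stored as hashes, not raw text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

@dataclass(frozen=True)
class AiAuditRecord:
    user_id: str
    feature: str          # e.g. "agreement_chat"
    model: str
    prompt_hash: str      # hash of the prompt, not the prompt itself
    response_hash: str
    redacted_preview: str
    latency_ms: int
    entity_ref: str       # the proposal / engagement / agreement invoked on
```

Hashing lets an administrator verify that a logged invocation matches a given prompt without the log itself retaining the prompt's content.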

Employee Representatives

If you are a Works Council member, trade-union representative, or other employee representative reading this:

  • GrAI does not score, rank, profile, or monitor you or any individual.

  • It is a writing and analysis assistant on documents your employer's HR / Employee Relations counterpart already has open. It does not assemble new datasets about employees.

  • GrAI inference runs in Graylark-managed infrastructure in the European Union. For customer tenant processing, it is not connected to third-party hosted consumer AI services such as ChatGPT, Microsoft Copilot, or Google Gemini.

  • No data from your employer's Graylark tenant is used to train any AI model.

  • Final decisions are taken by people, not by AI.

  • Your employer can turn AI off, narrow its scope, or audit its use at any time.

If you have questions, please ask your employer's Data Protection Officer; they can request our AI Use & Safety Policy and have a one-to-one briefing with us.