NIS2 for Platform Teams: What Changes in Practice

Most NIS2 guides are written for compliance officers and legal counsel. This one is for the people who will actually have to answer the auditor's questions: your platform team.

By Jurg van Vliet


The NIS2 Directive (Directive (EU) 2022/2555) passed its transposition deadline in October 2024, and national implementations are landing now. If you run infrastructure for a European organisation of any significant size, this affects you — probably more than your legal team has communicated.

The problem with most NIS2 guidance is that it stops at the compliance-officer level. It talks about "appropriate technical measures" and "supply chain risk management" without ever getting specific about what that means for the team running Kubernetes clusters, managing cloud accounts, and operating CI/CD pipelines at 2am.

This article maps NIS2 requirements to the infrastructure decisions your platform team makes every day. No policy abstractions. Specific technical implications.

You are probably an "essential" or "important" entity

NIS2 classifies organisations into two categories: essential entities and important entities. The penalties and oversight differ significantly, but both categories face real obligations.

Here is where most organisations underestimate their exposure:

Essential entities include energy, transport, banking, health, drinking water, digital infrastructure, ICT service management (B2B), and public administration. If you are a managed service provider, a cloud service provider, a data centre operator, or a DNS provider — you are an essential entity. Full stop.

Important entities cover a broader set: postal services, waste management, chemicals, food, manufacturing, digital providers, and research. The "digital providers" category is where it gets interesting. Online marketplaces, search engines, and social networking platforms are explicitly included, but so are many SaaS businesses that member states interpret broadly.

The threshold: medium-sized or larger (50+ employees, or annual turnover or balance sheet total above EUR 10M). If you are reading this article, your organisation almost certainly qualifies.

Why this matters for platform teams: The level of scrutiny your infrastructure will face depends on this classification. Essential entities face proactive supervision — regulators can audit you without a triggering incident. Important entities face reactive supervision — audits happen after an incident or report. Either way, your platform team needs to be ready to answer detailed questions about how your infrastructure operates.

Most CTOs we speak with assume NIS2 is primarily about their security team. It is not. The directive's technical requirements land squarely on whoever controls the infrastructure, the deployment pipeline, and the observability stack. That is your platform team.

Supply chain documentation: your cloud provider is your supply chain

Article 21(2)(d) of NIS2 requires "supply chain security, including security-related aspects concerning the relationships between each entity and its direct suppliers or service providers."

Read that again. Your cloud provider is a direct supplier. Your managed database service is a direct supplier. Your CDN, your DNS provider, your container registry, your secrets manager — all direct suppliers.

What this means in practice:

You need a documented inventory of every third-party service your infrastructure depends on. Not a spreadsheet that someone made once and forgot about. A living inventory that answers these questions:

  • Jurisdiction: Where is this provider headquartered? Where is the data processed? Under which legal frameworks can the provider be compelled to disclose data?
  • Substitutability: If this provider becomes unavailable (or legally problematic), what is the migration path? How long would it take?
  • Access scope: What access does this provider have to your systems and data? Can they read your data at rest? In transit?
  • Incident notification: How does this provider notify you of security incidents? What are their contractual SLAs for notification?

For organisations running on AWS, Azure, or Google Cloud, the jurisdiction question is unavoidable. These are US-headquartered companies subject to the CLOUD Act (Clarifying Lawful Overseas Use of Data Act, 2018), which allows US authorities to compel disclosure of data stored abroad. NIS2 does not explicitly prohibit using US providers, but it does require you to document this risk and demonstrate that you have assessed it.

The practical output your platform team needs to produce:

  1. Service dependency register — Every external service, its function, its jurisdiction, its data access level. This is not a one-time exercise. Automate it. If you run Kubernetes, your cluster's dependencies are a good starting point: container runtime, CNI plugin, CSI drivers, ingress controllers, certificate management, DNS. Many of these phone home or depend on external services.

  2. Risk assessment per supplier — Document the impact if each supplier is compromised, becomes unavailable, or is compelled to act against your interests. Be specific. "AWS eu-west-1 becomes unavailable for 72 hours" is a scenario. "Our container images are in ECR and we have no secondary registry" is a finding.

  3. Contractual review — Does your agreement with each provider include incident notification clauses? Data processing agreements? Do they meet NIS2 requirements for notification timelines? Most hyperscaler standard agreements do not.
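Parts of the register can be generated rather than maintained by hand. One automatable slice: which external registries do your running workloads actually pull from? The image references below are examples — in a cluster you would collect the real list from the pod specs (e.g. `kubectl get pods -A -o jsonpath='{..image}'`):

```python
def registries(images: list[str]) -> set[str]:
    """Extract the registry host from each container image reference."""
    out = set()
    for image in images:
        if "/" not in image:
            out.add("docker.io")  # bare name: implicit default registry
            continue
        first = image.split("/")[0]
        # A registry host contains a dot or a port; otherwise the first
        # path segment is a namespace on the default registry.
        out.add(first if ("." in first or ":" in first) else "docker.io")
    return out

images = [
    "123456789.dkr.ecr.eu-west-1.amazonaws.com/payments:1.4.2",
    "ghcr.io/fluxcd/source-controller:v1.3.0",
    "nginx:1.27",
]
print(sorted(registries(images)))
```

Every host this returns is a direct supplier that belongs in the register, with the jurisdiction and substitutability questions answered for each.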

This is the kind of documentation that auditors will ask for first because it is concrete and verifiable. You either have a current service dependency register or you do not.

Incident reporting: 24 hours is not much time


NIS2 introduces strict incident reporting timelines that your platform team needs to be able to meet — not in theory, but with actual tooling and processes.

The timeline:

  • 24 hours — Early warning to the competent authority (national CSIRT or relevant body). This notification must include whether the incident is suspected to be caused by unlawful or malicious acts, and whether it could have cross-border impact.
  • 72 hours — Full incident notification. This must include an initial assessment of the incident's severity and impact, plus indicators of compromise where available.
  • 1 month — Final report, due one month after the incident notification, with detailed description, root cause analysis, mitigation measures applied, and cross-border impact if applicable.
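Once an incident is detected, the clock runs from that moment, so the deadlines are mechanical and worth computing rather than remembering. A sketch (the final report is due one month after the 72-hour notification; approximated here as 30 days for scheduling purposes):

```python
from datetime import datetime, timedelta, timezone

def nis2_deadlines(detected_at: datetime) -> dict[str, datetime]:
    """Reporting deadlines counted from detection of a significant
    incident, per the 24h / 72h / 1-month timeline."""
    notification = detected_at + timedelta(hours=72)
    return {
        "early_warning": detected_at + timedelta(hours=24),
        "incident_notification": notification,
        # One month after the incident notification, approximated as 30 days.
        "final_report": notification + timedelta(days=30),
    }
```

Wiring this into the incident-declaration step of your tooling means the deadlines appear in the incident channel automatically instead of being looked up under pressure.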

What your observability stack needs to support:

The 24-hour early warning is the critical constraint. That means from the moment a significant incident begins, your team has 24 hours to detect it, assess it, and file a structured notification with the relevant authority.

Work backwards from that requirement:

Detection latency — How long does it take between a security event occurring and your team knowing about it? If your log aggregation has a 15-minute delay, and your alerting checks every 30 minutes, and your on-call engineer takes 30 minutes to respond, you have already spent more than an hour before anyone starts investigating. For a 24-hour reporting deadline, that margin matters.
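The arithmetic in that scenario is worth making explicit. Using the example figures above (yours will differ — measure them):

```python
# Detection-latency budget, all values in minutes.
log_ingest_delay = 15   # lag before events appear in log aggregation
alert_interval   = 30   # worst case: event lands just after a check
oncall_response  = 30   # page-to-keyboard time for the on-call engineer

time_to_investigation = log_ingest_delay + alert_interval + oncall_response
print(time_to_investigation)            # minutes gone before anyone looks
print(24 * 60 - time_to_investigation)  # minutes left of the 24h window
```

Seventy-five minutes before investigation even starts — and the early warning still requires assessment and a structured filing inside the remaining window.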

Log completeness — The 72-hour notification requires indicators of compromise and initial root cause. You cannot produce these if your logging is incomplete. At minimum, your platform needs:

  • API server audit logs (every authentication and authorisation decision)
  • Network flow logs (or at least ingress/egress at the cluster boundary)
  • Container runtime events (image pulls, exec into containers, privilege escalations)
  • Access logs for every externally-facing service
  • Change logs for infrastructure-as-code (who deployed what, when, from which commit)

Structured incident classification — NIS2 defines a "significant incident" as one that has caused or is capable of causing severe operational disruption or financial loss, or has affected or is capable of affecting other natural or legal persons by causing considerable damage. Your incident severity framework needs to map to this definition. Severity 1 in your runbook should correspond to "significant incident" under NIS2, and the escalation path should include regulatory notification as a step.
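The mapping from your internal severity scale to the NIS2 significance test can be a small, explicit function rather than a judgment call made at 2am. A minimal sketch, assuming a hypothetical internal scale where severity 1 is the worst:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    severity: int                 # internal scale, 1 = worst (assumption)
    operational_disruption: bool  # severe disruption caused or capable of
    financial_loss: bool          # severe financial loss caused or capable of
    third_party_damage: bool      # considerable damage to other persons

def is_significant(incident: Incident) -> bool:
    """The NIS2 'significant incident' test: severe operational
    disruption or financial loss, or considerable damage to other
    natural or legal persons."""
    return (incident.operational_disruption
            or incident.financial_loss
            or incident.third_party_damage)

def requires_regulatory_notification(incident: Incident) -> bool:
    # Severity 1 always trips the regulatory path; lower severities
    # still qualify if they meet the significance test.
    return incident.severity == 1 or is_significant(incident)
```

The escalation step in the runbook then becomes "if `requires_regulatory_notification`, start the 24-hour clock" — a check, not a debate.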

Practical recommendation: Build a NIS2 incident reporting template into your incident response runbook. When your on-call engineer declares a significant incident, the template should auto-populate with:

  • Timestamp of detection
  • Services affected (from your service catalogue)
  • Data classifications involved (from your data inventory)
  • Initial indicators of compromise (from your SIEM or log analysis)
  • Whether cross-border impact is possible (based on where your users and data reside)

The engineer should not have to think about NIS2 compliance during an incident. The process should produce the required outputs by default.
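A sketch of that auto-population, with hypothetical lookups — in practice these would query your service catalogue, data inventory, and SIEM rather than static dictionaries:

```python
from datetime import datetime, timezone

# Hypothetical service catalogue entry; illustrative only.
SERVICE_CATALOGUE = {
    "payments-api": {"data_classes": ["PII"], "user_regions": ["NL", "DE"]},
}

def early_warning(service: str, iocs: list[str]) -> dict:
    """Assemble the fields the 24-hour early warning needs, so the
    on-call engineer fills in judgment calls, not paperwork."""
    entry = SERVICE_CATALOGUE[service]
    return {
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "services_affected": [service],
        "data_classifications": entry["data_classes"],
        "initial_iocs": iocs,
        "cross_border_possible": len(set(entry["user_regions"])) > 1,
        "suspected_malicious": None,  # engineer must answer explicitly
    }
```

Note the one field deliberately left as `None`: whether the incident is suspected to be malicious is a human judgment the early warning requires, and the template should force it to be answered rather than defaulted.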

What auditors will actually ask your platform team

Legal teams tend to prepare for NIS2 audits by assembling policy documents: information security policies, risk management frameworks, business continuity plans. These matter, but they are table stakes. The auditor already assumes you have them.

Where audits get interesting — and where organisations fail — is when the auditor moves from policy to implementation. Here is what your platform team should expect:

Access control:

  • "Show me who has production cluster admin access right now." (Not who is supposed to — who actually does.)
  • "Show me the last time someone's access was reviewed and revoked."
  • "How do you handle access for third-party contractors?"
  • "Walk me through what happens when an engineer leaves the company. How quickly is access revoked across all systems?"

If your answer to any of these requires someone to manually check multiple systems, you have a finding. Auditors expect centralised identity management with automated provisioning and deprovisioning. If your Kubernetes RBAC is managed through static YAML files that someone applies manually, that is a gap.

Change management:

  • "Show me the deployment pipeline for your production environment. Who can push changes, and what gates exist?"
  • "Show me a recent production change. Can you trace it from code commit to deployment, including who approved it?"
  • "How do you handle emergency changes that bypass the normal process?"

GitOps workflows score well here. If every production change is a pull request in a Git repository, reviewed by a second person, and applied by Flux or ArgoCD, you have a verifiable audit trail by default. If you are still doing kubectl apply from laptops, expect follow-up questions.

Logging and monitoring:

  • "Show me your log retention policy. How far back can you investigate an incident?"
  • "Show me how you would detect unauthorised access to your data stores."
  • "What is your mean time to detect a security event?"

The retention question catches many organisations. NIS2 does not specify a retention period, but auditors will ask whether your retention aligns with the 1-month final report timeline. If you only retain logs for 14 days, you cannot produce a root cause analysis for an incident that started 20 days ago.

Business continuity:

  • "What is your recovery time objective for your core platform?"
  • "When was the last time you tested your disaster recovery procedure?"
  • "If your primary cloud region became unavailable, what happens?"

The last question is where supply chain documentation and business continuity intersect. If your entire stack runs in a single cloud provider's single region, your honest answer might be "we would be offline for an extended period." NIS2 does not require multi-cloud, but it does require you to have assessed this risk and have a documented response plan.

The practical checklist

Here is what your platform team should have in place, mapped to NIS2 requirements. This is not exhaustive — it covers the areas where platform teams are most likely to be directly involved.

Logging and detection

  • Centralised log aggregation with defined retention period (minimum 30 days, 90 days recommended)
  • API server audit logging enabled with request and response metadata
  • Network flow logs at cluster boundary
  • Container runtime security events (image pulls, exec, privilege escalation)
  • Alerting on authentication failures, privilege escalations, and unusual access patterns
  • Documented mean time to detect (MTTD) for security events
  • Log integrity protection (immutable storage or cryptographic chaining)

Access control

  • Centralised identity provider integrated with all infrastructure components
  • Role-based access control with documented role definitions
  • Just-in-time or time-limited access for production environments
  • Automated deprovisioning when team members leave or change roles
  • Regular access reviews (quarterly at minimum) with documented outcomes
  • Multi-factor authentication for all infrastructure access
  • Separate credentials for CI/CD pipelines (no shared service accounts with human users)

Supply chain inventory

  • Complete register of external services and providers
  • Jurisdiction documentation for each provider (headquarters, data processing locations)
  • Risk assessment for each provider, including substitutability analysis
  • Contractual review for incident notification and data processing terms
  • Automated dependency scanning for software supply chain (container images, libraries)
  • SBOM (Software Bill of Materials) generation for deployed services
  • Regular review cadence (quarterly) for supply chain register updates

Incident response

  • Documented incident response runbook with NIS2 reporting steps integrated
  • Pre-built notification templates aligned with the 24h/72h/1-month timelines
  • Defined incident severity levels mapped to NIS2 "significant incident" criteria
  • On-call rotation with documented escalation paths including regulatory notification
  • Tested incident response process (tabletop exercise within the last 12 months)
  • Designated contact for national CSIRT/competent authority communication
  • Post-incident review process that produces the information required for the 1-month final report

Business continuity

  • Documented recovery time objectives (RTO) and recovery point objectives (RPO)
  • Tested backup and restore procedures for stateful services
  • Disaster recovery plan that addresses single-provider and single-region failure
  • Documented dependencies on third-party services for recovery
  • Annual DR test with documented results and remediation actions

What to do next

If you are starting from scratch, prioritise in this order:

Week 1-2: Supply chain inventory. Build the service dependency register. This is the most visible gap and the first thing auditors request. Start with your cloud provider accounts, then work through DNS, CDN, container registry, secrets management, monitoring, and CI/CD tooling.

Week 3-4: Incident response integration. Take your existing incident response runbook and add NIS2 reporting steps. Build the notification templates. Identify your national competent authority and CSIRT. Run a tabletop exercise with the extended timeline requirements.

Month 2: Logging and access control gaps. Audit your current logging against the checklist above. Enable what is missing. Review access control — can you answer the auditor questions listed above with current tooling, or do you need to consolidate?

Month 3: Business continuity testing. If you have not tested your disaster recovery procedure in the last 12 months, schedule it. Document the results. Identify gaps and build a remediation plan.

None of this requires buying new products. It requires your platform team to document what they have, identify gaps, and address them systematically. The organisations that handle NIS2 well are the ones where the platform team owns this work directly rather than waiting for legal or compliance to translate requirements into technical tasks.

NIS2 is not a security-team-only problem. It is an infrastructure problem. Your platform team is best positioned to solve it.