
PTKE — Prime Telecom Kubernetes Engine

A multi-tenant self-service platform for provisioning and operating production-grade Kubernetes clusters on Prime Telecom's sovereign OpenStack cloud.

Year 2024–2025
Status Live · production

A look at the product.

Public clouds ship "managed Kubernetes" behind a single button. Prime Telecom needed that same experience, but on its own OpenStack infrastructure — where sovereignty, data residency, and local network economics matter more than any marginal feature on AWS or GCP.

PTKE is the control plane that makes it happen: a Laravel + Vue application that lets tenants provision, scale, upgrade, back up, and operate production Kubernetes clusters on top of Rancher-managed RKE2, OpenStack compute/networking/storage, and Velero-backed disaster recovery. Every platform operation that used to live on a ticket queue — clusters, node pools, floating IPs, block volumes, object storage, add-ons, backups — is now a self-service action on a REST API and a Vue 3 UI.

Challenge

Running OpenStack means running Keystone, Nova, Neutron, Cinder, Swift, and Octavia — each with its own API surface, authentication model, and failure modes. Running Kubernetes on top means layering Rancher, RKE2, CAPI, Helm, kubeconfig files, and DaemonSets. Giving tenants a clean self-service UI over all of that — without leaking the low-level plumbing or breaking operational safety — is most of the work.

And it has to be strictly multi-tenant. Quotas per tenant, separate OpenStack projects, audit trails on every mutating action, SSH keys fanned out only to the right nodes, kubeconfig downloads that can't be abused. One leaked cluster or one cross-tenant action would kill the platform's credibility for everyone else.

Approach

Laravel 12 + Inertia v2 + Vue 3 + Tailwind CSS v4 gives operators a fast, typed experience without a separate frontend build pipeline. Wayfinder generates typed route and action clients so every Vue page talks to the backend through checked contracts. Fortify handles sessions, 2FA, and email verification; Sanctum issues API tokens for the Terraform provider and automation consumers.
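To make the "checked contracts" idea concrete, here is a minimal TypeScript sketch of what a generated typed action client enables. The names and shapes are illustrative, not Wayfinder's actual generated output: the point is that payload and response types are verified at compile time at every call site.

```typescript
// Illustrative shapes for a cluster-creation action (hypothetical names,
// not Wayfinder's real output).
interface CreateClusterPayload {
  name: string;
  nodeCount: number;
  flavor: string; // an OpenStack flavor identifier
}

interface ClusterResource {
  id: string;
  name: string;
  status: "provisioning" | "active" | "upgrading" | "error";
}

// A generated action wraps the HTTP call behind a typed contract.
// The transport is stubbed here; a real client would POST to the named
// Laravel route with session/CSRF handling.
async function createCluster(
  payload: CreateClusterPayload
): Promise<ClusterResource> {
  return { id: "c-1", name: payload.name, status: "provisioning" };
}
```

A Vue page calling `createCluster({ name: "demo" })` without `nodeCount` simply fails to compile, which is the class of drift a hand-written `fetch` call would only catch at runtime.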

The backend is structured as Controllers → Form Requests → Services → Jobs. Multi-second work always goes through the queue. Eleven OpenStack service wrappers sit in front of Keystone, Nova, Neutron, Cinder, Swift, and Octavia; a separate wrapper layer talks to Rancher (v1, v3, and CAPI) for cluster lifecycle. Twenty-eight queued jobs orchestrate provisioning, node-pool changes, backups, health checks, usage recording, and SSH key fan-out. Horizon supervises them; Reverb streams WebSocket updates back to the operator's browser so state changes feel immediate.
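The queue-first rule above can be sketched as a small state machine. This is an assumption-laden illustration, not the PTKE codebase: a controller only records intent, and a queued job advances the cluster through explicit, idempotent steps so the supervisor can safely retry failures.

```typescript
// Hypothetical provisioning states; the real job graph is richer.
type ClusterState =
  | "pending"
  | "creating_nodes"
  | "registering"
  | "ready"
  | "failed";

interface ProvisionJob {
  clusterId: string;
  state: ClusterState;
}

// Each transition stands in for one idempotent unit of work.
const transitions: Partial<Record<ClusterState, ClusterState>> = {
  pending: "creating_nodes",     // e.g. ask Nova to boot instances
  creating_nodes: "registering", // e.g. hand nodes to Rancher/RKE2
  registering: "ready",          // cluster reachable, kubeconfig issued
};

function step(job: ProvisionJob): ProvisionJob {
  const next = transitions[job.state];
  return next ? { ...job, state: next } : job;
}

// A queue worker re-dispatches until no transition remains.
function runToCompletion(job: ProvisionJob): ProvisionJob {
  let current = job;
  while (transitions[current.state]) current = step(current);
  return current;
}
```

Because each step is idempotent, a retried job re-observes the stored state and resumes rather than repeating side effects — the property that makes Horizon-supervised retries safe.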

Multi-tenancy is enforced end-to-end: every mutating action passes through a Policy with a tenant check plus a Spatie permission, quotas are applied on each resource allocation, and rate limiting runs per user and per tenant. Kubeconfig downloads are audit-logged and throttled. SSH private keys never touch the platform — an in-cluster DaemonSet (ptke-ssh-sync) distributes only public keys into Kubernetes Secrets within each tenant's own cluster.
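The layered gate described above can be sketched as one function. Names here are hypothetical (not the actual Policy classes or Spatie permission strings): a mutation proceeds only when the resource belongs to the caller's tenant, the caller holds the required permission, and the tenant still has quota headroom.

```typescript
interface User {
  id: string;
  tenantId: string;
  permissions: Set<string>; // Spatie-style permission names
}

interface Cluster {
  id: string;
  tenantId: string;
}

interface Quota {
  maxNodes: number;
  usedNodes: number;
}

type Denial = "cross-tenant" | "missing-permission" | "quota-exceeded";

// Ordering matters: the tenant check runs first so a cross-tenant probe
// learns nothing about permissions or quota state.
function authorizeAddNode(
  user: User,
  cluster: Cluster,
  quota: Quota
): true | Denial {
  if (cluster.tenantId !== user.tenantId) return "cross-tenant";
  if (!user.permissions.has("cluster.update")) return "missing-permission";
  if (quota.usedNodes >= quota.maxNodes) return "quota-exceeded";
  return true;
}
```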

The click-a-button experience of EKS, GKE, or AKS — but on sovereign, locally-operated Romanian infrastructure.

Key capabilities shipped.

  • Self-service RKE2 cluster lifecycle — provision, scale, upgrade, and clone from templates with drain policies and HA control-plane options
  • Node pool management with labels, taints, flavors, extra volumes, and cluster-autoscaler support
  • OpenStack networking: subnet browsing, floating IP allocation, and Octavia load balancer visibility
  • Cinder block volumes with online resize, Swift object storage with EC2-compatible credentials, and Velero-backed cluster backup & restore
  • Curated add-on catalog (ingress, monitoring, logging, cert-manager, external-dns…) with one-click Helm installation per cluster
  • Multi-tenant quotas, Spatie RBAC, Fortify 2FA, and per-user / per-tenant rate limiting
  • SSH public-key fan-out to every tenant node via an in-cluster DaemonSet — private keys never touch the platform
  • Full Spatie audit log, rate-limited kubeconfig downloads, and an experimental Terraform provider for infrastructure-as-code
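The public-key fan-out deserves a closer look. The sketch below shows the reconcile idea behind it with hypothetical shapes (not the real ptke-ssh-sync DaemonSet): render the desired authorized-keys material deterministically from public keys only, and write to the cluster only when it differs from what is already stored.

```typescript
// Only public key material ever reaches the platform.
interface TenantKey {
  userId: string;
  publicKey: string; // e.g. "ssh-ed25519 AAAA..."
}

// Deterministic rendering (sorted) makes the reconcile idempotent:
// the same input set always produces byte-identical output.
function renderAuthorizedKeys(keys: TenantKey[]): string {
  return keys
    .map((k) => `${k.publicKey} ${k.userId}`)
    .sort()
    .join("\n");
}

// The DaemonSet-side check: update the Secret only on real drift,
// avoiding needless writes to the tenant cluster's API server.
function needsUpdate(current: string | undefined, desired: string): boolean {
  return current !== desired;
}
```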

Grounded outcomes, measured in production.

  • 214 Vue pages — operator screens covering every OpenStack and Rancher primitive
  • 43 controllers — across clusters, networking, storage, and admin
  • 28 queued jobs — async orchestration of provisioning, scaling, backups, and health checks
  • 11 OpenStack service wrappers — Keystone, Nova, Neutron, Cinder, Swift, Octavia, and more

PTKE is in production, delivering managed Kubernetes on sovereign Romanian infrastructure. Operations that used to require a platform engineer on a ticket queue — cluster provisioning, upgrades, volume attach/detach, add-on installation, backup/restore — are now two-click self-service actions with full audit trails and a typed REST API. A beta Terraform provider extends the same contract to infrastructure-as-code workflows.

Technologies used, grouped by role.

Backend
Laravel 12 · PHP 8.3+ · PostgreSQL · Redis · Horizon · Reverb · Fortify · Sanctum · Spatie Permission
Frontend
Vue 3 · Inertia.js v2 · TypeScript · Tailwind CSS 4 · Wayfinder
Infrastructure
OpenStack (Keystone, Nova, Neutron, Cinder, Swift, Octavia) · Rancher · RKE2 · CAPI · Helm · Velero
Ops & IaC
Terraform Provider · Forgejo Actions · Pest v4 · Playwright · Laravel Pint · ESLint