
Mar 26, 2026·9 min read
Internal Developer Portal: What It Is and When to Build Your Own
Every growing engineering organization eventually hits the same wall. Someone asks "who owns the payments service?" in Slack and waits twenty minutes for an answer. A new hire spends their first week chasing runbook links across five different wikis. A senior engineer loses a Friday afternoon provisioning a staging database for a teammate who didn't know which Terraform module to run. None of these are rare events — they're the ambient tax that accumulates daily on every team past a certain size.
An internal developer portal (IDP) is the answer to all of them: a single, authoritative interface where developers can find any service, understand who owns it, read its runbooks, see its deployment status, and provision the infrastructure they need — without filing a ticket or pinging anyone on Slack.
The Problem an IDP Solves
To understand why an IDP matters, it helps to be specific about what it replaces. At most engineering organizations without one, the following exist in parallel and out of sync: a Confluence or Notion wiki with architecture diagrams last updated eighteen months ago; a GitHub repository list with no indication of which services are active versus deprecated; a PagerDuty schedule with current on-call rotations but no link back to which services each team owns; a Terraform Cloud workspace list that only the platform team knows how to navigate; and a collection of Slack channels where institutional knowledge lives ephemerally and re-answers itself every quarter.
The result: developers spend an estimated 4–7 hours per week on friction that a well-built IDP eliminates. At a 50-person engineering team with $180k average fully-loaded cost, that's between $900k and $1.6M in annual engineering time lost to navigation and coordination overhead — not to bugs, not to features, but to finding things and asking people questions that a portal would answer in thirty seconds.
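The arithmetic behind that range is straightforward. A quick sketch, using the figures above and assuming a 40-hour work week:

```python
ENGINEERS = 50
FULLY_LOADED_COST = 180_000   # average annual cost per engineer, USD
WORK_HOURS_PER_WEEK = 40      # assumption: standard full-time week

def annual_friction_cost(friction_hours_per_week: float) -> float:
    """Fraction of the week lost to friction, applied to total payroll."""
    fraction = friction_hours_per_week / WORK_HOURS_PER_WEEK
    return ENGINEERS * FULLY_LOADED_COST * fraction

low = annual_friction_cost(4)   # lower bound: 4 hours/week
high = annual_friction_cost(7)  # upper bound: 7 hours/week
print(f"${low:,.0f} to ${high:,.0f} per year")
```

Seven hours a week is 17.5% of total engineering payroll, which is how a "small" daily tax reaches seven figures.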
An IDP consolidates all of that into one place with one source of truth. The service catalog tells you who owns the payments service. The runbook is one click from the service entry. The deployment status pulls live from your CI/CD system. Self-service provisioning replaces the Terraform question that was interrupting your platform engineer.
What an Internal Developer Portal Contains
A mature IDP has five core components. Most teams build two or three first and add the rest over six to twelve months.
Service catalog. The foundation. A structured, searchable inventory of every microservice, API, data pipeline, queue, and library your organization runs. Each entry includes the service name and description, the team and named individuals who own it, links to its repository, current deployment status across environments, the on-call rotation responsible for it, links to runbooks and incident playbooks, and its upstream and downstream dependencies. Done well, a catalog entry answers "what is this and who do I talk to?" in under thirty seconds.
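The fields listed above translate directly into a data model. A minimal sketch of one catalog entry as a typed record; the field names, team names, and URLs are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceEntry:
    """One entry in the service catalog. Fields mirror the list above."""
    name: str
    description: str
    owning_team: str
    owners: list[str]                   # named individuals, not just a team
    repo_url: str
    oncall_rotation: str                # e.g. a PagerDuty schedule ID
    runbook_urls: list[str] = field(default_factory=list)
    depends_on: list[str] = field(default_factory=list)      # upstream
    depended_on_by: list[str] = field(default_factory=list)  # downstream

# A hypothetical entry for the payments service from the opening example.
payments = ServiceEntry(
    name="payments",
    description="Card charge and refund processing",
    owning_team="payments-platform",
    owners=["a.chen", "r.okafor"],
    repo_url="https://github.com/acme/payments",
    oncall_rotation="PD-PAYMENTS-PRIMARY",
    runbook_urls=["https://wiki.acme.dev/payments/runbook"],
    depends_on=["ledger", "fraud-check"],
)
```

Keeping the record this flat is deliberate: every field answers a question a developer actually asks, and nothing requires a secondary lookup.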
Self-service infrastructure. The second most valuable component. Developers can provision what they need through a form or CLI command without filing a ticket. Common workflows: spinning up a new microservice from a template with CI/CD, Kubernetes manifests, and secret management pre-wired; requesting a new database with appropriate sizing and backup policies applied; cloning a staging environment for a feature branch. Before self-service, provisioning a new service typically takes two to five days of back-and-forth. After a well-built self-service flow, it takes under ten minutes. That collapse from days to minutes is consistent across teams that have published their results.
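The unglamorous heart of a self-service flow is request validation: rejecting a bad request instantly, with a clear reason, instead of two days into a ticket thread. A sketch of the validation step for a "new microservice from a template" request; the template names and naming rules are assumptions, not a standard:

```python
import re

# Illustrative set of service templates the platform team maintains.
TEMPLATES = {"python-service", "go-service", "data-pipeline"}

def validate_request(name: str, template: str, team: str) -> list[str]:
    """Check a 'new microservice' request before any provisioning runs.

    Returns a list of problems; an empty list means the request can proceed
    to the actual provisioning step (CI/CD wiring, manifests, secrets).
    """
    problems = []
    # Assumed convention: 3-30 chars, lowercase, digits, hyphens, letter-first.
    if not re.fullmatch(r"[a-z][a-z0-9-]{2,29}", name):
        problems.append("name must be 3-30 chars: lowercase, digits, hyphens")
    if template not in TEMPLATES:
        problems.append(f"unknown template {template!r}")
    if not team:
        problems.append("an owning team is required")
    return problems

print(validate_request("orders-v2", "go-service", "checkout"))  # []
print(validate_request("Orders!", "php-service", ""))           # three problems
```

Everything downstream of validation is ordinary automation; the portal's contribution is making the contract explicit and the feedback immediate.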
Deployment status and environment visibility. What version is running in each environment, and did the last deploy succeed? Without the portal, this requires checking CI/CD dashboards, asking teammates, or running kubectl commands. With the portal, every developer can see that production is on version 2.14.3 deployed forty minutes ago and healthy, staging is on 2.15.0-rc2 currently deploying, and canary is at 10% traffic with P99 latency within baseline. That visibility cuts "is this deployed yet?" questions that interrupt platform engineers throughout the day.
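The per-environment view above is mostly a normalization problem: each CI/CD and observability source reports status differently, and the portal's job is to flatten them into one shape. A minimal sketch, with the state vocabulary as an assumption:

```python
from dataclasses import dataclass

@dataclass
class EnvStatus:
    """Normalized deployment status for one environment."""
    environment: str
    version: str
    state: str         # assumed vocabulary: "healthy" | "deploying" | "failed"
    minutes_ago: int   # minutes since the deploy started or finished

def render_status(statuses: list[EnvStatus]) -> str:
    """Render the one-glance view a portal page would show per service."""
    return "\n".join(
        f"{s.environment}: {s.version} ({s.state}, {s.minutes_ago}m ago)"
        for s in statuses
    )

# The example from the text, as data.
print(render_status([
    EnvStatus("production", "2.14.3", "healthy", 40),
    EnvStatus("staging", "2.15.0-rc2", "deploying", 2),
    EnvStatus("canary", "2.15.0-rc2", "healthy", 15),
]))
```

In a real portal the `EnvStatus` records would be populated by pollers or webhooks from the CI/CD system; the rendering layer stays this simple.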
Documentation co-located with services. Not a replacement for your wiki — an integration with it. The IDP surfaces the runbooks, architecture decision records (ADRs), and API specs that belong to each service, directly alongside that service's catalog entry. Documentation co-located with its subject is more likely to be read and more likely to be kept current. When a developer is looking at the payments service entry and can see the runbook for "payment processor timeout" one click away, they use it.
On-call ownership and incident integration. For every service: who is currently on call, when the rotation changes, and a timeline of recent incidents. When a production alert fires at 2am, the on-call engineer should be able to open the IDP and have immediate access to the runbook, the last three incident postmortems, the current deployment, and the team's Slack channel — without navigating across five different systems under pressure.
When Backstage Makes Sense
Backstage is the default first answer when an engineering organization decides it needs an IDP. It's open-source, backed by Spotify, and has a plugin ecosystem with hundreds of integrations.
Backstage works well when your engineering organization has 80 or more engineers, you have a dedicated platform engineering team of at least three people who can own Backstage as a product, and your tooling stack aligns reasonably well with the existing plugin ecosystem.
The realistic maintenance burden is worth understanding before committing. A production Backstage deployment requires a Node.js backend and a PostgreSQL database. Plugin compatibility breaks across minor Backstage versions — upgrading from 1.14 to 1.18 typically requires auditing and updating every installed plugin. A realistic estimate from platform teams that have done this: initial setup takes 4–8 weeks before developers are using it daily, and ongoing maintenance runs 15–20% of one senior engineer's time indefinitely. For a 20–50 person engineering team without a dedicated platform team, that cost profile rarely makes sense.
When Custom Beats Backstage
A custom-built IDP is the better choice in specific situations that are more common than teams expect.
Your service metadata doesn't fit Backstage's entity model. Backstage's catalog uses a fixed entity schema. If your organization has classifications that don't map cleanly — regulatory scope, data residency flags, cost center codes, business criticality tiers — you're either hacking the entity schema or maintaining a parallel system. A custom portal models your actual metadata from day one.
You want embedded workflows, not just links. Backstage can display a link to a runbook. A custom portal can embed runbook execution — with a "Run this step" button that triggers a Lambda, a form that captures required inputs, and an audit log of who ran what and when. The difference between "here is information" and "here is a tool you can use" determines adoption.
Your team is under 30 engineers. At this scale, Backstage's maintenance overhead represents a disproportionate fraction of your platform engineering capacity. A custom portal built to exactly your current needs — perhaps just a service catalog, deployment status, and two or three self-service workflows — can be live in two to three weeks and requires almost no ongoing maintenance.
You've tried Backstage and adoption stalled. This is more common than the platform engineering community publicly acknowledges. Backstage's default UX is generic; it requires significant investment to feel like a product developers actually want to open. Teams that roll it out and watch adoption flatline at 30% often find that a custom portal built around their team's specific pain points gets higher adoption in less time.
What a Custom IDP Enables
The strongest argument for a custom build is the category of features that are technically possible in Backstage but require significant effort to implement well.
Cost-per-service reporting. A custom portal can pull from AWS Cost Explorer or GCP Billing and display, on each service's catalog entry, the infrastructure cost for the past 30 days. Engineering leads get direct visibility into cost by service without leaving the portal. In Backstage, this requires a custom plugin with its own backend, caching layer, and compatibility maintenance across Backstage versions.
Custom approval chains. When a developer requests a new production database, the workflow might need to route through the platform team for sizing review, then security if it will store PII, then finance approval above a cost threshold. A custom portal implements exactly that workflow as a first-class feature.
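The routing logic described above is small enough to show in full. A sketch of the approval-chain computation; the thresholds, team names, and rule set are illustrative assumptions about one organization's policy, not a general standard:

```python
def approval_chain(env: str, stores_pii: bool, monthly_cost_usd: float) -> list[str]:
    """Compute the ordered approvals for a new database request.

    Mirrors the example in the text: platform review for production,
    security review for PII, finance above an assumed cost threshold.
    """
    chain = []
    if env == "production":
        chain.append("platform-team")   # sizing review
    if stores_pii:
        chain.append("security")        # data-handling review
    if monthly_cost_usd > 1_000:        # assumed finance threshold
        chain.append("finance")
    return chain

print(approval_chain("production", stores_pii=True, monthly_cost_usd=2_500))
# A staging database with no PII and modest cost needs no approvals at all.
print(approval_chain("staging", stores_pii=False, monthly_cost_usd=100))
```

The point of making this a first-class portal feature is that the policy lives in one reviewable function instead of in tribal knowledge about who to ping.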
Integrated incident timeline. A custom portal can pull PagerDuty incidents, Datadog monitors, and GitHub deployments into a unified timeline on each service's page. This is the kind of feature that senior engineers immediately understand the value of and that drives repeat portal usage. Not impossible in Backstage, but the engineering investment to do it well is substantial.
Company-specific onboarding flows. A custom portal can present a new engineer with a personalized checklist: request access to these five systems, clone these repositories, run this setup script, meet these three teammates. Each step tracked in the portal. The result: 30–40% reduction in time-to-first-production-deployment for new engineers, measured at organizations that have built this.
How to Scope the MVP
The mistake most teams make when building an IDP is trying to build everything at once. The right approach is a tight MVP that solves the two or three most acute problems, ships in three to four weeks, gets used, and expands from there.
A reliable MVP scope: service catalog with ownership, on-call rotation, repository link, and deployment status. That's it. No self-service infrastructure, no embedded runbooks, no cost reporting. Just the answer to "what services do we have and who do I talk to about them?"
Ship that. Watch what developers do with it. The second most-requested feature will be obvious within a month — usually it's self-service environment provisioning or deployment visibility. Build that next. The danger of over-scoping is a three-month build that arrives bloated and under-adopted because developers weren't part of shaping it.
The Numbers That Justify the Investment
Companies that have published data on IDP rollouts report consistent outcomes: provisioning time for new services drops from 2–5 days to under 15 minutes; new engineer time-to-first-deployment falls by 30–40%; on-call escalations to senior engineers for "who owns this?" decrease by 60–70% in the first six months; platform team tickets for access and provisioning drop by 50–80%.
None of these require a perfect, fully-featured portal. They require a portal that solves real problems, stays accurate, and gets maintained. A focused custom build on a modern web stack — service catalog synced from your existing metadata sources and a handful of well-designed self-service workflows — delivers most of these gains.
An internal developer portal is one of the highest-leverage investments a platform team can make in the first hundred people of an engineering organization. The tools you give developers shape how fast they can move and how much cognitive overhead they carry every day. Getting that foundation right compounds for years.
Ready to build a developer portal your team will actually use?
We build custom internal tools and developer portals for engineering teams that want self-service infrastructure without the overhead of maintaining Backstage.
Book a discovery call →

