Appendix D: Portal protocol
This appendix is the wire-level contract every Tela portal must implement. The Hub directories and portals chapter in the User Guide describes portals from a deployment perspective. This appendix specifies what makes something a conformant portal in protocol terms.
Portals are independent processes that aggregate hubs into a directory and
proxy authenticated administrative requests through to the hubs they list.
Awan Saya is one implementation; the planned internal/portal Go package
will be another. Both speak the protocol described here.
The protocol carves out the portal contract from the identity implementation. The contract is small (about ten endpoints, two auth modes, a JSON shape per response) and stable enough to write down. The identity implementation -- accounts, organizations, teams, billing, self-service signup -- is out of scope and lives in whatever store an implementation chooses to pair with the protocol.
Status: draft, version 1.1, identity amendment in flight. The
four open questions in the first draft of this spec were resolved on
2026-04-08 (see section 13). The decisions are baked into sections 2,
4, 5, and 11. The internal/portal Go package, the file-backed store,
the HTTP handlers, the spec-conformance test harness, the migration of
Awan Saya and the telahubd outbound portal client to the new shape,
and the cmd/telaportal single-user binary all landed in the six-commit
extraction series ending in a0677f6. The amendment in section 6
strengthens user-auth credentials to a single mandatory format (bearer
token via the Authorization header) and standardizes the OAuth 2.0
device code flow for desktop client onboarding; rationale in section
13.6. The current amendment bumps the protocol from 1.0 to 1.1 to add
stable UUIDs for hubs, agents, machine registrations, and portals. The
identity model is documented in DESIGN-identity.md; section 1.1 below
summarizes the wire shapes. Rationale and the negotiation break are in
section 13.7. Pre-1.0 the spec is still mutable; post-1.0 it follows
the version negotiation and backward-compatibility rules in section 2.
Discussion of why a portal exists at all, the scaling story, and how TelaVisor is expected to host the protocol in personal-use mode lives in ROADMAP-1.0.md under "Portal architecture: one protocol, many hosts." This document is the contract; that document is the rationale.
1. Roles
Three actors participate in the protocol:
| Role | What it does | Example |
|---|---|---|
| Portal | The HTTP service that hosts the directory. Stores hub records, authenticates clients, and proxies admin requests through to the hubs it lists. | Awan Saya, telaportal (planned), TelaVisor in Portal mode (planned). |
| Hub | A telahubd instance that registers itself with one or more portals so users can discover it without knowing its URL up front. | Any production hub. |
| Client | Anything that talks to the portal as a user. Typically a browser running the portal's web UI, or TelaVisor in Infrastructure mode. | Awan Saya web UI, TelaVisor. |
The portal speaks two distinct authentication modes for two distinct sets of endpoints:
- User auth for the directory query endpoints. The user is whoever the portal's identity store says they are. The protocol does not prescribe how user auth works; sessions, cookies, OAuth, hardcoded admin -- all legal. The protocol only requires that "this request is from user X" is determinable.
- Hub sync auth for the hub-driven /api/hubs/sync endpoint. The hub presents a sync token issued at registration time. This is an authentication mode independent of user auth.
A portal MAY also serve unauthenticated discovery (/.well-known/tela)
and any other endpoints it wants outside the protocol's scope.
1.1 Entity identity (protocol 1.1)
Protocol 1.1 introduces stable UUIDs for every entity that needs identity in the fabric. The full model is in DESIGN-identity.md; this subsection summarizes the wire-level fields a portal sees.
| Entity | Field | Generated by | Stored on |
|---|---|---|---|
| Portal | portalId | the portal, on first start | the portal's own store |
| Hub | hubId | telahubd, on first start | the hub's YAML config |
| Agent install | agentId | telad, on first start | telad.state |
| Machine registration | machineRegistrationId | the hub, on first registration of a new (agentId, machineName) pair | the hub's machine record |
Wire-level naming rules:
- All identity fields use camelCase (hubId, agentId, machineRegistrationId, portalId). There is no _id suffix and no all-caps ID form on the wire. Where context is unambiguous a field MAY be named simply id -- for example, a directory entry's id is its hub's hubId.
- UUIDs are random v4, formatted as the standard 36-character 8-4-4-4-12 hex string with dashes.
- Identity fields are not credentials. Anyone who can read the endpoint can read the IDs. Authority is established by tokens, not by knowledge of an ID.
A 1.1 portal MUST learn a hub's hubId before storing the hub in its
directory (sections 3.2 and 3.6). A 1.1 portal MUST surface the
hubId, agentId, and machineRegistrationId it learns from
upstream hubs in its directory and fleet responses (sections 3.1 and
5). A 1.1 portal MUST expose its own portalId on
/.well-known/tela (section 2).
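The generate-on-first-start, persist-forever rule above can be sketched in Go. This is a minimal illustration, not the reference implementation; newUUIDv4 and loadOrCreateID are hypothetical names, and the store path is whatever the implementation uses (the portal's store, telahubd.yaml, telad.state).

```go
// Sketch: mint a stable v4 UUID on first start and persist it so the
// same identity is returned on every later start.
package main

import (
	"crypto/rand"
	"fmt"
	"os"
	"strings"
)

// newUUIDv4 returns a random v4 UUID in the standard dashed
// 8-4-4-4-12 form required by section 1.1.
func newUUIDv4() string {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		panic(err) // no entropy; nothing sensible to do
	}
	b[6] = (b[6] & 0x0f) | 0x40 // version 4
	b[8] = (b[8] & 0x3f) | 0x80 // RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

// loadOrCreateID reads a previously persisted ID, or generates and
// persists a new one. The ID never rotates under normal operation.
func loadOrCreateID(path string) (string, error) {
	if data, err := os.ReadFile(path); err == nil {
		return strings.TrimSpace(string(data)), nil
	}
	id := newUUIDv4()
	if err := os.WriteFile(path, []byte(id+"\n"), 0o600); err != nil {
		return "", err
	}
	return id, nil
}

func main() {
	fmt.Println(newUUIDv4())
}
```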
2. Discovery and version negotiation: /.well-known/tela
A portal MUST serve a JSON document at /.well-known/tela that names
where the hub directory lives and which portal protocol versions the
portal speaks. This is the only well-known endpoint Tela defines and
is the entry point any client uses when given a portal URL. It serves
two purposes: directory discovery and protocol version negotiation.
Request
GET /.well-known/tela HTTP/1.1
Host: portal.example.com
Accept: application/json
No authentication. Portals MAY serve this with Cache-Control: public, max-age=86400 or similar long cache directives because the value rarely
changes.
Response
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
{
"hub_directory": "/api/hubs",
"protocolVersion": "1.1",
"supportedVersions": ["1.1"],
"portalId": "770e8400-e29b-41d4-a716-446655440002"
}
| Field | Type | Required | Description |
|---|---|---|---|
| hub_directory | string | yes | Path on the same origin where the portal serves the hub directory endpoints (section 3). MUST be a relative path beginning with /. Implementations SHOULD use /api/hubs as the conventional default; clients MUST honor whatever value the portal returns. |
| protocolVersion | string | yes (post-1.0) | The portal protocol version the portal recommends clients use. Major.minor semver string. The portal MUST select this from its supportedVersions list. Pre-1.0 portals MAY ship "0.x" to mark themselves as in development. |
| supportedVersions | array of strings | yes (post-1.0) | The full set of portal protocol versions this portal speaks. MUST be non-empty. MUST contain protocolVersion. Newer portals supporting older clients list multiple versions here. |
| portalId | string | yes (1.1) | The portal's stable v4 UUID. Generated on the portal's first start, persisted in the portal's store, never rotated under normal operation. Identifies the portal across URL changes; see section 1.1 and DESIGN-identity.md. |
Hub /.well-known/tela
A telahubd instance running protocol 1.1 also serves a separate
/.well-known/tela document at its own origin. The shape is similar
to the portal's but advertises a hubId instead of a portalId:
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
{
"protocolVersion": "1.1",
"supportedVersions": ["1.1"],
"hubId": "550e8400-e29b-41d4-a716-446655440000"
}
This endpoint is unauthenticated. Portals call it during a
user-initiated hub add (section 3.2 context 2) to learn the hub's
hubId before storing the record. Hubs do not advertise
hub_directory (the directory is a portal concept, not a hub one).
Version semantics
Versions follow standard semver discipline applied to a wire protocol:
- Major version bump (1.x → 2.x) signals a breaking change. Clients written for major version N MUST refuse to operate against a portal whose supportedVersions does not include any N.* entry.
- Minor version bump (1.0 → 1.1) signals an additive change: new optional fields, new endpoints, new optional query parameters. A client written against 1.0 MUST work against any 1.x portal, ignoring fields and endpoints it does not understand.
- Within a single major version, the portal protocol is strictly additive. Removing fields, renaming fields, changing field types, or changing the semantics of existing fields are all forbidden and require a major version bump. Adding new optional fields, adding new endpoints, and adding new optional query parameters are allowed in a minor version bump.
- Pre-1.0 exception, used exactly once for 1.0 → 1.1. The identity amendment introduces required fields rather than optional ones, which violates the additive-only rule above. This is allowed pre-1.0 because Tela has no backward-compatibility burden yet (see CLAUDE.md "Pre-1.0: no cruft, no backward compatibility"), and documented here so the precedent is recorded. The negotiation rule is unchanged: a 1.0 client only understands "1.0" and refuses a portal advertising ["1.1"]; a 1.1 client only understands "1.1" and refuses a portal advertising ["1.0"]. The break is clean. Section 13.7 has the rationale.
Negotiation rule
A client MUST:
- Fetch /.well-known/tela at session start.
- Read supportedVersions and select the highest version it understands (where "highest" is by semver ordering).
- Use that version's shapes and rules for the rest of the session.
- Refuse to operate (with a clear error to the user) if no version in supportedVersions matches a major version the client supports.
A client SHOULD NOT re-fetch /.well-known/tela mid-session unless it
has reason to believe the portal has been upgraded.
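The selection step of the negotiation rule can be sketched as a small pure function. This is an illustrative sketch: the map of major versions the client understands is an assumption, and real clients would also validate semver strings more strictly.

```go
// Sketch of the client-side negotiation rule: pick the highest
// version in supportedVersions whose major version the client
// understands, or return "" to signal refusal.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVer splits "major.minor" into two ints; ok is false on junk.
func parseVer(v string) (int, int, bool) {
	parts := strings.SplitN(v, ".", 2)
	if len(parts) != 2 {
		return 0, 0, false
	}
	maj, err1 := strconv.Atoi(parts[0])
	min, err2 := strconv.Atoi(parts[1])
	return maj, min, err1 == nil && err2 == nil
}

// selectVersion returns the highest supported version the client
// understands (by semver ordering), or "" if no major matches.
func selectVersion(supported []string, clientMajors map[int]bool) string {
	best := ""
	bestMaj, bestMin := -1, -1
	for _, v := range supported {
		maj, min, ok := parseVer(v)
		if !ok || !clientMajors[maj] {
			continue
		}
		if maj > bestMaj || (maj == bestMaj && min > bestMin) {
			best, bestMaj, bestMin = v, maj, min
		}
	}
	return best
}

func main() {
	got := selectVersion([]string{"1.0", "1.1"}, map[int]bool{1: true})
	if got == "" {
		fmt.Println("no compatible protocol version; refusing to operate")
		return
	}
	fmt.Println("using", got)
}
```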
Fallback
If /.well-known/tela is not served (HTTP 404, network error, malformed
JSON), clients MUST fall back to:
- hub_directory: "/api/hubs" (the conventional default)
- protocolVersion: "0" (the unversioned legacy contract, equivalent to the shape this document describes minus the post-1.0 negotiation rules)
This preserves compatibility with portals that predate this document.
A client that has fallen back to protocolVersion: "0" MUST NOT assume
any field beyond what was documented in the legacy contract. In
particular, a client in legacy fallback mode MUST NOT assume any
identity field defined in 1.1 is present.
telahubd's reference client implements the discovery + fallback in
internal/hub/hub.go discoverHubDirectory().
Parallel with the hub wire format
The same negotiation pattern is the obvious answer for the hub wire protocol (the WebSocket protocol between agents/clients and the hub). ROADMAP-1.0.md "Protocol freeze" calls this out as a 1.0 blocker for the hub side. The two protocols are independent and version independently, but they should share the same discipline: well-known discovery surface, additive-only minor versions, breaking changes require a major bump and a refusal-to-talk on mismatch.
3. Hub directory: {hub_directory} endpoints
The hub directory is a small REST resource. The path prefix is whatever
/.well-known/tela returns; in the conventional case that is /api/hubs,
and the rest of this document uses that path for clarity. A portal that
returns a different prefix MUST serve the same shapes under that prefix.
3.1 GET /api/hubs -- list visible hubs
Returns the list of hubs the authenticated user can see. The portal applies whatever visibility rules its identity store dictates: in Awan Saya, that is org/team membership; in a single-user portal, the user sees every hub.
Request:
GET /api/hubs HTTP/1.1
Authorization: <user auth, implementation-specific>
Response:
{
"hubs": [
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"name": "myhub",
"url": "https://hub.example.com",
"canManage": true,
"orgName": "acme"
}
]
}
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | yes (1.1) | The hub's hubId. The portal learned this during the register flow (section 3.2) or via /.well-known/tela on the hub URL. Stable across name and url changes; clients SHOULD prefer it as the primary identity key when correlating hubs across portal sources. See section 1.1. |
| name | string | yes | Short hub name. Unique within the portal. Used as the addressable identifier in proxy paths. |
| url | string | yes | Public hub URL. Either https://... (HTTP+WSS) or http://... (HTTP+WS). The hub's own admin API and WebSocket endpoint live under this URL. |
| canManage | bool | yes | True when the authenticated user has admin or owner permission on this hub. Drives whether the client surfaces management actions in its UI. |
| orgName | string | no | Free-form display label for the organizational scope this hub belongs to, if the portal models orgs. May be null or omitted. Single-user portals can return null everywhere. |
Authentication failures return 401 Unauthorized with the standard error
shape (section 7). An empty hub list is 200 OK with {"hubs": []}, not
an error.
3.2 POST /api/hubs -- register or update a hub
Adds a new hub to the portal directory or updates an existing one with
the same name. This endpoint is called from two distinct contexts:
- Hub-initiated bootstrap. The hub itself runs registerWithPortal from telahubd, presenting an admin token issued by an out-of-band means (typically a portal admin paste). The portal verifies the admin token, creates a hub record, and returns a fresh sync token.
- User-initiated add. A logged-in user adds a hub through the portal UI by entering its URL and a viewer token. No admin token is involved; the portal authenticates the user via its session.
Request:
POST /api/hubs HTTP/1.1
Content-Type: application/json
Authorization: Bearer <admin-token> # context 1
Authorization: <user session> # context 2
{
"name": "myhub",
"url": "https://hub.example.com",
"hubId": "550e8400-e29b-41d4-a716-446655440000",
"viewerToken": "<optional 64-char hex>",
"adminToken": "<optional, context 2 only>"
}
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | yes | Short hub name. Must be unique within the portal. Maximum length is implementation-defined (Awan Saya enforces 255). |
| url | string | yes | Public hub URL. Maximum length is implementation-defined (Awan Saya enforces 2048). |
| hubId | string | yes (1.1, context 1) | The hub's stable hubId. The hub presents its own hubId from telahubd.yaml. The portal stores it on the hub record; it is never updated by PATCH /api/hubs. |
| viewerToken | string | no | The hub's console-viewer role token, if the portal will host a web console for the hub. |
| adminToken | string | no | The hub's owner or admin token. The portal stores this so it can proxy admin requests later (section 4); the protocol does NOT echo it back in any response. Portals MUST treat stored admin tokens as secrets. |
In context 2 (user-initiated add), the request body MAY omit
hubId. The portal MUST then call GET /.well-known/tela on the
url and read hubId from the response. If that call fails, returns
non-JSON, returns a 1.0 well-known document without hubId, or the
hub is unreachable, the portal MUST refuse the registration with
502 Bad Gateway and an error body explaining the discovery failure.
A 1.1 portal MUST NOT store a hub record without a hubId.
Response:
{
"hubs": [
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"name": "...",
"url": "...",
"canManage": true,
"orgName": null
}
],
"syncToken": "hubsync_AbC123...",
"updated": false
}
| Field | Type | Required | Description |
|---|---|---|---|
| hubs | array | yes | The user's full hub list after the registration, in the same shape as GET /api/hubs (including the id field per section 1.1). |
| syncToken | string | when context 1 | A fresh sync token the hub will use for PATCH /api/hubs/sync (section 3.3). MUST start with the prefix hubsync_ so clients can distinguish it from other token classes. The portal stores its hash; the cleartext is returned exactly once. Portals MAY omit this field for context-2 calls (user-initiated adds). |
| updated | bool | no | True when the registration upserted an existing record rather than creating a new one. Default false. |
A hub that is registered a second time with the same hubId MUST be
upserted (the portal updates name, url, viewerToken, and the
stored admin token, and issues a new sync token). The hub then learns
the new sync token from the response and persists it. This is how a
hub recovers from losing its sync token: re-register with the same
admin token. Identity matching by hubId is what makes a renamed hub
upsert into the existing record rather than creating a duplicate; in
1.0 the upsert was keyed on name and renaming a hub looked like a
new registration.
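The upsert-by-hubId rule can be sketched with an in-memory store. This is illustrative only: the record fields are trimmed, and the name-collision and token-issuance steps from this section are noted but omitted.

```go
// Sketch of 1.1 registration: keyed on hubId, so re-registering a
// renamed hub updates the existing record instead of duplicating it.
package main

import "fmt"

type hubRecord struct {
	HubID, Name, URL string
}

type store struct {
	byHubID map[string]*hubRecord
}

// register upserts by hubId and reports whether an existing record
// was updated. Name-collision checks and sync-token issuance are
// omitted for brevity.
func (s *store) register(rec hubRecord) (updated bool) {
	if existing, ok := s.byHubID[rec.HubID]; ok {
		existing.Name, existing.URL = rec.Name, rec.URL
		return true
	}
	cp := rec
	s.byHubID[rec.HubID] = &cp
	return false
}

func main() {
	s := &store{byHubID: map[string]*hubRecord{}}
	s.register(hubRecord{HubID: "550e8400", Name: "myhub", URL: "https://hub.example.com"})
	updated := s.register(hubRecord{HubID: "550e8400", Name: "myhub-renamed", URL: "https://hub.example.com"})
	fmt.Println(updated, len(s.byHubID)) // the rename upserted: true 1
}
```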
A portal MUST reject a context-1 registration whose hubId is missing
or whose hubId matches an existing record but whose name collides
with a different hub belonging to another user. The exact response
shape on a name collision is implementation-defined; 409 Conflict
with the standard error body is recommended.
Authorization failures: 401 Unauthorized if no valid auth, 403 Forbidden if the user is authenticated but not authorized to add a hub
under the requested scope (e.g. organization quota reached).
3.3 PATCH /api/hubs/sync -- hub pushes its viewer token
Authenticated by the per-hub sync token, not by user session. This endpoint is the only one in the protocol that uses sync auth; it exists so a hub can refresh its viewer token at the portal without involving a user.
Request:
PATCH /api/hubs/sync HTTP/1.1
Content-Type: application/json
Authorization: Bearer hubsync_AbC123...
{ "name": "myhub", "viewerToken": "<new 64-char hex>" }
| Field | Required | Description |
|---|---|---|
| name | yes | The hub name as registered. |
| viewerToken | yes | The new console-viewer token the portal should store. |
Response:
{ "ok": true }
The portal MUST verify the sync token using a timing-safe comparison
against the hash it stored during registration. Mismatched tokens
return 401. Unknown hub names return 404.
This endpoint MUST NOT accept user auth. A user wishing to update a
hub's viewer token does so through PATCH /api/hubs (section 3.4),
which is user-authenticated.
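The timing-safe verification this section requires can be sketched with the standard library: store SHA-256(token) at registration, hash the presented cleartext, and compare with crypto/subtle.

```go
// Sketch of sync-token verification: hash-at-rest plus a
// constant-time comparison, per section 3.3.
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// hashToken is what the portal stores at registration time.
func hashToken(token string) [32]byte {
	return sha256.Sum256([]byte(token))
}

// verifySyncToken hashes the presented cleartext and compares it to
// the stored hash without leaking timing information.
func verifySyncToken(presented string, storedHash [32]byte) bool {
	h := sha256.Sum256([]byte(presented))
	return subtle.ConstantTimeCompare(h[:], storedHash[:]) == 1
}

func main() {
	stored := hashToken("hubsync_AbC123")
	fmt.Println(verifySyncToken("hubsync_AbC123", stored)) // true
	fmt.Println(verifySyncToken("hubsync_wrong", stored))  // false
}
```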
3.4 PATCH /api/hubs -- user updates a hub record
User-authenticated update of any field on an existing hub the user can manage. The body is a partial update; only the fields present are changed.
Request:
PATCH /api/hubs HTTP/1.1
Content-Type: application/json
Authorization: <user session>
{
"currentName": "myhub",
"name": "myhub-renamed",
"url": "https://hub.example.com",
"viewerToken": "...",
"adminToken": "..."
}
| Field | Required | Description |
|---|---|---|
| currentName | yes | The current name of the hub to update. |
| name | no | New hub name. |
| url | no | New hub URL. |
| viewerToken | no | New viewer token. |
| adminToken | no | New admin token (stored as a secret; never echoed back). |
PATCH /api/hubs MUST NOT change the stored hubId. A request body
that includes a hubId field SHOULD be rejected with 400 Bad Request; clients MUST NOT include it. Hub identity is set on
registration and is not user-mutable.
Response: same shape as GET /api/hubs, reflecting the post-update list.
3.5 DELETE /api/hubs -- user removes a hub
DELETE /api/hubs?name=myhub HTTP/1.1
Authorization: <user session>
The hub name is passed as a query parameter, not in the request body, so
clients can use DELETE without a body. A portal MAY accept the name in
a JSON body too, but the query-parameter form is normative.
Authorization MUST be more restrictive than read access: only hub owners, organization owners, or platform admins can delete (in Awan Saya, hub admins explicitly cannot delete). The exact rule is implementation-defined; the protocol only requires that delete is gated tighter than read.
Response: same shape as GET /api/hubs, reflecting the post-delete list.
4. Admin proxy: /api/hub-admin/{hubName}/{operation}
A portal MUST expose an HTTP proxy that lets authenticated users invoke the hub's admin API without having direct network reachability or needing the hub's admin token. The portal holds the admin token (stored during registration) and forwards the request on the user's behalf.
The proxy URL is:
{portal-base-url}/api/hub-admin/{hubName}/{operation}
Where:
- {hubName} is the short hub name (URL-encoded if it contains special characters).
- {operation} is the hub admin path without the leading /api/admin/ prefix. Examples: access, agents/barn/logs, update, pair-code, tokens, restart. The portal MUST internally prepend /api/admin/ before forwarding to the hub.
The portal MUST NOT accept the legacy double-prefix form
/api/hub-admin/{hubName}/api/admin/{operation}. Clients MUST use the
short form. This is the canonical shape for portal protocol version 1.0
and onward; portals advertising protocolVersion: "0" (legacy fallback
per section 2) used the double-prefix form.
The reason: the portal's /api/hub-admin/ namespace and the hub's
/api/admin/ namespace are unrelated paths that happened to share a
prefix string. Carrying both in one URL was a coincidence of how the
two projects independently organized their admin endpoints, not a
structural relationship. The shorter form decouples the portal URL
shape from the hub URL shape: if the hub ever moved its admin API to
a different path, the portal proxy URL would not change.
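The path rewrite and the legacy-form rejection can be sketched as a small function. The function name is illustrative; a real proxy would additionally handle URL decoding of the hub name.

```go
// Sketch of the proxy rewrite: /api/hub-admin/{hubName}/{operation}
// maps to /api/admin/{operation} on the hub, and the legacy
// double-prefix form is rejected per section 4.
package main

import (
	"fmt"
	"strings"
)

// upstreamPath maps a portal proxy path to (hubName, hub admin path).
func upstreamPath(proxyPath string) (hub, path string, err error) {
	rest, ok := strings.CutPrefix(proxyPath, "/api/hub-admin/")
	if !ok {
		return "", "", fmt.Errorf("not a proxy path")
	}
	hub, op, ok := strings.Cut(rest, "/")
	if !ok || hub == "" || op == "" {
		return "", "", fmt.Errorf("missing hub name or operation")
	}
	// The legacy double-prefix form is not accepted in 1.x.
	if strings.HasPrefix(op, "api/admin/") {
		return "", "", fmt.Errorf("legacy double-prefix form rejected")
	}
	return hub, "/api/admin/" + op, nil
}

func main() {
	hub, path, err := upstreamPath("/api/hub-admin/myhub/agents/barn/logs")
	fmt.Println(hub, path, err) // myhub /api/admin/agents/barn/logs <nil>
}
```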
4.1 Method passthrough
The proxy MUST forward the original HTTP method unchanged. The Tela hub
admin API uses real REST verbs (GET, POST, PUT, PATCH, DELETE),
and downgrading any of them collapses semantics. In particular,
PATCH /api/hub-admin/myhub/update is how a user changes a
hub's release channel through the portal, and any portal that folds
PATCH into POST breaks that path.
4.2 Body and query string passthrough
The proxy MUST forward the original request body byte-for-byte for
methods other than GET and HEAD. The proxy MUST also preserve the
original query string. The portal MUST set Authorization: Bearer <storedAdminToken> on the outbound request and MUST NOT pass through
the inbound Authorization header.
4.3 Response passthrough
The proxy MUST return the upstream response status code and body
unchanged. It SHOULD set Content-Type: application/json and
Cache-Control: no-cache on the response.
4.4 Authorization
A portal MUST require user auth on every proxy call and MUST verify that
the user has canManage on the named hub before forwarding. A user
without manage permission gets 403 Forbidden. A user calling a hub
they cannot see at all gets 404 Not Found. The portal MUST NOT leak
the existence of a hub to users who cannot see it.
4.5 Failure modes
| Condition | Status |
|---|---|
| User not authenticated | 401 |
| Hub does not exist OR user cannot see it | 404 |
| User can see hub but lacks canManage | 403 |
| Portal has no admin token stored for this hub | 400 with body {"error":"no admin token stored for this hub"} |
| Hub is unreachable / network error | 502 with body {"error":"hub unreachable"} |
| Hub responded with any status code | passthrough (the portal does not interpret the upstream response) |
5. Fleet aggregation: GET /api/fleet/agents
A portal MUST expose an aggregated view of every agent across every hub the user can manage. This is the endpoint TelaVisor and the Awan Saya web UI use to populate the cross-hub Agents tab.
This is the only aggregation endpoint in the protocol. Per-agent actions (restart, update, logs, config-get, config-set, update-status, update-channel, etc.) go through the generic admin proxy (section 4), not through a fleet-specific URL. The aggregation lives in the protocol because it does work no client can replicate efficiently in a single call: the portal already holds per-hub viewer tokens, already iterates the user's hubs to compute the directory list, and is the natural place to handle per-hub timeouts as a unit. Pushing that work to clients would force every client (TelaVisor, Awan Saya, future frontends) to reimplement iteration, token lookup, and timeout handling.
Request
GET /api/fleet/agents HTTP/1.1
Authorization: <user session>
Optional query parameters:
| Parameter | Description |
|---|---|
| orgId | Restrict the response to hubs in the given org scope. Implementation-defined; portals that do not model orgs MAY ignore this parameter. |
Response
{
"agents": [
{
"id": "barn",
"agentId": "660e8400-e29b-41d4-a716-446655440001",
"machineRegistrationId": "880e8400-e29b-41d4-a716-446655440003",
"hub": "myhub",
"hubId": "550e8400-e29b-41d4-a716-446655440000",
"hubUrl": "https://hub.example.com",
"online": true,
"version": "v0.6.0-dev.42",
"hostname": "barn.local",
"os": "linux",
"displayName": "Barn",
"tags": ["lab"],
"location": "garage",
"owner": null,
"lastSeen": "2026-04-08T03:14:00Z",
"sessionCount": 0,
"services": [{"port": 22, "name": "SSH"}],
"capabilities": {"fileShare": true}
}
]
}
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | yes | The machine name (display label). Not stable across renames; use agentId or machineRegistrationId for identity. |
| agentId | string | yes (1.1) | The agentId the agent presented on registration. Stable across machine renames and across hubs (the same telad install on two hubs reports the same agentId to both). The primary identity key for cross-hub correlation. See section 1.1. |
| machineRegistrationId | string | yes (1.1) | The hub-local UUID generated when the hub first saw this (agentId, machineName) pair. Stable across reconnects on this hub but unique per hub: the same agent registered with two hubs gets two different machineRegistrationIds. Use it as the per-hub primary key. |
| hub | string | yes | The hub's display name. |
| hubId | string | yes (1.1) | The hub's hubId, mirrored from the hub's /.well-known/tela or its registration record. Identity for the containing hub. |
| hubUrl | string | yes | The hub's URL. |
The portal MUST iterate the user's manageable hubs, query each hub's
/api/status endpoint with the stored viewer token, and merge the
machines arrays into a flat list. Each agent record MUST include
hub, hubId, and hubUrl for the hub the agent belongs to, and
the agentId and machineRegistrationId learned from the hub's
status response. The portal MUST NOT modify the identity fields; they
are passthroughs from the hub's /api/status shape (DESIGN-identity.md
section 6.2). If a hub is unreachable, the portal SHOULD log and skip
it (returning agents from the reachable hubs rather than failing the
whole request).
A portal MAY encounter a 1.0 hub that does not yet expose hubId,
agentId, and machineRegistrationId in its status response. The
portal MUST omit those identity fields from the corresponding fleet
entries rather than fabricating placeholder values. Clients reading
fleet results MUST tolerate identity fields being absent on entries
sourced from 1.0 hubs and SHOULD surface such hubs as "legacy hub --
needs upgrade" in their UI. Per the destroy-and-rebuild policy in
section 13.7, this case is transitional and not expected to persist
beyond the rollout window.
A portal MAY add additional fields to each agent record, but clients MUST tolerate unknown fields and MUST NOT break if a portal omits any optional field.
Per-agent actions go through the admin proxy
To send a management action to a specific agent, use the admin proxy (section 4):
POST /api/hub-admin/myhub/agents/barn/restart HTTP/1.1
Content-Type: application/json
Authorization: <user session>
{}
This forwards to the hub's POST /api/admin/agents/barn/restart. Known
actions include config-get, config-set, logs, restart, update,
update-status, update-channel. Future actions added to the hub work
without portal changes because the proxy is generic.
6. Authentication
The protocol distinguishes three credential types. Each endpoint requires exactly one of them, listed in section 6.1.
6.1 Auth summary
| Endpoint | Auth |
|---|---|
| /.well-known/tela | none |
| POST /api/oauth/device (section 6.3) | none |
| POST /api/oauth/token (section 6.3) | device code in body |
| GET /device (section 6.3) | user, browser session |
| GET /api/hubs | user |
| POST /api/hubs (hub bootstrap) | hub admin token |
| POST /api/hubs (user add) | user |
| PATCH /api/hubs/sync | hub sync token (hubsync_*) |
| PATCH /api/hubs | user |
| DELETE /api/hubs | user |
| /api/hub-admin/{name}/... | user, gated on canManage |
| GET /api/fleet/agents | user |
6.2 User auth credentials
Every endpoint marked "user" in section 6.1 MUST accept a bearer
token in the Authorization header:
Authorization: Bearer <token>
The token format is implementation-defined; portals SHOULD use a long, opaque, cryptographically random string. The protocol does not prescribe how the portal validates the token (database lookup, JWT verification, signed cookie reuse, all are legal) nor where the token came from (see section 6.3 for the standard issuance flow).
A portal MAY additionally accept other credential forms — session
cookies for browser users, mTLS for service-to-service callers — but
bearer-token auth on the Authorization header MUST work alongside
whatever else the portal accepts. This guarantees that a desktop
client written against this spec can reach any conformant portal
without an embedded webview, a redirect URI, or knowledge of the
portal's specific session implementation.
Awan Saya implements both: a session cookie set by the web sign-in
flow takes precedence, and a bearer token in the Authorization
header is checked as a fallback. Both resolve to the same account.
TelaVisor in Portal mode does the same thing for its embedded
loopback portal: a bearer token is generated at process start and
written to ~/.tela/run/portal-endpoint.json alongside the loopback
port; every portal call from TelaVisor uses that bearer token, and
external local tools can read the file to authenticate.
6.3 Device code flow for desktop clients
A portal SHOULD implement the OAuth 2.0 Device Authorization Grant (RFC 8628) so desktop clients can sign a user in without an embedded browser, a redirect URI, or a client secret. Single-user portals (file-backed, no account model) MAY skip this section because the operator configures the bearer token out of band.
The flow has three machine-facing endpoints and one user-facing page:
POST /api/oauth/device
The desktop client initiates the flow. No auth required.
Request body: empty or {}.
Response (200):
{
"device_code": "GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS",
"user_code": "WDJB-MJHT",
"verification_uri": "https://portal.example.com/device",
"expires_in": 900,
"interval": 5
}
The client displays the user_code and verification_uri to the
user and starts polling.
POST /api/oauth/token
The desktop client polls for an access token. No auth required; the
device_code is the credential.
Request body:
{
"grant_type": "urn:ietf:params:oauth:grant-type:device_code",
"device_code": "GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS"
}
Response (200) when the user has approved on the verification page:
{
"access_token": "<bearer token to use on subsequent calls>",
"token_type": "Bearer"
}
Response (400) while waiting for approval:
{ "error": "authorization_pending" }
Response (400) after expires_in elapses:
{ "error": "expired_token" }
Response (400) when the device code is unknown, expired, or revoked:
{ "error": "access_denied" }
Polling clients SHOULD honor the interval value from the device
code response and SHOULD back off on slow_down errors per RFC 8628
section 3.5.
GET /device
The user-facing approval page. The user opens it in a browser (the
URL was returned as verification_uri), enters the user_code,
signs into the portal if not already signed in, and approves the
device. The page MAY accept the user code as a query parameter
(?user_code=WDJB-MJHT) for convenience.
The HTML and UX of this page are not specified. The only contract is
that completing the approval flow MUST cause the next
POST /api/oauth/token poll for that device code to return a
successful access token response.
Issuance and lifetime notes
- The access token returned is a regular bearer token; section 6.2 governs how it is used after issuance.
- Tokens issued via device code MAY have an expiration. The protocol does not prescribe a refresh-token mechanism. An expired token returns 401 to the client and the client restarts the device code flow.
- A portal MUST NOT reuse a device_code after a successful token exchange; each device code grants exactly one access token.
- A portal that does not implement device code MUST still accept bearer tokens issued by some other means (admin-configured static token, web UI personal access token, etc.). Device code is the standard issuance path for desktop clients, not the only legal credential.
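The single-use rule can be illustrated server-side. This in-memory store, its field names, and the sentinel empty-token convention are all assumptions for illustration; only the error strings and the consume-on-exchange behavior come from the spec.

```go
// Sketch of the one-shot device-code rule: a successful token exchange
// consumes the device code, so a second poll with the same code fails.
package main

import "sync"

type deviceCodeStore struct {
	mu      sync.Mutex
	pending map[string]string // device_code -> access token ("" until approved)
}

// exchange returns the token exactly once; later calls see "access_denied".
func (s *deviceCodeStore) exchange(deviceCode string) (token string, errCode string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	tok, ok := s.pending[deviceCode]
	if !ok {
		return "", "access_denied" // unknown, expired, or already used
	}
	if tok == "" {
		return "", "authorization_pending" // user has not approved yet
	}
	delete(s.pending, deviceCode) // consume: each code grants one token
	return tok, ""
}
```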
7. Error shape
All error responses MUST be JSON with at least an error field:
{ "error": "human-readable message" }
Status codes follow standard REST conventions:
| Code | Meaning |
|---|---|
| 400 | Bad request (malformed body, missing required fields) |
| 401 | Authentication required or failed |
| 403 | Authenticated but not authorized for this operation |
| 404 | Resource not found, OR resource exists but the user cannot see it (do not leak existence) |
| 409 | Conflict (e.g. registering a hub name that already exists, in older portals that do not support upsert) |
| 502 | Upstream hub unreachable |
| 5xx | Portal-side error |
Portals MAY add additional fields to error responses (e.g. code,
details) but MUST always include error.
8. Sync token format
Sync tokens issued by POST /api/hubs (section 3.2) MUST start with the
prefix hubsync_ so clients can distinguish them from user session
tokens, hub admin tokens, viewer tokens, and pair codes. The remainder
SHOULD be at least 32 bytes of cryptographic randomness, encoded in a
URL-safe alphabet.
The portal MUST store only the SHA-256 hash of the sync token, not the
cleartext. The cleartext is returned exactly once in the registration
response and the hub MUST persist it to its update.portals[name].syncToken
field for use in PATCH /api/hubs/sync (section 3.3).
If a hub loses its sync token, the recovery procedure is to re-register
with POST /api/hubs and a fresh admin token: the portal upserts the
record and issues a new sync token, which the hub stores.
9. CORS and origin policy
A portal SHOULD reject cross-origin state-changing requests (POST,
PUT, PATCH, DELETE) unless the request origin is on an explicit
allowlist. Awan Saya does this via an isOriginAllowed check; the
protocol does not prescribe the allowlist format.
/.well-known/tela and GET /api/hubs SHOULD be CORS-permissive
(Access-Control-Allow-Origin: *) so any client can discover and read.
10. What is not in the protocol
The following are explicitly NOT part of the portal protocol. They are SaaS concerns of specific implementations and have no place in any client that talks to the portal:
- Account / user lifecycle. Sign up, password reset, email verification, MFA, account deletion. Awan Saya implements these under /api/sign-up, /api/me/*, /api/forgot-password, /api/admin/*. None of those routes are part of this spec; a single-user portal does not implement them.
- Organization, team, and membership management. Inviting users to hubs, switching the active org, granting support access. Awan Saya implements /api/hubs/{name}/invitations, /api/hubs/{name}/members, /api/me/organization, etc. Out of scope.
- Billing, plans, and tier limits. Awan Saya enforces a max_hubs limit per organization; that is policy on top of the protocol, not the protocol.
- Audit logging. Portals MAY log activity, but no API surface for reading audit logs is part of the protocol.
- The hub's own admin API. Portals proxy to it (section 4) but they do not extend or reinterpret it. Anything addressed in the hub's internal/hub/admin_api.go belongs to that surface, not this one.
A portal that implements only the routes in this spec is a valid Tela portal. Awan Saya is a valid Tela portal that ALSO implements the SaaS surface above. A future TelaVisor Portal mode would be a valid Tela portal that omits the SaaS surface entirely.
11. Conformance checklist
To call yourself a Tela portal, you must:
- Serve /.well-known/tela (section 2) including the protocolVersion and supportedVersions fields
- Advertise protocolVersion: "1.1" and supportedVersions: ["1.1"] on /.well-known/tela, plus a stable portalId v4 UUID generated and persisted on first start (section 1.1)
- Honor the version negotiation rule: refuse clients whose major version is not in supportedVersions, treat the protocol as strictly additive within a major version (with the documented pre-1.0 1.0→1.1 break in section 13.7)
- Serve GET /api/hubs (section 3.1) returning the documented shape, including the id field carrying each hub's hubId
- Serve POST /api/hubs (section 3.2) supporting both hub-bootstrap and user-add contexts, returning a hubsync_* sync token in the bootstrap context. Require hubId in context-1 bodies; for context-2 bodies that omit hubId, discover the hub's hubId via GET /.well-known/tela on the hub URL before storing the record. Refuse the registration if no hubId can be obtained.
- Treat hubId as immutable: PATCH /api/hubs MUST NOT change it (section 3.4)
- Serve PATCH /api/hubs/sync (section 3.3) authenticated by sync token, with timing-safe comparison
- Serve PATCH /api/hubs and DELETE /api/hubs?name= (sections 3.4, 3.5)
- Serve /api/hub-admin/{hubName}/{operation} (section 4) where {operation} is the hub admin path without the /api/admin/ prefix, preserving method, body, and query string; gated on canManage. Refuse the legacy double-prefix form.
- Serve GET /api/fleet/agents (section 5) returning the merged cross-hub agent list, including the agentId, machineRegistrationId, and hubId identity fields on every entry sourced from a 1.1 hub
- Accept bearer-token user auth via Authorization: Bearer <token> on every endpoint marked "user" in section 6.1, alongside any other credential forms the portal supports (section 6.2)
- (SHOULD, not MUST for single-user portals) Implement the OAuth 2.0 device code flow at POST /api/oauth/device, POST /api/oauth/token, and GET /device (section 6.3) so desktop clients can sign users in without an embedded browser
- Return errors in the documented JSON shape (section 7)
- Store sync tokens as SHA-256 hashes only (section 8)
You MAY implement additional endpoints, but a client written against this spec MUST work against your portal without knowing about them.
You MUST NOT implement any of the following endpoints, which were considered and removed during the 1.0 spec finalization (see section 13 for the rationale):
- POST /api/fleet/agents/{hub}/{machine}/{action} -- use the admin proxy at POST /api/hub-admin/{hub}/agents/{machine}/{action} instead.
- POST /api/hubs/{hubName}/pair-code -- use the admin proxy at POST /api/hub-admin/{hubName}/pair-code instead.
- POST /api/hub-admin/{hubName}/api/admin/{operation} (the legacy double-prefix admin proxy form) -- use the short form /api/hub-admin/{hubName}/{operation} instead.
12. Reference implementations
| Implementation | Status | Storage | Identity model |
|---|---|---|---|
| Awan Saya | Production | PostgreSQL | Multi-org with accounts, organizations, teams, and hub memberships. |
| internal/portal (Go) | Shipping | Pluggable (file-backed today; postgres adapter planned) | Single-user (file store) or multi-user via the same auth interface. |
| cmd/telaportal | Shipping | File-backed (internal/portal/store/file) | Single-user, no account model. |
| TelaVisor "Portal mode" | Shipping | Embedded internal/portal over the file store | Single-user, in-process, loopback only. |
The telahubd outbound portal client lives in internal/hub/hub.go:
- discoverHubDirectory() — reads /.well-known/tela (section 2)
- registerWithPortal() — POST /api/hubs (section 3.2)
- syncViewerTokenToPortals() — PATCH /api/hubs/sync (section 3.3)
These functions are the canonical client and any new portal MUST keep them working.
13. Resolved decisions
This section records the four open questions the first draft of this
spec deferred and the decisions made before the internal/portal
extraction was scheduled. The decisions are baked into the rest of
this document; this section exists to document the rationale so the
reasoning is preserved.
13.1 Protocol versioning: yes, on /.well-known/tela
Decision. The portal protocol gains a version field on
/.well-known/tela (section 2). Two new fields, protocolVersion
and supportedVersions, are required post-1.0. Pre-1.0 fallback for
portals that do not yet ship the fields is explicit and documented.
Why this and not the alternatives. Three options were on the
table: (A) no version field, strict additive-only rule post-1.0; (B)
version field on /.well-known/tela only, used for discovery-time
negotiation; (C) version field on every response, plus discovery.
Option B was chosen because /.well-known/tela is already the right
place for capability discovery in any HTTP API (RFC 8615), the
negotiation happens once per session rather than on every call, and
it future-proofs the protocol without polluting every response shape.
Option A leaves no graceful upgrade path for breaking changes; option
C protects against a non-existent failure mode (a portal silently
upgrading mid-session) at the cost of a field on every response.
The same pattern is the obvious answer for the hub wire protocol under ROADMAP-1.0.md "Protocol freeze." The two protocols are independent and version independently, but they should share the same discipline.
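The discovery-time check is small. This sketch assumes the field names from section 2 and one reading of the negotiation rule, under which membership is checked by exact version, so a 1.0 client refuses supportedVersions: ["1.1"] exactly as section 13.7 describes; the type and function names are illustrative.

```go
// Sketch of client-side version negotiation against /.well-known/tela.
package main

// discovery mirrors the section-2 fields this check needs.
type discovery struct {
	ProtocolVersion   string   `json:"protocolVersion"`
	SupportedVersions []string `json:"supportedVersions"`
}

// speaksMyVersion reports whether the portal advertises the client's
// protocol version. Checked once per session, at discovery time.
func speaksMyVersion(clientVersion string, d discovery) bool {
	for _, v := range d.SupportedVersions {
		if v == clientVersion {
			return true
		}
	}
	return false
}
```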
13.2 Admin proxy URL shape: short form only
Decision. The proxy URL is
/api/hub-admin/{hubName}/{operation} where {operation} is the hub
admin path without the /api/admin/ prefix. The legacy
double-prefix form is forbidden in protocol version 1.0 and onward
(section 4).
Why this and not the alternatives. Two options were on the table: (A) keep the double-prefix form as historical accident, document the duplication as incidental; (B) strip the prefix and forbid the legacy form pre-1.0.
Option B was chosen because the no-cruft pre-1.0 policy in CLAUDE.md exists for exactly this kind of cleanup. The double-prefix form was a coincidence of how two projects independently organized their admin namespaces, not a structural relationship. Decoupling the portal URL shape from the hub URL shape now means the hub can move its admin API later without breaking portal clients. The migration cost is bounded and small (server, two frontends, two client shims, one commit).
13.3 Fleet aggregation stays its own endpoint, per-action duplicate goes
Decision. GET /api/fleet/agents (section 5) stays as the cross-
hub aggregation endpoint. POST /api/fleet/agents/{hub}/{m}/{action}
is deleted from the spec; per-agent actions go through the generic
admin proxy at POST /api/hub-admin/{hub}/agents/{m}/{action}.
Why this and not the alternatives. Three options were on the
table: (A) keep both families, delete the per-action duplicate; (B)
fold everything under /api/hub-admin/, delete /api/fleet/; (C)
promote fleet to a generalized /api/aggregates/ namespace.
Option A was chosen because the aggregation endpoint provides real value the admin proxy cannot match in a single call (server-side hub iteration, per-hub viewer-token lookup, per-hub timeout handling), and the per-action endpoint provides no value over the generic proxy. The "fleet vs hub-admin" split is a clean conceptual rule: aggregate = fleet, single = hub-admin. Option B would force every client (TelaVisor, Awan Saya, future frontends) to reimplement iteration and timeout handling. Option C is YAGNI -- one aggregation exists today, designing a namespace for hypothetical future aggregations is over-engineering.
If a second aggregation appears (cross-hub session list, cross-hub
history view, etc.), revisit whether /api/fleet/ should be renamed
to /api/aggregates/ or whether the second aggregation gets its own
family. Don't pre-decide that now.
13.4 Pair code goes through the generic admin proxy
Decision. The dedicated POST /api/hubs/{hubName}/pair-code
endpoint is deleted from the spec. Pair-code generation is one
instance of the generic admin proxy: clients call
POST /api/hub-admin/{hubName}/pair-code and the portal forwards to
the hub's POST /api/admin/pair-code.
Why this and not the alternatives. Three options were on the table: (A) keep the dedicated endpoint, document it as canonical, forbid pair-code through the proxy; (B) delete the dedicated endpoint, fold pair-code into the generic proxy like every other admin operation; (C) keep both as equivalent.
Option B was chosen because the whole point of the generic admin proxy is that it's generic. Every hub admin endpoint should be reachable through it. The dedicated endpoint existed for historical reasons (pair-code shipped before the proxy was generalized) and the no-cruft policy says to clean that up before 1.0 freezes the surface. The "pair-code is special, it deserves its own URL" justification does not hold up: every hub admin endpoint is special to somebody; none of the others got promoted to dedicated portal URLs. If portal-side policy ever needs to be added (rate limits, TTL caps), the right place is middleware on the admin proxy that matches the specific path, not a parallel endpoint.
13.5 Implementation status (closed)
Decisions 13.1-13.4 are baked into sections 2, 4, 5, and 11 of this
spec and the migration work is complete. The internal/portal Go
package, the file-backed store, the HTTP handlers, the spec-conformance
test harness against internal/teststack, the migration of the
telahubd outbound portal client and the Awan Saya server and
frontend, and the standalone cmd/telaportal binary all landed in the
six-commit extraction series ending in a0677f6. Pre-1.0 we did not
carry both shapes; the legacy code paths were deleted in the same
change that introduced the new ones, per the no-cruft policy.
13.6 Portal user auth: bearer mandatory + OAuth 2.0 device code
Decision. Section 6 is amended in two ways. First, every endpoint
marked "user" MUST accept a bearer token in the Authorization
header (section 6.2); portals MAY accept additional credential forms
on top, but bearer-on-Authorization is the one credential format every
1.0 portal is required to honor. Second, portals SHOULD implement the
OAuth 2.0 Device Authorization Grant (RFC 8628) at the three endpoints
in section 6.3 as the standard way for desktop clients to obtain a
bearer token without an embedded browser or a redirect URI. Single-user
portals MAY skip the device code flow because the operator configures
the bearer token out of band.
Why this and not the alternatives. Three options were on the table. (A) Leave user auth implementation-defined, document a "bearer is one of several legal options" stance, let each portal choose. (B) Require bearer auth as a MUST, leave issuance implementation-defined. (C) Require bearer auth as a MUST and standardize an issuance flow that desktop clients can rely on without portal-specific code.
Option C was chosen because the previous "implementation-defined" stance worked while every Tela client was either Awan Saya's web UI (cookies) or a hub registering itself (sync tokens). It does not work once TelaVisor becomes a portal client: TV needs a single credential format that does not require an embedded webview, a redirect URI, or a portal-specific session adapter. Bearer-on-Authorization is the only credential form every HTTP client supports natively, so bearer becomes the single mandatory format. Standardizing the issuance flow on top of that mandate (option C) means the desktop client onboarding UX is the same against every portal: device code prompt, browser approval, done. Without it, every portal would invent its own desktop sign-in story and the desktop client would need a switch statement per portal implementation, which defeats the point of having a wire spec.
The OAuth 2.0 device code flow specifically (RFC 8628) was chosen
over alternatives because (a) it is what gh auth login, the AWS
CLI, the GCP CLI, the Atlassian CLI, and every other modern desktop
sign-in flow uses; (b) it has zero embedded-browser requirements;
(c) the server side is small (three routes, including the
user-facing approval page); (d) it is well-specified, and an existing
RFC means client and server libraries already exist in every language;
(e) it does not require client secrets, which a desktop binary cannot
keep secret anyway.
Awan Saya already accepts bearer tokens (the api_tokens table and
the cookie-then-bearer fallback in server.js:1080-1104) so the
section 6.2 mandate is no-op for Awan Saya at the data model level.
What Awan Saya needs to add is the section 6.3 device code endpoints
and the user-facing approval page; the existing PAT-via-web-UI flow
stays as a manual escape hatch for power users until device code
lands. TelaVisor's embedded loopback portal (internal/portal over
the file store) gets a generated bearer token written to
~/.tela/run/portal-endpoint.json at process start, so the loopback
case uses the same auth path as a remote portal — the file store's
Authenticator already accepts bearer tokens and only needs the
token to be set at startup rather than via SetAdminToken.
The hub wire protocol is unaffected by this amendment. Sync tokens and hub admin tokens are not user credentials and continue to flow exactly as sections 3.2, 3.3, and 4 describe. This amendment is strictly about how user identity reaches the portal.
13.7 Protocol bump to 1.1: stable identity for every entity
Decision. The protocol is bumped from 1.0 to 1.1 to add stable
v4 UUIDs for every entity that needs identity in the fabric: portals
get portalId, hubs get hubId, agent installations get agentId,
and per-(hub, machine) registrations get machineRegistrationId.
The new fields are required, not optional. The wire-level shape of
sections 1.1, 2, 3.1, 3.2, 3.4, and 5 is amended to carry them. The
conformance checklist in section 11 gains the corresponding items.
The full identity model lives in DESIGN-identity.md, which is the
sibling document this amendment implements at the protocol layer.
Why this and not the alternatives. Three options were on the table. (A) Stay at 1.0, leave identity as a portal-internal concern, let each portal invent whatever IDs it wants and never expose them on the wire. (B) Add identity as optional fields in 1.0 itself, no version bump, treat the protocol as still 1.0 forever. (C) Bump to 1.1, make the identity fields required, accept the clean break between 1.0 and 1.1 clients.
Option C was chosen because URL-as-identity (the de facto 1.0 model)
has produced multiple bugs in dogfooding: profile reconciliation
broke when the portal returned https:// while the profile YAML
was keyed on wss://, and a stale Remotes entry on the
awansatu/awansaya dual-domain went invisible because the directory
key was the URL. Both bugs are fixed by giving every entity a stable
ID that is not a URL. Option A leaves the bugs in place. Option B
makes identity advisory: portals would still key on URL or name
internally, clients would still have to handle missing IDs as a
first-class case, and the cross-source aggregation TV needs (the
whole point of the stretch) becomes impossible to write cleanly. The
required-fields posture in option C is what makes downstream code
simple.
On the version-negotiation break. The additive-only rule for
minor version bumps (section 2 "Version semantics") forbids required
new fields in a minor bump. 1.0 → 1.1 violates that rule on
purpose, exactly once, under the pre-1.0 no-cruft policy in
CLAUDE.md. The negotiation rule itself is not changed: a 1.0 client
sees supportedVersions: ["1.1"] and refuses to talk; a 1.1 client
sees supportedVersions: ["1.0"] and refuses to talk. The break is
clean and machine-detectable. Post-1.0 the additive-only rule is
restored to its full strength and any future identity changes will
require a major version bump.
On migration. There is no migration code. Tela is pre-1.0 and
the fabric is small enough that a destroy-and-rebuild migration is
cheaper than a compatibility shim. DESIGN-identity.md section 9
documents the interactive walkthrough; the operator destroys and
recreates each portal, hub, agent, and profile after every binary
in the fabric has been upgraded to 1.1. No if id == "" branches
are introduced anywhere in the implementation; 1.0 hubs that show
up in a fleet response are reported missing-identity to the client
rather than being papered over.
On Awan Saya. Awan Saya gains a hubs.hub_id column carrying
the hub's hubId, populated from the registration body in
context-1 calls and from /.well-known/tela in context-2 calls.
The existing hubs.name unique constraint stays in place; identity
is hub_id, the directory key is still name. The fleet endpoint
forwards the identity fields it learns from each hub's /api/status
unmodified. The full Awan Saya migration is Phase 4 of Stretch B,
documented in DESIGN-identity.md section 11.