AI-native project operations

Run AI projects with confidence at every step.

TBrain Management Hub is the single operations surface for AI project teams — contributor onboarding, campaign execution, automated and human QC, and live analytics, all in one place.

Invitation-only access · Per-project data isolation · Audit trail on every action

Platform capabilities at a glance

6+
Workflow types
Vision, robotics, QC, moderation, RLHF & more
13
Ops tables per project
Campaigns, batches, tasks, QC, comments & more
Real-time
Dashboard streaming
Server-sent events, sub-second KPI refresh
Isolated
Per-project data
Dedicated PostgreSQL schema per workspace
How it works

From brief to sign-off in three steps

One hub, one audit trail, no handoff chaos between tools.

STEP 01

Provision from a template

Admin creates a new project from one of four templates. A dedicated schema is materialized with all the ops tables, QC dimensions, and roles pre-configured.

STEP 02

Execute through the pipeline

Makers pick up tasks from assigned batches and submit deliverables. Automated QC runs on every submission; human QCers review what the AI flags.

STEP 03

Analyze & iterate in real time

Live KPIs stream to the dashboard as submissions land. Reviewers appeal scores, operators rebalance workloads, clients watch progress — all in the same hub.

Use cases

Any task that needs makers, reviewers, and QC

The engine is generic — tasks, schemas, and multi-dimensional QC adapt to whatever your pipeline needs, from vision labeling to robotics data collection.

Computer vision

Bounding boxes, polygons, semantic segmentation, keypoints, and 3D cuboids on images and video frames — with multi-reviewer grading.

BBox · Polygon · Segmentation · Keypoints

Physical AI & robotics

Humanoid teleop sessions, RGB-D captures, joint trajectories, and grasp annotations — the data that foundation models for physical AI need.

Teleop · RGB-D · Trajectory · Grasp

Video QC & review

Frame-level verification, temporal task scoring, and long-form content review with configurable QC rubrics per project.

Frame-level · Temporal · Rubric

Content moderation

Policy-driven review with severity tiers, reviewer assignment rules, and escalation paths for ambiguous cases.

Policy · Severity · Escalation

LLM evaluation & RLHF

Preference-pair labeling, response scoring, and prompt-response grading — the preference data that fine-tuning pipelines need.

Preference · RLHF · Scoring

Transcription & audio

Speech-to-text with timecodes, speaker diarization, subtitle QA, and acoustic event labeling on long audio clips.

ASR · Diarize · Caption

Don't see your workflow? The schema extends to anything that fits the task + submission + QC review model.

Platform

Everything an AI ops team actually uses

Opinionated defaults out of the box, full configurability when you need it.

Operations

Campaigns and batches, not spreadsheets

Group work into campaigns, slice into reviewable batches, and assign makers, QCers, and operations managers in a few clicks. Every status change is tracked in an audit log.

  • Four project templates cover most AI workflows
  • Bulk task import from CSV or Google Drive
  • Per-batch rework cycles with affected-task tracking
Campaigns · 4 active
CV Annotation · Batch 42 · 2.4k frames · In QC
Humanoid Teleop · Session E2 · 124 clips · Running
Content Mod · Wave 7 · 320 tasks · Complete
RLHF Prefs · Round 3 · 48 pairs · Drafting
Quality

QC that scales beyond reviewer headcount

Automated verification runs first against every submission, catching common issues before a human opens the task. Reviewers score what's flagged against configurable, multi-dimensional criteria.

  • Auto QC runs in Worker, Direct, or HTTP modes
  • Multi-dimensional scoring with weighted criteria
  • Built-in appeals workflow when scores are contested
QC Review · T-1048 · 91.6/100 · Approved
Accuracy (weight 40%): 96
Completeness (weight 30%): 88
Formatting (weight 20%): 92
Clarity (weight 10%): 84
Reviewer noted 2 minor formatting edits.
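The weighted score in the example card above can be reproduced in a few lines. A minimal sketch, assuming dimension weights are fractions that sum to 1 — the dimension names and values come from the example card, while the helper function itself is hypothetical:

```python
def weighted_qc_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-dimension QC scores into a single weighted total."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[dim] * weights[dim] for dim in weights)

# Dimensions from the example card: Accuracy 40%, Completeness 30%,
# Formatting 20%, Clarity 10%.
scores = {"accuracy": 96, "completeness": 88, "formatting": 92, "clarity": 84}
weights = {"accuracy": 0.40, "completeness": 0.30, "formatting": 0.20, "clarity": 0.10}

print(round(weighted_qc_score(scores, weights), 1))  # 91.6
```

The same shape extends to any per-project rubric: swap in different dimensions and weights, and the composite score follows.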
Insights

Real-time analytics, streamed not batched

KPIs, throughput, quality trends, and team workloads update the second a submission lands — powered by server-sent events, not overnight cron jobs.

  • Sub-second dashboard refresh via SSE
  • Predictive analytics with configurable horizon
  • Per-team, per-batch, and per-contributor drill-down
Submissions · last 8 days: 3,284 (↑ 18.4%)
Approved: 2,971 · Needs rework: 218 · Pending: 95
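Server-sent events are a simple text framing over a long-lived HTTP response: each message is an optional `event:` line plus one or more `data:` lines, terminated by a blank line. A minimal sketch of how a KPI update might be framed for the stream — the event name and payload fields are illustrative, not the platform's actual wire format:

```python
import json

def sse_event(event: str, payload: dict) -> str:
    """Frame a JSON payload as one server-sent event (text/event-stream)."""
    data = json.dumps(payload, separators=(",", ":"))
    # SSE framing: "event:" names the message type, "data:" carries the
    # payload, and the blank line terminates the event.
    return f"event: {event}\ndata: {data}\n\n"

frame = sse_event("kpi", {"approved": 2971, "needs_rework": 218, "pending": 95})
print(frame)
```

On the browser side, an `EventSource` subscribed to the stream receives each frame as soon as it is flushed, which is what makes sub-second dashboard refresh possible without polling.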
Built-in capabilities

Production-grade foundations

The less-glamorous infrastructure that makes the shiny stuff trustworthy.

Per-project isolation

Each workspace gets its own PostgreSQL schema — no cross-tenant data leakage, ever.

Workflow engine

Temporal-backed graph workflows can be triggered, paused, resumed, and cancelled — with full history.

Live streaming

Server-sent events power the dashboard — every KPI, every chart, always current.

Polymorphic comments

Discussion threads on tasks, batches, or submissions — with @mentions and file attachments.

Auto QC pipeline

Multi-mode verification jobs catch issues before they reach reviewers, saving hours per batch.

Full audit log

Every login, every score, every status change — captured and queryable in sso.audit_log.

Built for every role in the pipeline

One product, six role-aware permission tiers — from admin to viewer.

Sign in to start
Project admins
Provision workspaces and manage access.
Team leads & OMs
Plan campaigns, balance batches, unblock teams.
QCers
Review submissions against multi-dimensional criteria.
Makers
Execute tasks and submit deliverables with context.
FAQ

Questions teams ask before signing in

Who is TBrain Management Hub for?
AI project teams running human-in-the-loop workflows at scale — data labeling shops, content moderation teams, transcription ops, and QC reviewers. Any team where makers produce work and reviewers grade it.
What kinds of workflows are supported?
Computer vision annotation (boxes, polygons, segmentation, keypoints), physical AI data collection (humanoid teleop, RGB-D, robotics trajectories), video QC, content moderation, LLM evaluation and RLHF preference pairs, transcription, and audio labeling. The task schema is flexible — if your work fits the task + submission + QC review model, it runs on TBrain.
How is access granted?
Invitation only. A project admin invites teammates by email; they sign in with Google or email/password. External collaborators can be added per project without touching your internal directory.
What happens to data between projects?
Each project lives in its own dedicated PostgreSQL schema. Campaigns, batches, tasks, submissions, QC scores, and comments never leave that schema — there is no cross-project query path.
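One common way to implement this kind of isolation in PostgreSQL is to provision a schema per workspace and pin each connection's `search_path` to it, so queries physically cannot see another project's tables. A hypothetical sketch — the naming convention, table shape, and helper are illustrative, not the platform's actual provisioning code:

```python
import re

def provisioning_sql(project_slug: str) -> list[str]:
    """Return DDL to provision an isolated schema for one workspace."""
    # Identifiers cannot be bound as query parameters, so validate strictly
    # before interpolating the slug into SQL.
    if not re.fullmatch(r"[a-z][a-z0-9_]*", project_slug):
        raise ValueError(f"invalid project slug: {project_slug!r}")
    schema = f"proj_{project_slug}"
    return [
        f"CREATE SCHEMA {schema};",
        f"CREATE TABLE {schema}.tasks (id bigserial PRIMARY KEY, status text NOT NULL);",
        # Connections for this workspace resolve unqualified names only
        # within its own schema.
        f"SET search_path TO {schema};",
    ]
```

With `search_path` pinned per connection, an unqualified `SELECT * FROM tasks` in one project can never resolve to another project's table.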
Can I customize the QC rubric?
Yes. Every project defines its own QC dimensions, weighted criteria, and severity tiers. Templates ship with sensible defaults for video QC, data entry, moderation, and transcription — override any of them per project.
Does the platform integrate with our storage?
File uploads flow to Google Cloud Storage via resumable, chunked uploads. Deliverables can also be linked from Google Drive. Both signed URLs and service-account access are supported.
Is there an audit trail?
Every status change, score, appeal, role assignment, and login is captured in sso.audit_log with actor, target, and timestamp. Admins can query the full history of any resource.
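Because every event carries an actor, a target, and a timestamp, the full history of a resource is one query away. A hypothetical sketch of building such a query — only the `sso.audit_log` table name comes from the source; the column names beyond actor, target, and timestamp are assumptions:

```python
def audit_history_query(target_type: str) -> str:
    """Build a query over sso.audit_log for one resource's full history."""
    # Identifiers and type tags cannot be bound as parameters, so gate the
    # interpolated value through a whitelist.
    allowed = {"task", "batch", "submission"}
    if target_type not in allowed:
        raise ValueError(f"unknown target type: {target_type}")
    return (
        "SELECT actor, action, target_id, created_at "
        "FROM sso.audit_log "
        f"WHERE target_type = '{target_type}' AND target_id = %s "
        "ORDER BY created_at"
    )
```

The `%s` placeholder is left for the database driver to bind, so the resource id never needs to be interpolated into the SQL string.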

Ready to run your next AI campaign with confidence?

Sign in with the email that was invited to your workspace — or reach out to request access for your team.

Invitation-only · Google Sign-In supported · SOC-grade audit logging