# How to add a proto schema
All inter-layer contracts in Aucert are defined as Protobuf schemas in proto/. This guide covers adding new messages, modifying existing ones, and regenerating code.
## Prerequisites

- `protoc` compiler installed
- Bazel configured (or standalone protoc plugins)
## Current schemas

| File | Purpose |
|---|---|
| `proto/pipeline.proto` | `MCPMessage` envelope, `TestRunRequest`/`Result`, `BugReport` |
| `proto/knowledge-graph.proto` | `KGQuery`, `KGResponse`, `KGNode`, `KGEdge` |
## Steps

### Step 1: Edit or create the `.proto` file
```protobuf
// proto/pipeline.proto
syntax = "proto3";

package aucert.pipeline;

option java_package = "com.aucert.platform.domain.pipeline.model";

// Add new message types below existing ones
message TestSuiteConfig {
  string suite_id = 1;
  string name = 2;
  repeated string test_scope = 3;
  map<string, string> parameters = 4;
}
```
### Step 2: Follow backward-compatibility rules

> **Danger:** Breaking changes to proto schemas can silently corrupt data. Follow these rules strictly.
Always safe:
- Add new fields (with new field numbers)
- Add new message types
- Add new enum values
- Add new `oneof` members
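As a sketch of the safe cases, assuming the `TestSuiteConfig` message from Step 1 (the `description` field here is a hypothetical addition for illustration):

```protobuf
// Safe evolution: new field, new number; existing numbers untouched
message TestSuiteConfig {
  string suite_id = 1;
  string name = 2;
  repeated string test_scope = 3;
  map<string, string> parameters = 4;
  string description = 5;  // new in v2 -- old readers simply ignore it
}
```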
Never do:
- Change an existing field number
- Change a field's type
- Remove a field that's in use (mark it as `reserved` instead)
- Rename a field (the wire format uses field numbers, not names, but renaming breaks generated code)
```protobuf
// Correct way to remove a field
message MyMessage {
  reserved 3;            // Field 3 was "old_field", removed in v2
  reserved "old_field";  // Prevent accidental reuse of the name
  string new_field = 4;
}
```
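The field-number rules follow from how protobuf encodes data on the wire: every field is keyed by `(field_number << 3) | wire_type`, never by its name. A pure-Python sketch of that encoding (a simplified stand-in, not the real `protoc` output, and no protobuf library required):

```python
def tag(field_number: int, wire_type: int) -> int:
    """Compute the wire-format key for a field: (number << 3) | wire_type."""
    return (field_number << 3) | wire_type

def encode_varint(value: int) -> bytes:
    """Encode a non-negative int as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def encode_string_field(field_number: int, text: str) -> bytes:
    """Encode a length-delimited string field (wire type 2)."""
    data = text.encode("utf-8")
    return encode_varint(tag(field_number, 2)) + encode_varint(len(data)) + data

# Renaming a field changes nothing on the wire: only the number matters.
as_old_name = encode_string_field(1, "suite-42")   # string suite_id = 1;
as_new_name = encode_string_field(1, "suite-42")   # string run_id  = 1; (renamed)
assert as_old_name == as_new_name

# Renumbering the same field produces different bytes: old readers break.
renumbered = encode_string_field(9, "suite-42")    # string suite_id = 9;
assert renumbered != as_old_name
```

This is why renaming is only a source-level break, while renumbering or retyping a field silently corrupts data for every reader still using the old numbers.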
### Step 3: Regenerate code

```bash
# Using Bazel (preferred)
bazel build //proto:all

# Generated output goes to schemas/generated/
# This creates code for: Kotlin, TypeScript, Python
```
> **Caution:** NEVER edit files in `schemas/generated/`. They are overwritten on every build. If you need custom behavior, wrap the generated types in your own code.
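One way to follow the "wrap, don't edit" rule, sketched in Python. `GeneratedTestSuiteConfig` below is a hypothetical stand-in for the class `protoc` would emit into `schemas/generated/` (in real code you would import it); the point is the wrapper layer, which owns validation and convenience logic while the generated type stays untouched:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for the protoc-generated class in schemas/generated/.
# In real code you would import it, never redefine or edit it.
@dataclass
class GeneratedTestSuiteConfig:
    suite_id: str = ""
    name: str = ""
    test_scope: list[str] = field(default_factory=list)
    parameters: dict[str, str] = field(default_factory=dict)

class TestSuiteConfigWrapper:
    """App-level wrapper: custom behavior lives here, not in generated code."""

    def __init__(self, proto: GeneratedTestSuiteConfig):
        if not proto.suite_id:
            raise ValueError("suite_id is required")
        self._proto = proto

    @property
    def scope(self) -> frozenset[str]:
        """Expose test_scope as a set for convenient membership checks."""
        return frozenset(self._proto.test_scope)

    def to_proto(self) -> GeneratedTestSuiteConfig:
        """Hand back the raw generated message for serialization."""
        return self._proto

cfg = TestSuiteConfigWrapper(
    GeneratedTestSuiteConfig(suite_id="s1", name="smoke", test_scope=["unit"])
)
assert "unit" in cfg.scope
```

When the schema is regenerated, only the wrapper needs review; nothing hand-written is lost.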
### Step 4: Verify generated code compiles

```bash
# Kotlin
cd backend/platform && ./gradlew compileKotlin

# TypeScript
cd frontend/apps/console && pnpm tsc --noEmit

# Python
cd ml && uv run python -c "from schemas.generated import pipeline_pb2"
```
### Step 5: Update consuming code
After regenerating, update any code that uses the modified messages:
- **Backend** — update services/handlers in `backend/platform/`
- **Frontend** — update API clients in `frontend/apps/console/`
- **ML** — update any ML pipeline consumers in `ml/`
### Step 6: Update context files
If your schema change introduces a new domain concept:
- Add the term to `.context/GLOSSARY.md`
- Update `.context/ARCHITECTURE.md` if it affects the pipeline
## MCP message envelope
All inter-layer data flows through the MCPMessage envelope:
```protobuf
message MCPMessage {
  string task_id = 1;
  string source_layer = 2;   // e.g., "generation"
  string target_layer = 3;   // e.g., "execution"
  bytes payload = 4;         // Serialized inner message
  map<string, string> context_snapshot = 5;
  double confidence_score = 6;
  string trace_id = 7;
  int64 timestamp_ms = 8;
}
```
The `payload` field contains the serialized inner message (e.g., `TestRunRequest`). Layers deserialize only the messages they understand.
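To illustrate that opaque-payload pattern, here is a pure-Python sketch. The dataclass is a stand-in for the generated `MCPMessage` class, JSON stands in for protobuf serialization, and the layer names and decoder registry are illustrative, not Aucert's actual dispatch code:

```python
import json
from dataclasses import dataclass, field

@dataclass
class Envelope:
    """Stand-in for the generated MCPMessage: payload stays opaque bytes."""
    task_id: str
    source_layer: str
    target_layer: str
    payload: bytes
    context_snapshot: dict[str, str] = field(default_factory=dict)

# Each layer registers decoders only for the payload types it understands.
DECODERS = {
    "execution": lambda raw: ("TestRunRequest", json.loads(raw)),
}

def handle(envelope: Envelope, layer: str):
    """Decode the payload if this layer understands it; else pass it through."""
    decoder = DECODERS.get(layer) if layer == envelope.target_layer else None
    if decoder is None:
        return None  # not addressed to us: forward the envelope untouched
    return decoder(envelope.payload)

env = Envelope(
    task_id="t-1",
    source_layer="generation",
    target_layer="execution",
    payload=json.dumps({"suite_id": "s1"}).encode("utf-8"),
)
assert handle(env, "execution") == ("TestRunRequest", {"suite_id": "s1"})
assert handle(env, "analysis") is None  # other layers leave the bytes alone
```

Because the payload is just `bytes` to every layer except its target, schema changes to an inner message only affect the layers that actually decode it.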
## What's next
- How to add an API endpoint — Full endpoint workflow
- How to run tests — Verify changes