Design Philosophy for Semantic Infrastructure

Version: 1.0
Last Updated: 2025-12-04


Table of Contents

  1. Overview
  2. The Top 5 Core Principles
  3. Supporting Principles
  4. Principle Application Guide
  5. Measuring Adherence
  6. Summary

Overview

The Semantic Infrastructure Lab (SIL) ecosystem is built on a foundation of nine core principles that guide every architectural decision, implementation choice, and design pattern. These principles emerged from real development experience building TIA's semantic workspace and have proven to deliver maintainable, scalable, intelligent systems.

This document ranks principles by:
- Impact: How much they affect system quality and success
- Frequency: How often they apply to decisions
- Criticality: How much the ecosystem depends on them

Key Insight: SIL principles prioritize semantic discovery over rigid structure, progressive revelation over complete exposure, and composable tools over monolithic solutions.


The Top 5 Core Principles

#1: Progressive Disclosure

"Orient → Navigate → Focus. Never show everything at once."

Rank: #1 - FOUNDATIONAL PRINCIPLE

Why This Ranks First:
- Fundamental to semantic discovery at scale
- Enables human and AI agents to manage complexity
- Proven 30x context reduction in practice
- Scales from single files to a 13K+ document knowledge mesh
- Foundation for reveal, Beth, and all TIA interfaces

Definition: Present information in layers, from high-level overview to detailed specifics. Users and agents only see what they need, when they need it.

The Three Levels:

LEVEL 1: ORIENT
"Where am I? What exists?"
→ Broad scan, high-level structure, entry points
→ Example: File outline, project summary

LEVEL 2: NAVIGATE
"What's relevant to my goal?"
→ Focused exploration, follow breadcrumbs, narrow to subset
→ Example: Section headers, function signatures

LEVEL 3: FOCUS
"Get the specific details I need"
→ Deep dive on target, full context on narrow scope
→ Example: Function implementation, specific content
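
One way to picture this in code: a tool honoring these levels can expose three entry points of increasing detail. The sketch below is minimal and uses illustrative names (ProgressiveView is not an existing SIL interface); the real SIL examples follow.

from typing import List, Protocol

class ProgressiveView(Protocol):
    def orient(self) -> List[str]:
        """Level 1: names only - the cheapest possible overview."""
    def navigate(self) -> List[str]:
        """Level 2: signatures/headers - enough to pick a target."""
    def focus(self, target: str) -> str:
        """Level 3: full detail for one chosen element."""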

Real Examples from SIL:

reveal (Code Structure):

# LEVEL 1: Orient - See structure
reveal app.py
# Returns: Classes, functions, imports (50 tokens vs 7,500)

# LEVEL 2: Navigate - Hierarchical view
reveal app.py --outline
# Returns: Organized hierarchy, no implementation details

# LEVEL 3: Focus - Specific element
reveal app.py function_name
# Returns: Just that function, full implementation

Beth (Knowledge Mesh):

# LEVEL 1: Orient - Top matches only
tia beth explore "topic"
# Returns: 10 strongest matches, summaries

# LEVEL 2: Navigate - Related clusters
tia beth explore "topic" --depth 2
# Returns: + Related topics, knowledge clusters

# LEVEL 3: Focus - Read specific doc
tia read <file_from_beth_result>
# Returns: Full document content

Documentation (All SIL Docs):

README.md → Topic Indexes → Detailed Guides → Implementation Specs

Example:
SIL_ECOSYSTEM_PROJECT_LAYOUT.md (overview)
  ↓
docs/guides/*.md (navigation)
  ↓
projects/*/README.md (focus on specific project)

Measured Impact:
- Agent context: 150KB → 5KB (30x improvement)
- File discovery time: 45s → 3s (15x faster)
- Documentation navigation: 80% find answers in <2 clicks
- Token usage: Reduced by 25x on average

Application Across SIL Ecosystem:

Tool            Level 1 (Orient)        Level 2 (Navigate)    Level 3 (Focus)
reveal          File structure          Outline hierarchy     Extract element
Beth            Top 10 results          Related clusters      Full document
tia search      search all (scan)       search content        read specific file
tia project     list (all projects)     show <name> summary   Project directory
Documentation   README → Index → Guide  Guide → Sections      Deep implementation

Red Flags (indicates Progressive Disclosure is failing):
- ❌ Commands dump full output by default
- ❌ No way to get "just the summary"
- ❌ Agent context exceeds 50KB for simple queries
- ❌ Users complain "too much information"
- ❌ No breadcrumbs to navigate deeper

Green Flags (indicates it's working):
- ✅ Every command has compact default output
- ✅ Clear paths to drill deeper (--detail, --full, reveal <element>)
- ✅ Agents naturally use outline modes first
- ✅ Documentation has clear hierarchy
- ✅ Users find what they need quickly

References:
- reveal implementation demonstrates this perfectly
- Beth's semantic search uses this inherently
- Knowledge Mesh Quality Imperative discusses context management


#2: Composability First

"Build tools that do one thing well, then orchestrate them into workflows."

Rank: #2 - ARCHITECTURAL PRINCIPLE

Why This Ranks Second:
- Enables the entire SIL ecosystem to work together
- Each tool is independently useful AND composable
- Reduces coupling, increases flexibility
- Foundation for TIA's orchestration model

Definition: Every tool in the SIL ecosystem should be independently valuable while composing naturally with other tools. TIA orchestrates; tools execute.

The SIL Composition Model:

TIA (Orchestrator)
  ├─ reveal (Code structure discovery)
  ├─ Beth (Semantic knowledge search)
  ├─ Scout (Autonomous research)
  ├─ Gemma (Provenance tracking)
  ├─ GenesisGraph (Lineage)
  └─ Domain commands (Specialized operations)

Each tool:
  ✅ Works standalone
  ✅ Has clear input/output contracts
  ✅ Composes via TIA routing
  ✅ No hard dependencies on other tools

Real Examples:

Standalone Value:

# Each tool works independently
reveal app.py                    # Code structure
tia beth explore "deployment"    # Knowledge discovery
tia search all "pytest"          # Content search
tia session context <name>       # Session history

Composed Workflows:

# Workflow 1: Discovery → Read → Understand
tia beth explore "authentication"     # Find relevant docs
tia read <file_from_beth>             # Read specific file
reveal <file> --outline               # Understand structure

# Workflow 2: Search → Filter → Extract
tia search all "error handling"       # Broad search
tia search content "try.*except"      # Narrow via regex
reveal <file> handle_error            # Extract specific function

# Workflow 3: Scout Research → Knowledge Capture
scout research "topic"                # Autonomous research
tia-save                             # Capture session
tia beth rebuild                      # Index new knowledge

Separation of Concerns:

Tool      Responsibility                Does NOT
reveal    Extract code structure        Execute code, analyze semantics
Beth      Find semantic relationships   Create content, execute commands
Scout     Research & synthesize         Manage sessions, track tasks
tia-save  Capture session state         Search, analyze, research
TIA       Route commands, orchestrate   Implement domain logic

Design Pattern - Tool Contracts:

# Each tool has a clear input/output contract
# (sketch: ToolInput/ToolOutput are placeholder types)
from typing import Protocol

class ITool(Protocol):
    def execute(self, input: ToolInput) -> ToolOutput:
        """Clear contract: input → output"""
        ...

# Tools compose by feeding one tool's output into the next
result1 = beth.search("topic")                # discover
result2 = reveal.structure(result1.top_file)  # inspect
result3 = read.content(result2.extract_path)  # retrieve
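
A more self-contained sketch of the same contract pattern, using hypothetical stand-in tools (the class names, dataclasses, and corpus here are illustrative, not actual SIL APIs):

from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class ToolInput:
    query: str

@dataclass
class ToolOutput:
    results: List[str]

class ITool(Protocol):
    def execute(self, input: ToolInput) -> ToolOutput: ...

class SearchTool:
    """Stand-in for a discovery tool like Beth: finds, never mutates."""
    def execute(self, input: ToolInput) -> ToolOutput:
        corpus = {"deployment": ["docs/DEPLOYMENT_GUIDE.md"]}
        return ToolOutput(results=corpus.get(input.query, []))

class ReadTool:
    """Stand-in for a content tool: reads what discovery found."""
    def execute(self, input: ToolInput) -> ToolOutput:
        return ToolOutput(results=[f"<contents of {input.query}>"])

# The orchestrator composes tools; neither tool knows about the other
def workflow(query: str) -> List[str]:
    found = SearchTool().execute(ToolInput(query=query))
    return [ReadTool().execute(ToolInput(query=path)).results[0]
            for path in found.results]

print(workflow("deployment"))  # ['<contents of docs/DEPLOYMENT_GUIDE.md>']

Because the tools share only ToolInput/ToolOutput, either one can be swapped or tested in isolation.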

Anti-Patterns to Avoid:
- ❌ reveal calling Beth directly (tight coupling)
- ❌ Beth generating content (violates single responsibility)
- ❌ Scout managing task lists (use tia task instead)
- ❌ Tools with hard dependencies on other tools

Benefits:
- Flexibility: Swap tools without breaking workflows
- Testability: Test each tool in isolation
- Reusability: Tools useful in unexpected combinations
- Maintainability: Change one tool, don't break others

The TIA Way: When building new tools, ask:
1. Does this tool do ONE thing well?
2. Can it work independently?
3. Does it compose naturally with existing tools?
4. Are its inputs/outputs clear?

If all answers are "yes", you have a composable tool.


#3: Semantic Discovery Over Rigid Structure

"Find by meaning, not by remembering paths. The mesh knows."

Rank: #3 - STRATEGIC PRINCIPLE

Why This Ranks Third:
- Differentiates SIL from traditional file systems
- Enables cross-domain knowledge transfer
- Scales to thousands of documents without cognitive overhead
- Powers Beth's 13K+ file knowledge mesh

Definition: Users and agents should discover information through semantic relationships, not rigid hierarchies. Structure provides organization; semantics provide discovery.

The Knowledge Mesh Model:

Traditional file systems:

You must know: /projects/X/docs/guides/TOPIC.md
To find: TOPIC documentation

SIL semantic mesh:

You know: Topic name or related concept
You find: All related docs across entire workspace

How Beth Enables This:

# Traditional (must know structure)
ls projects/*/docs/deployment/

# Semantic (discover by meaning)
tia beth explore "deployment"

# Beth returns:
✅ projects/tia-server/DEPLOYMENT_GUIDE.md (expected)
✅ sessions/blazing-ghost-1202/nginx-patterns.md (helpful)
✅ projects/squaroids/deployment/systemd.txt (unexpected!)

Cross-Domain Knowledge Transfer:

Beth's semantic graph discovers relationships humans might miss:

User researching: TIA server deployment
Beth surfaces: Game deployment systemd template

Why? Semantic overlap:
- "systemd service"
- "process management"
- "auto-restart on failure"
- "logging configuration"

Result: Excellent patterns transfer across domains

The Power of Semantic Tags:

# Frontmatter in any document
beth_topics: [deployment, systemd, process-management]

# Beth builds relationship graph
deployment ←→ systemd ←→ process-management
    ↓             ↓              ↓
  [12 docs]   [8 docs]      [15 docs]

# Now searching ANY topic finds related clusters
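
As a sketch of how such a relationship graph could be derived, the snippet below links topics that co-occur in any document's beth_topics list (a minimal co-occurrence approach; illustrative code, not Beth's actual implementation):

from collections import defaultdict
from itertools import combinations
from typing import Dict, List, Set

def build_topic_graph(docs: Dict[str, List[str]]) -> Dict[str, Set[str]]:
    """Link any two topics that appear together in one document."""
    graph: Dict[str, Set[str]] = defaultdict(set)
    for topics in docs.values():
        for a, b in combinations(topics, 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

docs = {
    "deploy_guide.md": ["deployment", "systemd", "process-management"],
    "nginx_notes.md": ["deployment", "logging"],
}
print(build_topic_graph(docs)["systemd"])  # {'deployment', 'process-management'}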

Structure vs Semantics - Both Matter:

Use Structure For       Use Semantics For
Project organization    Discovery across projects
Version control         Finding related concepts
File management         Cross-domain patterns
Clear ownership         Knowledge relationships
Build systems           Research & exploration

Design Guidance:

Good:
- Tag documents with semantic keywords
- Build relationship graphs
- Enable discovery by meaning
- Let Beth surface unexpected connections

Bad:
- Require users to memorize paths
- Bury documentation in deep hierarchies
- Assume users know where to look
- Force rigid categorization

Real Impact:

From Knowledge Mesh Quality Imperative:

"Traditional search finds what you ask for. Beth finds what you didn't know to ask for."

Example:
- User searches: "error handling"
- Beth finds: Error handling docs (expected)
- Beth also finds: Retry patterns, circuit breakers, logging strategies
- Why? Semantic relationship graph knows these concepts relate

The Balance:

SIL Ecosystem = Structure (git repos, directories)
                + Semantics (Beth, tags, relationships)

Structure: Where things live
Semantics: How to find them


#4: Value-First Delivery

"Ship working tools fast, enhance incrementally. Users need value today, not infrastructure tomorrow."

Rank: #4 - STRATEGIC PRINCIPLE

Why This Ranks Fourth:
- Drives adoption and proves value early
- Reduces risk of building the wrong thing
- Enables early feedback loops
- Aligns with agile/iterative development

Definition: Deliver immediate, usable value before building infrastructure. Every tier/phase should provide standalone benefit.

The Tier Progression:

TIER 0: Core CLI Tools (Already Working)
↓ Delivers: Beth search, reveal, tia commands
↓ Value: Semantic discovery, code exploration
↓ Setup: 0 days (it works NOW)
↓ Users get: Immediate productivity boost

TIER 1: Quality & Hygiene (1-2 days)
↓ Delivers: tia doc tools, quality validation
↓ Value: Document quality enforcement
↓ Setup: Minimal scripting
↓ Users get: Better search accuracy, cleaner docs

TIER 2: Mobile Access (1 week)
↓ Delivers: SSH tunnel, mobile browser
↓ Value: Work from anywhere
↓ Setup: Server + authentication
↓ Users get: Location independence

TIER 3: Browser Integration (2-3 weeks)
↓ Delivers: BrowserBridge, tab extraction
↓ Value: Context continuity
↓ Setup: Extension + backend
↓ Users get: Unified knowledge workspace

Key Insight: Each tier delivers value BEFORE the next tier starts.

Anti-Pattern (Infrastructure-First):

Week 1: Build database schema
Week 2: Build API layer
Week 3: Build authentication
Week 4: Build UI
Week 5: FINALLY deliver first feature

Problem: No value until Week 5, high risk

SIL Pattern (Value-First):

Week 1: Ship reveal (immediate value)
Week 2: Add Beth (more value)
Week 3: Add Scout (even more value)
Week 4: Add mobile access (enhancement)

Benefit: Value from Day 1, low risk

Real Example - reveal Development:

Tier 0 (Day 1): Extract structure, print to stdout
→ Users can explore code immediately

Tier 1 (Week 2): Add outline mode, element extraction
→ Users can navigate large files efficiently

Tier 2 (Month 2): Add code quality checks
→ Users get linting + structure

Each tier delivered standalone value.

Application to New SIL Projects:

When planning new tools/features:
1. What can we ship TODAY that provides value?
2. What minimal infrastructure enables it?
3. What enhancements can wait until Tier 2?
4. How do we validate value before building more?

Measurement:
- Time to first value: <1 day preferred
- User adoption: Should see usage within first week
- Feedback quality: Early users validate direction


#5: The Pit of Success

"The right way should be the easy way. Architecture guides developers toward quality."

Rank: #5 - ENGINEERING PRINCIPLE

Why This Ranks Fifth:
- Drives developer productivity and code quality
- Self-reinforcing: good patterns multiply
- Reduces cognitive load and decision fatigue
- Makes excellence the default path

Definition: System design should make correct implementations easier than incorrect ones. The "right way" = path of least resistance.

How SIL Implements This:

Example 1: Documentation Quality

Hard Way (not Pit of Success):

# User must manually:
1. Remember to add frontmatter
2. Know which fields to include
3. Format YAML correctly
4. Add beth_topics manually
5. Track quality scores

Easy Way (Pit of Success):

# Tool does the work
tia doc init deployment_guide.md
# Auto-creates: frontmatter, quality fields, suggested beth_topics

tia doc save deployment_guide.md
# Auto-validates: quality, suggests improvements

Example 2: reveal Usage

Hard Way:

cat file.py | grep "^def " | less
# Manual parsing, error-prone

Easy Way:

reveal file.py
# Automatic structure extraction, clean output

Example 3: Session Continuity

Hard Way:

# Remember to document work manually
# Write README before closing session
# Update project metadata

Easy Way:

tia-save
# Automatically generates README with full analysis

Design Pattern - Make Good Easy:

Task              Hard Way           Pit of Success Way
Document quality  Manual YAML        tia doc init scaffolds
Code structure    grep/awk           reveal just works
Knowledge search  find + grep        tia beth explore
Session handoff   Manual notes       tia-save auto-generates
Git hygiene       Remember commands  tia git make-clean

Red Flags (Pit of Success failing):
- ❌ Good patterns require more work than bad patterns
- ❌ Developers repeatedly make same mistakes
- ❌ Quality requires extensive manual effort
- ❌ Testing is harder than skipping tests
- ❌ Documentation is an afterthought

Green Flags (it's working):
- ✅ Quality is automatic (scaffolds, linters, validators)
- ✅ Less code for better patterns
- ✅ Tools guide toward best practices
- ✅ Mistakes are caught early (validation)
- ✅ Excellence is frictionless

Implementation Strategies:

  1. Scaffolding: Provide templates/generators (see the sketch below)
  2. Validation: Check quality automatically
  3. Defaults: Make safe defaults easy
  4. Feedback: Immediate guidance on mistakes
  5. Examples: Show the right way prominently
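
For instance, strategy 1 (scaffolding) might look like the sketch below: a generator in the spirit of tia doc init that writes valid frontmatter so quality is the default, not an extra step. The helper and template are hypothetical, not the actual tia implementation.

from datetime import date

FRONTMATTER_TEMPLATE = """\
---
title: {title}
created: {created}
quality:
  completeness: 0
  accuracy: 0
beth_topics: [{topics}]
---
"""

def scaffold_doc(path: str, topics: list[str]) -> str:
    """Create a new doc whose metadata starts out valid."""
    title = path.rsplit("/", 1)[-1].removesuffix(".md").replace("_", " ").title()
    content = FRONTMATTER_TEMPLATE.format(
        title=title, created=date.today().isoformat(), topics=", ".join(topics))
    with open(path, "w") as f:
        f.write(content)
    return content

scaffold_doc("deployment_guide.md", ["deployment", "systemd"])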

SIL Tools That Exemplify This:
- reveal: Makes code exploration trivial
- tia-save: Makes session documentation automatic
- Beth: Makes semantic search natural
- tia-boot: Makes session startup guided

When Building New Tools: Ask:
- Is the right way the easiest way?
- Do users naturally fall into good patterns?
- Does the tool prevent common mistakes?
- Is quality automatic, not manual?


Supporting Principles

#6: Clean Separation of Concerns

Definition: Different types of logic should be strictly separated. Mixing concerns creates untestable, unmaintainable systems.

Key Separations in SIL:

  1. Discovery ↔ Execution
    - Beth finds, tools execute
    - Search returns paths, read returns content

  2. Semantic Indexing ↔ Content Generation
    - Beth indexes, doesn't create
    - Scout creates, doesn't index (until tia-save)

  3. Quality Assessment ↔ Remediation
    - Assess: "This doc scores 60/100"
    - Fix: "Here's how to improve it"
    - Separate tools/commands for each

  4. Knowledge Capture ↔ Knowledge Application
    - Sessions: Capture work (tia-save)
    - Projects: Apply knowledge (production code)
    - Never mix .tia artifacts into production repos

  5. User Intent ↔ Tool Selection
    - TIA understands intent, routes to tools
    - Tools don't guess user intent

  6. Interface ↔ Implementation
    - TIA commands = stable interface
    - Underlying tools can change

Example - Beth Separation:

# GOOD (Separated)
from typing import List  # SearchResult is an illustrative result type

class BethSearch:
    """Pure search logic - returns structured data"""
    def search(self, query: str) -> List[SearchResult]:
        return self._semantic_search(query)

class BethFormatter:
    """Separate formatting"""
    def format(self, results: List[SearchResult]) -> str:
        return self._render_results(results)

# BAD (Mixed)
class BethSearch:
    def search(self, query: str):
        results = self._semantic_search(query)
        print(f"Found {len(results)}...")  # Mixed concerns: search + presentation

Benefits:
- Testability: Pure functions are easy to test
- Maintainability: Change one thing, not everything
- Flexibility: Swap implementations without breaking interface


#7: Explicit Over Implicit

Definition: Behavior and configuration should be explicit and discoverable, not hidden in code.

SIL Applies This Through:

Explicit Configuration:

# project.yaml - explicit project metadata
name: SIL
type: research
status: planning
beth_topics: [semantic-infrastructure]

Explicit Frontmatter:

# Document metadata - explicit quality/topics
quality:
  completeness: 95
  accuracy: 98
beth_topics: [knowledge-mesh, quality]

Explicit Commands:

# Clear, explicit operations
tia beth rebuild          # What it does is obvious
tia session context <id>  # Explicit session target
reveal file.py func       # Explicit extraction

Implicit (Avoid):

# Magic that's hard to understand
tia magic-search          # What does this do?
process-stuff             # Hidden behavior
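
A sketch of what explicitness buys in code: load project.yaml and fail loudly on missing fields instead of silently guessing defaults (assumes PyYAML; the required field set mirrors the example above):

import yaml  # PyYAML

REQUIRED = {"name", "type", "status", "beth_topics"}

def load_project(path: str = "project.yaml") -> dict:
    """Explicit: every field comes from the file; absences are errors."""
    with open(path) as f:
        config = yaml.safe_load(f)
    missing = REQUIRED - set(config)
    if missing:
        raise ValueError(f"{path} missing fields: {sorted(missing)}")
    return config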

Benefits:
- Discoverability: Users can understand behavior
- Debuggability: Issues are easier to trace
- Documentation: Explicit configs self-document


#8: Human-in-the-Loop for High-Risk Operations

Definition: High-risk decisions require human approval. Low-risk decisions proceed automatically.

SIL Already Implements This:

From CLAUDE.md:

🚨 NEVER PUSH WITHOUT EXPLICIT CONSENT 🚨
build → test → commit → tag → ASK → push

Current HITL Patterns:

Operation            Risk    Requires Approval
Git push             High    ✅ Always ask
Git make-clean       High    ✅ Show plan first
tia-save             Low     ❌ Auto-execute
Beth rebuild         Medium  ⚠️ Confirm if large
Delete old sessions  Low     ❌ Auto (with notice)

Extension to Autonomous Agents:

When Scout or other agents operate autonomously:

# Threshold-based approval (illustrative pseudocode)
if estimated_cost_usd > 5.00:
    require_approval("Scout research campaign will cost ~$7")

if operation.risk_level == "high":
    require_approval("About to delete 50 session directories")
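
A minimal runnable sketch of such a gate (require_approval and the thresholds are illustrative assumptions, not an existing SIL API):

def require_approval(reason: str) -> bool:
    """Block until a human explicitly consents; default to 'no'."""
    answer = input(f"{reason} - proceed? [y/N] ").strip().lower()
    return answer == "y"

def run_operation(name: str, risk: str, estimated_cost_usd: float = 0.0) -> None:
    """Gate high-risk or expensive operations; let low-risk ones proceed."""
    if (risk == "high" or estimated_cost_usd > 5.00) and not require_approval(
            f"{name} (risk={risk}, est. ${estimated_cost_usd:.2f})"):
        print(f"Skipped: {name}")
        return
    print(f"Executing: {name}")

run_operation("tia-save", risk="low")                 # auto-executes
run_operation("delete 50 session dirs", risk="high")  # asks first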

Benefits:
- Trust: Users trust automation within bounds
- Safety: Prevents costly mistakes
- Audit: Every high-risk operation logged
- Learning: Start conservative, expand trust over time


#9: Examples as Multi-Shot Reasoning Anchors

Definition: When prompting LLMs, examples are critical for enabling multi-shot reasoning. Show the pattern, don't just describe it.

Why This Matters:
- Zero-shot (description only): "Analyze the code structure"
- Multi-shot (with examples): "Analyze the code structure. For example: class Foo → 'Class definition', def bar() → 'Function definition'"

The Principle: Examples transform abstract instructions into concrete patterns that LLMs can follow reliably.

SIL Already Implements This:

From reveal --agent-help:

EXAMPLE WORKFLOW:
Step 1: reveal app.py              → See structure (50 tokens)
Step 2: reveal app.py --outline    → Hierarchical view
Step 3: reveal app.py process_data → Extract specific function

Pattern Recognition:

❌ BAD: "Use progressive disclosure"
✅ GOOD: "Use progressive disclosure:
         - reveal file.py (structure only)
         - reveal file.py --outline (hierarchy)
         - reveal file.py func_name (full detail)"

In Practice:

Tool usage
  Without examples: "Use reveal for code exploration"
  With examples: "reveal app.py → structure; reveal app.py func → implementation"

Workflows
  Without examples: "Follow progressive narrowing"
  With examples: "beth explore → search all → reveal → read"

Prompts
  Without examples: "Find architectural patterns"
  With examples: "Find patterns like: Operator Registry (morphogen), Graph Engine (genesisgraph)"

Agent Prompting Best Practice:

When designing prompts for Scout, Groqqy, or other agents:

# ❌ Description-only prompt
prompt = "Analyze the codebase for architectural patterns"

# ✅ Example-enriched prompt
prompt = """
Analyze the codebase for architectural patterns.

EXAMPLES of patterns to look for:
- Operator Registry: Central registry of executable operators
  → Evidence: operator_registry.py, register_operator() calls

- Event-Driven Architecture: Message bus with handlers
  → Evidence: event_bus.py, @handler decorators

- Plugin System: Dynamic loading of extensions
  → Evidence: plugin_loader.py, discover_plugins()

For each pattern found, provide:
1. Pattern name
2. Evidence (file:line references)
3. Why it matters (architectural significance)
"""

Benefits:
- Reliability: LLMs follow patterns more accurately than descriptions
- Consistency: Examples establish format expectations
- Grounding: Concrete examples prevent hallucination
- Teachability: New users learn by seeing, not inferring

Connection to Reveal:

The --agent-help flag exists specifically to provide example-rich prompts:
- Shows concrete workflows, not abstract capabilities
- Demonstrates exact commands with expected outputs
- Establishes token-awareness patterns (e.g., "50 tokens vs 7,500")

Quick Check: Does your prompt include at least one concrete example of desired output format?

Related SIL Documentation:
- Hierarchical Agency Framework - Section 10.1 applies this principle to multi-agent orchestration
- Progressive Disclosure - Examples demonstrate layered information reveal


Principle Application Guide

When to Apply Which Principle

Starting a New Tool:
1. ✅ Composability: Can it work standalone AND with others?
2. ✅ Pit of Success: Is the right way the easy way?
3. ✅ Progressive Disclosure: Does it show summaries before details?

Improving Existing Tool:
1. ✅ Progressive Disclosure: Add --outline or --summary modes
2. ✅ Separation of Concerns: Extract formatting from logic
3. ✅ Explicit Over Implicit: Add config files for "magic" behavior

Building Documentation:
1. ✅ Progressive Disclosure: README → Index → Guides → Details
2. ✅ Semantic Discovery: Add beth_topics, quality metadata
3. ✅ Value-First: Document what works NOW before future plans

Designing Workflows:
1. ✅ Composability: Chain tools via TIA orchestration
2. ✅ Semantic Discovery: Use Beth to find relevant docs
3. ✅ Progressive Disclosure: Start broad, drill down


Measuring Adherence

Quantitative Metrics

Progressive Disclosure:
- Avg context size: <10KB for typical queries
- Drill-down depth: 3 levels max to reach detail
- Token reduction: 20x+ vs full dump

Composability:
- Tool reuse: Each tool used in 5+ workflows
- Independence: Each tool works standalone
- Integration points: Clear input/output contracts

Semantic Discovery:
- Beth coverage: >90% of docs indexed
- Cross-domain hits: 30%+ results from unexpected domains
- Discovery time: <5s for any topic

Value-First Delivery:
- Time to first value: <1 day
- Incremental value: Each tier delivers standalone benefit
- Adoption rate: Usage within 1 week of release

Pit of Success:
- Quality docs: >70% have proper frontmatter
- Tool usage: reveal used 10x more than grep for code
- Error rates: <5% user mistakes (good defaults work)

Qualitative Metrics

User Feedback:
- "I found it immediately" (Semantic Discovery works)
- "This just makes sense" (Pit of Success works)
- "I didn't need the docs" (Progressive Disclosure works)
- "I use it every day" (Value-First works)

Developer Experience:
- New contributors productive quickly (Pit of Success)
- Tools compose naturally (Composability)
- Easy to test and maintain (Separation)


Summary

The Hierarchy

Foundational (Drives everything):
1. Progressive Disclosure - Manage complexity through layered revelation
2. Composability First - Tools that work together, not monoliths

Strategic (Guides direction):
3. Semantic Discovery - Find by meaning, not by path
4. Value-First Delivery - Ship value fast, enhance incrementally

Engineering (Ensures quality):
5. Pit of Success - Right way = easy way
6. Clean Separation - Distinct concerns, pure functions
7. Explicit Over Implicit - Discoverable, not hidden
8. HITL Safety - Human approval for high-risk operations
9. Examples as Anchors - Show patterns, don't just describe

Quick Reference Card

Principle               Ask Yourself
Progressive Disclosure  Does it show summary first, details on demand?
Composability           Can it work alone AND with others?
Semantic Discovery      Can users find it by meaning, not path?
Value-First             Does it deliver value NOW?
Pit of Success          Is good quality the easy path?
Separation              Are concerns cleanly separated?
Explicit                Is behavior clear, not hidden?
HITL                    Do high-risk ops require approval?
Examples as Anchors     Does your prompt include concrete examples?

Related Documentation

Core Architecture:
- SIL Ecosystem Project Layout
- TIA Unified AI Workspace Vision

Quality & Safety:
- Knowledge Mesh Quality Imperative

Tool Documentation:
- reveal: reveal --agent-help-full
- Beth: tia beth --help
- TIA: tia help quickstart


Version History:
- v1.0 (2025-12-04): Initial formalization of SIL design principles