The Armitive Agentic Framework
A unified AI infrastructure designed for autonomous reasoning, long-term memory, and real-time execution.
6+
GCP Services
3
Model Backends
8+
Agentic Workflows
99%
Platform Uptime
import { AgenticFramework, ReasoningEngine } from "@armitive/core";
import { MemoryLayer } from "@armitive/memory";
import { InferenceGateway } from "@armitive/inference";
import { EventRouter } from "@armitive/events";
// Assemble the framework
const framework = new AgenticFramework({
reasoning: new ReasoningEngine({
strategy: "parallel-then-merge",
maxDepth: 5,
}),
memory: new MemoryLayer({
backend: "firestore+vector",
embeddingModel: "text-embedding-004",
}),
inference: new InferenceGateway({
backends: ["gemini-flash", "vertexai", "tflite"],
}),
events: new EventRouter({
transport: "pubsub",
deadLetter: true,
}),
});
// Run an agentic workflow
const result = await framework.run({
objective: "Process incoming resume batch",
context: { jobId, orgId },
});

Four layers of the Agentic Framework
Core Pillar 01
Reasoning Engine
Proprietary logic for multi-step agentic task decomposition
The Reasoning Engine is the cognitive core of the Armitive framework. It receives a high-level objective, decomposes it into a dependency-ordered task graph, assigns each node to a specialized sub-agent, and resolves outputs back into a unified result — without human orchestration.
- Goal decomposition into directed task graphs
- Sub-agent assignment and lifecycle management
- Dependency resolution across parallel branches
- Retry policies with exponential backoff
- Full execution trace per reasoning run
- Human-in-the-loop interrupt hooks
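The retry policy named above can be sketched as a small generic helper. This is an illustrative stand-in, not framework API: `withRetry`, `maxAttempts`, and `baseDelayMs` are hypothetical names, and the delay doubling is the standard exponential-backoff pattern.

```typescript
// Minimal exponential-backoff retry, as the Reasoning Engine might apply
// it to a failing sub-agent task (sketch; helper names are illustrative).
async function withRetry<T>(
  task: () => Promise<T>,
  { maxAttempts = 4, baseDelayMs = 250 } = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await task();
    } catch (err) {
      lastError = err;
      // Delay doubles per attempt: 250 ms, 500 ms, 1000 ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```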
// Reasoning Engine — task graph dispatch
const result = await reasoner.run({
objective: "Score and rank all applicants",
agents: [ResumeParserAgent, ScoringAgent, RankingAgent],
strategy: "parallel-then-merge",
});

Core Pillar 02
Memory & Context Layer
Vector databases and Firebase for persistent, context-aware AI
Long-term memory is what separates an agent from a chatbot. The Memory Layer persists interaction context, embeddings, and structured state across sessions using vector search and Firestore — so every agent invocation is informed by full historical context, not just the current prompt.
- Vector embeddings for semantic memory retrieval
- Firestore for structured persistent state
- Session-scoped and long-term memory namespaces
- Context window management and summarization
- Per-tenant memory isolation
- Embedding model: text-embedding-004 (Vertex AI)
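The namespace isolation described above can be illustrated with a minimal in-memory stand-in for the write path. This is a sketch under assumptions: the real backend is Firestore plus a vector index, and `MemoryStore`, `store`, and `MemoryRecord` are hypothetical names, not the framework's actual API.

```typescript
// Sketch of the Memory Layer's write path: every record is keyed by
// namespace, so per-tenant and per-session memories never mix.
// A Map stands in for Firestore + vector search (illustrative only).
type MemoryRecord = { text: string; embedding?: number[]; ts: number };

class MemoryStore {
  private namespaces = new Map<string, MemoryRecord[]>();

  store(namespace: string, text: string, embedding?: number[]): void {
    const records = this.namespaces.get(namespace) ?? [];
    records.push({ text, embedding, ts: Date.now() });
    this.namespaces.set(namespace, records);
  }

  // Retrieval is scoped to one namespace; other tenants are never scanned.
  retrieve(namespace: string): MemoryRecord[] {
    return this.namespaces.get(namespace) ?? [];
  }
}
```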
// Memory Layer — semantic context retrieval
const context = await memory.retrieve({
query: "candidate technical skills",
topK: 5,
namespace: `org:${orgId}`,
});

Core Pillar 03
Multi-Modal Integration
Computer vision and NLP through a unified inference abstraction
The inference layer unifies two fundamentally different modalities under one API surface. Computer vision (EfficientNetV2B0 for amiSense) and language models (Google Gemini for Kavana Studio) are both first-class backends — the orchestration layer doesn't need to know which model handles which task.
- Computer vision: EfficientNetV2B0 fine-tuned per domain
- NLP: Google Gemini (Flash / Pro) for language tasks
- Unified inference abstraction — backend-agnostic callers
- TFLite edge models for low-latency on-device inference
- Vertex AI for managed model training and serving
- Model hot-swap with zero downtime
// Unified inference — backend-agnostic
const output = await infer({
modality: "vision", // or "language"
model: "auto", // resolved at runtime
input: payload,
});

Core Pillar 04
Event-Driven Execution
Workflows triggered by real-world signals, not scheduled jobs
Agentic workflows fire in response to real-world events — file uploads, IoT sensor feeds, webhook calls, or database mutations. Cloud Pub/Sub decouples producers from consumers, enabling each platform subsystem to evolve and scale independently.
- Cloud Pub/Sub as the asynchronous event backbone
- Webhook ingestion from any HTTP-capable source
- IoT sensor signal integration (amiSense camera feeds)
- Change-data-capture (CDC) triggers on Firestore writes
- Dead-letter queuing for unprocessable events
- Event replay for deterministic debugging
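The consumer side is shown below; the producer side is symmetric. As a sketch only, here is a minimal in-memory stand-in for the EventRouter's transport. The real transport is Cloud Pub/Sub with at-least-once delivery; `InMemoryRouter`, `subscribe`, and `publish` are hypothetical names mirroring the snippet's pseudo-API.

```typescript
// In-memory stand-in for the EventRouter transport (sketch): producers
// publish to a topic string, and every registered handler for that
// topic fires. Cloud Pub/Sub replaces this class in production.
type Handler = (event: { payload: unknown }) => Promise<void> | void;

class InMemoryRouter {
  private handlers = new Map<string, Handler[]>();

  subscribe(topic: string, handler: Handler): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler);
    this.handlers.set(topic, list);
  }

  async publish(topic: string, payload: unknown): Promise<void> {
    for (const handler of this.handlers.get(topic) ?? []) {
      await handler({ payload });
    }
  }
}
```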
// Event subscription — Pub/Sub consumer
pubsub.subscribe("camera.frame.captured", async (event) => {
await inferencePipeline.run(event.payload);
});

Request lifecycle — signal to output
Every agentic workflow follows the same deterministic path through the framework.
Signal Ingestion
IoT feed / webhook / file upload enters via Cloud Pub/Sub or HTTP endpoint
Reasoning Engine
Objective decomposed into a task graph; sub-agents spawned per node
Memory Retrieval
Relevant context pulled from vector store and Firestore before execution
Inference Dispatch
Each agent calls the unified inference API — vision or language backend resolved at runtime
State Persistence
Results written to Firestore; embeddings updated in vector store
Output Delivery
Structured result returned to caller; downstream events emitted for dependent workflows
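The six stages above can be sketched as one async pipeline. Every function body here is a placeholder, and all names (`runWorkflow`, `ingest`, `decompose`, and so on) are illustrative labels mirroring the stage names, not framework API; in the real system each stage delegates to its own subsystem.

```typescript
// The six lifecycle stages, sketched as placeholder functions chained
// into a single deterministic pipeline (names are illustrative).
type Signal = { source: string; payload: unknown };
type Result = { status: "ok"; output: unknown };

const ingest = (s: Signal) => ({ objective: `handle ${s.source}`, payload: s.payload });
const decompose = (o: { objective: string }) => [o.objective]; // task graph as a flat list
const retrieveContext = async (tasks: string[]) => ({ history: [], tasks });
const dispatchInference = async (ctx: object) => ({ scored: true, ctx });
const persist = async (out: object) => out; // Firestore write stand-in
const emitDownstream = (out: object) => out; // downstream event stand-in

async function runWorkflow(signal: Signal): Promise<Result> {
  const objective = ingest(signal);                // 1. Signal Ingestion
  const tasks = decompose(objective);              // 2. Reasoning Engine
  const context = await retrieveContext(tasks);    // 3. Memory Retrieval
  const output = await dispatchInference(context); // 4. Inference Dispatch
  await persist(output);                           // 5. State Persistence
  return { status: "ok", output: emitDownstream(output) }; // 6. Output Delivery
}
```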
Optimized for Google Cloud Platform
Every layer of the Armitive Agentic Framework is designed around Google Cloud primitives. Cloud Run handles compute with serverless auto-scaling. Vertex AI manages the full model lifecycle from training to online prediction. The result is an infrastructure that scales from zero to production without configuration changes.
Cloud Run
Compute
Serverless containers for all platform services. Auto-scales from zero to hundreds of instances in seconds. Each agent pool runs in an isolated Cloud Run service.
Vertex AI
Model Management
Managed training pipelines, model registry, and online prediction endpoints. All proprietary models (EfficientNetV2B0, custom scorers) are versioned and served through Vertex.
Cloud Pub/Sub
Event Backbone
Durable, at-least-once message delivery across all platform services. Decouples event producers from consumers for independent scaling and deployment.
Firestore
Persistent State
Document database for structured agent state, user data, and platform configuration. Used as the persistent memory backend with real-time sync capabilities.
Cloud Storage
Artifact Store
Object storage for model artifacts, training datasets, resume files, and exported reports. Lifecycle policies manage retention automatically.
Secret Manager
Secrets
Zero secrets in code or environment variables. All API keys, database credentials, and service account tokens are retrieved at runtime through Secret Manager.
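The runtime-retrieval pattern can be sketched as a cached resolver. This is illustrative only: the injected fetcher would wrap the Secret Manager client's access call in production, and `makeSecretResolver` and `SecretFetcher` are hypothetical names, not platform API.

```typescript
// Runtime secret retrieval with caching (sketch). The fetcher is
// injected so the pattern stays testable; in production it would wrap
// a Secret Manager access call. No secret lives in code or env vars.
type SecretFetcher = (name: string) => Promise<string>;

function makeSecretResolver(fetch: SecretFetcher) {
  const cache = new Map<string, string>();
  return async (name: string): Promise<string> => {
    const cached = cache.get(name);
    if (cached !== undefined) return cached;
    const value = await fetch(name); // one round trip per secret
    cache.set(name, value);
    return value;
  };
}
```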
Infrastructure snapshot
SERVICE             REGION       STATUS
reasoning-engine    us-central1  ✓ SERVING
memory-layer        us-central1  ✓ SERVING
inference-gateway   us-central1  ✓ SERVING
event-router        us-central1  ✓ SERVING
kavana-api          us-central1  ✓ SERVING
amisense-pipeline   us-central1  ✓ SERVING
- Serverless — scales to zero between events
- Per-service isolation and independent deploys
- All secrets via GCP Secret Manager
- TLS 1.3 in transit, AES-256 at rest
- Cloud Monitoring + Alerting on all services
Two products. One shared core.
Kavana Studio and amiSense are built directly on the Armitive Agentic Framework — each product exercises different subsystems of the same underlying infrastructure.
NLP + Agentic ATS
Kavana Studio
- Gemini inference pipeline for resume scoring
- Memory Layer for per-org candidate context
- Reasoning Engine for batch scoring workflows
- Firestore for pipeline state and RBAC
Computer Vision + IoT
amiSense
- EfficientNetV2B0 inference via Vertex AI
- Pub/Sub for real-time camera frame events
- Memory Layer for per-pet recognition models
- Cloud Run for edge inference orchestration
Build on the Agentic Framework
We are opening early access to the Armitive Agentic Framework for select enterprise partners. If your organization runs complex workflows that should be autonomous — reach out.