Your Brain Has a Storage Problem
Human Memory ❌
Current reality
Result: Lossy. Fragmented. Unreliable.
CogniKin Brain ✅
Externalized cognition
Result: Total Recall. Zero Fragmentation.
Before CogniKin vs After CogniKin
Replace Scattered Notes with a Living Knowledge Graph
Before: Scattered & Lost
Multiple apps, no connections, lost context
```typescript
// Before CogniKin: The Scattered Way

// Take notes in Notion
await notion.createPage({
  title: "Meeting with Sarah",
  content: meetingNotes
});

// Save bookmark in Raindrop
await raindrop.save({
  url: articleUrl,
  tags: ["ai", "research"]
});

// Reference in Obsidian
await obsidian.appendToDaily(
  `Discussed [[AI Strategy]] with Sarah`
);

// 3 months later...
// "Where did I put that thing Sarah mentioned?"
const search1 = await notion.search("Sarah AI");
const search2 = await obsidian.search("Sarah");
const search3 = await raindrop.search("ai research");
// ...still can't find it

// Total: 5 apps, 0 connections
// Recall: prayer-based
// Context: gone
```
After: Connected & Alive
One brain, automatic connections, instant recall
```typescript
// After CogniKin: The Connected Way

import { CogniKin } from '@cognikin/client';
const brain = new CogniKin(process.env.CK_KEY);

// Capture knowledge
await brain.ingest({
  content: meetingNotes,
  source: "meeting",
  entities: ["sarah-chen"],
  tags: ["ai-strategy"]
});

// Everything connects automatically
// CogniKin links Sarah → AI Strategy →
// your prior research → related articles

// 3 months later...
const recall = await brain.query({
  query: "What did Sarah say about AI?",
  depth: 2 // traverse 2 relationship hops
});

// Returns: exact context + related nodes
// + connection graph showing WHY it's relevant

// Total: 1 brain, ∞ connections
// Recall: 94% accuracy
// Context: permanent
```
Externalized Cognition in 3 Steps
Build Your Second Brain in Minutes
Capture Everything
Notes, conversations, documents — everything becomes a knowledge node
CogniKin Connects
Automatic knowledge graph — relationships emerge naturally
Recall Anything
Semantic search across your entire brain — instant, accurate, contextual
Real Results
“I stopped losing ideas the day I started using CogniKin. Six months in, my brain has 3,000 nodes and surfaces connections I never would have made on my own. It's not note-taking. It's externalized thinking.”
Add CogniKin to Your Workflow in 5 Minutes
Clean SDK That Works with Any Application
```typescript
import { CogniKin } from '@cognikin/client';

const brain = new CogniKin({
  apiKey: process.env.CK_KEY
});

// Capture a thought or document
const node = await brain.ingest({
  content: "Meeting notes from product review...",
  type: "meeting",
  entities: ["sarah-chen", "product-v2"],
  tags: ["roadmap", "q2-planning"],
  metadata: {
    timestamp: Date.now(),
    source: "zoom"
  }
});

// Node auto-connects to related knowledge:
// {
//   id: "node_a8f3...",
//   connections: 12,
//   relatedDcps: ["product", "strategy"],
//   activationLevel: "L2"
// }

// Later: recall with semantic search
const results = await brain.query({
  query: "What was decided about the roadmap?",
  depth: 2,
  limit: 10
});

// Returns nodes + relationship paths + context
// Brain gets smarter with every interaction
```
Start Free. Scale Your Brain.
Pricing That Grows with Your Knowledge
Free
Perfect for Exploration
- ✓ 1,000 Knowledge Nodes
- ✓ 5 Brains
- ✓ Full API Access
- ✓ Community Support
Builder
For Serious Thinkers
- ✓ 50,000 Knowledge Nodes
- ✓ Unlimited Brains
- ✓ Priority Support
- ✓ Knowledge Graph Viz
Team
For Cognitive Teams
- ✓ 500,000 Knowledge Nodes
- ✓ Multi-User Brains
- ✓ Advanced Analytics
- ✓ Dedicated Onboarding
Enterprise
For Organisations
- ✓ Unlimited Nodes
- ✓ Self-Hosted Option
- ✓ SLA Guarantees
- ✓ Custom Integrations
Full Documentation
Complete developer documentation — one document, zero navigation.
CogniKin — Developer Documentation
Complete reference for building AI agents with persistent memory via CogniKin.
Table of Contents
Getting Started
Core Concepts
API Reference
Guides
SDK Reference
=== GETTING STARTED ===
Introduction
CogniKin is a persistent memory and knowledge graph API for AI agents. Your agent processes tokens — it doesn't remember. CogniKin gives it a brain: structured, searchable, persistent knowledge that survives sessions.
Why CogniKin?
Building effective AI agents requires persistent context that outlives a single conversation. Traditional approaches require:
- Re-ingesting documentation and context every session
- Manual state management across conversation boundaries
- Custom vector search infrastructure for knowledge retrieval
- Constant re-learning of facts the agent already knew yesterday
CogniKin replaces all of this with two simple API calls:
\\\`typescript // 1. Get context before responding const context = await brain.getContext({ userId: user.id, task: userQuery, });
// 2. Report the outcome await brain.reportOutcome({ requestId: context.requestId, started: true, completed: true, }); \\\`
Key Benefits
| Benefit | Description |
|---|---|
| 80% Less Boilerplate | Replace custom context management with one API call |
| 100% Context Retained | Knowledge persists across sessions, deployments, and agent restarts |
| Automatic Relevance | CogniKin surfaces the most relevant knowledge based on the current query |
| Framework Agnostic | Works with any AI agent framework or custom implementation |
How It Works
CogniKin uses knowledge graphs and semantic search to build persistent memory for your agent:
- Context Request — Before your agent responds, call \getContext()\ with the agent ID and current query
- Knowledge Retrieval — CogniKin returns relevant knowledge nodes, activation levels, and related concepts
- Enriched Response — Your agent uses this context to generate informed, consistent responses
- Feedback Reporting — Report whether the retrieved knowledge was useful
- Continuous Learning — CogniKin refines relevance scoring based on feedback
Next Steps
Follow the Quick Start guide to integrate CogniKin in 5 minutes.
Quick Start
Get CogniKin integrated into your AI agent in 5 minutes.
1. Get Your API Key
Sign up at cognikin.me and generate an API key from the dashboard.
2. Install the SDK
\\\`bash
# TypeScript / JavaScript
npm install @cognikin/client

# Python
pip install cognikin
\\\`
3. Initialize the Client
\\\`typescript import { CogniKin } from '@cognikin/client';
const brain = new CogniKin({ apiKey: process.env.COGNIKIN_API_KEY, }); \\\`
4. Get Context Before Responding
\\\`typescript const context = await brain.getContext({ userId: user.id, task: "build a login page", metadata: { timestamp: Date.now(), source: "chat", }, });
// Response includes: // { // requestId: "req_abc123", // suggestedFraming: "micro_task", // communicationStyle: "brief_directive", // complexity: "break_into_steps", // confidence: 0.87 // } \\\`
5. Adapt Your Response
\\\typescript const systemPrompt = \You are a helpful coding assistant. Communication style: ${context.communicationStyle} Task framing: ${context.suggestedFraming} Complexity approach: ${context.complexity}\`;
const response = await llm.generate({ system: systemPrompt, user: userQuery, }); \\\`
6. Report the Outcome
\\\typescript await brain.reportOutcome({ requestId: context.requestId, started: true, completed: true, timeToStart: 45, satisfaction: 0.9, }); \\\
Complete Example
\\\`typescript import { CogniKin } from '@cognikin/client';
const brain = new CogniKin({ apiKey: process.env.COGNIKIN_API_KEY });
export async function handleUserRequest( userId: string, query: string ): Promise<string> { // 1. Get knowledge context const context = await brain.getContext({ userId, task: query, metadata: { timestamp: Date.now() }, });
// 2. Build adapted system prompt const systemPrompt = \You are a helpful assistant. Communication style: ${context.communicationStyle} Task framing: ${context.suggestedFraming} Complexity approach: ${context.complexity}\;
// 3. Generate response const response = await llm.generate({ system: systemPrompt, user: query, });
// 4. Report outcome in background brain.reportOutcome({ requestId: context.requestId, started: true, completed: true, }).catch(console.error);
return response; } \\\`
Installation
Prerequisites
- Node.js 16+ for TypeScript/JavaScript
- Python 3.8+ for Python
- A CogniKin API key from the dashboard
TypeScript / JavaScript
\\\`bash
# npm
npm install @cognikin/client

# yarn
yarn add @cognikin/client

# pnpm
pnpm add @cognikin/client
\\\`
Verify installation:
\\\`typescript import { CogniKin } from '@cognikin/client';
const brain = new CogniKin({ apiKey: process.env.COGNIKIN_API_KEY, }); console.log('CogniKin client initialized'); \\\`
Python
\\\`bash
# pip
pip install cognikin

# poetry
poetry add cognikin
\\\`
Verify installation:
\\\`python from cognikin import CogniKin import os
brain = CogniKin(api_key=os.environ.get("COGNIKIN_API_KEY")) print("CogniKin client initialized") \\\`
REST API (Direct)
No SDK required. Use any HTTP client against the base URL:
\\\ https://api.cognikin.me/v1 \\\
\\\bash curl https://api.cognikin.me/v1/context \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "userId": "user_123", "task": "build a login page" }' \\\
Framework Integration Guides
| Framework | Setup |
|---|---|
| Next.js | Import SDK in API routes or server components. Use env vars for the key. |
| Express | Initialize once at app startup, call per-request in route handlers. |
| FastAPI | Async client with \await brain.get_context()\. Initialize at module level. |
| LangChain | Wrap CogniKin calls in a custom tool or chain step. |
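As a concrete example, here is a minimal sketch of the Express pattern from the table above. The route path, request shape, and the generateReply helper are illustrative, not part of the SDK:

```typescript
import express from 'express';
import { CogniKin } from '@cognikin/client';

const app = express();
app.use(express.json());

// Initialize once at app startup
const brain = new CogniKin({ apiKey: process.env.COGNIKIN_API_KEY });

// Placeholder for your own LLM call
async function generateReply(message: string, context: unknown): Promise<string> {
  return `(reply for: ${message})`;
}

app.post('/chat', async (req, res) => {
  const { userId, message } = req.body;

  // Call per-request in the route handler
  const context = await brain.getContext({ userId, task: message });
  const reply = await generateReply(message, context);

  // Report in the background so the response is not blocked
  brain.reportOutcome({ requestId: context.requestId, started: true }).catch(console.error);

  res.json({ reply });
});

app.listen(3000);
```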
Environment Variables
\\\`bash
# .env
COGNIKIN_API_KEY=ck_sk_live_your_secret_key_here
\\\`
Security: Never commit API keys to version control. Use environment variables or a secrets manager.
Authentication
API Key Format
\\\ ck_sk_live_1234567890abcdef \\\
| Segment | Meaning |
|---|---|
| \ck_\ | CogniKin identifier |
| \sk_\ | Secret key |
| \live_\ or \test_\ | Environment (production or sandbox) |
| Remainder | Unique credential |
Getting Your API Key
- Sign up at cognikin.me
- Navigate to the API Keys section in your dashboard
- Click "Generate New Key"
- Copy and store the key securely (it is shown only once)
Using Your API Key
TypeScript: \\\`typescript import { CogniKin } from '@cognikin/client';
const brain = new CogniKin({ apiKey: process.env.COGNIKIN_API_KEY, }); \\\`
Python: \\\`python from cognikin import CogniKin import os
brain = CogniKin(api_key=os.environ.get("COGNIKIN_API_KEY")) \\\`
REST: \\\bash curl https://api.cognikin.me/v1/context \ -H "Authorization: Bearer ck_sk_live_your_key_here" \ -H "Content-Type: application/json" \ -d '{"userId": "user_123", "task": "complete the report"}' \\\
Test Mode vs Live Mode
| Environment | Key Prefix | Behaviour |
|---|---|---|
| Test | \ck_sk_test_\ | Free unlimited requests, isolated sandbox, profiles reset monthly |
| Live | \ck_sk_live_\ | Production data, real profiles, metered billing |
Rate Limits
| Plan | Requests/Second | Monthly Limit |
|---|---|---|
| Starter | 10 req/s | 100,000 |
| Pro | 100 req/s | 1,000,000 |
| Enterprise | Custom | Unlimited |
Key Rotation
Rotate keys every 90 days. Generate a new key before revoking the old one to avoid downtime.
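One way to sketch a zero-downtime rotation is to keep both keys in the environment during the overlap. The COGNIKIN_API_KEY_NEXT variable name below is illustrative:

```typescript
import { CogniKin } from '@cognikin/client';

// During rotation, deploy the new key alongside the old one and prefer it.
// Once traffic is healthy on the new key, revoke the old key and promote
// COGNIKIN_API_KEY_NEXT to COGNIKIN_API_KEY.
const apiKey = process.env.COGNIKIN_API_KEY_NEXT ?? process.env.COGNIKIN_API_KEY;

const brain = new CogniKin({ apiKey });
```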
Troubleshooting
| Error | Cause | Fix |
|---|---|---|
| \401 Unauthorized\ | Invalid or missing API key | Verify the key is correct and the \Authorization\ header is set |
| \403 Forbidden\ | Key lacks permissions | Check key permissions in dashboard |
| \429 Too Many Requests\ | Rate limit exceeded | Implement exponential backoff or upgrade your plan |
=== CORE CONCEPTS ===
How CogniKin Works
The Core Problem
AI agents are stateless. Every session starts from zero — no memory of what happened yesterday, no recall of decisions already made, no persistent knowledge. Re-ingesting context every time is expensive and brittle.
The CogniKin Learning Loop
CogniKin gives your agent persistent memory through a continuous knowledge loop:
- Context Request — Call \getContext()\ with the agent ID and current query
- Knowledge Retrieval — CogniKin returns relevant knowledge nodes, activation levels, and connections
- Enriched Response — Your agent uses this context to generate informed output
- Feedback Report — Report whether the retrieved knowledge was useful
- Graph Update — CogniKin refines relevance scoring and strengthens useful connections
Example Flow
\\\`typescript // User asks: "Help me refactor the auth module"
// Step 1: Get context const context = await brain.getContext({ userId: "user_42", task: "refactor auth module", taskType: "coding", complexity: "high", });
// Step 2: CogniKin responds with personalised guidance // context.suggestedFraming === "micro_task" // context.communicationStyle === "brief_directive" // context.complexity === "break_into_steps" // context.confidence === 0.82
// Step 3: Agent uses context to adapt its response const systemPrompt = \Break this into small steps. Be concise and direct.\;
// Step 4: Report outcome await brain.reportOutcome({ requestId: context.requestId, started: true, completed: true, timeToStart: 30, }); \\\`
Knowledge Dimensions
CogniKin analyses four behavioural dimensions:
Task Framing — How to present the work:
| Type | Description | Best For |
|---|---|---|
| Achievement | Goal-oriented framing | Users who track progress and milestones |
| Learning | Exploration-oriented framing | Users motivated by understanding |
| Micro-task | Small incremental steps | Users who feel overwhelmed by large tasks |
| Challenge | Puzzle/problem framing | Users who enjoy competition and difficulty |
Communication Style — How to deliver information:
| Style | Description |
|---|---|
| Brief Directive | Short, direct instructions |
| Detailed Explanatory | Full context with reasoning |
| Conversational | Friendly, encouraging tone |
| Technical | Precise, jargon-heavy |
Complexity Handling — How much detail to provide:
| Level | Description |
|---|---|
| Full Solution | Complete working code/answer |
| Break Into Steps | Guided step-by-step process |
| Hints Only | Directional clues without full answers |
| High Level | Conceptual overview |
Encouragement Level — How much positive reinforcement:
| Level | Description |
|---|---|
| High | Frequent encouragement and praise |
| Moderate | Balanced feedback |
| Minimal | Results-focused with little commentary |
| None | Pure information exchange |
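One way to apply these dimensions is to map each returned value to a prompt fragment. The sketch below assumes a context object returned by getContext(); the fragment wording is illustrative:

```typescript
// Map each dimension of the context to a system prompt fragment
const framingHints: Record<string, string> = {
  achievement: 'Frame the work around a concrete goal and visible progress.',
  learning: 'Frame the work as a chance to understand how the pieces fit together.',
  micro_task: 'Break the work into very small first steps.',
  challenge: 'Frame the work as an interesting problem to solve.',
};

const encouragementHints: Record<string, string> = {
  high: 'Offer frequent encouragement.',
  moderate: 'Offer balanced, occasional encouragement.',
  minimal: 'Stay results-focused with little commentary.',
  none: 'Skip encouragement entirely.',
};

const systemPrompt = [
  'You are a helpful assistant.',
  framingHints[context.suggestedFraming],
  encouragementHints[context.encouragement],
  `Communication style: ${context.communicationStyle}`,
  `Detail level: ${context.complexity}`,
].join('\n');
```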
The ML Model
CogniKin uses a multi-armed bandit approach combined with collaborative filtering:
- Cold Start (0-5 interactions) — Population-level defaults, low confidence (0.2-0.4). CogniKin explores different approaches.
- Learning (5-20 interactions) — Patterns emerge, confidence increases (0.5-0.7). Recommendations become personalised.
- Optimised (20+ interactions) — High-confidence recommendations (0.7-0.95). Occasional exploration to adapt to changing preferences.
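In practice you can branch on the returned confidence and profile phase. A minimal sketch, where the generate* helpers stand in for your own response logic:

```typescript
const context = await brain.getContext({ userId: user.id, task: userQuery });

let response: string;
if (context.metadata.profilePhase === 'cold_start' || context.confidence < 0.5) {
  // Early phases: hedge with a balanced, generic response
  response = await generateBalancedResponse(userQuery);
} else {
  // Learning and optimised phases: follow the recommendations
  response = await generateAdaptedResponse(userQuery, context);
}
```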
Privacy
CogniKin stores behavioural patterns only — never task content or PII. User IDs are hashed. All data is encrypted at rest and in transit. Fully GDPR/CCPA compliant.
Knowledge Profiles
A knowledge profile is a dynamic, multi-dimensional model of a user's behavioural preferences. Profiles evolve continuously based on real interaction data.
Profile Structure
\\\`typescript interface KnowledgeProfile { userId: string;
// Behavioural preferences preferredFraming: FramingType; communicationStyle: CommunicationStyle; complexityPreference: ComplexityLevel; encouragementLevel: EncouragementLevel;
// Performance metrics totalInteractions: number; completionRate: number; averageTimeToStart: number; // seconds flowStateFrequency: number; // 0-1
// Confidence scores (0-1) framingConfidence: number; styleConfidence: number; complexityConfidence: number;
// Contextual patterns taskTypeAffinities: Record<string, number>; timeOfDayPatterns: TimePattern[]; } \\\`
Profile Evolution Phases
| Phase | Interactions | Confidence | Behaviour |
|---|---|---|---|
| Cold Start | 0-5 | 0.2-0.4 | Population defaults, active exploration |
| Learning | 5-20 | 0.5-0.7 | Patterns emerging, increasing personalisation |
| Optimised | 20+ | 0.7-0.95 | Stable preferences, occasional exploration |
Framing Types
| Type | Example Prompt |
|---|---|
| Achievement | "Complete this feature to hit your weekly milestone" |
| Learning | "Explore how OAuth works by building this flow" |
| Micro-task | "Start by just creating the function signature" |
| Challenge | "Can you implement this without using any external libraries?" |
Multi-Context Profiles
Users often behave differently depending on the task type. CogniKin tracks per-context preferences:
\\\typescript // Same user, different contexts { "coding": { preferredFraming: "challenge", communicationStyle: "brief_directive" }, "documentation": { preferredFraming: "micro_task", communicationStyle: "conversational" }, "debugging": { preferredFraming: "micro_task", communicationStyle: "technical" } } \\\
Manual Overrides
Use \updateProfile()\ to set preferences explicitly — for example, when a user fills out an onboarding questionnaire or explicitly states a preference.
\\\typescript await brain.updateProfile({ userId: "user_123", communicationStyle: "brief_directive", encouragementLevel: "minimal", }); \\\
Manual overrides carry higher weight than learned preferences, but CogniKin continues to adapt based on actual outcomes.
Context API
The Context API is the primary integration point. Call \getContext()\ before your agent generates a response to get personalised recommendations.
The getContext() Call
\\\typescript const context = await brain.getContext({ userId: "user_123", task: "build a login page", complexity: "medium", metadata: { source: "chat", timestamp: Date.now() } }); \\\
Request Parameters
Required:
| Parameter | Type | Description |
|---|---|---|
| \userId\ | string | Unique identifier for the user |
| \task\ | string | Description of what the user wants to accomplish |
Optional:
| Parameter | Type | Description |
|---|---|---|
| \complexity\ | \"low"\, \"medium"\, or \"high"\ | Helps CogniKin tailor recommendations |
| \taskType\ | string | Category (e.g., "coding", "writing", "debugging") |
| \metadata\ | object | Additional context (source, timestamp, session info) |
Response Structure
\\\typescript { requestId: "req_abc123", suggestedFraming: "micro_task", communicationStyle: "brief_directive", complexity: "break_into_steps", encouragement: "moderate", confidence: 0.87, rationale: "User has 85% completion rate with step-by-step guidance", metadata: { profilePhase: "optimised", interactionCount: 47, explorationMode: false } } \\\
Three Integration Approaches
Approach 1: System Prompt Injection
\\\`typescript const context = await brain.getContext({ userId: user.id, task: userQuery });
const systemPrompt = \You are a helpful coding assistant. Communication: ${context.communicationStyle} Framing: ${context.suggestedFraming} Detail level: ${context.complexity} Encouragement: ${context.encouragement}\; \\\`
Approach 2: Programmatic Branching
\\\`typescript const context = await brain.getContext({ userId: user.id, task: userQuery });
if (context.complexity === "break_into_steps") { response = await generateStepByStep(userQuery); } else if (context.complexity === "hints_only") { response = await generateHints(userQuery); } else { response = await generateFullSolution(userQuery); }
if (context.encouragement === "high") { response = addEncouragement(response); } \\\`
Approach 3: Hybrid (Recommended)
\\\`typescript const context = await brain.getContext({ userId: user.id, task: userQuery });
const systemPrompt = buildSystemPrompt(context); const strategy = selectStrategy(context);
const rawResponse = await llm.generate({ systemPrompt, userQuery, temperature: strategy.temperature });
const finalResponse = postProcess(rawResponse, context); \\\`
Handling Low Confidence
When \context.confidence < 0.5\, CogniKin is still learning the user. Use a balanced approach:
\\\typescript if (context.confidence < 0.5) { // Hedge: offer multiple approaches response = generateHybridResponse(userQuery); } else { response = generateAdaptedResponse(userQuery, context); } \\\
Caching
Cache context for high-traffic applications, but keep TTL under 5 minutes:
\\\`typescript const cache = new TTLCache({ ttl: 300_000 });
async function getCachedContext(userId: string, task: string) { const key = \${userId}:${hashTask(task)}\; let ctx = cache.get(key); if (!ctx) { ctx = await brain.getContext({ userId, task }); cache.set(key, ctx); } return ctx; } \\\`
Error Handling
Always fall back gracefully if CogniKin is unavailable:
\\\typescript let context; try { context = await brain.getContext({ userId: user.id, task: userQuery }); } catch (error) { console.warn('CogniKin unavailable, using defaults:', error); context = { suggestedFraming: 'achievement', communicationStyle: 'conversational', complexity: 'break_into_steps', confidence: 0.5 }; } \\\
Outcome Reporting
Outcome reporting closes the learning loop. Without it, CogniKin cannot improve its recommendations.
Why Report Outcomes?
| Without Reporting | With Reporting |
|---|---|
| Static population defaults | Personalised per-user recommendations |
| No learning over time | Continuous improvement |
| Generic responses | Increasing completion rates |
The reportOutcome() Call
\\\typescript await brain.reportOutcome({ requestId: context.requestId, // from getContext() started: true, completed: true, timeToStart: 45, // seconds before user began flowState: true, // user was engaged satisfaction: 0.9, // user self-report (0-1) }); \\\
Required vs Optional Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| \requestId\ | Yes | string | From the \getContext()\ response |
| \started\ | Yes | boolean | Whether the user began the task |
| \completed\ | No | boolean | Whether they finished |
| \timeToStart\ | No | number | Seconds before starting |
| \flowState\ | No | boolean | Whether the user was engaged |
| \satisfaction\ | No | number | User satisfaction (0-1) |
| \metadata\ | No | object | Additional context |
Outcome Patterns
Success: \\\typescript await brain.reportOutcome({ requestId: context.requestId, started: true, completed: true, timeToStart: 10, flowState: true, satisfaction: 0.95 }); \\\
Started but not completed: \\\typescript await brain.reportOutcome({ requestId: context.requestId, started: true, completed: false, timeToStart: 30, flowState: false }); \\\
Never started: \\\typescript await brain.reportOutcome({ requestId: context.requestId, started: false, completed: false }); \\\
Delayed start: \\\typescript await brain.reportOutcome({ requestId: context.requestId, started: true, completed: true, timeToStart: 300, flowState: false }); \\\
Signal Weights
| Outcome | Signal Strength | Effect on Profile |
|---|---|---|
| Never started | Strong negative | Major confidence decrease for current approach |
| Started + Completed | Strong positive | Major confidence increase |
| Started + Abandoned | Weak negative | Minor adjustment |
| Flow state detected | Bonus positive | Extra confidence boost |
| Fast time-to-start | Positive modifier | Indicates strong knowledge match |
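To supply these signals accurately, derive the outcome fields from what actually happened in your application. A sketch, with an illustrative event shape:

```typescript
// Illustrative event timestamps captured by your application (ms since epoch)
interface SessionEvents {
  contextIssuedAt: number;   // when getContext() returned
  firstActionAt?: number;    // first user action on the task, if any
  finishedAt?: number;       // task completion, if any
}

function toOutcomeParams(requestId: string, events: SessionEvents) {
  const started = events.firstActionAt !== undefined;
  return {
    requestId,
    started,
    completed: events.finishedAt !== undefined,
    timeToStart: started
      ? Math.round((events.firstActionAt! - events.contextIssuedAt) / 1000)
      : undefined,
  };
}

// Pass the derived params to reportOutcome()
```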
Best Practices
- Report outcomes for every interaction, including negative ones
- Report in the background (fire-and-forget) to avoid blocking the user, as shown in the sketch after this list
- Include optional fields when available for richer learning signals
- Report based on actual behaviour, not assumptions
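A minimal sketch of the background reporting pattern (the helper name and generateReply call are illustrative):

```typescript
// Fire-and-forget: outcome reporting should never block the user-facing response
function reportOutcomeInBackground(requestId: string, started: boolean, completed?: boolean): void {
  brain.reportOutcome({ requestId, started, completed })
    .catch(err => console.warn('CogniKin outcome report failed (non-blocking):', err));
}

// In a request handler: respond first, report afterwards
const reply = await generateReply(userQuery, context);     // your own LLM call
reportOutcomeInBackground(context.requestId, true, true);
// ...send `reply` to the user
```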
=== API REFERENCE ===
getContext()
Request knowledge-aware context for a user interaction.
Endpoint
\\\ POST /v1/context \\\
Headers
\\\ Authorization: Bearer ck_sk_live_your_key_here Content-Type: application/json \\\
Request Body
\\\typescript interface GetContextParams { userId: string; // Required. Unique user identifier. task: string; // Required. Task description. complexity?: 'low' | 'medium' | 'high'; // Optional. Task complexity hint. taskType?: string; // Optional. Category (e.g., "coding"). metadata?: Record<string, any>; // Optional. Additional context. } \\\
Response Body
\\\typescript interface KnowledgeContext { requestId: string; // Unique request ID. Use in reportOutcome(). suggestedFraming: 'achievement' | 'learning' | 'micro_task' | 'challenge'; communicationStyle: 'brief_directive' | 'detailed_explanatory' | 'conversational' | 'technical'; complexity: 'full_solution' | 'break_into_steps' | 'hints_only' | 'high_level'; encouragement: 'high' | 'moderate' | 'minimal' | 'none'; confidence: number; // 0-1. How well CogniKin knows this user. rationale: string; // Human-readable explanation. metadata: { profilePhase: 'cold_start' | 'learning' | 'optimised'; interactionCount: number; explorationMode: boolean; // true when CogniKin is trying new approaches }; } \\\
Examples
Basic: \\\typescript const context = await brain.getContext({ userId: 'user_123', task: 'build a login form' }); \\\
With all parameters: \\\typescript const context = await brain.getContext({ userId: 'user_123', task: 'fix authentication bug', complexity: 'high', taskType: 'debugging', metadata: { source: 'slack', priority: 'urgent' } }); \\\
curl: \\\bash curl -X POST https://api.cognikin.me/v1/context \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "userId": "user_123", "task": "build a login page", "complexity": "medium", "taskType": "coding" }' \\\
Error Codes
| Code | Description |
|---|---|
| \INVALID_USER_ID\ | User ID is missing or malformed |
| \INVALID_TASK\ | Task description is missing or empty |
| \RATE_LIMIT_EXCEEDED\ | Too many requests |
| \UNAUTHORIZED\ | Invalid API key |
reportOutcome()
Report the outcome of an interaction to close the learning loop.
Endpoint
\\\ POST /v1/outcome \\\
Headers
\\\ Authorization: Bearer ck_sk_live_your_key_here Content-Type: application/json \\\
Request Body
\\\typescript interface OutcomeParams { requestId: string; // Required. From getContext() response. started: boolean; // Required. Did the user begin the task? completed?: boolean; // Did they finish? timeToStart?: number; // Seconds before starting. flowState?: boolean; // Was the user engaged? satisfaction?: number; // User self-report (0-1). metadata?: Record<string, any>; } \\\
Response
\\\json { "acknowledged": true, "profileUpdated": true } \\\
Examples
TypeScript: \\\typescript await brain.reportOutcome({ requestId: context.requestId, started: true, completed: true, timeToStart: 15, flowState: true, satisfaction: 0.95 }); \\\
Python: \\\python await brain.report_outcome( request_id=context.request_id, started=True, completed=True, time_to_start=15, flow_state=True, satisfaction=0.95 ) \\\
curl: \\\bash curl -X POST https://api.cognikin.me/v1/outcome \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "requestId": "req_abc123", "started": true, "completed": true, "timeToStart": 15, "flowState": true, "satisfaction": 0.95 }' \\\
Error Codes
| Code | Description |
|---|---|
| \INVALID_REQUEST_ID\ | Request ID not found or malformed |
| \OUTCOME_ALREADY_REPORTED\ | An outcome was already submitted for this request |
| \REQUEST_EXPIRED\ | Request is older than 24 hours |
updateProfile()
Manually override or set user profile preferences. Useful for onboarding data, explicit user preferences, or importing from external systems.
Endpoint
\\\ PUT /v1/profile/:userId \\\
Headers
\\\ Authorization: Bearer ck_sk_live_your_key_here Content-Type: application/json \\\
Request Body
\\\typescript interface UpdateProfileParams { userId: string; preferredFraming?: 'achievement' | 'learning' | 'micro_task' | 'challenge'; communicationStyle?: 'brief_directive' | 'detailed_explanatory' | 'conversational' | 'technical'; complexityPreference?: 'full_solution' | 'break_into_steps' | 'hints_only' | 'high_level'; encouragementLevel?: 'high' | 'moderate' | 'minimal' | 'none'; metadata?: Record<string, any>; } \\\
Response
Returns the updated profile:
\\\typescript interface KnowledgeProfile { userId: string; preferredFraming: FramingType; communicationStyle: CommunicationStyle; complexityPreference: ComplexityLevel; encouragementLevel: EncouragementLevel; totalInteractions: number; completionRate: number; confidence: number; } \\\
Use Cases
User stated preference: \\\typescript // User said: "Just give me the code, no explanations" await brain.updateProfile({ userId: 'user_123', communicationStyle: 'brief_directive', encouragementLevel: 'none' }); \\\
Onboarding questionnaire: \\\typescript await brain.updateProfile({ userId: 'user_123', preferredFraming: onboarding.framingPreference, complexityPreference: onboarding.detailLevel, metadata: { source: 'onboarding', completedAt: Date.now() } }); \\\
Import from another system: \\\typescript await brain.updateProfile({ userId: 'user_123', preferredFraming: legacyProfile.taskFraming, metadata: { imported: true, source: 'legacy_system' } }); \\\
Error Codes
| Code | Description |
|---|---|
| \INVALID_USER_ID\ | User ID is missing or malformed |
| \INVALID_PREFERENCE\ | One or more preference values are invalid |
| \UNAUTHORIZED\ | Invalid API key |
Note: Manual overrides carry higher weight than learned preferences, but CogniKin will still adapt based on actual outcomes over time.
Error Handling
Error Response Format
All API errors follow a consistent structure:
\\\json { "error": { "code": "RATE_LIMIT_EXCEEDED", "message": "Too many requests. Please retry after 60 seconds.", "details": { "limit": 100, "window": "60s", "retryAfter": 42 }, "requestId": "req_abc123", "timestamp": "2024-01-15T10:30:00Z" } } \\\
Common Error Codes
| HTTP Status | Code | Description | Resolution |
|---|---|---|---|
| 400 | \INVALID_USER_ID\ | User ID missing or malformed | Provide a valid user ID string |
| 400 | \INVALID_TASK\ | Task description missing or empty | Provide a descriptive task string |
| 400 | \INVALID_REQUEST_ID\ | Request ID not found | Use the requestId from getContext() |
| 400 | \INVALID_PREFERENCE\ | Invalid preference value | Check allowed enum values |
| 401 | \UNAUTHORIZED\ | Invalid or missing API key | Verify your API key |
| 403 | \FORBIDDEN\ | Insufficient permissions | Check key permissions in dashboard |
| 404 | \PROFILE_NOT_FOUND\ | No profile for this user | Profile is created automatically on first getContext() |
| 429 | \RATE_LIMIT_EXCEEDED\ | Too many requests | Implement exponential backoff |
| 429 | \QUOTA_EXCEEDED\ | Monthly quota reached | Upgrade plan or wait for reset |
| 500 | \INTERNAL_ERROR\ | Server error | Retry with backoff. Contact support if persistent. |
Retry Strategy
\\\typescript async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> { for (let attempt = 0; attempt <= maxRetries; attempt++) { try { return await fn(); } catch (error) { if (error.code === 'RATE_LIMIT_EXCEEDED' && attempt < maxRetries) { const delay = error.details?.retryAfter ? error.details.retryAfter * 1000 : Math.pow(2, attempt) * 1000; await new Promise(r => setTimeout(r, delay)); continue; } throw error; } } throw new Error('Max retries exceeded'); } \\\
Graceful Degradation
Always have a fallback so your agent works even when CogniKin is unavailable:
\\\typescript let context; try { context = await brain.getContext({ userId: user.id, task: userQuery }); } catch (error) { console.warn('CogniKin unavailable:', error.code); context = { suggestedFraming: 'achievement', communicationStyle: 'conversational', complexity: 'break_into_steps', encouragement: 'moderate', confidence: 0.5 }; } // Continue with context regardless \\\
=== GUIDES ===
Integration Examples
Personal Assistant Agent
\\\`typescript import { CogniKin } from '@cognikin/client';
const brain = new CogniKin({ apiKey: process.env.COGNIKIN_API_KEY });
async function personalAssistant(userId: string, message: string) { const context = await brain.getContext({ userId, task: message, taskType: detectTaskType(message), });
const systemPrompt = \You are a personal assistant. Style: ${context.communicationStyle} Framing: ${context.suggestedFraming} Detail: ${context.complexity} Encouragement: ${context.encouragement}\;
const response = await llm.generate({ system: systemPrompt, user: message });
// Track outcome asynchronously trackOutcome(context.requestId, userId);
return response; } \\\`
Team Management Agent
\\\`typescript async function teamAgent(teamMembers: string[], task: string) { // Get context for each team member in parallel const contexts = await Promise.all( teamMembers.map(id => brain.getContext({ userId: id, task, taskType: 'team_task' }) ) );
// Personalise assignment message for each member return contexts.map((ctx, i) => ({ userId: teamMembers[i], message: buildAssignment(task, ctx), framing: ctx.suggestedFraming, })); } \\\`
Educational Tutor Agent
\\\typescript async function tutorAgent(studentId: string, topic: string) { const context = await brain.getContext({ userId: studentId, task: \learn ${topic}\`, taskType: 'learning', });
let lesson: string;
if (context.complexity === 'break_into_steps') { lesson = await generateProgressiveLessons(topic); } else if (context.complexity === 'hints_only') { lesson = await generateSocraticQuestions(topic); } else { lesson = await generateComprehensiveLesson(topic); }
if (context.encouragement === 'high') { lesson = addProgressTracking(lesson); }
return lesson; } \\\`
Customer Support Agent
\\\`python from cognikin import CogniKin
brain = CogniKin(api_key=os.getenv("COGNIKIN_API_KEY"))
async def support_agent(user_id: str, issue: str): context = await brain.get_context( user_id=user_id, task=f"resolve: {issue}", task_type="support", )
if context.communication_style == "brief_directive": response = generate_quick_fix(issue) elif context.communication_style == "detailed_explanatory": response = generate_walkthrough(issue) else: response = generate_standard_response(issue)
await brain.report_outcome( request_id=context.request_id, started=True, completed=True, )
return response \\\`
Best Practices
Context Usage
- Always call getContext() before generating responses — This is the core integration point
- Provide descriptive task strings — \"fix authentication bug in login form"\ is far better than \"help"\
- Include taskType when known — Enables per-task-type personalisation
- Use all returned fields — They work together as a coherent recommendation
Outcome Reporting
- Report every interaction — Including negative outcomes (abandonments, non-starts)
- Report in the background — Fire-and-forget pattern, do not block the user
- Include optional fields — \timeToStart\, \flowState\, and \satisfaction\ provide richer learning signals
- Be honest — Report actual behaviour, not optimistic assumptions
Profile Management
- Let CogniKin learn naturally — Avoid overriding profiles unless you have explicit user preferences
- Use updateProfile() sparingly — For onboarding data or explicit user requests only
- Monitor profile evolution — Verify that completion rates improve over time
Error Handling
- Always implement fallbacks — Your agent should work even if CogniKin is offline
- Use exponential backoff for retries — Respect \retryAfter\ headers
- Log errors, don't expose them — Users should not see CogniKin-level errors
Performance
- Call getContext() asynchronously — Do not block the request pipeline
- Cache conservatively — TTL under 5 minutes to avoid stale recommendations
- Use parallel calls — When personalising for multiple users simultaneously
Security
- Store API keys in environment variables — Never in code or version control
- Use separate keys per environment — dev, staging, production
- Rotate keys every 90 days
- Hash user identifiers — Do not send PII as userId
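For the last point, a small sketch using Node's built-in crypto module to derive a stable, non-identifying userId (the user object and the choice of input are illustrative):

```typescript
import { createHash } from 'crypto';

// Derive a stable, non-reversible identifier from an internal ID or email
function toCogniKinUserId(internalId: string): string {
  return createHash('sha256').update(internalId).digest('hex').slice(0, 32);
}

const context = await brain.getContext({
  userId: toCogniKinUserId(user.email),
  task: userQuery,
});
```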
Testing & Debugging
Test Mode
Use test-mode API keys during development:
\\\bash COGNIKIN_API_KEY=ck_sk_test_1234567890abcdef \\\
Test mode provides:
- Free unlimited requests
- Isolated data sandbox
- Profiles reset monthly
- No billing impact
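A common setup is to pick the key by environment so development traffic never touches live profiles. The variable names below are illustrative:

```typescript
import { CogniKin } from '@cognikin/client';

// Use the sandbox key everywhere except production
const apiKey = process.env.NODE_ENV === 'production'
  ? process.env.COGNIKIN_API_KEY_LIVE   // ck_sk_live_...
  : process.env.COGNIKIN_API_KEY_TEST;  // ck_sk_test_...

const brain = new CogniKin({ apiKey });
```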
Sandbox Users
Create predictable test users with known profiles:
\\\`typescript // Set up a known profile for testing await brain.updateProfile({ userId: 'test_user_achiever', preferredFraming: 'achievement', communicationStyle: 'brief_directive', });
// Verify your agent adapts correctly const context = await brain.getContext({ userId: 'test_user_achiever', task: 'test task', });
assert(context.suggestedFraming === 'achievement'); \\\`
Unit Testing with Mocks
\\\`typescript jest.mock('@cognikin/client');
const mockGetContext = jest.fn().mockResolvedValue({ requestId: 'test_req', suggestedFraming: 'achievement', communicationStyle: 'brief_directive', complexity: 'break_into_steps', confidence: 0.8, });
CogniKin.prototype.getContext = mockGetContext;
test('adapts response based on CogniKin context', async () => { const response = await handleUserRequest('user_123', 'build login'); expect(mockGetContext).toHaveBeenCalledWith({ userId: 'user_123', task: 'build login', }); expect(response).toContain('Step 1'); }); \\\`
Testing All Framing Types
\\\typescript test.each([ ['achievement', 'Complete this'], ['learning', 'Explore how'], ['micro_task', 'Start by'], ['challenge', 'Can you'], ])('handles %s framing correctly', async (framing, expectedPhrase) => { mockGetContext.mockResolvedValue({ suggestedFraming: framing }); const response = await generateResponse('build feature'); expect(response).toContain(expectedPhrase); }); \\\
Debug Logging
\\\typescript const brain = new CogniKin({ apiKey: process.env.COGNIKIN_API_KEY, debug: true, // Logs all requests and responses }); \\\
Inspecting Recommendations
\\\`typescript const context = await brain.getContext({ userId: user.id, task });
console.log('CogniKin context:', { framing: context.suggestedFraming, style: context.communicationStyle, confidence: context.confidence, rationale: context.rationale, phase: context.metadata.profilePhase, }); \\\`
Common Issues
| Issue | Cause | Fix |
|---|---|---|
| Low confidence persists | Not enough interactions or poor task descriptions | Provide richer task strings, report all outcomes |
| Same recommendation every time | Outcome reporting missing or always positive | Verify reportOutcome() is called with accurate data |
| Unexpected recommendations | User behaviour differs from assumptions | Check the \rationale\ field to understand CogniKin's reasoning |
Production Checklist
Pre-Launch
Credentials:
- [ ] Switched from test to live API key (\ck_sk_live_\)
- [ ] API key stored in environment variables or secrets manager
- [ ] Key is not committed to version control
- [ ] Key rotation schedule established (90-day cycle)
Error Handling:
- [ ] Fallback behaviour implemented for CogniKin outages
- [ ] Retry logic with exponential backoff in place
- [ ] Errors logged to monitoring service
- [ ] Graceful degradation tested end-to-end
Outcome Reporting:
- [ ] reportOutcome() called for every interaction
- [ ] Reporting runs in background (non-blocking)
- [ ] Optional fields (timeToStart, flowState, satisfaction) included when available
Performance:
- [ ] CogniKin adds < 50ms latency to responses
- [ ] Appropriate request timeouts configured
- [ ] Caching strategy in place (TTL < 5 minutes)
- [ ] Load tested with expected user volume
Monitoring
\\\typescript const brain = new CogniKin({ apiKey: process.env.COGNIKIN_API_KEY, onRequest: (params) => { metrics.increment('ck.request'); }, onResponse: (context, duration) => { metrics.timing('ck.latency', duration); metrics.gauge('ck.confidence', context.confidence); }, onError: (error) => { metrics.increment('ck.error', { code: error.code }); logger.error('CogniKin error', error); } }); \\\
Gradual Rollout
Roll out CogniKin to a percentage of users first:
\\\`typescript const ROLLOUT_PERCENTAGE = 25;
async function handleRequest(userId: string, query: string) { const useCK = hashId(userId) % 100 < ROLLOUT_PERCENTAGE;
if (useCK) { const context = await brain.getContext({ userId, task: query }); return generateAdaptedResponse(query, context); } else { return generateDefaultResponse(query); } } \\\`
Post-Launch Metrics
Track weekly:
| Metric | What It Tells You |
|---|---|
| Completion Rate | Are users completing more tasks with CogniKin? |
| Time to Start | Are users procrastinating less? |
| Flow State Frequency | Are users more engaged? |
| CogniKin Confidence | Are profiles converging and stabilising? |
Success threshold: A 20%+ increase in task completion rates within 30 days indicates effective integration.
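If you are running the gradual rollout above, you can estimate the completion-rate lift from your own outcome logs. A sketch, with an illustrative log shape and an assumed outcomeLog variable:

```typescript
// Illustrative shape for outcomes you already log in your own system
interface LoggedOutcome {
  usedCogniKin: boolean;
  started: boolean;
  completed: boolean;
}

function completionRate(outcomes: LoggedOutcome[], usedCogniKin: boolean): number {
  const started = outcomes.filter(o => o.usedCogniKin === usedCogniKin && o.started);
  if (started.length === 0) return 0;
  return started.filter(o => o.completed).length / started.length;
}

// Compare the CogniKin cohort against the control cohort from the rollout
const lift = completionRate(outcomeLog, true) - completionRate(outcomeLog, false);
console.log(`Completion-rate lift: ${(lift * 100).toFixed(1)}%`);
```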
=== SDK REFERENCE ===
TypeScript SDK
Installation
\\\bash npm install @cognikin/client \\\
Client Configuration
\\\`typescript import { CogniKin } from '@cognikin/client';
const brain = new CogniKin({ apiKey: process.env.COGNIKIN_API_KEY, baseUrl: 'https://api.cognikin.me/v1', // default timeout: 5000, // request timeout in ms retries: 3, // auto-retry failed requests debug: false, // enable debug logging }); \\\`
Methods
getContext() — Get knowledge context: \\\typescript const context = await brain.getContext({ userId: 'user_123', task: 'build a feature', complexity: 'medium', taskType: 'coding', metadata: { source: 'chat' }, }); \\\
reportOutcome() — Report interaction outcome: \\\typescript await brain.reportOutcome({ requestId: context.requestId, started: true, completed: true, timeToStart: 45, flowState: true, satisfaction: 0.9, }); \\\
updateProfile() — Override profile preferences: \\\typescript const profile = await brain.updateProfile({ userId: 'user_123', preferredFraming: 'achievement', communicationStyle: 'brief_directive', }); \\\
TypeScript Types
\\\typescript import type { GetContextParams, KnowledgeContext, OutcomeParams, KnowledgeProfile, UpdateProfileParams, FramingType, CommunicationStyle, ComplexityLevel, EncouragementLevel, } from '@cognikin/client'; \\\
Error Handling
\\\`typescript import { CogniKinError } from '@cognikin/client';
try { const context = await brain.getContext({ userId: user.id, task }); } catch (error) { if (error instanceof CogniKinError) { console.error(\CogniKin error: ${error.code} - ${error.message}\); console.error(\HTTP status: ${error.statusCode}\); } } \\\`
Event Listeners / Middleware
\\\`typescript brain.on('request', (params) => { console.log('CogniKin request:', params); });
brain.on('response', (context, durationMs) => { console.log(\CogniKin response in ${durationMs}ms, confidence: ${context.confidence}\); });
brain.on('error', (error) => { console.error('CogniKin error:', error.code); }); \\\`
Python SDK
Installation
\\\bash pip install cognikin \\\
Client Configuration
\\\`python from cognikin import CogniKin import os
brain = CogniKin( api_key=os.getenv("COGNIKIN_API_KEY"), base_url="https://api.cognikin.me/v1", # default timeout=5.0, # request timeout in seconds max_retries=3, # auto-retry failed requests debug=False, # enable debug logging ) \\\`
Async Methods
get_context(): \\\python context = await brain.get_context( user_id="user_123", task="build a feature", complexity="medium", task_type="coding", metadata={"source": "chat"}, ) \\\
report_outcome(): \\\python await brain.report_outcome( request_id=context.request_id, started=True, completed=True, time_to_start=45, flow_state=True, satisfaction=0.9, ) \\\
update_profile(): \\\python profile = await brain.update_profile( user_id="user_123", preferred_framing="achievement", communication_style="brief_directive", ) \\\
Synchronous Client
\\\`python from cognikin import CogniKinSync
brain_sync = CogniKinSync(api_key=os.getenv("COGNIKIN_API_KEY"))
# No async/await needed
context = brain_sync.get_context(user_id="user_123", task="test") brain_sync.report_outcome(request_id=context.request_id, started=True) \\\`
Type Hints
\\\`python from cognikin import ( CogniKin, KnowledgeContext, KnowledgeProfile, FramingType, CommunicationStyle, )
# Fully typed with Pydantic models
context: KnowledgeContext = await brain.get_context( user_id="user_123", task="test" ) \\\`
Error Handling
\\\`python from cognikin.exceptions import ( CogniKinError, RateLimitError, UnauthorizedError, ValidationError, )
try: context = await brain.get_context(user_id=user_id, task=task) except RateLimitError as e: print(f"Rate limited. Retry after {e.retry_after}s") except UnauthorizedError: print("Invalid API key") except ValidationError as e: print(f"Invalid params: {e.message}") except CogniKinError as e: print(f"CogniKin error: {e.code} - {e.message}") \\\`
REST API
Base URL
\\\ https://api.cognikin.me/v1 \\\
Authentication
Include your API key in the \Authorization\ header:
\\\ Authorization: Bearer ck_sk_live_your_key_here Content-Type: application/json \\\
Endpoints
POST /v1/context
Get knowledge-aware context for a user interaction.
\\\bash curl -X POST https://api.cognikin.me/v1/context \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "userId": "user_123", "task": "build a login page", "complexity": "medium", "taskType": "coding" }' \\\
Response: \\\json { "requestId": "req_abc123", "suggestedFraming": "micro_task", "communicationStyle": "brief_directive", "complexity": "break_into_steps", "encouragement": "moderate", "confidence": 0.87, "rationale": "User has 85% completion rate with step-by-step guidance", "metadata": { "profilePhase": "optimised", "interactionCount": 47, "explorationMode": false } } \\\
POST /v1/outcome
Report interaction outcome.
\\\bash curl -X POST https://api.cognikin.me/v1/outcome \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "requestId": "req_abc123", "started": true, "completed": true, "timeToStart": 45, "flowState": true, "satisfaction": 0.9 }' \\\
Response: \\\json { "acknowledged": true, "profileUpdated": true } \\\
PUT /v1/profile/:userId
Update user profile preferences.
\\\bash curl -X PUT https://api.cognikin.me/v1/profile/user_123 \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "preferredFraming": "achievement", "communicationStyle": "brief_directive" }' \\\
Response: \\\json { "userId": "user_123", "preferredFraming": "achievement", "communicationStyle": "brief_directive", "complexityPreference": "break_into_steps", "encouragementLevel": "moderate", "totalInteractions": 47, "completionRate": 0.85, "confidence": 0.87 } \\\
Rate Limit Headers
All responses include rate limit information:
\\\ X-RateLimit-Limit: 100 X-RateLimit-Remaining: 87 X-RateLimit-Reset: 1642247400 \\\
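If you call the REST API directly, you can read these headers to throttle proactively. A sketch using fetch (the threshold of 5 remaining requests is illustrative):

```typescript
const res = await fetch('https://api.cognikin.me/v1/context', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.COGNIKIN_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ userId: 'user_123', task: 'build a login page' }),
});

const remaining = Number(res.headers.get('X-RateLimit-Remaining'));
const resetAt = Number(res.headers.get('X-RateLimit-Reset')); // Unix timestamp (seconds)

if (remaining < 5) {
  console.warn(`Close to the rate limit; window resets at ${new Date(resetAt * 1000).toISOString()}`);
}
```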
Webhooks (Coming Soon)
Register webhook endpoints to receive real-time notifications when user profiles reach key milestones:
\\\json { "event": "profile.optimised", "userId": "user_123", "confidence": 0.92, "interactionCount": 25 } \\\
Error Response Format
All errors follow a consistent structure:
\\\json { "error": { "code": "RATE_LIMIT_EXCEEDED", "message": "Too many requests. Please retry after 60 seconds.", "details": { "limit": 100, "window": "60s", "retryAfter": 42 }, "requestId": "req_abc123", "timestamp": "2024-01-15T10:30:00Z" } } \\\
Ready to Build Your Second Brain?
Start capturing knowledge today. Your future self will thank you.