
Externalized Cognition

Your Mind, Amplified.
Stop Losing Thoughts. Start Building Intelligence.

Persistent memory. Knowledge graphs. Instant recall. One API call.

94%
Recall Accuracy
100%
Context Retained
<50ms
Retrieval

Your Brain Has a Storage Problem

Human Memory ❌

Current reality

Forget 80% of what you read within a week
Scattered notes across 12 different apps
Search fails because you forgot the keywords
Context lost between conversations
Ideas die in the gap between thinking and recording
Brilliant connections visible only in hindsight

Result: Lossy. Fragmented. Unreliable.

CogniKin Brain ✅

Externalized cognition

Every thought captured and connected automatically
One knowledge graph — all sources unified
Semantic search finds meaning, not just keywords
Context persists forever, surfaces when relevant
Capture-to-connection pipeline runs in real time
Relationships emerge as your brain grows

Result: Total Recall. Zero Fragmentation.

Before CogniKin vs After CogniKin

Replace Scattered Notes with a Living Knowledge Graph

Before: Scattered & Lost

Multiple apps, no connections, lost context

old-workflow.ts

```typescript
// Before CogniKin: The Scattered Way

// Take notes in Notion
await notion.createPage({
  title: "Meeting with Sarah",
  content: meetingNotes
});

// Save bookmark in Raindrop
await raindrop.save({
  url: articleUrl,
  tags: ["ai", "research"]
});

// Reference in Obsidian
await obsidian.appendToDaily(
  `Discussed [[AI Strategy]] with Sarah`
);

// 3 months later...
// "Where did I put that thing Sarah mentioned?"
const search1 = await notion.search("Sarah AI");
const search2 = await obsidian.search("Sarah");
const search3 = await raindrop.search("ai research");
// ...still can't find it

// Total: 3 apps, 0 connections
// Recall: prayer-based
// Context: gone
```
After: Connected & Alive

One brain, automatic connections, instant recall

with-cognikin.ts

```typescript
// After CogniKin: The Connected Way

import { CogniKin } from '@cognikin/client';
const brain = new CogniKin(process.env.CK_KEY);

// Capture knowledge
await brain.ingest({
  content: meetingNotes,
  source: "meeting",
  entities: ["sarah-chen"],
  tags: ["ai-strategy"]
});

// Everything connects automatically
// CogniKin links Sarah → AI Strategy →
// your prior research → related articles

// 3 months later...
const recall = await brain.query({
  query: "What did Sarah say about AI?",
  depth: 2 // traverse 2 relationship hops
});

// Returns: exact context + related nodes
// + connection graph showing WHY it's relevant

// Total: 1 brain, ∞ connections
// Recall: 94% accuracy
// Context: permanent
```

Externalized Cognition in 3 Steps

Build Your Second Brain in Minutes

Step 1

Capture Everything

Notes, conversations, documents — everything becomes a knowledge node

Step 2

CogniKin Connects

Automatic knowledge graph — relationships emerge naturally

Step 3

Recall Anything

Semantic search across your entire brain — instant, accurate, contextual

Real Results

94%
Recall Accuracy
<50ms
Retrieval Speed
100%
Context Retention
I stopped losing ideas the day I started using CogniKin. Six months in, my brain has 3,000 nodes and surfaces connections I never would have made on my own. It's not note-taking. It's externalized thinking.
Anthony Bracey
Founder, Kuranda Industries
View Full Case Study

Add CogniKin to Your Workflow in 5 Minutes

Clean SDK That Works with Any Application

brain-integration.ts

```typescript
import { CogniKin } from '@cognikin/client';

const brain = new CogniKin({
  apiKey: process.env.CK_KEY
});

// Capture a thought or document
const node = await brain.ingest({
  content: "Meeting notes from product review...",
  type: "meeting",
  entities: ["sarah-chen", "product-v2"],
  tags: ["roadmap", "q2-planning"],
  metadata: {
    timestamp: Date.now(),
    source: "zoom"
  }
});

// Node auto-connects to related knowledge:
// {
//   id: "node_a8f3...",
//   connections: 12,
//   relatedDcps: ["product", "strategy"],
//   activationLevel: "L2"
// }

// Later: recall with semantic search
const results = await brain.query({
  query: "What was decided about the roadmap?",
  depth: 2,
  limit: 10
});

// Returns nodes + relationship paths + context
// Brain gets smarter with every interaction
```

Start Free. Scale Your Brain.

Pricing That Grows with Your Knowledge

Free

Perfect for Exploration

$0/month
  • 1,000 Knowledge Nodes
  • 5 Brains
  • Full API Access
  • Community Support
Popular

Builder

For Serious Thinkers

$49/month
  • 50,000 Knowledge Nodes
  • Unlimited Brains
  • Priority Support
  • Knowledge Graph Viz

Team

For Cognitive Teams

$199/month
  • 500,000 Knowledge Nodes
  • Multi-User Brains
  • Advanced Analytics
  • Dedicated Onboarding

Enterprise

For Organisations

Custom
  • Unlimited Nodes
  • Self-Hosted Option
  • SLA Guarantees
  • Custom Integrations

Full Documentation

Complete developer documentation — one document, zero navigation.


CogniKin — Developer Documentation

Complete reference for building AI agents with persistent memory via CogniKin.


Table of Contents

Getting Started

Core Concepts

API Reference

Guides

SDK Reference


=== GETTING STARTED ===

Introduction

CogniKin is a persistent memory and knowledge graph API for AI agents. Your agent processes tokens — it doesn't remember. CogniKin gives it a brain: structured, searchable, persistent knowledge that survives sessions.

Why CogniKin?

Building effective AI agents requires persistent context that outlives a single conversation. Traditional approaches require:

  • Re-ingesting documentation and context every session
  • Manual state management across conversation boundaries
  • Custom vector search infrastructure for knowledge retrieval
  • Constant re-learning of facts the agent already knew yesterday

CogniKin replaces all of this with two simple API calls:

```typescript
// 1. Get context before responding
const context = await brain.getContext({
  userId: user.id,
  task: userQuery,
});

// 2. Report the outcome
await brain.reportOutcome({
  requestId: context.requestId,
  started: true,
  completed: true,
});
```

Key Benefits

| Benefit | Description |
| --- | --- |
| 80% Less Boilerplate | Replace custom context management with one API call |
| 100% Context Retained | Knowledge persists across sessions, deployments, and agent restarts |
| Automatic Relevance | CogniKin surfaces the most relevant knowledge based on the current query |
| Framework Agnostic | Works with any AI agent framework or custom implementation |

How It Works

CogniKin uses knowledge graphs and semantic search to build persistent memory for your agent:

  1. Context Request — Before your agent responds, call `getContext()` with the agent ID and current query
  2. Knowledge Retrieval — CogniKin returns relevant knowledge nodes, activation levels, and related concepts
  3. Enriched Response — Your agent uses this context to generate informed, consistent responses
  4. Feedback Reporting — Report whether the retrieved knowledge was useful
  5. Continuous Learning — CogniKin refines relevance scoring based on feedback

Next Steps

Follow the Quick Start guide to integrate CogniKin in 5 minutes.


Quick Start

Get CogniKin integrated into your AI agent in 5 minutes.

1. Get Your API Key

Sign up at cognikin.me and generate an API key from the dashboard.

2. Install the SDK

```bash
# TypeScript / JavaScript
npm install @cognikin/client

# Python
pip install cognikin
```

3. Initialize the Client

```typescript
import { CogniKin } from '@cognikin/client';

const brain = new CogniKin({
  apiKey: process.env.COGNIKIN_API_KEY,
});
```

4. Get Context Before Responding

```typescript
const context = await brain.getContext({
  userId: user.id,
  task: "build a login page",
  metadata: {
    timestamp: Date.now(),
    source: "chat",
  },
});

// Response includes:
// {
//   requestId: "req_abc123",
//   suggestedFraming: "micro_task",
//   communicationStyle: "brief_directive",
//   complexity: "break_into_steps",
//   confidence: 0.87
// }
```

5. Adapt Your Response

```typescript
const systemPrompt = `You are a helpful coding assistant.
Communication style: ${context.communicationStyle}
Task framing: ${context.suggestedFraming}
Complexity approach: ${context.complexity}`;

const response = await llm.generate({
  system: systemPrompt,
  user: userQuery,
});
```

6. Report the Outcome

```typescript
await brain.reportOutcome({
  requestId: context.requestId,
  started: true,
  completed: true,
  timeToStart: 45,
  satisfaction: 0.9,
});
```

Complete Example

```typescript
import { CogniKin } from '@cognikin/client';

const brain = new CogniKin({ apiKey: process.env.COGNIKIN_API_KEY });

export async function handleUserRequest(
  userId: string,
  query: string
): Promise<string> {
  // 1. Get knowledge context
  const context = await brain.getContext({
    userId,
    task: query,
    metadata: { timestamp: Date.now() },
  });

  // 2. Build adapted system prompt
  const systemPrompt = `You are a helpful assistant.
Communication style: ${context.communicationStyle}
Task framing: ${context.suggestedFraming}
Complexity approach: ${context.complexity}`;

  // 3. Generate response
  const response = await llm.generate({
    system: systemPrompt,
    user: query,
  });

  // 4. Report outcome in background
  brain.reportOutcome({
    requestId: context.requestId,
    started: true,
    completed: true,
  }).catch(console.error);

  return response;
}
```


Installation

Prerequisites

  • Node.js 16+ for TypeScript/JavaScript
  • Python 3.8+ for Python
  • A CogniKin API key from the dashboard

TypeScript / JavaScript

```bash
# npm
npm install @cognikin/client

# yarn
yarn add @cognikin/client

# pnpm
pnpm add @cognikin/client
```

Verify installation:

```typescript
import { CogniKin } from '@cognikin/client';

const brain = new CogniKin({
  apiKey: process.env.COGNIKIN_API_KEY,
});
console.log('CogniKin client initialized');
```

Python

```bash
# pip
pip install cognikin

# poetry
poetry add cognikin
```

Verify installation:

```python
from cognikin import CogniKin
import os

brain = CogniKin(api_key=os.environ.get("COGNIKIN_API_KEY"))
print("CogniKin client initialized")
```

REST API (Direct)

No SDK required. Use any HTTP client against the base URL:

```
https://api.cognikin.me/v1
```

```bash
curl https://api.cognikin.me/v1/context \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "userId": "user_123",
    "task": "build a login page"
  }'
```

Framework Integration Guides

| Framework | Setup |
| --- | --- |
| Next.js | Import the SDK in API routes or server components. Use env vars for the key. |
| Express | Initialize once at app startup, call per-request in route handlers. |
| FastAPI | Async client with `await brain.get_context()`. Initialize at module level. |
| LangChain | Wrap CogniKin calls in a custom tool or chain step. |
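As one illustration of the Express row above, here is a minimal sketch using the TypeScript SDK described later in this document. The route path, request shape, and `generateReply` helper are assumptions for the example, not part of CogniKin:

```typescript
import express from 'express';
import { CogniKin } from '@cognikin/client';

const app = express();
app.use(express.json());

// Initialize once at app startup
const brain = new CogniKin({ apiKey: process.env.COGNIKIN_API_KEY });

// Call per-request inside a route handler
app.post('/chat', async (req, res) => {
  const { userId, message } = req.body; // hypothetical request shape
  const context = await brain.getContext({ userId, task: message });
  const reply = await generateReply(message, context); // your own LLM call
  res.json({ reply });
});

app.listen(3000);
```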

Environment Variables

```bash
# .env
COGNIKIN_API_KEY=ck_sk_live_your_secret_key_here
```

Security: Never commit API keys to version control. Use environment variables or a secrets manager.


Authentication

API Key Format

```
ck_sk_live_1234567890abcdef
```

| Segment | Meaning |
| --- | --- |
| `ck_` | CogniKin identifier |
| `sk_` | Secret key |
| `live_` or `test_` | Environment (production or sandbox) |
| Remainder | Unique credential |

Getting Your API Key

  1. Sign up at cognikin.me
  2. Navigate to the API Keys section in your dashboard
  3. Click "Generate New Key"
  4. Copy and store the key securely (it is shown only once)

Using Your API Key

TypeScript:

```typescript
import { CogniKin } from '@cognikin/client';

const brain = new CogniKin({
  apiKey: process.env.COGNIKIN_API_KEY,
});
```

Python:

```python
from cognikin import CogniKin
import os

brain = CogniKin(api_key=os.environ.get("COGNIKIN_API_KEY"))
```

REST:

```bash
curl https://api.cognikin.me/v1/context \
  -H "Authorization: Bearer ck_sk_live_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{"userId": "user_123", "task": "complete the report"}'
```

Test Mode vs Live Mode

| Environment | Key Prefix | Behaviour |
| --- | --- | --- |
| Test | `ck_sk_test_` | Free unlimited requests, isolated sandbox, profiles reset monthly |
| Live | `ck_sk_live_` | Production data, real profiles, metered billing |

Rate Limits

| Plan | Requests/Second | Monthly Limit |
| --- | --- | --- |
| Starter | 10 req/s | 100,000 |
| Pro | 100 req/s | 1,000,000 |
| Enterprise | Custom | Unlimited |

Key Rotation

Rotate keys every 90 days. Generate a new key before revoking the old one to avoid downtime.

Troubleshooting

| Error | Cause | Fix |
| --- | --- | --- |
| `401 Unauthorized` | Invalid or missing API key | Verify the key is correct and the `Authorization` header is set |
| `403 Forbidden` | Key lacks permissions | Check key permissions in dashboard |
| `429 Too Many Requests` | Rate limit exceeded | Implement exponential backoff or upgrade your plan |

=== CORE CONCEPTS ===

How CogniKin Works

The Core Problem

AI agents are stateless. Every session starts from zero — no memory of what happened yesterday, no recall of decisions already made, no persistent knowledge. Re-ingesting context every time is expensive and brittle.

The CogniKin Learning Loop

CogniKin gives your agent persistent memory through a continuous knowledge loop:

  1. Context Request — Call `getContext()` with the agent ID and current query
  2. Knowledge Retrieval — CogniKin returns relevant knowledge nodes, activation levels, and connections
  3. Enriched Response — Your agent uses this context to generate informed output
  4. Feedback Report — Report whether the retrieved knowledge was useful
  5. Graph Update — CogniKin refines relevance scoring and strengthens useful connections

Example Flow

```typescript
// User asks: "Help me refactor the auth module"

// Step 1: Get context
const context = await brain.getContext({
  userId: "user_42",
  task: "refactor auth module",
  taskType: "coding",
  complexity: "high",
});

// Step 2: CogniKin responds with personalised guidance
// context.suggestedFraming === "micro_task"
// context.communicationStyle === "brief_directive"
// context.complexity === "break_into_steps"
// context.confidence === 0.82

// Step 3: Agent uses context to adapt its response
const systemPrompt = `Break this into small steps. Be concise and direct.`;

// Step 4: Report outcome
await brain.reportOutcome({
  requestId: context.requestId,
  started: true,
  completed: true,
  timeToStart: 30,
});
```

Knowledge Dimensions

CogniKin analyses four behavioural dimensions:

Task Framing — How to present the work:

| Type | Description | Best For |
| --- | --- | --- |
| Achievement | Goal-oriented framing | Users who track progress and milestones |
| Learning | Exploration-oriented framing | Users motivated by understanding |
| Micro-task | Small incremental steps | Users who feel overwhelmed by large tasks |
| Challenge | Puzzle/problem framing | Users who enjoy competition and difficulty |

Communication Style — How to deliver information:

| Style | Description |
| --- | --- |
| Brief Directive | Short, direct instructions |
| Detailed Explanatory | Full context with reasoning |
| Conversational | Friendly, encouraging tone |
| Technical | Precise, jargon-heavy |

Complexity Handling — How much detail to provide:

| Level | Description |
| --- | --- |
| Full Solution | Complete working code/answer |
| Break Into Steps | Guided step-by-step process |
| Hints Only | Directional clues without full answers |
| High Level | Conceptual overview |

Encouragement Level — How much positive reinforcement:

| Level | Description |
| --- | --- |
| High | Frequent encouragement and praise |
| Moderate | Balanced feedback |
| Minimal | Results-focused with little commentary |
| None | Pure information exchange |

The ML Model

CogniKin uses a multi-armed bandit approach combined with collaborative filtering:

  • Cold Start (0-5 interactions) — Population-level defaults, low confidence (0.2-0.4). CogniKin explores different approaches.
  • Learning (5-20 interactions) — Patterns emerge, confidence increases (0.5-0.7). Recommendations become personalised.
  • Optimised (20+ interactions) — High-confidence recommendations (0.7-0.95). Occasional exploration to adapt to changing preferences.
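The server-side model itself is not exposed, but the exploration/exploitation behaviour described above can be pictured as a confidence-weighted bandit. The following is a purely illustrative sketch, not CogniKin's actual implementation; every name in it is hypothetical:

```typescript
type Framing = 'achievement' | 'learning' | 'micro_task' | 'challenge';

// Illustrative only: pick a framing, exploring more when confidence is low.
function chooseFraming(
  scores: Record<Framing, number>, // learned success rate per framing
  confidence: number               // 0-1, grows with interaction count
): Framing {
  const explorationRate = 1 - confidence; // cold start => explore often
  const framings = Object.keys(scores) as Framing[];

  if (Math.random() < explorationRate) {
    // Explore: try a random framing to gather signal
    return framings[Math.floor(Math.random() * framings.length)];
  }
  // Exploit: use the framing with the best observed outcomes
  return framings.reduce((best, f) => (scores[f] > scores[best] ? f : best));
}
```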

Privacy

CogniKin stores behavioural patterns only — never task content or PII. User IDs are hashed. All data is encrypted at rest and in transit. Fully GDPR and CCPA compliant.


Knowledge Profiles

A knowledge profile is a dynamic, multi-dimensional model of a user's behavioural preferences. Profiles evolve continuously based on real interaction data.

Profile Structure

```typescript
interface KnowledgeProfile {
  userId: string;

  // Behavioural preferences
  preferredFraming: FramingType;
  communicationStyle: CommunicationStyle;
  complexityPreference: ComplexityLevel;
  encouragementLevel: EncouragementLevel;

  // Performance metrics
  totalInteractions: number;
  completionRate: number;
  averageTimeToStart: number; // seconds
  flowStateFrequency: number; // 0-1

  // Confidence scores (0-1)
  framingConfidence: number;
  styleConfidence: number;
  complexityConfidence: number;

  // Contextual patterns
  taskTypeAffinities: Record<string, number>;
  timeOfDayPatterns: TimePattern[];
}
```

Profile Evolution Phases

| Phase | Interactions | Confidence | Behaviour |
| --- | --- | --- | --- |
| Cold Start | 0-5 | 0.2-0.4 | Population defaults, active exploration |
| Learning | 5-20 | 0.5-0.7 | Patterns emerging, increasing personalisation |
| Optimised | 20+ | 0.7-0.95 | Stable preferences, occasional exploration |

Framing Types

| Type | Example Prompt |
| --- | --- |
| Achievement | "Complete this feature to hit your weekly milestone" |
| Learning | "Explore how OAuth works by building this flow" |
| Micro-task | "Start by just creating the function signature" |
| Challenge | "Can you implement this without using any external libraries?" |
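One simple way for an agent to act on these framings is a lookup from framing type to a prompt prefix. A minimal sketch; only the framing values come from the API, the prefix wording is illustrative:

```typescript
const framingPrefixes: Record<string, string> = {
  achievement: 'Frame the task around hitting a concrete milestone.',
  learning: 'Frame the task as an opportunity to explore and understand.',
  micro_task: 'Frame the task as one small first step.',
  challenge: 'Frame the task as a puzzle to solve.',
};

function framePrompt(basePrompt: string, suggestedFraming: string): string {
  // Fall back to the base prompt if the framing is unrecognised
  const prefix = framingPrefixes[suggestedFraming] ?? '';
  return prefix ? `${prefix}\n\n${basePrompt}` : basePrompt;
}
```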

Multi-Context Profiles

Users often behave differently depending on the task type. CogniKin tracks per-context preferences:

```typescript
// Same user, different contexts
{
  "coding": {
    preferredFraming: "challenge",
    communicationStyle: "brief_directive"
  },
  "documentation": {
    preferredFraming: "micro_task",
    communicationStyle: "conversational"
  },
  "debugging": {
    preferredFraming: "micro_task",
    communicationStyle: "technical"
  }
}
```

Manual Overrides

Use `updateProfile()` to set preferences explicitly — for example, when a user fills out an onboarding questionnaire or explicitly states a preference.

```typescript
await brain.updateProfile({
  userId: "user_123",
  communicationStyle: "brief_directive",
  encouragementLevel: "minimal",
});
```

Manual overrides carry higher weight than learned preferences, but CogniKin continues to adapt based on actual outcomes.


Context API

The Context API is the primary integration point. Call `getContext()` before your agent generates a response to get personalised recommendations.

The getContext() Call

```typescript
const context = await brain.getContext({
  userId: "user_123",
  task: "build a login page",
  complexity: "medium",
  metadata: {
    source: "chat",
    timestamp: Date.now()
  }
});
```

Request Parameters

Required:

| Parameter | Type | Description |
| --- | --- | --- |
| `userId` | string | Unique identifier for the user |
| `task` | string | Description of what the user wants to accomplish |

Optional:

| Parameter | Type | Description |
| --- | --- | --- |
| `complexity` | `"low"`, `"medium"`, or `"high"` | Helps CogniKin tailor recommendations |
| `taskType` | string | Category (e.g., "coding", "writing", "debugging") |
| `metadata` | object | Additional context (source, timestamp, session info) |

Response Structure

```typescript
{
  requestId: "req_abc123",
  suggestedFraming: "micro_task",
  communicationStyle: "brief_directive",
  complexity: "break_into_steps",
  encouragement: "moderate",
  confidence: 0.87,
  rationale: "User has 85% completion rate with step-by-step guidance",
  metadata: {
    profilePhase: "optimised",
    interactionCount: 47,
    explorationMode: false
  }
}
```

Three Integration Approaches

Approach 1: System Prompt Injection

```typescript
const context = await brain.getContext({ userId: user.id, task: userQuery });

const systemPrompt = `You are a helpful coding assistant.
Communication: ${context.communicationStyle}
Framing: ${context.suggestedFraming}
Detail level: ${context.complexity}
Encouragement: ${context.encouragement}`;
```

Approach 2: Programmatic Branching

```typescript
const context = await brain.getContext({ userId: user.id, task: userQuery });

if (context.complexity === "break_into_steps") {
  response = await generateStepByStep(userQuery);
} else if (context.complexity === "hints_only") {
  response = await generateHints(userQuery);
} else {
  response = await generateFullSolution(userQuery);
}

if (context.encouragement === "high") {
  response = addEncouragement(response);
}
```

Approach 3: Hybrid (Recommended)

```typescript
const context = await brain.getContext({ userId: user.id, task: userQuery });

const systemPrompt = buildSystemPrompt(context);
const strategy = selectStrategy(context);

const rawResponse = await llm.generate({
  systemPrompt,
  userQuery,
  temperature: strategy.temperature
});

const finalResponse = postProcess(rawResponse, context);
```

Handling Low Confidence

When `context.confidence < 0.5`, CogniKin is still learning the user. Use a balanced approach:

```typescript
if (context.confidence < 0.5) {
  // Hedge: offer multiple approaches
  response = generateHybridResponse(userQuery);
} else {
  response = generateAdaptedResponse(userQuery, context);
}
```

Caching

Cache context for high-traffic applications, but keep TTL under 5 minutes:

```typescript
const cache = new TTLCache({ ttl: 300_000 });

async function getCachedContext(userId: string, task: string) {
  const key = `${userId}:${hashTask(task)}`;
  let ctx = cache.get(key);
  if (!ctx) {
    ctx = await brain.getContext({ userId, task });
    cache.set(key, ctx);
  }
  return ctx;
}
```

Error Handling

Always fall back gracefully if CogniKin is unavailable:

```typescript
let context;
try {
  context = await brain.getContext({ userId: user.id, task: userQuery });
} catch (error) {
  console.warn('CogniKin unavailable, using defaults:', error);
  context = {
    suggestedFraming: 'achievement',
    communicationStyle: 'conversational',
    complexity: 'break_into_steps',
    confidence: 0.5
  };
}
```


Outcome Reporting

Outcome reporting closes the learning loop. Without it, CogniKin cannot improve its recommendations.

Why Report Outcomes?

| Without Reporting | With Reporting |
| --- | --- |
| Static population defaults | Personalised per-user recommendations |
| No learning over time | Continuous improvement |
| Generic responses | Increasing completion rates |

The reportOutcome() Call

```typescript
await brain.reportOutcome({
  requestId: context.requestId, // from getContext()
  started: true,
  completed: true,
  timeToStart: 45,    // seconds before user began
  flowState: true,    // user was engaged
  satisfaction: 0.9,  // user self-report (0-1)
});
```

Required vs Optional Parameters

| Parameter | Required | Type | Description |
| --- | --- | --- | --- |
| `requestId` | Yes | string | From the `getContext()` response |
| `started` | Yes | boolean | Whether the user began the task |
| `completed` | No | boolean | Whether they finished |
| `timeToStart` | No | number | Seconds before starting |
| `flowState` | No | boolean | Whether the user was engaged |
| `satisfaction` | No | number | User satisfaction (0-1) |
| `metadata` | No | object | Additional context |

Outcome Patterns

Success:

```typescript
await brain.reportOutcome({
  requestId: context.requestId,
  started: true,
  completed: true,
  timeToStart: 10,
  flowState: true,
  satisfaction: 0.95
});
```

Started but not completed:

```typescript
await brain.reportOutcome({
  requestId: context.requestId,
  started: true,
  completed: false,
  timeToStart: 30,
  flowState: false
});
```

Never started:

```typescript
await brain.reportOutcome({
  requestId: context.requestId,
  started: false,
  completed: false
});
```

Delayed start:

```typescript
await brain.reportOutcome({
  requestId: context.requestId,
  started: true,
  completed: true,
  timeToStart: 300,
  flowState: false
});
```

Signal Weights

| Outcome | Signal Strength | Effect on Profile |
| --- | --- | --- |
| Never started | Strong negative | Major confidence decrease for current approach |
| Started + Completed | Strong positive | Major confidence increase |
| Started + Abandoned | Weak negative | Minor adjustment |
| Flow state detected | Bonus positive | Extra confidence boost |
| Fast time-to-start | Positive modifier | Indicates strong knowledge match |

Best Practices

  • Report outcomes for every interaction, including negative ones
  • Report in the background (fire-and-forget) to avoid blocking the user (see the sketch after this list)
  • Include optional fields when available for richer learning signals
  • Report based on actual behaviour, not assumptions
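A minimal sketch of the fire-and-forget pattern mentioned above. It assumes an initialized `brain` client from the Quick Start; the wrapper function name and outcome shape are illustrative:

```typescript
// Fire-and-forget: never let outcome reporting block or fail the user request.
function reportOutcomeInBackground(params: {
  requestId: string;
  started: boolean;
  completed?: boolean;
  timeToStart?: number;
}) {
  brain
    .reportOutcome(params)
    .catch((err) => console.warn('Outcome report failed:', err));
}

// Usage: call after responding to the user, without awaiting
reportOutcomeInBackground({ requestId: context.requestId, started: true, completed: true });
```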

=== API REFERENCE ===

getContext()

Request knowledge-aware context for a user interaction.

Endpoint

```
POST /v1/context
```

Headers

```
Authorization: Bearer ck_sk_live_your_key_here
Content-Type: application/json
```

Request Body

```typescript
interface GetContextParams {
  userId: string;                          // Required. Unique user identifier.
  task: string;                            // Required. Task description.
  complexity?: 'low' | 'medium' | 'high';  // Optional. Task complexity hint.
  taskType?: string;                       // Optional. Category (e.g., "coding").
  metadata?: Record<string, any>;          // Optional. Additional context.
}
```

Response Body

```typescript
interface KnowledgeContext {
  requestId: string; // Unique request ID. Use in reportOutcome().
  suggestedFraming: 'achievement' | 'learning' | 'micro_task' | 'challenge';
  communicationStyle: 'brief_directive' | 'detailed_explanatory' | 'conversational' | 'technical';
  complexity: 'full_solution' | 'break_into_steps' | 'hints_only' | 'high_level';
  encouragement: 'high' | 'moderate' | 'minimal' | 'none';
  confidence: number; // 0-1. How well CogniKin knows this user.
  rationale: string;  // Human-readable explanation.
  metadata: {
    profilePhase: 'cold_start' | 'learning' | 'optimised';
    interactionCount: number;
    explorationMode: boolean; // true when CogniKin is trying new approaches
  };
}
```

Examples

Basic:

```typescript
const context = await brain.getContext({
  userId: 'user_123',
  task: 'build a login form'
});
```

With all parameters:

```typescript
const context = await brain.getContext({
  userId: 'user_123',
  task: 'fix authentication bug',
  complexity: 'high',
  taskType: 'debugging',
  metadata: { source: 'slack', priority: 'urgent' }
});
```

curl:

```bash
curl -X POST https://api.cognikin.me/v1/context \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "userId": "user_123",
    "task": "build a login page",
    "complexity": "medium",
    "taskType": "coding"
  }'
```

Error Codes

| Code | Description |
| --- | --- |
| `INVALID_USER_ID` | User ID is missing or malformed |
| `INVALID_TASK` | Task description is missing or empty |
| `RATE_LIMIT_EXCEEDED` | Too many requests |
| `UNAUTHORIZED` | Invalid API key |

reportOutcome()

Report the outcome of an interaction to close the learning loop.

Endpoint

```
POST /v1/outcome
```

Headers

```
Authorization: Bearer ck_sk_live_your_key_here
Content-Type: application/json
```

Request Body

```typescript
interface OutcomeParams {
  requestId: string;      // Required. From getContext() response.
  started: boolean;       // Required. Did the user begin the task?
  completed?: boolean;    // Did they finish?
  timeToStart?: number;   // Seconds before starting.
  flowState?: boolean;    // Was the user engaged?
  satisfaction?: number;  // User self-report (0-1).
  metadata?: Record<string, any>;
}
```

Response

```json
{
  "acknowledged": true,
  "profileUpdated": true
}
```

Examples

TypeScript:

```typescript
await brain.reportOutcome({
  requestId: context.requestId,
  started: true,
  completed: true,
  timeToStart: 15,
  flowState: true,
  satisfaction: 0.95
});
```

Python:

```python
await brain.report_outcome(
    request_id=context.request_id,
    started=True,
    completed=True,
    time_to_start=15,
    flow_state=True,
    satisfaction=0.95
)
```

curl:

```bash
curl -X POST https://api.cognikin.me/v1/outcome \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "requestId": "req_abc123",
    "started": true,
    "completed": true,
    "timeToStart": 15,
    "flowState": true,
    "satisfaction": 0.95
  }'
```

Error Codes

| Code | Description |
| --- | --- |
| `INVALID_REQUEST_ID` | Request ID not found or malformed |
| `OUTCOME_ALREADY_REPORTED` | An outcome was already submitted for this request |
| `REQUEST_EXPIRED` | Request is older than 24 hours |

updateProfile()

Manually override or set user profile preferences. Useful for onboarding data, explicit user preferences, or importing from external systems.

Endpoint

```
PUT /v1/profile/:userId
```

Headers

```
Authorization: Bearer ck_sk_live_your_key_here
Content-Type: application/json
```

Request Body

```typescript
interface UpdateProfileParams {
  userId: string;
  preferredFraming?: 'achievement' | 'learning' | 'micro_task' | 'challenge';
  communicationStyle?: 'brief_directive' | 'detailed_explanatory' | 'conversational' | 'technical';
  complexityPreference?: 'full_solution' | 'break_into_steps' | 'hints_only' | 'high_level';
  encouragementLevel?: 'high' | 'moderate' | 'minimal' | 'none';
  metadata?: Record<string, any>;
}
```

Response

Returns the updated profile:

```typescript
interface KnowledgeProfile {
  userId: string;
  preferredFraming: FramingType;
  communicationStyle: CommunicationStyle;
  complexityPreference: ComplexityLevel;
  encouragementLevel: EncouragementLevel;
  totalInteractions: number;
  completionRate: number;
  confidence: number;
}
```

Use Cases

User stated preference:

```typescript
// User said: "Just give me the code, no explanations"
await brain.updateProfile({
  userId: 'user_123',
  communicationStyle: 'brief_directive',
  encouragementLevel: 'none'
});
```

Onboarding questionnaire:

```typescript
await brain.updateProfile({
  userId: 'user_123',
  preferredFraming: onboarding.framingPreference,
  complexityPreference: onboarding.detailLevel,
  metadata: { source: 'onboarding', completedAt: Date.now() }
});
```

Import from another system:

```typescript
await brain.updateProfile({
  userId: 'user_123',
  preferredFraming: legacyProfile.taskFraming,
  metadata: { imported: true, source: 'legacy_system' }
});
```

Error Codes

| Code | Description |
| --- | --- |
| `INVALID_USER_ID` | User ID is missing or malformed |
| `INVALID_PREFERENCE` | One or more preference values are invalid |
| `UNAUTHORIZED` | Invalid API key |

Note: Manual overrides carry higher weight than learned preferences, but CogniKin will still adapt based on actual outcomes over time.


Error Handling

Error Response Format

All API errors follow a consistent structure:

```json
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Too many requests. Please retry after 60 seconds.",
    "details": {
      "limit": 100,
      "window": "60s",
      "retryAfter": 42
    },
    "requestId": "req_abc123",
    "timestamp": "2024-01-15T10:30:00Z"
  }
}
```

Common Error Codes

| HTTP Status | Code | Description | Resolution |
| --- | --- | --- | --- |
| 400 | `INVALID_USER_ID` | User ID missing or malformed | Provide a valid user ID string |
| 400 | `INVALID_TASK` | Task description missing or empty | Provide a descriptive task string |
| 400 | `INVALID_REQUEST_ID` | Request ID not found | Use the requestId from getContext() |
| 400 | `INVALID_PREFERENCE` | Invalid preference value | Check allowed enum values |
| 401 | `UNAUTHORIZED` | Invalid or missing API key | Verify your API key |
| 403 | `FORBIDDEN` | Insufficient permissions | Check key permissions in dashboard |
| 404 | `PROFILE_NOT_FOUND` | No profile for this user | Profile is created automatically on first getContext() |
| 429 | `RATE_LIMIT_EXCEEDED` | Too many requests | Implement exponential backoff |
| 429 | `QUOTA_EXCEEDED` | Monthly quota reached | Upgrade plan or wait for reset |
| 500 | `INTERNAL_ERROR` | Server error | Retry with backoff. Contact support if persistent. |

Retry Strategy

```typescript
async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (error.code === 'RATE_LIMIT_EXCEEDED' && attempt < maxRetries) {
        const delay = error.details?.retryAfter
          ? error.details.retryAfter * 1000
          : Math.pow(2, attempt) * 1000;
        await new Promise(r => setTimeout(r, delay));
        continue;
      }
      throw error;
    }
  }
  throw new Error('Max retries exceeded');
}
```

Graceful Degradation

Always have a fallback so your agent works even when CogniKin is unavailable:

```typescript
let context;
try {
  context = await brain.getContext({ userId: user.id, task: userQuery });
} catch (error) {
  console.warn('CogniKin unavailable:', error.code);
  context = {
    suggestedFraming: 'achievement',
    communicationStyle: 'conversational',
    complexity: 'break_into_steps',
    encouragement: 'moderate',
    confidence: 0.5
  };
}
// Continue with context regardless
```


=== GUIDES ===

Integration Examples

Personal Assistant Agent

```typescript
import { CogniKin } from '@cognikin/client';

const brain = new CogniKin({ apiKey: process.env.COGNIKIN_API_KEY });

async function personalAssistant(userId: string, message: string) {
  const context = await brain.getContext({
    userId,
    task: message,
    taskType: detectTaskType(message),
  });

  const systemPrompt = `You are a personal assistant.
Style: ${context.communicationStyle}
Framing: ${context.suggestedFraming}
Detail: ${context.complexity}
Encouragement: ${context.encouragement}`;

  const response = await llm.generate({ system: systemPrompt, user: message });

  // Track outcome asynchronously
  trackOutcome(context.requestId, userId);

  return response;
}
```
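`trackOutcome` is left undefined in the example above. One possible shape for it, purely as a sketch: the `observeUserOutcome` helper is hypothetical and stands in for however your application detects what the user actually did.

```typescript
// Hypothetical helper: report the outcome without blocking the response path.
async function trackOutcome(requestId: string, userId: string) {
  try {
    // Replace with however your app observes the user's real behaviour
    const outcome = await observeUserOutcome(userId); // hypothetical
    await brain.reportOutcome({
      requestId,
      started: outcome.started,
      completed: outcome.completed,
      timeToStart: outcome.timeToStart,
    });
  } catch (err) {
    console.warn('Failed to report outcome:', err);
  }
}
```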

Team Management Agent

```typescript
async function teamAgent(teamMembers: string[], task: string) {
  // Get context for each team member in parallel
  const contexts = await Promise.all(
    teamMembers.map(id =>
      brain.getContext({ userId: id, task, taskType: 'team_task' })
    )
  );

  // Personalise assignment message for each member
  return contexts.map((ctx, i) => ({
    userId: teamMembers[i],
    message: buildAssignment(task, ctx),
    framing: ctx.suggestedFraming,
  }));
}
```

Educational Tutor Agent

```typescript
async function tutorAgent(studentId: string, topic: string) {
  const context = await brain.getContext({
    userId: studentId,
    task: `learn ${topic}`,
    taskType: 'learning',
  });

  let lesson: string;

  if (context.complexity === 'break_into_steps') {
    lesson = await generateProgressiveLessons(topic);
  } else if (context.complexity === 'hints_only') {
    lesson = await generateSocraticQuestions(topic);
  } else {
    lesson = await generateComprehensiveLesson(topic);
  }

  if (context.encouragement === 'high') {
    lesson = addProgressTracking(lesson);
  }

  return lesson;
}
```

Customer Support Agent

```python
from cognikin import CogniKin
import os

brain = CogniKin(api_key=os.getenv("COGNIKIN_API_KEY"))

async def support_agent(user_id: str, issue: str):
    context = await brain.get_context(
        user_id=user_id,
        task=f"resolve: {issue}",
        task_type="support",
    )

    if context.communication_style == "brief_directive":
        response = generate_quick_fix(issue)
    elif context.communication_style == "detailed_explanatory":
        response = generate_walkthrough(issue)
    else:
        response = generate_standard_response(issue)

    await brain.report_outcome(
        request_id=context.request_id,
        started=True,
        completed=True,
    )

    return response
```


Best Practices

Context Usage

  • Always call getContext() before generating responses — This is the core integration point
  • Provide descriptive task strings — `"fix authentication bug in login form"` is far better than `"help"`
  • Include taskType when known — Enables per-task-type personalisation
  • Use all returned fields — They work together as a coherent recommendation

Outcome Reporting

  • Report every interaction — Including negative outcomes (abandonments, non-starts)
  • Report in the background — Fire-and-forget pattern, do not block the user
  • Include optional fields — `timeToStart`, `flowState`, `satisfaction` provide richer learning signals
  • Be honest — Report actual behaviour, not optimistic assumptions

Profile Management

  • Let CogniKin learn naturally — Avoid overriding profiles unless you have explicit user preferences
  • Use updateProfile() sparingly — For onboarding data or explicit user requests only
  • Monitor profile evolution — Verify that completion rates improve over time

Error Handling

  • Always implement fallbacks — Your agent should work even if CogniKin is offline
  • Use exponential backoff for retries — Respect `retryAfter` headers
  • Log errors, don't expose them — Users should not see CogniKin-level errors

Performance

  • Call getContext() asynchronously — Do not block the request pipeline
  • Cache conservatively — TTL under 5 minutes to avoid stale recommendations
  • Use parallel calls — When personalising for multiple users simultaneously

Security

  • Store API keys in environment variables — Never in code or version control
  • Use separate keys per environment — dev, staging, production
  • Rotate keys every 90 days
  • Hash user identifiers — Do not send PII as userId
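For the last point, a minimal sketch of hashing identifiers before sending them as `userId`. The salt env var is an assumption; any stable one-way hash works:

```typescript
import { createHash } from 'node:crypto';

// Send a stable, non-reversible identifier instead of an email or internal ID.
function toCogniKinUserId(internalId: string): string {
  const salt = process.env.CK_USER_ID_SALT ?? ''; // hypothetical env var
  return createHash('sha256').update(salt + internalId).digest('hex');
}

// Usage
const context = await brain.getContext({
  userId: toCogniKinUserId('jane@example.com'),
  task: 'draft quarterly report',
});
```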

Testing & Debugging

Test Mode

Use test-mode API keys during development:

```bash
COGNIKIN_API_KEY=ck_sk_test_1234567890abcdef
```

Test mode provides:

  • Free unlimited requests
  • Isolated data sandbox
  • Profiles reset monthly
  • No billing impact

Sandbox Users

Create predictable test users with known profiles:

```typescript
// Set up a known profile for testing
await brain.updateProfile({
  userId: 'test_user_achiever',
  preferredFraming: 'achievement',
  communicationStyle: 'brief_directive',
});

// Verify your agent adapts correctly
const context = await brain.getContext({
  userId: 'test_user_achiever',
  task: 'test task',
});

assert(context.suggestedFraming === 'achievement');
```

Unit Testing with Mocks

```typescript
jest.mock('@cognikin/client');

const mockGetContext = jest.fn().mockResolvedValue({
  requestId: 'test_req',
  suggestedFraming: 'achievement',
  communicationStyle: 'brief_directive',
  complexity: 'break_into_steps',
  confidence: 0.8,
});

CogniKin.prototype.getContext = mockGetContext;

test('adapts response based on CogniKin context', async () => {
  const response = await handleUserRequest('user_123', 'build login');
  expect(mockGetContext).toHaveBeenCalledWith({
    userId: 'user_123',
    task: 'build login',
  });
  expect(response).toContain('Step 1');
});
```

Testing All Framing Types

```typescript
test.each([
  ['achievement', 'Complete this'],
  ['learning', 'Explore how'],
  ['micro_task', 'Start by'],
  ['challenge', 'Can you'],
])('handles %s framing correctly', async (framing, expectedPhrase) => {
  mockGetContext.mockResolvedValue({ suggestedFraming: framing });
  const response = await generateResponse('build feature');
  expect(response).toContain(expectedPhrase);
});
```

Debug Logging

```typescript
const brain = new CogniKin({
  apiKey: process.env.COGNIKIN_API_KEY,
  debug: true, // Logs all requests and responses
});
```

Inspecting Recommendations

```typescript
const context = await brain.getContext({ userId: user.id, task });

console.log('CogniKin context:', {
  framing: context.suggestedFraming,
  style: context.communicationStyle,
  confidence: context.confidence,
  rationale: context.rationale,
  phase: context.metadata.profilePhase,
});
```

Common Issues

| Issue | Cause | Fix |
| --- | --- | --- |
| Low confidence persists | Not enough interactions or poor task descriptions | Provide richer task strings, report all outcomes |
| Same recommendation every time | Outcome reporting missing or always positive | Verify reportOutcome() is called with accurate data |
| Unexpected recommendations | User behaviour differs from assumptions | Check the `rationale` field to understand CogniKin's reasoning |

Production Checklist

Pre-Launch

Credentials:

  • [ ] Switched from test to live API key (`ck_sk_live_`)
  • [ ] API key stored in environment variables or secrets manager
  • [ ] Key is not committed to version control
  • [ ] Key rotation schedule established (90-day cycle)

Error Handling:

  • [ ] Fallback behaviour implemented for CogniKin outages
  • [ ] Retry logic with exponential backoff in place
  • [ ] Errors logged to monitoring service
  • [ ] Graceful degradation tested end-to-end

Outcome Reporting:

  • [ ] reportOutcome() called for every interaction
  • [ ] Reporting runs in background (non-blocking)
  • [ ] Optional fields (timeToStart, flowState, satisfaction) included when available

Performance:

  • [ ] CogniKin adds < 50ms latency to responses
  • [ ] Appropriate request timeouts configured
  • [ ] Caching strategy in place (TTL < 5 minutes)
  • [ ] Load tested with expected user volume

Monitoring

```typescript
const brain = new CogniKin({
  apiKey: process.env.COGNIKIN_API_KEY,
  onRequest: (params) => {
    metrics.increment('ck.request');
  },
  onResponse: (context, duration) => {
    metrics.timing('ck.latency', duration);
    metrics.gauge('ck.confidence', context.confidence);
  },
  onError: (error) => {
    metrics.increment('ck.error', { code: error.code });
    logger.error('CogniKin error', error);
  }
});
```

Gradual Rollout

Roll out CogniKin to a percentage of users first:

```typescript
const ROLLOUT_PERCENTAGE = 25;

async function handleRequest(userId: string, query: string) {
  const useCK = hashId(userId) % 100 < ROLLOUT_PERCENTAGE;

  if (useCK) {
    const context = await brain.getContext({ userId, task: query });
    return generateAdaptedResponse(query, context);
  } else {
    return generateDefaultResponse(query);
  }
}
```
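`hashId` is not defined in the snippet above; a minimal deterministic sketch (any stable hash that spreads IDs evenly works):

```typescript
import { createHash } from 'node:crypto';

// Map a user ID to a stable number so rollout membership never changes between requests.
function hashId(userId: string): number {
  const digest = createHash('sha256').update(userId).digest();
  return digest.readUInt32BE(0); // combine with % 100 for a percentage bucket
}
```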

Post-Launch Metrics

Track weekly:

| Metric | What It Tells You |
| --- | --- |
| Completion Rate | Are users completing more tasks with CogniKin? |
| Time to Start | Are users procrastinating less? |
| Flow State Frequency | Are users more engaged? |
| CogniKin Confidence | Are profiles converging and stabilising? |

Success threshold: A 20%+ increase in task completion rates within 30 days indicates effective integration.
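If you track these aggregates yourself, a minimal sketch of deriving a weekly completion rate from the outcomes your app already reports; the `OutcomeRecord` shape is hypothetical and lives in your own analytics store, not in CogniKin:

```typescript
// Hypothetical record kept alongside each reportOutcome() call
interface OutcomeRecord {
  started: boolean;
  completed: boolean;
  reportedAt: number; // ms epoch
}

function weeklyCompletionRate(outcomes: OutcomeRecord[], now = Date.now()): number {
  const weekAgo = now - 7 * 24 * 60 * 60 * 1000;
  const recent = outcomes.filter(o => o.reportedAt >= weekAgo && o.started);
  if (recent.length === 0) return 0;
  return recent.filter(o => o.completed).length / recent.length;
}
```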


=== SDK REFERENCE ===

TypeScript SDK

Installation

```bash
npm install @cognikin/client
```

Client Configuration

```typescript
import { CogniKin } from '@cognikin/client';

const brain = new CogniKin({
  apiKey: process.env.COGNIKIN_API_KEY,
  baseUrl: 'https://api.cognikin.me/v1', // default
  timeout: 5000,  // request timeout in ms
  retries: 3,     // auto-retry failed requests
  debug: false,   // enable debug logging
});
```

Methods

getContext() — Get knowledge context:

```typescript
const context = await brain.getContext({
  userId: 'user_123',
  task: 'build a feature',
  complexity: 'medium',
  taskType: 'coding',
  metadata: { source: 'chat' },
});
```

reportOutcome() — Report interaction outcome:

```typescript
await brain.reportOutcome({
  requestId: context.requestId,
  started: true,
  completed: true,
  timeToStart: 45,
  flowState: true,
  satisfaction: 0.9,
});
```

updateProfile() — Override profile preferences:

```typescript
const profile = await brain.updateProfile({
  userId: 'user_123',
  preferredFraming: 'achievement',
  communicationStyle: 'brief_directive',
});
```

TypeScript Types

```typescript
import type {
  GetContextParams,
  KnowledgeContext,
  OutcomeParams,
  KnowledgeProfile,
  UpdateProfileParams,
  FramingType,
  CommunicationStyle,
  ComplexityLevel,
  EncouragementLevel,
} from '@cognikin/client';
```

Error Handling

```typescript
import { CogniKinError } from '@cognikin/client';

try {
  const context = await brain.getContext({ userId: user.id, task });
} catch (error) {
  if (error instanceof CogniKinError) {
    console.error(`CogniKin error: ${error.code} - ${error.message}`);
    console.error(`HTTP status: ${error.statusCode}`);
  }
}
```

Event Listeners / Middleware

```typescript
brain.on('request', (params) => {
  console.log('CogniKin request:', params);
});

brain.on('response', (context, durationMs) => {
  console.log(`CogniKin response in ${durationMs}ms, confidence: ${context.confidence}`);
});

brain.on('error', (error) => {
  console.error('CogniKin error:', error.code);
});
```


Python SDK

Installation

```bash
pip install cognikin
```

Client Configuration

```python
from cognikin import CogniKin
import os

brain = CogniKin(
    api_key=os.getenv("COGNIKIN_API_KEY"),
    base_url="https://api.cognikin.me/v1",  # default
    timeout=5.0,     # request timeout in seconds
    max_retries=3,   # auto-retry failed requests
    debug=False,     # enable debug logging
)
```

Async Methods

get_context():

```python
context = await brain.get_context(
    user_id="user_123",
    task="build a feature",
    complexity="medium",
    task_type="coding",
    metadata={"source": "chat"},
)
```

report_outcome():

```python
await brain.report_outcome(
    request_id=context.request_id,
    started=True,
    completed=True,
    time_to_start=45,
    flow_state=True,
    satisfaction=0.9,
)
```

update_profile():

```python
profile = await brain.update_profile(
    user_id="user_123",
    preferred_framing="achievement",
    communication_style="brief_directive",
)
```

Synchronous Client

```python
from cognikin import CogniKinSync
import os

brain_sync = CogniKinSync(api_key=os.getenv("COGNIKIN_API_KEY"))

# No async/await needed
context = brain_sync.get_context(user_id="user_123", task="test")
brain_sync.report_outcome(request_id=context.request_id, started=True)
```

Type Hints

```python
from cognikin import (
    CogniKin,
    KnowledgeContext,
    KnowledgeProfile,
    FramingType,
    CommunicationStyle,
)

# Fully typed with Pydantic models
context: KnowledgeContext = await brain.get_context(
    user_id="user_123",
    task="test"
)
```

Error Handling

```python
from cognikin.exceptions import (
    CogniKinError,
    RateLimitError,
    UnauthorizedError,
    ValidationError,
)

try:
    context = await brain.get_context(user_id=user_id, task=task)
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after}s")
except UnauthorizedError:
    print("Invalid API key")
except ValidationError as e:
    print(f"Invalid params: {e.message}")
except CogniKinError as e:
    print(f"CogniKin error: {e.code} - {e.message}")
```


REST API

Base URL

```
https://api.cognikin.me/v1
```

Authentication

Include your API key in the `Authorization` header:

```
Authorization: Bearer ck_sk_live_your_key_here
Content-Type: application/json
```

Endpoints

POST /v1/context

Get knowledge-aware context for a user interaction.

```bash
curl -X POST https://api.cognikin.me/v1/context \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "userId": "user_123",
    "task": "build a login page",
    "complexity": "medium",
    "taskType": "coding"
  }'
```

Response:

```json
{
  "requestId": "req_abc123",
  "suggestedFraming": "micro_task",
  "communicationStyle": "brief_directive",
  "complexity": "break_into_steps",
  "encouragement": "moderate",
  "confidence": 0.87,
  "rationale": "User has 85% completion rate with step-by-step guidance",
  "metadata": {
    "profilePhase": "optimised",
    "interactionCount": 47,
    "explorationMode": false
  }
}
```

POST /v1/outcome

Report interaction outcome.

```bash
curl -X POST https://api.cognikin.me/v1/outcome \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "requestId": "req_abc123",
    "started": true,
    "completed": true,
    "timeToStart": 45,
    "flowState": true,
    "satisfaction": 0.9
  }'
```

Response:

```json
{
  "acknowledged": true,
  "profileUpdated": true
}
```

PUT /v1/profile/:userId

Update user profile preferences.

```bash
curl -X PUT https://api.cognikin.me/v1/profile/user_123 \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "preferredFraming": "achievement",
    "communicationStyle": "brief_directive"
  }'
```

Response:

```json
{
  "userId": "user_123",
  "preferredFraming": "achievement",
  "communicationStyle": "brief_directive",
  "complexityPreference": "break_into_steps",
  "encouragementLevel": "moderate",
  "totalInteractions": 47,
  "completionRate": 0.85,
  "confidence": 0.87
}
```

Rate Limit Headers

All responses include rate limit information:

```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1642247400
```
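A minimal sketch of honouring those headers with a raw HTTP client. The pause-before-retry policy shown is an assumption; the SDKs already handle retries when configured:

```typescript
async function callWithRateLimitAwareness(url: string, init: RequestInit) {
  const res = await fetch(url, init);
  const remaining = Number(res.headers.get('X-RateLimit-Remaining') ?? '1');
  const reset = Number(res.headers.get('X-RateLimit-Reset') ?? '0'); // unix seconds

  if (res.status === 429 || remaining === 0) {
    const waitMs = Math.max(0, reset * 1000 - Date.now());
    await new Promise(r => setTimeout(r, waitMs));
    return fetch(url, init); // single retry after the window resets
  }
  return res;
}
```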

Webhooks (Coming Soon)

Register webhook endpoints to receive real-time notifications when user profiles reach key milestones:

```json
{
  "event": "profile.optimised",
  "userId": "user_123",
  "confidence": 0.92,
  "interactionCount": 25
}
```

Error Response Format

All errors follow a consistent structure:

```json
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Too many requests. Please retry after 60 seconds.",
    "details": {
      "limit": 100,
      "window": "60s",
      "retryAfter": 42
    },
    "requestId": "req_abc123",
    "timestamp": "2024-01-15T10:30:00Z"
  }
}
```

Ready to Build Your Second Brain?

Start capturing knowledge today. Your future self will thank you.