~/blog/cognichain-llm-orchestration-for-dotnet

$ cat cognichain-llm-orchestration-for-dotnet.log

Published

#AI #LLM #.NET #Open Source #NuGet #CogniChain
CogniChain logo representing LLM orchestration and workflow automation for .NET

From “Just Another API Call” to Building Something Real 🚀

Remember your first time calling an LLM API? Mine was exhilarating. Three lines of code and suddenly I had an AI responding to my prompts. Magic, right?

Then reality hit.

The second conversation forgot everything from the first. The third API call failed randomly. The fourth prompt was too long because I didn’t manage context. By the fifth attempt, I was drowning in spaghetti code, trying to manually track conversation history, handle retries, and figure out why my prompt variables kept breaking.

I realized I wasn’t building an AI application—I was building infrastructure. Again.

The Problem Nobody Talks About

Here’s what they don’t tell you about building with LLMs:

95% of your code isn’t AI. It’s plumbing. It’s error handling. It’s context management. It’s retry logic. It’s streaming handlers. It’s tool integration. It’s all the things that should “just work” but don’t.

Every developer building with LLMs writes the same boilerplate. We all need conversation memory. We all need prompt templates. We all need retry logic. We all need tool calling. Yet we’re all writing it from scratch, each time, slightly differently, with different bugs.

It’s like building web apps without Express or Flask. Technically possible, but why would you?

So I Built CogniChain

CogniChain isn’t trying to be another LLM wrapper. You already have OpenAI, Azure OpenAI, Anthropic, or whatever SDK you prefer.

Instead, CogniChain is the infrastructure you’re going to build anyway—but battle-tested, documented, and ready to go.

What it actually does:

  • 📝 Prompt Templates - Stop concatenating strings with + like it’s 2005. Use proper templates with type-safe variable substitution.
  • 🧠 Conversation Memory - Automatic history management with smart limits. Your chatbot remembers context without running into token limits.
  • ⛓️ Workflow Orchestration - Chain multiple LLM operations together. Generate outline → Expand sections → Proofread → Format. One clean pipeline.
  • 🛠️ Tool Integration - Let your LLM call actual functions in your app. Search databases, call APIs, perform calculations—real tools, not just prompts.
  • 🔄 Retry Logic - Exponential backoff built-in. Because networks fail and APIs have rate limits.
  • 📡 Streaming Support - Real-time response streaming. Give users that ChatGPT-like experience without the headache.
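Of these, retry with exponential backoff is the easiest to get subtly wrong by hand. As a rough illustration of the pattern (a hypothetical helper sketch, not CogniChain's actual API), the idea is to retry a transient-failure-prone call while doubling the wait between attempts:

```csharp
using System;
using System.Threading.Tasks;

// Illustrative sketch of exponential backoff (hypothetical helper,
// NOT CogniChain's real API): retry an async operation, doubling the
// delay after each failed attempt, and rethrow once attempts run out.
public static class RetrySketch
{
    public static async Task<T> WithBackoffAsync<T>(
        Func<Task<T>> operation,
        int maxAttempts = 4,
        int baseDelayMs = 250)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                // Delay grows 250ms, 500ms, 1000ms, ... per failed attempt.
                int delayMs = baseDelayMs * (1 << (attempt - 1));
                await Task.Delay(delayMs);
            }
        }
    }
}
```

The exception filter (`when`) means the final attempt's exception propagates unchanged, so callers still see the real failure instead of a generic "retries exhausted" wrapper.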

Built for .NET 10, Built for Developers

I built CogniChain in C# because .NET deserves first-class LLM tooling. Not a Python port. Not an afterthought. A library designed for the .NET ecosystem from the ground up.

Fluent API with full IntelliSense support. Async/await throughout. SOLID principles and clean architecture. Everything you’d expect from modern .NET.

var orchestrator = new LLMOrchestrator();
orchestrator.Memory.AddSystemMessage("You are a helpful assistant.");

var workflow = orchestrator.CreateWorkflow()
    .WithPrompt(new PromptTemplate("Help me with: {task}"))
    .WithVariables(new { task = "building an AI app" })
    .AddStep(new YourLLMCallStep());

var result = await workflow.ExecuteAsync();

A handful of lines. That’s a complete AI workflow with memory, prompt management, and error handling.

The Philosophy: Bring Your Own LLM

CogniChain doesn’t force you into a specific LLM provider. No vendor lock-in. No opinions about which API you should use.

Want OpenAI? Great. Prefer Azure? Perfect. Using Anthropic or a local model? Absolutely.

You bring your LLM client. CogniChain handles everything else.

This isn’t just about flexibility—it’s about building sustainable applications. LLM providers change. APIs evolve. New models emerge. Your infrastructure shouldn’t need a rewrite every time.
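The usual way to get this kind of provider independence is to make the orchestration layer depend on one small interface and wrap each provider in an adapter. The sketch below shows the shape of that idea with hypothetical names; it is not CogniChain's actual contract:

```csharp
using System.Threading.Tasks;

// Hypothetical sketch of "bring your own LLM": the orchestration layer
// only sees this interface, and each provider (OpenAI, Azure, Anthropic,
// a local model) gets its own adapter. Names are illustrative only.
public interface IChatClient
{
    Task<string> CompleteAsync(string prompt);
}

// Swapping providers means swapping this one class, nothing else.
public sealed class FakeLocalModelClient : IChatClient
{
    public Task<string> CompleteAsync(string prompt) =>
        Task.FromResult($"echo: {prompt}");
}
```

When a provider changes its SDK, only the adapter behind the interface needs to change; workflows, memory, and retry logic stay untouched.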

What I Learned Building This

1. Developer experience matters more than features

A fluent API that “just works” beats a powerful API you need to read documentation to use.

2. Modularity is freedom

Every component works independently. Use what you need. Ignore what you don’t.

3. The real challenge isn’t the AI

It’s managing context, handling failures gracefully, and building reliable systems around unpredictable models.
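Context management in particular boils down to a budget problem: keep the system message, then fit as many recent messages as possible. Here is a minimal sketch of that trimming idea using a rough character budget as a stand-in for tokens (a hypothetical helper, not CogniChain's actual memory implementation):

```csharp
using System.Collections.Generic;

// Illustrative sketch of conversation-memory trimming (hypothetical,
// NOT CogniChain's real API): keep the system message at index 0, then
// keep the most recent messages that fit a rough character budget.
public static class MemorySketch
{
    public static List<string> Trim(List<string> messages, int budgetChars)
    {
        var kept = new List<string> { messages[0] }; // system message
        int used = messages[0].Length;
        var recent = new List<string>();

        // Walk backwards from the newest message until the budget is spent.
        for (int i = messages.Count - 1; i >= 1; i--)
        {
            if (used + messages[i].Length > budgetChars) break;
            used += messages[i].Length;
            recent.Add(messages[i]);
        }

        recent.Reverse(); // restore chronological order
        kept.AddRange(recent);
        return kept;
    }
}
```

A real implementation would count tokens rather than characters and might summarize dropped messages instead of discarding them, but the shape of the problem is the same.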

Who Is This For?

If you’re building:

  • 🤖 AI chatbots and assistants
  • ✍️ Content generation pipelines
  • 🔄 Automated workflows with LLMs
  • 🧩 Applications that need function calling
  • 📱 Anything that chains multiple AI operations

And you’re tired of writing the same infrastructure code over and over—CogniChain is for you.

It’s Open Source

CogniChain is MIT licensed and available on NuGet. The code is on GitHub. Use it. Fork it. Contribute to it. Build something cool with it.

dotnet add package CogniChain

That’s it. No signup. No API key for CogniChain itself. Just install and start building.

Check out CogniChain: github.com/wouternijenhuis/CogniChain

What’s Next?

This is version 0.1.0—the foundation. I have ideas for semantic memory, function calling improvements, observability features, and more. But I want to hear from you.

What problems are you facing building with LLMs? What infrastructure keeps you from focusing on your actual application?

Let’s stop rewriting the same code. Let’s build better AI applications together.


What infrastructure challenges have you faced building with LLMs? I’d love to hear your story.

Built with .NET 10 | MIT License | Open Source
