The in-memory database for .NET
Build faster systems, faster

Download Now

If the data fits in RAM, why move it back and forth between memory and disk?

In-memory technology is not just the latest buzz; it's spreading fast and is predicted to be highly disruptive. The first version of Oracle ran on a PDP-11 with 128KB of RAM. The amount of available RAM keeps growing, with commodity servers approaching the 1TB mark, enough to hold 99% of all OLTP databases in memory. The traditional RDBMS architecture, and the relational model along with it, are now obsolete.

OrigoDB enables you to build high quality, mission critical systems with real-time performance at a fraction of the time and cost. This is not marketing gibberish! Please read on for a no-nonsense description of our features. Get in touch if you have questions, or download and try it out today!

Blazing speed

In-memory operations are orders of magnitude faster than disk operations. A single OrigoDB engine can execute millions of read transactions per second and thousands of write transactions per second with synchronous command journaling to a local SSD.


Radically reduced complexity

This is the #1 reason we built OrigoDB. A single object-oriented domain model is far simpler than the full stack of a relational model, object/relational mapping, data access code, views and stored procedures. That's a lot of waste that can be eliminated!

Software quality

The OrigoDB engine is 100% ACID out of the box. Commands execute one at a time, transitioning the in-memory model from one consistent state to the next. The data model, commands and queries are all strongly typed, compile-time checked, version controlled with the rest of your code and easily unit tested.
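
The pattern can be sketched in plain C#. The `Command<TModel>` base class below is an illustrative stand-in written for this example, not the actual OrigoDB API; it shows why a typed command is trivial to unit test:

```csharp
using System;
using System.Collections.Generic;

// Illustrative stand-ins, not the actual OrigoDB types.
public class TaskModel
{
    public List<string> Tasks { get; } = new List<string>();
}

public abstract class Command<TModel>
{
    // A command is a named, typed state transition on the model.
    public abstract void Execute(TModel model);
}

public class AddTask : Command<TaskModel>
{
    public string Title { get; }

    public AddTask(string title)
    {
        if (string.IsNullOrWhiteSpace(title))
            throw new ArgumentException("Title is required"); // checked before execution
        Title = title;
    }

    public override void Execute(TaskModel model) => model.Tasks.Add(Title);
}
```

Because a command is just an object acting on a plain model, a unit test is nothing more than executing the command against a fresh model and asserting on the result, with no database fixture involved.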

Bring your own data model

OrigoDB data models, commands and queries are written in C# with runtime access to the entire Mono/.NET class library. Create your own domain-specific model, or go with a generic one:

  • Relational
  • Document
  • Key/value
  • Graph
  • XML
  • Redis clone
  • JavaScript
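
To give a flavor of what a hand-rolled model looks like, here is a minimal key/value model in plain C# (an illustrative sketch, not the generic key/value model that ships with OrigoDB):

```csharp
using System;
using System.Collections.Generic;

// Illustrative in-memory key/value model: plain C#, no mapping layer.
[Serializable]
public class KeyValueModel
{
    private readonly Dictionary<string, string> _store =
        new Dictionary<string, string>();

    public void Set(string key, string value) => _store[key] = value;

    public string Get(string key) =>
        _store.TryGetValue(key, out var value) ? value : null;

    public int Count => _store.Count;
}
```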

Use cases

Given the choice of data models, the possible applications are endless. Here are a few examples just to give you a general picture:

  • General OLTP alternative
  • Domain Driven Design
  • Complex Event Processing
  • Real-time analytics/search
  • Caching
  • Rapid prototyping
  • Online gaming
  • Serving ads

Modular architecture

OrigoDB can be easily customized to meet your specific requirements. Choose from existing modules or implement custom plugins.

Storage and wire formats:

  • JSON
  • Native binary
  • Protobuf

Backing stores:

  • File system
  • SQL Server
  • Event Store
  • NEventStore


Complete history

The command journal contains a complete history of every change ever made to the database.

  • Browse, query, define projections or restore to any point in time
  • Know exactly who did what and when
  • Go back in time and fix bugs that corrupted the data model

Not just .NET

The OrigoDB Server REST API uses the widely supported JSON format, allowing access from virtually any platform. Use the native .NET client for optimal performance and the full set of features. Both interfaces support ad-hoc queries using the powerful LINQ syntax.

Simple management

Easily administer OrigoDB Server using a simple and intuitive web-based interface.

  • Monitor, start and stop nodes
  • Manage replication
  • Execute ad-hoc queries

Multi node partitioning

If you're dealing with more data than can fit on a single server, OrigoDB supports data partitioning. Choose an appropriate data model, define a partitioning scheme, then set up as many server nodes as necessary.
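
A partitioning scheme is ultimately just a deterministic mapping from a key to a node. A minimal sketch (illustrative only; the class name and hashing choice are ours, not OrigoDB's):

```csharp
using System;

// Illustrative partitioning scheme: route each key to a node by stable hash.
public static class Partitioner
{
    public static int NodeFor(string key, int nodeCount)
    {
        // FNV-1a: a simple, stable string hash
        uint hash = 2166136261;
        foreach (char c in key)
        {
            hash ^= c;
            hash *= 16777619;
        }
        return (int)(hash % (uint)nodeCount); // node index in [0, nodeCount)
    }
}
```

Any stable hash works; the important property is that every client routes the same key to the same node.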

Multiplatform and open source

OrigoDB Server runs on Mono/.NET on Windows, Linux or macOS and is based on the open source OrigoDB engine.

High availability

OrigoDB Server features multi-server replication with any number of read-only replicas, manual role switching and automatic client redirection.

How does it work?

Persistence is achieved using write-ahead command logging with optional full model snapshots. On startup, the in-memory model is restored from the most recent snapshot followed by command replay. Commands and queries are processed by the kernel, the component responsible for atomicity, isolation and durability.
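
The mechanism can be sketched in a few lines of C#. This is a deliberately simplified illustration (an in-memory list stands in for the durable journal), not the engine's actual implementation:

```csharp
using System;
using System.Collections.Generic;

// Simplified write-ahead command journaling: append first, then apply.
public class CounterModel { public int Value; }

public class JournalingEngine
{
    private readonly List<int> _journal = new List<int>(); // stand-in for the durable log
    public CounterModel Model { get; private set; } = new CounterModel();

    public void Execute(int increment)
    {
        _journal.Add(increment);    // 1. persist the command (write-ahead)
        Model.Value += increment;   // 2. apply it to the in-memory model
    }

    public void Recover()
    {
        Model = new CounterModel(); // start from an empty model (or a snapshot)
        foreach (var increment in _journal)
            Model.Value += increment; // replay the journal in order
    }
}
```

After a crash, replaying the journal rebuilds the exact pre-crash state; a snapshot merely shortens the replay by providing a later starting point.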

So what's the catch?

Here are the major drawbacks:

  • The command journal, unless truncated, can become large over time
  • Truncating the journal erases the history of events
  • Snapshots take time to read and write
  • The system is read-only while a snapshot is being taken
  • Bringing a large system online can take time
  • Rolling back a failed command involves a full restore