

Mar 5, 2012

It’s a proud day as my colleagues Rich Hickey, Stuart Halloway, and others at Relevance, Inc. have announced Datomic — a new kind of database. I couldn’t do justice to what Datomic provides by describing it myself, so I recommend watching the video by Rich below to get the summary.

Follow that with an introduction to Datalog and Datomic querying by Stuart Halloway:

A Datomic VM appliance and peer are available for those of you interested in trying it out. This is something very new, and it will take some time for the relevant knowledge to spread. I promise to do my part by sharing some Datomic-related blog posts soon.


30 Comments

  1. Very interesting! It seems to me that the biggest problem will be write-heavy workloads. This shows up as an issue in two places.

    First, the transactor is guaranteeing both total ordering and consistency. That’s fine, but it means the transactor cannot be effectively distributed. It is a single bottleneck through which all incoming data must flow. That’s fine for some workloads, but not universally.

    Second, data needs to be synchronized to application-local caches for effective querying. This is very nice as a feature, but it means that there’s a lot of chatter back and forth. It’s good for queries that are light on stored data but heavy on local and/or computed data, but it’s absolutely abysmal for queries that work on large, persisted datasets. It’s unacceptable to be forced to synchronize multi-terabyte datasets to every single application peer. The network load alone is terrifying to consider.

    With all that said, I think many, and perhaps most workloads are not going to hit the above two problems. For most applications, even applications at high scale, Datomic seems to be a really good fit. The two keys seem to be a read-heavy workload and comparatively small working sets.

    (Note that these opinions are based on watching the presentations; I haven’t actually played with Datomic yet.)

  2. Hi Daniel, thanks for checking out Datomic.

    To your first point, yes, writes can’t be distributed, and that is one of the many tradeoffs that preclude the possibility of any universal data solution. The idea is that, by stripping out all the other work normally done by the server (queries, reads, locking, disk sync), many workloads will be supported by this configuration. We don’t target the highest write volumes, as those workloads require different tradeoffs.

    As for your second point, large datasets do not have to be synchronized, and do not have to be local to be fast. Datomic stores data on distributed storage as a tree of covering index nodes. Once the roots have been cached on the peers, a peer can seek to an arbitrary (uncached) point in the index in a single network read, which, from DynamoDB, is backed by an SSD read and is often faster than a local disk seek. What is stored is an immutable btree-like structure (with 1000+-way branching) that can be directly leveraged by peer query engines; it does not have to be copied locally into an indexed form. Datomic is, in one sense, a distributed index.

    The live sync that goes on is just to bridge the gap between the last periodic index and now, and every time a new index is made the last gap can be dropped by the peers. So peers only need to cache the window of data that accumulates between indexing jobs. Other than that, they cache their working sets.
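    To illustrate the read path Rich describes: a peer with the index roots cached can walk datoms lazily, fetching uncached index segments from storage on demand rather than copying the whole index locally. This is only a sketch, assuming the `datomic.api` namespace (aliased as `d`), a connected peer `conn`, and a hypothetical AVET-indexed `:product/price` attribute:

    ```clojure
    ;; Sketch only: assumes (require '[datomic.api :as d]) and a peer
    ;; connection `conn`. d/datoms returns a lazy sequence over an index;
    ;; uncached segments are pulled from storage one network read at a time.
    (let [db (d/db conn)]
      (take 5 (d/datoms db :avet :product/price)))
    ```

    The point being that the peer touches only the index segments along the seek path, not the full dataset.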

  3. Duraid

    WTF!! How can this guy get everything right?

  4. Duraid


  5. Rich,

    The distributed index is basically what I was thinking was going on. It’s the only way to make something like this achievable. My concern is with the final bit:

    Other than that, they cache their working sets.

    So if I’m trying to take the mean of a 2 TB dataset, wouldn’t that require the synchronization of 2 TB of data? If I’m actually reading a datom, does it not have to be local? Obviously, random seeks can be optimized to not cache intervening results, but local queries would seem to require local input data.

  6. I’m still a bit confused about how you can get around transferring every (partial) record over the wire (or caching all of them locally). Take Stuart’s example from his query language demo video:

    [:find ?customer ?product
     :where [?customer :shipAddress ?addr]
            [?addr :zip ?zip]
            [?product :product/weight ?weight]
            [?product :product/price ?price]
            [(Shipping/estimate ?zip ?weight) ?shipCost]
            [(<= ?price ?shipCost)]]

    If Shipping/estimate is a normal JVM function run on a peer, doesn’t this mean that at the very least the peer needs to pull the following items from the data store (or a locally cached version of it):

    • The zip code of every customer in the database (and some kind of ID or PK).
    • The weight and price of every product in the database (and some kind of ID or PK).
    • The full customer and product records of all the results (this is what most DBs would be sending over the wire: the “results”).

    Which means that if there are 100,000 customers in the DB, and 10,000 products in the DB, you’d need to transfer:

    (+ (* 100000 (+ (size-of zip-code)
                    (size-of pk)))
       (* 10000 (+ (size-of weight)
                   (size-of price)
                   (size-of pk)))
       (* distinct-customers-in-results (size-of whole-customer))
       (* distinct-products-in-results (size-of whole-product)))

    Instead of just:

    (+ (* distinct-customers-in-results (size-of whole-customer))
       (* distinct-products-in-results (size-of whole-product)))

    I might be missing something though.

  7. The live sync that goes on is just to bridge the gap between the last periodic index and now, and every time a new index is made the last gap can be dropped by the peers. So peers only need to cache the window of data that accumulates between indexing jobs.

    On further consideration, this point would seem to resolve the majority of my concerns. If I’m understanding this correctly, data does need to be brought over the network in order to be used as query input, but this is done incrementally and then dropped once that data is no longer required. This is better than having to batch-copy all working sets ahead of time and hold onto them for an indefinite period, but it is still likely to be slower than sending the query to the data and receiving a reduced result.

  8. Taking the mean of an entire dataset obviously requires the entire dataset, and is probably a better job for a map-reduce-like approach. But there are many interesting queries against large datasets that are not summaries of the entire dataset.

  9. Taking the mean of an entire dataset obviously requires the entire dataset, and is probably a better job for a map-reduce-like approach. But there are many interesting queries against large datasets that are not summaries of the entire dataset.

    Right. Datomic does not appear to be optimized for analytics-style workloads. Rather, its niche seems to be in the (very common) case of random access subset queries, where you’re filtering a large dataset down to just a few points.

  10. but it is still likely to be slower than sending the query to the data and receiving a reduced result.

    True. But then you don’t have full query power (a join, for example), while Datomic does.

  11. Datomic does not appear to be optimized for analytics-style workloads.

    I disagree about analytics. There too, it is an unlikely scenario to roll up the entire dataset. Also, a set of the Datomic indexes look a lot like the indexes used by column stores (i.e. attribute-major slices), which are quite friendly to analytics. Furthermore, if you had to have a big-bad box to do analytics you’d be copying out of your OLTP DB into it anyway. With Datomic you could set up a big-bad peer and it would automatically be incrementally updated. And, being so, it would be able to do analytics on real-time information.

  12. First of all, congrats to the datomic team for this contribution.

    I think that with analytics, Datomic will allow for some innovative approaches due to the nature of the peers and the data model. So the definition of “analytics-style” workloads may itself shift.

    It is extremely powerful, for example, to be able to store the results of analytics within the same ontology as the source data, and have those analytic facts contributed by a set of peers with varying code and resources.

    The main bottleneck, and maybe it’s a temporary thing, is DynamoDB. It is extremely expensive for storing large amounts of data.

    Also, the promise of analytics workloads not interfering with the web tier is constrained by the fact that you have to pre-allocate DynamoDB bandwidth. So it’s possible for the two to compete for resources.

    It would be interesting to see S3 as an option for “long-term” data that is old or rarely accessed.

  13. Jason

    Is this using parts of Storm, or is this totally separate?

  14. @Jason

    Totally separate.

  15. Mark

    Is there any mechanism for intentionally “forgetting” unneeded data? Aside from helping to control storage costs, there are some important legal implications surrounding the “right to be forgotten” in which sometimes companies are mandated to erase certain data.

  16. @Mark

    In your schema, you can mark any attribute :db/noHistory, and it won’t preserve history when updated. I believe that functionality was built in specifically to cover the use-case you cite.
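    As a sketch of what that looks like in practice (the attribute name here is hypothetical; the map follows the schema-as-data style from the early Datomic docs):

    ```clojure
    ;; Hypothetical attribute that keeps no history: once :user/ssn is
    ;; updated, its prior values are not retained by the indexer.
    {:db/id #db/id[:db.part/db]
     :db/ident :user/ssn
     :db/valueType :db.type/string
     :db/cardinality :db.cardinality/one
     :db/noHistory true
     :db.install/_attribute :db.part/db}
    ```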

    Very cool. I think for an enormous subset of business applications this could hit a sweet spot. Oftentimes I’ve seen kludged-together architectures that involve an OLTP database backing the website or sales systems, with loads of stored queries either slowing down every transaction or running periodically to aggregate and/or copy data. Then, offline, there will be mirrors of the data used to generate analytics and process various business rules to send emails, issue orders, etc. These systems are a total mess to deal with, and the database is constantly a source of issues because of how much it is being asked to do.

    The Datomic architecture lends itself to a CQRS style of design, where the transactor can shred through simple commands (writes) without performing any logic, and then peers can subscribe to data to do online business rule processing, analytics, validation, etc. What would be interesting is if you could specify the queries, and then have a distributed query scheduler determine which peers are best suited to execute them, or whether it’s more appropriate to spin up another peer. Great stuff.

  18. Aliaksey Kandratsenka

    This is very interesting. But too bad you guys don’t provide source.

    I don’t see how some consistency issues can be resolved in your supposedly ACID transactions.

    Say I want to atomically increment/decrement some counter field. It looks like I cannot just retract the old fact and create a new one, because I need the new fact to have the value 1 + the last value, which only the transactor will know. And that seems impossible to do with transactions as data; you seemingly have to send code.

    Another example: I need to check whether something already exists before creating it, i.e. a UNIQUE-like constraint. Sometimes it can be done by naming things, and then supposedly the transaction will fail if I attempt to add something with an existing name.

    So how can things like that be implemented?

  19. mp

    @Aliaksey Just guesses based on a quick skim of the tutorial, but:

    1. Do, in one transaction, [:db/retract entity-id :counter old-value] and [:db/add entity-id :counter (inc old-value)]; if it fails, update your idea of the old value and repeat.

    2. Attributes may be marked as unique in the schema.
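    mp’s first guess could be sketched roughly as follows (this is an assumption-laden sketch, not anything from the tutorial: it presumes the `datomic.api` namespace aliased as `d`, a `:counter` attribute, and that a conflicting retraction makes the transaction throw):

    ```clojure
    ;; Sketch of an optimistic counter increment: retract the old value
    ;; and assert old+1 in a single transaction; on conflict, re-read the
    ;; current value and retry.
    (defn increment-counter! [conn entity-id]
      (let [db  (d/db conn)
            old (:counter (d/entity db entity-id))]
        (try
          @(d/transact conn [[:db/retract entity-id :counter old]
                             [:db/add     entity-id :counter (inc old)]])
          (catch Exception _
            (increment-counter! conn entity-id)))))
    ```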
  20. I waited for the Datomic announcement with great excitement, and I’d like now to share some thoughts, hoping they will be food for more comments or blog posts.

    Datomic certainly provides interesting features, most notably:

    1) Clojure-style data immutability, separating entity values in time.

    2) Declarative query language with powerful aggregation capabilities.

    But unfortunately, my list of concerns is way longer, maybe because some lower level aspects weren’t addressed in the whitepaper, or maybe because my expectations were really too high. Let’s try to briefly enumerate the most relevant ones:

    1) Datomic provides powerful aggregation/processing capabilities, but violates one of the most important rules in distributed systems: collocating processing with data. Data must be moved from storage to the peers’ working sets in order to be aggregated/processed. In my experience, this is a huge penalty when dealing with even medium-sized datasets, and just answering that “we expect it to work for most common use cases” isn’t enough.

    2) In-process caching of working sets usually leads, in my experience, to compromising overall application reliability: the application ends up spending lots of time dealing with the working-set cache, either faulting/flushing objects or gc’ing them, rather than doing its own business.

    3) Transactors are both a Single Point of Bottleneck and a Single Point of Failure: you may not care about the former (which I would, btw), but you have to care about the latter.

    4) You say you avoid sharding, but with transactors being a single point of bottleneck, when the time comes that you have too much data for a single-transactor system, you’ll have to, guess what, shard — and Datomic apparently has no support for this.

    5) There’s no mention of how Datomic deals with network partitions.

    I think that’s enough. I’ll be happy to read any feedback about my points.


    Sergio B.

    I really like having the time element as an integral part of the system. I’ve worked on financial systems where, several months into the project, there were suddenly requirements for auditing and archiving: we want to know, for certain forms, who approved what and when. With Datomic we can get a consistent snapshot of everything at a certain point in time, with no need to play around with new tables backing part of the information; we just have to add a whodunit field. I guess archiving and slimming of databases can be simplified too: make a backup of the snapshot at moment ‘X’ in time with all previous data, and then clean all data before the ‘X’ snapshot. Outstanding applications of Datomic in the financial, legal, and insurance domains are waiting for the right agile company 8)
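    The point-in-time snapshot described above can be sketched with the asOf view of the database (a sketch only: it assumes the `datomic.api` namespace aliased as `d`, and the attribute names are made up):

    ```clojure
    ;; Sketch: query a consistent snapshot of the database as of moment X,
    ;; without any extra history tables in the schema.
    (let [db-at-x (d/as-of (d/db conn) #inst "2012-03-01")]
      (d/q '[:find ?who ?form
             :where [?form :form/approved-by ?who]]
           db-at-x))
    ```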

  22. Benjamin chi

    It will be interesting to see how this compares to Greenplum for big-data analytics using the map/reduce approach, e.g. over 40 TB of data, scanning through at least 2/3 of it for summaries/aggregates/means. And for OLTP-type processing, the write side may have performance issues handling over 10 million complex transactions within a constrained timeframe, e.g. a few hours.

    It’ll also be interesting to find out whether transactions can be backed out to restore the database to a point a couple of hours, days, weeks, or months back. Would there be a transaction record showing that records from that restore point were brought forward to the current time? Ideally, it could work two ways: Datomic could keep all history and not delete any of it, or it could delete all history after the restore point.

    Thanks. I look forward to reading the documentation.

  23. kimberlad

    If one of my peers goes down, and the Transactor as well (assuming it can be configured for HA), do I lose data?

  24. Rich Hickey has answered Sergio’s questions over on the myNoSQL blog:

    (See the comment section at the bottom.)

  25. I realized only now that datom sounds like datum 8) I should have expected something like that.

  26. Eugen Dück

    Playing around with Datomic right now, I really like the concept. Serializing all transactions is a bold move; I hope it performs (which I suspect it could, given the way the transactor is designed), but if not, I’ll check whether using multiple databases is a way to scale.

    Feature-wise, there’s one thing I couldn’t find in the documentation: how can I query the history of a particular entity? Is that possible? The information certainly is there, and I could perhaps hack something very inefficient using “asOf” etc., iterating through all states of the database.

    If this “get history of entity” feature exists, it’d change the way I design my schema. Otherwise I’d have to do what I currently do in relational databases, which is preserving history explicitly by adding tables and columns that I manually fill with data. But I’m hoping Datomic can help me with this…

  27. Eugen Dück

    It looks like I can get the history by means of the :db/txInstant attribute. Nice!
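    That approach might look something like the following query, which joins each datom about an entity to the instant of its transaction (a sketch assuming the `datomic.api` namespace aliased as `d`; everything except the built-in `:db/*` attributes is an assumption):

    ```clojure
    ;; Sketch: recover when each fact about an entity was asserted, using
    ;; the fourth datom position (?tx) and the :db/txInstant of that tx.
    (d/q '[:find ?attr ?value ?when
           :in $ ?e
           :where [?e ?a ?value ?tx]
                  [?a :db/ident ?attr]
                  [?tx :db/txInstant ?when]]
         (d/db conn) entity-id)
    ```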

  28. Magnus Skog

    Hi Rich,

    I like this approach to facts and getting rid of place oriented stuff. Do you guys have any terminology you use when talking about databases with this approach? It’s always nice to have a term to refer to instead of saying “Datomic-like database”.

    What does the competition look like? Are there any other databases that have taken the same route? Targeting the JVM is nice, if you are indeed using the JVM. What about other languages? Do you have any plans to create, e.g., bindings for C to support legacy or future systems and languages?

    Cheers /Magnus

  29. tsaixingwei

    Would designing a relational database using 6th Normal Form / Anchor Modelling give the same advantages as what Datomic is trying to achieve? In 6NF, everything (inserts, updates, deletes) is really an insert (immutability?) with temporal characteristics (changing time, recording time, happening time). You would be able to query for the latest value, or the value at a particular point in the past, or the values between any two points in time. I’ve just started reading up on 6NF and Anchor Modelling recently, and as I watch the video it seems that a lot of the concepts in Datomic are similar.

  30. tsaixingwei

    To have my previous comment make more sense, here are links to two presentations about Anchor Modelling:

    By the way, I’m no expert on Anchor Modelling or 6NF.
