Writing your tests in EDN files

Jacob O'Bryant | 19 Jul 2025

I've previously written about my latest approach to unit tests:

[Y]ou define only the input data for your function, and then the expected return value is generated by calling your function. The expected value is saved to an EDN file and checked into source, at which point you ensure the expected value is, in fact, what you expect. Then going forward, the unit test simply checks that what the function returns still matches what’s in the EDN file. If it’s supposed to change, you regenerate the EDN file and inspect it before committing.

I still like that general approach; however, my previous implementation of it ended up being a little too heavyweight and too reliant on inversion of control. The test runner had all sorts of things built into it for dealing with fixtures, providing a seeded database value, and other concerns. Writing new tests required a little too much cognitive overhead, and I reverted to manual testing (via a mix of the REPL and the browser).

I have now simplified the approach so that writing tests is basically the same as running code in the REPL, and there's barely anything baked into the test runner itself that you have to remember. I put all my tests in EDN files, named with the pattern my_namespace_test.edn, like this:

{:require
 [[com.yakread.model.recommend :refer :all]
  [com.yakread.lib.test :as t]
  [clojure.data.generators :as gen]],
 :tests
 [{:eval (weight 0), :result 1.0}
  _
  {:eval (weight 1), :result 0.9355069850316178}
  _
  {:eval (weight 5), :result 0.7165313105737893}
  ...]}

(weight is a simple function for the forgetting curve, which I'm using in Yakread's recommendation algorithm.)

I only write the :eval part of each test case. The test runner evaluates that code, adds in the :result part, and pprints it all back to the test file. Right now there isn't a concept of "passing" or "failing" tests. Instead, when the tests are right, you check them into git; if any test results change, you'll see it in the diff. Then you can decide whether to commit the new results (if the change is expected) or go fix the bug (if it wasn't). If I had CI tests for my personal projects, I'd probably add a flag to have the test runner report any test cases with changed results as failed.
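For illustration, the core loop of a runner like this can be sketched in a few lines. Everything below is hypothetical (the names run-edn-tests! and eval-case are mine, not the real run-examples!), and a real implementation would also need custom EDN readers for tagged literals such as #time/instant:

```clojure
(ns example.test-runner
  "Hypothetical sketch of an EDN test runner, not the real run-examples!."
  (:require [clojure.edn :as edn]
            [clojure.pprint :as pprint]))

(defn- eval-case
  "Evaluates a test case's :eval form in a scratch namespace that has the
  suite's :require specs loaded, then assocs the value in under :result."
  [require-specs test-case]
  (binding [*ns* (create-ns (gensym "test-eval"))]
    (refer-clojure)
    (apply require require-specs)
    (assoc test-case :result (eval (:eval test-case)))))

(defn run-edn-tests!
  "Reads a _test.edn file, fills in :result for each test case, and
  pprints the updated suite back to the same file."
  [path]
  (let [suite (edn/read-string (slurp path))
        specs (:require suite)]
    (spit path
          (with-out-str
            (pprint/pprint
             (update suite :tests
                     (fn [tests]
                       ;; leave non-map entries (e.g. separators) as-is
                       (mapv #(if (map? %) (eval-case specs %) %)
                             tests))))))))
```

The key idea is that the file round-trips: EDN in, evaluated, pprinted EDN back out, so git diffs do the job of pass/fail reporting.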

In my lib.test namespace I've added a couple of helper functions, such as t/with-db, which populates an in-memory XTDB database value:

{:require
 [[com.yakread.work.digest :refer :all]
  [com.yakread.lib.test :as t]
  [clojure.data.generators :as gen]],
 :tests
 [{:eval
   (t/with-db
    [db
     [{:xt/id "user1", :user/email "user1@example.com"}
      {:xt/id "user2",
       :user/email "user2@example.com",
       :user/digest-last-sent #time/instant "2000-01-01T00:00:00Z"}]]
    (queue-send-digest
     {:biff/db db,
      :biff/now #time/instant "2000-01-01T16:00:01Z",
      :biff/queues
      {:work.digest/send-digest
       (java.util.concurrent.PriorityBlockingQueue. 11 (fn [a b]))}}
     :start)),
   :result
   {:biff.pipe/next
    ({:biff.pipe/current :biff.pipe/queue,
      :biff.pipe.queue/id :work.digest/send-digest,
      :biff.pipe.queue/job
      {:user/email "user1@example.com", :xt/id "user1"}})}}
  ...]}

(queue-send-digest returns a list of users who need to be sent an email digest of their RSS subscriptions and other content.)
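I won't vouch for the internals of the real t/with-db, but with the XTDB 1.x API a helper along these lines could be sketched as a macro (xt/start-node with an empty options map gives you an in-memory node):

```clojure
(ns example.test-helpers
  "Hypothetical sketch of a t/with-db-style helper; the real one may differ."
  (:require [xtdb.api :as xt]))

(defmacro with-db
  "Starts a throwaway in-memory XTDB node, puts the seed documents,
  binds the resulting db value to sym, and runs body."
  [[sym docs] & body]
  `(with-open [node# (xt/start-node {})]
     (->> (vec (for [doc# ~docs] [::xt/put doc#]))
          (xt/submit-tx node#)
          (xt/await-tx node#))
     (let [~sym (xt/db node#)]
       ~@body)))
```

Because the node is opened per test case and closed by with-open, each :eval form gets a fresh database seeded with exactly the documents it lists.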

I like this approach a lot more than the old one: you just write regular code, with test helper functions for seeded databases or whatever if you need them. It's been pretty convenient to write my "REPL" code in these _test.edn files and then have the results auto-update as I develop the function under test.

There are a couple of other doodads: if the code in :eval throws an exception, the test runner writes the exception as data into the test case, albeit under an :ex key instead of :result:

{:require
 [[com.yakread.model.recommend :refer :all]
  [com.yakread.lib.test :as t]
  [clojure.data.generators :as gen]],
 :tests
 [{:eval (weight 0)
   :ex
   {:cause "oh no",
    :data {:it's "sluggo"},
    :via
    [{:type clojure.lang.ExceptionInfo,
      :message "oh no",
      :data {:it's "sluggo"},
      :at
      [com.yakread.model.recommend$eval75461$weight__75462
       invoke
       "recommend.clj"
       60]}],
    :trace
    [[com.yakread.model.recommend$eval75461$weight__75462
      invoke
      "recommend.clj"
      60]
     [tmp418706$eval83727 invokeStatic "NO_SOURCE_FILE" 0]
     [tmp418706$eval83727 invoke "NO_SOURCE_FILE" -1]
     [clojure.lang.Compiler eval "Compiler.java" 7700]
     [clojure.lang.Compiler eval "Compiler.java" 7655]
     [clojure.core$eval invokeStatic "core.clj" 3232]
     [clojure.core$eval invoke "core.clj" 3228]]}}
  ...]}

The stack trace gets truncated so it only contains frames from your :eval code (mostly—I could truncate it a little more).
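Clojure's built-in Throwable->map does most of the exception-as-data work; a sketch of the mechanism (the real runner's truncation logic may differ) could look like:

```clojure
(defn eval-as-data
  "Evaluates form; on success returns {:result v}, on an exception
  returns {:ex <exception data>} with the stack trace cut off at the
  first clojure.lang.Compiler frame, leaving (mostly) the user's frames."
  [form]
  (try
    {:result (eval form)}
    (catch Exception e
      {:ex (update (Throwable->map e) :trace
                   (fn [frames]
                     ;; each frame is [class-sym method file line]
                     (vec (take-while
                           #(not= "clojure.lang.Compiler" (str (first %)))
                           frames))))})))
```

Throwable->map already produces the :cause/:data/:via/:trace shape shown above, so all that's left is trimming :trace.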

I also capture any tap>'d values and insert those into the test case, whether or not there was an exception. It's handy for inspecting intermediate values:

:tests
[{:eval (weight 1),
  :result 0.9355069850316178,
  :tapped ["hello there" "exponent: -1/15"]}
  ...
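The tap-capturing part can be built on Clojure's add-tap/remove-tap; here's a hypothetical sketch (note that tap> dispatches asynchronously, so a real implementation needs some way to let the tap queue drain before reading the collected values):

```clojure
(defn eval-capturing-taps
  "Evaluates form while collecting any tap>'d values into a :tapped vector.
  Hypothetical helper illustrating the mechanism, not the runner's actual code."
  [form]
  (let [tapped (atom [])
        tap-fn #(swap! tapped conj %)]
    (add-tap tap-fn)
    (try
      (let [result (eval form)]
        ;; tap> is asynchronous; crudely give the tap loop a moment to drain
        (Thread/sleep 50)
        (cond-> {:result result}
          (seq @tapped) (assoc :tapped @tapped)))
      (finally
        (remove-tap tap-fn)))))
```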

And that's it. If you want to try this out, you can copy run-examples! (the test runner function) into your own project. It searches your classpath for any files ending in _test.edn and runs the tests therein. I call it from a file watcher (Biff's on-save function) so your test results get updated whenever you save any file in the project.
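For the classpath search, one simple approach is to walk each directory entry on java.class.path (I don't know whether run-examples! does exactly this; jar entries are ignored in this sketch):

```clojure
(ns example.find-tests
  (:require [clojure.java.io :as io]
            [clojure.string :as str]))

(defn test-edn-files
  "Returns the _test.edn files found under directory entries of the
  JVM classpath."
  []
  (for [entry (str/split (System/getProperty "java.class.path")
                         (re-pattern java.io.File/pathSeparator))
        :let [root (io/file entry)]
        :when (.isDirectory root)
        f    (file-seq root)
        :when (str/ends-with? (.getName f) "_test.edn")]
    f))
```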
