Hi there. I'm Asko Nõmm and I'm a contract programmer specializing in Clojure and ClojureScript projects, based out of Estonia.

I'm currently spending my days helping IPRally create the best patent search the world has ever seen and my nights working on Invobi. You can contact me via e-mail asko@repl.ee and check out my open source work on GitHub.


Invo v2

It is my pleasure to announce that the new and improved version 2 of my entirely free invoice generation service, invo.ee, is now live.

Visually you won't see any changes, but behind the scenes a lot has changed. All user data is now fully encrypted: I have no idea what users enter into their invoices, and should anyone somehow hack themselves a copy of the database, neither would they.

Additionally, the entire service is now powered by Clojure, as opposed to the TypeScript it was built with before. The API layer is pure Clojure and the UI layer is pure ClojureScript. The PDF generation service is also self-hosted now, meaning improved speed when downloading an invoice. And since Invo no longer relies on any third-party services, it has much higher uptime and reliability. Though I figure that's mainly because it now runs on Clojure, with which I have successfully written long-lasting, quality software before, and which is much harder to do in the JS ecosystem.

And as always, Invo is still entirely free, no sign-up required, no souls needing to be sold. Enjoy!

Update (23 May, 2023): Invo is now called Invobi. I needed a new name for a .com domain because .ee is a regional domain and thus disadvantaged for SEO in other regions, and since Invo supports English and Spanish as well, it makes sense to go with a .com.

Routing with Ruuter in a Reagent / Re-frame project

Ruuter, my zero-dependency Clojure(Script) router, can also be used as a general-purpose router, without any HTTP server. This is true for both Clojure and ClojureScript, and because the router has no dependencies, it's also true for Babashka and NBB. That's exactly what I did in a Reagent / Re-frame project recently, and here's how.

At the core of it all are your routes, let's define them as something simple:

(def routes
  [{:path "/"
    :response (fn [_]
                [:div "Hello, World"])}
   {:path "/hello/:who"
    :response (fn [{params :params}]
                [:div "Hello, " (:who params)])}])

Unlike with an HTTP server such as HTTP-Kit, we don't need each route to have a :method, nor do we need it to return a response map. It can return anything we want, which in this case is a Reagent component.
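To illustrate (a quick sketch; the exact return value is whatever the matching :response function produces), calling the router directly against the routes above simply invokes the winning :response function:

```clojure
;; Match a URI against our routes; the matching :response fn is called
;; with the request map, so with the routes above this should return
;; the Hiccup vector [:div "Hello, " "John"].
(ruuter/route routes {:uri "/hello/John"})
```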

Now let's create a Re-frame event for setting the URI path:

(ns events
  (:require
    [re-frame.core :refer [reg-event-fx]]))

(reg-event-fx
  :set-path
  (fn [{db :db} [_ path]]
    (.pushState (.-history js/window) nil "" path)
    {:db (assoc db :path path)}))

This allows us to dispatch a :set-path event whenever we want to change the current route in-place, and it will also update the URL visible in the browser.
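For example (the link target and component name here are just illustrations of mine), a navigation link can dispatch the event instead of letting the browser do a full page load:

```clojure
;; A Reagent nav link that routes in-place via the :set-path event.
;; `dispatch` comes from re-frame.core.
(defn nav-link []
  [:a {:href "/hello/John"
       :on-click (fn [e]
                   (.preventDefault e) ;; stop the full page load
                   (dispatch [:set-path "/hello/John"]))}
   "Say hello to John"])
```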

Then let's create a Re-frame subscription, so we can listen to said path:

(ns subs
  (:require
    [re-frame.core :refer [reg-sub]]))

(reg-sub
  :path
  (fn [db _]
    (-> db :path)))

And finally let's put it all to work in our core component:

(ns core
  (:require
    [reagent.core :as r]
    [reagent.dom :as rd]
    [re-frame.core :refer [dispatch dispatch-sync subscribe]]
    [ruuter.core :as ruuter]))

(def routes
  [{:path "/"
    :response (fn [_]
                [:div "Hello, World"])}
   {:path "/hello/:who"
    :response (fn [{params :params}]
                [:div "Hello, " (:who params)])}])

(defn- app []
  (let [popstate-fn #(dispatch [:set-path (-> js/window .-location .-pathname)])
        path (subscribe [:path])]
    (r/create-class
      {:component-did-mount
       (fn [_]
         (dispatch-sync [:initialise-db])
         (.addEventListener js/window "popstate" popstate-fn))
       :component-will-unmount
       (fn [_]
         (.removeEventListener js/window "popstate" popstate-fn))
       :reagent-render
       (fn []
         (when @path
           (ruuter/route routes {:uri @path})))})))

(defn ^:export init []
  (rd/render [app] (.querySelector js/document "#app")))

As you can see, when the Reagent app loads, it adds an event listener for popstate, which fires when the user changes the URI. Thus, if the user changes the URL manually, the app will call :set-path on its own. Regardless of whether you dispatch the :set-path event manually or the popstate event prompts that call, the end result is the same: the app component re-renders, which runs Ruuter again, matching against the new path and loading the corresponding component.

So if you now navigate to /hello/John, it should render "Hello, John" on the page. Oh and, currently when you visit the page via a link directly, it won't load the correct component, because the default path isn't set, so I recommend you set it via your Re-frame db initialisation, like so:

(ns events
  (:require
    [re-frame.core :refer [reg-event-fx]]))

(def default-db
  {:path (-> js/window .-location .-pathname)})

(reg-event-fx
  :initialise-db
  (fn [_ _]
    {:db default-db}))

And that's how you can use Ruuter to do any type of routing, whether that's on the Clojure side, in ClojureScript, or even in Babashka and NBB.

State of Clojure 2022

I love to see that Clojure is being used more and more at work, and it reflects my own experience from when I was job searching recently. It seems that each time I search, there are more and more opportunities for Clojure.

I'm also very happy with the Clojure ecosystem, which is flourishing, with tons of great tools being made all the time (keep an eye on the #announcements and #releases channels on Slack if you're curious).

And everyone is approachable, friendly, and encouraging, which in my experience isn't all that common; in other languages the community tends to get toxic quickly. It could also be because the Clojure community is small, which further convinces me to spend the rest of my career working with niche technology.

People are wonderful. I love individuals. I hate groups of people. I hate a group of people with a 'common purpose'. 'Cause pretty soon they have little hats. And armbands. And fight songs. And a list of people they're going to visit at 3am. So, I dislike and despise groups of people but I love individuals. Every person you look at; you can see the universe in their eyes, if you're really looking.

— George Carlin

On the topic of code editors, I see Emacs losing popularity, IntelliJ (Cursive) staying pretty much the same with a slight uptick, and VS Code (Calva) climbing upwards the quickest. Personally, however, I mostly use IntelliJ because it's the most stable software for Clojure programming I have found; second-best for me is actually Neovim + Conjure. I do hope that one day VS Code + Calva reaches or beats the stability of IntelliJ, because that would be a monumental achievement for open source software, but for me it isn't there yet.

I've whined and moaned about the clj and clojure CLI tools before, saying that I think their user interface is unfriendly, what with their -X and -T weirdness that I've not seen any modern CLI tool do, which is why I much prefer lein. I do, however, realize that there is no hiding from the native CLI tools for much longer, and so I must switch as well, but in terms of user friendliness that switch will definitely be a downgrade for me.

But all in all another great year in the land of Clojure which I'm very happy about. Soon we'll get the StackOverflow Developer Survey 2022 results as well and perhaps I'll have more thoughts then.

The Niche Programmer

For the vast majority of my programming career, I've been a mainstream developer. By mainstream, I mean writing in the languages and using the tools that most of my category of software development (mostly web development) uses, such as PHP, JavaScript, and the most popular tools of those ecosystems.

But then one day in 2018 I got a job where I had to learn Clojure. I had never heard of it and, if we're being totally honest, I had never even heard of Lisp at that point. I was so engulfed in the mainstream that I had no idea there could be something without a C-like syntax. Well, okay, I knew Ruby existed, but Lisp? So many parentheses, such seemingly condensed code. Crazy.

Nevertheless, I learned it and then wrote Clojure for almost 3 years at that company. I didn't dive into finding an online Clojure community and none of my programmer friends did Clojure or had heard of it either so I had no idea if the language was gaining popularity or dying.

All was well until one day the company I worked at announced that they were moving away from Clojure to TypeScript, saying that it was too hard to find Clojure developers. I remember thinking that it must be a dying language then that nobody used, which sucked for me because I happened to like Clojure. Oh well, back to the mainstream then, I thought.

A few months later I wanted a new challenge and quit that gig. Whilst doing job searching, I discovered something interesting. I discovered that while there are, of course, a ton of mainstream dev jobs out there, most of those wanted you to work in an office, and while there were much, much fewer Clojure jobs out there, they were all remote. Best of all, the salary was more than double that of the mainstream stuff. Turns out the company I worked for just didn't have the budget for Clojure developers (and that I was massively underpaid).

So I joined the Clojure Slack community and kept an eye on Clojure job boards, and another interesting thing I found was that instead of the 100+ competitors for a job that I had gotten used to doing mainstream stuff, for Clojure, there were maybe 10. This made it so that the vast majority of the CVs I sent resulted in an interview, which was awesome.

And while doing the interviews I discovered that because of the low number of applicants, leetcode is fairly rare. Most of the interviews I've been a part of have focused mostly on questions around tool use, clean code practices, and asking me what I built in my previous jobs. And unlike mainstream language companies, they check my GitHub projects and for the most part never even give me a technical test job.

This was an amazing revelation to me because I had gotten used to the interview process being something similar to a prostitution ring where nobody cares about my open-source projects and most of the time nobody even actually read my CV.

Anyway, this is all to say that being a niche programmer is not bad at all. Pay is great, competition is low, and the interview processes are for the most part very humane. If Clojure ever makes it mainstream, I'll find a new niche language to specialize in. And maybe you shouldn't be too afraid to try a niche language as well, if you've ever thought about it. Just because something has more jobs does not necessarily mean that you'll have an easier time getting a job.

Update: I want to clarify that not all niches yield similar results. Some languages have virtually no jobs available at all (perhaps because they are very new; Clojure is over 10 years old now), so please make sure to do market research before committing to a niche.

Correcting Markdown: Newlines

Correctors are part of the upcoming 2.0 release of Clarktown. Like the name suggests, they correct inputted Markdown. They are the middlemen the input goes through before the Markdown gets passed to the Parsers, which then do the job of converting Markdown into HTML.

In the future there will probably be many different types of Correctors, but at the time of writing there's only one type: Block Separation Correctors. These correctors ensure that there are empty newlines where need be, so that the Parsers get correct blocks, because in Clarktown everything is a block, separated by two newlines (\n\n, or \newline\newline in Clojure).
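That block splitting can be sketched like this (the sample string is mine, not from Clarktown):

```clojure
;; Blocks are whatever sits between two consecutive newlines.
(clojure.string/split "First block.\n\nSecond block." #"\n\n")
;; => ["First block." "Second block."]
```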

The problem

Take for example the following Markdown:

This is some paragraph text.
# This is some heading.

Since there's only one \newline between these two lines, Clarktown will treat them as one block. The block Matcher (which identifies a block) will start from the beginning, see regular text, assume the whole thing is just a paragraph, and render HTML like this:

<p>This is some paragraph text.
# This is some heading.</p>

Where instead what should be the end result is this:

<p>This is some paragraph text.</p>

<h1>This is some heading.</h1>

Now, while I personally do not write Markdown like that and always nicely add two newlines between blocks myself, some users will not, and for them the end result will be broken.

The solution

The solution to this problem is to create correctors. Essentially, we split the entire Markdown input into a vector of lines and go over each line. Then we run the correctors over each of those lines, and they determine whether a fix is needed. Should there be a \newline above or below the current line? Perhaps both? A corrector answers these questions.

The type of heading block that starts with a hash sign is called an ATX heading block, so let's create a function that determines whether we should have an extra \newline above the block by feeding it all the lines, the current line, and the current index, like this:

(defn empty-line-above?
  [lines line index])

First let's make sure that this line is indeed an ATX heading block line:

(clojure.string/starts-with? line "#")

Then let's make sure that this is not the very first line, because if it is then there's no need to add anything above.

(> index 0)

Finally the important bit, which is to check if an actual new \newline is required or not:

(not (= (-> (nth lines (dec index))
            clojure.string/trim)
        ""))

You see, clojure.string/trim removes any surrounding whitespace, so if the line above the current line is already blank, trimming it yields an empty string and no correction is needed. The not flips that check: the function returns true only when the line above has actual content and a separating \newline must be added.
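In isolation (the sample strings are mine), the blank-line check behaves like this:

```clojure
;; A line that is only whitespace trims down to the empty string.
(= (clojure.string/trim "   \t") "")
;; => true

;; A line with content does not, so `not` of this check signals
;; that a separating newline must be inserted.
(= (clojure.string/trim "# Heading") "")
;; => false
```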

And so our final empty-line-above? corrector would be:

(defn empty-line-above?
  [lines line index]
  (and (clojure.string/starts-with? line "#")
       (> index 0)
       (not (= (-> (nth lines (dec index))
                   clojure.string/trim)
               ""))))

There's a bit more to the corrector of an ATX heading block, such as the empty-line-below? function, as well as detecting whether we're in a code block (we do not want to correct anything inside of a code block), but this here is the gist of it.
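For symmetry, a hypothetical empty-line-below? could be a mirror image of the function above (this is my sketch, not Clarktown's actual code):

```clojure
;; True when the current line is an ATX heading, is not the last line,
;; and the line below it has content (i.e. a separating newline is needed).
(defn empty-line-below?
  [lines line index]
  (and (clojure.string/starts-with? line "#")
       (< index (dec (count lines)))
       (not (= (-> (nth lines (inc index))
                   clojure.string/trim)
               ""))))
```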

Bundling the correctors

Once we have a bunch of correctors, we don't want to manually integrate them, so we'd rather create a map, like this:

(def block-separation-correctors
  {:newline-above [...]
   :newline-below [...]})

The vectors of each will include references to functions like the one we created above (the empty-line-above? function).

And we'll use these by running them over each line in our inputted Markdown, like so:

(let [lines (clojure.string/split-lines "our markdown goes here")
      above-correctors (:newline-above block-separation-correctors)
      below-correctors (:newline-below block-separation-correctors)]
  (->> lines
       (map-indexed
         (fn [index line]
           (let [add-newline-above? (some #(true? (% lines line index)) above-correctors)
                 add-newline-below? (some #(true? (% lines line index)) below-correctors)]
             (cond
               (and add-newline-above?
                    (not add-newline-below?))
               (str \newline line)

               (and add-newline-below?
                    (not add-newline-above?))
               (str line \newline)

               (and add-newline-above?
                    add-newline-below?)
               (str \newline line \newline)

               :else line))))))

And this mostly concludes how the \newline Markdown corrections are done in Clarktown. You can check more by reading the engine.clj file.

A contentEditable, pasted garbage and caret placement walk into a pub

Pasted garbage says to contentEditable: "Hey! I'd really like to become part of you," and contentEditable says back: "Not so fast, you! First we've got to rinse you down!" And thus begins a story of how to make contentEditable take in a good ol' paste, parse that paste for anything we might not want, put the result into the right place in the contentEditable, and place the caret just after that paste. Sounds easy, right? Right.

The filthy default of contentEditable behaviour

By default, contentEditable takes in just about anything you'd like to give it. If you copy text from anywhere that also has mark-up and styles (like a Word document) and then paste it into the contentEditable, it would gladly take all that mark-up and styles as well. But this isn't a great user experience if you're building a content editor like I am, so the best solution is to parse that paste and remove anything you might not want - which in my case was to remove all styles and only allow certain mark-up.

Rinsing down the paste

Alright so let's create a simple contentEditable that also listens to the Paste event. I'll be doing this in ClojureScript as it is my favourite language, using Reagent for the React goodness as this is a React app, but all of this also applies for good ol' regular JS and React.js.

(defn contentEditable []
  [:div
   {:contentEditable true
    :on-paste #(on-paste! %)}])

Don't you just love how little code you have to write to make a React component in ClojureScript? I sure do, and this is totally NOT (wink wink) my way of saying you should try ClojureScript. Anyway, let's create the on-paste! function as well.

(defn on-paste! [event])

Oh shoot, it's empty! Yeah, so, I wanted to stop here because, little did I know, there's now a standard Clipboard API that you should use to get the pasted user content. It comes with a gotcha, though: as soon as you try to use it, the browser will ask the user to give your page permission to read clipboard data. I found that not very user friendly for something as simple as pasting text into an input, seeing as the browser won't ask when you paste text using the default behaviour, but anyway, c'est la vie.

So, retrieving the pasted content with the Clipboard API would look like this:

(defn on-paste! [event]
  (-> (.readText (.-clipboard js/navigator))
      (.then
        (fn [clip]
          ;; `clip` contains the pasted content
          ))))

Now the clip is the actual paste, along with all of its horrible formatting and styles, so I went along and used the sanitize-html NPM package to clean it right up (I do want to build a native Clojure version of this at one point, but for now this works just swell!). So, with that package, the on-paste! function would look like this:

(defn on-paste! [event]
  (-> (.readText (.-clipboard js/navigator))
      (.then
        (fn [clip]
          (let [pasted-content (parse-html clip)]
            ;; do something with `pasted-content` here
            )))))

And the parse-html function would look like this:

(ns your-app
  (:require ["sanitize-html" :as sanitize-html]))

(defn parse-html [html]
  (sanitize-html
    html
    (clj->js
      {:allowedTags ["b" "strong" "i" "em" "a" "u"]
       :allowedAttributes {"a" ["href"]}})))

Which, as I'm sure you can tell, only allows the tags b, strong, i, em, a, u and would only allow attributes on the a tag and only if that attribute is href. Pretty cool right? I sure think so.
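As a quick sanity check (the input string is mine, and the exact output depends on sanitize-html's defaults, so treat this as illustrative):

```clojure
;; Disallowed tags like <div> are stripped but their text content is
;; kept, while allowed tags such as <b> survive; the style attribute
;; goes away because only href on <a> is allowed.
(parse-html "<div style=\"color:red\"><b>Hello</b> world</div>")
;; should yield something like "<b>Hello</b> world"
```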

Putting the paste in the right place

Woah! That rhymed! Maybe I could have a career in hip hop after all haha! Right, so now that we have the paste and we've successfully cleaned it from any garbage it might have, we have to put that paste somehow into our contentEditable.

How do we do that? Do we simply insert it into the DOM element? That's not very React-y now is it. What if we create a local state for the content and just modify that? That sounds a lot better, actually. Let's do just that by going back to our React component and changing it to look like this:

(ns your-app
  (:require [reagent.core :as r]))

(defn contentEditable []
  (let [content (r/atom "")]
    (fn []
      [:div
       {:contentEditable true
        :on-paste #(on-paste! content %)
        :on-input #(reset! content (.-innerHTML (.-target %)))
        :dangerouslySetInnerHTML {:__html @content}}])))

As you can see, we create a Reagent atom holding an empty string, which we dereference into the contentEditable content using the :dangerouslySetInnerHTML attribute. On every change to the content (the :on-input event), we update the content atom so that it is always up-to-date with what is actually inside the contentEditable. And finally, notice the on-paste! call: we now pass the content along to it as well, so that the on-paste! function is aware of the current content.

So now all we need to do to paste the content into the right place is to make the on-paste! function aware of where your caret was when the paste happened and insert the paste there. The on-paste! function will then look like this:

(defn on-paste! [content event]
  (-> (.readText (.-clipboard js/navigator))
      (.then
        (fn [clip]
          (let [pasted-content (parse-html clip)
                selection (.getSelection js/window)
                offset (.-anchorOffset selection)
                new-content (string->string @content pasted-content offset)]
            (reset! content new-content))))))

So check this out: we get the current selection via (.getSelection js/window), which then allows us to get the caret offset using (.-anchorOffset selection), and that offset is key! It's how many characters from the beginning of the text your caret was at when you made the paste, and so that's also where we need to put the pasted content. I made a helper function called string->string for exactly that, and it looks like this:

(defn string->string [string inserted-string index]
  (let [split-beginning (subs string 0 index)
        split-end (subs string index)]
    (str split-beginning inserted-string split-end)))

Which takes the original content as string, then the content you want to insert into it as inserted-string and finally the index at which you want to insert that new content. It would then return the final string.
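For example (the sample strings are mine):

```clojure
;; Insert "beautiful " at index 6 of "Hello world":
;; the original splits into "Hello " and "world" around that index.
(string->string "Hello world" "beautiful " 6)
;; => "Hello beautiful world"
```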

And as you saw in the end of the on-paste! function we called reset!, which basically just overwrites the content atom with the new content, prompting a re-render of the component, and thus now the contentEditable has the pasted content with all of the garbage removed in the right place as desired.

Why you got to Caret me like that?

One thing you may have noticed is that when pasting content the caret itself will end up in the wrong place - or rather the right place, which is to say that the caret will stay where it was, but you probably expect it to end up just AFTER the pasted content, as that's how it usually works. This happens because while the content of the contentEditable changed, the caret position did not, so we have to make it change ourselves.

Thankfully this is easier than one would think: we just have to take the current caret offset and add to it the number of characters in the pasted content. Say your caret was at offset 10 and the pasted string has a length of 7; then we want 10 + 7 = 17, meaning the caret ends up right after the 17th character, just past the paste.

To do this, we have to turn our component into a class component, because that's how you get lifecycle events in Reagent. Why? Because we need to be able to place the caret AFTER the component has rendered, not before; otherwise we won't yet have the updated text in the contentEditable, and caret placement will throw an error for the index being out of bounds. So, with that in mind, the updated component would look like this:

(ns your-app
  (:require [reagent.core :as r]))

(defn contentEditable []
  (let [ref (r/atom nil)
        content (r/atom "")
        caret-location (r/atom nil)]
    (r/create-class
      {:component-did-update
       #(place-caret! ref content caret-location)
       :reagent-render
       (fn []
         [:div
          {:contentEditable true
           :ref (fn [el] (reset! ref el))
           :on-paste #(on-paste! content caret-location %)
           :on-input #(reset! content (.-innerHTML (.-target %)))
           :dangerouslySetInnerHTML {:__html @content}}])})))

Aye! You can see that we're also passing the on-paste! function a new state variable called caret-location, which by default is nil; we'll use it to know where to put the caret with our place-caret! function, which is called from within the :component-did-update lifecycle event. We also create a new state called ref, which holds the actual DOM element of our contentEditable so that we know which element to focus the cursor in.

Our updated on-paste! function should look like this now:

(defn on-paste! [content caret-location event]
  (-> (.readText (.-clipboard js/navigator))
      (.then
        (fn [clip]
          (let [pasted-content (parse-html clip)
                selection (.getSelection js/window)
                offset (.-anchorOffset selection)
                new-content (string->string @content pasted-content offset)]
            (reset! content new-content)
            (reset! caret-location (+ offset (count pasted-content))))))))

So now caret-location will hold whatever the offset was when you pasted, plus the length of the pasted content, so the caret should appear right after the paste. Well, not yet: we still have to create our place-caret! function, so let's go ahead and create it, looking like this:

(defn place-caret! [ref content caret-location]
  (when (and (not (nil? @caret-location))
             (>= (count @content) @caret-location)
             (first (.-childNodes @ref)))
    (let [selection (.getSelection js/window)
          range (.createRange js/document)]
      (.setStart range (first (.-childNodes @ref)) @caret-location)
      (.collapse range true)
      (.removeAllRanges selection)
      (.addRange selection range)
      (.focus @ref)
      (reset! caret-location nil))))

What this function does is take a ref (the DOM element, e.g. our contentEditable) plus the content and caret-location states. It then makes sure that caret-location is not beyond the length of the content (because if it is, the index is out of bounds and we won't be able to place the caret) and checks that caret-location is not nil, since it's nil by default, so that caret placement is only invoked when we want it, which in our case is during a paste.

After all is good, we get the current selection, create a new range, set the start of the range to our caret-location, collapse that range, remove all existing ranges from the selection and add our new one instead, and then focus on the ref element and reset the caret-location state.

Browsers decode images differently

I'd like to put down some thoughts about how browsers decode images - and how they do it differently, which can make things a bit tricky for you if you want to deliver the same user experience for every user of your application.

So what does this mean exactly? Well, let's say that you have a single img tag on your web page, but you update the src attribute of it via JavaScript, and you do this often enough to trigger this bug in Firefox. You can easily trigger it if you hook a scroll event to switching the src attribute so that on each scroll the image source updates, which should work just fine on Chrome, but on Firefox will start blinking.

Why does it start blinking? Well, it has everything to do with image decoding. The reason it blinks on Firefox is that the image hasn't yet been decoded when your scroll event is triggered, but you are already attempting to display it, hence the blink. There's a pretty easy solution for this that I also wrote about on the bug report, but the gist of the matter is that HTMLImageElement has a Promise-returning method called decode, and you should not replace the src attribute until the decode finishes, which you can do like this:

const imgUrl = 'yournewimage.png'; // your new image
const img = new Image(); // create temporary image

img.src = imgUrl; // add your new image as src on the temporary image

img.decode().then(() => { // wait until temporary image is decoded
    document.querySelector('img').src = imgUrl; // replace your actual element now
});
You see, in Firefox, once the decode happens, even if it happens on an image element other than the one you are updating, the decoded result of that image is cached, and with it, the bug is resolved.

So I should always listen to the decode promise, right?

Technically yes; MDN recommends it as the way to know when it is safe to add the image to the DOM. But what happens in Chrome with this code? Well, it turns out it slows to a crawl, and you're better off not using it. Now, I don't think Chrome has implemented this feature any differently from Firefox, except that for some unknown reason it is a lot slower, but I do think the two browsers decode images differently.

While in Firefox you'll see an artifact in the form of a blink while the decode is taking place, in Chrome I think you simply don't see an updated image until that image has been decoded, so there's no blink and everything feels smoother, even if it's probably doing the same work. I tried to find more information on the differences but was unsuccessful, so if you do know something, please get in touch. For now, without knowing more, my best recommendation in such a case is to simply write one implementation targeting Firefox and another targeting Chrome, like this:

const firefox = navigator.userAgent.toLowerCase().indexOf('firefox') > -1;
const imgUrl = 'yournewimage.png';
const img = new Image();

img.src = imgUrl;

if (firefox) {
   img.decode().then(() => {
      document.querySelector('img').src = imgUrl;
   });
} else {
   document.querySelector('img').src = imgUrl;
}
In Firefox, we wait for the decode Promise to tell us when we can safely update the image src attribute, as MDN recommends. Otherwise, we just update the src without waiting for the decode at all.

And that's how I unified the experience across Firefox and Chrome with this particular issue. It's actually funny because just recently I remember thinking that browsers had come such a long way in the past 10 years that if you write something in one it always works in the others. Well, almost always.