Digging into the Ash Framework
22 February 2023

I took the Ash framework for a test run while I was working on a project at the end of last year, and I found myself comparing it to the Phoenix Framework. Ash wants you to build in a data-layer-agnostic way, using plugins to put things like Postgres behind it. You can use all kinds of data layers, and the framework puts a lot of focus on flexibility.

Let’s dig in

In Ash parlance, you define your resources, and the framework generates all of the CRUD defaults for you behind the scenes. It has lots of plugins to support your data model: there are plugins for different data layers, from ETS to Postgres, and front-end plugins that help you build anything from a JSON or GraphQL API to a LiveView-based interactive web app. Here’s what a resource looks like:

defmodule Helpdesk.Support.Ticket do
  use Ash.Resource,
    data_layer: Ash.DataLayer.Ets

  actions do
    defaults [:create, :read, :update, :destroy]

    create :open do
      accept [:subject]
    end

    update :close do
      accept []
      change set_attribute(:status, :closed)
    end

    update :assign do
      accept []

      argument :representative_id, :uuid do
        allow_nil? false
      end

      change manage_relationship(:representative_id, :representative, type: :append_and_remove)
    end
  end

  attributes do
    uuid_primary_key :id

    attribute :subject, :string do
      allow_nil? false
    end

    attribute :status, :atom do
      constraints [one_of: [:open, :closed]]
      default :open
      allow_nil? false
    end
  end

  relationships do
    belongs_to :representative, Helpdesk.Support.Representative
  end
end

I went through the getting started tutorial that has you define a help desk application with a super simple data model. We have a Ticket, which has some basic fields and can be assigned to a Representative. The basic create, read, update, and destroy operations are created for us, and we define some custom actions with a little more logic to them.
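
To get a feel for the actions, here is a minimal sketch of calling them, assuming the tutorial’s Helpdesk.Support API module (a module that uses Ash.Api, with the resources registered) is in place; the subject string is just an example:

# Open a ticket with the custom :open create action
ticket =
  Helpdesk.Support.Ticket
  |> Ash.Changeset.for_create(:open, %{subject: "My mouse won't click!"})
  |> Helpdesk.Support.create!()

# Close it with the custom :close update action
closed =
  ticket
  |> Ash.Changeset.for_update(:close)
  |> Helpdesk.Support.update!()

closed.status
#=> :closed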

One of the things Ash does that is kind of cool is giving you good compile-time and runtime errors around relationships. If you define one side of a relationship but not the other, it will give you an error like this:

== Compilation error in file lib/helpdesk/support/resources/ticket.ex ==
** (Spark.Error.DslError) [Helpdesk.Support.Ticket]
 actions -> update -> assign -> change -> manage_relationship -> representative_id -> representative:
  No such relationship representative exists.
    (ash 2.0.0) lib/ash/resource/transformers/validate_manage_relationship_opts.ex:40: anonymous fn/3 in Ash.Resource.Transformers.ValidateManagedRelationshipOpts.transform/1
    (elixir 1.14.0) lib/enum.ex:975: Enum."-each/2-lists^foreach/1-0-"/2
    (ash 2.0.0) lib/ash/resource/transformers/validate_manage_relationship_opts.ex:19: Ash.Resource.Transformers.ValidateManagedRelationshipOpts.transform/1
    (spark 0.1.28) lib/spark/dsl/extension.ex:530: anonymous fn/4 in Spark.Dsl.Extension.run_transformers/4
    (elixir 1.14.0) lib/enum.ex:4751: Enumerable.List.reduce/3
    (elixir 1.14.0) lib/enum.ex:2514: Enum.reduce_while/3

This is a nice example of a compile-time error that tells me exactly what is wrong: I had defined the has_many side of a relationship but not the corresponding belongs_to, so the manage_relationship in the :assign action had no :representative relationship to point at.
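
For context, here is roughly what the other side of that relationship looks like on the Representative resource from the tutorial (a sketch, with the attributes abbreviated). The error shows up because the manage_relationship change in the :assign action needs the belongs_to :representative on Ticket to exist:

defmodule Helpdesk.Support.Representative do
  use Ash.Resource,
    data_layer: Ash.DataLayer.Ets

  attributes do
    uuid_primary_key :id
    attribute :name, :string
  end

  relationships do
    # This side alone isn't enough; Ticket still needs its belongs_to
    has_many :tickets, Helpdesk.Support.Ticket
  end
end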

I thought it would be interesting to compare this to doing the same thing with the Phoenix generators. What Ash is trying to get rid of is all of the boilerplate: the resource is a fancier version of the schema file, and the things that would be generated in the context file are done for you behind the scenes, so you don’t have to have them in your codebase.
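
For comparison, this is roughly the Ecto schema the Phoenix generators would hand you for the same model (a hypothetical sketch mirroring the resource above), on top of a context module full of list/get/create/update/delete functions that I am leaving out:

defmodule Helpdesk.Support.Ticket do
  use Ecto.Schema
  import Ecto.Changeset

  @primary_key {:id, :binary_id, autogenerate: true}
  @foreign_key_type :binary_id
  schema "tickets" do
    field :subject, :string
    field :status, Ecto.Enum, values: [:open, :closed], default: :open

    belongs_to :representative, Helpdesk.Support.Representative

    timestamps()
  end

  @doc false
  def changeset(ticket, attrs) do
    ticket
    |> cast(attrs, [:subject, :status])
    |> validate_required([:subject, :status])
  end
end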

Spark is the library that lies underneath Ash’s DSL. It does a lot of the documentation for you out of the box and generates the hex docs automatically.

What is missing?

Ash is still relatively early in its development, and there are a few basic things that aren’t quite there yet. Certain types of relationships aren’t well supported: for instance, there is a lot of extra work required right now to have a many-to-many relationship through a join table (though supporting this is in the future plans). One of my bigger concerns is that testing doesn’t have very much support, and isn’t a priority for the maker of Ash. That is one practice that we prioritize differently in the way we write code at Launch Scout.
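
To give a flavor of that extra work, this is roughly what the many-to-many case looks like today: you hand-write a join resource and then wire it up with many_to_many. This is a sketch based on my reading of the docs (the TicketRepresentative resource and the option values are my own), not something I got running during the spike:

# A join resource you define yourself
defmodule Helpdesk.Support.TicketRepresentative do
  use Ash.Resource,
    data_layer: Ash.DataLayer.Ets

  attributes do
    uuid_primary_key :id
  end

  relationships do
    belongs_to :ticket, Helpdesk.Support.Ticket
    belongs_to :representative, Helpdesk.Support.Representative
  end
end

# Then, back on the Ticket resource:
relationships do
  many_to_many :representatives, Helpdesk.Support.Representative do
    through Helpdesk.Support.TicketRepresentative
    source_attribute_on_join_resource :ticket_id
    destination_attribute_on_join_resource :representative_id
  end
end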

One thing that I am still trying to iron out: where does the business logic fit into Ash? Where would I call a function that does business logic?

While the code “should work,” test-driven development provides us with reassurance that we appreciate when maintaining relationships in code.

There is a testing guide, but it is pretty bare bones: it tells you to disable async and not much else. When you run the Phoenix generators you get tests out of the box, and that feels like a missing piece in Ash.
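
For what it’s worth, this is the kind of test I would want generated for me out of the box. A rough sketch (the test module is hypothetical, async is disabled per the guide, and it assumes the tutorial’s Helpdesk.Support API module):

defmodule Helpdesk.Support.TicketTest do
  use ExUnit.Case, async: false

  alias Helpdesk.Support
  alias Helpdesk.Support.Ticket

  test "open creates a ticket with a default status of :open" do
    ticket =
      Ticket
      |> Ash.Changeset.for_create(:open, %{subject: "My mouse won't click!"})
      |> Support.create!()

    assert ticket.status == :open
  end

  test "close sets the status to :closed" do
    ticket =
      Ticket
      |> Ash.Changeset.for_create(:open, %{subject: "My mouse won't click!"})
      |> Support.create!()

    closed =
      ticket
      |> Ash.Changeset.for_update(:close)
      |> Support.update!()

    assert closed.status == :closed
  end
end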

Ash gets rid of a lot of boilerplate, and it would be interesting to explore more with the JSON API plugin or to put LiveView on top of it.
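
As a rough idea of what that exploration would look like, adding the JSON API plugin is mostly a matter of pulling in the AshJsonApi extension and adding a json_api block to the resource. This is a sketch pieced together from the AshJsonApi docs, not something I wired up in this spike (there is also a router and some Phoenix plumbing I am leaving out):

defmodule Helpdesk.Support.Ticket do
  use Ash.Resource,
    data_layer: Ash.DataLayer.Ets,
    extensions: [AshJsonApi.Resource]

  json_api do
    type "ticket"

    routes do
      base "/tickets"

      index :read
      get :read
      post :open
    end
  end

  # ...actions, attributes, and relationships as before
end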

Neutral with hesitations

These are the pros and cons to weigh, based on what I discovered in this technical spike.

Pros

  • Clear errors
  • Behind-the-scenes CRUD
  • Lots of flexibility with plugins

Cons

  • Not much support for testing
  • Can’t yet handle some commonplace Postgres relationship patterns, like many-to-many through a join table

In this experiment, I didn’t get far enough to be an advocate in the field for using this technology broadly. I am looking forward to learning more about the Ash Framework and opportunities that let us stretch its capabilities. I will be keeping an eye on it as development continues, and look forward to pushing it a bit farther in the future.
