It is folklore knowledge that a pair of adjoint functors induces a monad and a comonad. Due to my recent work, I had to load relevant information into my brain cache. However, it turned out to be hard for me to find all the pieces of information I need in one place. Therefore, I am going to summarise here what I know and need, hoping it will be helpful for someone like me.

Given categories `C` and `D`, we call two functors `L : C → D` and `R : D → C` a pair of adjoint functors if, for all objects `A` in `C` and `B` in `D`, we have the following natural isomorphism:

```
Hom (L A, B) ≅ Hom (A, R B)
```

This is denoted by `L ⊣ R`. Functors `L` and `R` are respectively called the left and the right adjoint.

Concepts such as `Hom (A, B)` and natural isomorphism will be explained in more detail later. For now, it suffices to say that `Hom (A, B)` is the collection of all morphisms from `A` to `B`. For example, in Set (the category of sets and total functions), `Hom (A, B)` is the set of all functions of type `A → B`, and `Hom (L A, B) ≅ Hom (A, R B)` can be understood as

```
L A → B  ≅  A → R B
```

That is, given a function `L A → B` there is a unique corresponding function `A → R B`, and vice versa. A typical example is when `L A = A × S` and `R B = S → B` for some `S`. Indeed we have

```
(A × S) → B  ≅  A → (S → B)
```

with the mapping from left-to-right being `curry`, and the reverse mapping being `uncurry`.
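In Haskell, these two mappings are exactly the standard `curry` and `uncurry`; a quick sketch (the names `phi`, `theta`, and `addU` are mine, chosen to match the notation below):

```haskell
-- ϕ : Hom (A × S, B) → Hom (A, S → B) is curry;
-- θ, its inverse, is uncurry.
phi :: ((a, s) -> b) -> (a -> s -> b)
phi = curry

theta :: (a -> s -> b) -> ((a, s) -> b)
theta = uncurry

-- Example: uncurried addition and its curried form.
addU :: (Int, Int) -> Int
addU (x, y) = x + y
```

Applying `phi` and then `theta` (or vice versa) gets us back where we started, witnessing the isomorphism.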

Note that Set is but an instance of a category (that is easier for me to understand). The notion of adjoint functors is much more general. For example, when the categories are such that the objects are elements of a partially ordered set, and there is a morphism `a → b` if `a ≼ b` (thus there is either zero or one morphism between any two objects), then `L` and `R` being adjoint functors means that they form a Galois connection.

That `Hom (L A, B)` and `Hom (A, R B)` are isomorphic means that there exists a pair of mappings `ϕ : Hom (L A, B) → Hom (A, R B)` and `θ : Hom (A, R B) → Hom (L A, B)` such that `ϕ ∘ θ = id` and `θ ∘ ϕ = id`. Being a *natural* isomorphism imposes an additional constraint: `ϕ` and `θ` must be natural with respect to `A` and `B`. This is an important property that will be explained and used later.

If `L : C → D` and `R : D → C` form a pair of adjoint functors, `R ∘ L` is a monad, while `L ∘ R` is a comonad.

Recall the example `L A = A × S` and `R B = S → B`. Indeed, we have `(R ∘ L) A = S → (A × S)` — the type of state monad!

Merely having the type does not constitute a monad — we have got to construct the monad operators. In a more programming-oriented definition, a monad `M : * → *` comes with two operators `return : A → M A` and `(>>=) : M A → (A → M B) → M B`. In traditional, mathematics-oriented definition, a monad `M` comes with three operators: `return`, `map : (A → B) → M A → M B`, and `join : M (M A) → M A` — as a convention, `return` and `join` are often respectively written as `η` and `μ`. Dually, a comonad `N` comes with three operators: `ε : N B → B`, `map : (A → B) → N A → N B`, and ` δ : N B → N (N B)`.
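Incidentally, the two presentations are interdefinable; a small Haskell sketch using the standard `Monad` class (with `join` from `Control.Monad`):

```haskell
import Control.Monad (join)

-- (>>=) recovered from map (fmap) and join:
bind :: Monad m => m a -> (a -> m b) -> m b
bind m k = join (fmap k m)

-- join recovered from (>>=):
join' :: Monad m => m (m a) -> m a
join' mm = mm >>= id
```

So constructing `return`, `map`, and `join` below is enough to get the programmer's `(>>=)` for free.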

As mentioned before, adjoint functors `L` and `R` induce a monad `M = R ∘ L` and a comonad `N = L ∘ R`. The operators `η` and `ε` are given by:

```
η : A → R (L A)
η = ϕ id          -- id : L A → L A

ε : L (R B) → B
ε = θ id          -- id : R B → R B
```

The types of `id` are given in the comments. Operators `μ` and `δ` can then be defined by:

```
μ : R (L (R (L A))) → R (L A)
μ = R ε

δ : L (R B) → L (R (L (R B)))
δ = L η
```

where `η : R B → R (L (R B))` and `ε : L (R (L A)) → L A`.
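Specialising these definitions to the `L A = A × S`, `R B = S → B` adjunction recovers the state monad's operators; a Haskell sketch (the names `etaS`, `epsS`, `muS` are mine):

```haskell
-- (R ∘ L) a = s -> (a, s): the state monad.
type State s a = s -> (a, s)

-- η = ϕ id, with ϕ = curry and id :: (a, s) -> (a, s)
etaS :: a -> State s a
etaS = curry id           -- \a s -> (a, s)

-- ε = θ id, with θ = uncurry and id :: (s -> b) -> (s -> b)
epsS :: (s -> b, s) -> b
epsS = uncurry id         -- \(k, s) -> k s

-- μ = R ε: apply ε under the functor R = (s ->),
-- that is, post-compose with epsS.
muS :: State s (State s a) -> State s a
muS m = epsS . m
```

Here `etaS` is the familiar `return` of the state monad, and `muS` runs the outer computation and then the inner one it produces.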

The operators do have correct types. Do they satisfy the monad laws? There are six monad laws for `(M, η, μ)`:

1. `M id = id`
2. `M f ∘ M g = M (f ∘ g)`
3. `η ∘ f = M f ∘ η`
4. `M f ∘ μ = μ ∘ M (M f)`
5. `μ ∘ η = id = μ ∘ M η`
6. `μ ∘ μ = μ ∘ M μ`

The first two laws demand that `M` be a functor. Since `L` and `R` are functors, the two laws are immediately true. Laws 3 and 4 demand that `η` and `μ` be natural transformations, while Laws 5 and 6 are important computational rules for monads. We have got to check that they do hold for the definitions of `μ` and `η`.

For comonads there is a collection of dual laws. Since the proofs are dual, we talk only about the laws for monads in this post.
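Before the general proofs, the computational laws can at least be spot-checked for the state monad; a small Haskell sketch with direct definitions (the names `eta`, `mapS`, `mu`, and the sample computation `tick` are mine):

```haskell
type St s a = s -> (a, s)

eta :: a -> St s a
eta a s = (a, s)

mapS :: (a -> b) -> St s a -> St s b
mapS f m s = let (a, s') = m s in (f a, s')

mu :: St s (St s a) -> St s a
mu mm s = let (m, s') = mm s in m s'

-- A sample computation to test the laws at a point.
tick :: St Int Int
tick s = (s, s + 1)

-- Law 5: mu (eta m) ≡ m ≡ mu (mapS eta m)
-- Law 6: mu (mu mm) ≡ mu (mapS mu mm), for doubly nested mm
```

Of course, checking at sample points proves nothing; the equational proofs below establish the laws in general.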

## Hom-Set and Hom-Functors

To prove the four remaining monad laws we need more properties of `ϕ` and `θ`. For that we give the concepts of hom-set and natural isomorphism, which we quickly skimmed through, a closer look.

The collection of all morphisms from `A` to `B` (both of them objects in category `C`) is denoted by `Hom(A,B)`. (In general `Hom(A,B)` is not necessarily a set. When it happens to be a set, `C` is called a locally small category. See hom-set on ncatlab for details.)

Given a category `C`, `Hom` is also a functor `Cᵒᵖ × C → Set` (where `Cᵒᵖ` denotes the dual category of `C`). It maps an object `(A,B)` (in `Cᵒᵖ × C`) to `Hom(A,B)`, which is now an object in Set, and maps a pair of morphisms `f : A₂ → A₁` and `g : B₁ → B₂` to a morphism `Hom(A₁,B₁) → Hom(A₂,B₂)` in Set, defined by

```
Hom : (A₂ → A₁) × (B₁ → B₂) → Hom(A₁,B₁) → Hom(A₂,B₂)
Hom (f,g) h = g ∘ h ∘ f
```

(The “type” given to `Hom` is not a rigorous notation, but to aid understanding. For more details, see hom-functor on ncatlab.)
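In Haskell terms, this action of `Hom` on a pair of morphisms is just pre- and post-composition (it is the `dimap` of the function `Profunctor`); a sketch:

```haskell
-- Hom (f, g) h = g ∘ h ∘ f, specialised to Haskell functions.
-- Note the contravariance in the first argument: f goes "backwards",
-- from the new source a2 into the old source a1.
homMap :: (a2 -> a1) -> (b1 -> b2) -> (a1 -> b1) -> (a2 -> b2)
homMap f g h = g . h . f
```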

Recall that, given functors `F` and `G`, when we say `h : F → G` is a natural transformation (from `F` to `G`), we mean that `h` is a series of morphisms — for each object `A` there is a morphism `h : F A → G A`, and for all `f : A → B` we have the following

```
h ∘ F f = G f ∘ h
```

The types of `h` on the left and right hand sides are respectively `F B → G B` and `F A → G A`.

Recall also that the definition of adjoint functors demands `ϕ` and `θ` be natural with respect to `A` and `B`. What we mean by being natural here is essentially the same, but slightly complicated by the fact that `Hom` is a more complex functor: for all `f : A₂ → A₁` and `g : B₁ → B₂`, we want

```
ϕ ∘ Hom (L f, g) = Hom (f, R g) ∘ ϕ
θ ∘ Hom (f, R g) = Hom (L f, g) ∘ θ
```

If we expand the definition of `Hom`, apply both sides of the equation regarding `ϕ` to an argument `h : L A₁ → B₁`, and apply both sides of the equation regarding `θ` to an argument `k : A₁ → R B₁`, we get

```
ϕ (g ∘ h ∘ L f) = R g ∘ ϕ h ∘ f
θ (R g ∘ k ∘ f) = g ∘ θ k ∘ L f
```

These naturality conditions will be of crucial importance in the proofs later.
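For the curry/uncurry adjunction, the first condition can be spot-checked in Haskell (a sketch; `f`, `g`, and `h` are arbitrary sample morphisms I chose for illustration):

```haskell
import Data.Bifunctor (first)

-- Arbitrary sample morphisms.
f :: Int -> Int
f = (+ 1)

g :: Int -> Int
g = (* 2)

h :: (Int, Int) -> Int
h (x, y) = x - y

-- ϕ (g ∘ h ∘ L f) = R g ∘ ϕ h ∘ f,
-- where ϕ = curry, L f = first f (acting on pairs), R g = (g .)
lhs, rhs :: Int -> Int -> Int
lhs = curry (g . h . first f)
rhs a = g . curry h (f a)
```

The two sides agree at every sample point, as naturality (here, a free theorem) guarantees.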

## Proving the Monad Laws

With the naturality conditions in hand, we can prove the four remaining laws.

3. `η ∘ f = M f ∘ η`:

```
   R (L f) ∘ ϕ id
=    { ϕ (g ∘ h ∘ L f) = R g ∘ ϕ h ∘ f, [g, h, f := L f, id, id] }
   ϕ (L f ∘ id ∘ L id)
=  ϕ (L f)
=  ϕ (id ∘ id ∘ L f)
=    { ϕ (g ∘ h ∘ L f) = R g ∘ ϕ h ∘ f, [g, h := id, id] }
   ϕ id ∘ f
```

4. `M f ∘ μ = μ ∘ M (M f)`

```
   R (θ id) ∘ R (L (R (L f)))
=  R (θ id ∘ L (R (L f)))
=    { θ (R g ∘ k ∘ f) = g ∘ θ k ∘ L f, [g, k, f := id, id, R (L f)] }
   R (θ (R id ∘ id ∘ R (L f)))
=  R (θ (R (L f)))
=    { θ (R g ∘ k ∘ f) = g ∘ θ k ∘ L f, [g, k, f := L f, id, id] }
   R (L f ∘ θ id ∘ id)
=  R (L f) ∘ R (θ id)
```

5.1 `μ ∘ η = id`

```
   R (θ id) ∘ ϕ id
=    { ϕ (g ∘ h ∘ L f) = R g ∘ ϕ h ∘ f, [g, h, f := θ id, id, id] }
   ϕ (θ id ∘ id ∘ id)
=  ϕ (θ id)
=    { ϕ ∘ θ = id }
   id
```

5.2 `μ ∘ M η = id`

```
   R (θ id) ∘ R (L (ϕ id))
=  R (θ id ∘ L (ϕ id))
=    { θ (R g ∘ k ∘ f) = g ∘ θ k ∘ L f, [g, k, f := id, id, ϕ id] }
   R (θ (R id ∘ id ∘ ϕ id))
=  R (θ (ϕ id))
=    { θ ∘ ϕ = id }
   R id
=  id
```

6. `μ ∘ μ = μ ∘ M μ`

```
   R (θ id) ∘ R (L (R (θ id)))
=  R (θ id ∘ L (R (θ id)))
=    { θ (R g ∘ k ∘ f) = g ∘ θ k ∘ L f, [g, k, f := id, id, R (θ id)] }
   R (θ (R id ∘ id ∘ R (θ id)))
=  R (θ (R (θ id)))
=    { θ (R g ∘ k ∘ f) = g ∘ θ k ∘ L f, [g, k, f := θ id, id, id] }
   R (θ id ∘ θ id)
=  R (θ id) ∘ R (θ id)
```

The proofs above use only the functor laws, the fact that `ϕ` and `θ` are inverses, and the naturality laws of `ϕ` and `θ`. Traditionally, the proofs would proceed by diagram chasing, which would probably be easier for those who are familiar with them. I am personally happy about being able to construct these equational proofs, guided mostly by the syntax.