Computation Constructs (Monads, Arrows, etc.)

I have become rather interested in how computation is modeled in Haskell. Several resources have described monads as "composable computation" and arrows as "abstract views of computation". I've never seen monoids, functors or applicative functors described in this way. It seems that they lack the necessary structure.

I find that idea interesting and wonder if there are any other constructs that do something similar. If so, what are some resources that I can use to acquaint myself with them? Are there any packages on Hackage that might come in handy?

Note: This question is similar to Monads vs. Arrows and https://stackoverflow.com/questions/2395715/resources-for-learning-monads-functors-monoids-arrows-etc, but I am looking for constructs beyond functors, applicative functors, monads, and arrows.

Edit: I concede that applicative functors should be considered "computational constructs", but I'm really looking for something I haven't come across yet. This includes applicative functors, monads and arrows.

asked Mar 9 '12 at 15:03

The Monad vs. Arrows page links to a paper that also compares applicative functors (aka idioms). -

Applicative functors most certainly are good at composable computation! In fact, they compose better than monads: the composition of two applicative functors is an applicative functor, which doesn't hold for monads. -

3 Answers

`Arrows` are generalized by categories, and so by the `Category` typeclass.

```haskell
class Category f where
  (.) :: f b c -> f a b -> f a c
  id :: f a a
```

The `Arrow` typeclass definition has `Category` as a superclass. Categories (in the Haskell sense) generalize functions (you can compose them but not apply them) and so are definitely a "model of computation". `Arrow` provides a `Category` with additional structure for working with tuples. So, while `Category` mirrors something about Haskell's function space, `Arrow` extends that to something about product types.
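As a small illustration of that tuple structure, here is a sketch using plain functions, which form the canonical `Arrow` instance (the names `fanOut` and `onBoth` are made up for the example):

```haskell
import Control.Arrow ((&&&), (***))

-- Plain functions are an Arrow, so the tuple combinators can be
-- demonstrated directly on (->).
fanOut :: Int -> (Int, Int)
fanOut = (+ 1) &&& (* 2)   -- feed one input to two arrows

onBoth :: (Int, Int) -> (Int, Int)
onBoth = (* 10) *** (+ 3)  -- run a separate arrow on each component

main :: IO ()
main = print (onBoth (fanOut 4))  -- (5, 8) becomes (50, 11)
```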

Every `Monad` gives rise to something called a "Kleisli category", and this construction gives you instances of `ArrowApply`. You can construct a `Monad` from any `ArrowApply` such that going full circle doesn't change your behavior, so in some deep sense `Monad` and `ArrowApply` are the same thing.

```haskell
newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }

instance Monad m => Category (Kleisli m) where
  id = Kleisli return
  (Kleisli f) . (Kleisli g) = Kleisli (\b -> g b >>= f)

instance Monad m => Arrow (Kleisli m) where
  arr f = Kleisli (return . f)
  first (Kleisli f) = Kleisli (\ ~(b, d) -> f b >>= \c -> return (c, d))
  second (Kleisli f) = Kleisli (\ ~(d, b) -> f b >>= \c -> return (d, c))
```

Actually every `Arrow` gives rise to an `Applicative` (universally quantified to get the kinds right) in addition to the `Category` superclass, and I believe the combination of the appropriate `Category` and `Applicative` is enough to reconstruct your `Arrow`.
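A sketch of that construction (the wrapper name `App` is made up here; base ships a similar wrapper as `WrappedArrow` in `Control.Applicative`): fixing the arrow's input type yields an `Applicative`:

```haskell
import Control.Arrow (Arrow, arr, (&&&), (>>>))

-- Fixing the input type r turns any Arrow into an Applicative.
newtype App a r b = App { runApp :: a r b }

instance Arrow a => Functor (App a r) where
  fmap f (App g) = App (g >>> arr f)

instance Arrow a => Applicative (App a r) where
  pure x = App (arr (const x))
  App f <*> App x = App ((f &&& x) >>> arr (\(g, y) -> g y))

-- With a = (->), this is just the reader applicative:
main :: IO ()
main = print (runApp ((+) <$> App (+ 1) <*> App (* 2)) 3)  -- prints 10
```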

So, these structures are deeply connected.

Warning: wishy-washy commentary ahead. One central difference between the `Functor`/`Applicative`/`Monad` way of thinking and the `Category`/`Arrow` way of thinking is that while `Functor` and its ilk are generalizations at the level of objects (types in Haskell), `Category`/`Arrow` are generalizations of the notion of morphisms (functions in Haskell). My belief is that thinking at the level of generalized morphisms involves a higher level of abstraction than thinking at the level of generalized objects. Sometimes that is a good thing, other times it is not. On the other hand, despite the fact that `Arrow`s have a categorical basis, and no one in math thinks `Applicative` is interesting, it is my understanding that `Applicative` is generally better understood than `Arrow`.

Basically you can think of "Category < Arrow < ArrowApply" and "Functor < Applicative < Monad" such that "Category ~ Functor", "Arrow ~ Applicative" and "ArrowApply ~ Monad".

More Concrete Below: As for other structures to model computation: one can often reverse the direction of the "arrows" (just meaning morphisms here) in categorical constructions to get the "dual" or "co-construction". So, if a monad is defined as

```haskell
class Functor m => Monad m where
  return :: a -> m a
  join :: m (m a) -> m a
```

(okay, I know that isn't how Haskell defines things, but `ma >>= f = join $ fmap f ma` and `join x = x >>= id`, so it just as well could be) then the comonad is

```haskell
class Functor m => Comonad m where
  extract :: m a -> a         -- this is co-return
  duplicate :: m a -> m (m a) -- this is co-join
```

This thing turns out to be pretty common as well. It turns out that `Comonad` is the basic underlying structure of cellular automata. For completeness, I should point out that Edward Kmett's `Control.Comonad` puts `duplicate` in a class between `Functor` and `Comonad` for "Extendable Functors", because you can also define

```haskell
extend :: (m a -> b) -> m a -> m b -- looks familiar? this is just the dual of >>=
extend f = fmap f . duplicate

-- and this is enough to recover duplicate
duplicate = extend id
```
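Here is a toy sketch of the cellular-automaton idea, with a minimal local `Comonad` class standing in for the one in Kmett's package, and non-empty lists as a one-sided "tape". `extend` applies a local rule (here: each cell plus its right neighbour) at every position, which is exactly the cellular-automaton pattern:

```haskell
import Data.List.NonEmpty (NonEmpty (..), toList)

-- Minimal local Comonad class, mirroring Control.Comonad from the
-- comonad package; defined here so the sketch is self-contained.
class Functor w => Comonad w where
  extract   :: w a -> a
  duplicate :: w a -> w (w a)
  extend    :: (w a -> b) -> w a -> w b
  extend f = fmap f . duplicate

-- Non-empty lists: extract the head, duplicate into all suffixes.
instance Comonad NonEmpty where
  extract (x :| _) = x
  duplicate w@(_ :| [])       = w :| []
  duplicate w@(_ :| (y : ys)) = w :| toList (duplicate (y :| ys))

-- A local rule: each cell becomes itself plus its right neighbour
-- (the rightmost cell, having no neighbour, stays put).
step :: NonEmpty Int -> Int
step (x :| [])      = x
step (x :| (y : _)) = x + y

main :: IO ()
main = print (toList (extend step (1 :| [2, 3])))  -- prints [3,5,3]
```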

It turns out that all `Monad`s are also "Extendable"

```haskell
monadDuplicate :: Monad m => m a -> m (m a)
monadDuplicate = return
```

while all `Comonads` are "Joinable"

```haskell
comonadJoin :: Comonad m => m (m a) -> m a
comonadJoin = extract
```

so these structures are very close together.

answered Mar 10 '12 at 02:03

Great, categories and comonads are my next two topics. Thanks. - doug moore

Ah, I love your cellular automata example. Edward Kmett has a really cool post about making every comonad into a monad in Haskell, but not the other way around. link. It is very high-level stuff, but if you take the time it will make you understand the connection between the two. - edgar klerks

@EdgarKlerks one of the consequences of that post I think is most interesting is that the `Store` comonad might be rather fundamental: lenses are just the "co-algebra of the store comonad" (aka `Lens a b = a -> Store b a`) and `State` is what you get by taking the end of the store comonad. Between lenses and state you have something a lot like imperative programming. I still feel a ways from understanding the significance of this, though. - Felipe JF

All Monads are Arrows (`Monad` is isomorphic to `ArrowApply`). In a different way, all Monads are instances of Applicative, where `<*>` is `Control.Monad.ap` and `*>` is `>>`. Applicative is weaker because it does not guarantee the `>>=` operation. Thus Applicative captures computations that do not examine previous results and branch on values. In retrospect, much monadic code is actually applicative, and a clean rewrite would make that explicit.
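That correspondence is easy to check concretely, for example with `Maybe`:

```haskell
import Control.Monad (ap)

-- For any Monad, <*> must agree with ap (and *> with >>).
main :: IO ()
main = do
  print (Just (+ 1) <*> Just 2)   -- Just 3
  print (Just (+ 1) `ap` Just 2)  -- Just 3, the same computation via >>=
  print (Just 'a' *> Just 'b')    -- Just 'b'
  print (Just 'a' >> Just 'b')    -- Just 'b'
```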

Extending monads, with recent Constraint kinds in GHC 7.4.1 there can now be nicer designs for restricted monads. And there are also people looking at parameterized monads, and of course I include a link to something by Oleg.

answered Apr 4 '12 at 10:04

Yes, monads are (roughly) a specialization of arrows and a generalization of applicatives. Are there any generalizations of monads that aren't arrows? Maybe something that generalizes the arrow? - doug moore

In libraries these structures give rise to different types of computations.

For example, Applicatives can be used to implement static effects. By that I mean effects that are fixed in advance, for example a state machine that accepts or rejects input states. Applicative effects cannot reshape their internal structure based on their input.

The type says it all:

```haskell
(<*>) :: f (a -> b) -> f a -> f b
```

It is easy to see from the type that the structure of `f` cannot depend on the value of `a`, because an `a` never reaches `f` at the type level.
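The list instance makes this concrete: the shape of `fs <*> xs` is determined by the shapes of `fs` and `xs` alone, whereas `>>=` can grow or shrink the structure based on the values it sees:

```haskell
-- fs <*> xs always has (length fs * length xs) elements,
-- regardless of the values inside.
applicativeShape :: Int
applicativeShape = length ([(+ 1), (* 2)] <*> [10, 20, 30])

-- With >>=, the shape of the result depends on the values themselves.
monadicShape :: [Int] -> Int
monadicShape xs = length (xs >>= \x -> replicate x x)

main :: IO ()
main = do
  print applicativeShape       -- 6
  print (monadicShape [1, 2])  -- 3, from [1,2,2]
  print (monadicShape [0, 4])  -- 4, from [4,4,4,4]
```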

Monads can be used for dynamic effects. This, too, can be read off the type signature:

```haskell
(>>=) :: m a -> (a -> m b) -> m b
```

How can you see this? Because `a` is on the same "level" as `m`. Mathematically, bind is a two-stage process: it is the composition of two functions, `fmap` and `join`. First we use `fmap` together with the monadic action to create a new structure embedded in the old one:

```haskell
fmap :: (a -> b) -> m a -> m b
f :: a -> m b
m :: m a
fmap f :: m a -> m (m b)
fmap f m :: m (m b)
```

`fmap` can create a new structure based on the input value. Then we collapse the structure with `join`, so we are able to manipulate the structure from within the monadic computation in a way that depends on the input:

```haskell
join :: m (m a) -> m a
join (fmap f m) :: m b
```
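Tracing the two stages on the list monad makes this concrete (`f` here is just an example function):

```haskell
import Control.Monad (join)

f :: Int -> [Int]
f x = [x, x * 10]

main :: IO ()
main = do
  let m = [1, 2]
  print (fmap f m)         -- [[1,10],[2,20]]: new structure inside the old
  print (join (fmap f m))  -- [1,10,2,20]: collapsed with join
  print (m >>= f)          -- [1,10,2,20]: the same thing in one step
```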

Many monads are easier to implement with `join`:

```haskell
m >>= f = join (fmap f m)
```

This makes it possible for a monad (here, a state monad over an `Int` counter) to express an effect that depends on an input value:

```haskell
addCounter :: Int -> State Int ()
```

Applicatives cannot do that, but applicatives (and hence any monad) can express effects that are fixed in advance:

```haskell
addOne :: State Int ()
```

Arrows give more control over the input and the output types, but for me they really feel similar to applicatives. Maybe I am wrong about that.

answered Mar 10 '12 at 14:03
