There are two forms of compiler optimizations crucial to efficient program execution in Haskell: specialization and inlining.
What is specialization?
Specialization refers to replacing a call to a generic typeclass method with its type-specialized definition. Ad-hoc polymorphism in Haskell is achieved through typeclass methods, which can have different underlying algorithms depending on the type at which they're implemented. For example, `+` is an operator defined by the `Num` class: an abstract function that can be used with any type that has a `Num` instance. Another example is `fmap`, a method available for any functor. That functor could be a list or an `Either`, and each of these types has its own implementation of `fmap` tailor-made for the data structure. GHC can use what it knows about how `fmap` works with each of these types to generate better code, instead of figuring things out from scratch at run time. In the case of `fmap`, if we know that the functor is a list, we can replace the indirect, dynamic lookup of `fmap` in the dictionary with a direct, static call to the list implementation. That is, we can easily find the right `fmap` for any datatype (the `Either` version of `fmap`, and so on) and use that.
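To make this concrete, here's a small sketch (the function names are my own, not from the original article) of what GHC can do once the functor is known:

```haskell
-- Generic: every call goes through the Functor dictionary that the
-- caller supplies, so the choice of fmap is made dynamically.
bump :: Functor f => f Int -> f Int
bump = fmap (+ 1)

-- Specialized to lists: the dictionary is gone, and GHC can call the
-- list's own fmap (essentially map) directly.
bumpList :: [Int] -> [Int]
bumpList = fmap (+ 1)

main :: IO ()
main = print (bumpList [1, 2, 3])  -- [2,3,4]
```

Both functions compute the same thing; the difference is whether the `fmap` being used is resolved at run time or pinned down at compile time.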
To solidify this idea, consider an imaginary yet simple class method that takes a polymorphic parameter:

```haskell
safeCheese :: a -> Bool
safeCheese = dairyLevel . dairyType
```
This method calculates whether the lactose level in a given cheese is safe for a lactose-intolerant pizza lover to consume. It takes an input of any type `a` and produces a `Bool`. The polymorphic input `a` could be a numeric value, such as a `Float` indicating the milligrams of cheese present in the meal. It could also be a `String` naming the kind of cheese (like `"feta"`), as the type of cheese has a variable impact on the user's overall digestive comfort (feta seems to sit better with her than mozzarella for reasons unknown).
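As a sketch of how this might look as an actual typeclass, here is one hypothetical encoding (the `Cheese` class and both instances are invented for illustration, not part of the original example):

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- A hypothetical class turning safeCheese into a real class method.
class Cheese a where
  safeCheese :: a -> Bool

-- Milligrams of cheese in the meal: safe below some threshold.
instance Cheese Float where
  safeCheese mg = mg < 200

-- The kind of cheese: feta sits better than mozzarella.
instance Cheese String where
  safeCheese kind = kind == "feta"

main :: IO ()
main = print (safeCheese (150 :: Float), safeCheese "mozzarella")
```

Each instance supplies its own algorithm; the class dictionary is what carries the right one to each call site.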
In this example, each time the class method
safeCheese is invoked, we have to dynamically lookup which definition to use in the supplied class dictionary, to make sure it’s appropriate for whatever input is passed in. However, if we know what concrete type
a is instantiated with (ex.,
Int), we can specialize the definition and produce better code. Specialization thus removes typeclass dictionary arguments. Once specialized, dictionary methods can be easily inlined (I’ll explain what inlining is below), and this process creates more efficient code. Specialization is achieved by including the
SPECIALISE pragma in the module. The specialized definition is placed in a module but also exported so it can be reused if there’s another upstream call-site where specialization would take place. For more information on how to use this feature, see documentation.
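A sketch of what the pragma looks like in practice, using a made-up overloaded function (the names and threshold are illustrative):

```haskell
-- An overloaded function: callers normally pass Num and Ord
-- dictionaries along with the argument.
gramsSafe :: (Num a, Ord a) => a -> Bool
gramsSafe mg = mg + mg < 400

-- Ask GHC to emit a Double-only copy with no dictionary arguments;
-- calls at type Double are rewritten to use that copy.
{-# SPECIALISE gramsSafe :: Double -> Bool #-}

main :: IO ()
main = print (gramsSafe (150 :: Double))
```

The pragma doesn't change what the function computes, only which machine-level version GHC generates and picks for `Double` call sites.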
What is inlining?
Inlining happens when the compiler replaces a function call with the function's definition. This is also known as "unfolding": we take the innards of the function and use them directly. It's a bit like taking a sandwich apart and consuming its constituent parts directly (the cheese, tomatoes, lettuce, mayo, etc.), rather than the sandwich itself. Referring back to our `safeCheese` example, instead of calling `safeCheese`, we'd directly call its body: `dairyLevel . dairyType`.
Inlining is achieved via the `INLINABLE` and `INLINE` pragmas. Unlike specialization, which works on typeclass methods, inlining can be applied to any average-Joe function, and inlined functions are also specialized by the compiler. This is particularly useful in situations where we don't have typeclasses, and hence need to force the unfolding of top-level function definitions directly. GHC is unlikely to inline function calls (as opposed to method calls, which are aggressively inlined) unless you explicitly tell it to do so. `INLINABLE` makes the function's definition available for unfolding and specialization at call sites, without discriminating between functions. `INLINE` does all of that, in addition to fiercely biasing GHC to ensure the inlining of the function occurs. Note that `INLINE` is the nuclear option, and hence it must be exercised with care: though it may be counter-intuitive, sprinkling `INLINE` everywhere actually slows down compilation. To further understand which circumstances are most appropriate for this technique, see the GHC documentation.
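A sketch of the two pragmas side by side (both functions are invented for illustration):

```haskell
-- INLINABLE: keep the full definition available in the module's
-- interface file so call sites in other modules can unfold and
-- specialize it; GHC still uses its own judgment about inlining.
goodDairy :: (Ord a, Num a) => a -> Bool
goodDairy x = x >= 0 && x < 200
{-# INLINABLE goodDairy #-}

-- INLINE: strongly bias GHC toward replacing every call with this
-- body. Best reserved for small, hot functions; overusing it slows
-- compilation.
double :: Int -> Int
double x = x * 2
{-# INLINE double #-}

main :: IO ()
main = print (goodDairy (double 50 :: Int))
```

After inlining, a call like `double 21` is replaced with `21 * 2` at the call site, which opens the door to constant folding and further simplification.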
Why does this matter?
Inlining and specialization are ways in which programmers can guide the compiler's optimizations to save function calls (and their stack frames) and thus time. Haskell encourages programmers to build abstractions by composing functions. Function composition allows us to write programs at a high level of abstraction, meaning we can take advantage of polymorphism in a variety of ways. These abstractions, however, are eventually saturated with concrete arguments during program execution, and the compiler decides whether inlining a function will enable further optimizations downstream.