This package began as an exploration in generalizing Base.Fix1 and Base.Fix2. These types represent particular forms of functions: x -> f(v, x) and x -> f(x, v) for a given f and v. We'll pick up that motivation shortly.

There is a secondary motivation for this package, aligned with the philosophy that choosing names is hard, and that a long name may be avoided when concepts are further broken down:

If you need underscores, it most likely means that you can work harder to find a better name, or perhaps that you are mixing together two concepts that should be separated.


There are a lot of great names within the Julia ecosystem, and meaning is important:

With multiple dispatch, just as the meaning of a function is important to get consistent, it's also important to get the meanings of the argument slots consistent when possible.


This package can help to re-use these names and put them together in new ways. Crucially, it relies on parametric types, the same facility that allows Vector{Int64} and Vector{Float32} to be given "orthogonal" names, instead of requiring names that bake together the concept of "vector" and "element type".


Let's illustrate Fix1 and Fix2. We'll use the string function in Base, which concatenates the string representations of its arguments:

julia> string("first ", "second")
"first second"

Now, to construct and use the Fix1 and Fix2 types:

julia> using Base: Fix1, Fix2
julia> f1 = Fix1(string, "one then ")
(::Base.Fix1{typeof(string), String}) (generic function with 1 method)

julia> f1("two")
"one then two"

The function-call behavior of Fix1(f, bind) is the same as x -> f(bind, x).


julia> f2 = Fix2(string, " before two")
(::Base.Fix2{typeof(string), String}) (generic function with 1 method)

julia> f2("one")
"one before two"

The function-call behavior of Fix2(f, bind) is the same as x -> f(x, bind).

The key point of the Fix1 and Fix2 types is that methods can dispatch on

  1. the type of f
  2. the type of bind
  3. the position of bind within the function call
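To make this concrete, here is a small sketch; the `describe` function is hypothetical and not part of Base or this package:

```julia
using Base: Fix1, Fix2

# Hypothetical `describe` methods: dispatch distinguishes the fixed
# function (`string` vs `==`), the type of the bound value, and the
# position the bound value occupies in the eventual call.
describe(::Fix1{typeof(string), String}) = "string with first argument bound"
describe(::Fix2{typeof(string), String}) = "string with second argument bound"
describe(::Fix2{typeof(==), Int}) = "equality-with-an-Int predicate"

describe(Fix1(string, "one then "))  # "string with first argument bound"
describe(==(3))                      # "equality-with-an-Int predicate"
```

Note that `==(3)` constructs a `Fix2`, so it matches the third method; an anonymous function `x -> x == 3` could not.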

Dispatch is not tenable with anonymous functions. Let's illustrate while moving to a more practical example using == instead of string.

julia> f1 = x -> x == 0
#1 (generic function with 1 method)

julia> f2 = Fix1(==, 0)
(::Base.Fix1{typeof(==), Int64}) (generic function with 1 method)

Now define the "same" things again:

julia> f3 = x -> x == 0
#3 (generic function with 1 method)

julia> f4 = Fix1(==, 0)
(::Base.Fix1{typeof(==), Int64}) (generic function with 1 method)

The types of the two Fix1 values are the same:

julia> typeof(f2) === typeof(f4)
true

But each anonymous function definition introduces a new type with an opaque name:

julia> typeof(f1), typeof(f3)
(Main.var"#1#2", Main.var"#3#4")

Each new anonymous function definition introduces a unique type, which allows methods to specialize on the specific anonymous function passed as an argument, but does not usefully support dispatch. To be more precise, as far as dispatch is concerned, the types of anonymous functions are not special:

julia> foo(::typeof(f1)) = "f1"
foo (generic function with 1 method)

julia> foo(::typeof(f3)) = "f3"
foo (generic function with 2 methods)

julia> foo(f1)
"f1"

julia> foo(f3)
"f3"

But really we'd like to use a type that is less opaque and furthermore is "structural" in some ways, rather than purely "nominal".

Examples of Base.Fix2

Where is it useful to dispatch on these special functions? Because Base does not export and does not document these types, there aren't many methods in the Julia ecosystem.

But these types are constructed with, for example, ==(3) or in([1, 2, 3]). Types like these are useful as predicates to pass to higher-order functions, e.g. findfirst(==(3), some_array) to find the first element that equals 3. Brevity aside, these types make it possible to define more efficient methods of generic higher-order functions. For example, take this specific method of the findfirst function:

findfirst(p::Union{Fix2{typeof(isequal),Int},Fix2{typeof(==),Int}}, r::OneTo{Int}) =
    1 <= p.x <= r.stop ? p.x : nothing

The fallback for findfirst (triggered by e.g. findfirst(x->x==3, 1:10) instead of findfirst(==(3), 1:10)) would produce the same (correct) answer, but the method above will be quicker.
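To see both paths side by side (the specialized method and the fallback agree on the answer):

```julia
# `==(3)` constructs a `Base.Fix2`, so it is eligible for the specialized
# O(1) method above; the anonymous function can only hit the generic fallback.
p = ==(3)
p isa Base.Fix2{typeof(==), Int}        # true
findfirst(p, Base.OneTo(10))            # 3
findfirst(x -> x == 3, Base.OneTo(10))  # 3, via a linear scan
```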

Dispatching on the structure of the predicate function enables a certain form of symbolic computation.

Symbolic computation and lazy evaluation

This package provides a generalization of Fix1 and Fix2 in a few ways:

  1. A function of any positional arity can be used, and any number of its arguments can be bound, allowing the remaining arguments to be provided later.
  2. A function can have its keyword arguments bound.
  3. The function x -> f(x, b) is represented with types:
    • a Lambda to represent the function (args -> body)
    • a Call to represent the function call (f(...)) in the body
    • an ArgPos to represent the x in the body of the lambda function

The third generalization is powerful, because it's effectively the lambda calculus.

It is worth considering first just Call, which can serve the purpose of representing a delayed function call evaluation. If you prefer, you may also consider a thunk () -> foo(1, 2), which would be a Lambda (with no arguments) and a Call that does not mention any "free variables".

If laziness is all that is needed, then defining a Julia anonymous function will do the job. But this package allows an additional benefit since methods can dispatch on details of the lazy call.

In many domains, new types are introduced to represent this pattern.


Base.Generator consists of two fields f and iter. This can be taken as a representation of map(f, iter):

julia> using FixArgs
julia> gen = let f = string, iter = 1:10
           @xquote map(f, iter)
       end
Call(Some(map), FrankenTuple((Some(string), Some(1:10)), NamedTuple()))
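For comparison, the eager Base.Generator equivalent can be constructed and inspected directly:

```julia
# Base.Generator stores the function and the iterable as plain fields;
# it is effectively a lazy representation of `map(f, iter)`.
gen_base = Base.Generator(string, 1:3)
gen_base.f === string   # true
gen_base.iter === 1:3   # true
collect(gen_base)       # ["1", "2", "3"]
```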

It's certainly less nice to look at than Base.Generator{UnitRange{Int64}, typeof(string)}(string, 1:10). Better UX / ergonomics would be possible by defining a type alias:

const MyGenerator{F, I} = FixArgs.Call{Some{typeof(map)}, FixArgs.FrankenTuples.FrankenTuple{Tuple{Some{F}, Some{I}}, (), Tuple{}}}

That is quite unsightly, and there are quite a few internals leaking out. We can use a macro instead:

julia> const MyGenerator{F, I} = @xquoteT map(::F, ::I)
FixArgs.Call{Some{typeof(map)}, FrankenTuples.FrankenTuple{Tuple{Some{F}, Some{I}}, (), Tuple{}}} where {F, I}

It should be made convenient to define constructors and show methods that correspond with the type alias.

To evaluate the call (i.e. "collect the iterator"):

julia> xeval(gen)
10-element Vector{String}:

This example is actually circular. The evaluation of the map call is done in terms of Generator! The definition:

map(f, A) = collect(Generator(f,A))

Breaking this circularity is possible by defining

function iterate(gen::(@xquoteT map(::F, ::I))) where {F, I}
    f = FixArgs.xeval(gen.args[1])  # not the prettiest thing right now...
    iter = FixArgs.xeval(gen.args[2])
    # ...
end

and might also require a separation of the purposes of collect and map. See this issue.

Many types in Base.Iterators can be seen as lazy calls of existing functions. For example Base.Iterators.Filter(flt, itr) could be replaced with @xquote filter(flt, itr). Base.Iterators.Filter would be an alias for (@xquoteT filter(::F, ::I)) where {F, I} to enable the existing symbolic computation, e.g.:

reverse(f::Filter) = Filter(f.flt, reverse(f.itr))

Base.Iterators.Flatten, which defines a convenience function

flatten(itr) = Flatten(itr)

could be written in terms of a function flatten with no methods. However, it is perhaps better seen as @xquote reduce(vcat, it)


What is Rational but lazy division on integers?

julia> 1/9 * 3/2 # eager division

julia> using FixArgs

julia> (@xquote 1/9) * (@xquote 3/2)
ERROR: MethodError: no method matching *(::FixArgs.Call{Some{typeof(/)}, FrankenTuples.FrankenTuple{Tuple{Some{Int64}, Some{Int64}}, (), Tuple{}}}, ::FixArgs.Call{Some{typeof(/)}, FrankenTuples.FrankenTuple{Tuple{Some{Int64}, Some{Int64}}, (), Tuple{}}})
Closest candidates are:
  *(::Any, ::Any, !Matched::Any, !Matched::Any...) at /opt/hostedtoolcache/julia/1.7.3/x64/share/julia/base/operators.jl:655

Of course, we have to do some more work.

using Base: divgcd

function Base.:*(
        x::(@xquoteT ::T / ::T),
        y::(@xquoteT ::T / ::T),
        ) where {T}
    xn, yd = divgcd(something(x.args[1]), something(y.args[2]))
    xd, yn = divgcd(something(x.args[2]), something(y.args[1]))
    ret = @xquote $(xn * yn) / $(xd * yd) # TODO use `unsafe_rational` and `checked_mul`
end

Now, try again:

julia> q = (@xquote 1/9) * (@xquote 3/2)
Call(Some(/), FrankenTuple((Some(1), Some(6)), NamedTuple()))

julia> map(xeval, q.args) # make numerator and denominator plainly visible
2-element Vector{Int64}:
 1
 6

Compare with using // to construct a Base.Rational:

julia> 1//9 * 3//2
1//6

Finally, because we have encoded the relationship between this "new" rational type, and /, we can do:

julia> xeval(q)
0.16666666666666666

We could define an alias:

const MyRational{T} = @xquoteT(::T / ::T)
FixArgs.Call{Some{typeof(/)}, FrankenTuples.FrankenTuple{Tuple{Some{T}, Some{T}}, (), Tuple{}}} where T

which would also enforce the same type for both the numerator and denominator, as is the case of Base.Rational.

julia> sizeof(MyRational{Int32})
8

Occasionally, a user might find this to be a limitation, yet they would still like to use some of the generic algorithms that might apply.

The fields of Base.Rational are num and den. They have to be named since there is nothing else that gives the fields any meaning at all. In our type, however, they can be distinguished by the role they play with respect to the / function.

Fixed-Point Numbers and "static" arguments

Replacing Rational may be silly, but this approach comes with a benefit: it also generalizes fixed-point numbers. A fixed-point number is just a lazy division with a specified (i.e. "static") denominator.

If we have a large array of fixed-point numbers with the same denominator, we certainly do not want to store the denominator repeatedly. Furthermore, we want efficient arithmetic. There is no need to check for a common denominator (let alone find one) if the two denominators are known to be equal at code-generation time.
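The idea can be sketched in plain Julia before involving FixArgs; FixedPt here is a hypothetical stand-in with the denominator baked into a type parameter:

```julia
# Hypothetical sketch: a fixed-point value n/D where the denominator D is
# part of the type, so it occupies no storage at runtime.
struct FixedPt{T<:Integer, D}
    n::T
end

# Same-denominator addition: no common-denominator check is needed,
# because equality of the denominators is guaranteed by dispatch.
Base.:+(a::FixedPt{T,D}, b::FixedPt{T,D}) where {T,D} = FixedPt{T,D}(a.n + b.n)
value(x::FixedPt{T,D}) where {T,D} = x.n / D

x = FixedPt{Int8, 128}(Int8(3))
y = FixedPt{Int8, 128}(Int8(2))
value(x + y)                # 0.0390625, i.e. 5/128
sizeof(FixedPt{Int8, 128})  # 1 byte, the same as Int8
```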

There are values that we can "bake into" the type of Call itself (see Base.isbitstype)!

Here is an example that models FixedPointNumbers.Fixed{Int8,7} from FixedPointNumbers.jl. The macros use the notation V::::S (quadruple colon) to mark an argument V as "static". Also note the use of $ to escape subexpressions.

julia> using FixArgs
julia> MyFixed{Int8, 128} === typeof(MyQ0f7(3))
true

julia> function Base.:+(a::MyFixed{N,D}, b::MyFixed{N,D})::MyFixed{N,D} where {N, D}
           n = something(a.args[1]) + something(b.args[1])
           return (@xquote $(N(n)) / D::::S)
       end

julia> xeval(MyQ0f7(3) + MyQ0f7(2)) === 5/128
true

julia> sizeof(MyFixed{Int8, 128})
1

julia> sizeof(Int8)
1

And the generated code appears to be equivalent between

using FixedPointNumbers
look_inside_1(x, y) = reinterpret(Fixed{Int8, 7}, Int8(x)) + reinterpret(Fixed{Int8, 7}, Int8(y))


look_inside_2(x, y) = MyQ0f7(x) + MyQ0f7(y)

Pure-imaginary type and Base.Complex

Now that we can make some arguments static, we can introduce a meaningful example where the lazy call does not correspond to a valid eager call. You can define a type such that xeval raises MethodError and still represent the computation symbolically. You can define a type A in terms of a function f and a type B even if it may not make sense to define a new method of f on B.

Here is an over-the-top example:

julia> using FixArgs
julia> struct ImaginaryUnit end # if we want to be really cute, can do `@xquote sqrt((-1)::::S)`
julia> const Imaginary{T} = @xquoteT ::T * ::ImaginaryUnit
FixArgs.Call{Some{typeof(*)}, FrankenTuples.FrankenTuple{Tuple{Some{T}, Some{Main.ImaginaryUnit}}, (), Tuple{}}} where T

julia> Imaginary(x) = @xquote x * $(ImaginaryUnit()) # note escaping
FixArgs.Call{Some{typeof(*)}, FrankenTuples.FrankenTuple{Tuple{Some{T}, Some{Main.ImaginaryUnit}}, (), Tuple{}}} where T

Note that if we assume we have no Base.Complex or anything like it, we don't have a way to evaluate further:

julia> xeval(Imaginary(3))
ERROR: MethodError: no method matching *(::Int64, ::Main.ImaginaryUnit)
Closest candidates are:
  *(::Any, ::Any, !Matched::Any, !Matched::Any...) at /opt/hostedtoolcache/julia/1.7.3/x64/share/julia/base/operators.jl:655
  *(::T, !Matched::T) where T<:Union{Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8} at /opt/hostedtoolcache/julia/1.7.3/x64/share/julia/base/int.jl:88
  *(::Union{Int16, Int32, Int64, Int8}, !Matched::BigInt) at /opt/hostedtoolcache/julia/1.7.3/x64/share/julia/base/gmp.jl:542

We represented pure imaginary numbers as lazy multiplication of numbers and a singleton type ImaginaryUnit, and it is basically as if we had defined

struct Imaginary{T}

Let's just go ahead and represent complex numbers too:

julia> # const MyComplex{R, I} = @xquoteT ::R + (::I * ::ImaginaryUnit) # TODO this macro doesn't work
       MyComplex(r, i) = @xquote r + i * $(ImaginaryUnit())
MyComplex (generic function with 1 method)

Note this monster of a type has the same size as Base.Complex:

julia> sizeof(Complex(1, 2))
16

julia> sizeof(MyComplex(1, 2))
16

and the same layout too:

julia> reinterpret(Int64, [Complex(1, 2)])
2-element reinterpret(Int64, ::Vector{Complex{Int64}}):
 1
 2

julia> reinterpret(Int64, [MyComplex(1, 2)])
2-element reinterpret(Int64, ::Vector{FixArgs.Call{Some{typeof(+)}, FrankenTuples.FrankenTuple{Tuple{Some{Int64}, FixArgs.Call{Some{typeof(*)}, FrankenTuples.FrankenTuple{Tuple{Some{Int64}, Some{Main.ImaginaryUnit}}, (), Tuple{}}}}, (), Tuple{}}}}):
 1
 2

Of course, there are many different types that would all be mathematically equivalent by swapping the arguments to + or *. Note that swapping the arguments to + would give a different memory layout.

Faster set operations by deferring computation

Suppose we have some generic function to return bounds on set-like objects.

"""Produce the unique best bound, in the sense that `x ∈ input` implies `x ∈ result`"""
function bounding#=(result_type, input)=# end

The objects may be shapes in space, and result_type could correspond to important categories of bounding volumes. To keep things simple, let us deal with sets of integers as represented by e.g. Vector and UnitRange.

julia> bounding(::Type{UnitRange}, v::Vector{<:Integer}) = UnitRange(extrema(v)...)
bounding (generic function with 1 method)

julia> bounding(UnitRange, [1, 3, 5])
1:5

Consider the following computation:

julia> eager = bounding(UnitRange, union(1:3, 5:7))
1:7

It might be worth deferring that union call. It produces a representation with a size linear in the number of elements, whereas a deferred computation is representable in constant size.
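The constant-size point can be illustrated in plain Julia; LazyUnion is a hypothetical stand-in for the deferred call:

```julia
# Hypothetical sketch: a deferred union of two ranges stores only the two
# ranges themselves (constant size), never the elements of the union.
struct LazyUnion{A,B}
    a::A
    b::B
end

bounding_sketch(u::LazyUnion{<:UnitRange,<:UnitRange}) =
    UnitRange(min(first(u.a), first(u.b)), max(last(u.a), last(u.b)))

bounding_sketch(LazyUnion(1:3, 5:7))  # 1:7, without materializing the union
```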

using FixArgs

function bounding(
        _union::(@xquoteT union(::UnitRange{T}, ::UnitRange{T}))
        ) where T <: Integer
    (a, b) = something.(_union.args)
    UnitRange(min(minimum(a), minimum(b)), max(maximum(a), maximum(b)))
end

Now to use our specialized method for bounding unions of UnitRanges, we simply defer one part of the previous computation:

julia> lazy = bounding(UnitRange, @xquote union($(1:3), $(5:7)))
1:7

julia> eager == lazy
true

Ergonomics and Syntax

Ideally, this package would support exactly the syntax that Julia supports. However, it is a bit challenging not being able to hook into lowering exactly.

Anonymous function syntax is still under discussion in the Julia ecosystem.

There are packages that define anonymous function syntax, and it would be nice to ensure that they compose with e.g. @xquote.



(see also nominative and structural type systems)

The Julia ecosystem goes to great lengths to find the right generic functions and to ensure that all methods defined on generic functions are semantically compatible. This effort enables generic programming and interoperability. Much care goes into naming these functions and deciding their generic meaning, and it makes sense to reuse those names when possible, instead of creating a new name.

One possible advantage of using compositional names for types is that two packages can declare methods on the same type without knowing about each other, as long as both packages know about the constituents of the composition. There may be situations in which using a compositional name is a better strategy than trying to establish a "FooBase" package.

It is certainly possible to go overboard, however.

Type Piracy

The Julia manual discusses not overloading methods on container types defined by others.

Arguably Call can be seen as a container type, but its whole purpose is for methods to be defined on it. As far as type piracy is concerned, e.g. @xquoteT ::Int64 / ::Int64 is equivalent to using Rational{Int64}, since in both cases all of the types are owned by Base (assuming Call, Lambda, etc. are not owned by your package (module) either). In this example, the types are typeof(/) and Int64.

For a function owned by your package, e.g. my_foo, by all means define my_foo(::(@xquoteT ::Int64 / ::Int64)).

For a function not owned by your package, e.g. Base.:*, to avoid type piracy, * should be defined by a package that owns at least one of the constituent types, e.g.

  • Base.:*(::(@xquoteT MyFunction(::NotMyType)))
  • Base.:*(::(@xquoteT NotMyFunction(::MyType)))
  • Base.:*(::(@xquoteT MyFunction(::MyType)))

If it is controversial what f(::(@xquoteT func(::A, ::B))) should mean, then it is likely worth defining a new type instead of using this package.

If there are multiple functions that apply to the fields of a struct, as in:

struct MyType{T1, T2}
    arg1::T1
    arg2::T2
end

f1(t::MyType) = *(t.arg1, t.arg2)
f2(t::MyType) = +(t.arg1, t.arg2)

Then the type would also not be a good candidate to be replaced with a Call: which of * or + would one choose?

More examples

Types of geometry primitives from scratch

What is better?

struct Disc{R, C}
    r::R
    c::C
end


using LinearAlgebra: norm
Disc(r, c) = @xquote x -> norm(x - c) ≤ r

Honestly, I don't know. I imagine there is a trade-off. But consider a situation where you'd like to distinguish between the disc with boundary, the disc without boundary, and just the boundary (a circle). Those distinctions are as simple as ≤, <, and ==. You might more accurately call the struct above DiscWithBoundary, and introduce DiscWithoutBoundary and Circle. Or instead you might use a convention that all the regions are closed sets and introduce wrapper types WithoutBoundary and OnlyBoundary. In both cases, there are a lot of names to choose...

A way to avoid naming a new method

Computing the square norm of a vector can usually be done more efficiently than computing the norm and then computing the square.

julia> using FixArgs
julia> using LinearAlgebra: norm, norm_sqr
julia> vec = [1, 1]
2-element Vector{Int64}:
 1
 1

instead of writing

julia> norm_sqr(vec)
2.0000000000000004

maybe you would rather write

julia> xeval(@xquote norm(vec)^2::::S)
2.0000000000000004

Perhaps this would be better with an alternative macro that applies ::::S to any isbitstype literal and also calls xeval:

@a_good_short_name norm(vec)^2

Whatever package owns norm can add a new method to xeval to detect this form and use the more efficient implementation:

xeval(c::(@xquoteT norm(::T)^2::::S)) where T = norm_sqr(c.args[1].args[1])

This is different from having a package define its own macro. This would be one macro that can be hooked into by defining new methods on xeval.

Alternative to Base.literal_pow

julia> Meta.@lower x^2
:($(Expr(:thunk, CodeInfo(
    @ none within `top-level scope`
1 ─ %1 = Core.apply_type(Base.Val, 2)
│   %2 = (%1)()
│   %3 = Base.literal_pow(^, x, %2)
└──      return %3
))))

Could instead lower to roughly

%1 = @xquote x^(2::::S) # i.e. something roughly like `Call(^, x, Val(2))`
%2 = xeval(%1)

xeval could call Base.literal_pow for backwards compatibility:

julia> using FixArgs
julia> function FixArgs.xeval(x::(@xquoteT (::B)^(E::::S))) where {B, E}
           println("I was called")
           base = xeval(x.args[1])
           Base.literal_pow(^, base, Val(E))
       end

julia> let x = 3
           xeval(@xquote x^(2::::S))
       end
I was called
9

but going forward, types that wish to hook into this functionality would define a method on xeval directly.

Base.Broadcast.Broadcasted is like Call with extra information (Style and axes::Axes); materialize is like xeval. There is more information in the manual.

Base.Generator, as already discussed.

partial application with keyword arguments


isapprox(y; kwargs...) = x -> isapprox(x, y; kwargs...)

This definition would probably use Base.Fix2 if Fix2 also supported keyword arguments; all of the other docstrings in Base that say "Create a function" do use it.
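A hypothetical Fix2 variant that also captures keyword arguments might look like this sketch (Fix2KW is an invented name, not part of Base or this package):

```julia
# Hypothetical sketch: like Base.Fix2, but keyword arguments given at
# construction time are forwarded to the eventual call.
struct Fix2KW{F,T,KW}
    f::F
    x::T
    kw::KW
end
(c::Fix2KW)(y) = c.f(y, c.x; c.kw...)

near_pi = Fix2KW(isapprox, 3.14, (atol = 0.01,))
near_pi(3.141)  # true
near_pi(3.2)    # false
```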

array-of-struct and struct-of-array representations


soa = (a=[1,2,3], b=[10, 20, 30])
aos_eager = map(NamedTuple{(:a, :b)} ∘ tuple, soa.a, soa.b)
3-element Vector{NamedTuple{(:a, :b), Tuple{Int64, Int64}}}:
 (a = 1, b = 10)
 (a = 2, b = 20)
 (a = 3, b = 30)

non-standard evaluation

nominal vs structural fields Instead of field names inner and outer, the arguments of function composition can be distinguished by the role they play with respect to the function ∘.

What if ∘ itself is defined as ∘(a, b) = @xquote a ∘ b? Then define a function without methods: function ∘ end


The poor compiler...

The poor user trying to understand the error message...

Method ambiguities

Unifying too many things can lead to too much coupling.

API and internals


Represent the arity of a Lambda.

Currently, only represents a fixed number of positional arguments, but may be generalized to include optional and keyword arguments.

P is 0, 1, 2, …; KW is always NoKeywordArguments for now, and may be extended in the future.


A call "f(args...)". args may represent both positional and keyword arguments.


Terms are evaluated with respect to a Context. A Context is an association between bound variables and values; Contexts may be nested (via the parent field).


Nest ArgPos in ParentScopes to represent a reference to the formal parameters of a "parent" function. Forms a unary representation.

Related: De Bruijn indices.

isexpr(expr) -> Bool
isexpr(expr, head) -> Bool

Checks whether given value isa Base.Expr and if further given head, it also checks whether the head matches expr.head.


julia> using ExprParsers
julia> EP.isexpr(:(a = hi))
true

julia> EP.isexpr(12)
false

julia> EP.isexpr(:(f(a) = a), :(=))
true

julia> EP.isexpr(:(f(a) = a), :function)
false
postwalk(f, expr)

Applies f to each node in the given expression tree, returning the result. f sees expressions after they have been transformed by the walk.

See also: prewalk.

prewalk(f, expr)

Applies f to each node in the given expression tree, returning the result. f sees expressions before they have been transformed by the walk, and the walk will be applied to whatever f returns.

This makes prewalk somewhat prone to infinite loops; you probably want to try postwalk first.
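A minimal model of postwalk can make the bottom-up order concrete (for illustration only; the package's internal implementation may differ):

```julia
# Leaves go to `outer` directly; Expr nodes have their children rebuilt
# via `inner` first, then the rebuilt node goes to `outer`.
walk(x, inner, outer) = outer(x)
walk(x::Expr, inner, outer) = outer(Expr(x.head, map(inner, x.args)...))

postwalk_demo(f, x) = walk(x, x -> postwalk_demo(f, x), f)

# Example: double every integer literal in an expression tree.
double_ints(x) = x isa Integer ? 2x : x
postwalk_demo(double_ints, :(1 + 2))  # :(2 + 4)
```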


α-conversion in λ-calculus

labeler(x) produces a Symbol or similar from:

  • x.referent_depth
  • x.antecedent_depth
  • x.arg_i
  • x.sym – the name before relabeling

x.referent_depth - x.antecedent_depth is the number of ->s between the evaluation site and the definition site.


Given a value, produce an expression that when eval'd produces the value.


julia> eval(uneval(Expr(:my_call, :arg1, :arg2)))
:($(Expr(:my_call, :arg1, :arg2)))

julia> eval(eval(uneval(:(sqrt(9)))))
3.0

This function is used to return expressions from this package's macros. This is likely not a well-posed problem to begin with. Related issue.

Note the special case for :(esc(x)).


A convenience macro that implements the syntax of this PR

@fix f(_, b) is the equivalent of x -> f(x, b)


This macro is used to debug and introspect the escaping behavior of @xquote

julia> dump(let x = 9
           @xquote sqrt(x)
       end)
Expr
  head: Symbol call
  args: Array{Any}((2,))
    1: sqrt (function of type typeof(sqrt))
    2: Int64 9

The types produced by this package are unwieldy. This macro permits a convenient syntax, e.g. @xquoteT func(::Arg1Type, ::Arg2Type), to represent types.

let func = identity, arg = 1
    typeof(@xquote func(arg)) == @xquoteT func(::typeof(arg))
end

# output

true

If an argument is "static", then it is part of the type, and the value is annotated as illustrated:

julia> @xquoteT string(123::::S)