Friday, April 28, 2006

Defining new compiler intrinsics

Although the code is not all in darcs yet, you can now define new compiler intrinsics in a very direct way. For example, here is how fixnum-bitand is defined on PowerPC:
\ fixnum-bitand [
"x" operand "y" operand "x" operand AND
] H{
{ +input { { f "x" } { f "y" } } }
{ +output { "x" } }
} define-intrinsic

This overrides the fixnum-bitand primitive in compiled code; instead of compiling a call to the runtime function, the compiler generates an AND instruction by calling the assembler word 'AND' with the appropriate operands.
I think this is very neat and clean. I wonder if anybody will find a use for this feature outside of the compiler core.
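To give a feel for what the compiler does with such a definition -- this is a rough Python model of the idea, not Factor's actual code -- think of a table mapping words to code generators together with their input/output requirements, consulted before falling back to a runtime call:

# A rough Python model of the idea, not Factor's actual implementation:
# the compiler keeps a table mapping words to code generators plus an
# input/output specification, and prefers an intrinsic over a runtime call.

intrinsics = {}

def define_intrinsic(word, generator, inputs, outputs):
    """Register a code generator for 'word', with its register requirements."""
    intrinsics[word] = (generator, inputs, outputs)

def bitand_generator(code, regs):
    # Emit a single AND instruction instead of a call into the runtime.
    code.append(("AND", regs["x"], regs["y"], regs["x"]))

define_intrinsic("fixnum-bitand", bitand_generator,
                 inputs=["x", "y"], outputs=["x"])

def compile_call(word, code, regs):
    if word in intrinsics:
        generator, _, _ = intrinsics[word]
        generator(code, regs)           # open-code the operation
    else:
        code.append(("CALL", word))     # fall back to the runtime function

code = []
compile_call("fixnum-bitand", code, {"x": "r3", "y": "r4"})
print(code)   # [('AND', 'r3', 'r4', 'r3')]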

A look at the new compiler design

The higher levels of the compiler did not change at all. The stack effect inference module spits out a dataflow intermediate representation (IR), and the dataflow optimizer rewrites it. The most important dataflow IR node types are listed below, with a rough sketch in code after the list:
  • #push - push literals on the data stack
  • #shuffle - permute the elements of the data or call stack
  • #call - call a word
  • #label - an inlined recursive block (loop, etc)
  • #if - conditional with two child nodes
  • #dispatch - jump table with multiple nodes; jumps to the node indexed by a number on the data stack
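Here is a rough sketch of these node types in Python -- the real IR is a Factor data structure, and the names and fields here are only illustrative:

from dataclasses import dataclass

@dataclass
class Node:
    successor: "Node | None" = None   # next node in the dataflow graph

@dataclass
class Push(Node):                     # #push: load literals onto the data stack
    literals: tuple = ()

@dataclass
class Shuffle(Node):                  # #shuffle: permute data/call stack items
    effect: str = ""

@dataclass
class Call(Node):                     # #call: call a word
    word: str = ""

@dataclass
class Label(Node):                    # #label: an inlined recursive block
    name: str = ""
    body: "Node | None" = None

@dataclass
class If(Node):                       # #if: conditional with two child nodes
    true_branch: "Node | None" = None
    false_branch: "Node | None" = None

@dataclass
class Dispatch(Node):                 # #dispatch: jump table of child nodes
    branches: tuple = ()

# For example, dup tag 2 eq? [ ... ] [ ... ] if corresponds roughly to:
typecheck = Shuffle(effect="( x -- x x )",
    successor=Call(word="tag",
        successor=Push(literals=(2,),
            successor=Call(word="eq?",
                successor=If()))))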

What is different is that now, machine code is generated directly from optimized dataflow IR, instead of being converted to linear IR and having another optimization pass applied. Paradoxically, the new approach generates better code, and the key is making use of CPU registers in an intelligent way.
The simplest optimization performed by the generator is an old one; the defunct linear IR optimizer did it too. The idea is that inside runs of generated code between calls to runtime functions, multiple stack height change instructions can be merged into a single one at the end of the run. As long as no runtime functions which inspect the stack are called in the meantime, the temporary inconsistency causes no problems, and the merge saves instructions.
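As a rough sketch of this idea (in Python rather than Factor, with invented instruction names), the code generator can simply accumulate the net height change and flush it as a single instruction just before anything that needs a consistent stack:

# Illustrative sketch of merging stack height changes; names are invented.
class Generator:
    def __init__(self):
        self.pending_height = 0   # net stack height change not yet emitted
        self.instructions = []

    def change_height(self, delta):
        # Record the change instead of emitting an instruction immediately.
        self.pending_height += delta

    def flush_height(self):
        # Emit a single stack pointer adjustment for the accumulated change.
        if self.pending_height != 0:
            self.instructions.append(("ADD", "datastack", self.pending_height))
            self.pending_height = 0

    def call_runtime(self, function):
        # The runtime may inspect the stack, so it must be consistent first.
        self.flush_height()
        self.instructions.append(("CALL", function))

gen = Generator()
gen.change_height(+2)   # e.g. a dup and a literal push
gen.change_height(-2)   # e.g. eq? and the conditional popping its inputs
gen.call_runtime("some_word")
print(gen.instructions) # no height adjustment was needed at all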
When the generator encounters a #push node, it allocates enough registers to hold the literals being pushed, and loads the literals into the registers. Then it makes a note that the top n stack elements are actually in registers now. This information is recorded on a phantom stack. Note that the dataflow optimizer already kills dead literal loads, so there is no loss of efficiency from immediately loading the literals into registers without checking usage first.
#shuffle nodes are similar. If the phantom stack contains sufficient register assignments for the shuffle to take place, then the phantom stack is simply rearranged. Otherwise, we add special placeholder "location" objects to the phantom stack, and perform the shuffle. For example, suppose the compiler comes across a swap, but the phantom stack is empty, meaning no stack items are in registers. Then the phantom stack ends up looking like this:
{ T{ ds-loc f 0 } T{ ds-loc f 1 } }
Since locations are indexed with the top of the stack starting from zero, this means that the logical top of the stack is now the item physically underneath the current top, and vice versa. Note that #shuffle nodes do not allocate registers or load values; they perform a purely formal operation. This is because in many cases, values which would be loaded by a shuffle are not actually used. For example, consider a shuffle operation which leaves certain locations fixed, such as over ( x y -- x y x ). The value y is part of the shuffle, but it never moves, so it is never loaded; allocating a register for y and loading it would be wasted work.
Any node which ends up calling into the runtime, or calling another compiled word, "commits" the phantom stacks. What this does is look at the phantom stack, and write back any values stored in registers to the stack. It also shuffles stack elements if any location objects are present, taking care not to ignore locations which end up back at their original position. So in effect, a #shuffle node does not generate any code until the next time the phantom stack is "committed", and a #push only allocates some registers without writing the stack.
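To make the last three paragraphs concrete, here is a rough Python sketch of such a phantom stack -- not the actual implementation, and with register allocation and stack height bookkeeping heavily simplified: #push loads literals into registers without touching memory, #shuffle only rearranges phantom entries (introducing location placeholders where needed), and a commit finally writes everything back.

# An illustrative Python model of the phantom stack; register names, the
# instruction tuples and the commit strategy are simplified inventions.

class Location:
    """Placeholder for 'the value at data stack depth n' (0 = top)."""
    def __init__(self, depth):
        self.depth = depth
    def __repr__(self):
        return f"ds-loc({self.depth})"

class PhantomStack:
    def __init__(self):
        self.items = []                          # end of the list = top of stack
        self.free_regs = ["r0", "r1", "r2", "r3"]
        self.code = []

    def push_literal(self, value):
        # #push: load the literal straight into a register and note it here;
        # nothing is written to the real data stack yet.
        reg = self.free_regs.pop()
        self.code.append(("LOAD-LITERAL", reg, value))
        self.items.append(reg)

    def shuffle(self, new_top_depths):
        # #shuffle: a purely formal rearrangement -- no loads or stores.
        # new_top_depths gives, from bottom to top of the result, the old
        # depth (0 = old top) each slot should hold; swap is [0, 1].
        needed = max(new_top_depths) + 1
        while len(self.items) < needed:          # track untouched stack items
            self.items.insert(0, Location(len(self.items)))
        old = list(reversed(self.items[-needed:]))   # index by old depth
        self.items[-needed:] = [old[d] for d in new_top_depths]

    def commit(self):
        # Flush the phantom stack before a call into the runtime or another
        # compiled word: load location placeholders into registers, then
        # store everything back. (The real compiler is smarter about which
        # loads, stores and stack pointer adjustments are actually needed.)
        regs = []
        for item in self.items:
            if isinstance(item, Location):
                reg = self.free_regs.pop()
                self.code.append(("LOAD-STACK", reg, item.depth))
                regs.append(reg)
            else:
                regs.append(item)
        for new_depth, reg in enumerate(reversed(regs)):
            self.code.append(("STORE-STACK", reg, new_depth))
        self.items.clear()
        self.free_regs = ["r0", "r1", "r2", "r3"]

# swap with an empty phantom stack: the shuffle itself emits no code at all.
ps = PhantomStack()
ps.shuffle([0, 1])
print(ps.items)    # [ds-loc(0), ds-loc(1)] -- compare the Factor literal above
print(ps.code)     # [] -- the swap has not generated any instructions yet
ps.commit()
print(ps.code)     # only now do the loads and stores for the swap appear

Note how the swap generates nothing until the commit, which is exactly the behaviour described above.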
As I have described it so far, the optimizer only takes care of merging consecutive #push and #shuffle nodes. While this is a useful optimization, the old compiler design made it this far without any problems.
The real improvement is in the handling of intrinsics. In the old compiler, a handful of words such as set-slot, fixnum+ and so on were compiled inline as hand-tuned assembly segments. This is still being done, but the framework is far more intelligent: if an intrinsic is compiled with all of its inputs already held in registers on the phantom stack, a significant amount of stack loading and storing is avoided. For example, consider the following snippet:
dup cons? [ ... ] [ ... ] if
This is a typical type check. The dataflow optimizer inlines cons?:
dup tag 2 eq? [ ... ] [ ... ] if
Now during code generation, the dup modifies the phantom stack to note that the top two "real" stack items are actually both the "current" top stack item. When the #call node for tag is encountered, the compiler notes that the tag word has an associated compiler intrinsic, which requests one input and produces one output. The compiler first checks the phantom stack to see if the input can be obtained without generating any stack traffic; in this case, the phantom stack holds a location object, so a load instruction does need to be generated. Then the tag intrinsic is generated -- a single instruction which masks off everything but the tag bits of the address given. Next up is the literal 2. A register is allocated, the integer 2 is loaded into it, and a note of this is made on the phantom stack.
The eq? followed by a conditional is handled as one operation, and this time both inputs are on the phantom stack as registers -- the return value of tag, and the literal 2. So the eq? does not generate any stack traffic at all. Also note that in this example, dup and 2 together increase the stack height by 2, while eq? decreases it by 1 and the conditional pops another value (the return value of eq?). So no stack height change instruction is generated, since the stack is restored to its original height before any runtime calls are made. Indeed, here is the generated x86 assembly for the above example:
MOV EAX,[ESI]
MOV ECX,16 -- the fixnum 2, with a tag, is encoded as the integer 16
CMP EAX,ECX
JNE false_branch
...
JMP end
: false_branch
...
: end
...
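The key step in that walkthrough -- checking the phantom stack before generating loads for an intrinsic's inputs -- might look roughly like this, in the same sketchy Python style as above (all names are invented for illustration):

def fetch_inputs(phantom, n, free_regs, code):
    """Return registers holding the top n stack items.

    Values already in registers on the phantom stack cost nothing; only
    location placeholders force a load from the real data stack.
    """
    while len(phantom) < n:                       # track untouched stack items
        phantom.insert(0, ("ds-loc", len(phantom)))
    inputs = []
    for item in phantom[-n:]:
        if isinstance(item, tuple):               # a location placeholder
            reg = free_regs.pop()
            code.append(("LOAD-STACK", reg, item[1]))
            inputs.append(reg)
        else:                                     # already held in a register
            inputs.append(item)
    del phantom[-n:]
    return inputs

def compile_eq_branch(phantom, free_regs, code):
    # eq? followed by a conditional, handled as one operation: if both
    # inputs are registers on the phantom stack, no stack traffic results.
    x, y = fetch_inputs(phantom, 2, free_regs, code)
    code.append(("CMP", x, y))
    code.append(("JNE", "false_branch"))

# After dup tag 2, the phantom stack holds the result of tag and the
# literal 2, both in registers, so the comparison is just CMP and JNE.
phantom, code = ["r0", "r1"], []
compile_eq_branch(phantom, ["r2", "r3"], code)
print(code)    # [('CMP', 'r0', 'r1'), ('JNE', 'false_branch')]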

Now, I admit that the old linear IR optimizer would, after considerable effort and some rather ugly hand-coded heuristics, squeeze the same machine code out of the above type-check snippet. However, the code was considerably more complicated, and it did not deal with registers and stack locations in such full generality. Indeed, the old design would not manage to deal with the following:
over tag over tag eq? [ ... ] [ ... ] if
Both stack shuffle words would generate code, and the calls to tag would incur stack load/store overhead. The new design handles the above just fine; it loads the top two stack items into registers, masks off everything but the tag bits, compares them and branches.
Many other far more complicated examples generate good code with the new design, especially on PowerPC where there is a large number of registers available.

Friday, April 14, 2006

Humorous web log

A number of people have already commented on this, but I thought I'd post it here in case somebody hasn't seen it yet. The writings on the weblog are an exact match for the classical caricature of an "enterprise architect"; if you liked the Iraqi information minister, you'll love the Enterprise Architecture -- Thought Leadership web log:


  • Ruby doesn't have instance variables and therefore isn't ready for enterprise-scale reuse (note that since then, the claim about instance variables not being supported has been removed from the post)

  • Developer productivity is irrelevant, current "enterprise" technology is as good as it gets, and you cannot take Ruby seriously until the Gartner Group recommends that thought architects adopt it



The author of the Rails framework writes:
"So by Enterprise, Architect, and Enterprise Architect standards, this gent must be the top of the pop. Thus, allow me to make this perfectly clear: I would be as happy as a clam never to write a single line of software that guys like James McGovern found worthy of The Enterprise."
Ha ha ha! I feel the same way.

Thursday, April 13, 2006

Exams and compiler

Sorry for the lack of updates lately. I have been busy with exams; I have my last exam tomorrow, and after that I'm pretty much free. I will be working as a TA this summer, either doing classroom problem solving sessions or marking assignments. I'm also spending a lot of time reading papers and books, thinking of a thesis topic. I still have a lot to learn about Algebraic Topology before I'm ready to contribute to the field.

Because I've been busy with school stuff, work on Factor has been proceeding a bit slowly, but I am making progress on reworking the compiler. Previously, it would transform quotations to dataflow IR, then optimize the dataflow IR, transform it to linear IR, optimize the linear IR, and generate machine code. Well, I'm working on a new code generator which generates code directly from the dataflow IR. I'm also working on a rudimentary register allocator. The result should be both faster compilation and faster generated code. I will post some details on the new optimizer/generator when it is done, as the design is quite elegant and interesting.