In Factor 0.82 I introduced a nasty bug which I have only just now managed to fix. It would manifest as random crashes when running code compiled with SSE2 float intrinsics. Without float intrinsics, everything would work fine.
I finally spent some time with gdb, fiddled with the disassembler, and looked at memory dumps. It turns out the cause of the problem was that the inline float allocation intrinsic was emitting 32-bit instructions. So when depositing a boxed float in the heap, it would only write to the lower 32 bits of the header cell. If the high 32 bits contained garbage data, the resulting object would be corrupt. The thing is, regression testing did not catch this error, since I suppose in most cases inline float allocation was hitting memory which happened to be zeroed.