Monday, March 31, 2008

Inheritance, tuple reshaping, cleavers, and language evolution

About a year ago I implemented smart tuple reshaping, which updates instances of a class in memory when the class is redefined. Now that I'm working on inheritance, one of the most important sub-projects is updating this feature. The old implementation was already quite hairy, and I thought that supporting inheritance would make it intolerably so; however, after some refactoring I simplified it considerably.

Tuple reshaping and inheritance


The best way to demonstrate the feature is with live code examples. Note that they require Factor from git; in fact, inheritance is mostly in place -- the only thing that needs to be updated is method dispatch. Of course inheritance is useless without methods, but with all the other changes in place, this remaining task is comparatively easy.

First a datatype:
TUPLE: bytes n ;
: bytes \ bytes construct-boa ;
: megabytes 20 shift bytes ;
: gigabytes 30 shift bytes ;


Let's define a class to represent computers; presumably this is for a high-end, million-dollar enterprise systems management app:
( scratchpad ) TUPLE: computer ram cpu ;

Now a subclass for servers:
( scratchpad ) TUPLE: server < computer ;

A constructor:
( scratchpad ) C: <server> server

This is the 21st century and we have some pretty high-end gear:
64 megabytes "Pentium II" <server> server set

Let's check out our server in the inspector:
( scratchpad ) server get describe
server instance
"delegate" f
"ram" T{ bytes f 67108864 }
"cpu" "Pentium II"

Is our server a computer?
( scratchpad ) server get computer?
t

Oh crap, we forgot to define a slot for the disk. No problem, let's redefine the server type:
TUPLE: server < computer disk ;

What happened to our server?
( scratchpad ) server get describe
server instance
"delegate" f
"ram" T{ bytes f 67108864 }
"cpu" "Pentium II"
"disk" f

Notice the new slot that appeared. Let's give it a value:
( scratchpad ) server get 2 gigabytes >>disk drop

But hey, the disk slot should really go in the computer type. All computers have disks, not just servers. That's not a problem:
( scratchpad ) TUPLE: computer disk cpu ram ; TUPLE: server < computer ;

What happened to our server?
( scratchpad ) server get describe
server instance
"delegate" f
"disk" T{ bytes f 2147483648 }
"cpu" "Pentium II"
"ram" T{ bytes f 67108864 }

Now we want to add a new type for electronic equipment.
( scratchpad ) TUPLE: electronic-device voltage ;

Is our server an electronic device?
( scratchpad ) server get electronic-device?
f

Well, no, it isn't; there's no type relation here. But we can redefine our computer type to inherit from electronic-device:
TUPLE: computer < electronic-device disk cpu ram ;

Tada! The object is now an instance of a type it wasn't an instance of before:
( scratchpad ) server get electronic-device?
t

Let's take one last look at that server:
( scratchpad ) server get describe
server instance
"delegate" f
"voltage" f
"disk" T{ bytes f 2147483648 }
"cpu" "Pentium II"
"ram" T{ bytes f 67108864 }

Now we could go on and assign it a voltage, then redefine the class again, permuting the slots, and so on. Of course, people who've been following Factor for a while take this sort of thing for granted, but let's take a step back and think about what happened here: we created an instance of a class, then proceeded to redefine the class, change its position in the inheritance hierarchy, and move slots up and down the hierarchy. The instance remained consistent with the current definition of the class at each step, and Factor tried its best to preserve slot values while things were being moved around.

This is of course trivial in languages which use hashtables to represent objects but Factor uses a more efficient representation: objects really are just tuples, with slot values stored contiguously in memory. You get very fast slot access because of this, but the garbage collector and object system work together to give the illusion that these objects are a lot more malleable than they really are.

Cleave and spread combinators


The next thing I want to touch on is the fact that the tuple reshaping implementation (and indeed, the entire tuple class implementation in general) is a lot cleaner than the old code, primarily because I've been using the new "cleave" and "spread" family of combinators designed by Eduardo Cavazos. These combinators are in the kernel vocabulary now, and they're documented; enter "kernel" about in the UI listener to learn more. I won't repeat the documentation here, just give an overview.

These combinators help structure your data flow by grouping parts of computations into one-to-many, many-to-many and many-to-one designs. I've never seen a programming language make this explicit before, and it seems to be a genuinely new idea, not present in either applicative or stack-based languages. (More academically-minded folk might wish to ponder the implications of this approach when combined with linear type systems.)

The basic picture is that we have three groups of combinators:
  • The "cleave" family take m values and n quotations. They apply each quotation to all m values in turn.
  • The "spread" family take m*n values and n quotations. They apply each quotation to the corresponding group of m values in turn.
  • The "apply" family take m*n values and 1 quotation. They apply the quotation to each m-element group of values in turn.

The naming convention is simple. The suffix on the combinator name determines which family it belongs to. No suffix denotes a "cleave", a * suffix denotes a "spread", and a @ suffix denotes an "apply". If the combinator name does not start with a digit, m is 1. Otherwise, the first digit is the value of m. Finally, the rest of the name, except for the prefix and suffix, determines n. This name is either bi, for n=2 or tri, for n=3.

Here is the full list:
  • bi: 1 value, 2 quotations, quotation has arity 1
  • tri: 1 value, 3 quotations, quotation has arity 1
  • 2bi: 2 values, 2 quotations, quotation has arity 2
  • 2tri: 2 values, 3 quotations, quotation has arity 2
  • 3bi: 3 values, 2 quotations, quotation has arity 3
  • 3tri: 3 values, 3 quotations, quotation has arity 3
  • bi*: 2 values, 2 quotations, quotation has arity 1
  • tri*: 3 values, 3 quotations, quotation has arity 1
  • 2bi*: 4 values, 2 quotations, quotation has arity 2
  • bi@: 2 values, 1 quotation, quotation has arity 1
  • tri@: 3 values, 1 quotation, quotation has arity 1
  • 2bi@: 4 values, 1 quotation, quotation has arity 2

Note that 2tri*, 3bi* and 3tri* do not exist: if they did, they would take 6, 6 and 9 input values, respectively, and if your code needs that many values on the stack, then you should use sequences, assocs, tuples, or variables instead.

Finally, there are generalized versions of bi and bi*: cleave and spread take a sequence of quotations.
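To make the list above concrete, here is a quick listener sketch; the comment on each line shows the values it leaves on the stack:

```factor
5 [ 1 + ] [ 2 * ] bi      ! cleave: both quotations see 5, leaving 6 10
5 7 [ 1 + ] [ 2 * ] bi*   ! spread: one quotation per value, leaving 6 14
5 7 [ 1 + ] bi@           ! apply: same quotation for each value, leaving 6 8
```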

Because the naming convention is simple and consistent, the names of these combinators are easier to remember than the names of the stack shuffles. More interestingly, I've found that after refactoring code to use the cleave combinators, the only shuffle words I really need are dup, drop, and very rarely, a swap.

For example, I recently converted our serialization library to use these combinators. The result is extremely straightforward: there is almost no stack manipulation, except for the occasional dup and drop, but it doesn't use named locals either. Thanks to the cleave combinators, all values just happen to be in the right place at the right time, and everything composes together very nicely, like lego bricks.

Soapbox


Factor has changed greatly since its inception. Recent language changes such as new slots and the addition of the cleave combinators, as well as changes I'm working on right now, such as inheritance and multimethods, should be the last major changes before Factor 1.0.

Last year, Daniel Ehrenberg published a roadmap for Factor 1.0. It seems that we're on track at this stage. After the language features are done, I'm going to review the entire core library and incorporate new idioms and features where possible. To put this in context: the Java implementation is far from exemplary Java code, the SBCL source code is far from exemplary Common Lisp code, and Squeak's core is far from exemplary Smalltalk code. I don't want Factor to follow this trend and release 1.0 with a huge, bloated, incomprehensible implementation: I want the code to be simple, clean, clear and consistent. With the new language features and the new idioms I've been learning along the way, this goal should be achievable; in fact, the core is for the most part pretty well-designed, except for some dark corners here and there.

Because Factor is a new language, implementing Factor is more work than implementing, say, Scheme or Common Lisp: when you're working against a fixed spec, for a language that has had a wealth of programs written in it, representing decades of experience, there isn't so much need for experimentation, trying different approaches. Factor is different. Factor is a new language, and the job of implementing Factor isn't a race to a certain finish-line that we can see from the start, but rather a mind-expanding journey, full of learning opportunities, joy, frustrations, obstacles and wrong turns.

On the flip-side, I can change the language to suit the implementation; if a feature is ugly to implement and makes the system more fragile, then it is better not to include it, and I have that freedom.

Once inheritance is done, I'm going to return to working on the web framework, and hopefully you will see more practical blog posts from me again, instead of philosophical junk!

Thursday, March 27, 2008

Working on inheritance

Anyone who has studied Factor's object system in depth will already be familiar with one form of inheritance it provides: union classes and the closely related mixin classes.

So why am I working on another form of inheritance? How is it implemented, and what does it buy me? And on a related note, isn't inheritance evil because some famous Java guys said so on their blog? Well, I'll try to answer these questions in this post.

First, a recap of existing facilities. A union class has members; methods defined on the union class dispatch on instances of all members of the class. Mixin classes are like unions except they are extensible; new member classes can be added after the fact. I've blogged about Factor mixins previously.

Declaring your class an instance of a mixin is a form of behavior inheritance; in C++, you might inherit an abstract class which only contains methods. You can test if an object is an instance of a mixin class, or define methods on the mixin class which then apply to instances of all members of the mixin.

This is very nice, and mixins are very powerful. We use them for protocols -- declaring that your class is a sequence, for example, and also for sharing behavior; if you declare your class is an immutable sequence, you get default methods for set-nth and set-length which throw "sequence is immutable" exceptions. Some mixins are a combination of both; the growable sequence mixin is used to share code between vectors, bit vectors, byte vectors, float vectors and string buffers. It expects instances to provide certain generic words, and using those words it implements the entire sequence protocol.
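As a quick sketch of what this looks like (the class and word names below are made up for illustration; they are not from the core):

```factor
GENERIC: describe-device ( device -- )

! A mixin with a default method; every member class gets it for free
MIXIN: portable
M: portable describe-device drop "fits in a bag" print ;

TUPLE: laptop ;
INSTANCE: laptop portable    ! added to the mixin after the fact
```

Defining a method on portable is behavior inheritance: laptop, and any class later declared an instance of the mixin, picks up describe-device without further work.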

However, mixins alone are not enough. There are cases where what you really want to do is inherit from another concrete class; that is, you want to inherit some state (tuple slots) as well as behavior. So far, this niche has been occupied by delegation. When you call a generic word on an object, if the object's class doesn't have a method for that generic word, the method call is forwarded to the delegate. This works well in many cases but not only is it inefficient (with a long delegation chain, you basically end up with a linear method search) but it suffers from the slicing problem: if an object A delegates to B, then methods defined on B no longer receive the instance of A on the stack, they receive the instance of B; and if this instance is stored somewhere in a collection, the "A-ness" of the object is "sliced off". Delegation is also impossible to analyze statically because an object's delegate can be changed at any time, dynamically. Finally, delegation wastes memory. Each tuple has a delegate slot that is empty in the great majority of cases, and if you have several objects composed in one long delegation chain that represent one domain entity, you end up wasting the space used by each object's header (and delegation slot).

Delegation is still useful in many cases. Once the core delegation feature is removed, you'll still be able to write code which manually forwards method calls to slot values; however, in most cases this isn't necessary, and it is also discouraged because it leads to ugly, Java-style code. A real replacement can be found in extra/delegate, which is Daniel Ehrenberg's delegation library. It is more flexible than core delegation because you can specify which slot to forward methods to on a per-generic word (or per-protocol) basis; there is no distinguished "delegate" slot. It also offers a solution to the slicing problem where one tuple can "mimic" another. Dan has a blog entry describing his delegation system in more detail.

Most people in the Factor community nowadays take it for granted that a powerful delegation mechanism can be implemented as a library in Factor. What's more interesting is that while delegation can be done as a library, inheritance is something that really needs core support. This shows that it is more fundamental.

I should emphasize that tuple inheritance is a work in progress: I will post another blog entry when it makes its way into the git repository and is ready for prime time.

So how does it look? The syntax is pretty simple:
TUPLE: vehicle occupants ;
TUPLE: car < vehicle engine ;
TUPLE: aeroplane < vehicle cargo ;
TUPLE: checked-luggage owner ;

If the tuple class name is followed by <, the next token is a class name to inherit from. Tuples without an explicitly-given superclass inherit from the tuple class.

To be consistent, I changed the syntax for predicate classes:
PREDICATE: even < integer 2 mod 0 = ;

The old syntax looked like this, but it didn't gel with the new tuple inheritance syntax:
PREDICATE: integer even 2 mod 0 = ;

Unlike with tuples, predicate class definitions must specify a superclass; there is no implicit default.

The new slot accessors I've been talking about work nicely with inheritance:
: sort-cargo ( aeroplane -- seq )
    [ occupants>> ] [ cargo>> ] bi
    [ [ owner>> = ] with subset ] curry map ;

Before you would have to qualify each slot with the class you were reading it from, so you had to know that occupants was in vehicle but cargo was in aeroplane:
: sort-cargo ( aeroplane -- seq )
    [ vehicle-occupants ] [ aeroplane-cargo ] bi
    [ [ checked-luggage-owner = ] with subset ] curry map ;

While in some sense the longer slot accessors made code more explicit, peppering them all over a source file just made things look visually cluttered and made code less generic than it could be: with new slots, I can write something that works on a ship or an aeroplane, provided both have a cargo slot; they don't even need to inherit from a common superclass.
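For instance (ship and cargo-count here are hypothetical, just for illustration), any class with a cargo slot works:

```factor
! Hypothetical: ship shares only the vehicle superclass with aeroplane
TUPLE: ship < vehicle cargo ;

! Works on aeroplanes, ships, or anything else with a cargo slot
: cargo-count ( obj -- n ) cargo>> length ;
```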

Before, testing whether a tuple was an instance of a class was very straightforward: you read the tuple header and compare it with a class instance. Now it is not so easy, because an instance of aeroplane is still a vehicle, even though its header is different. I used a neat trick to implement class tests in constant time. I did not invent this trick; I read about it elsewhere (and forget where), and I probably would not have thought of it myself, but it is really quite straightforward. It only works for single inheritance, though.

Each class object stores a sequence containing all of its superclasses: the first element is the tuple class, and the last element is the class itself. To test whether a class A is a subclass of a class B, first check that A's superclass chain is at least as long as B's. If not, A cannot possibly be a subclass of B, since if it were, its chain would be at least as long. So suppose the length check succeeds, and say B's superclass chain has length n; then the nth element of B's chain is B itself. If A is a subclass of B, then B's chain must be a prefix of A's, which means the nth element of A's chain must also equal B. So you perform two checks: compare the superclass chain lengths, then check whether the element at a constant index equals B. If both tests succeed, A is a subclass of B. To extend this to a predicate testing whether an object X is an instance of B, you simply get X's class first, then check whether that class is a subclass of B. Very nice trick: if anyone knows of a way to generalize it to multiple inheritance, let me know.
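The check can be sketched in a few lines of Factor; superclass-chain is a hypothetical accessor for the sequence described above, not an actual core word:

```factor
! Hypothetical accessor: the class's superclass chain,
! root (tuple) first, the class itself last
: superclass-chain ( class -- seq ) "superclasses" word-prop ;

: subclass-of? ( A B -- ? )
    [ superclass-chain ] [ dup superclass-chain length ] bi*
    ! stack: chain-of-A  B  n
    pick length over >=
    [ 1- rot nth = ] [ 3drop f ] if ;
```

Both branches run in constant time: a length comparison, one indexed lookup, and an identity test.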

Finally, there are some programmers who like to repeat this mantra that "implementation inheritance is evil"; they write blog posts with a handful of contrived examples, then a statement along the lines of "all future languages must not have inheritance", finishing off with a bang. These people have limited experience with programming languages, and their arguments boil down to three core forms, none of which is sound:
  • Problems come up when inheritance is used to model "has-a" relationships instead of "is-a" relationships. Code becomes less clear and less reusable than it needs to be in this case. But this is not a fundamental flaw of inheritance; it simply reflects a lack of thinking on the part of the designer. The dual problem, of course, is abusing delegation to model "is-a" relationships: if you abolish inheritance, you end up writing lots of boilerplate methods which simply forward calls to other objects, and you get the related issues with performance, space usage, and the "slicing problem". This is what happened in Factor; I was using delegation to model "is-a" as well as "has-a", and in the former case it was simply the wrong thing to do.
  • The other problem cited is using inheritance purely as a means to share code. This is indeed an abuse of inheritance; just because two types support the same operation doesn't mean there is necessarily a formal relationship between them. However, this only comes up in statically typed languages with type systems that are very much behind the state of the art; if you have a static type system with parametric polymorphism, structural subtyping, and so on, writing generic, statically type-safe code is easy, and in a dynamically-typed language such as Factor it is a complete non-issue. As long as you have a set of types which have methods on a certain generic word, or slots with the same name, your code doesn't ever have to care about what the actual type of the object is.
  • There are issues specific to the implementation of inheritance in a single language: memory management in C++, the interactions between inheritance and constructors in Java, and so on. These concerns are irrelevant to me, except as a case study of the mistakes of those that have come before.

Incidentally, the same people like to rail against multiple dispatch, macros, singletons, dynamic scope, etc. I'm trying to avoid dogma here and isolate the essence of each language feature, implementing it without unnecessary limitations and avoiding odd quirks, staying out of the way until you need it (if at all). Factor is a great vehicle for this because it starts with a simple but extensible core consisting of highly orthogonal primitives; new abstractions can be built up effortlessly. My goal is to have the Factor programmer be free to choose the best abstractions for expressing their problem without needing a 500-page book of "gotchas" and "best practices" on hand, and without having to memorize awkward "design pattern" workarounds resulting from short-sightedness on my part. If I ever add a single language feature which requires that one consult a huge FAQ to understand some arcane interaction, then it means that I have basically failed.

Factor is going to give you mixin inheritance, tuple inheritance and fine-grained automatic delegation as an additional library; all will be documented, and the library will give good examples of idiomatic usage of all three to help beginners learn when each technique is appropriate. The limited and poorly-designed core delegation feature is going away. Exciting times ahead!

Thursday, March 20, 2008

New slot accessors merged; future direction for Factor

The new slot accessors that I've talked about here and here have been merged into the core.

Old-style tuple slot accessors are still defined for all tuples. I will be porting code over the next few months to use the new accessors. All new code should use the new accessors. The old accessors will be removed at the end of May.

Next up in the language change department:
  • Implementation inheritance for tuple classes.
  • Removing delegation from the core and replacing it by Daniel Ehrenberg's extra/delegation library.
  • New tuple syntax with named slots.
  • Moving Doug Coleman's singletons to the core.
  • Moving extra/multi-methods into the core and removing the current generic word system; I have blogged about the multi-methods library in the past.
  • Declarative sequence construction using Eduardo's extra/bake library should be merged into the core. It can replace a whole slew of utility words and make Factor easier to learn.

Each one of these changes will be accompanied by refactorings of Factor's codebase to use the new features where appropriate.

I will spend the next few days dealing with the fallout from the new-slots merge; I also have a few minor features I want to get out of the way. Once that is done I will return to working on the web framework while continuing to ponder the implementation details of inheritance in my head. Once those are worked out I will dive back into the core.

These language changes are coming rather late in the evolution of Factor, however their utility, along with the recently merged new-slots, only became apparent after we wrote a lot of code, and so the justification for incorporating these features and removing the ones they replace is based on real experience and not just theoretical concerns.

Once these features are done I will be happy with the state of Factor's language and my focus will shift to performance, documentation, and peripheral libraries. Then, Factor 1.0 will be released and heads will roll.

Some improvements to locals: let* and methods with named parameters

The other day, Eduardo Cavazos remarked that working on Factor's locals library must be bittersweet. On one hand, we have this awesome demonstration of Factor's unmatched meta-programming capability: efficient, lexically scoped closures, integrated with Factor's syntax, implemented as a library of only 406 lines of code. On the other hand, very few libraries use it: I developed locals to provide an "escape hatch" from the world of stack code, but it turns out the only cases where it is necessary are complex math formulas and certain hairy algorithms dealing with two-dimensional arrays and such. Maybe one in a hundred words stand to benefit from locals, and everything else can be expressed in a clean way using stack combinators. If locals were used more frequently, it would be great on one hand because this awesome library would be seeing some real usage. But on the other hand, if you had to resort to locals all the time, it would point to real weaknesses in the stack model.

Having said that, I still enjoy improving the locals library. I added two features recently that Doug Coleman requested.

The first feature is a [let* word. It is exactly like let* in Lisp and Scheme: you define a set of bindings and unlike [let, each binding can see the previous bindings.

Here is an example from the calendar library:
:: julian-day-number>date ( n -- year month day )
    #! Inverse of julian-day-number
    [let* | a [ n 32044 + ]
            b [ 4 a * 3 + 146097 /i ]
            c [ a 146097 b * 4 /i - ]
            d [ 4 c * 3 + 1461 /i ]
            e [ c 1461 d * 4 /i - ]
            m [ 5 e * 2 + 153 /i ] |
        100 b * d + 4800 -
        m 10 /i + m 3 +
        12 m 10 /i * -
        e 153 m * 2 + 5 /i - 1+
    ] ;

This is a hairy math formula for computing the year/month/day from the number of days since an epoch. Trying to do this with nested [lets would quickly get out of hand because each binding refers to the previous one. The version using dynamic variables doesn't suffer from deep nesting, but it is inefficient and harder to read because everything is bunched up with no clear demarcation of the computations:
: julian-day-number>date ( n -- year month day )
    #! Inverse of julian-day-number
    [
        32044 + a set
        4 a get * 3 + 146097 /i b set
        a get 146097 b get * 4 /i - c set
        4 c get * 3 + 1461 /i d set
        c get 1461 d get * 4 /i - e set
        5 e get * 2 + 153 /i m set
        100 b get * d get + 4800 -
        m get 10 /i + m get 3 +
        12 m get 10 /i * -
        e get 153 m get * 2 + 5 /i - 1+
    ] with-scope ;

The nice thing about [let* is that after some refactoring, I was able to implement it with very little effort. Basically the only difference between it and [let is the parsing; the closure conversion stage is identical for the two, both expand into nested lambdas.

The next feature is method definitions with locals. You used to be able to define new words with named parameters, as in the above example, but there was no corresponding way to define a method with named parameters. I added a M:: word which does this. So now locals have named-parameter analogues of :, MACRO: and M:. This represents a combinatorial explosion of sorts: for every defining word you want to have named parameters for, you need to define a new version of that word. To alleviate this somewhat, I've redesigned some code in the parser and macro library so that everything is factored in a nice, logical way:
: (:) define ; parsing
: :: (::) define ; parsing

: MACRO: (:) define-macro ; parsing
: MACRO:: (::) define-macro ; parsing

: M: (M:) define ; parsing
: M:: (M::) define ; parsing

The parenthetical helper words are defined like so:
: (:) CREATE-WORD parse-definition ;
: (::) CREATE-WORD parse-locals-definition ;

: (M:) CREATE-METHOD parse-definition ;
: (M::) CREATE-METHOD parse-locals-definition ;

This is what I like to call "perfect factoring".

Locals are meta-programmable, so they would come in handy if someone decided to implement an efficient Scheme-to-Factor compiler; it seems like most of the hard work is already done. But who the hell wants to code Scheme anyway?

Wednesday, March 12, 2008

The new HTTP server, part 1

For the last couple of weeks I've been working on a redesign of Factor's HTTP server.

The old HTTP server


The old HTTP server dates back to 2003, when Factor was written in Java. At that time, the only Factor code I had written was about 5,000 lines for the core language, and some 15,000 lines of scripting code for my abandoned game project. The scripting code mostly consisted of configuration: creating hashtables filled with values, setting up various objects and calling into Java code to add them to global registries. I started writing the HTTP server because I wanted a simple example of a Factor application that I could demo independently of the game code. So it is fair to say that the HTTP server was the first "real" application written in Factor.

Four years later, the HTTP server is still around. It has evolved incrementally over the years, and survived some major changes in the Factor language, however it has never undergone a major overhaul or even been redesigned to use some abstractions that were introduced after I started working on the HTTP server.

For example, the HTTP server did not define a single class or generic word; when I started writing it, Factor didn't have an object system, so I just used hashtables to simulate objects, with quotation values to simulate methods (much like Lua). HTTP headers and request parameters were stored in dynamically scoped variables named by strings. This somewhat ad-hoc design survived in the HTTP server to this day, and it made the server harder to learn and extend than it had to be.

Another source of ad-hoc-ness was that while the HTTP request was parsed pretty well, writing the response was entirely up to the web app; it had to write out the headers directly. This would have made cookies and similar features harder to implement.

The old HTTP server served us well. It has powered and still powers several web sites which receive a lot of hits, and the API was good enough for simple applications. However, since I'm going to build commercial software with Factor, I need something higher-level and more robust, and I decided to redo the HTTP server from scratch, using the latest Factor abstractions and idioms, and incorporating many of the things I learned about web development over the last four years.

Progress so far


So, here is a quick rundown of the features I've implemented:
  • Component-based form framework with validation support
  • URL and cookie sessions
  • Database support
  • CRUD scaffolding
  • Authentication with login and registration page
  • User account info can be stored in memory or a database
  • Persistence of sessions in a database
  • Continuation-based page flow (based on Chris Double's code for the old server)
  • Logging with log rotation and nightly log analysis e-mail - this is in fact the new logging framework and I've blogged about it before

Still pending:
  • Documentation
  • Updating pastebin, planet, help web apps in extra/ to use it
  • SSL support using OpenSSL on Unix and native SSL APIs on Windows
  • Library for adding threaded discussion comments to any site
  • Better templating with support to make it easier to specify a common theme for the entire site

I will be blogging about the new features over the next few weeks. In this entry, I will talk about the most basic layer: HTTP request and response handling. This layer forms the foundation of any web application; even applications developed with the highest-level abstractions, such as CRUD scaffolding, have to call into the HTTP layer at some point. I'd like to emphasize that the new HTTP server is already quite complete and moving very quickly; it is certainly not limited to the functionality I am describing in this post. I wanted to focus on the basics first because I feel the fundamentals here are very well designed, and they serve as a foundation for all the other, more advanced functionality I have implemented.

HTTP requests and responses


The central concept in the new server is that HTTP requests and responses are now first-class types. The extra/http library implements these types and operations for working on them. An HTTP request can be parsed from an input stream, or written to an output stream. Similarly, an HTTP response can be parsed from an input stream, or written to an output stream.

Both the HTTP server and HTTP client use this library, and conceptually what they do can be explained in very simple terms:
The HTTP server reads a request from the client, processes it, and writes a response.

The HTTP client writes a request to the server, then reads a response from the server, and returns it to the user.

This is a very nice simplification and it allows a lot of code to be shared between the client and the server.

So in the description of the server above, it says that it "processes" the request to produce a response. The code that does this is known as a responder. There is only one responder per HTTP server. It receives requests and it outputs responses.

A responder is an object implementing a method on a generic word:
GENERIC: call-responder ( path responder -- response )

When this word is called, the request is stored in a request variable.

Here is a simple responder:
TUPLE: simple-responder ;

C: <simple-responder> simple-responder

M: simple-responder call-responder
2drop
<response>
200 >>code
"Document follows" >>message
"text/plain" set-content-type
"Hello world" >>body ;

Using a higher-level feature such as HTTP actions, it is possible to avoid much of this boilerplate, but let's just stick to the lowest layer for now.

If we manually call write-response on the result of the above construction, we get what we expect:
HTTP/1.1 200 Document follows
connection: close
content-type: text/plain
date: Thu, 13 Mar 2008 01:30:43 GMT

This is what the HTTP server would send given a HEAD request. Given a GET request, it calls write-response followed by write-response-body; the latter looks at the body slot. If the body is a string, it is written to the client directly; if it is a quotation, it is called, and the quotation is free to write any output it chooses.
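To illustrate the quotation case, here is a sketch of a responder whose body writes its output dynamically. It reuses only the words shown in this post, plus the standard stream output word write; the quot-responder name is mine, not part of the library:

TUPLE: quot-responder ;

C: <quot-responder> quot-responder

M: quot-responder call-responder
2drop
<response>
200 >>code
"Document follows" >>message
"text/plain" set-content-type
[ "Hello " write "dynamic world" write ] >>body ;

Given a GET request, the server writes the headers and then calls the quotation, which prints to the client's stream; given a HEAD request, only the headers are sent.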

We can create an instance of this responder, store it in the main-responder variable, and start the HTTP server:
<simple-responder> main-responder set-global
8080 httpd

Now, visiting http://localhost:8080/ produces a "Hello world" message.

This is how you start an HTTP server that serves a single web application. A more useful example is a server which simply serves static content:
"/var/www/" <static> main-responder set
8080 httpd

However, what if you wanted to have several web apps per server? Or a server that serves static content as well as web apps? Or what if your site conceptually ran a single application, but you wanted to structure it from multiple responders? From the server design described above, it is not clear how this is possible. In fact, this problem is solved in a very elegant way: a responder can call other responders, so the fact that the HTTP server only ever has one main responder is not a limitation at all.

Dispatchers


A dispatcher is a responder which looks at the first component of a path name, chops it off, and calls one of several possible child responders depending on that component. For example, we can create a dispatcher which has our hello world app, together with static content, as children:
<dispatcher>
<hello-world> "hello" add-responder
"/var/www/" <static> "data" add-responder
main-responder set
8080 httpd

Now, if we visit http://localhost:8080/hello/, we get the "Hello world" message, and if we visit http://localhost:8080/data/ we get our static content. For example, if we have a file /var/www/widgets.html on our file system, then visiting http://localhost:8080/data/widgets.html will serve that file.

What if we visit http://localhost:8080/? We get a 404 not found response, because the dispatcher doesn't know what to do in this case. However, we can give it a default responder; for example, let's change the above code so that the static content is the default:
<dispatcher>
<hello-world> "hello" add-responder
"/var/www/" <static> >>default
main-responder set
8080 httpd

Now, visiting http://localhost:8080/hello, we get our simple responder, and if we visit http://localhost:8080/widgets.html or any other path, we get the static data. Effectively, we've "mounted" the hello world web app in the hello/ directory.
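Since a responder can call other responders, dispatchers themselves nest. Here is a sketch, using only the words from this post, that mounts the hello world app under a second level of dispatch; assuming the semantics described above, the app would then answer at http://localhost:8080/apps/hello/ while every other path falls through to the static content:

<dispatcher>
<dispatcher>
<hello-world> "hello" add-responder
"apps" add-responder
"/var/www/" <static> >>default
main-responder set
8080 httpd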

More about static content, and CGI


In the old HTTP server, the file responder for static content was hard-coded to allow execution of .fhtml files, which were templates mixing HTML content and Factor code. This presented a potential security problem: if you allow users to upload arbitrary data which you then serve out, you probably don't want to run uploaded .fhtml scripts.

On the flip side, sometimes you want to serve CGI scripts. CGI is crappy, inefficient, archaic and error-prone, but sometimes you want to use it anyway; for example, factorcode.org runs a gitweb.cgi instance. The old HTTP server had a CGI responder which shared a lot of code with the file responder. While there was no outright code duplication, this was a bit ugly.

What I decided to do this time round is make the file responder more flexible. It no longer hard-codes any behavior for .fhtml files. Instead, each file-responder instance has a hashtable mapping MIME types to quotations implementing special behaviors. These special behaviors can include running .fhtml scripts or running CGI scripts. You could even add PHP support by dynamically loading the PHP runtime and calling it via FFI if you wanted to.

Here is an example of this:
: enable-fhtml ( responder -- responder )
[ serve-template ]
"application/x-factor-server-page"
pick special>> set-at ;

The enable-fhtml word takes a file responder as input, and stores a quotation in its hashtable of special hooks. The stack effect is designed to return the file responder on the stack. This allows it to be used as follows:
<dispatcher>
"/var/www/" <static>
enable-fhtml
"data" add-responder
<hello-world> >>default
main-responder set

CGI is implemented in a similar fashion; there is an enable-cgi word.
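I have not shown enable-cgi's definition, but presumably it mirrors enable-fhtml. In this sketch, the serve-cgi word and the MIME type string are my guesses, not the actual names from the library:

: enable-cgi ( responder -- responder )
[ serve-cgi ]
"application/x-cgi-script"
pick special>> set-at ;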

Here is a more elaborate example:
<dispatcher>
"/var/www/widgets.com/images" <static> "images" add-responder
"/var/www/widgets.com/cgi" <static>
enable-cgi
"cgi-bin" add-responder
"/var/www/widgets.com/content" <static>
enable-fhtml
>>default
main-responder set

This might be suitable for a simple site where the content is mostly static and there are only a handful of dynamic templates which do not do anything too elaborate (if they did, you'd code them as responders instead, so that they'd always be loaded, and because responders are more flexible in the kinds of responses they can give).

Paths hanging off http://widgets.com/images are served from /var/www/widgets.com/images and no templates or CGI scripts are permitted there. Paths hanging off http://widgets.com/cgi-bin are served from /var/www/widgets.com/cgi/, and CGI script execution is supported. All other paths are served from /var/www/widgets.com/content and templates are allowed.

What's next?


  • In the second installment, I will talk about virtual hosting, session management, and cookies, and give some more complex examples of responders. Virtual hosting is a work in progress; the design is a lot more flexible than it was before. Cookies and session management are pretty much done, but this post is already getting rather long, so I will describe them next time.
  • In the third installment, I will talk about web actions and form validation.
  • In the fourth installment, I will discuss database access and CRUD actions.

Tuesday, March 11, 2008

Faster serialization

I figured out a nice trick that sped up Factor's serialization library. Because serialization is required to preserve circular and shared structure, it would previously store all objects serialized so far in a sequence. When a new object was encountered, it would scan the sequence for an object eq? to the one being serialized. This made serialization quadratic in the number of objects being serialized, which is unacceptable. While Factor's hashtables always check for equality using = and not eq?, I was able to simulate eq? hashtables by defining a new data type.

The new type is a tuple which holds a reference to a single object:
TUPLE: id obj ;

C: <id> id

The hashcode of an id is the hashcode of the underlying object:
M: id hashcode* obj>> hashcode* ;

The equality test, however, checks if the underlying objects are eq?:
M: id equal? over id? [ [ obj>> ] 2apply eq? ] [ 2drop f ] if ;

That is, two distinct instances of id are equal if their underlying objects are identical (got it?).
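A quick listener session illustrates the semantics. Assuming clone produces a distinct but equal copy of a string, two ids are equal exactly when they wrap the very same object:

( scratchpad ) "widgets" dup [ <id> ] 2apply = .
t
( scratchpad ) "widgets" dup clone [ <id> ] 2apply = .
f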

Now, we just have to replace two words to use a hashtable with id keys instead of a linear search of a sequence:
: add-object ( obj -- )
#! Add an object to the sequence of already serialized
#! objects.
serialized get [ assoc-size swap ] keep set-at ;

: object-id ( obj -- id )
#! Return the id of an already serialized object
serialized get at ;

True id hashtables will be added at some point in the future, but for now this will suffice.

Saturday, March 08, 2008

Forth Foundation Library

It seems Factor is not the only stack-based language trying to build up a rich library. The Forth Foundation Library includes a regexp engine, various cryptographic hashes, an XML parser, binary trees, and so on. Looks pretty nice.

Friday, March 07, 2008

New-style slots in io.launcher and smtp

Yesterday I refactored the io.launcher and smtp libraries to use Eduardo's new-slots and the results were quite nice.

Prior to this, words such as run-process would take either a string, an array of strings, or an assoc. The latter was used to specify more advanced launch parameters, such as I/O redirection. Here is a typical example of the old way:
{
{ +command+ "make clean install" }
{ +stdout+ "../build-log" }
{ +stderr+ +stdout+ }
} run-process

This looks nice and neat. However, if some of the values are computed, then you cannot use a literal assoc; you have to construct one. Suppose we want to add a 5 minute timeout. Here is one way:
[
"make clean install" +command+ set
"../build-log" +stdout+ set
+stdout+ +stderr+ set
5 minutes +timeout+ set
] { } make run-process

Having to change all of the code just because one parameter becomes computed is not so nice. The other alternative is to use something like bake:
{
{ +command+ "make clean install" }
{ +stdout+ "../build-log" }
{ +stderr+ +stdout+ }
{ +timeout+ %[ 5 minutes ] }
} bake run-process

This is better, but now you have bake and %[, which don't pertain to your problem domain, plus some unnecessary Lisp-style nesting. And what if you wanted to build up a descriptor in one word, but then have another word which amended it further? You'd have to bake two assocs and combine them with an assoc word such as union. Bake is nice in other contexts, but it is the wrong solution here.

Now suppose we used a tuple of process launch parameters:
<process>
"make clean install" over set-process-command
"../build-log" over set-process-stdout
+stdout+ over set-process-stderr
5 minutes over set-process-timeout
run-process

This is more composable; you can write a word which takes a descriptor, and sets some slots in it. It is also relatively easy to have both literal launch parameters and computed ones, and adding a computed parameter to an all-literal descriptor doesn't require introducing make-assoc or bake. However, now we've introduced some stack shuffles and the accessor names are kind of long.

Enter new-slots:
<process>
"make clean install" >>command
"../build-log" >>stdout
+stdout+ >>stderr
5 minutes >>timeout
run-process

This is about as ideal as it gets. No unnecessary nesting, no stack shuffling, and the only words that appear directly pertain to your problem domain.

What about the case where a launch parameter comes off the stack? Suppose the command name is an input and everything else is computed on the spot. Here is the old way:
[
+command+ set
"../build-log" +stdout+ set
+stdout+ +stderr+ set
5 minutes +timeout+ set
] { } make run-process

The new way introduces a swap, but there is still a net reduction in complexity:
<process>
swap >>command
"../build-log" >>stdout
+stdout+ >>stderr
5 minutes >>timeout

Now suppose we have a make target name on the stack and we want to run make with that target. Here is the old way:
[
"make" swap 2array +command+ set
"../build-log" +stdout+ set
+stdout+ +stderr+ set
5 minutes +timeout+ set
] { } make run-process

With new-slots:
<process>
"make" rot 2array >>command
"../build-log" >>stdout
+stdout+ >>stderr
5 minutes >>timeout

With bake and new-slots:
<process>
swap { "make" , } bake >>command
"../build-log" >>stdout
+stdout+ >>stderr
5 minutes >>timeout

I find the latter preferable to the first two.

The situation with smtp is a little different. Previously, I had a send-simple-message word which took a body, subject, from and to on the stack:
"Blahblah" "Hi" { "alice@aol.com" "joe@aol.com" } "bob@aol.com" send-simple-message

Clearly, this doesn't scale as additional parameters are added: header lines, attachments, etc. The new code uses new-slots and it is a lot more aesthetically pleasing:
<email>
"Blahblah" >>body
"Hi" >>subject
{ "alice@aol.com" "joe@aol.com" } >>to
"bob@aol.com" >>from
send

Not only does it read better but also it is easy to add additional fields such as CC, BCC, attachments, S/MIME support, etc. (Well, easy enough to add the fields; actually implementing some of these is another matter altogether...)

New-slots is going into the core very soon now. Bake will need to be worked on some more; then it will go in the core and replace usages of make there. Make will go into a library in extra and be phased out along with old slots.

Saturday, March 01, 2008

New convention for unit tests

Until now, unit test files would define words in the temporary vocabulary. This made it problematic to use those words after the tests ran: sometimes your tests failed, and to debug the problem you wanted to run some of those words interactively, but the unit test code would forget everything in the temporary vocabulary. This necessitated an annoying workflow where the test file had to be loaded with run-file first just to get access to the words. I finally got fed up with this and changed the convention: test files now define words in vocab.tests, not temporary, and these vocabularies are not forgotten after the tests run. I've updated all tests in the repository; new code by contributors should no longer use the temporary vocab.
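For example, a test file for a hypothetical widgets vocabulary would now begin with an IN: line naming widgets.tests, and its helper words stay around for interactive debugging after the tests run. (The exact unit-test calling convention below is from memory, so treat it as a sketch.)

USING: kernel math tools.test ;
IN: widgets.tests

: widget-area ( w h -- n ) * ;

[ 12 ] [ 3 4 widget-area ] unit-test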