The old HTTP server
The old HTTP server dates back to 2003, when Factor was written in Java. At that time, the only Factor code I had written was about 5,000 lines for the core language, and some 15,000 lines of scripting code for my abandoned game project. The scripting code mostly consisted of configuration: creating hashtables filled with values, setting up various objects and calling into Java code to add them to global registries. I started writing the HTTP server because I wanted a simple example of a Factor application that I could demo independently of the game code. So it is fair to say that the HTTP server was the first "real" application written in Factor.
Four years later, the HTTP server is still around. It has evolved incrementally over the years and survived some major changes in the Factor language; however, it has never undergone a major overhaul, and it was never redesigned to use abstractions that were introduced after I started working on it.
For example, the HTTP server did not define a single class or generic word. When I started writing it, Factor didn't have an object system, so I used hashtables to simulate objects, with quotation values to simulate methods (much like Lua). HTTP headers and request parameters were stored in dynamically scoped variables named by strings. This somewhat ad-hoc design survived in the HTTP server to this day, and it made the server harder to learn and extend than it had to be.
Another source of ad-hoc-ness was that while the HTTP request was parsed pretty well, writing the response was entirely up to the web app, which had to write out the headers directly. This made cookies and similar features harder to implement than they needed to be.
The old HTTP server served us well. It has powered, and still powers, several web sites that receive a lot of hits, and the API was good enough for simple applications. However, since I'm going to build commercial software with Factor, I need something higher-level and more robust, so I decided to redo the HTTP server from scratch, using the latest Factor abstractions and idioms, and incorporating many of the things I've learned about web development over the last four years.
Progress so far
So, here is a quick rundown of the features I've implemented:
- Component-based form framework with validation support
- URL and cookie sessions
- Database support
- CRUD scaffolding
- Authentication with login and registration page
- User account info can be stored in memory or a database
- Persistence of sessions in a database
- Continuation-based page flow (based on Chris Double's code for the old server)
- Logging with log rotation and nightly log analysis e-mails - this uses the new logging framework I've blogged about before
- Updated the pastebin, planet, and help web apps in extra/ to use the new server
- SSL support using OpenSSL on Unix and native SSL APIs on Windows
- Library for adding threaded discussion comments to any site
- Better templating, with support for specifying a common theme for the entire site
I will be blogging about the new features over the next few weeks. In this entry, I will talk about the most basic layer: HTTP request and response handling. This layer forms the foundation of any web application; even applications developed with the highest-level abstractions, such as CRUD scaffolding, have to call into the HTTP layer at some point. I'd like to emphasize that the new HTTP server is already quite complete and moving very quickly; it is certainly not limited to the functionality I am describing in this post. I simply want to cover the basics first, because I feel the fundamentals are very well designed and they serve as a foundation for all the other, more advanced functionality I have implemented.
HTTP requests and responses
The central concept in the new server is that HTTP requests and responses are now first-class types. The extra/http library implements these types and operations for working on them. An HTTP request can be parsed from an input stream, or written to an output stream. Similarly, an HTTP response can be parsed from an input stream, or written to an output stream.
Both the HTTP server and HTTP client use this library, and conceptually what they do can be explained in very simple terms:
The HTTP server reads a request from the client, processes it, and writes a response.
The HTTP client writes a request to the server, then reads a response from the server, and returns it to the user.
This is a very nice simplification and it allows a lot of code to be shared between the client and the server.
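The first-class-values idea can be sketched in Python. This is purely my illustration of the concept; the `Response` class and its `write` method are assumptions for the sketch, not Factor's actual API:

```python
from dataclasses import dataclass, field
from io import StringIO

@dataclass
class Response:
    """A first-class HTTP response: a plain value that can be built up,
    inspected, passed around, and finally serialized to a stream."""
    code: int = 200
    message: str = "OK"
    headers: dict = field(default_factory=dict)
    body: str = ""

    def write(self, stream):
        # Serialize the status line, then the headers, then the body.
        stream.write(f"HTTP/1.1 {self.code} {self.message}\r\n")
        for name, value in self.headers.items():
            stream.write(f"{name}: {value}\r\n")
        stream.write("\r\n")
        stream.write(self.body)

# The server and the client share this one type: the server builds a
# Response and writes it out; the client reads bytes and parses one.
out = StringIO()
Response(200, "Document follows", {"content-type": "text/html"},
         "Hello world").write(out)
print(out.getvalue().splitlines()[0])  # → HTTP/1.1 200 Document follows
```

Because the same type is used on both sides, serialization and parsing live in one library, which is exactly the code sharing described above.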
In the description of the server above, I said that it "processes" the request to produce a response. The code that does this is known as a responder. There is only one responder per HTTP server: it receives requests and outputs responses.
A responder is an object implementing a method on a generic word:
GENERIC: call-responder ( path responder -- response )
When this word is called, the request itself is stored in a variable which the responder can access.
Here is a simple responder:
TUPLE: simple-responder ;
C: <simple-responder> simple-responder
M: simple-responder call-responder
    2drop
    <response>
        200 >>code
        "Document follows" >>message
        "Hello world" >>body ;
Using a higher-level feature such as HTTP actions, it is possible to avoid much of this boilerplate, but let's just stick to the lowest layer for now.
If we manually call write-response on the result of the above construction, we get what we expect:
HTTP/1.1 200 Document follows
date: Thu, 13 Mar 2008 01:30:43 GMT
This is what the HTTP server would send given a HEAD request. Given a GET request, it calls write-response-body, which looks at the body slot: if it is a string, it is written to the client; if it is a quotation, it is called, and the quotation is free to write any output it chooses.
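The body-writing rule can be sketched in Python. The function name and signature here are mine for illustration; the real write-response-body is a Factor word writing to the current output stream:

```python
from io import StringIO

def write_response_body(body, stream):
    """Mimic the rule described above: a string body is written out
    verbatim; a callable body (Factor: a quotation) is invoked and is
    free to produce whatever output it chooses."""
    if isinstance(body, str):
        stream.write(body)
    elif callable(body):
        # In Factor the quotation writes to the current output stream;
        # in this sketch we pass the stream explicitly.
        body(stream)
    else:
        raise TypeError("body must be a string or a callable")

out = StringIO()
write_response_body("Hello world", out)
write_response_body(lambda s: s.write("!"), out)
print(out.getvalue())  # → Hello world!
```

The quotation case is what makes streaming or dynamically generated output possible without buffering the whole body in a string first.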
We can create an instance of this responder and store it in the main-responder variable:
<simple-responder> main-responder set-global
With the HTTP server started, visiting http://localhost:8080/ produces a "Hello world" message.
This is how you start an HTTP server that serves a single web application. A more useful example is a server which simply serves static content:
"/var/www/" <static> main-responder set
However, what if you wanted to have several web apps per server? Or a server that serves static content as well as web apps? Or what if your site conceptually ran a single application, but you wanted to structure it from multiple responders? As I've described the server design above, it is not clear how this is possible. In fact, this problem is solved in a very elegant way: a responder can call other responders, so the fact that the HTTP server can only ever have one main responder is not a limitation at all.
A dispatcher is a responder which looks at the first component of a path name, chops it off, and calls one of several possible child responders depending on that path name. For example, we can create a dispatcher which has our hello world app, together with static content, as children:
<dispatcher>
    <hello-world> "hello" add-responder
    "/var/www/" <static> "data" add-responder
Now, if we visit http://localhost:8080/hello/, we get the "Hello world" message, and if we visit http://localhost:8080/data/, we get our static content. For example, if we have a file /var/www/widgets.html on our file system, then visiting http://localhost:8080/data/widgets.html will serve that file.
What if we visit http://localhost:8080/? We get a 404 not found response, because the dispatcher doesn't know what to do in this case. However, we can give it a default responder; for example, let's change the above code so that the static content is the default:
<dispatcher>
    <hello-world> "hello" add-responder
    "/var/www/" <static> >>default
Now, if we visit http://localhost:8080/hello, we get our simple responder, and if we visit http://localhost:8080/widgets.html or any other path, we get the static data. Effectively, we've "mounted" the hello world web app under the hello path, with static content everywhere else.
More about static content, and CGI
In the old HTTP server, the file responder for static content was hard-coded to allow .fhtml files to execute; these were templates mixing HTML content and Factor code. This presented a potential security problem: if you allow users to upload arbitrary data that you then serve out, you probably don't want to run uploaded .fhtml files as code.
On the flip-side, sometimes you want to serve some CGI scripts. CGI is crappy, inefficient, archaic and error-prone, but sometimes you want to use it anyway. For example, factorcode.org runs a gitweb.cgi instance. The old HTTP server had a CGI responder which shared a lot of code with the file responder; while there was no outright duplication, this was a bit ugly.
What I decided to do this time round is make the file responder more flexible. It no longer hard-codes any behavior for .fhtml files. Instead, each file-responder instance has a hashtable mapping MIME types to quotations implementing special behaviors. These special behaviors can include running .fhtml templates or running CGI scripts. You could even add PHP support by dynamically loading the PHP runtime and calling it via FFI if you wanted to.
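Here is a Python sketch of that design. The FileResponder class, the handler signature, and the application/x-factor-server-page MIME type are all my assumptions for illustration, not the server's actual definitions:

```python
import os

def serve_template(path):
    # Stand-in for a template runner: the real behavior would execute
    # the .fhtml file and write its output to the client.
    return f"ran template {path}"

class FileResponder:
    """Static file responder with a 'special' table mapping MIME types
    to handlers that override the default serve-the-bytes behavior."""

    # Illustrative extension-to-MIME mapping.
    MIME = {".fhtml": "application/x-factor-server-page",
            ".html": "text/html"}

    def __init__(self, root):
        self.root = root
        self.special = {}   # MIME type -> handler quotation

    def __call__(self, path):
        ext = os.path.splitext(path)[1]
        mime = self.MIME.get(ext, "application/octet-stream")
        handler = self.special.get(mime)
        full = os.path.join(self.root, path)
        if handler is not None:
            return handler(full)            # special behavior for this type
        return f"served bytes of {full}"    # default: serve the file as-is

def enable_fhtml(responder):
    # Register the template handler, then return the responder so the
    # call can be chained while building a site.
    responder.special["application/x-factor-server-page"] = serve_template
    return responder

r = enable_fhtml(FileResponder("/var/www"))
print(r("index.fhtml"))  # → ran template /var/www/index.fhtml
print(r("logo.png"))     # → served bytes of /var/www/logo.png
```

The key point is that a plain file responder with an empty special table is safe for user-uploaded content, while the same responder class with handlers registered can run templates or CGI scripts.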
Here is an example of this:
: enable-fhtml ( responder -- responder )
    [ serve-template ]
    "application/x-factor-server-page"
    pick special>> set-at ;
The enable-fhtml word takes a file responder as input, and stores a quotation in its hashtable of special hooks. The stack effect is designed to leave the file responder on the stack, so that calls to it can be chained when building a responder.
CGI is implemented in a similar fashion; there is an enable-cgi word which works the same way.
Here is a more elaborate example:
<dispatcher>
    "/var/www/widgets.com/images" <static> "images" add-responder
    "/var/www/widgets.com/cgi" <static> enable-cgi "cgi-bin" add-responder
    "/var/www/widgets.com/content" <static> enable-fhtml >>default
This might be suitable for a simple site where the content is mostly static and there are only a handful of dynamic templates that don't do anything too elaborate. (If they did, you'd code them as responders instead, so that they'd always be loaded, and because responders are more flexible in the kinds of responses they can give.)
Paths hanging off http://widgets.com/images are served from /var/www/widgets.com/images, and no templates or CGI scripts are permitted there. Paths hanging off http://widgets.com/cgi-bin are served from /var/www/widgets.com/cgi/, and CGI script execution is supported. All other paths are served from /var/www/widgets.com/content, and templates are allowed.
- In the second installment, I will talk about virtual hosting, session management, and cookies, and give some more complex examples of responders. Virtual hosting is a work in progress; the design is a lot more flexible than it was before. Cookies and session management are pretty much done, but this post is already getting rather long, so I will describe them next time.
- In the third installment, I will talk about web actions and form validation.
- In the fourth installment, I will discuss database access and CRUD actions.