I guess at a certain level, I’m only noting this in sort of a thumbing-my-nose-at-MySQL way, but the sale to Oracle of the company that creates the only transaction-safe storage back-end with referential integrity available for MySQL “has some real implications for MySQL”:http://www.megginson.com/blogs/quoderat/archives/2005/10/11/oracle-vs-mysql-ab/. I, of course, am largely unaffected because, well, I don’t use MySQL.
Hilarious troll of the day
Seen on linux-kernel:
bq.. From: Ahmad Reza Cheraghi
Subject: Why no XML in the Kernel?
To: linux-kernel
Date: Sun, 2 Oct 2005 02:41:42 -0700 (PDT)
Can somebody tell me why the Kernel-Development dont
wanne have XML is being used in the Kernel??
Regards
Ahmad Reza Cheraghi
The Shorter Nat Friedman
How to become a hacker? “Hack”:http://nat.org/2005/september/#How-to-become-a-hacker
Occasional C hacking (aka, Why I Love Free Software)
So yesterday I found myself in an unfortunate situation–I had just spent several days doing a significant revamp and cleanup of a client’s LDAP tree (to better support multiple-domain email handling, mostly, but it had accumulated several years of cruft) when the client called me in a tizzy because their WebDAV access–necessary to modify a number of their websites–had stopped working.
Well, it turns out that Adobe GoLive! URI-encodes any (presumably, I didn’t check) non-alphabetic characters in the username it sends over for authentication. But these usernames aren’t decoded before they’re handed to mod-auth-ldap, so the lookup fails because there is no record for ‘foo%40example.com’.
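Just to make the mismatch concrete, here’s the round trip in a couple of lines of Perl–purely an illustration using the stock @URI::Escape@ module, not anything from the actual fix:
bc. use URI::Escape;
my $sent = uri_escape ('foo@example.com');   # what GoLive sends: 'foo%40example.com'
my $user = uri_unescape ($sent);             # what the LDAP lookup needs: 'foo@example.com'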
If I were dealing with traditional vendors here, I expect I would have spent quite some time on the phone as everyone involved pointed fingers at one another–the web server vendor saying that GoLive! shouldn’t URI-encode the usernames, Adobe saying that the web server should decode them, the web server saying that the LDAP server should know how to decode them, etc., etc. Round and round.
But I’m not dealing with traditional vendors (at least, not on the server side), I’m dealing with Free Software. Which means I was able to download the source to mod-auth-ldap and add the following patch:
bc.. --- libapache-auth-ldap-1.6.0.orig/auth_ldap.c
+++ libapache-auth-ldap-1.6.0/auth_ldap.c
@@ -404,7 +405,12 @@
        LDAP filter metachars are escaped.
     */
     filtbuf_end = filtbuf + FILTER_LENGTH - 1;
-    for (p = r->connection->user, q=filtbuf + strlen(filtbuf);
+
+    /* fscking Go Live uri-encodes the usernames, which screws up lookups */
+    char *decoded_user = ap_pstrdup (r->pool, r->connection->user);
+    ap_unescape_url (decoded_user);
+
+    for (p = decoded_user, q=filtbuf + strlen(filtbuf);
          *p && q < filtbuf_end; *q++ = *p++) {
         if (strchr("*()\\", *p) != NULL) {
             *q++ = '\\';
p. And everything works just fine, thanks.
I am a JavaScript slacker…
That is to say, when I’m working on web stuff, I think almost exclusively in terms of what I can do on the server side–I have been known to use JavaScript to do simple pre-submission validation of forms, but that’s about as far as I go.
However, “there’s an interesting article on how to have your ajax-enabled site degrade gracefully”:http://particletree.com/features/the-hows-and-whys-of-degradable-ajax that uses the incredibly sensible strategy of shipping all your documents as HTML that works, if mundanely (what I’m used to doing), and then, _if it’s available_, using JavaScript to make them full-on-robot-chubby ajax-enabled masterpieces. You won’t even enable the ajax capabilities unless the browser can actually deliver them.
I’m sure this is old hat, and I’m terribly late to the party, but damn it just seems brilliant.
What if you had a language that was all cut-and-paste
Anyone worth their salt as a programmer will tell you that programming by cut-and-paste is always, always, always a mistake. You might do it for expedience, because reworking whatever you’re cutting-and-pasting to be more generic might take longer than you have to deliver your result, but there is never a situation where it’s a good thing.
But the “subtext”:http://subtext.org/ language has a demo that posits the question “what if your language was built to handle all the issues for you?”:http://subtextual.org/demo1.html.
I don’t think I’ll be programming in it any time soon, but it’s always interesting when a new idea comes around.
Elijah Newren distinguishes himself
I guess first I should make the observation that I don’t know who Elijah is other than “some random Gnome hacker”:http://www.gnome.org/~newren/.
However, the last couple of days in Gnome-land have involved huge, horrendous amounts of dumping on someone named Eugenia for saying some unconsidered and unkind things in the most public way possible. Lots and lots of dumping. I mean tons. It certainly seems like everyone on “Planet Gnome”:http://planet.gnome.org/ has made a comment, and though most of them have been minimally civil–no shouted obscenities, no ad hominem attacks–I think it’s fair to say most of them feel unfairly attacked.
Elijah, though, “takes the time to try and figure out why it all happened”:http://www.gnome.org/~newren/blog/2005/03/15.
Sure, it’s all supposition, but it’s refreshing to see someone–however alone they may be–trying to step back and understand the other side’s point of view, however misguided it may actually be. I’ve witnessed a lot of Debian flame-wars (it looks like another is heating up right now) that quickly sink to the all-heat-and-no-light level.
At the end of some comments about working with free software hackers
which is an interesting bit in itself, Jakub Steiner drops a couple of links to some resources on writing (and, for that matter, why to write) functional specifications, “one from Joel Spolsky”:http://www.joelonsoftware.com/articles/fog0000000036.html and “a much more elaborate one”:http://www.mojofat.com/tutorial/ that really leads you by the nose.
This all seems especially germane to me right now since I’m going through the throes of writing some specs for the great rewrite of “AnteSpam”:http://antespam.com/.
Colorization using optimization
This is apparently all over geek circles today, but I got it from “Miguel”:http://primates.ximian.com/~miguel/.
Researchers in Israel have developed colorization techniques that are almost freakish in their ability to produce natural-looking results from an incredibly simple-seeming markup of the original image.
Needless to say, “they have a web page devoted to it”:http://www.cs.huji.ac.il/~yweiss/Colorization/.
More Ajax
It’s “the hot new thing”:http://zilbo.com/articles/ajax_how.html, even cooler than “Ruby on Rails”:http://www.rubyonrails.com/.
I don’t think I’ve mentioned Laszlo before
I remember when it was first freed, late last summer–right about the time I was starting in DC–and it was the subject of much enthusiasm and lots of “wow, this is just what we’ve been waiting for” posts.
It would appear, though, that the bloom is off the rose, and “all we’re left with are the thorns”:http://rifers.org/blogs/gbevin/2005/3/8/wasting_time_with_laszo.
Adventures in building Perl modules (a short, short primer on extending Module::Build)
Over the last few years, I have simply presumed that when I work on a project in Perl, I will use the standard Perl tools– @ExtUtils::MakeMaker@ and, later, @Module::Build@ –for managing the Perl library code I write.
But yesterday, for the first time, I looked at extending @Module::Build@ to do more than just the stock actions. And you know what, it was easy.
Now the specific issue I was running up against was that I needed to ensure that the database I was running my tests against was installed and clean. I had been using a Makefile, but that was a hack–for instance, I wasn’t actually checking for the presence of the database or anything, I was looking for a file I wrote when I created the database. I probably could have made @make@ check for the actual database, but its declarative, rather than procedural, style makes this kind of thing ugly.
Also, I wanted a clean instance of this database before I did a test run of the conversion utility (this is all work on a heavily revised “AnteSpam”:http://antespam.com/, and we’re moving from keeping config info in LDAP to putting it in a replicated PostgreSQL database). _And_ I wanted a clean instance of this database before I ran the “PostgreSQL Autodoc”:http://www.rbt.ca/autodoc/ tool to generate a nice diagram and DocBook documentation of the structure.
Oh, and I got so frigging tired of SQL’s spectacular verbosity ( *badly* exacerbated by the fact that I was commenting on most of the structures so the information would show up in the DocBook documentation) that I wrote a simple preprocessor–so I had to make sure that was run if necessary before creating the database.
Oh, and I wanted to build the documentation automatically. And I kept forgetting to run the @Build@ script with the environment set properly for the database, so I wanted that to be handled easily.
So, I made a file, @Build.pm@, which sits right alongside @Build.PL@, and subclasses @Module::Build@. And to that file I added a function (admittedly *very* simplistic, and, as a result, somewhat overenthusiastic) to drop and recreate the database:
bc.. sub create_db {
    my $self = shift;
    # Get the database name
    my $database = $self->args ("database");
    # Drop the database if it already exists
    $self->do_system (qq{dropdb $database}) if ($self->do_system (qq{psql -l | egrep -q $database}));
    # Create the database
    $self->do_system (qq{createdb $database});
    # Make sure the schema is up-to-date
    $self->dispatch ("ddlpp");
    # Load up the schema and initial data
    $self->do_system (qq{psql -q -f antespam.sql});
};
p. There are several cool things here. First, you can look, at run-time, at arguments that were given when the build script was created. So you can do:
@perl Build.PL database=foo@
and when you actually invoke the resulting build script, the bits you write can look for a database argument, and use what was set initially. The rest of it should be fairly obvious–yes, I’m just shelling out to @psql@ rather than doing it all in @DBI@–except for the @dispatch@ call. You see, you can add additional actions to your script. In this case, I added an action called @ddlpp@ (for DDL pre-processor) to build the SQL from my data definition file. It’s short, just:
bc.. sub ACTION_ddlpp {
    my $self = shift;
    $self->do_system (qq{ddlpp antespam.dp antespam.sql}) unless ($self->up_to_date ("antespam.dp", "antespam.sql"));
};
p. You’ll notice, though, that it will only run @ddlpp@ again if the @.dp@ file is newer than the @.sql@ file. That’s cool.
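For completeness, the @Build.PL@ side of this is just the usual couple of lines, pointed at the subclass–this is a from-memory sketch with stand-in metadata, not the real file:
bc. # Build.PL
use lib '.';    # so perl can find the Build.pm subclass
use Build;
my $build = Build->new (module_name => 'AnteSpam', license => 'perl');    # stand-in metadata
$build->create_build_script;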
Anyway, I also overrode the standard @test@ action, to make sure the database is created:
bc.. sub ACTION_test {
    my $self = shift;
    # Set up database access
    local $ENV{PGDATABASE} = $self->args ("database");
    local $ENV{PGHOST} = $self->args ("host");
    local $ENV{PGPASSWORD} = $self->args ("password");
    local $ENV{PGUSER} = $self->args ("user");
    # Make sure the database is created
    $self->create_db;
    # Run tests as normal
    $self->SUPER::ACTION_test (@_);
}
p. All this does is set the appropriate environment variables for @psql@ to pick up, create the database, and then run the normal test action that it inherited from @Module::Build@. The @convert@ action is similar, except it shells out to the convert script.
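It would look roughly like this–the environment setup is elided and the script name is a stand-in, but the shape is right:
bc. sub ACTION_convert {
    my $self = shift;
    # ... same PG* environment setup as ACTION_test ...
    $self->create_db;
    $self->do_system (qq{./convert});    # stand-in for the actual conversion script
};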
Etc., etc. I’m not holding this up as any paragon of implementation–in fact, it’s exposed some shortcuts I’ve taken that I ought not be taking, so I’m gonna have to clean those up eventually, and I should be able to add automatic @.dp@ to @.sql@ conversion and so forth–but for an hour or two of poking around, I’ve made some not-inconsequential extensions to the build system, giving me a much cleaner, more integrated process.
JavaScript Templates
There is now a templating system implemented “using client-side JavaScript”:http://trimpath.com/project/wiki/JavaScriptTemplates.
Normally this would be boring and tedious to contemplate, but, as Ian Holsman observes, combined with liberal use of XMLHttpRequest, this could be interesting.
For the bizarrely technically-minded among you
So, the Perl 6 implementation on top of Parrot continues to chug along. At least, I suppose it does–since Piers Cawley doesn’t write his weekly summaries any more, I have no idea. I guess I need to subscribe to yet another mailing list.
However, regardless of that, a “competing” implementation is being worked on. The weird, mildly disturbing part of it is that it’s being “implemented in Haskell”:http://www.pugscode.org/.
Now I don’t have anything against Haskell, _per se_–in fact, I considered learning it as a new language, although I ended up going with C#, which is a whole other story–but everything I’ve seen about it suggests that it is, if not an anti-Perl sort of language, at least a very un-Perl sort of language. To use one to implement the other seems, well, masochistic.
Wow
I just don’t know what else to say. “This”:http://perlmonks.org/index.pl?node_id=329174 wins any “Write a hello, world” program competition hands down.
Reeling in the years…
So, as part of my ongoing quest to have as spare an office as possible (you must understand that I mean spare by my usually cluttered standards–I do not intend to get rid of, say, the four large bookcases full of books, or the several hundred CDs; I just want to get rid of all the superfluous shit), I often grab a stack of old magazines I’ve kept around, and start going through them, looking for anything worth cutting out, and recycling the rest.
Yesterday I did some _WebTechniques_ from 1999-2001, and boy, were they amusing–very much of their Internet-bubble time, and rife with flavor-of-the-month software and technology that no one even thinks about any more.
Today I started in on my old _Dr. Dobb’s Journal_. The oldest issues I have are from ’97 (I have the CD-ROM that had the text of articles up to that point), but boy, even that’s a heck of a time capsule–for instance, one of the big articles has to do with the Pentium II math bug, which I hadn’t thought about in years.
Also of interest are some of the authors, who I now know of from different contexts–for instance, I just noticed a C++ article from Nathan Meyers, who I know from both the gcc development list (not surprising), and from the occasional Debian list.
What’s really weird, though, is how irrelevant it all seems to me in retrospect. You have to understand, this is a magazine I’ve been reading off and on–mostly on, though I let my subscription lapse a couple of months ago for the first time in a decade–since I was, say, 15. That is more than half my life.
And yet, the vast majority of the stuff in these issues I’m looking at hasn’t had much to offer me–I mean, I do believe that some of it has indirectly made me a better programmer, if only by making me cognizant of some of the “big picture” issues of programming, or talking about language- and platform-neutral issues and such.
I guess this really drives home to me that I work outside the mainstream, and I don’t have any desire to move towards the mainstream. Dr. Dobb’s had become a magazine where articles were either oriented towards the mainstream–programming Windows stuff, or how to use whatever new Java interface Sun has dreamed up–or they were too specific to do anything for me–how to compute elliptic curves across 3D spaces or other such hyper-specialized stuff. So I don’t read it any more.
Weird.
Never underestimate Apache subrequests
So, our “whole application”:http://i-squad.com/ is written in HTML::Mason, running under mod_perl, etc. Everyone seems quite happy; it’s performing really well–we’re handling nearly 2M hits/day (with about a 10:1 graphics-to-HTML ratio) on a dual PIII/1GHz app server and a similar DB server–and it’s pretty easy to get it to do whatever you want it to.
However, for reporting purposes, we have to produce something in a reasonably printable form. The only really portable print-oriented format that gets you good display control is PDF. We’ve gone down the using-HTML-to-generate-print rathole for one report, and it’s too horrific to contemplate doing more.
There’s no good–by which I guess I really mean high-level–free way to produce PDFs in Perl that I have found, and boy, have I looked. And the commercial tools that do what we need all seem to want multi-thousand-dollar licenses for “server versions”. This is unattractively expensive.
But we still need PDF, and until xmlroff is in better shape, and its libfo is hooked into a Perl module (boy, I wish I had time and energy to help with it, because it’s a cool system), there really aren’t any options that don’t involve using external processes.
So, we’ve started retrieving our report data in XML form, using Matt Sergeant’s @XML::Generator::DBI@, and we’ve got some stylesheets that do our conversions to HTML and CSV, and we’ll eventually do some to get the data in a form to feed to @Spreadsheet::WriteExcel::FromXML@ so that we can get XLS files.
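The XML-retrieval end of that is tiny, by the way–this is a sketch straight off the module’s synopsis, with a stand-in DSN and query rather than our actual code:
bc. use DBI;
use XML::Generator::DBI;
use XML::Handler::YAWriter;
my $ya = XML::Handler::YAWriter->new (AsFile => "-");    # send the XML to stdout
my $dbh = DBI->connect ("dbi:Pg:dbname=reports");        # stand-in DSN
my $generator = XML::Generator::DBI->new (Handler => $ya, dbh => $dbh);
$generator->execute ("SELECT * FROM report_data");       # stand-in query; rows come out as SAX events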
And we can write stylesheets to go to fo, and then we can run FOP on them, and, here’s the beauty part, our HTML::Mason component that does this can then just make an Apache subrequest to FOP’s output file, and Apache will take care of sending the PDF back for us.
It’s as simple as:
bc. system qq{/usr/bin/fop -q -fo $fo -pdf $pdf};
$m->auto_send_headers (0);
my $subrequest = $r->lookup_file ($pdf);
if ($subrequest->status == 200) {
    $subrequest->run (1);
} else {
    $m->abort (404);
}
We do similar things for doing access control on static graphics–we keep them absolutely outside of our document root, and then have a Mason dhandler that decides whether you’re allowed access, and if you are, lets Apache take care of you.
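The shape of that dhandler is something like the following–the path prefix and the authorization check are stand-ins for whatever your setup actually uses:
bc. <%init>
# dhandler guarding static graphics; allowed() is a hypothetical auth check
my $file = '/var/site/private-images/' . $m->dhandler_arg;
$m->abort (403) unless allowed ($r);
$m->auto_send_headers (0);
my $subrequest = $r->lookup_file ($file);
if ($subrequest->status == 200) {
    $subrequest->run (1);
} else {
    $m->abort (404);
}
</%init>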
I don’t think enough people use Apache subrequests.