I decided to give DomainKeys a test on my ogre.com domain the other day. My system is a pretty generic sendmail installation (8.13.1), with a few milters ("mail filters") running already. In particular, I use MIMEdefang, which likes to append a new X-Scanned-By header to all messages.
This does not play well with DomainKeys, which expects no header changes after it calculates the signature. I could either get it to work for all my outgoing mail (with the DK filter last in the milter chain), or for all my incoming mail (with the DK filter first). But no matter what, it can't function properly for both cases at once.
Instead, I ended up creating a second sendmail configuration (and process) instance. This wasn't terribly difficult, but I'll describe the steps I had to take.
1. Install sendmail 8.13 (or later) and the dk-filter milter, and follow all the instructions for setting up your DomainKey system. I added the following to my DNS server:
    ogre._domainkey IN TXT "g=; k=rsa; t=y; p= ... <excluded, use DNS>
    _domainkey      IN TXT "t=y; o=~; n=contact email@example.com;"
2. Next, I fixed my existing sendmail.cf (well, the .mc file), making sure it did not bind the MSA port (587), and also that it ran the DK filter first (before any other milters):
    FEATURE(no_default_msa)dnl
    INPUT_MAIL_FILTER(`dk-filter', `S=unix:/var/spool/dk.socket')dnl
    INPUT_MAIL_FILTER(`mimedefang', `S=unix:/var/spool/md.sock, T=S:1m;R:1m')dnl
3. Next I created a second configuration, which I call sendmail-msa.cf (and .mc); it is close to identical to my original configuration. The only changes are:
    define(`QUEUE_DIR', `/var/spool/mqueue-msa')dnl
    DAEMON_OPTIONS(`Port=587, Name=MTA, M=Ea')dnl
    FEATURE(no_default_msa)dnl
    INPUT_MAIL_FILTER(`mimedefang', `S=unix:/var/spool/md.sock, T=S:1m;R:1m')dnl
    INPUT_MAIL_FILTER(`dk-filter', `S=unix:/var/spool/dk.socket')dnl
You have to make sure the dk-filter runs last here, so that no other milter can change or add any mail headers after the DK signature has been generated.
4. Finally, I start up a second instance of sendmail, using the new configuration:
/usr/sbin/sendmail -bd -C /etc/mail/sendmail-msa.cf -q1h
And that's it!
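As an aside, the tag=value syntax used in those TXT records is simple to pick apart. Here's a small Python sketch (the parse_dk_record helper is my own, not part of dk-filter):

```python
def parse_dk_record(txt):
    """Parse a DomainKeys-style TXT record ("tag=value; tag=value; ...")
    into a dict.  Hypothetical helper for illustration only."""
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        tags[name.strip()] = value.strip()
    return tags

# The policy record from the example above:
rec = parse_dk_record("t=y; o=~; n=contact email@example.com;")
print(rec["t"])  # y
print(rec["o"])  # ~
```

Empty values (like the "g=" tag in the selector record) simply come back as empty strings.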
I've been using Comcast cable at my house for a while, primarily because it's the only "reliable" network provider that supports my area (I live sort of in the boonies). The performance is ok downstream, but of course completely pathetic upstream.
A while back, we upgraded to the "professional" service, which, although expensive, did give us a bit better bandwidth both up- and downstream. This was a simple flip of a switch somewhere in their system, no upgrades or changes on my side. Recently they introduced what they call the "Home Network" service, which costs less than the "professional" service but offers the same bandwidth.
Now, all I really care about is the bandwidth, but "Home Network" in this case apparently implies that you want their firewall and wireless router. For me, this would create some headaches and lost hardware/software investments, since I've already built my house network around a Linux firewall and two existing wireless hubs (supporting 802.11g). And to make things worse, I'd have to pay a $199 "service and installation" fee, plus new hardware costs (or rental fees). All this, to get something I already have. Nice thinking, Comcast ...
I spent well over 3 hours total on the phone trying to convince them I really only wanted the higher bandwidth service, and that I was willing to pay the extra monthly charge without taking any of their router/hub hardware and installation. But they completely refused, and being a stubborn ass myself, I then simply downgraded my service again to "basic" internet service.
So, instead of taking my $10 / month with no extra work on their side, they forced me to downgrade the service, saving me some more money in the process (maybe I can get that Traxxas Revo now). I guess I could have sucked it up, taken the loss on my cable modem, paid the $199 installation fee, and rented (or bought) the new wireless router (that I don't need) to get the extra bandwidth. But it just seemed so wrong on so many levels, I just couldn't do it.
So, for now, until SBC or someone else can provide me with reasonable bandwidth to my house, I'm stuck running at the lowest possible bandwidth from Comcast... Kill me, Billy.
I've been playing around with Fedora "development" packages on my RedHat 9 system lately. These packages are obviously unstable, but I've had pretty good results so far. I simply mirror the latest "devel" packages from the FTP mirrors, see the official Fedora site for more info.
I initially just installed a few core packages, to get the 2.6.1 kernel from Fedora/devel to run on my RH9 system. This was pretty easy, and worked very well. I don't remember exactly which packages I had to install, but at least:
- kernel-smp-2.6.1; make sure you "rpm -i" this package so you don't lose your old kernel(s)
(Pick the appropriate kernel for your system, SMP, architecture etc.)
I use the nVidia drivers on my machine, which now support 2.6.x kernels directly. However, since the Fedora/devel kernel was compiled with gcc-3.3, the nVidia installer failed during the build. So I also upgraded to the new gcc builds, and then the graphics card drivers built fine.
I've since upgraded a majority of my old RH9 packages to Fedora/devel, with only minor problems. One problem that took me a while to figure out (since my gdm wouldn't start) was that somehow the file /etc/gtk-2.0/gdk-pixbuf.loaders was missing. In my previous installation, this was part of the gdk2 RPM (from Ximian). As it turns out, this file can easily be recreated with a simple command:
root@thor 301/0 # gdk-pixbuf-query-loaders > /etc/gtk-2.0/gdk-pixbuf.loaders
Without this file, pretty much every Gnome application (including GDM) will fail, with errors like:
    Can not open pixbuf loader module file '/etc/gtk-2.0/gdk-pixbuf.loaders'
    Error loading PNG image loader: Image type 'png' is not supported
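If you want to script a quick check for this condition, a minimal stdlib-only sketch (the helper name is my own invention):

```python
import os

# Path taken from the error messages above.
LOADERS_FILE = "/etc/gtk-2.0/gdk-pixbuf.loaders"

def loaders_file_ok(path=LOADERS_FILE):
    """True if the pixbuf loaders file exists and is non-empty;
    hypothetical sanity-check helper, not a Gnome tool."""
    return os.path.isfile(path) and os.path.getsize(path) > 0
```

If it returns False, rerun the gdk-pixbuf-query-loaders command shown above to regenerate the file.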
I spent a long night debugging a problem in my current project (image/content spam prevention): I had problems retrieving URLs that resolved into the Akamai distributed proxy mesh. And yes, I did file this as a SourceForge bug.
I don't know who's at fault here, but httplib.py will (erroneously) interpret the HTTP response from an Akamai server as if the socket closes when the request is finished. It does not close, because Akamai implements a Connection: keep-alive feature. The following diff to httplib.py (Python 2.3.2) solves the problem for me, although I'm not sure it's the right solution:
    --- /usr/lib/python2.3/httplib.py    2003-10-06 09:11:52.000000000 -0700
    +++ httplib.py                       2004-01-11 03:10:18.000000000 -0800
    @@ -355,6 +355,12 @@
             # An HTTP/1.0 response with a Connection header is probably
             # the result of a confused proxy.  Ignore it.
     
    +        # Akamai returns HTTP 1.0 headers, with connection: keep-alive, so
    +        # the socket will not close.
    +        conn = self.msg.getheader('connection')
    +        if conn and conn.lower().find("keep-alive") >= 0:
    +            return False
    +
             # For older HTTP, Keep-Alive indiciates persistent connection.
             if self.msg.getheader('keep-alive'):
                 return False
As an alternative, you can subclass the HTTPResponse class, to override the _check_close() method:
    class HTTPResponse(httplib.HTTPResponse):
        def _check_close(self):
            if self.version == 11:
                # An HTTP/1.1 proxy is assumed to stay open unless
                # explicitly closed.
                conn = self.msg.getheader('connection')
                if conn and conn.lower().find("close") >= 0:
                    return True
                return False

            # Akamai returns HTTP 1.0 headers, with connection: keep-alive, so
            # the socket will not close.
            conn = self.msg.getheader('connection')
            if conn and conn.lower().find("keep-alive") >= 0:
                return False

            # For older HTTP, Keep-Alive indicates persistent connection.
            if self.msg.getheader('keep-alive'):
                return False

            # Proxy-Connection is a netscape hack.
            pconn = self.msg.getheader('proxy-connection')
            if pconn and pconn.lower().find("keep-alive") >= 0:
                return False

            # otherwise, assume it will close
            return True

    httplib.HTTPConnection.response_class = HTTPResponse
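To see the decision logic in isolation, here is a small self-contained sketch. FakeMessage and will_close are my own illustrative stand-ins (not real httplib objects); the function mirrors the header checks in the _check_close() override:

```python
class FakeMessage(object):
    """Minimal stand-in for the response message object, for illustration."""
    def __init__(self, headers):
        self.headers = dict((k.lower(), v) for k, v in headers.items())

    def getheader(self, name):
        return self.headers.get(name.lower())

def will_close(version, msg):
    """Same decision logic as _check_close(): True means the server
    is expected to close the socket after this response."""
    conn = msg.getheader('connection')
    if version == 11:
        # HTTP/1.1 stays open unless explicitly closed.
        return bool(conn and 'close' in conn.lower())
    # The Akamai case: HTTP/1.0 with Connection: keep-alive stays open.
    if conn and 'keep-alive' in conn.lower():
        return False
    # For older HTTP, a Keep-Alive header indicates a persistent connection.
    if msg.getheader('keep-alive'):
        return False
    # Proxy-Connection is a netscape hack.
    pconn = msg.getheader('proxy-connection')
    if pconn and 'keep-alive' in pconn.lower():
        return False
    return True

# An HTTP/1.0 response from Akamai with Connection: keep-alive stays open:
print(will_close(10, FakeMessage({'Connection': 'keep-alive'})))  # False
```

A plain HTTP/1.0 response with no such headers would still be treated as closing, which is the stock behavior.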
I'm a long-time Python user (since 1994) and fan, and lately I've been trying to catch up on some of the many changes the language has been going through. I must say, pretty much everything they have changed or added since the old 1.x days is awesome! As if Python wasn't a great language already, it's shaping up to be a very strong contender indeed.
If you haven't looked at Python lately, take a quick look at the following summaries of the changes that have been made in recent years:
- What's new in Python 2.0
- What's new in Python 2.1
- What's new in Python 2.2
- What's new in Python 2.3
- What's new in Python 2.4
- What's new in the next Python
- Python PEPs
Anyways, last week I was playing with the super() function, to clean up some old code which used the old-style calling conventions for accessing super class members. This built-in function was added in Python 2.2, to better support the new multiple inheritance lookup rules (see above). The following example shows the two different styles (but read the docs above for a more in-depth analysis of why super() is useful):
    class Foo(BaseFoo):
        def __init__(self, arg1, arg2):
            BaseFoo.__init__(self, arg1, arg2)
            ...

    class Bar(BaseBar):
        def __init__(self, arg1, arg2):
            super(Bar, self).__init__(arg1, arg2)
            ...
This worked most of the time in my code, except when I tried to sub-class some old-style classes, for instance:
    from HTMLParser import HTMLParser, HTMLParseError

    class HTMLImageParser(HTMLParser):
        def __init__(self, callback=None):
            super(HTMLImageParser, self).__init__()
            ....
This would generate an error like:
TypeError: super() argument 1 must be type, not classobj
The simple solution was to make my sub-class also inherit from the object class, making it a new-style class:
    class HTMLImageParser(HTMLParser, object):
        def __init__(self, callback=None):
            super(HTMLImageParser, self).__init__()
            ...
So, for the second or third time, I read on Slashdot about people complaining about the "serious security" problems with MacOSX. I finally couldn't stop myself from posting to one of these discussions, not because I'm a Macintosh fan, but because I find it preposterous to even call this a bug.
As far as I'm concerned, Apple has implemented their solution as per the specifications. They probably could have provided better documentation, GUI/tools, and training around the well-known security issues with DHCP in general, and the LDAP option in particular. And arguably, using DHCP with the LDAP option like they did might have been a poor design decision, but it was no less secure than their previous systems afaik (e.g. Netinfo).
I still don't understand why this security "hole" got so much attention... Are people struggling to find problems with MacOSX? First of all, attacks like this are nothing new; just remember the old YP/NIS problems with broadcasting for the server, to mention just one example. Secondly, when we wrote the DHCP LDAP option specs way back when, we explicitly documented this problem in the security section:

    5. Security considerations

    Security considerations discussed in , particularly with respect to the
    provision of authentication information, are directly applicable here.
    Additionally, it should be noted that providing LDAP server information
    by a broadcast protocol such as DHCP may allow unauthorized clients to
    learn the location of and authentication information for LDAP servers
    and hence pose as valid clients. This presents a security problem when
    sensitive information, such as user passwords, is published via LDAP
    servers. The DHCP protocol provides no mechanisms for the client to
    verify the validity and correctness of the received information. The
    security considerations in  discuss several weaknesses, particularly
    the problem with unauthorized DHCP servers.

This was written in 1997; note the last paragraph above. These issues have been discussed and documented in several RFCs, many years ago...
Someone posted a question asking if this was a MacOSX specific problem, to which I responded:
Well, DHCP is inherently insecure, so this is definitely not a MacOSX specific "bug" (but I personally don't consider it a bug). This is all well documented in the DHCP RFCs and docs, e.g. from RFC 2131:

    7. Security Considerations

    DHCP is built directly on UDP and IP which are as yet inherently
    insecure. Furthermore, DHCP is generally intended to make maintenance
    of remote and/or diskless hosts easier. While perhaps not impossible,
    configuring such hosts with passwords or keys may be difficult and
    inconvenient. Therefore, DHCP in its current form is quite insecure.

    Unauthorized DHCP servers may be easily set up. Such servers can then
    send false and potentially disruptive information to clients such as
    incorrect or duplicate IP addresses, incorrect routing information
    (including spoof routers, etc.), incorrect domain nameserver addresses
    (such as spoof nameservers), and so on. Clearly, once this seed
    information is in place, an attacker can further compromise affected
    systems.

    Malicious DHCP clients could masquerade as legitimate clients and
    retrieve information intended for those legitimate clients. Where
    dynamic allocation of resources is used, a malicious client could
    claim all resources for itself, thereby denying resources to
    legitimate clients.
I think what makes MacOSX "unique" is that they use DHCP for services traditionally not provided by it (in this case, LDAP server information). Just like with NIS/YP, we have a tradeoff between ease of deployment (automatic service discovery) and strong security. I know for a fact that way back, many YP/NIS deployments got hacked (on open networks, most commonly at universities) by someone simply pretending to be an NIS server. NIS+ addressed this problem (and others), and made it close to impossible to deploy and maintain. :-)
I don't know what Apple will do to "secure" this; the natural solution seems to be to have the DHCP client limit which servers it will talk to (establish a trust relation). It could be done with something as simple as a DHCP server host list, or more likely using Kerberos tickets to verify the authenticity of the DHCP response (I'm no Kerberos expert, so don't quote me on that one). More than likely, it'll make deployment a bit harder than it is now.
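The host-list idea is trivial to sketch (this is entirely hypothetical, not anything Apple actually ships): the client keeps an allowlist of DHCP servers and ignores offers from anyone else.

```python
# Hypothetical allowlist of trusted DHCP servers (by server identifier,
# DHCP option 54). Addresses here are made up for illustration.
TRUSTED_DHCP_SERVERS = {"192.168.1.1"}

def accept_offer(server_identifier):
    """Return True only for DHCP offers from a trusted server."""
    return server_identifier in TRUSTED_DHCP_SERVERS

print(accept_offer("192.168.1.1"))  # True
print(accept_offer("10.66.66.66"))  # False -- rogue server ignored
```

Of course, a rogue server can spoof the trusted address too, which is why something cryptographic (like the Kerberos idea above) would be the more robust route.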
Nothing new about this at all....
Some readers might know that one of my pet peeves is Spam. I've spent so much time trying to fight spam, and I'm actually "winning" right now. It is a constant battle though, as spammers adapt to new (and old) anti-spam tools, and I have to constantly update and tune my systems.
I guess one of these days I should write something about what I do for anti-spam; I just haven't had the time. I do run a number of different toolkits together, which has turned out to be very effective. A few months ago I added a Bayesian filter system, which, even though it is effective, is incredibly painful to maintain. I'm also thinking of adding RBL to my systems, assuming they let me use it for free. :-)
I'm working on a tool to let me manage the learning through IMAP clients (e.g. Mozilla), more or less automatically. I'll post more on that when I get closer to release. (And yes, I know there are similar tools out there already, but I need something that's 100% server side).
Oh, and for what it's worth, I'm currently catching 10k spam / month / user on my system, and that's not counting the spam/viruses that are rejected immediately at the MTA layer. That adds up to well over 100k spam / year for one person; it's just insane. For me personally, spam is over 75% of my incoming mail, and I'm on a lot of fairly large mailing lists...
This seems to have been resolved in the latest vmware "fix" package from Petr (v44), see more info in this Bugzilla bug.
I think they removed this compile option again from the unofficial 2.6 builds, maybe because it broke other binary-only kernel modules (like nVidia's)? But the nVidia drivers are still broken, since the new CONFIG_X86_4G_VM_LAYOUT is used. More information on the nVidia for 2.6 site.
So, I've been tooling around using the 2.6 RPMs provided by some nice fellow at RedHat. I simply followed the instructions from Thomer M. Gil. It was surprisingly easy to get my RedHat 9 system upgraded to 2.6 using these RPMs and instructions.
This all worked nice and well up until (I think) test-8. After this release, I could no longer get VMWare to run, even after applying Petr's patches. Reading the newsgroup for running VMWare on "experimental platforms", I found this comment from Petr:
    "linux-2.6.0-compile.patch in Arjan's kernel is responsible for that.
    It adds -mregparm=3 to the CFLAGS, completely breaking ABI: functions
    do not expect arguments on the stack, but in registers (and so
    misc_register() uses leftover 165 value in EAX as a miscdevice
    pointer...)."
I personally "solved" this by editing arch/i386/Makefile in the kernel source, removing the -mregparm=3 option, and rebuilding the kernel.
I've been playing around a bit with the latest "unstable" Ximian Gnome packages, with mixed results. Most disturbingly, my Emacs settings got completely whacky. My colors were all changed, fonts weren't right, and many of my other custom settings were being ignored. After some annoying debugging, I realized Gnome was now setting some X11 resources, which obviously affected my Emacs run-time.
The offending file (at least for me), turned out to be:
which is part of the control-center package. Simply removing this file and restarting Gnome/X11 solved all my Emacs problems, since the conflicting X11 resources were then gone. I have no idea why this file was put in there... I control all my Emacs settings through E-lisp, thank you very much, and X11 resources should be avoided when possible IMO. :-)