I don't use the word "hate" very often. I reserve that word for things that I dislike with a real passion, but email is becoming one of those things. If you attempted to follow my previous posting about Controlling SPAM, you can guess why I have this passion.
I wish that I could give up email altogether. I think that this will happen in the next few years, but at this point there is not a better alternative for most of the people that I communicate with. Twitter and IM have become integral parts of my communications infrastructure, but they don't, and never will, come close to meeting the majority of my communications needs. The long breaks in my blogging record suggest that blogging is not a good communications mechanism for me. Most of the social networks out there just seem to add to the spam and privacy problems without adding much positive to my communications. I'm just stuck with email for a while.
There are some good technologies out there to "fix" email. DomainKeys and Sender Policy Framework (SPF) are two technologies that could do a lot to eliminate the problems with SPAM, but there is just too much inertia in the installed base of technology and administrator skill sets to get to a critical mass of adoption. If the weight of spam has not overcome this inertia by now, I don't think it ever will.
I think that the only thing that will fix the spam problem is something new that replaces email. That new technology must have obvious benefits and have spam resistance built in from the beginning. Early adopters will legitimize the technology and will eventually drag the rest of the world into using it. We are seeing these kinds of shifts with the use of Facebook and Twitter, but the closed, centralized nature of both of these systems makes them inappropriate for mass adoption at the internet infrastructure level that is required to really replace email. By the way, when I speak of "email" here, I'm referring to SMTP email. I think that we will always have email in the sense of electronic mail, but it may be based on completely different underlying technology than the SMTP that we see today.
What will replace SMTP email? That's a pretty tough question. There doesn't seem to be anything with momentum on the horizon yet. It is something that I've been thinking about, and it does tie into the OpenPersona idea that I've been playing with. Maybe it will come out of that effort.
I've been noticing that the amount of spam that I get has been going up. Up until about a month ago, I was receiving about 1000 spam messages a day, but that has risen to about 3000 per day over the last week or so. I have been using GMail for managing my email, and it had been great at filtering out this spam: virtually no false positives (good messages going into the spam folder) and about 1-2% false negatives (spam not getting put into the spam folder). That left me with about 10-20 spam messages a day to deal with. Not too much overhead. Sometime over the last couple of days, Google must have changed their spam filters in some way, I suspect in response to increasing levels of spam. The net effect was that the false positives went from practically none to about 70%. In other words, about 70% of my legitimate email was going into the spam folder along with 3000 spam messages.
Well, that made GMail's spam filter just about useless. It was time to see if I could figure out some ways to filter out some of this spam before it got to GMail, so that I could do occasional, manual false positive checks in the spam folder. So the first question is: "How is it possible to get 3000 spam messages a day?" That's easy. I have two domain names that send all email, regardless of address, to my GMail account. I've had these for many years and use them to create ad hoc "BACN" email addresses for signing up for new services. I'll call these domains my BACN domains and use BACN.com generically. I embed a standard code and the website's domain name into the email address so that if I start to get spam, I know who to blame (and block). For example, my email address might look like this: asdfa.newwebsite.com@BACN.com. The "asdfa" code (not what I really use) is a string that I've embedded with the thought that at some point I could use it to help with my spam filtering. That time is now!
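The addressing scheme is trivial to script. A toy sketch of minting one of these addresses, using the dummy "asdfa" code and the generic BACN.com stand-in from above:

```shell
# Mint a traceable BACN-style address for a new site.
# "asdfa" and BACN.com are dummies standing in for my real values.
code="asdfa"
site="newwebsite.com"
addr="${code}.${site}@BACN.com"
echo "$addr"   # asdfa.newwebsite.com@BACN.com
```

If spam ever shows up addressed to that string, the site name embedded in it tells me exactly who leaked it.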
I've learned a few things about spam from using these catchall BACN email setups. First, a number of websites have sold/given/lost their email lists to spammers. A couple that come to mind are Napster, Bicycle.com, and my local gas and electricity company. It is also very interesting to see just how much spam is sent to made-up email accounts. I see a lot of random-looking strings as account names. Others look like they might be an account name from some other domain with my BACN domain tacked on the end. Still others include HTML tags and attributes (like HREF or MAILTO) and are obviously due to HTML parsing errors when the spammers were trying to harvest email addresses from web pages.
Another factor in my large number of spam messages is that I manage several hundred domain names. Some are for my own projects, others are for clients, friends and relatives. A lot of these domains have legitimate email addresses that forward to me. I've yet to find any way to keep any email address spam free short of never telling anyone about it and not using it. Also, when registering these domains, they must have a legitimate contact email address and it's really important that I get any legitimate email that is sent to these accounts. I have 3 email addresses that are used for this purpose and so they end up in the public whois registration database entries for those domains. The whois database is a favorite place for spammers to harvest email addresses so these 3 addresses get spammed heavily.
So how to do some pretty brutal spam trimming? My solution is not for everyone. It involves Sendmail, Procmail, and an extra GMail account. I happen to have the luxury (and the associated maintenance overhead) of a dedicated Debian Linux server that handles some of my clients' email and all of my own. I could run SpamAssassin or other server-side spam filtering software, but I want to keep this simple to implement and manage. I've used these server-based spam filters in the past but found them to be overkill for a relatively small number of people, and spam filtering is not a service that I need to offer my clients. Most of the email that comes to this server just gets forwarded off to some other email account via a Sendmail virtusertable configuration file. Even my own email just gets forwarded to my GMail account. So my first line of defense against the spam was to create a local email account to which I forward all of my BACN email. I then implemented a procmail filter that would only forward mail that had the special code "asdfa" in the To address field. What gets forwarded is what I call potentially good BACN. What gets left behind is pure spam and is discarded. Here is an example of that filter with dummy data and email addresses inserted:
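A sketch of the kind of .procmailrc recipe I mean; the "asdfa" code, the BACN.com domain, and the destination address are all placeholders, and my real recipe differs in the details:

```procmail
# Forward anything addressed with the embedded code to the
# spam-filtering GMail account
:0
* ^TO_asdfa\..*@BACN\.com
! spamfilteraccount@gmail.com

# Everything else sent to the catchall is pure spam -- drop it
:0
/dev/null
```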
spamfilteraccount@gmail.com is not a real GMail account (at least it's not mine) but just a placeholder for my real, spam-filtering-only GMail account. I forward my potentially good BACN to this GMail account, along with my whois database email addresses and a few other heavily spammed accounts. I set up that GMail spam account to immediately forward all mail to my real GMail account. This only forwards messages that don't get caught in its spam filter. False positives in this stream of email are tolerable because this email is BACN plus some spam.
So now I have a 4-level spam filtering strategy: the procmail filter discards BACN email that lacks the embedded code, the spam-filtering GMail account strips most of what remains, my real GMail account's filter catches the rest, and occasional manual checks of the spam folders pick up false positives.
Over the last couple of months, I've gotten myself into Twittering more, trying to see how it could be a useful tool for me. I've found that by following a number of prolific Twitterers, I can keep my finger on the pulse of a number of subject areas. One of the problems I ran into is that I often feel that I'm missing out on half of the conversation, and I wanted to easily see the whole conversation around specific memes. That's when the concept of a tweme (Twitter meme) came up. A tweme is a tag that gets included in twitter posts about a particular meme. This makes it possible to look at twitter posts from the perspective of that meme and see what the whole twittersphere is saying about it.
As a first approximation of what viewing twemes would be like, I've created Twemes.com. Twemes.com shows the most recent twemes as extracted from the Twitter public status stream, as well as a "tweme cloud" of the most active twemes. You can also view and bookmark pages for specific twemes so that you can follow the twittersphere's thoughts on that meme.
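The core extraction idea can be sketched in a few lines of shell, assuming tags are written with a leading "#"; the sample statuses are made up, and the real Twemes.com parser is more involved than this:

```shell
# Toy tweme extraction: pull '#tag' tokens out of status text
# and count how often each tag appears
statuses='heading to #barcamp later today
anyone have a good #openid library?
more #barcamp hallway talk'
echo "$statuses" | grep -oE '#[[:alnum:]_]+' | sort | uniq -c | sort -rn
```

Run over the public status stream instead of three canned lines, the same counting gives you the raw material for a tweme cloud.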
I've been amazed at the progress of the OpenID and the lesser-known Yadis open specifications over the last year or so. While not talked about too much, I think that the Yadis standard has really helped to bring various parties to the table around the concept of using a URI (or URL) as a basic identifier for people. Yadis provides a simple way to allow a single URI to be used for many different identity and even non-identity services. I have a sense that as Yadis becomes more widely used, it will open the floodgates for new kinds of networked applications that will make Web 2.0 look quaint in comparison.
The podcast The Story of Digital Identity has been a great inspiration for ideas on this subject.
I recently read an article by Tim Bray about personal data backup. While the article did not have a lot of specifics about software to use, he did provide some very good guidelines to keep in mind.
In that spirit, I thought that I would share my own approach to backing up my personal data in my home environment.
To begin with, I should describe my home setup. My wife and I each have our own home offices with desktop computers. My desktop is running SuSE Linux 9.3 and my wife runs Windows XP Pro. The living room contains another Windows XP Pro machine that is our home entertainment center and contains a considerable amount (400GB) of music and video and is attached to our projector and stereo system.
We also have a "server closet" containing a variable number of PCs connected to a KVM switch and a single keyboard/monitor/mouse setup. In that closet there is always an old Debian Linux 3.1 machine, plus our routers, switches, and cable modem. There are generally a couple of other computers as well, depending on current projects.
At the moment I also have 4 computers sitting next to my desktop machine that are involved in the process of testing unattended system installs on refurbished computers for use by the BC Digital Divide.
That adds up to 10 computers in the house, but only 3 of them really have "useful" data on them that requires backup considerations: my desktop, my wife's desktop, and the media machine.
We use a combination of strategies to safeguard the data on our machines. The first is that we make a distinction between media of various types and other personal data. Media is kept on the media server and personal data is kept on our primary desktop machines. The one Windows XP primary desktop machine keeps all data to be backed up under its "Documents and Settings" directory tree, and that is the only part that is backed up. The rest of the system is considered to be easily replaceable.
On the SuSE machine the /etc, /home, and /root directories are backed up.
All personal data on the two primary desktop machines is backed up to two different locations every night using unattended scripts that are much too complex to talk about in this discussion. For both machines, a full backup is made as compressed archives to a Windows share on the media machine. Secondarily, rsync is used to synchronize the personal data with a dedicated Debian Linux server located in California. Using rsync keeps the bandwidth usage to a few tens of megabytes per day.
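Stripped of all the real complexity, the nightly run boils down to two steps. This is only a minimal sketch: the paths, archive name, and off-site host are made up, and the rsync step is shown commented out since it needs a real server:

```shell
# Step 1: full compressed archive, destined for the media machine's share.
# A demo source tree stands in for the real personal data directory.
SRC="$HOME/demo-personal-data"
mkdir -p "$SRC"
echo "sample document" > "$SRC/notes.txt"

ARCHIVE="/tmp/personal-backup.tar.gz"
tar -czf "$ARCHIVE" -C "$(dirname "$SRC")" "$(basename "$SRC")"

# Step 2: bandwidth-friendly off-site sync (hypothetical host)
# rsync -az --delete "$SRC/" backup@server.example.com:backups/personal/

# Sanity check: list the archive contents
tar -tzf "$ARCHIVE"
```

The archive step gives a single file that is easy to burn to disc; the rsync step only transfers changed files, which is what keeps the daily bandwidth so low.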
As it happens, the backup archive on the media machine is about 4.2 GiB so it just fits on a single DVD-RW. Each night after the desktops have completed doing a full backup to the media machine, that backup archive is burnt to a DVD-RW.
The DVD-RWs are rotated through a group of 6 discs, one for each day of the week. There is another set of 5 DVD-RWs that are additionally burnt on Mondays, so that we have weekly snapshots for the last 5 weeks. On top of that, we do an extra DVD-RW burn on or about the 1st of each month, which gives us monthly backups for the last 12 months. So with 23 rotated DVD-RWs we can find just about any version of any document over the last year.
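The bookkeeping for which discs to burn on a given night reduces to a tiny bit of logic. A sketch, with made-up label names (the real discs just have handwritten labels):

```shell
# Given the date, emit the labels of the discs to burn tonight:
# always the daily disc, plus the weekly disc on Mondays and the
# monthly disc on the 1st.
labels_for() {
  dow=$1    # day of week: Mon..Sun
  dom=$2    # day of month: 1..31
  week=$3   # weekly rotation slot: 1..5
  month=$4  # month: Jan..Dec
  echo "daily-$dow"
  if [ "$dow" = "Mon" ]; then
    echo "weekly-$week"
  fi
  if [ "$dom" -eq 1 ]; then
    echo "monthly-$month"
  fi
}

labels_for Mon 1 3 Apr   # a Monday that is also the 1st: three discs
```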
So that was the personal data. What do we do with the media data? That's just way too much data for a traditional DVD rotation. Instead, we break the media down into three groups: photos, audio, and video. The photos are kept in DVD-sized trees on the media server. Photos are kept in our personal data area until there is enough to dump them into the photo tree. When that is done, two copies of the photo data are made to DVD-RWs that back up those photos. That way we have 3 copies of the photos. Audio is treated the same way, but using a separate DVD set. We also have the CDs for most of this audio, but it is easier to burn the ripped audio than to re-rip it. Video is a little more complex. Most of the video is TV shows that we capture with Snapstream's Beyond TV. A lot of this content is just erased after viewing; most programs are just not worth keeping. Some content, movies and a few TV series, is offloaded to DVD-Rs. It is not turned into the format that DVD players require but is just left in the Windows Media Player or DivX format that we have it in.