Geoffrey Meredith
Thoughts on Technology


(posted on 15 Mar 2008)

I don't use the word "hate" very often.  I reserve that word for things that I dislike with a real passion, but email is becoming one of those things.  If you attempted to follow my previous posting about Controlling SPAM, you can guess why I have this passion.

I wish that I could give up email altogether.  I think that this will happen in the next few years, but at least at this point there is not a better alternative for most of the people that I communicate with.  I have found that Twitter and IM have become integral parts of my communications infrastructure, but they don't, and never will, come close to replacing the majority of my communications needs.  The long breaks in my blogging record suggest that blogging is not a good communications mechanism for me.  Most of the social networks out there just seem to add to the spam and privacy problems and don't really add much positive to my communications.  I'm just stuck with email for a while.

There are some good technologies out there to "fix" email.  DomainKeys and Sender Policy Framework (SPF) are two technologies that could do a lot to eliminate the problems with SPAM, but there is just too much inertia in the installed base of technology and administrator skill sets to actually get a critical mass of adoption.  If the weight of spam has not overcome this inertia by now, I don't think it ever will.
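
For what it's worth, publishing an SPF policy is just a matter of adding a DNS TXT record.  A minimal example (with a placeholder domain and IP address) might look like this:

example.com.    IN    TXT    "v=spf1 mx ip4:192.0.2.25 -all"

That one line says that only the domain's MX hosts and the listed address may send mail for example.com, and that everything else should fail.  DomainKeys goes further: outgoing mail gets signed, and the public half of the key is also published in DNS.  Neither is particularly hard to deploy on a single server.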

I think that the only thing that will fix the spam problem is something new that replaces email.  That new technology must have obvious benefits and have spam resistance built in from the beginning.  Early adopters will legitimize the technology and will eventually drag the rest of the world into using it.  We are seeing these kinds of shifts with the use of Facebook and Twitter, but the closed, centralized nature of both of these systems makes them inappropriate for the kind of mass adoption at the internet infrastructure level that is required to really replace email.  By the way, when I speak of "email" here, I'm referring to SMTP email.  I think that we will always have email in the sense of electronic mail, but it may be based on completely different underlying technology than the SMTP that we see today.

What will replace SMTP email?  That's a pretty tough question.  There doesn't seem to be anything with momentum on the horizon yet.  It is something that I've been thinking about, and it does tie into the OpenPersona idea that I've been playing with.  Maybe it will come out of that effort.

(posted on 15 Mar 2008)

I've been noticing that the amount of spam that I get has been going up.  Up until about a month ago, I was receiving about 1000 spam messages a day, but that has risen to about 3000 per day over the last week or so.  I have been using GMail for managing my email, and it had been great at filtering out this spam: virtually no false positives (good messages going into the spam folder) and about 1-2% false negatives (spam not getting put into the spam folder).  That left me with about 10-20 spam messages a day to deal with.  Not too much overhead.  Sometime over the last couple of days, Google must have changed their spam filters in some way.  I suspect it was in response to increasing levels of spam.  The net effect was that the false positives went from practically none to about 70%.  In other words, about 70% of my legitimate email was going into the spam folder along with 3000 spam messages.

Well, that made GMail's spam filter just about useless.  It was time to see if I could figure out some ways to filter out some of this spam before it got to GMail, so that I could do occasional, manual false positive checks in the spam folder.  So the first question is: "How is it possible to get 3000 spam messages a day?"  That's easy.  I have two domain names that send all email, regardless of address, to my GMail account.  I've had these for many years and use them to create ad hoc "BACN" email addresses for signing up for new services.  I'll call these domains my BACN domains and use BACN.com generically.  I embed a standard code and the website's domain name into the email address so that if I start to get spam, I know who to blame (and block).  For example, my email address might look like this: asdfa.newwebsite.com@BACN.com.  The "asdfa" code (not what I really use) is a string that I've embedded with the thought that at some point I could use it to help in my spam filtering.  That time is now!
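
To make the scheme concrete, here is a tiny sketch in Ruby of how such tagged addresses get built and recognized.  This is just an illustration, not the code I actually use, and BACN.com and "asdfa" are placeholders, as above:

# Just an illustration, not production code.
BACN_DOMAIN = "BACN.com"   # placeholder for one of my real BACN domains
SECRET_CODE = "asdfa"      # placeholder for the real embedded code

# Build a throwaway address to give to a new website.
def bacn_address(site)
  "#{SECRET_CODE}.#{site}@#{BACN_DOMAIN}"
end

# Anything arriving at a BACN domain without the code is spam by definition.
def potentially_good?(to_address)
  to_address.include?(SECRET_CODE)
end

bacn_address("newwebsite.com")   # => "asdfa.newwebsite.com@BACN.com"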

I've learned a few things about spam from using these catchall BACN email setups.  First, a number of websites have sold/given/lost their email lists to spammers.  A couple that come to mind are Napster, Bicycle.com, and my local gas and electricity company.  It is also very interesting to see just how much spam is sent to made up email accounts.  I see a lot of random-looking strings as email account names.  Others look like they might be an account name from some other domain with my BACN domain tacked on the end.  Others include HTML tags and attributes (like HREF or MAILTO) and are obviously due to HTML parsing errors when the spammers were trying to harvest email addresses from web pages.

Another factor in my large number of spam messages is that I manage several hundred domain names.  Some are for my own projects; others are for clients, friends and relatives.  A lot of these domains have legitimate email addresses that forward to me.  I've yet to find any way to keep an email address spam free short of never telling anyone about it and not using it.  Also, domain registrations must have a legitimate contact email address, and it's really important that I get any legitimate email that is sent to those addresses.  I have 3 email addresses that are used for this purpose, and so they end up in the public whois registration database entries for those domains.  The whois database is a favorite place for spammers to harvest email addresses, so these 3 addresses get spammed heavily.

So how to do some pretty brutal spam trimming?  My solution is not for everyone.  It involves Sendmail, Procmail and an extra GMail account.  I happen to have the luxury (and the associated maintenance overhead) of having a dedicated Debian Linux server that handles some of my clients' email and all of my email.  I could run SpamAssassin or other Linux server spam filtering software, but I want to keep this simple to implement and manage.  I've used these server based spam filters in the past but found them to be overkill for a relatively small number of users, and spam filtering is not a service that I need to offer my clients.  Most of the email that comes to this server just gets forwarded off to some other email account via a Sendmail virtusertable configuration file.  Even my own email just gets forwarded to my GMail account.  So my first line of defense against the spam was to create a local email account to which I forward all of my BACN.  I then implemented a procmail filter that only forwards mail that has the special code "asdfa" in the To address field.  What gets forwarded is what I call potentially good BACN.  What gets left behind is pure spam and is discarded.  Here is an example of that filter with dummy data and email addresses inserted:


# forward any message whose To: header contains the special code
:0
* ^To: .*asdfa.*
! spamfilteraccount@gmail.com


spamfilteraccount@gmail.com is not a real GMail account (at least it's not mine) but just a placeholder for my real, spam-filtering-only GMail account.  I forward my potentially good BACN to this GMail account, along with my whois database email addresses and a few other heavily spammed accounts.  I set up that GMail spam account to immediately forward all mail to my real GMail account.  It only forwards messages that don't get caught in its spam filter.  False positives in this stream of email are tolerable because this email is BACN plus some spam.

So now I have a 4 level spam filtering strategy.

  1. A sendmail virtusertable file that blocks some known spammed email addresses that I just don't need any more, like my bicycle.com website email address.  I also forward the email addresses that are my main contact addresses directly to my main GMail account.  This short-circuiting of the process reduces the chances of false positives, and even if there are some, they will show up in my main GMail account, which doesn't get too diluted by spam, so I can occasionally check for them.  (A sketch of such a virtusertable appears after this list.)
  2. BACN+spam is sent to a local email account that has a procmail filter to strip out all email that doesn't have "asdfa" in the To field.
  3. Potentially good BACN is sent to a special spam GMail account that is used to filter out real spam sent to BACN email addresses.
  4. Finally I use my main GMail account's spam filtering as a final line of defense but I can still check it for false positives.
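
Here is a rough sketch of what level 1 of that list looks like in the virtusertable source file.  All of the addresses and account names here are made up:

# drop an address that only receives spam now
bicycles@example.com       error:nouser No such user here

# short circuit a main contact address straight to my primary GMail account
geoff@example.com          myrealaccount@gmail.com

# everything else sent to a BACN domain goes to the local, procmail-filtered account
@BACN.com                  bacnfilter

The text file gets compiled into the actual map with makemap after each edit.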

I implemented this strategy about 3 hours ago.  The procmail filter has caught about 200 messages since then, all spam.  The GMail spam account has caught about 40 spam messages, all real spam sent to my BACN and whois accounts.  My main GMail account has caught 5 spam messages and missed one that I had to manually mark as spam.

That feels much better!

I have created a website for the discussion of Persona concepts and a set of protocols to allow them to communicate.  You can find it at OpenPersona.org.

There has been a lot of talk lately about being in control of your own data online. This talk has arisen due to the various websites that revolve around the concept of a social network. MySpace and Facebook are the two best known of these websites, but this is just the tip of the iceberg. Social networking online is not a new concept. To varying degrees, forum websites and, going back further, BBSes, CompuServe, The WELL, and newsgroups are all instances of very successful social networks. They may not have been as focused and structured around the networking aspects as websites such as Friendster, LinkedIn or Plaxo, but they still provided that functionality.

What is different now, especially when looking at a tool such as Facebook, is the sheer amount of concentrated data that a single company has collected about a large segment of the online population. That scares a lot of people. It scares me, and it is the reason that I've minimized my exposure to Facebook. To a lesser degree, I have this same issue with Google as well, particularly with respect to GMail.

I've been talking, although not blogging, about this issue for a couple of years and would have expected some serious progress towards addressing it by now. I often hear the mantra about "owning one's own data", but I have not seen a lot of progress other than being able to import/export data from various online tools and some ideas being generated on DataPortability.org.

So what have I been hoping to see develop in this space? I've been using the term "Persona" to describe a structured set of data and services that represent me or any individual online. I want my Persona to be completely under my control or delegated to a trusted service organization. Think "data analog to the banking system". I want that Persona to be my proxy to the online world as well as provide a window onto other Personas that interest me and provide a place for us to communicate and collaborate.

In a very real sense, I want to see the business model that Facebook is using turned inside out. I want to see a lot of smaller service providers that make it their business to protect the Personas that have been entrusted to them. I want protection from spammers, identity thieves and marketing messages that are not of interest to me. If I'm particularly paranoid or technically savvy, I want to be able to host and operate my own data and services so that I don't have to trust anyone.

This is just a first entry in what I hope will be a long series of posts on the topic of Persona. Stay tuned!
(posted on 13 Jan 2008)

Over the last couple of months, I've gotten myself into Twittering more, trying to see how it could be a useful tool for me.  I've found that by following a number of prolific Twitterers, I can keep my finger on the pulse of a number of subject areas.  One of the problems I ran into is that I often feel that I'm missing out on half of the conversation, and I wanted to easily see the whole conversation around specific memes.  That's when the concept of a tweme (Twitter meme) came up.  A tweme is a tag that gets included in Twitter posts about a particular meme.  This makes it possible to look at Twitter posts from the perspective of that meme and see what the whole twittersphere is saying about it.

As a first approximation of what viewing twemes would be like, I've created Twemes.com.  Twemes.com shows the most recent twemes as extracted from the Twitter public status stream, as well as a "tweme cloud" of the most active twemes.  You can also view and bookmark pages for specific twemes so that you can follow the twittersphere's thoughts on that meme.
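
The extraction itself is nothing fancy.  Here is a simplified sketch of the idea in Ruby, not the actual Twemes.com code, assuming for illustration that a tweme is marked with a leading "#" character in the tweet text:

# Pull the tweme tags out of a single tweet's text.
def extract_twemes(tweet_text)
  tweet_text.scan(/#(\w+)/).flatten.map { |tag| tag.downcase }.uniq
end

extract_twemes("Heading to #sxsw, will be #liveblogging the keynote")
# => ["sxsw", "liveblogging"]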

(posted on 12 Jul 2007)
I've decided to convert my infrequently updated blog from Wordpress to Typo.  Not because there is anything wrong with Wordpress.  In fact, I quite like Wordpress, but I wanted to try out Typo and I thought it would be educational to have a closer look at a real world Rails app.  The install process was pretty straightforward.  I created my own theme out of the scribbish theme provided.

I did find the way that Typo manages its theme system quite interesting.  While I was familiar with being able to programmatically set the layout for a controller or the whole application, what I really wanted was to be able to override any view template for a specific theme.  I've done this kind of thing in PHP, but I really wanted to do it "the Rails way".  What Typo does is quite elegant, although it's a bit risky.  Typo overrides the ActionView::Base function full_template_path, which produces an absolute path for a template.  By default this resolves to templates in the RAILS_ROOT/app/views directory, but Typo overrides it and sets up a list of search paths to try that includes RAILS_ROOT/themes/#{themename}.  There are a couple of other tricks to get the stylesheet, javascript and image directories to live in the theme directory.
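
The gist of the trick looks something like this.  This is a rough sketch from memory rather than Typo's actual code, and current_theme is a stand-in for however the active theme gets looked up:

module ActionView
  class Base
    # keep the original Rails resolution around
    alias_method :full_template_path_without_theme, :full_template_path

    def full_template_path(template_path, extension)
      # look in the active theme's view directory first...
      themed = "#{RAILS_ROOT}/themes/#{current_theme}/views/#{template_path}.#{extension}"
      return themed if File.exist?(themed)

      # ...otherwise fall back to the normal app/views lookup
      full_template_path_without_theme(template_path, extension)
    end
  end
end

The risk, of course, is that full_template_path is an internal Rails method, so a Rails upgrade that changes it could break the theme system.
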
(posted on 5 Feb 2007)

I've been amazed at the progress of the OpenID and the lesser known Yadis open specifications over the last year or so.  While not talked about too much, I think that the Yadis standard has really helped to bring various parties to the table around the concept of using a URI (or URL) as a basic identifier for people.  Yadis provides a simple way to allow a single URI to be used for many different identity and even non-identity services.  I have a sense that as Yadis becomes more widely used, it will unleash a floodgate of new kinds of networked applications that will make Web 2.0 look quaint in comparison.
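
For the curious, the Yadis document behind a URI (served directly, or pointed to by an X-XRDS-Location header) is just a small XRDS file listing the services available at that identifier.  A minimal example, with placeholder URLs, looks something like this:

<?xml version="1.0" encoding="UTF-8"?>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <Service priority="10">
      <Type>http://openid.net/signon/1.1</Type>
      <URI>https://openid.example.com/server</URI>
    </Service>
  </XRD>
</xrds:XRDS>

Adding another identity (or non-identity) service is just a matter of adding another Service element, which is what makes the single-URI approach so flexible.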

The podcast, The Story of Digital Identity has been a great inspiration for ideas on this subject.

(posted on 4 Jan 2007)
I've been playing with Rails a bit and it's been quite interesting.  I quite like the way that the framework is modeled.  It guides you nicely into a highly structured web application model but allows you to break out of this model when you feel the need.  I generally find that the default behaviours feel right.

I have run into a number of namespace clashes that produce mysterious errors.  Try adding an attribute named attributes to an ActiveRecord and see what I mean!
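
For example (a made-up model, just to illustrate the kind of thing that happens):

# Suppose the widgets table has a column literally named "attributes".
class Widget < ActiveRecord::Base
end

w = Widget.new
w.attributes            # ActiveRecord's own method, which returns the hash of
                        # all columns, so your own column is shadowed
w.attributes = "small"  # ActiveRecord treats this as a bulk assignment of a
                        # hash and fails in a thoroughly confusing way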

Obviously the "Programming Ruby" and "Agile Web Development with Rails" are must reads before you get started.  A lot of the API documentation has been pulled together at ruby-doc.org, api.rubyonrails.com and script.aculo.us.  I have found that the documentation to be just a little bit thin in places.  It seems to often be more suited as a refresher for experienced Ruby/Rails programmers and not so much to learn the APIs from.  So you often have to resort to google searches to find a blog entry where someone before you has had exactly the same problem understanding the concept or the interface.
(posted on 13 Dec 2006)
I've been following Rails for quite some time now and have finally come across a couple of new, small projects that might get some benefit from Rails' rapid, test-driven development framework.  One project is a quick prototype for demonstration purposes and the other is a social networking idea that I've had for quite some time.  If nothing else, these two projects should give me a better personal sense of how Rails development compares to other environments that I've used in the past.

From one perspective, it won't be a fair comparison because most other development environments that I've used in the past did not have well developed frameworks when I did the majority of my work in them.  In the mid to late '80s, I was writing code in C and assembler, mostly in MS-DOS.  A lot of these programs were TSRs (Terminate and Stay Resident) and extended memory based programs, so you were really working without any kind of net, let alone a framework.  You even had to rewrite parts of the libraries and startup code that came with the compiler just to make them work in these environments.

In the late '80s and early '90s we moved to C++ and started to do serious work in Windows.  It was refreshing in some sense because we were at least working in a mostly documented environment, although we quickly started hooking into Windows internals so that we could control other applications.  In the early '90s Microsoft brought out MFC, but that was worse than using the raw Win16 API.  At the beginning of one big project in '91 we did an extensive survey of all of the available frameworks for Windows and they all failed to impress.  Most suffered from poor performance, bloated size, restricted feature sets and stability problems.  So we built our own application framework (AF).  It used the Windows API as a model but abstracted the interfaces into a series of object models in C++.  Most importantly, it got rid of pointers.  Buffer overflows and bad pointers were almost completely eliminated; these had been the source of 95% of our bugs before that.  The AF made it a lot easier for non-Windows GUI programmers to write Windows code.  The AF worked so well that when Windows 95 and Windows NT came around and the Win32 API was available, it took us 2 weeks to update the AF, and all of our software ran flawlessly.  We maintained Win16 and Win32 versions of that software for years until people finally abandoned Windows 3.1.

In the mid '90s I moved into web development, writing in C, C++ and Perl.  There were some crude libraries to help with some aspects of CGI programming, but you were mostly on your own.  We tended to resort to running fairly traditional services that interacted with a web server via CGI connector scripts.  Java started to look like a real ray of hope.  It was a much cleaner, simpler language than C++ and the runtime overhead didn't seem too bad.  You could write desktop and web client apps, and the networking libraries were very rich.  Unfortunately, the user interface libraries generated ugly user interfaces.  I don't think that Java has recovered from that.  I suspect that is why Java now seems relegated to taking the place of COBOL in business/financial apps where UI has never been very important.

In '98 I started using PHP.  I kind of felt embarrassed using such a lightweight language.  It almost felt like I was using Visual Basic; not something that one admits to.  But the thing was, you could create simple web apps extremely quickly.  It wasn't fully object oriented and wasn't designed for programming-in-the-large, but there were a lot of web apps that needed writing that didn't need that.  You could also easily find all the documentation that you really needed in one place, the php.net website.  At the time, there weren't any good frameworks.  I'm not sure if there are any good PHP frameworks now.  The base API and libraries are at a pretty high level, so there is no really urgent need for a framework.  I've created my own libraries and patterns that work for the projects that I've been involved in.

In the last year or so, I've been looking around to see if I could improve upon PHP.  Java and .NET are looking pretty strong in enterprise environments, but I was looking for something that works well in the Web 2.0 world.  Python and Ruby were obvious candidates.  I spent some time with Python and liked the language, but found the various libraries to be awkward and buggy.  Documentation was scattered widely.  There seemed to be a lot of "let's see if we can reproduce what's been successful in Rails" talk.  So I thought, go for Rails and cut out the middle man!
(posted on 9 Feb 2006)

I recently read an article by Tim Bray about personal data backup. While the article did not have a lot of specifics about software to use, he did provide some very good guidelines to keep in mind.

In that spirit, I thought that I would share my own approach to backing up my personal data in my home environment.

To begin with, I should describe my home setup. My wife and I each have our own home offices with desktop computers. My desktop is running SuSE Linux 9.3 and my wife runs Windows XP Pro. The living room contains another Windows XP Pro machine that is our home entertainment center; it holds a considerable amount (400GB) of music and video and is attached to our projector and stereo system.

We also have a "server closet" containing a variable number of PCs running with connected to a KVM switch and a single keyboard/monitor/mouse setup. In that closet there is always an old Debian Linux 3.1 machine, our routers, switches and cable modem. There is generally a couple of other computers depending on current projects.

At the moment I also have 4 computers sitting next to my desktop machine that are involved in the process of testing unattended system installs on refurbished computers for use by the BC Digital Divide.

That adds up to 10 computers in the house but only 3 of them really have "useful" data on them that requires backup considerations. These are mine and my wife's desktop machines and the media machine.

We use a combination of strategies to safeguard the data on our machines. The first is that we make a distinction between media of various types and other personal data. Media is kept on the media server and personal data is kept on our primary desktop machines. The one Windows XP primary desktop machine keeps all data to be backed up under its "Documents and Settings" directory tree, and that is the only part that is backed up. The rest of the system is considered to be easily replaceable.

On the SuSE machine the /etc, /home, and /root directories are backed up.

All personal data on the two primary desktop machines is backed up to two different locations every night using unattended scripts that are much too complex to cover in this discussion. For both machines, a full backup is made as compressed archives to a Windows share on the media machine. Secondarily, rsync is used to synchronize the personal data with a Debian Linux dedicated server located in California. Using rsync keeps the bandwidth usage to a few tens of megabytes per day.
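
Boiled down to their essence, though, the scripts do two things each night, something along these lines (the paths, hostnames and account names are placeholders, and all of the rotation and error handling is left out):

# 1. full compressed archive onto the Windows share mounted from the media machine
tar czf /mnt/media-backup/desktop-backup.tar.gz /etc /home /root

# 2. incremental offsite sync to the dedicated server in California
rsync -az --delete /home/ backupuser@offsite.example.com:/backups/desktop/home/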

As it happens, the backup archive on the media machine is about 4.2 GiB so it just fits on a single DVD-RW. Each night after the desktops have completed doing a full backup to the media machine, that backup archive is burnt to a DVD-RW.

The DVD-RWs are rotated through a group of 6 disks, one for each day of the week. There is another set of 5 DVD-RWs that are additionally burnt on Mondays so that we have weekly snapshots for the last 5 weeks. On top of that we do an extra DVD-RW burn on or about the 1st of each month. This gives us monthly backups for the last 12 months. So with 23 rotated DVD-RWs we can find just about any version of any document over the last year.

So that was the personal data. What do we do with the media data? That's just way too much data to use a traditional DVD rotation. Instead we break the media down into three groups: photos, audio and video. The photos are kept in DVD-sized trees on the media server. Photos are kept in our personal data area until there are enough to dump them into the photo tree. When that is done, two copies of the photo data are burned to DVD-RWs that back up those photos. That way we have 3 copies of the photos. Audio is treated the same way but using a separate DVD set. We also have the CDs for most of this audio, but it is easier to burn the ripped audio than to re-rip it. Video is a little more complex. Most of the video is TV shows that we capture with Snapstream's Beyond TV. A lot of this content is just erased after viewing; most programs are just not worth keeping. Some content, movies and a few TV series, is offloaded to DVD-Rs. It is not converted into the format that DVD players require but is just left in the Windows Media Player or DivX format that we have it in.
