Geoffrey Meredith
Thoughts on Technology

Blog

I have created a website for the discussion of Persona concepts and a set of protocols to allow them to communicate.  You can find this at OpenPersona.org.
There has been a lot of talk lately about being in control of your own data online. This talk has arisen due to the various websites that revolve around the concept of a social network. MySpace and Facebook are the two best-known of these websites, but this is just the tip of the iceberg. Social networking online is not a new concept. To varying degrees, forum websites and, going back further, BBSes, CompuServe, The Well, and newsgroups are all instances of very successful social networks. They may not have been as focused and structured around the networking aspects as websites such as Friendster, LinkedIn or Plaxo, but they still provided that functionality.

What is different now, especially when looking at a tool such as Facebook, is the sheer amount of concentrated data that a single company has collected about a large segment of the online population. That scares a lot of people. It scares me, and it is the reason that I've minimized my exposure to Facebook. To a lesser degree, I have this same issue with Google as well, particularly with respect to GMail.

I've been talking, although not blogging, about this issue for a couple of years and would have expected some serious progress towards addressing it by now. I often hear the mantra about "owning one's own data" but I have not seen a lot of progress other than being able to import/export data from various online tools and some ideas being generated on DataPortability.org.

So what have I been hoping to see develop in this space? I've been using the term "Persona" to describe a structured set of data and services that represent me or any individual online. I want my Persona to be completely under my control or delegated to a trusted service organization. Think "data analog to the banking system". I want that Persona to be my proxy to the online world as well as provide a window onto other Personas that interest me and provide a place for us to communicate and collaborate.

In a very real sense, I want to see the business model that Facebook is using turned inside out. I want to see a lot of smaller service providers that make it their business to protect the Personas that have been entrusted to them. I want protection from spammers, data identity thieves and from marketing messages that are not of interest to me. If I'm particularly paranoid or technically savvy, I want to be able to host and operate my own data and services so that I don't have to trust anyone.

This is just a first entry in what I hope will be a long series of posts on the topic of Persona. Stay tuned!
(posted on 13 Jan 2008)

Over the last couple of months, I've gotten myself into Twittering more and trying to see how this could be a useful tool for me.  I've found that by following a number of prolific Twitterers, I can keep my finger on the pulse of a number of subject areas.  One of the problems I ran into is that I often feel that I'm missing out on half of the conversation, and I wanted to easily see the whole conversation around specific memes.  That's when the concept of a tweme (Twitter meme) came up.  A tweme is a tag that gets included in Twitter posts about a particular meme.  This makes it possible to look at Twitter posts from the perspective of that meme and see what the whole twittersphere is saying about it.

As a first approximation of what viewing twemes would be like, I've created Twemes.com.  Twemes.com shows the most recent twemes extracted from the Twitter public status stream, as well as a "tweme cloud" of the most active twemes.  You can also view and bookmark pages for specific twemes so that you can follow the twittersphere's thoughts on that meme.
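To make the idea a bit more concrete, here is a rough sketch in Ruby of the extraction step.  The tag pattern, method names and data shapes are my own illustration, not the actual Twemes.com code.

    # Pull twemes (tags of the form #something) out of a batch of statuses
    # and index the statuses by tag. Field names are assumed for illustration.
    TWEME_PATTERN = /#(\w+)/

    def twemes_in(text)
      text.scan(TWEME_PATTERN).flatten.map { |tag| tag.downcase }
    end

    def index_by_tweme(statuses)
      index = Hash.new { |hash, key| hash[key] = [] }
      statuses.each do |status|
        twemes_in(status[:text]).each { |tag| index[tag] << status }
      end
      index
    end

A "tweme cloud" then falls out of this almost for free: sort the tags by how many recent statuses mention them.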

(posted on 12 Jul 2007)
I've decided to convert my infrequently updated blog from Wordpress to Typo.  Not because there is anything wrong with Wordpress; in fact, I quite like Wordpress, but I wanted to try out Typo and I thought it would be educational to have a closer look at a real-world Rails app.  The install process was pretty straightforward.  I created my own theme out of the scribbish theme provided.

I found the way that Typo manages its template system quite interesting.  While I was familiar with being able to programmatically set the layout for a controller or the whole application, what I really wanted was to be able to override any view template for a specific theme.  I've done this kind of thing in PHP but I really wanted to do this "the Rails way".  What Typo does is quite elegant, although it's a bit risky.  Typo overrides the ActionView::Base function full_template_path, which produces an absolute path for a template.  By default this resolves to templates in the RAILS_ROOT/app/views directory, but Typo overrides it to try a list of search paths that includes RAILS_ROOT/themes/#{themename}.  There are a couple of other tricks to get the stylesheet, javascript and image directories to live in the theme directory.
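A minimal sketch of that kind of override looks something like the following.  The exact signature of full_template_path varies with the Rails version, and current_theme here is a stand-in for however the active theme name gets looked up, so treat this as an illustration of the approach rather than Typo's actual code.

    # Reopen ActionView::Base and teach template resolution about theme directories.
    module ActionView
      class Base
        # Keep the stock resolver around so we can fall back to it.
        alias_method :default_full_template_path, :full_template_path

        def full_template_path(template_path, extension)
          # Try the active theme's copy of the view first...
          themed = "#{RAILS_ROOT}/themes/#{current_theme}/views/#{template_path}.#{extension}"
          return themed if File.exist?(themed)

          # ...then fall back to the usual RAILS_ROOT/app/views resolution.
          default_full_template_path(template_path, extension)
        end
      end
    end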
(posted on 5 Feb 2007)

I've been amazed at the progress of the OpenID and the lesser-known Yadis open specifications over the last year or so.  While not talked about too much, I think that the Yadis standard has really helped to bring various parties to the table around the concept of using a URI (or URL) as a basic identifier for people.  Yadis provides a simple way to allow a single URI to be used for many different identity and even non-identity services.  I have a sense that as Yadis becomes more widely used, it will open the floodgates for new kinds of networked applications that will make Web 2.0 look quaint in comparison.
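As a rough illustration of what "one URI, many services" means in practice, the discovery step boils down to asking the URI for its XRDS service document.  The sketch below is my own simplification (no error handling, and the HTML meta-tag fallback is omitted), not code from any particular library.

    require 'net/http'
    require 'uri'

    # Ask a claimed identifier URL for its Yadis (XRDS) service document,
    # which lists the identity and non-identity services attached to that URI.
    def fetch_xrds(identifier)
      uri = URI.parse(identifier)
      response = Net::HTTP.start(uri.host, uri.port) do |http|
        http.get(uri.request_uri, 'Accept' => 'application/xrds+xml')
      end

      if location = response['X-XRDS-Location']
        # The server pointed us at a separate XRDS document; follow it.
        fetch_xrds(location)
      else
        response.body # Either the XRDS itself or an HTML page to scan for it.
      end
    end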

The podcast, The Story of Digital Identity has been a great inspiration for ideas on this subject.

(posted on 4 Jan 2007)
I've been playing with Rails a bit and it's been quite interesting.  I quite like the way that the framework is modeled.  It guides you nicely into a highly structured web application model but allows you to break out of this model when you feel the need.  I generally find that the default behaviours feel right.

I have run into a number of namespace clashes that produce mysterious errors.  Try adding an attribute named attributes to an ActiveRecord model and see what I mean!
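To make the clash concrete, here's a hypothetical table and model (the names are mine, purely for illustration).  ActiveRecord already defines an attributes method that returns the hash of all column values, so a column whose generated accessor has the same name collides with machinery the framework itself depends on.

    # Hypothetical migration: a column literally named "attributes".
    class CreateWidgets < ActiveRecord::Migration
      def self.up
        create_table :widgets do |t|
          t.column :attributes, :string # clashes with ActiveRecord::Base#attributes
        end
      end
    end

    class Widget < ActiveRecord::Base; end

    widget = Widget.new
    widget.attributes          # ActiveRecord expects a Hash of every column here...
    widget.attributes = 'red'  # ...so treating it as a plain string column misbehaves,
                               # and the resulting errors point nowhere near the real cause.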

Obviously, "Programming Ruby" and "Agile Web Development with Rails" are must-reads before you get started.  A lot of the API documentation has been pulled together at ruby-doc.org, api.rubyonrails.com and script.aculo.us.  I have found the documentation to be just a little bit thin in places.  It often seems more suited as a refresher for experienced Ruby/Rails programmers than as a way to learn the APIs.  So you often have to resort to Google searches to find a blog entry where someone before you has had exactly the same problem understanding the concept or the interface.
(posted on 13 Dec 2006)
I've been following Rails for quite some time now and have finally come across a couple of new, small projects that might get some benefit from Rails' rapid, test-driven development framework.  One project is a quick prototype for demonstration purposes and the other is a social networking idea that I've had for quite some time.  If nothing else, these two projects should give me a better personal sense of how Rails development compares to other environments that I've used in the past.

From one perspective, it won't be a fair comparison because most other development environments that I've used did not have well-developed frameworks when I did the majority of my work in them.  In the mid-to-late '80s, I was writing code in C and assembler, mostly in MS-DOS.  A lot of these programs were TSRs (Terminate and Stay Resident) and extended-memory-based programs, so you were really working without any kind of net, let alone a framework.  You even had to rewrite parts of the libraries and startup code that came with the compiler just to make them work in these environments.

In the late '80s and early '90s we moved to C++ and started to do serious work in Windows.  It was refreshing in some sense because we were at least working in a mostly documented environment, although we quickly started hooking into Windows internals so that we could control other applications.  In the early '90s Microsoft brought out MFC, but that was worse than using the raw Win16 API.  At the beginning of one big project in '91 we did an extensive survey of all of the available frameworks for Windows and they all failed to impress.  Most suffered from poor performance, bloated size, restricted feature sets and stability problems.  So we built our own application framework (AF).  It used the Windows API as a model but abstracted the interfaces into a series of object models in C++.  Most importantly, it got rid of pointers.  Buffer overflows and bad pointers, which had been the source of 95% of bugs before that, were almost completely eliminated.  The AF made it a lot easier for non-Windows GUI programmers to write Windows code.  The AF worked so well that when Windows 95 and Windows NT came around and the Win32 API was available, it took us two weeks to update the AF, and all of our software ran flawlessly.  We maintained Win16 and Win32 versions of that software for years until people finally abandoned Windows 3.1.

In the mid '90s I moved into web development, writing in C, C++ and Perl.  There were some crude libraries to help with some aspects of CGI programming, but you were mostly on your own.  We tended to resort to running fairly traditional services that interacted with a web server via CGI connector scripts.  Java started to look like a real ray of hope.  It was a much cleaner, simpler language than C++ and the runtime overhead didn't seem too bad.  You could write desktop and web client apps, and the networking libraries were very rich.  Unfortunately, the user interface libraries generated ugly user interfaces.  I don't think that Java has recovered from that.  I suspect that is why Java now seems relegated to taking the place of COBOL in business/financial apps where UI has never been very important.

In '98 I started using PHP.  I kind of felt embarrassed using such a lightweight language.  It almost felt like I was using Visual Basic, not something that one admits to.  But the thing was, you could create simple web apps extremely quickly.  While it wasn't fully object-oriented and wasn't designed for programming-in-the-large, there were a lot of web apps that needed writing that didn't need that.  You could also easily find all the documentation that you really needed in one place, the php.net website.  At the time, there weren't any good frameworks.  I'm not sure if there are any good PHP frameworks now.  The base API and libraries are at a pretty high level, so there is no really urgent need for a framework.  I've created my own libraries and patterns that work for the projects that I've been involved in.

In the last year or so, I've been looking around to see if I could improve upon PHP.  Java and .NET are looking pretty strong in enterprise environments, but I was looking more for something that works well in the Web 2.0 world.  Python and Ruby were obvious candidates.  I spent some time with Python and liked the language, but found the various libraries to be awkward and buggy.  Documentation was scattered widely.  There seemed to be a lot of "let's see if we can reproduce what's been successful in Rails" talk.  So I thought, go for Rails and cut out the middleman!
(posted on 9 Feb 2006)

I recently read an article by Tim Bray about personal data backup. While the article did not have a lot of specifics about software to use, he did provide some very good guidelines to keep in mind.

In that spirit, I thought that I would share my own approach to backing up my personal data in my home environment.

To begin with, I should describe my home setup. My wife and I each have our own home offices with desktop computers. My desktop is running SuSE Linux 9.3 and my wife runs Windows XP Pro. The living room contains another Windows XP Pro machine that is our home entertainment center; it holds a considerable amount (400GB) of music and video and is attached to our projector and stereo system.

We also have a "server closet" containing a variable number of PCs connected to a KVM switch and a single keyboard/monitor/mouse setup. In that closet there is always an old Debian Linux 3.1 machine, along with our routers, switches and cable modem. There are generally a couple of other computers as well, depending on current projects.

At the moment I also have 4 computers sitting next to my desktop machine that are involved in the process of testing unattended system installs on refurbished computers for use by the BC Digital Divide.

That adds up to 10 computers in the house, but only 3 of them really have "useful" data on them that requires backup considerations: my desktop, my wife's desktop, and the media machine.

We use a combination of strategies to safeguard the data on our machines. The first is that we make a distinction between media of various types and other personal data. Media is kept on the media server and personal data is kept on our primary desktop machines. The Windows XP primary desktop machine keeps all data to be backed up under its "Documents and Settings" directory tree, and that is the only part that is backed up. The rest of the system is considered to be easily replaceable.

On the SuSE machine the /etc, /home, and /root directories are backed up.

All personal data on the two primary desktop machines is backed up to two different locations every night using unattended scripts that are much too complex to cover in this discussion. For both machines, a full backup is made as compressed archives to a Windows share on the media machine. Secondly, rsync is used to synchronize the personal data with a dedicated Debian Linux server located in California. Using rsync keeps the bandwidth usage to a few tens of megabytes per day.
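Just to give a flavour of the off-site half (the real scripts do much more), a stripped-down nightly rsync step might look something like the sketch below; the host, user and paths are placeholders, not my actual setup.

    #!/usr/bin/env ruby
    # Mirror the directories that matter to the off-site server each night.
    DIRS   = %w[/etc /home /root]
    REMOTE = 'backup@offsite.example.com:/srv/backups/desktop'

    DIRS.each do |dir|
      # -a preserves permissions/ownership/times, -z compresses over the wire,
      # --delete keeps the remote copy an exact mirror of the local tree.
      system('rsync', '-az', '--delete', dir, REMOTE) or
        warn "rsync failed for #{dir}"
    end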

As it happens, the backup archive on the media machine is about 4.2 GiB, so it just fits on a single DVD-RW. Each night, after the desktops have completed their full backups to the media machine, that backup archive is burnt to a DVD-RW.

The DVD-RWs are rotated through a group of 6 disks, one for each day of the week. There is another set of 5 DVD-RWs that are additionally burnt on Mondays so that we have weekly snapshots for the last 5 weeks. On top of that, we do an extra DVD-RW burn on or about the 1st of each month, which gives us monthly backups for the last 12 months. So with 23 rotated DVD-RWs we can find just about any version of any document over the last year.
So that was the personal data. What do we do with the media data? That is just way too much data to use a traditional DVD rotation. Instead, we break the media down into three groups: photos, audio and video.

The photos are kept in DVD-sized trees on the media server. Photos stay in our personal data area until there are enough to dump into the photo tree. When that is done, two copies of the photo data are burnt to DVD-RWs, so we end up with 3 copies of the photos. Audio is treated the same way but using a separate DVD set. We also have the CDs for most of this audio, but it's easier to burn the ripped audio than to re-rip it.

Video is a little more complex. Most of the video is TV shows that we capture with SnapStream's Beyond TV. A lot of this content is just erased after viewing; most programs are just not worth keeping. Some content, movies and a few TV series, is offloaded to DVD-Rs. They are not converted into the format that DVD players require but are just left in the Windows Media Player or DivX format that we have them in.

(posted on 8 Dec 2005)

Google Labs has opened up a beta of their transit trip planning tool at http://google.com/transit. It generates very detailed directions about how to get from one point to another using public transit. It provides transit travel time, walking times and distances, as well as a comparison with the cost and travel time of driving. A tool like this could really help people make a well-informed decision about whether to drive or take public transit.

Too bad it currently only works for Portland, Oregon.

(posted on 30 Oct 2005)

I am starting to warm up to Python after my initial issue with indentation controlling the blocking of code. My general sense is that it is an elegant, modern mixture of Basic and Perl. I'm sure that experienced Python programmers would cringe at that comparison, but that is what initially came to mind.

My next "shock" was with comparison expressions. For instance, the expression a < b == c tests whether a is less than b and also that b equals c. I would have coded that as a < b and b == c. Another cool comparison is 'ABC' < 'C' < 'Pascal' < 'Python', where each pairwise less-than comparison must be true for the whole expression to be true.

older blog items...