Geoffrey Meredith
Thoughts on Technology

Blog

(posted on 13 Jul 2008)

I posted a job ad on Craigslist.org yesterday that quickly got flagged and removed.  You don't get any indication as to why an ad was removed, just a link to a forum where you can post the details of your ad and get suggestions as to what you did wrong.  I had no idea why my ad was pulled, so I spent some time on that forum looking for clues.  Nothing in the examples I saw there helped me understand what was wrong with my ad, so I posted my ad to the forum and awaited responses.  I only received one comment about my ad.  The comment was sarcastic and suggested that the compensation was only appropriate for a third-world country.  Maybe I'm cheap or out of touch with salary expectations (or both), but I do think that there would have been people interested in responding to the ad.

The responses to a number of other rejected ads seem to expose either a personal or political agenda.  People just didn't like the ads because the ads didn't properly address their political ideals, even if those ideals were tangential to the posting.  It became obvious that to post in any particular category and city, you have to abide by mostly unwritten "community standards".  Whose standards are those "community standards"?  It's not the community of people in that city interested in that category; it's the much smaller community that combs through Craigslist ads looking for ones that don't live up to their standards.  That to me sounds like a vigilante mob; the darker side of crowdsourcing.

I don't think that it has to be this way.  I think there are technological solutions to improving moderation on websites like Craigslist.  Meta-moderation, the kinds of sophisticated algorithms used by websites like Slashdot.org and Digg.com, or even just more descriptive flagging could raise moderation beyond the level of vigilantism.
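To give a rough idea of what I mean, here's a minimal sketch of reputation-weighted flagging (the names and thresholds are made up, and this isn't how Craigslist, Slashdot or Digg actually work):

```python
# A minimal sketch of reputation-weighted flagging: hypothetical names and
# thresholds, not how any real site actually works.

FLAG_REASONS = {"spam", "miscategorized", "prohibited", "offensive"}
REMOVAL_THRESHOLD = 5.0

def flag_weight(flagger_reputation):
    """Flags from users with a good meta-moderation history count for more."""
    return max(0.1, min(2.0, flagger_reputation))

def should_remove(flags):
    """flags is a list of (reason, flagger_reputation) tuples."""
    score = 0.0
    for reason, reputation in flags:
        if reason in FLAG_REASONS:      # ignore free-form, reasonless flags
            score += flag_weight(reputation)
    return score >= REMOVAL_THRESHOLD

# Example: a handful of low-reputation flaggers can't remove a posting by
# themselves, but a broad consensus can.
flags = [("spam", 0.2), ("spam", 0.3), ("offensive", 0.2)]
print(should_remove(flags))  # False
```

The point isn't the particular numbers; it's that requiring a stated reason and weighting flaggers by their track record would make it a lot harder for a small group with an agenda to remove whatever it pleases.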

Of course, this may be the way that Craigslist wants their website to behave.  It's certainly within their rights to do so.  I actually see an opportunity here for someone to build a better Craigslist.

(posted on 26 Jun 2008)
Among the many things that I look after, I manage the email for several hundred domain names.  A large portion of these domains are for individual artist websites and thus have only a couple of actual email addresses.  In most cases I just forward any inbound email to the artist's ISP or web mail account.  We don't filter that email in any way, so everything, including spam, gets forwarded to the owner's actual email address.  We have avoided spam filtering because most people already have spam filtering on their email account, so our filtering would not be beneficial for the recipient.  I'm sure that the spam filtering efforts by GMail, Comcast, Yahoo, etc. are much better than anything I'm going to be able to implement.

Over the last month or so we've started running into issues with Comcast and AT&T blocking all email from our servers because they receive what they consider spam from our servers.  We have gotten our servers unblocked, but today Comcast has blocked us again.  So, to be able to deliver email to Comcast, we have to "clean" all email that passes through our servers.  We have no idea what the triggers are for Comcast to block a server.  The barrier is likely fairly low, as we don't have all that much email traffic in total.  So to keep our standing with Comcast, we will have to be brutal.  We will have to treat any email that might possibly be spam as spam and bounce it.  If even a tiny percentage of spam gets through our filters, we might get blocked again.  The net effect is that some legitimate email will be bounced.
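To make that concrete, here's roughly the kind of rule I'm talking about (a minimal Python sketch; the X-Spam-Score header name and the 3.0 threshold are just illustrative, not our actual configuration):

```python
# A rough sketch of the "brutal" policy described above: bounce anything with
# even a modest spam score instead of tagging it and forwarding it on.
# Assumes messages have already been scored by a filter such as SpamAssassin;
# the header name and threshold here are illustrative only.

from email import message_from_bytes

REJECT_THRESHOLD = 3.0   # far stricter than the usual default of 5.0

def should_forward(raw_message: bytes) -> bool:
    msg = message_from_bytes(raw_message)
    try:
        score = float(msg.get("X-Spam-Score", "0"))
    except ValueError:
        score = 0.0
    # Anything that even might be spam gets bounced rather than tagged.
    return score < REJECT_THRESHOLD
```

A threshold that low will inevitably catch some legitimate mail, which is exactly the problem.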

While neither we nor our customers are doing anything wrong, Comcast is forcing us not just to tag potential spam as spam but to block it entirely.  Essentially they are pushing their problems onto us.

The net effect of all this is that Comcast will be forcing many smaller operations that process smaller amounts of email to find their own solutions to the "Comcast" email problem.  Each operator will find its own way, which will cost, in aggregate, thousands, maybe millions of man-hours of effort and will, on net, reduce the percentage of legitimate email delivered successfully.  Spam has made email less useful, but these efforts by Comcast will be adding some of the last few nails to the email coffin.  I'd love to see email disappear, but it won't until something better takes its place.

My previous post about the Internet Operating System compared the Internet's structure and operation to a stand-alone computer.  In this post I want to take that another step further.

I have the feeling that the state of the Internet now is much like stand-alone computers were just before the introduction of the IBM PC in 1981.  A lot of the pieces of the PC revolution were there, but no one had quite put them all together.  What the PC did was put control of serious computing resources into the hands of individuals.  We are now waiting for the Internet analog to this revolution.  I think that will happen when people control their own data on the internet.

That data control is not just the "Data Portability" vision of being able to copy data from one walled garden to the next, but the ability to store your data in a single datastore that you choose and control completely.  You can then allow selective access to your data by the external services that you want to use.  I think that Amazon.com's S3 is the start of the kind of service where you could store data.  Not that S3 has the complete functionality required to support this model, but it could be built on top of S3.  Having your own datastore is like being in control of the hard drive on your computer.  You load applications and tell those applications what data to work with.  In the same way, you could allow a web-based service such as Adobe Photoshop Express to access some photos in your datastore, do some online processing and, after it's done, store the results back to your datastore.  You can already do this with your photos stored on Flickr and a couple of other photo sites.  Adobe's got the right idea, but there is no open protocol that would allow them to reach the photos on my own personal server.
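As a rough sketch of what that selective access could look like on top of S3 (using the boto3 Python library purely for illustration; the bucket and key names are hypothetical, and a real open protocol would need much more than this):

```python
# A minimal sketch of granting a web service temporary, read-only access to
# one object in a personal datastore built on S3.  The bucket and key names
# are hypothetical, and this is only one possible mechanism.

import boto3

s3 = boto3.client("s3")

def grant_temporary_access(bucket: str, key: str, minutes: int = 15) -> str:
    """Return a pre-signed URL that an external service (say, an online
    photo editor) can use to fetch this one object, for a limited time,
    without ever seeing the owner's credentials."""
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=minutes * 60,
    )

url = grant_temporary_access("my-personal-datastore", "photos/banff-2008.jpg")
print(url)  # hand this URL to the service you want to use
```

The key idea is that the grant is per-object and time-limited, and it comes from my datastore rather than from the service's own walled garden.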

In a similar vein, we have Facebook, Google, Yahoo, Microsoft and many smaller players fighting over control of "the social graph".  The "right" way to handle this is to allow me to store and control my part of the social graph and then selectively allow other services to have access to it.  There would no longer be a need to give some new tool the account credentials for your GMail, Facebook and other services.  Just point them at your datastore and tell your datastore what personal data the service can have.

This model really is the holy grail of social computing from a user's perspective.  It's deadly from a social aggregator's perspective (such as Facebook's), as there isn't much left for them once the user is rescued from their lock-in.  I also see this as a significant component of the next version of the Internet Operating System.

For many years, there has been talk about the coming "Internet OS".  Given Google's current dominance on the internet, many feel that it will come from them and often call it "The Google OS" (not to be confused with Goobuntu, Google's internal Ubuntu-based OS).  The thing is, the Internet OS is already here and has been for roughly 30 years.  TCP/IP is fundamentally the internet operating system.

In the stand-alone computer world, an operating system is the software that manages and ties together the hardware components, usually via hardware drivers.  The operating system exposes APIs that allow application programs to interact with the hardware, and it manages the lifecycle of the applications themselves.  Over time, computer operating systems have grown by adding layers over this fundamental "kernel".  These new APIs provide rich, event-driven presentation layers such as Windows and Mac OS.  They also provide such things as file systems, security, etc.

Since the '70s, TCP/IP has been the operating system kernel that has tied together the various bits and pieces of the internet (computers, switches, routers, etc.).  This has allowed a variety of protocols to be developed that are roughly the equivalent of an operating system API.  In fact, from a software point of view, most programs use the protocols via an API that implements the protocol.  Once the TCP/IP kernel was created, a set of protocols developed on top of it.  These are the APIs of the Internet OS.  Those protocols started out with basic file transfers but soon added SMTP email, remote login and, more recently, the HTTP web, VoIP and, to some extent, instant messaging.  [There are lots of other protocols in there, but most haven't become mainstream or, as in the example of DNS, we don't often think of them until they break.]
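To make the protocols-as-APIs point concrete, here's a small Python sketch (the host names are placeholders) showing application code talking to the Internet OS through protocol libraries rather than raw TCP/IP:

```python
# Application programs rarely speak raw TCP/IP; they call a library that
# implements a protocol on top of it.  Host names below are placeholders.

import smtplib
from http.client import HTTPConnection

def send_note(smtp_host: str = "mail.example.com") -> None:
    """SMTP: the 'email API' of the Internet OS."""
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.sendmail("me@example.com", ["you@example.com"],
                      "Subject: hello\r\n\r\nSent through the SMTP protocol API.")

def fetch_front_page(web_host: str = "www.example.com") -> int:
    """HTTP: the 'web API' of the Internet OS."""
    conn = HTTPConnection(web_host)
    conn.request("GET", "/")
    return conn.getresponse().status
```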

An operating system is not all that useful if it's just a bunch of APIs.  You have utility programs and shells to allow people to interact with it, and you have major applications such as word processors and spreadsheet software.  In the same way, we have SMTP protocol clients such as Outlook and Thunderbird, HTTP clients such as Internet Explorer, Firefox, Safari and Opera, etc.  We also have lots of utilities, such as ping and traceroute, that provide direct interaction with TCP/IP.

So what we are living with now is Version X (no one has been keeping count) of the Internet.  What people are eagerly awaiting is the development of new protocols that will allow us to get beyond the "old" models of interaction on the internet that involve email, web pages and basic media viewing.  We've come a long way in creating great interaction models with these basic protocols.  Web 2.0 has started to give us insight into what the future possibilities are, but we need to take these ideas and encode them into internet-scale protocols.

I'll follow up this post with some ideas about what I see as the emerging, next-generation technologies and protocols that will make up the Internet OS Version X+1.

(posted on 30 May 2008)
I came across a new "game" being played related to domain names.  I was in a position to attempt to capture a domain that had expired and was about to be deleted.  I had the .ca version of that domain and wanted to get the .com that was just about to become available.  I employed a service that is designed to capture these domains as soon as they become available.  I've done this before successfully, so I was hopeful that this one would happen without a hitch.

Well, the service announced that the domain had been captured by a registrar called "namevolcano.com".  I knew that a lot of domain spammers capture newly deleted names to "taste" them for their revenue-generation prospects.  They usually drop these domains within 5 days so that they don't have to pay for them.  I guess they are looking for domains that will generate more revenue than the roughly $6/yr that they end up paying to register them.  So I was hopeful that I still might get a crack at the domain.

About 4 days after the domain was captured by "namevolcano.com", I got an email trying to "sell" me the domain, suggesting that since I had the .net variant (I didn't) I really should buy this .com from the sender, for the low price of only $557!  I was careful at the time not to follow any of the links in the email, as that might have shown interest to the sender and they might have kept the domain.  I didn't respond.

About 12 hours after I received the email, the domain changed hands to a registrar called "vibrant networks".  After two days, I received a second email, substantially the same as the first, although reminding me that this was their second email on the subject.

After the second 5-day tasting period ran out, I finally captured the domain.  I then went back and started checking the links provided in the email.  This is how I got the $557 purchase price, as it wasn't actually in the email.

An interesting fact is that the whois server for the second taster was whois.itimemarketing.com, which is the same domain that the link in the initial email pointed at.  So these two domain registrars seem to be related and were playing some kind of tag-team game.  I guess they figured that they needed 10 days to try to "sell" the domain to me.
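(For anyone who wants to do this kind of digging themselves, whois is a very simple protocol: send a domain name over TCP port 43 and read the reply.  Here's a quick Python sketch; the default server below is just the one from this story.)

```python
# Querying a whois server directly: the protocol is just "send a domain name,
# read the reply" over TCP port 43.

import socket

def whois(domain: str, server: str = "whois.itimemarketing.com") -> str:
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# print(whois("example.com"))
```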

I've put the words "game" and "sell" in quotes above as I actually consider this activity to be something in the neighborhood of a scam, verging on extortion.  Somehow, I think that this kind of activity is against the terms of service that registrars must follow to be accredited by ICANN.  ICANN has really got to clean up the mess of scammers that are posing as domain registrars.  In their brazenness, these guys make the oil, gas and electricity market manipulators look tame by comparison.

The one takeaway I can suggest from this experience is that if you run into a similar situation, don't do anything to raise the scammer's hopes of actually selling you the domain.  Showing interest reduces the chances that they will just let the domain go and give you a real shot at getting it.

(posted on 28 May 2008)
I received an interesting message yesterday from a developer using Microsoft's Silverlight (I suspect a Microsoft employee).  He was trying to read an RSS feed from the Twemes.com website but couldn't because Twemes.com did not have a cross-domain policy file.  My immediate thought was "What's Microsoft attempting to do to RSS?"  It felt like some kind of Trojan Horse, sneaking in with the Silverlight runtime.

My "expertise" in Silverlight cross-domain policy requirements consists of about 10 minutes reading the provided references, so I could be completely wrong about all of this but here are my concerns about using this for RSS.

Microsoft seems to have modeled this on Adobe's cross-domain policy file (/crossdomain.xml) and will fall back to that file if it doesn't find its preferred /clientaccesspolicy.xml.  The idea is that client software that supports the policy file will use it to decide whether the content on a given website is allowed to be used by the client.  So for the Adobe Flash or MS Silverlight runtimes, it's a way to prevent someone from creating an application that accesses resources from a website that does not explicitly give it permission.  (I'm assuming that this is a technical permission and does not assign copyrights, but I'm Not A Lawyer.)
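Conceptually, a client runtime is expected to look for one of these policy files on the target domain before touching its resources.  Here's a simplified Python sketch of that check (it only tests whether a policy file exists; the real runtimes also parse the policy to see whether the requesting domain is allowed):

```python
# A simplified sketch of the cross-domain check described above: before a
# client-side runtime fetches a resource (like Twemes.com's RSS feed) from
# another domain, it looks for a policy file on that domain.

from urllib.request import urlopen
from urllib.error import HTTPError, URLError

POLICY_FILES = ["/clientaccesspolicy.xml",   # Silverlight's preferred file
                "/crossdomain.xml"]          # Flash-style fallback

def has_cross_domain_policy(host: str) -> bool:
    for path in POLICY_FILES:
        try:
            with urlopen(f"http://{host}{path}", timeout=10) as resp:
                if resp.status == 200:
                    return True
        except (HTTPError, URLError):
            continue
    return False

print(has_cross_domain_policy("twemes.com"))
```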

I don't know how effective this has been for controlling cross-domain usage of Flash resources, but it seems superficially viable, especially with the Flash file formats and players that were at one time proprietary (are they still?).  This could provide for a type of DRM, regardless of its effectiveness.

The problem with applying this kind of DRM to RSS is that, in some respects, an RSS file *is* a content policy file.  It kind of says: "Instead of scraping data from my website's HTML pages, I'll give you this data in a nice machine-readable format so you will get it right and so I can have some say in what is presented and how."  By publishing an RSS feed, we are saying: you can use the data in the RSS file, but leave the rest of what's on the website alone.  I don't know how much legal standing this has, but there does seem to be a pretty clear common-sense message in RSS.

So over the last 8 years RSS has developed with a fairly universal understanding that it's reasonable for any software to import and use it (within the bounds of copyright) and that if the publisher doesn't like this, then they shouldn't publish it.  If you want to restrict access to an RSS feed, use technology (such as HTTP basic authentication) to do that.
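And restricting a feed that way is straightforward for the consuming software, too.  Here's a small Python sketch of fetching a feed behind HTTP basic authentication (the URL and credentials are placeholders):

```python
# Fetching an RSS feed protected by HTTP basic authentication.
# The URL and credentials below are placeholders.

import base64
from urllib.request import Request, urlopen

url = "http://example.com/private/feed.rss"
credentials = base64.b64encode(b"reader:s3cret").decode("ascii")

req = Request(url, headers={"Authorization": "Basic " + credentials})
with urlopen(req, timeout=10) as resp:
    feed_xml = resp.read()
print(feed_xml[:200])
```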

So why is Microsoft demanding that a new layer of permission system (DRM) be present before a Silverlight program can access resources that have been considered completely open?  Is this just the side effect of overly intrusive legal counsel?  A beta software problem where RSS was just thrown in with media file types and no one considered this issue?  Or is it just another example of Microsoft's long history of trying to turn open standards into proprietary Microsoft monopolies?

(posted on 18 May 2008)
I've been following the various conversations about data portability between the big social networks.  This is definitely a hotly debated space right now.  The funny thing is that there is one tiny piece of information, a person's email address, that is really at the center of the controversy, but no one has really brought up why.  The only reason we don't want our email addresses to get out there in an aggregated way is that the technology behind email can't really control how they are used.  So when our email address gets out there, we get spammed.  This is a highly emotional issue for a lot of people.  If it were possible to positively identify the sender of an email, we would get very little spam (and those that did spam us would be blocked quickly) and we would not care nearly as much how this piece of data is distributed.  It's funny that an email address is specifically designed to be published so that others can find it and send us messages, yet we now want the publishing of that email address to be tightly controlled and describe it as data that we "own".  It would be so much better if we did not need to control how it is published because its use would be controlled.

In the context of the social networks, many people (who do not have a vested interest in a social network) say that an email address is our own data and that we should have the right to control it.  The problem is that for it to be a useful piece of data it has to be freely available.  What's happened with Facebook this week is that, although they have been pretending to open up their network, they realize that the combination of the social graph and email addresses is the basis of their walled garden.  If that gets away, other social networks can reproduce the Facebook network and undermine its value.  What I see as significantly more important is the social graph itself.  If we had a messaging identifier that was spam-proof, then this would not need to be protected data.  We would still want to be careful about allowing others to know whom we know and interact with, at least at a real-world level.  There is no value to society (except for sociology research) in having any one company build a social graph, and there is a lot of harm that can come from it (McCarthyism).  There is value to that company in that they can use this social graph to advertise to you and to build walled gardens.  I prefer a model where my piece of the social graph lives completely in my control and I only provide that information when, and to whom, I choose, from time to time.  Just like it did before Friendster and Facebook.  Humans just work that way.

(posted on 14 May 2008)
That's a question that's being asked, answered and discussed on and around Twitter in ever-increasing waves lately.  This is a pretty good indication of how important Twitter is to the people who are talking about it.  It's becoming an increasingly important tool in the everyday lives of those people.  I know it has for me.  I've stopped using blog aggregation as my way of keeping in touch with what's going on in the topics that I'm interested in.  Instead, I follow the people who have interesting things to say.  Many point to interesting articles, sometimes their own, and if I have the time and inclination, I'll go read those.  This has saved me hours a day wading through mounds of closely related headings in Google Reader.

So before talking about what's wrong with Twitter, what is it?  It's essentially the conceptual melding of instant messaging, forums and chat rooms.  It has the rapid feedback and short messaging of IM, but in the context of a larger group of interested people.  It has a bit of the feel of IRC and chat rooms, but instead of being organized around topics, it's organized around our own unique sets of interests.  Its "limitation" of 140 characters, defined by what SMS can handle, makes people concise and allows readers to rapidly scan through a stream of concentrated ideas.  We overcome the signal-to-noise problems of other conversation systems by only following those whom we identify as signal and ignoring those that look like noise to us.  The platform nature of Twitter also allows people to interact with Twitter in as varied a manner as the kinds of people that they follow.

As I've heard Robert Scoble say, "Everyone's Twitter experience is different."  That's because you tailor it to create your own experience.  So what you see as wrong with Twitter will depend a lot on how you've tailored it, what tools you are using with it and what additional things you would like to do with it.  Personally, I don't think that there is a whole lot wrong with Twitter, any more than there is anything wrong with YahooIM, AIM, MS IM, Google Groups, Google Reader, etc.  Yes, Twitter could be more reliable, and it's a bit surprising that it's not.  It's completely down as I write this.  The biggest problem is that a lot of people are overlaying what they would like Twitter to be on the service and seeing the shortcomings of that ideal.  They see what could be.  What's "so close" but not quite possible.

What I do see in Twitter, and in the way that people have such different ideas about how Twitter should be changed/upgraded/replaced, is that Twitter has opened up people's eyes to the many-faceted ways that people can communicate in the real-time, always-connected, anywhere world that we are just starting to dip our toes into.

(posted on 14 Apr 2008)
My recent trip to Banff gave me a lot to think about in terms of the kinds of tools that would make me productive in mobile environments.

There really are two different environments that I am thinking of.  The first relates to being able to do a limited amount of work while I'm out and about but need to be able to react quickly to problems.  The second is one that would travel with me to places where I want to get serious work done.

Light Computing

My "out and about" mobile client needs to be light enough to fit in a big pocket of my cargo pants and powerful enough to write medium length email messages and visit standard web pages.  I've played with a few things.  I've uses a Nokia 770 Internet tablet, a Samsung A920, a Razr, an eee PC and even an OLPC XO.  The WiFi devices are pretty good where you can get open WiFi but that's not an easy thing to do in Vancouver.  There is a lot of WiFi signal around but people have gotten smart about locking them down.  Until/if Vancouver gets blanket WiFi or WiMax, the only real solution is cellular data.  I tried to pairing up the N770 with the A920 via bluetooth and use the DUN.  That worked fairly well until the $100 bill came in for a couple of dozen web pages.  Cellular data plans are hideously expensive here in Canada.  So far, my best option ends up being a lowly Razr on a prepaid plan from VirginMobile.ca.  I don't use it much for talking but for $7/month I unlimited web browsing, albeit on an extremely limited device.  At least I can check my Gmail account regularly and either respond if it's no more than a sentence, or get myself to a real computer quickly.

This is not a great solution, but it will have to do until I can find something better.  An iPhone would get me fairly close, but even if it were available in Canada, I'd still have a hard time justifying its cost.  I'm just not mobile enough to justify it.  Unless I could find a project that required it!  Even then, I would like to have a better keyboard.  I like the idea of the folding Bluetooth keyboards.  You can just pull them out when you need to do more extensive typing.  I have a borrowed one, but I've yet to find a Bluetooth device that it will work with.  That seems to be a common problem with these things.

Heavy Computing

When I'm going to camp out in some hotel room for a bit and need to do serious work while I'm there, I can get by with some standard equipment, but I have been dreaming about the ideal setup.  Most of this equipment does not exist and I doubt it ever will.  It's not a matter of whether it can be built but whether there is a market for it and a manufacturer willing to risk building it.

Instead of being a standard notebook computer, this would be made up of a couple of components built using technologies similar to those in notebooks.  The core would be the CPU and storage module.  This would be something along the lines of a Mac mini in size, although I'm not sure that the optical drive would be necessary.  Not for me anyway.  Just a hard drive and a decent CPU.  Maybe a battery.  Some I/O ports.  WiFi, WiMax or cellular data, or a PCI Express slot to provide for connectivity.  I could see selecting this component from a number of similar units that could be configured for high power or portability, etc., just like notebook computers are today.  For the display, we could have our choice too: anything from a very thin and light clamshell made up of a keyboard, touch pad and screen with a wireless connection to the CPU base, up to folding dual 17" panels with a stand and a wired DVI connection to the base.  A separate wireless keyboard and touch pad or mouse would be designed for travel but could be chosen to suit the user.  Many would be fine with a notebook-style keyboard and touch pad, but I'd prefer a split keyboard and full-size mouse.

All of these components could fit into a reasonably small case and not be too heavy.  Likely in the 10 pound range.  Now that sounds heavy to those who wander around all day with a 3 pound notebook over the shoulder, but that's not what this is for.  This form factor would be very nice as a desktop replacement but would also be compact enough to travel, although it wouldn't necessarily fit on your airline seat-back tray.

Most of the interface standards already exist.  Bluetooth would work for a lot of the wireless communications between components.  The screen connection might need some redesign, especially if it were wireless.  Most of these components just require the talents of a good notebook packaging designer and engineer.  When you look at what Apple did with the Air, can you imagine that same skill applied to this kind of component system?

What's kind of funny about some of these ideas is that I've had some of this in the past.  15 to 20 years ago, I had a series of "portable" computers that weighed from 20 to 35 pounds.  From the Osborne to the original Compaq, the metal-cased Eagle and even IBM's first (and I think only) lunchbox-style computer, I had computers that were fairly close in functionality to the then-available desktop computers.  I actually took most of these on airplanes (although the Compaq had to have its boards and connectors reseated after each trip!)

So I would love to have some of the expertise that is used to make today's laptops put into portable component computing.  To have a set of mix and match parts that I could use to build my ideal portable working environment would be wonderful.

I doubt that I will see this, though.  The computer industry is too focused on building slight variations on a couple of themes.  You can see how reluctant manufacturers are to step outside of a narrow box when you look at the success of the eee PC.  Millions of these have been sold into a market that did not exist before it was produced.  There was obviously a demand, but the manufacturers were not willing to risk it until Asus came along.  Hmm... maybe Asus will start building my mobile modular computer components (the MMCC?).

I can always dream!

(posted on 14 Apr 2008)

I've always found that my productivity really suffers when I need to go mobile.  I have to squeeze a subset of my office desktop's functionality onto a notebook computer.  The time and effort required to get set up, and the low-productivity environment that I end up with, generally make it not worth the effort.  Well, I just came back from a 9 day stay at The Banff Center, so before I left, I was determined to experiment with how productive I could become in such an environment.  The Banff Center is an academic and conference center that has a world-class reputation for media, arts and management events.  I was told that they had great computing resources, so I thought that this would be a best-case test.

Hotel Mobile Office

Once set up in our room, I tested out the WiFi.  The signal was better than in my office!  I continued to set up the 5 computers that I brought: a 14" Dell notebook, a MacBook, an eee PC, an OLPC XO and a Nokia 770 Internet Tablet.  The Dell was to be my main workstation, and I brought my office mouse and MS Natural Keyboard to give me a feeling that was as much like the office as possible.  The MacBook and eee PC are my wife's computing devices, so they were not an important part of the experiment.  The XO worked well as an email station for around the campus and the N770 was easy to carry around Banff.  With the large number of hotels/motels and coffee shops, it wasn't too hard to find WiFi in town.

To add an additional screen to the mix, I used an S-Video cable to create a secondary desktop from the Dell on the room's TV.  The quality wasn't great, but it worked well for the Twhirl Twitter client and for playing movies at other times.

At the workshop that my wife was attending, she won a Tangent WiFi Table Radio, so we ended up with 6 WiFi devices in the room at times.  They all worked really well.  The one problem that I did have is that they seemed to be capping the bandwidth of any individual WiFi device at about 100Kbps.  It was low latency, so it was fast enough for using VNC, SSH and general web surfing, but it made downloading my daily podcasts a pain.

Bringing my comfortable mouse and keyboard really helped.  I use a natural keyboard because of its split layout.  It really helps keep chronic carpal tunnel syndrome at bay, but it makes the transition to the cramped space of a notebook keyboard a real pain (literally).  I think that I almost prefer the little rubber keys on the XO to the notebook keyboard because on the XO I don't even try to touch type but switch to a four-fingered hunt and peck.

There were two things that I did miss from my desktop environment.  The first is the four-screen setup: the 22" and 19" monitors on my desktop, the Dell controlled from the desktop using Input Director, and the little XO sitting above my monitors scrolling logging information.  The second thing is text size.  The text on the 22" seemed monstrous when I got home.  It felt like you do after getting off a long flight in a middle seat.

Another problem that developed relates to a particular ergonomic requirement that I have.  In my office, my keyboard sits about a foot in from the edge of a corner desk.  I can then put my elbows on the desk.  I find this to be extremely comfortable and I can spend long sessions typing without fatigue.  The room at The Banff Center was very well equipped, but it only had the standard 24" deep desk.  The notebook with the natural keyboard in front of it left only a couple of inches of desk to rest my arms on.  Not nearly enough for me, and I ended up with very sore arms.

Because most of my work involves websites and the internet in general, I often work on remote servers via SSH, so working from Banff was not much of a hindrance.  VNC connections to my office desktop and other office machines were pretty effective.  I found I could get light coding done remotely with no problem.  I've moved most of my productivity apps to the cloud (GMail/GDocs, SlimTimer.com, RememberTheMilk.com, Twitter, etc.), so as long as the connectivity is good, those are no problem.

All in all, I think that I came pretty close to a good mobile setup.  It was fairly productive, but I don't think that I ever really got into the "groove".  There are reasons for this that go beyond the nature of my setup (maybe the beautiful mountains!)

I did have some thoughts about what the dream mobile office setup would be, but I'll leave that to another post.
