Geoffrey Meredith
Thoughts on Technology

(posted on 13 Jul 2008)
I posted a job ad on Craigslist.org yesterday that got quickly flagged and removed. You don't get any indication as to why an ad was removed, just a link to a forum where you can post the details of your ad and get suggestions as to what you did wrong. I had no idea why my ad was pulled, so I spent some time on that forum looking for clues. Nothing in the examples I saw there helped me understand what was wrong with my ad, so I posted my ad to the forum and awaited responses. I only received one comment about my ad. The comment was sarcastic and suggested that the compensation was only appropriate for a third world country. Maybe I'm cheap or out of touch with salary expectations (or both), but I do think that there would have been people interested in responding to the ad.

The responses to a number of other rejected ads seemed to expose either a personal or a political agenda. People just didn't like the ads because they didn't properly address their political ideals, even when those ideals were tangential to the posting. It became obvious that to post in any particular category and city, you have to abide by mostly unwritten "community standards". Whose standards are those "community standards"? It's not the community of people in that city interested in that category; it's the much smaller community that combs through Craigslist ads looking for ones that don't live up to their standards. That, to me, sounds like a vigilante mob; the darker side of crowdsourcing.

I don't think that it has to be this way. I think there are technological solutions to improving moderation on websites like Craigslist. Meta-moderation and the kinds of sophisticated algorithms used by websites like Slashdot.org and Digg.com, or even more descriptive flagging, could raise moderation beyond the level of vigilantism. Of course, this may be the way that Craigslist wants their website to behave. It's certainly within their rights to do so. I actually see an opportunity here for someone to build a better Craigslist.
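To make the "sophisticated algorithms" idea a little more concrete, here's the sort of thing I have in mind. This is only a sketch; the weighting scheme and threshold are mine, not anything Craigslist, Slashdot or Digg actually does:

```python
# Hypothetical sketch: weight each flag by the flagger's track record,
# so a handful of habitual flaggers can't remove an ad on their own.

def flag_weight(flagger_history):
    """Flags that were later upheld raise a flagger's weight; overturned
    flags lower it. New flaggers start near neutral."""
    upheld, overturned = flagger_history
    return (1 + upheld) / (2 + upheld + overturned)

def should_remove(flags, threshold=3.0):
    """Remove an ad only when the combined weighted flags pass a threshold,
    rather than on a raw count of flags."""
    return sum(flag_weight(h) for h in flags) >= threshold

# Example: five flaggers, most of whom have had past flags overturned,
# don't reach the removal threshold on their own.
flags = [(0, 4), (1, 3), (0, 2), (2, 2), (0, 5)]
print(should_remove(flags))  # False
```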
(posted on 26 Jun 2008)
Over the last month or so we've started running into issues with Comcast and AT&T blocking all email from our servers because they receive what they consider spam from our servers. We had gotten our servers unblocked, but today Comcast has blocked us again. So, to be able to deliver email to Comcast, we have to "clean" all email that passes through our servers. We have no idea what the triggers are for Comcast to block a server. The barrier is likely fairly low, as we don't have all that much email traffic in total. So to keep our standing with Comcast, we will have to be brutal. We will have to consider any email that might possibly be spam as spam and bounce it. If only a tiny percentage of spam gets through our filters, we might get blocked again. The net effect is that some legitimate email will be bounced.

While neither we nor our customers are doing anything wrong, Comcast is forcing us not just to tag potential spam as spam but to block it entirely. Essentially they are pushing their problems onto us. The net effect of all this is that Comcast will be forcing many smaller operations that process smaller amounts of email to find their own solutions to the "Comcast" email problem. Each operator will find a way, at a cost of thousands, maybe millions of man-hours of effort in aggregate, and the net result will be a lower percentage of legitimate email delivered successfully. Spam has made email less useful, but these efforts by Comcast will be adding some of the last few nails to the email coffin. I'd love to see email disappear, but it won't until something better takes its place.
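Here's roughly what "consider any email that might possibly be spam as spam" ends up looking like in practice. The scores and thresholds are placeholders; in a real setup the score would come from a content filter like SpamAssassin sitting behind the MTA:

```python
# Sketch of the aggressive policy described above: instead of tagging
# borderline mail and delivering it, anything over a low score gets rejected
# outright so that none of it ever reaches Comcast. Thresholds are illustrative.

TAG_THRESHOLD = 5.0     # the usual "probably spam: tag it and deliver it" level
REJECT_THRESHOLD = 2.0  # the much stricter level we're being pushed toward

def handle_message(headers, spam_score):
    """spam_score would come from a content filter such as SpamAssassin."""
    if spam_score >= REJECT_THRESHOLD:
        # Bounce at SMTP time; inevitably, some legitimate mail is lost.
        return "550 message rejected as probable spam"
    if spam_score >= TAG_THRESHOLD:
        headers["X-Spam-Flag"] = "YES"  # never reached with these thresholds
    return "250 accepted"

print(handle_message({}, 3.5))  # 550 message rejected as probable spam
```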
(posted on 3 Jun 2008)
I have the feeling that the state of the Internet now is much like stand-alone computers were just before the introduction of the IBM PC in 1981. A lot of the pieces of the PC revolution were there, but no one had quite put them all together. What the PC did was put control of serious computing resources into the hands of individuals. We are now waiting for the Internet analog to this revolution. I think that will happen when people control their own data on the internet. That data control is not just the "Data Portability" vision of being able to copy data from one walled garden to the next, but the ability to store your data in a single datastore of your choosing, one that you control completely. You can then allow selective access to your data by the external services that you want to use. I think that Amazon.com's S3 is the start of the kind of service where you could store data. Not that S3 has the complete functionality required to support this model, but it could be built on top of S3.

Having your own datastore is like being in control of the hard drive on your computer. You load applications and tell those applications what data to work with. In that same way, you could allow a web-based service such as Adobe Photoshop Express to access some photos in your datastore, do some online processing and, after it's done, store the results back to your datastore. You can already do this with your photos stored on Flickr and a couple of other photo sites. Adobe's got the right idea, but there is no open protocol that would allow them to reach the photos on my own personal server.

In a similar vein, we have Facebook, Google, Yahoo, Microsoft and many smaller players fighting over control of "the social graph". The "right" way to handle this is to allow me to store and control my part of the social graph and then selectively allow other services to have access to it. There would no longer be a need to give some new tool your account credentials for your GMail, Facebook and other services. Just point them at your datastore and tell your datastore what personal data the service can have. This model really is the holy grail of social computing from a user's perspective. It's deadly from a social aggregator's perspective (such as Facebook's), as there isn't much left for them once the user gets rescued from their lock-in. I also see this as a significant component of the next version of the Internet Operating System.
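To give a feel for what "tell your datastore what personal data the service can have" might look like, here's a sketch of issuing a narrow, signed grant to a service. Every name, path and function here is hypothetical; no such open protocol exists yet, which is exactly the point:

```python
# Hypothetical sketch of the grant model described above: instead of handing
# a service your credentials, you issue it a narrow, signed grant against
# your own datastore. All names and paths here are made up for illustration.
import hashlib, hmac, json, time

DATASTORE_SECRET = b"owner-only-signing-key"

def issue_grant(service, path_prefix, actions, ttl_seconds=3600):
    """Create a token that lets `service` perform `actions` (e.g. read/write)
    only under `path_prefix` in my datastore, and only for a limited time."""
    grant = {
        "service": service,
        "prefix": path_prefix,
        "actions": actions,
        "expires": int(time.time()) + ttl_seconds,
    }
    payload = json.dumps(grant, sort_keys=True).encode()
    sig = hmac.new(DATASTORE_SECRET, payload, hashlib.sha256).hexdigest()
    return {"grant": grant, "sig": sig}

# e.g. let a photo-editing service touch only /photos/vacation-2008/ for an hour
token = issue_grant("photoshop-express.example", "/photos/vacation-2008/",
                    ["read", "write"], ttl_seconds=3600)
print(token["sig"][:16], token["grant"]["expires"])
```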
(posted on 3 Jun 2008)
In the stand-alone computer world, an operating system is the software that manages and ties together the hardware components, usually via hardware drivers. The operating system exposes APIs that allow application programs to interact with the hardware as well as with the lifecycle of the applications themselves. Over time, computer operating systems have grown by adding layers over this fundamental "kernel". These new APIs provide rich, event-driven presentation layers such as Windows and Mac OS. They also provide such things as file systems, security, etc.

Since the 70's, TCP/IP has been the operating system kernel that has tied together the various bits and pieces of the internet (computers, switches, routers, etc.). This has allowed a variety of protocols to be developed that are roughly the equivalent of an operating system API. In fact, from a software point of view, most programs use the protocols via an API that implements the protocol. Once the TCP/IP kernel was created, a set of protocols developed on top of it. These are the APIs of the Internet OS. Those protocols started out with basic file transfers but soon added SMTP email, remote login, and more recently, the HTTP web, VoIP and, to some extent, instant messaging. [There are lots of other protocols in there but most haven't become mainstream or, as in the example of DNS, we don't often think of them until they break.]

An operating system is not all that useful if it's just a bunch of APIs. You have utility programs and shells to allow people to interact with it, and you have major applications such as word processors and spreadsheet software. In the same way, we have SMTP clients such as Outlook and Thunderbird, HTTP clients such as Internet Explorer, Firefox, Safari and Opera, etc. We also have lots of utilities such as ping, traceroute, etc. that provide direct interaction with TCP/IP.

So what we are living with now is Version X (no one has been keeping count) of the Internet. What people are eagerly awaiting is the development of new protocols that will allow us to get beyond the "old" models of interaction on the internet that involve email, web pages and basic media viewing. We've gone a long way in creating great interaction models with these basic protocols. Web 2.0 has started to give us insight into what the future possibilities are, but we need to take these ideas and encode them into internet-scale protocols. I'll follow up this post with some ideas about what I see as the emerging, next generation technologies and protocols that will make up the Internet OS Version X+1.
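As a small illustration of "programs use the protocols via an API that implements the protocol", here is HTTP spoken directly over the TCP/IP "kernel" through the sockets API (example.com is just a stand-in for any web server):

```python
# A minimal illustration of a protocol (HTTP) riding directly on the TCP/IP
# "kernel" via the sockets API, much the way an application talks to an OS kernel.
import socket

host = "example.com"  # any web server will do
with socket.create_connection((host, 80)) as s:                     # TCP/IP layer
    s.sendall(f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())   # HTTP layer
    response = b""
    while chunk := s.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.0 200 OK"
```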
(posted on 30 May 2008)
Well, the service announced that the domain had been captured by a registrar called "namevolcano.com". I knew a lot of the domain spammers capture newly deleted names to "taste" them for their revenue generation prospects. They usually drop these domains within 5 days so that they don't have to pay for them. I guess that they are looking for domains that will generate more revenue than the roughly $6/yr that they end up paying to register them. So I was hopeful that I still might get a crack at the domain.

About 4 days after the domain was captured by "namevolcano.com", I got an email that was trying to "sell" me the domain, suggesting that since I had the .net variant (I didn't have that), I really should buy this .com from the sender, for the low price of only $557! I was careful at the time not to follow any of the links in the email, as that might have shown interest to the sender and they might have kept the domain. I didn't respond. About 12 hours after receiving the email, the domain changed hands to a registrar called "vibrant networks". After two days, I received a second email, substantially the same as the first, although reminding me that this was their second email on the subject. After the second 5-day tasting period ran out, I finally captured the domain. I then went back and started checking the links provided by the email. This is how I got the $557 purchase number, as it wasn't actually in the email. An interesting fact is that the whois server for the second taster was whois.itimemarketing.com, which is the same domain that the link in the initial email pointed at. So these two domain registrars seem to be related and were playing some kind of tag team game. I guess they figured that they needed 10 days to try to "sell" the domain to me.

I've used the words "game" and "sell" in quotes above as I actually consider this activity to be something in the neighborhood of scam to extortion. Somehow I think that this kind of activity is against the terms of service that registrars must follow to be accredited by ICANN. ICANN has really got to clean up the mess of scammers that are posing as domain registrars. These guys make the oil, gas and electricity market manipulators look tame by comparison in their brazen activities. The one takeaway I can suggest from this experience is that if you run into a similar situation, don't do anything to raise the scammer's hopes of actually selling you the domain; I think doing so will reduce the chances that they will just let the domain go and give you a real shot at getting it.
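For what it's worth, if you want to catch the moment a tasted domain gets dropped, you can watch its whois record yourself rather than relying on a monitoring service. This is only a rough sketch, using the public .com whois server and a placeholder domain, not the one from this story:

```python
# Rough sketch of how you could watch for a tasted domain being dropped:
# query whois once a day and look for the registrar or status changing.
import socket

def whois(domain, server="whois.verisign-grs.com"):
    """Plain whois (RFC 3912) query for a .com/.net domain."""
    with socket.create_connection((server, 43), timeout=10) as s:
        s.sendall((domain + "\r\n").encode())
        data = b""
        while chunk := s.recv(4096):
            data += chunk
    return data.decode(errors="replace")

record = whois("example.com")  # placeholder domain
status = [line.strip() for line in record.splitlines()
          if "Registrar:" in line or "No match" in line]
print(status or ["(no registrar/status line found)"])
```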
My "expertise" in Silverlight cross-domain policy requirements consists of about 10 minutes reading the provided references, so I could be completely wrong about all of this but here are my concerns about using this for RSS. Microsoft seems to have modeled this on Adobe's cross-domain policy file (/crossdomain.xml) and will fall back to this file if it doesn't find it's perferred /clientaccesspolicy.xml. The idea being that client software that supports the use of this policy file will use it to decide if the content on a given website is allowed to be used by the client. So for Adobe Flash or MS Silverlight runtimes, it's a way to prevent someone from creating an application that access resources from a website that does not explicitly give it permission. (I'm assuming that this is a technical permission and does not assign copyrights but I'm Not A Laywer). I don't know how effective this has been for controlling cross-domain usage of Flash resources but it seems superficially viable. Especially with the Flash file formats and players that were at one time proprietary (are they still?) This could provide for a type of DRM, regardless of it's effectiveness. The problem with applying this kind of DRM to RSS is that in some respects, a RSS file *is* a content policy file. It kind of says: "Instead of scraping data from my website's HTML pages, I'll give you this data in an nice machine readable format so you will get it right and so I can have some say in what and how it is presented." By having an RSS feed, we are saying you can use this data in the RSS file but leave the rest of what's on the website alone. I don't know how much legal standing this has but there does seem to be a pretty clear common sense message in RSS. So over the last 8 years RSS has developed with a fairly universal understanding that its reasonable for any software to import and use it (within the bounds of copyright) and that if the publisher doesn't like this, then don't publish it. If you want to restrict access to an RSS feed, use technology (such as HTTP basic authentication) to do that. So why is Microsoft demanding a new layer of permission system (DRM) to be present before a Silverlight program can access resources that have been considered completely open? Is this just the side effect of overly intrusive legal counsel? A beta software problem where RSS was just thrown in with media files types and no one considered this issue? Or is just another example of Microsoft's long history trying to turn open standards into proprietary Microsoft monopolies? (posted on 18 May 2008)
(posted on 18 May 2008)
In the context of social networks, many people (who do not have a vested interest in a social network) say that an email address is our own data and that we should have the right to control it. The problem is that for it to be a useful piece of data, it has to be freely available. What's happened with Facebook this week is that although they have been pretending to open up their network, they realize that the combination of the social graph and email addresses is the basis for their walled garden. If that gets away, other social networks can reproduce the Facebook network and undermine its value.

What I see as significantly more important is the social graph itself. If we had a messaging identifier that was spam proof, then that identifier would not need to be protected data. We would still want to be careful about allowing others to know who we know and interact with, at least at a real-world level. There is no value to society (except for sociology research) in having any one company build a social graph, and there is a lot of harm that can come from it (McCarthyism). There is value to that company, in that they can use the social graph to advertise to you and to build walled gardens. I prefer a model where my piece of the social graph lives completely in my control and I only provide that information when, and to whom, I choose to, from time to time. Just like it did before Friendster and Facebook. Humans just work that way.
(posted on 14 May 2008)
So before talking about what's wrong with Twitter, what is it? It's essentially the conceptual melding of instant messaging, forums and chat rooms. It has the rapid feedback and short messaging of IM, but in the context of a larger group of interested people. It has a bit of the feel of IRC and chat rooms, but instead of being organized around topics, it's organized around our own unique set of interests. Its "limitation" of 140 characters, defined by what SMS can handle, makes people concise and allows readers to rapidly scan through a stream of concentrated ideas. We overcome the signal-to-noise problems of other conversation systems by only following those who we identify as signal and ignoring those that look like noise to us. The platform nature of Twitter also allows people to interact with Twitter in as varied a manner as the kinds of people that they follow. As I've heard Robert Scoble say, "Everyone's Twitter experience is different." That's because you tailor it to create your own experience.

So what people see as wrong with Twitter will depend a lot on how you've tailored it, what tools you are using with it and what additional things you would like to do with it. Personally, I don't think that there is a whole lot wrong with Twitter, any more than there is anything wrong with Yahoo IM, AIM, MS IM, Google Groups, Google Reader, etc. Yes, Twitter could be more reliable, and it's a bit surprising that it's not. It's completely down as I write this. The biggest problem is that a lot of people are overlaying what they would like Twitter to be on the service and seeing the shortcomings of that ideal. They see what could be. What's "so close" but not quite possible. What I do see in Twitter, and in the way that people have such different ideas about how Twitter should be changed/upgraded/replaced, is that Twitter has opened up people's eyes to the many-faceted ways that people can communicate in the real-time, always connected, anywhere world that we are just starting to dip our toes into.
(posted on 14 Apr 2008)
There really are two different environments that I am thinking of. The first relates to being able to do a limited amount of work while I'm out and about but need to be able to react quickly to problems. The second is one that would travel with me to places where I want to get serious work done.

Light Computing

My "out and about" mobile client needs to be light enough to fit in a big pocket of my cargo pants and powerful enough to write medium-length email messages and visit standard web pages. I've played with a few things. I've used a Nokia 770 Internet tablet, a Samsung A920, a Razr, an eee PC and even an OLPC XO. The WiFi devices are pretty good where you can get open WiFi, but that's not an easy thing to do in Vancouver. There is a lot of WiFi signal around, but people have gotten smart about locking them down. Until/if Vancouver gets blanket WiFi or WiMax, the only real solution is cellular data. I tried pairing up the N770 with the A920 via Bluetooth and using DUN. That worked fairly well until the $100 bill came in for a couple of dozen web pages. Cellular data plans are hideously expensive here in Canada. So far, my best option ends up being a lowly Razr on a prepaid plan from VirginMobile.ca. I don't use it much for talking, but for $7/month I get unlimited web browsing, albeit on an extremely limited device. At least I can check my Gmail account regularly and either respond if it's no more than a sentence, or get myself to a real computer quickly. This is not a great solution, but it will have to do until I can find something better. An iPhone would get me fairly close, but even if it were available in Canada, I'd still have a hard time justifying its cost. I'm just not mobile enough to justify it. Unless I could find a project that required it! Even then, I would like to have a better keyboard. I like the idea of the folding Bluetooth keyboards. You can just pull them out when you need to do more extensive typing. I have a borrowed one, but I've yet to find a Bluetooth device that it will work with. That seems to be a common problem with these things.

Heavy Computing

When I'm going to camp out in some hotel room for a bit and need to do serious work while I'm there, I can get by with some standard equipment, but I have been dreaming about the ideal setup. Most of this equipment does not exist and I doubt it ever will. It's not a matter of whether it can be built but whether there is a market for it and a manufacturer willing to risk building it. Most of the interface standards already exist. Bluetooth would work for a lot of the wireless communication between components. The screen connection might need some redesign, especially if it were wireless. Most of these components just require the talents of a good notebook packaging designer and engineer. When you look at what Apple did with the Air, can you imagine that same skill applied to this kind of component system?

What's kind of funny about some of these ideas is that I've had some of this in the past. 15 to 20 years ago, I had a series of "portable" computers that weighed from 20 to 35 pounds. From the Osborne to the original Compaq, the metal-cased Eagle and even IBM's first (and I think only) lunchbox-style computer, I had computers that were fairly close in functionality to the then available desktop computers. I actually took most of these on airplanes (although the Compaq had to have its boards and connectors reseated after each trip!). I can always dream!
(posted on 14 Apr 2008)
I've always found that my productivity really suffers when I need to go mobile. I have to squeeze a subset of my office desktop's functionality onto a notebook computer. The time and effort required to get set up, and the low-productivity environment that I end up with, would generally make it not worth the effort. Well, I just came back from a 9-day stay at The Banff Center, so before I left, I was determined to experiment with how productive I could become in such an environment. The Banff Center is an academic and conference center that has a world-class reputation for media, arts and management events. I was told that they had great computing resources, so I thought that this would be a best-case test.

To add an additional screen to the mix, I used an S-Video cable to create a secondary desktop from the Dell on the room's TV. The quality wasn't great but it worked well for the Twhirl Twitter client and for playing movies at other times. At the workshop that my wife was attending, she won a Tangent WiFi Table Radio, so we ended up with 6 WiFi devices in the room at times. They all worked really well. The one problem that I did have is that they seemed to be capping the bandwidth of any individual WiFi device at about 100Kbps. It was low latency, so it was fast enough for using VNC, SSH and general web surfing, but it made downloading my daily podcasts a pain.