Archive Page 4

Garlik Launches FOAF Services

The FOAF space got a whole lot more interesting yesterday, when Garlik released two FOAF services under their QDOS umbrella. The first is effectively a viewer on FOAF data crawled from across the Web. Have a look at the data about Danny Ayers to see it in action.

As far as I can tell, having looked very briefly yesterday (the QDOS site is down at the moment for maintenance), the second service will use the collected social network data to enable services such as blog comment whitelisting based on connections in the graph, presumably in the manner used by the DIG blog at MIT.

For some reason I find this service much more exciting than the Google Social Graph API. Perhaps it’s because the first incarnation of the SG API obviously didn’t get FOAF, and claimed that Mischa Tuffield was trying to steal my identity (presumably because he had a fragment of RDF about me in his FOAF file). The SG API does seem to have improved (I can’t replicate the original bug), and is useful for finding who still links to my FOAF in its old location, but I’m still more drawn to what Garlik have to offer. Perhaps it’s because I trust them to do it right; whilst there are currently errors in the output about me (John Domingue and I are apparently the same person), I know exactly where the error comes from, and it’s human. Perhaps it’s because the Social Graph API feels polluted by XFN.

As yet there don’t seem to be any actual APIs or machine-friendly services offered by Garlik over this FOAF data, and with the site being down I can’t hunt around for these. Requesting different content types from the site doesn’t have any effect either, but knowing the people behind Garlik there’ll be some interesting stuff on its way. Full SPARQL over FOAF data would be nice 😉 Either way, this could well be the trigger for large-scale updating of people’s FOAF files, something which is long overdue, my own included.

Update: The QDOS site came back up shortly after I posted this. The second service is broadly as described above. It’s called the “Social Verification” service. Tom Ilube from Garlik talks about this in some more detail in Issue 2 of Nodalities Magazine.

Out from Behind the Great Firewall

It’s been a quiet month, blog-wise, mainly due to MyOpera being inaccessible from behind the “Great Firewall of China”. Not sure what content on here is worth screening out (except perhaps on quality grounds ;), but anyway…

I was in China for WWW2008 initially, and then two weeks of holiday, giving me a chance to see some of this vast country, meet some great people who went out of their way to help us, have varied success at avoiding being ripped off (an occupational hazard for travellers in China it seems), and catch a glimpse of somewhere undergoing huge changes.

On the subject of the Great Firewall, I’ll admit to being a bit disappointed that TimBL’s Keynote at WWW2008 didn’t address this issue more explicitly. On the other hand, he gave a great plug in his speech for the Linked Data on the Web workshop we co-chaired with Chris Bizer and Kingsley Idehen earlier in the week (summarised nicely at ZDnet by Paul Miller), which really made my day.

To be fair to Tim though, I wouldn’t have wanted to be in his shoes, which were undoubtedly treading a very fine line. Before the conference I was see-sawing between thinking “he’s got to address this issue head-on”, and thinking “no way, it would just be too confrontational to raise it in this venue”.

Yesterday I came across this blog post on the subject from the New Scientist. Whilst I wholly sympathise with the strength of feeling, I think the post itself is misplaced, or at least misdirected. The IW3C2/W3C/Web community at large has two choices: engage, and hope to bring about change through dialogue and stronger relationships, or keep China at arms length and stand no chance of influencing policy on censorship.

In the end I think the correct decision was made, just as siting the 2008 Olympics in Beijing was probably the right decision. Let’s just hope that the human rights situation improves as much as was promised (and as fast as the public toilet situation seems to have done in Beijing ahead of the Olympics – excellent, in case you were wondering).

If the New Scientist writer is really going to take issue with the WWW2008 slogan, I think an equally valid target should be the “One World” aspect. OK, so geographically it’s true, but on all other counts I’m not convinced.

List of HTTP Status Codes, Comma-separated

How difficult can it be to find a comma-separated list of HTTP Status Codes and their associated labels? Quite hard, apparently, so I made one:

100, Continue
101, Switching Protocols
200, OK
201, Created
202, Accepted
203, Non-Authoritative Information
204, No Content
205, Reset Content
206, Partial Content
300, Multiple Choices
301, Moved Permanently
302, Found
303, See Other
304, Not Modified
305, Use Proxy
306, (Unused)
307, Temporary Redirect
400, Bad Request
401, Unauthorized
402, Payment Required
403, Forbidden
404, Not Found
405, Method Not Allowed
406, Not Acceptable
407, Proxy Authentication Required
408, Request Timeout
409, Conflict
410, Gone
411, Length Required
412, Precondition Failed
413, Request Entity Too Large
414, Request-URI Too Long
415, Unsupported Media Type
416, Requested Range Not Satisfiable
417, Expectation Failed
500, Internal Server Error
501, Not Implemented
502, Bad Gateway
503, Service Unavailable
504, Gateway Timeout
505, HTTP Version Not Supported
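Incidentally, a list like this can now be generated from Python’s standard library (3.5+); note that the stdlib tracks later HTTP specs, so a few labels differ from the RFC 2616-era list above (306 is omitted, and some phrases were renamed in later revisions):

```python
from http import HTTPStatus

# Print "code, Reason Phrase" for every status the standard library knows.
for status in sorted(HTTPStatus, key=lambda s: s.value):
    print(f"{status.value}, {status.phrase}")
```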

For the full documentation, see the HTTP/1.1 specification (RFC 2616).

Twine Invites; First Come First Served

I have a couple of Twine invites going spare if anyone wants one. Just mail me: firstname dot surname at gmail…

Twine and Linked Data

A little while back I wrote briefly about first impressions of Twine. Now that the recent flurry of Twine-related analysis has died down, and a few more people have had the chance to actually use the system, it’s probably a good time to look at what Twine has to offer from a Semantic Web point of view. Given Tim‘s recent post that emphasises the importance of Linked Data to the Semantic Web concept, and Nova Spivack’s follow-up post, the timing is even better.

Speaking briefly as Joe User, my first impression was that Twine doesn’t yet offer me any clear benefits over the services I already use. The Yet Another Popularity Arms Race is kind of fun while people build up their number of connections, but this masks a bigger issue that I’ll get to in a moment. Despite not planning to ditch Twine any time soon, I’m not going to criticise it particularly from a user perspective. Getting these things right is hard, and it is still in private beta. However, the one area where I have to comment (constructively, I hope) is regarding Twine’s use (or otherwise) of external data.

For me (and many others) the Semantic Web is all about structured, linked data, and the reuse potential this creates. I don’t get the impression that this is at all divergent with the view held at Radar Networks. Unfortunately this principle isn’t yet fully embodied in Twine as far as I can tell. That’s a real shame, and a missed opportunity to demonstrate the power of non-silos.

This issue struck me from the moment I signed up. There was no option to provide the URI of a FOAF file from which my profile could be populated with people I know, a photo, location data and links to other online accounts I hold. Instead I had to recreate all this information manually, despite much of it being out there on the public Web here, and here, and also here ready for consumption. I even had to upload a photo.

For an application that claims to be Semantic Web enabled this is almost unforgivable. Sure, not everyone has a FOAF file (but how many more have a photo online?), but for those who do and have wondered what to do with it this would be a great payback, and would in turn encourage more people to create one, or sign up with services like MyOpera (hey, that’s why I blog here) and Revyu that create FOAF on their behalf.

For me, probably the low point of this signup process from a Linked Data perspective was having to enter my location as a text string. In a world graced by DBpedia and Geonames this really shouldn’t be happening. In fact I’ve since gone back and replaced the textual location with the URI of Birmingham (UK) from DBpedia, but of course it’s not actually a link in either the HTML or RDF output.

Just in case anyone missed it, yes, there’s RDF data describing things in Twine. Hurray! Let’s not underestimate the significance of this. But, and I’m afraid there is one, Marshall Kirkpatrick’s comment about the lack of RSS output is just the tip of the iceberg. I don’t just want RSS, or fragments of RDF, I want Linked Data in RDF.

Sticking with the profile theme, when I signed up I added a number of links to Web pages with which I’m associated, such as this blog, my profile page on Revyu and the Platform site at Talis. To Twine’s credit these are all exposed in the RDF document about me that is generated from my profile data. Great. Umm, except that they’re referenced using a Twine-specific property rather than something in more widespread use, such as foaf:page. Likewise my “account” is not a sioc:User, and there’s no statement here saying that the URI that identifies me identifies a thing of type foaf:Person.

One of the key things about creating network effects on the Web of Data is not just reusing those URIs that identify “things” (like the place “Birmingham”), but reusing widely adopted properties and classes from vocabularies/ontologies such as FOAF, that are widely understood by applications. Of course there may be a mapping defined between the Twine property and foaf:page, but unfortunately I can’t tell, as the ontology URI just 404s. Linked Data principle number 3: “When someone looks up a URI, provide useful information.”
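For concreteness, the kind of profile triples I’m asking for might look like the sketch below, emitted as N-Triples using widely deployed vocabulary terms. All the subject and object URIs here are illustrative stand-ins, not Twine’s actual identifiers:

```python
# Sketch: profile data using widely understood FOAF terms rather than
# site-specific properties. URIs are hypothetical examples.
FOAF = "http://xmlns.com/foaf/0.1/"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def ntriple(s: str, p: str, o: str) -> str:
    """Serialise one triple (with a URI object) in N-Triples syntax."""
    return f"<{s}> <{p}> <{o}> ."

me = "http://example.org/item/1234"  # hypothetical 'thing' URI for a person
triples = [
    (me, RDF_TYPE, FOAF + "Person"),                  # what kind of thing I am
    (me, FOAF + "page", "http://example.org/blog"),   # a page about me
]
for s, p, o in triples:
    print(ntriple(s, p, o))
```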

It is pleasing to see that Twine has minted a URI for me that is distinct from the page on the site that describes me. This is definitely good. To really play nice in the world of Linked Data, however, there are a couple of other tweaks that are needed. If I dereference the URI that identifies me, I currently get a 302 Found response that redirects me to the page about me. The important bits of the headers look like this:

GET /item/1tjtp3mx-185 HTTP/1.1
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv: Gecko/20080311 Firefox/
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5

HTTP/1.x 302 Found
Content-Length: 0
Date: Fri, 28 Mar 2008 18:06:21 GMT
Content-Type: text/plain; charset=UTF-8

This needs to be changed to an “HTTP 303 See Other” redirect in order to be in line with the finding on httpRange-14. There is also some work to be done with the content negotiation on the site. At present, if I dereference my URI and ask for application/rdf+xml, I get a 200 OK response and an RDF document returned. It seems that I am not my homepage, but I am my RDF description.
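As a hedged sketch of the fix (the paths are hypothetical, not Twine’s actual routing), the 303-plus-content-negotiation behaviour might look like this:

```python
# httpRange-14-style dereferencing: a request for a 'thing' URI gets a
# 303 See Other redirect to a *document*, chosen by content negotiation.
# Paths are illustrative, not Twine's real URL scheme.
def resolve(path: str, accept: str) -> tuple[int, str]:
    """Map a request path and Accept header to (status code, location)."""
    if path.startswith("/item/"):
        item_id = path.removeprefix("/item/")
        if "application/rdf+xml" in accept:
            return 303, f"/rdf/{item_id}"   # RDF description of the thing
        return 303, f"/page/{item_id}"      # HTML page about the thing
    return 200, path                        # documents are served directly

# A person URI asked for as RDF redirects to the RDF description:
print(resolve("/item/1tjtp3mx-185", "application/rdf+xml"))
# → (303, '/rdf/1tjtp3mx-185')
```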

(The “How to Publish Linked Data…” tutorial has more on these issues.)

Weirdly the RDF I get back from this request is different to that from the RDF version of my profile page. This time I am a basic:Person (but not a foaf:Person), there’s no sign of my location or links to my other Web pages, but links to all my connections are given.

I imagine that all these sorts of niggles will be ironed out as the site develops further, but in the meantime Nova might like to slightly tame the claims he makes about support for Linked Data in Twine. Despite saying that “You can learn more about Twine’s support for Linked Data and see some examples here”, the example given does not show Linked Data, but simply an RDF fragment describing the book Jurassic Park. Perhaps the next iteration will have owl:sameAs links to descriptions of the book elsewhere on the Web of Data, and identify the author with a dereferenceable URI. Then there’ll really be some claims to make 😀

Having picked many holes, and hopefully provided some useful feedback, my final comment is a feature request. I think it is on Radar’s, err, radar, but deserves to be aired for the sake of completeness. One of the features I’d most like to see in Twine is greater native handling of different types of things. Right now I can only add one of a finite list of things (audio, book, bookmark, event, person, etc). In order to truly scale I think an open world view needs to be taken on this, where even the “Add Item > Other” menu has an “Other” option, and types can be drawn from data on the Semantic Web at large.

For example, right now “Review” is not an explicitly supported type. Nor is “Cheese”. I would like to be able to add a URI such as this to Twine, and the system then tell me that it’s a review, not the other way around. At that point the claim of being able to tie it all together will really hit home.
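A sketch of the open-world behaviour being asked for, with a stubbed-in lookup standing in for actually dereferencing the URI and reading its RDF (every URI and vocabulary term here is a hypothetical example):

```python
# Open-world type discovery: instead of the user picking from a fixed menu,
# the system dereferences the URI and reads the rdf:type of the thing.
# The dictionary is a hypothetical stand-in for fetching RDF from the Web.
WEB_OF_DATA = {
    "http://example.org/reviews/42": "http://purl.org/stuff/rev#Review",
    "http://example.org/cheese/brie": "http://example.org/ns#Cheese",
}

def discover_type(uri: str) -> str:
    """Return the type of the thing a URI identifies, or 'Other' if unknown."""
    return WEB_OF_DATA.get(uri, "Other")

print(discover_type("http://example.org/reviews/42"))
# → http://purl.org/stuff/rev#Review
```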

Making Links at the BBC

Ian and I spent last Friday at BBC Television Centre in London. For anyone of my generation who grew up in the UK this place probably has an almost mythical status, as the place to send your competition entries or milk bottle tops for the latest Blue Peter appeal. We were there for a workshop on the theme of the Semantic Web, organised by Nicholas Humfrey and Patrick Sinclair from BBC Audio and Music Interactive.

Not only was it a privilege to get a look inside this great institution, it was great to see so many BBC people turn up to hear about the Semantic Web. Nick and Patrick had put together a very nicely structured programme, introducing people to the Semantic Web from the conceptual level of Linked Data (that was my bit), through a talk on DBpedia by Georgi Kobilarov, to the highs and lows of enterprise scale RDF storage as revealed by Steve Harris of Garlik, and finally to interfaces for structured data as presented by Daniel Smith from the University of Southampton. Hope all the slides will be linked to from the BBC Radio Labs blog in due course. In the meantime you can find mine here.

Aside from the inherent pleasure associated with talking to people about the Semantic Web, the highlight of the day for me was getting a sneak preview of the Linked Data work that’s going on within the BBC, and will hopefully soon see the light of day on the public Web. The /programmes area of the BBC site will be home to large amounts of RDF data about programmes going out across all channels, and each will be identified by a dereferenceable URI.

This is a huge deal, and testament to the hard work put in by people like Nick, Patrick and Michael Smethurst from the BBC, with input from people like Yves. There is already a public commitment to linked data principles, but what impressed me most was the extent to which linking to external data sets seems to be baked into the thinking from day one. Expect to see strong links to Musicbrainz in the first instance, and no doubt to many more data sets over time.

The BBC are well ahead of the game here. They don’t have an angry mob of license-fee payers at the gates demanding access to BBC data in RDF, with chants of “give us our data, we’ve paid for it already” (or hopefully something more poetic). This mob will never materialise. They’ve seen the willingness of the BBC in this area with previous initiatives such as the Catalogue, and are down the pub dreaming up ways to use this data. With the advent of the current work on Linked Data and /programmes the non-mob have even more to dream about.

Perhaps as a publicly-funded organisation the BBC is obliged (morally or otherwise) to be a good citizen of the Web of Data. However, I don’t get the impression that that’s what this is about, in the first instance at least. I’m left with the feeling that this is a result of a bunch of guys really getting the Web of Data, and seeing the value that links can bring to their organisation.

Twine First Impression

David Peterson was kind enough to send me a Twine invite (thanks David 🙂). Aside from the obligatory half hour spent making lots of friends (again) and adding a few items to try things out, I haven’t really spent enough time with it to form strong impressions. However, the one thing that struck me while I was signing up was: why do I have to upload another photo of myself, that’s the same as the one I use on countless other sites and even bother to define in my FOAF file? This isn’t very Webby, let alone Semantic Webby. Hmm.

Tim Berners-Lee Talks with Talis, and plugs Linked Data

The Talking with Talis series reaches new heights today, with the addition of a podcast discussion with Tim Berners-Lee, the inventor of the Web. Yes, the inventor of the Web. OK, so probably everyone reading this blog knows that Tim invented the Web, but sometimes I have to stop and get a reality check. Once upon a time there was no Web. It just didn’t exist. Huh? Weird. I can only just make sense of that.

It’s much easier to reconnect with the excitement I felt when I first used the Web in 1995. (I know, I know!! Where was I all those years?). Now, more than ten years on, I feel the same kind of excitement about Linked Data and the Semantic Web. There are still many doubters out there, but efforts such as the Linking Open Data project give me strength. Tim plugs the project, and Linked Data in general, extensively in the Talis podcast (transcript, analysis).