
VoCamp, Day Zero

Tomorrow is the first day of the first ever VoCamp. It may also be the last ever VoCamp, but I hope and believe that will not be the case. Around 20 of us have gathered in Oxford for two days for an event from which none of us quite knows what to expect. The goal is to help drive forward the creation, publication and utilisation of vocabularies for describing data on the Web.

In the last couple of years we, as in the Semantic Web community, have learned a great deal about how to publish data on the Web. As we’ve become more familiar with this process we’ve got better at knowing where to look to find existing data that could be published online according to Semantic Web and Linked Data principles. What hasn’t kept pace with this process is the availability of vocabularies/ontologies for describing this data. I may now be able to get hold of data about changes in polar ice caps and polar bear migration patterns, but would bet money that there’s no vocabulary with which to describe this data. Choose almost any domain and the situation will be the same.

If we’re serious about building a Web of Data, then this issue has to change. I see this from my own work, but Peter Mika’s experiences at Yahoo!, and the strength of his conviction (conveyed very nicely in this blog post), provide some great confirmation that I’m not alone in this perception. The vocabulary bottleneck has to be eased.

So, tomorrow is a chance for us to start changing that. The solution won’t come overnight, but I hope that we can start the ball rolling. VoCampOxford2008, and any VoCamp in fact, is about creating some dedicated time and space to create and publish vocabularies in domains that interest us. We all have grand ideas about cool domains we could model and publish data in, dreamt up while waiting at the bus stop or the traffic lights, doing the washing up, wherever; but without some ring-fenced time in which to act on them these plans can easily come to nothing. VoCamp aims to solve that.

The primary success criterion for the next two days will be the publication of new vocabularies on the Web that increase the availability of Linked Data. That’s the main goal, but there are many others. I am confident that this first VoCamp will be an opportunity to share issues, expertise, modelling techniques and design patterns. In doing so we will all become smarter. There is an opportunity to scope requirements in the wider Semantic Web field that impact upon the availability and reuse of vocabularies. Collectively we can identify missing pieces of the technical infrastructure required by the Web of Data, and begin to build a social infrastructure that helps us ease the vocabulary bottleneck.

These are grand goals. Even if none of them were to be achieved, there is one other goal which I’m sure will be met; that is determining whether the VoCamp format works, and if so how. If the format fails, we’ll need to look elsewhere for a solution. If it succeeds, fully or partially, we’ll be closer to knowing how to do it even better next time.

Where is the business value in Linked Data?

Where is the business value in Linked Data? What is the business case for exposing your organisation’s data assets according to Linked Data principles and best practices, and being a good citizen of the Web of Data? Whenever I ask myself this question I’m tempted to give some trite answer like “you’ve got to be in it to win it”. Ultimately I think this is true, or at least will be in time, in just the same way that businesses in the nineties asked themselves about the value of having a Web site, and (hopefully) came to realise that this was a moot point; not having a Web site was not an option.

However, I’m impatient, and want to see everyone participating in a Web of Data sooner rather than later. I also want to have a meaningful answer when other people ask the business value question, that isn’t just a flimsy “trust me” or an arrogant “you’ll see”. With that in mind I’ve tried to clarify my thoughts on the subject and spell them out here.

The first issue to address relates to publishing data on the Web full-stop – we’ll get to Linked Data specifics later. APIs for public access to data are now widespread on the Web, with sites like Amazon and Flickr being good examples. A common reaction to this kind of arrangement is to think in terms of the data having been given away, and wonder about how this affects the bottom line.

For both Amazon and Flickr the data represents a core asset, but the route of openness has enabled them to foster communities that use this data and drive revenue generation in other areas, whether that’s selling stuff (Amazon) or collecting annual subscription fees (Flickr). People may pay for goods or pay an annual subscription, but my guess is that (perhaps in contrast to enterprises) individuals are unlikely to pay in large numbers for data. In the case of Amazon the data either isn’t really *that* important to people, is available from other sources, or would become available from other sources if Amazon began to charge at all, or charged more than a nominal amount. For Flickr the same rules apply, except that people are even less likely to pay a separate fee to access a pool of data that they themselves have contributed to. The key point here is that providing APIs to their data has allowed Amazon and Flickr to drive additional traffic into their established revenue channels.

Seen this way, an organisation with rich data assets has two choices. The first is to open up access to its data, and understand that the challenge is now not just about having quality data, but enabling others to create value around these assets and therefore ultimately do the same for the organisation. The second option is to keep the data locked away like the crown jewels, while the organisation and the data itself are slowly rendered irrelevant by cheaper or more open alternatives.

An interesting example in this case is the UK Government’s approach to the Ordnance Survey, the national mapping agency. Rather than accepting that taxpayers have already financed the creation of the OS’s phenomenal data assets, and should therefore have the right to re-use these as they see fit, the UK government requires the OS to generate revenue. Whilst the OS itself is making some great efforts to participate in the Semantic Web, to a large extent their hands are tied. This opens the door (or creates the door in the first place) for initiatives such as OpenStreetMap.

The kind of scenario I can imagine is this: the government continues to not “get it”, Ordnance Survey data remains largely inaccessible to those who can’t afford to license it, OpenStreetMap data becomes good enough for 80% of use cases, fewer people license OS data, OS raises prices to recoup the lost revenue, less popular locations stop being mapped as they are deemed unprofitable, even fewer people buy OS data, the OS and all its data assets are sold at a fraction of their former “value”.

What the UK government doesn’t fully understand (despite things like the “Show us a better way” competition), but has been well demonstrated in the US, is that opening up access to data creates economic benefits in the wider economy that can far outstrip those gained from keeping the data closed and attempting to turn it into a source of revenue. Organisations whose data assets have not been created using public funds may not have the same moral obligations to do so, but the options remain the same: open up or be rendered irrelevant by someone who does.

So if the choice is between openness or obsolescence, how does Linked Data help? Let’s look at Amazon and Flickr again. Both these services make data easily available, but have compelling reasons for data consumers to link back to the original site, whether that’s to gain revenue from affiliate schemes or to save the hassle of having to host one’s own photos at many different resolutions. The net result is the same in both scenarios: more traffic channelled to the site of the data provider.

A typical Web2.0 scenario is that data is accessed from an API, processed in some way, and re-presented to users in a form that differs somehow from the original offering provided by the data publisher — a mashup. This difference may be in the visual presentation of the data, in added value created by combining the data with that from other sources, or in both. Either way, this kind of mashup is likely to be presented to the user as an HTML document, perhaps with some AJAX embellishments to improve the user experience.

The extent to which the creator of the mashup chooses to link back to the data source is a function of the rewards on offer and the conditions under which the data can be used. Not all services will have the same compelling reasons for data consumers to link back to the data providers themselves, as not all data publishers will be able to afford the kind of affiliate scheme run by Amazon. However, even in cases such as a book mashup based on Amazon data, where the creator links back to Amazon prominently in order to gain affiliate revenue, both the data publisher and the application creator lose. Or at the very least they don’t win as much as they could.

This may sound counter-intuitive, so let’s look at the details. In processing data to create a mashup, the connection between the data and the data provider is effectively lost. This is a result of how conventional Web APIs typically publish their data. The code snippet below shows data from the Amazon E-commerce Service about the book “Harry Potter and the Deathly Hallows”:

<itemAttributes>
  <author>J. K. Rowling</author>
  <creator Role="Illustrator">Mary GrandPré</creator>
  <manufacturer>Arthur A. Levine Books</manufacturer>
  <productGroup>Book</productGroup>
  <title>Harry Potter and the Deathly Hallows (Book 7)</title>
</itemAttributes>

If you look at elements such as <author>, you’ll see that author names are given simply as text strings. The author herself is not identified in a way that other data sources on the Web can point to. She does not have a unique identity, but exists only in the context of this document that describes a particular book. There is no unique identifier for this person that can be looked up to obtain more information. As a result this output from Amazon represents a data “blind alley” from which there’s nowhere to go. There is nothing in the data itself that leads anywhere, or even points back to the source – in effect the connection between the data and the data publisher is lost.

The connection between publisher and data may be reinstated to some degree in the form of HTML links back to the data source, but by this point the damage is done. These links are tenuous at best and enforced mainly by economic incentives or licensing requirements. In Web2.0-style mashups based on these principles there is no reliable way to express the relationships between the various pieces of source data in a way that can be reused to build further mashups – the effort is expended once for a human audience and then lost.

In contrast, Linked Data mashups (or “meshups” as they sometimes get called) are simply statements linking items in related data sets. Crucially these items are identified by URIs starting “http://”, each of which may have been minted in the domain of the data publisher, meaning that whenever anyone looks up one of these URIs they may be channelled back to the original data source. It is this feature that creates the business value in Linked Data compared to conventional Web APIs. Rather than releasing data into the cloud untethered and untraceable, Linked Data allows organisations and individuals to expose their data assets in a way that is easily consumed by others, whilst retaining indicators of provenance and a means to capitalise on or otherwise benefit from their commitment to openness. Minting URIs to identify the entities in your data, and linking these to related items in other data sets, presents an opportunity to channel traffic back to conventional Web sites whenever someone looks up those URIs. It is this process that generates business value through Linked Data principles.
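
To make the contrast concrete, here is a rough sketch (in Turtle) of how the same book record might look when published as Linked Data. The example.org URIs are hypothetical placeholders for identifiers a publisher would mint in its own domain, and Dublin Core and FOAF are just one plausible choice of vocabularies; the point is simply that the book and its author become resources with their own URIs rather than strings buried inside a document.

@prefix dc:   <http://purl.org/dc/elements/1.1/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .

# A hypothetical URI for the book, minted in the publisher's own domain
<http://example.org/books/harry-potter-deathly-hallows>
  dc:title   "Harry Potter and the Deathly Hallows (Book 7)" ;
  dc:creator <http://example.org/people/j-k-rowling> .

# The author becomes a resource in her own right, not just a text string,
# and can be linked to descriptions of her elsewhere on the Web of Data
<http://example.org/people/j-k-rowling>
  a foaf:Person ;
  foaf:name  "J. K. Rowling" ;
  owl:sameAs <http://dbpedia.org/resource/J._K._Rowling> .

Anyone who re-publishes these statements carries the publisher’s URIs along with the data, so looking up the author or the book leads straight back to the original source – exactly the channelling effect described above.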

Common Sense prevails at Triple-I

I’m here in Graz for Triple-I. All credit to Klaus Tochtermann, Hermann Maurer and many others for putting together what is shaping up to be a great event. Things got off to a good start with the first keynote, from Henry Lieberman, who talked about how we manage and utilise common sense knowledge. The talk got me thinking on many levels, such as how we build task- or goal-oriented interfaces, and whether we might be able to get the 21 “most commonly used relationships in common sense statements” into an appropriate form for use on the Semantic Web. With the bar already set very high, I’d best go and work some more on the slides for my own keynote on Friday, titled “Humans and the Web of Data”.

Semantic Web In Use at ISWC2009

ISWC2008 isn’t upon us just yet, but already the preparations for ISWC2009 are under way. I’m pleased to say that I’ll be serving as co-chair of the Semantic Web In Use track alongside Lee Feigenbaum.

In my experience the ISWC series has been growing steadily in strength year on year and, while I’m inevitably biased, last year’s conference did seem like a watershed moment for the series and the Semantic Web as a whole. There was a tangible energy in the air that suggested the Semantic Web was no longer just a vision, but both real and inevitable. It will be interesting to see in Karlsruhe how things are shaping up one year on. I can only speculate about where we’ll be by autumn 2009, but I’m very much looking forward to finding out.

Leigh Dodds goes public about his move to Talis

Leigh Dodds has just blogged publicly about his forthcoming move to Talis. From 1st September he’ll be joining us as Programme Manager for the Talis Platform. I’m personally really excited about having Leigh on board – he’s been an impressive figure on the Semantic Web scene for quite some time; IIRC I even used his FOAF-a-matic tool to create my first FOAF file back in the day. Not only will he bring some impressive skills to the company, but his move here further demonstrates that we can attract top-class Semantic Web talent. Leigh, welcome on board 🙂

What's with the images in Cuil?

I’ve just been having a play with Cuil. In general I really like it, particularly the richer layout. What is very weird (aka rubbish) though is the algorithm they’re using to select images for display next to each result. A quick search for Talis shows some relatively sensible accompanying images, although I’m not sure who the young guy with the beard is.

A bit of vanity searching though throws up all sorts of weirdness. This time who is the old dude with the beard known as 303 See Other? He looks kind of familiar, but there’s no way it’s me. And who’s the other young guy with the wispy chin hair, and why is he squatting on my publications page? I like the juxtaposition of Linked Data and the Killer App image, but why? There seem to be far too many false positives, so come on Cuil, up the confidence threshold slightly.

VoCamp – Tackling the Vocabulary Bottleneck

The last 18 months have seen amazing progress in the world of Linked Data, but we now face a new challenge: availability of vocabularies to describe this data. OK, so it’s not really a new challenge at all, but this time it’s real, and urgent. Anyone stumbling across a tasty open data set on the Web is generally faced with the decision of whether to create the necessary vocabulary with which to describe the data, or walk away and find something to do that is more immediately gratifying. There just isn’t a critical mass of existing vocabularies with which to describe the data that is already out there on the Web.

Out of the desire to do something about this issue, and spurred on by discussions with a number of people in the community, Richard Cyganiak and I have set a ball rolling called VoCamp – lightweight, informal hackfests, where motivated people can get together and spend some dedicated time creating vocabularies/ontologies in any area that interests them. Thanks to the generous efforts of David Shotton and Jun Zhao, the first VoCamp will take place in Oxford in late September.

We hope that this is a ball that will roll beyond that one event, and are already talking to others who have expressed an interest in hosting a VoCamp where they are based. If you want to see the Web of Data realised, and share our view that the vocabulary bottleneck is just a little bit restrictive, perhaps you’d like to run a VoCamp where you live/work (or anywhere else you like). It’s very easy, just drop me (firstname.surname at talis.com) and Richard (firstname.surname at deri.org) an email and we’ll point you in the right direction.

Old Web Site, New Location

After leaving it to languish for years, I’ve finally made some good use of tomheath.com, which is the new location for my Web site that previously lived at http://kmi.open.ac.uk/people/tom/html. The content is pretty much the same, and badly needs an update, but this is the first stage of my migration from Web hosting at KMi. No plans to move this blog just yet, although without a few improvements from the my.opera team I might be tempted.

Continental In-Flight Entertainment Runs Linux

Watching any system spontaneously reboot is a slightly unnerving experience, especially when it’s on a Boeing 757 during take-off. This happened to me last night on a Continental flight back to the UK after Linked Data Planet, and luckily (at least as far as I know) it was only the in-flight entertainment system that restarted, presumably at the hands of one of the cabin crew.

I’m guessing the cabin crew aren’t geeks, but I was mildly entertained to see that the system runs on Linux, with the penguin there for all to see. I couldn’t get any photos of the startup messages, as turning on my phone at that point seemed like a bad idea, for so many reasons, but watching the whole process was mildly more entertaining than a game of TuxRacer. The only disappointment came when I got back to Bristol airport and saw that the machine running the baggage carousel screens was behind with its Windows updates. Sigh.

Powerset: More Than Just a Pretty Face?

For this month’s Semantic Web Gang podcast we were joined by Barney Pell from Powerset, who recently launched a public beta of their long-awaited natural language query engine operating over Wikipedia data. Amid all the buzz, it was great to hear about Powerset straight from the horse’s mouth, and it prompted me to spend some time exploring the system. This post is about what I found.

I took Charlie Chaplin as my starting point, wanting a topic that should have fairly broad coverage, and asked “who did Charlie Chaplin marry?”. Powerset returned the name “Mildred Harris” in the results, which seemed like a fairly reasonable response. I have no idea if it’s correct, but looking for the same information via DBpedia I found two answers: Mildred Harris and Paulette Goddard. Interesting that Powerset didn’t pick up both of those, or at least it didn’t show me those in the first set of results.
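
For anyone wanting to repeat that lookup, a SPARQL query along the following lines, run against the DBpedia endpoint at http://dbpedia.org/sparql, should return Chaplin’s spouses. Treat the property name as an assumption on my part – the predicate actually used varies between DBpedia releases – but the shape of the query is the interesting bit:

PREFIX dbo: <http://dbpedia.org/ontology/>

# "who did Charlie Chaplin marry?"
SELECT ?spouse WHERE {
  <http://dbpedia.org/resource/Charlie_Chaplin> dbo:spouse ?spouse .
}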

Interestingly the results page for this query shows “Factz” at the top that the Powerset algorithms have extracted from the Wikipedia articles, presented (broadly speaking) in the form of subject, predicate and object triples, e.g. “Chaplin married actress, Mildred Harris”, and showing the sentence context from which they were extracted. At a general level this reminds me of Vanessa’s work on PowerAqua, which breaks queries down into “linguistic triples” and operates pretty impressively over existing RDF data sets. I can’t help feeling that Powerset’s triple extraction algorithms and the PowerAqua query engine could be an interesting combination.

Underneath the “Factz” at the top of the results page are a series of “Wikipedia Article” results, the first of which contains the sentence from which the “Chaplin married actress, Mildred Harris” information is extracted. The key parts of this sentence are also highlighted, enabling me to pick out the information that answered my question (in part at least).

By this point I was fairly taken with the interface, which is sweeter eye candy than either Wikipedia or DBpedia, but not necessarily faster than either, and may be guilty of presenting only half the picture. I’m also not yet convinced that, if we took a large sample of natural language queries and compared the results returned, Powerset would significantly outperform Google, who are consistently good at highlighting in their search results the passage of a document that is relevant to the query. Of course Google uses a much larger corpus than Powerset, but it’s interesting to note that the summary of the first result for the Charlie Chaplin query on Google reads “Charlie Chaplin was married four times and had 11 children between 1919 and 1962”.

To continue my exploration I tried another natural language query: “what is the population of brazil?”. This would seem like something of a no-brainer for a search engine with any semantic capabilities, and access to the rich knowledge bound up in Wikipedia. However, this time there were no headline “Factz” helping the answer to jump out at me. Instead there were Wikipedia Article results, the first of which was a node titled “Population of Brazil” that comes with an accompanying chart, but does not show the actual answer based on the latest available figures. Result number 4 (“Economy of Brazil”) does have as its result summary the text “In the space of fifty five years (1950 to 2005), the population of Brazil grew by 51 million to approximately 180 million inhabitants, an increase of over 2% per year”, but none of this is highlighted as the answer to my question.

Going back to the Charlie Chaplin example, I followed the associative links in my own mental Web and arrived at the entry for “Waibaidu Bridge”, an historic landmark on the Shanghai waterfront, located (when it’s not been taken away for repairs) just down the street from the Astor House Hotel, another Shanghai landmark where Chaplin apparently stayed on more than one occasion. Waibaidu Bridge has an entry on Wikipedia, and therefore also an entry on DBpedia (http://dbpedia.org/resource/Waibaidu_Bridge) and in Powerset.

The Wikipedia entry itself is a really nice one; just enough historical background to be useful, a couple of bits of trivia (the bridge features briefly in the film Empire of the Sun), and a manually compiled list of places nearby. All of this is visible in Powerset, wrapped in their rather more 2008 interface. There are also a number of “Factz” extracted from the text of the Wikipedia article and presented in a box on the right. These are simply more of the subject, predicate, object triples mentioned previously, and sadly they add little value to the article. Here are some examples from the first section of the article:

* name bears name
* Waibaidu bore name
* citizens use ferries
* decade(1850) increases need

There are a couple that capture key elements from the article:

* Wales built bridge (note that this was a person named Wales, rather than the country Wales)
* Chinese paid toll (reflecting the history of the original Waibaidu bridges and the discriminatory tolls charged to Chinese people crossing them)

However these are mostly drowned out by the surrounding noise:

* ferry eases traffic
* Outer ferried cross
* powers restrict people

In the end it’s quicker just to read the article, as you’ll need to do so anyway to understand the “Factz” and check that they stand up. The “ferry eases traffic” “Fact” is actually incorrect, as the sentence from which this is extracted reads “In 1856, a British businessman named Wales built a first, wooden bridge at the location of the outermost ferry crossing to ease traffic between the British Settlement to the south, and the American Settlement to the north of Suzhou River.”, which has quite the opposite implication.

All this aside, one glaring omission from Powerset struck me when looking at this page, and it was this that really made me wonder whether Powerset is anything more than just a pretty face. Some thoughtful geodata geek has made the effort to record the geo-coordinates of Waibaidu Bridge in the Wikipedia entry; 31°14’43″N, 121°29’7.98″E apparently. Now Wikipedia doesn’t seem to do anything in particular with this data; the list of places nearby is manually compiled. I’ll forgive them this, as I imagine they have their hands full with keeping the whole operation running. Perhaps if I donated some money they would consider doing this by default for all entries with geo-coordinates.

However, what isn’t so easily forgiven is Powerset ignoring this information completely, not even bothering to start the page with a small Google map next to the nice old photo showing the view to the Bund, let alone thinking to use a service like Geonames to compute from the Web of Data a list of nearby places. (For the record, DBpedia doesn’t do this itself, but by making the effort to link items across the Wikipedia and Geonames data sets it does the majority of the hard work already.) For an application that gets so closely associated with the Semantic Web effort (whether Powerset desires this or not) I find this omission quite sad. It’s such a no-brainer, and beautifully demonstrates the kind of thing that will separate Semantic Web applications from just more closed-world systems that happen to do something smart. I put this question about use of external data sets to Barney in the Gang podcast, but, either due to the intensity of the medium or bad communication on my part due to my cold-addled brain, the true meaning of my question was lost.
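
To give a feel for how little would be needed: the coordinates are already out there in the Web of Data via the W3C geo vocabulary, so a query like the sketch below against the DBpedia SPARQL endpoint pulls them out (assuming, as I believe is the case, that the Waibaidu Bridge resource carries geo:lat and geo:long properties). The resulting latitude and longitude could then be handed to something like the Geonames nearby-places service to generate the missing list automatically.

PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>

# fetch the coordinates of Waibaidu Bridge from DBpedia
SELECT ?lat ?long WHERE {
  <http://dbpedia.org/resource/Waibaidu_Bridge> geo:lat ?lat ;
                                                geo:long ?long .
}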

The question of when Powerset will open up its technology to other text sources, and even the Web at large, always comes up. For me this is a less interesting question than the one about when/if the company will make use of existing structured data sets in their user-facing tools. I hope that with time, and perhaps less pressure now that a product is out the door, Powerset will implement the kind of features I talk about above, as the starting point for becoming a true Semantic Web application. Until then however, the current product will be, for me, really just Wikipedia hiding behind a pretty face.