Archive for the 'Semantic Web' Category

Raw Data Now Dot Com

Last summer I registered the domain name rawdatanow.com. I was at Linked Data Planet in New York, listening to TimBL give his keynote, and was struck by his rallying cry for RAW DATA NOW!! The idea made perfect sense: concentrate on getting your data out (soundtrack by danja), and worry about the shiny interface later; or, better still, publish the data according to Web standards and Linked Data principles and empower others to create the shiny interfaces that are meaningful and useful to them.

According to TimBL’s slides from TED the meme comes from this post by Rufus Pollock, but the clarity of the call was new to me (update: according to delicious I bookmarked Rufus’s post on 14th Nov 2007; obviously I wasn’t paying attention properly) and it encouraged me to start the Linked Data Shopping List (which could do with some more attention). I also had in mind a grassroots campaign to promote the idea (full-page ads in national newspapers signed by as many people as we could get, that sort of thing), which is why I bought the domain name. But it’s been a busy year, and between the ongoing efforts to sustain and increase the momentum of the Linked Data movement, and trying to get VoCamp off the ground, I’ve had no spare time to devote to further community activities.

So, here’s my offer to the community: if you have a compelling story to tell about how we can encourage huge numbers of organisations and individuals to provide RAW DATA NOW, and being able to use rawdatanow.com would help with that, please let me know by blog comment, email (firstname.surname@gmail.com) or identi.ca.

Update (2009-03-13): the domain http://rawdatanow.com/ now points to TimBL’s talk from TED2009, but the offer still stands.

Yes, the Semantic Web does matter, and RDF is a key part of that picture

Paul Miller has a nice new post over at ZDNet, entitled Does the Semantic Web matter? He ultimately concludes ‘yes’, and I agree, but some of the details raised an eyebrow.

“Continuing landgrabs by startups that seek to attract, trap and exploit eyeballs stand unashamedly on the shoulders of Semantic Web promise whilst running counter to its basic tenets of linking and openness. On the other hand, companies ‘just’ doing perfectly reasonable – and valuable – things with the meanings of words, phrases and documents latch on to the Semantic Web’s buzz, whilst being all about Semantics and not at all about the Web.”

I have to agree, almost violently, with both these points.

One passage I can’t agree with, however, is this:

“The speed with which ‘RDF’ or ‘OWL’ enter any conversation about the Semantic Web is worrying; and must ultimately prove self-defeating as potential adopters retreat from a barrage of terminology and an opaque glut of unnecessary detail.”

This may be a fair criticism with regard to OWL, but saying it of RDF is like criticising discussions of the Web in the early 90s for quickly coming down to the details of HTML. Yes, we need to focus on what we can do with the technology, but let’s not kick back too hard against discussing the technical details.

URIs and the RDF data model are exactly what enables the Semantic Web “proper” to address the issue of linking that Paul rightly criticises many startups for not properly addressing. We can’t hope to understand or predict the emergent properties of a Semantic Web without understanding the fundamental components of that Web, and right now RDF is about as fundamental as the components come.
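
To make the point concrete, here is a minimal sketch in Turtle (with entirely hypothetical URIs) of the kind of statement the RDF data model enables: data in one place pointing directly at a resource minted, and described, somewhere else.

@prefix dct: <http://purl.org/dc/terms/> .

# A photo-sharing site asserts that one of its photos was created by
# a person identified by a URI minted in another site's domain;
# looking up that URI leads directly to the second data source.
<http://photos.example.org/photos/1234>
    dct:creator <http://people.example.com/id/alice> .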

VoCamp, Day Zero

Tomorrow is the first day of the first ever VoCamp. It may also be the last ever VoCamp, but I hope and believe that will not be the case. Around 20 of us have gathered in Oxford for two days for an event from which none of us quite knows what to expect. The goal is to help drive forward the creation, publication and utilisation of vocabularies for describing data on the Web.

In the last couple of years we, as in the Semantic Web community, have learned a great deal about how to publish data on the Web. As we’ve become more familiar with this process we’ve got better at knowing where to look to find existing data that could be published online according to Semantic Web and Linked Data principles. What hasn’t kept pace is the availability of vocabularies/ontologies for describing this data. I may now be able to get hold of data about changes in polar ice caps and polar bear migration patterns, but I would bet money that there’s no vocabulary with which to describe this data. Choose almost any domain and the situation will be the same.
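
To illustrate the gap, a first cut at such a vocabulary need not be elaborate. Here is a minimal sketch in Turtle; every term in it is hypothetical, precisely because nothing like it yet exists:

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix bear: <http://example.org/vocab/polar-bears#> .

# A class for the things the data describes...
bear:MigrationObservation a rdfs:Class ;
    rdfs:label "Migration Observation" ;
    rdfs:comment "A recorded sighting of a polar bear on migration." .

# ...and a couple of properties for describing them.
bear:observationDate a rdf:Property ;
    rdfs:domain bear:MigrationObservation ;
    rdfs:label "observation date" .

bear:location a rdf:Property ;
    rdfs:domain bear:MigrationObservation ;
    rdfs:label "location" .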

If we’re serious about building a Web of Data, then this issue has to change. I see this from my own work, but Peter Mika’s experiences at Yahoo!, and the strength of his conviction (conveyed very nicely in this blog post), provide some great confirmation that I’m not alone in this perception. The vocabulary bottleneck has to be eased.

So, tomorrow is a chance for us to start changing that. The solution won’t come overnight, but I hope that we can start the ball rolling. VoCampOxford2008, and any VoCamp in fact, is about creating some dedicated time and space to create and publish vocabularies in domains that interest us. We all have grand ideas, while waiting at the bus stop or the traffic lights, or doing the washing up, about cool domains we could model and in which we could publish data, but without some ring-fenced time in which to do so these plans can easily come to nothing. VoCamp aims to solve that.

The primary success criterion for the next two days will be the publication of new vocabularies on the Web that increase the availability of Linked Data. That’s the main goal, but there are many others. I am confident that this first VoCamp will be an opportunity to share issues, expertise, modelling techniques and design patterns. In doing so we will all become smarter. There is an opportunity to scope requirements in the wider Semantic Web field that impact upon the availability and reuse of vocabularies. Collectively we can identify missing pieces of the technical infrastructure required by the Web of Data, and begin to build a social infrastructure that helps us ease the vocabulary bottleneck.

These are grand goals. Even if none of them were to be achieved, there is one other goal which I’m sure will be met: determining whether the VoCamp format works, and if so how. If the format fails, we’ll need to look elsewhere for a solution. If it succeeds, fully or partially, we’ll be closer to knowing how to do it even better next time.

Where is the business value in Linked Data?

Where is the business value in Linked Data? What is the business case for exposing your organisation’s data assets according to Linked Data principles and best practices, and being a good citizen of the Web of Data? Whenever I ask myself this question I’m tempted to give some trite answer like “you’ve got to be in it to win it”. Ultimately I think this is true, or at least will be in time, in just the same way that businesses in the nineties asked themselves about the value of having a Web site, and (hopefully) came to realise that the question was moot: not having a Web site was not an option.

However, I’m impatient, and want to see everyone participating in a Web of Data sooner rather than later. I also want to have a meaningful answer when other people ask the business value question, that isn’t just a flimsy “trust me” or an arrogant “you’ll see”. With that in mind I’ve tried to clarify my thoughts on the subject and spell them out here.

The first issue to address relates to publishing data on the Web, full stop – we’ll get to Linked Data specifics later. APIs for public access to data are now widespread on the Web, with sites like Amazon and Flickr being good examples. A common reaction to this kind of arrangement is to think in terms of the data having been given away, and to wonder how this affects the bottom line.

For both Amazon and Flickr the data represents a core asset, but the route of openness has enabled them to foster communities that use this data and drive revenue generation in other areas, whether that’s selling stuff (Amazon) or collecting annual subscription fees (Flickr). People may pay for goods or pay an annual subscription, but my guess is that (perhaps in contrast to enterprises) individuals are unlikely to pay in large numbers for data. In the case of Amazon the data either isn’t really *that* important to people, is available from other sources, or would become available from other sources if Amazon began to charge at all, or charged more than a nominal amount. For Flickr the same rules apply, except that people are even less likely to pay a separate fee to access a pool of data that they themselves have contributed to. The key point here is that providing APIs to their data has allowed Amazon and Flickr to drive additional traffic into their established revenue channels.

Seen this way, an organisation with rich data assets has two choices. The first is to open up access to its data, and understand that the challenge is now not just about having quality data, but about enabling others to create value around these assets and therefore ultimately do the same for the organisation. The second option is to keep the data locked away like the crown jewels, while the organisation and the data itself are slowly rendered irrelevant by cheaper or more open alternatives.

An interesting example in this case is the UK Government’s approach to the Ordnance Survey, the national mapping agency. Rather than accepting that taxpayers have already financed the creation of the OS’s phenomenal data assets, and should therefore have the right to re-use them as they see fit, the UK government requires the OS to generate revenue. Whilst the OS itself is making some great efforts to participate in the Semantic Web, to a large extent its hands are tied. This opens the door (or creates the door in the first place) for initiatives such as OpenStreetMap.

The kind of scenario I can imagine is this: the government continues not to “get it”; Ordnance Survey data remains largely inaccessible to those who can’t afford to license it; OpenStreetMap data becomes good enough for 80% of use cases; fewer people license OS data; the OS raises prices to recoup the lost revenue; less popular locations stop being mapped as they are deemed unprofitable; even fewer people buy OS data; and finally the OS and all its data assets are sold at a fraction of their former “value”.

What the UK government doesn’t fully understand (despite things like the “Show us a better way” competition), but has been well demonstrated in the US, is that opening up access to data creates economic benefits in the wider economy that can far outstrip those gained from keeping the data closed and attempting to turn it into a source of revenue. Organisations whose data assets have not been created using public funds may not have the same moral obligations to do so, but the options remain the same: open up or be rendered irrelevant by someone who does.

So if the choice is between openness and obsolescence, how does Linked Data help? Let’s look at Amazon and Flickr again. Both these services make data easily available, but give data consumers compelling reasons to link back to the original site, whether that’s to gain revenue from affiliate schemes or to save the hassle of hosting one’s own photos at many different resolutions. The net result is the same in both cases: more traffic channelled to the site of the data provider.

A typical Web2.0 scenario is that data is accessed from an API, processed in some way, and re-presented to users in a form that differs somehow from the original offering provided by the data publisher — a mashup. This difference may be in the visual presentation of the data, in added value created by combining the data with that from other sources, or in both. Either way, this kind of mashup is likely to be presented to the user as an HTML document, perhaps with some AJAX embellishments to improve the user experience.

The extent to which the creator of the mashup chooses to link back to the data source is a function of the rewards on offer and the conditions under which the data can be used. Not all services will have the same compelling reasons for data consumers to link back to the data providers themselves, as not all data publishers will be able to afford the kind of affiliate scheme run by Amazon. However, even in cases such as a book mashup based on Amazon data, where the creator links back to Amazon prominently in order to gain affiliate revenue, both the data publisher and the application creator lose. Or at the very least they don’t win as much as they could.

This may sound counter-intuitive, so let’s look at the details. In processing data to create a mashup, the connection between the data and the data provider is effectively lost. This is a result of how conventional Web APIs typically publish their data. The code snippet below shows data from the Amazon E-commerce Service about the book “Harry Potter and the Deathly Hallows”:

<itemAttributes>
  <author>J. K. Rowling</author>
  <creator Role="Illustrator">Mary GrandPré</creator>
  <manufacturer>Arthur A. Levine Books</manufacturer>
  <productGroup>Book</productGroup>
  <title>Harry Potter and the Deathly Hallows (Book 7)</title>
</itemAttributes>

If you look at elements such as <author>, you’ll see that author names are given simply as text strings. The author herself has no unique identifier that other data sources on the Web can point to, or that can be looked up to obtain more information; she exists only in the context of this document describing a particular book. As a result this output from Amazon is a data “blind alley”: nothing in the data leads anywhere else, or even points back to the source, so the connection between the data and the data publisher is lost.

The connection between publisher and data may be reinstated to some degree in the form of HTML links back to the data source, but by this point the damage is done. These links are tenuous at best and enforced mainly by economic incentives or licensing requirements. In Web2.0-style mashups based on these principles there is no reliable way to express the relationships between the various pieces of source data in a way that can be reused to build further mashups – the effort is expended once for a human audience and then lost.

In contrast, Linked Data mashups (or “meshups” as they are sometimes called) are simply statements linking items in related data sets. Crucially these items are identified by URIs starting “http://”, each of which may have been minted in the domain of the data publisher, meaning that whenever anyone looks up one of these URIs they may be channelled back to the original data source. It is this feature that creates the business value in Linked Data compared to conventional Web APIs. Rather than releasing data into the cloud untethered and untraceable, Linked Data allows organisations and individuals to expose their data assets in a way that is easily consumed by others, whilst retaining indicators of provenance and a means to capitalise on, or otherwise benefit from, their commitment to openness. Minting URIs to identify the entities in your data, and linking these to related items in other data sets, creates the opportunity to channel traffic back to conventional Web sites whenever someone looks up those URIs.
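
To make that concrete, here is a sketch in Turtle of how the Amazon record above might look published as Linked Data, followed by a one-triple “meshup” linking it into another data set. All the URIs are hypothetical, but each could be looked up, and each look-up leads back to the domain of the publisher that minted it:

@prefix dct: <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# The book, its author and its publisher each get their own URI,
# minted in the data publisher's domain.
<http://data.example.com/books/harry-potter-deathly-hallows>
    dct:title "Harry Potter and the Deathly Hallows (Book 7)" ;
    dct:creator <http://data.example.com/people/j-k-rowling> ;
    dct:publisher <http://data.example.com/orgs/arthur-a-levine-books> .

# The author is now a first-class resource that anyone, anywhere
# on the Web, can describe or point to.
<http://data.example.com/people/j-k-rowling>
    foaf:name "J. K. Rowling" .

# A meshup can then be as simple as a single statement linking the
# book to the same book in a library's data set; looking up either
# URI channels traffic back to the respective data publisher.
<http://data.example.com/books/harry-potter-deathly-hallows>
    owl:sameAs <http://library.example.org/id/book/deathly-hallows> .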

Semantic Web In Use at ISWC2009

ISWC2008 isn’t upon us just yet, but already the preparations for ISWC2009 are under way. I’m pleased to say that I’ll be serving as co-chair of the Semantic Web In Use track alongside Lee Feigenbaum.

In my experience the ISWC series has been growing steadily in strength year on year and, while I’m inevitably biased, last year’s conference seemed like a watershed moment for the series and the Semantic Web as a whole. There was a tangible energy in the air that suggested the Semantic Web was no longer just a vision, but both real and inevitable. It will be interesting to see in Karlsruhe how things are shaping up one year on. I can only speculate about where we’ll be by autumn 2009, but I’m very much looking forward to finding out.

Leigh Dodds goes public about his move to Talis

Leigh Dodds has just blogged publicly about his forthcoming move to Talis. From 1st September he’ll be joining us as Programme Manager for the Talis Platform. I’m personally really excited about having Leigh on board – he’s been an impressive figure on the Semantic Web scene for quite some time; IIRC I even used his FOAF-a-matic tool to create my first FOAF file back in the day. Not only will he bring some impressive skills to the company, but his move here further demonstrates that we can attract top-class Semantic Web talent. Leigh, welcome on board 🙂

VoCamp – Tackling the Vocabulary Bottleneck

The last 18 months have seen amazing progress in the world of Linked Data, but we now face a new challenge: the availability of vocabularies to describe this data. OK, so it’s not really a new challenge at all, but this time it’s real, and urgent. Anyone stumbling across a tasty open data set on the Web is generally faced with the decision of whether to create the necessary vocabulary with which to describe the data, or to walk away and find something to do that is more immediately gratifying. There just isn’t a critical mass of existing vocabularies with which to describe the data that is already out there on the Web.

Out of the desire to do something about this issue, and spurred on by discussions with a number of people in the community, Richard Cyganiak and I have set a ball rolling called VoCamp – lightweight, informal hackfests, where motivated people can get together and spend some dedicated time creating vocabularies/ontologies in any area that interests them. Thanks to the generous efforts of David Shotton and Jun Zhao, the first VoCamp will take place in Oxford in late September.

We hope that this is a ball that will roll beyond that one event, and we are already talking to others who have expressed an interest in hosting a VoCamp where they are based. If you want to see the Web of Data realised, and share our view that the vocabulary bottleneck is just a little bit restrictive, perhaps you’d like to run a VoCamp where you live or work (or anywhere else you like). It’s very easy: just drop me (firstname.surname at talis.com) and Richard (firstname.surname at deri.org) an email and we’ll point you in the right direction.