December 18, 2008
I just posted a summary of the current data portability landscape to the Official DataPortability Blog.
From the post:
Closed platforms are like ice cubes in a glass of water. They will float for a while. They will change the temperature of the liquid beneath. Ultimately, however, the ice cube must melt into the wider web.
Facebook’s success with Facebook Connect can and will further drive innovation in the community to develop an open alternative.
Facebook’s success will drive others (Google, Microsoft, Yahoo, AOL, Myspace, countless major media properties and countless small startups) to create alternatives. At least some of those participants will recognize (if they have not already) that the most open among them will earn both the respect and the market share of the next phase, moving from Facebook Connect’s ‘data portability’ to Interoperable DataPortability.
A web of Data.
That’s a landscape where we can continue to innovate on a level playing field.
December 8, 2008
OpenID needs to be as simple as Facebook Connect if it has any chance of competing. The problem is User Experience. It’s a nightmare.
- All email providers and OpenID consumers (particularly Gmail, Hotmail and Yahoo Mail) implement EAUT: http://eaut.org/
- Until we have critical mass with step 1, a 3rd party, community-controlled “Email to OpenID mapping service” should be provided. Vidoop runs a related service at http://emailtoid.net/. It’s quite good, but it should be donated to the OpenID Foundation for independent control.
- OpenID Connect login prompts ask for your email address on 3rd party sites.
- When you hit ‘connect’ it generates a popup much like the FB Connect popup.
- The contents of the popup are one of:
- The password screen of the OpenID provider as resolved via EAUT OR
- The password screen of the OpenID provider as resolved via the community EmailtoID service OR
- A prompt from the EmailToID service that walks you through creating a new OpenID or mapping an existing OpenID to this email address. Here’s the important part: in all cases, the screens MUST conform to a strict UX Design Guideline set forth by the OpenID Foundation to ensure the process is as simple as Facebook Connect. Only providers that conform to this OpenID Connect UX standard (as certified by the OpenID Foundation?) may have their OpenIDs validated in this popup. This is a harsh rule, but it ensures a smooth UX for all involved.
- This initial Email to OpenID mapping through a 3rd party service is painful since most email providers and OpenID consumers do not use EAUT yet.
- This can be overcome if we get a series of OpenID Consumers and OpenID Providers involved as launch partners. A major email provider (Gmail, Hotmail and/or Yahoo) would also be helpful but not a blocker.
- How do we deter phishing? Does this work-flow make phishing worse because of the predictable UX? Does it matter? Is there a way to ensure a distributed karma system is included in the work flow?
- This only solves the login problem and does not go into the issue of connecting to, accessing and manipulating data as the full data portability vision describes. This is a conversation for another thread.
- If you provide OpenID but do not consume it, you need to be named and shamed. There should be a 2-month grace period; then the OpenID Foundation, the DataPortability Project and everyone else who is interested should participate.
- “OpenID Connect” should be a new brand with a fresh batch of announcements with strict implementation guidelines (not just around UX but also around things like consumption).
To summarize, my proposal would:
- Allow users to use their email address for OpenID
- Standardize the User Experience for OpenID
- Provide a stopgap Email to OpenID mapping service while email providers catch up.
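The resolution step in this proposal can be sketched in a few lines. The following Python is only an illustration under assumptions: the URL-template style of provider mapping and the fallback service address are invented for this example, and the real EAUT and emailtoid.net mechanics may differ.

```python
import urllib.parse

# Hypothetical community fallback service (modeled loosely on
# emailtoid.net); this URL template is an assumption for illustration.
FALLBACK_SERVICE = "https://emailtoid.example/resolve?email={email}"

def resolve_openid(email, domain_templates):
    """Resolve an email address to an OpenID URL.

    domain_templates maps an email domain to an EAUT-style URL
    template published by that provider (an assumed format here).
    Falls back to a community mapping service when the provider
    has not yet implemented direct mapping.
    """
    local, _, domain = email.partition("@")
    template = domain_templates.get(domain)
    if template:
        # Step 1: the provider supports direct mapping.
        return template.format(user=urllib.parse.quote(local))
    # Step 2: stopgap via the community-controlled mapping service.
    return FALLBACK_SERVICE.format(email=urllib.parse.quote(email))

# Example: one provider publishes a template; a small host does not.
templates = {"gmail.com": "https://openid.gmail.example/{user}"}
print(resolve_openid("alice@gmail.com", templates))
print(resolve_openid("bob@smallhost.net", templates))
```

Once most providers publish a mapping, the fallback branch simply stops being exercised, which is exactly the graceful transition the proposal depends on.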
I’d love to do mockups for this – but I’m busy. Anyone interested in learning from the Facebook Connect UX and drafting OpenID Connect Mockups from which we can draw the strict UX guidelines I mentioned?
Could this work?
December 1, 2008
Let me quote the highlights for you:
If the initial development race of Web 2.0 centered around “building a better social network” then the next phase will certainly focus on extending the reach of existing social networks beyond their current domain. How? By using the elements of the social graph as the foundational components that will drive the social Web. Where we once focused on going to a destination – particular social network to participate – we will now begin to carry components of social networks along with us, wherever we go. In the next phase of the social Web, every site will become social.
Agreed. That’s been the vision and promise of much of my work for more than a year.
Here’s the scary part
Facebook Connect proposes to make data and friend connections currently held within the walled garden of Facebook accessible to other services. This has two distinct benefits, one for the sites and one for Facebook.
For the participating sites, Facebook Connect provides more social functionality without a great deal of additional development. A new user can opt to share the profile information in Facebook instead of developing a new account. This gives the user access to the site and its services without the tedium of developing yet another profile on yet another site. In addition, users can use the relationship information in Facebook to connect to their friends on the other services. In short, it makes the new partner site an extension of Facebook.
Essentially, Facebook is trying to replace all logins with their own, and control the creation, distribution and application of the social graph using their proprietary platform.
The scariest part is that while Facebook is quietly and methodically building out this vision with massive partners, the standards community is busy squabbling about naming the open alternative.
Is it Data Portability? Is it the Open Web? Is it OpenSocial? Is it Federated Identity?
At the start of this year, one would have thought the open standards movement got a huge boost from the massive explosion of the DataPortability Project. Its set of high-profile endorsements catapulted the geeky standards conversation into the mainstream consciousness and helped provide a rallying cry for the community to embrace.
Instead of embracing it, though, many of the leaders in the community decided to squabble about form and style. They argued about the name, about the organization, about the merits of the people involved – on and on it went.
Instead of embracing the opportunity, they squandered it by trying to coin new phrases, new organizations and new initiatives.
The result is a series of mixed messages that have largely diluted the value of DataPortability’s promise this year. The promise of making the conversation tangible for the mainstream – the executives who are now partnering with FaceBook.
Will we let this continue into 2009? Will we continue to allow our egos to get in the way of mounting a real alternative to Hailstorm 2.0? Are we more interested in the theater of it, the cool kids vs. the real world, or will we be able to reach the mainstream once again and help them understand that the entire social web is at stake?
I’ve not lost hope. There are countless reasons why Facebook and its Hailstorm 2.0 are not inevitable.
I have, however, lost a lot of respect for a lot of people I once admired. Maybe they can clean up their act and we can work together once again in the new year.
I put a call out to all those who are interested – technologists, early adopters, bloggers (especially bloggers), conference organizers, conference speakers, media executives – let’s get our act together and take this party to the next level.
I, for one, am looking forward to it.
November 20, 2008
‘What about privacy and security’ is a question that comes up regularly when discussing Data Portability. I’d like to address some of the reasons why Data Portability is actually good for privacy.
Safer than today.
Data Portability is not about putting more personal data in the cloud. We’re dealing with data that’s already out there. The goal is to provide the ability to give access to your data to applications you trust.
Using proper protocols and formats to move the data, such as OAuth and OpenID, is safer than allowing sites to scrape your mail account by giving them your username and password. They are safer because you are not giving your username and password away and because the access is scoped. Scoped access means that you can grant specific and precise access to only the data you want to share with the requesting application (e.g. just your address book) as opposed to giving it complete access to your entire Gmail account (address book, email, account history, Google searches etc).
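To make the contrast concrete, here is a minimal Python sketch of scope-checked access. The token class, scope names and `fetch` helper are illustrative assumptions, not any real provider’s API; the point is only that a scoped token structurally cannot reach data the user never granted, while a password grants everything.

```python
# Everything a password unlocks in this toy model.
FULL_ACCESS = {"contacts", "mail", "history", "searches"}

class AccessToken:
    """A token carrying only the scopes the user approved."""
    def __init__(self, scopes):
        self.scopes = set(scopes)

    def allows(self, resource):
        return resource in self.scopes

def fetch(token, resource):
    """Return data only if the token was scoped for it."""
    if not token.allows(resource):
        raise PermissionError(f"token not scoped for '{resource}'")
    return f"{resource} data"

# Anti-pattern: handing over a password grants everything.
password_token = AccessToken(FULL_ACCESS)

# Scoped grant: an address-book importer sees contacts and nothing else.
scoped_token = AccessToken({"contacts"})

fetch(scoped_token, "contacts")   # permitted
# fetch(scoped_token, "mail")     # raises PermissionError
```

A further benefit not modeled here: a scoped token can be revoked for one application without changing your password everywhere else.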
Federated Karma – Market Forces made Explicit
It may be possible to build a distributed trust or Karma system that sites and services can expose on Authorization Screens so that users can make informed decisions before trusting an application.
Users could rate services and the ratings would be normalized and made available via trusted Karma aggregation services.
This would provide an explicit meta layer of market sentiment at the point of permitting a data portability transaction.
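The normalization step of such a karma system could be prototyped very simply. Below is a toy Python sketch assuming ratings on a 1-to-5 scale gathered from hypothetical aggregation services; the scale, the flat averaging and the function name are all illustrative choices, since no such service exists in this form.

```python
def normalized_karma(ratings, scale=5):
    """Average a list of 1..scale ratings into a 0-1 trust score.

    Returns None when there are no ratings, so an authorization
    screen can show "unrated" rather than a misleading zero.
    """
    if not ratings:
        return None
    # Clamp out-of-range values a misbehaving source might report.
    clipped = [min(max(r, 1), scale) for r in ratings]
    return (sum(clipped) / len(clipped) - 1) / (scale - 1)

# e.g. ratings for one service, pooled from several sources
print(normalized_karma([5, 4, 5, 3]))  # 0.8125
print(normalized_karma([]))            # None -> display "unrated"
```

A real federated version would also need to weight sources by their own trustworthiness and resist rating spam, which is where the hard problems actually live.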
This solution is far better than the Facebook Protection Fee solution.
Privacy is the wrong word
The real issue should not be labeled Privacy. Privacy is an idea, but it’s not actionable; it cannot be converted into ‘functionality’. We should be discussing ‘access controls’, ‘portable permission metadata’ and ‘universal privacy models’. These ideas combined allow us to define and implement privacy preferences in concrete terms.
Privacy advocates may never, and perhaps should never, make peace with it, but it’s clear that traditional ideas of privacy are changing.
Remember that it was once thought unconscionable to share your photos, daily activities, location, relationship status and other personal information for the world to see. Now it’s standard practice for young people around the world.
What taboos of personal privacy will fade next? It’s quite possible that future generations of Internet users will ask not why their data is available for everyone to see, but rather why it isn’t.
“I think therefore I am”.
Maybe now it’s
“I tweet therefore I am”.
November 19, 2008
According to CNet, Facebook is going to start charging app developers a fee to achieve ‘Verified Application’ status. The fee is optional, but that doesn’t matter. Apps that are not ‘verified’ will quickly get buried by those that are.
I think in hindsight people will recognize this move as one of the final death knells of the Facebook platform as we know it today.
First, they de-emphasized applications altogether by relegating them to a ‘boxes’ page and making the stream their primary interaction metaphor (Read: FriendFeed clone). Now they are trying to lock down the platform further, raising the bar for participation and charging what amounts to a protection fee for app developers to get any real attention at all.
The fact of the matter is, an increasing number of people are finally realizing that Facebook looks very similar to pre-Internet networks, AOL, Passport/Hailstorm, and any other proprietary implementation of a platform that can and must be open.
The only platform that matters on the web is the web itself, and Facebook through its actions and inactions is helping us all learn this lesson faster than ever.
November 11, 2008
We have started a conversation over on the JS-Kit blog about data ownership when it comes to comments. This is one of the Data Portability grey areas that needs a resolution in the ongoing journey to create the data web.
This is also an important question for social media. If we are all participants, who owns the space inside which we are participating?
November 7, 2008
In this video, Tim O’Reilly speaks about Data Portability. He suggests that it will be much like Open Source software in that it will never truly be adopted. I don’t know if I agree.
Data Portability is less like Open Source software and more like the Internet and the Web itself. The standardized and interoperable protocols that make up the web – TCP/IP, HTTP, HTML etc – are adopted by anyone who wants access to Internet users. In much the same way, anyone who wants access to user data from the emerging web-wide data ecosystem will need to adopt emerging data portability formats and protocols.
Later in the video he goes on to say that data portability will actually be adopted, not through legislation but rather through organic mechanisms that gravitate towards open solutions that ‘just work’.
On this front, I agree. But Tim does not mention how we might help the process along. He does not mention that organic processes can and should include incentives, or how the DataPortability Project, through its definition of the problem and its ongoing work to highlight progress towards an open data ecosystem, actually encourages our collective desired outcomes.
Data Portability will indeed occur organically. The building blocks themselves were born out of organic efforts. An accelerant in the form of community, media and support documentation, however, has already helped push things along.