Discussion:
[GNU/consensus] Who are the new kids on the block?
hellekin (GNU/consensus)
2013-03-12 15:35:16 UTC
As an intermezzo while the User Data Manifesto is still cooking,
I'd like to ask a simple question so that we can feed the wiki a bit and
bring in more people.

IMO, there are interesting side-effects going on in the online world,
regarding the complex relations of technology development, attention
scarcity, attractiveness of novelty, and NIH syndrome.

For example, as Melvin keeps pointing out, there are existing semantic
web technologies that work and are decentralized, but lack
visibility: FOAF, RDF, WebID, etc.

There's also a wealth of "niche" social networks that actually gather
millions of users, but are not perceived as social networking at all
because they lack the general purpose of the giant players: MMORPGs,
the good old FICS, and MUDs and MOOs... Not to mention our beloved IRC,
or such prestigious sidekicks as blogs, mailing lists, and wikis.

Each new generation of developers (i.e. every few months, these days)
seems to look at the previous stuff and think "yeah, well, not good
enough." They quickly come up with shiny new concepts and
implementations, and then what? Each new language gathers a herd of
enthusiasts, and soon an old bum such as JavaScript finds rejuvenation:
NodeJS is born, and the world is rewriting Lisp, Perl, Python, and Ruby
libraries in ECMAScript!

So I understand that this is fast development, that you can run the
same code on the server or on the client, blurring the architectural
foundation, and that there are actual innovations, in the sense of
iterations not previously contemplated, but... What is driving all of
this? Is there a technical and engineering foundation for embracing such
drastic changes every couple of years? Or is there something else, more
irrational, going on?

With that in mind (which does not really call for a debate, but rather
for personal reflection on the evolution of techniques and the
refinement of technologies vs. starting from scratch), I'm wondering
who on your radar appears truly innovative in our problem space.

==
hk
Melvin Carvalho
2013-03-12 16:51:24 UTC
Post by hellekin (GNU/consensus)
As an intermezzo while the User Data Manifesto is still cooking,
I'd like to ask a simple question so that we can feed the wiki a bit and
bring in more people.
IMO, there are interesting side-effects going on in the online world,
regarding the complex relations of technology development, attention
scarcity, attractiveness of novelty, and NIH syndrome.
For example, as Melvin keeps pointing out, there are existing semantic
web technologies that work and are decentralized, but lack
visibility: FOAF, RDF, WebID, etc.
I'm unsure this is accurate; I advocate using the Web, and that's about it.
The reason is that the web has a strong track record of network economics,
and that is well suited to socially oriented projects.

This means understanding the nature of the URL, HTTP and HTML, in that
order.

The issue is partly that people (with the exception of Mark Zuckerberg)
have preconceived notions about how this works, and do not understand that
it has the power to do everything you need.

People have a tendency to think you need something new to be successful.
Reality and history have proved the opposite. Making something new
leads to a local minimum that is rarely interoperable.

The majority of successful projects come from cloning something existing
and putting it in a slightly different context.
Post by hellekin (GNU/consensus)
There's also a wealth of "niche" social networks that actually gather
millions of users, but are not perceived as social networking at all
because they lack the general purpose of the giant players: MMORPGs,
the good old FICS, and MUDs and MOOs... Not to mention our beloved IRC,
or such prestigious sidekicks as blogs, mailing lists, and wikis.
Each new generation of developers (i.e. every few months, these days)
seems to look at the previous stuff and think "yeah, well, not good
enough." They quickly come up with shiny new concepts and
implementations, and then what? Each new language gathers a herd of
enthusiasts, and soon an old bum such as JavaScript finds rejuvenation:
NodeJS is born, and the world is rewriting Lisp, Perl, Python, and Ruby
libraries in ECMAScript!
So I understand that this is fast development, that you can run the
same code on the server or on the client, blurring the architectural
foundation, and that there are actual innovations, in the sense of
iterations not previously contemplated, but... What is driving all of
this? Is there a technical and engineering foundation for embracing such
drastic changes every couple of years? Or is there something else, more
irrational, going on?
With that in mind, which does not really call for a debate, but rather
for personal reflection on the evolution of techniques, and the
refinement of technologies vs. starting from scratch, I'm wondering
who's on your radar appearing as truly innovative in our problem space.
==
hk
hellekin (GNU/consensus)
2013-03-12 18:26:02 UTC
Post by Melvin Carvalho
This means understanding the nature of the URL, HTTP and HTML, in that
order.
The issue is partly that people (with the exception of Mark Zuckerberg)
have preconceived notions about how this works, and do not understand
that it has the power to do everything you need.
*** Are you suggesting we invite him for a talk? <g>

Seriously, maybe we need to figure out a "How to leverage the Web in
your next Social Web project" guide? That would probably include
comparative implementations of basic building blocks:

- how to handle identity?
- how to handle messaging?
- how to handle references?

Saying that a URI can be your identity, that messages can be passed as
URIs, and that references point to RDF graphs does not seem to be
sufficient for most people to grasp the simple beauty of the URI.

Instead, that would require working examples of how things work with
XMPP, OStatus, OAuth, etc., compared with what can be done by
leveraging the power of URIs.
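As a sketch of what such comparative examples might look like, here is a
minimal, purely illustrative model of the three building blocks reduced to
URI handling. All data and function names are hypothetical; a real version
would fetch RDF graphs over HTTP rather than read a dict:

```python
# Illustrative sketch: identity, messaging, and references as URI
# operations. The WEB dict stands in for HTTP GETs returning profiles.
WEB = {
    "https://example.org/alice#me": {
        "foaf:name": "Alice",
        "foaf:knows": ["https://example.net/bob#me"],
    },
    "https://example.net/bob#me": {
        "foaf:name": "Bob",
        "foaf:knows": [],
    },
}

def identity(uri):
    """Identity: a URI points to a profile describing the agent."""
    return WEB[uri]

def message(sender_uri, recipient_uri, body):
    """Messaging: a message is just data referencing two identity URIs."""
    return {"from": sender_uri, "to": recipient_uri, "body": body}

def references(uri):
    """References: links in a profile point to other dereferenceable URIs."""
    return identity(uri).get("foaf:knows", [])

msg = message("https://example.org/alice#me",
              "https://example.net/bob#me", "hello")
print(identity(msg["from"])["foaf:name"])  # prints "Alice"
```

Putting an XMPP or OStatus equivalent side by side with something like this
would make the trade-offs concrete.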
Post by Melvin Carvalho
People have the tendency to think you need something new to be
successful. The reality and history has proved the opposite. Making
something new leads to a local minimum that is rarely interoperable.
*** +1
Post by Melvin Carvalho
The majority of successful projects come from cloning something existing
and putting it in a slightly different context.
*** Sounds like the theory of evolution.

So, I gather that dissecting Facebook might help us find out what they
do well. I'm still convinced that the means shape the form, and that
goals are deeply intertwined with function. Let's contemplate for a
minute that Facebook got "all the right technology for all the wrong
reasons". How can we leverage that technology, or what technology is
there to unfold that social networking platform for freedom?

==
hk
Melvin Carvalho
2013-03-12 18:52:06 UTC
Post by hellekin (GNU/consensus)
Post by Melvin Carvalho
This means understanding the nature of the URL, HTTP and HTML, in that
order.
The issue is partly that people (with the exception of Mark Zuckerberg)
have preconceived notions about how this works, and do not understand
that it has the power to do everything you need.
*** Are you suggesting we invite him for a talk? <g>
I believe he invested in Diaspora!
Post by hellekin (GNU/consensus)
Seriously, maybe we need to figure out some "How to leverage the Web in
your next Social Web project"? That would probably include comparative
Figuring out the web is certainly key to scaling.
Post by hellekin (GNU/consensus)
- how to handle identity?
Be lenient. Encourage URLs.
Post by hellekin (GNU/consensus)
- how to handle messaging?
Get HTTP POST right before looking further.
Post by hellekin (GNU/consensus)
- how to handle references?
As in REST?
Post by hellekin (GNU/consensus)
Saying that an URI can be your identity, messages can be passed as URIs,
and references point to RDF graphs does not seems to be sufficient for
most people to grasp the simple beauty of the URI.
A URI *points* to your identity. Messages are exchanged via a protocol
using a serialization. References are pointers to data via the "follow
your nose" pattern.
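The "follow your nose" pattern can be sketched as a plain dereference loop.
The data here is hypothetical; in practice each lookup is an HTTP GET that
returns an RDF document whose links you then follow:

```python
# Sketch of "follow your nose": start from one URI and discover the
# graph by dereferencing the links found at each step. GRAPH stands in
# for the live web of RDF documents.
GRAPH = {
    "https://example.org/alice#me": ["https://example.net/bob#me"],
    "https://example.net/bob#me": ["https://example.com/carol#me"],
    "https://example.com/carol#me": [],
}

def follow_your_nose(start):
    seen, queue = set(), [start]
    while queue:
        uri = queue.pop(0)
        if uri in seen:
            continue
        seen.add(uri)                     # "dereference" this URI
        queue.extend(GRAPH.get(uri, []))  # and follow its links
    return seen

print(len(follow_your_nose("https://example.org/alice#me")))  # prints 3
```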
Post by hellekin (GNU/consensus)
Instead that would require working examples of how things work with
XMPP, OStatus, OAuth, etc. and compare to what can be done when
leveraging the powers of URIs instead.
No, that only adds confusion and conflation. Understand how the web works,
and THEN you can understand how other protocols work. Trying to do it all
in one go leads to a mess. Notice that Mark Zuckerberg had nice URLs to
describe things first, then he got HTTP working well. Only after the
system was in good shape did he add XMPP, email, etc.
Post by hellekin (GNU/consensus)
Post by Melvin Carvalho
People have the tendency to think you need something new to be
successful. The reality and history has proved the opposite. Making
something new leads to a local minimum that is rarely interoperable.
*** +1
Post by Melvin Carvalho
The majority of successful projects come from cloning something existing
and putting it in a slightly different context.
*** Sounds like the theory of evolution.
So, I gather that dissecting Facebook might help us find out what they
do well. I'm still convinced that the means shape the form, and that
goals are deeply intertwined with function. Let's contemplate for a
minute that Facebook got "all the right technology for all the wrong
reasons". How can we leverage that technology, or what technology is
there to unfold that social networking platform for freedom?
A good example was StudiVZ. They simply took the Facebook pattern and
made it in another language. Take the Facebook pattern and make it FLOSS /
distributed (it doesn't take much more than a tweak). Then add all the
bells and whistles.

If you want to OVERTAKE Facebook, look at its social graph and the Open
Graph protocol, and complete the missing pieces by adding data freedom.
Post by hellekin (GNU/consensus)
==
hk
Melvin Carvalho
2013-03-12 17:31:49 UTC
Post by hellekin (GNU/consensus)
As an intermezzo while the User Data Manifesto is still cooking,
I'd like to ask a simple question so that we can feed the wiki a bit and
bring in more people.
IMO, there are interesting side-effects going on in the online world,
regarding the complex relations of technology development, attention
scarcity, attractiveness of novelty, and NIH syndrome.
For example, as Melvin keeps pointing out, there are existing semantic
web technologies that work and are decentralized, but lack
visibility: FOAF, RDF, WebID, etc.
There's also a wealth of "niche" social networks that actually gather
millions of users, but are not perceived as social networking at all
because they lack the general purpose of the giant players: MMORPGs,
the good old FICS, and MUDs and MOOs... Not to mention our beloved IRC,
or such prestigious sidekicks as blogs, mailing lists, and wikis.
Each new generation of developers (i.e. every few months, these days)
seems to look at the previous stuff and think "yeah, well, not good
enough." They quickly come up with shiny new concepts and
implementations, and then what? Each new language gathers a herd of
enthusiasts, and soon an old bum such as JavaScript finds rejuvenation:
NodeJS is born, and the world is rewriting Lisp, Perl, Python, and Ruby
libraries in ECMAScript!
So I understand that this is fast development, that you can run the
same code on the server or on the client, blurring the architectural
foundation, and that there are actual innovations, in the sense of
iterations not previously contemplated, but... What is driving all of
this? Is there a technical and engineering foundation for embracing such
drastic changes every couple of years? Or is there something else, more
irrational, going on?
With that in mind, which does not really call for a debate, but rather
for personal reflection on the evolution of techniques, and the
refinement of technologies vs. starting from scratch, I'm wondering
who's on your radar appearing as truly innovative in our problem space.
Favourite project at the moment:

https://my-profile.eu/

Main reasons: it makes excellent use of URLs, is decentralized,
respects privacy, and scales to billions.
Post by hellekin (GNU/consensus)
==
hk
Nick Jennings
2013-03-12 17:45:28 UTC
On Tue, Mar 12, 2013 at 10:31 AM, Melvin Carvalho
Post by Melvin Carvalho
https://my-profile.eu/
Main reasons: it makes excellent use of URLs, is decentralized,
respects privacy, and scales to billions.
Pardon my ignorance, but:

How is it decentralized, and how does it scale? Once you create a
profile there, it's dependent on that site:

https://my-profile.eu/people/<username>/card#me

Since there is no information on the site, it's hard to assess any of
those claims.
Melvin Carvalho
2013-03-12 17:59:28 UTC
Post by Nick Jennings
On Tue, Mar 12, 2013 at 10:31 AM, Melvin Carvalho
Post by Melvin Carvalho
https://my-profile.eu/
Main reasons: is that it has excellent use of URLs, is decentralized,
respects privacy, and scales to billions.
How is it decentralized and how does it scale? Once you create a
https://my-profile.eu/people/<username>/card#me
Thanks for bringing this up. This is exactly what I mean by excellent use
of URLs.

Although you CAN use that URL to log in to the site, I can also log in
with any URL that displays my public key.

In my case I log in via my homepage: http://melvincarvalho.com/

In the case of Tim Berners-Lee he can (and does) login with
http://www.w3.org/People/Berners-Lee/card#i

You can log in with your FreedomBox, etc.

This is essentially the "opacity" property of the URL: the characters
of the URL string are independent of what it identifies, just as a
variable's name is independent of its value.

Get URLs right, and federation drops out almost for free. If you can
show me another system with this well-designed a separation of concerns,
I'd be very happy to use it!
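A sketch of why opacity buys federation (hypothetical data and names; the
real check involves TLS and an RDF profile lookup): the verifier never
parses the URL, so a my-profile.eu account, a personal homepage, and a
w3.org page all work identically.

```python
# Sketch: the verifier treats the identity URL as an opaque key. It
# dereferences the URL and compares the public key published there with
# the key the client proved possession of over TLS. PROFILES stands in
# for the live web; the key strings are placeholders.
PROFILES = {
    "https://my-profile.eu/people/alice/card#me": "PUBKEY-A",
    "http://melvincarvalho.com/": "PUBKEY-B",
    "http://www.w3.org/People/Berners-Lee/card#i": "PUBKEY-C",
}

def login(claimed_url, presented_pubkey):
    # No URL parsing here: any host, any path shape, one code path.
    return PROFILES.get(claimed_url) == presented_pubkey

print(login("http://melvincarvalho.com/", "PUBKEY-B"))  # prints True
print(login("http://melvincarvalho.com/", "PUBKEY-C"))  # prints False
```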
Post by Nick Jennings
Since there is no information on the site, it's hard to asses any of
those claims
Nick Jennings
2013-03-12 18:09:42 UTC
On Tue, Mar 12, 2013 at 10:59 AM, Melvin Carvalho
Post by Melvin Carvalho
Post by Nick Jennings
On Tue, Mar 12, 2013 at 10:31 AM, Melvin Carvalho
Post by Melvin Carvalho
https://my-profile.eu/
Main reasons: is that it has excellent use of URLs, is decentralized,
respects privacy, and scales to billions.
How is it decentralized and how does it scale? Once you create a
https://my-profile.eu/people/<username>/card#me
Thanks for bringing this up. This is exactly what I mean by excellent use
of URLs.
Although you CAN use that URL to login to the site, I can also login with
any URL that displays my public key.
In my case I login via my homepage: http://melvincarvalho.com/
In the case of Tim Berners-Lee he can (and does) login with
http://www.w3.org/People/Berners-Lee/card#i
Is that a quality of my-profile.eu? That, to me, seems like you are
describing FOAF in Linked Data... right? In that case I don't
understand what that site has to do with it, other than confusing
people with an ambiguously purposed (and unexplained) profile-ish
site. To newcomers the whole thing is confusing and obtuse, IMO. I'm
not sure what the 'project' is for that site.


Just playing devil's advocate a bit
hellekin (GNU/consensus)
2013-03-12 19:13:45 UTC
Post by Nick Jennings
not sure what the 'project' is for that site.
Just playing devil's advocate a bit
*** Indeed, http://myprofile-project.org/ is more explicit, but you have
to try to log in in order to find it.

"MyProfile intends to put users in control of the data they have and
share on the Internet."

Their manifesto reads (http://myprofile-project.org/manifesto.html):

The Current Situation...

The Web we know is based on centralized resources, the so-called 'silo'
approach. Offering particular services would usually involve having to
create dedicated accounts for each user, tying and limiting the user to
this particular service and/or resource. Furthermore, users have no
control over how their personal account data is used by the service.
Recently there have been numerous cases where social networks have made
public certain private details of their users (see Facebook and Google
Buzz), which made people realize the importance of online privacy and
public data control.

One may argue that better privacy policies may reduce the risk of
exposure. However, even if users decide to protect their public data or
even remove their accounts, there is no guarantee that the process is
instant and permanent, since most countries have passed laws which
require that online data be stored for several months up to one year or
even more.

Another important issue deals with authentication and identification.
Most services authenticate users based on username and password
combinations. Federated and single sign-on services like OpenID have
proven to be quite useful. However, implementing a cross-domain
authentication and user management system not only requires a lot of
effort from large entities in order to make everything compatible, but
also powerful trust relationships. In addition, once authentication has
been performed, services still require that users have local profiles.

To put things into perspective, let's take the case of Facebook. Its
success attracts more and more people to use it, encouraging its
developers to provide even more services. When these services prove
useful, users start to depend on them on a daily basis. People have
recently been discussing the possibility of having Facebook act as
a bank, or as an intermediary payment service (think PayPal). How bad
would it be if all the services offered by Facebook suddenly became
inaccessible, and all the time and data so carefully invested in
developing a rich user profile were wasted or lost?

MyProfile

This is where MyProfile comes into play. It tries to address the
shortcomings of silo-based user accounts, cross-domain authentication
and identification, as well as data sharing and propagation.

Authentication and Identification

In order to perform authentication and identification, MyProfile is
based on the recent standard proposed by W3C's WebID Community Group,
and the Friend of a Friend (FOAF) ontology.

WebID proposes a way to uniquely identify a person, company,
organization, or other agents, using a URI which is included in an X.509
browser certificate. The authentication process relies on TLS to
validate that the private key in use matches the public key of the
declared certificate, as well as the public key found in the profile at
the location indicated by the URI. In other words, it provides a
cryptographic way of authenticating and identifying a user, based on
resources managed by the user -- the browser certificate and the
corresponding profile accessible at the URI location.

The FOAF project is creating a Web of machine-readable pages describing
people, the links between them and the things they create and do; it is
a contribution to the linked information system known as the Web. FOAF
defines an open, decentralized technology for connecting social Web
sites, and the people they describe.

Initially, combining WebID and FOAF offers users the possibility to
directly participate in their interactions across the Web, by allowing
them to use a unique identity (pointing to a unique user account /
profile), across multiple domains and services. This approach comes in
contrast to current practices, where the Web centralizes all our
personal data through the multitude of online forms we have to fill in,
instead of allowing users to carefully select which information they
want to make public when accessing a particular service.

Depending on the user's social interactions on the Web, the profile
could also contain resources like blog and forum posts, or even mailing
list messages, all described using the Semantically-Interlinked Online
Communities (SIOC) ontology. We can safely say that the user's profile
can contain an unlimited number of resources, as long as they can be
expressed using standard semantic web vocabularies.

Requirements

When trying to model access control and privacy policies for social web
applications, we have to take into account several requirements.

Interoperable. Nobody likes being forced to use one identity solution
over another, meaning that users must always be allowed to choose
their favorite platform. Also, sometimes projects are no longer
maintained, forcing people to look for alternatives. In these cases, it
is imperative that users have the means to import or export their data.
Even if most services already provide user data in common formats like
CSV or XLS, there is no way to preserve the privacy policies set in
place by the user. We believe that only by using the Semantic Web can a
true graph of a user’s identity be preserved across platforms.

Adaptive to social dynamics. Since human relations are very dynamic, the
proposed model must reflect these changes in the system’s policies.

Fine-grained privacy settings. If a picture is to be shared only with a
restricted set of people (maybe not even known in advance), it should be
easy to express such a requirement.

Natural language interface and feedback. Defining privacy preferences
has to remain a simple and straightforward process. Ambiguity must be
avoided; therefore access control decisions should be transparent and
well explained to users. Similarly, the specification of privacy
preferences has to protect users from a plethora of check boxes defining
which friends are allowed to access which files, or from similarly
complicated policy definitions.

Security mechanisms. The solution must fulfil basic security and privacy
requirements, such as reliability, support to authentication, delegation
of rights, etc.

==
hk

P.S.: see, we're still working on the User Data Manifesto :]
Melvin Carvalho
2013-03-12 18:19:06 UTC
Post by Nick Jennings
On Tue, Mar 12, 2013 at 10:59 AM, Melvin Carvalho
Post by Melvin Carvalho
Post by Nick Jennings
On Tue, Mar 12, 2013 at 10:31 AM, Melvin Carvalho
Post by Melvin Carvalho
https://my-profile.eu/
Main reasons: is that it has excellent use of URLs, is decentralized,
respects privacy, and scales to billions.
How is it decentralized and how does it scale? Once you create a
https://my-profile.eu/people/<username>/card#me
Thanks for bringing this up. This is exactly what I mean by excellent use
of URLs.
Although you CAN use that URL to login to the site, I can also login with
any URL that displays my public key.
In my case I login via my homepage: http://melvincarvalho.com/
In the case of Tim Berners-Lee he can (and does) login with
http://www.w3.org/People/Berners-Lee/card#i
Is that a quality of my-profile.eu? That, to me, seems like you are
describing FOAF in Linked Data... right? In that case I don't
understand what that site has to do with it, other than confusing
people with an ambiguously purposed (and unexplained) profile-ish
site. To newcomers the whole thing is confusing and obtuse, IMO. I'm
not sure what the 'project' is for that site.
You're diving into implementation specifics here. It's more important to
understand URLs and opacity first, and how you can use them to build a
scalable system. Then look at implementation possibilities. There's a
need to separate the abstract from the specific when considering
architecture.
Post by Nick Jennings
Just playing devil's advocate a bit
Did you try clicking the link in the footer?

http://myprofile-project.org/
