Discussion:
[GNU/consensus] Fwd: FYI: Securing the Future of the Social Web with Open Standards
☮ elf Pavlik ☮
2013-07-19 16:39:00 UTC
Permalink
--- Begin forwarded message from Melvin Carvalho ---
From: Melvin Carvalho <***@gmail.com>
To: "public-***@w3.org" <public-***@w3.org>, public-rww <public-***@w3.org>
Date: Fri, 19 Jul 2013 10:28:48 +0000
Subject: FYI: Securing the Future of the Social Web with Open Standards

http://www.w3.org/2013/socialweb/papers/w3c.html

Harry Halpin, W3C/MIT and IRI

The social web increasingly defines the Web itself. The Web is more than
hyperlinks between documents; it ultimately consists of the links
between people. Integrating the ability to co-operate socially on the Web
via open standards could unleash a new round of innovation that would
benefit everyone on the Web.
W3C's engagement with the Social Web

The W3C has engaged the open social web since 2009, when it first hosted
the "Future of Social Networking" workshop in Barcelona. While the workshop
engaged a large number of stakeholders, it failed to garner enough industry
interest and thus the Social Web Incubator Group was created to survey the
open social web. The Incubator Group produced a high-level report of the
standards landscape @@, which included a number of suggestions for
improving the W3C in order to make it more lightweight; these suggestions
led to the creation of W3C Community Groups. The W3C then started a
number of community groups around relevant
social standards (The Federated Social Web, OStatus, Pubsubhubbub) and
hosted a developer-centric Federated Social Web 2011 conference in Berlin
that brought companies such as Google together with grassroots activists
(including activists from Egypt) and developers. While the conference
concluded with a focus on adding secure group functionality to existing
protocols, there was again not enough major industry interest to start a
Working Group. The W3C then hosted, with the help of IBM, the Social
Business Jam that led to the creation of the Social Business Community
Group and this workshop. Thus, we hope that critical mass can now be
achieved to start a Working Group in this area.
Why Standards?

The initial attempt to create an "open stack" for the social web happened
outside of traditional standards bodies like the IETF and W3C. This in turn
led to a very fragmented landscape of standards that have had mixed
deployment, with some successes and some failures. However, there are a
number of disadvantages to creating a "stack" of technologies
outside of a standards body. In particular:

- No unified IPR policy. While some specifications do specify their IPR
(OpenSocial), others had difficulty getting IPR signed over
(ActivityStreams), and some still have no clear IPR (Pubsubhubbub)
- No maintenance policy: Some specifications are in need of updating but
exist purely as informal specifications (PortableContacts) and fail to be
updated to take into account new developments (Salmon Protocol)
- Lack of guidance for developers: Developers need to make sense of a
bewildering number of specifications in order to build an open social
application. While the OStatus meta-architecture provided some guidance, it
needs to be maintained in light of current work.
- Lack of a test-suite. It is difficult to demonstrate interoperability
between code-bases without a single test-suite that can be easily deployed
via github. Thus, demonstrations of interoperability have been "one-off"
and have not been maintained.
- Lack of integration into the Web: HTML5 is providing a host of new
capabilities to HTML that will reliably work across an increasingly
heterogeneous range of platforms, including mobile. Browser
plug-ins will be increasingly phased out of existence from all major
browsers. Any social work needs to take advantage of this.
- Lack of security considerations: A distributed social networking
architecture by nature needs strong authentication of parties and integrity
and even confidentiality of messages.

In combination with the OpenSocial Foundation, the W3C can help address
each of the above concerns by 1) providing a single unified royalty-free
IPR policy, 2) providing a Working Group with clear responsibilities for
editor(s) and chair within a management structure, 3) providing a primer
and integrating examples into the Open Web Docs alongside the rest of
HTML5, 4) adding client testing into the git-maintained HTML test-suite
and a clear server-side test-suite, 5) re-factoring current
specifications around HTML5 (in particular, Web Components and CORS),
and 6) providing a broad test-suite and integration of the social web
with security-oriented work such as Content Security Policy, the Web
Cryptography API, and wide security reviews with related work at the
IETF. Future work should have a clear focus and proceed in a unified
manner, ideally with a single group with a well-defined timeline and
deliverables.
A Secure Open Social Web?

In particular, security considerations have received less attention than
needed on the social web: the paradigm of an unauthenticated public
broadcast of messages fails to provide the elementary security
properties needed for closed groups and valuable information, which are
requirements for many use-cases ranging from sensitive corporate
information to human rights activism. Any open social web that fails to
take on security considerations will be abused by spammers at the very
least.

Any new effort for the social web should clarify the threat model and
propose mitigations so that the open social web can handle high-value
information. For example, any attempt to broadcast messages needs to have
the sender authenticated, and so by nature all messages should be digitally
signed with integrity checks, lest a malicious party strip the signature
and replace it with its own when substituting a false message. For
sensitive information, the message should itself be encrypted and
decrypted only by those in the group. To allow messages in distributed
systems to be re-integrated and ordered correctly (as originally attempted
with the Salmon Protocol), time-stamping is necessary. Lastly, a
distributed social system that isn't properly designed may actually be
less secure than a centralized silo: care should be taken that the
ability to post presence updates does not concentrate more information
than necessary in a centralized location (as is currently done by XMPP
servers, for example), and for use-cases where high latency is
acceptable, constant-rate background traffic and mixing can prevent
traffic-analysis threats.
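As a rough illustration of the signing and time-stamping requirements
above, here is a minimal sketch in Python using the third-party
"cryptography" package; the JSON envelope and field names are invented
for illustration, not taken from any of the specifications discussed:

    # Sign a time-stamped message so recipients can detect a stripped
    # or substituted signature. The envelope format is an invented example.
    import json, time
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    signing_key = Ed25519PrivateKey.generate()
    verify_key = signing_key.public_key()

    envelope = json.dumps({
        "body": "status update",
        "timestamp": time.time(),  # lets receivers re-order messages
    }, sort_keys=True).encode()
    signature = signing_key.sign(envelope)

    try:
        verify_key.verify(signature, envelope)  # raises on any change
    except InvalidSignature:
        print("reject: signature stripped or message substituted")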
Next Steps

The result of this workshop will determine the future of the open social
web. Concretely, this will consist of a report released within one month
and then possibly, if consensus is reached and there is enough industry
interest, one or more charters for Working Groups. The W3C welcomes joining
forces with the OpenSocial Foundation and numerous grassroots efforts both
inside (Pubsubhubbub, OStatus) and outside the W3C (ActivityStreams,
IndieWeb) in making the social web a "first class" citizen on the Web.
--- End forwarded message ---
hellekin
2013-07-19 20:03:03 UTC
Permalink
Post by ☮ elf Pavlik ☮
Subject: FYI: Securing the Future of the Social Web with Open
Standards
http://www.w3.org/2013/socialweb/papers/w3c.html
Harry Halpin, W3C/MIT and IRI
*** Thank you for sharing this Elf Pavlik!

http://www.w3.org/2013/socialweb/ is an upcoming "Workshop on Social
Standards: The Future of Business" to be held on 7-8 August 2013 in
San Francisco, USA.

"
The goal of this workshop is to bring together social business experts
with social technology experts in a neutral and objective environment
to discuss the use-cases that existing specifications don’t adequately
address and understand where new standards are needed.
"

I can't say the perspective of that workshop is exactly to my taste.
There will certainly be some good talks (the "Running Code" line-up
looks great) but there are some notable absentees: Friendi.ca,
Diaspora, GNU Social, Occupy-Dev, Riseup, or Lorea. They're probably
neither experts, nor objective, nor interested in the future of business.

I'm glad though that the topic of security of the social Web comes to
the table, because in the current situation, using a Web browser is
antithetical to privacy and security.

==
hk
Melvin Carvalho
2013-07-22 09:01:25 UTC
Permalink
Dear Melvin,
thanks for your information. In this post-Snowden era many more people
than ever before have an open ear for security considerations (and
cryptoparties). I would like to make a number of remarks from a
grassroots point of view. In essence, Edward Snowden has raised the
question of "trust". For a system that serves as a trusted social web
infrastructure, more is needed than trusted procedures and legal
guarantees - the software itself and the platform it is running on have
to be trustworthy as well. The very possibilities to compromise privacy
have to be minimized by choosing an appropriate structure. This is
clearly to be preferred over a system that depends on the legal system
to "enforce" informational self-determination.
Therefore, I believe that a social web for John Doe that can supersede
faceboogle needs to be not just open source; it needs to be crowd-funded
as well.
Crowd funding is a good idea, although it tends to be non-optimally
allocated; e.g. most of the crowd funding went to Diaspora, partly
because they evangelized well and got good coverage on web 2.0 blogs and
Hacker News. There are projects that didn't get 1% as much as Diaspora
yet were not 99% worse.

Decentralized payments could change things. Currently faceboogle makes a
lot of money by putting ads on content - content it did not create. The
content creators often get no remuneration, or sometimes just a fraction.
This makes no sense to me in a world of decentralized micropayments. We
should have a multi-faceted funding strategy. However, project selection
is quite hard for the lay person and even for the expert. But perhaps we
can try to make it slightly more democratic, at least that, if no more.
This makes the lack of interest from the industry sector less of a
problem - privacy-conscious people will welcome a system that is free
from corporate interests. That does not mean that the corporate world is
not welcome to use the standards and structures, which are going to be
developed in the public domain, but the public domain needs to take the
lead. As I see it, the situation is similar to what happened with Linux:
there was an open system, which then got used by industry, which threw
development effort at it and thereby benefited the open community in
return - a healthy win-win situation.
I agree, although Linus was also an exceptional mind, coder and community
leader. He had a pure focus, i.e. to build a free UNIX-like kernel for
the GNU system.

In the social web we can't assume that we'll be blessed with someone as
brilliant as Linus, and there is mixed focus, so working together becomes
important, which is one area where standards can maybe help. I say *maybe*
because a standard that is restrictive may sometimes be
counterproductive.
Before we can start to speak about standards, we need to have a
consensus among those interested in a post-faceboogle social web about
its structure and capabilities. You may call this worthy of a standard
in itself, but very basic properties have to be agreed upon before
serious efforts should be invested into realizing it. See my brief
presentation from this year's Easterhegg at
https://frab.eh13.c3pb.de/system/attachments/6/original/13-03-29_19Uhr_social_networks.ppt?1364649557
which proposes a basic set of requirements needed to supersede
faceboogle. It turned out to be quite agreeable.
Nice presentation. What happened to the dooble / interface social browser
that was based on libretroshare ... that was really nice. Is it still
part of this group's focus?
Together with Elijah of the LEAP project we came up with this list of
requirements:
1) Client side encryption
+1 Certainly should be an option, or a default, but key management is hard
and a topic in itself; it takes time. I would say it's a goal to build
towards.
2) Social graph obfuscation
+1
3) Self determined data storage
+1

Very much so, and important is that you can store your data in multiple
places.
4) Scalability
+1

This is absolutely key. We need strategies to maximize this.
5) Integration of old friends on legacy networks (which would compromise
1 and 2 for those, of course).
+1 Some projects are underway to build this kind of 'driver'
6) High availability - you should be able to access your data when you
want it.
+1
7) Device portability - you should be able to access your data from
multiple
devices at the same time
+1
8) Client choice - you should be able to use a mobile, desktop, or html5
app
client (once webcrypto is deployed in browsers).
+1
9) Multiple identity - you should be able to maintain multiple
identities, and choose to link them or not.
+1

These are sometimes called 'nyms' (short for pseudonyms) ... yes it's
important to think in terms of multiple identities; most systems restrict
you to one, and even worse, one that is a subset of their particular
world view. For example, one church believes your email *is* your
identity, and one church believes your homepage *is*. Neither is correct
(though of course email favours google/yahoo/microsoft) and the two will
never get on. We need permissive identity solutions and
multiple-identity solutions. If we solve this one thing alone, we will
have advanced more than in the last few years, because it will mean that
different projects can talk to each other.
10) Protocol agnostic - you should be able to cross-communicate with
different protocols, be they XMPP, HTTP, or p2p based.
+0

Although I would love to see this happen, it's perhaps taking on too much
for the first phase. Everyone goes away and programs something that does
not interop with anything else. Most critical is that we prioritize
solving the HTTP case first, at least to a good level. The reason is
that almost everything is able to talk HTTP. If we can get that working
quickly, the same patterns can be extended to other transport layers.
11) Secure groups - groups with membership determined cryptographically.
Groups
function as a virtual user, with all users in the group able to receive
and send
as the group, because they share a private group-key.
+1
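A minimal sketch of the shared group-key idea in Python, using the
third-party "cryptography" package's Fernet construction; securely
distributing the key to members is the hard part and is omitted here:

    # A group shares one symmetric key, so any member can encrypt
    # ("send as the group") and decrypt ("receive as the group").
    from cryptography.fernet import Fernet

    group_key = Fernet.generate_key()  # distributed out-of-band to members
    group = Fernet(group_key)

    token = group.encrypt(b"meeting moved to thursday")
    print(group.decrypt(token))  # any key holder can read it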
After Snowden, I am quite certain that we will have broad attendance for
a workshop on these topics at the upcoming 30C3, the annual get-together
of the Chaos Computer Club. See the CFP at
http://events.ccc.de/2013/07/18/30c3-call-for-participation-en/.
On the weekend of August 24/25 there will be a preparatory meeting for
this 30C3 workshop sponsored by the Wau Holland Foundation. This meeting
will sort out:
- which grass-roots projects should be represented
- whom we would like to see there
- what preparatory material needs to be produced for the workshop
Whoever would like to participate, please drop me a note.
:)
Klaus Schleisiek
Wau-Holland-Stiftung W
Postfach 65 04 43 H O L L A N D
22364 Hamburg/Germany S T I F T U N G
http://www.wauland.de
hellekin
2013-07-22 19:28:54 UTC
Permalink
Post by Melvin Carvalho
Klaus Schleisiek wrote: In essence, Edward Snowden has raised the
question of "trust".
*** Yes Klaus, you're right. I share most of your analysis. I posted
some of it yesterday and will correct the draft of the GNU/consensus
Whistle [0] to mention trust as well as privacy.

At least there's a shared agreement that we're living in a moment ripe
for radical change.
Post by Melvin Carvalho
Crowd funding is a good idea. Although it tends to be non
optimally allocated.
*** That is the challenge we need to overcome! Two days ago I was
sitting in a library with a friend and proposing that we could
bring 10 developers from various projects and sequester them in a
cheap country to fix the inter-project communication for good. It
would not take a lot of investment to do this under nice weather
conditions for 6 months to a year, and it certainly wouldn't take
that long to move from the current mess to a properly functioning
grassroots federation. We could do that with the Mocambos network in
Brazil (Vince?).

Melvin Carvalho
2013-07-24 19:27:44 UTC
Permalink
They are in the church of "your email is your identity" -- let's be clear,
this is an unnecessary restriction which will not scale. Other projects
(not mentioning any names) *cough cough* are also in this religious sect.
email needs to be discontinued in the long run. it doesn't serve any of
the purposes it was constructed for. it gives the attacker a full view
of the social network, a view into the content by default, and it also
fails at delivering to many recipients promptly and at handling spam.
a proper messaging system protects both content and metadata of
communications, multicasts whenever something wants to be received by
multiple recipients, and sorts out spam because spam is the only use
case left for massive unicasting. also, key management is ridiculously
easy if it doesn't try to get along with email addresses. the key
is the identity. works great in modern systems such as tor hidden
services, retroshare etc.
Yes, but the key is not the person. The person has a key. When you
overload the key and the person to mean the same thing (known as an
'indirect identifier') you have to be quite careful. The advantage of not
overloading identifiers is that the same identifier in one system means
the same in another, which helps with interop with other systems that
might not have made the same design decisions as you.
Together with Elijah of the LEAP project we came up with this list:
1) Client side encryption
haven't looked into it, but if there is a client i presume there is
a server, and then that server gets to see metadata of who is
talking to whom. that means it doesn't fulfil today's requirements
for privacy. correct?
*** We tend to always repeat the same thing: public communication does
not need to be encrypted, so it's a use case that should easily be agreed
upon. Yet, everything that is not explicitly public is, in my opinion,
to be private: that means, in the current context, strongly encrypted.
Great point. We've not even solved the public version yet. When we do, I
suspect the encrypted version will be easy: just some shared keys and AES
or another symmetric cipher, asymmetric PKI, or a hybrid, or security by
obscurity. I'm not a huge believer in advertising to the whole world
which crypto algorithms are being used; it should be enough that all the
relevant parties know.
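A minimal sketch of the hybrid approach mentioned above (asymmetric key
agreement feeding a symmetric cipher), again with the third-party
"cryptography" package; the info label and message are invented for
illustration:

    # Hybrid scheme: X25519 key agreement yields a shared secret,
    # which (via HKDF) keys a symmetric cipher for the payload.
    import base64
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey,
    )
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()

    def session_key(me, peer_public):
        shared = me.exchange(peer_public)
        raw = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"demo").derive(shared)
        return base64.urlsafe_b64encode(raw)  # Fernet expects base64 keys

    token = Fernet(session_key(alice, bob.public_key())).encrypt(b"hi bob")
    print(Fernet(session_key(bob, alice.public_key())).decrypt(token))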
once there is a system that can deliver public info without exposing who
is actually receiving it, there will be no need for public info to
actually be unencrypted. in other words, i am afraid you are putting
effort into something nobody will want as soon as something better is
available. there is no such thing as "public communication." even if i
subscribe to the public pirate party announcements channel, i am
exposing my political interest - thus, even the most popular twitter
content needs to be subscribable in privacy. the delivery tree must
be invisible to outside viewers, thus the content must be encrypted,
or otherwise you would see how the distribution tree is architected.
conclusion: the use case for a "public version" does not exist - at
least not from the point of view of somebody who is not going to tolerate
any further data mining on the general population - even if it is
just subscribing to sports news. this whole 1984 approach has to be
stopped without exceptions.
8) Client choice - you should be able to use a mobile, desktop, or
html5 app client (once webcrypto is deployed in browsers).
*** Honestly, I'm not sure that is a sane decision. Not until there
is progress on:
- 1 TLS certification authorities (utterly broken and mostly
untrustable)
- 2 TLS perfect forward secrecy implementation everywhere (servers,
browsers) including protection against TLS-downgrade attacks. That's
mostly a matter of proper configuration though.
- 3 Javascript protection: LibreJS to ensure the code running on the
page is genuine and not malicious
- 4 proper protection against XSS, CSRF, and other MITM
3 and 4 are way out of reach at the moment. HTML5 is bringing new
attack vectors, and the development of the Web is going towards more
usage of centralized resources (+1, Like, Login with X, analytics,
etc.), not to mention javascript-less CSS-based or DOM-based XSS,
plugin-based infection, and mobile phone network insecurity as a whole.
i agree and i also doubt that 1 and 2 can be fixed in a way that cuts
out big brother. too many bugs on all levels. X.509 is a failure.
the only way to use web technology is to forbid it from connecting
to anywhere else but the locally running social network daemon.
I don't believe in trying to create perfect security; security should be
"good enough", and then you start the arms race between attackers and
defenders. Facebook didn't even start with https, remember, and got to
100 million users. In any other project I'd be a security
fundamentalist, but for a social project, users count; this often seems
to be underestimated...
the user count is irrelevant if the load is fully distributed among those
who are using it. it doesn't matter if a hundred or a billion people
participate. each time we do something "good enough" we are only behind
the attacker and not achieving our goals - because each time we find
out it wasn't good enough. we have a realistic chance to do much better
so let's not invest in anything half-baked. after all Tor just needed
to be coded, too, and it has changed the landscape of the internet.
carlo von lynX
2013-07-24 22:57:28 UTC
Permalink
Post by Melvin Carvalho
Yes, but the key is not the person. The person has a key. When you
overload the key and the person to mean the same thing (known as an
'indirect identifier') you have to be quite careful. The advantage of not
overloading identifiers is that the same identifier in one system means
the same in another, which helps with interop with other systems that
might not have made the same design decisions as you.
Well, since there is currently NO system that fulfils all our privacy
requirements, there is nothing to interop with. I presume once there is
a tool that does everything right, it will be at the center of a big
bang - that means all variations will derive from it and thus be
compatible with it. All that went before will continue to fulfil its
niche jobs but slowly become irrelevant. Like Myspace. It doesn't make
sense to interop with something that is going to lower your degree of
privacy or security. It's like asking to downgrade the cipher. So the
challenge of interoperability does not factually exist IMHO.

Lorea, for example, is doing a great job - but there is a danger of
getting prismed. If Lorea users want to go the next step, they simply
start using something like Retroshare in parallel to the Lorea websites.
There is no use in having any interop and thus damaging the
stronger security tool.

To me the question is, will Retroshare spark the big bang? Or does
gnunet with secushare have a chance? Or are we actually late and the
technology leading the way is already out there.. tor hidden services?
Or will something else along these lines appear out of nowhere?

In all of my thinking I am just hoping humanity will not once again
fall for some half-baked insecure solution.. like giving money to
heml.is. So that's the only scenario I am not taking into consideration:
the ability of humanity to settle for something that will not do the job.
--
»»» psyc://psyced.org/~lynX »»» irc://psyced.org/welcome
»»» xmpp:***@psyced.org »»» https://psyced.org/PSYC/
carlo von lynX
2013-07-25 08:02:48 UTC
Permalink
We still live under Zooko's triangle. Identity <> key mapping is only
easy if you exclusively care about globally unique and decentralized,
but it is very hard if you care also about human friendly.
hi there eli.. well if saying 7yuogiqxgrak36kk is all it takes to
achieve Identity <> key mapping, and that is as bad as human-unfriendly
gets, I am positive people out there are going to deal with
human-unfriendly for the sake of a truly reliable communications
infrastructure.
You can get all three, if you cheat. Namecoin is an example of cheating
in a peer to peer way (the cheat is that the global append-only log is
essentially an authority, derived from consensus of miners). DANE
achieves all three by relying on the authority of the root DNS zone.
Nicknym, the protocol we are working on (https://leap.se/en/nicknym)
also achieves all three by relying on DNS, although in an entirely
different way.
So in my case the cheat is in selecting a slice of the hash?
We can, and must, do much better than a secure identity system that is
unfriendly to humans. It is the 21st century, after all.
The other two goals are a lot more important, so all we want to do
is mitigate this aspect. I see 3 fabulous ways to do it:

- socialist millionaire's shared secret while having a beer together
- public key in a QR code on a business card (printed paper is harder to mitm)
- a slice of the hash confirmed by voice on the phone
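To make the third option concrete, a minimal sketch; the slice length is
an arbitrary choice for illustration, and longer slices are harder to
collide:

    # Derive a short, speakable slice of a key's hash. Both parties read
    # their slice aloud; a mismatch reveals a substituted (MITM'd) key.
    import hashlib

    def spoken_slice(pubkey: bytes, chars: int = 8) -> str:
        return hashlib.sha256(pubkey).hexdigest()[:chars]

    # both sides run this on the key they actually received
    print(spoken_slice(b"...alice's public key bytes..."))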

Tor is leading the way. Simply by spelling out 7yuogiqxgrak36kk to you
we have a cryptographic guarantee that your tor node will connect to mine
and only to mine. NSA can do a lot, but I doubt they can MITM all mails
and twitters on earth to intercept my hash and replace it with another,
but just in case they'd dare to do so for you because you are their target,
well then you can have a surveilled phone conversation with me and I can
*still* make sure you have my correct public key - no matter how many
people are listening into that conversation.

Many of the MITM problems arise from abstracting the identity away from
its public key. By actually using the key in addressing we solve these
problems. There is no need to maintain abstraction layers that reduce
the security of their users.
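For illustration, a 16-character identifier of the 7yuogiqxgrak36kk kind
can be derived from a public key roughly the way Tor's v2 hidden service
addresses were - base32 over a truncated hash of the key. A minimal
sketch with placeholder key bytes:

    # Tor-v2-style identifier: base32 of the first 80 bits (10 bytes)
    # of the SHA-1 digest of the encoded public key -> 16 characters.
    import base64, hashlib

    def onion_style_id(pubkey_der: bytes) -> str:
        digest = hashlib.sha1(pubkey_der).digest()
        return base64.b32encode(digest[:10]).decode().lower()

    print(onion_style_id(b"...DER-encoded public key..."))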

So I'd say Zooko is a problem solved.
Back to work, we've got to save the world.
And yes, I proudly belong to the church of identity in the form of
***@domain. It is fantastically usable, and it is also universally
understood by every internet user on earth. There are other addressing
schemes that are user
Neither Skype nor Facebook think in terms of ***@domain. Actually @domain
is totally distant from average humanity - it's abnormal to think of yourself
in terms of affiliation. No surprise the #1 domain in the world is gmail.com.
People would deal with it, if it worked, but it doesn't. Now it's time to
provide the key instead of the domain. You're living in the past, Eli. :)
carlo von lynX
2013-07-25 09:23:17 UTC
Permalink
you cannot plausibly argue that "7yuogiqxgrak36kk" is human memorable.
you are not cheating, you just don't care about one side of Zooko's
triangle.
Memorable is not the aim since once my connection to 7yuogiqxgrak36kk
is established I can memorize the nickname I assign to it (excuse me if
I stick to the original word for nym).

This approach only depends on my computer not being backdoored and evil
against me - but if that is the case then a memorable address won't be
of any help either.
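The nickname idea amounts to a local petname table - a minimal sketch,
with invented names, of how the unmemorable address stays authoritative
while the friendly name never leaves your machine:

    # Local petname table: no external naming authority involved.
    address_book = {"eli": "7yuogiqxgrak36kk"}  # petname -> key-derived id

    def resolve(petname: str) -> str:
        return address_book[petname]  # memorability is purely local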
Post by carlo von lynX
Tor is leading the way. Simply by spelling out 7yuogiqxgrak36kk to you
we have a cryptographic guarantee that your tor node will connect to mine
and only to mine.
onion addresses are a perfect example of why we need to do better. just
look at all the fake onion addresses floating around the internet for
silk road.
silkroadvb5piz3r is the correct one? it's a learning curve.. folks have to
learn that what's in front of that .onion is a lot more important than
what's behind any https. at least with silkroadvb5piz3r the challenge is
clear - just get it right. in the case of X.509 the mission is impossible:
figure out which CA's are not telling you the truth - in a world where
xx% of the CA's are operating under the patriot act.

for actual privacy there is no way around learning to handle public-key
addressing, but all you need is a way to import keys and then to keep
them safe in your electronic address book. it's solvable and requires
no trust towards external authorities.
Post by carlo von lynX
So I'd say Zooko is a problem solved.
keys as identifiers are the defining example of the problem that Zooko
was illustrating with the triangle postulate. using keys as identifiers
solves nothing.
there is no alternative. whatever else you have to offer is not acceptable AFAIK.
and the pain isn't really there. people can do this for the sake of freedom
and thanks to eddie they now know it's worth it.
Post by carlo von lynX
Now it's time to provide the key instead of the domain. You're living in the past, Eli. :)
good luck with that. you might get some people to use it, until
something actually human memorable comes along, which it will.
i hope there will be even better representations of public keys, indeed.
--
»»» psyc://psyced.org/~lynX »»» irc://psyced.org/welcome
»»» xmpp:***@psyced.org »»» http://my.pages.de/me
Guido Witmond
2013-07-25 10:04:33 UTC
Permalink
Post by carlo von lynX
We still live under Zooko's triangle. Identity <> key mapping is only
easy if you exclusively care about globally unique and decentralized,
but it is very hard if you care also about human friendly.
hi there eli.. well if saying 7yuogiqxgrak36kk is all it takes to
achieve Identity <> key mapping and as hard as human unfriendly gets,
I am positive people out there are going to deal with human unfriendly
for the sake of a truly reliable communications infrastructure.
I think that's too optimistic. People choose easy over secure anytime.
Even if it threatens their lives.

I would state the claim that one can't have reliable communication
without having all three of Zooko's properties. Proof is left as an
exercise for the reader.
Post by carlo von lynX
You can get all three, if you cheat. Namecoin is an example of cheating
in a peer to peer way (the cheat is that the global append-only log is
essentially an authority, derived from consensus of miners). DANE
achieves all three by relying on the authority of the root DNS zone.
Nicknym, the protocol we are working on (https://leap.se/en/nicknym)
also achieves all three by relying on DNS, although in an entirely
different way.
So in my case the cheat is in selecting a slice of the hash?
We can, and must, do much better than a secure identity system that is
unfriendly to humans. It is the 21st century, after all.
The other two goals are a lot more important, so all we want to do
is mitigate this aspect. I see 3 fabulous ways to do it:
- socialist millionaire's shared secret while having a beer together
- public key in a QR code on a business card (printed paper is harder to mitm)
- a slice of the hash confirmed by voice on the phone
1. Having a beer together is fun but doesn't scale over distance. How do
I set up a secure channel with people 50 km away? There is no such thing
as virtual beer.

2. Public key fingerprints or QR codes are not Zooko-proof. They fail
the human-memorable property. I can't read one from the side of a bus
and type it in at home.

3. A slice of a hash only proves that you have a secure channel when you
know the identity of the other person.
Post by carlo von lynX
Tor is leading the way. Simply by spelling out 7yuogiqxgrak36kk to you
we have a cryptographic guarantee that your tor node will connect to mine
and only to mine. NSA can do a lot, but I doubt they can MITM all mails
and twitters on earth to intercept my hash and replace it with another,
but just in case they'd dare to do so for you because you are their target,
well then you can have a surveilled phone conversation with me and I can
*still* make sure you have my correct public key - no matter how many
people are listening into that conversation.
Many of the MITM problems arise from the abstraction of identity and her
public key. By actually using the key in addressing we solve the problems.
There is no need to maintain abstraction layers that reduce the
security of its users.
So I'd say Zooko is a problem solved.
Back to work, we've got to save the world.
Not so fast, mister 7yuoqigxqrak36kk,

You leave out one property of Zooko's triangle and declare it solved.

Feel free to check out my attempt at solving Zooko's triangle. It uses
DNSSEC/DANE to validate the domain and it uses a network perspective to
validate each CA at the domain. Users are anonymous.

Check out:
http://eccentric-authentication.org/eccentric-authentication/five-minute-overview.html
Post by carlo von lynX
And yes, I proudly belong to the church of identity in the form of
***@domain. It is fantastically usable, and it is also universally
understood by every internet user on earth. There are other addressing
schemes that are user
Neither Skype nor Facebook think in terms of ***@domain. Actually @domain
is totally distant from average humanity - it's abnormal to think of yourself
in terms of affiliation. No surprise the #1 domain in the world is gmail.com.
People would deal with it, if it worked, but it doesn't. Now it's time to
provide the key instead of the domain. You're living in the past, Eli. :)
I fully agree with Eli here, Carlo. Or should I say ***@lynX?

'Von' as used in your name means, '
carlo von lynX
2013-07-25 15:47:20 UTC
Permalink
Post by Guido Witmond
Post by carlo von lynX
hi there eli.. well if saying 7yuogiqxgrak36kk is all it takes to
achieve Identity <> key mapping and as hard as human unfriendly gets,
I am positive people out there are going to deal with human unfriendly
for the sake of a truly reliable communications infrastructure.
I think that's too optimistic. People choose easy over secure anytime.
Even if it threatens their lives.
point granted.
Post by Guido Witmond
Post by carlo von lynX
- socialist millionaire's shared secret while having a beer together
- public key in a QR code on a business card (printed paper is harder to mitm)
- a slice of the hash confirmed by voice on the phone
1. Having a beer together is fun but doesn't scale over distance. How do
I set up a secure channel with people 50 km away? There is no such thing
as virtual beer.
let's discuss the scenario of a phone conversation while at the same time
typing the shared secret into the prompt. this requires a MITM to be ready
to act immediately - that is, agencies must have a VERY high interest in
you if they are monitoring you in real-time, not just recording your stuff.
THIS to me sounds like an acceptable compromise if you can't take the
classic walk in the park and keep an eye on people following you.

i'm not trying to achieve perfection, i just want surveillance to cost as
much effort as it did in stasi days, back in the 80s.
Post by Guido Witmond
2. Public key fingerprints or QR codes are not Zooko-proof. They fail
the human-memorable property. I can't read one from the side of a bus
and type it in at home.
granted concerning fingerprint, not acknowledged concerning a QR code you
received on paper from that person. in that case it's irrelevant that you
can't memorize its pattern - you just show it to your camera.
Post by Guido Witmond
3. A slice of a hash only proves that you have a secure channel when you
know the identity of the other person.
what's wrong with that?
Post by Guido Witmond
Post by carlo von lynX
Tor is leading the way. Simply by spelling out 7yuogiqxgrak36kk to you
we have a cryptographic guarantee that your tor node will connect to mine
and only to mine. NSA can do a lot, but I doubt they can MITM all mails
and twitters on earth to intercept my hash and replace it with another,
but just in case they'd dare to do so for you because you are their target,
well then you can have a surveilled phone conversation with me and I can
*still* make sure you have my correct public key - no matter how many
people are listening into that conversation.
Many of the MITM problems arise from the abstraction of identity and her
public key. By actually using the key in addressing we solve the problems.
There is no need to maintain abstraction layers that reduce the
security of its users.
So I'd say Zooko is a problem solved.
Back to work, we've got to save the world.
Not so fast, mister 7yuoqigxqrak36kk,
You leave out one property of Zooko's triangle and declare it solved.
https://en.wikipedia.org/wiki/Zooko%27s_triangle

Secure: check.
Decentralized: check.
Human-meaningful: not necessary, we use the power of paper and camera.

To me removing the necessity of one of the aspects
is solving the problem. No?
Post by Guido Witmond
Feel free to check out my attempt at solving Zooko's triangle. It uses
DNSSEC/DANE to validate the domain and it uses a network perspective to
validate each CA at the domain. Users are anonymous.
http://eccentric-authentication.org/eccentric-authentication/five-minute-overview.html
i like the CA root key taken offline... :) it's like our identity
recovery strategy described in http://secushare.org/threats

pretty advanced strategy.. not bad. so the achilles heel would be if
US government does something with the DNSSEC root? but all it could
do would be to break the certification system, right? i'm not sure
if i'm grasping the full implications of this. the UDP limit for keys
is evil. is it true that DNSSEC only provides URL download for larger
keys? does that mean the keys are no longer protected by DNSSEC in
that case?
Post by Guido Witmond
Post by carlo von lynX
And yes, I proudly belong to the church of identity in the form of the
fantastically usable, it is also universally understood by every
internet user on earth. There are other addressing schemes that are user
is totally distant from average humanity - it's abnormal to think of yourself
in terms of affiliation. No surprise the #1 domain in the world is gmail.com.
People would deal with it, if it worked, but it doesn't. Now it's time to
provide the key instead of the domain. You're living in the past, Eli. :)
Well, since ***@domain is the problem in the Zooko dilemma I don't see a
reason to stick with it.
Post by Guido Witmond
'Von' as used in your name means, '
Guido Witmond
2013-07-25 19:32:23 UTC
Permalink
Post by carlo von lynX
Post by Guido Witmond
Post by carlo von lynX
hi there eli.. well if saying 7yuogiqxgrak36kk is all it takes to
achieve Identity <> key mapping and as hard as human unfriendly gets,
I am positive people out there are going to deal with human unfriendly
for the sake of a truly reliable communications infrastructure.
I think that's too optimistic. People choose easy over secure anytime.
Even if it threatens their lives.
point granted.
Post by Guido Witmond
Post by carlo von lynX
- socialist millionaire's shared secret while having a beer together
- public key in a QR code on a business card (printed paper is harder to mitm)
- a slice of the hash confirmed by voice on the phone
1. Having a beer together is fun but doesn't scale over distance. How do
I set up a secure channel with people 50 km away? There is no such thing
as virtual beer.
let's discuss the scenario of a phone conversation while at the same time
typing the shared secret into the prompt. this requires a MITM to be ready
to act immediately - that is, agencies must have a VERY high interest in
you if they are monitoring you in real-time, not just recording your stuff.
THIS to me sounds like an acceptable compromise if you can't take the
classic walk in the park and keep an eye on people following you.
This will surely work, I think ZRTP already does this. But it is just
one scenario.
Post by carlo von lynX
i'm not trying to achieve perfection, i just want surveillance to cost as
much effort as it had in stasi days, back in the 80s.
Me too!

I want to be able to browse the net without being tracked left, right
and center to feed me more advertising.

I want all the data encrypted all the time.

I don't want to have to provide my email address each time I have to
create an account. And using temporary email providers is also an extra
hassle. An email address can be personally identifiable information.
Mine is.

I don't want to have a single identity online. I want as many identities
as I have passwords at sites.

I want to stay anonymous when buying something on the net. They get my
money, I get the product/service. (Paying and shipping anonymously is
hard.) But a throwaway identity makes linking that order to my other
activities harder.

I want a way to exchange encrypted messages (a sort of email) just as easily.

I want to put the control of anonymity in the hands of the users, not
having to trust a company.

I don't want to prescribe these ideals to others. If someone uses my
protocol to log in to facebook, why not. It's their freedom. At least,
they are safe when facebook gets hacked. There is nothing to gain at
facebook that would let anyone impersonate them at gmail.

I want to deploy it into the current web infrastructure.

With everything encrypted the spooks will have to hack the endpoints,
not hoover at the center.

Right now, that's where my quest for perfection stops. End user systems
are notoriously bad at protecting the users. Capability designs help
tremendously.
Post by carlo von lynX
Post by Guido Witmond
2. Public key fingerprints or QR codes are not Zooko-proof. They fail
the human-memorable property. I can't read one from the side of a bus
and type it in at home.
granted concerning fingerprint, not acknowledged concerning a QR code you
received on paper from that person. in that case it's irrelevant that you
can't memorize its pattern - you just show it to your camera.
Agreed, QR codes on business cards are an excellent way to distribute a
key/certificate. Can't MitM that.
Post by carlo von lynX
Post by Guido Witmond
3. A slice of a hash only proves that you have a secure channel when you
know the identity of the other person.
what's wrong with that?
It only works when you can identify the person by voice. If I call the
help desk of my bank, I can be MitM'ed without detecting it.
Post by carlo von lynX
Post by Guido Witmond
Post by carlo von lynX
Tor is leading the way. Simply by spelling out 7yuogiqxgrak36kk to you
Not so fast, mister 7yuoqigxqrak36kk,
Did anyone notice that I replaced the 'q' and 'g' characters in here?
That's the human-memorable property in action. Or the lack of it. :-)
Post by carlo von lynX
Post by Guido Witmond
Feel free to check out my attempt at solving Zooko's triangle. It uses
DNSSEC/DANE to validate the domain and it uses a network perspective to
validate each CA at the domain. Users are anonymous.
http://eccentric-authentication.org/eccentric-authentication/five-minute-overview.html
i like the CA root key taken offline... :) it's like our identity
recovery strategy described in http://secushare.org/threats
pretty advanced strategy.. not bad. so the achilles heel would be if
US government does something with the DNSSEC root? but all it could
do would be to break the certification system, right?
DNSSEC is used to *introduce* the correct certificate at first contact.
Validate all DNS results until you reach the ICANN Root Key that is
pinned in the client software. Fail hard if you fail validation.

After you've created a client certificate at the site's CA,
you can verify that the site's root CA is the same as your client
certificate's root CA. The site's CA is the identity. That's why each
site needs its own CA.
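
To make that check concrete, here's a rough Python sketch. The file
names and the SHA-256 fingerprint comparison are my own illustration,
not something the protocol prescribes:

import hashlib

def fingerprint(der_path):
    # SHA-256 over the DER-encoded certificate, as a hex string
    with open(der_path, 'rb') as f:
        return hashlib.sha256(f.read()).hexdigest()

# Root CA recorded when the client certificate was created at the
# site's CA, versus the root CA the site presents on this connection.
pinned = fingerprint('client-cert-root.der')
presented = fingerprint('site-root-ca.der')
if pinned != presented:
    raise SystemExit("site CA root changed: possible MitM, fail hard")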

If the registrar is coerced and replaces the site's IP address and DANE
certificate with some spooks' site, the spooks still cannot impersonate
the site's local CA root key unless they break RSA or capture that key
from your crypto-stick, raising the price of the attack to a home break-in.

Existing users would fail to validate the spooks' CA root against their
client certificates. And blog about it on another site.

It would 'fool' new users who don't read that blog. They could still detect
a problem when they try to communicate with others on the site, but that
depends on the type of site and the paranoia of the user agent software.
Post by carlo von lynX
i'm not sure
if i'm grasping the full implications of this. the UDP limit for keys
is evil. is it true that DNSSEC only provides URL download for larger
keys? does that mean the keys are no longer protected by DNSSEC in
that case?
I don't see a problem here. Good DNS resolvers switch to TCP when UDP
won't fit.

bash$ dig any _443._tcp.www.ecca.wtmnd.nl
;; Truncated, retrying in TCP mode.
.....
;; Query time: 31 msec
;; SERVER: 2a00:d00:ff:129:94:228:129:129#53(2a00:d00:ff:129:94:228:129:129)
;; WHEN: Thu Jul 25 21:17:36 2013
;; MSG SIZE rcvd: 3231



Cheers, Guido.
elijah
2013-07-25 08:37:55 UTC
Permalink
Post by carlo von lynX
So in my case the cheat is in selecting a slice of the hash?
you cannot plausibly argue that "7yuogiqxgrak36kk" is human memorable.
you are not cheating, you just don't care about one side of Zooko's
triangle.
Post by carlo von lynX
Tor is leading the way. Simply by spelling out 7yuogiqxgrak36kk to you
we have a cryptographic guarantee that your tor node will connect to mine
and only to mine.
onion addresses are a perfect example of why we need to do better. just
look at all the fake onion addresses floating around the internet for
silk road.
Post by carlo von lynX
So I'd say Zooko is a problem solved.
keys as identifiers are the defining example of the problem that Zooko
was illustrating with the triangle postulate. using keys as identifiers
solves nothing.
Post by carlo von lynX
Now it's time to provide the key instead of the domain. You're living in the past, Eli. :)
good luck with that. you might get some people to use it, until
something actually human memorable comes along, which it will.

-elijah
Michael Rogers
2013-07-25 08:33:09 UTC
Permalink
* Third-party-dropbox: To exchange messages, user A and user B
negotiate a unique "dropbox" URL for depositing messages,
potentially using a third party. To send a message, user A would
post the message to the "dropbox". To receive a message, user B
would regularly poll this URL to see if there are new messages.
Hi Elijah,

I'm curious about the third-party dropbox idea (partly because I'm
currently working on an HTTP dead drop transport for Briar). It seems
like there are two ways you could do this:

1. The dropbox is shared by multiple users; when user A authenticates
and deposits a message, she tells the dropbox that user B is allowed
to collect the message.

2. The dropbox is only used by one pair of users; when user A
authenticates and deposits a message, the dropbox knows it's for user B.

In either case, the dropbox has metadata about who's communicating
with whom. In case 2, anyone watching the dropbox also has that
metadata. (In case 1, anyone watching the dropbox has that metadata
unless communication with the dropbox is encrypted and padded.)
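
For concreteness, a minimal Python sketch of case 2 (the dead-drop
service and URL are hypothetical). Note that everything the server
sees - the URL, the timing, the message sizes - is exactly the
metadata at issue:

import urllib.request

# Hypothetical pairwise dead drop (case 2): the URL is negotiated out
# of band, unguessable, and used only by this pair of users.
DROPBOX = "https://dropbox.example/d/7f3a9c21"

def deposit(ciphertext: bytes):
    # user A posts an already-encrypted message
    urllib.request.urlopen(urllib.request.Request(
        DROPBOX, data=ciphertext, method="POST"))

def poll() -> bytes:
    # user B polls the same URL for new messages
    with urllib.request.urlopen(DROPBOX) as resp:
        return resp.read()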

So I don't see how this technique is metadata-resistant, except in the
short term (NSA has to arrange metadata collection from a new service
provider). What am I missing?

Cheers,
Michael
carlo von lynX
2013-07-25 09:00:01 UTC
Permalink
Post by Michael Rogers
* Third-party-dropbox: To exchange messages, user A and user B
negotiate a unique "dropbox" URL for depositing messages,
potentially using a third party. To send a message, user A would
post the message to the "dropbox". To receive a message, user B
would regularly poll this URL to see if there are new messages.
Hi Elijah,
I'm curious about the third-party dropbox idea (partly because I'm
currently working on an HTTP dead drop transport for Briar). It seems
1. The dropbox is shared by multiple users; when user A authenticates
and deposits a message, she tells the dropbox that user B is allowed
to collect the message.
2. The dropbox is only used by one pair of users; when user A
authenticates and deposits a message, the dropbox knows it's for user B.
In either case, the dropbox has metadata about who's communicating
with whom. In case 2, anyone watching the dropbox also has that
metadata. (In case 1, anyone watching the dropbox has that metadata
unless communication with the dropbox is encrypted and padded.)
So I don't see how this technique is metadata-resistant, except in the
short term (NSA has to arrange metadata collection from a new service
provider). What am I missing?
you didn't ask for it, but I'll give a rough description of the
PSYC over gnunet approach to this as far as i think we are doing it.

a dropbox is technically a multicast context anonymously* subscribed
to by one or millions of recipients. the name of the context is the
public key necessary to write to it. by looking it up in the DHT you
find possible root nodes of the multicast. you can send messages to
them encrypted to that public key - they will trickle down to the
subscribers. the ones who have the private key can read the messages.

if the subscribers are offline, the nodes next to the subscribers in the
tree will keep the messages for a reasonable time until they come back.
thus there is no actual implementation of a dropbox - it's just a side
effect of the multicast infrastructure and works both for one-to-one
messaging and for twitter-like usage or videocasting to millions of
viewers... and what's best: it's redundant and doesn't depend on any
node staying up.
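
a toy model in python of what i just described - the dict-based "dht"
and all the names are invented for illustration, this is not the
gnunet api:

from collections import defaultdict

class RootNode:
    def __init__(self):
        self.subscribers = []   # delivery callbacks of online subscribers
        self.buffer = []        # kept for subscribers that are offline

    def publish(self, ciphertext):
        self.buffer.append(ciphertext)  # retained "for a reasonable time"
        for deliver in self.subscribers:
            deliver(ciphertext)         # trickles down the tree

dht = defaultdict(RootNode)             # context public key -> multicast root

def send(context_pubkey, ciphertext):
    # anyone who knows the public key can write to the context
    dht[context_pubkey].publish(ciphertext)

def subscribe(context_pubkey, deliver):
    node = dht[context_pubkey]
    node.subscribers.append(deliver)
    for old in node.buffer:             # catch up on buffered messages
        deliver(old)

# one-to-one and one-to-millions use the same mechanism:
subscribe("ctx-pubkey", lambda m: print("got:", m))
send("ctx-pubkey", b"<ciphertext>")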

gnunet mesh actually reencrypts the message from hop to hop as it
travels down, as this has some advantages, but that doesn't mean
the original content has to be sent "in the clear." so my description
above is a bit of a simplification of what actually happens under the
hood.

too bad we didn't get the financing to have this already out there
running.


*) "anonymously" if you insert some onion hops between yourself and
the multicast source or if the tree is sufficiently big already.
you can choose to keep the tree short and fast, though. it will
still be hard to tell you are participating, unless the content is
a video stream and everybody else is just chatting. if instead
everybody else is watching streams or doing file sharing, then you
are covered well even if you chose to reduce the number of hops.
that's why gnunet is also for file sharing.
Michael Rogers
2013-07-25 10:00:43 UTC
Permalink
Post by carlo von lynX
a dropbox is technically a multicast context anonymously*
subscribed to by one or millions of recipients. the name of the
context is the public key necessary to write to it. by looking it
up in the DHT you find possible root nodes of the multicast. you
can send messages to them encrypted to that public key - they will
trickle down to the subscribers. the ones who have the private key
can read the messages.
This is interesting, but not what I have in mind, which is a way for
two users to communicate asynchronously by storing and collecting
encrypted data at a third party.

Cheers,
Michael
carlo von lynX
2013-07-25 15:09:24 UTC
Permalink
Post by Michael Rogers
Post by carlo von lynX
a dropbox is technically a multicast context anonymously*
subscribed to by one or millions of recipients. the name of the
context is the public key necessary to write to it. by looking it
up in the DHT you find possible root nodes of the multicast. you
can send messages to them encrypted to that public key - they will
trickle down to the subscribers. the ones who have the private key
can read the messages.
This is interesting, but not what I have in mind, which is a way for
two users to communicate asynchronously by storing and collecting
encrypted data at a third party.
think again, yours seems to me a special case of the above....
it's enough for you to have two devices ready to receive
the message for the multicast thingie to pay off.
Michael Rogers
2013-07-25 15:34:00 UTC
Permalink
Post by Michael Rogers
This is interesting, but not what I have in mind, which is a way
for two users to communicate asynchronously by storing and
collecting encrypted data at a third party.
think again, yours seems to me a special case of the above.... it's
enough for you to have two devices ready to receive the message for
the multicast thingie to pay off.
Yes, technically you could implement a dropbox using a multicast tree
with a single non-leaf node, a single leaf node and a very long
retention time. You could also implement a backscratcher using a
Sherman tank, but most people just use a stick. ;-)

Cheers,
Michael
carlo von lynX
2013-07-25 08:37:09 UTC
Permalink
email needs to be discontinued in the long run. it doesn't serve any of
the purposes it was constructed for. it gives the attacker a full view
of the social network, a view into the content by default, and it also
fails to deliver to many recipients promptly and to handle spam.
This is a straw man argument. Yes, email as currently practiced has
problems, but there is no reason email cannot be reformed. There is
email has a >25 year track record of not being reformable. it is just
as essential to the internet as FTP or TELNET once were. in 1991 FTP was
the #1 protocol on the internet, now it exists only for aficionados. in
1993 it looked like ssh was the fringe paranoia technology from the free
software extremists while all normal people were using telnet and rsh.
today you have to explain that there was something before ssh came along.
facebook has been obsoleting millions of emails already. if it's not us
taking email to the grave, somebody like facebook will.
enough interest in secure email these days that I am certain the
problems will be solved. The needed pieces are (1) opportunistic
secure email is fine, just don't use any of the broken old protocols
for it.
encryption via automatic key discovery/validation, (2) enforced StartTLS
if the key isn't the address, there is no safe way to perform key validation.
x.509 is a failure, you can't trust it.

even if you starttls, you are still making direct links from sending to
receiving server. there are two bugs here: (1) the path and meta data is
exposed, (2) servers get to have an important role which is bad because
servers are prone to prism contracts.

and don't call me paranoid because in these weeks we are finding out
the situation is WORSE than i thought when we last met in amsterdam!
so what i said back then WASN'T PARANOID ENOUGH!
(3) meta-data resistant routing. There are a couple good proposals on
the table for #1, postfix already supports #2 via DANE, and there are
four good ideas for #3 (auto-alias-pairs, onion-routing-headers,
third-party-dropbox, mixmaster-with-signatures [1]).
as long as it is backwards compatible with plain old unencrypted email
we are unnecessarily risking downgrade attacks. also we are exposing
our new safe mail system to st00pid spam problems of the past.

email compatibility must at max go as far as IMAP or POP3 to our
localhost onion router.

people prefer facebook mail anyway, so i presume they'll be fine
with retroshare mail or i2p mail or whatever we come up with in the
next weeks.
I do want to note that email has stood the test of time when it comes to
many recipients. I remember hosting email lists with 100k subscribers
and pushing millions of messages monthly on an ancient machine from
1998. Worked like a charm.
sure, how many hours did it take to deliver all of those messages?
just because you can barely survive without multicast doesn't mean you
should make a habit of that and stick to things that just weren't bad
enough to get replaced. well, email is getting replaced today - and i
don't want to be on the side of the ones getting replaced.
[1] details on ideas for meta-data resistant routing in a federated
client/server architecture
fine, but the federated client/server architecture is unnecessary and
servers are always prone to getting tapped. if you make servers
sufficiently dumb then they're essentially just some more nodes in
the network and there is no technical reason to distinguish clients
and servers much.
* Auto-alias-pairs: Each party auto-negotiates aliases for communicating
with each other. Behind the scenes, the client then invisibly uses these
aliases for subsequent communication. The advantage is that this is
backward compatible with existing routing. The disadvantage is that the
user's server stores a list of their aliases. As an improvement, you
could add the possibility of a third party service to maintain the alias
map.
sounds like a similar effort to setting up multicast trees, only trees
are more useful because they solve the distribution-to-many challenge.
gnunet provides this in the 'mesh' module.
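
a sketch of the alias idea itself (names invented, not anyone's actual
protocol): each side mints a random local alias for the peer and they
swap them once; afterwards only the meaningless aliases appear in
routing headers.

import secrets

def mint_alias() -> str:
    # random, meaningless handle the server cannot map to a contact
    return secrets.token_hex(8)

alias_map = {}                       # alias -> real contact, kept client-side

def add_contact(real_address: str) -> str:
    alias = mint_alias()
    alias_map[alias] = real_address  # the map never leaves the client
    return alias                     # hand this to the peer to address us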
* Onion-routing-headers: A message from user A to user B is encoded so
that the "to" routing information only contains the name of B's server.
When B's server receives the message, it unwraps (decrypts) a
supplementary header that contains the actual user "B". Like aliases,
this i think is the default behaviour of tor and gnunet. gnunet in
particular lets you choose how many onion layers you need per message -
so you can choose freely between paranoid data and low security high
bandwidth or realtime data.
this provides no benefit if both users are on the same server. As an
both users should only be on the same node if they are in the same
flat sharing the same LAN. ;) anything else is an ideological
distortion of the topology which harms the security of the participants.
improvement, the message could bounce around intermediary servers, like
mixmaster.
the question of choosing such intermediary servers must not be left to
the intermediary servers or the attack vector is simple: impede the
origin server from communicating with any servers except the ones run
by the NSA -> the NSA finds out where your messages are going, no
matter how many onion slices you added.
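
for reference, the onion-header idea itself fits in a few lines - a toy
sketch where base64 stands in for real encryption to the server's key,
just to keep it runnable:

import base64, json

def wrap(body: str, user: str, server: str):
    # outer routing names only the server; the mailbox hides in the blob
    inner = base64.b64encode(json.dumps({"user": user, "body": body}).encode())
    return {"to": server, "blob": inner}

def unwrap_at_server(envelope):
    # the server learns the actual mailbox only at this point
    inner = json.loads(base64.b64decode(envelope["blob"]))
    return inner["user"], inner["body"]

env = wrap("hi", "b", "example.org")
assert unwrap_at_server(env) == ("b", "hi")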
* Third-party-dropbox: To exchange messages, user A and user B negotiate
a unique "dropbox" URL for depositing messages, potentially using a
third party. To send a message, user A would post the message to the
"dropbox". To receive a message, user B would regularly polls this URL
to see if there are new messages.
"URL" is the wrongest term here. a dropbox would be a node in the network,
so it is a public key address. you only use a dropbox if the thing you
want to store isn't suitable for getting stored in the distributed
hashtable. regular P2P apps use the DHT because it is redundant and
doesn't depend on the "dropbox" staying up.
* Mixmaster-with-signatures: Messages are bounced through a
mixmaster-like set of anonymization relays and then finally delivered to
the recipient's server. The user's client only displays the message if
it is encrypted, has a valid signature, and the user has previously
added the sender to an 'allow list' (perhaps automatically generated from
the list of validated public keys).
i presume you want to let the mixmaster itself decide where the things
are sent to. this was okay in the past decade, but today it isn't safe
considering the kind of attack i described above.

P2P technology has made huge steps forward in the past decade and it
is not enough to understand onion routing to grasp all the scientific
progress that happened from there. i, myself, am not an expert - i am
just reflecting some gotchas from reading gnunet's university papers
and i bet christian or others can improve my critique of your idea of
retrofitting onion routing on top of SMTP.

other than that, gnunet already does operate over SMTP if necessary,
so although i wouldn't recommend it, you can already do this stuff.

since we have so many nice recipients of these emails, why don't we
also add libtech and unlike-us? ;)
--
»»» psyc://psyced.org/~lynX »»» irc://psyced.org/welcome
»»» xmpp:***@psyced.org »»» http://my.pages.de/me
elijah
2013-07-25 09:08:27 UTC
Permalink
Post by carlo von lynX
encryption via automatic key discovery/validation, (2) enforced StartTLS
if the key isn't the address, there is no safe way to perform key validation.
x.509 is a failure, you can't trust it.
Again, a straw man argument. yes, x.509 is a failure, but there are
other ways to perform key validation. I gave you two: DANE and Nicknym
(https://leap.se/nicknym)
Post by carlo von lynX
even if you starttls, you are still making direct links from sending to
receiving server. there are two bugs here: (1) the path and meta data is
exposed, (2) servers get to have an important role which is bad because
servers are prone to prism contracts.
I am beginning to suspect you are just trolling me now. Obviously,
starttls alone does not solve these problems, that is why I said the
solution requires opportunistic encryption of content and meta-data
resistant routing.
Post by carlo von lynX
(3) meta-data resistant routing. There are a couple good proposals on
the table for #1, postfix already supports #2 via DANE, and there are
four good ideas for #3 (auto-alias-pairs, onion-routing-headers,
third-party-dropbox, mixmaster-with-signatures [1]).
as long as it is backwards compatible with plain old unencrypted email
we are unnecessarily risking downgrade attacks. also we are exposing
our new safe mail system to st00pid spam problems of the past.
No and no. There are lots of ways to prevent downgrade attacks and lots
of ways to prevent spam.
Post by carlo von lynX
well, email is getting replaced today - and i
don't want to be on the side of the ones getting replaced.
email usage is a lower percentage of messages, but absolute email
traffic is still growing. email is not going away for a very long time.
Post by carlo von lynX
[1] details on ideas for meta-data resistant routing in a federated
client/server architecture
fine, but the federated client/server architecture is unnecessary and
servers are always prone to getting tapped. if you make servers
sufficiently dumb then they're essentially just some more nodes in
the network and there is no technical reason to distinguish clients
and servers much.
another way of saying this is that successful peer-to-peer networks
follow a power law distribution and effectively look much like a
federated architecture except with no one responsible for keeping the
lights on and with really poor support for mobile devices and devices
with intermittent network access.

so, yes, my goal is federated client/server where the servers are dumb.
by doing this we gain a lot, including organizations responsible for
maintaining the health of the network, data availability and backup for
users, and high functionality on devices on bad networks or limited
battery, and (most importantly) the potential for more human friendly
secure identity.

I suspect you will continue to claim, as you have many times in the
past, that federated models are inherently insecure. There is simply no
basis for this claim, and the more you make it the less credible you
seem. So, please stop making this claim. We both share the same long
term goal, and we both think that eventually peer to peer architectures
will get us there. We disagree on the schedule, in that I think
federated approaches are better for the immediate future and you think
peer-to-peer approaches are the only way.

Fine, reasonable people can disagree, but there are real trade-offs to
each approach [1], and by refusing to acknowledge the trade-offs you are
making, you do a disservice to the cause we share.

-elijah

[1] https://leap.se/en/infosec
carlo von lynX
2013-07-25 15:04:41 UTC
Permalink
Post by elijah
Again, a straw man argument. yes, x.509 is a failure, but there are
other ways to perform key validation. I gave you two: DANE and Nicknym
(https://leap.se/nicknym)
think of your grandma. she unpacks her brand new computer with
a free operating system on it. she clicks on "send a message"
and the software asks her to insert the identifier for her son.
typing ***@domain may be slightly easier than inserting
7yuogiqxgrak36kk, but it is harder than holding his business
card in front of the webcam or typing in what they agreed upon
(some keyboards have a very nasty place for the @ sign ;)).

but the real problem is different: once she starts using her
brand new computer, the communications software generates a key
pair for her. will she go to some israeli authority and get
herself a certificate? will she do it each year? no.

so how can the son be sure who is calling? he can trust the
combination of TOFU, perspective and provider keys. uhm, well -
no one has talked to his grandma yet, so perspective is out.
provider keys, which seem to be a WOT-based certification
structure, aren't useful either (not talking about exposing
your interest in a certain public key to some external
authorities...). so the only thing remaining is TOFU:
trust on first use.
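
tofu itself is trivial to sketch (store location and format made up
here, not anyone's actual implementation):

import json, os

STORE = os.path.expanduser("~/.tofu-keys.json")

def tofu_check(peer: str, fingerprint: str) -> bool:
    known = {}
    if os.path.exists(STORE):
        with open(STORE) as f:
            known = json.load(f)
    if peer not in known:
        known[peer] = fingerprint       # first use: pin and trust
        with open(STORE, "w") as f:
            json.dump(known, f)
        return True
    return known[peer] == fingerprint   # afterwards: alarm on any change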

in practice he has no other choice than to call her up on
the phone and ask her to click on some menu items and have
her read the public key hash to him. or he just doesn't
care and just deals with it if there's a man in the middle.

under these circumstances i think 7yuogiqxgrak36kk is easier
to handle, or one of the other ways for true end to end
safety. how much do i get for this free review of the
usefulness of the LEAP strategy? i see in the document that
you know how to criticize DANE yourself, so i won't bother.
whereas the criticism of shared secrets is rather lame:
it's enough to apply some normalization to the user string
to avoid such problems.
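
e.g. something like this - NFKC plus casefold is one plausible
normalization, not a spec:

import unicodedata

def normalize(user_string: str) -> str:
    # fold unicode forms and case so visually identical secrets compare equal
    return unicodedata.normalize("NFKC", user_string).strip().casefold()

assert normalize("Caf\u00e9 ") == normalize("Cafe\u0301")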

so, not only have i disproved your accusation that i made
a straw man argument.. according to
http://www.wisegeek.org/what-is-a-straw-man-argument.htm
it was a wrong assertion in the first place: i didn't look
at any argumentation of yours before you proposed it, so i
couldn't have made a straw man argument against it.

i just stated what my current knowledge is, and you have
involuntarily reconfirmed it.
Post by elijah
Post by carlo von lynX
even if you starttls, you are still making direct links from sending to
receiving server. there are two bugs here: (1) the path and meta data is
exposed, (2) servers get to have an important role which is bad because
servers are prone to prism contracts.
I am beginning to suspect you are just trolling me now. Obviously,
starttls alone does not solve these problems, that is why I said the
solution requires opportunistic encryption of content and meta-data
resistant routing.
i hadn't read point (3) yet when i wrote that. starttls is just less
efficient than having an encrypted protocol right away, but other than
that the problem with it is rather in the server-oriented architecture
of smtp, which is inherently prismable. from amsterdam you should know
that i don't troll you (what for?), but i may misinterpret you as has
happened before.
Post by elijah
Post by carlo von lynX
as long as it is backwards compatible with plain old unencrypted email
we are unnecessarily risking downgrade attacks. also we are exposing
our new safe mail system to st00pid spam problems of the past.
No and no. There are lots of ways to prevent downgrade attacks and lots
of ways to prevent spam.
so you expect all nodes to be properly configured against downgrade
attacks. great. and you filter spam because it's the only unencrypted
content left? :)
Post by elijah
Post by carlo von lynX
well, email is getting replaced today - and i
don't want to be on the side of the ones getting replaced.
email usage is a lower percentage of messages, but absolute email
traffic is still growing. email is not going away for a very long time.
only as long as the internet itself is growing. still, even old netheads
catch themselves typing an old friend's name into facebook rather
than digging out her email address.. and then you get to see the
photo of her as you type.. email can't beat that.
Post by elijah
Post by carlo von lynX
fine, but the federated client/server architecture is unnecessary and
servers are always prone to getting tapped. if you make servers
sufficiently dumb then they're essentially just some more nodes in
the network and there is no technical reason to distinguish clients
and servers much.
another way of saying this is that successful peer-to-peer networks
follow a power law distribution and effectively look much like a
federated architecture except with no one responsible for keeping the
lights on and with really poor support for mobile devices and devices
with intermittent network access.
so, yes, my goal is federated client/server where the servers are dumb.
by doing this we gain a lot, including organizations responsible for
maintaining the health of the network, data availability and backup for
users, and high functionality on devices on bad networks or limited
battery, and (most importantly) the potential for more human friendly
secure identity.
no, not federated. federated means people have a fixed home address and
the network is to a large extent static. there can still be people
responsible for keeping the lights on. in secushare, running a node
serves the people in your social neighborhood the most, so there is
a motivation to run one on a server, too. i know that's different
from your average p2p thing. so the criticism that still applies is mobile
use. well, we want privacy. if the thing is usable from a device you
do not own, you don't have that privacy. so the first thing you do is
install a proper free operating system with a proper onion router
preinstalled. then it can send and receive messages and still choose
to participate in the network in a way that is battery and bandwidth
friendly. it doesn't mean you have to have a client-server role model.
Post by elijah
I suspect you will continue to claim, as you have many times in the
past, that federated models are inherently insecure. There is simply no
basis for this claim, and the more you make it the less credible you
seem. So, please stop making this claim. We both share the same long
term goal, and we both think that eventually peer to peer architectures
will get us there. We disagree on the schedule, in that I think
federated approaches are better for the immediate future and you think
peer-to-peer approaches are the only way.
well, maybe you have changed the definition of federation so much that
the criticism no longer applies? if federation means that your client
has a home server where the social graph resides then you can do all
the cryptography you want - you HAVE exposed meta data to such a server
and servers are no longer safe.

so here is the basis for that claim. do you have any argumentation
against it? feel free to also criticize http://secushare.org/federation
Post by elijah
Fine, reasonable people can disagree, but there are real trade-offs to
each approach [1], and by refusing to acknowledge the trade-offs you are
making, you do a disservice to the cause we share.
that much can't be denied. but i think my trade-off is in line with
what RMS would expect: you can only run a proper secure communications
system if you have a free operating system installed on it. luckily
even iPhones can be reprogrammed with a free version of android, i was
told - so we can make it - we can achieve a revolution of privacy by
reflashing all of our devices worldwide. guess it's worth it. once we
do so, there is no reason to take half measures about the rest of the
architecture.
Post by elijah
[1] https://leap.se/en/infosec
ah great page.. i love cryptocat right next to skype!

let's extend the tables by a "P2P/F2F hybrid" which is something between
Tor and Retroshare.. or GNUnet combined with PSYC.. the column is
just like "peer to peer" except for...

Availability.. higher than high, because more than one node keeps the
pending messages for you - which is technically better than a "home
server" in federation thinking.. also consider that "server"-like nodes
are incentivized, so it isn't just nodes on some laptops.

Identity Security.. is solved better, thanks to the 3 approaches.
so Authenticity and Unmappability are high and Usability is medium -
since you still have to do a minimum effort for it - like printing
business cards. but i am glad you added Unmappability to the list
of priorities since last time we met... ;) what is the reason
why you believe LEAP has better Unmappability than generic P2P?
i talked of "social neighborhood" before, but i don't mean connecting
directly your friends as Retroshare seems to still be doing.

btw, gnunet is a really bad example for your P2P scores since it
comes with QR business cards, F2F mode, GADS and other things you
probably didn't take into consideration.

concerning User Freedom, Control can be the highest because only the
intended recipients get *any* data - and if they participate in a
distribution tree for your tweets they may not even see who else is
in that tree (you can decide if the member list is viewable).
in federation approaches at least the home server needs to know
where to send stuff to, right?

Usability.. Tor is indeed difficult to use in a way that doesn't
expose your identity.. but that's because the web is in the ballgame.
approaches like Retroshare or ours require you to actually write a
custom broken version of the software to damage yourself and others.
what other potential problem were you thinking of?

"Compatibility: None" - lol. actually since all data is on your own
hard disk i think you have never been so in control of it and you
can use any damn editor, file manager or photo manipulation software
which is on your computer to interact with it - so from my perspective
compatibility has never been so high. But of course you were intending
the ability to also upload the stuff to facebook and diaspora - well,
it can be done, no sweat. wouldn't be surprised if Retroshare already
has a plug-in for posting stuff to traditional surveilled social
networks. oh, you meant federated social web protocol standards? well,
excuse me if i dare say so, but 99.999% of users out there wouldn't
give a $*.

in other words your assertion that "reasonable people may disagree"
and "adjust one or two cells" is either quite inaccurate or i must
deduce that i am very unreasonable... :-D
elijah
2013-07-25 22:28:46 UTC
Permalink
how much do i get for this free review of the usefulness of the LEAP strategy?
It was nice of you to read it, but it would have been better if you
tried to understand it.

Either you are not operating in good faith or you have a mental block
against hearing any ideas that are not peer-to-peer. Either way, I don't
think it is productive for us to continue interacting.
well, maybe you have changed the definition of federation so much that the criticism no longer applies?
you continue to critique what you want to believe we are doing, rather
than taking any time to critique what we are actually doing. good night
sir, please don't write again.

-elijah
Melvin Carvalho
2013-07-25 09:27:47 UTC
Permalink
They are in the church of "your email is your identity" -- let's
be clear, this is an unnecessary restriction which will not scale.
Other projects (not mentioning any names) *cough cough* are also
in this religious sect.
also, key management is ridiculously easy if it doesn't try to get
along with email addresses. the key is the identity. works great in
modern systems such as tor hidden services, retroshare etc.
We still live under Zooko's triangle. Identity <> key mapping is only
easy if you exclusively care about globally unique and decentralized,
but it is very hard if you care also about human friendly.
Zooko's triangle is a useful tool, but note the comment:

Zooko's triangle is not a proof, but rather a suspicion; as Zooko puts it,
"I didn't prove that it is impossible to have all three features, I only
said that I doubted that your namespace will have all three.". (wikipedia)

The triangle applies when you wish to confine yourself to a single
identifier, which is just not a common best practice. In facebook, for
example, people use a combination of their real name, their email, their
telephone number, or their graph address (which is the one that ties all
the others together).

Overloading identifiers is conceptually wrong. You are not your email
address, you HAVE an email address. You are not your public key, you HAVE a
public key. If you overload this, it's a hack, called 'indirect
identifiers', and you have to accept the disadvantage that it's harder to
scale. Scaling is what's important in social.
You can get all three, if you cheat. Namecoin is an example of cheating
in a peer to peer way (the cheat is that the global append-only log is
essentially an authority, derived from consensus of miners). DANE
achieves all three by relying on the authority of the root DNS zone.
Nicknym, the protocol we are working on (https://leap.se/en/nicknym)
also achieves all three by relying on DNS, although in an entirely
different way.
So this is a cleaner approach. Have a composite identity of which email
and key are but facets. Therefore you need a technology to tie them
together. This is the engineering challenge. Namecoin is a distributed
database of key / value pairs, which is not a bad idea. DANE could work,
but again we are overloading. DNS is already a central point of failure,
be careful not to put all your eggs in one basket. Key value pairs can
also be stored in something like a facebook graph, in a database, or in
your browser -- there's more than just one solution to this.
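
To sketch what such a composite identity could look like as a data
structure (the field names are purely illustrative):

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Identity:
    # each facet is optional; none of them *is* the identity
    real_name: Optional[str] = None
    email: Optional[str] = None
    telephone: Optional[str] = None
    public_key: Optional[str] = None
    extra: dict = field(default_factory=dict)  # qr code, fingerprint, ...

alice = Identity(real_name="Alice", email="alice@example.org",
                 public_key="ed25519:hypothetical-key")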
We can, and must, do much better than a secure identity system that is
unfriendly to humans. It is the 21st century, after all.
True.
And yes, I proudly belong to the church of identity in the form of the
fantastically usable, it is also universally understood by every
internet user on earth. There are other addressing schemes that are user
There's a number of disadvantages to this approach. Firstly, you have to
convince everybody to subscribe to your world view, which is time intensive
(also a losing battle from the start) and takes away from your, and
everyone else's, development time. Secondly, it creates 'haves' and 'have
nots' and balkanizes the space. Even though I like the technology of
LEAP, all the more because it is free software, the intolerant way it is
done makes it harder to even fork or reuse the code, because of the
militant opposition to getting the patches upstream, either to the
codebase or the protocol.

The battle of free software has shifted from proprietary software, to
centralized data, to the gatekeepers of the non-free protocols that people
use.

CC: zooko
-elijah
Melvin Carvalho
2013-07-25 10:10:06 UTC
Permalink
Post by Melvin Carvalho
Zooko's triangle is not a proof, but rather a suspicion
of course, but the triangle has stood the test of time remarkably well
as a tool for thinking about the problem of binding identifiers to
cryptographic keys. it is fracking brilliant really, and allows one to
bring clarity to what would otherwise be very muddy conversations.
Post by Melvin Carvalho
And yes, I proudly belong to the church of identity in the form of the
fantastically usable, it is also universally understood by every
internet user on earth. There are other addressing schemes that are user
There's a number of disadvantages to this approach. Firstly, you have
to convince everybody to subscribe to your world view, which is time
intensive (also a losing battle from the start) and takes away from
your, and everyone else's, development time. Secondly, it creates
'haves' and 'have nots' and balkanizes the space. Even though I like
the technology of LEAP, all the more because it is free software, the
intolerant way it is done makes it harder to even fork or reuse the
code, because of the militant opposition to getting the patches
upstream, either to the codebase or the protocol.
I really am genuinely confused. Am I getting trolled here? The LEAP approach is to:
(1) downgrade to backward compatible communication protocols when
necessary, but allow for required upgrade to enhanced versions when
available.
(2) factor out as much of the code as possible into general libraries
that can be used by others
(3) cooperate as much as humanly possible with anyone and everyone
interested in the same problem space we are
So, basically, the exact opposite of everything you just wrote. We don't
have a working release for email yet, so we haven't had any submitted
patches, but rest assured they would be welcome.
I think maybe what you are getting at is that you think using
(email, chat, files, voip, social, etc) is a really bad idea. You don't
identifier there is. To embrace it is hardly being intolerant, it is
just being practical and backward compatible.
Embracing email as part of a holistic identity strategy could be
practical. But if it's using email as the 'one identity to rule them all'
-- it's going to be fractured by nature.
-elijah
Michael Rogers
2013-07-25 10:27:36 UTC
Permalink
I think maybe what you are getting at is that you think using
(email, chat, files, voip, social, etc) is a really bad idea. You
used identifier there is. To embrace it is hardly being intolerant,
it is just being practical and backward compatible.
Embracing email as part of a holistic identity strategy could be
practical. But if it's using email as the 'one identity to rule
them all' -- it's going to be fractured by nature.
I don't think the suggestion is to use email addresses as universal
identifiers, but simply to use the ***@domain format, which is
memorisable, easily recognised, and clearly represents the
user/service provider relationship.

However, using one format for multiple services is confusing - already
I can't put my Jabber ID on my business card without explaining that
it's not an email address. Perhaps it would be appropriate to use
different separators for different services - user#domain,
user*domain, etc.?
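
For instance, a client could dispatch on the separator; the mapping
below is invented, just to illustrate the idea:

SEPARATORS = {"@": "email", "#": "chat", "*": "social"}

def parse(identifier: str):
    # try each service-specific separator in turn
    for sep, service in SEPARATORS.items():
        if sep in identifier:
            user, domain = identifier.split(sep, 1)
            return service, user, domain
    raise ValueError("no known separator in " + identifier)

assert parse("alice#example.org") == ("chat", "alice", "example.org")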

Cheers,
Michael
Melvin Carvalho
2013-07-25 17:17:23 UTC
Permalink
I'm 100% with Marco. Forget friendly identifiers. Tor is already
playing with fire making its keys that short.
Things will only get worse. With the advent of quantum computing we
are looking at public keys in the order of megabytes.
We will all learn to keep address books like our grandmothers did for
unfriendly six digit phone numbers and street addresses.
No big deal.
Yes, this is true, but zooko's triangle only applies to a *single
string* identity.

In practice, identity is an OBJECT with facets. So you can have an email,
a telephone, a real name, a public key, or whatever you want. In most
cases there's no need to disambiguate, or even to remember anything. This
is exactly how facebook does it, and it works perfectly. It's only when
you pick ONE facet, and disallow all others that zooko's triangle comes
into play.

tl;dr this is only a problem for identifier overloading -- make identity a
multi-faceted object and have the best of all worlds
Klaus
http://sneer.me
--
Valeu, Klaus.
Melvin Carvalho
2013-07-25 17:34:54 UTC
Permalink
People won't authenticate multiple facets, will they?
Sure they will. At the moment we have a world split between http profiles
and email addresses. http is more popular, largely due to facebook, but
there are other systems such as tent, indieweb, foaf etc. email is pushed
hard by google, microsoft, yahoo and friends.

If you had to choose one of the two, http is superior imho because you can
dereference http to find more information; you can't dereference email
easily. Additionally, a normal user can create a profile page, but running
an email server is normally an enterprise level task.

Strangely, it seems to have been a religious war for the last 5 years, and I have
no idea why. It has slowed us down, and unnecessarily so.

The point here is that it does not have to be either/or, it can be AND.
You can imagine other facets in future being added such as telephone, key,
name, fingerprint, qr code or whatever.

As it happens, authentication is rare, and normally happens as a one-off.
After that an unguessable string is normally shared between parties (eg in
a cookie) so that you don't have to authenticate again. People often log in
today by clicking a button. If your public key is in your client, you just
need to click, not type or remember anything.

Authentication and identity are different concepts which are commonly
grouped together. It's rare that people look up other people by email when
adding a friend, they will use the real name, and this is also displayed on
your wall etc.
Nick Jennings
2013-07-25 21:11:00 UTC
Permalink
On Thu, Jul 25, 2013 at 7:34 PM, Melvin Carvalho
Post by Melvin Carvalho
People won't authenticate multiple facets, will they?
Sure they will. At the moment we have a world split between http profiles
and email addresses. http is more popular, largely due to facebook, but
there are other systems such as tent, indieweb, foaf etc. email is pushed
hard by google, microsoft, yahoo and friends.
I think this is painting with a wide brush and bending reality a bit. It
seems the people most against email addresses tend to do this thing where
they compare it to http. It's comparing apples to oranges.

There is no "split" between HTTP profiles and email addresses. An email
address is a target to send a message to. A profile is a display of
information. They serve different purposes and they do not compete.

Email is not "pushed hard" by anyone, at least to my knowledge. It's like
saying mailboxes are being pushed. If you don't want a mailbox, you don't
have to have one, you won't get any mail though.
Post by Melvin Carvalho
If you had to choose one of the two, http is superior imho because you can
dereference http to find more information; you can't dereference email
easily. Additionally, a normal user can create a profile page, but running
an email server is normally an enterprise level task.
Creating a profile page is not akin to running a mail server. That's simply
ridiculous. You could compare running a web-server to running a mail
server. Both of which require sysadmin skills to do correctly.
Post by Melvin Carvalho
Strangely, it seems to have been a religious war for the last 5 years, and I have
no idea why. It has slowed us down, and unnecessarily so.
I agree, I think it's misrepresentation that rubs some people the wrong
way. I'm in no way involved in this 'religious war' but I do find it absurd
when people make sweeping statements about how email and HTTP are somehow
competing for user identity, and that email is obsolete.


Post by Melvin Carvalho
The point here is that it does not have to be either/or, it can be AND.
You can imagine other facets in future being added such as telephone, key,
name, fingerprint, qr code or whatever.
+1
Post by Melvin Carvalho
As it happens, authentication is rare, and normally happens as a one-off.
After that an unguessable string is normally shared between parties (eg in
a cookie) so that you don't have to authenticate again. People often log in
today by clicking a button. If your public key is in your client, you just
need to click, not type or remember anything.
Authentication and identity are different concepts which are commonly
grouped together. It's rare that people look up other people by email when
adding a friend, they will use the real name, and this is also displayed on
your wall etc.
Facebook and all other major social networks have a "find your friends -
import your contacts list" to search for people you may know via their
email address. Again, you seem to be trying to underplay the importance of
email, by extreme generalization of very isolated use-cases, to make a
point.
Melvin Carvalho
2013-07-25 21:25:14 UTC
Permalink
Post by Nick Jennings
Post by Melvin Carvalho
People won't authenticate multiple facets, will they?
Sure they will. At the moment we have a world split between http profiles
and email addresses. http is more popular, largely due to facebook, but
there are other systems such as tent, indieweb, foaf etc. email is pushed
hard by google, microsoft, yahoo and friends.
I think this is painting with a wide brush and bending reality a bit. It
seems the people most against email addresses tend to do this thing where
they compare it to http. It's comparing apples to oranges.
I'm unsure what point you are trying to make here. Are you saying that I'm
deluded or that I'm against email? I would hope neither is the case.
Post by Nick Jennings
There is no "split" between HTTP profiles and email addresses. An email
address is a target to send a message to. A profile is a display of
information. They serve different purposes and they do not compete.
They do indeed. Until people decide to overload them to *also* be your
identity. Which is the case.
Post by Nick Jennings
Email is not "pushed hard" by anyone, at least to my knowledge. It's like
saying mailboxes are being pushed. If you don't want a mailbox, you don't
have to have one, you won't get any mail though.
I suggest you look at the big email providers' statistics; a few players
control a lot of the market.
Post by Nick Jennings
Post by Melvin Carvalho
If you had to choose one of the two, http is superior imho because you can
dereference http to find more information; you can't dereference email
easily. Additionally, a normal user can create a profile page, but running
an email server is normally an enterprise level task.
Creating a profile page is not akin to running a mail server. That's
simply ridiculous. You could compare running a web-server to running a mail
server. Both of which require sysadmin skills to do correctly.
Sorry? I said that creating a web page is easier than running an email
server, why do you find that ridiculous?
Post by Nick Jennings
Post by Melvin Carvalho
Strangely, it seems to have been a religious war for the last 5 years, and I
have no idea why. It has slowed us down, and unnecessarily so.
I agree, I think it's misrepresentation that rubs some people the wrong
way. I'm in no way involved in this 'religious war' but I do find it absurd
when people make sweeping statements about how email and HTTP are somehow
competing for user identity, and that email is obsolete.
Are you now saying that I said email is obsolete? If so I think you are
putting words in my mouth.
Post by Nick Jennings
The point here is that it does not have to be either/or, it can be AND.
Post by Melvin Carvalho
You can imagine other facets in future being added such as telephone, key,
name, fingerprint, qr code or whatever.
+1
Post by Melvin Carvalho
As it happens, authentication is rare, and normally happens as a
one-off. After that an unguessable string is normally shared between
parties (eg in a cookie) so that you don't have to authenticate again.
People often log in today by clicking a button. If your public key is in
your client, you just need to click, not type or remember anything.
Authentication and identity are different concepts which are commonly
grouped together. It's rare that people look up other people by email when
adding a friend, they will use the real name, and this is also displayed on
your wall etc.
Facebook and all other major social networks have a "find your friends -
import your contacts list" to search for people you may know via their
email address. Again, you seem to be trying to underplay the importance of
email, by extreme generalization of very isolated use-cases, to make a
point.
I honestly think you are trolling now. I've already stated that facebook
can look people up by name, email and telephone. People tend to choose the
real name. I'll leave it at that...
hellekin
2013-07-25 22:08:30 UTC
Permalink
People, it has become a bit difficult to participate in this conversation.
Let's not fight each other, please.

Elijah's proposal hasn't found consensus yet, but we can at least
agree on some of the points. Can we focus on sorting those out first,
and turn to the dissenting ones later?

I'd like to try an experiment: come back to the original 11 points,
and remove any point that drew contradictory arguments. Then we can
consider the common ground, and see clearly what we're really starting
from. It would take me a couple of hours to review the thread and
sort it out, and I'm ready to do that as soon as I can.

In the meantime, can we refrain from getting angry at each other?

Nobody has the RIGHT solution: the world is complex enough that more
than one path will happen concurrently, and most of them will be wrong
anyway. It's important to understand the issues associated with each
path, and their relative advantages. In the end, we don't want to
follow the Cult of The Best Software, but we want to provide free
software tools that will become infrastructure. It has to float
above each one's ego. If we can't do that, we're simply playing the
game of divide and conquer, and will fail. There's momentum now that
we need to ride, not miss.

==
hk
hellekin
2013-07-25 23:05:09 UTC
Permalink
Sorry for bumping myself. As the SocialSwarm wiki does not respond, I
started putting the stuff at the address below. Edits welcome.

http://libreplanet.org/wiki/GNU/consensus/berlin-2013
Zooko Wilcox-OHearn
2013-07-25 23:08:31 UTC
Permalink
Hi folks! Thanks for Cc:'ing me into this conversation, because I'm
interested in seeing where others are going.

I don't have time to contribute to the conversation myself because I'm
busy with the Least-Authority File System project
(https://tahoe-lafs.org). I think and hope that LAFS could be a useful
building block for some of the purposes that you folks are talking
about.

I'm about to announce a commercially-supported LAFS service for people
who would rather pay my company than operate the server software
themselves. We've contributed all of the code that we wrote for this
commercial service back to the Free and Open codebase at
https://tahoe-lafs.org.

Regards,

Zooko

Nick Jennings
2013-07-25 23:48:41 UTC
Permalink
On Thu, Jul 25, 2013 at 11:25 PM, Melvin Carvalho
Post by Melvin Carvalho
On Thu, Jul 25, 2013 at 7:34 PM, Melvin Carvalho <
Post by Melvin Carvalho
People won't authenticate multiple facets, will they?
Sure they will. At the moment we have a world split between http
profiles and email addresses. http is more popular, largely due to
facebook, but there are other systems such as tent, indieweb, foaf etc.
email is pushed hard by google, microsoft, yahoo and friends.
I think this is painting with a wide brush and bending reality a bit. It
seems the people most against email addresses tend to do this thing where
they compare it to http. It's comparing apples to oranges.
I'm unsure what point you are trying to make here. Are you saying that I'm
deluded or that I'm against email? I would hope neither is the case.
Sorry, maybe that came off stronger than I meant it.

However, my point is that the world is not split, nor is there any way to
back up the claim that HTTP is more popular. Just because someone has a
facebook profile URL doesn't mean that they in any way associate with that
as their identity. Nor do they ever actually use it directly as a means of
identifying themselves. How often is someone going to put their FB URL into
an auth form? They'll use their username, or a "fb login" button, but most
users would never think to enter a URL.

When you login to facebook, you enter your email address. When you want to
login to a website that uses fb auth, you click a fb login button. If
someone asks you if you're on facebook, an average person (not a
technically inclined one) isn't going to tell the person (let's pretend we
are away from our computers) their facebook URL (or username); they would
just either agree they knew mutual friends or pass along an email address
(or phone number, for that matter). The URL, I think, is lowest on the
totem pole. You can co-opt usernames, but a username is a different concept
that just happens to translate to a URL on most sites.
Post by Melvin Carvalho
There is no "split" between HTTP profiles and email addresses. An email
address is a target to send a message to. A profile is a display of
information. They serve different purposes and they do not compete.
They do indeed. Until people decide to overload them to *also* be your
identity. Which is the case.
How do they compete?
Post by Melvin Carvalho
Email is not "pushed hard" by anyone, at least to my knowledge. It's like
saying mailboxes are being pushed. If you don't want a mailbox, you don't
have to have one, you won't get any mail though.
I suggest you look at the big email providers' statistics; a few players
control a lot of the market
Post by Melvin Carvalho
If you had to choose one of the two, http is superior imho because you can
dereference http to find more information; you can't dereference email
easily. Additionally, a normal user can create a profile page, but running
an email server is normally an enterprise-level task.
Creating a profile page is not akin to running a mail server. That's
simply ridiculous. You could compare running a web-server to running a mail
server. Both of which require sysadmin skills to do correctly.
Sorry? I said that creating a web page is easier than running an email
server; why do you find that ridiculous?
As I said - the two things are not akin. I could just as easily say: It's
easier to send and receive email than to run a web-server.



Post by Melvin Carvalho
Strangely, it seems to have been a religious war for the last 5 years, and
I have no idea why. It has slowed us down, and unnecessarily so.
I agree. I think it's misrepresentation that rubs some people the wrong
way. I'm in no way involved in this 'religious war' but I do find it absurd
when people make sweeping statements about how email and HTTP are somehow
competing for user identity, and that email is obsolete.
Are you now saying that I said email is obsolete? If so I think you are
putting words in my mouth.
I was not saying you said that. I was saying that it's often said as part
of this religious war.
Post by Melvin Carvalho
The point here is that it does not have to be either/or, it can be AND.
You can imagine other facets being added in future, such as telephone, key,
name, fingerprint, qr code or whatever.
+1
As it happens, authentication is rare, and normally happens as a
one-off. After that, an unguessable string is normally shared between the
parties (eg in a cookie) so that you don't have to authenticate again.
People often login today by clicking a button. If your public key is in
your client, you just need to click and not type or remember anything.
Authentication and identity are different concepts which are commonly
grouped together. It's rare that people look up other people by email when
adding a friend; they will use the real name, and this is also what is
displayed on your wall etc.
Facebook and all the other major social networks have a "find your friends -
import your contacts list" feature to search for people you may know via
their email address. Again, you seem to be trying to underplay the
importance of email, by extreme generalization of very isolated use-cases,
to make a point.
Post by Melvin Carvalho
I honestly think you are trolling now. I've already stated that facebook
can look people up by name, email and telephone. People tend to choose the
real name. I'll leave it at that...
What you said was that it's rare that people look up others by email. When
someone imports their entire contact list, I'd say that's a huge amount of
email lookups. I get notifications (on facebook) about people using this
feature all the time (and am invited to do so). I also use this feature
when I have met someone completely out of my social circle (they gave me
their email) and would like to find them on a social network.

Though when finding people you've just met (possibly the night before), you
have nothing else to go on but their name and who they know (mutual
friends), so what you are really searching on is the social graph of your
immediate friends. It's a different use-case, albeit one that is very
effective a lot of the time.

Not trying to troll you, Melvin. :)
Klaus Wuestefeld
2013-07-25 17:19:23 UTC
Permalink
People won't authenticate multiple facets, will they?

On Thu, Jul 25, 2013 at 2:17 PM, Melvin Carvalho
I'm 100% with Marco. Forget friendly identifiers. Tor is already
playing with fire making its keys that short.
Things will only get worse. With the advent of quantum computing we
are looking at public keys in the order of megabytes.
We will all learn to keep address books like our grandmothers did for
unfriendly six digit phone numbers and street addresses.
No big deal.
Yes, this is true, but Zooko's triangle only applies to a single-string
identity.
In practice, identity is an OBJECT with facets. So you can have an email, a
telephone, a real name, a public key, or whatever you want. In most cases
there's no need to disambiguate, or even to remember anything. This is
exactly how facebook does it, and it works perfectly. It's only when you
pick ONE facet and disallow all others that Zooko's triangle comes into
play.
tl;dr this is only a problem for identifier overloading -- make identity a
multi-faceted object and have the best of all worlds
Klaus
http://sneer.me
On Thu, Jul 25, 2013 at 7:27 AM, Michael Rogers
Post by hellekin
I think maybe what you are getting at is that you think using
***@domain as the single identifier for a lot of different things
(email, chat, files, voip, social, etc) is a really bad idea. You don't
have to like it, but ***@domain is still the most commonly used
identifier there is. To embrace it is hardly being intolerant, it is
just being practical and backward compatible.
Embracing email as part of a holistic identity strategy could be
practical. But if it's using email as the 'one identity to rule
them all' -- it's going to be fractured by nature.
I don't think the suggestion is to use email addresses as universal
identifiers - rather that the ***@domain format is memorisable, easily
recognised, and clearly represents the user/service provider relationship.
However, using one format for multiple services is confusing - already
I can't put my Jabber ID on my business card without explaining that
it's not an email address. Perhaps it would be appropriate to use
different separators for different services - user#domain,
user*domain, etc?
Cheers,
Michael
--
Valeu, Klaus.
Klaus Wuestefeld
2013-07-25 17:11:07 UTC
Permalink
I'm 100% with Marco. Forget friendly identifiers. Tor is already
playing with fire making its keys that short.

Things will only get worse. With the advent of quantum computing we
are looking at public keys in the order of megabytes.

We will all learn to keep address books like our grandmothers did for
unfriendly six digit phone numbers and street addresses.

No big deal.

Klaus
http://sneer.me


On Thu, Jul 25, 2013 at 7:27 AM, Michael Rogers
Post by hellekin
I think maybe what you are getting at is that you think using
***@domain as the single identifier for a lot of different things
(email, chat, files, voip, social, etc) is a really bad idea. You don't
have to like it, but ***@domain is still the most commonly used
identifier there is. To embrace it is hardly being intolerant, it is
just being practical and backward compatible.
Embracing email as part of a holistic identity strategy could be
practical. But if it's using email as the 'one identity to rule
them all' -- it's going to be fractured by nature.
I don't think the suggestion is to use email addresses as universal
identifiers - rather that the ***@domain format is memorisable, easily
recognised, and clearly represents the user/service provider relationship.
However, using one format for multiple services is confusing - already
I can't put my Jabber ID on my business card without explaining that
it's not an email address. Perhaps it would be appropriate to use
different separators for different services - user#domain,
user*domain, etc?
Cheers,
Michael
--
Valeu, Klaus.
elijah
2013-07-25 10:05:33 UTC
Permalink
Post by Melvin Carvalho
Zooko's triangle is not a proof, but rather a suspicion
of course, but the triangle has stood the test of time remarkably well
as a tool for thinking about the problem of binding identifiers to
cryptographic keys. it is fracking brilliant really, and allows one to
bring clarity to what would otherwise be very muddy conversations.
Post by Melvin Carvalho
And yes, I proudly belong to the church of identity in the form of the
URI commonly referred to as an email address. Not only is ***@domain
fantastically usable, it is also universally understood by every
internet user on earth. There are other addressing schemes that are user
friendly-ish, like twitter @user, or namecoin (although namecoin
obviously has other problems), but ***@domain is here to stay.
There are a number of disadvantages with this approach. Firstly, you have
to convince everybody to subscribe to your world view, which is
time-intensive (also a losing battle from the start) and takes away from
your, and everyone else's, development time. Secondly, it creates
'haves' and 'have nots' and balkanizes the space. Even though I like
the technology of LEAP, all the more because it is free software, because
it's done in an intolerant way it's harder to even fork or reuse the
code, because of the militant opposition to getting the patches
upstream, either to the codebase, or the protocol.
I really am genuinely confused. Am I getting trolled here? The LEAP
approach is:

(1) downgrade to backward compatible communication protocols when
necessary, but allow for required upgrade to enhanced versions when
available.

(2) factor out as much of the code as possible into general libraries
that can be used by others

(3) cooperate as much as humanly possible with anyone and everyone
interested in the same problem space as we are

So, basically, the exact opposite of everything you just wrote. We don't
have a working release for email yet, so we haven't had any submitted
patches, but rest assured they would be welcome.

I think maybe what you are getting at is that you think using
***@domain as the single identifier for a lot of different things
(email, chat, files, voip, social, etc) is a really bad idea. You don't
have to like it, but ***@domain is still the most commonly used
identifier there is. To embrace it is hardly being intolerant, it is
just being practical and backward compatible.

-elijah
elijah
2013-07-25 07:40:24 UTC
Permalink
They are in the church of "your email is your identity" -- let's
be clear, this is an unnecessary restriction which will not scale.
Other projects (not mentioning any names) *cough cough* are also
in this religious sect.
also, key management is ridiculously easy if it doesn't try to get
along with email addresses. the key is the identity. works great in
modern systems such as tor hidden services, retroshare etc.
We still live under Zooko's triangle. Identity <-> key mapping is only
easy if you care exclusively about globally unique and decentralized,
but it is very hard if you also care about human-friendly.

You can get all three, if you cheat. Namecoin is an example of cheating
in a peer to peer way (the cheat is that the global append-only log is
essentially an authority, derived from consensus of miners). DANE
achieves all three by relying on the authority of the root DNS zone.
Nicknym, the protocol we are working on (https://leap.se/en/nicknym)
also achieves all three by relying on DNS, although in an entirely
different way.
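(To make the DNS-anchored idea concrete: a minimal sketch of discovering a
published key fingerprint through a DNS TXT record, using the dnspython
library. The record name and format are invented for illustration; this is
not the actual Nicknym protocol, which works differently:)

    import dns.resolver

    def lookup_key_fingerprint(user, domain):
        # Hypothetical convention: the provider publishes one TXT record
        # per user under a well-known subdomain.
        name = "%s._keys.%s" % (user, domain)
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None
        # TXT rdata is a tuple of byte strings; join and decode them.
        return b"".join(answers[0].strings).decode()

    # lookup_key_fingerprint("alice", "example.org")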

We can, and must, do much better than a secure identity system that is
unfriendly to humans. It is the 21st century, after all.

And yes, I proudly belong to the church of identity in the form of the
URI commonly referred to as an email address. Not only is ***@domain
fantastically usable, it is also universally understood by every
internet user on earth. There are other addressing schemes that are user
friendly-ish, like twitter @user, or namecoin (although namecoin
obviously has other problems), but ***@domain is here to stay.

-elijah
elijah
2013-07-25 07:58:23 UTC
Permalink
They are in the church of "your email is your identity" -- let's be clear,
this is an unnecessary restriction which will not scale. Other projects
(not mentioning any names) *cough cough* are also in this religious sect.
email needs to be discontinued in the long run. it doesn't serve any of
the purposes it was constructed for. it gives the attacker a full view
of the social network, a view into the content by default, and it also
fails at delivering to many recipients promptly and at handling spam.
This is a straw man argument. Yes, email as currently practiced has
problems, but there is no reason email cannot be reformed. There is
enough interest in secure email these days that I am certain the
problems will be solved. The needed pieces are (1) opportunistic
encryption via automatic key discovery/validation, (2) enforced StartTLS,
and (3) meta-data resistant routing. There are a couple of good proposals
on the table for #1, postfix already supports #2 via DANE, and there are
four good ideas for #3 (auto-alias-pairs, onion-routing-headers,
third-party-dropbox, mixmaster-with-signatures [1]).
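(For the DANE piece: a minimal sketch, again with the dnspython library, of
checking whether a mail host publishes a TLSA record - the DNS half of how
postfix can decide to require TLS. A real check must also be
DNSSEC-validated and must match the certificate against the record; the
host name is an example only:)

    import dns.resolver

    def has_dane_tlsa(mx_host):
        # DANE for SMTP: a TLSA record at _25._tcp.<mx-host> signals
        # that TLS can be required when talking to that host.
        try:
            dns.resolver.resolve("_25._tcp." + mx_host, "TLSA")
            return True
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False

    # has_dane_tlsa("mail.example.org")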

I do want to note that email has stood the test of time when it comes to
many recipients. I remember hosting email lists with 100k subscribers
and pushing millions of messages monthly on an ancient machine from
1998. Worked like a charm.

-elijah

[1] details on ideas for meta-data resistant routing in a federated
client/server architecture

* Auto-alias-pairs: Each party auto-negotiates aliases for communicating
with each other. Behind the scenes, the client then invisibly uses these
aliases for subsequent communication. The advantage is that this is
backward compatible with existing routing. The disadvantage is that the
user's server stores a list of their aliases. As an improvement, you
could add the possibility of a third party service to maintain the alias
map.

* Onion-routing-headers: A message from user A to user B is encoded so
that the "to" routing information only contains the name of B's server.
When B's server receives the message, it unwraps (decrypts) a
supplementary header that contains the actual user "B" (see the sketch
after this list). Like aliases, this provides no benefit if both users
are on the same server. As an improvement, the message could bounce
around intermediary servers, like mixmaster.

* Third-party-dropbox: To exchange messages, user A and user B negotiate
a unique "dropbox" URL for depositing messages, potentially using a
third party. To send a message, user A posts the message to the
"dropbox". To receive a message, user B regularly polls this URL
to see if there are new messages.

* Mixmaster-with-signatures: Messages are bounced through a
mixmaster-like set of anonymization relays and then finally delivered to
the recipient's server. The user's client only displays the message if
it is encrypted, has a valid signature, and the user has previously
added the sender to an 'allow list' (perhaps automatically generated from
the list of validated public keys).
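(A minimal sketch of the header-wrapping step of the onion-routing-headers
idea above, using the PyNaCl library's SealedBox; the key handling and
message framing are invented for illustration:)

    from nacl.public import PrivateKey, SealedBox

    # B's server keypair; the sender only ever needs the public half.
    server_sk = PrivateKey.generate()
    server_pk = server_sk.public_key

    # Sender side: the outer envelope names only B's server, while the
    # actual recipient is sealed to the server's public key, so relays
    # and observers see the server name but never the mailbox.
    outer_to = "server-b.example"
    hidden_header = SealedBox(server_pk).encrypt(b"deliver-to: bob")

    # Server side: unwrap the supplementary header on arrival to learn
    # the actual recipient.
    assert SealedBox(server_sk).decrypt(hidden_header) == b"deliver-to: bob"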
Vincenzo Tozzi
2013-07-24 23:29:42 UTC
Permalink
Hello to all!

All good, HK?

How's it going?
Post by hellekin
*** That is the challenge we need to overcome! Two days ago I was
sitting in a library with a friend and I was proposing that we
could bring 10 developers from various projects and sequestrate
them in a cheap country to fix the inter-project communication for
good. That would not take a lot of investment to do it under nice
weather conditions, for 6 months to a year, and it certainly
wouldn't take that long to move from the actual mess to properly
functioning grassroots federation. We could do that with the
Mocambos network in Brazil (Vince?).
Working on it.. we are getting more solid, and we can host hackers for
residencies and development in the coming months..

I didn't follow the whole thread .. well.. trying to share some bits..

.. our vision probably doesn't fit everyone.. we're mainly a network of
quilombos (rural and urban Afro-descendant communities), indigenous and
suburban communities.. (around 200 communities, ~120 with low-bandwidth
satellite connections)

.. being inspired by local traditions and cultures, we try to create a
digital and virtual dimension that follows and respects our needs..

In short..
* The first fight is for land, including digital land (Infrastructure)
* Eventually connected. Freedom to turn off the system (Asynchronous)
* Follow and empower real-life connections (Decentralized + Federated)
* These tecnomacumbas belong to the people.. (Agile + Docs + Communities) :)

So .. on the Baobab's Path (Rota dos Baobás)
we're moving ahead with Baobáxia ..
http://wiki.mocambos.net/wiki/NPDD/Baobáxia

Abraço,
Vince


- --
NPDD - Núcleo de Pesquisa e Desenvolvimento Digital
Casa de Cultura Tainã / Rede Mocambos / Mercado Sul / Altakamul
Klaus Schleisiek
2013-07-22 08:06:17 UTC
Permalink
Dear Melvin,

thanks for your information. In this post-Snowden era, many more people than
ever before have an open ear for security considerations (and cryptoparties).

I would like to make a number of remarks from a grassroots point of view. In
essence, Edward Snowden has raised the question of "trust". For a system that
serves as a trusted social web infrastructure more is needed than trusted
procedures and legal guarantees - the software itself and the platform it is
running on has to be trustworthy as well. The very possibilities to compromise
privacy have to be minimized by choosing an appropriate structure. This is
clearly to be preferred over a system that depends on the legal system to
"enforce" informational self determination.

Therefore, I believe that a social web for John Doe that can supersede
faceboogle needs to be not just open source; it needs to be crowd-funded as
well. This makes the lack of interest from the industry sector less of a
problem - privacy-conscious people will welcome a system that is free from
corporate interests. That does not mean that the corporate world is not
welcome to use the standards and structures that are going to be developed
in the public domain, but the public domain needs to take the lead. As I
see it, the situation is similar to what happened with Linux: there was an
open system, which industry then adopted and threw development effort at,
which benefited the open community in return - a healthy win-win situation.

Before we can start to speak about standards, we need to have a consensus
among those interested in a post-faceboogle social web about its structure
and capabilities. You may call this worthy of a standard in itself, but very
basic properties have to be agreed upon before serious efforts are invested
in realizing it. See my brief presentation from this year's Easterhegg at
https://frab.eh13.c3pb.de/system/attachments/6/original/13-03-29_19Uhr_social_networks.ppt?1364649557,
which proposes a basic set of requirements needed to supersede faceboogle. It
turned out to be quite agreeable.

Together with Elijah of the LEAP project we came up with this list of priorities:

1) Client side encryption

2) Social graph obfuscation

3) Self determined data storage

4) Scalability

5) Integration of old friends on legacy networks (which would compromise 1 and 2
for those, of course).

6) High availability - you should be able to access your data when you want it.

7) Device portability - you should be able to access your data from multiple
devices at the same time

8) Client choice - you should be able to use a mobile, desktop, or html5 app
client (once webcrypto is deployed in browsers).

9) Multiple identity - you should be able to maintain multiple identities, and
choose to link them or not.

10) Protocol agnostic - you should be able to cross-communicate with different
protocols, be they XMPP, HTTP, or p2p based.

11) Secure groups - groups with membership determined cryptographically. Groups
function as a virtual user, with all users in the group able to receive and send
as the group, because they share a private group-key.
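(A minimal sketch of the shared group-key in point 11, using the PyNaCl
library's SecretBox; distributing the key to members - the hard part - is
not shown:)

    from nacl.secret import SecretBox
    from nacl.utils import random

    # One symmetric key shared by all members defines the group:
    # whoever holds it can read and post as the group.
    group_key = random(SecretBox.KEY_SIZE)
    group_box = SecretBox(group_key)

    post = group_box.encrypt(b"sent as the group, by any member")
    assert group_box.decrypt(post) == b"sent as the group, by any member"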

After Snowden, I am quite certain that we will have broad attendance for a
workshop on these topics at the upcoming 30C3, the annual get together of the
Chaos Computer Club. See the CFP at
http://events.ccc.de/2013/07/18/30c3-call-for-participation-en/.

On the weekend of August 24/25 there will be a preparatory meeting for this
30C3 workshop, sponsored by the Wau Holland Foundation. This meeting will
sort out the following questions:

- which grass-roots projects should be represented
- whom we would like to see there
- what preparatory material needs to be produced for the workshop

Whoever would like to participate, please drop me a note.

:)

Klaus Schleisiek

Wau-Holland-Stiftung W
Postfach 65 04 43 H O L L A N D
22364 Hamburg/Germany S T I F T U N G
http://www.wauland.de
Post by ☮ elf Pavlik ☮
- Lack of integration into the Web: HTML5 is providing a host of new
capabilities to HTML that will reliably work cross-platform across an
increasingly heterogeneous number of platforms, including mobile. Browser
plug-ins will be increasingly phased out of existence from all major
browsers. Any social work needs to take advantage of this.
- Lack of security considerations: A distributed social networking
architecture by nature needs strong authentication of parties and integrity
and even confidentiality of messages.
In combination with the OpenSocial Foundation, the W3C can help address
each of the above concerns by 1) providing a single unified royalty-free
IPR policy 2) a Working Group with clear responsibilities for editor(s) and
chair with management structure 3) providing a primer and integration of
examples into the Open Web Docs with the rest of HTML5 4) Adding client
testing into the git maintained HTML test-suite and a clear server-side
test-suite 5) re-factoring current specifications around HTML5 (in
particular, Web Components and CORS) 6) Providing a broad test-suite and
integration of the social web with security-oriented work such as Content
Security Policy, the Web Cryptography API, and wide security reviews with
related work at the IETF. Future work should have a clear focus and work in
a unified manner, ideally with a single group with a well-defined timeline
and deliverables.
A Secure Open Social Web?
In particular, security considerations have received less attention than
needed on the social web, with the paradigm of an unauthenticated public
broadcast of messages failing to provide the elementary security
considerations needed for closed groups and valuable information, which are
requirements for many use-cases ranging from sensitive corporate
information to human rights activism. Any open social web that fails to
take on security considerations will be abused by spammers at the very
least.
Any new effort for the social web should clarify the threat model and
propose mitigations so that the open social web can handle high-value
information. For example, any attempt to broadcast messages needs to have
the sender authenticated, and so by nature all messages should be digitally
signed with integrity checks, lest a malicious party strip the signature
and replace it with its own when substituting a false message. For
sensitive information, the message should itself be encrypted and
decrypted only by those in the group. To allow messages in distributed
systems to be re-integrated and ordered correctly (as originally tried with
the Salmon Protocol), time-stamping is necessary. Lastly, it may be
incorrect to assume that a distributed social system is more secure than a
centralized silo if it isn't properly designed: care should be taken that
the ability to post presence updates does not store more information than
is necessary in a centralized location (as is currently done by XMPP
servers, for example), and for use-cases where high latency is allowed,
constant-rate background traffic and mixing can prevent traffic-analysis
threats.
Next Steps
The result of this workshop will determine the future of the open social
web. Concretely, this will consist of a report released within one month
and then possibly, if consensus is reached and there is enough industry
interest, one or more charters for Working Groups. The W3C welcomes joining
forces with the OpenSocial Foundation and numerous grassroots efforts both
inside (Pubsubhubub, OStatus) and outside the W3C (ActivityStreams,
IndieWeb) in making the social web a "first class" citizen on the Web.
--- End forwarded message ---
carlo von lynX
2013-07-22 12:29:12 UTC
Permalink
Post by Melvin Carvalho
Crowd funding is a good idea, although it tends to be non-optimally
allocated, i.e. most of the crowd funding went to Diaspora, partly because
they evangelized well and got good coverage on web 2.0 blogs and hacker
news. There are projects that didn't get 1% as much as Diaspora that were
not 99% worse.
Crowd funding suffers from the effects of P.R. We need procedures like we
have in our liquid-feedback-driven pirate parties (Austria, Italy) to
establish a decent degree of technical competence and correctness before
leading people to decide where to put the money.
Post by Melvin Carvalho
Decentralized payments could change things.
No, it's the lack of expertise and patience to get to it. Exactly the
problem liquid feedback was designed to solve (and does a lot better
than not doing anything at all).
Post by Melvin Carvalho
Currently facebooble make a lot of money by putting ads on content.
Content they did not create. The content creators often get no
remuneration, or sometimes just a fraction. This makes no sense to me in a
world of decentralized micropayments. We should have a multi-faceted
funding strategy. However, project selection is quite hard, either for the
lay person or even the expert. But perhaps we can try and make it slightly
more democratic, at least that, if no more.