[GNU/consensus] [whistle] I.0 Looking Through The Prism
hellekin
2013-07-26 06:39:17 UTC
Permalink

= GNU/consensus Whistle =

Volume I, Number 0
http://libreplanet.org/wiki/GNU/consensus/whistle/012013-07

(the wiki version displays an image of rms and Assange holding a
parody of Obama's "Yes We Can" campaign poster, featuring Edward
Snowden's face)


== Editorial: Looking Through The Prism ==

Edward Snowden achieved in a few weeks what we could not do in the
last two decades. Thanks to his exposure of the NSA's massive
surveillance program, PRISM, he brought the topic of ''privacy'' to
the mainstream. He demonstrated the need for it, and the need for
decentralization. He turned decades of warnings about the police
state and the loss of freedom, until now ignored by the general
public or deemed paranoid, into visionary statements. The exposure of
the PRISM scandal gives us, free software developers and activists,
an unprecedented opportunity to leverage mass consciousness and
collective intelligence to break free from the social network
services oligarchy, and to build a truly end-to-end and participatory
(voluntary) social environment.

But that won't go without a hitch. In this first issue of the
GNU/consensus Whistle, we'll see how momentum is building that we
need to seize for ''binding chaos''[0] and for overcoming the current
trust crisis altogether, for the benefit of all people.

This issue will feature some lesser-known software projects that may
be of interest to developers. Finally, the Whistle will provide a
short agenda of things past and to come; as a project or community
manager, you're welcome to submit entries to the list or to the wiki
for inclusion in the next issue.

It's also time that we think about organizing a common fund, or a
shared funding strategy, to cover the development and hosting
expenses of free software social networking. Each network and each
software project should evaluate and report its real costs and needs,
so that we can estimate what it takes to sustain decent, independent
social networking. A lack of funding has long hindered the
development of sustainable alternatives to centralized commercial
platforms. Maybe it's time to reconsider that central issue, in the
wake of the widespread awareness raised by the PRISM leak.
Mikael "MMN-o" Nordfeldth
2013-07-26 08:10:29 UTC
Permalink
First of all, I have to say I very much enjoyed this
resume-kind-of-post. Kudos for doing it!
=== Meanwhile, Plumbing... ===
At the heart of the controversy lies the loss of functionality that
accompanied the switch from StatusNet to Pump.IO: not only
visually[1], but more importantly in broken links, missing hashtags,
and the lack of group support and federation features in the core of
the Pump. People were expecting a drop-in replacement, but it feels
more like a drop-out.
I believe people should not have expected identi.ca to run at all as
long as it did. Evan obviously went to great lengths to keep it
running, despite an overly resource-demanding codebase.

The real problem, I think, is that identi.ca was "THE StatusNet site"
(heck, it wasn't even "StatusNet" for most users!), which effectively
removed the benefits of federation. Had more people taken off to
start their own nodes and federate, not a single one of them would
have been affected by this change.

I think Evan is doing the right thing to turn the ship around and set
for new land. And anyone along for the ride will have to oblige, as he's
paying the server bills. If anyone complains, he's handing out free (as
in beer and speech) lifeboats in the form of source code, and GNU Social
will try to accommodate these users with patches.

Personally I can't see why people didn't just start federating away
from identi.ca long ago. Using federated software without actually
using federation only results in centralisation. And then complaining
just shows a lack of insight, just like complaining that gmail.com
serves you ads based on your mail content or turns off XMPP.


...now what one may do to help people start their own federating nodes,
that's something to discuss.

--
Mikael "MMN-o" Nordfeldth
XMPP/mail: ***@hethane.se
http://blog.mmn-o.se/
hellekin
2013-07-26 14:15:49 UTC
Permalink
Post by Mikael "MMN-o" Nordfeldth
First of all, I have to say I very much enjoyed this
resume-kind-of-post. Kudos for doing it!
*** Thank you Mikael! That's an encouraging feedforth! I wholly
agree with all you wrote.
Post by Mikael "MMN-o" Nordfeldth
Personally I can't see why people didn't just start federating away
from identi.ca long ago. Using federated software without actually
using federation only results in centralisation.
*** The same thing happened with N-1.cc, where the population grew
but the support did not, making it unsustainable were it not for
crazy people who still prefer cooperating. When the migration from
1.7 to 1.8 happened, the disruption was terrible. We're still in
turmoil: the "population" kept growing, but support never started
matching the basic costs.

I don't have an answer to "why such things happen", but I can relate
it to the past couple of decades of propaganda selling everything
"for free" (gratis), at the expense of plundering the resources of
the Earth, slave labor, and social dumping (more specifically,
driving smaller competitors out of the game through massive economies
of scale, slave labor, and systematic buyouts, thereby reinforcing
the phagocyte behavior).

The Internet suffered this trend as much as the rest of society,
leading consumers to believe it comes "for free" as infrastructure. I
wish it were publicly funded infrastructure, with net neutrality
built in. But the colonization of minds by the idea that profit leads
to growth and growth leads to happiness is terribly ignorant of the
complexity of the ecosystem's cycles.

As Buckminster Fuller used to say: "Humanity acquires all the right
technology for all the wrong reasons."

==
hk
Guido Witmond
2013-07-26 15:15:53 UTC
Permalink
Post by hellekin
Post by Mikael "MMN-o" Nordfeldth
First of all, I have to say I very much enjoyed this
resume-kind-of-post. Kudos for doing it!
*** Thank you Mikael! That's an encouraging feedforth! I wholly
agree with all you wrote.
Post by Mikael "MMN-o" Nordfeldth
Personally I can't see why people didn't just start federating away
from identi.ca long ago. Using federated software without actually
using federation only results in centralisation.
*** The same thing happened with N-1.cc, where the population grew
but the support did not, making it unsustainable were it not for
crazy people who still prefer cooperating. When the migration from
1.7 to 1.8 happened, the disruption was terrible. We're still in
turmoil: the "population" kept growing, but support never started
matching the basic costs.
I don't have an answer to "why such things happen", but I can relate
it to the past couple of decades of propaganda selling everything
"for free" (gratis), at the expense of plundering the resources of
the Earth, slave labor, and social dumping (more specifically,
driving smaller competitors out of the game through massive economies
of scale, slave labor, and systematic buyouts, thereby reinforcing
the phagocyte behavior).
You should read Binding Chaos by Heather Marsh.... Plenty of answers
there. http://georgiebc.wordpress.com/2013/05/24/binding-chaos/
Post by hellekin
The Internet suffered this trend as much as the rest of society,
leading consumers to believe it comes "for free" as infrastructure. I
wish it were publicly funded infrastructure, with net neutrality
built in. But the colonization of minds by the idea that profit leads
to growth and growth leads to happiness is terribly ignorant of the
complexity of the ecosystem's cycles.
I wrote about it on the libtech list in:
https://mailman.stanford.edu/pipermail/liberationtech/2013-July/010335.html

It was about the high price of centralised server systems compared to
geographical caching.

Regards, Guido.
hellekin
2013-07-26 19:37:13 UTC
Permalink
Post by Guido Witmond
You should read Binding Chaos by Heather Marsh.... Plenty of
answers there.
http://georgiebc.wordpress.com/2013/05/24/binding-chaos/
*** I promised to translate it. I need to take the time.
Post by Guido Witmond
https://mailman.stanford.edu/pipermail/liberationtech/2013-July/010335.html
***
Brilliant! Allow me to <quote>

The problem with the web is that it favours a central distribution
model and forgoes geographical caching. For example, if I read an
interesting blog and send the URL to a friend in the same room, the
data that forms the blog has to travel all the way from the original
site - over all the same paths - a second time for my friend. Just so
he can have an identical copy.

He gets an identical copy of the important bits that mattered: the blog.
He might get different bits that don't matter, the advertisements.

If we had an easy way for me to transmit the blog to my friend, the
important bits would have an almost zero cost of transport, while the
unimportant bits would need the expensive path.

</quote>

It reminded me of the model for bandwidth allocation on large trunks:
if you're an ISP, your incentive is to maximize your available
bandwidth, in order to get allocated more bandwidth faster. Again, a
model that favors a few giant operators rather than many tiny ones.

So there, you have it: cutting out the middleman by enforcing
peer-to-peer distribution. Notice how the concepts of the cloud and
"web apps" do exactly the opposite. Unless the app is UnHosted.
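
To make the idea concrete, here's a quick sketch (in Python) of what
"geographical caching" could look like on the client side: before
pulling a document over the long haul, ask caches on the local
network for it first, keyed by a content identifier. The peer
addresses, the /cache/<id> layout, and the key derivation are all
made up for illustration; this is a sketch of the idea, not an
existing protocol.

import hashlib
import urllib.request

# Hypothetical caches reachable on the same LAN (addresses are made up).
LOCAL_PEERS = ["http://192.168.1.23:8080"]

def content_id(url):
    # Derive a stable cache key; a real system would hash the content itself
    # so that "the important bits" can be verified wherever they come from.
    return hashlib.sha256(url.encode("utf-8")).hexdigest()

def fetch(url):
    key = content_id(url)
    # 1. Ask nearby peers first: the blog only has to cross the room.
    for peer in LOCAL_PEERS:
        try:
            with urllib.request.urlopen("%s/cache/%s" % (peer, key), timeout=1) as resp:
                if resp.status == 200:
                    return resp.read()
        except OSError:
            pass  # peer absent, or it does not hold this document
    # 2. Fall back to the expensive path: the origin site over the long haul.
    with urllib.request.urlopen(url) as resp:
        return resp.read()

Whether the peer lookup goes over the LAN, a DHT, or something like
Freenet is an implementation detail; the point is that identical
copies stop travelling the same expensive paths twice.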

==
hk
Guido Witmond
2013-07-27 11:26:57 UTC
Permalink
Post by hellekin
Post by Guido Witmond
You should read Binding Chaos by Heather Marsh.... Plenty of
answers there.
http://georgiebc.wordpress.com/2013/05/24/binding-chaos/
*** I promised to translate it. I need to take the time.
Post by Guido Witmond
https://mailman.stanford.edu/pipermail/liberationtech/2013-July/010335.html
***
Brilliant! Allow me to <quote>
The problem with the web is that it favours a central distribution
model and forgoes geographical caching. For example, if I read an
interesting blog and send the URL to a friend in the same room, the
data that forms the blog has to travel all the way from the original
site - over all the same paths - a second time for my friend. Just so
he can have an identical copy.
He gets an identical copy of the important bits that mattered: the blog.
He might get different bits that don't matter, the advertisements.
If we had an easy way for me to transmit the blog to my friend, the
important bits would have an almost zero cost of transport, while the
unimportant bits would need the expensive path.
</quote>
It reminded me of the model for bandwidth allocation on large trunks:
if you're an ISP, your incentive is to maximize your available
bandwidth, in order to get allocated more bandwidth faster. Again, a
model that favors a few giant operators rather than many tiny ones.
So there, you have it: cutting out the middleman by enforcing
peer-to-peer distribution. Notice how the concepts of the cloud and
"web apps" do exactly the opposite. Unless the app is UnHosted.
It's not that I want to cut out the middleman completely; I need
those long-haul links to read interesting blogs. My neighbours are
nice people but don't write enough interesting blogs on cryptography ;-)

I want to avoid the waste of replicating all data all the time, and
become less dependent on that middleman.


Creating such a decentralised system is hard. It's easier to throw
more hardware at it, which again favours the central model and makes
publishing expensive.



Freenet has an interesting sharing/replication model for this. It
replicates content from the publishing node towards the readers, so
popular content spreads out. That comes at the cost of deleting
unpopular content; it is the price of sender untraceability. With
Freenet you don't know what's in your cache, which limits how much of
your precious disk space you are willing to assign to Freenet's data
store. You're not rewarded for having a large cache.

I want something with a global distributed cache, like Freenet, but
one that allows me to set a 'keep' flag on a file. My cache won't
expunge it, and I can access it like a file on disk. It is available
to others too, like a torrent seed. Popular content will be shared by
many, giving me a light load; unpopular content gives a light load
too. If I delete it, it will eventually get purged from the cache.

This allows me to match my disk space with my caching needs.

My own blog ramblings will get stored on my disk (with the keep flag
set). When they get popular, they will spread out, making it possible
to reach a large audience with a small computer and a relatively thin
connection.
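
Something like the following sketch (in Python) is what I have in
mind: a bounded cache that evicts the least-recently-requested
entries, much as Freenet's datastore drops unpopular content, except
that entries carrying the keep flag are pinned and never expunged.
The class, the byte budget, and the method names are made up for
illustration, not an existing implementation.

from collections import OrderedDict

class KeepCache:
    """A bounded shared cache with a per-entry 'keep' flag (illustrative sketch)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = OrderedDict()   # key -> (data, keep), oldest request first

    def get(self, key):
        if key not in self.entries:
            return None                # a real node would then ask its peers
        data, keep = self.entries.pop(key)
        self.entries[key] = (data, keep)   # move to most-recently-requested
        return data

    def put(self, key, data, keep=False):
        if key in self.entries:
            old, _ = self.entries.pop(key)
            self.used -= len(old)
        self.entries[key] = (data, keep)
        self.used += len(data)
        self._evict()

    def set_keep(self, key, keep):
        if key in self.entries:
            data, _ = self.entries.pop(key)
            self.entries[key] = (data, keep)   # unkept entries rejoin the eviction queue

    def _evict(self):
        # Drop the least-recently-requested, unpinned entries until the disk
        # budget fits again; pinned ("keep") entries are never expunged.
        for key in list(self.entries):
            if self.used <= self.capacity:
                break
            data, keep = self.entries[key]
            if not keep:
                del self.entries[key]
                self.used -= len(data)

Publishing my own ramblings would then be cache.put(content_id, blob,
keep=True): the copy on my disk never disappears, popular copies
spread through other people's unpinned slots, and unpopular ones
quietly fall out of them.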


Cheers, Guido.
Francesco
2013-07-27 01:49:02 UTC
Permalink
Post by hellekin
= GNU/consensus Whistle =
Thanks hellekin! Really a nice and informative read! I hope to see
more and more of it!
Post by hellekin
Of course all can mourn the loss of groups, a fantastic feature of
StatusNet^H<tt>GNU social</tt>, and the lack of bridges to other
systems--including its own past incarnation, as one ''may be clicking
dead links''[2]. That is not the first time such a thing occurs in the
(federated) Web.
As an avid user of RSS, what hurts me the most is the absence of RSS
feeds for public timelines [1] (and this seems to break OStatus
compatibility too).

Having said that, Evan (e14n) did a great job, had (in my opinion)
great intuition, and agrees with you that we should avoid crowded
instances!

[1] https://github.com/e14n/pump.io/issues/55
