Discussion:
[GNU/consensus] Fwd: An Update from the Name Resolution Trenches
hellekin
2015-01-26 14:34:39 UTC
A very interesting overview of what's going on with DNS these days.
This basic building block of the Internet, taken for granted, turns out
to be far more complex the closer you look at it...

Kudos for mentioning the P2P Names draft and the analysis of DNS and
privacy. I will respond to the suggestion of a single pTLD for all 6
P2P Names in a following message.

==
hk

-------- Forwarded Message --------
Subject: An Update from the Name Resolution Trenches
Date: Mon, 26 Jan 2015 12:27:57 +0000
From: Hugo Maxwell Connery
To: christian, hellekin, tor-talk

Below is also attached as text to use in a potentially nicer viewer.

== An Update from the Name Resolution Trenches ==

Summary:

In the Internet name resolution space,
the only real solutions for privacy are going
to come from the overlay communities, like
tor and gnunet. In other words, DNS is too big to fail (to change
significantly). Plus a suggestion below [P2P].

Verbiage:

Following its statement that "pervasive monitoring
is an attack on the Internet" (RFC7258 [1]), the IETF
has established various working groups to examine how
to fit privacy protection onto existing, heavily used
protocols. The DPRIVE [2] working group is addressing
DNS.

DNS could be described as the largest, most highly available,
globally distributed, hierarchical name/value lookup
database ever built. It is the beginning of almost
any interaction on the net. And its architecture,
in both governance and protocol, is a privacy nightmare.

I have been participating a little in, and watching a lot of,
the DPRIVE working group. My expectation is that
a year's work by the leaders of the standards
tech sphere will produce two proposals. (I may well
be wrong.) They are:

A. Query minimization. That is, instead of asking the
root for www.example.org and receiving a referral to .org's
name servers, one asks only for what is needed (the NS records of
.org). This continues down the tree (ask .org for the NS records
of example.org), and only those last name servers are asked
the full question (give me an IP address for www.example.org).
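The idea can be sketched in a few lines (this is my illustration, not
part of the original mail; the function name and output format are
made up): at each level the resolver asks only for the next referral,
and reveals the full name only to the final zone's servers.

```python
def minimized_queries(fqdn):
    """List the (zone asked, question name, question type) triples a
    query-minimizing resolver would send for `fqdn`, instead of
    sending the full name to every level of the hierarchy."""
    labels = fqdn.rstrip(".").split(".")
    queries = []
    zone = "."  # start at the root
    # Walk down: ask each zone only for the NS records of the next,
    # longer suffix; ask the full question only at the last step.
    for i in range(len(labels) - 1, -1, -1):
        name = ".".join(labels[i:])
        if i == 0:
            queries.append((zone, name, "A"))   # full question, last zone only
        else:
            queries.append((zone, name, "NS"))  # just the next referral
            zone = name
    return queries
```

For www.example.org this yields three questions: the root is asked only
for the NS records of org, org only for example.org, and only
example.org's servers see the full name.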

B. Offers of encryption, probably in TLS style, between
the client and the local recursive resolver.
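What such an encrypted channel carries is unchanged DNS: the wire
format stays the same, and over TCP (and hence over a TLS-wrapped TCP
stream) each message is prefixed with a two-byte length field per RFC
1035 section 4.2.2. A minimal sketch (my illustration; the helper
names are made up):

```python
import struct

def build_query(name, qtype=1, txid=0x1234):
    # Minimal wire-format DNS query: 12-byte header plus one question
    # (qtype 1 = A record, qclass 1 = IN); txid is the transaction ID.
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # RD flag set
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)

def frame_for_tcp(message):
    # RFC 1035 4.2.2: over TCP each DNS message is preceded by a
    # two-byte big-endian length field; a TLS channel between client
    # and recursive resolver would carry exactly these framed messages.
    return struct.pack("!H", len(message)) + message
```

The point is that only the transport changes; the query bytes an
observer would otherwise see in the clear are simply wrapped in TLS.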

Both solutions will preserve backwards compatibility
(or existing architectures will not need to change
for a long time). This is because DNS is that important
and that large.

The key missing ingredient is encryption between
the local resolver and the authoritative servers.
Why that is unlikely (or technically unwise) is argued
by Paul Vixie below [END;TL;DR].

The end result is that little is really done
to protect the privacy of the end user. Consider the
best case, where the local resolver offers encryption,
keeps no logs of queries, and implements query
minimisation. That is still worse than using tor:
you have a fairly static community whose queries are
observed in clear text on the wire between the local
resolver and the authoritatives, and there are no
routing changes (i.e. the same exit node the whole time).

A recently published academic article looks at the
greater space of the name resolution communities
and what they offer [4]. I highly recommend reading
it, if you are interested.

Post RFC7258 it was claimed by many that the best
solutions to online privacy preservation would come
from the tech community, rather than the legislature.

However, the larger systems are just not nimble enough,
and this can be seen from the above, possibly erroneous,
analysis.

The real solutions are coming from the overlay network
communities like tor and gnunet. They have their
own threat models and are implementing solutions to meet
them. The big boys cannot implement a single solution
that meets these varying threat models.

A key ingredient in these solutions is reserving the pseudo top-level
domains (pTLDs) used by these overlay networks (e.g. .onion),
a process which is under way [3].

One could argue that the overlay communities should
not care at all about the IANA, and should the IANA not
support the above or some similar proposal,
the communities would of course just continue.
However, acknowledgement by the IANA, with the pTLDs
reserved, would be an important
political victory: wider public legitimacy.

P2P:

I have a suggestion to the tor, gnunet, i2p, and
other overlay communities. It seems likely to me
that the IANA will not be too happy about reserving
all of .onion, .exit, .gnu, .zkey, .i2p and .bit.

I suggest that you ask for ONLY ONE pTLD, for
example .p2p, and then stick all your specifics inside
it, e.g.

.onion.p2p
.gnu.p2p

etc. This would require work on your part, but if
that is the price of public legitimacy in the eyes of
the IANA, I humbly suggest that the price is cheap.
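The rewrite this would demand of overlay software is mechanical; a
hypothetical sketch (the hostnames, helper names, and the .p2p
umbrella itself are all illustrative, not anything deployed):

```python
# The six pTLDs from the P2P Names draft, hypothetically nested
# under a single umbrella TLD such as .p2p.
P2P_TLDS = {"onion", "exit", "gnu", "zkey", "i2p", "bit"}

def to_single_ptld(name, umbrella="p2p"):
    """Rewrite e.g. 'examplehost.onion' -> 'examplehost.onion.p2p';
    leave ordinary DNS names untouched."""
    stripped = name.rstrip(".")
    if stripped.rsplit(".", 1)[-1] in P2P_TLDS:
        return stripped + "." + umbrella
    return name

def from_single_ptld(name, umbrella="p2p"):
    """Undo the mapping so legacy overlay code sees the old suffix."""
    labels = name.rstrip(".").split(".")
    if len(labels) >= 2 and labels[-1] == umbrella and labels[-2] in P2P_TLDS:
        return ".".join(labels[:-1])
    return name
```

Simple as the mapping is, it would have to be applied consistently in
every resolver hook, application, and piece of documentation, which is
the "work on your part" referred to above.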


Sincerely, Hugo Connery
--

References:

1. https://tools.ietf.org/rfc/rfc7258.txt
2. https://datatracker.ietf.org/wg/dprive/charter/
3. https://datatracker.ietf.org/doc/draft-grothoff-iesg-special-use-p2p-names/?include_text=1
4. "NSA's MORECOWBELL: Knell for DNS"
https://gnunet.org/sites/default/files/mcb-en.pdf


END;TL;DR

From: DNSOP on behalf of Paul Vixie
Sent: Monday, 26 January 2015 08:14
To: dnsop
Subject: Re: [DNSOP] Followup Discussion on TCP keepalive proposals

TL;DR: i'd like to only behave differently if the other side signals its
readiness for it. in a "big TCP" model where thousands or tens of
thousands of sessions remain open while idle (even if only for a few
seconds), we are asking for application, library, kernel, RAM, CPU, and
firewall conditions that are not pervasive in the installed base --
which includes tens of millions of responders who will never be
upgraded, and whose operators are not reading this mailing list, and
will not be reading any new RFCs on the topic.

if we want better TCP/53 behaviour than that required in RFC 1035 4.2.2,
then we should define signalling that requests it, and we should behave
differently if that request is granted.

that's what "first, do no harm" means in an installed base that's
literally the size and shape of The Internet.

longer version:

[-------------------------------------]
John Heidemann
Sunday, January 25, 2015 9:10 PM

...

We are, I think, in the lucky place of having a new feature (multiple
DNS queries over TCP with pipelining and reordering) with SOME level of
responder support and basically zero initiator use.
Do we really need new signaling?

yes, i think so. you're only talking about old-style initiators here.
there are problems on the responder side that i worry more about,
because of the impact that new-style initiators could indirectly but
pervasively have on old-style initiators, due to the behaviour of
old-style responders.


... The other question is harm on the
responder side. That's why I was trying to get to the bottom of the
assertion that DNS-over-TCP is inherently a DoS.

there may not be a bottom. existing responders that follow RFC 1035 4.2.2
are extremely weak, but are in the critical path for existing initiators
responding to TC=1 (or in other cases where a UDP response is unusable
or untrustworthy, which i'm loath to describe in public).

if a new-style initiator prefers TCP and keeps a connection open longer
than the time it takes to send just the queries it has in hand, and if
the responder is old-style, then it causes significant problems for
old-style initiators. denying service to a by-the-book RFC 1035 4.2.2
TCP responder is child's play. we must not do it on purpose.


I haven't seen
evidence supporting that claim,

i am out of ideas as to what that might require.


... and I think we can all recognize the
installed base of HTTP to show that at least someone can make TCP work
at scale on the server side.

i have not, and i don't think anyone else has either, said that TCP
cannot be made to work at scale. however, TCP/53 as described in RFC
1035 4.2.2 is not part of making DNS-over-TCP work at scale; quite the
opposite.


bind
responders, since 4.8, have accepted pipelining, but with ordered
responses, until a currently unreleased patch was recently added. bind
responders through bind 8 did not read the next (potentially pipelined)
request from the tcp socket until after they had sent the response to
the previous request, so there was no parallelism in any resulting
cache-miss activities.

Most implementations whose TCP we've examined (bind 9.9 and unbound)
have performance problems when running over TCP. But performance
problems can be fixed incrementally and in place, unlike correctness
issues where people fail.

the problems we must avoid involve servers whose source code you can't
get access to.


Yes, there are definitely performance problems that will need to be
fixed. But performance has very different deployment issues
than correctness does.

the problems we must avoid involve servers who will never be upgraded.

...

I haven't seen anyone assert that TCP should become *mandatory* for
future DNS. If it's encouraged, or at least not discouraged, then I
suggest we can abide a multi-year rollout.

the problems we must avoid include servers operated by people who do not
read this mailing list, or new RFCs.

--
Paul Vixie
carlo von lynX
2015-01-26 15:02:14 UTC
Useful mail, thank you.
Post by hellekin
I have a suggestion to the tor, gnunet, i2p, and
other overlay communities. It seems likely to me
that the IANA will not be too happy about reserving
all of .onion, .exit, .gnu, .zkey, .i2p and .bit.
I suggest that you ask for ONLY ONE pTLD. For
example, .p2p, and then stick all your specifics inside
that. e.g
.onion.p2p
.gnu.p2p
Yes yes, bow to politics and bureaucracy... it would be so
amusing if by 2030 hardly anyone is using traditional
DNS and therefore all Internet business happens over
something .p2p, yet for backwards compatibility there's
always this senseless .p2p in the way.

Also, public-key routing is not limited to P2P architectures.
Post by hellekin
From: DNSOP on behalf of Paul Vixie
TL;DR: i'd like to only behave differently if the other side signals its
readiness for it. in a "big TCP" model where thousands or tens of
thousands of sessions remain open while idle (even if only for a few
seconds), we are asking for application, library, kernel, RAM, CPU, and
firewall conditions that are not pervasive in the installed base --
which includes tens of millions of responders who will never be
upgraded, and whose operators are not reading this mailing list, and
will not be reading any new RFCs on the topic.
This assumes that legislation such as that proposed in
http://youbroketheinternet.org/legislation/ will
never be signed into law. Mr Vixie is looking at
the status quo from too narrow a perspective.
Post by hellekin
== An Update from the Name Resolution Trenches ==
In the internet name resolution space,
the only real solutions for privacy are going
to come from the overlay communities, like
tor and gnunet. a.k.a DNS is too big to fail (change
significantly). Plus a suggestion below [P2P].
And the proposed law would make the deployment of such a solution
a government and industry priority, and make it a required
default for new systems.
hellekin
2015-01-27 02:12:43 UTC
Post by carlo von lynX
Useful mail, thank you.
Post by hellekin
I have a suggestion to the tor, gnunet, i2p, and
other overlay communities. It seems likely to me
that the IANA will not be too happy about reserving
all of .onion, .exit, .gnu, .zkey, .i2p and .bit.
I suggest that you ask for ONLY ONE pTLD. For
example, .p2p, and then stick all your specifics inside
that. e.g
.onion.p2p
.gnu.p2p
Yes yes, bow to politics and bureaucracy.. would be so
amusing if by 2030 hardly anyone is using traditional
DNS and therefore all Internet business happens over
something .p2p, yet for backwards compatibility there's
always this senseless .p2p in the way.
Also, public-key routing is not limited to P2P architectures.
*** The authors of the P2P Names Internet Draft[0] thought a lot about
the possibility of applying for a single TLD for GNUnet, I2P, Namecoin,
and Tor. We came to the conclusion that it does not make any sense:

1. it would require a lot of changes to the existing deployments. This
is probably the least of the issues, but it is still considerable work
(and cost), and prone to breaking a lot of working legacy code.

2. each of the six pTLDs has a different way to manage names and to
resolve them. It would be confusing for both implementors and users to
have them all under one tree.

3. a single TLD introduces the issue of who manages the assignment of
names under it. As the .alt I-D[1] demonstrates by proposing a
first-come-first-served-but-without-guarantee-of-uniqueness policy,
this is likely to be a mess, whereas our proposal is simple and clear.

4. existing TLDs such as .test, .example, .invalid, .localhost
(RFC2606), as well as .local (RFC6762), etc., should then receive the
same treatment and all move under a single TLD. That is unlikely to
happen, and it makes no technical sense.

5. the only valid argument in favor of a single TLD is to assert the
superiority of the top-down hierarchical tree and to treat any
alternate approach as an annoying, invading experiment. That is not a
technical argument. And it is not loyal to the permissive-network and
end-to-end fundamental axioms of the Cerf/Kahn Internet Protocol that
made it future-proof and wildly successful.

Additionally, 2. and 3. raise another usability issue. Consider HTTPS,
HTTP Secure, whose final "S" means either "perfectly forward secure" or
"completely compromised". That is confusing. It is already hard enough
for users to tell the difference between two similar but completely
different HTTPS connections--one secure, one not. How would you expect
them to tell the difference between different P2P systems and their
scope, threat models, privacy implications, etc., if they all went
under a single .p2p (or whatever) TLD? A TLD is made to distinguish one
thing from another: if I hit .de, it's a German site; if I hit .int, it
belongs to the United Nations. If I hit a .p2p, it's... peer-to-peer?
Really? Should we move all legacy applications under .c2s for
client-to-server? Of course not. P2P Names share commonalities, but
they are also different from each other. They belong to the same
"family" as far as the IANA is concerned because it makes sense
technically, and for implementors it makes sense to look up one RFC and
find them all there.

In conclusion: refusing to grant six names instead of one, when
RFC1591 declared that "[i]t is extremely unlikely that any other TLDs
will be created", when RFC6762 reserved six names (one TLD and five
in-addr.arpa names), and in the year the Root Zone became crowded with
.boo, .fail, .foo, .porn, .sucks, .wtf and a thousand others, is kind
of insulting.

Regards,

==
hk

[0]
https://datatracker.ietf.org/doc/draft-grothoff-iesg-special-use-p2p-names/
[1] https://datatracker.ietf.org/doc/draft-wkumari-dnsop-alt-tld/