RE: [SQU] Credentials forwarding?

From: Robert Collins <robert.collins@dont-contact.us>
Date: Tue, 9 Jan 2001 14:32:12 +1100

> -----Original Message-----
> From: Henrik Nordstrom [mailto:hno@hem.passagen.se]
> Sent: Tuesday, 9 January 2001 11:59 AM
> To: Robert Collins
> Cc: squid-dev@squid-cache.org
> Subject: Re: [SQU] Credentials forwarding?
>
>
> Robert Collins wrote:
>
> > > what about login=*:password. Looks better I think ;-)
>
> > The problem is, it's vulnerable to replay attacks.
>
> In what way is it more vulnerable than any Basic
> authentication going by
> that path?
 
I never suggested Basic going upstream for username logging. This isn't
for ACL control and user authentication; it's for distributed username
logging. The suggestion was to use
client--NTLM/Digest--downstream_cache--somemethod--upstream_cache, and
have the upstream cache only let the downstream caches have access.
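
The access side of that is trivial on the upstream - a source check,
something like this (a minimal sketch; the addresses are invented):

acl downstreams src 192.168.1.2 192.168.1.3
http_access allow downstreams
http_access deny all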

> > Re: implementing
> > -Sure as a quick hack it'll get the username to the upstream server,
> > which then needs to be told something like
> > acl foo proxy_auth PASSEDTHROUGH
> > so that it doesn't try to authenticate externally every
> usercode, and
> > instead trusts the downstream.
>
> Minor issue. The basic auth cache should build up pretty
> quickly anyway.

Uhmm, no. Say you have 1000 users. On your downstream cache you have
(say) LDAP authentication.
Your user cache will be
user:password
foou:foop
baru:barp
...

On the upstream you want to get
user:password
foou:secret
baru:secret

Which you cannot get via LDAP.
So you need a helper that knows ALL of your potential users and uses the
same secret for every one of them (a sketch of such a helper is below).
Personally I'd like to tell squid that it is acting as a 'no end user
clients' cache, and thus avoid all that latency.
Secondly, as the avowed purpose of this upstream cache is to avoid
username acl processing, building up an auth_user cache is the reverse
of the intent.
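
To illustrate, the upstream helper ends up being something as dumb as
this - a minimal sketch, assuming the usual basic helper protocol (one
"username password" line on stdin, "OK" or "ERR" on stdout), and with
the shared secret obviously just an example:

/*
 * Accept ANY username, as long as the shared secret matches.
 */
#include <stdio.h>
#include <string.h>

#define SHARED_SECRET "secret"

int main(void)
{
    char line[8192];

    while (fgets(line, sizeof(line), stdin)) {
        /* The helper gets "username password\n"; find the password. */
        char *password = strchr(line, ' ');

        if (password) {
            *password++ = '\0';
            /* Strip the trailing newline. */
            password[strcspn(password, "\r\n")] = '\0';
        }

        if (password && strcmp(password, SHARED_SECRET) == 0)
            printf("OK\n");
        else
            printf("ERR\n");
        fflush(stdout);
    }
    return 0;
}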

>
> Only one thing: To be really useful, the forwarding must be able to
> identify the downstream.
>
> Something like
> login=*-downstream_unique_tag:password

Yes. However, this is overloading the design of Basic authentication. I'd
rather address the root issue - HTTP has no thought-out facility for
co-operating devices in the request path to exchange user information in
a trustable fashion.

> should do quite nicely, as this allows a single ACL entry to match all
> users on that downstream, and match it against the known IP(s) of the
> downstream server.
>
> downstream:
> cache_peer ... login=*-downstream1:password
>
> upstream:
> acl downstream1 src 192.168.1.2
> acl downstream1-users proxy_auth_regex -downstream1$
>
> http_access deny downstream1-users !downstream1

 
> And if the helper protocol is extended to allow for the helpers to
> change the effective (logger,forwarded) username then even better.
>
> /Henrik
>

Returning a different username may make the acls on the downstream
cache (where the checks should take place) difficult to read, and very
hard to get dynamically from the user directory (i.e. groups in the
future).
 
What I'm saying is that we can code up some short-term hacks, and/or
look into a thought-out solution.

==== Concept ====

On the internet, devices in co-operating but not integrated
administrative domains may wish to exchange user information for the
purposes of access control or logging (say, for billing).

HTTP defines only two sets of user credentials possible for any request:
an origin server set, and a proxy cache set. Thus in the following
transaction

Origin --- ISP Proxy (NA) --- ISP Proxy (A) --+-- Corporation_1 Proxy --- User Agents
                                              |
                                              +-- Corporation_2 Proxy --- User Agents

The two ISP proxies cannot directly receive user credentials. The one
marked A desires user details; the one marked NA does not.

While Proxy Credentials may be passed upstream by co-operating devices,
the challenge-response nature of all non-trivial authentication schemes
requires the proxy furthest from the end user to provide the challenge.

From an ACL point of view this is bad - the closest proxy must allow ALL
requests that are going to any upstream authenticating proxy, and ONLY
authenticate those going through non-authenticating proxies or direct to
the origin. This will lead to inconsistent Authentication-Info headers
with Digest authentication, and is likely to confuse other
authentication schemes as well. Furthermore, the NTLM scheme, whilst
unofficial, cannot be proxied at all.

Question: will browsers send multiple Proxy-Authorization headers, one
for each proxy realm, when multiple proxy realms have been seen?

From an administrative domain perspective this situation is also bad -
the A proxy is in the ISP's administrative domain, and may not want, or
have the ability, to authenticate a user against the Corporation's user
directory.

Proposal: provide an HTTP extension for co-operatively passing user
details (not credentials) between co-operating caches (and other edge
devices).

Such an extension would ideally have the following characteristics:
1) It is policy free. Definition of policy for when to co-operate and
when not to should be covered separately. Implementors of such an
extension should provide a means for defining their product's policy.
2) Spoofed requests to the upstream co-operating proxies are not
possible, or are at least difficult to achieve.
3) Implementation and management should be easy, to encourage adoption.
4) It should co-exist well with existing authentication mechanisms,
without requiring the use or presence of a given scheme (i.e. be
flexible as to the environment it is operating in).
5) It should NOT supplant or act as a mechanism for trust between
co-operating devices; it is simply a mechanism for passing usernames
around.

Rough sketch of issues:
* The upstream cache will need to differentiate colliding usernames from
corporation_1 & _2. The use of domains or some form of namespace may
well solve this.
* Each request will need to pass on the user details, and whatever
protection is used against spoofed or replayed requests will need to be
deterministic given the state of the cooperating devices.

Rough spec: (verrrrry rough)
Define a new response code: 418 Proxy Cooperation Required
Define three new headers (the third, Proxy-Cooperation-Info, appears in
the second example below). Note I haven't reviewed the HTTP extension
framework draft yet, and the names suck. Sorry, but they do :-]

Proxy-Cooperate: realm="isp.worldwide.domain"
Proxy-Cooperation: user="john@corporate.domain.name"

Operation:
A typical sequence would go like this:
Corporate proxy sends a request to the ISP proxy.
ISP proxy returns 418 and a Proxy-Cooperate header.
Corporate proxy retries with a Proxy-Cooperation header.
ISP proxy satisfies the request.
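
In raw messages, and with every name and value here purely
illustrative, that would look roughly like:

  Corporate proxy -> ISP proxy:
    GET http://www.example.com/ HTTP/1.1
    Host: www.example.com

  ISP proxy -> corporate proxy:
    HTTP/1.1 418 Proxy Cooperation Required
    Proxy-Cooperate: realm="isp.worldwide.domain"

  Corporate proxy -> ISP proxy (retry):
    GET http://www.example.com/ HTTP/1.1
    Host: www.example.com
    Proxy-Cooperation: user="john@corporate.domain.name"

  ISP proxy -> corporate proxy:
    HTTP/1.1 200 OK
    (response follows)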

A slightly more complex example:
Corporate proxy sends a request to the ISP proxy.
ISP proxy responds with 407 and a Proxy-Authenticate header.
Corporate proxy retries with credentials previously agreed between the
corporation and the ISP.
ISP proxy responds with 418 and a Proxy-Cooperation-Info header.
Corporate proxy retries again with a Proxy-Cooperation header.
ISP proxy satisfies the request.
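
Again in raw messages (Digest parameters elided, and again every value
purely illustrative), the interesting part of that exchange is roughly:

  ISP proxy -> corporate proxy:
    HTTP/1.1 407 Proxy Authentication Required
    Proxy-Authenticate: Digest realm="isp.worldwide.domain", nonce="..."

  Corporate proxy -> ISP proxy (retry, pre-arranged proxy credentials):
    GET http://www.example.com/ HTTP/1.1
    Proxy-Authorization: Digest username="corporate1", ...

  ISP proxy -> corporate proxy:
    HTTP/1.1 418 Proxy Cooperation Required
    Proxy-Cooperation-Info: realm="isp.worldwide.domain"

  Corporate proxy -> ISP proxy (retry, with both headers):
    GET http://www.example.com/ HTTP/1.1
    Proxy-Authorization: Digest username="corporate1", ...
    Proxy-Cooperation: user="john@corporate.domain.name"

  ISP proxy -> corporate proxy:
    HTTP/1.1 200 OK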

As shown above, the ISP can use any policy of its own choosing to decide
when to require Proxy-Cooperation.

Operational notes & reasoning:
HTTP authentication schemes provide 'trust' between two cooperating
proxies, so no trust mechanism is needed for the proxy cooperation
mechanism. Protection from replay and spoofing is also provided via the
HTTP authentication mechanism. Thus all we need to accomplish is to
clearly signal when cooperation is required, and to provide the username
in such a way as to distinguish different organisational domains.

This method will accomplish points 1, 3, 4 and 5. Point 2 is covered by
recommending the use of HTTP authentication stronger than 'basic'
authentication when using Proxy-Cooperate.

Comments?