On 02.10.2012 12:27, forum wrote:
> Hey everyone
>
> Unfortunately I am still suffering from this problem. Any help would
> be greatly appreciated.
>
> Thanks in advance
>
> B. Brandt
>
> On 2012-09-26 14:05, forum wrote:
>> Hey everyone
>>
>> We have the following reverse proxy setup:
>>
>> Client <--SSL--> Squid <--SSL--> Server
>>
>> and the important acls look like:
>>
>> https_port 443 accel cert=/usr/local/squid/certs/cert.pem
>> key=/usr/local/squid/certs/key.pem defaultsite=example.org
>> clientca=/usr/local/squid/certs/cacert.pem
>> cafile=/usr/local/squid/certs/cacert.pem
>> capath=/usr/local/squid/certs/ sslcontext=id
1) What Squid version?
On 3.1 and older, using "defaultsite=" without "vhost" causes all URLs
to be re-written to "https://example.org" instead of whatever the
client was actually requesting.
On 3.2 and later, "vhost" is implicit/assumed and the client-provided
Host: header will be used.
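If you are on 3.1 or older, that means adding "vhost" to the port
line. A sketch, reusing the certificate paths from your quoted config
(wrapped here for readability; the directive stays on one line in
squid.conf):

https_port 443 accel vhost defaultsite=example.org
  cert=/usr/local/squid/certs/cert.pem
  key=/usr/local/squid/certs/key.pem
  clientca=/usr/local/squid/certs/cacert.pem
  cafile=/usr/local/squid/certs/cacert.pem
  capath=/usr/local/squid/certs/ sslcontext=id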
>>
>> cache_peer xxx.xxx.xxx.xxx parent 443 0 no-query originserver
>> login=PASS ssl sslflags=DONT_VERIFY_PEER
>> sslcert=/usr/local/squid/certs/exchange.crt
>> sslkey=/usr/local/squid/certs/nopassexchange.key name=exchange_peer
>>
>> So, as you can see, the client uses SSL and a client certificate
>> for authentication when connecting to Squid. Now we wanted to do
>> some URL filtering:
>>
>> acl exchange_dirs urlpath_regex
>> (\/owa|\/Autodiscover|\/Microsoft-Server-ActiveSync)
This regex does not do what you think it does.
If "/owa" (or any of those strings) appears *anywhere* in the URL path
or query parameters, the request will match.
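If the intent is to match only requests whose path *begins* with one
of those directories, anchor the expression. A sketch ("-i" makes it
case-insensitive and the trailing group stops partial-word matches;
adjust to your actual URL layout):

acl exchange_dirs urlpath_regex -i ^/(owa|Autodiscover|Microsoft-Server-ActiveSync)(/|\?|$)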
2) Please list the access.log lines for some of the URLs which you
think should not be accepted.
>> acl exchange_base_url url_regex ^https://example.org
>> http_access allow exchange_dirs exchange_base_url
In the absence of any http_port lines, all your traffic will be
"https://", which makes that regex part redundant.
This is better written as:
acl example dstdomain example.org
http_access allow exchange_dirs example
With also:
cache_peer_access exchange_peer allow example
cache_peer_access exchange_peer deny all
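A consolidated sketch of how those pieces fit together (the allow
rules must appear before your final "http_access deny all"):

acl exchange_dirs urlpath_regex -i ^/(owa|Autodiscover|Microsoft-Server-ActiveSync)(/|\?|$)
acl example dstdomain example.org

http_access allow exchange_dirs example
http_access deny all

cache_peer_access exchange_peer allow example
cache_peer_access exchange_peer deny all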
>> http_access deny all
>>
>> However, as you might already guess, it's not working and I am
>> wondering why. From my understanding, there is an SSL connection
>> from Client to Squid and an SSL connection from Squid to Server.
>> Squid encrypts and decrypts in the middle. Therefore Squid should
>> be able to do the URL filtering.
Yes, Squid can and does.
>>
>> However, the observed behaviour is that URL filtering works as
>> long as the user has NOT authenticated itself with its client
>> certificate. After the user authentication, however, the user can
>> browse every URL within example.org, as if there were a direct SSL
>> connection between Client and Server.
Client machine (what you are calling a "user") certificate
authentication is always being done in the config you posted. Clients
which "have not" or "do not" present valid certificates signed by your
trusted "clientca=" certificate are rejected by the TLS handshake
before any HTTP request details get near Squid.
3) Are you certain those "authenticated" requests are passing through
the proxy and not going direct to Exchange?
It sounds to me a bit like the clients which are NOT presenting
certificates are being accepted and gatewayed by the reverse proxy,
whereas clients which ARE presenting certificates are probably being
rejected and going direct to Exchange.
You may need to use "https_port ... sslflags=NO_DEFAULT_CA" so that
only your custom CA file is used to verify client certificates,
instead of the OpenSSL built-in CA list.
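That is, appended to your existing port line (a sketch, wrapped here
for readability; the directive stays on one line in squid.conf, plus
"vhost" if you are on 3.1, as noted under point 1):

https_port 443 accel defaultsite=example.org
  cert=/usr/local/squid/certs/cert.pem
  key=/usr/local/squid/certs/key.pem
  clientca=/usr/local/squid/certs/cacert.pem
  cafile=/usr/local/squid/certs/cacert.pem
  capath=/usr/local/squid/certs/ sslcontext=id
  sslflags=NO_DEFAULT_CA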
>>
>> Do we need to set the ssl-bump option? And if yes why? Isn't squid
>> already doing encryption and decryption?
No, you don't need it for a reverse proxy. ssl-bump is for
intercepting forward-proxy traffic; in an accelerator setup the
"cert="/"key=" options on https_port already make Squid terminate and
decrypt the client TLS, which is what allows the URL filtering above
to work.
Amos