The current code for extracting the validity dates for certificates assumes that the strings "Not Before" and "Not After" will appear exactly once in the pretty-print of the certificate. In most cases that works. However, there are a few server certificates that include the private key usage period extension, which also includes "Not Before" and "Not After" times. The result is that the current code does not correctly extract the start date and end date from any certificates that have private key usage period extensions.
This PR fixes the problem and also speeds up extraction of the dates by using only Bash-internal string operations instead of external commands.
The pretty-print of a certificate begins as follows:
    Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number: ...
            Signature Algorithm: ...
            Issuer: ...
            Validity
                Not Before: ... GMT
                Not After : ... GMT
            ...
The code in this PR extracts the start date by first removing from the certificate everything that comes before "Not Before: ". It looks for the shortest leading string that includes "Not Before: " in order to ensure it is not getting the date from a private key usage period extension. After that, the longest trailing string that begins with "GMT" is removed so that only the notBefore date remains.
The part that removes the string up to "Not Before: " actually looks for the first instance of "Not Before: " that comes after "Validity". This protects against the unlikely possibility that the string "Not Before: " appears somewhere in the issuer's name.
The extraction of the notAfter date works similarly. It first looks for the first instance of "Not After :" that appears after both "Validity" and "Not Before: " and then takes the date string that appears immediately afterwards, with the assumption that the date string ends in "GMT".
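As an illustration, here is a minimal sketch of that extraction using only Bash parameter expansion (the file and variable names are hypothetical, not the ones used in testssl.sh):

    #!/usr/bin/env bash
    # cert_txt is assumed to hold the pretty-print shown above.
    cert_txt="$(openssl x509 -in cert.pem -text -noout)"

    # Shortest prefix match (#): stop at the first "Not Before: " that follows
    # "Validity", so neither an issuer name containing that string nor a
    # private key usage period extension further down can be matched.
    start_date="${cert_txt#*Validity*Not Before: }"
    # Longest suffix match (%%): remove everything from " GMT" onward, so
    # that only the notBefore date remains.
    start_date="${start_date%% GMT*}"

    # Same idea for notAfter: the first "Not After :" after both anchors.
    end_date="${cert_txt#*Validity*Not Before: *Not After : }"
    end_date="${end_date%% GMT*}"

    echo "notBefore: $start_date"
    echo "notAfter:  $end_date"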
In some cases the OCSP URI contains multiple components in the path (e.g., http://www.example.com/OCSP/myOCSPresponder).
This PR changes check_revocation_ocsp() to remove all components in the path, rather than just the final component, when extracting the host name from the URI for the host header.
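With Bash parameter expansion the host extraction boils down to something like this (variable names hypothetical):

    # uri holds the OCSP URI, e.g. "http://www.example.com/OCSP/myOCSPresponder"
    host_header="${uri#*://}"          # strip the scheme
    host_header="${host_header%%/*}"   # strip the entire path, not just its last component
    # host_header is now "www.example.com"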
My previous commit added a host header but didn't properly
format it (trailing slashes / path). This commit corrects that,
so the 305 HTTP 400 responses in #1056 should now be gone (TBC),
including for Google CA responders.
One issue which still needs to be addressed (same as in CRL
revocation checks): certificates that are not trusted (zhanqi.tv,
taken from my Alexa scans) fail for obvious reasons.
Errors from the OCSP responder are now displayed (and logged to JSON, ...).
The whole reply is kept in $tmpfile for debugging purposes.
JSON output was added for OCSP responder query failures.
Furthermore, wget was replaced by "type -p" and grep by fgrep.
https://github.com/tlswg/tls13-spec/wiki/implementations now lists a server that supports TLS 1.3 draft 28, so this PR adds support for drafts 27 and 28.
Since run_protocols() now checks for 11 different drafts of TLS 1.3 in addition to the final version, performing a separate test for each draft had become far too time consuming. So, this PR rewrites the check for TLS 1.3 versions in run_protocols() so that the number of tests is proportional to the number of drafts that the server supports rather than the number of drafts that testssl.sh can check for.
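The rewritten check, in outline: offer all remaining TLS 1.3 versions in a single ClientHello, record the one the server negotiates, remove it from the offer, and repeat until no connection succeeds. A conceptual sketch, where try_handshake is a hypothetical stand-in for the actual tls_sockets() call:

    # supported_versions codepoints: final (0x0304) plus drafts 28 down to 18.
    offered=("0304" "7f1c" "7f1b" "7f1a" "7f19" "7f18" "7f17" "7f16" "7f15" "7f14" "7f13" "7f12")
    supported=()
    while [[ ${#offered[@]} -gt 0 ]]; do
         # try_handshake: offers all given versions, prints the negotiated
         # one, and fails if the server connects with none of them.
         negotiated="$(try_handshake "${offered[@]}")" || break
         supported+=("$negotiated")
         for i in "${!offered[@]}"; do
              [[ "${offered[i]}" == "$negotiated" ]] && unset 'offered[i]'
         done
         offered=("${offered[@]}")   # reindex after unset
    done
    # Number of handshakes: ${#supported[@]} + 1, not one per known draft.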
testssl.sh was inserting two spaces between the CBC ciphers detected by OpenSSL and those detected by tls_sockets(). This PR fixes the problem.
This issue was previously fixed by 87fe0c15da, but that fix was accidentally removed by the next commit: f3dc53f554.
There is at least one server that will fail under some circumstances if the ClientHello offers a compression method other than null.
In OpenSSL 1.1.0 and 1.1.1, s_client will not offer any other compression methods unless the "-comp" option is provided. In earlier versions of OpenSSL, however, s_client will offer the DEFLATE compression method by default; this can be disabled using the "-no_comp" option.
This PR works around the server flaw by having s_client_options() add a "-no_comp" option to the command line whenever "-no_comp" is supported and the test doesn't require offering compression.
Since run_crime() requires compression to be offered, run_crime() was changed to always add "-comp" to the command line, and then s_client_options() was changed to remove "-comp" from the command line, if that option isn't supported.
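In outline, with hypothetical flag variables (the real s_client_options() logic differs in detail):

    # HAS_COMP / HAS_NO_COMP are assumed to hold "true" or "false", determined
    # once at startup, e.g. by checking the "openssl s_client -help" output.
    if [[ "$options" == *" -comp"* ]]; then
         # run_crime() always appends -comp; strip it if this OpenSSL lacks it
         "$HAS_COMP" || options="${options// -comp/}"
    elif "$HAS_NO_COMP"; then
         options+=" -no_comp"   # don't offer DEFLATE unless the test requires it
    fi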
* Extra function for ldap_get()
* Hint when curl is not installed and LDAP URI is encountered
* Rename jsonID cert_cRLDistributionPoints to cert_crlDistributionPoints
* Fix trailing _ in jsonID
Open/to be clarified:
* Proxy for curl / proxy needs to come from testssl.sh
* Proxy support for HTTP bash socket GET
* cert_CRLrevoked comes before cert_cRLDistributionPoints
* Unit tests
Still open: OCSP
At the moment the code for downloading a CRL seems to only work if the URL is an HTTP or HTTPS URL. It fails if the URL is an LDAP URL. The wget command does not support LDAP, and when curl retrieves data from an LDAP URL it stores the result in LDIF format, which http_get() cannot currently convert into a PEM-encoded CRL.
This PR addresses the issue by skipping the revocation check for any URL that does not begin with "http".
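A sketch of the guard (variable and helper names hypothetical):

    if [[ "$crl_url" =~ ^http ]]; then
         check_revocation_crl "$crl_url" "$jsonID"
    else
         :   # LDAP and other schemes: skip the revocation check for now
    fi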
In general, a CA only needs to keep the status information for a certificate until it expires. So, once a certificate has expired, the information provided about it in a CRL or OCSP response may no longer be reliable. The certificate may no longer be listed as revoked, even if it had been revoked at some point before it expired.
So, this PR changes certificate_info() to only check CRLs for revocation status if the certificate has not expired.
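Sketched with hypothetical names (the actual date parsing has to handle both GNU and BSD date):

    # enddate_epoch: the certificate's notAfter as seconds since the epoch.
    # Only consult the CRL while the certificate is still valid; after
    # expiry the CA may already have dropped it from the CRL.
    if (( $(date +%s) < enddate_epoch )); then
         check_revocation_crl "$crl_url" "$jsonID"
    fi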
In order to use it one has to supply --phone-out (PHONE_OUT
is the respective ENV variable), e.g.
``./testssl.sh --phone-out --json-pretty -S wikipedia.org``
This makes use of curl (if available) or wget (if available) and
falls back to a bash socket GET. The latter uses HTTP/1.0, as
chunked transfers by the server (normally used for bigger files)
can't reasonably be separated from their HTTP header (HTTP/1.0
doesn't support chunked transfers).
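The bash socket fallback looks roughly like this (a sketch; host, path and tmpfile are hypothetical, and the real http_get() also has to split the header from the body):

    exec 33<>"/dev/tcp/${host}/80" || exit 1
    printf 'GET %s HTTP/1.0\r\nHost: %s\r\nAccept: */*\r\n\r\n' "$path" "$host" >&33
    cat <&33 >"$tmpfile"   # header + body; with HTTP/1.0 there is no chunked encoding
    exec 33<&-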
curl and wget use the proxy environment variables automatically. Probably
we want to use those proxies only if told to by a switch to testssl.sh.
"-crl_download" would have been an option; support for it would have
needed to be checked beforehand. Also, information on its proper
usage seems limited, so for now a solution which works is
preferred.
Open/to be clarified:
* Documentation
* Proxy for curl / proxy needs to come from testssl.sh
* Proxy support for HTTP bash socket GET
* JSON ID is cert_CRLrevoked_ (trailing underscore)
* cert_CRLrevoked_ comes before cert_cRLDistributionPoints
(* reconsider naming of cert_cRLDistributionPoints)
* Unit tests
Still open: OCSP
This PR was developed in response to #845. It adds to the list of ciphers used to determine whether the server has a cipher order, to help avoid cases in which testssl.sh cannot determine one.
In order to create this list I scanned thousands of servers to determine which ciphers they support, including (1) about 20 thousand U.S. government web sites, (2) all of the sites listed at badssl.com, (3) all of the test servers listed at https://github.com/tlswg/tls13-spec/wiki/implementations, (4) about 30 additional non-U.S.-government sites, and (5) one server configured as described in #845. I scanned each of these servers using OpenSSL 1.0.2-chacha, 1.0.2o, and 1.1.1.
Then I ran the collected information through a script that created the updated list. For each scanned server, and for each of the 3 versions of OpenSSL, the script checked whether $list_fwd contained at least two ciphers from the list. If it didn't, the script would add one of the ciphers supported by the server (and by OpenSSL) to the list. In choosing among the ciphers supported by the server that were not already in $list_fwd, it would choose the cipher that was supported by the most other servers.
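In outline, the selection pass could look like this (a reconstruction under an assumed file layout: one ciphers_<host> file per scanned server/OpenSSL combination, one cipher name per line, and the candidate list in list_fwd):

    for f in ciphers_*; do
         # does this server already share at least two ciphers with the list?
         [[ "$(grep -cxFf list_fwd "$f")" -ge 2 ]] && continue
         # otherwise: among its ciphers not yet in the list, pick the one
         # supported by the most other scanned servers
         best="" best_count=0
         while read -r cipher; do
              count=$(grep -lxF -- "$cipher" ciphers_* | wc -l)
              (( count > best_count )) && { best="$cipher"; best_count=$count; }
         done < <(grep -vxFf list_fwd "$f")
         [[ -n "$best" ]] && printf '%s\n' "$best" >> list_fwd
    done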
The list contains a few oddities as a result of the servers that I scanned. The script added two TLSv1.3 ciphers, since I scanned at least one server that only supports TLSv1.3. The list also includes ADH-AES256-GCM-SHA384 and AECDH-AES128-SHA, which may only be supported by null.badssl.com.
I made one manual change to the list: adding TLS_CHACHA20_POLY1305_SHA256. Since the number of TLSv1.3 servers scanned was so small, I didn't think it was safe to assume that all servers that support TLSv1.3 would support both TLS_AES_256_GCM_SHA384 and TLS_AES_128_GCM_SHA256.
Since most of the servers that I scanned were U.S. government servers, it may not be a representative sample. However, since the new list only adds to the current list, it can only be an improvement. Also, the updated list still only includes 37 ciphers, so many more could be added without creating any problems.
As it would be a possible privacy violation, a new flag PHONE_OUTSIDE
is introduced (later to be accompanied by a switch). It determines whether
the client is allowed to retrieve the specified CRL (HTTP only supported).
Tested ok against wikipedia.de and revoked.badssl.com.
To do:
* look into -crl_download
* fileout
* Unit tests
OCSP verification
There is currently a problem if mass testing is being performed, JSON and/or CSV output is to be produced, the parent process calls fileout(), and each child process has its own output file for the JSON and/or CSV output. This can be seen, for example, with the following:
testssl.sh --openssl=openssl_1.1.1 --file test_servers.txt --csvfile output_dir --jsonfile output_dir
A call will be made in the parent process to report that openssl_1.1.1 has "No engine or GOST support via engine". fileout() will then try to write to output_dir, which results in an error.
This PR fixes the problem by checking that the file to be written to is not a directory (as is already done in html_out() for HTML output).
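Sketched (a hypothetical simplification; html_out() already contains the analogous check):

    fileout_json_print() {
         # In mass testing each child writes its own file; if $JSONFILE is the
         # directory given on the command line, the parent must not write to it.
         [[ -d "$JSONFILE" ]] && return 1
         printf '%s' "$1" >> "$JSONFILE"
    }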
In certain situations while testing for CCS injection it could have happened
that an error code was sent which was not interpreted properly by testssl.sh
(https://tools.ietf.org/html/rfc5246#section-7.2).
This has now been fixed and thus addresses #906. It has also been made sure
that other error codes are reported appropriately.
The case where this test failed before was a non-patched Ubuntu 12.04
with openssl/postfix on port 25.
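For illustration, mapping the alert description byte of the TLS alert protocol (RFC 5246, section 7.2) to a name could look like this (variable names hypothetical; the codes are from the RFC):

    # alert_descrip: hex string of the second alert byte, e.g. "28" = 40.
    case $((16#$alert_descrip)) in
         10) txt="unexpected_message" ;;
         20) txt="bad_record_mac" ;;
         40) txt="handshake_failure" ;;
         70) txt="protocol_version" ;;
          *) txt="alert #$((16#$alert_descrip))" ;;
    esac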
For the upcoming release this commit initiates the beta phase: important features
will still be allowed. Otherwise the agenda is to fix bugs.
I ran shellcheck (see #434), fixed some complaints and adjusted some coding
style mismatches.
There were some cases where security headers were served twice by the
server. The result (screen+html) wasn't properly formatted in those cases.
match_httpheader_key() was improved so that it keeps track of when
a CR or an indentation needs to be emitted.
Some egrep statements were replaced by grep -E, as the latter has already
been used and is what testssl.sh should settle on (precursor
to #1022).
run_more_flags() was renamed to run_security_headers() and variable
names were improved.
HAS_SPDY is now HAS_NPN (similar to the renaming of the function a while
back).
mktemp should only be used when unavoidable (performance, code). For
temporary local variables, names can often be borrowed from globals
which were already generated by mktemp (SOCK_REPLY_FILE).
This is a fix for #722. It updates the client simulation data from
the SSLlabs API. As usual, data was pulled, resorted, and the clients
to display were hand-selected.
Wishlist: Missing are Oreo, OpenSSL 1.1.1, Safari on OS X 11, Firefox
52.x (ESR).
With the recent PR #1033 from @dcooper it can also show TLS 1.3
handshakes.
https://api.dev.ssllabs.com/api/v3/getClients incorrectly indicates a highestProtocol of 771 (TLS 1.2) for clients that support TLS 1.3, which leads run_client_simulation() to incorrectly report "no connection" if the client would have actually connected using TLS 1.3.
This has been addressed by manually editing etc/client-simulation.txt to set the highest_protocol to 0x0304 for the clients that support TLS 1.3.
This PR modifies update_client_sim_data.pl to automatically apply the fix for clients that support TLS 1.3 in order to avoid a possible regression when etc/client-simulation.txt is updated.