When `certificate_info()` is given a certificate with a DH public key, it displays something like:
```
Server key size fixme: dhKeyAgreement 3072 bits (FIXME: can't tell whether this is good or not)
```
This PR fixes that so that the output is:
```
Server key size DH 3072 bits
```
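For context, a minimal sketch of the kind of mapping involved (variable names such as `cert_key_algo`/`cert_keysize` and the `out` helper are assumptions, not the PR's actual diff):
```
# Recognize OpenSSL's "dhKeyAgreement" algorithm name and print a plain
# "DH" label instead of falling through to the fixme branch.
case "$cert_key_algo" in
     *rsa*|*RSA*)           short_key_algo="RSA" ;;
     *dsa*|*DSA*)           short_key_algo="DSA" ;;
     *ecdsa*|*ecPublicKey)  short_key_algo="EC" ;;
     *dh*|*DH*)             short_key_algo="DH" ;;    # "dhKeyAgreement" lands here
     *)                     short_key_algo="fixme: $cert_key_algo" ;;
esac
out "Server key size $short_key_algo $cert_keysize bits"
```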
This PR is in response to issue #454. To try to reproduce the reported problem, I created a certificate whose extendedKeyUsage extension was present and contained only the anyExtendedKeyUsage OID.
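For reference, a minimal sketch of one way such a certificate can be generated (self-signed here for brevity; file names and subject are arbitrary):
```
# Key + CSR
openssl req -newkey rsa:2048 -nodes -keyout key.pem -subj "/CN=test.example" -out csr.pem
# extendedKeyUsage containing only anyExtendedKeyUsage (OID 2.5.29.37.0)
printf '[ eku ]\nextendedKeyUsage = anyExtendedKeyUsage\n' > eku.cnf
# Self-sign with that single extension
openssl x509 -req -in csr.pem -signkey key.pem -days 365 \
        -extfile eku.cnf -extensions eku -out cert.pem
```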
In running the test, I discovered two problems. First, when `determine_trust()` calls `verify_retcode_helper()` to display the reason that path validation failed, it assumes that at least two certificate bundles were provided. (I was running the test with just one certificate bundle, containing my local root.) So I changed `determine_trust()` to use `${verify_retcode[1]}` rather than `${verify_retcode[2]}` in the case that all bundles fail (the original choice of index 2 over 1 appears to have been arbitrary).
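Roughly, the change amounts to this (the surrounding condition is simplified and `all_ok` is an assumed name):
```
if [[ $all_ok -eq 0 ]]; then                          # every bundle failed
     # index 1 always exists, even when only one bundle was supplied; index 2 does not
     verify_retcode_helper "${verify_retcode[1]}"     # was: "${verify_retcode[2]}"
fi
```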
Once that was fixed, testssl.sh output "NOT ok (unknown, pls report) 26". So the second thing this PR fixes is to output "NOT ok (unsupported certificate purpose)" when OpenSSL reports an "unsupported certificate purpose" error.
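A sketch of the added mapping (the internals of `verify_retcode_helper()` and the `out` helper are assumptions; 26 is the code OpenSSL uses for "unsupported certificate purpose"):
```
verify_retcode_helper() {
     local retcode=$1
     case $retcode in
          26) out "NOT ok (unsupported certificate purpose)" ;;
          *)  out "NOT ok (unknown, pls report) $retcode" ;;
     esac
}
```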
With OpenSSL 1.1.0, `s_client -no_ssl2` fails with an "unknown option" error. At the moment the `-no_ssl2` option is only used in two functions, `run_client_simulation()` and `run_crime()`. In `run_crime()`, the `-no_ssl2` option is only included if the OpenSSL version is 0.9.8.
This PR checks whether the OpenSSL version in use supports the `-no_ssl2` option and, if it doesn't, omits the option from the calls to `s_client` in `run_client_simulation()`.
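A rough sketch of the capability probe (the flag variable name is an assumption):
```
# Probe once whether the local s_client knows -no_ssl2; OpenSSL 1.1.0 rejects it.
if $OPENSSL s_client -no_ssl2 </dev/null 2>&1 | grep -aq "unknown option"; then
     OPT_NO_SSL2=""
else
     OPT_NO_SSL2="-no_ssl2"
fi
# run_client_simulation() then passes $OPT_NO_SSL2 instead of a literal -no_ssl2
```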
If the version of OpenSSL being used doesn't support `s_client -ssl3` (e.g., OpenSSL 1.1.0), `run_beast()` doesn't display a warning that testing for CBC in SSLv3 isn't locally supported.
This PR adds a "Local problem" warning if the OpenSSL being used doesn't support `s_client -ssl3`.
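Roughly, the added guard looks like this (the warning helper and wording are assumptions):
```
# Skip the SSLv3 CBC part of run_beast() when the local openssl cannot even
# attempt SSLv3 (e.g. OpenSSL 1.1.0).
if $OPENSSL s_client -ssl3 </dev/null 2>&1 | grep -aq "unknown option"; then
     pr_magentaln "Local problem: $OPENSSL doesn't support \"s_client -ssl3\""
     return 1
fi
```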
The test for whether a server only supports SSLv2 was broken: `$OPTIMAL_PROTO` ends up being `-ssl2` both when SSLv2 is the only protocol that succeeds and when no protocol succeeds at all.
This PR sets `$OPTIMAL_PROTO` (or `$STARTTLS_OPTIMAL_PROTO`) to `""` if no protocol succeeds.
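A sketch of the intended behaviour (the protocol loop and success detection are heavily simplified):
```
all_failed=true
for proto in "" -tls1_2 -tls1 -ssl3 -tls1_1 -ssl2; do
     if $OPENSSL s_client $proto -connect "$NODEIP:$PORT" </dev/null >/dev/null 2>&1; then
          OPTIMAL_PROTO="$proto"
          all_failed=false
          break
     fi
done
# Previously OPTIMAL_PROTO simply kept the last flag tried ("-ssl2") when every
# attempt failed; clearing it means "-ssl2" is only seen for genuine SSLv2-only servers.
"$all_failed" && OPTIMAL_PROTO=""
```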
If the version of OpenSSL being used doesn't support `s_client -ssl3` (e.g., OpenSSL 1.1.0), `run_ssl_poodle()` displays `not vulnerable (OK)` even though it can't test whether the server is vulnerable.
This PR fixes it so that a "Local problem" warning is displayed if `s_client -ssl3` isn't supported.
The PR also removes `$SNI` from the call to `$OPENSSL s_client`, since OpenSSL ignores the `-servername` directive when `-ssl3` is forced anyway.
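For illustration, the probe now looks roughly like this (surrounding variables such as `$cbc_ciphers`, `$TMPFILE` and `$ERRFILE` are taken from context and may differ):
```
# No $SNI here: once -ssl3 is forced, OpenSSL ignores -servername anyway.
$OPENSSL s_client -ssl3 -cipher "$cbc_ciphers" -connect "$NODEIP:$PORT" </dev/null >"$TMPFILE" 2>>"$ERRFILE"
```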
If testssl.sh is called with `--devel 22` and `sslv2_sockets()` returns a non-zero result, then `tls_sockets()` gets called, and the result of the `tls_sockets()` call is output rather than the result of the `sslv2_sockets()` call.
This PR addresses the "FIXME" in `run_protocols()`:
```
sslv2_sockets #FIXME: messages/output need to be moved to this (higher) level
```
It also changes `run_drown()` to call `sslv2_sockets()` in order to avoid duplicate code.
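A sketch of the resulting division of labour (the concrete return codes and messages below are placeholders, not the script's actual ones):
```
# sslv2_sockets() only performs the probe and stores the raw reply;
# run_protocols() / run_drown() interpret the result and print it.
sslv2_sockets
case $? in                      # numeric codes are placeholders
     0) out "not offered (OK)" ;;
     *) out "offered (NOT ok)" ;;
esac
```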
This PR is in response to issue #352, where it was noted that Bash does not support binary data in strings.
I replaced all calls to `sockread()` with calls to `sockread_serverhello()`, and then, since it is now used everywhere and no longer just to read ServerHello messages, I renamed `sockread_serverhello()` to `sockread()`.
I tested the revised code against several servers, including one that is vulnerable to CCS and Heartbleed, and got the same results as with the current code (although the hexdumps displayed in debug mode differ).
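For reference, a rough sketch of the file-based read that replaces the string-based approach (assuming the script's socket is open on fd 5 and a timeout helper like `wait_kill` is available; details such as error handling are omitted):
```
# Binary data goes straight into a temp file; bash strings would drop NUL bytes.
sockread() {
     SOCK_REPLY_FILE=$(mktemp /tmp/ddreply.XXXXXX) || return 7
     dd bs="$1" count=1 of="$SOCK_REPLY_FILE" <&5 2>/dev/null &
     wait_kill $! "${2:-$MAX_WAITSOCK}"
     return $?
}
```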
One concern I have is the code in `run_ccs_injection()`. The current code is:
```
byte6=$(echo "$SOCKREPLY" | "${HEXDUMPPLAIN[@]}" | sed 's/^..........//')
lines=$(echo "$SOCKREPLY" | "${HEXDUMP[@]}" | count_lines )
debugme echo "lines: $lines, byte6: $byte6"
if [[ "$byte6" == "0a" ]] || [[ "$lines" -gt 1 ]]; then
pr_done_best "not vulnerable (OK)"
...
```
I revised this to:
```
if [[ -s "$SOCK_REPLY_FILE" ]]; then
byte6=$(hexdump -ve '1/1 "%.2x"' "$SOCK_REPLY_FILE" | sed 's/^..........//')
lines=$(hexdump -ve '16/1 "%02x " " \n"' "$SOCK_REPLY_FILE" | count_lines )
debugme echo "lines: $lines, byte6: $byte6"
fi
rm "$SOCK_REPLY_FILE"
if [[ "$byte6" == "0a" ]] || [[ "$lines" -gt 1 ]]; then
...
```
In the revised code `byte6` is initialized to `0a` so that the response is `not vulnerable (OK)` if `$SOCK_REPLY_FILE` is empty. This has worked so far, since `$SOCK_REPLY_FILE` was empty for every non-vulnerable server I tested. But since I haven't seen other examples, I don't understand why the vulnerability check was written the way it was, and I'm a bit concerned that the test in the revised code may produce incorrect results now that `hexdump -ve '1/1 "%.2x"' "$SOCK_REPLY_FILE"` is an accurate hexdump of the reply.
In the check for old versions of OpenSSL, the result of the call to `ignore_no_or_lame()` is ignored, so the program continues even if the user enters "no".
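The implied fix is presumably something along these lines (the version condition and message are assumptions, and a non-zero return from `ignore_no_or_lame()` is assumed to mean the user answered "no"):
```
if [[ "$OSSL_VER" =~ ^0\.9\.[5-7] ]]; then            # "old OpenSSL" condition assumed
     ignore_no_or_lame "Proceed with this old OpenSSL version (not recommended)?"
     [[ $? -ne 0 ]] && exit 1                         # honor a "no" instead of silently continuing
fi
```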
This PR makes three changes to `determine_optimal_proto()` (the second and third are sketched after the list):
* It no longer tries an empty string for `$OPTIMAL_PROTO` twice.
* It does not include `-servername` for `-ssl2` or `-ssl3`, since some versions of OpenSSL that support SSLv2 will fail if `s_client` is provided both the `-ssl2` and `-servername` options.
* It displays a warning if `$OPTIMAL_PROTO` is `-ssl2`, since some tests in testssl.sh will not work correctly for SSLv2-only servers.
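A sketch of those two changes (loop details, variable names, and the warning helper are assumptions):
```
# inside the protocol-probing loop -- second change: no -servername with -ssl2/-ssl3
case "$proto" in
     -ssl2|-ssl3) sni="" ;;
     *)           sni="$SNI" ;;
esac
$OPENSSL s_client $proto $sni -connect "$NODEIP:$PORT" </dev/null >/dev/null 2>&1

# after the loop -- third change: warn when SSLv2 is all that worked
[[ "$OPTIMAL_PROTO" == "-ssl2" ]] && pr_magentaln "SSLv2-only server: some checks below will not work correctly"
```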