SSH Rekey Limits with OpenSSH

Greg McLearn, Common Criteria

Background

In the current version of the NDcPP there is a cryptographic Security Functional Requirement (SFR) called FCS_SSH*_EXT.1.8.  On the face of it, FCS_SSH*_EXT.1.8 is a fairly straightforward SFR with a relatively straightforward means to enforce it:

FCS_SSHS_EXT.1.8: The TSF shall ensure that within SSH connections the same session keys are used for a threshold of no longer than one hour, and no more than one gigabyte of transmitted data. After either of the thresholds are reached a rekey needs to be performed.

However, it is vitally important to read the application note (Application Note 102 in NDcPP v2.0+20180314) that follows this SFR element, because one small detail appears to be catching vendors by surprise:

For the maximum transmitted data threshold, the total incoming and outgoing data needs to be counted.

If your solution involves an OpenSSH server or client, you might be surprised to find that OpenSSH’s “RekeyLimit” option does not actually satisfy this requirement as the Application Note interprets it. RekeyLimit’s volume limiter triggers a rekey only when the incoming or the outgoing leg individually meets or exceeds the defined limit; it never checks the aggregate. The root of this particular problem is that the Application Note does not appear to account for the fact that the send and receive legs are independently keyed, as per section 6.3 of RFC 4253. Because OpenSSH keys each leg independently, rekeying when either leg reaches the defined threshold is, from a standards perspective, the correct and most efficient way to handle this.

(TL;DR, I don’t want to read code: take me to the answers!)

OpenSSH Code Dive

If you dive into the OpenSSH codebase, you can easily find where the rekey limit is checked.  Using OpenSSH 7.7p1 as our example codebase (the most recent release at the time of this post), you can see the function ssh_packet_need_rekeying in packet.c at line 930. Near the bottom of that function, we see something like this:

	/* Rekey after (cipher-specific) maxiumum blocks */
	out_blocks = ROUNDUP(outbound_packet_len,
	    state->newkeys[MODE_OUT]->enc.block_size);
	return (state->max_blocks_out &&
	    (state->p_send.blocks + out_blocks > state->max_blocks_out)) ||
	    (state->max_blocks_in &&
	    (state->p_read.blocks > state->max_blocks_in));


Specifically, this code says that if the number of data blocks on either the send or the receive side exceeds the configured limit (further capped by cipher-specific limits), a rekey is needed. (Time-based rekeying is checked earlier in that same function.) Note that the boolean check does not aggregate the send and receive block counters.

But what is a block? The block size is defined by the underlying cipher; for the AES ciphers claimable under NDcPP v2.0, it is 16 bytes. When the rekey limit is set via options (expressed in human-readable terms such as ’10M’ for 10 MiB or ’500G’ for 500 GiB), OpenSSH invokes a macro called packet_set_rekey_limits in sshconnect2.c.  This macro expands (via opacket.h) to ssh_packet_set_rekey_limits in packet.c at line 2107:

void
ssh_packet_set_rekey_limits(struct ssh *ssh, u_int64_t bytes, u_int32_t seconds)
{
	debug3("rekey after %llu bytes, %u seconds", (unsigned long long)bytes,
	    (unsigned int)seconds);
	ssh->state->rekey_limit = bytes;
	ssh->state->rekey_interval = seconds;
}


This function sets the structure element rekey_limit for bytes (and the rekey_interval element for time-based rekey limits).  When keys are exchanged (once at the very start of a session, and then each time a rekey is performed), the function ssh_set_newkeys at line 834 of packet.c is invoked. Near the bottom of that function we see:

	/*
	 * The 2^(blocksize*2) limit is too expensive for 3DES,
	 * so enforce a 1GB limit for small blocksizes.
	 * See RFC4344 section 3.2.
	 */
	if (enc->block_size >= 16)
		*max_blocks = (u_int64_t)1 << (enc->block_size*2);
	else
		*max_blocks = ((u_int64_t)1 << 30) / enc->block_size;
	if (state->rekey_limit)
		*max_blocks = MINIMUM(*max_blocks,
		    state->rekey_limit / enc->block_size);
	debug("rekey after %llu blocks", (unsigned long long)*max_blocks);
	return 0;


With AES’s 16-byte block size, the first volume-based conditional (line 918) sets a cipher-imposed maximum of 2^32 blocks, which is then throttled back by the rekey_limit conditional (line 922).  Since the requested limit must be no more than 1 GiB (that is, 67,108,864 16-byte blocks) as per the SFR, the max_blocks pointer will always end up set to the user’s requested limit for AES-based ciphers.

Note that *max_blocks is a dereferenced pointer.  In ssh_set_newkeys (lines 853 and 858), max_blocks points at either max_blocks_out or max_blocks_in in the SSH state structure, depending on which leg of the channel is being rekeyed.

How to Meet the SFR Requirement

Armed with the knowledge above, we can see that there are two obvious ways to meet the SFR, short of correcting all affected Protection Profiles:

  1. Modify the OpenSSH packet.c function ssh_packet_need_rekeying to account for the aggregate; or
  2. Alter the RekeyLimit values presented for the SSH server and/or client in the TOE.

Option 1 ensures the intent of the SFR is met regardless of how the RekeyLimit option is configured, at the cost of changing the option’s semantics and of maintaining a custom version of the OpenSSH source.  Some vendors already maintain such a fork and some do not. This option allows the most efficient rekeying limits to be employed.
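A sketch of what Option 1 amounts to, shown as a standalone illustration rather than a drop-in patch for packet.c: the field names mirror the excerpt from ssh_packet_need_rekeying above, but the struct and function here are hypothetical, and a real patch would also need to reconcile the separate max_blocks_out/max_blocks_in fields into a single limit.

```c
#include <stdint.h>

/* Hypothetical flattened stand-in for the relevant parts of OpenSSH's
 * session state (the real code keeps these in nested structures). */
struct rekey_state {
	uint64_t p_send_blocks;   /* blocks encrypted on the outgoing leg */
	uint64_t p_read_blocks;   /* blocks decrypted on the incoming leg */
	uint64_t max_blocks;      /* RekeyLimit expressed in cipher blocks */
};

/* Aggregate variant of the volume check: instead of testing each leg
 * against its own limit, compare the SUM of both legs (plus the packet
 * about to be sent) against a single shared limit. */
static int need_rekey_aggregate(const struct rekey_state *state,
    uint64_t out_blocks)
{
	return state->max_blocks &&
	    (state->p_send_blocks + out_blocks + state->p_read_blocks >
	     state->max_blocks);
}
```

With this shape, a channel that has sent 100 blocks and read 200 blocks against a 400-block limit will not rekey for a 50-block packet, but will for a 150-block packet, since only the latter pushes the aggregate past the limit.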

Option 2 would require changing RekeyLimit to no more than 512M, which can yield more key exchanges over the life of the channel and is therefore slightly less efficient.  Each rekey involves public-key operations, which can be relatively expensive if there are many connections to maintain, but your mileage may vary.  The advantages are that there is no need to maintain a custom copy of OpenSSH and the semantics of RekeyLimit remain as expected.
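In configuration terms, Option 2 is a one-line setting in sshd_config (or ssh_config for the client side). A minimal sketch, pairing the halved volume limit with the SFR’s one-hour time threshold:

```
RekeyLimit 512M 1h
```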

The key to understanding why dropping RekeyLimit to 512M (or even lower, to allow some margin) works lies in thinking through the worst-case scenarios for the communications channel.  Assume that, for whatever reason, the TOE transmits data outbound only and never receives: once the send leg reaches 512M, the channel is rekeyed.  Likewise, if the TOE only receives data and never sends, the receive leg triggers a rekey at 512M.  If the TOE is sending and receiving simultaneously, the first leg to reach 512M triggers the rekey.  And if send and receive are perfectly symmetric, the sum of the two (the magical ‘aggregate’ that the SFR requires to be counted) can never exceed 1024M before a rekey is issued.


Lightship is committed to making certifications faster and easier for vendors.  Talk to us about how we can help you achieve your certifications at the speed of development.