Note on AMPv1 Value Sizes
NOTE: AMPv2 does away with value size limits by automatically chunking large values. The 16-bit restriction only applies to AMPv1.
An AMP connection is designed for asynchronous, low-latency use. Many AMP calls may be in flight at once: you may continue to send requests and receive responses while waiting for the responses to earlier, long-running requests.
Thus, sending a huge amount of data in a single AMP packet (request or response) will clog up the connection and prevent any other messages from being processed until the large packet has been cleared.
To discourage this, and to keep the wire protocol as simple as possible, the length of each value in an AMP packet is encoded as an unsigned 16-bit integer. Thus the value for a given key may not exceed 65,535 bytes.
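For illustration, here is a minimal sketch (in Python, with hypothetical function names) of how a single value would be length-prefixed under this scheme, and why 65,535 bytes is the hard ceiling:

```python
import struct

MAX_VALUE_LENGTH = 0xFFFF  # 65,535: the largest length an unsigned 16-bit prefix can express


def encode_value(value: bytes) -> bytes:
    """Length-prefix one AMPv1 value with an unsigned 16-bit, big-endian integer.

    Illustrative only: a real AMP implementation encodes whole key/value boxes,
    not isolated values.
    """
    if len(value) > MAX_VALUE_LENGTH:
        raise ValueError("AMPv1 values may not exceed 65,535 bytes")
    return struct.pack(">H", len(value)) + value
```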
Even so, you may wish to send quite a bit less than this in each AMP call, to preserve the low-latency interactivity of your connection.
The scalable approach is to “stream” your data across multiple AMP calls in 65,535-byte (or smaller!) chunks. Each call should include an “id” key/value identifying the stream, so that the receiving end knows what to do with the data.
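A minimal sketch of that streaming pattern, assuming a hypothetical `call_remote` function standing in for whatever your AMP library uses to issue a call, and a made-up end-of-stream convention:

```python
import uuid

CHUNK_SIZE = 32 * 1024  # comfortably under the 65,535-byte AMPv1 limit


def stream_data(call_remote, data: bytes) -> None:
    """Send `data` as a series of small AMP calls, each carrying the same stream id."""
    stream_id = uuid.uuid4().hex  # the "id" the receiver uses to group the chunks
    for offset in range(0, len(data), CHUNK_SIZE):
        call_remote(id=stream_id, data=data[offset:offset + CHUNK_SIZE])
    call_remote(id=stream_id, data=b"")  # empty chunk as a hypothetical end-of-stream marker
```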
Another option is to transfer large content out of band, though this too risks starving your low-latency AMP connection if one of the links maxes out its bandwidth (as TCP tries to do). HTTP is a suitable bulk-transfer protocol, and it can be made quite secure when run over TLS, using cryptographically strong one-time passwords to identify and authenticate transfer requests. You use your AMP connection to set up the bulk-transfer OTP (a one-time access token) and authorize the request before proceeding; the HTTP server then authenticates the request by consuming the OTP.
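A rough sketch of that OTP handshake, with a hypothetical in-memory registry shared between the AMP side and the HTTP server:

```python
import secrets

# Hypothetical registry of pending transfers, keyed by one-time token.
pending_transfers: dict[str, str] = {}


def authorize_transfer(path: str) -> str:
    """AMP side: mint a cryptographically strong token authorizing one fetch of `path`."""
    token = secrets.token_urlsafe(32)
    pending_transfers[token] = path
    return token  # returned to the peer over the AMP connection


def consume_token(token: str) -> str:
    """HTTP side: authenticate a transfer request by consuming (and discarding) its token."""
    return pending_transfers.pop(token)  # raises KeyError if unknown or already used
```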
If you really don’t care about the interactive performance of your AMP connection and just want to send a big chunk of data in a single AMP call, you can use an approach like BigString. For certain constrained use cases this may make sense, but it does not scale well in terms of wire-protocol performance: decoding a large payload encoded by BigString consumes more RAM, since the entire contents are buffered in memory before the message is processed.
These are merely general guidelines; the best approach will necessarily be specific to your application environment.