Oneway and datagram invocations are normally sent as individual messages, that is, the Ice run time sends the oneway or datagram invocation to the server immediately, as soon as the client makes the call. If a client sends a number of oneway or datagram invocations in succession, the client-side run time traps into the OS kernel for each message, which is expensive. In addition, each message is sent with its own message header, that is, for N messages, the bandwidth for N message headers is consumed. In situations where a client sends a number of oneway or datagram invocations, the additional overhead can be considerable.
To avoid the overhead of sending many small messages, you can send oneway and datagram invocations in a batch: instead of being sent as a separate message, a batch invocation is placed into a client-side buffer by the Ice run time. Successive batch invocations are added to the buffer and accumulated on the client side until they are flushed, either explicitly by the client or automatically by the Ice run time.
Proxy Methods for Batched Invocations
Several proxy methods support the use of batched invocations. In Slice, these methods would look as follows:
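The original listing is not reproduced here; from memory, the relevant proxy methods are along these lines in Slice-style pseudocode (consult the Ice manual for the authoritative signatures):

```
Object* ice_batchOneway();
Object* ice_batchDatagram();
void ice_flushBatchRequests();
```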
The ice_batchOneway and ice_batchDatagram methods create a new proxy configured for batch invocations. Once you obtain a batch proxy, messages sent via that proxy are buffered by the proxy instead of being sent immediately. Once the client has invoked one or more operations on a batch proxy, it can call ice_flushBatchRequests to explicitly flush the batched invocations. This causes the batched messages to be sent "in bulk", preceded by a single message header. On the server side, batched messages are dispatched by a single thread, in the order in which they were written into the batch. This means that messages from a single batch cannot appear to be reordered in the server. Moreover, either all messages in a batch are delivered or none of them. (This is true even for batched datagrams.)
Asynchronous versions of ice_flushBatchRequests are also available; see the relevant language mapping for more information.
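As a hedged sketch in the Python mapping (the object identity and endpoint in the stringified proxy are placeholders, and a server must be reachable for the flush to succeed), batching and flushing look like:

```python
import sys
import Ice

with Ice.initialize(sys.argv) as communicator:
    # Placeholder stringified proxy; adjust identity and endpoint as needed.
    base = communicator.stringToProxy("printer:tcp -h localhost -p 10000")

    batch = base.ice_batchOneway()   # batch variant of the proxy
    batch.ice_ping()                 # queued in the proxy, not sent
    batch.ice_ping()                 # queued
    batch.ice_flushBatchRequests()   # both requests sent in one message
```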
Batched invocations are queued by the proxy on which the request was invoked (this is true for all proxies except fixed proxies). It's important to be aware of this behavior for several reasons:
- Batched invocations queued on a proxy will be lost if that proxy is deallocated prior to being flushed
- Proxy instances maintain separate queues even if they refer to the same target object
Using proxy factory methods may (or may not) create new proxy instances, which affects how batched invocations are queued. Consider this example:
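A sketch in the Python mapping (the stringified proxy is a placeholder) of two batch proxies derived from the same base proxy via factory methods:

```python
import sys
import Ice

with Ice.initialize(sys.argv) as communicator:
    # Placeholder stringified proxy; adjust identity and endpoint as needed.
    base = communicator.stringToProxy("hello:tcp -h localhost -p 10000")

    # Each chain of factory calls may or may not return a new proxy
    # instance, depending on whether the settings already match.
    secureBatch1 = base.ice_secure(True).ice_batchOneway()
    secureBatch2 = base.ice_secure(True).ice_batchOneway()
```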
At run time, the secureBatch variables might refer to the same proxy instance or to two separate proxy instances, depending on the configuration of the original stringified proxy. As a result, flushing requests using one variable may or may not flush the requests of the other.
ice_flushBatchRequests on a proxy behaves like a oneway invocation in that automatic retries take place and an exception is raised if an error occurs while establishing a connection or sending the batch message.
Automatically Flushing Batched Invocations
The default behavior of the Ice run time, as governed by the configuration property Ice.BatchAutoFlushSize, is to automatically flush batched invocations as soon as a batched request causes the accumulated message to exceed the specified limit. When this occurs, the Ice run time immediately flushes the existing batch of requests and begins a new batch with this latest request as its first element.
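The flush-then-requeue accounting can be illustrated with a small standalone Python model (a toy sketch only, not the actual Ice implementation; the BatchBuffer class and the byte sizes are invented for illustration):

```python
class BatchBuffer:
    """Toy model of Ice's auto-flush accounting (not the real implementation)."""

    def __init__(self, limit):
        self.limit = limit    # byte limit, cf. Ice.BatchAutoFlushSize
        self.pending = []     # sizes of requests in the current batch
        self.flushed = []     # batches that have already been sent

    def add(self, size):
        # If this request would push the batch past the limit, flush the
        # existing batch and start a new one with this request as its
        # first element.
        if self.pending and sum(self.pending) + size > self.limit:
            self.flushed.append(self.pending)
            self.pending = []
        self.pending.append(size)

buf = BatchBuffer(limit=100)
for s in [40, 40, 40]:
    buf.add(s)
# The third request triggers a flush of the first two and starts a new batch.
```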
For batched oneway invocations, the value of Ice.BatchAutoFlushSize specifies the maximum message size in kilobytes; the default value is 1024 (1 MB). In the case of batched datagram invocations, the maximum message size is the smaller of the system's maximum size for datagram packets and the value of Ice.BatchAutoFlushSize.
The receiver's setting for Ice.MessageSizeMax determines the maximum size that the Ice run time will accept for an incoming protocol message. The sender's setting for Ice.BatchAutoFlushSize must not exceed this limit, otherwise the receiver will silently discard the entire batch.
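For example, a sender and receiver might be configured as follows (both values are in kilobytes; the specific numbers are illustrative only):

```
# Client (sender) configuration: flush automatically at 512 KB.
Ice.BatchAutoFlushSize=512

# Server (receiver) configuration: accept incoming messages up to
# 2048 KB, comfortably above the sender's auto-flush limit.
Ice.MessageSizeMax=2048
```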
Automatic flushing is enabled by default as a convenience for clients to ensure a batch never exceeds the configured limit. A client can track batch request activity, and even implement its own auto-flush logic, by installing an interceptor.
Batched Invocations for Fixed Proxies
A fixed proxy is a special form of proxy that an application explicitly creates for use with a specific bidirectional connection. Batched invocations on a fixed proxy are not queued by the proxy, as is the case for regular proxies, but rather by the connection associated with the fixed proxy. Automatic flushing continues to work as usual for batched invocations on fixed proxies, and you have three options for manually flushing:
- ice_flushBatchRequests on a fixed proxy flushes all batched requests queued by its connection; this includes batched requests from other fixed proxies that share the same connection
- Connection::flushBatchRequests flushes all batched requests queued by the target connection
- Communicator::flushBatchRequests flushes all batched requests on all connections associated with the target communicator
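As an illustrative sketch in the Python mapping (assuming a communicator and an established bidirectional connection conn already exist, as described above), the last two options look like:

```python
import Ice

# 'communicator' and 'conn' are assumed to have been created elsewhere.
# Flush everything queued on one particular connection:
conn.flushBatchRequests(Ice.CompressBatch.BasedOnProxy)

# Flush everything queued on every connection of the communicator:
communicator.flushBatchRequests(Ice.CompressBatch.BasedOnProxy)
```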
Connection::flushBatchRequests and Communicator::flushBatchRequests have no effect on batched requests queued by regular (non-fixed) proxies.
The synchronous versions of flushBatchRequests block the calling thread until the batched requests have been successfully written to the local transport. To avoid the risk of blocking, you must use the asynchronous versions instead (assuming they are supported by your chosen language mapping).
Note the following limitations in case a connection error occurs:
- Any requests queued by that connection are lost
- Automatic retries are not attempted
- The proxy method ice_flushBatchRequests and Connection::flushBatchRequests will raise an exception, but Communicator::flushBatchRequests ignores any errors
The Connection::flushBatchRequests and Communicator::flushBatchRequests methods take an Ice::CompressBatch argument to specify under which conditions the batch should be compressed. The Ice::CompressBatch enumeration is shown below:
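To the best of my understanding, the enumeration is declared along these lines (sketched in Slice; consult the Slice definitions shipped with your Ice version for the authoritative declaration):

```
module Ice
{
    local enum CompressBatch
    {
        Yes,
        No,
        BasedOnProxy
    }
}
```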
You can choose to always or never compress the batch with the Yes and No values. The BasedOnProxy value specifies to compress based on the proxies used to add requests to the batch: if at least one request was queued with a compressed fixed proxy (a proxy created with ice_compress(true)), or if Ice.Override.Compress is enabled, the batch will be compressed.
Considerations for Batched Datagrams
For batched datagram invocations, you need to keep in mind that, if the data for the invocations in a batch substantially exceeds the PDU size of the network, it becomes increasingly likely for an individual UDP packet to get lost due to fragmentation. In turn, loss of even a single packet causes the entire batch to be lost. For this reason, batched datagram invocations are most suitable for simple interfaces with a number of operations that each set an attribute of the target object (or interfaces with similar semantics). Batched oneway invocations do not suffer from this risk because they are sent over stream-oriented transports, so individual packets cannot be lost.
Compressing Batched Invocations
Batched invocations are more efficient if you also enable compression for the transport: many isolated and small messages are unlikely to compress well, whereas batched messages are likely to provide better compression because the compression algorithm has more data to work with.
Regardless of whether you use batched messages, you should enable compression only on lower-speed links. For high-speed LAN connections, the CPU time spent doing the compression and decompression is typically longer than the time it takes to just transmit the uncompressed data.
Active Connection Management and Batched Invocations
As for oneway invocations, server-side Active Connection Management (ACM) can interfere with batched invocations over TCP or TCP-based transports (SSL, WebSocket, etc.). With server-side ACM enabled, it's possible for a server to close the connection at the wrong moment and not process a batch – with no indication being returned to the client that the batch was lost. We recommend that you either disable ACM for the server side, or enable ACM heartbeats in the client to ensure the connection remains active.
Batched Invocation Interceptors
Batch invocation interceptors allow you to implement your own auto-flush algorithm or receive notification when an auto-flush fails.
You install an interceptor by setting the batchRequestInterceptor member of the InitializationData object that the application constructs when initializing a new communicator. The Ice run time invokes the interceptor for each batch request, passing the following arguments:
- req - An object representing the batch request being queued
- count - The number of requests currently in the queue
- size - The number of bytes consumed by the requests currently in the queue
The request represented by req is not included in the count and size values.
A batch request is not queued until the interceptor calls BatchRequest::enqueue. The minimal interceptor implementation is therefore:
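In the Python mapping, a minimal interceptor might look like this (a sketch; to the best of my knowledge batchRequestInterceptor accepts any callable with this signature):

```python
import Ice

init_data = Ice.InitializationData()
init_data.properties = Ice.createProperties()

# Queue every batch request unconditionally; without the enqueue()
# call, the request would never be added to the batch.
init_data.batchRequestInterceptor = lambda request, count, size: request.enqueue()

communicator = Ice.initialize(init_data)
```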
A more sophisticated implementation might use its own logic for automatically flushing queued requests:
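One possible shape for such an interceptor, sketched in the Python mapping (the make_interceptor helper is invented for illustration; the BatchRequest accessors and property calls are used as I understand them, so verify against your Ice version):

```python
import Ice

def make_interceptor(limit):
    # Flush the pending batch before queuing a request that would push
    # the accumulated size past 'limit' bytes (mimics Ice's auto-flush).
    def intercept(request, count, size):
        if count > 0 and size + request.getSize() > limit:
            # getProxy() returns the proxy whose batch this request joins.
            request.getProxy().ice_flushBatchRequestsAsync()
        request.enqueue()
    return intercept

init_data = Ice.InitializationData()
init_data.properties = Ice.createProperties()

# Reuse the existing Ice.BatchAutoFlushSize property (in KB) as our
# own flush threshold.
kb = init_data.properties.getPropertyAsIntWithDefault("Ice.BatchAutoFlushSize", 1024)
init_data.batchRequestInterceptor = make_interceptor(kb * 1024)

communicator = Ice.initialize(init_data)
```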
In this example, the implementation consults the existing Ice property Ice.BatchAutoFlushSize to determine the limit that triggers an automatic flush. If a flush is necessary, the interceptor can obtain the relevant proxy by calling getProxy on the BatchRequest object.
Specifying your own exception handler when calling ice_flushBatchRequestsAsync gives you the ability to take action if a failure occurs (Ice's default automatic flushing implementation ignores any errors). Aside from logging a message, your options are somewhat limited because it's not possible for the interceptor to force a retry.
For datagram proxies, we strongly recommend using a maximum queue size that is smaller than the network MTU to minimize the risk that datagram fragmentation could cause an entire batch to be lost.