The number of simultaneous synchronous requests a server is capable of supporting is determined by the number of threads in the server's thread pool. If all of the threads are busy dispatching long-running operations, then no threads are available to process new requests and therefore clients may experience an unacceptable lack of responsiveness.
Asynchronous Method Dispatch (AMD), the server-side equivalent of AMI, addresses this scalability issue. Using AMD, a server can receive a request but then suspend its processing in order to release the dispatch thread as soon as possible. When processing resumes and the results are available, the server can provide its results to the Ice run time for delivery to the client.
AMD is transparent to the client, that is, there is no way for a client to distinguish a request that, in the server, is processed synchronously from a request that is processed asynchronously.
In practical terms, an AMD operation typically queues the request data for later processing by an application thread (or thread pool). In this way, the server minimizes the use of dispatch threads and becomes capable of efficiently supporting thousands of simultaneous clients.
Enabling AMD with Metadata in Java
To enable asynchronous dispatch, you must add an ["amd"] metadata directive to your Slice definitions. The directive applies at the interface and operation levels. If you specify ["amd"] at the interface level, all operations in that interface use asynchronous dispatch; if you specify ["amd"] for an individual operation, only that operation uses asynchronous dispatch. In either case, the metadata directive replaces synchronous dispatch, that is, a particular operation implementation must use synchronous or asynchronous dispatch and cannot use both.
Consider the following Slice definitions:
["amd"] interface I { bool isValid(); float computeRate(); } interface J { ["amd"] void startProcess(); int endProcess(); }
In this example, both operations of interface I use asynchronous dispatch, whereas, for interface J, startProcess uses asynchronous dispatch and endProcess uses synchronous dispatch.
Specifying metadata at the operation level (rather than at the interface) minimizes complexity: although the asynchronous model is more flexible, it is also more complicated to use. It is therefore in your best interest to limit the use of the asynchronous model to those operations that need it, while using the simpler synchronous model for the rest.
AMD Mapping in Java
The mapping for an AMD operation is very similar to the synchronous mapping for an operation. There are two differences: the method's name has an Async suffix, and it returns java.util.concurrent.CompletionStage&lt;T&gt;, where T is the actual return type as defined by the parameter mapping. The implementation of the operation, which typically returns an instance of the derived class java.util.concurrent.CompletableFuture&lt;T&gt;, must eventually complete the future by supplying either the results or an exception.
Here's a simple example to show the mapping for several operations:
["amd"] interface I { void opVoid(); string opStringRet(); void opStringIn(string s); void opStringOut(out string s); string opStringAll(string s, out string os); }
Since we annotated the interface with the amd metadata, all of the operations use the asynchronous mapping:
public interface I extends com.zeroc.Ice.Object
{
    public static class OpStringAllResult
    {
        public OpStringAllResult() {}

        public OpStringAllResult(String returnValue, String os)
        {
            this.returnValue = returnValue;
            this.os = os;
        }

        public String returnValue;
        public String os;
    }

    java.util.concurrent.CompletionStage<Void> opVoidAsync(com.zeroc.Ice.Current current);

    java.util.concurrent.CompletionStage<java.lang.String> opStringRetAsync(com.zeroc.Ice.Current current);

    java.util.concurrent.CompletionStage<Void> opStringInAsync(String s, com.zeroc.Ice.Current current);

    java.util.concurrent.CompletionStage<java.lang.String> opStringOutAsync(com.zeroc.Ice.Current current);

    java.util.concurrent.CompletionStage<I.OpStringAllResult> opStringAllAsync(String s, com.zeroc.Ice.Current current);

    ...
}
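Setting the Ice-generated types aside for a moment, the essential contract of these Async methods is plain java.util.concurrent: an implementation may either return an already-completed stage (completing on the dispatch thread, which is legal) or return a pending CompletableFuture that another thread completes later. A minimal pure-Java sketch of both styles follows; the names computeSync and computeAsync are illustrative and not part of the Ice mapping:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class AsyncStyles {
    // Style 1: complete on the calling ("dispatch") thread - legal and simple.
    static CompletionStage<String> computeSync() {
        return CompletableFuture.completedFuture("done");
    }

    // Style 2: return a pending future and complete it from another thread.
    static CompletionStage<String> computeAsync() {
        CompletableFuture<String> future = new CompletableFuture<>();
        new Thread(() -> {
            // ... long-running work would go here ...
            future.complete("done");
        }).start();
        return future; // returned immediately, possibly before completion
    }

    public static void main(String[] args) {
        System.out.println(computeSync().toCompletableFuture().join());
        System.out.println(computeAsync().toCompletableFuture().join());
    }
}
```

In the Ice case, the returned CompletionStage is handed to the run time, which marshals the results to the client once the stage completes.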
AMD Thread Safety in Java
As with the synchronous mapping, you can add the marshaled-result metadata to operations that return mutable types in order to avoid potential thread-safety issues. The return type of your operation then changes to CompletionStage&lt;OpMarshaledResult&gt;.
AMD Exceptions in Java
There are two processing contexts in which the logical implementation of an AMD operation may need to report an exception: the dispatch thread (the thread that receives the invocation), and the response thread (the thread that sends the response).
These are not necessarily two different threads: it is legal to send the response from the dispatch thread.
Although we recommend that the future be used to report all exceptions to the client, it is legal for the implementation to raise an exception instead, but only from the dispatch thread.
As you would expect, an exception raised from a response thread cannot be caught by the Ice run time; the application's run-time environment determines how such an exception is handled. Therefore, a response thread must trap all exceptions and send the appropriate response by completing the future. Otherwise, if a response thread is terminated by an uncaught exception, the request may never be completed and the client might wait indefinitely for a response.
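In pure Java terms, the discipline described above amounts to wrapping the worker's body in a try/catch and routing any failure into the future via completeExceptionally, so the stage always completes one way or the other. A hedged sketch (the run method and its boolean trigger are illustrative, not part of the Ice API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class WorkerCatch {
    static CompletableFuture<Integer> run(boolean fail) {
        CompletableFuture<Integer> future = new CompletableFuture<>();
        Thread worker = new Thread(() -> {
            try {
                if (fail) {
                    throw new IllegalStateException("range error");
                }
                future.complete(42);
            } catch (Exception ex) {
                // Trap everything: an uncaught exception here would silently
                // terminate the worker and leave the client waiting forever.
                future.completeExceptionally(ex);
            }
        });
        worker.start();
        return future;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(false).get()); // 42
        try {
            run(true).get();
        } catch (ExecutionException ex) {
            System.out.println(ex.getCause().getMessage()); // range error
        }
    }
}
```

When the future completes exceptionally, the Ice run time turns the supplied exception into the response it sends back to the client.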
AMD Example in Java
To demonstrate the use of AMD in Ice, let us define the Slice interface for a simple computational engine:
module Demo
{
    sequence<float> Row;
    sequence<Row> Grid;

    exception RangeError {}

    interface Model
    {
        ["amd"] Grid interpolate(Grid data, float factor)
            throws RangeError;
    }
}
Given a two-dimensional grid of floating-point values and a factor, the interpolate operation returns a new grid of the same size with the values interpolated in some interesting (but unspecified) way.
Our servant class implements Demo.Model and supplies a definition for the interpolateAsync method that creates a Job to hold the future and arguments, and adds the Job to a queue. The method is synchronized to guard access to the queue:
public final class ModelI implements Demo.Model
{
    public synchronized java.util.concurrent.CompletionStage<float[][]>
    interpolateAsync(float[][] data, float factor, com.zeroc.Ice.Current current)
        throws Demo.RangeError
    {
        java.util.concurrent.CompletableFuture<float[][]> future =
            new java.util.concurrent.CompletableFuture<>();
        _jobs.add(new Job(future, data, factor));
        return future;
    }

    java.util.LinkedList<Job> _jobs = new java.util.LinkedList<>();
}
After queuing the information, the implementation returns an uncompleted future to the Ice run time, making the dispatch thread available to process another request. An application thread removes the next Job from the queue and invokes execute, which uses interpolateGrid (not shown) to perform the computational work:
class Job
{
    Job(java.util.concurrent.CompletableFuture<float[][]> future, float[][] grid, float factor)
    {
        _future = future;
        _grid = grid;
        _factor = factor;
    }

    void execute()
    {
        if(!interpolateGrid())
        {
            _future.completeExceptionally(new Demo.RangeError());
        }
        else
        {
            _future.complete(_grid);
        }
    }

    private boolean interpolateGrid()
    {
        // ...
    }

    private java.util.concurrent.CompletableFuture<float[][]> _future;
    private float[][] _grid;
    private float _factor;
}
If interpolateGrid returns false, we complete the future exceptionally to indicate that a range error has occurred. If interpolation was successful, we send the modified grid back to the client by calling complete on the future.
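The "application thread" mentioned above is typically a loop that blocks on the queue and executes jobs one at a time. The servant shown here guards a plain LinkedList with synchronized; a BlockingQueue variant (our assumption, not shown in the example above) avoids hand-rolled locking. A runnable sketch with a simplified, hypothetical Job that just doubles each value:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

public class JobWorker {
    // Simplified stand-in for the Job class above: doubles every value.
    static class Job {
        final CompletableFuture<float[]> future;
        final float[] data;

        Job(CompletableFuture<float[]> future, float[] data) {
            this.future = future;
            this.data = data;
        }

        void execute() {
            float[] out = new float[data.length];
            for (int i = 0; i < data.length; i++) {
                out[i] = data[i] * 2;
            }
            future.complete(out);
        }
    }

    static final BlockingQueue<Job> jobs = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws Exception {
        // Application thread: take() blocks until a job is queued.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    jobs.take().execute();
                }
            } catch (InterruptedException ex) {
                // Shutdown requested.
            }
        });
        worker.setDaemon(true);
        worker.start();

        // "Dispatch thread": queue a job and hold on to the pending future.
        CompletableFuture<float[]> future = new CompletableFuture<>();
        jobs.add(new Job(future, new float[]{1.0f, 2.5f}));
        float[] result = future.get();
        System.out.println(result[0] + " " + result[1]); // 2.0 5.0
    }
}
```

A single worker thread serializes all jobs; a thread pool draining the same queue would let independent requests proceed in parallel.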