Asynchronous Method Dispatch (AMD) in Swift
The number of simultaneous synchronous requests an object adapter is capable of dispatching is determined by the dispatch queue associated with this object adapter and the number of threads in the object adapter's thread pool. If all of the threads are busy dispatching long-running operations, then no threads are available to process new requests and therefore clients may experience an unacceptable lack of responsiveness.
Asynchronous Method Dispatch (AMD), the server-side equivalent of AMI, addresses this scalability issue. Using AMD, a server can receive a request but then suspend its processing in order to release the dispatch thread as soon as possible. When processing resumes and the results are available, the server can provide its results to the Ice run time for delivery to the client.
AMD is transparent to the client, that is, there is no way for a client to distinguish a request that, in the server, is processed synchronously from a request that is processed asynchronously.
In practical terms, an AMD operation typically queues the request data for later processing. In this way, the server minimizes the use of dispatch threads and becomes capable of efficiently supporting thousands of simultaneous clients.
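To make this concrete, here is a minimal sketch of the "queue now, answer later" pattern, independent of any particular operation. All names (WorkQueue, add, drain) are hypothetical; the sketch only assumes PromiseKit, which the Swift mapping described later on this page uses. The dispatch thread enqueues the request data and returns a pending promise immediately; a worker thread later fulfills the promise:

```swift
import Foundation
import PromiseKit

// Hypothetical work queue shared by a servant and a worker thread.
final class WorkQueue {
    private var jobs: [(input: String, resolver: Resolver<String>)] = []
    private let lock = NSLock()

    // Called from the dispatch thread: enqueue the job and return at once,
    // releasing the dispatch thread while the work remains pending.
    func add(input: String) -> Promise<String> {
        let (promise, resolver) = Promise<String>.pending()
        lock.lock()
        jobs.append((input, resolver))
        lock.unlock()
        return promise
    }

    // Called later from a worker thread: process each queued job and
    // fulfill its promise, which delivers the result to the client.
    func drain() {
        lock.lock()
        let pending = jobs
        jobs.removeAll()
        lock.unlock()
        for job in pending {
            job.resolver.fulfill(job.input.uppercased())
        }
    }
}
```

An AMD servant method would simply return the promise obtained from add, as the examples below show for concrete Slice operations.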
Enabling AMD with Metadata in Swift
To enable asynchronous dispatch, you must add an ["amd"] metadata directive to your Slice definitions. The directive applies at the interface and the operation level. If you specify ["amd"] at the interface level, all operations in that interface use asynchronous dispatch; if you specify ["amd"] for an individual operation, only that operation uses asynchronous dispatch. In either case, the metadata directive replaces synchronous dispatch, that is, a particular operation implementation must use synchronous or asynchronous dispatch and cannot use both.
Consider the following Slice definitions:
["amd"] interface I { bool isValid(); float computeRate(); } interface J { ["amd"] void startProcess(); int endProcess(); }
In this example, both operations of interface I use asynchronous dispatch, whereas, for interface J, startProcess uses asynchronous dispatch and endProcess uses synchronous dispatch.
Specifying metadata at the operation level (rather than at the interface level) minimizes complexity: although the asynchronous model is more flexible, it is also more complicated to use. It is therefore in your best interest to limit the use of the asynchronous model to those operations that need it, while using the simpler synchronous model for the rest.
AMD Mapping in Swift
The asynchronous mapping for an operation differs in several ways from its synchronous mapping:
- The dispatch method name has the suffix Async.
- For an operation that returns void and has no out parameters, the return type of the dispatch method is PromiseKit.Promise<Void>.
- For an operation that returns at least one value, the dispatch method returns PromiseKit.Promise<T>, where T represents a type or tuple as described below.
- Async methods do not throw: all exceptions must be reported through the promise returned by the method.
Let's start with some simple examples to demonstrate the asynchronous mapping:
["amd"] interface Example { void opVoid(int n); string opString(); void opStringOut(out string s); }
The Slice compiler generates the following Swift protocol:
public protocol Example {
    func opVoidAsync(n: Int32, current: Ice.Current) -> PromiseKit.Promise<Void>
    func opStringAsync(current: Ice.Current) -> PromiseKit.Promise<String>
    func opStringOutAsync(current: Ice.Current) -> PromiseKit.Promise<String>
}
Pay particular attention to the mappings for opString and opStringOut. For operations like these that return a single value of type T (whether it's a non-void return value or an out parameter), the method returns PromiseKit.Promise<T>.
Finally, for an operation that returns multiple values, the generated method returns a promise of a tuple with labels, just like for the asynchronous proxy mapping. Let's add another operation to our example to demonstrate the mapping for multiple return values:
["amd"] interface Example { void opVoid(int n); string opString(); void opStringOut(out string s); string opAll(bool flag, out int count); }
The mapping for opAll is shown below:
public protocol Example {
    func opVoidAsync(n: Int32, current: Ice.Current) -> PromiseKit.Promise<Void>
    func opStringAsync(current: Ice.Current) -> PromiseKit.Promise<String>
    func opStringOutAsync(current: Ice.Current) -> PromiseKit.Promise<String>
    func opAllAsync(flag: Bool, current: Ice.Current) -> PromiseKit.Promise<(returnValue: String, count: Int32)>
}
Chaining AMI and AMD Invocations in Swift
Since the asynchronous proxy API and the asynchronous dispatch API both use PromiseKit promises, chaining nested invocations together without blocking becomes very straightforward. Continuing our example from the previous section, suppose our servant also holds a proxy to another object of the same type and delegates to that object. We can implement the servant operation as:
struct ExampleI: Example {
    let other: ExamplePrx

    func opStringAsync(current: Ice.Current) -> PromiseKit.Promise<String> {
        return other.opStringAsync()
    }
}
Now suppose our servant method goes a step further and modifies the asynchronous response from this delegate. The code becomes:
struct ExampleI: Example {
    let other: ExamplePrx

    func opStringAsync(current: Ice.Current) -> PromiseKit.Promise<String> {
        return other.opStringAsync().map { "Hello \($0)" }
    }
}
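If the delegate's invocation can fail, the same chain can handle the failure before it propagates back to the client. The variation below is a sketch using PromiseKit's recover; the fallback value is an illustrative choice, not part of the original example:

```swift
struct ExampleI: Example {
    let other: ExamplePrx

    func opStringAsync(current: Ice.Current) -> PromiseKit.Promise<String> {
        return other.opStringAsync().map {
            "Hello \($0)"
        }.recover { _ -> Promise<String> in
            // Replace any failure from the delegate with a fallback value;
            // rethrowing the error here would reject the promise instead,
            // and the client would receive the exception.
            return .value("Hello stranger")
        }
    }
}
```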
AMD Exceptions in Swift
The implementation of an AMD operation must return all exceptions through the promise: it cannot throw any exception.
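For example, an implementation can reject its promise by returning an already-rejected promise instead of throwing. A brief sketch, reusing the Example servant from above; the error type and the servant's stored state are hypothetical, added only for illustration:

```swift
// Hypothetical error used for illustration.
struct NotReadyError: Error {}

struct ExampleI: Example {
    let dataIsReady: Bool    // hypothetical state
    let cachedString: String // hypothetical result

    func opStringAsync(current: Ice.Current) -> PromiseKit.Promise<String> {
        guard dataIsReady else {
            // Report the failure through the promise; an AMD method
            // must never throw directly.
            return Promise(error: NotReadyError())
        }
        return .value(cachedString)
    }
}
```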
AMD Example in Swift
For a more realistic example of using AMD in Ice, let's define the Slice interface for a simple computational engine:
module Demo
{
    sequence<float> Row;
    sequence<Row> Grid;

    exception RangeError {}

    interface Model
    {
        ["amd"] Grid interpolate(Grid data, float factor) throws RangeError;
    }
}
Given a two-dimensional grid of floating point values and a factor, the interpolate operation returns a new grid of the same size with the values interpolated in some interesting (but unspecified) way.
Our servant struct adopts Demo.Model and implements the interpolateAsync method, which returns a promise. The method schedules a work item on the global dispatch queue. When the queue executes this work item, it calls the interpolate implementation and uses the result to fulfill the promise; if the call to the interpolate implementation throws an exception, that exception is used to reject the promise.
struct ModelI: Model {
    func interpolateAsync(data: Grid, factor: Float, current: Ice.Current) -> PromiseKit.Promise<Grid> {
        // PromiseKit's DispatchQueue extension runs the body on the queue and
        // returns a promise fulfilled with the body's result, or rejected with
        // the error it throws.
        return DispatchQueue.global().async(.promise) {
            try interpolateImpl(data, factor)
        }
    }
}