The number of requests that can execute concurrently in an Ice server is limited to the number of threads in its server-side thread pool. If more clients attempt to concurrently call operations than there are threads in the pool, the corresponding requests are not dispatched until a currently-executing invocation completes and returns its thread to the pool; that thread then picks up the next pending request.
By default, the server-side thread pool has a size of one, meaning that only one operation can execute in the server at a time. If you don't see concurrent invocations in a server, it is likely that the server is running with a thread pool containing only a single thread, thereby serializing all incoming invocations.
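The serializing effect of a single-threaded pool is easy to observe with a generic fixed-size thread pool. The sketch below is plain Python, not the Ice run time; the function name, request count, and timings are illustrative. It measures the peak number of simultaneously active "invocations" for a given pool size.

```python
# Minimal model (NOT Ice itself) of dispatching requests through a
# fixed-size thread pool: with one thread, invocations serialize;
# with more threads, they can overlap.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def max_observed_concurrency(pool_size, n_requests=4, work=0.05):
    lock = threading.Lock()
    active = 0   # invocations currently executing
    peak = 0     # highest concurrency seen so far

    def invoke(_):
        nonlocal active, peak
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(work)          # simulated operation body
        with lock:
            active -= 1

    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        list(pool.map(invoke, range(n_requests)))
    return peak

print(max_observed_concurrency(1))  # → 1: a pool of one serializes requests
print(max_observed_concurrency(3))  # > 1: requests now overlap
```

A pool of size one always reports a peak of 1, mirroring the default Ice behavior described above; a larger pool lets pending requests execute concurrently.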
The size of the server-side thread pool is controlled by a number of properties:
- The Ice.ThreadPool.Server.Size property controls the number of threads in the pool. When you create a communicator, the specified number of threads are created and added to the pool. During idle periods, the size of the pool may eventually shrink to just one thread, depending on the value of the SizeMax property.
- The Ice.ThreadPool.Server.SizeMax property has a default value that equals the size of the thread pool, that is, the value of Size. However, you can set this property to a value that is larger than Size; if you do, the server-side run time allows the thread pool to grow up to this value if enough requests arrive concurrently.
- The Ice.ThreadPool.Server.SizeWarn property sets a threshold: if the number of threads in use exceeds this value, the run time emits a warning via the communicator's logger. The default value of this property is 80% of the value specified by SizeMax.
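Put together, these properties might be set in a server's configuration file (loaded, for example, via --Ice.Config); the values below are purely illustrative:

```
# Illustrative thread pool settings for a server's properties file
Ice.ThreadPool.Server.Size=4
Ice.ThreadPool.Server.SizeMax=10
Ice.ThreadPool.Server.SizeWarn=8
```

With these settings, the pool starts with four threads, may grow to ten under concurrent load, and the run time logs a warning once more than eight threads are in use.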