Connection Management in Ice

The Ice run time transparently creates and closes connections on behalf of the application so, as an application developer, you can generally ignore how Ice manages connections. However, it is useful to know how Ice deals with connections and chooses among them, especially if servers provide multiple endpoints for Ice objects.

Client-side Connections

When clients contact a server via TCP or SSL, Ice needs to establish a connection between the two. Connections are always initiated by clients and accepted by servers. A client can query a proxy to obtain its Connection object, which describes the underlying connection for the proxy. (A Connection object can be obtained even for a datagram proxy, that is, a proxy that contacts the server via UDP.) The Connection object provides operations such as close and createProxy, among others.

There are two methods for obtaining the Connection object from a proxy:

  • ice_getConnection
    This proxy method returns the Connection object associated with the proxy. If no connection to the target exists yet, the Ice run time establishes a connection first and then returns the Connection object for the new connection. If the run time cannot establish a connection, the operation raises an exception; if the Ice object to which the proxy refers is collocated, the method returns null.
  • ice_getCachedConnection
    This proxy method returns the Connection object associated with the proxy if the proxy is already bound to a connection; if no connection is bound yet, the method returns null.

Here is a simple example that illustrates how to obtain a Connection object:

C++
shared_ptr<Communicator> communicator = ...;
shared_ptr<HelloPrx> hello = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy("hello:tcp -h remote.host.com -p 10000"));
shared_ptr<Connection> conn = hello->ice_getConnection();

The call to ice_getConnection establishes a connection to remote.host.com at port 10000 and returns the associated Connection object. Contrast this with the following example:

C++
shared_ptr<Communicator> communicator = ...;
shared_ptr<HelloPrx> hello = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy("hello:tcp -h remote.host.com -p 10000"));
shared_ptr<Connection> conn = hello->ice_getCachedConnection();

In this case, the call to ice_getCachedConnection returns null because no connection was established previously for the hello proxy.

As you might imagine, connections are not cheap. In particular, a connection consumes a file descriptor and uses memory to keep track of pending requests. Because connections are expensive, connection reuse is an integral part of the Ice run time. It is important to understand how the client side determines whether to establish a new connection or reuse an existing one.

Connection Life Cycle

The Ice run time maintains a pool of existing connections (on a per-communicator basis); the run time binds these connections to a proxy as a side-effect of the client making remote invocations via that proxy. The run time creates new connections transparently as they are needed.

For example, consider the following code:

C++
shared_ptr<Communicator> communicator = ...;
shared_ptr<HelloPrx> h1 = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy("hello:tcp -h remote.host.com -p 10000"));
h1->sayHello(); // Connection creation and binding occurs here

When the client invokes sayHello via the proxy h1, the Ice run time creates a connection to remote.host.com at port 10000 and binds this connection to the proxy. Note that the preceding example uses an uncheckedCast, which does not make a remote invocation and, therefore, never establishes a connection. On the other hand, if the code were to use a checkedCast instead, connection establishment would take place as part of the checkedCast, because a checked cast requires a remote call to determine whether the target object supports the specified interface. (See The Fundamentals of Proxies for more information on casting proxies.)
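
The following minimal sketch contrasts the two casts; it reuses the communicator and proxy string from the example above:

C++
// uncheckedCast is purely local: no remote call, so no connection yet.
shared_ptr<HelloPrx> p1 = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy("hello:tcp -h remote.host.com -p 10000"));

// checkedCast makes a remote invocation (ice_isA), so connection
// establishment happens here, or an exception is raised if it fails.
shared_ptr<HelloPrx> p2 = Ice::checkedCast<HelloPrx>(
    communicator->stringToProxy("hello:tcp -h remote.host.com -p 10000"));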

The life cycle of a connection is independent of the life cycle of a proxy. For example:

C++
void
doit(shared_ptr<Communicator> communicator)
{
    shared_ptr<HelloPrx> h1 = Ice::uncheckedCast<HelloPrx>(
        communicator->stringToProxy("hello:tcp -h remote.host.com -p 10000"));
    h1->sayHello(); // Connection creation and binding occurs here
}

Once the doit function returns, the C++ run time destroys the proxy h1. However, the connection bound to that proxy object remains: the life cycle of a connection and the life cycle of the proxies that are bound to that connection are completely independent. This raises the question of how and when Ice closes connections and releases their associated resources. The Ice run time closes and destroys connections in a variety of circumstances:

  • Destroying a communicator closes and destroys that communicator's connections.
  • If active connection management (ACM) is enabled, the run time closes connections that have been idle for longer than the configured timeout.
  • You can call close on a proxy's Connection object to explicitly close a connection (see the sketch after this list).
  • If a connection has a timeout and the timeout expires, the run time forcefully closes the connection. (This is considered an unrecoverable error.)
  • If the run time encounters an unrecoverable error, such as a socket error, or receives data that violates the Ice protocol or encoding, it closes the corresponding connection.
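
For example, the following sketch explicitly closes the connection bound to a proxy. It assumes the Ice 3.7 C++ mapping; the exact close-mode enumerators may differ in other releases:

C++
shared_ptr<HelloPrx> hello = ...;
shared_ptr<Connection> conn = hello->ice_getConnection();
// Close gracefully once all pending requests have completed.
// (The enumerator names depend on the Ice version in use.)
conn->close(Ice::ConnectionClose::GracefullyWithWait);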

A proxy may be bound to different connections during its life cycle. For example, a proxy may have a connection that remains idle for some time and is closed by ACM; the next time the proxy is used to make an invocation, the run time transparently establishes a new connection for the proxy. Similarly, a new connection may be established for a proxy because the previous connection was closed for any of the preceding reasons, or because connection caching is disabled. (We will return to this topic shortly.)

If you want to permanently bind a proxy to a specific connection, you can create a fixed proxy by calling Connection::createProxy.
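
Here is a minimal sketch; the identity used below is hypothetical:

C++
shared_ptr<HelloPrx> hello = ...;
shared_ptr<Connection> conn = hello->ice_getConnection();
Ice::Identity id;
id.name = "hello"; // hypothetical identity of the target object
// The fixed proxy always uses this connection and is never rebound.
shared_ptr<ObjectPrx> fixedPrx = conn->createProxy(id);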

The Ice run time reuses existing connections when possible. For example, consider:

C++
shared_ptr<Communicator> communicator = ...;
shared_ptr<HelloPrx> h1 = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy("hello:tcp -h remote.host.com -p 10000"));
h1->sayHello();
shared_ptr<HelloPrx> h2 = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy("hello2:tcp -h remote.host.com -p 10000"));
h2->sayHello();

In this case, the Ice run time binds the two proxies h1 and h2 to the same connection because both proxies refer to an object at the same endpoint (remote.host.com at port 10000). In contrast, consider:

C++
shared_ptr<Communicator> communicator = ...;
shared_ptr<HelloPrx> h1 = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy("hello:tcp -h remote.host.com -p 10000"));
h1->sayHello();
shared_ptr<HelloPrx> h2 = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy("hello2:tcp -h remote2.host.com -p 8000"));
h2->sayHello();

In this example, the hello object resides on remote.host.com at port 10000, and the hello2 object resides on remote2.host.com at port 8000. Because the two proxies have different endpoints, the Ice run time establishes a separate connection for each proxy.

The situation becomes more complex if a proxy contains more than one endpoint. For example, consider:

C++
shared_ptr<Communicator> communicator = ...;
shared_ptr<HelloPrx> h1 = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy(
        "hello:tcp -h remote.host.com -p 10000:tcp -h remote2.host.com -p 8000"));
h1->sayHello();
shared_ptr<HelloPrx> h2 = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy(
        "hello2:tcp -h remote.host.com -p 10000:tcp -h remote2.host.com -p 8000"));
h2->sayHello();

In this case, both the hello and the hello2 objects can be reached on either remote.host.com at port 10000, or on remote2.host.com at port 8000. The question is whether the two proxies will share the same connection or use different connections. The answer is that the proxies share a single connection — to see why, we need to explore in greater detail how Ice binds connections to proxies.

Endpoint Selection

During binding, the Ice run time looks at the endpoints of a proxy and, from that list of endpoints, produces an ordered list of candidate endpoints. The default algorithm for creating the list of candidate endpoints and binding a connection is as follows:

  1. Remove any unusable or incompatible endpoints.
  2. Shuffle the endpoints and, after shuffling, move secure endpoints to the end of the list. This establishes an endpoint preference order.
  3. Check whether a compatible connection exists to any of the candidate endpoints. If so, reuse that connection.
  4. Otherwise, no compatible connection exists. For each endpoint in the candidate list, attempt to establish a connection to that endpoint and use the connection if successful; otherwise, try the next endpoint on the candidate list until either a connection can be established, or no more candidate endpoints remain.

Proxy settings can modify this algorithm — what follows are the nitty-gritty details of how Ice selects endpoints and establishes connections.

Removing Unusable and Incompatible Endpoints

The first step is to remove any endpoints that satisfy one of the following criteria:

  • The endpoint is unknown. For example, if the IceSSL plug-in is not installed, an SSL endpoint is an unknown endpoint.
  • The endpoint is incompatible, meaning it does not match the proxy's invocation mode. For example, all non-UDP endpoints are removed from a datagram proxy.
  • The endpoint is insecure, but the proxy is configured to require a secure connection, or Ice.Override.Secure is set.

If no endpoints remain once the run time has removed unusable and incompatible endpoints, the invocation raises NoEndpointException. The examples that follow illustrate these rules. (For brevity, the examples omit the -h option from the endpoints; omitting this option causes Ice to set the host to 127.0.0.1, or to the value of Ice.Default.Host if that property is set.)

C++
// Server: IceSSL plug-in installed.
shared_ptr<ObjectPrx>
SomeServantI::getObj(const Current& current)
{
    return current.adapter->getCommunicator()->
        stringToProxy("obj:tcp -p 8000:udp -p 9000:ssl -p 10000");
}

// Client: IceSSL plug-in not installed.
shared_ptr<ObjectPrx> obj = someServant->getObj();
obj->ice_ping();

In this example, a client without the IceSSL plug-in receives a proxy containing TCP, SSL, and UDP endpoints over the wire. The Ice run time preserves the SSL endpoint in the client's proxy even though the client cannot use the endpoint. This allows the client to later send the proxy over the wire without losing the SSL endpoint. Also note that the client cannot directly create a proxy with an SSL endpoint by calling stringToProxy because, without the IceSSL plug-in, stringToProxy would raise EndpointParseException for SSL endpoints.

The client-side run time removes the SSL endpoint because the IceSSL plug-in is not installed in the client, and it removes the UDP endpoint because a UDP endpoint can be used only for datagram invocations. As a result, only the TCP endpoint remains eligible for connection establishment.

C++
// IceSSL plug-in installed.
shared_ptr<ObjectPrx> obj = communicator->stringToProxy("obj:tcp -p 8000:udp -p 9000:ssl -p 10000");
obj->ice_ping();

In this example, the UDP endpoint is removed because the proxy uses the default twoway invocation mode, so the TCP and SSL endpoints remain. If the client defined Ice.Override.Secure, Ice would remove the TCP endpoint as well.

C++
// IceSSL plug-in installed.
shared_ptr<ObjectPrx> obj = communicator->stringToProxy("obj:tcp -p 8000:udp -p 9000:ssl -p 10000");
obj = obj->ice_datagram();
obj->ice_ping();

In this case, the only remaining endpoint is the UDP endpoint. If the client defined Ice.Override.Secure, Ice would remove the UDP endpoint as well (because UDP cannot be used for secure invocations) and the call to ice_ping would raise NoEndpointException.

C++
// IceSSL plug-in installed.
shared_ptr<ObjectPrx> obj = communicator->stringToProxy("obj:tcp -p 8000:udp -p 9000:ssl -p 10000");
obj = obj->ice_secure();
obj->ice_ping();

In this example, because the proxy is secure, only the SSL endpoint remains eligible for connection establishment.
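
Ice.Override.Secure, which several of the examples above mention, is an ordinary Ice property; as a sketch, it could be set programmatically before the communicator is created:

C++
Ice::InitializationData initData;
initData.properties = Ice::createProperties();
// Force every proxy to use only secure endpoints.
initData.properties->setProperty("Ice.Override.Secure", "1");
shared_ptr<Communicator> communicator = Ice::initialize(initData);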

Endpoint Order

Once the Ice run time has removed unsuitable endpoints, it establishes the order in which the remaining endpoints are used for connection attempts. This proceeds in two steps:

  1. The run time sorts the endpoint list based on the endpoint selection policy (which can be set with the ice_endpointSelection proxy method). By default, the endpoint selection policy is Random, meaning that the run time shuffles the endpoints into random order. Otherwise, the selection policy is Ordered and Ice preserves the order in which the endpoints are listed in the proxy.
  2. If PreferSecure is false (the default value), the run time moves all secure endpoints to the end of the list. Conversely, if PreferSecure is true, the run time moves all secure endpoints to the beginning of the list. (You can set PreferSecure with the ice_preferSecure proxy method.)

Consider the following examples:

C++
// IceSSL plug-in is installed.
shared_ptr<ObjectPrx> obj = communicator->stringToProxy("obj:tcp -p 8000:ssl -p 10000:tcp -p 9000");
obj->ice_ping();

In this case, the endpoint list is either <tcp -p 8000, tcp -p 9000, ssl -p 10000> or <tcp -p 9000, tcp -p 8000, ssl -p 10000>. The order of the TCP endpoints is random because the endpoint selection policy has the default value. However, the SSL endpoint is guaranteed to be at the end because PreferSecure is false.

C++
// IceSSL plug-in is installed.
shared_ptr<ObjectPrx> obj = communicator->stringToProxy("obj:tcp -p 8000:ssl -p 10000:tcp -p 9000");
obj = obj->ice_endpointSelection(Ordered);
obj->ice_ping();

In this case, the endpoint list is <tcp -p 8000, tcp -p 9000, ssl -p 10000> because the selection policy is Ordered, so the two TCP endpoints retain their original order. The SSL endpoint appears at the end because PreferSecure is false.

C++
// IceSSL plug-in is installed.
shared_ptr<ObjectPrx> obj = communicator->stringToProxy("obj:tcp -p 8000:ssl -p 10000:tcp -p 9000");
obj = obj->ice_endpointSelection(Ordered);
obj = obj->ice_preferSecure(true);
obj->ice_ping();

In this case, the endpoint list is <ssl -p 10000, tcp -p 8000, tcp -p 9000>. Again, the endpoint selection policy is Ordered, so the two TCP endpoints retain their original order. However, because PreferSecure is true, the SSL endpoint appears first.

Connection Creation and Binding

If connection caching is enabled (which it is by default), the run time first checks whether it has already established a connection to any of the proxy's endpoints. If so, it reuses that connection; otherwise, it establishes a new one. In other words, the run time establishes a new connection only if no compatible connection to any of the endpoints exists.

If connection caching is disabled, the run time goes through the endpoint list in order and, for each endpoint, determines whether a compatible connection to that endpoint exists, in which case that connection is bound; otherwise, it attempts to establish a new connection to that endpoint. This means that the run time may establish a new connection even if there is an existing connection that is compatible with an endpoint appearing later in the list.

A connection can be reused for a proxy if the connection's endpoint matches one of the proxy's endpoints and the connection's configuration matches the proxy's configuration. Specifically, the timeout setting of the connection must match the timeout configured for the proxy, and if the proxy is configured with a connection ID, the connection ID must also match.
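
For instance, the ice_connectionId proxy method assigns a connection ID; two proxies with different connection IDs never share a connection, even if their endpoints are identical. A minimal sketch:

C++
shared_ptr<Communicator> communicator = ...;
shared_ptr<ObjectPrx> o = communicator->stringToProxy("hello:tcp -h remote.host.com -p 10000");
shared_ptr<HelloPrx> h1 = Ice::uncheckedCast<HelloPrx>(o->ice_connectionId("c1"));
shared_ptr<HelloPrx> h2 = Ice::uncheckedCast<HelloPrx>(o->ice_connectionId("c2"));
h1->sayHello(); // opens a connection with ID "c1"
h2->sayHello(); // different connection ID, so a second connection is opened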

Connection timeouts are a very important and often misunderstood aspect of Ice. In short, each connection has an associated timeout value. The timeout value is copied from the proxy that originally caused the connection to be established. If a request sent over that connection times out, all outstanding requests on that connection also time out and Ice forcefully closes the connection. Therefore, two proxies with different timeout values cannot share a connection. For example:

C++
shared_ptr<Communicator> communicator = ...;
shared_ptr<ObjectPrx> o = communicator->stringToProxy("hello:tcp -h remote.host.com -p 10000");
shared_ptr<HelloPrx> h1 = Ice::uncheckedCast<HelloPrx>(o->ice_timeout(1000));
h1->sayHello();
shared_ptr<HelloPrx> h2 = Ice::uncheckedCast<HelloPrx>(o->ice_timeout(2000));
h2->sayHello();

In this case, h1 and h2 are bound to different connections because the timeouts of the two proxies differ. Let's return to an earlier example again:

C++
shared_ptr<Communicator> communicator = ...;
shared_ptr<HelloPrx> h1 = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy(
        "hello:tcp -h remote.host.com -p 10000:tcp -h remote2.host.com -p 8000"));
h1->sayHello();
shared_ptr<HelloPrx> h2 = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy(
        "hello2:tcp -h remote.host.com -p 10000:tcp -h remote2.host.com -p 8000"));
h2->sayHello();

Consider the first invocation via h1. The endpoint list will be either <tcp -p 10000, tcp -p 8000> or <tcp -p 8000, tcp -p 10000>, depending on how the endpoints are shuffled. (The endpoints are shuffled because the endpoint selection policy has the default value of Random.) Assuming a server runs at each endpoint, the client creates a connection to whatever endpoint happens to be first in the candidate list and binds that connection to the h1 proxy. Now consider the second invocation via h2. In this case, as before, there are two possible endpoint lists. However, because connection caching is enabled, the Ice run time prefers to reuse the existing connection, and thus binds h2 to whatever connection was established by the initial invocation via h1.
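
One way to observe this sharing is to compare the proxies' cached connections using the ice_getCachedConnection method described earlier; a sketch:

C++
shared_ptr<Connection> c1 = h1->ice_getCachedConnection();
shared_ptr<Connection> c2 = h2->ice_getCachedConnection();
bool shared = (c1 == c2); // true: both proxies are bound to the same connection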

Connection Establishment Retries

If an attempt to establish a connection fails, the run time retries based on the value of Ice.RetryIntervals. The default value of this property is zero, which instructs the run time to retry connection establishment once for each endpoint, with no intervening delay. If no connection can be established via any of the endpoints, the run time raises an exception that indicates the reason for the final failed connection attempt, such as ConnectionRefusedException.
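
As a sketch, the retry behavior could be tuned through this property when the communicator is created; the interval values below are arbitrary examples:

C++
Ice::InitializationData initData;
initData.properties = Ice::createProperties();
// Retry immediately, then after 1 second, then after 5 seconds, before giving up.
initData.properties->setProperty("Ice.RetryIntervals", "0 1000 5000");
shared_ptr<Communicator> communicator = Ice::initialize(initData);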

Connection Caching

By default, a connection is bound to a proxy during the first remote invocation via that proxy; thereafter, the proxy continues to use this connection for as long as it remains open. In other words, the proxy caches the connection. If the connection is closed at some point, the next remote invocation via the proxy transparently establishes a new connection using the algorithm we outlined earlier. For the majority of applications, this is the correct behavior because it minimizes the overhead of remote invocations.

For some applications, however, it is desirable to rebind a proxy's connection on each remote invocation. In particular, the default algorithm is unsuitable for per-request load balancing. In this scenario, the proxy contains an endpoint for each replica in a replica group. However, the default algorithm does exactly the wrong thing because, once a connection to any one of the replicas is established, all future requests are sent via that same connection, so only one replica is ever used:

C++
shared_ptr<Communicator> communicator = ...;
shared_ptr<HelloPrx> h1 = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy(
        "hello:tcp -h remote.host.com -p 10000:tcp -h remote2.host.com -p 8000"));
h1->sayHello();
h1->sayHello();

In this case, the second sayHello invocation is sent via whatever connection was established by the first invocation. To change this behavior, you must create a new proxy by calling ice_connectionCached(false):

C++
shared_ptr<Communicator> communicator = ...;
shared_ptr<ObjectPrx> o = communicator->stringToProxy(
    "hello:tcp -h remote.host.com -p 10000:tcp -h remote2.host.com -p 8000");
shared_ptr<HelloPrx> h1 = Ice::uncheckedCast<HelloPrx>(o->ice_connectionCached(false));
h1->sayHello();
h1->sayHello();

By disabling connection caching for h1, the second call to sayHello causes the binding algorithm to execute again:

  • During the first invocation, the endpoints are shuffled and the run time establishes a connection to one of the endpoints, for example, tcp -p 10000.
  • During the second invocation, the selection algorithm runs a second time. If the endpoint shuffle results in the same order as for the first invocation, the request is sent over the already-existing connection. However, if the shuffle results in the opposite order, the second invocation causes a second connection to be opened, to tcp -p 8000.

Eventually, after a number of invocations, both connections will be established; thereafter, each invocation randomly selects one of the two existing connections, which is very efficient and results in per-request random load balancing.

Now assume we disable connection caching and set the selection policy to Ordered. Assuming a server actually runs at the first endpoint, all invocations made by the client will be bound separately, and the first endpoint will be tried first on each invocation. This behavior is useful for servers in a master-slave relationship: the master endpoint is listed first and will always be used unless the master is down, at which point the slaves identified by subsequent endpoints are tried. However, note that at present this is quite expensive because, while the master is down, the run time attempts to create a new connection to the first endpoint on every invocation, only to have every such attempt fail until the master comes back on line.
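
A sketch of this configuration follows; the master and slave host names are hypothetical:

C++
shared_ptr<Communicator> communicator = ...;
shared_ptr<ObjectPrx> o = communicator->stringToProxy(
    "hello:tcp -h master.host.com -p 10000:tcp -h slave.host.com -p 10000");
shared_ptr<HelloPrx> h = Ice::uncheckedCast<HelloPrx>(
    o->ice_connectionCached(false)->ice_endpointSelection(Ice::EndpointSelectionType::Ordered));
h->sayHello(); // binds on every invocation, trying the master endpoint first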

Active Connection Management

Active connection management (ACM) improves application scalability by closing idle connections. At regular intervals, the Ice run time checks each existing connection and, if a connection has been idle for more than Ice.ACM.Client (or Ice.ACM.Server) seconds, it gracefully closes the connection. The default values of these properties are 60 seconds for the client and zero (i.e., disabled) for the server. The next invocation made by a client via a proxy whose connection was closed causes the connection to be re-established automatically, so ACM is transparent to application code.

Note that, on the server side, ACM is disabled by default because server-side ACM can cause oneway invocations to be silently discarded. Disabling ACM on the client side is necessary only if the client uses bidirectional connections. To disable ACM, set the corresponding property to zero.
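
For example, a client that relies on bidirectional connections might disable client-side ACM when it creates its communicator; a minimal sketch:

C++
Ice::InitializationData initData;
initData.properties = Ice::createProperties();
// Disable client-side ACM so idle connections are not closed by the client.
initData.properties->setProperty("Ice.ACM.Client", "0");
shared_ptr<Communicator> communicator = Ice::initialize(initData);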

In the context of ACM, "idle" means that, for the duration of the timeout period, no request has been sent over the connection, no invocations whose requests were sent over the connection are still in progress, and no batch requests have been queued on the connection. For example:

C++
shared_ptr<Communicator> communicator = ...;
shared_ptr<HelloPrx> h1 = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy("hello:tcp -h remote.host.com -p 10000"));
h1->sayHello();
sleep(70);
h1->sayHello();

In this case, the connection that is established by the first call to sayHello is closed after 60 seconds (the default idle timeout for the client side). The second call creates a new connection to the same endpoint and binds that connection to the proxy. However:

C++
shared_ptr<Communicator> communicator = ...;
shared_ptr<HelloPrx> h1 = Ice::uncheckedCast<HelloPrx>(
    communicator->stringToProxy("hello:tcp -h remote.host.com -p 10000"));
h1->sayHello(); // Takes 70 seconds
h1->sayHello();

In this case, the connection is not closed because a reply is outstanding on the connection during the 70 seconds it takes the first call to complete.

Note that disabling ACM on the client side does not guarantee that the connection will remain open because ACM may be active on the server side. If you want to be sure that connections remain open, you must disable ACM for both client and server.
