Instead of trying to arrange code such that it is deadlock-free in the presence of callbacks and cyclic dependencies, it is often easier to change the code to avoid the callbacks entirely and to use an approach known as reaping:
- destroy marks the servant as destroyed and removes the Active Servant Map (ASM) entry as usual, but it does not call back into the factory to update the _names set.
- Whenever a collection manager operation, such as list, getDetails, or find, is called, the factory checks for destroyed servants and, if it finds any, removes them from the _names set.
Reaping can make for a much cleaner design because it avoids both the cyclic type dependency and the cyclic locking dependency.
A Simple Reaping Implementation
To implement reaping, we need to change our PhoneEntryI definition a little. It no longer has a static _factory smart pointer back to the factory (because it no longer calls remove). Instead, the servant now provides a member function _isZombie that the factory calls to check whether the servant was destroyed some time in the past:
class PhoneEntryI : public PhoneEntry {
public:
    // ...
    bool _isZombie() const;

private:
    const string _name;
    string _phNum;
    bool _destroyed;
    Mutex _m;
};
The implementation of _isZombie is trivial: it returns the _destroyed flag under protection of the lock:
bool
PhoneEntryI::_isZombie() const
{
    Mutex::Lock lock(_m);
    return _destroyed;
}
The destroy operation no longer calls back into the factory to update the _names set; instead, it simply sets the _destroyed flag and removes the ASM entry:
void
PhoneEntryI::destroy(const Current& c)
{
    Mutex::Lock lock(_m);

    if (_destroyed)
        throw ObjectNotExistException(__FILE__, __LINE__);
    _destroyed = true;

    c.adapter->remove(c.id);
}
Instead of storing just the names of existing servants, the factory now maintains a map from each servant's name to its smart pointer:
class PhoneEntryFactoryI : public PhoneEntryFactory {
public:
    // Constructor and Slice operations here...

private:
    typedef map<string, PhoneEntryIPtr> PMap;
    PMap _entries;
    Mutex _entriesMutex;
};
During create (and other operations, such as list, getDetails, and find), we scan for zombie servants and remove them from the _entries map:
PhoneEntryPrx
PhoneEntryFactoryI::create(const string& name, const string& phNum, const Current& c)
{
    Mutex::Lock lock(_entriesMutex);

    PhoneEntryPrx pe;
    PhoneEntryIPtr servant = new PhoneEntryI(name, phNum);

    // Try to create new object.
    //
    try {
        CommunicatorPtr comm = c.adapter->getCommunicator();
        pe = PhoneEntryPrx::uncheckedCast(
                 c.adapter->add(servant, comm->stringToIdentity(name)));
    } catch (const Ice::AlreadyRegisteredException&) {
        throw PhoneEntryExists(name, phNum);
    }

    // Scan for zombies.
    //
    PMap::iterator i = _entries.begin();
    while (i != _entries.end()) {
        if (i->second->_isZombie())
            _entries.erase(i++);
        else
            ++i;
    }

    _entries[name] = servant;
    return pe;
}
The implementations of list, getDetails, and find scan for zombies as well. Because they need to iterate over the existing entries anyway, reaping incurs essentially no extra cost:
PhoneEntries
PhoneEntryFactoryI::list(const Current& c)
{
    Mutex::Lock lock(_entriesMutex);

    CommunicatorPtr comm = c.adapter->getCommunicator();
    PhoneEntries pe;
    PMap::iterator i = _entries.begin();
    while (i != _entries.end()) {
        if (i->second->_isZombie()) {
            _entries.erase(i++); // Reap the zombie entry.
        } else {
            ObjectPrx o = c.adapter->createProxy(comm->stringToIdentity(i->first));
            pe.push_back(PhoneEntryPrx::uncheckedCast(o));
            ++i;
        }
    }
    return pe;
}

// Similar for getDetails and find...
This is a much cleaner design: there is no cyclic dependency between the factory and the servants, either implicit (in the type system) or explicit (as a locking dependency). Moreover, the implementation is easier to understand once you get used to the idea of reaping: there is no need to follow complex callbacks and to carefully analyze the order of lock acquisition. (Note that, depending on how state is maintained for servants, you may also need to reap during start-up and shutdown.)
In general, we recommend that you use a reaping approach in preference to callbacks for all but the most trivial applications: it simply is a better approach that is easier to maintain and understand.
Alternative Reaping Implementations
You may be concerned that reaping increases the cost of create from O(log n) to O(n) because create now iterates over all existing entries and locks and unlocks a mutex in each servant (whereas, previously, it simply added each new servant to the _names set). Often, this is not an issue because life cycle operations are called infrequently compared to normal operations. However, you will notice the additional cost if you have a large number of servants (in the thousands or more) and life cycle operations are called frequently.
If you find that create is a bottleneck (by profiling, not by guessing!), you can change to a more efficient implementation by adding zombie servants to a separate zombie list. Reaping then iterates over the zombie list and removes each servant on that list from the _entries map before clearing the zombie list. This reduces the cost of reaping to be proportional to the number of zombie servants instead of the total number of servants. In addition, it allows us to remove the _isZombie member function and to lock and unlock _entriesMutex only once, instead of locking a mutex in each servant as part of _isZombie. We will see such an implementation when we revisit the file system application.
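To make the idea concrete before then, here is a minimal stand-alone sketch of the zombie-list variant. It is not the file system implementation the text refers to; it uses std::mutex, std::map, and std::shared_ptr in place of the Ice types, and the names Factory, Servant, and addZombie are invented for illustration. A destroyed servant queues its name on the zombie list, and each collection operation erases just the queued entries before proceeding:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <mutex>
#include <string>
#include <vector>

// Hypothetical stand-in for the real servant class.
struct Servant {
    explicit Servant(const std::string& n) : name(n) {}
    std::string name;
};

class Factory {
public:
    void add(const std::shared_ptr<Servant>& s)
    {
        std::lock_guard<std::mutex> lock(_m);
        reap(); // Reap before mutating the map.
        _entries[s->name] = s;
    }

    // Called from a servant's destroy() instead of setting a _destroyed
    // flag: the servant queues itself for later removal.
    void addZombie(const std::string& name)
    {
        std::lock_guard<std::mutex> lock(_m);
        _zombies.push_back(name);
    }

    std::vector<std::string> list()
    {
        std::lock_guard<std::mutex> lock(_m);
        reap(); // Cost proportional to the number of zombies only.
        std::vector<std::string> names;
        for (const auto& e : _entries)
            names.push_back(e.first);
        return names;
    }

private:
    // Must be called with _m held: erase each queued zombie from the
    // entries map, then clear the queue.
    void reap()
    {
        for (const auto& name : _zombies)
            _entries.erase(name);
        _zombies.clear();
    }

    std::map<std::string, std::shared_ptr<Servant>> _entries;
    std::vector<std::string> _zombies;
    std::mutex _m;
};
```

Note that reap runs with the factory mutex already held, so a reaping pass locks a single mutex once, rather than one mutex per servant.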
You may also be concerned about the number of zombie servants that can accumulate in the server if create is not called for some time. For most applications, this is not a problem: the servants occupy memory, but no other resources, because destroy can clean up scarce resources, such as file descriptors or network connections, before it turns the servant into a zombie. If you really need to prevent accumulation of zombie servants, you can reap from a background thread that runs periodically, or you can count the number of zombies and trigger a reaping pass once that number exceeds some threshold.
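The threshold-triggered variant can be sketched in the same stand-alone style (again with standard-library stand-ins for the Ice types; Factory, Servant, and addZombie are invented names, and the threshold value is an assumption). Here, addZombie counts the queued zombies and runs a reaping pass as soon as their number exceeds the threshold, so zombies can never accumulate beyond a fixed bound:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <memory>
#include <mutex>
#include <string>
#include <vector>

// Hypothetical stand-in for the real servant class.
struct Servant {
    explicit Servant(const std::string& n) : name(n) {}
    std::string name;
};

class Factory {
public:
    explicit Factory(std::size_t threshold) : _threshold(threshold) {}

    void add(const std::shared_ptr<Servant>& s)
    {
        std::lock_guard<std::mutex> lock(_m);
        _entries[s->name] = s;
    }

    // Zombies accumulate until their number exceeds the threshold,
    // at which point a reaping pass runs immediately.
    void addZombie(const std::string& name)
    {
        std::lock_guard<std::mutex> lock(_m);
        _zombies.push_back(name);
        if (_zombies.size() > _threshold)
            reap();
    }

    std::size_t size()
    {
        std::lock_guard<std::mutex> lock(_m);
        return _entries.size();
    }

private:
    // Must be called with _m held.
    void reap()
    {
        for (const auto& name : _zombies)
            _entries.erase(name);
        _zombies.clear();
    }

    const std::size_t _threshold;
    std::map<std::string, std::shared_ptr<Servant>> _entries;
    std::vector<std::string> _zombies;
    std::mutex _m;
};
```

A background-thread reaper would instead call the same reap routine from a timer loop; the threshold version shown here has the advantage of needing no extra thread.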