Build #679 was successful. Scheduled, with changes by 3 people.

Build result summary

Details

Completed
Queue duration: 3 minutes
Duration: 1 minute
Labels: None
Revision: c7d154ba3e4fac5d24c15442b54c15279848f242
Total tests: 437
Successful since: #642

Tests

Code commits

Author, commit, and commit message
Guus der Kinderen  c7d154ba3e4fac5d24c15442b54c15279848f242  OF-2071: A 'Lock' should be acquired in front of a 'try' block (instead of inside the block).
And for good measure, replace usage of the deprecated CacheFactory#getLock with Cache#getLock.
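For context, the reason for this pattern is that a lock acquired as the first statement inside the try block can leave the finally block attempting to unlock a lock that was never obtained, which throws an IllegalMonitorStateException. Below is a minimal sketch of the corrected idiom, using a plain ReentrantLock as a stand-in for the Lock that Cache#getLock would provide; the class name and the guarded work are illustrative only, not Openfire's actual code.

    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class LockBeforeTryExample {

        public static void main(String[] args) {
            // Stand-in for the Lock that Cache#getLock(key) would hand out.
            Lock lock = new ReentrantLock();

            // Acquire the lock *before* entering the try block. If lock() were the first
            // statement inside the try block and acquisition failed, the finally block
            // would call unlock() on a lock that is not held and throw
            // IllegalMonitorStateException, masking the original failure.
            lock.lock();
            try {
                // ... read or modify the guarded cache entry here ...
                System.out.println("doing work while holding the lock");
            } finally {
                lock.unlock(); // always released, even when the guarded work throws
            }
        }
    }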
Greg Thomas <greg.d.thomas@gmail.com>  d61d78fa4467afb430f3d2b3777f3f6db5f30503  OF-2068: Display the versions of the various servers and plugins in the cluster
Guus der Kinderen  3922348dcbe3280dd031e5cbfd7ea3c465b70398  OF-2064: Invoke IQResultListener on other cluster nodes
If an IQ result can't be processed locally, try to identify another cluster node that can.
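The description above boils down to a routing decision: deliver the result to a locally registered listener when possible, otherwise hand it to the cluster node that registered one. The following self-contained sketch only illustrates that flow under assumed names (IqResultRouter, the nested ResultListener interface, the pending-owners map); it is not Openfire's actual IQResultListener machinery.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class IqResultRouter {

        interface ResultListener { void receivedAnswer(String payload); }

        private final String localNodeId;
        private final Map<String, ResultListener> localListeners = new ConcurrentHashMap<>();
        // In a real cluster this would be a clustered cache mapping packet id -> node id.
        private final Map<String, String> pendingResultOwners;

        IqResultRouter(String localNodeId, Map<String, String> pendingResultOwners) {
            this.localNodeId = localNodeId;
            this.pendingResultOwners = pendingResultOwners;
        }

        void register(String packetId, ResultListener listener) {
            localListeners.put(packetId, listener);
            pendingResultOwners.put(packetId, localNodeId);
        }

        void handleResult(String packetId, String payload) {
            ResultListener listener = localListeners.remove(packetId);
            if (listener != null) {
                listener.receivedAnswer(payload);   // the result can be processed locally
                return;
            }
            String owner = pendingResultOwners.get(packetId);
            if (owner != null && !owner.equals(localNodeId)) {
                forwardToNode(owner, packetId, payload);   // let that node invoke its listener
            }
        }

        private void forwardToNode(String nodeId, String packetId, String payload) {
            // Placeholder for submitting a cluster task ("run this on node X").
            System.out.printf("forwarding result %s to node %s%n", packetId, nodeId);
        }

        public static void main(String[] args) {
            Map<String, String> owners = new ConcurrentHashMap<>();   // shared/clustered in reality
            IqResultRouter nodeA = new IqResultRouter("node-A", owners);
            IqResultRouter nodeB = new IqResultRouter("node-B", owners);

            nodeA.register("iq-1", payload -> System.out.println("node-A got: " + payload));
            nodeB.handleResult("iq-1", "pong");   // node-B has no listener, so it forwards to node-A
        }
    }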
Guus der Kinderen  62a0715905cf17129c30efd5028b6a20b3b25b82  OF-2070: Tests in FlattenNestedGroupsTest regularly fail - disable
Disable this test until a fix can be found.
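For reference, disabling a flaky test usually amounts to a single annotation, as in the sketch below. JUnit 4's @Ignore is assumed here; the test method and reason string are illustrative, not the actual FlattenNestedGroupsTest change.

    import org.junit.Ignore;
    import org.junit.Test;

    public class FlattenNestedGroupsTest {

        @Ignore("OF-2070: regularly fails; disabled until a fix is found")
        @Test
        public void flattensNestedGroups() {
            // ... original test body, skipped while the @Ignore annotation is present ...
        }
    }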
Guus der Kinderen  bb858323878b96f36491768d70fb2775ae96f33e  OF-2060: (re)populate clustered caches when switching implementation
This commit is a follow-up to reverting the changes for OF-974.

When a server joins or leaves a cluster, the implementation behind the Cache interface is swapped. When switching _to_ a clustered implementation (_from_ a local/non-clustered one), the replacement cache can be used to interact with the cache content that is shared in the cluster. When switching _from_ a clustered implementation (_to_ a local/non-clustered one), the cache is replaced with a new (empty) default cache.
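To make that swap concrete: a wrapper can keep handing out the same cache reference to callers while replacing the backing implementation underneath on join/leave. The sketch below is hypothetical (SwappableCache and its methods are assumptions, not Openfire's actual wrapper); it only illustrates the mechanism described above.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SwappableCache<K, V> {

        private volatile Map<K, V> delegate = new ConcurrentHashMap<>(); // local/default implementation

        public V get(K key)          { return delegate.get(key); }
        public V put(K key, V value) { return delegate.put(key, value); }

        // On cluster join: swap in the clustered implementation, which exposes the
        // content shared by the cluster. Entries from the old local cache are not copied.
        public void joinedCluster(Map<K, V> clusteredImplementation) {
            delegate = clusteredImplementation;
        }

        // On cluster leave: swap back to a new, empty default cache.
        public void leftCluster() {
            delegate = new ConcurrentHashMap<>();
        }

        public static void main(String[] args) {
            SwappableCache<String, String> cache = new SwappableCache<>();
            cache.put("only-local", "value");
            cache.joinedCluster(new ConcurrentHashMap<>()); // simulate a join: the local entry is gone
            System.out.println(cache.get("only-local"));    // prints: null
        }
    }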

OF-974 (now reverted) introduced a change that caused the content of the old implementation to be copied to the new one during the switch-over. This prevented data from being 'lost'. This approach has considerable drawbacks: when data exists in both caches under the same key, one of the entries will be lost. Depending on how a cache is used, different techniques to merge data can be desirable; the solution for OF-974 does not allow for that. Also, OF-974 assumes that it is desirable to retain data from the cluster after the node leaves the cluster, which is questionable. In some cases, usage-specific code is needed to 'clean up' caches. This fragments the responsibility for maintaining the cache content across various places: the code that uses the cache, as well as the CacheFactory. This adds code complexity.

The original strategy (which has been restored by reverting OF-974) was for code to anticipate that, during cluster join and leave actions, the data from the local node is missing from the cache. This introduced the various 'restoreCacheContent' method implementations. This has its own challenges: there is the problem of 'missing data' (which led to OF-974 in the first place), but also the problem that not having access to the state of the cache prior to the change makes it hard to detect which changes were applied, which in turn makes it difficult to invoke the corresponding event listeners, etc. It does, however, allow for far more granular control over the cache content, on a per-usage basis. Additionally, cache content control can more easily be implemented in one central place, reducing the complexity of the solution.

This commit restores the original strategy, mainly by restoring the notion that 'caches will not contain local data directly after a switch-over' and by re-introducing various 'restoreCacheContent' implementations in the places that currently implement ClusterEventListener.
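As a rough illustration of that restored strategy: a component that uses a cache keeps track of its own node's data and re-adds it after a switch-over. Everything below is a hypothetical, self-contained sketch; the nested listener interface only mimics the cluster join/leave callbacks, and none of the classes are Openfire's actual ClusterEventListener implementations.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class CacheContentRestoreSketch {

        interface ClusterListener {
            void joinedCluster();
            void leftCluster();
        }

        static class SessionRegistry implements ClusterListener {

            private final Map<String, String> cache;   // stands in for a clustered Cache
            private final Map<String, String> localEntries = new ConcurrentHashMap<>();

            SessionRegistry(Map<String, String> cache) {
                this.cache = cache;
            }

            void addSession(String id, String owner) {
                localEntries.put(id, owner);   // remember what this node contributed
                cache.put(id, owner);
            }

            @Override
            public void joinedCluster() {
                // The cache now points at the clustered implementation; this node's entries are missing.
                restoreCacheContent();
            }

            @Override
            public void leftCluster() {
                // The cache was replaced by a new, empty default cache.
                restoreCacheContent();
            }

            private void restoreCacheContent() {
                cache.putAll(localEntries);
            }
        }

        public static void main(String[] args) {
            Map<String, String> cache = new ConcurrentHashMap<>();
            SessionRegistry registry = new SessionRegistry(cache);
            registry.addSession("session-1", "node-A");

            cache.clear();             // simulate the empty cache right after a switch-over
            registry.joinedCluster();  // the listener puts this node's entries back
            System.out.println(cache); // prints: {session-1=node-A}
        }
    }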

Jira issues

OF-974: Could not obtain issue details from Jira (unknown issue type)
OF-2060: Could not obtain issue details from Jira (unknown issue type)
OF-2064: Could not obtain issue details from Jira (unknown issue type)
OF-2065: Could not obtain issue details from Jira (unknown issue type)
OF-2066: Could not obtain issue details from Jira (unknown issue type)
5 more issues…