Updating replicated data in a distributed database
As discussed earlier, replication is a technique used in distributed databases to store multiple copies of a data table at different sites. The drawback of holding multiple copies at multiple sites is the overhead of maintaining data consistency, particularly during update operations. This chapter looks into replication control, which is required to keep data consistent across all sites, and studies the techniques and algorithms used for replication control. Transactional replication typically starts with a snapshot of the publication database objects and data. Once the initial snapshot is taken, subsequent data changes and schema modifications made at the Publisher are usually delivered to the Subscriber as they occur, in near real time.
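The snapshot-then-stream pattern described above can be sketched as follows. This is a minimal illustration, not any real replication product's API; all class and method names (Publisher, Subscriber, snapshot, sync) are invented for the example.

```python
# Sketch of transactional replication: an initial snapshot, followed by
# delivery of subsequent changes recorded in the Publisher's change log.
# All names here are illustrative, not from a real system.

class Publisher:
    def __init__(self):
        self.data = {}   # current state of the published table
        self.log = []    # ordered record of changes for delivery

    def snapshot(self):
        # Initial snapshot: a full copy of the current data.
        return dict(self.data)

    def write(self, key, value):
        self.data[key] = value
        self.log.append((key, value))  # record the change for Subscribers

class Subscriber:
    def __init__(self):
        self.data = {}
        self.applied = 0  # index of the next log entry to apply

    def initialize(self, snapshot):
        self.data = dict(snapshot)

    def sync(self, publisher):
        # Deliver changes made at the Publisher since the last sync.
        for key, value in publisher.log[self.applied:]:
            self.data[key] = value
        self.applied = len(publisher.log)

pub = Publisher()
pub.write("a", 1)
sub = Subscriber()
sub.initialize(pub.snapshot())  # Subscriber starts from the snapshot
pub.write("b", 2)               # change made after the snapshot
sub.sync(pub)                   # delivered to the Subscriber afterwards
print(sub.data)                 # {'a': 1, 'b': 2}
```

Here the log doubles as the delivery queue; a production system would instead ship committed transactions through a distribution mechanism, but the snapshot-plus-incremental-changes structure is the same.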
The primary purposes of multi-master replication are increased availability and faster server response time. Many directory servers are based on LDAP and implement multi-master replication. Depending on the availability and redundancy requirements, there are three types of replication: full replication, partial replication, and no replication.
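Because every master accepts writes directly, multi-master replication needs a conflict-resolution policy when the same item is updated at two sites. The sketch below uses last-writer-wins with a logical timestamp, one common policy chosen here for illustration; real multi-master systems (including LDAP directories) use more elaborate schemes, and all names are invented.

```python
# Sketch of multi-master replication with last-writer-wins resolution.
# Each write is tagged with a (logical clock, server name) timestamp;
# on merge, the entry with the larger timestamp wins a conflict.

class Master:
    def __init__(self, name):
        self.name = name
        self.data = {}   # key -> (timestamp, value)
        self.clock = 0   # logical clock, advanced on every local write

    def write(self, key, value):
        # Any master accepts writes directly, which is what gives
        # multi-master replication its availability and response time.
        self.clock += 1
        self.data[key] = ((self.clock, self.name), value)

    def merge(self, other):
        # Pull the peer's entries; the higher timestamp wins.
        for key, (ts, value) in other.data.items():
            if key not in self.data or ts > self.data[key][0]:
                self.data[key] = (ts, value)
        self.clock = max(self.clock, other.clock)

m1, m2 = Master("m1"), Master("m2")
m1.write("color", "red")    # write accepted at m1
m2.write("color", "blue")   # concurrent write accepted at m2
m1.merge(m2)                # conflict resolved: m2's write wins
print(m1.data["color"][1])  # blue
```

The server name in the timestamp only breaks ties between concurrent writes; it makes the outcome deterministic regardless of merge order, which is what keeps all masters convergent.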
When spatial objects are replicated at several sites in the network, the updates of a long transaction at a specific site must be propagated to the other sites to keep the replicated spatial objects consistent. Multi-master replication can be contrasted with failover clustering, where passive slave servers replicate the master's data in order to prepare for takeover in the event that the master stops functioning.
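The failover-clustering contrast can be sketched as below: only one node accepts writes, every update is propagated to the passive replicas, and a replica is promoted when the master fails. This is a minimal illustration under those assumptions; all names are invented.

```python
# Sketch of failover clustering: a single active master propagates each
# update to passive replicas, so a replica can take over on failure.
# Names here are illustrative, not from a real clustering product.

class Node:
    def __init__(self):
        self.data = {}
        self.active = False  # only the master is active

    def apply(self, key, value):
        self.data[key] = value

class Cluster:
    def __init__(self, replicas):
        self.master = Node()
        self.master.active = True
        self.replicas = [Node() for _ in range(replicas)]

    def write(self, key, value):
        # Unlike multi-master replication, only the active master
        # accepts writes; each update is pushed to every passive
        # replica so they stay current enough to take over.
        self.master.apply(key, value)
        for r in self.replicas:
            r.apply(key, value)

    def failover(self):
        # Promote a passive replica when the master stops functioning.
        self.master = self.replicas.pop(0)
        self.master.active = True

c = Cluster(replicas=2)
c.write("x", 1)
c.failover()               # master fails; a replica takes over
print(c.master.data["x"])  # 1 -- the replica was kept current
```

Because writes flow through a single node, no conflict resolution is needed; the trade-off relative to multi-master replication is that the master is a bottleneck and a single point of write availability until failover completes.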