It is not uncommon to have application processes that need to learn about changes to the state of a ZooKeeper ensemble. For instance, in our example in Chapter 1, backup masters need to know that the primary master has crashed, and workers need to know when new tasks have been assigned to them. ZooKeeper clients could, of course, poll the ZooKeeper ensemble periodically to determine whether changes have occurred. Polling, however, is not efficient, especially when the expected changes are somewhat rare.

For example, let's consider backup masters; they need to know when the primary has crashed so that they can fail over. To reduce the time it takes to recover from a primary crash, we need to poll frequently, say every 50 ms, as an example of aggressive polling. In this case, each backup master generates 20 requests/second. If there are multiple backup masters, we multiply this frequency by the number of backups to obtain the total request traffic generated just to poll ZooKeeper for the status of the primary master. Even if such an amount of traffic is easy for a system like ZooKeeper to deal with, primary master crashes should be rare, so most of this traffic is unnecessary. Suppose we therefore reduce the amount of polling traffic to ZooKeeper by increasing the period between requests for the status of the primary, say to 1 second. The problem with increasing this period is that it increases the time it takes to recover from a primary crash.

We can avoid this tuning and polling traffic altogether by having ZooKeeper notify interested clients of concrete events. The primary mechanism ZooKeeper provides to deal with changes is watches. With watches, a client registers its request to receive a one-time notification of a change to a given znode. For example, we can have the primary master create an ephemeral znode representing the master lock, and the backup masters register a watch for the existence of the master lock. If the primary crashes, the master lock is automatically deleted and the backup masters are notified. Once the backup masters receive their notifications, they can start a new master election by trying to create a new ephemeral znode, as we showed in "Getting Mastership".

Watches and notifications form a general mechanism that enables clients to observe changes made by other clients without having to continually poll ZooKeeper. We have illustrated the use of this mechanism with the master example, but the general mechanism is applicable to a wide variety of situations.

Before getting deeper into watches, let's establish some terminology. We talk about an event to denote the execution of an update to a given znode. A watch is a one-time trigger associated with a znode and a type of event (e.g., data is set in the znode, or the znode is deleted). When the watch is triggered by an event, it generates a notification.
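The polling-traffic arithmetic above can be checked with a quick back-of-the-envelope computation. This is only an illustration of the numbers in the text; the `polling_traffic` helper and the backup counts are made up for the example:

```python
def polling_traffic(period_s: float, num_backups: int) -> float:
    """Total requests/second sent to ZooKeeper by all backup masters
    that each poll the primary's status once per `period_s` seconds."""
    per_backup = 1.0 / period_s
    return per_backup * num_backups

# Aggressive 50 ms polling: 20 requests/second per backup master.
print(polling_traffic(0.05, 1))   # 20.0
# Three backup masters triple the total traffic.
print(polling_traffic(0.05, 3))   # 60.0
# Backing off to a 1-second period cuts traffic, but recovery from a
# primary crash can now take up to a full extra second.
print(polling_traffic(1.0, 3))    # 3.0
```

This makes the trade-off explicit: the polling period divides traffic but multiplies worst-case detection delay, which is exactly the tuning that notifications avoid.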
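To make the one-time-trigger semantics concrete, here is a small self-contained Python simulation of a znode store with watches. This is emphatically not the real ZooKeeper client API (the real clients are the Java and C bindings operating against a live ensemble, and the master lock would be an ephemeral znode); it is just a sketch of the notify-once behavior described above, with all names invented for the example:

```python
class ZnodeStore:
    """Toy in-memory znode map with one-time watches (illustration only)."""

    def __init__(self):
        self.nodes = {}    # path -> data
        self.watches = {}  # path -> callbacks, each fired at most once

    def create(self, path, data):
        if path in self.nodes:
            return False   # creation fails if the znode already exists
        self.nodes[path] = data
        return True

    def exists(self, path, watch=None):
        """Check existence and optionally register a one-time watch."""
        if watch is not None:
            self.watches.setdefault(path, []).append(watch)
        return path in self.nodes

    def delete(self, path):
        self.nodes.pop(path, None)
        # One-time trigger: deliver each notification once, then drop it.
        for watch in self.watches.pop(path, []):
            watch("NodeDeleted", path)

# A backup master watches the master lock and, when notified of its
# deletion, tries to win the new election by creating the znode itself.
store = ZnodeStore()
store.create("/master", b"primary-id")

elected = []
def on_master_change(event, path):
    if store.create("/master", b"backup-id"):
        elected.append("backup")

store.exists("/master", watch=on_master_change)
store.delete("/master")   # primary "crashes": lock removed, watch fires
print(elected)            # ['backup']
```

Because the watch is consumed when it fires, a client that wants further notifications must re-register, which is the behavior the terminology paragraph above defines.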