
ZooKeeper Administrator's Guide

A Guide to Deployment and Administration

Deployment

This section contains information about deploying ZooKeeper and covers these topics:

The first two sections assume you are interested in installing ZooKeeper in a production environment such as a datacenter. The final section covers situations in which you are setting up ZooKeeper on a limited basis - for evaluation, testing, or development - but not in a production environment.

System Requirements

Supported Platforms

ZooKeeper consists of multiple components. Some components are supported broadly, and other components are supported only on a smaller set of platforms.

The following matrix describes the level of support committed for running each component on different operating system platforms.

Support Matrix

Operating System | Client                     | Server                     | Native Client              | Contrib
GNU/Linux        | Development and Production | Development and Production | Development and Production | Development and Production
Solaris          | Development and Production | Development and Production | Not Supported              | Not Supported
FreeBSD          | Development and Production | Development and Production | Not Supported              | Not Supported
Windows          | Development and Production | Development and Production | Not Supported              | Not Supported
Mac OS X         | Development Only           | Development Only           | Not Supported              | Not Supported

For any operating system not explicitly mentioned as supported in the matrix, components may or may not work. The ZooKeeper community will fix obvious bugs that are reported for other platforms, but there is no full support.

Required Software

ZooKeeper runs in Java, release 1.8 or greater (JDK 8 LTS, JDK 11 LTS, JDK 12 - Java 9 and 10 are not supported). It runs as an ensemble of ZooKeeper servers. Three ZooKeeper servers is the minimum recommended size for an ensemble, and we also recommend that they run on separate machines. At Yahoo!, ZooKeeper is usually deployed on dedicated RHEL boxes, with dual-core processors, 2GB of RAM, and 80GB IDE hard drives.

Clustered (Multi-Server) Setup

For reliable ZooKeeper service, you should deploy ZooKeeper in a cluster known as an ensemble. As long as a majority of the ensemble are up, the service will be available. Because ZooKeeper requires a majority, it is best to use an odd number of machines. For example, with four machines ZooKeeper can only handle the failure of a single machine; if two machines fail, the remaining two machines do not constitute a majority. However, with five machines ZooKeeper can handle the failure of two machines.

Note

As mentioned in the ZooKeeper Getting Started Guide, a minimum of three servers is required for a fault-tolerant clustered setup, and it is strongly recommended that you have an odd number of servers.

Usually three servers is more than enough for a production install, but for maximum reliability during maintenance, you may wish to install five servers. With three servers, if you perform maintenance on one of them, you are vulnerable to a failure on one of the other two servers during that maintenance. If you have five of them running, you can take one down for maintenance, and know that you're still OK if one of the other four suddenly fails.

Your redundancy considerations should include all aspects of your environment. If you have three ZooKeeper servers, but their network cables are all plugged into the same network switch, then the failure of that switch will take down your entire ensemble.

Here are the steps to set up a server that will be part of an ensemble. These steps should be performed on every host in the ensemble:

  1. Install the Java JDK. You can use the native packaging system for your system, or download the JDK from: http://java.sun.com/javase/downloads/index.jsp

  2. Set the Java heap size. This is very important to avoid swapping, which will seriously degrade ZooKeeper performance. To determine the correct value, use load tests, and make sure you are well below the usage limit that would cause you to swap. Be conservative - use a maximum heap size of 3GB for a 4GB machine.

  3. Install the ZooKeeper Server Package. It can be downloaded from: http://zookeeper.apache.org/releases.html

  4. Create a configuration file. This file can be called anything. Use the following settings as a starting point:

    tickTime=2000
    dataDir=/var/lib/zookeeper/
    clientPort=2181
    initLimit=5
    syncLimit=2
    server.1=zoo1:2888:3888
    server.2=zoo2:2888:3888
    server.3=zoo3:2888:3888
    

    You can find the meanings of these and other configuration settings in the section Configuration Parameters. A word though about a few here: Every machine that is part of the ZooKeeper ensemble should know about every other machine in the ensemble. You accomplish this with the series of lines of the form server.id=host:port:port. (The parameters host and port are straightforward: for each server you need to specify first a quorum port and then a dedicated port for ZooKeeper leader election.) Since ZooKeeper 3.6.0 you can also specify multiple addresses for each ZooKeeper server instance (this can increase availability when multiple physical network interfaces can be used in parallel in the cluster). You attribute the server id to each machine by creating a file named myid, one for each server, which resides in that server's data directory, as specified by the configuration file parameter dataDir.

  5. The myid file consists of a single line containing only the text of that machine's id. So myid of server 1 would contain the text "1" and nothing else. The id must be unique within the ensemble and should have a value between 1 and 255. IMPORTANT: if you enable extended features such as TTL Nodes (see below) the id must be between 1 and 254 due to internal limitations.

  6. Create an initialization marker file initialize in the same directory as myid. This file indicates that an empty data directory is expected. When present, an empty database is created and the marker file deleted. When not present, an empty data directory means this peer will not have voting rights, and it will not populate the data directory until it communicates with an active leader. This file is intended to be created only when bringing up a new ensemble. (A combined sketch of steps 5 and 6 appears after this list.)

  7. If your configuration file is set up, you can start a ZooKeeper server:

    $ java -cp zookeeper.jar:lib/*:conf org.apache.zookeeper.server.quorum.QuorumPeerMain zoo.conf
    

QuorumPeerMain starts a ZooKeeper server; JMX management beans are also registered, which allows management through a JMX management console. The ZooKeeper JMX document contains details on managing ZooKeeper with JMX. See the script bin/zkServer.sh, which is included in the release, for an example of starting server instances.

  8. Test your deployment by connecting to the hosts. In Java, you can run the following command to execute simple operations:

    $ bin/zkCli.sh -server 127.0.0.1:2181
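
As a concrete illustration of steps 5 and 6, here is a minimal sketch of preparing the data directory on one host. It assumes the sample configuration above, so the host shown is the one referenced as server.1 and the data directory is /var/lib/zookeeper; adjust both to your own layout:

    mkdir -p /var/lib/zookeeper
    echo "1" > /var/lib/zookeeper/myid      # the server id, and nothing else
    touch /var/lib/zookeeper/initialize     # only when bootstrapping a brand-new ensemble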

Single Server and Developer Setup

If you want to set up ZooKeeper for development purposes, you will probably want to set up a single server instance of ZooKeeper, and then install either the Java or C client-side libraries and bindings on your development machine.

The steps to setting up a single server instance are similar to the above, except the configuration file is simpler. You can find the complete instructions in the Installing and Running ZooKeeper in Single Server Mode section of the ZooKeeper Getting Started Guide.

For information on installing the client side libraries, refer to the Bindings section of the ZooKeeper Programmer's Guide.

Administration

This section contains information about running and maintaining ZooKeeper and covers these topics:

Designing a ZooKeeper Deployment

The reliability of ZooKeeper rests on two basic assumptions.

  1. Only a minority of servers in a deployment will fail. Failure in this context means a machine crash, or some error in the network that partitions a server off from the majority.
  2. Deployed machines operate correctly. To operate correctly means to execute code correctly, to have clocks that work properly, and to have storage and network components that perform consistently.

The sections below contain considerations for ZooKeeper administrators to maximize the probability of these assumptions holding true. Some of these are cross-machine considerations, and others are things you should consider for each and every machine in your deployment.

Cross Machine Requirements

For the ZooKeeper service to be active, there must be a majority of non-failing machines that can communicate with each other. To create a deployment that can tolerate the failure of F machines, you should count on deploying 2xF+1 machines. Thus, a deployment that consists of three machines can handle one failure, and a deployment of five machines can handle two failures. Note that a deployment of six machines can only handle two failures since three machines is not a majority. For this reason, ZooKeeper deployments are usually made up of an odd number of machines.
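
For a quick sanity check of this arithmetic, an ensemble of N servers tolerates (N-1)/2 failures (integer division). The throwaway shell loop below, which is purely illustrative and not part of ZooKeeper, prints the tolerated failures for ensembles of three to seven servers:

    for n in 3 4 5 6 7; do echo "$n servers tolerate $(( (n - 1) / 2 )) failure(s)"; done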

To achieve the highest probability of tolerating a failure you should try to make machine failures independent. For example, if most of the machines share the same switch, failure of that switch could cause a correlated failure and bring down the service. The same holds true of shared power circuits, cooling systems, etc.

Single Machine Requirements

If ZooKeeper has to contend with other applications for access to resources like storage media, CPU, network, or memory, its performance will suffer markedly. ZooKeeper has strong durability guarantees, which means it uses storage media to log changes before the operation responsible for the change is allowed to complete. You should be aware of this dependency then, and take great care if you want to ensure that ZooKeeper operations aren’t held up by your media. Here are some things you can do to minimize that sort of degradation:

Provisioning

Things to Consider: ZooKeeper Strengths and Limitations

Administering

Maintenance

Little long-term maintenance is required for a ZooKeeper cluster; however, you must be aware of the following:

Ongoing Data Directory Cleanup

The ZooKeeper Data Directory contains files which are a persistent copy of the znodes stored by a particular serving ensemble. These are the snapshot and transactional log files. As changes are made to the znodes these changes are appended to a transaction log. Occasionally, when a log grows large, a snapshot of the current state of all znodes will be written to the filesystem and a new transaction log file is created for future transactions. During snapshotting, ZooKeeper may continue appending incoming transactions to the old log file. Therefore, some transactions which are newer than a snapshot may be found in the last transaction log preceding the snapshot.

A ZooKeeper server will not remove old snapshots and log files when using the default configuration (see autopurge below); this is the responsibility of the operator. Every serving environment is different and therefore the requirements of managing these files may differ from install to install (backup for example).

The PurgeTxnLog utility implements a simple retention policy that administrators can use. The API docs contain details on calling conventions (arguments, etc.).

In the following example the last <count> snapshots and their corresponding logs are retained and the others are deleted. The value of <count> should typically be greater than 3 (although not required, this provides 3 backups in the unlikely event a recent log has become corrupted). This can be run as a cron job on the ZooKeeper server machines to clean up the logs daily.

java -cp zookeeper.jar:lib/slf4j-api-1.7.5.jar:lib/slf4j-log4j12-1.7.5.jar:lib/log4j-1.2.17.jar:conf org.apache.zookeeper.server.PurgeTxnLog <dataDir> <snapDir> -n <count>
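
For example, a nightly cron entry along the following lines keeps only the three most recent snapshots and their logs. The installation directory /opt/zookeeper and the data directory are assumptions; adapt them (and the retained count) to your own layout:

    # m h dom mon dow   command
    0 3 * * * cd /opt/zookeeper && java -cp zookeeper.jar:lib/*:conf org.apache.zookeeper.server.PurgeTxnLog /var/lib/zookeeper /var/lib/zookeeper -n 3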

Automatic purging of the snapshots and corresponding transaction logs was introduced in version 3.4.0 and can be enabled via the following configuration parameters autopurge.snapRetainCount and autopurge.purgeInterval. For more on this, see Advanced Configuration below.
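
For instance, the following configuration file entries keep the three most recent snapshots and run the purge task once a day (the interval is expressed in hours); treat the values as a starting point rather than a recommendation:

    autopurge.snapRetainCount=3
    autopurge.purgeInterval=24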

Debug Log Cleanup (log4j)

See the section on logging in this document. It is expected that you will set up a rolling file appender using the in-built log4j feature. The sample configuration file in the release tar's conf/log4j.properties provides an example of this.
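
A minimal rolling file appender, roughly along the lines of the shipped sample, might look like the sketch below; the log file location, maximum size, and backup count are assumptions to adapt to your environment:

    log4j.rootLogger=INFO, ROLLINGFILE
    log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
    log4j.appender.ROLLINGFILE.File=/var/log/zookeeper/zookeeper.log
    log4j.appender.ROLLINGFILE.MaxFileSize=10MB
    log4j.appender.ROLLINGFILE.MaxBackupIndex=10
    log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
    log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n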

Supervision

You will want to have a supervisory process that manages each of your ZooKeeper server processes (JVM). The ZK server is designed to be "fail fast", meaning that it will shut down (process exit) if an error occurs that it cannot recover from. As a ZooKeeper serving cluster is highly reliable, this means that while the server may go down, the cluster as a whole is still active and serving requests. Additionally, as the cluster is "self healing", a failed server, once restarted, will automatically rejoin the ensemble without any manual intervention.

Having a supervisory process such as daemontools or SMF (these are just two examples; other supervisory tools are also available, and the choice of which to use is up to you) managing your ZooKeeper server ensures that if the process does exit abnormally it will automatically be restarted and will quickly rejoin the cluster.
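
As one concrete example, a daemontools service for a ZooKeeper server only needs a small run script that keeps the JVM in the foreground so supervise can restart it on exit; the installation path below is an assumption:

    #!/bin/sh
    # daemontools 'run' script: supervise re-executes this whenever the process exits.
    exec /opt/zookeeper/bin/zkServer.sh start-foreground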

It is also recommended to configure the ZooKeeper server process to terminate and dump its heap if an OutOfMemoryError occurs. This is achieved by launching the JVM with the following arguments on Linux and Windows respectively. The zkServer.sh and zkServer.cmd scripts that ship with ZooKeeper set these options.

-XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError='kill -9 %p'

"-XX:+HeapDumpOnOutOfMemoryError" "-XX:OnOutOfMemoryError=cmd /c taskkill /pid %%%%p /t /f"

Monitoring

The ZooKeeper service can be monitored in one of two primary ways: 1) the command port through the use of 4 letter words and 2) JMX. See the appropriate section for your environment/requirements.

Logging

ZooKeeper uses SLF4J version 1.7.5 as its logging infrastructure. For backward compatibility it is bound to LOG4J but you can use LOGBack or any other supported logging framework of your choice.

The ZooKeeper default log4j.properties file resides in the conf directory. Log4j requires that log4j.properties either be in the working directory (the directory from which ZooKeeper is run) or be accessible from the classpath.

For more information about SLF4J, see its manual.

For more information about LOG4J, see Log4j Default Initialization Procedure of the log4j manual.

Troubleshooting

Configuration Parameters

ZooKeeper's behavior is governed by the ZooKeeper configuration file. This file is designed so that the exact same file can be used by all the servers that make up a ZooKeeper ensemble, assuming the disk layouts are the same. If servers use different configuration files, care must be taken to ensure that the list of servers in all of the different configuration files matches.

Note

In 3.5.0 and later, some of these parameters should be placed in a dynamic configuration file. If they are placed in the static configuration file, ZooKeeper will automatically move them over to the dynamic configuration file. See Dynamic Reconfiguration for more information.

Minimum Configuration

Here are the minimum configuration keywords that must be defined in the configuration file:

Advanced Configuration

The configuration settings in this section are optional. You can use them to further fine-tune the behaviour of your ZooKeeper servers. Some can also be set using Java system properties, generally of the form zookeeper.keyword. The exact system property, when available, is noted below.

Cluster Options

The options in this section are designed for use with an ensemble of servers -- that is, when deploying clusters of servers.

If you really need to enable all four letter word commands by default, you can use the asterisk option so you don't have to include every command one by one in the list. As an example, this will enable all four letter word commands:

4lw.commands.whitelist=*

Encryption, Authentication, Authorization Options

The options in this section allow control over encryption/authentication/authorization performed by the service.

Besides this page, you can also find useful information about client-side configuration in the Programmers Guide. The ZooKeeper Wiki also has useful pages about ZooKeeper SSL support and SASL authentication for ZooKeeper.

Experimental Options/Features

New features that are currently considered experimental.

Unsafe Options

The following options can be useful, but be careful when you use them. The risk of each is explained along with the explanation of what the variable does.

Disabling data directory autocreation

New in 3.5: The default behavior of a ZooKeeper server is to automatically create the data directory (specified in the configuration file) on startup if that directory does not already exist. This can be inconvenient and even dangerous in some cases. Take the case where a configuration change is made to a running server, wherein the dataDir parameter is accidentally changed. When the ZooKeeper server is restarted it will create this non-existent directory and begin serving - with an empty znode namespace. This scenario can result in an effective "split brain" situation (i.e. data in both the new invalid directory and the original valid data store). As such it would be good to have an option to turn off this autocreate behavior. In general, for production environments this should be done; unfortunately, however, the default legacy behavior cannot be changed at this point, and therefore this must be done on a case-by-case basis. This is left to users and to packagers of ZooKeeper distributions.

When running zkServer.sh, autocreate can be disabled by setting the environment variable ZOO_DATADIR_AUTOCREATE_DISABLE to 1. When running ZooKeeper servers directly from class files, this can be accomplished by setting zookeeper.datadir.autocreate=false on the java command line, i.e. -Dzookeeper.datadir.autocreate=false
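
Putting the two together, either of the following (with assumed paths) starts a server with data directory autocreation disabled:

    # via the bundled start script
    ZOO_DATADIR_AUTOCREATE_DISABLE=1 bin/zkServer.sh start

    # or directly from class files
    java -Dzookeeper.datadir.autocreate=false -cp zookeeper.jar:lib/*:conf org.apache.zookeeper.server.quorum.QuorumPeerMain conf/zoo.cfg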

When this feature is disabled and the ZooKeeper server determines that the required directories do not exist, it will generate an error and refuse to start.

A new script, zkServer-initialize.sh, is provided to support this new feature. If autocreate is disabled, it is necessary for the user to first install ZooKeeper, then create the data directory (and potentially the txnlog directory), and then start the server. Otherwise, as mentioned in the previous paragraph, the server will not start. Running zkServer-initialize.sh will create the required directories, and optionally set up the myid file (optional command line parameter). This script can be used even if the autocreate feature itself is not used, and will likely be of use to users, as this setup (including creation of the myid file) has been an issue for users in the past. Note that this script only ensures that the data directories exist; it does not create a config file, but rather requires a config file to be available in order to execute.

Enabling db existence validation

New in 3.6.0: The default behavior of a ZooKeeper server on startup when no data tree is found is to set zxid to zero and join the quorum as a voting member. This can be dangerous if some event (e.g. a rogue 'rm -rf') has removed the data directory while the server was down since this server may help elect a leader that is missing transactions. Enabling db existence validation will change the behavior on startup when no data tree is found: the server joins the ensemble as a non-voting participant until it is able to sync with the leader and acquire an up-to-date version of the ensemble data. To indicate an empty data tree is expected (ensemble creation), the user should place a file 'initialize' in the same directory as 'myid'. This file will be detected and deleted by the server on startup.

Initialization validation can be enabled when running ZooKeeper servers directly from class files by setting zookeeper.db.autocreate=false on the java command line, i.e. -Dzookeeper.db.autocreate=false. Running zkServer-initialize.sh will create the required initialization file.

Performance Tuning Options

New in 3.5.0: Several subsystems have been reworked to improve read throughput. This includes multi-threading of the NIO communication subsystem and the request processing pipeline (Commit Processor). NIO is the default client/server communication subsystem. Its threading model comprises 1 acceptor thread, 1-N selector threads and 0-M socket I/O worker threads. In the request processing pipeline the system can be configured to process multiple read requests at once while maintaining the same consistency guarantee (same-session read-after-write). The Commit Processor threading model comprises 1 main thread and 0-N worker threads.

The default values are aimed at maximizing read throughput on a dedicated ZooKeeper machine. Both subsystems need a sufficient number of threads to achieve peak read throughput.
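
As a sketch, these subsystems are tuned through Java system properties such as the ones below; the property names should be checked against the configuration reference for your release, and the thread counts are placeholders rather than recommendations:

    -Dzookeeper.nio.numSelectorThreads=4
    -Dzookeeper.nio.numWorkerThreads=16
    -Dzookeeper.commitProcessor.numWorkerThreads=16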

Debug Observability Configurations

New in 3.6.0: The following options are introduced to make ZooKeeper easier to debug.

AdminServer configuration

New in 3.5.0, with additional options in 3.6.0: The following options are used to configure the AdminServer.

Metrics Providers

New in 3.6.0: The following options are used to configure metrics.

By default the ZooKeeper server exposes useful metrics using the AdminServer and the Four Letter Words interface.

Since 3.6.0 you can configure a different Metrics Provider that exports metrics to your favourite monitoring system.

Since 3.6.0 the ZooKeeper binary package bundles an integration with Prometheus.io.
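
A sketch of enabling the bundled Prometheus.io integration from the configuration file; the port is an assumption, and the exact option names should be checked against the metrics configuration reference for your release:

    metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
    metricsProvider.httpPort=7000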

Communication using the Netty framework

Netty is an NIO-based client/server communication framework; it simplifies (compared to using NIO directly) many of the complexities of network-level communication for Java applications. Additionally, the Netty framework has built-in support for encryption (SSL) and authentication (certificates). These are optional features and can be turned on or off individually.

In versions 3.5+, a ZooKeeper server can use Netty instead of NIO (the default option) by setting the Java system property zookeeper.serverCnxnFactory to org.apache.zookeeper.server.NettyServerCnxnFactory; for the client, set zookeeper.clientCnxnSocket to org.apache.zookeeper.ClientCnxnSocketNetty.
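
For example, the following system properties, added to the server and client JVM command lines respectively, switch both sides over to Netty:

    # server side
    -Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
    # client side
    -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty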

Quorum TLS

New in 3.5.5

Based on the Netty framework, ZooKeeper ensembles can be set up to use TLS encryption in their communication channels. This section describes how to set up encryption on the quorum communication.

Please note that Quorum TLS encapsulates securing both leader election and quorum communication protocols.

  1. Create SSL keystore JKS to store local credentials

One keystore should be created for each ZK instance.

In this example we generate a self-signed certificate and store it together with the private key in keystore.jks. This is suitable for testing purposes, but you probably need an official certificate to sign your keys in a production environment.

Please note that the alias (-alias) and the distinguished name (-dname) must match the hostname of the machine that it is associated with, otherwise hostname verification won't work.

keytool -genkeypair -alias $(hostname -f) -keyalg RSA -keysize 2048 -dname "cn=$(hostname -f)" -keypass password -keystore keystore.jks -storepass password

  2. Extract the signed public key (certificate) from the keystore

This step might only be necessary for self-signed certificates.

keytool -exportcert -alias $(hostname -f) -keystore keystore.jks -file $(hostname -f).cer -rfc

  3. Create SSL truststore JKS containing certificates of all ZooKeeper instances

The same truststore (storing all accepted certs) should be shared by all participants of the ensemble. You need to use different aliases to store multiple certificates in the same truststore; the names of the aliases don't matter.

keytool -importcert -alias [host1..3] -file [host1..3].cer -keystore truststore.jks -storepass password

  4. You need to use NettyServerCnxnFactory as serverCnxnFactory, because SSL is not supported by NIO. Add the following configuration settings to your zoo.cfg config file:

sslQuorum=true
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
ssl.quorum.keyStore.location=/path/to/keystore.jks
ssl.quorum.keyStore.password=password
ssl.quorum.trustStore.location=/path/to/truststore.jks
ssl.quorum.trustStore.password=password

  5. Verify in the logs that your ensemble is running on TLS:

INFO [main:QuorumPeer@1789] - Using TLS encrypted quorum communication
INFO [main:QuorumPeer@1797] - Port unification disabled
...
INFO [QuorumPeerListener:QuorumCnxManager$Listener@877] - Creating TLS-only quorum server socket

Upgrading existing non-TLS cluster with no downtime

New in 3.5.5

Here are the steps needed to upgrade an already running ZooKeeper ensemble to TLS without downtime by taking advantage of port unification functionality.

  1. Create the necessary keystores and truststores for all ZK participants as described in the previous section

  2. Add the following config settings and restart the first node

sslQuorum=false
portUnification=true
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
ssl.quorum.keyStore.location=/path/to/keystore.jks
ssl.quorum.keyStore.password=password
ssl.quorum.trustStore.location=/path/to/truststore.jks
ssl.quorum.trustStore.password=password

Note that TLS is not yet enabled, but we turn on port unification.

  3. Repeat step #2 on the remaining nodes. Verify that you see the following entries in the logs:

INFO [main:QuorumPeer@1791] - Using insecure (non-TLS) quorum communication
INFO [main:QuorumPeer@1797] - Port unification enabled
...
INFO [QuorumPeerListener:QuorumCnxManager$Listener@874] - Creating TLS-enabled quorum server socket

You should also double check after each node restart that the quorum becomes healthy again.

  4. Enable Quorum TLS on each node and do a rolling restart:

sslQuorum=true
portUnification=true

  5. Once you have verified that your entire ensemble is running on TLS, you can disable port unification and do another rolling restart:

sslQuorum=true
portUnification=false

ZooKeeper Commands

The Four Letter Words

ZooKeeper responds to a small set of commands. Each command is composed of four letters. You issue the commands to ZooKeeper via telnet or nc, at the client port.

Three of the more interesting commands: "stat" gives some general information about the server and connected clients, while "srvr" and "cons" give extended details on server and connections respectively.

New in 3.5.3: Four Letter Words need to be explicitly whitelisted before use. Please refer to 4lw.commands.whitelist, described in the cluster configuration section, for details. Moving forward, Four Letter Words will be deprecated; please use AdminServer instead.

The output is compatible with java properties format and the content may change over time (new keys added). Your scripts should expect changes. ATTENTION: Some of the keys are platform specific and some of the keys are only exported by the Leader. The output contains multiple lines with the following format:

key \t value

Here's an example of the ruok command:

$ echo ruok | nc 127.0.0.1 5111
    imok

The AdminServer

New in 3.5.0: The AdminServer is an embedded Jetty server that provides an HTTP interface to the four letter word commands. By default, the server is started on port 8080, and commands are issued by going to the URL "/commands/[command name]", e.g., http://localhost:8080/commands/stat. The command response is returned as JSON. Unlike the original protocol, commands are not restricted to four-letter names, and commands can have multiple names; for instance, "stmk" can also be referred to as "set_trace_mask". To view a list of all available commands, point a browser to the URL /commands (e.g., http://localhost:8080/commands). See the AdminServer configuration options for how to change the port and URLs.
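
For example, assuming the default port, the stat command can be fetched with any HTTP client:

    $ curl http://localhost:8080/commands/stat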

The AdminServer is enabled by default, but can be disabled by either setting the zookeeper.admin.enableServer system property to false or removing Jetty from the classpath.

Note that the TCP four letter word interface is still available if the AdminServer is disabled.

Available commands include:

Data File Management

ZooKeeper stores its data in a data directory and its transaction log in a transaction log directory. By default these two directories are the same. The server can (and should) be configured to store the transaction log files in a separate directory from the data files. Throughput increases and latency decreases when transaction logs reside on a dedicated log device.

The Data Directory

This directory has two or three files in it:

Each ZooKeeper server has a unique id. This id is used in two places: the myid file and the configuration file. The myid file identifies the server that corresponds to the given data directory. The configuration file lists the contact information for each server identified by its server id. When a ZooKeeper server instance starts, it reads its id from the myid file and then, using that id, reads from the configuration file, looking up the port on which it should listen.

The snapshot files stored in the data directory are fuzzy snapshots in the sense that during the time the ZooKeeper server is taking the snapshot, updates are occurring to the data tree. The suffix of the snapshot file names is the zxid, the ZooKeeper transaction id, of the last committed transaction at the start of the snapshot. Thus, the snapshot includes a subset of the updates to the data tree that occurred while the snapshot was in process. The snapshot, then, may not correspond to any data tree that actually existed, and for this reason we refer to it as a fuzzy snapshot. Still, ZooKeeper can recover using this snapshot because it takes advantage of the idempotent nature of its updates. By replaying the transaction log against fuzzy snapshots ZooKeeper gets the state of the system at the end of the log.

The Log Directory

The Log Directory contains the ZooKeeper transaction logs. Before any update takes place, ZooKeeper ensures that the transaction that represents the update is written to non-volatile storage. A new log file is started when the number of transactions written to the current log file reaches a (variable) threshold. The threshold is computed using the same parameter which influences the frequency of snapshotting (see snapCount and snapSizeLimitInKb above). The log file's suffix is the first zxid written to that log.
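
By default both kinds of files end up under a version-2 subdirectory of the configured directories, so a listing looks roughly like the following; the zxid suffixes shown here are made-up values for illustration:

    $ ls /var/lib/zookeeper/version-2
    log.100000001  log.100003521  snapshot.100003520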

File Management

The format of snapshot and log files does not change between standalone ZooKeeper servers and different configurations of replicated ZooKeeper servers. Therefore, you can pull these files from a running replicated ZooKeeper server to a development machine with a stand-alone ZooKeeper server for troubleshooting.

Using older log and snapshot files, you can look at the previous state of ZooKeeper servers and even restore that state. The LogFormatter class allows an administrator to look at the transactions in a log.
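
A sketch of invoking it against a single transaction log; the classpath and file name are assumptions to adapt to your installation:

    $ java -cp zookeeper.jar:lib/*:conf org.apache.zookeeper.server.LogFormatter /var/lib/zookeeper/version-2/log.100000001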

The ZooKeeper server creates snapshot and log files, but never deletes them. The retention policy of the data and log files is implemented outside of the ZooKeeper server. The server itself only needs the latest complete fuzzy snapshot, all log files following it, and the last log file preceding it. The latter requirement is necessary to include updates which happened after this snapshot was started but went into the existing log file at that time. This is possible because snapshotting and rolling over of logs proceed somewhat independently in ZooKeeper. See the maintenance section in this document for more details on setting a retention policy and maintenance of ZooKeeper storage.

Note

The data stored in these files is not encrypted. In the case of storing sensitive data in ZooKeeper, necessary measures need to be taken to prevent unauthorized access. Such measures are external to ZooKeeper (e.g., control access to the files) and depend on the individual settings in which it is being deployed.

Recovery - TxnLogToolkit

More details can be found in the TxnLogToolkit documentation.

Things to Avoid

Here are some common problems you can avoid by configuring ZooKeeper correctly:

Best Practices

For best results, take note of the following list of good ZooKeeper practices:

For multi-tenant installations, see the section detailing ZooKeeper "chroot" support; this can be very useful when deploying many applications/services interfacing to a single ZooKeeper cluster.