Interface | Description |
---|---|
ConnectionMXBean | This MBean represents a client connection. |
DataTreeMXBean | ZooKeeper data tree MBean. |
RequestProcessor | RequestProcessors are chained together to process transactions. |
ServerStats.Provider | |
SessionTracker | This is the basic interface that ZooKeeperServer uses to track sessions. |
SessionTracker.Session | |
SessionTracker.SessionExpirer | |
ZooKeeperServerListener | Listener for critical resource events. |
ZooKeeperServerMXBean | ZooKeeper server MBean. |
Class | Description |
---|---|
ByteBufferInputStream | |
ByteBufferOutputStream | |
ConnectionBean | Implementation of the connection MBean interface. |
ContainerManager | Manages cleanup of container ZNodes. |
DatadirCleanupManager | This class manages the cleanup of snapshots and corresponding transaction logs by scheduling the auto purge task with the specified 'autopurge.purgeInterval'. |
DataNode | This class contains the data for a node in the data tree. |
DataTree | This class maintains the tree data structure. |
DataTree.ProcessTxnResult | |
DataTreeBean | This class implements the data tree MBean. |
ExitCode | Exit codes used to exit the server. |
ExpiryQueue<E> | ExpiryQueue tracks elements in time-sorted, fixed-duration buckets. |
FinalRequestProcessor | This request processor actually applies any transaction associated with a request and services any queries. |
LogFormatter | |
NettyServerCnxn | |
NettyServerCnxnFactory | |
NIOServerCnxn | This class handles communication with clients using NIO. |
NIOServerCnxnFactory | NIOServerCnxnFactory implements a multi-threaded ServerCnxnFactory using NIO non-blocking socket calls. |
ObserverBean | ObserverBean |
PrepRequestProcessor | This request processor is generally at the start of a RequestProcessor chain. |
PurgeTxnLog | This class is used to clean up the snapshot and data log directories. |
RateLogger | |
ReferenceCountedACLCache | |
Request | This is the structure that represents a request moving through a chain of RequestProcessors. |
ServerCnxn | Interface to a server connection; represents a connection from a client to the server. |
ServerCnxnFactory | |
ServerConfig | Server configuration storage. |
ServerStats | Basic server statistics. |
SessionTrackerImpl | This is a full-featured SessionTracker. |
SessionTrackerImpl.SessionImpl | |
SnapshotFormatter | Dumps a snapshot file to stdout. |
SyncRequestProcessor | This RequestProcessor logs requests to disk. |
TraceFormatter | |
TxnLogProposalIterator | This class provides an iterator interface to access Proposals deserialized from the on-disk txnlog. |
UnimplementedRequestProcessor | Manages unknown requests (i.e., requests with an unknown OpCode). |
WatchesPathReport | A watch report, essentially a mapping of path to the session IDs of sessions that have set a watch on that path. |
WatchesReport | A watch report, essentially a mapping of session ID to the paths on which the session has set a watch. |
WatchesSummary | A summary of watch information. |
WorkerService | WorkerService is a worker thread pool for running tasks, implemented using one or more ExecutorServices. |
WorkerService.WorkRequest | Callers should implement a class extending WorkRequest in order to schedule work with the service. |
ZKDatabase | This class maintains the in-memory database of ZooKeeper server state, including the sessions, the data tree, and the committed logs. |
ZooKeeperCriticalThread | Represents a critical thread. |
ZooKeeperSaslServer | |
ZooKeeperServer | This class implements a simple standalone ZooKeeperServer. |
ZooKeeperServerBean | This class implements the ZooKeeper server MBean interface. |
ZooKeeperServerConf | Configuration data for a ZooKeeperServer. |
ZooKeeperServerMain | This class starts and runs a standalone ZooKeeperServer. |
ZooKeeperThread | This is the main class for catching all uncaught exceptions thrown by the server's threads. |
ZooTrace | This class encapsulates and centralizes tracing for the ZooKeeper server. |
Enum | Description |
---|---|
DatadirCleanupManager.PurgeTaskStatus | Status of the dataDir purge task. |
EphemeralType | Abstraction that interprets the ephemeralOwner field of a ZNode. |
EphemeralTypeEmulate353 | See https://issues.apache.org/jira/browse/ZOOKEEPER-2901: version 3.5.3 introduced bugs associated with how TTL nodes were implemented. |
ZooKeeperServer.State | |
Exception | Description |
---|---|
RequestProcessor.RequestProcessorException | |
ServerCnxn.CloseRequestException | |
ServerCnxn.EndOfStreamException | |
ZooKeeperServer.MissingSessionException | |
ZooKeeper maintains an order when processing requests: requests from a given client are processed in the order they were submitted, and state-changing requests are applied in a single global order. Below we explain three aspects of ZooKeeperServer: request processing, data structure maintenance, and session tracking.
If the request is just a query, it is processed by ZooKeeper and the result is returned. Otherwise, the request is validated, a transaction is generated and logged, and the request then waits until the transaction has been logged before processing continues.
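To make the chain structure concrete, here is a minimal sketch of a processor handing requests down a chain. The Request and RequestProcessor types below are simplified stand-ins for the real org.apache.zookeeper.server classes, and PrepProcessor is a hypothetical illustration, not the actual PrepRequestProcessor:

```java
// Minimal sketch of a RequestProcessor chain; these types are simplified
// stand-ins for org.apache.zookeeper.server.Request/RequestProcessor.
interface RequestProcessor {
    void processRequest(Request request);
    void shutdown();
}

class Request {
    final boolean isQuery;  // true for reads, false for state-changing operations
    Object txn;             // transaction generated for state-changing operations
    Request(boolean isQuery) { this.isQuery = isQuery; }
}

// Hypothetical first link in the chain: validates state-changing requests,
// attaches a generated transaction, and forwards every request downstream.
class PrepProcessor implements RequestProcessor {
    private final RequestProcessor next;
    PrepProcessor(RequestProcessor next) { this.next = next; }

    @Override
    public void processRequest(Request r) {
        if (!r.isQuery) {
            r.txn = new Object(); // placeholder for the generated transaction
        }
        next.processRequest(r);   // hand off to the next processor in the chain
    }

    @Override
    public void shutdown() { next.shutdown(); }
}
```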
Requests are logged as a group: transactions are queued up, and the SyncThread processes them at a predefined interval (currently 20 ms). The SyncThread interacts with ZooKeeperServer through the txnQueue; transactions are added to the SyncThread's txnQueue via queueItem. When a transaction has been synced to disk, its callback is invoked, which causes the request processing to be completed.
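The group-commit pattern described above can be sketched as follows; GroupCommitThread, its txnQueue field, and queueItem are illustrative names modeled on the description, not the server's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of a SyncThread-style group commit: transactions are queued,
// drained in batches at a fixed interval, flushed to disk together,
// and each transaction's callback is invoked after the flush.
class GroupCommitThread extends Thread {
    private final LinkedBlockingQueue<Runnable> txnQueue = new LinkedBlockingQueue<>();

    // Mirrors the role of queueItem: enqueue a transaction along with a
    // completion callback that resumes request processing once synced.
    void queueItem(Runnable onSynced) {
        txnQueue.add(onSynced);
    }

    @Override
    public void run() {
        List<Runnable> batch = new ArrayList<>();
        while (!Thread.currentThread().isInterrupted()) {
            try {
                Thread.sleep(20);            // flush interval (20 ms in the text)
            } catch (InterruptedException e) {
                return;
            }
            txnQueue.drainTo(batch);
            if (!batch.isEmpty()) {
                syncToDisk(batch);           // one disk sync for the whole group
                for (Runnable callback : batch) {
                    callback.run();          // completes each request's processing
                }
                batch.clear();
            }
        }
    }

    private void syncToDisk(List<Runnable> batch) {
        // In the real server this writes the transactions to the log and
        // forces them to stable storage; elided in this sketch.
    }
}
```

Batching this way amortizes the cost of a single disk sync across every request in the group.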
We guarantee that changes to nodes are stored to non-volatile media before responding to a client. We do this quickly by writing changes as a sequence of transactions in a log file. Even though we flush transactions as a group, we need to avoid seeks as much as possible. Also, since the server can fail at any point, we need to be careful of partial records.
We address the above problems by preallocating large blocks of zero-filled space in the log file (so appends never change the file's size, and the end of the log is detected by a zero-length record rather than end-of-file), by writing each transaction's checksum and length ahead of the transaction itself (so partial or corrupt records can be detected), and by syncing the log to disk before sending responses back to clients.
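A sketch of such a length- and checksum-prefixed record format is below; the field layout is a simplified assumption for illustration, and the authoritative on-disk format is defined by the server's transaction log implementation:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.zip.Adler32;

// Sketch of a length- and checksum-prefixed log record, in the spirit of
// the transaction log format described above (layout simplified).
class LogRecordSketch {
    static void writeRecord(DataOutputStream out, byte[] txn) throws IOException {
        Adler32 crc = new Adler32();
        crc.update(txn, 0, txn.length);
        out.writeLong(crc.getValue()); // checksum first...
        out.writeInt(txn.length);      // ...then the record length...
        out.write(txn);                // ...then the serialized transaction
    }

    // Returns the next record, or null at end of log: a zero length (the
    // zero-filled preallocated region) or a checksum mismatch (a partial
    // write from a crash) both terminate replay.
    static byte[] readRecord(DataInputStream in) throws IOException {
        long expected = in.readLong();
        int len = in.readInt();
        if (len <= 0) {
            return null;               // hit the zero-filled preallocated region
        }
        byte[] txn = new byte[len];
        in.readFully(txn);
        Adler32 crc = new Adler32();
        crc.update(txn, 0, txn.length);
        if (crc.getValue() != expected) {
            return null;               // partial/corrupt record: stop replay here
        }
        return txn;
    }
}
```

Because a crash can truncate the final record, replay stops at the first zero-length or checksum-mismatched record rather than trusting end-of-file.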
As the server runs, the log file will grow quite large. To avoid long startup times, we periodically take a snapshot of the tree of DataNodes. We cannot take the snapshot synchronously, as the data takes a while to write out, so instead we write out the tree asynchronously. This means that we end up with a "corrupt" snapshot of the data tree.

More formally, if we define T to be the real snapshot of the tree at the time we begin taking the snapshot, and l as the sequence of transactions applied to the tree between the time the snapshot begins and the time it completes, we write to disk T+l', where l' is a subset of the transactions in l. While we have no way of knowing which transactions make up l', it doesn't really matter: T+l'+l = T+l, since the transactions we log are idempotent (applying a transaction multiple times has the same result as applying it once).

So when we restore the snapshot, we also replay all transactions in the log that occur after the snapshot was begun. We can easily figure out where to start the replay, because we start a new log file when we start a snapshot. Both the snapshot file and the log file have a numeric suffix that represents the transaction id that created the respective file.
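A recovery pass driven by those numeric suffixes might look like the following sketch; the file-name patterns and helper methods here are illustrative assumptions, with the real logic living in the server's snapshot and transaction-log handling:

```java
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

// Sketch of snapshot+log recovery: restore the most recent snapshot, then
// replay every transaction log started at or after the zxid in the
// snapshot's file-name suffix. Because transactions are idempotent,
// replaying ones already reflected in the snapshot (the l' subset) is harmless.
class RecoverySketch {
    static long suffix(File f) {
        // Assumes names like "snapshot.1a2b" / "log.1a2b" with a hex zxid suffix.
        String name = f.getName();
        return Long.parseLong(name.substring(name.lastIndexOf('.') + 1), 16);
    }

    static void restore(File dir) {
        File[] snaps = dir.listFiles(f -> f.getName().startsWith("snapshot."));
        File latest = Arrays.stream(snaps)
                .max(Comparator.comparingLong(RecoverySketch::suffix))
                .orElseThrow(() -> new IllegalStateException("no snapshot found"));
        loadSnapshot(latest);                        // T + l' in the notation above

        long snapZxid = suffix(latest);
        File[] logs = dir.listFiles(f -> f.getName().startsWith("log."));
        Arrays.stream(logs)
                .filter(f -> suffix(f) >= snapZxid)  // logs begun at/after the snapshot
                .sorted(Comparator.comparingLong(RecoverySketch::suffix))
                .forEach(RecoverySketch::replayLog); // idempotent: T + l' + l = T + l
    }

    static void loadSnapshot(File snap) { /* deserialize the DataTree; elided */ }
    static void replayLog(File log)     { /* apply each logged txn in order; elided */ }
}
```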