«MongoDB Documentation for v 1.2» and «Beanstalk Protocol»

From support.qbpro.ru
Documentation for the official MongoDB Node.js driver v 1.2 (supported by 10gen)
[https://github.com/mongodb/node-mongodb-native/tree/1.2-dev/docs original full documentation]


Explanatory notes are taken from [http://jsman.ru/mongo-book/ here].

= MongoClient - the new, improved way to connect =
*[https://github.com/mongodb/node-mongodb-native/blob/1.2-dev/docs/articles/MongoClient.md original]
Starting with driver version '''1.2''', a new connection class is included that has the same name across all of the official drivers. This does not mean that existing applications will stop working; it is simply recommended to use the new, simplified connection and development API.


The new '''MongoClient''' class acknowledges all writes to MongoDB, in contrast to the existing '''Db''' connection class, where acknowledgements are turned off.


<nowiki>MongoClient = function(server, options);

MongoClient.prototype.open

MongoClient.prototype.close

MongoClient.prototype.db

MongoClient.connect</nowiki>

= Beanstalk Protocol =

Protocol
--------

The beanstalk protocol runs over TCP using ASCII encoding. Clients connect, send commands and data, wait for responses, and close the connection. For each connection, the server processes commands serially in the order in which they were received and sends responses in the same order. All integers in the protocol are formatted in decimal and (unless otherwise indicated) nonnegative.


Names, in this protocol, are ASCII strings. They may contain letters (A-Z and a-z), numerals (0-9), hyphen ("-"), plus ("+"), slash ("/"), semicolon (";"), dot ("."), dollar-sign ("$"), underscore ("_"), and parentheses ("(" and ")"), but they may not begin with a hyphen. They are terminated by white space (either a space char or end of line). Each name must be at least one character long.


The protocol contains two kinds of data: text lines and unstructured chunks of data. Text lines are used for client commands and server responses. Chunks are used to transfer job bodies and stats information. Each job body is an opaque sequence of bytes. The server never inspects or modifies a job body and always sends it back in its original form. It is up to the clients to agree on a meaningful interpretation of job bodies.
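The naming rules above translate into a simple character class. The helper below is our own sketch (not part of the protocol or of beanstalkd) for pre-validating names on the client side before a command is sent:

```javascript
// Validate a beanstalkd name per the protocol rules above: letters, digits,
// "-", "+", "/", ";", ".", "$", "_", "(" and ")", at least one character,
// and no leading hyphen. Whitespace is excluded, since it terminates a name.
function isValidBeanstalkName(name) {
  return /^[A-Za-z0-9+\/;.$_()][A-Za-z0-9+\/;.$_()-]*$/.test(name);
}
```

Checking names client-side avoids sending a command line the server would reject.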
The client may issue the "quit" command, or simply close the TCP connection when it no longer has use for the server. However, beanstalkd performs very well with a large number of open connections, so it is usually better for the client to keep its connection open and reuse it as much as possible. This also avoids the overhead of establishing new TCP connections.


If a client violates the protocol (such as by sending a request that is not well-formed or a command that does not exist) or if the server has an error, the server will reply with one of the following error messages:

- "OUT_OF_MEMORY\r\n" The server cannot allocate enough memory for the job.
  The client should try again later.

- "INTERNAL_ERROR\r\n" This indicates a bug in the server. It should never
  happen. If it does happen, please report it at
  http://groups.google.com/group/beanstalk-talk.

- "BAD_FORMAT\r\n" The client sent a command line that was not well-formed.
  This can happen if the line does not end with \r\n, if non-numeric
  characters occur where an integer is expected, if the wrong number of
  arguments are present, or if the command line is malformed in any other
  way.

- "UNKNOWN_COMMAND\r\n" The client sent a command that the server does not
  know.

These error responses will not be listed in this document for individual commands in the following sections, but they are implicitly included in the description of all commands. Clients should be prepared to receive an error response after any command.


As a last resort, if the server has a serious error that prevents it from continuing service to the current client, the server will close the connection.


The methods '''open''', '''close''' and '''db''' shown above make up the complete MongoClient interface and work in the same way as the corresponding methods on the existing '''Db''' class. The main difference is that the constructor omits the '''database name''' that '''Db''' takes. Let's look at a simple connection using '''open'''; the code will replace a thousand words.


<nowiki>var MongoClient = require('mongodb').MongoClient,
  Server = require('mongodb').Server;

var mongoClient = new MongoClient(new Server('localhost', 27017));
mongoClient.open(function(err, mongoClient) {
  var db1 = mongoClient.db("mydb");

  mongoClient.close();
});</nowiki>


Note that MongoClient is configured in the same way as the '''Db''' object. The main difference is that you access databases with the '''db''' method on the MongoClient instance instead of using the '''db''' instance directly, as before. MongoClient also supports the same options as the previous Db instance.
So, with minimal changes to an application, the new MongoClient object can be used for connecting.

== Connection URL format ==
<nowiki>mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]]</nowiki>

The URL format is unified across all official 10gen drivers; some options are not supported by third-party drivers for natural reasons.


=== Parts of the URL ===
* <span style="color:darkgreen">'''mongodb://'''</span> - the prefix identifying the string as a standard connection format
* <span style="color:darkgreen">'''username:password@'''</span> - optional. If given, the driver will attempt to authenticate against the database after connecting to the server.
* <span style="color:darkgreen">'''host1'''</span> - the only required part of the URI. It identifies a hostname, an IP address or a UNIX socket.
* <span style="color:darkgreen">''':portX'''</span> - the connection port, optional; defaults to :27017.
* <span style="color:darkgreen">'''/database'''</span> - the name of the database to log in to, so it is only relevant when the '''username:password@''' syntax is used. If not specified, the "admin" database is used by default.
* <span style="color:darkgreen">'''?options'''</span> - connection options. If '''database''' is absent, the / between the last host and the ? introducing the options must still be present. Options are name=value pairs separated by "&". For incorrect or unsupported options the driver logs a warning and continues. The driver supports no options beyond those described in the specification; this reduces the likelihood that different drivers will support slightly different but ultimately incompatible options (for example, different names, different values or different defaults).


=== Replica set options: ===
* '''replicaSet=name'''
** The driver verifies that the name of the replica set it connects to matches this name. It implies that the hosts given are a seed list, and the driver will attempt to find all members of the set.
** NO DEFAULT VALUE.
:::::Note: replication in MongoDB works much like replication in relational databases. Writes are sent to a single server, the master, which then synchronizes its state to the other servers, the slaves. You can allow or forbid reads from the slaves, depending on whether your system can tolerate reading stale data. If the master goes down, one of the slaves can take over as master.

:::::Although replication improves read performance by distributing the reads, its main purpose is reliability. A typical approach is to combine replication and sharding. For example, each shard can consist of a master and a slave server. (Technically you will also need an arbiter to break the tie when two slave servers try to declare themselves master. An arbiter consumes very few resources and can be shared between several shards.)


=== Connection configuration: ===
* '''ssl=true|false|prefer'''
** true: the driver initiates each connection using SSL
** false: the driver initiates each connection without SSL
** prefer: the driver tries to initiate each connection using SSL and, if that fails, falls back to a connection without SSL
** Default value: false.
* '''connectTimeoutMS=ms'''
** How long a connection can take to be opened before timing out.
** Current driver behavior already differs on this, so the default must be left to each driver. For new implementations, the default should be to never time out.
* '''socketTimeoutMS=ms'''
** How long a send or receive on a socket can take before timing out.
** Current driver behavior already differs on this, so the default must be left to each driver. For new implementations, the default should be to never time out.


=== Connection pool configuration: ===
* '''maxPoolSize=n:''' the maximum number of connections in the pool
** Default value: 100


=== Write concern configuration: ===
'''w=wValue'''

*For numeric values above 1, the driver adds { w : wValue } to the getLastError command.
*wValue is typically a number, but can be any string in order to allow for specifications like "majority".
*Default value is 1.
*If wValue == -1, ignore network errors.
*If wValue == 0, don't send getLastError.
*If wValue == 1, send {getlasterror: 1} (no w).

Job Lifecycle
-------------

A job in beanstalk gets created by a client with the "put" command. During its life it can be in one of four states: "ready", "reserved", "delayed", or "buried". After the put command, a job typically starts out ready. It waits in the ready queue until a worker comes along and runs the "reserve" command. If this job is next in the queue, it will be reserved for the worker. The worker will execute the job; when it is finished the worker will send a "delete" command to delete the job.


Here is a picture of the typical job lifecycle:

   put            reserve               delete
  -----> [READY] ---------> [RESERVED] --------> *poof*


Here is a picture with more possibilities:

   put with delay               release with delay
  ----------------> [DELAYED] <------------.
                        |                   |
                        | (time passes)     |
                        |                   |
   put                  v     reserve       |       delete
  -----------------> [READY] ---------> [RESERVED] --------> *poof*
                       ^  ^                |  |
                       |   \  release      |  |
                       |    `-------------'   |
                       |                      |
                       | kick                 |
                       |                      |
                       |       bury           |
                    [BURIED] <---------------'
                       |
                       |  delete
                        `--------> *poof*


The system has one or more tubes. Each tube consists of a ready queue and a delay queue. Each job spends its entire life in one tube. Consumers can show interest in tubes by sending the "watch" command; they can show disinterest by sending the "ignore" command. This set of interesting tubes is said to be a consumer's "watch list". When a client reserves a job, it may come from any of the tubes in its watch list.


When a client connects, its watch list is initially just the tube named "default". If it submits jobs without having sent a "use" command, they will live in the tube named "default".


Tubes are created on demand whenever they are referenced. If a tube is empty (that is, it contains no ready, delayed, or buried jobs) and no client refers to it, it will be deleted.
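The transitions in the diagrams above can be captured in a small table. The sketch below is our own illustration, not beanstalkd code, and it simplifies a little: delete is shown only from the reserved state, and a TTR timeout is modeled the same way as the delay expiring.

```javascript
// Legal state transitions for a beanstalkd job, per the lifecycle diagrams above.
// A state of null means the job does not exist (before put / after delete).
var TRANSITIONS = {
  put:        { from: null,       to: "ready"    },
  putDelayed: { from: null,       to: "delayed"  },
  timeout:    { from: "delayed",  to: "ready"    }, // the delay has elapsed
  reserve:    { from: "ready",    to: "reserved" },
  release:    { from: "reserved", to: "ready"    },
  bury:       { from: "reserved", to: "buried"   },
  kick:       { from: "buried",   to: "ready"    },
  delete:     { from: "reserved", to: null       }  // job is gone ("*poof*")
};

// Apply a command to a job in the given state, rejecting illegal transitions.
function applyCommand(state, command) {
  var t = TRANSITIONS[command];
  if (!t || t.from !== state) {
    throw new Error("cannot " + command + " a job in state " + state);
  }
  return t.to;
}
```

A client library could use such a table to fail fast instead of waiting for the server's NOT_FOUND.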
Producer Commands
-----------------

The "put" command is for any process that wants to insert a job into the queue. It comprises a command line followed by the job body:

put <pri> <delay> <ttr> <bytes>\r\n
<data>\r\n

It inserts a job into the client's currently used tube (see the "use" command below).

- <pri> is an integer < 2**32. Jobs with smaller priority values will be
  scheduled before jobs with larger priorities. The most urgent priority is 0;
  the least urgent priority is 4,294,967,295.

- <delay> is an integer number of seconds to wait before putting the job in
  the ready queue. The job will be in the "delayed" state during this time.

- <ttr> -- time to run -- is an integer number of seconds to allow a worker
  to run this job. This time is counted from the moment a worker reserves
  this job. If the worker does not delete, release, or bury the job within
  <ttr> seconds, the job will time out and the server will release the job.
  The minimum ttr is 1. If the client sends 0, the server will silently
  increase the ttr to 1.

- <bytes> is an integer indicating the size of the job body, not including the
  trailing "\r\n". This value must be less than max-job-size (default: 2**16).

- <data> is the job body -- a sequence of bytes of length <bytes> from the
  previous line.

After sending the command line and body, the client waits for a reply, which may be:

- "INSERTED <id>\r\n" to indicate success.

  - <id> is the integer id of the new job

- "BURIED <id>\r\n" if the server ran out of memory trying to grow the
  priority queue data structure.

  - <id> is the integer id of the new job

- "EXPECTED_CRLF\r\n" The job body must be followed by a CR-LF pair, that is,
  "\r\n". These two bytes are not counted in the job size given by the client
  in the put command line.

- "JOB_TOO_BIG\r\n" The client has requested to put a job with a body larger
  than max-job-size bytes.

- "DRAINING\r\n" This means that the server has been put into "drain mode"
  and is no longer accepting new jobs. The client should try another server
  or disconnect and try again later.


The "use" command is for producers. Subsequent put commands will put jobs into the tube specified by this command. If no use command has been issued, jobs will be put into the tube named "default".

use <tube>\r\n

- <tube> is a name at most 200 bytes. It specifies the tube to use. If the
  tube does not exist, it will be created.

The only reply is:

USING <tube>\r\n

- <tube> is the name of the tube now being used.


'''wtimeoutMS=ms'''

*The driver adds { wtimeout : ms } to the getlasterror command.
*Used in combination with w.
*No default value.


'''journal=true|false'''

*true: sync to journal.
*false: the driver does not add j to the getlasterror command.
*Default value is false.


'''fsync=true|false'''

*true: sync to disk.
*false: the driver does not add fsync to the getlasterror command.
*Default value is false.


If conflicting values for fireAndForget and any write concern are passed, the driver should raise an exception about the conflict.


=== Read Preference ===
'''slaveOk=true|false:''' whether a driver connected to a replica set will send reads to slaves/secondaries.

*Default value is false


'''readPreference=enum:''' the read preference for this connection. If set, it overrides any slaveOk value.

*Enumerated values:
:*primary
:*primaryPreferred
:*secondary
:*secondaryPreferred
:*nearest
*Default value is primary
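The write-concern options above reduce to a small set of rules. The helper below is our own sketch of that mapping, not driver code: given the URL options, it returns the getLastError command document the driver would send, or null when no acknowledgement is requested.

```javascript
// Build the getLastError command implied by the write-concern URL options above.
// Returns null when no getLastError should be sent (w == 0 or w == -1).
function getLastErrorCommand(opts) {
  var w = opts.w === undefined ? 1 : opts.w; // default write concern is 1
  if (w === 0 || w === -1) return null;      // unacknowledged / ignore network errors
  var cmd = { getlasterror: 1 };
  if (w !== 1) cmd.w = w;                    // numbers above 1, or strings like "majority"
  if (opts.wtimeoutMS !== undefined) cmd.wtimeout = opts.wtimeoutMS;
  if (opts.journal) cmd.j = true;            // journal=true adds j
  if (opts.fsync) cmd.fsync = true;          // fsync=true adds fsync
  return cmd;
}
```

For example, w=majority&wtimeoutMS=5000 yields { getlasterror: 1, w: "majority", wtimeout: 5000 }.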
'''readPreferenceTags=string.''' A representation of a tag set as a comma-separated list of colon-separated key-value pairs, e.g. '''dc:ny,rack:1'''. Spaces should be stripped from the beginning and end of all keys and values. To specify a list of tag sets, use multiple readPreferenceTags, e.g. '''readPreferenceTags=dc:ny,rack:1&readPreferenceTags=dc:ny&readPreferenceTags='''

*Note the empty value; it provides for fallback to any other secondary server if none is available.
*Order matters when using multiple readPreferenceTags.
*There is no default value.


== MongoClient.connect ==
MongoClient.connect uses the connection URL format described above. Where possible, MongoClient picks the best default options, but they can always be overridden. This applies to '''auto_reconnect:true''' and '''native_parser:true''' (when the native parser is available). Below are examples of connecting to a single server, a replica set and a sharded system using '''MongoClient.connect'''.


=== Connecting to a single server ===
<nowiki>var MongoClient = require('mongodb').MongoClient;

MongoClient.connect("mongodb://localhost:27017/integration_test", function(err, db) {
  test.equal(null, err);
  test.ok(db != null);

  db.collection("replicaset_mongo_client_collection").update({a:1}, {b:1}, {upsert:true}, function(err, result) {
    test.equal(null, err);
    test.equal(1, result);

    db.close();
    test.done();
  });
});</nowiki>


=== A replica set connection using no acknowledgement by default and readPreference for secondary ===
<nowiki>var MongoClient = require('mongodb').MongoClient;

MongoClient.connect("mongodb://localhost:30000,localhost:30001/integration_test_?w=0&readPreference=secondary", function(err, db) {
  test.equal(null, err);
  test.ok(db != null);

  db.collection("replicaset_mongo_client_collection").update({a:1}, {b:1}, {upsert:true}, function(err, result) {
    test.equal(null, err);
    test.equal(1, result);

    db.close();
    test.done();
  });
});</nowiki>


=== A sharded connection using no acknowledgement by default and readPreference for secondary ===
<nowiki>var MongoClient = require('mongodb').MongoClient;

MongoClient.connect("mongodb://localhost:50000,localhost:50001/integration_test_?w=0&readPreference=secondary", function(err, db) {
  test.equal(null, err);
  test.ok(db != null);

  db.collection("replicaset_mongo_client_collection").update({a:1}, {b:1}, {upsert:true}, function(err, result) {
    test.equal(null, err);
    test.equal(1, result);

    db.close();
    test.done();
  });
});</nowiki>


Notice that when connecting to the sharded system it's pretty much the same URL as for connecting to the replica set. This is because the driver itself figures out whether it's connecting to a replica set or to a set of Mongos proxies. No special care is needed to specify one or the other. This is in contrast to having to use the '''ReplSet''' or '''Mongos''' classes with the '''open''' approach.

Worker Commands
---------------

A process that wants to consume jobs from the queue uses "reserve", "delete", "release", and "bury". The first worker command, "reserve", looks like this:

reserve\r\n

Alternatively, you can specify a timeout as follows:

reserve-with-timeout <seconds>\r\n

This will return a newly-reserved job. If no job is available to be reserved, beanstalkd will wait to send a response until one becomes available. Once a job is reserved for the client, the client has limited time to run (TTR) the job before the job times out. When the job times out, the server will put the job back into the ready queue. Both the TTR and the actual time left can be found in response to the stats-job command.


If more than one job is ready, beanstalkd will choose the one with the smallest priority value. Within each priority, it will choose the one that was received first.


A timeout value of 0 will cause the server to immediately return either a response or TIMED_OUT. A positive value of timeout will limit the amount of time the client will block on the reserve request until a job becomes available.


During the TTR of a reserved job, the last second is kept by the server as a safety margin, during which the client will not be made to wait for another job. If the client issues a reserve command during the safety margin, or if the safety margin arrives while the client is waiting on a reserve command, the server will respond with:

DEADLINE_SOON\r\n

This gives the client a chance to delete or release its reserved job before the server automatically releases it.

TIMED_OUT\r\n

If a non-negative timeout was specified and the timeout exceeded before a job became available, or if the client's connection is half-closed, the server will respond with TIMED_OUT.


Otherwise, the only other response to this command is a successful reservation in the form of a text line followed by the job body:

RESERVED <id> <bytes>\r\n
<data>\r\n

- <id> is the job id -- an integer unique to this job in this instance of
  beanstalkd.

- <bytes> is an integer indicating the size of the job body, not including
  the trailing "\r\n".

- <data> is the job body -- a sequence of bytes of length <bytes> from the
  previous line. This is a verbatim copy of the bytes that were originally
  sent to the server in the put command for this job.
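The three possible reserve replies above are easy to tell apart with one pattern match. A sketch of ours, assuming the caller has already read the complete first line (the <bytes> of body data would then be read separately):

```javascript
// Parse the first line of a reserve reply, e.g. "RESERVED 42 11\r\n".
// Returns {status, id, bytes} for RESERVED, or just {status} for
// DEADLINE_SOON and TIMED_OUT. Anything else is a protocol error.
function parseReserveReply(line) {
  line = line.replace(/\r\n$/, "");
  var m = /^RESERVED (\d+) (\d+)$/.exec(line);
  if (m) {
    return { status: "RESERVED", id: parseInt(m[1], 10), bytes: parseInt(m[2], 10) };
  }
  if (line === "DEADLINE_SOON" || line === "TIMED_OUT") {
    return { status: line };
  }
  throw new Error("unexpected reserve reply: " + line);
}
```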
The delete command removes a job from the server entirely. It is normally used by the client when the job has successfully run to completion. A client can delete jobs that it has reserved, ready jobs, delayed jobs, and jobs that are buried. The delete command looks like this:

delete <id>\r\n

- <id> is the job id to delete.

The client then waits for one line of response, which may be:

- "DELETED\r\n" to indicate success.

- "NOT_FOUND\r\n" if the job does not exist or is not either reserved by the
  client, ready, or buried. This could happen if the job timed out before the
  client sent the delete command.


The release command puts a reserved job back into the ready queue (and marks its state as "ready") to be run by any client. It is normally used when the job fails because of a transitory error. It looks like this:

release <id> <pri> <delay>\r\n

- <id> is the job id to release.

- <pri> is a new priority to assign to the job.

- <delay> is an integer number of seconds to wait before putting the job in
  the ready queue. The job will be in the "delayed" state during this time.

The client expects one line of response, which may be:

- "RELEASED\r\n" to indicate success.

- "BURIED\r\n" if the server ran out of memory trying to grow the priority
  queue data structure.

- "NOT_FOUND\r\n" if the job does not exist or is not reserved by the client.


== MongoClient.connect options ==
The connect function also takes a hash of options divided into db/server/replSet/mongos, allowing you to tweak options not directly supported by the unified URL string format. To use these options, pass in a hash like this:

<nowiki>var MongoClient = require('mongodb').MongoClient;

MongoClient.connect("mongodb://localhost:27017/integration_test_?", {
    db: {
      native_parser: false
    },
    server: {
      socketOptions: {
        connectTimeoutMS: 500
      }
    },
    replSet: {},
    mongos: {}
  }, function(err, db) {
  test.equal(null, err);
  test.ok(db != null);

  db.collection("replicaset_mongo_client_collection").update({a:1}, {b:1}, {upsert:true}, function(err, result) {
    test.equal(null, err);
    test.equal(1, result);

    db.close();
    test.done();
  });
});</nowiki>


Below are all the options supported for db/server/replSet/mongos.

*'''db''' A hash of options at the db level overriding or adjusting functionality not supported by the url.
:*'''w'''<nowiki>, {Number/String, > -1 || 'majority'} the write concern for the operation, where < 1 means no acknowledgement of the write and w >= 1 or w = 'majority' acknowledges the write</nowiki>
:*'''wtimeout''', {Number, 0} set the timeout for waiting for the write concern to finish (combines with the w option)
:*'''fsync''', (Boolean, default:false) the write waits for fsync before returning
:*'''journal''', (Boolean, default:false) the write waits for journal sync before returning
:*'''readPreference''' {String}, the preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
:*'''native_parser''' {Boolean, default:false}, use the C++ BSON parser.
:*'''forceServerObjectId''' {Boolean, default:false}, force the server to create _id fields instead of the client.
:*'''pkFactory''' {Object}, object overriding the basic ObjectID primary key generation.
:*'''serializeFunctions''' {Boolean, default:false}, serialize functions.
:*'''raw''' {Boolean, default:false}, perform operations using raw bson buffers.
:*'''recordQueryStats''' {Boolean, default:false}, record query statistics during execution.
:*'''retryMiliSeconds''' {Number, default:5000}, number of milliseconds between retries.
:*'''numberOfRetries''' {Number, default:5}, number of retries of the connection.
*'''server''' A hash of options at the server level not supported by the url.
:*'''readPreference''' {String, default:null}, sets the read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST)
:*'''ssl''' {Boolean, default:false}, use an ssl connection (needs a mongod server with ssl support)
:*'''slaveOk''' {Boolean, default:false}, legacy option allowing reads from a secondary, use '''readPreference''' instead.
:*'''poolSize''' {Number, default:1}, number of connections in the connection pool, set to 1 by default for legacy reasons.
:*'''socketOptions''' {Object, default:null}, an object containing socket options to use (noDelay:(boolean), keepAlive:(number), connectTimeoutMS:(number), socketTimeoutMS:(number))
:*'''logger''' {Object, default:null}, an object representing a logger that you want to use; it needs to support the functions debug, log and error '''({error:function(message, object) {}, log:function(message, object) {}, debug:function(message, object) {}})'''.
:*'''auto_reconnect''' {Boolean, default:false}, reconnect on error.
:*'''disableDriverBSONSizeCheck''' {Boolean, default:false}, force the server to error if the BSON message is too big.
*'''replSet''' A hash of options at the replSet level not supported by the url.
:*'''ha''' {Boolean, default:true}, turn on high availability.
:*'''haInterval''' {Number, default:2000}, time between each replica set status check.
:*'''reconnectWait''' {Number, default:1000}, time to wait in milliseconds before attempting a reconnect.
:*'''retries''' {Number, default:30}, number of times to attempt a replica set reconnect.
:*'''rs_name''' {String}, the name of the replica set to connect to.
:*'''socketOptions''' {Object, default:null}, an object containing socket options to use (noDelay:(boolean), keepAlive:(number), connectTimeoutMS:(number), socketTimeoutMS:(number))
:*'''readPreference''' {String}, the preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
:*'''strategy''' {String, default:null}, selection strategy for reads; choose between ping and statistical (default is round-robin)
:*'''secondaryAcceptableLatencyMS''' {Number, default:15}, sets the range of servers to pick from when using NEAREST (lowest ping ms + the latency fence, e.g. a range of 1 to (1 + 15) ms)
:*'''connectArbiter''' {Boolean, default:false}, sets whether the driver should connect to arbiters or not.
*'''mongos''' A hash of options at the mongos level not supported by the url.
:*'''socketOptions''' {Object, default:null}, an object containing socket options to use (noDelay:(boolean), keepAlive:(number), connectTimeoutMS:(number), socketTimeoutMS:(number))
:*'''ha''' {Boolean, default:true}, turn on high availability; attempts to reconnect to proxies that went down.
:*'''haInterval''' {Number, default:2000}, time between each status check.


The bury command puts a job into the "buried" state. Buried jobs are put into a FIFO linked list and will not be touched by the server again until a client kicks them with the "kick" command.

The bury command looks like this:

bury <id> <pri>\r\n

- <id> is the job id to bury.

- <pri> is a new priority to assign to the job.

There are two possible responses:

- "BURIED\r\n" to indicate success.

- "NOT_FOUND\r\n" if the job does not exist or is not reserved by the client.


The "touch" command allows a worker to request more time to work on a job. This is useful for jobs that potentially take a long time, but you still want the benefits of a TTR pulling a job away from an unresponsive worker. A worker may periodically tell the server that it's still alive and processing a job (e.g. it may do this on DEADLINE_SOON). The command postpones the auto release of a reserved job until TTR seconds from when the command is issued.

The touch command looks like this:

touch <id>\r\n

- <id> is the ID of a job reserved by the current connection.

There are two possible responses:

- "TOUCHED\r\n" to indicate success.

- "NOT_FOUND\r\n" if the job does not exist or is not reserved by the client.


The "watch" command adds the named tube to the watch list for the current connection. A reserve command will take a job from any of the tubes in the watch list. For each new connection, the watch list initially consists of one tube, named "default".

watch <tube>\r\n

- <tube> is a name at most 200 bytes. It specifies a tube to add to the watch
  list. If the tube doesn't exist, it will be created.

The reply is:

WATCHING <count>\r\n

- <count> is the integer number of tubes currently in the watch list.


The "ignore" command is for consumers. It removes the named tube from the watch list for the current connection.

ignore <tube>\r\n

The reply is one of:

- "WATCHING <count>\r\n" to indicate success.

  - <count> is the integer number of tubes currently in the watch list.

- "NOT_IGNORED\r\n" if the client attempts to ignore the only tube in its
  watch list.
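The watch/ignore semantics above (a fresh connection watches only "default"; ignoring the last watched tube yields NOT_IGNORED) can be mirrored client-side. This is our own sketch, useful for keeping local state in sync with the server's replies:

```javascript
// Track a connection's watch list, mirroring the server-side rules above.
function WatchList() {
  this.tubes = ["default"]; // every new connection starts watching "default"
}

// watch <tube>: add the tube if not present; reply is WATCHING <count>.
WatchList.prototype.watch = function(tube) {
  if (this.tubes.indexOf(tube) === -1) this.tubes.push(tube);
  return this.tubes.length;
};

// ignore <tube>: remove the tube, unless it is the only one watched,
// in which case the server replies NOT_IGNORED.
WatchList.prototype.ignore = function(tube) {
  if (this.tubes.length === 1 && this.tubes[0] === tube) return "NOT_IGNORED";
  var i = this.tubes.indexOf(tube);
  if (i !== -1) this.tubes.splice(i, 1);
  return this.tubes.length;
};
```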
Other Commands
--------------

The peek commands let the client inspect a job in the system. There are four variations. All but the first operate only on the currently used tube.

- "peek <id>\r\n" - return job <id>.

- "peek-ready\r\n" - return the next ready job.

- "peek-delayed\r\n" - return the delayed job with the shortest delay left.

- "peek-buried\r\n" - return the next job in the list of buried jobs.

There are two possible responses, either a single line:

- "NOT_FOUND\r\n" if the requested job doesn't exist or there are no jobs in
  the requested state.

Or a line followed by a chunk of data, if the command was successful:

FOUND <id> <bytes>\r\n
<data>\r\n

- <id> is the job id.

- <bytes> is an integer indicating the size of the job body, not including
  the trailing "\r\n".

- <data> is the job body -- a sequence of bytes of length <bytes> from the
  previous line.


= Database =
The first thing to do in order to make queries to the database is to open one. This can be done with the Db constructor.

<nowiki>var mongodb = require("mongodb"),
  mongoserver = new mongodb.Server(host, port, server_options),
  db_connector = new mongodb.Db(name, mongoserver, db_options);

db_connector.open(callback);</nowiki>

* host is a server hostname or IP
* port is a MongoDB port, use mongodb.Connection.DEFAULT_PORT for the default (27017)
* server_options see ''Server options''
* name is the database name that needs to be opened; the database will be created automatically if it doesn't yet exist
* db_options see ''DB options''


== Server options ==
Several options can be passed to the Server constructor with the options parameter.

* auto_reconnect - reconnect automatically, default:false
* poolSize - specify the number of connections in the pool, default:5
* socketOptions - a collection of per-socket settings


== Socket options ==
Several options can be set in socketOptions.

* timeout = set seconds before the connection times out '''default:0'''
* noDelay = disables the Nagle algorithm '''default:true'''
::::''The TCP/IP Nagle algorithm was designed to avoid problems with the transmission of small packets, called tinygrams, on slow networks. Its job is to balance the load on a TCP connection, i.e. it tries to spread traffic out evenly. When many small (under 1500 bytes) packets are being transmitted, the algorithm smooths this load peak by delaying packets and distributing them more evenly over time. A consequence of this algorithm can be delays of up to 200 ms in packet delivery. [http://heroes.fragoria.ru/forum/index.php?topic=6161.0 source 1], [http://support.microsoft.com/kb/138831/ru source 2]''
* keepAlive = set if keepAlive is used; default:0, which means no keepAlive, set higher than 0 for keepAlive
* encoding = 'ascii'|'utf8'|'base64' default:null


== DB options ==
Several options can be passed to the Db constructor with the options parameter.

* native_parser - if true, use the native BSON parser
* strict - sets ''strict mode''; if true then existing collections can't be "recreated" etc.
* pk - custom primary key factory to generate _id values (see Custom primary keys).
* forceServerObjectId - generation of the objectid is delegated to the mongodb server instead of the driver, default is false
* retryMiliSeconds - specify the number of milliseconds between connection attempts, default:5000
* numberOfRetries - specify the number of retries for connection attempts, default:3
* reaper - enable/disable the reaper (true/false), default:false
* reaperInterval - specify the number of milliseconds between each reaper attempt, default:10000
* reaperTimeout - specify the number of milliseconds for timing out callbacks that don't return, default:30000
* raw - the driver expects raw bson documents as Buffers, default:false
* logger - object specifying error(), debug() and log() functions


== Подключение к database ==
The kick command applies only to the currently used tube. It moves jobs into
Database может быть открыта с помощью метода '''open'''.
the ready queue. If there are any buried jobs, it will only kick buried jobs.
Otherwise it will kick delayed jobs. It looks like:


<nowiki>db_connector.open(callback);</nowiki>
kick <bound>\r\n


callback is a callback function which gets 2 parameters - an error object (or null, if no errors occured) and a database object.
- <bound> is an integer upper bound on the number of jobs to kick. The server
  will kick no more than <bound> jobs.


Resulting database object can be used for creating and selecting  [[Документация_для_v_1.2#Collections|collections]].
The response is of the form:


<nowiki>db_connector.open(function(err, db){
KICKED <count>\r\n
    db.collection(...);
});</nowiki>


=== Свойства Database ===
- <count> is an integer indicating the number of jobs actually kicked.
* databaseName is the name of the database
* serverConfig includes information about the server (serverConfig.host, serverConfig.port etc.)
* state indicates if the database is connected or not
* strict indicates if ''strict mode'' is on (true) or off (false, default)
* version indicates the version of the MongoDB database


===События Database ===
The kick-job command is a variant of kick that operates with a single job
* close to indicate that the connection to the database was closed
identified by its job id. If the given job id exists and is in a buried or
delayed state, it will be moved to the ready queue of the the same tube where it
currently belongs. The syntax is:


Например:
kick-job <id>\r\n


  <nowiki>db.on("close", function(error){
  - <id> is the job id to kick.
    console.log("Connection to the database was closed!");
});</nowiki>


NB! If auto_reconnect was set to true when creating the server, then the connection will be automatically reopened on next database operation. Nevertheless the close event will be fired.
The response is one of:


== Совместное использование соединений с несколькими базами ==
- "NOT_FOUND\r\n" if the job does not exist or is not in a kickable state. This
Для совместного использования пула соединений между несколькими базами данных экземпляр базы данных имеет метод '''db'''
  can also happen upon internal errors.


  <nowiki>db_connector.db(name)</nowiki>
  - "KICKED\r\n" when the operation succeeded.


this returns a new db instance that shares the connections off the previous instance but will send all commands to the databasename. This allows for better control of resource usage in a multiple database scenario.
The stats-job command gives statistical information about the specified job if
it exists. Its form is:


== Удаление database ==
stats-job <id>\r\n
Для удаления database, вначале необходимо установить на неё курсор. Удаление может быть выполнено методом '''dropDatabase'''
<nowiki>db_connector.open(function(err, db){
    if (err) { throw err; }
    db.dropDatabase(function(err) {
        if (err) { throw err; }
        console.log("database has been dropped!");
    });
});</nowiki>


== Пользовательские первичные ключи ==
  - <id> is a job id.
Каждая запись в database имеет уникальный '''первичный ключ''' называющийся '''_id'''. По умолчанию первичный ключ представляет собой хэш длиной 12 байт, но пользовательский генератор ключей может это переопределить. Если установить '''_id''' вручную, то для добавляемых записей можно использовать что угодно, генератор первичных ключей (primary key factory generates) подставит автосгенерированное значение '''_id''' только для тех записей, где ключ '''_id''' не определен.


Example 1: No need to generate primary key, as its already defined:
The response is one of:


  <nowiki>collection.insert({name:"Daniel", _id:"12345"});</nowiki>
  - "NOT_FOUND\r\n" if the job does not exist.


Example 2: No primary key, so it needs to be generated before save:
- "OK <bytes>\r\n<data>\r\n"


<nowiki>collectionn.insert({name:"Daniel"});</nowiki>
  - <bytes> is the size of the following data section in bytes.


Custom primary key factory is actually an object with method createPK which returns a primary key. The context (value for this) forcreatePK is left untouched.
  - <data> is a sequence of bytes of length <bytes> from the previous line. It
    is a YAML file with statistical information represented a dictionary.


<nowiki>var CustomPKFactory = {
The stats-job data is a YAML file representing a single dictionary of strings
    counter:0,
to scalars. It contains these keys:
    createPk: function() {
        return ++this.counter;
    }
}


  db_connector = new mongodb.Db(name, mongoserver, {pk: CustomPKFactory});</nowiki>
  - "id" is the job id


== Отладка ==
- "tube" is the name of the tube that contains this job
In order to debug the commands sent to the database you can add a logger object to the DB options. Make sure also the propertydoDebug is set.


Пример:
- "state" is "ready" or "delayed" or "reserved" or "buried"


  <nowiki>options = {}
  - "pri" is the priority value set by the put, release, or bury commands.
options.logger = {};
options.logger.doDebug = true;
options.logger.debug = function (message, object) {
    // print the mongo command:
    // "writing command to mongodb"
    console.log(message);


    // print the collection name
- "age" is the time in seconds since the put command that created this job.
    console.log(object.json.collectionName)


    // print the json query sent to MongoDB
- "time-left" is the number of seconds left until the server puts this job
    console.log(object.json.query)
  into the ready queue. This number is only meaningful if the job is
  reserved or delayed. If the job is reserved and this amount of time
  elapses before its state changes, it is considered to have timed out.


    // print the binary object
- "file" is the number of the earliest binlog file containing this job.
    console.log(object.binary)
  If -b wasn't used, this will be 0.
}


  var db = new Db('some_database', new Server(...), options);</nowiki>
  - "reserves" is the number of times this job has been reserved.


= Collections =
- "timeouts" is the number of times this job has timed out during a
Так же смотри:
  reservation.
* [[Документация_для_v_1.2#Database|Database]]
* [[Документация_для_v_1.2#Queries|Queries]]


== Объекты-коллекции ==
- "releases" is the number of times a client has released this job from a
Collection object is a pointer to a specific collection in the [[Документация_для_v_1.2#Database|database]]. If you want to [[Документация_для_v_1.2#Insert|insert]] new records or [[Документация_для_v_1.2#Queries|query]] existing ones then you need to have a valid collection object.
  reservation.


'''Примечание''' Название коллекций не может начинаться или содержать знакa $ (.tes$t - is not allowed)
- "buries" is the number of times this job has been buried.


== Создание коллекции ==
- "kicks" is the number of times this job has been kicked.
Collections can be created with createCollection


<nowiki>db.createCollection([[name[, options]], callback)</nowiki>
The stats-tube command gives statistical information about the specified tube
if it exists. Its form is:


where name is the name of the collection, options a set of configuration parameters and callback is a callback function. db is the database object.
stats-tube <tube>\r\n


The first parameter for the callback is the error object (null if no error) and the second one is the pointer to the newly created collection. If strict mode is on and the table exists, the operation yields in error. With strict mode off (default) the function simple returns the pointer to the existing collection and does not truncate it.
- <tube> is a name at most 200 bytes. Stats will be returned for this tube.


db.createCollection("test", function(err, collection){
The response is one of:
    collection.insert({"test":"value"});
});


== Создание параметров коллекции ==
- "NOT_FOUND\r\n" if the tube does not exist.
Several options can be passed to the createCollection function with options parameter.


  <nowiki>* `raw` - driver returns documents as bson binary Buffer objects, `default:false`</nowiki>
  - "OK <bytes>\r\n<data>\r\n"


=== Collection properties ===
  - <bytes> is the size of the following data section in bytes.
* collectionName is the name of the collection (not including the database name as a prefix)
* db is the pointer to the corresponding databse object


Example of usage:
  - <data> is a sequence of bytes of length <bytes> from the previous line. It
    is a YAML file with statistical information represented a dictionary.


console.log("Collection name: "+collection.collectionName)
The stats-tube data is a YAML file representing a single dictionary of strings
to scalars. It contains these keys:


== Список стандартных коллекций ==
- "name" is the tube's name.
=== Список names ===
Collections can be listed with collectionNames


  <nowiki>db.collectionNames(callback);</nowiki>
  - "current-jobs-urgent" is the number of ready jobs with priority < 1024 in
  this tube.


callback gets two parameters - an error object (if error occured) and an array of collection names as strings.
- "current-jobs-ready" is the number of jobs in the ready queue in this tube.


Collection names also include database name, so a collection named posts in a database blog will be listed as blog.posts.
- "current-jobs-reserved" is the number of jobs reserved by all clients in
  this tube.


Additionally there's system collections which should not be altered without knowing exactly what you are doing, these sollections can be identified with system prefix. For example posts.system.indexes.
- "current-jobs-delayed" is the number of delayed jobs in this tube.


Пример:
- "current-jobs-buried" is the number of buried jobs in this tube.


  <nowiki>var mongodb = require("mongodb"),
  - "total-jobs" is the cumulative count of jobs created in this tube in
    mongoserver = new mongodb.Server("localhost"),
  the current beanstalkd process.
    db_connector = new mongodb.Db("blog", mongoserver);


  db_connector.open(function(err, db){
  - "current-using" is the number of open connections that are currently
    db.collectionNames(function(err, collections){
  using this tube.
        <nowiki>console.log(collections); // ["blog.posts", "blog.system.indexes"]</nowiki>
    });
});</nowiki>


== Список collections ==
- "current-waiting" is the number of open connections that have issued a
Collection objects can be listed with database method collections
  reserve command while watching this tube but not yet received a response.


  db.collections(callback)
  - "current-watching" is the number of open connections that are currently
  watching this tube.


Where callback gets two parameters - an error object (if an error occured) and an array of collection objects.
- "pause" is the number of seconds the tube has been paused for.


== Выбор collections ==
- "cmd-delete" is the cumulative number of delete commands for this tube
Созданная коллекция может быть открыта при помощи метода '''collection'''


  <nowiki>db.collection([[name[, options]], callback);</nowiki>
  - "cmd-pause-tube" is the cumulative number of pause-tube commands for this
  tube.


Если strict mode выключен, тогда в случае отсутствия коллекции, новая коллекция создастся автоматически.
- "pause-time-left" is the number of seconds until the tube is un-paused.


== Selecting collections options ==
The stats command gives statistical information about the system as a whole.
Several options can be passed to the collection function with options parameter.
Its form is:


* `raw` - driver returns documents as bson binary Buffer objects, `default:false`
stats\r\n


== Renaming collections ==
The server will respond:
A collection can be renamed with collection method rename


collection.rename(new_name, callback);
OK <bytes>\r\n
<data>\r\n


== Removing records from collections ==
- <bytes> is the size of the following data section in bytes.
Records can be erased from a collection with remove


  <nowiki>collection.remove([[query[, options]], callback]);</nowiki>
  - <data> is a sequence of bytes of length <bytes> from the previous line. It
  is a YAML file with statistical information represented a dictionary.


Where
The stats data for the system is a YAML file representing a single dictionary
of strings to scalars. Entries described as "cumulative" are reset when the
beanstalkd process starts; they are not stored on disk with the -b flag.


* query is the query that records to be removed need to match. If not set all records will be removed
- "current-jobs-urgent" is the number of ready jobs with priority < 1024.
* options indicate advanced options. For example use {safe: true} when using callbacks
* callback callback function that gets two parameters - an error object (if an error occured) and the count of removed records


== Removing collections ==
- "current-jobs-ready" is the number of jobs in the ready queue.
A collection can be dropped with drop


  collection.drop(callback);
  - "current-jobs-reserved" is the number of jobs reserved by all clients.


or with dropCollection
- "current-jobs-delayed" is the number of delayed jobs.


  db.dropCollection(collection_name, callback)
  - "current-jobs-buried" is the number of buried jobs.


= Inserting and updating =
- "cmd-put" is the cumulative number of put commands.
See also:


* [[Документация_для_v_1.2#Database|Database]]
- "cmd-peek" is the cumulative number of peek commands.
* [[Документация_для_v_1.2#Collections|Collections]]


== Insert ==
- "cmd-peek-ready" is the cumulative number of peek-ready commands.
Records can be inserted to a collection with insert


  <nowiki>collection.insert(docs[, options, callback])</nowiki>
  - "cmd-peek-delayed" is the cumulative number of peek-delayed commands.


Where
- "cmd-peek-buried" is the cumulative number of peek-buried commands.


* docs is a single document object or an array of documents
- "cmd-reserve" is the cumulative number of reserve commands.
* options is an object of parameters, if you use a callback, set safe to true - this way the callback is executed ''after'' the record is saved to the database, if safe is false (default) callback is fired immediately and thus doesn't make much sense.
* callback - callback function to run after the record is inserted. Set safe to true in options when using callback. First parameter for callback is the error object (if an error occured) and the second is an array of records inserted.


For example
- "cmd-use" is the cumulative number of use commands.


  var document = {name:"David", title:"About MongoDB"};
  - "cmd-watch" is the cumulative number of watch commands.
collection.insert(document, {safe: true}, function(err, records){
    <nowiki>console.log("Record added as "+records[0]._id);</nowiki>
});


If trying to insert a record with an existing _id value, then the operation yields in error.
- "cmd-ignore" is the cumulative number of ignore commands.


  collection.insert({_id:1}, {safe:true}, function(err, doc){
  - "cmd-delete" is the cumulative number of delete commands.
    // no error, inserted new document, with _id=1
    collection.insert({_id:1}, {safe:true}, function(err, doc){
        // error occured since _id=1 already existed
    });
});


== Save ==
- "cmd-release" is the cumulative number of release commands.
Shorthand for insert/update is save - if _id value set, the record is updated if it exists or inserted if it does not; if the _id value is not set, then the record is inserted as a new one.


  collection.save({_id:"abc", user:"David"},{safe:true}, callback)
  - "cmd-bury" is the cumulative number of bury commands.


callback gets two parameters - an error object (if an error occured) and the record if it was inserted or 1 if the record was updated.
- "cmd-kick" is the cumulative number of kick commands.


== Update ==
- "cmd-stats" is the cumulative number of stats commands.
Updates can be done with update


  <nowiki>collection.update(criteria, update[, options[, callback]]);</nowiki>
  - "cmd-stats-job" is the cumulative number of stats-job commands.


Where
- "cmd-stats-tube" is the cumulative number of stats-tube commands.


* criteria is a query object to find records that need to be updated (see [[Документация_для_v_1.2#Queries|Queries]])
- "cmd-list-tubes" is the cumulative number of list-tubes commands.
* update is the replacement object
* options is an options object (see below)
* callback is the callback to be run after the records are updated. Has two parameters, the first is an error object (if error occured), the second is the count of records that were modified.


=== Update options ===
- "cmd-list-tube-used" is the cumulative number of list-tube-used commands.
There are several option values that can be used with an update


* safe - run callback only after the update is done, defaults to false
- "cmd-list-tubes-watched" is the cumulative number of list-tubes-watched
* multi - update all records that match the query object, default is false (only the first one found is updated)
  commands.
* upsert - if true and no records match the query, insert update as a new record
* raw - driver returns updated document as bson binary Buffer, default:false


=== Replacement object ===
- "cmd-pause-tube" is the cumulative number of pause-tube commands.
If the replacement object is a document, the matching documents will be replaced (except the _id values if no _id is set).


  collection.update({_id:"123"}, {author:"Jessica", title:"Mongo facts"});
  - "job-timeouts" is the cumulative count of times a job has timed out.


The example above will replace the document contents of id=123 with the replacement object.
- "total-jobs" is the cumulative count of jobs created.


To update only selected fields, $set operator needs to be used. Following replacement object replaces author value but leaves everything else intact.
- "max-job-size" is the maximum number of bytes in a job.


  collection.update({_id:"123"}, {$set: {author:"Jessica"}});
  - "current-tubes" is the number of currently-existing tubes.


See [http://www.mongodb.org/display/DOCS/Updating MongoDB documentation] for all possible operators.
- "current-connections" is the number of currently open connections.


== Find and Modify ==
- "current-producers" is the number of open connections that have each
To update and retrieve the contents for one single record you can use findAndModify.
  issued at least one put command.


  <nowiki>collection.findAndModify(criteria, sort, update[, options, callback])</nowiki>
  - "current-workers" is the number of open connections that have each issued
  at least one reserve command.


Where
- "current-waiting" is the number of open connections that have issued a
  reserve command but not yet received a response.


* criteria is the query object to find the record
- "total-connections" is the cumulative count of connections.
* sort indicates the order of the matches if there's more than one matching record. The first record on the result set will be used. See [https://github.com/mongodb/node-mongodb-native/blob/1.2-dev/docs/queries.md Queries->find->options->sort] for the format.
* update is the replacement object
* options define the behavior of the function
* callback is the function to run after the update is done. Has two parameters - error object (if error occured) and the record that was updated.


=== Options ===
- "pid" is the process id of the server.
Options object can be used for the following options:


* remove - if set to true (default is false), removes the record from the collection. Callback function still gets the object but it doesn't exist in the collection any more.
- "version" is the version string of the server.
* new - if set to true, callback function returns the modified record. Default is false (original record is returned)
* upsert - if set to true and no record matched to the query, replacement object is inserted as a new record


=== Example ===
  - "rusage-utime" is the cumulative user CPU time of this process in seconds
  <nowiki>var mongodb = require('mongodb'),
  and microseconds.
    server = new mongodb.Server("127.0.0.1", 27017, {});


  new mongodb.Db('test', server, {}).open(function (error, client) {
  - "rusage-stime" is the cumulative system CPU time of this process in
    if (error) throw error;
  seconds and microseconds.
    var collection = new mongodb.Collection(client, 'test_collection');
    collection.findAndModify(
        {hello: 'world'}, // query
        <nowiki>[['_id','asc']], </nowiki> // sort order
        {$set: {hi: 'there'}}, // replacement, replaces only the field "hi"
        {}, // options
        function(err, object) {
            if (err){
                console.warn(err.message);  // returns error if no matching object found
            }else{
                console.dir(object);
            }
        });
    });
</nowiki>


= Queries =
- "uptime" is the number of seconds since this server process started running.
See also:


* [[Документация_для_v_1.2#Database|Database]]
- "binlog-oldest-index" is the index of the oldest binlog file needed to
* [[Документация_для_v_1.2#Collections|Collections]]
  store the current jobs.


== Выполнение запросов при помощи find() ==
- "binlog-current-index" is the index of the current binlog file being
[[Документация_для_v_1.2#Collections|Collections]] can be queried with find.
  written to. If binlog is not active this value will be 0.


  <nowiki>collection.find(query[[[, fields], options], callback]);</nowiki>
  - "binlog-max-size" is the maximum size in bytes a binlog file is allowed
  to get before a new binlog file is opened.


Where
- "binlog-records-written" is the cumulative number of records written
  to the binlog.


* query - is a query object, defining the conditions the documents need to apply
- "binlog-records-migrated" is the cumulative number of records written
* fields - indicates which fields should be included in the response (default is all)
  as part of compaction.
* options - defines extra logic (sorting options, paging etc.)
* raw - driver returns documents as bson binary Buffer objects, default:false


The result for the query is actually a cursor object. This can be used directly or converted to an array.
- "id" is a random id string for this server process, generated when each
  beanstalkd process starts.


  var cursor = collection.find({});
  - "hostname" the hostname of the machine as determined by uname.
cursor.each(...);


To indicate which fields must or must no be returned fields value can be used. For example the following fields value
The list-tubes command returns a list of all existing tubes. Its form is:


{
list-tubes\r\n
    "name": true,
    "title": true
}


retrieves fields name and title (and as a default also _id) but not any others.
The response is:


== Find first occurence with findOne() ==
OK <bytes>\r\n
findOne is a convinence method finding and returning the first match of a query while regular find returns a cursor object instead. Use it when you expect only one record, for example when querying with _id or another unique property.
<data>\r\n


  <nowiki>collection.findOne([query], callback)</nowiki>
  - <bytes> is the size of the following data section in bytes.


Where
- <data> is a sequence of bytes of length <bytes> from the previous line. It
  is a YAML file containing all tube names as a list of strings.


* query is a query object or an _id value
The list-tube-used command returns the tube currently being used by the
* callback has two parameters - an error object (if an error occured) and the document object.
client. Its form is:


Example:
list-tube-used\r\n


collection.findOne({_id: doc_id}, function(err, document) {
The response is:
    console.log(document.name);
});


== Значения _id ==
USING <tube>\r\n
Default _id values are 12 byte binary hashes. You can alter the format with custom Primary Key factories (see ''[[Документация_для_v_1.2#Custom_primary_keys|Custom Primarky Keys]]'' in [[Документация_для_v_1.2#Database|Database]]).


In order to treat these binary _id values as strings it would be wise to convert binary values to hex strings. This can be done withtoHexString property.
- <tube> is the name of the tube being used.


var idHex = document._id.toHexString();
The list-tubes-watched command returns a list tubes currently being watched by
the client. Its form is:


Hex strings can be reverted back to binary (for example to perform queries) with ObjectID.createFromHexString
list-tubes-watched\r\n


{_id: ObjectID.createFromHexString(idHex)}
The response is:


When inserting new records it is possible to use custom _id values as well which do not need to be binary hashes, for example strings.
OK <bytes>\r\n
<data>\r\n


  collection.insert({_id: "abc", ...});
  - <bytes> is the size of the following data section in bytes.
collection.findOne({_id: "abc"},...);


This way it is not necessary to convert _id values to hex strings and back.
- <data> is a sequence of bytes of length <bytes> from the previous line. It
  is a YAML file containing watched tube names as a list of strings.


== Объект Query ==
The quit command simply closes the connection. Its form is:
The simplest query object is an empty one {} which matches every record in the database.


To make a simple query where one field must match to a defined value, one can do it as simply as
quit\r\n


{fieldname: "fieldvalue"} 
The pause-tube command can delay any new job being reserved for a given time. Its form is:


This query matches all the records that a) have fields called ''fieldname'' and b) its value is ''"fieldvalue"''.
pause-tube <tube-name> <delay>\r\n


For example if we have a collection of blog posts where the structure of the records is {title, author, contents} and we want to retrieve all the posts for a specific author then we can do it like this:
- <tube> is the tube to pause


  posts = pointer_to_collection;
  - <delay> is an integer number of seconds to wait before reserving any more
posts.find({author:"Daniel"}).toArray(function(err, results){
  jobs from the queue
    console.log(results); // output all records
});


If the queried field is inside an object then that can be queried also. For example if we have a record with the following structure:
There are two possible responses:


  {
  - "PAUSED\r\n" to indicate success.
    user: {
        name: "Daniel"
    }
}


Then we can query the "name" field like this: {"user.name":"Daniel"}
  - "NOT_FOUND\r\n" if the tube does not exist.
 
=== AND ===
If more than one fieldname is specified, then it's an AND query
 
{
    key1: "value1",
    key2: "value2"
}
 
This query matches all records where ''key1'' is ''"value1"'' and ''key2'' is ''"value2"''.
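The AND behaviour described above can be sketched in plain JavaScript (this is not driver code - the matcher and sample documents are invented for illustration):

```javascript
// AND semantics: every field in the query must equal the same field
// in the document for the document to match.
function matchesAnd(doc, query) {
  return Object.keys(query).every(function (key) {
    return doc[key] === query[key];
  });
}

var docs = [
  { key1: "value1", key2: "value2" },
  { key1: "value1", key2: "other" }
];

var result = docs.filter(function (doc) {
  return matchesAnd(doc, { key1: "value1", key2: "value2" });
});
// result contains only the first document
```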
 
=== OR ===
OR queries are a bit trickier but doable with the $or operator. The $or operator takes an array of query objects, and at least one of these must match a document before it is retrieved
 
{
    <nowiki>$or:[</nowiki>
        {author:"Daniel"},
        {author:"Jessica"}
    ]
}
 
This query matches all the documents where the author is Daniel or Jessica.
 
To mix AND and OR queries, you just need to use $or as one of the regular query fields.
 
{
    title:"MongoDB",
    <nowiki>$or:[</nowiki>
        {author:"Daniel"},
        {author:"Jessica"}
    ]
}
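The combined AND/OR query above can be sketched in plain JavaScript (illustration only, not driver code; the sample posts are invented):

```javascript
// $or semantics: at least one sub-query must match; all other
// top-level fields must match as well (implicit AND).
function matchesQuery(doc, query) {
  return Object.keys(query).every(function (key) {
    if (key === "$or") {
      return query.$or.some(function (sub) { return matchesQuery(doc, sub); });
    }
    return doc[key] === query[key];
  });
}

var posts = [
  { title: "MongoDB", author: "Daniel" },
  { title: "MongoDB", author: "Mike" },
  { title: "CouchDB", author: "Jessica" }
];

var query = { title: "MongoDB", $or: [{ author: "Daniel" }, { author: "Jessica" }] };
var found = posts.filter(function (p) { return matchesQuery(p, query); });
// only the first post matches: title is "MongoDB" AND the author is Daniel or Jessica
```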
 
=== Conditionals ===
Conditional operators <nowiki><</nowiki>, <nowiki><=</nowiki>, >, >= and != can't be used directly, as the query object format doesn't support them, but the same can be achieved with their aliases $lt, $lte, $gt, $gte and $ne. When a field value needs to match a conditional, the condition must be wrapped in a separate object.
 
{"fieldname":{$gte:100}}
 
This query defines that ''fieldname'' must be greater than or equal to 100.
 
Conditionals can also be mixed to create ranges.
 
{"fieldname": {$gte:10, $lte:100}}
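The range semantics can be sketched in plain JavaScript - all conditionals inside the wrapper object must hold at once (illustration only, sample values invented):

```javascript
// Emulate {$gte: ..., $lte: ...}: every conditional present must hold.
function inRange(value, cond) {
  if ("$gte" in cond && !(value >= cond.$gte)) return false;
  if ("$lte" in cond && !(value <= cond.$lte)) return false;
  return true;
}

var values = [5, 10, 50, 100, 101];
var matched = values.filter(function (v) {
  return inRange(v, { $gte: 10, $lte: 100 });
});
// matched is [10, 50, 100]
```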
 
=== Regular expressions in queries ===
Queried field values can also be matched with regular expressions
 
{author:/^Daniel/}
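A regular-expression query value matches when the regex tests true against the stored string; the same test can be run locally with a plain JavaScript RegExp (sample names invented for illustration):

```javascript
// /^Daniel/ matches any string that starts with "Daniel".
var authors = ["Daniel Defoe", "Daniela", "Jessica"];
var matchedAuthors = authors.filter(function (name) {
  return /^Daniel/.test(name);
});
// matchedAuthors is ["Daniel Defoe", "Daniela"]
```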
 
=== Специальные операторы в запросах ===
In addition to OR and conditional there's some more operators:
 
* $in - specifies an array of possible matches, <nowiki>{"name":{$in:[1,2,3]}}</nowiki>
* $nin - specifies an array of unwanted matches
* $all - the array value must contain all of the specified elements, <nowiki>{"name":{$all:[1,2,3]}}</nowiki>
* $exists - checks for existence of a field {"name":{$exists:true}}
* $mod - checks a modulo: <nowiki>{"name":{$mod:[3,2]}}</nowiki> is the same as "name" % 3 == 2
* $size - checks the size of an array value {"name": {$size:2}} matches arrays ''name'' with 2 elements
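The operators above have straightforward plain-JavaScript equivalents; a minimal sketch (illustration only, not driver code):

```javascript
// Plain-JavaScript equivalents of $in, $nin, $all, $exists, $mod and $size.
var $in = function (value, list) { return list.indexOf(value) !== -1; };
var $nin = function (value, list) { return list.indexOf(value) === -1; };
var $all = function (arr, list) {
  return list.every(function (v) { return arr.indexOf(v) !== -1; });
};
var $exists = function (doc, field) { return field in doc; };
var $mod = function (value, divisor, remainder) { return value % divisor === remainder; };
var $size = function (arr, n) { return arr.length === n; };

// {"name":{$mod:[3,2]}} is the same as name % 3 == 2:
var modOk = $mod(5, 3, 2); // true, since 5 % 3 == 2
```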
 
== Queries inside objects and arrays ==
If you have a document with nested objects/arrays then the keys inside these nested objects can still be used for queries.
 
For example with the following document
 
{
    "_id": idvalue,
    "author":{
        "firstname":"Daniel",
        "lastname": "Defoe"
    },
    <nowiki>"books":[</nowiki>
        {
            "title":"Robinson Crusoe",
            "year": 1714
        }
    ]
  }
 
not only the _id field can be used as a query field - the firstname and even the title can be used as well. This is done by writing nested field names as strings, concatenated with periods.
 
collection.find({"author.firstname":"Daniel"})
 
This works even inside arrays
 
collection.find({"books.year":1714})
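The dot-notation lookup can be sketched in plain JavaScript: the path walks nested objects, and for arrays the match succeeds if any element matches (illustration only; the helper and sample document are invented):

```javascript
// Resolve a dotted path like "author.firstname" or "books.year".
// When an array is encountered, the remaining key is applied to each element.
function getPath(doc, path) {
  return path.split(".").reduce(function (value, key) {
    if (Array.isArray(value)) {
      return value.map(function (item) { return item && item[key]; });
    }
    return value && value[key];
  }, doc);
}

var book = {
  author: { firstname: "Daniel", lastname: "Defoe" },
  books: [{ title: "Robinson Crusoe", year: 1714 }]
};

var first = getPath(book, "author.firstname"); // "Daniel"
var years = getPath(book, "books.year");       // [1714]
```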
 
== Query options ==
Query options define the behavior of the query.
 
var options = {
    "limit": 20,
    "skip": 10,
    "sort": "title"
}
 
collection.find({}, options).toArray(...);
 
=== Paging ===
Paging can be achieved with option parameters limit and skip
 
{
    "limit": 20,
    "skip": 10
}
 
skips the first 10 records and retrieves the following 20
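The skip/limit behaviour can be sketched in plain JavaScript: drop the first `skip` records, then take `limit` (illustration only, sample data invented):

```javascript
// skip/limit paging over an ordered result set.
function page(records, skip, limit) {
  return records.slice(skip, skip + limit);
}

var records = [];
for (var i = 0; i < 50; i++) records.push(i);

var pageTwo = page(records, 10, 20);
// pageTwo holds records 10..29 (20 elements, the first 10 skipped)
```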
 
=== Sorting ===
Sorting can be achieved with the option parameter sort, which takes an array of sort preferences
 
{
    <nowiki>"sort": [['field1','asc'], ['field2','desc']]</nowiki>
}
 
With single ascending field the array can be replaced with the name of the field.
 
{
    "sort": "name"
}
 
=== Explain ===
Option parameter explain turns the query into an explain query.
 
== Cursors ==
Cursor objects are the results for queries and can be used to fetch individual records from the database.
 
=== nextObject ===
cursor.nextObject(function(err, doc){}) retrieves the next record from the database. If doc is null, then there weren't any more records.
 
=== each ===
cursor.each(function(err, doc){}) retrieves all matching records one by one.
 
=== toArray ===
cursor.toArray(function(err, docs){}) converts the cursor object into an array of all the matching records. Probably the most convenient way to retrieve results but be careful with large datasets as every record is loaded into memory.
 
collection.find().toArray(function(err, docs){
    console.log("retrieved records:");
    console.log(docs);
});
 
=== rewind ===
cursor.rewind() resets the internal pointer in the cursor to the beginning.
 
== Counting matches ==
Counting total number of found matches can be done against cursors with method count.
 
cursor.count(callback)
 
Where
 
* callback is the callback function with two parameters - an error object (if an error occurred) and the number of matches as an integer.
 
Example
 
cursor.count(function(err, count){
    console.log("Total matches: "+count);
});
 
= Replicasets =
== Introduction ==
A replicaset is the asynchronous master/slave replication added to MongoDB that takes care of all the failover and recovery for the member nodes. According to the MongoDB documentation a replicaset is
 
* Two or more nodes that are copies of each other
* Automatic assignment of a primary(master) node if none is available
* Drivers that automatically detect the new master and send writes to it
 
More information at [http://www.mongodb.org/display/DOCS/Replica+Sets Replicasets]
 
== Driver usage ==
To create a new replicaset follow the instructions on the MongoDB site to set up the config and the replicaset instances. Then connect using the driver:
 
<nowiki>var replSet = new ReplSetServers( [ </nowiki>
    new Server( '127.0.0.1', 30000, { auto_reconnect: true } ),
    new Server( '127.0.0.1', 30001, { auto_reconnect: true } ),
    new Server( '127.0.0.1', 30002, { auto_reconnect: true } )
  ],
  {rs_name:RS.name}
);
 
var db = new Db('integration_test_', replSet);
db.open(function(err, p_db) {
  // Do your app stuff :)
})
 
The ReplSetServers object has the following parameters
 
var replSet = new ReplSetServers(servers, options)
 
Where
 
* servers is an array of Server objects
* options can contain the following options
 
== Replicaset options ==
Several options can be passed to the Replicaset constructor with options parameter.
 
* rs_name is the name of the replicaset you configured when you started the server; you can have multiple replicasets running on your servers.
* read_secondary sets the driver to read from secondary servers (slaves) instead of only from the primary (master) server.
* socketOptions - a collection of per-socket settings
 
== Socket options ==
Several options can be set for the socketOptions.
 
* timeout - seconds before the connection times out (default: 0)
* noDelay - disables the Nagle algorithm (default: true)
* keepAlive - set higher than 0 to enable keepAlive; 0 means no keepAlive (default: 0)
* encoding - 'ascii'|'utf8'|'base64' (default: null)
 
= Indexes =
Indexes are needed to make queries faster. For example if you need to find records by a field named ''username'' and the field has a related index set, then the query will be a lot faster compared to if the index was not present.
 
See [http://www.mongodb.org/display/DOCS/Indexes MongoDB documentation] for details.
 
== Create indexes with createIndex() ==
createIndex adds a new index to a collection. For checking if the index was already set, use ensureIndex instead.
 
<nowiki>collection.createIndex(index[, options], callback)</nowiki>
 
or
 
<nowiki>db.createIndex(collectionname, index[, options], callback)</nowiki>
 
where
 
* index is the field or fields to be indexed. See ''index field''
* options are index options, for example {sparse: true} to include only records that have the indexed field set, or {unique: true} for unique indexes. If options is a boolean value, it indicates whether the index is unique.
* callback gets two parameters - an error object (if an error occurred) and the name of the newly created index
 
== Ensure indexes with ensureIndex() ==
Same as createIndex with the difference that the index is checked for existence before adding to avoid duplicate indexes.
 
== Index field ==
Index field can be a simple string like "username" to index certain field (in this case, a field named as ''username'').
 
collection.ensureIndex("username",callback)
 
It is possible to index fields inside nested objects, for example "user.firstname" to index field named ''firstname'' inside a document named ''user''.
 
collection.ensureIndex("user.firstname",callback)
 
It is also possible to create mixed indexes to include several fields at once.
 
collection.ensureIndex({firstname:1, lastname:1}, callback)
 
or with tuples
 
<nowiki>collection.ensureIndex([["firstname", 1], ["lastname", 1]], callback)</nowiki>
 
The number value indicates direction - 1 means ascending and -1 means descending. For example if you have documents with a field ''date'' and you want to sort these records in descending order, you might want to add the corresponding index
 
collection.ensureIndex({date:-1}, callback)
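The object form and the tuple form describe the same index. As an illustration, a small converter between the two (the name toTuples is made up for this example):

```javascript
// Illustrative converter: {field: direction} object form of an index spec
// into the equivalent [["field", direction]] tuple form.
function toTuples(indexSpec) {
  return Object.keys(indexSpec).map(function (field) {
    return [field, indexSpec[field]];
  });
}

console.log(toTuples({ firstname: 1, lastname: 1 }));
// [ [ 'firstname', 1 ], [ 'lastname', 1 ] ]
console.log(toTuples({ date: -1 }));
// [ [ 'date', -1 ] ]
```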
 
== Remove indexes with dropIndexes() ==
All indexes can be dropped at once with dropIndexes
 
collection.dropIndexes(callback)
 
callback gets two parameters - an error object (if an error occurred) and a boolean value true if the operation succeeded.
 
== Get index information with indexInformation() ==
indexInformation can be used to fetch some useful information about collection indexes.
 
collection.indexInformation(callback)
 
Where callback gets two parameters - an error object (if an error occurred) and an index information object.
 
The keys in the index object are the index names and the values are tuples of included fields.
 
For example if a collection has two indexes - the default ascending index on the _id field and an additional descending index on the "username" field - then the index information object would look like the following
 
{
    <nowiki>"_id":[["_id", 1]],</nowiki>
    <nowiki>"username_-1":[["username", -1]]</nowiki>
}
 
 
= GridStore =
GridFS is a scalable MongoDB ''filesystem'' for storing and retrieving large files. The default limit for a MongoDB record is 16MB, so to store data that is larger than this limit, GridFS can be used. GridFS shards the data into smaller chunks automatically. See [http://www.mongodb.org/display/DOCS/GridFS+Specification MongoDB documentation] for details.
 
GridStore is a single file inside GridFS that can be managed by the script.
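Conceptually, the chunking can be sketched like this in plain JavaScript (illustrative only, not the driver's internal code):

```javascript
// Illustrative sketch of how data larger than the chunk size is split
// into fixed-size chunks, as GridFS does when storing a file.
function splitIntoChunks(buffer, chunkSize) {
  var chunks = [];
  for (var offset = 0; offset < buffer.length; offset += chunkSize) {
    chunks.push(buffer.slice(offset, offset + chunkSize));
  }
  return chunks;
}

var data = Buffer.alloc(10 * 1024, 1);        // 10 KiB of data
var chunks = splitIntoChunks(data, 4 * 1024); // 4 KiB chunks
console.log(chunks.length);                   // 3 (4 KiB + 4 KiB + 2 KiB)
```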
 
== Open GridStore ==
Opening a GridStore (a single file in GridFS) is a bit similar to opening a database. At first you need to create a GridStore object and then open it.
 
<nowiki>var gs = new mongodb.GridStore(db, filename, mode[, options])</nowiki>
 
Where
 
* db is the database object
* filename is the name of the file in GridFS that needs to be accessed/created
* mode indicates the operation, can be one of:
** "r" (Read): Looks for the file information in fs.files collection, or creates a new id for this object.
** "w" (Write): Erases all chunks if the file already exists.
** "w+" (Append): Finds the last chunk, and keeps writing after it.
* options can be used to specify some metadata for the file, for example content_type, metadata and chunk_size
 
Example:
 
var gs = new mongodb.GridStore(db, "test.png", "w", {
    "content_type": "image/png",
    "metadata":{
        "author": "Daniel"
    },
    "chunk_size": 1024*4
});
 
When the GridStore object is created, it needs to be opened.
 
gs.open(callback);
 
callback gets two parameters - an error object (if an error occurred) and the GridStore object.
 
Opened GridStore object has a set of useful properties
 
* gs.length - length of the file in bytes
* gs.contentType - the content type for the file
* gs.uploadDate - when the file was uploaded
* gs.metadata - metadata that was saved with the file
* gs.chunkSize - chunk size
 
Example
 
gs.open(function(err, gs){
    console.log("this file was uploaded at "+gs.uploadDate);
});
 
== Writing to GridStore ==
Writing can be done with write
 
gs.write(data, callback)
 
where data is a Buffer or a string; callback gets two parameters - an error object (if an error occurred) and a result value which indicates if the write was successful or not.
 
While the GridStore is not closed, every write is appended to the opened GridStore.
 
== Writing a file to GridStore ==
This function opens the GridStore, streams the contents of the file into GridStore, and closes the GridStore.
 
gs.writeFile( file, callback )
 
where
 
* file is a file descriptor, or a string file path
* callback is a function with two parameters - an error object (if an error occurred) and the GridStore object.
 
== Reading from GridStore ==
Reading from GridStore can be done with read
 
<nowiki>gs.read([size], callback)</nowiki>
 
where
 
* size is the length of the data to be read
* callback is a callback function with two parameters - an error object (if an error occurred) and data (binary string)
 
== Streaming from GridStore ==
You can stream data as it comes from the database using stream
 
<nowiki>gs.stream([autoclose=false])</nowiki>
 
where
 
* autoclose - if true, the current GridStore will be closed when EOF is reached and the 'close' event will be fired
 
The function returns a [http://nodejs.org/docs/v0.4.12/api/streams.html#readable_Stream read stream] based on this GridStore file. It supports the events 'read', 'error', 'close' and 'end'.
 
== Delete a GridStore ==
GridStore files can be unlinked with unlink
 
mongodb.GridStore.unlink(db, name, callback)
 
Where
 
* db is the database object
* name is either the name of a GridStore object or an array of GridStore object names
* callback is the callback function
 
== Closing the GridStore ==
GridStore needs to be closed after usage. This can be done with close
 
gs.close(callback)
 
== Check the existence of a GridStore file ==
Checking if a file exists in GridFS can be done with exist
 
mongodb.GridStore.exist(db, filename, callback)
 
Where
 
* db is the database object
* filename is the name of the file to be checked or a regular expression
* callback is a callback function with two parameters - an error object (if an error occurred) and a boolean value indicating if the file exists or not
 
== Seeking in a GridStore ==
Seeking can be done with seek
 
gs.seek(position);
 
This function moves the internal pointer to the specified position.


= Beanstalk Protocol =

== Protocol ==


The beanstalk protocol runs over TCP using ASCII encoding. Clients connect, send commands and data, wait for responses, and close the connection. For each connection, the server processes commands serially in the order in which they were received and sends responses in the same order. All integers in the protocol are formatted in decimal and (unless otherwise indicated) nonnegative.

Names, in this protocol, are ASCII strings. They may contain letters (A-Z and a-z), numerals (0-9), hyphen ("-"), plus ("+"), slash ("/"), semicolon (";"), dot ("."), dollar-sign ("$"), underscore ("_"), and parentheses ("(" and ")"), but they may not begin with a hyphen. They are terminated by white space (either a space char or end of line). Each name must be at least one character long.

The protocol contains two kinds of data: text lines and unstructured chunks of data. Text lines are used for client commands and server responses. Chunks are used to transfer job bodies and stats information. Each job body is an opaque sequence of bytes. The server never inspects or modifies a job body and always sends it back in its original form. It is up to the clients to agree on a meaningful interpretation of job bodies.

The client may issue the "quit" command, or simply close the TCP connection when it no longer has use for the server. However, beanstalkd performs very well with a large number of open connections, so it is usually better for the client to keep its connection open and reuse it as much as possible. This also avoids the overhead of establishing new TCP connections.

If a client violates the protocol (such as by sending a request that is not well-formed or a command that does not exist) or if the server has an error, the server will reply with one of the following error messages:

- "OUT_OF_MEMORY\r\n" The server cannot allocate enough memory for the job.
  The client should try again later.
- "INTERNAL_ERROR\r\n" This indicates a bug in the server. It should never
  happen. If it does happen, please report it at
  http://groups.google.com/group/beanstalk-talk.
- "BAD_FORMAT\r\n" The client sent a command line that was not well-formed.
  This can happen if the line does not end with \r\n, if non-numeric
  characters occur where an integer is expected, if the wrong number of
  arguments are present, or if the command line is mal-formed in any other
  way.
- "UNKNOWN_COMMAND\r\n" The client sent a command that the server does not
  know.

These error responses will not be listed in this document for individual commands in the following sections, but they are implicitly included in the description of all commands. Clients should be prepared to receive an error response after any command.

As a last resort, if the server has a serious error that prevents it from continuing service to the current client, the server will close the connection.

== Job Lifecycle ==


A job in beanstalk gets created by a client with the "put" command. During its life it can be in one of four states: "ready", "reserved", "delayed", or "buried". After the put command, a job typically starts out ready. It waits in the ready queue until a worker comes along and runs the "reserve" command. If this job is next in the queue, it will be reserved for the worker. The worker will execute the job; when it is finished the worker will send a "delete" command to delete the job.

Here is a picture of the typical job lifecycle:


  put            reserve               delete
 -----> [READY] ---------> [RESERVED] --------> *poof*


Here is a picture with more possibilities:


  put with delay               release with delay
 ----------------> [DELAYED] <------------.
                       |                   |
                       | (time passes)     |
                       |                   |
  put                  v     reserve       |       delete
 -----------------> [READY] ---------> [RESERVED] --------> *poof*
                      ^  ^                |  |
                      |   \  release      |  |
                      |    `-------------'   |
                      |                      |
                      | kick                 |
                      |                      |
                      |       bury           |
                   [BURIED] <---------------'
                      |
                      |  delete
                       `--------> *poof*


The system has one or more tubes. Each tube consists of a ready queue and a delay queue. Each job spends its entire life in one tube. Consumers can show interest in tubes by sending the "watch" command; they can show disinterest by sending the "ignore" command. This set of interesting tubes is said to be a consumer's "watch list". When a client reserves a job, it may come from any of the tubes in its watch list.

When a client connects, its watch list is initially just the tube named "default". If it submits jobs without having sent a "use" command, they will live in the tube named "default".

Tubes are created on demand whenever they are referenced. If a tube is empty (that is, it contains no ready, delayed, or buried jobs) and no client refers to it, it will be deleted.

== Producer Commands ==


The "put" command is for any process that wants to insert a job into the queue. It comprises a command line followed by the job body:

put <pri> <delay> <ttr> <bytes>\r\n<data>\r\n

It inserts a job into the client's currently used tube (see the "use" command below).

- <pri> is an integer < 2**32. Jobs with smaller priority values will be
  scheduled before jobs with larger priorities. The most urgent priority is 0;
  the least urgent priority is 4,294,967,295.
- <delay> is an integer number of seconds to wait before putting the job in
  the ready queue. The job will be in the "delayed" state during this time.
- <ttr> -- time to run -- is an integer number of seconds to allow a worker
  to run this job. This time is counted from the moment a worker reserves
  this job. If the worker does not delete, release, or bury the job within
  <ttr> seconds, the job will time out and the server will release the job.
  The minimum ttr is 1. If the client sends 0, the server will silently
  increase the ttr to 1.
- <bytes> is an integer indicating the size of the job body, not including the
  trailing "\r\n". This value must be less than max-job-size (default: 2**16).
- <data> is the job body -- a sequence of bytes of length <bytes> from the
  previous line.
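As a sketch, serializing a put command line plus job body as described above (the name buildPut is made up for this example, not part of any client library):

```javascript
// Illustrative serializer for the put command. <bytes> counts only the
// body; the trailing \r\n after the body is not included in it.
function buildPut(pri, delay, ttr, body) {
  var bytes = Buffer.byteLength(body);
  return "put " + pri + " " + delay + " " + ttr + " " + bytes + "\r\n" +
         body + "\r\n";
}

console.log(JSON.stringify(buildPut(1024, 0, 60, "hello")));
// "put 1024 0 60 5\r\nhello\r\n"
```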

After sending the command line and body, the client waits for a reply, which may be:

- "INSERTED <id>\r\n" to indicate success.
  - <id> is the integer id of the new job
- "BURIED <id>\r\n" if the server ran out of memory trying to grow the
  priority queue data structure.
  - <id> is the integer id of the new job
- "EXPECTED_CRLF\r\n" The job body must be followed by a CR-LF pair, that is,
  "\r\n". These two bytes are not counted in the job size given by the client
  in the put command line.
- "JOB_TOO_BIG\r\n" The client has requested to put a job with a body larger
  than max-job-size bytes.
- "DRAINING\r\n" This means that the server has been put into "drain mode"
  and is no longer accepting new jobs. The client should try another server
  or disconnect and try again later.

The "use" command is for producers. Subsequent put commands will put jobs into the tube specified by this command. If no use command has been issued, jobs will be put into the tube named "default".

use <tube>\r\n

- <tube> is a name at most 200 bytes. It specifies the tube to use. If the
  tube does not exist, it will be created.

The only reply is:

USING <tube>\r\n

- <tube> is the name of the tube now being used.

== Worker Commands ==


A process that wants to consume jobs from the queue uses "reserve", "delete", "release", and "bury". The first worker command, "reserve", looks like this:

reserve\r\n

Alternatively, you can specify a timeout as follows:

reserve-with-timeout <seconds>\r\n

This will return a newly-reserved job. If no job is available to be reserved, beanstalkd will wait to send a response until one becomes available. Once a job is reserved for the client, the client has limited time to run (TTR) the job before the job times out. When the job times out, the server will put the job back into the ready queue. Both the TTR and the actual time left can be found in response to the stats-job command.

If more than one job is ready, beanstalkd will choose the one with the smallest priority value. Within each priority, it will choose the one that was received first.

A timeout value of 0 will cause the server to immediately return either a response or TIMED_OUT. A positive value of timeout will limit the amount of time the client will block on the reserve request until a job becomes available.

During the TTR of a reserved job, the last second is kept by the server as a safety margin, during which the client will not be made to wait for another job. If the client issues a reserve command during the safety margin, or if the safety margin arrives while the client is waiting on a reserve command, the server will respond with:

DEADLINE_SOON\r\n

This gives the client a chance to delete or release its reserved job before the server automatically releases it.

TIMED_OUT\r\n

If a non-negative timeout was specified and the timeout exceeded before a job became available, or if the client's connection is half-closed, the server will respond with TIMED_OUT.

Otherwise, the only other response to this command is a successful reservation in the form of a text line followed by the job body:

RESERVED <id> <bytes>\r\n<data>\r\n

- <id> is the job id -- an integer unique to this job in this instance of
  beanstalkd.
- <bytes> is an integer indicating the size of the job body, not including
  the trailing "\r\n".
- <data> is the job body -- a sequence of bytes of length <bytes> from the
  previous line. This is a verbatim copy of the bytes that were originally
  sent to the server in the put command for this job.
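A hypothetical parser for the first line of this response could look like (illustrative only, not part of any beanstalkd client library):

```javascript
// Illustrative parser for the "RESERVED <id> <bytes>\r\n" line; the job
// body of <bytes> bytes follows on the wire after this line.
function parseReserved(line) {
  var m = /^RESERVED (\d+) (\d+)\r\n$/.exec(line);
  if (!m) return null;
  return { id: parseInt(m[1], 10), bytes: parseInt(m[2], 10) };
}

console.log(parseReserved("RESERVED 42 5\r\n")); // { id: 42, bytes: 5 }
```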

The delete command removes a job from the server entirely. It is normally used by the client when the job has successfully run to completion. A client can delete jobs that it has reserved, ready jobs, delayed jobs, and jobs that are buried. The delete command looks like this:

delete <id>\r\n

- <id> is the job id to delete.

The client then waits for one line of response, which may be:

- "DELETED\r\n" to indicate success.
- "NOT_FOUND\r\n" if the job does not exist or is not either reserved by the
  client, ready, or buried. This could happen if the job timed out before the
  client sent the delete command.

The release command puts a reserved job back into the ready queue (and marks its state as "ready") to be run by any client. It is normally used when the job fails because of a transitory error. It looks like this:

release <id> <pri> <delay>\r\n

- <id> is the job id to release.
- <pri> is a new priority to assign to the job.
- <delay> is an integer number of seconds to wait before putting the job in
  the ready queue. The job will be in the "delayed" state during this time.

The client expects one line of response, which may be:

- "RELEASED\r\n" to indicate success.
- "BURIED\r\n" if the server ran out of memory trying to grow the priority
  queue data structure.
- "NOT_FOUND\r\n" if the job does not exist or is not reserved by the client.

The bury command puts a job into the "buried" state. Buried jobs are put into a FIFO linked list and will not be touched by the server again until a client kicks them with the "kick" command.

The bury command looks like this:

bury <id> <pri>\r\n

- <id> is the job id to bury.
- <pri> is a new priority to assign to the job.

There are two possible responses:

- "BURIED\r\n" to indicate success.
- "NOT_FOUND\r\n" if the job does not exist or is not reserved by the client.

The "touch" command allows a worker to request more time to work on a job. This is useful for jobs that potentially take a long time, but you still want the benefits of a TTR pulling a job away from an unresponsive worker. A worker may periodically tell the server that it's still alive and processing a job (e.g. it may do this on DEADLINE_SOON). The command postpones the auto release of a reserved job until TTR seconds from when the command is issued.

The touch command looks like this:

touch <id>\r\n

- <id> is the ID of a job reserved by the current connection.

There are two possible responses:

- "TOUCHED\r\n" to indicate success.
- "NOT_FOUND\r\n" if the job does not exist or is not reserved by the client.

The "watch" command adds the named tube to the watch list for the current connection. A reserve command will take a job from any of the tubes in the watch list. For each new connection, the watch list initially consists of one tube, named "default".

watch <tube>\r\n

- <tube> is a name at most 200 bytes. It specifies a tube to add to the watch
  list. If the tube doesn't exist, it will be created.

The reply is:

WATCHING <count>\r\n

- <count> is the integer number of tubes currently in the watch list.

The "ignore" command is for consumers. It removes the named tube from the watch list for the current connection.

ignore <tube>\r\n

The reply is one of:

- "WATCHING <count>\r\n" to indicate success.
  - <count> is the integer number of tubes currently in the watch list.
- "NOT_IGNORED\r\n" if the client attempts to ignore the only tube in its
  watch list.
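The watch-list semantics above can be modeled in a few lines of JavaScript (purely illustrative; the real state is kept by the server):

```javascript
// Illustrative model of a connection's watch list: it starts with
// "default", and ignoring the last remaining tube fails with NOT_IGNORED.
function makeWatchList() {
  var tubes = ["default"];
  return {
    watch: function (tube) {
      if (tubes.indexOf(tube) === -1) tubes.push(tube);
      return "WATCHING " + tubes.length;
    },
    ignore: function (tube) {
      if (tubes.length === 1 && tubes[0] === tube) return "NOT_IGNORED";
      var i = tubes.indexOf(tube);
      if (i !== -1) tubes.splice(i, 1);
      return "WATCHING " + tubes.length;
    }
  };
}

var wl = makeWatchList();
console.log(wl.watch("emails"));   // WATCHING 2
console.log(wl.ignore("default")); // WATCHING 1
console.log(wl.ignore("emails"));  // NOT_IGNORED
```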

== Other Commands ==


The peek commands let the client inspect a job in the system. There are four variations. All but the first operate only on the currently used tube.

- "peek <id>\r\n" - return job <id>.
- "peek-ready\r\n" - return the next ready job.
- "peek-delayed\r\n" - return the delayed job with the shortest delay left.
- "peek-buried\r\n" - return the next job in the list of buried jobs.

There are two possible responses, either a single line:

- "NOT_FOUND\r\n" if the requested job doesn't exist or there are no jobs in
  the requested state.

Or a line followed by a chunk of data, if the command was successful:

FOUND <id> <bytes>\r\n<data>\r\n

- <id> is the job id.
- <bytes> is an integer indicating the size of the job body, not including
  the trailing "\r\n".
- <data> is the job body -- a sequence of bytes of length <bytes> from the
  previous line.

The kick command applies only to the currently used tube. It moves jobs into the ready queue. If there are any buried jobs, it will only kick buried jobs. Otherwise it will kick delayed jobs. It looks like:

kick <bound>\r\n

- <bound> is an integer upper bound on the number of jobs to kick. The server
  will kick no more than <bound> jobs.

The response is of the form:

KICKED <count>\r\n

- <count> is an integer indicating the number of jobs actually kicked.

The kick-job command is a variant of kick that operates with a single job identified by its job id. If the given job id exists and is in a buried or delayed state, it will be moved to the ready queue of the same tube where it currently belongs. The syntax is:

kick-job <id>\r\n

- <id> is the job id to kick.

The response is one of:

- "NOT_FOUND\r\n" if the job does not exist or is not in a kickable state. This
  can also happen upon internal errors.
- "KICKED\r\n" when the operation succeeded.

The stats-job command gives statistical information about the specified job if it exists. Its form is:

stats-job <id>\r\n

- <id> is a job id.

The response is one of:

- "NOT_FOUND\r\n" if the job does not exist.
- "OK <bytes>\r\n\r\n"
  - <bytes> is the size of the following data section in bytes.
  -  is a sequence of bytes of length <bytes> from the previous line. It
    is a YAML file with statistical information represented a dictionary.

The stats-job data is a YAML file representing a single dictionary of strings to scalars. It contains these keys:

- "id" is the job id
- "tube" is the name of the tube that contains this job
- "state" is "ready" or "delayed" or "reserved" or "buried"
- "pri" is the priority value set by the put, release, or bury commands.
- "age" is the time in seconds since the put command that created this job.
- "time-left" is the number of seconds left until the server puts this job
  into the ready queue. This number is only meaningful if the job is
  reserved or delayed. If the job is reserved and this amount of time
  elapses before its state changes, it is considered to have timed out.
- "file" is the number of the earliest binlog file containing this job.
  If -b wasn't used, this will be 0.
- "reserves" is the number of times this job has been reserved.
- "timeouts" is the number of times this job has timed out during a
  reservation.
- "releases" is the number of times a client has released this job from a
  reservation.
- "buries" is the number of times this job has been buried.
- "kicks" is the number of times this job has been kicked.

The stats-tube command gives statistical information about the specified tube if it exists. Its form is:

stats-tube <tube>\r\n

- <tube> is a name at most 200 bytes. Stats will be returned for this tube.

The response is one of:

- "NOT_FOUND\r\n" if the tube does not exist.
- "OK <bytes>\r\n\r\n"
  - <bytes> is the size of the following data section in bytes.
  -  is a sequence of bytes of length <bytes> from the previous line. It
    is a YAML file with statistical information represented a dictionary.

The stats-tube data is a YAML file representing a single dictionary of strings to scalars. It contains these keys:

- "name" is the tube's name.
- "current-jobs-urgent" is the number of ready jobs with priority < 1024 in
  this tube.
- "current-jobs-ready" is the number of jobs in the ready queue in this tube.
- "current-jobs-reserved" is the number of jobs reserved by all clients in
  this tube.
- "current-jobs-delayed" is the number of delayed jobs in this tube.
- "current-jobs-buried" is the number of buried jobs in this tube.
- "total-jobs" is the cumulative count of jobs created in this tube in
  the current beanstalkd process.
- "current-using" is the number of open connections that are currently
  using this tube.
- "current-waiting" is the number of open connections that have issued a
  reserve command while watching this tube but not yet received a response.
- "current-watching" is the number of open connections that are currently
  watching this tube.
- "pause" is the number of seconds the tube has been paused for.
- "cmd-delete" is the cumulative number of delete commands for this tube
- "cmd-pause-tube" is the cumulative number of pause-tube commands for this
  tube.
- "pause-time-left" is the number of seconds until the tube is un-paused.

The stats command gives statistical information about the system as a whole. Its form is:

stats\r\n

The server will respond:

OK <bytes>\r\n<data>\r\n

- <bytes> is the size of the following data section in bytes.
- <data> is a sequence of bytes of length <bytes> from the previous line. It
  is a YAML file with statistical information represented as a dictionary.

The stats data for the system is a YAML file representing a single dictionary of strings to scalars. Entries described as "cumulative" are reset when the beanstalkd process starts; they are not stored on disk with the -b flag.

- "current-jobs-urgent" is the number of ready jobs with priority < 1024.
- "current-jobs-ready" is the number of jobs in the ready queue.
- "current-jobs-reserved" is the number of jobs reserved by all clients.
- "current-jobs-delayed" is the number of delayed jobs.
- "current-jobs-buried" is the number of buried jobs.
- "cmd-put" is the cumulative number of put commands.
- "cmd-peek" is the cumulative number of peek commands.
- "cmd-peek-ready" is the cumulative number of peek-ready commands.
- "cmd-peek-delayed" is the cumulative number of peek-delayed commands.
- "cmd-peek-buried" is the cumulative number of peek-buried commands.
- "cmd-reserve" is the cumulative number of reserve commands.
- "cmd-use" is the cumulative number of use commands.
- "cmd-watch" is the cumulative number of watch commands.
- "cmd-ignore" is the cumulative number of ignore commands.
- "cmd-delete" is the cumulative number of delete commands.
- "cmd-release" is the cumulative number of release commands.
- "cmd-bury" is the cumulative number of bury commands.
- "cmd-kick" is the cumulative number of kick commands.
- "cmd-stats" is the cumulative number of stats commands.
- "cmd-stats-job" is the cumulative number of stats-job commands.
- "cmd-stats-tube" is the cumulative number of stats-tube commands.
- "cmd-list-tubes" is the cumulative number of list-tubes commands.
- "cmd-list-tube-used" is the cumulative number of list-tube-used commands.
- "cmd-list-tubes-watched" is the cumulative number of list-tubes-watched
  commands.
- "cmd-pause-tube" is the cumulative number of pause-tube commands.
- "job-timeouts" is the cumulative count of times a job has timed out.
- "total-jobs" is the cumulative count of jobs created.
- "max-job-size" is the maximum number of bytes in a job.
- "current-tubes" is the number of currently-existing tubes.
- "current-connections" is the number of currently open connections.
- "current-producers" is the number of open connections that have each
  issued at least one put command.
- "current-workers" is the number of open connections that have each issued
  at least one reserve command.
- "current-waiting" is the number of open connections that have issued a
  reserve command but not yet received a response.
- "total-connections" is the cumulative count of connections.
- "pid" is the process id of the server.
- "version" is the version string of the server.
- "rusage-utime" is the cumulative user CPU time of this process in seconds
  and microseconds.
- "rusage-stime" is the cumulative system CPU time of this process in
  seconds and microseconds.
- "uptime" is the number of seconds since this server process started running.
- "binlog-oldest-index" is the index of the oldest binlog file needed to
  store the current jobs.
- "binlog-current-index" is the index of the current binlog file being
  written to. If the binlog is not active, this value is 0.
- "binlog-max-size" is the maximum size in bytes a binlog file is allowed
  to get before a new binlog file is opened.
- "binlog-records-written" is the cumulative number of records written
  to the binlog.
- "binlog-records-migrated" is the cumulative number of records written
  as part of compaction.
- "id" is a random id string for this server process, generated when each
  beanstalkd process starts.
- "hostname" is the hostname of the machine as determined by uname.

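Because the stats dictionary is flat "key: value" YAML, it can be decoded without a full YAML library. The following is a minimal sketch in Python; the sample payload is illustrative, not captured from a real server.

```python
# Parse the YAML data section of a stats response into a dict.
# Numeric values become ints; everything else (version, id, hostname)
# stays a string.
def parse_stats(body: str) -> dict:
    stats = {}
    for line in body.splitlines():
        if ":" not in line or line.startswith("---"):
            continue  # skip the YAML document marker and blank lines
        key, _, value = line.partition(":")
        value = value.strip()
        stats[key.strip()] = int(value) if value.isdigit() else value
    return stats

sample = "---\ncurrent-jobs-ready: 5\nversion: 1.10\npid: 1234\n"
print(parse_stats(sample)["current-jobs-ready"])  # → 5
```

Note that "version" parses as the string "1.10", not a number, which is the safe choice since version strings are not guaranteed to be numeric.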
The list-tubes command returns a list of all existing tubes. Its form is:

list-tubes\r\n

The response is:

OK <bytes>\r\n<data>\r\n

- <bytes> is the size of the following data section in bytes.
- <data> is a sequence of bytes of length <bytes> from the previous line. It
  is a YAML file containing all tube names as a list of strings.
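The data section for list-tubes (and likewise list-tubes-watched) is a YAML sequence of "- name" lines, so the tube names can again be extracted with plain string handling. A minimal sketch, with an illustrative payload:

```python
# Extract tube names from the YAML list returned by list-tubes or
# list-tubes-watched. Each tube appears on its own "- name" line.
def parse_tube_list(body: str) -> list:
    tubes = []
    for line in body.splitlines():
        if line.startswith("- "):
            tubes.append(line[2:].strip())
    return tubes

sample = "---\n- default\n- emails\n- reports\n"
print(parse_tube_list(sample))  # → ['default', 'emails', 'reports']
```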

The list-tube-used command returns the tube currently being used by the client. Its form is:

list-tube-used\r\n

The response is:

USING <tube>\r\n

- <tube> is the name of the tube being used.

The list-tubes-watched command returns a list of tubes currently being watched by the client. Its form is:

list-tubes-watched\r\n

The response is:

OK <bytes>\r\n<data>\r\n

- <bytes> is the size of the following data section in bytes.
- <data> is a sequence of bytes of length <bytes> from the previous line. It
  is a YAML file containing watched tube names as a list of strings.

The quit command simply closes the connection. Its form is:

quit\r\n

The pause-tube command can delay any new job being reserved for a given time. Its form is:

pause-tube <tube> <delay>\r\n

- <tube> is the tube to pause
- <delay> is an integer number of seconds to wait before reserving any more
  jobs from the queue

There are two possible responses:

- "PAUSED\r\n" to indicate success.
- "NOT_FOUND\r\n" if the tube does not exist.
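The command framing and the two replies above can be sketched as follows. The socket send/recv is omitted; this only shows how the ASCII line is assembled and how a reply would be classified.

```python
# Frame a pause-tube command as the ASCII line the server expects.
def build_pause_tube(tube: str, delay: int) -> bytes:
    # Tube names are ASCII; delay is a nonnegative decimal integer of seconds.
    return f"pause-tube {tube} {delay}\r\n".encode("ascii")

# Classify the server's reply: True on PAUSED, False on NOT_FOUND.
def check_pause_reply(reply: bytes) -> bool:
    if reply == b"PAUSED\r\n":
        return True
    if reply == b"NOT_FOUND\r\n":
        return False
    raise ValueError("unexpected reply: %r" % reply)

print(build_pause_tube("emails", 60))  # → b'pause-tube emails 60\r\n'
```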