Documentation for the MongoDB Node.js Official Driver v 1.2

From support.qbpro.ru
Documentation for the official MongoDB Node.js driver v 1.2 (supported by 10gen)
[https://github.com/mongodb/node-mongodb-native/tree/1.2-dev/docs original full documentation]


Explanatory notes were taken from [http://jsman.ru/mongo-book/ here].
= MongoClient: the new and improved way to connect =
*[https://github.com/mongodb/node-mongodb-native/blob/1.2-dev/docs/articles/MongoClient.md original]
Starting with driver version '''1.2''' a new connection class is included, one that carries the same name across all official drivers. This does not mean existing applications will stop working; it is simply recommended to use the new, simplified connection and development API.


Going forward, the new '''MongoClient''' class acknowledges all writes to MongoDB, in contrast to the existing '''Db''' connection class, where acknowledgements are turned off.


  <nowiki>MongoClient = function(server, options);

MongoClient.prototype.open

MongoClient.prototype.close

MongoClient.prototype.db

MongoClient.connect</nowiki>




The full MongoClient interface is described above. The '''open''', '''close''' and '''db''' methods work the same way as the existing methods on the '''Db''' class. The main difference is that the constructor omits the '''database name''' that '''Db''' takes. Let's look at a simple connection using '''open'''; the code will replace a thousand words.




  <nowiki>var MongoClient = require('mongodb').MongoClient,
    Server = require('mongodb').Server;

var mongoClient = new MongoClient(new Server('localhost', 27017));

mongoClient.open(function(err, mongoClient) {
  var db1 = mongoClient.db("mydb");

  mongoClient.close();
});</nowiki>


Note that the MongoClient settings are the same as for the '''Db''' object. The main difference is that data is accessed through the '''db''' method of the MongoClient object instead of using a '''db''' instance directly, as before. MongoClient also supports the same options as the previous Db instance.


Thus, with minimal changes to an application, you can switch to the new MongoClient object for connecting.


== Connection URL format ==
<nowiki>mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]]</nowiki>


The URL format is unified across all official 10gen drivers; some options are not supported by third-party drivers for natural reasons.


=== Parts of the url ===
* <span style="color:darkgreen">'''mongodb://'''</span> - a required prefix identifying the string as a standard-format connection string
* <span style="color:darkgreen">'''username:password@'''</span> - optional. If given, the driver attempts to authenticate against the database after connecting to the server.
* <span style="color:darkgreen">'''host1'''</span> - the only required part of the URI. Identifies a hostname, an IP address or a unix socket
* <span style="color:darkgreen">''':portX'''</span> - optional connection port; defaults to :27017.
* <span style="color:darkgreen">'''/database'''</span> - the name of the database to authenticate against, so it only makes sense together with the username:password@ syntax. If not specified, the "admin" database is used by default.
* <span style="color:darkgreen">'''?options'''</span> - connection options. If the database value is absent, the / must still be present between the last host and the ? introducing the options. Options are name=value pairs separated by "&". For unknown or unsupported options the driver logs a warning and continues. Drivers do not support options other than those described in the specification; this reduces the likelihood of different drivers supporting slightly different, ultimately incompatible options (e.g. different names, different values or a different default).
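A few illustrative connection strings (the hosts, credentials and database names below are placeholders, not values taken from this documentation):

```
mongodb://localhost
mongodb://fred:foobar@localhost/baz
mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=myset
```

The first connects to the default port on localhost, the second authenticates as fred against the baz database, and the third gives a seed list for a replica set named myset.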


=== Replica set options: ===
* '''replicaSet=name'''
** The driver verifies the name of the replica set it connects against. The hosts given are treated as a seed list, and the driver will attempt to discover all members of the set.
** No default value.
:::::Note (translator): replication in MongoDB works much like replication in relational databases. Writes are sent to a single server, the master, which then synchronizes its state with the other servers, the slaves. You can allow or forbid reads from the slaves depending on whether reading stale data is acceptable in your system. If the master goes down, one of the slaves can take over as master.


:::::Although replication improves read performance by distributing reads, its main purpose is reliability. A typical approach is to combine replication with sharding: each shard, for example, can consist of a master and a slave. (Technically you also need an arbiter to break the tie when two slaves try to declare themselves master, but an arbiter consumes very few resources and can serve several shards at once.)


=== Connection configuration: ===
* '''ssl=true|false|prefer'''
** true: the driver initiates each connection using SSL
** false: the driver initiates each connection without SSL
** prefer: the driver tries to initiate each connection using SSL and, on failure, falls back to a connection without SSL
** Default value: false.
* '''connectTimeoutMS=ms'''
** How long a connection can take to be opened before timing out.
** Current driver behavior already differs on this, so default must be left to each driver. For new implementations, the default should be to never timeout.
* '''socketTimeoutMS=ms'''
** How long a send or receive on a socket can take before timing out.
** Current driver behavior already differs on this, so default must be left to each driver. For new implementations, the default should be to never timeout.


=== Connection pool configuration: ===
* '''maxPoolSize=n:''' the maximum number of connections in the pool
** Default value: 100


=== Write concern configuration: ===
'''w=wValue'''


*For numeric values above 1, the driver adds { w : wValue } to the getLastError command.


*wValue is typically a number, but can be any string in order to allow for specifications like "majority"


*Default value is 1.


*If wValue == -1 ignore network errors


*If wValue == 0 Don't send getLastError


*If wValue == 1 send {getlasterror: 1} (no w)


'''wtimeoutMS=ms'''


*The driver adds { wtimeout : ms } to the getlasterror command.
*Used in combination with w
*No default value
 
'''journal=true|false'''
 
*true: Sync to journal.
 
*false: the driver does not add j to the getlasterror command
 
*Default value is false
 
'''fsync=true|false'''
 
*true: Sync to disk.
 
*false: the driver does not add fsync to the getlasterror command
 
*Default value is false
 
If conflicting values for fireAndForget and any write concern are passed, the driver should raise an exception about the conflict.
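As a sketch of how the write concern options above combine in the query string (the host and database names are placeholders):

```
mongodb://localhost/mydb?w=majority&wtimeoutMS=2000&journal=true
```

This asks the driver to wait for a majority of replica set members to acknowledge each write, to give up waiting after 2000 ms, and to wait for the journal sync.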
 
=== Read Preference ===
'''slaveOk=true|false:''' Whether a driver connected to a replica set will send reads to slaves/secondaries.
 
*Default value is false
 
'''readPreference=enum:''' The read preference for this connection. If set, it overrides any slaveOk value.
 
*Enumerated values:
 
:*primary
 
:*primaryPreferred
 
:*secondary
 
:*secondaryPreferred
 
:*nearest
 
*Default value is primary
 
'''readPreferenceTags=string.''' A representation of a tag set as a comma-separated list of colon-separated key-value pairs, e.g. '''dc:ny,rack:1'''. Spaces should be stripped from the beginning and end of all keys and values. To specify a list of tag sets, use multiple readPreferenceTags, e.g. '''readPreferenceTags=dc:ny,rack:1&readPreferenceTags=dc:ny&readPreferenceTags='''
 
*Note the empty value; it provides a fallback to any other secondary server if none matching the previous tag sets is available
 
*Order matters when using multiple readPreferenceTags
 
*There is no default value
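Combining the rules above, a connection string with an ordered list of tag sets (all host names and tags are placeholders) might look like:

```
mongodb://host1,host2/?readPreference=secondaryPreferred&readPreferenceTags=dc:ny,rack:1&readPreferenceTags=dc:ny&readPreferenceTags=
```

The driver first tries secondaries tagged dc:ny,rack:1, then any tagged dc:ny, and the trailing empty readPreferenceTags= falls back to any available member.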
 
== MongoClient.connect ==
When using MongoClient.connect you can (and probably should) use the URL connection format. Where possible MongoClient picks the best default options, but they can always be overridden. This applies to '''auto_reconnect:true''' and '''native_parser:true''' where available. Below are examples of connecting to a single server, a replica set and a sharded system using '''MongoClient.connect'''.
 
=== Connecting to a single server ===
<nowiki>var MongoClient = require('mongodb').MongoClient;
 
MongoClient.connect("mongodb://localhost:27017/integration_test", function(err, db) {
  test.equal(null, err);
  test.ok(db != null);
 
  db.collection("replicaset_mongo_client_collection").update({a:1}, {b:1}, {upsert:true}, function(err, result) {
    test.equal(null, err);
    test.equal(1, result);
 
    db.close();
    test.done();
  });
});</nowiki>
 
=== Connecting to a replica set with no write acknowledgement by default and a secondary read preference ===
<nowiki>var MongoClient = require('mongodb').MongoClient;
 
MongoClient.connect("mongodb://localhost:30000,localhost:30001/integration_test_?w=0&readPreference=secondary", function(err, db) {
  test.equal(null, err);
  test.ok(db != null);
 
  db.collection("replicaset_mongo_client_collection").update({a:1}, {b:1}, {upsert:true}, function(err, result) {
    test.equal(null, err);
    test.equal(1, result);
 
    db.close();
    test.done();
  });
});</nowiki>
 
=== Connecting to a sharded system with no write acknowledgement by default and a secondary read preference ===
<nowiki>var MongoClient = require('mongodb').MongoClient;
 
MongoClient.connect("mongodb://localhost:50000,localhost:50001/integration_test_?w=0&readPreference=secondary", function(err, db) {
  test.equal(null, err);
  test.ok(db != null);
 
  db.collection("replicaset_mongo_client_collection").update({a:1}, {b:1}, {upsert:true}, function(err, result) {
    test.equal(null, err);
    test.equal(1, result);
 
    db.close();
    test.done();
  });
});</nowiki>
 
Notice that when connecting to the sharded system the url is pretty much the same as for connecting to the replica set. This is because the driver itself figures out whether it's connecting to a replica set or to a set of Mongos proxies, so no special care is needed to specify one or the other. This is in contrast to having to use the '''ReplSet''' or '''Mongos''' instances when using the '''open''' command.
 
== MongoClient.connect options ==
The connect function also takes a hash of options divided into db/server/replset/mongos sections, allowing you to tweak options not directly supported by the unified url string format. To use these options, pass in a hash like this:
 
<nowiki>var MongoClient = require('mongodb').MongoClient;
 
MongoClient.connect("mongodb://localhost:27017/integration_test_?", {
    db: {
      native_parser: false
    },
    server: {
      socketOptions: {
        connectTimeoutMS: 500
      }
    },
    replSet: {},
    mongos: {}
  }, function(err, db) {
  test.equal(null, err);
  test.ok(db != null);
 
  db.collection("replicaset_mongo_client_collection").update({a:1}, {b:1}, {upsert:true}, function(err, result) {
    test.equal(null, err);
    test.equal(1, result);
 
    db.close();
    test.done();
  });
});</nowiki>
 
Below are all the options supported for db/server/replset/mongos.
 
*'''db''' A hash of options at the db level overriding or adjusting functionality not supported by the url
 
:*'''w'''<nowiki>, {Number/String, > -1 || 'majority'} the write concern for the operation, where w < 1 means no acknowledgement of the write and w >= 1 or w = 'majority' acknowledges the write</nowiki>
 
:*'''wtimeout''', {Number, 0} set the timeout for waiting for write concern to finish (combines with w option)
 
:*'''fsync''', (Boolean, default:false) write waits for fsync before returning
 
:*'''journal''', (Boolean, default:false) write waits for journal sync before returning
 
:*'''readPreference''' {String}, the preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
 
:*'''native_parser''' {Boolean, default:false}, use c++ bson parser.
 
:*'''forceServerObjectId''' {Boolean, default:false}, force server to create _id fields instead of client.
 
:*'''pkFactory''' {Object}, object overriding the basic ObjectID primary key generation.
 
:*'''serializeFunctions''' {Boolean, default:false}, serialize functions.
 
:*'''raw''' {Boolean, default:false}, perform operations using raw bson buffers.
 
:*'''recordQueryStats''' {Boolean, default:false}, record query statistics during execution.
 
:*'''retryMiliSeconds''' {Number, default:5000}, number of milliseconds between retries.
 
:*'''numberOfRetries''' {Number, default:5}, number of retries for the connection.
 
*'''server''' A hash of options at the server level not supported by the url.
 
:*'''readPreference''' {String, default:null}, sets the read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST)
 
:*'''ssl''' {Boolean, default:false}, use ssl connection (needs to have a mongod server with ssl support)
 
:*'''slaveOk''' {Boolean, default:false}, legacy option allowing reads from secondaries, use '''readPreference''' instead.
 
:*'''poolSize''' {Number, default:1}, number of connections in the connection pool, set to 1 as default for legacy reasons.
 
:*'''socketOptions''' {Object, default:null}, an object containing socket options to use (noDelay:(boolean), keepAlive:(number), connectTimeoutMS:(number), socketTimeoutMS:(number))
 
:*'''logger''' {Object, default:null}, an object representing a logger that you want to use, needs to support functions debug, log, error '''({error:function(message, object) {}, log:function(message, object) {}, debug:function(message, object) {}})'''.
 
:*'''auto_reconnect''' {Boolean, default:false}, reconnect on error.
 
:*'''disableDriverBSONSizeCheck''' {Boolean, default:false}, skip the driver's own size check and let the server error if the BSON message is too big
 
*'''replSet''' A hash of options at the replSet level not supported by the url.
 
:*'''ha''' {Boolean, default:true}, turn on high availability.
 
:*'''haInterval''' {Number, default:2000}, time between each replicaset status check.
 
:*'''reconnectWait''' {Number, default:1000}, time to wait in milliseconds before attempting reconnect.
 
:*'''retries''' {Number, default:30}, number of times to attempt a replicaset reconnect.
 
:*'''rs_name''' {String}, the name of the replicaset to connect to.
 
:*'''socketOptions''' {Object, default:null}, an object containing socket options to use (noDelay:(boolean), keepAlive:(number), connectTimeoutMS:(number), socketTimeoutMS:(number))
 
:*'''readPreference''' {String}, the prefered read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
 
:*'''strategy''' {String, default:null}, selection strategy for reads; choose between ping and statistical (the default is round-robin)
 
:*'''secondaryAcceptableLatencyMS''' {Number, default:15}, sets the range of servers to pick when using NEAREST (lowest ping ms + the latency fence, ex: range of 1 to (1 + 15) ms)
 
:*'''connectArbiter''' {Boolean, default:false}, sets if the driver should connect to arbiters or not.
 
*'''mongos''' A hash of options at the mongos level not supported by the url.
 
:*'''socketOptions''' {Object, default:null}, an object containing socket options to use (noDelay:(boolean), keepAlive:(number), connectTimeoutMS:(number), socketTimeoutMS:(number))
 
:*'''ha''' {Boolean, default:true}, turn on high availability, attempts to reconnect to down proxies
 
:*'''haInterval''' {Number, default:2000}, time between each replicaset status check.
 
= Database =
The first thing to do in order to make queries to the database is to open one. This can be done with the Db constructor.
 
<nowiki>var mongodb = require("mongodb"),
    mongoserver = new mongodb.Server(host, port, server_options),
    db_connector = new mongodb.Db(name, mongoserver, db_options);
 
db_connector.open(callback);</nowiki>
 
* host is a server hostname or IP
* port is a MongoDB port, use mongodb.Connection.DEFAULT_PORT for default (27017)
* server_options see ''Server options''
* name is the database name that needs to be opened; the database will be created automatically if it doesn't yet exist
* db_options see ''DB options''
 
== Server options ==
Several options can be passed to the Server constructor with options parameter.
 
* auto_reconnect - to reconnect automatically, default:false
* poolSize - specify the number of connections in the pool default:5
* socketOptions - a collection of per-socket settings
 
== Socket options ==
Several options can be set for the socketOptions.
 
* timeout = set seconds before connection times out '''default:0'''
* noDelay = disables the Nagle algorithm '''default:true'''
::::''The TCP/IP Nagle algorithm was designed to avoid problems with transmitting small packets, called tinygrams, over slow networks. Its job is to balance the load on a TCP connection, i.e. it tries to spread the traffic evenly. So when many small (under 1500 bytes) data packets are being sent, the algorithm smooths this load peak by delaying packets and distributing them more evenly over time. A side effect of this algorithm can be delays of up to 200 ms in packet delivery. [http://heroes.fragoria.ru/forum/index.php?topic=6161.0 source 1], [http://support.microsoft.com/kb/138831/ru source 2]''
* keepAlive = Set if keepAlive is used default:0, which means no keepAlive, set higher than 0 for keepAlive
* encoding = 'ascii'|'utf8'|'base64' default:null
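A minimal sketch of a socketOptions hash built from the settings listed above (the values are illustrative, not recommendations); it would be handed to the Server constructor through its options parameter:

```javascript
// Illustrative per-socket settings, matching the options listed above.
var socketOptions = {
  timeout: 0,       // seconds before the connection times out (0 = never)
  noDelay: true,    // disable the Nagle algorithm (the default)
  keepAlive: 0,     // 0 = no keepAlive; a value > 0 enables keepAlive
  encoding: 'utf8'  // 'ascii' | 'utf8' | 'base64'
};

// The hash goes inside the Server options, e.g.:
// new mongodb.Server(host, port, { auto_reconnect: true, socketOptions: socketOptions });
var serverOptions = { auto_reconnect: true, poolSize: 5, socketOptions: socketOptions };
```
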
 
== DB options ==
Several options can be passed to the Db constructor with options parameter.
 
* native_parser - if true, use native BSON parser
* strict - sets ''strict mode'', if true then existing collections can't be "recreated" etc.
* pk - custom primary key factory to generate _id values (see Custom primary keys).
* forceServerObjectId - generation of objectid is delegated to the mongodb server instead of the driver. default is false
* retryMiliSeconds - specify the number of milliseconds between connection attempts default:5000
* numberOfRetries - specify the number of retries for connection attempts default:3
* reaper - enable/disable reaper (true/false) default:false
* reaperInterval - specify the number of milliseconds between each reaper attempt default:10000
* reaperTimeout - specify the number of milliseconds for timing out callbacks that don't return default:30000
* raw - driver expects Buffer raw bson document, default:false
* logger - object specifying error(), debug() and log() functions
 
== Connecting to the database ==
A database can be opened with the '''open''' method.
 
<nowiki>db_connector.open(callback);</nowiki>
 
callback is a callback function which gets 2 parameters - an error object (or null, if no errors occurred) and a database object.
 
The resulting database object can be used for creating and selecting [[Документация_для_v_1.2#Collections|collections]].
 
<nowiki>db_connector.open(function(err, db){
    db.collection(...);
});</nowiki>
 
=== Database properties ===
* databaseName is the name of the database
* serverConfig includes information about the server (serverConfig.host, serverConfig.port etc.)
* state indicates if the database is connected or not
* strict indicates if ''strict mode'' is on (true) or off (false, default)
* version indicates the version of the MongoDB database
 
=== Database events ===
* close to indicate that the connection to the database was closed
 
For example:
 
<nowiki>db.on("close", function(error){
    console.log("Connection to the database was closed!");
});</nowiki>
 
NB! If auto_reconnect was set to true when creating the server, then the connection will be automatically reopened on next database operation. Nevertheless the close event will be fired.
 
== Sharing the connection pool between several databases ==
To share a connection pool between several databases, the database instance has a '''db''' method
 
<nowiki>db_connector.db(name)</nowiki>
 
this returns a new db instance that shares the connections of the previous instance but sends all commands to the given database name. This allows for better control of resource usage in a multiple database scenario.
 
== Dropping a database ==
To drop a database you first need to open a connection to it. Dropping is done with the '''dropDatabase''' method
<nowiki>db_connector.open(function(err, db){
    if (err) { throw err; }
    db.dropDatabase(function(err) {
        if (err) { throw err; }
        console.log("database has been dropped!");
    });
});</nowiki>
 
== Custom primary keys ==
Every record in the database has a unique '''primary key''' called '''_id'''. By default the primary key is a 12-byte hash, but a custom primary key factory can override this. If you set '''_id''' manually when inserting records, you can use just about anything; the primary key factory only generates an '''_id''' value for records where the '''_id''' key is not already defined.
 
Example 1: No need to generate primary key, as its already defined:
 
<nowiki>collection.insert({name:"Daniel", _id:"12345"});</nowiki>
 
Example 2: No primary key, so it needs to be generated before save:
 
<nowiki>collection.insert({name:"Daniel"});</nowiki>
 
The custom primary key factory is actually an object with a method createPk which returns a primary key. The context (the value of this) for createPk is left untouched.
 
<nowiki>var CustomPKFactory = {
    counter:0,
    createPk: function() {
        return ++this.counter;
    }
}
 
db_connector = new mongodb.Db(name, mongoserver, {pk: CustomPKFactory});</nowiki>
 
== Debugging ==
In order to debug the commands sent to the database you can add a logger object to the DB options. Also make sure the property doDebug is set.
 
Example:
 
<nowiki>options = {}
options.logger = {};
options.logger.doDebug = true;
options.logger.debug = function (message, object) {
    // print the mongo command:
    // "writing command to mongodb"
    console.log(message);
 
    // print the collection name
    console.log(object.json.collectionName)
 
    // print the json query sent to MongoDB
    console.log(object.json.query)
 
    // print the binary object
    console.log(object.binary)
}
 
var db = new Db('some_database', new Server(...), options);</nowiki>
 
= Collections =
See also:
* [[Документация_для_v_1.2#Database|Database]]
* [[Документация_для_v_1.2#Queries|Queries]]
 
== Collection objects ==
Collection object is a pointer to a specific collection in the [[Документация_для_v_1.2#Database|database]]. If you want to [[Документация_для_v_1.2#Insert|insert]] new records or [[Документация_для_v_1.2#Queries|query]] existing ones then you need to have a valid collection object.
 
'''Note:''' collection names may not start with or contain the $ character (.tes$t is not allowed)
 
== Creating collections ==
Collections can be created with createCollection
 
<nowiki>db.createCollection(name[, options], callback)</nowiki>
 
where name is the name of the collection, options a set of configuration parameters and callback is a callback function. db is the database object.
 
The first parameter for the callback is the error object (null if no error) and the second one is the pointer to the newly created collection. If strict mode is on and the collection already exists, the operation yields an error. With strict mode off (default) the function simply returns the pointer to the existing collection and does not truncate it.
 
db.createCollection("test", function(err, collection){
    collection.insert({"test":"value"});
});
 
== createCollection options ==
Several options can be passed to the createCollection function with options parameter.
 
<nowiki>* `raw` - driver returns documents as bson binary Buffer objects, `default:false`</nowiki>
 
=== Collection properties ===
* collectionName is the name of the collection (not including the database name as a prefix)
* db is the pointer to the corresponding databse object
 
Example of usage:
 
console.log("Collection name: "+collection.collectionName)
 
== Listing collections ==
=== Listing names ===
Collections can be listed with collectionNames
 
<nowiki>db.collectionNames(callback);</nowiki>
 
callback gets two parameters - an error object (if an error occurred) and an array of collection names as strings.
 
Collection names also include database name, so a collection named posts in a database blog will be listed as blog.posts.
 
Additionally there are system collections which should not be altered without knowing exactly what you are doing; these collections can be identified by the system prefix. For example posts.system.indexes.
 
Example:
 
<nowiki>var mongodb = require("mongodb"),
    mongoserver = new mongodb.Server("localhost"),
    db_connector = new mongodb.Db("blog", mongoserver);
 
db_connector.open(function(err, db){
    db.collectionNames(function(err, collections){
        console.log(collections); // ["blog.posts", "blog.system.indexes"]
    });
});</nowiki>
 
== Listing collection objects ==
Collection objects can be listed with database method collections
 
db.collections(callback)
 
Where callback gets two parameters - an error object (if an error occurred) and an array of collection objects.
 
== Selecting collections ==
An existing collection can be opened with the '''collection''' method
 
<nowiki>db.collection(name[, options], callback);</nowiki>
 
If strict mode is off, then a new collection is created automatically if it does not yet exist.
 
== Options for selecting collections ==
Several options can be passed to the collection function with options parameter.
 
* `raw` - driver returns documents as bson binary Buffer objects, `default:false`
 
== Renaming collections ==
A collection can be renamed with collection method rename
 
collection.rename(new_name, callback);
 
== Removing records from collections ==
Records can be erased from a collection with remove
 
<nowiki>collection.remove([query[, options]][, callback]);</nowiki>
 
Where
 
* query is the query that records to be removed need to match. If not set, all records will be removed
* options indicate advanced options. For example use {safe: true} when using callbacks
* callback callback function that gets two parameters - an error object (if an error occurred) and the count of removed records
 
== Removing collections ==
A collection can be dropped with drop
 
collection.drop(callback);
 
or with dropCollection
 
db.dropCollection(collection_name, callback)
 
= Inserting and updating =
See also:
 
* [[Документация_для_v_1.2#Database|Database]]
* [[Документация_для_v_1.2#Collections|Collections]]
 
== Insert ==
Records can be inserted to a collection with insert
 
<nowiki>collection.insert(docs[, options][, callback])</nowiki>
 
Where
 
* docs is a single document object or an array of documents
* options is an object of parameters. If you use a callback, set safe to true - this way the callback is executed ''after'' the record is saved to the database; if safe is false (default) the callback is fired immediately and thus doesn't make much sense.
* callback - callback function to run after the record is inserted. Set safe to true in options when using a callback. The first parameter for the callback is the error object (if an error occurred) and the second is an array of records inserted.
 
For example
 
var document = {name:"David", title:"About MongoDB"};
collection.insert(document, {safe: true}, function(err, records){
    <nowiki>console.log("Record added as "+records[0]._id);</nowiki>
});
 
If you try to insert a record with an existing _id value, the operation results in an error.
 
collection.insert({_id:1}, {safe:true}, function(err, doc){
    // no error, inserted new document, with _id=1
    collection.insert({_id:1}, {safe:true}, function(err, doc){
        // error occurred since _id=1 already existed
    });
});
 
== Save ==
Shorthand for insert/update is save - if the _id value is set, the record is updated if it exists or inserted if it does not; if the _id value is not set, the record is inserted as a new one.
 
collection.save({_id:"abc", user:"David"},{safe:true}, callback)
 
callback gets two parameters - an error object (if an error occurred) and the record if it was inserted, or 1 if the record was updated.
 
== Update ==
Updates can be done with update
 
<nowiki>collection.update(criteria, update[, options[, callback]]);</nowiki>
 
Where
 
* criteria is a query object to find records that need to be updated (see [[Документация_для_v_1.2#Queries|Queries]])
* update is the replacement object
* options is an options object (see below)
* callback is the callback to be run after the records are updated. Has two parameters, the first is an error object (if an error occurred), the second is the count of records that were modified.
 
=== Update options ===
There are several option values that can be used with an update
 
* safe - run callback only after the update is done, defaults to false
* multi - update all records that match the query object, default is false (only the first one found is updated)
* upsert - if true and no records match the query, insert update as a new record
* raw - driver returns updated document as bson binary Buffer, default:false
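As a sketch, the options above can be combined with a $set update like this. The field names and the `published` flag are made up for illustration, and the driver call itself is shown commented out since it needs a live collection object:

```javascript
// Hypothetical criteria, update and options objects for collection.update().
var criteria = {published: false};          // which records to touch
var update   = {$set: {published: true}};   // change only the published field
var options  = {safe: true, multi: true};   // wait for the result, update all matches

// With a real collection object (not shown here) the call would be:
// collection.update(criteria, update, options, function(err, count) {
//     // count is the number of modified records
// });
console.log(options.multi); // true - every matching record is updated
```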
 
=== Replacement object ===
If the replacement object is a document, the matching documents will be replaced (except the _id values if no _id is set).
 
collection.update({_id:"123"}, {author:"Jessica", title:"Mongo facts"});
 
The example above will replace the document contents of id=123 with the replacement object.
 
To update only selected fields, the $set operator needs to be used. The following replacement object replaces the author value but leaves everything else intact.
 
collection.update({_id:"123"}, {$set: {author:"Jessica"}});
 
See [http://www.mongodb.org/display/DOCS/Updating MongoDB documentation] for all possible operators.
 
== Find and Modify ==
To update and retrieve the contents of a single record you can use findAndModify.
 
<nowiki>collection.findAndModify(criteria, sort, update[, options, callback])</nowiki>
 
Where
 
* criteria is the query object to find the record
* sort indicates the order of the matches if there's more than one matching record. The first record on the result set will be used. See [https://github.com/mongodb/node-mongodb-native/blob/1.2-dev/docs/queries.md Queries->find->options->sort] for the format.
* update is the replacement object
* options define the behavior of the function
* callback is the function to run after the update is done. Has two parameters - an error object (if an error occurred) and the record that was updated.
 
=== Options ===
Options object can be used for the following options:
 
* remove - if set to true (default is false), removes the record from the collection. Callback function still gets the object but it doesn't exist in the collection any more.
* new - if set to true, callback function returns the modified record. Default is false (original record is returned)
* upsert - if set to true and no record matches the query, the replacement object is inserted as a new record
 
=== Example ===
<nowiki>var mongodb = require('mongodb'),
    server = new mongodb.Server("127.0.0.1", 27017, {});
 
new mongodb.Db('test', server, {}).open(function (error, client) {
    if (error) throw error;
    var collection = new mongodb.Collection(client, 'test_collection');
    collection.findAndModify(
        {hello: 'world'}, // query
    [['_id','asc']],  // sort order
        {$set: {hi: 'there'}}, // replacement, replaces only the field "hi"
        {}, // options
        function(err, object) {
            if (err){
                console.warn(err.message);  // returns error if no matching object found
            }else{
                console.dir(object);
            }
        });
    });
</nowiki>
 
= Queries =
See also:
 
* [[Документация_для_v_1.2#Database|Database]]
* [[Документация_для_v_1.2#Collections|Collections]]
 
== Querying with find() ==
[[Документация_для_v_1.2#Collections|Collections]] can be queried with find.
 
<nowiki>collection.find(query[, fields[, options]][, callback]);</nowiki>
 
Where
 
* query - is a query object, defining the conditions the documents need to match
* fields - indicates which fields should be included in the response (default is all)
* options - defines extra logic (sorting options, paging etc.)
** raw - driver returns documents as bson binary Buffer objects, default:false
 
The result for the query is actually a cursor object. This can be used directly or converted to an array.
 
var cursor = collection.find({});
cursor.each(...);
 
To indicate which fields must or must not be returned, the fields value can be used. For example the following fields value
 
{
    "name": true,
    "title": true
}
 
retrieves fields name and title (and as a default also _id) but not any others.
 
== Find first occurence with findOne() ==
findOne is a convenience method that finds and returns the first match of a query, while regular find returns a cursor object instead. Use it when you expect only one record, for example when querying with _id or another unique property.
 
<nowiki>collection.findOne([query], callback)</nowiki>
 
Where
 
* query is a query object or an _id value
* callback has two parameters - an error object (if an error occurred) and the document object.
 
Example:
 
collection.findOne({_id: doc_id}, function(err, document) {
    console.log(document.name);
});
 
== _id values ==
Default _id values are 12 byte binary hashes. You can alter the format with custom Primary Key factories (see ''[[Документация_для_v_1.2#Custom_primary_keys|Custom Primary Keys]]'' in [[Документация_для_v_1.2#Database|Database]]).
 
In order to treat these binary _id values as strings it would be wise to convert the binary values to hex strings. This can be done with the toHexString method.
 
var idHex = document._id.toHexString();
 
Hex strings can be reverted back to binary (for example to perform queries) with ObjectID.createFromHexString
 
{_id: ObjectID.createFromHexString(idHex)}
 
When inserting new records it is possible to use custom _id values as well which do not need to be binary hashes, for example strings.
 
collection.insert({_id: "abc", ...});
collection.findOne({_id: "abc"},...);
 
This way it is not necessary to convert _id values to hex strings and back.
 
== The query object ==
The simplest query object is an empty one {} which matches every record in the database.
 
To make a simple query where one field must match to a defined value, one can do it as simply as
 
{fieldname: "fieldvalue"} 
 
This query matches all the records that a) have a field called ''fieldname'' and b) whose value is ''"fieldvalue"''.
 
For example if we have a collection of blog posts where the structure of the records is {title, author, contents} and we want to retrieve all the posts for a specific author then we can do it like this:
 
posts = pointer_to_collection;
posts.find({author:"Daniel"}).toArray(function(err, results){
    console.log(results); // output all records
});
 
If the queried field is inside an object then that can be queried also. For example if we have a record with the following structure:
 
{
    user: {
        name: "Daniel"
    }
}
 
Then we can query the "name" field like this: {"user.name":"Daniel"}
 
=== AND ===
If more than one fieldname is specified, then it's an AND query
 
{
    key1: "value1",
    key2: "value2"
}
 
This query matches all records where ''key1'' is ''"value1"'' and ''key2'' is ''"value2"''.
 
=== OR ===
OR queries are a bit trickier but doable with the $or operator. The $or operator takes an array of query objects; at least one of these must match a document before it is retrieved.
 
{
    <nowiki>$or:[</nowiki>
        {author:"Daniel"},
        {author:"Jessica"}
    ]
}
 
This query matches all the documents where the author is Daniel or Jessica.
 
To mix AND and OR queries, you just need to use $or as one of the regular query fields.
 
{
    title:"MongoDB",
    <nowiki>$or:[</nowiki>
        {author:"Daniel"},
        {author:"Jessica"}
    ]
}
 
=== Conditionals ===
Conditional operators <nowiki><</nowiki>, <nowiki><=</nowiki>, >, >= and != can't be used directly, as the query object format doesn't support them, but the same can be achieved with their aliases $lt, $lte, $gt, $gte and $ne. When a field value needs to match a conditional, the value must be wrapped into a separate object.
 
{"fieldname":{$gte:100}}
 
This query defines that ''fieldname'' must be greater than or equal to 100.
 
Conditionals can also be mixed to create ranges.
 
{"fieldname": {$gte:10, $lte:100}}
 
=== Regular expressions in queries ===
Queried field values can also be matched with regular expressions
 
{author:/^Daniel/}
 
=== Special query operators ===
In addition to OR and conditionals there are some more operators:
 
* $in - specifies an array of possible matches, <nowiki>{"name":{$in:[1,2,3]}}</nowiki>
* $nin - specifies an array of unwanted matches
* $all - the array value must contain all of the listed elements, <nowiki>{"name":{$all:[1,2,3]}}</nowiki>
* $exists - checks for the existence of a field, {"name":{$exists:true}}
* $mod - checks a modulo, <nowiki>{"name":{$mod:[3,2]}}</nowiki> is the same as "name" % 3 == 2
* $size - checks the size of an array value, {"name": {$size:2}} matches arrays ''name'' with 2 elements
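As plain object literals, these operators look like this (field names such as ''qty'' and ''tags'' are made up for illustration):

```javascript
// Query fragments for the special operators; these are ordinary
// JavaScript objects that the driver serializes and sends to the server.
var inQuery     = {qty:  {$in:  [1, 2, 3]}};    // qty is one of 1, 2, 3
var ninQuery    = {qty:  {$nin: [1, 2, 3]}};    // qty is none of 1, 2, 3
var allQuery    = {tags: {$all: ["db", "js"]}}; // tags contains both "db" and "js"
var existsQuery = {name: {$exists: true}};      // the name field is present
var modQuery    = {qty:  {$mod:  [3, 2]}};      // qty % 3 == 2
var sizeQuery   = {tags: {$size: 2}};           // tags has exactly 2 elements

// The $mod pair is [divisor, remainder]: 5 % 3 === 2, so a qty of 5 would match.
console.log(5 % modQuery.qty.$mod[0] === modQuery.qty.$mod[1]); // true
```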
 
== Queries inside objects and arrays ==
If you have a document with nested objects/arrays then the keys inside these nested objects can still be used for queries.
 
For example with the following document
 
{
    "_id": idvalue,
    "author":{
        "firstname":"Daniel",
        "lastname": "Defoe"
    },
    <nowiki>"books":[</nowiki>
        {
            "title":"Robinson Crusoe",
            "year": 1714
        }
    ]
}
 
not only the _id field can be used as a query field - the firstname and even the title can be used too. This can be done by using nested field names as strings, concatenated with periods.
 
collection.find({"author.firstname":"Daniel"})
 
This works even inside arrays
 
collection.find({"books.year":1714})
 
== Query options ==
Query options define the behavior of the query.
 
var options = {
    "limit": 20,
    "skip": 10,
    "sort": "title"
}
 
collection.find({}, options).toArray(...);
 
=== Paging ===
Paging can be achieved with option parameters limit and skip
 
{
    "limit": 20,
    "skip": 10
}
 
retrieves up to 20 elements, skipping the first 10.
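The skip/limit semantics can be sketched in plain JavaScript, with Array.prototype.slice standing in for what the server does (the record set here is made up):

```javascript
// 50 hypothetical matching records, numbered 1..50.
var matches = [];
for (var i = 1; i <= 50; i++) matches.push({n: i});

var options = {limit: 20, skip: 10};

// The server skips the first `skip` matches, then returns up to `limit` of them.
var page = matches.slice(options.skip, options.skip + options.limit);
console.log(page.length); // 20
console.log(page[0].n);   // 11 - the first 10 records were skipped
```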
 
=== Sorting ===
Sorting can be achieved with the option parameter sort, which takes an array of sort preferences
 
{
    <nowiki>"sort": [['field1','asc'], ['field2','desc']]</nowiki>
}
 
With single ascending field the array can be replaced with the name of the field.
 
{
    "sort": "name"
}
 
=== Explain ===
Option parameter explain turns the query into an explain query.
 
== Cursors ==
Cursor objects are the results of queries and can be used to fetch individual records from the database.
 
=== nextObject ===
cursor.nextObject(function(err, doc){}) retrieves the next record from the database. If doc is null, there are no more records.
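The fetch loop can be sketched with a hypothetical in-memory cursor; a real cursor comes from collection.find() and reads from the database instead of an array:

```javascript
// Stand-in for a driver cursor: nextObject hands back documents one by one
// and passes null once the records run out, just like the real driver does.
function makeMockCursor(docs) {
    var i = 0;
    return {
        nextObject: function (callback) {
            callback(null, i < docs.length ? docs[i++] : null);
        }
    };
}

var results = [];
function drain(cursor, done) {
    cursor.nextObject(function (err, doc) {
        if (err || doc === null) return done(err);
        results.push(doc);
        drain(cursor, done); // fetch the next record
    });
}

drain(makeMockCursor([{name: "David"}, {name: "Jessica"}]), function (err) {
    console.log(results.length); // 2
});
```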
 
=== each ===
cursor.each(function(err, doc){}) retrieves all matching records one by one.
 
=== toArray ===
cursor.toArray(function(err, docs){}) converts the cursor object into an array of all the matching records. Probably the most convenient way to retrieve results but be careful with large datasets as every record is loaded into memory.
 
collection.find().toArray(function(err, docs){
    console.log("retrieved records:");
    console.log(docs);
});
 
=== rewind ===
cursor.rewind() resets the internal pointer in the cursor to the beginning.
 
== Counting matches ==
Counting total number of found matches can be done against cursors with method count.
 
cursor.count(callback)
 
Where
 
* callback is the callback function with two parameters - an error object (if an error occurred) and the number of matches as an integer.
 
Example
 
cursor.count(function(err, count){
    console.log("Total matches: "+count);
});
 
= Replicasets =
== Introduction ==
Replica sets are the asynchronous master/slave replication added to MongoDB that takes care of all the failover and recovery for the member nodes. According to the MongoDB documentation, a replica set is
 
* Two or more nodes that are copies of each other
* Automatic assignment of a primary(master) node if none is available
* Drivers that automatically detect the new master and send writes to it
 
More information at [http://www.mongodb.org/display/DOCS/Replica+Sets Replicasets]
 
== Driver usage ==
To create a new replica set, follow the instructions on the MongoDB site to set up the config and the replica set instances. Then use the driver:
 
<nowiki>var replSet = new ReplSetServers( [ </nowiki>
    new Server( "127.0.0.1", 30000, { auto_reconnect: true } ),
    new Server( "127.0.0.1", 30001, { auto_reconnect: true } ),
    new Server( "127.0.0.1", 30002, { auto_reconnect: true } )
  ],
  {rs_name:RS.name}
);
 
var db = new Db('integration_test_', replSet);
db.open(function(err, p_db) {
  // Do your app stuff :)
})
 
The ReplSetServers object has the following parameters
 
var replSet = new ReplSetServers(servers, options)
 
Where
 
* servers is an array of Server objects
* options can contain the following options
 
== Replicaset options ==
Several options can be passed to the ReplSetServers constructor with the options parameter.
 
* rs_name is the name of the replica set you configured when you started the server; you can have multiple replica sets running on your servers.
* read_secondary sets the driver to read from secondary servers (slaves) instead of only from the primary (master) server.
* socketOptions - a collection of per-socket settings
 
== Socket options ==
Several options can be set for the socketOptions.
 
* timeout - seconds before the connection times out, default:0
* noDelay - disables the Nagle algorithm, default:true
* keepAlive - sets if keepAlive is used, default:0 which means no keepAlive; set higher than 0 to enable keepAlive
* encoding - 'ascii'|'utf8'|'base64', default:null
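Put together, a hypothetical ReplSetServers options object with socket settings might look like this (the replica set name and all values are illustrative, not recommendations):

```javascript
// Illustrative options object for the ReplSetServers constructor.
var replSetOptions = {
    rs_name: "my_replica_set",  // must match the replica set name configured on the servers
    read_secondary: true,       // allow reads from secondary (slave) members
    socketOptions: {
        timeout: 0,             // seconds before the connection times out, 0 = never
        noDelay: true,          // disable the Nagle algorithm
        keepAlive: 0,           // 0 = no keepAlive; a value > 0 enables it
        encoding: null          // 'ascii' | 'utf8' | 'base64'; null = default
    }
};
console.log(replSetOptions.socketOptions.noDelay); // true
```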
 
= Indexes =
Indexes are needed to make queries faster. For example if you need to find records by a field named ''username'' and the field has a related index set, then the query will be a lot faster compared to if the index was not present.
 
See [http://www.mongodb.org/display/DOCS/Indexes MongoDB documentation] for details.
 
== Create indexes with createIndex() ==
createIndex adds a new index to a collection. For checking if the index was already set, use ensureIndex instead.
 
<nowiki>collection.createIndex(index[, options], callback)</nowiki>
 
or
 
<nowiki>db.createIndex(collectionname, index[, options], callback)</nowiki>
 
where
 
* index is the field or fields to be indexed. See ''index field''
* options are options, for example {sparse: true} to include only records that have the indexed field set, or {unique: true} for unique indexes. If options is a boolean value, it indicates whether it's a unique index or not.
* callback gets two parameters - an error object (if an error occurred) and the name of the newly created index
 
== Ensure indexes with ensureIndex() ==
Same as createIndex with the difference that the index is checked for existence before adding to avoid duplicate indexes.
 
== Index field ==
Index field can be a simple string like "username" to index a certain field (in this case, a field named ''username'').
 
collection.ensureIndex("username",callback)
 
It is possible to index fields inside nested objects, for example "user.firstname" to index the field named ''firstname'' inside an embedded object named ''user''.
 
collection.ensureIndex("user.firstname",callback)
 
It is also possible to create mixed indexes to include several fields at once.
 
collection.ensureIndex({firstname:1, lastname:1}, callback)
 
or with tuples
 
<nowiki>collection.ensureIndex([["firstname", 1], ["lastname", 1]], callback)</nowiki>
 
The number value indicates direction - if it's 1, then it is an ascending value, if it's -1 then it's descending. For example if you have documents with a field ''date'' and you want to sort these records in descending order then you might want to add corresponding index
 
collection.ensureIndex({date:-1}, callback)
 
== Remove indexes with dropIndexes() ==
All indexes can be dropped at once with dropIndexes
 
collection.dropIndexes(callback)
 
callback gets two parameters - an error object (if an error occurred) and a boolean value true if the operation succeeded.
 
== Get index information with indexInformation() ==
indexInformation can be used to fetch some useful information about collection indexes.
 
collection.indexInformation(callback)
 
Where callback gets two parameters - an error object (if an error occurred) and an index information object.
 
The keys in the index object are the index names and the values are tuples of included fields.
 
For example if a collection has two indexes - by default an ascending index for the _id field and an additional descending index for the "username" field - then the index information object would look like the following
 
{
    <nowiki>"_id":[["_id", 1]],</nowiki>
    <nowiki>"username_-1":[["username", -1]]</nowiki>
}
 
 
= GridStore =
GridFS is a scalable MongoDB ''filesystem'' for storing and retrieving large files. The default limit for a MongoDB record is 16MB, so to store data that is larger than this limit, GridFS can be used. GridFS shards the data into smaller chunks automatically. See [http://www.mongodb.org/display/DOCS/GridFS+Specification MongoDB documentation] for details.
 
GridStore is a single file inside GridFS that can be managed by the script.
 
== Open GridStore ==
Opening a GridStore (a single file in GridFS) is a bit similar to opening a database. At first you need to create a GridStore object and then open it.
 
<nowiki>var gs = new mongodb.GridStore(db, filename, mode[, options])</nowiki>
 
Where
 
* db is the database object
* filename is the name of the file in GridFS that needs to be accessed/created
* mode indicates the operation, can be one of:
** "r" (Read): Looks for the file information in the fs.files collection, or creates a new id for this object.
** "w" (Write): Erases all chunks if the file already exists.
** "w+" (Append): Finds the last chunk, and keeps writing after it.
* options can be used to specify some metadata for the file, for example content_type, metadata and chunk_size
 
Example:
 
var gs = new mongodb.GridStore(db, "test.png", "w", {
    "content_type": "image/png",
    "metadata":{
        "author": "Daniel"
    },
    "chunk_size": 1024*4
});
 
When the GridStore object is created, it needs to be opened.
 
gs.open(callback);
 
callback gets two parameters - an error object (if an error occurred) and the GridStore object.
 
An opened GridStore object has a set of useful properties
 
* gs.length - length of the file in bytes
* gs.contentType - the content type for the file
* gs.uploadDate - when the file was uploaded
* gs.metadata - metadata that was saved with the file
* gs.chunkSize - chunk size
 
Example
 
gs.open(function(err, gs){
    console.log("this file was uploaded at "+gs.uploadDate);
});
 
== Writing to GridStore ==
Writing can be done with write
 
gs.write(data, callback)
 
where data is a Buffer or a string; callback gets two parameters - an error object (if an error occurred) and a result value which indicates if the write was successful or not.
 
While the GridStore is not closed, every write is appended to the opened GridStore.
 
== Writing a file to GridStore ==
This function opens the GridStore, streams the contents of the file into GridStore, and closes the GridStore.
 
gs.writeFile( file, callback )
 
where
 
* file is a file descriptor, or a string file path
* callback is a function with two parameters - an error object (if an error occurred) and the GridStore object.
 
== Reading from GridStore ==
Reading from GridStore can be done with read
 
<nowiki>gs.read([size], callback)</nowiki>
 
where
 
* size is the length of the data to be read
* callback is a callback function with two parameters - an error object (if an error occurred) and the data (binary string)
 
== Streaming from GridStore ==
You can stream data as it comes from the database using stream
 
<nowiki>gs.stream([autoclose=false])</nowiki>
 
where
 
* autoclose - if true, the current GridStore will be closed when EOF is reached and a 'close' event will be fired
 
The function returns [http://nodejs.org/docs/v0.4.12/api/streams.html#readable_Stream read stream] based on this GridStore file. It supports the events 'read', 'error', 'close' and 'end'.
 
== Delete a GridStore ==
GridStore files can be unlinked with unlink
 
mongodb.GridStore.unlink(db, name, callback)
 
Where
 
* db is the database object
* name is either the name of a GridStore object or an array of GridStore object names
* callback is the callback function
 
== Closing the GridStore ==
GridStore needs to be closed after usage. This can be done with close
 
gs.close(callback)
 
== Check the existence of a GridStore file ==
Checking if a file exists in GridFS can be done with exist
 
mongodb.GridStore.exist(db, filename, callback)
 
Where
 
* db is the database object
* filename is the name of the file to be checked or a regular expression
* callback is a callback function with two parameters - an error object (if an error occurred) and a boolean value indicating if the file exists or not
 
== Seeking in a GridStore ==
Seeking can be done with seek
 
gs.seek(position);
 
This function moves the internal pointer to the specified position.


To undelete our files, we first need to identify the hard drive that we want to undelete from. In the terminal window, type in:

sudo fdisk -l
and press enter.

What you’re looking for is a line that ends with HPFS/NTFS (under the heading System). In our case, the device is “/dev/sda1”. This may be slightly different for you, but it will still begin with /dev/. Note this device name.

If you have more than one hard drive partition formatted as NTFS, then you may be able to identify the correct partition by the size. If you look at the second line of text in the screenshot above, it reads “Disk /dev/sda: 136.4 GB, …” This means that the hard drive that Ubuntu has named /dev/sda is 136.4 GB large. If your hard drives are of different size, then this information can help you track down the right device name to use. Alternatively, you can just try them all, though this can be time consuming for large hard drives.

Now that you know the name Ubuntu has assigned to your hard drive, we’ll scan it to see what files we can uncover.

In the terminal window, type:

sudo ntfsundelete <HD name>

and hit enter. In our case, the command is:

sudo ntfsundelete /dev/sda1


The names of files that can be recovered show up in the far right column. The percentage in the third column tells us how much of that file can be recovered. Three of the four files that we originally deleted are showing up in this list, even though we shut down the computer right after deleting the four files - so even in ideal cases, your files may not be recoverable.

Nevertheless, we have three files that we can recover – two JPGs and an MPG.

Note: ntfsundelete is immediately available in the Ubuntu 9.10 Live CD. If you are in a different version of Ubuntu, or for some other reason get an error when trying to use ntfsundelete, you can install it by entering “sudo apt-get install ntfsprogs” in a terminal window.

To quickly recover the two JPGs, we will use the * wildcard to recover all of the files that end with .jpg.

In the terminal window, enter

sudo ntfsundelete <HD name> -u -m '*.jpg'

which is, in our case,

sudo ntfsundelete /dev/sda1 -u -m '*.jpg'


The two files are recovered from the NTFS hard drive and saved in the current working directory of the terminal. By default, this is the home directory of the current user, though we are working in the Desktop folder.

Note that the ntfsundelete program does not make any changes to the original NTFS hard drive. If you want to take those files and put them back in the NTFS hard drive, you will have to move them there after they are undeleted with ntfsundelete. Of course, you can also put them on your flash drive or open Firefox and email them to yourself – the sky’s the limit!

We have one more file to undelete – our MPG.


Note the first column on the far left. It contains a number, its Inode. Think of this as the file’s unique identifier. Note this number.

To undelete a file by its Inode, enter the following in the terminal:

sudo ntfsundelete <HD name> -u -i <Inode>

In our case, this is:

sudo ntfsundelete /dev/sda1 -u -i 14159


This recovers the file, along with an identifier that we don’t really care about. All three of our recoverable files are now recovered.

