Recovering a corrupted MySQL database with TwinDB
MySQL is a popular database system, used for all sorts of purposes.
The default storage engine is InnoDB, and the default configuration has one really annoying "feature": the system creates one binary data file, usually called /var/lib/mysql/ibdata1, containing the data for all the databases you've created on the system.
This file grows and can never be shrunk. Deleting a database doesn't make the file any smaller (in fact it makes it bigger, because the delete command simply gets added to it).
If this file gets corrupted for some reason, you can lose all the data in every table in all your databases on that machine.
There is a way of avoiding (some of) these problems - you can tell the system to create one of these files per table, instead of one enormous file for the whole machine. You have to do this before creating your databases, though - once the system has started on a single system-wide ibdata1 file, you can't convert it to one file per table without dumping all your databases, deleting them, making the configuration change, and then reloading the database dump.
Inconvenient though this sounds, I recommend you do it now, before ibdata1 gets any bigger, and definitely before it gets corrupted.
One file per table, not one file per system
Here's how to create one data file per table instead of one for the whole system:
- Use mysqldump to dump all your databases, plus any stored procedures, triggers etc. you might have created, except for the mysql and performance_schema databases (mysqldump doesn't dump performance_schema by default, so in practice you only need to omit mysql)
- Irritatingly, mysqldump has no "exclude" option to tell it to process all databases except the ones you name. You have to get a list of all your databases, edit out the ones you don't want, and pass the rest of the list to mysqldump (see the sketch after this list) :(
- Delete (drop) all the databases you included in the dump (ie: everything except mysql and performance_schema)
- Stop MySQL
- Delete the ibdata1 and ib_log files (which are probably in /var/lib/mysql, and the log files are called ib_logfile0 and ib_logfile1)
- Add the line innodb_file_per_table=1 to the [mysqld] stanza of your /etc/mysql/my.cnf
- Start MySQL
- Restore all the databases from the dump you started off with
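Put together, the whole procedure looks something like this - very much a sketch, with the credentials, file names and service commands as assumptions, so check each step against your own setup (and your dump!) before running anything destructive:

# List the databases we want, excluding the system ones
DBS=$(mysql -u root -p"$PASS" -N -e "show databases" | grep -Ev '^(mysql|performance_schema|information_schema)$')

# Dump them all, including stored procedures and triggers
mysqldump -u root -p"$PASS" --databases $DBS --routines --triggers >alldbs.sql

# Drop them (only once you're sure the dump is complete and readable!)
for DB in $DBS; do mysql -u root -p"$PASS" -e "drop database \`$DB\`"; done

service mysql stop
rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1
# ...add innodb_file_per_table=1 to the [mysqld] stanza of /etc/mysql/my.cnf...
service mysql start

# Reload everything from the dump
mysql -u root -p"$PASS" <alldbs.sql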
There is an alternative method documented in the MySQL Reference Manual; the steps shown there are simpler, but need to be followed for each table to be converted. This is far more tedious than the above, which only needs doing once for the entire server, although the per-table process could probably be scripted.
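For reference, the per-table method boils down to rebuilding each table once innodb_file_per_table is enabled - something like this, with the database and table names obviously being placeholders:

mysql -u root -p -e "alter table mydb.mytable engine=InnoDB;"

With innodb_file_per_table=1 in effect, rebuilding a table like this moves its data out of ibdata1 into its own .ibd file.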
Once you've converted things by one means or another, you should now get one file named <tablename>.ibd for each table, located in the directory of the corresponding database under /var/lib/mysql, instead of one enormous file in /var/lib/mysql itself.
You will still have a /var/lib/mysql/ibdata1 file, but it won't be anything like as large, and it doesn't contain all the data from all your databases any more. Note, however, that it does still contain vital meta-information about each of your databases, so your MySQL server can still run into problems if it gets damaged. In my opinion those problems are far easier to recover from than losing all the data in the one big file, however.
Using one file per table has two big advantages:
- Each database has data files for its own tables, so if you delete a database, you get the disk space back again
- It's very unlikely that all your .ibd files will get corrupted at once, so any file system damage has far less of an impact
News Flash
Before you read everything below (it may still be useful to you - I don't want to put you off - but this bit might just be even more useful), have a look at innochecksum_changer, a tool I only found after I'd worked through everything documented below. It worked wonders on my data, even though I've found no reference to it in the TwinDB documentation yet.
Back to the recovery problem
Anyway, I had a system where I hadn't done the above, and the file system started to get corrupted, due to a hard disk problem. For unfortunate and complicated reasons due to some symbolic links I'd set up, my backups of the machine turned out not to include the directory /var/lib/mysql :(
So, I recovered what I could of the ibdata1 file (using the excellent ddrescue utility) and discovered the TwinDB InnoDB data recovery tool.
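If you haven't used ddrescue before, the invocation is along these lines (a sketch only - the input path, output path and retry count here are assumptions, not a record of exactly what I ran):

# Copy as much of the damaged file as possible, retrying bad areas three times
# and keeping a map of the unreadable sections for later passes
ddrescue -r3 /var/lib/mysql/ibdata1 ../recovery/ibdata1 ../recovery/ibdata1.map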
ddrescue told me that it managed to recover all except 712kbytes (in 6 sections) of my 23Gbyte ibdata1 file:
rescued:    23657 MB,  errsize:     712 kB,  current rate:        0 B/s
   ipos:    23200 MB,   errors:          6,  average rate:        0 B/s
   opos:    23200 MB,  time since last successful read:         15 s
Finished
That seemed pretty encouraging to me (although MySQL still wouldn't start with this ibdata1 file in place, even with innodb_force_recovery set all the way up to 6).
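For reference, that setting also goes in the [mysqld] stanza of my.cnf, and takes values from 1 (least aggressive) to 6 (most aggressive):

[mysqld]
innodb_force_recovery = 6

In this case even the maximum value didn't help, which is what pushed me towards the recovery tools below.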
So, I created a Debian Jessie VM to work in, and installed the packages git, make, gcc, flex and bison, as per the TwinDB documentation.
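In other words, on Jessie:

~# apt-get install git make gcc flex bison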
I downloaded the source code and built it [Update: I only found out later, from https://twindb.com/twindb-software-repository/, that there's a .deb package of twindb. I didn't try it, so I don't know whether it behaves any differently in practice]:
~# git clone https://github.com/twindb/undrop-for-innodb.git
Cloning into 'undrop-for-innodb'...
remote: Counting objects: 227, done.
remote: Total 227 (delta 0), reused 0 (delta 0), pack-reused 227
Receiving objects: 100% (227/227), 1.10 MiB | 639.00 KiB/s, done.
Resolving deltas: 100% (57/57), done.
Checking connectivity... done.
~#
I then ran make (the output is rather boring, so I won't reproduce it here).
This built the binaries:
- c_parser
- innochecksum_changer
- stream_parser
So far, so good…
https://github.com/twindb/undrop-for-innodb has links to various bits of documentation for using TwinDB. In my opinion, it completely lacks:
- any good summary of the different tools and what they are for
- an overall guide to which tools should be used with different forms of data loss
- a step-by-step guide to recovering a single database to the best of the tools' ability (which could obviously be followed repeatedly for each database you need to recover)
The author of the software is clearly trying to push people to pay for his online recovery tool instead of providing comprehensive documentation for the Open Source code:
"You can recover your database using the toolkit and detailed instructions from the blog posts. We can help you to do it much faster." (from https://github.com/twindb/undrop-for-innodb)
"UPDATE (01/01/2017): We stopped further development of undrop-for-innodb and do not support its open source versions." (from https://twindb.com/recover-corrupt-mysql-database)
I found that last link to have the most useful summary of what steps to take to actually achieve something useful, so here are my notes on how I went about recovering what I could from the corrupted 23Gbyte ibdata1 file:
Parse the file to see what InnoDB tables it contains
stream_parser is the tool to scan through the ibdata1 file and split it out into "pages" of information relating to the way InnoDB stores its data.
~/undrop-for-innodb# ./stream_parser -f ../recovery/ibdata1
Opening file: ../recovery/ibdata1
File information:

ID of device containing file:            65025
inode number:                          1311013
protection:                             100644 (regular file)
number of hard links:                        1
user ID of owner:                            0
group ID of owner:                           0
device ID (if special file):                 0
blocksize for filesystem I/O:             4096
number of blocks allocated:           46206984
time of last access:                1512587335 Wed Dec 6 19:08:55 2017
time of last modification:          1512574020 Wed Dec 6 15:27:00 2017
time of last status change:         1512586418 Wed Dec 6 18:53:38 2017
total size, in bytes:              23657971712 (22.033 GiB)

Size to process:                   23657971712 (22.033 GiB)
...progress summaries snipped...
All workers finished in 1988 sec
~/undrop-for-innodb#
Investigate the results
stream_parser doesn't tell you anything useful about what it found - it simply creates lots of files under pages-ibdata1/FIL_PAGE_INDEX and pages-ibdata1/FIL_PAGE_TYPE_BLOB.
In my case it created 274 files under FIL_PAGE_INDEX, and none under FIL_PAGE_TYPE_BLOB (possibly because none of my databases contain binary fields).
c_parser is the tool for then investigating these files to see what they contain.
FIL_PAGE_INDEX/0000000000000001.page contains information about each table of each database from your system. Specifically, it contains (in the fifth column of data) the internal InnoDB table number of each table.
FIL_PAGE_INDEX/0000000000000003.page contains information about the indexes belonging to each table (it corresponds to InnoDB's internal SYS_INDEXES dictionary); the index numbers it lists are what we need later on to locate the page files containing the actual data.
For example:
~/undrop-for-innodb# ./c_parser -4f pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page -t dictionary/SYS_TABLES.sql
-- Page id: 8, Format: REDUNDANT, Records list: Valid, Expected records: (45 45)
00000AB9F97C    3500159C3A2A31  SYS_TABLES      "Lego/inventory"        712     3       1       0       0       ""      0
(Many lines of output omitted - only one left as an example.)
This tells us that the 'inventory' table in the 'Lego' database is table number 712.
~/undrop-for-innodb# ./c_parser -4f pages-ibdata1/FIL_PAGE_INDEX/0000000000000003.page -t dictionary/SYS_INDEXES.sql | grep 712
00000AB9F97C    B500159AFA014F  SYS_INDEXES     712     1755    "GEN\_CLUST\_INDEX"     0       1       0       1415890
This tells us that there is a "GEN_CLUST_INDEX" (which, as far as I can tell, is the clustered index InnoDB generates itself when a table has no explicit primary key) with index number 1755, which turns out to correspond to one of the files created by stream_parser earlier on:
~/undrop-for-innodb# ls -al pages-ibdata1/FIL_PAGE_INDEX/*1755.page
-rw-r--r-- 1 root root 65536 Dec 6 22:47 pages-ibdata1/FIL_PAGE_INDEX/0000000000001755.page
Recover the data
https://twindb.com/recover-table-structure-from-innodb-dictionary/ has a fairly simple-looking sequence of steps to recover an entire database (so, I take back my comment earlier about this not being in the documentation - it's just not clearly labelled as such and not easy to find).
Create a subdirectory called dumps/default in the directory where you're working, and then perform the following steps in sequence:
Create a new database and load the InnoDB dictionary into it
~/undrop-for-innodb# mysql -p -e "create database recovery"
Enter password:
~/undrop-for-innodb# cat dictionary/SYS_*.sql | mysql -p recovery
Enter password:
~/undrop-for-innodb#
Get the tables
~/undrop-for-innodb# ./c_parser -4f pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page -t dictionary/SYS_TABLES.sql >dumps/default/SYS_TABLES 2>dumps/default/SYS_TABLES.sql
Get the table columns
~/undrop-for-innodb# ./c_parser -4f pages-ibdata1/FIL_PAGE_INDEX/0000000000000002.page -t dictionary/SYS_COLUMNS.sql >dumps/default/SYS_COLUMNS 2>dumps/default/SYS_COLUMNS.sql
Get the table indexes
~/undrop-for-innodb# ./c_parser -4f pages-ibdata1/FIL_PAGE_INDEX/0000000000000003.page -t dictionary/SYS_INDEXES.sql >dumps/default/SYS_INDEXES 2>dumps/default/SYS_INDEXES.sql
Get the field definitions
~/undrop-for-innodb# ./c_parser -4f pages-ibdata1/FIL_PAGE_INDEX/0000000000000004.page -t dictionary/SYS_FIELDS.sql >dumps/default/SYS_FIELDS 2>dumps/default/SYS_FIELDS.sql
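Those four extractions could be collapsed into a small loop - a sketch, assuming the page numbering shown above (pages 1 to 4 holding SYS_TABLES, SYS_COLUMNS, SYS_INDEXES and SYS_FIELDS respectively) holds on your system too:

i=1
for T in SYS_TABLES SYS_COLUMNS SYS_INDEXES SYS_FIELDS; do
    ./c_parser -4f pages-ibdata1/FIL_PAGE_INDEX/$(printf '%016u' $i).page \
        -t dictionary/$T.sql >dumps/default/$T 2>dumps/default/$T.sql
    i=$((i+1))
done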
Finally, we can…
Load the definitions into the recovery database
~/undrop-for-innodb# cat dumps/default/*.sql | mysql -p recovery
Enter password:
ERROR 1148 (42000) at line 2: The used command is not allowed with this MySQL version
Okay, what went on there?
Line 2 of the cat output starts with the command LOAD DATA LOCAL INFILE, which I seem to recall went out of fashion with the MySQL developers quite a few versions ago, so basically we're not allowed to do it like this any more.
There is a way round this - I'm pretty sure it's the "local" part which is being complained about. A little bit of fiddling seems to confirm this:
~/undrop-for-innodb# cat dumps/default/*.sql | sed 's/LOCAL //' | mysql -p recovery
Enter password:
ERROR 1290 (HY000) at line 2: The MySQL server is running with the --secure-file-priv option so it cannot execute this statement
Add the line secure_file_priv= to the [mysqld] stanza in /etc/mysql/my.cnf and restart MySQL.
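In other words, the stanza ends up looking something like this (an empty value simply disables the secure-file-priv restriction):

[mysqld]
secure_file_priv =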
Create a home directory for MySQL, copy the dump files into it, and make everything there owned by the mysql user:
# mkdir /home/mysql
# rsync -Pav dumps /home/mysql
# chown -R mysql: /home/mysql
Finally you can get MySQL to import the dumped data:
~/undrop-for-innodb# cat dumps/default/*.sql | sed 's/LOCAL //; s#/root/undrop-for-innodb#/home/mysql#' | mysql -p recovery
Enter password:
What do we end up with?
mysql> show tables;
+--------------------+
| Tables_in_recovery |
+--------------------+
| SYS_COLUMNS        |
| SYS_FIELDS         |
| SYS_INDEXES        |
| SYS_TABLES         |
+--------------------+
These tables contain the data definitions extracted from the ibdata1 file earlier.
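A nice side-effect is that you can now query the dictionary with SQL instead of grepping c_parser output. For example, something like this should list the tables and index numbers for a single database (a sketch, assuming the supplied dictionary definitions use the InnoDB column names NAME, ID and TABLE_ID):

mysql -p recovery -e "select t.NAME as table_name, i.NAME as index_name, i.ID as index_id
  from SYS_TABLES t join SYS_INDEXES i on i.TABLE_ID = t.ID
  where t.NAME like 'Lego/%'"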
Note that, as yet, nothing contains any actual useful data - we clearly need to investigate the documentation further…
Build sys_parser
Ah. The "make" command we ran earlier didn't actually (even try to) build all the tools.
We need to "make sys_parser" as well, which by default fails:
~/undrop-for-innodb# make sys_parser
sys_parser needs mysql development package( either -devel or -dev)
/bin/sh: 1: exit: Illegal number: -1
Makefile:61: recipe for target 'sys_parser' failed
make: *** [sys_parser] Error 2
Install the package libmysqlclient-dev and try again…
~/undrop-for-innodb# make sys_parser
/usr/bin/mysql_config
cc -o sys_parser sys_parser.c `mysql_config --cflags` `mysql_config --libs`
Now we have another tool we can use.
This one turns out to give us the MySQL create statement for each table we ask for (which is pretty tedious, depending on how many tables in how many databases you have…).
Note, we still haven't got to anything which gives us records in the table (ie: actual data)…
Find some data!
Back when we were using the c_parser tool above (on the pages-ibdata1/FIL_PAGE_INDEX/0000000000000003.page file), the output told us the index numbers of the page files containing the actual data. Let's take another look:
~/undrop-for-innodb# ./c_parser -4f pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page -t dictionary/SYS_TABLES.sql | grep Lego/inventory
00000AB9F97C    3500159C3A2A31  SYS_TABLES      "Lego/inventory"        712     3       1       0       0       ""      0
~/undrop-for-innodb# ./c_parser -4f pages-ibdata1/FIL_PAGE_INDEX/0000000000000003.page -t dictionary/SYS_INDEXES.sql | grep 712
00000AB9F97C    B500159AFA014F  SYS_INDEXES     712     1755    "GEN\_CLUST\_INDEX"     0       1       0       1415890
We now know that the data for the "inventory" table in the "Lego" database is in page file 1755. What can we find there?
Well, first we have to save the create statement which sys_parser gives us (creating a local Lego directory to hold it first, if need be):
~/undrop-for-innodb# ./sys_parser -u root -p <password> -d recovery Lego/inventory >Lego/inventory.sql
Then we can use that together with the page number to find some actual data!
~/undrop-for-innodb# ./c_parser -6f pages-ibdata1/FIL_PAGE_INDEX/0000000000001755.page -t Lego/inventory.sql
(This gives us lots of output, complete with a load data local infile statement which could be used to finally get it into MySQL).
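Putting it together, the last step is much the same as loading the dictionary earlier - a sketch, assuming the same dumps/default layout and the /home/mysql workaround from above:

# Data records go to stdout, the matching LOAD DATA statement to stderr
./c_parser -6f pages-ibdata1/FIL_PAGE_INDEX/0000000000001755.page \
    -t Lego/inventory.sql >dumps/default/inventory 2>dumps/default/inventory_load.sql

# Create the (empty) table from the sys_parser output, then load the data,
# applying the same LOCAL/path fix-ups as before
mysql -p recovery <Lego/inventory.sql
rsync -Pav dumps /home/mysql && chown -R mysql: /home/mysql
sed 's/LOCAL //; s#/root/undrop-for-innodb#/home/mysql#' dumps/default/inventory_load.sql | mysql -p recovery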
Interim conclusion
This is a very useful toolset, and I think it's a great pity the MySQL developers don't include it (or something even better, given their inside knowledge of how InnoDB works) with a standard MySQL distribution (or perhaps as a MySQL-tools add-on).
However, putting the pieces together, and drilling your way down through the databases, tables, indexes, fields, pages, and records to get at something which helps you recover your lost data is confusing and tedious.
The developer has almost certainly come up with a way of automating / scripting it, and presumably uses this behind the online recovery tool (which costs US$99 per gigabyte).
However, if the automated system doesn't work for you, you can always hand over US$900 per hour for an expert to cast their eyes over your data… See https://recovery.twindb.com/ if you happen to find this an attractive proposition.
I plan to investigate this a bit further and see whether I can improve these instructions to make the whole process a bit simpler, preferably a bit more automated, and in the long run a bit quicker. After all, the whole point of needing this toolset is that you may have lost a large amount of data from your MySQL system; you don't want to spend hours (or days) fiddling around with a tedious and poorly-documented application, even if it eventually helps you get your data back, if the process can be made quicker, more efficient and less error-prone.
Aaargh.
After all this fiddling around, searching through the documentation for something comprehensive and comprehensible, and learning more about the innards of InnoDB than I really wanted to know, I found that the original make command had built a tool for me which simply worked like a charm - and which I still haven't found mentioned in the TwinDB documentation!
See innochecksum_changer for details.