1 - All Databases
You need SSH access and possibly superuser privileges to run these commands.
mysqldump --all-databases | gzip > /path/backups/backup_$(date "+%b-%d-%Y-%H-%M-%S").sql.gz
mysqldump : utility to dump MySQL databases
-u specifies a user with the required database privileges. In this tutorial, I am using the root user.
-p tells mysqldump to prompt for that user's password.
-h specifies the host you are connecting to. You can simply omit this if it's localhost, but if it's a remote server, you need to specify its IP address.
gzip : utility that compresses the .sql dump as it is created.
date : adding $(date "+%b-%d-%Y-%H-%M-%S") gives each backup file a date-and-time stamp in its name.
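Putting the flags above together, a full dump from a remote host might look like this (the IP address is a placeholder; substitute your own server and backup path):

```shell
# Dump every database from a remote MySQL server (placeholder IP),
# prompt for root's password, compress the stream, and timestamp the file.
mysqldump -h 203.0.113.10 -u root -p --all-databases \
  | gzip > /path/backups/backup_$(date "+%b-%d-%Y-%H-%M-%S").sql.gz
```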
If you're just trying to back up a single database, you can run this command:
mysqldump -h localhost -u root -ppassword dbname | gzip > /path/backups/backup_$(date "+%b-%d-%Y-%H-%M-%S").sql.gz
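To restore a backup made this way, stream it back the other way (the file name and dbname below are placeholders for your own):

```shell
# Decompress the dump and pipe it straight into the mysql client;
# the target database must already exist.
gunzip < /path/backups/backup_Jan-01-2025-00-00-00.sql.gz \
  | mysql -h localhost -u root -p dbname
```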
2 - To export the data to a remote host, I advise using rsync.
Example command to transfer the backup to a backup server:
rsync -av --progress /your/local/backup/directory/backup_$(date "+%b-%d-%Y-%H-%M-%S").sql.gz root@remote.ip.address.here:/home/backups
rsync – tool to transfer files
-a archive mode, preserves permissions and timestamps
-v flag needed to view the output of rsync
--progress flag to see the progress of the transfer
/your/local/backup/directory path to your backup directory
backup_$(date "+%b-%d-%Y-%H-%M-%S").sql.gz this will be your backup file if you used the backup command from my first example.
root the user that has access to the remote server receiving the backup; in my case it's root, you can change this accordingly.
@remote.ip.address.here the remote location or IP address eg. @45.36.324.23
: separates the remote host from the destination path; it is always required when copying to a remote host.
/home/backups this is the directory you want the backup to be stored in.
3 - Fast data import trick
$ mysqldump -h localhost -u root -p --extended-insert --quick --no-create-info mydb mytable | gzip > mytable.sql.gz
A bit more about this line:
--extended-insert: groups many rows into each INSERT statement instead of writing one INSERT per row, which makes the import much faster.
--quick: useful when dumping large tables. By default MySQL reads the whole table into memory before writing it out; with --quick the rows are streamed one at a time, so the dump uses very little memory.
--no-create-info: this means only the data is exported; no CREATE TABLE statements will be added.
To do the import, do this:
Disable foreign key checks when importing batch data in MySQL:
SET foreign_key_checks = 0;
/* do your stuff REALLY CAREFULLY */
SET foreign_key_checks = 1;
Or add SET FOREIGN_KEY_CHECKS=0; to the beginning of your SQL file as you pipe it in (the <( ) syntax needs bash):
cat <(echo "SET FOREIGN_KEY_CHECKS=0;") data.sql | mysql -u root -p dbname
Or set it for the whole session with --init-command:
mysql --init-command="SET SESSION FOREIGN_KEY_CHECKS=0;" -u root -p dbname < data.sql
You don't need to run SET FOREIGN_KEY_CHECKS=1 afterwards, as the session variable is reset automatically when the session ends.
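Combining the pieces, a minimal sketch of the whole fast-import step (mydb and mytable.sql.gz come from the dump example above):

```shell
# Stream the compressed table dump into mysql with foreign key
# checks disabled for the session; no uncompressed copy hits the disk.
gunzip < mytable.sql.gz \
  | mysql --init-command="SET SESSION FOREIGN_KEY_CHECKS=0;" -u root -p mydb
```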
Check out Google Cloud if you are interested in backing up to their cloud service.