Wednesday, March 14, 2012

Dumping MySQL Privileges and Users

Doing a mysqldump of a database is easy. You can take the resulting SQL file and re-create the database on another server. What that dump is missing, though, is the users and privileges your app depends on to access the database. Here is how you can dump your users and privileges to a SQL file that can easily be imported on another server.
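
For reference, a basic per-database dump and restore looks something like this (mydatabase is just a placeholder name):

# dump one database to a file
mysqldump -u root -p mydatabase > mydatabase.sql
# on the destination server: create the empty database, then load the dump
mysqladmin -u root -p create mydatabase
mysql -u root -p mydatabase < mydatabase.sql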

** Make sure to replace XXXXX with your actual root password. There is no space after -p

mysql -u root -pXXXXX -B -N $@ -e "SELECT DISTINCT CONCAT('SHOW GRANTS FOR ''', user, '''@''', host, ''';') AS query FROM mysql.user" | mysql -u root -pXXXXX $@ | sed 's/\(GRANT .*\)/\1;/;s/^\(Grants for .*\)/## \1 ##/;/##/{x;p;x;}' > users.sql

The resulting users.sql file can be edited down to just the users you need and then used with the source command to import them into another MySQL server.
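
For example, assuming users.sql has been copied to the new server and you start the mysql client from the directory that contains it:

mysql -u root -pXXXXX
mysql> source users.sql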

Thursday, February 2, 2012

Reduce InnoDB Hard Drive Usage: ibdata1 ib_log


InnoDB databases in MySQL are stored in one huge file by default. This file never shrinks: it does not release space back to the operating system when you delete tables. If you are developing a database and make frequent changes to the structure, you will quickly notice that this file grows much larger than all of your databases combined. The file is called ibdata1.
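
To see whether this applies to your server, check the size of the file and whether file-per-table mode is already on (paths assume the default Debian/Ubuntu layout):

ls -lh /var/lib/mysql/ibdata1
mysql -u root -pXXXXX -e "SHOW VARIABLES LIKE 'innodb_file_per_table'"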

So what can you do about it? MySQL allows you to store each InnoDB table in its own file instead. Here's how you do it:

  1. dump all databases, stored procedures, triggers, etc. (you will restore them afterwards; see the sketch after this list)
  2. DROP all databases (except phpmyadmin or other system DBs)
  3. /etc/init.d/mysql stop (stop MySQL server)
  4. nano /etc/mysql/my.cnf (edit the MySQL configuration file and add the setting below to the [mysqld] section)

    [mysqld]

    innodb_file_per_table=1
  5. rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile* (delete the ibdata1 and ib_logfile files)
  6. /etc/init.d/mysql start (start MySQL server)
  7. restore dbs from dump (Now all databases will be split into many files)
The difference is that when you drop a table, its .ibd file is deleted and your OS can reclaim that hard drive space.
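
A minimal sketch of steps 1 and 7, assuming root access and that a single full dump file (alldbs.sql, a placeholder name) is acceptable:

# step 1: dump everything, including stored procedures and triggers
mysqldump -u root -pXXXXX --all-databases --routines --triggers > alldbs.sql
# step 7: after MySQL is back up with innodb_file_per_table=1
mysql -u root -pXXXXX < alldbs.sql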

Wednesday, January 18, 2012

How To Find The Largest Files On Your LAMP

My development server has been running low on space lately, which is strange because I haven't been adding anything to it. I track my space usage with the df command.
df -h

Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda2            8.0G  4.8G  3.3G  60% /
tmpfs                 306M     0  306M   0% /lib/init/rw
udev                  286M   40K  286M   1% /dev
tmpfs                 306M     0  306M   0% /dev/shm
/dev/xvda1            130M   15M  109M  12% /mnt/boot
none                  306M     0  306M   0% /dev/shm

I've noticed the free space on xvda2 slowly shrinking over the last few months. What could be causing it? None of my projects are that big, so how do I find what is responsible? Here's the command you'll need:
du -sm * | sort -nr | head -15
Just make sure you cd to the directory you want to inspect; this command will show you the 15 largest files and directories below your current directory. I started at the root ( cd / ) and followed the largest directory it found each time. In my case it was the ibdata1 file in MySQL that had grown to over 2.6 GB. I had been making lots of structure changes to my databases, and it turns out that InnoDB does not release that space after you delete a table or modify its structure. After a few months my 300 MB worth of databases were taking up almost 3 GB.
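
The drill-down looks something like this when, as in my case, the culprit turns out to be ibdata1 in the default MySQL data directory:

cd /
du -sm * | sort -nr | head -15     # var is the largest entry
cd /var
du -sm * | sort -nr | head -15     # lib leads, keep descending
cd /var/lib/mysql
du -sm * | sort -nr | head -15     # ibdata1 tops the list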

Jobs in Linux: Sending stuff to the Background

I use SSH to access my servers and often run commands that take a long time to finish. I could open another SSH window, but it's actually possible to multitask in the same shell. The commands you need to know are:

  • & (start job in background)
  • jobs (list all jobs)
  • fg (move a job to the foreground)
  • bg  (move a job to the background)
  • CTRL + Z (suspend the current job)
  • CTRL + C (kill the current job)
The & (ampersand) is used to run a command and send it to the background all in one go. (Example: sleep 100&) If you need to run a command and you know it will take a while, you can run it this way and you'll be able to run other jobs while it runs in the background. You can always see a list of your jobs and their job numbers by using the jobs command.
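
A quick example of keeping the shell free while two long jobs run:

# start two long-running commands in the background
sleep 100 &
sleep 200 &
# list them with their job numbers
jobs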

If you run a command and it's taking too long, you can press CTRL + Z (suspend the foreground job), then type bg [job number] to resume it in the background. For example:
sleep 100 
CTRL + Z 
^Z 
[1]+  Stopped sleep 100
bg 1
[1]+  Running sleep 100
This will allow you to run other commands while sleep 100 continues running in the background.

If you need to suspend a job running in the background, use the fg [job number] command to bring it to the foreground (making it your active job) and press CTRL + Z to pause it. In the event that you need to kill a job, bring it to the foreground and press CTRL + C.
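
Putting it together, killing a background job looks like this:

# start a long job in the background
sleep 100 &
# find its job number
jobs
# bring job 1 to the foreground, then press CTRL + C to kill it
fg 1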