Laravel — How to change MySQL data directory on Digital Ocean
We saw rapid growth in business, and before we knew it we had consumed 83% of the storage space assigned to our Digital Ocean plan. We needed to take ACTION.
1) We didn’t have much time to arrange a storage increase and needed a fail-safe plan that would keep all the data safe and intact.
2) Resizing the droplet would have required shutting it down, and we didn’t want downtime; DO also recommends taking a snapshot beforehand anyway, in case anything goes wrong (it also didn’t feel right that we only needed storage but would have to buy memory and CPU as a by-product).
3) There was not much free space left for an SQL dump, and the application locked up as soon as I started taking one (too many connections).
1) As a developer you need to remain calm under pressure (not easy :-)) and always go for the solution that works in the long run.
2) I had all the project files on GitHub, so at no point was I worried about getting the application up and running again. The only concern was the data, so I pinpointed that I needed a data dump, not a full droplet snapshot.
3) I added a block volume to my droplet to solve the storage space issue. Simply put, a volume is an additional drive (like a USB stick) that you attach to your droplet.
# SSH in as the root user
$ ssh user@ip

# Create a mount point for your volume:
$ mkdir -p /mnt/database
# Mount your volume at the newly-created mount point:
$ mount -o discard,defaults,noatime /dev/disk/by-id/scsi-0DO_Volume_database /mnt/database
# Change fstab so the volume will be mounted after a reboot
$ echo '/dev/disk/by-id/scsi-0DO_Volume_database /mnt/database ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
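Before trusting the volume with data, it is worth confirming that the fstab entry you just appended actually matches the mount point, so the volume comes back after a reboot. A minimal sketch of such a check (the helper function is my own, not from DigitalOcean's docs):

```shell
#!/bin/sh
# fstab_has_mount FSTAB MOUNT_POINT
# Succeeds if FSTAB contains a non-comment entry whose second field
# (the mount point) is exactly MOUNT_POINT -- a cheap guard before reboot.
fstab_has_mount() {
  awk -v mp="$2" '$1 !~ /^#/ && $2 == mp { found = 1 } END { exit !found }' "$1"
}

# Example against the entry appended above:
if fstab_has_mount /etc/fstab /mnt/database; then
  echo "fstab entry present; the volume will be remounted after reboot"
else
  echo "fstab entry missing; the volume will NOT come back after reboot"
fi
```

The `nofail` option in the entry itself is also worth keeping: it lets the droplet boot even if the volume is detached.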
Now I could take the data dump in /mnt/database. Or so I thought, because when I tried:
mysqldump db_name --result-file=/mnt/database/backup/PRODUCTION-dump.sql --port=... --user=... -p
All of a sudden I started getting “Site is down” messages. When I saw the spikes across all the metrics I knew this was caused by taking the dump; a simple query on the console confirmed it with the error: too many connections. So I stopped the dump immediately and the site was back to normal again.
The reason was that I was using InnoDB, and the dump locked the tables and consumed all the RAM for buffer memory; so what I did instead was this:
mysqldump db_name --result-file=/mnt/database/backup/PRODUCTION-dump.sql --port=... --single-transaction --quick --user=... -p
Reference: MySQL 8.0 Reference Manual, §4.5.4 “mysqldump - A Database Backup Program”.
It took the dump to the specified location under normal site load, and everybody seemed to be happy. Now we had a safe approach, but I knew that restoring a very large DB dump would take a long time, so make sure you account for this in your migration strategy.
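The difference between the two invocations comes down to two flags. A small sketch that assembles the safer command (the helper function and the user name are placeholders of mine, not from the article):

```shell
#!/bin/sh
# Build the non-locking mysqldump invocation. --single-transaction takes
# a consistent snapshot of InnoDB tables inside one transaction instead
# of locking them, and --quick streams rows to the result file rather
# than buffering whole tables in memory.
build_dump_cmd() {
  db=$1; outfile=$2; user=$3
  printf 'mysqldump %s --result-file=%s --single-transaction --quick --user=%s -p' \
    "$db" "$outfile" "$user"
}

build_dump_cmd db_name /mnt/database/backup/PRODUCTION-dump.sql backup_user
```

Note that `--single-transaction` only gives a consistent snapshot for InnoDB tables; MyISAM tables would still need locking.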
At this point we had covered all our bases; now it was time for the real action:
1 - Open SSH console:
- ssh user@ip
- sudo -i -> (user becomes root; sudo password needed)

2 - Put site into maintenance mode:
- cd /home/user/site_name -> php artisan down

3 - Stop MySQL service (root):
- systemctl status mysql -> systemctl stop mysql -> systemctl status mysql

4 - Move database folder to new volume:
- mv /var/lib/mysql /mnt/database/mysql

5 - Create a symbolic link (root):
- ln -s /mnt/database/mysql /var/lib/mysql -> ll /var/lib/ (make sure the link is there: the cyan color)

6 - Configure AppArmor access:
- nano /etc/apparmor.d/tunables/alias
- (insert this at the end of the file) alias /var/lib/mysql/ -> /mnt/database/mysql/, (^O[save] -> ^X[exit])
- cat /etc/apparmor.d/tunables/alias (make sure the new rule is there)

7 - Restart AppArmor:
- systemctl restart apparmor

8 - Start MySQL service:
- systemctl start mysql -> systemctl status mysql

9 - Remove maintenance mode from site:
- php artisan up
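The steps above can be collected into one script. This is a sketch of my own, not from the article: it keeps the same placeholder paths, leaves the AppArmor alias edit from step 6 as a manual step, and with DRY_RUN set it only prints the commands it would run instead of executing them.

```shell
#!/bin/sh
# Replay steps 2-9 above. With DRY_RUN set in the environment, every
# command is merely printed; unset it (and run as root) to execute for real.
set -e

migrate_mysql_datadir() {
  run=${DRY_RUN:+echo}               # "echo" in dry-run mode, empty otherwise
  site=${1:-/home/user/site_name}    # placeholder path from the article

  $run cd "$site"                    # step 2: maintenance mode
  $run php artisan down
  $run systemctl stop mysql          # step 3
  $run mv /var/lib/mysql /mnt/database/mysql      # step 4
  $run ln -s /mnt/database/mysql /var/lib/mysql   # step 5
  # step 6: edit /etc/apparmor.d/tunables/alias by hand, then:
  $run systemctl restart apparmor    # step 7
  $run systemctl start mysql         # step 8
  $run php artisan up                # step 9
}

DRY_RUN=1 migrate_mysql_datadir
```

Doing a dry run first is cheap insurance: the `mv` of the data directory is the one step you really do not want to discover was mistyped after the fact.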
And there you have it. Happy migration!!