Migrating From Non-cPanel Sources To cPanel: An Indirect Approach

It would be wonderful to have a single tool that can migrate from any source into a managed environment like cPanel. Unfortunately, in the Linux world, it is easy to put anything you want anywhere you want, so even if a server is running a common LAMP stack with default installations, manual research is still required to determine where data is stored and how to access it.

This issue is compounded by non-standard installs of these common web server programs, and especially by non-standard programs. Website files and databases could literally be stored anywhere.

This doesn’t jibe with cPanel. It wants things to be in a known format, and puts everything just so when creating and updating accounts. So, if we can get data into a known format before we present it to cPanel for restoration, everything will fall into place.

What’s the scenario?

The idea here is that we have a system with files all over the place. For instance, let’s say that Apache is run from /opt/apache/ with domains stored at /var/www/websites/$domain/html/, DNS is served by BIND from /opt/named/, MySQL is off-premises at a cloud provider, and mail is being run through Google Apps.

cPanel has a known backup file format. It’s a compressed or uncompressed tarball with a series of folders and configuration files stored inside of it, including flat MySQL dumps, grant and create statements, SSL files, the web docroot, and mailbox information.

So, if we can filter the information on our example server into this format, we will have a legitimate and completely restorable cPanel backup file.
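If you already have a working cPanel server handy, the easiest way to study this layout is to generate a reference backup for a test account with the stock pkgacct script; the tarball lands in /home by default:

/scripts/pkgacct username
tar -tzf /home/cpmove-username.tar.gz | less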

For this article, I’ll show commands formulated using domain.com, and a username for this domain of username. You can of course adjust these variables to suit your needs, and if everything on the server is in a similar format, a quick loop can be crafted to process all sites at the same time. I’ll show an example of this at the end.

Roll-your-own cPanel backups!

We first start with a working directory. I don’t believe it is specifically required anymore, but your working folder should have the same name as the tarball you intend to create; it must include the cPanel username and follow one of a few accepted naming formats. The easiest one to remember is cpmove-username, but you can also use username or backup-epochtimestamp-username. Since I’m using /home/ as my temporary folder, I’ll use cpmove-username to avoid colliding with a real homedir. This command creates the entire folder structure, including nested folders:

mkdir -p /home/cpmove-username/{bandwidth,cp,dnszones,cron,homedir/{public_html,mail/{cur,tmp,new},etc},mysql,sslcerts,sslkeys,va,vad}

Generate the user file

Technically, the package only needs to contain one thing in order to generate a cPanel account: the file that goes into /var/cpanel/users/. This file is named after the user it belongs to, and contains a few choice lines. Here are the important ones for you to populate with data from your system:

DNS=domain.com
FEATURELIST=default
IP=123.45.67.89
MAILBOX_FORMAT=maildir
MXCHECK-domain.com=0
OWNER=root
PLAN=default
RS=paper_lantern
USER=username

If you check any file in /var/cpanel/users/ on your cPanel server, you will see a lot more lines, which reference limits and create dates and all that. Most of it is unnecessary to actually create an account and will default when you restore.

These important lines generally define the plan and features for the account (default to be safe), its domain name and username, mail routing, and the account owner.

This file is stored, in our example, as /home/cpmove-username/cp/username.
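A quick way to write that file in one shot, using a heredoc with our example values:

cat > /home/cpmove-username/cp/username <<'EOF'
DNS=domain.com
FEATURELIST=default
IP=123.45.67.89
MAILBOX_FORMAT=maildir
MXCHECK-domain.com=0
OWNER=root
PLAN=default
RS=paper_lantern
USER=username
EOF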

Populate additional files

It’s handy to carry over the appropriate password for the Linux user, if you do indeed have distinct users per domain; the shadow file holds just the password hash, which is why we cut the second field out of /etc/shadow. The password will, I believe, be randomized if this file is not populated:

echo -n $(grep ^username: /etc/shadow | cut -d: -f2) > /home/cpmove-username/shadow

Your cPanel users generally do not need shell access, so disable it from the get-go with:

echo -n /usr/local/cpanel/bin/noshell > /home/cpmove-username/shell

Fill the folders

Next, each folder needs to be filled with the individual data that belongs in it, collected from around your system. We start with the easy stuff: the document root for the site.

rsync -aqH /var/www/websites/domain.com/html/ /home/cpmove-username/homedir/public_html/

Since BIND runs on the same system, we will also collect its zonefiles. If one is not available, cPanel will generate one for you from WHM’s template upon account restore, so this step is optional.

cp -a /opt/named/domain.com.db /home/cpmove-username/dnszones/

Crontabs are in a generally known location, if they exist, so they should be added too:

cp -a /var/spool/cron/username /home/cpmove-username/cron/

It is a bit complicated to add an SSL certificate from the existing information, because the YAML ssl.db file must be created as well, so I leave this alone and let AutoSSL take over once the site is restored.

Collect database information

This has to be done in two parts per database. The cpmove file stores a create statement separate from the actual database contents. Our example user only has one database called data_base, so we will first dump this create statement:

mysqldump data_base --no-data --no-create-info --databases > /home/cpmove-username/mysql/data_base.create

We follow that up with the data:

mysqldump data_base > /home/cpmove-username/mysql/data_base.sql
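If an account owns more than one database, the same pair of dumps wraps neatly in a loop. A minimal sketch, assuming you already know the database names (another_base is hypothetical here):

for db in data_base another_base; do
  mysqldump --no-data --no-create-info --databases $db > /home/cpmove-username/mysql/$db.create
  mysqldump $db > /home/cpmove-username/mysql/$db.sql
done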

Grants for this database are entered into a single master file outside of the mysql folder. I happened to discover that the user for this database is called data_user. Fancy that! A developer who uses unique database users per database for security! In most cases, dumping grants from localhost would be sufficient:

mysql -Ns -e "show grants for data_user@localhost" >> /home/cpmove-username/mysql.sql

In our case, the grants were a bit odd since our MySQL database was remote, so we had to dump the grants and rewrite them with sed to produce the same output for localhost:

mysql -Ns -e "show grants for data_user@remotemysqlhost" | sed -e 's/remotemysqlhost/localhost/g' >> /home/cpmove-username/mysql.sql

This mysql.sql file is restored following the database creation, and the users generated (as well as the databases created) are mapped to the new user.
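For reference, the finished mysql.sql just holds plain GRANT statements, something along these lines (the hash and privilege list will vary):

GRANT USAGE ON *.* TO 'data_user'@'localhost' IDENTIFIED BY PASSWORD '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19'
GRANT ALL PRIVILEGES ON `data_base`.* TO 'data_user'@'localhost'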

Zip it up

At this point (well, actually a few steps before this point) we have a cpmove file structure that will restore properly, and it contains all of the data we want to restore on our cPanel server. Let’s compress it. Note the -C flag: cPanel expects the cpmove-username/ folder at the root of the archive, so we archive it relative to /home rather than embedding the full path:

tar -C /home -zcf /home/cpmove-username.tar.gz cpmove-username
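You can sanity-check the archive before shipping it off; every entry should start with cpmove-username/:

tar -tzf /home/cpmove-username.tar.gz | head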

All that is left now is to get this file onto our cPanel server and restore!
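On the cPanel side, the file can be restored through WHM’s “Restore a Full Backup/cpmove File” interface, or from the shell with the stock restorepkg script (cpanelserver below stands in for your destination host):

scp /home/cpmove-username.tar.gz root@cpanelserver:/home/
/scripts/restorepkg /home/cpmove-username.tar.gz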

This seems very scriptable!

You’re right! It can be very scriptable for a single server with a known structure. However, servers vary as I mentioned at the beginning of the article, and you may need to tweak your procedure for every server you use this technique on. Here’s all of the above information put into a single script. This script only works on our imaginary example system, with the following additional information.

We assume that one apache user owns all data, in keeping with tradition, and that we need to generate our own usernames. Therefore we cannot populate the shadow file, nor the crontab. I start by creating a file called userlist.txt that I can loop against, in the format $domain $user:

domain.com domain
anotherdomain.com anotherd
widgets-4-sale.com widgets

The folder structure is known, and luckily all of these sites are WordPress, so we know how to grep the database information out of wp-config.php.
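For reference, the database block of a stock wp-config.php looks something like this (values here are illustrative), which is why cutting on single quotes and taking the fourth field works:

define('DB_NAME', 'data_base');
define('DB_USER', 'data_user');
define('DB_PASSWORD', 'secretpass');
define('DB_HOST', 'remotemysqlhost');

With that, the whole loop: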

while read domain user; do
  echo "working on $domain..."
  mkdir -p /home/cpmove-$user/{bandwidth,cp,dnszones,cron,homedir/{public_html,mail/{cur,tmp,new},etc},mysql,sslcerts,sslkeys,va,vad}
  echo "DNS=$domain
FEATURELIST=default
IP=123.45.67.89
MAILBOX_FORMAT=maildir
MXCHECK-${domain}=0
OWNER=root
PLAN=default
RS=paper_lantern
USER=$user" > /home/cpmove-$user/cp/$user
  echo -n /usr/local/cpanel/bin/noshell > /home/cpmove-$user/shell
  rsync -aqH /var/www/websites/$domain/html/ /home/cpmove-$user/homedir/public_html/
  db=$(grep DB_NAME /var/www/websites/$domain/html/wp-config.php | cut -d\' -f4)
  dbuser=$(grep DB_USER /var/www/websites/$domain/html/wp-config.php | cut -d\' -f4)
  dbhost=$(grep DB_HOST /var/www/websites/$domain/html/wp-config.php | cut -d\' -f4)
  dbpass=$(grep DB_PASS /var/www/websites/$domain/html/wp-config.php | cut -d\' -f4)
  mysqldump -h $dbhost -u $dbuser -p${dbpass} --no-data --no-create-info --databases $db > /home/cpmove-$user/mysql/$db.create
  mysqldump -h $dbhost -u $dbuser -p${dbpass} $db > /home/cpmove-$user/mysql/$db.sql
  mysql -Ns -h $dbhost -u $dbuser -p${dbpass} -e 'show grants for '$dbuser'@'$dbhost';' | sed -e 's/'$dbhost'/localhost/g' >> /home/cpmove-$user/mysql.sql
  tar -C /home -zcf /home/cpmove-$user.tar.gz cpmove-$user
  #rm -rf /home/cpmove-$user/
done < userlist.txt

I’ve commented out the rm command, which is included just for show; you can remove data during or after your creation, depending on how daring you are. Make sure you test whatever variation of this script you create before relying on it, feeding it just one or two users and domains to be safe.
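A quick way to do that is to point the loop at a truncated copy of the list first:

head -2 userlist.txt > testlist.txt

Then swap the final line of the loop to read done < testlist.txt until you are satisfied with the results.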

Because a dummy user was created and all public_html files are owned by apache, file ownership will have to be corrected after the account is restored, so the files belong to the cPanel user itself.
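A minimal sketch of that cleanup, run on the destination cPanel server after the accounts are restored. The user:nobody group on public_html itself follows the usual cPanel suEXEC convention, but verify against an existing healthy account before running it:

while read domain user; do
  chown -R $user:$user /home/$user/public_html
  chown $user:nobody /home/$user/public_html
done < userlist.txt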

Applying on non-working cPanel machines

This technique is more easily applied to non-working cPanel servers. Let’s say someone accidentally runs the forbidden command, recursively removing data from their entire system. The binaries are the first to go, and once the rm command itself is removed, the system stops doing much of anything at all. However, the /var partition that contains critical metadata is usually safe, as rm works alphabetically. So, the following technique can generate cPanel backup files manually, with a few important extras thrown in because the source is already cPanel.

In this example, we have the filesystem from an accidental recursive removal mounted on our new system under /mnt/rooted/. The echo command that pulls the shadow information can be skipped if the /etc/ folder is missing.

for user in $(/bin/ls -A /mnt/rooted/var/cpanel/users/ | egrep -v "^HASH" | egrep -vx "root|nobody|system"); do
    echo "Generating cpmove folder for $user..."
    mkdir -p /mnt/rooted/home/cpmove-$user/{bandwidth,cp,dnszones,cron,homedir/{public_html,mail/{cur,tmp,new},etc},mysql,sslcerts,sslkeys,va,vad}
    echo -n $(grep ^${user}: /mnt/rooted/etc/shadow | cut -d: -f2) > /mnt/rooted/home/cpmove-$user/shadow
    cp -a /mnt/rooted/var/cpanel/users/$user /mnt/rooted/home/cpmove-$user/cp/
    cp -a /mnt/rooted/var/spool/cron/$user /mnt/rooted/home/cpmove-$user/cron/
    tar -C /mnt/rooted/home -zcf /home/cpmove-$user.tar.gz cpmove-$user
done

If you are incredibly lucky and are able to get the mysql daemon running, or make this data accessible in some other fashion, you can add this stanza before the tar command:

if [ -f /mnt/rooted/var/cpanel/databases/$user.json ]; then
  # Pull the mapped database and dbuser names out of cPanel's per-user JSON map
  dblist=$(python3 -c 'import sys,json; print("\n".join(json.load(sys.stdin)["MYSQL"]["dbs"].keys()))' < /mnt/rooted/var/cpanel/databases/$user.json | grep -v \*)
  dbuserlist=$(python3 -c 'import sys,json; print("\n".join(json.load(sys.stdin)["MYSQL"]["dbusers"].keys()))' < /mnt/rooted/var/cpanel/databases/$user.json | grep -v \* | grep -v cpses_)
  # Keeping the loops inside the if prevents a stale $dblist from a previous
  # iteration being reused when a user has no databases file
  for db in $dblist; do
    mysqldump --no-data --no-create-info --databases $db > /mnt/rooted/home/cpmove-$user/mysql/$db.create
    mysqldump $db > /mnt/rooted/home/cpmove-$user/mysql/$db.sql
  done
  for dbuser in $dbuserlist; do
    mysql -Ns -e "show grants for '$dbuser'@'localhost'" >> /mnt/rooted/home/cpmove-$user/mysql.sql
  done
fi

This will pull data on mapped databases and users directly from the cPanel JSON files that store the same information, and dump the databases into the temporary folders before compression.
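For orientation, a heavily trimmed illustration of the structure those one-liners expect; real files carry many more keys, and the names here are hypothetical:

{
  "MYSQL": {
    "dbs":     { "username_data": {} },
    "dbusers": { "username_dbuser": {} }
  }
}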

Following the restore of these cpmove files, the tradition is to rsync the homedir data into the appropriate location:

for user in $(/bin/ls -A /mnt/rooted/var/cpanel/users/ | egrep -v "^HASH" | egrep -vx "root|nobody|system"); do
  echo "rsyncing $user..."
  rsync -aqH /mnt/rooted/home/$user/ /home/$user/
done

Because the old homedir and new homedir are on the same system, compressing and decompressing the homedir data is just a waste of space and time.