Setup

Create a mountpoint for the disk:

mkdir /mnt/ramdisk

Next, add this line to /etc/fstab to mount the drive at boot time.

tmpfs /mnt/ramdisk tmpfs defaults,size=2g,noexec,nosuid,uid=65534,gid=65534,mode=1755 0 0

Change the size option in the line above so it comfortably accommodates the amount of data you’ll store. Don’t worry: tmpfs doesn’t allocate all of that space immediately, only as it’s used. It’s generally safe to use up to half of your RAM, perhaps more if your system has a lot of RAM sitting unused.
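
To see how much RAM you have to play with before picking a size, or to grow the ramdisk later without unmounting it, something like this works (the 3g value is only an example):

free -m
mount -o remount,size=3g /mnt/ramdisk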

Mount the new filesystem

mount /mnt/ramdisk

Check to see that it’s mounted

mount
df -h

You should see entries like these in the mount and df output:

tmpfs on /mnt/ramdisk type tmpfs (rw,relatime,size=2097152k)

tmpfs                 2.0G  0.0G  2.0G   0% /mnt/ramdisk

Create directory for Backups

Next we need to create a directory to store the backup copies of the files in. You can put it wherever you like, so long as you change the script we create below to reflect the new location.

mkdir /var/ramdisk-backup

Init Script

Create a script at /etc/init.d/ramdisk with the following contents

#! /bin/sh 
# /etc/init.d/ramdisk
### BEGIN INIT INFO
# Provides: ramdisk
# Required-Start: $local_fs $remote_fs $syslog $named $network $time
# Required-Stop: $local_fs $remote_fs $syslog $named $network
# Should-Start:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: ramdisk sync for files
# Description: ramdisk syncing of files
### END INIT INFO

case "$1" in
 start)
        echo "Copying files to RAM disk"
        rsync -av /var/ramdisk-backup/ /mnt/ramdisk/
        echo "$(date +"%Y-%m-%d %H:%M") Ramdisk Synced from HD" >> /var/log/ramdisk_sync.log
        ;;
 sync)
        echo "Syncing files from RAM disk to Hard Disk"
        echo "$(date +"%Y-%m-%d %H:%M") Ramdisk Synced to HD" >> /var/log/ramdisk_sync.log
        rsync -av --delete --recursive --force /mnt/ramdisk/ /var/ramdisk-backup/
        ;;
 stop)
        echo "Synching log files from RAM disk to Hard Disk"
        echo [date +"%Y-%m-%d %H:%M"] Ramdisk Synched to HD >> /var/log/ramdisk_sync.log
        rsync -av --delete --recursive --force /mnt/ramdisk/ /var/ramdisk-backup/
        ;;
 *)
        echo "Usage: /etc/init.d/ramdisk {start|stop|sync}"
        exit 1
        ;;
esac

exit 0
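
Make the script executable so init can actually run it:

chmod +x /etc/init.d/ramdisk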

Now set this up to run at startup:

update-rc.d ramdisk defaults 00 99

Example For Apache

Configure and add the disk_cache module to Apache: either enable it with a2enmod, or symlink its files from /etc/apache2/mods-available/ into the mods-enabled directory yourself.
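
On a stock Debian/Ubuntu Apache 2.2 install, a2enmod will create the mods-enabled symlinks for you; a minimal example, assuming the standard cache and disk_cache module names, is:

a2enmod cache
a2enmod disk_cache
service apache2 restart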

Our /etc/apache2/mods-available/disk_cache.conf file looks like this:

<IfModule mod_disk_cache.c>
# cache cleaning is done by htcacheclean, which can be configured in
# /etc/default/apache2
#
# For further information, see the comments in that file, 
# /usr/share/doc/apache2.2-common/README.Debian, and the htcacheclean(8)
# man page.

# This path must be the same as the one in /etc/default/apache2
#CacheRoot /var/cache/apache2/mod_disk_cache
CacheRoot /mnt/ramdisk

# This will also cache local documents. It usually makes more sense to
# put this into the configuration for just one virtual host.

CacheEnable disk /

# The result of CacheDirLevels * CacheDirLength must not be higher than
# 20. Moreover, pay attention on file system limits. Some file systems
# do not support more than a certain number of subdirectories in a
# single directory (e.g. 32000 for ext3)
CacheDirLevels 5
CacheDirLength 3

# CacheLock on
# CacheLockPath /tmp/mod_cache-lock
# CacheLockMaxAge 5

</IfModule>

Inspect htcacheclean Parameters

Now, review your /etc/default/apache2 file for htcacheclean changes:

### htcacheclean settings ###

## run htcacheclean: yes, no, auto
## auto means run if /etc/apache2/mods-enabled/disk_cache.load exists
## default: auto
HTCACHECLEAN_RUN=auto

## run mode: cron, daemon
## run in daemon mode or as daily cron job
## default: daemon
HTCACHECLEAN_MODE=daemon

## cache size 
##HTCACHECLEAN_SIZE=300M
HTCACHECLEAN_SIZE=2000M

## interval: if in daemon mode, clean cache every x minutes
HTCACHECLEAN_DAEMON_INTERVAL=360

## path to cache
## must be the same as in CacheRoot directive
##HTCACHECLEAN_PATH=/var/cache/apache2/mod_disk_cache
HTCACHECLEAN_PATH=/mnt/ramdisk

## additional options:
## -n : be nice
## -t : remove empty directories
HTCACHECLEAN_OPTIONS="-n -t"
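
If you want to test the cleaner by hand before relying on the daemon, htcacheclean can also be run directly; this one-off example simply mirrors the settings above:

htcacheclean -n -t -p /mnt/ramdisk -l 2000M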

Modify the /etc/init.d/apache2 init file. Add the following near the top to check whether /mnt/ramdisk is already mounted:

#-------------------------------------------------------------------------------------------------#
# Added by Shane 02/23/16 for local caching of files to keep the I/O down
if [ "$(df -k | grep -c ramdisk)" -eq 1 ]
then
 echo "We already have /mnt/ramdisk mounted. Not remounting."
else
 if [ ! -d /mnt/ramdisk ]
 then
  echo "/mnt/ramdisk does not exist. Let's create it."
  mkdir /mnt/ramdisk 2>/dev/null
 fi
 echo "Mounting tmpfs /mnt/ramdisk"
 mount -o defaults,size=2g,noexec,nosuid,uid=65534,gid=65534,mode=1755 -t tmpfs tmpfs /mnt/ramdisk
 if [ $? -gt 0 ]
 then
  echo "There's an error and it probably did not mount."
 else
  echo "tmpfs /mnt/ramdisk mount successful."
 fi
fi
#edit /etc/default/apache2 to change this.
HTCACHECLEAN_RUN=auto
HTCACHECLEAN_MODE=daemon
HTCACHECLEAN_SIZE=2000M
HTCACHECLEAN_DAEMON_INTERVAL=120
#HTCACHECLEAN_PATH=/var/cache/apache2$DIR_SUFFIX/mnt/ramdisk
HTCACHECLEAN_PATH=/mnt/ramdisk
HTCACHECLEAN_OPTIONS=""
#-------------------------------------------------------------------------------------------------#

 

Example for RRD Files in Observium – Move or Copy Files to Prime the RAM Disk

If you’re doing this for RRD files, either move your RRDs to /var/ramdisk-backup/rrd and then load them into the ram disk:

mv /opt/observium/rrd /var/ramdisk-backup/rrd
/etc/init.d/ramdisk start

Or move your RRDs to the ram disk itself and then sync them out to the backup:

mv /opt/observium/rrd /mnt/ramdisk/rrd
/etc/init.d/ramdisk sync

Create Symlink

Now either symlink /mnt/ramdisk/rrd to /opt/observium/rrd, or change the configuration so the rrds are loaded from the ramdisk path.
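
For example, using the paths above:

ln -s /mnt/ramdisk/rrd /opt/observium/rrd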

You can add a ramdisk sync entry to /etc/crontab (or a file under /etc/cron.d/) to periodically sync your ram disk back to the hard disk:

2 * * * * root        /etc/init.d/ramdisk sync >> /dev/null 2>&1

MySQL Master-Master Replication

MySQL replication is the process by which a data set stored in one MySQL database is copied, live, to a second server. The typical configuration, called "master-slave" replication, copies in one direction only. Our setup goes a step further: master-master replication allows data to be copied from either server to the other. This subtle but important difference lets us perform MySQL reads or writes on either server, which adds redundancy and improves efficiency when accessing the data.

The examples in this article are based on two VM guests, named vdb1 and vdb2. Note that vdb1 already contains multiple databases and is currently in production.

vdb1: 1.1.1.1

vdb2: 2.2.2.2

Step 1 – Install and Configure MySQL on vdb1

The first thing we need to do is to install the mysql-server and mysql-client packages on our server. We can do that by typing the following:

apt-get install mysql-server mysql-client

By default, the mysql process will only accept connections on localhost (127.0.0.1). To change this default behavior and change a few other settings necessary for replication to work properly, we need to edit /etc/mysql/my.cnf on vdb1. We need to stop binding to the loopback address which is currently set to the following:

bind-address            = 127.0.0.1

Comment the line out so the server accepts connections from anywhere (it no longer binds only to 127.0.0.1):

#bind-address           = 127.0.0.1

Now make sure the includedir for configuration files is part of the primary my.cnf:

#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/

Once you verify the includedir is part of your my.cnf, go to the conf.d directory and create a file that will contain your replication configuration:

cd conf.d
vi replications.cnf
[mysqld]
#           _ _     _ 
# __   ____| | |__ / |
# \ \ / / _ | '_ \| |
#  \ V / (_| | |_) | |
#   \_/ \__,_|_.__/|_|
#                     
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#       other settings you may need to change.
server-id                   = 1
log_bin                     = /var/log/mysql/mysql-bin.log
expire_logs_days            = 20
max_binlog_size             = 500M
# By setting the auto_increment_increment and auto_increment_offset values independent
# servers will create unique auto_increment values allowing for replication without fear
# of collision!
replicate-same-server-id    = 0
log-slave-updates           = true
auto_increment_increment    = 2
auto_increment_offset       = 1
relay-log                   = /var/lib/mysql/slave-relay.log
relay-log-index             = /var/lib/mysql/slave-relay-log.index
binlog_format               = MIXED
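
With auto_increment_increment = 2 and auto_increment_offset = 1 here (and offset = 2 on vdb2 below), vdb1 hands out odd auto-increment IDs (1, 3, 5, ...) while vdb2 hands out even ones (2, 4, 6, ...), so inserts on the two masters cannot collide. Once mysql has been restarted you can confirm the values with:

mysql -uroot -p -e "SHOW VARIABLES LIKE 'auto_increment%';"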

Now we need to restart mysql:

service mysql restart
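
Since we commented out bind-address above, it’s worth a quick check that mysql is now listening on all interfaces (netstat is from the net-tools package; ss -lntp works too):

netstat -plnt | grep mysql
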
We next need to change some command-line settings within our mysql instance. Back at our shell, we can get to our root mysql user by typing the following:
mysql -hlocalhost -uroot -p

Please note that the password this command prompts you for is that of the root mysql user, not the root user of the server. To confirm that you are logged in to the mysql shell, the prompt should look like the following:

mysql>
Once we are logged in, we need to set some things up.

We need to create a pseudo-user that will be used for replicating data between our two VMs. The examples in this article will assume that you name this user "replicator". Replace "password" with the password you wish to use for replication.
CREATE USER 'replicator'@'%' IDENTIFIED BY 'password';
Next, we need to give this user permissions to replicate our mysql data:
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%';
Permissions for replication cannot, unfortunately, be given on a per-database basis. Our user will only replicate the database(s) that we instruct it to in our config file.
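
If you do want to limit what gets replicated, that is handled with server options rather than grants. As a rough sketch, assuming a database named exampledb, you could add filter lines like these to replications.cnf on both servers; note that these filters have well-known caveats with cross-database statements, so test before relying on them:

binlog_do_db                = exampledb
replicate_do_db             = exampledb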

For the final step of the initial vdb1 configuration, we need to get some information about the current MySQL instance which we will later provide to vdb2.

The following command will output a few pieces of important information, which we will need to make note of:
show master status;

The output will look something like the following, and will have two pieces of critical information:

+------------------+-----------+--------------+------------------+
| File             | Position  | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+-----------+--------------+------------------+
| mysql-bin.000004 | 114764712 |              |                  |
+------------------+-----------+--------------+------------------+
1 row in set (0.00 sec)
We need to make a note of the file and position which will be used in the next step.
Step 2 – Install and Configure MySQL on vdb2
We need to repeat the same steps that we followed on vdb1. First we need to install it, which we can do with the following command:
apt-get install mysql-server mysql-client

Once the two packages are properly installed, we need to configure vdb2 in much the same way as we configured vdb1. Start by editing /etc/mysql/my.cnf. Once again, by default the mysql process will only accept connections on localhost (127.0.0.1). To change this default behavior and change a few other settings necessary for replication to work properly, we need to stop binding to the loopback address, which is currently set to the following:

vi /etc/mysql/my.cnf

bind-address            = 127.0.0.1

Comment the line out so the server accepts connections from anywhere (it no longer binds only to 127.0.0.1):

#bind-address           = 127.0.0.1
Now make sure the includedir for configuration files is part of the primary my.cnf:

#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/

Once you verify the includedir is part of your my.cnf, go to the conf.d directory and create a file that will contain your replication configuration:

cd conf.d
vi replications.cnf
[mysqld]
#           _ _    ____  
# __   ____| | |__|___ \ 
# \ \ / / _ | '_ \ __) |
#  \ V / (_| | |_) / __/ 
#   \_/ \__,_|_.__/_____|
#                        
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
server-id                   = 2
log_bin                     = /var/log/mysql/mysql-bin.log
expire_logs_days            = 20
max_binlog_size             = 500M
# By setting the auto_increment_increment and auto_increment_offset values independent
# servers will create unique auto_increment values allowing for replication without fear
# of collision!
replicate-same-server-id    = 0
log-slave-updates           = true
auto_increment_increment    = 2
auto_increment_offset       = 2
relay-log                   = /var/lib/mysql/slave-relay.log
relay-log-index             = /var/lib/mysql/slave-relay-log.index
binlog_format               = MIXED

Please note that, unlike vdb1, the server-id for vdb2 cannot be set to 1 (here it is 2, and auto_increment_offset is 2 as well). Now we need to restart mysql:

service mysql restart

It is time to go into the mysql shell and set some more configuration options.

mysql -hlocalhost -uroot -p 

First, just as on vdb1, we are going to create the pseudo-user which will be responsible for the replication. Replace “password” with the password you wish to use.

CREATE USER 'replicator'@'%' IDENTIFIED BY 'password'; 

Give our newly created ‘replicator’ user permissions to replicate:

GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%'; 

The next step involves taking the information we noted earlier and applying it to our mysql instance so that replication can begin. The following should be typed at the mysql shell. A special note, though: if you’re turning an existing MySQL server that already holds data into a master-master configuration, do Step A first. Otherwise, skip it and move on to Step B.

Step A – If you are using an existing, populated MySQL server with various databases already on it: log back into vdb1 and get the latest position, since the system is live. Then temporarily flush the tables and take a global read lock while we perform the next step:

mysql> RESET MASTER;
mysql> FLUSH PRIVILEGES;

mysql> FLUSH TABLES WITH READ LOCK;

Stay logged into your mysql session; if you disconnect, the tables will be unlocked automatically. Now check the current Position once again. While your session holds the lock, the Position should not increase if you issue the "show master status;" command multiple times:

mysql> show master status;
+------------------+-----------+--------------+------------------+
| File             | Position  | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+-----------+--------------+------------------+
| mysql-bin.000001 | 117423221 |              |                  |
+------------------+-----------+--------------+------------------+
1 row in set (0.00 sec)

Step B – Set the initial replication & Restore from vdb1

mysql> STOP SLAVE;

On vdb2, let’s dump vdb1 while it’s locked and restore here on vdb2:

# mysqldump -uMYSQLROOTUSER -pMYSQLROOTPW -hVDB1 --all-databases --delete-master-logs --flush-logs | mysql -uMYSQLROOTUSER -pMYSQLROOTPW
mysql> CHANGE MASTER TO MASTER_HOST = '1.1.1.1', MASTER_USER = 'replicator', MASTER_PASSWORD = 'password', MASTER_LOG_FILE = 'mysql-bin.000001', MASTER_LOG_POS = 117423221;
mysql> START SLAVE;

You need to replace ‘password’ with the password you chose for replication. Your values for MASTER_LOG_FILE and MASTER_LOG_POS may differ from those above; copy the values that “SHOW MASTER STATUS” returned on vdb1.

If you performed Step A, go back to the session you left open on vdb1 and unlock the tables:

mysql> UNLOCK TABLES;

The last thing we have to do before completing the master-master replication is to make note of the master log file and position on vdb2, which we will use to replicate in the other direction (from vdb2 to vdb1). Unlike on vdb1, where we had to flush and lock the tables before dumping, there should be little activity here, since vdb2 is only acting as a slave for the moment.

Now make sure the slave (which pulls updates from vdb1) is running on vdb2, and check its status:

mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G

Look for a line like this:

     Seconds_Behind_Master: 31

We need to give vdb2 a chance to catch up; wait until Seconds_Behind_Master reaches 0.
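
If you’d rather not re-run the command by hand, a small shell loop on vdb2 can watch it for you (just a convenience sketch; adjust the credentials):

while true; do
  mysql -uroot -p'MYSQLROOTPW' -e 'SHOW SLAVE STATUS\G' | grep Seconds_Behind_Master
  sleep 5
done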

Once it has caught up, we can flush and lock the tables on vdb2:

mysql> FLUSH TABLES WITH READ LOCK;

We can check the file and position by typing the following on vdb2:

mysql> SHOW MASTER STATUS; 

The output will look something like the following:

+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      101 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Take note of the file and position, as we will have to enter those on vdb1, to complete the two-way replication.

The next step will explain how to do that.

Step 3 – Completing Replication on vdb1

Back on vdb1, we need to finish configuring replication on the command line. Running these commands will begin replicating all data from vdb2.

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST = '2.2.2.2', MASTER_USER = 'replicator', MASTER_PASSWORD = 'password', MASTER_LOG_FILE = 'mysql-bin.000001', MASTER_LOG_POS = 101;
mysql> START SLAVE;

Keep in mind that your values may differ from those above. Please also replace the value of MASTER_PASSWORD with the password you created when setting up the replication user.

The output will look something like the following:

Query OK, 0 rows affected (0.01 sec)

The last thing to do is to test that replication is working on both VMs. The last step will explain an easy way to test this configuration.

Step 4 – Testing Master-Master Replication

Now that we have all the configuration set up, it’s time to test it. To do this, we will create a table in an example database on vdb1 and check vdb2 to see if it shows up. Then we will delete it from vdb2 and make sure the deletion replicates back to vdb1.

We now need to create the database that will be replicated between the servers. We can do that by typing the following at the mysql shell on vdb1:

create database example; 

Once that’s done, let’s create a dummy table on vdb1:

create table example.dummy (id varchar(10)); 

Now we are going to check vdb2 to see if our table exists:

show tables in example; 

We should see output similar to the following:

+-------------------+
| Tables_in_example |
+-------------------+
| dummy             |
+-------------------+
1 row in set (0.00 sec)

The last test to do is to delete our dummy table from vdb2. It should also be deleted from vdb1.

We can do this by entering the following on vdb2:

DROP TABLE dummy; 

To confirm this, running the “show tables” command on vdb1 will show no tables:

Empty set (0.00 sec)

And there it is!   A working mysql master-master replication.

You could also check the consistency between the two using the pt-table-checksum tool from Percona.

# pt-table-checksum --host=vdb1 --user=MYSQLROOTUSR --password=MYSQLROOTPW --no-check-binlog-format

Then, if there are any issues, you can sync those databases/tables to vdb2 in the master-master config with the pt-table-sync tool from Percona:

# pt-table-sync --execute --sync-to-master h=vdb2 --user=MYSQLROOTUSR --password=MYSQLROOTPW

Bare-Bones Debian Install for a Webserver

Look people – this is a bare-bones Debian install to get a webserver going (or basically anything installed).

After logging in with ‘admin’ and your private key via SSH

sudo nano /etc/apt/sources.list

deb http://http.debian.net/debian wheezy main 
deb-src http://http.debian.net/debian wheezy main

deb http://security.debian.org/ wheezy/updates main 
deb-src http://security.debian.org/ wheezy/updates main

deb http://http.debian.net/debian wheezy-updates main 
deb-src http://http.debian.net/debian wheezy-updates main

 

You will have to update your software sources to match the above. Then:

 

sudo apt-get update 
sudo apt-get install tasksel   # tasksel makes installing software bundles ("tasks") easy

 

Then set up a webserver:

sudo tasksel install web-server 
sudo apt-get install build-essential php5-dev php5-gd php-pear

 

# make the webserver start on boot via its rc (run control) links

sudo update-rc.d apache2 defaults
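
A quick way to confirm the webserver is answering (you may need to apt-get install curl first):

curl -I http://localhost/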

 

# Personally I cannot function without the locate command!

sudo apt-get install locate

 

Then:

sudo updatedb   # update the locate database

 

Anyway, that should help a lot in getting set up to use the service!