Wednesday, August 23, 2017

How to recover DRBD from split brain

Step 1: Start DRBD manually on both nodes
Step 2: Make one node secondary and discard its data:
drbdadm secondary all
drbdadm disconnect all
drbdadm -- --discard-my-data connect all
Step 3: Make the other node primary and connect:
drbdadm primary all
drbdadm disconnect all
drbdadm connect all
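Step 4: Verify that the nodes are resyncing. A quick check, assuming the /proc/drbd status interface from drbd8-utils (the package used in the DRBD configuration post below):
drbdadm role all
watch -n1 cat /proc/drbd
The connection state should pass through SyncSource/SyncTarget and settle at Connected, with the roles showing Primary/Secondary.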

Tuesday, August 22, 2017

Change MySQL Data Directory

  1. Stop MySQL using the following command:
    sudo /etc/init.d/mysql stop
  2. Copy the existing data directory (default located in /var/lib/mysql) using the following command:
    sudo cp -R -p /var/lib/mysql /newpath
  3. Edit the MySQL configuration file with the following command:
    sudo gedit /etc/mysql/my.cnf   # or perhaps /etc/mysql/mysql.conf.d/mysqld.cnf
  4. Look for the datadir entry and change its path (which should be /var/lib/mysql) to the new data directory (a sketch of the edited lines appears after this list).
  5. In the terminal, enter the command:
    sudo gedit /etc/apparmor.d/usr.sbin.mysqld
  6. Look for lines beginning with /var/lib/mysql and replace /var/lib/mysql in those lines with the new path.
  7. Save and close the file.
  8. Restart the AppArmor profiles with the command:
    sudo /etc/init.d/apparmor reload
  9. Restart MySQL with the command:
    sudo /etc/init.d/mysql restart
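For reference, a rough sketch of what the edited lines could look like. The path /newpath/mysql is only an illustration; use whatever directory the copy in step 2 actually produced:

    # /etc/mysql/my.cnf (or mysqld.cnf), under the [mysqld] section
    datadir = /newpath/mysql

    # /etc/apparmor.d/usr.sbin.mysqld
    /newpath/mysql/ r,
    /newpath/mysql/** rwk,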

Friday, August 18, 2017

DRBD Configuration

1. Install DRBD > sudo apt install drbd8-utils
2. To configure DRBD, edit /etc/drbd.conf on the first host:

global { usage-count no; }
common { syncer { rate 100M; } }
resource r0 {
        protocol C;
        startup {
                wfc-timeout  15;
                degr-wfc-timeout 60;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "secret";
        }
        on eoblas03 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.0.1:7788;
                meta-disk internal;
        }
        on eoblas05 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.0.2:7788;
                meta-disk internal;
        }
} 
 
3.  Now copy /etc/drbd.conf to the second host:
     scp /etc/drbd.conf drbd02:~
4.  Now, using the drbdadm utility, initialize the meta data storage. On each server execute:
    sudo drbdadm create-md r0
5.  Next, on both hosts, start the drbd daemon:
    sudo systemctl start drbd.service 
 
6.  On drbd01, or whichever host you wish to be the primary, enter the following:
    sudo drbdadm -- --overwrite-data-of-peer primary all 
 
7.  After executing the above command, the data will start syncing with the secondary host. To watch the progress, on drbd02 enter the following:
    watch -n1 cat /proc/drbd
    To stop watching the output press Ctrl+c.
          
8.  Finally, add a filesystem to /dev/drbd0 and mount it:
    sudo mkfs.ext3 /dev/drbd0
    sudo mount /dev/drbd0 /srv 
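9.  (Optional) A rough sketch of a manual failover once the sync is complete, assuming the same resource r0 and mount point /srv as above:
    sudo umount /srv              # on the current primary
    sudo drbdadm secondary r0     # demote it
    sudo drbdadm primary r0       # then, on the other host, promote
    sudo mount /dev/drbd0 /srv    # and mount the replicated device there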
 

Tuesday, August 15, 2017

High Availability with Heartbeat on Ubuntu

1. Make sure all servers/nodes can communicate
2. Install Heartbeat > sudo apt-get install heartbeat
3. Create the ha.cf file. On both servers:
    sudo vi /etc/ha.d/ha.cf
  
  keepalive 2
  warntime 5
  deadtime 15
  initdead 90
  udpport 694
  auto_failback on
  ucast eth0 172.16.10.135   # on node2, use node1's IP instead
  logfile /var/log/ha-log
  node eoblas05
  node eoblas03 


4. Create authkeys File

The authorization key is used to allow cluster members to join a cluster. We can simply generate a random key for this purpose.

On the primary node, run these commands to generate a suitable authorization key in an environment variable named AUTH_KEY:

if [ -z "${AUTH_KEY}" ]; then
  export AUTH_KEY="$(command dd if='/dev/urandom' bs=512 count=1 2>'/dev/null' \
      | command openssl sha1 \
      | command cut --delimiter=' ' --fields=2)"
fi

Then write the /etc/ha.d/authkeys file with these commands:

sudo bash -c "{
  echo auth1
  echo 1 sha1 $AUTH_KEY
} > /etc/ha.d/authkeys"

Check the contents of the authkeys file like this:

    sudo cat /etc/ha.d/authkeys

It should look something like this (with a different authorization key):

/etc/ha.d/authkeys example:
auth1
1 sha1 d1e6557e2fcb30ff8d4d3ae65b50345fa46a2faa

Ensure that the file is only readable by root:

    sudo chmod 600 /etc/ha.d/authkeys

Now copy the /etc/ha.d/authkeys file from your primary node to your secondary node. You can do this manually, or with scp.

On the secondary server, be sure to set the permissions of the authkeys file:

    sudo chmod 600 /etc/ha.d/authkeys

Both servers should have an identical /etc/ha.d/authkeys file.


5. Create haresources File

The haresources file specifies preferred hosts paired with services that the cluster manages. The preferred host is the node that should run the associated service(s) if the node is available. If the preferred host is not available, i.e. it is not reachable by the cluster, one of the other nodes will take over. In other words, the secondary server will take over if the primary server goes down.

On both servers, open the haresources file in your favorite editor. We'll use vi:

    sudo vi /etc/ha.d/haresources


    eoblas05 IPaddr::172.16.10.35/24 apache2
  #eoblas05 IPaddr::172.16.10.35/24 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext4::noatime  mysql apache2  (example for DRBD and MySQL)

6. TESTING WITH APACHE WEB SERVER

  
Try to access the alias IP from a browser; you should see the text "This is node1". Then stop the Heartbeat service on node1 and refresh the browser; you should see the text "This is node2" (all services handled by Heartbeat on node1 are taken over by node2). For failback, start the Heartbeat service on node1 again (all services handled by Heartbeat on node2 are taken back by node1).
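For the test above to show which node answered, each node needs its own page, and Heartbeat must be restarted after the configuration files are in place. A minimal sketch, assuming Apache's default document root /var/www/html:

    echo "This is node1" | sudo tee /var/www/html/index.html    # on node1
    echo "This is node2" | sudo tee /var/www/html/index.html    # on node2
    sudo systemctl restart heartbeat                            # on both nodes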

GlusterFS on Ubuntu Servers

1. Make sure all servers/nodes can communicate
2. Install the server package (on both nodes) > sudo apt-get install glusterfs-server
3. sudo gluster peer probe eoblasxx
   peer probe: success
 
4. This means that the peering was successful. We can check that the nodes are communicating at any time by typing:
   sudo gluster peer status

   Number of Peers: 1

   Hostname: eoblasxx
   Port: 24007
   Uuid: 7bcba506-3a7a-4c5e-94fa-1aaf83f5729b
   State: Peer in Cluster (Connected)
5. Create a storage volume > sudo gluster volume create Voltest replica 2 transport tcp eoblas03:/cluster-storage eoblas05:/cluster-storage force
6. sudo gluster volume start Voltest
   volume start: Voltest: success
7. Install Client > sudo apt-get install glusterfs-client
8. sudo mkdir /data
9. sudo mount -t glusterfs eoblas03:Voltest /data
10. To mount automatically at boot, add to /etc/fstab > eoblas03:/Voltest /data glusterfs defaults,nobootwait,_netdev,backupvolfile-server=172.16.10.135,fetch-attempts=10 0 2
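11. To verify replication from the client, write a file on the mount and check that it appears in the brick directory on both servers (this assumes the replica 2 volume created in step 5):
    sudo touch /data/testfile
    ls /cluster-storage    # run on eoblas03 and eoblas05; testfile should be listed on both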


Tuesday, February 21, 2017

Exchange Hub Transport "mail.que" file large in size

  1. Open the Exchange Management Shell and run Get-TransportServer "HUB01" | fl
  2. Here, look for PipelineTracingEnabled. This should be set to False. If not, run
    Set-TransportServer HUB01 -PipelineTracingEnabled $False
  3. Now run Get-TransportConfig and ensure that
    MaxDumpsterSizePerStorageGroup is in MBs and not GBs, and that

    MaxDumpsterTime : 7.00:00:00

    If not, run

    Set-TransportConfig -MaxDumpsterSizePerStorageGroup <size in MB> -MaxDumpsterTime <time>
  4. Now run Get-Queue and take a look at the count of messages on HUB01
  5. Go to services.msc and pause the Microsoft Exchange Transport service
  6. Again, run Get-Queue and ensure all pending messages are "zeroed" out
  7. Once messages pending becomes zero, stop the Transport service
  8. Move the mail.que file and all others to a new folder in the same location
  9. Start the Transport service
  10. Take a look at the queue again
  11. You should see that messages have started getting delivered
  12. Now you can back up or safely delete the old mail.que file
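If you prefer to handle the Transport service from the shell instead of services.msc, a rough equivalent of steps 5, 7 and 9 (assuming the service short name MSExchangeTransport):

    Suspend-Service MSExchangeTransport   # pause the service so the queue can drain (step 5)
    Stop-Service MSExchangeTransport      # stop it once Get-Queue shows zero (step 7)
    Start-Service MSExchangeTransport     # start it again after moving mail.que (step 9)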

Friday, January 20, 2017

Verbose status on welcome screen

This enables verbose status messages in Windows Vista/7/2008
  1. Open regedit
  2. "HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\
    CurrentVersion\\Policies\\System"
  3. Create a DWORD value named VerboseStatus (choose DWORD (32-bit) on a 64-bit OS)
  4. Change Value to 1
  5. Reboot
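The same value can also be set from an elevated command prompt instead of regedit; a one-line sketch:

    reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v VerboseStatus /t REG_DWORD /d 1 /f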