Cluster Nodes:
node1 - 192.168.0.11
node2 - 192.168.0.12
iSCSI Storage:
server - 192.168.0.20
Prepare the iSCSI storage and connect it to all nodes.
see http://kafemis.blogspot.com/2011/01/setting-koneksi-ke-hp-lefthand-dengan.html
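If the LUN is not connected yet, a minimal discovery-and-login sequence with open-iscsi looks like the below. Run it on every node; the portal IP 192.168.0.20 is the storage server above, and the initiator package name (iscsi-initiator-utils on RHEL/CentOS, open-iscsi on Debian/Ubuntu) depends on your distribution.
# iscsiadm -m discovery -t sendtargets -p 192.168.0.20
# iscsiadm -m node -p 192.168.0.20 --login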
Setup Cluster Nodes:
Go to each of your nodes and check whether the new disk is visible. In my case, /dev/sdb is the new disk.
# fdisk -l | grep -i sd
Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048   209715199   104344576   8e  Linux LVM
Disk /dev/sdb: 10.7 GB, 10733223936 bytes, 20963328 sectors
On any one of your nodes (e.g., node1), create an LVM volume and format it using the below commands.
[root@node1 ~]# pvcreate /dev/sdb
[root@node1 ~]# vgcreate vg_cluster /dev/sdb
[root@node1 ~]# lvcreate -n lv_apache -l 100%FREE vg_cluster
[root@node1 ~]# mkfs.ext4 /dev/vg_cluster/lv_apache
Now, go to your remaining nodes and run the below commands so they detect the new volume.
[root@node2 ~]# pvscan
[root@node2 ~]# vgscan
[root@node2 ~]# lvscan
Finally, verify that the LV we created on node1 is available on all your remaining nodes (e.g., node2) using the below command. You should see /dev/vg_cluster/lv_apache on all your nodes; restart the node if it does not appear.
[root@node2 ~]# ls -l /dev/vg_cluster/lv_apache
Make a host entry on each node for all nodes; the cluster will use the host names to communicate with each other. Perform this task on all of your cluster nodes.
# vi /etc/hosts
192.168.0.11 node1.local node1
192.168.0.12 node2.local node2
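Verify that name resolution works from each node before continuing, for example:
# ping -c 2 node2.local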
Install the cluster packages (pacemaker/pcs) on all nodes using the below command.
# apt-get install pcs
Set a password for the hacluster user; this is the cluster administration account. We suggest you set the same password on all nodes.
# passwd hacluster
Start the pcsd service and enable it to start automatically on system startup.
# systemctl start pcsd.service
# systemctl enable pcsd.service
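You can confirm the daemon is up on each node with:
# systemctl status pcsd.service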
Cluster Creation:
Authorize the nodes using the below command; run it on any one of the nodes.
[root@node1 ~]# pcs cluster auth node1 node2
Username: hacluster
Password:
node1: Authorized
node2: Authorized
Create a cluster.
[root@node1 ~]# pcs cluster setup --start --name Node_cluster node1 node2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services...
Removing all cluster configuration files...
node1: Succeeded
node2: Succeeded
Starting cluster on nodes: node1, node2....
node2: Starting Cluster...
node1: Starting Cluster...
Synchronizing pcsd certificates on nodes node1, node2...
node1: Success
node2: Success
Restarting pcsd on the nodes in order to reload the certificates...
node1: Success
node2: Success
Enable the cluster to start at system startup, otherwise you will need to start the cluster every time you restart the system.
[root@node1 ~]# pcs cluster enable --all
node1: Cluster Enabled
node2: Cluster Enabled
Run the below command to get detailed information about the cluster, including its resources, pacemaker status, and node details.
[root@node1 ~]# pcs status
Preparing resources:
Apache Web Server:
Install the Apache web server on both nodes.
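The exact install command depends on your distribution; assuming CentOS/RHEL (which matches the /etc/httpd/conf/httpd.conf path used later), it would be:
# yum install httpd
On a Debian/Ubuntu node (matching the apt-get usage above) it would be apt-get install apache2 instead, and the Apache paths in the resource definition below would change accordingly.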
Now we need to use the shared storage for storing the web content (HTML) files. Perform the below operations on any one of the nodes. We will use /Data/www as the document root.
[root@node2 ~]# mkdir /Data
[root@node2 ~]# mount /dev/vg_cluster/lv_apache /Data
[root@node2 ~]# mkdir /Data/www
[root@node2 ~]# mkdir /Data/www/html
[root@node2 ~]# mkdir /Data/www/cgi-bin
[root@node2 ~]# mkdir /Data/www/error
[root@node2 ~]# restorecon -R /Data/www
[root@node2 ~]# cat <<-END >/Data/www/html/index.html
Hello This Is Coming From Kafemis Cluster
END
[root@node2 ~]# umount /Data
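Since the document root will be /Data/www/html rather than the distribution default, remember to point Apache at it on both nodes. Assuming the stock CentOS config file, the relevant lines in /etc/httpd/conf/httpd.conf would be:
DocumentRoot "/Data/www/html"
<Directory "/Data/www/html">
    Require all granted
</Directory>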
MySQL Server:
Install the MySQL server on both nodes.
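Assuming a Debian/Ubuntu node (matching the /etc/mysql/my.cnf path used below), that would be:
# apt-get install mysql-server
On CentOS/RHEL it would be yum install mariadb-server instead.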
Now we need to use the shared storage for storing the database files. Perform the below operations on any one of the nodes. We will use /Data/mysql as the data directory.
[root@node2 ~]# mkdir -p /Data
[root@node2 ~]# mount /dev/vg_cluster/lv_apache /Data
[root@node2 ~]# mkdir /Data/mysql
Change the MySQL data directory
see http://kafemis.blogspot.com/2017/08/change-mysql-data-directory.html
Change the /usr/lib/ocf/lib/heartbeat/mysql-common.sh script to point at our new data directory: change OCF_RESKEY_datadir_default="/var/lib/mysql" to OCF_RESKEY_datadir_default="/Data/mysql". Do this on all nodes, since the resource can run on any of them.
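A one-liner that makes this edit, assuming the script path above, is:
# sed -i 's|OCF_RESKEY_datadir_default="/var/lib/mysql"|OCF_RESKEY_datadir_default="/Data/mysql"|' /usr/lib/ocf/lib/heartbeat/mysql-common.sh
Once the data directory has been migrated, unmount the shared storage so the cluster can manage the mount itself:
# umount /Data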
Creating Resources:
Create a filesystem resource for the Apache server; this is nothing but the shared storage coming from the iSCSI server.
# pcs resource create httpd_fs Filesystem device="/dev/mapper/vg_cluster-lv_apache" directory="/Data" fstype="ext4" --group cls_node
Create an IP address resource; this will act as a virtual IP for Apache. Clients will use this IP to access the web content instead of an individual node's IP.
# pcs resource create vip IPaddr2 ip=192.168.0.100 cidr_netmask=24 --group cls_node
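Later, once the resources are started, you can confirm which node currently holds the VIP with:
# pcs status resources
# ip addr show | grep 192.168.0.100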
Create an Apache resource which will monitor the status of the Apache server and move the resource to another node in case of failure.
# pcs resource create httpd_ser apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group cls_node
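The statusurl above assumes mod_status is answering on localhost. If it is not already enabled, a minimal snippet for /etc/httpd/conf/httpd.conf (Apache 2.4 syntax) would be:
<Location /server-status>
    SetHandler server-status
    Require local
</Location>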
Create a MySQL resource which will monitor the status of the MySQL server and move the resource to another node in case of failure.
# pcs resource create p_mysql ocf:heartbeat:mysql binary="/usr/bin/mysqld_safe" config="/etc/mysql/my.cnf" --group cls_node
Since we are not using fencing, disable STONITH. You must disable it for the cluster resources to start, but disabling STONITH in a production environment is not recommended.
# pcs property set stonith-enabled=false
Check the status of the cluster
[root@node1 ~]# pcs status
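To verify that failover actually works, you can put the active node into standby, watch the resources move to the other node, and then bring it back (pcs 0.9 syntax; newer releases use pcs node standby):
[root@node1 ~]# pcs cluster standby node1
[root@node1 ~]# pcs status
[root@node1 ~]# pcs cluster unstandby node1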