https://www.tecmint.com/extend-and-reduce-lvms-in-linux/
https://packetpushers.net/ubuntu-extend-your-default-lvm-space/
# swapoff -v /dev/VolGroup00/LogVol01
# lvreduce /dev/VolGroup00/LogVol01 -L -512M
# mkswap /dev/VolGroup00/LogVol01
# swapon -v /dev/VolGroup00/LogVol01
$ cat /proc/swaps
$ free -h
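The 512M freed from swap can then be handed to another logical volume; a minimal sketch, assuming an ext4 data LV named LogVol00 in the same volume group (the LV name is illustrative, as in the links above):
# lvextend -L +512M /dev/VolGroup00/LogVol00   # LV name is illustrative
# resize2fs /dev/VolGroup00/LogVol00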
apt-get install libwww-perl
$ uname -r
It will show the running kernel version, like below:
3.19.0-64-generic
$ sudo dpkg --list 'linux-image*'|awk '{ if ($1=="ii") print $2}'|grep -v `uname -r`
You will get a list of images something like below:
linux-image-3.19.0-25-generic
linux-image-3.19.0-56-generic
linux-image-3.19.0-58-generic
linux-image-3.19.0-59-generic
linux-image-3.19.0-61-generic
linux-image-3.19.0-65-generic
linux-image-extra-3.19.0-25-generic
linux-image-extra-3.19.0-56-generic
linux-image-extra-3.19.0-58-generic
linux-image-extra-3.19.0-59-generic
linux-image-extra-3.19.0-61-generic
$ sudo apt-get purge linux-image-3.19.0-25-generic
$ sudo apt-get purge linux-image-3.19.0-56-generic
$ sudo apt-get purge linux-image-3.19.0-58-generic
$ sudo apt-get purge linux-image-3.19.0-59-generic
$ sudo apt-get purge linux-image-3.19.0-61-generic
$ sudo apt-get purge linux-image-3.19.0-65-generic
When you're done removing the older kernels, run this to remove the packages you no longer need:
$ sudo apt-get autoremove
And finally, run this to update the GRUB kernel list:
$ sudo update-grub
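The listing and purge steps above can also be combined into one command; this one-liner is an illustration rather than part of the linked guide, so review the package list before confirming:
$ sudo apt-get purge $(dpkg --list 'linux-image*' | awk '/^ii/{print $2}' | grep -v "$(uname -r)")   # never purges the running kernel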
If apt cannot remove the old kernels, i.e. /boot is 100% full, list the installed kernel images manually:
$ sudo dpkg --list 'linux-image*'|awk '{ if ($1=="ii") print $2}'|grep -v `uname -r`
You will get a list of images something like below:
linux-image-3.19.0-25-generic
linux-image-3.19.0-56-generic
linux-image-3.19.0-58-generic
linux-image-3.19.0-59-generic
linux-image-3.19.0-61-generic
linux-image-3.19.0-65-generic
linux-image-extra-3.19.0-25-generic
linux-image-extra-3.19.0-56-generic
linux-image-extra-3.19.0-58-generic
linux-image-extra-3.19.0-59-generic
linux-image-extra-3.19.0-61-generic
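Before running the destructive rm below, it can help to preview exactly what the brace-expansion glob matches:
ls /boot/*-3.19.0-{25,56,58,59,61,65}-*   # dry-run check; should list only the old kernels' files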
sudo rm -rf /boot/*-3.19.0-{25,56,58,59,61,65}-*
sudo apt-get -f install
sudo apt-get autoremove
sudo update-grub
sudo apt-get update
# fdisk -l | grep -i sd
Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048   209715199   104344576   8e  Linux LVM
Disk /dev/sdb: 10.7 GB, 10733223936 bytes, 20963328 sectors
[root@node1 ~]# pvcreate /dev/sdb
[root@node1 ~]# vgcreate vg_cluster /dev/sdb
[root@node1 ~]# lvcreate -n lv_apache -l 100%FREE vg_cluster
[root@node1 ~]# mkfs.ext4 /dev/vg_cluster/lv_apache
[root@node2 ~]# pvscan
[root@node2 ~]# vgscan
[root@node2 ~]# lvscan
Finally, verify that the LV created on node1 is available on all your remaining nodes (e.g. node2) using the command below. You should see /dev/vg_cluster/lv_apache on all your nodes; reboot the node if it does not appear.
[root@node2 ~]# ls -l /dev/vg_cluster/lv_apache
Make a host entry for every node on each node; the cluster uses the host names to communicate with each other. Perform this on all of your cluster nodes.
# vi /etc/hosts
192.168.0.11 node1.local node1
192.168.0.12 node2.local node2
# apt-get install pcs
# passwd hacluster
[root@node1 ~]# pcs cluster auth node1 node2
Username: hacluster
Password:
node1: Authorized
node2: Authorized
[root@node1 ~]# pcs cluster setup --start --name Node_cluster node1 node2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services...
Removing all cluster configuration files...
node1: Succeeded
node2: Succeeded
Starting cluster on nodes: node1, node2....
node2: Starting Cluster...
node1: Starting Cluster...
Synchronizing pcsd certificates on nodes node1, node2...
node1: Success
node2: Success
Restarting pcsd on the nodes in order to reload the certificates...
node1: Success
node2: Success
[root@node1 ~]# pcs cluster enable --all
node1: Cluster Enabled
node2: Cluster Enabled
[root@node2 ~]# mkdir /Data
[root@node2 ~]# mount /dev/vg_cluster/lv_apache /Data
[root@node2 ~]# mkdir /Data/www
[root@node2 ~]# mkdir /Data/www/html
[root@node2 ~]# mkdir /Data/www/cgi-bin
[root@node2 ~]# mkdir /Data/www/error
[root@node2 ~]# restorecon -R /Data/www
[root@node2 ~]# cat <<-END >/Data/www/html/index.html
Hello This Is Coming From Kafemis Cluster
END
[root@node2 ~]# umount /Data
[root@node2 ~]# mkdir /Data
[root@node2 ~]# mount /dev/vg_cluster/lv_apache /Data
[root@node2 ~]# mkdir /Data/mysql
Change MySQL Data Directory
see http://kafemis.blogspot.com/2017/08/change-mysql-data-directory.html
Edit the /usr/lib/ocf/lib/heartbeat/mysql-common.sh script to point at the new data directory: change OCF_RESKEY_datadir_default="/var/lib/mysql" to OCF_RESKEY_datadir_default="/Data/mysql".
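The same edit can be made non-interactively; a hypothetical sed one-liner (it keeps a backup copy as mysql-common.sh.bak):
# sed -i.bak 's|OCF_RESKEY_datadir_default="/var/lib/mysql"|OCF_RESKEY_datadir_default="/Data/mysql"|' /usr/lib/ocf/lib/heartbeat/mysql-common.sh   # illustrative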
Create a filesystem resource for the Apache server; this is nothing but the shared storage coming from the iSCSI server.
# pcs resource create httpd_fs Filesystem device="/dev/mapper/vg_cluster-lv_apache" directory="/Data" fstype="ext4" --group cls_node
Create an IP address resource; this will act as a virtual IP for Apache. Clients will use this IP to access the web content instead of an individual node's IP.
# pcs resource create vip IPaddr2 ip=192.168.0.100 cidr_netmask=24 --group cls_node
Create an Apache resource which will monitor the status of the Apache server and move the resource to another node in case of any failure.
# pcs resource create httpd_ser apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group cls_node
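The statusurl above assumes Apache serves /server-status locally; a minimal sketch of such a handler block (the file location /etc/httpd/conf.d/status.conf is an assumption, syntax is for Apache 2.4):
# /etc/httpd/conf.d/status.conf (file name is an assumption)
<Location /server-status>
    SetHandler server-status
    Require local
</Location>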
Create a MySQL resource which will monitor the status of the MySQL server and move the resource to another node in case of any failure, as sketched below.
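A hedged sketch of such a resource; the resource name mysql_ser, the MySQL/MariaDB paths, and the timeouts are assumptions based on the layout above, not the exact command from these notes:
# pcs resource create mysql_ser ocf:heartbeat:mysql binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" datadir="/Data/mysql" pid="/var/run/mysqld/mysqld.pid" socket="/var/lib/mysql/mysql.sock" op start timeout=60s op stop timeout=60s op monitor interval=20s timeout=30s --group cls_node   # names and paths are illustrative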
Since fencing is not configured in this setup, disable STONITH:
# pcs property set stonith-enabled=false
Check the status of the cluster
[root@node1 ~]# pcs status
git clone --recursive https://github.com/ONLYOFFICE/docker-onlyoffice-owncloud
cd docker-onlyoffice-owncloud
git submodule update --remote
Edit the docker-compose.yml file (if you want to connect Document Server to Nextcloud), opening it and altering the image: owncloud:fpm line to image: nextcloud:fpm. This step is optional; if you want to use Document Server with ownCloud, you do not need to change anything. Then start the containers:
sudo docker-compose up -d
Please note: you might need to wait a couple of minutes after the above command until all the containers are up and running. Then run the set_configuration.sh script:
sudo bash set_configuration.sh
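Before running the script it can help to confirm that the containers are actually up; an illustrative check:
sudo docker-compose ps   # all services should be listed as Up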
Now you can enter ownCloud/Nextcloud and create a new document. It will be opened in ONLYOFFICE Document Server.
If apt (the script above uses apt) can't remove packages due to a broken dependency, you can manually find out the old kernel packages and remove them via dpkg:
1. Check the currently booted kernel:
uname -r
2. List all kernels excluding the currently booted one:
dpkg -l | tail -n +6 | grep -E 'linux-image-[0-9]+' | grep -Fv $(uname -r)
Example output:
rc  linux-image-4.4.0-15-generic          4.4.0-15.31                   amd64  Linux kernel image for version 4.4.0 on 64 bit x86 SMP
ii  linux-image-4.4.0-18-generic          4.4.0-18.34                   amd64  Linux kernel image for version 4.4.0 on 64 bit x86 SMP
rc  linux-image-4.6.0-040600rc3-generic   4.6.0-040600rc3.201604120934  amd64  Linux kernel image for version 4.6.0 on 64 bit x86 SMP
There will be three statuses in the listed kernel images: rc means the package has already been removed (only its config files remain), while ii means it is still installed and can be purged. Never purge the image for the kernel you are currently running.
3. Purge an old kernel image, e.g.:
sudo dpkg --purge linux-image-4.4.0-18-generic
4. If the command fails, remove the dependency packages that the output tells you about via sudo dpkg --purge PACKAGE, e.g.:
sudo dpkg --purge linux-image-4.4.0-18-header linux-image-4.4.0-18
5. Finally you may fix the broken apt dependency via the command:
sudo apt -f install
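As a final illustrative check (not part of the original steps), confirm which kernel images remain installed:
dpkg -l 'linux-image*' | grep ^ii   # lines starting with ii are still installed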
On the node whose data is to be discarded (the split-brain victim):
drbdadm secondary all
drbdadm disconnect all
drbdadm -- --discard-my-data connect all
On the node whose data is to be kept (the survivor):
drbdadm primary all
drbdadm disconnect all
drbdadm connect all
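After reconnecting, the resynchronization can be followed on either node; an illustrative check:
cat /proc/drbd   # shows connection state and sync progress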
sudo /etc/init.d/mysql stop
Copy the existing data directory (the default is /var/lib/mysql) using the following command:
sudo cp -R -p /var/lib/mysql /newpath
sudo gedit /etc/mysql/my.cnf # or perhaps /etc/mysql/mysql.conf.d/mysqld.cnf
Find the entry for datadir, and change the path (which should be /var/lib/mysql) to the new data directory. Then edit the MySQL AppArmor profile:
sudo gedit /etc/apparmor.d/usr.sbin.mysqld
Look for the lines containing /var/lib/mysql and change /var/lib/mysql in those lines to the new path. Then reload AppArmor:
sudo /etc/init.d/apparmor reload
sudo /etc/init.d/mysql restart
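Once MySQL is back up, the active data directory can be verified from the client; a minimal check (prompts for the root password):
mysql -u root -p -e 'SELECT @@datadir;'   # should print the new path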
global { usage-count no; }
common { syncer { rate 100M; } }
resource r0 {
        protocol C;
        startup {
                wfc-timeout  15;
                degr-wfc-timeout 60;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "secret";
        }
        on eoblas03 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.0.1:7788;
                meta-disk internal;
        }
        on eoblas05 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.0.2:7788;
                meta-disk internal;
        }
}
3. Now copy /etc/drbd.conf to the second host:
scp /etc/drbd.conf drbd02:~
4. Now using the drbdadm utility initialize the metadata storage. On each server execute:
sudo drbdadm create-md r0
5. Next, on both hosts, start the drbd daemon:
sudo systemctl start drbd.service
6. On drbd01, or whichever host you wish to be the primary, enter the following:
sudo drbdadm -- --overwrite-data-of-peer primary all
7. After executing the above command, the data will start syncing with the secondary host. To watch the progress, on drbd02 enter the following:
watch -n1 cat /proc/drbd
To stop watching the output press Ctrl+C.
8. Finally, add a filesystem to /dev/drbd0 and mount it:
sudo mkfs.ext3 /dev/drbd0
sudo mount /dev/drbd0 /srv
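To confirm the result, an illustrative check of the mount and of each node's DRBD role:
df -h /srv             # the drbd0 filesystem should be mounted here on the primary
sudo drbdadm role r0   # prints the local/peer roles, e.g. Primary/Secondary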
3. Create ha.cf File
sudo vi /etc/ha.d/ha.cf
keepalive 2
warntime 5
deadtime 15
initdead 90
udpport 694
auto_failback on
ucast eth0 172.16.10.135   # on node2, change this to node1's IP address
logfile /var/log/ha-log
node eoblas05
node eoblas03
4. Create authkeys File
The authorization key is used to allow cluster members to join a cluster. We can simply generate a random key for this purpose.
On the primary node, run these commands to generate a suitable authorization key in an environment variable named AUTH_KEY:
if [ -z "${AUTH_KEY}" ]; then
export AUTH_KEY="$(command dd if='/dev/urandom' bs=512 count=1 2>'/dev/null' \
| command openssl sha1 \
| command cut --delimiter=' ' --fields=2)"
fi
Then write the /etc/ha.d/authkeys file with these commands:
sudo bash -c "{
echo auth1
echo 1 sha1 $AUTH_KEY
} > /etc/ha.d/authkeys"
Check the contents of the authkeys file like this:
sudo cat /etc/ha.d/authkeys
It should look something like this (with a different authorization key):
/etc/ha.d/authkeys example:
auth1
1 sha1 d1e6557e2fcb30ff8d4d3ae65b50345fa46a2faa
Ensure that the file is only readable by root:
sudo chmod 600 /etc/ha.d/authkeys
Now copy the /etc/ha.d/authkeys file from your primary node to your secondary node. You can do this manually, or with scp.
On the secondary server, be sure to set the permissions of the authkeys file:
sudo chmod 600 /etc/ha.d/authkeys
Both servers should have an identical /etc/ha.d/authkeys file.
5. Create haresources File
The haresources file specifies preferred hosts paired with services that the cluster manages. The preferred host is the node that should run the associated service(s) if the node is available. If the preferred host is not available, i.e. it is not reachable by the cluster, one of the other nodes will take over. In other words, the secondary server will take over if the primary server goes down.
On both servers, open the haresources file in your favorite editor. We'll use vi:
sudo vi /etc/ha.d/haresources
eoblas05 IPaddr::172.16.10.35/24 apache2
#eoblas05 IPaddr::172.16.10.35/24 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext4::noatime mysql apache2 (example for DRBD and MySQL)
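Once ha.cf, authkeys, and haresources are in place on both nodes, heartbeat has to be restarted for the changes to take effect; a hedged example using the same init-script style as the rest of these notes:
sudo /etc/init.d/heartbeat restart   # run on both nodes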
6. TESTING WITH APACHE WEB SERVER