1. sudo parted
2. resizepart <partition number> <end> (example: resizepart 1 300GB)
3. go to Webmin, edit the physical volume and choose "Resize to match device"
4. go to the logical volume and click "Use all free VG space"
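The two Webmin steps above can also be done from the command line. This is a sketch under the assumption that the partition is /dev/sda1, the volume group is vg0 and the logical volume is lv_root (all hypothetical names, substitute your own), with an ext4 filesystem:

```shell
# Grow the physical volume to fill the (already resized) partition
sudo pvresize /dev/sda1
# Give all free space in the VG to the logical volume
sudo lvextend -l +100%FREE /dev/vg0/lv_root
# Grow the ext4 filesystem online to match the LV
sudo resize2fs /dev/vg0/lv_root
```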
Friday, October 12, 2018
Thursday, October 11, 2018
Safest way to clean up boot partition - Ubuntu
Case I: if /boot is not 100% full and apt is working
1. Check the current kernel version
$ uname -r
It will show the current version, something like: 3.19.0-64-generic
2. Remove the OLD kernels
2.a. List the old kernel
$ sudo dpkg --list 'linux-image*'|awk '{ if ($1=="ii") print $2}'|grep -v `uname -r`
You will get a list of images something like below:
linux-image-3.19.0-25-generic
linux-image-3.19.0-56-generic
linux-image-3.19.0-58-generic
linux-image-3.19.0-59-generic
linux-image-3.19.0-61-generic
linux-image-3.19.0-65-generic
linux-image-extra-3.19.0-25-generic
linux-image-extra-3.19.0-56-generic
linux-image-extra-3.19.0-58-generic
linux-image-extra-3.19.0-59-generic
linux-image-extra-3.19.0-61-generic
2.b. Now it's time to remove the old kernels one by one:
$ sudo apt-get purge linux-image-3.19.0-25-generic
$ sudo apt-get purge linux-image-3.19.0-56-generic
$ sudo apt-get purge linux-image-3.19.0-58-generic
$ sudo apt-get purge linux-image-3.19.0-59-generic
$ sudo apt-get purge linux-image-3.19.0-61-generic
$ sudo apt-get purge linux-image-3.19.0-65-generic
When you're done removing the older kernels, you can run this to remove every package you won't need anymore:
$ sudo apt-get autoremove
And finally you can run this to update the grub kernel list:
$ sudo update-grub
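The one-by-one purge above can also be sketched as a single loop, reusing the listing command from step 2.a with its `grep -v` guard against the running kernel (leave the `echo` in for a dry run first):

```shell
# Dry run: print a purge command for every installed old kernel image.
# Remove the 'echo' only after reviewing the printed list.
for pkg in $(dpkg --list 'linux-image*' | awk '$1=="ii"{print $2}' | grep -v "$(uname -r)"); do
  echo sudo apt-get -y purge "$pkg"
done
```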
Case II: Can't Use apt
i.e. /boot is 100% full
NOTE: this is only if you can't use apt to clean up due to a 100% full /boot
1. Get the list of kernel images
Get the list of kernel images and determine which ones you can do without. This command will show installed kernels except the currently running one:
$ sudo dpkg --list 'linux-image*'|awk '{ if ($1=="ii") print $2}'|grep -v `uname -r`
You will get a list of images something like below:
linux-image-3.19.0-25-generic
linux-image-3.19.0-56-generic
linux-image-3.19.0-58-generic
linux-image-3.19.0-59-generic
linux-image-3.19.0-61-generic
linux-image-3.19.0-65-generic
linux-image-extra-3.19.0-25-generic
linux-image-extra-3.19.0-56-generic
linux-image-extra-3.19.0-58-generic
linux-image-extra-3.19.0-59-generic
linux-image-extra-3.19.0-61-generic
2. Prepare Delete
Craft a command to delete all files in /boot for the kernels that don't matter to you, using brace expansion to keep you sane. Remember to exclude the current and two newest kernel images. From the example above, it's:
sudo rm -rf /boot/*-3.19.0-{25,56,58,59,61,65}-*
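If you want to preview exactly which paths the brace expansion covers before running `rm`, `echo` the pattern first (a harmless dry run; the version numbers here are just the ones from the example):

```shell
# Brace expansion is done by bash itself, so echo shows the expanded
# paths without touching any files:
echo /boot/vmlinuz-3.19.0-{25,56,58}-generic
# → /boot/vmlinuz-3.19.0-25-generic /boot/vmlinuz-3.19.0-56-generic /boot/vmlinuz-3.19.0-58-generic
```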
3. Clean up what's making apt grumpy about a partial install.
sudo apt-get -f install
4. Autoremove
Finally, autoremove to clear out the old kernel image packages that have been orphaned by the manual boot clean.
sudo apt-get autoremove
5. Update Grub
sudo update-grub
6. Now you can update and install packages
sudo apt-get update
Thursday, September 13, 2018
HA Mysql and Share Storage on Ubuntu 18.04
Cluster Nodes:
node1. 192.168.0.11
node2. 192.168.0.12
iSCSI Storage:
server 192.168.0.20
Prepare the iSCSI storage and connect it to all nodes
see http://kafemis.blogspot.com/2011/01/setting-koneksi-ke-hp-lefthand-dengan.html
Setup Cluster Nodes:
Go to all of your nodes and check whether the new disk is visible or not. In my case, /dev/sdb is the new disk.
# fdisk -l | grep -i sd
Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 209715199 104344576 8e Linux LVM
Disk /dev/sdb: 10.7 GB, 10733223936 bytes, 20963328 sectors
On any one of your nodes (e.g. node1), create an LVM using the commands below:
[root@node1 ~]# pvcreate /dev/sdb
[root@node1 ~]# vgcreate vg_cluster /dev/sdb
[root@node1 ~]# lvcreate -n lv_apache -l 100%FREE vg_cluster
[root@node1 ~]# mkfs.ext4 /dev/vg_cluster/lv_apache
Now, go to your remaining nodes and run the commands below:
[root@node2 ~]# pvscan
[root@node2 ~]# vgscan
[root@node2 ~]# lvscan
Finally, verify that the LV we created on node1 is available on all your remaining nodes (e.g. node2) using the command below. You should see /dev/vg_cluster/lv_apache on all your nodes; restart the node if it does not appear.
[root@node2 ~]# lvdisplay /dev/vg_cluster/lv_apache
Make a host entry on each node for all nodes; the cluster will use the host names to communicate with each other. Perform this task on all of your cluster nodes.
# vi /etc/hosts
192.168.0.11 node1.local node1
192.168.0.12 node2.local node2
Install the cluster packages (pacemaker/pcs) on all nodes using the command below.
# apt-get install pcs
Set a password for the hacluster user; this is the cluster administration account. We suggest you set the same password on all nodes.
# passwd hacluster
Start the cluster service and enable it to start automatically on system startup.
# systemctl start pcsd.service
# systemctl enable pcsd.service
Cluster Creation:
Authorize the nodes using the command below; run it on any one of the nodes.
[root@node1 ~]# pcs cluster auth node1 node2
Username: hacluster
Password:
node1: Authorized
node2: Authorized
Create a cluster.
[root@node1 ~]# pcs cluster setup --start --name Node_cluster node1 node2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services...
Removing all cluster configuration files...
node1: Succeeded
node2: Succeeded
Starting cluster on nodes: node1, node2....
node2: Starting Cluster...
node1: Starting Cluster...
Synchronizing pcsd certificates on nodes node1, node2...
node1: Success
node2: Success
Restarting pcsd on the nodes in order to reload the certificates...
node1: Success
node2: Success
Enable the cluster to start at system startup; otherwise you will need to start the cluster manually every time you restart the system.
[root@node1 ~]# pcs cluster enable --all
node1: Cluster Enabled
node2: Cluster Enabled
Run the command below to get detailed information about the cluster, including its resources, pacemaker status, and node details.
# pcs status
Preparing resources:
Apache Web Server:
Install apache server on both nodes.
Now we need to use the shared storage for storing the web content (HTML) files. Perform the operations below on any one of the nodes. Use /Data/www as the document root.
[root@node2 ~]#mkdir /Data
[root@node2 ~]# mount /dev/vg_cluster/lv_apache /Data
[root@node2 ~]# mkdir /Data/www
[root@node2 ~]# mkdir /Data/www/html
[root@node2 ~]# mkdir /Data/www/cgi-bin
[root@node2 ~]# mkdir /Data/www/error
[root@node2 ~]# restorecon -R /Data/www
[root@node2 ~]# cat <<-END >/Data/www/html/index.html
Hello This Is Coming From Kafemis Cluster
END
[root@node2 ~]# umount /Data
MySQL Server:
Install Mysql server on both nodes.
Now we need to use the shared storage for storing the database files. Perform the operations below on any one of the nodes. Use /Data/mysql as the data directory.
[root@node2 ~]#mkdir /Data
[root@node2 ~]# mount /dev/vg_cluster/lv_apache /Data
[root@node2 ~]# mkdir /Data/mysql
Change the MySQL Data Directory
see http://kafemis.blogspot.com/2017/08/change-mysql-data-directory.html
Edit the /usr/lib/ocf/lib/heartbeat/mysql-common.sh script to point at the new data directory: change OCF_RESKEY_datadir_default="/var/lib/mysql" to OCF_RESKEY_datadir_default="/Data/mysql".
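The edit above can be applied with a one-line sed. The sketch below runs against a temporary copy so it can be tried safely; on a real node you would point sed at /usr/lib/ocf/lib/heartbeat/mysql-common.sh instead:

```shell
# Demo on a throwaway file holding the line we want to change
f=$(mktemp)
echo 'OCF_RESKEY_datadir_default="/var/lib/mysql"' > "$f"
# Replace the default datadir with the shared-storage path
sed -i 's|/var/lib/mysql|/Data/mysql|' "$f"
cat "$f"
# → OCF_RESKEY_datadir_default="/Data/mysql"
rm -f "$f"
```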
Creating Resources:
Create a filesystem resource for the Apache server; this is nothing but the shared storage coming from the iSCSI server.
#pcs resource create httpd_fs Filesystem device="/dev/mapper/vg_cluster-lv_apache" directory="/Data" fstype="ext4" --group cls_node
Create an IP address resource; this will act as a virtual IP for Apache. Clients will use this IP for accessing the web content instead of an individual node's IP.
#pcs resource create vip IPaddr2 ip=192.168.0.100 cidr_netmask=24 --group cls_node
Create an Apache resource which will monitor the status of Apache server and move the resource to another node in case of any failure.
# pcs resource create httpd_ser apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group cls_node
Create a MySQL resource which will monitor the status of the MySQL server and move the resource to another node in case of any failure.
#pcs resource create p_mysql ocf:heartbeat:mysql binary="/usr/bin/mysqld_safe" config="/etc/mysql/my.cnf" --group cls_node
Since we are not using fencing, disable it (STONITH). You must disable it to start the cluster resources, but disabling STONITH in a production environment is not recommended.
# pcs property set stonith-enabled=false
Check the status of the cluster
[root@node1 ~]# pcs status
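To verify that failover actually works, you can put the active node into standby and watch the resource group move. This is a sketch assuming node1 currently holds the resources; on older pcs versions the subcommand is `pcs cluster standby` rather than `pcs node standby`:

```shell
# Move all resources off node1 by putting it into standby
pcs node standby node1
# Watch the cls_node group (filesystem, VIP, Apache, MySQL) start on node2
pcs status
# Bring node1 back into the cluster when done
pcs node unstandby node1
```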
Monday, July 16, 2018
Setup Port Forwarding on Ubuntu 16.04
1. sudo ufw enable (enable the firewall)
2. Enabling the Default Policies
sudo ufw default deny incoming (Default incoming policy changed to 'deny')
sudo ufw default allow outgoing (Default outgoing policy changed to 'allow')
3. sudo ufw allow ssh (Enabling SSH Connections)
4. sudo ufw allow 80 (Enabling HTTP)
5. sudo ufw allow 443 (enable HTTPS; do the same for other ports)
6. sudo ufw deny 80 (deny http)
7. sudo ufw delete allow http
8. sudo ufw status numbered (check status port)
9. update /etc/ufw/before.rules
*filter
-A FORWARD -i eth0 -o eth1 -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -i ens192 -o ens160 -p tcp --syn --dport 587 -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -i ens192 -o ens160 -p tcp --syn --dport 465 -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -i eth0 -o eth1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A FORWARD -i eth1 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
COMMIT
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -i ens192 -d 203.125.90.92 -p tcp --dport 143 -j DNAT --to-destination 172.16.10.18:143
-A PREROUTING -i ens192 -d 203.125.90.92 -p tcp --dport 993 -j DNAT --to-destination 172.16.10.18:993
-A PREROUTING -i ens192 -d 203.125.90.92 -p tcp --dport 587 -j DNAT --to-destination 172.16.10.18:587
-A PREROUTING -i ens192 -d 203.125.90.92 -p tcp --dport 465 -j DNAT --to-destination 172.16.10.18:465
-A POSTROUTING -d 172.16.10.18 -o ens160 -p tcp --dport 587 -j SNAT --to-source 172.16.80.104
-A POSTROUTING -d 172.16.10.18 -o ens160 -p tcp --dport 465 -j SNAT --to-source 172.16.80.104
-A POSTROUTING -d 172.16.10.18 -o ens160 -p tcp --dport 143 -j SNAT --to-source 172.16.80.104
-A POSTROUTING -d 172.16.10.18 -o ens160 -p tcp --dport 993 -j SNAT --to-source 172.16.80.104
COMMIT
For a machine with a single network interface:
9. update /etc/ufw/before.rules
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -p tcp --dport 993 -j DNAT --to-destination 172.16.10.18:993
-A PREROUTING -p tcp --dport 143 -j DNAT --to-destination 172.16.10.18:143
-A PREROUTING -p tcp --dport 465 -j DNAT --to-destination 172.16.10.18:465
-A PREROUTING -p tcp --dport 587 -j DNAT --to-destination 172.16.10.18:587
-A PREROUTING -p tcp --dport 25 -j DNAT --to-destination 172.16.10.18:25
-A POSTROUTING -j MASQUERADE
COMMIT
*filter
-A FORWARD -p tcp --syn --dport 143 -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -p tcp --syn --dport 993 -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -p tcp --syn --dport 587 -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -p tcp --syn --dport 465 -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -p tcp --syn --dport 25 -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
COMMIT
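For the FORWARD and DNAT rules above to take effect, UFW also needs forwarding enabled globally. These two settings are standard UFW configuration, not shown in the numbered steps:

```shell
# /etc/default/ufw : let forwarded packets through by default
sudo sed -i 's/DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw
# /etc/ufw/sysctl.conf : enable kernel IP forwarding
echo 'net/ipv4/ip_forward=1' | sudo tee -a /etc/ufw/sysctl.conf
# Restart ufw to apply both changes
sudo ufw disable && sudo ufw enable
```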
Useful tools for checking:
1. iptables -t nat --line-numbers -L
2. iptables -t nat -F POSTROUTING
3. iptables -t nat -F PREROUTING
4. iptables -S
5. systemctl restart ufw
6. iptables -L
Tuesday, February 27, 2018
Document Server and ownCloud Docker installation
Requirements
- The latest version of Docker (can be downloaded here: https://docs.docker.com/engine/installation/)
- Docker compose (can be downloaded here: https://docs.docker.com/compose/install/)
Installation
- Get the latest version of this repository by running the command:
git clone --recursive https://github.com/ONLYOFFICE/docker-onlyoffice-owncloud
cd docker-onlyoffice-owncloud
git submodule update --remote
- Edit the docker-compose.yml file (if you want to connect Document Server to Nextcloud) by opening it and altering the
image: owncloud:fpm
line to:
image: nextcloud:fpm
This step is optional: if you want to use Document Server with ownCloud, you do not need to change anything.
- Run Docker Compose:
sudo docker-compose up -d
Please note: you might need to wait a couple of minutes until all the containers are up and running after the above command.
- Now launch the browser and enter the web server address. The ownCloud/Nextcloud wizard webpage will open. Enter all the necessary data to complete the wizard.
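Before opening the browser, a quick way to see whether the containers have finished starting (standard Docker Compose commands, run from the project folder; nothing specific to this repository):

```shell
# List the stack's containers and their state
sudo docker-compose ps
# Tail recent logs if something is still starting up
sudo docker-compose logs --tail=50
```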
- Go to the project folder and run the set_configuration.sh script:
sudo bash set_configuration.sh
Now you can enter ownCloud/Nextcloud and create a new document. It will be opened in ONLYOFFICE Document Server.
Tuesday, February 13, 2018
Remove Old Kernels via DPKG
If your /boot partition is already full while doing an upgrade or package install, and apt (the script above uses apt) can't remove packages due to broken dependencies, you can manually find the old kernel packages and remove them via DPKG:
1. Run this command to check the current kernel and DON'T REMOVE it:
uname -r
2. List all kernels excluding the currently booted one:
dpkg -l | tail -n +6 | grep -E 'linux-image-[0-9]+' | grep -Fv $(uname -r)
Example output:
rc linux-image-4.4.0-15-generic 4.4.0-15.31 amd64 Linux kernel image for version 4.4.0 on 64 bit x86 SMP
ii linux-image-4.4.0-18-generic 4.4.0-18.34 amd64 Linux kernel image for version 4.4.0 on 64 bit x86 SMP
rc linux-image-4.6.0-040600rc3-generic 4.6.0-040600rc3.201604120934 amd64 Linux kernel image for version 4.6.0 on 64 bit x86 SMP
There will be three statuses in the listed kernel images:
- rc: means it has already been removed.
- ii: means installed, eligible for removal.
- iU: DON’T REMOVE. It means not installed, but queued for install in apt.
3. Or list only the installed (ii) kernel images:
dpkg -l linux-image-\* | grep ^ii
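The status-column filtering used throughout this post can be seen on a small canned sample (the package lines below are made up for the demo; no live system needed):

```shell
# awk prints the package name ($2) only for lines whose status ($1) is "ii"
printf 'ii linux-image-4.4.0-18-generic\nrc linux-image-4.4.0-15-generic\n' \
  | awk '$1=="ii"{print $2}'
# → linux-image-4.4.0-18-generic
```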
4. Remove old kernel images in status ii, it’s “linux-image-4.4.0-18-generic” in the example above:
sudo dpkg --purge linux-image-4.4.0-18-generic
If the command fails, remove the dependency packages that the output tells you about via sudo dpkg --purge PACKAGE.
Also try to remove the respective header and common header packages (don't worry if this command fails):
sudo dpkg --purge linux-headers-4.4.0-18-generic linux-headers-4.4.0-18
5. Finally you may fix the apt broken dependency via command:
sudo apt -f install