Tuesday, April 28, 2020

Extend LVM Disk on Ubuntu

https://www.tecmint.com/extend-and-reduce-lvms-in-linux/

https://packetpushers.net/ubuntu-extend-your-default-lvm-space/
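In short, once the underlying volume group has free space, extending the root LV usually comes down to two commands (a minimal sketch, assuming the Ubuntu default names /dev/ubuntu-vg/ubuntu-lv and an ext4 filesystem; see the links above for the full walkthroughs):

sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv   # grow the LV into all free space in the VG
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv               # grow the ext4 filesystem to match
sudo lvs && df -h /                                   # verify the new sizes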

Reducing an LVM2 Swap Logical Volume

  1. Disable swapping for the associated logical volume:
    # swapoff -v /dev/VolGroup00/LogVol01
  2. Reduce the LVM2 logical volume by 512 MB:
    # lvreduce /dev/VolGroup00/LogVol01 -L -512M
  3. Format the new swap space:
    # mkswap /dev/VolGroup00/LogVol01
  4. Activate swap on the logical volume:
    # swapon -v /dev/VolGroup00/LogVol01
  5. To test if the swap logical volume was successfully reduced, inspect active swap space:
    $ cat /proc/swaps
    $ free -h

Monday, March 23, 2020

webmin nginx 500 - error - perl execution failed

Install the package below:
sudo apt-get install libwww-perl

Friday, October 12, 2018

Resize virtual machine HDD on Ubuntu

1. sudo parted
2. resizepart <partition number> <new end>   (e.g. resizepart 1 300GB)
3. Go to Webmin, edit the physical volume and choose "Resize to match device"
4. Go to the logical volume and click "Use all free VG space" (a CLI-only sketch follows below)
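If Webmin is not available, a rough CLI equivalent of steps 2-4 (a sketch only; it assumes the partition is /dev/sda1, the VG is vg0, the LV is /dev/vg0/root and the filesystem is ext4 -- adjust names to your layout):

sudo parted /dev/sda resizepart 1 300GB   # step 2 above, non-interactive
sudo pvresize /dev/sda1                   # same as Webmin "Resize to match device"
sudo lvextend -l +100%FREE /dev/vg0/root  # same as Webmin "Use all free VG space"
sudo resize2fs /dev/vg0/root              # grow the ext4 filesystem online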

Thursday, October 11, 2018

Safest way to clean up boot partition - Ubuntu

Case I: if /boot is not 100% full and apt is working

1. Check the current kernel version

$ uname -r 
It will show the current kernel version, for example:
3.19.0-64-generic

2. Remove the OLD kernels

2.a. List the old kernel

$ sudo dpkg --list 'linux-image*'|awk '{ if ($1=="ii") print $2}'|grep -v `uname -r`
You will get a list of images like the one below:
linux-image-3.19.0-25-generic
linux-image-3.19.0-56-generic
linux-image-3.19.0-58-generic
linux-image-3.19.0-59-generic
linux-image-3.19.0-61-generic
linux-image-3.19.0-65-generic
linux-image-extra-3.19.0-25-generic
linux-image-extra-3.19.0-56-generic
linux-image-extra-3.19.0-58-generic
linux-image-extra-3.19.0-59-generic
linux-image-extra-3.19.0-61-generic

2.b. Now remove the old kernels one by one:

$ sudo apt-get purge linux-image-3.19.0-25-generic
$ sudo apt-get purge linux-image-3.19.0-56-generic
$ sudo apt-get purge linux-image-3.19.0-58-generic
$ sudo apt-get purge linux-image-3.19.0-59-generic
$ sudo apt-get purge linux-image-3.19.0-61-generic
$ sudo apt-get purge linux-image-3.19.0-65-generic
When you're done removing the older kernels, run this to remove any packages you no longer need (a one-liner alternative to the per-kernel purges is sketched at the end of this case):
$ sudo apt-get autoremove
And finally you can run this to update grub kernel list:
$ sudo update-grub
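As an alternative to purging the kernels one at a time in step 2.b, the same list can be fed straight into apt (a sketch; review the expanded package list before confirming, and keep the newest kernel in addition to the running one):

$ sudo apt-get purge $(dpkg --list 'linux-image*' | awk '{ if ($1=="ii") print $2}' | grep -v $(uname -r))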

Case II: Can't Use apt i.e. /boot is 100% full

NOTE: this is only if you can't use apt to clean up due to a 100% full /boot

1. Get the list of kernel images

Get the list of kernel images and determine what you can do without. This command will show installed kernels except the currently running one:
$ sudo dpkg --list 'linux-image*'|awk '{ if ($1=="ii") print $2}'|grep -v `uname -r`
You will get a list of images like the one below:
linux-image-3.19.0-25-generic
linux-image-3.19.0-56-generic
linux-image-3.19.0-58-generic
linux-image-3.19.0-59-generic
linux-image-3.19.0-61-generic
linux-image-3.19.0-65-generic
linux-image-extra-3.19.0-25-generic
linux-image-extra-3.19.0-56-generic
linux-image-extra-3.19.0-58-generic
linux-image-extra-3.19.0-59-generic
linux-image-extra-3.19.0-61-generic

2. Prepare Delete

Craft a command to delete all files in /boot for the kernels you no longer need, using brace expansion to keep it manageable. Remember to exclude the current and the two newest kernel images. From the example above:
sudo rm -rf /boot/*-3.19.0-{25,56,58,59,61,65}-*

3. Clean up what's making apt grumpy about a partial install.

sudo apt-get -f install

4. Autoremove

Finally, autoremove to clear out the old kernel image packages that have been orphaned by the manual boot clean.
sudo apt-get autoremove

5. Update Grub

sudo update-grub

6. Now you can update and install packages

sudo apt-get update

Thursday, September 13, 2018

HA Mysql and Share Storage on Ubuntu 18.04

Cluster Nodes:

node1. 192.168.0.11
node2. 192.168.0.12

iSCSI Storage:

server 192.168.0.20

Prepare Iscsi Storage to connect all node

see http://kafemis.blogspot.com/2011/01/setting-koneksi-ke-hp-lefthand-dengan.html
  

Setup Cluster Nodes:

Go to all of your nodes and check whether the new disk is visible or not. In my case, /dev/sdb is the new disk.

# fdisk -l | grep -i sd
Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048   209715199   104344576   8e  Linux LVM
Disk /dev/sdb: 10.7 GB, 10733223936 bytes, 20963328 sectors
 
On any one of your nodes (e.g. node1), create the LVM volume using the commands below


[root@node1 ~]# pvcreate /dev/sdb
[root@node1 ~]# vgcreate vg_cluster /dev/sdb
[root@node1 ~]# lvcreate -n lv_apache -l 100%FREE vg_cluster
[root@node1 ~]# mkfs.ext4 /dev/vg_cluster/lv_apache
 
Now, go to your remaining nodes and run the commands below

[root@node2 ~]# pvscan
[root@node2 ~]# vgscan
[root@node2 ~]# lvscan
 
Finally, verify that the LV we created on node1 is available on all your 
remaining nodes (e.g. node2) using the command below. You should 
see /dev/vg_cluster/lv_apache on all your nodes. Restart the node if it does not appear.


[root@node2 ~]# lvdisplay /dev/vg_cluster/lv_apache
 
Make a host entry on each node for all nodes; the cluster will use the 
host names to communicate with each other. Perform this on all of your 
cluster nodes.
 
# vi /etc/hosts
192.168.0.11 node1.local  node1
192.168.0.12 node2.local  node2
 
Install the cluster packages (pacemaker/pcs) on all nodes using the command below.

# apt-get install pcs 
 
Set a password for the hacluster user; this is the cluster administration account. We suggest you set the same password on all nodes.

# passwd hacluster
 
Start the pcsd service and enable it to start automatically on system startup.

# systemctl start pcsd.service
# systemctl enable pcsd.service
 

 Cluster Creation:

Authorize the nodes using the command below; run it on any one of the nodes.

[root@node1 ~]# pcs cluster auth node1 node2
Username: hacluster
Password:
node1: Authorized
node2: Authorized

Create a cluster.

[root@node1 ~]# pcs cluster setup --start --name Node_cluster node1 node2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
node1: Succeeded
node2: Succeeded
Starting cluster on nodes: node1, node2....
node2: Starting Cluster...
node1: Starting Cluster...
Synchronizing pcsd certificates on nodes node1, node2...
node1: Success
node2: Success

Restarting pcsd on the nodes in order to reload the certificates...
node1: Success
node2: Success

Enable the cluster to start at system startup; otherwise you will need to start the cluster manually every time you restart the system.

[root@node1 ~]# pcs cluster enable --all
node1: Cluster Enabled
node2: Cluster Enabled
 
Run the command below to get detailed information about the cluster, including its resources, pacemaker status, and node details.

[root@node1 ~]# pcs status

Preparing resources:

Apache Web Server:

Install apache server on both nodes.

Now we need to use the shared storage to hold the web content (HTML) files. Perform the operations below on any one of the nodes, using /Data/www as the document root.

[root@node2 ~]# mkdir /Data
[root@node2 ~]# mount /dev/vg_cluster/lv_apache /Data
[root@node2 ~]# mkdir /Data/www
[root@node2 ~]# mkdir /Data/www/html
[root@node2 ~]# mkdir /Data/www/cgi-bin
[root@node2 ~]# mkdir /Data/www/error
[root@node2 ~]# restorecon -R /Data/www    # only relevant on SELinux systems; a no-op on default Ubuntu
[root@node2 ~]# cat <<-END >/Data/www/html/index.html
Hello This Is Coming From Kafemis Cluster
END
[root@node2 ~]# umount /Data
 
  

MySQL Server: 

Install Mysql server on both nodes.


Now we need to use the shared storage to hold the database files. Perform the operations below on any one of the nodes, using /Data/mysql as the data directory.

[root@node2 ~]# mkdir -p /Data    # already exists if created in the Apache step above
[root@node2 ~]# mount /dev/vg_cluster/lv_apache /Data
[root@node2 ~]# mkdir /Data/mysql
 
Change the MySQL data directory:
see http://kafemis.blogspot.com/2017/08/change-mysql-data-directory.html
 
Point the /usr/lib/ocf/lib/heartbeat/mysql-common.sh script at the new data directory: change OCF_RESKEY_datadir_default="/var/lib/mysql" to OCF_RESKEY_datadir_default="/Data/mysql".
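A quick way to make that edit (a sketch; it keeps a .bak copy, since the file belongs to the resource-agents package and may be overwritten on upgrade):

sudo sed -i.bak 's|OCF_RESKEY_datadir_default="/var/lib/mysql"|OCF_RESKEY_datadir_default="/Data/mysql"|' /usr/lib/ocf/lib/heartbeat/mysql-common.sh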
 

Creating Resources: 

Create a filesystem resource for the Apache server; this is simply the shared storage coming from the iSCSI server.
 
# pcs resource create httpd_fs Filesystem device="/dev/mapper/vg_cluster-lv_apache" directory="/Data" fstype="ext4" --group cls_node

Create an IP address resource; this will act as the virtual IP for Apache. Clients will use this IP to access the web content instead of the individual node IPs.

# pcs resource create vip IPaddr2 ip=192.168.0.100 cidr_netmask=24 --group cls_node
 
Create an Apache resource which will monitor the status of the Apache server and move the resource to another node in case of failure.
 
# pcs resource create httpd_ser apache configfile="/etc/apache2/apache2.conf" statusurl="http://127.0.0.1/server-status" --group cls_node
(On CentOS/RHEL the config file would be /etc/httpd/conf/httpd.conf; the status URL requires Apache's mod_status to be enabled.)
 
 
Create a MySQL resource which will monitor the status of the MySQL server and move the resource to another node in case of failure.

# pcs resource create p_mysql ocf:heartbeat:mysql binary="/usr/bin/mysqld_safe" config="/etc/mysql/my.cnf" --group cls_node

Since we are not using fencing, disable it (STONITH). You must disable it in order to start the cluster resources, but disabling STONITH in a production environment is not recommended.

# pcs property set stonith-enabled=false
 
Check the status of the cluster
 
[root@node1 ~]# pcs status 
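To test failover, the whole resource group can be pushed to the other node and then released (a sketch; the group name cls_node and the node names follow the commands above):

[root@node1 ~]# pcs resource move cls_node node2    # move the group to node2
[root@node1 ~]# pcs status                          # confirm the resources started on node2
[root@node1 ~]# pcs resource clear cls_node         # remove the temporary move constraint afterwards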


 

Monday, July 16, 2018

Setup Port Forwarding on Ubuntu 16.04

Setup port forwarding on Ubuntu
1. sudo ufw enable (enable the firewall)
2. Enabling the default policies
    sudo ufw default deny incoming (default incoming policy changed to 'deny')
    sudo ufw default allow outgoing (default outgoing policy changed to 'allow')
3. sudo ufw allow ssh (enable SSH connections)
4. sudo ufw allow 80 (enable HTTP)
5. sudo ufw allow 443 (enable HTTPS; repeat for other ports as needed)
6. sudo ufw deny 80 (deny HTTP)
7. sudo ufw delete allow http
8. sudo ufw status numbered (check port status)


9. update /etc/ufw/before.rules (each table section must end with its own COMMIT line)
    *filter
   -A FORWARD -i eth0 -o eth1 -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT
   -A FORWARD -i ens192 -o ens160 -p tcp --syn --dport 587 -m conntrack --ctstate NEW -j ACCEPT
   -A FORWARD -i ens192 -o ens160 -p tcp --syn --dport 465 -m conntrack --ctstate NEW -j ACCEPT
   -A FORWARD -i eth0 -o eth1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
   -A FORWARD -i eth1 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
   COMMIT
   *nat
    :PREROUTING ACCEPT [0:0]
    :INPUT ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    :POSTROUTING ACCEPT [0:0]
    -A PREROUTING -i ens192 -d 203.125.90.92  -p tcp --dport 143 -j  DNAT --to-destination 172.16.10.18:143
    -A PREROUTING -i ens192 -d 203.125.90.92  -p tcp --dport 993 -j  DNAT --to-destination 172.16.10.18:993
    -A PREROUTING -i ens192 -d 203.125.90.92  -p tcp --dport 587 -j  DNAT --to-destination 172.16.10.18:587
    -A PREROUTING -i ens192 -d 203.125.90.92  -p tcp --dport 465 -j  DNAT --to-destination 172.16.10.18:465
    -A POSTROUTING -d 172.16.10.18 -o ens160 -p tcp --dport 587 -j  SNAT --to-source 172.16.80.104
    -A POSTROUTING -d 172.16.10.18 -o ens160 -p tcp --dport 465 -j  SNAT --to-source 172.16.80.104
    -A POSTROUTING -d 172.16.10.18 -o ens160 -p tcp --dport 143 -j  SNAT --to-source 172.16.80.104
    -A POSTROUTING -d 172.16.10.18 -o ens160 -p tcp --dport 993 -j  SNAT --to-source 172.16.80.104
    COMMIT

For a machine with a single network interface
9. update /etc/ufw/before.rules
*nat
 :PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]

  -A PREROUTING -p tcp --dport 993 -j DNAT --to-destination 172.16.10.18:993
  -A PREROUTING -p tcp --dport 143 -j DNAT --to-destination 172.16.10.18:143
  -A PREROUTING -p tcp --dport 465 -j DNAT --to-destination 172.16.10.18:465
 -A PREROUTING -p tcp --dport 587 -j DNAT --to-destination 172.16.10.18:587
 -A PREROUTING -p tcp --dport 25 -j DNAT --to-destination 172.16.10.18:25
 -A POSTROUTING -j MASQUERADE

COMMIT

*filter
  -A FORWARD -p tcp --syn --dport 143 -m conntrack --ctstate NEW -j ACCEPT
  -A FORWARD -p tcp --syn --dport 993 -m conntrack --ctstate NEW -j ACCEPT
  -A FORWARD -p tcp --syn --dport 587 -m conntrack --ctstate NEW -j ACCEPT
  -A FORWARD -p tcp --syn --dport 465 -m conntrack --ctstate NEW -j ACCEPT
  -A FORWARD -p tcp --syn --dport 25 -m conntrack --ctstate NEW -j ACCEPT
  -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

COMMIT
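Note: for the DNAT/FORWARD rules above to take effect, the kernel must also be allowed to forward packets. With UFW that is normally done like this (then run sudo ufw reload):

# /etc/default/ufw
DEFAULT_FORWARD_POLICY="ACCEPT"   # optional here, since explicit FORWARD accepts are added above

# /etc/ufw/sysctl.conf
net/ipv4/ip_forward=1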

Useful tools:
1. iptables -t nat --line-numbers -L
2. iptables -t nat -F POSTROUTING   (flush the POSTROUTING chain)
3. iptables -t nat -F PREROUTING    (flush the PREROUTING chain)
4. iptables -S
5. systemctl restart ufw
6. iptables -L

Tuesday, February 27, 2018

Document Server and ownCloud Docker installation

Requirements

Installation

  1. Get the latest version of this repository by running:
git clone --recursive https://github.com/ONLYOFFICE/docker-onlyoffice-owncloud
cd docker-onlyoffice-owncloud
git submodule update --remote
  2. Edit the docker-compose.yml file (only if you want to connect Document Server to Nextcloud), changing the image: owncloud:fpm line to:
image: nextcloud:fpm
This step is optional; if you want to use Document Server with ownCloud, you do not need to change anything.
  3. Run Docker Compose:
sudo docker-compose up -d
Please note: you might need to wait a couple of minutes for all the containers to be up and running after the above command.
  4. Now launch the browser and enter the web server address. The ownCloud/Nextcloud wizard page will open. Enter all the necessary data to complete the wizard.
  5. Go to the project folder and run the set_configuration.sh script:
sudo bash set_configuration.sh
Now you can enter ownCloud/Nextcloud and create a new document. It will be opened in ONLYOFFICE Document Server.

Tuesday, February 13, 2018

Remove Old Kernels via DPKG


If your /boot partition is already full after an upgrade or package install, and apt can't remove packages due to broken dependencies, you can manually find the old kernel packages and remove them via dpkg:
1. Run this command to check the current kernel and DON'T REMOVE it:
uname -r
2. List all kernels excluding the current booted:
dpkg -l | tail -n +6 | grep -E 'linux-image-[0-9]+' | grep -Fv $(uname -r)
Example output:
rc  linux-image-4.4.0-15-generic               4.4.0-15.31                                         amd64        Linux kernel image for version 4.4.0 on 64 bit x86 SMP
ii  linux-image-4.4.0-18-generic               4.4.0-18.34                                         amd64        Linux kernel image for version 4.4.0 on 64 bit x86 SMP
rc  linux-image-4.6.0-040600rc3-generic        4.6.0-040600rc3.201604120934                        amd64        Linux kernel image for version 4.6.0 on 64 bit x86 SMP
There are three possible statuses in the listed kernel images:
  • rc: means it has already been removed.
  • ii: means installed, eligible for removal.
  • iU: DON’T REMOVE. It means not installed, but queued for install in apt.
3. List all kernels that can be deleted
dpkg -l linux-image-\* | grep ^ii
4. Remove old kernel images in status ii, it’s “linux-image-4.4.0-18-generic” in the example above:
sudo dpkg --purge linux-image-4.4.0-18-generic
If the command fails, remove the dependency packages that the output tells you via sudo dpkg --purge PACKAGE.
Also try to remove the respective header packages (don't worry if the command fails):
sudo dpkg --purge linux-headers-4.4.0-18-generic linux-headers-4.4.0-18
5. Finally you may fix the apt broken dependency via command:
sudo apt -f install
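Optionally, once apt works again, the leftover 'rc' entries (removed, but config files still present) can be purged in one go (a sketch; review the list printed by the first command before running the second):

dpkg -l 'linux-image*' | awk '/^rc/{print $2}'
dpkg -l 'linux-image*' | awk '/^rc/{print $2}' | xargs -r sudo dpkg --purge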

Wednesday, August 23, 2017

How to fix DRBD recovery from split brain

Step 1: Start drbd manually on both nodes
Step 2: Define one node as secondary and discard its data:
drbdadm secondary all
drbdadm disconnect all
drbdadm -- --discard-my-data connect all
Step 3: Define the other node as primary and connect:
drbdadm primary all
drbdadm disconnect all
drbdadm connect all
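After reconnecting, the resync can be watched on either node until both sides report UpToDate (assuming drbd8.x as used elsewhere in these notes):

watch -n1 cat /proc/drbd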

Tuesday, August 22, 2017

Change MySQL Data Directory

  1. Stop MySQL using the following command:
    sudo /etc/init.d/mysql stop
  2. Copy the existing data directory (default located in /var/lib/mysql) using the following command:
    sudo cp -R -p /var/lib/mysql /newpath
  3. Edit the MySQL configuration file with the following command:
    sudo gedit /etc/mysql/my.cnf   # or perhaps /etc/mysql/mysql.conf.d/mysqld.cnf
  4. Look for the entry for datadir, and change the path (which should be /var/lib/mysql) to the new data directory.
  5. In the terminal, enter the command:
    sudo gedit /etc/apparmor.d/usr.sbin.mysqld
  6. Look for lines beginning with /var/lib/mysql. Replace /var/lib/mysql in those lines with the new path (an example of both edits is sketched after this list).
  7. Save and close the file.
  8. Restart the AppArmor profiles with the command:
    sudo /etc/init.d/apparmor reload
  9. Restart MySQL with the command:
    sudo /etc/init.d/mysql restart
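For example, if the new location is /newpath/mysql (matching the copy in step 2), the edited lines would look roughly like this (a sketch, not the full files):

# /etc/mysql/my.cnf (or /etc/mysql/mysql.conf.d/mysqld.cnf)
datadir = /newpath/mysql

# /etc/apparmor.d/usr.sbin.mysqld -- the two data-directory rules, updated
  /newpath/mysql/ r,
  /newpath/mysql/** rwk,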

Friday, August 18, 2017

DRBD Configuration

1. install DRBD  > sudo apt install drbd8-utils
2.
To configure drbd, on the first host edit /etc/drbd.conf

global { usage-count no; }
common { syncer { rate 100M; } }
resource r0 {
        protocol C;
        startup {
                wfc-timeout  15;
                degr-wfc-timeout 60;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "secret";
        }
        on eoblas03 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.0.1:7788;
                meta-disk internal;
        }
        on eoblas05 {   # the second node; the host name must differ from the first "on" block
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.0.2:7788;
                meta-disk internal;
        }
} 
 
3.  Now copy /etc/drbd.conf to the second host:
     scp /etc/drbd.conf drbd02:~
4.  Now, using the drbdadm utility, initialize the metadata storage. On each server
          execute:
    sudo drbdadm create-md r0
5.  Next, on both hosts, start the drbd daemon:
    sudo systemctl start drbd.service 
 
6.  On the drbd01, or whichever host you wish to be the primary, enter the following:
    sudo drbdadm -- --overwrite-data-of-peer primary all 
 
7.  After executing the above command, the data will start syncing with the secondary host.  To watch the progress, on
          drbd02 enter the following:
    watch -n1 cat /proc/drbd
    To stop watching the output press Ctrl+c.
          
8.  Finally, add a filesystem to /dev/drbd0 and mount it:
    sudo mkfs.ext3 /dev/drbd0
    sudo mount /dev/drbd0 /srv 
 

Tuesday, August 15, 2017

High Availability with Heartbeat on Ubuntu

1. make sure all servers/nodes can communicate
2. install heartbeat > sudo apt-get install heartbeat
3. Create the ha.cf file on both servers:
    sudo vi /etc/ha.d/ha.cf
  
  keepalive 2
  warntime 5
  deadtime 15
  initdead 90
  udpport 694
  auto_failback on
  ucast eth0 172.16.10.135 # on node2, use node1's IP address here instead
  logfile /var/log/ha-log
  node eoblas05
  node eoblas03 


4. Create authkeys File

The authorization key is used to allow cluster members to join a cluster. We can simply generate a random key for this purpose.

On the primary node, run these commands to generate a suitable authorization key in an environment variable named AUTH_KEY:

if [ -z "${AUTH_KEY}" ]; then
  export AUTH_KEY="$(command dd if='/dev/urandom' bs=512 count=1 2>'/dev/null' \
      | command openssl sha1 \
      | command cut --delimiter=' ' --fields=2)"
fi

Then write the /etc/ha.d/authkeys file with these commands:

sudo bash -c "{
  echo auth1
  echo 1 sha1 $AUTH_KEY
} > /etc/ha.d/authkeys"

Check the contents of the authkeys file like this:

    sudo cat /etc/ha.d/authkeys

It should look something like this (with a different authorization key):

/etc/ha.d/authkeys example:
auth1
1 sha1 d1e6557e2fcb30ff8d4d3ae65b50345fa46a2faa

Ensure that the file is only readable by root:

    sudo chmod 600 /etc/ha.d/authkeys

Now copy the /etc/ha.d/authkeys file from your primary node to your secondary node. You can do this manually, or with scp.

On the secondary server, be sure to set the permissions of the authkeys file:

    sudo chmod 600 /etc/ha.d/authkeys

Both servers should have an identical /etc/ha.d/authkeys file.


5. Create haresources File

The haresources file specifies preferred hosts paired with services that the cluster manages. The preferred host is the node that should run the associated service(s) if the node is available. If the preferred host is not available, i.e. it is not reachable by the cluster, one of the other nodes will take over. In other words, the secondary server will take over if the primary server goes down.

On both servers, open the haresources file in your favorite editor. We'll use vi:

    sudo vi /etc/ha.d/haresources


    eoblas05 IPaddr::172.16.10.35/24 apache2
  #eoblas05 IPaddr::172.16.10.35/24 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext4::noatime  mysql apache2  (example for DRBD and MySQL)

6. TESTING WITH APACHE WEB SERVER

  
Try to access the alias (virtual) IP from a browser. You should see the text "This is node1". Then stop the Heartbeat service on node1 and refresh the browser; you should see "This is node2" (all services handled by Heartbeat on node1 are taken over by node2). For failback, start the Heartbeat service on node1 again (all services handled by Heartbeat on node2 are taken back by node1). A command-line sketch of this test is below.
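A rough way to run the same test from a shell (assuming the virtual IP 172.16.10.35 from the haresources example above):

curl http://172.16.10.35/        # should be answered by node1
sudo systemctl stop heartbeat    # run on node1 to force a failover
curl http://172.16.10.35/        # should now be answered by node2
sudo systemctl start heartbeat   # run on node1 again for failback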

GlusterFS on Ubuntu Servers

1. make sure all servers/nodes can communicate
2. Install server (do for 2 node) > sudo apt-get install glusterfs-server
3. sudo gluster peer probe eoblasxx
   peer probe: success
 
4. This means that the peering was successful. We can check that the nodes are communicating at any time by typing:
   sudo gluster peer status

   Number of Peers: 1

   Hostname: eoblasxx
   Port: 24007
   Uuid: 7bcba506-3a7a-4c5e-94fa-1aaf83f5729b
   State: Peer in Cluster (Connected)
5. Create a storage volume > sudo gluster volume create Voltest replica 2 transport tcp eoblas03:/cluster-storage eoblas05:/cluster-storage force
6. sudo gluster volume start Voltest
   volume start: volume1: success
7. Install Client > sudo apt-get install glusterfs-client
8. sudo mkdir /data
9. sudo mount -t glusterfs eoblas03:Voltest /data
10. Make it auto-mount via /etc/fstab > eoblas03:/Voltest /data glusterfs defaults,nobootwait,_netdev,backupvolfile-server=172.16.10.135,fetch-attempts=10 0 2
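To check the volume and replication created in step 5 (a quick sketch; names as above, and it assumes both nodes mount the volume as clients):

sudo gluster volume info Voltest           # Type should be Replicate, with both bricks listed
echo test | sudo tee /data/replica-test    # write a file through the client mount on one node
ls /data/                                  # the file should also appear on the other node's mount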


Tuesday, February 21, 2017

Exchange hub transport “Mail.que” file large in size

  1. Open the Exchange Management Shell and run Get-TransportServer "HUB01" | fl
  2. Here, look for PipelineTracingEnabled. This should be set to False. If not, run
    Set-TransportServer HUB01 -PipelineTracingEnabled $False
  3. Now run "Get-TransportConfig" and ensure that
    MaxDumpsterSizePerStorageGroup is in MB, not GB

    MaxDumpsterTime : 7.00:00:00

    If not, run

    Set-TransportConfig -MaxDumpsterSizePerStorageGroup -MaxDumpsterTime
  4. Now run “Get-Queue” and take a look at the count of messages in HUB01
  5. Go to services.msc and pause the Microsoft Exchange Transport service
  6. Again, run “Get-Queue” and ensure all pending messages are “zeroed” out
  7. Once messages pending becomes zero, stop the Transport service
  8. Move the mail.que file and all others to a new folder in the same location
  9. Start the Transport service
  10. Take a look at the queue again
  11. You should see that messages would have started getting delivered
  12. Now you can backup or safely delete the old mail.que file

Friday, January 20, 2017

Verbose status on welcome screen

It enables verbose status messages in Windows Vista/7/2008
  1. Open regedit
  2. Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
  3. Create a DWORD value named VerboseStatus (a 32-bit DWORD, also on 64-bit OS)
  4. Change Value to 1
  5. Reboot

Monday, December 19, 2016

Reset CSC file

  1. Open up registry editor (WARNING: Only for Advanced Users)
  2. Browse to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Csc
  3. Add a new key (folder) called Parameters
  4. Under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Csc\Parameters, add a new DWORD called FormatDatabase and set its value to 1

Friday, November 25, 2016

Create cron job on VMware ESXi

1. chmod +w /var/spool/cron/crontabs/root
2. vi  /var/spool/cron/crontabs/root
3.  chmod -w /var/spool/cron/crontabs/root
4.  Add the cron job, for example:
     20 1 26 11 * shutdown -h now # time is in UTC (7 hours behind WIB)
5. chmod +w /var/spool/cron/crontabs/root
6. Run the command "cat /var/run/crond.pid"  #catat nomor pid
7. Run the command "kill 12345" where "12345" should be replaced with the number output by the previous command
8. Restart cron "/usr/lib/vmware/busybox/bin/busybox crond"

After a reboot, the cron job will be removed/lost 

Tuesday, August 02, 2016

Force online file

  1. Locate and click the following registry subkey: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\NetCache
  2. Click Edit, point to New, and then click DWORD Value.
  3. Type SilentForcedAutoReconnect, and then press ENTER to name the value.
  4. Double-click SilentForcedAutoReconnect.
  5. In the Value data box, type 1, and then click OK.