Thursday, September 13, 2018

HA MySQL and Shared Storage on Ubuntu 18.04

Cluster Nodes:

node1: 192.168.0.11
node2: 192.168.0.12

iSCSI Storage:

server: 192.168.0.20

Prepare the iSCSI storage and connect it to all nodes:

see http://kafemis.blogspot.com/2011/01/setting-koneksi-ke-hp-lefthand-dengan.html
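On Ubuntu this can be done with open-iscsi; a minimal sketch, assuming the target portal is the 192.168.0.20 server above (run on every node):

# apt-get install open-iscsi
# iscsiadm -m discovery -t sendtargets -p 192.168.0.20
# iscsiadm -m node --login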
  

Setup Cluster Nodes:

Log in to each of your nodes and check whether the new disk is visible. In my case, /dev/sdb is the new disk.

# fdisk -l | grep -i sd
Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048   209715199   104344576   8e  Linux LVM
Disk /dev/sdb: 10.7 GB, 10733223936 bytes, 20963328 sectors
 
On any one of your nodes (e.g., node1), create an LVM volume using the commands below.


[root@node1 ~]# pvcreate /dev/sdb
[root@node1 ~]# vgcreate vg_cluster /dev/sdb
[root@node1 ~]# lvcreate -n lv_apache -l 100%FREE vg_cluster
[root@node1 ~]# mkfs.ext4 /dev/vg_cluster/lv_apache
 
Now, go to the remaining nodes and run the commands below:

[root@node2 ~]# pvscan
[root@node2 ~]# vgscan
[root@node2 ~]# lvscan
 
Finally, verify that the LV we created on node1 is available on all the remaining nodes (e.g., node2) using the command below. You should see /dev/vg_cluster/lv_apache on all your nodes. Reboot the node if it does not appear.


[root@node2 ~]# ls /dev/vg_cluster/lv_apache
 
Make a host entry for every node on each node; the cluster uses host names to communicate with each other. Perform this task on all of your cluster nodes.
 
# vi /etc/hosts
192.168.0.11 node1.local  node1
192.168.0.12 node2.local  node2 
 
Install the cluster packages (pcs, which pulls in pacemaker and corosync) on all nodes using the command below.

# apt-get install pcs 
 
Set a password for the hacluster user; this is the cluster administration account. We suggest you set the same password on all nodes.

# passwd hacluster
 
Start the cluster service and enable it to start automatically at boot.

# systemctl start pcsd.service
# systemctl enable pcsd.service
 

 Cluster Creation:

Authorize the nodes using the command below; run it on any one of the nodes.

[root@node1 ~]# pcs cluster auth node1 node2
Username: hacluster
Password:
node1: Authorized
node2: Authorized

Create a cluster.

[root@node1 ~]# pcs cluster setup --start --name Node_cluster node1 node2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
node1: Succeeded
node2: Succeeded
Starting cluster on nodes: node1, node2....
node2: Starting Cluster...
node1: Starting Cluster...
Synchronizing pcsd certificates on nodes node1, node2...
node1: Success
node2: Success

Restarting pcsd on the nodes in order to reload the certificates...
node1: Success
node2: Success

Enable the cluster to start at system startup; otherwise you will need to start the cluster every time you restart the system.

[root@node1 ~]# pcs cluster enable --all
node1: Cluster Enabled
node2: Cluster Enabled
 
Run the command below to get detailed information about the cluster, including its resources, pacemaker status, and node details:

[root@node1 ~]# pcs status

Preparing resources:

Apache Web Server:

Install the Apache web server on both nodes (# apt-get install apache2).

Now we need to use the shared storage for storing the web content (HTML) files. Perform the operations below on any one of the nodes. Use /Data/www as the document root.

[root@node2 ~]# mkdir /Data
[root@node2 ~]# mount /dev/vg_cluster/lv_apache /Data
[root@node2 ~]# mkdir /Data/www
[root@node2 ~]# mkdir /Data/www/html
[root@node2 ~]# mkdir /Data/www/cgi-bin
[root@node2 ~]# mkdir /Data/www/error
[root@node2 ~]# cat <<-END >/Data/www/html/index.html
Hello This Is Coming From Kafemis Cluster
END
[root@node2 ~]# umount /Data
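On Ubuntu, Apache serves /var/www/html by default, so the document root still has to be pointed at /Data/www/html on both nodes; a minimal sketch, assuming the stock 000-default.conf virtual host:

# sed -i 's|/var/www/html|/Data/www/html|g' /etc/apache2/sites-available/000-default.conf
# systemctl disable apache2    # the cluster, not systemd, should start Apache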
 
  

MySQL Server: 

Install the MySQL server on both nodes (# apt-get install mysql-server).


Now we need to use the shared storage for storing the database files. Perform the operations below on any one of the nodes. Use /Data/mysql as the data directory.

[root@node2 ~]# mkdir /Data
[root@node2 ~]# mount /dev/vg_cluster/lv_apache /Data
[root@node2 ~]# mkdir /Data/mysql
 
Change the MySQL data directory;
see http://kafemis.blogspot.com/2017/08/change-mysql-data-directory.html
 
Change the /usr/lib/ocf/lib/heartbeat/mysql-common.sh script to point to our new data directory: change OCF_RESKEY_datadir_default="/var/lib/mysql" to OCF_RESKEY_datadir_default="/Data/mysql".
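One way to make that change non-interactively on both nodes (the same edit described above):

# sed -i 's|^OCF_RESKEY_datadir_default=.*|OCF_RESKEY_datadir_default="/Data/mysql"|' /usr/lib/ocf/lib/heartbeat/mysql-common.sh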
 

Creating Resources: 

Create a filesystem resource for the Apache server; this is nothing but the shared storage coming from the iSCSI server.
 
#pcs resource create httpd_fs Filesystem device="/dev/mapper/vg_cluster-lv_apache" directory="/Data" fstype="ext4" --group cls_node
Create an IP address resource; this will act as a virtual IP for Apache. Clients will use this IP to access the web content instead of an individual node's IP.

#pcs resource create vip IPaddr2 ip=192.168.0.100 cidr_netmask=24 --group cls_node
 
Create an Apache resource which will monitor the status of the Apache server and move the resource to the other node in case of failure.
 
# pcs resource create httpd_ser apache configfile="/etc/apache2/apache2.conf" statusurl="http://127.0.0.1/server-status" --group cls_node
 
 
Create a MySQL resource which will monitor the status of the MySQL server and move the resource to the other node in case of failure.

#pcs resource create p_mysql ocf:heartbeat:mysql binary="/usr/bin/mysqld_safe" config="/etc/mysql/my.cnf" --group cls_node

Since we are not using fencing, disable STONITH. You must disable it before the cluster resources will start, but disabling STONITH in a production environment is not recommended.

# pcs property set stonith-enabled=false
 
Check the status of the cluster
 
[root@node1 ~]# pcs status 


 

Monday, July 16, 2018

Setup Port Forwarding on Ubuntu 16.04

1. sudo ufw enable (enable the firewall)
2. Enabling the Default Policies
    sudo ufw default deny incoming (Default incoming policy changed to 'deny')
    sudo ufw default allow outgoing (Default outgoing policy changed to 'allow')
3. sudo ufw allow ssh (Enabling SSH Connections)
4. sudo ufw allow 80 (Enabling HTTP)
5. sudo ufw allow 443 (enable HTTPS and do the same for other ports)
6. sudo ufw deny 80 (deny HTTP)
7. sudo ufw delete allow http
8. sudo ufw status numbered (check port status)


9. update /etc/ufw/before.rules
    *filter
    -A FORWARD -i eth0 -o eth1 -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT
    -A FORWARD -i ens192 -o ens160 -p tcp --syn --dport 587 -m conntrack --ctstate NEW -j ACCEPT
    -A FORWARD -i ens192 -o ens160 -p tcp --syn --dport 465 -m conntrack --ctstate NEW -j ACCEPT
    -A FORWARD -i eth0 -o eth1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    -A FORWARD -i eth1 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    COMMIT
    *nat
    :PREROUTING ACCEPT [0:0]
    :INPUT ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    :POSTROUTING ACCEPT [0:0]
    -A PREROUTING -i ens192 -d 203.125.90.92 -p tcp --dport 143 -j DNAT --to-destination 172.16.10.18:143
    -A PREROUTING -i ens192 -d 203.125.90.92 -p tcp --dport 993 -j DNAT --to-destination 172.16.10.18:993
    -A PREROUTING -i ens192 -d 203.125.90.92 -p tcp --dport 587 -j DNAT --to-destination 172.16.10.18:587
    -A PREROUTING -i ens192 -d 203.125.90.92 -p tcp --dport 465 -j DNAT --to-destination 172.16.10.18:465
    -A POSTROUTING -d 172.16.10.18 -o ens160 -p tcp --dport 587 -j SNAT --to-source 172.16.80.104
    -A POSTROUTING -d 172.16.10.18 -o ens160 -p tcp --dport 465 -j SNAT --to-source 172.16.80.104
    -A POSTROUTING -d 172.16.10.18 -o ens160 -p tcp --dport 143 -j SNAT --to-source 172.16.80.104
    -A POSTROUTING -d 172.16.10.18 -o ens160 -p tcp --dport 993 -j SNAT --to-source 172.16.80.104
    COMMIT
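Note: for the FORWARD rules above to take effect, packet forwarding must also be switched on; on UFW that usually means two more edits:

sudo sed -i 's/DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw
sudo sed -i 's|#net/ipv4/ip_forward=1|net/ipv4/ip_forward=1|' /etc/ufw/sysctl.conf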

For a machine with a single network interface:
9. update /etc/ufw/before.rules
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]

-A PREROUTING -p tcp --dport 993 -j DNAT --to-destination 172.16.10.18:993
-A PREROUTING -p tcp --dport 143 -j DNAT --to-destination 172.16.10.18:143
-A PREROUTING -p tcp --dport 465 -j DNAT --to-destination 172.16.10.18:465
-A PREROUTING -p tcp --dport 587 -j DNAT --to-destination 172.16.10.18:587
-A PREROUTING -p tcp --dport 25 -j DNAT --to-destination 172.16.10.18:25
-A POSTROUTING -j MASQUERADE

COMMIT

*filter
-A FORWARD -p tcp --syn --dport 143 -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -p tcp --syn --dport 993 -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -p tcp --syn --dport 587 -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -p tcp --syn --dport 465 -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -p tcp --syn --dport 25 -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

COMMIT
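After editing before.rules, reload UFW so the new NAT rules take effect:
sudo ufw reload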

Useful commands:
1. iptables -t nat --line-numbers -L
2. iptables -t nat -F POSTROUTING (flush the chain)
3. iptables -t nat -F PREROUTING (flush the chain)
4. iptables -S
5. systemctl restart ufw
6. iptables -L

Tuesday, February 27, 2018

Document Server and ownCloud Docker installation

Requirements

Docker and Docker Compose installed on the host.
Installation

  1. Get the latest version of this repository running the command:
git clone --recursive https://github.com/ONLYOFFICE/docker-onlyoffice-owncloud
cd docker-onlyoffice-owncloud
git submodule update --remote
  2. Edit the docker-compose.yml file (if you want to connect Document Server to Nextcloud) by opening it and altering the image: owncloud:fpm line:
image: nextcloud:fpm
This step is optional; if you want to use Document Server with ownCloud, you do not need to change anything.
  3. Run Docker Compose:
sudo docker-compose up -d
Please note: you might need to wait a couple of minutes until all the containers are up and running after the above command.
  4. Now launch the browser and enter the web server address. The ownCloud/Nextcloud wizard page will open. Enter all the necessary data to complete the wizard.
  5. Go to the project folder and run the set_configuration.sh script:
sudo bash set_configuration.sh
Now you can enter ownCloud/Nextcloud and create a new document. It will be opened in ONLYOFFICE Document Server.
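If something does not come up, a quick check (not part of the original guide) is to list the container states from the project folder:
sudo docker-compose ps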

Tuesday, February 13, 2018

Remove Old Kernels via DPKG


If your /boot partition is already full while doing an upgrade or package install, and apt can’t remove packages due to broken dependencies, here is how you can manually find the old kernel packages and remove them via DPKG:
1. Run this command to check the current kernel and DON’T REMOVE it:
uname -r
2. List all kernels excluding the current booted:
dpkg -l | tail -n +6 | grep -E 'linux-image-[0-9]+' | grep -Fv $(uname -r)
Example output:
rc  linux-image-4.4.0-15-generic               4.4.0-15.31                                         amd64        Linux kernel image for version 4.4.0 on 64 bit x86 SMP
ii  linux-image-4.4.0-18-generic               4.4.0-18.34                                         amd64        Linux kernel image for version 4.4.0 on 64 bit x86 SMP
rc  linux-image-4.6.0-040600rc3-generic        4.6.0-040600rc3.201604120934                        amd64        Linux kernel image for version 4.6.0 on 64 bit x86 SMP
There are three possible statuses in the listed kernel images:
  • rc: means it has already been removed.
  • ii: means installed, eligible for removal.
  • iU: DON’T REMOVE. It means not installed, but queued for install in apt.
3. List all kernels that can be deleted:
dpkg -l linux-image-\* | grep ^ii
4. Remove old kernel images in status ii; it’s “linux-image-4.4.0-18-generic” in the example above:
sudo dpkg --purge linux-image-4.4.0-18-generic
If the command fails, remove the dependency packages that the output tells you via sudo dpkg --purge PACKAGE.
And also try to remove the respective header packages (don’t worry if the command fails):
sudo dpkg --purge linux-headers-4.4.0-18-generic linux-headers-4.4.0-18
5. Finally you may fix the apt broken dependency via command:
sudo apt -f install
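Once the dependencies are fixed, any remaining old kernels can usually be cleaned up in one go (a convenience step, assuming apt is healthy again):
sudo apt autoremove --purge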

Wednesday, August 23, 2017

How to fix DRBD recovery from split brain

Step 1: Start drbd manually on both nodes
Step 2: Define one node as secondary and discard its data:
drbdadm secondary all
drbdadm disconnect all
drbdadm -- --discard-my-data connect all
Step 3: Define the other node as primary and connect:
drbdadm primary all
drbdadm disconnect all
drbdadm connect all
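Once reconnected, the resync can be watched on either node (drbd8 tools, as used elsewhere on this blog):
watch -n1 cat /proc/drbd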

Tuesday, August 22, 2017

Change MySQL Data Directory

  1. Stop MySQL using the following command:
    sudo /etc/init.d/mysql stop
  2. Copy the existing data directory (default located in /var/lib/mysql) using the following command:
    sudo cp -R -p /var/lib/mysql /newpath
  3. Edit the MySQL configuration file with the following command:
    sudo gedit /etc/mysql/my.cnf   # or perhaps /etc/mysql/mysql.conf.d/mysqld.cnf
  4. Look for the entry for datadir, and change the path (which should be /var/lib/mysql) to the new data directory.
  5. In the terminal, enter the command:
    sudo gedit /etc/apparmor.d/usr.sbin.mysqld
  6. Look for lines beginning with /var/lib/mysql. Change /var/lib/mysql in the lines with the new path.
  7. Save and close the file.
  8. Restart the AppArmor profiles with the command:
    sudo /etc/init.d/apparmor reload
  9. Restart MySQL with the command:
    sudo /etc/init.d/mysql restart
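A quick sanity check after the restart (assuming you can log in as the MySQL root user):
    mysql -u root -p -e 'SELECT @@datadir;'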

Friday, August 18, 2017

Configuration DRBD

1. Install DRBD: sudo apt install drbd8-utils
2. To configure drbd, on the first host edit /etc/drbd.conf:

global { usage-count no; }
common { syncer { rate 100M; } }
resource r0 {
        protocol C;
        startup {
                wfc-timeout  15;
                degr-wfc-timeout 60;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "secret";
        }
        on eoblas03 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.0.1:7788;
                meta-disk internal;
        }
        on eoblas05 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.0.2:7788;
                meta-disk internal;
        }
} 
 
3.  Now copy /etc/drbd.conf to the second host (eoblas05):
     scp /etc/drbd.conf eoblas05:~
4.  Now, using the drbdadm utility, initialize the meta data storage. On each server execute:
    sudo drbdadm create-md r0
5.  Next, on both hosts, start the drbd daemon:
    sudo systemctl start drbd.service 
 
6.  On eoblas03, or whichever host you wish to be the primary, enter the following:
    sudo drbdadm -- --overwrite-data-of-peer primary all 
 
7.  After executing the above command, the data will start syncing to the secondary host. To watch the progress, on eoblas05 enter the following:
    watch -n1 cat /proc/drbd
    To stop watching the output press Ctrl+C.
          
8.  Finally, add a filesystem to /dev/drbd0 and mount it:
    sudo mkfs.ext3 /dev/drbd0
    sudo mount /dev/drbd0 /srv 
 

Tuesday, August 15, 2017

High Availability with Heartbeat on Ubuntu

1. Make sure all servers/nodes can communicate
2. Install heartbeat: sudo apt-get install heartbeat
3. Create the ha.cf file on both servers:
    sudo vi /etc/ha.d/ha.cf
  
  keepalive 2
  warntime 5
  deadtime 15
  initdead 90
  udpport 694
  auto_failback on
  ucast eth0 172.16.10.135 # on node2, change this to node1's IP
  logfile /var/log/ha-log
  node eoblas05
  node eoblas03 


4. Create authkeys File

The authorization key is used to allow cluster members to join a cluster. We can simply generate a random key for this purpose.

On the primary node, run these commands to generate a suitable authorization key in an environment variable named AUTH_KEY:

if [ -z "${AUTH_KEY}" ]; then
  export AUTH_KEY="$(command dd if='/dev/urandom' bs=512 count=1 2>'/dev/null' \
      | command openssl sha1 \
      | command cut --delimiter=' ' --fields=2)"
fi

Then write the /etc/ha.d/authkeys file with these commands:

sudo bash -c "{
  echo auth 1
  echo 1 sha1 $AUTH_KEY
} > /etc/ha.d/authkeys"

Check the contents of the authkeys file like this:

    sudo cat /etc/ha.d/authkeys

It should look something like this (with a different authorization key):

/etc/ha.d/authkeys example:
auth 1
1 sha1 d1e6557e2fcb30ff8d4d3ae65b50345fa46a2faa

Ensure that the file is only readable by root:

    sudo chmod 600 /etc/ha.d/authkeys

Now copy the /etc/ha.d/authkeys file from your primary node to your secondary node. You can do this manually, or with scp.

On the secondary server, be sure to set the permissions of the authkeys file:

    sudo chmod 600 /etc/ha.d/authkeys

Both servers should have an identical /etc/ha.d/authkeys file.


5. Create haresources File

The haresources file specifies preferred hosts paired with services that the cluster manages. The preferred host is the node that should run the associated service(s) if the node is available. If the preferred host is not available, i.e. it is not reachable by the cluster, one of the other nodes will take over. In other words, the secondary server will take over if the primary server goes down.

On both servers, open the haresources file in your favorite editor. We'll use vi:

    sudo vi /etc/ha.d/haresources


    eoblas05 IPaddr::172.16.10.35/24 apache2
  #eoblas05 IPaddr::172.16.10.35/24 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext4::noatime  mysql apache2  (example for DRBD and MySQL)

6. TESTING WITH APACHE WEB SERVER

Try to access the alias IP from a browser; you will see the text "This is node1". Stop the Heartbeat service on node1 and refresh the browser; you will see the text "This is node2" (all services handled by Heartbeat on node1 are taken over by node2). For failback, start the Heartbeat service on node1 again (all services handled by Heartbeat on node2 will be taken back by node1).
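The same test can be scripted from a shell (a sketch, assuming the alias IP 172.16.10.35 from the haresources example and a systemd-based Ubuntu):

curl http://172.16.10.35/        # answered by node1
sudo systemctl stop heartbeat    # run on node1
curl http://172.16.10.35/        # now answered by node2
sudo systemctl start heartbeat   # on node1; services fail back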

GlusterFS on Ubuntu Servers

1. Make sure all servers/nodes can communicate
2. Install the server (do this on both nodes): sudo apt-get install glusterfs-server
3. sudo gluster peer probe eoblasxx
   peer probe: success
 
4. This means that the peering was successful. We can check that the nodes are communicating at any time by typing:
   sudo gluster peer status

   Number of Peers: 1

   Hostname: eoblasxx
   Port: 24007
   Uuid: 7bcba506-3a7a-4c5e-94fa-1aaf83f5729b
   State: Peer in Cluster (Connected)
5. Create a storage volume: sudo gluster volume create Voltest replica 2 transport tcp eoblas03:/cluster-storage eoblas05:/cluster-storage force
6. sudo gluster volume start Voltest
   volume start: Voltest: success
7. Install Client > sudo apt-get install glusterfs-client
8. sudo mkdir /data
9. sudo mount -t glusterfs eoblas03:Voltest /data
10. Make it automount via /etc/fstab: eoblas03:/Voltest /data glusterfs defaults,nobootwait,_netdev,backupvolfile-server=172.16.10.135,fetch-attempts=10 0 2
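To confirm replication, create a few files on the client mount and check that they appear inside the brick directory on both servers (names as above):

sudo touch /data/test{1..5}
ls /cluster-storage    # run on both eoblas03 and eoblas05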


Tuesday, February 21, 2017

Exchange hub transport “Mail.que” file large in size

  1. Open the Exchange Management Shell and run: Get-TransportServer "HUB01" | fl
  2. Here, look for PipelineTracingEnabled. This should be set to False. If not, run
    Set-TransportServer HUB01 -PipelineTracingEnabled $False
  3. Now run "Get-TransportConfig" and ensure that
    MaxDumpsterSizePerStorageGroup is in MB and not GB

    MaxDumpsterTime : 7.00:00:00

    If not, run

    Set-TransportConfig -MaxDumpsterSizePerStorageGroup -MaxDumpsterTime
  4. Now run “Get-Queue” and take a look at the count of messages in HUB01
  5. Go to services.msc and pause the Microsoft Exchange Transport service
  6. Again, run “Get-Queue” and ensure all pending messages are “zeroed” out
  7. Once messages pending becomes zero, stop the Transport service
  8. Move the mail.que file and all others to a new folder in the same location
  9. Start the Transport service
  10. Take a look at the queue again
  11. You should see that messages would have started getting delivered
  12. Now you can backup or safely delete the old mail.que file

Friday, January 20, 2017

Verbose status on welcome screen

This enables verbose status messages in Windows Vista/7/2008:
  1. Open regedit
  2. "HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\
    CurrentVersion\\Policies\\System"
  3. Create DWORD VerboseStatus (or DWORD-32 for 64bit OS)
  4. Change Value to 1
  5. Reboot

Monday, December 19, 2016

Reset the CSC database

  1. Open up registry editor (WARNING: Only for Advanced Users)
  2. Browse to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Csc
  3. Add a new key (folder) called Parameters
  4. Under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Csc\Parameters, add a new DWORD called FormatDatabase and set its value to 1

Friday, November 25, 2016

Create a cron job on VMware ESXi

1. chmod +w /var/spool/cron/crontabs/root
2. vi /var/spool/cron/crontabs/root and add the cron job, for example:
     20 1 26 11 * shutdown -h now # time in UTC (7 hours behind WIB)
3. chmod -w /var/spool/cron/crontabs/root
4. Run the command "cat /var/run/crond.pid" # note the PID
5. Run the command "kill 12345" where "12345" should be replaced with the number output by the previous command
6. Restart cron: "/usr/lib/vmware/busybox/bin/busybox crond"

After a reboot the cron job will be removed/lost.
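A common workaround (a sketch; assumes the /etc/rc.local.d/local.sh boot script present on ESXi 5.x and later) is to re-create the entry at boot by adding these lines to /etc/rc.local.d/local.sh before the final exit:

/bin/echo '20 1 26 11 * shutdown -h now' >> /var/spool/cron/crontabs/root
/bin/kill $(cat /var/run/crond.pid)
/usr/lib/vmware/busybox/bin/busybox crond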

Tuesday, August 02, 2016

Force online file

  1. Locate and click the following registry subkey: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\NetCache
  2. Click Edit, point to New, and then click DWORD Value.
  3. Type SilentForcedAutoReconnect, and then press ENTER to name the value.
  4. Double-click SilentForcedAutoReconnect.
  5. In the Value data box, type 1, and then click OK.

Monday, May 16, 2016

Create new Swap file

A quick way of creating a swap file is by using the fallocate program. This command creates a file of a preallocated size instantly, without actually having to write dummy contents.
We can create a 4 Gigabyte file by typing:
sudo fallocate -l 4G /swapfile
The prompt will be returned to you almost immediately. We can verify that the correct amount of space was reserved by typing:
ls -lh /swapfile
-rw-r--r-- 1 root root 4.0G Apr 28 17:19 /swapfile
As you can see, our file is created with the correct amount of space set aside.
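If your filesystem does not support fallocate, dd can create the same file, just more slowly (an alternative, not part of the original walkthrough):
sudo dd if=/dev/zero of=/swapfile bs=1M count=4096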

Enabling the Swap File

Right now, our file is created, but our system does not know that this is supposed to be used for swap. We need to tell our system to format this file as swap and then enable it.
Before we do that though, we need to adjust the permissions on our file so that it isn't readable by anyone besides root. Allowing other users to read or write to this file would be a huge security risk. We can lock down the permissions by typing:
sudo chmod 600 /swapfile
Verify that the file has the correct permissions by typing:
ls -lh /swapfile
-rw------- 1 root root 4.0G Apr 28 17:19 /swapfile
As you can see, only the columns for the root user have the read and write flags enabled.
Now that our file is more secure, we can tell our system to set up the swap space by typing:
sudo mkswap /swapfile
Setting up swapspace version 1, size = 4194300 KiB
no label, UUID=e2f1e9cf-c0a9-4ed4-b8ab-714b8a7d6944
Our file is now ready to be used as a swap space. We can enable this by typing:
sudo swapon /swapfile
We can verify that the procedure was successful by checking whether our system reports swap space now:
sudo swapon -s
Filename                Type        Size    Used    Priority
/swapfile               file        4194300 0       -1
We have a new swap file here. We can use the free utility again to corroborate our findings:
free -m
             total       used       free     shared    buffers     cached
Mem:          3953        101       3851          0          5         30
-/+ buffers/cache:         66       3887
Swap:         4095          0       4095
Our swap has been set up successfully and our operating system will begin to use it as necessary.

Make the Swap File Permanent

We have our swap file enabled, but when we reboot, the server will not automatically enable the file. We can change that though by modifying the fstab file.
Edit the file with root privileges in your text editor:
sudo nano /etc/fstab
At the bottom of the file, you need to add a line that will tell the operating system to automatically use the file you created:
/swapfile   none    swap    sw    0   0
Save and close the file when you are finished.

Tuesday, September 15, 2015

Adding a Hard Drive to Solaris 10 (copied from http://utahsysadmin.com/2008/02/07/adding-a-hard-drive-to-solaris-10)

Here’s how you would add a hard drive to Solaris 10, including the format, fdisk, partition, and then creation of the file system. Of course, you first need to actually add the hard drive physically to the machine, I’m not going to cover that – if you don’t know how to do that then the rest of the information isn’t going to help!
If you installed a drive through VMWare while the VM is running, you will need Solaris to recognize the new drive. In this case, run devfsadm, otherwise boot your system and Solaris should recognize the new drive.
First, here’s the original drives (c0t0d0 & c1t0d0):
# ls /dev/rdsk/*s0
/dev/rdsk/c0t0d0s0 /dev/rdsk/c1t0d0s0
Have Solaris check for new hardware:
# devfsadm
Now you can see there is a new disk on another bus (c1t1d0):
# ls /dev/rdsk/*s0
/dev/rdsk/c0t0d0s0 /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t1d0s0
Next, we want to format the drive (which includes creating the partitions):
# format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@0,0/pci1000,30@10/sd@0,0
1. c1t1d0
/pci@0,0/pci1000,30@10/sd@1,0
Specify disk (enter its number):
Type “1”, the option for the new drive, and hit “enter”. Depending on the type of disk it may be preformatted:
selecting c1t1d0
[disk formatted]
If your drive is not formatted, type format at the format prompt to low level format your hard drive. Next, we need to use fdisk to create the partitions, type “y” to create the default Solaris partition:
format> fdisk
No fdisk table exists. The default partition for the disk is:
a 100% “SOLARIS System” partition
Type “y” to accept the default partition, otherwise type “n” to edit the
partition table.
y
Next enter the partition menu, by typing partition:
format> partition
You can print out the current partitioning first if you like:
partition> print
Current partition table (original):
Total disk cylinders available: 1020 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 – 1020 1.99GB (1021/0/0) 4182016
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 – 0 2.00MB (1/0/0) 4096
9 unassigned wm 0 0 (0/0/0) 0
In this case, I just want to create one large partition for some extra storage so I will allocate all I can to partition 0. Note that partition 2 is used to reference the entire drive and is not a usable partition. To modify a given partition, just enter the number of the partition at the partition prompt:
partition> 0
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 1
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: 1019c
And now to print the partition table again you can see what has changed:
partition> print
Current partition table (unnamed):
Total disk cylinders available: 1020 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 1 – 1019 1.99GB (1019/0/0) 4173824
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 – 1020 1.99GB (1021/0/0) 4182016
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 – 0 2.00MB (1/0/0) 4096
9 unassigned wm 0 0 (0/0/0) 0
Save your changes by writing the label to the disk:
partition> label
Ready to label disk, continue? y
Quit out of the partition prompt, and then the format prompt, which takes you back to the command prompt:
partition> quit
format> quit
#
Now we are ready to create a file system on this new partition (in this case UFS).
# newfs /dev/rdsk/c1t1d0s0
newfs: construct a new file system /dev/rdsk/c1t1d0s0: (y/n)? y
/dev/rdsk/c1t1d0s0: 4173824 sectors in 1019 cylinders of 128 tracks, 32 sectors
2038.0MB in 45 cyl groups (23 c/g, 46.00MB/g, 11264 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 94272, 188512, 282752, 376992, 471232, 565472, 659712, 753952, 848192,
3298432, 3392672, 3486912, 3581152, 3675392, 3769632, 3863872, 3958112,
4052352, 4146592
Make sure that the file system is clean:
# fsck /dev/rdsk/c1t1d0s0
** /dev/rdsk/c1t1d0s0
** Last Mounted on
** Phase 1 – Check Blocks and Sizes
** Phase 2 – Check Pathnames
** Phase 3a – Check Connectivity
** Phase 3b – Verify Shadows/ACLs
** Phase 4 – Check Reference Counts
** Phase 5 – Check Cylinder Groups
2 files, 9 used, 2020758 free (14 frags, 252593 blocks, 0.0% fragmentation)
Next, add the proper line to /etc/vfstab:
/dev/dsk/c1t1d0s0 /dev/rdsk/c1t1d0s0 /data ufs 2 yes -
And then mount the partition. In this case, I’m making a /data partition:
# mkdir /data
# mount /data
# df -h /data
Filesystem size used avail capacity Mounted on
/dev/dsk/c1t1d0s0 1.9G 2.0M 1.9G 1% /data

Tuesday, August 04, 2015

Change Harddisk on Poweredge3310 with Sun Solaris Running

1. Check with cfgadm -al
    Find the AP-ID label of the disk that will be replaced
2. Unconfigure the hard disk that will be replaced:
#cfgadm -c unconfigure c1::dsk/c1t1d0
3. Verify that the disk is now unconfigured:
#cfgadm -al
4. Replace the disk
5. Connect the new drive:
#cfgadm -c configure c1::dsk/c1t1d0
6. Format and label the disk:
#format > type > label
7. Run metastat, then check which disk needs maintenance
8. Run metareplace -e d8 c1t1d0s0 (the disk that needs maintenance)
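The resync progress can then be watched with metastat (d8 being the metadevice from the example above):
#metastat d8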

done

Monday, June 01, 2015

OpenKM 5.1.9 with PostgreSQL 9.3.7

Steps:
1. Ubuntu 14.04
2. PostgreSQL 9.3.7: sudo apt-get install postgresql-9.3
3. Install the JDBC driver: sudo apt-get install libpostgresql-jdbc-java
4. Install OpenKM 5.1.9
5. Copy the file postgresql-jdbc4-9.2.jar: cp /usr/share/java/postgresql-jdbc4-9.2.jar $JBOSS_HOME/server/default/lib/
6. Follow this link: http://wiki.openkm.com/index.php/PostgreSQL_-_OpenKM_5.0
Note: don't forget to create the database
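For that last note, a minimal sketch (the openkm user, okmdb name, and 'secret' password are placeholders, not from the wiki):
sudo -u postgres psql -c "CREATE USER openkm WITH PASSWORD 'secret';"
sudo -u postgres psql -c "CREATE DATABASE okmdb WITH OWNER openkm;"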

Thursday, March 19, 2015

Remote Desktop Licensing Error

Today I got a call from a client who couldn’t connect to his Remote Desktop Service from his Windows 7 client.
He would get the following error: The remote computer disconnected the session because of an error in the licensing protocol. Please try connecting to the remote computer again or contact your server administrator.
Since I was sure the licensing was in order, I searched for other possible problems.
The cause of the problem was in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSLicensing.
So here is one possible solution if you are getting this error when trying to connect to a Terminal Server (be sure to back up your registry before you do this, or export the MSLicensing keys, in case this isn’t the cause of your problem):
  1. Open registry editor (Start > Run > regedit)
  2. Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ and find MSLicensing folder
  3. Right click on it and select delete
  4. Start the Remote Desktop Client (Start > Run > mstsc) as local administrator to rebuild the deleted keys