Archive for the ‘Linux’ Category

Error: Cannot retrieve repository metadata (repomd.xml) for repository: epel. Please verify its path and try again

Wednesday, April 18th, 2018
yum --disablerepo="epel" update nss

The old nss library can't connect to the Fedora mirrors via curl.
Updating the nss library solves the issue.

How to create bonding with a bridge network (Linux KVM)

Thursday, November 21st, 2013

Package needed :

rpm -qa | grep bridge-utils
bridge-utils-1.2-10.el6.x86_64

Create the bonding interface:

cat /etc/sysconfig/network-scripts/ifcfg-bond0
 
DEVICE=bond0
ONBOOT=yes
BONDING_OPTS='mode=1 miimon=100'
BRIDGE=br0

Create the bridge network (/etc/sysconfig/network-scripts/ifcfg-br0):

DEVICE=br0
TYPE=Bridge
IPADDR=192.168.0.50
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=static
NM_CONTROLLED=no
DELAY=0

Configure eth0 (/etc/sysconfig/network-scripts/ifcfg-eth0):

DEVICE=eth0
HWADDR=00:1D:09:66:8A:7A
TYPE=Ethernet
UUID=5e76d7f6-7526-4b6e-baf3-cde82362a914
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
USERCTL=no
SLAVE=yes
MASTER=bond0

Configure eth1 (/etc/sysconfig/network-scripts/ifcfg-eth1):

DEVICE=eth1
HWADDR=00:1D:09:66:8A:7C
TYPE=Ethernet
UUID=ef4ef437-73c7-4c42-8552-6777a789c5a6
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
USERCTL=no
SLAVE=yes
MASTER=bond0

Note: please make sure that both eth0 and eth1 have HWADDR defined in their respective files; it is needed for virtualization and bridging.

Further reading: Network Bridge

Troubleshooting: Client not getting IP from dhcp server

Saturday, September 28th, 2013

(1) If your dhcp server is running fine and it has enough free IPs to lease, try the command below:

dhclient eth0

apache puppet module (example)

Monday, April 29th, 2013

(a)
init.pp (/etc/puppet/modules/apache/manifests)

class apache {
 
	package {
			'apache2':
			ensure => installed
 
		}
 
	package {
			'libapache2-mod-python':
			ensure => installed,
			notify =>  Exec["reload-apache2"],
			require => Package["apache2"],
 
		}
 
	service { "apache2":
		ensure => running,
		hasstatus => true,
		hasrestart => true,
		require => Package["apache2"],
 
                }
 
	file { "/etc/apache2/sites-available/debian.fosiul.lan":
 
		ensure => present,
		source => "puppet://$servername/modules/apache/scripts/debian.fosiul.lan",
		owner => root,
                group => root,
		replace => true,
		force =>true
 
		}
 
	file { "/etc/apache2/sites-available/web1.fosiul.lan":
 
                ensure => present,
                source => "puppet://$servername/modules/apache/scripts/web1.fosiul.lan",
                owner => root,
                group => root,
                replace => true,
                force =>true
 
                }
 
	define module ( $ensure = 'present', $require = 'apache2' ) {
		# Note: this assumes the stock Debian module path /etc/apache2/mods-enabled
		case $ensure {
			'present': {
				exec { "/usr/sbin/a2enmod $name":
					unless  => "/bin/readlink -e /etc/apache2/mods-enabled/${name}.load",
					notify  => Exec["reload-apache2"],
					require => Package[$require],
				}
			}
			'absent': {
				exec { "/usr/sbin/a2dismod $name":
					onlyif  => "/bin/readlink -e /etc/apache2/mods-enabled/${name}.load",
					notify  => Exec["reload-apache2"],
					require => Package["apache2"],
				}
			}
			default: { err("Unknown ensure value: '$ensure'") }
		}
	}
 
	exec {
		"reload-apache2":
			command => "/etc/init.d/apache2 reload",
			refreshonly =>true,
 
		}
 
}

Ref : http://projects.puppetlabs.com/projects/1/wiki/Debian_Apache2_Recipe_Patterns

(b) Create related file under /etc/puppet/modules/apache/files/scripts

boot a system from command line when grub.cfg file is missing (debian)

Monday, April 29th, 2013

(a) If you know where grub.cfg is located, you can load it directly:

configfile /boot/grub/grub.cfg or configfile (hdX,Y)/boot/grub/grub.cfg

(b) If you don't know where grub.cfg is, boot manually with the steps below:

(1) set root='(hd0,msdos1)'
(2) linux /vmlinuz root=/dev/sda1
(3) initrd /initrd.img
(4) boot

miscellaneous date output (bash)

Monday, April 22nd, 2013

(1)

DATE=`/bin/date --utc "+%Y%m%d%H%M%S"`
echo $DATE

Output : 20130422134138
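The same date invocation accepts other format strings; a small sketch (the variable names and file name are just examples):

```shell
# UTC timestamp suitable for file names, as above
STAMP=$(/bin/date --utc "+%Y%m%d%H%M%S")

# Other commonly useful formats
ISO_DAY=$(date --utc "+%Y-%m-%d")   # e.g. 2013-04-22
EPOCH=$(date --utc "+%s")           # seconds since the Unix epoch

# Typical use: embed the stamp in a backup file name
echo "backup-${STAMP}.tar.gz"
```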

git basic commands

Saturday, June 23rd, 2012

(1) Set the Identity

$ git config --global user.name "John Doe"
$ git config --global user.email johndoe@example.com

(2) Set Editor

git config --global core.editor nano

(3) Check Your settings

git config --list

(4) Initializing a Repository in an Existing Directory

git init

(5) Add Files into Existing repository

git add testfile.txt
git commit -m "Initial Commit"
(6) Clone existing repository
git clone git://github.com/schacon/grit.git
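Commands (1) to (5) can be rehearsed end to end in a throwaway directory; a quick sketch (the paths and names are examples, and the identity is set locally rather than with --global so your real settings stay untouched):

```shell
set -e
REPO=$(mktemp -d)                  # throwaway repository
cd "$REPO"
git init -q .
git config user.name "John Doe"    # local to this repo, not --global
git config user.email johndoe@example.com
echo "hello" > testfile.txt
git add testfile.txt
git commit -q -m "Initial Commit"
git log --oneline                  # shows the single commit
```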

How to install spacewalk in Server and client

Sunday, June 17th, 2012

Installing Spacewalk in Server:
Ref: https://fedorahosted.org/spacewalk/wiki/HowToInstall

Installing Spacewalk in client(Registering client with spacewalk server):
Ref: https://fedorahosted.org/spacewalk/wiki/RegisteringClients

(1) Install the repo below:

# rpm -Uvh http://spacewalk.redhat.com/yum/1.7/RHEL/5/i386/spacewalk-client-repo-1.7-5.el5.noarch.rpm
also
 
# BASEARCH=$(uname -i)
# rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/$BASEARCH/epel-release-5-4.noarch.rpm

(2) Install the RPMs below:

yum install rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin

(3) Register the client with server

rhnreg_ks --serverUrl=http://YourSpacewalk.example.org/XMLRPC --activationkey=<key-with-fedora-custom-channel>

Invalid command ‘WSGIScriptAliasMatch’, perhaps misspelled or defined by a module not included in the server configuration (Spacewalk)

Friday, June 15th, 2012

You can enable this module by editing /etc/httpd/conf.d/wsgi.conf and un-commenting the “LoadModule wsgi_module modules/mod_wsgi.so” line.

Turn off FSCK at booting time

Friday, June 15th, 2012
$sudo tune2fs -c -1 `mount | awk '$3 == "/" {print $1}'`
or
$sudo tune2fs -c -1 /dev/yourhdd

or
set the last (fsck pass) field of the root entry in /etc/fstab to 0:
/dev/sda1 / ext4 defaults 1 0

Github and Git commands

Wednesday, June 6th, 2012

1) How to clone a git repo onto a new computer
Ref:https://help.github.com/articles/fork-a-repo

git clone git@github.com:username/Spoon-Knife.git

2) push code from cloned repo

git push origin master


Protected: Bash script learning(essential notes)

Monday, April 2nd, 2012


How to add a cronjob under the apache user

Thursday, March 22nd, 2012

If you want to set up a cron job under a different user, such as the one Apache runs as, first find out who owns the Apache process:

ps aux | grep apache

In my case the owner of the Apache process is “daemon”,
so now create a cron job under the “daemon” user:

crontab -u daemon -e

Now insert any cron job

* * * * *  cd /usr/local/apache/htdocs/website; /usr/local/bin/php webpage.php  > /dev/null 2>&1
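An alternative on most distributions is a fragment in /etc/cron.d, where each line carries an explicit user field instead of needing crontab -u (the filename below is hypothetical; the paths are the ones from the example above):

```
# /etc/cron.d/webpage-job (hypothetical filename)
# min hour dom mon dow  user    command
*    *    *   *   *     daemon  cd /usr/local/apache/htdocs/website && /usr/local/bin/php webpage.php > /dev/null 2>&1
```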

How to install snmp in centos/debian

Wednesday, December 21st, 2011

In centos

yum install net-snmp-utils

In debian

apt-get install snmpd

Take a backup of the original configuration file and create a new one (the net-snmp daemon reads /etc/snmp/snmpd.conf):

cd /etc/snmp
mv snmpd.conf snmpd.conf.bk
mcedit snmpd.conf

Create a new config file from scratch

agentAddress udp:192.0.0.xxx:161
rocommunity  public 192.0.0.0/24
syslocation  "MysqlServer, unit1"

Now restart the snmpd daemon. The command is the same on CentOS and Debian:

/etc/init.d/snmpd restart

Check if snmp server is running or not (From the server itself)

 pgrep snmpd
19946
snmpwalk -v1 -cpublic 192.0.0.ip-of-server

Centos:Yum behind a proxy

Wednesday, November 2nd, 2011

If your servers are behind a proxy that requires a username and password, add the lines below to the [main] section of /etc/yum.conf (note that yum uses the proxy option, not the http_proxy environment variable):

proxy=http://proxyaddress:port/
proxy_username=username
proxy_password=password

If the proxy does not require a username and password, the proxy line alone is enough:

proxy=http://proxyserveraddress:port/

For tools that read the environment instead, also add the lines below to your .bashrc file:

http_proxy="http://proxyserveraddress:3128"
export http_proxy

Centos:How to add newly created logical volume into fstab

Wednesday, November 2nd, 2011

When you create a logical volume, you need to add it to the /etc/fstab file so it stays mounted when the server reboots.
Suppose you have created a logical volume like the one below:

 
 lvdisplay
  --- Logical volume ---
  LV Name                /dev/POSREP-DB/DB
  VG Name                POSREP-DB
  LV UUID                0IEKZw-tEoI-jJWt-OGXT-F0B7-hEic-hCbteW
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.17 GB
  Current LE             300
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

Now set a filesystem label for this volume with the following command:

# e2label /dev/POSREP-DB/DB  DB

Now add this one into /etc/fstab

LABEL=DB                /DB                     ext3    defaults        0 1

Finally, create the mount point:

mkdir /DB

Now if you reboot the server, the logical volume will be mounted automatically on /DB.
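For reference, the six whitespace-separated fields of an fstab line can be pulled apart with awk; a quick sketch using the entry above:

```shell
LINE='LABEL=DB /DB ext3 defaults 0 1'
echo "$LINE" | awk '{ printf "device=%s mountpoint=%s fstype=%s options=%s dump=%s fsck_pass=%s\n", $1, $2, $3, $4, $5, $6 }'
```

The final field is the fsck pass number; 1 is normally reserved for the root filesystem, so 2 is also a common choice for additional volumes.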

Linux: how to clone a hard drive over the network

Wednesday, August 24th, 2011

Purpose:
I want to clone the hard drive /dev/sda over the network.
Server A will receive the cloned data and Server B will send it.

The disk layout of Server B is:

 fdisk -l
 
Disk /dev/sda: 20.0 GB, 20020396032 bytes
255 heads, 63 sectors/track, 2434 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00083b1c
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        2329    18705408   83  Linux
/dev/sda2            2329        2434      842753    5  Extended
/dev/sda5            2329        2434      842752   82  Linux swap / Solaris

Step 1:
Receiving server (Server A):

nc -l -p 1234 | dd of=/dev/sda

Step 2:
Sending server (Server B):

dd if=/dev/sda | nc 192.168.1.220 1234

It will start to clone the hard drive.

It can take 3 to 4 hours, depending on the drive size.
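The pipeline can be rehearsed locally with ordinary files before pointing it at real devices; a sketch (the file names are examples, and a plain dd stands in for the nc transport):

```shell
set -e
SRC=$(mktemp)                        # stands in for /dev/sda on Server B
DST=$(mktemp)                        # stands in for /dev/sda on Server A
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null
dd if="$SRC" 2>/dev/null | dd of="$DST" 2>/dev/null
cmp "$SRC" "$DST" && echo "clone verified"
```

On the real transfer, comparing checksums of both disks afterwards (for example with md5sum) is a cheap way to confirm the clone.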

How to install apache2 php mysql in debian

Thursday, August 18th, 2011

Install apache2 and php modules

apt-get install apache2 php5 libapache2-mod-php5 php5-mysql

Install mysql server

apt-get install mysql-server

Restart apache2

/etc/init.d/apache2 restart

How to allow root to login in debian desktop

Wednesday, August 17th, 2011

(a) edit gdm3 file

nano /etc/pam.d/gdm3

(b) comment out the line below:

auth   required        pam_succeed_if.so user != root quiet_success
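After the change, the line in /etc/pam.d/gdm3 should look like this:

```
# commented out to allow root logins
#auth   required        pam_succeed_if.so user != root quiet_success
```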

Linux: how to setup openvpn in centos or debian

Sunday, May 1st, 2011

In Debian

apt-get install openvpn

In Centos

yum install openvpn

Create certificates in Debian

(a) The default directory for easy-rsa certificates is "/usr/share/doc/openvpn/examples/easy-rsa/2.0/". Now copy that directory into /etc/openvpn:
 
#cp -R /usr/share/doc/openvpn/examples/easy-rsa/ /etc/openvpn/
# cd /etc/openvpn/easy-rsa/2.0/
 
(b) Now we will create the certificate for the CA
 
#. ./vars
 
#./clean-all
 
#./build-ca
 
(c) Then we will create the certificate for the server
 
#./build-key-server server
 
(d) Then we will create the certificate for the client
 
#./build-key client
 
(e) We will build the Diffie-Hellman parameters
 
#./build-dh
 
(f) Now all the keys should be created in the keys/ subdirectory
 
#cd /usr/share/doc/openvpn/examples/easy-rsa/2.0/keys/
 
#ls -al
ca.key ca.crt server.key server.csr server.crt client.key client.crt client.csr

Note :
Now we have the keys and certificates, so we will hand them out to the clients that want to connect to the OpenVPN server. Just be sure that:

ca.key-> only,must be in CA Server

client.crt-> only,must be in Client

client.key-> only,must be in Client

server.crt-> only,must be in OPENVPN Server

server.key-> only,must be in OPENVPN Server

ca.crt-> must be in CA Server and all of the clients.

OpenVPN server file configuration (in Debian):

(a) create a file in /etc/openvpn/server.conf
#vim /etc/openvpn/server.conf

and paste the following:

port 1194
proto udp
dev tun
ca /etc/openvpn/keys/ca.crt
cert /etc/openvpn/keys/server.crt
key /etc/openvpn/keys/server.key
dh /etc/openvpn/keys/dh1024.pem
 
#Note:
#(it should be a network that you don't currently use)
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
#Note
#(whatever the network is that you want the VPN client to connect to)
push "route 192.168.2.0 255.255.255.0"
#push "redirect-gateway def1"
push "dhcp-option DNS 192.168.2.1"
client-to-client
keepalive 10 120
comp-lzo
persist-key
persist-tun
status openvpn-status.log
log /var/log/openvpn.log
log-append /var/log/openvpn.log
verb 3

Now Restart the openvpn server

/etc/init.d/openvpn restart

Make sure your firewall forwards port 1194 to your OpenVPN server.
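For completeness, each client needs its own config to match the server above; a minimal client.conf sketch (the server address is an example, and the key paths must match wherever you copied the client files):

```
# /etc/openvpn/client.conf (hypothetical)
client
dev tun
proto udp
remote vpn.example.com 1194    # your OpenVPN server's address
resolv-retry infinite
nobind
ca /etc/openvpn/keys/ca.crt
cert /etc/openvpn/keys/client.crt
key /etc/openvpn/keys/client.key
comp-lzo                       # must match the server
persist-key
persist-tun
verb 3
```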

Linux:Iptables rules for different services

Sunday, March 20th, 2011

Below are the fixed ports to configure for an NFS server:

 vi /etc/sysconfig/nfs
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
RQUOTAD_PORT=875
STATD_PORT=662
STATD_OUTGOING_PORT=2020

Now restart the services

# service portmap restart
# service nfs restart
# service rpcsvcgssd restart

Now add rules into iptables

-A RH-Firewall-1-INPUT -s 192.168.2.0/24 -p udp -m udp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.2.0/24 -p tcp -m tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.2.0/24 -p tcp -m tcp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.2.0/24 -p tcp -m tcp --dport 32803 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.2.0/24 -p udp -m udp --dport 32769 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.2.0/24 -p tcp -m tcp --dport 892 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.2.0/24 -p udp -m udp --dport 892 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.2.0/24 -p tcp -m tcp --dport 875 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.2.0/24 -p udp -m udp --dport 875 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.2.0/24 -p tcp -m tcp --dport 662 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.2.0/24 -p udp -m udp --dport 662 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

Selinux commands for services

Saturday, March 5th, 2011

(a) SELinux requirements for NIS clients

setsebool -P allow_ypbind=1 ypbind_disable_trans=1 yppasswdd_disable_trans=1

Use getsebool command to verify :

getsebool allow_ypbind ypbind_disable_trans yppasswdd_disable_trans

allow_ypbind --> on
ypbind_disable_trans --> on
yppasswdd_disable_trans --> on

(b) SELinux for vsftpd

getsebool -a | grep ftp
allow_ftpd_anon_write --> off
allow_ftpd_full_access --> off
allow_ftpd_use_cifs --> off
allow_ftpd_use_nfs --> off
ftp_home_dir --> off
httpd_enable_ftp_server --> off
tftp_anon_write --> off

Allow users to read and write their own home directories:

setsebool -P ftp_home_dir 1

(c) Selinux for Samba Share

If you want to share /data via samba

chcon -R -t samba_share_t /data

If you want to share home directory

setsebool -P samba_enable_home_dirs 1

Linux: Mutt(How to attach file from command line)

Wednesday, January 5th, 2011

If you want to attach a file with mutt from the command line:

 echo "Body of email" | mutt -a attach.txt -s "subject" user@gmail.com

-a : provide the full path to the attachment.

Linux:How to exclude packages from yum update

Tuesday, January 4th, 2011

If you want to exclude packages from yum update, use the --exclude option as below:

 yum update --exclude=openssl,openssl-devel,bind,bind-chroot,bind-utils,bind-libs

Or

 yum update --exclude=openssl --exclude=openssl-devel --exclude=bind --exclude=bind-chroot --exclude=bind-utils --exclude=bind-libs
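To make the exclusion permanent rather than typing it on every update, the same list can go in the [main] section of /etc/yum.conf; globs are accepted:

```
[main]
exclude=openssl* bind*
```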

my apache keeps crashing

Tuesday, January 4th, 2011

Hi,
my Apache keeps crashing.

Linux:How to compile php with mysqli support

Thursday, December 23rd, 2010

Installing PHP from source with mysqli support can be troublesome; most of the time it throws the error below:

configure: error: Cannot find libmysqlclient under /usr.

If you see this kind of error, find out where libmysqlclient is on your server by typing:

locate libmysqlclient

you might see output like this :

/usr/lib64/mysql/libmysqlclient.a
/usr/lib64/mysql/libmysqlclient.la
/usr/lib64/mysql/libmysqlclient_r.a
/usr/lib64/mysql/libmysqlclient_r.la

The resolution is to tell PHP where your lib directory is.
On a 64-bit server it is /usr/lib64 (--with-libdir is relative to the /usr prefix, hence /lib64), so configure PHP as below:

 ./configure --with-apxs2=/usr/local/apache/bin/apxs --with-mysql=/var/lib/mysql --with-libdir=/lib64 --with-mysqli --enable-mbstring --with-gd --with-zlib --with-jpeg-dir --with-png-dir --with-openssl --with-curl --with-mcrypt --with-imap --with-kerberos --with-imap-ssl

Hope this will help.

Linux: How to configure sendmail to receive email (Basic Steps)

Monday, December 13th, 2010

Ref:http://www.sendmail.org/tips/virtualHosting

(a) Edit /etc/mail/sendmail.mc and modify the lines below. This allows sendmail to receive email from outside localhost.

DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
to
 
DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl

(b) Edit /etc/mail/virtusertable; this maps virtual addresses to real addresses:

joe@yourdomain.com jschmoe

Here sendmail will receive the email, and any email coming to the address joe@yourdomain.com will be delivered to jschmoe’s inbox.

(c) Edit /etc/mail/local-host-names and insert the domain names. This gives Sendmail the list of domains for which it may accept mail:

fosiul.com
domain1.com
domain2.com

(d) Now restart sendmail

service sendmail restart

(e) Open port 25 in your firewall or iptables.

Linux:How to configure centralized yum repo server (Centos)

Friday, November 26th, 2010

A local yum repository is used on a local network to make sure all your servers have the same RPMs, for benchmarking and patching purposes. It also saves bandwidth, because all the RPMs are stored on one server (the central repo server) and the rest of the servers install them from there, so they don’t have to download from a public server.

To create the central repo server you will need an Apache server.

In our organization I created the yum server directory under (this one is for 64-bit servers):
/usr/local/apache/htdocs/install/centos64/
But you can choose any directory.

    Building the Base Repository:

Step 1: Copy all content from the CD/DVD to the repository directory

Copy all the files and directories from the CentOS 5.5 DVD or CDs into /usr/local/apache/htdocs/install/centos64/,
so your directory should look like below:

[root@controlserver1 centos64]# ls
CentOS                 RELEASE-NOTES-de.html     RELEASE-NOTES-nl
EULA                   RELEASE-NOTES-en          RELEASE-NOTES-nl.html
GPL                    RELEASE-NOTES-en.html     RELEASE-NOTES-pt_BR
images                 RELEASE-NOTES-en_US       RELEASE-NOTES-pt_BR.html
isolinux               RELEASE-NOTES-en_US.html  RELEASE-NOTES-ro
kicks                  RELEASE-NOTES-es          RELEASE-NOTES-ro.html
ks.cfg                 RELEASE-NOTES-es.html     repodata
NOTES                  RELEASE-NOTES-fr          RPM-GPG-KEY-beta
RELEASE-NOTES-cs       RELEASE-NOTES-fr.html     RPM-GPG-KEY-CentOS-5
RELEASE-NOTES-cs.html  RELEASE-NOTES-ja          TRANS.TBL
RELEASE-NOTES-de       RELEASE-NOTES-ja.html

As you can see, the CentOS directory holds all the RPMs, so I decided to make the CentOS directory my centralized yum directory.

For the centralized yum repository I need to create RPM metadata for the base repository, so execute the command below:

Step 2: Create the base repository headers

createrepo /usr/local/apache/htdocs/install/centos64/CentOS

The above command creates a repodata directory under the CentOS directory;
it should look like below:

[root@controlserver1 CentOS]# cd repodata/
[root@controlserver1 repodata]# pwd
/usr/local/apache/htdocs/install/centos64/CentOS/repodata
[root@controlserver1 repodata]# ls -al
total 14252
drwxr-xr-x 2 root root    4096 Nov 26 15:20 .
drwxr-xr-x 3 root root  221184 Nov 26 15:20 ..
-rw-r--r-- 1 root root 3373682 Nov 26 15:20 filelists.xml.gz
-rw-r--r-- 1 root root 9813890 Nov 26 15:20 other.xml.gz
-rw-r--r-- 1 root root 1144150 Nov 26 15:20 primary.xml.gz
-rw-r--r-- 1 root root     951 Nov 26 15:20 repomd.xml
[root@controlserver1 repodata]#

Building repository for updating yum packages

Step 3: Create a directory called updates

[root@controlserver1 centos64]# pwd
/usr/local/apache/htdocs/install/centos64
[root@controlserver1 centos64]# mkdir updates

So it should be like this

[root@controlserver1 centos64]# pwd
/usr/local/apache/htdocs/install/centos64
[root@controlserver1 centos64]# ls
CentOS                 RELEASE-NOTES-de.html     RELEASE-NOTES-nl
EULA                   RELEASE-NOTES-en          RELEASE-NOTES-nl.html
GPL                    RELEASE-NOTES-en.html     RELEASE-NOTES-pt_BR
images                 RELEASE-NOTES-en_US       RELEASE-NOTES-pt_BR.html
isolinux               RELEASE-NOTES-en_US.html  RELEASE-NOTES-ro
kicks                  RELEASE-NOTES-es          RELEASE-NOTES-ro.html
ks.cfg                 RELEASE-NOTES-es.html     repodata
NOTES                  RELEASE-NOTES-fr          RPM-GPG-KEY-beta
RELEASE-NOTES-cs       RELEASE-NOTES-fr.html     RPM-GPG-KEY-CentOS-5
RELEASE-NOTES-cs.html  RELEASE-NOTES-ja          TRANS.TBL
RELEASE-NOTES-de       RELEASE-NOTES-ja.html     updates

Step 4: Select an rsync mirror to sync from
Select any mirror from here:
http://www.centos.org/modules/tinycontent/index.php?id=31

Step 5 : Rsync the updates-released repository

 rsync -avrt rsync://rsync.mirrorservice.org/mirror.centos.org/5.5/updates/x86_64/RPMS/ --exclude=debug/ /usr/local/apache/htdocs/install/centos64/updates/

It will download all the RPMs from the selected mirror into my updates directory.

Step 6: Rsync the repodata

Go into the updates directory and download all the contents of repodata:

[root@controlserver1 updates]# pwd
/usr/local/apache/htdocs/install/centos64/updates
[root@controlserver1 updates]#
 
rsync -avrt rsync://rsync.mirrorservice.org/mirror.centos.org/5.5/updates/x86_64/repodata --exclude=debug/ /usr/local/apache/htdocs/install/centos64/updates/
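To keep the local mirror current, the two rsync commands from steps 5 and 6 can run nightly from cron; a sketch (the schedule and the cron.d filename are examples):

```
# /etc/cron.d/centos-mirror (hypothetical filename)
30 2 * * *  root  rsync -avrt rsync://rsync.mirrorservice.org/mirror.centos.org/5.5/updates/x86_64/RPMS/ --exclude=debug/ /usr/local/apache/htdocs/install/centos64/updates/
45 2 * * *  root  rsync -avrt rsync://rsync.mirrorservice.org/mirror.centos.org/5.5/updates/x86_64/repodata --exclude=debug/ /usr/local/apache/htdocs/install/centos64/updates/
```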

Step 7: Create a repo file

Create a repo file under your /etc/yum.repos.d directory.

[root@mysqlcluster2 yum.repos.d]# pwd
/etc/yum.repos.d
[root@mysqlcluster2 yum.repos.d]# ls
CentOS-Base.repo CentOS-Media.repo local.repo
[root@mysqlcluster2 yum.repos.d]

And disable the other repos by inserting enabled=0. Example:
[centosplus]
name=CentOS-$releasever - Plus
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
#baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5

Insert the lines below into the local.repo file:

[base-local]
name=Centos $releasever - $basearch
failovermethod=priority
baseurl=http://10.0.0.55/centos64/CentOS/
enabled=1
gpgcheck=0
 
[updates-local]
name=Centos $releasever - $basearch - Updates
failovermethod=priority
baseurl=http://10.0.0.55/centos64/updates/
enabled=1
gpgcheck=0

Now try the yum commands:

 yum clean all
Loaded plugins: fastestmirror
Cleaning up Everything
Cleaning up list of fastest mirrors
[root@mysqlcluster2 /]# yum update
Loaded plugins: fastestmirror
Determining fastest mirrors
base-local                                               |  951 B     00:00
base-local/primary                                       | 1.1 MB     00:00
base-local                                                            3186/3186
updates-local                                            | 1.9 kB     00:00
updates-local/primary_db                                 | 1.0 MB     00:00
Setting up Update Process

The centralized local repository is done!

Apache 2: How to turn off directory listings

Monday, November 22nd, 2010

Directory listings can be a security threat.

By default Apache has the line below:

Options Indexes FollowSymLinks

Delete Indexes from that line, so it becomes:

Options FollowSymLinks

Now restart the Apache daemon.

This stops Apache from showing directory listings.
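An alternative to deleting the keyword is to subtract it explicitly in the relevant Directory block, which makes the intent clearer (the path below is an example):

```apache
<Directory /var/www/html>
    Options -Indexes +FollowSymLinks
</Directory>
```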

Linux:How to install vncserver

Monday, November 15th, 2010

Ref: http://wiki.centos.org/HowTos/VNC-Server

(a)Install vnc-server packages

yum install vnc-server

(b)Create your VNC users

useradd user1

(c) Set your users’ VNC passwords:
Log in as each user and run vncpasswd; this will create a .vnc directory.

vncpasswd

(d)Edit the server configuration
Edit /etc/sysconfig/vncservers, and add the following to the end of the file.

VNCSERVERS="2:root 3:user1"
VNCSERVERARGS[2]="-geometry 640x480"
VNCSERVERARGS[3]="-geometry 640x480"

(e) Start the server (this also creates the xstartup scripts):

 /sbin/service vncserver start

(f) Edit xstartup
Go into each user’s home directory and edit the xstartup file:

cd /root/.vnc
 vi xstartup

Uncomment the two lines below:
 unset SESSION_MANAGER
 exec /etc/X11/xinit/xinitrc

xstartup file should be like this

#!/bin/sh
( while true ; do xterm ; done ) &
 
# Uncomment the following two lines for normal desktop:
 unset SESSION_MANAGER
 exec /etc/X11/xinit/xinitrc
 
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
twm &

(g) Restart vncserver again:

service vncserver restart

Linux: lsof command and its uses

Monday, November 1st, 2010

How to view only TCP Established connections

lsof -iTCP | grep ESTABLISHED

How to view traffic on specific port ( port 22)

The syntax is: lsof -i :<port number>
lsof -i :22  | grep ESTABLISHED

How to view traffic from specific ip address

lsof -i@ip.of.your.user

How to view open files of an individual user

lsof -u username

How to collect information about a process

lsof -p process_id

Linux: unable to copy long (_, #) file names from Windows to a Samba server

Tuesday, October 5th, 2010

Sometimes, when trying to copy long directories/subdirectories or file names that include _ or # from Windows to a Samba server, you get an error such as “unable to copy” or “Can’t move folder file_name_long_name.cfm, the file name or extension is too long”.

The solution is :

[ share ]
         path = /share-name /long-directory
         read only = no
         case sensitive = True
         default case = upper
         preserve case = no
         short preserve case = no

After editing, restart the Samba service.

Linux:How to force puppet client to download updates from puppet server

Friday, September 17th, 2010

By default puppetd (the puppet client daemon) applies the client configuration every 1800 seconds. If you have some emergency updates that have to be applied to every puppet client instantly, you can do the following:

(a) puppetrun (This commands run from the puppet server)

 SYNOPSIS
Trigger a puppetd run on a set of hosts.
 
USAGE
puppetrun [-a|--all] [-c|--class ] [-d|--debug] [-f|--foreground]
[-h|--help] [--host ] [--no-fqdn] [--ignoreschedules]
[-t|--tag ] [--test] [-p|--ping]

If you don't have LDAP support then -a (--all) and -c (--class) are useless. In that case, if you want to force an update on every host, you will have to list all your hosts on the puppetrun command line.
Example:

According to the puppetrun man pages, the usage is:
EXAMPLE
sudo puppetrun -p 10 --host host1 --host host2 -t remotefile -t web-server
 
or
puppetrun --host host1 --host host2

(b) func
If you have lots of servers then it is not practical to list every host with the puppetrun command!
In that case we can use the func command:
how to install and use func

After installing func on the master and on all the other servers,
we can execute the command below.
Note: please don't run the puppetd daemon on the clients if you want to trigger updates via func.

http://docs.puppetlabs.com/guides/scaling.html#triggered_selective_updates

func "*" call command run "puppetd --onetime"

This command will execute puppetd one time only, and it will download all the updates from the puppet server.

Last updated: 17th September 2010

Linux: Troubleshooting Redhat Cluster Suite

Wednesday, September 8th, 2010

Ref:http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Configuration_Example_-_NFS_Over_GFS/NFS_GFS_Troubleshoot.html

If you find that you are seeing error messages when you try to configure your system, or if after configuration your system does not behave as expected, you can perform the following checks and examine the following areas.

*
Connect to one of the nodes in the cluster and execute the clustat(8) command. This command runs a utility that displays the status of the cluster. It shows membership information, quorum view, and the state of all configured user services.
The following example shows the output of the clustat(8) command.

      [root@clusternode4 ~]# clustat
      Cluster Status for nfsclust @ Wed Dec  3 12:37:22 2008
      Member Status: Quorate
 
       Member Name                              ID   Status
       ------ ----                              ---- ------
       clusternode5.example.com          1 Online, rgmanager
       clusternode4.example.com          2 Online, Local, rgmanager
       clusternode3.example.com          3 Online, rgmanager
       clusternode2.example.com          4 Online, rgmanager
       clusternode1.example.com          5 Online, rgmanager
 
       Service Name             Owner (Last)                     State
       ------- ---              ----- ------                     -----
       service:nfssvc           clusternode2.example.com         starting

In this example, clusternode4 is the local node since it is the host from which the command was run. If rgmanager did not appear in the Status category, it could indicate that cluster services are not running on the node.
*
Connect to one of the nodes in the cluster and execute the group_tool(8) command. This command provides information that you may find helpful in debugging your system. The following example shows the output of the group_tool(8) command.

      [root@clusternode1 ~]# group_tool
      type             level name       id       state
      fence            0     default    00010005 none
      [1 2 3 4 5]
      dlm              1     clvmd      00020005 none
      [1 2 3 4 5]
      dlm              1     rgmanager  00030005 none
      [3 4 5]
      dlm              1     mygfs      007f0005 none
      [5]
      gfs              2     mygfs      007e0005 none
      [5]

The state of the group should be none. The numbers in the brackets are the node ID numbers of the cluster nodes in the group. The clustat shows which node IDs are associated with which nodes. If you do not see a node number in the group, it is not a member of that group. For example, if a node ID is not in dlm/rgmanager group, it is not using the rgmanager dlm lock space (and probably is not running rgmanager).
The level of a group indicates the recovery ordering. 0 is recovered first, 1 is recovered second, and so forth.
*
Connect to one of the nodes in the cluster and execute the cman_tool nodes -f command. This command provides information about the cluster nodes that you may want to look at. The following example shows the output of the cman_tool nodes -f command.

      [root@clusternode1 ~]# cman_tool nodes -f
      Node  Sts   Inc   Joined               Name
         1   M    752   2008-10-27 11:17:15  clusternode5.example.com
         2   M    752   2008-10-27 11:17:15  clusternode4.example.com
         3   M    760   2008-12-03 11:28:44  clusternode3.example.com
         4   M    756   2008-12-03 11:28:26  clusternode2.example.com
         5   M    744   2008-10-27 11:17:15  clusternode1.example.com

The Sts heading indicates the status of a node. A status of M indicates the node is a member of the cluster. A status of X indicates that the node is dead. The Inc heading indicates the incarnation number of a node, which is for debugging purposes only.
*
Check whether the cluster.conf is identical in each node of the cluster. If you configure your system with Conga, as in the example provided in this document, these files should be identical, but one of the files may have accidentally been deleted or altered.
*
In addition to using Conga to fence a node in order to test whether failover is working properly as described in Chapter 6, Testing the NFS Cluster Service, you could disconnect the Ethernet connection between cluster members. You might try disconnecting one, two, or three nodes, for example. This could help isolate where the problem is.
*
If you are having trouble mounting or modifying an NFS volume, check whether the cause is one of the following:
o
The network between server and client is down.
o
The storage devices are not connected to the system.
o
More than half of the nodes in the cluster have crashed, rendering the cluster inquorate. This stops the cluster.
o
The GFS file system is not mounted on the cluster nodes.
o
The GFS file system is not writable.
o
The IP address you defined in the cluster.conf is not bound to the correct interface/NIC (sometimes the ip.sh script does not perform as expected).
*
Execute a showmount -e command on the node running the cluster service. If it shows the expected exports, check your firewall configuration for all ports necessary for NFS.
*
If SELinux is currently in enforcing mode on your system, check your /var/log/audit/audit.log file for any relevant messages. If you are using NFS to serve home directories, check whether the correct SELinux boolean value for nfs_home_dirs has been set to 1; this is required if you want to use NFS-based home directories on a client that is running SELinux. If you do not set this boolean, you can mount the directories as root but cannot use them as home directories for your users.
*
Check the /var/log/messages file for error messages from the NFS daemon.
*
If you see the expected results locally at the cluster nodes and between the cluster nodes but not at the defined clients, check the firewall configuration at the clients.

Troubleshooting Red Hat Cluster Suite Networking
Ref : http://people.redhat.com/ccaulfie/docs/CSNetworking.pdf

Linux:named: transfer of ‘domain.com/IN’ from #53: failed while receiving responses: permission denied

Friday, September 3rd, 2010

When you set up a slave DNS server and try to transfer a zone from the master server, you might see a problem like the one below:

Sep  3 09:52:37 publicdns1.domain.local named[13635]: dumping master file: tmp-PKhZ6y6rRp: open: permission denied
Sep  3 09:52:37 publicdns1.domain.local named[13635]: transfer of 'domain.com/IN' from 11.22.33.44#53: failed while receiving responses: permission denied
Sep  3 09:52:37 publicdns1.domain.local named[13635]: transfer of 'domain.com/IN' from 11.22.33.44#53: end of transfer

Solution:
Make sure the slave server creates the zone file under the slaves/ directory (file "slaves/domain.com.zone";), which is writable by the named user.
The named.conf setting on the slave server should look like below:

### Add authoritative zone for domain.com ###
        zone "domain.com" IN {
        type slave;
        file "slaves/domain.com.zone";
        masters { 11.22.33.44; };
 
};

How to run a Perl/Python script from a Linux Apache server

Thursday, September 2nd, 2010

For httpd.conf (/usr/local/apache/conf/httpd.conf if you compiled from source, or /etc/httpd/conf/httpd.conf if you installed via yum):

ScriptAlias /cgi-bin/ "/usr/local/apache/cgi-bin/"

If you want to run CGI scripts under your domain, for example www.fosiul.com/cgi-bin/test.cgi, do as below:

<VirtualHost *:80>
ServerAdmin fosiul@example.co.uk
DocumentRoot /usr/local/apache/htdocs/example/
ServerName www.example.co.uk
ServerAlias example.co.uk
......................................
......................................
 
<Directory "/usr/local/apache/htdocs/example/">
Options FollowSymLinks ExecCGI
AllowOverride None
Order allow,deny
Allow from all
</Directory>
<Directory "/usr/local/apache/htdocs/example/cgi-bin/">
        Order deny,allow
        Allow from all                
       Allow from xx.xx.xx.xx # If you just want to run cgi script from certain Ips , then you need to disable "Allow from All" options
       #Deny from all  # if you only want to allow cgi script from certain ip then you need to enable "Deny from all" options
        </Directory>
 
ScriptAlias /cgi-bin/ /usr/local/apache/htdocs/example/cgi-bin/

Now create the cgi-bin directory /usr/local/apache/htdocs/example/cgi-bin
and create a CGI script in it:

#!/usr/bin/perl -T
use strict;
use CGI;
my $cgi = new CGI;
print $cgi->header;
print $cgi->start_html('test world');
print $cgi->h1('Hello test');
print $cgi->li('list');
print $cgi->end_html();

Run this CGI script at http://www.example.co.uk/cgi-bin/test.cgi (make sure it is executable: chmod 755 test.cgi).

How to run a Python script under CGI

Create a CGI script (testpy.cgi) as below to run Python:

#!/usr/bin/python
print "Content-Type: text/plain\n\n"
print "Hello, World!\n"

Now run this script at www.example.co.uk/cgi-bin/testpy.cgi

Linux: How to configure and secure a public primary/secondary BIND DNS server

Wednesday, September 1st, 2010

Localhost Resolver :
(a) install bind

yum install bind bind-chroot bind-devel

(b) Copy named.conf and related files from /usr/share/doc/bind-9.3.6/sample/etc/

cp /usr/share/doc/bind-9.3.6/sample/etc/* /var/named/chroot/etc/

(c) The files in /var/named/chroot/etc are as follows:

[root@publicdns1 etc]# ls
localtime   named.rfc1912.zones  rndc.conf
named.conf  named.root.hints     rndc.key

Check the ownership of the files. The ownership should be root:named, as
below:

[root@publicdns1 etc]# pwd
/var/named/chroot/etc
[root@publicdns1 etc]# ls -al
total 64
drwxr-x--- 2 root named 4096 Aug 28 13:38 .
drwxr-x--- 6 root named 4096 Aug 28 13:37 ..
-rw-r--r-- 1 root root  3661 Aug 24 12:53 localtime
-rw-r--r-- 1 root named 5299 Aug 28 13:38 named.conf
-rw-r--r-- 1 root named  775 Aug 28 12:20 named.rfc1912.zones
-rw-r--r-- 1 root named  524 Aug 28 12:20 named.root.hints
-rw-r--r-- 1 root named    0 Aug 28 12:20 rndc.conf
-rw-r----- 1 root named  113 Aug 28 12:12 rndc.key
[root@publicdns1 etc]#

If the ownership is not right, we can change it as follows:

chown root:named named.conf  named.rfc1912.zones named.root.hints rndc.conf  rndc.key

(d) Copy named.root into /var/named/chroot/var/named directory

cp /usr/share/doc/bind-9.3.6/sample/var/named/named.root  /var/named/chroot/var/named/

The file list is now:

[root@publicdns1 named]# ls
data  domain.co.uk.zone  named.root  slaves
[root@publicdns1 named]#

(e) To allow internal PCs to resolve DNS requests and internal host names, we need to work on the view "localhost_resolver" section, as below:

view "localhost_resolver"
{
/* This view sets up named to be a localhost resolver (caching-only nameserver).
 * If all you want is a caching-only nameserver, then you need only define this view:
 */
        match-clients           { localhost;10.0.0.0/24; };
        match-destinations      { localhost;10.0.0.0/24; };
        recursion yes;
        # all views must contain the root hints zone:
        include "/etc/named.root.hints";
 
        /* these are zones that contain definitions for all the localhost
         * names and addresses, as recommended in RFC1912 - these names should
         * ONLY be served to localhost clients:
         */
        include "/etc/named.rfc1912.zones";
};

Note: all the internal zone information will be placed in the named.rfc1912.zones file.

(f) Now edit named.rfc1912.zones, which is located in /var/named/chroot/etc,
and enter the lines below:

zone "internaldomain.local" IN {
        type master;
        file "internaldomain.local.zone";
};

The edited named.rfc1912.zones file will then look like below:

// named.rfc1912.zones:
//
// Provided by Red Hat caching-nameserver package
//
// ISC BIND named zone configuration for zones recommended by
// RFC 1912 section 4.1 : localhost TLDs and address zones
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
//zone "." IN {
//      type hint;
//      file "named.ca";
//};
zone "internaldomain.local" IN {
 
        type master;
        file "internaldomain.local.zone";
 
};
 
zone "localdomain" IN {
        type master;
        file "localdomain.zone";
        allow-update { none; };
};
 
zone "localhost" IN {
        type master;
        file "localhost.zone";
        allow-update { none; };
};
 
zone "0.0.127.in-addr.arpa" IN {
        type master;
        file "named.local";
        allow-update { none; };
};
 
zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {
        type master;
        file "named.ip6.local";
        allow-update { none; };
};
 
zone "255.in-addr.arpa" IN {
        type master;
        file "named.broadcast";
        allow-update { none; };
};
 
zone "0.in-addr.arpa" IN {
        type master;
        file "named.zero";
        allow-update { none; };
};

(g)

Create a zone file, internaldomain.local.zone, in /var/named/chroot/var/named, like below:

$TTL    86400
@                    IN   SOA   @ root (
                                      42    ; serial (d. adams)
                                      3H    ; refresh
                                      15M   ; retry
                                      1W    ; expiry
                                      1D )  ; minimum
 
                     IN   NS         internaldns
                     IN   MX   10    internalmailserver
                     IN   A          10.0.0.20
internaldns          IN   A          10.0.0.9
Account              IN   A          10.0.0.6
internalmailserver   IN   A          10.0.0.10
www                  IN   A          10.0.0.20

Note: make sure the permissions are as below, or bind will not be able to read the file.

chown root:named internaldomain.local.zone
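The sample zone above uses a fixed serial (42). Slaves only transfer a zone when the serial increases, so a common convention (not required, just easier to maintain) is a date-based serial that you bump on every edit. A small sketch:

```shell
# YYYYMMDDnn-style zone serial: today's date plus a two-digit revision number
rev=01
serial="$(date +%Y%m%d)${rev}"
echo "$serial"      # e.g. 2010090101
echo "${#serial}"   # always 10 digits
```

Increment the revision (01, 02, ...) for further edits on the same day, and slaves will pick up each change.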

Primary Server:

(A)
Create the zone entries in named.conf.
Since this server will act as a public DNS server, we will create the zone entries for example.co.uk under the "external" view.

view    "external"
{
/* This view will contain zones you want to serve only to "external" clients
 * that have addresses that are not on your directly attached LAN interface subnets:
 */
        match-clients           { any; };
        match-destinations      { any; };
 
        recursion no;
        // you'd probably want to deny recursion to external clients, so you don't
        // end up providing free DNS service to all takers
 
    allow-query-cache { none; };
        // Disable lookups for any cached data and root hints
 
        // all views must contain the root hints zone:
        include "/etc/named.root.hints";
 
        // These are your "authoritative" external zones, and would probably
        // contain entries for just your web and mail servers:
 
        ### Add authoritative zone for example.co.uk ###
        zone "example.co.uk" IN {
        type master;
        file "example.co.uk.zone";
        allow-update { none; };
        allow-transfer { 22.33.44.55; }; // only this host may transfer the zone from this master server.
 
};
 
};

Secondary Server :

Follow every step from the beginning. We just need to change the named.conf file to allow the slave to download the zone file and updates from the master server.

view    "external"
{
/* This view will contain zones you want to serve only to "external" clients
 * that have addresses that are not on your directly attached LAN interface subnets:
 */
        match-clients           { any; };
        match-destinations      { any; };
 
        recursion no;
        // you'd probably want to deny recursion to external clients, so you don't
        // end up providing free DNS service to all takers
 
    allow-query-cache { none; };
        // Disable lookups for any cached data and root hints
 
        // all views must contain the root hints zone:
        include "/etc/named.root.hints";
 
        // These are your "authoritative" external zones, and would probably
        // contain entries for just your web and mail servers:
 
        ### Add authoritative zone for example.co.uk ###
        zone "example.co.uk" IN {
        type slave;
        file "slaves/example.co.uk.zone";
        masters { 55.55.55.55 ;};
};
 
};

Full named.conf file for the primary name server (public + localhost resolver):

 cat named.conf
//
// Sample named.conf BIND DNS server 'named' configuration file
// for the Red Hat BIND distribution.
//
// See the BIND Administrator's Reference Manual (ARM) for details, in:
//   file:///usr/share/doc/bind-*/arm/Bv9ARM.html
// Also see the BIND Configuration GUI : /usr/bin/system-config-bind and
// its manual.
//
options
{
        // Those options should be used carefully because they disable port
        // randomization
        // query-source    port 53;
        // query-source-v6 port 53;
 
        // Put files that named is allowed to write in the data/ directory:
        directory "/var/named"; // the default
        dump-file               "data/cache_dump.db";
        statistics-file         "data/named_stats.txt";
        memstatistics-file      "data/named_mem_stats.txt";
 
};
logging
{
/*      If you want to enable debugging, eg. using the 'rndc trace' command,
 *      named will try to write the 'named.run' file in the $directory (/var/named).
 *      By default, SELinux policy does not allow named to modify the /var/named directory,
 *      so put the default debug log file in data/ :
 */
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
//
// All BIND 9 zones are in a "view", which allow different zones to be served
// to different types of client addresses, and for options to be set for groups
// of zones.
//
// By default, if named.conf contains no "view" clauses, all zones are in the
// "default" view, which matches all clients.
//
// If named.conf contains any "view" clause, then all zones MUST be in a view;
// so it is recommended to start off using views to avoid having to restructure
// your configuration files in the future.
//
view "localhost_resolver"
{
/* This view sets up named to be a localhost resolver (caching-only nameserver).
 * If all you want is a caching-only nameserver, then you need only define this view:
 */
        match-clients           { localhost;10.0.0.0/24; };
        match-destinations      { localhost;10.0.0.0/24; };
        recursion yes;
        # all views must contain the root hints zone:
        include "/etc/named.root.hints";
 
        /* these are zones that contain definitions for all the localhost
         * names and addresses, as recommended in RFC1912 - these names should
         * ONLY be served to localhost clients:
         */
        include "/etc/named.rfc1912.zones";
};
view    "external"
{
/* This view will contain zones you want to serve only to "external" clients
 * that have addresses that are not on your directly attached LAN interface subnets:
 */
        match-clients           { any; };
        match-destinations      { any; };
 
        recursion no;
        // you'd probably want to deny recursion to external clients, so you don't
        // end up providing free DNS service to all takers
 
        allow-query-cache { none; };
        // Disable lookups for any cached data and root hints
 
        // all views must contain the root hints zone:
        include "/etc/named.root.hints";
 
        // These are your "authoritative" external zones, and would probably
        // contain entries for just your web and mail servers:
 
            zone "example.co.uk" IN {
        type master;
        file "example.co.uk.zone";
        allow-update { none; };
        allow-transfer { 22.33.44.55; };//only this host will received updates from this master server.
 
};
 
};

Full named.conf for the public slave server:

 cat named.conf
//
// Sample named.conf BIND DNS server 'named' configuration file
// for the Red Hat BIND distribution.
//
// See the BIND Administrator's Reference Manual (ARM) for details, in:
//   file:///usr/share/doc/bind-*/arm/Bv9ARM.html
// Also see the BIND Configuration GUI : /usr/bin/system-config-bind and
// its manual.
//
options
{
        // Those options should be used carefully because they disable port
        // randomization
        // query-source    port 53;
        // query-source-v6 port 53;
 
        // Put files that named is allowed to write in the data/ directory:
        directory "/var/named"; // the default
        dump-file               "data/cache_dump.db";
        statistics-file         "data/named_stats.txt";
        memstatistics-file      "data/named_mem_stats.txt";
 
};
logging
{
/*      If you want to enable debugging, eg. using the 'rndc trace' command,
 *      named will try to write the 'named.run' file in the $directory (/var/named).
 *      By default, SELinux policy does not allow named to modify the /var/named directory,
 *      so put the default debug log file in data/ :
 */
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
//
// All BIND 9 zones are in a "view", which allow different zones to be served
// to different types of client addresses, and for options to be set for groups
// of zones.
//
// By default, if named.conf contains no "view" clauses, all zones are in the
// "default" view, which matches all clients.
//
// If named.conf contains any "view" clause, then all zones MUST be in a view;
// so it is recommended to start off using views to avoid having to restructure
// your configuration files in the future.
//
view "localhost_resolver"
{
/* This view sets up named to be a localhost resolver (caching-only nameserver).
 * If all you want is a caching-only nameserver, then you need only define this view:
 */
        match-clients           { localhost;10.0.0.0/24; };
        match-destinations      { localhost;10.0.0.0/24; };
        recursion yes;
        # all views must contain the root hints zone:
        include "/etc/named.root.hints";
 
        /* these are zones that contain definitions for all the localhost
         * names and addresses, as recommended in RFC1912 - these names should
         * ONLY be served to localhost clients:
         */
        include "/etc/named.rfc1912.zones";
};
view    "external"
{
/* This view will contain zones you want to serve only to "external" clients
 * that have addresses that are not on your directly attached LAN interface subnets:
 */
        match-clients           { any; };
        match-destinations      { any; };
 
        recursion no;
        // you'd probably want to deny recursion to external clients, so you don't
        // end up providing free DNS service to all takers
 
        allow-query-cache { none; };
        // Disable lookups for any cached data and root hints
 
        // all views must contain the root hints zone:
        include "/etc/named.root.hints";
 
        // These are your "authoritative" external zones, and would probably
        // contain entries for just your web and mail servers:
 
        zone "example.co.uk" IN {
        type slave;
        file "slaves/example.co.uk.zone";
        masters { 55.55.55.55 ;};
};
 
};

Securing the name server:
(a) Don't end up providing free DNS service to everyone

options {
     recursion no;
};

(b) Disable glue fetching (note: fetch-glue is a BIND 8 option; it was removed in BIND 9):

options {
      fetch-glue no;
};

(c) Allow zone transfers only from a specific host

 ### Add authoritative zone for example.co.uk ###
        zone "example.co.uk" IN {
        type master;
        file "example.co.uk.zone";
        allow-update { none; };
        allow-transfer { 22.33.44.55; }; // only this host may transfer the zone from this master server.

(d) Don't disclose the BIND version

options {
     version "Not disclosed";
 
};

Nagios script to monitor memory usage

Thursday, June 24th, 2010
#!/bin/bash
 
#Version 1.0
#######################################
#Nagios script to check memory status##
#Commands : free -m#####################
#######################################
 
 
#Status check for nagios script
 
STATE_OK=0
STATE_WARNING=1
STATE_CRITICAL=2
STATE_UNKNOWN=3
STATE_DEPENDENT=4
 
 
#Define All the variables for commands
 
declare -rx SCRIPT=${0##*/}
declare -rx CMD_AWK="/bin/awk"
declare  -rx CMD_CAT="/bin/cat"
declare  -rx CMD_FREE="/usr/bin/free"
#####Section 1.1: Defining a function to check free memory##########
#This function checks the free memory status########################
#####################################################################
 
function FUNC_FREE_CMD
 
{
 
MEM_STATUS=$( $CMD_FREE -m | $CMD_AWK '/buffers\/cache/ {print $4}' )
 
 
########Checking whether current free memory is critical or normal######
 
if [ $MEM_STATUS -le 325 ]
then
 
#echo "Critical,Memory Level: $MEM_STATUS"
echo "Critical,Memory Level: $MEM_STATUS|Memory_level=$MEM_STATUS;350;325;0"
exit $STATE_CRITICAL
fi
 
if [ $MEM_STATUS -le 350 ]
then
 
echo "Warnings,Memory Level: $MEM_STATUS|Memory_level=$MEM_STATUS;350;325;0"
exit $STATE_WARNING
 
else
echo "Memory Seems Ok,Total Memory is: $MEM_STATUS|Memory_level=$MEM_STATUS;350;325;0"
#echo "Critical,Memory Level: $MEM_STATUS|Memory_level=$MEM_STATUS"
exit $STATE_OK
fi
 
}
 
#############Section 1.2: calling the function################
########and processing the data from this function############
FUNC_FREE_CMD
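The heart of the script is the free -m parsing. You can exercise that pipeline standalone against a canned sample of the old procps output (the -/+ buffers/cache line the script matches):

```shell
# Canned 'free -m' output in the old procps format, with a -/+ buffers/cache line
sample='             total       used       free     shared    buffers     cached
Mem:          2007       1856        150          0        120        900
-/+ buffers/cache:        836       1171
Swap:         2047          0       2047'

# Same extraction the script performs: 4th field of the buffers/cache line,
# i.e. memory that is free once buffers and cache are discounted (in MB)
MEM_STATUS=$(printf '%s\n' "$sample" | grep 'buffers/cache' | awk '{print $4}')
echo "$MEM_STATUS"   # 1171
```

Here 1171 MB free is above both thresholds (350 warning, 325 critical), so the script would exit with STATE_OK.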

Thanks

Mysql Server processlist shows negative value(-) in connect column for system user

Wednesday, June 9th, 2010

Sometimes the processlist output shows a negative value like below.
Command:

watch /usr/local/mysql/bin/mysqladmin -ppass processlist

8 | system user | | Connect | -1247 | Has read all relay log; waiting for the slave I/O thread to update it |

One of the reasons:
Make sure both servers have the same time zone and clock.
If there is a time difference between the two servers, the replication client shows negative values.
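A rough sketch of why a clock offset produces the negative number: the reported value is derived from the slave's own clock minus the timestamp the master wrote into the relay log, so a slave clock running behind the master goes negative (the numbers below are invented to match the -1247 above):

```shell
master_clock=1000000000                  # epoch seconds on the master
slave_clock=$((master_clock - 1247))     # slave clock is ~21 minutes behind
event_ts=$master_clock                   # timestamp carried in the relay log event
lag=$((slave_clock - event_ts))          # what the slave computes
echo "$lag"                              # -1247, like the processlist value
```

Syncing both servers with NTP removes the offset and the negative values disappear.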

How To Set Up MySQL Database Replication With SSL Encryption

Wednesday, June 9th, 2010

SSL replication between two active-active MySQL servers


Step 1:
Set up normal replication first and find out whether the MySQL server is compiled with SSL support.
Ref: http://www.fosiul.com/index.php/2009/11/mysql-server-master-master-active-active-replication/

The command below verifies whether the MySQL server is compiled with SSL support:

SHOW VARIABLES LIKE 'have_openssl';

Output:

A value of YES means the MySQL server is compiled with SSL.


Step 2:
On Server 1:
(a) Create self-signed certificates.
Note: while creating the self-signed certificates, use a different common name for each certificate, otherwise it will throw an SSL certificate error.

Creating the self-signed certificates:
Ref: http://dev.mysql.com/doc/refman/5.1/en/secure-create-certs.html

mkdir /usr/local/mysql/ssl  (I am assuming MySQL has been installed in /usr/local/mysql)

cd /usr/local/mysql/ssl

# Create CA certificate (Use different common name)

shell> openssl genrsa 2048 > ca-key.pem
shell> openssl req -new -x509 -nodes -days 1000 \
         -key ca-key.pem > ca-cert.pem

# Create server certificate (use different common name)

shell> openssl req -newkey rsa:2048 -days 1000 \
         -nodes -keyout server-key.pem > server-req.pem
shell> openssl x509 -req -in server-req.pem -days 1000 \
         -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > server-cert.pem

# Create client certificate

shell> openssl req -newkey rsa:2048 -days 1000 \
         -nodes -keyout client-key.pem > client-req.pem
shell> openssl x509 -req -in client-req.pem -days 1000 \
         -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > client-cert.pem

The ssl directory will then contain the files shown in the picture below.
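Before copying anything, you can confirm that the generated certificates chain correctly with openssl verify. A self-contained round trip in a throwaway directory (the CNs here are placeholders; -subj just makes the same commands as above run non-interactively):

```shell
tmp=$(mktemp -d) && cd "$tmp"
# Throwaway CA
openssl genrsa 2048 > ca-key.pem 2>/dev/null
openssl req -new -x509 -nodes -days 1000 -subj "/CN=test-ca" \
        -key ca-key.pem > ca-cert.pem 2>/dev/null
# Server certificate, with a different common name as warned above
openssl req -newkey rsa:2048 -days 1000 -nodes -subj "/CN=test-server" \
        -keyout server-key.pem > server-req.pem 2>/dev/null
openssl x509 -req -in server-req.pem -days 1000 \
        -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > server-cert.pem 2>/dev/null
# Should print: server-cert.pem: OK
openssl verify -CAfile ca-cert.pem server-cert.pem
```

Run the same verify against your real ca-cert.pem, server-cert.pem, and client-cert.pem before distributing them.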

Step 3:

Copy all of these files to Server 2, into /usr/local/mysql/ssl.

Reason: we will set up master-master (active-active) replication, so there will be SSL encryption both from Server 1 to Server 2 and from Server 2 to Server 1.
Picture: SSL replication between two active-active MySQL servers.

scp * root@ns2.server2co.uk:/usr/local/mysql/ssl/
(assuming we are in the /usr/local/mysql/ssl directory of Server 1)

Step 4:

For Server1 :

Edit the my.cnf file and add the lines below in the [mysqld] section:
 
ssl-key=/usr/local/mysql/ssl/server-key.pem
ssl-cert=/usr/local/mysql/ssl/server-cert.pem
ssl-ca=/usr/local/mysql/ssl/ca-cert.pem
 
[client]
ssl-ca=/usr/local/mysql/ssl/ca-cert.pem
ssl-key=/usr/local/mysql/ssl/client-key.pem
ssl-cert=/usr/local/mysql/ssl/client-cert.pem
 
 
For Server2 :
Edit the my.cnf file and add the lines below in the [mysqld] section:
 
ssl-key=/usr/local/mysql/ssl/server-key.pem
ssl-cert=/usr/local/mysql/ssl/server-cert.pem
ssl-ca=/usr/local/mysql/ssl/ca-cert.pem
 
 
[client]
ssl-ca=/usr/local/mysql/ssl/ca-cert.pem
ssl-key=/usr/local/mysql/ssl/client-key.pem
ssl-cert=/usr/local/mysql/ssl/client-cert.pem

Restart both servers using --skip-slave-start.
Ref: http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#option_mysqld_skip-slave-start

/usr/local/mysql/bin/mysqld_safe --skip-slave-start --user=mysql &

Now check that SSL on both servers points to the correct directory.

Execute the command below in the MySQL console on both servers:

mysql> show variables like '%ssl%';

It will give output like the picture below: SSL enabled and pointing to the right directory.

Step 5:
Create the replication users.
For Server 1:

GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO slave@'ip.of.your.server2' IDENTIFIED BY 'strong-password' require SSL;

For server 2

GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO slave@'ip.of.your.server1' IDENTIFIED BY 'strong-password' require SSL;

Step 6:
Server 1:
Open the firewall rules and allow traffic to port 3306 only from the IP of Server 2.

Server 2:

Open the firewall rules and allow traffic to port 3306 only from the IP of Server 1.

Step 7:

Test that both servers accept SSL connections from each other and that the traffic goes over SSL encryption.
From Server 1/Server 2:

mysql --ssl -hip-of-server1 -uSSL_CLIENT -ppassword

If everything goes OK, you should see the MySQL prompt. At the prompt, type
\s to verify that the connection is going through SSL encryption.

Look at the SSL line for:
SSL: Cipher in use is DHE-RSA-AES256-SHA (or similar),
as in the picture below showing SSL enabled.

Step 8:
Connect Server 1 with Server 2 and Server 2 with Server 1.

Server 1 to Server 2:

CHANGE MASTER TO MASTER_HOST='ip.of.your.server2', MASTER_USER='slave', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=53678, MASTER_SSL=1,MASTER_SSL_CA = '/usr/local/mysql/ssl/ca-cert.pem', MASTER_SSL_CERT = '/usr/local/mysql/ssl/client-cert.pem', MASTER_SSL_KEY = '/usr/local/mysql/ssl/client-key.pem';

Server2 to Server1

CHANGE MASTER TO MASTER_HOST='ip.of.your.server1', MASTER_USER='slave', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=53488, MASTER_SSL=1,MASTER_SSL_CA = '/usr/local/mysql/ssl/ca-cert.pem', MASTER_SSL_CERT = '/usr/local/mysql/ssl/client-cert.pem', MASTER_SSL_KEY = '/usr/local/mysql/ssl/client-key.pem';

Note: make sure you lock all the tables before taking the log file positions, and check the log file position on both servers.

Step 9:
Now start the slave on both servers.

 
START SLAVE;

Step 10:
Verify that both servers are pointing at each other.

Server1/Server2

show slave status\G

Check that the output is similar to the picture below, confirming that each slave is pointing at the other server.

Look for the options below:

Master_Host: xx.xx.xx.xx
Master_User: slave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 128108
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Master_SSL_Allowed: Yes
Master_SSL_CA_File: /usr/local/mysql/ssl/ca-cert.pem
Master_SSL_Cert: /usr/local/mysql/ssl/client-cert.pem
Master_SSL_Key: /usr/local/mysql/ssl/client-key.pem

Please let me know if you face any problem while implementing this.
Thanks

Nagios script to check DNS server status

Monday, June 7th, 2010
#!/bin/bash
###################################
#Purpose:################################################################
###(a) Monitor whether all your name servers are online : Status: Done###
###(b) Monitor whether all name servers have the same zone record : Status: Ongoing##
###(c) Monitor the response time of the DNS servers : Status: Ongoing####
#########################################################################
 
#Status check variables for nagios script#####
#####################################
STATE_OK=0
STATE_WARNING=1
STATE_CRITICAL=2
STATE_UNKNOWN=3
STATE_DEPENDENT=4
 
#####################################
##Declaration of variables###########
#####################################
 
declare -rx  CMD_HOST="/usr/bin/host";
declare -rx CMD_AWK="/bin/awk"
declare  -rx CMD_CAT="/bin/cat"
declare -rx CMD_GREP="/bin/grep"
declare -rx CMD_DIG="/usr/bin/dig"
ZONE=$1  # The zone name provided as a parameter to the script.
#############################################################
#Command to use : host -t ns fosiul.co.uk | awk '{print $4}'#
#############################################################
NUMBER_OF_DNSSrv=$($CMD_HOST -t ns $ZONE | $CMD_AWK '{print $4}' )
s=0
for i in $NUMBER_OF_DNSSrv
do
###########################################################
###Now Find out if all the name server is running##########
##########################################################
 
############Command#######################
########dig @dnserver ############
DNS_LIVE_RESULT=$($CMD_DIG @$i | $CMD_GREP -c  'connection timed out')
 
if [ $DNS_LIVE_RESULT -gt 0 ]
 
        then
         OFFLINE_ARRAY[$s]=$i
          ((s+=1))
fi
done
if [ ${#OFFLINE_ARRAY[*]} -eq 0 ]
then
 echo "All servers are online"
 exit $STATUS_OK
else
 s=0
  echo -n "Following servers are offline: "
  while [ $s -lt ${#OFFLINE_ARRAY[*]} ]
   do
    echo -n "${OFFLINE_ARRAY[$s]} "
    ((s+=1))
   done
   echo
  exit $STATE_CRITICAL
fi
 
 
 
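The offline-detection loop can be sanity-checked without touching the network by substituting a fake dig for the real one (fake_dig and the server names below are made up for the test; here ns2 is pretended to be down):

```shell
# Hypothetical stand-in for 'dig @server': pretend ns2 times out
fake_dig() {
    case "$1" in
        ns2.example.com) echo ';; connection timed out; no servers could be reached' ;;
        *)               echo ';; ->>HEADER<<- opcode: QUERY, status: NOERROR' ;;
    esac
}

offline=""
for ns in ns1.example.com ns2.example.com ns3.example.com; do
    # Same test as the script: does the output mention a timeout?
    if fake_dig "$ns" | grep -q 'connection timed out'; then
        offline="$offline $ns"
    fi
done
echo "offline:$offline"   # offline: ns2.example.com
```

Swap fake_dig back for the real `$CMD_DIG @$i` call and the same loop body drives the script above.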

Linux: How to run a C program in Linux

Friday, June 4th, 2010

1. Open an editor in Linux, for example the vi editor.
2. Write a simple program and save it as prog1.c

  #include <stdio.h>
  int main (void)
{
printf ("Programming is fun.\n");
return 0;
}

3. Compile the program: gcc prog1.c
4. Run the program: ./a.out
Or
5. You can give the binary a different name: gcc prog1.c -o prog1
Now run the program by typing: ./prog1

Linux:How to configure logrotate for ModSecurity(source install)

Monday, April 26th, 2010

Problem: when you install ModSecurity from source, logrotate will not rotate its log files by default, because their paths are not defined in the logrotate configuration. If you want logrotate to rotate your ModSecurity log files, here are the steps:

1. Create a file modsecurity under /etc/logrotate.d

 cd /etc/logrotate.d/
touch modsecurity

2. Copy and paste the lines below into it:

#Below is my ModSecurity log file (/opt/modsecurity/var/log/audit.log)
 
/opt/modsecurity/var/log/audit.log {
    missingok
    notifempty
    postrotate
 ##Restart the apache daemon
       /usr/local/apache/bin/apachectl graceful > /dev/null 2>/dev/null || true
    endscript
}

Now you can force log rotation by executing the command below:

 
logrotate -f /etc/logrotate.conf

Linux: How to create multiple OpenVPN instances

Monday, April 26th, 2010

Problem:
How do you configure OpenVPN to run multiple instances, listening on two ports (1194, 1195)?
Solution:
You need two OpenVPN configuration files, for example openvpn.conf and openvpn1.conf.

In each file you need to define a different port, server IP range, ifconfig-pool-persist file, and log files.

For openvpn.conf :

port 1194
proto tcp
dev tun
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
log         openvpn.log
log-append  openvpn.log

For openvpn1.conf :

 
port 1195
proto tcp
dev tun
server 192.168.1.0 255.255.255.0
ifconfig-pool-persist /etc/openvpn/config2/ipp.txt
log         /etc/openvpn/config2/openvpn.log
log-append  /etc/openvpn/config2/openvpn.log

Now start the openvpn daemon with these two config files separately:

shell> openvpn --config /etc/openvpn/openvpn.conf &
shell> openvpn --config /etc/openvpn/openvpn1.conf &

Or add these commands to /etc/rc.local so that both instances start automatically when the machine reboots.
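A sketch of what those /etc/rc.local entries might look like (the /usr/sbin/openvpn path and the --daemon flag are assumptions; adjust to your install):

```shell
# /etc/rc.local (sketch) -- start both OpenVPN instances at boot.
# --daemon detaches the process, so no trailing & is needed.
/usr/sbin/openvpn --config /etc/openvpn/openvpn.conf --daemon
/usr/sbin/openvpn --config /etc/openvpn/openvpn1.conf --daemon
```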

Now the output of ifconfig will show both tunnels:

tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:10.8.0.1  P-t-P:10.8.0.2  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
 
tun1      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:192.168.1.1  P-t-P:192.168.1.2  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

Nagios script to monitor memory usage

Friday, April 23rd, 2010

Purpose:
###########################################
Develop a Nagios script which will monitor Linux memory usage.
###########################################

This script will check following :
#############################################
#1. If free memory is more than the defined threshold: Status: Done
#2. If the system is using swap memory: Status: Done
##############################################

#!/bin/bash
 
#Version 1.0
#######################################
#Nagios script to check memory status#
#Commands : free -m###################
#######################################
 
 
#Status check for nagios script
 
STATE_OK=0
STATE_WARNING=1
STATE_CRITICAL=2
STATE_UNKNOWN=3
STATE_DEPENDENT=4
 
 
#Define All the variables for commands
 
declare -rx SCRIPT=${0##*/}
declare -rx CMD_AWK="/bin/awk"
declare  -rx CMD_CAT="/bin/cat"
declare  -rx CMD_FREE="/usr/bin/free"
declare  -rx CMD_VMSTAT="/usr/bin/vmstat"
declare  -rx CMD_GREP="/bin/grep"
 
#####Section 1.1: Defining function for free memory checking#########
#Defining function to check free memory status######################
##########################################
 
function FUNC_FREE_CMD
 
{
 
MEM_STATUS=$( $CMD_FREE -m | grep buffers/cache | awk '{print $4}')
 
 
########Checking if current memory is critical or normal ######
 
if [ $MEM_STATUS -le 325 ]
then
 
#echo "Critical,Memory Level: $MEM_STATUS"
echo "Critical,Memory Level: $MEM_STATUS|Memory_level=$MEM_STATUS;350;325;0"
exit $STATE_CRITICAL
fi
 
if [ $MEM_STATUS -le 350 ]
then
 
echo "Warnings,Memory Level: $MEM_STATUS|Memory_level=$MEM_STATUS;350;325;0"
exit $STATE_WARNING
 
else
echo "Memory Seems OK, free memory: $MEM_STATUS|Memory_level=$MEM_STATUS;350;325;0"
exit $STATE_OK
fi
 
}
 
#####Section 2.1: Defining function for checking swap usage##########
#### Commands: free -m | grep Swap | awk '{print $3}'###############
###################################################################
 
function FUNC_FREE_SWAP_CMD
{
 
SWAP_STATUS=$( $CMD_FREE -m | grep Swap | awk '{print $3}')
 
if [ $SWAP_STATUS -ne 0 ]
then
echo "System is using swap:$SWAP_STATUS"
echo "Let's try to find out how much swap the system is using from the vmstat output"
 
fi
 
}
 
######Section 3.1: Defining function to check swap-in and swap-out rates over 5 samples####
#####Commands : vmstat
###############################################################################################
 
 
 
function FUNC_VMSTAT_CMD
{
 
#Average the si (swap-in) and so (swap-out) columns over the samples;
#the first vmstat data line reports averages since boot, so it is skipped.
$CMD_VMSTAT 3 5 | $CMD_GREP "^[ ]*[0-9]" | $CMD_AWK 'NR>1 {si+=$7; so+=$8; n++} END {printf("Average swap in: %.1f KB/s, swap out: %.1f KB/s\n", si/n, so/n)}'
 
}
 
 
#############Section 4.1: calling all functions####################
###Function from section 1.1: to calculate free memory#############
###Function from section 2.1: to calculate swap usage##############
FUNC_FREE_CMD
FUNC_FREE_SWAP_CMD
FUNC_VMSTAT_CMD
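If the script is to be run through NRPE, a command definition along these lines would go into nrpe.cfg on the monitored host (the file name check_mem.sh and the plugin path are assumptions, not part of the script above):

```shell
# /usr/local/nagios/etc/nrpe.cfg (sketch)
command[check_mem]=/usr/local/nagios/libexec/check_mem.sh
```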

configure nrpe (nagios) to listen on a different port

Thursday, April 15th, 2010

Purpose: Sometimes an ISP or VPS provider blocks port 5666, or for some other reason you may want NRPE to listen on a different port, for example 15666. To do so, follow the steps below:

On the Remote host(linux-vps) :

1. Change the port number in /etc/xinetd.d/nrpe:

# default: on
# description: NRPE (Nagios Remote Plugin Executor)
service nrpe
{
        flags           = REUSE
        socket_type     = stream
        port            = 15666
        wait            = no
        user            = nagios
        group           = nagios
        server          = /usr/local/nagios/bin/nrpe
        server_args     = -c /usr/local/nagios/etc/nrpe.cfg --inetd
        log_on_failure  += USERID
        disable         = no
        only_from       = 127.0.0.1 ip.of.nagios.server
}

2. Change the port number in /etc/services:

nrpe            15666/tcp                        # NRPE

3. Change the port number in /usr/local/nagios/etc/nrpe.cfg:

server_port=15666

4. Restart the nrpe daemon: service xinetd restart

On the server(nagiosserver) :
Purpose: As an example, I have more than 10 Linux servers. Nine of them listen on port 5666, but one listens on port 15666, so I need a separate command definition for the Nagios server to reach that NRPE client on the non-standard port.

1. Create a command in the commands.cfg file (/usr/local/nagios/etc/objects/commands.cfg)

#This is slightly modified from the check_nrpe command
#because the VPS company blocked port 5666,
#so I had to configure the linuxvps server to listen on port 15666 and
#need a different command definition to connect to that port.
 
define command{
command_name check_nrpe_vps
command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -p 15666 -c $ARG1$
}

2. Now call this check_nrpe_vps command from the host definition file.
Example: the host definition file for linuxvps is linuxvps.cfg (/usr/local/nagios/etc/objects/linuxvps.cfg)

   define service{
   use generic-service
   host_name linuxvps
   service_description CPU Load
   check_command check_nrpe_vps!check_load
}

3. Now reference linuxvps.cfg from the nagios.cfg file:

  cfg_file=/usr/local/nagios/etc/objects/linuxvps.cfg

4. Restart Nagios.
The Nagios server will now connect to the NRPE client via port 15666.
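You can also verify the connection by hand from the Nagios server (paths assume a source install of the plugins):

```shell
# Ask the agent for its version over the non-standard port;
# a healthy agent replies with an "NRPE v..." version string.
/usr/local/nagios/libexec/check_nrpe -H linuxvps -p 15666
```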

Linux: file and directory permissions

Thursday, April 8th, 2010

Octal Permission:

0 --- 000 All types of access are denied
1 --x 001 Execute access is allowed only
2 -w- 010 Write access is allowed only
3 -wx 011 Write and execute access are allowed
4 r-- 100 Read access is allowed only
5 r-x 101 Read and execute access are allowed
6 rw- 110 Read and write access are allowed
7 rwx 111 Everything is allowed
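The table above can be reproduced with a small bash function that turns one octal digit into its rwx string (a toy illustration, not a standard utility):

```shell
# Convert a single octal permission digit (0-7) to its rwx form.
# Bit 4 = read, bit 2 = write, bit 1 = execute.
perm_str() {
    local bits="rwx" out="" i
    for i in 0 1 2; do
        if [ $(( ($1 >> (2 - i)) & 1 )) -eq 1 ]; then
            out="$out${bits:$i:1}"
        else
            out="$out-"
        fi
    done
    echo "$out"
}

perm_str 7   # rwx
perm_str 5   # r-x
perm_str 0   # ---
```

So, for example, chmod 754 gives the owner rwx, the group r-x, and others r--.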

Linux-Memory Performance statistics

Wednesday, March 31st, 2010

Ref: Optimizing Linux® Performance: A Hands-On Guide to Linux® Performance Tools

Ref:http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/admin-primer/s1-resource-what-to-monitor.html

Ref:http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/admin-primer/s1-resource-rhlspec.html

Basic explanation of memory related words:

Swap (Not Enough Physical Memory)

All systems have a fixed amount of physical memory in the form of RAM chips.
The Linux kernel allows applications to run even if they require more memory
than available with the physical memory. The Linux kernel uses the hard drive
as a temporary memory. This hard drive space is called swap
space.

Buffers and Cache (Too Much Physical Memory)

If your system has much more physical memory than required by your applications,
Linux will cache recently used files in physical memory so that subsequent accesses to that file do
not require an access to the hard drive. This can greatly speed up applications that access the hard
drive frequently, which, obviously, can prove especially useful for frequently launched applications.
Note :  most tools that report statistics about “cache” are actually referring to
disk cache.

Buffer:

In addition to cache, Linux also uses extra memory as buffers. To further optimize applications,
Linux sets aside memory to use for data that needs to be written to disk. These set-asides are called
buffers. If an application has to write something to the disk, which would usually take a long time,
Linux lets the application continue immediately but saves the file data into a memory buffer. At some
point in the future, the buffer is flushed to disk, but the application can continue immediately.

Low Memory is not always bad thing:

It can be discouraging to see very little free memory in a system because of the cache and buffer
usage, but this is not necessarily a bad thing. By default, Linux tries to use as much of your memory
as possible. This is good. If Linux detects any free memory, it caches applications and data in the
free memory to speed up future accesses. Because it is usually a few orders of magnitude faster to
access things from memory rather than disk, this can dramatically improve overall performance.
When the system needs the cache memory for more important things, the cache memory is erased
and given to the system. Subsequent access to the object that was previously cached has to go out
to disk to be filled.

Active Versus Inactive Memory

Active memory is currently being used by a process. Inactive memory is memory that is allocated
but has not been used for a while. Nothing is essentially different between the two types of memory.
When required, the Linux kernel takes a process’s least recently used memory pages and moves
them from the active to the inactive list. When choosing which memory will be swapped to disk, the
kernel chooses from the inactive memory list.

High Versus Low Memory

For 64-bit processors this distinction does not matter, because they can
directly address all of the memory available in current systems.
For 32-bit processors (for example, IA32) with 1 GB or more of physical of memory, Linux must
manage the physical memory as high and low memory. The high memory is not directly accessible
by the Linux kernel and must be mapped into the low-memory range before it can be used.

Bottom line: if the system does not use swap, there is no need to worry, but you will still have to keep an eye on cache, buffers, and free RAM. The memory performance monitoring tools below provide:

* How much swap is being used

* How the physical memory is being used

* How much RAM is free.

Memory Performance monitoring tools and related commands:

1.vmstat

2.free -m

3.slabtop

4.top ( Press Shift + m )

5. Ps command

6.procinfo ( yum install procinfo)

7.sar [-B -W -r] ( sysstat package, yum install sysstat)

Vmstat uses :

 vmstat [-a] [-s] [-m]

vmstat command line options :

-a This changes the default output of memory statistics to indicate the active/
inactive amount of memory rather than information about buffer and cache
usage.
-s  This prints out the vm table. This is a grab bag of different statistics about the
system since it has booted. It cannot be run in sample mode. It contains both
memory and CPU statistics.
-m This prints out the kernel’s slab info. This is the same information that can be
retrieved by typing cat /proc/slabinfo. This describes in detail how the
kernel’s memory is allocated and can be helpful to determine what area of the
kernel is consuming the most memory.

Memory-specific vmstat output statistics:

swpd:

The total amount of memory currently swapped to disk.

free:

The amount of physical memory not being used by the operating system or
applications.

buff:

The size (in KB) of the system buffers, or memory used to store data waiting
to be saved to disk. This memory allows an application to continue execution
immediately after it has issued a write call to the Linux kernel (instead of
waiting until the data has been committed to disk).

cache :

The size (in KB) of the system cache or memory used to store data previously
read from disk. If an application needs this data again, it allows the kernel to
fetch it from memory rather than disk, thus increasing performance.

active:

 The amount of memory actively being used. The active/inactive statistics are
orthogonal to the buffer/cache statistics; buffer and cache memory can be active and inactive.

inactive:

The amount of inactive memory (in KB), or memory that has not been used
for a while and is eligible to be swapped to disk.

si:

 The rate of memory (in KB/s) that has been swapped in from disk during the
last sample.

so :

The rate of memory (in KB/s) that has been swapped out to disk during the last
sample.

pages paged in:

 The amount of memory (in pages) read from the disk(s) into the system buffers.
(On most IA32 systems, a page is 4KB.)

pages paged out :

 The amount of memory (in pages) written to the disk(s) from the system cache.
(On most IA32 systems, a page is 4KB.)

pages swapped in:

 The amount of memory (in pages) read from swap into system memory.

pages swapped out:

 The amount of memory (in pages) written from system memory to the swap.

used swap :

 The amount of swap currently being used by the Linux kernel.

free swap:

 The amount of swap currently available for use.

total swap:

 The total amount of swap available to the system.

Free Command

free can be invoked using the following command line:

free [-l] [-t] [-s delay] [-c count]

Output :

[root@sandbox ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           375        355         19          0        177         86
-/+ buffers/cache:         91        283
Swap:         2000          0       2000

Total:

 This is the total amount of physical memory and swap.

Used

This is the amount of physical memory and swap in use.

Free

 This is the amount of unused physical memory and swap.

Shared

 This is an obsolete value and should be ignored.

Buffers

This is the amount of physical memory used as buffers for disk writes.

Cached

This is the amount of physical memory used as cache for disk reads.

-/+ buffers/cache

 In the Used column, this shows the amount of memory that would be used if
buffers/cache were not counted as used memory. In the Free column, this
shows the amount of memory that would be free if buffers/cache were counted
as free memory.
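The -/+ buffers/cache row can be reproduced from the Mem: row above; the one-MB differences against free's own 91/283 figures are just rounding inside free itself:

```shell
# Values (in MB) taken from the free -m output above.
total=375; used=355; free=19; buffers=177; cached=86

# "-/+ buffers/cache": treat buffer and cache memory as reclaimable.
real_used=$((used - buffers - cached))
real_free=$((free + buffers + cached))
echo "used without buffers/cache: ${real_used} MB"
echo "free with buffers/cache:    ${real_free} MB"
```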

Low

The total amount of low memory or memory directly accessible by the kernel.

High

The total amount of high memory or memory not directly accessible by the
kernel.

Totals

This shows the combination of physical memory and swap for the Total,
Used, and Free columns.

Sar Command
sar can be invoked with

 sar [-B] [-r] [-R]

Sar Command line options

-B

This reports information about the number of blocks that the kernel swapped
to and from disk. In addition, for kernel versions after v2.5, it reports information about the number of page faults.
-W

This reports the number of pages of swap that are brought in and out of the system.

-r

 This reports information about the memory being used in the system. It includes
information about the total free memory, swap, cache, and buffers being used

Explanation of Sar -B output

pgpgin/s:-    The amount of memory (in KB) that the kernel paged in from disk.
pgpgout/s:-  The amount of memory (in KB) that the kernel paged out to disk.
fault/s:-       The total number of faults that the memory subsystem needed to fill. These
may or may not have required a disk access.
majflt/s:-    The total number of faults that the memory subsystem needed to fill and required a disk access.

Explanation of Sar -W output:

pswpin/s:-    The amount of swap (in pages) that the system brought into memory.
pswpout/s:-    The amount of memory (in pages) that the system wrote to swap.

Explanation of Sar -r output:

kbmemfree:-    This is the total physical memory (in KB) that is currently free or not being
used.
kbmemused:-   This is the total amount of physical memory (in KB) currently being used.
%memused:-   This is the percentage of the total physical memory being used.
kbbuffers:-     This is the amount of physical memory used as buffers for disk writes.
kbcached:-     This is the amount of physical memory used as cache for disk reads.
kbswpfree:-    This is the amount of swap (in KB) currently free.
kbswpused:-   This is the amount of swap (in KB) currently used.
%swpused:-   This is the percentage of the swap being used.
kbswpcad:-    This is memory that is both swapped to disk and present in memory. If the
memory is needed, it can be immediately reused because the data is already
present in the swap area.
frmpg/s:-       The rate at which the system is freeing memory pages. A negative number means
the system is allocating them.
bufpg/s:-       The rate at which the system is using new memory pages as buffers. A negative
number means the number of buffers is shrinking, and the system is using fewer of them.

how to configure logrotate for apache log files

Wednesday, March 24th, 2010

Problem: When you install Apache from source, by default logrotate will not rotate its log files, because their path differs from the packaged default.

Solution: Edit the httpd file under the /etc/logrotate.d/ directory and insert the lines below:

/usr/local/apache/logs/*log {
    missingok
    notifempty
    sharedscripts
    postrotate
        /usr/local/apache/bin/apachectl graceful > /dev/null 2>/dev/null || true
    endscript
}

Now you can force a rotation by executing the command below:

logrotate -f /etc/logrotate.conf
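Before forcing a rotation, logrotate's debug mode can be used to verify that the new stanza parses; -d is a dry run that prints what would happen without touching any files:

```shell
logrotate -d /etc/logrotate.d/httpd
```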

Linux: How to use AIDE to check file system integrity

Monday, March 15th, 2010

Installing Aide:

yum install aide

Creating the database:

aide -c /etc/aide.conf -i
Output : AIDE database at /var/lib/aide/aide.db.new.gz initialized.
This process creates a new file, aide.db.new.gz, in /var/lib/aide/. You must rename this file to aide.db.gz, which is the correct name for the AIDE database.
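The rename described above is a single move (paths are the defaults reported by the init run):

```shell
mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
```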

Testing Aide:

aide -c /etc/aide.conf -C