Friday, June 14, 2013

Linux Cheat Sheet

Basic Operation
#hostname - Displays the hostname and/or FQDN of the system
#uname -a - Displays the hostname and detailed kernel version
#cat /etc/redhat-release - Displays the version of Linux installed
#cat /proc/cpuinfo - Displays information about the CPU(s)
#df -h - Displays the partitions, their sizes details, and mount points
#free - Displays details about system memory and usage
#lsof - Displays all open files
#lsof -nPi:22 - Displays any open files which use port 22
#locate httpd.conf - Displays the full path to any file named httpd.conf
#updatedb - Rebuilds index of files for search using the locate utility
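Several of the commands above can be combined into a quick system summary; a minimal sketch using only those commands (output will of course vary per machine):

```shell
#!/bin/sh
# Collect a quick system summary using the basic commands above.
HOST=$(hostname)                              # system hostname
KERNEL=$(uname -r)                            # kernel release
CPUS=$(grep -c '^processor' /proc/cpuinfo)    # number of CPU cores

echo "Host:   $HOST"
echo "Kernel: $KERNEL"
echo "CPUs:   $CPUS"
df -h /                                       # root partition usage
```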

Copy, Move, Delete
#cp file1.txt file2.txt - Copies file1.txt to file2.txt
#mv old.txt new.txt - Renames a file called old.txt to new.txt
#rm file1.txt - Deletes file1.txt
#mkdir httpds - Creates a new directory called httpds
#cp -R httpd httpds - Recursively copies all files from directory httpd to httpds
#cp -PR httpd httpds - Recursively copies all files from directory httpd to httpds and retains all permission settings
#rm -rf httpd - Recursively deletes folder httpd and all contents
#chkconfig --list - Displays all services and whether they are on or off at each runlevel
#chkconfig --level 35 httpd on - Sets httpd to start on runlevels 3 and 5 when the machine is booted
#service httpd start - Immediately starts Apache

File Attributes
#chown apache virtualhosts.txt - Changes ownership of the virtualhosts.txt file to user apache
#chgrp apache virtualhosts.txt - Changes membership of the virtualhosts.txt file to group apache
#chmod a+x sniffer.pl - Allows the sniffer.pl file to be executed

CHMOD
7 rwx read, write, execute
6 rw- read, write
5 r-x read, execute
4 r-- read
3 -wx write, execute
2 -w- write
1 --x execute
0 --- no permissions


#chmod 777 passwords.txt - Allows read, write, and execute on the file passwords.txt to anyone
#chmod 000 passwords.txt - Blocks read, write, and execute on the file passwords.txt to anyone
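The numeric modes are just the sum of read (4), write (2), and execute (1) for each of owner, group, and other. A minimal sketch on a scratch file (the filename report.sh is made up for illustration):

```shell
#!/bin/sh
# Demonstrate numeric chmod values: 7 = rwx, 5 = r-x, 4 = r--.
touch report.sh                # hypothetical scratch file
chmod 754 report.sh            # owner rwx (4+2+1), group r-x (4+1), other r-- (4)
ls -l report.sh                # shows -rwxr-xr--
stat -c '%a' report.sh         # prints 754
rm -f report.sh
```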

YUM
#yum update -y - Updates all packages without prompting
#yum install iptraf - Installs a package named iptraf
#yum whatprovides */iostat - Searches all repositories and returns RPMs that provide the program iostat
#yum update samba - updates a package named samba

RPM
#rpm -q httpd - Displays the installed version of the httpd (Apache) package
#rpm -qa | grep bind - Displays all packages installed with the word bind. Example:
#rpm -qa | grep bind
bind-chroot-9.3.6-16.P1.el5
system-config-bind-4.0.3-4.el5.centos
bind-utils-9.3.6-16.P1.el5
bind-9.3.6-16.P1.el5
bind-libs-9.3.6-16.P1.el5
ypbind-1.19-12.el5


#rpm -ivh proftpd - Installs the proftpd package (verbose, with hash-mark progress)
#rpm -Uvh proftpd - Upgrades the proftpd package (verbose, with hash-mark progress)
#rpm -e proftpd - Removes package proftpd
#rpm --rebuilddb - Rebuilds a corrupt RPM database

Compressed files
#unzip package.zip - Unzips the file package.zip
#tar -zxvf stunnel.tar.gz - Extracts a gzip-compressed tarball named stunnel.tar.gz
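A round-trip sketch of creating and then extracting a gzipped tarball (the file and directory names here are made-up examples):

```shell
#!/bin/sh
# Create a gzipped tarball, then extract it into another directory.
mkdir -p demo && echo "hello" > demo/note.txt
tar -zcf demo.tar.gz demo           # c = create, z = gzip compress
mkdir -p extracted
tar -zxf demo.tar.gz -C extracted   # x = extract, -C = target directory
cat extracted/demo/note.txt         # prints: hello
rm -rf demo demo.tar.gz extracted
```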

Networking
#ifup eth0 - Enables network interface eth0
#ifdown eth0 - Disables network interface eth0
#vi /etc/sysconfig/network-scripts/ifcfg-eth0 - Uses vi to edit network settings on eth0

IP tables
#service iptables status - Displays status of iptables (running or not)
#iptables -L - Displays ruleset of iptables
#iptables -I INPUT -p tcp -m tcp -s 192.168.15.254/26 --dport 22 -j ACCEPT - Accepts incoming SSH connections from IP range 192.168.15.254/26
#iptables -I INPUT -p tcp -m tcp -s 0.0.0.0/0 --dport 22 -j DROP - Blocks SSH connections from everywhere else
#iptables -I INPUT -s "192.168.10.121" -j DROP - Drops all traffic from IP 192.168.10.121
#iptables -D INPUT -s "192.168.10.121" -j DROP - Removes the previously applied rule dropping all traffic from IP 192.168.10.121
#iptables -I INPUT -s "192.168.10.0/24" -j DROP - Drops all traffic from IP range 192.168.10.0/24
#iptables -A INPUT -p tcp --dport 25 -j DROP - Blocks all traffic to TCP port 25
#iptables -A INPUT -p tcp --dport 25 -j ACCEPT - Allows all traffic to TCP port 25
#iptables -A INPUT -p udp --dport 53 -j DROP - Blocks all traffic to UDP port 53
#/etc/init.d/iptables save - Saves all iptables rules so they are automatically re-applied after a reboot

Processes
#ps ax - Displays all running processes
#ps aux - Displays all running processes including CPU and memory usage of each
#ps ax | wc -l - Displays the total number of processes
#top - Interactive process manager which allows sorting by criteria
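The process count from ps can also be approximated straight from /proc, since each running process appears there as a numeric directory; a minimal sketch:

```shell
#!/bin/sh
# Count processes by listing numeric entries in /proc, which is
# roughly what "ps ax | wc -l" reports (minus the header line).
COUNT=$(ls /proc | grep -c '^[0-9][0-9]*$')
echo "Running processes: $COUNT"
```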
Logs
#tail -f /var/log/messages - Displays the most current entries to the messages log in real-time
#tail -50 /var/log/messages - Displays the last 50 lines of the messages log
#head -50 /var/log/messages - Displays the first 50 lines of the messages log
#cat /var/log/messages - Displays the entire messages log
#cat /var/log/messages | grep "FTP session opened" - Displays any entries in the messages log that contain the text "FTP session opened"
#cat /var/log/messages | grep "FTP session opened" > log2.txt - Writes any entries in the messages log that contain the text "FTP session opened" to a file named log2.txt
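The same grep filtering works on any log file; a self-contained sketch on a throwaway log (the file name and entries are made up):

```shell
#!/bin/sh
# Build a small fake log and filter it the same way as /var/log/messages.
cat > demo.log <<'EOF'
Jun 14 10:01:02 host ftpd[100]: FTP session opened for user alice
Jun 14 10:02:03 host sshd[200]: Accepted password for bob
Jun 14 10:03:04 host ftpd[101]: FTP session opened for user carol
EOF
grep "FTP session opened" demo.log             # show matching lines
grep "FTP session opened" demo.log > log2.txt  # save them to a file
wc -l < log2.txt                               # prints 2
rm -f demo.log log2.txt
```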

Paths to Common Files
Bind (named)
/var/named - Bind zone files (non chrooted)
/etc/named.conf - Bind configuration file (non chrooted)
/var/named/chroot/var/named - Bind zone files (chrooted)
/var/named/chroot/etc/named.conf - Bind configuration file (chrooted)

Apache (httpd)
/etc/httpd/conf/httpd.conf - Main apache configuration file
/var/www/html - Default directory for serving pages
/var/log/httpd/ - Default location for logs (access and error)

Networking
/etc/hosts - System hosts file
/etc/resolv.conf - DNS lookup configuration file
/etc/sysconfig/network - Network/hostname configuration file
/etc/selinux - SELinux configuration directory
/etc/sysconfig/network-scripts/ - Default location of network interface configuration files
/etc/sysconfig/iptables - Default iptables policy configuration file
/etc/sysconfig/iptables-config - Default iptables daemon configuration file

Backing up MySQL databases daily, weekly, and monthly

I recently found a script that automatically backs up MySQL databases on a daily, weekly, and monthly basis.


Change the database name in the script; in this example it is joomla.
The default backup directory is /var/backups; you can change it to another location if you like.

The script appears at the bottom of this post. Save it as autobackupmysql, then make it executable:
#chmod +x autobackupmysql

Test the script by executing it:
#./autobackupmysql

Verify the backup under /var/backups; there will be three folders: daily, weekly, and monthly. Change into the daily directory and then into the folder named after your database (e.g. joomla); you should see your database dump there.

Move the script to /etc/cron.daily so it runs automatically every day.
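The install steps can be rehearsed end to end; a dry-run sketch in a scratch directory (replace ./scratch/cron.daily with the real /etc/cron.daily as root, and the placeholder script with the real autobackupmysql):

```shell
#!/bin/sh
# Dry-run of the install steps using a placeholder script and
# a scratch directory standing in for /etc/cron.daily.
mkdir -p scratch/cron.daily
printf '#!/bin/sh\necho backup placeholder\n' > scratch/autobackupmysql
chmod +x scratch/autobackupmysql                  # make it executable
cp scratch/autobackupmysql scratch/cron.daily/    # "move" into cron.daily
test -x scratch/cron.daily/autobackupmysql && echo "installed"
rm -rf scratch
```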

Script:

#!/bin/bash
#
# MySQL Backup Script
# VER. 2.5 - http://sourceforge.net/projects/automysqlbackup/
# Copyright (c) 2002-2003 wipe_out@lycos.co.uk
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
#=====================================================================
#=====================================================================
# Set the following variables to your system needs
# (Detailed instructions below variables)
#=====================================================================

# Username to access the MySQL server e.g. dbuser
USERNAME=root

# Password to access the MySQL server e.g. password
PASSWORD=xxxxxx

# Host name (or IP address) of MySQL server e.g localhost
DBHOST=localhost

# List of DBNAMES for Daily/Weekly Backup e.g. "DB1 DB2 DB3"
DBNAMES="joomla"

# Backup directory location e.g /backups
BACKUPDIR="/var/backups"

# Mail setup
# What would you like to be mailed to you?
# - log : send only log file
# - files : send log file and sql files as attachments (see docs)
# - stdout : will simply output the log to the screen if run manually.
# - quiet : Only send logs if an error occurs to the MAILADDR.
MAILCONTENT="stdout"

# Set the maximum allowed email size in k. (4000 = approx 5MB email [see docs])
MAXATTSIZE="4000"

# Email Address to send mail to? (user@domain.com)
MAILADDR="imran@oslohosting.com"


# ============================================================
# === ADVANCED OPTIONS ( Read the doc's below for details )===
#=============================================================

# List of DBBNAMES for Monthly Backups.
MDBNAMES="mysql $DBNAMES"

# List of DBNAMES to EXCLUDE if DBNAMES is set to all (must be in " quotes)
DBEXCLUDE=""

# Include CREATE DATABASE in backup?
CREATE_DATABASE=yes

# Separate backup directory and file for each DB? (yes or no)
SEPDIR=yes

# Which day do you want weekly backups? (1 to 7 where 1 is Monday)
DOWEEKLY=6

# Choose Compression type. (gzip or bzip2)
COMP=gzip

# Compress communications between backup server and MySQL server?
COMMCOMP=no

# Additionally keep a copy of the most recent backup in a seperate directory.
LATEST=no

# The maximum size of the buffer for client/server communication. e.g. 16MB (maximum is 1GB)
MAX_ALLOWED_PACKET=

# For connections to localhost. Sometimes the Unix socket file must be specified.
SOCKET=

# Command to run before backups (uncomment to use)
#PREBACKUP="/etc/mysql-backup-pre"

# Command run after backups (uncomment to use)
#POSTBACKUP="/etc/mysql-backup-post"

#=====================================================================
# Options documentation
#=====================================================================
# Set USERNAME and PASSWORD of a user that has at least SELECT permission
# to ALL databases.
#
# Set the DBHOST option to the server you wish to backup, leave the
# default to backup "this server".(to backup multiple servers make
# copies of this file and set the options for that server)
#
# Put in the list of DBNAMES(Databases)to be backed up. If you would like
# to backup ALL DBs on the server set DBNAMES="all".(if set to "all" then
# any new DBs will automatically be backed up without needing to modify
# this backup script when a new DB is created).
#
# If the DB you want to backup has a space in the name replace the space
# with a % e.g. "data base" will become "data%base"
# NOTE: Spaces in DB names may not work correctly when SEPDIR=no.
#
# You can change the backup storage location from /backups to anything
# you like by using the BACKUPDIR setting..
#
# The MAILCONTENT and MAILADDR options are pretty self explanatory, use
# these to have the backup log mailed to you at any email address or multiple
# email addresses in a space separated list.
# (If you set mail content to "log" you will require access to the "mail" program
# on your server. If you set this to "files" you will have to have mutt installed
# on your server. If you set it to "stdout" it will log to the screen if run from
# the console or to the cron job owner if run through cron. If you set it to "quiet"
# logs will only be mailed if there are errors reported. )
#
# MAXATTSIZE sets the largest allowed email attachments total (all backup files) you
# want the script to send. This is the size before it is encoded to be sent as an email
# so if your mail server will allow a maximum mail size of 5MB I would suggest setting
# MAXATTSIZE to be 25% smaller than that so a setting of 4000 would probably be fine.
#
# Finally copy automysqlbackup.sh to anywhere on your server and make sure
# to set executable permission. You can also copy the script to
# /etc/cron.daily to have it execute automatically every night or simply
# place a symlink in /etc/cron.daily to the file if you wish to keep it
# somewhere else.
# NOTE: On Debian copy the file with no extension for it to be run
# by cron e.g just name the file "automysqlbackup"
#
# That's it..
#
#
# === Advanced options doc's ===
#
# The list of MDBNAMES is the DB's to be backed up only monthly. You should
# always include "mysql" in this list to backup your user/password
# information along with any other DBs that you only feel need to
# be backed up monthly. (if using a hosted server then you should
# probably remove "mysql" as your provider will be backing this up)
# NOTE: If DBNAMES="all" then MDBNAMES has no effect as all DBs will be backed
# up anyway.
#
# If you set DBNAMES="all" you can configure the option DBEXCLUDE. Other
# wise this option will not be used.
# This option can be used if you want to backup all dbs, but you want
# exclude some of them. (eg. a db is to big).
#
# Set CREATE_DATABASE to "yes" (the default) if you want your SQL-Dump to create
# a database with the same name as the original database when restoring.
# Saying "no" here will allow your to specify the database name you want to
# restore your dump into, making a copy of the database by using the dump
# created with automysqlbackup.
# NOTE: Not used if SEPDIR=no
#
# The SEPDIR option allows you to choose to have all DBs backed up to
# a single file (fast restore of entire server in case of crash) or to
# seperate directories for each DB (each DB can be restored seperately
# in case of single DB corruption or loss).
#
# To set the day of the week that you would like the weekly backup to happen
# set the DOWEEKLY setting, this can be a value from 1 to 7 where 1 is Monday,
# The default is 6 which means that weekly backups are done on a Saturday.
#
# COMP is used to choose the compression used, options are gzip or bzip2.
# bzip2 will produce slightly smaller files but is more processor intensive so
# may take longer to complete.
#
# COMMCOMP is used to enable or disable mysql client to server compression,
# which is useful to save bandwidth when backing up a remote MySQL server over
# the network.
#
# LATEST is to store an additional copy of the latest backup to a standard
# location so it can be downloaded by third party scripts.
#
# If the DB's being backed up make use of large BLOB fields then you may need
# to increase the MAX_ALLOWED_PACKET setting, for example 16MB..
#
# When connecting to localhost as the DB server (DBHOST=localhost) sometimes
# the system can have issues locating the socket file.. This can now be set
# using the SOCKET parameter.. An example may be SOCKET=/private/tmp/mysql.sock
#
# Use PREBACKUP and POSTBACKUP to specify Pre and Post backup commands
# or scripts to perform tasks either before or after the backup process.
#
#
#=====================================================================
# Backup Rotation..
#=====================================================================
#
# Daily Backups are rotated weekly..
# Weekly Backups are run by default on Saturday Morning when
# cron.daily scripts are run...Can be changed with DOWEEKLY setting..
# Weekly Backups are rotated on a 5 week cycle..
# Monthly Backups are run on the 1st of the month..
# Monthly Backups are NOT rotated automatically...
# It may be a good idea to copy Monthly backups offline or to another
# server..
#
#=====================================================================
# Please Note!!
#=====================================================================
#
# I take no responsibility for any data loss or corruption when using
# this script..
# This script will not help in the event of a hard drive crash if a
# copy of the backup has not been stored offline or on another PC..
# You should copy your backups offline regularly for best protection.
#
# Happy backing up...
#
#=====================================================================
# Restoring
#=====================================================================
# Firstly you will need to uncompress the backup file.
# eg.
# gunzip file.gz (or bunzip2 file.bz2)
#
# Next you will need to use the mysql client to restore the DB from the
# sql file.
# eg.
# mysql --user=username --pass=password --host=dbserver database < /path/file.sql
# or
# mysql --user=username --pass=password --host=dbserver -e "source /path/file.sql" database
#
# NOTE: Make sure you use "<" and not ">" in the above command because
# you are piping the file.sql to mysql and not the other way around.
#
# Lets hope you never have to use this.. :)
#
#=====================================================================
# Change Log
#=====================================================================
#
# VER 2.5 - (2006-01-15)
# Added support for setting MAXIMUM_PACKET_SIZE and SOCKET parameters (suggested by Yvo van Doorn)
# VER 2.4 - (2006-01-23)
# Fixed bug where weekly backups were not being rotated. (Fix by wolf02)
# Added hour an min to backup filename for the case where backups are taken multiple
# times in a day. NOTE This is not complete support for mutiple executions of the script
# in a single day.
# Added MAILCONTENT="quiet" option, see docs for details. (requested by snowsam)
# Updated path statment for compatibility with OSX.
# Added "LATEST" to additionally store the last backup to a standard location. (request by Grant29)
# VER 2.3 - (2005-11-07)
# Better error handling and notification of errors (a long time coming)
# Compression on Backup server to MySQL server communications.
# VER 2.2 - (2004-12-05)
# Changed from using depricated "-N" to "--skip-column-names".
# Added ability to have compressed backup's emailed out. (code from Thomas Heiserowski)
# Added maximum attachment size setting.
# VER 2.1 - (2004-11-04)
# Fixed a bug in daily rotation when not using gzip compression. (Fix by Rob Rosenfeld)
# VER 2.0 - (2004-07-28)
# Switched to using IO redirection instead of pipeing the output to the logfile.
# Added choice of compression of backups being gzip of bzip2.
# Switched to using functions to facilitate more functionality.
# Added option of either gzip or bzip2 compression.
# VER 1.10 - (2004-07-17)
# Another fix for spaces in the paths (fix by Thomas von Eyben)
# Fixed bug when using PREBACKUP and POSTBACKUP commands containing many arguments.
# VER 1.9 - (2004-05-25)
# Small bug fix to handle spaces in LOGFILE path which contains spaces (reported by Thomas von Eyben)
# Updated docs to mention that Log email can be sent to multiple email addresses.
# VER 1.8 - (2004-05-01)
# Added option to make backups restorable to alternate database names
# meaning that a copy of the database can be created (Based on patch by Rene Hoffmann)
# Seperated options into standard and advanced.
# Removed " from single file dump DBMANES because it caused an error but
# this means that if DB's have spaces in the name they will not dump when SEPDIR=no.
# Added -p option to mkdir commands to create multiple subdirs without error.
# Added disk usage and location to the bottom of the backup report.
# VER 1.7 - (2004-04-22)
# Fixed an issue where weelky backups would only work correctly if server
# locale was set to English (issue reported by Tom Ingberg)
# used "eval" for "rm" commands to try and resolve rotation issues.
# Changed name of status log so multiple scripts can be run at the same time.
# VER 1.6 - (2004-03-14)
# Added PREBACKUP and POSTBACKUP command functions. (patch by markpustjens)
# Added support for backing up DB's with Spaces in the name.
# (patch by markpustjens)
# VER 1.5 - (2004-02-24)
# Added the ability to exclude DB's when the "all" option is used.
# (Patch by kampftitan)
# VER 1.4 - (2004-02-02)
# Project moved to Sourceforge.net
# VER 1.3 - (2003-09-25)
# Added support for backing up "all" databases on the server without
# having to list each one seperately in the configuration.
# Added DB restore instructions.
# VER 1.2 - (2003-03-16)
# Added server name to the backup log so logs from multiple servers
# can be easily identified.
# VER 1.1 - (2003-03-13)
# Small Bug fix in monthly report. (Thanks Stoyanski)
# Added option to email log to any email address. (Inspired by Stoyanski)
# Changed Standard file name to .sh extention.
# Option are set using yes and no rather than 1 or 0.
# VER 1.0 - (2003-01-30)
# Added the ability to have all databases backup to a single dump
# file or seperate directory and file for each database.
# Output is better for log keeping.
# VER 0.6 - (2003-01-22)
# Bug fix for daily directory (Added in VER 0.5) rotation.
# VER 0.5 - (2003-01-20)
# Added "daily" directory for daily backups for neatness (suggestion by Jason)
# Added DBHOST option to allow backing up a remote server (Suggestion by Jason)
# Added "--quote-names" option to mysqldump command.
# Bug fix for handling the last and first of the year week rotation.
# VER 0.4 - (2002-11-06)
# Added the abaility for the script to create its own directory structure.
# VER 0.3 - (2002-10-01)
# Changed Naming of Weekly backups so they will show in order.
# VER 0.2 - (2002-09-27)
# Corrected weekly rotation logic to handle weeks 0 - 10
# VER 0.1 - (2002-09-21)
# Initial Release
#
#=====================================================================
#=====================================================================
#=====================================================================
#
# Should not need to be modified from here down!!
#
#=====================================================================
#=====================================================================
#=====================================================================
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/mysql/bin
DATE=`date +%Y-%m-%d_%Hh%Mm` # Datestamp e.g 2002-09-21_15h04m
DOW=`date +%A` # Day of the week e.g. Monday
DNOW=`date +%u` # Day number of the week 1 to 7 where 1 represents Monday
DOM=`date +%d` # Date of the Month e.g. 27
M=`date +%B` # Month e.g January
W=`date +%V` # Week Number e.g 37
VER=2.5 # Version Number
LOGFILE=$BACKUPDIR/$DBHOST-`date +%N`.log # Logfile Name
LOGERR=$BACKUPDIR/ERRORS_$DBHOST-`date +%N`.log # Logfile Name
BACKUPFILES=""
OPT="--quote-names --opt" # OPT string for use with mysqldump ( see man mysqldump )

# Add --compress mysqldump option to $OPT
if [ "$COMMCOMP" = "yes" ];
then
OPT="$OPT --compress"
fi

# Add --max_allowed_packet mysqldump option to $OPT
if [ "$MAX_ALLOWED_PACKET" ];
then
OPT="$OPT --max_allowed_packet=$MAX_ALLOWED_PACKET"
fi

# Create required directories
if [ ! -e "$BACKUPDIR" ] # Check Backup Directory exists.
then
mkdir -p "$BACKUPDIR"
fi

if [ ! -e "$BACKUPDIR/daily" ] # Check Daily Directory exists.
then
mkdir -p "$BACKUPDIR/daily"
fi

if [ ! -e "$BACKUPDIR/weekly" ] # Check Weekly Directory exists.
then
mkdir -p "$BACKUPDIR/weekly"
fi

if [ ! -e "$BACKUPDIR/monthly" ] # Check Monthly Directory exists.
then
mkdir -p "$BACKUPDIR/monthly"
fi

if [ "$LATEST" = "yes" ]
then
if [ ! -e "$BACKUPDIR/latest" ] # Check Latest Directory exists.
then
mkdir -p "$BACKUPDIR/latest"
fi
eval rm -fv "$BACKUPDIR/latest/*"
fi

# IO redirection for logging.
touch $LOGFILE
exec 6>&1 # Link file descriptor #6 with stdout.
# Saves stdout.
exec > $LOGFILE # stdout replaced with file $LOGFILE.
touch $LOGERR
exec 7>&2 # Link file descriptor #7 with stderr.
# Saves stderr.
exec 2> $LOGERR # stderr replaced with file $LOGERR.


# Functions

# Database dump function
dbdump () {
mysqldump --user=$USERNAME --password=$PASSWORD --host=$DBHOST $OPT $1 > $2
return 0
}

# Compression function plus latest copy
SUFFIX=""
compression () {
if [ "$COMP" = "gzip" ]; then
gzip -f "$1"
echo
echo Backup Information for "$1"
gzip -l "$1.gz"
SUFFIX=".gz"
elif [ "$COMP" = "bzip2" ]; then
echo Compression information for "$1.bz2"
bzip2 -f -v $1 2>&1
SUFFIX=".bz2"
else
echo "No compression option set, check advanced settings"
fi
if [ "$LATEST" = "yes" ]; then
cp $1$SUFFIX "$BACKUPDIR/latest/"
fi
return 0
}


# Run command before we begin
if [ "$PREBACKUP" ]
then
echo ======================================================================
echo "Prebackup command output."
echo
eval $PREBACKUP
echo
echo ======================================================================
echo
fi


if [ "$SEPDIR" = "yes" ]; then # Check if CREATE DATABASE should be included in Dump
if [ "$CREATE_DATABASE" = "no" ]; then
OPT="$OPT --no-create-db"
else
OPT="$OPT --databases"
fi
else
OPT="$OPT --databases"
fi

# Hostname for LOG information
if [ "$DBHOST" = "localhost" ]; then
HOST=`hostname`
if [ "$SOCKET" ]; then
OPT="$OPT --socket=$SOCKET"
fi
else
HOST=$DBHOST
fi

# If backing up all DBs on the server
if [ "$DBNAMES" = "all" ]; then
DBNAMES="`mysql --user=$USERNAME --password=$PASSWORD --host=$DBHOST --batch --skip-column-names -e "show databases"| sed 's/ /%/g'`"

# If DBs are excluded
for exclude in $DBEXCLUDE
do
DBNAMES=`echo $DBNAMES | sed "s/\b$exclude\b//g"`
done

MDBNAMES=$DBNAMES
fi

echo ======================================================================
echo AutoMySQLBackup VER $VER
echo http://sourceforge.net/projects/automysqlbackup/
echo
echo Backup of Database Server - $HOST
echo ======================================================================

# Test if separate DB backups are required
if [ "$SEPDIR" = "yes" ]; then
echo Backup Start Time `date`
echo ======================================================================
# Monthly Full Backup of all Databases
if [ $DOM = "01" ]; then
for MDB in $MDBNAMES
do

# Prepare $DB for using
MDB="`echo $MDB | sed 's/%/ /g'`"

if [ ! -e "$BACKUPDIR/monthly/$MDB" ] # Check Monthly DB Directory exists.
then
mkdir -p "$BACKUPDIR/monthly/$MDB"
fi
echo Monthly Backup of $MDB...
dbdump "$MDB" "$BACKUPDIR/monthly/$MDB/${MDB}_$DATE.$M.$MDB.sql"
compression "$BACKUPDIR/monthly/$MDB/${MDB}_$DATE.$M.$MDB.sql"
BACKUPFILES="$BACKUPFILES $BACKUPDIR/monthly/$MDB/${MDB}_$DATE.$M.$MDB.sql$SUFFIX"
echo ----------------------------------------------------------------------
done
fi

for DB in $DBNAMES
do
# Prepare $DB for using
DB="`echo $DB | sed 's/%/ /g'`"

# Create Seperate directory for each DB
if [ ! -e "$BACKUPDIR/daily/$DB" ] # Check Daily DB Directory exists.
then
mkdir -p "$BACKUPDIR/daily/$DB"
fi

if [ ! -e "$BACKUPDIR/weekly/$DB" ] # Check Weekly DB Directory exists.
then
mkdir -p "$BACKUPDIR/weekly/$DB"
fi

# Weekly Backup
if [ $DNOW = $DOWEEKLY ]; then
echo Weekly Backup of Database \( $DB \)
echo Rotating 5 weeks Backups...
if [ "$W" -le 05 ];then
REMW=`expr 48 + $W`
elif [ "$W" -lt 15 ];then
REMW=0`expr $W - 5`
else
REMW=`expr $W - 5`
fi
eval rm -fv "$BACKUPDIR/weekly/$DB/${DB}_week.$REMW.*"
echo
dbdump "$DB" "$BACKUPDIR/weekly/$DB/${DB}_week.$W.$DATE.sql"
compression "$BACKUPDIR/weekly/$DB/${DB}_week.$W.$DATE.sql"
BACKUPFILES="$BACKUPFILES $BACKUPDIR/weekly/$DB/${DB}_week.$W.$DATE.sql$SUFFIX"
echo ----------------------------------------------------------------------

# Daily Backup
else
echo Daily Backup of Database \( $DB \)
echo Rotating last weeks Backup...
eval rm -fv "$BACKUPDIR/daily/$DB/*.$DOW.sql.*"
echo
dbdump "$DB" "$BACKUPDIR/daily/$DB/${DB}_$DATE.$DOW.sql"
compression "$BACKUPDIR/daily/$DB/${DB}_$DATE.$DOW.sql"
BACKUPFILES="$BACKUPFILES $BACKUPDIR/daily/$DB/${DB}_$DATE.$DOW.sql$SUFFIX"
echo ----------------------------------------------------------------------
fi
done
echo Backup End `date`
echo ======================================================================


else # One backup file for all DBs
echo Backup Start `date`
echo ======================================================================
# Monthly Full Backup of all Databases
if [ $DOM = "01" ]; then
echo Monthly full Backup of \( $MDBNAMES \)...
dbdump "$MDBNAMES" "$BACKUPDIR/monthly/$DATE.$M.all-databases.sql"
compression "$BACKUPDIR/monthly/$DATE.$M.all-databases.sql"
BACKUPFILES="$BACKUPFILES $BACKUPDIR/monthly/$DATE.$M.all-databases.sql$SUFFIX"
echo ----------------------------------------------------------------------
fi

# Weekly Backup
if [ $DNOW = $DOWEEKLY ]; then
echo Weekly Backup of Databases \( $DBNAMES \)
echo
echo Rotating 5 weeks Backups...
if [ "$W" -le 05 ];then
REMW=`expr 48 + $W`
elif [ "$W" -lt 15 ];then
REMW=0`expr $W - 5`
else
REMW=`expr $W - 5`
fi
eval rm -fv "$BACKUPDIR/weekly/week.$REMW.*"
echo
dbdump "$DBNAMES" "$BACKUPDIR/weekly/week.$W.$DATE.sql"
compression "$BACKUPDIR/weekly/week.$W.$DATE.sql"
BACKUPFILES="$BACKUPFILES $BACKUPDIR/weekly/week.$W.$DATE.sql$SUFFIX"
echo ----------------------------------------------------------------------

# Daily Backup
else
echo Daily Backup of Databases \( $DBNAMES \)
echo
echo Rotating last weeks Backup...
eval rm -fv "$BACKUPDIR/daily/*.$DOW.sql.*"
echo
dbdump "$DBNAMES" "$BACKUPDIR/daily/$DATE.$DOW.sql"
compression "$BACKUPDIR/daily/$DATE.$DOW.sql"
BACKUPFILES="$BACKUPFILES $BACKUPDIR/daily/$DATE.$DOW.sql$SUFFIX"
echo ----------------------------------------------------------------------
fi
echo Backup End Time `date`
echo ======================================================================
fi
echo Total disk space used for backup storage..
echo Size - Location
echo `du -hs "$BACKUPDIR"`
echo
echo ======================================================================
echo If you find AutoMySQLBackup valuable please make a donation at
echo http://sourceforge.net/project/project_donations.php?group_id=101066
echo ======================================================================

# Run command when we're done
if [ "$POSTBACKUP" ]
then
echo ======================================================================
echo "Postbackup command output."
echo
eval $POSTBACKUP
echo
echo ======================================================================
fi

#Clean up IO redirection
exec 1>&6 6>&- # Restore stdout and close file descriptor #6.
exec 2>&7 7>&- # Restore stderr and close file descriptor #7.

if [ "$MAILCONTENT" = "files" ]
then
if [ -s "$LOGERR" ]
then
# Include error log if is larger than zero.
BACKUPFILES="$BACKUPFILES $LOGERR"
ERRORNOTE="WARNING: Error Reported - "
fi
#Get backup size
ATTSIZE=`du -c $BACKUPFILES | grep "[[:digit:][:space:]]total$" |sed s/\s*total//`
if [ $MAXATTSIZE -ge $ATTSIZE ]
then
BACKUPFILES=`echo "$BACKUPFILES" | sed -e "s# # -a #g"` #enable multiple attachments
mutt -s "$ERRORNOTE MySQL Backup Log and SQL Files for $HOST - $DATE" $BACKUPFILES $MAILADDR < $LOGFILE #send via mutt
else
cat "$LOGFILE" | mail -s "WARNING! - MySQL Backup exceeds set maximum attachment size on $HOST - $DATE" $MAILADDR
fi
elif [ "$MAILCONTENT" = "log" ]
then
cat "$LOGFILE" | mail -s "MySQL Backup Log for $HOST - $DATE" $MAILADDR
if [ -s "$LOGERR" ]
then
cat "$LOGERR" | mail -s "ERRORS REPORTED: MySQL Backup error Log for $HOST - $DATE" $MAILADDR
fi
elif [ "$MAILCONTENT" = "quiet" ]
then
if [ -s "$LOGERR" ]
then
cat "$LOGERR" | mail -s "ERRORS REPORTED: MySQL Backup error Log for $HOST - $DATE" $MAILADDR
cat "$LOGFILE" | mail -s "MySQL Backup Log for $HOST - $DATE" $MAILADDR
fi
else
if [ -s "$LOGERR" ]
then
cat "$LOGFILE"
echo
echo "###### WARNING ######"
echo "Errors reported during AutoMySQLBackup execution.. Backup failed"
echo "Error log below.."
cat "$LOGERR"
else
cat "$LOGFILE"
fi
fi

if [ -s "$LOGERR" ]
then
STATUS=1
else
STATUS=0
fi

# Clean up Logfile
eval rm -f "$LOGFILE"
eval rm -f "$LOGERR"

exit $STATUS

Wednesday, August 8, 2012

Basic Configuration of ASA


Steps for setting up the inside and outside interfaces with their IP addresses



interface ethernet 0/0 as Inside:  10.0.0.1          default security level 100
interface ethernet 0/5 as Outside: 170.100.100.1     default security level 0

ciscoasa> en
Password: (there is no password for first time use)
ciscoasa# configure terminal
ciscoasa(config)# interface ethernet 0/0
ciscoasa(config-if)# ip address 10.0.0.1 255.255.255.0
ciscoasa(config-if)# nameif inside
INFO: Security level for "inside" set to 100 by default.
ciscoasa(config-if)# no shutdown
ciscoasa(config-if)#
ciscoasa(config-if)# interface ethernet 0/5
ciscoasa(config-if)# ip address 170.100.100.1 255.255.255.0
ciscoasa(config-if)# nameif outside
INFO: Security level for "outside" set to 0 by default.
ciscoasa(config-if)# no shutdown

Configure the ASA to accept HTTPS connections from the inside.
Configure from global configuration mode:

ciscoasa(config-if)# exit
ciscoasa(config)# http server enable
ciscoasa(config)# http 10.0.0.2 255.255.255.255 inside
ciscoasa(config)#
ciscoasa(config)# copy run disk0:/.private/startup-config

Source filename [running-config]?

Destination filename [/.private/startup-config]?
Cryptochecksum: a33b008e 92e77294 9d7a6088 27ff113f

1596 bytes copied in 2.420 secs (798 bytes/sec)open(ffsdev/2/write/41) failed
open(ffsdev/2/write/40) failed

ciscoasa(config)# username imran password cisco privilege 15
ciscoasa(config)#
ciscoasa(config)# copy run disk0:/.private/startup-config

Source filename [running-config]?

Destination filename [/.private/startup-config]?

%Warning:There is a file already existing with this name
Do you want to over write? [confirm]
Cryptochecksum: 231499c4 db3e4734 3c37be8e 166f9b83

1660 bytes copied in 2.850 secs (830 bytes/sec)open(ffsdev/2/write/41) failed
open(ffsdev/2/write/40) failed

REMEMBER to turn off your local computer's firewall.

Local PC configuration
IP of loopback interface: 10.0.0.2
Copy the asdm-645-204.bin file to the TFTP server directory.
Install, then start/restart the TFTP server and set it to listen on the loopback interface.

Check connection:
ASA side

ciscoasa# ping 10.0.0.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/4/10 ms
ciscoasa#

Local PC side

Ping from the local PC to the ASA inside interface:


ciscoasa(config)# copy tftp: flash:

Address or name of remote host []?  10.0.0.2

Source filename []?   asdm-645-204.bin

Destination filename  [asdm-645-204.bin]?

Accessing tftp://10.0.0.2/asdm-645-204.bin...!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Writing current ASDM file disk0:/asdm-645-204.bin
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
17010808 bytes copied in 44.550 secs (386609 bytes/sec)
ciscoasa(config)#

Show flash memory to see the downloaded file.

ciscoasa(config)# show flash:
--#--  --length--  -----date/time------  path
    6  4096        Apr 05 2012 11:45:10  .private
    7  0           Apr 05 2012 11:23:19  .private/mode.dat
    8  0           Apr 05 2012 11:46:03  .private/DATAFILE
    9  1660        Apr 05 2012 11:46:03  .private/startup-config
   10  4096        Apr 05 2012 11:46:03  boot
   11  0           Apr 05 2012 11:46:03  boot/grub.conf
   12  17010808    Apr 05 2012 12:41:16  asdm-645-204.bin

255320064 bytes total (212803584 bytes free)

Download the ASDM file from the ASA using a browser, over HTTPS at IP address 10.0.0.1.


Install and run the ASDM, then provide credentials:

IP: 10.0.0.1
Username: imran
Password: cisco


After log-in you can perform configuration using the GUI.

Thursday, April 19, 2012

Installation of GNS3 on Windows

Installation of GNS3 on Windows (7)


It is better if you have already installed a loopback interface on your machine.




VMware Workstation: Download from vmware site.

Loopback interface: Installation of loopback interface on windows 7


GNS3: Download GNS3 v0.8.2 all-in-one; this will install all necessary tools and packages.

Cisco router IOS images: Download IOS-images from this location

Cisco ASA firewall IOS and ASDM: Download ASA_IOS and ASDM-645-204

TFTP server: Download and run the setup to install.

Tuesday, August 31, 2010

Intrusion Detection and Prevention Using OSSEC

What is OSSEC?
According to OSSEC "It is an Open Source Host-based Intrusion Detection System. It performs log analysis, file integrity checking, policy monitoring, rootkit detection, real-time alerting and active response."

Installation on Debian Server
I installed it on a Debian server (kernel 2.6.24-19-server) already running a web service.
Install environment
Make sure you have a compiler (e.g. gcc or cc) and 'make' already installed on your system; otherwise you will get an error message and the installation will abort.

root@www:/usr/local/src/ossec-hids-2.4.1# apt-get install gcc


Download the latest build from the www.ossec.net website.

Extract it into a folder and start the installation:
imran@web:~/ossec-hids-2.4.1$ tar -zxvf ossec-hids-2.4.1.tar.gz
imran@web:~/ossec-hids-2.4.1$ cd ossec-hids-2.4.1/

Run the installation script:

root@web:~/ossec-hids-2.4.1# ./install.sh
** Para instalação em português, escolha [br].
** 要使用中文进行安装, 请选择 [cn].
** Fur eine deutsche Installation wohlen Sie [de].
** Για εγκατάσταση στα Ελληνικά, επιλέξτε [el].
** For installation in English, choose [en].
** Para instalar en Español , eliga [es].
** Pour une installation en français, choisissez [fr]
** Per l'installazione in Italiano, scegli [it].
** 日本語でインストールします.選択して下さい.[jp].
** Voor installatie in het Nederlands, kies [nl].
** Aby instalować w języku Polskim, wybierz [pl].
** Для инструкций по установке на русском ,введите [ru].
** Za instalaciju na srpskom, izaberi [sr].
** Türkçe kurulum için seçin [tr].
(en/br/cn/de/el/es/fr/it/jp/nl/pl/ru/sr/tr) [en]: en

-- Press ENTER to continue or Ctrl-C to abort. --
1- What kind of installation do you want (server, agent, local or help)? local

- Local installation chosen.

2- Setting up the installation environment.

- Choose where to install the OSSEC HIDS [/var/ossec]:
/var/ossec

- Installation will be made at /var/ossec .

3- Configuring the OSSEC HIDS.

3.1- Do you want e-mail notification? (y/n) [y]: y
- What's your e-mail address? imran@pingcom.net

- We found your SMTP server as: ASPMX4.GOOGLEMAIL.COM.
- Do you want to use it? (y/n) [y]: y

--- Using SMTP server: ASPMX4.GOOGLEMAIL.COM.

3.2- Do you want to run the integrity check daemon? (y/n) [y]: y

- Running syscheck (integrity check daemon).

3.3- Do you want to run the rootkit detection engine? (y/n) [y]: y

- Running rootcheck (rootkit detection).

3.4- Active response allows you to execute a specific
command based on the events received. For example,
you can block an IP address or disable access for
a specific user.
More information at:
http://www.ossec.net/en/manual.html#active-response

- Do you want to enable active response? (y/n) [y]: y

- Active response enabled.

- By default, we can enable the host-deny and the
firewall-drop responses. The first one will add
a host to the /etc/hosts.deny and the second one
will block the host on iptables (if linux) or on
ipfilter (if Solaris, FreeBSD or NetBSD).
- They can be used to stop SSHD brute force scans,
portscans and some other forms of attacks. You can
also add them to block on snort events, for example.

- Do you want to enable the firewall-drop response? (y/n) [y]: y

- firewall-drop enabled (local) for levels >= 6

- Default white list for the active response:
- xx.xx.xx.xx
- xx.xx.xx.xx

- Do you want to add more IPs to the white list? (y/n)? [n]: y
- IPs (space separated): xx.xx.xx.xx

3.6- Setting the configuration to analyze the following logs:
-- /var/log/messages
-- /var/log/auth.log
-- /var/log/syslog
-- /var/log/mail.info
-- /var/log/dpkg.log
-- /var/log/apache2/error.log (apache log)
-- /var/log/apache2/access.log (apache log)

- If you want to monitor any other file, just change
the ossec.conf and add a new localfile entry.
Any questions about the configuration can be answered
by visiting us online at http://www.ossec.net .

--- Press ENTER to continue ---

Error
Error Making os_xml
make: *** [all] Error 1

Error 0x5.
Building error. Unable to finish the installation.


Solution for the above error:
root@web:# apt-get install libc6-dev

- System is Debian (Ubuntu or derivative).
- Init script modified to start OSSEC HIDS during boot.

- Configuration finished properly.

--- Press ENTER to finish (maybe more information below). ---


The configuration file is stored at:
root@web:# nano /var/ossec/etc/ossec.conf
It contains the configurations.
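As the installer notes, extra log files can be monitored by adding a localfile entry inside the <ossec_config> block of ossec.conf. A minimal sketch, with an illustrative path:

```xml
<localfile>
  <log_format>syslog</log_format>
  <location>/var/log/myapp.log</location>
</localfile>
```

Restart OSSEC with ossec-control (below) after editing for the change to take effect.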

How to Start

root@web:#/var/ossec/bin/ossec-control start

How to Stop

root@web:#/var/ossec/bin/ossec-control stop


References:

http://www.ossec.net/main/manual/manual-installation
http://newyork.ubuntuforums.org/showthread.php?t=905034

Wednesday, August 25, 2010

Intrusion Detection Service in IPCOP

Intrusion Detection had stopped in my IPCop, version 1.4.1, a while ago. I tried to start all three instances through the GUI but got a "failed to start" message.
I logged in to the console of IPCop.
I checked the existing version of Snort, which was older than the latest.

root@firewall:/etc/snort/rules # snort --version
snort: unrecognized option `--version'

,,_ -*> Snort! <*-
o" )~ Version 2.6.1.5 (Build 59)
'''' By Martin Roesch & The Snort Team: http://www.snort.org/team.html
(C) Copyright 1998-2007 Sourcefire Inc., et al.


And when tried to start the snort using this command

root@firewall:~ # snort -c /etc/snort/snort.conf -l /var/log/snort/


I got an error saying there was a problem at line 38 of the exploit.rules file located in the /etc/snort/rules/ folder.
When I tried to comment out that line, it gave an error on line 39.

Solution
Replace the existing rules folder with a working one.
To do that, I installed the latest Snort on my laptop and checked the version.
imran@imran-laptop~ $ sudo apt-get install snort-mysql
imran@imran-laptop~ $ snort --version


,,_ -*> Snort! <*-
o" )~ Version 2.8.5.2 (Build 121)
'''' By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team
Copyright (C) 1998-2009 Sourcefire, Inc., et al.
Using PCRE version: 7.8 2008-09-05


and copied the rules files over to IPCop:
imran@imran-laptop~ $ scp '-P 22' exploit.rules root@10.10.0.1:/root

Then I made a .tar of the existing rules folder on IPCop:

root@firewall:/etc/snort/rules # tar -cvf rules.tar .

and replaced it with the one copied from my laptop, then changed the ownership to user nobody:nobody:

root@firewall:/etc/snort/rules # chown -R nobody:nobody rules


Now IPCop has the new rules list, although these rules were from the newer Snort 2.8.6.
When I restarted Snort from the console with the above command, this time there was no error and it started straight away.
After that I could start and stop it from the GUI successfully.

Wednesday, May 19, 2010

MyCRM Connector Tool for Google Calendar Error

After installing the "MyCRM Connector Tool", I followed the procedure described in the manual.
The test machine showed a successful result when the Google calendar was configured under My Account,
but the production CRM gave the following error:

"Fatal error: Call to undefined function curl_init() in /home/path/googlecal/MyCurl.php on line 32"

The solution is to install the php5-curl library:
root@server:~# sudo apt-get install curl libcurl3
root@server:~# sudo apt-get install php5-curl
root@server:~# apt-get install php5

I also restarted the MySQL server and apache2 as a precaution; this is not strictly necessary.


Recheck the settings; after entering my Google email address it worked.
I got this message:

****** Get events from meetings
Synced successfully.
****** Get events from calls
Synced successfully.
****** Get events from tasks
Synced successfully.

Friday, February 12, 2010

Daily Backup Using RSYNC

Using these steps your system will back up automatically with rsync.

Step 1: Generate a Public Key using ssh-keygen at Host machine.


root@home:~# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
19:44:5f:1c:92:27:26:25:9b:13:df:dc:89:71:f0:c1 root@home


Step 2: Insert the key to authorized_key at host machine
root@home:~# cd /root/.ssh/
root@home:/root/.ssh# ls
id_rsa id_rsa.pub known_hosts
root@home:/root/.ssh# cp id_rsa.pub authorized_keys
root@home:/root/.ssh# ls
authorized_keys id_rsa id_rsa.pub known_hosts
root@home:/root/.ssh#


Step 3: Copy the ~/.ssh/authorized_keys file to the remote (backup) machine
As the backup machine stores backups of several machines, an authorized_keys file already exists; just copy the whole key string from ~/.ssh/authorized_keys on the host machine and append it to the file on the backup machine.

Step 4: Change permission of ~/.ssh/authorized_keys file, if needed.

#chmod 644 ~/.ssh/authorized_keys
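Steps 2-4 can be rehearsed in a scratch directory; the key below is a fake placeholder and $home stands in for /root:

```shell
# Rehearsal of steps 2-4 in a scratch directory; the key is a placeholder.
home=$(mktemp -d)
mkdir -p "$home/.ssh"
echo "ssh-rsa AAAAB3...placeholder root@home" > "$home/.ssh/id_rsa.pub"
# Append rather than overwrite, so existing keys on the backup machine survive.
cat "$home/.ssh/id_rsa.pub" >> "$home/.ssh/authorized_keys"
chmod 644 "$home/.ssh/authorized_keys"
PERMS=$(stat -c %a "$home/.ssh/authorized_keys")
echo "$PERMS"
```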


Step 5: Create a script, e.g. backup, place it in /etc/cron.daily/, and change its permissions to make it executable.

This will back up the whole machine; you can specify particular files instead of /.

#!/bin/sh
#
# backup
#
DEST=root@backup.yourdomain.com
RSYNC="rsync -aqP --delete -e ssh"

dpkg -l | cut -d' ' -f3 > /etc/deblist

$RSYNC / $DEST:/var/backups/.


(Optional) Step 6: Change the time of cron.daily in the /etc/crontab file
so that your machines start syncing at different times.

/etc/crontab
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
30 4 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

#

Readings
http://www.scrounge.org/linux/rsync.html

Monday, January 11, 2010

Cloning SugarCRM


Step 1: Clone the sugarcrm Directory

There is a script, "copySugarFiles.sh" (see the full script at the bottom).
When running the script, remember you have to provide both paths: the source directory and the clone directory.


root@imran:~# ./sugarclone
Missing First Argument:
Syntax: copySugarFiles.sh /var/www/html/FROM_SUGAR_DIR /var/www/html/TO_SUGAR_DIR
exited with status -1

root@imran:~# ./sugarclone /var/www/sugar /var/www/clone
Compressing /var/www/sugar Sugar and saving to /home/imran/sugarFilesFromBackup201001111322.tgz
Compressing /var/www/clone Sugar and saving to /home/imran/sugarFilesToBackup201001111322.tgz
tar: Cowardly refusing to create an empty archive
Try `tar --help' or `tar --usage' for more information.
Extracting the /var/www/sugar Sugar tgz to /var/www/clone Sugar directory
Script complete.

Step 2: Clone the Database

First:
Create a new database for the clone, e.g. clone.
Export the sugarcrm database using the phpMyAdmin tool, e.g. to sugarcrm.sql.
Import the sugarcrm.sql data into the clone database.

root@imran:/srv/mysql# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 15595
Server version: 5.1.37-1ubuntu5 (Ubuntu)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database clone
-> ;
Query OK, 1 row affected (0.18 sec)
mysql> GRANT ALL ON clone.* TO clone@localhost IDENTIFIED BY "clone";
Query OK, 0 rows affected (1.24 sec)
mysql>

Script
#!/bin/bash
# copySugarFiles.sh

exitcode=0
# insert the path to your production directory here to ensure nobody copies to it by mistake
blockdirprefix="/path/to/production/directory"

if [ -z "$1" ]
then
echo -e "\nMissing First Argument:"
exitcode=-1;
elif [ "$1" = "--help" ] || [ "$1" = "-h" ]
then
exitcode=1;
elif [ -z "$2" ]
then
echo -e "\nMissing Second Argument:"
exitcode=-2;
elif [ "$#" != "2" ] && [ "$#" != "3" ]
then
echo -e "\nInvalid number of arguments:"
exitcode=-3;
elif [ ! -d "$1" ]
then
echo -e "\nThe directory $1 doesn't exist."
exitcode=-7;
elif [ ! -d "$2" ]
then
echo -e "\nThe directory $2 doesn't exist:"
exitcode=-8;
elif [ "$1" = "$2" ]
then
echo -e "\nThe 'from' directory must be different than the 'to' directory:"
exitcode=-4;
# this checks that the blockdirprefix above is not being copied to
elif [ "${2:0:${#blockdirprefix}}" = "${blockdirprefix:0:${#blockdirprefix}}" ] && [ "$3" != "iamsure" ]
then
echo -e "\nCan't copy to production ($blockdirprefix) without third parameter of \"iamsure\""
exitcode=-10;
fi

if [ "$exitcode" -lt "0" ]
then
echo -e "Syntax: copySugarFiles.sh /var/www/html/FROM_SUGAR_DIR /var/www/html/TO_SUGAR_DIR\nexited with status $exitcode\n"
exit $exitcode;
elif [ "$exitcode" -gt "0" ]
then
echo -e "The first parameter should be the sugar directory you are copying from."
echo -e "The second parameter should be the sugar directory you are copying to."
echo -e "\nThis script will skip the following directories and files:"
echo -e "./cache\n./custom\n./config.php\n./config_override.php\n./*.log*"
exit $exitcode;
fi

date=$(date +%Y%m%d%H%M);

# Backing up the from sugar directory and saving to the user's home directory
echo -e "\nCompressing $1 Sugar and saving to $HOME/sugarFilesFromBackup$date.tgz\n"
cd "$1"
filelist=$(find . -maxdepth 1 ! -name "." ! -name "cache" ! -name "custom" ! -name "config.php" ! -name "config_override.php" ! -name "*.log*" -exec echo "{}" \;)
tarcommand="tar cfz $HOME/sugarFilesFromBackup$date.tgz $filelist"
$tarcommand;

# Backing up the from sugar directory and saving to the user's home directory
echo -e "\nCompressing $2 Sugar and saving to $HOME/sugarFilesToBackup$date.tgz\n"
cd "$2"
filelist=$(find . -maxdepth 1 ! -name "." ! -name "cache" ! -name "custom" ! -name "config.php" ! -name "config_override.php" ! -name "*.log*" -exec echo "{}" \;)
tarcommand="tar cfz $HOME/sugarFilesToBackup$date.tgz $filelist"
$tarcommand;

cd "$HOME"
# Extracting the from sugar directory to the to sugar directory
echo -e "\nExtracting the $1 Sugar tgz to $2 Sugar directory\n"
cp $HOME/sugarFilesFromBackup$date.tgz "$2"
cd "$2"
tarcommand="tar xf ./sugarFilesFromBackup$date.tgz"
$tarcommand;
rm "./sugarFilesFromBackup$date.tgz"

echo -e "\nScript complete."

exit 0
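The file-selection rule the script relies on (skip cache, custom, config.php, config_override.php and *.log*) can be checked against a scratch Sugar-like tree:

```shell
# The script's top-level skip rule, run against a scratch tree.
d=$(mktemp -d)
mkdir "$d/cache" "$d/custom" "$d/modules"
touch "$d/config.php" "$d/index.php" "$d/sugarcrm.log"
cd "$d"
FILES=$(find . -maxdepth 1 ! -name "." ! -name "cache" ! -name "custom" \
    ! -name "config.php" ! -name "config_override.php" ! -name "*.log*" | sort)
echo "$FILES"
```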
Readings
Cloning SugarCRM document
Exporting data using PHPMyAdmin

Friday, January 8, 2010

SugarCRM Changing Max file Upload Limit

In Sugar, while uploading a file as an attachment (e.g. Marketing -> Accounts -> youraccount -> Create Note or Attachment),
I tried to upload a 20 MB file; it was not attached and there was no error message either. Here is how to fix this. After making the following changes, the performance of the site also improves.

Step 1: Change in SugarCRM
Go to Admin->System Settings->Advanced
change Maximum upload size e.g 41943040 (40M) default was 3000000 (3M)

Step 2: Change in php.ini file
Log in to the server hosting the site,
open /etc/php5/apache2/php.ini and change the following (maximum limit 40M):

post_max_size = 40M
upload_max_filesize = 40M

max_execution_time = 1000
max_input_time = 60
memory_limit = 128M

imran@venus:/var/www/sugar$ sudo nano /etc/php5/apache2/php.ini

;;;;;;;;;;;;;;;;;;;
; Resource Limits ;
;;;;;;;;;;;;;;;;;;;

max_execution_time = 100 ; Maximum execution time of each script, in seconds, 30s default
max_input_time = 60 ; Maximum amount of time each script may spend parsing request data
;max_input_nesting_level = 64 ; Maximum input variable nesting level
memory_limit = 128M ; Maximum amount of memory a script may consume (16M default)

;;;;;;;;;;;;;;;;;
; Data Handling ;
;;;;;;;;;;;;;;;;;
;

; Maximum size of POST data that PHP will accept, 8M default
post_max_size = 40M


;;;;;;;;;;;;;;;;
; File Uploads ;
;;;;;;;;;;;;;;;;

; Maximum allowed size for uploaded files; changed to 40M (2M default)
upload_max_filesize = 40M


Save the file and exit.
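A quick way to confirm the three directives after editing is to grep for them; here a scratch copy stands in for /etc/php5/apache2/php.ini:

```shell
# Grep check of the directives step 2 changes; the scratch file stands in
# for /etc/php5/apache2/php.ini.
ini=$(mktemp)
cat > "$ini" <<'EOF'
post_max_size = 40M
upload_max_filesize = 40M
memory_limit = 128M
EOF
COUNT=$(grep -Ec '^(post_max_size|upload_max_filesize|memory_limit)' "$ini")
echo "$COUNT directives set"
```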

Step 3: Restart the apache2 web server
imran@venus:/var/www/sugar$ sudo /etc/init.d/apache2 restart

Step 4: Test the upload limit
Go to Marketing -> Accounts -> youraccount -> Create Note or Attachment
and attach a file of e.g. 20 MB; it should be attached now.

Friday, November 20, 2009

NFS on Debian/Ubuntu

Installation of NFS on server

Considering how powerful NFS is and the flexibility it gives you, it is amazingly simple to set up. I expected it to be on a par with setting up Samba, which can be a complete nightmare. Typically when setting up Samba one would use SWAT or another configuration tool. With NFS, setup is as easy as entering the paths you want exported into /etc/exports and making sure the correct packages are installed.

There are two implementations of NFS: one runs in kernel space (nfs-kernel-server), the other in user space (nfs-user-server). The kernel space implementation is faster and more stable, but if something goes wrong it could bring your box down. In reality the kernel space NFS implementation very rarely fails. I have been running it for years (on at least one occasion for 150 days straight) and have had it fail only a couple of times. The times it did fail it simply needed restarting. In fact the only way I have ever managed to get it to make a noise is when I had a box with a network card that was on the way out. The port on the card was bad, which caused it to repeatedly drop and re-acquire the network, sometimes several times a minute. After a few hours of that, NFS would sometimes start to refuse new connections.

As well as the server you will need portmap. Fortunately, if you chose NFS when you first installed the server, you will have all the required packages already installed, configured and running.

One important point to remember when setting up NFS is to make sure that the user id (uid) of the user on the server matches the uid of the user on the local machine. NFS has no way of mapping "fred" on the local machine to "fred" on the server other than by relying on the uids being the same. Typically when you create a user the uid given is just the next one available but you can specify it explicitly.
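A quick uid check on each machine (root is used here only because it exists everywhere; on real hosts look up the actual user, e.g. fred):

```shell
# Look up a user's numeric uid; compare the result across machines.
USER_ID=$(getent passwd root | cut -d: -f3)
echo "$USER_ID"
```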

Once you have made the required entries in /etc/exports you need to tell the NFS server about them. Typically I restart all three required utilities (portmap, nfs-kernel-server and nfs-common) as it is generally the best way to make sure everything is working correctly. See the section below on restarting NFS.

Step 0: Installation of NFS-server and NFS-client
Server:
# apt-get install nfs-kernel-server nfs-common portmap
Client:
# apt-get install nfs-common portmap

Step 1: Export directories on the server
On the server machine, export the directory in the /etc/exports file:
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
/home 192.168.0.0/26(rw,sync)


Step 2: Restarting NFS on server

nfs-server:/samba#/etc/init.d/portmap start
nfs-server:/samba#/etc/init.d/nfs-kernel-server start
nfs-server:/samba#/etc/init.d/nfs-common start


Verify NFS is running:
nfs-server:/samba# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100004 2 udp 878 ypserv
100004 1 udp 878 ypserv
100004 2 tcp 881 ypserv
100004 1 tcp 881 ypserv
100009 1 udp 880 yppasswdd
600100069 1 udp 883 fypxfrd
600100069 1 tcp 885 fypxfrd
100007 2 udp 892 ypbind
100007 1 udp 892 ypbind
100007 2 tcp 895 ypbind
100007 1 tcp 895 ypbind
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100021 1 udp 32868 nlockmgr
100021 3 udp 32868 nlockmgr
100021 4 udp 32868 nlockmgr
100005 1 udp 709 mountd
100005 1 tcp 712 mountd
100005 2 udp 709 mountd
100005 2 tcp 712 mountd
100005 3 udp 709 mountd
100005 3 tcp 712 mountd
100024 1 udp 32869 status
100024 1 tcp 58711 status


Step 3: Mounting NFS drives on the client
Add the location to /etc/fstab with the mount point and options:
<server>:<share>  <mountpoint>  nfs  <options>  0 0

# Mounts from other hosts

nfs-server:/home /home nfs rw,rsize=32768,wsize=32768,hard,intr,async 0 2


You can also mount from the command line:
client:/# mount -t nfs nfs-server:/home /home

Verify the mount:
On the client, check the mount point:

client:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 2.8G 2.1G 578M 79% /
tmpfs 126M 0 126M 0% /lib/init/rw
udev 10M 52K 10M 1% /dev
tmpfs 126M 0 126M 0% /dev/shm
nfs-server:/home 123G 105G 12G 90% /home



References
http://www.crazysquirrel.com/computing/debian/servers/nfs.jspx
http://www.debianhelp.co.uk/nfs.htm