Monday, 31 December 2012

Encrypt and decrypt files using openssl

Here's a safe way to pass sensitive files over email...

To encrypt use:
openssl enc -e -bf-cbc -in <FILE> -out <FILE.ENC>
This will ask you for a password that you'll need later to decrypt the file.

And to decrypt:
openssl enc -d -bf-cbc -in <FILE.ENC> -out <FILE>
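For scripted use you can supply the password non-interactively with -pass. A minimal round trip sketch (the file names and password are illustrative; -aes-256-cbc is used here because Blowfish is a legacy cipher on OpenSSL 3.x, and -pbkdf2 needs OpenSSL 1.1.1 or newer):

```shell
# Create a sample file, encrypt it, then decrypt it again
echo "secret data" > /tmp/plain.txt
openssl enc -e -aes-256-cbc -pbkdf2 -pass pass:mypassword \
    -in /tmp/plain.txt -out /tmp/plain.enc
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:mypassword \
    -in /tmp/plain.enc -out /tmp/roundtrip.txt
diff /tmp/plain.txt /tmp/roundtrip.txt && echo "round trip OK"
```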

Possibly Related Posts

Thursday, 29 November 2012

File rotator script

This is a script that I use to rotate some logs; the comments explain exactly what it does:

#!/bin/bash
# The purpose of this script is to rotate, compress and delete files
# - Files older than ARC_AGE are gzipped and rotated
# - Files bigger than SIZE_LIM are gzipped and rotated
# - Gzipped files older than DEL_AGE are deleted
# Vars (adjust these to your environment)
FILEDIR=/var/log/myapp   # directory to search
ARC_AGE=7                # age in days before a file is archived
DEL_AGE=30               # age in days before an archive is deleted
SIZE_LIM=100M            # size threshold for archiving
DATE=`date +%F"-"%H:%M`
# Diagnostics
echo "-= Rotation starting =-"
echo "    Directory to search: $FILEDIR"
echo "    File age to check for deletion: $DEL_AGE"
echo "    File age to check for archive: $ARC_AGE"
echo "    File size to check for archive: $SIZE_LIM"
echo " "
# Compress all uncompressed files whose last modification occurred more than ARC_AGE days ago
echo "-= Looking for old files =-"
FILES=`find $FILEDIR -type f -mtime +$ARC_AGE -not \( -name '*.gz' \) -print`
echo "Files to be archived:"
echo $FILES
echo " "
for FILE in $FILES; do
    # Compress but keep the original file
    gzip -9 -c "$FILE" > "$FILE".$DATE.gz
    # Check if file is being used
    lsof "$FILE" > /dev/null
    ACTIVE=$?
    # Delete inactive files, truncate if active
    if [ $ACTIVE != 0 ]; then
        # Delete the file
        rm "$FILE"
    else
        # Truncate file to 0
        : > "$FILE"
    fi
done
# Compress all uncompressed files that are bigger than SIZE_LIM
echo "-= Looking for big files =-"
FILES=`find $FILEDIR -type f -size +$SIZE_LIM -not \( -name '*.gz' \) -print`
echo "Files to be archived:"
echo $FILES
echo " "
for FILE in $FILES; do
    # Compress but keep the original file
    gzip -9 -c "$FILE" > "$FILE".$DATE.gz
    # Truncate original file to 0
    : > "$FILE"
done
echo "-= Deleting old archived files =-"
FILES_OLD=`find $FILEDIR -type f -mtime +$DEL_AGE -name '*.gz' -print`
echo "Archived files older than $DEL_AGE days to be deleted:"
echo $FILES_OLD
echo " "
# Delete old archived files
find $FILEDIR -type f -mtime +$DEL_AGE -name '*.gz' -exec rm -f {} \;
echo "-= Rotation completed =-"
echo " "

Possibly Related Posts

Thursday, 15 November 2012

Rename files from upper case filename to lower case

The following one line script will rename every file (in the current folder) to lowercase:
for i in *; do mv "$i" "$(echo "$i" | tr '[:upper:]' '[:lower:]')"; done
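The one-liner breaks on names containing spaces and will happily clobber an existing lower-case twin; a slightly more defensive variant (the demo directory and file names are illustrative):

```shell
mkdir -p /tmp/lcdemo && cd /tmp/lcdemo
touch "My File.TXT" readme
for i in *; do
    lc=$(printf '%s' "$i" | tr '[:upper:]' '[:lower:]')
    # Only rename when the name actually changes;
    # -n refuses to overwrite an existing lower-case file
    if [ "$i" != "$lc" ]; then
        mv -n -- "$i" "$lc"
    fi
done
ls
```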

Possibly Related Posts

Wednesday, 31 October 2012

Ubuntu on a MacBook Pro

These are the steps I followed to get Ubuntu running on a MacBook Pro 9,2.


Note: this procedure requires an .img file that you will need to create from the .iso file you download.

TIP: Drag and Drop a file from Finder to Terminal to 'paste' the full path without typing and risking type errors.
  • Download the desired file
  • Open the Terminal (in /Applications/Utilities/ or query Terminal in Spotlight)
  • Convert the .iso file to .img using the convert option of hdiutil
hdiutil convert -format UDRW -o ~/path/to/target.img ~/path/to/ubuntu.iso
Note: OS X tends to put the .dmg ending on the output file automatically.

Create a bootable Ubuntu install flash drive:
diskutil list
to get the current list of devices

Insert your flash media and run:
diskutil list
again and determine the device node assigned to your flash media (e.g. /dev/disk2)
diskutil unmountDisk /dev/diskN
(replace N with the disk number from the last command; in the previous example, N would be 2)

sudo dd if=/path/to/downloaded.img of=/dev/diskN bs=1m
(replace /path/to/downloaded.img with the path where the image file is located; for example, ./ubuntu.img or ./ubuntu.dmg).
Using /dev/rdisk instead of /dev/disk may be faster.

If you see the error dd: Invalid number '1m', you are using GNU dd. Use the same command but replace bs=1m with bs=1M.

If you see the error dd: /dev/diskN: Resource busy, make sure the disk is not in use. Open Disk Utility and unmount (don't eject) the drive.

Finally run:
diskutil eject /dev/diskN
and remove your flash media when the command completes

Restart your Mac and press alt while the Mac is restarting to choose the USB-Stick

Follow the on screen instructions.


After booting into Ubuntu, the wifi card was not working. To get it working I connected the machine to my router with a network cable and followed these steps:

Download the driver:
And install it:
tar xf broadcom-wl-5.100.138.tar.bz2
sudo apt-get install b43-fwcutter
sudo b43-fwcutter -w "/lib/firmware" broadcom-wl-5.100.138/linux/wl_apsta.o
then add:
to /etc/modules and reboot

Keyboard and mouse:

This is a little extra to get natural scrolling and OS X like key bindings.

Create a Xmodmap conf file
vi ~/.Xmodmap
and paste the following inside:
!!Enable Natural scrolling (vertical and horizontal)
pointer = 1 2 3 5 4 7 6 8 9 10 11 12
!!Swap CMD and CTRL keys
remove control = Control_L
remove mod4 = Super_L Super_R
keysym Control_L = Super_L
keysym Super_L = Control_L
keysym Super_R = Control_L
add control = Control_L Control_R
add mod4 = Super_L Super_R
Under Mac OS X, the combination cmd+space opens Spotlight, to emulate this, install the package compizconfig-settings-manager.
sudo aptitude install compizconfig-settings-manager
Open it using the ccsm command, or search for it in Dash.
Find Ubuntu Unity Plugin->Behavior->Key to show the launcher and change it to <Primary>space, using the Grab key combination button. It may be also shown as <Control><Primary>space.

You can now have a behavior similar to Mac OS X in Ubuntu 12.04. You can change the virtual desktop using cmd+alt+arrow. You can cut, copy, and paste using cmd+x, cmd+c, and cmd+v and summon the dash with cmd+space.

Possibly Related Posts

Tuesday, 16 October 2012

Installing Oracle 11g R2 Express Edition on Ubuntu 64-bit

These are the steps I took to install Oracle 11g R2 Express Edition on an Ubuntu 12.04 LTS (Precise Pangolin) server; they are based on the tutorial found here:

Download the Oracle 11gR2 express edition installer from the link given below:
( You will need to create a free Oracle web account if you don't already have one )

Unzip it :
Install the following packages :
sudo apt-get install alien libaio1 unixodbc vim
The Red Hat based installer of Oracle XE 11gR2 relies on /sbin/chkconfig, which is not used in Ubuntu. The chkconfig package available for the current version of Ubuntu produces errors and may not be safe to use, so you'll need to create a special chkconfig script; below is a simple trick to get around the problem and install Oracle XE successfully:
sudo vi /sbin/chkconfig
(copy and paste the following into the file )
#!/bin/bash
# Oracle 11gR2 XE installer chkconfig hack for Ubuntu
file=/etc/init.d/oracle-xe
if [[ ! `tail -n1 $file | grep INIT` ]]; then
echo >> $file
echo '### BEGIN INIT INFO' >> $file
echo '# Provides: OracleXE' >> $file
echo '# Required-Start: $remote_fs $syslog' >> $file
echo '# Required-Stop: $remote_fs $syslog' >> $file
echo '# Default-Start: 2 3 4 5' >> $file
echo '# Default-Stop: 0 1 6' >> $file
echo '# Short-Description: Oracle 11g Express Edition' >> $file
echo '### END INIT INFO' >> $file
fi
update-rc.d oracle-xe defaults 80 01
Save the above file and provide appropriate execute privilege :
chmod 755 /sbin/chkconfig
Oracle 11gR2 XE requires the following additional kernel parameters to be set:
sudo vi /etc/sysctl.d/60-oracle.conf 
(Enter the following)
# Oracle 11g XE kernel parameters
fs.file-max=6815744
net.ipv4.ip_local_port_range=9000 65000
kernel.sem=250 32000 100 128
kernel.shmmax=536870912
(Save the file)

Note: kernel.shmmax = max possible value , e.g. size of physical RAM ( in bytes e.g. 512MB RAM == 512*1024*1024 == 536870912 bytes )

Verify the change :
sudo cat /etc/sysctl.d/60-oracle.conf
Load new kernel parameters:
sudo service procps start
sudo sysctl -q fs.file-max
-> fs.file-max = 6815744
Increase the system swap space : Analyze your current swap space by following command :
free -m
The minimum swap space requirement of Oracle 11gR2 XE is 2 GB. In case yours is smaller, you can increase it by following the steps in one of my previous posts.

make some more required changes :
sudo ln -s /usr/bin/awk /bin/awk
sudo mkdir -p /var/lock/subsys
sudo touch /var/lock/subsys/listener
Convert the red-hat ( rpm ) package to Ubuntu-package :
sudo alien --scripts -d oracle-xe-11.2.0-1.0.x86_64.rpm
(this may take a long time)

Go to the directory where you created the ubuntu package file in the previous step and enter following commands in terminal :
sudo dpkg --install oracle-xe_11.2.0-2_amd64.deb 
Do the following to avoid getting MEMORY TARGET error ( ORA-00845: MEMORY_TARGET not supported on this system ) :
sudo rm -rf /dev/shm
sudo mkdir /dev/shm
sudo mount -t tmpfs shmfs -o size=2048m /dev/shm
(here size will be the size of your RAM in MBs ).

The reason for doing all this is that on an Ubuntu system /dev/shm is just a link to /run/shm, but Oracle requires a separate /dev/shm mount point.

To make the change permanent do the following :

create a file named S01shm_load in /etc/rc2.d :
sudo vi /etc/rc2.d/S01shm_load
Then copy and paste following lines into the file :
case "$1" in
start) mkdir /var/lock/subsys 2>/dev/null
touch /var/lock/subsys/listener
rm /dev/shm 2>/dev/null
mkdir /dev/shm 2>/dev/null
mount -t tmpfs shmfs -o size=2048m /dev/shm ;;
*) echo error
exit 1 ;;
Save the file and provide execute permissions :
chmod 755 /etc/rc2.d/S01shm_load
This will ensure that every-time you start your system, you get a working Oracle environment.

You can now proceed to the Oracle initialization script
sudo /etc/init.d/oracle-xe configure
Enter the following configuration information:
  • A valid HTTP port for the Oracle Application Express (the default is 8080)
  • A valid port for the Oracle database listener (the default is 1521)
  • A password for the SYS and SYSTEM administrative user accounts
  • Confirm password for SYS and SYSTEM administrative user accounts
  • Whether you want the database to start automatically when the computer starts (next reboot).
Before you start using Oracle 11gR2 XE you have to set-up a few more things :

a) Set up the environment variables by adding the following lines to the bottom of /etc/bash.bashrc :
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
export NLS_LANG=`$ORACLE_HOME/bin/`
export ORACLE_BASE=/u01/app/oracle
b) Reload the file to apply the changes:
source /etc/bash.bashrc
Start the Oracle 11gR2 XE :
sudo service oracle-xe start
The output should be similar to following :
user@machine:~$ sudo service oracle-xe start
Starting Oracle Net Listener.
Starting Oracle Database 11g Express Edition instance.
And you're done :)

Possibly Related Posts

Thursday, 27 September 2012

tail -f with highlighting

If you want to highlight something when doing ‘tail -f’ you can use the following command:
tail -f /var/log/logfile | perl -p -e 's/(something)/\033[7;1m$1\033[0m/g;'
or if your terminal supports colours, e.g. linux terminal, you can use this:
tail -f /var/log/logfile | perl -p -e 's/(something)/\033[46;1m$1\033[0m/g;'
If you need to highlight multiple words you can use something like this:
tail -f /var/log/logfile | perl -p -e 's/\b(something|something_else)\b/\033[46;1m$1\033[0m/g;'
and if you want it to beep on a match use this:
tail -f /var/log/logfile | perl -p -e 's/(something)/\033[46;1m$1\033[0m\007/g;'
If you find that perl is too heavy for this you can use sed:
tail -f /var/log/logfile | sed "s/\(something\)/\x1b[46;1m\1\x1b[0m/g"
Note: GNU sed understands \x1b as the escape character; if your sed doesn't, type a literal escape in its place by pressing "ctrl-v ctrl-[".

For the full list of control characters on Linux you can look at:
man console_codes
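If all you need is highlighting, GNU grep can do it too; the |$ alternative matches every line, so nothing gets filtered out. The printf below just stands in for tail -f /var/log/logfile:

```shell
# Highlight "error" while still printing all lines
printf 'ok line\nerror: disk full\n' | grep --color=always -E 'error|$'
```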

Possibly Related Posts

Tuesday, 11 September 2012

Get list of foreign keys in MySQL

Here's a simple query for displaying all foreign keys and their references in a MySQL DB:
select concat(table_name, '.', column_name) as 'foreign key',
       concat(referenced_table_name, '.', referenced_column_name) as 'references'
from information_schema.key_column_usage
where referenced_table_name is not null;

Possibly Related Posts

Monday, 10 September 2012

Easy visualisation of database schemas

This is easy using SQLFairy, under Ubuntu, as simple as:
sudo apt-get install sqlfairy
Next, dump your database tables, e.g. for MySQL:
mysqldump -u username -p -d mydatabase > mydatabase.sql
Finally, for a PNG image of your schema:
sqlt-graph -f MySQL -o mydatabase.png -t png mydatabase.sql
If your schema lacks explicit foreign keys, try the --natural-join option (see man sqlt-graph and man sqlt-diagram)

Here's an example for a SQLite DB:

Get the schema dump:
echo ".schema" | sqlite3 ~/.liferea_1.4/liferea.db >> liferea.sql
Generate a SVG diagram with:
sqlt-graph -c --natural-join --from=SQLite -t svg -o liferea_schema.svg liferea.sql

Possibly Related Posts

MySQL Export to CSV

If you need the data from a table or a query in a CSV file, so that you can open it in any spreadsheet software like Excel, you can use something like the following:
SELECT id, name, email INTO OUTFILE '/tmp/result.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM users WHERE 1;
Or you can use sed:

mysql -u username -ppassword database -B -e "SELECT * FROM table;" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > filename.csv

username is your mysql username
password is your mysql password
database is your mysql database
table is the table you want to export

The -B option will delimit the data using tabs and each row will appear on a new line.
The -e option denotes the MySQL command to run, in our case the "SELECT" statement.
The "sed" command used here contains three sed scripts:

s/\t/","/g;s/^/"/ - this will search and replace all occurences of 'tabs' and replace them with a ",".

s/$/"/; - this will place a " at the start of the line.

s/\n//g - this will place a " at the end of the line.

You can find the exported CSV file in the current directory. The name of the file is filename.csv.
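You can see the sed quoting in action without a database; the printf below stands in for one tab-separated row of mysql -B output:

```shell
printf '1\tAlice\talice@example.com\n' | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g'
# → "1","Alice","alice@example.com"
```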

However if there are a lot of tables that you need to export, you'll need a script like this:
#!/bin/bash
#### Begin Configuration ####
MYSQL_CMD="mysql -u username -ppassword"   # your mysql credentials
DB="database"                              # the database to export
#### End Configuration ####
TABLES=`$MYSQL_CMD --batch -N -D $DB -e "show tables"`
for TABLE in $TABLES; do
    OUTFILE="$TABLE.csv"
    SQL="SELECT * FROM $TABLE;"
    $MYSQL_CMD --database=$DB --execute="$SQL" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > $OUTFILE
done
Just be sure to change the configuration section to meet your needs.
Name the file something like: and be sure to make it executable. In Linux, do something like:

chmod +x ./
If you want to have all of the exported files in a certain directory, you could either modify the script or just make the directory, "cd" into it, and then run the script. It assumes you want to create the files in the current working directory.
To change that behavior, you could easily modify the "OUTFILE" variable to something like:

Possibly Related Posts

Get Schema from SQLite DB

In SQLite, schemas are stored in the table SQLITE_MASTER. You can easily retrieve it with a command like:
echo ".schema" | sqlite3 ~/.liferea_1.4/liferea.db >> liferea.sql

Possibly Related Posts

Friday, 7 September 2012

How to set the timezone on Ubuntu Server

You can check your current timezone by just running
$ date
Mon Sep 3 18:03:04 WEST 2012
Or checking the timezone file with:
$ cat /etc/timezone
So to change it just run
$ sudo dpkg-reconfigure tzdata
And follow on screen instructions. Easy.
Also be sure to restart cron as it won’t pick up the timezone change and will still be running on UTC.
$ /etc/init.d/cron stop
$ /etc/init.d/cron start
you might also want to install ntp to keep the correct time:
aptitude install ntp

Possibly Related Posts

Tuesday, 4 September 2012

Anti-Spam Email server

In this post I'll show you how to install an anti-spam smart host relay server, based on Ubuntu 12.04 LTS, that will include:

Postfix w/Bayesian Filtering and Anti-Backscatter (Relay Recipients via look-ahead), Apache2, Mysql, Dnsmasq, MailScanner (Spamassassin, ClamAV, Pyzor, Razor, DCC-Client), Baruwa, SPF Checks, FuzzyOcr, Sanesecurity Signatures, PostGrey, KAM, Scamnailer, FireHOL (Iptables Firewall) and Relay Recipients Script.

Continue reading for the instructions.

Possibly Related Posts

Saturday, 1 September 2012

Changing the Alfresco Site Manage Permissions Action

To change the behavior of the Manage Permissions action to be the same as that available from the Share Repository browser, find this action definition:

<!-- Manage permissions (site roles) -->
<action id="document-manage-site-permissions" type="javascript" icon="document-manage-permissions" label="actions.document.manage-permissions">
  <param name="function">onActionManagePermissions</param>
  <permission allow="true">ChangePermissions</permission>
</action>
and replace it with:
<!-- Manage permissions (site roles) -->
<action id="document-manage-site-permissions" type="pagelink" icon="document-manage-permissions" label="actions.document.manage-permissions">
  <param name="page">manage-permissions?nodeRef={node.nodeRef}</param>
  <permission allow="true">ChangePermissions</permission>
</action>

Reload Alfresco and you should now have the more granular permissions from the Alfresco Share repository browser in the Site Document Library browser.

Possibly Related Posts

Hide Alfresco Share Repository Browser button

To hide the repository link from non-admin users

open this file:
find this line within the <header> tags
<item type="link" id="repository" condition="conditionRepositoryRootNode">/repository</item>
Change it to:
<item type="link" id="repository" permission="admin" condition="conditionRepositoryRootNode">/repository</item>
Reload Alfresco. Now only admin users will see that link.
Note that the URL will still be accessible; this will only hide the link button.

Possibly Related Posts

Thursday, 30 August 2012

Show the 20 most CPU/Memory hungry processes

Display the top 20 running processes - sorted by memory usage
ps returns all running processes which are then sorted by the 4th field in numerical order and the top 20 are sent to STDOUT.
ps aux | sort -nk +4 | tail -20
Show the 20 most CPU/Memory hungry processes
This command will show the 20 processes using the most CPU time (hungriest at the bottom).
ps aux | sort -nk +3 | tail -20
Or, run both:
echo "CPU:" && ps aux | sort -nk +3 | tail -20 && echo "Memory:" && ps aux | sort -nk +4 | tail -20

Possibly Related Posts

Monday, 13 August 2012

Deploying Alfresco To Apache Server

This guide will detail a setup to deploy Alfresco Share to a live server using Tomcat and Apache with mod_jk and mod_ssl; it also covers the deployment of Alfresco's SharePoint interface using Apache with mod_proxy.

Setting up Tomcat

First let's set up a default context so there's no prefix path visible in the URL for Alfresco share. The proper way to do this is by creating the file $CATALINA_BASE/conf/[enginename]/[hostname]/ROOT.xml. When Tomcat is located at /opt/alfresco/tomcat/ the full path will be /opt/alfresco/tomcat/conf/Catalina/localhost/ROOT.xml. Create the following XML document inside the file:
<?xml version="1.0" encoding="UTF-8"?>
<Context path="" docBase="share.war" />
The path attribute sets the context used in the URL. Using "" as the path thus means 'use as default'. The docBase attribute sets where the real webapp is. When using Alfresco Share this is share.war by default, it's not necessary to use the absolute path.
Now if you restart Tomcat you should be able to reach Alfresco Share at [host]:[port], without specifying the share prefix.
Next we need to setup a connector for Apache. It's possible this is already done on your Tomcat install by default, if not add the following in the Catalina Service section in $CATALINA_BASE/conf/server.xml:
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
Restart Tomcat again for the connector to be available.

Setting up Apache

If you haven't done already, install mod_jk (libapache2-mod-jk in Ubuntu).
First we define the workers, I used $CATALINA_BASE/conf/ as configuration file:
The name tomcat is arbitrary, so you can replace all occurrences with whatever you like.
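A minimal example of such a workers file, defining a single AJP worker named tomcat (the host and port values are assumptions matching the connector configured above):

```
worker.list=tomcat
worker.tomcat.type=ajp13
worker.tomcat.host=localhost
worker.tomcat.port=8009
```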
Next point Apache to this configuration file. You can either edit your httpd.conf, or if you're using a distribution with a config dir setup (for example, /etc/apache2/conf.d/ in Ubuntu) create a file and add the following content:
JkWorkersFile /opt/alfresco/tomcat/conf/
Remember to use your own $CATALINA_BASE if it's not /opt/alfresco/tomcat/.
Finally, setup a virtualhost that will connect to Tomcat:
<VirtualHost *:80>
    RewriteEngine On
    RewriteCond %{HTTPS} !=on
    RewriteRule ^/(.*)$1 [R=301,L]
</VirtualHost>
<VirtualHost *:443>
    JkMount /* tomcat
    SSLEngine on
    SSLCertificateKeyFile /etc/ssl/private/certificate.pem
    SSLCertificateFile /etc/ssl/private/certificate.crt
    SSLCACertificateFile /etc/ssl/private/authority.crt
    SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown \
        downgrade-1.0 force-response-1.0
</VirtualHost>
This will create a virtualhost at (replace this with your (sub)domain location), will force port 80/http to be redirected to port 443/https (forces the secure connection, the 301 will tell the browser it's a permanent redirect) and will serve all content (/*) using the worker tomcat as specified in our workers file (if you changed the name there, also change it here). Be sure to enter your own certificate information instead of what I entered.
You can extend this configuration file in the same way you'd normally do with Apache, so you can add rewrite rules etc..
Restart Apache for the configuration to have effect.

You now have Alfresco Share on a user friendly location, with a user friendly and secure setup. If Alfresco explorer is deployed on the same Tomcat instance, you can reach it at https://[host]/alfresco. Your other webapps should also still be reachable at their context path.

If you want to do the same with Alfresco with the SharePoint Protocol you'll have to set up another vhost in Apache, in this case we will use mod_proxy, like this:
<VirtualHost *:80>
    RewriteEngine On
    RewriteCond %{HTTPS} !=on
    RewriteRule ^/(.*)$1 [R=301,L]
</VirtualHost>
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateKeyFile /etc/apache2/ssl/sharepoint.key
    SSLCertificateFile /etc/apache2/ssl/sharepoint.crt
    SSLCACertificateFile /etc/apache2/ssl/sharepoint.crt
    SSLProxyEngine On
    ProxyPass / http://localhost:7070/
    ProxyPassReverse / http://localhost:7070/
    ProxyPass /alfresco/ http://localhost:7070/alfresco/
    ProxyPassReverse /alfresco/ http://localhost:7070/alfresco/
    ProxyPass /share/ http://localhost:7070/share/
    ProxyPassReverse /share/ http://localhost:7070/share/
    ProxyPass /_vti_bin/ http://localhost:7070/_vti_bin/
    ProxyPassReverse /_vti_bin/ http://localhost:7070/_vti_bin/
    ProxyPass /_vti_inf.html http://localhost:7070/_vti_inf.html
    ProxyPassReverse /_vti_inf.html http://localhost:7070/_vti_inf.html
    ProxyPass /_vti_history/ http://localhost:7070/_vti_history/
    ProxyPassReverse /_vti_history/ http://localhost:7070/_vti_history/
    #RewriteCond %{SERVER_PORT} !443
    #RewriteRule ^(.*)$$1 [R,L]
    SetEnvIf User-Agent ".*MSIE.*" \
        nokeepalive ssl-unclean-shutdown \
        downgrade-1.0 force-response-1.0
</VirtualHost>
Finally, on your Alfresco's file you'll have to set the following variables:
so that your Edit Online links are generated correctly.

Possibly Related Posts

Tuesday, 31 July 2012

View process tree

One way to get the current process tree is to use the PS command, like this:
ps faux
Another way is to use the command pstree which will give you a nicer output, like this:
pstree -l -a
the -l option enables "long lines" (by default lines are truncated) and the -a option makes pstree show the command line arguments of each process. There are other options you can use, like -p, which displays the PID of each process.

If you want to see the tree of a particular process you can pass the process PID to pstree:
pstree -l -a 5567
If you don't know the PID of the process you want you can use the following method:
pstree -l -a $(pidof cron)
This will display cron and all of its children.

You may also see the process tree of a particular user:
pstree -l -a root

Possibly Related Posts

Can't start listener for XE :permission denied

I just reinstalled Oracle XE for Debian / Ubuntu and when I was about to start the listener, I got an error message about missing permissions.

To find the cause I traced the listener startup:
cd /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/bin
strace ./lsnrctl start
I found that it was trying to access /var/tmp/.oracle, but this directory was owned by root, so:
chown -R oracle:dba /var/tmp/.oracle
chown -R oracle:dba /var/run/.oracle

And now it works correctly.

Possibly Related Posts

Thursday, 28 June 2012

Get list of last modified files

This will output a list of the files, from the current directory and sub-directories, that were modified less than a minute ago.
for file in $(find . -mmin -1 -print); do echo "${file} - $(stat -c %y ${file})"; done
if you don't care about the modification time, this will output just the file names:
find . -mmin -1 -print
will suffice.

If you're interested in the files that were modified in the last 24 hours, you just have to replace -mmin with -mtime:
find . -mtime -1 -print
And if you don't care about files under sub-folders:
ls -lhtr
might be faster
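With GNU find you can also skip the stat loop entirely; -printf emits the name and timestamp in one pass (the demo directory is illustrative):

```shell
mkdir -p /tmp/recent && touch /tmp/recent/a.txt
# %p = path, %T... = last modification time
find /tmp/recent -mmin -1 -printf '%p - %TY-%Tm-%Td %TH:%TM\n'
```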

Possibly Related Posts

Wednesday, 27 June 2012

Prevent an already running task from stopping after logout

First you need to send the task to the background; you can do this by pressing:
ctrl+z
and then typing:
bg
now you can list your background running tasks by issuing the command:
jobs
To prevent the jobs from stopping when you logout use the command:
disown -a
and all your background jobs will be detached from your shell
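For commands you haven't started yet, nohup achieves the same result from the outset (sleep 300 is just a stand-in for a long-running job):

```shell
# Start the job immune to hangups, detached from the terminal
nohup sleep 300 >/dev/null 2>&1 &
pid=$!
echo "detached job running as PID $pid"
```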

Possibly Related Posts

Tuesday, 26 June 2012

Bash script lock file

Here's the lock file mechanism that I use for some of my bash scripts.
I check if the lock file is older than a reasonable time for the script to complete, just in case the script or the machine running it crashed mid-execution and a stale lock file would block the process forever...
#!/bin/bash
LOCKFILE=/tmp/`basename $0`.lock
minutes=1440
#Check if the lockfile exists and is older than one day (1440 minutes)
if [ -f $LOCKFILE ]; then
    echo "Lockfile Exists"
    filestr=`find $LOCKFILE -mmin +$minutes -print`
    if [ "$filestr" = "" ]; then
        echo "Lockfile is not older than $minutes minutes, exiting!"
        exit 1
    else
        echo "Lockfile is older than $minutes minutes, ignoring it and proceeding with normal execution!"
        rm $LOCKFILE
    fi
fi
touch $LOCKFILE
##Do your stuff here

rm $LOCKFILE
exit 0
Another approach is to store the PID of the current process in the lock file and check if the process is still running:
#!/bin/bash
LOCKFILE=/tmp/`basename $0`.lock
if [ -f $LOCKFILE ]; then
    echo "Lockfile Exists"
    #check if process is running
    MYPID=`head -n 1 "${LOCKFILE}"`
    TEST_RUNNING=`ps -p ${MYPID} | grep ${MYPID}`
    if [ -z "${TEST_RUNNING}" ]; then
        echo "The process is not running, resuming normal operation!"
        rm $LOCKFILE
    else
        echo "`basename $0` is already running [${MYPID}]"
        exit 1
    fi
fi
echo $$ > "${LOCKFILE}"
##Do your stuff here

rm $LOCKFILE
exit 0
The first approach permits parallel execution of your scripts but gives the first instance a head start of one day (or whatever time you define in the $minutes variable), whilst the second method only allows another instance of the script to start once the previous one has terminated.
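On Linux, the flock utility from util-linux implements the same mutual exclusion atomically, with no hand-rolled staleness checks; the kernel releases the lock when the process exits, even on a crash (the lock path is illustrative):

```shell
LOCK=/tmp/myscript.lock
# Hold an exclusive lock on fd 9 for the lifetime of this shell
exec 9>"$LOCK"
if ! flock -n 9; then
    echo "already running"
    exit 1
fi
echo "got lock"
##Do your stuff here
```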

Possibly Related Posts

Thursday, 21 June 2012

List running services

To list all services and their status you can use:
service --status-all
this command runs all init scripts, in alphabetical order, with the status command. This only calls status for sysvinit jobs; upstart jobs can be queried in a similar manner with:
initctl list
In Redhat you can use:
chkconfig --list
You can also install chkconfig on Ubuntu with:
sudo apt-get install chkconfig
The command:
sudo netstat -tulpn
might also be useful, it lists open Internet or UNIX domain sockets.

Possibly Related Posts

Tuesday, 19 June 2012

Watermark script

I was asked to create a script to scale down, correct the rotation of, and add a watermark to a bunch of photos; here's the result.

This script will shrink the images by 25%, then check the rotation based on the exif information and, finally, apply a text watermark; a logo image can also be used, check the commented lines.
if [ -z "$1" ]
#Address of the watermark file\r\n
# Check if the directory "watermarked" exists or create it.\r\n
if [ ! -e "${location}/watermarked" ]
    mkdir ${location}/watermarked
echo "Applying watermark, resize by 25% and rotate by exif info..."
#loop inside all the images in folder\r\n
for image in $location/*.jpg $location/*.JPG $location/*.jpeg $location/*.JPEG $location/*.png $location/*.PNG
    if [ ! -e "$image" ] # Check if file exists.\r\n
    newImage=${location}/watermarked/$(basename "$image")
    #Scale image by 25%
    convert "${image}" -resize 25% "${newImage}"
    #Retrieve size of the image and divide the lenght by 2\r\n
    size=`identify -format %[fx:w/76] $newImage`
    #Correcting image rotation
    exiftran -a -i "${newImage}"
    #Apply the watermark and create a new image in the "watermarked" subfolder\r\n
    ##Using an image overlay
    #composite -dissolve 20% -gravity southeast -background none \( $WATERMARK -geometry ${size} \) ${image} "${newImage}"
    ##Using Draw text
    #convert "${newImage}" -font Sketch-Block-Bold -pointsize ${size} -draw "gravity southeast fill white text 0,12 '' fill black text 1,11 ''" "${newImage}"
    ##Using annotations
    convert "${newImage}" -pointsize ${size} -font Sketch-Block-Bold -fill rgba\(255,255,255,0.3\) -gravity southeast -annotate 270x270+7+251 '' "${newImage}"
    convert "${newImage}" -pointsize ${size} -font Sketch-Block-Bold -fill rgba\(1,1,1,0.3\) -gravity southeast -annotate 270x270+8+250 '' "${newImage}"
echo "Done."
#If you have installed zenity, a message will popup when the process is complete\r\n
#zenity --info --title "Watermarker!" --text "Process Complete!"

Possibly Related Posts

Tuesday, 12 June 2012

How to reset folder permissions to their default in Ubuntu/Debian

If by mistake you've run something like:
sudo chmod 777 / -R
or similar and broken your permissions, it is possible to come back from such a messy situation without reinstalling the system.

One way is to install another machine or VM with the same version of the OS and on that machine run these two commands:
find / -exec stat --format "chmod %a %n" {} \; > /tmp/
find / -exec stat --format 'chown %U:%G %n' {} \; >> /tmp/
Or this one that combines both:
/usr/bin/find / -exec /usr/bin/stat --format="[ ! -L {} ] && /bin/chmod %a %n" {} \; -exec /usr/bin/stat --format="/bin/chown -h %U:%G %n" {} \; > /tmp/
then, copy the /tmp/ file to the machine with broken permissions:
scp /tmp/ user@ip_address:/tmp/
and execute it from there.

Another way is to use the info from the deb packages and a script, but for that you'll need to have the deb packages on your machine; usually they can be found in /var/cache/apt/archives/. This way you don't need a second machine.

The script:
#!/bin/bash
# Restores file permissions for all files on a debian system for which .deb
# packages exist.
# Author: Larry Kagan <me at larrykagan dot com>
# Since 2007-02-20
ARCHIVE_DIR=/var/cache/apt/archives/
cd /

function changePerms()
{
    # Convert the symbolic permissions printed by `dpkg -c` (e.g. -rw-r--r--) to octal
    PERMS=`echo $1 | sed -e 's/--x/1/g' -e 's/-w-/2/g' -e 's/-wx/3/g' -e 's/r--/4/g' -e 's/r-x/5/g' -e 's/rw-/6/g' -e 's/rwx/7/g' -e 's/---/0/g'`
    PERMS=`echo ${PERMS:1}`
    # Convert owner/group to the owner.group form expected by chown
    OWN=`echo $2 | /usr/bin/tr '/' '.'`
    FILE=$3
    result=`chmod $PERMS "$FILE" 2>&1`
    if [ $? -ne 0 ]; then
        echo -e $result
    fi
    result=`chown $OWN "$FILE" 2>&1`
    if [ $? -ne 0 ]; then
        echo -e $result
    fi
}

for PACKAGE in `ls $ARCHIVE_DIR | grep '\.deb$'`; do
    if [ -d $PACKAGE ]; then
        continue
    fi
    echo -e "Getting information for $PACKAGE\n"
    FILES=`/usr/bin/dpkg -c "${ARCHIVE_DIR}${PACKAGE}"`
    #FILE_DETAILS=`echo "$FILES" | awk '{print $1"\t"$2"\t"$6}'`
    echo "$FILES" | awk '{print $1"\t"$2"\t"$6}' | while read line; do
        changePerms $line
    done
done
If that doesn't work you can try to reinstall every installed package with this script:
for pkg in `dpkg --get-selections | egrep -v deinstall | awk '{print $1}' | egrep -v '(dpkg|apt|mysql|mythtv)'` ; do apt-get -y install --reinstall $pkg ; done
Or with:
dpkg --get-selections \* | awk '{print $1}' | xargs -r -l1 aptitude reinstall
Which does the same.

Possibly Related Posts

Monday, 11 June 2012

Check which files a process has open

This script will output a list of the files that are open by a given process:

# Process name to look for, taken as the first argument
PROCESS=$1
log_found=`ps faux | grep -v grep | grep "$PROCESS" | awk '{print $2}'`
if [ "$log_found" == "" ]; then
    echo "No process found"
else
    echo "Open files:"
    for PID in $log_found; do
        #ls -l /proc/$PID/fd/ | awk '{print $NF;}'
        ls -l /proc/$PID/fd/
    done
fi
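To sanity-check the /proc approach without grepping ps output at all, you can list the descriptors of the current shell directly (a sketch; /proc is Linux-specific):

```shell
# Every process exposes its open file descriptors under /proc/<pid>/fd;
# $$ is the PID of the current shell, so this counts the shell's own files.
FD_COUNT=$(ls /proc/$$/fd | wc -l)
echo "shell $$ has $FD_COUNT open descriptors"
```

Any running process has at least its standard streams open, so the count should never be zero.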

Possibly Related Posts

Tuesday, 5 June 2012

Monitor a process's memory and CPU usage over time

To do this I use the following command:
watch -d -n 1 'ps -o "user pid cmd pcpu pmem rss" $(pgrep apache)'
You can replace "apache" with the executable name of the process you want to monitor.
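If you want a log instead of an interactive view, the same ps columns can be sampled in a loop; this is a hypothetical sketch where "sleep 30" stands in for the process you actually care about:

```shell
#!/bin/sh
# Sample CPU%, MEM% and RSS of one process once per second into a log file.
sleep 30 & PID=$!
LOG=$(mktemp)
for i in 1 2 3; do
    # a "=" after each column name suppresses the header line
    ps -o pcpu=,pmem=,rss= -p "$PID" >> "$LOG"
    sleep 1
done
kill "$PID" 2>/dev/null
SAMPLES=$(wc -l < "$LOG" | tr -d ' ')
echo "collected $SAMPLES samples in $LOG"
```

Each line of the log is one timestamped-by-position sample, which is easier to graph later than watch's screen refreshes.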

Possibly Related Posts

Monday, 4 June 2012

locale: Cannot Set LC_ALL to default locale: No such file or directory

To solve this, first try using the command:
sudo locale-gen
If this does not work, check with locale -a which locales you actually have on your system and make sure you have the locale in UTF-8 encoding for every language on your system, something like this:
$ locale -a
And use the following command to generate it:
localedef -v -c -i en_US -f UTF-8 en_US.UTF-8
(It's case sensitive as far as I remember, you actually have to use the resulting locale string literally.)

If you continue to get error messages and you are accessing a remote server, check if the default locale setting on your machine is supported by the remote box.

You can check the default locale setting with:
cat /etc/default/locale
In my case the default locale on my laptop was en_US.UTF-8, but the server was using en_GB.UTF-8 only. I solved this by adding en_US.UTF-8 to /etc/default/locale (via "dpkg-reconfigure locales").

Possibly Related Posts

Tuesday, 29 May 2012

List users with running processes

Show the unique list of users running processes on the system
ps haexo user | sort -u
Show the unique list of users running processes on the system, prefixed by number of processes for that user
ps haexo user | sort | uniq -c
Same as above, but sorted by the number of processes
ps haexo user | sort | uniq -c | sort -nr
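The counting pipeline is easier to see on canned input; here "root" owns three processes and "www-data" one (made-up usernames, same sort | uniq -c | sort -nr idea as above):

```shell
# Apply the count-and-rank pipeline to a fixed list of process owners.
TOP=$(printf 'root\nwww-data\nroot\nroot\n' | sort | uniq -c | sort -nr | head -1)
echo "$TOP"
```

The first sort groups identical names so uniq -c can count them; the second, numeric-reverse sort puts the busiest user on top.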

Possibly Related Posts

Tuesday, 22 May 2012

Export multiple schemas from Oracle

For the examples to work we must first create a directory object you can access. The directory object is only a pointer to a physical directory, creating it does not actually create the physical directory on the file system.
sqlplus / AS SYSDBA
CREATE OR REPLACE DIRECTORY DUMP_DIR AS '/home/oracle/dumpdir/';
You can use expdp like this
expdp "'/ as sysdba'" dumpfile=TEST.dmp directory=DUMP_DIR logfile=TEST.log schemas=test1,test2,test3,test4
But if you want one separate file for each export, you can use a shell script like this:
#!/bin/bash
export_schema=$1
expdp "'/ as sysdba'" dumpfile=${export_schema}.dmp directory=DUMP_DIR logfile=${export_schema}.log schemas=${export_schema}
# end of script
Now run the script passing the schema name as an argument, e.g. TEST1 or TEST2.
Or if you prefer a one line script:
for export_schema in TEST1 TEST2 TEST3; do expdp "'/ as sysdba'" dumpfile=${export_schema}.dmp directory=DUMP_DIR logfile=${export_schema}.log schemas=${export_schema}; done;
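Since expdp needs a live Oracle instance, here is a dry-run version of the same loop that only prints the commands it would run (the schema names TEST1/TEST2 are illustrative):

```shell
# Build one expdp command line per schema; remove the echo to execute them.
CMDS=$(for export_schema in TEST1 TEST2; do
    echo "expdp dumpfile=${export_schema}.dmp directory=DUMP_DIR logfile=${export_schema}.log schemas=${export_schema}"
done)
echo "$CMDS"
```

Printing the commands first is a cheap way to confirm the per-schema file names before letting the loop run for real.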

Possibly Related Posts

Friday, 11 May 2012

Map Serial Device to a telnet port

You can achieve this using ser2net.

The ser2net program comes up normally as a daemon, opens the TCP ports specified in the configuration file, and waits for connections. Once a connection occurs, the program attempts to set up the connection and open the serial port. If another user is already using the connection or serial port, the connection is refused with an error message.

Install ser2net:
sudo apt-get install ser2net
now configure it
sudo vi /etc/ser2net.conf
The configuration file already comes with some examples; you just have to modify them to suit your needs. This file consists of one or more entries with the following format:
<TCP port>:<state>:<timeout>:<device>:<options>
BANNER:<banner name>:<banner text>
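As a sketch of what such an entry looks like (values here are illustrative, in the style of the examples shipped in the default config), this maps TCP port 2000 to the first serial port with telnet handling, a 600-second idle timeout, and 9600 baud 8N1:

```
2000:telnet:600:/dev/ttyS0:9600 8DATABITS NONE 1STOPBIT banner
```

With an entry like this active, connecting with telnet to port 2000 of the host should attach you to /dev/ttyS0.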

After modifying the configuration file you must restart the service:
/etc/init.d/ser2net restart

Possibly Related Posts

Tuesday, 8 May 2012

How to open winmail.dat files on Linux

The winmail.dat file is a container file format used by Microsoft Outlook to send attachments in richtext formatted emails. To open winmail.dat on Linux, use the tnef utility.

sudo apt-get install tnef
Open a shell window, navigate to the directory where the winmail.dat file is saved, then execute the command:
tnef --save-body -f winmail.dat
to extract all files that are stored in the winmail.dat into the current directory.

For more information use
man tnef

Possibly Related Posts

Friday, 20 April 2012

Fix alfresco share online preview

Here is what worked for me:

1. Install swftools
sudo apt-get install swftools
2. locate pdf2swf (usually in /usr/local/bin/pdf2swf)
which pdf2swf
3. open your /opt/alfresco/tomcat/shared/classes/
vi /opt/alfresco/tomcat/shared/classes/
edit your default setting to something like this:
Save and then restart Alfresco.

Possibly Related Posts

Friday, 13 April 2012

Migrate/Convert VMs from Xen to VMWare

To migrate my Windows VMs I just uninstalled the Xen Tools from them and then used the VMWare Converter to migrate them as if they were physical machines.
However, the VMWare Converter didn't work so well with my Linux VMs and the converted VMs wouldn't even boot...

I've tried to export the VMs as OVF appliances from XenCenter but vSphere wasn't able to import them (although it works in the opposite direction)...

So, in order to move my Ubuntu VMs from XenServer to VMWare, I first installed an Ubuntu VM on VMWare with nothing but the base installation to be used as a template; then, for each VM on XenServer, I cloned this base VMWare VM and synced both using the following procedure:

Logged in as root on the source VM (on XenServer)

Uninstall Xen Tools
aptitude purge xe-guest-utilities
Generate a list of the installed packages
dpkg --get-selections > package_list
Copy the list to the destination VM (on VMWare)
scp package_list root@
Install every package from that list on the destination VM
ssh root@ "cat /root/package_list | sudo dpkg --set-selections && sudo apt-get dselect-upgrade"
Copy the users and groups files to the destination VM first to prevent errors during the sync
scp /etc/passwd* /etc/group* /etc/shadow* root@
Clear the network card name mapping by editing the file:
vi /etc/udev/rules.d/70-persistent-net.rules
and removing every network card entry (if any)

Copy everything from the source VM to the destination VM using rsync
rsync -avzlpEXogt -e ssh --exclude 'fstab' /opt /var /etc /usr /root /home root@
Reboot the destination VM:
ssh root@ "reboot"
Stop the source VM:
And that's what worked for me.

Note that this should work to migrate any Ubuntu server from any hypervisor or physical server to another...

Possibly Related Posts

Thursday, 12 April 2012

Install Liferay Portal on Ubuntu

Install jdk
aptitude install unzip openjdk-6-jdk default-jdk default-jre
vi /etc/bash.bashrc
and add:
JAVA_HOME=/usr/lib/jvm/default-java export JAVA_HOME
LIFERAY_HOME=/usr/liferay/liferay-portal-6.1.0-ce-ga1/tomcat-7.0.23 export LIFERAY_HOME
create folder:
mkdir -p /usr/liferay
download liferay and extract it:



mv liferay-portal-6.1.0-ce-ga1 /usr/liferay/
Setup the DB:
aptitude install mysql-server
mysql -u root -p
Create a database:
For this tutorial I will be using the MySQL root account.

Create the Portal-Ext.Properties File:
cd $LIFERAY_HOME/webapps/ROOT/WEB-INF/classes
Insert the following:
Change the username and password as desired.
Run Liferay:
The following command starts Liferay; initial startup may take some time (10 to 15 minutes depending on hardware) as the database is created, etc. Please be patient.
To access Liferay navigate to http://<Liferay Server IP ADDRESS>:8080

Possibly Related Posts

Wednesday, 11 April 2012

Citrix Xenserver Unable to export / import OVF

I was trying to export a VM to an OVF package and, before the process even started, I got an error message saying that the export had failed. I looked into XenCenter's logs and found this:
citrix xenserver system.exception failed to export system.xml.xmlexception root element is missing
After banging my head for a while I found the cause of the problem. If, in XenCenter, you go to the "View" menu and check "Show hidden objects", you should see some grayed-out templates named something like:
XenServer Transfer VM 5.6.100-46655p (hidden)
Where 5.6.100 is the XenServer version and 46655p is the build number. If these templates don't exist or don't match your XenServer's version or build number you must create a new one.
You must delete all transfer VM templates that don't match your Xenserver's version or build, then go to your pool master's console and run this command:
Wait a few seconds for it to generate the template and after that you should be able to import and export OVF packages.

Possibly Related Posts

Tuesday, 10 April 2012

Bash script to sort files into folders by filetype

This script will list the files in the current directory, create (if needed) a folder for each type of file and them move the files into their respective folders:
file -N --mime-type -F"-&-" * | grep -v $0 | awk -F"-&-" 'BEGIN{q="\047"}
{
    #gsub("/","_",$2); # uncomment to make folders like "image_jpeg" instead of "image/jpeg"
    sub("^ +","",$2)
    if (!($2 in dir)) {
        cmd="mkdir -p "$2
        print cmd
        #system(cmd) #uncomment to use
        dir[$2]
    }
    files[$1]=$2
}
END{
    for(f in files){
        cmd="mv "q f q" "q files[f]"/"f q
        print cmd
        #system(cmd) #uncomment to use
    }
}'

Possibly Related Posts

Renaming multiple files

If you need to rename a large number of files following a certain pattern, you will find the 'rename' command very useful; all you need to know is the basics of regular expressions to define how the renaming should happen.

For example, if you want to add a '.old' to every file in your current directory, this will do it:
rename 's/$/.old/' *
Or if you want to turn every filename lowercase:
rename 'tr/A-Z/a-z/' *
To remove all double characters you can use:
rename 'tr/a-zA-Z//s' *
Or say you have many JPEG files that look like "dsc0000154.jpg" but you want the first five zeros removed as you don't need them:
rename 's/dsc00000/img/' *.jpg
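If rename(1) isn't available, the lowercase example can be sketched in plain shell; this toy version works in a scratch directory so nothing real is touched:

```shell
# Lowercase every filename in a scratch dir with tr, mimicking
# rename 'tr/A-Z/a-z/' *
DIR=$(mktemp -d)
touch "$DIR/README.TXT" "$DIR/Photo.JPG"
for f in "$DIR"/*; do
    base=$(basename "$f")
    lower=$(echo "$base" | tr 'A-Z' 'a-z')
    if [ "$base" != "$lower" ]; then
        mv "$f" "$DIR/$lower"
    fi
done
ls "$DIR"
```

The equality check avoids mv complaining about renaming a file onto itself when the name is already lowercase.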
You can use any Perl operator as an argument, read the documentation here:

Possibly Related Posts

Tuesday, 3 April 2012

CentOS / RedHat Network config example

To configure eth0 you must edit it's configuration file:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
Static configuration example:
To use DHCP:

Possibly Related Posts

Delete all files in a directory except....

You have multiple options to achieve this. If you want to delete all the files in a directory except the ones that start with the letter "a", the simplest option is to use:
rm [!a]*
But if you want to delete everything except the files that contain "to_keep" in their names you can use grep's inverse matching capability like this:
rm $(ls * | grep -v to_keep)
If what you're looking for is to delete every file except a particular one named "my_file" the previous option might not work for you because it won't delete any file that contains "my_file" as part of their filenames. The following will ensure that this doesn't happen:
rm $(ls * | grep -v '^my_file$')
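The exact-match case is easy to verify in a throwaway directory before running it anywhere important:

```shell
# Reproduce the "keep only my_file" example on scratch files.
DIR=$(mktemp -d)
cd "$DIR"
touch my_file my_file.bak notes.txt
# grep -v '^my_file$' keeps only the exact name out of the deletion list
rm $(ls | grep -v '^my_file$')
LEFT=$(ls)
echo "remaining: $LEFT"
```

Note that my_file.bak is deleted even though it contains "my_file", which is precisely the difference the anchored pattern makes.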

Possibly Related Posts

Saturday, 31 March 2012

Check file system usage using command line

If you want to check your remaining disk space you can use:
df -h
And if you want to find the files bigger than a given size, you can use this command:
find </path/to/directory/> -type f -size +<size-in-kb>k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
All you need is to specify the path and the size in KB (50000 for 50 MB, for example).
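The -size test is easy to check on scratch files of known size; only the 50 KB file should match a +10k filter:

```shell
# Create one large and one empty file, then find files over 10 KB.
DIR=$(mktemp -d)
dd if=/dev/zero of="$DIR/large.log" bs=1024 count=50 2>/dev/null
touch "$DIR/tiny.log"
FOUND=$(find "$DIR" -type f -size +10k)
echo "$FOUND"
```

The "+" prefix means "strictly bigger than", so a file of exactly the given size would not match.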

You can also check the top 10 biggest files in a given directory with:
du -sk </path/to/directory/>* | sort -r -n | head -10
Or with a more readable output:
du -sh $(du -sk ./* | sort -r -n | head -10 | cut -d / -f 2)
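The du ranking can likewise be demonstrated on files of known size (names are illustrative):

```shell
# Rank scratch files by size with du, as the top-10 command above does.
DIR=$(mktemp -d)
dd if=/dev/zero of="$DIR/big" bs=1024 count=64 2>/dev/null
dd if=/dev/zero of="$DIR/small" bs=1024 count=4 2>/dev/null
BIGGEST=$(du -sk "$DIR"/* | sort -rn | head -1 | awk '{print $2}')
echo "biggest: $BIGGEST"
```

du -sk prints "size-in-KB<tab>path", so a numeric reverse sort on the first column puts the largest entry first.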

Possibly Related Posts

Thursday, 29 March 2012

Baruwa - Quarantine Release and Preview not Working

If you can't preview or release messages from the Baruwa GUI, and always get an error message saying "Failed: Message not found in the quarantine" in the logs, you might have one of the following problems:

Bad hostname:

Check the hostname listed in "Received by: ", which is the hostname field in the messages table in the DB, and see if it corresponds to the host name where the message should be stored; also make sure you can resolve that host name.

The "Received by: " hostname comes from MailScanner, which gets the server's hostname at startup, so if you change your machine's hostname you'll have to restart MailScanner for it to take effect.

MailScanner Spool folders permissions:

Check the users you are using for the quarantine folder in your MailScanner.conf.
Quarantine user should not be root.
Quarantine User = postfix
Quarantine Group = celeryd
Quarantine Permissions = 0660
Quarantine Whole Message = yes
Quarantine Whole Messages As Queue Files = no
And check the permissions of your spool folders:
drwxr-xr-x 5 postfix postfix 4096 Aug 21 03:04 MailScanner
drwxr-xr-x 10 postfix postfix 4096 Sep 24 19:26 incoming
drwxr-xr-x 10 postfix celeryd 4096 Sep 24 00:00 quarantine
Also, your webserver's user should have access to the quarantine folder, you can add it to the celeryd group in this case.

MailScanner Quick.Peek:

check the path for the Quick.Peek executable:
which Quick.Peek
and make sure it is correct in the Baruwa's file.
Also check that the path provided for the MailScanner.conf file is correct.

Possibly Related Posts

Tuesday, 27 March 2012

Vodafone K5005 (Huawei E389) 4G modem on Ubuntu

This modem works with Ubuntu Precise Pangolin (12.04) but it is not detected automatically by network manager.

UPDATE: In the comments, a reader named "Big Brother" has a nicer solution, instead of using the scripts below, just follow this steps:

1- Add these lines to /lib/udev/rules.d/40-usb_modeswitch.rules:
# Vodafone K5005 (Huawei E389)
ATTRS{idVendor}=="12d1", ATTRS{idProduct}=="14c3", RUN+="usb_modeswitch '%b/%k'"
2- Create file /etc/usb_modeswitch.d/12d1:14c3:
# Vodafone K5005 (Huawei E389)
TargetVendor= 0x12d1
TargetProduct= 0x14c8
3- Unplug device, plug it back and it should work automagically ;)

Deprecated method:
In order to get it working with network manager I had to use the following script (it must be run as root):
rmmod option
modprobe option
echo "12d1 14c8" > /sys/bus/usb-serial/drivers/option1/new_id
usb_modeswitch -v 12d1 -p 14c3 -V 12d1 -P 14c8 -M "55534243123456780000000000000011062000000100000000000000000000" -n 1
Note that the commands above are for the Vodafone branded (K5005) Huawei E389 dongle, for the unbranded device the product ID is different and you should use:
rmmod option
modprobe option
echo "12d1 1506" > /sys/bus/usb-serial/drivers/option1/new_id
usb_modeswitch -v 12d1 -p 1505 -V 12d1 -P 1506 -M "55534243123456780000000000000011062000000100000000000000000000" -n 1
You can check the product ID with:
lsusb
In my case I get:
Bus 002 Device 007: ID 12d1:14c3 Huawei Technologies Co., Ltd.

Possibly Related Posts

Thursday, 22 March 2012

Dovecot sDbox vs mDbox

Recently I've been planning to update our dovecot installation and migrate from maildir to dbox, which raised the question: should we go with sdbox or with mdbox?

I've been reading some threads on the dovecot mailing list and compiled this list of questions and answers to help with the decision; all answers are from Timo Sirainen:

1. What is the advantage to using multiple files?

A: mdbox in theory uses less disk I/O for "normal users".

2. What is the advantage to using a single sdbox file for each user?

A: It's simpler. More difficult to get corrupted. Also if in future there exists a filesystem that supports smaller files better, it's then faster than mdbox. Probably unlikely that it will happen anytime soon.

3. Is this a binary format, or txt (UTF?)?

A: dbox headers/metadata is ASCII. The message bodies can of course be anything.

4. Are there real-world benchmarks showing measurable differences between maildir, sdbox, mdbox?

A: Not that I'm aware of. So far everyone I've tried to ask have replaced their whole mail system and their storage, so the before/after numbers can't be compared. I'm very interested in knowing myself too.

5. Are sdbox & mdbox equally stable to Maildir? Are they recommended for production systems?

A: sdbox is so simple that I doubt anyone will find any kind of corruption bugs. mdbox is more complex, but people are using it in production and I haven't heard of any problems recently. Although there have been bugs in how mdbox handles already corrupted files, v2.0.10 had several fixes related to that.

6. In mdbox we should not use a ramdisk for indexes. But what about sdbox? sdbox indexes work as maildir indexes? Are sdbox indexes bigger than maildir indexes?

A: If this is a heavy use box, having everyone's indexes being rebuilt at the same time could bring it to its knees...

Since this is a server I'm sure you have adequate power protection (UPS), so only extended power outages might be an issue - but then you should also have it configured to safely shut down in this event, no?

But anyway, yes, the indexes will be rebuilt and everything should continue working...

7. One of the main advantages (speed wise) of dbox over maildir is that index files are the only storage for message flags and keywords. What happens when we want to recover some messages from backup? With maildir we can rebuild message indexes, but I am not sure about dbox. Should we also restore "old indexes" and merge with the "new indexes" in order to restore the deleted messages?

A: The intended way to restore stuff is to either restore the entire dbox to a temp directory, or at least all the important parts of it (indexes + the files that contain the wanted mails) and then use something like:

doveadm import sdbox:/tmp/restoredbox "" saved since 2011-01-01

8. The previous question applies to sdbox and mdbox. In the case of mdbox, we can configure rotation of files using /mdbox_rotate_size/ . We would like to rotate daily, not based in size (our users ask us for yesterday's backup). How can we accomplish this?

A: mdbox_rotate_interval = 1d

But note that that doesn't guarantee that there will be only one file. Even if you set mdbox_rotate_size to 10 GB or something (or I think 0 makes it unlimited, not sure), it's possible that two files will be created if mails are being saved at the same time. mdbox never waits for locks when writing to a file, instead it'll just use another file or create a new one.

Anyway, if it's not a big deal restoring the user's entire mailbox temporarily you can restore only yesterday's mails by giving proper search query parameter to doveadm import.

9. We now have 17.000.000 messages in our maildir, almost 1.5 TB (zlib compression enabled). Our backup time with bacula is rather bad: 24 hours for a full backup; most of the time the backup is busy fstat'ing all those little messages.

A: In case of Maildir there's no point in fstating any mail files. I'd guess it should be possible to patch bacula to not do that.

10. We think that mdbox can help us with this. Does anybody have good experiences migrating from maildir to mdbox in "large" environments? What about mdbox performance & reliability?

A: I haven't recently heard of corruption complaints about mdbox.. Previously when there were those, I didn't hear of complains about losing mails or anything, so that's good :)

Someone's experience from around 2011-03, I believe that dsync has improved by now:
  • Sdbox is using far too much I/O on a busy server, I had to switch to mdbox. sdbox is not sustainable when having very large mailbox, IO becomes too high (even with high-end storage devices)
    • Timo said that sdbox is not expected to have more I/O than maildir.
  • Mdbox is running well so far, and resources (IO or CPU) are not an issue anymore.
  • Converting from Maildir to s/mdbox is easy
  • Converting from sdbox to mdbox has been a complete nightmare. I have never managed to make it completely, finally made it through imap protocol between 2 instance of dovecot. You better choose before sd or md, but not try to convert between the 2. Dsync is too buggy to convert sdbox to mdbox. The only solution I found was to use IMAP protocol to read from sdbox and write as mdbox.

Possibly Related Posts

Wednesday, 21 March 2012

View running processes in Oracle DB

This will show you a list of all running processes:
SELECT PROCESS pid, sess.process, sess.status, sess.username, sess.schemaname, sql.sql_text FROM v$session sess, v$sql sql WHERE sql.sql_id(+) = sess.sql_id AND sess.type = 'USER';
Identify database SID based on OS Process ID

use the following SQL query, when prompted enter the OS process PID:
col sid format 999999
col username format a20
col osuser format a15
SELECT b.spid,a.sid, a.serial#,a.username, a.osuser
FROM v$session a, v$process b
WHERE a.paddr= b.addr
AND b.spid='&spid'
ORDER BY b.spid;
For making sure you are targeting the correct session, you might want to review the SQL associated with the offending task; to view the SQL being executed by the session you can use the following SQL statement:
SELECT b.username, a.sql_text
FROM v$sqltext_with_newlines a, v$session b, v$process c
WHERE c.spid = '&spid'
AND c.addr = b.paddr
AND b.sql_address = a.address;
Killing the session

The basic syntax for killing a session is shown below:
ALTER SYSTEM KILL SESSION 'sid,serial#';
In a RAC environment, you can optionally specify the INST_ID, shown when querying the GV$SESSION view. This allows you to kill a session on a different RAC node:
ALTER SYSTEM KILL SESSION 'sid,serial#,@inst_id';
The KILL SESSION command doesn't actually kill the session. It merely asks the session to kill itself. In some situations, like waiting for a reply from a remote database or rolling back transactions, the session will not kill itself immediately and will wait for the current operation to complete. In these cases the session will have a status of "marked for kill". It will then be killed as soon as possible.

In addition to the syntax described above, you can add the IMMEDIATE clause:
ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;
This does not affect the work performed by the command, but it returns control back to the current session immediately, rather than waiting for confirmation of the kill.

If the marked session persists for some time you may consider killing the process at the operating system level. Before doing this it's worth checking to see if it is performing a rollback. If the USED_UREC value is decreasing for the session in question you should leave it to complete the rollback rather than killing the session at the operating system level.

Possibly Related Posts

Tuesday, 20 March 2012

Unlock Oracle user account

Here's how to lock or unlock Oracle database user accounts:
ALTER USER username ACCOUNT LOCK;
ALTER USER username ACCOUNT UNLOCK;
You may also have to use:
GRANT connect, resource TO username;
to solve the "ORACLE ERROR:ORA-28000: the account is locked" error.

Possibly Related Posts

Thursday, 15 March 2012

Limit bandwidth of rsync over ssh

This will limit the connection to 80Kb/s:

rsync -auvPe "trickle -d 80 ssh" user@host:/src/ /dst/

Possibly Related Posts

Monday, 12 March 2012

Add extra disk as /home

Install the disk and then use:
fdisk -l
to check the new device name; if the new disk is not detected try installing scsitools:
apt-get install scsitools
then run:
and issue fdisk -l again. Supposing your new disk is /dev/sdb, use:
fdisk /dev/sdb
to create a new partition: press n, then p for a primary partition, then enter 1 since this will be the only partition on the drive; when it asks about the first and last cylinders, just use the defaults.
now, to format the newly created partition use:
mkfs.ext4 /dev/sdb1
when done use:
blkid
to check the new partition's UUID and, using that, edit the /etc/fstab file adding:
UUID=d70d801e-5246-46e2-a7ed-1a95819fd326 /home ext4 errors=remount-ro 0 1
Now mount the new partition on a temporary location with:
mount /dev/sdb1 /mnt/
and copy the contents of the current home to the new partition:
cp -r /home/* /mnt/
when done delete all contents from the current home
rm -rf /home/*
unmount the new partition
umount /mnt
and remount it under /home
mount /dev/sdb1 /home/

Possibly Related Posts

Duplicate Oracle Schema

Using imp/exp:
Export the database using:
exp 'system/password' owner=schema_to_be_duplicated file=filename.dmp log=logfile.log
Then import the dump file into the target schema:
imp 'system/password' fromuser=schema_to_be_duplicated touser=target_username file=
But keep in mind that the target schema must already exist in the database. If it does not, you must create it first using the CREATE USER command; the import does not create the user for you, it only migrates objects & data within schemas.

Using impdp/expdp:

We must first create a directory object you can access. The directory object is only a pointer to a physical directory, creating it does not actually create the physical directory on the file system.
sqlplus / AS SYSDBA
CREATE OR REPLACE DIRECTORY DUMP_DIR AS '/home/oracle/dumpdir/';

Export with:
expdp schema_to_be_duplicated/password DUMPFILE=filename.dmp DIRECTORY=DUMP_DIR
Import with:
impdp REMAP_SCHEMA=schema_to_be_duplicated:new_schema DUMPFILE=filename.dmp DIRECTORY=DUMP_DIR EXCLUDE=JOB
you will be asked for a username and password, the default is system/system

Remember that you must first create the destination user:
CREATE USER new_schema IDENTIFIED BY password;

Possibly Related Posts

CloudStack reset password script

The process to install the password reset script described in the Cloudstack's admin guide was not working for me on an Ubuntu template so I tried to figure what was wrong with it.
In the admin guide they say that we should place the script in /etc/init.d/ and enable it using update-rc.d, but that didn't work, so I tried to place this in /etc/init/cloudstack.conf:
description "CloudStack password reset"
author "Luis Davim"
# Be sure to block the display managers until our job has completed. This
# is to make sure our kernel services are running before user
# may launch.
start on runlevel [235] or starting gdm or starting kdm or starting prefdm
stop on runlevel [06]
pre-start exec /etc/init.d/cloud-set-guest-password
post-stop exec /etc/init.d/cloud-set-guest-password
That didn't work either, so I took a look at the script and figured it needed to have the network configured in order to run.
So I configured my network interface like this:
# The primary network interface
auto eth0
iface eth0 inet dhcp
post-up /etc/init.d/cloud-set-guest-password
pre-down /etc/init.d/cloud-set-guest-password
you can also link the script into the /etc/network/if-up.d and /etc/network/if-down.d folders:
ln -s /etc/init.d/cloud-set-guest-password /etc/network/if-up.d/cloud-set-guest-password
ln -s /etc/init.d/cloud-set-guest-password /etc/network/if-down.d/cloud-set-guest-password
And that was it, now I have an Ubuntu template with a working password reset script.

Note: I've also modified the password script to use chpasswd instead of passwd --stdin, since Ubuntu does not have the --stdin option in passwd while both Ubuntu and CentOS have chpasswd. But that was/is not the problem, because usermod with mkpasswd was working...

I just replaced:
echo $password | passwd --stdin $user
with:
echo "$user:$password" | chpasswd
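chpasswd reads "user:password" lines on stdin; this sketch only shows how that line gets built (names are illustrative and no root is needed just to form it):

```shell
# Build the stdin line the patched script feeds to chpasswd.
user=ubuntu
password='s3cret'
LINE="$user:$password"
echo "$LINE"
```

Piping this line into chpasswd (as root) would set the password in one shot, which is why it is a portable replacement for passwd --stdin.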

Possibly Related Posts

Linux Transparent bridge

First you need to install the bridge-utils:
apt-get install bridge-utils
Configuring the bridge:
ifconfig eth0 promisc up
ifconfig eth1 promisc up
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
ifconfig br0 netmask up
route add default gw dev br0
In this example, I suppose you are using eth0 and eth1. In the ifconfig line, I assigned IP address to the bridge so I can access it remotely. Use an IP address in your network.
You may check that the bridge is working by using tcpdump:
# tcpdump -n -i eth0
(lots of funny stuff)
# tcpdump -n -i eth1
(lots of funny stuff)
Plug your machine into the network, and everything should work. Your Linux box is now a big, expensive two-port switch.

Making the Bridge Permanent

Edit the file /etc/network/interfaces and add:
auto br0
iface br0 inet dhcp
bridge_ports eth1 eth2
bridge_stp on

Possibly Related Posts

Wednesday, 7 March 2012

CloudStack LDAP


First you need to configure LDAP by making an API call with an URL like this:
Or in a more readable format:
Note the URL encoded values, here you have the decoded version:
&binddn= cn=John Fryer,ou=people,o=sevenSeas

After you've created your URL (with encoded values) open your browser, login into cloudstack and then fire up your ldap config URL.
Now if you go back to cloudstack and under "Global Settings" search for LDAP and you should see that LDAP is configured.

Now you have to manually create the user accounts with the same logins as in your LDAP server or you can use the CloudStack API to make a script and "sync" your LDAP users into CloudStack, I've written a PHP script that does this.
You'll have to modify it to match your LDAP schema and you can get it after the break.

Possibly Related Posts

Tuesday, 6 March 2012

How to reset lost user password on a Linux Machine

First you’ll want to make sure to choose the regular boot kernel that you use (typically just the default one), and then use the “e” key to choose to edit that boot option.

Now just hit the down arrow key over to the “kernel” option, and then use the “e” key to switch to edit mode for the kernel option.

Note: if the grub menu does not show up try hitting the Shift or the Space keys right after the bios screen.

You’ll want to remove the “ro quiet splash” part with the backspace key, and then add this onto the end:
rw init=/bin/bash
Press F10 or ctrl+x to boot with that option.

At this point the system should boot up very quickly to a command prompt.

Now you can use the passwd command to change the password.
passwd username
where username is the username you want to reset.
After changing your password, use the following commands to reboot your system. (The sync command makes sure to write out data to the disk before rebooting)
sync
reboot -f
I found that the -f parameter was necessary to get the reboot command to work for some reason. You could always hardware reset instead, but make sure to use the sync command first.

NOTE for VMWare users:

Probably you won't ever see the boot menu on a VM because it will boot too fast; to work around this you can set up the VM to delay the BIOS for a few seconds:
bios.bootDelay = "15000"
This causes the bios to delay for 15 seconds so you can press keys, you can set it in the VM .vmx file or using vSphere under the vm settings, on the options tab, boot options.

Then just start the VM and press esc on the grub loading message and follow the steps described above.

Possibly Related Posts

Thursday, 1 March 2012

Increase LVM Volume

This is how I did it on a VMWare machine but the procedure should be the same for a physical machine:

First off, VMWare, like many other hypervisors, allows to create a second HDD on the fly while the vm is running.

Once that was done, login into the server and:
# echo "- - -" > /sys/class/scsi_host/host#/scan
partprobe should also do the trick

Just to see that the new disk is available use:
# fdisk -l
In this case it was /dev/sdb

create a new partition with
# fdisk /dev/sdb
press n and then w

Format the new partition:
# mkfs.ext3 /dev/sdb1
list the volume groups:
# vgs
add new physical volume
# pvcreate /dev/sdb1
extend the default volume group from the vgs command
# vgextend VolGroup /dev/sdb1
check to see pv and vg has another volume with:
# vgs
And list logical volumes
# lvdisplay
extend my / volume by the entire size of /dev/sdb1
# lvextend /dev/VolGroup/lv_root /dev/sdb1
resize filesystem to match vol size increase
# resize2fs /dev/VolGroup/lv_root
(requires a 2.6 kernel to resize while fs running)

Possibly Related Posts

Monday, 27 February 2012

How to reset Lion to the factory default

If you want to factory reset your Mac OS Lion back to the setup assistant you'll have to:

1) Do all of the necessary installations, etc. just as under Snow Leopard, using your setupacctname account.

2) Once that is done, BEFORE restarting in single user mode:
sudo su
dscl . -delete /Groups/admin GroupMembership setupacctname
dscl . -delete /Users/setupacctname
3) Reboot into single user mode (Hold Command-s at startup)

4) Check the filesystem:
/sbin/fsck -fy
5) Mount the filesystem:
/sbin/mount -uw /
6) Remove the setupacctname directory:
rm -R /Users/setupacctname
7) Remove or rename .AppleSetupDone so you get the language choice:
cd /var/db/
mv .AppleSetupDone .RunLanguageChooserToo
or, if you don't need the language chooser, simply delete it instead:
rm .AppleSetupDone
8) Delete miscellaneous files (unnecessary, but useful if you're imaging the drive):
rm -R /Library/Caches/*
rm -R /System/Library/Caches/*
rm -R /var/vm/swapfile*
9) Shutdown or restart

On next boot your Mac will go to the start of the initial Apple Setup program just like when you first powered it on after purchase. All clean and ready to sell or give to a new user.

Another way of doing it is:

Restart in single user mode by holding Command-S at startup and then run:
/sbin/fsck -fy
mount -uw /
rm /var/db/dslocal/nodes/Default/users/<shortname>.plist
rm -r /Users/<shortname>
rm /var/db/.AppleSetupDone

Possibly Related Posts

How to reset OS X back to the Setup Assistant

If you want to factory reset your Mac OS X v10.6 Snow Leopard or older:

1. Press Command-S during startup to get into single user mode
2. Check the filesystem:
# /sbin/fsck -fy
3. Mount the root partition as writable:
# /sbin/mount -uw /
4. Remove the hidden .AppleSetupDone file:
# rm /var/db/.AppleSetupDone
5. a) For Mac OS X 10.5 ‘Leopard’ and newer, do:
# launchctl load /System/Library/LaunchDaemons/
Repeat for every user previously defined on the machine (replace {username} with the real user name):
# dscl . -delete /Users/{username}
# dscl . -delete /Groups/admin GroupMembership {username}
5. b) For older versions of Mac OS X, do:
# rm -rf /var/db/netinfo/local.nidb
6. Remove the home directories of users. For every user do (replace {username} with the real user name):
# rm -rf /Users/{username}
7. If applicable, remove already created files in root’s home directory, e.g.
# rm /root/.bash_history
8. Shutdown (or reboot to verify the procedure worked):
# shutdown -h now
# reboot

Possibly Related Posts

VPN Ports

To allow PPTP tunnel maintenance traffic, open TCP 1723.
To allow PPTP tunneled data to pass through the router, open Protocol ID 47 (GRE).

L2TP over IPSec
To allow Internet Key Exchange (IKE), open UDP 500.
To allow IPSec Network Address Translation (NAT-T) open UDP 4500.
To allow L2TP traffic, open UDP 1701.


OpenVPN uses port 1194, both UDP and TCP.

Here’s the Cisco access list (gre = Protocol ID 47, pptp = 1723, isakmp = 500, non500-isakmp = 4500):
permit gre any any
permit tcp any any eq 1194
permit udp any any eq 1194
permit udp any any eq isakmp
permit udp any any eq non500-isakmp
permit udp any any eq 5500
permit tcp any any eq 1723
permit udp any any eq 1701
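If the gateway is a Linux box rather than a Cisco router, a roughly equivalent rule set can be sketched with iptables. This is a hedged translation, not a drop-in config: it assumes filtering on the FORWARD chain of a router, so adjust the chain (e.g. INPUT for a VPN endpoint) and add interface matches for your topology.

```shell
# GRE for PPTP tunneled data (protocol 47)
iptables -A FORWARD -p gre -j ACCEPT
# PPTP tunnel maintenance
iptables -A FORWARD -p tcp --dport 1723 -j ACCEPT
# OpenVPN, TCP and UDP
iptables -A FORWARD -p tcp --dport 1194 -j ACCEPT
iptables -A FORWARD -p udp --dport 1194 -j ACCEPT
# IKE and IPSec NAT-T
iptables -A FORWARD -p udp --dport 500 -j ACCEPT
iptables -A FORWARD -p udp --dport 4500 -j ACCEPT
# L2TP
iptables -A FORWARD -p udp --dport 1701 -j ACCEPT
```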

Possibly Related Posts

Thursday, 16 February 2012

List existing databases and tables in Oracle

Oracle doesn't have "databases" in the usual sense; it has "schemas". You can list them with:
SELECT username FROM all_users ORDER BY username;
Or with:
SELECT username, account_status FROM dba_users ORDER BY 1;
Or with:
SELECT DISTINCT owner FROM dba_objects ORDER BY 1;
When connected to Oracle you'll use, by default, the schema corresponding to your username (connecting as SCOTT, all objects created by you will belong to SCOTT's schema), and you'll also be able to use objects in different schemas that you've been granted rights on. Say you are SYSTEM and you want to read all entries from table A, which resides in SCOTT's schema; you'd write something like:
SELECT * FROM SCOTT.A;
You can also list existing tables with:
SELECT owner, table_name FROM dba_tables;
Or if you do not have access to DBA_TABLES, you can see all the tables that your account has access to through the ALL_TABLES view:
SELECT owner, table_name FROM all_tables;
If you are only concerned with the tables that you own, not those that you have access to, you could use USER_TABLES:
SELECT table_name FROM user_tables;
Since USER_TABLES only has information about the tables that you own, it does not have an OWNER column-- the owner, by definition, is you.

Oracle also has a number of legacy data dictionary views (TAB, DICT, TABS, and CAT, for example) that could be used. In general, I would not suggest using these legacy views unless you absolutely need to backport your scripts to Oracle 6. Oracle has not changed these views in a long time, so they often have problems with newer types of objects. For example, the TAB and CAT views both show information about tables that are in the user's recycle bin, while the [DBA|ALL|USER]_TABLES views all filter those out. CAT also shows information about materialized view logs with a TABLE_TYPE of "TABLE", which is unlikely to be what you really want. And DICT combines tables and synonyms and doesn't tell you who owns the object.

Possibly Related Posts

Wednesday, 1 February 2012

Set SVN svn:externals In Command Line

Use this command if you wish to include the pysphere code inside your project repository (the URL here is a placeholder; substitute the real repository URL):

svn propset svn:externals 'pysphere http://url/of/repo' .

Note the dot at the end of the command and the quotes around the directory name and URL.

Now commit with:
svn commit
and then update:
svn up
In order to set multiple directory/url pairs in a single svn:externals property, you should put the individual dir/url pairs into a file (let's call it 'svn.externals'), one pair per line, like so:
path/to/external1 http://url/of/repo1
path/to/external2 http://url/of/repo2
and then apply the property using:
svn propset svn:externals -F svn.externals .
You should also check 'svn.externals' itself into the repository, to easily keep track of it.

You can also use:
svn propedit svn:externals .
an editor will open and you can add the external repositories, one per line, like this:
path/to/external http://url/of/repo

Possibly Related Posts

Tuesday, 31 January 2012

Enable IP forwarding in Linux

This can be done in different ways, here are some of the most common.

Use procfs
This is maybe the most common way; the change is temporary, and you need to enable it again after every reboot.
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
(Note that sudo echo 1 > /proc/sys/net/ipv4/ip_forward would fail, because the redirection is performed by your unprivileged shell, not by sudo.)
You can add the plain echo 1 > /proc/sys/net/ipv4/ip_forward line to the /etc/rc.local file (which runs as root), and that way, each time you reboot your computer it will be enabled again.

You can check if IP forwarding is enabled or disabled by checking the content of /proc/sys/net/ipv4/ip_forward file:
cat /proc/sys/net/ipv4/ip_forward
If the output is 1, it is enabled; if 0, it is disabled.
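The check can be wrapped in a small helper that turns the 0/1 flag into a readable answer. This is just a convenience sketch: the function reads the real procfs path by default, and accepts an alternative file as an argument so you can try it out without touching /proc.

```shell
#!/bin/sh
# Report whether IP forwarding is enabled, based on the flag file contents.
ipfwd_state() {
    flag=${1:-/proc/sys/net/ipv4/ip_forward}
    if [ "$(cat "$flag" 2>/dev/null)" = "1" ]; then
        echo enabled
    else
        echo disabled
    fi
}

ipfwd_state
```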

Use sysctl
sysctl lets you change kernel values on the fly, so you can use it to change the IP forwarding behaviour.

First, let's check whether it is enabled or disabled; as root, run:
sysctl -a | grep net.ipv4.ip_forward
Now you can set its value to 1 to enable IP forwarding:
sysctl -w net.ipv4.ip_forward=1
This is also temporary; if you want it to be permanent, edit the file /etc/sysctl.conf:

sudo vi /etc/sysctl.conf
And uncomment or add this line:
net.ipv4.ip_forward = 1
To make it effective, use this command:
sudo sysctl -p
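The edit can also be scripted. The helper below is a sketch that idempotently sets net.ipv4.ip_forward = 1 in a sysctl-style config file; it defaults to /etc/sysctl.conf, but takes another path as an argument so you can test it on a scratch file first.

```shell
#!/bin/sh
# Idempotently set net.ipv4.ip_forward = 1 in a sysctl config file.
set_ip_forward() {
    conf=${1:-/etc/sysctl.conf}
    if grep -q '^net\.ipv4\.ip_forward' "$conf" 2>/dev/null; then
        # Rewrite the existing line, whatever its current value
        sed -i 's/^net\.ipv4\.ip_forward.*/net.ipv4.ip_forward = 1/' "$conf"
    else
        # No such line yet: append one
        echo 'net.ipv4.ip_forward = 1' >> "$conf"
    fi
}
```

After running it against the real file as root, sudo sysctl -p applies the change.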

Possibly Related Posts

Sunday, 29 January 2012

Permanent iptables Configuration

You can save the configuration, and have it start up automatically. To save the configuration, you can use iptables-save and iptables-restore.

Save your firewall rules to a file:
sudo sh -c "iptables-save > /etc/iptables.rules"
At this point you have several options. You can make changes to /etc/network/interfaces or add scripts to /etc/network/if-pre-up.d/ and /etc/network/if-post-down.d/ to achieve similar ends. The script solution allows for slightly more flexibility.

Solution #1 - /etc/network/interfaces

Modify the /etc/network/interfaces configuration file to apply the rules automatically.
Open your /etc/network/interfaces file:
sudo vi /etc/network/interfaces
Add a single line (shown below) just after ‘iface lo inet loopback’:
pre-up iptables-restore < /etc/iptables.rules
You can also prepare a set of down rules, save them into second file /etc/iptables.downrules and apply it automatically using the above steps:
post-down iptables-restore < /etc/iptables.downrules
A fully working example using both from above:
auto eth0
iface eth0 inet dhcp
pre-up iptables-restore < /etc/iptables.rules
post-down iptables-restore < /etc/iptables.downrules
You may also want to keep the information from the byte and packet counters:
sudo sh -c "iptables-save -c > /etc/iptables.rules"
The above command will save the whole rule-set to a file called /etc/iptables.rules with byte and packet counters still intact.

Solution #2 /etc/network/if-pre-up.d and ../if-post-down.d

NOTE: This solution uses iptables-save -c to save the counters. Just remove the -c to only save the rules.

Alternatively, you could add the iptables-restore and iptables-save calls to the if-pre-up.d and if-post-down.d directories in the /etc/network directory instead of modifying /etc/network/interfaces directly.

The script /etc/network/if-pre-up.d/iptablesload will contain:
#!/bin/sh
iptables-restore < /etc/iptables.rules
exit 0
and /etc/network/if-post-down.d/iptablessave will contain:
#!/bin/sh
iptables-save -c > /etc/iptables.rules
if [ -f /etc/iptables.downrules ]; then
    iptables-restore < /etc/iptables.downrules
fi
exit 0
Then be sure to give both scripts execute permissions:
sudo chmod +x /etc/network/if-post-down.d/iptablessave
sudo chmod +x /etc/network/if-pre-up.d/iptablesload
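The two hook scripts can also be generated in one go. This sketch writes them into ./network-sketch by default (an assumption, so you can inspect the result without touching the system) rather than straight into /etc/network; point DESTDIR at /etc/network, as root, once you're happy with them.

```shell
#!/bin/sh
# Generate the if-pre-up.d / if-post-down.d iptables hook scripts.
DESTDIR=${DESTDIR:-./network-sketch}
mkdir -p "$DESTDIR/if-pre-up.d" "$DESTDIR/if-post-down.d"

# Restore rules before the interface comes up
cat > "$DESTDIR/if-pre-up.d/iptablesload" <<'EOF'
#!/bin/sh
iptables-restore < /etc/iptables.rules
exit 0
EOF

# Save rules (with counters) after the interface goes down
cat > "$DESTDIR/if-post-down.d/iptablessave" <<'EOF'
#!/bin/sh
iptables-save -c > /etc/iptables.rules
if [ -f /etc/iptables.downrules ]; then
    iptables-restore < /etc/iptables.downrules
fi
exit 0
EOF

chmod +x "$DESTDIR/if-pre-up.d/iptablesload" "$DESTDIR/if-post-down.d/iptablessave"
```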
Solution #3 - iptables-persistent

Install and use the iptables-persistent package.


If you manually edit iptables on a regular basis
The above steps go over how to set up your firewall rules and presume they will be relatively static (and for most people they should be). But if you do a lot of development work, you may want to have your iptables rules saved every time you reboot. You could add lines like these to /etc/network/interfaces:
pre-up iptables-restore < /etc/iptables.rules
post-down iptables-save > /etc/iptables.rules

The line "post-down iptables-save > /etc/iptables.rules" will save the rules to be used on the next boot.

Possibly Related Posts