GURU4HP

Thursday, December 19, 2013

Fedora 19 upgrade to Fedora 20 with FedUp

A common problem I have encountered in my 15 years of using Linux is upgrading from one release version to the next. For a long time the best solution was provided by Debian; on other, non-deb distributions an upgrade was quite an adventure: a painful and time-consuming process.

Since the release of Fedora's fedup utility, the situation is much better. At first I was a little sceptical when my friend Primoz told me there was an easy way to upgrade Fedora with only one command. Until then, the only safe way for me was to make a backup of the /home folder and do a complete install from scratch. That worked on every distro I have used so far (Fedora, CentOS, Ubuntu, LinuxMint ...).

What is FedUp?

FedUp (FEDora UPgrader) is the name of a new system for upgrading Fedora installs in Fedora 18 and later. It replaces all of the previously recommended upgrade methods (PreUpgrade and DVD) that were used in previous Fedora releases. Anaconda, the Fedora installer, has no built-in upgrade functionality in Fedora 18 or later; it has been completely delegated to FedUp.
Currently, FedUp is capable of handling upgrades between all still-supported Fedora releases using a network repository or a DVD image as the package source. Upgrades from EOL Fedora releases may work, but are not supported.

Today I have tested this new upgrade tool.

Here are a few considerations to keep in mind, and the steps to perform the upgrade:

0. make a backup of the /home directory (just to be on the safe side)
1. do a full update of Fedora 19 with the "yum update" command
2. check that FedUp is version 0.8, as 0.7 causes problems; check this link for more info: https://fedoraproject.org/wiki/Common_F20_bugs
3. check the FedUp page for the complete procedure (here is only the short version): https://fedoraproject.org/wiki/FedUp
4. execute the command "fedup --network 20" (keep in mind that this is a network-based upgrade; for an ISO-based upgrade, check the FedUp page).

Step 4 downloads about 1500 packages (this also includes some additional repos like RPMFusion, VirtualBox, etc.). It took about 20 minutes on an HP Folio 9740m (i5 1.8 GHz, 4 GB RAM, 320 GB HDD).


5. Reboot the PC and wait for the upgrade process to finish, about 30 minutes.
6. You now have Fedora 20 on your system :) Enjoy :)
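
Put together, the whole upgrade boils down to a handful of commands. A minimal sketch (run as root; package counts and timings will vary):

yum update -y                # bring Fedora 19 fully up to date
yum install -y fedup         # install the upgrade tool
rpm -q fedup                 # confirm it is version 0.8 or later
fedup --network 20           # download the Fedora 20 packages
reboot                       # then pick the "System Upgrade" boot entry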



Tuesday, November 26, 2013

[Linux, OpenSSL, expect] Doing SFTP file transfers from a shell script



PRODUCT:   OpenSSH_5.4p1, OpenSSL 1.0.0a-fips 1 Jun 2010 
           expect version 5.43.0


OP/SYS:    Linux Fedora 13


COMPONENT:  SFTP, SCP 


SOURCE:     Philippe Vouters
            Fontainebleau/France


LOW-COST HIGH-TECH:  http://techno-star.fr


SYMPTOM(S) or PROBLEM(S):

The SFTP -b and the SCP -B options enable batch mode. However, under OpenSSH
version 5.4p1, these options require an empty passphrase and do not allow for
password prompting. What to do when the remote system only has port 22 open
and only accepts SSH logins using a login/password pair? This is where expect
makes it possible to work around this OpenSSH restriction when you do not have
the target account RSA key with an empty passphrase. If you own the target
account RSA key with an empty passphrase then you may use the simpler

        sftp -b script user@host

or scp equivalent shell command.


SOLUTION or RESPONSE:

Use expect to synchronize with each SFTP/SCP output and to simulate their
inputs. Expect acts on terminal screens much the same way as the non-interactive
telnet tool found elsewhere in this knowledge database.


WORKAROUND or ALTERNATIVE:

In your shell script, use SFTP/SCP in interactive mode and synchronize their
output and input using expect commands.


EXPECT INFORMATION:

expect is optional software. Under Fedora 13, as root, use yum to
install expect.

        # yum install expect
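
To get a feel for how expect drives an interactive SFTP session, here is a
minimal sketch using the same generated-script technique as the full ksh
script below (user, host, the password and file.txt are placeholders):

cat > /tmp/except <<'EOF'
spawn sftp user@host
expect -nobrace " password: "
send "secret\n"
expect -nobrace "\nsftp> "
send "get file.txt\n"
expect -nobrace "\nsftp> "
send "bye\n"
wait
exit
EOF
expect /tmp/except
rm -f /tmp/except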


KSH SHELL SCRIPT NOTES:

This ksh script expects that the SFTP software is BSD based and accepts the
[-P port] option. Such an SFTP client is available under Linux Fedora 13. 
Carefully check your "man 1 sftp" before blindly using this ksh script. For
example, the sftp man page at http://linux.die.net/man/1/sftp, which claims BSD
compatibility, is not suited for the below ksh script. The man page at
http://linuxreviews.org/man/sftp/ is fully suited and corresponds to the
author's Linux Fedora 13 computer's sftp man page, on which he based this ksh script.

If running a Linux SFTP client software whose man is identical to what is
described at http://linux.die.net/man/1/sftp, you would modify all occurrences
of -P ${port} in the ksh script below to -oPort=${port} such as given as an
example in the http://linux.die.net/man/1/sftp Web browser displayable man.


KSH SHELL FULL SCRIPT:

#!/bin/ksh
#
#        Copyright (C) 2010 by Philippe.Vouters@laposte.net
#
#    This program is free software: you can redistribute it and/or modify
#    it under the terms of the GNU General Public License as published by the
#    Free Software Foundation, either version 3 of the License, or
#    any later version.
#
#    This program is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU General Public License for more details.
#
#    You should have received a copy of the GNU General Public License
#    along with this program.  If not, see <http://www.gnu.org/licenses/>.
#
# Software Description:
# This software transfers the latest 3PAR (a SAN equipment) weekly*.tbz2 and
# optionally the InSplore*.tbz2 files to the /tmp local directory. The InSplore
# file is transferred provided it was produced the same day as the run of
# this script.
#
# This script makes use of the optional expect freeware and an smtp client whose
# source code is downloadable from http://vouters.dyndns.org/zip/.
# Under Fedora 12 and a root account:
#     # yum install expect
# For smtp:
#     # unzip -aa smtp.zip
#     # export bin=/usr/local/bin
#     # cd smtp
#     # make -f Makefile_smtp
#
# Enter the IP address or the DNS name with the IP port separated with a colon
# of each Service Processor while incrementing the index.
#
SP_hostname[0]="127.0.0.1:10001" # corresponds to 10.10.31.41:22
SP_hostname[1]="127.0.0.1:10002" # corresponds to 10.10.52.25:22
#
# Indicate your email account characteristics as well as a list of remote 
# recipients with a space as the delimiter in the smtp_destinees variable.
#
smtp_server="smtp.sfr.fr"
smtp_login="Philippe.Vouters@laposte.net"
smtp_password="password"
smtp_destinees="Philippe.Vouters@laposte.fr"

# These are the 3PAR Service Processor (SP) Username and Password. They ought
# to be constant whichever the 3PAR SP.
SP_username="spvar"
SP_password="3parvar"

#
# This function prepares a notification of file transfers from the SP.
#
function send_mail_notification
{
       if [[ $1 != "" && $2 != "" ]]; then
           echo "$1 and $2 available in directory /tmp" >> /tmp/smtp_body.txt
       fi
       if [[ $1 != "" && $2 == "" ]]; then
           echo "$1 available in directory /tmp" >> /tmp/smtp_body.txt
       fi
       if [[ $1 == "" && $2 != "" ]]; then
           echo "$2 available in directory /tmp" >> /tmp/smtp_body.txt
       fi
}

#
# Function get_last_weekly
#
# This function is in charge of copying the last dated weekly report to /tmp
# The last dated weekly report has been produced on the preceding Sunday.
# Input :
#      - Index toward the DNS or IP address stored into SP_hostname
# Output:
#      - /tmp/
#
function get_last_weekly
{
# Step 1:
# get SP's all files in weekly/ directory using sftp and 
# maintenance SSH account.
#
port=`echo ${SP_hostname[$1]} | sed 's/\([\/A-Za-z0-9\.]*\):\([0-9]*\)/\2/g'`
hostname=`echo ${SP_hostname[$1]} | sed 's/\([\/A-Za-z0-9\.]*\):\([0-9]*\)/\1/g'`
echo "spawn sftp -P ${port} ${SP_username}@${hostname}" > /tmp/except
echo "expect -nobrace \" password: \"" >> /tmp/except
echo "send \"${SP_password}\n\"" >> /tmp/except
echo "expect -nobrace \"\nsftp> \"" >> /tmp/except
echo "send \"ls weekly/\n\"" >> /tmp/except
echo "expect -nobrace \"\nsftp> \"" >> /tmp/except
echo "send \"bye\n\"" >> /tmp/except
echo "wait" >> /tmp/except
echo "exit" >> /tmp/except
Weekly_files=`expect /tmp/except | gawk '{if (/weekly\//) {print $0;}}' | sed 's/\r//g' | sed 's/weekly\///g'`
for Weekly in ${Weekly_files}; do
     lastSunday=`date --date="last Sunday" +%y%m%d`
     # keep only the weekly file dated last Sunday; the original
     # sed 's/${lastSunday}//' used single quotes (no variable expansion) and
     # its exit status was always 0, so grep -q gives the intended test
     if echo ${Weekly} | grep -q "${lastSunday}"; then
         Weekly_file=${Weekly}
     fi
done
# Step 2:
# get SP's last dated weekly*.tbz2 in weekly/ directory using sftp and 
# maintenance SSH account.
echo "spawn sftp -P ${port} ${SP_username}@${hostname}" > /tmp/except
echo "expect -nobrace \" password: \"" >> /tmp/except
echo "send \"${SP_password}\n\"" >> /tmp/except
echo "expect -nobrace \"\nsftp> \"" >> /tmp/except
echo "send \"cd weekly\n\"" >> /tmp/except
echo "expect \"\nsftp> \"" >> /tmp/except
echo "send \"lcd /tmp\n\"" >> /tmp/except
echo "expect -nobrace \"sftp> \"" >> /tmp/except
echo "send \"get ${Weekly}\n\"" >> /tmp/except
echo "expect -nobrace \"\nsftp> \"" >> /tmp/except
echo "send \"bye\n\"" >> /tmp/except
echo "wait" >> /tmp/except
echo "exit" >> /tmp/except
expect /tmp/except
Weekly_file=/tmp/${Weekly_file}
rm -f /tmp/except
}
#
# Function get_last_InSplore
# This function downloads today's InSplore file to /tmp.
#
# Input :
#    - full path of SP's Insplore file
#    - Index toward the SP's IP address or DNS name.
# Output :
#    - the Insplore filename prefixed with /tmp/
#
function get_last_InSplore
{
# Step 1:
# check SP for today's InSplore file using sftp and 
# maintenance SSH account.
#
port=`echo ${SP_hostname[$2]} | sed 's/\([\/A-Za-z0-9\.]*\):\([0-9]*\)/\2/g'`
hostname=`echo ${SP_hostname[$2]} | sed 's/\([\/A-Za-z0-9\.]*\):\([0-9]*\)/\1/g'`
echo "spawn sftp -P ${port} ${SP_username}@${hostname}" > /tmp/except
echo "expect -nobrace \" password: \"" >> /tmp/except
echo "send \"${SP_password}\n\"" >> /tmp/except
echo "expect -nobrace \"\nsftp> \"" >> /tmp/except
echo "send \"ls $1\n\"" >> /tmp/except
echo "expect -nobrace \"\nsftp> \"" >> /tmp/except
echo "send \"bye\n\"" >> /tmp/except
echo "wait" >> /tmp/except
echo "exit" >> /tmp/except
# the original gawk pattern '/$$1/' never expanded $1; a grep for the
# fixed "InSplore" filename prefix does what was intended
InSplore_file=`expect /tmp/except | grep "InSplore"`
#
# Step 2:
# get SP's today's InSplore file if it exists and using the 
# maintenance SSH account.
#
# the sftp "ls" output contains "not found" when the file does not exist;
# only download when the file was actually listed (the original sed/$? test
# always succeeded)
if ! echo ${InSplore_file} | grep -q "not found"; then
   echo "spawn sftp -P ${port} ${SP_username}@${hostname}" > /tmp/except
   echo "expect -nobrace \" password: \"" >> /tmp/except
   echo "send \"${SP_password}\n\"" >> /tmp/except
   echo "expect -nobrace \"sftp> \"" >> /tmp/except
   echo "send \"lcd /tmp\n\"" >> /tmp/except
   echo "expect -nobrace \"sftp> \"" >> /tmp/except
   echo "send \"get $1\n\"" >> /tmp/except
   echo "expect -nobrace \"sftp> \"" >> /tmp/except
   echo "send \"bye\n\"" >> /tmp/except
   echo "wait" >> /tmp/except
   echo "exit" >> /tmp/except
   expect /tmp/except
   InSploreFile=/tmp/$(basename ${InSplore_file})   # basename of the file actually found
else
   InSploreFile=""
fi
rm -f /tmp/except
}
#
# Start of main script body.
#
touch /tmp/smtp_body.txt
i=0
until [ $i == ${#SP_hostname[@]} ]; do
  if [[ ${SP_hostname[$i]} != "" ]]; then
      #
      # First step : get last weekly
      #
      get_last_weekly $i
      #
      # Second step: Isolate the SP identification number from the weekly file.
      #
      LastSunday=`date --date="last Sunday" +%y%m%d`
      Today=`date +%Y%m%d`
      SP_num=`echo ${Weekly_file} | sed 's/\([\/A-Za-z0-9]*\)_weekly_\([0-9]*\)_'"${LastSunday}"'\.tbz2/\2/g'`
      InSploreFile=${SP_num}/insplore/InSplore.*-${SP_num}.${Today}.*.tbz2
      #
      # Third step: download if any today's InSplore file
      #
      get_last_InSplore ${InSploreFile} $i
      #
      # Fourth step: prepare notification email
      #
      if [[ ${InSploreFile} != "" ]]; then
            InSploreFile=$(basename ${InSploreFile})
      fi
      #
      # Prepare mail notification
      #
      send_mail_notification ${InSploreFile} $(basename ${Weekly_file})
  fi
  let i+=1
done
#
# send mail to recipients
#
smtp -o ${smtp_login} -server ${smtp_server} -s "Weekly and Insplore transfers over" -f /tmp/smtp_body.txt ${smtp_destinees}
# do cleanup
rm -f /tmp/smtp_body.txt
exit 0


MORE ABOUT EXPECT:

expect has been written in some compiled code (very likely C language according 
to ldd command below) to produce an executable which is linked with the Tcl
binary library. Here is the proof on Linux Fedora 13 (i686 version):

[philippe@victor ~]$ file /usr/bin/expect
/usr/bin/expect: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, stripped

So, from the above, it is a 32-bit executable.

[philippe@victor ~]$ ldd /usr/bin/expect
        linux-gate.so.1 =>  (0x00737000)
        libexpect5.43.so => /usr/lib/libexpect5.43.so (0x00a03000)
        libtcl8.5.so => /usr/lib/libtcl8.5.so (0x00110000)
        libdl.so.2 => /lib/libdl.so.2 (0x009fc000)
        libm.so.6 => /lib/libm.so.6 (0x009d0000)
        libutil.so.1 => /lib/libutil.so.1 (0x06547000)
        libc.so.6 => /lib/libc.so.6 (0x00858000)
        /lib/ld-linux.so.2 (0x00836000)

So, from the above, it is linked with the libtcl library.


HP-UX and EXPECT:

For HP-UX, you may download expect from :
http://hpux.connect.org.uk/hppd/hpux/Tcl/expect-5.45/


REFERENCE(S):

As the SFTP -b option does not prompt for any passphrase/password, that approach
is unusable here. The ksh script excerpt above is based on one guest input at:
http://www.linux-bsd-central.com/index.php/content/view/26/

Thursday, October 24, 2013

3PAR System Reporter, Part 2

We left off with a Linux host that had been built and configured to begin the installation of 3PAR System Reporter. Let’s continue on and get System Reporter installed and configured.
As a recap from Part 1, we are building a System Reporter system in Linux. I have chosen to use RedHat Enterprise Linux 5.8, along with MySQL for the database backend. Both System Reporter and MySQL will be hosted on the same server.
So let’s recap some file locations first, as we will need them later on.
The 3PAR CLI was installed into /opt/3PAR/inform_cli_3.1.1. You should have 2 files copied from the System Reporter CD onto your system: sampleloop-<version>.i386.rpm and sysrptwebsrv-<version>.i386.rpm. At the time of writing, the current version was 2.9-2 for both sampleloop and sysrptwebsrv, but make sure to check with your 3PAR representative for the most recent versions.
We begin by making some modifications to the MySQL installation.
  1. Open /etc/my.cnf with your editor of choice (mine is vi )
    1. Comment out the socket= line, and add the line below underneath:
      1. socket=/var/run/mysqld/mysqld.sock
    1. Above [mysqld_safe], add the following line:
      1. max_allowed_packet=32M
    2. Save the file, and then restart mysql
      1. /etc/init.d/mysqld stop
      2. /etc/init.d/mysqld start
  2. Check to make sure the new socket file exists
    1. ls -al /var/run/mysqld/mysqld.sock
Now we are going to create the System Reporter database and users, and then grant those users access to the database.
  1. Connect to your MySQL database
    1. mysql -u root -p
      • when prompted, enter the password you defined for the root user
    1. Create the inservstats database:
      • create database inservstats;
    2. Create the cliuser and webuser users
      • create user cliuser identified by '3psrcli';
      • create user webuser identified by '3parweb';
    3. Grant privileges to the System Reporter users, by issuing the below commands:
      • use inservstats
      • grant all on * to cliuser;
      • grant select on * to webuser;
    4. Quit out of the database:
      • exit
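
The same database setup can also be done non-interactively from the shell; a sketch (the root password is still entered at the prompt):

mysql -u root -p <<'SQL'
CREATE DATABASE inservstats;
CREATE USER cliuser IDENTIFIED BY '3psrcli';
CREATE USER webuser IDENTIFIED BY '3parweb';
USE inservstats;
GRANT ALL ON * TO cliuser;
GRANT SELECT ON * TO webuser;
SQL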

Installing the SampleLoop applications

With the database and users created, we are able to start installing the software. Before we start, make sure the 32-bit package of GD is installed, by issuing:
yum -y install gd.i386
You should get a prompt back saying it's installed and the latest version; otherwise it will install the missing package(s).

  1. Install the sampleloop package, which you copied from the System Reporter CD earlier
    • rpm -ivh sampleloop-<version>.i386.rpm
      (substitute <version> with your version)
  2. Edit the /etc/sampleloop.conf configuration file by opening it with your favorite editor. You will need to perform the following edits:
    • Change the line “set Sysdb::cli” to match your installed CLI location, mine was /opt/3PAR/inform_cli_3.1.1/bin/cli
    • Change the line “set Sysdb::smtpserver” to your local mail server address
    • Change the line “set Sysdb::smtporig” to the FROM email address System Reporter should use when sending you emails.
    • Change the “set Sysdb::smtpuser” and “set Sysdb::smtppasswd” to their respective user and password values, if your mail server requires you to log in to send mail. If these are not needed, simply comment them out by putting a # in front of the lines.
    • Change the line “set Sysdb::dbhost” to the IP address of your MySQL server
    • Change the line “set Sysdb::dbname” to the database name you created above. In my example, we used “inservstats”
  3. Now we will need to create the password file used by the sampleloop mechanism to access the CLI. Open/Create the file “/etc/sampleloop_dbpwfile”
    • On the first line, enter the username and password for the cli user, separated by a single space.
      • ie. “cliuser 3psrcli”
    • Save the file, and exit out of the editor.
  4. You are now ready to startup the SampleLoop processes.
    • run the below script:
      1. /etc/init.d/sampleloop startup
    • If this fails to start, check the log located in /var/log/sampleloop/sampleloop.log for hints on what went wrong. It’s usually a password mistyped.
  5. If everything starts successfully, add sampleloop to the default startup runlevels
    • chkconfig --add sampleloop
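
Steps 3 to 5 condensed into a few shell lines, as a sketch (the chmod is my own addition to keep the password file private; the credentials are the cliuser pair created earlier):

echo "cliuser 3psrcli" > /etc/sampleloop_dbpwfile
chmod 600 /etc/sampleloop_dbpwfile
/etc/init.d/sampleloop startup
tail /var/log/sampleloop/sampleloop.log    # check for startup errors
chkconfig --add sampleloop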

Installing the System Reporter Webserver Files

Now we have the database and users created, sampleloop applications installed, and SampleLoop up and running. The next step is to install the sysrptwebsrv RPM package. This package is found on the System Reporter CD, and should be copied on to your Linux system.
  1. To install the packages, run the below:
    • rpm -ivh sysrptwebsrv-2.9-2.i386.rpm
    • Ensure it was installed successfully.
  2. We now need to setup two config.tcl files for the webserver.
    1. The first is at /var/www/cgi-bin/3par-rpts/config.tcl. Open this file in your favorite editor, and make the below changes:
      • Change the line “set Sysdb::cli” to match your installed CLI location, mine was /opt/3PAR/inform_cli_3.1.1/bin/cli
      • Change the line “set Sysdb::dbhost” to the IP address of your MySQL server
      • Change the line “set Sysdb::dbname” to the database name you created above. In my example, we used “inservstats”
      • Change the line “set Sysdb::dbuser” to webuser
      • Change the line “set Sysdb::dbpasswd” to the password you set for webuser, in my example it was 3parweb.
      • Change the line “set Sysdb::smtpserver” to your local mail server address
      • Change the line “set Sysdb::smtporig” to the FROM email address System Reporter should use when sending you emails.
      • Change the “set Sysdb::smtpuser” and “set Sysdb::smtppasswd” to their respective user and password values, if your mail server requires you to log in to send mail. If these are not needed, simply comment them out by putting a # in front of the lines.
    2. The second file is located at /var/www/cgi-bin/3par-policy/config.tcl. Open this file in your favorite editor, and make the below changes:
      • Change the line “set Sysdb::cli” to match your installed CLI location, mine was /opt/3PAR/inform_cli_3.1.1/bin/cli
      • Change the line “set Sysdb::dbhost” to the IP address of your MySQL server
      • Change the line “set Sysdb::dbname” to the database name you created above. In my example, we used “inservstats”
      • Change the line “set Sysdb::dbuser” to cliuser
      • Change the line “set Sysdb::dbpasswd” to the password you set for cliuser, in my example it was 3psrcli.

Test your System Reporter setup

You should now have a working System Reporter configuration. You can access System Reporter by going to:
http://<your_server>/3par/
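A quick sanity check from the shell that the web front end answers (a sketch; substitute your server name):

curl -I http://<your_server>/3par/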
Part 3 of my System Reporter series will involve adding the Inserv system to System Reporter, as well as setting some basic options up, and going over the various tabs available to you.

3PAR System Reporter, Part 1

System Reporter is a 3PAR tool that provides informative reporting about your 3PAR arrays, as well as being able to build custom reports, and schedule daily execution and email of any report.
System Reporter is also required to be installed, in order to use Adaptive Optimization, as System Reporter is used to determine which chunklets are to be migrated between which CPG’s. All the Adaptive Optimization configuration is done from within System Reporter’s web interface.
System Reporter can be installed on Windows 2003, Windows 2008, or RedHat Enterprise Linux 5. It may use Microsoft SQL, MySQL, Oracle, or SQLite as its backend database, although there are restrictions on each database. MySQL seems to be the recommended database to use, as it has the fastest query times due to its MyISAM structure, and it has minimal restrictions across the server Operating Systems.
I will be installing and configuring System Reporter 2.8 on a RedHat Enterprise Linux 5.8 virtual machine. I will be using MySQL as the database backend, which will be installed on the same host virtual machine. In part 2 of this series, I will be installing and configuring System Reporter. Part 3 will be configuring and adding a 3PAR InServ system, in my case a T400. Part 4 will be setting up Adaptive Optimization. Part 5 will be all about reports, and how to create custom and scheduled reports.

System Requirements

System Reporter has the below requirements for its host system:
  • CPU: Pentium 4, 3Ghz or faster
  • Memory: 1GB
  • Disk: 20GB free space
Please make sure your host system meets or exceeds these requirements, prior to installation.

Install Linux

The first step is to make sure you have a freshly installed and updated RedHat EL 5 installation. I chose to use the normal installation, and deselected anything I didn't want. I left the Apache webserver installed, as well as X11, but removed all Window Managers. After fully updating the installation with 'yum -y update && yum -y upgrade' I began the rest of my installation.

Install Apache Webserver

From the command line, issue the below command:
yum -y install httpd
This will install the Apache Webserver, if it has not already been installed.

Install 3PAR CLI

The System Reporter installation CD has both a Windows and a Linux directory, which contain all files necessary for installation. In my example, we are using Linux, so to install the CLI, you will need to copy the Linux/CLI/setup.bin file from the CD, on to your Linux host.
Once the file has been copied, change its permissions to be executable, and run the file:
 chmod +x setup.bin
./setup.bin
You will be prompted with several questions, including where to install the CLI. I would recommend using the /opt/ location it defaults to. Make sure you write down this location, as it will be needed later in the installation. Once the installation is completed, it's time to move on.

Install MySQL

Next, you need to install MySQL. Connect to your Linux server with ssh, and issue the command:
yum -y install mysql-server
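Once the package is installed, start the service and enable it at boot; a sketch:

/etc/init.d/mysqld start
chkconfig mysqld on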
Once the installation is completed, you will need to set the root database password.
To do this, issue the below commands:
mysqladmin -u root password "NEWPASSWORD"
mysqladmin -u root -h localhost -p password "NEWPASSWORD"

Installing System Reporter

Now that your system is ready, you can move on to Part 2, Installing System Reporter. Part 2 will be posted within a few days.
If you have built your base system as a Virtual Machine, now would be a great time to take a snapshot, or build a template for future installations.

Monday, September 16, 2013

Get quick, free stats from your vSphere environment with RVTools

RVTools is a FREE Windows .NET 2.0 application which uses the VI SDK to display information about your virtual machines and ESX/ESXi hosts. RVTools works with:
  • VirtualCenter/vCenter 2.5 and higher (including vCenter 5.1)
  • ESX/ESXi/vSphere 3.5 and higher (including vSphere 5.1)
RVTools is able to list information about:
  • Virtual machines
  • CPU
  • Memory
  • Disks
  • Partitions
  • Network
  • Floppy drives
  • CD/DVD drives
  • Snapshots
  • VMware tools
  • Resource pools
  • ESX hosts
  • HBAs
  • NICs
  • Switches
  • Ports
  • Distributed Switches
  • Distributed Ports
  • Service consoles
  • VM Kernels
  • Datastores
With RVTools you can disconnect the CD-ROM or floppy drives from the virtual machines and RVTools is able to update the VMware Tools installed inside each virtual machine to the latest version.
Download RVTools here.

Sunday, September 15, 2013

3 Command line tool to test bandwidth between 2 servers

One element that is often not known, or that should be measured after a problem report or after a change in the infrastructure, is the network. But how do you accurately measure the speed between two servers?
Some use ftp, scp or other file transfer protocols; these can give some indication, but you will probably end up measuring the limit of your disks or CPU.
In this article I will show you 3 ways to measure the bandwidth from the command line, without using the disks.



Iperf

Iperf was developed by NLANR/DAST as a modern alternative for measuring maximum TCP and UDP bandwidth performance. Iperf allows the tuning of various parameters and UDP characteristics. Iperf reports bandwidth, delay jitter, and datagram loss.
The quality of a link can be tested as follows:
- Latency (response time or RTT): can be measured with the Ping command.
- Jitter (latency variation): can be measured with an Iperf UDP test.
- Datagram loss: can be measured with an Iperf UDP test.
The bandwidth is measured through TCP tests.
To be clear, the difference between TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) is that TCP checks that the packets are correctly delivered to the receiver, whereas with UDP the packets are sent without any checks, with the advantage of being quicker than TCP.
Iperf uses the different capacities of TCP and UDP to provide statistics about network links.
With Iperf you have a server machine, where iperf puts itself in listening mode, and a client machine that sends the information.
Basic usage:
Server side:

#iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[852] local 10.1.1.1 port 5001 connected with 10.6.2.5 port 54355
[ ID]   Interval          Transfer        Bandwidth
[852]   0.0-10.1 sec   1.15 MBytes   956 Kbits/sec
------------------------------------------------------------
Client connecting to 10.6.2.5, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[824] local 10.1.1.1 port 1646 connected with 10.6.2.5 port 5001
[ ID]   Interval          Transfer        Bandwidth
[824]   0.0-10.0 sec   73.3 MBytes   61.4 Mbits/sec
Client side:

#iperf -c 10.1.1.1 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.1.1.1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 5] local 10.6.2.5 port 60270 connected with 10.1.1.1 port 5001
[ 4] local 10.6.2.5 port 5001 connected with 10.1.1.1 port 2643
[ 4] 0.0-10.0 sec 76.3 MBytes 63.9 Mbits/sec
[ 5] 0.0-10.1 sec 1.55 MBytes 1.29 Mbits/sec
So using Iperf (with appropriate flags) on both our machines we can simply measure the bandwidth between them.
Iperf is also available for Windows.
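For the UDP jitter and datagram-loss tests mentioned earlier, a sketch (the -b value is an arbitrary example rate):

iperf -s -u                      # server side, UDP mode
iperf -c 10.1.1.1 -u -b 10M      # client side, send a 10 Mbit/s UDP stream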
Complete guide: http://openmaniak.com/iperf.php

Netcat

To eliminate the disks from having any part in the transfer, we will use netcat, transferring the output of the command yes. Netcat is described as being a “feature-rich network debugging and exploration tool”. It can be obtained from SourceForge, or it may already be available in your distribution.
Again we will use one of the machines as a server that receives the data and the other as a client that sends the information.
Basic usage
On the server machine:
nc -v -v -l -n 2222 >/dev/null
listening on [any] 2222 ...
On the client machine
time yes|nc -v -v -n 10.1.1.1 2222 >/dev/null
On the client, stop the process after 10 seconds (more or less) with ctrl-c; you'll get something like:
sent 87478272, rcvd 0
 
real 0m9.993s
user 0m2.075s
sys 0m0.939s
On the server machine, note the data received (in bytes)
 sent 0, rcvd 87478392
Now multiply the bytes rcvd by 8 to get total bits, then divide by the time: the result in this example is 70 Mbit/s.
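The arithmetic can be done in one line; a sketch using the numbers from the run above:

echo "87478392 9.993" | awk '{printf "%.1f Mbit/s\n", $1*8/$2/1000000}'
# prints: 70.0 Mbit/s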
Reference: http://deice.daug.net/netcat_speed.html

Bandwidth Test Controller (BWCTL)

BWCTL is a command line client application and a scheduling and policy daemon. These tests can measure maximum TCP bandwidth, with various tuning options available, or, by doing a UDP test, the delay, jitter, and datagram loss of a network.
The bwctl client application works by contacting a bwctld process on the two test endpoint systems. BWCTL will work as a 3-party application. The client can arrange a test between two servers on two different systems. If the local system is intended to be one of the endpoints of the test, bwctl will detect whether a local bwctld is running and will handle the required server functionality if needed.
The bwctl client is used to request the type of throughput test wanted. Furthermore, it requests when the test should be executed. bwctld either responds with a tentative reservation or a test denied message. Once bwctl is able to get a matching reservation from both bwctld processes (one for each host involved in the test), it confirms the reservation. Then, the bwctld processes run the test and return the results. The results are returned to the client from both sides of the test. Additionally, the bwctld processes share the results from their respective sides of the test with each other.
For more information check the man page: http://www.internet2.edu/performance/bwctl/manpages.html
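A minimal invocation looks like the sketch below, assuming bwctld is reachable on both endpoints (host names are placeholders):

bwctl -c receiver.example.com                          # local host sends to receiver
bwctl -s sender.example.com -c receiver.example.com    # third-party test between two remote hosts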

Friday, September 13, 2013

Reclaiming Thin Provisioned Storage on Windows Using Sdelete

Many storage arrays such as HP’s 3Par line, Datacore, and others have the ability to thin provision storage.  VMware also has the ability to thin provision virtual disks (VMDK files).  Over time the storage utilization increases as data is created, fragmented, and moved.  When data is deleted the storage array does not automatically free up the disk space in the storage pool.  Block based storage arrays don’t have knowledge of what is using the LUN.  It could be a raw device or have a file system such as NTFS, VMFS, EXT3, VXFS, etc.  In order for most block based storage arrays to reclaim thin provisioned storage, the storage array has to find a zero bit pattern in an allocation unit of storage.  That way the array knows that removing an allocation unit from a LUN and returning it for reuse in the storage pool does not result in lost data.  Datacore by default divides up its storage into 128MB segments called Storage Allocation Units, or SAUs for short.  3Par uses a unit called a Chunklet, which is 256MB.  If the array detects only zeros in an allocation unit, it will release it from the volume to which it is assigned and return it to the storage pool for reuse.
On Microsoft Windows the commonly accepted method to write a zero bit pattern to the free space is to use the Sysinternals utility called sdelete.  However, a change was made to the command line switches that affects the behavior.  With sdelete v1.51 the “-c” switch wrote a zero bit pattern to free space, and the “-z” switch wrote a series of ones and zeros to securely overwrite the free space.  With sdelete version 1.6 the functionality has been reversed.
If you are trying to reclaim thin provisioned space with sdelete version 1.51 or earlier you need to use the “-c” switch.  For version 1.6 you need to use the “-z” switch.  In case it changes in the future, you are looking for the switch referenced in the sdelete command line help that reads “Zero free space (good for virtual disk optimization)”.
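In command form, with d: as an example drive:

sdelete -c d:    (version 1.51 and earlier: zero free space)
sdelete -z d:    (version 1.6 and later: zero free space)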

Monday, September 9, 2013

WINDOWS - CYGWIN - Backup with RSYNC one drive to another, clone drive contents with rsync

Note this is not really a clone; clones work more with images, in my opinion. They work directly on the block level instead of the file level.

I give you this, which is my own personal backup strategy for my external drives. I don't mind if you know this; I changed some of the words in my directories. Big woop.

One thing I don't cover is how I set up my scheduled tasks with Windows Task Scheduler, because I hate describing click by click; you can google your way to how to set it up. I just used the default options that made sense, and I run this script a couple times a week, on Tuesday and Sunday.

4 Sections here:
1. This intro so this is the end of section 1
2. How to set up the main backup scripts, the actual meat of the backup job
3. OPTIONAL: watch progress
4. OPTIONAL: set up a progress script for the very first job (assuming the target was empty at first; either way it measures speed well)
 .....
 
Original text is HERE

How do I use Robocopy to copy ACLs without copying data?

ROBOCOPY can be used to copy the ACLs of existing files without copying the files, but the documentation is NOT obvious on how to accomplish this.

To copy the security information for files that exist at both the source and destination, and to NOT copy the files, use:

robocopy [Source] [Destination] /secfix /xo /xn /xc

To mirror a [Source] folder on a [Destination] folder:

robocopy [Source] [Destination] /secfix /xo /xn /xc [Other parameters]
robocopy [Source] [Destination] /secfix [Other parameters]

Backup and Restore NTFS permissions with icacls

Icacls is a simple command line utility to back up and restore or apply new NTFS permissions. I use this tool mostly to back up NTFS permissions before I make major changes to the current NTFS ACLs. This command line utility is available on Server 2003 SP2 or higher, and also on Server 2008 & Windows 7.
To get help or get examples just type icacls in command prompt.
Example: Backup NTFS permissions
icacls “D:\HomeTest” /save “c:\Temp\ntfsbackup.txt” /t /c

In this example we back up all permissions of “D:\HomeTest” and save them in “c:\Temp”. The /T switch makes it also get the permissions of subfolders; the /C switch makes it continue even if errors occur.
Example: Restore NTFS permissions
icacls d:/ /restore c:\Temp\ntfsbackup.txt

It is not necessary to mention the destination folder, because it is already included in the backup file; this is very important to know. It is sufficient to specify the destination drive.
From my experience it is possible to back up a complete drive like “e:/”, but you can't restore without specifying a drive letter, so my advice is to always specify a folder instead of a complete drive.
More info: Technet

Thursday, July 25, 2013

How To Backup ESXi Configuration

Backing up your ESXi Configuration:

 To back up your ESXi configuration you'll be using the vicfg-cfgbackup.pl command as follows:
  • Download either the vMA or vCLI
  • Launch vicfg-cfgbackup.pl:

    C:\Program Files\VMware\VMware vSphere CLI\bin>vicfg-cfgbackup.pl --server <ESXi_host> -s name_of_the_server.bak

  • Note: The backup will be stored at path:
    C:\Program Files (x86)\VMware\VMware vSphere CLI

Restoring your ESXi Configuration:

Restoring your ESXi config can be done after you have the host up and responding over the network again by using the following:

C:\Program Files\VMware\VMware vSphere CLI\bin>vicfg-cfgbackup.pl --server <ESXi_host> -l name_of_the_server.bak

Note: You will be asked to reboot the host on restore.
Backing up multiple hosts! There is a script to backup multiple ESXi hosts on the VMware communities site here. Also in PowerCLI here!

Friday, July 19, 2013

cfg2html on Solaris - OS configuration Backup

cfg2html is a very useful script to take a backup of all the system configuration in text format and html format. The script is available for Solaris, various Linux flavors and HP-UX.
For more information about the script, please visit
http://groups.yahoo.com/group/cfg2html.

Once you run the script, by default it will generate three files.
1. System configuration in text format
2. System configuration in html format
3. Script Error log

These configuration backup files are very useful to rebuild the server from scratch. But make sure you have the latest configuration backup by running cfg2html periodically, and keep the output in another location or web portal for future reference.

Here is the script, which you can download and use for Solaris 10.
Download cfg2html (click on the File tab -> select Download).


Download cfg2html and keep it in /var/tmp/

#cd /var/tmp
#tar -xvf cfg2html_solaris10_v1.0.tar
#cd cfg2html_solaris_10v1.0
bash-3.00# ls -lrt
total 56
-rwx------1 root root 24796 Jul 18 14:46 cfg2html_solaris_10v1.0
drwx------2 root root 11 Jul 18 14:46 plugins

bash-3.00# ./cfg2html_solaris_10v1.0
-------------------------------------------------
Starting          cfg2html_solaris_10 version 1.0 on a SunOS 5.10 i86pc
Path to cfg2html  ./cfg2html_solaris_10v1.0
Path to plugins   ./plugins
HTML Output File  /Desktop/cfg2html_solaris10_v1.0/cfg2html_solaris10_v1.0/sfos_cfg.html
Text Output File  /Desktop/cfg2html_solaris10_v1.0/cfg2html_solaris10_v1.0/sfos_cfg.txt
Errors logged to  /Desktop/cfg2html_solaris10_v1.0/cfg2html_solaris10_v1.0/sfos_cfg.err
Started at        2012-07-18 14:46:51
-------------------------------------------------
-------------------------------------------------
Collecting:  System Hardware and Operating System Summary  ..
Collecting:  Disk Device Listing  ....
Collecting:  Host-Bus Adapters (HBAs)  ...
Collecting:  Solaris Volume Manager (SVM)  ......
Collecting:  Local File Systems and Swap  ......
Collecting:  NFS Configuration  .....
Collecting:  Zone/Container Information  ......
Collecting:  Network Settings  ..........
Collecting:  EEPROM  ....
Collecting:  Cron  ...
Collecting:  System Log  ..
Collecting:  Resource Limits  .....
Collecting:  Services  ....
Collecting:  VxVM  ...........
Collecting:  VxFS  ..
-------------------------------------------------

bash-3.00# ls -lrt
total 337
-rwx------1 root root  24796 Jul 18 14:46 cfg2html_solaris_10v1.0
drwx------2 root root          11 Jul 18 14:46 plugins
-rw-r--r--   1 root     root   732 Jul 18 14:47 sfos_cfg.err
-rw-r--r--   1 root     root   77295 Jul 18 14:47 sfos_cfg.html
-rw-r--r--   1 root     root   65492 Jul 18 14:47 sfos_cfg.txt
bash-3.00# uname -a
SunOS sfos 5.10 Generic_142910-17 i86pc i386 i86pc

You can copy the output to Windows and upload it to the desired location.

You can add cfg2html to a cron job to run it periodically.

# export EDITOR=vi
# crontab -e
Add the below lines in the end of the file.
00 23 15 * * /var/tmp/cfg2html_solaris10_v1.0/cfg2html_solaris_10v1.0  > /dev/null 2> /dev/null
00 23 01 * * /var/tmp/cfg2html_solaris10_v1.0/cfg2html_solaris_10v1.0  > /dev/null 2> /dev/null

Save the file & exit. The above jobs will run cfg2html on the 1st and 15th of the month at 11 PM.

cfg2html on Solaris - OS configuration Backup

at 15:22 Lingeswaran R Solaris 10

cfg2html is very use full script to take all the system configuration backup in text format and html format.This script is available for Solaris,various Linux flavors and HP-Unix.
For more information about the script,please visit 
http://groups.yahoo.com/group/cfg2html.

Once you run the script by default it will generate three files. 
1. System configuration in text format  
2. System configuration in html format
3. Script Error log

These configuration backup files are very useful to build the server from scratch.But we have to  make sure you have latest configuration backup by running cfg2hmtl periodically and keep the output in other location or web portal for future reference.


Here is the script which you can download it and use it for Solaris 10.

Download cfg2html  Click on File tab- > Select Download 


Download the cfg2html and keep in /var/tmp/
#cd /var/tmp
#tar -xvf cfg2html_solaris10_v1.0.tar
#cd cfg2html_solaris_10v1.0
bash-3.00# ls -lrt
total 56
-rwx------1 root root 24796 Jul 18 14:46 cfg2html_solaris_10v1.0
drwx------2 root root 11 Jul 18 14:46 plugins

bash-3.00# ./cfg2html_solaris_10v1.0
-------------------------------------------------
Starting          cfg2html_solaris_10 version 1.0 on a SunOS 5.10 i86pc
Path to cfg2html  ./cfg2html_solaris_10v1.0
Path to plugins   ./plugins
HTML Output File  /Desktop/cfg2html_solaris10_v1.0/cfg2html_solaris10_v1.0/sfos_cfg.html
Text Output File  /Desktop/cfg2html_solaris10_v1.0/cfg2html_solaris10_v1.0/sfos_cfg.txt
Errors logged to  /Desktop/cfg2html_solaris10_v1.0/cfg2html_solaris10_v1.0/sfos_cfg.err
Started at        2012-07-18 14:46:51
-------------------------------------------------
-------------------------------------------------
Collecting:  System Hardware and Operating System Summary  ..
Collecting:  Disk Device Listing  ....
Collecting:  Host-Bus Adapters (HBAs)  ...
Collecting:  Solaris Volume Manager (SVM)  ......
Collecting:  Local File Systems and Swap  ......
Collecting:  NFS Configuration  .....
Collecting:  Zone/Container Information  ......
Collecting:  Network Settings  ..........
Collecting:  EEPROM  ....
Collecting:  Cron  ...
Collecting:  System Log  ..
Collecting:  Resource Limits  .....
Collecting:  Services  ....
Collecting:  VxVM  ...........
Collecting:  VxFS  ..
-------------------------------------------------

bash-3.00# ls -lrt
total 337
-rwx------1 root root  24796 Jul 18 14:46 cfg2html_solaris_10v1.0
drwx------2 root root          11 Jul 18 14:46 plugins
-rw-r--r--   1 root     root   732 Jul 18 14:47 sfos_cfg.err
-rw-r--r--   1 root     root   77295 Jul 18 14:47 sfos_cfg.html
-rw-r--r--   1 root     root   65492 Jul 18 14:47 sfos_cfg.txt
bash-3.00# uname -a
SunOS sfos 5.10 Generic_142910-17 i86pc i386 i86pc
You can copy to windows and upload it in desired location .

you can add the cfg2hmtl  in cronjob to run cfg2html periodically .
# export EDITOR=vi
# corntab -e
Add the below lines in the end of the file.
00 23 15 * * /var/tmp/cfg2html_solaris10_v1.0/cfg2html_solaris_10v1.0  > /dev/null 2> /dev/null
00 23 01 * * /var/tmp/cfg2html_solaris10_v1.0/cfg2html_solaris_10v1.0  > /dev/null 2> /dev/null
save the file & exit.The above job will run cfg2html 1st and 15th of the month at 11PM .
- See more at: http://www.unixarena.com/2012/07/cfg2html-on-solaris-os-configuration.html#sthash.jbCYLK2o.dpuf

Tuesday, July 9, 2013

HOW-TO get MP IP address from HP-UX

I found out that there is now a possibility to get the IP of the MP/GSP from HP-UX servers, just in case you forgot it or can't physically access the server:

You need to have the sfm product installed:

SFM-CORE B.05.00.05 HPUX System Fault Management

It comes standard with the 11.23 FOE, so you probably have it by default.

Then you just launch:

 #/opt/sfm/bin/CIMUtil -e root/cimv2 HP_ManagementProcessor  
 Instance 0 :  
 UniqueIdentifier : 0.19.33.124.117.193  
 ControllerType : 3  
 OtherControllerType :  
 IPAddress : 10.10.11.250 <------ MP IP  
 URL : http://10.10.11.250 <------ MP IP  
 Dedicated : 14  
 CreationClassName : HP_ManagementProcessor  
 Name : Management Processor  
 EnabledState : 2  
 OperationalStatus : 2  

Source: http://www.hpuxtips.es/?q=node/175

Wednesday, June 5, 2013

"Change product key" link is not available in Windows 8 or in Windows Server 2012

Original KB text on location: http://support.microsoft.com/kb/2750773

Symptoms
When you try to change the product key in Windows 8 or in Windows Server 2012, you cannot find a "Change product key" link in the System item in Control Panel.

For example, you want to convert a default setup product key to a Multiple Activation Key (MAK) on a computer that is running Windows 8. However, you cannot find an element in the UI that lets you change the product key.


Cause
This issue occurs because the "Change product key" link is not displayed if Windows 8 or Windows Server 2012 is not activated.


Resolution
To change the product key without first activating Windows, use one of the following methods:
Method 1

    Swipe in from the right edge of the screen, and then tap Search. Or, if you are using a mouse, point to the lower-right corner of the screen, and then click Search.
    In the search box, type Slui.exe 0x3.
    Tap or click the Slui.exe 0x3 icon.
    Type your product key in the Windows Activation window, and then click Activate.

Method 2

Run the following command at an elevated command prompt:
Cscript.exe %windir%\system32\slmgr.vbs /ipk <your product key>
Note You can also use the Volume Activation Management Tool (VAMT) 3.0 to change the product key remotely, or if you want to change the product key on multiple computers. 




Monday, June 3, 2013

VMWare Zimbra - How to renew certificate after 365 days

This summary is not available. Please click here to view the post.

Saturday, May 25, 2013

BlueScreenView - BSOD viewer


If you've used Windows for any length of time, chances are you've seen a Blue Screen of Death. You're lucky when it's only a bad driver and your system reboots politely. When you're unlucky, it could be something more serious, such as a hardware failure. Either way, it's a sign of system instability. Unfortunately, a BSOD is usually cryptic. Factor in the fact that Windows usually reboots itself automatically within a short period of time--with no assurance that you won't get the same error--and you can see the need for BlueScreenView v1.1.
BSV opens, displays, and interprets the data saved in minidump (*.dmp) files, which are usually found in C:\Windows\Minidump after a BSOD. You must have Windows set to save a "small memory dump", which may be done in My Computer\Properties\Advanced\Startup and Recovery\Settings in XP, for instance. In XP, this is generally the default behavior.
Once you've pointed BSV at the minidump folder (you can't just drag and drop or open a minidump file with the program) you'll be able to see the exact programs and dlls involved in the crash, all the dlls running at the time, or even a simulated BSOD. For experienced users this helps diagnose the cause of the crash in a minimal amount of time. For less savvy users, you'll be able to tell the more-savvy user you're talking to on the phone more about what happened.
This little utility has found a home in my toolkit. I don't see as many BSODs as I used to, but when I do, BlueScreenView makes it a snap to see the dump info. Stay stable, my friends.

Download link: http://downloads.pcworld.com/pub/new//utilities/bluescreenview_setup.exe

Thursday, May 16, 2013

Time Machine for every Unix out there

Original article can be found here: http://blog.interlinked.org/tutorials/rsync_time_machine.html

Using rsync to mimic the behavior of Apple's Time Machine feature

rsync is one of the tools that have gradually infiltrated my day to day tool-box (aside Vim and Zsh).
Using rsync it’s very easy to mimic Mac OS X’s new feature called Time Machine. In this article I’ll show how to do it, but a nice GUI is still missing, for those who like it shiny.

What Time Machine does

Time Machine makes a snapshot of your files every hour. The files are usually stored on an external hard drive connected to your Mac via USB or Firewire. Earlier Leopard versions (ADC preview versions) had the ability to make the backups to a remote drive (I’ve heard).
So if you lose a file, or did a devastating change to one of your files, simply go back in time until you find your file or a version that’s not corrupted.
Incrementally backing up all files every hour so that you can access them in reversed chronological order isn’t that hard with standard Unix utilities like rsync. The only missing thing is a nice GUI for which Apple is known to be quite good at.

Making full backups in no time every hour

You can use this method to make a backup every hour or every ten minutes if you like. There are many many features you can tune or configure to your own taste – excluding files that are larger than 1GB for example.
So, here the command to make the backup:
rsync -aP --link-dest=PATHTO/$PREVIOUSBACKUP $SOURCE $CURRENTBACKUP
Lets go through the parameters step by step.
  • -a means Archive and includes a bunch of parameters to recurse directories, copy symlinks as symlinks, preserve permissions, preserve modification times, preserve group, preserve owner, and preserve device files. You usually want that option for all your backups.
  • -P allows rsync to continue interrupted transfers and show a progress status for each file. This isn’t really necessary but I like it.
  • --link-dest this is a neat way to make full backups of your computers without losing much space. rsync links unchanged files to the previous backup (using hard-links, see below if you don’t know hard-links) and only claims space for changed files. This only works if you have a backup at hand, otherwise you have to make at least one backup beforehand.
  • PATHTO/$PREVIOUSBACKUP is the path to the previous backup for linking. Note: if you delete this directory, no other backup is harmed because rsync uses hard-links and the operating system (or filesystem) takes care of releasing space if no link points to that region anymore.
  • $SOURCE is the directory you’d like to backup.
  • $CURRENTBACKUP is the directory to which you’d like to make the backup. This should be a non-existing directory.
As said earlier, rsync has many many features. To exclude files over a certain size for example, use the option --max-size (unfortunately this is not available on the rsync version shipped with Mac OS X Leopard). The man page or the documentation can give you plenty of ideas in this direction.
So much for the theory of the most important command for our purpose. Here a simple script that makes an incremental backup every time you call it:
#!/bin/sh

date=`date "+%Y-%m-%dT%H:%M:%S"`
rsync -aP --link-dest=$HOME/Backups/current /path/to/important_files $HOME/Backups/back-$date
rm -f $HOME/Backups/current
ln -s back-$date $HOME/Backups/current
The script creates a directory named “back-” followed by the current date and time, for example back-2007-11-13T22:03:32, which contains the full backup. Then there is a symbolic link called “current” which points to the most recent backup directory. This directory link is used for the --link-dest parameter.
You should look at the --exclude parameter (or better, --exclude-from= parameter) and learn how to exclude certain files or directories from the backup (you shouldn’t backup your backup for example).
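A sketch of what such an exclude file might contain (the paths are examples only):

# $HOME/.rsync/exclude -- one pattern per line
.cache/
.Trash/
Backups/
*.tmp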
The script above only works on the local machine because making links on a remote machine needs some extra work. But not much:
#!/bin/sh

date=`date "+%Y-%m-%dT%H:%M:%S"`
rsync -azP --link-dest=PATHTOBACKUP/current $SOURCE $HOST:PATHTOBACKUP/back-$date
ssh $HOST "rm -f PATHTOBACKUP/current && ln -s back-$date PATHTOBACKUP/current"
The -f parameter for the rm command is used to suppress error messages if the current directory is not present, which would in turn prevent the link from being created.
To get that working you either use a public/private key authentication scheme or something else to avoid typing in your password. Another possibility is, of course, to mount the remote file-system on the local computer using the above script.
On my setup the script takes about 6 seconds to synchronize 46968 files and 29GB – this takes 20MB for the file structure (with no actual files to transfer of course). But afterwards, I have a complete backup of my system in a new directory.
On a much bigger setup (1.2 million files and 50GB of data) the backup takes about 30 minutes and takes about 3GB of space (just for links!), so it isn’t exactly free, but very convenient.
The space needed for the backup is determined by the shape of your directory structure. On the larger setup I have lots of Maildirs and a very deep directory structure so it takes much more space than my home-directory backup above. 3GB is quite a lot, but 20MB doesn’t hurt.

Advanced rsync parameters

In addition to the parameters described above, I usually employ a combination of these parameters in my backups:
  • --delete and --delete-excluded this tells rsync to remove files from my backups either if they are gone on my local machine, or if I decided to exclude them from my backup.
  • --exclude-from=FILE the file specified here is a simple list of directories or files (one per line) which should not be backed up. My Trash folder or some .cache folders are candidates for this file.
  • -P is used to give more information on how far the backup is, and how many files are to be backed up. Additionally, it can resume an interrupted transfer (which doesn’t apply here because we create a blank backup each time we call the script).
  • -x this one is important because it prohibits rsync to go beyond the local filesystem. For example if you backup you Linux-root partition, you should not include the /proc directory because rsync will get stuck in it. -x excludes all mounted filesystems from the backup which is probably what you want in most cases.
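A hypothetical exclude file for this setup might look like this (the patterns are examples, not required names):
# $HOME/.rsync/exclude: one pattern per line
.cache/
.Trash/
Backups/
*.tmp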

Hard-Links

Each file in a directory is a link to the actual data on your hard-disk. The filesystem keeps track of how many links point to an area of data, and only when the last link is deleted is the data itself released (in contrast, soft-links are pointers to the file name, not to the contents).
Here is an illustration of two backups with three files each: File1 and File2 are the same in both backups, and only File3 changed between Backup1 and Backup2. So in Backup2, File3 (changed) has to point to a different area than File3 in Backup1.
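You can watch this behaviour in any Unix shell (a quick demo, safe to run in an empty directory):
$ echo data > file1
$ ln file1 file2          # hard link: both names point to the same data
$ ls -li file1 file2      # same inode number, link count 2 on both
$ rm file1                # removes one link, not the data
$ cat file2               # the contents are still there
data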

BTW, there is a nice project for Linux out there which provides the same functionality as Time Machine, including a nice GUI; it is also based on rsync and the procedure presented here.

The End

Credit: The initial idea for this approach came from Mike Rubel – rsync snapshots.
Also interesting if you have to cope with Windows: Optimal remote backups with rsync over Samba.
There are quite a few approaches out there which more or less do the same, but rsync is available on virtually every Unix out there (even DSL with its 50MB footprint includes rsync). Other tools might be more convenient, but I'll stick with the omnipresent rsync.
rsync also offers the possibility to store only the differences to a previous backup: use --compare-dest (which should point to a full backup) instead of --link-dest. It then doesn't make links to the unchanged files, it just leaves them out. This way you get an incremental backup without the "directory overhead" of the --link-dest approach. But you have to be extremely cautious about which of the older backups you delete, because the newer backups simply don't contain some of the files (think of full backups as checkpoints)! With --link-dest you can delete all backups but the last and you still have all the files, so I'm happy to pay 20MB per backup for this safety.
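A minimal sketch of that variant (assuming a full backup already exists at $HOME/Backups/full):
date=`date "+%Y-%m-%dT%H:%M:%S"`
# Only files that differ from (or are missing in) the full backup
# end up in the new directory
rsync -aP --compare-dest=$HOME/Backups/full /path/to/important_files $HOME/Backups/incr-$date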

Full script

Here is my full script with additional features:
#!/bin/sh

date=`date "+%Y-%m-%dT%H_%M_%S"`
# The trailing slash matters: rsync then copies the contents of $HOME,
# not the directory itself
HOME=/home/user/

# Back up into "incomplete_back-..." first; only a finished run gets
# renamed to "back-...", so an interrupted backup is never mistaken
# for a complete one
rsync -azP \
  --delete \
  --delete-excluded \
  --exclude-from=$HOME/.rsync/exclude \
  --link-dest=../current \
  $HOME user@backupserver:Backups/incomplete_back-$date \
  && ssh user@backupserver \
  "mv Backups/incomplete_back-$date Backups/back-$date \
  && rm -f Backups/current \
  && ln -s back-$date Backups/current"
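To run this automatically, a crontab entry is enough (the script path and schedule below are hypothetical):
# Run the backup every night at 02:30 and keep a log of the output
30 2 * * * /home/user/bin/backup.sh >/home/user/backup.log 2>&1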
at May 16, 2013 No comments:

Saturday, March 9, 2013

Upgrade CentOS 6.x to CentOS 6.4

Upgrade with yum update

The official way to do the upgrade is simply:
yum update
An even safer way is to first clean all cached metadata, then update the glibc, yum, rpm and python packages, and only then update the rest, like the following:
yum clean all
yum update glibc* yum* rpm* python*
yum update

Reboot

reboot

Check the CentOS 6.4 (Final) release info

cat /etc/redhat-release
## Output ##
CentOS release 6.4 (Final)
The following needs the redhat-lsb package installed:
lsb_release -a
## Output ##
LSB Version:    :core-4.0-amd64:core-4.0-ia32:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-ia32:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-ia32:printing-4.0-noarch
Distributor ID: CentOS
Description:    CentOS release 6.4 (Final)
Release:        6.4
Codename:       Final
at March 09, 2013 No comments:

Tuesday, January 8, 2013

NVIDIA @ CentOS 6.3 / Red Hat 6.3

Q: What is the proper procedure to identify, locate, & install the correct nvidia drivers for Centos 6 on a Lenovo W510?
A: Here's a quick sequence of what I'd recommend, to help the next person, based on what I learned from this thread:

0. Test if you 'need' to install the nvidia driver:
$ locate libvdpau_nvidia.so ==> mine reported nothing found
$ cat ~/.xsession-errors ==> mine reported errors as shown below:
Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory
Note: There should be a better test of 'need'; but this was my only indication that "I" needed the nvidia driver!
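One additional check worth trying (a sketch, not from the original thread): lspci -k shows which kernel driver is currently bound to the card:
$ /sbin/lspci -k | grep -A 3 VGA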

1. Identify your kernel version:
$ uname -r
REPORTED:
2.6.32-279.5.1.el6.x86_64
Note: this matters because the nvidia kernel module has to be built for (or be kABI-compatible with) the running kernel; a module built for a different kernel will not load.

2. Identify your graphics card
$ /sbin/lspci -nn | grep VGA
REPORTED:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GT216 [Quadro FX 880M] [10de:0a3c] (rev a2)
Note: [10de:0a3c] is the PCI vendor:device ID (10de is NVIDIA); driver support lists use it to identify the exact card.

3. Identify the latest version of the graphics drivers for that kernel version:
a) Go to Nvidia support http://www.nvidia.com/page/support.html
b) Go to download drivers
c) Enter the product type (Quadro), series (Quadro FX Series), and product (not found).
Note: The Quadro FX 880M didn't appear on this list, so I 'chatted' with Nvidia support.
Eventually Nvidia support pointed me to the latest available driver version here:
http://www.nvidia.com/object/linux-display-amd64-304.37-driver.html
Note: You'll compare this latest version & support information with the latest available at ELRepo in the next step.

4. Compare the Nvidia version with the latest available ELRepo version:
http://elrepo.org/tiki/Driver+Versions
Note: Search for "nvidia" and you'll find the latest version to be "nvidia 295.40".

5. Enable the ELRepo repository (if not already installed):
$ sudo rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org
$ sudo rpm -Uvh http://elrepo.org/elrepo-release-6-4.el6.elrepo.noarch.rpm
Note: Skip this step if ELRepo is already enabled.

6. Install that latest version from the ELRepo repository:
$ sudo yum --enablerepo elrepo install kmod-nvidia
$ sudo yum --disablerepo=\* --enablerepo=elrepo install nvidia-x11-drv-32bit
Note: Do not install from the ATrpms repository!

Or, if you want the absolute latest from the ELRepo testing repository:
$ sudo yum --enablerepo elrepo-testing info kmod-nvidia
$ sudo yum --enablerepo elrepo-testing install kmod-nvidia
$ sudo yum --disablerepo=\* --enablerepo=elrepo-testing install nvidia-x11-drv-32bit

7. Reboot

8. Test if you have installed the correct driver:
$ locate libvdpau_nvidia.so
NOW REPORTS:
/usr/lib/vdpau/libvdpau_nvidia.so
/usr/lib/vdpau/libvdpau_nvidia.so.1
/usr/lib/vdpau/libvdpau_nvidia.so.304.37
/usr/lib64/vdpau/libvdpau_nvidia.so
/usr/lib64/vdpau/libvdpau_nvidia.so.1
/usr/lib64/vdpau/libvdpau_nvidia.so.304.37

$ cat ~/.xsession-errors
No longer reports VDPAU errors.
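As one more sanity check (a sketch, not part of the original thread), confirm the kernel module is actually loaded:
$ lsmod | grep nvidia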

QUESTION:
Is there a better way to test for the lack of the right driver & for the correct installation of the right driver?
at January 08, 2013 No comments: