Friday, July 18, 2014

Administering large groups of servers - Part 1

We all start out as Linux SysAdmins with only a few servers to worry about. That's challenge enough at first, of course, because you're new to the whole field and the learning curve is so steep. It's a very good thing that we start with only a few servers to tend to and improve on.

User creation can be done one server at a time, with each server controlled by its own /etc/passwd, /etc/shadow & /etc/group files. Say you have three servers and need to add a new user to all of them: you simply run the command on one of them at the command line (what do you mean, GUI???) like so:

useradd -c "Comments about the user" -d /home/username -m -s /bin/bash username

...and then copy and paste that to the command line of the other two servers to create the same user in the same way.

That's the picture-perfect, nothing-to-worry-about scenario. In reality, you need to take into account the possibility that this user already has an account on one of the machines, but not on the others.

What if the new user is Bob Reed, brother to an existing employee, Bill Reed, and your naming convention is first initial last name?  When you try to create user 'breed' for Bob you'll discover that a 'breed' user already exists for Bill.

To make it even more interesting, what if these people are transferring files back and forth between their home accounts on the servers, and Bob is UID 1000 on ServerOne, but UID 1002 on ServerTwo? Now files arrive with improper ownership. It becomes important to make sure the user has the same UID on each machine, if only for that reason. Now you have to track not only user names, but also UIDs and groups between the servers.
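If you're managing UIDs by hand across machines, one way to keep them in step is to pin the UID (and the primary group's GID) explicitly at creation time, and then run the identical commands on every server. A minimal sketch with placeholder values (the number 1050 is just an assumption for illustration; pick one that's free on ALL of your servers):

# 1050 is a placeholder UID/GID; it must be unused on every machine
groupadd -g 1050 username
useradd -c "Comments about the user" -d /home/username -m -s /bin/bash -u 1050 -g 1050 username

Run those same two lines verbatim on each server, and the user arrives with the same UID everywhere, so file ownership survives transfers between machines.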

At some point you're going to learn that it is better to administer the servers as a group rather than as single entities.

Perhaps your environment has 3 Ubuntu servers, 3 Red Hat servers, and 3 CentOS servers.

At the very least, you'll want to administer the Ubuntu servers differently than you would the two Red Hat variants. Common commands on Ubuntu might be in /usr/bin, whereas on Red Hat they might be in /usr/local/bin. Sure, you can create symlinks to make the systems more identical, but that's just scratching the surface.
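For example (the tool name here is hypothetical), giving a command the same path on every box is just one symlink per machine:

# 'sometool' is a made-up name; link the Red Hat location into the Ubuntu one
ln -s /usr/local/bin/sometool /usr/bin/sometool

...but that only papers over the smallest of the differences.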

Applications are installed in different locations on different distros. One might have apache config files in /etc/apache2, while another might have them in /etc/httpd, and yet another in subfolders and individual files broken out in those subfolders.

In an ideal world, your environment would be homogeneous, because you came through the door with bright ideas and set up all the servers yourself, and chose to use all Ubuntu or all Red Hat.   In the real world, you came in to replace somebody else, and you're left with what he created, which is a mix of his ideas and the ideas of the predecessor that he replaced.

Fortunately for us as Linux SysAdmins, we're not the first to face these issues, and those who went before us have created a lot of interesting tools to assist us.

In this series I'm going to lightly cover some of the options I've explored over time and have continued to use as my daily tools.  I'm also in the process of learning new tools (and I will continue to explore and learn new things until they throw me in the ground and put some dirt on me), and I'll cover the trials and tribulations I encounter as I learn these new tools as well.

Today let's talk about cluster-ssh.

Basically, this tool allows you to do the same thing simultaneously on multiple servers. Primarily I use it as a simple console tool from the command line, but it has far more versatility than that.

On Ubuntu, the install is simple:

apt-get install clusterssh

You can also obtain it from SourceForge or from GitHub.
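If you'd rather work from source, cloning is simple. A sketch, assuming the project is still hosted in the duncs/clusterssh repository on GitHub (check its README for the actual build instructions):

git clone https://github.com/duncs/clusterssh.git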

This is the description from the github site:

"ClusterSSH is a tool for making the same change on multiple servers at the same time. The 'cssh' command opens an xterm to all specified hosts and an administration console. Any text typed into the console is replicated to all windows. All windows may also be typed into directly.

This tool is intended for (but not limited to) cluster administration where the same configuration or commands must be run on each node within the cluster. Performing these commands all at once via this tool ensures all nodes are kept in sync.
"

That's interesting, but not very comprehensive. Clicking on the Support link on SourceForge takes you to a site best suited to developers, not to the documentation a Linux SysAdmin needs.

I got started by creating a ~/.csshrc file based on the excellent information on this page.

However, each time I'd run the cssh command, the .csshrc file I'd created would be renamed .csshrc.DISABLED.  Interesting...

A little research showed me that the site above was out of date, and that .csshrc is no longer used. ClusterSSH now reads its configuration from files inside the ~/.clusterssh folder, using a slightly different syntax.

I got a great deal of this updated information about clusterssh and its configuration and syntax from this site.

Three files can now be used to configure the application at the user level, inside the ~/.clusterssh folder: config, clusters, and tags. Note: you can also configure it at the system level with /etc/clusters. Read the man page and the information at this site for more details. This blog entry will show how to do it for an individual user.
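The config file appears to be generated the first time you run cssh (mine came pre-populated, as described below); the clusters and tags files you create yourself. A quick sketch:

mkdir -p ~/.clusterssh                           # usually created for you on first run
touch ~/.clusterssh/clusters ~/.clusterssh/tags  # these two you populate by hand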


The config file has a lot of options that are commented out, which (I believe I read) are the defaults that will be used unless you change them. My config file came with 66 lines in it. Of those, I changed only the following 3 lines:

auto_close=2
terminal_args=-fg green
terminal_font=7x14

auto_close=2 is a timer for how long to wait before disconnecting after issuing the 'exit' command, and was defaulting to 5 seconds. I see no reason at all for it to wait before shutting down, but just in case, I left it in place with the time reduced to 2 seconds.

terminal_args=-fg green sets the terminal foreground (text) color to green, because I personally like green on black for ease of reading.

terminal_font=7x14 sets the terminal font to 7x14, and I may adjust it later.

Check the detailed information about each setting in the man page or on this site to get a better understanding of each of the line items in the 'config' file.
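A quick way to see which settings you've actually changed is to filter out the commented-out defaults and the blank lines:

# show only the active (non-comment, non-blank) lines in the config
grep -Ev '^[[:space:]]*(#|$)' ~/.clusterssh/config

With the config above, that should print just the three changed lines.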


The clusters file is where you can declare your servers in named groups, with the following syntax:

<tag> [user@]<host> [user@]<host> [...]

Two very important changes here from the way it was done with the .csshrc file:

1) You do not pre-declare your clusters at the top of the file; they are declared line by line.

2) You do not put an equals sign (=) in the line at all.

An example would be:

myubuntu server1 roger@server2 server3
myredhat server4 roger@server5 server6
mycentos server7 roger@server8 server9
myall server1 roger@server2 server3 server4 roger@server5 server6 server7 roger@server8 server9

Optionally, you can include tags within another tag, which would let you reduce the 'myall' line to the following:

myall myubuntu myredhat mycentos

Note: these tags accumulate, and duplicates are not checked for! The examples above are pretty simple. Consider that you also create a cluster named 'web' which includes one machine from each of the other clusters, i.e. 'web server1 server4 server7'. If you then add 'web' to 'myall', you will be adding those 3 servers in again! For this reason alone, even though I haven't totally made up my mind yet, I prefer the tags file to the clusters file.
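A rough sanity check is possible for a single, un-nested line (once tags include other tags, you'd have to expand them yourself):

# print any name that appears more than once on the 'myall' line
grep '^myall ' ~/.clusterssh/clusters | tr ' ' '\n' | sort | uniq -d

Any host name this prints is listed twice on that line.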


The tags file is sort of the reverse of the clusters file.  Rather than declare a tag, followed by the servers that are in it, you declare a server, followed by the tags to which it belongs:

server1 myubuntu web myall
roger@server2 myubuntu myall
server3 myubuntu myall
server4 myredhat web myall
roger@server5 myredhat myall
server6 myredhat myall
server7 mycentos web myall
roger@server8 mycentos myall
server9 mycentos myall

Note that now you can include each server in 'myall' but have it show up only once when you run 'cssh myall'. Which file you choose to go with, 'clusters' or 'tags', is up to you. Learn from my experience, though: do not allow both files to exist simultaneously! If you do, then when you issue the command 'cssh myubuntu', it will open server1, roger@server2, server3, server1, roger@server2 & server3. It simply processes both the clusters and the tags files, doubling everything up. Since I have not firmly decided which format I prefer, I keep both files up to date and rename one of them with a .hold extension to disable it, as shown below.
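The rename itself is nothing fancy. To park the tags file and run from clusters (swap the two names to go the other way):

mv ~/.clusterssh/tags ~/.clusterssh/tags.hold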

You might be asking yourself why I chose to name the clusters with the prefix 'my'.   Good question.  You need to be careful about naming the clusters, because cssh can also be used at the command line to call machines by name.


For instance, what if your predecessor named the first of each of his servers 'ubuntu', 'redhat', and 'centos'? When you run 'cssh ubuntu redhat centos' at the command line, you intend to open those three servers. But what if you had chosen the cluster name 'ubuntu' instead of 'myubuntu'? Which gets opened: the server named 'ubuntu' or the cluster named 'ubuntu'?

Also, it's possible that somewhere in your environment another server has been created that is listed in DNS as 'ubuntu.yourdomain.com'. Since in this example your /etc/resolv.conf file starts with 'search yourdomain.com', when you type 'cssh ubuntu' the system will append .yourdomain.com to try to resolve it, thereby bringing you up on charges of trying to hack into a server that wasn't yours to access. Not a good thing! So name your clusters well so that you don't run into conflicts.
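Before settling on a tag name, it's worth checking what the bare name already resolves to. getent performs the same lookup (including the resolv.conf search domains) that other programs will use:

# see whether the bare name 'ubuntu' already resolves to a host
getent hosts ubuntu

If that prints an address, the name is already taken on your network; pick a different tag.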

The following notes are from the site I've been referring to:

All comments (marked by a #) and blank lines are ignored. Tags may be nested, but be aware of using recursive tags as they are not checked for.

Extra cluster files may also be specified either as an option on the command line (see cluster-file) or in the users $HOME/.clusterssh/config file (see extra_cluster_file configuration option).

NOTE: the last tag read overwrites any pre-existing tag of that name

NOTE: there is a special cluster tag called default - any tags or hosts included within this tag will be automatically opened if no other tags are specified on the command line.


This application also lets you cluster rsh and telnet sessions by calling it as 'crsh' or 'ctel' rather than as 'cssh'.  I seldom use telnet, and never use rsh, so I've never tried it for those applications.

So what exactly does it do, then? If you've configured everything correctly - and I'm using the examples above here - when you run:


'cssh myubuntu'

...three tiled sessions will be opened titled:
CSSH: server1
CSSH: server2
CSSH: server3

Additionally, a separate grey box will be opened where you can type commands (although oddly, it does not echo back what you are typing, ostensibly relying on the multiple terminals' output to show you what you've typed). This box will be titled:
CSSH [#]
... where # will be the number of sessions it has opened up this time.

If you click on any of the individual windows, any commands you type go only to that server. When you click on the grey terminal input box, however, whatever you type is simultaneously typed on every server that was opened. To end the session on an individual server, you can click on that server's terminal window and type 'exit' as usual. To end all sessions at once, click on the grey terminal box and type 'exit'. When the last window closes, the grey terminal window closes and you are returned to the command prompt.

One good example of the value of such an application for supporting numerous servers: pretend that the cluster 'myubuntu' has 200 servers rather than 3, all running the same version of Ubuntu with the same installed apps, as you would see in a real cluster.


Because you are a good Linux SysAdmin, you want to be careful when running 'apt-get update; apt-get upgrade', so you decide to do it manually each morning.

And since you are a clever Linux SysAdmin, you run 'apt-get update; apt-get -s upgrade' so that it only simulates the upgrade, giving you the chance to see what will be updated, and to decide if you're comfortable with allowing it to happen.
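If you want just the list of packages the simulation would touch, the simulated run marks each action with an 'Inst' (or 'Conf'/'Remv') line, so a quick filter looks like:

apt-get update
apt-get -s upgrade | grep '^Inst'    # only the packages that would change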

If you're a really clever Linux SysAdmin, you've also already tested the upgrade on a lab server to make sure it doesn't break anything, and so you can determine exactly which files changed so you can create your rollback option.

Having decided that this upgrade is acceptable, you simply go to the grey terminal box, type 'apt-get -y upgrade', and hit enter. A few minutes later, every one of your servers has completed the upgrade, and all are at the same level of patches and security. Type 'exit' and all 200 windows close.

Instead of wasting an entire day updating these servers, one at a time, or running a loop script like:

for i in server1 server2 ... [etc.]
do
  ssh $i 'apt-get -y upgrade'
done

...which runs serially, one server at a time, 200 times... you've gotten all 200 upgraded simultaneously, because it's being done in parallel!

In my opinion, clusterssh is a tool worthy of your daily toolbox, and as I said, I'm only scratching the surface.  Study it further and see how much more you can accomplish with it.

Next time we'll explore pssh, which is parallel ssh.

Benny Helms
