User creation can be done one server at a time, with each server controlled by its own /etc/passwd, /etc/shadow & /etc/group files. If you need to add a new user to all 3 machines, you simply run the command on one of them at the command line (what do you mean, GUI???) like so:
useradd -c "Comments about the user" -d /home/username -m -s /bin/bash username
...and then copy and paste that to the command line of the other two servers to create the same user in the same way.
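If you'd rather not paste it by hand, a quick loop does the same job; the hostnames here are placeholders, and I'm assuming you can ssh in as root (or wrap the command in sudo):

for h in servertwo serverthree; do
    ssh root@$h 'useradd -c "Comments about the user" -d /home/username -m -s /bin/bash username'
done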
That's the picture-perfect, nothing-to-worry-about scenario. In reality, you need to take into account that this user might already have an account on one of the machines, but not on the others.
What if the new user is Bob Reed, brother to an existing employee, Bill Reed, and your naming convention is first initial last name? When you try to create user 'breed' for Bob you'll discover that a 'breed' user already exists for Bill.
To make it even more interesting, what if these people are transferring files back and forth between the servers to their home accounts, and Bob is UID 1000 on ServerOne, but UID 1002 on ServerTwo? Now files arrive with improper ownership. If only for that reason, it becomes important to make sure the user has the same UID on each machine. Now you have to track not only user names, but also UIDs and groups across the servers.
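One way to guard against that (the UID and username below are just examples, carrying on with Bob) is to check what a name maps to on each box, and then pin the UID explicitly at creation time:

# See what UID, if any, the name already has on this box
id -u bobreed
# Create the account with the same agreed-upon UID on every server
useradd -u 1000 -c "Bob Reed" -d /home/bobreed -m -s /bin/bash bobreed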
At some point you're going to learn that it is better to administer the servers as a group rather than as single entities.
Perhaps your environment has 3 Ubuntu servers, 3 Red Hat servers, and 3 CentOS servers.
At the very least, you'll want to administer the Ubuntu servers differently than you would the two Red Hat variants. Common commands on Ubuntu might be in /usr/bin, where on Red Hat they might be in /usr/local/bin. Sure, you can create symlinks to make them more alike, but that's just scratching the surface.
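If you do go the symlink route, it's one line per command; the tool name and paths here are purely illustrative:

ln -s /usr/local/bin/sometool /usr/bin/sometool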
Applications are installed in different locations on different distros. One might have Apache config files in /etc/apache2, while another might have them in /etc/httpd, and yet another might break them out into individual files across subfolders.
In an ideal world, your environment would be homogeneous, because you came through the door with bright ideas and set up all the servers yourself, and chose to use all Ubuntu or all Red Hat. In the real world, you came in to replace somebody else, and you're left with what he created, which is a mix of his ideas and the ideas of the predecessor that he replaced.
Fortunately for us as Linux SysAdmins, we're not the first to face these issues, and those who went before us have created a lot of interesting tools to assist us.
In this series I'm going to lightly cover some of the options I've explored over time and have continued to use as my daily tools. I'm also in the process of learning new tools (and I will continue to explore and learn new things until they throw me in the ground and put some dirt on me), and I'll cover the trials and tribulations I encounter as I learn these new tools as well.
Today let's talk about cluster-ssh.
Basically, this tool allows you to do the same thing simultaneously on multiple servers. Primarily I use it as a simple console tool from the command line, but it has far more versatility than that.
On Ubuntu, the install is simple:
apt-get install clusterssh
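On the Red Hat variants, the package isn't in the base repositories; as I understand it, it's available from EPEL, so with that repository enabled the equivalent should be:

yum install clusterssh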
You can also obtain it from SourceForge or from GitHub.
This is the description from the GitHub site:
"ClusterSSH is a tool for making the same change on multiple servers at the same time. The 'cssh' command opens an xterm to all specified hosts and an administration console. Any text typed into the console is replicated to all windows. All windows may also be typed into directly.
This tool is intended for (but not limited to) cluster administration where the same configuration or commands must be run on each node within the cluster. Performing these commands all at once via this tool ensures all nodes are kept in sync."
That's interesting, but not very comprehensive. Clicking on the Support link on SourceForge takes you to a site best suited for developers, not to the documentation a Linux SysAdmin needs.
I got started by creating a ~/.csshrc file based on the excellent information on this page.
However, each time I ran the cssh command, the .csshrc file I'd created would be renamed .csshrc.DISABLED. Interesting...
A little research showed me that the site above was out of date, and that .csshrc is no longer used. Configuration now lives in the ~/.clusterssh folder, in files that use a slightly different syntax.
I got a great deal of this updated information about clusterssh and its configuration and syntax from this site.
Three files can now be used to configure the application at the user level, inside the ~/.clusterssh folder: config, clusters & tags. Note: you can also configure it at the system level with /etc/clusters. Read the man page and the information at this site for more details. This blog entry will be showing how to do it for an individual user.
The config file comes with a lot of commented-out options, which (as I understand it) show the defaults that will be used unless you change them. My config file came with 66 lines in it. Of those, I changed only the following 3 lines:
auto_close=2
terminal_args=-fg green
terminal_font=7x14
auto_close=2 is a timer for how long to wait before disconnecting after issuing the 'exit' command; it was defaulting to 5 seconds. I see no reason at all for it to wait before shutting down, but just in case, I left it in place with the time reduced to 2 seconds.
terminal_args=-fg green set the terminal text color to green, because I personally like green on black for ease of reading.
terminal_font=7x14 changed the font size to 7x14, and I may adjust it later.
Check the detailed information about each setting in the man page or on this site to get a better understanding of each of the line items in the 'config' file.
The clusters file is where you declare your servers in named groups (tags), one group per line, with the following syntax:

<tag> [user@]<server> [user@]<server> [...]

Two very important changes here from the way it was done with the .csshrc file: 1) You do not pre-declare your clusters at the top of the file; they are declared line by line. 2) There is no '=' between the tag name and its list of servers.
An example, covering the nine servers from the environment described above, would be:

myubuntu server1 roger@server2 server3
myredhat server4 server5 server6
mycentos server7 server8 server9
myall server1 roger@server2 server3 server4 server5 server6 server7 server8 server9
Optionally, you can include tags in another tag, which would let you reduce the 'myall' line to the following:

myall myubuntu myredhat mycentos
Note: These tags accumulate and are not checked for duplication! The examples above are pretty simple. Consider that you also create a cluster named 'web' which includes one machine from each of the other clusters, i.e. web server1 server4 server7. If you then add 'web' to 'myall', you will be adding those 3 servers in again! For this reason alone, even though I haven't totally made up my mind yet, I prefer the tags file to the clusters file.
The tags file is sort of the reverse of the clusters file. Rather than declare a tag, followed by the servers that are in it, you declare a server, followed by the tags to which it belongs:
server1 myubuntu web myall
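Carrying the same nine hypothetical servers through, a complete tags file might look like this (I'm assuming the user@host form carries over from the clusters file; check the man page if in doubt):

server1 myubuntu web myall
roger@server2 myubuntu myall
server3 myubuntu myall
server4 myredhat web myall
server5 myredhat myall
server6 myredhat myall
server7 mycentos web myall
server8 mycentos myall
server9 mycentos myall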
Note that now you can include each server in 'myall', but have it show up only one time when you run 'cssh myall'. Which config file you choose to go with, 'clusters' or 'tags', is up to you. Learn from my experience, though! Do not allow both files to exist simultaneously! If you do, then when you issue the command 'cssh myubuntu', it will open server1, roger@server2, server3, server1, roger@server2 & server3. It simply processes both the clusters and the tags files, doubling everything up. Since I have not firmly decided which format I prefer, I create both files, keep them both up to date, and rename one of them with a .hold extension to disable it.
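For example, to run from the clusters file for a while (the .hold extension is purely my own convention; as far as I can tell, clusterssh ignores anything that isn't one of its expected filenames):

mv ~/.clusterssh/tags ~/.clusterssh/tags.hold

...and when I want to try the tags file again:

mv ~/.clusterssh/clusters ~/.clusterssh/clusters.hold
mv ~/.clusterssh/tags.hold ~/.clusterssh/tags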
You might be asking yourself why I chose to name the clusters with the prefix 'my'. Good question. You need to be careful about naming your clusters, because cssh can also be used at the command line to call machines directly by name, and a tag that matches a real hostname invites confusion.
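For example (hostnames assumed), you can skip the tags entirely and open ad-hoc sessions by naming hosts directly:

cssh server5 roger@server2

If one of your tags happened to share a name with a real host, you'd risk opening an entire cluster when you only wanted the one box, so a distinctive prefix like 'my' keeps the two namespaces from colliding.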
The following notes are from the site I've been referring to:
All comments (marked by a #) and blank lines are ignored. Tags may be nested, but be aware of using recursive tags as they are not checked for.
Extra cluster files may also be specified either as an option on the command line (see cluster-file) or in the user's $HOME/.clusterssh/config file (see extra_cluster_file configuration option).
NOTE: the last tag read overwrites any pre-existing tag of that name
NOTE: there is a special cluster tag called default - any tags or hosts included within this tag will be automatically opened if no other tags are specified on the command line.
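As a hypothetical example, adding this single line to the clusters file would make a bare 'cssh' command, with no arguments, open the whole Ubuntu group:

default myubuntu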
This application also lets you cluster rsh and telnet sessions by calling it as 'crsh' or 'ctel' rather than as 'cssh'. I seldom use telnet, and never use rsh, so I've never tried it for those applications.
So what exactly does it do, then? If you've configured everything correctly - and I'm using my examples above for this example - when you run:

cssh myubuntu

...an xterm window opens for each server in the tag (server1, roger@server2 & server3), along with a small grey administration console.
If you click on any of the individual windows, anything you type goes only to that server. When you click on the grey console input box, however, whatever you type is simultaneously typed into every server window that was opened. To end the session on an individual server, click on that server's terminal window and type 'exit' as usual. To end all sessions at once, click on the grey console box and type 'exit'. When the last window closes, the grey console closes too and you are returned to the command prompt.
One good example of the use for such an application across numerous servers: pretend that the cluster 'myubuntu' has 200 servers rather than 3, and pretend that they are all the same version of Ubuntu with the same installed apps, as you would see in a real cluster. Rather than upgrading them with something like:
for i in server1 server2 ... [etc.]
do
    # run your chosen upgrade command on each server, one at a time
    ssh $i 'sudo apt-get update && sudo apt-get upgrade'
done
...which runs serially, one server at a time, 200 times, you type the upgrade command once into the clusterssh console and you've gotten all 200 upgraded simultaneously, because it's being done in parallel!
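In concrete terms, using the same assumed tag from earlier, the parallel version is just:

cssh myubuntu

...followed by typing the upgrade command once into the grey console, where it lands on all 200 servers at the same time.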
In my opinion, clusterssh is a tool worthy of your daily toolbox, and as I said, I'm only scratching the surface. Study it further and see how much more you can accomplish with it.
Next time we'll explore pssh, which is parallel ssh.
Benny Helms