Friday, July 18, 2014

Administering large groups of servers - Part 1

We all start out as Linux SysAdmins with only a few servers to worry about.  That's challenge enough at the beginning, of course, because you're new to the whole field and the learning curve is steep.  It is a very good thing that we only have a few servers to tend to and improve on.

User creation can be done one server at a time, with each server controlled by its own /etc/passwd, /etc/shadow, and /etc/group files.  If you need to add a new user to all 3 machines, you simply run the command on one of them at the command line (what do you mean, GUI???) like so:

useradd -c "Comments about the user" -d /home/username -m -s /bin/bash username

...and then copy and paste that to the command line of the other two servers to create the same user in the same way.

That's the picture-perfect, nothing-to-worry-about scenario.  In reality, you need to take into account that this user might already have an account on one of the machines, but not on the others.

What if the new user is Bob Reed, brother to an existing employee, Bill Reed, and your naming convention is first initial last name?  When you try to create user 'breed' for Bob you'll discover that a 'breed' user already exists for Bill.

To make it even more interesting, what if these people are transferring files back and forth between the servers to their home accounts, and Bob is UID 1000 on ServerOne, but UID 1002 on ServerTwo?  Now files arrive with improper ownership.  It becomes important to make sure the user has the same UID on each machine, if only for that reason.  Now you have to track not only user names, but also UIDs and groups across the servers.
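One way to head off both problems is to check every machine for the username and the UID before creating anything, and then pin the UID with useradd's -u flag so it matches everywhere.  Here's a minimal sketch, using the three machines from the example and assuming 1001 is the UID you've chosen (and that you have root on each box):

# Check whether the username or the UID is already taken on any server
for i in ServerOne ServerTwo ServerThree
do
  echo "== $i =="
  ssh "$i" 'getent passwd username; getent passwd 1001'
done

# If all three come back empty, create the user with the same UID everywhere
for i in ServerOne ServerTwo ServerThree
do
  ssh "$i" 'useradd -u 1001 -c "Comments about the user" -d /home/username -m -s /bin/bash username'
done

getent prints nothing when the name or UID is free, so silence on all three machines is your green light.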

At some point you're going to learn that it is better to administer the servers as a group rather than as single entities.

Perhaps your environment has 3 Ubuntu servers, 3 Red Hat servers, and 3 CentOS servers.

At the very least, you'll want to administer the Ubuntu servers differently than you would the two Red Hat variants.  Common commands on Ubuntu might be in /usr/bin, where on Red Hat they might be in /usr/local/bin.  Sure, you can create symlinks to make them look more alike, but that's just scratching the surface.

Applications are installed in different locations on different distros.  One might have Apache config files in /etc/apache2, while another might have them in /etc/httpd, and yet another might break them out into individual files across subfolders.

In an ideal world, your environment would be homogeneous, because you came through the door with bright ideas and set up all the servers yourself, and chose to use all Ubuntu or all Red Hat.   In the real world, you came in to replace somebody else, and you're left with what he created, which is a mix of his ideas and the ideas of the predecessor that he replaced.

Fortunately for us as Linux SysAdmins, we're not the first to face these issues, and those who went before us have created a lot of interesting tools to assist us.

In this series I'm going to lightly cover some of the options I've explored over time and have continued to use as my daily tools.  I'm also in the process of learning new tools (and I will continue to explore and learn new things until they throw me in the ground and put some dirt on me), and I'll cover the trials and tribulations I encounter as I learn these new tools as well.

Today let's talk about cluster-ssh.

Basically, this tool allows you to do the same thing simultaneously on multiple servers. Primarily I use it as a simple console tool from the command line, but it has far more versatility than that.

On Ubuntu, the install is simple:

apt-get install clusterssh

You can also obtain it from SourceForge or GitHub.
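On the Red Hat variants it may also be packaged in the EPEL repository - I haven't verified that for every release, so treat this as an assumption and check your configured repos:

yum install epel-release    # enable EPEL first (works on CentOS; RHEL needs the EPEL release rpm)
yum install clusterssh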

This is the description from the GitHub site:

"ClusterSSH is a tool for making the same change on multiple servers at the same time. The 'cssh' command opens an xterm to all specified hosts and an administration console. Any text typed into the console is replicated to all windows. All windows may also be typed into directly.

This tool is intended for (but not limited to) cluster administration where the same configuration or commands must be run on each node within the cluster. Performing these commands all at once via this tool ensures all nodes are kept in sync."

That's interesting, but not very comprehensive.  Clicking on the Support link on SourceForge takes you to a site best suited for developers, not to the documentation a Linux SysAdmin needs.

I got started by creating a ~/.csshrc file based on the excellent information on this page.

However, each time I'd run the cssh command, the .csshrc file I'd created would be renamed .csshrc.DISABLED.  Interesting...

A little research showed me that the site above was out of date, and that .csshrc is no longer used.  Now a ~/.clusterssh folder is used instead, with the files it contains following a slightly different syntax.

I got a great deal of this updated information about clusterssh and its configuration and syntax from this site.

Three files can now be used to configure the application on a user level, inside the ~/.clusterssh folder: config, clusters, tags.  Note: you can also configure it on a system level with /etc/clusters.  Read the man page and the information at this site for more details.  This blog entry will be showing how to do it for an individual user.
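If the folder and files aren't there yet, the per-user layout is trivial to create by hand - a sketch, noting that in my case cssh appeared to generate the config file on its own the first time I ran it:

mkdir -p ~/.clusterssh
touch ~/.clusterssh/clusters ~/.clusterssh/tags    # the config file gets generated by cssh itself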


The config file has a lot of options that are commented out, and which I believe I read are the defaults that will be used unless you change them.  My config file came with 66 lines in it. Of those, I only changed the following 3 lines:

auto_close=2
terminal_args=-fg green
terminal_font=7x14

auto_close=2 is a timer for how long to wait before disconnecting after issuing the 'exit' command, and was defaulting to 5 seconds. I see no reason at all for it to wait before shutting down, but just in case, I left it in place with the time reduced to 2 seconds.

terminal_args=-fg green set the terminal foreground color to green, because I personally like green on black for ease of reading.

terminal_font=7x14 changed the font size to 7x14, and I may adjust it later.

Check the detailed information about each setting in the man page or on this site to get a better understanding of each of the line items in the 'config' file.


The clusters file is where you can declare your servers in named groups, with the following syntax:

tagname [user@]server [user@]server [...]

Two very important changes here from the way it was done with the .csshrc file:

1) You do not pre-declare your clusters at the top of the file; they are declared line by line.
2) You do not put an equals sign (=) in the line at all.

An example would be:

myubuntu server1 roger@server2 server3
myredhat server4 roger@server5 server6
mycentos server7 roger@server8 server9
myall server1 roger@server2 server3 server4 roger@server5 server6 server7 roger@server8 server9

Optionally, you can include tags within another tag, which would let you reduce the 'myall' line to the following:

myall myubuntu myredhat mycentos

Note:  These tags accumulate and do not check for duplication!  The examples above are pretty simple.  Consider that you also create a cluster named 'web' which includes one machine from each of the other clusters; i.e., web server1 server4 server7.  If you then add 'web' to 'myall' you will be adding those 3 servers in again!  For this reason alone, even though I haven't totally made up my mind yet, I prefer the tags file to the clusters file.


The tags file is sort of the reverse of the clusters file.  Rather than declare a tag, followed by the servers that are in it, you declare a server, followed by the tags to which it belongs:

server1 myubuntu web myall
roger@server2 myubuntu myall
server3 myubuntu myall
server4 myredhat web myall
roger@server5 myredhat myall
server6 myredhat myall
server7 mycentos web myall
roger@server8 mycentos myall
server9 mycentos myall

Note that now you can include each server in 'myall', but have them show up only one time when you run 'cssh myall'.  Which config file you choose to go with, 'clusters' or 'tags', is up to you.  Learn from my experience, though!  Do not allow both files to exist simultaneously!  If you do, then when you issue the command 'cssh myubuntu', it will open server1, roger@server2, server3, server1, roger@server2 & server3.  It simply processes both the clusters and the tags files, doubling everything up.  Since I have not firmly decided which format I prefer, I keep both files up to date and rename one of them with a .hold extension to disable it.
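In practice that disabling step is just a rename - for example, assuming the tags file is the one you're keeping active at the moment:

mv ~/.clusterssh/clusters ~/.clusterssh/clusters.hold    # cssh now reads only the tags file

# ...and to swap preferences later:
mv ~/.clusterssh/clusters.hold ~/.clusterssh/clusters
mv ~/.clusterssh/tags ~/.clusterssh/tags.hold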

You might be asking yourself why I chose to name the clusters with the prefix 'my'.   Good question.  You need to be careful about naming the clusters, because cssh can also be used at the command line to call machines by name.


For instance, what if your predecessor named the first of each of his servers 'ubuntu', 'redhat', and 'centos'?  When you run 'cssh ubuntu redhat centos' at the command line, you intend to open those three servers.  What if you had chosen the cluster name 'ubuntu' instead of 'myubuntu'?  Which will be opened: the server named 'ubuntu' or the cluster named 'ubuntu'?

Also, within your environment, it is possible another server exists that is listed in DNS as 'ubuntu.yourdomain.com'.  Since in this example your /etc/resolv.conf file starts with 'search yourdomain.com', when you type 'cssh ubuntu' the resolver will append .yourdomain.com to resolve the name, thereby bringing you up on charges of trying to hack into a server that wasn't yours to access.  Not a good thing!  So name your clusters well so that you don't run into conflicts.
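Before settling on a tag name, a quick sanity check of what that name already resolves to on your network can save you that embarrassment:

getent hosts ubuntu             # does 'ubuntu' already resolve to a real host via the search domain?
grep ^search /etc/resolv.conf   # see which domains the resolver will append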

The following notes are from the site I've been referring to:

All comments (marked by a #) and blank lines are ignored. Tags may be nested, but be aware of using recursive tags as they are not checked for.

Extra cluster files may also be specified either as an option on the command line (see cluster-file) or in the users $HOME/.clusterssh/config file (see extra_cluster_file configuration option).

NOTE: the last tag read overwrites any pre-existing tag of that name

NOTE: there is a special cluster tag called default - any tags or hosts included within this tag will be automatically opened if no other tags are specified on the command line.


This application also lets you cluster rsh and telnet sessions by calling it as 'crsh' or 'ctel' rather than as 'cssh'.  I seldom use telnet, and never use rsh, so I've never tried it for those applications.

So what exactly does it do, then?  If you've configured everything correctly - using my examples from above - when you run:

cssh myubuntu

...three tiled sessions will be opened titled:
CSSH: server1
CSSH: server2
CSSH: server3

Additionally, a separate grey box will be opened where you can type commands (although oddly, it does not echo back what you are typing, ostensibly relying on the multiple terminals' output to show you what you've typed). This box will be titled:
CSSH [#]
... where # will be the number of sessions it has opened up this time.

If you click on any of the individual windows, any commands you type are then only for that server.  When you click on the grey terminal input box, however, whatever you type will be simultaneously typed on every server that was opened.  To end sessions on individual servers, you can click on that server's terminal window and type 'exit' as usual.  To end all sessions at once, click on the grey terminal box and type 'exit'.  When the last window closes, the grey terminal window closes and you are returned to the command prompt.

One good example of using such an application to support numerous servers: pretend that the cluster 'myubuntu' has 200 servers rather than 3, all running the same version of Ubuntu with the same installed apps, as you would see in a real cluster.


Because you are a good Linux SysAdmin, you want to be careful when running 'apt-get update; apt-get upgrade', so you decide to do it manually each morning.

And since you are a clever Linux SysAdmin, you run 'apt-get update; apt-get -s upgrade' so that it only simulates the upgrade, giving you the chance to see what will be updated, and to decide if you're comfortable with allowing it to happen.

If you're a really clever Linux SysAdmin, you've also already tested the upgrade on a lab server to make sure it doesn't break anything, and so you can determine exactly which files changed so you can create your rollback option.

Having decided that this upgrade is acceptable, you simply go to the grey terminal box, type 'apt-get -y upgrade' and hit enter.  A few minutes later, every one of your servers has completed the upgrade and is at the same level of patches and security.  Type 'exit' and all 200 windows close.

Instead of wasting an entire day updating these servers, one at a time, or running a loop script like:

for i in server1 server2 server3    # ...[etc.] through all 200
do
  ssh "$i" 'apt-get -y upgrade'
done

...which runs serially, one server at a time, 200 times... you've gotten all 200 upgraded simultaneously, because it's being done in parallel!
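For what it's worth, the shell can give you a crude version of that parallelism too, by backgrounding each ssh - a sketch only, with none of the output handling that clusterssh gives you:

for i in server1 server2 server3    # ...[etc.]
do
  ssh "$i" 'apt-get -y upgrade' &   # background each session so they all run at once
done
wait                                # return only when every background job has finished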

In my opinion, clusterssh is a tool worthy of your daily toolbox, and as I said, I'm only scratching the surface.  Study it further and see how much more you can accomplish with it.

Next time we'll explore pssh, which is parallel ssh.

Benny Helms



Tuesday, December 31, 2013

Finding work

Finding work as a Linux Systems Administrator is an interesting thing, and can be very frustrating to say the least.

If you go to the various job postings, you'll see that a lot of employers appear to have copied and pasted from some other job listing(s), and are basically asking that you be able to do everything under the sun, and do it for cheap. This is frustrating for both job seekers and for Human Resource departments.

Here's what happens. You see a job listing that asks that you be expert in Windows, AIX, Solaris, Oracle, MySQL, Linux (usually a specific flavor and version), shell scripting, Python, Perl, C, C++, Java, web development, Apache, Postfix, Puppet, clustering, cloud, cPanel, Webmin... You get the picture. They want somebody - not who is conversant with these things - but who is expert at these things. And not just some of them, but all of them.

I've had a long career in Systems Administration, starting with Windows and DOS, and finally deciding to focus entirely on Linux. So throughout my career, I've mastered some things on that list, become adequate on some of them, and some of them I learn about only because I'm reading a job requirement list and see them mentioned for the first time in my career. For many of these job postings, no single person on the planet could fill all the requirements, and certainly not at an expert level. And if such a person existed, his asking salary would be well over $250,000 American per year, easily.

So the job seeker looks at the list and says to himself, "Hey, I'm very good at most of this! I just don't have any experience with Solaris, and not much with Oracle. I'm going to apply anyway!"

Then Human Resources gets his application and cannot find anything on his resume about Solaris, so winds up chucking his application in the bit bucket. So now they've had to waste their time at one level, and are frustrated. It gets even worse later, when they are told to shrink the list of requirements, and they realize how many potential candidates were thrown away because the person needing the personnel overstated his need and wasted so much more of their time.

The job seeker is frustrated because he's sure he could do the job, and would ramp up on Solaris quickly enough. After all, when you learn one Linux/UNIX variant, learning another is just a matter of translating the way it is done on the one you know to the way it is done on the one you don't know. But he wasn't even given a chance.

Many times if you go deeper into the true needs of the company, you find out that they have one Solaris server, and they are retiring it in two months. The job seeker would never have had to touch the thing in the first place. And as for Oracle, they just need somebody who can safely restart the server if it crashes, because they have a full-time DBA on staff to deal with the database side of things.

When prospective employers put together this sort of job requirement list, they do both job seekers and themselves a disservice. Not only are they turning away many highly qualified candidates, but because no single person can fill the role, the job search takes longer, the HR department gets overworked, and eventually they need to trim the list back to what they actually needed in the first place. Those who are seeking work are going to have to toss their hats in the ring regardless of whether they can fulfill the entire list, because they really want the position and are sure that in the interview they can show that they are capable. Worse yet, because there are so many job offerings to look at, many - like myself - will scan that laundry list very quickly, spot items we cannot fill, and move on to the next job offering. Very highly qualified, motivated people are walking away from jobs where they'd be a great fit because companies exaggerate their needs, or copy and paste from other listings.

While I'm on the subject, let's talk about experience required. Many years ago, when Linux was about 8 years old, I saw a job listing requiring 10 years experience with Linux. That'd be a neat trick! The poor kid who is just graduating college, having studied from textbooks that are 4 years out of date, and with no real world experience, is desperate to find work, but everybody wants 5 years experience, or more. For those like myself who did not get the opportunity to finish college, you often need 10 years experience to counter the lack of college. These recent college grads need to be given the chance to act as junior SysAdmins, under the mentoring of senior SysAdmins. In a very short time they will be contributing greatly to the company, if only given a chance.

One last nitpick, and then I'll drop off the radar again for a while. Clearances. I know that when your company contracts with the Department of Defense, many of your personnel need to hold Secret and above clearances. That's a given. The problem is that these days most of the listings are insisting that you already have the clearance, because the company doesn't want to pay for it. Fewer and fewer job listings say, "Must be able to obtain a clearance," instead insisting on a pre-existing clearance. I used to hold a Secret clearance because of my work in the USAF, but that has long since expired, and because of this, I'm no longer qualified for many jobs. This basically means that the only people qualified for the job will be recent departures from the military - and God bless them, they deserve the best jobs they can get - or people who have worked in a government capacity that required clearances. Companies really need to be willing to pony up for the cost of a background check and getting clearances for highly qualified potential employees.

I wish you new college graduates the best in your careers! Keep at it, and you'll find opportunities to increase your experience points. I highly recommend doing pro bono work for charities, setting up Linux servers for them to replace the expensive Windows licenses they are currently having to pay for. Take part in your local Linux Users Group, where you can learn a great deal from those who have been in the trenches longer.

And be realistic. You just got out of college, and as much as you were taught, you simply do not know squat. What works in a book often is not feasible in the real world. Don't walk off of your college campus and expect an $80,000-a-year job. You have to pay your dues, just like the rest of us did. Accept a lower-paying position, and learn, learn, learn, and then begin marketing your experience at a higher rate.

And never stop learning! I learn something new almost every day, and it makes me happy when it happens.

Benny Helms


Friday, December 6, 2013

About SAP

Good morning, all!

It's Friday, and the promise of a weekend has made my heart even lighter and more positive!

Today I want to discuss SAP, and describe some of the basics about it.  These are my opinions and are from the perspective of someone who has experienced it only on SuSE SLES 11 SP1 (required in the specs by SAP; I'd have preferred Ubuntu Server or Debian, but that's just me), using Oracle 11.2 64-bit (again, required by the SAP specs).  Your mileage may vary, and let's face it; I may be dead wrong on some things, as SAP is not my forte.  Supporting the servers that host it is my forte.

SAP is an enormous application that covers an organization's finance, payroll, work orders, material inventory, etc.  There are modules available for just about everything, and most will cost you even more money. The purpose it serves is worthwhile, and without it, many companies would have a very difficult time tracking everything and linking things together for report generation, paying employees, etc.  So kudos for creating a product that is very powerful and very useful.

When I was hired on at this organization, it was with the understanding that I'd be focused entirely on Linux and AIX systems administration.  As with most jobs in the SysAdmin world, you should make sure you have a written contract that specifies your specific responsibilities.  Otherwise, you will find yourself tasked with more and more disciplines, until you no longer have the time you need to properly maintain the Linux environment that you were hired to oversee.

Such is the case with SAP.  SAP (pronounced as the individual letters, not sap) is a German company, and as such, support is in a very different time zone from the East Coast of Florida where I do my work.  Fortunately, there are vendors and contractors in the United States who make their living supporting SAP installations, so you are not entirely dependent on help that includes an overseas time delay when you are having issues.  Even so, no issue with SAP is small, and each takes serious thought before deciding upon a resolution.  Each module may affect dozens of other modules, so each change must be carefully examined and tested.  I bring up the German ownership for another reason.  While they support a few different databases - in our case we're using Oracle - they create many of the tables and fields using German names, which makes it very difficult for the SysAdmin or the Oracle DBA to take a quick look at the database layout and determine what does what.

Supporting SAP is something that actually requires many people.  There is BASIS, which is basically the user interface, and the admin portion where you create, delete, and assign access roles to users. That is one of the areas I've been asked to learn, and it is huge.  The BASIS book on my shelf is about one and a half inches thick, and it barely scrapes the surface.  And that is only one of many aspects of SAP that must be supported.  Their classes are very expensive, and in the ones my coworkers and I have taken, each class warns you that to truly understand what they are teaching you, you should attend these 7 other expensive classes.  Not good for an organization with a limited budget.

So far, my primary responsibility has been at the Linux server level, creating and maintaining the servers, but BASIS is creeping in; I'm learning more each week, and then being tasked with using what I've learned for user support.  Oh, well.  One day I'll get that written contract and be able to focus entirely on Linux, but for now only about 5% of my time is going to BASIS support.

On the SysAdmin side, I was responsible for creating the new Linux servers, each of which is virtualized on VMware.  The reason for creating new servers is that we were upgrading from a much older version of SAP, and were converting from AIX to Linux.  To give you an idea of the enormity of this upgrade, it took over 6 months to complete the process!

I've had to create 5 separate SAP servers for our organization, and if you work for a larger organization with multiple geographic locations, there would be even more servers required.  Most of the servers are configured pretty much identically, but some can be made lighter in disk space, RAM, and CPUs.  At a minimum, when you buy SAP - and be prepared to shell out hundreds of thousands of American dollars, or in some cases, multiple millions - you will need to create the following servers:

SSM - SAP Solution Manager:  This server is not as resource intensive as the others.  It is the one that communicates with SAP headquarters most frequently, looking for updates, etc., and, in part, proving that you are not exceeding your licensing.  In the past, installing updates in SAP was a much more manual process, involving reading what are called "SAP Notes" which often, in order to be understood, contain pointers to other SAP Notes, which contain still more links to other SAP Notes, ad infinitum, ad nauseam.  There is still a lot of manual work in installing updates, but at least this server can - in theory - reach out and grab them.  I truly don't know.  The consulting team we hired to do our upgrade to the latest version and move us from AIX to Linux never properly configured the SSM server, and we're in discussion to get them to do it post mortem.

SD1 - The sandbox server where you can try things without breaking the production server.  The resources it requires are greater than the SSM server, but lighter than the production machine.

DV1 - The development server.  This one needs to be fairly beefy on resources, but not as beefy as QA1 or production.  This is where users try out minor changes, things like new payroll rules based on tax laws changing, etc.  It has two separate login environments; client 200 and client 400.  All initial testing of changes takes place in client 200, and if passed, sometimes also get tested in client 400, but that is rare in our case.

QA1 - The quality assurance server.  This environment is very much like the production server, and is the final place to test your changes before moving them to production.  It needs to be fairly beefy on resources, but not as beefy as production.

PRD - The production server.  This is the environment where real time work is done, and where real time changes such as adding users, etc., are done.  It is the most resource intensive, and the server which keeps you up at night as a SysAdmin.  If this goes down, there is no work being done, and more importantly in the minds of all our employees, payroll is unable to process and we don't get paid.  My goal here is to create a backup PRD server which we'll initially be able to bring up if PRD dies.  In time, I'll want it to also be a live failover server, and the final phase will be to make it a load balancing server.  We shall see.

In my next blog entry, I'll discuss a bit about how the changes that require traversing 3 servers are done, using what is called the "transport".

Incidentally, here's a fun thing you can do when you need a bit of humor in your day. As the customer, if you ever want to make an SAP rep or consultant twitch, you can pronounce it "sap", as in the liquid that flows from pine trees.  They'll correct you immediately, and it adds just a little bit of joy to your life.  :-D

Benny Helms



Thursday, December 5, 2013

Atooma revisited

Note: This blog entry is based on testing I did months ago, but am just now getting around to blogging about it.

Okay, so I've used the Atooma application on my Android for more than 2 weeks now, and decided today to uninstall it.  It worked okay, although it would frequently interfere with my wifi connections by cutting them off unexpectedly.  I think the application has a lot of potential.

So why did I uninstall it?  Let's first examine why I installed it in the first place.

I hate having my phone search for a wifi connection when I'm between home and the office.  It wastes power and drains my battery.  So by using Atooma to turn off wifi when I left the office or left home, and turn it back on again when I reached the office or reached home, it would save battery, right?

What I did NOT factor in due to faulty logic on my part was that GPS would be searching endlessly to see if I was at the office or at home, thereby draining my battery far more effectively than the occasional wifi search was doing.  Truth be told, I just need to make it a habit to turn off wifi when I leave the home or office, and turn it back on when I'm in a place where I can use wifi.  I installed and tested Atooma because I was being lazy.  Period.

So I'm back to manual and I think my battery will last a lot longer now!

Benny Helms



Trying out Atooma on Android

Note: This blog entry is based on testing I did months ago, but am just now getting around to posting.

I saw a posting yesterday on Google+ about a new app for Android phones called Atooma.  The name seems to be constructed from pieces of the words, "A Touch of Magic".  The story can be found at this URL: http://lifehacker.com/5948760/atooma-is-like-ifttt-for-your-android-phone

It is in Beta, but seems pretty stable so far.  I installed it last night, and have now had a little time to play with it.  It could be awesome, or it could be an annoyance that I'll remove as soon as it shows its true colors.  I'll let you know.

For now, this is what I've done with it.  I've created the following six "Atoomas" and activated them.

1. Turn on WiFi when I get to work
2. Turn off WiFi when I leave work
3. Turn on WiFi when I get home
4. Turn off WiFi when I leave home
5. Turn on WiFi when at a specific friend's house
6. Turn off WiFi when I leave that specific friend's house

I sat in my office all day thinking it was working great, as the WiFi was staying on.  So far so good. After leaving work, the WiFi turned off a few blocks from the office, so that made me happy.

When I got home, it did not turn on my WiFi, which saddened me a bit.  I used my finger to drag down from the top of the screen to see the running apps, and selected Atooma.  It had all my "Atoomas" showing on the screen.  When I clicked on the one for turning on WiFi at home, it had a different appearance than when I created it.  There was now a huge check mark at the bottom of the screen, and it looked like it wanted to be clicked.  So I did, and it turned green, and my WiFi at home turned on.  It seems that after you create the rules, you have to leave the editing area and bring the app back up (by dragging down from the top of the screen, as I did) to get to the default view, where it lets you activate each "Atooma".  I had not actually activated any of them after creating them, so I did so with each.

That's when things went wonky.  Suddenly my WiFi turned off.  A few seconds later it turned on, connected, then disconnected and turned off.  A little experimentation showed me the problem from a logical point of view.  The problem is that when you are in the building where you want WiFi on, you no longer have a clear sky view for GPS, thereby making the device think you're not at that specific location any longer and making it turn WiFi off.  As soon as it gets enough GPS fixes that it realizes you are now at that location, it turns WiFi on, and eventually the cycle repeats itself, over and over.

I decided it was time to deactivate the "Atoomas" for turning off WiFi when leaving a location until I could figure out a workaround.  That did not work out so well.  The WiFi continued its up-down cycling, and it was a little maddening.  So I once more dragged down from the top of the screen, clicked on the Atooma icon for the running application, and, using the menu button, chose "Logout".  Ahhhh.  Peace at last!

Not!  The cycling continued, and I gave serious thought to uninstalling the application.  Instead, I used the Tasks tab of my trusty "Battery Dr Saver" app (an app I highly recommend; allows you to do full cycle charges or quick charges) to kill the Atooma task.  That did it.  I turned Atooma back on, and because all the "Turn off WiFi" tasks had been disabled, they no longer cycled my WiFi at the house.

I then downloaded somebody else's "Atooma" (users can give back to the community by uploading successful "Atoomas" they've put together so that others can benefit) that would theoretically read your SMS messages aloud if you were traveling at a speed greater than 25km/h.  That gave me pause.  If I could specify a speed, that meant I could indicate that I was traveling.  If I added that to my "turn off WiFi" rules, it would add a requirement that would need to be satisfied in order to trigger the "turn off WiFi".  That sounded interesting, so I thought about it on the way to work, and when I got here this morning, I did some editing.

Now my "Turn off WiFi when I leave work" rule does not just say, "If I leave the area of <address of office> turn off WiFi", it now says, "If I leave the area of <address of work> AND I am traveling at a speed greater than 30km/h, THEN turn off WiFi".  So far I still have functioning WiFi here in the office, with the Turn off WiFi rules activated, so maybe this is a good solution.  On the other hand, I'm an American and I do not savvy km/h.  I savvy mph, and I wish the app would let you change the basic configuration based on what you use for measuring speed.  Then again, for all I know they *do* let you do that, and I've just not played with it enough to find all the settings and configuration.

I'll continue to explore this application, and will keep you posted as to my findings, my likes, my dislikes, etc.

Benny Helms



Finished LiveCD project; moving on to SAP

So I finished - for real this time - the LiveCD project.  In the end, it was something I was very proud of. The user can log in to the work LAN from home, and it is easy and intuitive.  The boss likes it, and in the end I made it far broader than I originally planned, so that much more can be accomplished and the user can get a taste of Ubuntu Linux, maybe helping us to start replacing Windows machines with Linux machines.

Now that I've finished that project, I've been tasked with learning to administer SAP, a product we use here.  When I say "I've been tasked with learning to administer SAP" it is similar to saying I've been tasked with learning Mandarin Chinese - reading, writing, speaking and understanding the spoken language like a native - and just for fun, learning to write it in mirror format so I can hold written documents up to the mirror and have them be readable.

You see, SAP is a ***HUGE*** database system suitable for accounting, personnel management, and a host of other things.  Just learning the interface will probably take me a month or more.  Diving into the possible things that can be done with it, another month.  Troubleshooting when it doesn't work as expected?  The rest of my life, probably.

But that's okay, because I think the boss will give me a few weeks to accomplish all of the above.  :-)

I still remember when I first began playing with DOS on a minimal workstation, thinking, "Man! Wouldn't it be cool to just support people using computers for a living?  I already do that, but it's not my job.  Man, I'd have it made in the shade if that's all I had to do every day!"

Remember, people:  be careful what you wish for!   You might just get it!   :-)

Benny Helms



LiveCD project - part 3

I thought I'd give you another update on the LiveCD project, and teach you an important lesson in Unix/Linux Systems Administration.

Sometimes it's not the work you did!  Sometimes it's the hardware!  Always check!

Remember in my last posting I told you the boss had a kernel panic on his second laptop, necessitating my foray into older versions of Ubuntu to use as the basis for my LiveCD?  Well, he's out this week, so I went into his office to try it for myself.  I saw nothing but the splash screen. Never did it reach a standard black screen with white font telling me there was a kernel panic!  It just hung and never moved forward.

With that information in hand, I burned a copy of the original Ubuntu 11.10 Desktop i386 LiveCD and tested it on the same laptop.  Again it hung.

At this point I began to suspect that the CD drive was dirty or very near broken.  It was reading well enough to access the boot sector, and bring up the splash screen and menu, but could not do anything more.  I heard a lot of  "seek" noise from the CD drive, like it was having trouble reading the disk.  I took the unit down the hall to the Help Desk personnel (God bless them!! Their jobs are often far harder than mine!  I've done that job!) and asked if they had another CD drive that would fit into that laptop.  Since it is modular and just slides into and out of the left bay, they were able to give me another to test.  It worked!!  Problem solved!!

Now I take responsibility for this.  I should have gone in and grabbed the laptop from my boss and put it at my desk and tested it myself as soon as he said kernel panic, instead of assuming.  That's another lesson in Unix/Linux Systems Administration for you, folks!  Laziness costs more than being willing to do the hard work sometimes.  My reason for not getting the laptop was very simple.  I'm tall, and getting that laptop meant I'd have to unplug the power brick in the boss' office, bring it back to my desk, and crawl around on my hands and knees under my desk to plug it in and get it ready for testing.  I don't LIKE crawling on my hands and knees.  The floor is a loooooong way away when you're as tall as I am, and I hate having to get under my desk like that.  My reason was lame, and it cost me several days work, a lot of frustration, and caused self-doubt that wasn't warranted.

The second - non-PAE-based - LiveCD I had created had worked perfectly, and the boss was already irritated at Ubuntu for having caused the PAE problem in the first place rather than being irritated at me for the first LiveCD not working.  I still ended up looking inept over the second LiveCD because I was too lazy to crawl under my desk for a minute or two.

Learn from my mistakes, folks.  A wise man learns from the mistakes of others, but a fool has to make his own.

So now I'm off to finish documenting the process.  I've altered the process several times, and now I'm to the point where I've taken Firefox out, installed Chrome instead (Firefox demanded that I log in two times to the same site each time I logged in, while Chrome did not; user friendliness is important on this project), updated everything using 'apt-get update; apt-get upgrade', and still managed to trim it down small enough to fit onto a 700MB CD.  I'm proud of what I've created.  I've learned a lot during this project.  I've been a Unix SysAdmin for "many" years, and I still learn every day.  So this project was not wasted time.  It was actually enjoyable!  It was just frustrating because I got lazy at a crucial juncture and as a result wound up doing unnecessary work.

The step-by-step document I crafted so that I, or anyone else here, could easily recreate a LiveCD is 20 pages long!  Maybe one day I'll post it to save someone else the trouble of learning the hard way like I did.

Benny Helms

