Command-Line Example
I began the automation process at the command line. A variety of tools can retrieve web pages from the command line, including:
- cURL (curl.haxx.se)
- wget (www.gnu.org/software/wget/)
- Netcat (netcat.sourceforge.net)
Once you retrieve a page, you can parse its content using the ever-useful combination of sed/awk/grep and regular expressions.
Suppose you want to be notified when a new version of Microsoft's Process Explorer is released. Figure 1 shows the Process Explorer web page (www.microsoft.com/technet/sysinternals/utilities/processexplorer.mspx) with the version information you wish to monitor (v11.04). The following one-line script determines the current version; I ran it from a cygwin (www.cygwin.com) bash session:
wget -qO- http://www.microsoft.com/technet/sysinternals/utilities/processexplorer.mspx | sed 's/<[^>]*>//g' | grep -o -m1 -P "v[\d\.]{4,}"
The output (at this writing) is as expected: v11.04. Note that:
- The wget -q option suppresses wget's own status messages, and -O- sends the retrieved page to stdout instead of a file.
- The sed 's/<[^>]*>//g' command deletes HTML tags (any text beginning with "<" and ending with ">"). Because you're after the page content rather than the markup, the tags can simply be stripped away.
- The grep -o -m1 -P "v[\d\.]{4,}" command outputs only the first match of the given Perl regular expression: the letter "v" followed by at least four characters, each a digit or a period.
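The choice of retrieval tool is not essential. As a sketch, the same pipeline can be expressed with cURL (listed earlier) using its -s (silent) option, since curl writes the page to stdout by default:

curl -s http://www.microsoft.com/technet/sysinternals/utilities/processexplorer.mspx | sed 's/<[^>]*>//g' | grep -o -m1 -P "v[\d\.]{4,}"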
The script relies on a common editorial pattern: the most current information appears first, which is why the first match suffices. If the page listed multiple releases in chronologically ascending order, additional processing would be needed to grab the last match instead of the first.
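For instance, a minimal variation (assuming the same page content) drops grep's -m1 and pipes all matches through tail to keep only the last one:

wget -qO- http://www.microsoft.com/technet/sysinternals/utilities/processexplorer.mspx | sed 's/<[^>]*>//g' | grep -o -P "v[\d\.]{4,}" | tail -n 1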
The command-line approach works well and can readily be incorporated into other scripts to provide additional functionality such as automatic downloading or e-mail notifications (see the sketch after this list). However, the command-line approach has inherent limitations:
- Multiple websites must be checked and processed sequentially.
- A single nonresponsive website can therefore slow down the entire process.
- Reviewing, sorting, and filtering results requires additional work.
- Skipping a specific website within the set of monitored websites is awkward.
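To illustrate the kind of wrapper script mentioned above, here is a minimal bash sketch that caches the last version it saw and reports when the version changes. The URL is the one used earlier; the cache file name and the echo-based notification are illustrative placeholders, not part of the original one-liner.

#!/bin/bash
# Sketch: wrap the one-liner, cache the last version seen, and report changes.
# LAST_FILE and the echo notification are illustrative placeholders.
URL="http://www.microsoft.com/technet/sysinternals/utilities/processexplorer.mspx"
LAST_FILE="$HOME/.procexp_version"

current=$(wget -qO- "$URL" | sed 's/<[^>]*>//g' | grep -o -m1 -P "v[\d\.]{4,}")
previous=$(cat "$LAST_FILE" 2>/dev/null)

if [ -n "$current" ] && [ "$current" != "$previous" ]; then
    echo "Process Explorer updated: ${previous:-none} -> $current"
    # An e-mail command (mail, sendmail, etc.) could be substituted here.
    echo "$current" > "$LAST_FILE"
fi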