Erik Rossen's Software Collection



Here is a collection of software that I have written for automatically adding bookmark descriptions, translating JCL into Perl, simple electronic voting via email, and other tasks. Most of these tools are sharp and pointy - be careful.

All of this software is free (as in "free beer" and "free speech") and distributed under the GPL. As is usual with free software (and most proprietary software, for that matter), none of this stuff is guaranteed to be useful or safe to run. Some of it might even cause cancer in rats, but I wouldn't know since I refuse to test it on animals.

Most of the code that I write nowadays is in Perl, but don't let that frighten you. With a few exceptions, my code is procedural rather than object-oriented. I try to stick to common idioms and to extensively comment the code - some would say that it is over-commented.

If you find any bugs, write any patches, have suggestions for improvements, or just find this material useful, I would like to hear from you. I make my living as a freelance computer consultant, so if you want to pay me to add some feature that you need, I am particularly interested in hearing from you.


MD5 checksums for the paranoid

Here is a list of MD5 checksums for the packages available on this site.

It has been signed with my PGP key, so you can be more or less guaranteed that tampering will be obvious. If you are ultra-paranoid and can't trust even a nice guy like me, you will just have to read the included sources.
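
If you would rather script the check than eyeball it, here is a minimal Perl sketch. The file names (MD5SUMS.asc, lazybm.tar.gz) are only examples, and it assumes you have gpg and the Digest::MD5 module installed:

    #!/usr/bin/perl -w
    # Sketch: verify the signed checksum list, then check one package against it.
    # The file names below are examples, not the real ones on this site.
    use strict;
    use Digest::MD5;

    my $list    = 'MD5SUMS.asc';     # the PGP-signed checksum list
    my $package = 'lazybm.tar.gz';   # the package you downloaded

    # Step 1: let gpg verify the signature on the list itself.
    system('gpg', '--verify', $list) == 0
        or die "Signature check failed - do not trust this list\n";

    # Step 2: compute the MD5 sum of the package.
    open(my $fh, '<', $package) or die "Cannot open $package: $!\n";
    binmode($fh);
    my $sum = Digest::MD5->new->addfile($fh)->hexdigest;
    close($fh);

    # Step 3: look for that sum next to the package name in the list.
    open(my $lh, '<', $list) or die "Cannot open $list: $!\n";
    my $found = grep { /\Q$sum\E.*\Q$package\E/ } <$lh>;
    close($lh);

    print $found ? "MD5 sum matches.\n" : "MD5 MISMATCH - download it again.\n";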

MVS to Unix migration software

During November and December of 1999, I helped the Statistical Section of the International Labour Office of the United Nations to migrate their SAS software from an IBM mainframe to a Unix system. I have posted these pages and software in the hope that they are of aid to anyone contemplating a similar move.

The MVS to Unix Migration HOWTO - version 1.0, February 2, 2001.
The MVS to Unix Migration TOOLKIT - version 1.0, February 2, 2001.
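
To give a taste of what the toolkit's JCL-to-Perl translator is about, here is a hand-written sketch (not actual toolkit output) of how a trivial IEBGENER-style copy step might come out once rendered as Perl; the file names are invented:

    #!/usr/bin/perl -w
    # Hand-written illustration only - not output of the migration toolkit.
    # A JCL step that copies SYSUT1 to SYSUT2 could end up looking like this.
    use strict;

    my %dd = (                        # the DD statements become a simple hash
        SYSUT1 => 'input.dat',
        SYSUT2 => 'output.dat',
    );

    open(my $in,  '<', $dd{SYSUT1}) or die "SYSUT1: $!\n";
    open(my $out, '>', $dd{SYSUT2}) or die "SYSUT2: $!\n";
    print {$out} $_ while <$in>;      # the EXEC PGM=IEBGENER part
    close($in);
    close($out);

    exit 0;                           # condition code 0, more or less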

Web-related software

the lazybm bookmark keeper - new January 3, 2000
the VOTE email ballot manager - coming soon
the ADMIN membership database - coming soon (yeah, right)

Software written in Forth

After my brief trip to the 1960s (see the JCL converter above), I have decided to boldly stride into the 1970s and start using a language that was (almost) popular at the time that I first started to use computers - Forth. I won't bother putting any links to Forth-related web sites here - it is sufficient to do a bit of searching with your favorite search engine.

In brief, if you are one of those weirdos who likes HP RPN calculators or programming in PostScript, you will feel right at home with Forth.
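
If RPN means nothing to you, here is a toy postfix evaluator, written in Perl rather than Forth so it matches the rest of this site: operands go onto a stack and each operator consumes whatever is on top, which is exactly how Forth sees the world.

    #!/usr/bin/perl -w
    # Toy postfix (RPN) evaluator - only an illustration of the stack idea.
    # "3 4 + 2 *" means (3 + 4) * 2, the way Forth or an HP calculator reads it.
    use strict;

    my @stack;
    for my $token (split ' ', "3 4 + 2 *") {
        if ($token =~ /^-?\d+$/) {        # a number: push it onto the stack
            push @stack, $token;
        } else {                          # an operator: pop two, push the result
            my $b = pop @stack;
            my $a = pop @stack;
            push @stack, $token eq '+' ? $a + $b
                       : $token eq '-' ? $a - $b
                       : $token eq '*' ? $a * $b
                       :                 $a / $b;
        }
    }
    print "@stack\n";                     # prints 14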

A few notes about style:

VALLEYS - Version 1.0, April 6, 2002

Software to which I have contributed

Sometimes I run across a piece of software that I like so much, I feel compelled to improve upon it or at least offer a critique or bug report. Here is a list of some of the software to which I have contributed my $0.02.

The Year 2000 version of the LARI electronic telephone book for Switzerland. NOTE: I have received feedback saying that later versions of LARI do not include a Java applet and my patched version does not work with the new versions of the LARI databases. Too bad.
Eric S. Raymond's sitemap utility automatically produces a site index using the META DESCRIPTION tags in pages
Eric S. Raymond's imgsizer utility examines images linked from HTML pages with the IMG tag and determines HEIGHT and WIDTH attributes to speed up page loading
the Unix file(1) utility for rapidly identifying files
Brian D. Winters' muttzilla utility allows third-party mail clients to interface with Netscape
Dave Raggett's tidy utility checks HTML files for correctness and formats them for easier reading
Patrice Hédé's accents PostgreSQL user function produces a regular expression based on an un-accented string that can be used for searching in tables of accented data. Extremely useful for anyone using PostgreSQL in non-Anglophone countries. (A sketch of the idea follows this list.)
Adam Sussman's mod_auth_pgsql, an Apache module for authentication using a PostgreSQL database.
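
To make the accents idea concrete, here is a rough Perl equivalent of what such a function does. It is my own illustration, not Patrice Hédé's actual PostgreSQL code, and it only knows about a handful of lowercase accented letters:

    #!/usr/bin/perl -w
    # Illustration only - not the actual accents PostgreSQL user function.
    # Turn an un-accented search string into a regular expression that also
    # matches the accented variants of each letter.
    use strict;

    my %class = (
        a => '[aàâäá]',  c => '[cç]',    e => '[eéèêë]',
        i => '[iîï]',    o => '[oôöó]',  u => '[uùûüú]',
    );

    sub accent_regex {
        my ($word) = @_;
        return join '', map { $class{lc $_} || quotemeta $_ } split //, $word;
    }

    print accent_regex("Hede"), "\n";   # prints H[eéèêë]d[eéèêë]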

Reader Comments

From utoddl@email.unc.edu Tue Sep  4 21:46:19 2001
Date: Thu, 02 Aug 2001 09:16:32 -0400
From: Todd M. Lewis <utoddl@email.unc.edu>
Reply-To: Todd_Lewis@unc.edu
To: rossen@rossen.ch
Subject: MVS to Unix migrations

Erik,

I just enjoyed reading your "MVS to Unix migration HOWTO"
(http://www.multimania.com/rossen/software/migration/index.html). I
was/am involved in our MVS to Unix migration here at the U. of North
Carolina, and I periodically do a search for other such migrations to
compare techniques. I have not found many such migrations documented on
the web, so it's always a treat to see how others have approached the
problem.

Of particular interest is the suite of tools -- locally developed or
readily available -- selected for such a task. Your JCL to Perl
translator was neat, dealing with an issue we never even tried to
address through a program. That problem was written off as a training
issue for the users. Our neatest tool, "migflt", is a translator that
can read and write blocked or unblocked, fixed or varying or stream
(\n-delimited) records, trim trailing blanks, convert selected parts of
records between EBCDIC and ASCII, and strip line numbers. Fortunately we didn't
have the problem of custom EBCDIC encodings that you had to deal with
(but support for that could be added, now that I think about it).

Interesting too is how various migrations tend to emphasize different
aspects of a migration. We too were a mostly SAS shop, though there were
some other popular packages we had to deal with as well. We were mostly
concerned with actually copying and storing the data -- about 850Gb as
it turned out (most of which, no doubt, will never be looked at again).
We couldn't shut down during the move either; users kept right on using
their data while we copied it.  And, since it was not at all a given
that the move to Unix was permanent, we wanted to transform the data
sets as little as possible in the actual migration. In fact, we kept
enough information so that, if necessary, we could move the data back or
to another MVS system with appropriate RECFMs, DCBs, etc.

All of this data went into an archive from which users could check out
copies of their formerly MVS data sets. This allowed them to experiment
with Unix techniques and tools, and if they screwed up some files, they
could check out another copy from the archive. Furthermore, if they
subsequently updated a data set on the MVS system, we would eventually
migrate that data again, updating the archive with the most current
copy. That way users could phase in their Unix use as time and projects
permitted. We promised to keep the archive around for 5 years -- the end
of which coincides with the end of 2001.

Anyway, I wanted to thank you for documenting your migration and making
it available over the web. If you are curious, our migration
documentation (such as it is) can be found at
http://www.unc.edu/~ibmarc/.
-- 
   +------------------------------------------------------------+
  / Todd_Lewis@unc.edu              http://www.unc.edu/~utoddl /
 /(919) 962-5273               Lord, give me patience... Now! /
+------------------------------------------------------------+
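
For readers who have never had to do this sort of record surgery, here is a minimal Perl sketch of the kind of clean-up described above (unblocking fixed 80-byte records, dropping the sequence numbers in columns 73-80, trimming trailing blanks). It is only an illustration, not the migflt program, and it leaves out the EBCDIC-to-ASCII step, which needs a translation table of its own:

    #!/usr/bin/perl -w
    # Illustration only - not Todd's migflt program.
    # Unblock fixed-length 80-byte records, drop the sequence numbers in
    # columns 73-80, and trim trailing blanks.  The EBCDIC->ASCII step is
    # omitted here; it needs a 256-byte translation table.
    use strict;

    my $lrecl = 80;                       # fixed record length
    open(my $in, '<', 'dataset.bin') or die "dataset.bin: $!\n";   # example name
    binmode($in);

    my $record;
    while (read($in, $record, $lrecl) == $lrecl) {
        substr($record, 72, 8) = '';      # drop the sequence-number field
        $record =~ s/ +$//;               # trim trailing blanks
        print "$record\n";                # stream (\n-delimited) output
    }
    close($in);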

From utoddl@email.unc.edu Tue Sep  4 21:46:19 2001
Date: Thu, 02 Aug 2001 11:08:19 -0400
From: Todd M. Lewis <utoddl@email.unc.edu>
Reply-To: Todd_Lewis@unc.edu
To: Erik Rossen <rossen@rossen.ch>
Subject: Re: MVS to Unix migrations

Erik Rossen wrote:
> 
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
> 
> Thanks for the feedback.  It was interesting to hear that I was not the
> only person to have ever tried this migration.  You are only the third or
> fourth person to have ever contacted me about those pages that I wrote.

I've had very few contacts as well. It's sad to think how many people
must be reinventing this wheel. Surely we learned _something_ that would
save others some headaches.

> Question: can I include a copy of your email on my migration page?

I would consider it an honor. (Hope I spelled everything correctly.)


I just showed that message to Doug McIntyre, who did the programming on
the MVS side of our migration. He had some clarifying comments that I'll
pass along. Feel free to include whatever you think relevant:


* First, we had to migrate all the data. We tried for a while to
coordinate with users to determine what files were relevant, etc., but
that proved too time consuming and error prone.

* Furthermore, lots of the research data we housed was paid for with
grants that required that the data be kept for up to five years. So,
arguably, we were under contractual obligations to preserve the data
even though in some cases there was no identifiable user
associated with it.

* We found it hard enough to move our own data sets; training users to
move their stuff while they were already working 40+ hours/week was
simply not realistic. The move was our idea; we had to take
responsibility for making it work. That meant we had to:
  - have both systems up and running long enough for users to "learn
Unix"
  - let users switch projects over as their time permitted, and
  - move their data for them.

* If we moved the data for users, then we faced the daunting problem of
distributing the data once it was on the Unix side. The MVS and Unix
userids didn't match up one to one, we could easily blow individual
users' quotas if we dumped everything in their home directories, and we
couldn't make assumptions about how they wanted to structure their data
on the Unix side. The archive solved these problems for us; users could
pull data from the archive to their current directory as they needed it,
and we didn't need to push the data out to them. Nor did we need to
re-migrate anything should they lose their copy.

* Users still needed to authenticate to access the archive, and the MVS
and Unix logins didn't match up one to one. So when they ran the
"checkout" program, it asked them for their MVS userid and password.
Checkout would authenticate them against MVS (by opening an ftp session
with MVS), and if successful it would cache their encrypted passwords in
the archive. That way, once the MVS system was gone, we could still
authenticate their checkout requests with their MVS userid and password,
regardless of which Unix ID they were using.

* Checkout performs the final conversions of the data: EBCDIC->ASCII,
unblocking, trailing blanks, line unnumbering, etc. Since most data sets
will never be checked out of the archive, that's quite a bit of
manipulation we'll never have to do. And if a conversion turns out to be
the Wrong Thing, a user can check it out again with custom conversions,
or with no conversions. We wanted to avoid doing the wrong conversion --
thus irrevocably screwing up the data -- before the data landed in the
archive; we'd like to keep all of our screw-ups out in front of us:-)

Well, that's about it. I see you've contributed to HTMLTidy. Me too -- I
submitted the code that optionally uses ~/.tidyrc.  Free software. Neat
stuff.

Happy Computing,
-- 
   +------------------------------------------------------------+
  / Todd_Lewis@unc.edu              http://www.unc.edu/~utoddl /
 /(919) 962-5273               Lord, give me patience... Now! /
+------------------------------------------------------------+
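
The "authenticate by opening an ftp session" trick that Doug describes is simple enough to sketch in Perl. This is only an illustration of the idea, not their checkout program; the host name is invented and it assumes the Net::FTP module is available:

    #!/usr/bin/perl -w
    # Illustration only - validate an MVS userid/password by trying to log
    # in over FTP, the way the checkout program described above does.
    use strict;
    use Net::FTP;

    sub mvs_credentials_ok {
        my ($userid, $password) = @_;
        my $ftp = Net::FTP->new('mvs.example.edu', Timeout => 30)
            or return 0;                  # host unreachable: treat as failure
        my $ok = $ftp->login($userid, $password);
        $ftp->quit;
        return $ok ? 1 : 0;
    }

    print mvs_credentials_ok($ARGV[0], $ARGV[1]) ? "OK\n" : "REJECTED\n";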

--

Erik Rossen <rossen@rossen.ch>
OpenPGP key: 2935D0B9
Tel: +41 78 617 72 83
Home URL: http://www.rossen.ch

Copyright © 2000 until the heat-death of the Universe (thanks, Mickey!), by Erik Rossen
Last modified: 2016-02-07T12:43:22+0100
