More Letters to the Editor


awk Article

Thanks for the recent articles on awk (“The awk Utility”, Louis Iacona, June 1999) and sed (“Good Ol' sed”, Hans de Vreught, April 1999). I think these useful tools have been unfairly eclipsed by Perl, and deserve more attention than they have received recently.

awk's powerful library of string-editing functions makes it ideal for reformatting the output of one program to be read as input by another. It can do things that are difficult with sed—not least because of sed's prohibition of comments, which reinforces its reputation as a write-only language. Still, sed is much faster than awk, so I prefer sed scripts for quick editing (e.g., converting TeX to HTML).
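
For example, a quick sed edit of the sort I mean might map a couple of TeX font commands onto HTML tags (a sketch only; the file names and the two rules are made up and nowhere near a complete conversion):

$ sed -e 's/\\textit{\([^}]*\)}/<i>\1<\/i>/g' \
      -e 's/\\textbf{\([^}]*\)}/<b>\1<\/b>/g' article.tex > article.html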

Though the awk article emphasized its use as a file processor, there's no need to process any input at all. I often write awk scripts with just a BEGIN section, ending with an exit statement. These take the place of all those one-page, throwaway programs I used to write in FORTRAN. It's very handy for tabulating a function or checking an algorithm.
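
A minimal sketch of such a BEGIN-only script (not one of my actual throwaways) that tabulates sin(x) from 0 to 1 without reading any input:

$ awk 'BEGIN { for (x = 0; x <= 1.0; x += 0.1)
                   printf "%4.1f  %8.5f\n", x, sin(x)
               exit }'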

—A.T.Young, aty@mintaka.sdsu.edu


Attention all Linux users

While reading a recent article on slashdot.org, I was struck by a strategy for increasing commercial support for Linux. If every Linux user made a list of the commercial applications he or she wished were available for Linux, then mailed the companies behind those products, promising to buy their software if a Linux version were released, I believe companies would take notice.

How many of us must switch to some form of Windows 9x/NT to use a program not available for Linux? I do it almost every day. For my parents, it's Quicken; for me, it's StarCraft. I'm fairly sure I'm not the only person who would be willing to buy a piece of software a second time to be able to run it under Linux.

What I believe keeps companies from developing for Linux is that we (Linux users) sometimes get stereotyped as anarchists who want to destroy corporate America and who believe anything not released under the GPL is bad. I truly feel that if we all let companies know we are willing to pay for what we want, they will be more than willing to sell it to us. Write letters and e-mail messages, sign petitions—whatever you can to let people know. Make your wish list of applications to be ported to Linux, then help make it a reality. Well, I'm off to start writing letters.

—Loren Weber, cobweber@jps.net

SSC maintains an on-line Wish List at http://www.linuxresources.com/wish/. Other sites do too.
—Editor

Hello and Thanks

I just spoke with Matthew Cunningham at the LJ booth on the Comdex floor here in Chicago. LJ has sponsored a very nice exhibit. Anyway, I'm a long-time subscriber to LJ (I don't quite go back to Issue 1, but almost).

I very much enjoyed the article “Virtual Network Computing” by Brian Harvey in issue 58 (February 1999). In fact, I was just about to purchase 400 seats of PC Anywhere for my company when I read the article. Instead, I installed VNC and saved the company a great deal of money. Mostly we use Windows 95 and NT at work, but we do use Linux for firewalls and servers.

Keep up the great work; I read my issue faithfully, cover to cover, every month!

—Dave Truckenmiller, dtrucken@zsassociates.com


“grep: Searching for Words” Article

From page 10, LJ issue 60:
grep flip article.txt
This will search for the word “flip” in the file article.txt and will display all the lines containing the word “flip”.

That's wrong; grep does not know what a word is. Either you instruct it to really search for a word, or you are searching for lines containing the four consecutive letters “f”, “l”, “i”, “p”.

In fact, the command above would also match “flippant”, “flipflip”, etc.
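
With GNU grep, for example, telling it to really match a word looks like this; both forms restrict the match to “flip” as a whole word:

$ grep -w flip article.txt
$ grep '\<flip\>' article.txt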

—Nagib Hobeica, nag@beryte.com


Linux logo for computers

I like Linux Journal very much. Recently I was fiddling with turning the Linux penguin, with “Linux inside” beneath it, into a nice sticker that fits the square on the front of regular PC enclosures: 'Linux' in green with an italic red 'i'. It occurred to me that it would be nice if you printed such a sticker in *each* issue of LJ, where readers could cut it out and paste it on their machines. I believe that in a very short time we would see that happen.

What do you think of it?
Kind regards,

—Kees Schoenmakers


Code Error

First of all I would like to congratulate you and your team on the good job you do. I've been an avid reader of LJ for quite some time.

I would like to point out that the multithreaded code in issue 61 has a bug: the pointer tids isn't malloc'ed correctly, or at all, which may very easily lead to a segmentation fault. Anyway, the article clearly addressed people with good programming knowledge, so everybody should be able to spot the bug and correct it.

Regards,

—Andreas Zielke, azielke@TechFak.Uni-Bielefeld.DE


Thanks and e-address change

Thank you for publishing my opinions in Letters to the Editor. Unfortunately, I have changed my e-mail address, so I won't receive any feedback. Pacific Bell didn't support Linux, and I surely didn't like having to bring down my system to boot Microsoft, so I changed to ISPchannel and am now happy to have reasonable speed.

Regards,

—David Baker, debaker@ispchannel.com


Re: resource location

From: “Berg, Rick (GEAE)”, rick.berg@ae.ge.com
In the article “A Toolbox for the X User”, there is a reference to an FTP site, ftp.ienet.ie/pub/X-contrib/applications. I cannot reach it, and nslookup cannot find ftp.ienet.ie or ienet.ie. Is the reference correct? If so, are there other locations where I could find the source for xtar and mgdiff?

Thanks for your interest. Since I wrote the article over half a year ago, the site mentioned there seems to have vanished.

The source code for both is available from ftp.x.org:

ftp.x.org: contrib/utilities/xtar-1.4.tar.gz
ftp.x.org: contrib/applications/mgdiff.tar.gz

—Christoph Dalitz, dalitz@infotech.de


Red Hat

I have been using Linux for about five years now, and after attempting to install Red Hat 5.0, 5.1 and now 5.2, I am forced to say that this product is going to be the death of Linux. It is difficult to install, even more difficult to configure, and the e-mail help from this organization is nonexistent. They rarely answer questions, and when they do, the answer is usually wrong.

Slackware has a 32-page installation guide that fits in my shirt pocket, and with it I can install and configure Linux completely. O'Reilly's Running Linux is a great book for the new Linux user; unfortunately, a great deal of its information on how to configure your desktop and X does not apply to Red Hat's cryptic configuration files.

I have been trying to get my boss to try Linux for over a year now. He is no neophyte to computers; he has been a software engineer for over 20 years. He could not get PPP to dial, so I told him to check whether it was installed with 'pppd --version'. When nothing happened, he decided to ask Red Hat why it wasn't installed. Red Hat's answer was as follows: 'Tried to hack into the internet lastnite, almost made it'. Any company that wants to compete with a larger company like Microsoft will have to put up with a lot of apparently stupid questions and answer them quickly and intelligently, for the good of the OS, even if they are just canned answers. Sign me... never again Red Hat.

—Bob, daunais@metro2000.net


Signal to Noise

Thanks for what has been a fantastic magazine. Until very recently I've been very impressed with the quality of the articles in LJ and even the Q&A section. This month's Q&A section is exactly what I feared would begin to occur with the groundswell Linux is experiencing.

There are a lot of questions being posted that people could have solved on their own had they JUST RTFM'd; it's all there, the man pages explain everything in detail! I'm actually REALLY scared that our favorite OS is being dumbed down by the influx of NT admins who like the pretty pictures and are not willing to learn anything; they want everything spooned into their gaping mouths.

NT has one quality (ease of installation) that allows these people to get in way too deep; it requires no knowledge of networks, security or anything else. I'm scared, REALLY scared, for my favorite OS and about the corporate pressures it is beginning to suffer.

Anyway, please, please, at least tell these people (when answering their absurd questions) that they could have gotten the information by typing “man foo”.

Let's keep signal to noise high...PLEASE??

—Eric Sobalvarro, eric@razorjack.com


BTS May 1999: FTP Stuck

In the May 1999 issue of Linux Journal, Steve Mitchell wrote to ask for help with an FTP problem. He was getting the message “connection is closed by the remote host” when he attempted to initiate an FTP session with a machine running Red Hat. While Chad Robinson listed two possible explanations/solutions to the problem, he missed what might be the most likely one.

FTP and other services on Red Hat are wrapped by tcpd to enable fine-grained control of access to these services (via /etc/{hosts.allow,hosts.deny}) and detailed logging of requests for them (in /var/log/secure). While Chad mentioned checking whether the line pertaining to ftpd in /etc/inetd.conf was commented out (recommending that Steve uncomment it and send inetd a SIGHUP), he did not advise Steve to check hosts.allow and hosts.deny to verify that tcpd would let him connect to that service on the Red Hat machine.
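
For example, a minimal pair of entries (a sketch only; the in.ftpd daemon name and the 192.168.1.x local network are assumptions) that denies everything by default and then lets the local network reach FTP would be:

# /etc/hosts.deny
ALL: ALL

# /etc/hosts.allow
in.ftpd: 192.168.1.

Denied attempts then show up in /var/log/secure.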

A faithful Linux Journal reader,

—Darryl Allen, dallen@value.net


i18n and charsets

To: turnbull@sk.tsukuba.ac.jp
I would like to express my appreciation for your article in Linux Journal, March 1999: http://www.ssc.com/lj/issue59/3286.html

I have been studying the same subject for about two years. If you can read Russian, please visit my page “Locale AS IT IS” (about POSIX locales): http://www.sensi.org/~alec/locale. I also hope my collection of links could be useful to you: http://www.sensi.org/~alec/locale/locale_i.html

As you correctly state in your article, there are several encodings of Russian (and other Cyrillic/Slavic) characters. You can also visit Roman Czyborra's page, http://www.czyborra.com (in English), for detailed information about them; see “Cyrillic Charset Soup”: http://czyborra.com/charsets/cyrillic.html

The same author explains the meaning of the terms CES (Character Encoding Scheme) and CCS (Coded Character Set): http://czyborra.com/utf/. A more formal explanation is on the Unicode site: http://www.unicode.org/unicode/reports/tr17/

Unfortunately, in a “pure” POSIX environment we completely lose the CES, both for “flat” files and for the serial terminal (or the /dev/vty emulator). We also have no way to get the charset name for the current locale; we should use XPG's nl_langinfo(CODESET) function, which is not included in the POSIX standard.

Now, about mounting. Newer Linux kernels (>2.0.35) support NLS, which allows you to have two different encodings: one for “storing” filenames on disk, and another for “transferring” them to the kernel level. For example, for a VFAT file system with Russian filenames one can use:

$ mount -t vfat -o noexec,codepage=866,iocharset=koi8-r /dev/hdb1 /mnt
Since codepage 932 is Shift-JIS, this feature could be useful to you.

And now, about charsets. Regrettably, at present no standard for “charset names” exists. We can use:

ISO8859-1
ISO_8859-1
ISO-8859-1
Many years ago, IBM and Microsoft introduced the term “codepage”, which now practically means “charset”. We can also use several names for these codepages:
CP866
IBM866
etc.

For Windows, we can use CP1251, which in Internet Explorer and MIME is called by a different name, “Windows-1251”.

Some time ago, I had a discussion with Ulrich Drepper. In the glibc2 library, he uses a charset-name “mangling” method: he converts everything to lowercase and removes all “-”, “_”, “.”, etc. This works for ISO8859-1 and ISO-8859-1, but completely fails for CP1251 and Windows-1251.
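
The effect of this mangling can be sketched at the shell prompt (just an illustration of the idea, not glibc's actual code):

$ echo ISO-8859-1   | tr 'A-Z' 'a-z' | tr -d '_.-'
iso88591
$ echo ISO8859-1    | tr 'A-Z' 'a-z' | tr -d '_.-'
iso88591
$ echo CP1251       | tr 'A-Z' 'a-z' | tr -d '_.-'
cp1251
$ echo Windows-1251 | tr 'A-Z' 'a-z' | tr -d '_.-'
windows1251

The first two names collapse to the same string, but cp1251 and windows1251 do not, so the alias is missed.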

Setting the charset name incorrectly (via LANG) breaks the keyboard event --> keysym --> character string translation for XKB and for the Xlib XLookupString() and XmbLookupString() functions.

Unfortunately, neither Ulrich in glibc2 nor the XFree86 authors support the charset-name aliasing method introduced by IANA (http://www.isi.edu/in-notes/iana/assignments/character-sets) as the “Alias:” field.

I thank you once again for your very interesting and informative article!

P.S. Finally, I would like to ask you to turn the addresses indicated in your article, http://turnbull.sk.tsukuba.ac.jp/Tools/I18N/LJ-I18N.html, into links.

With best regards,
—Alexander Voropay, a.voropay@globalone.ru


Reply to “Linux Threatens More Than Microsoft”

Mark Matthews writes (Letters, May 1999, “Linux Threatens More Than Microsoft”) that “what everyone seems to be overlooking is the threat it [Linux] poses to other UNIX systems ... Sun and SCO should watch out”. What, I would like to ask, is so good about that? Yes, I value free software, and yes, I value Linux, but what I really value, when I get into work every morning, is:

and many other advantages too numerous to mention.

Clearly, I am preaching to the converted in Linux Journal. My point, however, and what I feel Mark Matthews misses, is that these are the benefits of all Unix systems, be they Sun, HP, SCO or Linux. In-fighting within the Unix community has only ever helped its opponents, and it certainly will not make it any easier for us to enjoy more Unix in our working lives.

—Alexander Thorp, athorp@lucent.com


'Bug'

I saw a letter in your March 1999 issue, followed up in your May issue, on the genesis of the term 'bug' for a flaw in some piece of equipment. The Oxford English Dictionary, 2nd ed., usually provides a definitive origin; in this case, it states:

bug.
b. defect or fault in a machine, plan or the like, orig U.S. 1889 Pall Mall Gaz. 11 Mar. 1/1 Mr Edison, I was informed, had been up the two previous nights discovering 'a bug' in his phonograph - an expression for solving a difficulty, and implying that some imaginary insect has secreted itself inside and is causing all the trouble.

The OED listing is the first known print use of the term 'bug', so it is quite possible, although by no means certain, that the term actually originated in the experimental laboratories of America's greatest inventor, Thomas Edison. Certainly it predates the start of the 20th century. Attributions to Admiral Hopper are clearly well short of the mark.

Sincerely,

—Eric H. Larson


At the Forge Factual Error: Don't Trust Hidden Fields

Reuven,
On page 14 of the June 1999 LJ issue, you state that a hidden field in a form can be useful for hardcoding a value the user cannot change. This is not true. For example, I could save your form to disk, edit the value and POST it to your handler. Your code needs to validate such fields on the back end, or else bad things can and will happen.

I use the Digest::MD5 module to verify that hidden fields sourced from user data have not been tampered with (this technique is also recommended for Netscape cookie processing; a clever user can fiddle with the cookies in ~/.netscape/cookies).
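
The idea, sketched here at the shell rather than in the actual Perl (the field value and the secret are made up for illustration), is to send along a digest of the hidden value plus a server-side secret and recompute it when the form comes back:

# a sketch of the idea, not the column's or my actual code
secret='only-the-server-knows-this'   # never sent to the browser
value='price=42'                      # the hidden field's value
digest=$(printf '%s%s' "$value" "$secret" | md5sum | cut -d' ' -f1)
# embed $digest in a second hidden field; on submission, recompute
# the digest from the submitted value and compare the two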

Also, I'd suggest you look into the MIME::Lite module for sending mail. It has the nice ability to handle attachments, which I often need to do from my mod_perl-based applications.

Keep up the good columns, and if you can, how about some more mod_perl-related material?

—C. Jon Larsen, jlarsen@ajtech.com


“grep: Searching for Words” rife with errors

I'm a little bit late getting to read my April issue of Linux Journal, so no doubt somebody has already pointed this out, but “grep: Searching for Words” was such a poorly-written article, I just had to write in. It is certainly the worst article to appear in Linux Journal during the time I've been reading the publication (a few years now).

My complaints:

1. The worst inaccuracy in the article is where Jan says that because grep “accepts regular expressions”, you can use “grep flip *” to search for “flip” in “all files in the directory”. Yes, grep accepts regular expressions in its patterns, but this has nothing to do with the fact that most shells will expand '*' to be all the (non-dot) files in the current directory (see the sketch after this list)! This is a dangerous misconception to feed to Linux newbies, especially if they're coming from a DOS background, where it really _is_ the command-line programs' responsibility to interpret file-matching wildcards.

2. The weird self-referencing example output of the grep command (“article.txt: grep flip article.txt”) was confusing enough to make me have to re-read it a couple of times—I hate to think how it would have been for a newbie. I know space was limited, but the whole article would have been a lot clearer for beginners if it had started out with a very short example text file that all the grep examples would have been run against, rather than running them against the article itself (where each successive insert of grep results could invalidate the results of previous grep runs).

3. In any case, the example output is only correct if there are multiple files in the current directory containing the word “flip”. Otherwise, the string “article.txt: ” would not be prepended to the results. I'm sure many beginners were confused when they tried out the command and didn't get the type of output Jan said they would.

4. Next, the -e option is introduced, but for no reason! “I put the -e (i.e., do pattern search) option in this example just for demonstration purposes.” Just what is being demonstrated? In any case, -e doesn't tell grep to “do pattern search”—that's all grep does! It _specifies_ the pattern to search for. If the -e option is going to be introduced, why not use it for the only thing it's good for—searching for expressions that begin with a '-' (again, see the sketch after this list)? That's a non-obvious usage requirement that all grep users bang up against eventually, and it would have been nice to mention it here.

5. It's confusing to say that the output of grep -n “will look like that shown above, with the file name replaced by the line number before the colon”. What is the grep command that Jan is proposing here? 'grep -n “is the” filename' or 'grep -n flip *'? Since he references the output of the latter command, that seems to be the one he's talking about, and in that case, the line number won't _replace_ the file name before the colon—it'll be _appended_ to it.

6. Next, egrep is introduced along with fgrep as a way to make grep faster. What?!? egrep, if anything, is slower than grep, not faster. The only case I can think of where egrep would be faster is as an alternative to a pipeline of normal grep commands. Jan could have made this point, as he includes an example of such usage in the next section, but he drops the ball. That would have been a good point to make, as it's not necessarily desirable to show pipelines with multiple grep calls (without intervening output _transformation_) as the “right” way to do things.

7. And finally, the aforementioned “grep ... | sort ... | grep ... | cut ... > result” example is next to worthless for a text-filter newbie, the kind of user being targeted here. If the article had contained a short example text file, as I suggest, an actually useful pipeline example (without '...'s) could have been given, and it would have been much more illuminating.
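
Here is a short sketch of the first and fourth points (the file names are hypothetical):

$ echo grep flip *                # the shell, not grep, expands '*'
grep flip article.txt notes.txt
$ grep -e -verbose article.txt    # -e passes a pattern that begins with '-'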

In conclusion, I must stress again that the limited space afforded to this article is not a valid excuse for all the inaccuracies and misleading information. I could have easily written a much more accurate and informative article (or edited Jan's into one) in the same amount of space. This article was far, far below the writing and editing quality I have come to expect from Linux Journal.

—Dan Harkless, dan@wave.eng.uci.edu

Sorry, my editors and I take responsibility for this one. English is not the author's first language, and we obviously did not spend enough time on it.
—Editor

awk article in LJ

I like articles like yours on awk in the most recent issue of Linux Journal. They point readers to the fun of UNIX.

I too like awk and use it a lot for small, preferably command-line, applications. However, I think it would be useful to tell readers at the same time about the much greater capabilities of Perl, thus keeping them from writing larger applications in awk.

Your listing 4 can be written in Perl as follows:

#!/usr/bin/perl -w
open(IN,"/etc/passwd");
while (<IN>) {
  chomp;          # remove \n
  s/.*://;                    # keep last field only
  $shell{$_||"no shell"}++;   # increment shell's counter
                              # calling it "no shell" if empty
}
while(($k,$v)=each %shell) {  # print all counters
  printf("%3d users %3d%% %s\n",
    $v,int($v/$.*100+.5),$k); # $. is the number of lines read so far
}
This, even though your awk script could be written a lot shorter, is much more compact and yet easier to understand. Also, it counts and reports the usage of shells other than the three you selected (though this aspect could have been handled in awk as well, using its associative arrays; see the sketch below).
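
For what it's worth, a rough awk counterpart using an associative array (my sketch, not the original listing) could be:

awk -F: '{ n = ($NF == "") ? "no shell" : $NF; count[n]++ }
         END { for (s in count)
                   printf "%3d users %3d%% %s\n",
                          count[s], int(count[s]/NR*100+.5), s }' /etc/passwd
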
A simpler and less verbose Perl one-liner would be:
perl -ne 'chomp; s/.*://; $s{$_||"no shell"}++;
          if (eof) { while (($k,$v) = each %s)
              { print "$v\t$k\n" } }' /etc/passwd

Kind regards,

—Wybo H. Dekke, wybo@servalys.hobby.nl