Letters

Re: SIGALRM Timers and Stdin Analysis

In Dave Taylor's “SIGALRM Timers and Stdin Analysis” article in the November 2012 issue, he was using ps to check whether a process was running before sending a signal for termination, as in the snippet below:


ps $$ > /dev/null
if [ ! $? ] ; then
  kill -ALRM $$
fi

As an alternative to ps ..., he might want to use kill -0 $$. With -0, no signal is actually sent; the exit code simply indicates whether a signal could be delivered to the process.
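
For example, the check above might be rewritten roughly like this (a sketch of the idea, not code from the article):


if kill -0 $$ 2>/dev/null ; then
  kill -ALRM $$    # the process exists, so a signal can be delivered
fi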

I learned this from some good folks I used to work with. Thanks for the article.


Paul

Dave Taylor replies: Nice solution, Paul. Thanks for sharing it!

My Facebook

I have been doing some research on Facebook and found that it can be run on CentOS with PHP.

That said, I thought I would start my own Facebook, and I saw this and thought you might like to do an article on it: elgg.org.


Jeffrey Meyer

Fascinating, Jeffrey. Perhaps individual social-networking platforms are the next big thing. With a common back end for users (like Gravatar, even), I could see it working.—Shawn Powers

Why I Let My Subscription Lapse and Won't Ever Re-subscribe

It took my desktop computer far longer than it should have to render a page with an ad—closer to a minute, I think.

The opportunity cost to read your magazine in PDF is too high, even if it is free.


Andrew Snyder

Sorry you're having a difficult time with ads, Andrew. Although advertising “keeps the lights on”, we try to select only vendors that will be of interest to our readers.—Ed.

I Will Renew

Thanks to Shawn Powers for the humor and info. I've been a reader since March 1994, and I will continue with the digital subscription in no small part due to his tongue-in-cheek approach.


William Weiss

Thanks, William! I have no idea what you're talking about, however. I totally want a cybernetic implant in my brain.—Shawn Powers

I Like the Way You Write

Shawn, I write regular tips-and-tricks articles about Linux and related topics for a blog (tipstricks.itmatrix.eu). I do it in a very dry way, and after reading Shawn Powers' “Taming Tomcat” article in the July 2013 issue, I was delighted by the simple, friendly way he wrote it.

Thanks for the contribution. If you ever want to write on any of the subjects I wrote about in the blog, be my guest. I have put no restrictions on the use of this material.


Michel Bisson

Aw, shucks. It's the month for making me feel shiny about my writing, I guess. It was my birthday recently, so I'll take your letter as an awesome gift. Thank you very much.—Shawn Powers

Sleep Patterns

The graph of Shawn Powers' sleep patterns looks a lot like mine did, until I realized that I probably entrain to artificial light [see Shawn's “Sleep as Android” article in the Upfront section of the July 2013 issue]. In other words, my brain misinterprets artificial light as sunlight, so it doesn't think it's nighttime even after the sun has set, and so it doesn't prepare my body's physiology for sleep. Those changes take a while (an hour or two), so when I turned out the lights, I tossed and turned for an hour or two until my body adjusted to the darkness.

As it turns out, this is all due to the presence of retinal neurons that function to detect a narrow range of blue light (sky blue, in fact). Their job is to inform the brain when they no longer detect that light (that is, when night has fallen). However, in some individuals, these cells see enough blue light in the artificial light spectrum to fool them into thinking the sun is still up (computer screens and TVs emit a lot of light in this blue range). I'm not sure why this happens in some individuals and not others, but I wonder if it might have to do with eye color (I have blue eyes).

In any event, it's easy to use glasses to filter out this narrow range of blue light, thereby plunging the relevant retinal neurons into “darkness”. Individuals with this problem can don the glasses for an hour or two before they wish to retire; after that, sleep (in my experience) comes easily and naturally, and much less fitfully.

I bought a pair of these glasses from www.lowbluelights.com, and my sleep patterns have improved enormously. The effects have lasted long enough (about a year) to rule out any significant placebo effect. You might want to give them a try (I have no association whatsoever with this company, except as a satisfied customer).


Chris Brandon

Whoa, that's fascinating. I have bluish-grey eyes as well. (Again, this might be anecdotal, but it's still interesting.) I'll have to try that out. Thanks for the info!—Shawn Powers

DevOps

Tracy Ragan's opinion piece “21st-Century DevOps—an End to the 20th-Century Practice of Writing Static Build and Deploy Scripts” in the June 2013 issue told us repeatedly that she doesn't hold with scripts. She hints once or twice that a model-driven approach would be better; it would have been great if she'd told us how that would work!


S.

Tracy Ragan replies: Defining how a DevOps Model-Driven Framework is implemented is a complete article in itself and can't be covered in a few short sentences. However, there are solid examples in both the commercial and open-source markets that have model-driven solutions worth looking into. Take a look at how Chef from Opscode uses “recipes” for defining standard server configurations. On the build-management side, take a look at Meister from OpenMake Software and how it uses “Build Services” for creating standard models for compiling more than 200 different languages. In the deployment world, CA Release Automation (previously Nolio) uses standard models for performing application-level deployments, similar to IBM's Rational Application Framework for managing WebSphere deploys.

In essence, to deliver a Model-Driven Framework, you establish solutions and processes that separate the build, test or deploy logic from the “hard-coded” references to filenames, directories and so on. That information is delivered through manifest files, target files or other containers, which are then passed into the logic. In other words, the “how” and the “what” are kept apart until an action needs to occur. You may end up with many files containing the “what”, but only a few containers that include the “how”. The logic thereby becomes reusable and easily managed. By having only a few containers that manage the “how”, you end up with a highly repeatable and transparent process for moving code from development to production, with the ability to report on details that scripts cannot produce.
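
As a loose illustration of that separation (the file names, variables and commands below are hypothetical, not any particular product's format), the reusable “how” can be a single script driven entirely by a per-application “what” file:


#!/bin/sh
# deploy.sh -- the reusable "how": nothing application-specific lives here
# Usage: ./deploy.sh myapp.manifest

. "$1"    # the manifest (the "what") defines ARTIFACT, TARGET_HOST,
          # TARGET_DIR and RESTART_CMD

scp "$ARTIFACT" "$TARGET_HOST:$TARGET_DIR/" &&
    ssh "$TARGET_HOST" "$RESTART_CMD"

Adding a new application then means writing a new manifest, not a new script:


# myapp.manifest -- the "what": data only, no logic
ARTIFACT=build/myapp-1.2.3.tar.gz
TARGET_HOST=deploy-target-01
TARGET_DIR=/opt/myapp/releases
RESTART_CMD="/etc/init.d/myapp restart"

Thanks for the feedback.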

Electronic Vs. Paper

So, it has been some time since you guys have done a paper copy of LJ.

Up until the time you stopped printing LJ, I had read every issue since issue 1, usually within a month or two of it coming out.

Now that you have gone digital, I am well over a year behind. So clearly, reading on a tablet or other screen doesn't work for me. I do it for shorter articles, but not for something the size of LJ.

You seriously need to consider a paper copy again—there are those of us who would happily pay more for one. You could do a limited print run priced at break-even or slightly above cost.

Nuts and Volts magazine has figured out how to do this, so why can't you?


Jeff Regan

It hasn't come up in a while, Jeff, but thanks for letting us know about your struggle. I'm not sure how Nuts and Volts does it, but it's something we can look into for sure. I do worry the price might be painful, but some folks might be willing to pay the premium, I suppose. At the very least, it might be worth looking into print-on-demand options. I will do some research and see what I can come up with.—Ed.

Using a Chromebook for Development

I just got an Acer C7 Chromebook as a replacement for a laptop that recently decided to work no more.

It has great hardware specs for its price with no Microsoft tax, but unfortunately, the software is not suitable for development. The first thing I did was install Crouton so I could run Ubuntu. Great! But several packages still need to be installed to make it a development machine.
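
The process looks roughly like the following (the Crouton target and package names here are only examples; pick whatever suits your setup):


# from ChromeOS (in developer mode), install an Ubuntu chroot with Crouton
sudo sh ~/Downloads/crouton -t xfce

# then, inside the chroot, pull in some basic development tools
sudo apt-get update
sudo apt-get install build-essential git vim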

Later I found Nitrous. It provides a virtual 8-core Linux box running Ubuntu 12.04.2 LTS with your choice of development languages and frameworks (RoR, Python, Django, Ruby and Node.js at the moment). You also can connect your box to GitHub.

Best of all, besides being free, you get a decent Web IDE. Shell access is also provided, either via SSH or directly within the IDE.

You can join Nitrous for free at https://www.nitrous.io.

Keep up the good work!


jschiavon

Cool! Moving the development side of things to the Web is a step that I haven't considered, nor have I seen it done before. Thanks for the great tip. Perhaps my wife will let me get the Chromebook Pixel now—you know, for research purposes!—Shawn Powers

Variant of Shell Timeout

Something I had to work out was a way to time out a command executed within a shell script (not the whole script itself). I had a case where an ssh to a box would connect, but the command never ran because of a problem on the box, and it just hung my whole script. So, I wrapped the ssh in this function, which lets me kill the ssh if it goes on too long:


function run_with_command () {
   CMD=$1        # the command to run, passed as a single string
   TIMEOUT=$2    # maximum run time in seconds
   COUNTER=0

   # Run the command in the background and remember its PID.
   ${CMD} &
   CMD_PID=$!

   # Poll once a second until the command exits or the timeout is reached.
   while ps $CMD_PID > /dev/null && [ $COUNTER -lt $TIMEOUT ]; do
      sleep 1
      COUNTER=$((COUNTER+1))
   done

   # Still running when the timeout hit? Kill it.
   if [ $COUNTER -eq $TIMEOUT ]; then
      kill $CMD_PID 2>/dev/null
   fi

   # Reap the background job; its exit status becomes the function's
   # return code (0 on success, 143 if it was killed).
   wait $CMD_PID 2>/dev/null
}

# TEST
run_with_command "sleep 20"  10     # this will timeout
run_with_command "sleep 10"  20     # this will not timeout

# If I want the result from the command, then I do this
result=$(run_with_command "ssh box1 hostname" 10 )

The wait makes sure the function's return code tells me whether the command ran successfully: if the command was killed, the function returns 143 (128 plus the SIGTERM signal number, 15); if it completed, I get 0.
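
For example (a small usage sketch), the return code can be checked right after the call:


run_with_command "ssh box1 hostname" 10
if [ $? -eq 143 ]; then
   echo "ssh timed out and was killed" >&2
fi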


Mark