bash histogram

I would like to generate a streamable histogram that runs in bash. Given an input stream of integers (from stdin or a file), I would like to transform each integer to that respective number of "#" up to the width of the terminal window; in other words, 5 would become "#####", and so on.

You can get the maximum number of columns in your current terminal using the following command,

twarnock@laptop: :) tput cols

The first thing we'll want to do is create a string of "####" that is exactly as long as the max number of columns. I.e.,

COLS=$(tput cols)
MAX_HIST=$(printf '#%.0s' $(seq 1 "$COLS"))

We can use the following syntax to print a substring of MAX_HIST to any given length (up to its maximum length).

twarnock@laptop: :) echo ${MAX_HIST:0:5}
#####
twarnock@laptop: :) echo ${MAX_HIST:0:2}
##
twarnock@laptop: :) echo ${MAX_HIST:0:15}
###############
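The same construction can be checked outside an interactive terminal by substituting a fixed width for `tput cols` (the width of 20 here is arbitrary):

```shell
# build the bar string with a fixed width instead of `tput cols`
COLS=20
MAX_HIST=$(printf '#%.0s' $(seq 1 $COLS))
echo "${MAX_HIST:0:5}"    # #####
```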

We can then put this into a simple shell script, in this case, as follows,

#! /bin/bash
COLS=$(tput cols)
MAX_HIST=$(printf '#%.0s' $(seq 1 "$COLS"))

while read datain; do
  if [ -n "$datain" ]; then
    echo -n "${MAX_HIST:0:$datain}"
    if [ "$datain" -gt "$COLS" ]; then
      printf '\r%s\n' "$datain"
    else
      printf '\n'
    fi
  fi
done < "${1:-/dev/stdin}"

This script will also print the numeric value on top of any bar that is larger than the maximum number of columns in the terminal window.

As is, the script will transform an input file into a crude histogram, but I've also used it as a visual ping monitor as follows (note the use of unbuffer),

twarnock@cosmos:~ :) ping $remote_host | unbuffer -p awk -F'[ =]' '{ print int($10) }' | unbuffer -p

Posted in bash, shell tips


I would like to remotely control my Linux desktop via an ssh connection (connected through my phone).

Fortunately, we can use xdotool.

I created a simple command-interpreter that maps keys to xdotool. I used standard video game controls (wasd) for large mouse movements (100px), with smaller movements available (ijkl 10px). It can toggle between mouse and keyboard, which allows you to somewhat easily open a browser and type URLs.

I use this to control my HD television from my phone, and so far it works great.

: ${DISPLAY:=":0"}
export DISPLAY

echo "xmouse! press q to quit, h for help"

function print_help() {
  echo "xmouse commands:
  h - print help

  Mouse Movements
  w - move 100 pixels up
  a - move 100 pixels left
  s - move 100 pixels down
  d - move 100 pixels right

  Mouse Buttons
  c - mouse click
  r - right mouse click
  u - mouse wheel Up
  p - mouse wheel Down

  Mouse Button dragging
  e - mouse down (start dragging)
  x - mouse up (end dragging)

  Mouse Movements small
  i - move 10 pixels up
  j - move 10 pixels left
  k - move 10 pixels down
  l - move 10 pixels right

  Keyboard (experimental)
  Press esc key to toggle between keyboard and mouse modes"
}

KEY_IN="Off"
while read -rsn1 input; do
  # toggle mouse and keyboard mode
  case "$input" in
  $'\e')
    if [ "$KEY_IN" = "On" ]; then
      KEY_IN="Off"
      echo "MOUSE mode"
    else
      KEY_IN="On"
      echo "KEYBOARD mode"
    fi
    continue ;;
  esac
  # keyboard mode
  if [ "$KEY_IN" = "On" ]; then
    case "$input" in
    $'\x7f') xdotool key BackSpace ;;
    $' ')  xdotool key space ;;
    $'')   xdotool key Return ;;
    $':')  xdotool key colon ;;
    $';')  xdotool key semicolon ;;
    $',')  xdotool key comma ;;
    $'.')  xdotool key period ;;
    $'-')  xdotool key minus ;;
    $'+')  xdotool key plus ;;
    $'!')  xdotool key exclam ;;
    $'"')  xdotool key quotedbl ;;
    $'#')  xdotool key numbersign ;;
    $'$')  xdotool key dollar ;;
    $'%')  xdotool key percent ;;
    $'&')  xdotool key ampersand ;;
    $'\'') xdotool key apostrophe ;;
    $'(')  xdotool key parenleft ;;
    $')')  xdotool key parenright ;;
    $'*')  xdotool key asterisk ;;
    $'/')  xdotool key slash ;;
    $'<')  xdotool key less ;;
    $'=')  xdotool key equal ;;
    $'>')  xdotool key greater ;;
    $'?')  xdotool key question ;;
    $'@')  xdotool key at ;;
    $'[')  xdotool key bracketleft ;;
    $'\\') xdotool key backslash ;;
    $']')  xdotool key bracketright ;;
    $'^')  xdotool key asciicircum ;;
    $'_')  xdotool key underscore ;;
    $'`')  xdotool key grave ;;
    $'{')  xdotool key braceleft ;;
    $'|')  xdotool key bar ;;
    $'}')  xdotool key braceright ;;
    *)     xdotool key "$input" ;;
    esac
    continue
  fi
  # mouse mode
  case "$input" in
  q) break ;;
  h) print_help ;;
  a) xdotool mousemove_relative -- -100 0 ;;
  s) xdotool mousemove_relative 0 100 ;;
  d) xdotool mousemove_relative 100 0 ;;
  w) xdotool mousemove_relative -- 0 -100 ;;
  c) xdotool click 1 ;;
  r) xdotool click 3 ;;
  u) xdotool click 4 ;;
  p) xdotool click 5 ;;
  e) xdotool mousedown 1 ;;
  x) xdotool mouseup 1 ;;
  j) xdotool mousemove_relative -- -10 0 ;;
  k) xdotool mousemove_relative 0 10 ;;
  l) xdotool mousemove_relative 10 0 ;;
  i) xdotool mousemove_relative -- 0 -10 ;;
  *) echo "$input - not defined in mouse map" ;;
  esac
done
Posted in bash

VLC remote control

Recently I was using VLC to listen to music, as I often do, and I wanted to pause without getting out of bed.

Lazy? Yes!

I learned that VLC includes a slew of remote control interfaces, including a built-in web interface as well as a raw socket interface.

In VLC Advanced Preferences, go to "Interface", and then "Main interfaces" for a list of options. I selected "Remote control" which is now known as "oldrc", and I configured a simple file based socket "vlc.sock" in my home directory as an experiment.

You can use netcat to send commands, for example,

twarnock@laptop:~ :) nc -U ~/vlc.sock <<< "pause"

Best of all VLC cleans up after itself and removes the socket file when it closes. The "remote control" interface is pretty intuitive and comes with a "help" command. I wrapped all of this in a shell function (in a .bashrc).

function vlcrc() {
 local SOCK=~/vlc.sock
 if [ $# -gt 0 ]; then
  local CMD="$*"
  if [ -S "$SOCK" ]; then
   nc -U "$SOCK" <<< "$CMD"
  else
   (>&2 echo "I can't find VLC socket $SOCK")
  fi
 fi
}

I like this approach because I can now use "vlc command" in a scripted environment. I can build playlists, control the volume, adjust the playback speed, pretty much anything VLC lets me do. I could even use a crontab and make a scripted alarm clock!
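Since the socket accepts plain text commands, it also works from cron. A hypothetical crontab entry (the schedule, paths, and "play" command here are just examples) could act as an alarm clock:

```
# hypothetical: resume VLC playback at 7:00 on weekdays
# (assumes the oldrc socket lives at /home/twarnock/vlc.sock)
0 7 * * 1-5 echo play | /usr/bin/nc -U /home/twarnock/vlc.sock
```

Note that cron runs commands through sh, so the bash-only `<<<` here-string is replaced with a plain pipe.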

And of course I can "pause" my music from my phone while laying in bed. Granted, there's apps for more user friendly VLC smartphone remotes, but I like the granular control provided by a command line.

Posted in shell tips

datsize, simple command line row and column count

Lately I've been working with lots of data files with fixed rows and columns, and have been finding myself doing the following a lot:

Getting the row count of a file,

twarnock@laptop:/var/data/ctm :) wc -l lda_out/final.gamma
    3183 lda_out/final.gamma
twarnock@laptop:/var/data/ctm :) wc -l lda_out/final.beta
     200 lda_out/final.beta

And getting the column count of the same files,

twarnock@laptop:/var/data/ctm :) head -1 lda_out/final.gamma | awk '{ print NF }'
twarnock@laptop:/var/data/ctm :) head -1 lda_out/final.beta | awk '{ print NF }'

I would do this for dozens of files and eventually decided to put this together in a simple shell function,

function datsize {
    if [ -e "$1" ]; then
        rows=$(wc -l < "$1")
        cols=$(head -1 "$1" | awk '{ print NF }')
        echo "$rows X $cols $1"
    else
        (>&2 echo "datsize: cannot find $1")
        return 1
    fi
}
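The function can be checked end to end on a generated file; this sketch repeats the function body so the snippet stands alone (the /tmp path is an arbitrary example):

```shell
datsize() {
    if [ -e "$1" ]; then
        rows=$(wc -l < "$1")
        cols=$(head -1 "$1" | awk '{ print NF }')
        echo "$rows X $cols $1"
    else
        return 1
    fi
}

# two rows, three columns
printf '1 2 3\n4 5 6\n' > /tmp/datsize_demo.txt
datsize /tmp/datsize_demo.txt    # 2 X 3 /tmp/datsize_demo.txt
```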

Simple, and so much nicer,

twarnock@laptop:/var/data/ctm :) datsize lda_out/final.gamma
    3183 X 200 lda_out/final.gamma
twarnock@laptop:/var/data/ctm :) datsize lda_out/final.beta
     200 X 5568 lda_out/final.beta
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-theta.dat
    3183 X 200 ctr_out/final-theta.dat
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-U.dat
    2011 X 200 ctr_out/final-U.dat
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-V.dat
    3183 X 200 ctr_out/final-V.dat
Posted in bash, shell tips

Getting the most out of your ssh config

I typically find myself with voluminous bashrc files filled with aliases and functions for connecting to specific hosts via ssh. I would like an easier way to manage the various ssh hosts, ports, and keys.

I typically maintain an ssh-agent across multiple hosts, as well as various tunnels (including reverse tunnels and chained tunnels), but I would like to simplify my normal ssh commands using an ssh config.

First, always remember to RTFM,

man ssh

This is an excellent starting point, the man page contains plenty of information on all the ins-and-outs of an ssh config.

To get started, simply create a plaintext file "config" in your .ssh/ directory.

Setting Defaults

$HOME/.ssh/config will be used by your ssh client and is able to set per-host defaults for username, port, identity-key, etc

For example,

# $HOME/.ssh/config
Host dev
    Port 22000
    User twarnock
    ForwardAgent yes

On this particular host, I can now run

$ ssh dev

Which is much easier than "ssh -A -p 22000 twarnock@dev"

You can also use wildcards, e.g.,

Host *.example.com
    User root

which I find very useful for cases where usernames are different than my normal username.


Additionally, you can add tunneling information in your .ssh/config, e.g.,

Host anatta
    IdentityFile ~/.ssh/anattatechnologies.key
    LocalForward 8080 localhost:80
    User twarnock

Even if you choose to use shell functions to manage tunnels, the use of an ssh config can help simplify things greatly.
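For chained tunnels, a jump-host entry can be sketched directly in the config as well; the hostnames below are placeholders, and ProxyCommand with `ssh -W` assumes OpenSSH 5.4 or later:

```
# reach "inner" by hopping through a bastion host
Host inner
    HostName inner.example.com
    User twarnock
    ProxyCommand ssh -W %h:%p bastion.example.com
```

With this in place, a plain "ssh inner" transparently chains through the bastion.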

Posted in shell tips, ssh

git, obliterate specific commits

I would like to obliterate a series of git commits between two points, we'll call these the START and END commits.

First, determine the SHA1 for the two commits, we'll be forcefully deleting everything in between and preserving the END exactly as it is.

Detach Head

Detach head and move to END commit,

git checkout SHA1-for-END


Move HEAD to START, but leave the index and working tree as END

git reset --soft SHA1-for-START

Redo END commit

Redo the END commit re-using the commit message, but on top of START

git commit -C SHA1-for-END


Re-apply everything from the END

git rebase --onto HEAD SHA1-for-END master

Force Push

git push -f
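The whole sequence can be rehearsed safely in a throwaway repository before touching anything real. This sketch (all names arbitrary; assumes git is installed) creates four commits, then uses `rev-parse` to stand in for the START and END SHA1s, obliterating the commit between them:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
BR=$(git symbolic-ref --short HEAD)   # master or main, depending on git defaults

for i in 1 2 3 4; do
  echo "$i" > "file$i"
  git add "file$i"
  git commit -qm "commit $i"
done

START=$(git rev-parse "$BR"~3)   # commit 1
END=$(git rev-parse "$BR"~1)     # commit 3

git checkout -q "$END"              # detach head at END
git reset -q --soft "$START"        # move HEAD to START, keep END's tree
git commit -qC "$END"               # redo END on top of START
git rebase --onto HEAD "$END" "$BR" # re-apply everything after END

git log --oneline   # shows commit 4, commit 3, commit 1 (commit 2 is gone)
```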
Posted in git, shell tips

vim: Visual mode

I have been using vim for years and am consistently surprised at the amazing things it can do. Vim has been around longer than I have been writing code, and its predecessor (Vi) is as old as I am.

Somehow through the years this editor has gained and continues to gain popularity. Originally, my interest in Vi was out of necessity, it was often the only editor available on older Unix systems. Yet somehow Vim nowadays rivals even the most advanced IDEs.

One of the more interesting aspects of Vim is the Visual mode. I had ignored this feature for years relying on the normal command mode and insert mode.

Visual Mode

Simply press v and you'll be in visual mode able to select text.

Use V to select an entire line of text, use the motion keys to move up or down to select lines of text as needed.

And most interestingly, use Ctrl-v for visual block mode. This is the most flexible mode of selection and allows you to select columns rather than entire lines, as shown below.
In this case I have used visual block mode to select the same variable in 5 lines of code.

In all of these cases, you can use o and O while selecting to change the position of the cursor in the select box. For example, if you are selecting several lines downwards and realize you wanted to grab the line above the selection box as well, just hit o and it will take you to the top of the selection.

In practice this is far easier and more powerful than normal mouse highlighting, although vim also supports mouse highlighting exactly as you would intuitively expect (where mouse highlighting enables visual mode).

What to do with a visual selection

All sorts of things! You could press ~ to change the case of the selection, you can press > to indent the selection (< to remove an indent), you can press y to yank (copy) the selection, d to delete the selection.

If you're in visual block mode and if you've selected multiple lines as in the example above, then you can edit ALL of the lines simultaneously. Use i to start inserting at the cursor, and as soon as you leave insert mode the changes will appear on each of the lines that was in the visual block.

Similarly, you can use the familiar a, A, and I to add text to every line of the visual block. You can use c to change each line of the visual block, r to replace the selection. This is an incredibly fast and easy way to add or replace text on multiple lines.

Additionally, you can use p to put (paste) over a visual selection. Once you paste over a visual selection, that selection will now be your default register (see :reg), which is extremely handy when you need to quickly swap two selections of text.

You can even use the visual block to limit the range of an otherwise global find and replace, e.g.,

:%s/\%Vfoo/bar/g

adding the \%V to the search limits the find and replace to the visual selection.

More information is available in vim's help file,

:h visual-operators
Posted in shell tips, vim

vim: tags and tabs

Previously, I discussed using vim and screen with automagic titles.

The great part about working with vim and screen is that I can work from anywhere with minimal setup, and when working remotely I can pick up the cursor exactly where I left it -- I never have to worry about a remote terminal disconnecting.

I tend to avoid vim plugins as I like having a minimal setup on different hosts; I occasionally make an exception for NERDTree, but I find the default netrw easily workable. I keep my .vimrc and other dotfiles in github so I'm always a git clone away from getting my environment setup (in Linux, cygwin, OSX, etc).

With this in mind, I would like an easier way to navigate files in a project and if possible avoid non-standard vim plugins.

One of the most effective approaches I have found using the default (no plugins) vim is the combination of tags and tabs.

Tag files are generated by ctags (typically from exuberant ctags), which then vim can use as a keyword index into your source tree for any given project.

Generating Tags

I prefer to keep a single "tags" file at the root of each project directory, typically as follows,

$ ctags -R .

This will create a "tags" file in the current directory. For larger codebases these can get surprisingly large, but they are usually fast to generate. To keep these files up to date you may consider using git hooks to regenerate the tags file automatically (e.g., after a checkout or merge).

Telling vim about Tags

In vim you can load as many tag files as you like, the command is,

:set tags+=tags

where "tags" is the filename of the tags file.

The problem is, you won't want to type this every time you open vim, so add the following to your .vimrc,

set tags+=tags;$HOME

By adding the ";$HOME" to the set tags command, this will simply look for a "tags" file in the current working directory, and if it doesn't find one it will look in the parent directory and keep looking for a tags file all the way back to "$HOME". So if you're 10 directories deep within $HOME then it would search up to 10 directories looking for a "tags" file. You can replace $HOME with any base directory, in my case I keep all project source code in my $HOME directory.

Using Tags

Typing :tag text will search for files with the exact tag name, or you can use :tag /text to search for any tag that matches "text".

By default, vim opens the new files in a tag stack, you can use Ctrl-T to go back to the previous file -- alternatively you can navigate the files through the normal buffer commands, e.g.,

list open buffers
:ls

switch to a different buffer (from list)
:b #

unload (delete) a buffer
:bd #

You can also put your cursor on the word you want to search and press Ctrl-] to go to the file that matches the selected text, then use Ctrl-T to jump back. If you want to see all the files that match a tag, you can use :tselect text

I find navigating a tag stack and maintaining multiple buffers a bit cumbersome; this is where tabs can really help out.

Using Tabs

Once you're in vim you can open a new tab with :tabedit {file} or :tabe {file} which will open the optionally specified file in a new tab (or open a new blank tab if no file is specified). Usually I use,

:tabe .

to open a new tab with the file browser in the current working directory.

With multiple tabs open you can use gt and gT to toggle thru the open tabs, or {i}gt to go to the i-th tab (starting at 1). You can re-order tabs using :tabm # to move a tab to a new position (starting at 0).

Most importantly, tabs work great with mouse enabled, simply click on a tab as you would intuitively expect, drag the tabs to re-order, or click the "X" in the upper-right to close.

Tabs meet Tags

I find the default tags behavior slightly cumbersome as I end up navigating the tag stack through multiple buffers open in one window.

When searching tags I want the file to always open in a new tab, or at least to open in a vertical split.

I have added the following to my .vimrc

map <C-\> :tab split<CR>:exec("tag ".expand("<cword>"))<CR>
map <C-]> :vsp <CR>:exec("tag ".expand("<cword>"))<CR>

This will effectively remap Ctrl-] to open the matching file in a vertical split. I can then close the vertical split or even move it to a new tab using Ctrl-w T.

However, mostly I use Ctrl-\ to open the matching file in a new tab.

Between tab and tag navigation I find this a very powerful way to manage even very large projects with default vim (rather than rely on an IDE).

Careful with Splits

One interesting thing about splits (vertical and horizontal, that is, :sp and :vsp) is that they will exist entirely within a tab window. In other words, a split occurs within only one tab.

You can close a split using Ctrl-w q, and if you need to navigate through multiple splits you can either use the mouse or Ctrl-w and then an arrow key (or h,j,k,l if you prefer).

In any given split, you can always move that file to a new tab using Ctrl-w T

Posted in shell tips, vim

vim and screen, automagic titles

Previously, I discussed using multiuser screen so that I could concurrently access a shared screen session across multiple remote hosts (from work, from home, from my phone, etc).

I would like to augment screen such that the titles would always tell me what directory I'm currently in, as well as what program is running (if any). Additionally, if I'm editing a file in vim I would like to see the filename in the screen window title. If I have multiple vim buffers open (say, in tabs) I would like the screen window title set to whichever filename I'm currently editing.

GNU screen provides a shelltitle attribute that can get us partly there, you could add something like this to your screenrc,

# automagic window title
shelltitle ") |bash:"

In this example, screen will automatically fill in any currently running shell command as the window title. Importantly, the ") " must be the final characters on your command prompt. For most people this is the '$' character; mine is still set to the smiley :) cursor discussed previously. Everything after the '|' character will be the default screen title.

Unfortunately, while this approach does provide us a dynamic window name for running programs it does not show us the current directory and does nothing for vim (other than just to say "vim"). This approach, which may work for some, turned out to be a dead end. I had been searching for ways to get screen to update the window titles to the current directory and had almost given up.

Recently, I discovered this article, which provides a working (albeit complicated) approach.

Essentially, in the newer versions of bash we can use the trap command with the DEBUG condition, which will run a given command before every single shell command!

Additionally, we can set a screen window title on the command prompt by printing an escape sequence then the new title. So, we can run a bash function in the DEBUG trap that sets the title.
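Before wiring this into screen, the trap mechanics can be sketched in isolation (bash-specific; the variable names here are arbitrary):

```shell
# record what the DEBUG trap sees before each command
log=""
trap 'log="$log<$BASH_COMMAND>"' DEBUG
pwd > /dev/null
trap - DEBUG
echo "$log"
```

Each command that runs appends an entry to log, which is the same hook the title-setting function below relies on.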

Sounds easy? Well, not really. The DEBUG trap is a bit heavy handed and using it to print escape characters can have odd effects involving BASH_COMMAND and PROMPT_COMMAND. Here is a working solution I've been using,

# turn off debug trap, turn on later if we're in screen
trap "" DEBUG

... rest of my .bashrc

# Show the current directory AND running command in the screen window title
# inspired from
if [ "$TERM" = "screen" ]; then
    export PROMPT_COMMAND='true'
    set_screen_window() {
      HPWD=`basename "$PWD"`
      if [ "$HPWD" = "$USER" ]; then HPWD='~'; fi
      if [ ${#HPWD} -ge 10 ]; then HPWD='..'${HPWD:${#HPWD}-8:${#HPWD}}; fi
      case "$BASH_COMMAND" in
        true)
            printf '\ek%s\e\\' "$HPWD:"
            ;;
        *)
            printf '\ek%s\e\\' "$HPWD:${BASH_COMMAND:0:20}"
            ;;
      esac
    }
    trap set_screen_window DEBUG
fi

In this case, I set PROMPT_COMMAND to true and make sure that my PS1 environment variable is not relying on PROMPT_COMMAND. The reason is because the BASH_COMMAND environment variable will be set to whatever the parent shell is currently running, and the DEBUG trap will fire every time the BASH_COMMAND changes (which is a lot, especially if you're executing a shell script).

Fortunately, anytime a command finishes, PROMPT_COMMAND will run, which in this case executes true, and I catch that in the case statement and set the title to the current directory. This effectively sets the title every time bash prints a command prompt.

If you execute a long running command, that screens window title will be set to that command, and as soon as the command finishes the title will change back.

The only remaining problem is vim. With the above approach, it almost works with vim. If you were in a directory named "foo" and ran "vim spam.txt", then the screen window title would be set to "foo:vim spam.txt". So far so good, but when you open additional files in vim, the title will still be say "foo:vim spam.txt".


The final step is to update your vimrc to set the titlestring, and with some tweaking vim will send the escape characters that screen recognizes to change the window title. Lastly, add an autocmd for all relevant events (opening a new file, switching tabs, etc), and you'll have a working solution,

" screen title
if &term == "screen"
  let &titlestring = "vim(" . expand("%:t") . ")"
  set t_ts=^[k
  set t_fs=^[\
  set title
  autocmd TabEnter,WinEnter,BufReadPost,FileReadPost,BufNewFile * let &titlestring = 'vim(' . expand("%:t") . ')'
endif

* to type ^[, which is an escape character, you need to enter CTRL+V <Esc>

With this approach, while vim is running it will effectively take over the job of updating the screen window title. As we switch tabs or open new files or change focus in a split screen, vim will update the screen window title to "vim(filename)" for the file that's being edited.

All of these changes (and more) can be found in my dotfiles in github

Posted in bash, shell tips, vim

node.js redirect with query string

Previously, I discussed javascript appending to query string, where we serialized an associative array to a query string. I would now like to leverage this technique within node.js as a redirect service.

Specifically, I am using express to make a web app in node.js and the app includes a redirect service, e.g.,

var express = require('express');

var app = express();
var redirectVars = {'foo':'spam and eggs', 'tracker':42 };

// redirect and append redirectVars
app.get('/redirect', function(request, result, next) {
  if(request.query.url) {
    var urle = request.query.url;
    var url = decodeURIComponent(urle);
    var firstSeperator = (url.indexOf('?')==-1 ? '?' : '&');

    var queryStringParts = new Array();
    for(var key in redirectVars) {
      queryStringParts.push(key + '=' + encodeURIComponent(redirectVars[key]));
    }
    var queryString = queryStringParts.join('&');

    result.redirect(url + firstSeperator + queryString);
  } else {
    result.send(400, "Bad request");
  }
});

Usage of this service is as simple as,

/redirect?url=<encoded destination URL>

Any external app could use this service, which will append server controlled query string variables to the redirected URL. This is useful for a redirect service that needs to dynamically construct query string variables, such as cross-domain authentication and authorization.
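The query-string construction at the heart of the service can be run stand-alone in node.js (this is just the serialization step lifted out of the handler above):

```javascript
// serialize the server-controlled variables into a query string
var redirectVars = {'foo':'spam and eggs', 'tracker':42 };

var queryStringParts = [];
for (var key in redirectVars) {
  queryStringParts.push(key + '=' + encodeURIComponent(redirectVars[key]));
}
var queryString = queryStringParts.join('&');

console.log(queryString);  // foo=spam%20and%20eggs&tracker=42
```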

Importantly, in order to preserve an existing query string in the new-location, simply encode the entire URL string before sending it into the service, e.g.,

var new_location = encodeURIComponent("<destination URL, including any existing query string>");
window.location = "<redirect service>/redirect?url=" + new_location;

Using the above node.js example, this would have the effect of redirecting the user to the destination URL with the redirectVars appended to its query string.

Posted in javascript

javascript appending to query string

I would like to append an associative array to a URL's query string. For whatever reason, there is no native javascript method to accomplish this task. This needs to be done manually or using a common web framework such as jQuery.

The first step is to serialize the associative array into a query string,

native javascript

With plain-old-javascript, you can do something like this,

var queryVars = {'foo':'bar', 'spam':'eggs', 'tracker':'yes' };

var queryStringParts = new Array();
for(var key in queryVars) {
  queryStringParts.push(key + '=' + queryVars[key]);
}
var queryString = queryStringParts.join('&');

The value of queryString will be

foo=bar&spam=eggs&tracker=yes

jQuery

Since version 1.2 jQuery has supported the jQuery.param() function to serialize any array or object into a URL query string. The above example becomes,

var queryVars = {'foo':'bar', 'spam':'eggs', 'tracker':'yes' };

var queryString = jQuery.param(queryVars);


node.js

My favorite approach is the node.js querystring.stringify() function; I like this as it is easiest to remember,

var querystring = require('querystring');

var queryVars = {'foo':'bar', 'spam':'eggs', 'tracker':'yes' };

var queryString = querystring.stringify(queryVars);

Appending ? or &

In most cases you can't assume an input url does not already contain a query string; in fact, that would be a rather bad assumption. To get this to work you'll want to append your new query string to any existing query string using the & character, otherwise use the ? character. Here is an example,

function appendQueryString(url, queryVars) {
    var firstSeperator = (url.indexOf('?')==-1 ? '?' : '&');
    var queryStringParts = new Array();
    for(var key in queryVars) {
        queryStringParts.push(key + '=' + queryVars[key]);
    }
    var queryString = queryStringParts.join('&');
    return url + firstSeperator + queryString;
}

var url = "something.html?q=test";
var queryVars = {'foo':'bar', 'spam':'eggs', 'tracker':'yes' };

var new_url = appendQueryString(url, queryVars);

The value of new_url will be

something.html?q=test&foo=bar&spam=eggs&tracker=yes

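As a quick self-contained check, the function can be exercised under node.js (it is reproduced here so the snippet runs on its own):

```javascript
// append a serialized object to a URL, using & or ? as appropriate
function appendQueryString(url, queryVars) {
    var firstSeperator = (url.indexOf('?')==-1 ? '?' : '&');
    var queryStringParts = [];
    for(var key in queryVars) {
        queryStringParts.push(key + '=' + queryVars[key]);
    }
    return url + firstSeperator + queryStringParts.join('&');
}

console.log(appendQueryString("something.html?q=test", {'foo':'bar'}));
// something.html?q=test&foo=bar
console.log(appendQueryString("something.html", {'foo':'bar'}));
// something.html?foo=bar
```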
Posted in javascript

multiuser screen

Previously, I discussed using GNU screen as a window manager.

I would like to access my screen session concurrently from multiple hosts (say, at work, at home, and even remotely on my phone). I would also like to define default screens specific to one host.

Default screens can be configured easily in the .screenrc in your home directory. To keep things simple I use a shared screenrc file, available in this github repo, which is shared across multiple environments that often have different uses (between home and work computers). Host-specific screenrc commands are defined in a special .screenrc_local, which is loaded from the main .screenrc as follows,

source .screenrc_local

In order to load default screens each with a specific initial command, I use the "screen" and "stuff" commands in my .screenrc_local, for example,

## default screens
screen -t bash 0

screen -t cloud 1
stuff "cd cloud/cloudsource/trunk/roles/; pushd ../../branches/staging/roles; dirs -v^M"

screen -t ecr/ 2
stuff "cd /mnt/sartre-data/ecr/; ll^M"

## go back to the first screen
select 0

With this configuration any new session will have those initial screens.

Whatever is in the "stuff" command will be typed automatically into the screen session. Add "^M" to send a hard return to execute the "stuff" command.

To enable multiuser mode in new screen sessions, add the following in your .screenrc

# enable multiuser screen
multiuser on

To enable multiuser mode in an existing screen session, press Ctrl-A : and enter "multiuser on", that is,

^A :multiuser on

A multiuser screen session can be joined by multiple connections concurrently. By default, only your user account can access the shared screen session. To join a multiuser session, use the following command from the shell,

$ screen -x sessionname

If you don't enter a sessionname, the most recent session will be joined. If you use "-xR" a new session will be created if a multiuser session did not exist.

With this approach I can seamlessly switch to another computer or device, even in mid command.

Best of all, multiple connections can be active at the same time -- so for example you can have the same screen session open at home and in the office, as well as on your phone (typing commands on your phone knowing they're also showing on your home and work computer).

If you would like to allow other users to join your screen session, you would use the following commands, either in .screenrc or interactively using "Ctrl-A :"

acladd username

The other user can access this shared session using the following command,

$ screen -x owner/sessionname

Sharing a screen session with multiple users can get complicated; and because you'll need to setuid root on the screen binary, it's not a good security practice. However, within a trusted developer network on a shared host it's a very good way to collaborate. If you do wish to allow multiple users to share a single screen session, you'll need to run the following,

$ sudo chmod u+s `which screen`
$ sudo chmod 755 /var/run/screen

If you run into the following, "ERROR: Cannot open your terminal '/dev/pts/1' - please check." or something similar, this is likely because the current user did not login directly but instead performed a "su - username" and does not have access to the pts. An interesting hack I found here resolves this using the "script" command (which creates a new pts as the current user), that is,

script /dev/null
screen -x owner/sessionname

By default, all users will have full access to the shared session; able to type commands as the session owner. You can modify access by using "aclchg", or remove access with "acldel".

The "aclchg" command can apply to an entire session or to a specific window, e.g.,

## read only for entire session
aclchg username -w "#"

## full access to screen 0 only
aclchg username +rwx 0

As a simple shortcut, you can use aclchg to add a new user with specific (such as read-only) access.

Posted in bash, shell tips

scripting Photoshop for stop motion

I would like a simple and quick way to save a copy of an image in Photoshop, with an auto-incrementing filename. Ideally, a single button to capture a frame in a stop motion animation. In other words, I would like to save a copy of the working image as a JPEG without any interactive prompts and the filename will automatically increment a count.

For example, if I'm working with a file "test.psd", I want a single action that will save a copy "test_0001.jpg", and subsequent calls will save "test_0002.jpg", "test_0003.jpg", and so on.

By default, Photoshop will overwrite existing files, and it would be quite tedious to manually "Save As" for hundreds or thousands of images. Fortunately, Photoshop offers a scripting interface to call user defined scripts. Custom scripts can even be loaded into Photoshop and executed as an Action.

The following snippet can be saved as [Photoshop Directory]/Presets/Scripts/saveFrame.jsx, and after restarting Photoshop you should see "saveFrame" under File -> Scripts.


/**
 * Scripted "save as" with incrementing filename
 *   e.g., test_0001.jpg, test_0002.jpg, ...
 */
function main() {
    if (!documents.length) return;
    var cnt = 1;
    try {
        var Name = decodeURI(activeDocument.name).replace(/\.[^\.]+$/, '');
        var Path = decodeURI(activeDocument.path);
        var saveFrame = Path + "/" + Name + "_" + zeroPad(cnt,4) + ".jpg";
        // find the next available filename
        while ( File(saveFrame).exists ) {
            cnt++;
            saveFrame = Path + "/" + Name + "_" + zeroPad(cnt,4) + ".jpg";
        }
        // save as, change the default JPEG quality here as needed
        SaveJPEG(File(saveFrame), 9);
    } catch(e) {
        alert(e + "\r@ Line " + e.line);
    }
}

function SaveJPEG(saveFile, jpegQuality) {
    var doc = activeDocument;
    if (doc.bitsPerChannel != BitsPerChannelType.EIGHT)
        doc.bitsPerChannel = BitsPerChannelType.EIGHT;
    var jpgSaveOptions = new JPEGSaveOptions();
    jpgSaveOptions.embedColorProfile = true;
    jpgSaveOptions.formatOptions = FormatOptions.STANDARDBASELINE;
    jpgSaveOptions.matte = MatteType.NONE;
    jpgSaveOptions.quality = jpegQuality;
    activeDocument.saveAs(saveFile, jpgSaveOptions, true, Extension.LOWERCASE);
}

function zeroPad(n, s) {
    n = n.toString();
    while (n.length < s)
        n = '0' + n;
    return n;
}

main();
Using Photoshop scripts you can automate any task and even create animation effects. In CS6 you can render a series of images as a video; alternatively, you can create the image frames in Photoshop and use ffmpeg to render the video.

If you want to use ffmpeg to render a series of images, you could use the following command,

$ ffmpeg -r 30 -f image2 -i test_%04d.jpg -vb 1M -r 30 test.webm
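If you want to drive the render step from a script, the same command can be assembled and launched from Python with subprocess (a sketch; the helper name is mine, and it assumes ffmpeg is on your PATH):

```python
import subprocess

def ffmpeg_cmd(pattern, out, fps=30, bitrate='1M'):
    # mirrors: ffmpeg -r 30 -f image2 -i test_%04d.jpg -vb 1M -r 30 test.webm
    return ['ffmpeg', '-r', str(fps), '-f', 'image2', '-i', pattern,
            '-vb', bitrate, '-r', str(fps), out]

# e.g.,'test_%04d.jpg', 'test.webm'))
```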

Here is a simple (90 frame loop) example animating a series of scripted lighting effects,

The above video is embedded in this page using the following html,

<video id="test_test" poster="test_0001.jpg" preload="auto" loop autoplay>
    <source src="test.mp4" type="video/mp4" />
    <source src="test.webm" type="video/webm" />
    <source src="test.ogv" type="video/ogg" />
    <object width="600" height="360" type="application/x-shockwave-flash" data="test.swf">
        <param name="movie" value="test.swf" />
        <img src="test_0001.jpg" width="600" height="360" alt="test" title="No video playback" />
    </object>
</video>

Posted in html, javascript, shell tips

locking and concurrency in python, part 2

Previously, I created a "MultiLock" class for managing locks and lockgroups across a shared file system. Now I want to create a simple command-line utility that uses this functionality.

To start, we can create a simple runone() function that leverages MultiLock, e.g.,

def _runone(func, lockname, lockgroup, basedir, *args, **kwargs):
    ''' run one, AND ONLY ONE, instance (respect locking)

        >>> _runone(print, 'lock', 'locks', '.', 'hello world')
    '''
    lock = MultiLock(lockname, lockgroup, basedir)
    if lock.acquire():
        func(*args, **kwargs)
        lock.release()

Any python function (with its *args and **kwargs) will be called if (and-only-if) the named lock was acquired. At a minimum, this guarantees that one (and only one) instance of the function can be called at a given time.

To make this slightly more magic, we can wrap this as a decorator function -- a decorator that accepts arguments,

def runone(lockname='lock', lockgroup='.locks', basedir='.'):
    ''' decorator with closure
        returns a function that will run one, and only one, instance per lockgroup
    '''
    def wrapper(fn):
        def new_fn(*args, **kwargs):
            return _runone(fn, lockname, lockgroup, basedir, *args, **kwargs)
        return new_fn
    return wrapper

The closure is used so that we can pass arguments to the decorator function, e.g.,

@runone('lock', 'lockgroup', '/shared/path')
def spam():
    # do work, only if we acquire /shared/path/lockgroup/lock
    pass

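The pattern is easier to see stripped of the locking details; here is a toy version where an in-memory set stands in for MultiLock (all names here are illustrative only):

```python
_held = set()   # toy stand-in for the on-disk lockgroup

def runonce(lockname):
    ''' decorator factory: the closure captures lockname for new_fn '''
    def wrapper(fn):
        def new_fn(*args, **kwargs):
            if lockname in _held:      # lock already taken: do nothing
                return None
            _held.add(lockname)        # "acquire" the lock
            return fn(*args, **kwargs)
        return new_fn
    return wrapper

@runonce('spam-lock')
def spam():
    return 'spam ran'
```

The first call to spam() runs the function; every later call finds the lock held and returns None.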
Putting this all together, we can create a command-line utility that will execute any command-line program if (and only if) it acquires a named lock in the lockgroup. With such a utility we can add concurrency and fault-tolerance to any shell script that can be executed across all nodes in a cluster. This code is also available in this github repo.

import time, sys, subprocess, optparse, logging
from multilock import MultiLock

def runone(lockname='lock', lockgroup='.locks', basedir='.'):
    ''' decorator with closure
        returns a function that will run one, and only one, instance per lockgroup
    '''
    def wrapper(fn):
        def new_fn(*args, **kwargs):
            return _runone(fn, lockname, lockgroup, basedir, *args, **kwargs)
        return new_fn
    return wrapper

def _runone(func, lockname, lockgroup, basedir, *args, **kwargs):
    ''' run one, AND ONLY ONE, instance (respect locking)

        >>> _runone(print, 'lock', 'locks', '.', 'hello world')
    '''
    lock = MultiLock(lockname, lockgroup, basedir)
    if lock.acquire():
        func(*args, **kwargs)
        lock.release()

if __name__ == '__main__':

    p = optparse.OptionParser('usage: %prog [options] cmd [args]')
    p.add_option('--lockname', '-l', dest="lockname", default='lock', help="the lock name, should be unique for this instance")
    p.add_option('--lockgroup', '-g', dest="lockgroup", default='.locks', help="the lockgroup, a collection of independent locks")
    p.add_option('--basedir', '-d', dest="basedir", default='.', help="the base directory where the lock files should be written")
    p.add_option('--wait', '-w', dest="wait", default=None, help="optional, wait (up till the number of seconds specified) for all locks to complete in the lockgroup")
    options, args = p.parse_args()

    if options.wait:
        lock = MultiLock(options.lockname, options.lockgroup, options.basedir)
        lock.wait(options.wait)
        sys.exit(0)

    @runone(options.lockname, options.lockgroup, options.basedir)
    def _main():

    _main()

Posted in python, shell tips

locking and concurrency in python, part 1

I would like to do file-locking concurrency control in python. Additionally, I would like to provide a "run-once-and-only-once" functionality on a shared cluster; in other words, I have multiple batch jobs to run over a shared compute cluster and I want a simple way to provide fault tolerance for parallel jobs.

The batch jobs should leverage a locking mechanism with the following method signatures,

class Lock:

    def acquire(self)

    def release(self)

    def wait(self, timeout)

Using a shared filesystem, such as NFS, we can use file or directory locking, provided we can guarantee atomicity for the creation of the lock. I.e., only one host in a cluster can acquire a named lock. There are different ways to guarantee atomicity for file operations, depending on your filesystem.

One approach is os.mkdir(), which is atomic on POSIX systems. Alternatively, you can use the following,

>>> fd ='foo.lock', os.O_CREAT|os.O_EXCL|os.O_RDWR)

This is atomic on most filesystems. Lastly, os.rename() is atomic on POSIX and most network file systems. In other words, if multiple hosts attempt the same os.rename operation on a shared file, only one will succeed and the others will raise an OSError.

In order to maximize fault-tolerance, we can create a lockfile containing the hostname and process-id, rename the file, and then read the renamed file to verify the hostname and process-id. This covers most network shared filesystems (which may or may not be POSIX compliant). The following python snippet performs this multi-lock,

import os, socket, logging

class MultiLock:
    def __init__(self, lockname='lock'):
        self.lockname = lockname
        self.lockfile = os.path.join(lockname, lockname + '.lock')
        self.lockedfile = os.path.join(lockname, lockname + '.locked')
        self.hostname = socket.gethostname() = os.getpid()
        self.fd = None

    def acquire(self):
        if not self.verify():
            logging.debug('you do not have the lock %s' %(self.lockedfile))
            try:
                logging.debug('attempt to create lock %s' %(self.lockfile))
                self.fd =, os.O_CREAT|os.O_EXCL|os.O_RDWR)
                os.write(self.fd, (self.hostname+' '+str(
                os.close(self.fd)
                logging.debug('attempt multilock %s' %(self.lockedfile))
                os.rename(self.lockfile, self.lockedfile)
                return self.verify()
            except OSError:
                logging.debug('unable to multilock %s' %(self.lockfile))
        return 0

    def verify(self):
        logging.debug('test if this is your lock, %s' %(self.lockedfile))
        try:
            self.fd =, os.O_RDWR)
            qhostname, qpid =, 1024).decode().strip().split()
            os.close(self.fd)
            if qhostname != self.hostname or int(qpid) != int(
                logging.debug('%s:%s claims to have the lock' %(qhostname, qpid))
                return 0
            logging.debug('success, you have lock %s' %(self.lockedfile))
            return 1
        except OSError:
            logging.debug('you do not have lock %s' %(self.lockedfile))
            return 0

Furthermore, I would like a "lockgroup" such that I can create several locks in a group and a wait() function that will wait for all of the locks in a group to complete. In other words, we can start multiple jobs in parallel which can be distributed across the cluster (say, one per node) and then a wait() statement will wait for all jobs to complete.

Putting this all together, we can create a python "multilock" module with a "MultiLock" class, which is also available in this github repo, as follows,

import time, socket, shutil, os, logging, errno

class MultiLockTimeoutException(Exception):
    pass

class MultiLockDeniedException(Exception):
    pass

class MultiLock:
    def __init__(self, lockname='lock', lockgroup='.locks', basepath='.', poll=0.5):
        ''' MultiLock instance

            lockname: the name of this lock, default is 'lock'
            lockgroup: the name of the lockgroup, default is '.locks'
            basepath: the directory to store the locks, default is the current directory
            poll: the max time in seconds for a lock to be established, this must be larger
                  than the max time it takes to acquire a lock
        '''
        self.lockname = lockname
        self.basepath = os.path.realpath(basepath)
        self.lockgroup = os.path.join(self.basepath, lockgroup)
        self.lockfile = os.path.join(self.lockgroup, lockname, lockname + '.lock')
        self.lockedfile = os.path.join(self.lockgroup, lockname, lockname + '.locked')
        self.hostname = socket.gethostname() = os.getpid()
        self.poll = float(poll)  # float, so the default 0.5 is not truncated to 0
        self.fd = None

    def acquire(self, maxage=None):
        if not self.verify():
            logging.debug('you do not have the lock %s' %(self.lockedfile))
            if maxage:
                self.cleanup(maxage)
            try:
                logging.debug('make sure that the lockgroup %s exists' %(self.lockgroup))
                os.makedirs(os.path.dirname(self.lockfile))
            except OSError as exc:
                if exc.errno != errno.EEXIST:
                    logging.error('fatal error trying to access lockgroup %s' %(self.lockgroup))
                    raise
            try:
                logging.debug('attempt to create lock %s' %(self.lockfile))
                self.fd =, os.O_CREAT|os.O_EXCL|os.O_RDWR)
                os.write(self.fd, (self.hostname+' '+str(
                os.close(self.fd)
                logging.debug('attempt multilock %s' %(self.lockedfile))
                os.rename(self.lockfile, self.lockedfile)
                return self.verify()
            except OSError:
                logging.debug('unable to multilock %s' %(self.lockfile))
        return 0

    def release(self):
        if self.verify():
            try:
                os.remove(self.lockedfile)
                logging.debug('released lock %s, will try to clean up lockgroup %s' %(self.lockname, self.lockgroup))
                os.rmdir(os.path.dirname(self.lockedfile))
                os.rmdir(self.lockgroup)
            except OSError as exc:
                if exc.errno == errno.ENOTEMPTY:
                    logging.debug('lockgroup %s is not empty' %(self.lockgroup))
        return self.cleanup()

    def verify(self):
        logging.debug('test if this is your lock, %s' %(self.lockedfile))
        try:
            self.fd =, os.O_RDWR)
            qhostname, qpid =, 1024).decode().strip().split()
            os.close(self.fd)
            if qhostname != self.hostname or int(qpid) != int(
                logging.debug('%s:%s claims to have the lock' %(qhostname, qpid))
                return 0
            logging.debug('success, you have lock %s' %(self.lockedfile))
            return 1
        except OSError:
            logging.debug('you do not have lock %s' %(self.lockedfile))
            return 0

    def cleanup(self, maxage=None):
        ''' safely cleanup any lock files or directories (artifacts from race conditions and exceptions)
        '''
        if maxage and os.path.exists(os.path.dirname(self.lockedfile)):
            try:
                tdiff = time.time() - os.stat(os.path.dirname(self.lockedfile))[8]
                if tdiff >= maxage:
                    logging.debug('lock %s is older than maxage %s' %(os.path.dirname(self.lockedfile), maxage))
                    shutil.rmtree(os.path.dirname(self.lockedfile))
                    return 1
            except OSError:
                pass
        if os.path.isfile(self.lockedfile):
            logging.debug('lock %s exists, checking hostname:pid' % (self.lockedfile))
            qhostname, qpid = (None, None)
            try:
                fh = open(self.lockedfile)
                qhostname, qpid =
                fh.close()
            except (OSError, ValueError):
                return 0
            if self.hostname == qhostname:
                try:
                    if int(qpid) > 0:
                        os.kill(int(qpid), 0)   # signal 0: probe whether the pid is alive
                except OSError as e:
                    if e.errno != errno.EPERM:
                        logging.error('lock %s exists on this host, but pid %s is NOT running, force release' % (self.lockedfile, qpid))
                        shutil.rmtree(os.path.dirname(self.lockedfile))
                        return 1
                    logging.debug('lock %s exists on this host but pid %s might still be running' %(self.lockedfile, qpid))
                else:
                    logging.debug('lock %s exists on this host with pid %s still running' %(self.lockedfile, qpid))
            return 0
        return 1

    def wait(self, timeout=86400):
        logging.debug('waiting for lockgroup %s to complete' %(self.lockgroup))
        timeout = int(timeout)
        start_time = time.time()
        while True:
            try:
                if (time.time() - start_time) >= timeout:
                    raise MultiLockTimeoutException("Timeout %s seconds" %(timeout))
                elif os.path.isdir(self.lockgroup):
                    os.rmdir(self.lockgroup)   # only succeeds once every lock is released
                return 1
            except OSError as exc:
                if exc.errno == errno.ENOTEMPTY:
                    time.sleep(self.poll)
                elif exc.errno == errno.ENOENT:
                    logging.error('fatal error waiting for %s' %(self.lockgroup))
                    raise

    def __del__(self):
        self.release()
    def __enter__(self):
        ''' pythonic 'with' statement

            >>> with MultiLock('spam') as spam:
            ...     logging.debug('we have spam')
        '''
        if self.acquire():
            return self
        raise MultiLockDeniedException(self.lockname)

    def __exit__(self, type, value, traceback):
        ''' executed after the with statement
        '''
        if self.verify():
            self.release()
We can use this class to manage locks and lockgroups across network file shares. Next, I'd like to demonstrate a simple command-line utility that uses this functionality.

Posted in python