VLC remote control

Recently I was using VLC to listen to music, as I often do, and I wanted to pause without getting out of bed.

Lazy? Yes!

I learned that VLC includes a slew of remote control interfaces, including a built-in web interface as well as a raw socket interface.

In VLC's Advanced Preferences, go to "Interface" and then "Main interfaces" for a list of options. I selected "Remote control", which is now known as "oldrc", and configured a simple file-based socket "vlc.sock" in my home directory as an experiment.
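
If you'd rather not click through the preferences, the same interface can (I believe) be enabled from the command line -- check vlc --help on your version,

vlc --extraintf oldrc --rc-unix ~/vlc.sock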

You can use netcat to send commands, for example,

twarnock@laptop:~ :) nc -U ~/vlc.sock <<< "pause"

Best of all, VLC cleans up after itself and removes the socket file when it closes. The "remote control" interface is pretty intuitive and comes with a "help" command. I wrapped all of this in a shell function (in my .bashrc).

function vlcrc() {
 SOCK=~/vlc.sock
 CMD="pause"
 if [ $# -gt 0 ]; then
  CMD="$*"
 fi
 if [ -S "$SOCK" ]; then
  nc -U "$SOCK" <<< "$CMD"
 else
  (>&2 echo "I can't find VLC socket $SOCK")
 fi
}

I like this approach because I can now run vlcrc "command" in a scripted environment. I can build playlists, control the volume, adjust the playback speed, pretty much anything VLC lets me do. I could even use a crontab and make a scripted alarm clock!
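
For example, a minimal alarm-clock sketch as a crontab entry (assuming vlcrc is defined in your .bashrc; the playlist path is hypothetical),

# weekdays at 7am, queue up a playlist; bash -i so .bashrc (and vlcrc) gets loaded
0 7 * * 1-5 bash -ic 'vlcrc "add ~/Music/morning.m3u"'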

And of course I can "pause" my music from my phone while lying in bed. Granted, there are apps for more user-friendly VLC smartphone remotes, but I like the granular control provided by a command line.

Posted in shell tips | Comments Off on VLC remote control

datsize, simple command line row and column count

Lately I've been working with lots of data files with fixed rows and columns, and have been finding myself doing the following a lot:

Getting the row count of a file,

twarnock@laptop:/var/data/ctm :) wc -l lda_out/final.gamma
    3183 lda_out/final.gamma
twarnock@laptop:/var/data/ctm :) wc -l lda_out/final.beta
     200 lda_out/final.beta

And getting the column count of the same files,

twarnock@laptop:/var/data/ctm :) head -1 lda_out/final.gamma | awk '{ print NF }'
200
twarnock@laptop:/var/data/ctm :) head -1 lda_out/final.beta | awk '{ print NF }'
5568

I would do this for dozens of files and eventually decided to put this together in a simple shell function,

function datsize {
    if [ -e "$1" ]; then
        rows=$(wc -l < "$1")
        cols=$(head -1 "$1" | awk '{ print NF }')
        echo "$rows X $cols $1"
    else
        return 1
    fi
}

Simple, and so much nicer,

twarnock@laptop:/var/data/ctm :) datsize lda_out/final.gamma
    3183 X 200 lda_out/final.gamma
twarnock@laptop:/var/data/ctm :) datsize lda_out/final.beta
     200 X 5568 lda_out/final.beta
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-theta.dat
    3183 X 200 ctr_out/final-theta.dat
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-U.dat
    2011 X 200 ctr_out/final-U.dat
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-V.dat
    3183 X 200 ctr_out/final-V.dat
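
And since datsize returns 1 for missing files, it drops neatly into a loop,

twarnock@laptop:/var/data/ctm :) for f in lda_out/* ctr_out/*; do datsize "$f"; done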
Posted in bash, shell tips | Comments Off on datsize, simple command line row and column count

Getting the most out of your ssh config

I typically find myself with voluminous bashrc files filled with aliases and functions for connecting to specific hosts via ssh. I would like an easier way to manage the various ssh hosts, ports, and keys.

I typically maintain an ssh-agent across multiple hosts, as well as various tunnels: reverse tunnels and chained tunnels -- but I would like to simplify my normal ssh commands using an ssh config.

First, always remember to RTFM,

man ssh

This is an excellent starting point; the man page contains plenty of information on all the ins and outs of an ssh config.

To get started, simply create a plaintext file "config" in your .ssh/ directory.

Setting Defaults

$HOME/.ssh/config will be used by your ssh client and is able to set per-host defaults for username, port, identity-key, etc

For example,

# $HOME/.ssh/config
Host dev
    HostName dev.anattatechnologies.com
    Port 22000
    User twarnock
    ForwardAgent yes

On this particular host, I can now run

$ ssh dev

Which is much easier than "ssh -A -p 22000 twarnock@dev.anattatechnologies.com"

You can also use wildcards, e.g.,

Host *amazonaws.com *ec2.nytimes.com *.dev.use1.nytimes.com
    User root

which I find very useful for cases where usernames are different than my normal username.

Tunnels

Additionally, you can add tunneling information in your .ssh/config, e.g.,

Host tunnel.anattatechnologies.com
    HostName anattatechnologies.com
    IdentityFile ~/.ssh/anattatechnologies.key
    LocalForward 8080 localhost:80
    User twarnock

Even if you choose to use shell functions to manage tunnels, the use of an ssh config can help simplify things greatly.
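
As a sketch (the host names here are hypothetical), reverse and chained tunnels can live in the config too: RemoteForward sets up a reverse tunnel, and ProxyCommand with -W hops through an intermediate host,

# reverse tunnel: expose your local sshd on the remote host
Host work
    HostName work.example.com
    RemoteForward 54321 localhost:22

# chained tunnel: reach an internal host through a bastion
Host internal
    HostName internal.example.com
    ProxyCommand ssh -W %h:%p bastion.example.com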

Posted in shell tips, ssh | Comments Off on Getting the most out of your ssh config

git, obliterate specific commits

I would like to obliterate a series of git commits between two points; we'll call these the START and END commits.

First, determine the SHA1 for each of the two commits; we'll be forcefully deleting everything in between and preserving END exactly as it is.

Detach Head

Detach head and move to END commit,

git checkout SHA1-for-END

Reset

Move HEAD to START, but leave the index and working tree as END

git reset --soft SHA1-for-START

Redo END commit

Redo the END commit re-using the commit message, but on top of START

git commit -C SHA1-for-END

Rebase

Re-apply everything after the END commit

git rebase --onto HEAD SHA1-for-END master

Force Push

git push -f origin master
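
Putting it all together with hypothetical SHA1s (abc1234 as START, def5678 as END), and assuming the remote is origin and the branch is master,

git checkout def5678
git reset --soft abc1234
git commit -C def5678
git rebase --onto HEAD def5678 master
git push -f origin master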
Posted in shell tips | Comments Off on git, obliterate specific commits

vim: Visual mode

I have been using vim for years and am consistently surprised at the amazing things it can do. Vim has been around longer than I have been writing code, and its predecessor (Vi) is as old as I am.

Somehow through the years this editor has gained and continues to gain popularity. Originally, my interest in Vi was out of necessity, it was often the only editor available on older Unix systems. Yet somehow Vim nowadays rivals even the most advanced IDEs.

One of the more interesting aspects of Vim is the Visual mode. I had ignored this feature for years relying on the normal command mode and insert mode.

Visual Mode

Simply press v and you'll be in visual mode able to select text.

Use V to select an entire line of text, use the motion keys to move up or down to select lines of text as needed.

And most interestingly, use Ctrl-v for visual block mode. This is the most flexible mode of selection and allows you to select columns rather than entire lines, as shown below.
[screenshot: visual block selection spanning the same variable on 5 lines of code]
In this case I have used visual block mode to select the same variable in 5 lines of code.

In all of these cases, you can use o and O while selecting to change the position of the cursor within the selection. For example, if you are selecting several lines downwards and realize you wanted to grab the line above the selection as well, just hit o and it will take you to the top of the selection.

In practice this is far easier and more powerful than normal mouse highlighting, although vim also supports mouse highlighting exactly as you would intuitively expect (where mouse highlighting enables visual mode).

What to do with a visual selection

All sorts of things! You could press ~ to change the case of the selection, you can press > to indent the selection (< to remove an indent), you can press y to yank (copy) the selection, d to delete the selection.

If you're in visual block mode and if you've selected multiple lines as in the example above, then you can edit ALL of the lines simultaneously. Use i to start inserting at the cursor, and as soon as you leave insert mode the changes will appear on each of the lines that was in the visual block.

Similarly, you can use the familiar a, A, and I to add text to every line of the visual block. You can use c to change each line of the visual block, r to replace the selection. This is an incredibly fast and easy way to add or replace text on multiple lines.
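
For example, a quick sketch for commenting out five lines of C-style code,

Ctrl-v 4j   (visual block: select the first column of five lines)
I//<Esc>    (insert "//" at the start of each selected line)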

Additionally, you can use p to put (paste) over a visual selection. Once you paste over a visual selection, that selection will now be your default register (see :reg), which is extremely handy when you need to quickly swap two selections of text.

You can even use the visual block to limit the range of an otherwise global find and replace, that is,

:s/\%Vfind/replace/g

adding the \%V to the search limits the find and replace to the selection block.

More information is available in vim's help file,

:h visual-operators
Posted in shell tips, vim | Comments Off on vim: Visual mode

vim: tags and tabs

Previously, I discussed using vim and screen with automagic titles.

The great part about working with vim and screen is that I can work from anywhere with minimal setup, and when working remotely I can pick up the cursor exactly where I left it -- I never have to worry about a remote terminal disconnecting.

I tend to avoid vim plugins as I like having a minimal setup on different hosts; I occasionally make an exception for NERDTree, but I find the default netrw easily workable. I keep my .vimrc and other dotfiles in github so I'm always a git clone away from getting my environment set up (in Linux, cygwin, OSX, etc).

With this in mind, I would like an easier way to navigate files in a project and if possible avoid non-standard vim plugins.

One of the most effective approaches I have found using the default (no plugins) vim is the combination of tags and tabs.

Tag files are generated by ctags (typically exuberant ctags), which vim can then use as a keyword index into your source tree for any given project.

Generating Tags

I prefer to keep a single "tags" file at the root of each project directory, typically as follows,

$ ctags -R .

This will create a "tags" file in the current directory. For larger codebases these can get surprisingly large, but they are usually fast to generate. To keep these files up to date you may consider using git hooks to regenerate the tags file on each commit.
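
For example, a minimal post-commit hook (a sketch; save as .git/hooks/post-commit and chmod +x),

#!/bin/sh
# git runs hooks from the top of the working tree
ctags -R .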

Telling vim about Tags

In vim you can load as many tag files as you like, the command is,

:set tags+=tags

where "tags" is the filename of the tags file.

The problem is, you won't want to type this every time you open vim, so add the following to your .vimrc,

set tags+=tags;$HOME

Adding ";$HOME" to the set tags command makes vim look for a "tags" file in the current working directory and, if it doesn't find one, keep searching parent directories all the way up to "$HOME". So if you're 10 directories deep within $HOME, it may search up to 10 directories looking for a "tags" file. You can replace $HOME with any base directory; in my case I keep all project source code under my $HOME directory.

Using Tags

Typing :tag text will search for files with the exact tag name, or you can use :tag /text to search for any tag that matches "text".

By default, vim opens the new files in a tag stack, you can use Ctrl-T to go back to the previous file -- alternatively you can navigate the files through the normal buffer commands, e.g.,

list open buffers
:ls

switch to a different buffer (from list)
:b #

unload (delete) a buffer
:bd #

You can also put your cursor on the word you want to search and press Ctrl-] to go to the file that matches the selected text, then use Ctrl-T to jump back. If you want to see all the files that match a tag, you can use :tselect text.

I find navigating a tag stack and maintaining multiple buffers a bit cumbersome; this is where tabs can really help out.

Using Tabs

Once you're in vim you can open a new tab with :tabedit {file} or :tabe {file} which will open the optionally specified file in a new tab (or open a new blank tab if no file is specified). Usually I use,

:tabe .

to open a new tab with the file browser in the current working directory.

With multiple tabs open you can use gt and gT to cycle through the open tabs, or {i}gt to go to the i-th tab (starting at 1). You can re-order tabs using :tabm # to move a tab to a new position (starting at 0).

Most importantly, tabs work great with mouse enabled, simply click on a tab as you would intuitively expect, drag the tabs to re-order, or click the "X" in the upper-right to close.

Tabs meet Tags

I find the default tags behavior slightly cumbersome as I end up navigating the tag stack through multiple buffers open in one window.

When searching tags I want the file to always open in a new tab, or at least to open in a vertical split.

I have added the following to my .vimrc

map <C-\> :tab split<CR>:exec("tag ".expand("<cword>"))<CR>
map <C-]> :vsp <CR>:exec("tag ".expand("<cword>"))<CR>

This will effectively remap Ctrl-] to open the matching file in a vertical split. I can then close the vertical split or even move it to a new tab using Ctrl-w T.

However, mostly I use Ctrl-\ to open the matching file in a new tab.

Between tab and tag navigation I find this a very powerful way to manage even very large projects with default vim (rather than rely on an IDE).

Careful with Splits

One interesting thing about splits (vertical and horizontal, that is, :sp and :vsp) is that they will exist entirely within a tab window. In other words, a split occurs within only one tab.

You can close a split using Ctrl-w q, and if you need to navigate through multiple splits you can either use the mouse or Ctrl-w and then an arrow key (or h,j,k,l if you prefer).

In any given split, you can always move that file to a new tab using Ctrl-w T

Posted in shell tips, vim | Comments Off on vim: tags and tabs

vim and screen, automagic titles

Previously, I discussed using multiuser screen so that I could concurrently access a shared screen session across multiple remote hosts (from work, from home, from my phone, etc).

I would like to augment screen such that the titles would always tell me what directory I'm currently in, as well as what program is running (if any). Additionally, if I'm editing a file in vim I would like to see the filename in the screen window title. If I have multiple vim buffers open (say, in tabs) I would like the screen window title set to whichever filename I'm currently editing.

GNU screen provides a shelltitle attribute that can get us partly there, you could add something like this to your screenrc,

# automagic window title
shelltitle ") |bash:"

In this example, screen will automatically fill in any currently running shell command as the window title. Importantly, ") " must be the final characters of your command prompt. For most people this is the '$' character; mine is still set to the smiley :) prompt discussed previously. Everything after the '|' character will be the default screen title.
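
For example, a minimal prompt that satisfies this convention (a sketch, adjust to taste),

PS1='\u@\h:\w :) '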

Unfortunately, while this approach does provide a dynamic window name for running programs, it does not show the current directory and does nothing for vim (other than saying "vim"). This approach, which may work for some, turned out to be a dead end. I had been searching for ways to get screen to update the window titles to the current directory and had almost given up.

Recently, I discovered this article, which provides a working (albeit complicated) approach.

Essentially, in newer versions of bash we can use the trap command with DEBUG, which will run a given command before every single shell command!

Additionally, we can set a screen window title on the command prompt by printing an escape sequence then the new title. So, we can run a bash function in the DEBUG trap that sets the title.
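
You can test this by hand from a shell running inside screen; the following renames the current window to "hello",

printf '\ekhello\e\\'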

Sounds easy? Well, not really. The DEBUG trap is a bit heavy handed and using it to print escape characters can have odd effects involving BASH_COMMAND and PROMPT_COMMAND. Here is a working solution I've been using,

# turn off debug trap, turn on later if we're in screen
trap "" DEBUG

...
... rest of my .bashrc
...

# Show the current directory AND running command in the screen window title
# inspired from http://www.davidpashley.com/articles/xterm-titles-with-bash.html
if [ "$TERM" = "screen" ]; then
    export PROMPT_COMMAND='true'
    set_screen_window() {
      HPWD=`basename "$PWD"`
      if [ "$HPWD" = "$USER" ]; then HPWD='~'; fi
      if [ ${#HPWD} -ge 10 ]; then HPWD='..'${HPWD:${#HPWD}-8:${#HPWD}}; fi
      case "$BASH_COMMAND" in
        *\033]0*);;
        "true")
            printf '\ek%s\e\\' "$HPWD:"
            ;;
        *)
            printf '\ek%s\e\\' "$HPWD:${BASH_COMMAND:0:20}"
            ;;
      esac
    }
    trap set_screen_window DEBUG
fi

In this case, I set PROMPT_COMMAND to true and make sure that my PS1 environment variable is not relying on PROMPT_COMMAND. The reason is that the BASH_COMMAND environment variable will be set to whatever the parent shell is currently running, and the DEBUG trap will fire every time BASH_COMMAND changes (which is a lot, especially if you're executing a shell script).

Fortunately, anytime a command finishes, PROMPT_COMMAND will run, which in this case executes true, and I catch that in the case statement and set the title to the current directory. This effectively sets the title every time bash prints a command prompt.

If you execute a long-running command, that screen window's title will be set to that command, and as soon as the command finishes the title will change back.

The only remaining problem is vim. With the above approach, it almost works with vim. If you were in a directory named "foo" and ran "vim spam.txt", then the screen window title would be set to "foo:vim spam.txt". So far so good, but when you open additional files in vim, the title will still say "foo:vim spam.txt".

.vimrc

The final step is to update your vimrc to set the titlestring, and with some tweaking vim will send the escape characters that screen recognizes to change the window title. Lastly, add an autocmd for all relevant events (opening a new file, switching tabs, etc), and you'll have a working solution,

" screen title
if &term == "screen"
  let &titlestring = "vim(" . expand("%:t") . ")"
  set t_ts=^[k
  set t_fs=^[\
  set title
endif
autocmd TabEnter,WinEnter,BufReadPost,FileReadPost,BufNewFile * let &titlestring = 'vim(' . expand("%:t") . ')'

* to type ^[, which is an escape character, you need to enter CTRL+V <Esc>

With this approach, while vim is running it will effectively take over the job of updating the screen window title, for example,
[screenshot: screen window title updating to vim(filename) as files change]
As we switch tabs or open new files or change focus in a split screen, vim will update the screen window title to "vim(filename)" for the file that's being edited.

All of these changes (and more) can be found in my dotfiles in github

Posted in bash, shell tips, vim | Comments Off on vim and screen, automagic titles

node.js redirect with query string

Previously, I discussed javascript appending to query string, where we serialized an associative array to a query string. I would now like to leverage this technique within node.js as a redirect service.

Specifically, I am using express to make a web app in node.js and the app includes a redirect service, e.g.,

var express = require('express');
var app = express();
var redirectVars = {'foo':'spam and eggs', 'tracker':42 };

// redirect and append redirectVars
app.get('/redirect', function(request, result, next) {
	if(request.query.url) {
		var urle = request.query.url;
		var url = decodeURIComponent(urle);
		var firstSeperator = (url.indexOf('?')==-1 ? '?' : '&');

		var queryStringParts = new Array();
		for(var key in redirectVars) {
			queryStringParts.push(key + '=' + encodeURIComponent(redirectVars[key]));
		}
		var queryString = queryStringParts.join('&');

		return result.redirect(url + firstSeperator + queryString);
	}
	result.send(400, "Bad request");
});

Usage of this service is as simple as,

/redirect?url=new-location

Any external app could use this service, which will append server controlled query string variables to the redirected URL. This is useful for a redirect service that needs to dynamically construct query string variables, such as cross-domain authentication and authorization.

Importantly, in order to preserve an existing query string in the new-location, simply encode the entire URL string before sending it into the service, e.g.,

var new_location = encodeURIComponent("http://foo.com/?q=test");
window.location = "http://www.yourapp.com/redirect?url=" + new_location;

Using the above node.js example, this would have the effect of redirecting the user to

http://foo.com/?q=test&foo=spam%20and%20eggs&tracker=42
Posted in javascript | Comments Off on node.js redirect with query string

javascript appending to query string

I would like to append an associative array to a URL's query string. For whatever reason, there is no native javascript method to accomplish this task. This needs to be done manually or using a common web framework such as jQuery.

The first step is to serialize the associative array into a query string,

native javascript

With plain-old-javascript, you can do something like this,

var queryVars = {'foo':'bar', 'spam':'eggs', 'tracker':'yes' };

var queryStringParts = new Array();
for(var key in queryVars) {
  queryStringParts.push(key + '=' + queryVars[key]);
}
var queryString = queryStringParts.join('&');

The value of queryString will be

foo=bar&spam=eggs&tracker=yes
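
Note that the values are not URL-encoded here; if a value may contain reserved characters (spaces, &, =), encode both sides of each pair,

queryStringParts.push(encodeURIComponent(key) + '=' + encodeURIComponent(queryVars[key]));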

jQuery

Since version 1.2 jQuery has supported the jQuery.param() function to serialize any array or object into a URL query string. The above example becomes,

var queryVars = {'foo':'bar', 'spam':'eggs', 'tracker':'yes' };

var queryString = jQuery.param(queryVars);

node.js

My favorite approach is the node.js querystring.stringify() function; I like this one as it's the easiest to remember,

var querystring = require('querystring');

var queryVars = {'foo':'bar', 'spam':'eggs', 'tracker':'yes' };

var queryString = querystring.stringify(queryVars);

Appending ? or &

In most cases you shouldn't assume an input url lacks a query string; in fact, that would be a rather bad assumption. To handle this you'll want to append your new query string to any existing query string using the & character, and otherwise use the ? character. Here is an example,

function appendQueryString(url, queryVars) {
    var firstSeperator = (url.indexOf('?')==-1 ? '?' : '&');
    var queryStringParts = new Array();
    for(var key in queryVars) {
        queryStringParts.push(key + '=' + queryVars[key]);
    }
    var queryString = queryStringParts.join('&');
    return url + firstSeperator + queryString;
}

var url = "something.html?q=test";
var queryVars = {'foo':'bar', 'spam':'eggs', 'tracker':'yes' };

var new_url = appendQueryString(url, queryVars);

The value of new_url will be

something.html?q=test&foo=bar&spam=eggs&tracker=yes
Posted in javascript | Comments Off on javascript appending to query string

multiuser screen

Previously, I discussed using GNU screen as a window manager.

I would like to access my screen session concurrently from multiple hosts (say, at work, at home, and even remotely on my phone). I would also like to define default screens specific to one host.

Default screens can be configured easily in the .screenrc in your home directory. To keep things simple I use a single screenrc file, available in this github repo, shared across multiple environments that often have different uses (between home and work computers). Host-specific screenrc commands are defined in a special .screenrc_local, which is loaded from the main .screenrc as follows,

source .screenrc_local

In order to load default screens each with a specific initial command, I use the "screen" and "stuff" commands in my .screenrc_local, for example,

## default screens
screen -t bash 0

screen -t cloud 1
stuff "cd cloud/cloudsource/trunk/roles/; pushd ../../branches/staging/roles; dirs -v^M"

screen -t ecr/ 2
stuff "cd /mnt/sartre-data/ecr/; ll^M"

## go back to the first screen
select 0

[screenshot: a new screen session opening with the default windows]
With this configuration any new session will have those initial screens.

Whatever is in the "stuff" command will be typed automatically into the screen session. Add "^M" to send a hard return to execute the "stuff" command.


To enable multiuser mode in new screen sessions, add the following in your .screenrc

# enable multiuser screen
multiuser on

To enable multiuser mode in an existing screen session, press Ctrl-A : and enter "multiuser on", that is,

^A :multiuser on

A multiuser screen session can be joined by multiple connections concurrently. By default, only your user account can access the shared screen session. To join a multiuser session, use the following command from the shell,

$ screen -x sessionname

If you don't enter a sessionname, the most recent session will be joined. If you use "-xR", a new session will be created if a multiuser session did not exist.

With this approach I can seamlessly switch to another computer or device, even in mid command.

Best of all, multiple connections can be active at the same time -- so for example you can have the same screen session open at home and in the office, as well as on your phone (typing commands on your phone knowing they're also showing on your home and work computer).


If you would like to allow other users to join your screen session, you would use the following commands, either in .screenrc or interactively using "Ctrl-A :"

acladd username

The other user can access this shared session using the following command,

$ screen -x owner/sessionname

Sharing a screen session with multiple users can get complicated; and because you'll need to setuid root on the screen binary, it's not a good security practice. However, within a trusted developer network on a shared host it's a very good way to collaborate. If you do wish to allow multiple users to share a single screen session, you'll need to run the following,

$ sudo chmod u+s `which screen`
$ sudo chmod 755 /var/run/screen

If you run into the following, "ERROR: Cannot open your terminal '/dev/pts/1' - please check." or something similar, this is likely because the current user did not login directly but instead performed a "su - username" and does not have access to the pts. An interesting hack I found here resolves this using the "script" command (which creates a new pts as the current user), that is,

script /dev/null
screen -x owner/sessionname

By default, all users will have full access to the shared session; able to type commands as the session owner. You can modify access by using "aclchg", or remove access with "acldel".

The "aclchg" command can apply to an entire session or to a specific window, e.g.,

## read only for entire session
aclchg username -w "#"

## full access to screen 0 only
aclchg username +rwx 0

As a simple shortcut, you can use aclchg to add a new user with specific (such as read-only) access.

Posted in bash, shell tips | Comments Off on multiuser screen

scripting Photoshop for stop motion

I would like a simple and quick way to save a copy of an image in Photoshop, with an auto-incrementing filename. Ideally, a single button to capture a frame in a stop motion animation. In other words, I would like to save a copy of the working image as a JPEG without any interactive prompts and the filename will automatically increment a count.

For example, if I'm working with a file "test.psd", I want a single action that will save a copy "test_0001.jpg", and subsequent calls will save "test_0002.jpg", "test_0003.jpg", and so on.

By default, Photoshop will overwrite existing files, and it would be quite tedious to manually "Save As" for hundreds or thousands of images. Fortunately, Photoshop offers a scripting interface to call user defined scripts. Custom scripts can even be loaded into Photoshop and executed as an Action.

The following snippet can be saved as [Photoshop Directory]/Presets/Scripts/saveFrame.jsx, and after restarting Photoshop you should see "saveFrame" under File -> Scripts.

main();

/***
 * Scripted "save as" with incrementing filename
 *   e.g., test_0001.jpg, test_0002.jpg, ...
 *
 ***/
function main() { 
	if (!documents.length)
		return;
	var cnt = 1;
    try {
        var Name = decodeURI(activeDocument.name).replace(/\.[^\.]+$/, '');
        var Path = decodeURI(activeDocument.path);
        var saveFrame = Path + "/" + Name + "_" + zeroPad(cnt,4) + ".jpg";
        //
        // find the next available filename
        while ( File(saveFrame).exists ) {
            cnt++;
            saveFrame = Path + "/" + Name + "_" + zeroPad(cnt,4) + ".jpg";
        }
        //
        // save as, change the default JPEG quality here as needed
        SaveJPEG(File(saveFrame), 9);
	} catch(e) {
        alert(e + "\r@ Line " + e.line);
     }
}

function SaveJPEG(saveFile, jpegQuality) {
	var doc = activeDocument;
	if (doc.bitsPerChannel != BitsPerChannelType.EIGHT) 
		doc.bitsPerChannel = BitsPerChannelType.EIGHT;
	jpgSaveOptions = new JPEGSaveOptions();
	jpgSaveOptions.embedColorProfile = true;
	jpgSaveOptions.formatOptions = FormatOptions.STANDARDBASELINE;
	jpgSaveOptions.matte = MatteType.NONE;
	jpgSaveOptions.quality = jpegQuality; 
	activeDocument.saveAs(saveFile, jpgSaveOptions, true, Extension.LOWERCASE);
}  

function zeroPad(n, s) { 
	n = n.toString(); 
	while (n.length < s) 
		n = '0' + n; 
	return n; 
};

Using Photoshop scripts you can automate any task and even create animation effects. In CS6 you can render a series of images as a video, alternatively, you can create the image frames in Photoshop and use ffmpeg to render the video.

If you want to use ffmpeg to render a series of images, you could use the following command,

$ ffmpeg -r 30 -f image2 -i test_%04d.jpg -vb 1M -r 30 test.webm

Here is a simple (90 frame loop) example animating a series of scripted lighting effects,

[video: 90-frame loop of scripted lighting effects]
The above video is embedded in this page using the following html,

<video id="test_test" poster="test_0001.jpg" preload="auto" loop autoplay>
    <source src="test.mp4" type="video/mp4" />
    <source src="test.webm" type="video/webm" />
    <source src="test.ogv" type="video/ogg" />
    <object width="600" height="360" type="application/x-shockwave-flash" data="test.swf">
        <param name="movie" value="test.swf" />
        <img src="test_0001.jpg" width="600" height="360" alt="test" title="No video playback" />
    </object>
</video>
Posted in html, javascript, shell tips | Comments Off on scripting Photoshop for stop motion

locking and concurrency in python, part 2

Previously, I created a "MultiLock" class for managing locks and lockgroups across a shared file system. Now I want to create a simple command-line utility that uses this functionality.

To start, we can create a simple runone() function that leverages MultiLock, e.g.,

def _runone(func, lockname, lockgroup, basedir, *args, **kwargs):
    ''' run one, AND ONLY ONE, instance (respect locking)

        >>> 
        >>> _runone(print, 'lock', 'locks', '.', 'hello world')
        >>> 
    '''
    lock = MultiLock(lockname, lockgroup, basedir)
    if lock.acquire():
        func(*args, **kwargs)
        lock.release()

Any python function (with its *args and **kwargs) will be called if (and-only-if) the named lock was acquired. At a minimum, this guarantees that one (and only one) instance of the function can be called at a given time.

To make this slightly more magic, we can wrap this as a decorator function -- a decorator that accepts arguments,

def runone(lockname='lock', lockgroup='.locks', basedir='.'):
    ''' decorator with closure
        returns a function that will run one, and only one, instance per lockgroup
    '''
    def wrapper(fn):
        def new_fn(*args, **kwargs):
            return _runone(fn, lockname, lockgroup, basedir, *args, **kwargs)
        return new_fn
    return wrapper

The closure is used so that we can pass arguments to the decorator function, e.g.,

@runone('lock', 'lockgroup', '/shared/path')
def spam():
    # do work, only if we acquire /shared/path/lockgroup/lock
    pass

Putting this all together, we can create a command-line utility that will execute any command-line program if (and only if) it acquires a named lock in the lockgroup. With such a utility we can add concurrency and fault-tolerance to any shell script that can be executed across all nodes in a cluster. This code is also available in this github repo.

import time, sys, subprocess, optparse, logging
from multilock import MultiLock

def runone(lockname='lock', lockgroup='.locks', basedir='.'):
    ''' decorator with closure
        returns a function that will run one, and only one, instance per lockgroup
    '''
    def wrapper(fn):
        def new_fn(*args, **kwargs):
            return _runone(fn, lockname, lockgroup, basedir, *args, **kwargs)
        return new_fn
    return wrapper


def _runone(func, lockname, lockgroup, basedir, *args, **kwargs):
    ''' run one, AND ONLY ONE, instance (respect locking)

        >>> 
        >>> _runone(print, 'lock', 'locks', '.', 'hello world')
        >>> 
    '''
    lock = MultiLock(lockname, lockgroup, basedir)
    if lock.acquire():
        func(*args, **kwargs)
        lock.release()


if __name__ == '__main__':

    p = optparse.OptionParser('usage: %prog [options] cmd [args]')
    p.add_option('--lockname', '-l', dest="lockname", default='lock', help="the lock name, should be unique for this instance")
    p.add_option('--lockgroup', '-g', dest="lockgroup", default='.locks', help="the lockgroup, a collection of independent locks")
    p.add_option('--basedir', '-d', dest="basedir", default='.', help="the base directory where the lock files should be written")
    p.add_option('--wait', '-w', dest="wait", default=None, help="optional, wait (up till the number of seconds specified) for all locks to complete in the lockgroup")
    options, args = p.parse_args()

    if options.wait:
        lock = MultiLock(options.lockname, options.lockgroup, options.basedir)
        lock.wait(options.wait)
        sys.exit()
    
    @runone(options.lockname, options.lockgroup, options.basedir)
    def _main():
        subprocess.call(args)

    _main() 
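
Assuming the script above is saved as runone.py (the filename and paths here are hypothetical), every node in a cluster can attempt the same job and exactly one will run it; a second invocation can block until the whole lockgroup drains,

$ python runone.py -l nightly -g batch -d /shared/locks ./nightly_job.sh
$ python runone.py -l nightly -g batch -d /shared/locks --wait 3600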
Posted in python, shell tips, software arch. | Comments Off on locking and concurrency in python, part 2

locking and concurrency in python, part 1

I would like to do file-locking concurrency control in python. Additionally, I would like to provide a "run-once-and-only-once" functionality on a shared cluster; in other words, I have multiple batch jobs to run over a shared compute cluster and I want a simple way to provide fault tolerance for parallel jobs.

The batch jobs should leverage a locking mechanism with the following method signatures,

class Lock:

    def acquire(self):
        pass

    def release(self):
        pass

    def wait(self, timeout):
        pass

Using a shared filesystem, such as NFS, we can use file or directory locking, provided we can guarantee atomicity for the creation of the lock. I.e., only one host in a cluster can acquire a named lock. There are different ways to guarantee atomicity on file operation, depending on your filesystem.

One approach is os.mkdir(), which is atomic on POSIX systems. Alternatively, you can use the following,

>>>
>>> fd = os.open('foo.lock', os.O_CREAT|os.O_EXCL|os.O_RDWR)
>>> 

This is atomic on most filesystems. Lastly, os.rename() is atomic on POSIX and most network file systems. In other words, if multiple hosts attempt the same os.rename operation on a shared file, only one will succeed and the others will raise an OSError.

In order to maximize fault tolerance, we can create a lockfile containing the hostname and process-id, rename the file, and then read the renamed file to verify the correct hostname and process-id. This covers nearly all network shared filesystems (which may or may not be POSIX compliant). The following python snippet will perform this multi-lock,

class MultiLock:
    def __init__(self, lockname='lock'):
        self.lockname = lockname
        self.lockfile = os.path.join(lockname, lockname + '.lock')
        self.lockedfile = os.path.join(lockname, lockname + '.locked')
        self.hostname = socket.gethostname()
        self.pid = os.getpid()
        self.fd = None

    def acquire(self):
        if not self.verify():
            logging.debug('you do not have the lock %s' %(self.lockedfile))
            try:
                logging.debug('attempt to create lock %s' %(self.lockfile))
                os.mkdir(os.path.dirname(self.lockfile))
                self.fd = os.open(self.lockfile, os.O_CREAT|os.O_EXCL|os.O_RDWR)
                os.write(self.fd, self.hostname+' '+str(self.pid))
                os.fsync(self.fd)
                os.close(self.fd)
                logging.debug('attempt multilock %s' %(self.lockedfile))
                os.rename(self.lockfile, self.lockedfile)
                return self.verify()
            except OSError:
                logging.debug('unable to multilock %s' %(self.lockfile))
        return 0

    def verify(self):
        logging.debug('test if this is your lock, %s' %(self.lockedfile))
        try:
            self.fd = os.open(self.lockedfile, os.O_RDWR)
            qhostname, qpid = os.read(self.fd, 1024).strip().split()
            os.close(self.fd)
            if qhostname != self.hostname or int(qpid) != int(self.pid):
                logging.debug('%s:%s claims to have the lock' %(qhostname, qpid))
                return 0
            logging.debug('success, you have lock %s' %(self.lockedfile))
            return 1
        except:
            logging.debug('you do not have lock %s' %(self.lockedfile))
            return 0

Furthermore, I would like a "lockgroup" such that I can create several locks in a group and a wait() function that will wait for all of the locks in a group to complete. In other words, we can start multiple jobs in parallel which can be distributed across the cluster (say, one per node) and then a wait() statement will wait for all jobs to complete.

Putting this all together, we can create a python "multilock" module with a "MultiLock" class, which is also available in this github repo, as follows,

import time, socket, shutil, os, logging, errno

class MultiLockTimeoutException(Exception):
    pass

class MultiLockDeniedException(Exception):
    pass

class MultiLock:
    def __init__(self, lockname='lock', lockgroup='.locks', basepath='.', poll=0.5):
        ''' MultiLock instance

            lockname: the name of this lock, default is 'lock'
            lockgroup: the name of the lockgroup, default is '.locks'
            basepath: the directory to store the locks, default is the current directory
            poll: the max time in seconds for a lock to be established, this must be larger
                  than the max time it takes to acquire a lock
        '''
        self.lockname = lockname
        self.basepath = os.path.realpath(basepath)
        self.lockgroup = os.path.join(self.basepath, lockgroup)
        self.lockfile = os.path.join(self.lockgroup, lockname, lockname + '.lock')
        self.lockedfile = os.path.join(self.lockgroup, lockname, lockname + '.locked')
        self.hostname = socket.gethostname()
        self.pid = os.getpid()
        self.poll = float(poll)
        self.fd = None


    def acquire(self, maxage=None):
        if not self.verify():
            logging.debug('you do not have the lock %s' %(self.lockedfile))
            if maxage:
                self.cleanup(maxage)
            try:
                logging.debug('make sure that the lockgroup %s exists' %(self.lockgroup))
                os.makedirs(self.lockgroup)
            except OSError as exc:
                if exc.errno == errno.EEXIST:
                    pass
                else:
                    logging.error('fatal error trying to access lockgroup %s' %(self.lockgroup))
                    raise
            try:
                logging.debug('attempt to create lock %s' %(self.lockfile))
                os.mkdir(os.path.dirname(self.lockfile))
                self.fd = os.open(self.lockfile, os.O_CREAT|os.O_EXCL|os.O_RDWR)
                os.write(self.fd, self.hostname+' '+str(self.pid))
                os.fsync(self.fd)
                os.close(self.fd)
                logging.debug('attempt multilock %s' %(self.lockedfile))
                os.rename(self.lockfile, self.lockedfile)
                return self.verify()
            except OSError:
                logging.debug('unable to multilock %s' %(self.lockfile))
        return 0

   
    def release(self):
        try:
            if self.verify():
                shutil.rmtree(os.path.dirname(self.lockedfile))
                try:
                    logging.debug('released lock %s, will try to clean up lockgroup %s' %(self.lockname, self.lockgroup))
                    os.rmdir(self.lockgroup)
                except OSError as exc:
                    if exc.errno == errno.ENOTEMPTY:
                        logging.debug('lockgroup %s is not empty' %(self.lockgroup))
                        pass
                    else:
                        raise
        finally:
            return self.cleanup()


    def verify(self):
        logging.debug('test if this is your lock, %s' %(self.lockedfile))
        try:
            self.fd = os.open(self.lockedfile, os.O_RDWR)
            qhostname, qpid = os.read(self.fd, 1024).strip().split()
            os.close(self.fd)
            if qhostname != self.hostname or int(qpid) != int(self.pid):
                logging.debug('%s:%s claims to have the lock' %(qhostname, qpid))
                return 0
            logging.debug('success, you have lock %s' %(self.lockedfile))
            return 1
        except:
            logging.debug('you do not have lock %s' %(self.lockedfile))
            return 0

   
    def cleanup(self, maxage=None):
        ''' safely cleanup any lock files or directories (artifacts from race conditions and exceptions)
        '''
        if maxage and os.path.exists(os.path.dirname(self.lockedfile)):
            try:
                tdiff = time.time() - os.stat(os.path.dirname(self.lockedfile))[8]
                if tdiff >= maxage:
                    logging.debug('lock %s is older than maxage %s' %(os.path.dirname(self.lockedfile), maxage))
                    shutil.rmtree(os.path.dirname(self.lockedfile))
            except:
                pass
        if os.path.isfile(self.lockedfile):
            logging.debug('lock %s exists, checking hostname:pid' % (self.lockedfile))
            qhostname, qpid = (None, None)
            try:
                fh = open(self.lockedfile)
                qhostname, qpid = fh.read().strip().split()
                fh.close()
            except:
                pass
            if self.hostname == qhostname:
                try:
                    if int(qpid) > 0:
                        os.kill(int(qpid), 0)
                except OSError, e:
                    if e.errno != errno.EPERM:
                        logging.error('lock %s exists on this host, but pid %s is NOT running, force release' % (self.lockedfile, qpid))
                        shutil.rmtree(os.path.dirname(self.lockedfile))
                        return 1
                    else:
                        logging.debug('lock %s exists on this host but pid %s might still be running' %(self.lockedfile, qpid))
                else:
                    logging.debug('lock %s exists on this host with pid %s still running' %(self.lockedfile, qpid))
            return 0
        return 1


    def wait(self, timeout=86400):
        logging.debug('waiting for lockgroup %s to complete' %(self.lockgroup))
        timeout = int(timeout)
        start_time = time.time()
        while True:
            try:
                if (time.time() - start_time) >= timeout:
                    raise MultiLockTimeoutException("Timeout %s seconds" %(timeout))
                elif os.path.isdir(self.lockgroup):
                    time.sleep(self.poll)
                    os.rmdir(self.lockgroup)
                return 1
            except OSError as exc:
                if exc.errno == errno.ENOTEMPTY:
                    pass
                elif exc.errno == errno.ENOENT:
                    pass
                else:
                    logging.error('fatal error waiting for %s' %(self.lockgroup))
                    raise


    def __del__(self):
        self.release()

    
    def __enter__(self):
        ''' pythonic 'with' statement

            e.g.,
            >>> with MultiLock('spam') as spam:
            ...     logging.debug('we have spam')
        '''
        if self.acquire():
            return self
        raise MultiLockDeniedException(self.lockname)


    def __exit__(self, type, value, traceback):
        ''' executed after the with statement
        '''
        if self.verify():
            self.release()
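
We can also use the class directly; for example (a sketch, assuming a shared mount at /shared/locks and a hypothetical run_batch_job function), each node grabs its own lock in a common lockgroup and a final job waits for the group to drain,

>>> lock = MultiLock('node1-job', lockgroup='batch', basepath='/shared/locks')
>>> if lock.acquire():
...     run_batch_job()
...     lock.release()
>>> MultiLock('waiter', lockgroup='batch', basepath='/shared/locks').wait(86400)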

We can use this class to manage locks and lockgroups across network file shares; next, I'd like to demonstrate a simple command-line utility that uses this functionality.

Posted in python, software arch. | Comments Off on locking and concurrency in python, part 1

zip archive in python

I would like to create zip archives within a python batch script. I would like to compress individual files or entire directories of files.

You can use the built-in zipfile module, and create a ZipFile as you would a normal File object, e.g.,

>>> 
>>> import zipfile
>>> foo = zipfile.ZipFile('foo.zip', mode='w')
>>> foo.write('foo.txt')
>>> 

Unfortunately, by default the zipfile is uncompressed. You can add multiple files and directories to your zipfile, which can be useful for archival, but they will not be compressed. In order to compress the files, you'll need to have the zlib library installed (it should already be installed in newer versions of python, 2.5 and greater). Simply use the ZIP_DEFLATED flag as follows,

>>> 
>>> foo = zipfile.ZipFile('foo.zip', mode='w')
>>> foo.write('foo.txt', compress_type=zipfile.ZIP_DEFLATED)
>>> 

In order to archive an entire directory (and all its contents) you can use the os.walk function. This function will return a list of all files and subdirectories as a triple (root, dirs, files). You can iterate through the returned files as follows,

>>> 
>>> foo = zipfile.ZipFile('foo.zip', mode='w')
>>> for root, dirs, files in os.walk('/path/to/foo'):
...     for name in files:
...         file_to_zip = os.path.join(root, name)
...         foo.write(file_to_zip, compress_type=zipfile.ZIP_DEFLATED)
...         
>>> 

We can put this all together into a handy utility function that creates a compressed zipfile for any file or directory. This is also available in the following github repo.

import os, zipfile

def ziparchive(filepath, zfile=None):
    ''' create/overwrite a zip archive

        can be a file or directory, and always overwrites the output zipfile if one already exists

        An optional second argument can be provided to specify a zipfile name, 
        by default the basename will be used with a .zip extension

        >>>
        >>> ziparchive('foo/data/')
        >>> zf = zipfile.ZipFile('data.zip', 'r')
        >>> 

        >>> 
        >>> ziparchive('foo/data/', 'foo/eggs.zip')
        >>> zf = zipfile.ZipFile('foo/eggs.zip', 'r')
        >>> 
    '''
    if zfile is None:
        zfile = os.path.basename(filepath.strip('/')) + '.zip'
    filepath = filepath.rstrip('/')
    zf = zipfile.ZipFile(zfile, mode='w')
    if os.path.isfile(filepath):
        zf.write(filepath, filepath[len(os.path.dirname(filepath)):].strip('/'), compress_type=zipfile.ZIP_DEFLATED)
    else:
        for root, dirs, files in os.walk(filepath):
            for name in files:
                file_to_zip = os.path.join(root, name)
                arcname = file_to_zip[len(os.path.dirname(filepath)):].strip('/')
                zf.write(file_to_zip, arcname, compress_type=zipfile.ZIP_DEFLATED)
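
A quick sanity check of the resulting archive from the shell,

$ unzip -l foo/eggs.zip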
Posted in python, shell tips | Comments Off on zip archive in python

chaining ssh tunnels

Imagine you're working within a private home network and need to connect to an Oracle database within a corporate network, accessible only through a bastion host hidden within the corporate network. Odd as that sounds, it's a typical network configuration, as follows:

[network diagram: home network, internet-server, corporate workstation, bastion host, and vault, with arrows showing the direction of allowed SSH connections]

The layout is very simple, when you're within the corporate network you must use the bastion host to access the vault (e.g., the Oracle database). The arrows in the above diagram represent the directionality of the various firewall rules, and in this case, these are SSH-only. A workstation within the corporate network would simply create an SSH-tunnel through the bastion to the vault.

Technically, there's a line of (completely separate) SSH connections from the private home network all the way to the vault, but there is absolutely no way for either side to talk directly. And further, there is no way for an Oracle client on the home network to connect all the way to the vault.

Each line could represent an SSH-tunnel, in which case, if we chain these tunnels together, then we could connect to the vault from the home network. This would allow an Oracle client (such as SQL Developer, Toad, or DbVisualizer) on the home network to connect through the SSH-tunnel chain to the vault.

step 1

For starters, we'll need all the arrows pointing the right way, we can use a reverse SSH tunnel from the corporate workstation, as follows

$ ssh -R 54321:localhost:22 user@internet-server

This will allow the internet-server to connect into the corporate workstation. However, this command must be run from within the corporate network. We can persist a reverse tunnel using the approach discussed in a previous article, see reverse ssh tunnel.

Persisting this reverse tunnel effectively points all arrows from the home network to the vault. At this point you could simply SSH from one host to another and eventually get to the vault, but this does not help us connect our Oracle client to the vault. We'll need to chain everything together, and we'll need to do this all from the home network.

step 2

Once an SSH-tunnel has been persisted between the corporate workstation and the internet-server, we can create a new tunnel from the home network into the corporate workstation. From the home network we'll issue the following command:

$ ssh -f user@internet-server -L 12345:localhost:54321 -N

Now we have a local port on the home network that connects into the corporate workstation, effectively chaining the reverse tunnel (tcp/54321) to a new tunnel (tcp/12345).

As an example, we could use this new tunnel as a SOCKS5 proxy, e.g.,

$ ssh -f -N -D 8080 user@localhost -p 12345

From the home network, we could now set our web browser to use a proxy on localhost:8080 to securely access the corporate network. This is already incredibly useful (replacing a VPN with SSH), but we still don't have access from the home network into the vault.

step 3

Now that the home network has a connection into the corporate workstation, we'll need to send a command to the workstation to create a tunnel through the bastion to the vault. From the home network we'll issue the following commands:

$ VAULT_TUNNEL="ssh -f user@bastion -L 1520:vault:1520 -N"
$ ssh user@localhost -p 12345 "$VAULT_TUNNEL"

This gives us a tunnel from the corporate workstation into the vault.

step 4

Now we can link the vault tunnel to the home network tunnel. From the home network we'll issue the following command:

$ ssh -f user@localhost -p 12345 -L 1520:localhost:1520 -N

What this does is create a new tunnel through the local tcp/12345 (which is the doubly-chained tunnel into the corporate network) to tcp/1520 on the other end (which itself is a tunnel into the vault). This new tunnel links everything together such that the home network now has a local port tcp/1520 into the Oracle database.

Simply point Toad (or SQL-Developer or DbVisualizer or whatever) to localhost:1520 on your home network and you'll be accessing the database through a triple-chained forward SSH tunnel.

all together

Since everything (except the reverse SSH tunnel) originates from the home network, we can create one shell command to establish this connection in one go. It's also a good idea to use an ssh agent across multiple hosts, but you could also just forward your auth credentials to avoid having to enter a password multiple times. In some cases password prompts may be unavoidable: a decently secure bastion may not allow agent forwarding (or authorized_keys in general) and may even require secure keyfob access -- in that case you'll simply have to enter the password when creating the chain of tunnels.
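
As a sketch, with the hosts and ports from the steps above (and assuming the reverse tunnel from step 1 is already persisted on the corporate workstation), the whole chain from the home network is,

#!/bin/bash
# step 2: chain into the corporate workstation via the internet-server
ssh -f user@internet-server -L 12345:localhost:54321 -N
# step 3: tell the workstation to tunnel through the bastion to the vault
ssh user@localhost -p 12345 "ssh -f user@bastion -L 1520:vault:1520 -N"
# step 4: link the vault tunnel back to a local port at home
ssh -f user@localhost -p 12345 -L 1520:localhost:1520 -N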

Posted in bash, shell tips, ssh | Comments Off on chaining ssh tunnels