OOP or Procedural?

I would like to know when it is best to use object-oriented programming, and when it is best to use procedural programming.

tl;dr: neither, go with functional programming

By procedural programming, I mean the kind of code you'd find in C: imperative control flow, functions, data structures, and algorithms. For example,

#include <stdio.h>

float f_to_c(float f) {
    return (f - 32) * 5 / 9;
}

int main() {
    float fahrenheit;
    printf("Please enter the temperature in Fahrenheit: ");
    scanf("%f", &fahrenheit);
    printf("Temperature in Celsius = %.2f\n", f_to_c(fahrenheit));
    return 0;
}

And by object-oriented programming, I mean the kind of code with abstraction, inheritance, polymorphism, and encapsulation. For example,

import java.util.*;

interface TemperatureConverter {
    public float convert();
}

class Temperature {
    float degrees;
    Temperature(float t) {
        degrees = t;
    }
}

class Fahrenheit extends Temperature implements TemperatureConverter {

    Fahrenheit(float t) {
        super(t);
    }

    public float convert() {
        return ((degrees - 32)*5)/9;
    }

}

class FahrenheitToCelsius {

    public static void main(String[] args) {
        Fahrenheit fahrenheit;
        Scanner in = new Scanner(System.in);
        System.out.print("Enter temperature in Fahrenheit: ");
        fahrenheit = new Fahrenheit( in.nextFloat() );

        System.out.println("temperature in Celsius = " 
            + fahrenheit.convert());
    }

}

I admittedly forced some inheritance and polymorphism into the above code, but it's arguably just as easy (if not easier) to read than the C example (despite being considerably longer).

In both cases we hid the implementation details (the specific formula that converts Fahrenheit to Celsius) from main(). However, the OOP example also hides (encapsulates) the data structure. In the Java example we encapsulate the float within the Temperature base class, which the Fahrenheit class inherits. And since the Fahrenheit class implements the TemperatureConverter interface, we're guaranteed to have a convert() method. There is still some implicit conversion (a float to a String within the println), but the idea is that main() doesn't care about the underlying data structure.

As Robert Martin (Uncle Bob) put it, "Objects expose behavior and hide data." The Fahrenheit class exposed a convert() behavior and hid the underlying data structure. This, according to Uncle Bob, makes it easy to add new objects without changing existing behaviors. For example,

class Celsius extends Temperature implements TemperatureConverter {

    Celsius(float t) {
        super(t);
    }

    public float convert() {
        return 9*degrees/5 + 32;
    }

}

This code has no impact on the existing Fahrenheit class, and we can safely call convert() on both Fahrenheit and Celsius objects. Additionally, if we use generics on the Temperature class, then we could allow for different data structures (such as Double or BigDecimal) on something like a Kelvin class. In OOP, adding new classes is generally easy.

That said, what if we wanted to add new behavior? Maybe we want to add an isRoomTemperature() method. If so, we could add a new interface and then implement it in Celsius and Fahrenheit, but what if we had also implemented that new Kelvin class? Or several other Temperature classes? And shouldn't the convert() method return a Temperature object? This could get messy and lead us into DRY problems. In fact, this is an area where OOP is not ideal. Even Uncle Bob admits that if we're adding new behaviors then "we prefer data types and procedures."

This seemingly obvious and innocuous statement in Clean Code is actually very profound, especially considering that OOP and classic procedural programming do not mix well in a single codebase. If Uncle Bob is correct, the choice hinges on whether you will mostly be adding and managing lots of data types, or lots of behaviors. If the behavior will remain relatively unchanged, then OOP is beneficial, but if we plan to add or change behavior, then procedural programming is preferred. I honestly don't know what kind of software projects aren't primarily adding new behaviors (new features).

For reference, adding a room temperature check is easy in the C code,

#include <stdio.h>
#include <stdbool.h>

bool is_c_room_temperature(float c) {
    return c >= 20 && c <= 25;
}

float f_to_c(float f) {
    return (f - 32) * 5 / 9;
}

bool is_f_room_temperature(float f) {
    return is_c_room_temperature(f_to_c(f));
}

int main() {
    float fahrenheit;
    printf("Please enter the temperature in Fahrenheit: ");
    scanf("%f", &fahrenheit);
    printf("Temperature in Celsius = %.2f\n", f_to_c(fahrenheit));
    if (is_f_room_temperature(fahrenheit)) {
        printf("%.2f is room temperature\n", fahrenheit);
    }
    return 0;
}

Classic procedural code does not concern itself with adding behaviors to objects. Instead, it treats data types as data types and isolates the "procedural" behaviors into functions that operate on those data types. If we stick to pure functions (no side effects, and the same input always produces the same output), then we'll have highly testable code that can run in highly concurrent environments.

For example, adding a Kelvin conversion would look like this,

float c_to_k(float c) {
    return c + 273.15;
}

Likewise, adding a Fahrenheit to Kelvin conversion would simply chain together two pure functions,

float f_to_k(float f) {
    return c_to_k(f_to_c(f));
}

Procedural code focuses entirely on behavior. Adding this functionality in a pure OOP style would result in a laundry list of classes, interfaces, and methods. It can get out of hand quickly, and we'd soon be researching design patterns to try to regain some sense of code quality.

In practice, most developers tend to treat OOP and procedural programming with a sort of religious devotion, zealously adhering to their preferred programming style and feeling that the alternative is sacrilege. I think Uncle Bob was onto something when he said that "good software developers understand these issues without prejudice and choose the approach that is best for the job at hand." That's also from Clean Code, a book that should be read at least as often as it's referenced (it's a bit like George Orwell's 1984, most people reference it without ever having read it).

Uncle Bob is certainly more diplomatic than Joe Armstrong, the creator of Erlang, who famously said,

"The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."

To date, I've never heard a reasonable counter-argument to this objection to OOP, namely, that objects bind data structures and functions together (which inevitably leads to an explosion of side-effects). Even as you try to decouple the banana from the gorilla, you end up creating even more classes, more side effects, and most likely an even worse problem. I'm not sure I'd go so far as to say OO Sucks, but I am hard pressed to defend OOP in light of decades of hard learned lessons.

Obviously, good code is preferable to bad code in any language. There is plenty of bad procedural code out in the world. But honestly, in OOP you often find good programmers writing bad code. Let's go back to some of the earliest lessons in software engineering, specifically Fred Brooks's essay, No Silver Bullet, and ask ourselves how much accidental complexity has been created by OOP. How much code in an average OOP project is tackling the essential complexity of the problem versus accidental complexity?

In fairness, OOP was popularized by Java, which solved many problems from the early days of C and C++ (such as garbage collection and platform independence). In the decades since, Java has added capabilities found in modern languages (such as lambda expressions, collections, the Stream API, higher-order functions, etc). Most of these new capabilities come from the world of functional programming, and exactly zero of them come from OOP.

Whether we like it or not, the future may not be kind to OOP. Multi-core architectures and distributed computing are pushing software into high-concurrency asynchronous environments. Even worse, the push to cloud computing and microservices adds latency to an already highly concurrent asynchronous world. This is an ideal environment for separating data structures from functions (pure functions). It is a great environment for Haskell and Erlang (or coding pure functions in Scala, Python, or Go), but regardless of the language, you couldn't ask for a worse environment for OOP.
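
To make this concrete, here is the entire temperature example as pure functions in Python (a minimal sketch mirroring the C code above; only the I/O at the bottom is impure):

def f_to_c(f):
    # pure: no side effects, the same input always yields the same output
    return (f - 32) * 5 / 9

def c_to_k(c):
    return c + 273.15

def f_to_k(f):
    # new behavior is just composition of existing pure functions
    return c_to_k(f_to_c(f))

def is_room_temperature_c(c):
    return 20 <= c <= 25

if __name__ == '__main__':
    fahrenheit = float(input("Please enter the temperature in Fahrenheit: "))
    celsius = f_to_c(fahrenheit)
    print("Temperature in Celsius = %.2f" % celsius)
    if is_room_temperature_c(celsius):
        print("%.2f is room temperature" % fahrenheit)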

Posted in c, java, software arch.

Trie or Set

Given a grid or input stream of characters, I would like to discover all words according to a given dictionary. This could be a dictionary of all English words or phrases (say, for an autocomplete service), or for any language. This is especially useful for languages where words are not clearly separated (e.g., Japanese, Chinese, Thai).

Typically, this is done with a Trie or a DAWG (Directed Acyclic Word Graph). A Trie can be implemented in Python using a nested dict, i.e.,

def _make_trie(wordict):
    trie = {}
    for word in wordict:
        current_trie = trie
        for letter in word:
            current_trie = current_trie.setdefault(letter, {})
        current_trie['$'] = '$'  # mark end-of-word
    return trie

def _in_trie(trie, word):
    ''' True IFF prefix or word in trie
    '''
    current_trie = trie
    for letter in word:
        if letter in current_trie:
            current_trie = current_trie[letter]
        else:
            return False
    return True

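For example, using the two helpers above (results shown in comments):

trie = _make_trie(['apple', 'are'])
print(_in_trie(trie, 'app'))     # True, a valid prefix
print(_in_trie(trie, 'apple'))   # True
print(_in_trie(trie, 'apx'))     # False, this path can be pruned
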
Using this approach, we can scan through a large stream of characters for potential words. Imagine a classic word-matching game where you are looking for words within a grid of characters. Programmatically, you would scan through the grid testing combinations of characters. The advantage of a Trie (or DAWG) is that it allows for efficient pruning: if a character combination is not in the Trie, then you can prune that path and stop searching it.

An alternative approach is to create a Set of word prefixes, i.e.,

almost_words = set([])
for word in wordict:
    for i in range(len(word)-1):
        almost_words.add( word[0:i+1] )

If the dictionary contains ['apple', 'are'] then the Set almost_words would contain the following,

{'a', 'ap', 'app', 'appl', 'ar'}

In other words, rather than test if a character string exists in the Trie, one can simply check the Set almost_words; if there is no match, then that particular path can be pruned. Here is a simple LTR (left-to-right) character scanner that uses this approach, yielding any dictionary words it finds along the way:

def _setscan_ltr(grid, wordict):
    ''' generator yielding words found in the grid,
        scanning each line left-to-right
    '''
    almost_words = set([])
    maxlen = 0
    for word in wordict:
        if len(word) > maxlen:
            maxlen = len(word)
        for i in range(len(word)-1):
            almost_words.add( word[0:i+1] )
    for line in grid:
        for i in range(len(line)):
            candidate_word = ''
            for c in range(min(len(line) - i, maxlen)):
                candidate_word += line[i+c]
                if candidate_word in wordict:
                    yield candidate_word
                if candidate_word not in almost_words:
                    break

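For example, given a small hypothetical grid:

grid = ['xappleyz', 'qarex']
print(list(_setscan_ltr(grid, ['apple', 'are'])))
# ['apple', 'are']
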
I created a simple test case to determine whether a Set was truly faster, and whether it was as memory efficient (it was, judging by the maxresident figures below). There was a noticeable performance increase using a Set over a Trie (for both large and small data sets). Interestingly, the difference was even more pronounced with Japanese characters, suggesting that language parsers can use a simple Set (or hash map) in place of a Trie or a DAWG.

$ /usr/bin/time ./test_j_set.py
50220
177.84user 0.42system 2:58.54elapsed 99%CPU (0avgtext+0avgdata 507412maxresident)k
0inputs+0outputs (0major+145801minor)pagefaults 0swaps

$ /usr/bin/time ./test_j_trie.py
50220
250.44user 0.56system 4:11.86elapsed 99%CPU (0avgtext+0avgdata 680960maxresident)k
0inputs+0outputs (0major+184571minor)pagefaults 0swaps

Full results and code are available on my github.

Posted in data arch., python

iter_words

I would like to iterate over a stream of words, say, from STDIN or a file (or any random input stream). Typically, this is done like this,

def iter_words(f):
    for line in f:
        for word in line.split():
            yield word

And then one can simply,

for word in iter_words(sys.stdin):
    # do something

For a more concrete example, let's say we need to keep a count of every unique word in an input stream, something like this,

import sys
from collections import Counter

c = Counter()
for word in iter_words(sys.stdin):
    c.update([word])

The only problem with this approach is that it reads data in line by line, which in most cases is exactly what we want. However, some input streams have no line breaks at all; in that case the generator above will buffer the entire stream as a single giant "line", and for extremely large data streams we will simply run out of memory.

Instead, we can use the read() method to read in one-byte at a time, and manually construct the words as we go, like this,

def iter_words(sfile):
    chlist = []
    for ch in iter(lambda: sfile.read(1), ''):
        if ch.isspace():
            if len(chlist) > 0:
                yield ''.join(chlist)
            chlist = []
        else:
            chlist.append(ch)
    if chlist:
        # flush the final word if the stream doesn't end with whitespace
        yield ''.join(chlist)

This approach is memory efficient, but extremely slow. If you absolutely need speed while staying memory efficient, you'll have to do a buffered read, which is kind of an ugly hybrid of these two approaches: read a chunk at a time, and carry any partial word at the end of a chunk over to the next one.

def iter_words(sfile, buffer=1024):
    leftover = ''
    for chunk in iter(lambda: sfile.read(buffer), ''):
        words = (leftover + chunk).split()
        if words and not chunk[-1].isspace():
            # the chunk ends mid-word; carry the partial word into the next pass
            leftover = words.pop()
        else:
            leftover = ''
        for word in words:
            yield word
    if leftover:
        yield leftover
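
A quick sanity check of the buffered version, using io.StringIO as a stand-in for a real stream, with a deliberately tiny buffer to force words to split across chunks:

import io

stream = io.StringIO('the quick  brown\nfox jumps')
print(list(iter_words(stream, buffer=4)))
# ['the', 'quick', 'brown', 'fox', 'jumps']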
Posted in python

Punycode

I would like a webapp that supports UTF-8 URLs. For example, https://去.cc/叼, where both the path and the server name contain non-ASCII characters.

The path /叼 can be handled easily with %-encodings, e.g.,

>>> import urllib
>>> 
>>> urllib.parse.quote('/叼')
'/%E5%8F%BC'

Note: this is just the %-escaped form of the raw UTF-8 bytes of the unicode string:

>>> bytes('/叼', 'utf8')
b'/\xe5\x8f\xbc'

However, the domain name "去.cc" cannot be usefully %-encoded (that is, "%" is not a valid character in a hostname). The standard encoding for international domain names (IDN) is punycode, such that "去.cc" will look like "xn--1nr.cc".

The "xn--" prefix is the ASCII Compatible Encoding that essentially identifies this hostname as a punycode-encoded name. Most modern web-browsers and http libraries can decode this kind of name, although just in case, you can do something like this:

>>> 
>>> '去'.encode('punycode')
b'1nr'

In practice, we can use the built-in "idna" encoding and decoding in python, i.e., IRI to URI:

>>> p = urllib.parse.urlparse('https://去.cc/叼')
>>> p.netloc.encode('idna')
b'xn--1nr.cc'
>>> urllib.parse.quote(p.path)
'/%E5%8F%BC'

And going the other direction, i.e., URI to IRI:

>>> a = urllib.parse.urlparse('https://xn--1nr.cc/%E5%8F%BC')
>>> a.netloc.encode('utf8').decode('idna')
'去.cc'
>>> urllib.parse.unquote(a.path)
'/叼'
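
Putting both directions together, a pair of small helpers (hypothetical names; a minimal sketch that assumes no port or user info in the netloc):

import urllib.parse

def iri_to_uri(iri):
    # IDNA-encode the hostname, %-encode the path
    p = urllib.parse.urlparse(iri)
    netloc = p.netloc.encode('idna').decode('ascii')
    path = urllib.parse.quote(p.path)
    return urllib.parse.urlunparse((p.scheme, netloc, path, p.params, p.query, p.fragment))

def uri_to_iri(uri):
    p = urllib.parse.urlparse(uri)
    netloc = p.netloc.encode('utf8').decode('idna')
    path = urllib.parse.unquote(p.path)
    return urllib.parse.urlunparse((p.scheme, netloc, path, p.params, p.query, p.fragment))

print(iri_to_uri('https://去.cc/叼'))   # https://xn--1nr.cc/%E5%8F%BC
print(uri_to_iri('https://xn--1nr.cc/%E5%8F%BC'))   # https://去.cc/叼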
Posted in python, software arch.

Using getattr in Python

I would like to execute a named function on a python object by variable name. For example, let's say I'm reading in input that looks something like this:

enqueue 1
enqueue 12
enqueue 5
enqueue 9
sort
reverse
dequeue
print

Afterwards, we should see:

[9, 5, 1]

Let's say we need to implement a data structure that consumes this input. Fortunately, all of this behavior already exists within the built-in list datatype. What we can do is extend the built-in list to map the appropriate methods, like so:

class qlist(list):
    def enqueue(self, v):
        self.append(v)

    def dequeue(self):
        # FIFO: dequeue from the front of the list
        return self.pop(0)

    def print(self):
        print(self)

The sort and reverse methods are already built into list, so we don't need to map them. Now we simply need a driver program that reads commands and applies them to our new qlist class. Rather than map out the different commands in if/else blocks, or use eval(), we can simply use getattr, for example:

import sys

if __name__ == '__main__':
    thelist = qlist()
    for line in sys.stdin:
        cmd = line.split()
        if cmd:
            params = (int(x) for x in cmd[1:])
            getattr(thelist, cmd[0])(*params)
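
One caveat: an unrecognized command raises AttributeError. getattr accepts an optional third argument, a default, so the driver's last line can fall back gracefully (the unknown handler here is hypothetical):

def unknown(*params):
    # fallback for commands that qlist doesn't implement
    print("unknown command")

getattr(thelist, cmd[0], unknown)(*params)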
Posted in shell tips

Graph Search

I would like to discover paths between two nodes on a graph. Let's say we have a graph that looks something like this:

graph = {1: set([2, 3]),
         2: set([1, 4, 5, 7]),
         3: set([1, 6]),
         ...
         N: set([...])}

The graph object contains a collection of nodes and their corresponding connections. If it's a bi-directional graph, those connections would have to appear in the corresponding sets (e.g., 1: set([2]) and 2: set([1])).

Traversing this kind of data structure can be done through recursion, usually something like this:

def find_paths(from_node, to_node, graph, path=None):
    ''' DFS search of graph, return all paths between
        from_node and to_node
    '''
    if path is None:
        path = [from_node]
    if to_node == from_node:
        return [path]
    paths = []
    for next_node in graph[from_node] - set(path):
        paths += find_paths(next_node, to_node, graph, path + [next_node])
    return paths
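
For example, with a small hypothetical graph of four nodes (output order may vary, since the neighbors are sets):

graph = {1: set([2, 3]),
         2: set([1, 4]),
         3: set([1, 4]),
         4: set([2, 3])}

print(find_paths(1, 4, graph))
# [[1, 2, 4], [1, 3, 4]]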

Unfortunately, for large graphs this can be pretty inefficient, requiring a full depth-first search (DFS) that enumerates every simple path. It does have the advantage of being exhaustive, finding all unique paths between the two nodes.

That said, let's say we want to find the shortest possible path between two nodes. In that case, you want a breadth-first search (BFS). Whenever you hear the words "shortest path", think BFS. You'll want to avoid recursion (which gives you a DFS) and instead rely on a queue, which in Python can be implemented with a simple list.

def find_shortest_path(from_node, to_node, graph):
    ''' BFS search of graph, return shortest path between
        from_node and to_node
    '''
    queue = [(from_node, [from_node])]
    while queue:
        (qnode, path) = queue.pop(0) # dequeue
        for next_node in graph[qnode] - set(path):
            if next_node == to_node:
                return path + [next_node]
            else:
                queue.append((next_node, path + [next_node]))

Because a BFS is guaranteed to find the shortest path, we can return the moment we find a path between to_node and from_node. Easy!
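
One note on efficiency: list.pop(0) is O(n), since the whole list shifts left. For larger graphs, collections.deque provides O(1) pops from the front; a drop-in variant:

from collections import deque

def find_shortest_path(from_node, to_node, graph):
    queue = deque([(from_node, [from_node])])
    while queue:
        (qnode, path) = queue.popleft() # O(1), unlike list.pop(0)
        for next_node in graph[qnode] - set(path):
            if next_node == to_node:
                return path + [next_node]
            else:
                queue.append((next_node, path + [next_node]))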

In some cases, we may have an extremely large graph. Let's say you're searching the Internet for a path between two unrelated web pages, and the graph is constructed dynamically based on scraping the links from each explored page. Obviously, a DFS is out of the question for something like that, as it would spiral into an infinite chain of recursion (and probably on the first link).

As a reasonable constraint, let's say we want to explore all the links only up to a specific depth. Since each queue entry already carries its path, the length of the path tells us how deep we are; we can add a depth_limit and compare against it, as follows:

def find_shortest_path(from_node, to_node, graph, depth_limit=3):
    queue = [(from_node, [from_node])]
    while queue:
        (qnode, path) = queue.pop(0) # dequeue
        for next_node in graph[qnode] - set(path):
            if next_node == to_node:
                return path + [next_node]
            elif len(path) < depth_limit:
                # stop expanding once the path reaches depth_limit hops
                queue.append((next_node, path + [next_node]))
Posted in python, software arch.

python unittest

I would like to set up unit tests for a python application. There are many ways to do this, including the built-in doctest and unittest modules, as well as 3rd-party frameworks such as pytest and nose.

I found the plain-old unittest framework to be the easiest to work with, although I often run into questions about how best to organize tests for various-sized projects. Regardless of the size of the project, I want to be able to easily run all of the tests, as well as run specific tests for a module.

The standard naming convention is "test_ModuleName.py", which would include all tests for the named module. This file can be located in the same directory (package) as the module, although I prefer to keep the tests in their own subdirectory (which can easily be excluded from production deployments).

In other words, I end up with the following:

package/
 - __init__.py
 - Module1.py
 - Module2.py
 - test/
    - all_tests.py
    - test_Module1.py
    - test_Module2.py

Each of the test_*.py files looks something like this:

#!/usr/bin/env python
# vim: set tabstop=4 shiftwidth=4 autoindent smartindent:
import os, sys, unittest

## parent directory
sys.path.insert(0, os.path.join( os.path.dirname(__file__), '..' ))
import ModuleName

class test_ModuleName(unittest.TestCase):

    def setUp(self):
        ''' setup testing artifacts (per-test) '''
        self.moduledb = ModuleName.DB()

    def tearDown(self):
        ''' clear testing artifacts (per-test) '''
        pass

    def test_whatever(self):
        self.assertEqual( len(self.moduledb.foo()), 16 )


if __name__ == '__main__':
    unittest.main()

With this approach, the tests can be run by all_tests.py, or I can run the individual test_ModuleName.py.

The all_tests.py script must also add the parent directory to the path, i.e.,

#!/usr/bin/env python
# vim: set tabstop=4 shiftwidth=4 autoindent smartindent:
import sys, os
import unittest

## set the path to include parent directory
sys.path.insert(0, os.path.join( os.path.dirname(__file__), '..' ))

## run all tests
loader = unittest.TestLoader()
testSuite = loader.discover(".")
text_runner = unittest.TextTestRunner().run(testSuite)
Posted in python

HTML + CSS + JavaScript Lessons

I would like a very simple introduction to web development, from the basics of HTML and CSS, to the proper use of JavaScript; and all without getting bogged down in complicated textbooks.

I've been working with HTML, CSS, and JavaScript (as well as dozens of programming languages in more environments than I can remember) for over 20 years. While there are some excellent resources online (I recommend w3schools), I believe web development is a very simple topic that is often unnecessarily complicated.

I created a simple set of 9 lessons for learning basic web development. This includes HTML, CSS, and some simple JavaScript (including callback functions to JSONP APIs), everything you need to make and maintain websites.

You can find the lessons here:
http://avant.net/lessons/

They are also available on GitHub:
https://github.com/timwarnock/lessons

Posted in css, html, javascript

bash histogram

I would like to generate a streamable histogram that runs in bash. Given an input stream of integers (from stdin or a file), I would like to transform each integer into that respective number of "#" characters, up to the width of the terminal window; in other words, 5 would become "#####", and so on.

You can get the maximum number of columns in your current terminal using the following command,

twarnock@laptop: :) tput cols
143

The first thing we'll want to do is create a string of "#" characters that is exactly as long as the max number of columns, i.e.,

COLS=$(tput cols);
MAX_HIST=`eval printf '\#%.0s' {1..$COLS}; echo;`

We can use the following syntax to print a substring of MAX_HIST to any given length (up to its maximum length).

twarnock@laptop: :) echo ${MAX_HIST:0:5}
#####
twarnock@laptop: :) echo ${MAX_HIST:0:2}
##
twarnock@laptop: :) echo ${MAX_HIST:0:15}
###############

We can then put this into a simple shell script, in this case printHIST.sh, as follows,

#! /bin/bash
COLS=$(tput cols);
MAX_HIST=`eval printf '\#%.0s' {1..$COLS}; echo;`

while read datain
do
  if [ -n "$datain" ]; then
    echo -n ${MAX_HIST:0:$datain}
    if [ $datain -gt $COLS ]; then
      printf "\r$datain\n"
    else
      printf "\n"
    fi
  fi
done < "${1:-/dev/stdin}"

This script will also print any number on top of any line that is larger than the maximum number of columns in the terminal window.
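
For example, assuming printHIST.sh is executable and on your PATH:

twarnock@laptop: :) seq 1 5 | printHIST.sh
#
##
###
####
#####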

As is, the script will transform an input file into a crude histogram, but I've also used it as a visual ping monitor as follows (note the use of unbuffer),

twarnock@cosmos:~ :) ping $remote_host | unbuffer -p awk -F'[ =]' '{ print int($10) }' | unbuffer -p printHIST.sh
######
#####
########
######
##
####
#################
###
#####
#######

Posted in bash, shell tips

xmouse

I would like to remotely control my Linux desktop via an ssh connection (connected through my phone).

Fortunately, we can use xdotool.

I created a simple command interpreter that maps keys to xdotool commands. I used standard video game controls (wasd) for large mouse movements (100px), with smaller movements available (ijkl, 10px). It can toggle between mouse and keyboard modes, which allows you to somewhat easily open a browser and type URLs.

I use this to control my HD television from my phone, and so far it works great.

#!/bin/bash
#
#
: ${DISPLAY:=":0"}
export DISPLAY

echo "xmouse! press q to quit, h for help"

function print_help() {
  echo "xmouse commands:
  h - print help

  Mouse Movements
  w - move 100 pixels up
  a - move 100 pixels left
  s - move 100 pixels down
  d - move 100 pixels right

  Mouse Buttons
  c - mouse click
  r - right mouse click
  u - mouse wheel Up
  p - mouse wheel Down

  Mouse Button dragging
  e - mouse down (start dragging)
  x - mouse up (end dragging)

  Mouse Movements small
  i - move 10 pixels up
  j - move 10 pixels left
  k - move 10 pixels down
  l - move 10 pixels right

  Keyboard (experimental)
  Press esc key to toggle between keyboard and mouse modes
  
"
}

KEY_IN="Off"
IFS=''
while read -rsn1 input; do
  #
  # toggle mouse and keyboard mode
  case "$input" in
  $'\e') if [ "$KEY_IN" = "On" ]; then
           KEY_IN="Off"
           echo "MOUSE mode"
         else
           KEY_IN="On"
           echo "KEYBOARD mode"
         fi
     continue
     ;;
  esac
  #
  # keyboard mode
  if [ "$KEY_IN" = "On" ]; then
  case "$input" in
  $'\x7f') xdotool key BackSpace ;;
  $' ')  xdotool key space ;;
  $'')   xdotool key Return ;;
  $':')  xdotool key colon ;;
  $';')  xdotool key semicolon ;;
  $',')  xdotool key comma ;;
  $'.')  xdotool key period ;;
  $'-')  xdotool key minus ;;
  $'+')  xdotool key plus ;;
  $'!')  xdotool key exclam ;;
  $'"')  xdotool key quotedbl ;;
  $'#')  xdotool key numbersign ;;
  $'$')  xdotool key dollar ;;
  $'%')  xdotool key percent ;;
  $'&')  xdotool key ampersand ;;
  $'\'') xdotool key apostrophe ;;
  $'(')  xdotool key parenleft ;;
  $')')  xdotool key parenright ;;
  $'*')  xdotool key asterisk ;;
  $'/')  xdotool key slash ;;
  $'<')  xdotool key less ;;
  $'=')  xdotool key equal ;;
  $'>')  xdotool key greater ;;
  $'?')  xdotool key question ;;
  $'@')  xdotool key at ;;
  $'[')  xdotool key bracketleft ;;
  $'\\') xdotool key backslash ;;
  $']')  xdotool key bracketright ;;
  $'^')  xdotool key asciicircum ;;
  $'_')  xdotool key underscore ;;
  $'`')  xdotool key grave ;;
  $'{')  xdotool key braceleft ;;
  $'|')  xdotool key bar ;;
  $'}')  xdotool key braceright ;;
  $'~')  xdotool key asciitilde ;;
  *)     xdotool key "$input" ;;
  esac
  #
  # mouse mode
  else
  case "$input" in
  q) break ;;
  h) print_help ;;
  a) xdotool mousemove_relative -- -100 0 ;;
  s) xdotool mousemove_relative 0 100 ;;
  d) xdotool mousemove_relative 100 0 ;;
  w) xdotool mousemove_relative -- 0 -100 ;;
  c) xdotool click 1 ;;
  r) xdotool click 3 ;;
  u) xdotool click 4 ;;
  p) xdotool click 5 ;;
  e) xdotool mousedown 1 ;;
  x) xdotool mouseup 1 ;;
  j) xdotool mousemove_relative -- -10 0 ;;
  k) xdotool mousemove_relative 0 10 ;;
  l) xdotool mousemove_relative 10 0 ;;
  i) xdotool mousemove_relative -- 0 -10 ;;
  *) echo "$input - not defined in mouse map" ;;
  esac
  fi
done
Posted in bash

VLC remote control

Recently I was using VLC to listen to music, as I often do, and I wanted to pause without getting out of bed.

Lazy? Yes!

I learned that VLC includes a slew of remote control interfaces, including a built-in web interface as well as a raw socket interface.

In VLC Advanced Preferences, go to "Interface", and then "Main interfaces" for a list of options. I selected "Remote control" which is now known as "oldrc", and I configured a simple file based socket "vlc.sock" in my home directory as an experiment.

You can use netcat to send commands, for example,

twarnock@laptop:~ :) nc -U ~/vlc.sock <<< "pause"

Best of all, VLC cleans up after itself and removes the socket file when it closes. The "remote control" interface is pretty intuitive and comes with a "help" command. I wrapped all of this in a shell function (in a .bashrc).

function vlcrc() {
 SOCK=~/vlc.sock
 CMD="pause"
 if [ $# -gt 0 ]; then
  CMD=$1
 fi
 if [ -S $SOCK ]; then
  nc -U $SOCK <<< "$CMD"
 else
  (>&2 echo "I can't find VLC socket $SOCK")
 fi
}
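
For example (the function defaults to "pause", and the rc interface's "help" lists every available command):

twarnock@laptop:~ :) vlcrc
twarnock@laptop:~ :) vlcrc help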

I like this approach because I can now send VLC commands in a scripted environment. I can build playlists, control the volume, adjust the playback speed, pretty much anything VLC lets me do. I could even use a crontab and make a scripted alarm clock!

And of course I can "pause" my music from my phone while lying in bed. Granted, there are apps for more user-friendly VLC smartphone remotes, but I like the granular control provided by a command line.

Posted in shell tips

datsize, simple command line row and column count

Lately I've been working with lots of data files with fixed rows and columns, and have been finding myself doing the following a lot:

Getting the row count of a file,

twarnock@laptop:/var/data/ctm :) wc -l lda_out/final.gamma
    3183 lda_out/final.gamma
twarnock@laptop:/var/data/ctm :) wc -l lda_out/final.beta
     200 lda_out/final.beta

And getting the column count of the same files,

twarnock@laptop:/var/data/ctm :) head -1 lda_out/final.gamma | awk '{ print NF }'
200
twarnock@laptop:/var/data/ctm :) head -1 lda_out/final.beta | awk '{ print NF }'
5568

I would do this for dozens of files and eventually decided to put this together in a simple shell function,

function datsize {
    if [ -e "$1" ]; then
        rows=$(wc -l < "$1")
        cols=$(head -1 "$1" | awk '{ print NF }')
        echo "$rows X $cols $1"
    else
        return 1
    fi
}

Simple, and so much nicer,

twarnock@laptop:/var/data/ctm :) datsize lda_out/final.gamma
    3183 X 200 lda_out/final.gamma
twarnock@laptop:/var/data/ctm :) datsize lda_out/final.beta
     200 X 5568 lda_out/final.beta
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-theta.dat
    3183 X 200 ctr_out/final-theta.dat
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-U.dat
    2011 X 200 ctr_out/final-U.dat
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-V.dat
    3183 X 200 ctr_out/final-V.dat
Posted in bash, shell tips

Getting the most out of your ssh config

I typically find myself with voluminous bashrc files filled with aliases and functions for connecting to specific hosts via ssh. I would like an easier way to manage the various ssh hosts, ports, and keys.

I typically maintain an ssh-agent across multiple hosts, as well as various tunnels, reverse tunnels, and chained tunnels, but I would like to simplify my normal ssh commands using an ssh config.

First, always remember to RTFM,

man ssh

This is an excellent starting point, the man page contains plenty of information on all the ins-and-outs of an ssh config.

To get started, simply create a plaintext file "config" in your .ssh/ directory.

Setting Defaults

$HOME/.ssh/config will be used by your ssh client and is able to set per-host defaults for username, port, identity-key, etc

For example,

# $HOME/.ssh/config
Host dev
    HostName dev.anattatechnologies.com
    Port 22000
    User twarnock
    ForwardAgent yes

On this particular host, I can now run

$ ssh dev

which is much easier than "ssh -A -p 22000 twarnock@dev.anattatechnologies.com".

You can also use wildcards, e.g.,

Host *amazonaws.com *ec2.nytimes.com *.dev.use1.nytimes.com
    User root

which I find very useful for cases where usernames are different than my normal username.

Tunnels

Additionally, you can add tunneling information in your .ssh/config, e.g.,

Host tunnel.anattatechnologies.com
    HostName anattatechnologies.com
    IdentityFile ~/.ssh/anattatechnologies.key
    LocalForward 8080 localhost:80
    User twarnock

Even if you choose to use shell functions to manage tunnels, the use of an ssh config can simplify things greatly.
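
For the chained tunnels mentioned earlier, newer versions of OpenSSH (7.3 and later) also support ProxyJump, which hops through an intermediate host; a sketch reusing the dev host defined above (the internal hostname is hypothetical):

Host internal
    HostName internal.anattatechnologies.com
    ProxyJump dev
    User twarnock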

Posted in shell tips, ssh

git, obliterate specific commits

I would like to obliterate a series of git commits between two points; we'll call these the START and END commits.

First, determine the SHA1 for each of the two commits; we'll be forcefully deleting everything in between and preserving END exactly as it is.

Detach Head

Detach head and move to END commit,

git checkout SHA1-for-END

Reset

Move HEAD to START, but leave the index and working tree as END

git reset --soft SHA1-for-START

Redo END commit

Redo the END commit re-using the commit message, but on top of START

git commit -C SHA1-for-END

Rebase

Re-apply everything from the END

git rebase --onto HEAD SHA1-for-END master

Force Push

git push -f
Posted in shell tips

vim: Visual mode

I have been using vim for years and am consistently surprised at the amazing things it can do. Vim has been around longer than I have been writing code, and its predecessor (Vi) is as old as I am.

Somehow through the years this editor has gained and continues to gain popularity. Originally, my interest in Vi was out of necessity, it was often the only editor available on older Unix systems. Yet somehow Vim nowadays rivals even the most advanced IDEs.

One of the more interesting aspects of Vim is the Visual mode. I had ignored this feature for years relying on the normal command mode and insert mode.

Visual Mode

Simply press v and you'll be in visual mode able to select text.

Use V to select an entire line of text, use the motion keys to move up or down to select lines of text as needed.

And most interestingly, use Ctrl-v for visual block mode. This is the most flexible mode of selection and allows you to select columns rather than entire lines.
For example, I have used visual block mode to select the same variable in 5 lines of code.

In all of these cases, you can use o and O while selecting to change the position of the cursor in the selection box. For example, if you are selecting several lines downwards and realize you wanted to grab the line above the selection as well, just hit o and it will take you to the top of the selection.

In practice this is far easier and more powerful than normal mouse highlighting, although vim also supports mouse highlighting exactly as you would intuitively expect (where mouse highlighting enables visual mode).

What to do with a visual selection

All sorts of things! You can press ~ to change the case of the selection, > to indent the selection (< to remove an indent), y to yank (copy) the selection, or d to delete the selection.

If you're in visual block mode and you've selected multiple lines as in the example above, then you can edit ALL of the lines simultaneously. Use I to start inserting at the cursor, and as soon as you leave insert mode the changes will appear on each of the lines that was in the visual block.

Similarly, you can use A to append text after the block on every line. You can use c to change each line of the visual block, or r to replace the selection. This is an incredibly fast and easy way to add or replace text on multiple lines.

Additionally, you can use p to put (paste) over a visual selection. Once you paste over a visual selection, that selection will now be your default register (see :reg), which is extremely handy when you need to quickly swap two selections of text.

You can even use the visual block to limit the range of an otherwise global find and replace, that is,

:'<,'>s/\%Vfind/replace/g

adding the \%V to the pattern limits the find and replace to the visual selection (the '<,'> range is inserted automatically when you press : while a selection is active).

More information is available in vim's help file,

:h visual-operators
Posted in shell tips, vim