Trie or Set

Given a grid or input stream of characters, I would like to discover all words according to a given dictionary. This could be a dictionary of all English words or phrases (say, for an autocomplete service), or for any language. This is especially useful for languages where words are not clearly separated (e.g., Japanese, Chinese, Thai).

Typically, this is done with a Trie or a DAWG (Directed Acyclic Word Graph). A Trie can be implemented in Python using a nested dict, i.e.,

def _make_trie(wordict):
    trie = {}
    for word in wordict:
        current_trie = trie
        for letter in word:
            current_trie = current_trie.setdefault(letter, {})
        current_trie['$'] = '$'
    return trie

def _in_trie(trie, word):
    ''' True iff word is a word or prefix of a word in trie '''
    current_trie = trie
    for letter in word:
        if letter in current_trie:
            current_trie = current_trie[letter]
        else:
            return False
    return True

Using this approach, we can scan through a large stream of characters for potential words. Imagine a classic matching game where you are looking for words within a grid of characters. Programmatically, you would scan through the grid, testing combinations of characters. The advantage of a Trie (or DAWG) is that it allows for efficient pruning: if a character combination is not in the Trie, you can stop extending that path entirely.

An alternative approach is to create a Set of word prefixes, i.e.,

almost_words = set([])
for word in wordict:
    for i in range(len(word)-1):
        almost_words.add( word[0:i+1] )

If the dictionary contains ['apple', 'are'] then the Set almost_words would contain the following,

{'a', 'ap', 'app', 'appl', 'ar'}
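In practice, the prefix set can be built and queried directly. A minimal sketch using the two-word dictionary above:

```python
# build the set of proper prefixes for a small dictionary
wordict = ['apple', 'are']
almost_words = set([])
for word in wordict:
    for i in range(len(word) - 1):
        almost_words.add(word[0:i + 1])

print(sorted(almost_words))   # ['a', 'ap', 'app', 'appl', 'ar']
print('ap' in almost_words)   # True: keep extending this path
print('ax' in almost_words)   # False: prune this path
```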

In other words, rather than test if a character string exists in the Trie, one can simply check the Set almost_words. If there is no match then that particular path can be pruned. Here is a simple RTL (right-to-left) character scanner that uses this approach:

def _setscan_rtl(grid, wordict):
    ''' generator yielding word candidates '''
    almost_words = set([])
    maxlen = 0
    for word in wordict:
        if len(word) > maxlen:
            maxlen = len(word)
        for i in range(len(word)-1):
            almost_words.add( word[0:i+1] )
    for line in grid:
        for i in range(len(line)):
            candidate_word = ''
            for c in range(min(len(line) - i, maxlen)):
                candidate_word += line[i+c]
                # yield the candidate, then prune this path if no
                # dictionary word can extend it
                yield candidate_word
                if candidate_word not in almost_words:
                    break

I created a simple test case to determine if a Set was truly faster, and whether or not it was as memory efficient. There was a noticeable increase in performance using Set over Trie (for both large and small data sets). Interestingly, the performance difference was even more pronounced when using Japanese characters, indicating that language parsers can use a simple Set (or hashmap) as opposed to a Trie or a DAWG.

$ /usr/bin/time ./
177.84user 0.42system 2:58.54elapsed 99%CPU (0avgtext+0avgdata 507412maxresident)k
0inputs+0outputs (0major+145801minor)pagefaults 0swaps

$ /usr/bin/time ./
250.44user 0.56system 4:11.86elapsed 99%CPU (0avgtext+0avgdata 680960maxresident)k
0inputs+0outputs (0major+184571minor)pagefaults 0swaps

Full results and code are available on my github.

Posted in data arch., python


I would like to iterate over a stream of words, say, from STDIN or a file (or any random input stream). Typically, this is done like this,

def iter_words(f):
    for line in f:
        for word in line.split():
            yield word

And then one can simply,

for word in iter_words(sys.stdin):
    # do something

For a more concrete example, let's say we need to keep a count of every unique word in an input stream, something like this,

from collections import Counter
c = Counter()

for word in iter_words(sys.stdin):
    c[word] += 1
The only problem with this approach is that it reads data in line by line. In most cases that is exactly what we want; however, some input streams don't have line breaks at all. For an extremely large stream with no line breaks, the above generator will simply run out of memory.

Instead, we can use the read() method to read in one byte at a time, and manually construct the words as we go, like this,

def iter_words(sfile):
    chlist = []
    for ch in iter(lambda:, ''):
        if ch.isspace():
            if len(chlist) > 0:
                yield ''.join(chlist)
            chlist = []
        else:
            chlist.append(ch)
    if chlist:
        yield ''.join(chlist)

This approach is memory efficient, but extremely slow. If you absolutely need to get the speed while still being memory efficient, you'll have to do a buffered read, which is kind of an ugly hybrid of these two approaches.

def iter_words(sfile, buffer=1024):
    lastchunk = ''
    for chunk in iter(lambda:, ''):
        words = (lastchunk + chunk).split()
        # if the chunk ends mid-word, hold the fragment for the next chunk
        if chunk[-1].isspace():
            lastchunk = ''
        else:
            lastchunk = words.pop() if words else ''
        for word in words:
            yield word
    if lastchunk:
        yield lastchunk
Posted in python


I would like a webapp that supports UTF-8 URLs. For example, https://去.cc/叼, where both the path and the server name contain non-ASCII characters.

The path /叼 can be handled easily with %-encodings, e.g.,

>>> import urllib.parse
>>> urllib.parse.quote('/叼')
'/%E5%8F%BC'

Note: this is similar to the raw byte representation of the unicode string:

>>> bytes('/叼', 'utf8')
b'/\xe5\x8f\xbc'

However, the domain name "去.cc" cannot be usefully %-encoded (that is, "%" is not a valid character in a hostname). The standard encoding for international domain names (IDN) is punycode, so "去.cc" will look like "xn--1nr.cc".

The "xn--" prefix is the ASCII Compatible Encoding that essentially identifies this hostname as a punycode-encoded name. Most modern web-browsers and http libraries can decode this kind of name, although just in case, you can do something like this:

>>> '去'.encode('punycode')
b'1nr'

In practice, we can use the built-in "idna" encoding and decoding in python, i.e., IRI to URI:

>>> p = urllib.parse.urlparse('https://去.cc/叼')
>>> p.netloc.encode('idna')
b''
>>> urllib.parse.quote(p.path)
'/%E5%8F%BC'

And going the other direction, i.e., URI to IRI:

>>> a = urllib.parse.urlparse('https://xn--1nr.cc/%E5%8F%BC')
>>> a.netloc.encode('utf8').decode('idna')
'去.cc'
>>> urllib.parse.unquote(a.path)
'/叼'
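Putting both directions together, a small helper can do the IRI-to-URI conversion in one step (a sketch; iri_to_uri is my own name, not a standard library function):

```python
import urllib.parse

def iri_to_uri(iri):
    ''' encode an IRI (unicode) into an ASCII-only URI '''
    p = urllib.parse.urlparse(iri)
    netloc = p.netloc.encode('idna').decode('ascii')   # punycode the hostname
    path = urllib.parse.quote(p.path)                  # %-encode the path
    return urllib.parse.urlunparse((p.scheme, netloc, path,
                                    p.params, p.query, p.fragment))

uri = iri_to_uri('https://去.cc/叼')
print(uri)  # an all-ASCII URL, e.g. https://xn--1nr.cc/%E5%8F%BC
```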
Posted in python, software arch.

Using getattr in Python

I would like to execute a named function on a python object by variable name. For example, let's say I'm reading in input that looks something like this:

enqueue 1
enqueue 12
enqueue 5
enqueue 9

Afterwards, we should see:


Let's say we need to implement a data structure that consumes this input. Fortunately, all of this behavior already exists within the built-in list datatype. What we can do is extend the built-in list to map the appropriate methods, like so:

class qlist(list):
    def enqueue(self, v):
        self.insert(0, v)

    def dequeue(self):
        return self.pop()

    def print(self):
        print(self)
The sort and reverse methods are already built-in to list, so we don't need to map them. Now, we simply need a driver program that reads and processes commands to our new qlist class. Rather than map out the different commands in if/else blocks, or use eval(), we can simply use getattr, for example:

if __name__ == '__main__':
    import sys
    thelist = qlist()
    for line in sys.stdin:
        cmd = line.split()
        if not cmd:
            continue
        params = (int(x) for x in cmd[1:])
        getattr(thelist, cmd[0])(*params)
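The same dispatch can be demonstrated without stdin by driving the class from a list of command strings (qlist restated here so the sketch is self-contained):

```python
class qlist(list):
    def enqueue(self, v):
        self.insert(0, v)
    def dequeue(self):
        return self.pop()

commands = ["enqueue 1", "enqueue 12", "enqueue 5", "enqueue 9", "dequeue"]
thelist = qlist()
for line in commands:
    cmd = line.split()
    params = (int(x) for x in cmd[1:])
    # look up the method named by the command and call it
    getattr(thelist, cmd[0])(*params)

print(thelist)  # [9, 5, 12]
```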
Posted in shell tips

Graph Search

I would like to discover paths between two nodes on a graph. Let's say we have a graph that looks something like this:

graph = {1: set([2, 3]),
         2: set([1, 4, 5, 7]),
         3: set([1, 6]),
         ...
         N: set([...])}

The graph object contains a collection of nodes and their corresponding connections. If it's a bi-directional graph, those connections would have to appear in the corresponding sets (e.g., 1: set([2]) and 2: set([1])).

Traversing this kind of data structure can be done through recursion, usually something like this:

def find_paths(from_node, to_node, graph, path=None):
    ''' DFS search of graph, return all paths between
        from_node and to_node '''
    if path is None:
        path = [from_node]
    if to_node == from_node:
        return [path]
    paths = []
    for next_node in graph[from_node] - set(path):
        paths += find_paths(next_node, to_node, graph, path + [next_node])
    return paths

Unfortunately, for large graphs, this can be pretty inefficient, requiring a full depth-first search (DFS), and storing the entire graph in memory. This does have the advantage of being exhaustive, finding all unique paths between two nodes.
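For a concrete run, here is the DFS on a small sample graph (the graph itself is invented for illustration):

```python
def find_paths(from_node, to_node, graph, path=None):
    ''' DFS, return all unique paths between from_node and to_node '''
    if path is None:
        path = [from_node]
    if to_node == from_node:
        return [path]
    paths = []
    for next_node in graph[from_node] - set(path):
        paths += find_paths(next_node, to_node, graph, path + [next_node])
    return paths

graph = {1: {2, 3}, 2: {1, 4}, 3: {1, 4}, 4: {2, 3}}
paths = find_paths(1, 4, graph)
print(sorted(paths))  # [[1, 2, 4], [1, 3, 4]]
```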

That said, let's say we want to find the shortest possible path between two nodes. In those cases, you want a breadth-first search (BFS). Whenever you hear the words "shortest path", think BFS. You'll want to avoid recursion (as those result in a DFS), and instead rely on a queue, which in Python can be implemented with a simple list.

def find_shortest_path(from_node, to_node, graph):
    ''' BFS search of graph, return shortest path between
        from_node and to_node '''
    queue = [(from_node, [from_node])]
    while queue:
        (qnode, path) = queue.pop(0) # dequeue
        for next_node in graph[qnode] - set(path):
            if next_node == to_node:
                return path + [next_node]
            queue.append((next_node, path + [next_node]))

Because a BFS is guaranteed to find the shortest path, we can return the moment we find a path between to_node and from_node. Easy!
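Here is the BFS on a small sample graph (nodes 4 through 7 are filled in for illustration), using collections.deque so the left-pop is O(1) rather than O(n):

```python
from collections import deque

def find_shortest_path(from_node, to_node, graph):
    ''' BFS, return the shortest path between from_node and to_node '''
    queue = deque([(from_node, [from_node])])
    while queue:
        qnode, path = queue.popleft()
        for next_node in graph[qnode] - set(path):
            if next_node == to_node:
                return path + [next_node]
            queue.append((next_node, path + [next_node]))

graph = {1: {2, 3}, 2: {1, 4, 5, 7}, 3: {1, 6},
         4: {2}, 5: {2, 6}, 6: {3, 5}, 7: {2}}
shortest = find_shortest_path(1, 6, graph)
print(shortest)  # [1, 3, 6]
```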

In some cases, we may have an extremely large graph. Let's say you're searching the Internet for a path between two unrelated web pages, and the graph is constructed dynamically based on scraping the links from each explored page. Obviously, a DFS is out of the question for something like that, as it would spiral into an infinite chain of recursion (and probably on the first link).

As a reasonable constraint, let's say we want to explore all the links up to a specific depth. This could be done easily. Simply add a depth_limit, as follows:

def find_shortest_path(from_node, to_node, graph, depth_limit=3):
    queue = [(from_node, [from_node])]
    while queue:
        (qnode, path) = queue.pop(0) # dequeue
        if len(path) > depth_limit:  # don't explore beyond the depth limit
            continue
        for next_node in graph[qnode] - set(path):
            if next_node == to_node:
                return path + [next_node]
            queue.append((next_node, path + [next_node]))
Posted in python, software arch.

python unittest

I would like to setup unit tests for a python application. There are many ways to do this, including doctest and unittest, as well as 3rd-party frameworks that leverage python's unittest, such as pytest and nose.

I found the plain-old unittest framework to be the easiest to work with, although I often run into questions about how best to organize tests for various sized projects. Regardless of the size of the projects, I want to be able to easily run all of the tests, as well as run specific tests for a module.

The standard naming convention is "test_ModuleName.py", which would include all tests for the named module. This file can be located in the same directory (package) as the module, although I prefer to keep the tests in their own subdirectory (which can easily be excluded from production deployments).

In other words, I end up with the following:

 - ModuleName.py
 - test/test_ModuleName.py

Each of the test_*.py files looks something like this:

#!/usr/bin/env python
# vim: set tabstop=4 shiftwidth=4 autoindent smartindent:
import os, sys, unittest

## parent directory
sys.path.insert(0, os.path.join( os.path.dirname(__file__), '..' ))
import ModuleName

class test_ModuleName(unittest.TestCase):

    def setUp(self):
        ''' setup testing artifacts (per-test) '''
        self.moduledb = ModuleName.DB()

    def tearDown(self):
        ''' clear testing artifacts (per-test) '''

    def test_whatever(self):
        self.assertEqual( len(self.moduledb), 16 )

if __name__ == '__main__':
    unittest.main()
With this approach, all of the tests can be run at once by a runner script, or I can run an individual test_*.py file directly.

The runner script must also add the parent directory to the path, i.e.,

#!/usr/bin/env python
# vim: set tabstop=4 shiftwidth=4 autoindent smartindent:
import sys, os
import unittest

## set the path to include parent directory
sys.path.insert(0, os.path.join( os.path.dirname(__file__), '..' ))

## run all tests
loader = unittest.TestLoader()
testSuite ='.')
text_runner = unittest.TextTestRunner().run(testSuite)
Posted in python

HTML + CSS + JavaScript Lessons

I would like a very simple introduction to web development, from the basics of HTML and CSS, to the proper use of JavaScript; and all without getting bogged down in complicated textbooks.

I've been working with HTML, CSS, and JavaScript (as well as dozens of programming languages in more environments than I can remember) for over 20 years. While there are some excellent resources online (I recommend w3schools), I believe web development is a very simple topic that is often unnecessarily complicated.

I created a simple set of 9 lessons for learning basic web development. This includes HTML, CSS, and some simple JavaScript (including callback functions to JSONP APIs), everything you need to make and maintain websites.

You can find the lessons here

It's also available on Github

Posted in css, html, javascript

bash histogram

I would like to generate a streamable histogram that runs in bash. Given an input stream of integers (from stdin or a file), I would like to transform each integer to that respective number of "#" up to the length of the terminal window; in other words, 5 would become "#####", and so on.

You can get the maximum number of columns in your current terminal using the following command,

twarnock@laptop: :) tput cols

The first thing we'll want to do is create a string of "####" that is exactly as long as the max number of columns. I.e.,

COLS=$(tput cols);
MAX_HIST=`eval printf '\#%.0s' {1..$COLS}; echo;`

We can use the following syntax to print a substring of MAX_HIST to any given length (up to its maximum length).

twarnock@laptop: :) echo ${MAX_HIST:0:5}
#####
twarnock@laptop: :) echo ${MAX_HIST:0:2}
##
twarnock@laptop: :) echo ${MAX_HIST:0:15}
###############

We can then put this into a simple shell script, in this case, as follows,

#! /bin/bash
COLS=$(tput cols);
MAX_HIST=`eval printf '\#%.0s' {1..$COLS}; echo;`

while read datain; do
  if [ -n "$datain" ]; then
    echo -n ${MAX_HIST:0:$datain}
    if [ $datain -gt $COLS ]; then
      printf "\r$datain\n"
    else
      printf "\n"
    fi
  fi
done < "${1:-/dev/stdin}"

The script will also print the numeric value on top of any line that is larger than the maximum number of columns in the terminal window.

As is, the script will transform an input file into a crude histogram, but I've also used it as a visual ping monitor as follows (note the use of unbuffer),

twarnock@cosmos:~ :) ping $remote_host | unbuffer -p awk -F'[ =]' '{ print int($10) }' | unbuffer -p

Posted in bash, shell tips


I would like to remotely control my Linux desktop via an ssh connection (connected through my phone).

Fortunately, we can use xdotool.

I created a simple command-interpreter that maps keys to xdotool. I used standard video game controls (wasd) for large mouse movements (100px), with smaller movements available (ijkl 10px). It can toggle between mouse and keyboard, which allows you to somewhat easily open a browser and type URLs.

I use this to control my HD television from my phone, and so far it works great.

: ${DISPLAY:=":0"}
export DISPLAY
KEY_IN="Off"

echo "xmouse! press q to quit, h for help"

function print_help() {
  echo "xmouse commands:
  h - print help

  Mouse Movements
  w - move 100 pixels up
  a - move 100 pixels left
  s - move 100 pixels down
  d - move 100 pixels right

  Mouse Buttons
  c - mouse click
  r - right mouse click
  u - mouse wheel Up
  p - mouse wheel Down

  Mouse Button dragging
  e - mouse down (start dragging)
  x - mouse up (end dragging)

  Mouse Movements small
  i - move 10 pixels up
  j - move 10 pixels left
  k - move 10 pixels down
  l - move 10 pixels right

  Keyboard (experimental)
  Press esc key to toggle between keyboard and mouse modes
  "
}

while read -rsn1 input; do
  # toggle mouse and keyboard mode
  case "$input" in
  $'\e') if [ "$KEY_IN" = "On" ]; then
           KEY_IN="Off"
           echo "MOUSE mode"
         else
           KEY_IN="On"
           echo "KEYBOARD mode"
         fi
         continue ;;
  esac
  # keyboard mode
  if [ "$KEY_IN" = "On" ]; then
  case "$input" in
  $'\x7f') xdotool key BackSpace ;;
  $' ')  xdotool key space ;;
  $'')   xdotool key Return ;;
  $':')  xdotool key colon ;;
  $';')  xdotool key semicolon ;;
  $',')  xdotool key comma ;;
  $'.')  xdotool key period ;;
  $'-')  xdotool key minus ;;
  $'+')  xdotool key plus ;;
  $'!')  xdotool key exclam ;;
  $'"')  xdotool key quotedbl ;;
  $'#')  xdotool key numbersign ;;
  $'$')  xdotool key dollar ;;
  $'%')  xdotool key percent ;;
  $'&')  xdotool key ampersand ;;
  $'\'') xdotool key apostrophe ;;
  $'(')  xdotool key parenleft ;;
  $')')  xdotool key parenright ;;
  $'*')  xdotool key asterisk ;;
  $'/')  xdotool key slash ;;
  $'<')  xdotool key less ;;
  $'=')  xdotool key equal ;;
  $'>')  xdotool key greater ;;
  $'?')  xdotool key question ;;
  $'@')  xdotool key at ;;
  $'[')  xdotool key bracketleft ;;
  $'\\') xdotool key backslash ;;
  $']')  xdotool key bracketright ;;
  $'^')  xdotool key asciicircum ;;
  $'_')  xdotool key underscore ;;
  $'`')  xdotool key grave ;;
  $'{')  xdotool key braceleft ;;
  $'|')  xdotool key bar ;;
  $'}')  xdotool key braceright ;;
  $'~')  xdotool key asciitilde ;;
  *)     xdotool key "$input" ;;
  esac
  continue
  fi
  # mouse mode
  case "$input" in
  q) break ;;
  h) print_help ;;
  a) xdotool mousemove_relative -- -100 0 ;;
  s) xdotool mousemove_relative 0 100 ;;
  d) xdotool mousemove_relative 100 0 ;;
  w) xdotool mousemove_relative -- 0 -100 ;;
  c) xdotool click 1 ;;
  r) xdotool click 3 ;;
  u) xdotool click 4 ;;
  p) xdotool click 5 ;;
  e) xdotool mousedown 1 ;;
  x) xdotool mouseup 1 ;;
  j) xdotool mousemove_relative -- -10 0 ;;
  k) xdotool mousemove_relative 0 10 ;;
  l) xdotool mousemove_relative 10 0 ;;
  i) xdotool mousemove_relative -- 0 -10 ;;
  *) echo "$input - not defined in mouse map" ;;
  esac
done
Posted in bash

VLC remote control

Recently I was using VLC to listen to music, as I often do, and I wanted to pause without getting out of bed.

Lazy? Yes!

I learned that VLC includes a slew of remote control interfaces, including a built-in web interface as well as a raw socket interface.

In VLC Advanced Preferences, go to "Interface", and then "Main interfaces" for a list of options. I selected "Remote control" which is now known as "oldrc", and I configured a simple file based socket "vlc.sock" in my home directory as an experiment.

You can use netcat to send commands, for example,

twarnock@laptop:~ :) nc -U ~/vlc.sock <<< "pause"

Best of all VLC cleans up after itself and removes the socket file when it closes. The "remote control" interface is pretty intuitive and comes with a "help" command. I wrapped all of this in a shell function (in a .bashrc).

function vlcrc() {
 SOCK=~/vlc.sock
 CMD="$*"
 if [ $# -gt 0 ]; then
  if [ -S $SOCK ]; then
   nc -U $SOCK <<< "$CMD"
  else
   (>&2 echo "I can't find VLC socket $SOCK")
  fi
 fi
}
I like this approach because I can now use "vlcrc <command>" in a scripted environment. I can build playlists, control the volume, adjust the playback speed, pretty much anything VLC lets me do. I could even use a crontab and make a scripted alarm clock!

And of course I can "pause" my music from my phone while laying in bed. Granted, there's apps for more user friendly VLC smartphone remotes, but I like the granular control provided by a command line.

Posted in shell tips

datsize, simple command line row and column count

Lately I've been working with lots of data files with fixed rows and columns, and have been finding myself doing the following a lot:

Getting the row count of a file,

twarnock@laptop:/var/data/ctm :) wc -l lda_out/final.gamma
    3183 lda_out/final.gamma
twarnock@laptop:/var/data/ctm :) wc -l lda_out/final.beta
     200 lda_out/final.beta

And getting the column count of the same files,

twarnock@laptop:/var/data/ctm :) head -1 lda_out/final.gamma | awk '{ print NF }'
     200
twarnock@laptop:/var/data/ctm :) head -1 lda_out/final.beta | awk '{ print NF }'
    5568

I would do this for dozens of files and eventually decided to put this together in a simple shell function,

function datsize {
    if [ -e "$1" ]; then
        rows=$(wc -l < "$1")
        cols=$(head -1 "$1" | awk '{ print NF }')
        echo "$rows X $cols $1"
    else
        return 1
    fi
}

Simple, and so much nicer,

twarnock@laptop:/var/data/ctm :) datsize lda_out/final.gamma
    3183 X 200 lda_out/final.gamma
twarnock@laptop:/var/data/ctm :) datsize lda_out/final.beta
     200 X 5568 lda_out/final.beta
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-theta.dat
    3183 X 200 ctr_out/final-theta.dat
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-U.dat
    2011 X 200 ctr_out/final-U.dat
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-V.dat
    3183 X 200 ctr_out/final-V.dat
Posted in bash, shell tips

Getting the most out of your ssh config

I typically find myself with voluminous bashrc files filled with aliases and functions for connecting to specific hosts via ssh. I would like an easier way to manage the various ssh hosts, ports, and keys.

I typically maintain an ssh-agent across multiple hosts, as well as various tunnels, reverse tunnels, and chained tunnels, but I would like to simplify my normal ssh commands using an ssh config.

First, always remember to RTFM,

man ssh

This is an excellent starting point, the man page contains plenty of information on all the ins-and-outs of an ssh config.

To get started, simply create a plaintext file "config" in your .ssh/ directory.

Setting Defaults

$HOME/.ssh/config will be used by your ssh client and is able to set per-host defaults for username, port, identity key, etc.

For example,

# $HOME/.ssh/config
Host dev
    Port 22000
    User twarnock
    ForwardAgent yes

On this particular host, I can now run

$ ssh dev

Which is much easier than "ssh -A -p 22000"

You can also use wildcards, e.g.,

Host * * *
    User root

which I find very useful for cases where usernames are different than my normal username.

Tunneling

Additionally, you can add tunneling information in your .ssh/config, e.g.,

    IdentityFile ~/.ssh/anattatechnologies.key
    LocalForward 8080 localhost:80
    User twarnock

Even if you chose to use shell functions to manage tunnels, the use of an ssh config can help simplify things greatly.
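The config can also express the chained tunnels mentioned earlier. For example, a multi-hop setup via OpenSSH's ProxyJump directive (the "inner" alias and its hostname are placeholders; "dev" refers to the host entry defined above):

```
# reach an internal host by hopping through "dev"
Host inner
    ProxyJump dev
    User twarnock
```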

Posted in shell tips, ssh

git, obliterate specific commits

I would like to obliterate a series of git commits between two points, we'll call these the START and END commits.

First, determine the SHA1 for the two commits, we'll be forcefully deleting everything in between and preserving the END exactly as it is.

Detach Head

Detach head and move to END commit,

git checkout SHA1-for-END

Soft Reset

Move HEAD to START, but leave the index and working tree as END

git reset --soft SHA1-for-START

Redo END commit

Redo the END commit re-using the commit message, but on top of START

git commit -C SHA1-for-END

Rebase

Re-apply everything that came after END on top of the new HEAD

git rebase --onto HEAD SHA1-for-END master

Force Push

git push -f
Posted in shell tips

vim: Visual mode

I have been using vim for years and am consistently surprised at the amazing things it can do. Vim has been around longer than I have been writing code, and its predecessor (Vi) is as old as I am.

Somehow through the years this editor has gained and continues to gain popularity. Originally, my interest in Vi was out of necessity, it was often the only editor available on older Unix systems. Yet somehow Vim nowadays rivals even the most advanced IDEs.

One of the more interesting aspects of Vim is the Visual mode. I had ignored this feature for years relying on the normal command mode and insert mode.

Visual Mode

Simply press v and you'll be in visual mode able to select text.

Use V to select an entire line of text, use the motion keys to move up or down to select lines of text as needed.

And most interestingly, use Ctrl-v for visual block mode. This is the most flexible mode of selection and allows you to select columns rather than entire lines. For example, I have used visual block mode to select the same variable in 5 lines of code and edit all of them at once.

In all of these cases, you can use o and O while selecting to change the position of the cursor in the select box. For example, if you are selecting several lines downwards and realize you wanted to grab the line above the selection box as well, just hit o and it will take you to the top of the selection.

In practice this is far easier and more powerful than normal mouse highlighting, although vim also supports mouse highlighting exactly as you would intuitively expect (where mouse highlighting enables visual mode).

What to do with a visual selection

All sorts of things! You could press ~ to change the case of the selection, you can press > to indent the selection (< to remove an indent), you can press y to yank (copy) the selection, d to delete the selection.

If you're in visual block mode and if you've selected multiple lines as in the example above, then you can edit ALL of the lines simultaneously. Use i to start inserting at the cursor, and as soon as you leave insert mode the changes will appear on each of the lines that was in the visual block.

Similarly, you can use the familiar a, A, and I to add text to every line of the visual block. You can use c to change each line of the visual block, r to replace the selection. This is an incredibly fast and easy way to add or replace text on multiple lines.

Additionally, you can use p to put (paste) over a visual selection. Once you paste over a visual selection, that selection will now be your default register (see :reg), which is extremely handy when you need to quickly swap two selections of text.

You can even use the visual block to limit the range of an otherwise global find and replace, that is,

:'<,'>s/\%Vfoo/bar/g

adding the \%V to the search limits the find and replace to the selected block.

More information is available in vim's help file,

:h visual-operators
Posted in shell tips, vim

vim: tags and tabs

Previously, I discussed using vim and screen with automagic titles.

The great part about working with vim and screen is that I can work from anywhere with minimal setup, and when working remotely I can pick up the cursor exactly where I left it -- I never have to worry about a remote terminal disconnecting.

I tend to avoid vim plugins, as I like having a minimal setup on different hosts; I occasionally make an exception for NERDTree, but I find the default netrw easily workable. I keep my .vimrc and other dotfiles in github, so I'm always a git clone away from getting my environment set up (in Linux, cygwin, OSX, etc).

With this in mind, I would like an easier way to navigate files in a project and if possible avoid non-standard vim plugins.

One of the most effective approaches I have found using the default (no plugins) vim is the combination of tags and tabs.

Tag files are generated by ctags (typically from exuberant ctags), which vim can then use as a keyword index into the source tree of any given project.

Generating Tags

I prefer to keep a single "tags" file at the root of each project directory, typically as follows,

$ ctags -R .

This will create a "tags" file in the current directory. For larger codebases these can get surprisingly large, but they are usually fast to generate. To manage these files you may consider using git hooks on

Telling vim about Tags

In vim you can load as many tag files as you like, the command is,

:set tags+=tags

where "tags" is the filename of the tags file.

The problem is, you won't want to type this every time you open vim, so add the following to your .vimrc,

set tags+=tags;$HOME

By adding the ";$HOME" to the set tags command, this will simply look for a "tags" file in the current working directory, and if it doesn't find one it will look in the parent directory and keep looking for a tags file all the way back to "$HOME". So if you're 10 directories deep within $HOME then it would search up to 10 directories looking for a "tags" file. You can replace $HOME with any base directory, in my case I keep all project source code in my $HOME directory.

Using Tags

Typing :tag text will search for files with the exact tag name, or you can use :tag /text to search for any tag that matches "text".

By default, vim opens the new files in a tag stack; you can use Ctrl-T to go back to the previous file -- alternatively you can navigate the files through the normal buffer commands, e.g.,

list open buffers
:ls

switch to a different buffer (from list)
:b #

unload (delete) a buffer
:bd #

You can also put your cursor on the word you want to search and press Ctrl-] to go to the file that matches the selected text, then use Ctrl-T to jump back. If you want to see all the files that match a tag, you can use :tselect text

I find navigating a tag stack and maintaining multiple buffers a bit cumbersome; this is where tabs can really help out.

Using Tabs

Once you're in vim you can open a new tab with :tabedit {file} or :tabe {file} which will open the optionally specified file in a new tab (or open a new blank tab if no file is specified). Usually I use,

:tabe .

to open a new tab with the file browser in the current working directory.

With multiple tabs open you can use gt and gT to toggle through the open tabs, or {i}gt to go to the i-th tab (starting at 1). You can re-order tabs using :tabm # to move a tab to a new position (starting at 0).

Most importantly, tabs work great with mouse enabled, simply click on a tab as you would intuitively expect, drag the tabs to re-order, or click the "X" in the upper-right to close.

Tabs meet Tags

I find the default tags behavior slightly cumbersome as I end up navigating the tag stack through multiple buffers open in one window.

When searching tags I want the file to always open in a new tab, or at least to open in a vertical split.

I have added the following to my .vimrc

map <C-\> :tab split<CR>:exec("tag ".expand("<cword>"))<CR>
map <C-]> :vsp <CR>:exec("tag ".expand("<cword>"))<CR>

This will effectively remap Ctrl-] to open the matching file in a vertical split. I can then close the vertical split or even move it to a new tab using Ctrl-w T.

However, mostly I use Ctrl-\ to open the matching file in a new tab.

Between tab and tag navigation I find this a very powerful way to manage even very large projects with default vim (rather than rely on an IDE).

Careful with Splits

One interesting thing about splits (horizontal and vertical, that is, :sp and :vsp) is that they will exist entirely within a tab window. In other words, a split occurs within only one tab.

You can close a split using Ctrl-w q, and if you need to navigate through multiple splits you can either use the mouse or Ctrl-w and then an arrow key (or h,j,k,l if you prefer).

In any given split, you can always move that file to a new tab using Ctrl-w T

Posted in shell tips, vim