Extremely Large Numeric Bases with Unicode

Previously, we discussed using Punycode for non-ASCII domain names with internationalized URLs, e.g., https://去.cc/叼

I would like to use this approach to create a URL Shortening Service, where we can create shortened URLs that use UTF-8 characters in addition to the normal ASCII characters.

Most URL shortening services use a base-62 alphanumeric key to map to a long URL. Typically, the base-62 characters include 26 uppercase letters (ABCD…), 26 lowercase letters (abcd…), and 10 digits (0123…), for a total of 62 characters. Occasionally they will include an underscore or dash, bringing you to base-64. This is all perfectly reasonable when using ASCII and trying to avoid non-printable characters.

For example, using base-62 encoding, you may have a typical short URL that looks like this:

http://shorturl/xp5zR2
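
For reference, the encoding itself is only a few lines of Python; here is a minimal base-62 encoder (the alphabet order here — digits, then lowercase, then uppercase — is an arbitrary choice for illustration, so the exact output strings will differ from service to service):

import string

B62 = string.digits + string.ascii_lowercase + string.ascii_uppercase

def encode_b62(n):
    ''' encode a non-negative integer as a base-62 string '''
    if n == 0:
        return B62[0]
    chars = []
    while n > 0:
        n, d = divmod(n, 62)
        chars.append(B62[d])
    return ''.join(chars)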

However, nowadays with modern browsers supporting UTF-8, and offering basic rendering of popular Unicode character sets (中文, 한글, etc), we can leverage a global set of symbols!

Two of the larger contiguous Unicode ranges with decent support in modern browsers are the initial CJK block and the Korean Hangul syllables. Why are these interesting? Well, rather than base-62 or base-64, we can use CJK and Hangul syllables to create extremely large numeric bases.

The CJK range of 4e00 to 9fea seems to be adequately supported, as is the Hangul syllable range of ac00 to d7a3; these give us a base-20971 and a base-11172 respectively. Rather than base-62, we can offer numeric bases into the tens of thousands!
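
Counting both endpoints, a quick sanity check of those range sizes:

>>> int('9fea',16) - int('4e00',16) + 1   # CJK Unified Ideographs
20971
>>> int('d7a3',16) - int('ac00',16) + 1   # Hangul syllables
11172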

This would allow shortened URLs to look like this:

http://去.cc/叼
http://去.cc/뇼

Taken to extremes, let's consider a really large number, like 9,223,372,036,854,775,807 (nine quintillion two hundred twenty-three quadrillion three hundred seventy-two trillion thirty-six billion eight hundred fifty-four million seven hundred seventy-five thousand eight hundred seven). This is the largest signed-64-bit integer on most systems. Let's see what happens when we encode this number in extremely large bases:

9223372036854775807
= 7M85y0N8lZa
= 瑙稃瑰蜣丯
= 셁뻾뮋럣깐

The CJK and Hangul encodings are 6 characters shorter than their base-62 counterpart. For a URL Shortening Service, I'm not sure this will ever be useful; no one will ever need to map nine quintillion URLs. There aren't that many URLs in existence, though there are billions. So let's say we're dealing with 88 billion URLs and look at a more reasonably sized large number.

88555222111
= 3CG2Fy1
= 執洪仉
= 닁읛껅

NOTE: while the character length of the Chinese string is less than that of the base-62 string, each of the Chinese characters takes 3 bytes in UTF-8, so the shorter string does not actually save any bandwidth. It's worth keeping in mind nonetheless.
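
For example, comparing the UTF-8 byte lengths of the two encodings of 88555222111:

>>> len('3CG2Fy1'.encode('utf8')), len('執洪仉'.encode('utf8'))
(7, 9)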

To convert a number to one of these Unicode ranges, you can use the following Python,

def encode_urange(n, ustart, uend):
    ''' encode the integer n as a string of "digits" drawn
        from the Unicode range [ustart, uend) '''
    chars = []
    while n > 0:
        n, d = divmod(n, uend-ustart)
        chars.append(chr(ustart+d))
    return ''.join(chars)

def decode_urange(nstr, ustart, uend):
    ''' decode a string of Unicode "digits" from the range
        [ustart, uend) back into an integer '''
    base = uend-ustart
    basem = 1
    n = 0
    for c in nstr:
        if not ustart <= ord(c) < uend:
            raise ValueError("{!r}, {!r} out of bounds".format(nstr, c))
        n += (ord(c)-ustart) * basem
        basem = basem*base
    return n

The CJK range is 4e00 to 9fea, and you can map arbitrary CJK to base-10 as follows,

>>> 
>>> decode_urange('你好世界', int('4e00',16), int('9fea',16))
92766958466352922
>>>
>>> print(encode_urange(92766958466352922, int('4e00',16), int('9fea',16)))
你好世界
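
The Hangul syllable range works the same way; for example, encoding the 88-billion example from above:

>>> print(encode_urange(88555222111, int('ac00',16), int('d7a3',16)))
닁읛껅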

Unicode is full of fun and interesting character sets; here are some examples that I have built into x404:

# base-10   922,111
base62:     LSR3
top16:      eewwt
CJK:        鶱丫
hangul:     쏉걒
braille:    ⣚⡴⡙
anglosaxon: ᛡᛇᛞᚻᚢ
greek:      οΒΦδ
yijing:     ䷫䷔䷫䷃
Posted in python

PseudoForm, a Bot-resistant Web Form

I would like to create an HTML form that is resistant to bots. This could be a classic comment form or really any web-accessible form. In most cases, requiring authentication and an authorization service (such as OAuth) would be sufficient to protect a back-end web service. But what if you have a publicly accessible web form, and you want to keep it open and anonymous, but provide some measure of protection against automated bots?

For example, I would like to protect my URL Shortening Service such that only humans using the web front-end can access the back-end service layer, and I would like this layer of protection to be bot resistant.

Most spam-bots will crawl through webpages looking for HTML forms, and will seek to exploit any unprotected form handlers (i.e., the backend service). A more advanced bot can even leverage common authentication and authorization services. Other than annoying our users with CAPTCHA tests, what are some approaches that we can take?

Honeypot Form

A simple and effective approach is to create a dummy HTML form that is not rendered in the actual browser, but will be discovered by most spam-bots. This could be as simple as,

<div id="__honey__">
  <form action="post-comments.php" method="POST">
  <input type="text" name="Name">
  ...
  </form>
</div>

The "post-comments.php" will be discovered by most spam-bots, yet in our case this is a misdirect. If you don't want it to return a 404, you can add a dummy file named "post-comments.php" that does nothing but will always return a 200. In fact, anyone trying to access this form handler is likely a bot, so feel free to do whatever you want with this. A simple but fun trick is to include the request IP address (the IP address of the bot itself) as the form action,

<form action="http://{{request.remote_addr}}/post.php" method="POST">

A naive spam-bot may start scanning its own IP address and may even send HTTP POST messages to itself.

Meanwhile, we would add Javascript in <head> with a "defer" attribute,

<script src="PseudoForm.js" defer></script>

The "defer" attribute will execute the "PseudoForm.js" after the user-agent has parsed the DOM but before it has been rendered to the user. Inside this script, you can replace the fake honeypot form.

document.getElementById('__honey__').innerHTML = actual_form;

The "actual_form" can be anything you want, and to further confound spam-bots, it does not even need to be an actual HTML form.

PseudoForm

In the case of my URL Shortening Service, the actual form does not use an HTML <form> element, and instead uses a simple <button>. In fact, using proper CSS, you can use any element you want, and style it as a button, even something like this,

<div id="__honey__">
  <input type="text" id="add_url" placeholder="Enter long URL...">
  <i>+</i>
</div>

There is no form handler in this case, and the back-end service is known only to the Javascript (which can add onclick event handlers to our CSS-styled button). This is sufficient to protect against naive spam-bots, but a more advanced spam-bot can easily parse our Javascript and determine actual service endpoints, such as looking for calls to XMLHttpRequest.

PseudoForm + formkey

In order to ensure the back-end service is accessed via our front-end, you can create a time-sensitive token. For my URL Shortening Service, all back-end service calls require a formkey. That is,

this.getformkey = function() {
    getJSON('/api/formkey',
    function(err, data) {
      if (err !== null) {
        console.log('Something went wrong: ' + err);
      } else {
        if ("formkey" in data) {
          _self.formkey = data.formkey;
        }
      }
    });
}

"getJSON" is a wrapper around XMLHttpRequest, and all this does is fetch a valid formkey and save it within the PseudoForm Javascript object. We can then keep formkey up to date in the webapp, like so,

setInterval( PseudoForm.getformkey, 60*1000 );

In the above case, PseudoForm.formkey will be refreshed every 60 seconds, and any calls to the back-end service must use this formkey. The value of formkey will appear as a nonsensical hex string, yet in reality it is a time-sensitive hash that can only be verified by the back-end (which generated the formkeys in the first place), i.e.,

def getFormKey(self, ts=None, divsec=300):
    ''' return a time-sensitive hash that can be 
        calculated at a later date to determine if it matches
        
        the hash will change every divsec seconds (on the clock)
        e.g., divsec=300, every 5-minutes the hash will change
    '''
    if ts is None:
        ts = int(time.time())
    return hashlib.sha1((
        self._unicode(ts//divsec)+self.secret).encode('utf-8'
    )).hexdigest()[2:]

def isValidFormKey(self, testkey, lookback=2, divsec=300):
    ''' return True IFF
        the testkey matches a hash for this or one of the previous {lookback} iterations
        e.g., divsec=300,
        a formkey will be valid for at least 5 minutes (no more than 10)
        e.g., lookback=6,
        a formkey will be valid for at least 25 minutes (no more than 30)
    '''
    for i in range(lookback):
        if testkey == self.getFormKey(int(time.time())-divsec*i):
            return True
    return False

I included a secret key (by default a server-generated GUID) to ensure that a bot would never be able to calculate a valid formkey on its own. All back-end services require a valid formkey, which means that any automated bot that accesses one of our back-end services must first fetch a valid formkey, and can only use that formkey within the time limit set by the back-end itself (the default is 5 minutes).

PseudoForm + handshake

Theoretically, an automated bot could be programmed to utilize a valid formkey, giving the appearance of a real human user accessing the front-end, and it could then spam the back-end service. This was exactly what I wanted to avoid in my URL Shortening Service. To further strengthen the PseudoForm, all back-end services require a time-sensitive handshake.

In other words, when you make a request to the URL Shortening Service, the response is a unique time-sensitive handshake. The handshake must be returned within a narrow time-window. If the response is too soon, the request is denied. If the response is late, the request is also denied. This has the added benefit of throttling the number of real transactions (roughly one per second) per IP address.
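
As a rough illustration of that time-window check, here is a minimal server-side sketch (the token format, storage, and window values here are hypothetical, not the actual x404 implementation):

import time, uuid

# handshake tokens we have issued -> the time they were handed out
_pending = {}

def issue_handshake():
    ''' respond to the initial request with a unique handshake token '''
    token = uuid.uuid4().hex
    _pending[token] = time.time()
    return token

def redeem_handshake(token, min_wait=1.0, max_wait=30.0):
    ''' commit the transaction only if the token comes back inside
        the allowed window: not too soon, and not too late '''
    issued = _pending.pop(token, None)
    if issued is None:
        return False
    elapsed = time.time() - issued
    return min_wait <= elapsed <= max_wait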

Therefore, in order for an automated bot to exploit the PseudoForm, it not only needs to bypass the honeypot, it also needs to carefully time its HTTP calls to the back-end, throttling itself, and perfectly mimic what a human would do seamlessly on the front-end.

It's certainly not impossible, and any software engineer of reasonable talent could build such a bot, but that bot would effectively be using the front-end. In the end, PseudoForm makes it very difficult for bots, throttling any bot to human speeds, but keeping everything very simple for a human (try it yourself, there are no CAPTCHAs or any nonsense).

PseudoForm example

My URL Shortening Service, x404, leverages exactly this kind of PseudoForm (including some of the code on this page). It's a simple URL shortening service that was created as a novelty, but it provides a protected back-end service that is far more difficult to exploit programmatically than it is to simply use the front-end. In other words, it's extremely easy for a human to use, and incredibly difficult for a bot.

The full process of the x404 PseudoForm is as follows,

1. create honeypot HTML form (misdirect naive spam-bots)

2. create a PseudoForm that appears as a normal web form in the browser (also prevent the honeypot form from rendering)

3. user enters information and clicks "add"

The PseudoForm front-end will display a "processing" message to the user. The PseudoForm front-end will also do an HTTP POST to the PseudoForm back-end, and if it includes a valid formkey, then the PseudoForm back-end will respond with a unique handshake string.

4. wait a second...

After a short wait (roughly 1s, but configurable however you want), the PseudoForm front-end will do an HTTP PUT to the PseudoForm back-end. If the formkey is valid, the returned handshake is valid, and the request arrives within the valid time window, then the back-end will commit the transaction and return success.

Literally, the back-end will reject any return handshake (valid or not) that appears too soon (or too late).

5. success! user sees the results
Posted in css, html, javascript, python, software arch.

multiple git remotes

I would like to manage a project across multiple remote git repositories, specifically, a public github repository and my own private repositories.

Fortunately, git supports as many remote repositories as you need. When you clone a repository, there will be a default remote called "origin", i.e.,

$ git clone git@github.com:timwarnock/dotfiles.git
...
$ cd dotfiles
$ git remote -v
origin	git@github.com:timwarnock/dotfiles.git (fetch)
origin	git@github.com:timwarnock/dotfiles.git (push)

Adding additional remotes is trivial, i.e.,

$ git remote add bob https://example.com/bob/dotfiles
$ git remote add ann https://example.com/ann/dotfiles
$ git remote add pat https://example.com/pat/dotfiles

Now, when we look at the remotes for our repository we'll see the new entries,

$ git remote -v
origin	git@github.com:timwarnock/dotfiles.git (fetch)
origin	git@github.com:timwarnock/dotfiles.git (push)
bob	https://example.com/bob/dotfiles (fetch)
bob	https://example.com/bob/dotfiles (push)
ann	https://example.com/ann/dotfiles (fetch)
ann	https://example.com/ann/dotfiles (push)
pat	https://example.com/pat/dotfiles (fetch)
pat	https://example.com/pat/dotfiles (push)

If we want to pull from Ann's repository and merge it into our local master branch,

$ git pull ann master

And then if we wanted to push those changes to Bob's repository,

$ git push bob master

We can also rename and remove remotes, i.e.,

$ git remote rename bob bobbie
...
$ git remote remove bobbie

In practice, we may not want to be constantly merging everything into a local master, instead, we may want to investigate the code before any merges. This can be done easily. I prefer to use tracked branches, as follows,

$ git checkout -b bob-master
$ git remote add bob https://example.com/bob/dotfiles
$ git fetch bob master
$ git branch --set-upstream-to=bob/master bob-master

We can now inspect the bob-master branch and merge manually as we see fit,

$ git checkout bob-master
$ git pull
...
$ git checkout master
$ git diff bob-master
...
Posted in git

OOP or Procedural?

I would like to know when it is best to use object-oriented programming, and when it is best to use procedural programming.

tl;dr: neither, go with functional programming

By procedural programming, I mean the kind of code you'd find programming in C; imperative control flow, functions, data structures, and algorithms. For example,

#include <stdio.h>

float f_to_c(float f) {
    return (f - 32) * 5 / 9;
}

int main() {
    float fahrenheit;
    printf("Please enter the temperature in Fahrenheit: ");
    scanf("%f", &fahrenheit);
    printf("Temperature in Celsius = %.2f\n", f_to_c(fahrenheit));
    return 0;
}

And by object-oriented programming, I mean the kind of code with abstraction, inheritance, polymorphism, and encapsulation. For example,

import java.util.*;

interface TemperatureConverter {
    public float convert();
}

class Temperature {
    float degrees;
    Temperature(float t) {
        degrees = t;
    }
}

class Fahrenheit extends Temperature implements TemperatureConverter {

    Fahrenheit(float t) {
        super(t);
    }

    public float convert() {
        return ((degrees - 32)*5)/9;
    }

}

class FahrenheitToCelsius {

    public static void main(String[] args) {
        Fahrenheit fahrenheit;
        Scanner in = new Scanner(System.in);
        System.out.print("Enter temperature in Fahrenheit: ");
        fahrenheit = new Fahrenheit( in.nextFloat() );

        System.out.println("temperature in Celsius = " 
            + fahrenheit.convert());
    }

}

I admittedly forced some inheritance and polymorphism into the above code, but it's arguably just as easy (if not easier) to read than the C example (despite being considerably longer).

In both cases we hid the implementation details (the specific formula that converts Fahrenheit to Celsius) from the main(). However, the OOP example also hides (encapsulates) the data structure as well. In the Java example we encapsulate the float within the Temperature base class, which the Fahrenheit class inherits. And since the Fahrenheit class implements the TemperatureConverter interface, then we're guaranteed to have a convert() method. There is still some implicit typecasting (a float to string within the println), but the idea is that the main() function doesn't care about the underlying data structure.

As Robert Martin (Uncle Bob) put it, "Objects expose behavior and hide data." The Fahrenheit class exposed a convert() behavior and hid the underlying data structure. This, according to Uncle Bob, makes it easy to add new objects without changing existing behaviors. For example,

class Celsius extends Temperature implements TemperatureConverter {

    Celsius(float t) {
        super(t);
    }

    public float convert() {
        return 9*degrees/5 + 32;
    }

}

This code has no impact on the existing Fahrenheit class, and we can safely call convert() on both Fahrenheit and Celsius objects. Additionally, if we use generics on the Temperature class, then we could allow for different data structures (such as double or BigDecimal) on something like a Kelvin class. In OOP, adding new classes is generally easy.

That said, what if we wanted to add new behavior? Maybe we want to add an isRoomTemperature() method. If so, we could add a new interface and then implement it in Celsius and Fahrenheit, but what if we had also implemented that new Kelvin class? Or several other Temperature classes? And shouldn't the convert() method return a Temperature class? This could get messy and will lead us into DRY problems. In fact, this is an area where OOP is not ideal. Even Uncle Bob admits that if we're adding new behaviors then "we prefer data types and procedures."

This seemingly obvious and innocuous statement in Clean Code is actually very profound, especially considering that OOP and classic procedural programming do not mix well in a single code-base. If Uncle Bob is correct, whether or not you go with OOP depends on whether you will mostly be adding and managing lots of behavior, or lots of data types. If the behavior will remain relatively unchanged, then OOP is beneficial, but if we're planning to add or change behavior, then procedural programming is preferred. I honestly don't know what kind of software projects aren't primarily adding new behaviors (new features).

For reference, adding a room temperature check is easy in the C code,

#include <stdio.h>
#include <stdbool.h>

bool is_c_room_temperature(float c) {
    return c >= 20 && c <= 25;
}

float f_to_c(float f) {
    return (f - 32) * 5 / 9;
}

bool is_f_room_temperature(float f) {
    return is_c_room_temperature(f_to_c(f));
}

int main() {
    float fahrenheit;
    printf("Please enter the temperature in Fahrenheit: ");
    scanf("%f", &fahrenheit);
    printf("Temperature in Celsius = %.2f\n", f_to_c(fahrenheit));
    if (is_f_room_temperature(fahrenheit)) {
        printf("%.2f is room temperature\n", fahrenheit);
    }
    return 0;
}

Classic procedural code does not concern itself with adding behaviors to objects. Instead, it treats data types as data types and isolates the "procedural" behaviors into functions that are performed on those data types. If we stick to pure functions (no side effects, and all inputs map to unique outputs), then we'll have highly testable code that can run in highly-concurrent environments.

For example, adding a Kelvin conversion would look like this,

float c_to_k(float c) {
    return c + 273.15;
}

Likewise, adding a Fahrenheit to Kelvin conversion would simply chain together two pure functions,

float f_to_k(float f) {
    return c_to_k(f_to_c(f));
}

Procedural code focuses entirely on behavior. Adding this functionality in a pure OOP style would result in a laundry list of classes, interfaces, and methods. It can get out of hand quickly, and we'd soon be researching design patterns to try to regain some sense of code quality.

In practice, most developers tend to treat OOP and procedural programming with a sort of religious devotion, zealously adhering to their preferred programming style and feeling that the alternative is sacrilege. I think Uncle Bob was onto something when he said that "good software developers understand these issues without prejudice and choose the approach that is best for the job at hand." That's also from Clean Code, a book that should be read at least as often as it's referenced (it's a bit like George Orwell's 1984, most people reference it without ever having read it).

Uncle Bob is certainly more diplomatic than Joe Armstrong, the creator of Erlang, who had famously said,

"The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."

To date, I've never heard a reasonable counter-argument to this objection to OOP, namely, that objects bind data structures and functions together (which inevitably leads to an explosion of side-effects). Even as you try to decouple the banana from the gorilla, you end up creating even more classes, more side effects, and most likely an even worse problem. I'm not sure I'd go so far as to say OO Sucks, but I am hard pressed to defend OOP in light of decades of hard learned lessons.

Obviously, good code is preferable to bad code in any language. There is plenty of bad procedural code out in the world. But honestly, in OOP you often find good programmers writing bad code. Let's go back to some of the earliest lessons in software engineering, specifically Fred Brooks's essay No Silver Bullet, and ask ourselves how much accidental complexity has been created by OOP. How much code in an average OOP project is tackling the essential complexity of a problem versus the accidental complexity?

In fairness, OOP was popularized by Java, which solved many problems from the early days of C and C++ (such as garbage collection and platform independence). In the decades since, Java has added capabilities found in modern languages (such as lambda expressions, collections, stream api, higher-order functions, etc). Most of the new capabilities come from the world of functional programming, and exactly zero of these capabilities come from OOP.

Whether we like it or not, the future may not be kind to OOP. Multi-core architectures and distributed computing are pushing software into high-concurrency asynchronous environments. Even worse, the push to cloud computing and microservices leads us to an increase in latency within a highly concurrent asynchronous world. This is an ideal environment for a separation of data structures from functions (pure functions). This is a great environment for Haskell and Erlang (or coding pure functions using Scala, Python, or Go), but regardless of the language, you couldn't ask for a worse environment for OOP.
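
For instance, the same temperature conversions written as pure functions in Python (a minimal sketch) are trivially testable and have no shared state to worry about in a concurrent environment:

def f_to_c(f):
    ''' Fahrenheit to Celsius '''
    return (f - 32) * 5 / 9

def c_to_k(c):
    ''' Celsius to Kelvin '''
    return c + 273.15

def f_to_k(f):
    ''' Fahrenheit to Kelvin, by composing two pure functions '''
    return c_to_k(f_to_c(f))

assert round(f_to_k(32), 2) == 273.15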

Posted in c, java, software arch.

Trie or Set

Given a grid or input stream of characters, I would like to discover all words according to a given dictionary. This could be a dictionary of all English words or phrases (say, for an autocomplete service), or for any language. This is especially useful for languages where words are not clearly separated (e.g., Japanese, Chinese, Thai).

Typically, this is done with a Trie or a DAWG (Directed Acyclic Word Graph). A Trie can be implemented in Python using a nested dict, i.e.,

def _make_trie(wordict):
    trie = {}
    for word in wordict:
        current_trie = trie
        for letter in word:
            current_trie = current_trie.setdefault(letter, {})
        current_trie['$'] = '$'
    return trie

def _in_trie(trie, word):
    ''' True IFF prefix or word in trie
    '''
    current_trie = trie
    for letter in word:
        if letter in current_trie:
            current_trie = current_trie[letter]
        else:
            return False
    return True

Using this approach, we can scan through a large stream of characters for potential words. Imagine a classic matching game where you are looking for words within a grid of characters. Programmatically, you would scan through the grid with combinations of characters. The advantage of a Trie (or DAWG) is that it allows for efficient pruning. In other words, if a character combination is not in the Trie, then you can prune that path and stop searching it.
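
For example, a simple left-to-right scan of each row that prunes as soon as a prefix is missing from the Trie might look like this (a minimal sketch using the helpers above; the grid and dictionary in the usage note are made up):

def _triescan(grid, wordict):
    ''' yield dictionary words found left-to-right in each row,
        pruning as soon as a prefix is not in the trie '''
    trie = _make_trie(wordict)
    for line in grid:
        for i in range(len(line)):
            for j in range(i+1, len(line)+1):
                candidate = line[i:j]
                if not _in_trie(trie, candidate):
                    break  # nothing in the dictionary starts with this prefix
                if candidate in wordict:
                    yield candidate

With wordict = {'apple', 'are'} and a grid containing the single row 'xappleq', this yields only 'apple'.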

An alternative approach is to create a Set of word prefixes, i.e.,

almost_words = set([])
for word in wordict:
    for i in range(len(word)-1):
        almost_words.add( word[0:i+1] )

If the dictionary contains ['apple', 'are'] then the Set almost_words would contain the following,

{'a', 'ap', 'app', 'appl', 'ar'}

In other words, rather than test if a character string exists in the Trie, one can simply check the Set almost_words. If there is no match then that particular path can be pruned. Here is a simple RTL (right-to-left) character scanner that uses this approach:

def _setscan_rtl(grid, wordict):
    ''' generator yielding word candidates
    '''
    almost_words = set([])
    maxlen = 0
    for word in wordict:
        if len(word) > maxlen:
            maxlen = len(word)
        for i in range(len(word)-1):
            almost_words.add( word[0:i+1] )
    for line in grid:
        for i in range(max(len(line),maxlen) - maxlen):
            candidate_word = ''
            for c in range(min(len(line),maxlen)):
                candidate_word += line[i+c]
                if candidate_word not in almost_words:
                    break
                yield candidate_word

I created a simple test case to determine if a Set was truly faster, and whether or not it was as memory efficient. There was a noticeable increase in performance using Set over Trie (for both large and small data sets). Interestingly, the performance difference was even more pronounced when using Japanese characters, indicating that language parsers can use a simple Set (or hashmap) as opposed to a Trie or a DAWG.

$ /usr/bin/time ./test_j_set.py
50220
177.84user 0.42system 2:58.54elapsed 99%CPU (0avgtext+0avgdata 507412maxresident)k
0inputs+0outputs (0major+145801minor)pagefaults 0swaps

$ /usr/bin/time ./test_j_trie.py
50220
250.44user 0.56system 4:11.86elapsed 99%CPU (0avgtext+0avgdata 680960maxresident)k
0inputs+0outputs (0major+184571minor)pagefaults 0swaps

Full results and code are available on my github.

Posted in data arch., python

iter_words

I would like to iterate over a stream of words, say, from STDIN or a file (or any random input stream). Typically, this is done like this,

def iter_words(f):
    for line in f:
        for word in line.split():
            yield word

And then one can simply,

for word in iter_words(sys.stdin):
    # do something

For a more concrete example, let's say we need to keep a count of every unique word in an input stream, something like this,

import sys
from collections import Counter
c = Counter()

for word in iter_words(sys.stdin):
    c.update([word])

The only problem with this approach is that it reads the data in line by line, which in most cases is exactly what we want; however, in some cases we don't have line-breaks, and for extremely large data streams we will simply run out of memory if we use the above generator.

Instead, we can use the read() method to read in one byte at a time, and manually construct the words as we go, like this,

def iter_words(sfile):
    chlist = []
    for ch in iter(lambda: sfile.read(1), ''):
        if str.isspace(ch):
            if len(chlist) > 0:
                yield ''.join(chlist)
            chlist = []
        else:
            chlist.append(ch)

This approach is memory efficient, but extremely slow. If you absolutely need to get the speed while still being memory efficient, you'll have to do a buffered read, which is kind of an ugly hybrid of these two approaches.

def iter_words(sfile, buffer=1024):
    lastchunk = ''
    for chunk in iter(lambda: sfile.read(buffer), ''):
        # carry over any partial word left from the previous chunk
        chunk = lastchunk + chunk
        words = chunk.split()
        if not words:
            lastchunk = ''
            continue
        if chunk[-1].isspace():
            lastchunk = ''
        else:
            # the chunk ended mid-word, hold the partial word for the next read
            lastchunk = words.pop()
        for word in words:
            yield word
    if lastchunk:
        yield lastchunk
Posted in python

Punycode

I would like a webapp that supports UTF-8 URLs. For example, https://去.cc/叼, where both the path and the server name contain non-ASCII characters.

The path /叼 can be handled easily with %-encodings, e.g.,

>>> import urllib
>>> 
>>> urllib.parse.quote('/叼')
'/%E5%8F%BC'

Note: this is similar to the raw byte representation of the unicode string:

>>> bytes('/叼', 'utf8')
b'/\xe5\x8f\xbc'

However, the domain name "去.cc" cannot be usefully %-encoded (that is, "%" is not a valid character in a hostname). The standard encoding for international domain names (IDN) is Punycode, such that "去.cc" will look like "xn--1nr.cc".

The "xn--" prefix is the ASCII Compatible Encoding that essentially identifies this hostname as a punycode-encoded name. Most modern web-browsers and http libraries can decode this kind of name, although just in case, you can do something like this:

>>> 
>>> '去'.encode('punycode')
b'1nr'

In practice, we can use the built-in "idna" encoding and decoding in python, i.e., IRI to URI:

>>> p = urllib.parse.urlparse('https://去.cc/叼')
>>> p.netloc.encode('idna')
b'xn--1nr.cc'
>>> urllib.parse.quote(p.path)
'/%E5%8F%BC'

And going the other direction, i.e., URI to IRI:

>>> a = urllib.parse.urlparse('https://xn--1nr.cc/%E5%8F%BC')
>>> a.netloc.encode('utf8').decode('idna')
'去.cc'
>>> urllib.parse.unquote(a.path)
'/叼'
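
Putting both directions together as small helper functions (a minimal sketch built on the standard library calls above; the function names are my own, and query strings and ports are not handled):

import urllib.parse

def iri_to_uri(iri):
    ''' encode an internationalized URL into its ASCII form '''
    p = urllib.parse.urlsplit(iri)
    netloc = p.netloc.encode('idna').decode('ascii')
    return urllib.parse.urlunsplit(
        (p.scheme, netloc, urllib.parse.quote(p.path), p.query, p.fragment))

def uri_to_iri(uri):
    ''' decode an ASCII URL back into its unicode form '''
    p = urllib.parse.urlsplit(uri)
    netloc = p.netloc.encode('utf8').decode('idna')
    return urllib.parse.urlunsplit(
        (p.scheme, netloc, urllib.parse.unquote(p.path), p.query, p.fragment))

For example, iri_to_uri('https://去.cc/叼') returns 'https://xn--1nr.cc/%E5%8F%BC', and uri_to_iri reverses it.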
Posted in python, software arch.

Using getattr in Python

I would like to execute a named function on a python object by variable name. For example, let's say I'm reading in input that looks something like this:

enqueue 1
enqueue 12
enqueue 5
enqueue 9
sort
reverse
dequeue
print

Afterwards, we should see:

[9, 5, 1]

Let's say we need to implement a data structure that consumes this input. Fortunately, all of this behavior already exists within the built-in list datatype. What we can do is extend the built-in list to map the appropriate methods, like so:

class qlist(list):
    def enqueue(self, v):
        self.append(v)

    def dequeue(self):
        return self.pop(0)

    def print(self):
        print(self)

The sort and reverse methods are already built-in to list, so we don't need to map them. Now, we simply need a driver program that reads and processes commands to our new qlist class. Rather than map out the different commands in if/else blocks, or use eval(), we can simply use getattr, for example:

if __name__ == '__main__':
    import sys
    thelist = qlist()
    for line in sys.stdin:
        cmd = line.split()
        if not cmd:
            continue
        params = (int(x) for x in cmd[1:])
        getattr(thelist, cmd[0])(*params)
Posted in shell tips

Graph Search

I would like to discover paths between two nodes on a graph. Let's say we have a graph that looks something like this:

graph = {1: set([2, 3]),
         2: set([1, 4, 5, 7]),
         3: set([1, 6]),
         ...
         N: set([...] }

The graph object contains a collection of nodes and their corresponding connections. If it's a bi-directional graph, those connections would have to appear in the corresponding sets (e.g., 1: set([2]) and 2: set([1])).

Traversing this kind of data structure can be done through recursion, usually something like this:

def find_paths(from_node, to_node, graph, path=None):
    ''' DFS search of graph, return all paths between
        from_node and to_node
    '''
    if path is None:
        path = [from_node]
    if to_node == from_node:
        return [path]
    paths = []
    for next_node in graph[from_node] - set(path):
        paths += find_paths(next_node, to_node, graph, path + [next_node])
    return paths

Unfortunately, for large graphs, this can be pretty inefficient, requiring a full depth-first search (DFS), and storing the entire graph in memory. This does have the advantage of being exhaustive, finding all unique paths between two nodes.

That said, let's say we want to find the shortest possible path between two nodes. In those cases, you want a breadth-first search (BFS). Whenever you hear the words "shortest path", think BFS. You'll want to avoid recursion (as that results in a DFS), and instead rely on a queue, which in Python can be implemented with a simple list.

def find_shortest_path(from_node, to_node, graph):
    ''' BFS search of graph, return shortest path between
        from_node and to_node
    '''
    queue = [(from_node, [from_node])]
    while queue:
        (qnode, path) = queue.pop(0) #deque
        for next_node in graph[qnode] - set(path):
            if next_node == to_node:
                return path + [next_node]
            else:
                queue.append((next_node, path + [next_node]))

Because a BFS is guaranteed to find the shortest path, we can return the moment we find a path between to_node and from_node. Easy!
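
For example, on a small made-up bidirectional graph (assuming the find_shortest_path function above):

>>> graph = {1: set([2, 3]), 2: set([1, 4, 5, 7]), 3: set([1, 6]),
...          4: set([2]), 5: set([2, 6]), 6: set([3, 5]), 7: set([2])}
>>> find_shortest_path(1, 6, graph)
[1, 3, 6]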

In some cases, we may have an extremely large graph. Let's say you're searching the Internet for a path between two unrelated web pages, and the graph is constructed dynamically based on scraping the links from each explored page. Obviously, a DFS is out of the question for something like that, as it would spiral into an infinite chain of recursion (and probably on the first link).

As a reasonable constraint, let's say we want to explore all the links up to a specific depth. This can be done easily: simply skip any path that has grown beyond a depth_limit, as follows:

def find_shortest_path(from_node, to_node, graph, depth_limit=3):
    queue = [(from_node, [from_node])]
    while queue:
        (qnode, path) = queue.pop(0) #deque
        if len(path) > depth_limit:
            continue
        for next_node in graph[qnode] - set(path):
            if next_node == to_node:
                return path + [next_node]
            else:
                queue.append((next_node, path + [next_node]))
Posted in python, software arch.

python unittest

I would like to setup unit tests for a python application. There are many ways to do this, including doctest and unittest, as well as 3rd-party frameworks that leverage python's unittest, such as pytest and nose.

I found the plain-old unittest framework to be the easiest to work with, although I often run into questions about how best to organize tests for various sized projects. Regardless of the size of the projects, I want to be able to easily run all of the tests, as well as run specific tests for a module.

The standard naming convention is "test_ModuleName.py", which would include all tests for the named module. This file can be located in the same directory (package) as the module, although I prefer to keep the tests in their own subdirectory (which can easily be excluded from production deployments).

In other words, I end up with the following:

package/
 - __init__.py
 - Module1.py
 - Module2.py
 - test/
    - all_tests.py
    - test_Module1.py
    - test_Module2.py

Each of the test_*.py files looks something like this:

#!/usr/bin/env python
# vim: set tabstop=4 shiftwidth=4 autoindent smartindent:
import os, sys, unittest

## parent directory
sys.path.insert(0, os.path.join( os.path.dirname(__file__), '..' ))
import ModuleName

class test_ModuleName(unittest.TestCase):

    def setUp(self):
        ''' setup testing artifacts (per-test) '''
        self.moduledb = ModuleName.DB()

    def tearDown(self):
        ''' clear testing artifacts (per-test) '''
        pass

    def test_whatever(self):
        self.assertEqual( len(self.moduledb.foo()), 16 )


if __name__ == '__main__':
    unittest.main()

With this approach, the tests can be run by all_tests.py, or I can run the individual test_ModuleName.py.

The all_tests.py script must also add the parent directory to the path, i.e.,

#!/usr/bin/env python
# vim: set tabstop=4 shiftwidth=4 autoindent smartindent:
import sys, os
import unittest

## set the path to include parent directory
sys.path.insert(0, os.path.join( os.path.dirname(__file__), '..' ))

## run all tests
loader = unittest.TestLoader()
testSuite = loader.discover(".")
text_runner = unittest.TextTestRunner().run(testSuite)
Posted in python

HTML + CSS + JavaScript Lessons

I would like a very simple introduction to web development, from the basics of HTML and CSS, to the proper use of JavaScript; and all without getting bogged down in complicated textbooks.

I've been working with HTML, CSS, and JavaScript (as well as dozens of programming languages in more environments than I can remember) for over 20 years. While there are some excellent resources online (I recommend w3schools), I believe web development is a very simple topic that is often unnecessarily complicated.

I created a simple set of 9 lessons for learning basic web development. This includes HTML, CSS, and some simple JavaScript (including callback functions to JSONP APIs), everything you need to make and maintain websites.

You can find the lessons here
http://avant.net/lessons/

It's also available on Github
https://github.com/timwarnock/lessons

Posted in css, html, javascript

bash histogram

I would like to generate a streamable histogram that runs in bash. Given an input stream of integers (from stdin or a file), I would like to transform each integer to that respective number of "#" up to the length of the terminal window; in other words, 5 would become "#####", and so on.

You can get the maximum number of columns in your current terminal using the following command,

twarnock@laptop: :) tput cols
143

The first thing we'll want to do is create a string of "####" that is exactly as long as the max number of columns. I.e.,

COLS=$(tput cols);
MAX_HIST=`eval printf '\#%.0s' {1..$COLS}; echo;`

We can use the following syntax to print a substring of MAX_HIST to any given length (up to its maximum length).

twarnock@laptop: :) echo ${MAX_HIST:0:5}
#####
twarnock@laptop: :) echo ${MAX_HIST:0:2}
##
twarnock@laptop: :) echo ${MAX_HIST:0:15}
###############

We can then put this into a simple shell script, in this case printHIST.sh, as follows,

#! /bin/bash
COLS=$(tput cols);
MAX_HIST=`eval printf '\#%.0s' {1..$COLS}; echo;`

while read datain
do
  if [ -n "$datain" ]; then
    echo -n ${MAX_HIST:0:$datain}
    if [ $datain -gt $COLS ]; then
      printf "\r$datain\n"
    else
      printf "\n"
    fi
  fi
done < "${1:-/dev/stdin}"

This script will also print any number on top of any line that is larger than the maximum number of columns in the terminal window.

As is, the script will transform an input file into a crude histogram, but I've also used it as a visual ping monitor as follows (note the use of unbuffer),

twarnock@cosmos:~ :) ping $remote_host | unbuffer -p awk -F'[ =]' '{ print int($10) }' | unbuffer -p printHIST.sh
######
#####
########
######
##
####
#################
###
#####
#######

Posted in bash, shell tips

xmouse

I would like to remotely control my Linux desktop via an ssh connection (connected through my phone).

Fortunately, we can use xdotool.

I created a simple command-interpreter that maps keys to xdotool. I used standard video game controls (wasd) for large mouse movements (100px), with smaller movements available (ijkl 10px). It can toggle between mouse and keyboard, which allows you to somewhat easily open a browser and type URLs.

I use this to control my HD television from my phone, and so far it works great.

#!/bin/bash
#
#
: ${DISPLAY:=":0"}
export DISPLAY

echo "xmouse! press q to quit, h for help"

function print_help() {
  echo "xmouse commands:
  h - print help

  Mouse Movements
  w - move 100 pixels up
  a - move 100 pixels left
  s - move 100 pixels down
  d - move 100 pixels right

  Mouse Buttons
  c - mouse click
  r - right mouse click
  u - mouse wheel Up
  p - mouse wheel Down

  Mouse Button dragging
  e - mouse down (start dragging)
  x - mouse up (end dragging)

  Mouse Movements small
  i - move 10 pixels up
  j - move 10 pixels left
  k - move 10 pixels down
  l - move 10 pixels right

  Keyboard (experimental)
  Press esc key to toggle between keyboard and mouse modes
  
"
}

KEY_IN="Off"
IFS=''
while read -rsn1 input; do
  #
  # toggle mouse and keyboard mode
  case "$input" in
  $'\e') if [ "$KEY_IN" = "On" ]; then
           KEY_IN="Off"
           echo "MOUSE mode"
         else
           KEY_IN="On"
           echo "KEYBOARD mode"
         fi
     continue
     ;;
  esac
  #
  # keyboard mode
  if [ "$KEY_IN" = "On" ]; then
  case "$input" in
  $'\x7f') xdotool key BackSpace ;;
  $' ')  xdotool key space ;;
  $'')   xdotool key Return ;;
  $':')  xdotool key colon ;;
  $';')  xdotool key semicolon ;;
  $',')  xdotool key comma ;;
  $'.')  xdotool key period ;;
  $'-')  xdotool key minus ;;
  $'+')  xdotool key plus ;;
  $'!')  xdotool key exclam ;;
  $'"')  xdotool key quotedbl ;;
  $'#')  xdotool key numbersign ;;
  $'$')  xdotool key dollar ;;
  $'%')  xdotool key percent ;;
  $'&')  xdotool key ampersand ;;
  $'\'') xdotool key apostrophe ;;
  $'(')  xdotool key parenleft ;;
  $')')  xdotool key parenright ;;
  $'*')  xdotool key asterisk ;;
  $'/')  xdotool key slash ;;
  $'<')  xdotool key less ;;
  $'=')  xdotool key equal ;;
  $'>')  xdotool key greater ;;
  $'?')  xdotool key question ;;
  $'@')  xdotool key at ;;
  $'[')  xdotool key bracketleft ;;
  $'\\') xdotool key backslash ;;
  $']')  xdotool key bracketright ;;
  $'^')  xdotool key asciicircum ;;
  $'_')  xdotool key underscore ;;
  $'`')  xdotool key grave ;;
  $'{')  xdotool key braceleft ;;
  $'|')  xdotool key bar ;;
  $'}')  xdotool key braceright ;;
  $'~')  xdotool key asciitilde ;;
  *)     xdotool key "$input" ;;
  esac
  #
  # mouse mode
  else
  case "$input" in
  q) break ;;
  h) print_help ;;
  a) xdotool mousemove_relative -- -100 0 ;;
  s) xdotool mousemove_relative 0 100 ;;
  d) xdotool mousemove_relative 100 0 ;;
  w) xdotool mousemove_relative -- 0 -100 ;;
  c) xdotool click 1 ;;
  r) xdotool click 3 ;;
  u) xdotool click 4 ;;
  p) xdotool click 5 ;;
  e) xdotool mousedown 1 ;;
  x) xdotool mouseup 1 ;;
  j) xdotool mousemove_relative -- -10 0 ;;
  k) xdotool mousemove_relative 0 10 ;;
  l) xdotool mousemove_relative 10 0 ;;
  i) xdotool mousemove_relative -- 0 -10 ;;
  *) echo "$input - not defined in mouse map" ;;
  esac
  fi
done
Posted in bash

VLC remote control

Recently I was using VLC to listen to music, as I often do, and I wanted to pause without getting out of bed.

Lazy? Yes!

I learned that VLC includes a slew of remote control interfaces, including a built-in web interface as well as a raw socket interface.

In VLC Advanced Preferences, go to "Interface", and then "Main interfaces" for a list of options. I selected "Remote control" which is now known as "oldrc", and I configured a simple file based socket "vlc.sock" in my home directory as an experiment.

You can use netcat to send commands, for example,

twarnock@laptop:~ :) nc -U ~/vlc.sock <<< "pause"

Best of all VLC cleans up after itself and removes the socket file when it closes. The "remote control" interface is pretty intuitive and comes with a "help" command. I wrapped all of this in a shell function (in a .bashrc).

function vlcrc() {
 SOCK=~/vlc.sock
 CMD="pause"
 if [ $# -gt 0 ]; then
  CMD=$1
 fi
 if [ -S $SOCK ]; then
  nc -U $SOCK <<< "$CMD"
 else
  (>&2 echo "I can't find VLC socket $SOCK")
 fi
}

I like this approach because I can now send VLC commands from a scripted environment. I can build playlists, control the volume, adjust the playback speed, pretty much anything VLC lets me do. I could even use a crontab and make a scripted alarm clock!

And of course I can "pause" my music from my phone while lying in bed. Granted, there are apps for more user-friendly VLC smartphone remotes, but I like the granular control provided by a command line.

Posted in shell tips

datsize, simple command line row and column count

Lately I've been working with lots of data files with fixed rows and columns, and have been finding myself doing the following a lot:

Getting the row count of a file,

twarnock@laptop:/var/data/ctm :) wc -l lda_out/final.gamma
    3183 lda_out/final.gamma
twarnock@laptop:/var/data/ctm :) wc -l lda_out/final.beta
     200 lda_out/final.beta

And getting the column count of the same files,

twarnock@laptop:/var/data/ctm :) head -1 lda_out/final.gamma | awk '{ print NF }'
200
twarnock@laptop:/var/data/ctm :) head -1 lda_out/final.beta | awk '{ print NF }'
5568

I would do this for dozens of files and eventually decided to put this together in a simple shell function,

function datsize {
    if [ -e "$1" ]; then
        rows=$(wc -l < "$1")
        cols=$(head -1 "$1" | awk '{ print NF }')
        echo "$rows X $cols $1"
    else
        return 1
    fi
}

Simple, and so much nicer,

twarnock@laptop:/var/data/ctm :) datsize lda_out/final.gamma
    3183 X 200 lda_out/final.gamma
twarnock@laptop:/var/data/ctm :) datsize lda_out/final.beta
     200 X 5568 lda_out/final.beta
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-theta.dat
    3183 X 200 ctr_out/final-theta.dat
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-U.dat
    2011 X 200 ctr_out/final-U.dat
twarnock@laptop:/var/data/ctm :) datsize ctr_out/final-V.dat
    3183 X 200 ctr_out/final-V.dat
Posted in bash, shell tips