Unread counts from thunderbird

September 10, 2020 on 2:09 pm | In geeky | No Comments

I wanted counts of unread emails on my dwm status bar, and could not find any nice way to get those from thunderbird. With a few pointers from the intarnets, I wrote this quick'n'dirty perl script to get the info in a reasonably interactive and resource-friendly manner.

It abuses tail -f to read the .msf files under $HOME/.thunderbird/*.default (note: new boxen won't appear in the results without a restart of the script).

The script is probably buggy and there are most likely cases where it fails dramatically, but it seems to work well enough.

Prompt silliness

July 25, 2020 on 3:24 pm | In geeky | No Comments

While having fun with terminal colors, the old-school 16 colors, the extended 256 colors, and the more extended 24-bit colors, I noticed that the 256 color palette is modifiable with OSC codes (this seems to work with st, xterm, rxvt and gnome-terminal; and NOT with at least putty and qterminal).

Palette changing does not sound very exciting when you also have 24-bit colours, but it does have its perks. Namely: changing the palette also affects all blocks already appearing on screen. Hence by side-loading palette control codes, you can change what is shown on the screen without actually touching the contents.
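
The sequence doing the heavy lifting here is OSC 4, which redefines a palette slot; a tiny Python helper to construct one (the helper name is mine, not from any of the utilities):

```python
def palette_osc(index, r, g, b):
    # OSC 4 ; index ; rgb:rr/gg/bb, terminated by ST (ESC \)
    return "\033]4;%d;rgb:%02x/%02x/%02x\033\\" % (index, r, g, b)

# Writing e.g. palette_osc(1, 0, 255, 0) to the tty turns everything
# drawn in color 1 green, including cells already on screen.
```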

And that led to the incubation of a little utility that puts a clock on your terminal that updates even when the shell is idle, which I then extended a bit with a 4-bar cpu meter. The resulting utility outputs a string with the appropriate placeholders (to be put into PS1: PS1="$(prompt)"). The output is escaped in bash style. The utility spawns a background process that periodically feeds palette changes to the tty.

Another utility abusing the same mechanism is nyancat. (I should probably rename it, as nyancat is already a thing; but it is cat, and it does nyan, so whatchamacallit...). When fed the -g or -n flag, it uses the same palette tricks to make your eyes bleed with rainbows everywhere.

While pushing the latter to git, I also wrote a small ANSI code "syntax highlighter" for gitea; the resulting snippet works well enough for my purposes.

Exporting tokens from FreeOTP

July 25, 2020 on 2:40 pm | In geeky | No Comments

After using FreeOTP on a single device for a while, I came to the conclusion that it would be safer to store the tokens on multiple devices, in case one is out of battery or out of arm's reach.

I didn't find any option in the UI to do it, so I checked the data over adb, and as the tokens were stored as JSON-in-XML, I whipped up a quick perl script to convert the data to a list of otpauth:// URIs.

#!/usr/bin/perl -w

use XML::LibXML;
use JSON;
use MIME::Base32;
use URI::Encode qw/uri_encode/;

my $p = XML::LibXML->new();
my $d = $p->load_xml(string => join '', <<>>);
my $j = JSON->new();

for ($d->findnodes('/map/string')) {
	my $l = $_->findvalue('@name');
	next if $l eq 'tokenOrder';
	my $d = $j->decode($_->findvalue('.'));
	$d->{secret} = encode_base32(join('', map {
		chr(($_ + 256) % 256)
	} @{$d->{'secret'}}));
	print "otpauth://" . lc($d->{'type'}) . "/" .
		uri_encode($l) . "?" .
		join("&", map { $_ . "=" . uri_encode($d->{$_}) } keys %$d) .
		"\n";
}

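The fiddly part of the script above is the secret: FreeOTP's JSON stores it as an array of signed bytes, which have to be normalized to 0-255 before base32 encoding. The same transform in Python terms (a sketch for illustration, not part of the original script):

```python
import base64

def secret_to_base32(signed_bytes):
    # FreeOTP stores the secret as signed bytes (-128..127);
    # normalize to 0..255 and base32-encode, as otpauth:// URIs expect
    raw = bytes((b + 256) % 256 for b in signed_bytes)
    return base64.b32encode(raw).decode('ascii')
```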
The output can then be fed to qrencode or similar to get the tokens added to another device.

I used:

adb shell su -c 'cat /data/data/org.fedorahosted.freeotp/shared_prefs/tokens.xml' | ./otp.pl | while read u; do qrencode -o - "$u" | display -; done

to view the codes one by one. (Note: this method needs a rooted device; it may be possible without root, but I didn't look into it.)

nginx, cgit and git-http-backend

January 27, 2018 on 11:34 am | In geeky | 1 Comment

Sounds simple, right? Plug cgit and git-http-backend into nginx to get a nice web interface and a working clone URL. And pushable too, of course. It turned out not to be quite that easy, but it is doable with some quirks.

There are plenty of instructions for parts of this lying around, but I didn't find one that catches 'em all, so some cuttin', pastin' and retryin' was needed. The end result nginx configuration:

        location ~ "(?x)^/git(?<path>/.*/(?:HEAD |
                                     info/refs |
                                     objects/(?:info/[^/]+ |
                                                [0-9a-f]{2}/[0-9a-f]{38} |
                                                pack/pack-[0-9a-f]{40}\.(?:pack |
                                                                           idx)) |
                                     git-upload-pack))$" {
                error_page 491 = @auth;
                if ($query_string = service=git-receive-pack) {
                        return 491;
                }

                client_max_body_size                    0;

                fastcgi_param   SCRIPT_FILENAME         /usr/lib/git-core/git-http-backend;
                include         fastcgi_params;
                fastcgi_param   GIT_HTTP_EXPORT_ALL     "";
                fastcgi_param   GIT_PROJECT_ROOT        /srv/git;
                fastcgi_param   PATH_INFO               $path;

                fastcgi_param   REMOTE_USER             $remote_user;
                fastcgi_pass    unix:/var/run/fcgiwrap.socket;
        }

        location ~ "^/git(?<path>/.*/git-receive-pack)$" {
                error_page 491 = @auth;
                return 491;
        }

        location @auth {
                auth_basic            "Git write access";
                auth_basic_user_file  /srv/git/.htpasswd;

                client_max_body_size                    0;

                fastcgi_param   SCRIPT_FILENAME         /usr/lib/git-core/git-http-backend;
                include         fastcgi_params;
                fastcgi_param   GIT_HTTP_EXPORT_ALL     "";
                fastcgi_param   GIT_PROJECT_ROOT        /srv/git;
                fastcgi_param   PATH_INFO               $path;

                fastcgi_param   REMOTE_USER             $remote_user;
                fastcgi_pass    unix:/var/run/fcgiwrap.socket;
        }

        location ~ "^/git(?<path>/.*)$" {
                alias /usr/share/cgit;
                try_files $1 @cgit;
        }

        location @cgit {
                include         fastcgi_params;
                fastcgi_param   SCRIPT_FILENAME /usr/lib/cgit/cgit.cgi;
                fastcgi_param   PATH_INFO       $path;
                fastcgi_param   QUERY_STRING    $args;
                fastcgi_param   HTTP_HOST       $server_name;

                fastcgi_param   CGIT_CONFIG     /srv/git/.cgitrc;

                fastcgi_pass    unix:/var/run/fcgiwrap.socket;
        }

cgit also requires configuration. It could be done system-wide with /etc/cgitrc, but I opted for defining the CGIT_CONFIG environment variable to point to a custom path. The .cgitrc ended up something like this:



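A minimal .cgitrc matching the paths in the nginx configuration could look like the following sketch (scan-path and the asset paths are assumptions, not from the original setup):

```
# serve cgit under /git, matching the nginx location blocks
virtual-root=/git

# where the bare repositories live
scan-path=/srv/git

# static assets, served from the aliased /usr/share/cgit
css=/git/cgit.css
logo=/git/cgit.png
```
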
The binary and socket paths, and the cgit data path, are those used by the Debian default configuration; they may need adjustment for different installations. I tried to get rid of the virtual-root directive in cgitrc, but that would require setting SCRIPT_PATH, which fcgiwrap eats away.

For more access control, you could grab the repository name from the request paths: "(?<path>/(?<repo>.*)/", and integrate $repo into the auth_basic_user_file.
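
For instance, an untested sketch of that idea, with per-repository password files:

```
# hypothetical: use the $repo capture to pick the password file
location @auth {
        auth_basic            "Git write access";
        auth_basic_user_file  /srv/git/$repo/.htpasswd;
        # ...rest as in the @auth block above
}
```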

node.js and wordpress sessions

February 2, 2017 on 11:27 am | In geeky | No Comments

For a leetle project, I needed a way to validate a wordpress session from node.js. WordPress uses a somewhat complicated session system, with HMACs and a fragment of the password hash in the mix, and I was unable to find a ready puzzle piece for the purpose. So I wrote my own.
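
From reading WordPress's wp_validate_auth_cookie, the scheme boils down to roughly this (a Python sketch of my understanding of the WP >= 4.0 cookie layout, not the module's actual code; the expiry check is omitted):

```python
import hashlib
import hmac

def validate_logged_in_cookie(cookie, key, salt, pass_frag):
    # cookie layout: username|expiration|token|hmac
    # key/salt are LOGGED_IN_KEY/LOGGED_IN_SALT from wp-config.php;
    # pass_frag is a 4-char substring of the user's password hash
    username, expiration, token, mac = cookie.split('|')
    # wp_hash(): HMAC-MD5 keyed with LOGGED_IN_KEY . LOGGED_IN_SALT
    wp_key = hmac.new((key + salt).encode(),
                      '|'.join([username, pass_frag, expiration, token]).encode(),
                      hashlib.md5).hexdigest()
    # the cookie HMAC itself is SHA-256, keyed with the derived key
    calc = hmac.new(wp_key.encode(),
                    '|'.join([username, expiration, token]).encode(),
                    hashlib.sha256).hexdigest()
    return username if hmac.compare_digest(calc, mac) else None
```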

The result is a javascript module. Sample usage:

var wps = require('wpsess');
var vdtor = new wps.Validator('/path/to/my/wp-config.php');
vdtor.validate('value_of_my_logged_in_cookie', function (err, username) {
    if (err)
        console.log('Authentication failed: ' + err);
    else
        console.log('Logged in user: ' + username);
});

It is far from perfect, but it works well enough for me.

xterm.js + ssh2 + websocket-stream + browserify = webssh

January 2, 2017 on 11:24 am | In geeky | No Comments

While pondering ideas for a website facelift, I had the idea of having a terminal for easy ssh access. However, none of the readily available options really suited my fancy; wssh came close, but it was actually doing the ssh on the server side and throwing the raw data inside websockets.

Fortunately npm is full of nice bits and pieces to build on, and browserify makes it quite easy to use most of them in a browser too.

Long story short, in the end it only required about a screenful of glue code to tie the bits together, and voilà. Place the code in webssh.js, and grab the test html file as well.

$ npm install xterm ssh2 websocket-stream browserify
$ `npm bin`/browserify webssh.js > bundle.js
$ websockify 8022 localhost:22

(You'll need npm and websockify for the above.) Then open the html file in your favorite browser and log in. Tune the addresses to your liking.

There's also a demo version: just enter your websocket endpoint (for example ws://localhost:8022), username and password, and off you go.

CSS / SVG filters for fun and profit

September 8, 2016 on 6:02 pm | In geeky | No Comments

Nowadays the web technologies support nice and fancy things, such as CSS filters. The basic filters are pretty nice for many interactions, like hover effects etc. However, there is also support for SVG effects, which can be really complex and produce some really nice results. I wanted to share some little hacks I had fun with. Unfortunately browser support is not quite there yet; in my experience these work best in Firefox, but YMMV.

First up, a very simple "bloom" filter:

<feGaussianBlur in="SourceGraphic"
        stdDeviation="5" />
<feComposite in2="SourceGraphic"
        operator="arithmetic" k2="0.8" k3="1" />

This is very simple: just blur the image and add it to the original; this makes lighter areas "leak" and look bright.

Next up, a slight variation of the previous, an unsharp mask:

<feGaussianBlur in="SourceGraphic"
         stdDeviation="5" />
<feComposite in2="SourceGraphic"
         operator="arithmetic" k2="-0.3" k3="1.3" />

Here, instead of adding the blurred version, we subtract it from the original, giving areas with higher contrast some depth.

Next up, an homage to simpler times:


<!-- Pixelize the image using a 2x2 pixel image and displacement map -->
<feImage width="2" height="2" xlink:href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAIAAAACCAIAAAD91JpzAAAAEklEQVQI12P4/5/hPwMDA4QAACfmA/2h2gQ5AAAAAElFTkSuQmCC" />
<feTile />
<feDisplacementMap in="SourceGraphic" xChannelSelector="G" yChannelSelector="R" scale="1" result="pxl" />

<!-- Map all color channel values <0.25 to 0, and >0.33 to 0.5 -->
<feColorMatrix type="matrix" in="pxl" values="1 0 0 0 -0.33 0 1 0 0 -0.33 0 0 1 0 -0.33 0 0 0 1 0" />
<feColorMatrix type="matrix" values="10000 0 0 0 0 0 10000 0 0 0 0 0 10000 0 0 0 0 0 1 0" />
<feColorMatrix type="matrix" values="0.5 0 0 0 0 0 0.5 0 0 0 0 0 0.5 0 0 0 0 0 1 0" result="halftones" />

<!-- Map all color channel values <0.66 to 0 and >0.75 to 1 -->
<feColorMatrix type="matrix" in="pxl" values="1 0 0 0 -0.66 0 1 0 0 -0.66 0 0 1 0 -0.66 0 0 0 1 0" />
<feColorMatrix type="matrix" values="10000 0 0 0 0 0 10000 0 0 0 0 0 10000 0 0 0 0 0 1 0" />

<!-- Add the two together -->
<feComposite in2="halftones" operator="arithmetic" k2="1" k3="1" />

This effect uses a displacement map to turn the image into 2x2-pixel squares for the retro feeling, and abuses color matrix result clamping to reduce the colors so that each channel value can only be 0, 0.5 or 1, i.e. 27 different colors. The same quantization could also be achieved using feComponentTransfer with big lookup tables for each feFunc[RGB].
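
A sketch of that alternative; this simple version splits each channel into thirds rather than matching the exact thresholds of the matrices above, which would need longer tables with repeated entries:

```xml
<feComponentTransfer>
  <!-- "discrete" maps each third of the input range to one table value -->
  <feFuncR type="discrete" tableValues="0 0.5 1" />
  <feFuncG type="discrete" tableValues="0 0.5 1" />
  <feFuncB type="discrete" tableValues="0 0.5 1" />
</feComponentTransfer>
```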

And last, probably also the least, even simpler times:


Check the source if you want to see how it is made; essentially it uses the same tricks as the retro filter, but makes the tiles 8x16 and, instead of using the quantized color values, uses images of 8x16 console font characters to get an ascii-art-ish result. This can be made pixel-perfect on Firefox, but to have it at least somehow working in Chrom(ium) too, some characters appear distorted.

Made conky eval useful

November 9, 2015 on 8:44 am | In geeky | No Comments

Conky's eval seems rather useless; at least I couldn't get it to do anything I wanted, so I wrote a little patch to make it more useful (to me):

diff --git a/src/conky.c b/src/conky.c
index 5848b61..8702cea 100644
--- a/src/conky.c
+++ b/src/conky.c
@@ -1103,7 +1103,9 @@ void generate_text_internal(char *p, int p_max_size,
 #endif /* IMLIB2 */
                        OBJ(eval) {
-                               evaluate(obj->data.s, p, p_max_size);
+                               char buffer[max_user_text];
+                               evaluate(obj->data.s, buffer, sizeof(buffer));
+                               evaluate(buffer, p, p_max_size);
                        }
                        OBJ(exec) {
                                print_exec(obj, p, p_max_size);

Probably not the best thing ever, but it seems to do the trick for me; now I can get the address of the interface connected to the big bad internets with (note: this won't work correctly when multiple interfaces have a default route):

 ${eval $${addr ${gw_iface}}}

The patch applies against conky 1.9.0, unfortunately not against the heavily rewritten git master.

Update: a similar patch I made against 1.10.x has been merged into upstream conky; yay!

Simple SNMP proxying

August 20, 2015 on 7:34 am | In geeky | No Comments

I recently needed to change my modem due to a technology change, and the new modem does not like to talk SNMP to the big bad Internet. It does, however, happily do so on the local network. While this is probably not an issue for many, I do not have any decently powered server at home, only a rented one elsewhere that does my network statistics.

To overcome the issue, I needed to get my OpenWRT router to bounce the SNMP packets to the modem and relay the replies back. And while this is seemingly possible with net-snmp's snmpd, the configuration is far from straightforward and the OpenWRT packages seem to be lacking in that regard.

Hence I wrote my very own, very simple proxy daemon. It is not smart at all and simply sends SNMP replies back to whoever made the last request, but it works for my purposes. If you are in a similar need, you can grab the source.
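
The core of such a daemon fits in a few lines; a Python sketch of the same idea (the original is not Python, and this skips daemonizing and error handling):

```python
import socket

def run_proxy(sock, agent_addr):
    """Forward datagrams from clients to the SNMP agent, and send the
    agent's replies back to whoever made the last request."""
    last_client = None
    while True:
        data, src = sock.recvfrom(65535)
        if src == agent_addr:
            # a reply from the agent: bounce it to the last requester
            if last_client is not None:
                sock.sendto(data, last_client)
        else:
            # a request from outside: remember the sender, relay onward
            last_client = src
            sock.sendto(data, agent_addr)
```

Bind the socket to the SNMP port on the outside-facing interface and point agent_addr at the modem.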

I also made a slightly more complex version that allows redirection of different community names to different SNMP servers/agents (no community name rewriting).

Keyboard URL launching in gnome-terminal

December 11, 2014 on 8:06 am | In geeky | No Comments

While I have been trying to make my desktop experience lighter, I somehow keep coming back to gnome-terminal; I guess it's what I'm used to and what I know how to configure with anti-aliased fonts etc. However, the fact that it is missing the ability to launch URLs from the keyboard was hindering my dwm workflow, so I decided to do something about it.

First I tried a different terminal, namely rxvt-unicode, which has the functionality, but when I couldn't configure it to my liking in reasonable time (freetype font spacing, mostly), I decided to see if I could hack gnome-terminal into doing what I wanted.

At first I used a quite naïve method, just going over the characters one by one (from end to start) and calling the VTE match check method for each; and while it worked reasonably well, it turned out to be terribly slow in some scenarios, with a single scan taking more than a second on a reasonably powerful laptop.

Since gnome-terminal also stores the regexes internally, it seemed like a good idea to grab the whole contents of the terminal and run the regexes on that, to see if the results would come faster. And indeed they did, but this yielded a slight bug: some URIs are reported twice if they overlap. Lazy as I am, I chose to ignore this and just go with it.

After finding the URIs, it was just a matter of drawing them somehow and handling the keypresses to launch them. I chose a very simple launching mechanism, resembling the hints mode of vimperator/pentadactyl. Each hint is allocated a character from a set (after the set runs out, the remaining URIs are ignored). Then I drew these hints over the terminal as if they were tooltips, and voilà.
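
The hint allocation is as simple as it sounds; in Python terms (the patch itself is C, and the character set here is made up):

```python
HINT_CHARS = "jfkdlsahg"  # made-up set; home-row keys first

def assign_hints(uris):
    # one hint character per URI; URIs beyond the set are dropped,
    # since zip() stops at the shorter sequence
    return list(zip(HINT_CHARS, uris))
```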


The resulting patch is 342 lines (226 lines added) and can be found here. It should apply cleanly on top of gnome-terminal 3.6.2-0ubuntu1 (Ubuntu 14.10), and with fuzz on vanilla 3.6.2. Applying on top of master takes some handyman work.
