Category Archives: programming

My own chess engine

I’ve written a chess engine named Slonik. It implements the Universal Chess Interface (UCI), so you can download any popular chess interface, like Scid vs. PC or ChessBase, to analyze with or play against Slonik.
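
UCI itself is just a line-based text protocol spoken over stdin/stdout between the interface and the engine. A session looks roughly like this (abbreviated; the exact id line is whatever the engine reports, and the moves here are only an example):

GUI:    uci
engine: id name Slonik
engine: uciok
GUI:    position startpos moves e2e4
GUI:    go movetime 1000
engine: bestmove e7e5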

I wrote this engine from scratch and chose to write it in Python so that I could iterate quickly. That makes the engine slower, but maybe one day I will port it to C++. Still, I am happy with its playing strength, all things considered. The details of the engine are on the GitHub page, but to summarize:

  • Alpha-beta minimax with quiescence search (see the sketch below)
  • Bitboard piece/board representation
  • Various search heuristics, such as the history heuristic, extensions, reductions, etc.
  • Hand-coded evaluation function
  • Transposition hash table
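
To give a flavor of how the search at the heart of that list works, here is a minimal sketch of negamax alpha-beta with a quiescence search. This is only an illustration, not Slonik’s actual code, and the Position interface it assumes (legal_moves, captures, make, unmake, evaluate) is hypothetical:

# Minimal negamax alpha-beta with quiescence search, for illustration only.
# The Position methods used here are a hypothetical interface, not Slonik's.

INFINITY = 10**9

def quiescence(pos, alpha, beta):
    # Stand pat: take the static evaluation as a lower bound, then search
    # only captures so that we evaluate "quiet" positions.
    stand_pat = pos.evaluate()
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for move in pos.captures():
        pos.make(move)
        score = -quiescence(pos, -beta, -alpha)
        pos.unmake(move)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

def search(pos, depth, alpha=-INFINITY, beta=INFINITY):
    # Negamax formulation of alpha-beta minimax: the score is always from
    # the point of view of the side to move.
    if depth == 0:
        return quiescence(pos, alpha, beta)
    best = -INFINITY
    for move in pos.legal_moves():
        pos.make(move)
        score = -search(pos, depth - 1, -beta, -alpha)
        pos.unmake(move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # beta cutoff: the opponent won't allow this line
            break
    return best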

I plan to return to working on this engine’s AI — specifically to use deep learning and reinforcement learning techniques rather than the current hand-coded evaluation function.

WordPress backup script

In my previous post I showed my WordPress update script. However, it’s not safe to update without first backing everything up in case something goes wrong. This is a script that I adapted from this post. It backs up both files and the database.

#!/bin/bash

echo "In $0"

if [ $# -gt 0 ]; then
    NOW=$1
else
    NOW=$(date +"%Y-%m-%d-%H%M")
fi

FILE="maksle.com.$NOW.tar"
BACKUP_DIR="/home/private/backups"
WWW_DIR="/home/public/blog"

DB_HOST="dbhost"
DB_USER="backupUser"
DB_PASS="backupUserPassword"
DB_NAME="wp_db"
DB_FILE="maksle.com.$NOW.sql"

# WWW_TRANSFORM='s,^home/public/blog,www,'
# DB_TRANSFORM='s,^home/private/backups,database,'
WWW_TRANSFORM=',/home/public/blog,www,p'
DB_TRANSFORM=',/home/private/backups,database,'


# tar -cvf $BACKUP_DIR/$FILE --transform $WWW_TRANSFORM $WWW_DIR
tar -cvf $BACKUP_DIR/$FILE -s $WWW_TRANSFORM $WWW_DIR

mysqldump --host=$DB_HOST -u$DB_USER -p$DB_PASS $DB_NAME > $BACKUP_DIR/$DB_FILE

# tar --append --file=$BACKUP_DIR/$FILE --transform $DB_TRANSFORM $BACKUP_DIR/$DB_FILE
tar --append --file=$BACKUP_DIR/$FILE -s $DB_TRANSFORM $BACKUP_DIR/$DB_FILE
rm $BACKUP_DIR/$DB_FILE
gzip -9 $BACKUP_DIR/$FILE

You may have noticed that there is a commented-out version of the tar transform variable and command. My host has a version of tar (bsdtar 2.8.5) that doesn’t have the --transform option, but it does have an alternative -s option that does more or less the same thing. The idea is that inside the backup archive the paths will look like www/file.php rather than /home/public/blog/file.php, for example.

mysqldump has many options you can pass it, which you may want to look into. The --opt option is on by default and does what I want; it is probably good enough for most sites. The catch with --opt is that it locks the tables during the export, which also has implications for the permissions your backup user needs.

What backup user? Well, since you are storing the DB user and password in plain text in your script, you should not use your administrator user. It’s best to create a backup user with the minimal permissions necessary to do the backup. Ideally that would be just SELECT privileges, but with the mentioned --opt option, LOCK TABLES privileges are required too. Here’s how you’d set that user up:

MySQL> CREATE USER backup IDENTIFIED BY 'randompassword';
MySQL> GRANT SELECT ON *.* TO backup;
MySQL> GRANT LOCK TABLES ON *.* TO backup;

I call the above script from a cron job on my local computer:

#!/bin/bash

# Exit if any command fails
set -e
# Don't allow use of uninitialized variables
set -u


# Set up some variables
NOW=$(date +"%Y-%m-%d-%H%M")
BACKUP_DIR="$HOME/Documents/backups"
LOG_DIR="${BACKUP_DIR}/logs"
LOG_FILE="maksle-backup-$NOW.log"

# Make sure the log directory exists, then redirect standard output and
# error output to a log file.
mkdir -p "$LOG_DIR"
exec > >(tee -a "${LOG_DIR}/${LOG_FILE}")
exec 2> >(tee -a "${LOG_DIR}/${LOG_FILE}" >&2)

cd "$BACKUP_DIR"

# The cool part: Run my local wp-backup.sh on the remote web server.
ssh maksle 'bash -s' < ~/bin/wp-backup.sh $NOW

# Sync the remote server backup logs with the backups directory on my local machine. After all, what good are backups if your webserver is down and you can't access them?
rsync -havz --stats maksle:/home/private/backups/ $BACKUP_DIR

Of course, the remote server can get filled up with backups, so I have another script that keeps only the most recent backups on the server (the latest 5, in my case) and removes the rest. On my local machine I still keep backups going back as far as I want.

#!/bin/bash

set -e
set -u

# Error out if a command in a pipe fails
set -o pipefail

# Usage example:
# wp-remove-old-backups.sh /home/private/backups 5

WORKING_DIR=$1
cd $WORKING_DIR

# This would be 5 if called as in the Usage example 
declare -i allow=$2
# This gets the number of files in the directory, which we assume are all backup tgz files
declare -i num=$(ls | wc -l)

if [ $num -gt $allow ]; then
    # Remove all but latest files
    (ls -t | head -n $allow; ls) | sort | uniq -u | sed -e 's,.*,"&",g' | xargs rm -f
fi

The above command works by first printing the latest 5 files, and then all the files. This way the latest 5 files get printed twice. This allows uniq -u to filter out the latest 5, and the rest of the files get sent to their slaughter. The intermediate sed -e 's,.*,"&",g' makes it work when there are spaces in the filenames by wrapping the filenames in quotes (avoid spaces in filenames).

Of course, I call this script via a local cron job as well.

#!/bin/bash

set -e
set -u

NOW=$(date +"%Y-%m-%d-%H%M")
BACKUP_DIR="$HOME/Documents/backups"
LOG_DIR="${BACKUP_DIR}/logs"
LOG_FILE="maksle-backup-cleanup-$NOW.log"

mkdir -p "$LOG_DIR"

exec > >(tee -a "${LOG_DIR}/${LOG_FILE}")
exec 2> >(tee -a "${LOG_DIR}/${LOG_FILE}" >&2)

ssh maksle 'bash -s' < ~/bin/wp-remove-old-backups.sh "/home/private/backups" 5
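For reference, the two crontab entries on my local machine look something like this. The schedule and the script names wp-backup-local.sh and wp-backup-cleanup.sh are just placeholders for wherever you saved the two local scripts above:

# Back up the blog every night, then prune old backups on the server
30 2 * * * $HOME/bin/wp-backup-local.sh
45 2 * * * $HOME/bin/wp-backup-cleanup.sh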

I hope that will help someone out!

WordPress update script

WordPress offers the one-click update, but the file permissions required for that convenience are a security risk. For it to work, you essentially have to give all of the files to the web server’s group (usually something like web, apache, or nobody) and make them group-writable. Doing so trades security for convenience. Eventually there will be a security hole in the WordPress code, and with writable PHP files everywhere, hackers will make short work of it.

WordPress provides manual updating instructions, and even gives a few code snippets here and there, but there’s really nothing there that should require human intervention. This little script updates WordPress to the latest version. The script should live somewhere on the web server that is not accessible from the web, which is /home/private/update-wp in my case.

#!/bin/bash

set -u
set -e

# Cleanup from a previous call
rm -f latest.tar.gz
rm -rf wordpress
rm -rf backuptemp

# Get the latest, unzip it, and untar it
wget https://wordpress.org/latest.tar.gz
tar -xzvf latest.tar.gz

# The location of your wordpress install
blog=/home/public/blog

# Copy these just in case
mkdir backuptemp
cp $blog/wp-config.php $blog/.htaccess backuptemp

# These are the files to be deleted as mentioned in the WordPress Manual Update link
rm $blog/wp*.php
rm $blog/license.txt $blog/readme.html $blog/xmlrpc.php
rm -rf $blog/wp-admin $blog/wp-includes

# Copy the files to overwrite what we have
# It will leave files alone that are in $blog/wp-content but not in the latest bundle which is what we want
rsync -avz wordpress/ "${blog}/"
cp backuptemp/wp-config.php backuptemp/.htaccess $blog

echo "DONE"

If something goes wrong you have your daily backups to save you (because you are backing things up, aren’t you?). I will write another post shortly showing my WordPress files and database backup script.

First Pull Request

I have just made my first pull request on GitHub: https://github.com/magnars/expand-region.el/pull/148

My contribution was to Magnar Sveen’s awesome expand-region project. The fix was for nxml-mode: expanding the region inside an XML attribute was including the outer quotes before first expanding to just the inner value, and it was also not properly expanding to the attribute when there are namespaces in the attribute. This fix amends both.

Magnar messaged me that expand-region is headed for the EMACS core. Awesome! All contributors need to sign the Free Software Foundation copyright papers. See https://gnu.org/licenses/why-assign for reasons. I went ahead and emailed assign@gnu.org and signed away my copyright on this piece of code.

I’m pretty excited to see this go through, because it’s not every day that someone’s first-ever pull request also happens to make it into a major FSF project, let alone into EMACS core!

etags-update-mode

Just a few days ago I wrote my first EMACS minor-mode, called etags-update-mode. It updates your TAGS file on save. It’s heavily inspired by another package/minor mode with the same name by Matt Keller.

In order to update the tags for a file on save, Matt’s etags-update-mode calls a Perl script to delete any previous tags for the saved file from the TAGS file before appending the file’s new definitions. Also, in that package the minor mode is defined as a global minor mode.

I wanted the functionality that the package provided, but I didn’t want it to be a global minor mode (the only global minor mode I use and actually like having on everywhere is YaSnippet). I also didn’t see why there should be a reliance on Perl; I wanted to do it all in elisp.

So I wrote a much simpler version of etags-update-mode that is a regular minor mode and does all its work in EMACS. I’ll be updating it as I continue to use it.

EMACS etags

EMACS has an etags.el package that supports use of etags, the EMACS version of ctags. It tags your source code so you can jump directly to the source for a function, variable, or other symbol. I’ve been using it heavily with C++ and C# (though for C++, I’ve supplanted it with GNU Global, and there is an EMACS package for that too, ggtags).

I wanted the same functionality for XSLT, which I use heavily at work. Luckily, Exuberant Ctags and etags both support extending tagging to other languages by supplying regular expressions.

I put the following regular expressions in ~/.ctags:

--langdef=xslt
--langmap=xslt:.xsl
--regex-xslt=/<xsl:template name="([^"]*)"/\1/
--regex-xslt=/<xsl:template match="[^"]*"[ \t\n]+mode="([^"]*)"/\1/
--regex-xslt=/<xsl:variable name="([^"]+)"/\1/

… and generate the TAGS file

ctags -e -o TAGS *.xsl
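
For example, if a stylesheet contains <xsl:template name="format-date"> (a made-up name), the first rule above produces a tag named format-date, and M-. format-date RET in EMACS jumps straight to that template (after visiting the TAGS file with M-x visit-tags-table).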

I can now jump to the definition of any variable or template in my xsl files!

First try at Data Munging

I’ve been taking the Udacity course Exploratory Data Analysis and decided that I wanted to try my hand at a real data set that I cared about. I ran into several obstacles that are probably common, and I hope that this will help someone else.

The data I cared about was in SQL Server, so first I got the data out:

bcp "select .. from .. where .." queryout data.dat -c -t"||||" -S server -U user -P pass

I chose “||||” as my delimiter because I was fairly sure that no value had four pipe characters in a row. It’s much easier to search the file for a good delimiter once it’s in a text file. Once the data was out, I searched through the file data.dat and found that there were no asterisks in the entire file, so I replaced all “||||” with “*” as my delimiter.

sed -i 's/||||/*/g' data.dat

I tried to load this into R with mydata <- read.csv("data.dat", sep="*") but ran into a problem:

Warning messages:
1: In read.table(file = file, header = header, sep = sep, quote = quote, :
line 2 appears to contain embedded nulls

I eventually realized that anything that was either NULL or an empty string in the SQL Server database comes out as 0x00, a binary null character. EMACS displays the binary null as ^@. I replaced these null characters with ‘NA’ in EMACS with M-x replace-string RET ^@ RET NA RET. As a side note, you can position the cursor on a symbol you want to know about and do M-x describe-char; it will tell you a lot of information about it. Another way to replace the symbol, if you haven’t experienced the life- and file-altering wonders of EMACS, is

sed -i 's/\x0/NA/g' data.dat

Now I tried read.csv and it seemed to work without errors, but I noticed that the number of ‘observations’ that R thinks are in the file (dim(mydata)) is not the same as the number of lines in the file, so I knew something was wrong. To see the number of lines in a file you can do wc -l data.dat in the terminal.

It took me quite some time to figure it out. The following finally worked correctly:

mydata <- read.table("data.dat", na.strings=c("", "NA"), sep="*", comment.char="", quote="")

?read.csv reveals that it actually calls read.table internally and makes some assumptions for you. One of those assumptions is sep=",", but we specified that ourselves. The ones that got me were comment.char and quote. read.csv assumes that comment.char is "", which disables commenting altogether and is good (for my data), but read.table sets it to "#". Additionally, read.csv sets quote="\"" by default. Initially, after switching from read.csv to read.table, I started getting errors like this:

Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, :
line 9237 did not have 8 elements

I checked the line it complained about, but it had 8 elements. I know that sometimes errors happen earlier than where the error message indicates. For a sanity check, I wrote this quick little ditty in Python to check the element count on each line:

#!/usr/bin/env python

linenum = 0
badlines = []

# Count the fields on each line and record the line numbers that don't have 8
with open('data.dat', 'r') as orders:
    for line in orders:
        linenum = linenum + 1
        fields = line.split('*')
        if len(fields) != 8:
            badlines.append(linenum)

print(badlines)

However, this came back with an empty list, so I knew that something else was going on. Once I took a closer look at the documentation, though, and set quote="", disabling quoting altogether, I finally had no errors and the correct number of observations.

Also, while in the help page for read.table/read.csv, I found that na.strings was helpful to tell R to interpret blank fields as NA. By setting na.strings=c("", "NA"), we're telling R to interpret both "" and "NA" as NA.

There's more data manipulation I may need to do, but for now I can finally start looking at the data.