Today I found myself wanting to produce a quick slideshow video combining some photos and some music. A pretty awesome tool for this is PhotoFilmStrip. I tried a few others, but this was the easiest I found that also produced excellent output quality. It did have a couple of limitations, which I resolved with a very modest bit of hacking that I'm going to share here.

Adding H264 Support

The first thing I wanted was output in H264 MP4 format for viewing in XBMC. After digging through a bit of Python code I found that a few small mods would achieve this. I've posted these changes as a Gist so others can use them. Note that you should back up the two relevant files in case of problems, and you'll also need to edit them as root.

I just added a new class, similar to the MPEG4-XVID one, with workable Mencoder options, and then made it available in the list shown by the render dialog. This worked quite well for me and I tested the output files in XBMC (after a bit of tooling around with test values). There are some warnings in the error log file, but these did not seem to cause any problems.
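To give a feel for what's involved, here's a sketch of the kind of Mencoder command-line options that produce H264/MP4 output. This is illustrative only - the function name and defaults are mine, not the actual Gist or PhotoFilmStrip's renderer API:

```python
# Hypothetical sketch: assembling MEncoder arguments for H.264 output via
# x264, muxed into MP4 with libavformat. Values are illustrative defaults.

def build_mencoder_cmd(out_path, bitrate=6000, fps=25):
    """Return an argv list invoking mencoder to encode raw frames as H.264 MP4."""
    return [
        "mencoder",
        "-",                                    # read frames from stdin
        "-demuxer", "rawvideo",
        "-ovc", "x264",                         # video codec: x264 (H.264)
        "-x264encopts", "bitrate=%d:threads=auto" % bitrate,
        "-ofps", str(fps),
        "-of", "lavf",                          # libavformat container muxer
        "-lavfopts", "format=mp4",
        "-o", out_path,
    ]
```

Dropping something like this into a new renderer class (mirroring how the MPEG4-XVID one passes its options) is essentially all the Gist does.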

Disabling Ken Burns

The next thing I found a bit limiting was not being able to easily bypass the pan/zoom "Ken Burns" effect. You can stop it manually by clicking the lock icon (between the start/end images) and then adjusting the image scaling (with scroll wheel/mouse movements), but it's not accurate and must be done for every image in the slideshow. So I went looking for a way to edit the slideshow control settings and found they're stored in a plain SQLite3 database file. That's cool. With a bit more fiddling I found I could write a one-line SQL statement that instantly sets all images to center stage with no pan/zoom - like a simpler slideshow program might do. The nice thing about this method is you could potentially get very fancy by generating the slideshow pan/zooms with a small script. I don't need that now, but it's nice to know it could be done quite easily.

So here's the very simple SQL for centering in a fixed position (at HD resolution 1280x720; change accordingly if desired):

update picture set start_left=-(1280-width)/2,start_top=-(720-height)/2,start_width=1280,start_height=720, target_left=-(1280-width)/2,target_top=-(720-height)/2,target_width=1280,target_height=720;

You can put this in a file and pipe it into sqlite3 on the command line or use echo, like this:

echo "update picture set start_left=-(1280-width)/2,start_top=-(720-height)/2,start_width=1280,start_height=720, target_left=-(1280-width)/2,target_top=-(720-height)/2,target_width=1280,target_height=720;" | sqlite3 path/to/slideshow.pfs

If you want to avoid the command line then any sqlite3 editor can be used. I initially tested this with the SQLite Manager Firefox plugin by selecting the pfs file and then executing the sql statement from there.
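If you'd rather script it, the same update can be applied with Python's built-in sqlite3 module. A minimal sketch, assuming the `picture` table schema shown in the SQL above; the pfs path is a placeholder:

```python
# Apply the centering update to a PhotoFilmStrip project file from Python.
# Assumes the 'picture' table columns used in the one-liner above.
import sqlite3

def center_all_images(pfs_path, out_w=1280, out_h=720):
    """Set every picture's start and target rectangles to a fixed,
    centered position at the given output resolution (no pan/zoom)."""
    con = sqlite3.connect(pfs_path)
    with con:  # commit on success, rollback on error
        con.execute("""
            UPDATE picture SET
              start_left   = -(? - width)  / 2,
              start_top    = -(? - height) / 2,
              start_width  = ?, start_height  = ?,
              target_left  = -(? - width)  / 2,
              target_top   = -(? - height) / 2,
              target_width = ?, target_height = ?
        """, (out_w, out_h, out_w, out_h, out_w, out_h, out_w, out_h))
    con.close()
```

From here it's a small step to the fancier idea above: compute per-image start/target rectangles in a loop instead of one blanket UPDATE.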

As a side note, something that's not immediately clear with PhotoFilmStrip: when you add an audio file to the project, it forces the slideshow length to match the audio and adjusts the timing of each image (proportionately, according to the time setting on each) so the sum of all image times matches the audio. This is handy, as long as that's what you want.
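That proportional rescaling behaviour can be sketched in a few lines. This is my reading of what the app does, not its actual code:

```python
# Sketch of proportional rescaling: each image keeps its relative share of
# the slideshow, but the total duration is stretched or shrunk to match the
# audio track length.

def rescale_durations(image_secs, audio_secs):
    """Scale each image's duration so the total equals the audio length."""
    total = sum(image_secs)
    return [t * audio_secs / total for t in image_secs]

# e.g. three images at 5s each against a 30s track become 10s each
print(rescale_durations([5.0, 5.0, 5.0], 30.0))  # [10.0, 10.0, 10.0]
```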

It would be nice to be able to set a background image for the slideshow. I haven't looked into this yet. Maybe I will some day.


Last summer I wrote an Android app. It was my first Java in more than a decade, and mostly my intent was to figure out what this platform was all about. I needed something fairly easy, but it still had to touch on many aspects of the user interface and use enough API calls to be a good learning vehicle. A dice bias/fairness testing app seemed a good fit.

The idea for this sprang from my recent use of dice for Bitcoin wallet seed generation. I had done a bit of research into whether dice were good for this or had noticeable bias. I came across an excellent pair of blog posts on Delta's D&D Hotspot from several years earlier, where he laid out the details of using Pearson's Chi-Square Test for evaluating dice. A later post also goes into more detail about the "power" of this test. Both are well worth reading if you care about evaluating dice bias and the math behind it.

I'm not much of a mathematician. I could never really handle the theorems-and-proofs side, but I was not bad at actually using calculus, differential equations and linear algebra. I gathered enough from the above articles to move ahead and work out the code for the basic math for this app. To the best of my knowledge, and from testing, it appears to work correctly. Regardless, I have had a couple of responses from users who were greatly disappointed in the fairness of common (usually cheap, round-edged) dice.

The overall idea was that using the Chi-Square test meant jotting down dice values from a large set of rolls, and then doing a few calculations to give a statistical indicator of fairness. It seemed like an app could save the manual work, and also allow for keeping a dice log so that you could do an ongoing evaluation (while using the dice in a game of some sort). This is what apps are good for - removing the grunt work.
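The basic math boils down to just a few lines. Here's a minimal Python sketch of Pearson's chi-square statistic for a set of die rolls, as laid out in the Delta's D&D Hotspot posts (the names are mine, not the app's code):

```python
# Pearson's chi-square statistic for die rolls: sum over faces of
# (observed - expected)^2 / expected, where a fair n-sided die expects
# an equal count per face.
from collections import Counter

def chi_square(rolls, sides=6):
    """Return the chi-square statistic for observed rolls of an n-sided die."""
    counts = Counter(rolls)
    expected = len(rolls) / sides
    return sum((counts.get(face, 0) - expected) ** 2 / expected
               for face in range(1, sides + 1))
```

The result is compared against the critical chi-square value for (sides - 1) degrees of freedom - roughly 11.07 at the 5% level for a d6 - to judge whether the observed bias is statistically significant.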

Having coded it up over a few days, I tested a few dice and some coins. Then I promptly put it aside and forgot about it. Come December, I thought: why not put this up in an app store for others to use? That's when I discovered you actually have to pay to put an app on the Google Play Store, and I wasn't much interested in that. I did check a few other options and found I could put it on Amazon for free. So that's what I did. Over the next month I think I had one single download. Obviously there isn't much interest in dice testing, or maybe the Amazon App Store isn't popular, or both.

After coming across some dice questions on /r/Bitcoin last month I also put the same signed APK up on Github for reddit users to test out. I kind of hinted at eventually releasing the source code. Today I finally added an MIT open source license and put up everything (with a signed APK for those not wanting to build it). All part of my new effort to be more social and build up a web presence - or in other words, "actually do something" for a change. I think I'll also look at submitting it to F-Droid, though they make it sound somewhat arduous.

Maybe some netizens will actually build it, use it, test'em dice. Let me know if you do. Cheers.


When I first started this project I thought I'd just whip up a small program to convert the raw blockchain data into a SQL database for doing some queries, and possibly extend it to pulling out data for some nifty animated data visualization sequences. My idea was to start with RPC calls and later patch on a front end so that it could talk directly on the network.

The problem quickly became processing time and speed. While my code started out fabulously quick, converting blocks at a rate of 80,000 per second, it always degenerated into a sluggish crawl of 0.25 blocks per second well before finishing. At first I thought SQL indexing was the problem, and I looked at various ways to cut down on indices and avoid building them on the fly. It wasn't long before I determined that the real problem was the sheer number of transactions per block, and the number of linkages between them, that made progress painful.

I adapted my code in various ways. Initially I was working with SQLite3. During the indexing-problem phase I thought SQLite3 couldn't handle large databases because of index table rewriting. This is probably still true, but to a lesser extent than I thought. As I progressed and ran speed tests, I saw that the actual query rate didn't drop off so much as the number of transactions per block, and inputs/outputs per transaction, increased massively - and this was the underlying reason for the slowdown. But before that realization I'd already migrated to MySQL, hoping to find better index handling. While MySQL was somewhat better, it didn't solve the problem of slow data conversion.

The real problem is that SQL indexes are inherently based on ordering of data (B-trees), and the blockchain is primarily unordered data. Sure, the blocks are ordered, but the block-related data (headers) is a very small part of the overall mass, adding up to only around 30MB of the current 40GB that makes up the blockchain. Once you look inside blocks, the connectedness is primarily hashes - key-value type linkages. And the connections are between input-output pairs that span the entire blockchain. Which, dummy me, is why the blockchain software uses leveldb, hash maps and key-value indexing. And yet a SQL interface is so useful for asking the kinds of questions that are external to how the blockchain functions.
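The key-value nature of those linkages is easy to illustrate: an input references a prior output by (txid, vout), which is a natural hash-map key with no useful ordering. A toy sketch (made-up data, nothing like the real converter):

```python
# Toy illustration: tracking unspent outputs in a hash map. Linking an
# input to the output it spends is a single O(1) lookup by (txid, vout),
# with no ordered index involved.

utxo = {}  # (txid, vout) -> value in satoshis

def add_output(txid, vout, value):
    utxo[(txid, vout)] = value

def spend_input(txid, vout):
    """Consume the referenced output and return its value."""
    return utxo.pop((txid, vout))

add_output("aa11", 0, 5000)
add_output("aa11", 1, 2500)
print(spend_input("aa11", 1))  # 2500
```

A B-tree index can answer the same lookup, but every insertion pays the cost of maintaining an ordering the data never needed.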

After all that messin' about, I do have a rather flabby conversion utility that will suck up blockchain data and run crazy with SQL insertions in spastic fits, building a full SQL database. At every step I tried to keep data size to a minimum, sometimes even at the cost of query convenience (more on this later). You see, I'd read that some SQL conversions were resulting in 300GB databases and needed high-performance servers with SSD RAID arrays. I didn't want to go down that road if at all possible, and needed to work with a regular sluggish hard disk using less space than the actual blockchain. At least that was the goal, so far sadly missed. Hint: don't try working with the blockchain in SQL format without an SSD. There are simply too many queries/insertions - or more basically, too many IOPS. I tried all sorts of ways to cut down on these IOPS. I wrote out unindexed data tables with intermediate table linkages to be "fixed up" later. In the end, the cost of a fully connected, useful-to-query SQL database is a gazillion IOPS.
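One of the simpler IOPS-cutting tricks, sketched here with SQLite3 rather than my converter's actual schema: batch many insertions into a single transaction with executemany, so the engine doesn't sync the journal after every row.

```python
# Batched insertion sketch: one transaction around 10,000 rows instead of
# 10,000 autocommitted statements. Table and columns are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE outputs (txid TEXT, vout INTEGER, value INTEGER)")

rows = [("aa11", n, 1000 * n) for n in range(10000)]
with con:  # a single transaction: one commit, one sync
    con.executemany("INSERT INTO outputs VALUES (?, ?, ?)", rows)

print(con.execute("SELECT COUNT(*) FROM outputs").fetchone()[0])  # 10000
```

On a spinning disk the difference is dramatic; with per-row autocommit, each insert can cost a full journal sync.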

I made some efforts to run with the tables on a RAM disk (tmpfs), with good results. The problem was lack of RAM, with only 2GB or 4GB available on my current systems and tables already adding up to around 10GB. You'll see code to handle multi-pass conversions in the script; this was to allow building tables consecutively in RAM. It worked well to an extent, until hitting the outputs table. Some more work using Merge tables followed, building tables in slices, but in the end most of this was too much hassle for only moderate gains.

The game changer was moving to my rather dated 30GB SSD (already burdened by the Ubuntu OS and various other project data, movies, photos, emails) and comparing this to the first hard disk conversions. Even this was a massive improvement. Whereas running on a hard disk slowed to a ridiculous 1 block per 20 seconds (with 150,000 blocks to go; aborted), my weary old SSD gave me an astonishing 0.3 blocks per second until the last 10,000 blocks. And this is where I stand today - with a Python conversion script, and the burning desire to go buy a new SSD.

I'm waiting on more space and even more speed. Recent reviews of low-end available drives promise 20,000-40,000 IOPS compared with my current rather lame 600 IOPS. Either coincidentally or perhaps due to underlying DB operations, that just happens to be roughly the same number of queries per second I get out of MySQL while running the converter.

Bottom line: buy a new SSD soon and revisit my conversion efforts. In the meantime I've pushed the in-progress script to my Github. Coming soon - SQL & SQLer Too: a more in-depth look at the schema I settled on, and how to use it for various reporting. Further sequels to follow on as-yet-unknown uses involving rendering OpenGL blockchain data visualizations destined to end up on Youtube.


I've always liked Banksy, so I decided to kick off the new web site with an ASCII rendering of his well-known piece depicting a protester throwing flowers (found on some random ASCII art site long ago). This feels like me; code not bombs!

                        .s$$$Ss.
            .8,         $$$. _. .              ..sS$$$$$"  ...,.;
 o.   ,@..  88        =.$"$'  '          ..sS$$$$$$$$$$$$s. _;"'
  @@@.@@@. .88.   `  ` ""l. .sS$$.._.sS$$$$$$$$$$$$S'"'
   .@@@q@@.8888o.         .s$$$$$$$$$$$$$$$$$$$$$'
     .:`@@@@33333.       .>$$$$$$$$$$$$$$$$$$$$'
     .: `@@@@333'       ..>$$$$$$$$$$$$$$$$$$$'
      :  `@@333.     `.,   s$$$$$$$$$$$$$$$$$'
      :   `@33       $$ S.s$$$$$$$$$$$$$$$$$'
      .S   `Y      ..`  ,"$' `$$$$$$$$$$$$$$
      $s  .       ..S$s,    . .`$$$$$$$$$$$$.
      $s .,      ,s ,$$$$,,sS$s.$$$$$$$$$$$$$.
      / /$$SsS.s. ..s$$$$$$$$$$$$$$$$$$$$$$$$$.
     /`.`$$$$$dN.ssS$$'`$$$$$$$$$$$$$$$$$$$$$$$.
    ///   `$$$$$$$$$'    `$$$$$$$$$$$$$$$$$$$$$$.
   ///|     `S$$S$'       `$$$$$$$$$$$$$$$$$$$$$$.
  / /                      $$$$$$$$$$$$$$$$$$$$$.
                           `$$$$$$$$$$$$$$$$$$$$$s.
                            $$$"'        .?T$$$$$$$
                           .$'        ...      ?$$#\
                           !       -=S$$$$$s
                         .!       -=s$$'  `$=-_      :
                        ,        .$$$'     `$,       .|
                       ,       .$$$'          .        ,
                      ,     ..$$$'
                          .s$$$'                 `s     .
                   .   .s$$$$'                    $s. ..$s
                  .  .s$$$$'                      `$s=s$$$
                    .$$$$'                         ,    $$s
               `   " .$$'                               $$$
               ,   s$$'                              .  $$$s
            ` .s..s$'                                .s ,$$
             .s$$$'                                   "s$$$,
          -   $$$'                                     .$$$$.
        ."  .s$$s                                     .$',',$.
        $s.s$$$$S..............   ................    $$....s$s......
         `""'           `     ```"""""""""""""""         `""   ``
                                                           [banksy]dp

© Copyright 2018 neoCogent. All rights reserved.