With the release last month of sqlchain 0.2.10, it's overdue that I write up some usage info on the two new features.

A new config option, max_blks, allows setting a limit on the number of blocks kept in the sqlchain database. After each block is added this value is checked, and if the total block count exceeds max_blks then the oldest blocks are deleted (including trxs and outputs rows). There are some caveats to this behaviour. Unreferenced addresses are not removed, block header data (quite small) is not deleted, and blob data (signatures, if present, and tx input references) is not reclaimed. The latter could probably be cleaned up without too much trouble, but for now I have left it because it's not critical for typical uses.

The main purpose of this is to allow a "quick sync" for use cases that only require ongoing tx data processing. So if you want to monitor tx flow or manage a new wallet on top of sqlchain (e.g. payment processing) then you can now run a pruning node: set a max_blks covering a few days and then process txs without having to do a full sqlchain sync. To avoid the full sync you should set the config option block to some recent value; syncing will start from there instead of the default 0. In this case only address and blob data for seen blocks will exist, and the total db size will be small but increase slowly as addresses get added. Other uses might be analyzing the mempool for fee data or producing daily tx statistics.
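
For example, a pruning setup might look like this in the config (the block height here is just a hypothetical recent value; with roughly 144 blocks arriving per day, 432 blocks covers about 3 days):

```
block=500000
max_blks=432
```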

The other new config option is apis, which tells the sqlchain-api daemon which api modules to load and dispatch requests on. When not present, the default is to load only the "Insight" api. So after upgrading to 0.2.10 the other bundled apis will vanish until you add an apis value to restore them. The setting is a list of triplets, each entry describing how to handle one api: [ url_path, module_name, entry_function ]. For example, you could copy and customize the insight.py module to a new name, my_api.py, and make an entry like:

apis=[["/api", "sqlchain.my_api", "do_API"]]

Now when sqlchain-api starts it would only load your custom api and process according to your specific needs. I have used this recently for a client's custom address watching web app. You can load multiple apis like the current default when installed:

apis=[["/api","sqlchain.insight","do_API"],
      ["/bci","sqlchain.bci","do_BCI"],
      ["/rpc","sqlchain.rpc","do_RPC"],
      ["/ws","","BCIWebSocket"],
      ["/","","do_Root"]]
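
Under the hood the dispatch idea is simple: the longest matching url prefix picks a module and entry function. Here's a rough illustration of that pattern (my own sketch using a hypothetical my_pkg package, not sqlchain's actual code):

```python
import importlib

# hypothetical triplets, same shape as the apis config value
apis = [["/api", "my_pkg.my_api", "do_API"],
        ["/", "my_pkg.root", "do_Root"]]

def build_dispatch(entries):
    """Resolve [url_path, module_name, entry_function] triplets into
    (path, callable) pairs, longest prefix first so /api wins over /."""
    table = []
    for path, mod, func in entries:
        # entries with an empty module name (like the default /ws handler)
        # are resolved internally by sqlchain; skipped in this sketch
        if mod:
            table.append((path, getattr(importlib.import_module(mod), func)))
    return sorted(table, key=lambda t: len(t[0]), reverse=True)

def dispatch(table, request_path, *args):
    # hand the request to the first (longest) matching path prefix
    for path, handler in table:
        if request_path.startswith(path):
            return handler(*args)
    return None
```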

Both of these new options were born out of specific needs I had for recent client work. If you want to build something on top of sqlchain and have needs that may also be useful to others, please contact me or open a github issue to discuss them. If the feature has general application maybe I can work on adding it to sqlchain.


BTCPay is a nice replacement for the BitPay payment processor that allows you to do your own payment processing. At first I was a bit miffed that it uses dotNet (Microsoft, but open source) but after some groaning I caved. I figured I wasn't ready to write my own in Python and no one else seemed to be doing it, so for now there is no alternative. I'd like to write something simpler in Python that runs on top of a pruning sqlchain node but I just don't have time now. Add that to the someday todo list.

My first attempts with BTCPay were to use the docker deployment. I tried this twice, and both times ended up aborting. The first time because I already had a node running and didn't want two nodes on the server - it wasn't an option to just use an existing one. The second time because everything just got screwed up and didn't work: NBXplorer (part of the setup) simply would not sync, for whatever reason. Personally I'd rather have the components directly on the server than a whole raft of docker wrappers that make it hard to see what's going on.

After much head bashing I gave in and followed Mario Dian's simple but excellent manual install guide. This worked for me first time and has been reliable so far. I did make several changes so I decided to write up a brief post on what I did and why.

The first thing was that I didn't want to run BTCPay as my admin user with login privileges. I already had a user named btc for running the bitcoind daemon, so I decided it was ok to extend its use for this. I created a /home/btc directory, sudo'd into that user, and followed the setup guide, putting dotnet, nbxplorer, and btcpayserver into that location. With PostgreSQL I just followed the guide, creating a new user. I adjusted the provided systemd service files to suit my locations and user. That all worked fine.

I run this blog and a few family sites on a couple small Vultr VPS servers. They don't have the disk space or memory for running a full node. What I do instead is run bitcoind on my home server and use a tunnel to provide it as a backend to the cheap servers. This is working well but it takes a bit of finesse to get it right. My home server has a ZFS pool on 4x3 TB HGST drives. These have proven super reliable and the pool scrubs regularly so I always know it's not deteriorating. BTCPay is installed on this server and runs right here in my home without any external access.

Then I installed autossh to oversee the reverse tunnel, and created a systemd service file (along with custom service files for btcpayserver and nbxplorer) to start the tunnel at boot and keep it up. A reverse tunnel means that when my local server starts it uses ssh to log in to my Vultr node and listen there on localhost:9080, relaying any traffic back to localhost:9080 locally. The simple ssh command for this is autossh -NTR 9080:localhost:9080 btc.neocogent.com (which I place in the service file) but it requires a few config lines to make it reliable.

On the local server I added these to .ssh/config (in addition to any host, user, and port defaults you need):

ServerAliveInterval 15
ServerAliveCountMax 3
ExitOnForwardFailure yes

You'll want to create an ssh key and copy the pubkey to the server to allow secure automated logins. On the server end you need some similar additions to the sshd_config:

ClientAliveInterval 15
ClientAliveCountMax 3

[image: NeoBTC checkout page]

Combined with autossh as overlord, this is working well to keep the tunnel alive. To make it publicly available to my web sites that want to accept Bitcoin I also need a small nginx conf. This proxies port 9080, with ssl support (when placed inside an ssl server block), to my btc subdomain.

location / {
        proxy_pass http://localhost:9080/;
}

Some of my sites are running WordPress, so the last part is to install the btcpay-for-woocommerce plugin and configure it to use the same btc subdomain for payment processing. Beautiful.

Here is the btctunnel service file (note it starts after btcpayserver is up):

[Unit]
Description=BTC Reverse Tunnel Service
ConditionPathExists=|/usr/bin
After=btcpayserver.service

[Service]
User=btc
ExecStart=/usr/bin/autossh -NTR 9080:localhost:9080 btc.neocogent.com

# Restart every >2 seconds to avoid StartLimitInterval failure
RestartSec=5
Restart=always

[Install]
WantedBy=multi-user.target

None of the served sites are high activity. I'll be amazed if a bitcoin payment happens once a month. I use it for donations on this blog, and though they're rare it's nice to use a new address for each one. The tunnel setup isn't even noticeable from the user's perspective - perhaps slightly slower, but not much. You can't tell that btcpay requests get sent to the backend at home, and it's very little traffic. On the plus side I don't need another node running on the servers - plus I feel it's more secure, as address generation happens here where no VPS admins can get in.

Finally, just so I can spiff the blog up, I customized the checkout css page to have my own logo and colours. I decided to be my own processor so I think that means I need my own logo, right? Of course, I call my own payment processor NeoBTC.


I've dropped the ball over the last year but I do have an excuse. The combination of no one really using sqlchain, my wife of 16 years dying, and me leaving my home in Thailand just meant I didn't spend much time updating sqlchain or this blog. I didn't even renew the domain name last year. On the upside, namecheap had a domain sale and I was prompted to get the name back. This led to my current splurge of energy to try and renew efforts to update everything. And there is actually quite a bit to document and update, along with a new direction over the next month.

In summary, last year from September to January I "lived" in the hospital caring for my wife. During December I also did a lot of new coding on sqlchain. I added altcoin and bech32 support along with numerous smaller new bits like unit tests and a dialog-based config tool. Most of that was released at the end of December but with very little documentation. On January 8th my wife died. So now I have a double whammy each year, as I'll be remembering her on the 8th, right after celebrating the Bitcoin birthday on the 3rd.

One of my missions over the next while is to update here with blog posts describing the new features and some of the restructuring that I did. In addition, looking forward, I'd like to replace the config tool again with a web based admin panel. This would allow configuring a new sqlchain install but also serve as an admin panel to monitor system status, coin activity and resource use. The idea is that once you pip install sqlchain you could use your browser as the interface for configuring and adding coins and monitoring db sync status. I have ideas about making it modular so that it could blend other system info like goaccess (web analytics) or logs into one server overview panel. That's probably down the road a bit. Also on the drawing board is revising the demo web page to be more like an explorer, such as the blockstream.info one.

For now I just wanted to post here and say I'm back and will be adding more posts to document the heap of new code pushed up last year. I think there are perhaps a dozen or so possible topics on my list now. Plus I have just added two new features over the last week which I'll need to explain more fully. If you are actually using sqlchain for something it would be very helpful if you commented or let me know. It would help spur development and guide project direction.


I opened my Electrum wallet a few days ago and discovered a new donation. I don't know where it came from or who sent it (isn't Bitcoin wonderful) but I want to send a BIG Thank You! to whoever has made donations, small or big. Even little bits really help me keep going on this project. I have quite limited discretionary resources. In this case seeing the new donation inspired me to actually splurge and pay for a new dedicated server. I'm in the process of re-syncing again, testing new code, and moving my other web activities over to the spiffy new home.

After much scanning of WebhostingTalk and Googling, I found something I was happy with for $30 (in btc, of course). It's been up and running for a few days doing a new sync, and shortly I'll shut down the Vultr instance after transitioning. To meet my needs it had to be cheap, have an SSD, sufficient cpu power, and accept bitcoin; which cut down on the options. Many bitcoin hosts I found were overpriced. Here's what I got for this price from Reprise Hosting in the Seattle (Westin) data center:

Dual 4-Core Xeon L5520 CPU (16 threads) (dated but capable)
16 GB DDR3 RAM (nice)
120 GB Crucial M500 SSD (sufficient)
10TB BW - 100 Mbit port (yawn)
4 usable IPs
IPMI

[image: dual quad-core Xeon]

The 100 Mbit port is the only thing I would have liked to be better, for burst transfers now and then. I can live with it for now, and so far the sync has been moving along nicely. I feel like I haven't really been able to utilize all those cores. Maybe they'll be nice to have later if I start to see some real traffic on the future Explorer site.

For your amusement, here's a screenshot of the 16 cores barely more than idling while pumping txs.

I haven't been slacking off lately - just had to do a visa run (to Laos and back), keeping my stay here legit. I've been testing new tx threading code, split blobs for sig data, and fallback code for Amazon S3 storage of the blobs (using the Range header to pull small byte sequences as needed for api calls) with an in-memory LRU cache.

Putting blobs up on S3 adds less than 1 second of latency to calls that need that data, but cuts down greatly on server disk space. S3 cost is $0.023/GB/mo. and I expect to put up about 30 GB (only older blocks), offsetting the cost of a secondary hard disk at $0.69/mo.
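
The mechanism is roughly: fetch only the byte range needed via an HTTP Range header, and keep recently used chunks in memory. Here's a simplified sketch of that idea (the chunk and cache sizes are made-up values for illustration, not sqlchain's actual code):

```python
from functools import lru_cache

CHUNK = 4096  # cache granularity - a made-up value for illustration

def make_blob_reader(fetch_range):
    """Wrap a byte-range fetcher (e.g. an HTTP GET with a Range header
    against S3) in an in-memory LRU cache of fixed-size chunks."""
    @lru_cache(maxsize=256)
    def chunk(n):
        return fetch_range(n * CHUNK, CHUNK)

    def read(offset, length):
        out = b""
        while length > 0:
            n, off = divmod(offset, CHUNK)
            piece = chunk(n)[off:off + length]  # may stop at a chunk boundary
            out += piece
            offset += len(piece)
            length -= len(piece)
        return out
    return read

def http_range_fetcher(url):
    """A Range request pulls only the bytes needed, e.g.
    'Range: bytes=1000-1999' for 1000 bytes at offset 1000."""
    import urllib.request
    def fetch(offset, length):
        req = urllib.request.Request(
            url, headers={"Range": "bytes=%d-%d" % (offset, offset + length - 1)})
        return urllib.request.urlopen(req).read()
    return fetch
```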

Once again, Thank You to my kind benefactors. It's a big help.


I was watching htop for a while and noticed it was quite active for just monitoring the mempool. It seemed like it was working darn hard for just a few txs/second on the network. Reviewing the code, I noticed it was looking up every tx in the mempool every time it cycled through (currently every 5 seconds), and that's 46k-odd lookups. Well, that's dumb. It should only look up new txs in the mempool. I moved a few lines of code around to achieve this and ran some tests. Voila!
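
The fix amounts to diffing the mempool against what's already been processed each cycle, something like this (my own sketch, with the rpc object standing in for a bitcoind RPC connection):

```python
def new_mempool_txs(rpc, seen):
    """Return only txids not yet processed, instead of looking up
    all ~46k mempool txs every 5-second cycle."""
    current = set(rpc.getrawmempool())  # bitcoind RPC: list of mempool txids
    fresh = current - seen
    seen &= current      # forget txids that left the mempool (mined or evicted)
    seen |= fresh        # remember the ones we're about to process
    return fresh
```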

[screenshot: CPU usage while monitoring the mempool]

I've also had occasional crashes where an RPC call returned "httplib.CannotSendRequest" exceptions. With a bit of Google I figured it was contention between threads causing this: a new request being attempted before the last had responded. Not sure why, as each thread should have its own connection; and yet there it is. So I'm currently testing some new code for an RPC pool wrapper. This opens a few connections at the start which can then be grabbed as needed. It also traps the errors and tries to repair failed connections. Previously a bad result could nuke one thread, leaving the others pedalling away aimlessly. Hopefully this will be a step towards more robust behaviour.
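
The shape of the idea is roughly this (a sketch, not the actual sqlchain code; make_conn stands in for whatever creates a bitcoind RPC connection):

```python
import queue

class RPCPool:
    """Hand each caller its own connection so two threads never interleave
    requests on one socket (the httplib.CannotSendRequest case)."""
    def __init__(self, make_conn, size=4):
        self.make_conn = make_conn
        self.pool = queue.Queue()
        for _ in range(size):
            self.pool.put(make_conn())

    def call(self, method, *args):
        conn = self.pool.get()   # blocks until a connection is free
        try:
            return getattr(conn, method)(*args)
        except Exception:
            conn = self.make_conn()  # repair: replace the failed connection
            raise
        finally:
            self.pool.put(conn)      # always return a usable connection
```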


It's finally up - a public demo page using the sqlchain api backend. I think there are still a few issues, as I noticed a few example calls didn't produce the expected data. I'll look into that and figure out what's going on. This demo is served from the Vultr 2GB instance, as the Hudson Valley one is utterly incapable of catching up. I disabled sqlchain there and won't be using that system much, except maybe as backup data.

The demo is reverse proxied behind nginx using a free letsencrypt SSL certificate. It all seems to work well. I wasn't happy with how python 2.7 supports SSL - it requires the key file be readable by the user running the daemon, since it reads the file for each request. Newer versions read the key once at creation, which is good because root can read it and then permissions can be dropped. Anyway, using nginx bypasses that issue and allows for more flexible configurations.

I've added a bit of info to the install guide covering this. Just follow a few install steps from the nginx web site first. Worked for me first time.

Here are some current database values on the demo server:

blocks.MYD      449,877 records         37 MB
address.MYD     217,470,124 records     5,871 MB
trxs.MYD        190,540,221 records     10,289 MB
outputs.MYD     529,494,252 records     15,355 MB  (blown away by how many records now)
mempool.MYD     44,436 records          499 KB

I guess we'll see how fast the demo page works. It's not exactly on a fast server - fairly modest even: a 2 vCPU VPS with 2 GB memory and 45 GB SSD costing 3 cents/hour or $20/month. Oh, and there is a free 50 GB block volume mounted. Only recent transactions will show signature data (the raw transaction api call decodes this data for the html view) as I hollowed out most of the file. Only about 358 MB of witness data is online.

I've been looking around for a cheap dedicated server that accepts bitcoin. There are some very cheap offerings from OVH and their group of cohorts, but they don't take bitcoin and I just don't like how many ID documents they require; plus reports are they take forever to verify them anyway. Hudson Valley has some cheap offerings but unfortunately I no longer trust them to be honest about server sharing / load.

The best I think I've seen yet is actually namecheap. That's a surprise because, even though I use them for domains and I love that they accept bitcoin, in the past their hosting options were not very interesting to me. Now they offer a decent machine for a pretty good price:

Xeon E3-1220 v3 4 Cores x 3.1 GHz
RAM 8 GB DDR3
HARD DRIVE 300 GB SSD  (spec'd as Intel DC S3500 by their support)
BANDWIDTH 10 TB

$58.88/mo

Excellent price for this much SSD space, and the CPU, though not stellar, is likely a usable amount of power; unlike some places offering Atom D525s as servers for this price. It's still a bit beyond my budget unless I have some revenue stream coming from the server. So I am working on putting something up that could potentially at least cover its cost with adverts and/or user memberships.


I guess 2 GB is not enough for bitcoind when you have 1 GB allocated as the key buffer for MySQL. A few hours after syncing, I had a second crash and bitcoind stopped again. This time it was no biggie, as I had a snapshot of the bitcoin directory saved a couple hundred blocks back. A simple copy and change of permissions and it was back up again.

sudo cp -r /var/data/bitcoin.449520 /var/data/bitcoin
sudo chown btc: /var/data/bitcoin

It was pretty clear to me this was probably a lack of memory - the StdAlloc Exception error messages may have given it away. It plainly likes to have more than 1 GB available; but how much more? Well, it seems around 103 MB more will do, based on the last 24 hours of trial.

After this crash I decided I'd better add a swap file for the VPS. Those with 4 GB+ probably don't need this unless they have allocated too much to MySQL or other RAM-hungry inhabitants. It takes mere seconds to set up, assuming you can spare 1 GB of disk space, but first check whether you already have swap space allocated - many VPS servers come configured with none. Use htop or free -m and look for the swap values. To add a basic swap file:

sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

And to make it mount at boot, add this line at the bottom of your /etc/fstab file:

/swapfile   none    swap    sw    0   0

And verify again with htop or free -m. This should really help bitcoind not lose its mind in borderline memory conditions. After 24 hours running like this, the amount of swap used shows in htop as 103 MB. It must have crawled up there slowly; when I checked shortly after creation it had only 1 MB used. So far there have not been any more crashes - but that reminds me I should go make a more recent snapshot.

sudo stop sqlchain
sudo stop bitcoin
( view the log file, smart alec - also lets you see the actual last block )
sudo cp -r /var/data/bitcoin /var/data/bitcoin.449771   (for example)
sudo rm -rf /var/data/bitcoin.440633  (again as example, I keep two snapshots)

Be sure to stop bitcoind briefly, or files could be in an unstable state; which means also stopping sqlchain. It currently doesn't behave very well when bitcoind goes away, or mysql for that matter; both fixes are on the todo list.


Now syncing again. I copied bitcoin data to the slower VPS and started sqlchain on both systems processing blocks. Here is a snapshot of the first minutes on each.

First the cheaper ($5/mo.) VPS with 300GB RAID-10 storage:

4vCPU-4GB-300GB RAID-10 (MYI+MYD) (Hudson Valley Host)

sqlchain - daemon.log:
23-01-2017 18:39:17 Using rpc mode. Monitoring blocks / mempool.
Block 449396 [ Q:8 1434 txs - 22-01-2017 - 936ms 211.7s   7 tx/s]
Block 449397 [ Q:8  895 txs - 22-01-2017 - 722ms 67.3s  13 tx/s]
Block 449398 [ Q:8  343 txs - 22-01-2017 - 209ms 212.4s   2 tx/s]
Block 449399 [ Q:8  181 txs - 22-01-2017 -  94ms 31.9s   6 tx/s]
Block 449400 [ Q:8  359 txs - 22-01-2017 - 176ms 41.4s   9 tx/s]
Block 449401 [ Q:8 1094 txs - 22-01-2017 - 229ms 76.7s  14 tx/s]
Block 449402 [ Q:8  620 txs - 22-01-2017 - 141ms 200.2s   3 tx/s]
Block 449403 [ Q:8 2321 txs - 22-01-2017 - 480ms 164.5s  14 tx/s]

MySQL - show status (2048 MB key buffer)
| Key_blocks_not_flushed                   | 0           |
| Key_blocks_unused                        | 1690322     |
| Key_blocks_used                          | 24414       |
| Key_read_requests                        | 120304      |
| Key_reads                                | 24294       |
| Key_write_requests                       | 10368       |
| Key_writes                               | 10359       |

Now the SSD based VPS (~$27/mo, 45GB SSD + 75 GB block storage):

2vCPU-2GB-45GB SSD (MYI) + 75GB block storage (MYD) (Vultr)

sqlchain - daemon.log:
23-01-2017 11:40:24 Using rpc mode. Monitoring blocks / mempool.
Block 449396 [ Q:8 1434 txs - 21-01-2017 - 301ms 9.5s 150 tx/s]
Block 449397 [ Q:7  895 txs - 21-01-2017 - 228ms 3.1s 287 tx/s]
Block 449398 [ Q:8  343 txs - 21-01-2017 -  90ms 16.1s  21 tx/s]
Block 449399 [ Q:7  181 txs - 21-01-2017 -  66ms 1.9s  93 tx/s]
Block 449400 [ Q:6  359 txs - 21-01-2017 -  76ms 1.2s 305 tx/s]
Block 449401 [ Q:7 1094 txs - 21-01-2017 - 161ms 3.4s 319 tx/s]
Block 449402 [ Q:8  620 txs - 22-01-2017 -  92ms 5.3s 117 tx/s]
Block 449403 [ Q:8 2321 txs - 22-01-2017 - 395ms 6.6s 354 tx/s]
Block 449404 [ Q:8 1702 txs - 22-01-2017 - 2034ms 35.6s  48 tx/s]
Block 449405 [ Q:7   99 txs - 22-01-2017 -  49ms 0.4s 259 tx/s]
Block 449406 [ Q:8 1286 txs - 22-01-2017 - 483ms 12.5s 103 tx/s]

MySQL - show status (1200 MB key buffer)
| Key_blocks_not_flushed                   | 8626        |
| Key_blocks_unused                        | 499465      |
| Key_blocks_used                          | 507801      |
| Key_read_requests                        | 3064005     |
| Key_reads                                | 503180      |
| Key_write_requests                       | 325917      |
| Key_writes                               | 249240      |

Presumably less CPU power and RAM, yet roughly 20x the block rate - though CPU power is hard to compare between VPS providers. On the hard disk system, htop shows CPU iowait bars about 5x higher than active CPU bars. The slower VPS does eventually start to do better as its key buffer fills - it has more memory and is configured to use it.

The SSD-VPS caught up to block 449,649 in 52 minutes, by which time the HD-VPS was at block 449,413. Here's what the SSD-VPS looks like after catching up. Why so fast? It monitors the mempool and adds txs as they come in, so when a new block arrives it simply has to update the relevant txs with the new block number - very quick.

Block 449652 [ Q:0 1418 txs - 23-01-2017 - 289ms 0.5s 2582 tx/s]
Block 449653 [ Q:0 1521 txs - 23-01-2017 - 282ms 0.7s 2321 tx/s]
Block 449654 [ Q:0 1647 txs - 23-01-2017 - 372ms 3.6s 452 tx/s]
Block 449655 [ Q:0 2191 txs - 23-01-2017 - 373ms 1.1s 2001 tx/s]
Block 449656 [ Q:0 2382 txs - 23-01-2017 - 380ms 1.0s 2486 tx/s]
Block 449657 [ Q:0 2471 txs - 23-01-2017 - 416ms 1.0s 2419 tx/s]
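
In sketch form, the catch-up behaviour above looks something like this (hypothetical helper names for illustration, not sqlchain's actual schema):

```python
def on_new_block(db, height, block_txids):
    """Most txs were already inserted while in the mempool, so a new
    block mostly just stamps them with its height."""
    known = db.known(block_txids)        # txids already in the trxs table
    db.set_height(known, height)         # cheap update - the common case
    for txid in block_txids:
        if txid not in known:
            db.insert_tx(txid, height)   # the few not seen via the mempool
    return len(known), len(block_txids) - len(known)
```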


© Copyright 2018 neoCogent. All rights reserved.
