There are four basic ways to run sqlChain, and it is best to choose which model to follow at the outset. It is possible to change between models later, but doing so incurs a time cost to rework the data. You can run sqlChain either with or without the --no-sigs option, which controls whether input sigScript data is kept and significantly affects storage size. And you can run over either a full node or a pruning node. These two choices give the following four combinations, in order of disk usage:

  • --no-sigs, pruning - this case requires the least disk space and discards sigScript data completely. If you want to run an Electrum server, or use sqlChain as a backend for an application that doesn't need this data, then this minimizes disk cost. It cannot provide raw transaction data in standard form and cannot be used to validate transaction data (validation was already done when bitcoind downloaded the blocks). As of block 370,000 sqlChain requires ~27GB of data and bitcoind as little as ~1GB, for a total disk size of ~28GB.

  • default, pruning - sqlChain keeps sigScript data but the underlying blockchain has been pruned. sqlChain can provide complete raw transaction data in standard form from its native API. As of block 370,000 sqlChain uses ~52GB and bitcoind as little as ~1GB, giving a total disk use of ~53GB.

  • --no-sigs, full - sqlChain does not have sigScript data but the underlying blockchain data is still intact. Raw transaction data can be returned from the RPC interface only. As of block 370,000 sqlChain uses ~27GB and the blockchain (with --txindex) about 51GB, for a total of ~78GB.

  • default, full - both layers have full data, so sigScript data can be queried from either the sqlChain API or the RPC interface. This uses the most disk space, ~103GB, which seems excessive considering that once transactions have been validated there is no compelling further use for the sigScript data.
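For the cheapest combination above (--no-sigs over a pruning node), the bitcoind side might be configured roughly as follows. This is just a sketch, and the credentials are placeholders; prune=550 is Bitcoin Core's minimum prune target (in MB), which is what lets the node's own disk use drop toward ~1GB.

```ini
# bitcoin.conf - pruning node backing a --no-sigs sqlChain setup
prune=550            ; keep only ~550 MB of raw block files (the minimum)
server=1             ; expose the RPC interface sqlChain pulls from
rpcuser=btcrpc       ; placeholder credentials - change these
rpcpassword=changeme
```

sqlChain itself would then be started with its --no-sigs option so that sigScript data is discarded as blocks are indexed.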

As for changing models after building the SQL data, the time costs are as follows:

  • removing sigScript data from sqlChain is possible with the stripsigs utility. Depending on system speed it can take anywhere from a few hours to ~16 hours (as gauged on my aging laptop; it could be slower still on an ARM-based board without an SSD) to scan transaction data and rewrite the external blob data file, currently freeing ~25GB.

  • adding sigScript data afterwards would require re-building sqlChain from the genesis block, which is quite time-consuming (a full sync takes about 160 hours on my ol' laptop). It would be possible to write a utility that rebuilds just this data, but I have not bothered to do so.

  • changing a pruning node to a full node requires re-downloading the full blockchain from the beginning.

  • going from a full node to a pruning node is easy and quick, since it simply discards blk*.dat files, but it is non-reversible. If you copy the bitcoin data directory and start bitcoind on the copy with the -datadir option, it will prune the copy, and you can revert to the full one if need be.
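The copy-then-prune step above can be sketched as shell commands (the paths are examples; -datadir and -prune are standard bitcoind options, and 550 is the minimum prune target in MB):

```shell
# Copy the data directory first; pruning the copy leaves the full chain intact.
SRC="$HOME/.bitcoin"
DST="$HOME/.bitcoin-pruned"

if [ -d "$SRC" ]; then
  cp -a "$SRC" "$DST"
fi

# Start bitcoind against the copy with pruning enabled; it will discard the
# copied blk*.dat files down to ~550 MB. The original in $SRC is untouched.
if command -v bitcoind >/dev/null 2>&1; then
  bitcoind -datadir="$DST" -prune=550
fi
```

If the pruned copy works out you can delete the original later; if not, just point bitcoind back at the original directory.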

I'm personally interested in running a no-sigs, pruning node as a personal Electrum server and will be exploring that over the next few days. There are some gotchas in trying to sync from a pruning node: it is possible, even likely, for the node to prune away data before it gets pulled into the sqlChain database (which would force starting over from scratch). I have code in place now to manage this, but as of today it's untested.

With the new blkdat module, sqlChain can now read block data directly, and by monitoring blk*.dat file presence (along with a nifty btcGate utility) it can pause and resume the pruning node when it can't keep up. My experience over the last few days has been that sqlChain can build SQL data at the same time as, and as quickly as, bitcoind syncs the blockchain. If you have a slower system or a low-end VPS then that's pretty sweet.
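To illustrate the kind of blk*.dat monitoring involved, here is a minimal Python sketch that reports the highest block file currently on disk. This is not sqlChain's actual blkdat module nor the btcGate utility, just the underlying idea: a gate process could compare this index against how far the indexer has read, and pause bitcoind when the gap gets too small.

```python
import glob
import os

def newest_blk_index(blocks_dir):
    """Return the highest blkNNNNN.dat index present in blocks_dir, or -1 if none.

    Bitcoin Core names its raw block files blk00000.dat, blk00001.dat, ...
    in the blocks/ subdirectory of its data directory.
    """
    paths = glob.glob(os.path.join(blocks_dir, "blk*.dat"))
    if not paths:
        return -1
    # Extract the 5-digit index from each "blkNNNNN.dat" filename.
    return max(int(os.path.basename(p)[3:8]) for p in paths)
```

For example, `newest_blk_index(os.path.expanduser("~/.bitcoin/blocks"))` tells you the newest block file bitcoind has written.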

Coming soon - a full tutorial on installing and running sqlChain.



© Copyright 2018 neoCogent. All rights reserved.
