Partial Sync

Partial synchronisation of Kylin testnet’s head using dfuse for EOSIO


Get a clean workspace folder and fetch a Kylin snapshot (this example uses EOS Nation snapshots as a source):

```shell
mkdir workspace && cd workspace
curl -L -o kylin-snapshot.bin.zst
zstd -d kylin-snapshot.bin.zst
```

Note: on Ubuntu, the zstd CLI decompression tool can be obtained with sudo apt-get install -y zstd; otherwise, it can be downloaded from

Prepare {workspace}/kylin-phase1-blocks.yaml

Required information

  • mindreader-stop-block-num: {current Kylin head block number, rounded down to the nearest 100}
    • You can go to or use this kind of shell command: curl -s | sed 's/.*head_block_num..\([0-9]*\),.*/\1/'
  • mindreader-snapshot-store-url: file:///{current working directory}
    • The folder where you downloaded the snapshot (the output of the pwd command in your shell)
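To illustrate these two values, here is a small sketch that applies the sed extraction above to a made-up get_info-style JSON response (the block number and chain id below are invented for the example, not live data):

```shell
# Sample get_info-style response (made-up values for illustration only)
info='{"head_block_num":107367042,"chain_id":"aabb"}'

# Extract the head block number with the sed expression from above
head_block_num=$(echo "$info" | sed 's/.*head_block_num..\([0-9]*\),.*/\1/')

# Round down to the nearest 100 for mindreader-stop-block-num
echo $(( head_block_num / 100 * 100 ))   # prints 107367000

# mindreader-snapshot-store-url is "file://" plus the absolute snapshot folder path
echo "file://$(pwd)"
```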

Example kylin-phase1-blocks.yaml

```yaml
start:
  args:
  - mindreader
  flags:
    config-file: ""
    log-to-file: false
    mindreader-log-to-zap: false
    mindreader-merge-and-store-directly: true
    mindreader-start-failure-handler: true
    mindreader-blocks-chan-capacity: 100000
    mindreader-restore-snapshot-name: snapshot.bin
    mindreader-discard-after-stop-num: false
    mindreader-snapshot-store-url: file:///home/johndoe/workspace
    mindreader-stop-block-num: 107367000
```

Prepare mindreader nodeos config

```shell
# from {workspace}
mkdir mindreader
cat >mindreader/config.ini <<EOC
# Plugins
plugin = eosio::producer_plugin      # for state snapshots
plugin = eosio::producer_api_plugin  # for state snapshots
plugin = eosio::chain_plugin
plugin = eosio::chain_api_plugin
plugin = eosio::http_plugin
plugin = eosio::db_size_api_plugin
plugin = eosio::net_api_plugin

# Chain
chain-state-db-size-mb = 4096
reversible-blocks-db-size-mb = 512
max-transaction-time = 5000

read-mode = head
p2p-accept-transactions = false
api-accept-transactions = false

# P2P
agent-name = dfuse for EOSIO (mindreader)
p2p-server-address =
p2p-listen-endpoint =
p2p-max-nodes-per-host = 2
connection-cleanup-period = 60

access-control-allow-origin = *
http-server-address =
http-max-response-time-ms = 1000
http-validate-host = false
verbose-http-errors = true

# Enable deep mind
deep-mind = true
contracts-console = true

wasm-runtime = eos-vm-jit
eos-vm-oc-enable = true
eos-vm-oc-compile-threads = 4

## Peers (choose your favorite ones, those are from
p2p-peer-address =
p2p-peer-address =
p2p-peer-address =
p2p-peer-address =
p2p-peer-address =
p2p-peer-address =
EOC
```

Run ‘phase1-blocks’

```shell
dfuseeos -c kylin-phase1-blocks.yaml start -v
```

  • You can see the ‘actual’ progress of block files being written by running this command from another terminal: ls -ltr dfuse-data/storage/merged-blocks/ | tail
  • From different terminal sessions, you can run the “search” and “trxdb” phases in parallel with this phase; they will wait for merged block files to be created. See the next steps in this document.


  • The mindreader writes merged block files more slowly than the nodeos instance syncs, so it will keep going for a while after the “stop block” appears in the logs. Do not worry, and do not force-kill the dfuseeos instance: let it continue until it finishes.
  • Since nodeos will go past the requested “stop block”, all extra blocks will be written to the ‘dfuse-data/storage/one-blocks’ folder, so that the ‘merger’ can pick them up on the next run.

‘phase1-search’ (can be done in parallel with phase1)

Required information

  • search-indexer-start-block: {500 blocks higher than the first merged block file, rounded up to the next 'shard-size'}
    • Take the block number from ls ./dfuse-data/storage/merged-blocks/ |head -n 1 and add 500 to it
  • search-indexer-stop-block: {100 below the value that you put in for mindreader-stop-block-num earlier, rounded down to the lower 'shard-size'}
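Assuming a hypothetical first merged-blocks file of 107305000 and the mindreader-stop-block-num of 107367000 from the earlier example, both values can be computed with shell arithmetic:

```shell
first_merged_block=107305000   # hypothetical; from: ls ./dfuse-data/storage/merged-blocks/ | head -n 1
mindreader_stop=107367000      # the mindreader-stop-block-num you used earlier
shard_size=500

# Start: first merged block + 500, rounded up to the next shard boundary
start=$(( (first_merged_block + 500 + shard_size - 1) / shard_size * shard_size ))
# Stop: mindreader stop block - 100, rounded down to a shard boundary
stop=$(( (mindreader_stop - 100) / shard_size * shard_size ))

echo "$start $stop"   # prints 107305500 107366500
```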

Example kylin-phase1-search.yaml

```yaml
start:
  args:
  - search-indexer
  flags:
    config-file: ""
    log-to-file: false
    search-indexer-enable-batch-mode: true
    search-indexer-start-block: 107305500
    search-indexer-stop-block: 107366500
    search-indexer-shard-size: 500
```

Run ‘phase1-search’

```shell
dfuseeos -c kylin-phase1-search.yaml start -vv
```

NOTE: the ‘actual’ start block that you can use afterwards will most likely be the one that you set here in ‘search-indexer’

‘phase1-trxdb’ (can be done in parallel with phase1)

Required information

  • trxdb-loader-start-block-num: {500 blocks higher than the first merged block file}
    • Take the block number from ls ./dfuse-data/storage/merged-blocks/ |head -n 1 and add 500 to it
  • trxdb-loader-stop-block-num: {100 below the value that you put in for mindreader-stop-block-num earlier}
  • common-chain-id: 5fff1dae8dc8e2fc4d5b23b2c7665c97f9e9d8edf2b6485a86ba311c25639191
    • Change this if you are syncing a chain other than Kylin; it can be scripted like this: curl -s | sed 's/.*chain_id...\([a-f0-9]*\).*/\1/', or done with a better tool like ‘jq’
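These two block values are plain offsets with no shard rounding. A sketch using a hypothetical first merged-blocks file of 107304900, plus the chain-id extraction run against a sample JSON string rather than a live endpoint:

```shell
first_merged_block=107304900   # hypothetical; from: ls ./dfuse-data/storage/merged-blocks/ | head -n 1
mindreader_stop=107367000

echo $(( first_merged_block + 500 ))   # trxdb-loader-start-block-num
echo $(( mindreader_stop - 100 ))      # trxdb-loader-stop-block-num

# Chain-id extraction against a sample get_info-style response (not live data)
info='{"chain_id":"5fff1dae8dc8e2fc4d5b23b2c7665c97f9e9d8edf2b6485a86ba311c25639191","head_block_num":1}'
echo "$info" | sed 's/.*chain_id...\([a-f0-9]*\).*/\1/'
```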

Example kylin-phase1-trxdb.yaml

```yaml
start:
  args:
  - trxdb-loader
  flags:
    config-file: ""
    log-to-file: false
    common-chain-id: 5fff1dae8dc8e2fc4d5b23b2c7665c97f9e9d8edf2b6485a86ba311c25639191
    trxdb-loader-start-block-num: 107305400
    trxdb-loader-stop-block-num: 107366900
    trxdb-loader-processing-type: batch
```

Run ‘phase1-trxdb’

```shell
dfuseeos -c kylin-phase1-trxdb.yaml start -vv
```

Known issues:

  • It’s currently hard to follow the progress without being flooded with logs

Run kylin-phase2 (syncing up to head)

Example kylin-phase2.yaml

```yaml
start:
  args:
  - search-archive
  - search-router
  - search-indexer
  - search-live
  - dashboard
  - dgraphql
  - apiproxy
  - mindreader
  - merger
  - relayer
  - trxdb-loader
  - blockmeta
  flags:
    config-file: ""
    log-to-file: false
    mindreader-log-to-zap: true
    common-chain-id: 5fff1dae8dc8e2fc4d5b23b2c7665c97f9e9d8edf2b6485a86ba311c25639191
    search-indexer-shard-size: 500
    search-indexer-start-block: 107305500
    search-archive-shard-size: 500
    search-archive-start-block: 107305500
```

Known Issues

  • You may see warnings like this: found a hole in a oneblock files; these are sometimes false positives. Watch the progression of merged blocks in the folder, e.g. with ls ./dfuse-data/storage/merged-blocks/ |tail -n 1, to make sure that the merger keeps going correctly
  • EOSQ will not work correctly without “eosws”, “fluxdb”, “abicodec”, which do not support ‘partial chain syncing’ at the moment. You will need to sync the full chain to get that.
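The hole-warning check above can be automated with a small helper sketch: it samples the newest file in the merged-blocks folder twice and reports whether the merger advanced between the two samples. The folder path and the 60-second interval are assumptions; adjust them to your setup.

```shell
# check_merger_progress DIR [WAIT]: prints "ok: ..." if the newest file in DIR
# changed after WAIT seconds (default 60), "stalled at ..." otherwise
check_merger_progress() {
  dir=${1:-dfuse-data/storage/merged-blocks}
  wait=${2:-60}
  before=$(ls "$dir" | tail -n 1)
  sleep "$wait"
  after=$(ls "$dir" | tail -n 1)
  if [ "$before" = "$after" ]; then
    echo "stalled at ${after:-<empty>}"
  else
    echo "ok: $before -> $after"
  fi
}

# Usage: check_merger_progress dfuse-data/storage/merged-blocks 60
```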

Watch as the chain syncs its missing part up to the head

Play with search and block functions in the synced range