

Experimental Features

This section covers the current experimental features available in Boost

The Boost team develops new market features on a regular basis as part of the overall market development. This section covers the experimental features released by Boost, along with details on how to use them.

It is not recommended to run experimental features in production environments. The features should be tested as per your requirements, and any issues or requests should be reported to the team via GitHub or Slack.

Once the new features have been tested and vetted, they will be released as part of a stable Boost release and all documentation concerning those features will be moved to an appropriate section of this site.

Current experimental features are listed below.

FVM Contract Deals

Tutorials

Step-by-step guides to various Boost tasks

Remote CommP

Boost is introducing a new feature that allows CommP to be computed during the deal on a lotus-worker node.

This should reduce the overall resource utilisation on the Boost node.

In order to enable remote commP on a Boost node, update your config.toml:

[Dealmaking]
   RemoteCommp = true

Then restart the Boost node.
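For example, if you run boostd in the foreground as shown elsewhere in these docs, stop the process and start it again with:

boostd --vv run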

Repository

By default the Boost daemon repository is located at ~/.boost.

It contains the following files:

  • api The local multi-address of Boost's libp2p API

  • boost.db The sqlite database with all deal metadata

  • boost.logs.db The sqlite database with the logs for deals

  • config.toml The config file with all Boost's settings

  • repo.lock A lock file created when Boost is running

  • storage.json Deprecated (needed by legacy markets)

  • token The token used when calling Boost's JSON RPC endpoints

It has the following directories:

  • dagstore Contains indexes of CAR files stored with Boost

  • datastore Contains metadata about deals for legacy markets

  • deal-staging The directory used by legacy markets for incoming data transfers

  • incoming The directory used by Boost for incoming data transfers

  • journal Contains journal events (used by legacy markets)

  • keystore Contains the secret keys used by libp2p (eg the peer ID)

  • kvlog Used by legacy markets datastore

Migrate from Lotus to Boost

This section describes how to upgrade your lotus-miner markets service to boostd, as well as how to roll back if you are not happy with boostd.

A storage provider can run Lotus as a monolith, where everything is handled by a single lotus-miner process, or separate the mining and market subsystems onto different machines.

Boost supports migration from both a monolith and a split-markets miner. You can follow the guides below to migrate to Boost.

Migrate a monolith lotus-miner to Boost

Migrate a Lotus markets service process to Boost

Rollback

Please note that Boost uses a SQLite database for deal metadata and logs. Once Boost has been enabled, new deals cannot be rolled back to the Lotus markets process. If you decide to roll back after making deals with Boost, you will lose all the metadata for those deals. However, this has no impact on the sealed data itself.

What is Boost?

Boost is a tool for Storage Providers to manage data onboarding and retrieval on the Filecoin network. It replaces the go-fil-markets package in lotus with a standalone binary that runs alongside a Lotus daemon and Lotus miner.

Boost exposes libp2p interfaces for making storage and retrieval deals, a web interface for managing storage deals, and a GraphQL interface for accessing and updating real-time deal information.

Architecture

The boostd executable runs as a daemon alongside a lotus node and lotus miner. This daemon replaces the current markets subsystem in the lotus miner. The boost daemon exposes a libp2p interface for storage and retrieval deals. It performs on-chain operations by making API calls to the lotus node. The daemon hands off downloaded data to the lotus miner for sealing via API calls to the lotus miner.

boostd has a web interface for fund management and deal monitoring. The web interface is a React app that consumes a GraphQL interface exposed by the daemon.


Backup and Restore

How to back up and restore Boost

Backup

Boost supports both online and offline backups. The backup command will output a backup directory containing the following files:

  • metadata - contains a backup of the leveldb

  • boostd.db - backup of the deals database

  • keystore - directory containing the libp2p keys

  • token - API token

  • config - directory containing all config files and the config.toml link

  • storage.json - file containing storage details

Note that the backup does not back up the deal logs or the dagstore.

Online backup

You can take an online backup with the below command:

boostd backup <backup directory>

The online backup supports running only one instance at a time, and you might see a locking error if another instance of the backup is already running.

Offline backup

1. Shut down boostd before taking a backup.

2. Take a backup using the command line:

boostd backup --offline <backup directory>

The offline backup does not include the Dagstore; users can copy the dagstore directory to a backup location manually. The Dagstore can be reinitialized if there is no backup.

Restore

1. Make sure that the --boost-repo flag is set if you wish to restore to a custom location. Otherwise, the repository will be restored to the ~/.boost directory.

2. Restore the boost repo using the command line:

boostd restore <backup directory>

Once the restore is complete, the Dagstore can be manually copied inside the boost repo to restore it.


Features

Make storage deals with HTTP data transfer

Boost supports multiple options for data transfer when making storage deals, including HTTP. Clients can host their CAR file on an HTTP server, such as S3, and provide that URL when proposing the storage deal. Once accepted, Boost will automatically fetch the CAR file from the specified URL.

See the tutorial on storing files with Boost for more details.
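For illustration, a client that has placed my.car on a public HTTP server can propose a deal pointing at that URL with the boost client (a sketch; see the As a client section for the full set of parameters):

boost deal --provider=<miner id> \
           --http-url=https://myserver/my.car \
           --commp=<commp> \
           --car-size=<car-size> \
           --piece-size=<piece-size> \
           --payload-cid=<payload-cid>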




Deal Filters

How to use deal filters

Using filters for fine-grained storage and retrieval deal acceptance

Your use case might demand very precise and dynamic control over a combination of deal parameters.

Lotus provides two IPC hooks that allow you to name a command to execute for every deal before the miner accepts it:

  • Filter for storage deals.

  • RetrievalFilter for retrieval deals.

The executed command receives a JSON representation of the deal parameters on standard input, and upon completion its exit code is interpreted as:

  • 0: success, proceed with the deal.

  • non-0: failure, reject the deal.

The most trivial filter, rejecting any retrieval deal, would be something like: RetrievalFilter = "/bin/false". /bin/false is a binary that immediately exits with a code of 1.

This Perl script lets the miner deny specific clients and only accept deals that are set to start relatively soon.

You can also use a third-party content policy framework like CIDgravity or bitscreen by Murmuration Labs:

# grab filter program
go get -u -v github.com/Murmuration-Labs/bitscreen

# add it to both filters
Filter = "/path/to/go/bin/bitscreen"
RetrievalFilter = "/path/to/go/bin/bitscreen"
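As a further illustration, here is a minimal shell filter that rejects deals below a certain piece size. This is only a sketch: the exact JSON field names available on standard input depend on the deal proposal schema, so treat the jq path below as an assumption to verify against your own deals.

#!/bin/bash
# min-size-filter.sh - reject storage deals with a piece size below 1 GiB.
# NOTE: the jq path .Proposal.PieceSize is an assumption for illustration;
# inspect the JSON your node passes on stdin and adjust accordingly.
size=$(jq -r '.Proposal.PieceSize // 0')
# accept (exit 0) if the piece size is at least 1 GiB, otherwise reject (exit 1)
if [ "$size" -ge 1073741824 ]; then
  exit 0
fi
exit 1

Make the script executable and reference it from config, e.g. Filter = "/path/to/min-size-filter.sh".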
    Storage Deal Flow

    The typical flow for a Storage Deal is:

    1. The Client puts funds in escrow with the Storage Market Actor on chain.

    2. The Client uploads a CAR file to a web server.

    3. The Client sends a storage deal proposal to Boost with the URL of the CAR file.

    4. Boost checks that the client has enough funds in escrow to pay for storing the file.

    5. Boost accepts the storage deal proposal.

    6. Boost downloads the CAR file from the web server.

    7. Boost publishes the deal on chain.

    8. The client checks that the deal was successfully published on chain.

    Boost exposes a libp2p interface to listen for storage deal proposals from clients. This is similar to the libp2p interface exposed by the lotus market subsystem.

    Boost communicates with the lotus node over its JSON-RPC API for on-chain operations like checking client funds and publishing the deal.

    Once the deal has been published, Boost hands off the downloaded file to lotus-miner for sealing.

Web UI

    Boost comes with a web interface that can be used to manage deals, watch disk usage, monitor funds, adjust settings and more.

    Boost Web UI

Backwards compatibility with the go-fil-markets package

Boost supports the same endpoints as the go-fil-markets package for making storage and retrieval deals, getting the storage and retrieval ask, and getting the status of ongoing deals. This ensures that clients running lotus can make deals with Storage Providers running Boost.

A client for proposing deals that doesn't require a Lotus node

    Boost comes with a client that can be used to make storage deals, and can be configured to point at a public Filecoin API endpoint. That means clients don't need to run a Filecoin node or sync from chain.

    See As a client for details.


    FVM Contract Deals

    This page explains how to start monitoring and accepting deals published on-chain on the FVM

With the release of the FVM, it is now possible for smart contracts to make deal proposals on-chain. This is made possible through the DealProposal FRC.

    DataDAOs, as well as other clients who want to store data on Filecoin, can now deploy a smart contract on the FVM which adheres to the DealProposal FRC, and make deal proposals that are visible to every storage provider who monitors the chain.

How to enable FVM monitoring in order to process storage deal proposals published on-chain?

    Boost already has support for the DealProposal FRC.

The code for FVM monitoring is included in the latest release of Boost. It should be used with caution in production. SPs must enable FEVM on the lotus daemon before proceeding to the next step.

To build for mainnet:

git clone https://github.com/filecoin-project/boost.git
cd boost
git checkout <Release>
make build

In order to enable the DealProposal FRC, you have to edit your config.toml and enable contract deal monitoring. By default it is disabled. Here is an example configuration:

[ContractDeals]
  Enabled = true
  AllowlistContracts = []
  From = "0x0000000000000000000000000000000000000000"

The AllowlistContracts field can be left empty if you want to accept deals from any client. If you only want to accept deals from certain clients, you can specify their contract addresses in this field.

The From field should be set to your SP's FEVM address. Some clients may implement a whitelist which allows specific SPs to accept deal proposals from their contract. This field will help those clients identify your SP and match it to their whitelist.

How contract deals work in Boost

    1. A contract publishes a DealProposalCreate event on the chain.

2. Boost monitors the chain for such events from all clients by default. When such an event is detected, Boost fetches the data for the deal.

3. The deal is then run through the basic deal validation filters, checking for example that the client has enough funds and that the SP has enough funds.

4. Once the deal passes validation, Boost creates a new deal handler and passes the deal for execution like any other Boost deal.

    Architecture

    Local Index Directory architecture and index types

When designing the Local Index Directory we considered the needs of various Storage Providers (SPs) and the operational overhead LID would have on their systems. We built a solution for:

  • small SPs, holding up to 1PiB of data, and

  • mid- and large-size SPs, holding anywhere from 1PiB up to 100PiB of data

Depending on the underlying block size and data format, indices can vary in size. Typically block sizes are between 16KiB and 1MiB.

At the moment there are two implementations of LID:

  • a simple LevelDB implementation, for small SPs who want to keep all information in a single process database

  • a scalable YugabyteDB implementation, for medium and large size SPs with tens of thousands of deals

Index types

    In order to support the described retrieval use cases, LID maintains the following indexes:

multihash → []piece cid

    To look up which pieces contain a block

piece cid → sector information {sector ID, offset, size}

    To look up which sector a piece is in

piece cid → map<multihash → block offset / size>

    To look up where in the piece a block is and the block’s size

    Roll back to Lotus markets service process

This section describes how to roll back to the Lotus markets service process if you are not happy with boostd.

Before you begin the migration from a Lotus markets service process to Boost, make sure you have a backup of your Lotus repository by following the Lotus documentation. You can also do a full backup of the Lotus markets repository directory.

    1. If you haven't made any legacy deals with Boost:

      1. Stop boostd

      2. Run your lotus-miner markets service process as you previously did

    2. If you have made new legacy deals with Boost, and want to migrate them back:

  1. Stop boostd

  2. Copy the dagstore directory from the boost repository to the markets repository.

  3. Export the Boost deals datastore keys/values:

lotus-shed market export-datastore --repo <repo> --backup-dir <backup-dir>
Wrote backup file to <backup-dir>/markets.datastore.backup

  4. Import the exported deals datastore keys/values into the lotus markets repository:

lotus-shed market import-datastore --repo <repo> --backup-path <backup-path>
Completed importing from backup file <backup-path>

    Need help?

    How to get help for Boost

1. You can report any issues or bugs in the Boost GitHub repository.

    2. If you are having trouble, check the Troubleshooting page for common problems and solutions.

3. If you have a question, please join the Filecoin Slack and ask in #fil-help, #fil-lotus-help or #boost-help, or start a discussion.

4. You can also start a discussion about new feature and improvement ideas for Boost.

    Setting up a monitoring stack for Boost

    This tutorial goes through the steps required to run our Docker monitoring setup to collect and visualize metrics for various Boost processes

Background

The monitoring stack we will use includes:

  • Prometheus - collects metrics and powers dashboards in Grafana

  • Tempo - collects traces and powers traces search in Grafana with Jaeger

  • Grafana - provides visualization tools and dashboards for all metrics and traces

Lotus and Boost are already instrumented to produce traces and stats for Prometheus to collect. The Boost team also packages a set of Grafana dashboards that are automatically provisioned as part of this setup.

    Initialisation

    This page explains how to initialise LID and start using it to provide retrievals to clients

    Considering that the Local Index Directory is a new feature, Storage Providers should initialise it after upgrading their Boost deployments.

    There are two ways a Storage Provider can do that:

1. Migrate existing indices from the DAG store into LID: this solution assumes that the Storage Provider has been keeping an unsealed copy for every sector they prove on-chain, and has already indexed all their deal data into the DAG store. Typically index sizes for a given sector range from 100KiB up to 1GiB, depending on the deal data and its block sizes. The DAG store keeps these indices in the repository directory of Boost under the ./dagstore/index and ./dagstore/datastore directories. This data should be migrated to LID with the migrate-lid utility.

2. Recreate indices for deal data based on unsealed copies of sectors: this solution assumes that the Storage Provider has unsealed copies for every sector they prove on-chain. If this is not the case, then the SP should first trigger an unseal (UNS) job on their system for every sector that contains user data and produce an unsealed copy. SPs can use the boostd recover lid utility to produce an index for all deal data within an unsealed sector and store it in LID, enabling retrievals for that data. Depending on the SP's deployment, where unsealed copies are hosted (NFS, Ceph, external disks, etc.) and the performance of the hosting system, producing an index for a 32GiB sector can take anywhere from a few seconds up to a few minutes, as the unsealed copy needs to be processed by the utility.

Migrate existing indices from the DAG store into LID

TODO

Recreate indices for deal data based on unsealed copies of sectors

TODO

    As a client

    Boost comes with a client executable, boost, that can be used to send a deal proposal to a Boost server.

The client is intentionally minimal and meant for developer testing. It is not a full-featured client and is not intended to be one. It does not require a daemon process, and can be pointed at any public Filecoin API for on-chain operations. This means that users of the client do not need to run a Filecoin node that syncs the chain.

Set the API endpoint environment variable

export FULLNODE_API_INFO=<filecoin API endpoint>

There are a number of public Filecoin APIs run by a number of organisations, such as Infura, Glif, etc. For test purposes you can try: export FULLNODE_API_INFO=https://api.node.glif.io

    Hardware requirements

    The hardware requirements for Boost are tied to the sealer part of the Lotus deployment it is attached to.

    Depending on how much data you need to onboard, and how many deals you need to make with clients, hardware requirements in terms of CPU and Disk will vary.

General hardware requirements



CPU

    A miner will need an 8+ core CPU.

    We strongly recommend a CPU model with support for Intel SHA Extensions: AMD since Zen microarchitecture, or Intel since Ice Lake. Lack of SHA Extensions results in a very significant slow down.

    The most significant computation that Boost has to do is the Piece CID calculation (also known as Piece Commitment or CommP). When Boost receives data from a client, it calculates the Merkle root out of the hashes of the Piece (padded .car file). The resulting root of the clean binary Merkle tree is the Piece CID.
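You can compute CommP for a CAR file yourself with the boostx utility that ships with Boost (the same command is used in the storage tutorial later in this documentation), which is a convenient way to benchmark this computation on your hardware:

boostx commp ./my-data.car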

RAM

    2 GiB of RAM are needed at the very least.

Disk

    Boost stores all data received from clients before Piece CID is calculated and compared against deal parameters received from clients. Next, deals are published on-chain, and Boost waits for a number of epoch confirmations before proceeding to pass data to the Lotus sealing subsystem. This means that depending on the throughput of your operation, you must have disk space for at least a few staged sectors.

    For small deployments 100 GiB of disk are needed at the very least if we assume that Boost is to keep three 32 GiB sectors before passing them to the sealing subsystem.

We recommend using an NVMe disk for Boost. As the Dagstore grows in size, overall performance might slow down due to a slow disk.


Initialize the client

boost -vv init

The init command:

    • Creates a Boost client repository (at ~/.boost-client by default)

    • Generates a libp2p peer ID key

    • Generates a wallet for on-chain operations and outputs the wallet address

Add funds to the wallet and to the market actor

To make deals you will need to: a) add funds to the wallet; b) add funds to the market actor for that wallet address.

Make a storage deal

Currently, we don't distribute binaries, so you will have to build from source.

boost -vv deal --provider=<f00001> \
               --http-url=<https://myserver/my.car> \
               --commp=<commp> \
               --car-size=<car-size> \
               --piece-size=<piece-size> \
               --payload-cid=<payload-cid>

When a storage provider accepts the deal, you should see output of the command similar to:

sent deal proposal
  deal uuid: 9e68fb16-ff9a-488e-ad0a-1289b512d176
  storage provider: f0127896
  client wallet: f1sw5zjcyo4mff5cbvgsgmm8uoko6gcr4tptvtkhy
  payload cid: bafyaa6qshafcmalqudsbeidrunclaep6mdbipm2gjfvuosjfd6cbqd6th7bshy5hi5npxe727yjaagelucbyabasgafcmalqudsaeieapsxspo2i36no36n7yitswsxdazvziwvgj4vbp2scuxasrc6n4ejaage3r7m3saykcqeaegeavdllsbzaqcaibaaeecakrvvzam
  url: https://webserver/file.car
  commp: baga6ea4seaqh5prrl6ykov4t64k6m5giijsc44dcxtdnzsp4izjakqhs7twauiq
  start epoch: 1700711
  end epoch: 2219111
  provider collateral: 358.687 μFIL

Check deal status

You can check the deal status with the following command:

boost deal-status --provider=<provider> --deal-uuid=<deal-uuid>

got deal status response
  deal uuid: 9e68fb16-ff8a-488e-ad0a-1289b512d176
  deal status: Transfer Queued
  deal label: bafyaa6qsgafcmalqudsaeidrunclaep6mdbipm2gjfvuosjfd6cbqd6th7bshy5hi5npxe727yjaagelucbyabasgafcmalqudsaeieapsxspo2i36no36n7yitswsxdazvziwvgj4vbp2scuxasrc6n4ejaage3r7m3saykcqeaegeavdllsbzaqcaibaaeecakrvvzam
  publish cid: <nil>
  chain deal id: 0


Prerequisites

    This setup has been tested on macOS and on Linux. We haven’t tested it on Windows, so YMMV.

    All the monitoring stack containers run in Docker.

Steps

1. Install Docker

    We have tested this setup with Docker 20.10.23 on macOS and Ubuntu.

https://docs.docker.com/engine/install/

2. DNS resolution for Prometheus

Update extra_hosts in docker-compose.yaml for prometheus, so that the Prometheus container can reach all its targets - boostd, lotus-miner, booster-bitswap, booster-http, etc. See https://github.com/filecoin-project/boost/blob/main/docker/monitoring/docker-compose.yaml#L47-L55

    Depending on where your Filecoin processes (boostd, lotus, lotus-miner, booster-bitswap, etc.) are running, you need to confirm that they are reachable from Prometheus so that it can scrape their metrics.

By default the setup expects to find them within the same Docker network, so if you are running them elsewhere (i.e. on the `host` network), you will need to adjust the Prometheus configuration accordingly.
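With the targets configured, you can bring the stack up. This is a sketch, assuming you run Docker Compose from the docker/monitoring directory of the boost repository:

cd boost/docker/monitoring
# "docker compose" requires the Compose v2 plugin; use "docker-compose" with the standalone binary
docker compose up -d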

3. Prometheus targets

Confirm that Prometheus targets are scraped at http://localhost:9090 under Targets.

If you are running a software firewall like `ufw`, you might need to modify your iptables and allow access from the Prometheus container / network to the Filecoin stack network, for example:

sudo docker network inspect monitoring   # note the Subnet for the network
sudo ufw allow from 172.18.0.0/16

4. Grafana dashboards

Go to Grafana at http://localhost:3333 and inspect the dashboards:

    FAQ

    Frequently asked questions about Boost

Is there a way to stop the boostd daemon? You can use the regular Unix OS signals.

Is Boost compatible with the Lotus client? Can a client use lotus client deal to send a deal to Boost storage providers, or do they have to use the boost client? Yes, Boost should work with any client that supports the storage market protocol / the default standard of the Filecoin network today.

    Does Boost provide retrieval functionality? Yes, Boost provides 3 protocols for retrievals as of now. By default, Boost has Graphsync retrieval enabled. SPs can run Bitswap and HTTP retrievals by running booster-bitswap and booster-http respectively.

Does the Boost client have retrieval functionality? Yes, the Boost client supports retrieval over the graphsync protocol, but we highly recommend using the Lassie client for Filecoin/IPFS retrievals.

    Can Boost make verified deals? Yes, payments for deals can be made either from a regular wallet, or from DataCap. Deals that are paid for with DataCap are called verified deals.

Can I run both Boost and markets at the same time? No, Boost replaces the legacy markets process. See Migrate a Lotus markets service process to Boost.

    Local Index Directory

    This page describes the Local Index Directory component in Boost, what it is used for, how it works and how to start using it


    Local Index Directory is not yet released. This is a placeholder page for its documentation.

Background

    The Local Index Directory (LID) manages and stores indices of deal data so that it can be retrieved by a content identifier (cid).

    Currently this task is performed by the DAG store component. The DAG store keeps its indexes on disk on a single machine. LID replaces the DAG store and introduces a horizontally scalable backend database for storing the data - YugabyteDB.

    LID is designed to provide a more intuitive experience for the user, by surfacing problems and providing various repair tools.

To summarize, LID is the component which keeps fine-grained metadata about all the deals on Filecoin that a given Storage Provider stores. Without it, clients would only be able to retrieve full pieces, which are generally between 8GiB and 32GiB in size.

Storing data on Filecoin

When a client uploads deal data to Boost, LID records the sector that the deal data is stored in and scans the deal data to create an index of all its blocks, indexed by block cid. This way clients can later retrieve subsets of the original deal data, without retrieving the full deal data.

Retrieving data

When a client makes a request for data by cid, LID:

  • checks which piece the cid is in, and where in the piece the data is

  • checks which sector the piece is in, and where in the sector the piece is

  • reads the data from the sector

Use cases

    The retrieval use cases that the Local Index Directory supports are:

Graphsync retrieval

    Request one root cid with a selector, receive many blocks

LID is able to:

  • look up which piece contains the root cid

  • look up which sector contains the piece

  • for each block, get the offset into the piece for the block

Bitswap retrieval

    Request one block at a time

LID is able to:

  • look up which piece contains the block

  • get the size of the block (Bitswap asks for the size before getting the block data)

  • look up which sector contains the piece

  • get the offset into the piece for the block

HTTP retrieval

    Request a whole piece

    LID is able to look up which sector contains the piece.

    Request an individual block

LID is able to:

  • look up which piece contains the block

  • look up which sector contains the piece

  • get the offset into the piece for the block

    Request a file by root cid

LID is able to:

  • look up which piece contains the block

  • look up which sector contains the piece

  • for each block, get the offset into the piece for the block

    How to store files with Boost on Filecoin

    This tutorial goes through all the steps required to make a storage deal with Boost on Filecoin.

First, you need to initialise a new Boost client and also set the endpoint for a public Filecoin node. In this example we are using https://glif.io

    export FULLNODE_API_INFO=https://api.node.glif.io
    
    boost init

    The init command will output your new wallet address, and warn you that the market actor is not initialised.

    boost init
    
    boost/init_cmd.go:53    default wallet set      {"wallet": "f3wfbcudimjcqtfztfhoskgls5gmkfx3kb2ubpycgo7a2ru77temduoj2ottwzlxbrbzm4jycrtu45deawbluq"}
    boost/init_cmd.go:60    wallet balance  {"value": "0"}
    boost/init_cmd.go:65    market actor is not initialised, you must add funds to it in order to send online deals

    Then you need to send funds to the wallet, and add funds to the market actor (in the example below we are adding 1 FIL).

You can use the boostx utilities to add funds to the market actor:

boostx market-add 1

    You can confirm that the market actor has funds by running boost init again.

After that you need to generate a car file for the data you want to store on Filecoin, and note down its payload-cid. We recommend using the go-car CLI to generate the car file:

car create -f my-data.car --version 1 <my-data>
car root my-data.car

bafykbzacedzjq6jvlqnclrseq8pp5lypa6ozuqgug3wjie3orh67berkwv7e4

Then you need to calculate the commp and piece size for the generated car file:

boostx commp ./my-data.car

CommP CID:  baga6ea4seaqjaxked6ovoj5f3bdisfeuwtjhrzh3s34mg5cyzevgoebe7tdckdi
Piece size:  2097152
Car file size: 1101978

    Place the generated car file on a public HTTP server, so that a storage provider can later fetch it.

Finally, trigger an online storage deal with a given storage provider:

boost deal --verified=false \
           --provider=f0026876 \
           --http-url=https://public-http-server.com/my-data.car \
           --commp=baga6ea4seaqjaxked6ovoj5f3bdisfeuwtjhrzh3s34mg5cyzevgoebe7tdckdi \
           --car-size=1101978 \
           --piece-size=2097152 \
           --payload-cid=bafykbzacedzjq6jvlqnclrseq8pp5lypa6ozuqgug3wjie3orh67berkwv7e4
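Once the deal has been sent, you can check its status with the boost client, as described in the Check deal status section:

boost deal-status --provider=f0026876 --deal-uuid=<deal-uuid>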

    Requirements

    Local Index Directory requirements and dependencies

Dependencies

    Local Index Directory depends on a backend database to store various indices. Currently we support two implementations - YugabyteDB or LevelDB - depending on the size of deal data and indices a storage provider holds.

    LevelDB is an open source on-disk key-value store, and can be used when indices fit on a single host.

    YugabyteDB is an open source modern distributed database designed to run in any public, private, hybrid or multi-cloud environment.


Storage providers who hold more than 1PiB of data are encouraged to use YugabyteDB, as it is horizontally scalable, provides better monitoring and management utilities, and can support future growth.

Hardware requirements

For detailed instructions, playbooks and hardware recommendations, see the YugabyteDB website: https://docs.yugabyte.com

YugabyteDB is designed to run on bare-metal machines, virtual machines (VMs), and containers.

CPU and RAM

You should allocate adequate CPU and RAM. YugabyteDB has adequate defaults for running on a wide range of machines, and has been tested from 2 core to 64 core machines, and up to 200GB RAM.

Minimum requirement

2 cores, 2GB RAM

Production requirement

16+ cores, 32GB+ RAM. Add more CPU (compared to adding more RAM) to improve performance.

Verify support for SSE2 and SSE4.2

    YugabyteDB requires the SSE2 instruction set support, which was introduced into Intel chips with the Pentium 4 in 2001 and AMD processors in 2003. Most systems produced in the last several years are equipped with SSE2.

    In addition, YugabyteDB requires SSE4.2.

    To verify that your system supports SSE2, run the following command:

    cat /proc/cpuinfo | grep sse2

    To verify that your system supports SSE4.2, run the following command:

    cat /proc/cpuinfo | grep sse4.2

Disks

SSDs (solid state disks) are required. We recommend a minimum of 1TiB or more allocated for YugabyteDB, depending on the amount of deal data you store and its average block size.

Assuming you've kept unsealed copies of all your data and have consistently indexed deal data, the size of your DAG store directory should be comparable with the requirements for YugabyteDB.

    Database

    Boost stores metadata about deals in a sqlite database in the root directory of the Boost repo.

    To open the database use a sqlite client:

    sqlite3 boost.db

The database tables are:

    • Deals metadata about Boost storage deals (eg deal proposal) and their current state (eg checkpoint)

    • FundsLogs log of each change in funds reserved for a deal

    • FundsTagged how much FIL is tagged for deal collateral and publish message for a deal

    • StorageLogs log of each change in storage reserved for a deal

    • StorageTagged how much storage is tagged for a deal

    Boost keeps a separate database just for deal logs, so as to make it easier to manage log data separately from deal metadata. The logs database is named boost.logs.db and it has a single table DealLogs that stores logs for each deal, indexed by uuid.
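For example, you can inspect the schema and run simple queries against the deals database with the sqlite3 client (the Deals table is described above; consult the schema for the exact column names):

sqlite3 boost.db '.schema Deals'
sqlite3 boost.db 'SELECT COUNT(*) FROM Deals;'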

Migrations

Boost uses the goose tool and library (https://pressly.github.io/goose/) for handling sqlite3 migrations.

goose can be installed by following the instructions at https://pressly.github.io/goose/installation/

    Migrations in Boost are stored in the /db/migrations directory.

    Boost handles database migrations on start-up. If a user is running an older version of Boost, migrations up to the latest version are automatically applied on start-up.

Developers can use goose to inspect and apply migrations using the CLI:

➜  ~ goose
Usage: goose [OPTIONS] DRIVER DBSTRING COMMAND
...
Commands:
    up                   Migrate the DB to the most recent version available
    up-by-one            Migrate the DB up by 1
    up-to VERSION        Migrate the DB to a specific VERSION
    down                 Roll back the version by 1
    down-to VERSION      Roll back to a specific VERSION
    redo                 Re-run the latest migration
    reset                Roll back all migrations
    status               Dump the migration status for the current DB
    version              Print the current version of the database
    create NAME [sql|go] Creates new migration file with the current timestamp
    fix                  Apply sequential ordering to migrations
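For instance, to check the migration status of a Boost database (a sketch; adjust the repository and migrations directory paths to your setup):

goose -dir ./db/migrations sqlite3 ~/.boost/boost.db status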

    UI Settings

    Boost configuration options available in UI

Configuration

    [Graphql]
      ListenAddress = "127.0.0.1"
      Port = 8080
    
    [Monitoring]
      MpoolAlertEpochs = 30

    By default, the web UI listens on the localhost interface on port 8080. We recommend keeping the UI listening on localhost or some internal IP within your private network to avoid accidentally exposing it to the internet.

To access a web UI listening on the localhost interface of a remote server, you can open an SSH tunnel from your local machine:

ssh -L 8080:localhost:8080 myserver

Settings Page

The Settings page exposes the following parameters:

  • Price / epoch / Gib - Example: 500000000 - Asking price for a deal in atto fils. This price is per epoch per GiB of data in a deal.

  • Verified Price / epoch / Gib - Example: 500000000 - Asking price for a verified deal in atto fils. This price is per epoch per GiB of data in a deal.

  • Min Piece Size - Example: 256 - Minimum size of a piece that the storage provider will accept, in bytes.

  • Max Piece Size - Example: 34359738368 - Maximum size of a piece that the storage provider will accept, in bytes.

    GraphQL API

Boost exposes a GraphQL API that is used by the Web UI to query and update information about Boost deals. The GraphQL API query endpoint is at http://localhost:8080/graphql/query

You can also run your own queries against the GraphQL API using CURL or a programming language that has a GraphQL client.

Boost has a built-in GraphQL explorer at http://localhost:8080/graphiql

    You can test out queries, or explore the GraphQL API by clicking on the < Docs link at the top right of the page:

To run a GraphQL query with CURL:

curl -X POST \
-H "Content-Type: application/json" \
-d '{"query":"query { deals(offset: 5, limit: 10) { deals { ID CreatedAt PieceCid } } }"}' \
http://localhost:8080/graphql/query | jq

This one-minute video shows how to use these tools to build and run a GraphQL query against Boost:

Example Queries

1. Query failed deals:

curl -X POST \
-H "Content-Type: application/json" \
-d '{"query":"query { deals(limit: 10, query: \"failed to get size of imported\") { deals { ID CreatedAt Message } } }"}' \
http://localhost:8080/graphql/query | jq

2. Cancel a deal, where ab12345c-5678-90de-12f3-45a6b78cd9ef is the deal ID:

curl -X POST \
-H "Content-Type: application/json" \
-d '{"query":"mutation { dealCancel(id: \"ab12345c-5678-90de-12f3-45a6b78cd9ef\") }"}' \
http://localhost:8080/graphql/query | jq

    Migrate a Lotus markets service process to Boost

    This section describes how to upgrade your lotus-miner markets service to boostd


If you are running a monolith lotus-miner and have not yet split the markets service into an individual process, follow the steps in Migrate a monolith lotus-miner to Boost.

If you are running a markets service as a separate lotus-miner process:





    1. Stop accepting incoming deals

    2. Wait for incoming deals to complete

    3. Shutdown the markets process

    4. Backup the markets repository

5. Backup the markets datastore (in case you decide to roll back from Boost to Lotus) with:

lotus-shed market export-datastore --repo <repo> --backup-dir <backup-dir>

    6. Make sure you have a Lotus node and miner running

7. Create and send funds to two new wallets on the lotus node to be used for Boost:

PUBLISH_STORAGE_DEALS_WALLET=`lotus wallet new bls`
COLLAT_WALLET=`lotus wallet new bls`
lotus send --from mywallet $PUBLISH_STORAGE_DEALS_WALLET 10
lotus send --from mywallet $COLLAT_WALLET 10

    Boost currently uses two wallets for storage deals:

    • The publish storage deals wallet - This wallet pays the gas cost when Boost sends the PublishStorageDeals message.


If you already have a PublishStorageDeals control wallet set up, it can be reused in Boost as the PUBLISH_STORAGE_DEALS_WALLET.

    • The deal collateral wallet - When the Storage Provider accepts a deal, they must put collateral for the deal into escrow. Boost moves funds from this wallet into escrow with the StorageMarketActor.


    If you already have a wallet that you want to use as the source of funds for deal collateral, then it can be reused in boost as the COLLAT_WALLET.

    8. Boost keeps all data in a directory called the repository. By default the repository is at ~/.boost. To use a different location pass the --boost-repo parameter.

9. Export the environment variables needed for boostd migrate-markets to connect to the lotus daemon and lotus miner.

Export environment variables that point to the API endpoints for the sealing and mining processes. They will be used by the boost node to make JSON-RPC calls to the mining/sealing/proving node.

export $(lotus auth api-info --perm=admin)
export $(lotus-miner auth api-info --perm=admin)
export APISEALER=`echo $MINER_API_INFO`
export APISECTORINDEX=`echo $MINER_API_INFO`

10. Set the publish storage deals wallet as a control wallet:

export OLD_CONTROL_ADDRESS=`lotus-miner actor control list  --verbose | grep -v owner | grep -v worker | grep -v beneficiary | awk '{print $3}' | grep -v key | tr -s '\n'  ' '`
lotus-miner actor control set --really-do-it $PUBLISH_STORAGE_DEALS_WALLET $OLD_CONTROL_ADDRESS


Add the value of PUBLISH_STORAGE_DEALS_WALLET to the DealPublishControl parameter in the Address section of the lotus-miner configuration if it is not present. Restart lotus-miner if the configuration has been updated.

11. Run boostd migrate-markets to initialize the repository and start the migration:

boostd --vv migrate-markets \
       --import-markets-repo=~/.my-markets-repo \
       --wallet-publish-storage-deals=$PUBLISH_STORAGE_DEALS_WALLET \
       --wallet-deal-collateral=$COLLAT_WALLET \
       --max-staging-deals-bytes=50000000000

    The migrate-markets command

    • Initializes a Boost repository

    • Migrates markets datastore keys to Boost

      • Storage and retrieval deal metadata

      • Storage and retrieval ask data

    • Migrates markets libp2p keys to Boost

    • Migrates markets config to Boost (libp2p endpoints, settings etc)

    • Migrates the markets DAG store to Boost

    12. Run the boostd service, which will start:

    • libp2p listeners for storage and retrieval

    • the JSON RPC API

    • the graphql interface (used by the react front-end)

• the web server for the react front-end

boostd --vv run


    In your firewall you will need to open the ports that libp2p listens on, so that Boost can receive storage and retrieval deals.

See the Libp2p section of config.toml in the Repository section.

Web UI

    Open http://localhost:8080 in your browser.


To access a web UI running on a remote server, you can open an SSH tunnel from your local machine:

ssh -L 8080:localhost:8080 myserver

API Access

The Boost API can be accessed by setting the environment variable BOOST_API_INFO, in the same way as LOTUS_MARKET_INFO:

export BOOST_API_INFO=<TOKEN>:<API Address>

The token and API address can be obtained with:

boostd auth api-info --perm=admin

HTTP Transfer limit

This page covers all the configuration related to the HTTP transfer limiter in Boost.

Boost provides the capability to limit the number of simultaneous HTTP transfers in progress when downloading deal data from clients.

This configuration was introduced in ConfigVersion = 3 of the Boost configuration file.

Configuration Variables

HTTP variables

  # The maximum number of concurrent storage deal HTTP downloads.
  # Note that this is a soft maximum; if some downloads stall,
  # more downloads are allowed to start.
  #
  # type: uint64
  # env var: LOTUS_DEALMAKING_HTTPTRANSFERMAXCONCURRENTDOWNLOADS
  #HttpTransferMaxConcurrentDownloads = 20

  # The period between checking if downloads have stalled.
  #
  # type: Duration
  # env var: LOTUS_DEALMAKING_HTTPTRANSFERSTALLCHECKPERIOD
  #HttpTransferStallCheckPeriod = "30s"

  # The time that can elapse before a download is considered stalled (and
  # another concurrent download is allowed to start).
  #
  # type: Duration
  # env var: LOTUS_DEALMAKING_HTTPTRANSFERSTALLTIMEOUT
  #HttpTransferStallTimeout = "5m0s"

Storage variables

  # The maximum allowed disk usage size in bytes of downloaded deal data
  # that has not yet been passed to the sealing node by boost.
  # When the client makes a new deal proposal to download data from a host,
  # boost checks this config value against the sum of:
  # - the amount of data downloaded in the staging area
  # - the amount of data that is queued for download
  # - the amount of data in the proposed deal
  # If the total amount would exceed the limit, boost rejects the deal.
  # Set this value to 0 to indicate there is no limit.
  #
  # type: int64
  # env var: LOTUS_DEALMAKING_MAXSTAGINGDEALSBYTES
  MaxStagingDealsBytes = 50000000000

  # The percentage of MaxStagingDealsBytes that is allocated to each host.
  # When the client makes a new deal proposal to download data from a host,
  # boost checks this config value against the sum of:
  # - the amount of data downloaded from the host in the staging area
  # - the amount of data that is queued for download from the host
  # - the amount of data in the proposed deal
  # If the total amount would exceed the limit, boost rejects the deal.
  # Set this value to 0 to indicate there is no limit per host.
  #
  # type: uint64
  # env var: LOTUS_DEALMAKING_MAXSTAGINGDEALSPERCENTPERHOST
  #MaxStagingDealsPercentPerHost = 0

How TransferLimiter works

    The transferLimiter maintains a queue of transfers with a soft upper limit on the number of concurrent transfers.

To prevent slow or stalled transfers from blocking up the queue, there are a couple of mitigations. The queue is ordered such that we:

    • start transferring data for the oldest deal first

    • prefer to start transfers with peers that don't have any ongoing transfer

    • once the soft limit is reached, don't allow any new transfers with peers that have existing stalled transfers


Note that peers are distinguished by their host (eg foo.bar:8080), not by libp2p peer ID. For example, if there is:

    • one active transfer with peer A

    • one pending transfer (peer A)

    • one pending transfer (peer B)

The algorithm will prefer to start a transfer with peer B rather than peer A. This helps to ensure that slow peers don't block the transfer queue.

The limit on the number of concurrent transfers is soft. For example, if there is a limit of 5 concurrent transfers and there are:

    • three active transfers

    • two stalled transfers

then two more transfers are permitted to start (as long as they're not with one of the stalled peers).

    As a storage provider


    If you are already running a standalone markets process, follow the guide at Migrate a Lotus markets service process to Boost

    If you are already running a monolith lotus-miner instance, follow the guide at Migrate a monolith lotus-miner to Boost

Initialization and Running

    1. Make sure you have a Lotus node and miner running

    2. Create and send funds to two new wallets on the lotus node to be used for Boost

    Boost currently uses two wallets for storage deals:

    • The publish storage deals wallet - This wallet pays the gas cost when Boost sends the PublishStorageDeals message.

• The deal collateral wallet - When the Storage Provider accepts a deal, they must put collateral for the deal into escrow. Boost moves funds from this wallet into escrow with the StorageMarketActor.

export PUBLISH_STORAGE_DEALS_WALLET=`lotus wallet new bls`
export COLLAT_WALLET=`lotus wallet new bls`
lotus send --from mywallet $PUBLISH_STORAGE_DEALS_WALLET 10
lotus send --from mywallet $COLLAT_WALLET 10

3. Set the publish storage deals wallet as a control wallet:

lotus-miner actor control set --really-do-it $PUBLISH_STORAGE_DEALS_WALLET

    4. Create and initialize the Boost repository


If you are already running a Lotus markets service process, you should run boostd migrate-markets instead of boostd init.

See the Migrate a Lotus markets service process to Boost section for more details.

    Boost keeps all data in a directory called the repository. By default the repository is at ~/.boost. To use a different location pass the --boost-repo parameter (must precede any particular command verb, e.g. boostd --boost-repo=/path init).

    Export the environment variables needed for boostd init to connect to the lotus daemon and lotus miner.

Export environment variables that point to the API endpoints for the sealing and mining processes. They will be used by the boost node to make JSON-RPC calls to the mining/sealing/proving node.

export $(lotus auth api-info --perm=admin)
export $(lotus-miner auth api-info --perm=admin)
export APISEALER=`echo $MINER_API_INFO`
export APISECTORINDEX=`echo $MINER_API_INFO`

Run boostd init to create and initialize the repository:

boostd --vv init \
       --api-sealer=$APISEALER \
       --api-sector-index=$APISECTORINDEX \
       --wallet-publish-storage-deals=$PUBLISH_STORAGE_DEALS_WALLET \
       --wallet-deal-collateral=$COLLAT_WALLET \
       --max-staging-deals-bytes=50000000000

    • --api-sealer is the API info for the lotus-miner instance that does sealing

    • --api-sector-index is the API info for the lotus-miner instance that provides storage

• --max-staging-deals-bytes is the maximum amount of storage to be used for downloaded files (once the limit is reached, Boost will reject subsequent incoming deals)

5. Update the ulimit file descriptor limit if necessary. Boost deals will fail if the file descriptor limit for the process is not set high enough. This limit can be raised temporarily before starting the Boost process by running the command ulimit -n 1048576. We recommend setting it permanently by following the Permanently Setting Your ULIMIT System Value guide.

6. Make sure that the correct <PEER_ID> and <MULTIADDR> for your SP is set on chain, given that boost init generates a new identity. Use the following commands to update the values on chain:

lotus-miner actor set-addrs <MULTIADDR>
lotus-miner actor set-peer-id <PEER_ID>


<MULTIADDR> should be the same as the ListenAddresses you set in the Libp2p section of Boost's config.toml. <PEER_ID> can be found in the output of the boostd net id command.

    7. Run the boostd service, which will start:

    • libp2p listeners for storage and retrieval

    • the JSON RPC API

• the graphql interface (used by the react front-end)

• the web server for the react front-end

boostd --vv run


    In your firewall you will need to open the ports that libp2p listens on, so that Boost can receive storage and retrieval deals.

See the Libp2p section of config.toml in the Repository section.
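For example, with ufw you might open the libp2p port like this (a sketch; substitute the TCP port from your Libp2p ListenAddresses):

sudo ufw allow <libp2p-port>/tcp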

Web UI


When you build boostd using make build, the react app is also built as part of the process, and you can skip this section.

The following steps are to be used only in case you are building the binary and the react app separately.

1. Build the React frontend:

cd react

# Download and install npm packages needed by the React frontend
npm install --legacy-peer-deps

# Build the optimized JavaScript and CSS in boost/react/build
npm run build

2. Open the Web UI

    Open http://localhost:8080 in your browser.


To access a web UI running on a remote server, you can open an SSH tunnel from your local machine:

ssh -L 8080:localhost:8080 myserver

API Access

The Boost API can be accessed by setting the environment variable BOOST_API_INFO, in the same way as LOTUS_MARKET_INFO:

boostd auth api-info --perm=admin

export BOOST_API_INFO=<TOKEN>:<API Address>

You can also directly evaluate the boostd auth command with:

export $(boostd auth api-info --perm=admin)

    HTTP Retrieval

    How to configure and use HTTP retrievals in Boost

    Boost introduced a new binary, booster-http, with release v1.2.0. This binary can be run alongside the boostd market process in order to serve retrievals over http.

    Currently, there is no payment method or built-in security integrated in the new binary. It can be run with any stable release of boostd and can also be run on a separate machine from the boostd process.

Release v1.7.0-rc1 introduced support in booster-http for running an IPFS gateway, which enables Storage Providers to serve content to their users in multiple formats, as described below and demonstrated using curl.

Retrieving a full Piece

When performing certain actions, such as replicating deals, it can be convenient to retrieve the entire Piece (with padding) to ensure commp integrity:

curl http://{SP's http retrieval URL}/piece/bagaSomePieceCID -o bagaSomePieceCID.piece

Retrieving a CAR file

To return the CAR file for a given CID, you can pass an Accept header with the application/vnd.ipld.car; format. This can be useful for retrieving the raw, unpadded data of a deal:

curl -H "Accept:application/vnd.ipld.car;" http://{SP's http retrieval URL}/ipfs/bafySomePayloadCID -o bafySomePayloadCID.car

Retrieving specific files

For Storage Providers that have enabled serving raw files (disabled by default), users can retrieve specific files, such as images, by their cid and path where applicable. See Serving files with booster-http for a more in-depth example:

curl http://{SP's http retrieval URL}/ipfs/{content ID}/{optional path to resource} -o myimage.png

Retrieving IPLD blocks

For advanced IPFS and IPLD use cases, you can now retrieve individual blocks by passing an Accept header with the application/vnd.ipld.raw; format:

curl -H "Accept:application/vnd.ipld.raw;" http://{SP's http retrieval URL}/ipfs/bafySomeBlockCID -o bafySomeBlockCID

Local Setup

    SPs should try a local setup and test their HTTP retrievals before proceeding to run booster-http in production.

    To build and run booster-http :

1. Clone the boost repo and checkout the latest release:

git clone https://github.com/filecoin-project/boost.git
cd boost
git checkout <release>

2. Build the new binary:

make booster-http

3. Collect the token information for the boost, lotus-miner and lotus daemon APIs:

export ENV_BOOST_API_INFO=`boostd auth api-info --perm=admin`
export BOOST_API_INFO=`echo $ENV_BOOST_API_INFO | awk '{split($0,a,"="); print a[2]}'`
export ENV_FULLNODE_API_INFO=`lotus auth api-info --perm=admin`
export FULLNODE_API_INFO=`echo $ENV_FULLNODE_API_INFO | awk '{split($0,a,"="); print a[2]}'`
export ENV_MINER_API_INFO=`lotus-miner auth api-info --perm=admin`
export MINER_API_INFO=`echo $ENV_MINER_API_INFO | awk '{split($0,a,"="); print a[2]}'`

4. Start the booster-http server with the above details:

booster-http run --api-boost=$BOOST_API_INFO --api-fullnode=$FULLNODE_API_INFO --api-storage=$MINER_API_INFO


    You can run multiple booster-http processes on the same machine by using a different port for each instance with the --port flag. You can also run multiple instances of the booster-http on different machines.

Running Public Boost HTTP Retrieval

The booster-http server listens on localhost. To expose the server publicly, SPs should run a reverse proxy such as NGINX to handle operational concerns like:

    • SSL

    • Authentication

    • Load balancing

While booster-http may get more operational features over time, the intent is that providers who want to scale their HTTP operations will handle most operational concerns via software in front of booster-http. You can set up a simple NGINX proxy using the example provided in Serving files with booster-http.

Making HTTP Retrieval Discoverable

    To enable public discovery of the Boost HTTP server, SPs should set the domain root in boostd's config.toml. Under the [DealMaking] section, set HTTPRetrievalMultiaddr to the public domain root in multi-address format.

Example config.toml section:

[DealMaking]
  HTTPRetrievalMultiaddr = "/dns/foo.com/tcp/443/https"

Clients can determine if an SP offers HTTP retrieval by running:

boost provider retrieval-transports <miner id>

Clients can check the HTTP URL scheme version and supported queries:

// Supported queries
curl https://foo.com/index

// Version
curl https://foo.com/info

Clients can download a piece using the domain root configured by the SP:

# Download a piece by its CID
curl https://foo.com/piece/bagaSomePieceCID -o download.piece

    Getting started

This section details how to get started with Boost, whether you are a storage provider or a client.

The Boost source code repository is hosted at https://github.com/filecoin-project/boost

Boost and Lotus compatibility matrix

Boost Version | Lotus Version | Golang Version

    HTTP indexer announcement

Configure Boost to publish IPNI announcements over HTTP

Announce over HTTP

IndexProvider.HttpPublisher.AnnounceOverHttp must be set to true to enable HTTP announcements. Once HTTP announcements are enabled, the local-index provider will continue to announce over libp2p gossipsub along with HTTP for the specified indexers.

The advertisements are sent to the indexer nodes defined in DirectAnnounceURLs. You can specify more than one URL to announce to multiple indexer nodes.

Once an IPNI node starts processing the advertisements, it will reach out to the Boost node to fetch the data. Thus, the Boost node needs to specify a public IP and port which can be used by the indexer node to query for data.
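As an illustration, here is a minimal sketch of the relevant config.toml settings, assuming the fields above live under an [IndexProvider.HttpPublisher] subsection and that cid.contact's ingest endpoint is the target indexer (verify both against your generated config.toml):

[IndexProvider]
  Enable = true

  [IndexProvider.HttpPublisher]
    # Announce advertisements over HTTP in addition to libp2p gossipsub
    AnnounceOverHttp = true
    # Indexer nodes to announce to; more than one URL may be listed
    DirectAnnounceURLs = ["https://cid.contact/ingest/announce"]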

    Troubleshooting

Inspect

    The new inspect page in the Boost UI helps with debugging retrieval problems. It allows the user to check the following using a payload CID or piece CID:

    • Verify if the piece has been correctly added to the Piece Store



    export PUBLISH_STORAGE_DEALS_WALLET=`lotus wallet new bls`
    export COLLAT_WALLET=`lotus wallet new bls`
    lotus send --from mywallet $PUBLISH_STORAGE_DEALS_WALLET 10
    lotus send --from mywallet $COLLAT_WALLET 10
    lotus-miner actor control set --really-do-it $PUBLISH_STORAGE_DEALS_WALLET
    export $(lotus auth api-info --perm=admin)
    export $(lotus-miner auth api-info --perm=admin)
    export APISEALER=`echo $MINER_API_INFO`
    export APISECTORINDEX=`echo $MINER_API_INFO`
    boostd --vv init \
           --api-sealer=$APISEALER \
           --api-sector-index=$APISECTORINDEX \
           --wallet-publish-storage-deals=$PUBLISH_STORAGE_DEALS_WALLET \
           --wallet-deal-collateral=$COLLAT_WALLET \
           --max-staging-deals-bytes=50000000000
    lotus-miner actor set-addrs <MULTIADDR>
    lotus-miner actor set-peer-id <PEER_ID>
    boostd --vv run
    cd react
    
    # Download and install npm packages needed by the React frontend
    npm install --legacy-peer-deps
    
    # Build the optimized JavaScript and CSS in boost/react/build
    npm run build
    ssh -L 8080:localhost:8080 myserver
    boostd auth api-info --perm=admin
    
    export BOOST_API_INFO=<TOKEN>:<API Address>
    export $(boostd auth api-info --perm=admin)
    curl http://{SP's http retrieval URL}/piece/bagaSomePieceCID -o bagaSomePieceCID.piece
    curl -H "Accept:application/vnd.ipld.car;" http://{SP's http retrieval URL}/ipfs/bafySomePayloadCID -o bafySomePayloadCID.car
    curl http://{SP's http retrieval URL}/ipfs/{content ID}/{optional path to resource} -o myimage.png
    curl -H "Accept:application/vnd.ipld.raw;" http://{SP's http retrieval URL}/ipfs/bafySomeBlockCID -o bafySomeBlockCID
    git clone https://github.com/filecoin-project/boost.git
    cd boost
    git checkout <release>
    make booster-http
    export ENV_BOOST_API_INFO=`boostd auth api-info --perm=admin`
    export BOOST_API_INFO=`echo $ENV_BOOST_API_INFO | awk '{split($0,a,"="); print a[2]}'`
    export ENV_FULLNODE_API_INFO=`lotus auth api-info --perm=admin`
    export FULLNODE_API_INFO=`echo $ENV_FULLNODE_API_INFO | awk '{split($0,a,"="); print a[2]}'`
    export ENV_MINER_API_INFO=`lotus-miner auth api-info --perm=admin`
    export MINER_API_INFO=`echo $ENV_MINER_API_INFO | awk '{split($0,a,"="); print a[2]}'`
    booster-http run --api-boost=$BOOST_API_INFO --api-fullnode=$FULLNODE_API_INFO --api-storage=$MINER_API_INFO
    [DealMaking]
      HTTPRetrievalMultiaddr = "/dns/foo.com/tcp/443/https"
    boost provider retrieval-transports <miner id>
# Supported queries
curl https://foo.com/index

# Version
curl https://foo.com/info
    # Download a piece by its CID
    curl https://foo.com/piece/bagaSomePieceCID -o download.piece
    [IndexProvider]
      # Enable set whether to enable indexing announcement to the network and expose endpoints that
      # allow indexer nodes to process announcements. Enabled by default.
      #
      # type: bool
      # env var: LOTUS_INDEXPROVIDER_ENABLE
      #Enable = true
    
      # EntriesCacheCapacity sets the maximum capacity to use for caching the indexing advertisement
      # entries. Defaults to 1024 if not specified. The cache is evicted using LRU policy. The
      # maximum storage used by the cache is a factor of EntriesCacheCapacity, EntriesChunkSize and
      # the length of multihashes being advertised. For example, advertising 128-bit long multihashes
      # with the default EntriesCacheCapacity, and EntriesChunkSize means the cache size can grow to
      # 256MiB when full.
      #
      # type: int
      # env var: LOTUS_INDEXPROVIDER_ENTRIESCACHECAPACITY
      #EntriesCacheCapacity = 1024
    
      # EntriesChunkSize sets the maximum number of multihashes to include in a single entries chunk.
      # Defaults to 16384 if not specified. Note that chunks are chained together for indexing
      # advertisements that include more multihashes than the configured EntriesChunkSize.
      #
      # type: int
      # env var: LOTUS_INDEXPROVIDER_ENTRIESCHUNKSIZE
      #EntriesChunkSize = 16384
    
      # TopicName sets the topic name on which the changes to the advertised content are announced.
      # If not explicitly specified, the topic name is automatically inferred from the network name
      # in following format: '/indexer/ingest/<network-name>'
      # Defaults to empty, which implies the topic name is inferred from network name.
      #
      # type: string
      # env var: LOTUS_INDEXPROVIDER_TOPICNAME
      #TopicName = ""
    
      # PurgeCacheOnStart sets whether to clear any cached entries chunks when the provider engine
      # starts. By default, the cache is rehydrated from previously cached entries stored in
      # datastore if any is present.
      #
      # type: bool
      # env var: LOTUS_INDEXPROVIDER_PURGECACHEONSTART
      #PurgeCacheOnStart = true
    
      [IndexProvider.Announce]
        # Make a direct announcement to a list of indexing nodes over http.
        # Note that announcements are already made over pubsub regardless
        # of this setting.
        #
        # type: bool
        # env var: LOTUS_INDEXPROVIDER_ANNOUNCE_ANNOUNCEOVERHTTP
        #AnnounceOverHttp = true
    
        # The list of URLs of indexing nodes to announce to.
        #
        # type: []string
        # env var: LOTUS_INDEXPROVIDER_ANNOUNCE_DIRECTANNOUNCEURLS
        #DirectAnnounceURLs = ["https://cid.contact/ingest/announce"]
    
      [IndexProvider.HttpPublisher]
        # If not enabled, requests are served over graphsync instead.
        #
        # type: bool
        # env var: LOTUS_INDEXPROVIDER_HTTPPUBLISHER_ENABLED
        #Enabled = true
    
        # Set the public hostname / IP for the index provider listener.
        # eg "82.129.73.111"
    # This is usually the same as the hostname / IP of the boost node.
        #
        # type: string
        # env var: LOTUS_INDEXPROVIDER_HTTPPUBLISHER_PUBLICHOSTNAME
        #PublicHostname = "82.129.73.111"
    
        # Set the port on which to listen for index provider requests over HTTP.
        # Note that this port must be open on the firewall.
        #
        # type: int
        # env var: LOTUS_INDEXPROVIDER_HTTPPUBLISHER_PORT
        #Port = 3401

Boost version                | Lotus version              | Golang version
v1.5.0                       | v1.18.0                    | 1.18.x
v1.5.1, v1.5.2, v1.5.3       | v1.18.0, v1.19.0           | 1.18.x
v1.6.0, v1.6.1, v1.6.2-rc1   | v1.20.x                    | 1.18.x
v1.6.3, v1.6.4               | v1.22.x                    | 1.18.x
v1.6.2-rc2, v1.7.0-rc1       | v1.21.0-rc1, v1.21.0-rc2   | 1.20.x

    hashtag
    Building and installing

    hashtag
    Prerequisites

    circle-exclamation

    Please make sure you have installed:

    • Go - following https://go.dev/learn/arrow-up-right

    • Rust - following https://www.rust-lang.org/tools/installarrow-up-right

    • Node 16.x

    hashtag
    Environment Variables in Boost

    Linux / Ubuntu

    MacOS

    hashtag
    Linux

    circle-exclamation

    Depending on your architecture, you will want to export additional environment variables:

    Please ignore any output or onscreen instruction during the npm build unless there is an error.

    hashtag
    MacOS

    Please ignore any output or onscreen instruction during the npm build unless there is an error.

    hashtag
    Calibration Network

    To build boost for calibnet, please complete the above pre-requisites and build using the following commands.

    hashtag
    Upgrading Boost

    hashtag
    Linux

1. Make sure that the Boost daemon is not running. Run the below commands to upgrade the binary.

    2. Please ignore any onscreen instruction during the npm build unless there is an error.

    3. Start the boost daemon.

    hashtag
    MacOS

1. Make sure that the Boost daemon is not running. Run the below commands to upgrade the binary.

    2. Please ignore any onscreen instruction during the npm build unless there is an error.

    3. Start the boost daemon.

    github.com/filecoin-project/boostarrow-up-right

  • Validate if the piece is indexed in the DAG store

  • Check for an unsealed copy of the piece

  • Verify that the payload CID -> piece CID index has been created correctly

hashtag
Failed to connect to peer

    If the client cannot connect to Boost running on a Storage provider, with an error similar to the following:

    The problem is that:

    • The SP registered their peer id and address on chain.

    eg "Register the peer id 123abcd at address ip4/123.456.12.345/tcp/1234"

    • The SP changed their peer id locally but didn't update the peer id on chain.

    • The client wants to make a storage deal with peer 123abcd. The client looks on chain for the address of peer 123abcd and sees peer 123abcd has registered an address ip4/123.456.12.345/tcp/1234.

    • The client sends a deal proposal for peer 123abcd to the SP at address ip4/123.456.12.345/tcp/1234.

    • The SP has changed their peer ID, so the SP responds to the deal proposal request with an error: peer id mismatch

    To fix the problem, the SP should register the new peer id on chain:

    hashtag
    Update storage provider's on chain address

Clients will not be able to connect to Boost running on a Storage Provider after an IP change. This happens because clients look up the registered peer id and address on chain for an SP. When an SP changes their IP or address locally, they must also update it on chain.

    The SP should register the new peer id on chain using the following lotus-miner command

    circle-exclamation

Please make sure to use the public IP and port of the Boost node, not the lotus-miner node, if your miner and boostd run on separate machines.

    The on chain address change requires access to the worker key and thus the command lives in lotus-miner instead of Boost.

    hashtag
    Error in lotus-miner info output

After migrating to Boost, the following error is seen when running lotus-miner info:

    hashtag
    Problem:

lotus-miner is making a call on the lotus-market process, which has been replaced by Boost, but lotus-miner is not aware of the new market process.

    hashtag
    Solution:

    Export the MARKETS_API_INFO variable on your lotus-miner node.

    hashtag
    Fix retrievals with error "failed to lookup index for mh"

    The following error shows up when trying to retrieve the data from a storage provider.

The error indicates that the dagstore does not have a corresponding index shard for the piece containing the requested data. When a retrieval is requested, the dagstore on the storage provider side is queried and a reverse lookup is used to determine the key (piece CID). This key is then used to query the piece store to find the sector containing the data and the byte offset.

If for any reason the shard is not registered with the dagstore, then the reverse lookup to find the piece CID fails and the above error is seen. The most widely known reason for not having the shard registered with the dagstore is the below error.

To fix the deals where retrievals are impacted by the above error, you will need to register the shards manually with the dagstore:

If you have multiple deals in such a state, then you will need to generate a list of pieces registered with the piece store and then compare it with the shards available in the dagstore to create a list of missing shards.

    triangle-exclamation

    Please stop accepting any deals and ensure all current deals are handed off to the lotus-miner (sealer) subsystem before proceeding from here.

1. Create a list of all sectors on lotus-miner and redirect the output to a file. Copy the output file to the boost node to be used by the below command.

    2. Generate a list of shards to be registered

    3. Register the shards with dagstore in an automated fashion.

Please note that each shard may take up to 3-5 minutes to get registered. So, the above command might take hours or days to complete depending on the number of missing shards.

    Using filters for storage and retrieval deals

    Storage providers might demand very precise and dynamic control over a combination of deal parameters.

    Boost, similarly to Lotus, provides two IPC hooks allowing you to name a command to execute for every deal before the storage provider accepts it:

    • Filter for storage deals.

    • RetrievalFilter for retrieval deals.

    The executed command receives a JSON representation of the deal parameters, as well as the current state of the sealing pipeline, on standard input, and upon completion, its exit code is interpreted as:

    • 0: success, proceed with the deal.

    • non-0: failure, reject the deal.

The most trivial filter, rejecting any retrieval deal, would be something like: RetrievalFilter = "/bin/false". /bin/false is a binary that immediately exits with a code of 1.

This Perl scriptarrow-up-right lets the miner deny specific clients and only accept deals that are set to start relatively soon.

You can also use a third party content policy framework like bitscreen by Murmuration Labs, or CID gravityarrow-up-right:

    Here is a sample JSON representation of the input sent to the deal filter:

    libp2p Protocols

    Boost supports the same libp2p protocols as legacy markets, and adds new versions of the protocols used to propose a storage deal and to check the deal's status.

    hashtag
    Propose Storage Deal Protocol

    The client makes a deal proposal over v1.2.0 or v1.2.1 of the Propose Storage Deal Protocol: /fil/storage/mk/1.2.0 or /fil/storage/mk/1.2.1

    It is a request / response protocol, where the request and response are CBOR-marshalled.

    There are two new fields in the Request of v1.2.1 of the protocol, described in the table below.

    hashtag
    Request

Field                       | Type               | Description
DealUUID                    | uuid               | A uuid for the deal specified by the client
IsOffline                   | boolean            | Indicates whether the deal is online or offline
ClientDealProposal          | ClientDealProposal | Same as <v1 proposal>.DealProposal
DealDataRoot                | cid                | The root cid of the CAR file. Same as <v1 proposal>.Piece.Root
Transfer.Type               | string             | eg "http"
Transfer.ClientID           | string             | Any id the client wants (useful for matching logs between client and server)
Transfer.Params             | byte array         | Interpreted according to Type. eg for "http" Transfer.Params contains the http headers as JSON
Transfer.Size               | integer            | The size of the data that is sent across the network
SkipIPNIAnnounce (v1.2.1)   | boolean            | Whether the provider should announce the deal to IPNI or not (default: false)
RemoveUnsealedCopy (v1.2.1) | boolean            | Whether the provider should remove the unsealed copy of the deal data (default: false, i.e. keep an unsealed copy)

    hashtag
    Response

Field    | Type    | Description
Accepted | boolean | Indicates whether the deal proposal was accepted
Message  | string  | A message about why the deal proposal was rejected

    hashtag
    Storage Deal Status Protocol

    The client requests the status of a deal over v1.2.0 of the Storage Deal Status Protocol: /fil/storage/status/1.2.0

    It is a request / response protocol, where the request and response are CBOR-marshalled.

    hashtag
    Request

Field     | Type                    | Description
DealUUID  | uuid                    | The uuid of the deal
Signature | Signaturearrow-up-right | A signature over the uuid with the client's wallet

    hashtag
    Response

Field               | Type         | Description
DealUUID            | uuid         | The uuid of the deal
Error               | string       | Non-empty if there's an error getting the deal status
IsOffline           | boolean      | Indicates whether the deal is online or offline
TransferSize        | integer      | The total size of the transfer in bytes
NBytesReceived      | integer      | The number of bytes that have been downloaded
DealStatus.Error    | string       | Non-empty if the deal has failed
DealStatus.Status   | string       | The checkpointarrow-up-right that the deal has reached
DealStatus.Proposal | DealProposal |
SignedProposalCid   | cid          | cid of the client deal proposal + signature
PublishCid          | cid          | The cid of the publish message, if the deal has been published
ChainDealID         | integer      | The ID of the deal on chain, if it's been published

    DAG store

    The DAG store manages a copy of unsealed deal data stored as CAR files. It maintains indexes over the CAR files to facilitate efficient querying of multihashes.

    hashtag
    Directory structure

    By default, the dagstore root will be:

    • $BOOST_PATH/dagstore

    The directory structure is as follows:

    1. index: holds the shard indices.

    2. transients: holds temporary shard data (unsealed pieces) while they're being indexed.

3. datastore: records shard state and metadata so it can survive restarts.

4. .shard-registration-complete: marker file that signals that initial migration for legacy markets deals is complete.

5. .boost-shard-registration-complete: marker file that signals that initial migration for boost deals is complete.

    hashtag
    First-time migration

    When you first start your boost process without a dagstore repo, a migration process will register all shards for both legacy and Boost deals in lazy initialization mode. As deals come in, shards are fetched and initialized just in time to serve the retrieval.

    • For legacy deals, you can monitor the progress of the migration in your log output, by grepping for the keyword migrator. Here's example output. Notice the first line, which specifies how many deals will be evaluated (this number includes failed deals that never went on chain, and therefore will not be migrated), and the last lines (which communicate that migration completed successfully):

    • For Boost deals, you can do the same by grepping for the keyword boost-migrator.

    hashtag
    Forcing bulk initialization

    Forcing bulk initialization will become important in the near future, when miners begin publishing indices to the network to advertise content they have, and new retrieval features become available (e.g. automatic shard routing).

    Initialization places IO workload on your storage system. You can stop/start this command at your wish/convenience as proving deadlines approach and elapse, to avoid IOPS starvation or competition with window PoSt.

To stop a bulk initialization (see the next paragraph), press Control-C. Shards being initialized at that time will continue in the background, but no more initializations will be performed. The next time you run the command, it will resume from where it left off.

    You can force bulk initialization using the boostd dagstore initialize-all command. This command will force initialization of every shard that is still in ShardStateNew state for both legacy and Boost deals. To control the operation:

    • You must set a concurrency level through the --concurrency=N flag.

      • A value of 0 will disable throttling and all shards will be initialized at once. ⚠️ Use with caution!

    In our test environments, we found the migration to proceed at a rate of 400-500 shards/deals per second, on the following hardware specs: AMD Ryzen Threadripper 3970X, 256GB DDR4 3200 RAM, Samsung 970 EVO 2TB SSD, RTX3080 10GB GPU.

    hashtag
    Configuration

    The DAG store can be configured through the config.toml file of the node that runs the boost subsystem. Refer to the [DAGStore] section. Boost ships with sane defaults:

    hashtag
    Automatic shard recovery on error

    Shards can error for various reasons, e.g. if the storage system cannot serve the unsealed CAR for a deal/shard, if the shard index is accidentally deleted, etc.

    Boost will automatically try to recover failed shards by triggering a recovery once.

    You can view failed shards by using the boostd dagstore list-shards command, and optionally grepping for ShardStateErrored.
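For example, a quick way to list only the failed shards (a sketch combining the commands named above):

boostd dagstore list-shards | grep ShardStateErrored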

    hashtag
    CLI commands

    The boostd executable contains a dagstore command with several useful subcommands:

    • boostd dagstore list-shards

    • boostd dagstore initialize-shard <key>

    • boostd dagstore initialize-all --concurrency=10

    Refer to the --help texts for more information.

    Migrate a monolith lotus-miner to Boost

    circle-exclamation

If you have already split your lotus-miner into a separate markets process (MRA), follow the steps in Migrate a Lotus markets service process to Boost.

    circle-info

Please note that a monolith miner can only be split into boost (markets) + miner on the same physical machine, as this requires access to the miner repo to migrate the deal metadata.

    export RUSTFLAGS="-C target-cpu=native -g"
    export FFI_BUILD_FROM_SOURCE=1
    BOOST_CLIENT_REPO         - repo directory for Boost client
BOOSTER_BITSWAP_REPO      - repo directory for Booster bitswap
    BOOST_PATH                - boost repo path
    FULLNODE_API_INFO         - Lotus daemon node API connection string
    curl -sL https://deb.nodesource.com/setup_16.x | sudo -E bash -
    sudo apt-get install -y nodejs
    sudo apt install mesa-opencl-icd ocl-icd-opencl-dev gcc git bzr jq pkg-config curl clang build-essential hwloc libhwloc-dev wget -y
    brew install node@16
    brew install bzr jq pkg-config hwloc coreutils
    git clone https://github.com/filecoin-project/boost
    cd boost
    git checkout <Stable tag or branch>
    make clean build
    sudo make install
    export LIBRARY_PATH=$LIBRARY_PATH:/opt/homebrew/lib
    git clone https://github.com/filecoin-project/boost
    cd boost
    git checkout <Stable tag or branch>
    make clean build
    sudo make install
    git clone https://github.com/filecoin-project/boost
    cd boost
    git checkout <Stable tag or branch>
    make clean calibnet
    cd boost
    git checkout main
    git pull
    git checkout <New release>
    make clean build
    sudo make install
    export LIBRARY_PATH=$LIBRARY_PATH:/opt/homebrew/lib
    cd boost
    git checkout main
    git pull
    git checkout <New release>
    make clean build
    sudo make install
    failed to connect to peer <peer id>: failed to dial <peer id>:
      * <multi-address> failed to negotiate security protocol:
        peer id mismatch: expected <peer id>,
        but remote key matches <different peer id>
    lotus-miner actor set-peer-id 123abcd
    lotus-miner actor set-addrs ip4/123.456.12.345/tcp/1234
    lotus-miner actor set-peer-id <new peer id>
    lotus-miner actor set-addrs /ip4/<YOUR_PUBLIC_IP_ADDRESS_OF_BOOST_NODE>/tcp/<Boostd Port>
    ERROR: fatal error calling 'Filecoin.MarketListIncompleteDeals': panic in rpc method 'Filecoin.MarketListIncompleteDeals': runtime error: invalid memory address or nil pointer dereference
    export MARKETS_API_INFO=<Boost token:api>
    ERROR: offer error: retrieval query offer errored: failed to fetch piece to retrieve from: getting pieces for cid Qmf1ykhUo63qB5dJ8KRyeths9MZfyxpVdT5xwnmoLKefz7: getting pieces containing block Qmf1ykhUo63qB5dJ8KRyeths92mfyxpVdT5xi1moLKefz7: failed to lookup index for mh 1220f7ce2d20772b959c1071868e9495712f12785b1710ee88752af120dd49338190, err: datastore: key not found
    2022-02-21T20:06:03.950+1100 INFO markets loggers/loggers.go:20 storage provider event {"name": "ProviderEventFailed", "proposal CID": "bafyreihr743zllr2eckgfiweouiap7pgcjqa3mg3t75jjt7sfcpu", "state": "StorageDealError", "message": "error awaiting deal pre-commit: failed to set up called handler: called check error (h: 1570875): failed to look up deal on chain: deal 3964985 not found - deal may not have completed sealing before deal proposal start epoch, or deal may have been slashed"}
    boostd dagstore register-shard <piece CID>
    lotus-miner sectors list | awk '{print $1 " " $2}' | grep -v ID > aclist.txt
    comm -13 <(for i in $(boostd pieces list-pieces); do sector_list=`boostd pieces piece-info $i | awk '{print $2}'| sed -ne '/SectorID/,$p' | grep -v SectorID`; for j in $sector_list; do grep -w $j aclist.txt > /dev/null; if [ $? -eq 0 ]; then break; else echo "$i"; fi; done; done) <(comm -13 <(boostd dagstore list-shards | awk '{print $1}' | sed 1d | sort) <(boostd pieces list-pieces | sort))
    for i in `cat <OUTPUT OF STEP 2 IN A FILE>` ; do boostd dagstore register-shard $i; done

Boost version                            | Lotus version | Golang version
v1.7.0, v1.7.1, v1.7.2, v1.7.3, v1.7.4   | v1.23.x       | 1.20.x


  • By default, only unsealed pieces will be indexed, to avoid forcing unsealing jobs. To also index sealed pieces, use the --include-sealed flag.
  • boostd dagstore gc

  • # grab filter program
    go get -u -v github.com/Murmuration-Labs/bitscreen
    
    # add it to both filters
    Filter = "/path/to/go/bin/bitscreen"
    RetrievalFilter = "/path/to/go/bin/bitscreen"
    {
      "DealParams": {
        "DealUUID": "48c31c8c-dcc8-4372-a0ac-b5468eea555b",
        "IsOffline": false,
        "ClientDealProposal": {
          "Proposal": {
            "PieceCID": {
              "/": "baga6ea4seaqh5prrl6ykov4t64k6m6giijsc44dcxtdnzsp4izjakqhs7twauiq"
            },
            "PieceSize": 2147483648,
            "VerifiedDeal": false,
            "Client": "f1sw5zjcyo4mff5cbvgsgmm7uoko6gcr4tptvtkhy",
            "Provider": "f0127896",
            "Label": "bafyaa7qsgafcmalqudsaeidrunclaep6mdbipm2gjfvuosjfd6cbqd6th7bshy5hi5npxe727yjaagelucbyabasgafcmalqudsaeieapsxspo2i36no36n7yitswsxdazvziwvgj4vbp2scuxasrc6n4ejaage3r7m3saykcqeaegeavdllsbzaqcaibaaeecakrvvzam",
            "StartEpoch": 1717840,
            "EndEpoch": 2236240,
            "StoragePricePerEpoch": "1",
            "ProviderCollateral": "363196619502649",
            "ClientCollateral": "0"
          },
          "ClientSignature": {
            "Type": 1,
            "Data": "SmgcBnQE+0ZIb4zAXw7TpxLliSaliShEvX9P4+uwvxBhRDlJD+F6N3NFoNrA2y5bTeWF5aWWuL93w+SSmXFkoAA="
          }
        },
        "DealDataRoot": {
          "/": "bafyaa7qsgafcmalqudsaeidrunclaep6mdbipm2gjfvuosjfd6cbqd6th7bshy5hi5npxe727yjaagelucbyabasgafcmalqudsaeieapsxspo2i36no36n7yitswsxdazvziwvgj4vbp2scuxasrc6n4ejaage3r7m3saykcqeaegeavdllsbzaqcaibaaeecakrvvzam"
        },
        "Transfer": {
          "Type": "http",
          "ClientID": "",
          "Params": "eyJVUkwiOiJodHRwczovL2FudG8uLXB1YmxpYy1idWNrZXQtYm9vc3QuczMuZXUtY2VudHJhbC0xLmFtYXpvbmF3cy5jb20vcmFuZGZpbGVfMkdCXzAuY2FyIiwiSGVhZGVycyI6bnVsbH0=",
          "Size": 2000177948
        }
      },
      "SealingPipelineState": {
        "SectorStates": {
          "Available": 684,
          "Proving": 307,
          "Removed": 82,
          "TerminateFailed": 1,
          "TerminateWait": 5
        },
        "Workers": null
      }
    }
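As an illustration, a minimal filter script could parse this JSON with jq and reject deals from a specific client address. This is a sketch, not part of Boost; the script path and the denied address are hypothetical.

#!/bin/bash
# deal-filter.sh - minimal sketch of a deal filter.
# Boost pipes the deal proposal JSON (shown above) to stdin.

# hypothetical client address to reject
DENIED_CLIENT="f1sw5zjcyo4mff5cbvgsgmm7uoko6gcr4tptvtkhy"

# extract the client address from the proposal JSON on stdin
client=$(jq -r '.DealParams.ClientDealProposal.Proposal.Client')

if [ "$client" = "$DENIED_CLIENT" ]; then
  exit 1  # non-zero exit code: reject the deal
fi
exit 0    # zero exit code: accept the deal

It would then be wired into config.toml as Filter = "/path/to/deal-filter.sh".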
     dagstore
         |___ index                         # (1)
         |___ transients                    # (2)
         |___ datastore                     # (3)
         |___ .shard-registration-complete  # (4)
         |___ .boost-shard-registration-complete  # (5)
    2021-08-09T22:06:35.701+0300    INFO    dagstore.migrator       dagstore/wrapper.go:286 registering shards for all active deals in sealing subsystem    {"count": 453}
    2021-08-09T22:06:35.701+0300    WARN    dagstore.migrator       dagstore/wrapper.go:335 deal has nil piece CID; skipping        {"deal_id": 0}
    2021-08-09T22:06:35.701+0300    INFO    dagstore.migrator       dagstore/wrapper.go:348 registering deal in dagstore with lazy init     {"deal_id": 2208881, "piece_cid": "baga6ea4seaqhnvxy55e
    nveknyqhkkh7mltcrrcx35yvuxdmcbfouaafkvp6niay"}
    2021-08-09T22:06:35.702+0300    INFO    dagstore.migrator       dagstore/wrapper.go:318 async shard registration completed successfully {"shard_key": "baga6ea4seaqhnvxy55enveknyqhkkh7mltcrrcx
    35yvuxdmcbfouaafkvp6niay"}
    [...]
    2021-08-09T22:06:35.709+0300    INFO    dagstore.migrator       dagstore/wrapper.go:361 finished registering all shards {"total": 44}
    [...]
    2021-08-09T22:06:35.826+0300    INFO    dagstore.migrator       dagstore/wrapper.go:365 confirmed registration of all shards
    2021-08-09T22:06:35.826+0300    INFO    dagstore.migrator       dagstore/wrapper.go:372 successfully marked migration as complete
    2021-08-09T22:06:35.826+0300    INFO    dagstore.migrator       dagstore/wrapper.go:375 dagstore migration complete
    [DAGStore]
      # Path to the dagstore root directory. This directory contains three
      # subdirectories, which can be symlinked to alternative locations if
      # need be:
      #  - ./transients: caches unsealed deals that have been fetched from the
      #    storage subsystem for serving retrievals.
      #  - ./indices: stores shard indices.
      #  - ./datastore: holds the KV store tracking the state of every shard
      #    known to the DAG store.
      # Default value: <BOOST_PATH>/dagstore
      # RootDir = ""
    
      # The maximum amount of indexing jobs that can run simultaneously.
      # 0 means unlimited.
      # Default value: 5.
      #
      # type: int
      # MaxConcurrentIndex = 5
    
      # The maximum amount of unsealed deals that can be fetched simultaneously
      # from the storage subsystem. 0 means unlimited.
      # Default value: 0 (unlimited).
      #
      # type: int
      # MaxConcurrentReadyFetches = 0
    
      # The maximum number of simultaneous inflight API calls to the storage
      # subsystem.
      # Default value: 100.
      #
      # type: int
      # MaxConcurrencyStorageCalls = 100
    
      # The time between calls to periodic dagstore GC, in time.Duration string
      # representation, e.g. 1m, 5m, 1h.
      # Default value: 1 minute.
      #
      # type: Duration
      # GCInterval = "1m"


    hashtag
    Prepare to migrate

    1. Make sure you have a Lotus node and miner running

    2. Create and send funds to two new wallets on the lotus node to be used for Boost

    Boost currently uses two wallets for storage deals:

    • The publish storage deals wallet - This wallet pays the gas cost when Boost sends the PublishStorageDeals message.

    circle-info

If you already have a PublishStorageDeal control wallet set up, then it can be reused in boost as the PUBLISH_STORAGE_DEALS_WALLET.

    • The deal collateral wallet - When the Storage Provider accepts a deal, they must put collateral for the deal into escrow. Boost moves funds from this wallet into escrow with the StorageMarketActor.

    circle-info

    If you already have a wallet that you want to use as the source of funds for deal collateral, then it can be reused in boost as the COLLAT_WALLET.

    3. Set the publish storage deals wallet as a control wallet.

    circle-exclamation

Add the value of PUBLISH_STORAGE_DEALS_WALLET to the parameter DealPublishControl in the Addresses section of the lotus-miner configuration, if not present. Restart lotus-miner if the configuration has been updated.
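A sketch of the relevant section of the lotus-miner config.toml (the wallet address is a placeholder):

[Addresses]
  DealPublishControl = ["<PUBLISH_STORAGE_DEALS_WALLET address>"]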

    4. Set up environment variables needed for Boost migration

    Export the environment variables needed for boostd migrate-monolith to connect to the lotus daemon and lotus miner.

    Export environment variables that point to the API endpoints for the sealing and mining processes. They will be used by the boost node to make JSON-RPC calls to the mining/sealing/proving node.

    hashtag
    Shut down lotus-miner

    1. Stop accepting incoming deals

    2. Wait for incoming deals to complete

3. Shut down the lotus-miner

4. Back up the lotus-miner repository

5. Back up the lotus-miner datastore (in case you decide to roll back from Boost to Lotus) with:

    6. Set the environment variable LOTUS_FULLNODE_API to allow access to the lotus node API.

    hashtag
    Migrate from the lotus-miner repo to the Boost repo

    Run boostd migrate-monolith to create and initialize the boost repository:

The migrate-monolith command:

    • Initializes a Boost repository

    • Migrates markets datastore keys to Boost

      • Storage and retrieval deal metadata

      • Storage and retrieval ask data

    • Migrates markets libp2p keys to Boost

    • Migrates markets config to Boost (libp2p endpoints, settings etc)

    • Migrates the markets DAG store to Boost

    hashtag
    Update the lotus-miner config

1. Back up lotus-miner's config.toml

    2. Disable the markets subsystem in miner config:

    Boost replaces the markets subsystem in the lotus-miner, so we need to disable the subsystem in config:

    Under the [Subsystems] section set EnableMarkets = false

    3. Change the miner's libp2p port

    Boost replaces the markets subsystems, and listens on the same libp2p port, so we need to change the libp2p port that the miner is listening on.

    Under the [Libp2p] section change the port in ListenAddresses
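For example, the two changes above might look like this in the miner's config.toml (the port number is illustrative; pick any free port different from the one Boost now uses):

[Subsystems]
  EnableMarkets = false

[Libp2p]
  ListenAddresses = ["/ip4/0.0.0.0/tcp/24001"]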

    hashtag
    Restart lotus-miner

    Start lotus-miner up again so that Boost can connect to the miner when it starts.

    hashtag
    Run the boostd service

    The boostd service will start:

    • libp2p listeners for storage and retrieval

    • the JSON RPC API

    • the graphql interface (used by the react front-end)

    • the web server for the react front-end

    circle-info

In your firewall you will need to ensure that the libp2p ports that Boost listens on are open, so that Boost can receive storage and retrieval deals. See the Libp2p section of config.toml in the Repository page.

    hashtag
    Web UI

    Open http://localhost:8080 in your browser.

    circle-info

    To access a web UI running on a remote server, you can open an SSH tunnel from your local machine:

    hashtag
    API Access

The Boost API can be accessed by setting the environment variable BOOST_API_INFO, in the same way as LOTUS_MARKETS_API_INFO.

    hashtag
    Migrating Boost from one machine to another

Once Boost has been split from the monolith miner, it can be moved to another physical or virtual machine by following the below steps.

    1. Build the boost binary on the new machine by following the Getting Started step.

    2. Copy the boost repo from the original monolith miner machine to the new dedicated boost machine.

    3. Set the environment variable LOTUS_FULLNODE_API to allow access to the lotus node API.

    4. Open the required port on the firewall on the monolith miner machine to allow connection to lotus-miner API.

5. In your firewall you will need to ensure that the libp2p ports that Boost listens on are open, so that Boost can receive storage and retrieval deals. See the Libp2p section of config.toml in the Repository page.

    6. Start the boostd process.

    Migrate a Lotus markets service process to Boost

    Legacy Deal configuration

    Advanced configurations you can tune to optimize your legacy deal onboarding

    hashtag
    Dealmaking section

    This section controls parameters for making storage and retrieval deals:

ExpectedSealDuration is an estimate of how long sealing will take and is used to reject deals whose start epoch might be earlier than the expected completion of sealing. It can be estimated by benchmarkingarrow-up-right or by pledging a sectorarrow-up-right.

    ssh -L 8080:localhost:8080 myserver
    export PUBLISH_STORAGE_DEALS_WALLET=`lotus wallet new bls`
    export COLLAT_WALLET=`lotus wallet new bls`
    lotus send --from mywallet $PUBLISH_STORAGE_DEALS_WALLET 10
    lotus send --from mywallet $COLLAT_WALLET 10
    export OLD_CONTROL_ADDRESS=`lotus-miner actor control list  --verbose | grep -v owner | grep -v worker | grep -v beneficiary | awk '{print $3}' | grep -v key | tr -s '\n'  ' '`
    lotus-miner actor control set --really-do-it $PUBLISH_STORAGE_DEALS_WALLET $OLD_CONTROL_ADDRESS
    export $(lotus auth api-info --perm=admin)
    export $(lotus-miner auth api-info --perm=admin)
    export APISEALER=`echo $MINER_API_INFO`
    export APISECTORINDEX=`echo $MINER_API_INFO`
    lotus-shed market export-datastore --repo <repo> --backup-dir <backup-dir>
    lotus auth api-info -perm admin
    boostd --vv migrate-monolith \
           --import-miner-repo=<lotus-miner repo path> \
           --api-sealer=$APISEALER \
           --api-sector-index=$APISECTORINDEX \
           --wallet-publish-storage-deals=$PUBLISH_STORAGE_DEALS_WALLET \
           --wallet-deal-collateral=$COLLAT_WALLET \
           --max-staging-deals-bytes=50000000000 
    cp <miner repo>/config.toml <miner repo>/config.toml.backup
    boostd --vv run
    export BOOST_API_INFO=<TOKEN>:<API Address>
boostd auth api-info --perm=admin
    boostd --vv run
    Use the GraphQL explorer to create a query against Boost
    circle-exclamation

The final value of ExpectedSealDuration should equal (TIME_TO_SEAL_A_SECTOR + WaitDealsDelay) * 1.5. This equation ensures that the miner does not commit to having the sector sealed too soon.
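As a hypothetical example, if sealing a sector takes around 12 hours and WaitDealsDelay is 6 hours, then (12h + 6h) * 1.5 = 27h:

[LotusDealmaking]
  ExpectedSealDuration = "27h0m0s"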

    StartEpochSealingBuffer allows lotus-miner to seal a sector before a certain epoch. For example: if the current epoch is 1000 and a deal within a sector must start on epoch 1500, then lotus-miner must wait until the current epoch is 1500 before it can start sealing that sector. However, if Boost sets StartEpochSealingBuffer to 500, the lotus-miner can start sealing the sector at epoch 1000.

If there are multiple deals in a sector, StartEpochSealingBuffer is based on the deal with a start time closest to the current epoch. So, if the sector in our example has three deals that start on epoch 1000, 1200, and 1400, then lotus-miner will start sealing the sector at epoch 500.

    hashtag
    Publishing several deals in one message

    The PublishStorageDeals message can publish multiple deals in a single message. When a deal is ready to be published, Boost will wait up to PublishMsgPeriod for other deals to be ready before sending the PublishStorageDeals message.

However, once MaxDealsPerPublishMsg deals are ready, Boost will immediately publish all of them.

    For example, if PublishMsgPeriod is 1 hour:

    • At 1:00 pm, deal 1 is ready to publish. Boost will wait until 2:00 pm for other deals to be ready before sending PublishStorageDeals.

    • At 1:30 pm, Deal 2 is ready to publish

    • At 1:45 pm, Deal 3 is ready to publish

    • At 2:00pm, Boost publishes Deals 1, 2, and 3 in a single PublishStorageDeals message.

    If MaxDealsPerPublishMsg is 2, then in the above example, when deal 2 is ready to be published at 1:30, Boost would immediately publish Deals 1 & 2 in a single PublishStorageDeals message. Deal 3 would be published in a subsequent PublishStorageDeals message.
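The examples above correspond to settings like these:

[LotusDealmaking]
  PublishMsgPeriod = "1h0m0s"
  MaxDealsPerPublishMsg = 2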

    triangle-exclamation

If any of the deals in the PublishStorageDeals message fails validation upon execution, or if the start epoch has passed, all deals will fail to be published.

    hashtag
    Using filters for fine-grained storage and retrieval deal acceptance

    Your use case might demand very precise and dynamic control over a combination of deal parameters.

    Boost provides two IPC hooks allowing you to name a command to execute for every deal before the miner accepts it:

    • Filter for storage deals.

    • RetrievalFilter for retrieval deals.

    The executed command receives a JSON representation of the deal parameters on standard input, and upon completion, its exit code is interpreted as:

    • 0: success, proceed with the deal.

    • non-0: failure, reject the deal.

The most trivial filter, rejecting any retrieval deal, would be something like: RetrievalFilter = "/bin/false". /bin/false is a binary that immediately exits with a code of 1.

    This Perl scriptarrow-up-right lets the miner deny specific clients and only accept deals that are set to start relatively soon.

    You can also use a third party content policy framework like CIDgravityarrow-up-right or bitscreen by Murmuration Labs:

    [LotusDealmaking]
      # When enabled, the miner can accept online deals
      #
      # type: bool
      # env var: LOTUS_LOTUSDEALMAKING_CONSIDERONLINESTORAGEDEALS
      #ConsiderOnlineStorageDeals = true
    
      # When enabled, the miner can accept offline deals
      #
      # type: bool
      # env var: LOTUS_LOTUSDEALMAKING_CONSIDEROFFLINESTORAGEDEALS
      #ConsiderOfflineStorageDeals = true
    
      # When enabled, the miner can accept retrieval deals
      #
      # type: bool
      # env var: LOTUS_LOTUSDEALMAKING_CONSIDERONLINERETRIEVALDEALS
      #ConsiderOnlineRetrievalDeals = true
    
      # When enabled, the miner can accept offline retrieval deals
      #
      # type: bool
      # env var: LOTUS_LOTUSDEALMAKING_CONSIDEROFFLINERETRIEVALDEALS
      #ConsiderOfflineRetrievalDeals = true
    
      # When enabled, the miner can accept verified deals
      #
      # type: bool
      # env var: LOTUS_LOTUSDEALMAKING_CONSIDERVERIFIEDSTORAGEDEALS
      #ConsiderVerifiedStorageDeals = true
    
      # When enabled, the miner can accept unverified deals
      #
      # type: bool
      # env var: LOTUS_LOTUSDEALMAKING_CONSIDERUNVERIFIEDSTORAGEDEALS
      #ConsiderUnverifiedStorageDeals = true
    
      # A list of Data CIDs to reject when making deals
      #
      # type: []cid.Cid
      # env var: LOTUS_LOTUSDEALMAKING_PIECECIDBLOCKLIST
      #PieceCidBlocklist = []
    
      # Maximum expected amount of time getting the deal into a sealed sector will take
      # This includes the time the deal will need to get transferred and published
      # before being assigned to a sector
      #
      # type: Duration
      # env var: LOTUS_LOTUSDEALMAKING_EXPECTEDSEALDURATION
      #ExpectedSealDuration = "24h0m0s"
    
      # Maximum amount of time proposed deal StartEpoch can be in future
      #
      # type: Duration
      # env var: LOTUS_LOTUSDEALMAKING_MAXDEALSTARTDELAY
      #MaxDealStartDelay = "336h0m0s"
    
      # When a deal is ready to publish, the amount of time to wait for more
      # deals to be ready to publish before publishing them all as a batch
      #
      # type: Duration
      # env var: LOTUS_LOTUSDEALMAKING_PUBLISHMSGPERIOD
      # PublishMsgPeriod = "40m0s"
    
      # The maximum number of deals to include in a single PublishStorageDeals
      # message
      #
      # type: uint64
      # env var: LOTUS_LOTUSDEALMAKING_MAXDEALSPERPUBLISHMSG
      #MaxDealsPerPublishMsg = 8
    
      # The maximum collateral that the provider will put up against a deal,
      # as a multiplier of the minimum collateral bound
      #
      # type: uint64
      # env var: LOTUS_LOTUSDEALMAKING_MAXPROVIDERCOLLATERALMULTIPLIER
      #MaxProviderCollateralMultiplier = 2
    
      # The maximum allowed disk usage size in bytes of staging deals not yet
      # passed to the sealing node by the markets service. 0 is unlimited.
      #
      # type: int64
      # env var: LOTUS_LOTUSDEALMAKING_MAXSTAGINGDEALSBYTES
      # MaxStagingDealsBytes = 100000000000
    
      # The maximum number of parallel online data transfers for storage deals
      #
      # type: uint64
      # env var: LOTUS_LOTUSDEALMAKING_SIMULTANEOUSTRANSFERSFORSTORAGE
      #SimultaneousTransfersForStorage = 20
    
      # The maximum number of simultaneous data transfers from any single client
      # for storage deals.
      # Unset by default (0), and values higher than SimultaneousTransfersForStorage
      # will have no effect; i.e. the total number of simultaneous data transfers
      # across all storage clients is bound by SimultaneousTransfersForStorage
      # regardless of this number.
      #
      # type: uint64
      # env var: LOTUS_LOTUSDEALMAKING_SIMULTANEOUSTRANSFERSFORSTORAGEPERCLIENT
      #SimultaneousTransfersForStoragePerClient = 0
    
      # The maximum number of parallel online data transfers for retrieval deals
      #
      # type: uint64
      # env var: LOTUS_LOTUSDEALMAKING_SIMULTANEOUSTRANSFERSFORRETRIEVAL
      #SimultaneousTransfersForRetrieval = 20
    
      # Minimum start epoch buffer to give time for sealing of sector with deal.
      #
      # type: uint64
      # env var: LOTUS_LOTUSDEALMAKING_STARTEPOCHSEALINGBUFFER
      #StartEpochSealingBuffer = 480
    
      # A command used for fine-grained evaluation of storage deals
      # see https://lotus.filecoin.io/storage-providers/advanced-configurations/market/#using-filters-for-fine-grained-storage-and-retrieval-deal-acceptance for more details
      #
      # type: string
      # env var: LOTUS_LOTUSDEALMAKING_FILTER
      #Filter = ""
    
      # A command used for fine-grained evaluation of retrieval deals
      # see https://lotus.filecoin.io/storage-providers/advanced-configurations/market/#using-filters-for-fine-grained-storage-and-retrieval-deal-acceptance for more details
      #
      # type: string
      # env var: LOTUS_LOTUSDEALMAKING_RETRIEVALFILTER
      #RetrievalFilter = ""
    
      [LotusDealmaking.RetrievalPricing]
        # env var: LOTUS_LOTUSDEALMAKING_RETRIEVALPRICING_STRATEGY
        #Strategy = "default"
    
        [LotusDealmaking.RetrievalPricing.Default]
          # env var: LOTUS_LOTUSDEALMAKING_RETRIEVALPRICING_DEFAULT_VERIFIEDDEALSFREETRANSFER
          #VerifiedDealsFreeTransfer = true
    
        [LotusDealmaking.RetrievalPricing.External]
          # env var: LOTUS_LOTUSDEALMAKING_RETRIEVALPRICING_EXTERNAL_PATH
          #Path = ""
    # grab filter program
    go get -u -v github.com/Murmuration-Labs/bitscreen
    
    # add it to both filters
    Filter = "/path/to/go/bin/bitscreen"
    RetrievalFilter = "/path/to/go/bin/bitscreen"

    Bitswap Retrieval

    How to configure and use bitswap retrievals in Boost

    booster-bitswap is a binary that runs alongside the boostd process, to serve retrievals over the Bitswap protocol. This feature of boost provides a number of tools for managing a production grade Bitswap retrieval service for a Storage Provider's content.

    circle-info

    There is currently no payment method in booster-bitswap. This endpoint is intended to serve free content.

    hashtag
    Why enable retrievals via bitswap?

    Bitswap retrieval introduces interoperability between IPFS and Filecoin, as it enables clients to retrieve Filecoin data over IPFS. This expands the reach of the Filecoin network considerably, increasing the value proposition for users to store data on the Filecoin network. This benefits the whole community, including SPs. Users will be able to access data directly via IPFS, as well as benefit from retrieval markets (e.g. Saturn) and compute over data projects (e.g. Bacalhau).

    hashtag
    Booster-bitswap modes

    There are two primary "modes" for exposing booster-bitswap to the internet.

1. In private mode the booster-bitswap peer ID is not publicly accessible to the internet. Instead, public Bitswap traffic goes to boostd itself, which then acts as a reverse proxy, forwarding that traffic on to booster-bitswap. This is similar to the way one might configure Nginx as a reverse proxy for an otherwise private web server. Private mode is simpler to set up but may produce greater load on boostd as a protocol proxy.

2. In public mode the public internet firewall must be configured to forward traffic directly to the booster-bitswap instance. boostd is configured to announce the public address of booster-bitswap to the network indexer (the network indexer is the service that clients can query to discover where to retrieve content). This mode offers greater flexibility and performance. You can even set up booster-bitswap to run over a separate internet connection from boostd. However, it might require additional configuration and changes to your overall network infrastructure.

    hashtag
    Demo configuration

You can configure booster-bitswap in the demo mode and familiarise yourself with the configuration. Once you are confident and familiar with the options, please go ahead and configure booster-bitswap for production use.

1. Clone the boost repo and check out the latest stable release

    2. Build the booster-bitswap binary:

    3. Initialize booster-bitswap:

    4. Record the peer ID output by booster-bitswap init -- we will need this peer id later

    5. Collect the boost API Info

    6. Run booster-bitswap

    7. By default, booster-bitswap runs on port 8888. You can use --port to override this behaviour

8. Fetch over bitswap by running

Where peerID is the peer id recorded when you ran booster-bitswap init, and rootCID is the root CID of data known to be stored on your SP.

    hashtag
    Setup booster-bitswap To Serve Retrievals

As described above, booster-bitswap can be configured to serve retrievals in two modes. We recommend using public mode to avoid placing greater load on boostd as a protocol proxy.

    hashtag
    Private Mode

    1. Clone the main branch from the boost repo

    2. Build the booster-bitswap binary:

    3. Initialize booster-bitswap:

    4. Record the peer ID output by booster-bitswap init -- we will need this peer id later

    5. Stop boostd and edit ~/.boost/config.toml to set the peer ID for bitswap

    6. Start boostd service again

    7. Collect the boost API Info

    8. Run booster-bitswap

    circle-info

    You can get a boostd multiaddress by running boostd net listen and using any of the returned addresses

    9. By default, booster-bitswap runs on port 8888. You can use --port to override this behaviour

    10. Try to fetch a payload CID over bitswap to verify your configuration

    hashtag
    Public Mode

    1. Clone the release/booster-bitswap branch from the boost repo

    2. Build the booster-bitswap binary:

    3. Initialize booster-bitswap:

    4. Record the peer ID output by booster-bitswap init -- we will need this peer id later

    5. Stop boostd and edit ~/.boost/config.toml to set the peer ID for bitswap

    circle-info

    The libp2p private key file for booster-bitswap can generally be found at <booster-bitswap repo path>/libp2p.key

    The reason boost needs to know the public multiaddresses and libp2p private key for booster-bitswap is so it can properly announce these records to the network indexer.

    6. Start boostd service again

    7. Collect the boost API Info

    8. Run booster-bitswap

    9. By default, booster-bitswap runs on port 8888. You can use --port to override this behaviour

    10. Try to fetch a payload CID over bitswap to verify your configuration

    hashtag
    Booster-bitswap configuration

    booster-bitswap provides a number of performance and safety tools for managing a production grade bitswap server without overloading your infrastructure.

    hashtag
    Bitswap Server Performance

    Depending on your hardware you may wish to increase or decrease the default parameters for the bitswap server internals. In the following example we are increasing the worker count for various components up to 600. This will utilize more CPU and I/O, but improve the performance of retrievals. See the command line help docs for details on each parameter.
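A sketch of what such an invocation might look like; check booster-bitswap run --help for the exact flag names, which are assumptions here:

booster-bitswap run --api-boost=$BOOST_API_INFO \
  --engine-blockstore-worker-count=600 \
  --engine-task-worker-count=600 \
  --task-worker-count=600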

    hashtag
    BadBits filtering

booster-bitswap is automatically set up to deny all requests for CIDs that are on the BadBits Denylist. The default badbits list can be overridden, or additional badbits lists can be provided to the booster-bitswap instance.

    hashtag
    To override the default badbits list

    hashtag
    To provide additional badbits list

    hashtag
    Request Filtering

    booster-bitswap provides a number of controls for filtering requests and limiting resource usage. These are expressed in a JSON configuration file <booster-bitswap repo>/retrievalconfig.json

    circle-info

You can create a new retrievalconfig.json file if one does not exist

To make changes to the current configuration, you need to edit the retrievalconfig.json file and restart booster-bitswap for the changes to take effect. All configs are optional and absent parameters generally default to no filtering at all for the given parameter.

You can also configure booster-bitswap to fetch your retrieval config from a remote HTTP API, possibly one provided by a third party configuration tool like CIDGravityarrow-up-right. To do this, start booster-bitswap with the --api-filter-endpoint {url} option, where {url} is the HTTP URL for an API serving the above JSON format. Optionally, add --api-filter-auth {authheader} if you need to pass a value for the HTTP Authorization header with your API.

When you set up with an API endpoint, booster-bitswap will update its local configuration from the API every five minutes, so you won't have to restart booster-bitswap to make a change. Please be aware that the remote config will overwrite, rather than merge with, the local config.
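For instance (the URL and token are placeholders):

booster-bitswap run --api-boost=$BOOST_API_INFO \
  --api-filter-endpoint https://config.example.com/retrievalconfig \
  --api-filter-auth "Bearer <token>"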

    hashtag
    Bandwidth Limiting

Limiting bandwidth within booster-bitswap will not provide the optimal user experience. Depending on the individual setup, setting up limitations within the software could have a larger impact on storage provider operations. Therefore, we recommend that storage providers set up their own bandwidth limitations using existing tools.

There are multiple options to set up bandwidth limiting.

    1. At the ISP level - dedicated bandwidth is provided to the node running booster-bitswap.

    2. At the router level - we recommend configuring the bandwidth at the router level as it provides more flexibility and can be updated as needed. To configure the bandwidth on your router, please check with your manufacturer.

    3. Limit the bandwidth using different tools available in Linux. Here are some of the examples of such tools. Please feel free to use any other tools not listed here and open a Github issue to add your example to this page.

    hashtag
    TC

TCarrow-up-right is used to configure Traffic Control in the Linux kernel. There are examples available online detailing how to configure rate limiting using TC.

    You can use the below commands to run a very basic configuration.
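A minimal sketch, assuming the NIC is eth0 and an 8mbit/s cap; adjust rate, burst and latency to your needs:

# apply a token bucket filter to limit egress on eth0
sudo tc qdisc add dev eth0 root tbf rate 8mbit burst 32kbit latency 400ms

# remove the limit again
sudo tc qdisc del dev eth0 root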

    hashtag
    Trickle

Tricklearrow-up-right is a portable lightweight userspace bandwidth shaper that either runs in collaborative mode (together with trickled) or in standalone mode. You can read more about rate limiting with trickle herearrow-up-right. Here's a starting point for configuring trickle to rate limit the booster-bitswap service.
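A minimal sketch, assuming caps of 10,000 KB/s in each direction (note that trickle only works with dynamically linked binaries):

# run booster-bitswap under trickle with upload/download caps in KB/s
trickle -u 10000 -d 10000 booster-bitswap run --api-boost=$BOOST_API_INFO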

    hashtag
    Wondershaper

Another way of controlling network traffic is to limit bandwidth on individual network interface cards (NICs). Wondershaperarrow-up-right is a small Bash script that uses the tc command-line utility in the background to let you regulate the amount of data flowing through a particular NIC. As you can imagine, while you can use wondershaper on a machine with a single NIC, its real advantage is on a machine with multiple NICs. Just like trickle, wondershaper is available in the official repositories of mainstream distributions. To limit network traffic with wondershaper, specify the NIC on which you wish to restrict traffic, along with the download and upload speeds in kilobits per second.

    For example,
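A minimal sketch, assuming the NIC is eth0 and the classic wondershaper argument order (interface, download, upload, in kilobits per second):

sudo wondershaper eth0 4096 1024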

    Serving files with booster-http

    Configuring booster-http to serve blocks and files

With the release v1.7.0-rc1 of booster-http, Storage Providers can now serve blocks and files directly over the HTTP protocol. booster-http now implements an IPFS HTTP gatewayarrow-up-right with a path resolution stylearrow-up-right. This allows clients to download individual IPFS blocksarrow-up-right and car files, and to request uploaded files directly from their browser.

    SPs can take advantage of the ecosystem of tools to manage HTTP traffic, like load balancers and reverse proxies.

    triangle-exclamation

Before proceeding any further, we request you to read the basics of HTTP retrieval configuration. This section is an extension of HTTP retrievals and deals with configuration specific to serving files and raw blocks.

    hashtag
    Configuring what to serve

The booster-http service can be started to serve a specific type of content on the IPFS gateway API.

This allows SPs to run multiple booster-http instances, each serving a specific type of content, such as car files only or raw blocks only.

    hashtag
    Enable serving files

    In the curl request below we appended the query parameter format=raw to the URL to get the raw block data for the file.

    But, if we try to open the file directly in a web browser, with no extra query parameters, we get an error message:

By default booster-http does not serve files in a format that can be read by a web browser. This is to protect Storage Providers from serving content that may be flagged as illicit contentarrow-up-right.

To enable serving files to web browsers, we must pass --serve-files=true to booster-http on startup. Once booster-http is restarted with --serve-files=true, we can open the file directly from a web browser:

    triangle-exclamation

booster-http (and booster-bitswap) automatically filter out known flagged content using the denylist maintained at https://badbits.dwebops.pub/denylist.jsonarrow-up-right

    We can also browse all files in the CAR archive.

    hashtag
    Protecting booster-http with NGINX

SPs must secure their booster-http instance before exposing it to the public. SPs can use any tool available to limit who can download files, the number of requests per second, and the download bandwidth each client can use per second.

Users can follow the example below to use NGINXarrow-up-right as a reverse proxy to secure their booster-http instance. In this section we’ve just scratched the surface of the ways in which nginx can set access limits, rate limits and bandwidth limits. In particular, it’s possible to add limits by request token, or using JWT tokens. The examples in this section are adapted from Deploying NGINX as an API Gatewayarrow-up-right, which goes into more detail.

    By default nginx puts configuration files into /etc/nginx

    The default configuration file is /etc/nginx/sites-available/default

    hashtag
    Setup server block

Set up the nginx server to listen on port 7575 and forward requests to booster-http on port 7777.

    The IPFS gateway serves files from /ipfs. So, we will add a server block for location /ipfs/
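A minimal sketch of such a server block; the ports follow the example above, everything else is illustrative:

server {
    listen 7575 default_server;

    # forward IPFS gateway requests to booster-http
    location /ipfs/ {
        proxy_pass http://127.0.0.1:7777;
    }
}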

    hashtag
    Limiting Access

    Let’s limit access to the IPFS gateway using the standard .htaccess file. We need to set up an .htaccess file with a username and password. Create a user named alice
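For example, using htpasswd from apache2-utils (the file location is an assumption):

sudo mkdir -p /etc/nginx/ipfs-gateway.conf.d
sudo htpasswd -c /etc/nginx/ipfs-gateway.conf.d/.htpasswd alice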

    Include the .htaccess file in the /etc/nginx/sites-available/default
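A sketch of the updated location block:

location /ipfs/ {
    auth_basic "Restricted Server";
    auth_basic_user_file /etc/nginx/ipfs-gateway.conf.d/.htpasswd;
    proxy_pass http://127.0.0.1:7777;
}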

    Now when we open any URL under the path /ipfs we will be presented with a Sign in dialog.

    hashtag
    Rate Limiting

    To prevent users from making too many requests per second, we should add rate limits.

1. Create a file with the rate limiting configuration at /etc/nginx/ipfs-gateway.conf.d/ipfs-gateway.conf

2. Add a request zone limit to the file of 1 request per second, per client IP

3. Include ipfs-gateway.conf in /etc/nginx/sites-available/default and set the response for too many requests to HTTP response code 429

4. If you now click the refresh button in your browser on any path under /ipfs more than once per second, you will see a 429 error page
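
The same check can be scripted. This sketch assumes alice's password and a placeholder CID; the first request should print 200 and the rapid follow-ups 429:

for i in 1 2 3; do
  curl -s -o /dev/null -w "%{http_code}\n" -u alice:<password> "http://localhost:7575/ipfs/<CID>"
done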

    hashtag
    Bandwidth Limiting

    It is also recommended to limit the amount of bandwidth that clients can take up when downloading data from booster-http. This ensures a fair bandwidth distribution to each client and prevents situations where one client ends up choking the booster-http instance.

1. Create a new .htpasswd user called bob

2. Add a mapping from .htpasswd username to bandwidth limit in /etc/nginx/ipfs-gateway.conf.d/ipfs-gateway.conf

3. Add the bandwidth limit to /etc/nginx/sites-available/default

4. To verify bandwidth limiting, use curl to download a file as user alice and then as bob. Note the difference in the Average Dload column (the average download speed).
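
A sketch of that verification with a placeholder CID; per the map above, alice is limited to 10k and bob to 512k:

curl -u alice -o /dev/null "http://localhost:7575/ipfs/<CID>"
curl -u bob -o /dev/null "http://localhost:7575/ipfs/<CID>"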

In public mode, the public internet firewall must be configured to forward traffic directly to the booster-bitswap instance. boostd is configured to announce the public address of booster-bitswap to the network indexer (the service that clients query to discover where to retrieve content). This mode offers greater flexibility and performance. You can even set up booster-bitswap to run over a separate internet connection from boostd. However, it might require additional configuration and changes to your overall network infrastructure.
    git clone https://github.com/filecoin-project/boost.git
    cd boost
    git checkout <release>
    make booster-bitswap
    booster-bitswap init
    export ENV_BOOST_API_INFO=`boostd auth api-info --perm=admin`
    
    export BOOST_API_INFO=`echo $ENV_BOOST_API_INFO | awk '{split($0,a,"="); print a[2]}'`
    booster-bitswap run --api-boost=$BOOST_API_INFO
    booster-bitswap fetch /ip4/127.0.0.1/tcp/8888/p2p/{peerID} {rootCID} outfile.car
    git clone https://github.com/filecoin-project/boost.git
    cd boost
    git checkout <release>
    make booster-bitswap
    booster-bitswap init
    [DealMaking]
  BitswapPeerID = "{peer id for booster bitswap you recorded earlier}"
    export ENV_BOOST_API_INFO=`boostd auth api-info --perm=admin`
    
    export BOOST_API_INFO=`echo $ENV_BOOST_API_INFO | awk '{split($0,a,"="); print a[2]}'`
    booster-bitswap run --api-boost=$BOOST_API_INFO --proxy={boostd multiaddress}
    git clone https://github.com/filecoin-project/boost.git
    cd boost
    git checkout <release>
    make booster-bitswap
    booster-bitswap init
    [DealMaking]
 BitswapPeerID = "{peer id for booster bitswap you recorded earlier}"
     BitswapPublicAddresses = ["/ip4/{booster-bitswap public IP}/tcp/{booster-bitswap public port}"]
     BitswapPrivKeyFile = "{path to libp2p private key file for booster bitswap}"
    export ENV_BOOST_API_INFO=`boostd auth api-info --perm=admin`
    
    export BOOST_API_INFO=`echo $ENV_BOOST_API_INFO | awk '{split($0,a,"="); print a[2]}'`
    booster-bitswap run --api-boost=$BOOST_API_INFO
    booster-bitswap run --api-boost=$BOOST_API_INFO \
      --engine-blockstore-worker-count=600 \
      --engine-task-worker-count=600 \
      --max-outstanding-bytes-per-peer=33554432 \
      --target-message-size=1048576 \
      --task-worker-count=600
    booster-bitswap run --api-boost=$BOOST_API_INFO --badbits-denylists <URL>
    booster-bitswap run --api-boost=$BOOST_API_INFO --badbits-denylists https://badbits.dwebops.pub/denylist.json <URL1> <URL2>
{
  "AllowDenyList": { // list of peers to either deny or allow (denying all others)
    "Type": "allowlist", // "allowlist" or "denylist"
    "PeerIDs": [
      "Qma9T5YraSnpRDZqRR4krcSJabThc8nwZuJV3LercPHufi",
      "QmYyQSo1c1Ym7orWxLYvCrM2EmxFTANf8wXmmE7DWjhx5N"
    ]
  },
  "UnderMaintenance": false, // when set to true, denies all requests
  "StorageProviderLimits": {
    "Bitswap": {
      "SimultaneousRequests": 100, // bitswap block requests served at the same time across peers
      "SimultaneousRequestsPerPeer": 10, // bitswap block requests served at the same time for a single peer
      "MaxBandwidth": "100mb" // human readable size metric, per second
    }
  }
}
booster-bitswap run --api-boost=$BOOST_API_INFO --api-filter-endpoint <URL> --api-filter-auth <OPTIONAL SECURITY HEADERS>
    sudo tc qdisc add dev <network interface> root handle 1: htb
    sudo tc class add dev <network interface> parent 1: classid 1:20 htb rate 100mibit
    sudo tc qdisc add dev <network interface> parent 1:20 handle 20: sfq perturb 10
    sudo tc filter add dev <network interface> parent 1: protocol ip prio 1 basic match 'cmp(u16 at 0 layer transport eq 8888)' flowid 1:20
    [booster-bitswap]
    Priority = <value>
    Time-Smoothing = <value>
    Length-Smoothing = <value>
    wondershaper enp5s0 4096 1024
       --serve-pieces                                           enables serving raw pieces (default: true)
       --serve-blocks                                           serve blocks with the ipfs gateway API (default: true)
       --serve-cars                                             serve CAR files with the ipfs gateway API (default: true)
       --serve-files                                            serve original files (eg jpg, mov) with the ipfs gateway API (default: false)
       --api-filter-endpoint value                              the endpoint to use for fetching a remote retrieval configuration for bitswap requests
   --api-filter-auth value                                  value to pass in the authorization header when sending a request to the API filter endpoint (e.g. 'Basic ~base64 encoded user/pass~')
       --badbits-denylists value [ --badbits-denylists value ]  the endpoints for fetching one or more custom BadBits list instead of the default one at https://badbits.dwebops.pub/denylist.json (default: "https://badbits.dwebops.pub/denylist.json")
       --help, -h                                               show help
    $ curl --output /tmp/museum.jpg "http://localhost:7777/ipfs/bafybeidqindpi4ucx7kmrtnw3woc6jtl7bqvyiokrkpbbuy6gs6trn57tm/vincent/Vincent%20van%20Gogh_files/Caf%C3%A9tafel_met_absint_-_s0186V1962_-_Van_Gogh_Museum.jpg?format=raw"
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 11830  100 11830    0     0   140k      0 --:--:-- --:--:-- --:--:--  175k
    
    $ open /tmp/museum.jpg
    # ipfs gateway config
    server {
            listen 7575 default_server;
            listen [::]:7575 default_server;
    
            location /ipfs/ {
                    proxy_pass http://127.0.0.1:7777;
            }
    }
    $ mkdir /etc/nginx/ipfs-gateway.conf.d
    
    $ htpasswd -c /etc/nginx/ipfs-gateway.conf.d/.htpasswd alice
    New password:
    Re-type new password:
    Adding password for user alice
     # ipfs gateway config
    server {
            listen 7575 default_server;
            listen [::]:7575 default_server;
    
            location /ipfs/ {
            # htaccess authentication
                    auth_basic "Restricted Server";
                    auth_basic_user_file /etc/nginx/ipfs-gateway.conf.d/.htpasswd;
                    proxy_pass http://127.0.0.1:7777;
            }
    }
    limit_req_zone $binary_remote_addr zone=client_ip_10rs:1m rate=1r/s;
    include /etc/nginx/ipfs-gateway.conf.d/ipfs-gateway.conf;
    server {
            listen 7575 default_server;
            listen [::]:7575 default_server;
    
            location /ipfs/ {
                    # htaccess authentication
                    auth_basic "Restricted Server";
                    auth_basic_user_file /etc/nginx/ipfs-gateway.conf.d/.htpasswd;
    
                    limit_req zone=client_ip_10rs;
                    limit_req_status 429;
                proxy_pass http://127.0.0.1:7777;
            }
    }
    
    $ htpasswd /etc/nginx/ipfs-gateway.conf.d/.htpasswd bob
    map $remote_user $bandwidth_limit {
        default  1k;
        "alice"  10k;
        "bob"    512k;
    }
    include /etc/nginx/ipfs-gateway.conf.d/ipfs-gateway.conf;
    server {
            listen 7575 default_server;
            listen [::]:7575 default_server;
    
            location /ipfs/ {
                    # htaccess authentication
                    auth_basic "Restricted Server";
                    auth_basic_user_file /etc/nginx/ipfs-gateway.conf.d/.htpasswd;
    
                    limit_rate $bandwidth_limit;
    
                    limit_req zone=client_ip_10rs;
                    limit_req_status 429;
                proxy_pass http://127.0.0.1:7777;
            }
    }

    Configuration

    Boost configuration options with examples and description.

    hashtag
    Sample config file

circle-info

This guide covers all the configuration used by the boostd process. Some of the configuration parameters found in the config.toml file are not used by Boost and thus are not covered here; these parameters can be ignored.

    hashtag
    Sealer

    Parameter
    Example
    Description

    hashtag
    API

    Parameter
    Example
    Description

    hashtag
    Libp2p

    Parameter
    Example
    Description

    hashtag
    Storage

    Parameter
    Example
    Description

    hashtag
    Dealmaking

The Dealmaking section handles the deal-making configuration for Boost deals, which use the new /fil/storage/mk/1.2.0 protocol.

    hashtag
    Wallets

    Parameter
    Example
    Description

    hashtag
    LotusFees

    Parameter
    Example
    Description

    hashtag
    DAGStore

    Parameter
    Example
    Description

    hashtag
    IndexProvider

    Parameter
    Example
    Description

    Advertising 128-bit long multihashes with the default EntriesCacheCapacity, and EntriesChunkSize means the cache size can grow to 256MiB when full.
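
The arithmetic behind that figure: a 128-bit multihash is 16 bytes, so 1024 cached chunks (EntriesCacheCapacity) × 16384 multihashes per chunk (EntriesChunkSize) × 16 bytes = 268,435,456 bytes = 256 MiB.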

    # The version of the config file (used for migrations)
    #
    # type: int
    # env var: LOTUS__CONFIGVERSION
    ConfigVersion = 4
    
    # The connect string for the sealing RPC API (lotus miner)
    #
    # type: string
    # env var: LOTUS__SEALERAPIINFO
    SealerApiInfo = ""
    
    # The connect string for the sector index RPC API (lotus miner)
    #
    # type: string
    # env var: LOTUS__SECTORINDEXAPIINFO
    SectorIndexApiInfo = ""
    
    
    [API]
      # Binding address for the Lotus API
      #
      # type: string
      # env var: LOTUS_API_LISTENADDRESS
      #ListenAddress = "/ip4/127.0.0.1/tcp/2345/http"
    
      # type: string
      # env var: LOTUS_API_REMOTELISTENADDRESS
      #RemoteListenAddress = ""
    
      # type: Duration
      # env var: LOTUS_API_TIMEOUT
      #Timeout = "30s"
    
    
    [Backup]
      # Note that in case of metadata corruption it might be much harder to recover
      # your node if metadata log is disabled
      #
      # type: bool
      # env var: LOTUS_BACKUP_DISABLEMETADATALOG
      #DisableMetadataLog = false
    
    
    [Libp2p]
      # Binding address for the libp2p host - 0 means random port.
      # Format: multiaddress; see https://multiformats.io/multiaddr/
      #
      # type: []string
      # env var: LOTUS_LIBP2P_LISTENADDRESSES
      # ListenAddresses = ["/ip4/0.0.0.0/tcp/24001", "/ip6/::/tcp/24001"]
    
  # Addresses to explicitly announce to other peers. If not specified,
      # all interface addresses are announced
      # Format: multiaddress
      #
      # type: []string
      # env var: LOTUS_LIBP2P_ANNOUNCEADDRESSES
      # AnnounceAddresses = []
    
      # Addresses to not announce
      # Format: multiaddress
      #
      # type: []string
      # env var: LOTUS_LIBP2P_NOANNOUNCEADDRESSES
      # NoAnnounceAddresses = []
    
      # When not disabled (default), lotus asks NAT devices (e.g., routers), to
      # open up an external port and forward it to the port lotus is running on.
      # When this works (i.e., when your router supports NAT port forwarding),
      # it makes the local lotus node accessible from the public internet
      #
      # type: bool
      # env var: LOTUS_LIBP2P_DISABLENATPORTMAP
      # DisableNatPortMap = false
    
      # ConnMgrLow is the number of connections that the basic connection manager
      # will trim down to.
      #
      # type: uint
      # env var: LOTUS_LIBP2P_CONNMGRLOW
      # ConnMgrLow = 350
    
      # ConnMgrHigh is the number of connections that, when exceeded, will trigger
      # a connection GC operation. Note: protected/recently formed connections don't
      # count towards this limit.
      #
      # type: uint
      # env var: LOTUS_LIBP2P_CONNMGRHIGH
      # ConnMgrHigh = 400
    
      # ConnMgrGrace is a time duration that new connections are immune from being
      # closed by the connection manager.
      #
      # type: Duration
      # env var: LOTUS_LIBP2P_CONNMGRGRACE
      # ConnMgrGrace = "20s"
    
    
    [Pubsub]
      # Run the node in bootstrap-node mode
      #
      # type: bool
      # env var: LOTUS_PUBSUB_BOOTSTRAPPER
      #Bootstrapper = false
    
      # type: string
      # env var: LOTUS_PUBSUB_REMOTETRACER
      #RemoteTracer = ""
    
    
    [Storage]
      # The maximum number of concurrent fetch operations to the storage subsystem
      #
      # type: int
      # env var: LOTUS_STORAGE_PARALLELFETCHLIMIT
      # ParallelFetchLimit = 10
    
    
    [Dealmaking]
      # When enabled, the miner can accept online deals
      #
      # type: bool
      # env var: LOTUS_DEALMAKING_CONSIDERONLINESTORAGEDEALS
      #ConsiderOnlineStorageDeals = true
    
      # When enabled, the miner can accept offline deals
      #
      # type: bool
      # env var: LOTUS_DEALMAKING_CONSIDEROFFLINESTORAGEDEALS
      #ConsiderOfflineStorageDeals = true
    
      # When enabled, the miner can accept retrieval deals
      #
      # type: bool
      # env var: LOTUS_DEALMAKING_CONSIDERONLINERETRIEVALDEALS
      #ConsiderOnlineRetrievalDeals = true
    
      # When enabled, the miner can accept offline retrieval deals
      #
      # type: bool
      # env var: LOTUS_DEALMAKING_CONSIDEROFFLINERETRIEVALDEALS
      #ConsiderOfflineRetrievalDeals = true
    
      # When enabled, the miner can accept verified deals
      #
      # type: bool
      # env var: LOTUS_DEALMAKING_CONSIDERVERIFIEDSTORAGEDEALS
      #ConsiderVerifiedStorageDeals = true
    
      # When enabled, the miner can accept unverified deals
      #
      # type: bool
      # env var: LOTUS_DEALMAKING_CONSIDERUNVERIFIEDSTORAGEDEALS
      #ConsiderUnverifiedStorageDeals = true
    
      # A list of Data CIDs to reject when making deals
      #
      # type: []cid.Cid
      # env var: LOTUS_DEALMAKING_PIECECIDBLOCKLIST
      #PieceCidBlocklist = []
    
      # Maximum expected amount of time getting the deal into a sealed sector will take
      # This includes the time the deal will need to get transferred and published
      # before being assigned to a sector
      #
      # type: Duration
      # env var: LOTUS_DEALMAKING_EXPECTEDSEALDURATION
      #ExpectedSealDuration = "24h0m0s"
    
      # Maximum amount of time proposed deal StartEpoch can be in future
      #
      # type: Duration
      # env var: LOTUS_DEALMAKING_MAXDEALSTARTDELAY
      #MaxDealStartDelay = "336h0m0s"
    
      # The maximum collateral that the provider will put up against a deal,
      # as a multiplier of the minimum collateral bound
      #
      # type: uint64
      # env var: LOTUS_DEALMAKING_MAXPROVIDERCOLLATERALMULTIPLIER
      #MaxProviderCollateralMultiplier = 2
    
      # The maximum allowed disk usage size in bytes of downloaded deal data
      # that has not yet been passed to the sealing node by boost.
      # When the client makes a new deal proposal to download data from a host,
      # boost checks this config value against the sum of:
      # - the amount of data downloaded in the staging area
      # - the amount of data that is queued for download
      # - the amount of data in the proposed deal
      # If the total amount would exceed the limit, boost rejects the deal.
      # Set this value to 0 to indicate there is no limit.
      #
      # type: int64
      # env var: LOTUS_DEALMAKING_MAXSTAGINGDEALSBYTES
      # MaxStagingDealsBytes = 500000000
    
      # The percentage of MaxStagingDealsBytes that is allocated to each host.
      # When the client makes a new deal proposal to download data from a host,
      # boost checks this config value against the sum of:
      # - the amount of data downloaded from the host in the staging area
      # - the amount of data that is queued for download from the host
      # - the amount of data in the proposed deal
      # If the total amount would exceed the limit, boost rejects the deal.
      # Set this value to 0 to indicate there is no limit per host.
      #
      # type: uint64
      # env var: LOTUS_DEALMAKING_MAXSTAGINGDEALSPERCENTPERHOST
      # MaxStagingDealsPercentPerHost = 50
    
      # Minimum start epoch buffer to give time for sealing of sector with deal.
      #
      # type: uint64
      # env var: LOTUS_DEALMAKING_STARTEPOCHSEALINGBUFFER
      #StartEpochSealingBuffer = 480
    
      # The amount of time to keep deal proposal logs for before cleaning them up.
      #
      # type: Duration
      # env var: LOTUS_DEALMAKING_DEALPROPOSALLOGDURATION
      #DealProposalLogDuration = "24h0m0s"
    
      # The amount of time to keep retrieval deal logs for before cleaning them up.
      #
      # type: Duration
      # env var: LOTUS_DEALMAKING_RETRIEVALLOGDURATION
      #RetrievalLogDuration = "24h0m0s"
    
      # A command used for fine-grained evaluation of storage deals
      # see https://boost.filecoin.io/configuration/deal-filters for more details
      #
      # type: string
      # env var: LOTUS_DEALMAKING_FILTER
      #Filter = ""
    
      # A command used for fine-grained evaluation of retrieval deals
      # see https://boost.filecoin.io/configuration/deal-filters for more details
      #
      # type: string
      # env var: LOTUS_DEALMAKING_RETRIEVALFILTER
      #RetrievalFilter = ""
    
      # The maximum amount of time a transfer can take before it fails
      #
      # type: Duration
      # env var: LOTUS_DEALMAKING_MAXTRANSFERDURATION
      #MaxTransferDuration = "24h0m0s"
    
      # Whether to do commp on the Boost node (local) or on the Sealer (remote)
      #
      # type: bool
      # env var: LOTUS_DEALMAKING_REMOTECOMMP
      #RemoteCommp = false
    
      # The maximum number of commp processes to run in parallel on the local
      # boost process
      #
      # type: uint64
      # env var: LOTUS_DEALMAKING_MAXCONCURRENTLOCALCOMMP
      #MaxConcurrentLocalCommp = 1
    
      # The public multi-address for retrieving deals with booster-http.
      # Note: Must be in multiaddr format, eg /dns/foo.com/tcp/443/https
      #
      # type: string
      # env var: LOTUS_DEALMAKING_HTTPRETRIEVALMULTIADDR
      #HTTPRetrievalMultiaddr = ""
    
      # The maximum number of concurrent storage deal HTTP downloads.
      # Note that this is a soft maximum; if some downloads stall,
      # more downloads are allowed to start.
      #
      # type: uint64
      # env var: LOTUS_DEALMAKING_HTTPTRANSFERMAXCONCURRENTDOWNLOADS
      HttpTransferMaxConcurrentDownloads = 5
    
      # The period between checking if downloads have stalled.
      #
      # type: Duration
      # env var: LOTUS_DEALMAKING_HTTPTRANSFERSTALLCHECKPERIOD
      #HttpTransferStallCheckPeriod = "30s"
    
      # The time that can elapse before a download is considered stalled (and
      # another concurrent download is allowed to start).
      #
      # type: Duration
      # env var: LOTUS_DEALMAKING_HTTPTRANSFERSTALLTIMEOUT
      #HttpTransferStallTimeout = "5m0s"
    
  # The peer id used by booster-bitswap. To set, copy the value
      # printed by running 'booster-bitswap init'. If this value is set,
      # Boost will:
      # - listen on bitswap protocols on its own peer id and forward them
      # to booster bitswap
      # - advertise bitswap records to the content indexer
      # - list bitswap in available transports on the retrieval transport protocol
      #
      # type: string
      # env var: LOTUS_DEALMAKING_BITSWAPPEERID
      # BitswapPeerID = ""
    
      # The deal logs older than DealLogDurationDays are deleted from the logsDB
      # to keep the size of logsDB in check. Set the value as "0" to disable log cleanup
      #
      # type: int
      # env var: LOTUS_DEALMAKING_DEALLOGDURATIONDAYS
      #DealLogDurationDays = 30
    
      [Dealmaking.RetrievalPricing]
        # env var: LOTUS_DEALMAKING_RETRIEVALPRICING_STRATEGY
        #Strategy = "default"
    
        [Dealmaking.RetrievalPricing.Default]
          # env var: LOTUS_DEALMAKING_RETRIEVALPRICING_DEFAULT_VERIFIEDDEALSFREETRANSFER
          #VerifiedDealsFreeTransfer = true
    
        [Dealmaking.RetrievalPricing.External]
          # env var: LOTUS_DEALMAKING_RETRIEVALPRICING_EXTERNAL_PATH
          #Path = ""
    
    
    [Wallets]
      # The "owner" address of the miner
      #
      # type: string
      # env var: LOTUS_WALLETS_MINER
      Miner = ""
    
      # The wallet used to send PublishStorageDeals messages.
      # Must be a control or worker address of the miner.
      #
      # type: string
      # env var: LOTUS_WALLETS_PUBLISHSTORAGEDEALS
      PublishStorageDeals = ""
    
      # The wallet used as the source for storage deal collateral
      #
      # type: string
      # env var: LOTUS_WALLETS_DEALCOLLATERAL
      #DealCollateral = ""
    
      # Deprecated: Renamed to DealCollateral
      #
      # type: string
      # env var: LOTUS_WALLETS_PLEDGECOLLATERAL
      PledgeCollateral = ""
    
    
    [Graphql]
      # The port that the graphql server listens on
      #
      # type: uint64
      # env var: LOTUS_GRAPHQL_PORT
      #Port = 8080
    
    [LotusDealmaking]
      # When enabled, the miner can accept online deals
      #
      # type: bool
      # env var: LOTUS_LOTUSDEALMAKING_CONSIDERONLINESTORAGEDEALS
      #ConsiderOnlineStorageDeals = true
    
      # When enabled, the miner can accept offline deals
      #
      # type: bool
      # env var: LOTUS_LOTUSDEALMAKING_CONSIDEROFFLINESTORAGEDEALS
      #ConsiderOfflineStorageDeals = true
    
      # When enabled, the miner can accept retrieval deals
      #
      # type: bool
      # env var: LOTUS_LOTUSDEALMAKING_CONSIDERONLINERETRIEVALDEALS
      #ConsiderOnlineRetrievalDeals = true
    
      # When enabled, the miner can accept offline retrieval deals
      #
      # type: bool
      # env var: LOTUS_LOTUSDEALMAKING_CONSIDEROFFLINERETRIEVALDEALS
      #ConsiderOfflineRetrievalDeals = true
    
      # When enabled, the miner can accept verified deals
      #
      # type: bool
      # env var: LOTUS_LOTUSDEALMAKING_CONSIDERVERIFIEDSTORAGEDEALS
      #ConsiderVerifiedStorageDeals = true
    
      # When enabled, the miner can accept unverified deals
      #
      # type: bool
      # env var: LOTUS_LOTUSDEALMAKING_CONSIDERUNVERIFIEDSTORAGEDEALS
      #ConsiderUnverifiedStorageDeals = true
    
      # A list of Data CIDs to reject when making deals
      #
      # type: []cid.Cid
      # env var: LOTUS_LOTUSDEALMAKING_PIECECIDBLOCKLIST
      #PieceCidBlocklist = []
    
      # Maximum expected amount of time getting the deal into a sealed sector will take
      # This includes the time the deal will need to get transferred and published
      # before being assigned to a sector
      #
      # type: Duration
      # env var: LOTUS_LOTUSDEALMAKING_EXPECTEDSEALDURATION
      #ExpectedSealDuration = "24h0m0s"
    
      # Maximum amount of time proposed deal StartEpoch can be in future
      #
      # type: Duration
      # env var: LOTUS_LOTUSDEALMAKING_MAXDEALSTARTDELAY
      #MaxDealStartDelay = "336h0m0s"
    
      # When a deal is ready to publish, the amount of time to wait for more
      # deals to be ready to publish before publishing them all as a batch
      #
      # type: Duration
      # env var: LOTUS_LOTUSDEALMAKING_PUBLISHMSGPERIOD
      #PublishMsgPeriod = "40m0s"
    
      # The maximum number of deals to include in a single PublishStorageDeals
      # message
      #
      # type: uint64
      # env var: LOTUS_LOTUSDEALMAKING_MAXDEALSPERPUBLISHMSG
      #MaxDealsPerPublishMsg = 8
    
      # The maximum collateral that the provider will put up against a deal,
      # as a multiplier of the minimum collateral bound
      #
      # type: uint64
      # env var: LOTUS_LOTUSDEALMAKING_MAXPROVIDERCOLLATERALMULTIPLIER
      #MaxProviderCollateralMultiplier = 2
    
      # The maximum allowed disk usage size in bytes of staging deals not yet
      # passed to the sealing node by the markets service. 0 is unlimited.
      #
      # type: int64
      # env var: LOTUS_LOTUSDEALMAKING_MAXSTAGINGDEALSBYTES
      MaxStagingDealsBytes = 100000000000
    
      # The maximum number of parallel online data transfers for storage deals
      #
      # type: uint64
      # env var: LOTUS_LOTUSDEALMAKING_SIMULTANEOUSTRANSFERSFORSTORAGE
      #SimultaneousTransfersForStorage = 20
    
      # The maximum number of simultaneous data transfers from any single client
      # for storage deals.
      # Unset by default (0), and values higher than SimultaneousTransfersForStorage
      # will have no effect; i.e. the total number of simultaneous data transfers
      # across all storage clients is bound by SimultaneousTransfersForStorage
      # regardless of this number.
      #
      # type: uint64
      # env var: LOTUS_LOTUSDEALMAKING_SIMULTANEOUSTRANSFERSFORSTORAGEPERCLIENT
      #SimultaneousTransfersForStoragePerClient = 0
    
      # The maximum number of parallel online data transfers for retrieval deals
      #
      # type: uint64
      # env var: LOTUS_LOTUSDEALMAKING_SIMULTANEOUSTRANSFERSFORRETRIEVAL
      #SimultaneousTransfersForRetrieval = 20
    
      # Minimum start epoch buffer to give time for sealing of sector with deal.
      #
      # type: uint64
      # env var: LOTUS_LOTUSDEALMAKING_STARTEPOCHSEALINGBUFFER
      #StartEpochSealingBuffer = 480
    
      # A command used for fine-grained evaluation of storage deals
      # see https://lotus.filecoin.io/storage-providers/advanced-configurations/market/#using-filters-for-fine-grained-storage-and-retrieval-deal-acceptance for more details
      #
      # type: string
      # env var: LOTUS_LOTUSDEALMAKING_FILTER
      Filter = ""
    
      # A command used for fine-grained evaluation of retrieval deals
      # see https://lotus.filecoin.io/storage-providers/advanced-configurations/market/#using-filters-for-fine-grained-storage-and-retrieval-deal-acceptance for more details
      #
      # type: string
      # env var: LOTUS_LOTUSDEALMAKING_RETRIEVALFILTER
      #RetrievalFilter = ""
    
      [LotusDealmaking.RetrievalPricing]
        # env var: LOTUS_LOTUSDEALMAKING_RETRIEVALPRICING_STRATEGY
        #Strategy = "default"
    
        [LotusDealmaking.RetrievalPricing.Default]
          # env var: LOTUS_LOTUSDEALMAKING_RETRIEVALPRICING_DEFAULT_VERIFIEDDEALSFREETRANSFER
          #VerifiedDealsFreeTransfer = true
    
        [LotusDealmaking.RetrievalPricing.External]
          # env var: LOTUS_LOTUSDEALMAKING_RETRIEVALPRICING_EXTERNAL_PATH
          #Path = ""
    
    
    [LotusFees]
      # The maximum fee to pay when sending the PublishStorageDeals message
      #
      # type: types.FIL
      # env var: LOTUS_LOTUSFEES_MAXPUBLISHDEALSFEE
      MaxPublishDealsFee = "0.5 FIL"
    
      # The maximum fee to pay when sending the AddBalance message (used by legacy markets)
      #
      # type: types.FIL
      # env var: LOTUS_LOTUSFEES_MAXMARKETBALANCEADDFEE
      #MaxMarketBalanceAddFee = "0.007 FIL"
    
    
    [DAGStore]
      # Path to the dagstore root directory. This directory contains three
      # subdirectories, which can be symlinked to alternative locations if
      # need be:
      # - ./transients: caches unsealed deals that have been fetched from the
      # storage subsystem for serving retrievals.
      # - ./indices: stores shard indices.
      # - ./datastore: holds the KV store tracking the state of every shard
      # known to the DAG store.
      # Default value: <LOTUS_MARKETS_PATH>/dagstore (split deployment) or
      # <LOTUS_MINER_PATH>/dagstore (monolith deployment)
      #
      # type: string
      # env var: LOTUS_DAGSTORE_ROOTDIR
      #RootDir = ""
    
      # The maximum amount of indexing jobs that can run simultaneously.
      # 0 means unlimited.
      # Default value: 5.
      #
      # type: int
      # env var: LOTUS_DAGSTORE_MAXCONCURRENTINDEX
      #MaxConcurrentIndex = 5
    
      # The maximum amount of unsealed deals that can be fetched simultaneously
      # from the storage subsystem. 0 means unlimited.
      # Default value: 0 (unlimited).
      #
      # type: int
      # env var: LOTUS_DAGSTORE_MAXCONCURRENTREADYFETCHES
      #MaxConcurrentReadyFetches = 0
    
      # The maximum amount of unseals that can be processed simultaneously
      # from the storage subsystem. 0 means unlimited.
      # Default value: 0 (unlimited).
      #
      # type: int
      # env var: LOTUS_DAGSTORE_MAXCONCURRENTUNSEALS
      #MaxConcurrentUnseals = 0
    
      # The maximum number of simultaneous inflight API calls to the storage
      # subsystem.
      # Default value: 100.
      #
      # type: int
      # env var: LOTUS_DAGSTORE_MAXCONCURRENCYSTORAGECALLS
      #MaxConcurrencyStorageCalls = 100
    
      # The time between calls to periodic dagstore GC, in time.Duration string
      # representation, e.g. 1m, 5m, 1h.
      # Default value: 1 minute.
      #
      # type: Duration
      # env var: LOTUS_DAGSTORE_GCINTERVAL
      #GCInterval = "1m0s"
    
    
    [IndexProvider]
      # Enable set whether to enable indexing announcement to the network and expose endpoints that
      # allow indexer nodes to process announcements. Enabled by default.
      #
      # type: bool
      # env var: LOTUS_INDEXPROVIDER_ENABLE
      #Enable = true
    
      # EntriesCacheCapacity sets the maximum capacity to use for caching the indexing advertisement
      # entries. Defaults to 1024 if not specified. The cache is evicted using LRU policy. The
      # maximum storage used by the cache is a factor of EntriesCacheCapacity, EntriesChunkSize and
      # the length of multihashes being advertised. For example, advertising 128-bit long multihashes
      # with the default EntriesCacheCapacity, and EntriesChunkSize means the cache size can grow to
      # 256MiB when full.
      #
      # type: int
      # env var: LOTUS_INDEXPROVIDER_ENTRIESCACHECAPACITY
      #EntriesCacheCapacity = 1024
    
      # EntriesChunkSize sets the maximum number of multihashes to include in a single entries chunk.
      # Defaults to 16384 if not specified. Note that chunks are chained together for indexing
      # advertisements that include more multihashes than the configured EntriesChunkSize.
      #
      # type: int
      # env var: LOTUS_INDEXPROVIDER_ENTRIESCHUNKSIZE
      #EntriesChunkSize = 16384
    
      # TopicName sets the topic name on which the changes to the advertised content are announced.
      # If not explicitly specified, the topic name is automatically inferred from the network name
      # in following format: '/indexer/ingest/<network-name>'
      # Defaults to empty, which implies the topic name is inferred from network name.
      #
      # type: string
      # env var: LOTUS_INDEXPROVIDER_TOPICNAME
      #TopicName = ""
    
      # PurgeCacheOnStart sets whether to clear any cached entries chunks when the provider engine
      # starts. By default, the cache is rehydrated from previously cached entries stored in
      # datastore if any is present.
      #
      # type: bool
      # env var: LOTUS_INDEXPROVIDER_PURGECACHEONSTART
      #PurgeCacheOnStart = false
    [ContractDeals]
      Enabled = true

ConnMgrLow

150

ConnMgrLow is the number of connections that the basic connection manager will trim down to. Too low a number can cause frequent connectivity issues.

    ConnMgrHigh

    200

ConnMgrHigh is the number of connections that, when exceeded, will trigger a connection GC operation. Note: protected/recently formed connections don't count towards this limit. A high limit can cause very high resource utilization.

    ConnMgrGrace

    "20s"

    ConnMgrGrace is a time duration that new connections are immune from being closed by the connection manager.

MaxConcurrentUnseals

0

The maximum amount of unseals that can be processed simultaneously from the storage subsystem. 0 means unlimited.

    MaxConcurrencyStorageCalls

    100

    The maximum number of simultaneous inflight API calls to the storage subsystem.

    GCInterval

    "1m0s"

    The time between calls to periodic dagstore GC, in time.Duration string representation, e.g. 1m, 5m, 1h.

    ""

    TopicName sets the topic name on which the changes to the advertised content are announced. If not explicitly specified, the topic name is automatically inferred from the network name in following format: '/indexer/ingest/'

    PurgeCacheOnStart

false

    PurgeCacheOnStart sets whether to clear any cached entries chunks when the provider engine starts. By default, the cache is rehydrated from previously cached entries stored in datastore if any is present.


    SealerApiInfo

    "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZwdyIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.nbSvy11-tSUbXqo465hZqzTohGDfSdgh28C4irkmE10:/ip4/0.0.0.0/tcp/2345/http"

    Miner API info passed during boost init. Requires admin permissions. Connect string for the miner/sealer instance API endpoint

    SectorIndexApiInfo

    "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZwdyIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.nbSvy11-tSUbXqo465hZqzTohGDfSdgh28C4irkmE10:/ip4/0.0.0.0/tcp/2345/http"

Miner API info passed during boost init. Requires admin permissions. Connect string for the sector index RPC API endpoint on the miner instance

    ListenAddress

    "/ip4/127.0.0.1/tcp/1288/http"

Format: multiaddress. The address the Boost API will listen on. No need to update unless you are planning to make API calls from outside the boost node.

    RemoteListenAddress

    "0.0.0.0:1288"

The address the Boost API can be reached at from outside. No need to update unless you are planning to make API calls from outside the boost node.

    Timeout

    "30s"

    RPC timeout value

    ListenAddresses

["/ip4/209.94.92.3/tcp/24001"]

Binding address for the libp2p host - 0 means a random port. Format: multiaddress.

    AnnounceAddresses

["/ip4/209.94.92.3/tcp/24001"]

Addresses to explicitly announce to other peers. If not specified, all interface addresses are announced. Format: multiaddress. The on-chain address needs to be updated when this address is changed: lotus-miner actor set-addrs /ip4/<YOUR_PUBLIC_IP_ADDRESS>/tcp/24001

    NoAnnounceAddresses

["/ip4/209.94.92.3/tcp/24001"]

Addresses to not announce. Format: multiaddress. Can be used if you want to announce addresses with exceptions.

    ParallelFetchLimit

    10

    Upper bound on how many sectors can be fetched in parallel by the storage system at a time

    Miner

    f032187

    Miner ID

    PublishStorageDeals

f3syzhufifmnbzcznoquhy4mlxo3byetqlamzbeijk62bjpoohrj3wiphkgxe3yjrlh5dmxlca3zqxp3yvd33a (BLS wallet address)

This value is taken during init with --wallet-publish-storage-deals. This wallet is used to send PublishStorageDeals messages. It can be hosted on the remote daemon node and does not need to be present locally.

    DealCollateral

f3syzhufifmnbzcznoquhy4mlxo3byetqlamzbeijk62bjpoohrj3wiphkgxe3yjrlh5dmxlca3zqxp3yvd33a (BLS wallet address)

This value is taken during init with --wallet-deal-collateral. This wallet is used to provide collateral for the deal. Funds from this wallet are moved to the market actor and locked for the duration of the deal. It can be hosted on the remote daemon node and does not need to be present locally.

    MaxPublishDealsFee

    "0.05 FIL"

The maximum fee the user is willing to pay for a PublishStorageDeals message

    MaxMarketBalanceAddFee

    "0.007 FIL"

    The maximum fee to pay when sending the AddBalance message (used by legacy markets)

    RootDir

    Empty

If a custom value is specified, the boost instance will refuse to start. This parameter will be deprecated and removed in the future.

    MaxConcurrentIndex

    5

    The maximum amount of indexing jobs that can run simultaneously. 0 means unlimited.

    MaxConcurrentReadyFetches

    0

    The maximum amount of unsealed deals that can be fetched simultaneously from the storage subsystem. 0 means unlimited.

    Enable

    True/False

Enable or disable the index-provider subsystem

    EntriesCacheCapacity

    5

    EntriesCacheCapacity sets the maximum capacity to use for caching the indexing advertisement entries. Defaults to 1024 if not specified. The cache is evicted using LRU policy. The maximum storage used by the cache is a factor of EntriesCacheCapacity, EntriesChunkSize and the length of multihashes being advertised.

    EntriesChunkSize

    0

    EntriesChunkSize sets the maximum number of multihashes to include in a single entries chunk. Defaults to 16384 if not specified. Note that chunks are chained together for indexing advertisements that include more multihashes than the configured EntriesChunkSize.


    JSON-RPC API

    This page contains all Boost API definitions. Interfaces defined here are exposed as JSON-RPC 2.0 endpoints by the boostd daemon.

    hashtag
    Go JSON-RPC client

To interact with the Boost API node from Go, use the Go JSON-RPC client library as follows:

1. Import the necessary Go module:

2. Create the following script:

3. Run go mod init to set up your go.mod file

4. You should now be able to interact with the Boost API.

    hashtag
    Python JSON-RPC client

    The JSON-RPC API can also be communicated with programmatically from other languages. Here is an example written in Python. Note that the method must be prefixed with Filecoin.
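
As a quick sanity check before writing any code, the endpoint can also be exercised with curl. This sketch assumes the default API listen address from the configuration section and calls the ID method:

curl -X POST "http://127.0.0.1:1288/rpc/v0" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(cat ~/.boost/token)" \
  --data '{"jsonrpc":"2.0","method":"Filecoin.ID","params":[],"id":1}'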

    hashtag
    Groups

    hashtag
    Actor

    hashtag
    ActorSectorSize

    There are not yet any comments for this method.

    Perms: read

    Inputs:

    Response: 34359738368

    hashtag
    Auth

    hashtag
    AuthNew

    Perms: admin

    Inputs:

    Response: "Ynl0ZSBhcnJheQ=="

    hashtag
    AuthVerify

    Perms: read

    Inputs:

    Response:

    hashtag
    Blockstore

    hashtag
    BlockstoreGet

    There are not yet any comments for this method.

    Perms: read

    Inputs:

    Response: "Ynl0ZSBhcnJheQ=="

    hashtag
    BlockstoreGetSize

    Perms: read

    Inputs:

    Response: 123

    hashtag
    BlockstoreHas

    Perms: read

    Inputs:

    Response: true

    hashtag
    Boost

    hashtag
    BoostDagstoreDestroyShard

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    BoostDagstoreGC

    Perms: admin

    Inputs: null

    Response:

    hashtag
    BoostDagstoreInitializeAll

    Perms: admin

    Inputs:

    Response:

    hashtag
    BoostDagstoreInitializeShard

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    BoostDagstoreListShards

    Perms: admin

    Inputs: null

    Response:

    hashtag
    BoostDagstorePiecesContainingMultihash

    Perms: read

    Inputs:

    Response:

    hashtag
    BoostDagstoreRecoverShard

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    BoostDagstoreRegisterShard

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    BoostDeal

    Perms: admin

    Inputs:

    Response:

    hashtag
    BoostDealBySignedProposalCid

    Perms: admin

    Inputs:

    Response:

    hashtag
    BoostDummyDeal

    Perms: admin

    Inputs:

    Response:

    hashtag
    BoostIndexerAnnounceAllDeals

    There are not yet any comments for this method.

    Perms: admin

    Inputs: null

    Response: {}

    hashtag
    BoostMakeDeal

    Perms: write

    Inputs:

    Response:

    hashtag
    BoostOfflineDealWithData

    Perms: admin

    Inputs:

    Response:

    hashtag
    Deals

    hashtag
    DealsConsiderOfflineRetrievalDeals

    Perms: admin

    Inputs: null

    Response: true

    hashtag
    DealsConsiderOfflineStorageDeals

    Perms: admin

    Inputs: null

    Response: true

    hashtag
    DealsConsiderOnlineRetrievalDeals

    Perms: admin

    Inputs: null

    Response: true

    hashtag
    DealsConsiderOnlineStorageDeals

    There are not yet any comments for this method.

    Perms: admin

    Inputs: null

    Response: true

    hashtag
    DealsConsiderUnverifiedStorageDeals

    Perms: admin

    Inputs: null

    Response: true

    hashtag
    DealsConsiderVerifiedStorageDeals

    Perms: admin

    Inputs: null

    Response: true

    hashtag
    DealsPieceCidBlocklist

    Perms: admin

    Inputs: null

    Response:

    hashtag
    DealsSetConsiderOfflineRetrievalDeals

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    DealsSetConsiderOfflineStorageDeals

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    DealsSetConsiderOnlineRetrievalDeals

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    DealsSetConsiderOnlineStorageDeals

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    DealsSetConsiderUnverifiedStorageDeals

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    DealsSetConsiderVerifiedStorageDeals

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    DealsSetPieceCidBlocklist

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    I

    hashtag
    ID

    Perms: read

    Inputs: null

    Response: "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf"

    hashtag
    Log

    hashtag
    LogList

    Perms: write

    Inputs: null

    Response:

    hashtag
    LogSetLevel

    Perms: write

    Inputs:

    Response: {}

    hashtag
    Market

    hashtag
    MarketCancelDataTransfer

    Perms: write

    Inputs:

    Response: {}

    hashtag
    MarketDataTransferUpdates

    Perms: write

    Inputs: null

    Response:

    hashtag
    MarketGetAsk

    Perms: read

    Inputs: null

    Response:

    hashtag
    MarketGetRetrievalAsk

    Perms: read

    Inputs: null

    Response:

    hashtag
    MarketImportDealData

    Perms: write

    Inputs:

    Response: {}

    hashtag
    MarketListDataTransfers

    Perms: write

    Inputs: null

    Response:

    hashtag
    MarketListIncompleteDeals

    Perms: read

    Inputs: null

    Response:

    hashtag
    MarketListRetrievalDeals

    There are not yet any comments for this method.

    Perms: read

    Inputs: null

    Response:

    hashtag
    MarketPendingDeals

    Perms: write

    Inputs: null

    Response:

    hashtag
    MarketRestartDataTransfer

    Perms: write

    Inputs:

    Response: {}

    hashtag
    MarketSetAsk

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    MarketSetRetrievalAsk

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    Net

    hashtag
    NetAddrsListen

    Perms: read

    Inputs: null

    Response:

    hashtag
    NetAgentVersion

    Perms: read

    Inputs:

    Response: "string value"

    hashtag
    NetAutoNatStatus

    Perms: read

    Inputs: null

    Response:

    hashtag
    NetBandwidthStats

    Perms: read

    Inputs: null

    Response:

    hashtag
    NetBandwidthStatsByPeer

    Perms: read

    Inputs: null

    Response:

    hashtag
    NetBandwidthStatsByProtocol

    Perms: read

    Inputs: null

    Response:

    hashtag
    NetBlockAdd

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    NetBlockList

    Perms: read

    Inputs: null

    Response:

    hashtag
    NetBlockRemove

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    NetConnect

    Perms: write

    Inputs:

    Response: {}

    hashtag
    NetConnectedness

    Perms: read

    Inputs:

    Response: 1

    hashtag
    NetDisconnect

    Perms: write

    Inputs:

    Response: {}

    hashtag
    NetFindPeer

    Perms: read

    Inputs:

    Response:

    hashtag
    NetLimit

    Perms: read

    Inputs:

    Response:

    hashtag
    NetPeerInfo

    Perms: read

    Inputs:

    Response:

    hashtag
    NetPeers

    Perms: read

    Inputs: null

    Response:

    hashtag
    NetPing

    Perms: read

    Inputs:

    Response: 60000000000

    hashtag
    NetProtectAdd

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    NetProtectList

    Perms: read

    Inputs: null

    Response:

    hashtag
    NetProtectRemove

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    NetPubsubScores

    Perms: read

    Inputs: null

    Response:

    hashtag
    NetSetLimit

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    NetStat

    Perms: read

    Inputs:

    Response:

    hashtag
    Online

    hashtag
    OnlineBackup

    There are not yet any comments for this method.

    Perms: admin

    Inputs:

    Response: {}

    hashtag
    Pieces

    hashtag
    PiecesGetCIDInfo

    Perms: read

    Inputs:

    Response:

    hashtag
    PiecesGetMaxOffset

    Perms: read

    Inputs:

    Response: 42

    hashtag
    PiecesGetPieceInfo

    Perms: read

    Inputs:

    Response:

    hashtag
    PiecesListCidInfos

    Perms: read

    Inputs: null

    Response:

    hashtag
    PiecesListPieces

    Perms: read

    Inputs: null

    Response:

    hashtag
    Runtime

    hashtag
    RuntimeSubsystems

    RuntimeSubsystems returns the subsystems that are enabled in this instance.

    Perms: read

    Inputs: null

    Response:

    hashtag
    Sectors

    hashtag
    SectorsRefs

    Perms: read

    Inputs: null

    Response:

    go get github.com/filecoin-project/go-jsonrpc
    package main
    
    import (
        "context"
        "fmt"
        "log"
        "net/http"
    
        jsonrpc "github.com/filecoin-project/go-jsonrpc"
        boostapi "github.com/filecoin-project/boost/api"
    )
    
    func main() {
        authToken := "<value found in ~/.boost/token>"
        headers := http.Header{"Authorization": []string{"Bearer " + authToken}}
        addr := "127.0.0.1:1288"
    
        var api boostapi.BoostStruct
        closer, err := jsonrpc.NewMergeClient(context.Background(), "ws://"+addr+"/rpc/v0", "Filecoin", []interface{}{&api.Internal, &api.CommonStruct.Internal}, headers)
        if err != nil {
            log.Fatalf("connecting with boost failed: %s", err)
        }
        defer closer()
    
        // Now you can call any API you're interested in.
        netAddrs, err := api.NetAddrsListen(context.Background())
        if err != nil {
          log.Fatalf("calling netAddrsListen: %s", err)
        }
        fmt.Printf("Boost is listening on: %s", netAddrs.Addrs[0])
    }
    import requests
    import json
    
    def main():
        url = "http://localhost:3051/rpc/v0"
        headers = {'content-type': 'application/json', "Authorization": "Bearer <token>"}
        payload = {
            "method": "Filecoin.BoostOfflineDealWithData",
            "params": [
                "<deal-uuid>",
                "<file-path>",
                True
            ],
            "jsonrpc": "2.0",
            "id": 1,
        }
        response = requests.post(url, data=json.dumps(payload), headers=headers)
        print(response.text)
    
    if __name__ == "__main__":
        main()
    [
      "f01234"
    ]
    [
      [
        "write"
      ]
    ]
    [
      "string value"
    ]
    [
      "write"
    ]
    [
      {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      }
    ]
    [
      {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      }
    ]
    [
      {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      }
    ]
    [
      "string value"
    ]
    [
      {
        "Key": "baga6ea4seaqecmtz7iak33dsfshi627abz4i4665dfuzr3qfs4bmad6dx3iigdq",
        "Success": false,
        "Error": "\u003cerror\u003e"
      }
    ]
    [
      {
        "MaxConcurrency": 123,
        "IncludeSealed": true
      }
    ]
    {
      "Key": "string value",
      "Event": "string value",
      "Success": true,
      "Error": "string value",
      "Total": 123,
      "Current": 123
    }
    [
      "string value"
    ]
    [
      {
        "Key": "baga6ea4seaqecmtz7iak33dsfshi627abz4i4665dfuzr3qfs4bmad6dx3iigdq",
        "State": "ShardStateAvailable",
        "Error": "\u003cerror\u003e"
      }
    ]
    [
      "Bw=="
    ]
    [
      {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      }
    ]
    [
      "string value"
    ]
    [
      "string value"
    ]
    [
      "07070707-0707-0707-0707-070707070707"
    ]
    {
      "DealUuid": "07070707-0707-0707-0707-070707070707",
      "CreatedAt": "0001-01-01T00:00:00Z",
      "ClientDealProposal": {
        "Proposal": {
          "PieceCID": {
            "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
          },
          "PieceSize": 1032,
          "VerifiedDeal": true,
          "Client": "f01234",
          "Provider": "f01234",
          "Label": "",
          "StartEpoch": 10101,
          "EndEpoch": 10101,
          "StoragePricePerEpoch": "0",
          "ProviderCollateral": "0",
          "ClientCollateral": "0"
        },
        "ClientSignature": {
          "Type": 2,
          "Data": "Ynl0ZSBhcnJheQ=="
        }
      },
      "IsOffline": true,
      "CleanupData": true,
      "ClientPeerID": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
      "DealDataRoot": {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      },
      "InboundFilePath": "string value",
      "Transfer": {
        "Type": "string value",
        "ClientID": "string value",
        "Params": "Ynl0ZSBhcnJheQ==",
        "Size": 42
      },
      "ChainDealID": 5432,
      "PublishCID": null,
      "SectorID": 9,
      "Offset": 1032,
      "Length": 1032,
      "Checkpoint": 1,
      "CheckpointAt": "0001-01-01T00:00:00Z",
      "Err": "string value",
      "Retry": "auto",
      "NBytesReceived": 9,
      "FastRetrieval": true,
      "AnnounceToIPNI": true
    }
    [
      {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      }
    ]
    {
      "DealUuid": "07070707-0707-0707-0707-070707070707",
      "CreatedAt": "0001-01-01T00:00:00Z",
      "ClientDealProposal": {
        "Proposal": {
          "PieceCID": {
            "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
          },
          "PieceSize": 1032,
          "VerifiedDeal": true,
          "Client": "f01234",
          "Provider": "f01234",
          "Label": "",
          "StartEpoch": 10101,
          "EndEpoch": 10101,
          "StoragePricePerEpoch": "0",
          "ProviderCollateral": "0",
          "ClientCollateral": "0"
        },
        "ClientSignature": {
          "Type": 2,
          "Data": "Ynl0ZSBhcnJheQ=="
        }
      },
      "IsOffline": true,
      "CleanupData": true,
      "ClientPeerID": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
      "DealDataRoot": {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      },
      "InboundFilePath": "string value",
      "Transfer": {
        "Type": "string value",
        "ClientID": "string value",
        "Params": "Ynl0ZSBhcnJheQ==",
        "Size": 42
      },
      "ChainDealID": 5432,
      "PublishCID": null,
      "SectorID": 9,
      "Offset": 1032,
      "Length": 1032,
      "Checkpoint": 1,
      "CheckpointAt": "0001-01-01T00:00:00Z",
      "Err": "string value",
      "Retry": "auto",
      "NBytesReceived": 9,
      "FastRetrieval": true,
      "AnnounceToIPNI": true
    }
    [
      {
        "DealUUID": "07070707-0707-0707-0707-070707070707",
        "IsOffline": true,
        "ClientDealProposal": {
          "Proposal": {
            "PieceCID": {
              "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
            },
            "PieceSize": 1032,
            "VerifiedDeal": true,
            "Client": "f01234",
            "Provider": "f01234",
            "Label": "",
            "StartEpoch": 10101,
            "EndEpoch": 10101,
            "StoragePricePerEpoch": "0",
            "ProviderCollateral": "0",
            "ClientCollateral": "0"
          },
          "ClientSignature": {
            "Type": 2,
            "Data": "Ynl0ZSBhcnJheQ=="
          }
        },
        "DealDataRoot": {
          "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
        },
        "Transfer": {
          "Type": "string value",
          "ClientID": "string value",
          "Params": "Ynl0ZSBhcnJheQ==",
          "Size": 42
        },
        "RemoveUnsealedCopy": true,
        "SkipIPNIAnnounce": true
      }
    ]
    {
      "Accepted": true,
      "Reason": "string value"
    }
    [
      {
        "DealUUID": "07070707-0707-0707-0707-070707070707",
        "IsOffline": true,
        "ClientDealProposal": {
          "Proposal": {
            "PieceCID": {
              "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
            },
            "PieceSize": 1032,
            "VerifiedDeal": true,
            "Client": "f01234",
            "Provider": "f01234",
            "Label": "",
            "StartEpoch": 10101,
            "EndEpoch": 10101,
            "StoragePricePerEpoch": "0",
            "ProviderCollateral": "0",
            "ClientCollateral": "0"
          },
          "ClientSignature": {
            "Type": 2,
            "Data": "Ynl0ZSBhcnJheQ=="
          }
        },
        "DealDataRoot": {
          "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
        },
        "Transfer": {
          "Type": "string value",
          "ClientID": "string value",
          "Params": "Ynl0ZSBhcnJheQ==",
          "Size": 42
        },
        "RemoveUnsealedCopy": true,
        "SkipIPNIAnnounce": true
      }
    ]
    {
      "Accepted": true,
      "Reason": "string value"
    }
    [
      "07070707-0707-0707-0707-070707070707",
      "string value",
      true
    ]
    {
      "Accepted": true,
      "Reason": "string value"
    }
    [
      {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      }
    ]
    [
      true
    ]
    [
      true
    ]
    [
      true
    ]
    [
      true
    ]
    [
      true
    ]
    [
      true
    ]
    [
      [
        {
          "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
        }
      ]
    ]
    [
      "string value"
    ]
    [
      "string value",
      "string value"
    ]
    [
      3,
      "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
      true
    ]
    {
      "TransferID": 3,
      "Status": 1,
      "BaseCID": {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      },
      "IsInitiator": true,
      "IsSender": true,
      "Voucher": "string value",
      "Message": "string value",
      "OtherPeer": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
      "Transferred": 42,
      "Stages": {
        "Stages": [
          {
            "Name": "string value",
            "Description": "string value",
            "CreatedTime": "0001-01-01T00:00:00Z",
            "UpdatedTime": "0001-01-01T00:00:00Z",
            "Logs": [
              {
                "Log": "string value",
                "UpdatedTime": "0001-01-01T00:00:00Z"
              }
            ]
          }
        ]
      }
    }
    {
      "Ask": {
        "Price": "0",
        "VerifiedPrice": "0",
        "MinPieceSize": 1032,
        "MaxPieceSize": 1032,
        "Miner": "f01234",
        "Timestamp": 10101,
        "Expiry": 10101,
        "SeqNo": 42
      },
      "Signature": {
        "Type": 2,
        "Data": "Ynl0ZSBhcnJheQ=="
      }
    }
    {
      "PricePerByte": "0",
      "UnsealPrice": "0",
      "PaymentInterval": 42,
      "PaymentIntervalIncrease": 42
    }
    [
      {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      },
      "string value"
    ]
    [
      {
        "TransferID": 3,
        "Status": 1,
        "BaseCID": {
          "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
        },
        "IsInitiator": true,
        "IsSender": true,
        "Voucher": "string value",
        "Message": "string value",
        "OtherPeer": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
        "Transferred": 42,
        "Stages": {
          "Stages": [
            {
              "Name": "string value",
              "Description": "string value",
              "CreatedTime": "0001-01-01T00:00:00Z",
              "UpdatedTime": "0001-01-01T00:00:00Z",
              "Logs": [
                {
                  "Log": "string value",
                  "UpdatedTime": "0001-01-01T00:00:00Z"
                }
              ]
            }
          ]
        }
      }
    ]
    [
      {
        "Proposal": {
          "PieceCID": {
            "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
          },
          "PieceSize": 1032,
          "VerifiedDeal": true,
          "Client": "f01234",
          "Provider": "f01234",
          "Label": "",
          "StartEpoch": 10101,
          "EndEpoch": 10101,
          "StoragePricePerEpoch": "0",
          "ProviderCollateral": "0",
          "ClientCollateral": "0"
        },
        "ClientSignature": {
          "Type": 2,
          "Data": "Ynl0ZSBhcnJheQ=="
        },
        "ProposalCid": {
          "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
        },
        "AddFundsCid": null,
        "PublishCid": null,
        "Miner": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
        "Client": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
        "State": 42,
        "PiecePath": ".lotusminer/fstmp123",
        "MetadataPath": ".lotusminer/fstmp123",
        "SlashEpoch": 10101,
        "FastRetrieval": true,
        "Message": "string value",
        "FundsReserved": "0",
        "Ref": {
          "TransferType": "string value",
          "Root": {
            "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
          },
          "PieceCid": null,
          "PieceSize": 1024,
          "RawBlockSize": 42
        },
        "AvailableForRetrieval": true,
        "DealID": 5432,
        "CreationTime": "0001-01-01T00:00:00Z",
        "TransferChannelId": {
          "Initiator": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
          "Responder": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
          "ID": 3
        },
        "SectorNumber": 9,
        "InboundCAR": "string value"
      }
    ]
    [
      {
        "PayloadCID": {
          "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
        },
        "ID": 5,
        "Selector": {
          "Raw": "Ynl0ZSBhcnJheQ=="
        },
        "PieceCID": null,
        "PricePerByte": "0",
        "PaymentInterval": 42,
        "PaymentIntervalIncrease": 42,
        "UnsealPrice": "0",
        "StoreID": 42,
        "ChannelID": {
          "Initiator": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
          "Responder": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
          "ID": 3
        },
        "PieceInfo": {
          "PieceCID": {
            "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
          },
          "Deals": [
            {
              "DealID": 5432,
              "SectorID": 9,
              "Offset": 1032,
              "Length": 1032
            }
          ]
        },
        "Status": 0,
        "Receiver": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
        "TotalSent": 42,
        "FundsReceived": "0",
        "Message": "string value",
        "CurrentInterval": 42,
        "LegacyProtocol": true
      }
    ]
    {
      "Deals": [
        {
          "Proposal": {
            "PieceCID": {
              "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
            },
            "PieceSize": 1032,
            "VerifiedDeal": true,
            "Client": "f01234",
            "Provider": "f01234",
            "Label": "",
            "StartEpoch": 10101,
            "EndEpoch": 10101,
            "StoragePricePerEpoch": "0",
            "ProviderCollateral": "0",
            "ClientCollateral": "0"
          },
          "ClientSignature": {
            "Type": 2,
            "Data": "Ynl0ZSBhcnJheQ=="
          }
        }
      ],
      "PublishPeriodStart": "0001-01-01T00:00:00Z",
      "PublishPeriod": 60000000000
    }
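A note on the encodings used in these samples: duration fields such as PublishPeriod above are Go time.Duration values serialized as integer nanosecond counts, and timestamps such as 0001-01-01T00:00:00Z are Go's zero time, meaning the field is unset. A minimal Go check of the duration value, using nothing beyond the standard library:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Duration fields in these examples are raw nanosecond counts.
    	fmt.Println(time.Duration(60000000000)) // prints "1m0s"
    }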
    [
      3,
      "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
      true
    ]
    [
      "0",
      "0",
      10101,
      1032,
      1032
    ]
    [
      {
        "PricePerByte": "0",
        "UnsealPrice": "0",
        "PaymentInterval": 42,
        "PaymentIntervalIncrease": 42
      }
    ]
    {
      "ID": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
      "Addrs": [
        "/ip4/52.36.61.156/tcp/1347/p2p/12D3KooWFETiESTf1v4PGUvtnxMAcEFMzLZbJGg4tjWfGEimYior"
      ]
    }
    [
      "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf"
    ]
    {
      "Reachability": 1,
      "PublicAddr": "string value"
    }
    {
      "TotalIn": 9,
      "TotalOut": 9,
      "RateIn": 12.3,
      "RateOut": 12.3
    }
    {
      "12D3KooWSXmXLJmBR1M7i9RW9GQPNUhZSzXKzxDHWtAgNuJAbyEJ": {
        "TotalIn": 174000,
        "TotalOut": 12500,
        "RateIn": 100,
        "RateOut": 50
      }
    }
    {
      "/fil/hello/1.0.0": {
        "TotalIn": 174000,
        "TotalOut": 12500,
        "RateIn": 100,
        "RateOut": 50
      }
    }
    [
      {
        "Peers": [
          "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf"
        ],
        "IPAddrs": [
          "string value"
        ],
        "IPSubnets": [
          "string value"
        ]
      }
    ]
    {
      "Peers": [
        "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf"
      ],
      "IPAddrs": [
        "string value"
      ],
      "IPSubnets": [
        "string value"
      ]
    }
    [
      {
        "Peers": [
          "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf"
        ],
        "IPAddrs": [
          "string value"
        ],
        "IPSubnets": [
          "string value"
        ]
      }
    ]
    [
      {
        "ID": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
        "Addrs": [
          "/ip4/52.36.61.156/tcp/1347/p2p/12D3KooWFETiESTf1v4PGUvtnxMAcEFMzLZbJGg4tjWfGEimYior"
        ]
      }
    ]
    [
      "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf"
    ]
    [
      "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf"
    ]
    [
      "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf"
    ]
    {
      "ID": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
      "Addrs": [
        "/ip4/52.36.61.156/tcp/1347/p2p/12D3KooWFETiESTf1v4PGUvtnxMAcEFMzLZbJGg4tjWfGEimYior"
      ]
    }
    [
      "string value"
    ]
    {
      "Memory": 9,
      "Streams": 123,
      "StreamsInbound": 123,
      "StreamsOutbound": 123,
      "Conns": 123,
      "ConnsInbound": 123,
      "ConnsOutbound": 123,
      "FD": 123
    }
    [
      "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf"
    ]
    {
      "ID": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
      "Agent": "string value",
      "Addrs": [
        "string value"
      ],
      "Protocols": [
        "string value"
      ],
      "ConnMgrMeta": {
        "FirstSeen": "0001-01-01T00:00:00Z",
        "Value": 123,
        "Tags": {
          "name": 42
        },
        "Conns": {
          "name": "2021-03-08T22:52:18Z"
        }
      }
    }
    [
      {
        "ID": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
        "Addrs": [
          "/ip4/52.36.61.156/tcp/1347/p2p/12D3KooWFETiESTf1v4PGUvtnxMAcEFMzLZbJGg4tjWfGEimYior"
        ]
      }
    ]
    [
      "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf"
    ]
    [
      [
        "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf"
      ]
    ]
    [
      "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf"
    ]
    [
      [
        "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf"
      ]
    ]
    [
      {
        "ID": "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf",
        "Score": {
          "Score": 12.3,
          "Topics": {
            "/blocks": {
              "TimeInMesh": 60000000000,
              "FirstMessageDeliveries": 122,
              "MeshMessageDeliveries": 1234,
              "InvalidMessageDeliveries": 3
            }
          },
          "AppSpecificScore": 12.3,
          "IPColocationFactor": 12.3,
          "BehaviourPenalty": 12.3
        }
      }
    ]
    [
      "string value",
      {
        "Memory": 9,
        "Streams": 123,
        "StreamsInbound": 123,
        "StreamsOutbound": 123,
        "Conns": 123,
        "ConnsInbound": 123,
        "ConnsOutbound": 123,
        "FD": 123
      }
    ]
    [
      "string value"
    ]
    {
      "System": {
        "NumStreamsInbound": 123,
        "NumStreamsOutbound": 123,
        "NumConnsInbound": 123,
        "NumConnsOutbound": 123,
        "NumFD": 123,
        "Memory": 9
      },
      "Transient": {
        "NumStreamsInbound": 123,
        "NumStreamsOutbound": 123,
        "NumConnsInbound": 123,
        "NumConnsOutbound": 123,
        "NumFD": 123,
        "Memory": 9
      },
      "Services": {
        "abc": {
          "NumStreamsInbound": 1,
          "NumStreamsOutbound": 2,
          "NumConnsInbound": 3,
          "NumConnsOutbound": 4,
          "NumFD": 5,
          "Memory": 123
        }
      },
      "Protocols": {
        "abc": {
          "NumStreamsInbound": 1,
          "NumStreamsOutbound": 2,
          "NumConnsInbound": 3,
          "NumConnsOutbound": 4,
          "NumFD": 5,
          "Memory": 123
        }
      },
      "Peers": {
        "abc": {
          "NumStreamsInbound": 1,
          "NumStreamsOutbound": 2,
          "NumConnsInbound": 3,
          "NumConnsOutbound": 4,
          "NumFD": 5,
          "Memory": 123
        }
      }
    }
    [
      "string value"
    ]
    [
      {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      }
    ]
    {
      "CID": {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      },
      "PieceBlockLocations": [
        {
          "RelOffset": 42,
          "BlockSize": 42,
          "PieceCID": {
            "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
          }
        }
      ]
    }
    [
      {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      }
    ]
    [
      {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      }
    ]
    {
      "PieceCID": {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      },
      "Deals": [
        {
          "DealID": 5432,
          "SectorID": 9,
          "Offset": 1032,
          "Length": 1032
        }
      ]
    }
    [
      {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      }
    ]
    [
      {
        "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
      }
    ]
    [
      "Markets"
    ]
    {
      "98000": [
        {
          "SectorID": 100,
          "Offset": 10485760,
          "Size": 1048576
        }
      ]
    }
Boost's JSON-RPC API exposes the methods listed below. Note that the samples above use auto-generated placeholder values rather than realistic data: "string value" for strings, Ynl0ZSBhcnJheQ== (base64 for "byte array") for byte fields, and big-integer strings such as "0" for attoFIL amounts.

    BlockstoreHas
    BoostDagstoreInitializeAll
    BoostDagstoreInitializeShard
    BoostDagstoreListShards
    BoostDagstorePiecesContainingMultihash
    BoostDagstoreRecoverShard
    BoostDagstoreRegisterShard
    BoostDeal
    BoostDealBySignedProposalCid
    BoostDummyDeal
    BoostIndexerAnnounceAllDeals
    BoostMakeDeal
    BoostOfflineDealWithData
    DealsConsiderOnlineRetrievalDeals
    DealsConsiderOnlineStorageDeals
    DealsConsiderUnverifiedStorageDeals
    DealsConsiderVerifiedStorageDeals
    DealsPieceCidBlocklist
    DealsSetConsiderOfflineRetrievalDeals
    DealsSetConsiderOfflineStorageDeals
    DealsSetConsiderOnlineRetrievalDeals
    DealsSetConsiderOnlineStorageDeals
    DealsSetConsiderUnverifiedStorageDeals
    DealsSetConsiderVerifiedStorageDeals
    DealsSetPieceCidBlocklist
    MarketGetAsk
    MarketGetRetrievalAsk
    MarketImportDealData
    MarketListDataTransfers
    MarketListIncompleteDeals
    MarketListRetrievalDeals
    MarketPendingDeals
    MarketRestartDataTransfer
    MarketSetAsk
    MarketSetRetrievalAsk
    NetAutoNatStatus
    NetBandwidthStats
    NetBandwidthStatsByPeer
    NetBandwidthStatsByProtocol
    NetBlockAdd
    NetBlockList
    NetBlockRemove
    NetConnect
    NetConnectedness
    NetDisconnect
    NetFindPeer
    NetLimit
    NetPeerInfo
    NetPeers
    NetPing
    NetProtectAdd
    NetProtectList
    NetProtectRemove
    NetPubsubScores
    NetSetLimit
    NetStat
    PiecesGetPieceInfo
    PiecesListCidInfos
    PiecesListPieces
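These methods can be invoked directly over HTTP with a standard JSON-RPC 2.0 envelope. The sketch below is a minimal Go client; the endpoint http://127.0.0.1:1288/rpc/v0, the Filecoin. method namespace, and the token location are assumed defaults rather than values taken from this page, so adjust them to your deployment:

    package main

    // Minimal sketch: call one of the Boost JSON-RPC methods listed above.

    import (
    	"bytes"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Read the admin token (assumed to live in the default repo path).
    	home, err := os.UserHomeDir()
    	if err != nil {
    		panic(err)
    	}
    	tok, err := os.ReadFile(filepath.Join(home, ".boost", "token"))
    	if err != nil {
    		panic(err)
    	}

    	// JSON-RPC 2.0 request; "params" is the inputs array shown for the
    	// method in the samples above (empty for this method).
    	body := []byte(`{"jsonrpc":"2.0","method":"Filecoin.BoostDagstoreListShards","params":[],"id":1}`)

    	req, err := http.NewRequest(http.MethodPost, "http://127.0.0.1:1288/rpc/v0", bytes.NewReader(body))
    	if err != nil {
    		panic(err)
    	}
    	req.Header.Set("Content-Type", "application/json")
    	req.Header.Set("Authorization", "Bearer "+string(bytes.TrimSpace(tok)))

    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	// The response body carries either a "result" or an "error" member.
    	out, _ := io.ReadAll(resp.Body)
    	fmt.Println(string(out))
    }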