By default, the Boost daemon repository is located at ~/.boost
It contains the following files:
api: The local multi-address of Boost's libp2p API
boost.db: The sqlite database with all deal metadata
boost.logs.db: The sqlite database with the logs for deals
config.toml: The config file with all of Boost's settings
repo.lock: A lock file created when Boost is running
storage.json: Deprecated (needed by legacy markets)
token: The token used when calling Boost's JSON-RPC endpoints
It has the following directories:
dagstore: Contains indexes of CAR files stored with Boost
datastore: Contains metadata about deals for legacy markets
deal-staging: The directory used by legacy markets for incoming data transfers
incoming: The directory used by Boost for incoming data transfers
journal: Contains journal events (used by legacy markets)
keystore: Contains the secret keys used by libp2p (eg the peer ID)
kvlog: Used by the legacy markets datastore
The hardware requirements for Boost are tied to the sealer part of the Lotus deployment it is attached to.
Depending on how much data you need to onboard, and how many deals you need to make with clients, hardware requirements in terms of CPU and Disk will vary.
A miner will need an 8+ core CPU.
We strongly recommend a CPU model with support for Intel SHA Extensions: AMD since Zen microarchitecture, or Intel since Ice Lake. Lack of SHA Extensions results in a very significant slow down.
The most significant computation that Boost has to do is the Piece CID calculation (also known as Piece Commitment or CommP). When Boost receives data from a client, it calculates the Merkle root out of the hashes of the Piece (padded .car file). The resulting root of the clean binary Merkle tree is the Piece CID.
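To make the clean binary Merkle tree idea concrete, here is a minimal sketch of computing such a root over fixed-size chunks. It is an illustration only: real Piece Commitment computation uses fr32 padding and the SHA-256 variant defined by the Filecoin proofs specification, so this sketch does not produce a real Piece CID.

```python
import hashlib

def merkle_root(chunks):
    """Root of a clean binary Merkle tree over data chunks.

    Illustration only: real CommP uses fr32-padded data and a truncated
    SHA-256 variant as specified by Filecoin proofs.
    """
    layer = [hashlib.sha256(c).digest() for c in chunks]
    # A clean binary tree needs a power-of-two number of leaves
    assert len(layer) & (len(layer) - 1) == 0, "chunk count must be a power of two"
    while len(layer) > 1:
        # Hash each adjacent pair of nodes to build the next layer up
        layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

# Four fixed-size chunks standing in for a (hypothetical) padded piece
root = merkle_root([b"a" * 127, b"b" * 127, b"c" * 127, b"d" * 127])
print(root.hex())
```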
At least 2 GiB of RAM is required.
Boost stores all data received from clients before the Piece CID is calculated and compared against the deal parameters received from clients. Next, deals are published on-chain, and Boost waits for a number of epoch confirmations before passing the data to the Lotus sealing subsystem. This means that, depending on the throughput of your operation, you must have disk space for at least a few staged sectors.
For small deployments, at least 100 GiB of disk space is needed, assuming Boost keeps three 32 GiB sectors before passing them to the sealing subsystem.
We recommend using an NVMe disk for Boost. As the dagstore grows in size, overall performance might degrade on a slow disk.
Boost supports multiple options for data transfer when making storage deals, including HTTP. Clients can host their CAR file on an HTTP server, such as S3, and provide that URL when proposing the storage deal. Once accepted, Boost will automatically fetch the CAR file from the specified URL.
See As a client for more details.
Boost comes with a web interface that can be used to manage deals, watch disk usage, monitor funds, adjust settings and more.
Boost supports the same endpoints as the go-fil-markets package for making storage and retrieval deals, getting the storage and retrieval ask, and getting the status of ongoing deals. This ensures that clients running lotus can make deals with Storage Providers running boost.
Boost comes with a client that can be used to make storage deals, and can be configured to point at a public Filecoin API endpoint. That means clients don't need to run a Filecoin node or sync from chain.
See As a client for details.
This section details how to get started with Boost, whether you are a storage provider or a client.
The Boost source code repository is hosted at github.com/filecoin-project/boost
Boost Version | Lotus Version | Golang Version
---|---|---
v1.5.0 | v1.18.0 | 1.18.x
v1.5.1, v1.5.2, v1.5.3 | v1.18.0, v1.19.0 | 1.18.x
v1.6.0, v1.6.1, v1.6.2-rc1 | v1.20.x | 1.18.x
v1.6.3, v1.6.4 | v1.22.x | 1.18.x
v1.6.2-rc2, v1.7.0-rc1 | v1.21.0-rc1, v1.21.0-rc2 | 1.20.x
v1.7.0, v1.7.1, v1.7.2 | |
v1.7.3, v1.7.4 | v1.23.x | 1.20.x
Please make sure you have installed:
Go - following https://go.dev/learn/
Rust - following https://www.rust-lang.org/tools/install
Node 16.x
Linux / Ubuntu
MacOS
Depending on your architecture, you will want to export additional environment variables:
Please ignore any output or onscreen instructions during the npm build unless there is an error.
To build boost for calibnet, complete the above prerequisites and build using the following commands.
To upgrade an existing Boost installation:
1. Make sure that the Boost daemon is not running. Run the commands below to upgrade the binary.
2. Please ignore any onscreen instructions during the npm build unless there is an error.
3. Start the boost daemon.
Boost stores metadata about deals in a sqlite database in the root directory of the Boost repo.
To open the database use a sqlite client:
sqlite3 boost.db
The database tables are:
Deals: metadata about Boost storage deals (eg deal proposal) and their current state (eg checkpoint)
FundsLogs: log of each change in funds reserved for a deal
FundsTagged: how much FIL is tagged for deal collateral and publish message for a deal
StorageLogs: log of each change in storage reserved for a deal
StorageTagged: how much storage is tagged for a deal
Boost keeps a separate database just for deal logs, so as to make it easier to manage log data separately from deal metadata. The logs database is named boost.logs.db and it has a single table, DealLogs, that stores logs for each deal, indexed by uuid.
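A quick way to explore these tables is Python's built-in sqlite3 module. The sketch below runs against an in-memory stand-in (the Deals columns shown are hypothetical); on a live node you would open a copy of ~/.boost/boost.db instead, to avoid locking the running daemon's database.

```python
import sqlite3

# On a live node you would open a copy of ~/.boost/boost.db, e.g.
#   sqlite3.connect("file:boost.db?mode=ro", uri=True)
# Here we build an in-memory stand-in with a hypothetical schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Deals (ID TEXT, Checkpoint TEXT)")  # hypothetical columns
con.execute("INSERT INTO Deals VALUES ('3f8b', 'Accepted')")

# List the tables, as `.tables` would in the sqlite3 shell
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)

# Count deals grouped by checkpoint state
counts = dict(con.execute(
    "SELECT Checkpoint, COUNT(*) FROM Deals GROUP BY Checkpoint"))
print(counts)
```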
Boost uses the goose tool and library (https://pressly.github.io/goose/) for handling sqlite3 migrations. goose can be installed by following the instructions at https://pressly.github.io/goose/installation/
Migrations in Boost are stored in the /db/migrations directory.
Boost handles database migrations on start-up. If a user is running an older version of Boost, migrations up to the latest version are automatically applied on start-up.
Developers can use goose to inspect and apply migrations using the CLI:
The boostd executable runs as a daemon alongside a lotus node and lotus miner. This daemon replaces the current markets subsystem in the lotus miner. The boost daemon exposes a libp2p interface for storage and retrieval deals. It performs on-chain operations by making API calls to the lotus node. The daemon hands off downloaded data to the lotus miner for sealing via API calls to the lotus miner.
boostd has a web interface for fund management and deal monitoring. The web interface is a React app that consumes a GraphQL interface exposed by the daemon.
The typical flow for a Storage Deal is:
The Client puts funds in escrow with the Storage Market Actor on chain.
The Client uploads a CAR file to a web server.
The Client sends a storage deal proposal to Boost with the URL of the CAR file.
Boost checks that the client has enough funds in escrow to pay for storing the file.
Boost accepts the storage deal proposal.
Boost downloads the CAR file from the web server.
Boost publishes the deal on chain.
The client checks that the deal was successfully published on chain.
Boost exposes a libp2p interface to listen for storage deal proposals from clients. This is similar to the libp2p interface exposed by the lotus market subsystem.
Boost communicates with the lotus node over its JSON-RPC API for on-chain operations like checking client funds and publishing the deal.
Once the deal has been published, Boost hands off the downloaded file to lotus-miner for sealing.
Boost is a tool for Storage Providers to manage data onboarding and retrieval on the Filecoin network. It replaces the go-fil-markets package in lotus with a standalone binary that runs alongside a Lotus daemon and Lotus miner.
Boost exposes libp2p interfaces for making storage and retrieval deals, a web interface for managing storage deals, and a GraphQL interface for accessing and updating real-time deal information.
Boost supports the same libp2p protocols as legacy markets, and adds new versions of the protocols used to propose a storage deal and to check the deal's status.
The client makes a deal proposal over v1.2.0 or v1.2.1 of the Propose Storage Deal Protocol: /fil/storage/mk/1.2.0 or /fil/storage/mk/1.2.1
It is a request / response protocol, where the request and response are CBOR-marshalled.
There are two new fields in the Request of v1.2.1 of the protocol, described in the table below.
Field | Type | Description
---|---|---
The client requests the status of a deal over v1.2.0 of the Storage Deal Status Protocol: /fil/storage/status/1.2.0
It is a request / response protocol, where the request and response are CBOR-marshalled.
The DAG store manages a copy of unsealed deal data stored as CAR files. It maintains indexes over the CAR files to facilitate efficient querying of multihashes.
By default, the dagstore root will be:
$BOOST_PATH/dagstore
The directory structure is as follows:
index
: holds the shard indices.
transients
: holds temporary shard data (unsealed pieces) while they're being indexed.
datastore
: records shard state and metadata so it can survive restarts.
.shard-registration-complete
: marker file that signals that initial migration for legacy markets deals is complete.
.boost-shard-registration-complete
: marker file that signals that initial migration for boost deals is complete.
When you first start your boost process without a dagstore repo, a migration process will register all shards for both legacy and Boost deals in lazy initialization mode. As deals come in, shards are fetched and initialized just in time to serve the retrieval.
For legacy deals, you can monitor the progress of the migration in your log output by grepping for the keyword migrator. Here's example output. Notice the first line, which specifies how many deals will be evaluated (this number includes failed deals that never went on chain, and therefore will not be migrated), and the last lines (which communicate that migration completed successfully):
For Boost deals, you can do the same by grepping for the keyword boost-migrator.
Forcing bulk initialization will become important in the near future, when miners begin publishing indices to the network to advertise content they have, and new retrieval features become available (e.g. automatic shard routing).
Initialization places IO workload on your storage system. You can stop/start this command at your wish/convenience as proving deadlines approach and elapse, to avoid IOPS starvation or competition with window PoSt.
To stop a bulk initialization (see the next paragraph), press Control-C. Shards being initialized at that time will continue in the background, but no more initializations will be performed. The next time you run the command, it will resume from where it left off.
You can force bulk initialization using the boostd dagstore initialize-all command. This command will force initialization of every shard that is still in ShardStateNew state, for both legacy and Boost deals. To control the operation:
You must set a concurrency level through the --concurrency=N flag.
A value of 0 will disable throttling and all shards will be initialized at once. ⚠️ Use with caution!
By default, only unsealed pieces will be indexed, to avoid forcing unsealing jobs. To also index sealed pieces, use the --include-sealed flag.
In our test environments, we found the migration to proceed at a rate of 400-500 shards/deals per second, on the following hardware specs: AMD Ryzen Threadripper 3970X, 256GB DDR4 3200 RAM, Samsung 970 EVO 2TB SSD, RTX3080 10GB GPU.
The DAG store can be configured through the config.toml file of the node that runs the boost subsystem. Refer to the [DAGStore] section. Boost ships with sane defaults:
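As a sketch, a [DAGStore] section with the default values listed in the configuration reference later in this document might look like the fragment below; consult the config.toml generated on your own node for the authoritative defaults.

```toml
[DAGStore]
  # Maximum number of indexing jobs that can run simultaneously (0 = unlimited)
  MaxConcurrentIndex = 5
  # Maximum number of unsealed deals fetched simultaneously from storage (0 = unlimited)
  MaxConcurrentReadyFetches = 0
  # Maximum number of simultaneous inflight API calls to the storage subsystem
  MaxConcurrencyStorageCalls = 100
  # Time between periodic dagstore GC runs
  GCInterval = "1m0s"
```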
Shards can error for various reasons, e.g. if the storage system cannot serve the unsealed CAR for a deal/shard, if the shard index is accidentally deleted, etc.
Boost will automatically try to recover failed shards by triggering a recovery once.
You can view failed shards by using the boostd dagstore list-shards command, and optionally grepping for ShardStateErrored.
The boostd executable contains a dagstore command with several useful subcommands:
boostd dagstore list-shards
boostd dagstore initialize-shard <key>
boostd dagstore initialize-all --concurrency=10
boostd dagstore gc
Refer to the --help texts for more information.
This section describes how to roll back to the Lotus markets service process if you are not happy with boostd.
Before you begin migration from the Lotus markets service process to Boost, make sure you have a backup of your Lotus repository, by following the . You can also do a full backup of the Lotus markets repository directory.
If you haven't made any legacy deals with Boost:
Stop boostd
Run your lotus-miner markets service process as you previously did
If you have made new legacy deals with Boost, and want to migrate them back:
Stop boostd
Copy the dagstore directory from the boost repository to the markets repository.
Export Boost deals datastore keys/values:
lotus-shed market export-datastore --repo <repo> --backup-dir <backup-dir>
Wrote backup file to <backup-dir>/markets.datastore.backup
Import the exported deals datastore keys/values from boost to lotus markets:
lotus-shed market import-datastore --repo <repo> --backup-path <backup-path>
Completed importing from backup file <backup-path>
This section describes how to upgrade your lotus-miner markets service to boostd
If you are running a monolith lotus-miner and have not yet split the markets service into an individual process, follow the steps in .
If you are running a markets service as a separate lotus-miner process:
1. Stop accepting incoming deals
2. Wait for incoming deals to complete
3. Shutdown the markets process
4. Backup the markets repository
5. Backup the markets datastore (in case you decide to roll back from Boost to Lotus) with:
6. Make sure you have a Lotus node and miner running
7. Create and send funds to two new wallets on the lotus node to be used for Boost
Boost currently uses two wallets for storage deals:
The publish storage deals wallet - This wallet pays the gas cost when Boost sends the PublishStorageDeals message.
If you already have a PublishStorageDeal control wallet set up, it can be reused in boost as the PUBLISH_STORAGE_DEALS_WALLET.
The deal collateral wallet - When the Storage Provider accepts a deal, they must put collateral for the deal into escrow. Boost moves funds from this wallet into escrow with the StorageMarketActor.
If you already have a wallet that you want to use as the source of funds for deal collateral, it can be reused in boost as the COLLAT_WALLET.
8. Boost keeps all data in a directory called the repository. By default the repository is at ~/.boost. To use a different location, pass the --boost-repo parameter.
9. Export the environment variables needed for boostd migrate-markets to connect to the lotus daemon and lotus miner.
Export environment variables that point to the API endpoints for the sealing and mining processes. They will be used by the boost node to make JSON-RPC calls to the mining/sealing/proving node.
10. Set the publish storage deals wallet as a control wallet.
Add the value of PUBLISH_STORAGE_DEALS_WALLET to the parameter DealPublishControl in the Address section of the lotus-miner configuration if not present. Restart lotus-miner if the configuration has been updated.
11. Run boostd migrate-markets to initialize the repository and start the migration:
The migrate-markets command:
Initializes a Boost repository
Migrates markets datastore keys to Boost
Storage and retrieval deal metadata
Storage and retrieval ask data
Migrates markets libp2p keys to Boost
Migrates markets config to Boost (libp2p endpoints, settings etc)
Migrates the markets DAG store to Boost
12. Run the boostd service, which will start:
libp2p listeners for storage and retrieval
the JSON RPC API
the graphql interface (used by the react front-end)
the web server for the react front-end
In your firewall you will need to open the ports that libp2p listens on, so that Boost can receive storage and retrieval deals.
Open http://localhost:8080 in your browser.
To access a web UI running on a remote server, you can open an SSH tunnel from your local machine:
The Boost API can be accessed by setting the environment variable BOOST_API_INFO, in the same way as LOTUS_MARKET_INFO.
If you have already split your lotus-miner into a separate markets process (MRA), follow the steps in .
Please note that a monolith miner can only be split into boost (markets) + miner on the same physical machine, as it requires access to the miner repo to migrate the deal metadata.
1. Make sure you have a Lotus node and miner running
2. Create and send funds to two new wallets on the lotus node to be used for Boost
Boost currently uses two wallets for storage deals:
The publish storage deals wallet - This wallet pays the gas cost when Boost sends the PublishStorageDeals message.
If you already have a PublishStorageDeal control wallet set up, it can be reused in boost as the PUBLISH_STORAGE_DEALS_WALLET.
The deal collateral wallet - When the Storage Provider accepts a deal, they must put collateral for the deal into escrow. Boost moves funds from this wallet into escrow with the StorageMarketActor.
If you already have a wallet that you want to use as the source of funds for deal collateral, it can be reused in boost as the COLLAT_WALLET.
3. Set the publish storage deals wallet as a control wallet.
Add the value of PUBLISH_STORAGE_DEALS_WALLET to the parameter DealPublishControl in the Address section of the lotus-miner configuration if not present. Restart lotus-miner if the configuration has been updated.
4. Set up environment variables needed for Boost migration
Export the environment variables needed for boostd migrate-monolith to connect to the lotus daemon and lotus miner.
Export environment variables that point to the API endpoints for the sealing and mining processes. They will be used by the boost node to make JSON-RPC calls to the mining/sealing/proving node.
1. Stop accepting incoming deals
2. Wait for incoming deals to complete
3. Shutdown the lotus-miner
4. Backup the lotus-miner repository
5. Backup the lotus-miner datastore (in case you decide to roll back from Boost to Lotus) with:
6. Set the environment variable LOTUS_FULLNODE_API to allow access to the lotus node API.
Run boostd migrate-monolith to create and initialize the boost repository:
The migrate-monolith command:
Initializes a Boost repository
Migrates markets datastore keys to Boost
Storage and retrieval deal metadata
Storage and retrieval ask data
Migrates markets libp2p keys to Boost
Migrates markets config to Boost (libp2p endpoints, settings etc)
Migrates the markets DAG store to Boost
1. Backup lotus-miner's config.toml
2. Disable the markets subsystem in miner config:
Boost replaces the markets subsystem in the lotus-miner, so we need to disable the subsystem in config:
Under the [Subsystems] section, set EnableMarkets = false
3. Change the miner's libp2p port
Boost replaces the markets subsystem and listens on the same libp2p port, so we need to change the libp2p port that the miner is listening on.
Under the [Libp2p] section, change the port in ListenAddresses.
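Together, the two changes above might look like the fragment below in the miner's config.toml; the listen address shown is only an example.

```toml
# In the lotus-miner config.toml:

[Subsystems]
  # Boost replaces the markets subsystem in the miner
  EnableMarkets = false

[Libp2p]
  # Move the miner off the libp2p port that Boost will now listen on
  # (address and port shown are examples)
  ListenAddresses = ["/ip4/0.0.0.0/tcp/24002"]
```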
Start lotus-miner up again so that Boost can connect to the miner when it starts.
Run the boostd service
The boostd service will start:
libp2p listeners for storage and retrieval
the JSON RPC API
the graphql interface (used by the react front-end)
the web server for the react front-end
Open http://localhost:8080 in your browser.
To access a web UI running on a remote server, you can open an SSH tunnel from your local machine:
The Boost API can be accessed by setting the environment variable BOOST_API_INFO, in the same way as LOTUS_MARKET_INFO.
Once Boost has been split from the monolith miner, it can be moved to another physical or virtual machine by following the steps below.
Copy the boost repo from the original monolith miner machine to the new dedicated boost machine.
Set the environment variable LOTUS_FULLNODE_API
to allow access to the lotus node API.
Open the required port on the firewall on the monolith miner machine to allow connection to lotus-miner API.
Start the boostd process.
Boost configuration options available in UI
By default, the web UI listens on the localhost interface on port 8080. We recommend keeping the UI listening on localhost or some internal IP within your private network to avoid accidentally exposing it to the internet.
To access a web UI listening on the localhost interface of a remote server, you can open an SSH tunnel from your local machine:
Boost configuration options with examples and description.
The Dealmaking section handles deal-making configuration explicitly for Boost deals that use the new /fil/storage/mk/1.2.0 protocol.
Advertising 128-bit long multihashes with the default EntriesCacheCapacity and EntriesChunkSize means the cache size can grow to 256MiB when full.
Boost is introducing a new feature that allows computing commP during the deal on a lotus-worker node. This should reduce the overall resource utilisation on the Boost node.
In order to enable remote commP on a Boost node, update your config.toml:
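As a sketch, the relevant setting lives in the Dealmaking section of config.toml. The key name below is an assumption based on recent Boost releases; verify it against the configuration reference for your Boost version before applying it.

```toml
[Dealmaking]
  # Offload CommP calculation to a lotus-worker instead of the Boost node.
  # Key name assumed; check the config reference for your version.
  RemoteCommp = true
```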
Then restart the Boost node
This page covers all the configuration related to the HTTP transfer limiter in boost.
Boost provides a capability to limit the number of simultaneous HTTP transfers in progress when downloading deal data from clients.
This configuration was introduced in ConfigVersion = 3 of the boost configuration file.
The transferLimiter maintains a queue of transfers with a soft upper limit on the number of concurrent transfers.
To prevent slow or stalled transfers from blocking up the queue there are a couple of mitigations: The queue is ordered such that we
start transferring data for the oldest deal first
prefer to start transfers with peers that don't have any ongoing transfer
once the soft limit is reached, don't allow any new transfers with peers that have existing stalled transfers
Note that peers are distinguished by their host (eg foo.bar:8080) not by libp2p peer ID. For example, if there is
one active transfer with peer A
one pending transfer (peer A)
one pending transfer (peer B)
The algorithm will prefer to start a transfer with peer B rather than peer A. This helps to ensure that slow peers don't block the transfer queue.
The limit on the number of concurrent transfers is soft. Example: if there is a limit of 5 concurrent transfers and there are
three active transfers
two stalled transfers
then two more transfers are permitted to start (as long as they're not with one of the stalled peers)
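The selection rules above can be sketched as a small scheduling function. This is an illustration in Python, not Boost's actual Go implementation: oldest deal first, prefer peers without an ongoing transfer, and once the soft limit is reached never start a transfer with a peer that has a stalled transfer.

```python
def pick_next(pending, active, stalled, soft_limit):
    """Pick the peer for the next transfer to start, per the rules above.

    pending:    list of (deal_creation_time, peer_host) awaiting transfer
    active:     set of peer hosts with an ongoing transfer
    stalled:    set of peer hosts with a stalled transfer
    soft_limit: soft cap on concurrent transfers
    """
    # Sort key: peers without an active transfer first, then oldest deal.
    candidates = [(peer in active, created, peer) for created, peer in pending]
    # Over the soft limit, skip peers that have existing stalled transfers.
    if len(active) >= soft_limit:
        candidates = [c for c in candidates if c[2] not in stalled]
    if not candidates:
        return None
    return min(candidates)[2]

# The example from the text: one active transfer with peer A, one pending
# transfer for peer A and one for peer B -> peer B is preferred.
chosen = pick_next([(1, "A"), (2, "B")], active={"A"}, stalled=set(), soft_limit=5)
print(chosen)
```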
See the Libp2p section of config.toml in the
In your firewall you will need to ensure that the libp2p ports that Boost listens on are open, so that Boost can receive storage and retrieval deals.
Build the boost binary on the new machine by following the step.
Deal proposal request fields (v1.2.0 / v1.2.1):
Field | Type | Description
---|---|---
DealUUID | uuid | A uuid for the deal specified by the client
IsOffline | boolean | Indicates whether the deal is online or offline
ClientDealProposal | ClientDealProposal | Same as
DealDataRoot | cid | The root cid of the CAR file. Same as
Transfer.Type | string | eg "http"
Transfer.ClientID | string | Any id the client wants (useful for matching logs between client and server)
Transfer.Params | byte array | Interpreted according to
Transfer.Size | integer | The size of the data that is sent across the network
SkipIPNIAnnounce (v1.2.1) | boolean | Whether the provider should announce the deal to IPNI or not (default: false)
RemoveUnsealedCopy (v1.2.1) | boolean | Whether the provider should keep an unsealed copy of the deal (default: false)
Deal proposal response fields:
Field | Type | Description
---|---|---
Accepted | boolean | Indicates whether the deal proposal was accepted
Message | string | A message about why the deal proposal was rejected
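Putting the request fields above together, a proposal has roughly the following shape. On the wire it is a CBOR-marshalled structure sent over libp2p; the values below are placeholders for illustration, and ClientDealProposal (a signed on-chain deal proposal) is elided.

```python
import uuid

# Shape of a /fil/storage/mk/1.2.1 deal proposal request (illustrative only;
# the real message is CBOR-marshalled, and ClientDealProposal is a signed
# on-chain deal proposal, elided here).
request = {
    "DealUUID": str(uuid.uuid4()),
    "IsOffline": False,
    "ClientDealProposal": None,      # signed proposal (placeholder)
    "DealDataRoot": "bafy",          # root CID of the CAR file (placeholder)
    "Transfer": {
        "Type": "http",
        "ClientID": "my-transfer-1", # any id, for matching client/server logs
        "Params": b"",               # interpreted according to the transfer type
        "Size": 34359738368,         # bytes sent across the network
    },
    # New in v1.2.1:
    "SkipIPNIAnnounce": False,
    "RemoveUnsealedCopy": False,
}
print(sorted(request))
```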
Storage ask settings:
Parameter | Example | Description
---|---|---
Price / epoch / GiB | 500000000 | Asking price for a deal in attoFIL. This price is per epoch per GiB of data in a deal
Verified Price / epoch / GiB | 500000000 | Asking price for a verified deal in attoFIL. This price is per epoch per GiB of data in a deal
Min Piece Size | 256 | Minimum size of a piece that the storage provider will accept, in bytes
Max Piece Size | 34359738368 | Maximum size of a piece that the storage provider will accept, in bytes
Configuration parameters:
Parameter | Example | Description
---|---|---
SealerApiInfo | "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZwdyIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.nbSvy11-tSUbXqo465hZqzTohGDfSdgh28C4irkmE10:/ip4/0.0.0.0/tcp/2345/http" | Miner API info passed during boost init. Requires admin permissions. Connect string for the miner/sealer instance API endpoint
SectorIndexApiInfo | "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZwdyIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.nbSvy11-tSUbXqo465hZqzTohGDfSdgh28C4irkmE10:/ip4/0.0.0.0/tcp/2345/http" | Miner API info passed during boost init. Requires admin permissions. Connect string for the miner/sealer instance API endpoint
ListenAddress | "/ip4/127.0.0.1/tcp/1288/http" | Format: multiaddress. Address the Boost API will be listening on. No need to update unless you are planning to make API calls from outside the boost node
RemoteListenAddress | "0.0.0.0:1288" | Address the boost API can be reached at from outside. No need to update unless you are planning to make API calls from outside the boost node
Timeout | "30s" | RPC timeout value
ListenAddresses | ["/ip4/209.94.92.3/tcp/24001"] | Format: multiaddress. Binding address for the libp2p host - 0 means random port.
AnnounceAddresses | ["/ip4/209.94.92.3/tcp/24001"] | Format: multiaddress. Addresses to explicitly announce to other peers. If not specified, all interface addresses are announced. The on-chain address needs to be updated when this address is changed: lotus-miner actor set-addrs /ip4/<YOUR_PUBLIC_IP_ADDRESS>/tcp/24001
NoAnnounceAddresses | ["/ip4/209.94.92.3/tcp/24001"] | Format: multiaddress. Addresses to not announce. Can be used if you want to announce addresses with exceptions
ConnMgrLow | 150 | The number of connections that the basic connection manager will trim down to. Too low a number can cause frequent connectivity issues
ConnMgrHigh | 200 | The number of connections that, when exceeded, will trigger a connection GC operation. Note: protected/recently formed connections don't count towards this limit. A high limit can cause very high resource utilization
ConnMgrGrace | "20s" | A time duration that new connections are immune from being closed by the connection manager.
ParallelFetchLimit | 10 | Upper bound on how many sectors can be fetched in parallel by the storage system at a time
Miner | f032187 | Miner ID
PublishStorageDeals | f3syzhufifmnbzcznoquhy4mlxo3byetqlamzbeijk62bjpoohrj3wiphkgxe3yjrlh5dmxlca3zqxp3yvd33a (BLS wallet address) | This value is taken during init with
DealCollateral | f3syzhufifmnbzcznoquhy4mlxo3byetqlamzbeijk62bjpoohrj3wiphkgxe3yjrlh5dmxlca3zqxp3yvd33a (BLS wallet address) | This value is taken during init with
MaxPublishDealsFee | "0.05 FIL" | Maximum fee the user is willing to pay for a PublishDeal message
MaxMarketBalanceAddFee | "0.007 FIL" | The maximum fee to pay when sending the AddBalance message (used by legacy markets)
RootDir | Empty | If a custom value is specified, the boost instance will refuse to start. This will be deprecated and removed in the future.
MaxConcurrentIndex | 5 | The maximum number of indexing jobs that can run simultaneously. 0 means unlimited.
MaxConcurrentReadyFetches | 0 | The maximum number of unsealed deals that can be fetched simultaneously from the storage subsystem. 0 means unlimited.
MaxConcurrentUnseals | 0 | The maximum number of unseals that can be processed simultaneously from the storage subsystem. 0 means unlimited.
MaxConcurrencyStorageCalls | 100 | The maximum number of simultaneous inflight API calls to the storage subsystem.
GCInterval | "1m0s" | The time between calls to periodic dagstore GC, in time.Duration string representation, e.g. 1m, 5m, 1h.
Enable | True/False | Enable or disable the index-provider subsystem
EntriesCacheCapacity | 5 | Sets the maximum capacity to use for caching the indexing advertisement entries. Defaults to 1024 if not specified. The cache is evicted using an LRU policy. The maximum storage used by the cache is a factor of EntriesCacheCapacity, EntriesChunkSize and the length of multihashes being advertised.
EntriesChunkSize | 0 | Sets the maximum number of multihashes to include in a single entries chunk. Defaults to 16384 if not specified. Note that chunks are chained together for indexing advertisements that include more multihashes than the configured EntriesChunkSize.
TopicName | "" | Sets the topic name on which changes to the advertised content are announced. If not explicitly specified, the topic name is automatically inferred from the network name in the following format: '/indexer/ingest/'
PurgeCacheOnStart | 100 | Sets whether to clear any cached entries chunks when the provider engine starts. By default, the cache is rehydrated from previously cached entries stored in the datastore, if any are present.
How to use deal filters
Your use case might demand very precise and dynamic control over a combination of deal parameters.
Lotus provides two IPC hooks allowing you to name a command to execute for every deal before the miner accepts it:
Filter for storage deals.
RetrievalFilter for retrieval deals.
The executed command receives a JSON representation of the deal parameters on standard input, and upon completion its exit code is interpreted as:
0: success, proceed with the deal.
non-0: failure, reject the deal.
The most trivial filter, rejecting any retrieval deal, would be something like: RetrievalFilter = "/bin/false". /bin/false is a binary that immediately exits with a code of 1.
This Perl script lets the miner deny specific clients and only accept deals that are set to start relatively soon.
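Since the Perl script itself is not reproduced here, a minimal filter in the same spirit can be sketched in Python. The JSON field names (Proposal.Client, Proposal.StartEpoch) and the denied addresses are assumptions about the payload; inspect the JSON your node actually pipes in before relying on them.

```python
import json
import sys

DENIED_CLIENTS = {"f1denyme..."}   # hypothetical client addresses to reject
MAX_EPOCHS_AHEAD = 2880 * 7        # only accept deals starting within ~7 days

def accept(deal: dict, current_epoch: int) -> bool:
    """Deny specific clients; require the deal to start relatively soon."""
    proposal = deal.get("Proposal", {})
    if proposal.get("Client") in DENIED_CLIENTS:
        return False
    return proposal.get("StartEpoch", 0) <= current_epoch + MAX_EPOCHS_AHEAD

# In a real filter you would read the JSON from stdin and signal the
# decision through the exit code (0 accepts, non-zero rejects):
#   deal = json.load(sys.stdin)
#   sys.exit(0 if accept(deal, current_epoch) else 1)
sample = {"Proposal": {"Client": "f1okclient", "StartEpoch": 3000}}
print(accept(sample, current_epoch=1000))
```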
You can also use a third party content policy framework like CIDgravity or bitscreen by Murmuration Labs:
Boost comes with a client executable, boost
, that can be used to send a deal proposal to a Boost server.
The client is intentionally minimal meant for developer testing. It is not a full featured client and is not intended to be so. It does not require a daemon process, and can be pointed at any public Filecoin API for on-chain operations. This means that users of the client do not need to run a Filecoin node that syncs the chain.
There are a number of public Filecoin APIs run by various organisations, such as Infura and Glif. For testing purposes you can try:
export FULLNODE_API_INFO=https://api.node.glif.io
The init
command
Creates a Boost client repository (at ~/.boost-client
by default)
Generates a libp2p peer ID key
Generates a wallet for on-chain operations and outputs the wallet address
To make deals you will need to: a) add funds to the wallet, and b) add funds to the market actor for that wallet address.
Currently, we don't distribute binaries, so you will have to build from source.
When a storage provider accepts the deal, you should see output of the command similar to:
You can check the deal status
with the following command:
Step by step guide to various Boost tasks
How to backup and restore Boost
Boost now supports both online and offline backups. The backup command will output a backup directory containing the following files.
metadata
- contains backup of leveldb
boostd.db
- backup of deals database
keystore
- directory containing libp2p keys
token
- API token
config
- directory containing all config files and the config.toml link
storage.json
- file containing storage details
The backup does not include the deal logs or the dagstore.
You can take an online backup with the command below.
Only one instance of the online backup can run at a time; you might see a locking error if another backup is already running.
Shut down boostd
before taking a backup
Take a backup using the command line
The Boost offline backup does not include the dagstore; you can copy the dagstore directory to a backup location manually. The dagstore can be reinitialized if there is no backup.
Make sure that --boost-repo
flag is set if you wish to restore to a custom location. Otherwise, it will be restored to the ~/.boost directory.
Restore the boost repo using the command line
Once the restore is complete, the dagstore can be manually copied into the boost repo to restore it.
Advanced configurations you can tune to optimize your legacy deal onboarding
This section controls parameters for making storage and retrieval deals:
ExpectedSealDuration
is an estimate of how long sealing will take and is used to reject deals whose start epoch might be earlier than the expected completion of sealing. It can be estimated by benchmarking or by pledging a sector.
The final value of ExpectedSealDuration
should equal (TIME_TO_SEAL_A_SECTOR + WaitDealsDelay) * 1.5
. This equation ensures that the miner does not commit to having the sector sealed too soon.
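As a worked example of the formula above (with illustrative numbers): suppose benchmarking shows that sealing a sector takes 24 hours and WaitDealsDelay is 6 hours.

```python
# Worked example of ExpectedSealDuration = (TIME_TO_SEAL_A_SECTOR + WaitDealsDelay) * 1.5
# The hour values below are illustrative assumptions, not recommendations.
time_to_seal_hours = 24
wait_deals_delay_hours = 6

expected_seal_duration_hours = (time_to_seal_hours + wait_deals_delay_hours) * 1.5
print(expected_seal_duration_hours)  # 45.0
```

The 1.5 multiplier builds in a safety margin so that a slower-than-benchmarked seal does not cause the miner to miss a deal's start epoch.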
StartEpochSealingBuffer
allows lotus-miner
to seal a sector before a certain epoch. For example: if the current epoch is 1000 and a deal within a sector must start on epoch 1500, then lotus-miner
must wait until the current epoch is 1500 before it can start sealing that sector. However, if Boost sets StartEpochSealingBuffer
to 500, the lotus-miner
can start sealing the sector at epoch 1000.
If there are multiple deals in a sector, the deal with a start time closest to the current epoch is what StartEpochSealingBuffer will be based on. So, if the sector in our example has three deals that start on epochs 1000, 1200, and 1400, then lotus-miner can start sealing the sector at epoch 500.
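The buffer arithmetic from the example above can be sketched as:

```python
# Sealing may begin StartEpochSealingBuffer epochs before the earliest
# deal start epoch in the sector (epochs here match the example above).
deal_start_epochs = [1000, 1200, 1400]
start_epoch_sealing_buffer = 500

earliest_sealing_epoch = min(deal_start_epochs) - start_epoch_sealing_buffer
print(earliest_sealing_epoch)  # 500
```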
The PublishStorageDeals
message can publish multiple deals in a single message. When a deal is ready to be published, Boost will wait up to PublishMsgPeriod
for other deals to be ready before sending the PublishStorageDeals
message.
However, once MaxDealsPerPublishMsg
deals are ready, Boost will immediately publish all of them.
For example, if PublishMsgPeriod
is 1 hour:
At 1:00 pm, deal 1 is ready to publish. Boost will wait until 2:00 pm for other deals to be ready before sending PublishStorageDeals
.
At 1:30 pm, Deal 2 is ready to publish
At 1:45 pm, Deal 3 is ready to publish
At 2:00 pm, Boost publishes Deals 1, 2, and 3 in a single PublishStorageDeals
message.
If MaxDealsPerPublishMsg
is 2, then in the above example, when deal 2 is ready to be published at 1:30, Boost would immediately publish Deals 1 & 2 in a single PublishStorageDeals
message. Deal 3 would be published in a subsequent PublishStorageDeals
message.
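As a sketch, the two settings might look like this in config.toml. The section name [LotusDealmaking] and the exact value formats are assumptions; check your generated config.toml for the authoritative names:

```toml
# Hypothetical sketch -- verify the section and key names in your own config.toml.
[LotusDealmaking]
  # Wait up to 1 hour for more deals before sending PublishStorageDeals...
  PublishMsgPeriod = "1h0m0s"
  # ...but publish immediately once 8 deals are ready.
  MaxDealsPerPublishMsg = 8
```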
If any of the deals in the PublishStorageDeals
fails validation upon execution, or if the start epoch has passed, all deals will fail to be published.
Your use case might demand very precise and dynamic control over a combination of deal parameters.
Boost provides two IPC hooks allowing you to name a command to execute for every deal before the miner accepts it:
Filter
for storage deals.
RetrievalFilter
for retrieval deals.
The executed command receives a JSON representation of the deal parameters on standard input, and upon completion, its exit code is interpreted as:
0
: success, proceed with the deal.
non-0
: failure, reject the deal.
The most trivial filter, rejecting any retrieval deal, would be something like: RetrievalFilter = "/bin/false". /bin/false is a binary that immediately exits with exit code 1.
This Perl script lets the miner deny specific clients and only accept deals that are set to start relatively soon.
You can also use a third party content policy framework like CIDgravity or bitscreen
by Murmuration Labs:
If you are already running a standalone markets process, follow the guide at Migrate a Lotus markets service process to Boost
If you are already running a monolith lotus-miner instance, follow the guide at Migrate a monolith lotus-miner to Boost
1. Make sure you have a Lotus node and miner running
2. Create and send funds to two new wallets on the lotus node to be used for Boost
Boost currently uses two wallets for storage deals:
The publish storage deals wallet - This wallet pays the gas cost when Boost sends the PublishStorageDeals
message.
The deal collateral wallet - When the Storage Provider accepts a deal, they must put collateral for the deal into escrow. Boost moves funds from this wallet into escrow with the StorageMarketActor
.
3. Set the publish storage deals wallet as a control wallet.
4. Create and initialize the Boost repository
If you are already running a Lotus markets service process, you should
run boostd migrate
instead of boostd init
See section Migrate a Lotus markets service process to Boost for more details.
Boost keeps all data in a directory called the repository. By default the repository is at ~/.boost
. To use a different location pass the --boost-repo
parameter (must precede any particular command verb, e.g. boostd --boost-repo=/path init
).
Export the environment variables needed for boostd init
to connect to the lotus daemon and lotus miner.
Export environment variables that point to the API endpoints for the sealing and mining processes. They will be used by the boost
node to make JSON-RPC calls to the mining/sealing/proving
node.
Run boostd init
to create and initialize the repository:
--api-sealer
is the API info for the lotus-miner instance that does sealing
--api-sector-index
is the API info for the lotus-miner instance that provides storage
--max-staging-deals-bytes
is the maximum amount of storage to be used for downloaded files (once the limit is reached Boost will reject subsequent incoming deals)
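Putting the flags above together, the init invocation looks roughly like the following sketch. The wallet flags (--wallet-publish-storage-deals, --wallet-deal-collateral) are assumptions based on the two wallets created earlier; consult boostd init --help for the authoritative flag set, and substitute your own values for the placeholders:

```shell
boostd init \
  --api-sealer=<SEALER_API_INFO> \
  --api-sector-index=<SECTOR_INDEX_API_INFO> \
  --wallet-publish-storage-deals=<PUBLISH_WALLET_ADDRESS> \
  --wallet-deal-collateral=<COLLATERAL_WALLET_ADDRESS> \
  --max-staging-deals-bytes=50000000000
```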
5. Update ulimit
file descriptor limit if necessary. Boost deals will fail if the file descriptor limit for the process is not set high enough. This limit can be raised temporarily before starting the Boost process by running the command ulimit -n 1048576
. We recommend setting it permanently by following the Permanently Setting Your ULIMIT System Value guide.
6. Make sure that the correct <PEER_ID> and <MULTIADDR> for your SP are set on chain, given that boostd init
generates a new identity. Use the following commands to update the values on chain:
<MULTIADDR> should be the same as the ListenAddresses
you set in the Libp2p
section of the config.toml of Boost
<PEER_ID> can be found in the output of boostd net id
command
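A sketch of the on-chain update, assuming the actor set-peer-id and actor set-addrs subcommands available in recent lotus-miner releases; substitute your own values:

```shell
# Register the new peer ID and multiaddr on chain (placeholders shown).
lotus-miner actor set-peer-id <PEER_ID>
lotus-miner actor set-addrs <MULTIADDR>
```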
7. Run the boostd
service, which will start:
libp2p listeners for storage and retrieval
the JSON RPC API
the graphql interface (used by the react front-end)
the web server for the react front-end
In your firewall you will need to open the ports that libp2p listens on, so that Boost can receive storage and retrieval deals.
See the Libp2p
section of config.toml
in the Repository
When you build boostd
using make build
the React app is also built as part of the process, so you can skip this section.
The following steps are needed only if you are building the binary and the React app separately.
Build the React frontend
Open the Web UI
Open http://localhost:8080 in your browser.
To access a web UI running on a remote server, you can open an SSH tunnel from your local machine:
The Boost API can be accessed by setting the BOOST_API_INFO environment variable, in the same way as LOTUS_MARKET_INFO.
You can also directly evaluate the boostd auth
command with:
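For example, assuming boostd supports the lotus-style auth api-info subcommand (an assumption; check boostd auth --help), the output can be evaluated directly:

```shell
# Hypothetical sketch: emit and evaluate the API info in one step.
eval $(boostd auth api-info --perm=admin)
```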
Configure to publish IPNI announcements over HTTP
IndexProvider.HttpPublisher.AnnounceOverHttp
must be set to true
to enable HTTP announcements. Once HTTP announcements are enabled, the local index provider will continue to announce over libp2p gossipsub as well as over HTTP to the specified indexers.
The advertisements are sent to the indexer nodes defined in DirectAnnounceURLs. You can specify more than one URL to announce to multiple indexer nodes.
Once an IPNI node starts processing the advertisements, it will reach out to the Boost node to fetch the data. Thus, the Boost node needs to specify a public IP and port that the indexer node can use to query for data.
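A sketch of such a configuration; the exact nesting under [IndexProvider] is an assumption, and cid.contact is used as an example public IPNI instance — verify the key names in your generated config.toml:

```toml
# Hypothetical sketch -- check your config.toml for the authoritative layout.
[IndexProvider]
  [IndexProvider.HttpPublisher]
    # Announce over HTTP in addition to libp2p gossipsub.
    AnnounceOverHttp = true
    # One or more indexer ingest endpoints to announce to.
    DirectAnnounceURLs = ["https://cid.contact/ingest/announce"]
```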
This tutorial goes through all the steps required to make a storage deal with Boost on Filecoin.
First, you need to initialise a new Boost client and also set the endpoint for a public Filecoin node. In this example we are using https://glif.io
The init
command will output your new wallet address, and warn you that the market actor is not initialised.
Then you need to send funds to the wallet, and add funds to the market actor (in the example below we are adding 1 FIL
).
You can use the boostx
utilities to add funds to the market actor:
You can confirm that the market actor has funds by running boost init
again.
After that you need to generate a car
file for data you want to store on Filecoin, and note down its payload-cid.
We recommend using go-car
CLI to generate the car file.
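For example, with the go-car CLI installed (go install github.com/ipld/go-car/cmd/car@latest), packing a directory might look like the following sketch; subcommand names are from the go-car README and worth double-checking against car --help:

```shell
# Pack a local directory into a CAR file.
car create -f my-data.car my-data/
# Print the root CID of the CAR -- this is the payload CID to note down.
car root my-data.car
```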
Then you need to calculate the commp
and piece size
for the generated car
file:
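Assuming the boostx utility shipped with Boost provides a commp subcommand (check boostx --help for the exact name and output), this step might look like:

```shell
# Compute the piece commitment (CommP) and piece size for the CAR file.
boostx commp ./my-data.car
```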
Place the generated car
file on a public HTTP server, so that a storage provider can later fetch it.
Finally, trigger an online storage deal with a given storage provider:
Storage providers might demand very precise and dynamic control over a combination of deal parameters.
Boost, similarly to Lotus, provides two IPC hooks allowing you to name a command to execute for every deal before the storage provider accepts it:
Filter
for storage deals.
RetrievalFilter
for retrieval deals.
The executed command receives a JSON representation of the deal parameters, as well as the current state of the sealing pipeline, on standard input, and upon completion, its exit code is interpreted as:
0
: success, proceed with the deal.
non-0
: failure, reject the deal.
The most trivial filter, rejecting any retrieval deal, would be something like: RetrievalFilter = "/bin/false". /bin/false is a binary that immediately exits with exit code 1.
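The exit-code convention can be demonstrated without any real filter logic: the filter is just an executable that reads JSON on stdin, and its exit status decides the deal.

```shell
# Demonstration of the exit-code convention (not a real filter).
# /bin/true exits 0 -> the deal would be accepted.
# /bin/false exits 1 -> the deal would be rejected.
echo '{"IsOffline": false}' | /bin/true  && echo "deal accepted"
echo '{"IsOffline": false}' | /bin/false || echo "deal rejected"
```

A real filter would parse the JSON from stdin and exit 0 or non-zero based on its policy.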
This Perl script lets the miner deny specific clients and only accept deals that are set to start relatively soon.
You can also use a third party content policy framework like bitscreen
by Murmuration Labs, or CID gravity:
Here is a sample JSON representation of the input sent to the deal filter:
DealUUID | uuid | The uuid of the deal |
Signature | A signature over the uuid with the client's wallet |
DealUUID | uuid | The uuid of the deal |
Error | string | Non-empty if there's an error getting the deal status |
IsOffline | boolean | Indicates whether the deal is online or offline |
TransferSize | integer | The total size of the transfer in bytes |
NBytesReceived | integer | The number of bytes that have been downloaded |
DealStatus.Error | string | Non-empty if the deal has failed |
DealStatus.Status | string |
DealStatus.Proposal | DealProposal |
SignedProposalCid | cid | cid of the client deal proposal + signature |
PublishCid | cid | The cid of the publish message, if the deal has been published |
ChainDealID | integer | The ID of the deal on chain, if it's been published |
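An illustrative deal-status response assembled from the fields in the table above; all values are hypothetical placeholders:

```json
{
  "DealUUID": "a1b2c3d4-0000-0000-0000-000000000000",
  "Error": "",
  "IsOffline": false,
  "TransferSize": 1048576,
  "NBytesReceived": 524288,
  "DealStatus": {
    "Error": "",
    "Status": "Transferring",
    "Proposal": {}
  },
  "SignedProposalCid": "bafy...",
  "PublishCid": null,
  "ChainDealID": 0
}
```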
This section describes how to upgrade your lotus-miner markets service to boostd, as well as how to roll back if you are not happy with boostd.
A storage provider can run lotus as a monolith, where everything is handled by a single lotus-miner process, or separate the mining and market subsystems onto different machines.
Boost supports migration from both a monolith and a split market/miner setup. You can follow the guides below to migrate to Boost.
Please note that Boost uses a SQLite database for deal metadata and logs. Once Boost has been enabled, new deals cannot be rolled back to the Lotus markets. If you decide to roll back after making Boost deals, you will lose all the metadata for the deals made with Boost. However, this has no impact on the sealed data itself.
The new inspect page in the Boost UI helps with debugging retrieval problems. It allows the user to check the following using a payload CID or piece CID:
Verify if the piece has been correctly added to the Piece Store
Validate if the piece is indexed in the DAG store
Check for an unsealed copy of the piece
Verify that the payload CID -> piece CID index has been created correctly
If the client cannot connect to Boost running on a Storage provider, with an error similar to the following:
The problem is that:
The SP registered their peer id and address on chain.
e.g. "Register the peer id 123abcd
at address ip4/123.456.12.345/tcp/1234
"
The SP changed their peer id locally but didn't update the peer id on chain.
The client wants to make a storage deal with peer 123abcd
. The client looks on chain for the address of peer 123abcd
and sees peer 123abcd
has registered an address ip4/123.456.12.345/tcp/1234
.
The client sends a deal proposal for peer 123abcd
to the SP at address ip4/123.456.12.345/tcp/1234
.
The SP has changed their peer ID, so the SP responds to the deal proposal request with an error: peer id mismatch
To fix the problem, the SP should register the new peer id on chain:
Clients will not be able to connect to Boost running on a Storage Provider after an IP change. This happens because clients look up an SP's registered peer id and address on chain. When an SP changes their IP or address locally, they must also update it on chain.
The SP should register the new peer id on chain using the following lotus-miner command
Please make sure to use the public IP and port of the Boost node, not the lotus-miner node, if your miner and boostd run on separate machines.
The on chain address change requires access to the worker key and thus the command lives in lotus-miner
instead of Boost.
After migrating to Boost, the following error is seen when running lotus-miner info:
:
lotus-miner
is making a call to the lotus-market
process, which has been replaced by Boost, but lotus-miner
is not aware of the new market process.
Export the MARKETS_API_INFO variable on your lotus-miner node.
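The value follows the usual <token>:<multiaddr> API-info format; the token, IP, and port below are placeholders (1288 is assumed to be boostd's default API port — verify against your own setup):

```shell
# Placeholder values -- substitute your Boost API token and address.
export MARKETS_API_INFO=<boost api token>:/ip4/<boost node IP>/tcp/1288/http
```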
The following error shows up when trying to retrieve the data from a storage provider.
The error indicates that the dagstore does not have a corresponding index shard for the piece containing the requested data. When a retrieval is requested, the dagstore on the storage provider's side is queried, and a reverse lookup is used to determine the key (piece CID). This key is then used to query the piece store to find the sector containing the data and the byte offset.
If for any reason the shard is not registered with the dagstore, then the reverse lookup to find the piece CID fails and the above error is seen. The most widely known reason for the shard not being registered with the dagstore is the error below.
To fix the deals whose retrievals are impacted by the above error, you will need to register the shards manually with the dagstore:
If you have multiple deals in this state, you will need to generate a list of pieces registered with the piece store and compare it with the shards available in the dagstore to create a list of missing shards.
Please stop accepting any deals and ensure all current deals are handed off to the lotus-miner (sealer) subsystem before proceeding from here.
1. Create a list of all sectors on lotus-miner
and redirect the output to a file. Copy the output file to the boost node, to be used by the command below.
2. Generate a list of shards to be registered
3. Register the shards with dagstore in an automated fashion.
Please note that each shard may take up to 3-5 minutes to be registered. So, the above command might take hours or days to complete, depending on the number of missing shards.
This section covers the current experimental features available in Boost
Boost is developing new market features on a regular basis as part of the overall market development. This section covers the experimental features released by Boost, along with details on how to use them.
It is not recommended to run experimental features in production environments. The features should be tested as per your requirements, and any issues or requests should be reported to the team via Github or Slack.
Once the new features have been tested and vetted, they will be released as part of a stable Boost release and all documentation concerning those features will be moved to an appropriate section of this site.
Current experimental features are listed below.
This page explains how to start monitoring and accepting deals published on-chain on the FVM
With the release of FVM, it is now possible for smart contracts to make deal proposals on-chain. This is made possible through the DealProposal FRC.
DataDAOs, as well as other clients who want to store data on Filecoin, can now deploy a smart contract on the FVM which adheres to the DealProposal FRC, and make deal proposals that are visible to every storage provider who monitors the chain.
Boost already has support for the DealProposal FRC.
The code for FVM monitoring resides in the latest release of Boost. It should be used with caution in production. SPs should test it before proceeding to the next step.
To build for mainnet:
In order to enable DealProposal FRC, you have to edit your config.toml
and enable contract deal monitoring. By default it is disabled. Here is an example configuration:
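A sketch of such a configuration follows; the section name [ContractDeals] and the Enabled key are assumptions, while AllowlistContracts and From are described below — verify all names against your generated config.toml:

```toml
# Hypothetical sketch -- verify the exact section and key names in your config.toml.
[ContractDeals]
  Enabled = true
  # Empty list: accept deals from any client contract.
  AllowlistContracts = []
  # Placeholder: set this to your SP's FEVM address.
  From = "0x0000000000000000000000000000000000000000"
```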
AllowlistContracts
field can be left empty if you want to accept deals from any client. If you only want to accept deals from certain clients, you can specify their contract addresses in this field.
From
field should be set to your SP's FEVM address. Some clients may implement a whitelist which allows specific SPs to accept deal proposals from their contract. This field will help those clients identify your SP and match it to their whitelist.
A contract publishes a DealProposalCreate
event on the chain.
Boost monitors the chain for such events from all clients by default. When an event is detected, Boost fetches the data for the deal.
The deal is then run through the basic deal validation filters, e.g. checking that the client has enough funds and that the SP has enough funds.
Once the deal passes validation, Boost creates a new deal handler and executes the deal like any other Boost deal.
How to configure and use HTTP retrievals in Boost
Boost introduced a new binary, booster-http
, with release v1.2.0. This binary can be run alongside the boostd
market process in order to serve retrievals over HTTP.
Currently, there is no payment method or built-in security integrated in the new binary. It can be run with any stable release of boostd
and can also be run on a separate machine from the boostd
process.
Release v1.7.0-rc1 introduced support in booster-http
for running an IPFS HTTP gateway, which enables Storage Providers to serve content to their users in multiple formats, as described below and demonstrated using curl
.
When performing certain actions, such as replicating deals, it can be convenient to retrieve the entire Piece (with padding) to ensure commp integrity.
To return the CAR file for a given CID, you can pass an Accept
header with the application/vnd.ipld.car;
format. This can be useful for retrieving the raw, unpadded data of a deal.
For Storage Providers that have enabled serving raw files (disabled by default), users can retrieve specific files, such as images, by their CID and path where applicable.
For advanced IPFS and IPLD use cases, you can now retrieve individual blocks by passing an Accept
header with the application/vnd.ipld.raw;
format.
SPs should try a local setup and test their HTTP retrievals before proceeding to run booster-http
in production.
To build and run booster-http
:
Clone the boost repo and checkout the latest release
Build the new binary
Collect the token information for boost, lotus-miner and lotus daemon API
Start the booster-http
server with the above details
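The first two steps might look like the following sketch; the release tag is a placeholder, and the make booster-http target is an assumption — check the repository's Makefile for the exact target name:

```shell
# Clone the Boost repository and check out the latest release tag.
git clone https://github.com/filecoin-project/boost.git
cd boost
git checkout <latest release tag>
# Build the booster-http binary (target name assumed -- see the Makefile).
make booster-http
```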
You can run multiple booster-http
processes on the same machine by using a different port for each instance with the --port
flag. You can also run multiple instances of booster-http on different machines.
SSL
Authentication
Load balancing
To enable public discovery of the Boost HTTP server, SPs should set the domain root in boostd's config.toml
. Under the [DealMaking]
section, set HTTPRetrievalMultiaddr
to the public domain root in multi-address format.
Example config.toml
section:
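A sketch of that section, using example.com as a placeholder domain:

```toml
# Advertise the public domain root in multiaddr format.
[DealMaking]
  HTTPRetrievalMultiaddr = "/dns/example.com/tcp/443/https"
```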
Clients can determine if an SP offers HTTP retrieval by running:
Clients can check the HTTP URL scheme version and supported queries
Clients can download a piece using the domain root configured by the SP:
The booster-http server listens on localhost. To expose the server publicly, SPs should run a reverse proxy such as NGINX to handle operational concerns like:
While booster-http may get more operational features over time, the intent is that providers who want to scale their HTTP operations will handle most operational concerns via software in front of booster-http.
This page contains all Boost API definitions. Interfaces defined here are exposed as JSON-RPC 2.0 endpoints by the boostd daemon.
To use the Boost Go client, the Go RPC-API library can be used to interact with the Boost API node.
Import the necessary Go module:
Create the following script:
Run go mod init
to set up your go.mod
file
You should now be able to interact with the Boost API.
The JSON-RPC API can also be communicated with programmatically from other languages. Here is an example written in Python. Note that the method name
must be prefixed with Filecoin.
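A minimal sketch of such a call using only the Python standard library; the endpoint URL and token are placeholders, and the ID method name is assumed from the API listing below:

```python
import json
import urllib.request

def rpc_payload(method, params=None, request_id=1):
    # Boost JSON-RPC 2.0 methods must be prefixed with "Filecoin."
    return {
        "jsonrpc": "2.0",
        "method": "Filecoin." + method,
        "params": params or [],
        "id": request_id,
    }

def call_boost(endpoint, token, method, params=None):
    # Requires a running boostd; endpoint and token are placeholders.
    data = json.dumps(rpc_payload(method, params)).encode()
    req = urllib.request.Request(
        endpoint,
        data=data,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + token,
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

print(rpc_payload("ID")["method"])  # Filecoin.ID
# Example against a live node (placeholders):
# call_boost("http://127.0.0.1:1288/rpc/v0", "<token>", "ID")
```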
There are not yet any comments for this method.
Perms: read
Inputs:
Response: 34359738368
Perms: admin
Inputs:
Response: "Ynl0ZSBhcnJheQ=="
Perms: read
Inputs:
Response:
There are not yet any comments for this method.
Perms: read
Inputs:
Response: "Ynl0ZSBhcnJheQ=="
Perms: read
Inputs:
Response: 123
Perms: read
Inputs:
Response: true
Perms: admin
Inputs:
Response: {}
Perms: admin
Inputs: null
Response:
Perms: admin
Inputs:
Response:
Perms: admin
Inputs:
Response: {}
Perms: admin
Inputs: null
Response:
Perms: read
Inputs:
Response:
Perms: admin
Inputs:
Response: {}
Perms: admin
Inputs:
Response: {}
Perms: admin
Inputs:
Response:
Perms: admin
Inputs:
Response:
Perms: admin
Inputs:
Response:
There are not yet any comments for this method.
Perms: admin
Inputs: null
Response: {}
Perms: write
Inputs:
Response:
Perms: admin
Inputs:
Response:
Perms: admin
Inputs: null
Response: true
Perms: admin
Inputs: null
Response: true
Perms: admin
Inputs: null
Response: true
There are not yet any comments for this method.
Perms: admin
Inputs: null
Response: true
Perms: admin
Inputs: null
Response: true
Perms: admin
Inputs: null
Response: true
Perms: admin
Inputs: null
Response:
Perms: admin
Inputs:
Response: {}
Perms: admin
Inputs:
Response: {}
Perms: admin
Inputs:
Response: {}
Perms: admin
Inputs:
Response: {}
Perms: admin
Inputs:
Response: {}
Perms: admin
Inputs:
Response: {}
Perms: admin
Inputs:
Response: {}
Perms: read
Inputs: null
Response: "12D3KooWGzxzKZYveHXtpG6AsrUJBcWxHBFS2HsEoGTxrMLvKXtf"
Perms: write
Inputs: null
Response:
Perms: write
Inputs:
Response: {}
Perms: write
Inputs:
Response: {}
Perms: write
Inputs: null
Response:
Perms: read
Inputs: null
Response:
Perms: read
Inputs: null
Response:
Perms: write
Inputs:
Response: {}
Perms: write
Inputs: null
Response:
Perms: read
Inputs: null
Response:
There are not yet any comments for this method.
Perms: read
Inputs: null
Response:
Perms: write
Inputs: null
Response:
Perms: write
Inputs:
Response: {}
Perms: admin
Inputs:
Response: {}
Perms: admin
Inputs:
Response: {}
Perms: read
Inputs: null
Response:
Perms: read
Inputs:
Response: "string value"
Perms: read
Inputs: null
Response:
Perms: read
Inputs: null
Response:
Perms: read
Inputs: null
Response:
Perms: read
Inputs: null
Response:
Perms: admin
Inputs:
Response: {}
Perms: read
Inputs: null
Response:
Perms: admin
Inputs:
Response: {}
Perms: write
Inputs:
Response: {}
Perms: read
Inputs:
Response: 1
Perms: write
Inputs:
Response: {}
Perms: read
Inputs:
Response:
Perms: read
Inputs:
Response:
Perms: read
Inputs:
Response:
Perms: read
Inputs: null
Response:
Perms: read
Inputs:
Response: 60000000000
Perms: admin
Inputs:
Response: {}
Perms: read
Inputs: null
Response:
Perms: admin
Inputs:
Response: {}
Perms: read
Inputs: null
Response:
Perms: admin
Inputs:
Response: {}
Perms: read
Inputs:
Response:
There are not yet any comments for this method.
Perms: admin
Inputs:
Response: {}
Perms: read
Inputs:
Response:
Perms: read
Inputs:
Response: 42
Perms: read
Inputs:
Response:
Perms: read
Inputs: null
Response:
Perms: read
Inputs: null
Response:
RuntimeSubsystems returns the subsystems that are enabled in this instance.
Perms: read
Inputs: null
Response:
Perms: read
Inputs: null
Response:
This page explains how to initialise LID and start using it to provide retrievals to clients
Considering that the Local Index Directory is a new feature, Storage Providers should initialise it after upgrading their Boost deployments.
There are two ways a Storage Provider can do that:
Migrate existing indices from the DAG store into LID: this solution assumes that the Storage Provider has been keeping an unsealed copy for every sector they prove on-chain, and has already indexed all their deal data into the DAG store.
Typically index sizes for a given sector range from 100KiB up to 1GiB, depending on the deal data and its block sizes. The DAG store keeps these indices in the repository directory of Boost under the ./dagstore/index
and ./dagstore/datastore
directories. This data should be migrated to LID with the migrate-lid
utility.
Recreate indices for deal data based on unsealed copies of sectors: this solution assumes that the Storage Provider has unsealed copies of every sector they prove on-chain. If this is not the case, the SP should first trigger an unseal (UNS) job on their system for every sector that contains user data to produce an unsealed copy.
SPs can use the boostd recover lid
utility to produce an index for all deal data within an unsealed sector and store it in LID, enabling retrievals for the data. Depending on the SP's deployment, where unsealed copies are hosted (NFS, Ceph, external disks, etc.), and the performance of the hosting system, producing an index for a 32GiB sector can take anywhere from a few seconds to a few minutes, as the unsealed copy needs to be processed by the utility.
TODO
TODO
How to get help for Boost
You can report any issues or bugs here.
If you are having trouble, check the Troubleshooting page for common problems and solutions.
If you have a question, please join the Filecoin Slack and ask in #fil-help or #fil-lotus-help or #boost-help or start a discussion.
You can also start a discussion about new features and improvement ideas for Boost.
Frequently asked questions about Boost
Is there a way to stop boostd
daemon?
You can use the regular Unix OS signals.
Is Boost compatible with the Lotus client? Can a client use lotus client deal
to send a deal to Boost storage providers or do they have to use the boost client?
Yes, Boost works with any client that supports the storage market protocol, the default standard of the Filecoin network today.
Does Boost provide retrieval functionality?
Yes, Boost provides 3 protocols for retrievals as of now. By default, Boost has Graphsync retrieval enabled. SPs can run Bitswap and HTTP retrievals by running booster-bitswap
and booster-http
respectively.
Does the Boost client have retrieval functionality?
Yes, the Boost client supports retrieval over the graphsync protocol. However, we highly recommend using the Lassie client for Filecoin/IPFS retrievals.
Can Boost make verified deals?
Yes, payments for deals can be made either from a regular wallet, or from DataCap. Deals that are paid for with DataCap are called verified
deals.
Can I run both Boost and markets at the same time? No, Boost replaces the legacy markets process. See Migrate a Lotus markets service process to Boost
Local Index Directory requirements and dependencies
Local Index Directory depends on a backend database to store various indices. Currently we support two implementations - YugabyteDB or LevelDB - depending on the size of deal data and indices a storage provider holds.
LevelDB is an open source on-disk key-value store, and can be used when indices fit on a single host.
YugabyteDB is an open source modern distributed database designed to run in any public, private, hybrid or multi-cloud environment.
Storage providers who hold more than 1PiB data are encouraged to use YugabyteDB as it is horizontally scalable, provides better monitoring and management utilities and could support future growth.
For detailed instructions, playbooks and hardware recommendations, see the YugabyteDB website - https://docs.yugabyte.com
YugabyteDB is designed to run on bare-metal machines, virtual machines (VMs), and containers.
CPU and RAM
You should allocate adequate CPU and RAM. YugabyteDB has adequate defaults for running on a wide range of machines, and has been tested from 2 core to 64 core machines, and up to 200GB RAM.
YugabyteDB requires the SSE2 instruction set support, which was introduced into Intel chips with the Pentium 4 in 2001 and AMD processors in 2003. Most systems produced in the last several years are equipped with SSE2.
In addition, YugabyteDB requires SSE4.2.
To verify that your system supports SSE2, run the following command:
cat /proc/cpuinfo | grep sse2
To verify that your system supports SSE4.2, run the following command:
cat /proc/cpuinfo | grep sse4.2
We recommend a minimum of 1TiB or more allocated for YugabyteDB, depending on the amount of deal data you store and its average block size.
Assuming you've kept unsealed copies of all your data and have consistently indexed deal data, the size of your DAG store directory should be comparable with the requirements for YugabyteDB.
This tutorial goes through the steps required to run our Docker monitoring setup to collect and visualize metrics for various Boost processes
The monitoring stack we will use includes:
Prometheus - collects metrics and powers dashboards in Grafana
Tempo - collects traces and powers traces search in Grafana with Jaeger
Grafana - provides visualization tools and dashboards for all metrics and traces
Lotus and Boost are already instrumented to produce traces and stats for Prometheus to collect.
The Boost team also packages a set of Grafana dashboards that are automatically provisioned as part of this setup.
This setup has been tested on macOS and on Linux. We haven’t tested it on Windows, so YMMV.
All the monitoring stack containers run in Docker.
We have tested this setup with Docker 20.10.23 on macOS and Ubuntu.
https://docs.docker.com/engine/install/
Update extra_hosts in docker-compose.yaml for prometheus, so that the Prometheus container can reach all its targets - boostd, lotus-miner, booster-bitswap, booster-http, etc.
https://github.com/filecoin-project/boost/blob/main/docker/monitoring/docker-compose.yaml#L47-L55
Depending on where your Filecoin processes (boostd, lotus, lotus-miner, booster-bitswap, etc.) are running, you need to confirm that they are reachable from Prometheus so that it can scrape their metrics.
By default the setup expects to find them within the same Docker network, so if you are running them elsewhere (e.g. on the `host` network), add the following arguments:
Confirm that Prometheus targets are being scraped on the Targets page at http://localhost:9090/targets
If you are running a software firewall like `ufw`, you might need to modify your iptables rules to allow access from the Prometheus container network to the Filecoin stack network, for example:
sudo docker network inspect monitoring
# note the Subnet for the network
sudo ufw allow from 172.18.0.0/16
Go to Grafana at http://localhost:3333 and inspect the dashboards:
Configuring booster-http to serve blocks and files
With the release v1.7.0-rc1 of booster-http, Storage Providers can now serve blocks and files directly over the HTTP protocol. booster-http now implements an IPFS HTTP gateway with path-style resolution. This allows clients to download individual IPFS blocks and CAR files, and to request uploaded files directly from their browser.
SPs can take advantage of the ecosystem of tools to manage HTTP traffic, like load balancers and reverse proxies.
Before proceeding any further, we recommend that you read the basics of HTTP retrieval configuration. This section extends HTTP retrievals and covers configuration specific to serving files and raw blocks.
The booster-http service can be started to serve a specific type of content on the IPFS gateway API. This allows SPs to run multiple booster-http instances, each serving a specific type of content, such as CAR files only or raw blocks only.
In the curl request below, we appended the query parameter format=raw to the URL to get the raw block data for the file. But if we try to open the file directly in a web browser, with no extra query parameters, we get an error message:
By default booster-http does not serve files in a format that can be read by a web browser. This is to protect Storage Providers from serving content that may be flagged as illicit. To enable serving files to web browsers, we must pass --serve-files=true to booster-http on startup. Once booster-http is restarted with --serve-files=true, we can open the file directly from a web browser:
booster-http (and booster-bitswap) automatically filter out known flagged content using the denylist maintained at https://badbits.dwebops.pub/denylist.json
We can also browse all files in the CAR archive.
SPs must secure their booster-http instance before exposing it to the public. SPs are free to use any tool available to limit who can download files, the number of requests per second, and the download bandwidth each client can use per second.
Users can follow this example to use an NGINX reverse proxy to secure their booster-http instance. In this section we've just scratched the surface of the ways in which nginx can set access limits, rate limits and bandwidth limits. In particular, it's possible to add limits by request token, or using JWT tokens. The examples in this section are adapted from Deploying NGINX as an API Gateway, which goes into more detail.
By default nginx puts configuration files into /etc/nginx
The default configuration file is /etc/nginx/sites-available/default
Set up the nginx server to listen on port 7575 and forward requests to booster-http on port 7777.
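A minimal sketch of such a configuration, assuming booster-http runs on the same host (this goes in /etc/nginx/sites-available/default):

```nginx
# Sketch: listen on 7575 and forward /ipfs/ requests
# to booster-http on port 7777 on the same machine.
server {
    listen 7575;

    location /ipfs/ {
        proxy_pass http://127.0.0.1:7777;
    }
}
```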
The IPFS gateway serves files from /ipfs
. So, we will add a server block for location /ipfs/
Let's limit access to the IPFS gateway using the standard .htaccess file. We need to set up an .htaccess file with a username and password. Create a user named alice:
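One way to create such a credentials file is with htpasswd from apache2-utils; where that is not available, openssl can generate a compatible entry. This is a sketch - the password and final file path are placeholders:

```shell
# Create an .htaccess-style credentials file with a user "alice".
# "mypassword" is a placeholder; -apr1 produces an Apache-compatible hash.
printf 'alice:%s\n' "$(openssl passwd -apr1 mypassword)" > .htaccess

# The file would then typically be moved somewhere nginx can read it,
# e.g. /etc/nginx/.htaccess (path is an assumption).
```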
Include the .htaccess file in /etc/nginx/sites-available/default. Now when we open any URL under the path /ipfs, we will be presented with a sign-in dialog.
To prevent users from making too many requests per second, we should add rate limits.
Create a file with the rate limiting configuration at /etc/nginx/ipfs-gateway.conf.d/ipfs-gateway.conf
Add a request zone limit to the file of 1 request per second, per client IP
Include ipfs-gateway.conf in /etc/nginx/sites-available/default and set the response for too many requests to HTTP response code 429.
If you click the refresh button in your browser on any path under /ipfs more than once per second, you will see a 429 error page.
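Putting these pieces together, the rate-limiting setup described above might be sketched as follows (the zone name and size are illustrative):

```nginx
# /etc/nginx/ipfs-gateway.conf.d/ipfs-gateway.conf (sketch):
# one zone keyed by client IP, limited to 1 request per second.
limit_req_zone $binary_remote_addr zone=ipfs_zone:10m rate=1r/s;

# Then, in /etc/nginx/sites-available/default (limit_req_zone must sit
# at http level, via the include; limit_req goes inside the location):
#   include /etc/nginx/ipfs-gateway.conf.d/ipfs-gateway.conf;
#   location /ipfs/ {
#       limit_req zone=ipfs_zone;
#       limit_req_status 429;
#       ...
#   }
```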
It is also recommended to limit the amount of bandwidth that clients can take up when downloading data from booster-http. This ensures a fair bandwidth distribution to each client and prevents situations where one client ends up choking the booster-http instance.
Create a new .htaccess user called bob
Add a mapping from .htaccess username to bandwidth limit in /etc/nginx/ipfs-gateway.conf.d/ipfs-gateway.conf
Add the bandwidth limit to /etc/nginx/sites-available/default
To verify bandwidth limiting, use curl to download a file as user alice and then as user bob. Note the difference in the Average Dload column (the average download speed).
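One way to implement such per-user limits is nginx's map directive keyed on the authenticated $remote_user, feeding limit_rate. A sketch (usernames and rates are illustrative; variables in limit_rate require nginx 1.17.0+):

```nginx
# Sketch: per-user bandwidth limits keyed on the authenticated username.
map $remote_user $user_rate {
    default  1m;    # everyone else: 1 MB/s
    alice    10m;   # alice: 10 MB/s
    bob      100k;  # bob: 100 KB/s
}

# Then, inside the /ipfs/ location in /etc/nginx/sites-available/default:
#   limit_rate $user_rate;
```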
Boost exposes a GraphQL API that is used by the Web UI to query and update information about Boost deals. The GraphQL API query endpoint is at http://localhost:8080/graphql/query
You can also run your own queries against the GraphQL API using curl or a programming language that has a GraphQL client.
Boost has a built-in GraphQL explorer at http://localhost:8080/graphiql
You can test out queries, or explore the GraphQL API, by clicking on the < Docs link at the top right of the page:
To run a GraphQL query with curl:
This 1m video shows how to use these tools to build and run a GraphQL query against Boost:
1. Query failed deals
2. Cancel a deal, where ab12345c-5678-90de-12f3-45a6b78cd9ef is the deal ID
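As a sketch, a curl invocation might look like the following. The field names in the query are illustrative assumptions - use the built-in GraphQL explorer to check the exact schema for your Boost version:

```shell
# Build the GraphQL request body and POST it to the local Boost endpoint.
# (Field names are illustrative; verify them against the GraphiQL docs.)
QUERY='{"query":"query { deals(limit: 10) { deals { ID Message } } }"}'

# Validate that the body is well-formed JSON before sending it.
echo "$QUERY" | python3 -c 'import json,sys; json.load(sys.stdin); print("valid JSON")'

# Send the query (assumes boostd is running locally):
# curl -X POST -H "Content-Type: application/json" \
#      -d "$QUERY" http://localhost:8080/graphql/query
```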
Local Index Directory architecture and index types
When designing the Local Index Directory we considered the needs of various Storage Providers (SPs) and the operational overhead LID would have on their systems. We built a solution for:
small SPs - holding up to 1PiB of data, and
mid- and large-size SPs - holding anywhere from 1PiB up to 100PiB of data
Depending on the underlying block size and data format, index size can vary. Typically block sizes are between 16KiB and 1MiB.
At the moment there are two implementations of LID:
a simple LevelDB implementation, for small SPs who want to keep all information in a single process database
a scalable YugabyteDB implementation, for medium and large size SPs with tens of thousands of deals
In order to support the described retrieval use cases, LID maintains the following indexes:
To look up which pieces contain a block
To look up which sector a piece is in
To look up where in the piece a block is and the block’s size
This page describes the Local Index Directory component in Boost, what it is used for, how it works and how to start using it
Local Index Directory is not yet released. This is a placeholder page for its documentation.
The Local Index Directory (LID) manages and stores indices of deal data so that it can be retrieved by a content identifier (cid).
Currently this task is performed by the DAG store component. The DAG store keeps its indexes on disk on a single machine. LID replaces the DAG store and introduces a horizontally scalable backend database for storing the data - YugabyteDB.
LID is designed to provide a more intuitive experience for the user, by surfacing problems and providing various repair tools.
To summarize, LID is the component which keeps fine-grained metadata about all the deals on Filecoin that a given Storage Provider stores. Without it, clients would only be able to retrieve full pieces, which are generally between 8GiB and 32GiB in size.
When a client uploads deal data to Boost, LID records the sector that the deal data is stored in and scans the deal data to create an index of all its blocks, indexed by block cid. This way clients can later retrieve subsets of the original deal data without retrieving the full deal data.
When a client makes a request for data by cid, LID:
checks which piece the cid is in, and where in the piece the data is
checks which sector the piece is in, and where in the sector the piece is
reads the data from the sector
The retrieval use cases that the Local Index Directory supports are:
Request one root cid with a selector, receive many blocks
LID is able to:
look up which piece contains the root cid
look up which sector contains the piece
for each block, get the offset into the piece for the block
Request one block at a time
LID is able to:
look up which piece contains the block
get the size of the block (Bitswap asks for the size before getting the block data)
look up which sector contains the piece
get the offset into the piece for the block
Request a whole piece
LID is able to look up which sector contains the piece.
Request an individual block
LID is able to:
look up which piece contains the block
look up which sector contains the piece
get the offset into the piece for the block
Request a file by root cid
LID is able to:
look up which piece contains the root cid
look up which sector contains the piece
for each block, get the offset into the piece for the block
How to configure and use bitswap retrievals in Boost
booster-bitswap is a binary that runs alongside the boostd process to serve retrievals over the Bitswap protocol. This feature of Boost provides a number of tools for managing a production-grade Bitswap retrieval service for a Storage Provider's content.
There is currently no payment method in booster-bitswap. This endpoint is intended to serve free content.
Bitswap retrieval introduces interoperability between IPFS and Filecoin, as it enables clients to retrieve Filecoin data over IPFS. This expands the reach of the Filecoin network considerably, increasing the value proposition for users to store data on the Filecoin network. This benefits the whole community, including SPs. Users will be able to access data directly via IPFS, as well as benefit from retrieval markets (e.g. Saturn) and compute over data projects (e.g. Bacalhau).
Booster-bitswap modes
There are two primary "modes" for exposing booster-bitswap to the internet.
In private mode the booster-bitswap peer ID is not publicly accessible to the internet. Instead, public Bitswap traffic goes to boostd itself, which then acts as a reverse proxy, forwarding that traffic on to booster-bitswap. This is similar to the way one might configure Nginx as a reverse proxy for an otherwise private web server. private mode is simpler to set up, but may produce greater load on boostd as a protocol proxy.
In public mode the public internet firewall must be configured to forward traffic directly to the booster-bitswap instance. boostd is configured to announce the public address of booster-bitswap to the network indexer (the network indexer is the service that clients can query to discover where to retrieve content). This mode offers greater flexibility and performance. You can even set up booster-bitswap to run over a separate internet connection from boostd. However, it might require additional configuration and changes to your overall network infrastructure.
You can configure booster-bitswap in demo mode to familiarise yourself with the configuration. Once you are confident and familiar with the options, go ahead and configure booster-bitswap for production use.
1. Clone the boost repo and check out the latest stable release
2. Build the booster-bitswap binary:
3. Initialize booster-bitswap:
4. Record the peer ID output by booster-bitswap init -- we will need this peer ID later
5. Collect the boost API Info
6. Run booster-bitswap
7. By default, booster-bitswap runs on port 8888. You can use --port to override this behaviour
8. Fetch a payload over bitswap by running:
Where peerID is the peer ID recorded when you ran booster-bitswap init, and rootCID is a data CID known to be stored on your SP.
Configuring booster-bitswap to serve retrievals
As described above, booster-bitswap can be configured to serve retrievals in 2 modes. We recommend using public mode to avoid greater load on boostd as a protocol proxy.
1. Clone the main branch from the boost repo
2. Build the booster-bitswap binary:
3. Initialize booster-bitswap:
4. Record the peer ID output by booster-bitswap init -- we will need this peer ID later
5. Stop boostd and edit ~/.boost/config.toml to set the peer ID for bitswap
6. Start the boostd service again
7. Collect the boost API Info
8. Run booster-bitswap
You can get a boostd multiaddress by running boostd net listen and using any of the returned addresses
9. By default, booster-bitswap runs on port 8888. You can use --port to override this behaviour
10. Try to fetch a payload CID over bitswap to verify your configuration
1. Clone the release/booster-bitswap branch from the boost repo
2. Build the booster-bitswap binary:
3. Initialize booster-bitswap:
4. Record the peer ID output by booster-bitswap init -- we will need this peer ID later
5. Stop boostd and edit ~/.boost/config.toml to set the peer ID for bitswap
The libp2p private key file for booster-bitswap can generally be found at <booster-bitswap repo path>/libp2p.key
The reason Boost needs to know the public multiaddresses and libp2p private key for booster-bitswap is so it can properly announce these records to the network indexer.
6. Start the boostd service again
7. Collect the boost API Info
8. Run booster-bitswap
9. By default, booster-bitswap runs on port 8888. You can use --port to override this behaviour
10. Try to fetch a payload CID over bitswap to verify your configuration
Booster-bitswap configuration
booster-bitswap provides a number of performance and safety tools for managing a production grade bitswap server without overloading your infrastructure.
Depending on your hardware you may wish to increase or decrease the default parameters for the bitswap server internals. In the following example we are increasing the worker count for various components up to 600. This will utilize more CPU and I/O, but improve the performance of retrievals. See the command line help docs for details on each parameter.
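As a rough sketch only - the exact flag names can vary between releases, so confirm them with booster-bitswap run --help before use:

```shell
# Hypothetical invocation raising the bitswap worker counts to 600.
# Flag names are assumptions; verify with: booster-bitswap run --help
booster-bitswap run \
  --engine-blockstore-worker-count=600 \
  --engine-task-worker-count=600 \
  --task-worker-count=600
```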
Booster-bitswap is automatically set up to deny all requests for CIDs that are on the BadBits denylist. The default badbits list can be overridden, or additional badbits lists can be provided to the booster-bitswap instance.
booster-bitswap provides a number of controls for filtering requests and limiting resource usage. These are expressed in a JSON configuration file <booster-bitswap repo>/retrievalconfig.json. You can create a new retrievalconfig.json file if one does not exist.
To make changes to the current configuration, edit the retrievalconfig.json file and restart booster-bitswap for the changes to take effect. All configs are optional, and absent parameters generally default to no filtering at all for the given parameter.
You can also configure booster-bitswap to fetch your retrieval config from a remote HTTP API, possibly one provided by a third-party configuration tool like CIDGravity. To do this, start booster-bitswap with the --api-filter-endpoint {url} option, where url is the HTTP URL for an API serving the above JSON format. Optionally, add --api-filter-auth {authheader} if you need to pass a value for the HTTP Authorization header with your API.
When you set up an API endpoint, booster-bitswap will update its local configuration from the API every five minutes, so you won't have to restart booster-bitswap to make a change. Be aware that the remote config will overwrite, rather than merge with, the local config.
Limiting bandwidth within booster-bitswap will not provide an optimal user experience. Depending on the individual setup, setting up limitations within the software could have a larger impact on storage provider operations. Therefore, we recommend that storage providers set up their own bandwidth limitations using existing tools.
There are multiple options to set up bandwidth limiting.
At the ISP level - dedicated bandwidth is provided to the node running booster-bitswap.
At the router level - we recommend configuring the bandwidth at the router level as it provides more flexibility and can be updated as needed. To configure the bandwidth on your router, please check with your manufacturer.
Limit the bandwidth using the different tools available in Linux. Here are some examples of such tools. Feel free to use other tools not listed here, and open a GitHub issue to add your example to this page.
TC is used to configure Traffic Control in the Linux kernel. There are examples available online detailing how to configure rate limiting using TC.
You can use the below commands to run a very basic configuration.
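A very basic configuration using TC's token bucket filter might be sketched as follows (the interface name and rate are placeholders; requires root):

```shell
# Cap egress on eth0 at 100 Mbit/s with a token bucket filter.
# "eth0" and the rate are placeholders for your environment.
sudo tc qdisc add dev eth0 root tbf rate 100mbit burst 32kbit latency 400ms

# Inspect the rule, and remove it again when done:
tc qdisc show dev eth0
sudo tc qdisc del dev eth0 root
```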
Trickle is a portable, lightweight userspace bandwidth shaper that runs either in collaborative mode (together with trickled) or in standalone mode. You can read more about rate limiting with trickle here. Here's a starting point for a trickle configuration to rate limit the booster-bitswap service.
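A sketch of running booster-bitswap under trickle in standalone mode (rates in KB/s are placeholders; append your usual run flags):

```shell
# Run booster-bitswap under trickle: -s standalone mode,
# -d download cap (KB/s), -u upload cap (KB/s).
trickle -s -d 10240 -u 5120 booster-bitswap run
```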
Another way of controlling network traffic is to limit bandwidth on individual network interface cards (NICs). Wondershaper is a small Bash script that uses the tc command-line utility in the background to let you regulate the amount of data flowing through a particular NIC. As you can imagine, while you can use wondershaper on a machine with a single NIC, its real advantage is on a machine with multiple NICs. Just like trickle, wondershaper is available in the official repositories of mainstream distributions. To limit network traffic with wondershaper, specify the NIC on which you wish to restrict traffic with the download and upload speed in kilobits per second.
For example,
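a sketch using the classic wondershaper invocation looks like this (the interface name and rates are placeholders; some packaged versions use the `wondershaper -a eth0 -d ... -u ...` form instead):

```shell
# Limit eth0 to 10240 kbit/s download and 4096 kbit/s upload
# (classic syntax: wondershaper <interface> <downlink> <uplink>).
sudo wondershaper eth0 10240 4096

# Clear the limits again:
sudo wondershaper clear eth0
```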