How to use deal filters
Your use case might demand very precise and dynamic control over a combination of deal parameters.
Boost provides two IPC hooks that allow you to name a command to execute for every deal before the miner accepts it:
Filter for storage deals.
RetrievalFilter for retrieval deals.
The executed command receives a JSON representation of the deal parameters on standard input and, upon completion, its exit code is interpreted as:
0: success, proceed with the deal.
non-0: failure, reject the deal.
The most trivial filter, rejecting any retrieval deal, would be something like RetrievalFilter = "/bin/false". /bin/false is a binary that immediately exits with a code of 1.
A filter script can, for example, let the miner deny specific clients and only accept deals that are set to start relatively soon.
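As an illustration, a filter with that behaviour might be sketched in Python; the denied client address, the start-epoch window, and the JSON field names are assumptions and should be checked against the deal proposal your Boost version actually pipes to the filter:

```python
#!/usr/bin/env python3
# Sketch of a deal filter: Boost pipes the deal proposal JSON to stdin
# and interprets exit code 0 as "accept" and non-0 as "reject".
import json
import os
import sys

DENIED_CLIENTS = {"f1exampleclientaddress"}  # hypothetical client to deny
MAX_START_DELAY_EPOCHS = 2880                # illustrative "starts soon" window

def accept(deal, current_epoch):
    proposal = deal.get("Proposal", {})
    if proposal.get("Client") in DENIED_CLIENTS:
        return False
    # Only accept deals whose start epoch is relatively soon.
    return proposal.get("StartEpoch", 0) - current_epoch <= MAX_START_DELAY_EPOCHS

def main():
    deal = json.load(sys.stdin)
    # A real filter would look up the current chain epoch; an environment
    # variable stands in for it here.
    current_epoch = int(os.environ.get("CURRENT_EPOCH", "0"))
    return 0 if accept(deal, current_epoch) else 1

# To install the script as a filter, point the config option at it, e.g.:
#   Filter = "/path/to/deal_filter.py"
# Uncomment the guard below when using it as a standalone script:
# if __name__ == "__main__":
#     sys.exit(main())
```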
You can also use a third party content policy framework like CIDgravity or bitscreen
by Murmuration Labs:
Configure to publish IPNI announcements over HTTP
IndexProvider.HttpPublisher.AnnounceOverHttp must be set to true to enable HTTP announcements. Once HTTP announcements are enabled, the local-index provider will continue to announce over libp2p gossipsub along with HTTP for the specified indexers.
The advertisements are sent to the indexer nodes defined in DirectAnnounceURLs. You can specify more than one URL to announce to multiple indexer nodes.
Once an IPNI node starts processing the advertisements, it will reach out to the Boost node to fetch the data. The Boost node therefore needs to specify a public IP and port which the indexer node can use to query for data.
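A sketch of the relevant config.toml fragment follows; key names and section placement should be verified against your Boost version, and the URL and address values are examples, not defaults:

```toml
[IndexProvider]
  Enable = true

  [IndexProvider.Announce]
    # Indexer nodes to announce advertisements to; more than one URL may be listed
    DirectAnnounceURLs = ["https://cid.contact/ingest/announce"]

  [IndexProvider.HttpPublisher]
    # Announce over HTTP in addition to libp2p gossipsub
    AnnounceOverHttp = true
    # Public IP and port the indexer nodes can use to fetch advertisement data
    PublicHostname = "209.94.92.3"
    Port = 3104
```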
Advanced configurations you can tune to optimize your legacy deal onboarding
This section controls parameters for making storage and retrieval deals:
ExpectedSealDuration is an estimate of how long sealing will take and is used to reject deals whose start epoch might be earlier than the expected completion of sealing. It can be estimated by benchmarking or by pledging a sector.
The final value of ExpectedSealDuration should equal (TIME_TO_SEAL_A_SECTOR + WaitDealsDelay) * 1.5. This equation ensures that the miner does not commit to having the sector sealed too soon.
StartEpochSealingBuffer allows lotus-miner to seal a sector before a certain epoch. For example: if the current epoch is 1000 and a deal within a sector must start by epoch 1500, then lotus-miner must wait until the current epoch is 1500 before it can start sealing that sector. However, if Boost sets StartEpochSealingBuffer to 500, lotus-miner can start sealing the sector at epoch 1000.
If there are multiple deals in a sector, StartEpochSealingBuffer is based on the deal with the start time closest to the current epoch. So, if the sector in our example has three deals that start on epochs 1000, 1200, and 1400, then lotus-miner will start sealing the sector at epoch 500.
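As an illustration, both settings might appear in config.toml as follows; the values are examples only, and the section placement is an assumption to check against your Boost version:

```toml
[Dealmaking]
  # (TIME_TO_SEAL_A_SECTOR + WaitDealsDelay) * 1.5, e.g. (24h + 1h) * 1.5
  ExpectedSealDuration = "37h30m0s"
  # Number of epochs before a deal's start epoch at which sealing may begin
  StartEpochSealingBuffer = 500
```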
The PublishStorageDeals message can publish multiple deals in a single message. When a deal is ready to be published, Boost will wait up to PublishMsgPeriod for other deals to be ready before sending the PublishStorageDeals message.
However, once MaxDealsPerPublishMsg deals are ready, Boost will immediately publish all of them.
For example, if PublishMsgPeriod is 1 hour:
At 1:00 pm, Deal 1 is ready to publish. Boost will wait until 2:00 pm for other deals to be ready before sending PublishStorageDeals.
At 1:30 pm, Deal 2 is ready to publish.
At 1:45 pm, Deal 3 is ready to publish.
At 2:00 pm, Boost publishes Deals 1, 2, and 3 in a single PublishStorageDeals message.
If MaxDealsPerPublishMsg is 2, then in the above example, when Deal 2 is ready to be published at 1:30 pm, Boost would immediately publish Deals 1 and 2 in a single PublishStorageDeals message. Deal 3 would be published in a subsequent PublishStorageDeals message.
If any of the deals in the PublishStorageDeals message fails validation upon execution, or if the start epoch has passed, all deals in the message will fail to be published.
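The batching rule described above can be sketched as follows; this illustrates the decision logic only and is not Boost's actual implementation:

```python
from datetime import datetime, timedelta

PUBLISH_MSG_PERIOD = timedelta(hours=1)  # PublishMsgPeriod
MAX_DEALS_PER_MSG = 2                    # MaxDealsPerPublishMsg

def should_publish(pending_ready_times, now):
    """Return True when the pending batch of deals should be sent now."""
    if not pending_ready_times:
        return False
    # Publish immediately once the batch is full...
    if len(pending_ready_times) >= MAX_DEALS_PER_MSG:
        return True
    # ...or once the oldest pending deal has waited the full period.
    return now - min(pending_ready_times) >= PUBLISH_MSG_PERIOD

deal1 = datetime(2023, 1, 1, 13, 0)   # Deal 1 ready at 1:00 pm
deal2 = datetime(2023, 1, 1, 13, 30)  # Deal 2 ready at 1:30 pm

print(should_publish([deal1], datetime(2023, 1, 1, 13, 15)))  # still waiting
print(should_publish([deal1, deal2], deal2))                  # batch is full
```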
Boost is introducing a new feature that allows computing commP during the deal on a lotus-worker node. This should reduce the overall resource utilisation on the Boost node.
In order to enable remote commP on a Boost node, update your config.toml and then restart the Boost node.
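The config.toml change might look like the fragment below; the RemoteCommp key under the [Dealmaking] section is an assumption to verify against your Boost version:

```toml
[Dealmaking]
  # Compute commP on a lotus-worker node instead of locally on the Boost node
  RemoteCommp = true
```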
Boost configuration options available in UI
By default, the web UI listens on the localhost interface on port 8080. We recommend keeping the UI listening on localhost or some internal IP within your private network to avoid accidentally exposing it to the internet.
To access a web UI that is listening on the localhost interface of a remote server, you can open an SSH tunnel from your local machine, for example: ssh -L 8080:localhost:8080 <user>@<boost-node>
This page covers all the configuration related to the HTTP transfer limiter in Boost.
Boost provides a capability to limit the number of simultaneous HTTP transfers in progress to download the deal data from clients.
This configuration was introduced in ConfigVersion = 3 of the Boost configuration file.
The transferLimiter maintains a queue of transfers with a soft upper limit on the number of concurrent transfers. To prevent slow or stalled transfers from blocking up the queue, there are a couple of mitigations. The queue is ordered such that we:
start transferring data for the oldest deal first
prefer to start transfers with peers that don't have any ongoing transfer
once the soft limit is reached, don't allow any new transfers with peers that have existing stalled transfers
Note that peers are distinguished by their host (e.g. foo.bar:8080), not by libp2p peer ID. For example, if there is:
one active transfer with peer A
one pending transfer with peer A
one pending transfer with peer B
then the algorithm will prefer to start a transfer with peer B over peer A. This helps to ensure that slow peers don't block the transfer queue.
The limit on the number of concurrent transfers is soft. For example, if there is a limit of 5 concurrent transfers and there are:
three active transfers
two stalled transfers
then two more transfers are permitted to start (as long as they are not with one of the stalled peers).
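The ordering rules above can be sketched as a sort key; this is an illustration of the described policy, not Boost's actual code:

```python
def prioritize(pending, active_hosts, stalled_hosts, limit, active_count):
    """Order pending transfers per the queueing policy.

    pending: list of (deal_age_rank, host) pairs, lower rank = older deal.
    """
    # Once the soft limit is reached, skip peers with stalled transfers.
    if active_count >= limit:
        pending = [p for p in pending if p[1] not in stalled_hosts]
    # Prefer peers with no ongoing transfer, then the oldest deal first.
    return sorted(pending, key=lambda p: (p[1] in active_hosts, p[0]))

# One active transfer with peer A, pending transfers with peers A and B:
queue = prioritize(
    [(0, "A"), (1, "B")],  # the deal with peer A is older
    active_hosts={"A"},
    stalled_hosts=set(),
    limit=5,
    active_count=1,
)
print(queue)  # peer B is preferred even though A's deal is older
```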
Boost configuration options with examples and descriptions.
The Dealmaking section handles deal-making configuration explicitly for Boost deals that use the new /fil/storage/mk/1.2.0 protocol.
Advertising 128-bit multihashes with the default EntriesCacheCapacity and EntriesChunkSize means the cache size can grow to 256 MiB when full.
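The arithmetic behind that figure (default capacity of 1024 cached entry chunks, 16384 multihashes per chunk, 16 bytes per 128-bit multihash) can be checked directly:

```python
# max cached chunks x multihashes per chunk x bytes per 128-bit multihash
cache_bytes = 1024 * 16384 * 16
print(cache_bytes / (1024 * 1024), "MiB")  # 256.0 MiB
```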
| Parameter | Example | Description |
|---|---|---|
| Price / epoch / GiB | 500000000 | Asking price for a deal, in attoFIL per epoch per GiB of data in the deal |
| Verified Price / epoch / GiB | 500000000 | Asking price for a verified deal, in attoFIL per epoch per GiB of data in the deal |
| Min Piece Size | 256 | Minimum size, in bytes, of a piece that the storage provider will accept |
| Max Piece Size | 34359738368 | Maximum size, in bytes, of a piece that the storage provider will accept |
| SealerApiInfo | "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZwdyIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.nbSvy11-tSUbXqo465hZqzTohGDfSdgh28C4irkmE10:/ip4/0.0.0.0/tcp/2345/http" | Miner API info passed during boost init. Requires admin permissions. Connect string for the miner/sealer instance API endpoint |
| SectorIndexApiInfo | "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZwdyIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.nbSvy11-tSUbXqo465hZqzTohGDfSdgh28C4irkmE10:/ip4/0.0.0.0/tcp/2345/http" | Sector index API info passed during boost init. Requires admin permissions. Connect string for the miner/sealer instance API endpoint |
| ListenAddress | "/ip4/127.0.0.1/tcp/1288/http" | Address the Boost API will be listening on (multiaddress format). No need to update unless you are planning to make API calls from outside the Boost node |
| RemoteListenAddress | "0.0.0.0:1288" | Address the Boost API can be reached at from outside. No need to update unless you are planning to make API calls from outside the Boost node |
| Timeout | "30s" | RPC timeout value |
| ListenAddresses | ["/ip4/209.94.92.3/tcp/24001"] | Binding addresses for the libp2p host (multiaddress format); port 0 means a random port |
| AnnounceAddresses | ["/ip4/209.94.92.3/tcp/24001"] | Addresses to explicitly announce to other peers (multiaddress format). If not specified, all interface addresses are announced. The on-chain address must be updated when this address is changed: lotus-miner actor set-addrs /ip4/<YOUR_PUBLIC_IP_ADDRESS>/tcp/24001 |
| NoAnnounceAddresses | ["/ip4/209.94.92.3/tcp/24001"] | Addresses not to announce (multiaddress format). Can be used if you want to announce addresses with exceptions |
| ConnMgrLow | 150 | Number of connections that the basic connection manager will trim down to. Too low a number can cause frequent connectivity issues |
| ConnMgrHigh | 200 | Number of connections that, when exceeded, will trigger a connection GC operation. Note: protected/recently formed connections don't count towards this limit. A high limit can cause very high resource utilisation |
| ConnMgrGrace | "20s" | Time duration during which new connections are immune from being closed by the connection manager |
| ParallelFetchLimit | 10 | Upper bound on how many sectors can be fetched in parallel by the storage system at a time |
| Miner | f032187 | Miner ID |
| PublishStorageDeals | f3syzhufifmnbzcznoquhy4mlxo3byetqlamzbeijk62bjpoohrj3wiphkgxe3yjrlh5dmxlca3zqxp3yvd33a (BLS wallet address) | Wallet to be used for PublishStorageDeals messages. This value is taken during init |
| DealCollateral | f3syzhufifmnbzcznoquhy4mlxo3byetqlamzbeijk62bjpoohrj3wiphkgxe3yjrlh5dmxlca3zqxp3yvd33a (BLS wallet address) | Wallet to be used for deal collateral. This value is taken during init |
| MaxPublishDealsFee | "0.05 FIL" | Maximum fee the user is willing to pay for a PublishStorageDeals message |
| MaxMarketBalanceAddFee | "0.007 FIL" | Maximum fee to pay when sending the AddBalance message (used by legacy markets) |
| RootDir | Empty | If a custom value is specified, the boost instance will refuse to start. This option will be deprecated and removed in the future |
| MaxConcurrentIndex | 5 | Maximum number of indexing jobs that can run simultaneously. 0 means unlimited |
| MaxConcurrentReadyFetches | 0 | Maximum number of unsealed deals that can be fetched simultaneously from the storage subsystem. 0 means unlimited |
| MaxConcurrentUnseals | 0 | Maximum number of unseals that can be processed simultaneously by the storage subsystem. 0 means unlimited |
| MaxConcurrencyStorageCalls | 100 | Maximum number of simultaneous in-flight API calls to the storage subsystem |
| GCInterval | "1m0s" | Time between calls to periodic dagstore GC, in time.Duration string representation, e.g. 1m, 5m, 1h |
| Enable | True/False | Enable or disable the index-provider subsystem |
| EntriesCacheCapacity | 5 | Maximum capacity to use for caching the indexing advertisement entries. Defaults to 1024 if not specified. The cache is evicted using an LRU policy. The maximum storage used by the cache is a factor of EntriesCacheCapacity, EntriesChunkSize, and the length of the multihashes being advertised |
| EntriesChunkSize | 0 | Maximum number of multihashes to include in a single entries chunk. Defaults to 16384 if not specified. Note that chunks are chained together for indexing advertisements that include more multihashes than the configured EntriesChunkSize |
| TopicName | "" | Topic name on which changes to the advertised content are announced. If not explicitly specified, the topic name is automatically inferred from the network name in the format '/indexer/ingest/<network-name>' |
| PurgeCacheOnStart | false | Whether to clear any cached entries chunks when the provider engine starts. By default, the cache is rehydrated from previously cached entries stored in the datastore, if any are present |