
Thread: BlockSize parameter

  1. #1

    BlockSize parameter


    I made some tests with IBinaryPersist. In one of your posts you wrote that STSdb sends batches of items to the Store() method, with the batch size defined by the BlockSize parameter, whenever the cache buffer overflows or a Flush (commit, snapshot) starts. But I found that it now always sends 256 items instead of 1024. If I change the default BlockSize parameter, it still sends 256...

    And another strange thing: if the cache buffer has not overflowed, a commit (which calls Flush) starts, and the number of inserted records is less than BlockSize (by default < 1024), then STSdb passes to Store() not 256 items as in all other cases, but 1024 (or fewer). In that case it stores, for example, 1024 records from the cache in a single call instead of four calls of 256.

    Could you explain this behavior?
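    To make the two cases concrete, here is a minimal sketch of the batching arithmetic being described. The helper `batch_counts` is hypothetical (it is not part of the STSdb API); it only models how a cache of N records would be split into Store() calls for a given effective block size:

    ```python
    def batch_counts(record_count, block_size):
        """Model how record_count cached items are split into Store() batches
        of at most block_size items each (illustration only, not STSdb code)."""
        batches = []
        remaining = record_count
        while remaining > 0:
            n = min(block_size, remaining)
            batches.append(n)
            remaining -= n
        return batches

    # Observed overflow behavior: 1024 cached records arrive as 4 batches of 256.
    print(batch_counts(1024, 256))   # [256, 256, 256, 256]

    # Observed commit behavior: the same 1024 records arrive in one batch.
    print(batch_counts(1024, 1024))  # [1024]
    ```

    Under this model, the reported inconsistency is that the overflow path behaves as if the block size were hard-coded to 256, while the commit path honors a block size of 1024.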


  2. #2


    Yes, the engine does not take the BlockSize property of the XTable instance into consideration. This is incorrect database behavior - it is related to the way the key is decomposed in the internal FTable. I suppose we will fix this issue in the next major release.

  3. #3


    And this incorrect behavior affects the quality of the delta compression, because the data packages are smaller.


2002 - 2014 STS Soft SC. All Rights reserved.
STSdb, Waterfall Tree and WTree are registered trademarks of STS Soft SC.