
Thread: File-based storage engine not releasing memory on commit

  1. #1
    Junior Member
    Join Date
    Feb 2012
    Posts
    15

    Default File-based storage engine not releasing memory on commit

    1. Create a storage engine from a file
    2. Create 100 XTables and put them into a List
    3. Loop 1,000,000 times, inserting a value into each XTable, with a commit every 20K

    When I monitor the memory, I can see it grow continually until I get an OOM error.
    What is being kept in memory?

    My insert code is:

    Int64 dimKey = val.GetHashCode(); // derive the key from the value
    t_dimbykey[dimKey] = val;         // insert the record into the table
    

    and every 20K rows I call:

    t_dimbykey.Commit(); // commit the pending inserts
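
    Putting it together, the whole loop looks roughly like this (engine and table creation are omitted; the payload and variable names are illustrative):

        const int CommitInterval = 20000;

        for (int i = 1; i <= 1000000; i++)
        {
            string val = "row " + i;           // illustrative payload
            Int64 dimKey = val.GetHashCode();  // key derived from the value

            foreach (var table in tables)      // the 100 XTables from step 2
                table[dimKey] = val;

            if (i % CommitInterval == 0)
                foreach (var table in tables)
                    table.Commit();            // commit every 20K rows
        }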
    
    When I comment out the insert, the memory does not grow and I don't get the OOM.

    Do I need to do some special type of flush when I commit?


    EDIT:

    I have confirmed this using the TICK sample application that ships with the sources. If you insert 100,000,000 rows
    and add a commit every 20,000 rows, you see memory usage grow continually until the application dies.
    Last edited by omasri; 28.04.2012 at 07:52.

  2. #2

    Default

    In STSdb R3.5.x each XTable instance has its own cache.

    Try decreasing the CacheCapacity value for each of your tables. The default value is 8192, which is high for that many tables. I guess that about 32 would be sufficient in your case.
    table.CacheCapacity = 32;
    
    XTable keeps all of its records grouped in blocks; records with similar keys are kept in blocks close to each other. CacheCapacity specifies the number of blocks the XTable caches in memory, and each block contains BlockSize records. (That way the total number of cached records can reach up to CacheCapacity x BlockSize.)
    For details, see http://stssoft.com/forum/threads/118-CacheCapacity-and-data-compression-relation.
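
    Applied to your list of 100 tables, the fix is a one-line loop (the list name is illustrative):

        foreach (var table in tables)
            table.CacheCapacity = 32;  // cap each table's block cache

    With 100 tables this drops the worst case from 100 x 8192 cached blocks to 100 x 32.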
    Last edited by p.petkov; 04.12.2014 at 15:50.

  3. #3
    Junior Member
    Join Date
    Feb 2012
    Posts
    15

    Default

    Thanks, that solved the problem. I guess having that large a cache for 100 tables is not a good idea. I was assuming it was the transaction cache, when in reality the tables' caches were filling up.

    Is there a call to clear a table's cache? Not delete the real data, just clear the cache.

    Would setting the cache to 0 and then back to its original setting do the trick?

    The reason I bring this up is that after a big data load the caches take up a large amount of memory, and I want to relinquish it back to the app.
    Last edited by omasri; 02.05.2012 at 11:39.

  4. #4

    Default

    XTable has a Flush() method, but it does not clear the cache.

    Yes, you can set CacheCapacity to a very low value. But do not set it to zero: it will internally be reset to the default value of 8192. Instead, set it to 1 (or better, 4 to 8, depending on the internal tree depth).
        table.CacheCapacity = 1;
    
    This will force the XTable cache to clear almost all of its data.
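
    For example (the values and the ordering are illustrative; pick a capacity that fits your workload):

        table.Commit();            // persist any pending changes first
        table.CacheCapacity = 1;   // evicts almost all cached blocks (0 would reset it to 8192)
        table.CacheCapacity = 32;  // restore a working capacity afterwards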

  5. #5
    Junior Member
    Join Date
    Sep 2013
    Posts
    11

    Default

    How do I set CacheCapacity if I am using OpenIIndex<T, T>?

  6. #6

    Default

    Is your question related to STSdb 3.5 or STSdb 4.0?
