
Thread: What impacts read/lookup speed?

  1. #1
    Junior Member
    Join Date
    Feb 2012
    Posts
    15

    Default What impacts read/lookup speed?

    I have an XTABLE<Int64,Decimal> with 68K rows against which I need to do 3.4 million random lookups by key. I had to kill the app after waiting 10 minutes.

    If I load the table into a Dictionary, the lookups from the Dictionary take 8 seconds.

    I've tried:

    table[key]
    table.Find(key)
    table.Forward(keybytes, keybytes).First()

    All slow.


    What do I need to have in place for optimal random read speed?
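
    (For reference, a rough sketch of the Dictionary workaround mentioned above. It assumes the XTable can be enumerated into key/value records; the record property names, lookupKeys and Process() are placeholders for illustration, not the actual STSdb API.)

        using System.Collections.Generic;

        // Rough sketch: copy the table into a Dictionary once, then do the
        // random lookups against the Dictionary. "record.Key"/"record.Value",
        // lookupKeys and Process() are placeholder names.
        var cache = new Dictionary<long, decimal>(68000);
        foreach (var record in table)            // one sequential pass over the table
            cache[record.Key] = record.Value;

        foreach (long key in lookupKeys)         // the 3.4 million random keys
        {
            decimal value;
            if (cache.TryGetValue(key, out value))
                Process(key, value);             // placeholder for the real work
        }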

  2. #2

    Default

    It is normal for this to be slow. Random reads mean seeks, which are expensive.

    No matter the cache size.

    Unless you increase the cache to keep 100% of the table rows in memory:
        table.CacheCapacity = (uint)(table.Count / 256); //256 instead of table.BlockCapacity
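
    (A hedged usage sketch of this suggestion, combining the formula above with the indexer lookup from the first post; "keys" stands in for the 3.4 million lookup keys and is not part of the STSdb API.)

        // Sketch: size the cache so that all blocks stay in memory, then run
        // the random reads. 256 is the actual internal block size mentioned
        // later in this thread.
        table.CacheCapacity = (uint)(table.Count / 256);

        foreach (long key in keys)               // keys = the 3.4 million lookup keys
        {
            decimal value = table[key];          // indexer lookup, as in post #1
            // ... use value ...
        }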
    

  3. #3
    Junior Member
    Join Date
    Feb 2012
    Posts
    15

    Default

    Quote Originally Posted by a.todorov
    It is normal for this to be slow. Random reads mean seeks, which are expensive.

    No matter the cache size.

    Unless you increase the cache to keep 100% of the table rows in memory:
        table.CacheCapacity = (uint)(table.Count / 256); //256 instead of table.BlockCapacity
    

    Thanks, I will try this. I set CacheCapacity to 1 for faster inserts; for reads I was setting it to 50. I will increase it to the value from the formula you suggested.


    Another question: does the XTABLE automatically adjust its block size and block capacity depending on the total size in bytes of the key/value elements?

  4. #4

    Default

    In theory, the XTable does not change the BlockCapacity value - it is set once when the table is created (depending on the key type).

    But in the current release the actual internal block size is always 256.

    See also: http://stssoft.com/forum/threads/118...ssion-relation and http://stssoft.com/forum/threads/108...Size-parameter
    Last edited by p.petkov; 04.12.2014 at 16:00.

  5. #5

    Default

    That's really funny... why change CacheCapacity etc., if there is a way to load all the data into a Dictionary and search it there in 8 seconds?

    If the table is too big to fit into a Dictionary and you really need to make so many lookups in a short time period, try collecting your lookup requests (keys) in chunks of 100,000 (34 chunks), sort each chunk in memory, and then look up the database by the sorted chunk list. I am not sure how it will work in STSdb, but theoretically (depending on the implementation) it can work much faster than searching random keys.
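
    (A rough illustration of the chunked, sorted-lookup idea under those assumptions; "allKeys" (the full list of lookup keys) and Process() are placeholders, and table[key] is the indexer lookup from the first post.)

        using System.Collections.Generic;
        using System.Linq;

        // Sketch: look the keys up in sorted chunks, so consecutive reads tend
        // to hit the same already-cached blocks instead of seeking randomly.
        const int chunkSize = 100000;                    // 34 chunks for 3.4 M keys

        for (int offset = 0; offset < allKeys.Count; offset += chunkSize)
        {
            List<long> chunk = allKeys.Skip(offset).Take(chunkSize).ToList();
            chunk.Sort();                                // sort the chunk in memory

            foreach (long key in chunk)
                Process(key, table[key]);                // near-sequential access pattern
        }

    Sorting each chunk makes successive lookups land on neighbouring keys, so block cache hits become more likely even with a modest CacheCapacity.
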
    Last edited by blaze; 19.05.2012 at 01:23.
