
Thread: STSdb.MTL "DataBreezing"

  1. #1

    STSdb.MTL "DataBreezing"

    I want to solve the problem of wasted space in the case of "small transactions" and random key inserts for version 3.5, using STSdb.MTL (STSdb Multi-Threading Layer, C#).

    And it seems to be working.

    Mr. Todorov advises logically splitting the data between tables and compacting it. I don't have any reason not to do this. Probably, every data search tree one day becomes suboptimal, either in terms of data size, search speed, or insert speed.

    I looked at STSdb and found that the algorithm for copying data from one table to another - so-called compacting - is very fast, and the new table occupies much, much less space.

    I had the idea of doing this compacting online whenever the wasted space (raw file length - table.Size) exceeds 20% of the real data size (table.Size). With STSdb.MTL, threads work with the data via the MTL layer, which controls parallel access to the tables.
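
    As a small illustration of that threshold check (a sketch only; the parameter names are assumptions, not the MTL API):

    Code:
    // Sketch of the 20% trigger described above; names are illustrative.
    public static bool NeedsCompaction(long rawFileLength, long tableSize,
                                       double wasteThreshold = 0.20)
    {
        // Wasted space = raw file length minus the useful table data (table.Size).
        long wastedSpace = rawFileLength - tableSize;

        // Compact when the wasted space exceeds 20% of the real data size.
        return wastedSpace > wasteThreshold * tableSize;
    }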

    In MTL there are two modes of working with the data: the first is READ, and the second combines WRITE and READ_FOR_FUTURE_WRITE (READ_SYNCHRO) - thanks to the parallel open locators. Threads that want to read data from a table get it in parallel, quickly, via locator 1; threads that want to WRITE to the same table use locator 2 and stand in a queue, performing their writes one after another. Many technical problems, such as deadlocks, are solved in there.
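
    A minimal conceptual sketch of this two-locator pattern (invented types, not the real MTL classes):

    Code:
    // Conceptual sketch only - not the real STSdb.MTL types. It illustrates the access
    // pattern: readers use "locator 1" in parallel, writers queue on "locator 2".
    public interface ITableLocator            // hypothetical minimal locator interface
    {
        byte[] Get(long key);
        void Set(long key, byte[] value);
        void Commit();
    }

    public class MtlTableSketch
    {
        private readonly object writerQueue = new object();  // serializes WRITE threads
        private readonly ITableLocator readLocator;           // "locator 1": parallel reads of committed data
        private readonly ITableLocator writeLocator;          // "locator 2": writes, one after another

        public MtlTableSketch(ITableLocator readLocator, ITableLocator writeLocator)
        {
            this.readLocator = readLocator;
            this.writeLocator = writeLocator;
        }

        // READ mode: many threads read committed data in parallel and never block each other.
        public byte[] Read(long key)
        {
            return readLocator.Get(key);
        }

        // WRITE / READ_SYNCHRO mode: threads stand in the queue and write one after another.
        public void Write(long key, byte[] value)
        {
            lock (writerQueue)
            {
                writeLocator.Set(key, value);
                writeLocator.Commit();
            }
        }
    }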

    The online compactor sits in the same writing queue and decides when the time to compact the data has come (based on the 20% algorithm). Reading threads keep reading committed data in parallel without any problems and are happy. When the compactor's turn comes, it creates a completely new table with a @sys_eth_datetimeticks/userTableName stamp and copies all the data there. During this time WRITE threads wait for the compactor to finish its job, but reading threads still continue reading from the old table. After the copying is done, the other WRITE threads receive access to locator 2 (which is already set up on the new table) and can save data.
    All newly created READ threads get locator 1 configured on the new table.
    The compactor then waits until the threads that were opened in READ mode before the compactor started finish their jobs, because they still hold old locator references. Once no more threads hold a reference to the old table, the old table is deleted from the VirtualFileSystem and can be garbage collected.
    I must point out that programmers keep using their own table names; the MTL handles the switch between the programmer's table name and its compacted name.
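
    For readers of the code, here is a rough, self-contained outline of this switch-over (all names are invented and a plain Dictionary stands in for the XTable; this is NOT the actual STSdb.MTL implementation):

    Code:
    using System;
    using System.Collections.Generic;
    using System.Threading;

    public class OnlineCompactorSketch
    {
        private readonly object writerQueue = new object();   // the same queue WRITE threads use
        private Dictionary<long, byte[]> currentTable = new Dictionary<long, byte[]>();
        private int oldTableReaders;                           // readers still holding the old locator

        public string InternalName { get; private set; }       // current physical table name

        public void CompactOnline(string userTableName)
        {
            Dictionary<long, byte[]> oldTable = currentTable;

            lock (writerQueue)   // WRITE threads wait here until compaction finishes
            {
                // 1. New physical table with a timestamped internal name; the programmer
                //    keeps using userTableName, MTL maps it to the internal name.
                string internalName = "@sys_" + DateTime.UtcNow.Ticks + "/" + userTableName;
                var newTable = new Dictionary<long, byte[]>();

                // 2. Copy the committed data; readers keep reading the OLD table meanwhile.
                foreach (var record in oldTable)
                    newTable[record.Key] = record.Value;

                // 3. Re-point locator 2 (writers) and locator 1 for new readers to the new table.
                currentTable = newTable;
                InternalName = internalName;
            }

            // 4. Wait for readers opened before compaction started that still hold the old
            //    locator; then the old table can be deleted from the VFS and garbage collected.
            while (Thread.VolatileRead(ref oldTableReaders) > 0)
                Thread.Sleep(1);
            oldTable.Clear();
        }
    }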

    One step left is journaling during compaction, so that if something goes wrong the state can be restored - I don't see any technical problems there.
    For now you can download revision 12083 from CodePlex (STSdb Multi-Threading Layer, C#), find the testDataBreezing() procedure there, test it yourself, and see how the following tests were done.

    Some statistics.

    The task is the following: I want to emulate saving a history of whatever into one table, with a key of type long and a value of type byte[150]. The value is generated with Random.NextByte in the test.

    Let it be stock quotes, popular here. I want to save one tick every minute for 3 years. Every insert must be Commit()-ed - a typical small transaction. Every key is the DateTime.Ticks value of the corresponding minute during the next 3 years, always growing.

    I made 2 nested loops: the outer loop is a cycle and corresponds to one day; the inner loop performs 1440 inserts and, of course, 1440 commits, corresponding to the number of minutes in a day.

    Statistics are collected per cycle, because I want to call the compacting procedure every day, i.e. every cycle. Of course, compaction will not actually run in every cycle - it all depends on the 20% algorithm (I also tested 5%, 40%, etc.; they don't differ very much. In my opinion 20-40% is quite acceptable: below 20% you get very frequent, senseless compactions and only an insignificantly smaller final file size).
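
    To make the setup concrete, here is a sketch of that loop (ITickTable is a hypothetical stand-in for the real STSdb.MTL table; the actual test is testDataBreezing() in the linked revision):

    Code:
    using System;

    // Hypothetical minimal table interface, only for this sketch.
    public interface ITickTable
    {
        void Insert(long key, byte[] value);
        void Commit();
        long Size { get; }            // useful data size
        long RawFileLength { get; }   // raw file the table resides in
        void Compact();
    }

    public static class BreezingTestSketch
    {
        public static void Run(ITickTable table, int cycles = 1200)
        {
            var random = new Random();
            DateTime minute = DateTime.UtcNow;

            for (int cycle = 0; cycle < cycles; cycle++)       // one cycle = one "day"
            {
                for (int i = 0; i < 1440; i++)                 // 1440 minutes in a day
                {
                    var value = new byte[150];
                    random.NextBytes(value);

                    table.Insert(minute.Ticks, value);         // keys always grow
                    table.Commit();                            // a typical small transaction

                    minute = minute.AddMinutes(1);
                }

                // Compaction check once per cycle: only when wasted space exceeds 20%.
                long wasted = table.RawFileLength - table.Size;
                if (wasted > 0.20 * table.Size)
                    table.Compact();
            }
        }
    }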

    I start with an empty database, so we can correlate the raw file, the DB file, and the table inside the raw file. SECTOR SIZE is 64 KB.

    First, the figures without compaction.

    After 579 cycles:

    835.171 - records count
    1.815.021.382 bytes - DB File Length.
    1.813.840.710 bytes - Raw File Length (where the db table resides)
    140.684.606 bytes - Table Size (useful data)
    1.673.156.104 bytes - Wasted Space (RFL - TS)
    ---------------------------------------------------
    Which means 2.173 bytes per record.

    That's it.


    "DataBreezing" is activated.

    The full filling story can be found here:
    https://docs.google.com/spreadsheet/pub?hl=ru&hl=ru&key=0Aj5BMo74po8SdGszQVJhN2M0UGl1SnRiTmJnOV9UVEE&output=html

    There is also a chart showing how long the compacting itself took (i.e. how long WRITE threads were blocked) and how often it happened.

    And the final result:


    After 1200 cycles:

    1.729.440 - records count
    592.249.856 bytes - DB File Length.
    284.459.096 bytes - Raw File Length (where the db table resides)
    (approximately 2 times less than the full DB size)
    307.790.760 bytes - FREE SPACE, which will be reused by this table or by other tables thanks to the STSdb VFS GC! Or, in a year or so, you can simply copy everything to a new DB and release the free space.
    268.558.056 bytes - Table Size (useful data)
    NO Wasted Space

    ---------------------------------------------------
    Which means:
    DBFile / RecordsCount = 342 bytes per record
    (though this is not quite fair, because we have FREE space)
    and TableSize / RecordsCount = 155 bytes per record

    By the end of the test the compacting procedure was called every 27th cycle and took 26 seconds.
    In the beginning it was called practically every cycle and took about 100 ms. In both cases the trend follows a logarithmic approximation. Check the charts at the link above.

    With "DataBreezing" database size permanently equals (pure data size * 2). Half is a free space, but how it will be used. Slower growing tables can reside free space, left after compaction of the quicker growing tables. Tables where bulk inserts are used also can reside free space. Obviously, that this free space will be used by the same table for the wasted data collecting till the next "exhale" - compacting procedure.


    One day, depending on the table's behaviour, compacting one table may take too much time (READs will still be available, but WRITEs will have to wait an unacceptable time), and the data will have to be split, logically and physically, between several tables. For them, compacting will work beautifully again. If you had an EUR/USD table, it may later migrate to 2012/EUR/USD, 2013/EUR/USD, ... - year plus data. So, if there is a function in the business logic layer of your application that selects from the table EUR/USD, it may be a good idea not to call selects directly on the DB, but to wrap them in a GetDataTicks(from, to, pairType) function, etc.; then later only GetDataTicks has to be rewritten to work with 2012/EUR/USD and 2013/EUR/USD instead of EUR/USD. Data can also be distributed between several databases and computers, so an extra data access layer under the business logic layer is always a good idea.
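
    As a purely hypothetical illustration of such a wrapper (all names invented), routing a single logical query across per-year tables could look like this:

    Code:
    using System;
    using System.Collections.Generic;

    public static class TickStore
    {
        // The business logic calls GetDataTicks(...) instead of selecting from "EUR/USD"
        // directly, so a later split into per-year tables only changes this one method.
        public static IEnumerable<KeyValuePair<long, byte[]>> GetDataTicks(
            DateTime from, DateTime to, string pairType)
        {
            for (int year = from.Year; year <= to.Year; year++)
            {
                string tableName = year + "/" + pairType;   // e.g. "2012/EUR/USD"
                foreach (var tick in ReadRange(tableName, from.Ticks, to.Ticks))
                    yield return tick;
            }
        }

        // Stand-in for the actual ranged table read (STSdb.MTL in the real application).
        private static IEnumerable<KeyValuePair<long, byte[]>> ReadRange(
            string tableName, long fromTicks, long toTicks)
        {
            yield break; // placeholder
        }
    }
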
    Last edited by blaze; 05.02.2012 at 21:08.

  2. #2


    Another experiment.

    I create one table with one record in it. The key is always equal to 4565654654, and the value is Random.NextByte(150). I update this value and commit after every update. So the table holds only one key in total.

    I use cycles. Every cycle contains 1440 updates and commits.

    Compaction is disabled.

    120 cycles (172.800 updates of the record) result in:

    1 - records count
    60.335.713 bytes - DB File Length.
    59.417.185 bytes - Raw File Length (where the db table resides)
    767 bytes - Table Size (useful data)
    59.416.418 bytes - Wasted Space (RFL - TS)


    "DataBreezing is active"

    Compacting is invoked after every cycle.

    7113 cycles (10.242.720 updates of the record) result in:

    1 - records count
    12.864.334 bytes - DB File Length.
    492.726 bytes - Raw File Length (where the db table resides) before compaction
    1.678 bytes - Raw File Length (where the db table resides) after compaction
    767 bytes - Table Size (useful data)
    491.959 bytes - Wasted Space (RFL - TS) before the per-cycle compaction
    911 bytes - Wasted Space after compaction

    (After 120 cycles DB File size was 1.653.078 bytes)

    Every compact procedure grows the final DB file by 1144 bytes, and once per ~50 cycles it adds 65000 bytes (a sector is allocated). This growth brought us to 12 MB after 10 million updates (and 7113 compactions); nevertheless, it is possible to live with this for such table behaviour.

    The compacting time ranged from 10 to 30 ms.

    Charts: https://docs.google.com/spreadsheet/pub?key=0Aj5BMo74po8SdFdfSWx5bERtMFU2a1lCSzR2QzNZc1E&output=html


    The code can be checked here; find the testDataBreezing_UpdatingOneValue() function.

    http://stsdbmtl.codeplex.com/SourceControl/changeset/changes/12113


    Here it is not so clear to me what happens in the VFS.
    Table 1 is filled -> copied to table 2 -> table 2 is filled -> copied to table 1... etc.
    In the VFS the file handles alternate 4 -> 5 -> 4 -> 5... etc.
    The raw file sizes are always equal, the data length is always equal, the wasted space is always equal.
    And the same VFS file handles are used.
    So why does the DB file grow by 1144 bytes on every compact?

    And it is clear that when this growth exceeds the sector size, a new sector is allocated.
    If the VFS needs this space to register the newly created table, then 2 tables should be reserved for re-copying the data between them. They should not be deleted from the VFS; instead, the newly created XTable should be located on top of the old XTable (the old raw file). I will look into how this could be done. It would be better even for the first test: newly created tables should not occupy the space of other "breathing" tables (two reserved file handles), and it is probably also good for the VFS GC. Also, the schema itself takes some bytes to register a new record; the schema is also an STSdb table (836 bytes in the simple test).
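
    A tiny sketch of how this "two reserved tables" idea might look (purely hypothetical, not existing STSdb.MTL code):

    Code:
    using System;

    // The data ping-pongs between two fixed internal tables, so the VFS never has to
    // register a brand new table on each compaction (and the schema table does not grow).
    public class PingPongCompactorSketch
    {
        private readonly string[] internalNames;   // the two reserved physical tables
        private int activeIndex;                    // which of the two currently holds the data

        public PingPongCompactorSketch(string userTableName)
        {
            internalNames = new[]
            {
                "@sys_a/" + userTableName,
                "@sys_b/" + userTableName
            };
        }

        public string ActiveTable  { get { return internalNames[activeIndex]; } }
        public string StandbyTable { get { return internalNames[1 - activeIndex]; } }

        // Copy ActiveTable -> StandbyTable, then flip; the old active table is overwritten
        // in place on the next compaction instead of being deleted from the VFS.
        public void Compact(Action<string, string> copyAll)
        {
            copyAll(ActiveTable, StandbyTable);
            activeIndex = 1 - activeIndex;
        }
    }
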
    Last edited by blaze; 05.02.2012 at 23:43.

  3. #3


    For those who like to test: STSdb.MTL now has an integrated automatic compactor for the user-created tables.
    STSdb Multi-Threading Layer (C#)
    It is possible to run many compacting threads, but in this release there is only one. The compactor automatically starts when more than 1000 modifications have been made to a table (also if the table was cleared). The compactor checks whether the table needs compaction by the formula: wasted space exceeds 20%.
    The compactor can be disabled via a static variable in the compactor class, and the minimum modification count can also be adjusted.
    It is possible that this automatic table compaction logic will also be useful for the next STSdb releases.
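
    In pseudo-code, the compactor's decision looks roughly like this (the member names here are assumptions, not the actual compactor class):

    Code:
    // Illustrative policy check combining the two thresholds described above.
    public static class AutoCompactorPolicy
    {
        public static bool Enabled = true;            // static switch to turn the compactor off
        public static int MinModifications = 1000;    // adjustable minimum modification count

        public static bool ShouldCompact(int modificationsSinceLastCompact,
                                         long rawFileLength, long tableSize)
        {
            if (!Enabled || modificationsSinceLastCompact < MinModifications)
                return false;

            // Compact only when the wasted space exceeds 20% of the useful data.
            long wasted = rawFileLength - tableSize;
            return wasted > 0.20 * tableSize;
        }
    }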

    Stat:
    I started 3 threads which made inserts into 3 different tables; every insert was finished with a Commit statement. The tables had keys of type long (increased by one on every iteration) and values of type byte[150], fully and randomly generated before every insert.

    After 260000 inserts into every table, the ratio TotalDBFileSize / (260000 * 3) gave me about 300 bytes per record. Note that these were not bulk inserts.

    The compactor was automatically compacting the data in these 3 tables after every 1000 inserts into any of them.

  4. #4


    Hello,

    MTL and the custom data compactor are interesting features.

    Just a few comments from me:

    1. There are benchmarks for STSdb in Benchmark | W-tree vs. existing technologies
    I would like to see additional columns - STSdb.MTL and STSdb.MTL with compactor - to clearly see what the performance penalty is (data size and speed). Only then is it possible to see the clear picture.

    2. The second question is: why do we need this, if Todorov said that the STSdb W4 features will include:
    * solved wasted space problem;
    * multi-threading layer.


    Saulius
    Last edited by Administrator; 23.08.2013 at 15:46.

  5. #5


    Quote Originally Posted by saulius_net
    ...
    Saulius
    I don't know what the multi-threading layer will look like in the "alpha in the spring" version. But I need to write multi-threaded applications already now that use the good features of the VFS and the transactional key-search tree; that's why MTL exists, as the highest communication level between the DAL and the storage system. Later, maybe, I will only need to rewrite the part that is under MTL, but not the DAL and business logic elements.

    About benchmarks - hard to say. Bulk inserts work at the same speed as standard STSdb; compacting was interesting from the "small transactions" or "random key inserts" point of view, and right now, not in 3-4 months, because practically all data-gathering systems are based on a huge quantity of small or VERY small transactions.

    We know that the current STSdb is not designed to work with small transactions at all; the size of the DB grows extremely fast in that case.

    Publishing benchmarks on their website is not within my privileges - I am not part of the STS team. Furthermore, the code is not well tested and cannot be used in production so far.

    If the STS team thinks that I can help them with database development - please let me know.
    Last edited by blaze; 10.02.2012 at 17:53.

  6. #6


    Some major bugs inside the compactor have been fixed. The compactor now takes the KeyMap and Record Persist into consideration. The minimum modification count is set to 50, which keeps the DB size quite small. Memory usage stays at a stable level, etc.

    STSdb Multi-Threading Layer (C#)

    I am still not sure whether it works stably; many tests are OK on different machines, but who knows...
    Last edited by blaze; 10.02.2012 at 22:49.

  7. #7


    I was about to ask whether, when you commit through the second locator, you reopen the first one - but then I looked at the source and saw the TableLocators class.

  8. #8


    I think that with the automatic compacting mechanism this Multi-Threading Layer, combined with table split logic, has become a pretty nice option for those who suffer from the wasted space problem in the R3.5 version.

    I have not looked at the code in detail, but the concept is a good workaround for some of our by-design limitations. (In the new version these limitations are solved from the ground up.)

    Thank you very much, blaze! We really appreciate your work!
    Last edited by k.dimova; 13.06.2013 at 16:13.

  9. #9


    I don't know what stage STSdb W4 is at, but I would appreciate it if STSdb.MTL could be integrated into it. It works at a high level, using only the locators concept, and normally does not touch the STSdb logic itself. The approach to deadlock resolution and the multithreading itself are solved at this level.

    MTL, as a layer between the programmer and the storage, makes it possible to log modification transactions into separate files (which is a step towards an incremental backup solution that is separate from the DB file itself).

    So, the rest is up to you.
    Last edited by k.dimova; 13.06.2013 at 16:13.

  10. #10


    Quote Originally Posted by a.todorov
    In the new version these limitations are solved from the ground.
    It would be SO BEAUTIFUL if you opened a read-only stsdb4 branch in your SVN. I could help test the tree and other features, and could also continue developing MTL (in any case it goes under the stsdb.com "commercial" closed-source license) as a point of view on data storage based on the approaches offered by the VFS and the STSdb core.
    Last edited by blaze; 11.02.2012 at 19:49.
