It would be great if STSdb provided an AutoCommit feature alongside a MaxMemoryUsagePercent property that the developer could specify during initialization.

Like this:

engine.AutoCommit = true;
engine.MaxMemoryUsagePercent = 90; // of all available memory

The engine itself would then force a commit once memory usage reaches the specified percentage of total system memory.
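Until such a feature exists, the behavior can be approximated in user code. The sketch below uses the standard STSdb4 calls (STSdb.FromFile, OpenXTable, Commit); the threshold check, the hard-coded total-memory constant, and the MemoryUsagePercent helper are my own assumptions for illustration, not part of the library:

```csharp
using System;
using System.Diagnostics;
using STSdb4.Database; // STSdb4 NuGet package

class AutoCommitSketch
{
    // Assumed for the sketch; in real code, query the host for total
    // physical memory instead of hard-coding it.
    const long TotalPhysicalMemory = 8L * 1024 * 1024 * 1024; // 8 GB
    const int MaxMemoryUsagePercent = 90;                     // proposed setting

    static void Main()
    {
        using (IStorageEngine engine = STSdb.FromFile("data.stsdb4"))
        {
            ITable<long, string> table = engine.OpenXTable<long, string>("records");

            for (long i = 0; i < 100_000_000; i++)
            {
                table[i] = "value " + i;

                // Emulated AutoCommit: flush to disk before memory runs out.
                // Check periodically rather than on every insert.
                if (i % 100_000 == 0 && MemoryUsagePercent() >= MaxMemoryUsagePercent)
                    engine.Commit();
            }

            engine.Commit();
        }
    }

    // Rough estimate of process memory pressure via the working set.
    static double MemoryUsagePercent()
    {
        long workingSet = Process.GetCurrentProcess().WorkingSet64;
        return 100.0 * workingSet / TotalPhysicalMemory;
    }
}
```

A built-in version could apply the same check inside the engine's write path, which is why exposing it as two engine properties would be convenient.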


This would be quite useful in scenarios where we have a large data set and a limited amount of memory, and cannot predict memory usage precisely enough to fine-tune the STSdb Waterfall tree memory options.

I faced exactly this issue: I couldn't commit my in-memory data to STSdb on disk. The Commit just got stuck ...

I left the program running in the hope that it would eventually commit the data, but even after several hours nothing happened beyond an endless jagged memory-usage chart ...

The scenario occurs when only a little memory is available. In that state, when a Commit starts, memory usage climbs to the maximum; after committing some of the data (in my case 2.3 GB was written to disk), a large amount of memory is suddenly freed, and the process stays stuck at that stage for hours. I even tried copying the resulting (still-in-use) file and opening it, but got an exception along the lines of "STSdb corrupt header".