One thing that any remotely production-quality GC does is analyze the result of a collection with respect to a minimal headroom of X % (typically 30-50 %). If we freed only Y % of the heap, where Y < X, then the GC should extend the heap so that free space reaches the X % mark in the extended heap.

See the exchange in the NG discussion at
http://forum.dlang.org/thread/mailman.149.1442256696.22025.digitalmars-d@puremagic.com

On 14-Sep-2015 21:47, H. S. Teoh via Digitalmars-d wrote:
> Over in the d.learn forum, somebody posted a question about poor
> performance in a text-parsing program. After a bit of profiling I
> discovered that reducing GC collection frequency (i.e., GC.disable()
> then manually call GC.collect() at some interval) improved program
> performance by about 20%.
> ...

On Monday, 14 September 2015 at 18:58:45 UTC, Adam D. Ruppe wrote:
> Definitely. I think it hits a case where it is right at the edge of the
> line and you are allocating a small amount.
>
> So it is like the limit is 1,000 bytes. You are at 980 and ask it to
> allocate 30. So it runs a collection cycle, frees the 30 from the
> previous loop iteration, then allocates it again... so the whole loop,
> it is on the edge and runs very often.
>
> Of course, it has to scan everything to ensure it is safe to free those
> 30 bytes so the GC then runs way out of proportion.
>
> Maybe we can make the GC detect this somehow and bump up the size. I
> don't actually know the implementation that well though.
I thought our GC already did this, especially since the optimizations from this/last year. Martin?
THIS ISSUE HAS BEEN MOVED TO GITHUB: https://github.com/dlang/dmd/issues/17311