Given that the whole point of a BitArray is space efficiency, letting you pack more into less memory, it seems silly to me that BitArrays are indexed by size_t, arbitrarily preventing you from using more than 2^32 elements on 32-bit architectures. A 32-bit address space can hold 2^35 bits, so it's the index type, not memory, that is the binding constraint.
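To make the cap concrete, here's a minimal illustration using std.bitmanip.BitArray as it exists today; the only assumption is a 32-bit target, where size_t is 32 bits wide:

import std.bitmanip : BitArray;

void main()
{
    BitArray b;
    b.length = 1_000_000;   // length and all indices are size_t
    b[999_999] = true;

    // On a 32-bit target size_t is 32 bits, so b.length can never
    // exceed 2^32 - 1, i.e. roughly 512 MiB worth of bits, even
    // though the process can address 4 GiB (2^35 bits) of memory.
    assert(b[999_999]);
}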
2^32 bits takes up 536MB. Do you need a bit array bigger than that? Allowing larger arrays seems excessive, and it would make the compiled indexing code more complex. Perhaps the index type could be templated if you want a larger one.
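For what it's worth, a rough sketch of what that templated index could look like. This is purely hypothetical, not Phobos code (BigBitArray and everything in it is a made-up name); the storage words stay size_t, so only the element index widens:

// Hypothetical sketch: a bit array whose index type is a template parameter.
struct BigBitArray(Index = size_t)
{
    private enum bitsPerWord = 8 * size_t.sizeof;
    private size_t[] store;
    private Index len;

    @property void length(Index n)
    {
        len = n;
        // Number of words needed to hold n bits, rounded up.
        store.length = cast(size_t) ((n + bitsPerWord - 1) / bitsPerWord);
    }

    bool opIndex(Index i) const
    {
        immutable w = cast(size_t) (i / bitsPerWord);
        immutable b = cast(size_t) (i % bitsPerWord);
        return ((store[w] >> b) & 1) != 0;
    }

    void opIndexAssign(bool v, Index i)
    {
        immutable w = cast(size_t) (i / bitsPerWord);
        immutable b = cast(size_t) (i % bitsPerWord);
        if (v) store[w] |= (cast(size_t) 1) << b;
        else   store[w] &= ~((cast(size_t) 1) << b);
    }
}

// With Index = ulong a 32-bit build could, in principle, index past
// 2^32 bits; a 64-bit build just uses size_t as before.
alias HugeBits = BigBitArray!ulong;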
Yes, I'm within reach of needing this. I'm working with adjacency matrices for graphs with ~50,000 vertices; it wouldn't take much more to hit this limit.
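To put numbers on that: a dense adjacency matrix needs one bit per ordered vertex pair, so n vertices need n^2 bits, and the 2^32-bit ceiling falls at exactly 65,536 vertices, since 65,536^2 == 2^32. A quick check, assuming nothing beyond basic arithmetic:

import std.stdio : writefln;

void main()
{
    ulong n = 50_000;
    ulong bits = n * n;     // one bit per ordered vertex pair
    writefln("n = %s: %s bits (~%s MiB)", n, bits, bits / (8 * 1024 * 1024));
    // prints: n = 50000: 2500000000 bits (~298 MiB)

    // 65_536^2 == 2^32, so that's where a 32-bit size_t index gives out.
    assert(65_536UL * 65_536UL == 1UL << 32);
}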
Where do you get 536MB from? It's 512MB. A nice round figure in binary terms; it has to be. But you've got me wondering if there's a more efficient way of storing that kind of data...
Like hard drives, I assert that 1MB == 1,000,000 bytes :) 2^32 bits / 8 bits per byte = 536,870,912 bytes == 512 * 1024 * 1024, so both figures describe the same number of bytes.
Yeap, the correct term for 1024 * 1024 is MiB (mebibyte): http://en.wikipedia.org/wiki/Mebibyte :)
(In reply to Leandro Lucarella from comment #5)
> Yeap, the correct term for 1024 * 1024 is MiB (mebibyte):
> http://en.wikipedia.org/wiki/Mebibyte :)

Or Men in Black's. On a more serious note, are we going forward with this?
https://issues.dlang.org/show_bug.cgi?id=3568
This isn't going to happen, as per the discussion here:
https://github.com/D-Programming-Language/phobos/pull/1656#issuecomment-26995021
and here:
https://github.com/D-Programming-Language/phobos/pull/2249#commitcomment-6692674
and here:
https://issues.dlang.org/show_bug.cgi?id=11349