With dmd 2.048 this program gives a runtime error (on Windows):

int foo(int x, int y) { return x % y; }
void main() {
    int r = foo(int.min, -1);
}

But the same operation done at compile time gives no error, and foo returns 0:

int foo(int x, int y) { return x % y; }
static assert(foo(int.min, -1) == 0);
void main() {}

So one of the two cases is wrong (or both). While floating-point operations done at compile time may give slightly different results, I'd like integral operations to give the same results at compile time and at run time.
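For illustration only (not part of the original report): the run-time failure comes from the hardware, since on x86 the remainder is produced by the same idiv instruction as division and int.min / -1 overflows, so the CPU traps. A minimal sketch of a guard for that single operand pair, using a hypothetical helper named safeMod, would look like this; returning 0 matches the CTFE result quoted above.

// Hypothetical sketch: avoid the idiv overflow trap for (int.min, -1).
// int.min % -1 is mathematically 0, which is also what CTFE returns.
int safeMod(int x, int y)
{
    if (x == int.min && y == -1)
        return 0;       // the one operand pair that traps at run time
    return x % y;       // ordinary remainder otherwise
}

void main()
{
    assert(safeMod(int.min, -1) == 0);
    assert(safeMod(7, 3) == 1);
}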
https://github.com/D-Programming-Language/dmd/commit/7ef3b2bb9e740df39108957ae5e3b2aa8253d351
https://github.com/D-Programming-Language/dmd/pull/284
Commit pushed to master at https://github.com/dlang/dmd
https://github.com/dlang/dmd/commit/a4cedfa2a50fdf899991e0b67a7acdbb2d872e88
add test case for Issue 4682

Commit pushed to newCTFE at https://github.com/dlang/dmd
https://github.com/dlang/dmd/commit/a4cedfa2a50fdf899991e0b67a7acdbb2d872e88
add test case for Issue 4682