I compiled the following program with dmd 2.052 on an Ubuntu 10.10 console. It reads only the first code unit instead of the whole character:

import std.stdio;

void main()
{
    wchar c;    // Please note: same problem with dchar as well
    readf(" %s", &c);
    writeln(c);
}

For example, when the input is the character ö (encoded as the byte values 195 182 in UTF-8), only the first code unit is read, and the output becomes the Unicode character that corresponds to the value of that code unit. In effect, the program reads a code unit and outputs it as a code point.

Thank you,
Ali
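The underlying confusion can be illustrated outside of D as well. This is a small Python sketch (chosen only for illustration; the bug itself is in D's readf) showing that ö is a single code point but two UTF-8 code units, and that treating the first code unit as a code point yields the wrong character:

```python
# 'ö' is one code point (U+00F6) but two UTF-8 code units.
units = "ö".encode("utf-8")
print(list(units))            # [195, 182], the byte values mentioned above

# Treating only the first code unit as a code point gives a
# different character entirely, which is what the buggy readf did.
wrong = chr(units[0])
print(wrong)                  # 'Ã' (U+00C3), not 'ö'

# Decoding both code units together recovers the original character.
print(units.decode("utf-8"))  # 'ö'
```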
This is marked as 'regression'. What previous version did it work with?
The "regression" label turns out to be my mistake. I went back more than a dozen dmd versions and see that std.stdio.readf (that is, File.readf) is fairly new. I've been using std.cstream.din, which used to work better than std.stdio.readf. Since I assumed they used the same underlying format functions, I thought this was a regression.
Works now in 2.069; tested with ö and é as well.