Hacker News

> Now that C2x plans to make two's complement the only sign representation, is there any reason why signed overflow has to continue being undefined behavior?

I presume you'd want signed overflow to have the usual 2's-complement wraparound behavior.

One problem with that is that a compiler (probably) couldn't warn about overflows that are actually errors.

For example:

    int n = INT_MAX;
    /* ... */
    n++;
With integer overflow having undefined behavior, if the compiler can determine that the value of n is INT_MAX it can warn about the overflow. If it were defined to yield INT_MIN, then the compiler would have to assume that the wraparound was what the programmer intended.
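Because there is no well-defined way in standard C to test for signed overflow after it has happened, detection has to happen before the addition. GCC and Clang offer the non-standard `__builtin_add_overflow` extension for exactly this; a minimal sketch (the wrapper name `add_int` is mine, not from the thread):

```c
#include <limits.h>
#include <stdbool.h>

/* Returns true and stores the sum in *out if a + b fits in an int;
   returns false on overflow. Relies on the GCC/Clang extension
   __builtin_add_overflow, which reports whether the mathematical
   result was representable. */
static bool add_int(int a, int b, int *out) {
    return !__builtin_add_overflow(a, b, out);
}
```

With this, the `n++` above becomes `if (!add_int(n, 1, &n)) { /* handle error */ }`, making the failure case explicit instead of undefined.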

A compiler could have an option to warn about detected overflow/wraparound even if it's well defined. But really, how often do you want wraparound for signed types? In the code above, is there any sense in which INT_MIN is the "right" answer for any typical problem domain?



> In the code above, is there any sense in which INT_MIN is the "right" answer for any typical problem domain?

There is no answer other than INT_MIN that would be right and make sense, i.e. that preserves the natural properties of the + operator (associativity, commutativity). Thus, for want of another possibility, INT_MIN is precisely the right answer to your code.

I read your code and it seems to me very clear that INT_MIN is exactly what the programmer intended.


> I read your code and it seems to me very clear that INT_MIN is exactly what the programmer intended.

Well, I'm the author and that's not what I intended.

I used INT_MAX as the initial value because it was a simple example. Imagine a case where the value happens to be equal to INT_MAX, and then you add 1 to it.

The fact that no result other than INT_MIN makes sense doesn't imply that INT_MIN does make sense. Saturation (having INT_MAX + 1 yield INT_MAX) or reporting an error seem equally sensible. We can't know which behavior is "correct" without knowing something about the problem domain and what the program is supposed to do.
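Saturation, for what it's worth, is straightforward to write in portable C: test against the limits before adding, so the addition itself never overflows. A sketch (the name `sat_add` is illustrative):

```c
#include <limits.h>

/* Saturating addition: clamps to INT_MAX / INT_MIN instead of wrapping.
   The comparisons are rearranged so no intermediate expression can
   overflow, keeping the function free of undefined behavior. */
static int sat_add(int a, int b) {
    if (b > 0 && a > INT_MAX - b) return INT_MAX;
    if (b < 0 && a < INT_MIN - b) return INT_MIN;
    return a + b;
}
```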

A likely scenario is that the programmer didn't intend the computation to overflow at all, but the program encountered input that the programmer hadn't anticipated.

INT_MAX + 1 commonly yields INT_MIN because typical hardware happens to work that way. It's not particularly meaningful in mathematical terms.

As for "natural properties", it violates "n + 1 > n". C integers are not, and cannot be, mathematical integers (unless you can restrict values to the range they support).
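When wraparound genuinely is what's wanted, it can already be expressed without undefined behavior by going through unsigned arithmetic, which the standard defines as modulo 2^N. One caveat: converting the out-of-range result back to int is implementation-defined (pre-C23), though every mainstream two's-complement compiler wraps. A sketch under that assumption:

```c
#include <limits.h>

/* Wraparound addition via unsigned arithmetic. Unsigned overflow is
   well defined (modulo 2^N); the conversion of an out-of-range value
   back to int is implementation-defined before C23, but wraps on all
   mainstream two's-complement implementations. */
static int wrap_add(int a, int b) {
    return (int)((unsigned int)a + (unsigned int)b);
}
```

Requiring this spelling keeps intentional wraparound visible in the source, which is the property the warning argument above depends on.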



