Comparison semantics / typing
=============================

In most other programming languages, types are assigned bottom-up: the terms
get their types first, and then the outer expressions are recursively
assigned types from them. This makes deciding the integer type
sizes/signedness in comparison operations trivial there.

In SLUL, it works the other way around: first the outermost expression is
assigned a type (i.e. boolean for a comparison expression), and then the
types of the inner integer expressions have to be inferred somehow.


Options:
* Always require either that both sides have an unambiguous type, or that
  the left side has an unambiguous type and the right side is a literal.
* Use the largest type of any term, and report an error for mixed signedness
  within one side. (Literals get promoted to this type, and an error is
  reported if the literal value is not in range.)
    - Will easily trap with small types such as bytes.
* Use the type of all terms, i.e. require all terms to have exactly the same
  type.
    - Will easily trap with small types such as bytes.
* Like either of the above, but never use types smaller than int/uint.


Orthogonally, there is also the question of whether, and if so how,
comparisons with mixed signedness should work:

* Forbid
* Forbid, but allow if the signed operand is never negative.
* Promote to unsigned (like C). But this is confusing.
* Compare by value, i.e. out-of-range values are handled specially
  (and always return true/false depending on the side and operator).
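
As a rough sketch of the last option ("compare by value"), a mixed-signedness
less-than could be lowered to something like the C function below. The
function name and the fixed 32-bit widths are only for illustration:

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical lowering of a signed < unsigned comparison under the
       "compare by value" option: the out-of-range (negative) case is handled
       specially instead of being wrapped by a promotion to unsigned. */
    static bool lt_int32_uint32(int32_t s, uint32_t u)
    {
        if (s < 0)
            return true;         /* a negative value is below every unsigned value */
        return (uint32_t)s < u;  /* both sides non-negative: compare directly */
    }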


Revise type detection/promotion entirely?
-----------------------------------------

There is a performance and usability problem with the current system for
type detection. For example, consider the following code:

    [3]byte a = ...
    byte b = a[0] + a[1] + a[2]

Currently, each addition has to be performed as a byte and range-checked.
That is both annoying (because an intermediate result could overflow and/or
give range errors at compile time) and slow, because the compiler needs to
insert instructions to range check and/or to mask off excess bits.

It would be better if it were computed as a uint/int.
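
For comparison, this is roughly what C already does through its integer
promotions. The function below is only an illustration of computing the sum
at int width and narrowing once at the end:

    #include <stdint.h>

    /* Rough C equivalent of the example above: with C's integer promotions,
       each uint8_t operand is widened to int before the additions, so the
       intermediate sums cannot overflow the 8-bit type; only the final
       store narrows the result. */
    static uint8_t sum3(const uint8_t a[3])
    {
        int wide = a[0] + a[1] + a[2];  /* computed at int width */
        return (uint8_t)wide;           /* single narrowing at the end */
    }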

To avoid confusion, maybe the byte/int16 types should only be allowed
in structs/arrays? (and maybe in function parameters).

If calculations are done at a higher bit-width, then there are some edge
cases that need to be tested:

    var byte u8
    var wuint16 w16
    var uint u

    u = w16 = (u8 * u8 * u8)

The multiplications could yield a larger number than "w16" can hold.
In that case, "u" should still receive the non-truncated value.
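
A rough C sketch of the intended behaviour, assuming that "wuint16" is a
16-bit wrapping unsigned type (the concrete values are only for
illustration):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t  u8 = 200;
        uint16_t w16;            /* stands in for the wrapping wuint16 */
        unsigned u;

        /* Hypothetical lowering of  u = w16 = (u8 * u8 * u8):
           the product is computed once at full width... */
        unsigned full = (unsigned)u8 * u8 * u8;  /* 8000000, does not fit in 16 bits */

        w16 = (uint16_t)full;    /* the wrapping assignment truncates to 16 bits */
        u = full;                /* ...but u still receives the non-truncated value */

        printf("full=%u w16=%u u=%u\n", full, (unsigned)w16, u);
        return 0;
    }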


How other languages handle integer promotion
--------------------------------------------

* C:
    - Promote to larger
    - Promote to unsigned
    - This leads to strange behaviour in mixed-signedness comparisons
      (see the example after this list)
* Hare:
    - Promote to larger (but limited for uintptr/size types)
    - Mixed signedness is an error
    - This also solves the comparison issue
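
For reference, the classic example of the C behaviour mentioned above: under
the usual arithmetic conversions the signed operand is converted to unsigned,
so -1 compares as UINT_MAX:

    #include <stdio.h>

    int main(void)
    {
        int i = -1;
        unsigned int u = 1;

        /* i is converted to unsigned int and becomes UINT_MAX, so the
           comparison is false even though -1 < 1 by value. */
        if (i < u)
            printf("-1 < 1u is true\n");
        else
            printf("-1 < 1u is false\n");  /* this branch is taken */
        return 0;
    }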