int64
+ usually enough
+ constant time
+ predictable performance
+ no "builtin" Denial-of-Service in the integer type itself
+ always a value type

bigint ("arbitrary" sized integer)
+ avoids different ranges for positive and negative values
+ no need for a separate unsigned type
+ small values (e.g. -32768..32767) can still be passed by value
- large values may need to be passed by reference, may need allocation
- slow

bigreal ("arbitrary" sized real type)
+ a single numeric type :)
* could be implemented as any of:
  ( integer part, ratio from 0..1 encoded as 0..2^64-1 )
  ( integer part, numerator, denominator )
  ( numerator, denominator )
+ could avoid some loss of precision of float types
- often one wants to limit the numbers to integers, e.g. in array indexes,
  row IDs, handle numbers, hash function values, etc.
- large values may need to be passed by reference, may need allocation
- slowest

Alternative solution: have 3 types? or more?
- byte (e.g. for bytearrays/strings)? or skip this?
- bignum
- float64

Or, perhaps there should be 4 types:
- byte (perhaps not really needed; actually only needed in arrays, but for
  that, it is possible to use strings instead. Also, characters are better
  represented as strings anyway, due to multi-byte chars and extended
  grapheme clusters)
- int64
  - with bounds
- wrapping unsigned integers
  - with bitlengths, perhaps 1..64
  - typically, casting between byte/int/wuint is NOT meaningful! Perhaps it
    should require explicit cast-like operations.
- float64
  - bounds sometimes make sense, e.g. -1 <= x <= 1 or 0 <= x
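The (numerator, denominator) representation of bigreal sketched above can be illustrated with Python's arbitrary-precision Fraction type. This is only an illustration of the idea, not a proposed implementation; it shows how exact rational arithmetic avoids the rounding that float types introduce:

```python
from fractions import Fraction

# A "bigreal" as (numerator, denominator): both components are
# arbitrary-precision integers, so arithmetic stays exact.
a = Fraction(1, 3)
b = Fraction(1, 6)
print(a + b)  # exactly 1/2, no rounding

# Compare with float64, which cannot represent 0.1 or 0.2 exactly:
print(0.1 + 0.2 == 0.3)                        # False
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```

Note the cost that the notes flag as "slowest": after many operations the numerator and denominator can grow without bound, so each operation may allocate and must reduce the fraction to lowest terms.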
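The wrapping unsigned integers with a chosen bitlength (1..64) mentioned in the 4-type variant can be sketched by masking after every operation. The class name `WUInt` and its API are hypothetical, invented here for illustration:

```python
class WUInt:
    """Hypothetical wrapping unsigned integer with a fixed bitlength.

    Values wrap modulo 2**bits; mixing different bitlengths is rejected,
    matching the note that casting between byte/int/wuint should require
    explicit cast-like operations rather than happening implicitly.
    """

    def __init__(self, value: int, bits: int):
        assert 1 <= bits <= 64
        self.bits = bits
        self.mask = (1 << bits) - 1
        self.value = value & self.mask  # wrap on construction

    def __add__(self, other: "WUInt") -> "WUInt":
        assert self.bits == other.bits, "no implicit mixing of bitlengths"
        return WUInt(self.value + other.value, self.bits)

    def __repr__(self) -> str:
        return f"WUInt({self.value}, bits={self.bits})"


x = WUInt(255, 8)
y = WUInt(1, 8)
print(x + y)  # wraps around to WUInt(0, bits=8)
```

Keeping wrap-around semantics in a separate type like this preserves the property that plain int64 arithmetic never silently wraps, while still supporting hashing, checksums, and bit manipulation.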