Decimal types
=============

Fixed-point vs floating-point literals
--------------------------------------

How to specify fixed-point numbers? Should the syntax be the same as
for floating-point numbers, with the meaning inferred from the target
type? Or should floating-point numbers have a different syntax?

Should trailing zeros be required for fixed-point types? E.g. 1.00
for 2-decimal accuracy.

If fixed-point and floating-point literals have the same syntax, how
should they be stored in the AST?

* As strings, until the type is known?
* As fixed-point if possible, otherwise as float? Fixed-point numbers
  would then be convertible to float if there is no loss of precision,
  as determined by the number of decimals.

Fixed-point representation
--------------------------

The easiest way might be to store them as plain integers, scaled up by
the precision. E.g. 1.23 would be stored as 123. If the range is 0.00
up to 99.99, the maximum in integer representation would be 9999, so a
16-bit type is needed (assuming octets are the smallest unit).

Decimal vs floating-point types
-------------------------------

How to declare decimal vs floating-point types?

    decimal 0.00 a
    decimal 0.00-1000.00 a
    signed decimal 0.00 a
    signed decimal -9.9-9.9 a
    float a
    signed float a
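The second AST option above needs a rule for "no loss in precision". A minimal sketch of one such rule, assuming the literal is already held as a scaled integer plus a decimal count (the function name and representation here are illustrative, not part of the language):

```python
from fractions import Fraction

def converts_losslessly(raw: int, decimals: int) -> bool:
    """Check whether the fixed-point value raw / 10**decimals can be
    converted to a binary float without losing precision, by testing
    that the rounded float maps back to exactly the same rational."""
    exact = Fraction(raw, 10 ** decimals)
    return Fraction(float(exact)) == exact

# 0.5 (raw=5, decimals=1) is exactly representable in binary;
# 0.1 (raw=1, decimals=1) is not.
```

Note that this tests the actual value rather than the number of decimals alone: a per-value check is stricter, since e.g. 0.5 converts exactly while 0.1, with the same number of decimals, does not.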
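The scaled-integer representation described under "Fixed-point representation" can be sketched as follows. This is an illustration of the scheme, not an implementation proposal; the class and function names are assumptions:

```python
class Fixed:
    """Fixed-point number stored as an integer scaled by 10**decimals,
    so 1.23 with decimals=2 is stored as the integer 123."""

    def __init__(self, raw: int, decimals: int):
        self.raw = raw
        self.decimals = decimals

    @classmethod
    def parse(cls, text: str, decimals: int) -> "Fixed":
        # "1.23" -> raw 123; digits beyond the declared precision
        # are rejected rather than rounded.
        whole, _, frac = text.partition(".")
        if len(frac) > decimals:
            raise ValueError("too many decimal digits")
        frac = frac.ljust(decimals, "0")
        sign = -1 if whole.startswith("-") else 1
        raw = abs(int(whole)) * 10 ** decimals + int(frac or "0")
        return cls(sign * raw, decimals)

    def __str__(self) -> str:
        sign = "-" if self.raw < 0 else ""
        q, r = divmod(abs(self.raw), 10 ** self.decimals)
        return f"{sign}{q}.{r:0{self.decimals}d}"

def bits_needed(max_raw: int) -> int:
    """Smallest octet-aligned width holding max_raw as an unsigned
    integer, e.g. 9999 (for the range 0.00-99.99) needs 16 bits."""
    return ((max_raw.bit_length() + 7) // 8) * 8
```

With this scheme the bit width follows directly from the declared range: the maximum raw value 9999 has a 14-bit magnitude, which rounds up to the next octet boundary, 16 bits, matching the example in the text.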