Chunked/segmented arrays
========================

It would be nice to be able to have arrays that are non-contiguous and/or
point to some existing byte range in memory. But this leads to a few problems:

1. Chunks/segments must use relative pointers in order to be possible to
   allocate in the constant data section.
2. Lots of extra overhead for small arrays (e.g. [3]byte).
3. It makes structs more complicated:

       type S = struct {
           [3]byte b  # this needs a bit of metadata that wouldn't be necessary otherwise
       }

How much overhead is needed? With a naive design, each segment carries:

- alignment padding (0-7 bytes on a 64-bit platform)
- length
- pointer to data (if the data is not placed directly after the length)
- (optionally) a pointer to the next segment

With a pointer to the next segment:
  --> 12-15 bytes extra on 32-bit platforms
  --> 24-31 bytes extra on 64-bit platforms
Without a pointer to the next segment:
  --> 8-11 bytes extra on 32-bit platforms
  --> 16-23 bytes extra on 64-bit platforms
(A C sketch of this segment layout is given at the end of these notes.)

Having two separate plain array types is not very nice either. But having a
list type could be useful:

    list l = .new()
    l.add(1)
    l.add_all(other)
    l.add(2)
    l.add_all(other.from_incl(10))
    l.add_all(other.up_to(9))
    # would be nice to allocate the temp objects on the stack
    # (or in a "stack-allocator area" in the arena)
    l.add_all(other.sublist(2, 5).reverse())

Lists could have a compact structure. The length/info field encodes:

- 0 to (max-k): an immutable contiguous list of that length
- (max-k+1) to max: non-trivial list types (mutable, non-contiguous, etc.)

The rest of the representation depends on the length/info field.
Non-contiguous lists additionally need:

- the type of list (contiguous list, reversed contiguous list, chunked list,
  reversed chunked list)
- the length of the list
- the segments/data

It would also be nice to be able to include an immutable contiguous list as
a part of a larger list (i.e. after concatenation); see the list-header
sketch below.
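
To make the overhead numbers above concrete, here is a minimal C sketch of such
a naive segment header, assuming 64-bit length/offset fields; the names
(Segment, data_off, next_off) and the exact layout are illustrative assumptions,
not an existing design. Self-relative offsets play the role of the relative
pointers from point 1, so a segment can be emitted into the constant data
section without relocations, and data_off also lets a segment refer to an
existing byte range elsewhere in memory.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* One segment of a chunked/segmented array. Offsets are relative to the
     * address of the field that stores them, so a segment contains no absolute
     * pointers and can live in the read-only constant data section. */
    typedef struct Segment {
        uint64_t length;    /* number of elements in this segment */
        int64_t  data_off;  /* self-relative offset to the element data */
        int64_t  next_off;  /* self-relative offset to the next segment, 0 = none */
    } Segment;

    static const uint8_t *segment_data(const Segment *s) {
        return (const uint8_t *)&s->data_off + s->data_off;
    }

    static const Segment *segment_next(const Segment *s) {
        if (s->next_off == 0)
            return NULL;
        return (const Segment *)((const uint8_t *)&s->next_off + s->next_off);
    }

    /* A [3]byte array emitted entirely as constant data: 24 bytes of header for
     * 3 bytes of payload, which is exactly the small-array overhead of point 2. */
    typedef struct { Segment hdr; uint8_t data[3]; } ConstArray3;

    static const ConstArray3 k_abc = {
        .hdr = {
            .length   = 3,
            .data_off = (int64_t)(offsetof(ConstArray3, data)
                        - (offsetof(ConstArray3, hdr) + offsetof(Segment, data_off))),
            .next_off = 0,
        },
        .data = { 'a', 'b', 'c' },
    };

    int main(void) {
        const uint8_t *d = segment_data(&k_abc.hdr);
        /* On a 64-bit platform the header is 24 bytes; with 0-7 bytes of
         * alignment padding for the payload that is the 24-31 byte range above.
         * Dropping next_off gives the 16-23 byte range. */
        printf("%c%c%c, header = %zu bytes\n", d[0], d[1], d[2], sizeof(Segment));
        printf("next segment: %p\n", (void *)segment_next(&k_abc.hdr));  /* (nil): single segment */
        return 0;
    }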
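
The comment about stack-allocating the temporary objects could work out roughly
as below: a hypothetical C sketch (none of these names come from an existing
API) where sublist/reverse produce small non-owning view values that live only
for the duration of the add_all call, so no intermediate list is heap-allocated.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* A temporary, non-owning view of part of a list. It is a small value type,
     * so an expression like other.sublist(2, 5).reverse() can build it on the
     * stack (or in a "stack-allocator area" of the arena). */
    typedef struct {
        const uint8_t *data;
        uint64_t       len;
        bool           reversed;
    } ListView;

    /* Composition with an already-reversed view is omitted for brevity. */
    static ListView view_sublist(ListView v, uint64_t from, uint64_t to) {
        return (ListView){ v.data + from, to - from, v.reversed };
    }

    static ListView view_reverse(ListView v) {
        v.reversed = !v.reversed;
        return v;
    }

    /* add_all consumes the view element by element; the view never outlives
     * the statement that built it. */
    static void add_all(uint8_t *out, uint64_t *out_len, ListView v) {
        for (uint64_t i = 0; i < v.len; i++)
            out[(*out_len)++] = v.reversed ? v.data[v.len - 1 - i] : v.data[i];
    }

    int main(void) {
        const uint8_t other[] = { 10, 11, 12, 13, 14, 15 };
        uint8_t  l[16];
        uint64_t n = 0;

        ListView o = { other, sizeof other, false };
        add_all(l, &n, view_reverse(view_sublist(o, 2, 5)));  /* appends 14 13 12 */

        for (uint64_t i = 0; i < n; i++)
            printf("%d ", l[i]);
        printf("\n");
        return 0;
    }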
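
A possible shape for the compact list structure, again as a hypothetical C
sketch (the tag values and the choice k = 4 are arbitrary for the example):
small values of the length/info field are the length of an immutable contiguous
list whose elements follow inline, the top k values tag the non-trivial
representations, and a chunked list's segments are pointers to existing
immutable contiguous lists, so concatenation can reuse its operands as chunks
without copying element data.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* length/info field: 0..(max-k) is the length of an immutable contiguous
     * list; the top k values (here k = 4) tag non-trivial representations. */
    #define LIST_INFO_MAX         UINT64_MAX
    #define LIST_TAG_MUTABLE      (LIST_INFO_MAX - 0)
    #define LIST_TAG_CHUNKED      (LIST_INFO_MAX - 1)
    #define LIST_TAG_CHUNKED_REV  (LIST_INFO_MAX - 2)
    #define LIST_TAG_CONTIG_REV   (LIST_INFO_MAX - 3)
    #define LIST_TAG_FIRST        LIST_TAG_CONTIG_REV   /* smallest tag value */

    typedef struct { uint64_t info; } ListHeader;

    /* Immutable contiguous list: info is the length, element bytes follow inline. */
    typedef struct {
        ListHeader h;
        uint8_t    elems[];
    } ContigList;

    /* Chunked list: info is LIST_TAG_CHUNKED; every segment is itself an
     * immutable contiguous list, so an existing (e.g. constant) list can be
     * included as a chunk of a larger list after concatenation. */
    typedef struct {
        ListHeader         h;
        uint64_t           length;        /* total number of elements */
        uint32_t           num_segments;
        const ContigList **segments;
    } ChunkedList;

    static bool list_is_contiguous(const ListHeader *l) {
        return l->info < LIST_TAG_FIRST;
    }

    static uint64_t list_length(const ListHeader *l) {
        return list_is_contiguous(l) ? l->info
                                     : ((const ChunkedList *)l)->length;
    }

    /* Indexing: direct for contiguous lists, walk the segment table otherwise
     * (the reversed variants are omitted for brevity). */
    static uint8_t list_get(const ListHeader *l, uint64_t i) {
        if (list_is_contiguous(l))
            return ((const ContigList *)l)->elems[i];
        const ChunkedList *c = (const ChunkedList *)l;
        for (uint32_t s = 0; s < c->num_segments; s++) {
            uint64_t n = c->segments[s]->h.info;   /* segment length */
            if (i < n)
                return c->segments[s]->elems[i];
            i -= n;
        }
        return 0;  /* out of range */
    }

    int main(void) {
        /* Two immutable contiguous lists, laid out as they could be in the
         * constant data section (layout-compatible with ContigList). */
        static const struct { ListHeader h; uint8_t e[3]; } a = { {3}, {1, 2, 3} };
        static const struct { ListHeader h; uint8_t e[2]; } b = { {2}, {4, 5} };
        const ContigList *segs[] = { (const ContigList *)&a, (const ContigList *)&b };

        /* "Concatenation" of a and b: the result just records the two existing
         * immutable lists as its segments; no element bytes are copied. */
        ChunkedList ab = { { LIST_TAG_CHUNKED }, 5, 2, segs };

        for (uint64_t i = 0; i < list_length(&ab.h); i++)
            printf("%d ", list_get(&ab.h, i));
        printf("\n");   /* prints: 1 2 3 4 5 */
        return 0;
    }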