Is more than 63 bits needed for file sizes?
===========================================
Old SLUL had a fileoffs type, based on C `off_t`.
SLUL-try2 uses 63-bit integers instead, which is a lot simpler.
BUT is there any possibility that a file size could exceed 63 bits?
E.g. in a disk image, a SAN, or similar?
Solution 1: Don't solve, and use separate APIs for huge files
-------------------------------------------------------------
For example, as an extension to some file class:

    class HugeFileOffset
        long low
        long high
    end

    func seek
        HugeFileOffset offs
    code
        ...
    end
Solution 2: Use the high bit to extend the long type
----------------------------------------------------
Maybe the high bit could be used to pass a pointer (to a wider value)
instead? (That would of course require that the whole 64-bit address
space is not in use.)
But how to know whether a function supports this usage or not?
It would have to be declared (maybe with some kind of since-version
scheme?).
Problem
-------
Regardless of the chosen solution, there's always the problem of type
conversions and arithmetic with larger-than-64-bit integers (unless,
of course, all integers are bigintegers).
So any larger-than-64-bit type will be inconvenient to use, and many
libraries might not support it.
Related topic: Biginteger support? Or allow the runtime to decide the
maximum integer type? Maybe that could use the high-order-bit hack?