Licensing

Mike Schwartz mschw at athenet.net
Sat Jun 28 07:48:00 PDT 2014


>> Fp, in the days of smaller drives, allowed for adding key/data extents to
the same or different drives.  Today with the larger drive capacity, the
only time one would want to do this is when the file size is nearing the 2GB
limit on a 32bit OS.
>>
>> I don't recall exactly what the naming convention is for these extents.

> Yea, please explain what extents is.
> 
> Thanks, Mike

Hi Mike:

      Brian White wrote a good explanation of "extents" last year.  I
snipped and pasted it for you (below).  If you search back through the
filePro mailing list archives, you can find other explanations.   Here is a
little less technical explanation:

     Back in the early 1980's, when we were running filePro (Profile) on
Tandy Model 12's with four 8" floppy drives, the only way to store a
reasonable amount of data into a single database (table) was to split a
file, putting the first few fields of the file on the first diskette, the
next few fields of the file on the second diskette and so forth. 

    Right now, all the data fields in your files are probably in the "key"
files (key data segments). 

    For example, when we would look at record #1 in our old 1980's CUSTOMER
file, the "customer number" and "customer name" fields might be stored in
the "CUSTOMER" key file, "customer address" might be stored in the CUSTOMER
data file, the "homeowner policy number" field might be in the CUSTOMER
data1 file, and the "customer automobile policy number" might be stored in
the CUSTOMER data2 file.

    Once the programmer designs the layout of the file into 2, 3, or 4
segments, no additional programming work is required.  filePro takes care of
grabbing the fields you need from the 4 data segments automatically.  You
just define screens and define reports and write your processing as if the
entire data file resided in a single data segment.   

     The file on the first diskette was called "key", because you could only
build indexes on a field that was in the "key" segment on the first diskette
drive.  In other words, if you had a "Last Name" field in your database that
was located in the "data", "data1" or "data2" segments on one of the other 3
diskette drives, and you wanted to build an index on "Last Name", you
couldn't.

    Current versions of filePro have the ability to build indexes using
fields that are in the data, data1 and data2 segments, but the old "key,
data, data1, data2" terminology remained.  

     What you will need to do is add up the field lengths to figure out the
approximate half-way point of a record (character length-wise).  For
example, if your existing record length is 1000 characters, find the closest
field you can to the 500-character point.  Or, if you want to split your
file into 3 or 4 segments, calculate field lengths to divide your record
length into 3 or 4 equal-sized record lengths (roughly 333 characters or
250 characters).

Here is an example:

      NOTE:  Before starting, make SURE you have good, well tested backups
that you are CERTAIN you can recover data from, in case anything gets messed
up!!!

     Let's assume your CUST file has 100 fields and a record length of 662
characters, but is approaching the 2-Gigabyte limit.  Half the record
length would be 331 characters, and 331 characters falls roughly between
field 32 and field 33.  (Adding up the lengths of the first 32 fields
gives you, say, 320 characters, but field 33 is 60 characters, so you would
want to split the file between fields 32 and 33, not between fields 33 and
34.)
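The add-up-the-lengths arithmetic can be sketched in a couple of lines of
shell.  This assumes a hypothetical file, field-lengths.txt, holding one
field length per line in field order (filePro doesn't produce such a file;
you would jot the lengths down from "Define Files"):

```shell
# Hypothetical sketch: find where to split a 662-character record in half.
# field-lengths.txt holds one field length per line, in field order.
awk -v half=331 '
    { sum += $1 }                 # running total of field lengths
    sum >= half {                 # first field that crosses the half-way point
        print "split between fields " (NR - 1) " and " NR
        exit
    }
' field-lengths.txt
```

With the numbers above (32 fields totaling 320 characters, then a
60-character field 33), this prints "split between fields 32 and 33".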

      Use fPcopy to copy your existing file to a new file, i.e., copy CUST
to CUSTsplit.  Don't copy the data, but do copy all the reports, screens,
and processing.

      Go into "Define Fields" on the CUSTsplit file and DELETE fields 33
through 100.  Press "esc-esc", and you should see a (D)ata option at the
bottom of the screen.  Press "D" and you will be put into a screen where
you can re-add fields 33 through 100.

     When you are all done adding fields, write a processing table that will
copy all the records in CUST to the CUSTsplit file.  Then remove the CUST
file from your system and use fPcopy again to rename the CUSTsplit file to
CUST.  

      At this point you will have a CUST file that has (roughly) a
1-Gigabyte "key" segment and a 1-Gigabyte "data" segment in it, but the file
will contain all the data from your original CUST database, and it will look
and operate exactly like your original CUST file did.

Hope this doesn't confuse you further...

Mike Schwartz 

----- ----- ----------------------[snip]
To add extents:

# touch keyqualxn dataqualxn
# chown filepro key* data*
# chmod 600 key* data*
(chown and chmod must be run *in this order*, or just run a recent version
of setperms instead of chown & chmod)

Where qual is a qualifier (including nothing for unqualified), and n may be
anything from 1 to at least 9.  I don't know if they can go beyond 9.  I
think it must start at 1 and count up with no breaks if you need more than
one extent.  I.e., if you have one extent it must be x1; if you have 2
extents they must be x1 and x2, etc.  Each extent adds another 2G (or
whatever your OS/filesystem/fp binary max file size is) to the available
"file size" of the filepro file, up to the record limit in filepro, which I
don't remember, but @rn has a 9.0 edit.

So, for unqualified with only one extent:

keyx1 datax1

For 3 extents in qualifier foo:

keyfoox1 keyfoox2 keyfoox3 datafoox1 datafoox2 datafoox3
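The steps above can be wrapped in a small loop.  A sketch for the
three-extent "foo" case; the scratch directory stands in for the real
filePro file directory, and the "filepro" owner is an assumption about a
typical install (chown also needs root, so it is left commented out):

```shell
# Sketch: create three key/data extents for qualifier "foo".
# In practice, cd to the filePro file's own directory instead
# of the scratch directory used here for illustration.
cd "$(mktemp -d)"
for n in 1 2 3; do
    touch "keyfoox$n" "datafoox$n"
done
# Ownership then permissions, in this order (or use a recent setperms).
# chown needs root and a "filepro" user, so it is commented out here:
# chown filepro keyfoo* datafoo*
chmod 600 keyfoo* datafoo*
```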

Yes, this means there is ambiguity between unqualified files with extents
and qualifiers that happen to be named x1 - x9.  So the first extent of the
unqualified key is "keyx1", and the regular key in qualifier x1 is also
"keyx1".
All I can say about that is either try it and see what fp does, or "don't do
that", meaning don't make any qualifiers named x1, x2, ... x9.

I don't remember if you have to do indexes too and I never use blob or memo
but I imagine that blob and memo at least must also be extendable by the
same rules.

The limit per individual file is not simply 2G on 32bit OS. There are at
least 3 possible things that impose a limit, the OS, the filesystem, and the
fp binaries.

You could, for instance, have an OS that can do 64-bit file i/o and fp
binaries that can do 64-bit file i/o, meaning both could handle file sizes
in the terabytes, but if you are writing to a FAT32 filesystem, then the max
file size is 4G, because that is the limit in FAT32.
There are also filesystem-specific tuning/layout options that affect max
file size: if you have an ext3 filesystem with 1k block sizes, your max
file size is 16 GB, but if you have an ext3 filesystem with 4k block sizes,
your max file size is 2 TB.  The max file size in various 64-bit filesystems
ranges from as little as 16 GB all the way to 8 EB.

Generally though:
older 32bit OS & binaries = 2G
newer linux 32bit OS & binaries that can do 64bit file i/o = 2 TB or more
old or new, 64bit, any OS & binaries = 2 TB or more

You could also be using old 32bit fp binaries on an otherwise 64bit OS and
filesystem, and in that case the limit would be 2G, unless the binaries are
32bit but new enough to support 64bit file i/o.
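Before worrying about the limit, you can check which filesystem you are
actually writing to.  This sketch is Linux-specific: -T is a GNU df option
(on AIX, I believe lsfs reports similar information):

```shell
# Show the filesystem type backing the current directory (GNU/Linux df).
df -T .
# Show the per-process file-size limit imposed by the shell, if any
# (usually "unlimited" on modern systems).
ulimit -f
```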

You are on AIX?  Your max file size depends on the same things as above,
but for AIX: 32- or 64-bit kernel?  What filesystem?  And possibly on
options chosen at the time the filesystem was created, which we cannot
know.  For JFS2, it looks like even for a 32-bit kernel, 1 TB is a safe bet
for max file size.

http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.baseadmn/doc/baseadmndita/fs_size_limit.htm
--
bkw
----- ----- --------------- [snip]
