OT: multi-platform *nix blocksize detection
Fairlight
fairlite at fairlite.com
Wed Oct 20 10:03:49 PDT 2004
This public service announcement was brought to you by Bill Vermillion:
>
> Don't confuse physical block size with filesystem block size.
Point taken.
> I've just looked at mkfs.ext2 and Linux also allocates in
> 1024, 2048 or 4096 byte blocks. So you can not depend
> on the file system allocation size being the same on all systems -
> but the default will be 1024.
You know, that's the odd thing. If I look at ls -ls on a file on this box,
I get one 1K block per <= 1K of data. It's 1024-byte blocks. For example,
/etc/profile will show as 766 bytes and I get '1' in the ls -ls listing,
but if I access st_blocks, it kicks back 2. That's what originally led me
to conclude it was 512-byte blocks. BUT...if I go to Solaris and do the
same thing, /etc/profile is 1403 bytes, and I get 8 whether I do ls -ls or
whether I access st_blocks. So they have 4K blocks. Okay, fine.
Something is goofy on Linux systems. I thought it might just be mine, but
a 6822-byte /etc/profile on SuSE 9.0 results in ls -ls saying it's using
8 blocks, but accessing st_blocks gives me 16. On Linux I have to divide
by two to get the correct information. So they're 2K blocks on SuSE, but
something is screwy on Linux with st_blocks in any event, no matter which
kernel tree (2.2 vs 2.4 between my two tests...Cobalt vs SuSE). So it's
not even a matter of scaling down to 512-byte blocks, it's a matter of
always dividing by 2 to get the right number.
And I have -no- idea why this should be so.
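For anyone who wants to poke at the same thing, a throwaway script along
these lines is all it takes to see the discrepancy (a minimal sketch, not the
monitor itself; /etc/profile is just a convenient test file):

#!/usr/bin/perl -w
# Throwaway sketch: dump the stat fields in question for one file, plus the
# halved st_blocks figure that matches what ls -ls shows me on Linux.
use strict;

my $file = shift || '/etc/profile';
my @st   = stat($file) or die "stat $file: $!\n";
my ($size, $blksize, $blocks) = @st[7, 11, 12];

print "file:        $file\n";
print "st_size:     $size bytes\n";
print "st_blksize:  $blksize (preferred I/O size, not necessarily the allocation unit)\n";
print "st_blocks:   $blocks\n";
print "st_blocks/2: ", $blocks / 2, "  (the number ls -ls reports here)\n";

Run that next to ls -ls on the same file on each box and the factor of two
shows up plainly.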
> I know of no Unix[like] system that uses 512 bytes in the file
> system.
Remember when I said yesterday, "I guarantee?" I was wrong. :-/ Bad
logic surrounding the math on my part. I rethought it after reading your
post, and I did indeed find that it's 1K blocks--which left me all the more
confused after I rechecked my computations.
> In an effort to improve performance the AT&T SysV2 brought forth
> the S52 file system. That allocated 4 blocks at a time - or
> 2048 bytes.
Anyone who's ever dealt with NFS knows the value of large blocksizes. :)
> > Checking manually with fdisk isn't something I'd like to suggest to an
> > end-user, ya know? :)
>
> And I'm curious as to why you need to know this, and why a user
> would need this. Unless you are working at the lowest OS level
> you are going to be constrained by the allocation sizes imposed
> by the file system.
I wanted to make something portable, really. I wrote it for myself, but
it could be handy for others. There are programs that create files at the
full size they will eventually be once the download finishes. They then
write segments into the file as they fetch them from multiple sources, and
the only way to tell approximately what percentage is complete (and the
approximate transfer rate) is to look at the used blocks.
I wrote a nice little curses-based monitor program for percent complete and
download rate so I can watch -only- the active files in a queue of this
type.
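The math in the monitor is nothing fancier than the ratio of allocated bytes
to the advertised final size. Stripped of the Curses plumbing it amounts to
roughly the sketch below; the helper name is just for illustration, and
BLOCKUNIT is exactly the per-platform number this thread is about (512 is my
working assumption, not a given):

#!/usr/bin/perl -w
# Stripped-down version of the percent-complete math; the real monitor sits
# inside a Curses refresh loop and samples on a timer to get the rate.
use strict;

use constant BLOCKUNIT => 512;    # per-platform fudge factor (assumed here)

sub percent_complete {
    my ($file) = @_;
    my @st = stat($file) or return undef;
    my ($size, $blocks) = @st[7, 12];
    return undef unless $size;             # nothing sane to say about a 0-byte file
    my $have = $blocks * BLOCKUNIT;        # bytes actually allocated so far
    $have = $size if $have > $size;        # indirect-block overhead can overshoot
    return 100 * $have / $size;
}

die "usage: $0 file\n" unless @ARGV;
my $pct = percent_complete($ARGV[0]);
printf "%s: %.1f%% complete\n", $ARGV[0], $pct if defined $pct;

The transfer rate falls out of sampling that on a timer and diffing the
readings.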
The block count reporting from st_blocks is the only obstacle to this being
really portable. And dang, while I haven't done ncurses work since '95, I
-really- like the perl Curses interface to the libraries. Sweeeeeeet!
Actually, the other puzzler is how someone creates a file that reports its
full eventual size on disk without immediately -taking- that much space.
I know it's being done, but I honestly don't know how to do it. I'm
embarrassed to even admit that.
I've never needed to do it, so I guess I've had little reason to know. But
it's an interesting question. If you know, clue me in.
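(If it turns out to be the sparse-file trick, where you seek out past where
the end of the file will eventually be and write a single byte so the size
reports the full length while the unwritten hole takes no blocks, then in
Perl it would be as simple as the sketch below. I haven't verified that's
what these programs actually do, and the filename and size are made up for
illustration.)

#!/usr/bin/perl -w
# Sketch: create a sparse file that reports the full size in st_size while
# only the single written byte actually occupies blocks. Assumes the
# filesystem supports holes, which ext2 and friends do.
use strict;

my ($name, $final_size) = ('placeholder.dat', 10 * 1024 * 1024);  # made up

open my $fh, '>', $name or die "open $name: $!\n";
seek $fh, $final_size - 1, 0 or die "seek: $!\n";   # jump to the last byte
print $fh "\0";                                     # one byte sets the length
close $fh or die "close: $!\n";

my @st = stat($name);
print "st_size=$st[7]  st_blocks=$st[12]\n";        # big size, tiny block count

ls -ls on the result shows the full size next to a tiny block count, which
would line up with exactly the behavior the monitor watches for.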
mark->
--
Bring the web-enabling power of OneGate to -your- filePro applications today!
Try the live filePro-based, OneGate-enabled demo at the following URL:
http://www2.onnik.com/~fairlite/flfssindex.html