OT: redhat
Bill Vermillion
fp at wjv.com
Sun Nov 7 08:42:04 PST 2004
On Fri, Nov 05 20:16, Fairlight gie sprachen "Vyizdur zomen
emororz izaziz zander izorziz", and continued with:
> This public service announcement was brought to you by Bill Vermillion:
> >
> > I wish more people would take the time to understand file systems
> > and how they work. As above - I still see many advocating
> > one huge file system - and one of their reasons is so they don't
> > run out of space in any one file system. I think they must
> > be MS converts.
> Not necessarily. I've seen systems where people allocated the
> defaults from the vendor and ended up running low on /usr, etc.
> Having /usr separate was also a pain depending on what was and
> wasn't dynamically linked against what at boot. :(
That shows the original OS was not designed properly, as nothing
required for boot and/or single-user operation should EVER depend
on anything else. The root programs should always be statically
compiled so they can run stand-alone, even from a floppy if need
be.
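It's easy to verify this yourself: on an ELF system a dynamically
linked binary carries a PT_INTERP program header naming its runtime
loader, and a static one doesn't. Here's a minimal C sketch that
checks for it [it assumes a 64-bit ELF platform; `file` or `ldd`
will tell you the same thing from the shell]:

    /* Sketch: is this ELF binary dynamically linked?  A binary is
     * dynamic if it has a PT_INTERP program header [the path of the
     * runtime loader].  Assumes 64-bit ELF; minimal error handling. */
    #include <elf.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s /path/to/binary\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        Elf64_Ehdr eh;
        if (fread(&eh, sizeof eh, 1, f) != 1 ||
            memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {
            fprintf(stderr, "not an ELF file\n");
            return 1;
        }

        /* Walk the program headers looking for PT_INTERP. */
        int dynamic = 0;
        for (int i = 0; i < eh.e_phnum; i++) {
            Elf64_Phdr ph;
            fseek(f, (long)(eh.e_phoff + (Elf64_Off)i * eh.e_phentsize),
                  SEEK_SET);
            if (fread(&ph, sizeof ph, 1, f) != 1) break;
            if (ph.p_type == PT_INTERP) { dynamic = 1; break; }
        }
        fclose(f);
        printf("%s is %s linked\n", argv[1],
               dynamic ? "dynamically" : "statically");
        return 0;
    }

A statically linked root shell passes that test; one that doesn't
is exactly the trap described below.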
One system I had to recover had the original sysadmin move root to
his favorite shell, which was dynamically linked, and I guess he
knew better than the original OS designers.

That required removing the OS disk, booting off another disk,
cleaning the first disk up, and then re-installing it. He had
grown to love bash and made it the root shell, and it was
dynamically linked. Bad move, very bad move.
> I've run out of space on a smaller fs before, and I've done
> things like relocating all of /usr/X11R6 to /home/.X11R6 and
> symlinking over to buy myself space.
And X seems to grow larger faster than the base OS :-)
> A good many times, it's not so much converts as people who
> don't actually know what their system will necessarily grow
> out to later on.
Almost no one knows how or where a system will grow, but with
proper planning you can accommodate things, such as adding new
disks mounted down the tree to take care of apps that grow and
grow.

But I've seen more than one convert mount the additional disks
under a root directory, call them such things as /disk2, and then
symlink from deeper in the tree to that location. If the symlink
is broken they have problems. It's just that those who grew up in
the MS world don't seem to fully understand that there is one root
[aka /], while in the MS world you have as many roots of a tree as
you have drive designations.
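A broken symlink is at least cheap to detect. Here's a small C
sketch using POSIX lstat() and stat() - lstat() examines the link
itself while stat() follows it, so if stat() fails on something
lstat() says is a link, the target is gone:

    /* Sketch: is PATH a symlink, and does its target still exist?
     * Standard POSIX calls; the checker itself is hypothetical. */
    #include <stdio.h>
    #include <sys/stat.h>

    int main(int argc, char *argv[])
    {
        struct stat lst, st;

        if (argc != 2) {
            fprintf(stderr, "usage: %s path\n", argv[0]);
            return 1;
        }
        if (lstat(argv[1], &lst) != 0) { perror("lstat"); return 1; }
        if (!S_ISLNK(lst.st_mode)) {
            printf("%s: not a symlink\n", argv[1]);
            return 0;
        }
        /* stat() follows the link; failure means a dangling target. */
        if (stat(argv[1], &st) != 0)
            printf("%s: BROKEN symlink\n", argv[1]);
        else
            printf("%s: symlink, target ok\n", argv[1]);
        return 0;
    }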
> With journalling to avoid fsck's, and good backup policies, is
> it even as much of an issue these days, that a large single /
> really -needs- to be avoided?
If you know your OS and its limits you won't have a problem if
you take them into consideration. But some FSes have inode limits,
directory size limits, etc.
And the larger the directory, the more accesses you have to make
going through inode links.

A simple path down through only two directories relative to
another takes at least eight steps.

If you need to find the path ../a/b, you first find the inode of
. [the current directory] to find the .. inode, then read the ..
directory, find 'a' inside that directory, look up the inode for
'a', read the 'a' directory, find the name 'b', and look up the
inode for 'b' to access the file.
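On a modern POSIX system you can mimic that walk component by
component with openat(), which is a rough user-level sketch of
what the kernel's lookup does [the ../a/b path is just the
hypothetical example from above]:

    /* Sketch: resolve ../a/b one component at a time.  Each openat()
     * is one directory-entry search plus one inode fetch, mirroring
     * the eight steps described above. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *steps[] = { "..", "a", "b" }; /* hypothetical path */
        int fd = open(".", O_RDONLY);             /* inode of the cwd */
        if (fd < 0) { perror("open ."); return 1; }

        for (int i = 0; i < 3; i++) {
            /* Look up one name relative to the previous directory. */
            int next = openat(fd, steps[i], O_RDONLY);
            if (next < 0) { perror(steps[i]); close(fd); return 1; }
            close(fd);
            fd = next;
            printf("resolved component: %s\n", steps[i]);
        }
        close(fd);
        return 0;
    }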
A single / filesystem will typically mean a much deeper file
tree. If you don't make the directory tree deep you will tend to
put a great many files in each directory, and then directory size
will seriously impact file system speed, as you could be
performing double-indirect block reads just to gather all the
file names.
As above, if you are on a system that has only 65000 inodes per
FS [and it appears there are many still using those] then you get
to the point where you HAVE to make additional filesystems if you
wish to use all the space on your HD [if you have a lot of small
files], or mount another HD to make up for the oversight of not
creating additional filesystems up front.
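You can see how close a filesystem is to its inode limit with
statvfs() [df -i reports the same numbers on most systems]. A
quick C sketch:

    /* Sketch: report total and free inodes on the filesystem
     * holding PATH, via POSIX statvfs(). */
    #include <stdio.h>
    #include <sys/statvfs.h>

    int main(int argc, char *argv[])
    {
        struct statvfs vfs;
        const char *path = argc > 1 ? argv[1] : "/";

        if (statvfs(path, &vfs) != 0) { perror("statvfs"); return 1; }

        printf("%s: %lu inodes total, %lu free (%.1f%% used)\n",
               path,
               (unsigned long)vfs.f_files,
               (unsigned long)vfs.f_ffree,
               vfs.f_files
                   ? 100.0 * (vfs.f_files - vfs.f_ffree) / vfs.f_files
                   : 0.0);
        return 0;
    }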
What your system is going to be used for is something you NEED to
know at installation time, as a system with lots of small files
needs to be set up differently [e.g., with a higher inode density
when the filesystem is created] from a system with large files.
Now that large HDs exist, many seem to think that it's just fine
to have one / for everything. But that reasoning goes along the
same lines as 'RAM is cheap, so we don't have to worry about
program size/efficiency'.
That latter approach is why we need 3GHz CPUs with 2GB of RAM in
many of today's systems just to keep performance at expected
levels.
Bill
--
Bill Vermillion - bv @ wjv . com