OT: redhat

Brian K. White brian at aljex.com
Sat Nov 6 01:15:08 PST 2004


Bill Vermillion wrote:
> On or about Fri, Nov 05 20:16, while attempting a Zarathustra
> emulation Fairlight thus spake:
>
>> This public service announcement was brought to you by Bill
>> Vermillion:
>
>>> I wish more people would take the time to understand file systems
>>> and how they work.   As above - I still see many advocating
>>> one huge file system - and one of their reasons is so they don't
>>> run out of space in any one file system.  I think they must
>>> be MS converts.
>
>> Not necessarily. I've seen systems where people allocated the
>> defaults from the vendor and ended up running low on /usr, etc.
>> Having /usr separate was also a pain depending on what was and
>> wasn't dynamically linked against what at boot. :(
>
> I just don't see the sense in having a single / on a 40GB or larger
> disk, which is what I'm seeing recommended by some people in some
> of the newsgroups.
>
>> I've run out of space on a smaller fs before and I've done
>> things like relocate all of /usr/X11R6 to /home/.X11R6 and
>> symlink over to buy myself space.
>
>> A good many times, it's not so much converts as people that
>> don't actually know what their system will necessarily grow
>> out to later on.
>
>> With journalling to avoid fsck's, and good backup policies, is
>> it even as much of an issue these days, that a large single /
>> really -needs- to be avoided?
>
> Journaling is not the be-all end-all that many make it out to be.
>
> I'm more on the side of McKusick, Seltzer, etc., with the
> soft-updates and background fsck.  SoftUpdates will keep the file
> system intact, with the potential loss of data in an unexpected
> crash, but in a journaled system you can lose the file system
> integrity.    The documents on soft updates are impressive, but many
> don't want to go through a 40-page PDF with charts to understand
> the process.
>
> I've seen more than one system with / corrupted while everything
> else is intact.   And the added plus of being able to update
> the OS while leaving all the user data in place is another added
> value.  And if the / is corrupted, you can remake the / fs
> and re-install the OS while keeping all the user data and locally
> installed programs.
>
> Bill

I have never actually seen a corrupt / or any other fs. On SCO, that is; yes, 
I've seen ext2, ext3, ufs, ntfs, and fat/fat32 corrupted, usually due to 
running bleeding-edge kernels or other software, or to questionable hardware, 
so it wasn't unexpected and didn't hurt anything, as it was never anything in 
important production (except the ntfs, but there is no hope for a Windows box 
no matter what you do). I have seen hard drives die and programming or 
sysadmin screwups, but never a corrupted fs on a SCO box on officially 
supported, or at least reasonably well-tested and known, hardware.

With tape backups, and the fact that a dead or sick drive means you need to 
restore all fs's at the same time anyway (or _none_ if you have raid), and 
the fact that separate fs's don't help or hurt or have any effect on sysadmin 
or programmer screwups, and the fact that I've never yet seen a corrupt SCO 
fs that was not due to hardware, as far as I'm concerned separate fs's just 
take away flexibility and provide nothing in return, certainly nothing that's 
worth 1/50th what the flexibility is worth.

Ultimately, it has so far worked out in real life that separate fs's have 
cost me time several times and never saved me any.
Some of the arguments for them are starting to sound like recommending 
keeping all fs's under 2 gigs so that in a pinch they can be archived into 
files, or under 700 megs so that they can fit onto CDs.

They do work as an effective diagnostic alert: "/ filled up, something must 
be out of whack..." and then you find it, and some mail or print spool problem 
gets fixed. But I'd much rather have a less drastic form of notification, even 
if the problem goes unnoticed longer: the problem gets noticed gradually 
because the backups seem to be taking way too long, or one of my cron jobs 
sends me an email when dfspace reaches a certain threshold value, or 
something, and the box keeps running and, oh yeah, the box keeps 
running.
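That kind of cron-driven threshold check can be sketched roughly like this. 
This is a hypothetical example, not the actual job: the function name, the 90% 
cutoff, and the mail invocation are assumptions, and since dfspace is 
SCO-specific, portable `df -kP` output is used instead. A fixed sample of df 
output keeps the sketch self-contained.

```shell
#!/bin/sh
# Hypothetical sketch: warn about filesystems above a usage threshold.
# Reads df -kP style output on stdin; prints one warning line per
# filesystem whose Capacity column exceeds the given percentage.
check_usage() {
    threshold="$1"
    awk -v limit="$threshold" 'NR > 1 {
        sub(/%/, "", $5)                       # strip % from Capacity
        if ($5 + 0 > limit + 0)                # force numeric comparison
            print $6 " is " $5 "% full"        # $6 = mount point
    }'
}

# Fixed sample of df -kP output so the example runs anywhere:
sample='Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/root 1000000 950000 50000 95% /
/dev/u 2000000 400000 1600000 20% /u'

printf '%s\n' "$sample" | check_usage 90
# prints: / is 95% full
```

In an actual crontab entry you would pipe live `df -kP` output through the 
check and mail any non-empty result, e.g. 
`df -kP | check_usage 90 | mail -s "disk space warning" root` (assumed 
recipient and subject).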

One reason I do find to limit an fs to less than the size of the disk is if 
the fs only performs well below a certain size.
The possibility hadn't occurred to me until last year, when SCO came out with 
an update to HTFS so that it (supposedly) no longer has a performance dive 
when the fs is larger than about 50 gigs. I had very few SCO fs's that were 
over 36 gigs at that time, so I never knew there was that issue, but now I 
do. And now it's supposedly not a problem any more, but the possibility still 
remains, just probably at higher numbers, and in any event it's something to 
keep in mind for any fs / any os.

Brian K. White  --  brian at aljex.com  --  http://www.aljex.com/bkw/
+++++[>+++[>+++++>+++++++<<-]<-]>>+.>.+++++.+++++++.-.[>+<---]>++.
filePro BBx  Linux SCO  Prosper/FACTS AutoCAD  #callahans Satriani



More information about the Filepro-list mailing list