OT: redhat

Bill Vermillion fp at wjv.com
Sun Nov 7 21:06:16 PST 2004


> Four score and seven years--eh, screw that!
> At about Sat, Nov 06, 2004 at 04:15:08AM -0500,
> Brian K. White blabbed on about:
> [ah, the fine art of trimming...*NUDGE*]

> > I have never actually seen a corrupt / or other fs. On SCO that is, yes, 

I've seen it in OSR5.

...

> I hate to come in against Bill and Bill on this, but I'm
> actually pretty much with you on this.

My feeling on this comes from a lot of reading and from the
recommendations of people who manage a lot of systems.  So it's not
just me.  I can't put my hands on any particular book at the moment,
but over the years I've read many - and these aren't the Dummies
books - typically from Wiley, PH, Springer - publishers whose
authors know what they are doing.  And toss in a couple of Usenix
conferences too.  I will say that most of those people looked
at SCO as a toy OS and were usually running large systems.

> In the case of serious disaster, decent backup and disaster
> recovery schemes should be in force anyway, so it should be
> moot.

An inadvertent power-off on a large busy system can cause problems,
and if the superblocks aren't properly updated you can go through
an fsck - and perhaps have files moved to lost+found.  If you have
a smallish / it may be far faster to reload at that point than to
paw through lost+found putting things back where they belong.

Disconnected directories are, for the most part, pretty obvious
as to where they belong.  Orphaned files may require running strings
on those delightfully named files - the # sign followed by
the number of the inode.
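
Something like this quick sketch will give you a first read on what
fsck dumped in there - it just sizes up each entry and guesses whether
it's a directory, text or binary, the same thing you'd do by hand with
ls, file and strings.  The lost+found path and the #<inode> naming are
assumptions on my part; point it at whatever filesystem you just
cleaned up.

#!/usr/bin/env python
# Rough sketch: take a first look at what fsck left in lost+found.
# Assumes entries are named after their inode numbers (e.g. "#1234");
# adjust LOSTFOUND for the filesystem you are checking.
import os
import stat
import string

LOSTFOUND = "/lost+found"   # hypothetical path - point it at the fs you fsck'd

PRINTABLE = set(bytes(string.printable, "ascii"))

def looks_like_text(path, sample=4096):
    """Crude stand-in for strings(1)/file(1): is the first chunk mostly printable?"""
    with open(path, "rb") as fh:
        data = fh.read(sample)
    if not data:
        return True
    printable = sum(1 for b in data if b in PRINTABLE)
    return printable / float(len(data)) > 0.9

for name in sorted(os.listdir(LOSTFOUND)):
    path = os.path.join(LOSTFOUND, name)
    st = os.lstat(path)
    if stat.S_ISDIR(st.st_mode):
        kind = "directory - look inside for clues to its old parent"
    elif stat.S_ISREG(st.st_mode):
        kind = "text file" if looks_like_text(path) else "binary file"
    else:
        kind = "special file"
    print("%-12s %10d bytes  %s" % (name, st.st_size, kind))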

And since / is a critical filesystem, with its superblock and all
its inode information, keeping it separate instead of part of
everything is a prudent approach.  Putting everything in one
filesystem on large drives is like the secretary who filed
all the letters under L.

> And it -is- far more flexible to not have to plan (and live
> with) your filesystem sizes, which you may outgrow--or worse,
> waste space by allocating more than you'll ever actually need.

You should have a good idea of how much space the base OS is going
to use.  If not, you need to be in another business.  It is usually
not a real problem if you outgrow a Unix file system, as you can
symlink to other filesystems, or if you really need more space just
mount another drive at an appropriate point.  It's the MS systems
that make it hard.  And that's why things like Partition Magic
were created - because you are limited in restructuring by
the OS itself.
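
A minimal sketch of that symlink trick, with made-up paths - move a
directory that has outgrown its filesystem onto a roomier one and
leave a symlink behind so nothing that references the old path breaks:

#!/usr/bin/env python
# Minimal sketch of relocating a directory that has outgrown its filesystem.
# The paths are hypothetical; do this with the application stopped, and
# use a straight mount point instead if that fits the layout better.
import os
import shutil

OLD = "/u/appdata"     # directory on the filesystem that filled up
NEW = "/u2/appdata"    # same data, relocated to a filesystem with room

# Copy the tree to the new filesystem, preserving permissions and times.
shutil.copytree(OLD, NEW, symlinks=True)

# Move the original out of the way, then point a symlink at the new home.
shutil.move(OLD, OLD + ".moved")   # keep it until the copy is verified
os.symlink(NEW, OLD)

print("%s now points at %s; remove %s.moved once you're happy" % (OLD, NEW, OLD))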

> I may be being short-sighted, but as I said, that's what
> backups are -for-.

Backups are not the reason for poor planning that causes a reload.

...

> > One reason I do find to limit a fs to less than the size of
> > the disk is if the fs only performs well below a certain
> > size.

FS performance usually isn't limited by the size of the FS but by
the number of files in the directories in that fs.  Once a directory
grows big enough to spill into indirect blocks, you are in slow
territory.
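
Nothing scientific, but here's a rough way to see that effect for
yourself - pre-fill one directory with a handful of entries and
another with tens of thousands, then time how long it takes to add
files to each.  The counts are made up, and filesystems that index
their directories may not show much of a gap.

#!/usr/bin/env python
# Unscientific sketch: how directory size affects per-file operations.
# On filesystems with linear (unindexed) directories, every create has
# to scan the existing entries, so creates slow down as the directory
# grows.
import os
import tempfile
import time

def timed_creates(path, existing, new=1000):
    """Pre-fill a directory with `existing` files, then time `new` creates."""
    os.makedirs(path)
    for i in range(existing):
        open(os.path.join(path, "old%06d" % i), "w").close()
    start = time.time()
    for i in range(new):
        open(os.path.join(path, "new%06d" % i), "w").close()
    return time.time() - start

base = tempfile.mkdtemp()
for existing in (100, 50000):
    elapsed = timed_creates(os.path.join(base, "dir%d" % existing), existing)
    print("%6d existing entries: %.3f s to create 1000 more" % (existing, elapsed))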

> Well, that and swap.  I don't think I need 200gigs of swap.  :)

> > The possibility hadn't occurred to me until last year when
> > sco came out with an update to htfs so that it (supposedly)
> > no longer has a performance dive when the fs is larger than
> > about 50 gigs.

Not all OSes have the problem.  Remember that OSR5 had smaller
limits at first and didn't start out with the larger limits
associated with native SVR4 versions - such as UnixWare.

Bill
-- 
Bill Vermillion - bv @ wjv . com

