OT: redhat

Fairlight fairlite at fairlite.com
Sat Nov 6 02:16:15 PST 2004


Four score and seven years--eh, screw that!
At about Sat, Nov 06, 2004 at 04:15:08AM -0500,
Brian K. White blabbed on about:
[ah, the fine art of trimming...*NUDGE*]
> I have never actually seen a corrupt / or other fs. On SCO that is, yes, 
> I've seen ext2, ext3, ufs, ntfs, fat/fat32 corrupted, usually due to running 
> bleeding edge kernels or other software, or questionable hardware, so it 
> wasn't unexpected and so didn't hurt anything as it was never anything in 
> important production (except the ntfs, but there is no hope for a windows box
> no matter what you do).  I have seen hard drives die and programming or 
> sysadmin screwups, but never a corrupted fs on a sco box on officially 
> supported or at least reasonably well tested and known hardware.

I have.  It was on 3.2.4.2, and it was NOT pretty.  It dropped a few
hundred inodes into lost+found.  This actually happened on several systems,
not just one.

I haven't seen it happen on OSR5, but that's an entirely different fs, and
I've also had no occasion to look very often, as I don't administer SCO
boxes in general.  I'd only have cause to look if I suspected something
went AWOL.

> With tape backups and the fact that a dead or sick drive means you need 
> to restore all fs's at the same time anyway (or _none_ if you have raid) 
> and that separate fs's don't help or hurt or have any effect on sysadmin or 
> programmer screwups, and the fact that I've never yet seen a corrupt sco fs 
> that was not due to hardware, means that as far as I'm concerned separate 
> fs's just take away flexibility and provide nothing in return, certainly 
> nothing that's worth 1/50th what the flexibility is worth.

I hate to come in against Bill and Bill here, but I'm actually pretty
much with you on this.  In the case of serious disaster, decent backup
and disaster recovery schemes should be in force anyway, so it should be
moot.  And it -is- far more flexible to not have to plan (and live with)
your filesystem sizes, which you may outgrow--or worse, waste space by
allocating more than you'll ever actually need.  Decent planning is
definitely in order, but sometimes what's actually done with a system bears
little resemblance to what's -planned- for the box, as I'm sure we've all
seen at one time or another.

I personally used to partition everything individually--even /usr.  I'll
nowadays either accept the vendor defaults with SuSE, or tweak them if I
know of a reason I should.  But I'm not averse to having one large
/ fs, and it does take a lot of the stricture off of where you can
place things.

I may be being short-sighted, but as I said, that's what backups are -for-.

> Ultimately, it has so far worked out in real life that separate fs's have 
> cost me time several times and never saved me any.

Ditto, although with creative use of relocation and symlinking, most issues
go away pretty quickly assuming you have -any- partition with enough space
to accommodate what actually needs relocating.
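
For anyone who hasn't played that game, the trick is nothing exotic; it goes
roughly like this (the paths are made-up examples, and /home just stands in
for whichever partition actually has the room):

    # shut down whatever writes to the directory first, then
    # move the space hog over to the roomy partition...
    mv /var/spool/bigdata /home/bigdata
    # ...and leave a signpost behind so anything with the old
    # path hard-coded still finds it
    ln -s /home/bigdata /var/spool/bigdata

On an older box whose mv balks at moving a directory across filesystems, a
tar pipe does the copy and you remove the original by hand, but the end
state is the same: the data lives where the space is, and the symlink keeps
the old path honest.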

> They do work as an effective diagnostic alert. "/ filled up, something must 
> be out of whack..." then you find it and some mail or print spool problem 

You know, I was thinking about this earlier and realised that with all the
talk of keeping data files separate, I've walked into a lot of extant
systems where someone put fP on / and outgrew their allocated space, never
having made an /appl filesystem.  And isn't /appl still the default?

> One reason I do find to limit a fs to less than the size of the disk is if 
> the fs only performs well below a certain size.

Well, that and swap.  I don't think I need 200gigs of swap.  :)

> The possibility hadn't occurred to me until last year when sco came out with 
> an update to htfs so that it (supposedly) no longer has a performance dive 
> when the fs is larger than about 50 gigs. I had very few sco fs's that were 
> over 36 gigs at that time so I never knew there was that issue but now I do, 
> and now it's supposedly not a problem any more but the possibility still 
> remains just probably at higher numbers and in any event as something to 
> keep in mind for any fs / any os.

I have a 189gig NTFS on win2k.  I haven't had it so much as blink at me
oddly.  Performs better than I'd expect from Redmond, as well.  I hate to
actually come up on the side of M$ on anything just on principle, and I do
find a lot of fault with their software (IE, OutHouse, FrontDoor--er, Page,
et al), but I actually...well...kinda like win2k.  It's stable, I haven't
had any problems with it, and it has a pretty low crash rate for as much as
we thrash the system.  I dunno...if all M$ software was at least this
stable (I won't give it secure, but I'll give it fairly stable), I might
not be so anti-MS.  I can at least say -something- good about them nowadays,
even if it's not much.

mark->
-- 
Bring the web-enabling power of OneGate to -your- filePro applications today!

Try the live filePro-based, OneGate-enabled demo at the following URL:
               http://www2.onnik.com/~fairlite/flfssindex.html

