Disk caching and data reading (was Re: create())

Fairlight fairlite at fairlite.com
Fri Sep 6 16:57:04 PDT 2013


On Fri, Sep 06, 2013 at 11:37:23AM -0400, Kenneth Brody thus spoke:
> It doesn't matter when (or even "if", for the case at hand) the data is
> written from the cache to the physical device.  As soon as the data is
> placed in the cache (which will happen prior to any[*] write call), it is
> available to be read by any other program.
>
> There seems to be a misconception among some people that reads will
> somehow circumvent any write cache, and that modified data won't be seen
> until the cache is flushed to the physical device.  This is simply not
> true -- if the data is already in cache, then any[**] read will fetch the
> data from that cache.

I had always thought of the cache as being FIFO, but from what you're
saying, it at least presents a FIFO-like appearance to readers, even when
it's not purely FIFO internally.  Either way, I agree with you that it
should not be problematic.
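
As a quick illustration of what you're describing, here's a little sketch
of my own.  It's nothing filePro-specific; it assumes ordinary POSIX
open/write/read semantics, and the file name is just a throwaway.  Data
written by one process is visible to a second process reading the same
file right away, with no fsync() anywhere in sight:

/* Sketch (mine, POSIX assumed): data written via write() is visible to
 * another process immediately, without any fsync(), because reads are
 * served from the same kernel cache the write landed in. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

int main(void)
{
    const char *path = "/tmp/cache_demo.dat";    /* throwaway test file */
    const char *msg  = "hello from the writer\n";

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, msg, strlen(msg)) < 0) { perror("write"); return 1; }
    close(fd);                                   /* note: no fsync() */

    pid_t pid = fork();
    if (pid == 0) {                              /* child acts as the reader */
        char buf[128];
        int rfd = open(path, O_RDONLY);
        if (rfd < 0) { perror("open (reader)"); _exit(1); }
        ssize_t n = read(rfd, buf, sizeof(buf) - 1);
        if (n < 0) { perror("read"); _exit(1); }
        buf[n] = '\0';
        printf("reader saw: %s", buf);           /* prints the writer's data */
        close(rfd);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    return 0;
}

The reader prints the data even though nothing has necessarily hit the
platters yet.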

> The only problems I am aware of with directories with lots of files is
> slower open/close functions.  Once open, it doesn't matter how many files
> are in that directory.  (In fact, on *nix systems, you can even delete
> the open file itself, and continue using it.)

Open/close on files in heavily populated directories should not be slow.
The only operation I know of, at least on *nix systems, that is adversely
affected by heavily populated directories (more than about 500 files) is
readdir(), because double inode redirection kicks in at around that point.
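
Your aside about deleting an open file is easy to demonstrate, too.
Another little sketch of my own (POSIX assumed, the file name is just a
throwaway): the name vanishes from the directory at unlink(), but the
descriptor keeps working until it's closed:

/* Sketch (mine, POSIX assumed): unlink() an open file and keep using it.
 * The directory entry goes away immediately, but the inode and its data
 * stay alive until the last descriptor referencing it is closed. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    const char *path = "/tmp/unlink_demo.dat";   /* throwaway test file */
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (unlink(path) < 0) { perror("unlink"); return 1; }  /* name is gone */

    const char *msg = "still readable and writable\n";
    if (write(fd, msg, strlen(msg)) < 0) { perror("write"); return 1; }

    if (lseek(fd, 0, SEEK_SET) < 0) { perror("lseek"); return 1; }
    char buf[64];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n < 0) { perror("read"); return 1; }
    buf[n] = '\0';
    printf("read back after unlink: %s", buf);

    close(fd);   /* only now does the kernel free the inode and blocks */
    return 0;
}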

If someone has over 2000 files in one directory, even on Windows, they need
to re-evaluate their directory use.  My c:\temp has 954 files in it right
now, and that's after five years of being the near-exclusive download
target for every browser, with very few clean-ups along the way.  (I'm a
packrat, especially with installers, and I actually tend to use that
directory in favour of anything under My Documents, which I barely use at
all.)

> > A badly fragmented disk will cause an incomplete write problem or a
> > delay.

Wrong.  It can cause a delay, but it will never cause an incomplete write
of the data.
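
For what it's worth, the only "incomplete write" an application ever sees
at that level is write() returning a short count, and that's something the
application deals with itself; fragmentation doesn't enter into it.  The
usual loop looks something like this (another sketch of mine, POSIX
assumed):

/* Sketch (mine, POSIX assumed) of the standard full-write loop: if
 * write() ever returns fewer bytes than requested, the caller simply
 * continues from where it left off until everything has been handed
 * to the kernel. */
#include <unistd.h>
#include <errno.h>

ssize_t write_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    size_t left = len;

    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n < 0) {
            if (errno == EINTR)
                continue;        /* interrupted by a signal: just retry */
            return -1;           /* genuine error */
        }
        p += n;                  /* short write: advance and keep going */
        left -= (size_t)n;
    }
    return (ssize_t)len;
}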

> > If you're chasing a virus problem, empty the temp folder. Some
> > antivirus software delete temp stuff too.
>
> I would call such an AV program "horribly broken".  Unless it detects a
> virus in one of those files, it has no business touching *anything*.

Any A/V program that automagically deletes files without requiring approval
of the user should immediately be removed from the system.

> > If MS just 'fixed what we have' and called it a new OS, I'd buy.
>
> I find it hard to believe that Windows is "broken" in this regard.  If
> it were, it couldn't function.  And, in my more than 25 years experience
> with Windows, I personally have never seen the situation described.

I haven't either.  I mean no offence to anyone (though some will doubtless
be taken or construed), but sometimes I think people just make this stuff
up at random, or hallucinate it.  Then it gets related in fora such as
this, and attains the status of virtual urban legend.

mark->
-- 
Audio panton, cogito singularis.

