Too Many Open Files (was Re: The CLOSE command)
Fairlight
fairlite at fairlite.com
Fri Feb 17 16:57:33 PST 2006
This public service announcement was brought to you by Jeff Harrison:
>
> Ah, but why not have filepro do it automatically?
> There are many instances where filepro does the
> reasonable thing - such as automatically protecting
> lookups when a write occurs.
Erm...then what's the -p for in the lookup statement? Would it not then be
superfluous?
> In my opinion most filepro programmers are "close
> happy". By that I mean they issue unnecessary closes
> to files all the time - often just to use it again
> soon after the close - instead of letting filepro
> handle the closing where it can and should.
>
> This issue, unfortunately, could well have the result
> of making programmers even more "close happy".
I can't speak to the issue of people trying to use something again after
they've closed it.
However, the concept of "close happy" obviously comes from someone who has
never had to write anything that needs to be as resource-conservative as
possible. You know, there -are- file descriptor limits per-process,
per-user, and per-system on various operating systems. How many and how
configurable these limits are depends on the OS. Exceeding (or trying to
exceed--or just failing to stay within the constraints of) those limits can
lead to Very Bad Things[tm].
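To put a concrete face on it, here's a quick C sketch (nothing fP-specific,
just the underlying OS behaviour, and purely illustrative) that reads the
per-process descriptor limit and then opens files without ever closing them
until the kernel says no:

    /* Illustrative only: probe the per-process fd limit, then keep
     * opening files without closing them until the kernel refuses. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        int count = 0;

        if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
            printf("soft fd limit: %llu\n",
                   (unsigned long long)rl.rlim_cur);

        /* Hoard descriptors the way a close-shy program does. */
        while (open("/etc/passwd", O_RDONLY) != -1)
            count++;

        /* Typically fails with EMFILE: "Too many open files". */
        printf("%d opens succeeded, then: %s\n", count, strerror(errno));
        return 0;
    }

Drop the soft limit first (say, ulimit -n 64 in the shell) and you hit the
wall almost instantly--which is exactly what a descriptor-hoarding process
does to itself, just more slowly.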
CLOSE() is -not- just for the system-level language programmer (C, etc.).
It's good practice to use as few resources as possible at any given point
in -any- environment. Doing so leads to faster, leaner, more efficient
software.
> In my opinion you should only close lookups when there
> is a need to do so. Failed or not I would GENERALLY
Only bother closing your door if you see someone has come in and stolen
your stereo? Is that about it?
It's all about the old ounce-of-prevention saw. If the resource is not
going to be needed again (so you never pay for another lookup or open), a
close costs you nothing and can net you increased stability and efficiency.
"Programmers" that open a cartload of things and let the final exit() close
everything for them are a never-ending source of amusement to those of us
that know better. The black box of fP magically closing files is about the
same scenario--you're still using far more than you need to until the final
exit if you have 20 or 30 open and don't need more than 2 open by the time
you've gotten there.
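The habit I'm actually advocating, sketched in C rather than fP (again,
illustrative only): close each file the moment you're done with it, and you
never sit on dozens of descriptors you no longer need.

    /* Sketch: process a list of files one at a time.  Each fd is
     * closed as soon as we're finished with it, so the process never
     * holds more than one data file open, however long the list is. */
    #include <fcntl.h>
    #include <unistd.h>

    static void process_all(const char **paths, int n)
    {
        char buf[4096];

        for (int i = 0; i < n; i++) {
            int fd = open(paths[i], O_RDONLY);
            if (fd == -1)
                continue;                /* skip unreadable files */

            ssize_t len;
            while ((len = read(fd, buf, sizeof buf)) > 0)
                ;                        /* ...do something with buf... */

            close(fd);                   /* done with it: give it back now */
        }
    }

    int main(void)
    {
        const char *paths[] = { "/etc/hosts", "/etc/passwd" };
        process_all(paths, 2);
        return 0;
    }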
Saying you shouldn't close() unless absolutely necessary is tantamount
to saying programs should never free() unless a malloc() fails and you
need yet more resources. Well, not quite in fP's case, because the magical
automatic file closing does give you some measure of protection. But as far
as general programming form goes, have you ever -seen- a
serious memory leak, or seen a program run out of fd's and subsequently
crash and burn? Do you know what causes that? SLOPPY CODING. Sure, you
don't -have- to undef that huge 3MB hash or array in Perl, either--but
you can make things FAR easier on the system if you do. Just because
you don't have to doesn't mean it's not a good idea. Doesn't matter what
language...tight code is tight code, period, the end. Ditto for sloppy
code.
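Same principle with memory, again just a C sketch and nothing fP-specific:
hand the big allocation back as soon as that phase of the work is finished,
instead of letting exit() do your housekeeping.

    /* Sketch: a large scratch buffer is only needed for one phase of
     * the run.  Free it as soon as that phase is over rather than
     * carrying 3MB of dead weight until the program exits. */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *scratch = malloc(3 * 1024 * 1024);
        if (scratch == NULL)
            return 1;

        memset(scratch, 0, 3 * 1024 * 1024);  /* ...the expensive phase... */

        free(scratch);                        /* done: release it now */
        scratch = NULL;

        /* ...the rest of the run carries on without the 3MB deadweight. */
        return 0;
    }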
The more I hear sometimes, the more I believe 4GL coding is akin to AOL or
WebTV. It lets any idiot do it--and thus too many do. Some of the things
I've heard full-time professionals argue in favour of are things that any
halfway seasoned part-time hobbyist should know better than to do. A fact
that never ceases to astound and depress me, I might add.
mark->