Large processing tables
Brian K. White
brian at aljex.com
Sun Feb 25 23:27:29 PST 2007
----- Original Message -----
From: "Fairlight" <fairlite at fairlite.com>
To: <filepro-list at lists.celestial.com>
Sent: Sunday, February 25, 2007 11:02 PM
Subject: Re: Large processing tables
> Jumping into this late, and on a follow-up reply. I missed most of the
> thread, but had a question on seeing this...
>
>> On Fri, Feb 23, 2007 at 09:50:51AM -0500, Kenneth Brody wrote:
>> > Pre-5.6, filePro processing was limited to 128K of compiled code.
>> > This was increased to 2MB in the 5.6 release. It's possible that
>> > you have run into some problem related to this.
>
> Can I just ask why there's a limit at all?
>
> I don't get it. From my design standpoint, if you run up against legacy
> limits and have to rewrite code, why not just be cost-effective and
> effort-effective and rewrite it with fully unlimited dynamic memory
> allocation (to the limits of available system resources) instead of
> just raising them a bit? If the 9999-line limit (assuming it still
> exists) is run up against, I can easily imagine fP-Tech just raising it
> to 20k or something rather than making it unlimited. I haven't heard
> otherwise, so I assume people are still saddled with the burden of a
> (32K - 1 byte) dummy field size limit. People wanted more automatic
> indexes, so they got what--another six, rather than unlimited.
>
> You know, malloc() exists for a reason. There are also ways to get
> around legacy design problems (e.g., just off the cuff, you couldn't
> use double-letter indexes because they'd be taken as dummy fields, but
> you -could- have implemented an index() casting function to tell it,
> "Hey, override the 2-character supposition, this IS an index, use it as
> such."). Old behaviour would still have worked, and new behaviour could
> have been vastly expanded compared to what it was. Maybe that's not the
> only consideration, but you get my point.
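>
> For illustration only -- a minimal C sketch of the kind of growable
> buffer I mean, using a simple doubling strategy (the names here are
> hypothetical, nothing from filePro's internals):
>
>     #include <stdlib.h>
>     #include <string.h>
>
>     typedef struct {
>         char   *data;
>         size_t  len, cap;
>     } buf_t;
>
>     /* Append n bytes, doubling capacity as needed. The only hard
>      * ceiling left is what realloc() can hand us. */
>     int buf_append(buf_t *b, const char *src, size_t n)
>     {
>         if (b->len + n > b->cap) {
>             size_t newcap = b->cap ? b->cap : 64;
>             while (newcap < b->len + n)
>                 newcap *= 2;
>             char *p = realloc(b->data, newcap);
>             if (p == NULL)
>                 return -1;   /* out of memory: the real limit */
>             b->data = p;
>             b->cap  = newcap;
>         }
>         memcpy(b->data + b->len, src, n);
>         b->len += n;
>         return 0;
>     }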
>
> There's something to be said for the options afforded by a robust, "no
> artificial limits" design in every aspect that won't break backwards
> compatibility--and I think that would be the majority of cases.
>
> In every case I can remember hearing regarding filePro and a limit, I
> remember the limit simply being raised, not eradicated. One of these
> days it would be nice to see, "...changed to unlimited."
>
> It's not personal, I just -don't- understand, so please explain it to
> me. Why rewrite it once every 5, 10, 15 years and risk breaking it in
> each incarnation? Why not just do it -once- for good, as long as you
> have to make a change anyway, and give whatever area you're working on
> as unlimited a capacity as possible? I don't understand the continued
> adherence to a piecemeal, gradualist approach. It defies understanding
> without a reasonable explanation--at least it does for me. Yes, I'm
> likely the exception. What else is new? :)
Defies understanding? Are you that unimaginative?
It's been my guess that all those various static limits and structures (like
the flat file format itself) are a large part of what makes fp fast.
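
Roughly speaking -- and this is a generic sketch of fixed-width record
I/O, not a claim about filePro's actual on-disk layout -- a static
record size means any record is one multiplication and one seek away:

    #include <unistd.h>
    #include <sys/types.h>

    #define RECSIZE 512   /* hypothetical fixed record size */

    /* With fixed-width records, record n starts at a computable
     * offset, so no scanning or index walk is needed to find it. */
    ssize_t read_record(int fd, long n, char *out)
    {
        if (lseek(fd, (off_t)n * RECSIZE, SEEK_SET) == (off_t)-1)
            return -1;
        return read(fd, out, RECSIZE);
    }

Variable-length, dynamically allocated structures give up that O(1)
lookup, which is part of the trade-off.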
Brian K. White -- brian at aljex.com -- http://www.aljex.com/bkw/
+++++[>+++[>+++++>+++++++<<-]<-]>>+.>.+++++.+++++++.-.[>+<---]>++.
filePro BBx Linux SCO FreeBSD #callahans Satriani Filk!