limits on called program's variable and/or array space?
Bruce Easton
bruce at stn.com
Tue Jul 17 15:43:21 PDT 2018
Sorry - one typo near the end:
"rcabe cust -f rptrealfields -y nnnnn" should not include the "-f",
which is not a flag for rcabe.
On 7/17/18 6:35 PM, Bruce Easton via Filepro-list wrote:
> Rich - regarding tokenization, because filepro allows you the
> flexibility to override its default assumptions, care should be taken
> when tokenizing a processing table to tokenize the table against the
> auto table *you intend* it to go with - even if you intend it not to
> be tokenized against an auto table. If you don't give explicit
> instructions to rcabe on what to do (which you may be doing from the
> main filepro menu option, depending on the command line there),
> filepro will, by default, try to tokenize against "auto.prc"
> ("prc.automatic" for *nix) if it exists for the filepro file in question.
>
> So if you don't even have an "auto.prc", and you don't want to use any
> auto table, then no worries. But let's say you do have an "auto.prc"
> that you use with your input table for data entry, and that you have a
> report where you don't want that report processing to run against your
> "auto.prc" that you use for input. You should not only tell filepro
> not to use the auto table at runtime by overriding the default
> assumption with the "-y" flag with a non-existent name, but you should
> also, when tokenizing the output table for your report, do this in the
> same way to tell filepro not to try to use the default:
>
> rcabe cust myreport -y nnnnn '<--assumes there is no nnnnn.prc in
> file cust
> I think you can also just use double quotes:
> rcabe cust myreport -y ""
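>
> The runtime call for that report then carries the same override, e.g.
> (again assuming there is no nnnnn.prc in file cust):
>
> rreport cust -f myreport -y nnnnn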
>
> For your purposes, it seems data input is not even involved. Before
> called tables existed, the auto table could be used to share common
> processing. For instance, if one third of your reports fills dummy
> fields for output in one way, another third fills a different set in
> another way, and the remaining third reports only real fields, you
> could do most of the processing with just two auto tables for all of
> the reports (if the only purpose of your output processing is to fill
> dummy fields to be shown on reports). In such a case you would not
> only want to flag the output runs to indicate your intention:
>
> rreport cust -f myreportx -y autox
> rreport cust -f myreportz -y autoz
> rreport cust -f rptrealfields -y nnnnn
>
> but to be safe (avoid dummy field conflicts between prc tables, only
> have filepro use memory for things you're actually using in a
> request, etc.), you should tokenize similarly prior to using these
> output requests:
>
> rcabe cust myreportx -y autox
> rcabe cust myreportz -y autoz
> rcabe cust -f rptrealfields -y nnnnn
>
> If you start using auto tables for some reason, but frequently want to
> tokenize tables without using the auto table, you could always add an
> option to the filepro main menu for the latter purpose:
>
> rcabe - -y ""
>
>
>
> On 7/17/18 3:33 PM, Richard Veith via Filepro-list wrote:
>> I didn't understand the following:
>> "it is important to tokenize a processing table
>> with the auto processing table to be used.
>> When no auto table is “used”, then tokenize
>> with a non-existent auto table name.
>> That will allow memory to be allocated correctly."
>>
>> To be clear, in our situation we are running output processing tables
>> only, from a batch file. No users, no user screens, no auto table.
>> This is all in-house, on a standalone PC, no Internet connection, and
>> no delivery of code to anybody else. The batch file contains either
>> " DREPORT MyDatabase -f MyProg1 -s MySelSet1" or " DREPORT
>> MyDatabase -f MyProg1 -i6"  depending upon whether we are using
>> a selection set or an index (more about this in the next paragraph).
>> MyProg1 is the calling program, and it calls MyProg2. When in the
>> processing table editor, and saving any edits, we select Y for
>> checking syntax, and Y for "Create Runtime Table". What else is
>> involved in "tokenizing a processing table"?
>>
>> My next question has to do with the difference between building the
>> index using DXMAINT in the batch file (e.g., " DXMAINT MyDatabase -O6
>> -RF 7,5,A:4,23,A:41,11,D -S GetSubset -E" ) and building it
>> manually. If I create the index manually, for a given situation, it
>> will take about 10 minutes to build the index, and then running the
>> programs from the batch file will take about 4 minutes. On the
>> other hand, if we build the index using DXMAINT in the batch file
>> (same build parameters), and then in the next line run our same
>> programs using that index, it will take 40 minutes to run the output
>> processing table, not counting the time it took to build the index.
>> Ten times as long for the output processing. What could be causing
>> the difference?
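>>
>> (For reference, the two batch steps in question, using exactly the
>> commands shown above -- first build the index, then run the report
>> against it:)
>>
>> DXMAINT MyDatabase -O6 -RF 7,5,A:4,23,A:41,11,D -S GetSubset -E
>> DREPORT MyDatabase -f MyProg1 -i6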
>>
>> Thanks,
>> Rich
>>
>> -----Original Message-----
>> From: Richard Kreiss [mailto:rkreiss at verizon.net]
>> Sent: Monday, July 16, 2018 10:25 AM
>> To: Richard Veith <richard.veith at smrresearch.com>; Filepro 2 List
>> <filepro-list at lists.celestial.com>
>> Subject: Re: limits on called program's variable and/or array space?
>>
>> I have operated under the impression that an array was global.
>>
>> I think that *clerk.exe and *report.exe allocate memory for arrays
>> and variables at the start of the run. This is why it is important to
>> tokenize a processing table with the auto processing table to be used.
>> When no auto table is “used”, then tokenize with a non-existent auto
>> table name. That will allow memory to be allocated correctly.
>>
>> One could dim an array in @once processing or in a sub-routine that
>> only runs at the start of processing. That avoids the program hitting
>> a dim or declare more than once.
>>
>> Only one of the FilePro programmers can answer the question of how
>> much memory the executables can access. I would expect that the 64-bit
>> version will eventually be able to make better use of available memory.
>>
>> Richard
>> Sent from my iPhone
>>
>>> On Jul 16, 2018, at 9:00 AM, Richard Veith via Filepro-list
>>> <filepro-list at lists.celestial.com> wrote:
>>>
>>> I appreciated all the comments and suggestions, but after much
>>> testing I discovered that there was one variable in the called
>>> program that did not have a declared size (even though I thought I
>>> had taken care of that), and fixing that resolved the problem.
>>> None of the other suggestions would get the programs to run
>>> successfully without that one change. (As noted in the original
>>> post, there is a calling program and a called program, and it was
>>> the called program that was causing the abort when the pair of
>>> programs was run on all records in a 22 million record database,
>>> producing only a Windows 7 message saying "dreport.exe has stopped
>>> working" when about 80% through the database.)
>>>
>>> But I still have some questions:
>>> 1. Is PFFORMTOKSIZE the only env variable that affects the size of
>>> space available for a called program's variables?
>>> 2. If so, is there an upper limit other than 999,999 in FilePro
>>> 5.7.00D9? (Varying the size from 200000 up to 990000 had no impact;
>>> see the batch sketch after this list.)
>>> 3. Do the calling programs and called programs share a limited
>>> amount -- and the same overall limited amount -- of FilePro-reserved
>>> memory regardless of overall RAM? And does more reserved for the
>>> calling program mean less reserved for the called program? (On our
>>> standalone PC with 16 GB of RAM and no other user programs running,
>>> there was always about 12 GB of free RAM whether these two FilePro
>>> programs were going to abort or not.)
>>> 4. One of my tests was to change the two arrays (not mapped to any
>>> record fields) in the called program from local to global, making
>>> sure I clear them at the start of each record's processing. This
>>> did not affect the abort/no-abort issue, but it had a big impact on
>>> processing time. When the arrays were NOT global it added over an
>>> hour to overall processing time. Is that because the arrays get set
>>> up once, rather than 22 million times? Would the same thing be true
>>> for dummy variables?
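>>>
>>> (The sketch referred to in question 2 -- roughly how an environment
>>> setting like PFFORMTOKSIZE gets varied ahead of the run in a batch
>>> file; this is illustrative, not our exact file, and reuses the names
>>> from my earlier message:)
>>>
>>> SET PFFORMTOKSIZE=990000
>>> DREPORT MyDatabase -f MyProg1 -s MySelSet1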
>>>
>>> Thanks again for any comments or suggestions.
>>> Rich
>>>
>>> -----Original Message-----
>>> From: Richard Kreiss [mailto:rkreiss at verizon.net]
>>> Sent: Friday, July 06, 2018 5:14 PM
>>> To: Richard Veith <richard.veith at smrresearch.com>
>>> Subject: RE: limits on called program's variable and/or array space?
>>>
>>> I realize that many have responded earlier. But one thing came to
>>> mind this afternoon: have you considered that there may be an issue
>>> with disk space?
>>>
>>> filePro creates a temp file with data for the output. I have some
>>> reports with 8 sort levels which can create a fairly large temp
>>> file. The size depends on the size of the fields being indexed.
>>> Just take a look at the size of indexes.
>>>
>>> If ACL is being used on a Windows machine, one variable is the amount
>>> of space a user can use; this may be causing an issue if that
>>> limit is reached.
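>>>
>>> (A quick way to eyeball the sizes involved from a command prompt --
>>> the path below is only a placeholder for wherever the filepro data
>>> and temp files for this file actually live:)
>>>
>>> rem list files in the filepro data directory, largest first
>>> dir /O-S C:\filepro\MyDatabase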
>>>
>>> Just something more to think about.
>>>
>>>
>>> Richard Kreiss
>>> GCC Consulting
>>>
>>>
>>>
>>>> -----Original Message-----
>>>> From: Filepro-list [mailto:filepro-list-
>>>> bounces+rkreiss=verizon.net at lists.celestial.com] On Behalf Of
>>>> Richard Veith via Filepro-list
>>>> Sent: Tuesday, July 3, 2018 8:34 AM
>>>> To: Filepro-list at lists.celestial.com
>>>> Subject: RE: limits on called program's variable and/or array space?
>>>>
>
> _______________________________________________
> Filepro-list mailing list
> Filepro-list at lists.celestial.com
> Subscribe/Unsubscribe/Subscription Changes
> http://mailman.celestial.com/mailman/listinfo/filepro-list