Slow issue with filepro on Windows 2003 / XP clients - Just Started
Nancy Palmquist
nlp at vss3.com
Thu Jan 7 07:45:27 PST 2010
Kenneth Brody wrote:
> On 1/5/2010 3:35 PM, Nancy Palmquist wrote:
>> Guys and Gals,
>>
>> Filepro 5.0, Windows 2003 server, XP Clients
>>
>> **** NOTHING HAS CHANGED in the indexes or programming related to this
>> database for months ****
>> Customer does not know of any change to the OS or network. We tried it
>> from a computer with the anti-virus software turned off, just to be sure
>> that was not the issue.
>
> How about anti-virus software on the server?
Checked that - same before and after the incident.
>
>> Looking for a reason that filepro should start to take 4 to 30 seconds
>> to add a record to a file. This behavior started yesterday.
>
> By "add a record", do you mean entering update on a new record, or
> saving the new record?
>
> How does the speed compare to the same part of updating an existing
> record?
>
> What about deleting a record?
The users do not delete records - the file tracks checks and charges.
The users most often add records to this file via a lookup r=free kind
of thing.
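For reference, the add is basically this shape - a minimal sketch, with
the file name, lookup alias, field numbers and values all made up here
rather than the real ones, and the lookup flag may not match what the
real processing uses:

   lookup chk = checks r=free -e
   chk(1) = "1234"
   chk(2) = @td
   chk(3) = "100.00"
   write chk

The lookup r=free pulls the next record off the freechain, the
assignments load the data, and the WRITE saves the new record.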
Our tests involved making an @key in another file that would add 8
records of various types to the check file. It tracked the time
required from start to finish of that step. One user - 4 to 9 seconds
to add 8 records. Two users running at the same time - 45 to 120 secs
for the same 8 records each.
When we got it fixed, it was less than one second for one or two people
to add 8 or 16 records.
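In case it is useful, the processing under the @key label was roughly
this shape - again just a sketch with made-up names and values, only two
of the eight adds shown, and @tm standing in for whatever time stamp the
log actually records:

   t1(10) = @tm
   lookup chk = checks r=free -e
   chk(1) = "1234"; chk(2) = @td; chk(3) = "100.00"
   write chk
   lookup chk = checks r=free -e
   chk(1) = "1235"; chk(2) = @td; chk(3) = "25.00"
   write chk
   t2(10) = @tm
   show "started " < t1 < "  finished " < t2
   end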
Nancy
>
>> The file has 3.0 million checks/charges, and has .5 million free
>> records.
>>
>> First thing we did was run freechn on the file.
>
> Unless the problem is entering update mode on a new record (ie:
> removing it from the freechain), the freechain is not part of the
> equation.
>
> Upon entering update on a new record (ie: creating it), the record is
> removed from the freechain. Anything done to that record, except for
> deleting it (or canceling the creation), won't touch the freechain.
>
>> The indexes have been rebuilt twice today and finally rebuilt after
>> being removed first.
>> (File has 5 absolutely required indexes, one index that is optional. We
>> removed the optional one with no improvement.)
>
> There's no way to do a quick "let's create a new record" test without
> all the indexes in place?
>
We did that after hours and I posted the results.
>> The server has been rebooted.
>>
>> Processes that add records to this file seem to get slower and slower
>> the more people are working with the check file.
>>
>> One person adds a record to this file in about 8 sec. With two people,
>> both seem to take as much as 30 sec to add a record to the same file.
>>
>> It goes downhill from there.
>
> Are these simultaneous creations, or just each successive creation
> that gets slower and slower?
>
The first person back on the system was slow but not impossible (a 4 sec
wait); adding a second person made it impossible, with waits as long as
1 minute. It did not improve from there, it just got worse.
>> Other processes that add records to other files do not seem to have any
>> issue.
>>
>> We have done everything we can think of to try to find out what is
>> dragging it down.
>>
>> I generate a log file, and the jumps of 5 to 25 seconds in time happen
>> exactly when a record is added to the check file.
>>
>> A routine that opens a file to a free record, loads some data and writes
>> the record. A total of 4 lines of code.
>
> So this is a free-record lookup, and not "add records"? (Not that
> this invalidates anything I said above, just the method of testing
> things.)
>
> Is the slowness the lookup itself or the write?
I did not get that granular with the test. I put log entries before the
lookup and after the write, so it could be either; I expect the WRITE
would be the problem.
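If it would help, I can rerun the same four lines with a time stamp
captured between the lookup and the write, so the log shows which side
is eating the time - something along these lines (a sketch, with made-up
names and values, and show standing in for however the log gets written):

   t1(10) = @tm
   lookup chk = checks r=free -e
   t2(10) = @tm
   chk(1) = "1234"; chk(2) = @td; chk(3) = "100.00"
   write chk
   t3(10) = @tm
   show "lookup " < t1 < " to " < t2 < "   write " < t2 < " to " < t3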
>
>> Anyone have any suggestions? We are desperate to get the users working
>> again at full speed.
>
> Can you make a quick test processing that does just the lookup,
> assignments, and write? Ignoring the initial overhead of the first
> lookup to the file, do you see the same slowness? With this quick
> test, can you run it without the indexes?
>
> Ditto for deleting the "fake" records that you create.
>
I posted the results for our tests. I did not see your email before we
figured out what we needed to do.
We had two users write 8 records each, with a lookup and write for each
record. It hung so badly that I changed the last WRITE to a CLOSE to
see if that helped in some way.
That did allow us to run the @key process over and over without leaving
the file, something that just using WRITE did not do.
It did not improve the speed.
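The change itself was only on the last record of the batch - a sketch of
what the final add looked like after the swap (names and values made up
as before):

   lookup chk = checks r=free -e
   chk(1) = "1241"; chk(2) = @td; chk(3) = "50.00"
   close chk

As far as I can tell, the CLOSE still saves the record the way WRITE
does, but it also releases the lookup file, which is what let us run the
@key process over and over without leaving the file.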
As I said earlier - I can collect data for you. I kept the broken stuff.
Nancy
--
Nancy Palmquist MOS & filePro Training Available
Virtual Software Systems Web Based Training and Consulting
PHONE: (412) 835-9417 Web site: http://www.vss3.com