Sanity check

Richard Kreiss rkreiss at gccconsulting.net
Wed Nov 18 09:29:05 PST 2009



> -----Original Message-----
> From: Nancy Palmquist [mailto:nlp at vss3.com]
> Sent: Wednesday, November 18, 2009 12:04 PM
> To: rkreiss at gccconsulting.net
> Cc: filepro-list at lists.celestial.com
> Subject: Re: Sanity check
> 
> Richard Kreiss wrote:
> > Having an odd problem at a client's site.  Windows Server 2008, Windows XP
> Pro client running through Terminal Server, filepro 5.6.06
> >
> > In this particular file, I am logging changes made to specific fields. A new
> record in log_file is created with the original value and the new value plus code
> number, who made the change, date and time.
> >
> > They are getting a sanity check error when they try to change values for only
> one doctor code, and not for all of the records associated with this doctor.
> >
> > The program does a lookup free, posts the data, and then I have a write
> >
> >        ◄ If:                                                                   ◄
> >        Then: lookup log = logfile  r=free  -n                                  ◄
> > 1567 -------   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
> >        ◄ If:         NOT log                                                   ◄
> >        Then:         RETURN                                                    ◄
> > 1568 -------   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
> >        ◄ If:                                                                   ◄
> >        Then:         log(1)=@fi;log(2)=master_code;log(3)=@td;log(4)=@tm;log(5)>
> > 1569 -------   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
> >        ◄ If:                                                                   ◄
> >        Then:         log(7)=whoareyou;log(8)=fd;log(9)=fieldname(-,fd);log(10)=>
> > 1570 -------   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
> >        ◄ If:                                                                   ◄
> >        Then:         write log;fd="";ivalue="";nvalue=""                       ◄
> >
> > The sanity check crash comes at line 1570 with the write.
> >
> > I went in and changed this field on dummy records that are in this file and had
> no problem.
> >
> > Sunday night I deleted all of the indexes for the log file and rebuilt them per
> fpsupport's suggestion.  This has not solved the problem.
> >
> > Any suggestions as to what I might need to be looking for?
> >
> >
> >
> > ─────────────────────────────────────────────────────── Bad ──
> > ┌────────────────────────────────────────────────────────────┐
> > │  Sanity check failure!  split() len < 0                    │
> > │ tptr = tptr=(6,26932,12845,1,0)                            │
> > │ file='logfile@' index=?                                    │
> > │ which=1, nodesize=1024, entries[which]=13365               │
> > │ count=12845, bound=7195, entries[1]=13365                  │
> > │                                                            │
> > │ Write this information down, save a copy of the above      │
> > │ index, and press  Enter  to generate a core dump to        │
> > │ send to fpsupport, along with the index.                   │
> > └────────────────────────────────────── Press  Enter  ──────┘
> >
> >
> >
> > ┌────────────────────────────────────────────────────────────┐
> > │  Sanity check failure!  split() len < 0                    │
> > │ tptr = tptr=(6,27063,12589,1,0)                            │
> > │ file='logfile@' index=?                                    │
> > │ which=1, nodesize=1024, entries[which]=13364               │
> > │ count=12589, bound=7194, entries[1]=13364                  │
> > │                                                            │
> > │ Write this information down, save a copy of the above      │
> > │ index, and press  Enter  to generate a core dump to        │
> > │ send to fpsupport, along with the index.                   │
> > └────────────────────────────────────── Press  Enter  ──────┘
> >
> >
> > Richard Kreiss
> > GCC Consulting
> > rkreiss at gccconsulting.net
> >
> >
> >
> >
> > _______________________________________________
> > Filepro-list mailing list
> > Filepro-list at lists.celestial.com
> > http://mailman.celestial.com/mailman/listinfo/filepro-list
> >
> >
> Richard,
> 
> What are the indexes built on?  Are there any edits other than
> system edits on the indexed fields?  If so, do they exist in both files,
> or on the global table for all files to share?
> 
> Since this just adds records to the log file, try removing the indexes
> and see how it works.
> 
> If you get the log entries to post without the indexes, go from there.
> 
> If you have to, you can break the update down.  Create a free record,
> get the record number, and write the record.
> 
> Then look up and lock that record number, post one piece of data, and
> write the record.
> 
> Repeat until you locate the piece of data that makes it crash.
> 
> I gather this file does no more than track who changed what, for an
> audit.  The indexes could be built as demand indexes, run when you need
> to look for something.  It is awkward, but it avoids the automatic
> indexes that are probably causing your issues.
> 
> I have been fighting an issue with Windows and filepro that always
> occurs on the WRITE line - my best guess is that the index updates are
> crashing.  Mine causes a memory error with no info, so we have no way to
> track it down.  It seems to be network related or index-location
> related.  It cannot be reproduced on command; it will fail 50 out of
> 20,000 times it is executed.  There is no way to catch that with the
> debugger.  Rebuilding the indexes for the files in question seems to fix
> it for a while, but once it starts to crash, it cascades into
> worthlessness quickly.  I have changed the indexes as best I can to try
> to keep it solid.
> 
> I never have these kinds of issues on Unix/Linux machines; I think it
> has to be in the system calls or I/O handling on a Windows server.  A
> Unix system also seems to be more tolerant of bad switches and network
> issues.  With server-side processing, a Unix system does not traipse the
> data all over the network.  I think that makes a huge difference.
> 
> Nancy

Nancy,

Thanks for the suggestions.

The odd thing about this is that it is only occurring, for now, for one doctor code and only for 2 patients.  

I have created an archive qualifier and will move all of the records to that file.  Once this is done, I am going to delete the key segment (I don’t use the data segment), then add in about 100,000 blank records.  

As for the indexes, the fields are either alphanumeric or date fields.  The master code field uses a global edit.  There are only 4 automatic indexes on this file, so I could delete them; I have a batch file which can quickly rebuild them.

I did make one change in the routine which has been causing the problem: I changed the alias from log to flog.  I don’t think this was the cause of the problem, as the posting to this file would have failed in all cases.  But...

If the above doesn't solve the problem, I will try deleting the auto indexes.  One thing I could do is set up a task to run each night which would rebuild the demand indexes. 

The users are logging in through Terminal Server.  The server they log into is an application server, with the data on another server.  The two servers are linked through a gigabit switch.

That is one extra piece compared to *nix remote logins.


Richard Kreiss
GCC Consulting
rkreiss at gccconsulting.net
  





