rreport gagging on lockfile

Richard Kreiss rkreiss at verizon.net
Tue Feb 2 06:36:11 PST 2010


Tyler,

My procedure for getting to an available record is simple.


@once  ◄ If: '******************************************************
       Then: '* get file name to import
144  -------   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
       ◄ If: rn = ""
       Then: rn(8,.0)="1"
145  -------   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
loop_rn◄ If:
       Then: lookup -  r=rn   -npw
146  -------   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
       ◄ If:         locked(-)
       Then:         rn=rn+"1";GOTO loop_rn
147  -------   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -   -
       ◄ If:
       Then: END

This process will walk through the file and get the first available non-locked record.  Note that the file I use for this process contains one field, and all records are always blank.  The only way a record in this file would be locked is when someone else is running a process that uses this file.

I use this file for report/clerk applications where clerk is used to gather selection data based on the report being run.

Since @once runs before any record is selected, you can run your processing from @once and not worry about locking a record.  I do this for some imports, since all that is happening is data being read in and then posted to a new file.
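
For illustration, a bare-bones @once import table along those lines might look like this (a sketch only, untested; the file name "target", the field numbers, and the dummy field fn holding the import file name are all made up):

@once  ◄ If: '* runs before any record is selected, so no record
       Then: '* lock is taken; do the import work right here
getrec ◄ If: '* read the next line of the import file named in fn
       Then: import ascii imp=(fn) r=\n f=,
       ◄ If: not imp               '* nothing read means end of file
       Then: END
       ◄ If: '* post the line to a free record in the target file
       Then: lookup new = target r=free -e
       ◄ If:
       Then: new(1)=imp(1); write new; GOTO getrec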


Richard Kreiss
GCC Consulting
rkreiss at gccconsulting.net
  



> -----Original Message-----
> From: filepro-list-bounces+rkreiss=verizon.net at lists.celestial.com [mailto:filepro-
> list-bounces+rkreiss=verizon.net at lists.celestial.com] On Behalf Of John Esak
> Sent: Tuesday, February 02, 2010 8:38 AM
> To: 'Tyler Style'
> Cc: filepro-list at lists.celestial.com
> Subject: RE: rreport gagging on lockfile
> 
> Well, let's see. If you were calling the clerk or report from SYSTEM, then
> yes, the lockfile would obstruct things first... But if you are already in
> clerk, it won't, and if you are in report (using the -u option) it won't.
> That's why I was looking for the exact way you launch this stuff.  Maybe you
> put it in the last note... I'm not sure.  But in any case, assuming the
> debugger does come up... it will be on the automatic table if there is
> one... and the -z table otherwise.  If you put the debug on command at the
> top of the prc which does the system call, then you should be able to step a
> line at a time to the system call... at which point you can check the
> lockfile before and after the system command and see what happens.
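> 
> (For instance - a minimal sketch, with a made-up script name:)
> 
>        ◄ If: '* turn the interactive debugger on before the call
>        Then: debug on
>        ◄ If: '* step to here, checking the lockfile on each side
>        Then: system "importship.sh"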
> 
> Incidentally, how do you do a lookup to a "random" record to keep it from
> "hogging" the file?  Is it possible you are getting the record you are
> standing on... which actually is possible to get in "write" mode, because
> filePro knows you are the user doing the lookup.  This might be causing the
> hassle.  I'm curious if you actually use RAND or what?  The way I normally
> do what I think you're doing is: do a lookup free, write that record... grab
> the record number, then do a lookup - to that record number.  When the
> process is done, delete that record.  It has always been the cleanest way
> for me.
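> 
> (A rough sketch of that pattern in table form, assuming the processing
> runs on the same file - called "scratch" here - that holds the
> placeholder records; the tmp(@rn) record-number reference and the
> delete - form are assumptions to verify on your release:)
> 
>        ◄ If: '* r=free with -e creates a brand-new record, locked
>        Then: lookup tmp = scratch r=free -e
>        ◄ If: '* write it so it exists, and remember where it is
>        Then: tmp(1)="1"; write tmp; rn(8,.0)=tmp(@rn)
>        ◄ If: '* hold that record with lookup - for the whole process
>        Then: lookup -  r=rn  -npw
>        ◄ If: '* when the process is done, get rid of the record
>        Then: delete -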
> 
> Also, let me re-trace back to the previous paragraph.  You mention having
> the file open with report on one screen and then clerk locking on another.
> The report on the first screen does have the -u on the command line, right?
> Otherwise, the clerk should rightfully be locked out.
> 
> John
> 
> 
> 
> > -----Original Message-----
> > From: Tyler Style [mailto:tyler.style at gmail.com]
> > Sent: Tuesday, February 02, 2010 8:17 AM
> > To: john at valar.com
> > Cc: filepro-list at lists.celestial.com
> > Subject: Re: rreport gagging on lockfile
> >
> > Yup, I use the debugger all the time.  But I'm not sure exactly what
> > I would be looking for while debugging.  As far as I can tell, the
> > error blocks all access to rclerk and rreport, so the debugger would
> > likely never even start.  I'll be giving it a whirl, tho.
> >
> > John Esak wrote:
> > > Only thing I can suggest at this point is run the process with the
> > > interactive debugger. Completely clear the lockfile before starting
> > > (I mean erase it).  Then step through each critical point until you
> > > can see exactly what is causing the hang.
> > >
> > > Are you familiar with the debugger?
> > >
> > > John
> > >
> > >
> > >
> > >> -----Original Message-----
> > >> From: Tyler Style [mailto:tyler.style at gmail.com]
> > >> Sent: Monday, February 01, 2010 11:23 PM
> > >> To: john at valar.com
> > >> Cc: filepro-list at lists.celestial.com
> > >> Subject: Re: rreport gagging on lockfile
> > >>
> > >>
> > >>
> > >> John Esak wrote:
> > >>  > 1. Okay, be more specific. You say you are using the lockinfo
> > >>  > script.  So, you can see exactly which record is being locked by
> > >>  > exactly which binary.  What does it show?  Record 1 by dclerk, or
> > >>  > record 1 by dreport... exactly what does lockinfo show... by any
> > >>  > chance are you locking record 0?  Not something you could do
> > >>  > specifically, but filePro does this from time to time.
> > >>
> > >> While I have the error message from rreport on one terminal and the
> > >> same error message from rclerk on another, lockinfo will produce
> > >> "There are NO locks on the "log_operations" key file."
> > >>
> > >> While every call to rreport starts off with -sr 1, there is a
> > >> lookup - in the processing that moves it to a random record
> > >> (between 1 and 180) as the first command, to keep it from hogging
> > >> the file.  Records 1-180 all exist.
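> > >>
> > >> (Roughly, in table form - a sketch only, with a constant standing
> > >> in for however the random record number is actually picked:)
> > >>
> > >>        ◄ If: '* launched with -sr 1; move off record 1 right away
> > >>        Then: rn(8,.0)="97"
> > >>        ◄ If: '* reposition to record rn instead of holding record 1
> > >>        Then: lookup -  r=rn  -npw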
> > >>
> > >>  > 2. It's always easier when people say this has worked for years.
> > >>  > So, it must be something new added to the soup.  Have you removed
> > >>  > an index?  Grown a field and not changed the size of an index
> > >>  > pointing to it?  Gone past some imposed time barrier?  Used up too
> > >>  > many licenses?  Exceeded some quota in some parameter?  Added
> > >>  > groups or changed permissions?  Run a fixmog (fix permissions)?
> > >>  > Has a binary failed, like dclerk, and you've replaced it with a
> > >>  > different copy?  Has the -u flag any impact on your scenario?  I'm
> > >>  > assuming a lot because you haven't specifically shown how you are
> > >>  > doing things.  Is this happening from a system call?
> > >>
> > >> Absolutely nothing has been done to change the file or the
> > >> processing for a couple of years.  The only thing that has happened
> > >> to the file is that it has grown larger over time.
> > >> There is definitely no time limit imposed in the processing;
> > >> I don't see how that would produce a lock issue, anyway.
> > >> We have way more licenses than we can use after cutting 70%
> > >> of our staff
> > >> last year :P
> > >> Exceeding a quota in a parameter would mean something had
> > >> changed with
> > >> the file or processing, and nothing has.
> > >> We haven't changed groups or permissions in years either -
> > >> the current
> > >> setup is pretty static.
> > >> Fixmog (our version is called 'correct') hasn't been executed
> > >> in months
> > >> according to the log it keeps.
> > >> No binaries have been swapped in or out (we'd like to tho!
> > >> still haven't
> > >> got 5.6 to pass all our tests on our test box unfortunately)
> > >> -u shouldn't make any diff; it's not used and if we needed to
> > >> use it I
> > >> am certain the need would have shown up sometime prior to this.
> > >>
> > >> A typical use would be to add this to the end of a bash script to
> > >> record that a script had completed running:
> > >>
> > >> ARGPM="file=none;processing=none;qualifier=hh;script=importship;user=$LOGNAME;note=none;status=COMPLETED"
> > >> /appl/fp/rreport log_operations -fp log_it -sr 1 -r $ARGPM -h "Logging"
> > >>
> > >> Most of the actual processing just parses @PM, looks up a
> > >> free record,
> > >> and puts data in the correct fields.
> > >>
> > >> No other processing anywhere ever looks up the file; it is strictly
> > >> a log, nothing more, and the only processing that touches it
> > >> (log_it) is always run either via a script command or a SYSTEM
> > >> command.
> > >>
> > >> Things we tried to see if they would help:
> > >> * The file had 600,000 records going back 4 years, so we copied the
> > >> data to another qualifier, deleted the original qualifier, and
> > >> copied back the most recent 10,000 entries to see if it was just a
> > >> size issue.
> > >> * Rebuilt all the indices.
> > >> * Rebooted the OS.
> > >>
> > >> This logging hasn't been added to any new processing or scripts for
> > >> several months.
> > >>
> > >>  >
> > >>  > I agree that the code would not seem to be important since it has
> > >>  > worked... before, so again, it seems like the environment has
> > >>  > changed somehow.  Maybe if we saw the whole setup, relevant code
> > >>  > and all, we could give more suggestions.  Oh, I just thought of
> > >>  > one... is it possible you are looking up to a particular record,
> > >>  > say record 1... and that record is not there anymore?
> > >>
> > >> All the records being looked up to exist.  The environment is
> > >> pretty static - our needs have been pretty clearly defined by this
> > >> point, and new systems are almost always implemented on our Debian
> > >> boxes, as SCO is so limiting and so badly supported.
> > >>
> > >> Thanks for the ideas!  Hopefully my answers might light up a
> > >> bulb over
> > >> someone's head...
> > >>
> > >> Tyler
> > >>