deleting blank records & error capture

scooter6 at gmail.com scooter6 at gmail.com
Wed Oct 10 16:08:36 PDT 2007


  Thanks for the suggestions, guys - I will try those.
   I do already have a process that kills all users & processes at night, so
it's not locking on records - I'm just not sure what's causing this all of a
sudden.
   This cron script has been running for years (since about 1999) without
incident - but starting sometime in August, it has had issues 3-4 nights a
week -- very strange.
   I haven't changed anything - but their company is growing, and there are a
lot more users, more input, and more going on - so perhaps it's just 'growing
pains' and they are now encountering problems - I'm not sure.
   I will try redirecting stdout & stderr to a file and see if that captures
what's going on.
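   A minimal sketch of that redirection (the script body and log path below
are hypothetical placeholders - the real nightly script would go inside the
braces):

```shell
#!/bin/sh
# Run the nightly job with stdout and stderr captured in one log file.
# The "job" is simulated here with two echo lines; substitute the real
# commands. The log path is a placeholder.
LOG=/tmp/nightly.$$.log
{
    echo "nightly run started"
    echo "an example error line" >&2    # stderr lands in the same log
} >>"$LOG" 2>&1
```

   In the crontab, the same idea is a single line such as
`30 2 * * * /appl/nightly >>/appl/nightly.log 2>&1` (paths hypothetical).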

   The main concern I have is that there might be lookups going on that can't
find what they need, and perhaps the processing wasn't written properly, etc.

    Am I correct in saying that if a lookup is performed and a record is not
    found, as long as the processing is written as

    If:
    Then:  lookup cli <client file>.....

    If:  not cli
    Then: return

    then filePro shouldn't get hung up if it can't find something, right?


On 10/10/07, Brian K. White <brian at aljex.com> wrote:
>
>
> ----- Original Message -----
> From: scooter6 at gmail.com
> To: Filepro List
> Sent: Wednesday, October 10, 2007 5:18 PM
> Subject: deleting blank records & error capture
>
>
> Running 5.06.09 on Unix --
>
> Two questions I have --
>
> > 1) first - We have a temporary file where new input gets put in
> > and a night process that copies that data into the 'big' master file
> > how do I delete the 'blank' records that get created in this temp file?
>
> Just have the process that copies the records also delete them.
> The disk space is not freed, but so what? The record is freed in the sense
> that it will just get filled again tomorrow night. Say you get 1000 records
> in every night. Do you really care if the key file shrinks to the size of 1
> record, grows to the size of 1000 records, and shrinks back to 1, over and
> over every day, or just stays the size of 1000 all the time? In reality the
> file will bump up and get larger once in a while, whenever your import is
> larger than usual, but it doesn't grow and grow every day in a scenario
> like the one you described. So unless you get a truly stupendous amount of
> incoming records once in a while and don't want the file to be 2 gigs all
> the time when it usually only needs to be 20 megs, I think there is nothing
> to fix here, as long as you are using the processing delete command on each
> record after processing it.
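A sketch of that copy-and-delete pass (filePro-style processing pseudocode;
the file name, field numbers, and lookup flags are placeholders, not tested
syntax):

```
  If:
Then: lookup mst = master r=free -e
  If:
Then: mst(1) = 1; mst(2) = 2
  If:
Then: write mst; delete
```

The `delete` on the temp record after the copy is what keeps the temp file
from accumulating blank records.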
>
>
>
> > 2) I have a big night time process that runs
> > that has been getting stuck on certain parts
> > I have a script that runs all these processes
> > and I have a log that tells me when each individual process starts
> > so I have an idea where it's getting stuck
> > but it's getting stuck on different processes randomly
> > how can I set the cron job on a Unix box where it will capture
> > the error messages that filePro is encountering?
>
> If you don't redirect stdout & stderr, they will get collected and emailed
> to root.
>
> Or, you can redirect stdin, stdout & stderr all to one of the unused
> console ttys and flip to that screen when you come in in the morning, or
> use doublevision to view that tty any time from anywhere.
>
> crontab -e
> 30 2 * * * myscript </dev/tty12 >/dev/tty12 2>&1
>
> Then hit Alt-F12, or Ctrl-Alt-F12 if the console was in X (the scologin
> gui). You can flip to that screen any time before, during, or after the
> cron job runs. It will just be black before, and inactive afterwards if
> the script completed normally, but with the last output of the script on
> it. If the script hung and is waiting for user input, then the screen will
> still be active, the script will still be running, and you'll be able to
> interact with it and supply whatever input it's expecting.
>
> If it's just hung on locked records (pretty common), then you can set 2
> environment variables in the script before any clerk/report commands:
>
> PFMBTO=1 ; export PFMBTO
> PFSKIPLOCKED=1 ; export PFSKIPLOCKED
>
> That will cause fp to proceed past the most common halts.
> Though beware: only you can say if skipping locked records is ok for the
> reports in question!
> Consider: your process is cycling through the day's invoices and needs to
> look up into the customer file for each one, and someone went home with
> your biggest customer's account open on their screen in update mode - so
> that lookup fails on most of the invoices for that day as if there were no
> such customer...
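A minimal wrapper sketch of where those two lines would sit (the script and
the dreport invocation below are hypothetical placeholders):

```shell
#!/bin/sh
# Set the two variables before any clerk/report commands, as described
# above, so batch runs proceed past locked-record halts.
PFMBTO=1 ; export PFMBTO
PFSKIPLOCKED=1 ; export PFSKIPLOCKED
# ...then run the nightly filePro commands, e.g.:
# /appl/fp/dreport invoices -f nightly -a -u
```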
>
> The proper fix might be to somehow ensure that all users, or at least all
> filePro processes, are out of the system before starting the script. A
> cheap way to start might be to use the idleout utility in SCO: just type
> "idleout 120", and until the next reboot, any tty that sits idle for 2
> hours will be killed. If everyone usually goes home by 8pm and the script
> doesn't run until midnight, this might clear up most of the problem most
> of the time.
>
> I would not set those variables or use idleout right away, but first see
> what the captured messages indicate. It may be something completely
> unrelated to locked records.
>
> Brian K. White    brian at aljex.com    http://www.myspace.com/KEYofR
> +++++[>+++[>+++++>+++++++<<-]<-]>>+.>.+++++.+++++++.-.[>+<---]>++.
> filePro  BBx    Linux  SCO  FreeBSD    #callahans  Satriani  Filk!
>
> _______________________________________________
> Filepro-list mailing list
> Filepro-list at lists.celestial.com
> http://mailman.celestial.com/mailman/listinfo/filepro-list
>