Syntax error... in the wrong place
Brian K. White
brian at aljex.com
Fri Mar 1 10:44:19 PST 2013
Also you could use the same thing if you were updating existing records
instead of always adding new ones.
Just insert 2 lines. No other changes.
Just after the "if: not web", read the key value from the import, then
use "lookup -" to jump to it. Assuming index A is on the same field as
web(1):
if: not web
then: delete ; exit
then: k = web(1)
if: k{"" ne ""
then: lookup - k=k i=a -npx
You'll either move to some existing record, or you won't. The -npx means
it won't report an error if the lookup fails, and that it will only go
to an exact match, not to the next-closest key. The rest of the table
needs no changes.
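Dropped into the top of the table from my earlier post (quoted below), it
would look roughly like this. Untested sketch, assuming index A is built
on the same field that web(1) gets loaded into, and k is just a scratch
dummy:

::import word web = (@pm):
:not web:delete ; exit:
::k = web(1):
:k{"" ne "":lookup - k=k i=a -npx:
::2 = web(1):
::3 = web(2):
(...rest of the assignments, and the end, unchanged...)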
This addition is off the cuff, not lifted from a working table. The
syntax is good; the only open question is the exact behavior when the
lookup - fails. You may possibly have to save the starting @rn first and,
if the lookup - fails, lookup - back to it, but I don't believe so.
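If it does turn out to matter, the belt-and-suspenders version would be
something like the following. Again off the cuff and untested; it assumes
index A is on field 2 (the field web(1) gets loaded into) and that this
old version will take lookup - with r= to reposition by record number:

then: rn = @rn
then: k = web(1)
if: k{"" ne ""
then: lookup - k=k i=a -npx
if: 2 ne k
then: lookup - r=rn

The "if: 2 ne k" just checks whether you actually landed on a matching
record; if not, you hop back to the new record you started on.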
--
bkw
On 3/1/2013 1:09 PM, Brian K. White wrote:
> On 3/1/2013 12:04 PM, Jay Ashworth wrote:
>> The table below is giving me a syntax error. The error is
>>
>> """
>> Syntax error in line 21:
>>
>> ::new(to) = old(fr):
>> ^
>> """
>>
>> If you count, though, you'll find that line 21 is actually:
>>
>> ::import ascii old=(fn) r=^J f=|:
>>
>> inside the block labeled 'header', and indeed, that's the line dcabe
>> leaves me on when I ack the error message.
>>
>> Any thoughts on what it doesn't like here? Import accepts an indirected
>> variable for a filename these days, right? fr isn't a reserved word?
>>
>> 'old' used to be called 'web', and I thought maybe *that* was reserved,
>> so I changed it.
>>
>> Confused now.
>>
>> dcabe is 4.8.3K2D4. Yes, it's ancient. No, I can't change it.
>>
>> On a side note, is my plan for skipping the label line actually going
>> to work? The semantics of IMPORT ASCII are "open if necessary, and
>> then read a line every time you're called", correct?
>>
>> Cheers,
>> -- jra
>>
>>
>> ================8<=========================
>> ::debug on:
>> ::' webget/importreg - import the student reg file we're given with -r:
>> ::':
>> ::' jra at baylink.com - 17 Feb 2013:
>> ::':
>> data:::
>> ::fn(80,*) ' name of the file (today's date):
>> ::':
>> start:::
>> ::' this code is a "sit on record 1 and lookup free" table:
>> ::' I don't much like those, but there's no other obvious way to import:
>> ::' a variable length CSV file with IMPORT:
>
> I use
> rclerk fpfile -z table -sscreen -xa -u -r importfile
>
> That puts you on a new record; it even works the first time, when the key
> is 0 bytes to start with. Then, near the beginning of processing, if there
> is no actual data to import, I delete the record I'm standing on. This
> leaves a physical record created, empty, on the disk, but it's marked
> deleted and fp will use it instead of creating a new physical record the
> next time. Empty records will not pile up if you run it a zillion times
> with no input data.
>
> Then my table looks like this; it imports 0 or more CSV records:
>
> :' 20120223 brian at aljex.com - import pushfile transaction logs:' see /u/aljex/bin/pushfile::
> :' csv record format:'job#,user,desc,action,from,dest_good,dest_bad,opts:
> :'TODO^A header/detail lists of related items as single task::
> ::import word web = (@pm):
> :not web:delete ; exit:
> :' TODO^A unique transaction number, needs central control field:1 = "0" ' like pro#, check# etc.:
> ::2 = web(1):
> ::3 = web(2):
> ::4 = web(3):
> ::5 = web(4):
> ::6 = web(5):
> ::7 = web(6):
> ::8 = web(7):
> ::9 = web(8):
> ::x = writeline(ah,"Received^A"<getenv("HOSTNAME")<getenv("COMPANY")<@fi<@rn):
> ::end:
> @once:'**************************************************************:video off:
> ::af(128,*,g) = @pm{".ackout":
> ::system noredraw "umask 0;>"{af:
> ::ah(4,.0,g) = open(af,"wc0t"):
> :ah lt "1":exit "3":
> ::end:
>
>
> This is part of a simple cgi script that I use for flexible generic
> open-ended system-to-system EDI.
>
> The cgi script verifies authentication and collects a company name
> (think pfdata+pfdir+pfqual), an fpfile name, and a process table name from
> the query string. It collects the payload data from post by catting stdin
> to a temp file. The payload data may have any format, since you write the
> process table to read it. Then it just runs the rclerk command above,
> dropping the temp file name in -r and deleting it after rclerk runs.
>
> The table above writes an output file as it goes and the cgi cats that
> back to the http client, but you can drop that, of course. You can drop
> the entire @once section and the writeline line.
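> Stripped down that way, the guts of it are just something like this
> (sketch only, field numbers as in the table above):
>
> ::import word web = (@pm):
> :not web:delete ; exit:
> ::2 = web(1):
> ::3 = web(2):
> ::4 = web(3):
> ::5 = web(4):
> ::6 = web(5):
> ::7 = web(6):
> ::8 = web(7):
> ::9 = web(8):
> ::end: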
>
> The table above reads CSV with unix newlines.
>
> If the input file doesn't exist or isn't readable, or is zero bytes or
> has only blank lines, no problem. One new record is created, but if
> there is no input for any of those reasons, then that new record is
> marked deleted and rclerk exits. A hundred such bad starts would just
> reuse that same empty record over and over again, and the next good input
> will use it for data. This actually always happens, including at the end
> of importing one or more good records, so the file will always have one
> extra physical record on disk, but it's marked deleted, so it doesn't
> show up anywhere else in filepro, like in browses.
>
> If the input file has a single record, or any number of records, it
> works. There are no explicit commands that show it here, but when the
> end statement is reached, processing jumps back to the top and a new
> empty filepro file record is created, and the next time the import
> statement is reached it reads the next line of the input file.
>