Import question
scooter6 at gmail.com
Wed Mar 10 12:57:43 PST 2021
So I have an import routine that imports a CSV of some parsed XML.
It's pretty detailed and uses a ton of variables.
My question is: suppose I have a field that contains over 100 values,
separated by a ^
For example, if I'm importing a huge list of amounts, the field I need to
break down can contain 100 or more values, like:
100.00^21.20^50.00^62.50^33.12..........etc
Is there an easier way than using strtok to parse large fields like this?
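For comparison, outside of filePro a delimited field like this is usually handled by splitting the whole string at once rather than walking it token by token. A minimal sketch in Python (the field value is made up to match the example above):

```python
# Hypothetical example: split a caret-delimited amounts field of
# unknown length in one step, instead of one strtok-style call per
# token. Handles any count, 5 or 500.
field = "100.00^21.20^50.00^62.50^33.12"

# Drop empty tokens (e.g. from a trailing ^) and convert to numbers.
amounts = [float(a) for a in field.split("^") if a]

print(len(amounts))   # how many values were actually present
print(sum(amounts))   # running total of the amounts
```

The point is that the split produces a list whose length you inspect afterward, so a record with 40 values and one with 120 go through the same code path.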
I originally didn't think there would ever be more than 40, but some of my
brilliant clients, who don't always know what they're doing in our
integrated partner's software, have sometimes managed to create 100 or more -
so I'm trying to see if there is a better/simpler way to do this.
Otherwise, I'm going to split it off into its own file and create a lookup
screen on the records - which I can do, but I didn't really want to go that
route if it's not necessary.
I currently parse the incoming XML with xsltproc and create a CSV file, and
that all works well - I'm just trying to come up with a better way to
get that into my filePro databases.
thanks for any input
Scott
PDM