***segmentation violation

Walter Vaughan wvaughan at steelerubber.com
Mon Aug 13 17:25:46 PDT 2007


Last Friday afternoon I thought I could build an export in 10 minutes for 
someone. Well, I coded it in 8.

Then I ran it and it gave me a ***segmentation violation

Kept fiddling with it. Tried to run it under debug.

Finally I realized that with that specific file, if I create a processing-only 
table, it will segfault, even with nothing in the processing table.

We're not talking rocket-science fpCode either; for testing purposes:
1  -------
      ■ If:
      Then: fn="/tmp/catpipe"
2  -------
      ■ If:
      Then: export ascii ofb=(fn) r=\n f=|
3  -------
      ■ If:
      Then: ofb(1)=1
4  -------
      ■ If:
      Then: ofb(2)=2
5  -------
      ■ If:
      Then: ofb(3)=3
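For readers who don't use filePro: read literally, that table just opens an ASCII export with | as the field separator and newline as the record separator, then assigns three output fields. A rough, non-filePro Python sketch of the intended output (the /tmp/catpipe path is from the post; the temp-file substitution and everything else here are mine for illustration):

```python
import os
import tempfile

# Stand-in for fn="/tmp/catpipe"; a temp path so the sketch runs anywhere.
fn = os.path.join(tempfile.gettempdir(), "catpipe")

# export ascii ofb=(fn) r=\n f=|  ->  pipe-delimited fields,
# newline-terminated records.
record = ["1", "2", "3"]          # ofb(1)=1, ofb(2)=2, ofb(3)=3
with open(fn, "w") as out:
    out.write("|".join(record) + "\n")

with open(fn) as f:
    print(f.read(), end="")       # prints 1|2|3
```

There's nothing here that should be able to crash anything, which is the point of the test table.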

Okay, it's running on an AMD64 FreeBSD 6.2 OS, and filePro is 32-bit, compiled 
for 4.X, so maybe it's the OS. I can't believe that's the problem, since I have 
dozens of similarly written programs that have run flawlessly every day on that 
box for months.

But okay. Fire up a 32-bit FreeBSD 6.1 box. Same thing. Check permissions till 
I'm blue in the face.

Hours have now passed, and it's Monday mid-morning. Try it on a very old SCO 
OSR5 fp4.5.X box. The code works. Duh, it should have all along. Move the code 
to the FreeBSD box. Same segmentation violation. Grr.

I realize that I won't get this fixed in 10 minutes since it is now Monday and I 
need it finished today. Even if I get fpTech involved it won't be a quick fix, 
and I doubt they will create a new 5.0.x version today just for me, so we decide 
to leverage an existing export that creates similar data.

Here's where it gets weirder. At the end of its table I create an @once area and 
drop the code in. I set it up as a loop and write it as a lookup/export routine. 
But I've got to flush the export after every getnext lookup, so I used something 
for that. Segmentation violation. How could that be? Changed it to write 
export_alias. It works. :)

What's weird is that basically the same type of code gets executed. I process 
40k records in the @once block and then 140k records normally. It processes the 
40k @once records in about 2 seconds, and the rest at about 3k records per 
screen update.

Actually, when I had only the @once area with the three-line example export and 
no other logic, I thought it wasn't working, because there was no pause before 
the first 3k records during the countdown! When I added about 30 more lines (and 
20 more export values), you could then feel it for just a second before it 
continued as normal at about 3k records per screen update.

Forgetting about the seg-vios, this got me thinking: when doing exports, is it 
more effective to stand in a third file and write your changes in a loop than to 
export per record the "normal" way? In this case it's about 10 times faster.
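Outside filePro, that kind of speedup is consistent with ordinary I/O behavior: forcing every record out to disk immediately defeats the writer's buffering, while a single loop lets the runtime batch its writes and pay the syscall cost far less often. A hypothetical Python sketch of the two patterns (the function names, row counts, and the fsync-per-record detail are my assumptions, not anything from filePro):

```python
import os
import tempfile

def export_per_record(rows, path):
    # "Normal way" analogue: force every record to disk immediately.
    with open(path, "w") as f:
        for row in rows:
            f.write("|".join(row) + "\n")
            f.flush()
            os.fsync(f.fileno())

def export_in_loop(rows, path):
    # "@once loop" analogue: write everything, let buffering batch the I/O.
    with open(path, "w") as f:
        for row in rows:
            f.write("|".join(row) + "\n")

rows = [(str(n), "x", "y") for n in range(10_000)]
slow = os.path.join(tempfile.gettempdir(), "export_slow.txt")
fast = os.path.join(tempfile.gettempdir(), "export_fast.txt")
export_per_record(rows, slow)
export_in_loop(rows, fast)

# The output is byte-identical either way; only the syscall pattern differs.
assert open(slow).read() == open(fast).read()
```

Timing the two functions on any machine should show the buffered loop winning by a wide margin, which matches the roughly 10x difference observed here.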

==
Walter

  kdump -f /tmp/ktrace
  62422 ktrace   RET   ktrace 0
  62422 ktrace   CALL  execve(0x7fffffffed76,0x7fffffffeb68,0x7fffffffeba0)
  62422 ktrace   NAMI  "/appl/fp/rreport"

And truss is useless because it never finishes; it looks like truss doesn't 
flush its buffers on a seg-vio, so the output file is 0 length.

And I don't have dtrace on this machine.



More information about the Filepro-list mailing list