Deflating

Brian K. White brian at aljex.com
Fri Jan 4 10:39:58 PST 2008


Nothing?

The times when there is any advantage to physically removing records are so 
rare that it's OK to just handle it some manual way, like posting to a new 
duplicate file or a qualifier with a two-line report process, then deleting 
the old file, then copying the new one back.
I'm talking rare like: I want to take a production system and remove all of 
its data in preparation for copying.
Even in that case we just delete all the data and let the normal setup 
routines initialize the files that need it in the new copy.
So it's really, really rare. Maybe a fluke user or programmer mistake that 
ran away and created a million records in a few seconds.

Every other case it makes more sense to just leave the empty records there.

Possible reasons why you might want to remove unused records:

Disk space: If you need the records even once in a while, then you need the 
disk space, period, and there is no advantage to temporarily freeing it. If 
it does matter, then your disks are under-spec'ed for the job, and that 
should be fixed, not worked around.

Backup space / copy time: all backup hardware, backup software, and file 
transfer software these days will, or can, transparently compress away the 
empty records almost completely, so they don't really affect tape drive 
space, compressed tar file space, or network transfer time.
They do add to the time spent simply reading the data in the first place for 
the backup/copy, and again at verify time, but really, how much?
I bet it's less than the disk activity, filesystem activity, CPU, and time 
spent on shuffling, and without the risk of the shuffle corrupting the data. 
We rsync something like 73 gigs of data 3 times a day from several machines 
to several other machines. I don't know or care how much of that is deleted 
records, because it only takes a few minutes over the LAN, and still under 
15 minutes over measly T1's on one or both ends, T1's that are in active use 
by a few hundred users at the same time, by the way. A plain ftp of a raw 
key file, or deliberately disabling the default compression on a tape drive, 
are about the only ways those empty records matter.
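A rough, non-filePro demo of why the empty records cost so little once 
compression is in the path: deleted records are mostly long runs of 
identical bytes, which any stream compressor collapses to almost nothing. 
Here 10 MB of zero bytes stands in for empty records (paths are arbitrary).

```shell
# Create 10 MB of zero bytes as a stand-in for deleted/empty records.
dd if=/dev/zero of=/tmp/empty_records.dat bs=1M count=10 2>/dev/null
# Compress it the way a backup pipeline or tape drive effectively would.
gzip -c /tmp/empty_records.dat > /tmp/empty_records.dat.gz
# Compare sizes: the compressed copy is a tiny fraction of the original.
ls -l /tmp/empty_records.dat /tmp/empty_records.dat.gz
```

The same effect is what rsync's -z/--compress option and a tape drive's 
built-in hardware compression give you on the wire, which is why the empty 
records barely register in transfer time or tape space.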

And filePro, of course, will reuse them when creating new records, and it's 
actually a small advantage there for fp to be able to edit an existing 
record rather than having to grow the file by a record.

Do you have a specific situation that actually is negatively impacted by 
having lots of deleted records in a file?
Like, is it slowing down some report, or slowing down random access to some 
file?

Brian K. White    brian at aljex.com    http://www.myspace.com/KEYofR
+++++[>+++[>+++++>+++++++<<-]<-]>>+.>.+++++.+++++++.-.[>+<---]>++.
filePro  BBx    Linux  SCO  FreeBSD    #callahans  Satriani  Filk!

----- Original Message ----- 
From: "Chris Rendall" <crendall at teamind.com>
To: <filepro-list at lists.celestial.com>
Sent: Friday, January 04, 2008 12:10 PM
Subject: Deflating


> I'm currently running filePro on SUSE Enterprise Linux.  I used to run 
> filePro on SCO UNIX and I used a program called squeeze to recover disk 
> space from files that had a lot of deleted records in them.  Squeeze 
> doesn't run on Linux and I came across Bob's deflate program but the 
> README says it's only known to work on SCO UNIX and Xenix.
>
> I'm wondering what other people are using to recover the empty disk space 
> from their filePro files on Linux.
>
> Thanks,
> Chris
>
> Team Industries, Inc. --- 2007 Labor - Management Award Winner ---
>
> Nominated by the United Association of Plumbers & Steamfitters of the 
> United States and Canada, and selected/presented by the Union Label & 
> Service Trades Department of the AFL-CIO.
> _______________________________________________
> Filepro-list mailing list
> Filepro-list at lists.celestial.com
> http://mailman.celestial.com/mailman/listinfo/filepro-list
>
>
>


