Can filepro do drill downs like Access?

Bill Vermillion fp at wjv.com
Fri Mar 26 07:19:48 PST 2004


On Fri, Mar 26, 2004 at 02:53:48AM -0500, Fairlight thus spoke:
> From inside the gravity well of a singularity, Bill Vermillion shouted:
> > On Fri, Mar 26, 2004 at 12:28:22AM -0500, John Esak thus spoke:

> > And that suggestion is certainly not the way I understand
> > transactions, commits, and roll-backs either.

> > My understanding is that all parts of a transaction must proceed
> > correctly and if any part fails they are not processed/committed.

> That's the way I read the specifications, yes. Let's say you
> want to update the GL, but you have a detail record off in a
> profit centre somewhere. If something fails to update for some
> reason, the entire transaction, everything that's been done to
> that point, is rolled back as if it never happened.

> > This could be like something in an ATM environment to ensure that
> > all of the pieces of the transaction are processed correctly - so
> > that the various parts of the database that may reflect a deposit
> > or withdrawal are all completed successfully before they are fully
> > committed.  This prevents overlapping functions which may conflict
> > with each other.

> Basically. Or inventory vs sales. You get a sale in a web
> storefront, and there's no inventory to handle it. The
> transaction should not be committed, so that, for instance,
> the part of the transaction that debits their credit card is
> never actuated.
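
A minimal sketch of that all-or-nothing behaviour, using Python's
sqlite3 [table and column names are made up for illustration]:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE inventory (part TEXT PRIMARY KEY, on_hand INT)")
    conn.execute("CREATE TABLE charges (part TEXT, amount REAL)")
    conn.execute("INSERT INTO inventory VALUES ('widget', 0)")
    conn.commit()                          # setup done; the sale starts clean

    try:
        with conn:                         # one transaction: commit or roll back as a unit
            cur = conn.execute(
                "UPDATE inventory SET on_hand = on_hand - 1 "
                "WHERE part = 'widget' AND on_hand > 0")
            if cur.rowcount == 0:          # no stock -- fail the whole sale
                raise RuntimeError("no inventory")
            conn.execute("INSERT INTO charges VALUES ('widget', 19.95)")
    except RuntimeError:
        pass

    print(conn.execute("SELECT count(*) FROM charges").fetchone()[0])  # 0: never charged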

A web store-front really isn't that much different from a real
business.  I've built an inventory program in FP that would let you
complete the rest of the order if items were not available at the
moment, and place the un-shippable items into back-order mode.

The inventory system also tracked items sold but not yet pulled from
the shelf.  One place was having a problem where someone would place
an order, and later another person would see the item on the shelf
and sell it again, even though it was already sold.

So fields were added to the inventory section.  Besides the field
that showed the quantity on the shelf, there was a field called
'committed', and another called 'available for sale'.
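
In outline the arithmetic is trivial [a sketch in Python; the real
work was of course filePro processing, and the field names here are
invented]:

    def available_for_sale(on_shelf, committed):
        # What a salesman may actually promise: physical stock minus
        # everything already sold or reserved for builds.
        return on_shelf - committed

    print(available_for_sale(10, 6))   # 10 on the shelf, 6 spoken for: 4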

This also was a life-saver [really a $$ saver] for one place, as they
also did manufacturing, and when the shop went to get the parts they
needed, they would find someone had sold them after seeing them on
the shelf.

So I built a modified BOMP which then accessed the committed fields.
This place also had a minimum stocking level, based on the amount
sold over a period of time, that triggered a re-order flag.

If they ran out, they would have to have the parts flown in, and air
freight on a few hundred pounds from the plant in Germany was
expensive.  They would go to build something, find that pieces they
needed had been sold, and it became an expedited order.

So the lead time was built in to take into account the typical 6
weeks from order placed to order received.  They figured 4+ weeks
for physical shipment [by ship] and up to two weeks to clear customs
in Miami.
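
The re-order arithmetic was roughly this [numbers and names are
illustrative, not the original processing]:

    LEAD_TIME_WEEKS = 6     # 4+ weeks by ship, up to 2 more in customs

    def reorder_needed(available, weekly_usage, safety_stock=0):
        # Flag a re-order when available stock won't cover expected
        # usage over the lead time plus any safety margin.
        return available <= weekly_usage * LEAD_TIME_WEEKS + safety_stock

    print(reorder_needed(available=50, weekly_usage=10))   # True: 50 <= 60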

The real pain at that place was when they ordered new invoices,
several thousand multi-part forms.  Whoever ordered them - or the
person at the printing company taking the order and specs - fell
back on the old 1.5-line spacing used for manual typewriters.

Their high-speed dot-matrix printer could not perform 1.5-line feeds,
but it did handle half-line feeds.  So in filePro the invoice was set
to be about 150 lines long; I triple-spaced the format and sent a
printer code to switch into half-line spacing, since three half-line
feeds per printed line gives the 1.5-line spacing the forms required.
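
A sketch of the trick, assuming Epson-style ESC/P codes [ESC '3' n
sets the line feed to n/216 inch, so 18/216 is half of a normal
6-lines-per-inch line; your printer's codes may differ]:

    HALF_LINE = b"\x1b" + b"3" + bytes([18])   # ESC 3 18: half-line feeds

    def render_invoice(lines):
        out = HALF_LINE                 # switch the printer to half-line feeds
        for text in lines:
            # The format is triple-spaced, so each printed line is
            # followed by three feeds: 3 x 1/2 = the 1.5-line spacing
            # the pre-printed forms were laid out for.
            out += text.encode("ascii") + b"\r\n\n\n"
        return out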

I also reworked the partial shipment and back-order section, so that
when someone ordered 25 of something and only 14 were available for
sale, the back orders were handled and the inventory level adjusted.
Since there were back-ordered items, plus items already committed for
builds, the committed quantity could be larger than the on-the-shelf
quantity, and the quantity-available-for-sale would go negative.
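
Worked through with those numbers [illustrative]:

    on_shelf, committed = 14, 0

    ordered   = 25
    shipped   = min(ordered, on_shelf)    # 14 go out the door
    backorder = ordered - shipped         # 11 go on back order

    on_shelf  -= shipped                  # shelf is now empty
    committed += backorder                # back orders count as committed
    print(on_shelf - committed)           # available for sale: -11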

FilePro made all of that easy.  I also reworked an existing FP
inventory package [one of the Softa thingys] for another customer
[computer sales].  It tracked the last purchase cost and the average
purchase cost of items bought at various times, so that in a close
sale the price could be based on the average cost of existing
inventory - letting them cut the price below the margin required by
the last purchase cost when they needed to make a sale.
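
The average-cost side is just a running weighted mean [a sketch; I'm
assuming nothing about the Softa package's actual fields]:

    def new_average_cost(qty_on_hand, avg_cost, qty_bought, unit_cost):
        # Weighted average of the existing stock and the new receipt.
        total = qty_on_hand + qty_bought
        return (qty_on_hand * avg_cost + qty_bought * unit_cost) / total

    # 10 units averaging $100, then 5 more bought at $70:
    print(new_average_cost(10, 100.0, 5, 70.0))   # 90.0 - room to cut price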

> > Rollbacks can let you move a database back to a point in time and
> > are useful in an abnormal case where something may be corrupt: you
> > restore a known working backup and then process the transactions
> > against that to bring it to a valid, correct, and current working
> > state.

> Yeah...that can come in handy if something goofy happens. The
> transactional logging takes oodles of disk space depending on what
> you're doing, but space is cheap nowadays, and it can save
> someone lots of money.

That's one reason they were developed.  When these were almost
standard for serious operations - sales, for example - drives were
not that reliable and hardware wasn't that stable, even the best of
the HW.  These were all on Unix systems, BTW.  Hard drives were about
$4/megabyte if you shopped, or in the $8/megabyte range if
vendor-supplied.  [A far cry from the $1/gigabyte we see today for
server-strength IDE drives, or 50 cents or less for consumer/desktop
drives.  The high-end, high-capacity SCSIs are in the $6+ per
gigabyte range.]

> > The snapshots used in some backup systems strike me as a
> > similar approach. You 'snapshot' the state of the system and
> > then perform a backup while the system is operational, but
> > disk writes are made outside of the snapshot, and when the
> > backup is finished, the system is brought to its normal
> > state, with items that may have been frozen in time by the
> > snapshot brought up to date.

> I was with you through the part about the system being brought
> back up. I didn't get what you meant after that.

A snapshot basically makes a list of all blocks in use at the time
of the snapshot.  The backup system then backs up all of those
blocks.  If any block within the snapshot needs to be modified,
that block is copied to a new location, and then the copy is
modified.  In this way you can back up a running system and have it
in a stable, coherent state, so a restore will work and all related
files will be correct.
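
A toy illustration of that copy-on-write idea [a conceptual sketch
only; real filesystems differ, and as noted below I don't know which
copy gets relocated - this version preserves the old block]:

    class Volume:
        def __init__(self, blocks):
            self.blocks = blocks        # live data: block number -> bytes
            self.snap = None            # preserved originals, filled lazily

        def take_snapshot(self):
            self.snap = {}              # cheap: just start tracking changes

        def write(self, n, data):
            if self.snap is not None and n not in self.snap:
                self.snap[n] = self.blocks.get(n)  # save the old block first
            self.blocks[n] = data       # then modify the live copy

        def read_snapshot(self, n):
            # The backup sees the frozen view: the preserved copy if the
            # block changed since the snapshot, the live block otherwise.
            return self.snap[n] if n in self.snap else self.blocks.get(n)

    v = Volume({1: b"old"})
    v.take_snapshot()
    v.write(1, b"new")
    print(v.read_snapshot(1))           # b'old': the backup stays coherent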

Once the backup is finished the disk is made coherent again.  I do
not know whether the copied-and-modified blocks are left in place and
the originals they were copied from are freed, or whether the
modified data is copied back to the original location.  I'm assuming
the latter, so you don't break up large files and have to seek to
other tracks when performing what is essentially a linear read.

This can also be used after a system crash where the file system
needs an fsck.  With the huge capacities and arrays we have now, an
fsck on a huge filesystem could take hours.

So on boot a snapshot is made of the drive in its current state.
Multiple parallel fscks are started in the background and the
system is brought up for use.  Everything not yet fsck'ed in the
background is essentially read-only at that moment.

But if a user program needs access, the blocks it needs are fsck'ed
at that point.  IOW, the fsck is prioritized to check blocks as
needed, and the snapshot made at boot is updated so those blocks are
excluded from the background fsck.
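
In pseudo-Python the scheduling idea looks something like this
[purely conceptual; no real fsck works off a deque]:

    from collections import deque

    to_check = deque(range(1000))       # block groups still unchecked
    checked = set()

    def fsck_group(g):
        checked.add(g)                  # stand-in for the real repair work

    def background_pass():              # slow, runs while the system is up
        while to_check:
            fsck_group(to_check.popleft())

    def on_demand(g):
        # A user program touched group g: check it right now, and drop
        # it from the queue so the background pass skips it later.
        if g not in checked:
            to_check.remove(g)
            fsck_group(g)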

In tests, an fsck which might take 15-30 minutes if done at boot -
even with parallel fscks running - could take 5+ hours when run in
the background.  But the system is available immediately for use.

> The Network Appliances (the real ones) have an -excellent-
> snapshot facility under Solaris. It saved my ass -again- the
> other night when I inadvertently wiped something I had meant
> to save. There are 10 hourly, 9 nightly, and one master backup
> snapshot. There's a .snapshot directory hidden in any
> directory, each containing hourly.[0-9], daily.[0-8], and
> snapshot_for_backup.0 as well. Any of those directories has a
> snapshot from a point in time, so every few hours you get a new
> snapshot. It doesn't seem to affect performance at all, and it's
> VERY handy being able to have users get their own files without
> going to tape or bothering an admin.
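
The self-service restore is then just a copy out of the hidden
directory, e.g. [paths hypothetical]:

    import shutil

    # Pull this morning's copy back without tape or an admin.
    shutil.copy(".snapshot/hourly.0/report.txt", "report.txt")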

The concept of users performing their own recovery is growing, as it
takes the load off of the administrators.  There have been articles
on this in the recent business computer trade magazines.

> > And while I drift this topic a bit further: there has been talk on
> > this group at times of both record locking and file locking.  I do
> > not recall what database it was, but it had the concept of field
> > locking.  That meant two people could be working on the same record
> > and each could update different fields.  That surely seems to be a
> > handy concept if you have large numbers of users manipulating the
> > databases.

> Row+Column locking? That could be handy. I think column locking
> alone would be useless, but if you combined row and column,
> that would be the ultimate. I wonder how much overhead that
> generates though...

I don't know. For some reason I think this was in Oracle or
one of their competitors at one time if not now. Overhead is
not a problem - because in these instances the system has been
optimized so that users don't remain idle waiting for someone
else to finish with something they need to access. Once you
exceed a certain number of users hardware and software costs
become secondary to the costs of idle users.
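
Conceptually, field locking just means keying the locks by (record,
field) instead of by record alone [a toy sketch in Python; I'm not
claiming this resembles how Oracle or anyone else implements it]:

    import threading

    registry_lock = threading.Lock()
    field_locks = {}                    # (record id, field name) -> Lock

    def lock_for(record_id, field):
        with registry_lock:             # serialize only lock creation
            return field_locks.setdefault((record_id, field),
                                          threading.Lock())

    def update_field(table, record_id, field, value):
        # Two users editing *different* fields of the same record run
        # in parallel; updates to the same field serialize.
        with lock_for(record_id, field):
            table[record_id][field] = value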

Those are the databases that seem to start in the mid five figures
for a decent number of users, with prices that go up based on how
many CPUs are in the system and what CPUs/speeds they are.

They also usually are sold with time-based licensing schemes.  At
least twice in the past week, one of the electronics e-mags I get
has had articles about the sales model changing to a yearly
licensing scheme, instead of the traditional buy-it-now and pay no
more until upgrade time.

MS's 'software assurance' has taken huge hits in the press this past
month, because their delays getting the next OS out the door mean
that a large number who paid for the service - to get new upgrades
without additional charge - will have wasted their money.

At the end of next month, when RH 9 goes EOL, they are fully in the
yearly licensing mode.  Their WS [desktop] is $179/year.  The
Standard edition at $299/yr makes them more expensive for some users
than what many perceived as overpricing from vendors such as SCO.
Considering the lifetime some of my former SCO clients have gotten
from their OS, the RH solutions are now more expensive.

It seems that in some areas computing is reversing its long-time
trend of getting cheaper over time and starting to move up.  As the
HW gets cheaper, the SW with yearly licensing gets more expensive.

It's beginning to look like the IBM pricing models of the '60s and
'70s, which spawned the current HW and SW separation.  I guess the
pendulum is now swinging back the other way.

Bill
-- 
Bill Vermillion - bv @ wjv . com

