Developing filePro apps for developers
Fairlight
fairlite at fairlite.com
Sat Jun 16 02:51:39 PDT 2007
Only Brian K. White would say something like:
> And we have an rsync script that pushes the code out exactly the same to all
> the live application server boxes, excluding only keys, datas, & indexes,
> including menus and printer tables etc.. twice a day it'll do the whole
> filepro directory from cron, plus any developer can run it on demand.
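Purely for illustration, a push of that shape might look something like the sketch below. The function name, paths, host names, and exclude patterns are my assumptions, not the actual script:

```shell
#!/bin/sh
# Hypothetical sketch of an rsync code push: copy a filepro tree to a
# destination (a live server or a local path), excluding the data files
# (key*, data*, index*) so live customer data is never overwritten,
# while menus, processing tables, printer tables, etc. all travel.
push_filepro() {
    src=$1      # master filepro tree, with trailing slash
    dest=$2     # "host:/appl/filepro/" for a live box, or a local path
    rsync -a --delete \
        --exclude='key*' \
        --exclude='data*' \
        --exclude='index*' \
        "$src" "$dest"
}

# From cron, twice a day, one call per live application server
# (host names are made up):
#   for h in app1 app2 app3; do
#       push_filepro /appl/filepro/ "$h:/appl/filepro/"
#   done
```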
That sounds dangerous. What happens if there's a bug somewhere, or at
least an unintended side-effect whose consequences the customer can't
tolerate? They've just been pushed the update, and in theory actions
could have been taken since then that are very hard to roll back.
I also didn't see any mention of it specifically, but I would -hope- that
your period of quiescence is in effect for the -entire- update procedure,
not just the restructure work. Suppose processing in one place fires off
processing in another place, with interdependencies of action between the
two: if one processing table is updated but the other is deferred until
later in the update stream, any actions taken in the interim could leave
the customer with unintended data desyncs.
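One way to hold that quiescent window open across the whole push is sketched below. The lockout-flag convention is my invention (it assumes the menus refuse entry while the flag file exists), not anything Brian described:

```shell
#!/bin/sh
# Sketch: keep users locked out for the *entire* update, so no
# interdependent processing table ever runs against an older partner.
quiesced_push() {
    stage=$1    # staged, tested tree (trailing slash)
    live=$2     # live filepro tree
    flag=$3     # lockout flag the menus are assumed to check
    touch "$flag"                   # close the shop before touching anything
    trap 'rm -f "$flag"' EXIT       # reopen even if the copy blows up
    rsync -a --delete \
        --exclude='key*' --exclude='data*' --exclude='index*' \
        "$stage" "$live"            # every table lands inside the one window
    rm -f "$flag"                   # reopen only after *everything* is in place
    trap - EXIT
}

# e.g.:  quiesced_push /stage/filepro/ /appl/filepro/ /appl/fp.updating
```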
I mean, I'd hope you thought of this, and more than likely you actually
did. I'm in pedantic mode right now on -everything-, and if someone didn't
explicitly say they handled something, it's currently assumed they may
not have.
I'd logically expect you to have done so, but I'd also have thought you'd
have stated it, especially as others may try to adopt the model (otherwise
why put it forth in a public forum, if not so that others could utilise the
information?) and could miss this critical point for lack of its inclusion.
I still think automatic code updates are unwise on the whole. I can't
control what kinds of patches MS puts into Windows or any of their other
products, but I can control whether I -apply- a patch. It seems all too
easy to push a bad patch accidentally, only find the problem in the field
at the first place that recognises what's going on, and then have to go
through to each site that got the patch and see if they experienced
problems.
I'm so against automatic live patching of things that the -only- thing
I let autopatch is my anti-virus software, and only because it's pretty
benign to autopatch and I don't rely on it heavily...just for the odd
download (not very often) and periodic drive scans. Even there, Grisoft
screwed up a release a couple of months ago, leaving AVG 7.5 unable to
run properly and forcing me to uninstall/reinstall/repatch. That proved
to me that even a benign autopatch is not necessarily a good thing, much
less a patch to something that's an integral part of one's business
solution. There's an hour I'm never getting back, not to mention the
time spent scanning the drive for bad sectors to prove whether the
problem was hardware or software, nor the stress of worrying whether a
production-level system had serious issues.
If the model works for you folks, more power to you. To me, it sounds
like a week-plus diagnostic and recovery process complete with possible
multiple lawsuits just waiting to happen. Even if there were no legal
exposure, I wouldn't want that potential for someone to get "surprised" in
the field, just from the logistical point of view of having to fix everyone
at once and having everyone screaming for a fix--when the problem may be
very hard to find. Blind updates can be nasty, especially when the party
being patched doesn't necessarily know when they're being patched or what
was changed; they don't see the correlation, they just know something is
off, but not what.
That doesn't put the developer in a warm, fuzzy place, IMHO.
mark->
--
No matter what your problems, modern medicine can help!
http://members.iglou.com/fairlite/fixital/