disaster recovery
John Esak
john at valar.com
Wed Nov 16 17:26:59 PST 2005
My answer might seem more like a cop-out or testimonial... or even just a
plug... but it really isn't.
I would call Tom at Microlite and ask him the very same question... or, better
yet, forward him this message: tom at microlite.com. You will get no better
answers anywhere.
John
P.S. - My suggestion is an ftp-save to another machine during the least busy
time... a full master... then differential ftp-saves to the other machine at
various other points in the day. Then, at any point during the day, back up
the other machine's saves to tape. That way the slowdown does not affect
your main work environment.
We have put a 1-gigabit LAN between the two machines, so the ftp-saves have
very little impact on performance.
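For what it's worth, here is a rough crontab sketch of that schedule. The
script names are placeholders (not actual BackupEDGE commands -- check the
exact invocation with Tom), and the times just assume the box is quietest
overnight:

  # /etc/crontab fragment on the production machine
  # min hour dom mon dow  user  command

  # full master ftp-save to the backup machine during the quietest hour
  30 1 * * *  root  /usr/local/sbin/full_ftp_save

  # differential ftp-saves at a few points during the working day
  0 10,13,16 * * *  root  /usr/local/sbin/diff_ftp_save

  # on the backup machine: write the received saves out to tape
  # at whatever point in the day suits you
  0 12 * * *  root  /usr/local/sbin/saves_to_tape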
All this, tied to a scheduled rsync of critical hierarchies... seems like it
would all be pretty good. But again, I would talk directly with Tom.
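A minimal sketch of the kind of scheduled rsync I mean for the critical
hierarchies... the paths and host name are examples only (-a preserves
ownership, permissions and timestamps, -z compresses over the wire,
--delete keeps the copy exact):

  # push the filePro data hierarchy to the backup machine over ssh
  # every two hours; substitute your own critical directories and host
  0 */2 * * *  root  /usr/bin/rsync -az --delete -e ssh /appl/filepro/ backupbox:/backup/filepro/

  # note: an rsync taken while users are posting can catch files
  # mid-update, so run it at quiet times or against a saved copy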
Just my suggestion.
> -----Original Message-----
> From: filepro-list-bounces at lists.celestial.com
> [mailto:filepro-list-bounces at lists.celestial.com]On Behalf Of Richard D.
> Williams
> Sent: Wednesday, November 16, 2005 8:06 PM
> To: filePro
> Subject: OT: disaster recovery
>
>
> In a Red Hat Enterprise environment, what would be the best disaster
> recovery configuration? Money is no object.
>
> Duplicate server?
>
> Rsync?
>
> Edge Backup and Recovery?
>
> How can you reduce the size of the data loss window between backups or
> rsync?
>
> Will rsync work when users are updating records?
>
> Is there a way to make sure files are not corrupted?
>
> I am looking for any good ideas.
>
> Thanks,
>
> Richard D. Williams
> The Applications Group
> _______________________________________________
> Filepro-list mailing list
> Filepro-list at lists.celestial.com
> http://mailman.celestial.com/mailman/listinfo/filepro-list