tweaking efficiency
Brian K. White
brian at aljex.com
Wed Sep 28 15:23:18 PDT 2005
----- Original Message -----
From: "Kenneth Brody" <kenbrody at bestweb.net>
To: "Brian K. White" <brian at aljex.com>
Cc: "filePro mailing list" <filepro-list at seaslug.org>
Sent: Wednesday, September 28, 2005 5:45 PM
Subject: Re: tweaking efficiency
> Quoting Brian K. White (Wed, 28 Sep 2005 16:38:42 -0400):
>
>> Is there any difference in the amount of work done behind the scenes
>> between
>> these two?
>>
>> ... < xlate(@t4,"/","") < xlate(@tm,":","")
>>
>> ... < xlate(@t4<@tm,"/:","")
>
> Without any actual timing, my gut says that the first method is
> slightly more efficient due to looking for only a single character
> in each of the fields, rather than looking for both characters in
> a combined field.
>
>> They don't produce the exact same output; the 2nd one leaves more than
>> one space between the two values
>
> No, it doesn't.
You're right, of course; sorry.
I didn't even look at the interim results, since they didn't matter to me. I
already had the process working by the time I thought of that change, and
the overall process kept working.
I assumed (correctly) that the < operator took effect before the xlate, and
I assumed (incorrectly) that this would leave each field shorter, with no <
operator taking place after the xlate to take up the slack.
I see now why that was silly: the parts of the field did get shorter, but
the whole combined field got shorter, not just the first 10-byte section of
it.
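To spell it out with a worked example (the sample values here are made up
for illustration, assuming @t4 looks like "09/28/2005" and @tm like
"17:45"):

    xlate(@t4,"/","") < xlate(@tm,":","")
        -> "09282005" < "1745"
        -> "09282005 1745"

    xlate(@t4 < @tm,"/:","")
        -> xlate("09/28/2005 17:45","/:","")
        -> "09282005 1745"

Since < inserts a single separating space at concatenation time, and the
xlate in the 2nd form shrinks the whole combined string rather than padding
within the first field's width, both forms come out identical.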
>> The 1st version is more quickly read and understood, but I'm always
>> trying to make things as efficient as possible so the 2nd version
>> occurred to me.
>
> How many times is the command going to be executed?
Oh, at most once a minute by at most 20 to 40 concurrent users on a given
server, eventually. Probably no more than 5 or 10 now.
I'm not actually concerned about this particular routine, but I like to be
as aware of issues as possible, so that I know the right way to write when I
am writing something that needs to scale up a lot, or that has to run a
zillion times as fast as possible, etc.
For instance, I never used to realize that there was a penalty for not
defining variables where possible, until you mentioned here how undefined
variables have to keep releasing and reallocating their memory, whereas
defined ones can allocate once.
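As a sketch of that idiom (the dummy field name, length, and edit here are
made up for illustration), defining a field means giving it a length and
edit up front the first time it appears:

    Then: aa(8,.0) = "0"

versus just assigning to aa without ever defining it, where, per the
explanation above, each assignment may have to release and reallocate the
field's storage to fit the new value.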
Besides that, I'm just a firm believer in the idea that it's unavoidable
that a human will form habits and will repeat something wherever possible
instead of really analysing every new situation and tailoring his response
to match it perfectly. So I figure it's actually less work in the long run
to have "shoot for most efficient" as the default habit. It doesn't hurt
when it's not needed, and it does help when it is. It also means you don't
have to be perfectly omniscient and always know when super efficiency is
needed, and it means that code that didn't need it at first, but is then
asked to scale way up, doesn't need to be rewritten as often or as much.
Brian K. White -- brian at aljex.com -- http://www.aljex.com/bkw/
+++++[>+++[>+++++>+++++++<<-]<-]>>+.>.+++++.+++++++.-.[>+<---]>++.
filePro BBx Linux SCO FreeBSD #callahans Satriani Filk!