FW: Browse Lookups - WARNING....Long rambling post....read at your discretion

Richard D. Williams richard at appgrp.net
Wed Jun 27 08:43:31 PDT 2018


I've been following this thread, and I think a browse lookup against 
the big file is the wrong approach.  It takes too long.

When the user types in the text to search for, use a SYSTEM command to 
run dreport/rreport against the target file with a selection set using 
@PM.  Test the field or fields in the target file that contain the text 
and write the data needed for a browse lookup to a temp file.
Then browse the temp file.  You can always link back to the original 
record in the target file by also recording the record number in the 
temp file.

Remember to write some unique per-user key to the temp file (on Linux I 
use the tty) so multiple users can do this at the same time.
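
Roughly, the calling side can look something like the sketch below.  It 
is untested and every name in it is made up: "orders" is the big target 
file, "desctmp" is the temp file, "totmp" is an output format whose 
processing writes one desctmp record per match (user key, order number, 
record number), and "descsel" is the selection set that does the @PM 
test against the typed text.  I used @id instead of the tty just to 
keep the sketch short; anything unique per user works.  Check the exact 
rreport flags against your release.

       If:
     Then: srch(54,*)=""; input popup srch "Search descriptions for: "
       If:
     Then: ukey(12,*)=@id; system "rreport orders -f totmp -v descsel"
       If:
     Then: lookup tmp = desctmp k=ukey i=A -nx
       If: not tmp
     Then: msgbox "No matching orders found"; end

From there, hang the browse lookup on desctmp (small and already 
filtered, indexed here on the user key as index A) instead of the big 
file, and the stored record number gets you back to the original order.  
Clear that user's old rows out of desctmp before each run.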

Just my two cents.

Richard D. Williams



On 6/27/2018 5:04 AM, Jose Lerebours via Filepro-list wrote:
> In SQL, you can do
>
> SELECT * FROM `file` WHERE `keyfield` LIKE "%some string%";
>
> The above will return all rows where the key field contains the 
> specified string.  Can fpSQL handle this type of query?  If so, there 
> is your answer!
>
> Second option:
>
> Mirror your fp keys to SQL and do this kind of query through a web 
> interface, where you have the ability to write UIs that handle paging, 
> sorting, and multiple filter keys in a single run.
>
> Third option:
>
> As suggested, use a 'demand' index.  I would combine this with a bit 
> of fp coding whereby you collect the filter data and dynamically create 
> the selection table and the demand index itself upon request; the 
> problem is that you may run into trouble in a multi-user environment, 
> so even the demand index itself may have to be chosen dynamically 
> (0, 1, 2 ...) and limited to so many users at a time.
>
> Hope these bring additional options to the table!
>
>
>
> On 06/26/2018 09:08 PM, Scott Walker via Filepro-list wrote:
>> Thanks to all for your responses to my browse lookup issue.
>>
>> I've decided that the browse lookup feature is just not right for 
>> this task, so I'll do a bit of coding.  I am scanning a LOT of records, 
>> looking for any record where the Description field contains a 
>> specified string of text anywhere in the field.
>>
>> The browse lookup is not optimal in this situation since there are 
>> hundreds of thousands of records and I am looking at each one to see 
>> if a particular field contains a specified string of data anywhere in 
>> the field.  So I am looking up a record and seeing if lookup(2) co 
>> "XYZ123".  If not, I use the "drop" command to eliminate it from the 
>> browse lookup.  Since there is no index that can be used to narrow 
>> down the number of records I have to look at, I am just using the 
>> index built on order date.  The string of text may be found in one of 
>> the very first records I look at and not found again until hundreds 
>> of thousands of records later, or perhaps not found at all.
>>
>> So this brings up the question: is there a more efficient way to do 
>> this?
>>
>> One thought: when I write the record with the order description, 
>> let's say for Order# 555555, I could write some additional records to 
>> a file that exists just to make searching for a specified string 
>> faster.
>> For example, if the Description field contains "12345XYZ123" I would 
>> write these records to the extra file:
>>
>>     Desc "key"    Order#
>>     12345         555555
>>     2345X         555555
>>     345XY         555555
>>     45XYZ         555555
>>     5XYZ1         555555
>>     XYZ12         555555
>>     YZ123         555555
>>     Z123          555555
>>     123           555555
>>     23            555555
>>     3             555555
>>
>> That would allow me to build an index on the 5-character description 
>> "key" string, and when the user asks for anything containing the 
>> string "XYZ123" I could use the first 5 characters of the request to 
>> narrow the search down to the index entries with the desc key XYZ12.  
>> That would give me the order numbers to look at in the regular file.
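>>
>> Something like this is what I have in mind for the save-time side.  It 
>> is a rough, untested sketch: "desckeys" is the extra file, field 2 
>> stands in for a description line (as in my lookup(2) test above), I 
>> store the record number but the order number could be written the same 
>> way, and the len()/trailing-blank handling may need adjusting:
>>
>> bldkey   If:
>>          Then: dln(4,.0) = len(2 < ""); i(4,.0) = "1"
>> nxtkey   If: i gt dln
>>          Then: return
>>          If:
>>          Then: dky(5,*) = mid(2, i, "5")
>>          If:
>>          Then: lookup sk = desckeys r=free -e
>>          If:
>>          Then: sk(1) = dky; sk(2) = @rn; write sk
>>          If:
>>          Then: i = i + "1"; goto nxtkey
>>
>> It would get a "gosub bldkey" from wherever I save the description, 
>> and when a description changes I would also have to delete or rebuild 
>> that record's old keys.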
>>
>> Is the overhead of this going to kill me?  For example, on a given 
>> order we can have up to 999 line items with up to 99 lines of 
>> description per line item, and each line of description allows up to 
>> 54 characters.
>>
>> Am I better off trying to come up with something outside of fp, say 
>> some sort of USER call to an outside program that scans the key file 
>> looking for the specified string and returns the fp record numbers 
>> for me to look at inside fp?
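>>
>> For what it's worth, one low-tech version of that (a pure sketch, 
>> every path and name invented) would be to keep a flat text file with 
>> one line per description, "order#<TAB>description text", maintained 
>> at save time or rebuilt nightly by rreport, and let grep do the 
>> scanning:
>>
>>        If:
>>      Then: srch(54,*)=""; input popup srch "Search descriptions for: "
>>        If:
>>      Then: cmd(160,*) = "grep -i '" { (srch < "") { "'"
>>        If:
>>      Then: cmd = (cmd < " /appl/fp/orders/desc.txt > /tmp/fpmatch.") { @id
>>        If:
>>      Then: system cmd
>>
>> The order numbers in /tmp/fpmatch.<user> would then drive lookups back 
>> in fp, either imported or loaded into a small temp file to browse.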
>>
>> Regards,
>> Scott Walker
>>
>>
>>
>>
>> -----Original Message-----
>> From: Filepro-list 
>> [mailto:filepro-list-bounces+scottwalker=ramsystemscorp.com at lists.celestial.com] 
>> On Behalf Of Bruce Easton via Filepro-list
>> Sent: Tuesday, June 26, 2018 1:58 PM
>> To: filepro-list at lists.celestial.com
>> Subject: Re: Browse Lookups
>>
>> "Events are a much better methodology [than timers]"
>>
>> Agreed.  I work almost exclusively with browser-based projects these 
>> days, so I had to test this:
>>
>> If:  @sk EQ "DTAB"
>>
>> to be sure that you can't trap the PageDown as a browse-lookup trigger.
>>
>>
>> On 6/26/18 1:45 PM, Paul McNary via Filepro-list wrote:
>>> I like the idea of callbacks in filePro! :-)
>>>
>>>
>>> On 6/26/2018 11:17 AM, Fairlight via Filepro-list wrote:
>>>> On Tue, Jun 26, 2018 at 11:38:36AM -0400, Richard Kreiss via
>>>> Filepro-list thus spoke:
>>>>> I would love to be able to set a timer for a browse which would
>>>>> execute a sub-routine.
>>>> You're thinking very old-school.
>>>>
>>>> What would be better than a timer is a set of configurable callbacks
>>>> which execute processing table segments upon various events.
>>>>
>>>> * Begin lookup
>>>> * Changed record
>>>> * End of matches
>>>> * Close lookup
>>>>
>>>> In this fashion, you can configure what you like, without having to
>>>> rely on an arbitrary timer which may be smaller than desirable, or
>>>> larger than necessary (thus wasting time).  Timers generally suck, to
>>>> put it bluntly.
>>>> They're often too crude a tool.  Events are a much better methodology
>>>> for this sort of functionality.
>>>>
>>>> And lo and behold, filePro already supports events in general terms.
>>>> So why not have @brsbeg, @brsrcd, @brsend, and @brscls as
>>>> system-based events which function like conventional callbacks?
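>>>>
>>>> Purely hypothetically (none of these labels exist today), usage
>>>> would be nothing exotic:
>>>>
>>>> @brsbeg   If:
>>>>           Then: show "Scanning orders ..."; end
>>>> @brsend   If:
>>>>           Then: show "No more matches"; end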
>>>>
>>>> m->
>


