PC SOFT

PROFESSIONAL FORUMS
WINDEV, WEBDEV and WINDEV Mobile

Query on a resultset...
Started by guest, 15 Jul 2015 22:25 - 6 replies
Posted on 15 July 2015 - 22:25
Hi. These days I'm trying to speed up my data searches by reducing the number of round trips to my server-side MySQL DB and doing more of the work in memory on the client side. The goal is to keep performance acceptable when accessing data from outside the company LAN (on the LAN, queries go like rockets, obviously). When you perform a browsing query you can fetch 100/200 items and all their linked data (6/7 linked MySQL tables) in a single query. So 10 queries = 2,000 full items... quite fast even outside the LAN!

Instead, when you start from a given list of items and the search is one per row (you cannot start from a set of items delivered by a filtering query; you have a scattered list of random items with maybe nothing in common with each other), performance falls off a cliff. 1,000 rows = 1,000 SELECT queries, right?

Now I'm going to import from a text file a set of items that make up a provider's catalog. Weekly catalogs often contain only an item code and a price; all the other fields/columns of each item/record have to be filled in by searching the "history" of catalogs previously imported by the company, stored in the MySQL DB. The match between a "row item" and the "history items" is by item code. You can have N past history records matching the same single row, in which case you choose the latest (I've simplified; there are more complex criteria).

So I can either execute a query to get the past catalog records of each item/row, applying the needed filters/sorts in the SQL query to get the latest history match, or execute a single query (which is fast) that saves the provider's whole past catalog in a result set in memory.
The first option is trivial but slow outside the LAN (N rows = N queries); the second would let me do most of the work in memory, but I cannot sort/filter the resulting data source.
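To make the trade-off concrete, here is a minimal sketch in Python using an in-memory SQLite database as a stand-in for the remote MySQL server (the table and column names are invented for illustration): option 1 does one round trip per imported row, option 2 fetches the provider's whole history once and resolves the latest entry per code in memory.

```python
import sqlite3

# Stand-in for the remote MySQL server: one "history" table of past
# catalog entries (table/column names are invented for this sketch).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE history (code TEXT, price REAL, imported TEXT)")
con.executemany("INSERT INTO history VALUES (?, ?, ?)", [
    ("A1", 10.0, "2015-01-01"),
    ("A1", 12.0, "2015-06-01"),
    ("B2", 5.0,  "2015-03-01"),
])

codes = ["A1", "B2"]  # item codes from the imported catalog rows

# Option 1: one query (= one round trip) per imported row. Slow over a WAN.
latest_per_row = {}
for c in codes:
    row = con.execute(
        "SELECT price FROM history WHERE code = ? "
        "ORDER BY imported DESC LIMIT 1", (c,)).fetchone()
    latest_per_row[c] = row[0]

# Option 2: one bulk query, then resolve the latest entry in memory.
latest_bulk = {}
for code, price, imported in con.execute(
        "SELECT code, price, imported FROM history ORDER BY imported"):
    latest_bulk[code] = price  # later imports overwrite earlier ones

print(latest_per_row == latest_bulk)  # same answer, very different latency
```

Both approaches yield the same latest price per code; the difference is only in how many round trips the server sees.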

As I read on the forum, one way could be FileToArray on the data source, but since each imported row is a different item, I'd have to scan (FOR EACH) the whole array each time to find the match, wouldn't I? I'm afraid that even using WHERE in FOR EACH the whole array still gets scanned (it seems so from the ChronoStart timings).
I also thought about saving the catalog to a local data file and then running local queries on it, but I'm quite a newbie and didn't succeed in creating it. Can you help me make the best choice in WD for working (sort/filter) on data sources or derived arrays?

Thank you and sorry for being a little verbose in this post.
Stefano
Posted on 15 July 2015 - 22:51
Hi Stefano,

I'm not sure I followed the whole thing, but I'll try anyway...

What about:
- you import all your IDs from your text file and make a comma-delimited string with them
- then you do ONE query with an 'IN' list of IDs as the main condition...

this should in fact do a series of lookups on the server, but only one round trip for the resulting dataset... -IF- I understood the question :-)
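The suggestion above can be sketched as follows, again with Python and SQLite standing in for the real client and server (names invented). Using parameter placeholders to build the IN list avoids the injection and quoting problems a hand-concatenated comma-delimited string could introduce:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO items VALUES (?, ?)",
                [(1, "bolt"), (2, "nut"), (3, "washer")])

# IDs as they might come out of the imported text file
ids = [3, 1]

# Build ONE query with an IN list: a single round trip for the whole set.
marks = ",".join("?" * len(ids))
rows = con.execute(
    f"SELECT id, name FROM items WHERE id IN ({marks})", ids).fetchall()

print(sorted(rows))
```

The server still performs one lookup per ID internally, but the client pays the network latency only once.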

Best regards
Posted on 15 July 2015 - 23:42
Yes, I think you understood.
I thought about that for getting the item data from the database in one shot. The problem is that I simplified a little so as not to be really verbose.
The providers do not give you the company's internal item code; they give you either the barcode or the provider's item code (or both). So first you have to read the history catalog data file and work out the best-suited past entry for that item. Once you have it, you can look up the internal company code (you need that to find the full record in the items table) by matching the barcode/provider code against the barcodes table and/or provider codes table.

The steps are:
1) Import the new catalog
2) Fill empty fields with data coming from the latest best-suited past catalog entry
3) Find a match of the barcode and the provider's item code in their respective tables and get the related internal item code
4) Use that code to look up all the item's info in the items table.

At the end of step 1 you don't know the internal item code (needed to search the items data file). You only have it at the end of step 3.
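The four steps can be sketched with plain dictionaries as in-memory stand-ins for the lookup tables (all names here are hypothetical, invented for illustration):

```python
# Hypothetical in-memory stand-ins for the tables involved
barcodes = {"800123": "ITM-1"}          # barcode -> internal item code
provider_codes = {"PRV-9": "ITM-2"}     # provider code -> internal item code
items = {"ITM-1": {"name": "bolt"}, "ITM-2": {"name": "nut"}}
history = {"PRV-9": {"weight": 0.5}}    # latest best-suited past entry

def resolve(row):
    # Step 2: complete the row from the latest matching history entry
    # (imported values take precedence over historical ones)
    enriched = {**history.get(row.get("provider_code"), {}), **row}
    # Step 3: try the barcode first, then the provider code,
    # to find the internal item code
    internal = barcodes.get(row.get("barcode")) \
        or provider_codes.get(row.get("provider_code"))
    # Step 4: fetch the full item record with the internal code
    enriched["item"] = items.get(internal)
    return enriched

r = resolve({"provider_code": "PRV-9", "price": 2.0})  # one imported row
```

Dictionary lookups are O(1), so once the tables are loaded, each imported row is resolved without any further queries.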

It's not trivial, unfortunately.
Is there no way to perform a fine-grained, quick search on the in-memory result set returned by the query on provider X's full catalog?
Posted on 16 July 2015 - 11:05
You don't need FOR EACH on the array: if you use an array of structures you can do a lot with them, and they are very fast.

Just play a little with ArraySeek; if you use asBinary it's very fast.
You'll probably need ArraySort first to be sure: in some cases the ORDER BY of the DB engine doesn't return the records in binary sort order.

Another possible way is to use an associative array. I don't know if it's the best choice in your case, but take a look at them.
http://doc.windev.com/en-US/…
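As an analogy only (this is Python, not WLanguage): ArraySeek with asBinary on a sorted array is essentially a binary search, and an associative array corresponds to a dict. A minimal sketch of both lookups:

```python
import bisect

# Sorted (code, price) pairs -- the analogue of ArraySort followed by
# ArraySeek with asBinary (sorting first is what makes the seek valid).
rows = sorted([("B2", 5.0), ("A1", 12.0), ("C3", 7.0)])
codes = [code for code, _ in rows]

def seek(code):
    i = bisect.bisect_left(codes, code)   # O(log n) instead of a full scan
    if i < len(codes) and codes[i] == code:
        return rows[i][1]
    return None                           # no match

# Associative-array equivalent: a dict keyed by item code, O(1) lookups.
by_code = {code: price for code, price in rows}

print(seek("C3"), by_code["C3"])
```

Either way, the per-row match no longer scans the whole array, which is what FOR EACH (with or without WHERE) ends up doing.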
Posted on 16 July 2015 - 11:41
OK, thank you. So copying the "history" of provider X into a local temporary data file and querying that is not an option worth considering for improving speed, is it?
Posted on 16 July 2015 - 12:48
It is one of the options, but if you have enough memory the array solution is faster.

The other option is to copy the txt file to the server and run a program on the server to update the DB.
Posted on 17 July 2015 - 12:08
Hi again,

another way of improving your results would be to maintain one extra
file on your server, a link file between the external and the internal code,
where you would always have the best value, updated whenever something changes.

This way, your described query would become much simpler and faster.
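A sketch of that link file as a small table, again using Python with SQLite as a stand-in (table and column names invented). The upsert keeps the stored best value current whenever the match changes:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical "link file": external (barcode/provider) code -> internal
# item code, maintained whenever the best match changes.
con.execute("""CREATE TABLE code_link (
    external_code TEXT PRIMARY KEY,
    internal_code TEXT)""")

def link(external, internal):
    # Insert a new link, or overwrite the stored value if it changed
    # (ON CONFLICT ... DO UPDATE needs SQLite 3.24+)
    con.execute("""INSERT INTO code_link VALUES (?, ?)
                   ON CONFLICT(external_code)
                   DO UPDATE SET internal_code = excluded.internal_code""",
                (external, internal))

link("PRV-9", "ITM-2")
link("PRV-9", "ITM-7")   # best match changed: link is updated in place

row = con.execute("SELECT internal_code FROM code_link "
                  "WHERE external_code = ?", ("PRV-9",)).fetchone()
```

With such a table maintained on the server, the import would resolve each external code with a single indexed lookup instead of the multi-step matching described earlier.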

Best regards

--
Fabrice Harari
International WinDev, WebDev and WinDev mobile Consulting

Already there: WXShowroom.com, WXReplication (open source)
Coming soon: WXEDM (open source)
More information on http://www.fabriceharari.com


On 7/16/2015 4:48 AM, Paulo Oliveira wrote the reply quoted above.