PC SOFT

PROFESSIONAL FORUMS
WINDEV, WEBDEV and WINDEV Mobile

Newbie questions HF C\S, Part III
Started by Wim Nihoul, 24 Jan 2006 14:04 - 20 replies
Posted on 24 January 2006 - 14:04
Hello,
We too are experimenting with HF Client/Server. The problem is that we are (yet again) seeing a drop in performance.
Since I have little experience with Client/Server in general, I would like to know if there is something wrong with our code. Most of our code to modify data looks something like this:
HReadSeekFirst(FileName, KeyName, KeyValue, hLockWrite)
WHILE NOT HOut(FileName)
	// get some other data and do some stuff
	HModify(FileName)
	HReadNext(FileName, KeyName, hLockWrite)
END
Does anyone have suggestions for doing something like this more efficiently in an HF C/S environment?
Thanks
Wim
Posted on 24 January 2006 - 22:32
> HReadSeekFirst(FileName, KeyName, KeyValue, hLockWrite)
> WHILE NOT HOut(FileName)
> 	// get some other data and do some stuff
> 	HModify(FileName)
> 	HReadNext(FileName, KeyName, hLockWrite)
> END
> Does anyone have suggestions for doing something like this more efficiently in an HF C/S environment?


Hello Wim,
This is the classic example that will not perform well under HF C/S. But the answer to your question lies in what "do some stuff" means.
A treatment of a number of records should fit into a query, but this is not always possible, for several reasons. The most important one: if your application is built on integrity constraints, queries are often inadequate, since HF queries do not respect integrity constraints.
If you can specify the stuff to do, somebody can probably give you an idea how it might be optimized under C/S.
Regards
Mat
Posted on 25 January 2006 - 06:55
G'day Wim
I have not used HF/CS yet, so I am not in a position to try these suggestions.
Do you really need to lock the record on read? It would be quicker if you did not. If you are modifying a large number of records, HLockFile() would be quicker, but it will affect other users if they also need to write to the file, so it may not be practical.
The following option is available in the HReadSeek commands and may help:
hLimitParsing
"The reading of the file will stop as soon as the last value sought is found. The current record will correspond to this last record found.
HFound will be set to False and HOut will be set to True.
This constant is used to optimize the search's speed in client/server mode. "
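A rough, untested sketch of how that might be applied to the original loop (I haven't used HF/CS, so treat the constant placement and the HFound loop condition as assumptions to verify against the help):

```
// Hedged sketch: hLimitParsing stops the scan at the last matching record
// instead of parsing to the end of the file.
// Per the documentation quoted above, HOut is set to True when the scan
// stops, so test HFound rather than NOT HOut.
HReadSeekFirst(FileName, KeyName, KeyValue, hLimitParsing)
WHILE HFound(FileName)
	// get some other data and do some stuff
	HModify(FileName)
	HReadNext(FileName, KeyName, hLimitParsing)
END
```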
The profiler is very useful for this type of problem and will indicate the effect of any changes you make.
Regards
Al
Posted on 25 January 2006 - 12:42
> This is the classic example that will not perform well under HF C/S. But the answer to your question lies in what "do some stuff" means.
> If you can specify the stuff to do, somebody can probably give you an idea how it might be optimized under C/S.

Thanks for your answer Mat.
One of my first tests didn't contain any 'stuff' at all.
I just read (and modified) 10000 records.
Here's the code:
// Read 10,000 records:
i is int = 0
HReadFirst("ARTICLE","KEY_ARTICLE")
WHILE NOT HOut("ARTICLE") AND i < 10000
	i++
	HReadNext("ARTICLE","KEY_ARTICLE")
END
....
// Modify 10,000 records:
i = 0
HReadFirst("ARTICLE","KEY_ARTICLE",hLockWrite)
WHILE NOT HOut("ARTICLE") AND i < 10000
	i++
	ARTICLE.MODEL = "Test"
	HModify("ARTICLE")
	HReadNext("ARTICLE","KEY_ARTICLE",hLockWrite)
END
On HF Classic, reading 10,000 records took about 0.4 s; in HF C/S, about 1.1 s.
On HF Classic, reading and modifying 10,000 records took about 6 s; in HF C/S, about 51 s.
I also did some tests with HSetCache(ValueCache), but at first sight the results were not good:
If 'ValueCache' is too small, the client PC appeared to wait forever, and the processor usage of the Manta server increased.
If 'ValueCache' is too high, it still worked, but more slowly.
All intermediate values for 'ValueCache' gave similar results.
(Needless to say, during the tests I was the only 'user'.)
I've not yet tried the 'hLimitParsing' option mentioned by Al.
Wim
Posted on 25 January 2006 - 12:43
Hi Al,
Thanks for your answer.
I'm afraid that locking the entire file is not an option.
I will do some tests with 'hLimitParsing'.
Wim
Posted on 25 January 2006 - 18:48
You need to be more specific about the //get some other data// part of your post, but here are a couple of things you might check:

How many processes are trying to access the info while you have it locked?

Why not try locking only the current record, modifying it, and then releasing it before locking the next one?
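As a sketch only (I'm not certain whether HModify releases a lock taken on read or whether an explicit HUnlockRecNum call is needed; check the help for your HF version), that per-record locking idea might look like this:

```
// Hedged sketch: seek without a lock, then lock, modify and release
// one record at a time instead of holding hLockWrite across the whole loop.
HReadSeekFirst(FileName, KeyName, KeyValue)       // no lock while seeking
WHILE NOT HOut(FileName)
	HRead(FileName, hCurrentRecNum, hLockWrite)   // lock only this record
	// ...modify fields...
	HModify(FileName)                             // write the change
	HUnlockRecNum(FileName, HRecNum(FileName))    // release (may be implicit)
	HReadNext(FileName, KeyName)                  // move on unlocked
END
```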

Make sure you have the latest version of the HyperFile server.

Even if it isn't released in English yet, you might want to give version 10 of the HyperFile server a go. It did wonders for me.

Later
Posted on 26 January 2006 - 09:28
Hello Wim,
Your test of 10,000 index-sequential reads is not ideal for testing a client/server database. The basis of a C/S database is queries. The moment you introduce a selection criterion, queries will perform better than loops and filters. However, if your application is based on integrity constraints, you can't use queries, since they don't respect them. In that case, C/S will show performance benefits for searching data on complex selection criteria.

> // Read 10,000 records:
> HReadFirst("ARTICLE","KEY_ARTICLE")
> WHILE NOT HOut("ARTICLE") AND i < 10000
> 	i++
> 	HReadNext("ARTICLE","KEY_ARTICLE")
> END

Just reading records is normally not a goal in itself. Selecting part of your file is a typical example where a query will perform better under HF C/S than a loop. For example, the following counts the number of products belonging to group 123:
dsSQL is Data Source
vQuery is string = "select count(KEY_ARTICLE) as TotNum from ARTICLE where GROUP_ARTICLE = 123"
if hExecuteSQLQuery(dsSQL, vQuery) then
	hReadFirst(dsSQL)
	info(dsSQL.TotNum)
else
	info(hErrorInfo())
end

> ....
> // Modify 10,000 records
> HReadFirst("ARTICLE","KEY_ARTICLE",hLockWrite)
> WHILE NOT HOut("ARTICLE") AND i < 10000
> 	i++
> 	ARTICLE.MODEL = "Test"
> 	HModify("ARTICLE")
> 	HReadNext("ARTICLE","KEY_ARTICLE",hLockWrite)
> END
> On HF Classic, reading 10,000 records took about 0.4 s; in HF C/S, about 1.1 s.
> On HF Classic, reading and modifying 10,000 records took about 6 s; in HF C/S, about 51 s.

I don't know how to limit an operation to 10,000 records in SQL, but the above makes more sense when a record selection criterion is applied, e.g.
vQuery="Update ARTICLE set MODEL='Test' where GROUP_ARTICLE = 123 "
is probably more efficient in C/S than a loop. It should modify 10,000 records faster than the 6 seconds of HF Classic. The problem is locking, because hExecuteSQLQuery doesn't do it. If you really need locking, you might be better off using views, which allow locking and writing modifications back via HViewToFile. Alternatively, you could use transactions, but a mix of records locked by HRead... commands and modifications by an update query is rather unpredictable.
When looping through the whole of a file, HF C/S databases are at a disadvantage. An alternative when using selection criteria, suggested by someone on a French forum, is the following:
hSeekFirst(ARTICLE, GROUP_ARTICLE, 123)
WHILE HFound()
	hRead(ARTICLE, hCurrentRecNum, hLockWrite)
	ARTICLE.MODEL = "Test"
	HModify(ARTICLE)
	hNext(ARTICLE, GROUP_ARTICLE)
END
The difference is that the above searches the index and reads only those records that correspond to the selection criterion. It probably won't have an impact on your original test, since all records are included, meaning the same 20,000 accesses.
Regards
Mat
Posted on 26 January 2006 - 09:29
Thanks Al, Art and Mat for all your suggestions.
Wim
(PS: The limit of 10,000 records in the examples was for test purposes only.)
Posted on 26 January 2006 - 10:07
>Has anyone suggestions to do something like this in a more efficient way in a HF C\S environment?
Hi Wim,
I could be totally off base and not understand what you are doing, but here is my 2 Euros' worth. I think the issue is that you are treating a C/S database like a local HF database, with the LAN acting as a bottleneck and slowing you down.
With a local database you use code to access one record at a time, do the work, then save the modification. It is all done local to your machine. Your code looks like code that would be used in this scenario (local HF).
If you do this in a C/S environment, with data on the server, you will be reading a record over the LAN, doing the 'stuff' on the local machine, then sending the result back over the LAN to have it written on the server by the C/S database.
For example, if you have 10,000 records in the C/S database, you will have 10,000 trips across the LAN for this code:
HReadFirst(FileName, KeyName)
WHILE NOT HOut(FileName)
	HReadNext(FileName, KeyName)
END
If you send back a HModify, that's another 10,000 trips across the LAN, a total of 20,000 trips.
With C/S the object is to send one command to the C/S database and let it do all the work.
For a C/S database, instead of the code above you would use something like this:
SELECT * FROM FileName
One trip across the LAN for the request, one trip back with returned data.
I am not a SQL guru by any means, but I think if you try something like an UPDATE command it may work faster. See "UPDATE (SQL language)" and "Creating an update query" in the WD Help file.
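As a sketch of that idea (the WHERE clause below is illustrative, and I'm not certain of the exact behaviour), the whole read/modify loop could collapse into one statement sent via hExecuteSQLQuery:

```
// Hedged sketch: one server-side UPDATE instead of 20,000 LAN round trips.
// Note: as discussed elsewhere in this thread, such a query bypasses the
// integrity constraints and lock handling defined in the analysis.
dsUpdate is Data Source
IF hExecuteSQLQuery(dsUpdate, "UPDATE ARTICLE SET MODEL='Test' WHERE GROUP_ARTICLE = 123") THEN
	Info("Update executed in a single request")
ELSE
	Info(HErrorInfo())
END
```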
HTH,
Art Bonds
Posted on 26 January 2006 - 11:10
Hi Mat,
We have an application running over a network with the .exe on the server. We use a lot of HReadFirst() and HReadSeekFirst(). Correct me if I'm wrong, but are you saying that in our case it is useless to switch to C/S?
When you mention queries, do you mean queries produced by the WD query editor or SQL queries?
Thanks in advance for your answer.
Grtz,
Aad
Posted on 26 January 2006 - 22:54
Hello Aad,
It's best to check the matter out for yourself, using your own application. There are no clear-cut situations. It's quite simple to switch to HF C/S.

> We have an application running over a network with the .exe on the server. We use a lot of HReadFirst() and HReadSeekFirst(). Correct me if I'm wrong, but are you saying that in our case it is useless to switch to C/S?

No, what I meant is that filters and loops using HReadNext are less efficient in a C/S database than in HF Classic. The same is true for MySQL, from what I have read.

> When you mention queries, do you mean queries produced by the WD query editor or SQL queries?

I have not noticed any difference between the two, though some people say hExecuteSQLQuery performs better. All our tables are based on queries, even under HF Classic, so we created simple ones in the editor to populate the table. For tables needing a lot of filtering, we use the same query name but define our own SQL depending on user input and launch it via hExecuteSQLQuery(myEditorQueryName, mySQLCommand). This works fine and gives great flexibility.
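That pattern might look roughly like this (all names here are invented for illustration, not from a real project):

```
// Hedged sketch: QRY_Articles is a simple query built in the query editor
// and bound to a table control; at runtime its SQL is replaced on the fly.
sFilter is string = "SELECT * FROM ARTICLE WHERE MODEL LIKE '%" + sUserInput + "%'"
// (in real code, escape/validate sUserInput before building the SQL)
IF hExecuteSQLQuery(QRY_Articles, sFilter) THEN
	TableDisplay(TABLE_Articles, taInit)  // control bound to QRY_Articles refreshes
ELSE
	Info(HErrorInfo())
END
```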
Regards
Mat
Posted on 27 January 2006 - 08:00
> We have an application running over a network with the .exe on the server. We use a lot of HReadFirst() and HReadSeekFirst(). Correct me if I'm wrong, but are you saying that in our case it is useless to switch to C/S?

I should also have said that it depends on what the HReadSeekFirst commands are used for. Originally, I mostly used them to read individual records for creation/modification; all tables, combos and lists were filled via queries. Due to desperately slow multi-file queries in HF Classic, I started using single-file queries, plus a FOR ALL loop on the query and an HReadSeekFirst for each record in the result to obtain values from other files, e.g. listing all pending orders via a query and getting the product description and customer name via HReadSeekFirst. Under HF Classic this is much quicker than linking 3 or 4 files in a query. In HF C/S it's the opposite: a query linking all the files is quite quick, and multiple HReadSeek calls in a loop slow the process down. As far as I know, each HReadSeek is converted into a query. So it's easy to work out that if you have 1000 records in your result and look up data from 3 other files, in the worst case HF C/S has to execute 3000 queries instead of 1 multi-file query.
This situation may not always be evident on a local network. But the moment you connect to a remote database, it can change dramatically.
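The multi-file alternative could be sketched like this (file and field names are invented for illustration):

```
// Hedged sketch: one multi-file query instead of an HReadSeekFirst per record.
// Instead of 1 query plus up to 3000 seeks, everything comes back in 1 request.
dsOrders is Data Source
sSQL is string = "SELECT ORDERS.ORDER_ID, ARTICLE.DESCRIPTION, CUSTOMER.NAME " + ...
	"FROM ORDERS " + ...
	"JOIN ARTICLE ON ARTICLE.KEY_ARTICLE = ORDERS.KEY_ARTICLE " + ...
	"JOIN CUSTOMER ON CUSTOMER.KEY_CUSTOMER = ORDERS.KEY_CUSTOMER " + ...
	"WHERE ORDERS.STATUS = 'PENDING'"
IF hExecuteSQLQuery(dsOrders, sSQL) THEN
	hReadFirst(dsOrders)
	WHILE NOT HOut(dsOrders)
		// each result row already carries the data from all three files:
		// no per-record seeks, a single request to the server
		hReadNext(dsOrders)
	END
END
```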
Posted on 27 January 2006 - 08:00
> ...
> If you send back a HModify, that's another 10,000 trips across the LAN, a total of 20,000 trips.
> With C/S the object is to send one command to the C/S database and let it do all the work.
> For a C/S database, instead of the code above you would use something like this:
> SELECT * FROM FileName
> One trip across the LAN for the request, one trip back with returned data.
> I am not a SQL guru by any means, but I think if you try something like an UPDATE command it may work faster. See "UPDATE (SQL language)" and "Creating an update query" in the WD Help file.
> ...

Nicely put, Art; I fully agree with the principle. The trouble with HF C/S for the time being is trying to keep data integrity while using queries, which DO NOT SUPPORT the integrity constraints defined in the analysis. Another issue is that UPDATE queries apparently have not returned False when trying to modify locked records. In my tests, the query returned True and left the record in conflict between two versions, despite the fact that I had turned off HF lock management and cancel colliding writes by using HOnError("*", hErrLock, "ErrorLock") and returning opCancel, which normally cancels the operation causing the conflict.
Transactions would be an answer, but not if the query returns True when it causes locking conflicts, and not when integrity constraints go unnoticed by the query. Something has to trigger HTransactionCancel, otherwise it just won't work.
Does anyone have a solution to this, or have I overlooked something?
Thanks
Mat
Posted on 27 January 2006 - 10:42
Hi Mat,
Thank you very much for your information.
We'll give it a go.
Grtz,
Aad
