PC SOFT
PROFESSIONAL FORUMS
WINDEV, WEBDEV and WINDEV Mobile

Help deleting/archiving LARGE HFCS Databases
Started by myerssj, Feb 9, 2018 07:56 - 1 reply
Registered member
20 messages
Posted on February 9, 2018 - 07:56
I have a very large set of web, desktop/server, and mobile applications all built around a LARGE database primarily in HFCS. I am trying to address performance issues due to some of the tables being huge. I originally tried to use some logic to move old records to an archive table, but that was way too slow. I have now made copies of the files (FIC and NDX) and mapped them to a new name for the archive.

I can't seem to find any practical way to delete/purge the old records from the production table. The largest table is a 155 GB FIC, not including the NDX. We've tried using queries to delete the old records, but it is painfully slow. I need to delete about 15 million records; it took 3 hours to delete 8,000 the last time we tried.

I also noticed the file size is not reduced on delete. I assume we will have to reindex for that to happen, which would also take a ridiculous amount of time.

Anyone have any suggestions or best practices for how to handle this? We do not have the option to take any of the "current" data offline (about 20 million records in this particular file).

Stephen
Posted on February 9, 2018 - 12:28
Hi Stephen,

Thoughts, in no particular order...

1. Are you sure that the performance issue is because of the size of the
files? I have worked on very big files without any speed difference, as
long as the keys are designed correctly for your queries and direct
access... Spending some time on optimization may be what you need...

2. 8,000 deletes in 3 hours seems STRANGELY slow... Are you relying on the
automatic management of linked records to delete secondary records at
the same time?

3. What kind of hard drive are you using? If it's not an SSD, then
investing in a high-speed SSD would make a HUGE difference.

4. Why do you care if it takes a long time to delete records? It's not
like you need to stay in front of your utility while it works...

5. You don't need to reduce the file size; holes will be re-used
automatically.

6. It sounds like you never reindex your files, so maybe you also never
run an hOptimize to recalculate the best indexes... That could also
make a big difference.

7. Is the physical server where your HF server lives big enough? On big
databases, having enough RAM on the server for queries AND the cache is
very important for performance.

8. It seems like you want to delete 75% of your file content... It would
probably be faster to copy the 25% that you want to keep into a second
file with index management turned off, then reindex it and swap the
names... That would require only a very short interruption of service
to copy the newest records and rename the 2 files.
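Point 8 above, in generic SQL terms: load the keepers into an index-free copy, build the indexes once at the end, then swap the names during a short outage. This is a minimal sketch using SQLite as a stand-in for HFSQL (in WLanguage you would do the equivalent with the HFSQL file functions); the `orders` table, its columns, and the cutoff date are all made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, order_date TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "2015-01-01", 10.0), (2, "2017-06-01", 20.0),
                  (3, "2018-01-15", 30.0), (4, "2018-02-01", 40.0)])

# 1. Create the new table WITHOUT secondary indexes, so inserts are cheap.
conn.execute("CREATE TABLE orders_new (id INTEGER PRIMARY KEY, order_date TEXT, amount REAL)")

# 2. Bulk-copy only the rows to keep (the "25%").
conn.execute("INSERT INTO orders_new SELECT * FROM orders WHERE order_date >= '2018-01-01'")

# 3. Build the indexes once, after the load.
conn.execute("CREATE INDEX idx_orders_new_date ON orders_new(order_date)")

# 4. Swap the names during a short service interruption.
conn.execute("ALTER TABLE orders RENAME TO orders_archive")
conn.execute("ALTER TABLE orders_new RENAME TO orders")

print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # → 2 (kept rows)
```

The cost of this approach scales with the 25% you keep, not the 75% you delete, which is why it beats a mass DELETE on a 155 GB file.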
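On points 2 and 4: if you do purge in place, deleting one record at a time (each paying its own commit/lock overhead, plus any cascading deletes of linked records) is the usual reason a mass delete crawls. Deleting in bounded batches keeps each transaction small while the file stays available between batches. A sketch of the pattern, again with SQLite standing in for HFSQL; the table, cutoff, and batch size are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, ts INTEGER)")
conn.executemany("INSERT INTO log VALUES (?, ?)", [(i, i) for i in range(10_000)])
conn.commit()

BATCH = 1_000
deleted = 0
while True:
    # Delete at most BATCH old rows per transaction, so each commit is
    # short and other sessions can work between batches.
    cur = conn.execute(
        "DELETE FROM log WHERE id IN (SELECT id FROM log WHERE ts < ? LIMIT ?)",
        (7_500, BATCH))
    conn.commit()
    if cur.rowcount == 0:
        break
    deleted += cur.rowcount

print(deleted)  # → 7500
```

Run off-hours, a loop like this grinds through millions of rows without ever holding a long transaction open.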


That's all for now, I hope this helps... Once again, let me
emphasize that I strongly doubt the size of the files is the
primary cause of the slowdown.

Best regards

--
Fabrice Harari
International WinDev, WebDev and WinDev mobile Consulting

Ready for you: WXShowroom.com, WXReplication (open source) and now WXEDM
(open source)

More information on http://www.fabriceharari.com

