2014-10-17 14:55 UTC+0200 Przemyslaw Czerpak (druzus/at/poczta.onet.pl)
* include/hbrddcdx.h
* src/rdd/dbfcdx/dbfcdx1.c
+ added support for large index files over 4GB in length.
These are slightly modified CDX indexes which store index page numbers
instead of index page offsets inside the index file. This trick increases
the maximum index file size from 2^32 bytes (4GB) to 2^41 bytes (2TB).
This index format is enabled automatically when DB_DBFLOCK_HB64 is used.
This is the same behavior as in DBFNTX and DBFNSX, for which I added
support for large indexes (up to 4TB) a few years ago.
Warning: the new CDX indexes are not backward compatible and cannot be
read by other systems or older [x]Harbour versions.
If you try to open the new indexes using older [x]Harbour RDDs
then the RTE "DBFCDX/1012 Corruption detected" is generated.
When the current Harbour DBFCDX/SIXCDX RDD opens an index file
it automatically recognizes the type of index file, so it
works correctly with both versions without any problem.
In short: people using DB_DBFLOCK_HB64 should remember
that after reindexing with new Harbour applications, old ones
cannot read the new CDX indexes.
; As a next step I plan to add support for a user-defined page size in
CDX index files.
* doc/xhb-diff.txt
* added information about extended CDX format to section "NATIVE RDDs"
* src/rdd/dbfcdx/dbfcdx1.c
* src/rdd/dbfnsx/dbfnsx1.c
* src/rdd/dbfntx/dbfntx1.c
* disable the record readahead buffer used during indexing when only
one record can be stored inside it
! generate RTE when data cannot be read into record readahead buffer
during indexing
best regards
Przemek
Large index files over 4GB length in DBFs
Moderator: Rathinagiri
- Pablo César
- Posts: 4059
- Joined: Wed Sep 08, 2010 1:18 pm
- Location: Curitiba - Brasil
Large index files over 4GB length in DBFs
HMGing a better world
"Matter tells space how to curve, space tells matter how to move."
Albert Einstein
- dhaine_adp
- Posts: 457
- Joined: Wed Aug 06, 2008 12:22 pm
- Location: Manila, Philippines
Re: Large index files over 4GB length in DBFs
Thanks for the info, Pablo.
Wow, the DBFCDX/SIXCDX RDD becomes a real monster. I'm a bit skeptical about implementing those changes right away, but the old CDX has to go away.
-Danny
Regards,
Danny
Manila, Philippines
- esgici
- Posts: 4543
- Joined: Wed Jul 30, 2008 9:17 pm
- DBs Used: DBF
- Location: iskenderun / Turkiye
Re: Large index files over 4GB length in DBFs
DBFNTX has already had this (4TB) limit for NTX files for a long time.
Which means DBFNTX has been a monster from the beginning
Viva INTERNATIONAL HMG
- Agil Abdullah
- Posts: 204
- Joined: Mon Aug 25, 2014 11:57 am
- Location: Jakarta, Indonesia
Re: Large index files over 4GB length in DBFs
Hi Pablo,
4GB is large enough. But people out there still question the integrity of DBF with regard to "corrupted index". This happens, for example, when we encounter an electricity failure while an indexing process is underway.
What is your answer on this matter?
Warm greetings from Jakarta
Agil Abdullah Albatati (just call me Agil)
Programmer Never Surrender
- Pablo César
- Posts: 4059
- Joined: Wed Sep 08, 2010 1:18 pm
- Location: Curitiba - Brasil
Large index files over 4GB length in DBFs
IMHO DBF is for small to medium data stores. For big data I do not recommend it.
In my last Clipper app I used a timer to re-index the NTX files.
When a crash happens in a DBF, the only care to be taken is:
- continuous backups
- repair the DBF structure
- restore the data from the backup
There are no guarantees of data preservation in some cases.
What I have also done is to keep at least one DBF per registered client to store their private account. E.g. in one video store, each client has an exclusive DBF. This becomes faster to use and avoids many problems when a crash or corruption happens in one main DBF, for example.
HMGing a better world
"Matter tells space how to curve, space tells matter how to move."
Albert Einstein