How many records?

Utilities like DBU, Make, and IDE, written in HMG / used to create HMG-based applications

Moderator: Rathinagiri



Post by Tristan »

Hi All,

How many records can a .dbf file handle? More than 1 million records, or ...?

And how is performance for a .dbf file that has more than 1 million records?



Post by Alex Gustow »

Hi Tristan

I googled "dbf description", and the first link is "Xbase (& dBASE) File Format Description" by Erik Bachmann.

Good thing! I added it to my "Cool Links" :)


Post by sudip »

Hello Tristan,

Please read the following information, taken from a post on the differences between Harbour and xHarbour. :) Please check viewtopic.php?f=6&t=948
In both compilers the maximal file size for tables, memos and indexes is
limited only by the OS and file format structures. Neither Harbour nor
xHarbour introduces its own limits here.
The maximal file size for DBFs is limited by the number of records,
2^32-1 = 4294967295, and the maximal record size, 2^16-1 = 65535, which
gives nearly 2^48 = 256 TB as the maximal .dbf file size.
The maximal memo file size depends on the memo type used (DBT, FPT
or SMT) and the size of a memo block. It is limited by the maximal number
of memo blocks, 2^32, times the block size, so it is 2^32*<size_of_memo_block>.
The default memo block size for DBT is 512 bytes, for FPT 64 bytes and
for SMT 32 bytes, so for the standard block sizes the maxima are:
DBT -> 2 TB, FPT -> 256 GB, SMT -> 128 GB. The maximal memo block size in
Harbour is 2^32 and the minimal is 1 byte; it can be any value between
1 and 65536, and above that any multiple of 64 KB. The last limitation
is a workaround for some memo drivers wrongly implemented in other
languages, which set only 16 bits of the 32-bit field in the memo
header. Most other languages limit the memo block size to 2^15, and the
block size has to be a power of 2. Some of them also impose minimal
block size limits. A programmer who plans to share data with programs
compiled by such languages should check their documentation so as not
to create memo files which they cannot access.
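The size arithmetic quoted above can be checked with a quick sketch (Python is used here only for the calculation; the constants mirror the 32-bit record count and 16-bit record length fields described in the quote):

```python
# Sketch: verify the DBF/memo size limits quoted above.
MAX_RECORDS = 2**32 - 1        # record-count field is 32-bit
MAX_RECORD_SIZE = 2**16 - 1    # record-size field is 16-bit

max_dbf = MAX_RECORDS * MAX_RECORD_SIZE
print(f"max .dbf size ~ {max_dbf / 2**40:.0f} TB")   # nearly 2^48 bytes = 256 TB

# Memo files: up to 2^32 blocks times the default block size per format.
for fmt, block in (("DBT", 512), ("FPT", 64), ("SMT", 32)):
    size = 2**32 * block
    print(f"{fmt}: {size / 2**30:.0f} GB")  # DBT 2048 GB (2 TB), FPT 256 GB, SMT 128 GB
```

So a table with "more than 1 million records" is far below the format's theoretical limit; in practice the binding constraints are the OS/file system and index performance, as the quote goes on to explain.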

The maximal file size for standard NTX files is 4 GB, limited by
internal NTX structures. Enabling 64-bit locking in [x]Harbour changes
the NTX format slightly and increases the maximal NTX file size to 4 TB.
The NTX format in [x]Harbour also has many other extensions, like support
for multi-tag indexes or using the record number as a hidden part of the
index key, and many others which are unique to [x]Harbour. In practice all
CDX extensions are supported by NTX in [x]Harbour.
The NSX format in [x]Harbour is also limited by default to 4 GB, but as
with NTX, enabling 64-bit locking extends it to 4 TB. It also supports the
set of features common to NTX and CDX.

The CDX format is limited to 4 GB, and so far [x]Harbour does not support
the extended mode which can increase the size up to 2 TB with the standard
page length; the limit can be bigger in all formats if support for bigger
index pages is introduced. Of course, all such extended formats are not
binary compatible with the original ones and so far can be used only by
[x]Harbour RDDs, though in ADS the .adi format is such an extended CDX
format, so maybe in the future it will be possible to use .adi indexes in
our CDX RDD.

Of course, all of the above sizes can be reduced by operating system (OS)
or file system (FS) limitations, so it is necessary to check what is
supported by the environment where [x]Harbour applications are executed.
Hope this will help :)

With best regards,
