Hi Milan:
Sorry I was unclear about the checksum idea. The checksum is on the tables, not
on the data. The intention is only to ensure that the coder and the decoder use
the same tables. There would need to be a central place where tables can be
registered, and a coder would have to make sure that their tables were
registered there. The registration center (e.g. WMO) would generate the checksum, and
the coder would put it into the BUFR message. A decoder would read the message,
get the checksum, see if they already had those tables locally, and if not,
request them from the registration center using the checksum.
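
To make the idea concrete, here is a rough Python sketch of what the coder and
decoder sides might do. The registry call and the directory layout are purely
illustrative assumptions, not part of any existing BUFR software:

    import hashlib
    from pathlib import Path

    def table_checksum(table_files):
        # Checksum of the table definitions themselves, not of the BUFR data,
        # so message size or compression does not affect it.
        h = hashlib.sha256()
        for path in sorted(table_files):
            h.update(Path(path).read_bytes())
        return h.hexdigest()

    def resolve_tables(checksum, local_dir, fetch_from_registry):
        # Decoder side: reuse a local copy of the tables if this checksum is
        # already known, otherwise ask the registration center for the set.
        # fetch_from_registry is a hypothetical callable supplied by the user.
        candidate = Path(local_dir) / checksum
        if candidate.is_dir():
            return candidate
        return fetch_from_registry(checksum, dest=candidate)

The coder would carry the hexdigest in the message; the decoder only ever has
to fetch a given table set once.
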
My fear is that, without something like that, after a few years BUFR messages
will no longer be reliably decodable, or will only be decodable by the original
software that generated them, which becomes harder and harder to maintain.
Regards,
John
Milan Dragosavac wrote:
Hi John,
Unfortunately something like this cannot be done, for various reasons. The
tables are loaded based on the information in Section 1, and then the expansion
of the data descriptors can start; only then can one find out whether local
entries are really used. A checksum cannot help because of possible
compression, in which case the size of the data varies. The ECMWF experience
shows that you have to know in any case what data you want to process, and
then, before operational usage starts, links are created in advance if needed.
It is not too bad a situation. In our preprocessing we repack all the data and
afterwards usually use one version of the tables for all of it.
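
In outline, the Section 1 driven table lookup described above might look
something like the following sketch; the field names and directory naming are
illustrative only, not ECMWF's actual convention:

    from pathlib import Path

    def select_tables(section1, table_root):
        # Build a table-set key from Section 1 metadata: master table,
        # originating centre and the table version numbers. The naming
        # scheme here is made up for illustration.
        key = "{:03d}_{:05d}_v{:03d}_l{:03d}".format(
            section1["master_table"],
            section1["originating_centre"],
            section1["master_table_version"],
            section1["local_table_version"],
        )
        path = Path(table_root) / key
        if not path.is_dir():
            # Without the tables, descriptor expansion cannot even begin.
            raise FileNotFoundError("no tables installed for " + key)
        return path
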
Regards
Milan
Milan Dragosavac
ECMWF
Shinfield Park, Reading, Berkshire, RG2 9AX, UK
Tel: (+44 118) 949 9403
Fax: (+44 118) 986 9450
Telex: 847908 ECMWF G
E-mail: milan.dragosavac@xxxxxxxxx