Working Towards a Windows DLL

Windows DLL versions of the netCDF C library have usually come last on our list of supported platforms. This is partly because Windows has been uncommon among our science users, but also because we mostly don't develop on Windows, and it is significantly different from our usual development platforms.

Even so, we released Windows DLLs for almost all of our versions, but that system broke down after the 4.0 release. We have not done a Windows DLL release since then, and that is too long a gap.

To try to bring Windows into the mainstream of netCDF testing and development, we are now using Cygwin, with the mingw32 libraries, to build netCDF DLLs. I just got this working last week. The results can be seen on the daily snapshot page (you must be logged in to the Unidata website to view it):

https://www.unidata.ucar.edu/software/netcdf/builds/snapshot/

Carson is a Cygwin machine, and the builds there with the description field "no OPeNDAP DLL build on Cygwin" are the DLL-building tests. (The other builds on Carson are just straight Unix-style builds under Cygwin.)

The current results show that this is working for the non-netCDF-4 version of the DLL. But what do I mean by "working"? All tests pass, and I am left with a DLL, so that's good. But the DLL is not the whole story: Microsoft developers also need a .lib import library, which describes the DLL. There is a way to produce this with my daily build too, and I still have to get it working.
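
For context, one common way to produce such an import library is with the binutils dlltool that comes with Cygwin and mingw32. This is just a sketch of the general technique, not necessarily what the daily build will end up doing, and it assumes an export list (netcdf.def) is already available:

    # Build a Microsoft-style import library (netcdf.lib) for the DLL.
    # netcdf.def is an assumed export list; the daily build may generate it differently.
    dlltool --input-def netcdf.def --dllname netcdf.dll --output-lib netcdf.lib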

There is also the problem that the DLL is named cygnetcdf.dll when it should be named netcdf.dll. That is a simple rename, but I have to look into why the tools add the "cyg" prefix in the first place, to see whether it matters.

But, though much work remains to be done, I am happy to be producing a netCDF DLL as part of my daily snapshot. All this is much easier than it used to be, thanks to steady improvement in Cygwin and mingw32. To build this DLL, configure must be run with the options --enable-dll and --target=mingw32.
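
For anyone who wants to reproduce this, the rest of the build is the usual autoconf sequence. Here is a minimal sketch, assuming a netCDF source tree on a Cygwin machine with the mingw32 libraries installed:

    # Configure for a DLL build targeting mingw32, then build and run the tests.
    ./configure --enable-dll --target=mingw32
    make
    make check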

The next step is to get an automated binary release of the DLL, so that I can easily get it out to the users. In the meantime, if anyone would like to give it a try, email me and I will send it. This is the C API only, without netCDF-4 or OPeNDAP.

The end goal is to produce netCDF-4- and OPeNDAP-capable DLLs as part of the normal build process, with the same autoconf/automake tool set we use for all other platforms.

I Heart Valgrind

How do I love Valgrind? Let me count the ways.

Valgrind is a neat little tool that replaces the system's memory-handling routines with specially instrumented ones that keep track of everything you do with memory. Then, if you allocate memory and don't free it, Valgrind can tell you.
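
Running it is about as easy as it gets. Here is a minimal sketch of the kind of command I use on one of the test programs; the program name is just an example:

    # Run a test program under Valgrind and ask for a full report on any leaks.
    valgrind --leak-check=full ./tst_files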

All of this will seem unspeakably primitive to our Java programming friends. Sorry to bring up such a barbaric topic as memory management.

Like any such tool, when Valgrind was first used on netCDF code it issued many warnings and error reports. Most turned out to be warnings about memory errors in the test programs themselves (which don't get the kind of attention that the library code does - who tests the tests?). But some of the Valgrind messages pointed to real memory bugs in either HDF5 or netCDF-4.

The HDF5 team has been very proactive in hunting down all the memory problems this process has uncovered, and since 1.8.4 has been tightening up memory handling in HDF5. Meanwhile, I have been doing the same for the netCDF-4 code.

The result is that (in my branch of the repository - soon to be merged into the main branch) there are very few memory leaks of any kind, and almost all the libsrc4 test programs now pass Valgrind with no errors or warnings. These changes will be part of the performance and bugfix release 4.1.2.

I love Valgrind because all the previous tools I've used for this have been rather clumsy. Valgrind is the easiest way to memory-test a program!

Data Format Summit Meeting

Last Wednesday, the Unidata netCDF team spent the day with Quincey and Larry of the HDF5 team. This was great because we usually don't get to spend this much time with Quincey, and we worked out a lot of issues relating to netCDF/HDF5 interoperability.

I came away with the following action items:

  • switch to WEAK file close
  • enable write access for HDF5 files without creation ordering
  • deferred metadata read
  • show multi-dimensional atts as 1D, like Java
  • ignore reference types
  • try to allow attributes on user defined types
  • forget about stored property lists
  • throw away extra links to groups and objects (like Java does)
  • work with Kent/Elena on docs for NASA/GIP
  • HDF4: the netCDF v2 API writes as well as reads HDF4 files. How should this be handled?
  • John suggests not using EOS libraries but just recoding that functionality.
  • the HDF5 team will release a tool for those in the big-endian wasteland; it will rewrite the file.
  • store the software version in the netCDF-4 file somewhere, in a hidden attribute.
  • use the HDF5 function to find the file type; this supports the user block
  • read the GIP article
  • update the netCDF Wikipedia page with format compatibility info
  • data models document for GIP?

I have been assured that this blog is write-only, so I don't have to explain any of the above, because no one is reading this! ;-)

The tasks above, when complete, will together add up to a lot more interoperability between netCDF-4 and existing HDF5 data files, allowing netCDF tools to be used on HDF5 files.

NetCDF Presentation at HDF5 Workshop

This week I am attending the HDF5 workshop in Champaign, Illinois. I am learning a lot of interesting things about HDF5, and I gave a presentation on netCDF, which is now available on the netCDF web site for those who are interested:

Hartnett, E., 2010-09: NetCDF and HDF5 - HDF5 Workshop 2010.

It's great to see the HDF5 team again!

NPP Data and HDF5 Reference Type

The NPP satellite mission will produce its data in HDF5. There is great interest in seeing these data through the netCDF API, so that netCDF tools can work with NPP data.

At the HDF5 workshop this week, Elena gave me a sample file that is like the NPP files we will eventually see.

NetCDF can read HDF5 files as long as they follow certain rules, but the NPP files don't follow all of those rules. In particular, they use the HDF5 reference type, which is currently not handled by netCDF. My plan is to have netCDF not give up when it runs into a reference object.
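
As a simple illustration of what "working with netCDF tools" means here: a netCDF-4 build of ncdump can already show the header of an HDF5 file that follows the rules, and the goal is for the same thing to work on NPP files, with the reference-typed objects simply skipped rather than causing a failure. The file name below is just a stand-in for a real NPP granule:

    # Show only the header (dimensions, variables, attributes) of an HDF5 file
    # through the netCDF-4 library.
    ncdump -h npp_sample.h5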
