John,
We have a netCDF subsetting service that the IDV folks plan to use to
read the point data. When the design is complete, the IDV will be able to
provide a widget for a user to enter their parameters, which are then
passed to the subsetting service.
http://www.unidata.ucar.edu/projects/THREDDS/tech/interfaceSpec/StationDataSubsetService.html
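To give a feel for what the IDV widget would send, a subset request is just an HTTP GET with the query encoded in the URL. The parameter names below (stn, time_start, time_end) are illustrative guesses; see the spec above for the actual interface.

```java
// Sketch of building a station-subset request URL.  The parameter
// names here are assumptions, not the real spec.
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SubsetRequest {
    public static String build(String base, String stn, String start, String end) {
        // Encode each value so station ids and ISO times survive the URL.
        return base + "?stn=" + URLEncoder.encode(stn, StandardCharsets.UTF_8)
             + "&time_start=" + URLEncoder.encode(start, StandardCharsets.UTF_8)
             + "&time_end=" + URLEncoder.encode(end, StandardCharsets.UTF_8);
    }
}
```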
Currently John Caron has implemented a METAR subsetting service on the
THREDDS side. The service is implemented by StationObsServlet, along with
StationObsCollection.java and QueryParams.java, which I'll attach so
you have a sort of template to work from. The METAR service reads from
a netCDF file, but you will have to change the code so it reads from
your MySQL database instead. This should at least get you started in the
right direction. Sometime in the future you will have to get all the
source code; it's not available on the web because we don't want hackers
reading the code looking for break-in spots.
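As a rough sketch of that swap, here is what the database read might look like with JDBC in place of the netCDF read. The table and column names (metar_obs, station_id, obs_time, report) are hypothetical; substitute your actual schema.

```java
// Sketch of replacing the netCDF read in a StationObsCollection-style
// class with a MySQL query via JDBC.  Schema names are invented.
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class StationObsQuery {
    // Build the parameterized SQL once so it can be reused (and tested).
    public static String sql() {
        return "SELECT station_id, obs_time, report FROM metar_obs "
             + "WHERE station_id = ? AND obs_time BETWEEN ? AND ? "
             + "ORDER BY obs_time";
    }

    // Run the query; the caller supplies an open Connection (e.g. from
    // a DataSource configured for the MesoWest database).
    public static List<String> fetch(Connection conn, String station,
                                     Timestamp start, Timestamp end) throws SQLException {
        List<String> reports = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(sql())) {
            ps.setString(1, station);
            ps.setTimestamp(2, start);
            ps.setTimestamp(3, end);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) reports.add(rs.getString("report"));
            }
        }
        return reports;
    }
}
```

The servlet side (query-string parsing, response formatting) would stay essentially as it is in the attached files.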
Robb...
On Wed, 13 Jun 2007, John Horel wrote:
Robb-
We use MySQL for storing the data. So, the goal would be to have a widget in
the IDV that issues a MySQL query to our database. Basically, when someone
wants to sync surface obs to radar or whatever else in the IDV, the widget would
create a query, and then we serve up the data in a way that is compatible with
the IDV. We can handle all the query software and serving it up, but we'll
definitely need some help with being IDV/THREDDS compliant.
John
Robb Kambic wrote:
John,
Congrats on getting the equipment grant, now you should have the disk space
to store your data.
It's been a while since our conversation at the AMS about serving up your
surface obs using THREDDS and the IDV. Could you elaborate on how you
expect it to work, what type of displays you need, and what type of software
you are using? This would give us some idea of how to approach the problem.
I did a prototype implementation that used the db4o database to store the
METAR reports and distribute them through the THREDDS server, but db4o's
license was too restrictive, so I dropped the effort. At this point, I
don't know whether any of that work would help yours.
Thanks,
Robb...
On Fri, 8 Jun 2007, John Horel wrote:
Robb-
Got our letter yesterday saying we're funded by Unidata to move forward
on the IDV hook to MesoWest. How have things progressed on your end as far
as having the capability to query a surface-ob database from within the IDV?
Regards,
John
Robb Kambic wrote:
On Wed, 14 Feb 2007, John Horel wrote:
Robb-
Starting to put together the equipment proposal to Unidata, which, in
part, would include a data server to serve up mesonet observations. So,
I'd like to follow up on our conversations in San Antonio as far as
possible approaches go. I'm still a bit fuzzy as to what would be the best
way to do it. My preference would be to develop a query directly against our
database rather than having to store the data in an externally hosted
format; that way we don't have to keep around all 10 years of data in a
netCDF file format or some such. Have you had a chance to proceed with
your middleware to handle METARs? And could you describe that approach
a bit more for me?
Regards,
John
John,
Your timing is perfect; yesterday our group had a meeting on this. In
fact, over the last month I created a realtime database to store the METARs.
At this time there is a simple URL servlet interface that permits one to
make queries against it, so your idea of maintaining the data in a
database is the correct approach. I'm just starting on a general
station observation dataset adapter that would create the link from some
data repository, i.e., a database, into the Java netCDF library, and then the
IDV. The station observation dataset adapter would get enough info so it
could know how to query the proper data repository. At this time, I don't
know the type/amount of info needed. I'll keep you informed on our progress.
Do you keep all the data in one database? I was planning on creating a
new database file daily for data that is 3 days old, keeping only
~3 days of data in the realtime database. This way performance would be
good for the realtime requests, while archive requests would require
opening the daily database files.
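The realtime/archive split above could be routed with something like the following; the ~3-day window and the daily file naming (metar-YYYYMMDD.db) are just the numbers from this note, not a fixed design.

```java
// Sketch of routing a request to the realtime database or a daily
// archive file, based on the age of the requested observation date.
// The cutoff and file-naming scheme are assumptions.
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class ObsStore {
    static final int REALTIME_DAYS = 3;

    // Decide which store serves data for the given observation date,
    // relative to "today".
    public static String storeFor(LocalDate obsDate, LocalDate today) {
        if (!obsDate.isBefore(today.minusDays(REALTIME_DAYS))) {
            return "realtime";
        }
        return "metar-" + obsDate.format(DateTimeFormatter.BASIC_ISO_DATE) + ".db";
    }

    // Convenience overload taking ISO-8601 date strings.
    public static String storeFor(String obsIso, String todayIso) {
        return storeFor(LocalDate.parse(obsIso), LocalDate.parse(todayIso));
    }
}
```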
Machine requirements guess...
The THREDDS Data Server would have to be installed on a machine with
enough CPU power to handle the volume of requests, plus good network
connectivity. I'm sure you have a better handle on the disk space
requirements. You might want to look at the THREDDS page for TDS
requirements:
http://www.unidata.ucar.edu/projects/THREDDS/
Robb...
==============================================================================
Robb Kambic Unidata Program Center
Software Engineer III Univ. Corp for Atmospheric Research
rkambic@xxxxxxxxxxxxxxxx WWW: http://www.unidata.ucar.edu/
==============================================================================