Hi David
> picture of the future of WCS, and what was horrific about it. Btw, WFS
> has the same deficiency as WCS when it comes to predicting how big the
> response will be; that's a function-point I'd sure like to see in
> those web services.
... and OPeNDAP has the same problem ... except that if you know enough to use
an OPeNDAP interface, you know enough to calculate the size of the response ...
but yes, I think this is a big issue!
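To make the point above concrete: an OPeNDAP server advertises each variable's
shape and type up front (in the DDS), so the size of a subset response is just
arithmetic. A minimal sketch — the function name and the example shapes are
purely illustrative, not from any real dataset:

```python
def response_size_bytes(dim_lengths, itemsize):
    """Bytes of data payload for a hyperslab request.

    dim_lengths -- length of each requested dimension slice
    itemsize    -- bytes per element (e.g. 4 for a 32-bit float)
    """
    size = itemsize
    for n in dim_lengths:
        size *= n
    return size

# e.g. a Float32 field subset to 10 times x 73 lats x 96 lons:
print(response_size_bytes([10, 73, 96], 4))  # 280320 bytes
```

WCS and WFS clients, by contrast, have no such structural metadata to do this
calculation from before the response arrives.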
> standards. OPeNDAP and CF/netCDF already qualify as mature, effective
> standards, so I wouldn't recommend changing them just to bring them
> into OGC. .... As to this being "just publicity" as Bryan suggests, that
> seems to me
> to disregard the value of open community participation and cross-
> fertilization of ideas that take place within the OGC community and
> processes.
The problem is this: either we have community participation in *standardising*
or we don't. If we don't, then what is the standardisation *process* for? And if
one doesn't envisage allowing the community to modify the candidates, then why
have a community process at all?
I think it's important for ALL standardisation communities to recognise
well-characterised and well-governed "standards" (whatever that means) from
other communities, rather than take on managing everything for themselves.
So, to reiterate my point, which was obviously less clear than it ought to have
been (given some private email I have received), and to give some context to
where I am coming from:
- I clearly believe OGC standards and procedures have lots to add for some
tasks, but
- I think that netCDF is well characterised, and, via its dependency on HDF
(at v4.0), rather difficult to cast in stone as something that should be
*defined* (standardised) by an OGC process.
- I think the CF *interface* could be decoupled from its dependency on netCDF
and become a candidate for OGC governance.
- I think that a host of OGC protocols would benefit from allowing xlink out
to CF/netCDF binary content, *whether or not OGC governs the definition of
CF/netCDF*.
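The xlink idea in the last bullet might look something like the fragment below.
This is only a sketch of the pattern — the element names are illustrative and
not taken from any published OGC schema; only the XLink namespace URI is real:

```xml
<!-- Illustrative only: an OGC-style response that delegates the binary
     payload to a CF/netCDF resource via xlink, instead of requiring the
     OGC document itself to define or carry the format. -->
<CoverageDescription xmlns:xlink="http://www.w3.org/1999/xlink">
  <rangeSet>
    <fileReference xlink:href="http://example.org/dodsC/model/temp.nc"
                   xlink:role="application/x-netcdf"/>
  </rangeSet>
</CoverageDescription>
```

The point is that the OGC protocol governs the *reference*, while the CF/netCDF
community continues to govern the content it points to.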
> Perhaps you're concerned about the potential for reduced
> control over the interface definition, but that's not what will happen
> -- you won't lose control over it. There may be variations and
> profiles for special applications that emerge, but that wouldn't
> require you to change what already works.
Hmmm. I think history demonstrates pretty conclusively that profile
proliferation reduces interoperability to the point that (depending on the
application) it's notional, not rational. I would be concerned if we profiled
CF in the same way as, for example, one NEEDS to profile GML (which is not to
say I don't believe in GML for some applications; we're heavily into doing
exactly that on other fronts) ... but really, we have to think of profile
proliferation as standard proliferation ...
> I apologize immediately if I've missed or misrepresented any of the
> issues with CF/netCDF or OPeNDAP. Please take this at face value. At
> the end of the day, I just want to see stronger relationships and
> stronger technology. And I think the relationships, personal and
> institutional, matter more than the technology, because having better
> relationships will lead to better solutions, whatever technology is
> chosen.
I'm sure we're all on the same page here ... and we just need to spell out the
details to each other.
Most folk know I'm in favour of exploring how an OGC relationship can help CF.
What I'm not in favour of is function creep, such that we end up with OGC
taking on HDF and the netCDF bits-and-bytes storage, etc. I jumped in here for
precisely that reason, and that reason alone. I may have muddied the waters
with some other stuff ...
Cheers
Bryan
p.s. we can have the WCS/WFS discussion another day; I don't have time to do
it now ...
--
Bryan Lawrence
Director of Environmental Archival and Associated Research
(NCAS/British Atmospheric Data Centre and NCEO/NERC NEODC)
STFC, Rutherford Appleton Laboratory
Phone +44 1235 445012; Fax ... 5848;
Web: home.badc.rl.ac.uk/lawrence