Seventh ILRS AWG Meeting (Lanham 2002)

Minutes ILRS/AWG Workshop
October 3-4, 2002, Lanham MD, USA

Thursday October 3

1. Opening
Welcome by Noomen. Thanks to HTSI for providing the location and services for the meeting, and to Noll for arranging the meeting information on the ILRS web pages. Approval of agenda (Appendix 1). The names and e-mail addresses of the attendees are listed in Appendix 2.

2. Minutes from previous AWG meeting
Not discussed explicitly. Most of the issues of the meeting in Nice will be discussed again in the current workshop in Lanham.

3. Actions since AWG Nice
The action items of the previous meeting in Nice were reviewed. About half have been fulfilled; the remainder will appear on the action item list coming out of this Lanham meeting.

4. Announcements

4.1. ILRS related presentations
Two presentations that relate to the general activities and/or science results of the ILRS are mentioned here. Both of them took place at the AGU Spring meeting in Washington DC, in May 2002. One is a presentation entitled "The role of ILRS within IGGOS", by Noomen, Appleby and Shelus, describing the overall science achievements of the ILRS and using inputs from many analysts from this community; the other is a poster presentation entitled "Daily Earth Orientation Parameters from satellite laser ranging to LAGEOS and ETALON" by Pavlis, showing the value of using Etalon data for deriving Earth Orientation Parameters (EOPs) in particular.

4.2. IERS Combination Research Workshop
Noomen reminded the audience of the workshop on combination solutions, which is organized by IERS in Munich on November 18-21, 2002; ILRS analysts are encouraged to participate.

Although a little bit off the topic of this agenda item, Noomen also mentioned the founding of the European Geosciences Union (EGU), a new organization which will cover the European Geophysical Society (EGS) and the European Union of Geosciences (EUG); it is intended to take over all activities of the latter two organizations, although these will not cease to exist. The new organization will become official on August 15, 2003.

5. SINEX format
There are no new developments on the SINEX format itself, of which V2.0 is now the official version to be used by the international community.

Husson is developing a file with station biases in this SINEX format (Appendix 3). The format provides room for the "standard" biases like range, time, and frequency. Husson has introduced an additional entry "PBIAS" to allow inclusion of errors in the barometric values (in mb) reported by the stations, which are needed for the atmospheric corrections of the observations. The file will be made available shortly on the ILRS web pages, under the name pub/slr/slrql/slr_data_corrections.snx; older versions and diff files will also be stored in the same directory (with an appropriate date tag in the file names). The file will contain data corrections as reported by the stations only (i.e. "physical-true" numbers, rather than estimates from data analyses), and will provide data corrections for any SLR satellite target. Provisions will be added for "all satellites" and data release versions.

In a similar development, Noll is in the process of converting all station eccentricity values (historic and contemporary) into SINEX format. Problems may arise if the length of the eccentricity vector, originally meant to relate the optical center of a (mobile) laser system to a fixed nearby reference point, becomes unrealistically large (values of several km have been reported) (cf. Appendix 3). A discussion ensued on the options to correct this and the implications of such a change. It was concluded to leave the eccentricity values as they are, but to add a warning in the descriptive files (the station log sheets and the SINEX file in development) when the length exceeds 5 m. Also, results of new surveys of the eccentricity vectors for the Keystone stations and stations 7090 and 7403 have become available. It was decided to use the most recent values for the Keystone systems, but to stay with the 1992 values for stations 7090 and 7403 (the new values do not deviate statistically significantly from the results of older surveys). Finally, all eccentricity values from future surveys are to be reported with a resolution of 0.1 mm (action item Noomen), and new eccentricity entries larger than 15 m are considered unacceptable and should not be treated as eccentricities; instead, a new CDP station id and DOMES number should be requested.
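
As an editorial illustration (not discussed at the meeting), the plausibility check described above could be sketched as follows; the function name and sample values are hypothetical, while the 5 m warning and 15 m rejection thresholds are those agreed here:

```python
import math

# Thresholds agreed at the meeting: warn above 5 m, reject above 15 m.
WARN_LIMIT_M = 5.0
REJECT_LIMIT_M = 15.0

def check_eccentricity(dx, dy, dz):
    """Classify an eccentricity vector (meters, reported to 0.1 mm resolution)."""
    length = math.sqrt(dx**2 + dy**2 + dz**2)
    if length > REJECT_LIMIT_M:
        return length, "reject: apply for a new CDP station id and DOMES number"
    if length > WARN_LIMIT_M:
        return length, "accept, but flag with a warning in the log sheet / SINEX file"
    return length, "accept"

# Hypothetical example: a mobile system surveyed about 7.3 m from its reference marker
print(check_eccentricity(4.1230, 6.0251, 0.1234))
```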

6. Pilot project "harmonization"
Husson reported on the standardization of the QC reports (Appendix 4). The conversion to ITRF2000 coordinates remains an action item for some of the institutes involved (action item Eanes, Noomen,…). Some sites are confused by conflicting QC reports. At this moment CSR and MCC results are included in a global report card. Action item Husson: need to develop a single consolidated bias report. Noomen reported on recent developments to switch to ITRF2000, and Eanes did the same (both to become operational by November 1, 2002).

7. Pilot project "benchmarking and orbits"
Husson gave a brief introduction of this pilot project, which was initiated during the previous AWG workshop in Nice. At this moment, its purpose is to verify the absence of errors or blunders in software and data treatment: for a given test set of observations on LAGEOS-1 and a prescribed dynamic and geometric model, the contributors to this project should be able to produce results (orbits, station coordinates, EOPs) which match within small margins. For a description of the project and the model specification as used during the past months, see http://ilrs.gsfc.nasa.gov/working_groups/awg/awg_pilot_projects/awg_software_benchmarking.html. To get a handle on the cause of significant differences, should they arise, the project currently aims for 3 types of products:

A: where a nominal propagation of the orbit is requested;
B: where the satellite orbit is fitted through the measurements; and
C: where all elements (orbit, network, EOPs) are estimated.

At this moment, contributions from ASI, CRL, CSR, DGFI, GEOS, IAA, JCET, NASDA and NERC have been received. The project will play a role in the generation of an official ILRS product on EOPs and station coordinates (cf. agenda item 8.3).

ASI
(Appendix 5) Luceri gave a presentation on the intercomparison between the various solutions as provided by individual institutes. As for the "A" solutions, the original residuals yielded differences of up to 10 m; possibly the time-tags of the residuals (start time, stop time) may play a role here. CSR has the best long term rms (1.5 m), followed by ASI and NASDA (2 m) and finally NERC, IAA, AUSLIG and JCET (2.8 m).
An intercomparison of pairs of "A" solutions yielded correlation coefficients of more than 0.95, although the pairs involving CSR solutions reached values of only up to 0.77.

As for the "B" solutions, where the orbit was solved for, the agreement was much better, typically down to a few mm for the average (except NASDA, which had a mean residual difference of -17 mm), and slightly larger values for the overall residual rms. The correlation coefficients between residuals of pairs of "B" solutions now proved to be much smaller than in the "A" case (typically < 0.1), with a few exceptions. Luceri also introduced hypothesis testing, using a χ²-test, an F-test and a Student's t-test to check whether the various pairs of residual series had the same distribution, variance and mean, respectively; in general, substantial disagreement was found in the residual distributions due to differences in modeling, whereas good consistency was found in particular for the solutions based on the same software (i.e. GEODYN).
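
For reference, the three tests mentioned can be reproduced with standard statistical tools. The sketch below is an editorial addition (the residual arrays and the function name are hypothetical); it compares two residual series with a two-sample chi-square test on binned residuals, an F-test on the variance ratio and a Student's t-test, in the spirit of the ASI comparison:

```python
import numpy as np
from scipy import stats

def compare_residuals(res_a, res_b, alpha=0.05, nbins=20):
    """Test whether two residual series share distribution, variance and mean."""
    # Chi-square test on a common binning: same distribution?
    edges = np.histogram_bin_edges(np.concatenate([res_a, res_b]), bins=nbins)
    counts_a, _ = np.histogram(res_a, bins=edges)
    counts_b, _ = np.histogram(res_b, bins=edges)
    keep = (counts_a + counts_b) > 0                      # drop empty bins
    _, p_chi2, _, _ = stats.chi2_contingency(np.vstack([counts_a[keep], counts_b[keep]]))

    # F-test on the variance ratio: same variance?
    f = np.var(res_a, ddof=1) / np.var(res_b, ddof=1)
    dfa, dfb = len(res_a) - 1, len(res_b) - 1
    p_f = 2.0 * min(stats.f.cdf(f, dfa, dfb), stats.f.sf(f, dfa, dfb))

    # Student's t-test: same mean?
    _, p_t = stats.ttest_ind(res_a, res_b, equal_var=False)

    return {"same distribution": p_chi2 > alpha,
            "same variance": p_f > alpha,
            "same mean": p_t > alpha}

# Hypothetical residual series (meters) from two analysis centers
rng = np.random.default_rng(0)
print(compare_residuals(rng.normal(0.000, 0.02, 500), rng.normal(0.002, 0.02, 500)))
```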

For the "C" solutions, the agreement came down to the level of 10^-4 m in the residual mean, except for the NASDA solutions. The correlation coefficients were still reasonable for the GEODYN users, but quite small for the others. The hypothesis testing was applied here also, and again the residuals appeared to have different distributions; in the case of the GEODYN residuals, the hypothesis could be accepted only if the rejection level was set high enough.

The second part of Luceri's intercomparison concerned the orbit solutions. Using the CSR solution as a reference, the "A" orbits showed radial differences of up to 0.8 m (in absolute value) for the CRL solution and up to 0.1 m for the other solutions; up to 2.3 m cross-track for the CRL solution, 1.4 m for AUSLIG and JCET, and up to 0.8 m for the other solutions; and about 10 m along-track for the CRL solution and up to 6 m for the others. Velocity components showed similar differences. When converted into Kepler elements, differences of up to 9 m in the semi-major axis were observed for NERC (and CRL as well). Eanes commented that very minor differences in the representation of the initial state vector could easily lead to such effects.
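
Mapping orbit differences into radial, cross-track and along-track components, as done in these comparisons, is a straightforward frame rotation. A minimal sketch (editorial addition; the state vectors shown are hypothetical, not benchmark values):

```python
import numpy as np

def rtn_difference(r_ref, v_ref, r_other):
    """Express the position difference (r_other - r_ref) in the radial,
    along-track and cross-track directions of the reference orbit."""
    r_hat = r_ref / np.linalg.norm(r_ref)              # radial unit vector
    c_hat = np.cross(r_ref, v_ref)
    c_hat /= np.linalg.norm(c_hat)                     # cross-track (orbit normal)
    t_hat = np.cross(c_hat, r_hat)                     # along-track (completes the triad)
    d = r_other - r_ref
    return np.array([d @ r_hat, d @ t_hat, d @ c_hat]) # [radial, along, cross] in meters

# Hypothetical LAGEOS-like position/velocity (meters, m/s)
r_ref = np.array([12270e3, 0.0, 0.0])
v_ref = np.array([0.0, 5700.0, 0.0])
r_other = r_ref + np.array([0.05, 2.3, -0.4])
print(rtn_difference(r_ref, v_ref, r_other))  # ~[0.05 radial, 2.3 along, -0.4 cross]
```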

Again, the situation improved dramatically when the orbits were solved for: the radial, cross-track and along-track differences reduced to 0.3, 0.6 and 0.6 m, respectively (again using the CSR orbit as reference), in the worst case of CRL and NERC solutions. Since the solutions may still include results that are not fully understood, it was decided to store them in a read-protected directory at CDDIS instead of in the current open directory (action item Dube).

No comparison was done for the "C" orbits, nor for the EOP and coordinates solutions (also "C" solutions).

NERC
(Appendix 6) Appleby showed his results concerning orbit and EOP comparisons. For most of the submitted orbits he had computed point-by-point inter-group differences and mapped those differences into the along-track, cross-track and radial directions. For the 'A' orbits the differences were dominated by along-track differences of up to 4 m by the end of the 28-day orbits, with differences of up to 0.5 m in the other directions. Differences in all components between GEOS and JCET were essentially zero. For the 'B' orbits it was clear that different groups had solved for different orbital parameters. Some had (probably correctly) interpreted the benchmark instructions as meaning that only the 4-day along-track accelerations should be solved for, while others (including CRL, NERC, CSR, IAA) estimated additional parameters to determine the best orbit. The agreement within each of these two approaches was mostly very good, with radial differences at the one or two cm level. Some diurnal periodic behaviour was noticed in some of the differences. The IAA "B" and "C" orbit solutions showed cm-level discontinuities in the radial, cross-track and along-track directions every four days.

For the EOPs, for those groups that submitted 'C' solutions in SINEX files, Appleby differenced their results from the a-priori C04 series. Here, the results for the DGFI and NERC solutions suggest an overconstraining of these parameters (action item analysts (update analyses), Husson/Pavlis (update description)).

HTSI
(Appendix 7) A very detailed intercomparison was done by Husson. He presented a list of items that could possibly be used for a pass/fail judgement of an arbitrary benchmark solution (summarized in a so-called "benchmark report card"). The suggested items are listed in the appendix.

As for orbits, Husson compared only the radial component of the different solutions (A,B,C). In all solutions there were significant differences in the radial direction. GEODYN users showed the best internal agreements. The differences in the radial direction are believed to be caused by modeling differences.

Another comparison item was the residuals, not so much on a statistical basis as was done by ASI but on a measurement-by-measurement basis. Husson noted that (for some unexplained reason) JCET and GEOS provided residuals that occasionally went up to 30 cm (for the first observation after midnight); both are GEODYN users. ASI, another GEODYN user, does not have this midnight-crossing residual problem. Also, JCET and GEOS had this problem in their "C" solutions only, whereas their "A" and "B" products were OK. The NASDA "B" solution had 50 cm residuals on the first 3 normal points on November 1; their "A" solution did not exhibit this problem. Finally, the NERC residual scatter increased from the "B" to the "C" solution.

Husson also compared the reported refraction corrections. He found differences of up to several mm, and noted that the values are also reported with different numbers of significant digits (fine-tune "format" description; action item Husson/Pavlis). Related to this, Pavlis noted that a shortcoming in the number of significant digits for meteorological information in GEODYN (in TDF) was detected recently, which may introduce mean refraction differences as large as 2 mm (action item GEODYN users). Differences in the mapping function (Mendes vs. the one implicit in Marini-Murray) may also lead to differences of up to 2 mm (at low elevations).
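
To illustrate the sensitivity to meteorological resolution (an editorial sketch, using only the dominant term of the Marini-Murray formulation, A ≈ 0.002357 P + 0.000141 e in meters, with P and e in mbar): truncating the surface pressure by roughly 1 mbar already maps into about 2 mm of zenith refraction, of the order of the differences quoted above. The values below are hypothetical:

```python
# Illustrative only: the dominant (pressure) term of the Marini-Murray
# zenith range correction, A = 0.002357*P + 0.000141*e  [m],
# with P the surface pressure and e the water vapour pressure, both in mbar.
def zenith_term_a(pressure_mbar, e_mbar):
    return 0.002357 * pressure_mbar + 0.000141 * e_mbar

p_full = 1013.27      # pressure as measured, full resolution
p_rounded = 1013.0    # pressure truncated to fewer significant digits
e = 8.5               # hypothetical water vapour pressure

diff_mm = (zenith_term_a(p_full, e) - zenith_term_a(p_rounded, e)) * 1000.0
print(f"zenith correction difference: {diff_mm:.2f} mm")  # ~0.6 mm for a 0.27 mbar truncation
```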

The modeling of relativistic effects on the measurements appeared to be done consistently at better than 0.1 mm, except for GEOS, which appears to have a problem with this computation (the effect is range-dependent).

Lessons learned: a clearer definition of the model standards is needed, as well as a better specification of the resolution of the various parameters. Among his recommendations were (i) every analyst must QC their own solution, (ii) all range corrections have to be reported at the 0.1 mm level, (iii) analysts should verify whether the problems reported here could affect their contributions to other (pilot) projects as well, (iv) the benchmarking results/presentations are to be put on-line (action item Husson), and (v) the findings reported today must also be distributed to the contributors not present here in Lanham (action item Husson). Future activities can involve a separation between the "benchmark" and the "orbits (product)" elements (as it was originally), the application of the SP3 format, and others. Action item analysts: re-analysis for the benchmark project.

DGFI
(Appendix 8) Müller reported on the DGFI activities related to this pilot project. DGFI followed the prescribed dynamic and geometric models, but was also forced to deviate on a number of topics: AT, CR, the mapping function, the model for representing C2,1/S2,1 effects, the models for ocean loading, ocean tides and planetary ephemerides, and the integration step-size. These deviations will show up in various elements (and products) of this benchmarking test. Müller concluded that software modifications are needed.

As a result of this intercomparison, it was concluded that the description of the benchmark project needs a second look (in particular for the description of "orbit" and "orbit parameters" and the EOP constraints). Also, it was decided to add a "D" solution, which would allow the analysts to use the best possible modeling of the satellite dynamics (rather than following the model as prescribed by the AWG). The "benchmarking" description is to be updated and distributed before October 15 (action item Husson/Pavlis), whereas analysts have to do their re-analysis before November 30 (action item analysts). Husson and Pavlis will develop pass/fail criteria for the "C" solution by December 15 (action item Husson/Pavlis).

8. Pilot project "positioning and earth orientation"
Noomen began with some introductory remarks: the development of the Etalon intensive tracking campaign, the embedding of this within the overall list of tracking priorities, the definition and purposes of the "pos+eop" analyses, in particular the B solutions, etcetera.

8.1. Status reports

ASI
(Appendix 9) Luceri presented the most recent developments of the Italian analyses for this project, in particular the new AA, BB3 and BB4 solutions. She concluded that the selection criterion established during the previous meeting in Nice (i.e. a minimum of 30 NPs for LAGEOS-1 and -2 each, per 28-day data period) turned out to be too strict: in some cases up to 40% of the observations had to be rejected a priori. Luceri decided to use the criterion "a minimum of 30 NPs for LAGEOS-1 and -2 together" instead, which proved satisfactory. For the "AA" solutions, no range biases were estimated at all (for the "Core" and "Contributing" stations; "Associate" stations were ignored altogether). As a result, the network solutions appeared to become much more stable, as witnessed by the results for the Helmert transformations w.r.t. ITRF2000. The quality of the EOP solutions did not change significantly when comparing the "AA" solutions with the previous "A" solutions (wrms of 0.30 mas for x/y-pole w.r.t. IERS C04). No specific details were reported for the new BB solutions.
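
The stability of a network solution w.r.t. ITRF2000 is typically judged through a 7-parameter Helmert transformation. The least-squares sketch below is an editorial addition; it follows the usual linearized small-rotation model, and the test coordinates are hypothetical:

```python
import numpy as np

def helmert_7(x_itrf, x_sol):
    """Estimate translation (3), scale (1) and rotation (3) parameters that map
    x_itrf onto x_sol with the linearized model x_sol ~ x_itrf + T + D*x_itrf + R*x_itrf.
    Coordinates are (n, 3) arrays in meters; returns [Tx, Ty, Tz, D, Rx, Ry, Rz]."""
    n = x_itrf.shape[0]
    A = np.zeros((3 * n, 7))
    for i, (x, y, z) in enumerate(x_itrf):
        A[3*i:3*i+3, 0:3] = np.eye(3)            # translations Tx, Ty, Tz
        A[3*i:3*i+3, 3] = [x, y, z]              # scale D
        A[3*i:3*i+3, 4:7] = [[0.0,  z, -y],      # small rotations Rx, Ry, Rz (radians)
                             [-z, 0.0,  x],
                             [ y, -x, 0.0]]
    l = (x_sol - x_itrf).ravel()
    params, *_ = np.linalg.lstsq(A, l, rcond=None)
    return params

# Hypothetical check: recover a 10 mm z-translation and a 1 ppb scale offset
rng = np.random.default_rng(1)
x_itrf = rng.uniform(-6.4e6, 6.4e6, size=(20, 3))
x_sol = x_itrf * (1.0 + 1e-9) + np.array([0.0, 0.0, 0.010])
print(helmert_7(x_itrf, x_sol))   # translations in m, scale unitless, rotations in rad
```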

CRL
(Appendix 10) On behalf of Otsubo, Appleby reported on the most recent developments at CRL. CRL also questions the usefulness of the "30 and 30" criterion, and reported "not being happy" with fixing the range biases at zero (required for the "Core" stations). For the near future, the estimation of EOPdot parameters and the completion of the benchmark test are planned.

JCET
(Appendix 11) Pavlis reported on a range of issues. One is the development of software for the benchmark contributions. A, B and C solutions were generated for the 28-day arcs in October and November 1999.

For the "pos+eop" project, AA solutions were generated (1999), as well as BB3 and BB4 solutions covering the first 6 months of 2002, just before the beginning of this workshop. In addition, Pavlis also derived a so-called BB0 solution, which is based on Etalon data only. As for the addition of Etalon data, Pavlis showed that the quality of LOD solutions improved from 0.162 ms for a LAGEOS-only solution to 0.099 ms when the Etalon normal equations were added with a weight of 0.25 times nominal (rms differences w.r.t. IERS C04); for x/y-pole no improvements were observed when adding the Etalons. JCET only generated solutions that include the EOPdot parameters.
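
Adding the Etalon normal equations with a reduced weight, as in the JCET test, amounts to a weighted stacking of normal equations. A minimal sketch (editorial addition; the matrix names are hypothetical, 0.25 is the weight quoted above):

```python
import numpy as np

def combine_normals(n_lageos, b_lageos, n_etalon, b_etalon, w_etalon=0.25):
    """Stack two sets of normal equations N*x = b, applying a relative weight
    to the second contribution (0.25 was the Etalon weight quoted by JCET)."""
    n_comb = n_lageos + w_etalon * n_etalon
    b_comb = b_lageos + w_etalon * b_etalon
    return np.linalg.solve(n_comb, b_comb)
```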

NERC
(Appendix 12) Appleby reported on developments in the SATAN software (the addition of the Ray 99.2 ocean tides model, the correction of an error in the computation of a partial, developments of PERL script files). NERC contributed AA (1999) and BB (2001 April - 2002 July) solutions. As for the BB4 solutions for 2001, the translation parameters w.r.t. ITRF2000 are consistently smaller than 15 mm in all 3 components. The height series for some stations showed some 20 mm-level smooth variations.

8.2. Comparisons and combinations

CSR
(Appendix 13) Eanes started his presentation by addressing the technique for constraining. He prefers to apply a minimum constraint rather than a loose constraint (i.e., for a specific solution: the mean difference of the x/y-pole solutions w.r.t. IERS C04 is zero, and the mean longitude shift w.r.t. ITRF2000 is zero, the latter applied to a subset of reliable stations only). In doing so, 3 very small eigenvalues (expected from the problem geometry and identified earlier) are removed, and CSR ends up with EOP and station coordinates with realistic a posteriori formal uncertainties, as if no singularities were ever present (i.e. at the level of a few mm). Following up on a problem with the CSR solutions that was reported at the previous meeting in Nice, Eanes explained it as an error made when adding a so-called null-space matrix. As for the resulting EOPs, the x/y-pole solutions should agree with IERS C04 at better than 0.2 marcsec.
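
One common way to realize such a minimum constraint is to constrain only the directions spanned by the near-zero eigenvalues of the normal matrix, leaving the well-determined directions untouched. The sketch below is an editorial illustration of that idea, not the actual CSR implementation (which constrains the mean x/y-pole difference w.r.t. IERS C04 and the mean longitude shift w.r.t. ITRF2000):

```python
import numpy as np

def apply_minimum_constraint(n_matrix, b_vector, n_defect=3, sigma_c=1e-4):
    """Constrain only the directions of the n_defect smallest eigenvalues of the
    normal matrix (e.g. the 3 near-singularities noted by CSR), then solve."""
    w, v = np.linalg.eigh(n_matrix)       # eigenvalues in ascending order
    null_space = v[:, :n_defect]          # directions of the (near-)rank defect
    # Pseudo-observation null_space^T * x = 0 with standard deviation sigma_c
    n_constrained = n_matrix + (null_space @ null_space.T) / sigma_c**2
    return np.linalg.solve(n_constrained, b_vector)
```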

Eanes did not agree with the experience and/or remarks reported several times by other analysts about the difficulty of decoupling range biases and station heights.

He has extended the time-span of his "A" solutions so that they now cover the 1993-2001 time frame. As an example of their quality, these solutions show an origin shift in the y-direction of less than 5 mm w.r.t. ITRF2000 (including a small linear trend).

Agenda item 8.2 was continued on the next day.

Friday October 4

HTSI
(Appendix 14) Husson reported on an extensive comparison of A and AA solutions. He started off with an overview of statistics on number of passes and qualitative performance of the individual stations. Next, he showed that the quality of the vertical position time-series solutions has improved from 12-31 mm (A series) to 5-9 mm (AA series). The single exception appears to be the JCET solutions, which went from 3 to 7 mm repeatability. This improvement is also visible in the individual solutions of an arbitrary analysis center, when looking at successive solutions (e.g. ASI: solA.v1 -> solA.v3 -> solAA.v1). The "Core" height time-series from the different analysis centers are converging to the 1 cm level. Most of this improvement is attributed to elimination of modeling problems and using a more uniform data treatment.

Next, the data treatment by the various institutes (both for the A and AA series) was summarized. Although there has been much debate over range biases in past workshops, in particular on their correlation with station height, Husson showed that the range bias estimates from individual analysis centers, when averaged over an entire year, are typically consistent to a few mm. This provides some credence that the biases are "real" to about the 5 mm level. Part of the error in the bias estimation is due to satellite signature and an error in GM. Appleby commented that the satellite signature, which is station dependent, is typically smaller than 5 mm. Analysts are encouraged to apply the proper center-of-mass correction for the LAGEOS satellites for each site, which can vary between 244 and 252 mm (action item Appleby/Otsubo).

There is one problem with assuming no biases for the "Core" and "Contributing" stations: if there is a real bias at a site, then the height of that site will be in error, relative to the no-bias situation, by about 1.2 times the true range bias. Zimmerwald, which had a -18 mm range bias, was shown as an example. Husson showed how the Zimmerwald height estimates differed significantly depending on whether a range bias was estimated or applied a priori, versus assuming zero bias. This analysis supports a significant error in the Zimmerwald ITRF2000 height: it appears to be too high by at least 10 mm, which may also explain the apparently dubious local tie between the Zimmerwald SLR and GPS systems.
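
The approximately 1.2 scale factor between an unmodeled range bias and the resulting height error can be illustrated directly with the Zimmerwald numbers quoted above (editorial sketch; the factor and the bias value are taken from the presentation):

```python
HEIGHT_PER_BIAS = 1.2     # approximate mapping of an unmodeled range bias into station height

def height_error_mm(true_bias_mm):
    """Approximate station height error when a real range bias is forced to zero."""
    return HEIGHT_PER_BIAS * true_bias_mm

print(height_error_mm(-18.0))   # Zimmerwald: a -18 mm bias maps to roughly -22 mm in height
```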

Some of the lessons learned are (i) modeling errors affect station coordinate solutions, and (ii) range biases play an important quality control role. Analysts are also encouraged to use the data correction files in order to correct known errors in the observations (action item analysts).

IGN
(Appendix 15) Altamimi has made comparisons of the various solutions provided for the "pos+eop" pilot projects, aimed at an assessment of the quality of these solutions. He focused on geocenter, scale variations and EOP consistency, in particular. In order to do so, the CATREF software has been upgraded to also handle EOP solutions and implement a minimum constraint option.

As for the "A" solutions, the "monthly" solutions appear to be internally (i.e. within a time-series for a particular institute) consistent in geocenter x, y and z at the level of ±5, ±5 and ±12 mm, respectively. Expressed in length units, the scale of the network solutions is consistent at the level of 5 mm also. As for the EOPs: when using the averaged SLR solution as a reference, the ASI and CSR solutions for x/y-pole are consistent at about 0.5 marcsec, and the DGFI and JCET solutions at about 1.0 marcsec.

A similar analysis was done for the "B" solutions. The conclusions for the B1 and B2 solutions are identical to the ones reported above for the "A" solutions (the Etalon data appears to have no influence on the station coordinates or the standard EOP products x/y-pole and UT). Also, Altamimi concluded that SLR can deliver a good estimate for LOD. However, the EOP quality of the B3 and B4 solutions appears to degrade by about a factor 2 (i.e. reach the level of about 1 marcsec); the EOPdots appear to have a consistency of about 1 marcsec/day. Altamimi seriously questioned the usefulness of estimating the time-derivatives for the x/y-pole parameters (but the value of estimating LOD parameters is beyond any doubt).

Another effort to assess the quality of the ILRS pilot project solutions was done by combining them with solutions obtained by other geodetic techniques. The residual statistics, reported for x/y-pole only, are 0.4 marcsec for SLR, 0.1 for GPS, 0.5 for VLBI and 1.0 for DORIS. These values are to be interpreted with some care: for VLBI and DORIS only one original solution was available, whereas SLR and GPS contributed 4 and 7 solutions respectively (first converted to a single-technique combined solution). The results strongly suggest that the relative technique weighting was not optimal or realistic, since certain parameters/techniques clearly yielded statistics "below expectations" (the GPS solution dominates the combination product, and hence the residual statistics).

A similar analysis was done for one 28-day period in 2001, but the comparison of station coordinates is hampered by the limited overlap between the VLBI and SLR networks in particular (the GPS network is by far the largest, and overlaps well with any other technique).

Overall conclusions: (i) rate estimates of x/y-pole degrade the overall results, and (ii) x/y-pole and LOD agree well with the values delivered by other techniques. Altamimi would also welcome further 10-year AA solutions from the analysis groups.

8.3. Future
Several issues were discussed under this agenda item.

Etalon vs. LAGEOS
The ILRS initiated an intensive Etalon tracking campaign in April 2001 at the request of the AWG. This campaign was extended to October 1, 2002, but the ILRS Governing Board (GB) was clearly not fully convinced about its usefulness, considering the preliminary analysis results that were available during the Nice workshop in April 2002 (improvements of up to 5% at best for coordinates and/or x/y-pole and UT, probably due to the increase of the number of observations). However, later analyses have clearly revealed a significant improvement in the determination of LOD (JCET) over the quality that can be obtained with LAGEOS data only. It is beyond any doubt that the Etalon satellites should be tracked with the priorities as set in the tracking campaign request. The AWG is concerned about the reduction of the number of observed passes on these satellites during the recent months, but it was commented that this is likely due to weather. Considering its importance, the AWG will ask the GB to convert the current tracking priority of the Etalon satellites into a permanent one, i.e. change the "campaign" qualification into a "permanent" qualification (action item Noomen).

EOP vs. EOPdots
Another discussion item was the usefulness of the EOPdot additional products. For the time being, it was concluded to continue with the BB1-4 solutions, in order to expand the evidence on which to base definitive conclusions. The analysts are encouraged to continue doing so, and to fine-tune their models and procedures where required. In addition, a new solution "BB5" has been added, in which the following EOP products have to be delivered: x/y-pole and LOD. In an initial discussion the UT component was also included here, but this parameter dropped off the list after further discussion later in the afternoon. As for the satellite input, the minimum for BB5 is of course the LAGEOS satellites. The analysts are strongly encouraged to also include the SLR observations of the Etalon satellites, as a precursor to the actual operational combination product, and to experiment with the relative data weighting of LAGEOS vs. Etalon.

A priori data selection
The reason for having an a priori data selection is the avoidance of weak data and the resulting unstable or otherwise poor normal equations or covariance matrices. Based on the presentations given during this workshop, it was concluded that the requirement of 30 NPs for LAGEOS-1 and -2 each was too stringent. In practice, the requirement of a combined total of 30 NPs for both satellites should be sufficient (this corresponds to at least 3-4 passes per station; individual solutions for LAGEOS-1 and LAGEOS-2 are no longer generated).

At this moment, a discussion is still ongoing about station qualification (which depends on data yield and data quality, to be judged by the AWG). Irrespective of the outcome of this discussion (which may take weeks or years), the AWG has decided to use the categorization that can be made at this moment (Appendix 16), which is based on the so-called Shanghai criteria proposed in 1996 (and agreed upon for this purpose by the AWG at the 2002 Nice meeting). The following rules will apply for both the "AA" and "BB" products of the "pos+eop" pilot project: (i) for "Core" stations, no range biases are to be estimated; in case there is evidence for a physical range bias, this may be applied, but we do not want to weaken the determination of station height (and other aspects of the overall solution) by solving for biases; (ii) for the "Contributing" stations, the analysts are free to solve for range biases or not; here too, if there is evidence for a physical bias, it must be applied; and (iii) the analyst is free to do whatever he/she judges best for the other stations (include them or not, solve for range biases or not), under the condition that the data weight must be 0.001 times the (average) data weight of the "Core" stations (data weight = 1/Fobs). The "30 NPs" rule always applies.
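
The editing and weighting rules agreed above can be summarized in a small filter. The sketch below is an editorial illustration; the station records are hypothetical, while the 30 NP threshold and the 0.001 weighting factor are the values adopted by the AWG (the "Core"/"Contributing" bias-handling rules are analysis choices that a simple filter cannot capture):

```python
MIN_COMBINED_NPS = 30          # LAGEOS-1 + LAGEOS-2 together, per 28-day period
OTHER_WEIGHT_FACTOR = 0.001    # relative to the (average) weight of the "Core" stations

def select_and_weight(stations, core_weight=1.0):
    """Apply the AWG editing and weighting rules to a list of station records,
    each a dict with 'id', 'category' ('Core'/'Contributing'/other) and
    'nps' (combined LAGEOS-1/2 normal points in the 28-day period)."""
    kept = []
    for s in stations:
        if s["nps"] < MIN_COMBINED_NPS:
            continue                               # the "30 NPs" rule always applies
        weight = core_weight
        if s["category"] not in ("Core", "Contributing"):
            weight = OTHER_WEIGHT_FACTOR * core_weight
        kept.append({**s, "weight": weight})
    return kept

# Hypothetical station records (ids and numbers are made up)
print(select_and_weight([
    {"id": "STA1", "category": "Core", "nps": 412},
    {"id": "STA2", "category": "Other", "nps": 55},
    {"id": "STA3", "category": "Other", "nps": 12},   # rejected: fewer than 30 NPs
]))
```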

Request for Proposals
Noomen, Appleby and Shelus have drafted a so-called Request for Proposals, to start the official process of generating the ILRS combination product(s). The contents of this draft proposal were discussed in detail. Only highlights are mentioned here; the full text can be found at http://ilrs.gsfc.nasa.gov/working_groups/awg/cfp/index.html.

In summary, the proposal is aimed at 2 aspects of the generation of (an) operational official ILRS analysis product(s): the data reduction (i.e. the conversion of satellite observations into parameter solutions), and the quality check and actual combination of the individual solutions. Individual institutes may express their interest in both elements or just one. The products at this moment will consist of EOPs (for which the semiweekly IERS Bulletin A is the customer) and a set of global station coordinates (for which there are no direct customers at this moment). Various aspects that relate to the questions "why an official ILRS product?" and "why a combination product?" are summarized in Appendix 17.

It was agreed that the official ILRS product(s) will be (a) combination product(s) rather than (a) selected analysis product(s) coming from a specific institute.

Before being accepted as a data reduction contributor for this product, an analysis center needs to pass the tests being developed in the "benchmarking" pilot project. The analysis results coming from such a test will have to be within certain limits (to be specified before December 15, 2002; action item Husson/Pavlis and analysts). This holds for the following products of the "C" solutions of the benchmarking test: satellite orbit (rms differences w.r.t. a reference orbit in the radial, cross-track and along-track directions), station coordinates (rms differences w.r.t. a reference solution in the vertical, latitude and longitude components), EOPs (rms differences w.r.t. a reference solution in x/y-pole and LOD) and data corrections (individual differences w.r.t. reference values); the rms-of-fit is not a test parameter. In principle, the analysis contributors can start developing and generating their products and operational schemes once the benchmarking test has been passed.

As for the combination centers, candidates will first go through a testing phase, after which a prime combination center and (multiple) back-up combination centers will be selected by the AWG. The entire procedure is expected to become fully operational in mid-2003.

Both analysis and combination contributors will commit to fulfill their role for a period of 3 years.

All solutions will cover a period of 28 days, shifted by alternately 2 and 5 days to satisfy the requirements of IERS Bulletin A.

All solutions (coming from individual institutes, as well as combination products) will be stored in the SINEX V2.0 format. Analysis institutes will provide daily solutions for x/y-pole and LOD, and mid-epoch solutions for station coordinates, whereas the combination center will provide averaged solutions for "new" x/y-pole and LOD solutions to IERS, ILRS and other customers, and averaged x/y-pole and LOD for all 28 days and averaged station coordinates to ILRS.

Further details can be found in the official Call for Participation.

9. Miscellaneous

9.1. Station qualification
This issue of station categorization appears quite simple from a purely technical point of view, but also involves many sensitive political aspects. The discussion about this is continuing within the GB.

9.2. IVS/IGS/ILRS working group
No new developments, as reported by Appleby.

9.3. Analysis feedback
Appleby presented a list of questions on several aspects of SLR analyses, which he needs as input for a presentation at the upcoming International Workshop on Laser Ranging (Appendix 18). Basically, it concerns wishes, requirements and problems for/on SLR observations and the interaction between stations, analysts and data centers. The most important data quality issue is the elimination of systematic errors in the normal points.

9.4. (Associate) Analysis Center
At this moment, the ILRS analysis centers are divided into two categories, based on their response to the original call for participation in the ILRS organization. This reflects the situation of 4 years ago. It is generally recognized that this categorization needs a new implementation. However, considering the developments that are ongoing within the ILRS at this moment (e.g. the official ILRS analysis products), it was concluded to postpone (a discussion on) this new implementation until things have settled down.

10. Next meeting
The next workshop of the AWG will take place directly before the next EGS General Assembly and will again take 2 days: April 3 and 4, 2003, in Nice, France. The ILRS will pay the expenses of hiring a meeting room in one of the hotels in Nice (Novotel, Sofitel).

11. Action items
Noomen showed the list of action items coming out of this meeting (Appendix 19), which also includes some left over from the previous meeting in Nice. The list was not discussed further; those involved will recognize their actions in the minutes. The draft minutes that will be distributed first will give an opportunity to provide feedback on the list itself.

12. Closure
Noomen thanked the audience for their active participation, and HTSI for hosting the meeting.

November 7, 2002

R. Noomen, G. Appleby, P.J. Shelus

Appendices:

  1. Agenda
  2. List of participants
  3. SINEX format: biases and eccentricities (Husson)
  4. Harmonization (Husson)
  5. Benchmark solutions ASI (Luceri)
  6. Benchmark solutions NERC (Appleby)
  7. Benchmark comparison HTSI (Husson)
  8. Benchmark solutions DGFI (Müller)
  9. EOP+network solution ASI (Luceri)
  10. EOP+network solution CRL (Appleby)
  11. EOP+network solution JCET (Pavlis)
  12. EOP+network solution NERC (Appleby)
  13. EOP+network comparison/combination CSR (Eanes)
  14. EOP+network comparison/combination HTSI (Husson)
  15. EOP+network comparison/combination IGN (Altamimi)
  16. Preliminary categorization SLR stations (Husson)
  17. Arguments in favor and/or against an official ILRS combination product (Noomen)
  18. ILRS analysts questionnaire (Appleby)
  19. ILRS AWG action items

Appendix 1: Agenda

ILRS Analysis Working Group workshop #7
Lanham MD, USA, October 3-4, 2002

Agenda

  1. opening
  2. minutes AWG Nice April 2002
  3. actions since AWG Nice
     3.1. reports, presentations, membership
  4. announcements
     4.1. ILRS related presentations, publications
     4.2. IERS Combination Research Workshop
  5. SINEX format
  6. pilot project "harmonization"
     6.1. status report
     6.2. future
  7. pilot project "benchmarking and orbits"
     7.1. status report
     7.2. future
  8. pilot project "positioning and earth orientation"
     8.1. status report
       • ASI
       • CRL
       • JCET
       • NERC
     8.2. comparisons and combinations
       • CSR
       • HTSI
       • IGN
       • questions/issues:
         • quality of individual station coordinates? EOPs?
         • quality of combined station coordinates? EOPs?
     8.3. future
       • Etalon vs. LAGEOS
       • EOPs vs. EOPdots
       • a priori editing criteria
       • RFP for official ILRS analysis product(s)
  9. miscellaneous
     9.1. station qualification
     9.2. IVS/IGS/ILRS working group
     9.3. analysis feedback
     9.4. (Associate) Analysis Center
  10. next meeting
  11. action items
  12. closure

Appendix 2: Attendance

Zuheir Altamimi         altamimi@ensg.ign.fr
Graham Appleby         gapp@nerc.ac.uk
Giuseppe Bianco         bianco@asi.it
Maurice Dube         mdube@pop900.gsfc.nasa.gov
Peter Dunn         peter.j.dunn@raytheon.com
Richard Eanes         eanes@astro.as.utexas.edu
Ramesh Govind         rameshgovind@auslig.gov.au
Van Husson         van.husson@honeywell-tsi.com
Cinzia Luceri         luceri@asi.it
Chopo Ma         cma@virgo.gsfc.nasa.gov
Horst Müller         horst.mueller@dgfi.badw.de
Carey Noll         noll@cddisa.gsfc.nasa.gov
Ron Noomen         ron.noomen@deos.tudelft.nl
Erricos C. Pavlis         epavlis@umbc.edu
Mike Pearlman         mpearlman@cfa.harvard.edu
Bernd Richter         richter@iers.org
Peter J. Shelus         pjs@astro.as.utexas.edu
Mark Torrence   mtorrenc@magus.stx.com

 

Appendix 3
SINEX format: biases and eccentricities
V. Husson

Appendix 4
Harmonization
V. Husson

Appendix 5
Benchmark solutions ASI
V. Luceri

Appendix 6
Benchmark solutions NERC
G. Appleby

Appendix 7
Benchmark comparison HTSI
V. Husson

Appendix 8
Benchmark solutions DGFI
H. Müller

Appendix 9
EOP+network solution ASI
V. Luceri

Appendix 10
EOP+network solution CRL
G. Appleby

Appendix 11
EOP+network solution JCET
E. Pavlis

Appendix 12
EOP+network solution NERC
G. Appleby

Appendix 13
EOP+network comparison/combination CSR
R. Eanes

Appendix 14
EOP+network comparison/combination HTSI
V. Husson

Appendix 15
EOP+network comparison/combination IGN
Z. Altamimi

Appendix 16
Preliminary categorization SLR stations
V. Husson

According to the performance specifications, here is the breakdown of stations based on data taken between October 1, 2001 and October 1, 2002.

  • Ajaccio: Contributing
  • Arequipa: Contributing
  • Beijing
  • Borowiec: Contributing
  • Cagliari
  • Changchun: Contributing
  • Chinese Transportable
  • Concepcion: Contributing
  • Golosiiv
  • Grasse: Contributing
  • Grasse (LLR): Contributing
  • Graz: Core
  • Greenbelt: Core
  • Haleakala: Contributing
  • Hartebeesthoek: Core
  • Herstmonceux: Core
  • Katsively
  • Komsomolsk
  • Kunming
  • Lviv
  • Maidanak (1 & 2)
  • Matera: Contributing
  • McDonald: Core
  • Mendeleevo
  • Metsahovi: Contributing
  • Monument Peak: Core
  • Mt. Stromlo: Core
  • Potsdam: Contributing
  • Riga: Contributing
  • Riyadh: Core
  • San Fernando: Contributing
  • Shanghai: Contributing
  • Simeiz
  • Simosato
  • Tahiti: Contributing
  • Wettzell: Core
  • Yarragadee: Core
  • Zimmerwald: Core

Notes:

  • Arequipa, Ajaccio, Riga were considerably short of the LAGEOS requirement and cannot track high satellites.
  • Changchun meets all criteria except the 2 cm short-term bias stability. Changchun's short-term stability was 2.1 cm (very close to the goal).
  • Grasse did not meet the high satellite requirement, but achieved everything else. Combined with the Lunar System at Grasse, the site as a whole would be "Core".
  • Grasse (LLR), Metsahovi, Shanghai, Borowiec, Concepcion were light in a number of areas, but generally have respectable data quality. Concepcion is occupied by TIGO, so their data volume should increase with time.
  • Haleakala was light on the LAGEOS requirement, but will probably achieve this requirement within the next 2 quarters.
  • Potsdam was light on the LAGEOS and high satellite requirement. This system will soon be replaced by the new Potsdam system.
  • San Fernando is approaching "Core" qualification. They were slightly under the LAGEOS requirement of 400 passes, with 349 passes.
  • Tahiti and Matera are both well short of the data volume requirements, but have excellent data quality. Matera's data volume will increase after it passes acceptance testing.

Appendix 17
Arguments in favor and/or against an official ILRS combination product
R. Noomen

Question 1: why an official ILRS product?
Question 2: why a combination product?

Answers (in arbitrary order, and relating to either one or both of the questions):

  • Save burden of work for (e.g.) IERS product centers
  • Competition is stimulus for improvement
  • Inherent quality control
  • "Average" errors of individual solutions (quality of combination product must be better than that of individual solutions)
  • Backup, contingency
  • Reflects maturity of technique and community (we know we can deliver results of similar quality)
  • Other techniques also do it
  • Political/financial
  • Selecting one and stopping the others is unacceptable (destroy scarce capabilities)

Challenge: proof/show/develop this!

Appendix 18
ILRS analysts questionnaire
G. Appleby

Appendix 19
ILRS AWG action items

  • Angermann: extend SINEX format checker for ILRS purposes
  • Appleby/Otsubo: complete and provide satellite center-of-mass correction table (station dependent)
  • Dube: fine-tune eccentricity file in SINEX format
    (comments if > 5 m)
    (1999 eccentricities for Keystone systems)
    (7090: stick to eccentricity of January 1992)
    (7403: stick to eccentricity of July 1992)
  • Dube: store "benchmark" results in read-protected directory (or make the current directory read-protected?)
  • Eanes: implement ITRF2000 in QL analysis (November 1)
  • Husson: finalize and announce table with LAGEOS data problems (SINEX format)
  • Husson: contact MCC on sign of applied range biases
  • Husson: develop single consolidated range bias report
  • Husson + Noll??: put presentations AWG Lanham on-line
  • Husson/Pavlis: inform contributors of Lanham benchmark presentations
  • Husson/Pavlis: fine-tune description of "benchmarking" (description of "orbit parameters" and "EOP constraints", add "D", .) (October 15)
  • Husson/Pavlis: establish reference solution for "C" solution benchmarking (December 15)
  • Noll???: make sure station log files for Keystone/7090/7403 provide information which is consistent with action item Dube (e.g. add note "2001 eccentricity survey results not to be used by analysts")
  • Noomen: ask IERS for specification ILRS products for IERS purposes
  • Noomen: implement ITRF2000 in QL analysis (November 1)
  • Noomen: description BB solutions on ILRS web pages
  • Noomen: contact Noll for locations of RFP products
  • Noomen: approach DFPWG for stations to report new eccentricity values in 0.1 mm
  • Noomen: tracking priority Etalons permanent
  • Noomen: inform IERS and USNO on status official ILRS product on EOPs (incl. format)
  • Noomen/Appleby/Shelus: update RFP
  • Noomen/Appleby/Shelus: minutes of meeting
  • Nurutdinov: expand help functions for SINEX format checker at NCL
  • Shelus: (new) distinction between ILRS ACs and AACs
  • analysts: "pos+eop": refine BB solutions + extend with BB5
  • analysts: "benchmark": refine solutions (November 30)
  • analysts: install new TDF (GEODYN users only)