I think all of these guys are doing a great job. Chuck Kennedy has devoted an enormous amount of time, effort, and love to the sport of disc golf, and I respect him greatly. Guys like Chuck are my heroes. Just to be clear, I don't mean to be critical in any antagonistic sense; I am only interested in offering useful critiques and ideas, or at least sparking discussions that might eventually lead to improvements. Let me also be clear that I'm willing to help in whatever way I am able, including working with the PDGA ratings committee to address member concerns about the ratings.
I hope that my primary critique is clear: It is not good to offer a number without also offering a quantitative estimate of the errors involved. When we throw out a number that is supposed to measure something, without knowing how accurate that number is, then it is easy to misuse it.
What does it mean to parse the difference between a 1000- and a 999-rated player? It happens all the time, but the rating system simply cannot resolve that level of difference in player quality. Is there a significant difference between a 1030- and a 970-rated player? Almost certainly. But where is the middle ground of uncertainty? 995 vs. 1005? 990 vs. 1010? Who knows? That is the essential question we need to answer.
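To make the point concrete, here is a minimal sketch of the kind of error estimate I have in mind. Everything in it is assumed for illustration: the per-round scatter of roughly 25 rating points and the round counts are hypothetical numbers, not measured PDGA values. The idea is just that a rating built from N rounds has a standard error, and a gap between two ratings is only resolvable when it is large compared to the combined standard error.

```python
import math

# Assumed for illustration only: typical round-to-round scatter of a single
# player, expressed in rating points. The real value would need to be measured.
ROUND_SD = 25.0

def rating_std_error(n_rounds):
    """Standard error of a rating computed as a mean over n_rounds rounds."""
    return ROUND_SD / math.sqrt(n_rounds)

def gap_significance(rating_a, rating_b, n_rounds_a, n_rounds_b):
    """Return the rating gap in units of its combined standard error (a z-score)."""
    combined_se = math.sqrt(rating_std_error(n_rounds_a) ** 2 +
                            rating_std_error(n_rounds_b) ** 2)
    return abs(rating_a - rating_b) / combined_se

# A 1000 vs. 999 comparison, each rating based on 30 rounds:
z_small = gap_significance(1000, 999, 30, 30)   # well under 2: not resolvable
# A 1030 vs. 970 comparison, same round counts:
z_large = gap_significance(1030, 970, 30, 30)   # well over 2: clearly resolvable
```

Under these assumptions, the 1-point gap is a small fraction of one standard error, while the 60-point gap is many standard errors wide, and the crossover into ambiguity sits somewhere in between. That crossover is exactly the number the published ratings do not currently come with.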
And there are other tools for evaluating the ratings system, such as network topology analysis. It can tell you whether there are sources of skewness or poor coupling that will affect the robustness of various schemes for transforming player scores into ratings, and it can also test basic assumptions, such as normally distributed errors. For example, consider that tournament ratings in one region are coupled to tournament ratings in another region only by propagators traveling between the regions. What kind of players travel furthest and are most likely to play the role of propagators that couple different regions together? Typically, the better players shooting lower scores; mediocre players are more likely to play only tournaments in their home region. Inter-regional coupling can therefore become highly sensitive to the movements and performance fluctuations of those traveling players: a visitor plays one tournament and then moves on, so an unusually good or bad round carries directly into the coupling between regions. We don't know how much this influences the final ratings, but many suspect the effect is adverse.
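The coupling question above can be probed with a very small amount of code. The sketch below uses entirely made-up data: a handful of hypothetical players and the regions where each has played tournaments. It counts how many players couple each pair of regions, and groups regions into connected components; a region pair coupled by only one or two travelers, or a region in its own component, is exactly the fragile topology described above.

```python
from collections import defaultdict

# Hypothetical data: the set of regions each player has played tournaments in.
player_regions = {
    "player_A": {"Midwest", "Southeast"},  # a traveler: couples two regions
    "player_B": {"Midwest"},
    "player_C": {"Southeast"},
    "player_D": {"West"},                  # no traveler reaches the West here
}

def region_coupling(player_regions):
    """Count, for each pair of regions, how many players couple them."""
    coupling = defaultdict(int)
    for regions in player_regions.values():
        rs = sorted(regions)
        for i in range(len(rs)):
            for j in range(i + 1, len(rs)):
                coupling[(rs[i], rs[j])] += 1
    return dict(coupling)

def connected_components(player_regions):
    """Group regions into components joined by at least one shared player."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for regions in player_regions.values():
        rs = list(regions)
        find(rs[0])                 # register even isolated regions
        for r in rs[1:]:
            union(rs[0], r)
    components = defaultdict(set)
    for region in parent:
        components[find(region)].add(region)
    return list(components.values())
```

In this toy data, Midwest and Southeast are coupled by a single player, so any cross-region rating comparison rests entirely on that one player's rounds, and the West is not coupled to anything at all. Running the same analysis on real tournament data would show how thin the actual inter-regional links are.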
Anyways, looking forward to continuing this discussion.