MichaelWebster
Jul 09 2007, 12:03 AM
Is it possible that player ratings are skewed in Europe or other places where a group of propagators does not mix with US disc golfers often? If you assume European disc golfers rarely play in tournaments with Americans (and vice versa), could the ratings become higher or lower overseas? I was just curious.

ck34
Jul 09 2007, 12:40 AM
It doesn't matter too much as long as the initial propagators mixed with other propagators which is the primary way they could have gotten initial ratings. From a practical standpoint, if the groups are isolated, their ratings are correct relative to the people they play with regularly, even if the whole group is offset higher or lower by a few points from a broader standard.
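A toy sketch may help picture that. This is not the actual PDGA formula; the anchor ratings and the 10-points-per-throw scale are assumptions for illustration. The point is that ratings built from relative performance keep the gaps between players intact even if the whole group's baseline is offset:

```python
# Illustrative only -- not the PDGA math. Ratings derived from relative
# performance within a group are internally consistent even when the
# whole group is anchored a few points too high or too low.

def relative_ratings(scores, anchor_rating, points_per_throw=10):
    """Rate each player against the group average, pinned to an anchor."""
    avg = sum(scores) / len(scores)
    # A score below the group average earns a rating above the anchor.
    return [anchor_rating - (s - avg) * points_per_throw for s in scores]

scores = [52, 54, 55, 57]

# Two isolated groups post the same scores but carry different anchors:
group_a = relative_ratings(scores, anchor_rating=950)
group_b = relative_ratings(scores, anchor_rating=940)  # offset 10 pts low

# The player-to-player gaps are identical; only the offset differs.
gaps_a = [r - group_a[0] for r in group_a]
gaps_b = [r - group_b[0] for r in group_b]
print(gaps_a == gaps_b)  # → True
```

Within each group the relative order and spacing come out the same, which is exactly the "correct relative to the people they play with regularly" situation described above.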

ChrisWoj
Jul 09 2007, 09:36 AM
I'm sorry, what exactly is an "initial" propagator? I've never asked this.

ck34
Jul 09 2007, 09:42 AM
The first players in a new locale or country to have enough rated rounds (8) to be propagators.

bpkurt
Jul 16 2007, 11:54 AM
I'll apologize in advance, as this may be explained elsewhere in the labyrinth of this message board.

I want to ask for some explanation/clarification of how propagators work, specifically w.r.t. the example from Pittsburgh (Ams) this past weekend.

Round 1: Adv and Int played the same layout.
Round 2: Int played the same layout as Round 1; Adv played a different one.
(Also, Round 4 for Pro/Adv was played on the same layout as Int rounds 1 & 2, and the 4th-round scores were very similar to the 1st round's.)

My question is about how/why the same score in Intermediate rounds 1 and 2 is rated so much (~45 points) differently?!

It is possible the "unofficial results" submission has yet to clarify the course layouts for all rounds and all divisions. If this is the case, maybe I don't need the explanation.

Thanks!!

ck34
Jul 16 2007, 12:17 PM
Whenever you see that large of a difference it's likely the TD hasn't done the layouts properly for the online display. Or, there was a significant difference in wind between two rounds on the same layout.

bpkurt
Jul 16 2007, 12:19 PM
THANKS (for all the great things you do, not just this reply)!!

I'll assume it's a layout issue, and look for the stats to change once they get updated.

IF it doesn't change, I'll inquire again later.

Thanks again.

ck34
Jul 16 2007, 12:26 PM
Actually, I checked, and the online layout setup is correct. Not sure why the big difference. The good news is that all rounds on the same layout will be lumped together for the official ratings, so everyone should get the same rating for the same score each round - somewhere between the two ratings shown.

Alacrity
Jul 17 2007, 12:01 PM
I think there are three individuals who are messing with the ratings. The first round, if you add in the Advanced group, is pretty much dead on in the averages. If you only look at the Int group, however, you'll notice that the two medium-rated players had the hottest first round in their group, and while they played closer to their ratings in the second round, they still played better than other players at or above their rating. I don't know which players were the propagators, but if you look at the 703-rated player, they should not have shot anywhere near what they shot. That also skewed the numbers somewhat.

One other comment: you have two players in the lower to middle of the rating range who played way under their potential. The combination, while it should average out, is still pretty significant.

ck34
Jul 17 2007, 12:08 PM
I don't believe the online software includes players rated under 800 in the calculations, just as the official calculations do not. So the 703-rated player has no impact.
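That cutoff is simple to picture in code. A minimal sketch with a made-up field; the 800 threshold is the only number taken from the post:

```python
# Players rated under 800 are filtered out before any ratings math runs,
# so a 703-rated hot round can't move the numbers. Field data is invented.

MIN_PROPAGATOR_RATING = 800

field = [("A", 975), ("B", 942), ("C", 703), ("D", 888)]

propagators = [(name, r) for name, r in field if r >= MIN_PROPAGATOR_RATING]
print(propagators)  # the 703-rated player drops out
```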

bpkurt
Jul 18 2007, 12:38 PM
This looks like something I've suspected for a while: Intermediate players (or anyone whose play fluctuates significantly from their rating) may "screw up" ratings calculations when they're used as propagators.
Again, I don't fully understand the details of how the propagators are used in calculations, but it seems reasonable that if too many propagators threw significantly differently than their rating, the results are skewed.

ck34
Jul 18 2007, 12:54 PM
You don't really know if the results are skewed or if the course really played that tough/easy that round. In a nod to customer service, we accept less accuracy in the ratings process in order to provide ratings for as many members as possible as a member benefit, even if they play in divisions with few propagators or in locations with few PDGA members. No college stats prof would ever want fewer than 30 props to do the ratings calcs, but we use as few as 5 and accept that those ratings may not be as good as they could be with more scoring data. On the other hand, no one gets paid or wins awards due to their rating (yet). You still have to play the game and shoot better than your competition regardless of your ratings.
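The 5-versus-30 point is the usual standard-error argument: noise in an average shrinks with the square root of the sample size. A quick sketch, assuming each propagator's round rating wobbles by about 25 points (an invented figure):

```python
import math

# Assumed round-to-round spread for one propagator, in rating points.
SIGMA = 25.0

def standard_error(n_props):
    """Expected wobble of the average across n_props propagators."""
    return SIGMA / math.sqrt(n_props)

se_5 = standard_error(5)    # ~11.2 points with the bare minimum of props
se_30 = standard_error(30)  # ~4.6 points with a statistician's sample
print(round(se_5, 1), round(se_30, 1))
```

Under these assumptions a 5-prop rating is about 2.4 times noisier than a 30-prop one, which matches the trade-off described.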

bpkurt
Jul 18 2007, 01:01 PM
I played in both rounds, and I can say the conditions were practically identical.

Regardless, I understand your point and the usage of ratings calculations, and I agree with you.

Alacrity
Jul 18 2007, 04:26 PM
If everyone would just play their ratings instead of shooting so much better or worse than they were supposed to then all the ratings would calculate fine ;).


This looks like something I've suspected for a while: Intermediate players (or anyone whose play fluctuates significantly from their rating) may "screw up" ratings calculations when they're used as propagators.
Again, I don't fully understand the details of how the propagators are used in calculations, but it seems reasonable that if too many propagators threw significantly differently than their rating, the results are skewed.

ninafofitre
Jul 20 2007, 03:48 PM
Ratings need to be skewed in OKLAHOMA... The Okie golfers are WAYYYYYY better than the ratings we get here. A month ago Coda and I had very similar rounds where we were just 2 to 3 holes from being PERFECT, and they rated under our player ratings... I have played many places and I usually have a pretty good idea within 10-15 points what a round will rate. But for some reason we get no credit for playing well here.

Last week here in Tulsa there were people shooting 44s and they didn't even count as 1000-rated rounds... You could have put all those high-rated NC golfers at our tournament and they wouldn't have shot much better than we did, if any, yet we had to shoot -12 or better to get into the 1000s.

gang4010
Jul 20 2007, 03:55 PM
That only happens in OK Kev - definitely not anywhere else ;)

ninafofitre
Jul 20 2007, 04:06 PM
The Texicans are getting HOSED also... it's a middle-country BIAS ;)

gang4010
Jul 20 2007, 04:26 PM
This problem would go away if we were playing against the course instead of each other. But the ratings guru says it's too hard - so it'll never happen.

gotcha
Jul 21 2007, 08:23 AM
This problem would go away if we were playing against the course instead of each other. But the ratings guru says it's too hard - so it'll never happen.



You are correct, Craig. A course rating system would help to eliminate some of the discrepancies which exist in the current player rating system.

trbn8r
Jul 21 2007, 06:29 PM
This problem would go away if we were playing against the course instead of each other. But the ratings guru says it's too hard - so it'll never happen.



Course guru, what are the main challenges?

ck34
Jul 22 2007, 08:24 AM
We have no proof that courses actually have the same challenge each time they are played. There's more validity to generating ratings the way we do, dynamically with slight fluctuations, than to having a set value for a course. So many factors change from hour to hour, day to day, and season to season that it's impossible to pick the "right" SSA for any course, let alone one with multiple tees and/or pins and TDs who change or add holes for tournament layouts that shift a little each event. The way we do it accounts for all of those things plus the biggest of all - wind.
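For readers wondering what "dynamic" means mechanically, here is a rough, unofficial model, not the real PDGA coefficients. It assumes a 1000-rating baseline and roughly 10 rating points per throw, both illustrative: each propagator's score, adjusted for their rating, gives one estimate of what a scratch player would have shot that round, and the average of those estimates becomes the round's SSA.

```python
# Unofficial sketch of a dynamic SSA. The 1000 baseline and the
# 10-points-per-throw scale are assumptions, not PDGA constants.

POINTS_PER_THROW = 10.0

def dynamic_ssa(propagators):
    """propagators: list of (player_rating, actual_score) for one round."""
    estimates = []
    for rating, score in propagators:
        # Adjust each prop's score for how far they sit below 1000-rated:
        # that yields one estimate of a scratch player's score today.
        estimates.append(score - (1000 - rating) / POINTS_PER_THROW)
    return sum(estimates) / len(estimates)

def round_rating(score, ssa):
    return 1000 - (score - ssa) * POINTS_PER_THROW

# A windy round: everyone scores high, so the SSA floats up with them.
props = [(980, 56), (950, 58), (920, 61)]
ssa = dynamic_ssa(props)
print(round(ssa, 1), round(round_rating(54, ssa)))
```

Because the SSA is re-derived from the propagators each round, wind or other conditions that push everyone's scores up automatically push the SSA up with them, which is the adjustment being claimed here.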

gang4010
Jul 22 2007, 12:22 PM
We have no proof that courses actually have the same challenge each time they are played.



Come on now Chuck - it's one thing to say that it's too difficult to create a formula for a course that physically fluctuates. But it's quite another to say the challenge of a course can't be measured - that's myopic at best.

Once you've played a layout a certain number of times - or in the case of rating a course - once a certain number of scores are posted (either hole by hole - or by tournament round score) - the expected range of scores becomes very predictable. To say otherwise would suggest you haven't been playing this game................at all!

ck34
Jul 22 2007, 01:21 PM
Exactly - you can get close, but dynamic is closer, and it doesn't require any database searches by the TD to identify which layout in the history of the course best matches the ones being used, figure out each SSA, and adjust for seasonal factors, for whether the paths and across were OB or not, or for the fact that the water level on the OB is higher or lower than when the stats were taken. The dynamic way we do ratings takes all of those changes into account.

There is no one good SSA number that will be more accurate than the way we do it now, presuming we have enough propagators. Do you expect the TDs to tell us the weighted average wind speed each round so we can adjust the fixed number accordingly? We can't even get a significant percentage of TDs to either make the effort or figure out how to report layouts accurately online, although most are finally getting the hang of reporting the layouts well on the TD report.

The only place we calculate a fixed SSA to do ratings is when we don't have enough propagators in a place with few PDGA members like Australia, or in the future as new international areas build PDGA membership.

gang4010
Jul 22 2007, 05:35 PM
I would expect that with a course rating program - we could establish credible SSA's for every single hole on every single course, and that "tournament layout" SSA's could be developed by local clubs easily. The notion that a dynamic SSA (dependent on what golfers show up on any given weekend) changes the inherent challenge to a particular hole or layout is just ludicrous.
The proof is in your own database. The notion that I can shoot a stroke better on a layout on the second day of an event, and be rated 10 points LOWER than the day before, with ZERO differences in conditions (other than time of day) is testament to the fallibility of the current formula. The difficulty of the course didn't change, neither should the rating for a particular score.

ck34
Jul 22 2007, 06:35 PM
I'm sorry Craig but the course challenge does change. It changes every minute, every hour, every season. Dew in morning, sun angles changing light shade and perspective, wind, players' energy levels, what their results were the previous round, discs that were lost or changed all impact the scoring along with several other things that have been discussed before. The data management required for fixed SSAs will bring the ratings program to its knees and with worse accuracy. It's bad process on all counts.

The only flaw with dynamic SSAs is the lack of enough propagators and we've already switched back to our earlier process for combining all scores played on the same course so every player gets the same rating for the same score on the same layout at the event, unless the TD indicates significant wind differences. The TD has no work to do to make our system work other than reporting who plays what courses, which they did before anyway.

reallybadputter
Jul 23 2007, 01:05 PM
I would expect that with a course rating program - we could establish credible SSA's for every single hole on every single course, and that "tournament layout" SSA's could be developed by local clubs easily. The notion that a dynamic SSA (dependent on what golfers show up on any given weekend) changes the inherent challenge to a particular hole or layout is just ludicrous.
The proof is in your own database. The notion that I can shoot a stroke better on a layout on the second day of an event, and be rated 10 points LOWER than the day before, with ZERO differences in conditions (other than time of day) is testament to the fallibility of the current formula. The difficulty of the course didn't change, neither should the rating for a particular score.



I think at least in this situation it isn't the course that is changing, it is the other golfers.

I've played 4 tournaments this year on 5 different courses. All but 1 course was a course that I'd never thrown before.

Now while we did change tees from round to round, still, having seen the course all the way through once helped me. On average I shot 35 points higher the second round on the course than I did the first round. (And in my 10 rounds this year, my second one of the day has been at least 14 points higher every time.)

If 1/3 of the field only plays the course at most once a year and are from out of town, they might be expected to shoot 2-3 strokes better the second time around. If you only shoot 1 stroke better, you are falling behind...

That's probably a good reason why all rounds with similar weather from the same tees should be combined... Either way if you know the course you got the advantage in your rating in the first round and a disadvantage in the second.

Also, the people driving from a distance to play the tournament are probably playing more tournaments and are therefore more likely to be propagators...

Or maybe I need to find a better warmup routine or some better coffee...

gang4010
Jul 24 2007, 07:16 AM
I'm sorry Craig but the course challenge does change. It changes every minute, every hour, every season. Dew in morning, sun angles changing light shade and perspective, wind, players' energy levels, what their results were the previous round, discs that were lost or changed all impact the scoring along with several other things that have been discussed before. The data management required for fixed SSAs will bring the ratings program to its knees and with worse accuracy. It's bad process on all counts.

The only flaw with dynamic SSAs is the lack of enough propagators and we've already switched back to our earlier process for combining all scores played on the same course so every player gets the same rating for the same score on the same layout at the event, unless the TD indicates significant wind differences. The TD has no work to do to make our system work other than reporting who plays what courses, which they did before anyway.



You're a funny guy, Chuck. Why can't you just come out and say it straight - it's too difficult to maintain accurate SSA's due to the inability of TD's to report the course actually played - instead of feeding us this boatload of crap about how the morning dew significantly alters the essential challenge of the course?!
I am reasonably certain that no matter how complex your formula is, there is not a single variable in it for morning dew, sun angle, or any of the other items you mentioned. The factors that affect course challenge are not the changes you suggest; they are distance, foliage, and elevation. All of those may change with different layouts, but they are not affected by time of day or by who is playing on the course.


When you talk about having enough propagators, you are right on one level - and that is having enough data (i.e. records of scores for a particular hole, not the number of players on a course at one time) to establish an SSA for a hole. Instead of keeping the responsibility for data collection and maintenance in the hands of a few, I would suggest enlisting the hundreds of volunteers available to generate hole-by-hole SSA's for every course. Gee - we could even make having an appropriately rated course a requirement for sanctioning - that would speed the process dramatically!

ck34
Jul 24 2007, 09:26 AM
You're just wrong, Craig. The course rarely plays the same, just like players rarely play the same. Consider that a 10-point variance in rating is usually a one-shot difference in SSA on an SSA-50 course. That's a 2% variance. That's nothing, and yet players get all bent out of shape over 10 points. The course challenge varies just like the players' performance varies, by almost the same amount. And yet we can automatically adjust for players' performance variances, but it's virtually impossible to maintain the kind of accounting required for hole stats, and any adjustment process would be more mysterious and less accurate.
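The 2% figure is simple arithmetic, shown here with the common rule of thumb (assumed, not official) that one throw is worth about 10 rating points:

```python
# On an SSA-50 course, a 10-point rating swing is one throw out of ~50.

ssa = 50               # throws for a scratch (1000-rated) round
rating_swing = 10      # rating points
points_per_throw = 10  # assumed rule of thumb

throws = rating_swing / points_per_throw  # 1.0 throw
variance_pct = throws / ssa * 100
print(variance_pct)  # → 2.0
```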

gang4010
Jul 24 2007, 02:19 PM
Come on Chuck - why are you stuck on this? Am I wrong that distance, foliage, and elevation are the main factors affecting challenge? Am I wrong that developing a database of actual scores on a hole can establish adequate criteria for rating its difficulty?

Explain again how the number of people on any course at a particular time alters the challenge of a hole? How is it that a disc I can no longer use alters the challenge of a hole? How is it that a person's previous score has ANYTHING to do with the challenge of a particular layout? The ONLY thing that is dynamic to any degree is the player's performance (barring severe conditions). Ratings should be based on HOW YOU SCORE AGAINST THE COURSE, not on how many shot well or poorly. If EVERY player shoots a course record at the same time, the ratings would be skewed incredibly low. How is that possibly accurate?

skaZZirf
Jul 24 2007, 03:00 PM
True...

skaZZirf
Jul 24 2007, 03:02 PM
Yeah, Gang is right... What's gonna happen when there are 72 touring pros who all shoot relatively similar rounds? Are they just gonna get mediocre ratings, even though they all crush the local pro by 8-9 strokes?

baldguy
Aug 06 2007, 11:18 AM
so, gang...

let's say that we had a course-challenge-based ratings formula. Let's also say that you played your home course two rounds during a 1-day tournament. The first round, you shot a 45, giving you a hypothetical 1000 rating. At lunch, a hurricane blows in and you play the second round with pouring rain and 40+ mph winds. If you're human, you'll shoot significantly higher during that second round, even though it was the exact same course. Let's say you managed a 54, beating all the rest of the competition in these conditions. With the course-based system, your rating for round 2 would be a little over 900 (depending on the SSA). Smart money says that you'd be on the message boards that very evening complaining about the ratings system.

Of course I'm being extreme... but my point is that no system is 100% perfect as long as there are any variables at all. The fact is that the best overall way to rate round performance is by looking at how the rest of the field did. Perhaps the factors for defining the list of propagators should be re-addressed, or perhaps the SSA should be taken into account at some level... but even then you're talking about very minute differences in most cases.

Something to think about: recently, a local mini had 70 players in attendance and at least 5 of those were rated within 20 points of 1000 with several more between 950 and 980. We played the exact same layout as we do every week, except that this day was extremely rainy. No real wind to speak of, but tons of mud and some light rain during a few holes of the round. I personally shot about 7 strokes worse than normal. Many of the amateur field shot 4 or 5 strokes worse, and the 1000-ish rated golfers were still 3 or 4 strokes off their normal. 90% of these players were regular enough to be considered locals. The layout hadn't changed, yet everyone scored worse because of the course conditions. Did they deserve to be rated lower because of it?

gang4010
Aug 06 2007, 04:54 PM
so, gang...

let's say that we had a course-challenge-based ratings formula. Let's also say that you played your home course two rounds during a 1-day tournament. The first round, you shot a 45, giving you a hypothetical 1000 rating. At lunch, a hurricane blows in and you play the second round with pouring rain and 40+ mph winds. If you're human, you'll shoot significantly higher during that second round, even though it was the exact same course. Let's say you managed a 54, beating all the rest of the competition in these conditions. With the course-based system, your rating for round 2 would be a little over 900 (depending on the SSA). Smart money says that you'd be on the message boards that very evening complaining about the ratings system.



As previously stated - extreme conditions would be the only real dynamic element in formulating the rating.




Of course I'm being extreme... but my point is that no system is 100% perfect as long as there are any variables at all. The fact is that the best overall way to rate round performance is by looking at how the rest of the field did. Perhaps the factors for defining the list of propagators should be re-addressed, or perhaps the SSA should be taken into account at some level... but even then you're talking about very minute differences in most cases.



If ratings are dependent upon who shows up, all that has to happen for ratings to be skewed high or low is for the number of propagators within a given range to be disproportionate in some way. Assuming that a 1040-rated golfer will always shoot 1040 automatically skews a highly rated round even higher, while a lower-rated player shooting the same round (in the absence of the higher-rated player) gets rated lower. It's the nature of the beast.


Something to think about: recently, a local mini had 70 players in attendance and at least 5 of those were rated within 20 points of 1000 with several more between 950 and 980. We played the exact same layout as we do every week, except that this day was extremely rainy. No real wind to speak of, but tons of mud and some light rain during a few holes of the round. I personally shot about 7 strokes worse than normal. Many of the amateur field shot 4 or 5 strokes worse, and the 1000-ish rated golfers were still 3 or 4 strokes off their normal. 90% of these players were regular enough to be considered locals. The layout hadn't changed, yet everyone scored worse because of the course conditions. Did they deserve to be rated lower because of it?



Well, let's think about it - the challenge created by adverse (some might even say "extreme") conditions would naturally be increased. Under those circumstances the round would either be discarded as outside the standard deviation, or would need to be adjusted to reflect the conditions. And given that the ratings folks say those sorts of conditions can be accommodated in their formulas, I don't see why established course difficulty ratings couldn't serve as the most accurate starting point available for making those adjustments. How is it that who shows up is what determines the challenge/difficulty of a course? I've yet to see an explanation for that premise.

ck34
Aug 06 2007, 05:20 PM
How is it that who shows up is what determines the challenge/difficulty of a course? I've yet to see an explanation for that premise.




Because players (or any other statistical entity) exhibit average performance on average if enough show up.
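That is the law of large numbers in one line; a quick simulation makes it concrete. Every number here is invented:

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

TRUE_SSA = 55.0  # the course's "true" difficulty, in throws
NOISE = 2.5      # per-player round-to-round spread, in throws

def field_average(n_players):
    """Average score of n simulated players, each with random form."""
    return sum(random.gauss(TRUE_SSA, NOISE) for _ in range(n_players)) / n_players

small_err = abs(field_average(5) - TRUE_SSA)
big_err = abs(field_average(500) - TRUE_SSA)
print(round(small_err, 2), round(big_err, 2))
```

Individually the simulated players are all over the place, but the 500-player field lands within a small fraction of a throw of the true value, while the 5-player field can easily miss by a throw or more.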

gang4010
Aug 07 2007, 07:16 AM
Try again, Chuck - cause and effect are not explained.
Let's say a full field shows up on one day - perfect conditions. Another perfect day - only half as many show up due to another scheduled event elsewhere - same exact conditions. How has the course challenge or difficulty changed?

gotcha
Aug 07 2007, 10:15 AM
Let's say a full field shows up on one day - perfect conditions. Another perfect day - only half as many show up due to another scheduled event elsewhere - same exact conditions. How has the course challenge or difficulty changed?



Here are two good examples:

2007 Masters at Idlewild (http://pdga.com/tournament/tournament_results.php?TournID=6550&year=2007&includeRatings=1#Masters)
2006 Masters at Idlewild (http://pdga.com/tournament/tournament_results.php?TournID=5617&year=2007&include_ratings=1#Masters)

I shot 61 for my first round at both tournaments; however, my round ratings are different. Weather was not a significant variable between the two years. In fact, with Fred's recent changes to the course (extending hole 8 and adding an island green on hole 11), the course is more difficult than it was in 2006, yet my rating is lower in 2007 because of who was or wasn't at the event.

gang4010
Aug 07 2007, 10:54 AM
Thanks for proving my point Jerry! If the course changed - I could understand the same score being rated differently. If the people changed - and the course and conditions didn't - the same score should have yielded the same rating.

ck34
Aug 07 2007, 11:49 AM
You cannot prove that the course conditions are ever the same. One shot difference in SSA at Idlewild is less than 2% difference in conditions. Even bowling alleys don't remain the same game after game or after waxing, and the factors affecting them are much less than disc golf courses.

gotcha
Aug 07 2007, 12:38 PM
Of course there are variables in disc golf....course conditions, wind speeds, inclement weather, foliage factor, etc. Isn't that one reason for the deviation factor in the current Player Ratings system? A score of 61 on a par 68 course doesn't vary, however.

It seems logical to me that one could easily develop a course rating system using the current Player Rating system and hole-by-hole scoring averages within an established SSA. With enough scoring data, a course rating (and course par, for that matter) could be determined based upon hole-by-hole averages of the players the course is designed for. Having individual hole averages would allow a TD to establish a course rating for different pin placements and/or tees, which often produce a different par or scoring average (i.e. short pin is a par 3, long pin is a par 4).
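The hole-by-hole bookkeeping described here fits in a few lines. A sketch; the data structures, pin names, and scores are all hypothetical:

```python
from collections import defaultdict

# (hole, pin) -> every score ever recorded on that configuration.
hole_scores = defaultdict(list)

def record(hole, pin, score):
    hole_scores[(hole, pin)].append(score)

def layout_ssa(layout):
    """Sum per-hole scoring averages for one tournament setup."""
    return sum(
        sum(hole_scores[key]) / len(hole_scores[key]) for key in layout
    )

# Hole 1's short pin averages a throw easier than its long pin:
for s in (2, 3, 3, 4):
    record(1, "short", s)
for s in (3, 4, 4, 5):
    record(1, "long", s)

print(layout_ssa([(1, "short")]), layout_ssa([(1, "long")]))  # → 3.0 4.0
```

Per-pin averages like these are what would let a TD assemble an SSA (and a par) for any mix of placements, as suggested above.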

If a course rating system is eventually developed and implemented, I would still like to see the Player Ratings published for sanctioned events as they currently are. There would obviously be a difference between the two ratings, and I would find that statistical data quite interesting.

gang4010
Aug 07 2007, 01:15 PM
You cannot prove that the course conditions are ever the same.



For the sake of argument - let's just say you're right on this Chuck (personally I don't agree at all). You have yet to explain how WHO shows up changes or affects the challenge offered by the COURSE.

Based on statistics - regardless of who shows up - local players/TD's know what a good score is, what the course record is, and what an average round score would be for virtually any layout offered for tournament play. You yourself state that the SSA rarely varies by more than a couple percent. So why is using the SSA established for the course so much more difficult than the dynamic version in use now, based on who shows up?

Hey - if ratings are meant simply to compare player performance in relation to each other, OK, I guess what we've got isn't so bad. But if ratings are meant to measure a golfer's skill in relation to the challenge presented by any particular course, then what we have now fails miserably. So I guess that's the bigger question - what are ratings intended to measure?

circle_2
Aug 07 2007, 01:19 PM
Seems like you're trying to make a dynamic 'thing' very static...IMHO.

gang4010
Aug 07 2007, 01:25 PM
Hey Doc - I'm not trying to "make" it that way. I'm suggesting that it already largely IS that way. The way we do it now, the course SSA is re-established every time we gather a group of players and post a group of scores. I'm suggesting that the posting of the larger group of scores establishes an SSA that is more static than dynamic, and given enough of those pieces of data, we should no longer have to rely on "who shows up" to provide a rating. Establishing course ratings would eliminate the need for propagators at each and every event.

circle_2
Aug 07 2007, 01:37 PM
Seems like there would be a lot of permutations: several pin placements/hole, different tees...and the dynamics of how a long hole affects scores on the next hole. As previously stated, there's weather (temps/humidity/wind/rain), foliage, etc...
Believe me, I see your point...I'm just good at arguing both/all sides! :D

gang4010
Aug 07 2007, 05:57 PM
Chuck said on a NEFA thread:

The players primarily want things to be in their personal self interest, not the sport in general, especially as it affects their division. That's why we have rules and competition procedures not subject to specific player preference.




Just curious CK, but as regards divisions - what rules are in place that DO NOT cater to player preference?

From this statement it "appears" as if you think there should be fewer choices as regards divisions. But of course - appearances can be deceiving. Care to share the context of this remark?

ck34
Aug 07 2007, 07:44 PM
I'm referring to such things as wanting to play the longer tees on more open holes because you have distance without accuracy, or not having holes where a forehand is the preferred shot, or putting barriers on potential roller holes because you don't think that's a legit shot, or payout percentages skewed more to the top, or a ratings break 5 points higher because you're static at a particular level.

Jeff_LaG
Aug 07 2007, 09:11 PM
Let's say a full field shows up on one day - perfect conditions. Another perfect day - only half as many show up due to another scheduled event elsewhere - same exact conditions. How has the course challenge or difficulty changed?



Here are two good examples:

2007 Masters at Idlewild (http://pdga.com/tournament/tournament_results.php?TournID=6550&year=2007&includeRatings=1#Masters)
2006 Masters at Idlewild (http://pdga.com/tournament/tournament_results.php?TournID=5617&year=2007&include_ratings=1#Masters)

I shot 61 for my first round at both tournaments; however, my round ratings are different. Weather was not a significant variable between the two years. In fact, with Fred's recent changes to the course (extending hole 8 and adding an island green on hole 11), the course is more difficult than it was in 2006, yet my rating is lower in 2007 because of who was or wasn't at the event.



Those round ratings were 1022 and 1016, a difference of only 6 points. That is virtually negligible. Even on a course with a high SSA like Idlewild, that's not even a one-stroke difference.

gang4010
Aug 08 2007, 07:20 AM
There are some "IF"'s here............ but IF the course and conditions were unchanged, under what reasoning should there be ANY difference for identical scores?

ck34
Aug 08 2007, 09:01 AM
The problem is that you can't ever have the same conditions in an outdoor environment. If you could have a course indoors in a controlled environment with artificial trees, you could get close. However, even the chain positions on each basket can sit slightly differently on the hooks each time they are played. Even the decision of one player close to the basket to putt out for speed of play can change the chain pattern for the next player from what it would have been otherwise. These are all small effects, but small effects can lead to a disc flipping vertical and sliding through. The current variances in SSA are also small, typically less than 2% when there are enough propagators.

If we did anything at all along the lines of fixed SSAs, a case could be made that when we have fewer than N propagators, and the SSA from the props varies more than x percent from the fixed value on file, the fixed value would be used. We already do this on a limited basis for events in developing areas, especially international ones, to help them get ratings when they only have a handful of propagators. But the process is much slower than our mostly automated process now, and it would take a significant procedural and database upgrade to make a systemwide change.
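That fallback rule is easy to express. A sketch; the 5-propagator minimum echoes the number used elsewhere in the thread, but the 5% deviation cutoff is a placeholder, not an actual PDGA value:

```python
MIN_PROPS = 5            # below this, sanity-check against the fixed SSA
MAX_DEVIATION_PCT = 5.0  # placeholder threshold, not an official figure

def choose_ssa(dynamic_ssa, n_props, fixed_ssa):
    """Trust the dynamic SSA when there are enough props; otherwise fall
    back to the fixed value if the dynamic one strays too far from it."""
    if n_props >= MIN_PROPS:
        return dynamic_ssa
    deviation_pct = abs(dynamic_ssa - fixed_ssa) / fixed_ssa * 100
    return fixed_ssa if deviation_pct > MAX_DEVIATION_PCT else dynamic_ssa

print(choose_ssa(54.0, 12, 55.0))  # plenty of props -> dynamic: 54.0
print(choose_ssa(61.0, 3, 55.0))   # few props, ~11% off -> fixed: 55.0
```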

reallybadputter
Aug 08 2007, 07:11 PM
There are some "IF"'s here............ but IF the course and conditions were unchanged, under what reasoning should there be ANY difference for identical scores?



Are you sure the course was completely unchanged?

Maybe a few little branches that were in the way in '06 got knocked off during the year?

Maybe a little underbrush got trampled on all the holes so that recovery shots were a little easier?

You're talking a 0.6% difference in rating.

Maybe more players were using the Chuck Norris-endorsed Devilhawk, and because it is driving down scores, the course isn't as difficult anymore?

dscmn
Aug 09 2007, 12:09 PM
the problem seems to me to be that the difficulty of the course is decided upon as a result of the scores. there is no independent analysis of the course separate from scores shot on the course.

the course could in fact be tougher based on a number of factors: leaves, grass growth, etc. but, as a result of the data (scores), it can be determined that the course was easier when in reality it was harder; a group of players just played it better.

it seems that course difficulty becomes the "scapegoat" for deviation in scores based on propagators. now, tell me why i'm wrong. :)

ck34
Aug 09 2007, 12:52 PM
There is no independent way to measure course difficulty. That's the flaw with the USGA Course rating system - their ratings are never validated with actual play.

Think of each propagator as being a precision instrument such that how they play exactly measures the course difficulty at that moment. If they were robots, we would only need one of them to measure the SSA for a course. However, since SSA can be a decimal number and a robot can only shoot a whole number score, we need readings from more of them to get more precise than a whole number SSA like 55.

Now, propagators aren't robots and are subject to variances in performance. Let's say someone actually had a fleet of precision robots and randomly tweaked their settings up and down by varying amounts such that none of their individual measurements were accurate anymore. However, the average performance of this fleet would likely be as good as the original robot with precise measurements.

That's the way propagators are. While maybe 5 props might play better than average in a round, it's unlikely for 50 props to play better than average. However, just like the tweaked robots, the average performance of enough props is more accurate than some fixed number for the course rating that doesn't take into account the dynamics of playing the course under those precise conditions for 2-3 hours.
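The "tweaked robots" analogy above is a standard averaging argument, and it can be demonstrated with a small simulation. All numbers here are illustrative, not derived from actual ratings data.

```python
# Each propagator's score is modeled as the true SSA plus individual
# noise. A single reading is unreliable, but the average of many
# readings converges on the true value, just like the fleet of
# randomly tweaked robots. TRUE_SSA and NOISE_SD are assumptions.
import random

random.seed(1)
TRUE_SSA = 54.3
NOISE_SD = 2.5  # assumed per-round performance swing, in strokes

def propagator_round():
    """One propagator's round: the true SSA plus personal noise."""
    return random.gauss(TRUE_SSA, NOISE_SD)

for n in (1, 5, 50, 500):
    avg = sum(propagator_round() for _ in range(n)) / n
    print(f"{n:3d} props -> measured SSA {avg:.2f}")
```

As the loop shows, 5 props can easily read high or low, while 500 land very close to the true value, which is the point about 5 versus 50 props playing better than average.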

dscmn
Aug 09 2007, 01:09 PM
so playing with am propagators will yield lower ratings than playing with pro propagators?

ck34
Aug 09 2007, 02:08 PM
Not any more. There's been an adjustment to the formula to account for that effect. Am props on average slightly drag down the SSA from its true value, because more of them are slightly underrated relative to their current performance level than pros are.

gang4010
Aug 09 2007, 03:33 PM
Not any more. There's been an adjustment to the formula to account for that effect. Am props on average slightly drag down the SSA from its true value, because more of them are slightly underrated relative to their current performance level than pros are.



It's like the magical mystery tour. I'm sorry Chuck - I really don't mean to give you such a hard time, but the more you talk - the more confusing your answers are.

Above you suggest that there is a "true" SSA value that is somehow affected by players whose ratings lag behind their actual skill level. I don't get it. If the SSA is re-calculated for every new set of propagators, isn't the entire rating system a self-fulfilling proph.......I mean fallacy? If ratings lag behind skill level, how can you determine a course's difficulty using those players as propagators? And if, overall, scores on a particular layout vary year to year by only a couple percent, how is using a fixed course SSA less accurate?
I'm looking for understanding, not for incalculable variables as explanations. If they are intangible, how could they possibly be quantifiable?

ck34
Aug 09 2007, 04:15 PM
No course is ever precisely the same, and changes are made regularly that make tracking and adjusting the SSA value a nightmare. Start working out how you would do it and you'll see that even coming up with a way to define a layout becomes a database nightmare. At Patapsco and other courses with alternate pins and tees, you need to do something like this to code the layout: Long Tees ABBCBBCAACBBCBAACC2007. Then the routing changes next year due to a new parking lot and the hole numbers change. It's almost the same layout, but now you have to shift the letter order; meanwhile the long tee was removed and the B pin on new hole 7 has been moved forward 40 feet, but it's still the B pin. How do you track those references with any chance of keeping up with it?

Using propagators as the measuring stick has some problems if you don't have enough of them. But with enough of them, they can "measure" the true challenge that day for that course layout better, and more automatically, than any possible scheme to adjust some fixed values for course layouts. We now have enough data to adjust for the underrated-am factor in the calculations. It's still a big-numbers game; with fewer numbers the accuracy may go down.

TDs are regularly challenged just to get the course layouts identified that each division played on TD reports using simple layout names. Imagine how messed up the process would be tracking multiple letter coding and with less accuracy and no easy way to adjust for last minute course changes and hole additions which happen regularly.

While it might seem to make more sense to use fixed SSAs, which we originally thought would be done, Roger and I realized early on that course ratings for all new and tournament temp layouts would need to use propagators to produce the original "fixed" SSAs anyway. So we might as well use that process consistently for all ratings, and it's worked with minimal extra work on the part of TDs.
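The bookkeeping problem with letter-coded layouts can be made concrete. This sketch uses the Patapsco-style pin-letter string from the post above; the keying scheme itself is hypothetical, invented to show why renumbering breaks it.

```python
# If a layout is keyed by a string of per-hole pin letters in
# hole-number order, any renumbering of the course silently
# invalidates every stored key, even though the physical course
# is almost unchanged. Key format is a hypothetical illustration.
layout_2007 = list("ABBCBBCAACBBCBAACC")
key_2007 = "".join(layout_2007) + "2007"

# Next year a new parking lot shifts the routing: old hole 3
# becomes hole 1, so the same pins appear in a rotated order.
layout_2008 = layout_2007[2:] + layout_2007[:2]
key_2008 = "".join(layout_2008) + "2008"

# Nearly the same physical course, but the keys no longer match,
# so historical SSA data can't be joined to the new layout.
print(key_2007)
print(key_2008)
print(key_2007[:18] == key_2008[:18])  # False
```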

gang4010
Aug 09 2007, 05:55 PM
So it is as I originally posted - a clerical issue.

I understand the thought process after all; I just think there are resources available that could be of incredible benefit, not just to TD's but to the organization as a whole, that are not being pursued.
While it's laudable to want to take the load of the work off of TD's, rating a course is the stuff TD's are made for. And the stuff they are made OF. Think how many local clubs would gather data on their local course if having up-to-date data on your course were the requirement to host a sanctioned event. TD's and course designers are a remarkably committed, committable, egotistical breed. The notion of competing for, competing on, or designing the toughest rated course is alive and well, and it's what a huge # of that ilk covet and strive for. So to assume they would not be interested in getting their courses rated seems to me......how shall I put this.........less than optimistic.
If we set up a link or some sort of database "depository" where TD's could "deposit" score data from their events, the SSA for any particular layout could be "automatically" calculated with no more effort (ok, not a LOT more) than what they do now. And given that in MANY locations local clubs have multiple people involved with volunteer efforts (including running events), getting courses rated would get people taking a greater amount of "possession" and feeling of "inclusion" in the whole process, in the local clubs, and in the organization. We could even link the database to the course directory and make it a basic entry form that anyone could fill out (like how you access live scoring as a TD).
Having this sort of info kept up to date for sanctioning also lends legitimacy to the PDGA, and offers a tangible benefit back to the membership in the form of quality info on places to travel and what to expect when going new places. And back to the original premise: it would give us the propagator data you require to eliminate the fudge factor in rating a score based on who shows up.
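A minimal sketch of what the proposed score depository might compute from deposited rounds. The 10-points-per-stroke conversion is a simplifying assumption for illustration; the real ratings formula varies that factor with course difficulty, and the function name is hypothetical.

```python
# Estimate SSA from deposited propagator scores by adjusting each
# score for how far the player's rating sits from the 1000-rated
# standard, then averaging. POINTS_PER_STROKE is an assumed constant.
POINTS_PER_STROKE = 10.0

def estimate_ssa(scores_and_ratings):
    """scores_and_ratings: list of (score, player_rating) for propagators."""
    adjusted = [score + (rating - 1000) / POINTS_PER_STROKE
                for score, rating in scores_and_ratings]
    return sum(adjusted) / len(adjusted)

# Example: three propagators deposit rounds from the same layout.
print(estimate_ssa([(54, 1000), (57, 970), (51, 1030)]))  # -> 54.0
```

This also illustrates Chuck's point in reverse: with only three props, one off-day score moves the estimate noticeably, which is why a large propagator count matters.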

ck34
Aug 09 2007, 06:10 PM
The point you seem to overlook is that the current method, which is much easier, is actually more accurate than whatever complex scheme you might develop. It instantly takes into account extra holes, changed holes, and different weather with nothing special required from the TD. We don't even have more than a handful of dedicated course designers doing evaluations of their own courses. Your expectation that TDs all over the world would come even remotely close to knowing what to do is wildly optimistic. How about showing that TDs can even interpret the rules properly and get their reports to the PDGA on time before considering the next step?

gang4010
Aug 09 2007, 06:20 PM
is much easier is actually more accurate



Easier? OK, I'll give you that, but no more difficult than what you've spent the last 8-9 years doing.

Guess it depends on what your rating is supposed to measure as to whether it's more accurate. Without tying a score to the difficulty of the course, I don't see how a "rating" is any sort of realistic measure of "skill". It seems more an attempt to measure "relative performance for a given day", and I'm at a loss as to why that's worth measuring.

It's OK though- thanks for your responses, my questions have been answered.

ck34
Aug 09 2007, 06:41 PM
You apparently missed the whole point about propagators doing a better job of measuring the actual course challenge at the moment than a fixed value does. How do you adjust for dew, grass height, sun angle and wind? No one knows how much those factors should shift some fixed SSA for that round.

However, propagators as a group are a dynamic measuring instrument that takes all of those factors into account via their performance. They're as precise as a ruler that measures a board about five feet long to the nearest inch, but maybe not good enough to measure to the nearest 1/16th of an inch. You imply that the course rating is relative to who shows up and how they play. Fortunately, time and again, the cumulative performances of the group average out to provide us with a good course rating measurement.

gang4010
Aug 09 2007, 07:14 PM
Didn't miss it Chuck - I just take it as your method of justification - not as truth.


Fortunately, time and again, the cumulative performances of the group average out to provide us with a good course rating measurement.



Now say that over and over and over - then add them all together - and the SSA for every hole becomes more fixed the more times you play it.

Why dew, sun angle, and grass length are something to worry about, I don't know. Those are the things that "even out" over time when you play a course repeatedly. Those differences don't even register for a first-time player on a course. Wind, apparently, you either adjust for in extreme conditions already, or you throw those rounds out. You tell me.

The essential elements affecting difficulty are not those things; they are distance, foliage, and elevation, all of which are measurable or quantifiable in some way. (How many discussions in the Course Designers Group have been had on these topics?) The larger body of scores over time, by your own logic of "enough propagators", must undoubtedly yield a more accurate reflection of the difficulty of a hole.

ck34
Aug 09 2007, 07:29 PM
It's not the fixing of a specific baseline SSA that's the problem. We already post those for each layout rated. But I suspect you could pick any course out there with data from multiple events and not be sure which SSA values to use to determine the baseline, because you might not remember that water on the course meant the pin was moved, or that it was a windier day than normal.

The more important issue is the major and minor adjustments that would need to be made to the baseline figure to make it as accurate as the dynamic method, with no way to know how to make them. The way we do it now is seamless, based on the performance of a large group of propagators actually playing under the exact conditions that affect the rating, whether subtly (dew, grass, sun angle) or majorly (wind and rain). No need to guess whether these factors affect the baseline SSA by 0.3 or 0.5 that round; they are incorporated automatically.

I can just see the players coming in for lunch wondering what the TD divines as the magic adjustment factor... hmmm it looks like 52.36 today. No way says the guys who started on the open holes in the morning wind. It had to be at least 53.5. Then there will be the TDs who wonder what an SSA is...