Znash
Jul 11 2005, 12:48 PM
I've been looking at the ratings for many tournaments and one of the recurring themes seems to be that the Ams end up with ratings 10-20 points behind the pros. Case in point: at the BHMO this weekend a seven down (47) on the temp course for the Ams was a 1009 while it was a 1019 for the pros, and an even on the original course for the Ams was 963 while for the pros it was a 975. Is it harder for a pro to shoot seven down and an even? Is that why they get a better rating than the Ams for shooting the same score?
I would really like to know why the rating system is so friendly to the pros and not to the Ams.
Here are the scores from the BHMO. The Ams played the temp course then the original, and the pros played the original then the temp course.
Pro scores (http://www.pdga.com/tournament/tournament_results.php?TournID=4669&year=2005&includeRatings=1#Open), Am scores (http://www.pdga.com/tournament/tournament_results.php?TournID=4670&year=2005&includeRatings=1#Advanced)
MTL21676
Jul 11 2005, 12:59 PM
B/c ratings are stupid....ratings are based on what everyone else shoots during the same round.
Chuck and other ratings lovers will get on here and talk about how there is no statistical variation but it's BS - and your evidence proves it.
If an am shoots a 45 on a course with a WCP of 47 and the next round a pro shoots the same thing on the same course, then Chuck will tell you that "the conditions were easier the second round" and that is why the first 45 is rated 1007 instead of the pro's 1020.
At Cedar Hills in Raleigh NC (one of the courses used for the Dogwood Crosstown NT), the same happens every year. I think Cedar Hills magically gets tighter when the pros come to town cuz the ratings are through the roof.
I have shot a 53 in tournament play there and had it rated in the 930 range - Climo does it a few years ago and it is rated 959. Dave Gray shot a 45 during the Am FDC's last year and it was rated 1030 - when Mitch Sonderfan shot a 45 in the NT, it was 1046.
It's amazing that simply b/c pros are playing the course, the course magically gets tighter and conditions suddenly get worse.
james_mccaine
Jul 11 2005, 01:01 PM
Armature ratings? Isn't that electromagnetism or something? It drove me crazy during school.
Anyways, don't ask this question. The phenomenon doesn't exist? Delete this thread immediately. :D
In all seriousness, I totally agree with the observation that ams are typically rated about 10-20 points lower. I know if I am rated in a tourney with a majority of ams, my rating will be lower than if I am in a tourney of pros only. However, I give the rating boys credit. Even though Chuck won't admit it publicly, I get the impression that they realize the problem exists and are taking steps to correct it.
sandalman
Jul 11 2005, 01:12 PM
znash - regardless of what anyone will try to convince you of - you ARE CORRECT!
the ratings are very very good - but this is the biggest flaw with them. besides the real event results that demonstrate this phenomenon (check any event that splits into separate weekends for pros and ams - ZBoaz is a great example), all ya need to do is consider this example:
5 pro players, each rated 1000, play a one round event on the Totally Normal Disc Golf Course. each player shoots a 52. the round rating for the 52 will be 1000. no surprise there.
in the afternoon, 5 am players, each rated 920, play a one round event on the Totally Normal Disc Golf Course. each player shoots a 52. the round rating for the 52 will be 920. WTF?
if you can turn in a couple good rounds while in a field of higher rated players, your rating will benefit greatly. the same rounds delivered in a field that is weaker than you will not do much, if anything at all, for your rating.
i still say ratings alone make pdga dues worthwhile. but this is one very real anomaly in the system.
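if it helps to see it in code, here's a rough sketch of that example in python - a simplified model of round ratings tracking the propagator average, not the actual pdga formula:

# Rough sketch, not the actual PDGA formula: if every propagator shoots the
# same score, the round rating can only come out at the average of their
# player ratings, whatever that average happens to be.

def round_rating_for_equal_scores(propagator_ratings):
    """Toy model: round rating when every propagator ties."""
    return sum(propagator_ratings) / len(propagator_ratings)

pro_field = [1000] * 5   # morning field, all rated 1000, all shoot 52
am_field  = [920] * 5    # afternoon field, all rated 920, all shoot 52

print(round_rating_for_equal_scores(pro_field))  # 1000.0
print(round_rating_for_equal_scores(am_field))   # 920.0 -- same 52, 80 points lower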
james_mccaine
Jul 11 2005, 01:14 PM
Conditions do change and it is not wise to compare rounds shot on different days or months, or sometimes even on the same day. I also think using WCP as some absolute measure is a misunderstanding of our ratings theory.
THAT BEING SAID, there is ample evidence of this phenomenon. Also, a basic understanding of how ratings are calculated, both by round and by player, will make one expect this phenomenon, even if we didn't have pretty convincing evidence. It will get fixed in the long run, but I suspect that even the best methodology will be open to some criticism, primarily because very few people seem to understand the essence of our system.
MTL21676
Jul 11 2005, 01:14 PM
All I know is that my player rating is about to go up - not b/c I'm playing better, just b/c I turned pro.
bruce_brakel
Jul 11 2005, 01:23 PM
The answer is fairly simple. Amateurs are improving more rapidly than pros. A lot of amateurs will go from playing 810 rated golf to 910 in the first summer that they start playing tournaments and realize how badly they suck. Those guys go to a tournament with an 860 rating based on four previous tournaments where they have improved from 810 to 910. They play 915 calibre golf compared to the pros, but now they are a gator, so no matter what they play it goes into the mix as an 860 round. If the pros play different courses or at different times, their rating is a drag on the amateurs but not on the pros.
The effect is doubled when you consider that the pro pool usually includes the Pro Masters and Pro Grands. In these divisions there are usually a few guys who don't have a chance of shooting their rating because their knees are hurting today, or their ankle, or their back, or their arm, or their dentures, whatever. They pre-registered a month ago or are just there for the camaraderie or are deep in denial, so they play anyway. Their rounds give a boost to everyone in the field when they shoot four or five throws below their rating.
The amateur phenomenon cuts across the entire amateur spectrum. 974 rated Matt Hall kicked some serious amateur derriere at Mid Nationals. His unofficial ratings are 1000+. But for purposes of calculating those ratings he is treated as a 974 player. He drags the average down for everyone because he is still improving and playing well above his rating.
If for some reason you want to jigger your rating up, look for tournaments where the pros play in a different pool and donate. I have seen that this effect is eliminated when the Pros and Advanced are in one pool and the Intermediate and Recs are in another.
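A crude numerical sketch of that drag effect, using my own toy model of the propagator averaging rather than the real math:

# Crude toy model, not the real PDGA math. Round ratings are anchored to the
# propagators' *listed* ratings, so one under-rated, improving gator pulls the
# whole field's round ratings down.

listed = [1000, 990, 980, 970, 860]   # ratings on file
true   = [1000, 990, 980, 970, 915]   # what they actually play like today

# Everyone shoots exactly to their true skill. The calc only sees the listed
# ratings, so it concludes the course played easier than it really did.
error_per_player = (sum(true) - sum(listed)) / len(listed)
print(error_per_player)  # 11.0 -- every round in the field rates ~11 points low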
dave_marchant
Jul 11 2005, 01:42 PM
I think that the explanation for this is quite simple: it is ratings lag. It shows up every time there is a Pro-only field and an Armature-only :) field playing the same courses, but at different times over the course of a weekend. It is typically in the 15-25 point range from my observations.
By and large, pros (and really any golfers who have played on a regular basis for over 3 years) have a rating that reflects their average scoring very accurately. If you don't believe me, look at "Player History" for players - found in the "Membership" tab above.
On the other hand, newer players are progressing in their skilz rapidly. It stands to reason that the scores/ratings they threw 1 year ago (or even 6 months ago) are well below what they will throw today. BUT...these old ratings are factored into their ratings today, thereby bringing their ratings down lower than what actually reflects their skill level today.
So...the ratings calculator "sees" an all-Am field (made up largely of quickly improving players whose ratings lag their actual skill level) and sets the SSA accordingly lower. It thinks the course must be easier if a 920 player (rated skill) is shooting what a 940 (actual skill) player typically shoots. This in turn generates round ratings that are accordingly lower. Your observation of 20 points is pretty typical.
By the way, the new ratings coming out next week (7/19) will go a long way to fixing this known problem. If I remember correctly a minimum of 12 rounds will be used (used to be 20) and the most recent 8 rounds will be double weighted. This will much more accurately reflect the current skill level of quickly improving (or declining in my case) players.
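Here's a toy Python sketch of what double-weighting the most recent 8 rounds does for a rapidly improving player. The simple weighted average and the example numbers are mine, not the official calculation:

# Toy example, not the official PDGA math: a plain weighted average where
# the most recent 8 round ratings count twice.

def player_rating(round_ratings_newest_first, recent=8, weight=2):
    weights = [weight if i < recent else 1
               for i in range(len(round_ratings_newest_first))]
    total = sum(r * w for r, w in zip(round_ratings_newest_first, weights))
    return total / sum(weights)

# Improving Am: 12 rounds, newest first -- recent rounds ~940, older rounds ~900
rounds = [945, 940, 942, 938, 936, 941, 939, 937, 905, 900, 898, 902]

flat = sum(rounds) / len(rounds)   # every round weighted equally
weighted = player_rating(rounds)   # most recent 8 doubled

print(round(flat))      # ~927
print(round(weighted))  # ~932 -- closer to current form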
Znash
Jul 11 2005, 01:49 PM
I know it's not the conditions, at least not at the BHMO, since if the conditions had gotten worse by the second round, the Ams that shot the same scores as the pros on the original course would have received higher rated rounds than their pro counterparts. Instead they were still rated 10-20 points below the pros.
amateur, not armature
Although it has been said above -- I'd put it like this: Ams in general play about 20 points better than their rating because they are on a learning curve and steadily getting better. So they come into a tournament at 925 but really play at a 945 level. But since they are a propagator, whatever score they shoot propagates as a 925 rated round. Multiply that by the whole field.
Want to help your rating based on this phenomenon? Play in a tournament as a pro where ams play a different layout. In a tourney where everyone plays the same layout at the same time regardless of division -- the pros will probably be hurt a little and the ams helped.
An easy way to solve this problem is to stop regenerating the SSA/WCP/whateveritscallednow for each round. In cases where temporary layouts are used you'd have no choice, but a vast majority of tourneys are using layouts that are tried and true and have more than enough data to have generated a constant SSA.
Warwick Town Park has been host to numerous tourneys on all 4 of its standard layouts. Here's a quick example of the difference the constant propogatoring (my new favorite word) causes:
Sunday:
Warwick Town Park, silver to blue, R2 (2004 New York States - Pros): 18 holes, SSA 54.70
Saturday:
Warwick Town Park, silver to blue, R2 (2004 New York States - Ams): 18 holes, SSA 51.88
Saturday's round was when the Ams played, Sunday's was when Pro2 played. I play that course all the time, even rain doesn't make it 3 strokes harder to play from day to day.
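To put those Warwick numbers in perspective, here's a back-of-the-envelope sketch. The 10-points-per-stroke conversion is just a rough rule of thumb for a course around this SSA, not an official constant:

# Back-of-the-envelope only: assumes roughly 10 rating points per stroke,
# which is a rule of thumb, not the PDGA's actual scale.

POINTS_PER_STROKE = 10   # rough assumption

ssa_pro_day = 54.70   # Sunday, Pro field propagating
ssa_am_day  = 51.88   # Saturday, Am field propagating

score = 54   # same scorecard turned in on either day

rating_pro_day = 1000 + (ssa_pro_day - score) * POINTS_PER_STROKE
rating_am_day  = 1000 + (ssa_am_day - score) * POINTS_PER_STROKE

print(round(rating_pro_day))  # ~1007
print(round(rating_am_day))   # ~979 -- same 54, roughly 28 points lower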
There are a bunch of other examples like this one for those who care to research them.
IMO (which, I'm sure, is worth about a cup of mochafrappachino at Starbucks), this is the biggest flaw in the ratings system.
bruce_brakel
Jul 12 2005, 12:53 PM
I think double weighting the last eight will help solve the problem over time.
But also, it is not really a problem. Ratings only really matter for when you must leave recreational, must leave intermediate, and cannot play am as a pro. Not that many recs and intermediates are playing in the pro-only pool frequently and getting a rating that boots them out of their division prematurely. And it affects every am equally, more or less. If someone wanted to jigger their rating up or down they could work the system, but using this quirk they'd have to skip a lot of tournaments and just play mostly NTs and As.
What we have is better than self-selected divisions. It is also better than what we had a year ago. I think the ratings volunteers, employees and independent contractors are doing a great job.
MTL21676
Jul 12 2005, 01:02 PM
The one thing I've never understood is why do we drop the worst outta 10 rounds, but not the best outta 10 rounds??
Are we trying to make ourselves feel better or something?
sandalman
Jul 12 2005, 01:11 PM
we drop the worst X rounds for several reasons - uncommonly poor play, deliberate poor play, weird penalties like late arrivals, etc. the worst X rounds are not necessarily representative of the true talent of a player.
we do not drop the best X rounds for a very simple reason. it is NOT possible for a player to shoot better than he can shoot.
cbdiscpimp
Jul 12 2005, 01:12 PM
we drop the worst X rounds for several reasons - uncommonly poor play, deliberate poor play, weird penalties like late arrivals, etc. the worst X rounds are not necessarily representative of the true talent of a player.
I got a 2 stroke penalty for playing the course wrong and still shot my rating :D
sandalman
Jul 12 2005, 01:20 PM
thats cuz you're a god
MTL21676
Jul 12 2005, 01:24 PM
thats cuz you're a god
LMAO
adogg187420
Jul 12 2005, 01:42 PM
The amateur phenomenon cuts across the entire amateur spectrum. 974 rated Matt Hall kicked some serious amateur derriere at Mid Nationals. His unofficial ratings are 1000+. But for purposes of calculating those ratings he is treated as a 974 player. He drags the average down for everyone because he is still improving and playing well above his rating.
Ok, explain this. If he dragged the average down at Mid-Nationals, why is my rating for the last round only a 999 when the WCP is 64? I shot a 60, so shouldn't my rating for that round be around 1030ish+? And if he dragged the average down, and I beat him by 3 in the last round, shouldn't it be boosted even higher?
I think double weighting the last eight will help solve the problem over time.
I would like to see that approach tweaked. It should have a time limit so that rounds over 3 months old are not double-weighted.
cbdiscpimp
Jul 12 2005, 01:50 PM
So what's up with the new drop rule??? What low rounds are going to be dropped??? Still doing low 15 percent or are we doing this must-be-more-than-80-points-below-your-rating malarkey???
dave_marchant
Jul 12 2005, 01:52 PM
I understand what you are getting at and agree with the premise, but that would be kinda funky.
For most people who do not play over the winter, their ratings would change between the Dec and March updates without them having played a round of PDGA golf. That may confuse and disillusion a lot of people. For the players who are rapidly improving and watching their ratings closely, the drop in ratings would infuriate them, I imagine.
My thoughts on that: If you fail to play over the winter, rounds you played 5 months ago may not reflect the level you'll play at next time you compete, so should not be double-weighted. Solution: play tournaments more often or realize rounds over three months old should not get double weighting.
dave_marchant
Jul 12 2005, 02:01 PM
I think they are going to be dropped based on 2 standard deviations from your average (can't remember for 100% sure and I can't find Chuck's post). Dropping rounds outside 2 standard deviations will keep 95% of your rounds (and drop 5%).
A more consistent player will have a smaller bell curve of scores, so their standard deviation will be smaller. Inconsistent players will have a larger standard deviation.
http://www.robertniles.com/stats/stdev.shtml has a pretty easy to follow explanation of what a standard deviation is.
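If it helps, here's a quick Python illustration of what that kind of cutoff looks like. I'm assuming the drop is on the low side only, and again I'm working from memory, not Chuck's exact spec:

# Illustration only -- assumes the cutoff is 2 standard deviations below a
# player's average round rating, per my (possibly faulty) memory of the spec.

import statistics

rounds = [948, 952, 940, 955, 947, 938, 960, 951, 870, 945]  # one bad round

avg = statistics.mean(rounds)
sd = statistics.pstdev(rounds)
cutoff = avg - 2 * sd

kept = [r for r in rounds if r >= cutoff]

print(round(avg, 1), round(sd, 1), round(cutoff, 1))
print(kept)  # the 870 falls below the cutoff and is excluded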
dave_marchant
Jul 12 2005, 02:17 PM
I wholeheartedly agree with what you are saying, in that it more closely brings the new ratings method in line with its intent to decrease the ratings lag problem.
BUT...in my estimation of the DG demographic, only 5-10% would get it. Another 25% or more would be howling about how horrible the ratings system is. That many loud complainers would influence the majority negatively much more than the 5-10% who get it (nerds, most of them :eek:)
scoop
Jul 12 2005, 03:09 PM
If you fail to play over the winter, rounds you played 5 months ago may not reflect the level you'll play at next time you compete...
Why wouldn't you play over the winter? </sarcasm> :D
peachgrinder
Jul 12 2005, 03:44 PM
If you fail to play over the winter, rounds you played 5 months ago may not reflect the level you'll play at next time you compete...
Why wouldn't you play over the winter? </sarcasm> :D
One might not be able to play over the winter because the nearest tournaments are 10 hours away in January/February... I would love to play over the winter, but it just might not be feasible for an AM paying their own way, depending on where they live.
PEACH
jeterdawg
Jul 14 2005, 12:27 AM
Anyone who does not understand why pros and ams end up with different ratings after playing the same layout but at different times, see Sandalman's post on page 1. It's the perfect example.
Also affecting the ratings is what Bruce and mp3_ alluded to...ratings lag, and Amateurs developing quickly and riding the learning curve. The sure way to fix this is to have more updates. I see eventually the ability to update every week as a realistic possibility. The only thing preventing that is operator (TD) error. Correct layouts must be input, and ALL player information (scores, PDGA #, etc.) must be correct to make the ratings work without needing smoothing by the ratings gurus. Plus, the actual calculation might need to be tweaked a little more before this is ready. But when it is, this phenomenon will be MUCH smaller.
I really like the double-weighting though...it's great for players that compete in 10 or more sanctioned events a year. Players with less than that would be hard to follow with any rating calculation unless they're really consistent.
bruce_brakel
Jul 15 2005, 01:40 AM
this is my favorite ratings anomaly: ten casual players from Ludington who normally play in a tensome together decide to play a tournament. They suck, have fun, join the PDGA and get 790 ratings on average shooting in the mid-70s on the Beast. It is so much fun they play four more tournaments that month, all over the state. Then ratings come out and they still have 790 ratings because that's just how good they are.
They figure out that they cumulatively have paid $1000 to get their tushes kicked over and over, so they decide to get better before playing any more tournaments.
They get Stokely's videos. They videotape each other throwing. They go to clinics at A-tiers. They spend hours at the football field just throwing back and forth. They hang out with Wade Schultz and he shows them how to throw 500 feet. Wade's dad teaches them how to putt. And they all go with all-Innova bags because Barry and Kenny are Innova.
Six months later they show up for a tournament in Ludington. There is a monsoon that weekend but Schwass is fine with running a tournament for just those ten guys. This time every one of them shoots 20 strokes better.
How are their rounds rated?
790s!
cbdiscpimp
Jul 15 2005, 10:17 AM
That's prolly the funniest thing I have ever heard. The best part about it is the fact that it's completely true :eek:
ck34
Jul 15 2005, 10:43 AM
How are their rounds rated?
No propagators, so no ratings. Maybe if they all had 810 ratings the story would work better?
Chuck, in the case Bruce talks about, why do you need propagators? Why can't you just use the existing SSA of the course (assuming it has had enough rounds played on it to generate one that is statistically significant)?
If it was a temp course or had a different layout than normal then that's one thing, but assuming it was just the same old same old, why the need for a propagator?
idahojon
Jul 15 2005, 11:02 AM
How are their rounds rated?
No propagators, so no ratings. Maybe if they all had 810 ratings the story would work better?
OK, they have 810 ratings, Chuck.
Now, how would they be rated?
And, if there was one player with a 1000 rating who played in that second tournament and scored the same as the 810's, how would they (and he) be rated?
ck34
Jul 15 2005, 11:46 AM
Now, how would they be rated?
And, if there was one player with a 1000 rating who played in that second tournament and scored the same as the 810's, how would they (and he) be rated?
The 10 players would each get 810 ratings. With the 1000 player added, they would all get 827 ratings.
Remember that ratings accuracy relies on playing with other players with established ratings. In the case of our group of 10 players who only play with each other, they are all at the same skill level so it doesn't matter what their ratings are relative to other players at this point. They have a fair match amongst each other which is all we're trying to achieve with the system. It's similar to players in isolated areas of the U.S. or other countries that are developing players with few gators.
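For what it's worth, the 827 is just what the straight average of the field's ratings works out to when everyone ties - a simplified way to look at it, but it shows where the number comes from:

# Simplified reading of where 827 comes from: with every player tied,
# the round rating settles at the average of the field's ratings.

field = [810] * 10 + [1000]
print(sum(field) / len(field))  # 827.27...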
idahojon
Jul 15 2005, 11:57 AM
I guess this has always been the sticking point with me, then. Too many variables and not enough constants. If these guys improved their game by 20 throws, why would their rating stay the same? They are definitely better than they were when rated 790 (or 810 or whatever), so shouldn't the rating go up? How would the USGA handle this, if a player continually improved his daily scores, yet was the only one playing at a course for months and months? Wouldn't his handicap go down?
cbdiscpimp
Jul 15 2005, 12:06 PM
I guess this has always been the sticking point with me, then. Too many variables and not enough constants. If these guys improved their game by 20 throws, why would their rating stay the same? They are definitely better than they were when rated 790 (or 810 or whatever), so shouldn't the rating go up? How would the USGA handle this, if a player continually improved his daily scores, yet was the only one playing at a course for months and months? Wouldn't his handicap go down?
His handicap would most certainly go down because in ball golf it doesn't matter what anyone else shoots. All that matters is your score relative to course par and slope and all that good stuff. If you continually improve in ball golf your handicap will go down. If you start out shooting 15 over and you play 50 rounds and your latest 20 rounds are at 7 over, your handicap will drop from 15 to 7. They only use your most recent 20 rounds, no matter if they were 5 years ago or 5 days ago, and it's updated on a real-time basis. If you play 4 rounds a day your handicap could drastically change in a week depending on how you played.
They don't factor in conditions or anything like that because in the long run it doesn't freakin matter what the conditions are - everyone has to play in them, and they update the handicaps so often that the few bad weather rounds you end up playing drop out of your handicap quickly enough, if you are a competitive active golfer, that it doesn't matter.
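Something like this, a toy version of the most-recent-20-rounds idea as I described it (the real USGA formula is fancier, with course rating, slope, and best-of-20 differentials, so treat this as a sketch):

# Toy rolling handicap: average of your most recent 20 rounds relative to par.
# Not the actual USGA calculation -- just the simplified idea described above.

def rolling_handicap(scores_vs_par, window=20):
    recent = scores_vs_par[-window:]   # only the most recent rounds count
    return sum(recent) / len(recent)

history = [15] * 30 + [7] * 20   # started at 15 over, latest 20 rounds at 7 over
print(rolling_handicap(history))  # 7.0 -- the old rounds have aged out entirely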
I think the only way ratings will ever be accurate is when we start setting course pars and basing the ratings off of that and not off what everyone else shot, because who cares what everyone else shot. You should be rated on what you shot relative to how hard the course is. Not on what everyone else shot.
That and we will eventually have to get to the point of real time ratings.
I just hope I hit 955 or 960 not that it matters because ratings are just a number and they are always lagging about a month or more behind anyway.
We shall see how the new system works in a few days :D
ck34
Jul 15 2005, 12:36 PM
While it might be nice to do calcs with fixed SSA values, it's impractical and would generate more issues than we have now. Our course configurations for day-to-day play and events have way more versions than ball golf and are more affected by seasonal variations. Moving the flags around in ball golf is nothing like moving pins from A to B to C in thousands of combinations in DG. We don't even have a way to identify layouts that is uniform i.e. Championship layout from XYZ event in 2003 plus two temp holes and hole 5 from short tee and temp tee on 7 due to flooding.
Tournament layouts vary from event to event on the same course and it's not uncommon to add extra holes. Perhaps 1 layout in 10 is already in our record books so we'd still have to use the current method to generate SSAs for the other 9 layouts. We can barely get some TDs to report layouts played by different divisions properly. With fixed SSAs, we'd be asking them to go online and match up codes on their report with codes for layouts we had on file with no assurance they selected one that truly matched what they did in an event. I can barely remember the pin placements from an event in May let alone 6 months to a year later.
ck34
Jul 15 2005, 12:46 PM
We are using estimated SSA values to launch ratings in countries like Australia with one propagator. Once they get enough players with ratings, though, they'll revert to the dynamic process we use here. Once we can minimize the SSA difference generated between higher and lower rated players, dynamic SSAs won't be an issue. Hopefully, our new process will go a long way toward this goal.
james_mccaine
Jul 15 2005, 01:10 PM
Maybe the English have some outlaw propagators that they can send there. :p
Chris, are you out there? :D
rhett
Jul 15 2005, 04:49 PM
For me, the proof is in the pudding. You can devise all sorts of goofy scenarios that break down the ratings, and I enjoy that type of thing with the rules, but the bottom line for me is that the ratings work.
Caveat: nothing will work for the new golfer who is rapidly improving. His/her rating will never be stable or correct until that golfer reaches a stable level of golf. It's not that hard of a thing to grasp, and it is what it is.
I have found that my rating has always been a very good gauge of my game when compared to other golfers who are not on a meteoric rise. When the ratings first came out, sure enough they were a great predictor of who I would beat or be beaten by and by how much. When I am playing well, not great and not hacking, the ratings ring true when I compare my score against others who are playing well without being on fire or hacking. Chuck kept talking about how it is a statistical value that gets better with more data, so it was amazingly good at first and still passes the "it works" test.
If you have one rated round where you shot the round of your life or hacked your guts out, of course it isn't any good. If you are rapidly improving or declining, of course it isn't accurate. If you have a fairly steady game and play fairly regularly, it is probably pretty spot on.
What ratings are not:
- An accurate predictor of how you will shoot your next round
- An instantaneous and perfect measurement of your abilities RIGHT THIS SECOND
- A perfect measure of anything
Thanks for the insight Chuck. I would have thought that more courses stayed 'static' than that.
bruce_brakel
Jul 15 2005, 05:14 PM
My scenario is hereby officially revised from 790 to 810. :p
I don't want to be misunderstood. I think ratings are great. One of those ten guys e-mailed me today and said they'd try to avoid doing that!!! The fact that goofy scenarios can and do happen does not mean that ratings are not generally useful for separating players by skill level. I only know of a couple of situations where the ratings system has really screwed up because of a goofy scenario. The 2.5SD rule will correct one of those problems but at the same time create a problem for a player like myself who has a sudden drop in ability and cannot shoot within 2.5SD of his old rating. I have a 930something rating. My last five rounds will probably be rated in the mid 700s to lower 800s. So they will be thrown out and I will still be rated 930 something.
Doesn't mean the system is bad. Just means that it will get tweaked again.
gnduke
Jul 15 2005, 07:17 PM
That does beg the question of what happens when all of your 930ish scores age beyond being included. Do the previously dropped scores then become eligible, or are they permanently dropped?
cbdiscpimp
Jul 15 2005, 07:22 PM
I don't see why they changed it from the last update. Drop the low 15, double the most recent 8. Seemed to work fine and not create any problems to me.
ck34
Jul 15 2005, 08:43 PM
Rounds excluded by percentage or standard deviation remain in your file. Only DNFs get thrown out and don't count in any calc.
The database doesn't know ahead of the calc what your previous rating is. All it does is take the rounds within 12 months of your most recent round for each player (going back farther if you don't have 8) and doubles the most recent 8 round ratings. It then calculates the standard deviation on your data set of rounds and drops any that fall below 2.5SD of their average.
In Brakel's case, he'll have a high SD and a lower and lower average as he gets more low rated rounds and only the few lowest will actually drop out each time. It will take a little while but he'll be able to drive down his rating during rehab.
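In pseudo-code terms, the sequence is roughly this - a sketch of the steps I just described, not the production code, and the final averaging of the kept rounds is an assumption about how they combine:

# Rough sketch of the sequence described above, not the production code.
# Each round is (days_before_most_recent_round, round_rating).
import statistics

def updated_rating(rounds):
    rounds = sorted(rounds, key=lambda r: r[0])   # most recent first
    recent = [r for r in rounds if r[0] <= 365]   # within 12 months
    if len(recent) < 8:
        recent = rounds[:8]                       # go back farther if needed
    ratings = [r[1] for r in recent]

    weighted = ratings[:8] * 2 + ratings[8:]      # most recent 8 count double

    avg = statistics.mean(weighted)
    sd = statistics.pstdev(weighted)
    kept = [r for r in weighted if r >= avg - 2.5 * sd]   # drop low outliers

    return sum(kept) / len(kept)   # assumed: the kept rounds simply average out

example = [(0, 951), (20, 948), (45, 955), (70, 940), (95, 946), (120, 952),
           (150, 938), (180, 860), (220, 935), (260, 942), (300, 930), (400, 925)]
print(round(updated_rating(example)))  # ~945; the 860 drops out, the 925 is over a year old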
bruce_brakel
Jul 15 2005, 08:52 PM
Which is just fine because rehab is a long way off. Maybe I can bring the whole family up to Mid Nationals next summer and play in the same division as Diana and Kira. Kelsey will be in the next division up!
ck34
Jul 15 2005, 08:56 PM
It's no cake walk either. The women have proven tough in those divisions.
I thought a standard par should have been used too. But now I am seeing things somewhat differently.
Using a fixed SSA is tough, because unlike ball golf, where the only change in a course is the grass height of the fairways and roughs, disc golf courses can grow in where there are heavily wooded fairways. Trees change and leaves fall, making mistakes less harsh, or worse.
At the BHMO the course looked awesome. If you played the original 18 now with the exact same conditions and placements, I would wager most scores would be 5 to 10 strokes different. The course is overgrown and a lot tougher on the finesse holes.
I do think we need weekly updates of the ratings. Maybe that and dropping off older scores faster would help.