robertsummers
Apr 06 2009, 10:34 AM
I am not sure, but after playing in the BG Ams this year I started thinking that ratings based on ams would be slightly skewed because of the bigger margin for improvement among ams, especially early in the year. A lot of players have few rounds rated between, say, October and the beginning of April, when there are so few tournaments, but most people still play or practice at least some during those months. Look at all the people toward the top who played 40-60 points above their rating all weekend, and then the people who played below their rating all weekend. Did the people who played below get worse over the winter, or is their rating being driven down by other players' improved play? Is there any way to test my theory, or am I just going to go to my deathbed without knowing? I thought about checking SSAs, but those can be changed by weather.

cgkdisc
Apr 06 2009, 11:12 AM
You're going to see that effect in large fields just from simple statistics that have nothing to do with improvement overall. Players in the 850-950 ratings range have standard deviations in the 30-45 point range. That means roughly 2/3 will shoot within plus/minus 30-45 points of their rating each round, while about 1/6 will shoot more than that above their rating and 1/6 that far below, with a few in every big field landing 60-90 points out. Certainly some players have improved and some are rusty, and we know that, overall, more ams are improving than getting worse. That's already taken into account in the ratings process.
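
As a rough illustration (this is not the PDGA formula, just the normal-distribution arithmetic, with an assumed per-round standard deviation of 40 points), you can simulate a player's rounds and watch the 2/3 and 1/6 splits fall out:

```python
# Toy simulation: round ratings drawn from a normal distribution
# around a hypothetical 900-rated player with an assumed SD of 40.
import random

random.seed(1)
RATING = 900   # hypothetical player rating
SD = 40        # assumed per-round standard deviation

rounds = [random.gauss(RATING, SD) for _ in range(100_000)]

within_1sd = sum(abs(r - RATING) <= SD for r in rounds) / len(rounds)
above_1sd = sum(r - RATING > SD for r in rounds) / len(rounds)
above_2sd = sum(r - RATING > 2 * SD for r in rounds) / len(rounds)

print(f"within +/-{SD} of rating: {within_1sd:.2f}")  # ~0.68, about 2/3
print(f"more than {SD} above:    {above_1sd:.2f}")    # ~0.16, about 1/6
print(f"more than {2*SD} above:    {above_2sd:.3f}")  # only ~2% land 80+ out
```

So even with nobody improving or declining, a large field will always show a handful of players 60-90 points off their rating in either direction.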

bruce_brakel
Apr 06 2009, 11:35 AM
Presumably, early in the year the average northern player has not improved as much over the past four months because he has not played as much. For some northern players, their skills have gotten worse from not playing. But their rating has not changed to reflect the degradation of their skills because they don't have the rounds in the database to reflect their current skill level. So those players will generate higher ratings than their skill level reflects.

Of course, in Texas and other states that are unbearably hot in the summer, players may play more in the winter, improve more in the winter, and be playing more above their rating than they do on average. And then when everyone gets together at Worlds, Bowling Green, the Memorial, etc., it all evens out.

I would not worry about it. Ratings will not be used to determine who gets food and water in a post-apocalyptic dystopian alternate universe you might wind up in due to a transporter accident. [I've been there; they've never heard of disc golf.] They are just used as a rough but reasonably fair way of sorting players by skill level. And even then, only a minority of the amateurs playing tournaments choose the division indicated by their skill level.

At the last tournament I played there were 17 players in Intermediate, but only 7 were intermediates. There were 36 players in Advanced but only 10 were advanced. For roughly 70% of the amateurs at that tournament, ratings were irrelevant to their divisional selection.

robertsummers
Apr 06 2009, 12:02 PM
I am a math teacher and have had college calculus and stats classes, so I understand what you're saying about the percentages that should and will fall within the standard bell curve: 68%, 95%, and the outliers. But as I'm sure you're also aware, bell curves can be skewed positively or negatively by other variables, and I am wondering whether the formula forces everybody onto the standard bell curve. If so, wouldn't that push some people's ratings down when the results didn't "fit" what the bell curve should look like? You have said yourself that half of the propagators are above and half are below, but what if more than half played better than their rating? Wouldn't that cause some sort of false skew in the ratings? I doubt it would change them much, if at all, but I am curious. I am not complaining; I am just off work this week, was bored, started looking at the ratings, and it led me down this path.

Dwiggy444
Apr 06 2009, 12:06 PM
I've also been wondering about the ratings from BG Ams this year. I agree with Bruce that it doesn't REALLY matter in the grand scheme of things, but it is a nice measuring stick. So...

I was wondering if the pooling of players by rating this year might have skewed the round ratings lower for everyone, and for the lower-pool players in particular. I played in Intermediate Pool D (Dwight Powell, PDGA #34800) and had 3 very solid rounds that I thought would be rated in the 930-960 range, and one very bad round, which I thought would be in the 850-880 range. I based my guesses on last year's round ratings on the same courses and SSAs. Unfortunately, according to the unofficial results, my rounds were rated 10-30 points lower than I had anticipated. I did a little comparing and my fears were confirmed: the players in the upper Intermediate pool received round ratings 10-30 points higher for the same scores at the same courses. And many of the really hot rounds in both pools seemed to be rated lower than expected.

So... I'm just trying to figure out how all this works. How does the pooling by ratings affect round ratings? How do unrated players affect things (three of the top five players in Intermediate were unrated)? I'm not a math whiz and all this talk of standard deviations hurts my brain, so if someone can draw me a picture, I'd really appreciate it. :)

cgkdisc
Apr 06 2009, 12:14 PM
There's no question that if a division has just 10 propagators and they all think they played better than "normal," they will still only get normal ratings. The question is: did all 10 actually play better, or was the course playing easier while they played at their normal level? That can't be easily determined, so we have to go with the assumption that, on average, the props are playing average.

Obviously, the more props in a round, the more likely their average is average. However, the PDGA has determined that customer service to provide ratings is more important than ratings precision so the minimum number of props is set as low as 5 so ratings get provided most of the time. It's interesting that the SSA values produced by a large versus a small number of props are still typically within 5% so it still seems to work pretty well.
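
The anchoring idea can be sketched in a few lines. To be clear, this is a deliberately simplified model, NOT the actual PDGA formula: it assumes round ratings are pinned so that the props' average round rating equals their average player rating, and it assumes a flat conversion of about 10 rating points per stroke (the real conversion varies with course SSA):

```python
# Minimal sketch of "props on average play average" anchoring.
# Assumptions (not the real PDGA math): linear stroke-to-points
# conversion, props' average round rating pinned to their average
# player rating.
POINTS_PER_STROKE = 10  # assumed conversion, varies with SSA in reality

def rate_rounds(prop_ratings, prop_scores, all_scores):
    """Rate every score against the propagators' averages."""
    avg_rating = sum(prop_ratings) / len(prop_ratings)
    avg_score = sum(prop_scores) / len(prop_scores)
    return [avg_rating + POINTS_PER_STROKE * (avg_score - s)
            for s in all_scores]

# Five 900-rated props who all shoot 54: a 54 rates exactly 900 under
# the anchoring assumption, even if all five actually played hot rounds.
ratings = rate_rounds([900] * 5, [54] * 5, [54, 51, 57])
print(ratings)  # [900.0, 930.0, 870.0]
```

This is why a small pool of props that collectively runs hot (or cold) drags everyone's round ratings with it until more data arrives.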

robertsummers
Apr 06 2009, 12:44 PM
And here is another strange thing I noticed when I looked over the numbers for BG. Pool C in the morning at Chalybeate got a 13-point higher rated round than Pool D did in the afternoon for the same score, and Pool C still got a 4-point higher rated round at White in the afternoon for the same score Pool D got in the morning. Shouldn't they have been about equal? And if it were a matter of courses playing easier at certain times of day, shouldn't each pool have come out higher at one course (whichever it played at the easier time), rather than one pool effectively rating a stroke and a half better than the other at both courses all day?

What this tells me mostly is that I really need to get a life and keep away from all numbers. ;)
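
My best guess at what's going on: if each pool's rounds are rated against that pool's own propagators, a gap like that falls out naturally. A toy sketch (again not the real formula; the 10-points-per-stroke conversion and the pool averages below are made up for illustration):

```python
# Toy model: each pool's round ratings are anchored to its own props.
# All numbers here are invented to illustrate the mechanism.
POINTS_PER_STROKE = 10  # assumed conversion

def round_rating(prop_avg_rating, prop_avg_score, score):
    """Rate one score against one pool's propagator averages."""
    return prop_avg_rating + POINTS_PER_STROKE * (prop_avg_score - score)

# Same 54 at the same course, rated within two pools whose props
# averaged slightly different scores against similar player ratings:
pool_c = round_rating(920, 55.0, 54)  # this pool's props scored worse
pool_d = round_rating(920, 53.7, 54)  # this pool's props scored better
print(round(pool_c - pool_d, 1))  # 13.0 points apart for the same score
```

A difference of barely more than a stroke in how each pool's props happened to play is enough to produce the gap, without either pool's course playing any differently.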

cgkdisc
Apr 06 2009, 01:00 PM
You can never look at BG Am event unofficial ratings and draw conclusions. With multiple pools, all of the numbers will change by the time the official ratings are calculated.

bruce_brakel
Apr 06 2009, 01:01 PM
And see here is another strange thing I noticed when I looked over the numbers for BG...

What I noticed was that 55% of the players in the Advanced and Intermediate divisions were eligible to play in a lower division.

Ratings provide two functions: they move along players who really have overstayed their welcome in the lower divisions, and they sell PDGA memberships and PDGA tournament participation. In regions where players use ratings to choose their division, they work just fine.

robertsummers
Apr 06 2009, 01:08 PM
And see here is another strange thing I noticed when I looked over the numbers for BG...

What I noticed was that 55% of the players in the Advanced and Intermediate divisions were eligible to play in a lower division.

Ratings provide two functions: they move along players who really have overstayed their welcome in the lower divisions, and they sell PDGA memberships and PDGA tournament participation. In regions where players use ratings to choose their division, they work just fine.


I know I pick a division based on the people I know in each division and on whether there are different course selections for each division, like at the BG Ams. I just like crunching numbers, and considering it was snowing here this morning (after I got a sunburn this weekend), there isn't much else to do. I was looking at the numbers and just trying to figure out a little more about how ratings are calculated.

bcary93
Apr 08 2009, 06:29 PM
... considering it was snowing here this morning...



Sounds like it's time to go play disc golf :D

vadiscgolf
Apr 08 2009, 09:05 PM
Last year the unofficial ratings were way off. Different courses were often entered as one, so multiple courses got rated at the same time; when one course could be deuce-or-die and another has an SSA of 54 for par, that doesn't work.

robertsummers
Apr 09 2009, 10:49 AM
They were split up this year, so you could see the unofficial ratings as soon as you got home, and a 54 at Griffin got a different rating than a 54 at White.