"In all cases, a player is not allowed to play in a division where the top end of the bracket is lower than the player�s current rating. The one exception to this is for local or regional series, where a TD can allow a player to play in the same division for one entire series, based on the player�s rating at the beginning of the series."
One issue I have with ratings is how to deal with non-PDGA members. I'm not so concerned with newbies, as we deal with them exactly the same way we did before. I'm more concerned about the numerous long-term players who, for whatever reason, don't join or don't renew their PDGA membership.
I understand that the PDGA is tracking ratings for non-members that play PDGA events, but not making them public. Presumably, this is done in the hopes that the non-member will join. This has the drawback of allowing sandbagging by not renewing. If a player has a rating above 925, but doesn't renew, he could hide out in the intermediate division, and nobody will be able to do anything about it (without a local bump rule).
Jim,
Your last statement isn't completely true: the TD can do something about it. Placing non-PDGA players (whether they were never members or are simply not current) into divisions is left to the TD: if the TD knows that this person is definitely an Advanced player, then he can state that the player will play Advanced. If the player doesn't want to play Advanced (or the division stipulated by the TD), then he/she doesn't play at that tournament.
Jason
ck34
Jan 15 2003, 03:26 PM
Nonmembers have no standing at PDGA events so TD has full control over how to handle them as long as the policy is consistent.
I sure would like to have objective evidence before forcing a player to play in a division that he/she doesn't want to play in.
ck34
Jan 15 2003, 06:51 PM
Jim, you stated that the potential problem is people with a "secret" high rating not renewing so they can bag in a lower division. Why should we be worried about "subjectively" forcing them without evidence to play in Pro let alone Advanced? These players have been described by you as attempting to take advantage of the system so why is it we should cut these players any slack?
If they are a non-PDGA member, why not just play them in a non-PDGA-member division? Put all non-members in one division (like the Rec division).
If they don't like it, then they can join and play in a more competitive division.
neonnoodle
Jan 16 2003, 12:38 PM
Because then we would be essentially boning the good PDGA members that are accurately in the Rec Division Mike.
My solution remains to gather all membership information from these non-members, collect their $5 fee, include them in our ratings, and make them associate members (no mag, card, invites to the Worlds, vote, or ACCESS TO THE MESSAGE BOARD). The alternative is to leave it to the TD to decide. Not foolproof, but considering that the PDGA wants as many participants as possible, member or not, at its events, it is a solution that meets the needs. At least immediately.
This is not something new here. TDs have been in this position since the dawn of all of the different age and gender am and pro divisions. Now, we can just be more confident that at least the PDGA members are playing in the right skill division. This is an improvement, no matter how you slice it.
Chuck, maybe they're doing it unintentionally, or maybe they're playing tourneys out of their local area and the TD doesn't realize it until too late.
If I can't prove that they're sandbagging, and I don't know their rating, it's not fair to force them up.
Case in point: our top two Intermediates from last year are not currently registered (at least they don't show up on the ratings page). I expect them to move up to Advanced; however, with the way divisions worked out here, very few Advanced players are actually over 925. These two might only be rated around 900-910 while the rest of the Intermediate division was hovering in the 870-890 range.
ck34
Jan 16 2003, 03:55 PM
They don't have any "proof" either. Your lack of proof trumps their lack of proof (nonmembers without ratings) meaning it's your call. You just said you knew these players and expected them to move up to Advanced. If that's where you think they should play, that's your right as TD. If they think they're not really at Advanced level, they can either prove it by joining and getting a rating below 925 or choose not to play. Of course, you can see how they play in Advanced at the event and determine which division seems right for the next event.
neonnoodle
Jan 16 2003, 03:56 PM
Jim, that's why you, as a TD, are there: to factor in all of those intangibles.
If you are really concerned about this, the Ratings Committee has a couple worksheets where you can track these players and determine an approximate Player Rating.
Certainly this will be less of a problem than it has been in the past with all PDGA members getting a PDGA Player Rating, right?
Chuck, I've long fought against bumping people up without some objective criteria. Now we have it, and I think it's great, except that it only applies to PDGA members. And if we do come up with objective criteria (say, winning two tourneys, or finishing in the top 5 at 5 tourneys), or even some subjective criteria (you're kicking [*****], I won't let you play in that division any more), that can't apply to PDGA members because as long as their rating is below 925, they're allowed to play Intermediate. So now we've got two separate sets of criteria that apply to two separate classes of players who are playing in the same division.
I know, Nick, I've got the spreadsheets, and I did quite a bit of tracking, but keeping it up is a lot of work. Yes, it's great for PDGA players, but many of our ams only play 2 or 3 tourneys a year, and don't bother joining.
ck34
Jan 16 2003, 05:33 PM
"...that can't apply to PDGA members because as long as their rating is below 925, they're allowed to play intermediate..."
That isn't totally true. I believe the PDGA will still support bump rules pertaining to events that are part of some local or regional series if that's what you want. In other words, if your bump rules require a player to move up according to some objective criteria that applies to everyone, member or nonmember, then they can be forced to play up even if their rating allows them to play in a lower division at PDGA events outside that series.
We have two Advanced players who were bumped to pro for our series this year. They plan to keep their Am status for Worlds for the next 6 months but will play pro in any of our PDGA sanctioned events in our state series. We have voted to allow Ams to play in our Pro division at 1/3 the regular pro entry so they don't get hosed on high entry fees without the chance for merch. They are happy to pay the roughly $12 entry fee so they can earn pro points toward 2004 and get experience to prepare for Am Worlds this year.
I like your idea best, Nick; that makes better sense. In addition, why don't we give them "non-PDGA member numbers" so the TDs can track the non-PDGA members at PDGA events?
drdisc
Jan 17 2003, 02:48 AM
Here is what I did in that situation once.
After two rounds, if the bagger's score put them at least 3/4 of the way up in the next division, I just moved their card over. Not much they can do about it. The scores show it all. And I did it for them at no extra charge. From then on, they were Advanced, and everyone knew it!
neonnoodle
Jan 17 2003, 10:31 AM
Until a full-proof plan is created, it simply has to be left to the discretion of the TD or a person she/he appoints. Having run and helped run many ratings-based events, I can tell you that this is not a major issue.
If it becomes a major issue, GREAT! Then we can fix it from actual data rather than hypotheticals. Either way the PDGA competitive system has been enhanced and will continue to be improved.
These are good challenges. Learning challenges.
pterodactyl
Jan 17 2003, 01:04 PM
That's fool-proof!
rhett
Jan 20 2003, 11:38 PM
I've got an easier solution for TDs: only PDGA members play in PDGA events. Period.
Is open really open if you limit participation to membership?
(a question asked of me by a pro athlete from another sport)
gnduke
Feb 12 2003, 11:02 AM
Since when is a division limited to membership ?
Some events are limited to membership (A-tiers, NT, and I think Majors), but I do not think the division is. I think it is the case that if you pay your $5 at a B-tier, you can play open.
as you mentioned not ALL events are "open" to everyone...
the professional athlete i was having a conversation with did not feel that a competitive bracket should be considered "open" when it is actually restricted...
i just found that perspective interesting, from someone who had never even heard of disc golf-- and that was his first response upon being told about how the PDGA competitive structure works...
chris
Feb 12 2003, 05:25 PM
It looks like for the Tower Ridge Open (5/12/02) tournament you guys calculated the ratings from the same SSA for both rounds (49 = 1024 & 53 = 986), which would make the SSA around 51.5, but that second round they moved the pins longer. I thought that the -1 was just as good as the previous -5 I shot. This is just FYI, and I realize that the TD probably didn't specify that he moved the pins longer the 2nd round. (IMO Tower Ridge from the long pins & long tees is the hardest Wisconsin course I've played.)
Psssst, Chris, nobody is supposed to know I sent you your detail.
When the round-by-round becomes public, there will be many such instances for us to track down. I'm not sure we're prepared for it, but we'll do the best we can.
It will also become apparent that accurate and timely reporting from TDs will be very important for ratings purposes.
Good thing your buddy isn't running that midwest tournament anymore!
discgolf6481
Feb 27 2003, 01:50 PM
Are non-sanctioned events included in the ratings calculation? It would seem obvious that they aren't, but just looking for confirmation.
Thanks
bruce_brakel
Feb 27 2003, 03:00 PM
They are not.
exczar
Feb 28 2003, 12:04 PM
Shannon Fosdick rules! (She) has the highest rating of any _FEMALE_ Pro player. Check out the ratings by division to see...
SWEET!! I knew it would come out sooner or later. I've always felt I was different....
Would you believe this is the FIRST time someone has assumed I was of the female persuasion.
seewhere
Feb 28 2003, 04:14 PM
With a last name like that, who would have thunk it... Hey Shannon honey, you playing Seawright 2-nite?
ck34
Oct 01 2003, 11:39 AM
OK, I'm going to try and explain what seems to be one of the toughest concepts to grasp regarding the ratings system. It would be a lot easier if I could show you graphs here, but that's not the case. The concept: "Why is each throw worth fewer rating points the tougher the course gets (based on Scratch Scoring Average, or SSA)?"
Let's consider Craig a scratch disc golfer and Chuck who averages 5 throws higher from the long tees on a local course. We're going to see how they do on some other "courses". The first course is amazingly short, in fact only 50 feet TOTAL for 18 holes. Just for grins, Chuck and Craig each stand three feet from the basket and drop 18 shots in the basket, each scoring 18. We would expect both to get this score every time. No one can tell which player is better.
Let's plot our results on a mental graph where the vertical axis is SCORE and the horizontal axis is COURSE LENGTH. Our first data point for both is a score of 18 plotted for a 50-ft course. Let's continue moving the guys back from the basket to say about 30 feet for a 500 ft course. Craig is probably a better putter and if they did this 100 times, let's say that Craig averages 24 and Chuck averages 26. We'll plot that.
We're going to continue this exercise with each guy progressively moving longer. When we get to 5814 feet, we know that Chuck will average 55.4 and Craig will average 50.4 on a course with average foliage. Why? This is the fundamental baseline for the ratings system. It happened to be the scoring average for what we defined as a scratch player (1000 rating) at Cincy Worlds where the course average was 50.4. Based on this system, Craig has a 1000 rating and Chuck has a 950 rating with each throw equaling 10 rating points for this course rating of 50.4. Note: Not every 5814' course will have a 50.4 SSA. That's strictly an average value used for this exercise.
Before going longer than 5814, let's figure out what has happened working up from 50 feet. I don't expect the graph line for either player to be exactly a straight line nor necessarily a smooth curve. For example, both Craig and Chuck might shoot 18s as they continue to move a little farther away until maybe Chuck misses one putt out of 100 sets of tries at some distance. The one thing I believe all of us would expect is that the average scores for each player will get progressively higher as the "course" distance increases. In other words, there's not a longer length where a player will average a lower score over 100 rounds than a shorter length. The average score for each player will ALWAYS be the same or higher as this virtual course gets longer.
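[Editor's note: a minimal sketch of the baseline case described above, assuming a simple linear rating on this one course (SSA 50.4, each throw worth 10 rating points, scratch = 1000). The function name and structure are illustrative, not the PDGA's actual implementation.]

```python
# Illustrative baseline only: a 50.4-SSA course where each throw is worth 10 rating points.
BASELINE_SSA = 50.4
BASELINE_POINTS_PER_THROW = 10.0

def baseline_round_rating(score):
    """Hypothetical helper: 1000 for shooting the SSA, +/- 10 points per throw under/over it."""
    return 1000.0 - (score - BASELINE_SSA) * BASELINE_POINTS_PER_THROW

print(baseline_round_rating(50.4))  # Craig's average -> 1000.0
print(baseline_round_rating(55.4))  # Chuck's average -> 950.0
```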
ck34
Oct 01 2003, 11:40 AM
So, we know the scores of each player get progressively higher as the course gets longer. We also know that the gap between their scores MUST also gradually increase until it reaches 5 throws at 5814 feet. The $100 question now is: "Would we expect their scores to continue getting progressively farther apart as the course gets longer?" It would be logical to presume that the gap between their scores would continue to widen. Of course, it's possible the gap could stabilize and essentially remain the same at some length, but for what reason?
If we accept that players with two levels of skill will have average scores that are closer together on shorter courses and farther apart on longer courses, we need a way to describe this mathematically. Everyone's rating is calculated in reference to a 1000-rated scratch player like Craig. We have two choices: either have one throw equal a fixed number of rating points, or have one throw equal a sliding scale of rating points depending on the course rating. In the first case, if we have one throw equal a fixed number of points, a player's rating would HAVE to vary depending on the difficulty of the course. Think about this for a second and imagine the nightmare for setting divisional rating breaks for events with courses of varying difficulty.
Of course, we chose the option where a player has the same rating regardless of the course difficulty. Thus the number of rating points per throw MUST vary depending on the length and/or SSA rating of the course. If Chuck is a 950 rated player, he will shoot scores closer to Craig on a short course and farther apart on a long course. However, their ratings are still 950 and 1000 - fifty points apart no matter what. Since the gap between their scores varies, the number of ratings points per throw must also vary.
Data so far indicates that Chuck will shoot 10 throws more than Craig on average on a course with a 67 SSA. So, instead of 10 rating points per throw on a 50.4 SSA course, each throw is worth only 5 rating points on the 67 SSA course. Now, we don't have a lot of data on high SSA courses over 60 because there aren't that many. We may discover that 950 players only shoot 9 throws more than 1000 rated players. However, I doubt we'll discover that 950 players still shoot only 5 throws worse just like they do on SSA 50.4 courses. So, no matter what, each throw will be worth less on higher SSA courses regardless of how much more data is gathered.
The factor that describes the changing gap between scratch player scores and everyone else's scores, as course difficulty changes, has been named the 'Compression' factor because on a graph the scores get closer together (compress) as a course gets shorter/easier. So far, it appears this same compression factor used for calculations as courses get shorter also works as the appropriate 'expansion' factor as courses get longer. There's no reason to believe it wouldn't work the same since the 50.4 baseline value for the original ratings settings is arbitrary. Maybe we'll discover this factor needs to be adjusted slightly with more high SSA course data. But there's little doubt it will be an expansion factor.
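[Editor's note: to make the sliding scale concrete, here is a rough sketch that straight-line interpolates points-per-throw between the two figures Chuck gives (10 points at SSA 50.4, about 5 points at SSA 67). The linear interpolation and the function names are an illustration under those assumptions, not the actual compression formula.]

```python
# Sliding-scale sketch: each throw is worth fewer rating points as the SSA rises.
# The two anchor values come from the post (SSA 50.4 -> 10 pts, SSA 67 -> ~5 pts);
# the straight line drawn between them is an assumption for illustration only.
def points_per_throw(ssa):
    low_ssa, low_pts = 50.4, 10.0
    high_ssa, high_pts = 67.0, 5.0
    slope = (high_pts - low_pts) / (high_ssa - low_ssa)
    return low_pts + slope * (ssa - low_ssa)

def round_rating(score, ssa):
    """Hypothetical round rating: 1000 for shooting the SSA, scaled by the course's points-per-throw."""
    return 1000.0 - (score - ssa) * points_per_throw(ssa)

# A 950-rated player averages 5 over SSA on the 50.4 course but about 10 over on the
# 67 SSA course, yet both rounds rate 950 because each throw is worth fewer points
# on the harder course.
print(round(round_rating(55.4, 50.4)))  # 950
print(round(round_rating(77.0, 67.0)))  # 950
```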
neonnoodle
Oct 01 2003, 12:48 PM
POW! Good job Chuck. I've been on the verge of completely understanding this for some time now. Thanks for pushing me over the edge.
sandalman
Oct 07 2003, 12:12 PM
hey, question about ratings... it says that ratings are the last 20 rounds or last two years, whichever comes first.
but something like the bottom 15% of the rounds are excluded.
so, does that mean that the top 85% of the last 20 rounds are used, ie 17 rounds... the top 85% of the last 23 rounds (19.55 rounds) or the last 24 rounds (20.4 rounds)???
ck34
Oct 07 2003, 01:27 PM
It's not your last 20 rounds if you have more than that in the 12 months prior to your last rated round. We include all rounds in the 12 months prior to the date of your last rated round before selecting the best 85% even if that's 60 rounds.
If you have fewer than 20 rounds within the 12 months prior to your last rated round, we go back up to 12 more months until we find 20, or use however many you have in that extended window if it's still fewer than 20. If we do find at least 20 by going back the additional 12 months, we will use only the best 17 (85%) of those rounds. It's possible the actual number of rounds in your 'Included' pool could be more than 20 if the 20th round falls in the middle of an event that has more than one round listed in the data file on the same date.
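[Editor's note: a rough sketch of the inclusion logic Chuck describes, assuming each rated round is stored as a (date, rating) pair. The 12-month windows and 20-round minimum come from the post; the data structure is illustrative and the same-date edge case is ignored here.]

```python
from datetime import timedelta

def included_rounds(rounds):
    """rounds: list of (date, round_rating) tuples. Returns the pool used for a rating,
    before the best 85% are selected. A hypothetical sketch, not the official code."""
    rounds = sorted(rounds, key=lambda r: r[0], reverse=True)  # newest first
    if not rounds:
        return []
    last = rounds[0][0]                                        # date of most recent rated round
    recent = [r for r in rounds if r[0] >= last - timedelta(days=365)]
    if len(recent) >= 20:
        return recent                                          # all rounds in the prior 12 months
    # Fewer than 20: reach back up to 12 more months, stopping once 20 rounds are found.
    older = [r for r in rounds
             if last - timedelta(days=730) <= r[0] < last - timedelta(days=365)]
    return recent + older[:20 - len(recent)]
```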
sandalman
Oct 07 2003, 01:48 PM
gotcha... thanks for the clarification.
gang4010
Nov 04 2003, 05:42 PM
Chuck, you've helped me figure out a part of what is wrong with the ratings formula. I have acquired the formula itself, and have been searching for an answer as to why the formula fails to work with higher SSA courses.
It seems to me that the baseline assumption is part of what affects this. Would it be possible to evaluate all courses (or some greater body of courses) that are similar to the original baseline courses, to provide a deeper sample of data on which the baseline assumption is based? My guess is that it should provide a better sample. But the bottom line is, if the formula fails on a linear progression at any point, then it just doesn't work. Saying that a formula is acceptable because it works for 90% of the courses falls into the category of using
gang4010
Nov 04 2003, 05:45 PM
oops :) into using subjective criteria in a mathematical formula - which mathematically is well........baaaaad. My example of why this is bad: last week the MADC had a round of 44 at Druid Hill in Baltimore rated at 1060 (10 under par), while a 10-under round at Winthrop Gold only rated around 1040. There is absolutely no way that this is an accurate reflection of the skill required to attain such a rating.
ck34
Nov 04 2003, 07:31 PM
It's not so much the baseline values but a lack of data from the high-end courses to justify changing the factor. However, the flurry of data from high SSA courses from none other than your MADC region has helped us get the compression factor adjusted prior to the next update in mid-December. I think you'll be happier with the results.
Scores like Ken's 57 on Friday at USDGC will be 1061 instead of 1048. And on Saturday, Schultz' 58 goes from 1048 to 1070 when the SSA was 3 shots higher. Another one you may remember, Brinster's 55 on max Patapsco goes from 1054 to 1065. We're also reviewing the low end SSA numbers around 42-44 to see if the below 50.4 factor needs adjusting.
The linear function doesn't break down, it just doesn't diverge at higher SSAs as much as the original value (as you suspected). But it's not a massive change. The adjustments are rarely more than 10-15 rating points per round on average for some players on SSA courses over 64.
Chuck, I can get you a bunch of data on a low SSA course if you want to evaluate it. Not PDGA rounds, but league play.
Let me know.
ck34
Nov 04 2003, 08:53 PM
Thanks, Jim. I think Rodney's on top of this one. More courses at this SSA level is more useful than more scores at one course, and we've got a bunch on file.
Just curious to know if there is a time schedule for increasing the frequency of ratings adjustments? :D
Chuck, will we have ratings updated more often next year? Monthly would be nice, don't you think?
ck34
Nov 05 2003, 09:25 PM
No plans to increase the frequency of ratings updates, for good reason. Even if we could process tourney reports instantly, it wouldn't be a good idea. More and more tourneys are doing advance registrations, and not everyone is on the internet. It's not good for player ratings to bounce around just due to statistical "noise". Communication is also a challenge in keeping all players up to speed. Plus, everyone keeping track of which weekend each update becomes effective would be more difficult.
However, the one thing I would like to see is getting a player's first rating published as soon as possible. We may consider faster updates behind the scenes and just publish interim updates for those players having their first rating.
As automated as the process is getting to input scores, coding the course configurations for each division into the database is still done manually. Theo plans to write a wizard to input the core info that's now in the TD report. Then, we'll just need some reviewing to catch problems. We hope to go even farther to flag suspicious numbers based on results outside expectation (new player shooting 1050 round or pro shooting 400).
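[Editor's note: a toy sketch of the kind of sanity check mentioned above; every threshold here is a made-up example, not a PDGA rule.]

```python
def is_suspicious(round_rating, player_rating=None, tolerance=100):
    """Flag results far outside expectation. The 1020/500 bounds for players without
    an established rating and the 100-point tolerance are purely hypothetical."""
    if player_rating is None:
        # A brand-new player posting an elite-level (or absurdly low) round gets a second look.
        return round_rating > 1020 or round_rating < 500
    return abs(round_rating - player_rating) > tolerance
```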
pjefferies
Nov 05 2003, 09:52 PM
Chuck, I hope this isn't a repeat of an old question, but I couldn't find an answer anywhere. Can you tell me the reason for using a time-based (1 year) average of round ratings instead of one based on a number of rounds (similar to the USGA) when calculating player ratings? The time-based way seems to add so many rounds for frequent competitors that it lags when compared to less frequent players.
ck34
Nov 05 2003, 10:22 PM
The USGA handicap does not have the same intent as the PDGA rating. By definition, the USGA claims to measure a player's "potential skill level" by using only 10 of the most recent 20 rounds. Statistically, players can match or beat their handicap only one in four rounds. The PDGA Rating attempts to portray a player's "actual skill level" within their most recent 12 months if they have at least 20 rounds.
Although one would think we might use all rounds, we drop the bottom 15% so TDs don't constantly need to flag inappropriate rounds that shouldn't be included (e.g., five 7s for showing up late or other penalties not related to throwing). Note: We don't drop any rounds if you have fewer than 10.
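[Editor's note: a hedged sketch of that trimming rule: drop the worst ~15% of the included round ratings (none at all when there are fewer than 10) and average the rest. Rounding and any further weighting the PDGA may apply are not shown.]

```python
import math

def player_rating(round_ratings):
    """Average the best ~85% of a player's included round ratings; drop nothing
    if there are fewer than 10 rounds. Illustrative only."""
    ratings = sorted(round_ratings, reverse=True)      # best rounds first
    if len(ratings) >= 10:
        keep = math.ceil(len(ratings) * 0.85)          # e.g. 20 included rounds -> best 17 kept
        ratings = ratings[:keep]
    return sum(ratings) / len(ratings)
```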
The PDGA competitive season is one year, which is one reason for using a player's most recent 12 months of competitive rounds. The average number of competitive rounds among PDGA members who played at least one event is only 13, so even one year doesn't include 20 rounds for more than half of our members.
One thing we're looking into is a special rating just for fun which would be for active players that more heavily weights recent rounds versus earlier rounds in the year. You may see something like this next year at some point.
james_mccaine
Nov 06 2003, 09:56 AM
The special rating to weight the most recent performance is a nice idea. If you only applied it to those with a high variance in their recent performances and used their special ratings (instead of their regular rating) in the event analysis, my suspicion is that it would be slightly more "accurate."
When I say "accurate," I understand it is subjective, but my test of accuracy is simply: If you had to use the ratings to bet your own money, which data would you use. Anyways, enough rambling, but glad to see y'all continue to try to improve the system
ck34
Nov 06 2003, 10:25 AM
Once we see how the special weighted rating looks, there's a possibility it could be used officially down the road either for internal SSA calculations or even to determine divisions. However, one feature of the current system that has probably been missed or overlooked by many players and TDs is the local tour option. If specified in advance, a local series of events can allow a player to remain in the same division in all series events for that calendar year regardless how high the player's rating goes. It's kind of a reverse bump rule. Of course, the player would still have to play up when they play events not in that series.
pjefferies
Nov 06 2003, 10:28 PM
Is it not true that the USGA measures "potential skill level" because they take the best 10 of the latest 20 scores/differentials before averaging? If not, I'm missing the point of why a time-based average vs. a rounds-based average creates the difference between "potential" and "actual" skill. Is this an advanced statistical concept?
Just another player 56 days from being Advanced, with more "potential" than "actual" skill.
ck34
Nov 06 2003, 11:02 PM
The potential versus actual comments are supplemental to your specific question on using a fixed number of rounds versus a fixed time period. We actually use a hybrid, since we use as many rounds as are available in the 12 months prior to each player's most recent round. It's not the same as pure time because it's individualized for each player. It's not the 12 months prior to the rating update date for everyone, which would be pure time.
For players with fewer than 20 in their prior 12 months, we go back up to 12 more months or to 20 rounds, whichever comes first. So, if they have 20 within 24 months, those players will indeed have a rating based on a pure number of rounds. If still fewer than 20, then their rating is based on their personal time frame (but not pure time).
We felt the problem with using a fixed number of rounds is that in theory we would still be averaging data for some players from last week with data five years old and there might have been a gap of three years in there with no rounds. So, we would still have needed some time limit for how long we would go back to find the needed number of rounds.
If we went with a fixed time period say just the 12 months prior to the rating update, there would be many rounds excluded and players ending up without ratings or ratings based on perhaps half as many rounds as they have now.
There are adjustments needed with any approach and we feel comfortable with what we've got so far. We're excited about the time weighted option which will be helpful for our most active players, especially the up-and-comers.