go18under
Nov 11 2010, 09:00 AM
Hi Chuck....is this an accurate statement...."If you play with a higher rated field, your rating will go up"

Basically, we played a tournament recently where Pros and Ams were split up between 2 pools. On the same course, same day, same conditions, a pro's 48 in the morning was rated 20 points higher than the 48 an intermediate player put up in the afternoon Am pool.

How is this an accurate rating system if it does that? Only difference is Ams played in the afternoon, and the Pros played in the morning....

How do ams expect to get their rating up?

Doesn't this encourage sandbagging? All they have to do is play in am fields to keep their ratings low....even if they play well?

I think the rating system needs tweaking...

Rec 860 and under.....Int 900 and under.....advanced 940 and under.....

maybe add a semi-pro division 940-970 to encourage advanced players to test the waters without losing their official am status? Let them choose to take cash or prizes....

cgkdisc
Nov 11 2010, 09:09 AM
It is not true that playing in a higher rated field produces higher ratings. It is true that unofficial ratings will vary from round to round and pool to pool. The variance you are talking about was only 4% and those were only unofficial ratings. But those rounds will be combined so that everyone gets the same rating for the same score once the official ratings are done.
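A toy model may help illustrate why the pooled ratings converge. This is NOT the PDGA's actual formula; it assumes a simplified rating of 1000 minus 10 points per stroke off the course's scratch scoring average (SSA), and all SSA values here are hypothetical:

```python
# Simplified sketch of round ratings -- NOT the PDGA's real formula.
# Assumes 10 rating points per stroke and hypothetical SSA estimates.

POINTS_PER_STROKE = 10

def round_rating(score, ssa):
    """Rating for a score, given the scratch scoring average (SSA)."""
    return 1000 - POINTS_PER_STROKE * (score - ssa)

# Each pool's unofficial SSA is estimated from its own field of
# propagators, so the same score can rate differently per pool.
unofficial_pro = round_rating(48, ssa=50.0)  # morning pool estimate
unofficial_am = round_rating(48, ssa=48.0)   # afternoon pool estimate

# For official ratings the pools are combined into one SSA, so the
# same score earns the same rating regardless of pool.
official = round_rating(48, ssa=49.0)
```

Under these assumed numbers the unofficial gap is exactly the 20 points described above, and it vanishes once a single combined SSA is used.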

go18under
Nov 11 2010, 09:17 AM
Also....let's get rid of the Novice division while we are at it......very few tournaments offer it....and the word Novice basically means the same thing as Recreational.....at least that's the perception.

I would think that an amateur division, semi pro, and pro divisions would be plenty, but it's just my opinion...

I see players with 3-4 years of experience who are legit 920-940 level, still playing Rec because their rating is still under 900....

go18under
Nov 11 2010, 09:21 AM
It is not true that playing in a higher rated field produces higher ratings. It is true that unofficial ratings will vary from round to round and pool to pool. The variance you are talking about was only 4% and those were only unofficial ratings. But those rounds will be combined so that everyone gets the same rating for the same score once the official ratings are done.

20 points is a lot, especially when players are trying to build up their ratings. I will wait and see if these ratings change, but I know that some past results in similar situations haven't changed once they became official.

Thanks, I appreciate your hard work, just trying to add some feedback

davidsauls
Nov 11 2010, 10:13 AM
I see players that have 3-4 years experience and are legit 920-940 level, still playing rec because their rating is still under 900....

You lost me there. If their rating is under 900, what makes them "legit 940"?

I'm particularly curious because I've been playing 15 years and with the next ratings update, I'll be under 900.

cgkdisc
Nov 11 2010, 10:32 AM
Novice is where the growth of our sport comes from. In your own state, Lexington has been a hotbed of Rec & Novice tournament play. In addition, the IOS series out of Illinois has been highly successful serving players in all amateur divisions. TDs in other areas looking for better turnout would do well to adopt the approach these TDs have taken.

http://www.pdga.com/tournament-results?TournID=9851#Recreational

http://www.pdga.com/tournament-results?TournID=10853#Novice

discette
Nov 11 2010, 01:12 PM
I have been playing for 14 years and I have never once been rated over 900!:eek:

suemac
Nov 11 2010, 02:41 PM
It is funny how lopsided the ladies' breakpoints are... to say that girls rated 745 are Rec and playing with new girls with 650-ish ratings... it's not right, and it discourages ladies from competing.

cgkdisc
Nov 11 2010, 03:08 PM
There really isn't a great way to do breaks for women, partly because the range of their skill levels is wider than the men's and there aren't enough women players to make more refined break points. No matter how you set the breaks, it will be unfair for a few of them.

Jeff_LaG
Nov 11 2010, 04:56 PM
When the new online unofficial ratings calculator goes live, and unofficial ratings track far closer to the eventual official ratings, a lot of these "differences" and questioning will hopefully subside.

Yeti
Nov 11 2010, 06:40 PM
It is not true that playing in a higher rated field produces higher ratings. It is true that unofficial ratings will vary from round to round and pool to pool. The variance you are talking about was only 4% and those were only unofficial ratings. But those rounds will be combined so that everyone gets the same rating for the same score once the official ratings are done.

The variance has more to do with Ams who are vastly improving, so their ratings have not caught up with them yet. It took a good year and a half before my rating caught up with me when I was fast improving. It can also be folks who don't play a lot and have less-than-accurate ratings. Almost everyone can be a propagator these days. These can all help jack the ratings around.

Another problem with smaller local tournaments on shorter courses is that many locals have the course dialed in. If they have low ratings and shoot as well as the high-rated out-of-town pros, the ratings come out lower.

I think Chuck is incorrect on the 4% variance due to the true number of shots that create the variable. Assuming the course is shorter and rating points are at 10 per stroke, he is getting 4% from the 20-rating-point, 2-stroke difference between the two 48s. Barring any very-low-percentage aces, in the end all players will have shot a minimum of 36 strokes on the 18 holes. So anything above that is where the true variance comes in: a difference of 2 strokes (20 points) out of 48 - 36 = 12. This is more like a 17% variance and a much more serious error.

Why did this 20-point swing happen in the first place? It can't be the "higher rated field equals higher ratings" theory.
Keep averaging everything together and you get the average rating system. That is exactly what we have. :D
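Yeti's two denominators can be made explicit in a quick sketch (the 10-points-per-stroke figure is his assumption from the post, not an official constant):

```python
# The same 2-stroke / 20-point gap expressed two ways.
score = 48
rating_gap = 20
points_per_stroke = 10                         # assumed in the post
stroke_gap = rating_gap / points_per_stroke    # 2.0 strokes

# Chuck's denominator: the full score.
variance_vs_score = stroke_gap / score         # 2/48, about 4%

# Yeti's denominator: only the strokes above the 18-hole floor of 36
# (two throws per hole, barring aces).
floor_strokes = 36
variance_vs_surplus = stroke_gap / (score - floor_strokes)  # 2/12, about 17%
```

The disagreement is purely about which denominator is meaningful; the raw gap is the same 2 strokes either way.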

furniss
Nov 11 2010, 06:44 PM
Also, don't they expect you to do better the second time on the same course? Say you get a 5-down in the morning and play the exact same layout and everything; would you have to get a 6-down to have the same rating?

cgkdisc
Nov 11 2010, 07:01 PM
Fast improving Ams are a small factor in the calculation. Maybe five Ams are consistently playing above their current rating by 10 points. You can't really tell because they play better or worse that round like everyone else. That might be one rating point in a pool of 50 players.

The homeboy effect is usually accounted for in the ratings. Locals who mostly play locally have higher ratings than they might if they traveled. So, that effect is accounted for to prevent depressing the SSA.

The percentage variance is truly the number of throws divided by SSA which is about 2% per 10 ratings points and gets even lower as the SSA increases. We will never be able to reduce this variance between rounds no matter how the system is tweaked because humans are throwing the scores and the course conditions do vary some from round to round.

If you try to draw a firm conclusion on how good a player is from a single round, you would likely be off a little to a lot regardless whether you looked at their score or their rating for that round. A baseball player goes 4 for 5 one night and 0 for 4 the next night. How good is that batter? It's a numbers game like any other sport's performance statistics. The more numbers the better.
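Chuck's per-stroke percentage can be sketched by reading "number of throws divided by SSA" as the stroke difference over the SSA (my interpretation of his loosely stated formula; the SSA values below are just examples):

```python
# One stroke as a fraction of the round, for courses of different SSA.

def pct_per_stroke(ssa, strokes=1):
    """Fraction of a round that a given stroke difference represents."""
    return strokes / ssa

deuce_or_die = pct_per_stroke(48)   # about 2.1% per stroke (~10 rating points)
par72_style = pct_per_stroke(72)    # about 1.4%, lower as SSA increases
```

This matches the claim that the variance sits near 2% per 10 rating points and shrinks on higher-SSA courses.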

cgkdisc
Nov 11 2010, 07:04 PM
Also, don't they expect you to do better the second time on the same course? Say you get a 5-down in the morning and play the exact same layout and everything; would you have to get a 6-down to have the same rating?

Surprisingly, there doesn't seem to be any "learning curve" in scoring on the same course. We've looked and the scores are worse in the second round by the same pool of players about as often as being better with the most common result being about the same each round. In fact, this supports the concept that players perform within a predictable statistical range over the short term. That's why the ratings process works pretty well overall. If second rounds were consistently better, then some adjustment factor would need to be added.

bruce_brakel
Nov 12 2010, 01:04 AM
It is not true that playing in a higher rated field produces higher ratings. It is true that unofficial ratings will vary from round to round and pool to pool. The variance you are talking about was only 4% and those were only unofficial ratings. But those rounds will be combined so that everyone gets the same rating for the same score once the official ratings are done.

No, it is true. It just won't work out that way at this tournament because you have a lower rated pool to combine the ratings with. Take any tournament where the lower rated players play short tees and the higher rated players play long tees, or any tournament where the higher rated players play different courses or on a different weekend, and that effect will be there and not be adjusted.

If a lower rated amateur wants to jigger their rating down, they can choose to play more tournaments where they will not have all rounds combined with Open and Advanced players, and fewer tournaments where they will have any rounds combined with the higher rated players.

And, since 935 (Advanced) minus 4% is 898 (Recreational), 4% would be a huge amount of jiggering for a player like me, always stuck in the bottom of advanced.

Hmmmm.

;)

cgkdisc
Nov 12 2010, 09:10 AM
We'll know soon. Roger is pulling data from all events in 2009-2010 where two different pools played the same course. There's already an adjustment factor in there from the last time we looked. If for some reason the factor is not enough, we'll adjust it again.

davidsauls
Nov 12 2010, 01:05 PM
Surprisingly, there doesn't seem to be any "learning curve" in scoring on the same course. We've looked and the scores are worse in the second round by the same pool of players about as often as being better with the most common result being about the same each round.

Heck, I'm consistently worse in the second round. Any learning I may do is offset by being out of shape and fading as the throws pile up. Maybe it's all my fault.

JHBlader86
Nov 12 2010, 07:15 PM
I'm not sure how exactly they need to be tweaked but I know something indeed does need to change. I remember playing in a tournament about 3 years ago, and I shot a -9 during the round. It was rated in the 960's. The previous year, Dean Tannock shot a -9 on the same course, same layout, yet his was 1000+ rated. How in one year does the round rating change nearly 40 points when playing the same course, same layout?

I agree we need a Semi-Pro division, and get rid of the name Rec and make that Novice. Rec sounds lower than Novice IMO. It would be great to see Novice, Intermediate, Advanced, Semi Pro, Open/Pro. Semi Pro would be able to accept a minimum amount of cash. About 50% of their fees would go out to them, and they'd still retain Am status.

krupicka
Nov 12 2010, 07:36 PM
JD, other than the names for the divisions, your proposal is already done. What you are doing is calling Recreational, Intermediate; Intermediate, Advanced; and Advanced, Semi-Pro. Players can already get their winnings in cash at 50%: it's called taking your merch winnings and selling them back to the merch guy at 50 cents on the dollar. I'm sure Bruce will chime in, but most players don't want 50 cents on the dollar. They'd rather have plastic.

JHBlader86
Nov 12 2010, 07:46 PM
JD, other than the names for the divisions, your proposal is already done. What you are doing is calling Recreational, Intermediate; Intermediate, Advanced; and Advanced, Semi-Pro. Players can already get their winnings in cash at 50%: it's called taking your merch winnings and selling them back to the merch guy at 50 cents on the dollar. I'm sure Bruce will chime in, but most players don't want 50 cents on the dollar. They'd rather have plastic.

I think players would rather have money. IMO, having the Semi-Pro division would actually help grow the Pro/Open division, because once a Semi-Pro gets the taste of cash he or she will want more. If I was a Semi-Pro and won, and my winnings were $100, but had I played Open I would have won $250 or $300, that would encourage the Semi-Pros to move up, which in turn would grow the pro purses and the pro division, and start the trend of not relying on Ams to pay for everything. Granted, it's all in theory.

sammyshaheen
Nov 12 2010, 08:58 PM
Semi pro could be like the minor leagues. Smaller entry fee and lower payouts. Ratings based for anyone up to 970 or so. Part of their fee could even go into the open pool to keep people from bagging and encourage people to move up.

Paying Ams in plastic only really supports the disc manufacturers. Why? They have the sport in a corner if you ask me. Playing for prizes is just not that fun.

JHBlader86
Nov 12 2010, 09:15 PM
Semi pro could be like the minor leagues. Smaller entry fee and lower payouts. Ratings based for anyone up to 970 or so. Part of their fee could even go into the open pool to keep people from bagging and encourage people to move up.

Paying Ams in plastic only really supports the disc manufacturers. Why? They have the sport in a corner if you ask me. Playing for prizes is just not that fun.

I'd say 970 is where Semi-Pro should begin since that's the cutoff anyway for being able to accept cash and stay Am. Entry fees should be about $10-$20 less than Open, so say you have 10 SPs paying $80. Half would still go back to them, the other half going to the Open.

johnbiscoe
Nov 13 2010, 07:05 PM
Maybe five Ams are consistently playing above their current rating by 10 points.

absolutely bs.

gdstour
Nov 13 2010, 07:45 PM
Biting my tongue hurts :).

My biggest complaint about ratings is on tougher par-72 style courses, where a stroke is only worth 4.5 points, compared to 13 points on a deuce-or-die par-54 course where top players (or really good putters) shoot low 40s.
The points per stroke should have another factor, like how much scoring spread there is per hole or per round. Some courses have scores where the majority are really bunched together (say 69-75), while others may have a much wider range (say 64-80), even though both may average 72.
It's my opinion that the course with the TIGHTER grouping should have its points be worth more per stroke: it's harder to put separation on the field there, therefore the strokes should be worth more points each.

To me a really good course is one with par 3s, 4s and 5s that requires a complete game.
This type of course will usually have a lot of scoring spread per hole but a tighter range of total scores. Courses like this get penalized with 4.5 points per stroke, and ratings are usually way,,,,,,,WAY,,,,,, lower than they are on easier courses.

I think the same can be said about the ratings of the players at the event.
If you play with an average field of, say, 970, your round rating is going to be a lot lower than if the average were 1010.

It seems to me that a piece of the formula is missing or something is not proportioned properly.
I really expected the ratings standards to continue to improve and become more accurate by now.


Drifting from ratings to rankings now......


After watching the peaks and dips of players rated 1020 and above the last couple of years, it's obvious the factors used are TOO sensitive to which year-old event ratings fall off, combined with how hot their last 8 rounds were. If a player plays a few consecutive events on really hard courses with weak fields, his rating will certainly drop even though he may be playing better than ever.

On the contrary, if a player pops off a few big events right before the cut-off, let's say on his home courses, his rating will jump up, especially if he loses year-old round ratings from an event where the ratings were low (like they typically are on harder courses or at events with weaker fields).

Once a player reaches a 1000 rating (or 955, whatever), the ratings should stop and they should officially become ranked nationally.
The ranking should be based more on head-to-head results and finishes in the larger events, maybe even including earnings.

(Can anyone provide the list of what goes into the PGA rankings?
I've been miffed at how long Tiger Woods held onto the #1 spot.)


I personally feel a player's ranking (not rating) should also drop off from non-participation.
If you're not playing with the big boys, how can you continue to be highly ranked?
At times I see a player ranked near the top who doesn't really play often, or at all, in the larger events.


Ratings can be a good barometer, especially for everyone under 1000, but rankings create much more competition among those that are ranked or, even more important, "want to be"!!!

Back to biting tongue :(

bruce_brakel
Nov 14 2010, 02:22 AM
When I say anything critical of the ratings system, I like to add that with whatever flaws it may have, it is far, far, far better than what we had before ratings. At least three fars, maybe five. :D

But I thought about this a lot over the last few days and I cannot see anyone intentionally playing only tournaments where they can work the system for a lower rating. Crappy ratings are just not what any of us play for. Do I want to play all of my tournaments at Hudson Mills? Gack.

As to giving Ams the option to cash at 50%, I did that for years and only three or four players repeatedly took advantage of it, but many players would when they were short of cash. I did not merely give them that opportunity at Jon's tournaments, but at ANY tournaments, regardless of where they were. I was buying merch from Innova and Discraft at 50%. I saw no reason not to buy it from the players everywhere at the same price. Usually, with the players, I could get free shipping and not pay sales tax, so there was no downside from my perspective.

I'm out of the merch man business, so don't send me e-mails wanting me to buy your stuff. Honestly, I don't know why other merch guys don't do this. It's not about being greedy. It must just be stupidity. Do you think you don't get more players when they know they can show up with four star plastic discs and no cash and sell them for wholesale and get in the tournament? :cool:

arhunt
Nov 14 2010, 01:45 PM
David,

I agree the rankings system needs to be changed, and should matter to the top players much more than rating.

I finished 12th at USDGC and I'm not in the PDGA rankings, because I'm not 1000 rated yet. Ridiculous.

Adam

cgkdisc
Nov 14 2010, 01:49 PM
Both ratings and rankings are based on consistent performance over a year, not one good event.

arhunt
Nov 14 2010, 11:25 PM
Chuck,

Clearly, rankings are based on a year, not a single event.

I finished 29th in NT points, USDGC was not my only good tournament, although it was my best. I'm not asking to be ranked in the top 20, but I should be on the map, somewhere in the top 150.

I'm surprised that the PDGA hasn't adopted a ranking system based on points, like tennis or golf. That method makes much more sense to me.

davidsauls
Nov 15 2010, 08:24 AM
Paying ones in plastic only really supports the
disc manufactures. Why? They have the sport in
a corner if you ask me. Playing for prizes is
just not that fun.

Not exactly. Playing Ams in plastic significantly underwrites the whole tournament structure. Virtually anywhere you have "cash added" to pro divisions, some or all of it is directly or indirectly from the margin on merchandise. Even if it is shown as coming from sponsors---the margin on merch for Ams is covering tournament costs, that would otherwise come out of sponsorship money.

My experience, and I'll bet that of a lot of TDs, is that while playing for prizes is "just not that fun" to you, it apparently is for a lot of people.

cgkdisc
Nov 15 2010, 09:38 AM
I'm surprised that the PDGA hasn't adopted a ranking system based on points, like tennis or golf. That method makes much more sense to me.

There are several factors that may make sense to include in a ranking system if our top players actually faced each other as often as they do in other individual sports. As it is right now, not a single player played all five Majors in the past year and only four men played four of the five. And ten men and three women played just three of the five Majors. On the National Tour only 27 men played the minimum four events required to be included in the World Rankings and only 12 women played the three minimum.

Until enough of our men and women truly play a world tour, our World Rankings have to rely heavily on ratings to have any credibility. This is a sport where it's the player against the course and not other players except the few match play events. In which case, ratings are still the best measure of a player's performance plus a dose of performances in Majors among those who play them.

gdstour
Nov 15 2010, 11:19 AM
Chuck, can you comment on my speculation that players who play harder courses with fewer top-rated players will consistently receive lower round ratings than those who play deuce-or-die courses against higher rated players?

Also, shouldn't the points per stroke be higher for courses with a tighter scoring spread than for those with a wider range, instead of the other way around??


I agree that the ratings are much better than not having any at all, but isn't it possible that they need to be improved a bit?

It's like the current rating system is carved in stone, and when anyone questions it, Chuck has a huge resistance to ANY change.


From Adam's comments, it appears he has improved much faster than the ratings reflect.
It doesn't happen a lot, but if a guy starts playing really well, his rating or ranking should reflect it sooner than it will in the current formula.

Same goes for guys who start playing poorly, and there should also be some sort of deduction for non-participation.

A rating system or ranking system without the capability to produce a more current reflection of what's going on TODAY seems lacking and therefore in need of improvement.

Yeti
Nov 15 2010, 12:10 PM
This is a sport where it's the player against the course and not other players except the few match play events.

Except for ratings, which is player versus whoever showed up to play the event, what their rating is, and whether it is truly accurate for their current playing level.

Great post Dave Mac, and absolutely yes to Bruce. This system is the best we have, and Chuck, Roger and team have done a great job thus far. I would like to see this thread continue to poke holes in the ratings so they may improve. Chuck has an entire three-ring brain binder full of stock answers to repel most of the inquiries. Most are correct, as he is asked the same questions over and over, but I think the whole ratings team knows, or should know, there are many areas that need some improvement. I am glad to hear they are on the path to making the ratings the best they can be.

There is a list of courses in Texas that are notorious for low ratings, and most of it has to do with the ratings' ineffectiveness in dealing with great golf holes with par 4s and 5s. I think Dave's post has something to do with this and the SSA values assigned compared to rating points per stroke.

cgkdisc
Nov 15 2010, 12:11 PM
DM: Chuck can you make a comment on my speculation that players who play harder courses with less top rated players will consistently receive lower round ratings than those who play deuce or die courses against higher rated players?

CK: Not true. The range of high to low ratings narrows the higher the SSA but statistically, players at all rating levels have the same odds of averaging their rating no matter whether the SSA of the course is 42 or 72. The best chance of shooting your best or worst rating is on a course in the 48-54 SSA range. A high SSA course will likely never deliver your highest nor your lowest round rating.

DM: Also, shouldn't the points per throw be higher for the courses with a tighter scoring spread over those with a wider range instead of the other way around??

CK: Nope. The more throws on course, the closer players shoot scores near their ratings and they will bunch up. For example, two lifetime .300 hitters early in the baseball season might go 1 for 5 (.200) and 4 for 5 (.800) in the same game. But for two games, these hitters might go 3 for 10 (.300) and 4 for 10 (.400). More At-Bats, closer spread. Note also that one hit in one game is worth (.200) but for two games (think longer DG course) each hit is only worth (.100).

DM: I agree that the ratings are much better than not having an at all, but isn't it possible that they need to be improved a bit?

CK: The one thing we can't improve is the fact that humans are making the throws and not robots. Robots would tighten up the numbers.

DM: From Adams comments, it appears he has improved much faster than the ratings reflect. It doesn't happen a lot but, if a guy starts playing really well, his rating or ranking should reflect it sooner than what it will in the current formula.

CK: One year of performance is the minimum unit of comparison for any type of individual sports ranking system. It's not the stock market. Players don't get better or worse that fast.

DM: Same goes for guys who start playing poorly, and there should also be some sort of deduction for non participation.

CK: The World Rankings does include penalty points for not playing in enough Major events or NTs. In earlier rankings, Jesper has had one of the top ratings and yet was sometimes below 10th ranked in the Worlds because his job and family obligations prevented him from traveling. Masters like Phil Arthur with high ratings get penalized because they played Masters at Worlds rather than Open.

DM: A rating system or ranking system without the capability to produce a more current reflection of whats going on TODAY seems lacking and therefore in need of improvement.

CK: There's very little lag in the World Rankings compared to other sports, and in fact more lag should maybe be added after the big shuffle simply due to the abnormal USDGC format this year.
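Chuck's batting-average analogy above can be put in numbers. For a true .300 hitter, the spread of the observed average shrinks with the square root of the number of at-bats, and each hit moves the average less the more at-bats there are (a statistics sketch, not PDGA math):

```python
import math

def observed_spread(p, n):
    """Std. deviation of an observed average for true rate p over n trials."""
    return math.sqrt(p * (1 - p) / n)

one_game = observed_spread(0.300, 5)       # wide: ~.205 around .300
two_games = observed_spread(0.300, 10)     # narrower: ~.145
full_season = observed_spread(0.300, 500)  # ~.020 -- averages bunch up

# Value of a single hit, analogous to rating points per stroke:
hit_value_one_game = 1 / 5     # .200
hit_value_two_games = 1 / 10   # .100 -- longer "course", cheaper stroke
```

More at-bats (or more throws on a longer course) means tighter bunching and a smaller value per hit (or per stroke), which is exactly the pattern described in the exchange.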

keithjohnson
Nov 15 2010, 12:31 PM
Chuck has an entire three ring brain binder full of stock answers to repel most of the inquiries. Most are correct as he is asked the same questions over and over

Chuck replaced the 3 ring binder in 2010 with this after all the questions being asked :)

http://convergence.ucsb.edu/files/articles/building-better-buildings/big-blue-server.jpg

cgkdisc
Nov 15 2010, 12:33 PM
Chuck replaced the 3 ring binder in 2010 with this after all the questions being asked :) http://convergence.ucsb.edu/files/ar...lue-server.jpg

There's no doubt there are sections of my mind that are "blue" but not with ratings data...

keithjohnson
Nov 15 2010, 12:37 PM
:) :)

Yeti
Nov 15 2010, 12:54 PM
Fast improving Ams are a small factor in the calculation. Maybe five Ams are consistently playing above their current rating by 10 points.
Here is a small sample from one recent tournament: the top five Advanced players, all fast improving Ams. I would have to say they are playing above their ratings consistently, looking at the forward progress. Again, this is one set of Ams in one region. I can't imagine how the vast improvement of some of the Int and Rec players must affect the propagating and ratings in the events they play.

Name      Current Rating   Ratings Improvement over 2010        Avg for Event
Mando F   953              903 -> 922 -> 948 -> 953             989
Chris V   964              Steady 960s for the year             976
Matt K    939              924 -> 932 -> 936 -> 939             957
Bird      945              922 -> 928 -> 932 -> 937 -> 945      954
Adam H    948              932 -> 936 -> 939 -> 945 -> 948      952

pterodactyl
Nov 15 2010, 01:59 PM
I would like to see some snapching ratings. :)

Just kidding on that. I would actually like to see an increase of ratings points for victories.

cgkdisc
Nov 15 2010, 02:03 PM
Since the Biscoe "slam" post above regarding the number of fast improving Ams, I spent the weekend analyzing the improvements and decreases over one year from Oct 2009 to Oct 2010 of all 5479 PDGA propagators. I'll be publishing the results in a story within the next few weeks. There were 500 props whose rating improved an average of 40 points in one year. If we assume a lag of 3 months on ratings, that's a 10-point lag for about 9% of the props. Now that's assuming they are the only ones at an event.

The average for all props below a 900 rating (1487 players) is only 10.6 points improvement over one year. Let's be liberal and say their lag is 3 points. So, on average, a pool of props all with ratings under 900 might be under-rated by 3-5 points which would depress the SSA they produce by 0.3-0.5 compared with a pool of 930+ players. However, the adjustment factor we have in the SSA formula boosts the SSA by 1.0 for a pool of props who average 875. So, it's likely this pool of props could end up with an SSA around 0.5 higher than the pool of 930+ props who get no boost in the SSA they produce.

As a side note, the average ratings change of all props who had a rating of 960+ in Oct 2009 (1127 players) until the Oct 2010 update = 0. How about that for stability of performance at that level?
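Chuck's arithmetic in the post above can be retraced step by step (all figures taken from his post; the 10-points-per-stroke conversion is an assumption from earlier in the thread):

```python
# Retracing the lag-vs-boost arithmetic from the post.

props_total = 5479
fast_improvers = 500
share_fast = fast_improvers / props_total   # about 9% of propagators

lag_months = 3
lag_fast = 40 * lag_months / 12             # 10-point lag for fast improvers
lag_sub900 = 10.6 * lag_months / 12         # ~2.7, call it "about 3"

points_per_stroke = 10                      # assumed conversion
ssa_depression = 5 / points_per_stroke      # up to 0.5 strokes of SSA
ssa_boost = 1.0                             # adjustment cited for an 875 pool
net_ssa_effect = ssa_boost - ssa_depression # about +0.5 strokes
```

So under Chuck's figures, the built-in adjustment slightly over-corrects: a sub-900 pool could end up with an SSA around half a stroke higher than a 930+ pool, not lower.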

cgkdisc
Nov 15 2010, 02:08 PM
Just kidding on that. I would actually like to see an increase of ratings points for victories.
That would presume the rating system should include some subjective element that "knows" when each throw has more meaning in one round on the course than in another round.

pterodactyl
Nov 15 2010, 05:44 PM
I was just thinking that when you have a big lead and you are coasting to victory, your rating can go down because of safe/smart play.

I hear all the time about PGA players needing to finish 1st or 2nd to retain their world ranking, so I'm thinking that part of the DG rating equation could include the final placing of players in tournaments.

jconnell
Nov 15 2010, 05:57 PM
I was just thinking that when you have a big lead and you are coasting to victory, your rating can go down because of safe/smart play.

I hear all the time about PGA players needing to finish 1st or 2nd to retain their world ranking, so I'm thinking that part of the DG rating equation could include the final placing of players in tournaments.

I think you're confusing ranking and rating. PGA world rankings take finish in each tournament into account (hence, player X needs to finish first or second to retain or improve their ranking, etc). But the rankings don't involve a player's handicap (golf's rating system) at all. Most players on tour are scratch golfers anyway, so involving handicaps/ratings wouldn't change a thing.

Ratings are entirely about scoring regardless of place in a tournament. As Chuck said earlier, ratings are involved in our world rankings almost out of necessity because there simply aren't enough tournaments in which the relevant players are all participating to produce true rankings along the lines of what we see in golf, tennis, etc.

When we as a sport reach a point where we have a true pro tour, in which at least 100 golfers are playing in 75% of the 50+ annual tour events (just spit-balling numbers for the sake of an example), then we can have true rankings in tune with what we see in other sports. As it is now, we have what, 15, 16 events on the "tour" (NT+majors). And how many players have played in, say, ten of those events in a single year? That's not much of a tour to speak of.

gdstour
Nov 15 2010, 08:32 PM
That would presume the rating system should include some subjective element that "knows" when each throw has more meaning that round on the course than in another round.

my guess is he thought there should be a little added value in winning by one or two over four or five.
The winner of an event may lay up a few times toward the end to seal the deal!!

gdstour
Nov 15 2010, 08:36 PM
DM: Also, shouldn't the points per throw be higher for the courses with a tighter scoring spread than for those with a wider range, instead of the other way around?

CK: Nope. The more throws on course, the closer players shoot scores near their ratings and they will bunch up. For example, two lifetime .300 hitters early in the baseball season might go 1 for 5 (.200) and 4 for 5 (.800) in the same game. But for two games, these hitters might go 3 for 10 (.300) and 4 for 10 (.400). More At-Bats, closer spread. Note also that one hit in one game is worth (.200) but for two games (think longer DG course) each hit is only worth (.100).

Hey Chuck, thanks for the answers... though they seem like they're coming from a politician running for office, or possibly someone up at corporate passing along info from the company handbook.

Not sure you made an accurate analogy above; you may be missing the point entirely.

Some courses produce scores that are bunched together, while others have a wider spread.
The course with the tighter grouping is harder to distance yourself from the field on.
To me this means the value of each stroke is worth more.

If 80% of the scores are, let's say, between +4 and -4 on one course and only 50% of the scores are +4 to -4 on another course, then the first course's points should be worth more per throw.
Typically this takes place on the longer par-4 type courses and is most likely why ratings on these par-4 courses are usually lower.
Combine this with a hard course that is not filled with top-rated guys and the ratings are even lower.
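
Chuck's baseball analogy in the quote above is easy to sanity-check with a quick simulation. This is purely illustrative: the .300 hitter and the trial count are made-up numbers, not anything from the ratings system.

```python
import random

random.seed(1)  # reproducible illustration

def batting_avg_spread(true_avg, at_bats, trials=10000):
    """Simulate `trials` stretches of `at_bats` at-bats for a hitter with
    the given true average, and return the std dev of the observed averages."""
    avgs = []
    for _ in range(trials):
        hits = sum(random.random() < true_avg for _ in range(at_bats))
        avgs.append(hits / at_bats)
    mean = sum(avgs) / trials
    return (sum((a - mean) ** 2 for a in avgs) / trials) ** 0.5

one_game = batting_avg_spread(0.300, 5)    # ~0.20 spread after one game
two_games = batting_avg_spread(0.300, 10)  # ~0.14 spread after two games
print(one_game > two_games)  # True: more at-bats, tighter bunching
```

The same effect is what Chuck claims for throws: more throws per round, and scores bunch closer to players' ratings.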

cgkdisc
Nov 15 2010, 09:30 PM
Typically this takes place on the longer par 4 type courses and is most likely why ratings on these par 4 courses are usually lower. Combine this with a hard course that is not filled with top rated guys and the ratings are even lower.
This is just not true. On every course for 10 years, the average round ratings the propagators earn equals the ratings of the propagators going in. ALWAYS. That's the way the math works. Total Ratings points IN equals Total rating points OUT. The only difference is the range of high to low ratings will be narrower on higher SSA courses. The average rating of the props is irrelevant to the ratings earned for a specific score on a layout. ALWAYS.

Whether a course bunches up scores or not has nothing to do with the ratings system but has to do with course design. Some 60 SSA courses, typically wooded, will produce a tighter scoring spread than other 60 SSA courses, typically more open. That's why hole analysis on Championship courses is worth doing. It's possible to make wooded courses with better spread if the effort is taken. But two courses with 60 SSAs will still have the same points per throw. ALWAYS.
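
The "points in equals points out" bookkeeping Chuck describes can be sketched in a few lines. This is a toy model only: the flat 10-points-per-throw figure and the simple averaging are assumptions for illustration, not the actual PDGA formula.

```python
def round_rating(score, ssa, ppt=10.0):
    """Toy conversion: 10 rating points per throw relative to the course
    rating (SSA). The real points-per-throw varies with SSA."""
    return 1000.0 + (ssa - score) * ppt

def course_rating(prop_ratings, prop_scores, ppt=10.0):
    """Pick the SSA so the propagators' average earned rating equals
    their average incoming rating (the 'points in = points out' idea)."""
    avg_rating = sum(prop_ratings) / len(prop_ratings)
    avg_score = sum(prop_scores) / len(prop_scores)
    return avg_score + (avg_rating - 1000.0) / ppt

ratings = [1000, 960, 940, 900]   # propagators going in
scores = [50, 54, 56, 60]         # what they shot
ssa = course_rating(ratings, scores)              # 50.0 here
earned = [round_rating(s, ssa) for s in scores]   # [1000.0, 960.0, 940.0, 900.0]
print(sum(earned) == sum(ratings))  # True: total points in == total points out
```

By construction, the average rating the propagators earn matches the average they brought in, regardless of what the average score was.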

arhunt
Nov 15 2010, 10:23 PM
On every course for 10 years, the average round ratings the propagators earn equals the ratings of the propagators going in. ALWAYS. That's the way the math works. Total Ratings points IN equals Total rating points OUT.

So, this means that you do not earn ratings by playing the course, but rather that you earn ratings based on your play as compared to the propagators?

cgkdisc
Nov 15 2010, 10:33 PM
The propagators produce the course rating and then you get your rating based on that course rating.

sammyshaheen
Nov 16 2010, 08:32 AM
Great discussion. I love following these threads.
I have one question. Why are there periodic ratings updates?
Seems like the ratings could be "live".
Is the technology not available?

cgkdisc
Nov 16 2010, 09:10 AM
I just answered this on another D-Board: Some of the technology is there but we won't do ratings live for practical reasons.

First, there are manual processes that have to be done to verify course layouts, add new members and check member numbers plus get the fees and member renewals from the TDs.

Second, players' ratings don't change that fast. The average PDGA member only has 3 new rounds every 2 months. We're already doing ratings updates every 6-7 weeks on average (8 times per year).

Third, TDs would struggle to know which ratings list to use for registration. With many events now having significant pre-reg, there would be all kinds of churn in the divisions amateurs would qualify for between when they registered and the date of the event. There's already a 2-week grace period following a ratings update for those who have registered before the update.

Fourth, TDs don't all post or send in reports at the same time as it is. Many more events would be processed out of order than they are now.

arhunt
Nov 16 2010, 10:30 AM
The propagators produce the course rating and then you get your rating based on that course rating.

How often does a course rating change? Is it calculated for every tournament? Or does Tournament B take into account the ratings from Tournament A?

Propagators -> course rating, course rating -> player rating is the same as propagators -> player rating. Unless the course rating is impacted by other factors in addition to the propagators. But if there are other factors affecting course ratings (and consequently player ratings), then ratings in would not equal ratings out.

cgkdisc
Nov 16 2010, 10:35 AM
The Course Rating is generated each round and will be dynamically affected by wind and rain. If the weather is essentially the same, then scores from more than one round on the same layout will be combined so everyone gets the same rating for the same score in either round.
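
One way to picture the combination step (purely illustrative; the propagator-weighted averaging rule and all the numbers here are my assumptions, not the published method): before combining, the same score in two pools can earn different unofficial ratings, and after combining, both pools are rated off one course rating.

```python
def round_rating(score, ssa, ppt=10.0):
    # Toy conversion: 10 rating points per throw off the course rating (SSA)
    return 1000.0 + (ssa - score) * ppt

def combined_ssa(round_ssas, prop_counts):
    """Combine rounds on the same layout by weighting each round's SSA
    by its propagator count (an assumed rule, for illustration only)."""
    total = sum(prop_counts)
    return sum(s * n for s, n in zip(round_ssas, prop_counts)) / total

# Unofficial: each pool rated off its own round's SSA
print(round_rating(48, 49.0))  # 1010.0 for a 48 in the morning pool
print(round_rating(48, 51.0))  # 1030.0 for a 48 in the afternoon pool

# Official: similar-weather rounds combined, so both 48s earn the same rating
ssa = combined_ssa([49.0, 51.0], [40, 60])
print(round_rating(48, ssa))   # 1022.0 for a 48 in either pool
```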

sammyshaheen
Nov 16 2010, 11:16 AM
Thanks for replying Chuck.

pterodactyl
Nov 16 2010, 02:46 PM
my guess is he though there should be a little added value in winning b y one or 2 over 4 or 5.
The winner of an event may lay up a few times towards the end to seal the deal!!

Thank you. I wasn't confused, just laying up to seal a couple victories by 6 and 9 shots. Next time I'll just go for it.

cgkdisc
Nov 16 2010, 03:15 PM
Just like the BCS teams still rack up the score in the fourth quarter!

jconnell
Nov 16 2010, 03:29 PM
Thank you. I wasn't confused, just laying up to seal a couple victories by 6 and 9 shots. Next time I'll just go for it.
So by that token, you believe ratings are more important than winning tournaments? Would you rather play above your rating (thus probably improving it) and lose or below your rating and win? I know that's not exactly what you're talking about, but if you're really making the decision to go for a putt to win by 5 or lay up and win by 4, based on how it might affect your rating, it isn't exactly a big leap.

Ratings are a statistic, like batting average or field goal percentage or TD passes. They're fun to look at, but they aren't the end all, nor should they be.

cgkdisc
Nov 16 2010, 03:37 PM
I think it's interesting that a top rated player like Darrell Nodland crushes his competitors by 20-30 shots in North Dakota and maintains his rating around 1030. No laying up unless he's really a 1050 player and lays up to fall back to 1030... ;)

Angst
Nov 16 2010, 05:00 PM
I'm a big fan of the rating system, and I do think it's pretty accurate once a player has at least 15-20 rated rounds. Any fewer than that and the accuracy seems to drop off quickly.

To my question... and I apologize if this has been answered before, but since I don't have access to the fabled 3-ring binder of infinite ratings knowledge I will ask my question regardless.

Why is it that the top tier ratings appear to be gradually climbing upwards?

When I started playing I think the highest rated player was somewhere in the 1020s and now they're up around 1040.

cgkdisc
Nov 16 2010, 05:28 PM
Take a look at Climo's ratings history: http://www.pdga.com/player-ratings-history?PDGANum=4297&year=2010 More players are getting to his level but the number of players with 4-digit ratings has been increasing pretty much in step with the increase in the number of members or even a little slower.

johnbiscoe
Nov 16 2010, 07:49 PM
This is just not true. On every course for 10 years, the average round ratings the propagators earn equals the ratings of the propagators going in. ALWAYS. That's the way the math works. Total Ratings points IN equals Total rating points OUT. The only difference is the range of high to low ratings will be narrower on higher SSA courses. The average rating of the props is irrelevant to the ratings earned for a specific score on a layout. ALWAYS.

Whether a course bunches up scores or not has nothing to do with the ratings system but has to do with course design. Some 60 SSA courses, typically wooded, will produce a tighter scoring spread than other 60 SSA courses, typically more open. That's why hole analysis on Championship courses is worth doing. It's possible to make wooded courses with better spread if the effort is taken. But two courses with 60 SSAs will still have the same points per throw. ALWAYS.

probability is what the system lacks. the system says scores lie on a straight line of probability when every statistical model in the world says they lie on a bell curve.

cgkdisc
Nov 16 2010, 08:07 PM
Irrelevant. Each throw is worth a value of "1" so the rating value of each throw must be the same on the same course layout. It's a direct linear conversion. If a player beats another player 45 vs 46 in a round, the difference is exactly the same as if the player won 54 vs 55. It's one throw, and the ratings points for that throw are the same on that course. It doesn't matter that the 45/46 win occurs less often than a 54/55 win. If Nikko and Dave shoot the 45/46, it may be just as probable for them as 54/55 is for you and me on the same course.

There is no universal probability for scores on a course. The probability of any score is relative to the rating/skill of the player. If you have ten 1000 rated players who shoot 50 on a course and ten 900 rated players who shoot 60 on that course, is it right that the 900 player would get a better rating for shooting a 50, which is improbable for him, versus the 1000 rated player who shoots that score all the time? Of course not.

bruce_brakel
Nov 17 2010, 01:21 AM
I'm a big fan of the rating system, and I do think that it pretty accurate once a player has at least 15-20 rated rounds. Any fewer than that and the accuracy seems to drop off quickly.

To my question... and I apologize if this has been answered before, but since I don't have access to the fabled 3-ring binder of infinite ratings knowledge I will ask my question regardless.

Why is it that the top tier ratings appear to be gradually climbing upwards?

When I started playing I think the highest rated player was somewhere in the 1020s and now they're up around 1040.

The answer to your question is competition. Just like in swimming and track, where the top times go down every year. If this game ever goes big time, with the kind of money that attracts natural athletes and pays for personal trainers and coaches, we'll see ratings approaching 1100.

Karl
Nov 17 2010, 09:43 AM
John stated:
"the system says scores lie on a straight line of probability when every statistical model in the world says they lie on a bell curve"

and Chuck stated:
"It's a direct linear conversion. If a player beats another player 45 vs 46 in a round, the difference is eaxctly the same as if the player won 54 vs 55"

Actually, for it to be a "direct linear conversion", there would be "equality" (in the potential) between 45/46, 44/45, 43/44...37/38, 36/37, 35/36, 34/35....

To say the above is valid - remembering that a 36 is 18 deuces, and a 35 is 17 deuces and an ace - is to believe that carding 18 deuces is as easy as carding 17 deuces and an ace. Ridiculous.

So it can NOT be totally linear. Certainly not at the lower (in actual score) end.
In reality, probability (of score) is like looking at the trough of Mavericks before it breaks.
There is a logistic curve to infinity as you approach 18 aces. Sliding down to "human scores" there is somewhat of a linear section where "typical scores happen", and then you start to go uphill again in the arena of people carding VERY high scores - as an incredibly inept dg'er might.

So Chuck, if you truly believe that the chance of a 35 happening as opposed to a 36 is equal to the chance of a 54 happening as opposed to a 55 (and thus that your linearity is "valid"), we have the wrong person (with the wrong set of statistical skills) setting up our rating system.

Karl

cgkdisc
Nov 17 2010, 10:06 AM
The probability of any score is relative to the rating/skill of the player. Sorry Karl, you missed this statement. On most courses, a 40 is less common than a 50 only because better players are less common. But for players whose rating is the equivalent of a 40 on an easy course and players whose rating matches a 50 on that course, those scores will be equally probable for those respective players.

And yes, there's an issue when players are good enough where they have to shoot a 35 or better to shoot their rating. That's why the lowest SSA course allowed in competition is 41.4 and even that is really too low for top players. Fortunately, we now have enough higher SSA courses where it's uncommon they play courses less than 49 SSA in higher level competitions.

Karl
Nov 17 2010, 10:37 AM
Chuck,

Your...
"But for players whose rating is the equivalent of a 40 on an easy course and players whose rating matches a 50 on that course, those scores will be equally probable for those respective players"
...is just wording to cloud the discussion I'm trying to make. I am NOT disagreeing with the statement above, but it is NOT relevant to my post above.

Again, in my previous post, I'm taking umbrage with your statement:
"Each throw is worth a value of "1" so the rating value of each throw must be the same on the same course layout. It's a direct linear conversion."

By saying this, you're advocating / believing / trying to "convince the masses" that any 1 rating value is equal to any other rating value...and this "concept" (if you want to call it that) fails at the low-score end - as I've shown! A 35 HAS to be exponentially harder to shoot than a 36, while a 54 is just a little harder than a 55, etc. Linearity has no place in such a real world.

I understand that the statistics behind determining algorithms for logistic curves are rather complex, but the system - as it is now - has a GLARING flaw at the low end. And trying to "validate" the system by not holding tournaments on courses whose SSAs are at the low end is NOT a good way to "show the masses that the system works".

Karl

cgkdisc
Nov 17 2010, 11:50 AM
For the rating system to convert 1 throw to anything but the same ratings value for each throw on a given course has no mathematical support. If each throw changed in value during a round based on accumulated score then the ratings value for that throw would also reflect that difference, but that doesn't happen. For example, if once you get to a score of 50 each additional throw scored 1.1 and then at 60 they were 1.2 because you were playing even worse, then the ratings points per throw would also be higher. But there's no validity for the ratings points per throw to go up when the actual score still marches forward "1" at a time.

For an 18-hole course with an SSA of 50, all players start with a ratings "score" of 1500. Each throw is worth 10 points subtracted from 1500. A player makes 50 throws and their rating is 1000 [1500-(50x10)]. They make 60 throws and their rating is 900. The ratings are directly linear with the number of throws.

You say there's a break point at a score of 36 but we really have no proof that's true because we don't have players who play at that skill level. A player who makes 35 throws would get an 1150 rating, 36 would be 1140 and 37 is 1130 and 38 is 1120. Nobody has yet shot these scores on an SSA 50 course or ever achieved these ratings in any round on any course. We really don't know if it's truly more difficult for a player that actually has these skills to shoot a 35 versus 36 versus 37. There's no reason to believe it's any less linear than the scores for the players of the wide ranging skills we have now.

Our reason for limiting the SSA at 41.4 is partly theoretical and partly because the ratings become more volatile when the ratings points per throw increase to where fluky bounce-outs and cut-thrus have too much ratings value. That's a judgment call to draw the line there, but we don't know if it's necessarily a mathematical requirement. Fortunately, only a few courses used in competition have ever flirted with these values anyway.
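
Chuck's SSA-50 worked example reduces to a one-line conversion. The 1500 base and 10 points per throw come straight from his post and apply only to an 18-hole, SSA-50 layout; other SSAs use a different points-per-throw.

```python
def round_rating(score, base=1500, points_per_throw=10):
    """Chuck's 18-hole, SSA-50 example: every throw subtracts the same
    10 points from 1500. (Both constants are specific to SSA 50.)"""
    return base - points_per_throw * score

print(round_rating(50))  # 1000
print(round_rating(60))  # 900
print(round_rating(35))  # 1150 -- the conversion stays linear at the low end
# One throw is worth the same anywhere on the scorecard:
print(round_rating(45) - round_rating(46) ==
      round_rating(54) - round_rating(55))  # True (both differences are 10)
```

This is exactly the linearity Karl objects to: the step from 36 to 35 costs the same 10 points as the step from 55 to 54.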

veganray
Nov 17 2010, 01:23 PM
http://www.actingproject.com/.a/6a011570b49e77970b0115704f9408970c-800wi

cgkdisc
Nov 17 2010, 01:29 PM
If I only made as much as he did for perceived wackiness...

Karl
Nov 17 2010, 01:51 PM
Chuck, you said:
"You say there's a break point at a score of 36..."
Show me where I said that. Don't put words in my mouth!

I said that there is an exponential tail that starts going up QUICKLY at that point (as anyone who knows what I'm talking about would understand), NOT that there is a "break point".

You spout a LOT of numbers (to muddy the waters) but it does NOT alter the fact that as one approaches perfection (18), the value that each shot SHOULD be worth SHOULD go up. But in your rating system it doesn't. I can "accept" your linearity around the mean but NOT at the lower limits. And 35ish is pretty low.

Your system doesn't handle the logic of potential - as ANYONE who plays dg can see that 18 birdies is WAY easier than 17 birdies and an ace (by a WHOLE lot more than the difference between say 54 and 55).

If you can't see that, you're just delusional in trying to defend your baby and are unwilling to see that - at least at the low end - it has serious problems.

You (and whomever else was involved) have given the PDGA the "best rating system" the PDGA maybe ever had BUT it is FAR from "fine" (as it can't handle the extremes).
Any good scientist knows that to "validate" a system, one HAS to "test the ends" (to see if it holds water). If yes, there's a chance it'll be fine "in the middle"; if not, Houston, we have a problem....

If you want to argue "the system is good enough", fine - that's your opinion. But the system can't handle the low end - and that's a fact.

I can lead a horse to water, I can even show it that the water's fine (or, in this case, that it is NOT fine), but I can't make it drink (or, in this case, stop it from drinking)...and we'll eventually get a case of dysentery.

Karl

cgkdisc
Nov 17 2010, 02:20 PM
It's apparent you're still not getting it, Karl. Let's try another way. If we measure how high someone can reach standing barefooted, we would agree that the number of people who can reach a certain height goes down as the height goes up inch by inch. As we get to 8 feet, there aren't too many able to reach that but guys like Yao Ming do it in their sleep. If we grab a random sample of 100 people, we might be lucky to find one person. As we get to 9 feet, then 10 feet and higher we eventually get down to a handful.

We're still measuring in one-inch units since those are the smallest units on our tape measure. These really tall people don't get a bonus in the measurement units just because they are taller. Reaching these heights is as normal (probable) for them as reaching 7 feet might be for you and me. I'm not sure what the current max reach for humans might be, but that person can do it anytime they want; it's just uncommon that they reach that high in their daily activities. So it's less probable they reach that high daily, but still possible, and the measurement unit is still one inch.

Jeff_LaG
Nov 17 2010, 02:36 PM
Karl, what's the point in arguing your low end point any further? Even if it is technically correct, it never comes into play.

Perhaps Chuck simply should have said the following: "It's a direct linear conversion above SSA 41.4." Would that satisfy you?

veganray
Nov 17 2010, 02:46 PM
http://www.tripledisc.com/preview/msdgc/dumb.jpg

AWSmith
Nov 17 2010, 07:47 PM
my biggest issue with the ratings is how long some rounds stick to your rating. i didnt play at all in 2009, and i didnt play a lot this year, but it took until the last update to drop 2 rounds from june of 2008. thats ridiculous.

also id like to see ratings weighted more by event, SSA, and the short term (say 3-6 months), with the amount of influence depreciating over time. it is completely reasonable to think that someone would start off the year shooting a little worse than normal and by midsummer be running at full strength. yet a few bad rounds to start the year hang on that player for a while when they are playing above that level for the rest of the year. or if they just had one of those bad days. i dont think the current standard deviation fairly accounts for that.

but all in all the ratings arent far off. i look at my friends and their ratings and i would say on average they are close to their normal play. a few i would say dont match; tournaments are a different mindset and it can take time to catch the groove. i think those are the ams who improve quickly, and realistically they are a small percentage.

on the comment about getting rid of novice, that is ridiculous; that is a large group of people who dont play because they dont want to lose consistently or theyre fresh to the sport.
the semi-pro class i think might work on a national level but not the local level. i think it would be interesting to allow players rated under 975-980 to compete for 50% payouts. i think that would be a viable field at that level, and it could be beneficial for TDs. it would be nice if both ams and pros could play at the semi-pro level without losing their status. those ams capable of reaching the pro level would have a chance to taste that sweetness, encouraging the desire to improve.
also the semi-pro class might work now because of the growth of the sport just since the last time it was attempted. there may be enough people to start filling that class, though once again i think only for national and maybe A tiers.

AWSmith
Nov 17 2010, 07:53 PM
Chuck,

Your...
"But for players whose rating is the equivalent of a 40 on an easy course and players whose rating matches a 50 on that course, those scores will be equally probable for those respective players"
...is just wording to cloud the discussion I'm trying to make. I am NOT disagreeing with the statement above, but it is NOT relevant to my post above.

Again, in my previous post, I'm taking umbridge with your statement:
"Each throw is worth a value of "1" so the rating value of each throw must be the same on the same course layout. It's a direct linear conversion."

By you saying this, you're advocating / believing / trying to "convince the masses" that any 1 rating value is equal to any other rating value...and this "concept" (if you want to call it that) fails at the low-score end - as I've shown! A 35 HAS to be exponentially harder to shoot than a 36, while a 54 is just a little harder than a 55, etc. Linearity has no place in such a real world.

I understand that the statistics are rather complex determining algorithms of logistic curves but the system - as it is now - has a GLARING flaw at the low end. And trying to "validate" the system by not having tournaments run on courses whose SSAs are at the low end is NOT a good way to "show the masses that the system works".

Karl

think ratings on the "x" and par on the "y"

Karl
Nov 18 2010, 09:21 AM
Chuck,

I'm "getting it" fully. Your using an analogy (and a totally non-applicable one) to "prove your point" doesn't prove it; it just muddies the waters (so as to make yourself look better). But the facts are that the system does not work at the lower end.

Like I said earlier, it's the best "we have now"...but it is NOT even close to perfect by any means. It "might" handle (sufficiently) the data in the middle, but let me state again:

"Any good scientist knows that to "validate" a system, one HAS to "test the ends" (to see if it holds water). If yes, there's a chance it'll be fine "in the middle"; if not, Houston, we have a problem."

We do. For low SSA courses. So your reasoning is that it's "no problem...we just won't hold tournaments on low SSA courses - that makes the system fine!". Super....


Jeff,

It's rather funny how you always run to Chuck's defense. At least I've credited Chuck when something he's done is good and questioned him when something he's done is "less than that". You just blindly stand up for him.
If you've something concrete to add to the discussion, say it; if not, we don't need a cheering squad.


Karl

Ps: And no, it wouldn't satisfy me...because it is NOT a direct linear conversion. It approaches linearity near the mean, but any system which uses linear algorithms on a non-linear model is flawed.

Pps: Sorry Chuck, I've stated all I can about this. If you're not willing to state that the system has a glaring flaw at the low-score end of things, I can't help that. Those who know something about statistics will understand where I'm coming from and will agree with me. All I can do is hope MORE people learn statistics.

AWSmith
Nov 18 2010, 08:06 PM
^^^^ then enlighten us

i hate statistics, and im not going down that road again, so why dont you take the time and break it down using statistics (i want numbers). then come back and present your point. pointing to a perceived flaw is all well and good, but if you cant mathematically prove it then it holds no merit.

P.S. i am by no means coming to anyone's aid. words aren't going to prove your point.

Jeff_LaG
Nov 19 2010, 01:00 AM
Karl,

I find it equally funny that you continually challenge Chuck seemingly at every opportunity, and typically blindly so. As AWSmith has intimated, if you've something concrete to add to the discussion, then say it; if not, you'll win no arguments trying to tell others why you think they are wrong based solely on your intuition or perception. And don't you dare try to tell others that they shouldn't be allowed to weigh in on a topic.

I've publicly disagreed with Chuck on many topics and will continue to do so in the future if I feel it is warranted. But in this situation, I'll continue to maintain that a linear algorithm from 41.4 up to the highest known SSA is a perfectly acceptable algorithm, and it is entirely irrelevant and meaningless what happens outside that range; and especially below it.

LongNeck
Nov 19 2010, 12:41 PM
There really is only one thing I do not like about the rating system. I think what would make the rating system more accurate is dropping your best 10% of rounds and your worst 10% of rounds. Let's say you have to have 15-20 rounds for this to take effect.

cgkdisc
Nov 19 2010, 01:01 PM
Might make sense if the normal way we played the game was to drop our best and worst rounds to determine winners in events with 4 or more rounds.

The reason for only dropping about 1 in 50 rounds per player on average is that players have control over shooting poorly but not over shooting exceptionally well. Most round scores are statistically probable and normal for each player, including the very best scores. Even though extremely poor scores are also randomly statistically probable, they can also be produced at will, and we don't know whether a poor round was deliberate or not. So, to prevent sandbagging, we throw out rounds at a 1-in-50 probability at the low end just in case. Otherwise, players could simply continue to throw low rounds that were retained and drag down their rating.
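
The "roughly 1 in 50" exclusion can be sketched as a one-sided outlier cut. I'm assuming a cutoff of 2.5 standard deviations below the player's average, which is my reading of the published description, and the round ratings below are invented:

```python
def kept_rounds(round_ratings, sd_cutoff=2.5):
    """Drop rounds improbably far below the player's average.
    Only bad rounds are ever cut -- high rounds always count."""
    n = len(round_ratings)
    mean = sum(round_ratings) / n
    sd = (sum((r - mean) ** 2 for r in round_ratings) / n) ** 0.5
    return [r for r in round_ratings if r >= mean - sd_cutoff * sd]

rounds = [950] * 10 + [940] * 5 + [960] * 4 + [850]  # one suspicious 850
kept = kept_rounds(rounds)
print(850 in kept)   # False: the possible tank job is excluded
print(len(kept))     # 19: every normal round survives
```

Note the cut is asymmetric by design: a fluky hot round stays in, which is the point LongNeck and ishkatbible go on to debate.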

jmonny
Nov 19 2010, 02:21 PM
I like turtles!

LongNeck
Nov 19 2010, 10:51 PM
Might make sense if the normal way we played the game was to drop our best and worst rounds to determine winners in events with 4 or more rounds.

The reason for only dropping about 1 in 50 rounds for players on average is that players have control over shooting poorly but not over shooting exceptionally well. Most round scores are statistically probable and normal for each player including the very best scores. Even though extremely poor scores are also randomly statistically probable, they can also be produced at will. We don't know whether a poor round was deliberate or not. So, to prevent sandbagging, we throw out rounds at a 1 in 50 probablity at the low end just in case. Otherwise, players could simply continue to throw low rounds that were retained and drag down their rating.


I agree somewhat. Yes, most tournaments are either 2 or 4 rounds. But if our goal in the rating system is to have the most accurate rating, I believe more rounds should drop that are bad, and more should also drop that are good. I can go out and shoot a 45 once in 50 times. Say I do that twice, and that is a 1034 at my home course. Those were fluke rounds, when most of my rounds are high 800s to mid 900s; my rating is a lot higher because of them. I just had the day of my life. I can also shoot a 60, and that round is dropped. It should go both ways. If it is just to prevent baggers, okay, I understand. But if we are going for the most ACCURATE rating then we should do something different.

AWSmith
Nov 19 2010, 10:52 PM
I like turtles!

Turtle...Turtle.
http://www.youtube.com/watch?v=Lkg7RFuzFnQ&feature=related

ishkatbible
Nov 21 2010, 08:39 PM
I can go out and shoot a 45 once in 50 times. Say I do that twice and that is a 1034 at my home course. That was a fluke round. When most my rounds are high 800's to mid 900's. My rating is a lot higher because of those rounds. I just had the day of my life. I can also shoot a 60, and that round is dropped. It should go both ways. If it is just to prevent baggers, okay I understand. If we are going for the most ACCURATE rating then we should do something different.

i can agree with dropping the lower rated rounds to prevent attempted bagging. but why the higher rounds? it may be a "fluke" but was it really an accident? you played an exceptional round. shooting a round like that shows potential and that it COULD happen again at ANY point. im not saying to move up because you did it once or twice. but you CAN do it again. and IF it became more frequent, what good would it be if those rounds were dropped?

cgkdisc
Nov 21 2010, 08:49 PM
Not only that, shooting exceptional rounds relative to your current rating is what happens as you improve, which is hopefully what happens for any new player or any player who's decided to take steps to get better.

schick
Nov 25 2010, 01:47 AM
Ratings are always an interesting topic... I think after you hit 1000, you should be ranked, not rated (as Dave Mac said). Ratings should be for Ams, to help regulate which divisions they should or should not be playing in. I honestly think ratings have destroyed the Open divisions in our area. If a 970ish Open player sees a few 1000+ rated players, they either do not play or may play Masters. I have heard this so many times... people give up before the tournament even starts because they feel they do not have a chance. When I first started playing Open, I never even thought about it. I knew there were better players than me, but I could never put an actual number on it.

Why do we need ratings for Open players....bragging rights is about it! If a player breaks a 1000, just put down 1000+....

cgkdisc
Nov 25 2010, 10:10 AM
Scan down the World Rankings and you'll see that roughly half have a ranking only because they have a rating of 1000+ and have not played even one Major nor played the minimum four NTs that count toward rankings. Ratings are still the predominant factor for most players in the World Rankings and even more so for Women.

World Rankings > http://www.pdga.com/files/documents/World_Rankings_Men_-_Oct_2010.pdf

Even among the top ranked players, not one played all five Majors this year. Even if we hid the ratings of players over 1000, players would know that meant the player was at least 1000. And, the scores of players over 1000 are even more important than lower rated players to produce ratings at regular events since these players are more consistent than most players.

hueyman2
Nov 30 2010, 05:16 PM
The Course Rating is generated each round and will be dynamically affected by wind and rain. If the weather is essentially the same, then scores from more than one round on the same layout will be combined so everyone gets the same rating for the same score in either round.


Also, many courses vary in layout or actual construction from year to year. Temp courses not set up exactly the same, or temp holes and course changes, cannot always be recorded and taken into account. So, like Chuck said, it is all based on the propagators. Published course SSAs, I think, tend to be "guides," since you can't tell what the weather, layout, or propagator quality was like at the time the SSA was created.

BDHYYZ
Nov 30 2010, 11:55 PM
CK wrote

"Even among the top ranked players, not one played all five Majors this year."

That's not a true statement, Chuck, because the Euro DGC is not a Major; it's a biennial XA tier and a huge event, but it's XA because it's only open to Europeans. So only the top Euros, if they had the $$$ to travel, could've played all 4 Majors plus the EDGC...

BDH

gdstour
Dec 01 2010, 12:19 AM
game set match , chuck wins again :)

gdstour
Dec 01 2010, 12:21 AM
Ratings are always an interesting topic.....I think after you hit 1000, you should be ranked, not rated (as Dave Mac said). Ratings should be for Ams to help regulate what divisions they should or should not be playing. I honestly think ratings have destroyed the Open divisions in our area. If the 970ish Open player sees a few 1000+ rated players, they either do not play or may play Masters. I have heard this so many times....people give up before the tournament even starts because they feel they do not have a chance. When I first started playing open, I never even thought about it. I knew there were better players than me, but I could never put an actual number on it.

Why do we need ratings for Open players....bragging rights is about it! If a player breaks a 1000, just put down 1000+....
word!

cgkdisc
Dec 01 2010, 12:37 AM
"Even among the top ranked players, not one played all five Majors this year."
That's not a true statement, Chuck, because the Euro DGC is not a Major; it's a biennial XA tier and a huge event, but it's XA because it's only open to Europeans. So only the top Euros, if they had the $$$ to travel, could've played all 4 Majors plus the EDGC...
The Euro DGC has counted as a Major in the World Rankings (quacks like a duck) and there's no reason a Euro couldn't have played all five. And, even if you exclude the Euro DGC as a Major, only Feldberg and Nikko played all four of the other Majors out of 150 players in the World Rankings. Still very weak Majors participation if you want a World Ranking system where ratings don't play a primary role.

sandalbagger
Dec 01 2010, 11:29 AM
I'm with you, Schick. I think these ratings have been a horrible idea. At first it seemed great, but in the end all it has done is discourage people from moving up a division or even playing events at all. I wish the ratings would just disappear; they are not really helping anyone other than the newer players who don't know where they fit in.

I would at least like to see the ratings from the Pre-registration lists removed until after the event starts. That way it might encourage the lower rated golfers to play open. Had I known I was only a 921 rated player back when I moved up to Open, I might have never done so.

cgkdisc
Dec 01 2010, 11:51 AM
but in the end all it has done is discourage people from moving up a division or even playing events at all.
Moving up a division is important because? Perhaps padding the income of top players?
Participation is at an all-time high so who is being discouraged? Perhaps the people being pressured to move up unnecessarily?

TeeBob
Dec 01 2010, 07:47 PM
Maybe the only tweak needed to clean up the system is to do away with the current propagator system, essentially do away with how the propagators are chosen. If out of 100 players at a tourney, 50 have 8 rated rounds, who decides which players' rounds will be used as propagation rounds?

Taken from the FAQ:

There’s no way to determine what an official SSA value would be for a course simply by taking measurements, looking at foliage, fairway widths and accounting for hazards. Not only that, it’s common for TDs to add temp holes, change tee or pin positions, or use new courses such that no SSA would be on file for that layout anyway. Using the scores of players with established ratings to produce an SSA has proven to be an accurate way to indicate how the course played that round. The only weakness of this system is that we require only 5 propagators to generate an SSA. Statisticians would prefer we use at least 30 propagators minimum for better accuracy. However, the PDGA has chosen 5 so that more players would get ratings. Some smaller divisions who play shorter layouts may not have very many propagators on a layout that round and would not get ratings in several events. The slightly higher inaccuracies produced with this system for individual rounds tend to even out over time. Plus, no round rating remains in an active player’s rating more than 12 months before it disappears.

You admit here that the propagator system is flawed on purpose (requiring only 5 propagators when statisticians would prefer at least 30).
The part about smaller divisions not getting ratings seems silly and could be easy to fix.
Why not use the entire field as the propagators to find the average score?

EA) If there's only ten people and they shoot 54, 55, 55, 58, 57, 59, 60, 56, 59, 61, the average score, or SSA, is 57.4.
EB) If the 5 props are the ones that shot 54, 55, 55, 56, 57, the average score, or SSA, is 55.4.
You can see how only using certain propagators affects the outcome.
With example EA, the guy shooting a 60 would actually be about 3 throws off the average score, giving him a round around 970,
whereas in example EB he would have a round around 950. That's a 20-point difference.

This is of course assuming the entire field played the same course with the same layout. Birdie courses would have a low SSA, while the ones with true 4s and 5s would have a higher SSA. If you are going to base a scratch player/1000 on an average, finding a true average IMO would be better than the current propagation system. Not only would ratings increase overall, most players would be FORCED to play in the correct division.

I don't know, maybe I'm clueless as to how it really works, but the wording in the FAQ proves the way it is currently done is flawed.
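TeeBob's EA/EB comparison is easy to verify with a short script. This is purely illustrative, not the official PDGA math: it assumes a flat 10 rating points per throw relative to the SSA (the real points-per-throw value varies with course difficulty, and the actual formula is more involved).

```python
# Illustrative only: assumes a flat 10 rating points per throw relative
# to SSA, as in TeeBob's example. Not the official PDGA formula.

POINTS_PER_THROW = 10  # assumed constant, not the official value

def ssa_from_scores(scores):
    """Treat the plain average of the given scores as the SSA."""
    return sum(scores) / len(scores)

def round_rating(score, ssa):
    """Rating of a round: 1000 minus ~10 points per throw over the SSA."""
    return 1000 - (score - ssa) * POINTS_PER_THROW

field = [54, 55, 55, 58, 57, 59, 60, 56, 59, 61]  # EA: the whole field
props = [54, 55, 55, 56, 57]                       # EB: 5 propagators only

ssa_all = ssa_from_scores(field)    # 57.4
ssa_props = ssa_from_scores(props)  # 55.4

print(round(round_rating(60, ssa_all), 1))    # 974.0 (~970, per the post)
print(round(round_rating(60, ssa_props), 1))  # 954.0 (~950): 20-point swing
```

Same score, two SSAs, a 20-point rating difference, which is exactly the discrepancy the example is pointing at.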

AWSmith
Dec 01 2010, 08:03 PM
I'm with you, Schick. I think these ratings have been a horrible idea. At first it seemed great, but in the end all it has done is discourage people from moving up a division or even playing events at all. I wish the ratings would just disappear; they are not really helping anyone other than the newer players who don't know where they fit in.

Completely disagree with you there. Ratings have protected a lot of people who aren't necessarily that good but love to play, and those who are working their way up. The smartest thing the PDGA has done since I've been a member was enacting the Novice division and resetting the rating levels.
The pro level is only growing. It used to be you could pretty much count on Kenny or Barry winning everything; now it's a battle just to make the lead card and stay there.

I would at least like to see the ratings from the Pre-registration lists removed until after the event starts. That way it might encourage the lower rated golfers to play open. Had I known I was only a 921 rated player back when I moved up to Open, I might have never done so.

Sounds like it was your fault for being ill-informed. And if you're rated low enough, you can petition to regain am status.

cgkdisc
Dec 01 2010, 08:11 PM
Scores from all propagators in the field are used, not just 5. And if more than one round is played on the same layout, scores from props in both rounds are used. So, most of the time, scores from 60 to 100 props generate the ratings which is more than enough for statistical reliability. At Worlds, we sometimes have over 400 scores by props to generate an SSA.

But in small and lower rated fields that play a different course layout some rounds, there may be only 5 propagators, and that's the minimum we need to automatically generate ratings. You can't use the scores from players who are not propagators. First, if they don't have a rating at all, then you have no way to properly average their scores into the mix. Second, if they have a rating based on fewer than 8 rounds, their rating is not considered stable enough to use in the calculations. Even with 5 props, we can cross-check against the course length and the SSA produced on the longer layout on that course to come up with an appropriate SSA value. The bottom line is the more props, the better. But at least 5, plus manual cross-checks when needed, have been good enough.
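The propagator rules Chuck describes (a current rating, at least 8 rated rounds behind it, and a minimum of 5 propagators per layout) can be sketched as a simple filter. The `Player` structure and its field names here are hypothetical; only the thresholds come from the post.

```python
# Sketch of the propagator-selection rules described above. The Player
# structure and names are hypothetical; the thresholds (a current rating,
# at least 8 rated rounds, minimum 5 propagators) come from the post.
from dataclasses import dataclass
from typing import Optional

MIN_RATED_ROUNDS = 8  # rating considered stable enough to use
MIN_PROPAGATORS = 5   # minimum needed to auto-generate ratings

@dataclass
class Player:
    name: str
    rating: Optional[int]  # None for non-members / unrated players
    rated_rounds: int      # rounds already behind the player's rating

def propagators(field):
    """Players whose scores can be used to compute the round's SSA."""
    return [p for p in field
            if p.rating is not None and p.rated_rounds >= MIN_RATED_ROUNDS]

def can_auto_rate(field):
    """True if the layout has enough propagators to generate ratings."""
    return len(propagators(field)) >= MIN_PROPAGATORS

field = [
    Player("member A", 921, 24),
    Player("member B", 978, 12),
    Player("new member", 955, 4),   # rating too new: excluded
    Player("non-member", None, 0),  # no rating at all: excluded
]
print([p.name for p in propagators(field)])  # ['member A', 'member B']
print(can_auto_rate(field))                  # False (only 2 props)
```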

TeeBob
Dec 01 2010, 08:18 PM
ok that clears that up.

Why not use all players at the tourney? I mean, the guy that wins Rec might be a 970-rated player but isn't a PDGA member.

cgkdisc
Dec 01 2010, 08:26 PM
Because knowing the rating of the player shooting the score is fundamental to doing the math for the SSA to produce ratings. If you don't know the rating of the player, what does a score of 54 mean? It's a course record on Winthrop Gold or a Novice score on Horizons Park in NC.
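One way to see why the player's rating is fundamental: a purely illustrative estimate of SSA adjusts each propagator's score by how far that player's rating sits from 1000, again assuming roughly 10 rating points per throw (an assumption for illustration, not the official formula). A score with no rating attached gives you nothing to adjust, so it can't contribute.

```python
# Purely illustrative, not the official PDGA math: estimate SSA by asking
# what each propagator's score implies a 1000-rated player would shoot,
# assuming a flat 10 rating points per throw.

POINTS_PER_THROW = 10  # assumed constant

def implied_ssa(score, rating):
    """The score this round 'translates to' for a 1000-rated player."""
    return score - (1000 - rating) / POINTS_PER_THROW

def estimate_ssa(prop_scores):
    """Average the implied SSAs over (score, rating) propagator pairs."""
    return sum(implied_ssa(s, r) for s, r in prop_scores) / len(prop_scores)

# The same 54 means very different things from different players:
print(implied_ssa(54, 1030))  # 57.0 -> a tough course
print(implied_ssa(54, 850))   # 39.0 -> a very easy course

props = [(52, 1000), (55, 980), (58, 940)]
print(round(estimate_ssa(props), 1))  # 52.3
```

Without a rating, `implied_ssa` has no second argument to work with, which is Chuck's point about the non-member's 54.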

AWSmith
Dec 01 2010, 08:26 PM
You admit here that the propagator system is flawed on purpose (requiring only 5 propagators when statisticians would prefer at least 30).
The part about smaller divisions not getting ratings seems silly and could be easy to fix.
Why not use the entire field as the propagators to find the average score?

EA) If there's only ten people and they shoot 54, 55, 55, 58, 57, 59, 60, 56, 59, 61, the average score, or SSA, is 57.4.
EB) If the 5 props are the ones that shot 54, 55, 55, 56, 57, the average score, or SSA, is 55.4.
You can see how only using certain propagators affects the outcome.
With example EA, the guy shooting a 60 would actually be about 3 throws off the average score, giving him a round around 970,
whereas in example EB he would have a round around 950. That's a 20-point difference.


And what if none of those 10 people are 1000 rated or even close to that quality? It's not as basic as an average; it's statistics, so it has to be as complicated as possible. Those players' current ratings are partly factored into the SSA, if I'm not mistaken.

Here is an idea for more accurate SSAs:
A class/seminar should be established by the PDGA on how to appropriately establish SSAs/pars. All state coordinators should be required to attend (part of the duties if you run). Once they pass, they return to their state, and all the TDs in that state would be required to attend a class/seminar given by the state coordinators. The PDGA could also hold these seminars at Majors to make travel easier on state coordinators and TDs. To begin with, you would not be able to TD a B-tier or above tournament without doing so; then eventually, as the system is ironed out, you could not TD any event until you have passed the class/seminar. Also, only make the certification valid for something like 3-5 years.
This, I think, would create more accurate numbers for the current rating system and allow for a more refined rating system.

cgkdisc
Dec 01 2010, 08:33 PM
TD training is one of the priorities that will start being addressed next year. Learning how to set course layouts when they upload scores, and actually doing it, is the most important thing TDs can do to get better unofficial ratings. TDs don't need to and can't really set SSAs, and setting par 4s and 5s on a course is primarily for late penalties.

TeeBob
Dec 01 2010, 09:52 PM
To be honest, I think it's fine as is; I just needed some clarity on a few parts since the wording gets in the way. Besides, ratings are for girls.

bruce_brakel
Jul 03 2012, 06:02 PM
{R}atings are for girls.

Indeed, and they should have their ratings shown in roman numerals so not so many people could tell when their ratings are higher than their fathers'. ;)

bruce_brakel
Jul 04 2012, 10:26 AM
...