wlbkr
Oct 14 2007, 05:37 PM
Ok, my tournament is done, I have all the scores for all divisions for each hole on my spreadsheet. Now....what do I look for? Simple average probably doesn't tell me too much. What about spread alone? How do I break the results down to improve my tournament course? How do I know which holes are exceptional and which are terrible? Any help?

Alacrity
Oct 15 2007, 11:06 AM
I am having a heated discussion with several club members about this very thing right now. One thing I think you need to determine first is whether you are trying to build your course around Intermediate and Rec players, who make up about 90% of the typical players you see, or around top-rated players. In my case I have one course that is for amateur players and one that is meant for highly rated players.

In my case I am looking at the Blue course and trying to push the par higher. I would suggest taking the top Open and Advanced players and looking at hole-by-hole averages. If the hole average is right at 2 or 3 or 4, then to me there appears to be very little risk versus reward. If the hole average is closer to 2.75, 3.5 or 4.5, then the hole offers a little more opportunity for a player to risk a shot.

It also helps to look at the scores themselves. John Houck has written about this, and he has stated that if a comparison group has one bird, one 4, and a bunch of 3s, there is not enough variance in the hole.

If you are trying to build a course for the average player, look at the top Intermediate players and middle Advanced players. These guys and gals are starting to take risks, whereas most Rec players are not thinking through the hole. They are just throwing at the hole and are inconsistent about their throws. They may throw at the same tree gap they hit last month and birdied, regardless of the fact that they have taken 5s every other time.

Once you have determined the philosophy for the course and calculated the averages, start considering what you could do to change the hole up. Sometimes you don't need to move a pin or tee; just plant a tree in the fairway. That one tree can get in people's minds and affect the throw, even though the flight path is essentially the same. Sometimes just pulling a tee back a couple of feet so that more canopy comes into play can help. Another thing is to open up what appears to be a good flight path. If you plan it right, you can open a flight path that is actually too tight for the average player to hit.

Just some of my opinions, good luck.

wlbkr
Oct 15 2007, 11:21 AM
Thanks for your help. I love the idea about planting trees in the fairway or even bushes near the greens. I do have another question about your 3.5 average example above.

I have heard players talk about a hole being a "tweener". I am not quite sure what they mean. Do they mean half will get 1 score and half will get another? But isn't the point to get a spread? What's cooking here? :)

sandalman
Oct 15 2007, 11:37 AM
just plain average is only part of the story, so dont worry too much if an average is 3.5. if you have a 50-50 split between 3's and 4's, you might very well have a decent hole. but if the split is something like 40% 2's, 20% split between 3's and 4's, and 40% spread among 5's, 6's and 7's, then it could mean an extremely finicky (aka lucky) hole. even tho the average is 3.5, one hole is fine for disc golf and the other should be saved for the carnival.
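To make that concrete, here is a minimal Python sketch; the fractions are made up, tweaked slightly from the split above so that both holes come out to exactly the same 3.5 average:

# Two hypothetical scoring distributions that share the same 3.5 average.
# Keys are hole scores, values are the fraction of the field getting that score.
decent_hole = {3: 0.50, 4: 0.50}
carnival_hole = {2: 0.40, 3: 0.15, 4: 0.15, 5: 0.20, 6: 0.05, 7: 0.05}

def mean_score(dist):
    # Expected score = sum of (score * fraction of the field).
    return sum(score * frac for score, frac in dist.items())

for name, dist in [("50/50 threes and fours", decent_hole),
                   ("carnival hole", carnival_hole)]:
    print(name, "average:", round(mean_score(dist), 2))

Same average, completely different holes; that's exactly why the average alone doesn't tell you much.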

james_mccaine
Oct 15 2007, 12:04 PM
This is a great question and I commend you on your desire to do this.

As Jerry stated, the analysis needs to be targeted. Is the analysis being done to improve the holes for the next tourney, or to improve them on a daily basis?

If the analysis is being done to improve the holes for the next tourney, what type of players do you expect? Unless it is a world-class event, I would probably take the top third of the field by ratings and use them for my analysis. My thought is that if you can generate a good spread for the top third, then you will also achieve the same results for the bottom third. Alternatively, you can also do a separate analysis for the group centered around the median. These two analyses would be useful, but may lead you to have two tees for the next tourney.

As to the actual analysis, my advice would be to discard the whole notion of "average" as it mutes all the important distinctions. Just calculate the percentages of each score by ratings group. The resulting spread will tell you if the hole has worthless qualities. If 80% of the field gets the same score, why play the hole at all? It is a worthless hole. In other words, isolate the holes that produce little spread and address those.

There is nothing wrong with tweener holes, as long as they produce a decent spread. Tweeners are described by averages, and average is a very weak concept. I mean, a 3.5 average might result from 10% twos, 40% threes, 40% fours and 10% fives. This is probably an excellent hole.
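As a rough sketch of the per-score percentages described above (the scores are made up; hole 2 matches the 10/40/40/10 tweener example, and a real analysis would use one ratings group at a time):

from collections import Counter

def score_percentages(hole_scores):
    """Percentage of the field getting each score on one hole."""
    counts = Counter(hole_scores)
    total = len(hole_scores)
    return {score: round(100.0 * n / total) for score, n in sorted(counts.items())}

# Hypothetical hole-by-hole scores for one ratings group (e.g. the top third of the field).
holes = {
    1: [3, 3, 3, 3, 3, 3, 3, 3, 2, 4],   # 80% threes: little spread, worth fixing
    2: [2, 3, 3, 3, 3, 4, 4, 4, 4, 5],   # a "tweener" averaging 3.5 with a healthy spread
}

for hole, scores in holes.items():
    pct = score_percentages(scores)
    flag = "  <-- most of the field gets the same score" if max(pct.values()) >= 80 else ""
    print(f"Hole {hole}: {pct}{flag}")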

Once again, kudos for your desire to do this. It is so necessary but so rare to see a TD do this kind of analysis.

Lyle O Ross
Oct 15 2007, 04:33 PM
A thread titled Hole Analysis is too good to pass up.


James, in some ways your post seems to contradict Pat's. Where is the demarcation? That is, where does the spread go from being just the right amount to indicate the hole is a good one, to indicating the hole has too many random variables? Would the spread be measured across all divisions, so that you would expect to see the Pros go from 2.2 to 3.6, Advanced from 2.4 to 3.9, etc.?

Lots of people talk about what makes a good hole. Can there be a numerical formula that truly expresses the challenge of a hole?

james_mccaine
Oct 15 2007, 05:13 PM
Spread, at least the way I use it, can only be expressed in percentages. It is nothing more than the scoring distribution of a select population of players.

I'm really more interested in what makes a bad spread than what makes a good spread. I think Pat and I are in agreement that little spread equals boredom and a waste of competitive time. However, holes with this characteristic are still too prevalent, even at major tournaments.

This is one of those things that the PDGA should require from TDs imo, not necessarily to be heavy handed and demand they address it (even though that is very OK with me) but at least to educate them. In my experience, many more event TDs might perform this work if they understood why it is useful.

btw, I think Chuck uses 70% of one score as his demarcation line of unacceptability. Seems reasonable enough, although I hope all TDs would still try to improve holes with percentages slightly below that level.

Jroc
Oct 15 2007, 05:30 PM
Yes, it is about 70%.

The Course Design group uses the two-thirds principle. In general, if two-thirds or more of the results are the same thing (birdies, pars, OBs, missed mandos, etc.), then it is not a good design and/or not fair for the intended skill level the hole was designed for.

ck34
Oct 15 2007, 05:37 PM
The Hole Forecaster and its ongoing enhancements, provided to designers as part of their member package in the DGCD, is set up for detailed and precise analysis of scoring for holes, including forecasting what a new design will produce and determining how a hole will play for different skill levels, even if not a single person at that level has played the hole. Since we would rather have those who are more involved in course design proposing or making any course changes, we haven't published the process and the Forecaster for wider use. DGCD members use it as one of their design tools on an honor system. Those interested in joining, please give me your email and I'll send info. It's a one-time fee of $49 with no annual renewal. We now have over 100 members internationally, including most of the high-profile designers you might have heard of.

Since most events involve players of different skill levels playing the same holes, it's important to break players into 50-pt ratings ranges for the analysis. Ideally, you would have enough players in the skill range for which the hole had been designed so you can see if the hole produced an appropriate spread. Playing different tees is much more important on relatively open courses to properly challenge different skill levels in tournaments. Courses with average foliage density to mostly wooded holes will have scoring spread that's usually pretty good regardless of skill level. So playing the same tees can work for all divisions if only one is available. Of course, another type of review is required to determine if the scoring spread on a wooded hole is more from luck versus skill. That takes design experience and is usually hard to determine from numbers alone.
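A minimal sketch of that first step, grouping scores into 50-point ratings ranges before looking at spread (the players and scores below are invented):

def rating_bin(rating, width=50):
    """Label a player's 50-point ratings range, e.g. 950-999."""
    low = (rating // width) * width
    return f"{low}-{low + width - 1}"

# Hypothetical (player rating, score on one hole) pairs from an event.
results = [(972, 3), (961, 2), (948, 3), (935, 3), (921, 4), (908, 4), (897, 5)]

by_bin = {}
for rating, score in results:
    by_bin.setdefault(rating_bin(rating), []).append(score)

# Look at the spread within each ratings range separately.
for bin_label, scores in sorted(by_bin.items(), reverse=True):
    print(bin_label, scores)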

james_mccaine
Oct 15 2007, 06:16 PM
As to education, the Forecaster is a good tool that can be used to solve a problem, but most TDs don't even know there is a problem in the first place. The education I'm talking about is in more general terms. For example: the event you are running is a professional sporting event. The course is a fundamental feature of the tournament. Not only is it important to challenge the skills of the participants with difficult holes, it is important to ensure that each hole will allow players to separate themselves from many competitors with a good performance on that hole. Therefore, we recommend that you analyze the scores in such and such a way, use the Forecaster for assistance, incorporate these concepts to create an acceptable distribution, etc.

This would be one easy way to raise the bar for a sanctioned tournament. Give some meaning to players that if they are attending a PDGA event, it won't be littered with holes where virtually everyone gets the same score.

ck34
Oct 15 2007, 06:53 PM
Good suggestion, James. I can see about getting wording to that effect incorporated in our How to Run a Tournament doc and possibly our PDGA design guidelines. There are several things regarding event setup and situations I'd like to get in a document. We've already raised the bar for NTs and Majors in this regard, with oversight usually by more than the original designer.

denny1210
Oct 15 2007, 07:01 PM
Well said, James.

The hole forecaster works great for course design and post-tourney results analysis. There's no need to re-invent the wheel when there's a great tool already available.

As Chuck stated, it's vital to normalize the data for a specific skill set. The only thing that looking at rec player scores on a gold course will tell you is that the course is super-freakin' difficult for them.

It is somewhat arbitrary where to draw the lines separating different skill levels, but this has been done and I think it's important for us to help spread the knowledge of what exactly a red or white or blue course looks like. It'll go a long way towards helping players enjoy their golf more when they go to a new course if it's easy for them to pick the right tee.

Lyle O Ross
Oct 16 2007, 11:26 AM
Chuck,

Can you give a list of who's got access to your hole design kit? If for no other reason than that it is a benefit for them to be known as someone who is thinking this way.

Alacrity
Oct 16 2007, 11:40 AM
There have been some great suggestions here, but we are currently discussing several holes on our Blue course, a difficult course when played from the long tees, which we are now putting permanent tees in. On several of the holes I have asked players to look at the hole and ask themselves: can you get a 2, can you get a 3, can you get a 4? If you can't get a 2, are you always getting a 3? If the targeted group is frequently settling for a 3, then the hole is not challenging enough and needs to be reviewed.

ck34
Oct 16 2007, 11:42 AM
Texas DGCD members are: Brenner, Duke, Houck, Kingston, Lehmann, Morrow, Olse, Zac Tolbert, Don Young. I think Brenner, Duke and Kingston may have the most experience using the software so far but that's only because I've heard them mention using it.

ck34
Oct 16 2007, 11:55 AM
Something I'd like to suggest along the lines of hole analysis is that until a hole has scoring data that has been properly analyzed, and then the design tweaked if necessary, it can't be considered either a completed hole or a great hole. The same goes for the whole course. As McCaine suggested, this type of information should be collected from events, and TDs should share scorecards with the designer to make sure the course has produced the type of scoring expected from that set of tees/pins. With many players having ratings, it's not necessary to wait until a PDGA event on a new course to start analyzing scores; you can start simply from minis and leagues.

Jroc
Oct 16 2007, 12:44 PM
I have worked with it a few times. We are currently considering where to put new sleeves for each basket at Cal Young, and the Forecaster will help greatly to get them in the best place. In the isolated disc golf pockets (i.e. everyone in West Texas), there are far more non-rated regular players than rated ones. With some work, it's possible to estimate their ratings on their course, so you can get enough scores to work with.

It's interesting how many players have not thought about these concepts. I had a conversation this weekend with a local Lubbock player about the course there, and it kind of opened my eyes to the fact that even more skilled, experienced players don't necessarily have a complete idea about why courses work or don't work, the importance of designing for specific skill levels, etc. There is much education to be had by many players out there (myself included).

Encouraging post-tourney hole analysis would be a step in the right direction, but it will take several years to get TDs to come on board. They have a hard enough time doing the required things (the TD report) as it is now :D

sandalman
Oct 16 2007, 12:54 PM
it doesnt need to be the TD... many TDs are great at running events but not so hot at designing holes. the data is best shared with the folks most interested and/or capable of sifting thru it and coaxing insights from it.

Jroc
Oct 16 2007, 01:13 PM
Certainly. Right now, there are only a handful in our state that even have access to the tools. Could those handful of folks give usable advice about a course they have never seen? I think the suggestions would need to be a little more than "70% of the blue level players scored a 3 on hole #2". But, maybe it starts out just that simple...

I like the idea....just thinking out loud.... Maybe all of the 'advice' needs to stay objective, so we don't run into what the course evaluation group is dealing with now.

sandalman
Oct 16 2007, 01:28 PM
it does start just that simple.

i'd be happy to plug your data into a spreadsheet and hand back the raw results. its pretty easy to build a sheet that gives you the percentages. figuring out what to do about them is more hands-on of course, but the percentages are a good place to start

ck34
Oct 16 2007, 01:32 PM
Steve Dodge provided the Marshall Street scores for my comments on a course halfway across the country from me. Apparently the analysis made sense to him without me ever seeing the course. Having hole pictures and maps online helped very little with the analysis. Had I been there, I might have been able to provide specific advice such as considering moving a tee here or cutting a tree there. But even "blind" analysis can help people improve their courses. As has been pointed out, many don't even know that this kind of analysis can improve courses, or they discard it as mumbo jumbo because they think holes of any length are just as good.

Jeff_LaG
Oct 16 2007, 01:50 PM
It's interesting how many players have not thought about these concepts. I had a conversation this weekend with a local Lubbock player about the course there, and it kind of opened my eyes to the fact that even more skilled, experienced players don't necessarily have a complete idea about why courses work or don't work, the importance of designing for specific skill levels, etc. There is much education to be had by many players out there (myself included).



That could be the understatement of the year. There are literally thousands, if not tens of thousands, of disc golfers, many highly skilled and with decades of experience, who don't understand the ins and outs of designing courses for different skill levels, the importance of score variation, the variance in par for each skill level depending on length and foliage density, maximum effective lengths to corners of a dogleg or for forced water carries, etc. The education process is only beginning.

Jroc
Oct 16 2007, 02:42 PM
Cool. If that's all it takes, I understand the Forecaster enough to help out. Be glad to, even.

gnduke
Oct 17 2007, 04:33 AM
The forecaster is great at showing where potential problems are and sometimes good at suggesting improvements, but the fixes have to improve the course as a whole, and not just the hole in question. The "flavor" of the holes around the hole having problems should suggest whether longer, shorter, tighter, or more open would be the best options for corrections.

stevenpwest
Oct 19 2007, 01:23 AM
It is intuitive to everyone that a good hole will produce a variety of scores, but not through random chance. I've developed a formula for Sorting Power which can be used to numerically evaluate how well a hole does both: generate a spread of scores, and match those scores to the players' abilities.

A hole where everyone gets the same score would have zero Sorting Power.

A mythical hole that gives everyone a different score, and gives the best player the lowest score, the next best player the next higher score, etc, would have a Sorting Power of 100%.

A wider scoring distribution gives a hole more potential for Sorting Power, because it sorts the players into more groups.

However, a hole with a wide scoring distribution that results from pure luck would not assign scores to players according to their abilities. The Sorting Power of that hole would be very low.

A hole that causes good players to get high scores, and bad players low scores (read that again) could have a negative Sorting Power.

Here's the recipe to evaluate a hole's Sorting Power:

1. Rank players according to the total score they got on the hole being evaluated.

2. Rank the same group of players according to the total score they got on all other holes - excluding the hole being evaluated. (Alternatively, use Rating or some other player ranking that is independent of the hole being evaluated.)

3. Subtract the rank the hole gave each player (Step 1) from the independent player ranking (Step 2). Turn any negatives positive. Call these the "ranking errors". Add up the ranking errors.

4. Calculate the errors expected from a worthless hole: n*(n-1)/2, where n = number of players.

5. Compute Sorting Power: one minus [(Step 3) / (Step 4)].

Here are some examples, from a made-up tournament with just five players. Player a scored 53, b = 56, c = 58, d = 61, e = 64

(Sorry about the formatting.)

Player / Hole Score / Score Rank / Other Scores / Player Rank / Ranking Error
a / 3 / 1 / 50 / 1 / 0
b / 3 / 1 / 53 / 2 / 1
c / 3 / 1 / 55 / 3 / 2
d / 3 / 1 / 58 / 4 / 3
e / 3 / 1 / 61 / 5 / 4

Error Sum: 10
Sorting Power: 0%

Player / Hole Score / Score Rank / Other Scores / Player Rank / Ranking Error
a / 2 / 1 / 51 / 1 / 0
b / 2 / 1 / 54 / 2 / 1
c / 2 / 1 / 56 / 3 / 2
d / 3 / 4 / 58 / 4 / 0
e / 3 / 4 / 61 / 5 / 1

Error Sum: 4
Sorting Power: 60%

Player / Hole Score / Score Rank / Other Scores / Player Rank / Ranking Error
a / 4 / 1 / 49 / 1 / 0
b / 4 / 1 / 52 / 3 / 2
c / 9 / 5 / 49 / 1 / 4
d / 4 / 1 / 57 / 4 / 3
e / 4 / 1 / 60 / 5 / 4

Error Sum: 13
Sorting Power: -30%

Player / Hole Score / Score Rank / Other Scores / Player Rank / Ranking Error
a / 2 / 1 / 51 / 1 / 0
b / 3 / 2 / 53 / 2 / 0
c / 4 / 3 / 54 / 3 / 0
d / 6 / 4 / 55 / 4 / 0
e / 8 / 5 / 56 / 5 / 0

Error Sum: 0
Sorting Power: 100%
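For anyone who wants to try this without building a spreadsheet first, here is a minimal Python sketch of the recipe above; it reproduces the four example results (0%, 60%, -30%, 100%):

def comp_rank(values):
    """Competition ranking: rank = 1 + number of strictly better (lower) values; ties share a rank."""
    return [1 + sum(other < v for other in values) for v in values]

def sorting_power(hole_scores, other_scores):
    """Steps 1-5: compare the hole's ranking of players against an independent ranking."""
    hole_rank = comp_rank(hole_scores)           # step 1
    player_rank = comp_rank(other_scores)        # step 2 (ratings could be used instead)
    errors = sum(abs(h - p) for h, p in zip(hole_rank, player_rank))   # step 3
    n = len(hole_scores)
    worthless = n * (n - 1) / 2                  # step 4
    return 1 - errors / worthless                # step 5

# The four examples from above (players a through e).
for hole, others in [
    ([3, 3, 3, 3, 3], [50, 53, 55, 58, 61]),   # everyone gets a 3      ->   0%
    ([2, 2, 2, 3, 3], [51, 54, 56, 58, 61]),   # partial sort           ->  60%
    ([4, 4, 9, 4, 4], [49, 52, 49, 57, 60]),   # a top player blows up  -> -30%
    ([2, 3, 4, 6, 8], [51, 53, 54, 55, 56]),   # perfect sort           -> 100%
]:
    print(f"{sorting_power(hole, others):.0%}")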

sandalman
Oct 19 2007, 09:55 AM
awesome! can u send the spreadsheet? i wanna try some of that

ck34
Oct 19 2007, 11:44 AM
I like the concept of determining how well a hole rewards better play. However, using player performance on other holes on a course seems statistically risky. To assume that other holes have any relationship to a specific hole is suspect as a means of player skill ranking. I think ratings would be best, and league standings would be a fine alternative when no ratings are available.

There's a simpler way to get at the numbers Steve is proposing. The calculation is a simple Linear Correlation which is the CORREL Excel function. The ideal distribution would produce a -1.00 correlation meaning the higher the rating the lower the score shot. That's the first example shown below where I used hypothetical scores on the same hole played four times in an event.

The second table shows a hole where the scores have no correlation with a player's rating producing a CORREL value near zero. The third table shows a hole which would be unlikely where the higher the rating, the higher your score on the hole with a correlation close to 1.00.

Great Hole
Rating / Score
965 / 12
950 / 13
945 / 14
941 / 14
932 / 15
Correlation: -0.98

Random Hole
Rating / Score
965 / 14
950 / 12
945 / 17
941 / 14
932 / 13
Correlation: 0.03

Bizarre Hole
Rating / Score
965 / 15
950 / 13
945 / 13
941 / 14
932 / 12
Correlation: 0.82
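The same calculation outside of Excel, for anyone who prefers it; this short Python sketch reproduces the three correlation values in the tables above:

def pearson(xs, ys):
    """Pearson correlation coefficient, the same calculation as Excel's CORREL."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

ratings = [965, 950, 945, 941, 932]
holes = {
    "Great Hole":   [12, 13, 14, 14, 15],
    "Random Hole":  [14, 12, 17, 14, 13],
    "Bizarre Hole": [15, 13, 13, 14, 12],
}

for name, scores in holes.items():
    print(name, round(pearson(ratings, scores), 2))   # -0.98, 0.03, 0.82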

sandalman
Oct 19 2007, 12:04 PM
To assume that other holes have any relationship to a specific hole is suspect as a means of player skill ranking.


not sure about that... after all, courses are related to other courses in exactly the same manner in the ratings system

i'm not sure steve's method really gets at a hole's ability to separate... it might but i need to play with it more. your method does not either, though. it seems more of a way to verify that the hole does in fact play harder for less skilled players. it could be useful for analyses related to slope perhaps, and having the correlation coefficient is really nice.

Jeff_LaG
Oct 19 2007, 12:11 PM
2. Rank the same group of players according to the total score they got on all other holes - excluding the hole being evaluated. (Alternatively, use Rating or some other player ranking that is independent of the hole being evaluated.)



I think that when you use total scores on all other holes, you only introduce more error from possibly skewed sorting effect on those holes. I agree that the most current PDGA player ratings are the better way to go here.

ck34
Oct 19 2007, 12:24 PM
The only thing the scores on the other holes do is provide a preliminary insight into a player's skill level, no different from having a single round rating determine a player's "true" rating. It's just one data point.

The correlation specifically gets at a hole's ability to let better players demonstrate their skill as measured by their rating. The more negative the correlation, the better the hole on that factor. However, I agree that a hole with a perfect correlation of -1.0 would produce identical average scores for all players rated at 950, and in theory, they would all tie with the same total score on this hole over enough rounds.

The scoring spread on the hole would look at what the range of scores are on that hole by players with exactly a 950 rating (for example). If they are mostly the same, the hole won't separate scores. If the numbers are spread, it will. That's the type of analysis the Forecaster does.

sandalman
Oct 19 2007, 02:37 PM
i tend to agree, particularly with the last paragraph. Steve was looking for a way to separate the lucky holes from the skill-based holes. i think he's on to something overall.

ck34
Oct 19 2007, 02:51 PM
Based on all of the Forecaster studies I've done, I'm not sure there are too many holes where that "lucky distribution" will be the case. However, it's an additional item worth discovering where it occurs. I've had a few holes at Highbridge where a hole on average played the same or even tougher for a higher skill group of players which, in theory, shouldn't happen. The weird thing is if I remember the hole correctly, it wasn't a hole anyone would think was lucky. It's a relatively open 195 ft turnover to a pin on the hillside with one tree to avoid about 50 feet off the tee (Blueberry 9 short). I'll have to check the files and see.

ck34
Oct 19 2007, 03:01 PM
One additional thing that complicates the analysis when using ratings as the reference is that my 946 isn't the same on a per-hole basis as a 25-year-old Advanced player's. Because there's usually enough hole variety and balance on many courses, we'll shoot close to the same score on average. However, there's a good chance I'll have better scores on average on the more technical holes and he'll average better on the more open holes. So, when doing this hole analysis, having a lot of scores on the hole is important, along with using a consistent data set of players at a rating. In other words, it could be less accurate to compare the data on a hole between Pro Masters and young Ams with a similar rating range.

stevenpwest
Oct 20 2007, 02:27 AM
My replies. Your comments in quotes.

"awesome! can u send the spreadsheet? i wanna try some of that"

I don't have a workable spreadsheet ready to use. If I did, you'd see an analysis of Winthrop 17.

I figured once I introduced the notion of using the information about how well a hole sorts players, some refinements would be suggested. So, I didn't scale it up yet.

The major sea change is that our analysis has moved up from just looking at the average score, to looking at the scoring spread, to looking at the correlation between scores and player abilities. Next, Information Theory. But not tonight.

"Using player performance on other holes on a course seems statistically risky."

"I think that when you use total scores on all other holes, you only introduce more error from possibly skewed sorting effect on those holes. I agree that the most current PDGA player ratings are the better way to go here. "

There is a bit of risk of error from skewed scoring effects on other holes. It's the same risk that goes into the calculation of player ratings. Player ratings are a function of scores on other holes. Player ratings do use more data, so the errors have more chance to be washed out. Using more scores from more other holes has the same effect.

I chose to start with just the other scores from the same event, because then I would at least have something available to use as a player rating for all the players that played the hole in question. The data for the particular hole and the player ratings match up well, and that makes things easier. So, one can perform the analysis with nothing more than the results of a tournament. The idea that the data came from the same players under the same conditions had some appeal to me, too. But, the point is that scores from other holes are merely one measure of player ability. You could throw in more scores from other events to reduce the chance of error in the rankings of players.

The player rankings don't have to be perfect for the method to work well. It doesn't matter too much whether player x is the best, or third best, or whatever. Each hole by itself is only going to produce a handful of scores (even if totaled over several rounds of play), so one hole by itself will only be able to sort players into fairly large groups. Holes that don't sort, or sort randomly, should be revealed well enough even if the player abilities are only approximate or have a few errors.

"I think ratings would be the best, and perhaps league standings as an alternative when no ratings available, would be fine, too."

Use 'em if you got 'em. The only theoretical problem is if the hole being evaluated had much influence on the ratings. For example a coin-flip-hole that randomly assigned a score of either 1 or 20 would affect limited-round ratings, and therefore force a correlation to the ratings it helped create. Especially for a group of players with a narrow range of ratings.

"There's a simpler way to get at the numbers Steve is proposing. The calculation is a simple Linear Correlation which is the CORREL Excel function."

Well, sure, if you want to do it the EASY way. And can remember that -1 is a good thing. I'm perfectly willing to use the correlation between ratings and scores instead of Sorting Power. Especially if that means you'll add it to the Hole Forecaster.

"The scoring spread on the hole would look at what the range of scores are on that hole by players with exactly a 950 rating (for example). If they are mostly the same, the hole won't separate scores. If the numbers are spread, it will. That's the type of analysis the Forecaster does."

I'm not sure I understand what your point is. If all the players are rated 950 (and ratings are an unchanging perfect measure of skill), the hole should NOT separate scores. Any separation of scores would be the result of random factors. A theoretically perfect hole should lump all 950 rated players together. A hole's job is to separate and sort players who have DIFFERENT abilities.

"Based on all of the Forecaster studies I've done, I'm not sure there are too many holes where that "lucky distribution" will be the case."

Probably not many where the distribution looks so random as to be obvious. But, I'd bet there are holes where maybe 20% of the scores are randomly assigned. That kind of subtle scrambling of scores wouldn't be revealed without computing Sorting Power or Correlation.

Anyway, we will also be revealing which holes have better scoring distributions. Not just wider, but better. By better I mean distributions that provide more information about the players' skills. Two holes could have exactly the same scoring distribution, yet one could be much better at giving lower scores to players with higher ratings.

In time, I believe this will lead to more Par 4 and Par 5 holes, because they can generate more information.

"where a hole on average played the same or even tougher for a higher skill group of players which, in theory, shouldn't happen."

Kind of drifting here, but I don't think it would be too difficult to design holes that push the correlation up toward positive. North Oaks Golf Club is going through a redesign to make the course easier for beginners, and harder for pros. The example they gave was moving the bunkers out beyond the reach of all but the longest drives.

One final note, correlation between scores and ratings doesn't need to be limited to one hole. You could evaluate groups of holes or whole courses.

ck34
Oct 20 2007, 10:55 AM
One final note, correlation between scores and ratings doesn't need to be limited to one hole. You could evaluate groups of holes or whole courses.



I'm already looking at this and have found some interesting things. But I need to gather more data before getting a handle on whether it seems useful. Maybe it will turn out to be another factor for evaluating courses to be used in higher tier events.

I tested adding the correlation function to the Forecaster. However, I'm thinking I may just leave it in there for the overall scoring totals, not individual holes. I think we might need a separate page where scores from multiple rounds must be entered (at least 3 or 4?) before there's enough information for analysis on a single hole to become useful. In my first test, I only got weak correlations from -0.05 to -0.3 which isn't very strong on any hole but at least none had a positive value. For another course, after entering hole totals from four rounds, the numbers were closer to where you would hope with correlations no lower than -0.2 up to -0.5.

I may try to find some league data with maybe 10 rounds of hole scores by the same group of players to see how the correlations change from one round to four rounds to ten rounds, and get an idea how many we might need to really do this analysis properly on a single-hole basis. It may turn out that it's too difficult to have the type of controlled conditions over so many rounds to really draw conclusions that justify making changes on a hole.

ck34
Oct 20 2007, 11:52 AM
If all the players are rated 950 (and ratings are an unchanging perfect measure of skill), the hole should NOT separate scores. Any separation of scores would be the result of random factors.



Is it random, lucky or just human performance variance? If we know 950-rated players have developed the skill to land within 20 feet of any open, flat 275 ft hole one out of three times for a deuce, we would expect only one out of three 950 players to get a 2 on league night. Is that one player randomly lucky, or the one of the three who had the skill to do it that night?

In theory, if these 950 players played this hole 18 times like a normal round, you would expect their scores to cluster around 48 (6 birds and 12 pars on average) since they all have the same skill. However, since we know that on any given day a 950 player can shoot up to 6 shots above or below their rating, we might see scores ranging from 42 to 54 in this experiment, with two-thirds of them between 45 and 51.
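A toy simulation of just that birdie-or-par model (it ignores every other source of round-to-round variance, so the spread comes out a bit tighter than the plus-or-minus 6 described above):

import random

def simulate_round(num_holes=18, birdie_rate=1/3):
    """One round on 18 copies of the hole: a 2 one time in three, otherwise a 3."""
    return sum(2 if random.random() < birdie_rate else 3 for _ in range(num_holes))

rounds = sorted(simulate_round() for _ in range(10000))
print("average score:", sum(rounds) / len(rounds))                                  # clusters around 48
print("middle two-thirds:", rounds[len(rounds) // 6], "to", rounds[-len(rounds) // 6])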

We have no way of knowing whether a player who shot a 42 did it truly based on skill or whether it was just their random statistical day to get the hot score among the pool of 950 players. However, we all accord that hot score to skill and applaud their effort that day. The player who shot 54 just had the rough statistical day and hopefully knows he can potentially shoot a 42 some other day if they all play enough rounds on this course. This is assuming that these 950 players have stable skills and are neither improving nor declining overall.

Contrast this hole where 950 players get a deuce one out of three times with another hole where they might get 3s 95% of the time, perhaps one that's 400 ft, wide open, flat on a calm day. In a group of twenty 950 players, one gets a deuce by throwing in a 50 ft putt. Is that luck or skill enough to give them a trophy over the other 19? If this group plays this hole 18 times for a round, scores will tightly cluster around 54 with scores ranging from maybe 51 to 56.

Is the winner lucky or skillful getting the few deuces? Is it possible this person's skill set happened to include being a long thrower so that was their advantage on this particular hole among other 950 players with different skills? Is that fair in terms of challenging the mix of skills a 950 player may have?

My point is that scoring variance on a hole based on "skill" is important for separating how skillfully each player plays THAT DAY, since our events have far fewer rounds than the number needed for players of the same skill level to end up tied. If these 950 players play enough rounds on the same course with each other, though, they should all end up with about the same number of wins and cash, but enjoy the experience each round much more than they would if their scores on each hole were virtually the same every round.

stevenpwest
Oct 21 2007, 01:29 AM
In my first test, I only got weak correlations from -0.05 to -0.3 which isn't very strong on any hole but at least none had a positive value. For another course, after entering hole totals from four rounds, the numbers were closer to where you would hope with correlations no lower than -0.2 up to -0.5.



That's about what I expected, maybe even better. A single round for a single hole will only produce a few different scores. That will never produce a very highly negative (i.e. good) correlation. It will only separate the players into maybe 5 groups. The player ratings (or tournament scores or whatever) might have 72 different values. 5 values will never match up very well with 72.

I wouldn't be surprised if you are also finding that for any given tournament, you can use the results from just a few holes (cherry-picked after the fact) to almost completely replicate the results.

stevenpwest
Oct 21 2007, 02:01 AM
If all the players are rated 950 (and ratings are an unchanging perfect measure of skill), the hole should NOT separate scores. Any separation of scores would be the result of random factors.



Is it random, lucky or just human performance variance?



I was trying to hypothetically eliminate human performance variation in that quote.

A perfect hole (one that accurately sorted players by skill) would produce a spread that correlated to the variations in those 950 players' performance that day. Which is why my gut tells me to use their performance on other holes that day as the measure of player skill.

In our sport, we also have non-skill-related "luck" (unpredictable causes) like gusts of wind, changes in the course or the weather, and chaotic conditions (like whether a disc will fall over or start rolling downhill after a missed putt). A hole with more of these unstable conditions built in (target on a peak, water on the fairway, tightly packed trees, planes flying low) will show up as having a wide scoring spread with a relatively poor correlation to any measure of player skills. Perhaps these should be called Chaotic holes. These will not give as much information about player skills, and the better players would not want these to be used to award trophies. Correlation will help us identify these holes.

(Rec players might love them, but that's another thread.)


. . .one gets a deuce by throwing in a 50 ft putt. Is that luck or skill enough to give them a trophy over the other 19?



No. That's why the players need to play a whole bunch of holes to get a trophy.


. . .Is it possible this person's skill set happened to include being a long thrower so that was their advantage on this particular hole among other 950 players with different skills? Is that fair in terms of challenging the mix of skills a 950 player may have?



Only if the other holes in the competition challenge the other skills a player should have. But since this is ONE of the skills a player should have, a good hole can produce a correlation by showcasing one of the skills. A very good hole might challenge two of the skills. It would be asking too much of a hole to challenge all the skills simultaneously.


My point is that scoring variance on a hole based on "skill" is important for separating how skillfully each player plays THAT DAY . . .



Hence, the suggestion to use the scores from other holes played by the same players on THAT DAY.


. . . since our events have far fewer rounds than the number needed for players of the same skill level to end up tied. If these 950 players play enough rounds on the same course with each other, though, they should all end up with about the same number of wins and cash . . .



Actually, the more rounds, the less chance that the players will have the same number of wins. More rounds will tend to average out random factors, but only when measured as a percentage. The absolute differences will tend to diverge.

So, they'll tend to have about the same percentage of wins, but the number of wins will tend to diverge.
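A quick coin-flip sketch of that point, with two identically skilled players (purely illustrative):

import random

random.seed(1)

def average_win_gap(num_rounds, trials=2000):
    """Average absolute difference in wins between two equally skilled (coin-flip) players."""
    total = 0
    for _ in range(trials):
        wins_a = sum(random.random() < 0.5 for _ in range(num_rounds))
        total += abs(2 * wins_a - num_rounds)   # |wins_a - wins_b|
    return total / trials

for n in (10, 100, 1000):
    gap = average_win_gap(n)
    print(f"{n:5d} rounds: average gap of {gap:5.1f} wins ({gap / n:.1%} of rounds played)")

The gap in wins keeps growing as the number of rounds grows, even while the gap as a percentage of rounds played shrinks.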

But, back to the topic. If there was anything in your discussion that was trying to make the point that a large negative correlation between a hole's scores and player ratings is somehow undesirable, I missed it.