Can we come up with a number that determines how good a course is for competition - for the World Championships in particular?
Pictured is a young Stan McDaniel, Master Designer of several of the 2012 Worlds courses discussed below, sinking his final putt for his 1994 Masters World Champion title on one of the many Worlds courses designed by Master Designer John Houck, while Steady Ed Headrick, Legendary Designer of several Worlds courses, looks on. Despite their unparalleled credentials, and those of the other experienced designers who have created courses for Worlds, we're looking for an independent, objective measure of how well the courses at Worlds - or anywhere, for that matter - sorted out the competitors, rather than relying solely on the accolades and critiques of players and fans.
Courses are currently selected for Worlds based on a variety of subjective and objective criteria. Much of the time, the courses chosen are simply the ones judged toughest in the hosting city, at least for the Open field. But does that automatically make them the best courses for determining a champion? Is there a numerical value that can be calculated to objectively measure the "goodness" of the courses used?
Correlation is a statistical measure of how well two sets of data line up with each other. For example, if we graphed a person's height each year against their age from 0 to 18, it would produce a very good correlation value approaching 100%, because we're usually the same height or taller each year. Even then, the correlation would never reach exactly 100%, because growth spurts mean we don't grow the same amount each year.
We looked at the correlation between the ratings of the top Open players at Worlds and the total of the round ratings they shot over the two rounds played on each course at Pro Worlds over the past 10 years to see what we might find. If a course is suitable for this high level of competition, we hoped there would be a good correlation between a player's rating and the round ratings they shot on that course - simply put, better players should shoot better scores on average.
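The calculation described above is a standard Pearson correlation. Here is a minimal sketch in Python using made-up numbers (the player ratings and two-round totals below are hypothetical, not data from any actual Worlds event):

```python
import numpy as np

# Hypothetical data: each player's PDGA rating, and the total of the
# round ratings that player shot over two rounds on one course.
player_ratings = np.array([1035, 1028, 1020, 1012, 1005, 998, 990, 982])
round_rating_totals = np.array([2088, 2061, 2070, 2010, 2025, 1980, 1998, 1950])

# Pearson correlation: values near +1 (100%) mean the course ranked the
# players almost exactly by skill; values near 0 mean their scores were
# unrelated to their ratings.
r = np.corrcoef(player_ratings, round_rating_totals)[0, 1]
print(f"correlation: {r:.0%}")
```

With these invented numbers the correlation lands in the low 90% range: the better-rated players mostly, but not perfectly, shot the better totals.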
The rest of the story shares just a few highlights from this in-depth study. For those interested in more background and analysis from past Worlds and USDGC, please jump to the 7-page white paper on this topic called Correlation for Better Courses posted in the Course Development section.
This graph shows the correlation values for the three Charlotte Worlds courses played twice by the Open division. The big yellow area under the curve shows the statistical range of correlation values we could expect when this pool of Open players played a hypothetical "average course" that produced their ratings.
The correlation values shown in red for the three courses were quite a bit better than the average course range shown. From a statistical standpoint, all three of these courses did a good job letting the better players shine - that is, if you believe a course should provide a challenge where better players shoot better scores.
What if a correlation below average had been produced on one course, say just 54%? We could only say the course didn't do a good job ranking the players by skill at that specific event. We wouldn't have enough information to say the course is not very good for top-level competition in general, because we'd only have one correlation data point so far. We need many more tournament correlation data points to determine whether any of these courses - including the three Charlotte courses shown - is consistently better, worse, or about average. It's possible the next event at Bradford with a similar group of top players could produce a correlation of just 50%, 61%, or 75%. Those values would still fall in the yellow area of normally expected correlations for this course, and would start painting a more complete picture of whether Bradford continues to perform above average.
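The "average course" band in the graph can be approximated by simulation. The sketch below is an illustration only, not the method used in the white paper: it assumes a hypothetical Open field with ratings spread over 60 points, and assumes each round rating is simply the player's rating plus random noise with an assumed round-to-round spread of about 25 rating points. Repeating the two-round event many times shows how widely the correlation can swing on a perfectly "average" course:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Open field: 40 players with ratings from 975 to 1035.
player_ratings = np.linspace(975, 1035, 40)

# Assumed model of an "average course": each round rating equals the
# player's rating plus noise (assumed spread of ~25 rating points).
ROUND_SD = 25.0
N_SIMS = 10_000

corrs = np.empty(N_SIMS)
for i in range(N_SIMS):
    # Simulate two rounds per player and total their round ratings.
    noise = rng.normal(0.0, ROUND_SD, (player_ratings.size, 2))
    two_round_totals = (player_ratings[:, None] + noise).sum(axis=1)
    corrs[i] = np.corrcoef(player_ratings, two_round_totals)[0, 1]

# The middle 95% of simulated correlations - the "yellow area".
lo, hi = np.percentile(corrs, [2.5, 97.5])
print(f"expected correlation range: {lo:.0%} to {hi:.0%}")
```

Even with no course effect at all, the simulated correlations spread across a wide band, which is why a single event's correlation - whether 50% or 75% - can't by itself prove a course is above or below average.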
Correlation analysis is a tool that could be applied to evaluate much of our old and future PDGA tournament database to determine how well courses rank players of all skill levels in tournament play. This process wouldn't just be useful for the top players, but for everyone. Ideally, we would want our Gold level courses to have good correlations when top players play them and our White level courses to have good correlations when Intermediate and Rec players play them.
Once enough correlation data is compiled, we would want to figure out which course characteristics seemed to produce good correlations and which ones lowered them. That will take some effort over the long haul but will hopefully provide additional guidance to help improve our courses for all players.