Golf "science" is unfortunately anemic in the robustness of its exchange of views, the rigor of its methodology, and the directedness of its effort. Technology, also unfortunately, exacerbates rather than ameliorates this state of affairs. Science that shuns critique is pseudo-science in the worst sense. Currently, technologists develop golf training and monitoring devices without deep insight into what the teacher of golf needs to know or what the student of golf needs to learn. This creates a false priesthood of golf technologists who invent justifications and rationales for patterns of data-gathering determined almost exclusively by the technology itself, rather than developing technology to gather relevant data. This state of affairs, albeit accompanied by the seductiveness and "aura" of hard science, is mainly an unsound and wasteful injection of technology that causes misdirection and confusion in the golfing community and delays real progress in teaching and learning. What follows is a critique of golf technology and an analysis of why it gets off track, along with suggestions for correcting the problems.
Lack of intelligent development of technology
In golf, technology for measuring various parameters of the behaviors of the game is invariably developed by technologists who are not skilled practitioners of the game. There are a number of related reasons for this, primary among them being that expert golfers and golf instructors are not usually trained in science, and technologists are so specialized in their expertise that they have no time to acquaint themselves with the complex art and science of the skilled movements in golf. The result is that technologists, seeking applications for their expertise, turn their technical skills to golf and unilaterally design technologies that capture certain data about the behaviors.
To the extent the choices of parameters to measure are investigated and analyzed at all before the technology is selected and designed, the technologists have only a skimpy budget of time and effort for mastering the complexity of the skills at issue. They therefore content themselves with little more than a general acquaintance derived from a superficial reading of commonly accepted representations of the movement expertise, and at most a loose partnership with someone credited by the game's hierarchical guardians with passable knowledge.
For example, the Science and Motion PuttLab is a measurement technology designed by a neuroscience researcher skilled in hand movement disorders but completely unacquainted with golf, a mathematician and software designer also completely unacquainted with golf, and an itinerant golf club professional who happened to be handy. Given the ultrasonic signalling technology that measures putter head position during a putting stroke, the team then set itself to the issue of what to measure. The choice of what to measure is dictated by the capabilities of the technology -- in this case, the position in space and time of the putter head during the stroke. The parameters measured, then, are putter head orientation during the stroke, path of the putter head in space, and timing of the stroke motion, all in relation to a starting point and orientation set at initialization. Everything thereafter for data acquisition and display is derivative of these basic positional signals, using software algorithms to infer additional parameters. The end result is that ultrasonic signalling technology measures what it measures: putter head position. It does not measure body motion or allow clear inference about the golfer's movements that cause putter head motion. Moreover, the PuttLab system has no bearing on distance control (the most important skill in putting) or on reading putts to select a starting line, and only indirectly bears on aiming the putter face accurately.
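The point that every displayed "parameter" is merely derivative of raw head positions can be sketched in a few lines. The sample data, function names, and coordinate conventions below are hypothetical illustrations of the general idea, not PuttLab's actual algorithms:

```python
# Hypothetical sketch (NOT PuttLab's actual algorithms): given only
# time-stamped putter-head positions, every reported "parameter" must be
# derived from those positions -- nothing about the golfer's body is measured.
# Each sample: (t in seconds, x in cm along the target line, y in cm across it).
samples = [
    (0.00,  0.0,  0.0), (0.30, -4.0, 0.3), (0.60, -8.0, 0.8),   # backstroke
    (0.75, -4.0,  0.4), (0.90,  0.0, 0.0), (1.00,  6.0, -0.2),  # through-stroke
]

def path_deviation(samples):
    """Maximum sideways (y) excursion of the head from the target line."""
    return max(abs(y) for _, _, y in samples)

def tempo_ratio(samples):
    """Backstroke time divided by through-stroke time, split at the rearmost point."""
    top = min(samples, key=lambda s: s[1])          # sample with most negative x
    back = top[0] - samples[0][0]
    through = samples[-1][0] - top[0]
    return back / through

print(f"path deviation: {path_deviation(samples):.1f} cm")   # derived, not measured
print(f"tempo ratio:    {tempo_ratio(samples):.2f}")          # derived, not measured
```

Everything the display shows is an arithmetic consequence of the positional samples; nothing in the data stream says what the golfer's hands, arms, or torso did to produce them.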
The mathematical engineers at MIT have followed the same pattern in developing the iClub technology for measuring club dynamics in the full swing and in putting strokes -- a non-golfer seeking applications for his software and signal-capture technology chose golf, and then set about choosing parameters to measure based upon the capability of the technology. There are numerous other examples in golf technology wherein the technology is available first and is then directed at golf without careful consideration of the teaching-learning context and its unique demands for information.
What is wrong with this picture? In the scientific method, technology does not determine the quarry. Theory determines the quarry (phenomena to investigate), although technology may limit the capability of science to probe the phenomena (think: Fermilab, SLAC, CERN, and accessible energy regimes for probing high-energy phenomena). Fundamentally, scientific technology is designed to go where theory directs it. And theory in any field of expertise is the constantly updated explanation of existing data in a manner that accounts for all current data, predicts future lines of investigation to verify or falsify the theory, and convincingly contextualizes an understanding of the data in a wider, interconnected collection of related fields of expertise (think: Einstein and the Michelson-Morley "ether" experiments and the prediction of gravitational lensing of light; think: Richard Feynman and the theory of QED as later verified by experimentalists).
What should this picture look like? The technologist must first honestly acknowledge that he does not comprehend the current state of the field of expertise with sufficient breadth and depth to enable the intelligent choices of what needs measuring and why. Rather than have the technology itself dictate what may be measured, with the rationale for choosing these parameters following along wagging its tail appealingly behind, the technologist needs to be informed about what to measure and why before he designs the technology (think: Henry Cavendish and the torsion bar measurement of G, in which the torsion bar technology was designed to investigate and measure G; think: Galileo and the telescope designed for studying the heavens; think: all astrophysics technologies for exploring different energy realms of the universe from infrared to X-ray and gamma frequencies).
The priesthood of the technologists
Granted that the scientific method may follow empirical inductive pathways or theoretical deductive pathways at various stages, this does not mean that one approach is superior or more scientific than the other. The interplay between the two approaches makes them mutually supporting. Theoretical prediction isolates phenomena for study; experimental empiricism validates or falsifies theory; theoretically-informed reasoning explains discordant or anomalous data and incorporates new information into a revised theoretical comprehension that generates new predictions; and so forth. What does not happen, however, is empirical data-gathering without theoretical contextualizing.
But this is exactly what technologists find themselves doing. Why? The false credo of the measurer is "To measure is to know." This is unscientific. To measure is to gather data or information. The information may or may not be relevant to the phenomena under investigation, and the measurement may or may not be at a useable level of precision or in a format that lends itself to comprehension. To comprehend the relevance of information derived from measurement in terms of cause-and-effect is to attain useful knowledge that can be expressed in theoretical terms and that can be applied and tested further by the scientific community. Technologists in golf do not participate in this scientific methodology, and instead simply measure whatever the technology is capable of measuring.
The trouble with technology used in this fashion arises as soon as the technologist is called upon to explain the data and to teach golfers the practical use of the knowledge rightly comprehended (think: square peg must fit round hole). Almost without exception, technologists are forced by this situation to retreat to the formation of a data-determined "model" of ideal behavior. A "model" is typically distinct from a theory, in the technologists' world, because a "model" is seen as an empirically emergent "pattern" inherent in the behaviors of golf when collectively measured and therefore "revealed" with a sufficiently large data sample, whereas a "theory" is a product of reasoning that unifies the comprehended data of disparate fields of expertise in a manner that resolves anomalies in the data. In other words, the technologist distrusts and disavows reasoning and the conceptual views of others in strict favor of data. This "model" building, then, is implicitly essential to the technologist's methodological raison d'ĂȘtre, and comes with its own justification and badge of superiority: empirical data gathering is a better scientific approach than the deductive reasoning of the theorist, offering "real proof" whereas reasoning offers "mere concepts."
The rare exception is the technologist who describes "revealed patterns" in the population of elite golfers but disavows using the technology to teach a "model", and instead posits that the golfer should use the technology to make "consistent" whatever technique he happens to employ. This position allows the technologist to sidestep the instructional debate about what techniques rank as poor-good-better-best while still marketing to teachers and players. This reduced role nonetheless permits the technologist to claim the functions of a teacher, even though the technology here is merely adjunct to teaching and has no explicit role to play in moving a player from poor technique to better or best technique.
The technologist has the advantage of wrapping himself in the white robes of the lab and the mystical aura of so-called "hard science." But if the "modeling" process is examined closely, the lab coat begins to look much more like the gauze curtain concealing the Wizard of Oz at his dials. Firstly, the only pattern that could possibly emerge from collective data-gathering is that predetermined by the parameters chosen to measure in the design of the technology. Good choices, poor choices, informed choices, or default choices -- why these parameters rather than others? Secondly, the only pattern that could possibly emerge from this methodology is also predetermined by the sample population. This second problem is inevitably "handled" by the justification-hungry technologist by narrowing the population whose behaviors are measured to a communally accepted "elite" population of high-skilled golfers. The trouble here is that it is manifestly illogical to proceed from the fact that a given subpopulation of golfers is "highly skilled" in comparison to others to any of the following conclusions: a) measurements of the subpopulation reveal the "best" data pattern for the skills, as there is implicitly no skill level higher than that displayed by the elite subpopulation; b) mimicry of the data pattern of the elite golfers is a sound and effective way to improve golfers with lesser skills; c) members of the elite subpopulation cannot really benefit from the "model" in terms of improving current skills but can at best use the "model" for maintenance purposes; and d) the data pattern itself is its own "explanation" of the skills.
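The sampling point can be made concrete with a toy simulation. The parameter, the subpopulations, and every number below are invented purely for illustration; the only claim is the logical one, that the "revealed model" is fixed in advance by who gets measured:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Toy illustration: suppose "tempo ratio" is the measured parameter, and two
# hypothetical subpopulations happen to differ in it (numbers are invented).
elite  = [random.gauss(2.0, 0.1) for _ in range(50)]   # assumed elite distribution
casual = [random.gauss(2.6, 0.4) for _ in range(50)]   # assumed casual distribution

# Each sample "reveals" a different empirical "model" of the same skill.
model_from_elite  = statistics.mean(elite)
model_from_casual = statistics.mean(casual)
print(f"model from elite sample:  {model_from_elite:.2f}")
print(f"model from casual sample: {model_from_casual:.2f}")
# Neither "model" explains WHY either pattern produces the performance it does,
# nor whether some unmeasured population displays a still better pattern.
```

Averaging a chosen sample always yields *some* pattern; the pattern's emergence proves nothing about causation or optimality.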
But this is simply wrong-headed and unscientific. Science needs both the empiricist and the theoretician, working cooperatively to complement one another. Begging the question of what is best and how to teach it with a "model" of elite golfers is like hunting elephants with a microscope. What happens if the technologist is told by an expert golfer that there are golfers beyond his "model" subpopulation who display skills significantly different from and superior to those revealed by the "model"? In a word: "tilt".
The forced teaching role for technologists
If a technologist finds himself with a technology that generates data out of context of what is known about teaching and learning the skill, then he is either forced to "teach" using a derived "model" data set, or he frankly disavows teaching altogether and merely posits the value of the technology to as many and as diverse teaching approaches as possible, so that the technology has "value" to as broad a "market" as possible. This response is common to many technologies in golf. The TaylorMade "Motion Analysis Technology by TaylorMade" (MATT system) is presented by top technology chief Tom Olsavsky as follows: "We're not here to teach putting, to be sure. We fit the club to the swing, not the other way around." Similarly, the Swedish Putting Guide stroke training technology is explicitly NOT designed to train any one particular stroke style, but is adaptable to use by teachers with many diverse styles. And ALL commercial putter fitting systems explicitly disavow any connection with a preferred stroke style, as technologists in this area have repeatedly learned from experience that coupling instruction for improved stroke technique BEFORE fitting the putter to the golfer invariably reduces business (e.g., David Edel fitting system per personal communication January 2005; PING fitting system geared to "your preferences and goals"). Common sense rather convincingly holds that the golfer's postures and motion pattern should be improved before the equipment fitting or training device cements him into his stroke pattern. Yet in all these technological applications, the technologists disavow any role in teaching for improvement other than to have as many teachers as possible use their technology.
Disjuncture between learning and teaching
Thus, instead of questioning and testing whether the elite "model" represents the best display of skills or the most seminal skills for teaching lesser golfers or whether the use of the technology should be tied to specific teachings, technologists default to the "black box" approach to applying acquired "data" (versus "knowledge") without the necessity of understanding the cause-and-effect processes. The technology gathers a collective "model" from the data and then the same technology is used to measure whether the learner can produce data that closely mirrors the "model." The essence of a "black box" approach is that the input produces an output in a predictable enough manner but without any comprehension of the cause-and-effect processes by which input yields output. The technologist does not attempt to explain WHY the elite golfer performs the way he does, but simply uses trial and error to settle on which inputs most reliably and consistently generate "model"-like data outputs.
This situation sets up a war between technologists and teachers. The technologist has defined the so-called "scientific" ground with the technological design, but the teacher frankly has other fish to fry. Good teachers routinely do not use technology precisely because it detracts from the learning process. It is not uncommon for top golf teachers to use video technology ONLY to show to the student that the student's ideas of what is actually happening are not correct, so the student will harken more assiduously to the guidance of the teacher.
Why aren't the two on the same page? The short answer is that the hubris of the technologist combines lethally with golf's poorly defined standard of teaching science. The technologist assumes his choices of parameters to measure are relevant to understanding and teaching the phenomena, even though the parameters are chosen without benefit of deep insight into the skills and without a background in teaching the skills. Cloaked with this protective armor of delusion, the technologist further assumes he knows what is relevant and also that he can teach a student how to perform so that the student's data looks more like elite data. The golf instructor, for his part, typically does not have a very solid focus on the process of teaching in terms of skills evaluation, diagnosis of strengths and weaknesses, teaching protocols to advance with permanent and incremental improvements in skills, and the role of measuring and recording skills for evaluation, diagnosis, and progress monitoring. This lack of teaching-learning science on the instructor's part usually loses out in the golfing public's perception of which is more potent: the gee-whiz technology in the hands of the white-labcoat "scientist", or the golf teacher who is perhaps a good player but only a so-so instructor untrained in the use of science. This situation promotes a "scientific priesthood" in favor of the mere technologists, and this conferred status inexorably tempts the technologists to adopt the role of teacher as well.
That technologists are NOT teachers immediately reveals itself whenever a technologist is asked the disarmingly simple question: "Based upon your data, what do you SAY to a student so that his effort at the skill generates data more closely mirroring the 'model' data set?" The answer is always, without exception, a blank stare from the technologist. The communication process that is the heart of the student-teacher learning relationship is practically non-existent, and at best stunted and nearly unintelligible (e.g., "you should try to narrow your s.d. on this parameter").
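What "narrow your s.d." amounts to can be sketched concretely. The face-angle numbers below are invented for illustration; the point is that this kind of feedback is a statistic about outcomes, not an instruction a student can act on:

```python
import statistics

# A sketch of the kind of "feedback" the technologist can actually offer:
# a statistic over repeated strokes, not words a student can act on.
# Face angle at impact, degrees open(+)/closed(-), ten hypothetical strokes.
face_angles = [1.2, -0.8, 0.5, 2.1, -1.5, 0.9, -0.3, 1.7, -1.1, 0.4]

mean = statistics.mean(face_angles)
sd = statistics.stdev(face_angles)
print(f"mean face angle: {mean:+.2f} deg, s.d.: {sd:.2f} deg")
# "Narrow your s.d. on this parameter" tells the student WHAT varied,
# but nothing about HOW to move differently in order to change it.
```

The statistic is a description of the data, not a teaching protocol; the translation into bodily instruction is precisely what the technologist cannot supply.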
And yet, technologists find themselves hard pressed to resist offering instructional advice. For example, Dr Paul Hurrion of Quintic Biomechanics uses a $10,000 center-of-gravity platform to monitor movement of a golfer's center of gravity (pelvic-abdominal region) during a putting stroke motion. The data is: the COG wiggles a little. So? A superficial reading of golf instructional lore contains the usual cant: "lower body cannot move during the stroke". And Dr Hurrion translates this to golfers: widen your stance to stop the lower body and COG from wiggling in the stroke, as this cannot be a good thing. The result is that some top European players have putting stances wider than their stances on the tee box with a driver, and the muscles coordinating their upper and lower torsos carry undesirable tension in the stroke, to the detriment of fluidity and touch. In contrast, Ben Crenshaw acknowledges that his rear knee "gives" during the through-stroke, and Brad Faxon says candidly that he does not try to prevent leg action, at least in longer strokes.
The most striking modern example of this delusional science is seen in the application of EEG and "gaze-tracking" technology to putting. In an article in Golf Digest (December 2003) about the so-called "Quiet Eye" in putting, Dr Joan Vickers measures patterns of gaze change in putting. The data is that "better" golfers have a more organized pattern of saccadic gaze shifts plus a 2-3 second moment of non-shifting just prior to initiating the stroke whereas "poorer" golfers display a more "unorganized" pattern of shifts and a stillness period of only 1-2 seconds. Dr Vickers offers this "explanation" of why the gaze pattern of "better" golfers results in better performance: "Why is it essential that you develop a Quiet Eye when you putt? It's simple -- your hands are controlled by your brain, which gets valuable information about what to do from your eyes. As you putt, your brain needs to organize more than 100 billion neurons. These neural networks are informed by your gaze, and control your hands, arms and body as the stroke is performed. These networks will stay organized for only a short period of time; a window of opportunity opens that must be used when it is at its most optimal." That's extremely vague and assumption-riddled language, and there is NO science offered to support any of these explanatory statements. Dr Debbie Crews in the same article, using EEG technology, then "explains" what the "brain is doing" that makes this "better" pattern of gazing superior to the pattern used by "poorer" golfers. Dr Crews claims: "Over all, the good putter shows "harmonized" activity throughout the brain. This is similar to the keys on the piano. Certain combinations of notes create greater harmony than other combinations. They may not all sound the same but are beautiful when put together in specific patterns." This highly metaphor-laden language is also devoid of science.
The so-called "quiet eye" phenomenon may be a real cause-and-effect contributor to the improved performance of one measured subgroup over the other (actually, college golfers with high handicaps versus other college golfers with not-so-high handicaps), but the phenomenon as it stands is simply a by-product of what the technology measures. Whether the visual processing contributes (or not), or whether something else entirely underlies the performance difference (muscle movements, internal psychology of the more skilled golfers, etc.), cannot be disentangled from the data. Because there is at best a confused and superficial theory for what this gaze pattern might contribute to improved performance, there is no science to test and verify what cause-and-effect processes are at work in the golfer. And in the same vein, there is no intelligible basis for formulating what to SAY to the golfer about how to benefit from the phenomenon, other than "do what better golfers do."
Empirical wanderings without theoretical pathfinding
Why do technologists measure what they measure? The answer is: Simply because they can, which is the least intelligent reason. Technologists should measure what matters, or at worst what is likely to shed light on the phenomena under investigation for purposes of cause-and-effect comprehension. In golf, measuring the club motion is the least useful sort of measurement. Measuring what the golfer DOES to generate the club motion is a vastly superior choice, but even then the REAL quarry is how does the golfer do what he does, why does he do it that way and not another, and how might this behavior by the golfer be molded by instruction into a more efficacious pattern?
This approach cannot be undertaken without theory, and a strictly empiricist approach commonly employed today in golf science is doomed from the start because it does not design the technology with these questions firmly in mind.
So what SHOULD technologists do?
Technologists have valuable skills in data gathering and manipulation, but they are most emphatically NOT trained in what counts about golf skills or why. Before designing a technology, and front-loading its data-gathering capacities in restrictive ways, the technologist must learn to ask REAL experts what the relevant parameters for measurement are and what underlying cause-and-effect comprehension justifies these choices. This would avoid the "priesthood" effect and also avoid a lot of after-the-fact justifying of the technology and its square-peg-into-round-hole forcing.
For example, modern learning theory as informed by recent advances in neuroscience indicates that the old "rote" or "muscle memory" approach of sports scientists in the 1970s is seriously deficient, and needs to be augmented with cognitive structures and a deeper part-whole approach to skills. This means to technologists that there is "feedback" and then there is "feedback." Technologists uniformly rely on the outdated notion that repetitive exposure to "good" feedback information is sufficient for sound learning, and this is simply not the case. From the teacher's point of view, how the skill is performed best, and how the golfer student best gets this optimal performance accomplished, is what needs to be taught and learned. Raw "feedback" from the putter head, for example, has some relevance, but it is at a level removed from the more seminal "feedback" of HOW the golfer student moved to produce the putter head action. The former sort of feedback is typically called "knowledge of result" or KR, whereas the latter sort is called "knowledge of performance" or KP. But there is an even more important level of feedback -- HOW DID THE STUDENT GENERATE THE KP AND WHAT CAN THE TEACHER DO TO STABILIZE EFFECTIVE PROCESSES AT THIS LEVEL?
Frankly, technologists know scarcely anything at all about this level of teaching and learning. Not many teachers do either. But theorists do. The bread and butter of a good theorist for teaching golf skills is a sound understanding of the performance data in terms of cause-and-effect such that the theorists can communicate with a wide variety of students to successfully instill stable and efficacious performance strategies. The theorist uses technology to validate and invalidate teaching concepts and techniques, to assess the efficacy of techniques, to develop novel techniques, and to monitor progress. But the quarry is always "what to say to the student" so the student learns well.
An example of a good application of technology to learning and teaching about efficacious putting is that of Dr Norman Lindsay, who set about to study the causes of reduced skidding and backspin and the promotion of forward rolling in putts. To investigate this phenomenon, he devised the appropriate technology to capture the relevant data and allow for meaningful inferences about the physics involved, leading to knowledge for the design of putter heads and faces and lofts. His technology also allows verification and falsification of the claims of others.
This all means that the theorist cum teacher needs assistance from the technologist in defining WHAT TO MEASURE and WHY, calling upon the skill set of the technologist to indicate what may best suit the needs of the teacher. The teacher doesn't so much want to know whether the golfer student successfully moved the putter head straight back in the backstroke as he wants to know whether the golfer KNOWS not to use the muscles of his hands and arms to start the putter head back from its static resting position, due to the adverse effects caused by this flawed muscle activation pattern. The teacher does not so much want feedback about the putter face squareness coming forward as he wants information about the stability of the tempo and of the stroke pivot at the base of the neck in managing the forces of the stroke to promote consistent and accurate re-squaring of the putter face with minimal effort and attention. The teacher does not so much need a "picture" of a golfer student's brain during putting as he needs an understanding of what the brain processes should be in terms of cause and effect, and of how best to repeat these brain processes with minimal effort and attention. No technologist today looks in these quarters for parameters to measure, so there is really no useful "science" being generated. Instead, there is much smoke and little to no light.
Because of this state of affairs, there is vast room for improving the role of technology in promoting better golf, but technology is currently headed down a dead-end path. The good news is that there is such potential in the near future. The bad news is that this message is not a welcome one in the status quo. Technologists will need to drink deeply from a new firehose in order to shift to this more useful role. We'll see how it works itself out.
1 comment:
HI Geoff,
Nice article. You are thorough in your comments. I coach in the mental game arena of golf and agree strongly with your position. I wrote an article on learning which lends itself to this conversation in some ways. I thought you might find it interesting and/or useful for your readers. It leans more toward the "technical" approach to instruction and how this has a detrimental effect on most right-brain learners.
Accelerated Learning For Golfers
I bookmark very few golf blogs and enjoy yours. Keep up the good work.
Wade Pearse