“If It Isn’t Fun.” – Minnesota Code
Background and History
During the 1950s, when most systematic population studies of heart disease began, it became important to have standard and quantitative means to compare disease rates. Diagnoses indicated on death certificates, or made by physicians with different training, might easily result in spurious differences. Even independent diagnoses by specially trained physicians were highly variable. The need for comparability in clinical assessment led us at Minnesota to consider the electrocardiogram as an objective measure for assessing heart disease, because electrocardiograms can indicate the heart disease manifestations of greatest interest: death and scarring of heart muscle (infarction), inadequate blood supply (ischemia), increased muscle mass (hypertrophy), and disturbances of rhythm or conduction. The electrocardiogram seemed particularly promising because it was an objective graphic record and amenable to standard procedures of collection, measurement, and classification. It also seemed ideal because it was so acceptable, painless, simple, and inexpensive.
Several hurdles soon became evident, however. Differences among physicians’ “impressionistic” reading of electrocardiograms were large. Sizable differences were found even by the same cardiographer reading at different times. And no objective criteria existed for abnormality or for specific cardiac findings, only impressionistic pattern recognition. Clearly, the challenge of standard criteria and measurement had first to be met before endpoint cardiac “events” and population rates could be reliably assessed and compared.
Our first approach was to compile all existing electrocardiographic criteria and attempt to sort them into quantitative classes. Where this was not possible, “reasonable” criteria were devised and empirically tested for sensitivity and specificity in well-defined populations that contained different proportions of normal subjects and patients having specific abnormalities. All these criteria were assembled, unambiguously described, rank-ordered by magnitude and amplitude according to a clinical impression of their importance, and assigned code numbers. The resulting classes were quantitative, mutually exclusive, and generally relevant to disease states. They were objectively described as Q-QS waves, or negative T-waves, etc. rather than given clinically diagnostic labels. Finally, records from many different living populations were classified and evaluated for “reasonableness” of the population distributions obtained. Test-retest reliability improved gradually among the physicians closely involved in development of the code — Pentti Rautaharju, Sven Punsar, Gunnar Blomqvist, and me.
Early on, between 1958 and 1959, we sent to other investigators involved in population studies an early version of the criteria and code. Someone, possibly Frederick Epstein at the University of Michigan or Geoffrey Rose or Ian Higgins in London, termed it the “Minnesota Code.” They began to use the criteria and provide suggestions for minor revisions, and were generally enthusiastic about an increasing ability to classify prevalence and incidence data quantitatively. About this time, we also shared records to assess variation in our coding. Despite the use of unambiguous and quantitative criteria, and our awareness of standard procedure, the coding variation was surprisingly large.
I have recently learned from Ian Higgins that it was very likely Aubrey Kagan of London, on temporary assignment to WHO in Geneva in the late 1950s, who dubbed our system the Minnesota Code.
The extent of observer variability should have been anticipated. Clear definitions and objective criteria do not guarantee, in themselves, similar application by different observers. Consequently, we gave more attention to the conditions of electrocardiographic recording and to measurement, and then to coding procedures and rules. Measuring devices were developed with magnifying loupes, particularly for assessing Q-wave duration to improve borderline classifications. Rules were devised to account for pattern variation among beats. The system was generally shored up by quality-control procedures and standardization, with a system of duplicate, independent readings and adjudication of differences. The “final” Minnesota Code and procedure was published in Circulation in 1960.
By the early 1960s, the burden of coding still lay mainly on me and a few visiting physicians in the Laboratory of Physiological Hygiene. Meanwhile, the volume of electrocardiograms from many population studies kept increasing. The high level of physician interest during the developmental phases of the coding system was superseded by boredom with what was becoming a tedious chore. At about the same time, independently, Rose in London and I in Minnesota hit upon the idea of coding by technicians. He approached the issue logically. My resolution of the problem was serendipitous, a special experience with electrocardiographic coding in my home.
In the summer of 1962, I was solicited by the National Health Examination Survey of the National Center for Health Statistics to classify electrocardiograms by the Minnesota Code on 6,000 individuals, a “true sample” of the entire U.S. population. I agreed to do it as a summertime avocation. It offered the princely reward of twenty-five cents a record, but seemed otherwise a worthwhile undertaking — until I received the actual shipment of electrocardiograms. They came as unmounted strips stowed in tiny cardboard cubicles as tightly wrapped cylinders some six to nine feet long. These had to be removed from the case, unrolled, held flat, read, measured, codified, tabulated, rerolled, and reinserted in the packing. The entire process took many times longer than reading, classification, and tabulation alone.
At the inducement of 1 cent per record, I was able to enlist the enthusiastic aid of two alert, very young, non-technical persons who happened to be close at hand that summer. While one, aged six, would extract and unroll the record, the other, aged seven, would hold it down until I coded it and then roll and refile it. Their summer wage came to 6,000 cents each, and mine, 6,000 x 23 cents. Their work reduced mine substantially, and as a side effect, nurtured a most pleasant relationship among us close relatives.
After only a few days’ experience, from ordinary curiosity, my young assistants could identify P, QRS, and T waves. After a little more time, they were careful to point out to me when a P wave or a T wave was “upside down” or when a Q wave was “big and fat” or an R wave was “too tall,” and so on. After a few more days, by which time they spontaneously associated the code numbers I was writing down with tall, flat, inverted, or prolonged waves, it was clear that they could, at age six and seven, become excellent ECG coders.
This experience, after I had read some 30,000 records a year for several years, convinced me that it was time to call a halt. In the fall of 1962, I began recruiting university students and instructing them in the coding procedure. Many hundreds of student coder alumni have, over the four decades since, helped in this service to national and international population studies, while at the same time helping themselves through school.
It was not originally my intention to develop a coding system for export, but to develop sound methodological procedures for assigning objective “events” related to cardiac disease for our extensive Minnesota-led surveys. The Minnesota Code became widely used, however, because it met a need in the burgeoning new field of cardiovascular disease epidemiology. It became used internationally when it was included in the World Health Organization monograph series, Cardiovascular Survey Methods, published in 1968. Since then, the code has been expanded and updated to classify endpoints for clinical trials and to better characterize arrhythmias and conduction defects. A detailed manual provides standard procedure for training in and application of the code. Because the electrocardiogram has a different set of “errors” from clinical assessments, and is independent of them, it complements clinical data from physicians who may inadvertently be biased by knowledge of drugs being administered, etc. A new set of criteria has been developed to estimate significant serial change in the electrocardiogram over time.
The original 1960 publication in Circulation was cited by the Cumulative Index Medicus as one of the more extensively referenced articles in scientific literature. A manual of procedure for training and testing in the Minnesota Code was written by Ron Prineas, Richard Crow, and myself, and was published in 1982.
It had long been our intention that electrocardiograms recorded on magnetic tape in field surveys would be read by machine. We have collaborated with numerous computer centers to develop Minnesota Code logic and software programs, most of which have succeeded, more or less. But the differing natures of man and machine also pose a problem. The computer’s determination of wave onset and offset, and of baselines, though much more repeatable, is often systematically different from human-visual interpretation. The current strategy of the Minnesota group, and of Pentti Rautaharju and the Epicore Electrocardiographic Center, is to use the computerized Minnesota Code for things the computer does best (i.e., measurement repeatability and averaging), and to complement its classification with things that the human does best (i.e., judging baselines, resolving borderline and ambiguous situations, and coding complex arrhythmias). Our groups have demonstrated that combined human-machine adjudication obtains a more reliable and valid electrocardiographic classification. For major new studies, completely automated measurement and classification is now available with NOVACODE, a new classification system developed by Rautaharju, in large part from the extensive experience recorded in the Seven Countries Study, and another by Jan Kors and his group in the Netherlands.
In visual coding of electrocardiograms, there is no absolute “right or wrong” classification. The job is to get as close to the truth, and as repeatably as possible, by the means taught in the Minnesota Code manual. After an instruction period, learning occurs best by reading electrocardiograms as rapidly as possible, by making a firm commitment to a code, and then comparing that decision to a standard. As one becomes a more confident and facile coder, the most important thing to remember is to remain scrupulously honest, and to read sets of cardiograms independently of all others’ readings. Afterward, it is important to be curious and to compare your independently coded findings with others’ independent coding. This helps one learn causes for coding differences; but in the end, “two heads are better than one.”
Few student coders at Minnesota go on to medical or health-related careers. However, the coding course allows students to become highly competent in a discipline that to this day remains something of a “mystique.” Student coders become experts in detecting, differentiating, and classifying most ECG findings. However, this technical function gives no basis for clinical interpretation or for the diagnosis of heart disease. Coders should never let themselves be put in a position to misuse their electrocardiographic reading skills outside the research setting.
For example, we once had a student who tried to sell her skills as an “electrocardiographic coder” to record rooms in Twin City hospitals. They, of course, had no idea what she was talking about and called us for fear that she was mentally unbalanced. The Minnesota Code is not standard in hospital practice. Rather, it is a system designed for rigorous population research and clinical trials. Later, we had another student who read her grandmother’s electrocardiogram as it was being made at bedside in the hospital: “Oh, what a nice big 7-1 you have, Grandma. You have a complete left bundle branch block!” It took some hours of medical persuasion, and sedation, to calm the elderly woman who thought she’d had a heart attack!
*Modified and reprinted by permission from Prineas R, Crow R, and Blackburn H. The Minnesota Code Manual of Electrocardiographic Findings. 1982. John Wright-PSG, Littleton, MA.
Blackburn H, Keys A, Simonson E, Rautaharju P, and Punsar S. The electrocardiogram in population studies: a classification system. Circulation 1960;21:1160.
Rose G, and Blackburn H. Cardiovascular Survey Methods. WHO Monograph Series No. 56. 1968. WHO Press, Geneva.