Friday, January 29, 2010

Reevaluating Quarterback Rating as a Performance Tool

In football the quarterback rating statistic has always been a quirky point of emphasis. It is one of the chief indicators that pundits use to evaluate the performance of a quarterback in a given situation, be it when blitzed, in the two-minute drill, on the first drive of the game, etc. However, despite all the attention paid to the quarterback rating, it can be argued that the interpretation of the statistic itself is in error. Most view the meaning behind the quarterback rating as ‘the efficiency of a quarterback’. Although efficiency is useful, a better statistic would evaluate the influence of the quarterback in relation to that efficiency instead of focusing on efficiency alone. Quarterback rating should judge the prolific nature of the quarterback if it is to be an effective quantitative tool for measuring quarterback performance.

The formula used to compute quarterback rating (shown below) was developed by Pro Football Hall of Fame executive Don Smith in 1971.

Rating = (a + b + c + d) × 100/6, where each of a, b, c and d is capped between 0 and 2.375:

a = (Completions ÷ Attempts − 0.3) × 5
b = (Yards ÷ Attempts − 3) × 0.25
c = (Touchdowns ÷ Attempts) × 20
d = 2.375 − (Interceptions ÷ Attempts) × 25
The equation breaks down into four separate components: first, completion percentage, where 50% was used as the average benchmark. That is, an average quarterback performance involved completing 50% of passes. From that base point poor and high quality performance points were established at 30% and 70% respectively. The second part involves yards per attempt, with an average performance being 7, poor being 4 and high quality being 11. The third and fourth parts focus on the touchdown passes per attempt ratio, with an average performance being 5%, and the interceptions per attempt ratio, with an average performance being 5.5%. Overall an average performance netted 1 point whereas poor and high quality performances netted 0 and 2 points respectively. Finally, the 100 divided by 6 element was based on an average performance netting 66.7 on a 100-point grade scale. Note that 2.375 is the highest total allowable for any of the components.

The problem with the above methodology, as it relates to what should be the goal of the quarterback rating, is that the efficiency measure is extrapolated without bound. Basically the methodology projects a performance ad infinitum based on the current statistics. It is due to this inherent extrapolation that Quarterback A can complete 8 out of 10 passes for 168 yards with 2 touchdowns and 0 interceptions (statistically perfect rating of 158.3) and have a better rating than Quarterback B, who completes 32 out of 41 passes for 410 yards with 4 touchdowns and 1 interception (130.7). Quarterback A had a more efficient performance than Quarterback B, but which quarterback was more instrumental in the offense of their particular team? Clearly Quarterback B, but the quarterback rating does not accurately reflect that reality.
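
As a quick check of the arithmetic, here is a minimal Python sketch of the formula above applied to the two hypothetical stat lines (the function and variable names are my own):

```python
def passer_rating(comp, att, yards, td, ints):
    """Standard passer rating: four components, each clamped to [0, 2.375]."""
    clamp = lambda x: max(0.0, min(x, 2.375))
    a = clamp((comp / att - 0.3) * 5)     # completion percentage component
    b = clamp((yards / att - 3) * 0.25)   # yards per attempt component
    c = clamp((td / att) * 20)            # touchdowns per attempt component
    d = clamp(2.375 - (ints / att) * 25)  # interceptions per attempt component
    return (a + b + c + d) * 100 / 6

print(round(passer_rating(8, 10, 168, 2, 0), 1))   # Quarterback A -> 158.3
print(round(passer_rating(32, 41, 410, 4, 1), 1))  # Quarterback B -> 130.7
```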

Therefore, if quarterback ratings are going to continue to be used as quantitative measurement tools in quarterback evaluation, a cap needs to be assigned to curtail the inherent unbounded proficiency extrapolation. The best means to determine influence would be to relate the cap back to yardage. The following criteria, or something similar, would be suitable:

Yards = Maximum Quarterback Rating Possible

0-199 = 99.9
200-249 = 119.9
250-299 = 139.9
300+ = 158.3

With the application of these caps the overall formula for calculating quarterback rating would not change, but if a quarterback failed to throw for more than 249 yards it would not matter if the formula calculated a rating of 146.7 because officially the rating would be reduced to 119.9. Moving quarterback rating beyond simple efficiency and adding game influence increases its statistical power and its ability to accurately differentiate between high quality quarterbacks and quarterbacks that are only asked to do so much to aid their offense, which is supposed to be the real point behind quarterback rating in the first place.
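
A minimal sketch of how the proposed ceiling would bolt onto the existing formula (the thresholds mirror the table above; the function name is illustrative):

```python
def capped_rating(raw_rating, yards):
    """Apply the proposed yardage-based ceiling to a raw passer rating."""
    if yards < 200:
        cap = 99.9
    elif yards < 250:
        cap = 119.9
    elif yards < 300:
        cap = 139.9
    else:
        cap = 158.3
    return min(raw_rating, cap)

print(capped_rating(146.7, 230))  # -> 119.9, as in the example above
```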

Wednesday, January 27, 2010

In Search of Statistical Understanding

Mark Twain once said, “…There are three kinds of lies: lies, damned lies and statistics.” Unfortunately most people seem to have taken that statement to heart, shunning the usefulness of statistics in risk management and decision-making by either not using them or not bothering to learn proper statistics. The dearth of statistical information and analysis used by the general public has resulted in the common misrepresentation of various pieces of information due to a lack of sufficient reported parameters. This misrepresentation has created scenarios where inappropriate decisions were favored over more rational decisions, creating instances of inefficiency in already difficult situations. These scenarios are most commonly demonstrated in, but not limited to, the multitude of opinion polls that are conducted on a daily basis and supposedly dictate public policy.

Unfortunately statistical misrepresentation also infiltrates more serious issues such as medical decision-making. These misrepresentations stem either from a lack of understanding regarding how statistical theory actually operates or from a deliberate attempt to boost or lower the success rate of a particular product/treatment, and are perpetrated by patients, physicians and pharmaceutical companies alike. The chief concern among the public should be that inaccurate statistical analysis of these procedures at best results in a significant waste of time and money and at worst results in a greater probability of loss of life. Such a lack of statistical application is made worse by the fact that all of the relevant information is easily available, but simply not interpreted properly. Without an objective statistical analysis how is one able to discern the difference between one procedure/product vs. another? Testimonials are rarely an appropriate determining agent due to the real possibility of a conflict of interest. Overall it is troubling that there is such a lack of importance placed on an issue that would eliminate waste at almost no additional cost and carries a high probability of saving lives.

Cancer screenings and their associated false positives are a very common example of where ‘common sense’ and genuine statistical analysis part ways when coming to a conclusion regarding the result. For example, the generic example used many times to illustrate this point is: if there is only a 1% chance of a woman having breast cancer, and a mammogram has a 90% rate of accuracy at detecting cancer in an individual that has cancer and a 9% chance of recording a false positive (the mammogram detects cancer in an individual without cancer), there is only about a 9.2% chance that a woman with a positive mammogram actually has cancer. Such a low result is shocking to one’s natural intuition when considering only a 9% rate of false positives vs. a 90% rate of accuracy at detecting cancer, so why is roughly 9.2% the correct result?

The basic explanation for the ‘shock’ comes from minimizing the importance of the original probability that a woman has cancer. The result is easy to understand when comparing the false positive probability rate to the actual occurrence rate. The false positive rate is nine times larger than the actual cancer occurrence rate, thus for a test that is 100% accurate at detecting cancer in an individual with cancer there would be only about a 10% chance that a positive test actually detected cancer in an individual. In the above example the test accuracy was 90%, thus there is only about a 9.2% chance. Basically, even with zero statistical understanding, thinking about the issue properly leads one to the conclusion that the correct answer needs to be somewhere in the neighborhood of 10% due to the ratio between the false positives and the real positives. So in the end ‘common sense’ actually does coincide with statistical analysis as long as the ‘common sense’ used is legitimate. The sad thing is that even many physicians are surprised by this result despite the fact that they should be more in tune with such statistics.
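
The computation itself is one line of Bayes’ theorem; a minimal Python sketch using the numbers from the example (names are my own):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

print(round(posterior(0.01, 0.90, 0.09), 3))  # -> 0.092, i.e. about 9.2%
print(round(posterior(0.01, 1.00, 0.09), 3))  # -> 0.101, the 'perfect test' case
```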

So if there are many advantages to using statistics when making a decision, why do so many individuals elect not to use them? The most obvious answer is the inherent bias most individuals have against mathematics and math-related subject matter. Statistics inhabit the world of math, and the part of that world they exist in is not the happy, easy arithmetic neighborhood, but the difficult formula and theory neighborhood. Therefore, the application of statistics takes significant and real effort over simply punching a few numbers into a calculator; this required effort is another strike against statistics in a world where all things are expected to be fast and simple. The fact is that statistical analysis actually makes difficult decisions easier if used appropriately.

Another obstacle that may reduce the probability of statistics being applied in a decision-making process is a lack of certainty. Statistics do not generate a prediction of what will happen, but of what will most likely happen. Unfortunately this reality of statistics conflicts with the general psychological map that most people possess. Most individuals do not think in the context of an event happening 100 times and the probability associated with what happens each time over those 100 samples. Instead an individual focuses only on the single time that he/she will experience the particular event. This expectation leads to more trust in instinct (gut feeling) than in statistics. This mindset is unfortunate because statistics exist precisely due to the omnipresence of variability in existence, including in events beyond the reach of instinct.

A secondary aspect to this separation between statistics and certainty is a misunderstanding of statistics in general. Statistics generate a probability of occurrence for different possibilities over many different repetitions of the same general event. However, because a lot of events in general life do not have significant periods of repetition, individuals tend to view the outcomes of those events as the actual probability of occurrence rather than what statistics predict. Basically, the fact that a particular outcome only has a 3% probability of occurrence in a given scenario over 1000 tests will have little influence on the mindset of an individual that experiences that outcome 2 of the 3 times that scenario has occurred in real life. Such an experience may lead an individual to doubt the accuracy and/or importance of statistics in other aspects of existence, leading to the incorrect viewpoint that statistics are a waste of time and effort.

In fact the power of the statistical method may also turn off individuals, because even when they choose to use statistics they can easily be disappointed by the overall power of the test when their intuition tells them the result should be more meaningful. For example, suppose a brokerage firm wants to determine which of their 25 employees has been performing the most efficiently. An evaluation test is created that can identify the best performing employee with 97% accuracy. Based on statistical theory what is the actual probability that the best performing employee will be identified? Using Bayes’ theorem the evaluation test identifies the best performing employee only 57.4% of the time. Although correct statistically, it does not sit well with the typical person that a test initially believed to be 97% accurate in actuality picks out the right employee only 57.4% of the time.
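
The same Bayes’ theorem sketch reproduces this number, assuming ‘97% accuracy’ means the test flags the true best performer 97% of the time and flags any other given employee 3% of the time (the prior that a given employee is the best is 1/25):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(truly the best performer | flagged by the test)."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

print(round(posterior(1 / 25, 0.97, 0.03), 3))  # -> 0.574, i.e. 57.4%
```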

A third concern relates back to the aforementioned problem regarding available information and raises its own chicken vs. egg question. Certainly not all relevant information is going to be available to a decision-maker at the time of the decision. However, it does behoove an individual to have as much relevant information as possible regarding the issue. Unfortunately this belief does not appear to be the attitude of major polling groups and the news media, as they present extremely simplified questions without any expansive circumstances. This behavior raises the question: do these polling groups behave like this because they believe that the public wants simplicity and would not use additional information, or do they behave like this because they are lazy and/or incompetent and cannot attach important qualifiers to their questions? This question is important because if the public learns to value statistics in decision-making then one can better assess the probability that polling groups will change their behavior when collecting and presenting information to include more details.

The reason supplemental information and qualifiers are important is that issues are rarely as simple as polling questions suggest. For example the most common poll question in the recent healthcare debate was ‘do you support a public option?’ Wow, what an amazingly simple question overlaying a complex issue. The first error in the question is that of the ignorant respondent. The pollster assumes that each individual answering the question has relatively the same definition for what entails a ‘public option’, which is highly unlikely. In similar fashion the pollster assumes that each individual is aware of the definition for the term being used by those in Congress. Also the pollster does not inquire into the details surrounding the success or failure of such an issue; basically what the respondent would gain or lose if a public option existed or did not exist. None of the elements that go into creating a public option, and how they would influence the answer of the respondent, are discussed, which defeats the point of even asking the question.

The importance of these qualifiers can be seen in the following example. Suppose you ask the following question of 1000 people: ‘Would you like 10 dollars?’ It would be very surprising if any one of the respondents answered in the negative. However, what if an important piece of information, excluded from the first go-around, was added, and another 1000 people were asked this question: ‘Would you like 10 dollars which I just stole from that 5-year-old girl over there who is still being a baby and crying about it?’ Adding the information regarding the origin of the 10 dollars, another layer of complexity to the question, changes the question dynamic completely. Now it would not be surprising if the level of response flipped to an overwhelming ‘no’. What if, instead of a 5-year-old girl, the money was stolen from a billionaire? How would that shape the response curve?

Another problem with the media outlets and the way they diminish the importance of statistics is the inappropriate presentation of growth or decline percentages. Typically this information is presented as relative changes without illustrating the absolute numbers that represent those changes (absolute changes). Not looking at the absolute changes can lead an individual to radically erroneous conclusions. For example suppose from year x to year y it is reported that the GDP in a given country increases by 25% under President A whereas 5 years ago the GDP increased by only 5% under President B. Clearly President A must be doing a better job working with Congress to manage the economy, right? Not necessarily, as the GDP 5 years ago could have been 3 trillion whereas in year x the GDP was 400 billion. When looking at the absolute numbers the increase in GDP 5 years ago was 150 billion whereas the increase in GDP from year x to year y is only 100 billion. So despite a 5x increase in percentage between the two equidistant time periods, the actual increase 5 years ago was 1.5x larger than the increase from year x to year y.
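
The arithmetic is trivial once the base values are in hand, which is exactly the point; a short Python check of the example above:

```python
def absolute_change(base, relative):
    """Absolute change implied by a relative change (a fraction) on a base value."""
    return base * relative

print(absolute_change(3_000_000_000_000, 0.05))  # President B: 150 billion
print(absolute_change(400_000_000_000, 0.25))    # President A: 100 billion
```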

Reporting the absolute change is always better than reporting the relative change because, as described above, one can calculate the relative change from the absolute numbers, but cannot recover the absolute numbers from the relative change alone. Unfortunately, despite the above example, relative changes almost always sound larger than the corresponding absolute changes, and the media, in its ever expanding effort to attract public attention over actually informing the public, grabs the relative change number to make the headline seem more important than it might actually be.

Clearly there are obstacles that need to be overcome before statistics can be implemented on a large scale. Fortunately most of these obstacles revolve around misinformation rather than difficulty of understanding. This characteristic is favorable because misinformation is not tied to intelligence, but to communication and familiarity. Basically one does not need an advanced level of intelligence to understand and apply statistics.

At its heart statistics focuses on a search for and discovery of patterns, followed by a deduction of any significant meaning in those patterns and how they may impact future events. The problem is that it tends to be difficult to perform such an exploration and analysis methodology without a proper level of experience. This lack of experience is telling in that most people are exposed to their first significant statistics course, if they are ever exposed to one in the first place, in college. It is true that the concept of probability is frequently introduced earlier than college, but in most instances such introduction does not discuss statistics and its importance in sufficient detail. College exposure is typically far too late if a goal is to develop an appreciation and understanding of statistics and what role it plays in real life. Heck, most college students that take a statistics course lament the fact that they have to take it for their given major.

One reason why exposure to statistics occurs at such an advanced age is that most believe a strong core of mathematics is required before beginning study in statistics; otherwise the effort applied to learning statistics will be wasted due to a lack of understanding of general mathematical theory. Unfortunately this thought process is not entirely accurate because, although the study of statistics does involve advanced concepts in mathematics, there are other critical aspects to understanding the nature behind the results produced by statistical formulas.

For example one forgotten aspect of statistics is exploratory data analysis (EDA), which seeks to identify what the data is saying, not necessarily how it was calculated. EDA is an important aspect of understanding statistics because one needs to understand the context of the numbers that enter into and are spit out by statistical formulas. Also EDA focuses on using graphical information instead of formulas and theory, which makes it easier for younger students to both enjoy and understand. The application of EDA allows statistical analysts to understand why certain data should not be considered relevant for a particular statistical analysis, for the inclusion of outliers or irrelevant/inappropriate data generates errors in the end result. EDA leads to the understanding of why a question like ‘what are the flaws in the methodology used for data collection’ is important to ask and how to properly answer it.
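
As a toy illustration of the kind of reasoning EDA encourages, here is a short Python sketch (the data set is invented for the example) that flags suspicious points with the standard interquartile-range rule before any formal analysis is run:

```python
import statistics

data = [12, 14, 13, 15, 14, 13, 98, 12, 15, 14]  # one suspicious reading

q1, _, q3 = statistics.quantiles(data, n=4)  # quartile cut points
iqr = q3 - q1
outliers = [x for x in data if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]

print(outliers)  # -> [98]: worth asking how this point was collected
```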

Such analysis experience can be taught at an early level by giving students a list of data sets and details on how the data was generated, then asking which sets are accurate, which sets are trash and which sets are usable as long as certain steps are taken to ensure accuracy. Also students can be asked to comment on the relevance of the outcome of certain statistical tests on given sets of data without having to do the tests themselves. Thus, as a first step in renewing statistical thought in society, it would go a long way toward improving the attitude individuals have towards statistics if statistical reasoning were taught before statistical theory and formulas: the mindset of statistics before the math. The issue of teaching statistics is especially pertinent to education reform. If the point of education is to ensure a populace that has the ability to reason and communicate effectively with each other in society then teaching and applying statistical reasoning is essential to achieving this goal.

Monday, January 25, 2010

Digging into Anesthesia and the Potential Risks and Counter-Measures to Application

Anesthesia has always been an interesting aspect of the medical profession. The use of anesthetics has facilitated the evolution of medical surgery by eliminating physiological responses to the pain and stress of surgery. However, despite significant advances in medical technology and understanding, knowledge regarding the action of anesthetics remains shrouded in mystery even for those that specialize in their application. Different mechanisms of action have been proposed, but there is little certainty with regards to how anesthetics induce immobility, amnesia and loss of conscious awareness in the patient. Discovering how these influences occur would be an important step toward the development of new anesthetics that provide the surgical benefits of current anesthetics without the potential side effects, which would open the door to more surgical options for higher risk groups such as the elderly and young children.

Development of new anesthetics used to be something far on the back burner of science because currently used anesthetics appeared to be working fine; however, recent studies have demonstrated that in certain target groups, most notably young children and the elderly, the use of anesthetics may increase the probability of learning disabilities. Whether or not learning disabilities are attributable specifically to anesthesia in children has yet to be determined, but just an inkling of a relationship is enough to lead some parents into a protectionist stance, holding off surgeries that would be beneficial, but not essential, for their children. Such a mindset regarding the behavior of parents is not farfetched when considering the actions of some parents in response to the persistent myth of autism being related to vaccination. Therefore, an alternative anesthetic or application methodology needs to be developed because young children cannot go without surgery entirely, but also need to have the safest surgery possible.

To maximize the effectiveness of an alternative method, the mechanism behind anesthetic function must be better understood. The best place to start appears to be identification of how anesthetics induce loss of consciousness. Anesthetics fall into two main categories: intravenous agents used to induce anesthesia and volatile agents used to maintain anesthesia. Fortunately current empirical evidence seems to suggest that both intravenous and volatile agents share the same general neurological path of action, although each agent does have unique sites of action as well. This similarity in function is useful because volatile agents are typically easier to work with than intravenous agents when generating empirical data.

The first significant clinical information regarding anesthetics came from Overton and Meyer, who both noted that the more potent the anesthetic the more soluble it was in olive oil.1,2,3 This observation generated the correlation between anesthetic potency and oil solubility.4 This information originated the ‘unitary hypothesis’, which stated that inhaled anesthetics influenced lipid bilayer properties and that this influence somehow brought on anesthesia.5 The interaction with the lipid bilayer in the unitary hypothesis is thought to be non-specific.5 However, one downside to this theory was that any detected changes were small and required anesthetic concentrations much larger than those required to induce anesthesia, which implied greater complexity.6

Unfortunately for the ‘unitary hypothesis’ there are various pieces of evidence thought to disprove it. For example the lipid change induced by anesthetics is analogous to the change produced by a 1-2 degree C increase in body temperature, i.e. a small fever.5 However, a fever does not facilitate anesthesia apart from anesthetic agents. Also there are other molecules that generate similar lipid changes without any trace of anesthesia.7 In addition there are molecular exceptions to the lipid interaction behavior predicted by the unitary hypothesis and Meyer-Overton rules. These ‘non-immobilizers’ fail to quell motor reflexes despite having the appropriate chemical properties.8,9 Although there are some issues regarding some of the experimental methods associated with these studies, largely with the use of dipalmitoylphosphatidylcholine, it seems unlikely that the information is in such significant error that their conclusions could be considered wrong on a general level.5,9 That is, lipid changes may be a small part of the final anesthesia result, but seem unable to induce anesthesia alone.

Although the unitary hypothesis and other theories involving influences in the lipid bilayer affecting anesthesia may not be favored at the moment, studies have brought to light some interesting behavior between anesthetics and the lipid bilayer. For example anesthetic molecules distribute unevenly across the lipid bilayer, drawn to the more amphiphilic regions vs. the hydrophobic interior, which implies that stiff hydrophobic molecules should have less potency than molecules with a greater level of flexibility.10,11 Also this uneven distribution can influence the phosphocholine dipole in the case of certain anesthetics like halothane, which can in turn influence voltage-gated ion channels, coupling interactions between lipid and protein mechanisms.12

Despite any positive evidence for lipid based theories, protein-anesthetic interaction is still the favored choice among a vast majority of scientists when discussing possible mechanisms for anesthetic action, partly because empirical systems designed without lipids can mimic anesthetic pharmacodynamics.5 Unfortunately, if a protein interaction is involved the generic methodology to locate the ligand receptor on the protein appears to be inapplicable. Anesthetics are small, volatile and do not appear to be amenable to conventional assays, as clinical EC50 values are in the low millimolar range,5,13 which suggests low affinity binding and ligand-receptor interaction times that span milliseconds or less. Note that EC50 is defined as the concentration of an agent that provides a half-maximal activation of a target in vitro.9 Due to this limitation two criteria are commonly used to further judge anesthetic action: plausibility and sensitivity.5
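
For intuition about what an EC50 encodes, here is a small sketch of the textbook Hill dose-response model (a generic model, not taken from the cited papers; the 1 mM EC50 is an assumed illustrative value in the low millimolar range):

```python
def hill_activation(conc, ec50, n=1.0):
    """Fractional activation of a target under the Hill equation."""
    return conc**n / (conc**n + ec50**n)

for c in (0.1, 1.0, 10.0):  # concentrations in mM
    print(c, round(hill_activation(c, ec50=1.0), 2))
# 0.1 mM -> 0.09, 1.0 mM -> 0.5 (half-maximal by definition), 10 mM -> 0.91
```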

Plausibility usually looks at the extent of inhibition of excitatory action or enhancement of inhibitory action in neurons, through the suppression of glutamate or acetylcholine neurotransmitter release or the activation of GABA or glycine release respectively.5 Plausibility is a precursor to sensitivity in that for sensitivity to be relevant, plausibility must first exist. Sensitivity comes into play when an anesthetic both inhibits excitatory activity and enhances inhibitory activity.5 In this instance sensitivity is used to determine which influence is more prominent, because depression of excitatory activity can occur independently of inhibitory enhancement. Typically the sensitivity value is related back to an EC50 value, which although low can sometimes be used to differentiate action between anesthetics.5

Through sensitivity analysis, inhibitory ligand-gated channels have demonstrated more influence in the action of various anesthetics than other types of channels or inhibition of excitatory action.14 Enhancement of inhibitory action through either GABA or glycine makes sense, but how do anesthetics influence these channels? There are two possible explanations for this action. First, inhibitory ligand channels have specific receptors where anesthetics bind to induce influence, similar to agonists and antagonists. However, it is unlikely that receptors for multiple anesthetics exist in a population or configuration able to influence such a variety of inhibitory processes, thus if an anesthetic-type receptor exists it would have to be well conserved and have a general structure that facilitates binding of multiple anesthetics. The fact that anesthetics are not naturally occurring in the body limits the probability of the existence of such a receptor.

Note that the above statement relates to anesthetics binding to the same type or group of receptors (their own unique receptor). Some anesthetics do bind to existing sites on specific proteins. For example isoflurane is able to bind the picrotoxinin site, the location within the channel lumen of the GABAA receptor where the drug picrotoxinin binds.15,16 Despite these few shared sites the probability that isoflurane is able to consistently bind to the same site as something like propofol is extremely remote.

The second option is an alteration of protein function through a reduction in protein flexibility, i.e. a reduction in the natural global dynamics of the protein. Understanding how such action is accomplished requires first understanding how proteins act both alone and in consort with anesthetics. Proteins are not static structures, but undergo constant random thermally driven motion within a stable equilibrium structure (basically they can only undergo a finite number of conformational changes, which are afforded by the particular structure of the protein). Possible protein motions range from single bond fluctuations to movements of entire folded domains and secondary structures. This random motion persists until it is prevented by some form of chemical or structural impediment, like an appropriate ligand binding to a given receptor on the protein.

However, receptors are not the only position in a protein where outside molecules can influence protein dynamics. Even in their tertiary structures proteins have pockets of void space, which are commonly referred to as ‘cavities’.5,17,18 These empty regions of space typically influence how the protein dynamics proceed. It is believed that some of these ‘cavities’ are large enough that anesthetics are able to enter them and become temporarily trapped, restricting the ability of the given protein to continue its natural dynamic shift between conformational states.18 This trapping mechanism may explain the low EC50 values. Overton-Meyer correlation action is the top candidate for how the cavities interact with anesthetics.5 Further evidence to support the ‘cavity’ theory is that the more potent an anesthetic the greater its polarity, which is thought to increase affinity for these cavities.5

Unfortunately there is a potentially significant hole in the cavity theory: how does restricting protein dynamics lead to unconsciousness? For example if a particular anesthetic enters the cavity of a GABAA receptor (a post-synaptic receptor that facilitates an influx of chloride ions, which lowers the probability of neuronal firing), conventional wisdom states that for that receptor to enhance inhibition the new, less dynamic receptor structure must increase the probability of GABA binding. For such a probability increase to occur the remaining conformational switches, if more than one is still possible, must be at a greater binding ratio than those available before the anesthetic is administered. To better illustrate this point, suppose a non-anesthetic influenced GABAA receptor moves between 50 different states where 20 of those states can bind GABA. If an anesthetic in the cavity left only 25 states, at least 11 of those states would need to bind GABA for the cavity theory to make sense.
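
The arithmetic behind that example is just a comparison of binding-state fractions; a tiny Python check using the hypothetical numbers above:

```python
def binding_fraction(binding_states, total_states):
    """Fraction of conformational states that can bind GABA."""
    return binding_states / total_states

baseline = binding_fraction(20, 50)  # 0.40 before the anesthetic
# smallest number of binding states (out of 25) that beats the baseline ratio
print(min(k for k in range(26) if binding_fraction(k, 25) > baseline))  # -> 11
```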

Even assuming the above example actually occurs, the disconcerting question remains of why GABAA receptors would behave in such a way. Why is it more probable for 11 of the 25 states to bind GABA instead of 5 of the 25 states? This question is especially pertinent to the consistency of anesthetics. Anesthetics do not have a 60% probability of success; instead they have a 95+% probability of success. Their action is not some randomized infiltration of the cavities in which one GABAA receptor may end up with binding capacity at 4 of 28 possible states while another GABAA receptor has binding capacity at 20 of 22 possible states.

Therefore, if occupation of the cavities does create this probability-of-action advantage across different people then there must be some level of consistency in infiltration and structure. Basically the anesthetic must only be able to enter the cavity from certain conformational structures and must lock in a given structure set that increases the probability of action. The simplest means to accomplish the probability increase is to lock the receptor into a single static state that allows GABA binding. However, for such a reality to be plausible the cavity relationship to both the receptor class and the anesthetic would have to be evolutionarily conserved. Unfortunately such a notion creates a problem in that anesthetics are not naturally occurring in the body. In fact the body has its own means, largely through inhibitory agonists and excitatory antagonists, to regulate excitatory actions in neurons. So why would these receptors have specific conserved behavior in response to molecules they normally would never have encountered? The best solution may be that the size and charge of the anesthetic results in a generic lock. No conserved design is present; instead anesthetic action can almost be regarded as a form of ‘dumb luck’.

Although it would be helpful to have a baseline understanding of how consciousness manifests itself in the brain, such a goal is still under large and controversial investigation, especially regarding what level of coherence is required to ensure consciousness. Overall such a discussion is better left for another time. However, there is one significant element that all theories of consciousness agree on: consciousness seems to require EEG coherence in the γ frequency range (20 to 80 Hz).9

The issue of whether consciousness is triggered from a particular area of the brain versus sufficient coherence in firing between different areas of the brain is an interesting question, because if anesthetics are able to break down coherence to induce unconsciousness then certain responses may be reactivated without regaining consciousness. If possible, this reactivation regimen could be used as a counter-measure to filter out some of the side effects of anesthesia. For example suppose learning difficulties are caused by loss of coherence in a given region of the brain, but loss of coherence in that given region is not required for unconsciousness. If so, a secondary drug could be applied to block de-coherence of this region, which would lessen the negative influence of anesthetics. The first step may have already been accomplished in demonstrating a loss of coherence due to the action of anesthetics.19,20,21 Unfortunately, as a meaningful strategy the relevant information needed to apply a safe counter-measure seems very far away given the current knowledge about consciousness.

With all of that said, given the complicated nature of understanding anesthesia both through chemical interaction and the realm of consciousness, why is the study of anesthesia important? As previously alluded to, one of the principal issues regarding the use of anesthetics has always been whether or not the application of chemicals that induce a state of unconsciousness facilitates detrimental side effects in the brain. Bolstering the concern have been studies in mice and non-human primates that have demonstrated significant levels of apoptosis in neurons after exposure to anesthesia.22,23,24,25,26 The results of these studies were contingent on two critical factors: first, the age of the test subject; the younger the subject, the higher the probability of permanent damage and the greater the extent of that damage. Second, the duration and amount of exposure were critical in the total probability for and amount of damage. In general neuronal damage occurred if the subject was still within the age range that allowed potential neurogenesis and/or synaptogenesis.

Although these studies are disconcerting, there have been many previous studies in other model organisms regarding other physical or mental states that have drawn certain conclusions that have not translated to humans. Therefore, although young mice may suffer detrimental conditions under anesthesia there is no reason to assume that young children suffer the same way. At one point in time that attitude was valid; however, now there may be reason to be more concerned about exposing young children to anesthesia. In the past there was no significant study of how anesthesia affected young children, but Wilder et al.27 present evidence that the above conditions responsible for neuronal damage in mice and non-human primates also translate to young children.

The study in question identified neuronal damage from anesthesia through the severity and frequency of any learning disabilities suffered by those individuals in the study. Learning disabilities were classified as problems with one or more basic psychological processes involved in understanding or using spoken or written language, possibly manifested in difficulty listening, thinking, speaking, reading, writing, spelling or performing mathematical computations.27 The subjects in the study that underwent surgery did so before the age of four. The ceiling of age was set at four because the period of synaptogenesis is thought to occur through a child’s third year of life.28 The conclusions from the study identified, similar to previous studies in mice and non-human primates, that both a sufficient dose and exposure to multiple anesthetics were required to generate a significant increase in the probability of developing a learning disorder.27

Of course this study only provides some level of evidence of a link between exposure to anesthetics at a young age and the probability of developing a learning disorder; it was unable to demonstrate a conclusive link between the two. In addition there are some lingering questions regarding the study. For example the authors admit that broad criteria were established for diagnosing a learning disability in an attempt to maximize the number of children with learning disabilities available to detect effects, which could artificially increase the correlation ratio. However, it is unlikely that redefining learning disability in this case would significantly change the general conclusion that anesthesia does have an influence on increasing the probability of developing a learning disability. In the end changing the parameters that define a learning disability would only change the magnitude of the effect, not the fact that the effect exists. One interesting note on using a broad definition of learning disability is that the study was unable to identify any shading, i.e. that it was more probable that anesthesia would result in one specific learning disability over another. Of course such a bias must exist for it to be detected and it is unclear whether or not such a bias exists in the first place.

There are also questions regarding the general health of those children diagnosed with learning disabilities after surgery utilizing anesthesia vs. those children that did not require surgery. This is an important point to note because it is reasonable to suggest that children that need to have surgery at such a young age probably have more health problems than those children that do not. With these additional health problems, one can reason that a level of co-morbidity extending from these additional conditions could explain the future learning disabilities instead of the anesthesia.

Although co-morbidity concerns are a rational counter-hypothesis, there is reason to believe that in this particular situation they are not applicable because of two salient points. First, the authors conclude that there was no connection or trend associated with the overall health (number or severity of conditions) of the children and the probability of developing a learning disability. Second, only a few of the conditions that required surgery originated as or even evolved into brain conditions. It is difficult to blame a kidney condition for a learning disability. So, similar to a rebranding of ‘learning disability’, co-morbidity could alter the severity of the influence of the anesthetic, but it is unlikely that the general result would change.

So for the moment assume that this study has a level of validity in that it connects three different species (mice, non-human primates and humans) in which young, still neuronally developing members have a higher probability of brain damage when exposed to anesthesia than members that are not exposed. How does such a detrimental result occur?

One avenue to better understand anesthesia and its problems is to look at it through the scope of sleep. Sleep and anesthesia are very similar with regards to their outward projected state of consciousness. However, a myriad of empirical information identifies sleep as critical to the learning process whereas anesthesia may generate an increased probability of creating learning disabilities. What differences exist between these processes that could account for this apparent 180-degree switch?

As previously discussed, learning is one of the crux issues between natural sleep and loss of consciousness due to anesthesia. Differentiating between these states is important to understand where significant differences arise that could explain the different outcomes with regards to learning. There are two major types of sleep states that receive the brunt of the research attention: Slow Wave Sleep (SWS) and Rapid Eye Movement (REM). Both of these states are important for the growth and survival of humans, but what is their role in learning?

To begin, sleep consists of two major phases: Non-Rapid Eye Movement (NREM) and Rapid Eye Movement (REM). The components of the NREM cycle dominate the sleep cycle and consist of three different stages conveniently labeled I, II and III. NREM used to be divided into four different stages, but recently the American Academy of Sleep Medicine changed the total cycle allotment by removing stage IV and expanding stage III to cover both its originally defined stage and stage IV.29 These stages are commonly defined by three different empirical measures: brain wave activity measured by an electroencephalogram (EEG), muscle tone measured by an electromyogram (EMG) and eye movement, which can be viewed visually or more precisely measured with an electrooculogram (EOG).29 Of the three, an EEG is most commonly used to differentiate transitions between different states.

With regards to sleep an EEG identifies brain waves in four different patterns: beta, alpha, theta and delta. When awake, beta waves dominate, mixed in with a few alpha waves, almost no theta waves and zero delta waves. The key characteristics of beta waves are that they possess the highest frequency and lowest amplitude of the four waves and are the most de-synchronous (no consistency in pattern across multiple occurrences). This de-synchronous behavior is commonly explained by the variety of neuronal firings that occur at any given moment as different streams of sensory information are processed based on differentiating experience.

Alpha waves become more prominent when individuals focus on single activities, especially when that activity is not strenuous, like meditation. Not surprisingly, alpha waves have a longer period, greater amplitude and greater synchronicity in behavior.

Stage I begins NREM, where alpha waves give way to theta waves. The reason alpha waves dominate leading up to sleep is that healthy individuals rarely fall asleep when fully awake; instead there is a period of rest (the generic lying in bed with the lights off thinking). Sleep in Stage I is very weak and individuals rarely perceive any displacement of time when they wake from the first cycle of Stage I sleep (basically they think they did not fall asleep).

Entrance into and maintenance of Stage II is characterized by ‘sleep spindles’ and K-complexes, which are bundled together with a greater number of theta waves. Sleep spindles are spontaneous increases in wave frequency and K-complexes are single spontaneous increases in wave amplitude.29 Although Stage II is deeper than Stage I, most individuals can still be quickly aroused while in it.

SWS is represented by Stage III NREM sleep and is often referred to as deep sleep. The defining characteristic of Stage III sleep is the arrival of delta waves. Officially Stage III is defined as an epoch consisting of 20% or more slow wave (delta) sleep.29 Stage III marks the only stage of NREM sleep where dreaming has a reasonable, albeit small, probability of occurrence. The typical means of entering SWS is the activation of serotonergic neurons in the raphe system. These neurons are activated through thalamocortical neuron firing.

The length of SWS is largely influenced by the total time a subject is awake prior to entering SWS.30 This result suggests that SWS plays a critical role in the sleep process. In fact benzodiazepines and other sedatives actually decrease the total time in SWS despite increasing total sleep duration. Further exploration of the relationship between SWS and chemicals like benzodiazepines could be an important consideration in the link between learning and SWS in young children.

The second major phase of sleep is REM, which usually occurs some time after entrance into Stage III of NREM sleep. REM sleep commonly replaces Stage I sleep as the sleep cycle renews.29 REM sleep can be further broken down into two distinct states, tonic and phasic.29 The major criteria for REM sleep are fluttering/rapid eye movement, a rapid and low voltage EEG and muscle paralysis. Muscle paralysis is caused through the inhibition of motor neurons due to the release of MAO-A and MAO-B, which break down the monoamine-based neurotransmitters that are largely responsible for depolarizing these motor neurons.29

For a normal adult REM sleep comprises approximately 20-25% of total sleep, or about 90-120 minutes of an 8-hr sleep period.29 Early cycles of REM sleep are shorter and get longer in individual duration as the night continues. The total percentage of REM sleep is generally inversely proportional to an individual’s age, where younger individuals have a much higher total percentage of REM sleep. This relationship between age and REM sleep is one of the reasons REM sleep was theorized to play a significant role in learning and knowledge acquisition, because it is easier for younger individuals to learn and they have more learning opportunities. Although most associate REM sleep and learning in some context, there are some that believe in the niche theory that instead of reinforcing new neuronal connections made earlier in the day, REM sleep organizes a controlled ‘pruning’ of neuronal connections, basically facilitating the ‘unlearning’ of certain knowledge.31 REM sleep is also thought to aid in the enhancement of short-term creativity, largely through the reorganization of specific neuronal hierarchies due to the lack of acetylcholine and norepinephrine feedback.29

Whether or not it is directly connected to learning or some unrelated neuronal function, long-term elimination of REM sleep has demonstrated a negative influence on the survivability of an individual. Also there have been a number of studies that have demonstrated that sleep deprivation has a significant negative effect on learning, thus in some shape and form sleep is required for proper learning and memory consolidation. So the chief issue relates to what portions of sleep govern what aspects of learning.

Note that in no shape or form is memory consolidation the sole purpose of sleep; there are clearly other reasons and causes for sleep, but memory consolidation in some respect occurs during sleep. Also, for the purpose of this discussion there are two different aspects to learning: first, there is memory consolidation of simple tasks and second there is memory consolidation and synaptogenesis for complex tasks.

Anesthesia demonstrates bi-stability between cortical and thalamic neurons and slow oscillations (< 1 Hz) between up and down states.32,33 Some argue that this bi-stability creates gaps in the ability of the brain to integrate and process information. For example, when a subject is conscious, applied transcranial magnetic stimulation (TMS) generates a response of approximately 300 ms,34 versus a response of 150 ms in non-REM sleep.35 The difference in this response implies a loss of integration between neurons.

The length of sleep required for learning has never been considered an issue because of the general understanding that proper homeostasis of the biological system was best maintained by sleeping at least 7-8 hours over a 24-hour period. Despite this overlap due to general biological needs, some researchers have explored the issue of how much sleep is actually required to augment learning. One particular study identified that only 60 to 90 minutes of sleep were required to emulate the results in a texture discrimination test generated by fully rested test subjects.36 However, these limited periods of sleep need to include entrance into both SWS and REM sleep in order to generate an improvement in results.

In fact that study concluded that sleep periods that entered SWS but not REM eliminated the deterioration in performance seen in sleep deprived subjects but did not produce actual improvement. Naps that entered both SWS and REM eliminated deterioration in performance and improved performance. This result suggests that SWS may serve to stabilize performance in learning-related tasks while REM may actually facilitate performance improvement.36 Also there is reason to believe that nap-dependent learning has a retinotopic specificity similar to that reported for overnight improvement.36

It was also interpreted that a 90-minute nap can produce as much improvement as a night of sleep, and that a nap followed by a night of sleep provides as much benefit as two nights of sleep.36 However, it must be noted that the evidence suggesting such a conclusion did not test beyond two days and did not test consecutive nights of just napping with no long-term sleep. Thus, one cannot conclude whether or not long-term sleep is required to gain long-term improvement in learning. Also the testing used a relatively easy method to demonstrate improved learning, thus there is no evidence to suggest that napping affords the same ability to solve complex problems as overnight sleep.

Most accept that sleep does improve procedural memory, but unfortunately in the past there have been some issues regarding the influence of sleep on declarative memory and whether this improvement occurs in REM or NREM sleep. These issues arise largely from questions about acute fatigue immediately after sleep deprivation and about not controlling for circadian rhythms when attempting to differentiate between the drop-off in performance due to sleep deprivation and the improvement due to REM sleep in non-sleep deprived subjects.37,38,39

In addition to the concerns regarding circadian rhythms and acute fatigue, there are questions regarding the ability to produce evidence to support the involvement of REM sleep in memory consolidation.37 There are two general schools of thought for providing evidence to support the aforementioned claim: first, learning during waking hours should increase the amount of REM sleep; second, preventing REM sleep should significantly limit the ability to consolidate memories.

The motivation behind the first mindset is that increased learning will require more memory consolidation, leading to greater total duration of REM sleep. Critics of this evidence collection method cite that unless the test animal is learning a specifically assigned task, there is no way to confirm that the test animal is increasing knowledge acquisition over a given period of time because an animal is continuously learning. With regards to training the test animal to learn a specific task, critics contend that the common techniques applied to teaching the task are flawed, leaving them unable to confirm a genuine increase in REM sleep duration.37,38 For instance an increased level of stress derived from shock avoidance, or confusion/trepidation associated with appetitive reinforcement methodologies, could facilitate emotional or psychological changes that influence the rate and quality of sleep for the test animal, reducing the probability of any viable trend emerging from the study.

Overall most critics seem to avoid suggesting a potentially simple means of demonstrating some form of connection between increased learning and increased REM sleep: the introduction of harmless new stimuli. For example, adding an exercise wheel to a gerbil’s living environment without previous exposure. If there were any significant changes in REM sleep, it would be easiest to identify those changes in the first few days after the introduction of the wheel and its eventual use, because the gerbil would learn the purpose and correct operation of the wheel. The lack of any forced hardships (additional stress, food or water deprivation, etc.) removes the concern that any changes are driven by negative feedback from forced learning. The lack of specificity of the learning would allow researchers to identify general pattern changes in REM sleep without having to worry about associating specific learning techniques with specific changes.

Similar tests could be devised for human subjects by instructing individuals in various hands-on tasks where knowledge was lacking. For example, teaching a group of individuals that know nothing about automotive care how to change a tire or an oil filter would provide a new interactive task to increase the knowledge base of the individual. The selection of the type of activity may be where experimenters go wrong with human subjects, for instead of testing with hands-on new-task experiments the subjects are asked to take exams covering various subjects. The exam environment may not be conducive to learning because of the inherent stress involved (no matter how many times an individual is told a particular test is meaningless/without consequence, rarely will that individual think of the test that way) and the rote characteristics of the information in the exam.

Some have extended the relation of REM sleep and learning beyond learning new information to sheer intelligence. That is, individuals with higher IQs should experience more REM sleep than those with lower IQs. Clearly any attempt to create a correlation between these two elements is fraught with difficulties, as structural changes in the brain unrelated to intelligence may govern the inherent amount of REM sleep an individual experiences. Ironically, one of the most noted studies could not identify a correlation between intelligence and REM sleep.40,41

On its face the hypothesis that higher IQ would require greater amounts of REM sleep seems to make sense. However, as previously mentioned, SWS seems to be more responsible for general maintenance of memory and intelligence than REM sleep.36 New memory formation would require an increased level of synaptogenesis, and that process could occur in REM sleep. Unfortunately for experimenters, for higher IQ individuals most of that IQ was more than likely developed very early in life, thus there may be a slightly higher level of general maintenance required, but less opportunity to form new connections relative to an individual with a lower IQ. Basically for higher IQ individuals there is a higher probability that the relevant connections have already been formed, whereas lower IQ individuals have a greater number of new connections that could still be made. Therefore, in controlled environments, if REM sleep were involved in memory consolidation and learning through synaptogenesis, there would be a higher probability of an increase in REM sleep in the lower IQ individuals, not the higher IQ individuals.

This idea is better illustrated through the following analogy. It stands to reason that higher IQ individuals have more neuronal connections than those with lower IQs. These connections, typically created by greater dendrite formation and extension, can be regarded as roads. In a controlled environment where both a higher and a lower IQ individual are learning the same information, it can be assumed that roads A, B, C, D and E need to be constructed to successfully acquire the knowledge. For the higher IQ individual there is a higher probability that one or more of these roads was previously constructed for another separate task acquired earlier in life, a task the lower IQ individual has no knowledge of. Thus, the higher IQ individual may only need to construct roads A, B, D and E to acquire the knowledge. This reduced construction requirement reduces the total time required in REM sleep due to higher efficiency and less required road construction.

Despite the possibility of increased REM sleep in lower IQ individuals, overall it is probably unlikely that any correlation will be derived, simply due to the principle of over-saturation. One issue that may not have been considered is whether or not REM sleep time is overdrawn relative to the amount that is needed. Remember that evolution does not look to create the perfect system, just one that gets the job done. Perhaps all the REM sleep that an individual with a 180 IQ needs is 40 minutes vs. 30 minutes for an individual with a 130 IQ, or even vice versa, but both individuals receive an average of 100 minutes of REM sleep a night through natural processes. Thus, the required amount of REM sleep for both individuals is already received, making it extremely difficult to differentiate between the two based solely on intelligence.

Two important structures in the brain when considering memory are the hippocampus and the amygdala. Although believed to play a role in most memory functions, the hippocampus is thought to have a specific focus on spatial memory processing and memory retention.30 The amygdala also plays an important role in memory formation, especially for memories with significant emotional context.42 Due to their involvement in memory consolidation it stands to reason that it would be useful to further study these two brain elements regarding their role in sleep. First off, from an evolutionary standpoint, the larger the amygdala, and to a lesser extent the hippocampus, the longer the average amount of NREM sleep a creature will experience.30,43,44

During SWS neuronal firings move from the amygdala and hippocampus to neocortical sites, with the amygdala being the key region governing the anatomical change in sleep-derived memory consolidation.30,45 One currently explored model for memory consolidation holds that the initial neuronal firing originates from the hippocampus and amygdala during SWS (when sleep-derived memory consolidation occurs) and travels to neocortical sites. Then during REM sleep the ‘information’ encoded in this firing is relayed through the neocortex and eventually reflects back to the amygdala near the end of REM sleep.30 The importance of hippocampal and amygdalar neuronal firing in the consolidation of memories during sleep implies that greater memory capacity requires more neuronal connections in each region, which may account for the difference in size across species. Thus the unique roles of NREM sleep vs. REM sleep may also play an important role regarding the function and size of these regions.30

For example there appears to be no significant correlation between the size of the neocortex and sleep duration, unlike that seen for the amygdala and hippocampus.45,46 This difference could be explained by the fact that the neocortex seems to function more actively during REM sleep than NREM sleep; thus if REM sleep and NREM sleep have different functions related to memory consolidation, a break in size relationships would make more sense. One explanation relates REM sleep to sleep intensity in that larger brains engage in relatively more REM sleep,46 however, whether that relationship is maintained intra-specifically instead of just inter-specifically remains to be seen.

One of the principal criticisms lobbed against the function of sleep and its role in memory consolidation involves MAO inhibitors and their ability to eliminate the REM sleep portion of the sleep cycle. MAO inhibitors disallow the oxidation of monoamines by monoamine oxidases. The oxidation of monoamines is an important regulatory control in neurons because of the aforementioned role of the neurotransmitters dopamine, serotonin and norepinephrine in the onset and maintenance of sleep.29 In particular, REM sleep is driven by the absence of monoamines, an absence normally produced by MAO. Thus, with the elimination of REM sleep, opponents of memory consolidation theories believe that significant memory impairment should result in individuals taking MAO inhibitors. Some even go so far as to argue that use of MAO inhibitors results in memory improvement.47,48

However, as previously theorized in this discussion, most of the general memory consolidation that an individual would experience and require for the average day does not take place during REM sleep, but instead takes place during NREM sleep, most likely during SWS. REM sleep instead could be responsible for complex learning and the memory consolidation of those particular skills, deficits in which would be more difficult to detect in patients taking MAO inhibitors. Overall it is unlikely that there is no memory consolidation in REM sleep, due to the ongoing processes of synaptogenesis and synapse pruning (unlearning), but the vast majority of memory consolidation seems to occur during SWS instead of REM sleep.

It may seem like a tall order, but a general theory behind why and how sleep occurs will be important to determining the difference between natural sleep onset and anesthesia-driven sleep onset. The chief purpose of sleep appears to be restorative in nature for the brain, especially because sleep deprivation affects cognitive functioning more than physical functioning.49 Various theories have been proposed for the nature of this restorative methodology, be it a lack of direct (glucose) or indirect (glycogen) energy,50 a buildup of too much neuronal activity threatening a critical collapse51 or the accumulation of various metabolites.19 Currently the critical collapse theory is new and does not have much direct evidence to support it. Also, even if correct, such a theory does little to help differentiate a significant difference between anesthesia and natural sleep, so there will be no more discussion of it here.

Although this particular post will not get into the unique differences between the sleep theories relating to a lack of glycogen vs. a build-up of certain metabolites, note that in general the overall premise governing each theory is relatively the same: certain chemical quotas are met, which generates a cascade of responses that steadily increases the probability of sleep induction. Therefore, this similarity will be utilized when discussing how either theory relates back to differences in anesthetic-induced unconsciousness.

Assume for a moment that either the metabolite theory or the glycogen theory for sleep is correct. Why then is the amount of sleep one seems to require inversely proportional to age (babies require lots of sleep; seniors require very little)? There appear to be two possibilities. First, the rate of sleep-inducing metabolite synthesis (or of glycogen expenditure) is faster in younger individuals, thus the probability of sleep induction increases faster. Second, as one ages the probability rates for sleep induction decrease, thus requiring more metabolite/less glycogen to generate a significant probability of sleep.

Note that it is more rational to utilize sleep probabilities than a threshold value because of the existence of sleep deprivation. Basically, as one fails to sleep the probability of falling asleep increases. It is inappropriate to think of the trigger of sleep as a 0% probability of action that almost immediately becomes 100% after passing a certain point. Certainly the probability of sleep becomes 100% after a certain progression, but various other probability values are also present during the ascent to 100%. To ensure clarity think of it this way: the probability of falling asleep is less than 1% for any period of continuous consciousness under 22 hours; then the probability of sleep increases 2% for each 15 minutes awake after 22 hours.
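To make the toy model concrete, the following short Python sketch implements the illustrative numbers above (the sub-1% baseline, the 22-hour threshold and the 2%-per-15-minutes ramp are this post’s example figures, not empirical values):

def sleep_probability(hours_awake):
    # Toy model from the example above: below 1% for the first 22 hours of
    # continuous consciousness, then +2 percentage points per 15 minutes
    # awake, capped at 100%.
    if hours_awake < 22:
        return 0.01
    quarter_hours = (hours_awake - 22) * 4  # 15-minute increments past 22 hours
    return min(1.0, 0.01 + 0.02 * quarter_hours)

for h in (16, 22, 26, 30, 34):
    print(h, "hours awake ->", round(100 * sleep_probability(h)), "% chance of sleep")

Under these assumptions the probability only saturates at 100% after roughly 34.5 hours of continuous consciousness, which captures the gradual ascent described above.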

Overall the first option seems more plausible because the rate of metabolite synthesis/glycogen expenditure could change based on changes in neuronal growth and mapping. This explanation makes sense in that neuronal plasticity and growth occur at a more frequent pace in younger individuals. However, the second option also has a level of validity in that tolerance levels with respect to the probability of sleep could increase as one ages. As the brain becomes more complex in its wiring (more neuronal connections are made as one ages from infancy to adulthood) there could be an increase in the amount of metabolites required to increase the probability of sleep.

It could be at this point that the effects of sleep and anesthesia on learning diverge. Natural sleep occurs through the gradual build-up of metabolites/loss of glycogen over the course of a given day. Although it does make sense that there could be changes in the rate of synthesis due to undertaking certain tasks, more than likely those that are mentally taxing, none of these tasks change the synthesis rate to a point where sleep is induced immediately. Also, the progressive loss of glycogen or increase in metabolite concentration creates a control system in which these elements are not at saturation.

However, anesthesia involves molecules that appear to induce much more rapid changes to bring about a state of unconsciousness. Application of anesthetics results in a cascade reaction that initiates unconsciousness through an alternative pathway, probably the cavity methodology. To continue the state of unconsciousness anesthetics must be consistently applied, thus removing any natural flexibility from the neuronal response to the unconscious process. The reason adults are less susceptible to brain damage from exposure to anesthesia may be an increase in tolerance levels. The fact that there is limited synaptogenesis in the adult brain may also play a role, because the neurons on average are more robust than those in a young child.

With that said, a basic summary of anesthetic action and the role it plays in sleep can be generated as follows:

- Normal sleep functions by moving from a wakeful, generally desynchronized state, to a NREM sleep state that is highly synchronized, to a REM sleep state that appears desynchronized but is carefully synchronized to test current neuronal connections and ‘sprout’ new ones through synaptogenesis.

- With regard to memory, standard generic memories not requiring complex or higher order brain function are consolidated in NREM sleep. In REM sleep dreams could be used as a procedure to eliminate erroneous or inefficient neuronal connections. Also during REM sleep new neuronal connections are generated and strengthened through synaptogenesis. These new connections are slanted toward higher order thinking and problem solving. For example, if an individual learned a new skill, like changing the oil in his/her car, this information would be consolidated during REM sleep through new neuronal connections instead of being consolidated in NREM sleep. NREM sleep consolidation usually involves lower skill techniques or rote information similar to that used in experimental testing.

- The most probable means through which anesthetics work is the ‘cavity’ theory. As previously mentioned, in the ‘cavity’ theory anesthetic molecules get trapped in the void spaces between the molecules that make up the proteins governing receptors, which allow various ions to depolarize or hyperpolarize a given neuron. Normally these proteins are in a dynamic state, oscillating between many different conformations. When molecules bind at specific receptor sites on a protein, they temporarily limit this oscillatory nature, which significantly increases the probability of the respective ion channel opening, facilitating depolarization or hyperpolarization. When anesthetic molecules inhabit these void spaces they also heavily restrict the dynamic nature of these receptors, locking them into a single conformation. This conformation is able to produce an effect similar to the loss of consciousness seen in NREM sleep.

- Although similar to NREM sleep in outcome, the type of loss of consciousness produced by anesthesia does not result in any specific synchronization between different neurons. Instead of producing a unique and efficient synchronization to facilitate unconsciousness, anesthesia forces each receptor into a specific position, inducing unconsciousness on a single-neuron basis. This lack of synchronization in the induction of unconsciousness strips the neurons of any control. Because individuals must remain exposed to anesthesia over the course of the surgery, due to small ‘binding’ efficiencies, there is no relief from this forced, non-natural unconscious state; the anesthetic operates as a limiting factor at its saturation point. Basically natural sleep induces an unconscious state through finesse, using specific controlled firings and inhibitions, whereas anesthesia induces an unconscious state through brute force.

- Considering the aspect of neuronal damage in young children and non-human species, there are two primary rationales. First, the typical culprit in neuronal apoptotic situations with no outside pathogenic component is excitotoxicity. A neuron becomes depolarized far longer than normal, which allows for an excessive influx of calcium, which later leads to the activation of certain calcium-dependent secondary messengers that eventually result in apoptosis. However, for anesthesia-related apoptosis excitotoxicity does not seem to be a valid rationale because, in addition to an increase in inhibitory responses, application of anesthesia also reduces NMDA and AMPA activity (excessive NMDA activity is a critical precursor for excitotoxicity) in an inhibitory (GABA/glycine) independent manner.52 With increased inhibitory action and decreased excitatory action (both dependent on and independent of inhibitory action) it is unlikely that excitotoxicity is responsible for the neuronal death.

- The second rationale is that the neuronal death comes from an efficiency strategy. If a neuron is not actively firing when the brain perceives that it should be, the brain will typically get rid of it in an attempt to limit collateral damage, thinking that something is wrong with the neuron. Under anesthesia large quantities of neurons are not firing over extended periods of time, which may lead the brain to begin the process of eliminating them. Such a process likely does not occur during regular sleep because of the cycling between the calm, controlled, synchronized state of NREM sleep and the excited, more desynchronized state of REM sleep. In contrast, anesthesia locks a patient in a single quasi-NREM sleep state.

- The reason neuronal apoptosis is more harmful to young children and infants than to adults that receive anesthesia is the issue of redundancy. In young children the loss of neurons through apoptosis or any other means is more devastating because it severely reduces the probability of any future connections that may hub from the lost neuron. Returning to the road example, in young children typically there is only one road initially constructed from point A to point B. Once at point B, construction can take place from point B to points C-E. Later, roads could be constructed from points C-E back to point A, creating a secondary path between point A and point B. However, if point B is damaged before a road can be constructed between points A and B, it makes little sense to construct a road to point B. Thus, it will be more difficult to construct roads to points C-E and from those points back to point A. In adults those roads have already been constructed, thus the potential loss of point B is immaterial to the roads connecting point A with points C-E. In fact the traffic on the roads between point B and other points may be so significant that even after point B is destroyed there would be interest in rebuilding point B. In children, if point B is destroyed before significant ‘traffic’ can be consistently generated between point B and other points, there is less interest in rebuilding point B because it would be viewed as having little value.

It is unfortunate that excitotoxicity does not appear to be the principal reason for the apoptosis seen in young creatures exposed to anesthetics, because an interesting strategy for combating these results could have been to add a treatment of memantine to the anesthetic regimen. The action of memantine was previously discussed in the Alzheimer’s disease post. Of course memantine has a history in anti-cancer and Alzheimer’s disease treatment, so there is little information regarding its effect on young children; thus testing would have to be done to ensure that any deleterious effects are insignificant. However, if side effects were limited it could have been an effective strategy to reduce neuronal damage.

Unfortunately, if ‘lack of use’ apoptosis is the reason behind the possible increase in learning disabilities attributed to anesthetic use in children, then there does not appear to be a viable drug avenue. For example, trying to force an intermittent dual cycle like normal sleep by adding MAO agonists or some other stimulant would more than likely lead to disaster. Such a strategy would not have the necessary level of finesse: either the anesthetic and the stimulant would compete with each other, more than likely creating pockets of excitation and inhibition in the brain that could cause more damage than the anesthetic alone, or stopping administration of the anesthetic during surgery to add a stimulant would likely induce the body’s stress response.

Overall there is still a significant way to go before the scientific community can be confident in its handling of anesthesia and its related actions; however, with medical technologies improving the ability of physicians to diagnose and even operate on young children, understanding the risks associated with the application of anesthesia and how to counteract those risks is an important aspect of ensuring a successful surgery. Also, a better understanding of the inherent action of anesthetics may increase the base knowledge regarding human consciousness.

==
Citations

1. Meyer, H. “Welche eigenschaft der anästhetica bedingt ihre narkotische wirkung?” Arch. Exp. Pathol. Pharmakol. (Naunyn-Schmiedeberg’s). 1899. 42: 109-118.

2. Overton, E. “Studien über die Narkose, zugleich ein Beitrag zur allgemeinen Pharmakologie.” Gustav Fischer, Jena. 1901.

3. Miller, K. “Molecular mechanisms by which general anaesthetics act.” Mechanisms of Drugs in Anaesthesia. Hodder and Stoughton. 1993. 191-200.

4. Richards, C.D. “Critical evaluation of the lipid hypotheses of anesthetic action.” Molecular Mechanisms of Anesthesia. Raven Press. 1980. 337-351.

5. Eckenhoff, R. “Promiscuous Ligands and Attractive Cavities - How do the inhaled anesthetics work?” Molecular Interventions. 2001. 1(5): 258-268.

6. Franks, N, and Lieb, W. “Molecular and cellular mechanisms of general anaesthesia.” Nature. 1994. 367: 607-614.

7. Buck, K, Miller, A, and Harris, R. “Fluidization of brain membranes by A2C does not produce anesthesia and does not augment muscimol-stimulated 36Cl- influx.” European J. Pharmacol. 1989. 160: 359-367.

8. Koblin, D, et al. “Polyhalogenated and perfluorinated compounds that disobey the Meyer-Overton hypothesis.” Anesth. Analg. 1994. 79: 1043-1048.

9. Rudolph, U, and Antkowiak, B. “Molecular and Neuronal Substrates for General Anaesthetics.” Nature Reviews – Neuroscience. 2004. 5: 709-720.

10. Baber, J, Ellena, J, and Cafiso, D. “Distribution of general anesthetics in phospholipid bilayers determined using 2H NMR and 1H-1H NOE spectroscopy.” Biochemistry. 1995. 34: 6533-6539.

11. Xu, Y, and Tang, P. “Amphiphilic sites for general anesthetic action? Evidence from 129Xe-[1H] intermolecular nuclear Overhauser effects.” Biochim. Biophys. Acta. 1997. 1323: 154-162.

12. Koubi, L, et al. “Distribution of halothane in a DPPC bilayer from molecular dynamics simulations.” Biophys. J. 2000. 78: 800-811.

13. Franks, N, and Lieb, W. “Temperature dependence of the potency of the volatile general anesthetics: Implications for in vitro experiments.” Anesthesiology. 1996. 84: 716-720.

14. Harrison, N, et al. “Positive modulation of human gamma-aminobutyric acid type A and glycine receptors by the inhalation anesthetic isoflurane.” Mol. Pharmacol. 1993. 44: 628-632.

15. Edwards, D, and Lees, G. “Modulation of a recombinant invertebrate γ-aminobutyric acid receptor chloride channel complex by isoflurane: effects of a point mutation in the M2 domain.” Br J Pharmacol. 1997. 122: 726-732.

16. Gurley, D, et al. “Point mutations in the M2 region of the α, β, or γ subunit of the GABAA channel that abolish block by picrotoxin.” Receptors Channels. 1995. 3: 13-20.

17. Brunori, M, et al. “The role of cavities in protein dynamics: Crystal structure of a photolytic intermediate of a mutant myoglobin.” PNAS. 2000. 97: 2058-2063.

18. Carugo, O, and Argos, P. “Accessibility to internal cavities and ligand binding sites monitored by protein crystallographic thermal factors.” Proteins. 1998. 31: 201-213.

19. Alkire, M, Hudetz, A, and Tononi, G. “Consciousness and Anesthesia.” Science. 2008. 322(5903): 876-880.

20. Miller, J, and Ferrendelli, J. “Characterization of GABAergic seizure regulation in the midline thalamus.” Neuropharmacology. 1990. 29(7): 649-655.

21. Angel, A. “Central neuronal pathways and the process of anaesthesia.” Br J Anaesth. 1993. 71(1): 148-163.

22. Jevtovic-Todorovic, V, Benshoff, N, and Olney, J. “Ketamine potentiates cerebrocortical damage induced by the common anesthetic agent nitrous oxide in adult rats.” Br. J. Pharmacol. 2000. 130: 1692-1698.

23. Olney, J, Wozniak, D, and Jevtovic-Todorovic, V. “Drug-induced apoptotic neurodegeneration in the developing brain.” Brain Pathol. 2002. 12: 488-498.

24. Jevtovic-Todorovic, V, et al. “Early exposure to common anesthetic agents causes widespread neurodegeneration in the developing rat brain and persistent learning deficits.” J Neurosci. 2003. 23: 876-882.

25. Mellon, D, Simone, A, and Rappaport, B. “Use of Anesthetic Agents in Neonates and Young Children.” Anesthesia & Analgesia. 2007. 104(3): 509-520.

26. Slikker, W, et al. “Ketamine-induced neuronal cell death in the perinatal rhesus monkey.” Toxicol Sci. 2007. 98: 145-158.

27. Wilder, R, et al. “Early Exposure to Anesthesia and Learning Disabilities in a Population-based Birth Cohort.” Anesthesiology. 2009. 110: 796-804.

28. Rice, D, and Barone, S Jr. “Critical periods of vulnerability for the developing nervous system: Evidence from humans and animal models.” Environ Health Perspect. 2000. 108(Suppl 3): 511-533.

29. Wikipedia entry on Sleep: http://en.wikipedia.org/wiki/Sleep

30. Capellini, I, et al. “Does sleep play a role in memory consolidation? A comparative test.” PLoS ONE. 2009. 4(2): e4609.

31. Crick, F, and Mitchison, G. “The function of dream sleep.” Nature. 1983. 304: 111-114.

32. Williams, S, et al. “The ‘window’ component of the low threshold Ca2+ current produces input signal amplification and bistability in cat and rat thalamocortical neurons.” J. Physiol. 1997. 505: 689-705.

33. Fuentealba, P, et al. “Membrane bistability in thalamic reticular neurons during spindle oscillations.” J. Neurophysiol. 2005. 93: 294-304.

34. Massimini, M, et al. “Triggering sleep slow waves by transcranial magnetic stimulation.” PNAS. 2007. 104(20): 8496-8501.

35. Massimini, M, et al. “Breakdown of cortical effective connectivity during sleep.” Science. 2005. 309(5744): 2228-2232.

36. Mednick, S, Nakayama, K, and Stickgold, R. “Sleep-dependent learning: a nap is as good as a night.” Nature Neuroscience. 2003. 6(7): 697-698.

37. Siegel, J. “The REM sleep-memory consolidation hypothesis.” Science. 2001. 294: 1058-1063.

38. Smith, C. “Sleep states and memory processes in humans: Procedural versus declarative memory systems.” Sleep Med. Rev. 2001. 5: 491-506.

39. Vertes, R. “Memory consolidation in sleep: Dream or reality.” Neuron. 2004. 44: 135-148.

40. Mayes, S, et al. “Non-significance of sleep relative to IQ and neuropsychological scores in predicting academic achievement.” Journal of Developmental and Behavioral Pediatrics. 2008. 29(3): 206-212.

41. Smith, C, Nixon, M, and Nader, R. “Post-training increases in REM sleep intensity implicate REM sleep in memory processing and provide a biological marker of learning potential.” Learn. Mem. 2004. 11: 714-719.

42. Josselyn, S, Kida, S, and Silva, A. “Inducible repression of CREB function disrupts amygdala-dependent memory.” Neurobiol Learn Mem. 2004. 82: 159-163.

43. Buzsaki, G. “Memory consolidation during sleep: a neurophysiological perspective.” J. Sleep Res. Suppl. 1998. 7: 17-23.

44. Roth, T, and Pravosudov, V. “Hippocampal volumes and neuron numbers increase along a gradient of environmental harshness: a large-scale comparison.” Proc R Soc Lond B. 2008. 276: 401-405.

45. Yoo, S, et al. “The human emotional brain without sleep – a prefrontal amygdala disconnect.” Current Biology. 2007. 17: 877-878.

46. Lesku, J, et al. “A phylogenetic analysis of sleep architecture in mammals: the integration of anatomy, physiology and ecology.” Am Nat. 2006. 168.

47. Georgotas, A, Reisberg, B, and Ferris, S. “First results on the effects of MAO inhibition on cognitive functioning in elderly depressed patients.” Archives of Gerontology and Geriatrics. 1983. 2: 249-263.

48. Vertes, R, and Eastman, K. “The case against memory consolidation in REM sleep.” Behavioral and Brain Sciences. 2000. 23(6): 867-876.

49. Horne, J. “Sleep function, with particular reference to sleep deprivation.” Ann. Clin. Res. 1985. 17: 199-208.

50. Benington, J, and Heller, C. “Restoration of brain energy metabolism as the function of sleep.” Progress in Neurobiology. 1995. 45: 347-360.

51. Pearlmutter, B, Houghton, C, and Tilbury, S. “A new hypothesis for sleep: tuning for criticality.” 2006.

Monday, January 18, 2010

Devising a College Football Playoff System

It seems like after every college football season a number of fans lament that at least one team was not properly afforded the opportunity to compete for the National Championship. This lament frequently results in the desire for some form of playoff system, bolstered by the citation that almost all other sports at both the collegiate and professional level have playoff systems, thus it is only rational that the most popular college sport have one as well. However, most individuals enjoy the comfort of their opinion that there should be a playoff without bothering to think about how to establish one and, more importantly, how to neutralize the obstacles that would prevent the adoption of a playoff system.

The first step in addressing the playoff issue is to avoid putting the cart before the horse: not designing a playoff system before understanding how that design will eliminate the problems in the transition from a single BCS Championship game to a playoff system. The largest problem in transitioning between systems is also the element that is rarely addressed in depth: the amount of money involved for the schools and the conferences due to their participation in the bowl system. Currently there are 34 bowl games (5 BCS level games and 29 non-BCS level games) with 2 games (the Yankee Bowl and the Dallas Football Classic) to be established in the next two years. The total payout distributed to college universities from these bowl games amounted to approximately 148.16 million dollars in 2008, with an additional possible 6 million from the two potential aforementioned future bowls.1 A playoff system would have to recoup some if not all of that money, depending on how the future playoff system was designed with respect to the bowls.

However, the money involved in the bowl season is not as clear-cut as most people believe. Universities only receive a small fraction of the money that is awarded for a specific bowl appearance, not the millions of dollars a bowl reportedly awards a team. The reason for this is that all of the universities affiliated with a specific conference pool all of the awarded bowl money and then distribute it based on the given formula for the particular conference (revenue sharing among teams). For example, most conferences take all of the money awarded to each of their affiliated teams playing a bowl game and create a single payment fund. From that fund, based on pre-determined rules established by the conference, each team participating in a bowl game receives an expense account determined largely by which bowl the team is attending. Once all expense account money is distributed to bowl participants, the remaining money is distributed to each team affiliated with the conference as prescribed by the conference. Typically this distribution is equal between all teams regardless of whether or not they are participating in a bowl game.

Rather than use real conference information, because it varies from conference to conference, the following example should suffice as a clear descriptor of the above distribution process. Suppose generic Conference A has 10 teams and 5 go to bowl games. The conference champion, Team A, goes to bowl game A which pays out 17 million dollars, but is not the National Title game; the conference runner-up, Team B, goes to bowl game B which pays out 3.5 million dollars; the next two teams, Team C and Team D, go to bowl games that pay out 1.5 million each; and Team E goes to a bowl game that pays out 750,000 dollars. All of that money is accumulated in a 24.25 million dollar pot. Then, based on the bowls each team is attending, money is distributed for expense accounts. Conference rules typically generate a fixed amount based on the range of bowl payouts, similar to the schedule below:

- If a team goes to a bowl with a payout/receipts of 1.4 million dollars or less the team will receive a 650,000 dollar expense allowance;

- If a team goes to a bowl with a payout/receipts between 1.4 million and 2.5 million dollars the team will receive a 1 million dollar expense allowance;

- If a team goes to a bowl with a payout/receipts between 2.5 million and 4 million dollars the team will receive a 1.5 million dollar expense allowance;

- If a team goes to a bowl with a payout/receipts greater than 4 million dollars the team will receive a 2.25 million dollar expense allowance;

- If a team goes to the National Championship Game the team will receive a 2.3 million dollar expense allowance;

Note that there are also additional funds awarded for an expense account based on a dollar per mile ratio ($250 per mile that needs to be traveled to reach the bowl location). However, in the above example these dollar per mile ratios will not be explicitly included.

Thus under such rules Team A would receive an expense account of 2.25 million dollars, Team B receives 1.5 million dollars, Teams C and D receive 1 million each and Team E receives 650,000 dollars. After subtracting those amounts from the total pool of awarded money, the conference then divides the remaining money among all of the teams (note that some conferences keep a share of the money for general conference business). So assuming that Conference A does not keep a share, the money received by each institution due to the bowl payouts is as follows (the arithmetic is verified in the short sketch after the table):

Team A – 4.035 million dollars;
Team B – 3.285 million dollars;
Team C – 2.785 million dollars;
Team D – 2.785 million dollars;
Team E – 2.435 million dollars;
Team F – 1.785 million dollars;
Team G – 1.785 million dollars;
Team H – 1.785 million dollars;
Team I – 1.785 million dollars;
Team J – 1.785 million dollars;
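For anyone who wants to check the arithmetic, the following short Python sketch reproduces the table above. All of the payouts, tiers and team names are this post’s hypothetical example, not any real conference’s rules:

# Hypothetical Conference A from the example above (all figures in millions).
BOWL_PAYOUTS = {"A": 17.0, "B": 3.5, "C": 1.5, "D": 1.5, "E": 0.75}
ALL_TEAMS = list("ABCDEFGHIJ")  # 10-team conference; A-E went to bowls

def expense_allowance(payout):
    # Expense allowance tiers from the list above (the National Championship
    # tier is omitted since no team in the example plays in that game).
    if payout <= 1.4:
        return 0.65
    if payout <= 2.5:
        return 1.0
    if payout <= 4.0:
        return 1.5
    return 2.25

pot = sum(BOWL_PAYOUTS.values())  # the 24.25 million dollar pot
allowances = {team: expense_allowance(p) for team, p in BOWL_PAYOUTS.items()}
equal_share = (pot - sum(allowances.values())) / len(ALL_TEAMS)  # 1.785 million

for team in ALL_TEAMS:
    total = allowances.get(team, 0.0) + equal_share
    print("Team %s - %.3f million dollars" % (team, total))

Running the sketch returns exactly the figures listed above, from Team A’s 4.035 million down to the 1.785 million equal share received by the five teams that did not play in a bowl.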

The payout for Team A is a far cry from the 17 million dollars that was awarded by the bowl to the conference. Also, someone might argue that because most of the lower bowls only pay out 1 million or less, most of the money made from the bowl season goes to the power teams even in this distribution scheme, maintaining a standing of ‘the rich get richer and the poor get poorer’. However, such a mindset is not accurate. The team payouts in the above example are only the gross payouts, not the net payouts. Remember, Teams A-E still have to attend their bowls, and attending bowls costs money.

A large part of the money spent by universities traveling to bowl games involves transporting the team, the band (300-500 people right there), cheerleaders, important boosters and other individuals. These expenses can run high enough that some teams participating in the lower payout bowls (≤ 1,000,000 dollars) have a significant probability of actually losing money by attending a bowl because their expense accounts are low due to the caliber of bowl they qualified for. The second whammy when it comes to going ‘bowling’ is that universities have to meet certain ticket requirements predetermined by the bowl. Basically the university has to buy a set number of tickets and then turn around and sell those tickets to fans. The university has to ‘eat’ the cost of any tickets it does not sell from its own expense account. Typically tickets are sold ‘at cost’ from the university to fans, so there is no real probability of profit for the university from ticket sales. Of course there are also voluntary expenses like various forms of entertainment for players in the city hosting the bowl. Thus it is quite possible that a team can make more money by not going to a bowl than by going to one. However, rarely will a team affiliated with a major conference turn down a bowl bid because, although the particular university may incur a loss, the conference as a whole nets a gain.

So what about the little guy, the non-BCS affiliated conferences; how do they fare in the current system? Non-automatic qualifying Division I conferences (WAC, Mountain West, MAC, Conference USA and Sun Belt) receive approximately 9% of the net BCS escrow revenue if no team receives a BCS bowl bid.2 If one team receives a BCS bowl bid the revenue received doubles (another 9%).2 If two teams receive BCS bowl bids the revenue received increases another 4.5%. The distribution of these funds among the five conferences typically follows which conferences receive which bowl bids. Recently the Mountain West and the WAC have taken the most money regardless of whether or not they receive a BCS bowl bid ($3.1-$3.5 million without a BCS bowl bid to $9.1-$9.8 million with a BCS bowl bid).1 Conference USA typically receives $2.5 million, the MAC receives approximately $1.6-$2 million and the Sun Belt receives $1.4-$1.8 million.1 The Football Championship Subdivision receives approximately $1.8 million, which is evenly distributed among all 8 of its conferences (each conference receives $225,000).1
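The escrow share schedule just described reduces to a simple step function. The sketch below encodes it; the net escrow total is left as a parameter since this post does not cite the underlying dollar figure:

def non_bcs_share(net_escrow, bcs_bids):
    # Share of net BCS escrow revenue going to the five non-automatic
    # qualifying conferences, per the percentages cited above: 9% with no
    # BCS bowl bid, 18% with one bid, 22.5% with two bids.
    rate = 0.09
    if bcs_bids >= 1:
        rate += 0.09
    if bcs_bids >= 2:
        rate += 0.045
    return net_escrow * rate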

Then there are specific institutions that have their own rules for revenue sharing. Due to its current lack of conference affiliation, Notre Dame receives 1.33 million dollars when not participating in a BCS bowl, which is about 1/66th of the net revenue awarded to automatic qualifiers (due to the fact that there are 66 automatic qualifying schools).2 If Notre Dame participates in a BCS bowl it receives 4.5 million dollars, which is equal to what is awarded to a second qualifier from a BCS affiliated conference. Until 2006 Notre Dame received a full BCS share (similar to that currently awarded to an automatic qualifier) if it was invited to a BCS bowl. [Remember, if a BCS conference has two teams playing in BCS bowls only the automatic qualifier receives 17 million dollars; the ‘at large’ qualifier receives 4.5 million dollars.] The military academies, Army and Navy, receive 100,000 dollars each.1

With all of that revenue distribution it is important to identify where the money is going to come from in a new playoff system. For example, some have proposed that the playoff system be implemented into the bowl system in some respect. One of the more popular proposals uses 8 teams, where 5 of the 7 games that would comprise the playoff would incorporate the BCS bowls. The typical mindset under such a proposal is that the BCS bowls would pay out at least the same amount of money that they do under the current system, which would significantly alleviate any financial losses involved in the transfer between systems.

A quick side note: it is highly improbable that a playoff system of 4 or fewer teams would be implemented, solely because such a playoff does not offer the level of inclusion required to solve the problem of the current system: uncertainty over whether the team that won the championship was really the best team in college football. A plus-one system is similar to a four team playoff in its lack of usefulness, not from an implementation standpoint, but in its failure to solve the original problem as previously discussed. For example, suppose a plus-one system were implemented where the initial BCS bowls broke down as follows:

1 plays and defeats 6 by 3;
2 plays and defeats 9 by 6;
3 plays and defeats 5 by 7;
4 plays and defeats 11 by 17;

After these results a plus-one system would demand a new BCS ranking to determine the championship participants, but with these results how does a plus-one system resolve any conflict? What if 1 only defeated 6 by a single point? Should 2 and 3 jump 1 and play in the National Championship Game? To reiterate, the fact is that in a majority of situations a plus-one system does nothing to resolve the core issue in the debate between the current system and a playoff: eliminating reasonable doubt regarding which university deserves the National Championship.

Returning to the issue of incorporating current BCS games into a playoff, there are two significant problems with this base idea. The first problem is that the incorporation of the playoff system would destroy the tradition of each of the long-standing BCS bowl games. The Rose Bowl was established in 1902 and the Sugar Bowl and the Orange Bowl in 1935, and each has only deviated from its traditional fixed conference match-ups a handful of times. (The Fiesta Bowl was established in 1976, thus its tradition is not nearly as important.) A playoff system would make it difficult to adhere to the traditional match-ups, which would leave a bad taste in the mouths of purists, especially for the Rose Bowl. The one thing a playoff system cannot do is screw over the Rose Bowl, as it accounts for approximately 20-25% of the total revenue taken in over the generic bowl season.1 Basically any playoff proposal that screws with the Rose Bowl is doomed to fail. Take note of how long the Tournament of Roses Association took before allowing the Big-10 or Pac-10 champion to participate in the BCS National Championship game. Most playoff advocates belittle this adherence to tradition largely because it stands in the way of what they want, regardless of whatever reasons one might have for respecting the tradition and the history of a given bowl.

The second problem is fan fatigue. The rationale for why this issue is a problem can be seen in the ‘at-large’ bowl selection process. One of the common questions uttered by pundits with regard to bowls selecting whom to invite, when no direct conference obligations exist, is how well a team travels. The reason such a question is important is ticket sales. This fact is stressed upon most universities, as typically the universities have to guarantee a certain number of tickets. Although universities are assigned a certain minimum number of tickets, bowls do not want universities to ‘eat’ those tickets; they want universities to sell those tickets so people will come to the bowl game. One of the chief means of making money for the bowl sponsors/city hosting the bowl is to make sure that a large number of people attend the bowl, spending money not only on tickets to the game, but on food, hotels, souvenirs and other tourist destinations.

Unfortunately for fans the cost of attending football games has increased over the years. With this reality it is likely that the first round of the playoffs would sell well, similar to a normal high level bowl game. However, the second round would demonstrate the first wave of fan fatigue, as it would be more difficult to sell a number of tickets equal to or greater than the number sold for the first round game, leading to the university ‘eating’ the remaining costs. The championship game would be an interesting question regarding fan fatigue. Once again it would be difficult to sell a number of tickets equal to or greater than the number sold for the first round game (which is the number akin to that sold for a current bowl appearance) to team partisans, but because it is a championship game there is the possibility of picking up some non-affiliated fans. However, the total amount of this pick-up is unknown and it is unlikely that bowl sponsors would estimate this unknown quantity optimistically.

Another element of most of the proposed playoff systems is that they use a seeding system similar to the playoff system used by Division I-AA, where the higher seeded team hosts the game and later rounds are held at specific locations. This strategy may reduce some of the burden of fan fatigue, but it still does not reduce the average number of games that a given team will take part in during the average playoff season; thus the total reduction of fan fatigue would be minimal, especially if the opposing teams are a significant distance from one another. Interestingly, fan fatigue acts as a limiting factor that almost forces any type of playoff system not to exceed 8 teams. Any argument that Division I-AA uses 16 teams, thus so should Division I-A, is moot because of the sheer difference in money involved. Take a look at most of the games in a Division I-AA playoff; there are a lot of empty seats, something that would not fly for a Division I-A playoff series. Thus, with 4 teams not being enough to avoid questions of uncertainty and more than 8 teams more than likely failing due to fan fatigue, 8 teams seems to be the only workable number for a playoff system.

Another strategy that some argue would eliminate any money gap between the current system and a new playoff system is negotiating new television contracts for the playoff system, citing the amount of money that the annual college basketball tournament garners. Unfortunately such a strategy is rather illogical because the college basketball tournament has been packaged as “March Madness” for a long period of time and was not pieced together from various outside sources/bowls. This piecemeal approach is exactly what would occur in the establishment of a college football playoff that includes BCS bowl games, which limits the amount of money that can be collected in its first years, the most important time frame when establishing a new system. Recall that television contracts already exist for each BCS game and, depending on the anticipated match-ups for future games, contract values may actually go down. The first years are important largely because of the realm of uncertainty: there are no guarantees that contracts will go up in the future, thus most conferences would be wary about accepting less money in the short-term on promises of greater non-guaranteed payoffs in the future.

Some argue that it is in the best interest of the BCS to avoid starting a playoff system in order to maximize its per game profits. Basically this argument boils down to simplistic economic theory revolving around supply and demand. The argument establishes those running the BCS as a cartel (which in large respects is true), thus it has sole control over the distribution of ‘high-quality’ bowl games. Therefore, instead of a positive slope for the supply curve, the cartel status allows for the generation of a vertical supply curve. The figure below illustrates the above situation.

* Note that the demand curve represents interest in watching or attending a BCS bowl game, which would later generate a total level of revenue from this interest;

With a vertical supply curve and a static negatively sloped demand curve, it is theoretically risky business to increase the supply because the revenue per product ratio will decrease, as shown below.

Looking at the above graph one may initially conclude that it is not in the interest of future BCS profit to increase the number of BCS Bowls or add a new playoff system. However, there are two significant problems with this theory and line of thought. The first problem relates to the percentage of decline in the revenue per product ratio. There is no debate, based on the scheme of the above graph, that increasing the number of bowl games will result in a decrease in revenue earned per bowl game. However, the graph does not distinguish by how much the ratio drops, which is a very important element in proving the main point of the theory.

For example, suppose in scenario 1 an environment with 5 BCS bowl games generates 15 million dollars of profit per game. The addition of a sixth game reduces that number to 10 million dollars of profit per game. Clearly it is silly to add the sixth game, as the BCS makes 75 million dollars (5 x 15) in a 5 game environment and only 60 million (6 x 10) in a 6 game environment. However, what if in scenario 2 the addition of a sixth game reduces the profit per game from 15 million to 13 million? In this scenario it makes sense to add a sixth game, for in a 6 game environment the BCS makes 78 million (6 x 13) vs. 75 million in a 5 game environment. Therein lies the first problem: there is almost no empirical evidence to demonstrate how the revenue per product ratio will change in response to an increase in the amount of product, thus no one can say whether or not it is in the interest, from a profit perspective, of the BCS to add additional games. With that uncertainty, one might question how the above discussion is useful. Fortunately the second problem with the above theory renders the first problem moot.
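The two scenarios reduce to a one-line calculation; the figures below are the hypothetical ones from the paragraph above (in millions of dollars), not real BCS data:

def total_profit(games, profit_per_game):
    # Total BCS profit is simply games multiplied by per-game profit.
    return games * profit_per_game

# Scenario 1: a sixth game drops per-game profit from 15 to 10.
print(total_profit(5, 15), "vs", total_profit(6, 10))  # 75 vs 60 -> do not expand
# Scenario 2: a sixth game only drops per-game profit from 15 to 13.
print(total_profit(5, 15), "vs", total_profit(6, 13))  # 75 vs 78 -> expand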

The critical problem with the above analysis is that although the supply curve was changed to accommodate the presence of cartel control, the demand curve was not changed. When supply is isolated to a single supplier, the behavior of those that want to purchase the product changes as well. In such a situation potential customers in general are more willing to accept an unchanging price, despite more units of the product being available, because of the pricing power of the cartel. Such a mentality changes the demand curve. The figure below illustrates how this change in the demand curve changes the situation.

With this change in the demand curve, with respect to the total supply being controlled by a single supplier, there is a higher probability that increasing the amount of supply will not result in a decrease in the revenue per product ratio. Thus, it does not seem rational to presume that the BCS would reject a playoff solely, or even in part, out of concern that increasing the number of ‘BCS caliber’ games would reduce total revenue. In a real world example, total revenue did not decrease, but actually increased, when the number of BCS bowl games was increased from 4 to 5.1

Note that a proportionally linear decline in the demand curve also makes little sense in reality even if the theory were sound (which it is not). Do people really have such strict internal quotas for college football BCS bowl games? Does an individual say to him/herself, ‘Gee I am only going to watch 4 BCS bowl games this year even if there are 6 BCS bowl games played this year’? Of course not; the total number of BCS bowl games available has almost no influence on whether or not individuals watch.

The demand structure of the curve also relates back to television ratings. Some opponents of a playoff system make the claim that the public may talk a lot about wanting to see the ‘little guy’ (non-BCS affiliated teams) get his chance, but when push comes to shove they prefer watching the power conferences. Although direct analysis of the television ratings supports this conclusion there are other issues in play.

The chief issue is that most pundits continue to believe that the ‘little guy’ is inferior to the BCS conference teams regardless of what reality may indicate and continue to press that message, which has an influence on whether or not individuals select to watch games with these teams. Perhaps institutions like ESPN could try an experiment where their analysts do not automatically assume that the non-BCS team is inferior to the BCS team and will be crushed, and then see if the ratings for these games change. For example, given the way most analysts speak of non-BCS teams, it would shock them to realize that the Division I team with the best record over the last 4 years is Boise State at 49-4, not Florida, not Alabama, not Texas, not Ohio State, not LSU, not USC, but lowly Boise State, who are also 2-1 against BCS teams in that span. Basically commentators generate a self-fulfilling prophecy where they imply that viewers should not watch game x because the BCS team will crush the non-BCS team, thus watching it would be a waste of time. Then they crow about low ratings justifying their position that no one watches non-BCS teams.

One final note on ratings: anyone who tracks ratings realizes that the National Championship game and games in which the Big-10 participates are the only real ratings superstars year in and year out in the bowl season. If the ‘little guy’ is actually given adequate opportunities to prove themselves over a consistent period of time, rather than the sporadic periods that typically only occur during the bowl season, then a genuine attitude can be formed about the worthiness of these teams and whether or not their games should be watched, instead of an artificial attitude pushed by individuals with biased agendas.

Another concern with most of the proposals for a playoff system is that they incorporate ‘at-large’ bids into the system. It is rather humorous to listen to individuals discuss the fairness of a playoff system over the current system, especially to non-BCS affiliated schools, when their ‘at large’ bid proposals offer very little chance of participation for schools from non-BCS affiliated conferences. For example, almost never would two undefeated teams from non-BCS affiliated conferences have an opportunity to participate in such a playoff system. In fact realistically only one would even have an opportunity, especially if the ‘at large’ bids were decided by a BCS ranking type system, due to the ongoing bias against non-BCS conferences in relation to BCS conferences.

A superior system would eliminate all ‘at large’ bids and instead focus on conference champions. For example, the playoff would consist of 8 teams: the 6 conference champions from the 6 BCS conferences and the 2 highest ranked conference champions from non-BCS conferences as determined by a BCS ranking system. Of course some may argue that their team only had one loss in a given year and also deserves to be in the playoff even though that team did not win its conference. Such an argument is utterly meaningless because if a team cannot win its own conference it is clearly not the best team in the nation and has no business being in a tournament to determine the National Champion. The counter-argument regarding how a 3 loss conference champion from Conference A is more deserving than a 1 loss team from Conference B that did not win its conference is answered through the acknowledgement that different conferences have different levels of quality depending on the year. It is not hard to come to the conclusion that the 3 loss conference champion may have played higher caliber teams than the 1 loss team from Conference B.

Also, a conference-championship-only requirement ensures that the regular season still has significant meaning, a reality that would be diminished if ‘at large’ bids existed. This factor is important because one of the major complaints regarding the establishment of a playoff system is that it strips away the importance of the regular season, where a single loss may be the difference between going to the National Championship game and not going. In any proposed playoff system with ‘at large’ bids a team could get into the playoff with 1 to 3 losses and no conference championship, hardly a team deserving of the National Championship. For example, in a playoff system with ‘at large’ bids, the 2009 SEC championship game between Alabama and Florida would have had almost zero meaning because the loser would have simply claimed one of the available ‘at large’ bids in the playoff, leaving an undefeated TCU or Boise State (more than likely Boise State) out in the cold. However, in a playoff system with no ‘at large’ bids that SEC championship game would still have had all of the meaning it had originally.

One significant flaw in the above set-up is the possibility of an undeserving team making the playoffs through a conference championship game. With respect to the above system, ideally the conference champion would be determined similarly to how it is determined in the Big-10, Pac-10 and Big East, where every team plays a vast majority of the teams in the conference and the conference champion is then determined through conference record or, if needed, a series of tie-breakers.

Unfortunately, in the above regard the Big-12, SEC and ACC determine their conference champions through a single championship game, due to the fact that these conferences have elected to divide themselves into two sub-divisions. The decision to create this division is largely driven by the money the conferences receive through sponsorship and television rights to the championship game. However, such a system creates a problem with the ‘purity’ of a playoff in that a team with 4 losses from one of the divisions could defeat a team with 0 losses in the championship game and claim that conference’s spot in the playoff. Overall such a situation leaves a bad taste in the mouth because, on the strength of a single game, a team with a mediocre record would be able to compete for the National Championship, which would eliminate significant meaning from the regular season.

Hopefully this problem will be solved, but as it stands it would have to be solved within the current championship game system, because the aforementioned 3 conferences are making too much money through those games to abandon them. One possibility would be to take the two teams from the conference with the best records, regardless of which sub-division they are in, and have them play in the conference championship game. This strategy would at least ensure a higher probability that a deserving team came from the conference in question. However, it does raise another concern: if the teams already played during the regular season, why should they have to play again to determine the conference champion?

The application of the above playoff qualification structure could also go a long way toward solving the money question, in that the playoff system could exist apart from the bowl system. Instead of using the BCS games as part of the playoff system, the playoff system would exist alone, which would allow a separate set of sponsorship and promotional contracts for the playoff system and the BCS Bowl system. To accommodate such a system the BCS bowls would have to give up hosting conference champions and drop each position in all remaining bowls by a single place. For example, the Rose Bowl would no longer host the Big-10 champion and the Pac-10 champion, but instead would host the Big-10 runner-up and the Pac-10 runner-up. Similar changes would occur for all other continuing bowls. Overall, in such a system the tradition of conference match-ups for each bowl would be continued, which as previously mentioned is important to powerful people, and due to the depth of each BCS conference, bowl quality should not drop significantly.

Note that this system works well monetarily for each conference because of how bowl money is distributed. Recall that conferences distribute bowl money from one lump sum accumulated by all of the teams in the particular conference that went to a bowl. Therefore, it does not matter whether the conference champion or the conference runner-up plays in the respective BCS bowl, as long as the money awarded from the playoff and the BCS bowls falls within at least the same tier of expense account for a given conference. Thus, as long as the conferences are still represented in their respective bowls they still receive their money. The amount of money awarded to teams and their respective conferences that participate in the playoff would be scaled based on how far the team advanced and would reflect a significant portion of the money generated from the sponsorship and promotional contracts for the playoff.

Of course the addition of what can be viewed as seven new bowl games would increase the total number of bowl games to 42 (if one counts the 2 anticipated future games and eliminates the current rotating National Championship BCS Bowl game, which would be unnecessary in a playoff system). Realistically, 41-43 bowl games could be viewed by a sufficient number of people as just too many. Thus, if a playoff system that did not incorporate the BCS bowls were established, it would probably be prudent to eliminate some of the lower totem pole bowls. These ‘low totem pole’ bowls can be identified as those bowls that have the more recent establishment dates and/or the lowest payouts, with payouts being the higher determining factor. Also, preference should be given to bowls matching 6th and 7th place finishers in a BCS conference over 2nd or 3rd place finishers in non-BCS conferences. Prime candidates for elimination would include: the Little Caesars Pizza Bowl; the St. Petersburg Bowl; the New Orleans Bowl; the Emerald Bowl; the Hawaii Bowl; the Papa John’s.com Bowl.

After determining the conference champions, an 8-team playoff would require a methodology for determining the match-ups. There are two schools of thought on seeding if no BCS bowl tie-ins are utilized. The first option is rather popular in that qualifying teams are seeded 1-8 according to their BCS ranking, with 1 playing 8, 2 playing 7, 3 playing 6 and 4 playing 5 (sketched below). Although such a methodology is standard, some may object to it because there is a very high probability that the two teams from conferences not affiliated with the BCS will always draw the 7th and 8th seeds, whether or not those seeds are deserved, due to BCS conference bias in the rankings. Therefore, these non-BCS teams will frequently be placed at a disadvantage, hardly the fair shake that some believe a playoff system should provide.
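
For the first option, the pairing rule is mechanical, as the small sketch below shows; note that the specific 1-8 ordering assigned to the example teams is invented purely for illustration.

```python
# A sketch of the standard seeded bracket: seed i plays seed (9 - i).
# The 1-8 ordering below is hypothetical; it simply reuses the eight
# qualifiers from the example later in this piece.
seeds = ["Texas", "Georgia", "Oregon", "Florida State",
         "Michigan", "Pittsburgh", "BYU", "Boise State"]

# Pair the list ends inward: 1 vs 8, 2 vs 7, 3 vs 6, 4 vs 5.
matchups = [(seeds[i], seeds[7 - i]) for i in range(4)]
for rank, (high, low) in enumerate(matchups, start=1):
    print(f"Seed {rank} {high} vs. Seed {9 - rank} {low}")
```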

The second option would involve a random draw to determine the seeding, similar to the World Cup. The first and chief objection to a random draw system would be the possibility that the two highest ranked teams in the BCS meet in the first round. Such a pairing could be disastrous for a playoff system in that what some would regard as the best game would be played earlier than it ‘should’ be. Fortunately, when ignoring the hysterics one realizes that there is only about a 14% chance of such an outcome: once the No. 1 team is placed in the draw, the No. 2 team fills one of the seven remaining slots, only one of which sits opposite No. 1, a 1-in-7 chance. Also, rarely is there a college football season where the top two teams in the BCS rankings are clearly regarded as head and shoulders above all of the remaining teams in quality; such a trend is why there is a demand for a playoff in the first place. Therefore, there is no reason to panic and instantly eliminate the potential for a random draw as the determining mechanism by citing a possible watered-down championship. Heck, a watered-down championship happens almost every year in the actual BCS system due to uncertainty.
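
For those who want a sanity check on the 14% figure, the short simulation below draws eight placeholder teams into four random pairings and counts how often the top two land in the same game; it is a sketch, not part of any official draw procedure.

```python
import random

# A quick check of the ~14% claim: shuffle eight placeholder teams into
# four adjacent pairs and count how often teams 0 and 1 (standing in for
# the top two BCS teams) land in the same game. Analytically the chance
# is 1/7: once team 0 is placed, team 1 fills one of seven remaining
# slots, exactly one of which is opposite team 0.
def top_two_meet_rate(trials: int = 100_000) -> float:
    teams = list(range(8))
    hits = 0
    for _ in range(trials):
        random.shuffle(teams)
        # Slots (0,1), (2,3), (4,5), (6,7) form the four first-round games.
        for i in range(0, 8, 2):
            if {teams[i], teams[i + 1]} == {0, 1}:
                hits += 1
                break
    return hits / trials

print(f"analytic: {1/7:.3f}, simulated: {top_two_meet_rate():.3f}")  # both ~0.143
```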

The playoff draw would be determined on the Sunday following the conclusion of the last BCS affiliated conference championship game. First round games would begin two Saturdays after the last regular season game. Note that if the random draw procedure is used to determine pairings for the first round, the draw will not determine where the teams play, unlike a seeding system, which would likely reward higher seeds with home field locations. Using the random draw to set game locations would simply be an inefficient expenditure of money. For example, suppose TCU is matched up to play against the University of Texas in the first round; it would make little sense for those teams to travel to the Orange Bowl to play the game instead of Texas Stadium. Game start times will follow the obvious East-to-West rationality, where East Coast games are played prior to West Coast games.

There are two strategies regarding where these games would be hosted out of a random draw system. First, games could be hosted at a series of pre-determined neutral sites throughout the country. For example, based on the timing of the games, six common sites for first and second round playoff games could be the Rose Bowl, University of Phoenix Stadium, the Cotton Bowl or Texas Stadium, the Orange Bowl, the Georgia Dome, and the Horseshoe or the Big House. The championship game could be played at a site rotating between those six or at an entirely new site determined later.

Note that the above sites are merely suggestions based on an attempt to maximize the potential for ticket sales, ease the question of fan fatigue and create an ambiance that attending, television and Internet viewing audiences would expect for a playoff. One may argue that the above suggestions create too much possibility of favoritism for a specific team due to proximity, but such concerns have hardly been raised in the past. Where is the outcry that the Big-10 and Big East almost always have to go on the road for their bowl games? Heck, the Rose Bowl is played in Pasadena in UCLA’s home stadium, a hop, skip and jump from USC. Overall there is the potential for home field advantage issues, but the committee tasked with creating the draw and assigning the respective playing locations should do its best to neutralize this issue as much as possible.

To better illustrate how such a selection process would transpire, suppose that in 2020 a playoff system similar to the one above is in place.

Conference champions are as follows:

ACC – Florida State;
Big-10 – Michigan;
Big-12 – Texas;
Big East – Pittsburgh;
Pac-10 – Oregon;
SEC – Georgia;

the two non-BCS affiliated qualifying conference champions are:

Mountain West – BYU;
WAC – Boise State;

the random draw generates the following match-ups:

Boise State – Michigan;
Oregon – Georgia;
Texas – Pittsburgh;
BYU – Florida State;

Based on these match-ups the committee would then determine the most appropriate playoff locations for both the first round and the potential second round match-ups. In the above example the following locations would be appropriate:

First Round:

Boise State – Michigan [University of Phoenix Stadium]
Oregon – Georgia [Texas Stadium]
Texas – Pittsburgh [Georgia Dome]
BYU – Florida State [Horseshoe]

Second Round:

Top Bracket Winners – Rose Bowl
Bottom Bracket Winners – Orange Bowl

Through its location assignments the selection committee attempts to place each game at the most neutral site within the closest joint proximity to both schools, easing fan and team travel expenses. Note that, if workable, the second round locations could be delayed until the teams playing in the second round are determined. For example, if Texas and BYU both win, it would make more sense to host that game (the bottom bracket game) in the Rose Bowl instead of the Orange Bowl.

Based on the stadium selections, the first round would begin at 11:00 am EST, moving East to West with a 3 hr buffer between the start of each game. With two fewer games, the second round could have a little more flex time between games, allowing for more elaborate halftime performances or pre/post game preparations. Thus, in the above example the official line-up would be as follows (a rough sketch of this scheduling logic follows the line-up):

Boise State – Michigan [University of Phoenix Stadium @ 8:00 pm EST]
Oregon – Georgia [Texas Stadium @ 5:00 pm EST]
Texas – Pittsburgh [Georgia Dome @ 11:00 am EST]
BYU – Florida State [Horseshoe @ 2:00 pm EST]

Top Bracket Winners – Rose Bowl @ 2:00 pm EST
Bottom Bracket Winners – Orange Bowl @ 7:00 pm EST
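
Below is a minimal sketch of the East-to-West scheduling rule applied to the first round above: games are ordered by the host stadium’s time zone (easternmost first, with ties kept in bracket order) and kickoffs are staggered in 3-hour steps from 11:00 am EST. The UTC offsets are standard-time values supplied for illustration.

```python
# A sketch of the East-to-West start-time rule for the first round:
# order games by the host stadium's time zone (easternmost first) and
# stagger kickoffs in 3-hour steps starting at 11:00 am EST.
games = [
    # (match-up, site, UTC offset of the site's standard time zone)
    ("Texas vs. Pittsburgh", "Georgia Dome", -5),                       # Eastern
    ("BYU vs. Florida State", "Horseshoe", -5),                         # Eastern
    ("Oregon vs. Georgia", "Texas Stadium", -6),                        # Central
    ("Boise State vs. Michigan", "University of Phoenix Stadium", -7),  # Mountain
]

# Easternmost zones have the least negative offsets, so sort descending;
# Python's stable sort preserves the listed order for same-zone sites.
schedule = sorted(games, key=lambda g: g[2], reverse=True)

kickoff = 11  # first kickoff in EST, 24-hour clock
for matchup, site, _ in schedule:
    hour = kickoff if kickoff <= 12 else kickoff - 12
    meridiem = "am" if kickoff < 12 else "pm"
    print(f"{matchup} [{site} @ {hour}:00 {meridiem} EST]")
    kickoff += 3  # 3-hour buffer between starts
```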

The second solution to the playoff location question is to utilize the aforementioned seeding system. The advantage of the seeding system is that, on a universal scale, fan fatigue is more than likely lower than with neutral site selection. However, this reduction possesses a higher standard deviation in that fan fatigue will be inversely proportional to the seed of the given team: basically, the better the seed, the lower the fan fatigue. There is also another potential concern relating to how the money from ticket sales is distributed. Does the hosting school keep the funds or are they funneled back to the sponsor/BCS? How is the bowl payout influenced by the decision regarding ticket sales? Overall, implementation of a seeding system would probably require transparency to avoid bias.

As previously stated, the championship game would be held either at a pre-determined site rotating between the six original sites or at an entirely new site. A rough schedule of events for the bowl/playoff season in college football would look like this:

Dec 5th – ACC, Big-12 and SEC Championship games
Dec 6th – Final BCS ranking released to determine the two non BCS-affiliated conference champion participants; the draw is determined through whatever prescribed methodology;
Dec 19th – First round of playoff games
Dec 20th – New Mexico Bowl;
Dec 22nd – Las Vegas Bowl;
Dec 23rd – Poinsettia Bowl;
Dec 26th – Second round of playoff games;
Dec 27th – Meineke Car Care Bowl; Music City Bowl;
Dec 28th – Independence Bowl;
Dec 29th – Champs Sports Bowl; Eagle Bank Bowl;
Dec 30th – Holiday Bowl; Humanitarian Bowl;
Dec 31st – Texas Bowl; Armed Forces Bowl; Sun Bowl; Insight.com Bowl; Chick-fil-A Bowl;
Jan 1st – Outback Bowl; Capital One Bowl; Gator Bowl; Rose Bowl; Sugar Bowl;
Jan 2nd – Cotton Bowl; Liberty Bowl; International Bowl; Alamo Bowl;
Jan 3rd – GMAC Bowl; Fiesta Bowl; Orange Bowl;
Jan 4th – National Championship Game;

One of the final issues that needs to be addressed is the number of games that college football players would be expected to play in a playoff system vs. the current system. Currently top quality programs (those that would qualify for a playoff in the first place) play 13-14 games when including the bowl game. In an 8-team playoff the teams playing in the championship game will have played 15-16 games at its conclusion, which is basically an NFL season and is clearly too many games. The chief concern is the increased probability of players suffering physical injury, especially as the season drags on, player fatigue increases and recovery time decreases. With the way the game is played in the modern era, the increased probability of suffering a concussion should be of special concern when players are expected to play additional games.

There are two rational possibilities that jump to mind to alleviate this concern. The first solution involves lengthening the season to generate more down-time, stimulating recovery and ensuring a lower probability of injury. Unfortunately this solution is not very viable because the college football season is already fairly long, and generating this extended rest could very well push the season into late January or even early February, which would significantly shrink the viewing audience and would probably negatively influence the academics of the players.

The second solution would be to remove a number of games from the schedule to compensate for the potential increase in games from the playoffs. This solution seems to be the better of the two, but the immediate issue is how to go about removing games from the schedule. The type of game removed would heavily depend on the playoff qualification structure. For example, in the above playoff example qualification occurs through acquisition of a conference championship. Therefore, the typical 3-5 non-conference games played by teams each year are irrelevant in determining eligibility for the playoffs for BCS affiliated teams and excessive for non-BCS affiliated teams. Thus it would be easy to drop the number of non-conference games by 2 in order to accommodate the potential playoff games.

Another side benefit of this system is that if non-conference games no longer influence whether or not a BCS team enters the national championship playoff, there is no need for these teams to play generally inferior quality opponents out of fear that an early loss to another high quality team would knock them out of the National Championship picture. Instead, such a system would encourage high quality teams to play other high quality non-conference teams in order to generate a better understanding of how their teams are developing and to prepare for the conference season by playing teams that may be better than most of their conference opponents. Such a system should increase the probability of more USC-Florida match-ups over Florida Atlantic-Florida match-ups.

Regarding academics in general, the playoff scheme as outlined above should probably not interfere with student academics because athletes, especially football and basketball players, have a greater level of flexibility in when they can take their tests. There should be enough free time before the start of the playoffs and between the second round and the championship game for players still playing to take care of any academic business regarding finals and the like. However, each university may have different policies, which would have to be considered.

In closing, the first step to opening a discussion regarding the potential of a playoff system to determine a college football national champion is to understand that there is no logical reason for the BCS to reject a playoff on a financial basis. However, the sword cuts both ways in that a playoff cannot be viewed as manifest destiny for college football. Although a playoff can follow many different methodologies, there are some critical elements that must be addressed by each strategy if it is to have any probability of success.

1. Do not forsake the Rose Bowl or its handlers, the Tournament of Roses Association; in a given year the Rose Bowl generates 20-25% of the total BCS revenue.1 It does so through its tradition, pageantry and quality. It is highly probable that any attempt to initiate a playoff will fail if the tradition of the Rose Bowl is not upheld. Therefore, the Rose Bowl must either host the Big-10 and Pac-10 runners-up because their conference champions move to the playoff, or it must host one of the first round playoff games in which the Big-10 champion faces the Pac-10 champion each and every year regardless of BCS ranking.

2. The problem of fan fatigue must be solved. The expectation that fans will follow their team for up to three playoff games is unrealistic, especially for college students in an environment where college tuition continues to increase. Depending on the size of the promotional/television contracts acquired for a playoff, one possible strategy for neutralizing fan fatigue is for a university to ‘eat’ the cost of the tickets for each game and donate the tickets to fans, so they only need to pay for their travel. Another option is that if the venue is within 500 miles, the university could provide bus transport to the game site for the fans. Any steps a university can take to alleviate the cost of attending playoff games for fans, without resulting in a loss for the university, would go a long way toward supporting the generation of a playoff system. Overall, a playoff system that stood alone from the other traditional BCS bowls would probably have more success in this arena than a playoff system that incorporated the BCS bowls, due to the greater windfall of monetary awards.

3. The playoff system must allow for the participation of teams from conferences not currently affiliated with the BCS. One of the driving reasons behind the movement to establish a playoff is the thought of unfairness in the current system because the ‘little guy’ is not afforded the opportunity to consistently erase or confirm the bias against it by playing the ‘big guy’. A playoff system that does not increase the probability of BCS vs. non-BCS match-ups is no better than the current system.

Overall, the possibility of a playoff is viable, but its proponents must be more thorough in describing how they would go about establishing such a system. The analysis and example presented here is just the minimum of what should be presented as an argument by supporters. A vast majority of playoff or even anti-playoff arguments do not even scratch the surface of what elements will be required for a playoff or for the maintenance of the current system. If one is not going to do a reasonable job of presenting an argument, what is the point of presenting one in the first place?

==
1. Bowl Championship Series Five Year Summary of Revenue Distribution 2004 – 2008.

2. BCS revenue sharing: It's pretty simple. BCSFootball.org. http://www.bcsfootball.org/cfb/story/10297120