Tuesday, June 23, 2015

The Legitimacy of Holistic Admissions at U.S. Universities


With the competition for landing a quality job increasing every year, acceptance into a high-quality university is viewed as essential to maximizing the probability of landing one of these jobs. In lockstep with the competition for quality jobs, the competition to gain entrance into universities widely regarded as high quality has also increased. This competition has produced controversy surrounding the procedure by which applicants are admitted, creating a tug-of-war of sorts between various parties and their interests. One of the chief points of controversy is the validity of the “holistic” review process. In fact, a lawsuit filed against Harvard University by Students for Fair Admissions contends that holistic admission processes are inappropriately discriminatory and that their evaluation metrics should be clarified well beyond “whole person analysis”. Obviously a reading of the official complaint divulges a harsher conclusion than the one above, but the softer sentiment is the more appropriate route to a fairer admissions environment.

Proponents of the holistic method champion its multi-faceted approach, in which a larger spectrum of an applicant’s qualifications is considered beyond the traditional metrics (standardized test scores, grades and certain extracurricular activities), producing a fairer and more accurate admissions process. Opponents believe that the holistic method is used at best to hide the admissions process behind a veil of ambiguity, allowing universities to justify perplexing and arbitrary decisions, and at worst to legitimize a quota system in which more qualified candidates are rejected in favor of under-qualified candidates to achieve diversity demographics and evade public scorn. Clearly, based on the perceived stakes, where getting into university A can set a person up for life while settling for university B creates unnecessary hardships, the emotional pitch of this debate is high. Unfortunately this emotion has produced an environment that abandons a critical philosophical basis for understanding why a holistic approach is or is not appropriate.

First it is important to address that the holistic process has been attacked by some as a demonstration of “reverse racism” through the process of affirmative action. The term “reverse racism” is a misnomer and is not properly used in this context. Racism is giving differing treatment, whether positive or negative, to an individual based on their ethnicity or race. By this definition, reverse racism would be akin to not giving differing treatment to an individual based on their ethnicity or race. However, when individuals invoke the term “reverse racism” that is not what they intend to convey; they simply mean a different type of racism. Unfortunately some parts of society have come to associate the term racism with only one particular form of racial bias instead of all forms of racial bias, which is inappropriate. Therefore, the term “reverse racism” should be eliminated from conversation in this context and replaced with the appropriate term – racism.

Second, it must be noted that the original intention of affirmative action was not to give “bonus points” to an individual based on their race, but to assess how race may have influenced the acquisition of certain opportunities and thereby influenced the development of an individual through their performance in those opportunities. It should not be surprising that an individual with rich, committed and connected parents will have more opportunities, and more ability to prepare for those opportunities, than an individual without wealthy or even present parents.

For example, it is expected that SAT scores will be higher for children of richer families, both because of increased opportunity to prepare and increased opportunity to retest if a performance is not deemed acceptable. There is also a higher probability that individuals from rich families will be better nourished than individuals from poor families, which directly influences academic performance and the ability to participate in other valuable non-academic opportunities. Such environmental factors can skew the value and analytical power of “raw” metrics like standardized tests. Basically affirmative action is akin to judging the vault in gymnastics. Not all vaults have the same difficulty level; a non-perfect vault with a 10.0 difficulty will consistently beat a perfect vault with a 7.0 difficulty.
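To make the analogy concrete, here is a minimal sketch of difficulty-weighted scoring in the style of vault judging; the scoring rule and numbers are illustrative, not an official code of points:

```python
# Minimal sketch of difficulty-weighted scoring, loosely modeled on
# vault judging. The scoring rule and all numbers are illustrative.

def vault_score(difficulty, execution):
    """Final score = difficulty value + execution score (max 10.0)."""
    return difficulty + execution

# A flawed attempt at 10.0 difficulty still outscores a flawless
# attempt at 7.0 difficulty.
hard_imperfect = vault_score(difficulty=10.0, execution=9.0)  # 19.0
easy_perfect = vault_score(difficulty=7.0, execution=10.0)    # 17.0
print(hard_imperfect > easy_perfect)  # True
```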

A quick side note: while the idea of affirmative action was originally based on the premise of race in an attempt to combat direct and indirect forms of racism, the idea has shifted in the present to address differences in economic circumstance over race/ethnicity. The idea that rich individuals of race A will somehow be significantly excluded from a given opportunity relative to rich individuals of race B is no longer realistic in modern society. It is important to identify that more minorities will be assisted by affirmative action not directly because of race, but because past racism reduced the probability that these minority families could build inter-generational wealth, thereby leaving them poorer than white families.

Based on the “potential judgment” aspect of affirmative action, some individuals may object to the idea that it is appropriate to punish an individual for having access to opportunities that others lack, claiming that this behavior is itself a form of bias. This point creates the first significant philosophical question that must be addressed in the admissions process: is it justifiable that an above-average individual from a high-difficulty pool should find favor for an opportunity over a high-performing individual from a lesser-difficulty pool?

An apt example of this notion is the disparity between the “Big 5” college conferences (ACC, Big 10, Big 12, PAC 12 and SEC) and the mid-major conferences when selecting basketball teams for the NCAA Championship Tournament. While the selection committee tends to give preference to teams from the Big 5, the question is: should it? A Big 5 power team, “Big Team A”, with a 55.5% conference winning percentage at 10-8 and an overall record of 21-13 has clearly demonstrated itself as slightly above average among its peers. A mid-major team, “Medium Team B”, with an 89% conference winning percentage at 16-2 and an overall record of 26-7 did not have the same opportunity to compete against Big Team A’s level of competition, but has demonstrated itself to be a quality team with a greater unknown ceiling. Basically, should someone slightly above the middle of the pack in a more competitive environment be passed over for someone at the top of a tier-2 environment?
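One hedged way to formalize the committee’s dilemma is to discount each record by an assumed pool difficulty; the discount factors below are invented for illustration, not actual selection metrics:

```python
# Hypothetical sketch: comparing records across pools of differing
# difficulty. The difficulty weights are assumptions for illustration.

def weighted_win_pct(wins, losses, pool_difficulty):
    """Winning percentage discounted by an assumed pool difficulty,
    where 1.0 denotes the hardest pool."""
    return (wins / (wins + losses)) * pool_difficulty

big_team_a = weighted_win_pct(10, 8, pool_difficulty=1.00)     # ~0.556
medium_team_b = weighted_win_pct(16, 2, pool_difficulty=0.70)  # ~0.622

# Even discounted 30%, Medium Team B grades out ahead; the real
# argument is over what discount, if any, is fair.
print(big_team_a, medium_team_b)
```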

In the arena of applicants the question of quality boils down to: should the 100th best applicant from area A be accepted over the 10th best applicant from area B? Think about it this way: should applicant C from city y, who scores significantly above average for that area on standardized tests and has quality grades, be accepted over applicant E from city x, who scores only slightly above average for that area and also has quality grades, even if applicant E’s absolute scores are slightly higher? Note that city x has a higher student average on standardized tests than city y.

Those who say yes to the above question based on the importance of fostering a racially/ethnically diverse environment must be careful not to fall into the trap of needless diversity, which is its own type of bias. A diverse environment must be established based on thought and behavior, not on elements beyond an individual’s control.

There is an advantage to diversity of experience, for it ensures a greater breadth of perspective and understanding, leading to more, and potentially more valid, strategies for solving problems. However, this advantage comes from experience, not from different skin color, religious beliefs, etc. The inclusion of person A just because he/she has a certain skin color or ethnicity is not appropriate; their inclusion should demand a meaningful and distinctive viewpoint. Cosmetic diversity for the sake of diversity serves no positive purpose and is inherently foolish and unfair/biased. Based on this point, the crux of the admissions issue is how to identify individuals with distinctive and valuable viewpoints in order to justify selecting a high achiever from a less difficult environment.

Most would argue that the standard analysis metrics are not appropriate for this task. Grades are significantly arbitrary, depending on numerous uncontrollable environmental and academic circumstances; an A at high school x does not always carry the same weight as an A at high school y, and some high schools allow students large amounts of extra credit, which conceals their actual knowledge of the subject through grade inflation. Standardized tests can be heavily prepared for and taken multiple times, depending on time and financial resources. They also may not present an accurate representation of ability, for almost no “real-world” task requires an individual to sit in one place in a time-sensitive environment answering questions without access to any resource beyond what is in their brain. At one point the “college essay” could have filled this role, but the essay appears to have devolved into an ambiguous farce demanding only unoriginal “extraordinary” experiences and/or teaching moments, where it has sadly become difficult to determine whether students mean what they say or are simply writing what they think admissions officers want to read.

However, while these flaws with the standard metrics exist, abandoning the standard metrics entirely would be an error, for it would be akin to replacing one “bias” with another. The standard metrics are an important puzzle piece, but they do not make up the entire puzzle.

For some, the college interview has been thought of as a panacea for bridging the gap between holistic and standard admission judgment, but interviews have caveats that must be monitored. Supporters of the interview process believe that it gives applicants the ability to demonstrate that they are more than just test scores, extracurricular activities and grades, and that it allows both the university and the applicant to more specifically define the level of “fit” between the two beyond the generic questions used in the application process. Finally, interviews can be a good deciding factor between borderline applicants.

Unfortunately interviews have some flaws that must be properly managed to ensure their legitimacy. First, interviewers must be properly trained to avoid first-impression bias, as most interviews establish the tenor of the relationship between the interviewer and the interviewee very early, which threatens the objectivity of the rest of the interview. Interviews must also have a standard operating procedure, especially when it comes to the questions. Applicants must be asked the same questions, for if different questions are asked of different applicants the subjectivity of the procedure increases, which hurts the interview as a comparative evaluation metric. It is fine to ask different questions if interviews are not going to be used when choosing one applicant over another, but most do not view the interview in such a casual light.

Another concern is that interviews are unable to judge growth potential, i.e. how the university may positively or negatively influence the development of the applicant if he/she actually attends. Also, if interviews do not carry significant weight in the decision-making process they may cause more harm than good; the lack of specific feedback adds stress rather than relief as applicants wonder how the interview went, leading them to over-embellish small errors. Finally, if interviews are deemed important it would be helpful for more universities to offer travel vouchers to financially needy applicants, so that those who want to tour the campus and participate in the interview process have an opportunity that is not negatively impacted by their existing financial situation. Such a voucher would be especially important if interviews are used in borderline judgments.

A separate strategy may be the use of static philosophical probing questions in the application process. This strategy could better manage differences in outside environmental influences by gauging the general mindset of an applicant when it comes to solving problems. For example: if presented with a large jar full of chocolates and a single sample chocolate, how would the applicant calculate the number of chocolates in the jar? Note that this question demands both creativity and deterministic logic; creativity will produce more available options, but logic is required to reason out the best option from the list.
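As a sketch of the deterministic half of such an answer, one route an applicant might take is a volume estimate; every number below is an assumption the applicant would have to supply:

```python
# A sketch of one deterministic answer to the chocolate jar question.
# All dimensions and the packing fraction are assumptions.
import math

def estimate_count(jar_height_cm, jar_radius_cm, choc_volume_cm3,
                   packing_fraction=0.6):
    """Estimate chocolates in a cylindrical jar; packing_fraction
    reflects the air gaps between irregular solids (randomly packed
    spheres settle near 0.64)."""
    jar_volume = math.pi * jar_radius_cm ** 2 * jar_height_cm
    return int(jar_volume * packing_fraction / choc_volume_cm3)

# A 20 cm tall jar with an 8 cm radius and a ~5 cm^3 sample chocolate:
print(estimate_count(20, 8, 5))  # ~482
```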

Another interesting question would be to ask what the greatest invention in human history is. Such a question probes whether an individual believes it is more important to build a foundation or whether importance comes from what expands from that foundation. A third question could be: what one opportunity would the applicant like to have had that was not received or available, and why? These questions are superior to the generic, banal, analytically irrelevant questions that most universities ask on their admission forms.

Overall, regardless of what methodology a university uses to accept or reject applicants, the most important element is that the methodology is transparent. Universities must exhibit what attributes and credentials validate an individual’s merit for acceptance and then produce valid qualitative and quantitative reasons for why certain individuals gain admission and others do not. Transparency is the key element that lets a university conduct its specific admission methodology without complaint. Returning to the original question, whether a university elects to accept above-average individuals from high-“difficulty” environments or top performers from lower-“difficulty” environments, either method is defensible as long as legitimate reasoning is available. However, therein lies the problem with the holistic method: universities are not transparent in its application, and such behavior must change if the holistic method is to have any significant credibility.

Wednesday, June 10, 2015

Exploring the Biological Nature of Brown and Beige Fat

Over two years ago this blog discussed the possibility of incorporating a specialized preparation routine before exercise in an attempt to stimulate both brown and beige adipose tissue, thereby increasing the efficiency and overall calorie- and fat-burning potential of standard exercise. However, that post did not seek to fully understand or discuss the specific biological mechanisms that govern the behavior of brown or beige adipose tissue. This lack of knowledge limits the efficiency of exercise programs, as individuals could be consuming certain foods or performing certain warm-up tasks to increase exercise potential in addition to those suggested in the past post. Increasing exercise efficiency could be an easy means to increase the overall health of society without devoting more precious time to exercise; therefore it would prove useful to better understand the processes that activate these types of fat.

At the most basic level there are two key elements to the fat-burning capacity of brown fat. First, brown fat cells contain numerous mitochondria and store lipid in multiple small droplets, versus the single large lipid droplet and sparse mitochondria of white fat cells; these additional mitochondria allow for greater rates of metabolism. Brown fat also responds to norepinephrine, which stimulates lipases to break down stored triglycerides into glycerol and non-esterified fatty acids, ultimately producing CO2 and water, a response that can lead to a positive feedback mechanism.1,2 Second, brown fat expresses high levels of uncoupling protein 1 (UCP-1).1 UCP-1 is responsible for dissipating energy, which decouples ATP production from mitochondrial respiration.1 Basically UCP-1 returns protons after they have been pumped out of the mitochondrial matrix by the electron transport chain, releasing their energy as heat instead of capturing it as ATP (i.e. proton leak).

It is important to understand that there are two types of brown fat: natural brown fat and intermediate brown fat, commonly known as beige fat. Natural brown fat is typified by the fat located in the interscapular region and contains cells from the muscle-like myf5+ and pax7+ lineage.3 Natural brown fat is typically isolated from white fat and almost entirely synthesized in the prenatal stage of development as a means of producing heat apart from shivering.4 Beige fat is commonly interspersed within white fat, does not derive from these muscle-like cells (although Myh11 could be involved),5 and can be activated by thermogenic cues and the strain of exercise. White fat can also be converted into beige fat through a process commonly called “browning”.6,7

Natural brown fat is thought to have larger concentrations of UCP-1 because it constitutively expresses the protein after differentiation, whereas beige fat expresses large amounts of UCP-1 only in response to thermogenic or exercise cues.1,5 Therefore, natural brown fat is more effective at energy expenditure. However, it may not be possible to develop more natural brown fat after development; therefore, any positive progression in brown fat development will come from beige fat.

Early understanding of brown fat activation involved indiscriminate increases in the activity of the sympathetic nervous system (SNS). The standard pathway governing brown fat activation is a thermogenic response involving the release of norepinephrine, which initiates cAMP-dependent protein kinase (PKA) and p38-MAPK signaling leading to lipolysis; the liberated free fatty acids (FFAs) then drive UCP-1 induced proton uncoupling.4 UCP-1 concentrations are further increased through secondary pathways involving the phosphorylation of PPAR-gamma co-activator 1alpha (PGC1alpha), cAMP response element binding protein (CREB) and activating transcription factor 2 (ATF2).8 Among these three elements PGC1alpha appears to be the most important, co-activating many transcription factors and playing an important role in linking oxidative metabolism and mitochondrial action.9

However, due to the complicated nature of SNS activation and its other downstream activators, attempts to replicate it in the form of weight loss drugs like Fenfluramine or Ephedra resulted in severe negative cardiovascular side effects like elevated blood pressure and heart rate.10 While some argue that increasing either the sensitivity or the rate of stimulation of the SNS can improve upon these results, the elements downstream of SNS activation make direct influence too complicated. Therefore, from a biological perspective it makes more sense to focus on a downstream element that interacts with brown fat at a more localized level.

A side note based on the differing SNS interactivity between brown/beige and white fat: white fat appears to represent long-term energy storage and brown fat shorter-term energy, an unsurprising conclusion. However, frequent energy expenditure, like exercise, may condition the body to produce more beige fat versus white fat, treating short-term energy needs as more valuable than long-term needs. If this point is accurate then it stands to reason that a person would see more benefit from 20 minutes of exercise 6 days a week than from 40 minutes of exercise 3 days a week.

Moving away from direct SNS stimulation, perhaps the appropriate method of increasing browning involves increasing transcription and translation of UCP-1. Interestingly enough, empirical evidence exists to support the idea that retinoic acid could be an effective inducer of UCP-1 gene transcription in mice, operating through a non-adrenergic pathway.11,12 However, a more focused study using loss-of-function techniques involving retinaldehyde dehydrogenase, which is responsible for converting retinal to retinoic acid, determined that retinal, not retinoic acid, is the major inducer of brown fat activity.13 Unfortunately there is no direct understanding of the proportional response of brown fat to retinal or retinoic acid. Therefore, the general fat-soluble nature of vitamin A will probably make it difficult to utilize its derivatives as biological stimulants for brown fat activation or browning.

Another possible strategy to stimulate browning is through alternatively activated (type 2/M2) macrophages induced by eosinophils, which are commonly triggered by IL-4 and IL-13 signaling. When activated this way these macrophages are recruited around subcutaneous white fat and secrete catecholamines to facilitate browning in mice.14,15 A secondary means by which both IL-4 and IL-13 may influence fat conversion is their direct interaction with Th2 cytokines.16 Unfortunately, while on its face this strategy looks promising, in a similar vein to vitamin A it might not be effective due to unknown long-term side effects associated with IL-4 and IL-13 activation. Due to this lack of knowledge, if IL-4 or IL-13 is thought to be a viable biochemical strategy for inducing weight loss, proper long-term time lines for effects and dosages must be explored in humans, not just short-term studies in mice.

A more controversial agent in browning is fibronectin type III domain-containing protein 5 (FNDC5), whose cleaved product is more frequently known as irisin. Due to its significantly increased rate of secretion from muscle under the strain of exercise, some believe that irisin is a key mediator of browning, acting as a myokine;17 if this characterization is accurate then irisin could be a significant player in the biological benefits produced by exercise, including weight loss, white fat conversion and reduced levels of inflammation.18,19 However, other parties believe that because human studies with irisin have not demonstrated benefits similar to those seen in mice, irisin is another molecule that cannot scale up its effectiveness when faced with the added biological complexity of humans versus mice.20-22

The key element within this controversy could be that irisin expression is augmented by increased expression of PGC1alpha, but PGC1alpha increases the expression of many different proteins and other molecules, so the expression of irisin may not be relevant to the positive changes associated with exercise. Another factor may be a key difference between mice and humans: a mutation in the start codon of the human gene involved in the production of irisin, which significantly reduces irisin availability.23 This mutation could be the limiting factor explaining why, despite a very conserved genetic sequence, humans do not see anywhere near the benefit mice do. If this explanation is correct it potentially still leaves the door open to directly injecting irisin to increase concentrations in an attempt to aid exercise-derived results, but if PGC1alpha is the key, then this increased concentration of irisin could be of minimal consequence.

Another potential element that demonstrates a significant concentration increase in accordance with increased PGC1alpha is a hormone known as meteorin-like (Metrnl).24 The concentration of this hormone increases in both skeletal muscle and adipose tissue during exercise and exposure to cold temperatures, tracking increases in PGC1alpha concentrations. When Metrnl circulates in the blood it seems to produce a widespread effect that induces browning, resulting in a significant increase in energy expenditure.24 The influence of Metrnl on white fat does not appear to be due to direct interaction with the fat, but instead to indirect action on various immune cells, most notably M2 macrophages via the eosinophil pathway, which then interact with the fat through activation of various pro-thermogenic actions.24 As discussed above, this interaction with eosinophils appears to function through IL-4 and IL-13 signaling, indicating a common pathway purpose between IL-4/IL-13 and the original SNS pathway. Not surprisingly, blocking Metrnl has a negative effect on the biological thermogenic response.24

Another potential strategy for browning may be targeting appropriate receptors instead of specific molecules; with this strategy in mind one potential target could be transient receptor potential vanilloid-4 (TRPV4). TRPV4 acts as a negative regulator of browning through its action against PGC1alpha and the thermogenic pathway in general.25 In addition, TRPV4 appears to activate various pro-inflammatory genes that interact with white adipose tissue, making it more difficult to facilitate browning even when the appropriate signals are present. TRPV4 inhibition and genetic ablation in mice significantly increase resistance to obesity and insulin resistance.25 The link between inflammation and thermogenesis is highlighted by the activity of TRPV4, which is one of the early triggers of immune cell chemoattraction.25

Obesity may also produce a positive feedback effect through TRPV4 by increasing cellular swelling and stretching through the ERK1/2 pathway, which increases the rate of TRPV4 activation.26,27 However, the validity of TRPV4 as a therapeutic target remains questionable, for TRPV4 expression influences not only fat/energy expenditure, but also osmotic regulation and bone formation, and plays some role in brain function.25,28,29 Fortunately a number of the issues with TRPV4 mutation/malfunction appear to be developmental rather than post-developmental in influence, thus TRPV4 therapies could still be valid.

Natriuretic peptides (NPs) are hormones typically produced in the heart in two forms: atrial and ventricular. Both appear to play a role in browning through association with the adrenergic pathway.30 The most compelling supporting evidence is that mice lacking NP clearance receptors demonstrated significantly enhanced thermogenic gene expression in both white and brown adipose tissue.30 Also, direct application of ventricular NP in mice increased energy expenditure.30 In addition to the above results, NPs are an inherently attractive therapeutic possibility because the appropriate receptors are located in the white and brown fat of both rats and humans31,32 and these receptors go through periods of significant decline in expression when exposed to fasting,33 which may account for some of the benefits seen from low-calorie diets.

Atrial NPs increase lipolysis in human adipocytes similarly to catecholamines (increasing cAMP levels and activating PKA), although whether this increase is induced through interaction with beta-adrenergic receptors is unclear.34 Some believe that NPs activate the guanylyl cyclase-containing receptor NPRA, producing the second messenger cGMP and activating cGMP-dependent protein kinase (PKG).35,36 PKA and PKG have similar mechanisms of substrate phosphorylation, including similar targets in adipocytes,36 which may explain why atrial NPs act similarly to catecholamines.

Recall from above that one of the means of inducing browning, especially for tissues distant from SNS-based neurons, is macrophage recruitment. This recruitment appears to be initiated by CCR2 and IL-4, for when either is eliminated from mouse models the conversion no longer occurs.15 Tyrosine hydroxylase (Th) is also important in this process, facilitating the biosynthesis of catecholamines and downstream PKA activation.

With respect to producing a biomedical agent to enhance browning there appear to be three major pathways in play: 1) the SNS pathway, producing a direct activation response; 2) the macrophage recruitment pathway, potentially involving Metrnl, which activates IL-4 and IL-13 eventually leading to PKA activation and an indirect activation response; and 3) the NP activation pathway, which eventually leads to PKG activation and an indirect activation response. As mentioned earlier, SNS pathway enhancement has already been attempted by at least two drugs and failed miserably, so that method is probably out. In addition, the SNS pathway does not appear to have as much browning potential as the PKA or PKG pathways due to its reliance on the location of certain nerve fibers.

Enhancing macrophage recruitment could be a good strategy, but there appears to be little information regarding negative effects associated with short-term, high-frequency enhancement of IL-4 or IL-13 concentrations. Some reports have suggested an increase in allergic symptoms, but any more severe consequences are unknown. This is not to say that enhancing IL-4 or IL-13 is not a valid therapeutic strategy, but its overall value is unknown. In contrast, enhancement of NPs appears to be a more stable choice due to positive results in initial exploration of both the application and the expected negative side effects. First, NPs can be administered via the nose-brain pathway, enabling access to the brain while avoiding some potential systemic side effects.37 Second, there appear to be few, if any, significant side effects of intranasal NP application, at least in the short term.38

Overall, the above discussion has merely identified some of the more promising candidates for enhancing the browning of white fat. One could argue that resorting to drugs to enhance the overall health of an individual, versus simple diet and exercise, is a regretful strategy. Unfortunately the reality of modern society is that more and more people seem to have less available time to exercise or eat right. In combination with a mounting external environment that promotes weight gain (increased pollution and industrial chemicals like BPA), this drug enhancement strategy may be the most time- and economically efficient means of ensuring proper weight control and overall health for the future.

Citations –

1. van Marken Lichtenbelt, W, et Al. “Cold-activated brown adipose tissue in healthy men.” The New England Journal of Medicine. 2009. 360:1500-08.

2. Lowell, B, and Spiegelman, B. “Towards a molecular understanding of adaptive thermogenesis.” Nature. 2000. 404:652-60.

3. Seale, P, et Al. “PRDM16 controls a brown fat/skeletal muscle switch.” Nature. 2008. 454:961–967.

4. Sidossis, L and Kajimura, S. “Brown and beige fat in humans: thermogenic adipocytes that control energy and glucose homeostasis.” J. Clin. Invest. 2015. 125(2):478-486.

5. Long, J, et Al. “A smooth muscle-like origin for beige adipocytes.” Cell Metab. 2014. 19(5):810–820.

6. Kajimura, S, and Saito, M. “A new era in brown adipose tissue biology: molecular control of brown fat development and energy homeostasis.” Annu Rev Physiol. 2014. 76:225–249.

7. Harms, M, and Seale, P. “Brown and beige fat: development, function and therapeutic potential.” Nat Med. 2013. 19(10):1252–1263.

8. Collins, S. “β-Adrenoceptor signaling networks in adipocytes for recruiting stored fat and energy expenditure.” Front Endocrinol (Lausanne). 2011. 2:102.

9. Handschin, C, and Spiegelman, B. “Peroxisome proliferator-activated receptor gamma coactivator 1 coactivators, energy homeostasis, and metabolism.” Endocr. Rev. 2006. 27:728–735.

10. Yen, M, and Ewald, M. “Toxicity of weight loss agents.” J. Med. Toxicol. 2012. 8:145–152.

11. Alvarez, R, et Al. “A novel regulatory pathway of brown fat thermogenesis. Retinoic acid is a transcriptional activator of the mitochondrial uncoupling protein gene.” J. Biol. Chem. 1995. 270:5666-5673.

12. Mercader, J, et Al. “Remodeling of white adipose tissue after retinoic acid administration in mice.” Endocrinology. 2006. 147:5325–5332.

13. Kiefer, F, et Al. “Retinaldehyde dehydrogenase 1 regulates a thermogenic program in white adipose tissue.” Nat. Med. 2012. 18:918–925.

14. Nguyen, K, et Al. “Alternatively activated macrophages produce catecholamines to sustain adaptive thermogenesis.” Nature. 2011. 480(7375):104–108.

15. Qiu, Y, et Al. “Eosinophils and type 2 cytokine signaling in macrophages orchestrate development of functional beige fat.” Cell. 2014. 157(6):1292–1308.

16. Stanya, K, et Al. “Direct control of hepatic glucose production by interleukins-13 in mice.” The Journal of Clinical Investigation. 2013. 123(1):261-271.

17. Pedersen, B, and Febbraio, M. “Muscle as an endocrine organ: focus on muscle-derived interleukin-6.” Physiological Reviews. 2008. 88(4):1379–406.

18. Bostrom, P, et Al. “A PGC1-α-dependent myokine that drives brown-fat-like development of white fat and thermogenesis.” Nature. 2012. 481(7382):463–468.

19. Lee, P, et Al. “Irisin and FGF21 are cold-induced endocrine activators of brown fat function in humans.” Cell Metab. 2014. 19(2):302–309.

20. Erickson, H. “Irisin and FNDC5 in retrospect: An exercise hormone or a transmembrane receptor?” Adipocyte. 2013. 2(4):289-293.

21. Timmons, J, et Al. “Is irisin a human exercise gene?” Nature. 2012. 488(7413):E9-11.

22. Albrecht, E, et Al. “Irisin - a myth rather than an exercise-inducible myokine.” Scientific Reports. 2015. 5:8889.

23. Ivanov, I, et Al. “Identification of evolutionarily conserved non-AUG-initiated N-terminal extensions in human coding sequences.” Nucleic Acids Research. 2011. 39(10):4220-4234.

24. Rao, R, et Al. “Meteorin-like is a hormone that regulates immune-adipose interactions to increase beige fat thermogenesis.” Cell. 2014. 157:1279-1291.

25. Ye, L, et Al. “TRPV4 is a regulator of adipose oxidative metabolism, inflammation, and energy homeostasis.” Cell. 2012. 151:96-110.

26. Gao, X, Wu, L, and O’Neil, R. “Temperature-modulated diversity of TRPV4 channel gating: activation by physical stresses and phorbol ester derivatives through protein kinase C-dependent and -independent pathways.” J. Biol. Chem. 2003. 278:27129–27137.

27. Thodeti, C, et Al. “TRPV4 channels mediate cyclic strain-induced endothelial cell reorientation through integrin-to-integrin signaling.” Circ. Res. 2009. 104:1123–1130.

28. Masuyama, R, et Al. “TRPV4-mediated calcium influx regulates terminal differentiation of osteoclasts.” Cell Metab. 2008. 8:257–265.

29. Phelps, C, et Al. “Differential regulation of TRPV1, TRPV3, and TRPV4 sensitivity through a conserved binding site on the ankyrin repeat domain.” J. Biol. Chem. 2010. 285:731–740.

30. Bordicchia, M, et Al. “Cardiac natriuretic peptides act via p38 MAPK to induce the brown fat thermogenic program in mouse and human adipocytes.” The Journal of Clinical Investigation. 2012. 122(3):1022-1036.

31. Sarzani, R, et Al. “Comparative analysis of atrial natriuretic peptide receptor expression in rat tissues.” J Hypertens Suppl. 1993. 11(5):S214–215.

32. Sarzani, R, et Al. “Expression of natriuretic peptide receptors in human adipose and other tissues.” J Endocrinol Invest. 1996. 19(9):581–585.

33. Sarzani, R, et Al. “Fasting inhibits natriuretic peptides clearance receptor expression in rat adipose tissue.” J Hypertens. 1995. 13(11):1241–1246.

34. Sengenes, C, et Al. “Natriuretic peptides: a new lipolytic pathway in human adipocytes.” FASEB J. 2000. 14(10):1345–1351.

35. Potter, L, and Hunter, T. “Guanylyl cyclase-linked natriuretic peptide receptors: structure and regulation.” J Biol Chem. 2001. 276(9):6057–6060.

36. Sengenes, C, et Al. “Involvement of a cGMP-dependent pathway in the natriuretic peptide-mediated hormone-sensitive lipase phosphorylation in human adipocytes.” J Biol Chem. 2003. 278(49):48617–48626.

37. Illum, L. “Transport of drugs from nasal cavity to the central nervous system.” Eur. J. Pharm. Sci. 2000. 11:1-18.

38. Koopmann, A, et Al. “The impact of atrial natriuretic peptide on anxiety, stress and craving in patients with alcohol dependence.” Alcohol and Alcoholism. 2014. 49(3):282-286.

Wednesday, May 27, 2015

Where is my Solar and Wind Only City?


Two years ago this blog proposed a challenge to solar and wind supporters: if solar and wind are indeed the energy mediums of the future and do not require the assistance of other energy mediums (most notably fossil fuels like coal and natural gas), then they should empirically demonstrate this potential by transitioning a single medium-sized city (10,000 – 15,000 individuals) to a grid where at least 70% of the electricity, not even all energy, is produced by solar and/or wind sources. Unfortunately, despite the passage of two years and the so-called further expansion of solar and wind technology, no such experiment has been conducted.

This lack of attention to producing a model city that would empirically represent and support the actual ability of solar and wind to produce the bulk of electricity, and possibly all energy, in the future beyond simple hype is troubling. Are solar and wind proponents so irresponsible that they are willing to gamble the future of society on merely their hopes, dreams and personal preferences rather than raw data? Do they think that incorporating solar and wind into a grid, steadily advancing from 10% to 20%, then 30%, then 40%, then 50%, etc., will run perfectly with no significant problems? If so, then the solar and wind supporters who believe these things should be stripped of all credibility and influence; those who do not believe in such a perfect transition should immediately begin petitioning to accept the challenge.

To the solar and wind proponents who object to the above characterization on the grounds that in March Georgetown, Texas (population approximately 48,000) proposed a plan to get all of its electricity from solar and wind sources, in essence meeting this challenge: hold your horses. While it is true that there has been an initial arrangement between Georgetown Utility Systems, Spinning Spur Wind Farm (owned by EDF Renewable Energy) and SunEdison to purchase 294 MW (144 MW wind and 150 MW solar) from their installations, this is only an initial arrangement; no actual testing or application has occurred yet.

A more pertinent issue regarding the use of Georgetown as an example is that there is no specific information on how Georgetown Utility Systems will manage this change in supplier. Basically the only public reporting on this strategy has been puff-hype pieces with no real substance or details. Both Spinning Spur Wind Farm and the yet-to-be-identified SunEdison site have not been fully constructed, are not operational and do not have any secondary storage capacity; thus any electricity produced by these installations will be live, and when they are not producing electricity there will be no electricity to provide to Georgetown.

Initially there are at least three major questions that must be addressed to legitimize Georgetown as a model for a solar/wind-only powered city. First, where is the detailed analysis of how electricity, and possibly even energy flows, would be properly compensated to avoid brownouts when insufficient electricity is being produced by solar and wind sources? Simply saying “the sun shines in the day and the wind blows when the sun is not shining” is laughable and severely damages credibility. Anyone who thinks that there will not be periods of intermittence from both Spinning Spur and the SunEdison site harbors an inaccurate belief. Basically, show that 100% renewable can be done using math, not flowery words and misplaced hype; note that it is important to include any transmission and inverter losses in the calculation and to separate nameplate capacity from actual operational capacity, as sketched below.
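For illustration, a back-of-the-envelope version of the demanded math might look like this; the capacity factors and loss figures are assumptions, not Georgetown’s actual numbers:

```python
# Back-of-the-envelope sketch of the demanded math. Capacity factors
# and losses are assumed values, not Georgetown's actual figures.

NAMEPLATE_WIND_MW = 144.0    # from the Spinning Spur arrangement
NAMEPLATE_SOLAR_MW = 150.0   # from the SunEdison arrangement

CAP_FACTOR_WIND = 0.35       # assumed annual average output fraction
CAP_FACTOR_SOLAR = 0.25      # assumed annual average output fraction
TRANSMISSION_LOSS = 0.05     # assumed line losses
INVERTER_LOSS = 0.03         # assumed DC-to-AC conversion losses

wind_avg = NAMEPLATE_WIND_MW * CAP_FACTOR_WIND * (1 - TRANSMISSION_LOSS)
solar_avg = (NAMEPLATE_SOLAR_MW * CAP_FACTOR_SOLAR
             * (1 - TRANSMISSION_LOSS) * (1 - INVERTER_LOSS))

print(f"Average deliverable power: {wind_avg + solar_avg:.1f} MW")
# ~82 MW on average -- but an annual average says nothing about the
# calm, cloudy week, which is exactly the intermittence problem.
```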

Second, it stands to reason that proponents of a solar/wind-only city will not allow natural gas or coal to act in a backup capacity during these periods of intermittence; therefore, during periods of excess solar and wind, electricity must be stored in a battery for use at a future time. So what type of battery structure(s) is going to be utilized to store that excess energy, and what is the economic feasibility of using this structure? If no battery infrastructure is believed to be feasible or economical, then what type of energy medium will be tapped to act as backup in lieu of a fossil fuel medium, and how will it be properly incorporated?
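A minimal storage-sizing sketch for that question, under a hypothetical average load and lull duration (every figure is chosen purely for illustration):

```python
# Minimal battery-sizing sketch. Load, lull length, usable depth of
# discharge and cost are all assumptions chosen for illustration.

AVG_DEMAND_MW = 60.0       # assumed average city load
LULL_HOURS = 18            # assumed calm/overcast gap to ride through
DEPTH_OF_DISCHARGE = 0.8   # usable fraction of rated capacity
COST_PER_MWH = 350_000     # assumed installed cost in $/MWh

required_mwh = AVG_DEMAND_MW * LULL_HOURS / DEPTH_OF_DISCHARGE
print(f"Battery required: {required_mwh:.0f} MWh")             # 1350 MWh
print(f"Rough cost: ${required_mwh * COST_PER_MWH / 1e6:.0f}M")  # ~$472M
```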

Third, how will consumer costs for energy change over time with the transition away from fossil fuels, i.e. what will costs be in year 1, in year 10…? To simply say it will cost less is not sufficient. It must be demonstrated that it will cost less both now and in the future, and if it will not cost less in the future, what forms of compensation, if any, will be provided to the residents of Georgetown?

Overall these are just the three most basic questions that must be addressed before anyone should accept the idea of Georgetown, Texas being a legitimate 100% solar/wind-powered city when its plan is put into place a few years from now. If these questions are not answered with accurate specifics that are later properly executed over time, then Georgetown loses all significance as both a legitimate and symbolic experiment for the validity of a solar and wind “future”.

Of course it must be understood that the results in Georgetown are only an initial step; success only supports the possibility, it does not guarantee national eventuality. So how about it, solar and wind supporters: are you actually ready to put your theories to the test, or are you simply content with the unscientific and irrational belief that everything will magically work out without the need for essential specifics, realistic assumptions, honest economics (which is incredibly lacking in most pro-solar and wind papers) and valid proofs of concept?

Wednesday, May 6, 2015

A Theory Behind the Relationship Between Processed Foods and Obesity


While there has been a general slowing in the progression of global obesity, especially in the developed world, there has yet to be a reversal of this detrimental trend. A recent study has suggested that one influence on obesity progression lies with the consumption of foods that incorporate emulsifiers and how those emulsifiers interact with intestinal bacteria, including increasing the probability of developing metabolic syndrome in mice.1 Based on this result, understanding the digestive process may be an important element in understanding how emulsifiers and emulsions may influence weight outcomes.

An emulsion is a mixture of at least two liquids in which the components are immiscible, a characteristic commonly seen when oil is added to water, resulting in a two-layer system where the oil floats on the surface of the water until it is mixed to form the emulsion. Due to this immiscibility most emulsions are inherently unstable, as “similar” droplets rejoin one another, once again creating two distinct layers. Within an emulsion the liquids are divided into two separate elements: a continuous phase and a droplet phase, depending on the concentrations of the liquids present. Because of their inherent instability, most emulsions are stabilized with the addition of an emulsifier. These agents are commonly used in many food products including various breads, pastas/noodles, and milk/ice cream.

Emulsifier-based stabilization occurs by reducing interfacial tension between immiscible phases and by increasing the repulsion between dispersed phases, through increasing either steric repulsion or electrostatic repulsion. Emulsifiers can produce these effects because they are amphiphiles (they have two different ends): a hydrophilic end that interacts with the water layer but not the oil layer, and a hydrophobic end that interacts with the oil layer but not the water layer. Steric repulsion is born from volume restrictions imposed by direct physical barriers, while electrostatic repulsion is exactly what its name implies: electrically charged surfaces repelling each other as they approach. As previously mentioned, some recent research has suggested that the consumption of certain emulsifiers by mice has produced negative health outcomes relative to controls. Why would such an outcome occur?

A typical dietary starch, found in many of the common foods that utilize emulsifiers, is composed of long chains of glucose called amylose, a polysaccharide.2 These polysaccharides are first broken down in the mouth by chewing and saliva, converting the food structure from a cohesive macro state to scattered smaller chains of glucose. Other more complex sugars like lactose and sucrose are broken down into glucose and their secondary sugars (galactose, fructose, etc.).

Absorption and complete degradation begin in earnest through hydrolysis by salivary and pancreatic amylase in the upper small intestine, with little hydrolysis occurring in the stomach.3 There is little contact or membrane digestion through absorption on brush border membranes.4 Polysaccharides break down into oligosaccharides that are then broken down into monosaccharides by surface enzymes on the brush borders of enterocytes.5 Microvilli on the enterocytes then direct the newly formed monosaccharides to the appropriate transport site.5 Disaccharidases in the brush border ensure that only monosaccharides are transported, not lingering disaccharides. This process differs from protein digestion, which largely involves degradation in gastric juices comprised of hydrochloric acid and pepsin, with later transfer to the duodenum.

Within the small intestine free fatty acid concentration increases significantly as oils and fats are hydrolyzed at a faster rate than in the stomach due to the increased presence of bile salts and pancreatic lipase.3 It is thought that the droplet size of emulsified lipids influences digestion and absorption, where smaller sizes allow gastric lipase digestion to carry into duodenal lipolysis.6,7 The smaller the droplet size, the finer the emulsion in the duodenum, leading to a higher degree of lipolysis.8 Not surprisingly, gastric lipase activity is also greater in thoroughly mixed emulsions versus coarse ones.

Typically hydrophobic interactions are responsible for the self-assembly of amphiphiles: water molecules gain entropy as the hydrophobic ends of the amphiphilic molecules are buried in the cores of micelles.9 However, in emulsions the presence of oils produces a low-polarity environment that can facilitate reverse self-assembly,10,11 with a driving force born from the attraction of hydrogen bonding. For example, lecithin is a zwitterionic phospholipid with two hydrocarbon tails that forms reverse spherical or ellipsoidal micelles when exposed to oil.12 Basically, emulsions could have the potential to significantly increase the free hydrogen concentration of the stomach.

This potential increase in free hydrogen could be an important aspect of why emulsions produce negative health outcomes in model organisms.1 One of the significant interactions governing the concentrations and types of intestinal bacteria is the rate of interspecies hydrogen transfer from hydrogen-producing bacteria to hydrogen-consuming methanogens. Note that non-obese individuals have small intestinal methanogen populations whereas obese individuals have larger populations, and it is thought that the methanogen population expands before one gains significant weight.13,14 The importance of this relationship is best demonstrated by understanding the biochemical process involved in the formation of fatty acids in the body.

Methanogens like Methanobrevibacter smithii enhance fermentation efficiency by removing excess free hydrogen and formate in the colon. A reduced concentration of hydrogen leads to an increased rate of conversion of insoluble fibers into short-chain fatty acids (SCFAs).13 Propionate, acetate, butyrate and formate are the most common SCFAs formed and absorbed across the intestinal epithelium, providing a significant portion of the energy for intestinal epithelial cells and promoting survival, differentiation and proliferation, thereby helping ensure an effective intestinal lining.13,15,16 Butyric acid is also utilized by the colonocytes.17 Formate can be used directly by hydrogenotrophic methanogens, and propionate and lactate can be fermented to acetate and H2.13

Overall the population of archaea in the gut, largely associated with Methanobrevibacter smithii, is tied to obesity, with the key factor being the availability of free hydrogen. If there is a lot of free hydrogen then there is a higher probability of a large archaeal population; otherwise the population remains very low because of the limited ‘food source’. Therefore, the consumption of food products with emulsions or emulsion-like characteristics or components could increase available free hydrogen concentrations, which would change the intestinal bacteria composition in a negative manner and increase the probability that an individual becomes obese. This hypothesis coincides with existing evidence from model organisms that emulsion consumption has potential negative intestinal bacteria outcomes. One possible mechanism behind this negative influence is how the change in bacterial composition influences the available concentration of SCFAs, which could change the stability of the intestinal lining.

In addition to influencing hydrogen concentrations in the gut, emulsions also appear to have a significant influence on cholecystokinin (CCK) concentrations. CCK plays a meaningful role in both digestion and satiety, two components of food consumption that significantly influence both body weight and intestinal bacteria composition. Most of these concentration changes occur in the small intestine, most notably in the duodenum and jejunum.18 The largest influence on CCK release is the amount and type of fatty acid present in the chyme.18 CCK is responsible for inhibiting gastric emptying, decreasing gastric acid secretion and increasing production of digestive secretions like hepatic bile and other bile salts, which form amphipathic lipids that emulsify fats.

When compared against non-emulsions, emulsion consumption appears to reduce the feedback effect that suppresses hunger after food intake. This effect is principally the result of changes in CCK concentrations rather than other signaling molecules like GLP-1.19 Emulsion digestion begins when lipases bind to the surface of the emulsion droplets; the effectiveness of lipase binding increases with decreasing droplet size. Small emulsion droplets tend to have more complex microstructures, which produce more surface area and allow for more effective digestion, as the sketch below illustrates.
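The geometry behind that claim is simple: for a fixed volume of oil, total droplet surface area scales inversely with droplet radius (the volumes and radii below are arbitrary):

```python
# Sketch: for a fixed oil volume, total droplet surface area scales
# as 1/radius. Volumes and radii below are arbitrary.
import math

def total_surface_area(oil_volume_um3, droplet_radius_um):
    """Total surface area when a fixed oil volume is dispersed into
    droplets of a given radius."""
    droplet_volume = (4.0 / 3.0) * math.pi * droplet_radius_um ** 3
    n_droplets = oil_volume_um3 / droplet_volume
    return n_droplets * 4.0 * math.pi * droplet_radius_um ** 2

oil = 1e9  # fixed amount of emulsified oil, in cubic micrometers
fine = total_surface_area(oil, droplet_radius_um=1.0)
coarse = total_surface_area(oil, droplet_radius_um=10.0)
print(fine / coarse)  # 10.0 -- ten-fold smaller droplets expose
                      # ten-fold more lipase-accessible surface
```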

This higher rate of breakdown produces a more rapid release of fatty acids, and the presence of free fatty acids in the small intestinal lumen is critical for gastric emptying and CCK release.20 This accelerated breakdown creates a relationship between CCK concentration and emulsion droplet size: the larger the droplet size, the lower the released CCK concentration.21 One of the main reasons why larger-droplet emulsions produce less hunger satisfaction is that with the reduced CCK concentration and slower emulsion breakdown there is less feedback slowing of intestinal transit. Basically, food travels through the intestine at a faster rate because there are fewer digestive cues (feedback) to slow transit for the purpose of digestion.

As alluded to above, the type of emulsifier used to produce the emulsion appears to be the most important element in how an emulsion influences digestion. For example, the lipid and fatty acid concentrations produced from digestion of a yolk lecithin emulsion were up to 50% smaller than those from one using polysorbate 20 (i.e. Tween 20) or caseinate.7 Basically, if certain emulsifiers are used, the rate of emulsion digestion can be reduced, potentially increasing the concentration of bile salts in the small intestine, which could produce a higher probability of negative intestinal events.

Furthermore, studies using low-molecular-mass emulsifiers (two non-ionic, two anionic and one cationic) demonstrated three tiers of triglyceride (TG) lipolysis governed by the emulsifier-to-bile salt ratio.3 At low emulsifier-bile ratios (<0.2 mM) there was no change in the solubilization capacity of micelles, whereas at ratios between 0.2 mM and 2 mM solubilization capacity significantly increased, which limited interactions between the oil and destabilization reaction products, reducing oil degradation.3 At higher ratios (>2 mM) emulsifier molecules remain in the adsorption layer, heavily limiting lipase activity, which significantly reduces digestion and oil degradation.3 These tiers are summarized in the sketch below.
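The three tiers can be read as a simple lookup; the thresholds come from the text above, while the regime labels are paraphrases:

```python
# The three lipolysis tiers reported above, expressed as a lookup.
# Thresholds follow the text; the labels are paraphrases.

def lipolysis_regime(emulsifier_to_bile_mM):
    if emulsifier_to_bile_mM < 0.2:
        return "micelle solubilization capacity unchanged"
    elif emulsifier_to_bile_mM <= 2.0:
        return "solubilization increased; oil degradation reduced"
    else:
        return "emulsifier dominates adsorption layer; lipase limited"

for ratio in (0.1, 1.0, 5.0):
    print(f"{ratio} mM -> {lipolysis_regime(ratio)}")
```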

Another possible influencing factor could be changes in glucagon concentrations. There is evidence suggesting that increasing glucagon concentration in already-fed rats can produce hypersecretory activity in both the jejunum and ileum.22-24 It stands to reason that, given the activation potential of glucagon-like peptide-1 (GLP-1) in concert with CCK, glucagon plays some role. However, there are no specifics regarding how glucagon directly interacts with intestinal bacteria or with the changes in digestion rate associated with emulsions.

The mechanism behind why emulsions and their associated emulsifiers produce negative health outcomes in mice is unknown, but it stands to reason that both how emulsions change the rate of digestion and the free hydrogen concentration they present play significant roles. These two factors have sufficient influence on the composition and concentration of intestinal bacteria, which in turn influence a large number of digestive properties including nutrient extraction and SCFA concentration management. SCFA management may be the most pertinent issue regarding the metabolic syndrome outcomes seen in mice exposed to emulsifiers.

It appears that creating emulsions with smaller droplet sizes could mitigate negative outcomes, which can be accomplished by using lecithin over other types of emulsifiers. Overall, while emulsifiers may be a necessary element of modern life to ensure food quality, instructing companies on the proper emulsifier to use at the appropriate ratios should have a positive effect on managing any detrimental interaction between emulsions and gut bacteria.



Citations –

1. Chassaing, B, et Al. “Dietary emulsifiers impact the mouse gut microbiota promoting colitis and metabolic syndrome.” Nature. 2015. 519(7541):92-96.

2. Choy, A, et Al. “The effects of microbial transglutaminase, sodium stearoyl lactylate and water on the quality of instant fried noodles.” Food Chemistry. 2010. 122:957-964.

3. Vinarov, Z, et Al. “Effects of emulsifiers charge and concentration on pancreatic lipolysis: 2. interplay of emulsifiers and biles.” Langmuir. 2012. 28:12140-12150.

4. Ugolev, A, and Delaey, P. “Membrane digestion – a concept of enzymic hydrolysis on cell membranes.” Biochim Biophys Acta. 1973. 300:105-128.

5. Levin, R. “Digestion and absorption of carbohydrates from molecules and membranes to humans.” Am. J. Clin. Nutr. 1994. 59:690S-698S.

6. Mu, H, and Hoy, C. “The digestion of dietary triacylglycerols.” Progress in Lipid Research. 2004. 43:105-133.

7. Hur, S, et Al. “Effect of emulsifiers on microstructural changes and digestion of lipids in instant noodle during in vitro human digestion.” LWT – Food Science and Technology. 2015. 60:630-636.

8. Armand, M, et Al. “Digestion and absorption of 2 fat emulsions with different droplet sizes in the human digestive tract.” American Journal of Clinical Nutrition. 1999. 70:1096-1106.

9. Njauw, C-W, et Al. “Molecular interactions between lecithin and bile salts/acids in oils and their effects on reverse micellization.” Langmuir. 2013. 29:3879-3888.

10. Israelachvili, J. “Intermolecular and surface forces.” 3rd ed. Academic Press: San Diego. 2011.

11. Evans, D, and Wennerstrom, H. “The colloidal domain: where physics, chemistry biology, and technology meet.” Wiley-VCH: New York. 2001.

12. Tung, S, et Al. “A new reverse wormlike micellar system: mixtures of bile salt and lecithin in organic liquids.” J. Am. Chem. Soc. 2006. 128:5751-5756.

13. Zhang, H, et, Al. “Human gut microbiota in obesity and after gastric bypass.” PNAS. 2009. 106(7): 2365-2370.

14. Turnbaugh, P, et, Al. “An obesity-associated gut microbiome with increased capacity for energy harvest.” Nature. 2006. 444(7122):1027–31.

15. Son, G, Kremer, M, Hines, I. “Contribution of Gut Bacteria to Liver Pathobiology.” Gastroenterology Research and Practice. 2010. doi:10.1155/2010/453563.

16. Luciano, L, et Al. “Withdrawal of butyrate from the colonic mucosa triggers ‘mass apoptosis’ primarily in the G0/G1 phase of the cell cycle.” Cell and Tissue Research. 1996. 286(1):81–92.

17. Cummings, J, and Macfarlane, G. “The control and consequences of bacterial fermentation in the human colon.” Journal of Applied Bacteriology. 1991. 70:443459.

18. Rasoamanana, R, et Al. “Dietary fibers solubilized in water or an oil emulsion induce satiation through CCK-mediated vagal signaling in mice.” J. Nutr. 2012. 142:2033-2039.

19. Adam, T, and Westerterp-Plantenga, M. “Glucagon-like peptide-1 release and satiety after a nutrient challenge in normal-weight and obese subjects.” Br J Nutr. 2005. 93:845–51.

20. Little, T, et Al. “Free fatty acids have more potent effects on gastric emptying, gut hormones, and appetite than triacylglycerides.” Gastroenterology. 2007. 133:1124–31.

21. Seimon, R, et Al. “The droplet size of intraduodenal fat emulsions influences antropyloroduodenal motility, hormone release, and appetite in healthy males.” Am. J. Clin. Nutr. 2009. 89:1729-1736.

22. Young, A, and Levin, R. “Diarrhoea of famine and malnutrition: investigations using a rat model. 1. Jejunal hypersecretion induced by starvation.” Gut. 1990. 31:43-53.

23. Youg, A, Levin, R. “Diarrhoea of famine and malnutrition: investigations using a rat model. 2. Ileal hypersection induced by starvation.” Gut. 1990. 31:162-169.

24. Lane, A, Levin, R. “Enhanced electrogenic secretion in vitro by small intestine from glucagon treated rats: implications for the diarrhoea of starvation.” Exp. Physiol. 1992. 77:645-648.

Tuesday, April 21, 2015

Augmenting rainfall probability to ward off long-term drought?


Despite the ridiculous pseudo-controversy surrounding global warming in public discourse, the reality is that global warming is real and has already begun to significantly influence the global climate. One of the most important factors in judging the range and impact of global warming, as well as how society should respond, is also one of the more perplexing: cloud formation. Not only do clouds influence the cycle of heat escape and retention, they also drive precipitation probability. Precipitation plays an important role in maintaining effective hydrological cycles as well as heat budgets, and it will change significantly in reaction to future warming, largely toward more extreme outcomes: some areas will receive significant increases that produce flash flooding, whereas others will be deprived of rainfall, producing longer-term droughts similar to those now seen in California.

At its core precipitation is influenced by numerous factors like solar heating and terrestrial radiation.1,2 Among these factors, various aerosol particles are thought to hold an important influence. Both organic and inorganic aerosols are plentiful in the atmosphere, helping to cool the surface of the Earth by scattering sunlight or by serving as nuclei for the formation of water droplets and ice crystals.3 Not surprisingly, information regarding the means by which the properties of these aerosols influence cloud formation and precipitation is still limited, which creates significant uncertainties in climate modeling and planning. Therefore, increasing knowledge of how aerosols influence precipitation will provide valuable information for managing the various changes that will occur and even possibly mitigating those changes.

The formation of precipitation within clouds is heavily influenced by ice nucleation, the induction of crystallization in supercooled water (supercooled = a meta-stable state where water remains liquid below typical freezing temperatures). Ice nucleation typically occurs through one of two pathways: homogeneous or heterogeneous. Homogeneous nucleation entails spontaneous nucleation within a properly cooled solution (usually a supersaturated solution at 150-180% relative humidity and a temperature of around –38 degrees C), requiring only liquid water or aqueous solution droplets.4-6 Due to its relative simplicity, homogeneous nucleation is better understood than heterogeneous nucleation. However, because of the temperature requirements, homogeneous nucleation typically only takes place in the upper troposphere, and with a warming atmosphere it should be expected that its probability of occurrence will decrease.

Heterogeneous nucleation is more complicated because of the multiple pathways it can take: deposition freezing, condensation freezing, contact freezing, and immersion freezing.7,8 These different pathways allow for more flexibility in nucleation, with generic initiation conditions beginning just south of 0 degrees C at a relative humidity of 100%. Nucleation proceeds at these much warmer temperatures because of the presence of a catalyst, a non-water substance commonly referred to as an ice-forming nucleus (IN). Heterogeneous nucleation can also involve diffusive growth in a mixed-phase cloud, where ice crystals grow at the expense of evaporating liquid droplets (the Wegener–Bergeron–Findeisen process), along with riming of supercooled droplets and snow/graupel aggregation.9
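
As a rough summary of the two pathways, here is a minimal illustrative sketch; the cutoff values are taken from the conditions quoted above, and the function itself is only a simplification, not a cloud microphysics model.

# Rough sketch of the initiation conditions described above; the cutoffs
# (-38 C for homogeneous freezing, ~150% relative humidity, 0 C ceiling
# for heterogeneous freezing) are the values quoted in the text.

def nucleation_pathway(temp_c: float, rh_percent: float, ice_nuclei_present: bool) -> str:
    if temp_c <= -38 and rh_percent >= 150:
        return "homogeneous nucleation possible"
    if temp_c < 0 and rh_percent >= 100 and ice_nuclei_present:
        return "heterogeneous nucleation possible (IN acts as catalyst)"
    return "no ice nucleation expected"

print(nucleation_pathway(-40, 160, False))  # upper troposphere case
print(nucleation_pathway(-5, 100, True))    # warmer cloud with dust/bacteria
print(nucleation_pathway(-5, 100, False))   # supercooled but no catalyst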

Laboratory experiments have demonstrated support for many different materials acting as IN: various metallic particles, biological materials, certain glasses, mineral dust, anhydrous salts, etc.8,10,11 These experiments involve wind tunnels, electrodynamic levitation, scanning calorimetry, cloud chambers, and optical microscopy.12,13 However, and not surprisingly, there appears to be a significant difference between nucleation ability in the lab and in nature.8,10

Also, while homogeneous ice nucleation is exactly that, homogeneous, heterogeneous nucleation does not have the same quenching properties.8 Temperature variations within a cloud can produce differing methods of heterogeneous nucleation versus homogeneous nucleation, producing significant differences in efficiency. For example, some forms of nucleation in cloud formations are more difficult to understand, like the rapid high-concentration ice formation in warm precipitating cumulus clouds, i.e. particle concentrations increasing from 0.01 L-1 to 100 L-1 in a few minutes at temperatures warmer than –10 degrees C, outpacing existing ice nucleus measurements.14 One explanation for this phenomenon is the Hallett-Mossop (H-M) process, which is thought to achieve this rapid freezing through interaction with a narrow band of supercooled raindrops producing rimers.15
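
To appreciate how fast that is, a quick back-of-the-envelope calculation helps; the 5-minute window below is my own assumption for "a few minutes", and the exponential-growth framing is only illustrative.

# Back-of-the-envelope arithmetic on the observation above: going from
# 0.01 L^-1 to 100 L^-1 is a 10^4-fold increase. Assuming (hypothetically)
# ~5 minutes of exponential growth, the implied doubling time shows why
# primary nucleation alone cannot keep pace.

import math

n0, n1 = 0.01, 100.0      # ice particle concentrations, per liter
minutes = 5.0             # assumed duration ("a few minutes")

growth_factor = n1 / n0                     # 10,000-fold
doublings = math.log2(growth_factor)        # ~13.3 doublings
doubling_time_s = minutes * 60 / doublings  # ~22.6 seconds

print(f"growth factor: {growth_factor:.0f}x")
print(f"doublings required: {doublings:.1f}")
print(f"implied doubling time: {doubling_time_s:.1f} s")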

The H-M process requires cloud temperatures between approximately –1 and –10 degrees C with the availability of large rain droplets (diameters > 24 um) at roughly a 0.1 ratio relative to smaller droplets (< 13 um).16,17 When the riming process begins, ice splinters are ejected and grow through water vapor deposition, producing a positive feedback effect that increases riming and produces more ice splinters. Basically, a feedback loop develops between ice splinter formation and small drop freezing. Unfortunately, there are some questions about whether this process can properly explain the characteristics of secondary ice particles and the formation of ice crystal bursts under certain time constraints.18 However, these concerns may not be accurate due to improper assumptions regarding how water droplets form relative to existing water concentrations.15
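
The exponential character of that feedback loop is easy to see in a toy model. The rate constant below is arbitrary (picked to roughly match the doubling time computed above), and the model ignores all real microphysics; it only demonstrates the runaway character of the loop.

# Toy model of the H-M style feedback described above: each second, rimers
# eject splinters in proportion to the current ice population, and each
# splinter freezes more small drops, producing new rimers. The rate
# constant k is assumed, not measured.

k = 0.031          # new ice particles per ice particle per second (assumed)
ice = 0.01         # starting concentration, per liter
dt, t = 1.0, 0.0   # 1 s time steps

while ice < 100.0:
    ice += k * ice * dt   # splinters -> frozen drops -> more splinters
    t += dt

print(f"reached 100 per liter after ~{t/60:.1f} minutes")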

One of the more important elements of rain formation in warm precipitating cumulus clouds, as in other cloud formations, appears to be the location of ice particle concentrations at the top of the cloud formation, where there is a higher probability of large droplet formation (500-2000 um diameters).15 In this regard, cloud depth/area is a more important influencing element than cloud temperature.19 In addition, the apparent continued formation of ice crystals proceeding from the top downwards can produce raindrop freezing that catalyzes ice formation, creating a positive feedback and ice bursts.20

This process suggests that there is sufficient replenishment of small droplets at the cloud top, increasing the probability of sufficient riming. It is thought that the time variation governing the rate of ice multiplication, and how cloud temperature changes accordingly, is determined by dry adiabatic cooling at the cloud top, condensational warming, and evaporational cooling at the cloud bottom.15 Bacteria also appear to play a meaningful role in both nucleating primary ice crystals and scavenging secondary crystals.7 Even if bacteria concentrations are low (< 0.05 L-1), the catalytic effect of nucleating bacteria produces a much more H-M-friendly environment.

The most prominent inorganic aerosol that acts as an IN is dust, commonly from deserts, that is pushed into the upper atmosphere by storms.21,22 The principal origin of this dust is the Sahara Desert, which is lofted year-round, versus dust from other origin points like the Gobi or Siberia. While the ability of this dust to produce rain is powerful, it can also have a counteracting effect as a cloud condensation nucleus (CCN). In most situations, when CCN concentration increases raindrop conversion becomes less efficient, especially for low-level clouds (in part due to higher temperatures), largely by reducing riming efficiency.

The probability of dust acting as a CCN is influenced by the presence of anthropogenic pollution, which typically acts as a CCN on its own.23,24 In some situations the presence of pollution could also increase the overall rate of rainfall, as it can suppress premature rainfall, allowing more rain droplets to crystallize, increasing riming and potential rainfall. However, this aspect of pollution is only valid in the presence of dust or other INs, for if there is a dearth of IN concentration, localized pollution will decrease precipitation.25 Soot can also influence nucleation and resultant rainfall, but only under certain circumstances. For example, if the surface of the soot contains molecules available to form hydrogen bonds with liquid water molecules (typically hydroxyl and carbonyl groups), nucleation is enhanced.26 Overall it seems appropriate to label dust as a strong IN and anthropogenic pollution as a significant CCN.

In mineral collection studies and global simulations of aerosol particle concentrations, both deposition and immersion heterogeneous nucleation appear dominated by dust concentrations acting as INs, especially in cirrus clouds.10,27,28 Aerosols also modify certain cloud properties like droplet size and water phase. Most other inorganic atmospheric aerosols behave like CCN, which assist the condensation of water vapor into cloud droplets at a certain level of super-saturation.25 Typically this condensation produces a large number of small droplets, which can reduce the probability of warm rain (rain formed above the freezing point).29,30

Recall that altitude is important in precipitation, thus it is not surprising that one of the key factors in how aerosols influence precipitation type and probability appears to be the elevation and temperature at which they interact. For example, in mixed-phase clouds the cloud-top area increases with increases in CCN concentration, versus a smaller change at lower altitudes and no change in pure liquid clouds.15,31 Also, CCN only significantly influence temperatures when cloud-top and cloud-base temperatures are below freezing.31 In short, it appears that CCN influence is reduced relative to IN influence at higher altitudes and lower temperatures.

Also, cloud drop concentration and size distribution at the base and top of a cloud determine the efficiency of the CCN and are dictated by the chemical structure and size of the aerosol. For example, larger aerosols have a higher probability of becoming CCN over IN due to their coarse structure. Finally, and not surprisingly, overall precipitation frequency increases with high water content and decreases with low water content when exposed to CCNs.31 This behavior creates a positive feedback structure that increases aerosol concentration, so for arid regions the probability of drought increases and for wet regions the probability of flooding increases.

While dust from natural sources and general pollution are the two most common aerosols, an interesting secondary source may be soil dust produced from land use, i.e. deforestation or large-scale construction projects.32-34 These actions create anthropogenic dust emissions that can catalyze a feedback loop producing greater precipitation extremes; thus, in certain developing economic regions that may be struggling with droughts, continued construction in an effort to improve the economy could exacerbate those droughts. Therefore, developing regions may need to produce specific methodologies to govern their development to ensure proper levels of rainfall for the future.

While the role of dust has not been fully identified at a mechanistic level, its importance is not debatable. The role of biological particles, like bacteria, is more controversial and could be critical to identifying a method to enhance rainfall probability. It is important to identify the capacity of bacteria to catalyze rainfall because some laboratory studies have demonstrated that inorganic INs only have significant activity below –15 degrees C.10,35 For example, in samples of snowfall collected globally that originated at temperatures of –7 degrees C or warmer, the vast majority of the active IN, up to 85%, were lysozyme-sensitive (i.e. probably bacteria).36,37 Also, rain tends to have higher proportions of active IN bacteria than air in the same region.38 With further global warming on the horizon, air temperatures will continue to increase, narrowing the window for inorganic IN activity and thus lowering the probability of rainfall in general (not considering any other changes born from global warming).

Laboratory and field studies have demonstrated approximately twelve species of bacteria with significant IN ability spread across three orders of the Gammaproteobacteria, the two most notable/frequent agents being Pseudomonas syringae and P. fluorescens and, to a lesser extent, Xanthomonas.39,40 In the presence of an IN bacterium, nucleation can occur at temperatures as warm as –1.5 to –2 degrees C.41,42 These bacteria appear to act as IN due to a single gene that codes for a specific membrane protein that catalyzes crystal formation by acting as a template for water molecule arrangement.43 These bacteria originate mostly from surface vegetation.

Supporting the idea of this key membrane scaffolding, an acidic pH environment can significantly reduce the effectiveness of bacteria-based nucleation.45,46 Also, these nucleation protein complexes are larger in bacteria that nucleate at warmer temperatures, and thus more prone to breakdown in more acidic environments.44,46 Therefore, low-lying areas with significant acidic pollution, like sulfur compounds, could see a reduction in precipitation probability over time. It also seems that this protein complex, rather than the actual biological processes of the bacterium, could be the critical element of bacteria-based nucleation, as nucleation persisted even when the bacteria themselves were no longer viable.46

Despite laboratory and theoretical evidence supporting the role of bacteria in precipitation, as stated above, what occurs in the laboratory serves little purpose if it does not translate to nature. This translation is where controversy arises. It can be difficult to separate the various particles within clouds from residue collection due to widespread internal mixing, but empirical evidence demonstrates the presence of biological material in orographic clouds.47 Also, ice nucleation bacteria are present over all continents as well as in various specific locations like the Amazon basin.37,48,49

Some estimates suggest that 10^24 bacteria enter the atmosphere each year and circulate for between 2 and 10 days, theoretically allowing bacteria to travel thousands of miles.50,51 However, there is a lack of evidence for bacteria in the upper troposphere, and their concentrations are dramatically lower than those of inorganic materials like dust and soot.28,35,52 Based on this lack of concentration, questions exist about how efficiently these bacteria are aerosolized over their atmospheric lifetimes. One study suggests that IN-active bacteria are precipitated much more efficiently than non-IN-active bacteria, which may explain the disparity between the observations in air, clouds and precipitation.53
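
Those two figures also allow a rough standing-population estimate; this is an order-of-magnitude illustration only, assuming emissions and removal are in steady state.

# Quick steady-state estimate from the figures above: with ~10^24 bacteria
# emitted per year and a 2-10 day atmospheric residence time, the standing
# airborne population is roughly emission rate x residence time.

emission_per_year = 1e24

for residence_days in (2, 10):
    standing_pop = emission_per_year * residence_days / 365.0
    print(f"{residence_days:>2} day residence -> ~{standing_pop:.1e} bacteria aloft")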

Another possible explanation for this disparity is that most biological particles are generated at the surface and carried by updrafts and currents into the atmosphere. While the methods of transport are similar to those of inorganic particles, biological particles have a higher removal potential through dry or wet deposition due to their typically greater size. Therefore, from a nature standpoint, bacteria reside in orographic clouds because they are able to participate in their formation, but are not able to reach higher cloud formations, so most upper-troposphere rain is born from dust, not bacteria.

Some individuals feel that the current drop-freezing assays, which are used to identify the types of bacteria and other agents in a collected sample, can be improved to produce a higher level of discrimination between the various classes of IN-active bacteria that may be present in the sample. One possible idea is to store the sample at low temperatures and observe the growth and type of IN bacteria that occur in a community versus individual samples.54 Perhaps new identification techniques would increase the ability to discern the role of bacteria in cloud formation and precipitation.

Among the other atmospheric agents with potential influence on precipitation, potassium appears to have a meaningful role. Some biogenic emissions of potassium, especially around the Amazon, can act as catalysts for the beginning of organic material condensation.55 However, this role seems to ebb, as the potassium mass fraction drops when the condensation rate increases.55 This secondary role of potassium, as well as the role of bacteria, may signal an important element of why past cloud seeding experiments have not achieved the hypothesized expectations.

The lack of natural bacterial input into higher cloud formations leads to an interesting question: what would happen if IN-active bacteria like P. syringae were released via plane or another high-altitude method, producing a higher concentration of bacteria in these higher-altitude cloud formations? While typical cloud formation involves vapor saturation due to air cooling and/or increased vapor concentration, an increased IN-active bacteria concentration could also speed cloud formation as well as precipitation probability.

Interestingly, in past cloud seeding experiments orographic clouds appear to be more sensitive to purposeful seeding than other cloud formations, largely because of the shorter residence times of cloud droplets.56,57 One of the positive elements of seeding appears to be that increased precipitation in the target area does not reduce the level of precipitation in surrounding areas, including those beyond the target area. In fact, there appears to be a net increase (5-15%) among all areas regardless of the location of seeding.58 The previous presumption of loss appears to have been based on randomized and not properly controlled seeding experiments.58

The idea of introducing increased concentrations of IN-active bacteria is an interesting one if it can increase the probability of precipitation. Of course, possible negatives must be considered for such an introduction. The chief negative that could be associated with increasing a bacterium like P. syringae would be the possibility of more infection of certain types of plants. The frost mechanism of P. syringae is a minor concern because most of the seeding would be carried out between late spring and early fall, when night-time temperatures should not be cold enough to induce freezing. Sabotaging the type III secretion system in P. syringae via some form of genetic manipulation should reduce, if not eliminate, the plant invasion potential. Obviously, controlled laboratory tests should be conducted to ensure a high probability of invasion neutralization success before any controlled and limited field tests are conducted. If the use of living bacteria proves too costly, exploration of simply using the key membrane protein is another possible avenue of study.

Overall, the simple fact is that due to global warming, global precipitation patterns will change dramatically. The forerunner of these changes can already be seen in the state of California, with no reasonable expectation of new significant levels of rainfall in sight. While other potable water options are available, like desalination, the level of infrastructure required to divert these new sources from origin points to usage points will be costly, and these processes have significant detrimental byproducts. If precipitation probabilities can be safely increased through new cloud seeding strategies, like the inclusion of IN-active bacteria, it could go a long way toward combating some of the negative effects of global warming while the causes of global warming itself are mitigated.



Citations –

1. Zuberi, B, et al. “Heterogeneous nucleation of ice in (NH4)2SO4-H2O particles with mineral dust immersions.” Geophys. Res. Lett. 2002. 29(10):1504.

2. Hung, H, Malinowski, A, and Martin, S. “Kinetics of heterogeneous ice nucleation on the surfaces of mineral dust cores inserted into aqueous ammonium sulfate particles.” J. Phys. Chem. 2003. 107(9):1296-1306.

3. Lohmann, U. “Aerosol effects on clouds and climate.” Space Sci. Rev. 2006. 125:129-137.

4. Hartmann, S, et al. “Homogeneous and heterogeneous ice nucleation at LACIS: operating principle and theoretical studies.” Atmos. Chem. Phys. 2011. 11:1753-1767.

5. Cantrell, W, and Heymsfield, A. “Production of ice in tropospheric clouds. A review.” Bull. Am. Meteorol. Soc. 2005. 86(6):795-807.

6. Riechers, B, et al. “The homogeneous ice nucleation rate of water droplets produced in a microfluidic device and the role of temperature uncertainty.” Phys. Chem. Chem. Phys. 2013. 15(16):5873-5887.

7. Cziczo, D, et al. “Clarifying the dominant sources and mechanisms of cirrus cloud formation.” Science. 2013. 340(6138):1320-1324.

8. Pruppacher, H, and Klett, J. “Microphysics of clouds and precipitation.” Kluwer Academic: Dordrecht. Ed. 2, 1997. pp. 309-354.

9. Lance, S, et al. “Cloud condensation nuclei as a modulator of ice processes in Arctic mixed-phase clouds.” Atmos. Chem. Phys. 2011. 11:8003-8015.

10. Hoose, C, and Mohler, O. “Heterogeneous ice nucleation on atmospheric aerosols: a review of results from laboratory experiments.” Atmos. Chem. Phys. 2012. 12:9817-9854.

11. Abbatt, J, et al. “Solid ammonium sulfate aerosols as ice nuclei: a pathway for cirrus cloud formation.” Science. 2006. 313:1770-1773.

12. Murray, B, et al. “Kinetics of the homogeneous freezing of water.” Phys. Chem. Chem. Phys. 2010. 12:10380-10387.

13. Chang, H, et al. “Phase transitions in emulsified HNO3/H2O and HNO3/H2SO4/H2O solutions.” J. Phys. Chem. 1999. 103:2673-2679.

14. Hobbs, P, and Rangno, A. “Rapid development of ice particle concentrations in small, polar maritime cumuliform clouds.” J. Atmos. Sci. 1990. 47:2710-2722.

15. Sun, J, et al. “Mystery of ice multiplication in warm-based precipitating shallow cumulus clouds.” Geophysical Research Letters. 2010. 37:L10802.

16. Hallett, J, and Mossop, S. “Production of secondary ice particles during the riming process.” Nature. 1974. 249:26-28.

17. Mossop, S. “Secondary ice particle production during rime growth: the effect of drop size distribution and rimer velocity.” Q. J. R. Meteorol. Soc. 1985. 111:1113-1124.

18. Mason, B. “The rapid glaciation of slightly supercooled cumulus clouds.” Q. J. R. Meteorol. Soc. 1996. 122:357-365.

19. Rangno, A, and Hobbs, P. “Microstructures and precipitation development in cumulus and small cumulonimbus clouds over the warm pool of the tropical Pacific Ocean.” Q. J. R. Meteorol. Soc. 2005. 131:639-673.

20. Phillips, V, et al. “The glaciation of a cumulus cloud over New Mexico.” Q. J. R. Meteorol. Soc. 2001. 127:1513-1534.

21. Karydis, V, et al. “On the effect of dust particles on global cloud condensation nuclei and cloud droplet number.” J. Geophys. Res. 2011. 116:D23204.

22. Connolly, P, et al. “Studies of heterogeneous freezing by three different desert dust samples.” Atmos. Chem. Phys. 2009. 9:2805-2824.

23. Lynn, B, et al. “Effects of aerosols on precipitation from orographic clouds.” J. Geophys. Res. 2007. 112:D10225.

24. Jirak, I, and Cotton, W. “Effect of air pollution on precipitation along the Front Range of the Rocky Mountains.” J. Appl. Meteor. Climatol. 2006. 45:236-245.

25. Fan, J, et al. “Aerosol impacts on California winter clouds and precipitation during CalWater 2011: local pollution versus long-range transported dust.” Atmos. Chem. Phys. 2014. 14:81-101.

26. Gorbunov, B, et al. “Ice nucleation on soot particles.” J. Aerosol Sci. 2001. 32(2):199-215.

27. Kirkevag, A, et al. “Aerosol-climate interactions in the Norwegian Earth System Model – NorESM.” Geosci. Model Dev. 2013. 6:207-244.

28. Hoose, C, Kristjansson, J, and Burrows, S. “How important is biological ice nucleation in clouds on a global scale?” Environ. Res. Lett. 2010. 5:024009.

29. Lohmann, U. “A glaciation indirect aerosol effect caused by soot aerosols.” Geophys. Res. Lett. 2002. 29:11.1-11.4.

30. Koop, T, et al. “Water activity as the determinant for homogeneous ice nucleation in aqueous solutions.” Nature. 2000. 406:611-614.

31. Li, Z, et al. “Long-term impacts of aerosols on the vertical development of clouds and precipitation.” Nature Geoscience. 2011. doi:10.1038/NGEO1313.

32. Zender, C, Miller, R, and Tegen, I. “Quantifying mineral dust mass budgets: terminology, constraints, and current estimates.” Eos. Trans. Am. Geophys. Union. 2004. 85:509-512.

33. Forster, P, et al. “Changes in atmospheric constituents and in radiative forcing.” In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. 2007.

34. O’Sullivan, D, et al. “Ice nucleation by fertile soil dusts: relative importance of mineral and biogenic components.” Atmos. Chem. Phys. 2014. 14:1853-1867.

35. Murray, B, et al. “Ice nucleation by particles immersed in supercooled cloud droplets.” Chem. Soc. Rev. 2012. 41:6519-6554.

36. Christner, B, et al. “Geographic, seasonal, and precipitation chemistry influence on the abundance and activity of biological ice nucleators in rain and snow.” PNAS. 2008. 105:18854. doi:10.1073/pnas.0809816105.

37. Christner, B, et al. “Ubiquity of biological ice nucleators in snowfall.” Science. 2008. 319:1214.

38. Stephanie, D, and Waturangi, D. “Distribution of ice nucleation-active (INA) bacteria from rainwater and air.” HAYATI Journal of Biosciences. 2011. 18:108-112.

39. Vaitilingom, M, et al. “Long-term features of cloud microbiology at the puy de Dome (France).” Atmos. Environ. 2012. 56:88-100.

40. Cochet, N, and Widehem, P. “Ice crystallization by Pseudomonas syringae.” Appl. Microbiol. Biotechnol. 2000. 54:153-161.

41. Heymsfield, A, et al. “Upper-tropospheric relative humidity observations and implications for cirrus ice nucleation.” Geophys. Res. Lett. 1998. 25:1343-1346.

42. Twohy, C, and Poellot, M. “Chemical characteristics of ice residual nuclei in anvil cirrus clouds: implications for ice formation processes.” Atmos. Chem. Phys. 2005. 5:2289-2297.

43. Joly, M, et al. “Ice nucleation activity of bacteria isolated from cloud water.” Atmos. Environ. 2013. 70:392-400.

44. Attard, E, et al. “Effects of atmospheric conditions on ice nucleation activity of Pseudomonas.” Atmos. Chem. Phys. 2012. 12:10667-10677.

45. Kawahara, H, Tanaka, Y, and Obata, H. “Isolation and characterization of a novel ice-nucleating bacterium, Pseudomonas, which has stable activity in acidic solution.” Biosci. Biotechnol. Biochem. 1995. 59:1528-1532.

46. Kozloff, L, Turner, M, and Arellano, F. “Formation of bacterial membrane ice-nucleating lipoglycoprotein complexes.” J. Bacteriol. 1991. 173:6528-6536.

47. Pratt, K, et al. “In-situ detection of biological particles in high altitude dust-influenced ice clouds.” Nature Geoscience. 2009. 2: doi:10.1038/ngeo521.

48. Prenni, A, et al. “Relative roles of biogenic emissions and Saharan dust as ice nuclei in the Amazon basin.” Nat. Geosci. 2009. 2:402-405.

49. Phillips, V, et al. “Potential impacts from biological aerosols on ensembles of continental clouds simulated numerically.” Biogeosciences. 2009. 6:987-1014.

50. Burrows, S, et al. “Bacteria in the global atmosphere – Part 1: review and synthesis of literature data for different ecosystems.” Atmos. Chem. Phys. 2009. 9:9263-9280.

51. Burrows, S, et al. “Bacteria in the global atmosphere – Part 2: modeling of emissions and transport between different ecosystems.” Atmos. Chem. Phys. 2009. 9:9281-9297.

52. Despres, V, et al. “Primary biological aerosol particles in the atmosphere: a review.” Tellus B. 2012. 64:349-384.

53. Amato, P, et al. “Survival and ice nucleation activity of bacteria as aerosols in a cloud simulation chamber.” Atmos. Chem. Phys. Discuss. 2015. 15:4055-4082.

54. Stopelli, E, et al. “Freezing nucleation apparatus puts new slant on study of biological ice nucleators in precipitation.” Atmos. Meas. Tech. 2014. 7:129-134.

55. Pohlker, C, et al. “Biogenic potassium salt particles as seeds for secondary organic aerosol in the Amazon.” Science. 2012. 337:1075-1078.

56. Givati, A, and Rosenfeld, D. “Separation between cloud-seeding and air-pollution effects.” J. Appl. Meteorol. 2005. 44:1298-1314.

57. Givati, A, et al. “The Precipitation Enhancement Project: Israel-4 Experiment.” The Water Authority, State of Israel. 2013. pp. 55.

58. DeFelice, T, et al. “Extra area effects of cloud seeding – an updated assessment.” Atmospheric Research. 2014. 135-136:193-203.

Wednesday, April 8, 2015

Is it time to administer compulsory voting in the United States?

When looking at voting rolls, regardless of the election period or environment, highly educated middle-aged working men are the most likely individuals to vote, with participation declining at various rates among other demographics.1,2 This decline is meaningful, for voting in a democracy, either direct or indirect, is a direct representation of political power and influence. In addition, as individuals become poorer and less educated their voting probability decreases.1 Not surprisingly, research has demonstrated that politicians target their messages and actions toward those demographics with the highest voting probabilities, regardless of whether those actions will produce the best outcomes for society in general.3 In some contexts politicians view their “constituents” as only those individuals who vote. Therefore, politicians will commonly ignore the concerns and problems of individuals in demographics less likely to vote, producing an environment that increases the probability of both income and social stratification.

To combat this aspect of inequality, some individuals theorize that the United States should adopt compulsory (mandatory) voting over the current voluntary system. Compulsory voting is certainly not a new or exotic idea, as 22 countries already have some form of compulsory system, including Australia and most of South America (assurance for those who think such a system exists only in third-world countries). Also, compulsory voting has been shown to shift public policy closer to the preferences of citizens, alleviating the divergence between citizen and constituent.4 So with the legitimacy of compulsory voting as an idea on sound footing, the question is: should the United States change its current voting system as well?

Some voices may immediately suggest that the idea of compulsory voting is a direct challenge to individual freedom and liberty and thus should be rejected without discussion; these voices belong to individuals who are either overreacting or foolish, for the real issue is how one defines the role of voting in a society. This role can be defined as either a duty or a power. If defined as a duty, then voting is regarded as a civic responsibility that one should engage in to justify his/her citizenship and contribution to society; therefore, compulsory voting should be viewed as reasonable and appropriate, including any penalties associated with not voting. If defined as a power, then voting is regarded as a means by which citizens can exert their influence on society, but voting should be regarded as only an opportunity, not a requirement, to express this power. However, it is important to note that in a voluntary voting structure, if one chooses not to vote then one has no legitimacy in complaining about the current state of society.

When taking measure of most public discourse on this issue, it appears that a majority would classify voting under the latter definition: a mandatory opportunity that a democracy must offer its citizens, where participation is only voluntary. Unfortunately for those who hold this view, such a belief is not so straightforward. A number of people appear to believe in the philosophy that individual decisions are made in a vacuum, where a decision affects only that individual and not society as a whole. This mindset has produced the idea of a separation from society. For example, some individuals have argued that if one does not ride public transit buses then that individual’s taxes should not go toward supporting the operation of buses. Clearly this makes little sense on a social level; if such an idea were expanded beyond such a simple measure, which some would argue it should be, and applied to society as a whole, then society would become extraordinarily complex and in general cease to function effectively, producing a net negative for all parties. Therefore, one must weigh the voluntary nature of voting against the good of society.

As mentioned above, overall voting rates have fallen steadily and significantly over the last half century among all demographics sans the elderly (65+).5,6 Therefore, the possibility certainly exists that the United States’ democracy could become an oligarchy, producing a singular path and set of cultural values. Regardless of one’s political leanings, there exists an extremely high probability that an oligarchy will be inherently negative for society, producing significant societal disruption and inefficiency. With this potential reality, the idea of voluntary voting could be dismissed in favor of compulsory voting under the idea of “for the good of society”. Understand that this mindset is not designed to produce a certain cultural/societal outcome, but instead to ensure sufficient representation. Basically, it is akin to forcing Teams A, B and C to play Team D in a given sporting event. Forcing the game does not mean that Team D will lose; it just means that Team D will not win by forfeit.

Furthermore, for those who argue that voting should be voluntary, it should follow that individuals be given every convenience and opportunity to vote. Unfortunately, the disheartening reality is that over the last decade certain actions have been taken in multiple states to increase the probability that citizens are denied the opportunity to vote, or at least are given unjustifiable obstacles to overcome before having the opportunity. These actions raise the question: how can one support the idea of voting as a voluntary expression of citizen power that is mandated by the government when the government and other private agencies work to limit the ability of citizens to vote? One of the guarantees of compulsory voting is that states and the Federal government would not have the ability to produce these additional obstacles to the process of voting and must produce an effective means to allow citizens the opportunity to vote. The reality of the situation is that unless government, especially at the state level, can demonstrate an ability to produce appropriate voting opportunity, compulsory voting may be necessary to ensure democracy in the United States.

For those who believe that applying compulsory voting is a ploy to increase the power base of one particular political viewpoint, current research demonstrates the uncertainty of this idea. Basically, there is no significant difference between the preferences of voters and non-voters in already-existing compulsory systems.7,8 In the United States it is thought that non-voters may lean slightly Democratic, but there is no certainty in this analysis, for it is based on extrapolation from polling information, and polling in general remains a foolish way of producing information, as it is unreasonable to suggest that the views of five thousand people can properly characterize the views of fifty million. Therefore, at the current time there is no rational reason to conclude that compulsory voting in the United States would produce a significant power shift for one party over all of the others. Even if there were evidence of such a shift, what would be the problem? A democracy is supposed to be rule by the majority.

Even if compulsory voting were put into practice, certain issues of access, functionality and penalties must be considered. The issue of access is important, for if government is going to demand that all citizens vote, then it must make arrangements so that all citizens have appropriate opportunity to do so. While a number of individuals have championed online voting as a means of producing ease of access, such claims raise equality and security concerns. One hears too frequently about individuals or groups hacking corporations and acquiring personal and/or credit information to have sufficient confidence in the security of online voting, and despite the mindset of certain technophiles, not everyone has a personal at-home Internet connection or another means of access to the Internet that could effectively accommodate voting (i.e. online access at a public library).

Therefore, with this uncertainty relegating Internet voting to nothing more than a luxury or an advanced supplemental medium, local and state governments must produce sufficient plans of action for in-person polling stations and voting by mail. Some could argue that in lieu of Internet voting, voting by mail is the next best thing. While having the option of voting by mail would be an important access element, eliminating in-person polling stations would not be the correct response. In past opinion polls, having the option to vote by mail is consistently favored by wide margins over elections that are run entirely by mail.9,10

In a compulsory system, exclusively relying on mail would also be extremely burdensome for the homeless. In addition, some people are less confident that their vote will be counted when voting is performed through the mail.11 The Washington State system appears to be a good starting point for a national “vote by mail” system: the ballot is sent weeks ahead of time, allowing voters ample time to inform themselves regarding the important issues and cast their ballot when convenient rather than under a specific time crunch. However, in-person stations remain available for those uncomfortable with or unable to cast their vote by mail. Whether early in-person voting would still be required under a compulsory system is unknown, but erring on the side of caution to ensure sufficient voting opportunity in the first few elections, counties should be expected to offer early in-person voting for at least two days prior to Election Day.

From a functional standpoint, one must address the mindset of those individuals who have previously elected not to vote. Once those with access issues are set aside, the principal reason individuals do not vote is a nihilistic mindset, i.e. they do not believe that their vote will matter. A similar mindset is that of the “forsaken voter”. An example of this mindset is seen in one of the major complaints of blacks and environmentalists: that the Democratic Party does not respect their opinions because Democratic leadership believes that these groups have nowhere else to go if they want to advance their political beliefs; they can’t vote for a Republican because that would be self-defeating, if they are real Democrats, and they can’t vote for a Green party member or another third party because of the infinitesimal probability that the person would actually win. Therefore, both types of individuals can feel that their “expression of power” through the vote is pointless.

So the chief question on this issue becomes: how does one manage those individuals who in the past decided not to vote because of the belief that it did not matter, when they are now forced to vote or accept a penalty? Various other countries handle this issue in a straightforward way by allowing voters to cast a vote for “none of the above”, which is thought to represent the dissatisfaction of that voter with the existing candidates. While this option is viable, it does not appear to be meaningful. On its face it can easily be argued that casting a vote for “none of the above” is pointless because it defeats the point of compulsory voting. What is the point of an individual spending any financial or opportunity cost voting if one is not going to cast a meaningful vote? Note that allowing a voter to merely leave a ballot blank is akin to offering a “none of the above” option.

Individuals in favor of this option would argue that casting a vote for “none of the above” is a demonstration of dissatisfaction with the existing candidates and their respective platforms. Under this mindset a stronger message is sent to the political establishment by voting “none of the above” versus voting for “the lesser of two evils”. It could be argued that eliminating this option would be detrimental to producing efficiency in democracy because it would restrict choice.

The counterargument to this point is that while it is hypothetically a valid argument, in actual practice the problem with abstention is that it does not send that dissatisfied message or any meaningful message beyond a potential sound bite for the given election season. For example, in the current election environment, even if 75% of the citizenry abstained from an election, those abstentions would not matter because the election would be decided on the votes of the 25% who did vote. There is no rule in U.S. election politics that voids an election if less than x% of the potential electorate actually votes; thus abstaining does not send a message because abstention produces no consequence for the candidates or the system. In essence, no one in power would care that x% of the electorate was “dissatisfied” with the existing candidate pool. Realistically, a “none of the above” vote will not demonstrate meaningful dissatisfaction with the available candidates, but simply disrespect for the process.

Furthermore, there is a legitimate question as to whether the administration of compulsory voting will lead to greater feelings of disillusionment with voting in general, because with more people voting each individual vote has less power/influence. On its face, whether this change is a significant psychological issue will more than likely be influenced entirely by which candidate wins and the size of the victory. In this structure there are four possible outcomes for individual A and his vote: 1) votes for the winner in a landslide; 2) votes for the winner in a close result; 3) votes for the loser in a close result; 4) votes for the loser in a landslide.

Of these four possible outcomes, the only one that could increase voter dissatisfaction is the third, where the preferred candidate loses by a small amount. In this situation the voter may interpret compulsory voting as costing their candidate the election, naturally presuming that more “forced” participants voted for the opponent, swaying the final result. However, in all other situations compulsory voting should have no effect or a positive effect on the viewpoint of voting. In the first outcome, individual A should be inspired by compulsory voting in witnessing how many individuals agree with his viewpoint and the candidate who supports it. In the second outcome, individual A could reason that compulsory voting was the reason for victory (the opposite rationale of the third outcome, in that the “forced” participants swayed the final result in his favor). Finally, in the fourth outcome there should be no change in opinion because the candidate lost big and would have lost big even if compulsory voting did not exist.

Some individuals believe that compulsory voting will have a positive impact on non-voting forms of political involvement and understanding. While this belief may be true, the overall ability to produce this result would more than likely be marginalized by allowing a simple “none of the above” option, for it allows an individual to put no thought into the process at all and simply use the “I don’t care” option. If the idea of compulsory voting is to maximize the potential political power of the electorate, then the process should not allow for the ability to so easily circumvent that idea. In addition, any increased probability of political engagement must involve a change in the general human tendency toward “blind” rejection of ideas that run counter to personal beliefs; if one is unable or unwilling to abandon incorrect opinions when faced with their critical flaws, then increased political engagement is not positive and could very well be a net negative.

There is a valid argument to be made in favor of abstention on the basis that no individual should feel obligated or forced to vote for a particular individual or group just because voting is required. How can this conflict between the negative of allowing a “none of the above” option and forcing an undesired vote be resolved? One possibility is that voting individuals who do not prefer any of the candidates could write a brief explanation (1-2 sentences) regarding why he/she does not want to vote for the available candidates for a given elected position. This way the individual would be successfully abstaining while also demonstrating thoughtful respect for the voting process and increasing the slim probability that the dissatisfaction would actually be noted. Incidentally, it would be preferable if these individuals expressed this dissatisfaction to potential third-party representatives, so that an individual they would feel comfortable voting for could properly enter the race.

The aforementioned society-dissociated mindset can be detrimental to a compulsory voting scheme because, without a reasonable probability that these “new” voters are properly informed, their votes will not properly convey their own opinions or the representative opinion of society. For example, suppose Apartment Complex A is having a vote among its 50 residents on whether to establish a new, more restrictive noise ordinance. 10 residents are opposed to the new ordinance because they commonly have parties that involve loud music. 20 residents are in favor of the new ordinance because they are frequently bothered by the noise from these parties. The final 20 residents have no strong opinions on the vote and are not aware of the grievances of the 20 pro-ordinance residents because they are far enough away that they do not experience the loud music. Under these conditions, the final 20 residents should abstain because of their lack of interest and information.

However, in a compulsory voting environment it is more than likely that they will vote against the ordinance for reasons of either simplicity or avoiding future restrictions on themselves. Therefore, these 20 “neutral and uninformed” voters could improperly swing the results of the vote because they do not understand how the outcome affects all residents of Apartment Complex A; the arithmetic is laid out below. So if compulsory voting is applied, making sure that everyone has access to the necessary resources to properly inform themselves about the issues is critical. Again, it is fine if these last 20 residents vote against the ordinance once properly informed about how it will affect all parties; it is the ignorance that must be defeated.
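
As a quick illustration of that swing, here is the example's arithmetic laid out in a short sketch; the only assumption is the one stated above, that the uninformed residents default to voting no.

# Simple arithmetic on the apartment example above, showing how compulsory
# participation by the 20 uninformed residents can flip the outcome
# relative to voluntary voting by the 30 informed residents.

informed_for, informed_against = 20, 10
uninformed = 20

voluntary = "passes" if informed_for > informed_against else "fails"
print(f"voluntary vote: {informed_for}-{informed_against}, ordinance {voluntary}")

# Compulsory case, assuming (as argued above) the uninformed default to "no"
total_against = informed_against + uninformed
compulsory = "passes" if informed_for > total_against else "fails"
print(f"compulsory vote: {informed_for}-{total_against}, ordinance {compulsory}")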

With regard to penalties, most countries practicing compulsory voting administer a small financial penalty when an individual fails to vote. Interestingly, this penalty is typically equal to or less than a standard parking violation, which does not send a strong message that voting is important. Clearly, administering large fines would be questionable, akin to issuing a $10,000 speeding ticket, making such a strategy difficult. A better means of “encouraging” compliance with compulsory voting would be to administer time penalties that have direct societal duty elements, for most people tend to value their time over small, generally meaningless amounts of money. For example, failure to vote could be met with community service penalties or an increased probability of jury selection. Regarding possible exemptions from voting: realistically, if the proper access systems are developed, which they should be, then few possibilities remain. One legitimate exemption could be on religious grounds, i.e. for Jehovah’s Witnesses. Another exemption could be given to those suffering from mental illness or those at an advanced age (70+).

One potential side problem in a compulsory voting environment is whether individuals will be more inclined to buy/sell votes. With a mandate that every citizen vote, the probability of voter fraud should still be low due to proper checks and security measures. However, what cannot be so easily neutralized is individuals selling their votes. Selling votes may not be a large issue now, as the rationale behind its absence is the lower voter turnout; thus groups merely have to rally the partisans to drive their chances of winning. Under a mandate there would be a much larger pool of potential voters that would be more difficult to directly persuade, thus shortcuts could be taken. It is also important to note that there is a reasonable probability that a number of these “new” voters could be politically apathetic enough to sell their votes. Fortunately, due to the privacy associated with voting, it would be extremely difficult for the “vote buyer” to confirm that the “vote seller” actually voted the way he/she was instructed; without the ability to confirm both sides of the exchange, vote buying and selling should be limited, if it occurs at all. Also, there has been no widespread vote buying in other compulsory voting countries.

The idea of compulsory voting should be an irrelevant one, for all citizens should be interested enough in the development of society to at least spend a few moments understanding the pertinent issues and then vote on their beliefs. However, this mindset is far from universal. While this reality is unfortunate, it could be dismissed as regretful, but not critical, if not for two salient points. First, and most important currently, certain agencies are actively attempting to prevent certain groups of individuals from voting by producing unnecessary obstacles. These actions directly threaten the idea of voluntary voting as a sufficient means for citizens to express their power in a democracy. Second, there are times when individual privileges need to be adjusted for the societal good, and the preservation of a democracy over an oligarchy certainly meets that condition. Overall, the idea of compulsory voting is not one that aims to force democracy upon the citizenry, but instead to protect democracy for the citizenry.


Citations –

1. Kittelson, A. Book chapter in: The Politics of Democratic Inclusion. Temple University Press, 2005.

2. Blais, A, Gidengil, E, and Nevitte, N. “Where does turnout decline come from?.” European journal of political research. 2004. 43(2):221-236.

3. Verba, S. “Would the dream of political equality turn out to be a nightmare?.” Perspective on Politics. 2003. 1(4):663-679.

4. Fowler, A, “Electoral and policy consequences of voter turnout: evidence from compulsory voting in Australia.” Quarterly Journal of Political Science. 2013. 8:159-182.

5. U.S. Census Bureau, Current Population Survey, November 2008 and earlier reports. Internet release data: July 2009. Table A-1. Reported Voting and Registration by Race, Hispanic Origin, Sex and Age Groups: November 1964 to 2008.

6. U.S. Census Bureau, Current Population Survey, November 2008 and earlier reports. Internet release data: July 2009. Table A-2. Reported Voting and Registration by Region, Educational Attainment and Labor Force for the Population 18 and Over: November 1964 to 2008.

7. Citrin, J, Schickler, E, and Sides, J. “What if everyone voted? Simulating the impact of increased turnout in senate elections.” American Journal of Political Science. 2003. 47(1):75-90.

8. Pettersen, P, and Rose, L. “The dog that didn’t bark: would increased electoral turnout make a difference?” Electoral Studies. 2007. 26(3):574-588.

9. Alvarez, R, et al. “The 2008 Survey of the Performance of American Elections.” Washington, DC: Pew Center on the States. 2009.

10. Milyo, J, Konisky, D, and Richardson, L. “What determines public approval of voting reforms?” Paper presented at the Annual Meeting of the American Political Science Association, Toronto, Canada. 2009.

11. Alvarez, R, Hall, T, and Llewellyn, M. “Are Americans confident their ballots are counted?” Journal of Politics. 2008. 70:754-768.