Saturday, July 25, 2015

Should life in prison really be life in prison?

When one considers controversy in the criminal justice system, one of two issues immediately comes to mind: 1) the death penalty, where effective arguments exist on both the pro and con sides; 2) racism in the criminal justice system, where the debate is typically over-emotional and illogical on both sides, especially from those complaining about the extent of racism. However, the widespread focus on these two issues draws attention away from other meaningful ones. One interesting issue that receives less attention is the question of what justifies sentencing someone to life in prison without the possibility of parole.

Not surprisingly, there are a number of people who believe the judicial system should not have the capacity to hand down a sentence of “life without parole” (lwop). An aspect of this argument has been bolstered by three separate United States Supreme Court rulings, Roper v. Simmons, Graham v. Florida, and Miller v. Alabama, which together held that it is unconstitutional to sentence juveniles to the death penalty or to a mandatory life-without-parole sentence regardless of the type of crime. Emboldened by these rulings, a number of individuals have attempted to advance this position further, seeking either to eliminate lwop sentences altogether or at least to expand the breadth of these rulings to young adults, arguing that a lwop sentence is a de facto death sentence.

Furthermore, the argument goes, the general nature of a lwop sentence is not based on rehabilitation because the individual in question is never getting out of prison; it is a mixture of punishment and deterrence aimed at other potential actors. However, the influence of this message is less relatable to juveniles and young adults due to their incomplete emotional and mental development. Proponents of the above position believe that time is the most relevant factor in “decriminalizing” individuals, for as the frontal lobes mature and, in men, testosterone levels decline, the probability of aggressive and impulsive behavior drops. Basically, time is a superior method of reducing crime probability versus hoping young people view individuals similar to themselves incarcerated for the rest of their lives and come to the conclusion “I better not do that”.

In fact some may simply come to the conclusion “I better not get caught”, reflecting an age-old observation about crime: the perceived certainty of punishment matters much more than its potential severity when an individual is considering the commission of a crime. Therefore, based on this reasoning, these individuals argue that sentencing people, especially the young, to life in prison without parole serves neither society nor the individual in question.

Some have also argued that the deterrence factor does nothing significant to limit crimes of passion, for rarely do individuals calculate the benefits and consequences before engaging in an emotionally driven response. However, this argument is rather weak, for most emotional actions do not typically produce a crime that will result in a lwop sentence upon conviction. Understand that lwop sentences rarely occur outside of homicides, most notably a Murder 1 conviction, which seldom has acute emotional components, even in felony murder cases. The general prerequisites for charging an individual with Murder 1 are 1) premeditation; 2) willfulness; and 3) deliberation (typically with malice aforethought).

The above argument regarding passion and emotion creates concern in that the chief problem with attempting to expand the “lack of maturity” argument to lwop sentences is that the crimes warranting lwop typically do not involve lack of maturity or emotional development as a meaningful factor. Basically, regardless of the level of social, mental or emotional development, any individual without some form of brain damage should acknowledge that the elements involved in the crimes that warrant such a sentence (vicious and premeditated homicides, or homicides in the course of committing other high-level felonies like armed robbery, kidnapping, etc.) are against the law and that the consequences of their commission will be severe. One does not need to be a fully matured and emotionally stable 26-year-old to know that shooting someone in the chest with a .44 is not a good thing and will be harshly punished. One of the chief reasons for treating juveniles differently with respect to the death penalty versus lwop sentences is that the finality of the death penalty eliminates the ability to overturn mistakes in the judicial process.

Another aspect of weighing lwop sentences for young single-count offenders is whether the elimination of these sentences would serve the concept of justice. For example, if 20-year-old person A murders 20-year-old person B with all of the necessary elements to justify a Murder 1 conviction, what type of sentence would represent justice? Realistically it can be argued that person B was robbed of at least 40 years of life, if not more, so should person A pay in a year-for-year context? If person A is only incarcerated for 20 years, is that justice? Basically, what type of punishment represents justice when one person blatantly takes the life of another?

Some would argue that keeping Person A in prison for the rest of his/her life is a miscarriage of justice because ending Person A’s life on de facto grounds does not serve the public interest or the interest of justice; it simply steals an additional life, ruining two lives instead of one. However, the counterargument is that Person A can still have productive and positive experiences despite being in prison, something that Person B can no longer have at all.

It could be argued that the deterministic aspect of “without parole” is the real problem, for individuals who are sentenced to life with the possibility of parole are not guaranteed to receive parole. Therefore, the elimination of this mandate would allow experts and individuals with intimate knowledge of specific prisoners to judge whether or not an individual remained a threat to society and whether justice had been done. Individuals who favor judicial discretion in general would agree with this position, for the two views are cut from a similar mold.

Of course the counter-position is that there are a number of individuals who have received parole after committing violent crimes, i.e. been judged no longer a threat to society, and soon after their release committed similar or worse crimes resulting in their re-arrest and incarceration. Therefore, eliminating the very idea of life without parole raises a question of certainty. Should a population of prisoners who have “turned their lives around” be denied the possibility of parole in order to prevent another population of prisoners from manipulating such a system to acquire release and the ability to continue their criminal enterprise?

Another factor for consideration is how influential the threat of a lwop sentence is in “convincing” an individual to take a plea bargain, thus saving the state or Federal government the frequently significant money, time and other resources required to prosecute a murder case. If this influence is meaningful, then the loss of lwop sentences could result in a greater probability of delayed or even lost justice, for the court system would have to deal with a greater influx of cases, creating a backlog.

One of the more widely known elements supporting the elimination of “without parole” conditions on sentences is the belief that the prison system can produce sufficient rehabilitation. While existing track records are mixed in this regard, evidence does exist that prisons provide a means for individuals to “get it” and turn their lives around. Unfortunately for supporters of the various positions surrounding the elimination/reduction of sentences, there is another important element in this process which, while it receives lip service now and again, does not receive any significant level of public or political support: how to reincorporate criminals, especially those who have been incarcerated for a long period of time, back into the economic fabric of society.

This question is especially troublesome now, for while it has almost always been difficult for criminals to re-acclimate themselves into society on some level, as society currently stands there are a number of individuals without criminal records who have not been effectively incorporated into the economic framework and who will be competing with these newly released criminals. Without the ability to incorporate newly released criminals, especially those serving long sentences for violent crimes, the probability of recidivism is high, regardless of age and emotional/mental maturity. Sadly this is a question that proponents of eliminating lwop sentences largely ignore, kicking the proverbial can to the general “prison reform” crowd. This behavior is questionable, for how can one in good conscience seek to eliminate “without parole” sentences, whether for juveniles only or entirely, without addressing this important question of economic incorporation? Some may argue that it is not fair to leave an individual in jail while this issue is addressed, but is it fair to society to release people who cannot be properly reintegrated?

The final major question regarding the elimination of “without parole” sentences is how to address the psychological impact of prison on an individual’s ability to live in general society. There is reason to believe that a number of inmates suffer from a form of institutionalization after a sufficient period of time in prison, which will negatively impact their ability to reintegrate successfully back into society.

One particular psychological change that could be significantly harmful to reintegration is the increased level of apathy, passivity, and isolation commonly seen with institutionalization.1 One of the more stereotypical, yet still true, “rules” of prison life is to stay invisible unless you are struggling for power; doing so means keeping your head down and your mouth shut. Unfortunately society has moved to a point where it almost exclusively prefers people to be loud and expressive; in fact it appears, at least in the manner of public notoriety, that the motor-mouthed, arrogant, frequently incorrect braggart is preferred over the stoic, well-meaning fact-giver. Basically, what is expected for “success” in prison life versus what is expected for “success” in “normal” life is largely contradictory. So how is this situation resolved? One could require that inmates released after long periods of incarceration receive psychological assistance from trained professionals, but who pays for this service?

Overall there are some important issues regarding the elimination of “without parole” qualifiers on sentences that go beyond simple age. The most noteworthy and important ones relate to the nature of justice, both in punishment and in how such a change would influence the courts; how long-term prisoners can be incorporated economically into a society that is leaving behind even non-prisoners at ever-increasing rates; and how the potential psychological changes born from institutionalization influence reintegration. Until satisfactory answers can be produced for at least these three questions, to say nothing of other smaller, more specific questions, the idea of eliminating “without parole” qualifiers in criminal sentencing seems inappropriate. Remember that individuals serving these sentences are not akin to those jailed for punching a guy in a bar for hitting on “his girl” or for dealing small quantities of marijuana without a license in a state where it is legal under state law; instead they were convicted of very serious crimes that almost always involved the loss of at least one other life.

Citations –

1. Johnson, M, and Rhodes, R. “Institutionalization: a theory of human behavior and the social environment.” Advances in Social Work. 2007. 8(1). 219-236.

One Sexual Offense Fits All?

It has been said that it is a “precept of justice that punishment for crime should be graduated and proportioned to [the] offense” (Weems v. United States). However, punishment for a crime is not exclusive to the domain of incarceration. For most criminals there is the social stigma of being a criminal, which significantly limits their economic, political and societal power and influence. In the case of individuals convicted of sexually based offenses this stigma is typically enhanced. While nothing can be done about the subjective stigmas assigned to criminals by other individuals regardless of the type of offense, when one looks at the administrative burdens applied to individuals convicted of sex offenses versus other types of crimes, including murder, one wonders whether or not such exclusive and additional punishment is a violation of the Eighth Amendment of the Constitution.

After the period of incarceration for a sex offender has concluded, the typical administrative burdens applied to that individual include restrictions on residency: most notably, they cannot reside within some fixed specified distance of common areas where children congregate, like schools, daycare centers, parks, bus stops, etc. In some situations, if such an area is constructed after the individual has established residency in a particular location, the individual will be forced to move (some states have grandfather clauses that waive the move; some do not). In addition, sex offenders must check in with local law enforcement when moving to a new address, changing employment, changing their legal name, etc., and depending on the state may have to reaffirm these notifications after a certain period of time. Finally, their names are listed on a public database for a period of time that may not be commensurate with their current relationship with their local environment. Basically, their name could still be on this list 8 years after the incident that resulted in their conviction and after moving to an entirely new community in which they have lived without incident.

To understand these administrative requirements one must attempt to understand their philosophical origins. Most sexually based crimes elicit a visceral and emotional reaction, typically leading to a characterization of repugnance that, strangely enough, at times exceeds the disgust one feels towards murder or other higher-level crimes. The original intent of the sex offender registration list appears born, at best, from a psychological compromise to provide a level of deterrence against recidivism by limiting the available opportunities that could lead the individual to repeat such criminal action, or at worst as an additional punitive measure because it was not legally viable to incarcerate such an individual for the period of time typically demanded/anticipated by the public in reaction to the crime.

Unfortunately this compromise has evolved into a “one size fits all” punishment, moving beyond the once-applied standard of judicial review and discretion. It tends to no longer take the nature of the sexual offense into consideration beyond broad “milestones”. For example, all would agree that there is a significant difference between a 19-year-old male having sex with a consenting 16-year-old female and a 29-year-old male raping a 16-year-old female via a drugged beverage. While these differences are certainly reflected in the incarceration portion of the punishment, they typically are not reflected in the administrative/societal portion of the punishment.

Basically, while both individuals from the above example are technically sex offenders, the fact is that in most situations the tiered structure is so broad in its administrative penalties that the level of judicial discretion is non-existent. In a sense the application of administrative punishment can be viewed as generally lazy, disinterested in determining the actual threat posed by the individual to the community and instead labeling all as viable and credible threats.

There are two pertinent court cases pertaining to the issue of sex offenses and the Eighth Amendment. First, in Graham v. Florida the United States Supreme Court adopted the position that non-capital sentences for minors, adding to the capital sentences addressed in Roper v. Simmons, could be found unconstitutional under a proportionality review. This proportionality review can fall within two general classifications: 1) challenges to the length of a sentence depending on the circumstances surrounding the case in question; 2) cases in which the Court implements the proportionality standard through certain categorical restrictions. The important element of Graham v. Florida with regard to the above topic is that it set the precedent that categorical Eighth Amendment proportionality reviews could be applied to non-capital offenses, moving beyond the idea that “death is different”.1

Second, in Ohio v. Blankenship the defendant claimed that his classification as a Tier II sex offender was cruel and unusual punishment. The crime in question was having a sexual relationship as a 21-year-old with a consenting 15-year-old, with full knowledge of her age, resulting in a conviction on a single count of unlawful sexual conduct. The claim was based on the administrative penalties associated with that classification (largely having to register as a sex offender for 25 years) in contrast to the threat he posed as a possible future repeat offender.

The Ohio Court of Appeals ruled against Blankenship, determining that existing legal remedies were not available because he was an adult when he committed the crime rather than a juvenile, thus a previous ruling (In re C.P., concerning juveniles) was not applicable, and that he was in fact a sex offender, thus the current legal structure in Ohio was applicable. Blankenship appealed to the Ohio Supreme Court, which heard arguments in early March 2015; as of this posting it appears that no ruling has been made regarding this case, but a number of individuals believe that the ruling could go either way. So currently, while it is legally and theoretically possible to find the administrative penalties associated with conviction as a sex offender unlawful via the 8th Amendment, no court has yet done so.

Some could argue that there is an important distinction in statutory rape cases between an individual who has accurate knowledge of the age of his/her sexual partner and one who has inaccurate knowledge through deception or misinformation. On this issue, however, the question of knowing culpability is irrelevant. For example, there is no meaningful legal difference between a 19-year-old having sex with a 15-year-old where both parties are fully aware of the age of the other and a 19-year-old having sex with a 15-year-old who has lied to the 19-year-old, claiming to be of the age of consent (18 years old).

Such a consideration would be akin to calibrating punishment based on whether or not an individual was aware that he/she was speeding. Whether or not the individual knows he/she is speeding is irrelevant to the fact that the individual was speeding and violating that particular law. Furthermore, the issue is not whether or not an individual who commits statutory rape or a similar low-level sex-based crime is a sex offender. By law the individual is a sex offender; the issue is assigning the appropriate punishment for the committed crime in all aspects, i.e. is it appropriate that an individual convicted of sexting receives the same administrative punishment as an individual convicted of rape?

An interesting point of fact pertaining to the validity of the administrative penalties associated with non-violent sex offenders is that the general recidivism rate for sex offenders has been demonstrated numerous times to be lower than that of any other crime except murder.2-3 An interesting point of contention could be made regarding this data between parties that agree with broad mandatory classifications and parties that disagree.

Proponents of the administrative penalties could argue that this lack of recidivism is due to the harsh administrative restrictions placed on sex offenders, heavily reducing the temptations and opportunities for recidivism. Opponents of these penalties could counter-argue that this lack of recidivism exists because most sex offenders are not sexual predators, but simply did something stupid early in their lives that got them labeled and convicted as sex offenders through some basic non-violent sex-related crime like sexting a consenting individual or statutory rape with a consenting partner. While the truth is unknown, the opponents are more likely correct than the proponents because some of these analyses cover time frames during which the harsher administrative penalties were not yet applicable.

An important element of whether or not the 8th Amendment can be applied to this particular issue, especially with regard to the sex offender registry, is whether registration is viewed as punitive or civil; a characterization as punitive should increase the probability that the 8th Amendment is relevant versus a civil characterization. In most cases it is difficult to argue that the registry is not punitive in nature given the administrative hurdles assigned to those on the list, especially the living restrictions. It stands to reason that if the only demand of the list were public access and an accurate name and address, then it would be more civil in nature; however, that is currently not the case.

Based on existing information it is difficult to argue that the sex offender registry serves an important role in protecting society from a large number of individuals convicted of sex offenses, because those individuals are not a threat to society. Furthermore, the additional elements of societal stigma and restriction of freedom produced through association with the list could constitute a disproportionate punitive response to the crime, especially when that association is not subject to judicial review but mandated by a state or the Federal government. For example, it could be argued successfully that for a vast majority of individuals convicted for the first time on a single count of a non-violent sexually based crime, registration as a sex offender is not appropriate and therefore could be appropriately challenged as a violation of the 8th Amendment.

An interesting side note is that defining mandatory registration as a sex offender as a violation of the 8th Amendment may be necessary to properly apply justice even if it is not legally appropriate. In short, associating this scale of punishment with the 8th Amendment may be the only way to give politicians the political cover they need to continue to publicly assert their “tough stance” against sex offenders of all shapes and sizes, while also having appropriate punitive punishment based on the type of sexual offense. Basically, while applying an analytical system of judgment regarding a sexual offender’s potential to “relapse” is logical and compliant with justice, forcing such a system on states through association with the 8th Amendment may be necessary due to political concerns.

However, while the courts have almost always been at the forefront of social change, would it be appropriate to make this association even if it were not valid? What type of slippery slope would that produce? On an even larger scale, what can be done in a democracy when the majority is not interested in changing its opinion regardless of any arguments counter to that opinion? Overall, when thinking from a non-emotional, logical perspective, mandatory registration for most single-count sex offenders appears inappropriate; not surprisingly, producing a path to legally vindicate that viewpoint is the more difficult problem.

Citations –

1. Shepard, R. “Does the punishment fit the crime? Applying Eighth Amendment proportionality analysis to Georgia’s sex offender registration statute and residency and employment restrictions for juvenile offenders.” Georgia State University Law Review. 2011. 28(2), Article 7. 529-557.

2. Bureau of Justice Statistics. Recidivism of Sex Offenders Released from Prison in 1994. November 2003.

3. U.S. Department of Justice. Criminal Offenders Statistics: Recidivism. Statistical information from the late 1990s and very early 2000s.

Tuesday, June 23, 2015

The Legitimacy of Holistic Admissions at U.S. Universities

With the competition for landing a quality job increasing with every passing year, acceptance into a high-quality university is viewed as essential to maximizing the probability of landing one of these jobs. However, in lockstep with the competition for quality jobs, the competition to gain entrance into those universities widely regarded as high quality has also increased. This competition has produced controversy surrounding the procedure by which applicants are admitted, creating a tug-of-war of sorts between various parties and their interests. One of the chief points of controversy is the validity of the “holistic” review process. In fact, a lawsuit filed against Harvard University by Students for Fair Admissions contends that holistic admission processes are inappropriately discriminatory and should be significantly clarified in their evaluation metrics beyond “whole person analysis”. Obviously a reading of the official complaint by Students for Fair Admissions divulges a harsher conclusion than the one above, but the sentiment above is better suited to producing a fairer admissions environment.

Proponents of the holistic method champion its multi-faceted analysis approach, in which a larger spectrum of an applicant’s qualifications for admission is considered beyond the traditional metrics (standardized test scores, grades and certain extracurricular activities), producing a fairer and more accurate admissions process. Opponents of the holistic method believe that it is commonly used, at best, to hide the admissions process behind a veil of ambiguity allowing universities to justify perplexing and arbitrary decisions, and at worst to legitimize a quota system where more qualified candidates are rejected in favor of under-qualified candidates to achieve diversity demographics in order to evade public scorn. Clearly, based on the perceived stakes, where getting into university A can set a person up for life while university B would create unnecessary hardships, the emotional aspect of this debate is high. Unfortunately this emotional aspect has produced an environment that has abandoned the critical philosophical base for understanding why a holistic approach is or is not appropriate.

First it is important to address that the holistic process has been attacked by some as a demonstration of “reverse racism” through the process of affirmative action. The term “reverse racism” is a misnomer and is not properly used in this descriptive context. Racism is giving differing treatment, either in a positive or negative manner, to an individual based on their ethnicity or race. Based on this definition, reverse racism would be akin to not giving differing treatment to an individual based on their ethnicity or race. However, when individuals invoke the term “reverse racism” that is not the meaning they intend to convey. Instead they simply mean a different type of racism. Unfortunately some parts of society have associated the term racism with only one particular form of racial bias instead of all forms of racial bias, which is inappropriate. Therefore, the term “reverse racism” should be eliminated from conversation in this context and replaced with the appropriate term – racism.

Second, it must be noted that the original intention of affirmative action was not to give “bonus points” to an individual based on their race, but to assess how race may have influenced the acquisition of certain opportunities and thereby influenced the development of an individual through their performance when engaging in those opportunities. It should not be surprising that an individual with rich, committed and connected parents will have more opportunities, and more ability to prepare for those opportunities when presented, than an individual without wealthy or even present parents.

For example, it is expected that SAT scores would be higher for children of richer families, both because of increased opportunity to prepare and increased opportunity to retest if the performance is not deemed acceptable. Also, there is a higher probability that individuals from rich families will be better nourished than individuals from poor families, which will directly influence academic performance and the ability to participate in other valuable non-academic opportunities. Such environmental effectors are simple elements that can skew the value and analytical power of “raw” metrics like standardized tests. Basically, affirmative action is akin to judging the vault in gymnastics. Not all jumps have the same difficulty level; a non-perfect vault with a 10.0 difficulty will consistently beat a perfect vault with a 7.0 difficulty (e.g. under difficulty-plus-execution scoring, 10.0 + 9.0 = 19.0 still beats 7.0 + 10.0 = 17.0).

A quick side note: while the idea of affirmative action was originally based on the premise of race in an attempt to combat direct and indirect forms of racism, at present the idea of affirmative action has shifted more to address differences in economic circumstance over race/ethnicity. The idea that rich individuals of race A will somehow be significantly excluded from a given opportunity relative to rich individuals of race B is no longer realistic in modern society. It is important to recognize that more minorities will be assisted by affirmative action not directly because of race, but because of past racism that reduced the probability of these minority families building intergenerational wealth, thereby making them poorer than white families.

Based on the “potential judgment” aspect of affirmative action, some individuals may object to the idea that it is appropriate to penalize an individual for having access to opportunities that others may not have, claiming that this behavior is a form of bias. This point creates the first significant philosophical question that must be addressed in the admissions process: is it justifiable that an above-average individual in an advanced-difficulty pool should find favor for an opportunity over a high-performing individual in a lesser-difficulty pool?

An apt example of this notion is seen in the disparity between the “Big 5” college conferences (ACC, Big 10, Big 12, PAC 12 and SEC) and the mid-major conferences when selecting basketball teams for the NCAA Championship Tournament. While the committee tends to give preference to teams from the Big 5, the question is: should it? A Big 5 power team, “Big Team A”, with a 55.6% conference winning percentage at 10-8 and an overall record of 21-13 has clearly demonstrated itself as slightly above average among its peers, whereas a mid-major team, “Medium Team B”, with an 88.9% conference winning percentage at 16-2 and an overall record of 26-7 did not have the same opportunity to compete against the level of competition that Big Team A faced, but has demonstrated itself a quality team with a greater unknown ceiling. Basically, should someone slightly above the middle of the pack in an environment that could be viewed as more competitive be passed over for someone at the top of a tier-2 level?
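For the record, the percentages above are simply wins divided by games played; a minimal sketch of the arithmetic (the team names are the hypothetical ones from the example, not real programs):

```python
# Conference winning percentage: wins / (wins + losses), as a percent.
def win_pct(wins: int, losses: int) -> float:
    """Return winning percentage on a 0-100 scale, rounded to one decimal."""
    return round(100 * wins / (wins + losses), 1)

# Hypothetical "Big Team A": 10-8 in conference.
big_team_a = win_pct(10, 8)     # 55.6
# Hypothetical "Medium Team B": 16-2 in conference.
medium_team_b = win_pct(16, 2)  # 88.9

print(big_team_a, medium_team_b)
```

The gap in raw percentage (88.9 vs. 55.6) is exactly what the raw metric cannot contextualize: it says nothing about the difficulty of the schedule behind each number.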

In the arena of applicants, the question of quality could boil down to: should the 100th-best applicant from “area” A be accepted over the 10th-best applicant from “area” B? Think about it this way: should applicant C from city y, who scores significantly above average for that area on standardized tests and also has quality grades, be accepted over applicant E from city x, who scores slightly above average for that area on standardized tests and has quality grades, even if applicant E’s raw scores are slightly higher? Note that city x obviously has a higher student average on standardized tests than city y.

Those who say yes to the above question based on the importance of fostering a racially/ethnically diverse environment must be careful not to fall into the trap of needless diversity, which is its own type of bias. A diverse environment must be established based on thought and behavior, not on elements beyond an individual’s control.

There is an advantage to diversity of experience, for it ensures a greater level of perspective and understanding, leading to more, and potentially more valid, strategies for solving problems. However, this advantage comes from experience, not from different skin color, religious beliefs, etc. For example, the inclusion of person A just because he/she has a certain skin color or is of a certain ethnicity is not appropriate; inclusion should demand a meaningful and distinctive viewpoint. Cosmetic diversity for the sake of diversity serves no positive purpose and is inherently foolish and unfair/biased. Based on this point, the crux of the admissions issue is how to identify individuals with distinctive and valuable viewpoints in order to justify selecting a high achiever from a less difficult environment.

Most would argue that the standard analysis metrics are not appropriate for this task. For example, grades are significantly arbitrary, shaped by numerous uncontrollable environmental and academic circumstances; i.e. an A at high school x does not always carry the same weight as an A at high school y, and some high schools allow students greater amounts of extra credit, which conceals their actual knowledge of the subject through grade inflation. Standardized tests can be heavily prepared for and taken multiple times depending on time and financial resources. They also may not present an accurate representation of ability, for almost no “real-world” task requires an individual to sit in one place in a time-sensitive environment answering various questions without access to any outside resources beyond what is in their brain. At one point the “college essay” could have filled this role, but the essay now appears to have devolved into an ambiguous farce demanding only unoriginal “extraordinary” experiences and/or teaching moments, where sadly it has become difficult to determine whether the student means what they say or is simply writing what they think the admissions officers want to hear.

However, while these flaws with the standard metrics exist, abandoning the standard metrics entirely would be an error, for doing so would be akin to replacing one “bias” with another. The standard metrics are an important puzzle piece, but they do not make up the entire puzzle.

For some, the college interview has been thought of as a panacea for bridging the gap between holistic and standard admission judgment, but interviews have caveats that must be monitored. Supporters of the interview process believe it gives applicants the ability to demonstrate that they are more than just test scores, extracurricular activities and grades, and allows both the university and the applicant to more specifically define the level of “fit” between the two beyond the mass generic questions utilized in the application process. Finally, interviews can be a good deciding factor between borderline applicants.

Unfortunately interviews have some flaws that must be properly managed to ensure their legitimacy. First, individuals conducting the interview must be properly trained to avoid first-impression bias, as most interviews establish the tenor of the relationship between the interviewer and the interviewee very early, which threatens the objectivity of the rest of the interview. Interviews must also have a standard operating procedure, especially when it comes to the questions. Applicants must be asked the same questions, for if different questions are asked of different applicants the probability of subjectivity increases, which hurts the interview as a comparative evaluation metric. It is fine to ask different questions if interviews are not going to be used when choosing one applicant over another, but most do not view the interview in such a casual light.

Another concern is that interviews are unable to judge growth potential, i.e. how the university may positively or negatively influence the development of the applicant if he/she actually attends. Also, if interviews do not carry significant weight in the decision-making process they may cause more harm than good: the lack of specific feedback produces more stress than relief, as applicants wonder how the interview went and over-embellish small errors. Finally, if interviews are deemed important it would be helpful for more universities to offer travel vouchers to financially needy applicants, so that those who want to tour the campus and participate in the interview process have an opportunity to do so that is not negatively impacted by their existing financial situation. Such a voucher may be especially important if interviews are used in “borderline” judgments.

A separate strategy may be the use of static philosophical probing questions in the application process. This strategy could better manage differences in outside environmental influences by gauging the general mindset of an applicant when it comes to solving problems. For example, one question could be: presented with a large jar full of chocolates and a single sample chocolate, how would the applicant calculate the number of chocolates in the jar? Note that this question demands both creativity and deterministic logic; creativity will produce more available options, but logic will be required to reason out the best option from the list.
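One deterministic answer to the jar question can be sketched as a simple volume-ratio estimate (a minimal illustration; the jar dimensions, chocolate volume and packing fraction below are assumed values, not from the text):

```python
import math

def estimate_chocolate_count(jar_radius_cm, jar_height_cm,
                             chocolate_volume_cm3, packing_fraction=0.64):
    """Estimate chocolates in a cylindrical jar by volume ratio.

    packing_fraction ~0.64 is the random close-packing density for
    roughly spherical objects (an assumption, not a measured value).
    """
    jar_volume = math.pi * jar_radius_cm ** 2 * jar_height_cm
    return int(jar_volume * packing_fraction / chocolate_volume_cm3)

# Example: a 5 cm radius, 20 cm tall jar and a 4 cm^3 sample chocolate
print(estimate_chocolate_count(5, 20, 4))
```

The creativity lies in choosing the method (volume ratio, weighing the jar, counting a visible layer); the logic lies in picking the method whose assumptions are most defensible.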

Another interesting question would be to ask what is the greatest invention in human history. Such a question would inspect whether an individual believes it is more important to build a foundation or whether importance comes from what expands from that foundation. A third question could be: what one opportunity would the applicant like to have had that was not received or not available, and why? These questions are superior to the generic, banal, analytically irrelevant questions that most universities ask on their admission forms.

Overall, regardless of what methodology a university uses to accept or reject applicants, the most important element is that the methodology is transparent. Universities must exhibit what attributes and credentials validate an individual’s merit for acceptance and then produce valid qualitative and quantitative reasons for why certain individuals gain admission and others do not. Transparency is the key element allowing a university to conduct its specific admission methodology without complaint. Returning to the original question: whether a university elects to accept above-average individuals from high-“difficulty” environments or top performers from lower-“difficulty” environments, either method is defensible as long as legitimate reasoning is available. However, therein lies the problem with the holistic method: universities are not transparent in its application, and such behavior must change if a holistic method is to have any significant credibility.

Wednesday, June 10, 2015

Exploring the Biological Nature of Brown and Beige Fat

Over two years ago this blog discussed the possibility of incorporating a specialized preparation routine before exercise in an attempt to stimulate both brown and beige adipose tissue in order to increase the efficiency and overall calorie- and fat-burning potential of standard exercise. However, that post did not seek to fully understand or discuss the specific biological mechanisms that govern the behavior of brown or beige adipose tissue. This lack of knowledge limits the efficiency of exercise programs, as individuals could be consuming certain foods or performing certain warm-up tasks to increase exercise potential in addition to those suggested in the past blog post. Increasing exercise efficiency could be an easy means to increase the overall health of society without having to devote more precious time to exercise; therefore it would prove useful to better understand the processes that activate these types of fat.

At the most basic level there are two key elements to the fat-burning capacity of brown fat. First, brown fat cells contain many more mitochondria than white fat cells; these additional mitochondria allow for greater rates of metabolism along with an increased lipid concentration. Brown fat also responds to norepinephrine, which activates lipases that break down stored triglycerides into glycerol and non-esterified fatty acids, finally producing CO2 and water, which can lead to a positive feedback mechanism.1,2 Second, brown fat contains significant expression rates of uncoupling protein 1 (UCP-1).1 UCP-1 is responsible for dissipating energy, which leads to the decoupling of ATP production from mitochondrial respiration.1 Basically, UCP-1 returns protons after they have been pumped out of the mitochondrial matrix by the electron transport chain, releasing their energy as heat instead of producing ATP (i.e. proton leak).

It is important to understand that there are two types of brown fat: natural brown fat and intermediate brown fat, commonly known as beige fat. Natural brown fat is typically exemplified by the fat located in the interscapular region and contains cells from the muscle-like myf5+ and pax7+ lineage.3 Natural brown fat is typically isolated from white fat and almost entirely synthesized in the prenatal stage of development as a means to produce heat apart from shivering.4 Beige fat is commonly interspersed within white fat, does not arise from these muscle-like cells (although Myh11 could be involved),5 and can be activated by the thermogenic pathway and the strain of exercise. These same cues also have the potential to convert white fat to beige fat through a process commonly called “browning”.6,7

Natural brown fat is thought to have larger concentrations of UCP-1 expression because it constitutively expresses UCP-1 after differentiation, whereas beige fat expresses large amounts of UCP-1 only in response to thermogenic or exercise cues.1,5 Therefore, natural brown fat is more effective at energy expenditure. However, it may not be possible to develop more natural brown fat after development; therefore, any positive progression in brown fat development will come from beige fat.

Early understanding of brown fat activation involved indiscriminate increases in the activity of the sympathetic nervous system (SNS). The standard pathway governing brown fat activation uses a thermogenic response involving the release of norepinephrine, which initiates cAMP-dependent protein kinase (PKA) and p38-MAPK signaling, leading to the production of free fatty acids (FFAs) through lipolysis; these FFAs in turn drive UCP-1-induced proton uncoupling.4 UCP-1 concentrations are further increased through secondary pathways involving the phosphorylation of PPAR-gamma co-activator 1alpha (PGC1alpha), cAMP response element binding protein (CREB) and activating transcription factor 2 (ATF2).8 Among these three elements PGC1alpha appears to be the most important, co-activating many transcription factors and playing an important role in linking oxidative metabolism and mitochondrial action.9

However, due to the complicated nature of SNS activation and its other downstream activators, attempts to replicate it in the form of weight loss drugs like Fenfluramine or Ephedra resulted in severe negative cardiovascular side effects like elevated blood pressure and heart rate.10 While some argue that either increasing the sensitivity or the rate of stimulation of the SNS can improve upon these results, the underlying elements associated with downstream activation of the SNS make direct influence too complicated to facilitate. Therefore, from a biological perspective it makes more sense to focus on a downstream element that interacts with brown fat at a more localized level.

Just a side note: based on the differing interactions of brown/beige and white fat with the SNS, white fat appears to represent long-term energy storage while brown fat handles shorter-term energy needs, an unsurprising conclusion. However, frequent energy expenditure, like exercise, may condition the body to produce more beige fat versus white fat, viewing short-term energy needs as more valuable than long-term energy needs. Basically, if the above point is accurate, it stands to reason that a person would see more benefit from 20 minutes of exercise 6 days a week than from 40 minutes of exercise 3 days a week.

Moving away from direct SNS stimulation, perhaps the appropriate method of increasing browning involves increasing transcription and translation of UCP-1. Interestingly enough, empirical evidence exists to support the idea that retinoic acid could be an effective inducer of UCP-1 gene transcription in mice, operating through a non-adrenergic pathway.11,12 However, a more focused study using loss-of-function techniques involving retinaldehyde dehydrogenase, which is responsible for converting retinal to retinoic acid, determined that retinal, not retinoic acid, is the major inducer of brown fat activity.13 Unfortunately there is no direct understanding regarding the proportional response of brown fat to retinal or retinoic acid. Therefore, the general fat-soluble nature of vitamin A will probably make it difficult to utilize its derivatives as biological stimulants for brown fat activation or browning.

Another possible strategy to stimulate browning is through alternatively activated (type 2/M2) macrophages induced by eosinophils, which are commonly triggered by IL-4 and IL-13 signaling. When activated this way, these macrophages are recruited around subcutaneous white fat and secrete catecholamines to facilitate browning in mice.14,15 A secondary means by which both IL-4 and IL-13 may influence fat conversion is their direct interaction with Th2 cytokines.16 Unfortunately, while on its face this strategy looks promising, in a similar vein to vitamin A it might not be effective due to unknown long-term side effects associated with IL-4 and IL-13 activation. Due to this lack of knowledge, if IL-4 or IL-13 is thought to be a viable biochemical strategy for inducing weight loss, proper long-term timelines for effects and dosages must be explored in humans, not just short-term studies in mice.

A more controversial agent in browning is fibronectin type III domain-containing protein 5, more frequently known as irisin. Due to its significantly increased rate of secretion from muscle under the strain of exercise, some believe that irisin is a key mediator in browning, acting as a myokine;17 if this characterization is accurate then irisin could be a significant player in the biological benefits produced by exercise, including weight loss, white fat conversion and reduced levels of inflammation.18,19 However, other parties believe that because human studies with irisin have produced results that do not demonstrate benefits similar to those seen in mice, irisin is another molecule that cannot scale up its effectiveness when faced with the added biological complexity of humans versus a mouse.20-22

The key element within this controversy could be that irisin expression is augmented by the increased expression of PGC1alpha, but PGC1alpha increases the expression of many different proteins and other molecules, so the expression of irisin may not be relevant to the positive changes associated with exercise. Another factor may be that a key difference between mice and humans is a mutation in the start codon of the human gene involved in the production of irisin, which significantly reduces irisin availability.23 This mutation could be the limiting factor explaining why, despite a very conserved genetic sequence, humans do not see anywhere near the benefit mice do. If this explanation is correct it potentially still leaves the door open to directly injecting irisin into the body to increase concentrations in an attempt to aid exercise-derived results, but if PGC1alpha is the key, then this increased concentration of irisin could be of minimal consequence.

Another potential element that demonstrates a significant concentration increase in accordance with increased PGC1alpha is a hormone known as meteorin-like (Metrnl).24 The concentration of this hormone increases in both skeletal muscle and adipose tissue during exercise and exposure to cold temperatures, in accordance with increases in PGC1alpha concentrations. When Metrnl circulates in the blood it seems to produce a widespread effect that induces browning, resulting in a significant increase in energy expenditure.24 The influence of Metrnl on white fat does not appear to stem from direct interaction with the fat, but instead from indirect action on various immune cells, most notably M2 macrophages via the eosinophil pathway, which then interact with the fat through activation of various pro-thermogenic actions.24 As discussed above, this interaction with eosinophils appears to function through IL-4 and IL-13 signaling, indicating a common pathway purpose between IL-4/IL-13 and the original SNS pathway. Not surprisingly, blocking Metrnl has a negative effect on the biological thermogenic response.24

Another potential strategy for browning may be targeting appropriate receptors instead of specific molecules; with this strategy in mind, one potential target could be transient receptor potential vanilloid-4 (TRPV4). TRPV4 acts as a negative regulator of browning through its action against PGC1alpha and the thermogenic pathway in general.25 In addition, TRPV4 appears to activate various pro-inflammatory genes that interact with white adipose tissue, making it more difficult to facilitate browning even if the appropriate signals are present. TRPV4 inhibition and genetic ablation in mice significantly increase resistance to obesity and insulin resistance.25 The link between inflammation and thermogenesis is highlighted by the activity of TRPV4, which is one of the early triggers for immune cell chemoattraction.25

Obesity may also produce a positive feedback effect through TRPV4 by increasing cellular swelling and stretching through the ERK1/2 pathway, which increases the rate of TRPV4 activation.26,27 However, the validity of TRPV4 as a therapeutic target remains questionable, for TRPV4 expression not only influences fat/energy expenditure but also osmotic regulation and bone formation, and plays some role in brain function.25,28,29 Fortunately a number of the issues with TRPV4 mutations/malfunction appear to be developmental rather than post-developmental in influence, thus TRPV4 therapies could still be valid.

Natriuretic peptides (NPs) are hormones typically produced in the heart in two different operational capacities: atrial and ventricular. Both of these hormones appear to play a role in browning through association with the adrenergic pathway.30 The most compelling evidence supporting this behavior is that mice lacking NP clearance receptors demonstrated significantly enhanced thermogenic gene expression in both white and brown adipose tissue.30 Also, direct application of ventricular NP in mice increased energy expenditure.30 In addition to the above results, NPs are an inherently attractive therapeutic possibility because appropriate receptors are located in the white and brown fat of both rats and humans31,32 and these receptors go through periods of significant decline in expression when exposed to fasting,33 which may account for some of the benefits seen from low-calorie diets.

Atrial NPs increase lipolysis in human adipocytes similarly to catecholamines (increasing cAMP levels and activating PKA), although whether this increase is induced through interaction with beta-adrenergic receptors is unclear.34 Some believe that NPs activate the guanylyl cyclase-containing receptor NPRA, producing the second messenger cGMP, which activates cGMP-dependent protein kinase (PKG).35,36 PKA and PKG have similar mechanisms for substrate phosphorylation, including similar targets in adipocytes,36 which may explain why atrial NPs act similarly to catecholamines.

Recall from above that one of the means of inducing browning, especially for those tissues that are distant from SNS-based neurons, is macrophage recruitment. This recruitment appears to be initiated by CCR2 and IL-4, for when either is eliminated from mouse models the conversion no longer occurs.15 Tyrosine hydroxylase (Th) is also important in this process, facilitating the biosynthesis of catecholamines and, downstream, PKA activation.

With respect to producing a biomedical agent to enhance browning, there appear to be three major pathways in play: 1) the SNS pathway, producing a direct activation response; 2) the macrophage recruitment pathway, potentially involving Metrnl, which activates IL-4 and IL-13, eventually leading to PKA activation and an indirect activation response; 3) the NP activation pathway, which eventually leads to PKG activation and an indirect activation response. As mentioned earlier, SNS pathway enhancement has already been attempted by at least two drugs and failed miserably, so that method is probably out. In addition, the SNS pathway does not appear to have as much browning potential as the PKA or PKG pathways due to its reliance on the location of certain nerve fibers.

Enhancing macrophage recruitment could be a good strategy, but there appears to be little information regarding negative effects associated with short-term, high-frequency enhancement of IL-4 or IL-13 concentrations. Some reports have suggested an increase in allergic symptoms, but any more severe consequences are unknown. This is not to say that enhancing IL-4 or IL-13 is not a valid therapeutic strategy, but its overall value is unknown. In contrast, enhancement of NPs appears to be a more stable choice due to positive results in initial exploration of both the application and the expected negative side effects. First, NPs can be administered via the nose-brain pathway, enabling access to the brain while avoiding some potential systemic side effects.37 Second, there appear to be few, if any, significant side effects to intranasal NP application, at least in the short term.38

Overall, the above discussion has merely identified some of the more promising candidates for enhancing the browning of white fat. One could argue that resorting to drugs to enhance the overall health of an individual versus simple diet and exercise is a regretful strategy. Unfortunately, the reality of modern society is that more and more people seem to have less available time to exercise or eat right. In combination with a mounting weight-negative external environment (increased pollution and industrial chemicals like BPA), this drug enhancement strategy may be the most time- and economically efficient means to ensure proper weight control and overall health for the future.

Citations –

1. van Marken Lichtenbelt, W, et al. “Cold-activated brown adipose tissue in healthy men.” The New England Journal of Medicine. 2009. 360:1500-08.

2. Lowell, B, and Spiegelman, B. “Towards a molecular understanding of adaptive thermogenesis.” Nature. 2000. 404:652-60.

3. Seale, P, et al. “PRDM16 controls a brown fat/skeletal muscle switch.” Nature. 2008. 454:961–967.

4. Sidossis, L, and Kajimura, S. “Brown and beige fat in humans: thermogenic adipocytes that control energy and glucose homeostasis.” J. Clin. Invest. 2015. 125(2):478-486.

5. Long, J, et al. “A smooth muscle-like origin for beige adipocytes.” Cell Metab. 2014. 19(5):810–820.

6. Kajimura, S, and Saito, M. “A new era in brown adipose tissue biology: molecular control of brown fat development and energy homeostasis.” Annu Rev Physiol. 2014. 76:225–249.

7. Harms, M, and Seale, P. “Brown and beige fat: development, function and therapeutic potential.” Nat Med. 2013. 19(10):1252–1263.

8. Collins, S. “β-Adrenoceptor signaling networks in adipocytes for recruiting stored fat and energy expenditure.” Front Endocrinol (Lausanne). 2011. 2:102.

9. Handschin, C, and Spiegelman, B. “Peroxisome proliferator-activated receptor gamma coactivator 1 coactivators, energy homeostasis, and metabolism.” Endocr. Rev. 2006. 27:728–735.

10. Yen, M, and Ewald, M. “Toxicity of weight loss agents.” J. Med. Toxicol. 2012. 8:145–152.

11. Alvarez, R, et al. “A novel regulatory pathway of brown fat thermogenesis: retinoic acid is a transcriptional activator of the mitochondrial uncoupling protein gene.” J. Biol. Chem. 270:5666-5673.

12. Mercader, J, et al. “Remodeling of white adipose tissue after retinoic acid administration in mice.” Endocrinology. 2006. 147:5325–5332.

13. Kiefer, F, et al. “Retinaldehyde dehydrogenase 1 regulates a thermogenic program in white adipose tissue.” Nat. Med. 2012. 18:918–925.

14. Nguyen, K, et al. “Alternatively activated macrophages produce catecholamines to sustain adaptive thermogenesis.” Nature. 2011. 480(7375):104–108.

15. Qiu, Y, et al. “Eosinophils and type 2 cytokine signaling in macrophages orchestrate development of functional beige fat.” Cell. 2014. 157(6):1292–1308.

16. Stanya, K, et al. “Direct control of hepatic glucose production by interleukin-13 in mice.” The Journal of Clinical Investigation. 2013. 123(1):261-271.

17. Pedersen, B, and Febbraio, M. “Muscle as an endocrine organ: focus on muscle-derived interleukin-6.” Physiological Reviews. 2008. 88(4):1379–406.

18. Bostrom, P, et al. “A PGC1-α-dependent myokine that drives brown-fat-like development of white fat and thermogenesis.” Nature. 2012. 481(7382):463–468.

19. Lee, P, et al. “Irisin and FGF21 are cold-induced endocrine activators of brown fat function in humans.” Cell Metab. 2014. 19(2):302–309.

20. Erickson, H. “Irisin and FNDC5 in retrospect: An exercise hormone or a transmembrane receptor?” Adipocyte. 2013. 2(4):289-293.

21. Timmons, J, et al. “Is irisin a human exercise gene?” Nature. 2012. 488(7413):E9-11.

22. Albrecht, E, et al. “Irisin - a myth rather than an exercise-inducible myokine.” Scientific Reports. 2015. 5:8889.

23. Ivanov, I, et al. “Identification of evolutionarily conserved non-AUG-initiated N-terminal extensions in human coding sequences.” Nucleic Acids Research. 2011. 39(10):4220-4234.

24. Rao, R, et al. “Meteorin-like is a hormone that regulates immune-adipose interactions to increase beige fat thermogenesis.” Cell. 2014. 157:1279-1291.

25. Ye, L, et al. “TRPV4 is a regulator of adipose oxidative metabolism, inflammation, and energy homeostasis.” Cell. 2012. 151:96-110.

26. Gao, X, Wu, L, and O’Neil, R. “Temperature-modulated diversity of TRPV4 channel gating: activation by physical stresses and phorbol ester derivatives through protein kinase C-dependent and -independent pathways.” J. Biol. Chem. 2003. 278:27129–27137.

27. Thodeti, C, et al. “TRPV4 channels mediate cyclic strain-induced endothelial cell reorientation through integrin-to-integrin signaling.” Circ. Res. 2009. 104:1123–1130.

28. Masuyama, R, et al. “TRPV4-mediated calcium influx regulates terminal differentiation of osteoclasts.” Cell Metab. 2008. 8:257–265.

29. Phelps, C, et al. “Differential regulation of TRPV1, TRPV3, and TRPV4 sensitivity through a conserved binding site on the ankyrin repeat domain.” J. Biol. Chem. 2010. 285:731–740.

30. Bordicchia, M, et al. “Cardiac natriuretic peptides act via p38 MAPK to induce the brown fat thermogenic program in mouse and human adipocytes.” The Journal of Clinical Investigation. 2012. 122(3):1022-1036.

31. Sarzani, R, et al. “Comparative analysis of atrial natriuretic peptide receptor expression in rat tissues.” J Hypertens Suppl. 1993. 11(5):S214–215.

32. Sarzani, R, et al. “Expression of natriuretic peptide receptors in human adipose and other tissues.” J Endocrinol Invest. 1996. 19(9):581–585.

33. Sarzani, R, et al. “Fasting inhibits natriuretic peptides clearance receptor expression in rat adipose tissue.” J Hypertens. 1995. 13(11):1241–1246.

34. Sengenes, C, et al. “Natriuretic peptides: a new lipolytic pathway in human adipocytes.” FASEB J. 2000. 14(10):1345–1351.

35. Potter, L, and Hunter, T. “Guanylyl cyclase-linked natriuretic peptide receptors: structure and regulation.” J Biol Chem. 2001. 276(9):6057–6060.

36. Sengenes, C, et al. “Involvement of a cGMP-dependent pathway in the natriuretic peptide-mediated hormone-sensitive lipase phosphorylation in human adipocytes.” J Biol Chem. 2003. 278(49):48617–48626.

37. Illum, L. “Transport of drugs from nasal cavity to the central nervous system.” Eur. J. Pharm. Sci. 11:1-18.

38. Koopmann, A, et al. “The impact of atrial natriuretic peptide on anxiety, stress and craving in patients with alcohol dependence.” Alcohol and Alcoholism. 2014. 49(3):282-286.

Wednesday, May 27, 2015

Where is my Solar and Wind Only City?

Two years ago this blog proposed a challenge to solar and wind supporters: if solar and wind are indeed the energy mediums of the future and do not require the assistance of other energy mediums (most notably fossil fuels like coal and natural gas), then proponents should empirically demonstrate this potential by transitioning a single medium-sized city (10,000 – 15,000 individuals) to a grid where at least 70% of the electricity, not even all energy, is produced by solar and/or wind sources. Unfortunately, despite the passage of two years and the so-called further expansion of solar and wind technology, no such experiment has been conducted.

This failure to produce a model city that would empirically represent and support the actual ability of solar and wind to produce the bulk of electricity, and possibly even all energy, in the future, beyond simple hype, is troubling. Are solar and wind proponents so irresponsible that they are willing to gamble the future of society on merely their hopes, dreams and personal preferences rather than raw data? Do they think that incorporation of solar and wind into a grid, steadily advancing from 10% to 20%, then 30%, then 40%, then 50%, etc., will run perfectly with no significant problems? If so, then the solar and wind supporters who believe these things should be stripped of all of their credibility and influence; those who do not believe in such a perfect transition should begin immediately petitioning to accept the challenge.

To the solar and wind proponents who object to the above characterization because in March Georgetown, Texas (population approximately 48,000) proposed a plan to get all of its electricity from solar and wind sources, in essence meeting this challenge: hold your horses. While it is true that there has been an initial arrangement between Georgetown Utility Systems, Spinning Spur Wind Farm (owned by EDF Renewable Energy) and SunEdison to purchase 294 MW (144 MW wind and 150 MW solar) from their installations, this is only an initial arrangement; no actual testing or application has occurred yet.

A more pertinent issue regarding the use of Georgetown as an example is that there is no specific information pertaining to the details of how Georgetown Utility Systems will manage this change in supplier. Basically, the only public reporting on this strategy has consisted of puff-hype pieces with no real substance or details. Both Spinning Spur Wind Farm and the yet-to-be-identified SunEdison site have not been fully constructed, are not operational and do not have any secondary storage capacity; thus any electricity produced by these installations will be live, and when the installations are not producing electricity there will be no electricity to provide to Georgetown.

Initially there are at least three major questions that must be addressed to legitimize Georgetown as a model for a solar/wind-only powered city. First, where is the detailed analysis of how electricity, and possibly even energy flows, would be properly compensated to avoid brownouts in times when there is insufficient electricity being produced by solar and wind sources? Simply saying “the sun shines in the day and the wind blows when the sun is not shining” is laughable and severely damages credibility. Anyone who thinks that there will not be periods of intermittence from both Spinning Spur and the SunEdison site is harboring an inaccurate belief. Basically, show that 100% renewable can be done using math, not flowery words and misplaced hype; note that it is important to also include any transmission and inverter losses in the calculation and to separate nameplate capacity from actual operational capacity.
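The kind of math being asked for can be sketched in a few lines: nameplate capacity times hours in a year overstates delivered energy unless capacity factors and losses are applied. The capacity factors and loss fractions below are illustrative assumptions, not figures from the Georgetown arrangement:

```python
# Rough annual-energy sanity check: nameplate capacity vs. delivered energy.
# Capacity factors and loss figures are illustrative assumptions only.
HOURS_PER_YEAR = 8760

def delivered_gwh(nameplate_mw, capacity_factor, loss_fraction):
    """Annual energy actually delivered, in GWh."""
    gross_mwh = nameplate_mw * capacity_factor * HOURS_PER_YEAR
    return gross_mwh * (1 - loss_fraction) / 1000

wind = delivered_gwh(144, capacity_factor=0.35, loss_fraction=0.08)
solar = delivered_gwh(150, capacity_factor=0.25, loss_fraction=0.10)
print(round(wind + solar, 1))  # total GWh/year from the 294 MW arrangement
```

Even before the timing of intermittence is considered, this shows the 294 MW nameplate delivering well under a third of the energy a naive 294 MW × 8760 h calculation would suggest, which is exactly why the detailed analysis matters.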

Second, it stands to reason that proponents of a solar/wind only city will not allow the use of natural gas or coal to act in a backup capacity during these periods of intermittence; therefore, during periods of excess solar and wind, electricity must be stored in a battery for use at a future time. So what type of battery structure(s) is going to be utilized to store that excess energy and what is the economic feasibility of using this structure? If no battery infrastructure is believed to be feasible or economical then what type of energy medium will be tapped to act as backup in lieu of a fossil fuel medium and how will it be properly incorporated?
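The storage question can also be framed numerically. The sketch below sizes a hypothetical battery for an assumed average city load and an assumed windless/sunless interval; every number here is a placeholder for illustration, not a claim about Georgetown's actual load or battery prices.

```python
# Rough storage-sizing arithmetic for riding through intermittency.
# Load, backup duration and cost figures are assumptions for illustration.

AVERAGE_LOAD_MW = 75.0        # assumed average city load
BACKUP_HOURS = 12.0           # assumed windless/sunless period to cover
DEPTH_OF_DISCHARGE = 0.8      # usable fraction of installed battery capacity
COST_PER_MWH_USD = 350_000.0  # assumed installed battery cost per MWh

# Installed capacity must exceed delivered energy by the depth-of-discharge factor.
required_mwh = AVERAGE_LOAD_MW * BACKUP_HOURS / DEPTH_OF_DISCHARGE
capital_cost = required_mwh * COST_PER_MWH_USD

print(f"Installed storage needed: {required_mwh:.0f} MWh")
print(f"Assumed capital cost: ${capital_cost / 1e6:.0f} million")
```

Even this toy calculation shows why "just use batteries" is not a free answer: covering one assumed half-day lull already implies hundreds of millions of dollars of storage under these placeholder prices.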

Third, how will consumer costs for energy change from the transition away from fossil fuels over time, i.e. what will costs be in year 1, what will costs be in year 10…? To simply say it will cost less is not sufficient. It must be demonstrated that it will cost less both now and in the future and if it will not cost less in the future what forms of compensation, if any, will be provided to the residents of Georgetown?
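The year-by-year comparison asked for above can be sketched with two assumed price paths: a fixed renewable purchase rate versus a fossil rate that escalates with fuel costs. All figures are hypothetical placeholders, not Georgetown's actual contract terms.

```python
# Hypothetical year-by-year cost comparison: fixed renewable PPA rate
# vs. an escalating fossil rate. Every number here is assumed.

ANNUAL_DEMAND_MWH = 600_000   # assumed annual city consumption
PPA_RATE_USD_MWH = 45.0       # assumed fixed renewable contract rate
FOSSIL_RATE_USD_MWH = 40.0    # assumed year-1 fossil rate
FOSSIL_ESCALATION = 0.03      # assumed annual fuel-price escalation

def annual_costs(years):
    """Return (year, renewable_cost, fossil_cost) tuples for each year."""
    rows = []
    for year in range(1, years + 1):
        ppa = ANNUAL_DEMAND_MWH * PPA_RATE_USD_MWH
        fossil = (ANNUAL_DEMAND_MWH * FOSSIL_RATE_USD_MWH
                  * (1 + FOSSIL_ESCALATION) ** (year - 1))
        rows.append((year, ppa, fossil))
    return rows

for year, ppa, fossil in annual_costs(10):
    cheaper = "renewable" if ppa < fossil else "fossil"
    print(f"Year {year:2d}: PPA ${ppa / 1e6:.1f}M vs fossil ${fossil / 1e6:.1f}M -> {cheaper}")
```

Under these assumptions the fossil option starts cheaper and only crosses over after several years of escalation, which is precisely why "it will cost less" needs a year-by-year demonstration rather than a slogan.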

Overall these are just the three most basic questions that must be addressed before anyone should accept the idea of Georgetown, Texas being a legitimate 100% solar/wind-powered city when its plan is put into place a few years from now. If these questions are not answered with accurate specifics that are later properly executed over time, then Georgetown loses all significance as both a legitimate and symbolic experiment for the validity of a solar and wind “future”.

Of course it must be understood that the results in Georgetown are only an initial step; success only supports the possibility, it does not guarantee national eventuality. So how about it, solar and wind supporters: are you actually ready to put your theories to the test, or are you simply content with the unscientific and irrational belief that everything will magically work out without the need for essential specifics, realistic assumptions, honest economics (which is incredibly lacking in most pro-solar and wind papers) and valid proofs of concept?

Wednesday, May 6, 2015

A Theory Behind the Relationship Between Processed Foods and Obesity

While there has been a general slowing in the progression of global obesity, especially in the developed world, there has yet to be a reversal of this detrimental trend. A recent study has suggested that one influence on obesity progression lies with the consumption of foods that incorporate emulsifiers and how those emulsifiers interact with intestinal bacteria, including increasing the probability of developing metabolic syndrome in mice.1 Based on this result, understanding the digestive process may be an important element in understanding how emulsifiers and emulsions may influence weight outcomes.

An emulsion is a mixture of at least two liquids where multiple components are immiscible, a characteristic commonly seen when oil is added to water, resulting in a two-layer system where the oil floats on the surface of the water until it is mixed to form the emulsion. However, due to this immiscibility most emulsions are inherently unstable, as “similar” droplets rejoin one another, once again creating two distinct layers. An emulsion is divided into two phases, a continuous phase and a droplet (dispersed) phase, depending on the concentrations of the liquids present. Due to their inherent instability most emulsions are stabilized with the addition of an emulsifier. These agents are commonly used in many food products including various breads, pastas/noodles, and milk/ice cream.

Emulsifier-based stabilization occurs by reducing interfacial tension between immiscible phases and by increasing the repulsion between the dispersed phases through either steric repulsion or electrostatic repulsion. Emulsifiers can produce these effects because they are amphiphiles (molecules with two different ends): a hydrophilic end that is able to interact with the water layer, but not the oil layer, and a hydrophobic end that is able to interact with the oil layer, but not the water layer. Steric repulsion is born from volume restrictions, i.e. direct physical barriers, while electrostatic repulsion is exactly what its name suggests: electrically charged surfaces repelling each other as they approach. As mentioned above, some recent research has suggested that the consumption of certain emulsifiers by mice produces negative health outcomes relative to controls. Why would such an outcome occur?

A typical dietary starch, one of the common foods that utilize emulsifiers, is composed of long chains of glucose called amylose, a polysaccharide.2 These polysaccharides are first broken down in the mouth by chewing and saliva, converting the food structure from a cohesive macro state to scattered smaller chains of glucose. Other more complex sugars like lactose and sucrose are broken down into glucose and a secondary sugar (galactose, fructose, etc.).

Absorption and complete degradation begin in earnest through hydrolysis by salivary and pancreatic amylase in the upper small intestine, with little hydrolysis occurring in the stomach.3 There is little contact or membrane digestion through absorption on brush border membranes.4 Polysaccharides break down into oligosaccharides that are then broken down into monosaccharides by surface enzymes on the brush borders of enterocytes.5 Microvilli on the enterocytes then direct the newly formed monosaccharides to the appropriate transport site.5 Disaccharidases in the brush border ensure that only monosaccharides are transported, not lingering disaccharides. This process differs from protein digestion, which largely involves degradation in gastric juices comprised of hydrochloric acid and pepsin and later transfer to the duodenum.

Within the small intestine free fatty acid concentration increases significantly as oils and fats are hydrolyzed at a faster rate than in the stomach due to the increased presence of bile salts and pancreatic lipase.3 It is thought that the droplet size of emulsified lipids influences digestion and absorption, where smaller sizes allow for gastric lipase digestion during duodenal lipolysis.6,7 The smaller the droplet size the finer the emulsion in the duodenum, leading to a higher degree of lipolysis.8 Not surprisingly, gastric lipase activity is also greater in thoroughly mixed emulsions versus coarse ones.

Typically hydrophobic interactions are responsible for the self-assembly of amphiphiles: water gains entropy as the hydrophobic tails of the amphiphilic molecules are buried in the cores of micelles.9 However, in emulsions the presence of oils produces a low-polarity environment that can facilitate reverse self-assembly,10,11 with a driving force born from the attraction of hydrogen bonding. For example lecithin is a zwitterionic phospholipid with two hydrocarbon tails that forms reverse spherical or ellipsoidal micelles when exposed to oil.12 Basically, emulsions could have the potential to significantly increase the free hydrogen concentration in the digestive tract.

This potential increase in free hydrogen could be an important clue to why emulsions produce negative health outcomes in model organisms.1 One of the significant interactions governing the concentrations and types of intestinal bacteria is the rate of interspecies hydrogen transfer from hydrogen-producing bacteria to hydrogen-consuming methanogens. Note that non-obese individuals have small methanogen-based intestinal populations whereas obese individuals have larger populations, and it is thought that the methanogen population expands before one gains significant weight.13,14 The importance of this relationship is best demonstrated by understanding the biochemical process involved in the formation of fatty acids in the body.

Methanogens like Methanobrevibacter smithii enhance fermentation efficiency by removing excess free hydrogen and formate in the colon. A reduced concentration of hydrogen leads to an increased rate of conversion of insoluble fibers into short-chain fatty acids (SCFAs).13 Propionate, acetate, butyrate and formate are the most common SCFAs formed and absorbed across the intestinal epithelium, providing a significant portion of the energy for intestinal epithelial cells and promoting the survival, differentiation and proliferation that maintain an effective intestinal lining.13,15,16 Butyric acid is also utilized by the colonocytes.17 Formate can be directly used by hydrogenotrophic methanogens, and propionate and lactate can be fermented to acetate and H2.13

Overall the population of archaea in the gut, largely Methanobrevibacter smithii, is tied to obesity with the key factor being the availability of free hydrogen. If there is a lot of free hydrogen then there is a higher probability of a large archaeal population; otherwise the population remains very low because there is a limited ‘food source’. Therefore, the consumption of food products with emulsions or emulsion-like characteristics or components could increase available free hydrogen concentrations, which would change the intestinal bacteria composition in a negative manner and increase the probability that an individual becomes obese. This hypothesis coincides with existing evidence from model organisms that emulsion consumption has potentially negative intestinal bacteria outcomes. One possible mechanism behind this negative influence is how the change in bacteria concentration influences the available concentration of SCFAs, which could change the stability of the intestinal lining.

In addition to influencing hydrogen concentrations in the gut, emulsions also appear to have a significant influence on cholecystokinin (CCK) concentrations. CCK plays a meaningful role in both digestion and satiety, two components of food consumption that significantly influence both body weight and intestinal bacteria composition. Most of these concentration changes occur in the small intestine, most notably in the duodenum and jejunum.18 The largest influencing element for CCK release is the amount of fatty acid present in the chyme.18 CCK is responsible for inhibiting gastric emptying, decreasing gastric acid secretion and increasing production of specific digestive agents like hepatic bile and other bile salts, which form amphipathic lipids that emulsify fats.

When compared against non-emulsions, emulsion consumption appears to reduce the feedback effect that suppresses hunger after food intake. This effect is principally the result of changes in CCK concentrations rather than other signaling molecules like GLP-1.19 Emulsion digestion begins when lipases bind to the surface of emulsion droplets, and the effectiveness of lipase binding increases with decreasing droplet size. Small emulsion droplets tend to have more complex microstructures, which produce more surface area and allow for more effective digestion.
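The droplet-size effect follows from simple geometry: for a fixed total fat volume, the combined surface area of spherical droplets scales inversely with droplet radius, so finer emulsions expose more area for lipases to bind. A short sketch of that arithmetic:

```python
import math

# For a fixed total fat volume split into identical spherical droplets,
# total surface area = 3 * volume / radius, so area scales as 1/r.

def total_surface_area_m2(total_volume_m3, droplet_radius_m):
    """Total surface area of N identical spherical droplets of a given radius."""
    droplet_volume = (4 / 3) * math.pi * droplet_radius_m ** 3
    n_droplets = total_volume_m3 / droplet_volume
    return n_droplets * 4 * math.pi * droplet_radius_m ** 2

fat_volume = 1e-6  # 1 mL of fat, expressed in cubic meters

coarse = total_surface_area_m2(fat_volume, 50e-6)  # 50 um radius droplets
fine = total_surface_area_m2(fat_volume, 5e-6)     # 5 um radius droplets

print(f"Coarse emulsion: {coarse:.3f} m^2 per mL of fat")
print(f"Fine emulsion:   {fine:.3f} m^2 per mL of fat")
print(f"Ratio: {fine / coarse:.0f}x more lipase-binding area")
```

Shrinking droplet radius tenfold yields ten times the binding surface, which is why finer emulsions are digested faster and trigger more CCK release.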

This higher rate of breakdown produces a more rapid release of fatty acids, and the presence of free fatty acids in the small intestinal lumen is critical for gastric emptying and CCK release.20 This accelerated breakdown creates a relationship between CCK concentration and emulsion droplet size where the larger the droplet size the lower the released CCK concentration.21 One of the main reasons why larger droplets produce less hunger satisfaction is that with the reduced CCK concentration and slower emulsion breakdown there is less feedback slowing intestinal transit. Basically, food travels through the intestine at a faster rate because there are fewer digestion cues (feedback) to slow transit for the purpose of digestion.

As alluded to above, the type of emulsifier used to produce the emulsion appears to be the most important element in how an emulsion influences digestion. For example, the lipid and fatty acid concentrations produced from digestion of a yolk lecithin emulsion were up to 50% smaller than from one using polysorbate 20 (i.e. Tween 20) or caseinate.7 Basically, if certain emulsifiers are used, the rate of emulsion digestion can be reduced, potentially increasing the concentration of bile salts in the small intestine, which could produce a higher probability of negative intestinal events.

Furthermore, studies using low-molecular-mass emulsifiers (two non-ionic, two anionic and one cationic) demonstrated three tiers of triglyceride (TG) lipolysis governed by the emulsifier-to-bile salt ratio.3 At low emulsifier-bile ratios (< 0.2 mM) there was no change in the solubilization capacity of micelles, whereas at ratios between 0.2 mM and 2 mM solubilization capacity significantly increased, which limited interactions between the oil and destabilization reaction products, reducing oil degradation.3 At higher ratios (> 2 mM) emulsifier molecules remain in the adsorption layer, heavily limiting lipase activity, which significantly reduces digestion and oil degradation.3
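The three tiers above can be summarized as a simple classifier. The thresholds come from the cited study (reference 3), while the function itself and its regime labels are paraphrased here for illustration.

```python
# Classify the emulsifier-to-bile-salt ratio into the three lipolysis
# tiers described in the text. Thresholds follow the cited study's
# reported values; labels are paraphrased summaries, not quotations.

def lipolysis_regime(emulsifier_to_bile_ratio):
    """Map an emulsifier-to-bile-salt ratio onto the three reported tiers."""
    r = emulsifier_to_bile_ratio
    if r < 0.2:
        return "low: micelle solubilization capacity unchanged"
    elif r <= 2.0:
        return "mid: solubilization capacity increased, oil degradation reduced"
    else:
        return "high: adsorption layer blocks lipase, digestion heavily limited"

for ratio in (0.1, 1.0, 5.0):
    print(f"ratio {ratio}: {lipolysis_regime(ratio)}")
```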

Another possible influencing factor could be changes in glucagon concentration. There is evidence suggesting that increasing glucagon concentration in already-fed rats can produce hypersecretory activity in both the jejunum and ileum.22-24 It stands to reason that, given the activation potential of glucagon-like peptide-1 (GLP-1) in concert with CCK, glucagon plays some role. However, there are no specifics regarding how glucagon directly interacts with intestinal bacteria or with the changes in digestion rate associated with emulsions.

The mechanism by which emulsions and their associated emulsifiers produce negative health outcomes in mice is unknown, but it stands to reason that both the way emulsions change the rate of digestion and the resulting hydrogen concentration play significant roles. These two factors have sufficient influence on the composition and concentration of intestinal bacteria, which in turn influence a large number of digestive properties including nutrient extraction and SCFA concentration management. SCFA management may be the most pertinent issue regarding the metabolic syndrome outcomes seen in mice exposed to emulsifiers.

It appears that creating emulsions with smaller droplet sizes could mitigate negative outcomes, which can be achieved by using lecithin over other types of emulsifiers. Overall, while emulsifiers may be a necessary element of modern life to ensure food quality, instructing companies on the proper emulsifier to use at the appropriate ratios should have a positive effect on managing any detrimental interaction between emulsions and gut bacteria.

Citations –

1. Chassaing, B, et al. “Dietary emulsifiers impact the mouse gut microbiota promoting colitis and metabolic syndrome.” Nature. 2015. 519(7541):92-96.

2. Choy, A, et al. “The effects of microbial transglutaminase, sodium stearoyl lactylate and water on the quality of instant fried noodles.” Food Chemistry. 2010. 122:957-964.

3. Vinarov, Z, et al. “Effects of emulsifiers charge and concentration on pancreatic lipolysis: 2. interplay of emulsifiers and biles.” Langmuir. 2012. 28:12140-12150.

4. Ugolev, A, and Delaey, P. “Membrane digestion – a concept of enzymic hydrolysis on cell membranes.” Biochim Biophys Acta. 1973. 300:105-128.

5. Levin, R. “Digestion and absorption of carbohydrates from molecules and membranes to humans.” Am. J. Clin. Nutr. 1994. 59:690S-85.

6. Mu, H, and Hoy, C. “The digestion of dietary triacylglycerols.” Progress in Lipid Research. 2004. 43:105-133.

7. Hur, S, et al. “Effect of emulsifiers on microstructural changes and digestion of lipids in instant noodle during in vitro human digestion.” LWT – Food Science and Technology. 2015. 60:630-636.

8. Armand, M, et al. “Digestion and absorption of 2 fat emulsions with different droplet sizes in the human digestive tract.” American Journal of Clinical Nutrition. 1999. 70:1096-1106.

9. Njauw, C-W, et al. “Molecular interactions between lecithin and bile salts/acids in oils and their effects on reverse micellization.” Langmuir. 2013. 29:3879-3888.

10. Israelachvili, J. “Intermolecular and Surface Forces.” 3rd ed. Academic Press: San Diego. 2011.

11. Evans, D, and Wennerstrom, H. “The Colloidal Domain: Where Physics, Chemistry, Biology, and Technology Meet.” Wiley-VCH: New York. 2001.

12. Tung, S, et al. “A new reverse wormlike micellar system: mixtures of bile salt and lecithin in organic liquids.” J. Am. Chem. Soc. 2006. 128:5751-5756.

13. Zhang, H, et al. “Human gut microbiota in obesity and after gastric bypass.” PNAS. 2009. 106(7):2365-2370.

14. Turnbaugh, P, et al. “An obesity-associated gut microbiome with increased capacity for energy harvest.” Nature. 2006. 444(7122):1027-1031.

15. Son, G, Kremer, M, and Hines, I. “Contribution of gut bacteria to liver pathobiology.” Gastroenterology Research and Practice. 2010. doi:10.1155/2010/453563.

16. Luciano, L, et al. “Withdrawal of butyrate from the colonic mucosa triggers ‘mass apoptosis’ primarily in the G0/G1 phase of the cell cycle.” Cell and Tissue Research. 1996. 286(1):81-92.

17. Cummings, J, and Macfarlane, G. “The control and consequences of bacterial fermentation in the human colon.” Journal of Applied Bacteriology. 1991. 70:443-459.

18. Rasoamanana, R, et al. “Dietary fibers solubilized in water or an oil emulsion induce satiation through CCK-mediated vagal signaling in mice.” J. Nutr. 2012. 142:2033-2039.

19. Adam, T, and Westerterp-Plantenga, M. “Glucagon-like peptide-1 release and satiety after a nutrient challenge in normal-weight and obese subjects.” Br J Nutr. 2005. 93:845-851.

20. Little, T, et al. “Free fatty acids have more potent effects on gastric emptying, gut hormones, and appetite than triacylglycerides.” Gastroenterology. 2007. 133:1124-1131.

21. Seimon, R, et al. “The droplet size of intraduodenal fat emulsions influences antropyloroduodenal motility, hormone release, and appetite in healthy males.” Am. J. Clin. Nutr. 2009. 89:1729-1736.

22. Young, A, and Levin, R. “Diarrhoea of famine and malnutrition: investigations using a rat model. 1. Jejunal hypersecretion induced by starvation.” Gut. 1990. 31:43-53.

23. Young, A, and Levin, R. “Diarrhoea of famine and malnutrition: investigations using a rat model. 2. Ileal hypersecretion induced by starvation.” Gut. 1990. 31:162-169.

24. Lane, A, and Levin, R. “Enhanced electrogenic secretion in vitro by small intestine from glucagon treated rats: implications for the diarrhoea of starvation.” Exp. Physiol. 1992. 77:645-648.

Tuesday, April 21, 2015

Augmenting rainfall probability to ward off long-term drought?

Despite the ridiculous pseudo-controversy surrounding global warming in public discourse, the reality is that global warming is real and has already started significantly influencing the global climate. One of the most important factors in judging the range and impact of global warming, as well as how society should respond, is also one of the more perplexing: cloud formation. Not only do clouds influence the cycle of heat escape and retention, they also drive precipitation probability. Precipitation plays an important role in maintaining effective hydrological cycles as well as heat budgets, and it will experience significant changes in reaction to future warming. These changes will largely produce more extreme outcomes, with some areas receiving significant increases that produce flash flooding while other areas are deprived of rainfall, producing longer-term droughts similar to those now seen in California.

At its core precipitation is influenced by numerous factors like solar heating and terrestrial radiation.1,2 Of these factors various aerosol particles are thought to hold an important influence. Both organic and inorganic aerosols are plentiful in the atmosphere, helping to cool the surface of the Earth by scattering sunlight or serving as nuclei for the formation of water droplets and ice crystals.3 Not surprisingly, information regarding the ways in which the properties of these aerosols influence cloud formation and precipitation is still limited, which creates significant uncertainties in climate modeling and planning. Therefore, increasing knowledge of how aerosols influence precipitation will provide valuable information for managing the various changes that will occur and even possibly mitigating those changes.

The formation of precipitation within clouds is heavily influenced by ice nucleation, the induction of crystallization in supercooled water (supercooled = a meta-stable state where water remains liquid below typical freezing temperatures). Ice nucleation typically occurs through one of two pathways: homogeneous or heterogeneous. Homogeneous nucleation entails spontaneous nucleation within a properly cooled solution (usually a supersaturated solution at a relative humidity of 150-180% and a temperature of around –38 degrees C) requiring only liquid water or aqueous solution droplets.4-6 Due to its relative simplicity homogeneous nucleation is better understood than heterogeneous nucleation. However, because of the temperature requirements homogeneous nucleation typically only takes place in the upper troposphere, and with a warming atmosphere it should be expected that its probability of occurrence will decrease.
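The threshold conditions quoted above can be collected into a toy classifier. This only encodes the text's stated thresholds; real nucleation parameterizations are far more involved, and the function name and structure are illustrative assumptions.

```python
# Toy classifier for the two ice nucleation pathways, using only the
# threshold conditions stated in the text: ~-38 C and 150-180% relative
# humidity for homogeneous nucleation; just below 0 C at ~100% RH with
# an ice nucleus (IN) present for heterogeneous nucleation.

def nucleation_pathway(temp_c, relative_humidity_pct, ice_nucleus_present):
    """Return which nucleation pathway, if any, the stated thresholds permit."""
    if temp_c <= -38.0 and relative_humidity_pct >= 150.0:
        return "homogeneous"
    if temp_c < 0.0 and relative_humidity_pct >= 100.0 and ice_nucleus_present:
        return "heterogeneous"
    return "none"

print(nucleation_pathway(-40.0, 160.0, False))  # upper-troposphere conditions
print(nucleation_pathway(-5.0, 100.0, True))    # warm cloud with an IN
print(nucleation_pathway(-5.0, 100.0, False))   # warm cloud, no IN
```

The third case is the key one for the argument that follows: without a catalyst particle, modest supercooling alone does not produce ice.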

Heterogeneous nucleation is more complicated because of the multiple pathways that can be taken, i.e. depositional freezing, condensation, contact, and immersion freezing.7,8 These different pathways allow for more flexibility in nucleation, with generic initiation conditions beginning just south of 0 degrees C at a relative humidity of 100%. Nucleation can proceed at these higher temperatures because of the presence of a catalyst: a non-water substance commonly referred to as an ice-forming nucleus (IN). Heterogeneous nucleation can also involve diffusive growth in a mixed-phase cloud that consumes liquid droplets at a faster rate than super-cooled droplets or snow/graupel aggregation (the Wegener–Bergeron–Findeisen process).9

Laboratory experiments have demonstrated support for many different materials acting as IN: different metallic particles, biological materials, certain glasses, mineral dust, anhydrous salts, etc.8,10,11 These laboratory experiments involve wind tunnels, electrodynamic levitation, scanning calorimetry, cloud chambers, and optical microscopy.12,13 However, not surprisingly there appears a significant difference between nucleation ability in the lab and in nature.8,10

Also, while homogeneous ice nucleation is exactly that, homogeneous, heterogeneous nucleation does not have the same uniformity.8 Temperature variations within a cloud can produce differing methods of heterogeneous nucleation versus homogeneous nucleation, producing significant differences in efficiency. Some forms of nucleation in cloud formations are more difficult to understand, like high-concentration formation in warm precipitating cumulus clouds; i.e. particle concentrations increasing from 0.01 L-1 to 100 L-1 in a few minutes at temperatures exceeding –10 degrees C, outpacing existing ice nucleus measurements.14 One explanation for this phenomenon is the Hallett-Mossop (H-M) process, which is thought to achieve this rapid freezing through interaction with a narrow band of supercooled raindrops producing rimers.15

The H-M process requires cloud temperatures between approximately –1 and –10 degrees C with the availability of large rain droplets (diameters > 24 um) at roughly a 0.1 ratio relative to smaller ones (diameters < 13 um).16,17 When the riming process begins, ice splinters are ejected and grow through water vapor deposition, producing a positive feedback effect that increases riming and produces more ice splinters. Basically a feedback loop develops between ice splinter formation and small drop freezing. Unfortunately there are some questions about whether this mechanism can properly explain the characteristics of secondary ice particles and the formation of ice crystal bursts under certain time constraints.18 However, these concerns may not be accurate due to improper assumptions regarding how water droplets form relative to existing water concentrations.15
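The H-M window described above reduces to two checks: a temperature band and a sufficient fraction of large droplets. The sketch below encodes exactly those thresholds from the text; the function itself and its parameter names are illustrative assumptions, not an established parameterization.

```python
# Condition check for the Hallett-Mossop rime-splintering window as
# described in the text: cloud temperature roughly -1 to -10 C, with
# large droplets (> 24 um diameter) present at about a 0.1 ratio
# relative to small (< 13 um) droplets.

def hallett_mossop_active(temp_c, large_to_small_droplet_ratio):
    """Return True if the stated H-M secondary ice production window applies."""
    in_temp_window = -10.0 <= temp_c <= -1.0
    enough_large_drops = large_to_small_droplet_ratio >= 0.1
    return in_temp_window and enough_large_drops

print(hallett_mossop_active(-5.0, 0.12))   # inside the window
print(hallett_mossop_active(-15.0, 0.12))  # too cold for rime splintering
print(hallett_mossop_active(-5.0, 0.02))   # too few large droplets
```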

One of the more important elements of rain formation in warm precipitating cumulus clouds, as in other cloud formations, appears to involve the location of ice particle concentrations at the top of the cloud formation where there is a higher probability of large droplet formation (500 – 2000 um diameters).15 In this regard cloud depth/area is a more important influencing element than cloud temperature.19 In addition, the continued formation of ice crystals proceeding from the top downwards can produce raindrop freezing that catalyzes ice formation, creating positive feedback and ice bursts.20

This process suggests that there is sufficient replenishment of small droplets at the cloud top, increasing the probability of sufficient riming. It is thought that the time variation governing the rate of ice multiplication, and how cloud temperature changes accordingly, is determined by dry adiabatic cooling at the cloud top, condensational warming, and evaporational cooling at the cloud bottom.15 Bacteria also appear to play a meaningful role in both nucleating primary ice crystals and scavenging secondary crystals.7 Even if bacteria concentrations are low (< 0.05 L-1) the catalytic effect of nucleating bacteria produces a much more “H-M”-friendly environment.

The most prominent inorganic aerosol acting as an IN is desert dust pushed into the upper atmosphere by storms.21,22 The principal origin of this dust is the Sahara Desert, which lofts dust year round, versus other origin points like the Gobi or Siberia. While the ability of this dust to produce rain is powerful, it can also have a counteracting effect as a cloud condensation nucleus (CCN). In most situations when CCN concentration increases, raindrop conversion becomes less efficient, especially for low-level clouds (in part due to higher temperatures), largely by reducing riming efficiency.

The probability of dust acting as a CCN is influenced by the presence of anthropogenic pollution, which typically acts as a CCN on its own.23,24 In some situations the presence of pollution could also increase the overall rate of rainfall, as it can suppress premature rainfall, allowing more rain droplets to crystallize, increasing riming and potential rainfall. However, this aspect of pollution is only valid in the presence of dust or other INs, for if there is a dearth of IN, localized pollution will decrease precipitation.25 Soot can also influence nucleation and resultant rainfall, but only under certain circumstances; for example, if the surface of the soot contains molecules able to form hydrogen bonds with liquid water (typically via hydroxyl and carbonyl groups), nucleation is enhanced.26 Overall it seems appropriate to label dust as a strong IN and anthropogenic pollution as a significant CCN.

In mineral collection studies and global simulations of aerosol particle concentrations, both deposition and immersion heterogeneous nucleation appear dominated by dust acting as IN, especially in cirrus clouds.10,27,28 Aerosols also modify certain cloud properties like droplet size and water phase. Most other inorganic atmospheric aerosols behave like CCN, which assist the condensation of water vapor into cloud droplets at a certain level of super-saturation.25 Typically this condensation produces a large number of small droplets, which can reduce the probability of warm rain (rain formed above the freezing point).29,30

Recall that altitude is important in precipitation, so it is not surprising that one of the key factors in how aerosols influence precipitation type and probability appears to be the elevation and temperature at which they interact. For example, in mixed-phase clouds the cloud-top area increases with CCN concentration, with a smaller change at lower altitudes and no change in pure liquid clouds.15,31 Also, CCN only significantly influence temperatures when both top and base cloud temperatures are below freezing.31 In short, CCN influence is reduced relative to IN influence at higher altitudes and lower temperatures.

Also, cloud drop concentration and size distribution at the base and top of a cloud determine the efficiency of the CCN and are dictated by the chemical structure and size of the aerosol. For example, larger aerosols have a higher probability of becoming CCN rather than IN due to their coarse structure. Finally, and not surprisingly, overall precipitation frequency increases with high water content and decreases with low water content when exposed to CCNs.31 This behavior creates a positive feedback structure as aerosol concentration increases: in arid regions the probability of drought increases and in wet regions the probability of flooding increases.

While dust from natural sources as well as general pollution are the two most common aerosols, an interesting secondary source may be soil dust produced from land use due to deforestation or large-scale construction projects.32-34 These actions create anthropogenic dust emissions that can catalyze a feedback loop that can produce greater precipitation extremes; thus in certain developing economic regions that may be struggling with droughts continued construction in effort to improve the economy could exacerbate droughts. Therefore, developing regions may need to produce specific methodologies to govern their development to ensure proper levels of rainfall for the future.

While the role of dust has not been fully identified on a mechanistic level, its importance is not debatable. The role of biological particles, like bacteria, is more controversial and could be critical to identifying a method of enhancing rainfall probability. It is important to identify the capacity of bacteria to catalyze rainfall, for laboratory studies have demonstrated that inorganic INs only have significant activity below –15 degrees C.10,35 For example, in samples of snowfall collected globally that originated at temperatures of –7 degrees C or warmer, a vast majority of the active IN, up to 85%, were lysozyme-sensitive (i.e. probably bacteria).36,37 Also, rain tends to have higher proportions of active IN bacteria than air in the same region.38 With further global warming on the horizon, air temperatures will continue to increase, narrowing the window for inorganic IN activity and thus lowering the probability of rainfall in general (not considering any other changes born from global warming).

Laboratory and field studies have demonstrated approximately twelve species of bacteria with significant IN ability, spread across three orders of the gammaproteobacteria, with the two most notable/frequent agents being Pseudomonas syringae and P. fluorescens and, to a lesser extent, Xanthomonas.39,40 In the presence of an IN bacterium, nucleation can occur at temperatures as warm as –1.5 to –2 degrees C.41,42 These bacteria appear to act as IN due to a single gene that codes for a specific membrane protein that catalyzes crystal formation by acting as a template for water molecule arrangement.43 These bacteria originate mostly from surface vegetation.

Supporting the idea that this membrane protein provides the key scaffolding, an acidic pH environment can significantly reduce the effectiveness of bacteria-based nucleation.45,46 These protein complexes are larger in warmer-temperature nucleating bacteria, and thus more prone to breakdown in more acidic environments.44,46 Therefore, low-lying areas with significant acidic pollution, like sulfur compounds, could see a reduction in precipitation probability over time. It also seems that this protein complex, rather than the biological processes of the bacteria itself, could be the critical element in bacteria-based nucleation, as nucleation was augmented even when the bacteria were no longer viable.46

Despite laboratory and theoretical evidence supporting the role of bacteria in precipitation, as stated above, what occurs in the laboratory serves little purpose if it does not translate to nature. This translation is where controversy arises. It can be difficult to separate the various particles within clouds from residue collection due to widespread internal mixing, but empirical evidence demonstrates the presence of biological material in orographic clouds.47 Ice nucleation bacteria are also present over all continents, as well as in various specific locations like the Amazon basin.37,48,49

Some estimates suggest that 10^24 bacteria enter the atmosphere each year and circulate for between 2 and 10 days, theoretically allowing bacteria to travel thousands of miles.50,51 However, there is a lack of evidence for bacteria in the upper troposphere, and their concentrations are dramatically lower than those of inorganic materials like dust and soot.28,35,52 Given these low concentrations, questions remain about how efficiently these bacteria are aerosolized and how they behave over their atmospheric lifetimes. One study suggests that IN active bacteria are precipitated much more efficiently than non-IN-active bacteria, which may explain the disparity between observations in air, clouds, and precipitation.53
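The "thousands of miles" figure can be sanity-checked with a back-of-envelope calculation from the cited residence times. The sketch below assumes a mean horizontal wind speed of 10 m/s, which is an illustrative value of my own choosing, not a number from the cited studies.

```python
# Rough estimate of how far an airborne bacterium could travel.
# Residence times (2-10 days) come from the text; the mean wind
# speed is an assumed, illustrative mid-tropospheric value.
SECONDS_PER_DAY = 86_400
MEAN_WIND_SPEED_MS = 10.0   # assumption, not from the cited studies
METERS_PER_MILE = 1_609.34

def travel_miles(residence_days: float,
                 wind_speed_ms: float = MEAN_WIND_SPEED_MS) -> float:
    """Distance (miles) covered by a particle riding the mean wind."""
    meters = residence_days * SECONDS_PER_DAY * wind_speed_ms
    return meters / METERS_PER_MILE

print(travel_miles(2))   # ~1,070 miles after 2 days
print(travel_miles(10))  # ~5,370 miles after 10 days
```

Even at this modest assumed wind speed, a 2 to 10 day residence time yields roughly one to five thousand miles of travel, consistent with the claim of intercontinental transport.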

Another possible explanation for this disparity is that most biological particles are generated at the surface and carried into the atmosphere by updrafts and currents. While the methods of transport are similar to those of inorganic particles, biological particles are more readily removed by dry or wet deposition owing to their typically greater size. Therefore, in nature, bacteria reside in orographic clouds because they are able to participate in their formation, but they do not reach higher cloud formations, so most upper-tropospheric precipitation is born from dust, not bacteria.

Some researchers feel that current drop freezing assays, which are used to identify the types of bacteria and other agents in a collected sample, can be improved to better discriminate between the various classes of IN active bacteria that may be present. One possible idea is to store the sample at low temperatures and observe the growth and type of IN bacteria that occur in community versus individual samples.54 New identification techniques might increase the ability to discern the role of bacteria in cloud formation and precipitation.
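For readers unfamiliar with drop freezing assays, the raw output (how many drops have frozen at each temperature) is conventionally converted into a cumulative ice nucleus concentration via the standard frozen-fraction relation K(T) = -ln(N_unfrozen/N_total)/V_drop. The sketch below implements that relation; the assay counts are invented illustrative values, not data from the cited studies.

```python
import math

def cumulative_in_concentration(n_total: int, n_unfrozen: int,
                                drop_volume_ml: float) -> float:
    """Cumulative IN concentration (nuclei per mL) at a given temperature,
    using the standard relation K(T) = -ln(N_unfrozen / N_total) / V_drop."""
    if n_unfrozen == 0:
        raise ValueError("all drops frozen; concentration is off-scale high")
    return -math.log(n_unfrozen / n_total) / drop_volume_ml

# Invented illustrative assay: 100 drops of 0.01 mL each, cooled stepwise.
# (temperature in degrees C, number of drops still unfrozen)
assay = [(-3, 98), (-5, 90), (-7, 60), (-9, 25)]
for temp_c, unfrozen in assay:
    k = cumulative_in_concentration(100, unfrozen, 0.01)
    print(f"{temp_c:>4} C: {k:8.1f} IN per mL")
```

Warmer-temperature freezing events (here, the drops freezing by –3 degrees C) are the signature of the most active nucleators, such as the bacterial IN discussed above; the proposed assay refinements aim to resolve which organisms account for those warm events.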

Among the other atmospheric agents with a potential influence on precipitation, potassium appears to have a meaningful role. Some biogenic emissions of potassium, especially around the Amazon, can act as catalysts for the beginning of organic material condensation.55 However, this role seems to ebb, as the potassium mass fraction drops when the condensation rate increases.55 This secondary role of potassium, along with the role of bacteria, may signal an important reason why past cloud seeding experiments have not achieved their hypothesized expectations.

The lack of natural bacterial input into higher cloud formations leads to an interesting question: what would happen if IN active bacteria like P. syringae were released via plane or another high-altitude delivery method, producing a higher concentration of bacteria in these higher-altitude cloud formations? While typical cloud formation involves vapor saturation due to air cooling and/or increased vapor concentration, an increased concentration of IN active bacteria could also speed cloud formation and raise precipitation probability.

Interestingly, in past cloud seeding experiments orographic clouds appear to be more sensitive to purposeful seeding than other cloud formations, largely because of the shorter residence times of cloud droplets.56,57 One of the positive elements of seeding appears to be that increased precipitation in the target area does not reduce the level of precipitation in surrounding areas, including those beyond the target area. In fact, there appears to be a net increase (5-15%) across all areas regardless of the location of seeding.58 The previous presumption of a loss appears to be based on randomized and not properly controlled seeding experiments.58

The idea of introducing increased concentrations of IN active bacteria is an interesting one if it can increase the probability of precipitation. Of course, the possible negatives of such an introduction must be considered. The chief concern with increasing a bacterium like P. syringae would be the possibility of more infection of certain types of plants. The frost mechanism of P. syringae is a minor concern because most of the seeding would be carried out between late spring and early fall, when night-time temperatures should not be cold enough to induce freezing. Sabotaging the type III secretion system in P. syringae via some form of genetic manipulation should reduce, if not eliminate, its plant invasion potential. Obviously, controlled laboratory tests should be conducted to ensure a high probability of invasion neutralization before any controlled and limited field tests are conducted. If the use of living bacteria proves too costly, simply using the key membrane protein is another possible avenue of study.

Overall, the simple fact is that due to global warming, global precipitation patterns will change dramatically. The forerunner of these changes can already be seen in the state of California, with no reasonable expectation of significant new rainfall in sight. While other potable water options are available, like desalination, the infrastructure required to move water from these new sources to usage points will be costly, and these processes have significant detrimental byproducts. If precipitation probabilities can be safely increased through new cloud seeding strategies, like the inclusion of IN active bacteria, it could go a long way toward combating some of the negative effects of global warming while the causes of global warming itself are mitigated.

Citations –

1. Zuberi, B, et al. "Heterogeneous nucleation of ice in (NH4)2SO4-H2O particles with mineral dust immersions." Geophys. Res. Lett. 2002. 29(10):1504.

2. Hung, H, Malinowski, A, and Martin, S. "Kinetics of heterogeneous ice nucleation on the surfaces of mineral dust cores inserted into aqueous ammonium sulfate particles." J. Phys. Chem. 2003. 107(9):1296-1306.

3. Lohmann, U. "Aerosol effects on clouds and climate." Space Sci. Rev. 2006. 125:129-137.

4. Hartmann, S, et al. "Homogeneous and heterogeneous ice nucleation at LACIS: operating principle and theoretical studies." Atmos. Chem. Phys. 2011. 11:1753-1767.

5. Cantrell, W, and Heymsfield, A. "Production of ice in tropospheric clouds. A review." American Meteorological Society. 2005. 86(6):795-807.

6. Riechers, B, et al. "The homogeneous ice nucleation rate of water droplets produced in a microfluidic device and the role of temperature uncertainty." Physical Chemistry Chemical Physics. 2013. 15(16):5873-5887.

7. Cziczo, D, et al. "Clarifying the dominant sources and mechanisms of cirrus cloud formation." Science. 2013. 340(6138):1320-1324.

8. Pruppacher, H, and Klett, J. "Microphysics of clouds and precipitation." (Kluwer Academic, Dordrecht. Ed. 2, 1997). pp. 309-354.

9. Lance, S, et al. "Cloud condensation nuclei as a modulator of ice processes in Arctic mixed-phase clouds." Atmos. Chem. Phys. 2011. 11:8003-8015.

10. Hoose, C, and Mohler, O. "Heterogeneous ice nucleation on atmospheric aerosols: a review of results from laboratory experiments." Atmos. Chem. Phys. 2012. 12:9817-9854.

11. Abbatt, J, et al. "Solid ammonium sulfate aerosols as ice nuclei: A pathway for cirrus cloud formation." Science. 2006. 313:1770-1773.

12. Murray, B, et al. "Kinetics of the homogeneous freezing of water." Phys. Chem. Chem. Phys. 2010. 12:10380-10387.

13. Chang, H, et al. "Phase transitions in emulsified HNO3/H2O and HNO3/H2SO4/H2O solutions." J. Phys. Chem. 1999. 103:2673-2679.

14. Hobbs, P, and Rangno, A. "Rapid development of ice particle concentrations in small, polar maritime cumuliform clouds." J. Atmos. Sci. 1990. 47:2710-2722.

15. Sun, J, et al. "Mystery of ice multiplication in warm-based precipitating shallow cumulus clouds." Geophysical Research Letters. 2010. 37:L10802.

16. Hallett, J, and Mossop, S. "Production of secondary ice particles during the riming process." Nature. 1974. 249:26-28.

17. Mossop, S. "Secondary ice particle production during rime growth: The effect of drop size distribution and rimer velocity." Q. J. R. Meteorol. Soc. 1985. 111:1113-3324.

18. Mason, B. "The rapid glaciation of slightly supercooled cumulus clouds." Q. J. R. Meteorol. Soc. 1996. 122:357-365.

19. Rangno, A, and Hobbs, P. "Microstructures and precipitation development in cumulus and small cumulonimbus clouds over the warm pool of the tropical Pacific Ocean." Q. J. R. Meteorol. Soc. 2005. 131:639-673.

20. Phillips, V, et al. "The glaciation of a cumulus cloud over New Mexico." Q. J. R. Meteorol. Soc. 2001. 127:1513-1534.

21. Karydis, V, et al. "On the effect of dust particles on global cloud condensation nuclei and cloud droplet number." J. Geophys. Res. 2011. 116:D23204.

22. Connolly, P, et al. "Studies of heterogeneous freezing by three different desert dust samples." Atmos. Chem. Phys. 2009. 9:2805-2824.

23. Lynn, B, et al. "Effects of aerosols on precipitation from orographic clouds." J. Geophys. Res. 2007. 112:D10225.

24. Jirak, I, and Cotton, W. "Effect of air pollution on precipitation along the Front Range of the Rocky Mountains." J. Appl. Meteor. Climatol. 2006. 45:236-245.

25. Fan, J, et al. "Aerosol impacts on California winter clouds and precipitation during CalWater 2011: local pollution versus long-range transported dust." Atmos. Chem. Phys. 2014. 14:81-101.

26. Gorbunov, B, et al. "Ice nucleation on soot particles." J. Aerosol Sci. 2001. 32(2):199-215.

27. Kirkevag, A, et al. "Aerosol-climate interactions in the Norwegian Earth System Model – NorESM." Geosci. Model Dev. 2013. 6:207-244.

28. Hoose, C, Kristjansson, J, and Burrows, S. "How important is biological ice nucleation in clouds on a global scale?" Environ. Res. Lett. 2010. 5:024009.

29. Lohmann, U. "A glaciation indirect aerosol effect caused by soot aerosols." Geophys. Res. Lett. 2002. 29:11.1-4.

30. Koop, T, et al. "Water activity as the determinant for homogeneous ice nucleation in aqueous solutions." Nature. 2000. 406:611-614.

31. Li, Z, et al. "Long-term impacts of aerosols on the vertical development of clouds and precipitation." Nature Geoscience. 2011. DOI: 10.1038/NGEO1313.

32. Zender, C, Miller, R, and Tegen, I. "Quantifying mineral dust mass budgets: Terminology, constraints, and current estimates." Eos. Trans. Am. Geophys. Union. 2004. 85:509-512.

33. Forster, P, et al. "Changes in atmospheric constituents and in radiative forcing." In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change.

34. O'Sullivan, D, et al. "Ice nucleation by fertile soil dusts: relative importance of mineral and biogenic components." Atmos. Chem. Phys. 2014. 14:1853-1867.

35. Murray, B, et al. "Ice nucleation by particles immersed in supercooled cloud droplets." Chem. Soc. Rev. 2012. 41:6519-6554.

36. Christner, B, et al. "Geographic, seasonal, and precipitation chemistry influence on the abundance and activity of biological ice nucleators in rain and snow." PNAS. 2008. 105:18854. doi:10.1073/pnas.0809816105.

37. Christner, B, et al. "Ubiquity of biological ice nucleators in snowfall." Science. 2008. 319:1214.

38. Stephanie, D, and Waturangi, D. "Distribution of ice nucleation-active (INA) bacteria from rainwater and air." HAYATI Journal of Biosciences. 2011. 18:108-112.

39. Vaitilingom, M, et al. "Long-term features of cloud microbiology at the puy de Dome (France)." Atmos. Environ. 2012. 56:88-100.

40. Cochet, N, and Widehem, P. "Ice crystallization by Pseudomonas syringae." Appl. Microbiol. Biotechnol. 2000. 54:153-161.

41. Heymsfield, A, et al. "Upper-tropospheric relative humidity observations and implications for cirrus ice nucleation." Geophys. Res. Lett. 1998. 25:1343-1346.

42. Twohy, C, and Poellot, M. "Chemical characteristics of ice residual nuclei in anvil cirrus clouds: implications for ice formation processes." Atmos. Chem. Phys. 2005. 5:2289-2297.

43. Joly, M, et al. "Ice nucleation activity of bacteria isolated from cloud water." Atmos. Environ. 2013. 70:392-400.

44. Attard, E, et al. "Effects of atmospheric conditions on ice nucleation activity of Pseudomonas." Atmos. Chem. Phys. 2012. 12:10667-10677.

45. Kawahara, H, Tanaka, Y, and Obata, H. "Isolation and characterization of a novel ice-nucleating bacterium, Pseudomonas, which has stable activity in acidic solution." Biosci. Biotechnol. Biochem. 1995. 59:1528-1532.

46. Kozloff, L, Turner, M, and Arellano, F. "Formation of bacterial membrane ice-nucleating lipoglycoprotein complexes." J. Bacteriol. 1991. 173:6528-6536.

47. Pratt, K, et al. "In-situ detection of biological particles in high altitude dust-influenced ice clouds." Nature Geoscience. 2009. 2. doi:10.1038/ngeo521.

48. Prenni, A, et al. "Relative roles of biogenic emissions and Saharan dust as ice nuclei in the Amazon basin." Nat. Geosci. 2009. 2:402-405.

49. Phillips, V, et al. "Potential impacts from biological aerosols on ensembles of continental clouds simulated numerically." Biogeosciences. 2009. 6:987-1014.

50. Burrows, S, et al. "Bacteria in the global atmosphere – Part 1: review and synthesis of literature data for different ecosystems." Atmos. Chem. Phys. 2009. 9:9263-9280.

51. Burrows, S, et al. "Bacteria in the global atmosphere – Part 2: modeling of emissions and transport between different ecosystems." Atmos. Chem. Phys. 2009. 9:9281-9297.

52. Despres, V, et al. "Primary biological aerosol particles in the atmosphere: a review." Tellus B. 2012. 64:349-384.

53. Amato, P, et al. "Survival and ice nucleation activity of bacteria as aerosols in a cloud simulation chamber." Atmos. Chem. Phys. Discuss. 2015. 15:4055-4082.

54. Stopelli, E, et al. "Freezing nucleation apparatus puts new slant on study of biological ice nucleators in precipitation." Atmos. Meas. Tech. 2014. 7:129-134.

55. Pohlker, C, et al. "Biogenic potassium salt particles as seeds for secondary organic aerosol in the Amazon." Science. 2012. 337(31):1075-1078.

56. Givati, A, and Rosenfeld, D. "Separation between cloud-seeding and air-pollution effects." J. Appl. Meteorol. 2005. 44:1298-1314.

57. Givati, A, et al. "The Precipitation Enhancement Project: Israel-4 Experiment." The Water Authority, State of Israel. 2013. pp. 55.

58. DeFelice, T, et al. "Extra area effects of cloud seeding – An updated assessment." Atmospheric Research. 2014. 135-136:193-203.