Wednesday, August 19, 2015

The Politics of Money in Politics

The recent announcement that Lawrence Lessig is exploring a run for president raises two interesting issues. The first is the principal reasoning behind his interest: he feels that the present system of democracy in the United States has been flawed for some time and that other methods have not produced the desired results in remedying these flaws. As Mr. Lessig tells it, these flaws are largely born of the Citizens United Supreme Court ruling in 2010, which changed the political environment to allow essentially unlimited money to influence the democratic process in every election. Due to this new influx of money, a number of individuals, including Mr. Lessig, believe that the principle of power equality inherent in a representative democracy has been lost, raising the very real possibility that democracy in the United States could transition into an oligarchy.

Mr. Lessig’s concern about this issue is so significant that it raises the second issue: the very nature of his tenure as President. For Mr. Lessig the importance of maintaining democracy should exceed everything else, but he believes, and justifiably so, that the existing field of presidential hopefuls will be unable to focus exclusively on this issue because they would have a number of other domestic and foreign policy issues to address as well. Thus, Mr. Lessig’s candidacy and resultant presidency would function much like a referendum. His entire platform is that he will devote the full focus and power of his presidency to ensuring the maintenance of democracy, which will largely involve eliminating the mass influx of money into the political process, either through the repeal of the Citizens United ruling or by another method. After accomplishing this goal Mr. Lessig would resign as President, leaving the remainder of his term to his Vice President.

The more important of these two issues is whether Mr. Lessig is correct to view the unlimited influx of money into the political process as a chief threat to democracy. The trademark notion of a democracy is “one person, one vote”, implying equal influence for all voting parties regardless of position or standing. That practice has not changed regardless of the amount of money committed to a given election cycle. However, some would argue that the evolution of the political system in the United States has created an environment where any elected position of significant consequence demands a large amount of money for advertising and other publicity activities in order to have a reasonable chance at winning. This monetary demand gives potential candidates an additional incentive to abide by the wishes of those who can donate large sums of money on multiple occasions. A greater influx of money may also shape the candidate pool, keeping individuals who might otherwise run for a position from doing so under the belief that they could not raise enough money to be competitive.

So the question boils down to: how much influence does money have on an individual’s ability to be elected in a given political race? Clearly there have been no significant cases of individuals literally selling their votes, that is, an arrangement between a voter and a supporter of candidate A that said voter will vote for candidate A for 50 dollars. Therefore, if money is not used to directly “purchase” votes, what purpose does it serve in an election? The principal purpose of money in an election is to maximize information distribution for a given candidate. Basically the real advantage of candidate A having more money than candidate B is that it allows candidate A to take advantage of the interest and time limitations of the electorate.

For example, instead of depending on a potential voter taking the initiative to look up the official position of candidate A on issue Y, spending money allows candidate A and his/her supporters to present that position directly to the voter, via some form of media advertising (television, radio, print, Internet) or via direct interaction with a candidate A supporter. In addition to significantly increasing the odds that potential voters know the position of candidate A on issue Y, the fact that candidate A and his/her supporters control the delivery mechanism allows them to frame the information such that, if desired, the core message could invite misinterpretations or even outright lies that favor candidate A. This tactic can also be used against competitors, framing their positions in ways that make them less attractive to voters.

The next question is how important this information capacity is in an election. This issue has two parts: first, how valuable is information in an election, and second, how much information is available? Starting with the second issue, in the Internet era there is little ability in modern developed countries to “bottleneck” information or control the information stream. Gone are the days when someone could simply spend enough money or favors to shut out another candidate’s message altogether. The principal advantage of money with respect to this second issue is the ability to saturate all forms of delivery systems: television, radio, Internet, hiring people to “spread the word” in public areas, etc. However, money is not the limiting factor controlling the actual ability to distribute information; it simply allows for the more efficient spread of that information.

Even though money is not a limiting factor controlling the basics of information distribution in a political campaign, is it a critical factor that can dramatically increase the probability of winning? This is the central question within the first issue, the value of information. The value of information in a political election is almost exclusively its ability to produce votes for the candidate. Voters decline to vote for candidate A for one of two central reasons: 1) the voter does not have information about candidate A as a person and/or about his/her political positions; 2) the voter’s political and/or social values differ significantly from those of candidate A.

In the first scenario information is valuable because, on the most basic level (not taking into consideration the specific characteristics of the candidate and the potential voters), an individual is more likely to vote for candidate A if candidate A is known than if candidate A is unknown. While it is certainly possible that a voter will not vote for candidate A after learning of his/her political/social values, it is also possible that the voter will. Therefore, the behavior of the voter changes from a base low value (typically involving whether the individual will vote in the first place) to either a slightly lower value (disagreement with the newly understood positions of candidate A) or a significantly higher one (agreement with those positions). Overall it makes sense, both logically and practically, to inform voters of the important positions and traits of candidate A.

However, it must be noted that the importance of dispelling anonymity is inversely proportional to the scope of the election. Basically, if candidate A is running for a position on the school board of Smith County there is a good possibility that candidate A will be unfamiliar to a number of potential voters, because the perceived importance and scope of that position is small; thus information about candidate A is important to dispel that lack of knowledge. On the other hand, if candidate A is running for one of the two U.S. Senate seats representing the state of California, it is highly unlikely that potential voters will be unaware of the important elements, both political and social, representing candidate A. Note that social elements must be included when discussing information distribution, because a number of voters vote not on the political issues a candidate supports but on whether they like the candidate, which may have little to do with the candidate’s political positions.

In the second scenario there is little money can do to produce votes for candidate A. If voter y is aware of the political positions and social standing of candidate A and his/her personal viewpoints are in opposition to candidate A’s positions, then further information distribution is basically a waste of resources. The immediate question is why the distribution of counter-information has so little influence that it can be so readily considered a waste of resources.

There are two significant reasons for the above statement:

1) In recent years, thanks in large part to a loud and more radicalized Conservative movement and, to a lesser extent, a similar Progressive movement, voters in general have become much more polarized on a wide breadth of political issues, creating a hostile environment for ideas that run counter to these opinions and further shrinking an already small group of “convincible” middle-ground voters. In fact there are ever more party-line and single-issue voters whose mindsets are so etched in stone that they ignore even valid empirical evidence suggesting those mindsets are inaccurate. Basically, there are now more individuals who are unlikely even to listen to a viewpoint that opposes their own, let alone debate the fine points of either viewpoint, than there have been in the past;

2) Political insidiousness and the desire to retain power have resulted in the gerrymandering of various Congressional districts, which has also been indirectly related to the general breakdown of diversity within a number of established communities, creating more homogeneous neighborhoods and producing group-think single-party voting blocs. Due to the presence of these voting blocs it is very difficult for opposing ideas to establish any meaningful foothold, especially given the greater polarization of political environments mentioned in reason one. These areas are a significant reason why winning percentages are so high for incumbents.

The above discussion produces an interesting question for Mr. Lessig’s position that the potential influence of unlimited money is the principal threat to the equality of democracy (i.e. a representative democracy that represents each person equally). If, in theory, money has no direct influence and little indirect influence on acquiring votes, and, in practice, political science studies have produced conflicting results on the total value of money in an election, can the potential influence of unlimited money in elections really be viewed as the principal threat to democracy?

Another concern with studying corruption via money is determining what process can distinguish whether a lawmaker is simply voting on his personal ideals (candidate A voting in favor of tax breaks for corporation W because he (stupidly) believes in the validity of supply-side economics) from whether he is voting against his ideals to fulfill a Faustian bargain with a corporation (corporation W donated 1.5 million dollars to his previous campaign and plans to donate another 1.5 million to his next, so he votes in favor of tax breaks for corporation W). This important issue is rarely addressed when discussing money and its potential corrupting influence in politics.

Overall one could argue that the genuine problem with money in politics is that the money is being wasted on advertising of minimal advantage instead of being spent on improving the domestic economy through investment or charitable donations. Perhaps the false perception of the advantage of money in politics is the real problem, not the actual influence of money. For example, Mr. Lessig and others who share his position have noted that it takes significantly more money to be elected to a given position in government now than it did decades ago, but is this statement actually valid? Typically such statements do not correct for inflation or for how increases in population have increased the perceived need for more money, which would be a “natural” occurrence. Also, there have been a number of races where candidate A has defeated candidate B despite candidate B outspending candidate A by 5, 6 or even 10x.

However, for the sake of argument assume for the moment that Mr. Lessig’s point about the dangers of money is accurate. The next concern for Mr. Lessig is what can be done about it. If elected president Mr. Lessig would have only the power of the Executive branch of government with which to act against the Citizens United ruling, a branch that has little to no real power to produce the type of change that Mr. Lessig desires. One could argue that his election would produce a “mandate” to challenge the Citizens United ruling, but what real power would this challenge have?

First, the idea of a “mandate” is really only political theater anyway, for in the past there was some level of concession by the opposing political party with the acknowledgement that “the will/voice of the people” had spoken and it would be inappropriate to obstruct the plans of the new administration and/or Congress out of petty spite. Of course that was then; the political climate now has certainly revealed that petty spite is fashionable. Mr. Lessig is certainly aware that the Republican Party, which has taken advantage of this new environment more so than the Democratic Party, would be his main legislative opposition. Simply “invoking” the “mandate” of his election will not be sufficient to make them allies or have them “fall in line”.

Second, even if Congress did act against the Citizens United ruling, what could it do that would not be challenged in the U.S. Supreme Court by the proponents of the ruling? It stands to reason that the current U.S. Supreme Court would overturn any legislative action that sought to weaken the “freedoms” granted by the Citizens United ruling. It has already demonstrated this motivation to some extent in American Tradition Partnership, Inc. v. Bullock, rejecting a Montana state law that limited corporate political spending even after the Montana Supreme Court ruled that the law was narrowly tailored enough to withstand strict scrutiny.

Realistically it appears that at the moment only two things will allow for the restriction of excessive amounts of money in the political system. First, a change in the political ideology of the U.S. Supreme Court and a re-evaluation of the legal reasoning of the Citizens United ruling regarding the potential for corruption due to the influx of money, resulting in this new Supreme Court overturning Citizens United, similar to how Brown v. Board of Education overturned Plessy v. Ferguson. Second, a new Constitutional Amendment explicitly addressing the issues associated with the Citizens United ruling, the most popular proposal being to eliminate the ability of a corporation to be considered a “person” in the context of free speech. Outside of these two strategies, what can be done? Mr. Lessig’s emphasis on the advantage of focus, limiting money being the sole issue behind his presidency, has little meaning, for focus is not a limiting factor in accomplishing his goal; the issue cannot be resolved solely by effort and trying hard. The limiting factor is the probability of success associated with the limited number of available strategies.

Another concern is the idea that a single-minded focused mandate, which the election of Mr. Lessig would represent, can be established solely because polling reports that 80% - 85% of those polled, with little difference between political affiliations, believe that the potential of unlimited money in the political system is a big problem or “rigs the system”. Unfortunately, as the environmental movement is intimately familiar with, just because a vast majority thinks a certain way in isolation does not mean that same majority is willing to work to accomplish that viewpoint. Basically, while 80% of those polled consistently want money out of politics, how important is it to them to accomplish that goal, i.e. will they prioritize removing money from politics over various economic issues, foreign policy issues, environmental issues, etc.?

As it currently stands, based on previous actions, these respondents and potential voters appear to think the removal of money from politics is not very important. Where are the droves of candidates making the removal of money from politics their number one campaign issue because it is so important to their constituents and would dramatically increase their probability of getting elected? Basically, if so many people think that money is rigging the system and that the resultant corruption is of the utmost importance to address, there should be no difficulty finding numerous candidates who would vote to eliminate money from the political process at the most stringent level allowed by law rather than tying their ideals to the pocketbook of corporation y or donor z. Clearly, and unfortunately, this is not the case. On its face it appears that Mr. Lessig has fallen into the typical single-issue trap of thinking that because the issue is very important to him, it must also be, guaranteed without question, very important to a lot of other people.

Some could argue that an important response is to increase the power of transparency in the contribution system by disallowing anonymous donations, anonymous pitch material, etc. The general idea behind this belief appears to be that by creating a political environment where individuals who donate large sums of money must do so in a completely transparent manner, and those who use the money must outline how it was used, the probability of immoral actions will be reduced, significantly limiting the overall negative influence of money in politics.

The problem with this strategy is that it does not address the saturation mindset. It stands to reason that most people believe all candidates are taking money from some form of special interest and/or large corporate donor (even the small third-party candidates, regardless of whether they actually are), so no candidate is “clean”. Some could counter-argue that if potential voters are made aware of monetary donations and expenditures, they could seek out candidates who have received no money or significantly less money and characterize those candidates as “not beholden to special interests”. The concern with this reasoning is that the receipt of donated money becomes just another single issue. It is difficult to envision a scenario where an individual votes against a candidate who shares his/her viewpoint on a wide variety of issues simply because it is revealed that the candidate has taken a lot of money from special interest groups.

Therefore, ‘taking money from special interest groups’ will be regarded as just one of many issues a voter considers when deciding which candidate to vote for. Unfortunately, because messaging and access are heavily influenced by money, it seems very probable that very few candidates will refrain from taking special interest money when it is available to them, regardless of any transparency requirements. If this scenario comes to pass, then with every viable candidate feeling it necessary to take money, the previous public psychological assertion becomes true: everyone is taking money, everyone is dirty, thus it does not matter who takes money. Certainly transparency should be established, because it is a logical and fair idea and will help increase the probability of more complete information profiles on candidates for potential voters; however, without an effective way to remove money from the system, it is unlikely that any transparency strategy will have a real positive effect on money in the political system.

Another option put forth by Mr. Lessig, among other parties with other systems, is the idea of Democracy Vouchers, where tax rebates of a certain value (currently $50) are reserved for exclusive donation to a political campaign or issue. The belief is that, through sheer scale, volume will be able to cancel out the influence of the high-value, low-volume donor class, which is viewed as the chief problem in the system. Unfortunately this type of plan is flawed in numerous ways. The chief flaws have already been discussed in a previous post here. Another potential flaw in Mr. Lessig’s personal idea is that because the vouchers are tax-based there is some question whether individuals who do not pay taxes would also receive the $50 or be shut out. If they were shut out then clearly such a program would not live up to Mr. Lessig’s idea of an equal representational democracy.

Overall the idea of attempting to defeat “bad money” with “good money” (be it from the public, “good PACs”, etc.) is rather foolish, both because of sustainability (which government programs get cut each year due to the loss of billions of dollars returned to the public to “invest” in politics?) and because of simple practicality, for the polarization of politics has heavily limited the coordinated influence of volume politics. For example, in its initial attempt to influence the political landscape in 2014, Mr. Lessig’s personal Super PAC, Mayday, was a significant failure. Basically, plans like Brennan Center-Democracy 21 Federal financing and Democracy Vouchers are more likely to exacerbate the problem of money in politics than to act as a “correcting” force, if they do anything at all.

On a side note, while the idea of a “referendum president” is somewhat interesting, it can be looked upon more as a novelty than anything significant, especially because without a definitive timeline for when the resignation would take place, voter decision-making becomes complicated. For example, it is concerning to think how a “referendum president” would handle a catastrophic domestic or foreign event. Would the Vice President simply handle those events? Who would foreign leaders interact with when addressing foreign policy? Etc.

Overall, removing money from politics in an effort to ensure a fair democracy and minimize corruption does not appear to be an effective strategy for achieving those ends. The concern is both the ability to remove money and whether money is actually a real problem. A fair and effective democracy is served by three essential elements: voting access, informed voters and voting power. In this country none of these elements is at what one could call “full strength”.

The first element of a fair and effective democracy is ensuring appropriate voting access, where the requirements one must meet to be eligible to vote are fair, universally applied and transparent. Unfortunately this simple requirement is not being met in a number of regions; instead these areas are attempting to circumvent fairness by forcing individuals to acquire some form of government-issued photo identification at personal cost under the false pretense of preventing voter fraud. Such unnecessary and frivolous demands are much more dangerous to a fair and effective democracy than potentially unlimited money because they directly influence who can vote.

The second element of an effective democracy is ensuring an informed and motivated electorate. Recall that the principal role money plays is information exchange. Therefore, the best way to make money irrelevant is to create an informed and committed electorate, invalidating the purpose of money. The point of a representative democracy is that voters who vote for the winner feel that their viewpoints are being presented and fought for in the appropriate governmental body. The influence of money is only negative when that expectation is not met: when those who voted for the winner do not have their elected official arguing in favor of their viewpoints, but instead see that official arguing, at the behest of a wealthy donor minority, for viewpoints that contrast with or are unimportant to those of the majority.

The best way to expose this betrayal of duty is an informed and committed electorate, one that knows what it wants out of its elected official(s), not one that simply holds on to old ideas and/or votes a single-party ticket solely because the candidate has a certain letter beside his/her name on the ballot. If the electorate does not choose to inform itself then it is difficult to judge whether money is corrupting the process; however, the electorate must be given tools to access the appropriate information. Therefore, candidates should be obligated to produce information packages regarding important issues and their stances on those issues that can be distributed via mail, posted online or kept in hard copy at government buildings and libraries. A guaranteed information source will allow voters to inform themselves in an unbiased, “spin”-free manner.

The third element of a fair and effective democracy is currently the most lacking of the three. Unfortunately there is a significant lack of honesty and logic in the political process, which significantly hinders the full expression of voting power. For example, a politician can make statement A to the public but actually support an opposing position, and as long as the public is not able to discover that opposing belief in time, the politician can be elected under false pretenses. This reality is especially relevant when the position of a corporation or large political donor is in direct contrast with the position of the general public. How can voting have any real power when a politician can simply lie about his/her position until elected?

Some would argue that if an elected official lies about what he/she would seek to accomplish, the only real response is for the public to take the philosophy of “fool me once, shame on you; fool me twice, shame on me” and vote the individual out of office at re-election. However, what type of display of power is that? Lie and get some number of years of guaranteed elected office? How is that fair and just? Therefore, what type of process could be used to sort out false statements? Should each candidate be expected to produce a “beliefs” contract whose violation once elected would constitute just cause for termination from that position? If so, what would be the process for the candidate to change his/her opinion on an issue if a mistake in reasoning were discovered? It stands to reason that a new system is needed, for clearly the existing process of recall is not sufficient to ensure the power and wishes of the majority of the electorate.

Overall the potential candidacy of Mr. Lessig for President of the United States appears inherently questionable because the methodology Mr. Lessig supports for removing money from politics is unclear, and the most plausible options are either not viable or not significantly aided by Mr. Lessig being President. Attempting to remove money from politics through a direct “limitation” by neutralizing the Citizens United ruling seems very difficult at this point in time, and without any real probability of success any attempt would result in wasted effort and resources. Instead of attempting to neutralize money through its forced removal or by countering it with even more money, focusing on neutralizing the influence of money through voter empowerment and ensuring voter influence should be a more viable way of facilitating a legitimate, fair and effective democracy.

Saturday, July 25, 2015

Should life in prison really be life in prison?

When one considers controversy in the criminal justice system, one of two issues immediately comes to mind: 1) the death penalty, where effective arguments exist on both the pro and con sides; 2) racism in the criminal justice system, where debate is typically over-emotional and illogical on both sides, especially from those complaining about the extent of racism. The widespread focus on these two issues, however, draws attention away from other meaningful issues. One interesting issue that receives less attention is the question of justification for sentencing someone to life in prison without the possibility of parole.

Not surprisingly there are a number of people who believe the judicial system should not have the capacity to hand down a sentence of “life without parole” (lwop). An aspect of this argument has been bolstered by three separate United States Supreme Court rulings, Roper v. Simmons, Graham v. Florida, and Miller v. Alabama, which held that it is not Constitutional to sentence juveniles to the death penalty or to a mandatory life-without-parole sentence regardless of the type of crime. Emboldened by these rulings, a number of individuals have attempted to advance this position further, seeking either to eliminate lwop sentences altogether or at least to expand the breadth of these rulings to young adults, arguing that a lwop sentence is a de facto death sentence.

Furthermore, the argument goes, the general nature of a lwop sentence is not based on rehabilitation, because the individual in question is never getting out of prison; it is a mixture of punishment and deterrence for other potential actors. However, this message is less relatable to juveniles and young adults due to their emotional and mental development. Proponents of the above position believe that time is the most relevant factor in “decriminalizing” individuals, for the frontal lobes mature and, in men, testosterone levels decline, reducing the probability of aggressive and impulsive behavior. Basically, time is a superior method of reducing crime probability versus hoping young people view individuals similar to themselves incarcerated for the rest of their lives and come to the conclusion “I better not do that”.

In fact some may simply come to the conclusion “I better not get caught”, suggesting an age-old observation regarding crime: the certainty of punishment matters much more than its severity when one is considering the commission of a crime. Therefore, based on this reasoning, these individuals argue that sentencing people, especially the young, to life in prison without parole serves neither society nor the individual in question.

Some have also argued that the deterrence factor does nothing significant to limit crimes of passion, for rarely do individuals calculate the benefits and consequences before engaging in an emotionally driven response. However, this argument is rather weak, for most emotional actions do not typically produce a crime that will result in a lwop sentence upon conviction. Understand that lwop sentences rarely occur outside of homicides, most notably a Murder 1 conviction, which seldom has acute emotional components, even in felony murder cases. The general prerequisites for charging an individual with Murder 1 are 1) premeditation; 2) willfulness; and 3) deliberation (typically with malice aforethought).

This above argument regarding passion and emotion creates concern in that the chief problem with attempting to expand the “lack of maturity” argument to lwop sentences is the nature of lwop crimes typically do not involve lack of maturity or emotional development as a meaningful factor. Basically regardless of the level of social, mental or emotional development, any individual without some form of brain damage should acknowledge that the elements involved in the crimes that warrant such a sentence (vicious and premeditated homicides or homicides in the course of committing other high level felonies like armed robbery, kidnapping, etc.) are against the law and consequences for their commission will be severe. One does not need to be a fully matured and emotionally stable 26 year-old to know that shooting someone in the chest with a .44 is not a good thing and will be harshly punished. One of the chief reasons for a differing stance between juvenile treatment with the death penalty and lwop sentences is the finality of the death penalty eliminates the ability to overturn mistakes in the judicial process.

Another aspect of weighing lwop sentences on young single count offenders is will the elimination of these sentences serve the concept of justice? For example if 20 year-old person A murders 20 year-old person B with all of the necessary elements to justify a Murder 1 conviction what type of sentence would represent justice? Realistically it can be argued that person B was robbed of at least 40 years of life, if not more, so should person A pay in a year for year context? If person A is only incarcerated for 20 years is that justice? Basically what type of punishment represents justice when one person blatantly takes the life of another?

Some would argue that keeping Person A in jail for the rest of his/her life is a miscarriage of justice because ending Person A’s life on de facto grounds does not serve the public interest or the interest of justice, it simply steals an additional life ruining two lives instead of one. However, the counterargument is that Person A can still have productive and positive experiences despite being in jail, something that Person B can no longer have at all.

It could be argued that the deterministic aspect of “without parole” is the problem for individuals who are sentenced to life with the possibility of parole are not guaranteed to acquire parole. Therefore, the elimination of this mandate would allow experts and individuals with intimate knowledge of specific prisoners to judge whether or not an individual remained a threat to society and if justice had been done. Individuals who favor judicial discretion in general would agree with this position for they are from similar molds.

Of course the counter-position is that there are a number of individuals who have received parole after committing violent crimes, i.e. been judged no longer a threat to society, and soon after their release committed similar or worse crimes resulting in their re-arrest and incarceration. Therefore, the issue of simply revoking the very idea of life without parole encompasses the idea of certainty. Should a population of prisoners who have “turned their lives around” be denied the possibility of parole to prevent another population of prisoners from manipulating such a system to acquire release and the ability to continue their criminal enterprise?

Another factor for consideration is how influential is the threat of a lwop sentence in “convincing” an individual to take a plea bargain, thus saving the state or Federal government money, time and other resources in not having to prosecute a murder case, which are frequently significant. If this influence is meaningful, then the loss of lwop sentences could result in a greater probability of delayed or even lost justice for the court system would have to deal with a greater influx of cases creating a backlog.

One of the more widely known important elements to supporting the elimination of “without parole” conditions on sentences is the belief that the prison system can produce sufficient rehabilitation potential. While existing track records are mixed in this regard, evidence does exist that prisons produce a means for individuals to “get it” and turn their lives around. Unfortunately for supporters of the various positions surrounding the elimination/reduction of sentences there is another important element in this process, which while receives lip service now and again, does not receive any significant level of public or political support: how to reincorporate criminals, especially those who have been incarcerated for a long period of time, back into the economic fabric of society?

This question is especially troublesome now for while it has almost always been difficult for criminals to re-acclimate themselves into society on some level, as society currently stands there are a number of individuals without criminal records have not been effectively incorporated into the economic framework who will be competing with these newly released criminals. Without the ability to incorporate newly released criminals, especially those serving long sentences for violent crimes, the probability of recidivism is high, regardless of age and emotional/mental maturity. Sadly this is a question that proponents of eliminating lwop sentences largely ignore kicking the proverbial can to the general “prison reform” crowd. This behavior is questionable because how can one in good conscious seek to eliminate “without parole” sentences whether for juveniles only or entirely without addressing this important question of economic incorporation? Some may argue that it is not fair to leave an individual in jail while this issue is addressed, but is it fair to society to release people that cannot be properly reintegrated?

The final major question regarding the elimination of “without parole” sentences is how to address the psychological impact of prison influencing an individual’s ability to live in general society? There is reason to believe that a number of inmates suffer from a form of institutionalization after a sufficient period of time in prison, which will negatively impact their ability to reintegrate themselves successfully back into society.

One particular change in psychology that could be significantly harmful to reintegration is the increased level of apathy, passivity, and isolation commonly seen from institutionalism.1 One of the more stereotypically, yet still true “rules” of prison life is stay invisible unless you are struggling for power; doing so means keeping your head down and your mouth shut. Unfortunately society has moved to a point where it almost exclusively prefers people be loud and expressive; in fact it appears, at least in the manner of public notoriety, that the motor-mouth arrogant frequently incorrect braggart is preferred over the stoic well-meaning fact-giver. Basically what is expected for “success” in prison life versus what is expected for “success” in “normal” life is largely contradictory. So how is this situation resolved? One could require inmates released after large incarceration periods psychological assistance from trained professionals, but who pays for this service?

Overall there are some important issues regarding the elimination of “without parole” qualifiers on sentences that go beyond simple age. The most noteworthy and important ones relate to the nature of justice, both in punishment and how such a change would influence courts, how long-term prisoners can be incorporated economically into a society that is leaving behind non-prisoners at ever increasing rates and how the potential psychological changes born from institutionalization influence reintegration? Until satisfactory answers can be produced for at least these three questions, notwithstanding other smaller more specific questions, the idea of eliminating “without parole” qualifiers in criminal sentencing seems inappropriate; remember individuals serving these sentences are not akin to those jailed for punching a guy in a bar for hitting on “his girl” or dealing small quantities of marijuana without a license in a state where it is legal by state law, but instead were convicted for very serious crimes that almost always involved the loss of at least one other life.

Citations –

1. Johnson, M, and Rhodes, R. “Institutionalization: a theory of human behavior and the social environment.” Advances in Social Work. 2007. 8(1). 219-236.

One Sexual Offense Fits All?

It has been said, ““precept of justice that punishment for crime should be graduated and proportioned to [the] offense.” [Weems v. United States]. However, punishment for a crime is not exclusive to the domain of incarceration. For most criminals there is the social stigma of being a criminal, which significantly limits their economic, political and societal power and influence. In the case of individuals convicted of sexual based offenses this stigma is typically enhanced. While nothing can be done about the subjective stigmas assigned to criminals by other individuals regardless of the type of offense, when one looks at the administrative burdens applied to individuals convicted of sex offenses versus other types of crimes, including murder, one wonders whether or not such exclusive and additional punishment is a violation of the Eighth Amendment of the Constitution.

After the period of incarceration for a sex offender has concluded the typical administrative burdens applied to that individual encompass restrictions on residency based on the surrounding area most notably they cannot reside within some fixed specified distance from common areas where children congregate like schools, daycare centers, parks, bus stops, etc; in some situations if such an area is constructed after the individual has established residency in a particular location the individual will be forced to move (some states have grandfather clauses that do not require a move some do not). In addition sex offenders must check in with local law enforcement when moving to a new address, changing employment, changing their legal name, etc., and depending on the state have to reaffirm these notifications after a certain period of time. Finally their names are listed on a public database for a period of time that may not be commensurate with their current relationship with their local environment. Basically their name could be on this list 8 years after the incident that resulted in their conviction and after moving to an entirely new community in which these individuals have lived without incident.

To understand these administrative requirements one must attempt to understand their philosophical origins. Most sexually based crimes illicit a guttural and emotional reaction typically leading to a characterization of repugnance, that strangely enough at times, exceed the disgust one feels towards murder or other higher level crimes. The original intent of the sex offender registration list appears born from at best a psychological compromise to provide a level of deterrence from recidivism by limiting the available opportunities that could lead the individual to repeat such criminal action or at worst as an additional punitive measure because it was not legally viable to incarcerate such an individual for a period of time typically demanded/anticipated by the public in reaction to the crime.

Unfortunately this compromise has evolved into a “one size fits all” punishment moving beyond the once applied standard judicial review and discretion. It tends to no longer take the nature of the sexual offense into consideration beyond broad “milestones”. For example all would agree that there is a significant difference between a 19 year-old male having sex with a consenting 16 year-old female and a 29 year-old male raping a 16 year-old female via a drugged beverage. While these differences are certainly reflected in the incarceration portion of the punishment they typically are not reflected in the administrative/societal portion of the punishment.

Basically while both individuals from the above example are technically sex offenders, the fact is that in most situations there is a tiered structure that is so broad in its administrative penalties that the level of judicial discretion is non-existent. In a sense the application of administrative punishment can be viewed as generally lazy, disinterested in determining the actual threat posed by the individual to the community instead labeling all as viable and credible threats.

There are two pertinent court cases pertaining to the issue of sex offense and the Eighth Amendment. First, in Graham vs. Florida the United States Supreme Court adopted the position that non-capital sentences for minors, adding to capital sentences held in Roper vs. Simmons, could be found unconstitutional under a proportionality review. This proportionality review can fall within two general classifications: 1) challenges to the length of a sentence dependent on the circumstances surround the case in question; 2) cases in which the Court implements the proportionality standard by certain categorical restrictions. The important element to Graham vs. Florida with regards to the above topic is that it set the precedence that categorical Eighth Amendment proportionality reviews could be applied to non-capital offenses, moving beyond the idea of “death is different”.1

Second, in Ohio v. Blankenship the defendant claimed that his classification as a Tier II sex offender pertaining to the crime of having a sexual relationship as a 21 year-old with a consenting 15 year-old with full knowledge of her age resulting in a conviction of a single count unlawful sexual conduct was cruel and unusual punishment. This claim was based on the administrative penalties associated with that classification (largely associated with having to register as a sex offender for 25 years) in contrast to the threat he provided as a possible future repeat offender.

The Ohio Court of Appeals ruled against Blankenship determining that existing legal remedies were not available because he was an adult when he committed the crime versus being a juvenile, thus a previous ruling (related to C.P., 131 concerning juveniles) was not applicable and that he was in fact a sex offender, thus the current legal structure in Ohio was applicable. Blankenship appealed to the Ohio Supreme Court, which held arguments in early March 2015; as of this posting it appears that no ruling has been made regarding this case, but a number of individuals believe that the ruling could go either way. So currently while it is legally and theoretically possible to find the administrative penalties associated with conviction as a sex offender unlawful via the 8th Amendment, no court has current done so.

Some could argue that there is an important distinction in statutory rape cases between an individual who has accurate knowledge of the age of his/her sexual partner versus having inaccurate knowledge through deception or misinformation. On this issue the point of willing culpability is irrelevant. For example there is no meaningful difference between a 19 year-old having sex with a 15 year-old where both parties are fully aware of the age of the other versus a 19 year-old having sex with a 15 year-old who has lied to the 19 year-old claiming an age of consent (18 year-old).

Such consideration would be akin to facilitating punishment based on whether or not an individual was aware that he/she was speeding. Whether or not the individual knows he/she is speeding is irrelevant to the fact that the individuals was speeding and violating that particular law. Furthermore the issue is not whether or not an individual who commits statutory rape or a similar low level sex-based crime is a sex offender. By law the individual is a sex offender, the issue is the assigning the appropriate punishment for the committed crime in all aspects, i.e. is it appropriate that an individual convicted of sexting receives the same administrative punishment as an individual convicted of rape?

An interesting point of fact pertaining to the validity of the administrative penalties associated with non-violent sex offenders is that the general recidivism rate for sex offenders has been demonstrated numerous times to be lower than any other crime except murder.2-3 An interesting point of contention could be made regarding this data between parties that agree with board mandatory classifications and parties that disagree.

Proponents of the administrative penalties could argue that this lack of recidivism is due to the harsh administrative restrictions placed on sex offenders heavily reducing the temptations and opportunities for recidivism. Opponents of these penalties could counter-argue that this lack of recidivism is because most sex offenders are not sexual predators, but simply do something stupid early in their lives that get them labeled and convicted as a sex offender through some basic non-violent sex-related crime like sexting a consenting individual or statutory rape with a consenting partner. While the truth is unknown, opponents are more likely correct than proponents because the data encompasses a time frame for some of these analyses where the harsher administrative penalties were not entirely applicable.

An important element to whether or not the 8th Amendment can be applied on this particular issue, especially with regards to the sex offender registry is whether the registration is viewed as punitive or civil; a characterization as punitive should increase the probability of relevance in applying the 8th Amendment versus a civil characterization. In most cases it is difficult to argue that the registry is not punitive in nature with the administrative hurdles that are assigned to those on the list, especially concerning the living restrictions. It stands to reason that if the only demand of the list was public access and an accurate name and address then it would be more civil in nature; however that is currently not the case.

Based on existing information it is difficult to argue that the sex offender registry serves an important role in protecting society from a large number of individuals convicted of sex offenses because those individuals are not a threat to society. Furthermore the additional elements of societal stigma and restrictions of freedom produced through association with the list could constitute a disproportional punitive response to the crime, especially when that association is not subject to judicial review, but mandated by a state or the Federal government. For example it could be argued successfully that for a vast majority of individuals who are convicted for the first time on a single count of a non-violent sexual-based crime, registration as a sex offender is not appropriate, therefore could be appropriately challenged as a violation of the 8th Amendment.

An interesting side note is that defining mandatory registration as a sex offender as a violation of the 8th Amendment may be necessary to properly apply justice even if it not legally appropriate. In short associating this scale of punishment to the 8th Amendment may be the only way to give politicians the political cover they need to continue to publicly assert their “tough stance” against sex offenders of all shapes and sizes, but also have appropriate punitive punishment based on the type of sexual offense. Basically while applying an analytical system of judgment regarding the threat potential of a sexual offender to “relapse” is logical and compliant with justice, forcing such a system on states through association with the 8th Amendment may be necessary due to political concerns.

However, while the courts have almost always been at the forefront for social change, would it be appropriate to make this association even if it were not valid? What type of slippery slope would that produce? On an even larger scale what can be done in a democracy when the majority is not interested in changing its opinion regardless of any arguments counter to their opinion? Overall when thinking from a non-emotional logical perspective mandatory registration for most single count sex offenders appears inappropriate, not surprisingly producing a path to properly appreciate that viewpoint legally is the more difficult problem.

Citations –

1. Shepard, R. “Does the punishment fit the crime? Applying eighth amendment proportionality analysis to Georgia’s sex offender registration statute and residency and employment restrictions for juvenile offenders”. Georgia State University Law Review. 2011. 28(2) Article 7. 529-557.

2. BOJ Recidivism of Sex Offenders Released from Prison in 1994, November 2003

3. U.S. Department of Justice Criminal Offenders Statistics: Recidivism, statistical information from the late 1990s and very early 2000s.

Tuesday, June 23, 2015

The Legitimacy of Holistic Admissions at U.S. Universities

With the competition for landing a quality job increasing with every passing year, acceptance into a high quality university is viewed as essential to maximizing the probability of landing one of these jobs. However, in lockstep with the competition for quality jobs, the competition to gain entrance into those universities widely regarded as high quality has also increased. This competition has produced controversy surrounding the procedure in which applicants are admitted creating a tug-of-war of sorts between various parties and their interests. One of the chief points of controversy is the validity of the “holistic” review process. In fact a lawsuit filled against Harvard University by the Students for Fair Admissions contends that holistic admission processes are inappropriately discriminatory and should be significantly clarified in their evaluation metrics beyond “whole person analysis”. Obviously a reading of the official complaint by the Students for Fair Admissions divulges a harsher conclusion than that above, but the sentiment above is more appropriate to produce a more fair admissions environment.

Proponents of the holistic method champion its multi-faceted analysis approach where a larger spectrum of an applicant’s qualifications for admissions is considered beyond the traditional metrics (standardized test scores, grades and certain extracurricular activities), which produces a more fair and accurate admissions process. Opponents of the holistic method believe that it is commonly used at best to hide the admissions process beyond a veil of ambiguity allowing universities to justify perplexing and arbitrary decisions and at worst to legitimize a quota system where more qualified candidates are rejected in favor of under-qualified candidates to achieve diversity demographics in order to evade public scorn. Clearly based on the perceived stakes, where getting into university A can set a person up for life versus university B which would create unnecessary hardships, the emotional aspect of this debate is high. Unfortunately this emotional aspect has produced an environment that abandoned a critical philosophical base for understanding the why or why not a holistic appropriate is appropriate.

First it is important to address that the holistic process has been attacked by some as a demonstration of “reverse racism” through the process of affirmative action. The term “reverse racism” is a misnomer and is not properly used in this descriptive context. Racism is giving differing treatment, either in a positive or negative manner, to an individual based on their ethnicity or race. Based on this definition, reverse racism would be akin to not giving differing treatment to an individual based on their ethnicity or race. However, when individuals invoke the term “reverse racism” the actual meaning is not what they are intending to convey. Instead they simply mean a different type of racism. Unfortunately some parts of society have associated the term racism to reflect only one particular form of racial bias instead of all forms of racial bias, which is inappropriate. Therefore, the term “reverse racism” should be eliminated from conversation in this context and replaced with the appropriate term – racism.

Second, it must be noted that the original intention of affirmative action was not to give “bonus points” to an individual based on their race, but to access how race may have influenced the acquisition of certain opportunities and thereby influenced the development of an individual through their performance when engaging in these opportunities. It should not be surprising that an individual with rich, committed and connected parents will have more opportunities and ability to prepare for those opportunities when presented than an individual without wealthy or even present parents.

For example it is expected that SAT scores would be higher for children of richer families both because of increased opportunity to prepare and increased opportunity to retest if the performance is not deemed acceptable. Also there is a higher probability that individuals from rich families will be better nourished than those individuals from poor families, which will directly influence academic performance and ability to participate in other valuable non-academic opportunities. Such environmental effectors are simple elements that can skew the value and analytical ability of “raw” metrics like standardized tests. Basically affirmative action is akin to judging the vault in gymnastics. Not all jumps have the same difficulty level; a non-perfect vault with a 10.0 difficulty will consistently beat a perfect vault with a 7.0 difficulty.

A quick side note: while the idea of affirmative action was originally based on the premise of race in an attempt to combat direct and indirect forms of racism, in the present the idea of affirmative action has shifted more to address differences in economic circumstance over race/ethnicity. The idea that rich individuals of race A will somehow be significantly excluded from opportunity A versus rich individuals of race B is modern society is no longer realistic. It is important to identify that more minorities will be assisted by affirmative action not directly because of race, but instead because of past racism that reduced the probability of these minority families to build intra-generational wealth thereby making them poorer than white families.

Based on the “potential judgment” aspect of affirmative action, some individuals may object to the idea that it is appropriate to punish an individual for having access to opportunities that others may not claiming that this behavior is a form of bias. This point creates the first significant philosophical question that must be addressed in the admissions process: is it justifiable that an above average individual in an advanced difficulty pool should find favor in an opportunity over a high quality performing individual in a lesser difficulty pool?

An apt example of this notion is seen in the disparity between the “Big 5” college conferences (ACC, Big 10, Big 12, PAC 12 and SEC) and the mid major conferences when selecting basketball teams for the NCAA Championship Tournament. While the committee tends to give preference to teams from the Big 5, the question is should they? A Big 5 power team, “Big Team A”, with a 55.5% conference winning percentage at 10-8 and an overall record of 21-13 has clearly demonstrated itself as slightly above-average among its peers whereas a mid major team, “Medium Team B”, with a 89% conference winning percentage at 16-2 and an overall record of 26-7 did not have the same opportunities to compete against the level of competition as Big Team A, but has demonstrated themselves a quality team with a greater unknown ceiling. Basically should someone slightly above the middle of the pack in one environment that could be viewed as more competitive be passed over for someone at the top at a tier 2 level?

In the arena of applicants the question of quality could boil down to: should the 100th best “area” A applicant be accepted over the 10th best “area” B applicant. Think about it this way: should applicant C from city y who scores significantly above average for that area on standardized tests and also has quality grades be accepted over applicant E from city x who scores slightly above average for that area on standardized tests and has quality grades even if applicant E’s scores are slightly higher? Note that obviously city x has a higher student average for standardized tests than city y.

Those who say yes to the above question based on the importance of fostering a racially/ethnically diverse environment must be careful not to fall into the trap of needless diversity, which is its own type of bias. With regards to fostering a diverse environment, its establishment must be based on thought and behavior, not on elements beyond an individual’s control.

There is an advantage to diversity of experience for it ensures a greater level of perspective and ability to produce understanding leading to more and potentially valid strategies for solving problems. However, this advantage comes from experience not from different skin color, religious beliefs, etc. For example the inclusion of person A just because he/she has certain colored skin or is of a certain ethnicity is not appropriate. Their inclusion should demand a meaningful and distinctive viewpoint. Cosmetic diversity for the sake of diversity serves no positive purpose and is inherently foolish and unfair/bias. Based on this point the crux of the issue regarding admissions is how to identify individuals with distinctive and valuable viewpoints in order to validate selecting a high achiever from a less difficult environment.

Most would argue that the standard analysis metrics are not appropriate for this task. For example grades are significantly arbitrary based on numerous uncontrollable environmental and academic circumstance; i.e. an A at high school x does not always carry the same weight as an A at high school y and some high schools allow students greater amounts of extra credit which conceal their actual knowledge of the subject through grade inflation. Standardized tests can be heavily prepared for and be taken multiple times depending on time and financial resources. Also they may not present an accurate representation of ability for almost no “real-world” task requires an individual to sit in one place in a time sensitive environment answering various questions without access to any outside resources beyond what is in their brain. At one point the “college essay” could have filled this role, but now it appears the essay has de-evolved into an ambiguous farce demanding only unoriginal “extraordinary” experiences and/or teaching moments where sadly it has become difficult to determine even if the student means what they say or are simply writing what they think the admissions officers want them to say.

However, while these flaws with the standard metrics exist, it is important to understand that abandoning the standard metrics entirely would be in error, for abandoning these metrics would be akin to replacing one “bias” with another. The standard metrics are an important puzzle piece, but they do not make up the entire puzzle.

For some the college interview has been thought of as a panacea for bridging the gap between holistic and standard admission judgment, but interviews do have caveats that must be monitored. Supporters of the interview process believe that it gives applicants an ability to demonstrate that he/she is more than just test scores, extracurricular activities and grades as well as allows both the university and applicant the ability to more specifically define the level of “fit” between the two beyond the mass generic questions utilized in the application process. Finally interviews can be a good deciding factor between board-line applicants.

Unfortunately interviews have some flaws that must be properly managed to ensure their legitimacy. First, individuals conducting the interview must be properly trained to avoid first-impression bias, as most interviews establish the tenor of the relationship between interviewer and interviewee very early, which threatens the objectivity of the rest of the interview. Interviews must also follow a standard operating procedure, especially when it comes to the questions. Applicants must be asked the same questions, for if different questions are asked of different applicants the subjectivity of the procedure increases, which undermines the interview as a comparative evaluation metric. It is fine to ask different questions if interviews are not going to be used when choosing one applicant over another, but most do not view the interview in such a casual light.

Another concern about the interview is that it cannot judge growth potential, i.e. how the university may positively or negatively influence the development of the applicant if he/she actually attends. Also, if interviews do not carry significant weight in the decision-making process they may cause more harm than good: the lack of specific feedback creates more stress than relief as applicants wonder how the interview went, over-embellishing small errors into major negatives. Finally, if interviews are deemed important it would be helpful if more universities offered travel vouchers to financially needy applicants, so that those who want to tour the campus and participate in the interview process have an opportunity to do so that is not negatively impacted by their existing financial situation. Such a voucher may be especially important if interviews are used in borderline judgments.

A separate strategy may be the use of static philosophical probing questions in the application process. This strategy could better manage differences in outside environmental influences by gauging the general mindset of an applicant when it comes to solving problems. For example, one question could be: if presented with a large jar full of chocolates and a single sample chocolate, how would the applicant calculate the number of chocolates in the jar? Note that this question demands both creativity and deterministic logic; creativity will produce more available options, but logic will be required to reason out the best option from the list.
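To make the deterministic half of that reasoning concrete, here is a minimal numeric sketch of one defensible answer, assuming a cylindrical jar, an approximately ellipsoidal sample chocolate, and a random packing fraction of about 60% (all illustrative assumptions, not part of the question itself):

```python
import math

def estimate_chocolate_count(jar_radius_cm, jar_height_cm,
                             choc_dims_cm=(3.0, 2.0, 1.5),
                             packing_fraction=0.6):
    """Estimate how many chocolates fit in a cylindrical jar.

    The sample chocolate is approximated as an ellipsoid whose semi-axes
    are half its measured length/width/height; randomly packed irregular
    solids typically fill ~55-65% of a container, hence the 0.6 default.
    """
    jar_volume = math.pi * jar_radius_cm ** 2 * jar_height_cm
    a, b, c = (d / 2.0 for d in choc_dims_cm)
    choc_volume = (4.0 / 3.0) * math.pi * a * b * c
    return round(packing_fraction * jar_volume / choc_volume)

# A jar 10 cm in radius and 25 cm tall, with 3 x 2 x 1.5 cm chocolates:
count = estimate_chocolate_count(10, 25)
```

Any of the measurements could be swapped for an applicant's own estimates; what the question actually probes is the structure of the reasoning (a volume ratio discounted by a packing fraction), not the particular numbers.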

Another interesting question would be to ask what the greatest invention in human history is. Such a question would inspect whether an individual believes it is more important to build a foundation or whether importance comes from what expands from that foundation. A third question could be: what one opportunity would the applicant like to have had that they did not receive or that was not available, and why? These questions are superior to the generic, banal, analytically irrelevant questions that most universities ask on their admission forms.

Overall, regardless of what methodology a university uses to accept or reject applicants, the most important element is that the methodology is transparent. Universities must exhibit what attributes and credentials validate an individual’s merit for acceptance and then produce valid qualitative and quantitative reasons for why certain individuals gain admission and others do not. Transparency is the key element that allows a university to conduct its particular admission methodology without complaint. Returning to the original question, whether a university elects to accept above-average individuals from high-“difficulty” environments or top performers from lower-“difficulty” environments, either method is defensible as long as legitimate reasoning is available. However, therein lies the problem with the holistic method: universities are not transparent in its application, and such behavior must change if the holistic method is to have any significant credibility.

Wednesday, June 10, 2015

Exploring the Biological Nature of Brown and Beige Fat

Over two years ago this blog discussed the possibility of incorporating a specialized preparation routine before exercise in an attempt to stimulate both brown and beige adipose tissue in order to increase the efficiency and overall calorie- and fat-burning potential of standard exercise. However, that post did not seek to fully understand or discuss the specific biological mechanisms that govern the behavior of brown or beige adipose tissue. This lack of knowledge limits the efficiency of exercise programs, as individuals could be consuming certain foods or performing certain warm-up tasks to increase exercise potential in addition to those suggested in the past blog post. Increasing exercise efficiency could be an easy means to increase the overall health of society without having to devote more precious time to exercise; therefore it would prove useful to better understand the processes that activate these types of fat.

At the most basic level there are two key elements to the fat-burning capacity of brown fat. First, brown fat cells contain numerous mitochondria versus the few possessed by white fat cells; these additional mitochondria allow for greater rates of metabolism along with increased lipid turnover. Also, norepinephrine signaling in brown fat activates lipases that break stored triglycerides down into glycerol and non-esterified fatty acids, ultimately producing CO2 and water, which can lead to a positive feedback mechanism.1,2 Second, brown fat contains significant expression rates of uncoupling protein 1 (UCP-1).1 UCP-1 is responsible for dissipating energy, which leads to the decoupling of ATP production and mitochondrial respiration.1 Basically UCP-1 returns protons to the mitochondrial matrix after they have been pumped out by the electron transport chain, so the energy of the proton gradient is released as heat instead of being captured as ATP (i.e. proton leak).

It is important to understand that there are two types of brown fat: natural brown fat and intermediate brown fat, commonly known as beige fat. Natural brown fat is typified by the fat located in the interscapular region and derives from cells of the muscle-like myf5+ and pax7+ lineage.3 Natural brown fat is typically isolated from white fat and almost entirely synthesized in the prenatal stage of development as a means to produce heat apart from shivering.4 Beige fat is commonly interspersed within white fat, does not derive from these muscle-like cells (although Myh11 could be involved),5 and can be activated by thermogenic pathways and the strain of exercise. White fat also has the potential to convert into beige fat through a process commonly called “browning”.6,7

Natural brown fat is thought to have larger concentrations of UCP-1 because it constitutively expresses the protein after differentiation, whereas beige fat expresses large amounts of UCP-1 only in response to thermogenic or exercise cues.1,5 Therefore, natural brown fat is more effective at energy expenditure. However, it may not be possible to develop more natural brown fat after development; therefore, any positive progression in brown fat development will come from beige fat.

Early understanding of brown fat activation involved indiscriminate increases in the activity of the sympathetic nervous system (SNS). The standard pathway governing brown fat activation uses a thermogenic response involving the release of norepinephrine, which initiates cAMP-dependent protein kinase (PKA) and p38-MAPK signaling, leading to lipolysis that liberates free fatty acids (FFAs), which in turn drive UCP-1-induced proton uncoupling.4 UCP-1 concentrations are further increased through secondary pathways involving the phosphorylation of PPAR-gamma co-activator 1alpha (PGC1alpha), cAMP response element binding protein (CREB) and activating transcription factor 2 (ATF2).8 Among these three elements PGC1alpha appears to be the most important, co-activating many transcription factors and playing an important role in linking oxidative metabolism and mitochondrial action.9

However, due to the complicated nature of SNS activation and its other downstream activators, attempts to replicate it in the form of weight loss drugs like Fenfluramine or Ephedra resulted in severe negative cardiovascular side effects like elevated blood pressure and heart rate.10 While some argue that either increasing the sensitivity or the rate of stimulation of the SNS can improve upon these results, the underlying elements associated with downstream activation of the SNS make facilitating direct influence too complicated. Therefore, from a biological perspective it makes more sense to focus on a downstream element that interacts with brown fat at a more localized level.

Just a side note based on the differing interactivity between brown/beige and white fat and the SNS: white fat appears to represent long-term energy storage and brown fat shorter-term energy, an unsurprising conclusion. However, frequent energy expenditure, like exercise, may condition the body to produce more beige fat versus white fat, as the body comes to view short-term energy needs as more valuable than long-term ones. Basically, if the above point is accurate then it stands to reason that a person would see more benefit from 20 minutes of exercise 6 days a week versus 40 minutes of exercise 3 days a week.

Moving away from direct SNS stimulation, perhaps the appropriate method of increasing browning involves increasing transcription and translation of UCP-1. Interestingly enough, empirical evidence exists to support the idea that retinoic acid could be an effective inducer of UCP-1 gene transcription in mice, operating through a non-adrenergic pathway.11,12 However, a more focused study using loss-of-function techniques involving retinaldehyde dehydrogenase, which is responsible for converting retinal to retinoic acid, determined that retinal, not retinoic acid, is the major inducer of brown fat activity.13 Unfortunately there is no direct understanding regarding the proportional response of brown fat to retinal or retinoic acid. Therefore, the general fat-soluble nature of vitamin A will probably make it difficult to utilize its derivatives as biological stimulants for brown fat activation or browning.

Another possible strategy to stimulate browning is through alternatively activated (type 2/M2) macrophages induced by eosinophils, which are commonly triggered by IL-4 and IL-13 signaling. When activated this way these macrophages gather around subcutaneous white fat and secrete catecholamines to facilitate browning in mice.14,15 A secondary means by which both IL-4 and IL-13 may influence fat conversion is their direct interaction with Th2 cytokines.16 Unfortunately, while on its face this strategy looks promising, in a similar vein to vitamin A it might not be effective due to unknown long-term side effects associated with IL-4 and IL-13 activation. Due to this lack of knowledge, if IL-4 or IL-13 is thought to be a viable biochemical strategy for inducing weight loss, proper long-term time lines for effects and dosages must be explored in humans, not just short-term studies in mice.

A more controversial agent in browning is fibronectin type III domain-containing protein 5, more frequently known as irisin. Due to its significantly increased rate of secretion from muscle under the strain of exercise, some individuals believe that irisin is a key mediator in browning, acting as a myokine;17 if this characterization is accurate then irisin could be a significant player in the biological benefits produced by exercise, including weight loss, white fat conversion and reduced levels of inflammation.18,19 However, other parties believe that because human studies with irisin have produced results that do not demonstrate benefits similar to those seen in mice, irisin is another molecule that cannot scale up its effectiveness when faced with the added biological complexity of humans versus mice.20-22

The key element within this controversy could be that irisin expression is augmented by the increased expression of PGC1alpha, but PGC1alpha increases the expression of many different proteins and other molecules, so the expression of irisin may not be relevant to the positive changes associated with exercise. Another factor may be a key difference between mice and humans: a mutation in the start codon of the human gene involved in the production of irisin, which significantly reduces irisin availability.23 This mutation could explain why, despite a very conserved genetic sequence, humans do not see anywhere near the benefit mice do. If this explanation is correct it potentially still leaves the door open to directly injecting irisin into the body to increase concentrations in an attempt to aid exercise-derived results, but if PGC1alpha is the key, then this increased concentration of irisin could be of minimal consequence.

Another potential element that demonstrates a significant concentration increase in accordance with increased PGC1alpha is a hormone known as meteorin-like (Metrnl).24 The concentration of this hormone increases in both skeletal muscle and adipose tissue during exercise and exposure to cold temperatures, in step with increases in PGC1alpha concentrations. When Metrnl circulates in the blood it seems to produce a widespread effect that induces browning, resulting in a significant increase in energy expenditure.24 The influence of Metrnl on white fat does not appear to be due to direct interaction with the fat, but instead to indirect action on various immune cells, most notably M2 macrophages via the eosinophil pathway, which then interact with the fat through activation of various pro-thermogenic actions.24 As discussed above this interaction with eosinophils appears to function through IL-4 and IL-13 signaling, indicating a common pathway purpose between IL-4/IL-13 and the original SNS pathway. Not surprisingly, blocking Metrnl has a negative effect on the biological thermogenic response.24

Another potential strategy for browning may be targeting appropriate receptors instead of specific molecules; with this strategy in mind one potential target could be transient receptor potential vanilloid-4 (TRPV4). TRPV4 acts as a negative regulator of browning through its negative action against PGC1alpha and the thermogenic pathway in general.25 In addition, TRPV4 appears to activate various pro-inflammatory genes that interact with white adipose tissue, making it more difficult to facilitate browning even if the appropriate signals are present. TRPV4 inhibition and genetic ablation in mice significantly increase resistance to obesity and insulin resistance.25 The link between inflammation and thermogenesis is highlighted by the activity of TRPV4, which is one of the early triggers for immune cell chemoattraction.25

Obesity may also produce a positive feedback effect through TRPV4 by increasing cellular swelling and stretching through the ERK1/2 pathway, which increases the rate of TRPV4 activation.26,27 However, the validity of TRPV4 as a therapeutic target remains questionable, for TRPV4 expression not only influences fat/energy expenditure but also osmotic regulation and bone formation, and plays some role in brain function.25,28,29 Fortunately a number of the issues with TRPV4 mutations/malfunction appear to be developmental rather than post-developmental in influence; thus TRPV4 therapies could still be valid.

Natriuretic peptides (NPs) are hormones typically produced in the heart in two operational forms: atrial and ventricular. Both of these hormones appear to play a role in browning through association with the adrenergic pathway.30 The most compelling evidence supporting this behavior is that mice lacking NP clearance receptors demonstrated significantly enhanced thermogenic gene expression in both white and brown adipose tissue.30 Also, direct application of ventricular NP in mice increased energy expenditure.30 In addition to the above results, NPs are an inherently attractive therapeutic possibility because appropriate receptors are located in the white and brown fat of both rats and humans31,32 and these receptors go through periods of significant decline in expression during fasting,33 which may account for some of the benefits seen from low-calorie diets.

Atrial NPs increase lipolysis in human adipocytes similarly to catecholamines (increasing cAMP levels and activating PKA), although whether this increase is induced through interaction with beta-adrenergic receptors is unclear.34 Some believe that NPs activate the guanylyl cyclase-containing receptor NPRA, producing the second messenger cGMP, which activates cGMP-dependent protein kinase (PKG).35,36 PKA and PKG have similar mechanisms for substrate phosphorylation, including similar targets in adipocytes,36 which may explain why atrial NPs act similarly to catecholamines.

Recall from above that one of the means of inducing browning, especially for those tissues distant from SNS-based neurons, is macrophage recruitment. This recruitment appears to be initiated by CCR2 and IL-4, for when either is eliminated from mouse models the conversion no longer occurs.15 Tyrosine hydroxylase (Th) is also important in this process, facilitating the biosynthesis of catecholamines and subsequent PKA activity.

With respect to producing a biomedical agent to enhance browning, there appear to be three major pathways in play: 1) the SNS pathway, producing a direct activation response; 2) the macrophage recruitment pathway, potentially involving Metrnl, which activates IL-4 and IL-13 signaling eventually leading to PKA activation and an indirect activation response; 3) the NP activation pathway, which eventually leads to PKG activation and an indirect activation response. As mentioned earlier, SNS pathway enhancement has already been attempted by at least two drugs and failed miserably, so that method is probably out. In addition the SNS pathway does not appear to have as much browning potential as the PKA or PKG pathways due to its reliance on the location of certain nerve fibers.

Enhancing macrophage recruitment could be a good strategy, but there appears to be little information regarding negative effects associated with short-term, high-frequency enhancement of IL-4 or IL-13 concentrations. Some reports have suggested an increase in allergic symptoms, but any more severe consequences are unknown. This is not to say that enhancing IL-4 or IL-13 is not a valid therapeutic strategy, but its overall value is unknown. In contrast, enhancement of NPs appears to be a more stable choice due to positive results in initial exploration of both the application and the expected negative side effects. First, NPs can be administered via the nose-brain pathway, enabling access to the brain while avoiding some potential systemic side effects.37 Second, there appear to be few, if any, significant side effects to intranasal NP application, at least in the short term.38

Overall, the above discussion has merely identified some of the more promising candidates for enhancing the browning of white fat. One could argue that resorting to drugs to enhance the overall health of an individual, versus simple diet and exercise, is a regretful strategy. Unfortunately the reality of modern society is that more and more people seem to have less available time to exercise or eat right. Combined with a mounting obesogenic external environment (increased pollution and industrial chemicals like BPA), this drug enhancement strategy may be the most time- and economically efficient means to ensure proper weight control and overall health for the future.

Citations –

1. van Marken Lichtenbelt, W, et al. “Cold-activated brown adipose tissue in healthy men.” The New England Journal of Medicine. 2009. 360:1500-08.

2. Lowell, B, and Spiegelman, B. “Towards a molecular understanding of adaptive thermogenesis.” Nature. 2000. 404:652-60.

3. Seale, P, et al. “PRDM16 controls a brown fat/skeletal muscle switch.” Nature. 2008. 454:961–967.

4. Sidossis, L, and Kajimura, S. “Brown and beige fat in humans: thermogenic adipocytes that control energy and glucose homeostasis.” J. Clin. Invest. 2015. 125(2):478-486.

5. Long, J, et al. “A smooth muscle-like origin for beige adipocytes.” Cell Metab. 2014. 19(5):810–820.

6. Kajimura, S, and Saito, M. “A new era in brown adipose tissue biology: molecular control of brown fat development and energy homeostasis.” Annu Rev Physiol. 2014. 76:225–249.

7. Harms, M, and Seale, P. “Brown and beige fat: development, function and therapeutic potential.” Nat Med. 2013. 19(10):1252–1263.

8. Collins, S. “β-Adrenoceptor signaling networks in adipocytes for recruiting stored fat and energy expenditure.” Front Endocrinol (Lausanne). 2011. 2:102.

9. Handschin, C, and Spiegelman, B. “Peroxisome proliferator-activated receptor gamma coactivator 1 coactivators, energy homeostasis, and metabolism.” Endocr. Rev. 2006. 27:728–735.

10. Yen, M, and Ewald, M. “Toxicity of weight loss agents.” J. Med. Toxicol. 2012. 8:145–152.

11. Alvarez, R, et al. “A novel regulatory pathway of brown fat thermogenesis, retinoic acid is transcriptional activator of the mitochondrial uncoupling protein gene.” J. Biol. Chem. 270:5666-5673.

12. Mercader, J, et al. “Remodeling of white adipose tissue after retinoic acid administration in mice.” Endocrinology. 2006. 147:5325–5332.

13. Kiefer, F, et al. “Retinaldehyde dehydrogenase 1 regulates a thermogenic program in white adipose tissue.” Nat. Med. 2012. 18:918–925.

14. Nguyen, K, et al. “Alternatively activated macrophages produce catecholamines to sustain adaptive thermogenesis.” Nature. 2011. 480(7375):104–108.

15. Qiu, Y, et al. “Eosinophils and type 2 cytokine signaling in macrophages orchestrate development of functional beige fat.” Cell. 2014. 157(6):1292–1308.

16. Stanya, K, et al. “Direct control of hepatic glucose production by interleukin-13 in mice.” The Journal of Clinical Investigation. 2013. 123(1):261-271.

17. Pedersen, B, and Febbraio, M. “Muscle as an endocrine organ: focus on muscle-derived interleukin-6.” Physiological Reviews. 2008. 88(4):1379–406.

18. Bostrom, P, et al. “A PGC1-α-dependent myokine that drives brown-fat-like development of white fat and thermogenesis.” Nature. 2012. 481(7382):463–468.

19. Lee, P, et al. “Irisin and FGF21 are cold-induced endocrine activators of brown fat function in humans.” Cell Metab. 2014. 19(2):302–309.

20. Erickson, H. “Irisin and FNDC5 in retrospect: An exercise hormone or a transmembrane receptor?” Adipocyte. 2013. 2(4):289-293.

21. Timmons, J, et al. “Is irisin a human exercise gene?” Nature. 2012. 488(7413):E9-11.

22. Albrecht, E, et al. “Irisin - a myth rather than an exercise-inducible myokine.” Scientific Reports. 2015. 5:8889.

23. Ivanov, I, et al. “Identification of evolutionarily conserved non-AUG-initiated N-terminal extensions in human coding sequences.” Nucleic Acids Research. 2011. 39(10):4220-4234.

24. Rao, R, et al. “Meteorin-like is a hormone that regulates immune-adipose interactions to increase beige fat thermogenesis.” Cell. 2014. 157:1279-1291.

25. Ye, L, et al. “TRPV4 is a regulator of adipose oxidative metabolism, inflammation, and energy homeostasis.” Cell. 2012. 151:96-110.

26. Gao, X, Wu, L, and O’Neil, R. “Temperature-modulated diversity of TRPV4 channel gating: activation by physical stresses and phorbol ester derivatives through protein kinase C-dependent and -independent pathways.” J. Biol. Chem. 2003. 278:27129–27137.

27. Thodeti, C, et al. “TRPV4 channels mediate cyclic strain-induced endothelial cell reorientation through integrin-to-integrin signaling.” Circ. Res. 2009. 104:1123–1130.

28. Masuyama, R, et al. “TRPV4-mediated calcium influx regulates terminal differentiation of osteoclasts.” Cell Metab. 2008. 8:257–265.

29. Phelps, C, et al. “Differential regulation of TRPV1, TRPV3, and TRPV4 sensitivity through a conserved binding site on the ankyrin repeat domain.” J. Biol. Chem. 2010. 285:731–740.

30. Bordicchia, M, et al. “Cardiac natriuretic peptides act via p38 MAPK to induce the brown fat thermogenic program in mouse and human adipocytes.” The Journal of Clinical Investigation. 2012. 122(3):1022-1036.

31. Sarzani, R, et al. “Comparative analysis of atrial natriuretic peptide receptor expression in rat tissues.” J Hypertens Suppl. 1993. 11(5):S214–215.

32. Sarzani, R, et al. “Expression of natriuretic peptide receptors in human adipose and other tissues.” J Endocrinol Invest. 1996. 19(9):581–585.

33. Sarzani, R, et al. “Fasting inhibits natriuretic peptides clearance receptor expression in rat adipose tissue.” J Hypertens. 1995. 13(11):1241–1246.

34. Sengenes, C, et al. “Natriuretic peptides: a new lipolytic pathway in human adipocytes.” FASEB J. 2000. 14(10):1345–1351.

35. Potter, L, and Hunter, T. “Guanylyl cyclase-linked natriuretic peptide receptors: structure and regulation.” J Biol Chem. 2001. 276(9):6057–6060.

36. Sengenes, C, et al. “Involvement of a cGMP-dependent pathway in the natriuretic peptide-mediated hormone-sensitive lipase phosphorylation in human adipocytes.” J Biol Chem. 2003. 278(49):48617–48626.

37. Illum, L. “Transport of drugs from nasal cavity to the central nervous system.” Eur. J. Pharm. Sci. 11:1-18.

38. Koopmann, A, et al. “The impact of atrial natriuretic peptide on anxiety, stress and craving in patients with alcohol dependence.” Alcohol and Alcoholism. 2014. 49(3):282-286.

Wednesday, May 27, 2015

Where is my Solar and Wind Only City?

Two years ago this blog proposed a challenge to solar and wind supporters: if solar and wind are indeed the energy mediums of the future and do not require the assistance of other energy mediums (most notably fossil fuels like coal and natural gas), then supporters should empirically demonstrate this potential by transitioning a single medium-sized city (10,000 – 15,000 individuals) to a grid where at least 70% of the electricity, not even all energy, is produced by solar and/or wind sources. Unfortunately, despite the passage of two years and the so-called further expansion of solar and wind technology, no such experiment has been conducted.

This failure to produce a model city that would empirically demonstrate the actual ability of solar and wind to supply the bulk of electricity, and possibly all energy, beyond simple hype is troubling. Are solar and wind proponents so irresponsible that they are willing to gamble the future of society on merely their hopes, dreams, and personal preferences rather than raw data? Do they think that incorporation of solar and wind into a grid steadily advancing from 10% to 20%, then 30%, 40%, 50%, etc. will run perfectly with no significant problems? If so, then the solar and wind supporters who believe these things should be stripped of all of their credibility and influence; those who do not believe in such a perfect transition should begin immediately petitioning to accept the challenge.

To the solar and wind proponents who object to the above characterization on the grounds that in March Georgetown, Texas (population approximately 48,000) proposed a plan to get all of its electricity from solar and wind sources, in essence meeting this challenge: hold your horses. While it is true that there has been an initial arrangement for Georgetown Utility Systems to purchase 294 MW (144 MW wind and 150 MW solar) from Spinning Spur Wind Farm (owned by EDF Renewable Energy) and a SunEdison installation, this is only an initial arrangement; no actual testing or application has occurred yet.

A more pertinent issue regarding the use of Georgetown as an example is that there is no specific information pertaining to how Georgetown Utility Systems will manage this change in supplier. Basically, the only public reporting on this strategy has been puff-hype pieces with no real substance or details. Both Spinning Spur Wind Farm and the yet-to-be-identified SunEdison site have not been fully constructed, are not operational and do not have any secondary storage capacity; thus any electricity produced by these installations will be live, and when the installations are not producing electricity there will be no electricity to provide to Georgetown.

Initially there are at least three major questions that must be addressed to legitimize Georgetown as a model for a solar/wind-only powered city. First, where is the detailed analysis of how electricity, and possibly even energy flows, would be properly compensated to avoid brownouts in times when there is insufficient electricity being produced by solar and wind sources? Simply saying “the sun shines in the day and the wind blows when the sun is not shining” is laughable and severely damages credibility. Anyone who thinks that there will not be periods of intermittence from both Spinning Spur and the SunEdison site is harboring an inaccurate belief. Basically, show that 100% renewable can be done using math, not flowery words and misplaced hype; note that it is important to also include any transmission and inverter losses in the calculation and to separate nameplate capacity from actual operational capacity.
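To illustrate the kind of math being demanded, here is a back-of-the-envelope sketch built from the 294 MW figure in the arrangement above; the capacity factors and loss rates below are assumed illustrative values, not published Georgetown figures:

```python
# Nameplate versus delivered energy for the Georgetown arrangement
# (144 MW wind + 150 MW solar, per the post).  The capacity factors
# and loss rates are ASSUMED illustrative values, not published data.

WIND_NAMEPLATE_MW = 144.0
SOLAR_NAMEPLATE_MW = 150.0
WIND_CAPACITY_FACTOR = 0.35    # assumed typical onshore Texas wind
SOLAR_CAPACITY_FACTOR = 0.20   # assumed typical fixed-tilt solar
INVERTER_LOSS = 0.03           # assumed DC-to-AC inversion loss (solar)
TRANSMISSION_LOSS = 0.05       # assumed line losses to the city
HOURS_PER_YEAR = 8760

def annual_delivered_mwh():
    """Expected annual energy actually delivered, not nameplate energy."""
    wind_mwh = WIND_NAMEPLATE_MW * WIND_CAPACITY_FACTOR * HOURS_PER_YEAR
    solar_mwh = (SOLAR_NAMEPLATE_MW * SOLAR_CAPACITY_FACTOR *
                 HOURS_PER_YEAR * (1 - INVERTER_LOSS))
    return (wind_mwh + solar_mwh) * (1 - TRANSMISSION_LOSS)

# Average delivered power, to compare against the 294 MW nameplate total:
avg_mw = annual_delivered_mwh() / HOURS_PER_YEAR
```

Even with these generous assumptions, average delivered power works out to roughly a quarter of the nameplate total, which is exactly why intermittence and storage questions cannot be waved away.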

Second, it stands to reason that proponents of a solar/wind-only city will not allow the use of natural gas or coal in a backup capacity during these periods of intermittence; therefore, during periods of excess solar and wind production, electricity must be stored in a battery for use at a future time. So what type of battery structure(s) is going to be utilized to store that excess energy and what is the economic feasibility of using this structure? If no battery infrastructure is believed to be feasible or economical, then what type of energy medium will be tapped to act as backup in lieu of a fossil fuel medium, and how will it be properly incorporated?
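The scale of that storage question can be roughed out as follows; the demand level, lull duration, allowable depth of discharge and round-trip efficiency are all assumed illustrative values, not figures from any actual Georgetown plan:

```python
# Rough sizing of battery storage needed to ride through a period of
# zero solar/wind output.  All inputs are ASSUMED illustrative values.

def battery_mwh_needed(avg_demand_mw, lull_hours,
                       depth_of_discharge=0.8, round_trip_eff=0.85):
    """Installed energy capacity (MWh) to cover `lull_hours` of no generation."""
    usable_mwh = avg_demand_mw * lull_hours
    # Oversize the pack: only part of its capacity is safely usable, and
    # some energy is lost charging and discharging.
    return usable_mwh / (depth_of_discharge * round_trip_eff)

# e.g. a 50 MW average city load through a calm, overcast 12-hour stretch:
capacity = battery_mwh_needed(50, 12)
```

Under these assumptions a single half-day lull already implies nearly 900 MWh of installed capacity, which frames the economic feasibility question the proponents need to answer.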

Third, how will consumer costs for energy change through the transition away from fossil fuels over time, i.e. what will costs be in year 1, in year 10, and so on? Simply saying it will cost less is not sufficient. It must be demonstrated that it will cost less both now and in the future, and if it will not cost less in the future, what forms of compensation, if any, will be provided to the residents of Georgetown?

Overall these are just the three most basic questions that must be addressed before anyone should accept the idea of Georgetown, Texas being a legitimate 100% solar/wind-powered city when its plan is put into place a few years from now. If these questions are not answered with accurate specifics that are later properly executed over time, then Georgetown loses all significance as both a legitimate and a symbolic experiment for the validity of a solar and wind “future”.

Of course it must be understood that the results in Georgetown are only an initial step; success only lends support to the possibility, not any guarantee of national eventuality. So how about it, solar and wind supporters: are you actually ready to put your theories to the test, or are you simply content with the unscientific and irrational belief that everything will magically work out without the need for essential specifics, realistic assumptions, honest economics (which are incredibly lacking in most pro-solar and wind papers) and valid proofs of concept?

Wednesday, May 6, 2015

A Theory Behind the Relationship Between Processed Foods and Obesity

While there has been a general slowing in the progression of global obesity, especially in the developed world, there has yet to be a reversal of this detrimental trend. A recent study has suggested that one aspect of influence regarding obesity progression lies with the consumption of foods that incorporate emulsifiers and how these emulsifiers interact with intestinal bacteria, including increasing the probability of developing metabolic syndrome in mice.1 Based on this result, understanding the digestive process may be an important element in understanding how emulsifiers and emulsions may influence weight outcomes.

An emulsion is a mixture of at least two immiscible liquids, a characteristic commonly seen when oil is added to water: the oil floats on the surface of the water as a two-layer system until it is mixed to form the emulsion. Due to this immiscibility most emulsions are inherently unstable, as “similar” droplets rejoin one another and eventually recreate the two distinct layers. Within an emulsion the liquids occupy two separate roles, a continuous phase and a droplet (dispersed) phase, depending on the concentrations of the liquids present. Because of this inherent instability most emulsions are stabilized with the addition of an emulsifier. These agents are commonly used in many food products including various breads, pastas/noodles, and milk/ice cream.

Emulsifier-based stabilization occurs by reducing interfacial tension between immiscible phases and by increasing the repulsion between the dispersed droplets, through either steric repulsion or electrostatic repulsion. Emulsifiers can produce these effects because they are amphiphiles (they have two different ends): a hydrophilic end that interacts with the water layer but not the oil layer, and a hydrophobic end that interacts with the oil layer but not the water layer. Steric repulsion is born from volume restrictions imposed by direct physical barriers, while electrostatic repulsion is exactly its namesake: electrically charged surfaces repelling one another as they approach. As previously mentioned, some recent research suggests that the consumption of certain emulsifiers by mice produced negative health outcomes relative to controls. Why would such an outcome occur?

A typical dietary starch, found in many of the foods that utilize emulsifiers, is composed of long chains of glucose called amylose, a polysaccharide.2 These polysaccharides are first broken down in the mouth by chewing and saliva, converting the food from a cohesive macro structure into scattered smaller chains of glucose. Disaccharides like lactose and sucrose are broken down into glucose and a second sugar (galactose and fructose, respectively).

Absorption and complete degradation begin in earnest through hydrolysis by salivary and pancreatic amylase in the upper small intestine, with little hydrolysis occurring in the stomach.3 Contact, or membrane, digestion occurs on the brush border membranes.4 Polysaccharides break down into oligosaccharides that are then broken down into monosaccharides by surface enzymes on the brush borders of enterocytes.5 Microvilli on the enterocytes then direct the newly formed monosaccharides to the appropriate transport site.5 Disaccharidases in the brush border ensure that only monosaccharides, not lingering disaccharides, are transported. This process differs from protein digestion, which largely involves degradation in gastric juices comprised of hydrochloric acid and pepsin followed by transfer to the duodenum.

Within the small intestine free fatty acid concentration increases significantly as oils and fats are hydrolyzed at a faster rate than in the stomach due to the increased presence of bile salts and pancreatic lipase.3 It is thought that the droplet size of emulsified lipids influences digestion and absorption, where smaller sizes allow greater lipase access during duodenal lipolysis.6,7 The smaller the droplet size, the finer the emulsion in the duodenum, leading to a higher degree of lipolysis.8 Not surprisingly, gastric lipase activity is also greater in thoroughly mixed emulsions than in coarse ones.

Typically hydrophobic interactions are responsible for the self-assembly of amphiphiles: water molecules gain entropy as the hydrophobic ends of the amphiphilic molecules are buried in the cores of micelles.9 However, in emulsions the presence of oils produces a low-polarity environment that can facilitate reverse self-assembly,10,11 with a driving force born from the attraction of hydrogen bonding. For example lecithin, a zwitterionic phospholipid with two hydrocarbon tails, forms reverse spherical or ellipsoidal micelles when exposed to oil.12 Basically, emulsions could have the potential to significantly increase the free hydrogen concentration of the stomach.

This potential increase in free hydrogen could be an important part of why emulsions produce negative health outcomes in model organisms.1 One of the significant interactions governing the concentrations and types of intestinal bacteria is the rate of interspecies hydrogen transfer from hydrogen-producing bacteria to hydrogen-consuming methanogens. Note that non-obese individuals have small methanogen populations in the intestine whereas obese individuals have larger ones, and it is thought that the methanogen population expands first, before one gains significant weight.13,14 The importance of this relationship is best demonstrated by understanding the biochemical process involved in the formation of fatty acids in the body.

Methanogens like Methanobrevibacter smithii enhance fermentation efficiency by removing excess free hydrogen and formate in the colon. A reduced concentration of hydrogen leads to an increased rate of conversion of insoluble fibers into short-chain fatty acids (SCFAs).13 Propionate, acetate, butyrate and formate are the most common SCFAs formed and absorbed across the intestinal epithelium, providing a significant portion of the energy for intestinal epithelial cells and promoting the survival, differentiation and proliferation that maintain an effective intestinal lining.13,15,16 Butyric acid is also utilized by the colonocytes.17 Formate can also be directly used by hydrogenotrophic methanogens, and propionate and lactate can be fermented to acetate and H2.13

Overall the population of archaea in the gut, largely associated with Methanobrevibacter smithii, is tied to obesity, with the key factor being the availability of free hydrogen. If there is abundant free hydrogen then there is a higher probability of a large archaeal population; otherwise the population remains very low because its ‘food source’ is limited. Therefore, the consumption of food products with emulsions or emulsion-like characteristics could increase available free hydrogen concentrations, which would change the intestinal bacteria composition in a negative manner and increase the probability that an individual becomes obese. This hypothesis coincides with existing evidence from model organisms that emulsion consumption can negatively alter intestinal bacteria. One possible mechanism governing this negative influence is how the change in bacterial composition influences the available concentration of SCFAs, which could change the stability of the intestinal lining.

In addition to influencing hydrogen concentrations in the gut, emulsions also appear to have a significant influence on cholecystokinin (CCK) concentrations. CCK plays a meaningful role in both digestion and satiety, two components of food consumption that significantly influence both body weight and intestinal bacteria composition. Most of these concentration changes occur in the small intestine, most notably in the duodenum and jejunum.18 The largest element influencing CCK release is the amount of fatty acid present in the chyme.18 CCK is responsible for inhibiting gastric emptying, decreasing gastric acid secretion and increasing production of specific digestive secretions like hepatic bile and other bile salts, which form amphipathic lipids that emulsify fats.

When compared against non-emulsions, emulsion consumption appears to reduce the feedback effect that suppresses hunger after food intake. This effect is principally the result of changes in CCK concentrations rather than in other signaling molecules like GLP-1.19 Emulsion digestion begins when lipases bind to the surface of the emulsion droplets; the effectiveness of lipase binding increases with decreasing droplet size. Small emulsion droplets tend to have more complex microstructures, which produce more surface area and allow for more effective digestion.
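The surface-area point can be illustrated with a quick geometry calculation: for a fixed total volume of oil V split into uniform spherical droplets of radius r, the total interfacial area works out to 3V/r, so shrinking droplets tenfold gives lipases ten times the binding surface. A minimal sketch (illustrative numbers, not measured values):

```python
def total_droplet_area(oil_volume_m3: float, droplet_radius_m: float) -> float:
    """Total surface area (m^2) of a fixed oil volume dispersed as uniform spheres.

    With n = V / (4/3 * pi * r^3) droplets, area = n * 4 * pi * r^2 = 3 * V / r.
    """
    return 3.0 * oil_volume_m3 / droplet_radius_m

# 1 mL of oil (1e-6 m^3) as 10 um droplets versus 1 um droplets:
coarse = total_droplet_area(1e-6, 10e-6)  # coarse emulsion
fine = total_droplet_area(1e-6, 1e-6)     # fine emulsion, 10x the area
print(coarse, fine)
```

The inverse dependence on radius is why the text above links finer emulsions to a higher degree of lipolysis.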

This higher rate of breakdown produces a more rapid release of fatty acids, and the presence of free fatty acids in the small intestinal lumen is critical for gastric emptying and CCK release.20 This accelerated breakdown creates a relationship between CCK concentration and emulsion droplet size, where the larger the droplet size the lower the released CCK concentration.21 One of the main reasons larger emulsion droplets produce less hunger satisfaction is that with reduced CCK release and slower emulsion breakdown there is less feedback slowing intestinal transit. Basically, food travels through the intestine at a faster rate because there are fewer digestive cues (feedback) to slow transit for the purpose of digestion.

As alluded to above, the type of emulsifier used to produce the emulsion appears to be the most important element in how an emulsion influences digestion. For example, the lipid and fatty acid concentrations produced from digestion of a yolk lecithin emulsion were up to 50% smaller than those from an emulsion using polysorbate 20 (i.e. Tween 20) or caseinate.7 Basically, if certain emulsifiers are used the rate of emulsion digestion can be reduced, potentially increasing the concentration of bile salts in the small intestine, which could produce a higher probability of negative intestinal events.

Furthermore, studies using low-molecular-mass emulsifiers (two non-ionic, two anionic and one cationic) demonstrated three tiers of triglyceride (TG) lipolysis governed by the emulsifier-to-bile salt ratio.3 At low emulsifier-to-bile ratios (<0.2 mM) there was no change in the solubilization capacity of the micelles, whereas between 0.2 mM and 2 mM solubilization capacity significantly increased, which limited interactions between the oil and destabilization reaction products, reducing oil degradation.3 At higher ratios (>2 mM) emulsifier molecules remain in the adsorption layer, heavily limiting lipase activity, which significantly reduces digestion and oil degradation.3
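As a reading aid, the three tiers just described can be summarized in a small lookup. This is purely illustrative: the 0.2 mM and 2 mM thresholds are taken from the study's reported ranges, and the function is a hypothetical helper, not a model of the underlying chemistry:

```python
def lipolysis_regime(emulsifier_to_bile_ratio_mM: float) -> str:
    """Map an emulsifier-to-bile salt ratio (mM) onto the three qualitative
    lipolysis tiers reported for low-molecular-mass emulsifiers (ref. 3)."""
    if emulsifier_to_bile_ratio_mM < 0.2:
        # Tier 1: micelle solubilization capacity unchanged
        return "unchanged solubilization capacity"
    elif emulsifier_to_bile_ratio_mM <= 2.0:
        # Tier 2: solubilization capacity increases, oil degradation reduced
        return "increased solubilization, reduced oil degradation"
    else:
        # Tier 3: emulsifier saturates the adsorption layer, blocking lipase
        return "adsorption layer blocked, lipase activity heavily limited"

print(lipolysis_regime(0.1))
print(lipolysis_regime(1.0))
print(lipolysis_regime(5.0))
```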

Another possible influencing factor could be a change in glucagon concentrations. There is evidence suggesting that increasing glucagon concentration in already-fed rats can produce hypersecretory activity in both the jejunum and ileum.22-24 It stands to reason that, given the activation potential of glucagon-like peptide-1 (GLP-1) in concert with CCK, glucagon plays some role. However, there are no specifics regarding how glucagon directly interacts with intestinal bacteria or with the changes in digestion rate associated with emulsions.

The mechanism behind why emulsions and their associated emulsifiers produce negative health outcomes in mice is unknown, but it stands to reason that both how emulsions change the rate of digestion and the available hydrogen concentration play significant roles. These two factors have sufficient influence on the composition and concentration of intestinal bacteria, which in turn influence a large number of digestive properties including nutrient extraction and SCFA concentration management. SCFA management may be the most pertinent issue regarding the metabolic syndrome outcomes seen in emulsifier-fed mice.

It appears that creating emulsions with smaller droplet sizes could mitigate negative outcomes, which can be accomplished by using lecithin over other types of emulsifiers. Overall, while emulsifiers may be a necessary element of modern life for ensuring food quality, instructing companies on the proper emulsifier to use at the appropriate ratios should have a positive effect on managing any detrimental interaction between emulsions and gut bacteria.

Citations –

1. Chassaing, B, et al. “Dietary emulsifiers impact the mouse gut microbiota promoting colitis and metabolic syndrome.” Nature. 2015. 519(7541):92-96.

2. Choy, A, et al. “The effects of microbial transglutaminase, sodium stearoyl lactylate and water on the quality of instant fried noodles.” Food Chemistry. 2010. 122:957-964.

3. Vinarov, Z, et al. “Effects of emulsifier charge and concentration on pancreatic lipolysis: 2. Interplay of emulsifiers and biles.” Langmuir. 2012. 28:12140-12150.

4. Ugolev, A, and Delaey, P. “Membrane digestion – a concept of enzymic hydrolysis on cell membranes.” Biochim Biophys Acta. 1973. 300:105-128.

5. Levin, R. “Digestion and absorption of carbohydrates from molecules and membranes to humans.” Am. J. Clin. Nutr. 1994. 59:690S-85.

6. Mu, H, and Hoy, C. “The digestion of dietary triacylglycerols.” Progress in Lipid Research. 2004. 43:105-133.

7. Hur, S, et al. “Effect of emulsifiers on microstructural changes and digestion of lipids in instant noodle during in vitro human digestion.” LWT – Food Science and Technology. 2015. 60:630-636.

8. Armand, M, et al. “Digestion and absorption of 2 fat emulsions with different droplet sizes in the human digestive tract.” American Journal of Clinical Nutrition. 1999. 70:1096-1106.

9. Njauw, C-W, et al. “Molecular interactions between lecithin and bile salts/acids in oils and their effects on reverse micellization.” Langmuir. 2013. 29:3879-3888.

10. Israelachvili, J. “Intermolecular and Surface Forces.” 3rd ed. Academic Press: San Diego. 2011.

11. Evans, D, and Wennerstrom, H. “The Colloidal Domain: Where Physics, Chemistry, Biology, and Technology Meet.” Wiley-VCH: New York. 2001.

12. Tung, S, et al. “A new reverse wormlike micellar system: mixtures of bile salt and lecithin in organic liquids.” J. Am. Chem. Soc. 2006. 128:5751-5756.

13. Zhang, H, et al. “Human gut microbiota in obesity and after gastric bypass.” PNAS. 2009. 106(7):2365-2370.

14. Turnbaugh, P, et al. “An obesity-associated gut microbiome with increased capacity for energy harvest.” Nature. 2006. 444(7122):1027-1031.

15. Son, G, Kremer, M, and Hines, I. “Contribution of gut bacteria to liver pathobiology.” Gastroenterology Research and Practice. 2010. doi:10.1155/2010/453563.

16. Luciano, L, et al. “Withdrawal of butyrate from the colonic mucosa triggers ‘mass apoptosis’ primarily in the G0/G1 phase of the cell cycle.” Cell and Tissue Research. 1996. 286(1):81-92.

17. Cummings, J, and Macfarlane, G. “The control and consequences of bacterial fermentation in the human colon.” Journal of Applied Bacteriology. 1991. 70:443-459.

18. Rasoamanana, R, et al. “Dietary fibers solubilized in water or an oil emulsion induce satiation through CCK-mediated vagal signaling in mice.” J. Nutr. 2012. 142:2033-2039.

19. Adam, T, and Westerterp-Plantenga, M. “Glucagon-like peptide-1 release and satiety after a nutrient challenge in normal-weight and obese subjects.” Br J Nutr. 2005. 93:845-851.

20. Little, T, et al. “Free fatty acids have more potent effects on gastric emptying, gut hormones, and appetite than triacylglycerides.” Gastroenterology. 2007. 133:1124-1131.

21. Seimon, R, et al. “The droplet size of intraduodenal fat emulsions influences antropyloroduodenal motility, hormone release, and appetite in healthy males.” Am. J. Clin. Nutr. 2009. 89:1729-1736.

22. Young, A, and Levin, R. “Diarrhoea of famine and malnutrition: investigations using a rat model. 1. Jejunal hypersecretion induced by starvation.” Gut. 1990. 31:43-53.

23. Young, A, and Levin, R. “Diarrhoea of famine and malnutrition: investigations using a rat model. 2. Ileal hypersecretion induced by starvation.” Gut. 1990. 31:162-169.

24. Lane, A, and Levin, R. “Enhanced electrogenic secretion in vitro by small intestine from glucagon treated rats: implications for the diarrhoea of starvation.” Exp. Physiol. 1992. 77:645-648.