Wednesday, May 6, 2015

A Theory Behind the Relationship Between Processed Foods and Obesity


While the progression of global obesity has generally slowed, especially in the developed world, the trend has yet to reverse. A recent study suggested that one contributing factor may be the consumption of foods containing emulsifiers, which interact with intestinal bacteria in ways that increased the probability of developing metabolic syndrome in mice.1 Based on this result, understanding the digestive process may be an important element in understanding how emulsifiers and emulsions influence weight outcomes.

An emulsion is a mixture of at least two immiscible liquids, a situation commonly seen when oil is added to water: the oil floats on the surface of the water in a two-layer system until it is mixed to form the emulsion. Because of this immiscibility most emulsions are inherently unstable, as "similar" droplets rejoin one another and eventually recreate two distinct layers. An emulsion consists of two elements, a continuous phase and a droplet (dispersed) phase, determined by the concentrations of the liquids present. Due to this inherent instability most emulsions are stabilized with the addition of an emulsifier. These agents are commonly used in many food products including various breads, pastas/noodles, and milk/ice cream.

Emulsifier-based stabilization occurs by reducing the interfacial tension between the immiscible phases and by increasing the repulsion between dispersed droplets, through either steric or electrostatic repulsion. Emulsifiers can produce these effects because they are amphiphiles (molecules with two different ends): a hydrophilic end that interacts with the water layer but not the oil layer, and a hydrophobic end that interacts with the oil layer but not the water layer. Steric repulsion arises from volume restrictions imposed by direct physical barriers, while electrostatic repulsion is exactly what its name implies: electrically charged surfaces repelling each other as they approach. As mentioned above, recent research has suggested that the consumption of certain emulsifiers by mice produced negative health outcomes relative to controls. Why would such an outcome occur?

A typical dietary starch, found in many of the common foods that incorporate emulsifiers, is composed of long chains of glucose called amylose, a polysaccharide.2 These polysaccharides are first broken down in the mouth, where chewing and saliva convert the food from a cohesive macro structure into scattered smaller chains of glucose. Other sugars like lactose and sucrose are broken down into glucose and a second monosaccharide (galactose, fructose, etc.).

Absorption and complete degradation begin in earnest in the upper small intestine through hydrolysis by salivary and pancreatic amylase, with little hydrolysis occurring in the stomach.3 Much of the remaining breakdown occurs not freely in the lumen but through contact, or membrane, digestion on brush border membranes.4 Polysaccharides break down into oligosaccharides that are then broken down into monosaccharides by surface enzymes on the brush borders of enterocytes.5 Microvilli on the enterocytes then direct the newly formed monosaccharides to the appropriate transport site.5 Disaccharidases in the brush border ensure that only monosaccharides, not lingering disaccharides, are transported. This process differs from protein digestion, which largely involves degradation in gastric juices composed of hydrochloric acid and pepsin, followed by transfer to the duodenum.

Within the small intestine free fatty acid concentration increases significantly as oils and fats are hydrolyzed at a faster rate than in the stomach due to the increased presence of bile salts and pancreatic lipase.3 The droplet size of emulsified lipids is thought to influence digestion and absorption, with smaller droplets allowing greater gastric lipase digestion and more complete duodenal lipolysis.6,7 The smaller the droplet size, the finer the emulsion in the duodenum and the higher the degree of lipolysis.8 Not surprisingly, gastric lipase activity is also greater in thoroughly mixed emulsions than in coarse ones.

Typically hydrophobic interactions are responsible for the self-assembly of amphiphiles: water molecules gain entropy as the hydrophobic portions of the amphiphilic molecules are buried in the cores of micelles.9 In emulsions, however, the presence of oils produces a low-polarity environment that can facilitate reverse self-assembly,10,11 with a driving force born from the attraction of hydrogen bonding. For example lecithin, a zwitterionic phospholipid with two hydrocarbon tails, forms reverse spherical or ellipsoidal micelles when exposed to oil.12 Basically, emulsions could have the potential to significantly increase the free hydrogen concentration in the digestive tract.

This potential increase in free hydrogen could be an important part of why emulsions produce negative health outcomes in model organisms.1 One of the significant interactions governing the concentrations and types of intestinal bacteria is the rate of interspecies hydrogen transfer from hydrogen-producing bacteria to hydrogen-consuming methanogens. Notably, non-obese individuals have small methanogen populations in the intestine whereas obese individuals have larger ones, and it is thought that the methanogen population expands before significant weight is gained.13,14 The importance of this relationship is best demonstrated by understanding the biochemical process involved in the formation of fatty acids in the body.

Methanogens like Methanobrevibacter smithii enhance fermentation efficiency by removing excess free hydrogen and formate in the colon. A reduced concentration of hydrogen leads to an increased rate of conversion of insoluble fibers into short-chain fatty acids (SCFAs).13 Propionate, acetate, butyrate and formate are the most common SCFAs formed and absorbed across the intestinal epithelium, providing a significant portion of the energy for intestinal epithelial cells and promoting their survival, differentiation and proliferation, which helps maintain an effective gut lining.13,15,16 Butyric acid is also utilized by colonocytes.17 Formate can be used directly by hydrogenotrophic methanogens, and propionate and lactate can be fermented to acetate and H2.13

Overall the population of archaea in the gut, largely represented by Methanobrevibacter smithii, is tied to obesity, with the key factor being the availability of free hydrogen. If there is abundant free hydrogen then a large archaeal population is likely; otherwise the population remains very low because its 'food source' is limited. Therefore, the consumption of food products with emulsions or emulsion-like components could increase the available free hydrogen concentration, which would change the intestinal bacteria composition in a negative manner and increase the probability that an individual becomes obese. This hypothesis coincides with existing evidence from model organisms that emulsifier consumption can negatively alter intestinal bacteria. One possible mechanism for this negative influence is that the change in bacterial composition alters the available concentration of SCFAs, which could change the stability of the intestinal lining.

In addition to influencing hydrogen concentrations in the gut, emulsions also appear to have a significant influence on cholecystokinin (CCK) concentrations. CCK plays a meaningful role in both digestion and satiety, two components of food consumption that significantly influence both body weight and intestinal bacteria composition. Most of these concentration changes occur in the small intestine, most notably in the duodenum and jejunum.18 The largest influence on CCK release is the amount of fatty acid present in the chyme.18 CCK is responsible for inhibiting gastric emptying, decreasing gastric acid secretion and increasing the release of specific digestive enzymes and of hepatic bile; bile salts are amphipathic lipids that emulsify fats.

When compared against non-emulsified foods, emulsion consumption appears to reduce the feedback effect that suppresses hunger after food intake. This effect is principally the result of changes in CCK concentrations rather than other signaling molecules like GLP-1.19 Emulsion digestion begins when lipases bind to the surface of the emulsion droplets, and the effectiveness of lipase binding increases with decreasing droplet size. Small emulsion droplets tend to have more complex microstructures, which provide more surface area and allow for more effective digestion.
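To put a rough number on the surface-area point, splitting a fixed volume of oil into smaller spheres increases the total oil-water interface in inverse proportion to droplet radius (total area = 3V/r). The short sketch below is illustrative only; the droplet radii and oil volume are arbitrary assumptions, not values taken from the cited studies.

```python
import math

def total_surface_area_um2(oil_volume_ul, droplet_radius_um):
    """Total droplet surface area (um^2) when a fixed oil volume is
    dispersed as identical spheres of the given radius."""
    oil_volume_um3 = oil_volume_ul * 1e9  # 1 uL = 1e9 um^3
    droplet_volume_um3 = (4.0 / 3.0) * math.pi * droplet_radius_um ** 3
    n_droplets = oil_volume_um3 / droplet_volume_um3
    return n_droplets * 4.0 * math.pi * droplet_radius_um ** 2

# Assumed radii for a coarse vs. progressively finer emulsion of 1 uL of oil.
for radius_um in (50.0, 5.0, 0.5):
    area = total_surface_area_um2(oil_volume_ul=1.0, droplet_radius_um=radius_um)
    print(f"radius {radius_um:5.1f} um -> total interfacial area {area:.2e} um^2")
# The area grows tenfold each time the radius shrinks tenfold (area = 3V/r),
# i.e. more surface for lipases to bind.
```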

This higher rate of breakdown produces a more rapid release of fatty acids, and the presence of free fatty acids in the small intestinal lumen is a critical signal for slowing gastric emptying and triggering CCK release.20 This creates a relationship between CCK concentration and emulsion droplet size in which the larger the droplet size, the lower the released CCK concentration.21 One of the main reasons why larger-droplet emulsions produce less hunger satisfaction is that with less CCK release and slower emulsion breakdown there is less feedback slowing of intestinal transit. Basically, food travels through the intestine at a faster rate because there are fewer digestion-driven cues (feedback) to slow transit for the purpose of digestion.

As alluded to above, the type of emulsifier used to produce the emulsion appears to be the most important element in how an emulsion influences digestion. For example, the lipid and fatty acid concentrations produced from digestion of a yolk lecithin emulsion were up to 50% lower than those from an emulsion made with polysorbate 20 (i.e. Tween 20) or caseinate.7 Basically, if certain emulsifiers are used the rate of emulsion digestion can be reduced, potentially increasing the concentration of bile salts in the small intestine, which could raise the probability of negative intestinal events.

Furthermore, studies using low-molecular-mass emulsifiers (two non-ionic, two anionic and one cationic) demonstrated three tiers of triglyceride (TG) lipolysis governed by the emulsifier-to-bile salt ratio.3 At low emulsifier concentrations relative to bile (< 0.2 mM) there was no change in the solubilization capacity of the micelles, whereas between 0.2 mM and 2 mM the solubilization capacity significantly increased, which limited interactions between the oil and destabilization reaction products and reduced oil degradation.3 At higher concentrations (> 2 mM) emulsifier molecules remain in the adsorption layer, heavily limiting lipase activity and significantly reducing digestion and oil degradation.3
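Read purely as a classification rule, the three regimes described in that study reduce to two concentration thresholds. The sketch below encodes only the cutoffs quoted above; the function name and return strings are illustrative, not terminology from the paper.

```python
def lipolysis_regime(emulsifier_conc_mM):
    """Coarse three-tier classification of triglyceride lipolysis behavior by
    emulsifier concentration relative to bile, using the thresholds quoted above."""
    if emulsifier_conc_mM < 0.2:
        return "low: micelle solubilization capacity essentially unchanged"
    if emulsifier_conc_mM <= 2.0:
        return "intermediate: solubilization capacity increases, oil degradation reduced"
    return "high: emulsifier dominates the adsorption layer, lipase activity heavily limited"

for conc in (0.1, 1.0, 5.0):  # mM, illustrative values
    print(f"{conc} mM -> {lipolysis_regime(conc)}")
```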

Another possible influencing factor could be a change in glucagon concentrations. There is evidence suggesting that increasing glucagon concentration in already fed rats can produce hypersecretory activity in both the jejunum and ileum.22-24 It stands to reason that, given the activation potential of glucagon-like peptide-1 (GLP-1) in concert with CCK, glucagon plays some role. However, there are no specifics regarding how glucagon directly interacts with intestinal bacteria or with the changes in digestion rate associated with emulsions.

The mechanism by which emulsions and their associated emulsifiers produce negative health outcomes in mice is unknown, but it stands to reason that both the way emulsions change the rate of digestion and the resulting hydrogen concentration play significant roles. These two factors have sufficient influence on the composition and concentration of intestinal bacteria, which in turn influence a large number of digestive properties including nutrient extraction and SCFA concentration management. SCFA management may be the most pertinent issue for the metabolic syndrome outcomes seen in emulsifier-fed mice.

It appears that creating emulsions with smaller droplet sizes, for example by using lecithin over other types of emulsifiers, could mitigate negative outcomes. Overall, while emulsifiers may be a necessary element of modern life to ensure food quality, instructing companies on the proper emulsifier to use at the appropriate ratios should have a positive effect on managing any detrimental interaction between emulsions and gut bacteria.



Citations –

1. Chassaing, B, et al. "Dietary emulsifiers impact the mouse gut microbiota promoting colitis and metabolic syndrome." Nature. 2015. 519(7541):92-96.

2. Choy, A, et al. "The effects of microbial transglutaminase, sodium stearoyl lactylate and water on the quality of instant fried noodles." Food Chemistry. 2010. 122:957-964.

3. Vinarov, Z, et al. "Effects of emulsifiers charge and concentration on pancreatic lipolysis: 2. interplay of emulsifiers and biles." Langmuir. 2012. 28:12140-12150.

4. Ugolev, A, and Delaey, P. "Membrane digestion – a concept of enzymic hydrolysis on cell membranes." Biochim Biophys Acta. 1973. 300:105-128.

5. Levin, R. "Digestion and absorption of carbohydrates from molecules and membranes to humans." Am. J. Clin. Nutr. 1994. 59:690S-698S.

6. Mu, H, and Hoy, C. "The digestion of dietary triacylglycerols." Progress in Lipid Research. 2004. 43:105-133.

7. Hur, S, et al. "Effect of emulsifiers on microstructural changes and digestion of lipids in instant noodle during in vitro human digestion." LWT – Food Science and Technology. 2015. 60:630-636.

8. Armand, M, et al. "Digestion and absorption of 2 fat emulsions with different droplet sizes in the human digestive tract." American Journal of Clinical Nutrition. 1999. 70:1096-1106.

9. Njauw, C-W, et al. "Molecular interactions between lecithin and bile salts/acids in oils and their effects on reverse micellization." Langmuir. 2013. 29:3879-3888.

10. Israelachvili, J. "Intermolecular and surface forces." 3rd ed. Academic Press: San Diego. 2011.

11. Evans, D, and Wennerstrom, H. "The colloidal domain: where physics, chemistry, biology, and technology meet." Wiley-VCH: New York. 2001.

12. Tung, S, et al. "A new reverse wormlike micellar system: mixtures of bile salt and lecithin in organic liquids." J. Am. Chem. Soc. 2006. 128:5751-5756.

13. Zhang, H, et al. "Human gut microbiota in obesity and after gastric bypass." PNAS. 2009. 106(7):2365-2370.

14. Turnbaugh, P, et al. "An obesity-associated gut microbiome with increased capacity for energy harvest." Nature. 2006. 444(7122):1027-1031.

15. Son, G, Kremer, M, and Hines, I. "Contribution of Gut Bacteria to Liver Pathobiology." Gastroenterology Research and Practice. 2010. doi:10.1155/2010/453563.

16. Luciano, L, et al. "Withdrawal of butyrate from the colonic mucosa triggers 'mass apoptosis' primarily in the G0/G1 phase of the cell cycle." Cell and Tissue Research. 1996. 286(1):81-92.

17. Cummings, J, and Macfarlane, G. "The control and consequences of bacterial fermentation in the human colon." Journal of Applied Bacteriology. 1991. 70:443-459.

18. Rasoamanana, R, et al. "Dietary fibers solubilized in water or an oil emulsion induce satiation through CCK-mediated vagal signaling in mice." J. Nutr. 2012. 142:2033-2039.

19. Adam, T, and Westerterp-Plantenga, M. "Glucagon-like peptide-1 release and satiety after a nutrient challenge in normal-weight and obese subjects." Br J Nutr. 2005. 93:845-851.

20. Little, T, et al. "Free fatty acids have more potent effects on gastric emptying, gut hormones, and appetite than triacylglycerides." Gastroenterology. 2007. 133:1124-1131.

21. Seimon, R, et al. "The droplet size of intraduodenal fat emulsions influences antropyloroduodenal motility, hormone release, and appetite in healthy males." Am. J. Clin. Nutr. 2009. 89:1729-1736.

22. Young, A, and Levin, R. "Diarrhoea of famine and malnutrition: investigations using a rat model. 1. Jejunal hypersecretion induced by starvation." Gut. 1990. 31:43-53.

23. Young, A, and Levin, R. "Diarrhoea of famine and malnutrition: investigations using a rat model. 2. Ileal hypersecretion induced by starvation." Gut. 1990. 31:162-169.

24. Lane, A, and Levin, R. "Enhanced electrogenic secretion in vitro by small intestine from glucagon-treated rats: implications for the diarrhoea of starvation." Exp. Physiol. 1992. 77:645-648.

Tuesday, April 21, 2015

Augmenting rainfall probability to ward off long-term drought?


Despite the ridiculous pseudo-controversy surrounding global warming in public discourse, the reality is that global warming is real and has already begun significantly influencing the global climate. One of the most important factors in judging the range and impact of global warming, as well as how society should respond, is also one of the more perplexing: cloud formation. Not only do clouds influence the cycle of heat escape and retention, they also drive precipitation probability. Precipitation plays an important role in maintaining effective hydrological cycles as well as heat budgets, and it will change significantly in reaction to future warming, largely toward more extreme outcomes: some areas will receive significant increases that produce flash flooding while other areas will be deprived of rainfall, producing longer-term droughts similar to those now seen in California.

At its core precipitation is influenced by numerous factors like solar heating and terrestrial radiation.1,2 Among these factors, various aerosol particles are thought to hold an important influence. Both organic and inorganic aerosols are plentiful in the atmosphere, helping to cool the surface of the Earth by scattering sunlight or serving as nuclei for the formation of water droplets and ice crystals.3 Information regarding the means by which the properties of these aerosols influence cloud formation and precipitation is still limited, which creates significant uncertainties in climate modeling and planning. Therefore, increasing knowledge of how aerosols influence precipitation will provide valuable information for managing the various changes that will occur and possibly even mitigating them.

The formation of precipitation within clouds is heavily influenced by ice nucleation, the induction of crystallization in supercooled water (supercooled = a meta-stable state in which water remains liquid below its typical freezing temperature). Ice nucleation typically occurs through one of two pathways: homogeneous or heterogeneous. Homogeneous nucleation entails spontaneous nucleation within a sufficiently cooled solution (usually a supersaturated solution at a relative humidity of 150-180% and a temperature of around –38 degrees C) and requires only liquid water or aqueous solution droplets.4-6 Due to its relative simplicity homogeneous nucleation is better understood than heterogeneous nucleation. However, because of its temperature requirements homogeneous nucleation typically takes place only in the upper troposphere, and with a warming atmosphere its probability of occurrence should be expected to decrease.

Heterogeneous nucleation is more complicated because of the multiple pathways it can take, i.e. deposition freezing, condensation, contact, and immersion freezing.7,8 These different pathways allow for more flexibility in nucleation, with generic initiation conditions beginning just south of 0 degrees C at a relative humidity of 100%. Nucleation can occur at this higher temperature because of the presence of a catalyst, a non-water substance commonly referred to as an ice-forming nucleus (IN). Heterogeneous nucleation in a mixed-phase cloud can also involve diffusive growth of ice that consumes liquid droplets (the Wegener–Bergeron–Findeisen process) at a faster rate than riming of supercooled droplets or snow/graupel aggregation.9
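Treated simply as threshold conditions, the two pathways described above can be captured in a coarse decision rule. The sketch below uses only the approximate temperature and relative-humidity numbers quoted in this post; the function and its labels are illustrative and are not a cloud-physics parameterization.

```python
def likely_nucleation_pathway(temp_c, relative_humidity_pct, ice_nuclei_present):
    """Very coarse decision rule from the approximate thresholds quoted above:
    homogeneous freezing needs roughly -38 C and 150-180% relative humidity,
    while heterogeneous freezing can begin just below 0 C near 100% RH when an
    ice-forming nucleus (IN) is present."""
    if temp_c <= -38 and relative_humidity_pct >= 150:
        return "homogeneous nucleation plausible"
    if temp_c < 0 and relative_humidity_pct >= 100 and ice_nuclei_present:
        return "heterogeneous nucleation plausible (deposition/condensation/contact/immersion)"
    return "ice nucleation unlikely under these simple thresholds"

print(likely_nucleation_pathway(-40, 160, ice_nuclei_present=False))  # homogeneous
print(likely_nucleation_pathway(-5, 101, ice_nuclei_present=True))    # heterogeneous
```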

Laboratory experiments have demonstrated support for many different materials acting as IN: various metallic particles, biological materials, certain glasses, mineral dust, anhydrous salts, etc.8,10,11 These laboratory experiments involve wind tunnels, electrodynamic levitation, scanning calorimetry, cloud chambers, and optical microscopy.12,13 However, and not surprisingly, there appears to be a significant difference between nucleation ability in the lab and in nature.8,10

Also, while homogeneous ice nucleation is exactly what its name implies, heterogeneous nucleation does not have the same uniform character.8 Temperature variations within a cloud can produce differing modes of heterogeneous nucleation versus homogeneous nucleation, producing significant differences in efficiency. Some forms of nucleation in cloud formations are accordingly more difficult to understand, such as the rapid build-up of high ice concentrations in warm precipitating cumulus clouds; i.e. particle concentrations increasing from 0.01 L-1 to 100 L-1 in a few minutes at temperatures warmer than –10 degrees C, outpacing existing ice nucleus measurements.14 One explanation for this phenomenon is the Hallett-Mossop (H-M) process, which is thought to achieve this rapid freezing through interaction with a narrow band of supercooled raindrops producing rimers.15

The H-M process requires cloud temperatures between approximately –1 and –10 degrees C along with the availability of large droplets (diameters > 24 um) at roughly a 0.1 ratio relative to smaller (< 13 um) droplets.16,17 When the riming process begins, ice splinters are ejected and grow through water vapor deposition, producing a positive feedback that increases riming and produces more ice splinters. Basically a feedback loop develops between ice splinter formation and small drop freezing. Unfortunately there are some questions about whether this mechanism can properly explain the characteristics of secondary ice particles and the formation of ice crystal bursts under certain time constraints.18 However, these concerns may not be warranted due to improper assumptions regarding how water droplets form relative to existing water concentrations.15
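The H-M requirements listed above amount to a short checklist, sketched below with the temperature window and droplet criteria quoted in this post (the variable names and example values are assumptions for illustration).

```python
def hallett_mossop_favorable(cloud_temp_c, large_to_small_drop_ratio):
    """Checks the rime-splintering (Hallett-Mossop) conditions quoted above:
    cloud temperature roughly between -1 and -10 C, and large droplets
    (> ~24 um) present at about a 0.1 ratio relative to small (< ~13 um) ones."""
    in_temperature_window = -10.0 <= cloud_temp_c <= -1.0
    enough_large_drops = large_to_small_drop_ratio >= 0.1
    return in_temperature_window and enough_large_drops

# Illustrative values only.
print(hallett_mossop_favorable(cloud_temp_c=-5.0, large_to_small_drop_ratio=0.12))   # True
print(hallett_mossop_favorable(cloud_temp_c=-15.0, large_to_small_drop_ratio=0.12))  # False
```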

One of the more important elements of rain formation in warm precipitating cumulus clouds, as in other cloud formations, appears to be the location of ice particle concentrations at the top of the cloud, where there is a higher probability of large droplet formation (500-2000 um diameters).15 In this regard cloud depth/area is a more important influencing element than cloud temperature.19 In addition, the apparent continued formation of ice crystals from the top downwards can produce raindrop freezing that catalyzes further ice formation, creating a positive feedback and ice bursts.20

This process suggests that there is sufficient replenishment of small droplets at the cloud top, increasing the probability of sufficient riming. The time variation governing the rate of ice multiplication, and how cloud temperature changes accordingly, is thought to be determined by dry adiabatic cooling at the cloud top, condensational warming, and evaporational cooling at the cloud bottom.15 Bacteria also appear to play a meaningful role in both nucleating primary ice crystals and scavenging secondary crystals.7 Even if bacteria concentrations are low (< 0.05 L-1), the catalytic effect of nucleating bacteria produces a much more "H-M"-friendly environment.

The most prominent inorganic aerosol that acts as an IN is dust, commonly from deserts, that is pushed into the upper atmosphere by storms.21,22 The principal origin of this dust is the Sahara Desert, whose dust is lofted year round, versus dust from other origin points like the Gobi or Siberia. While the ability of this dust to produce rain is powerful, it can also have a counteracting effect as a cloud condensation nucleus (CCN). In most situations when CCN concentration is increased, raindrop conversion becomes less efficient, especially for low-level clouds (in part due to higher temperatures), largely by reducing riming efficiency.

The probability of dust acting as a CCN is influenced by the presence of anthropogenic pollution, which typically acts as a CCN on its own.23,24 In some situations the presence of pollution could also increase the overall amount of rainfall, as it can suppress premature rainfall, allowing more rain droplets to crystallize, increasing riming and potential rainfall. However, this aspect of pollution only holds in the presence of dust or other INs; if there is a dearth of IN, localized pollution will decrease precipitation.25 Soot can also influence nucleation and resultant rainfall, but only under certain circumstances. For example, if the surface of the soot contains molecules able to form hydrogen bonds with liquid water (typically via hydroxyl and carbonyl groups), nucleation is enhanced.26 Overall it seems appropriate to label dust as a strong IN and anthropogenic pollution as a significant CCN.

In mineral collection studies and global simulations of aerosol particle concentrations, both deposition and immersion heterogeneous nucleation appear dominated by dust acting as IN, especially in cirrus clouds.10,27,28 Aerosols also modify certain cloud properties like droplet size and water phase. Most other inorganic atmospheric aerosols behave like CCN, which assist the condensation of water vapor into cloud droplets at a certain level of supersaturation.25 Typically this condensation produces a large number of small droplets, which can reduce the probability of warm rain (rain formed above the freezing point).29,30

Recall that altitude is important in precipitation, thus it is not surprising that one of the key factors in how aerosols influence precipitation type and probability appears to be the elevation and temperature at which they interact. For example, in mixed-phase clouds the cloud-top area increases with increasing CCN concentration, versus a smaller change at lower altitudes and no change in pure liquid clouds.15,31 Also, CCN only significantly influence temperatures when both cloud top and cloud base temperatures are below freezing.31 In short, CCN influence appears to be reduced relative to IN influence at higher altitudes and lower temperatures.

Cloud drop concentration and size distribution at the base and top of a cloud also determine the efficiency of the CCN and are dictated by the chemical structure and size of the aerosol. For example, larger aerosols have a higher probability of becoming CCN rather than IN due to their coarse structure. Finally, and not surprisingly, overall precipitation frequency increases with high water content and decreases with low water content when clouds are exposed to CCN.31 This behavior creates a positive feedback as aerosol concentrations increase: in arid regions the probability of drought rises and in wet regions the probability of flooding rises.

While dust from natural sources and general pollution are the two most common aerosols, an interesting secondary source may be soil dust produced by land use changes such as deforestation or large-scale construction projects.32-34 These actions create anthropogenic dust emissions that can catalyze a feedback loop producing greater precipitation extremes; thus in certain developing economic regions already struggling with droughts, continued construction in an effort to improve the economy could exacerbate those droughts. Therefore, developing regions may need specific methodologies to govern their development to ensure proper levels of rainfall for the future.

While the role of dust has not been fully identified on a mechanistic level, its importance is not debatable. The role of biological particles, like bacteria, is more controversial and could be critical to identifying a method to enhance rainfall probability. It is important to identify the capacity of bacteria to catalyze rainfall because some laboratory studies have demonstrated that inorganic INs only have significant activity below –15 degrees C.10,35 For example, in samples of snowfall collected globally that originated at temperatures of –7 degrees C or warmer, the vast majority of the active IN, up to 85%, were lysozyme-sensitive (i.e. probably bacteria).36,37 Also, rain tends to have higher proportions of active IN bacteria than air in the same region.38 With further global warming on the horizon, air temperatures will continue to increase, narrowing the window for inorganic IN activity and thus lowering the probability of rainfall in general (not considering any other changes born from global warming).

Laboratory and field studies have demonstrated approximately twelve species of bacteria with significant IN ability spread within three orders of the Gammaproteobacteria, with the two most notable/frequent agents being Pseudomonas syringae and P. fluorescens and, to a lesser extent, Xanthomonas.39,40 In the presence of an IN bacterium, nucleation can occur at temperatures as warm as –1.5 to –2 degrees C.41,42 These bacteria appear to act as IN due to a single gene that codes for a specific membrane protein that catalyzes crystal formation by acting as a template for water molecule arrangement.43 These bacteria derive mostly from surface vegetation.

Supporting the idea of this key membrane scaffold, an acidic pH environment can significantly reduce the effectiveness of bacteria-based nucleation.45,46 These protein complexes are also larger for warmer-temperature nucleating bacteria, and thus more prone to breakdown in more acidic environments.44,46 Therefore, low-lying areas with significant acidic pollution, such as sulfur compounds, could see a reduction in precipitation probability over time. It also seems that this protein complex, rather than the biological processes of the living bacterium, could be the critical element of bacteria-based nucleation, as nucleation occurred even when the bacteria themselves were no longer viable.46

Despite laboratory and theoretical evidence supporting the role of bacteria in precipitation, as stated above what occurs in the laboratory serves little purpose if it does not translate to nature, and this translation is where controversy arises. It can be difficult to separate the various particle types in cloud residue collections due to widespread internal mixing, but empirical evidence demonstrates the presence of biological material in orographic clouds.47 Also, ice nucleation bacteria are present over all continents as well as in various specific locations like the Amazon basin.37,48,49

Some estimates suggest that 10^24 bacteria enter the atmosphere each year and remain circulating for between 2 and 10 days, theoretically allowing bacteria to travel thousands of miles.50,51 However, there is a lack of evidence for bacteria in the upper troposphere, and their concentrations are dramatically lower than those of inorganic materials like dust and soot.28,35,52 Based on these low concentrations, questions exist as to how efficiently these bacteria are aerosolized over their atmospheric lifetimes. One study suggests that IN-active bacteria are much more efficiently precipitated than non-IN-active bacteria, which may explain the disparity between observations in the air, clouds and precipitation.53
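As a back-of-the-envelope check on the "thousands of miles" figure: combining the 2-10 day residence time with an assumed mean horizontal wind speed (the 10 m/s value below is my assumption, not a number from the cited studies) gives transport distances on the order of thousands of kilometers.

```python
# Rough transport range: distance = wind speed x residence time.
assumed_wind_speed_m_s = 10.0  # assumption for illustration, not from the cited studies
for residence_days in (2, 10):
    distance_km = assumed_wind_speed_m_s * residence_days * 86_400 / 1_000
    print(f"{residence_days} days aloft -> ~{distance_km:,.0f} km (~{distance_km * 0.62:,.0f} miles)")
```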

Another possible explanation for this disparity is that most biological particles are generated at the surface and are carried by updrafts and currents into the atmosphere. While the methods of transport are similar to those of inorganic particles, biological particles have a higher removal potential through dry or wet deposition because of their typically greater size. Therefore, bacteria reside in orographic clouds because they are able to participate in their formation, but they are less able to reach higher cloud formations, so most upper-tropospheric rain is born from dust, not bacteria.

Some individuals feel that current drop-freezing assays, which are used to identify the types of bacteria and other agents in a collected sample, can be improved to produce a higher level of discrimination between the various classes of IN-active bacteria that may be present. One possible idea is to store the sample at low temperatures and observe the growth and type of IN bacteria that occur in a community versus in individual samples.54 Perhaps new identification techniques would increase the ability to discern the role of bacteria in cloud formation and precipitation.

Among the other atmospheric agents with potential influence on precipitation, potassium appears to have a meaningful role. Some biogenic emissions of potassium, especially around the Amazon, can act as catalysts for the beginning of organic material condensation.55 However, this role seems to ebb as the potassium mass fraction drops when the condensation rate increases.55 This secondary role of potassium, as well as the role of bacteria, may signal an important reason why past cloud seeding experiments have not achieved their hypothesized expectations.

The lack of natural bacterial input into higher cloud formations leads to an interesting question: what would happen if IN-active bacteria like P. syringae were released via plane or another high-altitude method, producing a higher concentration of bacteria in these higher-altitude cloud formations? While typical cloud formation involves vapor saturation due to air cooling and/or increased vapor concentration, an increased IN-active bacteria concentration could also speed cloud formation as well as precipitation probability.

Interestingly, in past cloud seeding experiments orographic clouds appear to be more sensitive to purposeful seeding than other cloud formations, largely because of the shorter residence times of cloud droplets.56,57 One of the positive elements of seeding appears to be that increased precipitation in the target area does not reduce the level of precipitation in surrounding areas, including those beyond the target area. In fact there appears to be a net increase (5-15%) among all areas regardless of the location of seeding.58 The previous presumption that there was a loss appears to be based on randomized and not properly controlled seeding experiments.58

The idea of introducing increased concentrations of IN-active bacteria is an interesting one if it can increase the probability of precipitation. Of course possible negatives must be considered. The chief negative associated with an increase of a bacterium like P. syringae would be the possibility of more infection of certain types of plants. The frost-damage mechanism of P. syringae is a minor concern because most of the seeding would be carried out between late spring and early fall, when night-time temperatures should not be cold enough to induce freezing. Sabotaging the type III secretion system in P. syringae via some form of genetic manipulation should reduce, if not eliminate, its plant invasion potential. Obviously controlled laboratory tests should be conducted to ensure a high probability of invasion neutralization before any controlled and limited field tests are conducted. If the use of living bacteria proves too costly, simply using the key membrane protein is another possible avenue of study.

Overall the simple fact is that due to global warming, global precipitation patterns will change dramatically. The forerunner of these changes can already be seen in the state of California, with no reasonable expectation of significant new rainfall in sight. While other potable water options like desalination are available, the level of infrastructure required to move these new sources from origin to usage points will be costly, and these processes have significant detrimental byproducts. If precipitation probabilities can be safely increased through new cloud seeding strategies like the inclusion of IN-active bacteria, it could go a long way toward combating some of the negative effects of global warming while the causes of global warming itself are mitigated.



Citations –

1. Zuberi, B, et al. "Heterogeneous nucleation of ice in (NH4)2SO4-H2O particles with mineral dust immersions." Geophys. Res. Lett. 2002. 29(10):1504.

2. Hung, H, Malinowski, A, and Martin, S. "Kinetics of heterogeneous ice nucleation on the surfaces of mineral dust cores inserted into aqueous ammonium sulfate particles." J. Phys. Chem. 2003. 107(9):1296-1306.

3. Lohmann, U. "Aerosol effects on clouds and climate." Space Sci. Rev. 2006. 125:129-137.

4. Hartmann, S, et al. "Homogeneous and heterogeneous ice nucleation at LACIS: operating principle and theoretical studies." Atmos. Chem. Phys. 2011. 11:1753-1767.

5. Cantrell, W, and Heymsfield, A. "Production of ice in tropospheric clouds. A review." American Meteorological Society. 2005. 86(6):795-807.

6. Riechers, B, et al. "The homogeneous ice nucleation rate of water droplets produced in a microfluidic device and the role of temperature uncertainty." Physical Chemistry Chemical Physics. 2013. 15(16):5873-5887.

7. Cziczo, D, et al. "Clarifying the dominant sources and mechanisms of cirrus cloud formation." Science. 2013. 340(6138):1320-1324.

8. Pruppacher, H, and Klett, J. "Microphysics of clouds and precipitation." Kluwer Academic: Dordrecht. Ed. 2, 1997. pp. 309-354.

9. Lance, S, et al. "Cloud condensation nuclei as a modulator of ice processes in Arctic mixed-phase clouds." Atmos. Chem. Phys. 2011. 11:8003-8015.

10. Hoose, C, and Mohler, O. "Heterogeneous ice nucleation on atmospheric aerosols: a review of results from laboratory experiments." Atmos. Chem. Phys. 2012. 12:9817-9854.

11. Abbatt, J, et al. "Solid ammonium sulfate aerosols as ice nuclei: A pathway for cirrus cloud formation." Science. 2006. 313:1770-1773.

12. Murray, B, et al. "Kinetics of the homogeneous freezing of water." Phys. Chem. Chem. Phys. 2010. 12:10380-10387.

13. Chang, H, et al. "Phase transitions in emulsified HNO3/H2O and HNO3/H2SO4/H2O solutions." J. Phys. Chem. 1999. 103:2673-2679.

14. Hobbs, P, and Rangno, A. "Rapid development of ice particle concentrations in small, polar maritime cumuliform clouds." J. Atmos. Sci. 1990. 47:2710-2722.

15. Sun, J, et al. "Mystery of ice multiplication in warm-based precipitating shallow cumulus clouds." Geophysical Research Letters. 2010. 37:L10802.

16. Hallett, J, and Mossop, S. "Production of secondary ice particles during the riming process." Nature. 1974. 249:26-28.

17. Mossop, S. "Secondary ice particle production during rime growth: The effect of drop size distribution and rimer velocity." Q. J. R. Meteorol. Soc. 1985. 111:1113-1124.

18. Mason, B. "The rapid glaciation of slightly supercooled cumulus clouds." Q. J. R. Meteorol. Soc. 1996. 122:357-365.

19. Rangno, A, and Hobbs, P. "Microstructures and precipitation development in cumulus and small cumulonimbus clouds over the warm pool of the tropical Pacific Ocean." Q. J. R. Meteorol. Soc. 2005. 131:639-673.

20. Phillips, V, et al. "The glaciation of a cumulus cloud over New Mexico." Q. J. R. Meteorol. Soc. 2001. 127:1513-1534.

21. Karydis, V, et al. "On the effect of dust particles on global cloud condensation nuclei and cloud droplet number." J. Geophys. Res. 2011. 116:D23204.

22. Connolly, P, et al. "Studies of heterogeneous freezing by three different desert dust samples." Atmos. Chem. Phys. 2009. 9:2805-2824.

23. Lynn, B, et al. "Effects of aerosols on precipitation from orographic clouds." J. Geophys. Res. 2007. 112:D10225.

24. Jirak, I, and Cotton, W. "Effect of air pollution on precipitation along the Front Range of the Rocky Mountains." J. Appl. Meteor. Climatol. 2006. 45:236-245.

25. Fan, J, et al. "Aerosol impacts on California winter clouds and precipitation during CalWater 2011: local pollution versus long-range transported dust." Atmos. Chem. Phys. 2014. 14:81-101.

26. Gorbunov, B, et al. "Ice nucleation on soot particles." J. Aerosol Sci. 2001. 32(2):199-215.

27. Kirkevag, A, et al. "Aerosol-climate interactions in the Norwegian Earth System Model – NorESM." Geosci. Model Dev. 2013. 6:207-244.

28. Hoose, C, Kristjansson, J, and Burrows, S. "How important is biological ice nucleation in clouds on a global scale?" Environ. Res. Lett. 2010. 5:024009.

29. Lohmann, U. "A glaciation indirect aerosol effect caused by soot aerosols." Geophys. Res. Lett. 2002. 29:11.1-4.

30. Koop, T, et al. "Water activity as the determinant for homogeneous ice nucleation in aqueous solutions." Nature. 406:611-614.

31. Li, Z, et al. "Long-term impacts of aerosols on the vertical development of clouds and precipitation." Nature Geoscience. 2011. doi:10.1038/NGEO1313.

32. Zender, C, Miller, R, and Tegen, I. "Quantifying mineral dust mass budgets: Terminology, constraints, and current estimates." Eos Trans. Am. Geophys. Union. 2004. 85:509-512.

33. Forster, P, et al. "Changes in atmospheric constituents and in radiative forcing." In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change.

34. O'Sullivan, D, et al. "Ice nucleation by fertile soil dusts: relative importance of mineral and biogenic components." Atmos. Chem. Phys. 2014. 14:1853-1867.

35. Murray, B, et al. "Ice nucleation by particles immersed in supercooled cloud droplets." Chem. Soc. Rev. 2012. 41:6519-6554.

36. Christner, B, et al. "Geographic, seasonal, and precipitation chemistry influence on the abundance and activity of biological ice nucleators in rain and snow." PNAS. 2008. 105:18854. doi:10.1073/pnas.0809816105.

37. Christner, B, et al. "Ubiquity of biological ice nucleators in snowfall." Science. 2008. 319:1214.

38. Stephanie, D, and Waturangi, D. "Distribution of ice nucleation-active (INA) bacteria from rainwater and air." NAYATI Journal of Biosciences. 2011. 18:108-112.

39. Vaitilingom, M, et al. "Long-term features of cloud microbiology at the puy de Dome (France)." Atmos. Environ. 2012. 56:88-100.

40. Cochet, N, and Widehem, P. "Ice crystallization by Pseudomonas syringae." Appl. Microbiol. Biotechnol. 2000. 54:153-161.

41. Heymsfield, A, et al. "Upper-tropospheric relative humidity observations and implications for cirrus ice nucleation." Geophys. Res. Lett. 1998. 25:1343-1346.

42. Twohy, C, and Poellot, M. "Chemical characteristics of ice residual nuclei in anvil cirrus clouds: implications for ice formation processes." Atmos. Chem. Phys. 2005. 5:2289-2297.

43. Joly, M, et al. "Ice nucleation activity of bacteria isolated from cloud water." Atmos. Environ. 2013. 70:392-400.

44. Attard, E, et al. "Effects of atmospheric conditions on ice nucleation activity of Pseudomonas." Atmos. Chem. Phys. 2012. 12:10667-10677.

45. Kawahara, H, Tanaka, Y, and Obata, H. "Isolation and characterization of a novel ice-nucleating bacterium, Pseudomonas, which has stable activity in acidic solution." Biosci. Biotechnol. Biochem. 1995. 59:1528-1532.

46. Kozloff, L, Turner, M, and Arellano, F. "Formation of bacterial membrane ice-nucleating lipoglycoprotein complexes." J. Bacteriol. 1991. 173:6528-6536.

47. Pratt, K, et al. "In-situ detection of biological particles in high altitude dust-influenced ice clouds." Nature Geoscience. 2009. 2. doi:10.1038/ngeo521.

48. Prenni, A, et al. "Relative roles of biogenic emissions and Saharan dust as ice nuclei in the Amazon basin." Nat. Geosci. 2009. 2:402-405.

49. Phillips, V, et al. "Potential impacts from biological aerosols on ensembles of continental clouds simulated numerically." Biogeosciences. 2009. 6:987-1014.

50. Burrows, S, et al. "Bacteria in the global atmosphere – Part 1: review and synthesis of literature data for different ecosystems." Atmos. Chem. Phys. 2009. 9:9263-9280.

51. Burrows, S, et al. "Bacteria in the global atmosphere – Part 2: modeling of emissions and transport between different ecosystems." Atmos. Chem. Phys. 2009. 9:9281-9297.

52. Despres, V, et al. "Primary biological aerosol particles in the atmosphere: a review." Tellus B. 2012. 64:349-384.

53. Amato, P, et al. "Survival and ice nucleation activity of bacteria as aerosols in a cloud simulation chamber." Atmos. Chem. Phys. Discuss. 2015. 15:4055-4082.

54. Stopelli, E, et al. "Freezing nucleation apparatus puts new slant on study of biological ice nucleators in precipitation." Atmos. Meas. Tech. 2014. 7:129-134.

55. Pohlker, C, et al. "Biogenic potassium salt particles as seeds for secondary organic aerosol in the Amazon." Science. 2012. 337(31):1075-1078.

56. Givati, A, and Rosenfeld, D. "Separation between cloud-seeding and air-pollution effects." J. Appl. Meteorol. 2005. 44:1298-1314.

57. Givati, A, et al. "The Precipitation Enhancement Project: Israel - 4 Experiment." The Water Authority, State of Israel. 2013. pp. 55.

58. DeFelice, T, et al. "Extra area effects of cloud seeding – An updated assessment." Atmospheric Research. 2014. 135-136:193-203.

Wednesday, April 8, 2015

Is it time to administer compulsory voting in the United States?

When looking at voting rolls, regardless of the election period or environment, highly educated middle-aged working men are the most likely individuals to vote, with declining participation rates among other demographics.1,2 This decline is meaningful because voting in a democracy, whether direct or representative, is a direct expression of political power and influence. In addition, as individuals become poorer and less educated their voting probability decreases.1 Not surprisingly, research has demonstrated that politicians target their messages and actions toward the demographics with the higher voting probabilities, regardless of whether those actions will produce the best outcomes for society in general.3 In some contexts politicians view their "constituents" as only those individuals who vote. Therefore, politicians will commonly ignore the concerns and problems of individuals in demographics less likely to vote, producing an environment that increases the probability of both income and social stratification.

To combat this aspect of spreading inequality, some individuals theorize that the United States should adopt compulsory (mandatory) voting over the current voluntary system. Compulsory voting is certainly not a new or exotic idea, as 22 countries already have some form of compulsory system, including Australia and most of South America (assurance for those who think such a system could exist only in third-world countries). Compulsory voting has also been shown to shift public policy closer to the preferences of citizens, alleviating the divergence between citizen and constituent.4 So with the legitimacy of compulsory voting as an idea on sound footing, the question is: should the United States change its current voting system as well?

Some voices may immediately suggest that the idea of compulsory voting is a direct challenge to individual freedom and liberty and thus should be rejected without discussion; these voices belong to individuals who are either overreacting or foolish, for the real issue is how one defines the role of voting in a society. This role can be defined as either a duty or a power. If defined as a duty, then voting is regarded as a civic responsibility that one should engage in to justify one's citizenship and contribution to society; therefore, compulsory voting should be viewed as reasonable and appropriate, including any penalties associated with not voting. If defined as a power, then voting is regarded as a means by which citizens can exert their influence on society, but only as an opportunity, not a requirement, to express this power. However, it is important to note that in a voluntary voting structure, if one chooses not to vote then one has no legitimacy in complaining about the current state of society.

When taking the measure of most public discourse on this issue, it appears that a majority would classify voting under the latter definition: an opportunity that a democracy must offer its citizens but whose exercise is voluntary. Unfortunately for those who hold this view, the matter is not so straightforward. A number of people appear to believe that individual decisions are made in a vacuum, where each decision affects only that individual and not society as a whole. This mindset has produced the idea of a separation from society. For example, some individuals have argued that if one does not ride public transit buses then that individual's taxes should not go toward supporting the operation of buses. Clearly this makes little sense on a social level, and if such an idea were expanded beyond such a simple measure, which some would argue it should be, and applied to society as a whole, then society would become extraordinarily complex and in general cease to function effectively, producing a net negative for all parties. Therefore, one must weigh the voluntary nature of voting against the good of society.

As mentioned above, overall voting rates have fallen steadily and significantly over the last half century among all demographics except the elderly (65+).5,6 Therefore, the possibility certainly exists that the United States' democracy could become an oligarchy producing a singular path and set of cultural values. Regardless of one's political leanings, there exists an extremely high probability that an oligarchy will be inherently negative for society, producing significant societal disruption and inefficiency. With this potential reality, the idea of voluntary voting could be dismissed in favor of compulsory voting under the banner of "for the good of society". Understand that this mindset is not designed to produce a certain cultural/societal outcome, but instead to ensure sufficient representation. Basically it is akin to forcing Teams A, B and C to play Team D in a sporting event. Forcing the game does not mean that Team D will lose; it just means that Team D will not win by forfeit.

Furthermore, for those who argue that voting should be voluntary, it should follow that individuals be given every convenience and opportunity to vote. Unfortunately the disheartening reality is that over the last decade certain actions have been taken in multiple states to increase the probability that citizens are denied the opportunity to vote, or at least are given unjustifiable obstacles to overcome before having that opportunity. These actions raise the question of how one could support the idea of voting as a voluntary expression of citizen power guaranteed by the government when the government and other private agencies work to limit the ability of citizens to vote. One of the guarantees of compulsory voting is that states and the Federal government would not have the ability to erect these additional obstacles and would have to produce an effective means of allowing citizens to vote. The reality of the situation is that unless government, especially at the state level, can demonstrate an ability to provide appropriate voting opportunity, compulsory voting may be necessary to ensure democracy in the United States.

To those who believe that applying compulsory voting is a ploy to increase the power base of one particular political viewpoint, current research demonstrates the uncertainty of this idea. Basically there is no significant difference between the preferences of voters and non-voters in existing compulsory systems.7,8 In the United States it is thought that non-voters may lean slightly Democratic, but there is no certainty in this analysis, for it is based on extrapolation from polling, and polling in general remains a foolish way to produce such information, as it is unreasonable to suggest that the views of five thousand people could properly characterize the views of fifty million. Therefore, at the current time there is no rational reason to conclude that compulsory voting in the United States would produce a significant power shift for one party over the others. Even if there were evidence of such a shift, what would be the problem? A democracy is supposed to be rule by the majority.

Even if compulsory voting were put into practice, certain issues of access, functionality and penalties must be considered. The issue of access is important: if government is going to demand that all citizens vote, then it must ensure that all citizens have an appropriate opportunity to do so. While a number of individuals have championed online voting as a means of producing ease of access, such claims raise equality and security concerns. One hears too frequently about an individual or group hacking a corporation and acquiring personal and/or credit information to have sufficient confidence in the security of online voting, and despite the mindset of certain technophiles, not everyone has a personal at-home Internet connection or other means of access to the Internet (e.g. online access at a public library) that could effectively accommodate voting.

Therefore, with this uncertainty relegating Internet voting to nothing more than a luxury or supplemental medium, local and state governments must produce sufficient plans of action for in-person polling stations and voting by mail. Some could argue that in lieu of Internet voting, voting by mail is the next best thing. While having the option of voting by mail would be an important access element, eliminating in-person polling stations would not be the correct response. In past opinion polls, having voting by mail as an option is consistently favored by wide margins over elections that are run entirely by mail.9,10

In a compulsory system, relying exclusively on mail would also be extremely burdensome for the homeless. In addition, some are less confident that their vote will be counted when voting is performed through the mail.11 The Washington State system appears to be a good starting point for a national "vote by mail" system: the ballot is sent weeks ahead of time, allowing voters ample time to inform themselves on the important issues and cast their ballots when convenient rather than under a specific time crunch, while in-person stations remain available for those who are uncomfortable with or unable to cast their vote by mail. Whether early in-person voting would still be required under a compulsory system is unknown, but erring on the side of caution to ensure sufficient voting opportunity in the first few elections, counties should be expected to offer early in-person voting for at least two days prior to Election Day.

From a functional standpoint one must address the mindset of those individuals who have previously chosen not to vote. Once those with access issues are set aside, the principal reason individuals do not vote is a nihilistic mindset, i.e. they do not believe that their vote will matter. A similar mindset is that of the "forsaken voter". An example of this mindset is seen in one of the major complaints of black voters and environmentalists: that the Democratic Party does not respect their opinions because Democratic leadership believes these groups have nowhere else to go if they want to advance their political beliefs; they cannot vote for a Republican because that would be self-defeating, if they are real Democrats, and they cannot vote for a Green party member or other third party because of the infinitesimal probability that the person would actually win. Therefore, both types of individuals can feel that their "expression of power" through the vote is pointless.

So the chief question on this issue becomes how to manage those individuals who in the past decided not to vote, believing it did not matter, now that they are forced to vote or accept a penalty. Various other countries handle this issue with the straightforward option of allowing voters to cast a vote for "none of the above", which is thought to represent the voter's dissatisfaction with the existing candidates. While this option is viable, it does not appear to be meaningful. On its face it can easily be argued that casting a vote for "none of the above" is pointless because it defeats the purpose of compulsory voting. What is the point of an individual spending any financial or opportunity cost voting if one is not going to cast a meaningful vote? Note that allowing a voter to merely leave a ballot blank is akin to selecting a "none of the above" option.

Individuals in favor of this option would argue that casting a vote for “none of the above” is a demonstration of dissatisfaction with the existing candidates and their respective platforms. Under this mindset a stronger message is sent to the political establishment by voting “none of the above” versus voting for “the lesser of two evils”. It could be argued that eliminating this option would be detrimental to producing efficiency in democracy because it would restrict choice.

The counterargument to this point is that while it is valid hypothetically, in actual practice abstention does not send that dissatisfied message or any meaningful message beyond a potential sound bite for the given election season. For example in the current election environment even if 75% of the citizenry abstained from an election those abstentions would not matter because the election would still be decided on the votes of the 25% that did vote. There is no rule in U.S. election politics that voids an election if less than x% of the potential electorate actually votes, thus abstaining sends no message because abstention produces no consequence for the candidates or the system. In essence no one in power would care that x% of the electorate was “dissatisfied” with the existing candidate pool. Realistically a “none of the above” vote will not demonstrate meaningful dissatisfaction with the available candidates, but simply disrespect for the process.

Furthermore there is a legitimate question as to whether the administration of compulsory voting will lead to greater feelings of disillusionment with voting in general because with more people voting each individual vote has less power/influence. Whether or not this change is a significant psychological issue will more than likely be entirely influenced by both which candidate wins and the size of the victory. In this structure there are four possible outcomes for individual A and his vote: 1) votes for the winner in a landslide; 2) votes for the winner in a close result; 3) votes for the loser in a close result; 4) votes for the loser in a landslide.

Of these four possible outcomes the only one that could increase voter dissatisfaction is the third, where the preferred candidate loses by a small amount. In this situation the voter may interpret compulsory voting as costing their candidate the election, naturally presuming that more “forced” participants voted for the opponent, swaying the final result. However, in all other situations compulsory voting should have either no effect or a positive effect on the viewpoint of voting. In the first outcome individual A should be inspired by compulsory voting in witnessing how many individuals agree with his viewpoint and the candidate that supports it. In the second outcome individual A could reason that compulsory voting was the reason for victory (the opposite reasoning of the third outcome, in that the “forced” participants swayed the final result in his favor). Finally in the fourth outcome there should be no change in opinion because the candidate lost big and would have lost big even if compulsory voting did not exist.

Some individuals believe that compulsory voting will have a positive impact on non-voting forms of political involvement and understanding. While this belief may be true, the overall ability to produce this result would more than likely be marginalized by allowing a simple “none of the above” option, for it allows an individual to put no thought into the process at all and simply use the “I don’t care” option. If the idea of compulsory voting is to maximize the potential political power of the electorate then the process should not allow for the ability to so easily circumvent that idea. In addition any increased probability of political engagement must be accompanied by a change in the general human tendency to “blindly” reject ideas that run counter to personal beliefs; if one is unable or unwilling to abandon incorrect opinions when faced with their critical flaws, then increased political engagement may not be positive and very well could be a net negative.

There is a valid argument that can be made in favor of abstention on the basis that no individual should feel obligated or forced to vote for a particular individual or group just because voting is required. How can this conflict between the drawbacks of allowing a “none of the above” option and forcing an undesired vote be resolved? One possibility is that voters who do not prefer any of the candidates could write a brief explanation (1-2 sentences) regarding why they do not want to vote for the available candidates for a given elected position. This way the individual would be successfully abstaining while also demonstrating thoughtful respect for the voting process and increasing the slim probability that the dissatisfaction would actually be noted. Incidentally it would be preferable if these individuals also expressed this dissatisfaction to potential third party representatives so that an individual they would feel comfortable voting for could properly enter the race.

This aforementioned society-dissociated mindset can be detrimental to a compulsory voting scheme because without a reasonable probability that these “new” voters are properly informed, their votes will not properly convey their own opinions or the representative opinion of society. For example suppose Apartment Complex A is having a vote among its 50 residents on whether or not to establish a new, more restrictive noise ordinance. 10 residents are opposed to the new ordinance because they commonly have parties that involve loud music. 20 residents are in favor of the new ordinance because they are frequently bothered by the noise from these parties. The final 20 residents have no strong opinions on this vote and are not aware of the grievances of the 20 pro-ordinance residents because they live far enough away that they do not experience the loud music. Under these conditions these final 20 residents should abstain because of their lack of interest and information.

However, in a compulsory voting environment it is more than likely that they will vote against the ordinance, either for simplicity or to avoid future restrictions on themselves, causing the ordinance to fail 30 to 20 even though the residents actually affected by the noise favor it 20 to 10. Therefore, these 20 “neutral and uninformed” voters could improperly swing the results of the vote because they do not understand how the outcome affects all residents in Apartment Complex A. So if compulsory voting is applied, making sure that all voters have access to the necessary resources to properly inform themselves of the issues is critical. Again it is fine if these last 20 residents vote against the ordinance if they are properly informed on how it will affect all parties; it is the ignorance that must be defeated.

With regards to penalties, most countries that practice compulsory voting administer a small financial penalty when an individual fails to vote. Interestingly enough this penalty is typically equal to or less than a standard parking violation, which does not send a strong message that voting is important. Clearly administering large fines would be questionable, akin to issuing a $10,000 speeding ticket, making such a strategy difficult. A better means to “encourage” compliance with compulsory voting would be to administer time penalties that have direct societal duty elements, for most people tend to value their time over small, generally meaningless amounts of money. For example failure to vote could be met with community service penalties or an increased probability of jury selection. Regarding possible exemptions from voting, realistically if the proper access systems are developed, which they should be, then few possibilities remain. One legitimate exemption could be on religious grounds, e.g. for Jehovah’s Witnesses. Another exemption could be given for those suffering from mental illness or those at an advanced age (70+).

One potential side problem in a compulsory voting environment is whether individuals will be more inclined to buy/sell votes. With a mandate that every citizen vote, the probability of voter fraud should still be low due to proper checks and security measures. However, what cannot be so easily neutralized are individuals selling their votes. Selling votes may not be a large issue now because lower voter turnout means groups merely have to “rally” their partisans to drive their chances of winning. Under a mandate there will be a much larger pool of potential voters that would be more difficult to directly persuade, thus shortcuts could be taken. It is also important to note that there is a reasonable probability that a number of these “new” voters could be politically apathetic enough to sell their vote. Fortunately due to the privacy associated with voting it would be extremely difficult for the “vote buyer” to confirm that the “vote seller” actually voted the way he/she may have been instructed to; without the ability to confirm both sides of the exchange, vote buying and selling should be limited, if it occurs at all. Also there has been no widespread vote buying in other countries that practice compulsory voting.

The idea of compulsory voting should be irrelevant, for all citizens should be interested enough in the development of society to at least spend a few moments understanding the pertinent issues and then vote on their beliefs. However this mindset is far from universal. While this reality is unfortunate it could be dismissed as regretful, but not critical, if not for two salient points. First, and most important currently, certain agencies are actively attempting to prevent certain groups of individuals from voting by producing unnecessary obstacles. These actions directly threaten the idea of voluntary voting as a sufficient means for citizens to express their power in a democracy. Second, there are times when individual privileges need to be adjusted for societal good, and the preservation of a democracy over an oligarchy certainly meets the condition of societal good. Overall the idea of compulsory voting is not one that aims to force democracy upon the citizenry, but instead to protect democracy for the citizenry.


Citations –

1. Kittelson, A. “The Politics of Democratic Inclusion.” In The Politics of Democratic Inclusion. Temple University Press, 2005.

2. Blais, A, Gidengil, E, and Nevitte, N. “Where does turnout decline come from?” European Journal of Political Research. 2004. 43(2):221-236.

3. Verba, S. “Would the dream of political equality turn out to be a nightmare?” Perspectives on Politics. 2003. 1(4):663-679.

4. Fowler, A. “Electoral and policy consequences of voter turnout: evidence from compulsory voting in Australia.” Quarterly Journal of Political Science. 2013. 8:159-182.

5. U.S. Census Bureau, Current Population Survey, November 2008 and earlier reports. Internet release data: July 2009. Table A-1. Reported Voting and Registration by Race, Hispanic Origin, Sex and Age Groups: November 1964 to 2008.

6. U.S. Census Bureau, Current Population Survey, November 2008 and earlier reports. Internet release data: July 2009. Table A-2. Reported Voting and Registration by Region, Educational Attainment and Labor Force for the Population 18 and Over: November 1964 to 2008.

7. Citrin, J, Schickler, E, and Sides, J. “What if everyone voted? Simulating the impact of increased turnout in senate elections.” American Journal of Political Science. 2003. 47(1):75-90.

8. Pettersen, P, and Rose, L. “The dog that didn’t bark: would increased electoral turnout make a difference?” Electoral Studies. 2007. 26(3):574-588.

9. Alvarez, R, et al. “The 2008 Survey of the Performance of American Elections.” Washington, DC: Pew Center on the States. 2009.

10. Milyo, J, Konisky, D, and Richardson, L. “What determines public approval of voting reforms?” Paper presented at the Annual Meeting of the American Political Science Association, Toronto, Canada. 2009.

11. Alvarez, R, Hall, T, and Llewellyn, M. “Are Americans confident their ballots are counted?” Journal of Politics. 2008. 70:754-768.

Tuesday, March 24, 2015

Forgetting the Past or Not Even Caring Enough to Remember


Numerous individuals have recited various versions of a simple truth over the years: “Those who do not learn from history are doomed to repeat it.” However, despite the gravity and accuracy of these words it appears that few individuals are interested in heeding them. This behavior raises an interesting question: is this lack of consideration for the past driven by individuals themselves or by the means by which history is documented?

The digital age has given rise to a new medium for recording history that brings its own advantages and disadvantages. The principal advantage of the widespread digitalization of culture and its associated events is the ease with which information can be recorded and stored, in terms of both opportunity and direct resource costs. Most individuals can type faster than they can write, especially over long periods of time, increasing the efficiency with which information is recorded; also electronic formats eliminate the need to acquire and use vast reams of paper or an even more cumbersome recording medium.

Unfortunately the advantages in storage capacity and speed have also brought forth disadvantages. One important problem for the long-term documentation of history is the speed at which technology changes. For example various paper and other physical medium (stone, rock, etc.) records have lasted thousands of years, imparting valuable information about past human culture and society, whereas electronic resources are more unstable, whether from simple data corruption or errors due to a misclick up to the potential of an EMP or large solar flare. While there are strategies to enhance longevity like etched nickel sealed in argon, these options are far too expensive to justify for most data. Even natural deterioration is accelerated in digital storage media, both directly, as when a flash drive or CD physically falls apart, and indirectly, as when a particular medium falls out of fashion with public use and becomes obsolete. One thing paper will never be is obsolete, no matter the “predictive” musings of certain technophiles.

The problem of social viability is further complicated by the number of different formats for various files. While it can be argued that competition in the marketplace is good, information storage is not a field suited for widespread competition, especially when so many of the options offer no significant advantages over their “competitors”; i.e. what really is the difference between .jpeg, .png, and .tiff in actual application terms? Even if a medium remains socially viable, data retrieval and acquisition can become difficult if the only person authorized to access the information dies and no one else has the necessary information to take over access. Certainly hackers and various security services can be utilized to correct this problem, but such action takes time and money and may not always be available or successful.

Fortunately these problems are probably the easiest of the disadvantages associated with digital recording to manage. Simple standardization of video and picture formats, reducing the myriad of options to one or two, should address orphan formatting concerns, though it is unclear when such a step will actually be executed. Proper diligence in updating and converting existing formats by consumers should address conversion issues. Software companies can also better manage conversion issues by adding backwards compatibility even if it costs a little extra to develop. Some believe that all of these problems are moot due to cloud storage, but these storage mediums are dubious simply because they do not have a track record of being reliable over even decades, let alone centuries; just look at all of the online data storage services that have gone out of business over the last decade.
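As a concrete illustration of the conversion diligence described above, the short Python sketch below batch-converts legacy image files into a single standardized format. It is only a sketch: the use of the Pillow library, the choice of PNG as the target format, and the directory names are illustrative assumptions rather than recommendations from the discussion itself.

# Minimal sketch: migrate legacy image files to one standardized, lossless
# format (PNG chosen arbitrarily for illustration) using the Pillow library.
# Directory names and the target format are hypothetical assumptions.
from pathlib import Path
from PIL import Image

LEGACY_EXTENSIONS = {".tiff", ".tif", ".bmp", ".jpeg", ".jpg"}

def convert_archive(source_dir: str, target_dir: str) -> None:
    """Copy every legacy-format image in source_dir into target_dir as a PNG."""
    out = Path(target_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in Path(source_dir).rglob("*"):
        if path.suffix.lower() in LEGACY_EXTENSIONS:
            with Image.open(path) as img:
                # PNG is lossless, so repeated migrations do not degrade the data.
                img.save(out / (path.stem + ".png"), format="PNG")

if __name__ == "__main__":
    convert_archive("old_archive", "standardized_archive")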

A more imposing problem is that the ease and reduced workload involved in producing and recording information has marred the process of identifying what information is actually important versus simple baseless opinion. In the past only individuals who were intelligent or incredibly passionate produced significant information on a topic because of the work involved. Of course information produced in the past was not immune from error or bias, but due to the effort required to produce information for mass consumption it was not difficult to identify bias born from excess passion. However, now that it is easier to produce information for public consumption there is reason to suspect, largely because it is already happening, that less diligent individuals will produce more error-prone information in addition to more information being produced in general. In fact in 2011 the IDC Digital Universe study estimated that humanity had created 1.8 zettabytes of new data and that amount was expected to grow exponentially over the next decade.1

Unfortunately while individuals marvel at the sheer storage capacity of digital systems, the time humans have available to sort through this information remains ever fleeting. With the ever-present human ego and frequent inability to accept being wrong, a vast majority of this produced information and “historical” record is significantly biased towards a particular viewpoint without care for accuracy. Too often humans accept knowledge found online as accurate, especially if it supports their personal viewpoint, so the increased propagation of information will make weeding the accurate information from the inaccurate even more difficult.

The ability to separate truth from wishful thinking or outright lies is further complicated by the action and position of formal education. Sadly, while the amount of history continues to grow with every passing second, most modern high school history requirements rarely surpass the Vietnam War, leaving most of the period from the 1970s to the present unstudied or even undiscussed. This oversight creates an inherent negative, for at best the history teachers, those who should be better equipped than students to instruct and deduce the accuracy of information about historical events, are not able to help students understand the truth, and at worst the exclusion of this information may lead some students to deem that it is not important. Clearly such a conclusion is not correct, for there have been many important historical events, both in the United States and globally, between 1975 and 2015.

The lack of importance assigned to modern history sends a message to society at large, especially those in power, that the concerns of the public regarding their actions and decision-making can be ignored, for once those events fade into the past the public and history itself will not be able to judge inappropriate action harshly because people will not regard remembering it as important. Sadly those who do remember may simply be labeled as “over-emotional” or biased actors depending on their viewpoint. Overall such a ramification is troublesome because any increase in hubris by those in power will typically produce negative results for the masses, for most people in power tend to believe that helping society hurts their short-term capitalization potential, thus there is little incentive to help society.

Some may raise the concern that there is not sufficient time to teach all of the existing history; there are more “important” things to do like administering aptitude tests. The best way to address this problem is to eliminate the instruction of overlapping material, which is typical of history education in school, where elementary, middle/junior and high school history frequently cover the same events over and over again through “review” sessions. One possible strategy for eliminating this overlap would be to divide U.S. history into sections by grade level as shown below.

Grade = Material

5th = Colonial Period (1600s)
6th-7th = Revolutionary and Constitutional Period (1700s)
8th = Early Nation Development, Civil War and Reconstruction (1800s)
9th-10th = World War I, Great Depression, World War II and Early Cold War (1900-1950s)
11th-12th = Korean War, Vietnam War and Modern History (1950s-Present)

For some high schools the above schedule may involve expanding U.S. history from a single semester to two semesters, which should not be a problem given the importance of history. Also the way history is taught needs to change, for in the digital era gone are the days of memorizing dates and names. Instead students should be instructed on the motivations and rationales (and how justified they were) that drove the “decision-makers” of a given time to make the choices they made. Knowing that D-Day occurred on June 6th, 1944 is far less important than knowing what type of planning went into its execution and why such a strategy was viewed as necessary.

Overall history has been a somewhat difficult sell to the general public, in large part under the criticism of “how does this help me in my life”, which has created a motivation to not even bother remembering. Such an exclamation is puzzling, for history is rife with incredibly meaningful “what ifs” that not only enhance thought, but also provide opportunities to learn how to better judge a given situation, increasing understanding of potential ramifications. While changes can be made to the methods of recording history, how history is taught and how the public perceives the importance of history, in the end each individual must do a better job of understanding the importance of history as well as learning from its examples; otherwise history will truly repeat itself until the repetitive bad decisions of society finally result in a hastened end to human society itself.



Citations –

1. “Extracting Value from Chaos.” IDC iView. June 2011.

Wednesday, March 18, 2015

Improving Customer Knowledge on Health Insurance

One of the tenets of the Affordable Care Act (ACA) is that consumers will lower healthcare costs by comparing and contrasting prices for both insurance and medical procedures, spurring competition between these respective agencies. Unfortunately the strategy is marred by the fact that the current marketplace only focuses on insurance provider characteristics in a limited capacity (co-pay, out-of-pocket limits, deductibles, etc.) and there is no information on cost relationships between insurance companies and a given hospital. Also there is no meaningful existing marketplace that focuses on medical service providers (MSPs) where a customer can compare the costs of an MRI between hospital A, 134 miles away from his home, and hospital B, 46 miles away from his home. There are numerous independent groups that attempt to produce a meaningful “shopping environment”, but despite these efforts the overall information is limited, regional coverage is not uniform, and most customers are unaware that these sites even exist beyond a random annual story about them on a blog. Without the ability for healthcare consumers to identify the best medical service prices it is difficult to expect them to be intelligent consumers and aid in the reduction of healthcare costs.

One of the biggest obstacles to producing a more transparent medical pricing environment is the arrangements negotiated between various hospitals and insurance companies. These deals create medical service institutions that are “in-network” and “out-of-network”. Insurance companies cover “in-network” providers because they are able to negotiate lower, controlled prices using their economies of scale, something they are unable to do with out-of-network providers. In theory one would think that insurance companies would value a transparent marketplace because it would force MSPs to compete against each other to acquire customers, thereby lowering costs for the insurance industry. Clearly it is assumed that the insurance company would have a price ceiling for each type of service, but few MSPs would exceed this limit, if reasonable, because doing so would lead to a significant number of services rendered without proper financial redress, which would put them out of business. If more medical transparency would theoretically benefit insurance companies, why is there no push from insurance companies to produce such an environment?

Three immediate reasons jump to mind when attempting to explain resistance by both MSPs and insurance companies to more transparent pricing, even though such transparency is representative of the free-market principles that these groups claim to support:

The first reason for opposing transparency can be viewed as the most plausible: there is a highly complicated and competitive relationship between MSPs and insurance companies in which these agencies work together to ensure proper prices with a sufficient customer base so that both parties profit. In such a relationship, if significant transparency is developed it will add a third major component, the decisions and tendencies of potential customers. Without understanding the nuances of the negotiation and the economic obligations of both the insurance companies and MSPs, the customer pool will make sub-optimal decisions that will result in inefficiencies, which will produce increased costs, reducing profits and possibly even endangering certain businesses.

While there is some truth to the level of complexity associated with this relationship, the above philosophy flies in the face of the general tenets of capitalism. No real capitalist has ever argued that a potential customer pool should be divided among a group of businesses without genuine competition. Instead the mindset has always been for businesses to produce advantages in their products/services that will attract customers, and if they are not able to produce enough advantages then that business folds up shop.

Some could argue that because buying health insurance and having access to medical care is more important than buying a hamburger it cannot be judged by the same principles as regular commerce. Unfortunately for its proponents this idea is quickly dismissed when recalling the ruthlessness and questionable tactics that insurance companies have engaged in to deny coverage to their customers on technicalities, as well as the excessive charges most MSPs levy against their patients that are “negotiated away” by insurance agreements. If MSPs and insurance companies want the above structure of “secret balance” then they should become non-profit organizations, which would at least justify the above argument.

The second reason for opposing transparency would be concern about divulging trade secrets regarding how prices are negotiated. The “trade secrets” argument is old hat for corporations attempting to avoid transparency. In some cases it is actually a legitimate argument; however, in the case against medical transparency it is not valid because the idea of medical service transparency is simply the declaration of a single price for a given service, i.e. a standard single knee replacement, along with a general quality rating from an independent auditor. There is no expectation to produce a methodology regarding how a particular price was produced. In addition it is inappropriate for either insurance companies or MSPs to suggest that simply knowing the price for a given service gives competitors a negotiating advantage. Even if it could, all parties would hold the same advantage in an environment where all service prices are publicly available, thus there is no reason to be concerned about the revelation of trade secrets.

The third reason for opposing transparency is the most obvious and more than likely the correct one in that the insurance industry and MSPs in general are happy with the current system because they are able to make large amounts of profit and are uncertain if a new transparent and more competitive system would decrease or increase that profit. To better understand how this uncertainty arises one must study the potential changes that occur in a more transparent environment.

In a more transparent environment one of two possible scenarios will emerge between the MSPs and insurance companies. In the first scenario insurance companies will maintain their existing relationships with MSPs and simply be competing against other MSPs and their insurance provider relationships. Basically insurance companies will keep their provider “zone(s) of control”, but consumers will be able to better understand the economic benefits from moving between those zones to best meet their needs.

In the second scenario the new competitive environment may cause MSPs to “unbind” themselves from insurance companies, eliminating some to most of the provider control and its associated power. Without provider relationships insurance companies would lose their “zone(s) of control”, which could lead to a mass exodus of individuals from one insurance company to another. Open competition between MSPs would eliminate the guaranteed business these zones of control once provided, thus MSPs would not be beholden to insurance companies, and thus insurance companies would have to compete for business without guarantees. Clearly this second scenario is much more dangerous to the profitability of both insurance companies and even MSPs, because the MSPs would have to compete as well, just to a lesser extent.

On a political level both Republicans and Democrats should accept and support increasing transparency regarding medical procedures. Republicans should support such a measure because the existing lack of transparency is anti-American and anti-capitalistic, as it restricts the choice and freedom of individual consumers while increasing distortion in free markets. Democrats should support such a measure because it will lower government costs and could reduce income inequality by lowering individual out-of-pocket costs. Medical care is typically a fixed cost, thus it weighs more heavily on poor individuals than on rich individuals.

On a public affairs level almost all individuals should support increased transparency of medical procedures. The first obvious reason for this support would be the reduced prices for medical care that would accompany increased competition. The second, less obvious, reason for support would be the ability to better prepare for future medical care. One of the biggest problems with the current system of care is that most of the focus is on elective or chronic procedures versus acute procedures. Basically there is little shopping when someone believes that they are in urgent need of medical care. In this situation a person can become justifiably emotional and scared, reducing the ability to behave like a rational actor when it comes to procuring competitive medical services. However, in a transparent environment individuals will be able to plan ahead of time to determine which hospitals to attend if procedure A is needed versus procedure B, eliminating the need to decide on the spot.

Unfortunately, while increased medical transparency should have significant government support from both major political parties as well as widespread public support, no Federal law demanding significant transparency requirements from these institutions appears on the horizon, and for the reasons discussed above one should not expect insurance companies and MSPs to become significantly more transparent on their own. The small collection of state transparency laws is a positive step, but should not be expected to significantly lower national healthcare costs.

For example, when discussing state required transparency, in 2014 Catalyst for Payment Reform and the Health Care Incentives Improvement Institute judged that only Colorado, Massachusetts, Maryland, Maine, New Hampshire, Virginia, and Vermont had some form of sufficient law(s) requiring appropriate and useful price reporting in an effort to support transparency. However, among these states only Massachusetts and Maine had suitable and consistently operating websites to host pricing information, allowing consumers ease of access to the information and the ability to effectively utilize it in order to make informed healthcare decisions.1 Despite this deficiency in overall transparency, there is an important step that insurance companies can take to increase transparency that should not threaten any real profitability and would not require state or Federal action: changing the format of how patients are informed of how their medical costs are covered after a procedure.

The breakdown of what medical procedures were performed, their costs and who/what is responsible for which payments is commonly detailed in an “Explanation of Benefits” (EOB) form. The biggest problem with the generic EOB form is ironically a lack of explanation. This lack of explanation exists largely because the EOB is basically a form letter to the patient with various numbers and procedure codes thrown on a piece of paper. There is no unique explanation associated with the patient’s personal experience and the procedures executed. Initially it would be unreasonable to expect insurance companies to perform unique detailed explanations and evaluations for all successful claims. However, it is not unreasonable to expect insurance companies to produce a clearer and more transparent document.

The core of this lack of transparency in the EOB is that insurance companies and even MSPs place too much onus upon the patient to both understand the intricate elements of his/her insurance policy and use that understanding to interpret the EOB. This interpretation is made more difficult by the lack of qualitative information in the EOB. Insurance companies could make it much easier on patients if they simply tied the insurance policy to the EOB and then used both qualitative and quantitative information to demonstrate step-by-step with words, not just numbers, how the policy was used to pay, or not pay, for certain procedures. For example instead of simply stating that “sum x is to be paid by the patient due to the maximum coverage reached due to the condition of the plan for this service” the EOB should document the existing coverage value and how that coverage value was utilized to cover the applied care.

Such a change in strategy should not be difficult because insurance companies already use policy information to create the EOB for individual patients, thus the only real change would be the addition of qualitative information. For those who think such a change would be too difficult, cumbersome or expensive, the problem with this objection is that individualized plans do not really exist, thus there is only a small, finite number of plan templates that must be addressed. For example if purchasing insurance were likened to purchasing a meal from McDonalds, the customer would only have the ability to purchase a certain specific number of pre-assembled meals (i.e. value meals) instead of building their own meal experience from individual items (i.e. a la carte). There are no significant a la carte insurance plans, thus the overall cost increases for making these changes are minimal. In addition to the step-by-step analysis, which would use generic statements relative to co-pay and co-insurance, a more expansive EOB should include a small glossary to explain specific terms.
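To make the proposed step-by-step EOB format concrete, the following Python sketch shows how plain-language text could be generated from the same policy and claim data an insurer already uses. Every field name, plan rule, and dollar figure here is a hypothetical illustration rather than any actual insurer's data model.

# Minimal sketch of generating a plain-language EOB line from policy and claim
# data. All field names, dollar amounts, and plan rules are hypothetical.
from dataclasses import dataclass

@dataclass
class Policy:
    deductible: float        # amount the patient pays before coverage begins
    deductible_met: float    # portion of the deductible already satisfied
    coinsurance: float       # patient's share after the deductible (e.g. 0.20)

@dataclass
class ClaimLine:
    service: str             # e.g. "MRI, lower spine"
    allowed_amount: float    # price negotiated between insurer and provider

def explain(policy: Policy, line: ClaimLine) -> str:
    """Return a step-by-step, plain-language explanation for one claim line."""
    remaining_deductible = max(policy.deductible - policy.deductible_met, 0.0)
    toward_deductible = min(line.allowed_amount, remaining_deductible)
    after_deductible = line.allowed_amount - toward_deductible
    patient_coinsurance = after_deductible * policy.coinsurance
    insurer_pays = after_deductible - patient_coinsurance
    patient_total = toward_deductible + patient_coinsurance
    return (
        f"{line.service}: the negotiated price was ${line.allowed_amount:,.2f}. "
        f"${toward_deductible:,.2f} went toward your remaining deductible of "
        f"${remaining_deductible:,.2f}. Of the rest, your plan paid "
        f"${insurer_pays:,.2f} and your {policy.coinsurance:.0%} coinsurance "
        f"share was ${patient_coinsurance:,.2f}. You owe ${patient_total:,.2f}."
    )

if __name__ == "__main__":
    print(explain(Policy(1000.0, 400.0, 0.20), ClaimLine("MRI, lower spine", 900.0)))

Because the arithmetic already exists in the claim-processing system, the added cost of this kind of output is essentially limited to writing the explanatory sentence templates once per plan type.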

Some could argue that such a change would inconvenience insurance companies and that it is the responsibility of the policyholder to know the extent and limits of his/her policy. In addition the Internet provides resources to “guide” patients through the general meanings of an EOB. On its face this argument is insufficient for multiple reasons. First, despite what some people want to believe, not every individual has access to the Internet, thus looking online for assistance is not universally applicable. Second, arguing against the above changes to an EOB is an argument against efficiency and productivity. What makes more sense: insurance companies making a single capital investment that would be less than 1% of total yearly profit to make their EOBs more user-friendly and easier to understand, or millions of people spending two to six hours attempting to understand their EOBs in their current form without a guarantee that they will? Suggesting that the latter makes more sense should only be answered with a silent and sad horizontal shaking of the head.

There is a big difference between an individual thinking he knows what his medical insurance covers and actually seeing what it covers. A more detailed and consumer-friendly EOB will help individuals better understand the actual applications of their medical insurance coverage and will increase transparency and consumer choice by producing better-informed consumers. It would be ideal if the Federal government would actually involve itself on this issue by producing legislation that would create a standardized EOB format instead of relying on companies to do it themselves or on states producing individualized legislation that may not be uniform. Overall if one of the major goals of legislation like the ACA is to reduce medical costs then transparency is a key element to increasing consumer choice and lowering costs. While a truly transparent system like that seen in how most consumer goods and services are purchased may still be a while away, producing a more detailed EOB is an easy and straightforward means to producing more informed consumers and possibly lower medical costs.


==
1. Delbanco, S, Brantes, F, et al. “Report Card on State Price Transparency Laws.” Catalyst for Payment Reform and Health Care Incentives Improvement Institute. Mar 2014.

Wednesday, February 18, 2015

The Decline of Marriage

Currently there is either a crisis in marriage or simply a course correction. It is no secret that the percentage of men and women in the United States who are currently married has decreased steadily and significantly from 1970 to now. While there has been a significant increase in divorce since 1970, most of the decrease in marriage rates has come from individuals choosing not to marry at all. In addition to the general overall drop, decreases in marriage rates have differed based on income/assets, as shown below.


Figure 1: Marriage rates for men between the ages of 30-50 by income bracket, 1970 to 2010.1

Females have seen a similar pattern with high-income earners only experiencing a small drop in their marriage rate, while working class women have seen a drop of at least 15%.1 One question to help characterize this trend is: are marriage rates actually in trouble (i.e. they will continue to fall in the future) or are marriage rates naturally dropping from their seemingly unrealistic levels in the 1950s and 1960s and will simply stabilize at a dynamic equilibrium point in the near future?

While this question cannot be directly answered at the moment, it is important to determine why marriage rates have fallen in the manner they have over the last half-century to better understand which of the above answers is more probable. In the past and present three factors have largely driven the desire to marry or not to marry: cultural, economic and psychological. How these factors have changed with time should provide a sufficient and effective basis to address the above question.

One reason a drop in marriage rates should not be surprising is a more liberal cultural shift in attitudes toward the absence of inter-gender relationships. In the 1950s and 60s individuals who elected to remain single were typically thought of as weird, strange, “players” and/or inferior because they were not able to attract a spouse. In modern times such generalizations are made much less often and instead remaining single is commonly regarded as a valid lifestyle choice. This change has made individuals freer from the cultural pressure to marry out of fear of being passively ostracized from society.

Changing attitudes with regard to remaining single were not the only change, for attitudes regarding women in general have also significantly changed. In the past there was typically an underlying understanding after a marriage that the male would have the job and earn the money (i.e. be the breadwinner) and the female would stay at home and manage the domestic affairs of the family: cleaning the house, raising the children, etc. This structure made it imperative that women find husbands who could support them, for their prospects of finding employment to support themselves were limited, even with the gains made from their work during WWII reducing prevalent stereotypes that they were unable to perform certain jobs. Over time the significant and continuous increase in participation by women in the labor force has changed this “understanding”, in the eyes of some even rendered it obsolete. Therefore, for a number of women marriage is no longer the principal method by which one finds economic support.

In addition to the cultural shift in accepting women into the workforce, changes in social norms and the legal system made divorce less stigmatizing, but more difficult to execute due to increased legal complexities. These changes in the execution and structure of divorce proceedings are believed to significantly influence the desire of single individuals to not marry. Interestingly it could be argued that due to the legal and emotional complexities of divorce that for a number of individuals a divorce is more emotionally and psychologically taxing than a standard termination of an existing relationship (i.e. breakup), both in magnitude and duration.

Another element that is amplifying the negative associations of divorce is the cultural shift concerning co-habitation. In the 1950s and 1960s the chief factor that limited the amount of co-habitation was not that it was shunned by general society (although it was), but that people did not consider it a viable option in contrast to marriage. Therefore, even if someone had concerns about the negative elements associated with a potential divorce there were typically only two ways a relationship could resolve: breakup or marriage. Now co-habitation has become a legitimate alternative, which could apply greater emphasis on the negative elements of divorce. While a number of individuals do co-habit before marriage, co-habitation is not the catalyst for marriage that some claim.

The chief disadvantage of marriage relative to co-habitation is the ease with which the latter can be ended. Both entrance into and exit from marriage have significant regulatory hurdles whereas co-habitation simply involves moving some material possessions into, and if necessary later out of, a physical location. Entering into a marriage has some hurdles that can complicate things and one could argue that these hurdles provide a “weeding out” element where non-serious applicants will typically fall by the wayside. However, divorce is the real problem, for even when a divorce is amicable it takes weeks, if not months, to fully resolve the separation.

An interesting psychological aspect of the fear of divorce is that a number of individuals view divorce as an almost inevitable occurrence, that the marriage is destined to fail. It is strange that individuals would think in such a manner. How often do most people envision taking an action where the initial mindset is failure? The negative ramifications of divorce are only relevant if one views the probability of its occurrence as considerable. Perhaps such a mindset is reflective of one’s general standing in life, for high social status individuals (well-off college graduates) have not seen a significant drop in marriage rate versus those with less in their lives. Basically it can be argued that the further down the economic ladder one is, the higher the probability that his/her life has had significant failure, thus a potential marriage is viewed as having a higher likelihood of failure versus someone who has had more success in life.

The Affordable Care Act (ACA) could also prove a detriment to marriage, as one of the few remaining tangible benefits acquired from marriage over co-habitation is that spouses can share health insurance, meaning that one person who could not afford or be eligible for health insurance could be covered under their spouse’s policy. However, the ACA forces insurance companies to cover all individuals regardless of circumstance and allows for states or the Federal government to provide subsidies, which are much more accessible to singles versus married individuals, to ease costs, thereby significantly damaging the shared healthcare advantage of marriage. Whether or not this will influence marriage rates is unclear, though it is theoretically plausible that there should be little change because in modern times the acquisition of health insurance has not been a significant motivation for marriage.

While there is no argument that individuals who marry have better physical and mental health outcomes than individuals who remain single, there is less certainty regarding the differences in health outcomes between married and co-habitating individuals. However, a majority of the research appears to come down on the side of marriage regarding the better health outcomes in part because marriage produces higher probabilities for quality relationships, but there does not appear to be a decisive difference between the two.2-4

There are two major rationales for this result. First, individuals who marry recognize the strong loving bond they have with their significant other and maintaining a quality relationship is simply easier due to these positive connections. In essence these individuals gain almost a status-based ego boost from the marriage, viewing it as the “ultimate level” of relationship status. Second, the “fear” of divorce may actually be beneficial on some level, as the difficulties surrounding divorce could force individuals to apply more effort and care in working through problems in the relationship, improving psychological well-being versus co-habitation, where escape is so easily achieved that a small problem could derail the relationship or be ignored and allowed to fester.

As mentioned above the advancement of women in the workforce has reduced some of the more questionable rationalities for marriage both culturally and economically. From an economic perspective the ability of women to support themselves financially has had a negative impact on men from a standpoint of their general marriage prospects. Women can now be more selective regarding whom they want to marry rather than focusing solely on “landing a man” because they need someone to support them.

This new selection freedom for women may have moved marriage from a quasi-necessity to a luxury. Unfortunately like most luxuries this produces an “arms race” mentality among many of the competitors (men) to demonstrate the value of a relationship. As women now have more freedom in selecting a marriage partner, males have to do more to make themselves more attractive, typically along the lines of having/earning money. Therefore, it can be argued that one of the biggest influencing factors on marriage rates is income inequality; i.e. the less money one has the less likely they are to get married because they are not an attractive candidate. There is sufficient evidence that supports the influence of income inequality as marriage rates have fallen much faster among poor and middle class individuals than among rich individuals.1,5,6

A number of conservative voices have lamented that one explanation for the drop in marriage rates is the penalties associated with marriage in the tax code. Originally the policies that produced these penalties were actually boons to married couples, but with the cultural shift that has afforded women more workplace opportunities, these boons have had a tendency to become busts. Of all of the financial elements affecting marriage in the tax code, two should have the greatest influence on marriage rates: joint filing, including its association with welfare benefits, and the Social Security spousal benefit.

In modern times joint filing has become a poor motivator for marriage. First of all, a joint filing is significantly more complicated than individual filing, creating undue stress regarding potential benefits and detriments from the rate brackets and income divisions within the couple. Unfortunately most of the time a joint filing in a two-occupation household forces the married couple to pay more in taxes. Joint filing was designed around the traditional idea of a marriage where one individual (typically the male) works to support the rest of the family financially and the female works to support the family domestically; because the female does not get paid, the rules associated with joint filing typically produced a lower tax rate.

Overall, depending on the income disparity there are two possible outcomes for a married couple where both individuals have jobs: 1) if the individuals are in different taxable income brackets the one in the higher bracket will typically pay less and the one in the lower bracket will typically pay more due to income averaging; 2) if the individuals are in the same taxable income bracket both typically pay more. The possibility of greater payment occurs because while income is summed, the boundaries defining the various tax brackets, after the first two brackets (10% and 15%), are not proportionally maintained versus their single-filer counterparts. For example in a tax filing for a single individual the boundaries defining a 25% rate are $36,901 to $89,350, whereas in a joint filing the boundaries are $73,801 to $148,850; note how the joint upper boundary is $29,850 less than double the single upper boundary ($178,700).
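As a rough numerical illustration of the same-bracket penalty just described, the Python sketch below compares two hypothetical single filers earning $80,000 each against the same couple filing jointly. Only the 25% bracket boundaries come from the figures quoted above; the remaining thresholds and the 28% rate are filled in from the 2014-era schedule as assumptions, and deductions, exemptions and credits are ignored.

# Minimal sketch of the two-earner marriage penalty using 2014-style brackets.
# Only the 25% boundaries are quoted in the text; the other thresholds, the 28%
# rate, and the $80,000 incomes are illustrative assumptions. Deductions,
# exemptions, and credits are ignored.
SINGLE_BRACKETS = [(9_075, 0.10), (36_900, 0.15), (89_350, 0.25), (float("inf"), 0.28)]
JOINT_BRACKETS = [(18_150, 0.10), (73_800, 0.15), (148_850, 0.25), (float("inf"), 0.28)]

def tax(income: float, brackets) -> float:
    """Apply a progressive rate schedule to taxable income."""
    owed, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

if __name__ == "__main__":
    two_singles = 2 * tax(80_000, SINGLE_BRACKETS)   # ~$31,712.50 combined
    married_joint = tax(160_000, JOINT_BRACKETS)     # ~$32,047.00
    print(f"Two single filers at $80,000 each: ${two_singles:,.2f}")
    print(f"Married filing jointly on $160,000: ${married_joint:,.2f}")
    print(f"Marriage penalty: ${married_joint - two_singles:,.2f}")

Run as written, the two single filers owe roughly $31,713 combined while the joint return owes roughly $32,047, a penalty of about $335 that stems entirely from the compressed upper boundary of the joint 25% bracket.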

Based on the above rules, while there may be a small benefit to certain individuals who marry someone below their income bracket, in practice only a minority of marriages cross income brackets to the point where this element is relevant; therefore, a majority of individuals who get married will suffer increased taxes. In fact the individuals with the highest probability of receiving a tax benefit from joint filing are those who need it the least, the rich. However, the most problematic element of the direct tax bracket assignment of joint filing affects middle class marriage, because those individuals have less money to lose than rich individuals when suffering the penalty.

The direct income summation tax penalty can influence the marriage potential of all parties; however, this summation has a greater indirect negative influence on the poor because of its association with the welfare system. Understandably the welfare system has an income ceiling one must be below in order to claim benefits, but when two welfare recipients near this ceiling marry the income summation disqualifies both from receiving further benefits. Therefore, this structure of how one qualifies for welfare benefits when married produces another economic obstacle to motivating poorer individuals to marry versus co-habitation.

The spousal benefit in Social Security is the second problem in the economics of the modern marriage. The original design envisioned a traditional marriage where the individual with the job (typically the male) who paid into Social Security would receive standard benefits associated with that payment whereas the individual without the job (typically the female) would receive a benefit approximately one-half the size based on marriage. Overall this “traditional couple design” would result in a retirement benefit of approximately 150% the benefit of a single individual. The purpose of the design was to act as insurance to protect the non-working spouse against the loss of the worker’s wage. Note that this benefit can apply to divorced women who were married for a certain period of time.

However, once again the design was meant for a single-worker marriage. Working wives pay full Social Security payroll taxes, but the benefits derived from these payments compete with the spousal benefit, i.e. they only get to claim the one of higher value. Since most husbands make more money than their wives and typically work longer (although this latter aspect may be changing), the spousal benefit will frequently be larger. Therefore, these working wives collect the same Social Security benefit they would have received had they not worked at all, thus all of the payroll taxes paid provide no future benefit; it is simply lost income to the government. Overall, while there is some loss of funds due to the lack of benefit from the payroll tax, for the most part the detriment is marginal because working spouses still significantly benefit over non-working spouses due to the wages earned from their employment.
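The “claim the larger of the two” competition described above can be summarized in a few lines; in the sketch below the dollar amounts are hypothetical and the actual Social Security benefit formula, which is based on indexed lifetime earnings, is not modeled.

# Minimal sketch of the benefit competition described above: a spouse receives
# the larger of (a) the benefit earned on her own payroll-tax record or (b) a
# spousal benefit of roughly half the worker's benefit. Figures are hypothetical.
def monthly_benefit(own_earned_benefit: float, workers_benefit: float) -> float:
    spousal_benefit = 0.5 * workers_benefit
    return max(own_earned_benefit, spousal_benefit)

if __name__ == "__main__":
    workers_benefit = 2_000.0   # higher-earning spouse's own monthly benefit
    own_earned = 900.0          # benefit earned by the working wife's payroll taxes
    print(monthly_benefit(own_earned, workers_benefit))  # 1000.0
    # Her payroll taxes bought no additional benefit, since the $1,000 spousal
    # benefit already exceeds the $900 earned on her own record.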

One simple way of dealing with this problem is eliminating the spousal benefit, but taking such action would leave most women worse off because the benefit derived from their own payroll taxes is less than the spousal benefit; women who function in the traditional homemaker role in marriage would be hurt even more. However, some would argue that “traditional” marriages have become rare due to both the increased number of working women and the decreased number of marriages, thus any detriment to this element is marginal. Another point of contention is that any change to the spousal benefit must have a phase-out period because of individuals who still rely on it. Overall, between the two, modernization of joint filing to support middle class individuals should take precedence over changing the spousal benefit.

Another economic element that could influence marriage that has not garnered much attention is income stability or volatility. While somewhat crude, marriage can be thought of as an investment and anyone with any business acumen will agree that uncertainty is the most dangerous element in investing. High rates of income volatility produce significant levels of uncertainty regarding the prospects for financial stability in a high consumption commitment investment like marriage. Some research has identified that rising income volatility could explain a significant portion (one-third) of the decline in marriage.7

The third element that influences marriage rates is the interpretation of the inter-gender relationship along with how it begins and evolves. In the 1950s most courtships proceeded similarly starting with the male asking the female out on a first date; after numerous additional dates some couples engaged in their first act of sexual intercourse. If couples did engage in sexual intercourse it was rarely talked about, especially to parents. Finally after a stable and lengthy period of time together couples identified whether or not they wanted to get married.

Modern times have developed a more “hook-up” mentality where individuals who are not even in a formal relationship or even on a date will get together for sexual intercourse and then never significantly interact with each other again. Some conservative groups have claimed that greater access to pornography has reduced marriage rates, but to make this argument one would have to demonstrate that a significant motivator for marriage was the consistent ability to have sexual intercourse where this access was not otherwise available. This argument is defeated by the fact that sexual intercourse between non-married individuals has become rather commonplace due to the change in how people view relationships and is more easily engaged in than the longer courtship period associated with marriage.

Unfortunately for pro-marriage proponents it appears difficult to reverse the more lax attitudes of modern youth regarding sex and love. Some would argue that while this attitude may complicate creating positive future romantic relationships when these individuals are younger, the increased sexual freedom produces better overall partners when they do decide to get married. Whether or not this point is accurate remains unknown, but it appears to be a reach given the existing divorce rate.

Overall of the three elements that heavily influence marriage rates, both psychological attitudes towards sex and love and most cultural elements appear too difficult to change. The casual attitudes towards sex are too ubiquitous whereas reverting the cultural gains made by women would be immoral and eliminating the acceptance of co-habitation appears irrational as well as incredibly improbable. Therefore, the chief element that remains available for significant positive action to increase marriage rates is economic influence as well as some more minor cultural elements.

One of the bolder and potentially more effective ways to produce a better economic environment for marriage would be to neutralize the negative influence of income inequality and volatility by passing a guaranteed basic income (GBI). A GBI would make marriage more attractive by reducing economic volatility: a constant stream of income ensures that basic needs can be met even in the face of hard economic times. In addition, a GBI could lessen the negative financial impact of a divorce if the marriage does not work out, reducing the stress associated with uncertain finances in the face of a potential divorce.
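The volatility-dampening effect is easy to see numerically. The sketch below (Python) simulates two years of volatile monthly earnings and adds a hypothetical $1,000/month GBI; the dollar amounts and the earnings distribution are assumptions chosen purely for illustration. The flat transfer raises the worst-month income and lowers relative volatility (standard deviation divided by mean).

# Sketch of how a guaranteed basic income (GBI) dampens income volatility.
# The $1,000/month GBI and the simulated earnings are hypothetical numbers
# chosen purely for illustration.
import random
import statistics

random.seed(0)

# Simulate 24 months of volatile labor income (e.g., gig or hourly work).
labor_income = [max(0, random.gauss(2500, 1200)) for _ in range(24)]

GBI = 1000  # hypothetical flat monthly transfer
total_income = [m + GBI for m in labor_income]

def relative_volatility(series):
    """Coefficient of variation: standard deviation divided by mean."""
    return statistics.pstdev(series) / statistics.mean(series)

print(f"Worst month without GBI: ${min(labor_income):,.0f}")
print(f"Worst month with GBI:    ${min(total_income):,.0f}")
print(f"Relative volatility without GBI: {relative_volatility(labor_income):.2f}")
print(f"Relative volatility with GBI:    {relative_volatility(total_income):.2f}")

Because a flat transfer shifts every month up by the same amount, the absolute swings stay the same, but the income floor rises and the swings shrink relative to total income, which is exactly the kind of certainty a high-commitment decision like marriage benefits from.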

While a GBI would be a sweeping strategy for improving marriage rates, it is understandable that a strategy of such magnitude would face stiff opposition from powerful interests. Another strategy to improve marriage prospects would be to change how a married couple files jointly, adding an additional filing option that would reduce the negative aspects of joint filing in a two-income marriage, especially for middle-class filers, yet allow traditional single-income marriages to maintain their tax advantage.
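To illustrate the asymmetry such a filing option would target, the sketch below uses a made-up two-bracket schedule (10% up to $50,000, 30% above, with joint brackets assumed to be 1.6 times the single brackets); none of these rates or thresholds reflect actual tax law. Under these assumptions a two-earner couple pays more filing jointly than they would as two singles, while a single-earner couple pays less.

# Illustration of the "marriage penalty" in joint filing, using a made-up
# two-bracket progressive schedule. Brackets and rates are hypothetical,
# not actual tax law.

def tax(income, bracket_top=50_000, low_rate=0.10, high_rate=0.30, scale=1.0):
    """Progressive tax; `scale` widens the brackets (e.g., >1 for joint filing)."""
    top = bracket_top * scale
    if income <= top:
        return income * low_rate
    return top * low_rate + (income - top) * high_rate

def marriage_penalty(income_a, income_b, joint_scale):
    """Joint tax minus the sum of two single filings (positive = penalty)."""
    as_singles = tax(income_a) + tax(income_b)
    as_joint = tax(income_a + income_b, scale=joint_scale)
    return as_joint - as_singles

# Two-earner couple ($60k each) versus single-earner couple ($120k and $0).
# joint_scale=1.6 mimics joint brackets that are less than double the
# single brackets.
print(marriage_penalty(60_000, 60_000, joint_scale=1.6))   #  4000.0 (penalty)
print(marriage_penalty(120_000, 0, joint_scale=1.6))       # -6000.0 (bonus)

An additional filing option aimed at middle-class two-earner couples would, in effect, let the first couple compute their tax closer to the "two singles" figure while leaving the second couple's existing advantage intact.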

From a psychological standpoint, pro-marriage individuals or groups should focus on the positive elements of marriage over co-habitation, like improved health outcomes and increased relationship stability. Effort also needs to be applied to counter the general negative malaise that has allowed divorce to govern the conversation about marriage outcomes by challenging the assumption that divorce is the more probable way a marriage will end. Finally, one could simplify divorce proceedings in general, which would reduce the resources that individuals have to devote to terminating marriages that are not salvageable, thereby reducing the level of fear associated with divorce. Divorce simplification could draw criticism from parties who worry that too much simplification would increase divorces, serving not as a genuine means to improve livelihoods but as a crutch or escape valve when a marriage gets a little rocky.

Overall, based on how the three governing factors that influence marriage rates have changed, it is difficult to assume that the recent change in marriage rates is simply a "course-correction". The problem is exacerbated by the failure to address the more pressing economic factors that create obstacles to marriage. In addition, it is important for society to focus on the positive elements associated with marriage, like the health and stability benefits, rather than negative ones like the probability of divorce. If these economic and psychological factors are not addressed then it stands to reason that marriage rates will continue to drop among non-wealthy individuals, eventually characterizing marriage as an event that occurs more for the wealthy than the non-wealthy.


--
Citations

1. Greenstone, M, and Looney, A. “The marriage gap: the impact of economic and technological change on marriage rates.” Brookings Institution. 2012.

2. Robles, T, et al. “Marital quality and health: A meta-analytic review.” Psychological Bulletin. 2014. 140(1):140.

3. Thoits, P. “Mechanisms linking social ties and support to physical and mental health.” Journal of Health and Social Behavior. 2011. 52(2):145-161.

4. Musick, K, and Bumpass, L. “Re-Examining the Case for Marriage: Union Formation and Changes in Well-Being.” Journal of Marriage and Family. 2011.

5. Schaller, J. “For richer, if not for poorer? Marriage and divorce over the business cycle.” J. Popul. Econ. 2013. 26:1007-1033.

6. Martin, S, Astone, N-M, Peters, E. “Fewer marriages, more divergence: marriage projections for millennials to age 40.” The Urban Institute. 2014.

7. Santos, C, and Weiss, D. “Why not settle down already? A quantitative analysis of the delay in marriage.” 2012.