Thursday, July 17, 2014

Youth Suffrage: Is it time?

One of the chief theoretical advantages of a democracy is the principle of “one person, one vote”, a characteristic that limits the total power that can be accrued by a select oligarchy. However, there is always the lingering question of what criteria should apply when determining who is eligible to vote. Throughout U.S. history these criteria, including age, have become less and less restrictive. Given the important issues facing present society, the question of whether the voting age should be adjusted again becomes even more relevant. Note that when the term “voting age” is used it refers to the minimum age at which neither a state nor the federal government can deny an individual the ability to vote.

The most common argument for lowering the voting age harks back to a perceived central theme of the Revolutionary War: “no taxation without representation.” A number of 15-17 year olds have jobs that require them to pay income taxes as well as smaller taxes like payroll taxes, yet they are not given the ability to participate directly in the political process. Therefore, some argue that it is historically prudent that these individuals be given the capacity to vote. Unfortunately this mindset is not as clear-cut as its proponents would like to believe.

First, with the current state of modern technology, individuals as young as ten can create marketable content on the Internet or fashion custom jewelry pieces. Under the above premise, should such ten year olds be given the right to vote as well? Second, the idea of “no taxation without representation” is not as noble as one might think. It was largely peddled as a “call to arms” so that the colonial public would accept the Revolutionary War, a war that on its face provided much more benefit to the merchant and upper classes of the colonies than to common land owners, especially since most of the dying would be done by the colonial public. Third, proponents use “no taxation without representation” literally, and a literal application would imply that only individuals with jobs paying a large enough wage should be allowed to vote, which is certainly not in the spirit of democracy. Fourth, representation itself is flawed when it lacks a reasonable probability of producing an informed mindset; such representation produces a detriment to society.

An example of this important fourth reason is as follows: suppose Apartment Complex A is holding a vote among its 50 residents on whether to establish a new, more restrictive noise ordinance. Ten residents oppose the ordinance because they commonly have parties involving loud music and do not want these “rituals” interrupted. Twenty residents favor the ordinance because they are frequently bothered by the noise that emanates from these parties. The final twenty residents have no strong opinions on the vote and are not aware of the grievances of the pro-ordinance residents because they live far enough away from the party-throwing residents that they never experience the loud music. Under these conditions these final twenty residents should abstain because of their lack of interest and information; however, if they do vote they will more than likely vote against the ordinance, either for simplicity or to avoid future restrictions on themselves. Thus these twenty “neutral and uninformed” voters can swing the results of the vote without understanding how its outcome affects all residents of Apartment Complex A.
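The arithmetic of the apartment scenario can be sketched in a few lines of Python; the group sizes come from the example above, and the assumption that the neutral voters default to a “no” vote is the scenario's, not a general claim:

```python
# Tally for the Apartment Complex A scenario: 10 residents oppose
# the ordinance, 20 favor it, and 20 uninformed "neutral" voters
# are assumed to default to a "no" vote if they participate.
opposed = 10
in_favor = 20
neutral = 20

# If only the interested, informed residents vote, the ordinance passes.
passes_without_neutrals = in_favor > opposed

# If the neutral voters participate and default to "no", it fails.
passes_with_neutrals = in_favor > (opposed + neutral)

print(passes_without_neutrals, passes_with_neutrals)  # True False
```

The same 50 residents produce opposite outcomes depending solely on whether the uninformed bloc participates, which is the point of the example.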

Some proponents of youth suffrage would argue that age does not define maturity or level of information, which is true. However, age does create greater opportunity for experiences that increase the probability of intelligence and maturity, and probability is what matters in the case of the blind voter. Clearly not all adults are sufficiently informed about the significance of their votes, but there is a higher expected probability that these individuals have the ability to inform themselves. For example, based solely on life experience, an 18-year-old has a higher probability of having the tools to understand the significance of his/her vote than a 16-year-old.

Proponents typically put forth various other, less meaningful rationales (16-year-olds can drive, drop out of high school, and be charged as adults for certain crimes; a lower voting age would encourage greater interest in politics; etc.), but none of these reasons can sufficiently challenge the uninformed voter argument. So does that mean the idea of youth suffrage is logically dead? Not really, for proponents have missed the very idea behind the initial change in the age of suffrage (from 21 to 18), a reasoning that now applies to even younger individuals.

Sadly, in modern society very few individuals focus on the welfare of the future when crafting public policy and legislation, choosing instead to focus on the present. Unfortunately this focus results in numerous strategies that damage the future in order to provide greater benefit for the present, and policies that damage the future inherently damage the future health (at all levels) of the nation’s youth. Yet youth are not given the appropriate mechanisms to defend themselves against this threat because they cannot participate in the political process. The idea that a significant portion of voting influence flows through parents’ intent to protect and grow the future for their children, i.e. to make a better life, does not appear valid given the lack of policy addressing income inequality and global warming, the two most important issues for the future; neither has been tackled with the urgency that its solution requires.

This betrayal of the future can be demonstrated through the following example. Suppose Person A is given the opportunity to select one of two options:

Option 1 – Receive 5 dollars now and receive another 5 dollars 2 years from now;

Option 2 – Receive 15 dollars now and lose 5 dollars 2 years from now;

While the net outcome for both choices is +10 dollars, the future is supported by only one of them. Some would argue that, due to economies of scale, the second option is actually better for the future because the present has more ability to solve existing problems with 15 dollars than with 5 dollars. This statement is true, but only in theory. Unfortunately the present is not using the sacrifices it demands of the future to create a better future.
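The two-option arithmetic can be made explicit with a small sketch. Each option is written as a list of (period, amount) cash flows, a representation chosen here for illustration; period 0 is “now” and period 1 is “2 years from now”:

```python
# Cash flows for the two options in the example above.
option_1 = [(0, 5), (1, 5)]     # 5 dollars now, 5 dollars later
option_2 = [(0, 15), (1, -5)]   # 15 dollars now, lose 5 dollars later

def nominal_total(flows):
    """Sum of all cash flows, ignoring timing."""
    return sum(amount for _, amount in flows)

def future_balance(flows):
    """Amount received (or lost) in the future period."""
    return sum(amount for period, amount in flows if period == 1)

# Both options net +10 overall...
print(nominal_total(option_1), nominal_total(option_2))  # 10 10
# ...but only Option 1 leaves the future with a positive balance.
print(future_balance(option_1), future_balance(option_2))  # 5 -5
```

The identical nominal totals are exactly why the choice looks neutral on paper while shifting the burden between present and future.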

For example, global warming has been recognized as a serious problem for over two decades, yet industrial society, outside of a global recession, has significantly increased the amount of greenhouse gases emitted into the atmosphere over those two decades, and the money made from that pollution is not being redirected into new energy infrastructure or other strategies to reduce the damage global warming will do to the future.

Overall this lack of power is the real reason to ask whether the age of suffrage should be extended to individuals younger than 18. Originally 21 was chosen as the voting age by the Founding Fathers because of the importance of that age in English culture at the time (21 was the age of legal drinking, voting and knighthood). This limit was significantly challenged by the logic associated with the Vietnam War, where individuals between the ages of 18 and 20 could be drafted, sent to war and killed without the ability to influence the political process (a lack of power). To rectify this hypocrisy the voting age was lowered to 18.

However, while the lack-of-power issue is a more viable rationale for youth suffrage than inaccurate and simplistic arguments like “no taxation without representation”, the problem of the uninformed voter still looms. This concern can be addressed by tying a simple test to voter registration for 14 to 17 year olds. The test would focus on simple, basic, yet important concepts to demonstrate that these individuals are capable of effectively participating in the democratic process. A vast majority of 18 year olds have taken some form of civics and/or government class as a high school graduation requirement, thus the test would be designed to encapsulate the basics of the knowledge acquired from that study.

For example, this type of voting test could be conducted at a federal or state government building and would consist of three sections: 1) identify the three branches of the federal government and briefly describe their roles; 2) identify the holders of the major positions of governmental power in the applicant’s state (i.e. governor, the two federal senators, which political party controls the legislature, mayor of the city of residence, etc.); 3) identify how one would acquire information about a particular political topic to improve his/her understanding of the issue.

Section 1 is required to ensure that the applicant understands the basic structure and function of government otherwise the importance of voting is lost. Section 2 is required to ensure that the applicant has a basic understanding of the political environment of his/her own state and understands the hierarchical structure of local and state government. Section 3 is required to ensure that these younger individuals have the ability to rid themselves of ignorance in order to limit the probability that they are simply voting as extensions of their parents or friends. Basically this test should ensure that applicants have an understanding of process, an awareness of reality, and an ability to acquire relevant information for the future.

While the problem of rationale has been addressed above, a secondary problem of application still exists. This problem can be seen by reviewing the history behind the 26th Amendment. On June 22, 1970 an extension of the Voting Rights Act of 1965 was passed that lowered the legal voting age from 21 to 18 in all federal, state and local elections. Soon after, Oregon and Texas challenged this age change, leading to the case Oregon v. Mitchell (1970). The Supreme Court declared that it was unconstitutional to force states to register 18-year-olds for state and local elections, while retaining the age change for federal elections. Without state involvement it was deemed too expensive to create and maintain separate voter rolls (one federal and one state), thus the 26th Amendment was crafted to eliminate this problem with the previous legislative action.

Overall it is unclear whether the same constitutional problem would exist for extending youth suffrage, but it stands to reason that it would. Also, while the 26th Amendment was passed very quickly, it is reasonable to assume that certain states would not be as forthcoming with an amendment that lowers the minimum voting age to 14 or even 16. The reasoning behind opposition to youth suffrage may not be valid given the safeguard of the required voting examination substituting for the experience of age, but that does not eliminate the possibility that states will still oppose such a change. Therefore, while the reasoning for lowering the voting age may be sufficient (allow teenagers to protect their future because enough adults certainly are not taking the proper steps to do so), the drive among states to allow such a change may not be. Unfortunately, as it currently stands, the application of youth suffrage may simply suffer from a lack of motivational drive for its establishment rather than a lack of logic.

Friday, June 27, 2014

ER Crowding – Current and Future Issues

Crowding in emergency rooms (ERs) has been an increasing problem in the developed world for the last few decades, especially in the United States. The political and medical arenas are not appropriately addressing this problem: from 1995 to 2009 annual ER visits in the U.S. increased by 41% (96.5 million to 136.1 million), but the number of hospital ERs decreased by 27% (2,446 to 1,779).1-3 Among U.S. ERs in 2010 a mere 31% achieved their triage targets and only 48% were able to admit patients within 6 hours of registration.4 One of the immediate problems with this overcrowding is that it has become a normal occurrence. How could ERs effectively respond to outbreaks of highly contagious pathogens, industrial accidents, terrorist attacks, etc. if currently over half of the non-critical patients have to wait 6+ hours before receiving treatment? Apart from disasters, ER crowding increases patient mortality, reduces quality of overall care, impairs transport access and increases financial losses and stresses. Also note that ER crowding is not a problem unique to market-based systems, but also affects countries with universal systems of medicine like Canada, Australia and New Zealand, thus the passage of the Affordable Care Act will not systematically result in a reduction in crowding.

ER operations have numerous metrics to measure effectiveness, but the most commonly used are length of stay (LOS), percentage of patients who leave without being seen (LWBS), wait time (WT), and ambulance diversion (AD).5,6 However, while these metrics are commonly used, they should not be utilized in a vacuum: some ERs do not even have the ability to divert ambulances, and patient wait metrics like LOS and WT are influenced by case complexity. Another concern is that most of these metrics are rarely made public, nor are there set quality standards, thus it is difficult to have common, up-to-date information to determine whether a given community is receiving adequate medical care in both absolute and relative terms.

Opposite the fast-paced ambulatory delivery of a critical patient who is immediately admitted, the general operation of an emergency room, from the perspective of an individual who enters without an immediately apparent life-threatening condition, is as follows:

First, the attending nurse (rarely a physician) conducts a basic triage. Triage itself typically adheres to the Emergency Severity Index (ESI), a 3- or 5-tier categorization that combines urgency with an estimate of the resources required to treat the condition.7-9 In the original, now less common 3-tier system the three groups are: immediate treatment required (emergent); urgent, but not currently life or permanent-health threatening (urgent); and minor condition that can be addressed in time (non-urgent). Obviously these categorizations are required to ensure the best and most appropriate care for all potential patients.

In the 5-tier system an additional two groups are added, resuscitation and less urgent, making the whole tier structure: 1) resuscitation; 2) emergent; 3) urgent; 4) less urgent; 5) non-urgent.8,9 Realistically the addition of these two new tiers seems rather unnecessary, because resuscitation is an obvious candidate for immediate treatment that does not require its own category and the difference between urgent, less urgent and non-urgent is marginal. However, the system seems to work and its seemingly unnecessary redundancy does not appear to create significant complications.
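The 5-tier structure described above can be represented as a simple lookup; the dictionary and helper functions below are an illustrative sketch of the numbered list in the text, not an implementation of any official ESI tooling:

```python
# Illustrative mapping of the 5-tier triage levels described above.
# Lower numbers indicate higher acuity.
ESI_LEVELS = {
    1: "resuscitation",
    2: "emergent",
    3: "urgent",
    4: "less urgent",
    5: "non-urgent",
}

def tier_label(level: int) -> str:
    """Return the triage label for a level from 1 (most acute) to 5."""
    if level not in ESI_LEVELS:
        raise ValueError(f"triage level must be 1-5, got {level}")
    return ESI_LEVELS[level]

def treatment_order(levels):
    """Order patients by acuity: a level-2 patient is seen before a
    level-4 patient even if the level-4 patient arrived first."""
    return sorted(levels)

print(tier_label(1), treatment_order([4, 2, 5]))  # resuscitation [2, 4, 5]
```

The sort captures the key property discussed in the next paragraph: urgency, not arrival order, determines who is seen first.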

Clearly individuals with urgent conditions should be seen by physicians before individuals with minor conditions even if the individual with a minor condition arrived first. Triage typically involves acquiring major vital signs (temperature, pulse, respiratory rate, blood pressure, etc.) and a short interview to assess what the patient is feeling and the major details regarding medical and medication history. Depending on the type of classification the new patient will be placed in a certain position on a waiting list.

The triage system typically functions under a scoring system to evaluate the condition of the individual. In addition to physical scoring, physiological scoring is also used to assess urgency for treatment. Utilized scoring systems include APACHE II (also the most common ICU system for measuring prospective mortality), SAPS II, MODS, PRM and GCS (becoming more popular due to its simplicity, sensitivity and specificity).10-14 Scoring systems have also demonstrated that ER care is significantly more important than follow-up care in the ICU, showing significant drops in predicted mortality for proper ER care.15 In addition to the older tests, a newer test, the Mortality in Emergency Department Sepsis (MEDS) score, was recently developed to predict the probability that an ER patient contracts an infection that could increase complications and/or mortality.16

While tests like APACHE II, SAPS II and MODS are important analysis elements, the development of new ER-specific scoring systems like MEDS matters because the older systems were designed to measure illness severity and mortality risk in a less time-dependent manner within the confines of an ICU, whereas the ER environment is fast-paced and more time dependent, creating a lead-time bias.10,15 Factors considered important for ER-based scoring systems include: 1) variables that reflect pre-hospital illness severity; 2) illnesses that can be contracted from the ER; 3) the ability to be incorporated into a multi-center database with sufficient size and power to validate the model’s accuracy; 4) the analytical ability to relate predictive variables to actual patient outcomes for calibration and reliability measurement; 5) secondary predictive effects beyond simple mortality to measure LOS, WT and return visits; 6) the use of time-indexed variables to reflect treatment response during care.10,17,18

While a nurse typically governs triage, some studies have suggested that when a physician is in charge of triage instead of a nurse, various performance metrics like LOS, LWBS and AD all decrease.19-21 Of course the trade-off for this potential improvement is an increase in cost due to hiring another physician; otherwise, in-room care for patients who have moved from the ER waiting room to an exam or operating room will suffer from the lack of a physician, or from one being stretched between exams and triage.

Second, individuals who do not require immediate treatment enter the registration process, where the patient officially registers by filling out all of the relevant paperwork familiar to any first-time patient in a general practitioner’s office. This step consolidates all relevant information, including a more detailed medical history and payment information (insurance, etc.). These details are important for creating a single medical record that can be referenced during the patient’s stay in the ER. It is important to note that a number of people incorrectly believe that an uninsured individual receives free medical care when going to an ER. This is not correct. The Emergency Medical Treatment and Active Labor Act of 1986 only obligates ERs to care for individuals regardless of ability to pay. Uninsured individuals who receive care from an ER still receive a bill for the services rendered. If they are unable to pay the bill, their credit score is negatively affected, and if the hospital/physician so desires they can be sued for the amount. This billing is why uninsured individuals in the past did not go to the ER for every little thing that might be wrong with them.

Afterwards the ER visit proceeds much like a standard physician visit: when it is an individual’s turn, the individual enters an exam room where a nurse reassesses blood pressure and temperature and, if necessary, draws blood and/or collects a urine sample for lab testing. Next a physician visits the patient and, after a brief discussion, makes a differential diagnosis. For conditions that are not immediately critical, after the diagnosis the patient is prescribed a treatment and sent home.

One of the major reasons critics cite for continued difficulty in transforming ERs to better manage patient flow is their tradition/culture. As described above, the standard operation of an ER is one person, one task, with little intra-staff interaction, a methodology that in the era of computers and multi-tasking is viewed as inefficient and costly. A significant amount of this inefficiency comes from having different doctors and nurses repeat information gathering due to lost or “mistranslated” previous attempts. The problem is augmented by poor coordination among providers, who are typically highly fragmented across multiple emergency medical service agencies with different standards and practices, to the point where agencies in different but adjacent jurisdictions have difficulty communicating. This coordination is difficult due to turf wars and because transport options are limited.

To maximize the effectiveness of reform interventions, dramatic improvement in intra- and inter-hospital coordination will be required, including standardization of procedures and practice. Incorporating electronic health records would help manage this concern, but applying electronic health records in an ER is significantly more difficult than in a standard physician’s office due to the required pacing and the lack of consistency in repeat patient visits. Unfortunately, in addition to the incorporation of electronic health records, the expanded coordination discussed above has always been the go-to solution and general dream of individuals trying to address crowding problems, yet this coordination has been very slow to develop despite the desire to produce it.

One strategy to increase coordination is to increase multi-tasking. However, while some cite limited studies about the improved efficiency born from multi-tasking, there is concern about expanding this strategy, for other studies have demonstrated reduced cognitive efficiency in individuals who multi-task versus those who focus on a single task before moving on to a secondary task.22 Reduced cognitive efficiency would increase the probability of medical errors and of detrimental medical outcomes, including death. In addition, the demographics of ER patients and the seriousness and complexity of their conditions are changing, with more older patients with chronic conditions and multiple co-morbidities, and younger patients having fewer non-urgent and more semi-urgent and urgent visits.23 Increasing the complexity of condition and diagnosis while decreasing attentiveness and focus will further increase the probability of negative outcomes.

One of the past arguments rationalizing ER crowding was that too many uninsured individuals used the ER as a primary care physician because the lack of insurance dramatically reduced their ability to schedule appointments with general practitioners. Individuals who visit the ER constantly are referred to as “frequent flyers” and typically make up 8-14% of ER patients; they were thought to include large numbers of uninsured individuals.24 Therefore, one solution was to increase the probability that these individuals get insurance so that instead of going to the ER they would go to a general practitioner for general medical care. Unfortunately this solution, while sound in theory, has not followed theory in reality. Both the expansion of insurance availability in Massachusetts in 2006 and in various other states through the Affordable Care Act have resulted in increases in ER patients with government-based insurance (Medicare, Medicaid).25 So why is reality apparently trending contrary to theory?

Two principal reasons jump to mind. First, most common analyses overestimated the number of individuals without insurance who were using the ER for basic medical care. While frequent flyers make up anywhere from 8-14% of the total patients during the day, most of these individuals have insurance. Recall that the ER is only bound to treat individuals regardless of ability to pay, ensuring that they will receive treatment. However, that treatment is not free. Therefore, in the past, individuals without insurance who received medical care from an ER would still have to pay for those services. It stands to reason that these individuals would not attend the ER constantly, because if they could not afford insurance then they would not have consistent levels of disposable income to cover numerous ER visits for every nick and scrape.

This rationale hints at the second reason why ER patients have increased. The primary assumption was that uninsured individuals would stop attending the ER once they received insurance. However, what appears to be happening is that previously uninsured individuals are actually attending the ER more often. The reason behind this behavior probably derives from the fact that government-sponsored insurance has significantly increased the number of individuals with insurance, while the number of available general practitioners able to serve these newly insured patients as well as past/current insured patients has increased at a much slower rate. Therefore, there are significant shortages between insured patients and doctors available to see them by appointment. Against the inconsistency of acquiring an appointment with a general practitioner, the consistency of an ER is appealing. The only real way to resolve this behavior is to train and certify more general practitioners, something that will not happen in the immediate future.

Interestingly enough, the premise that past ER crowding was due to uninsured individuals using the ER for basic medical care is not supported by research. Research suggests that while this input factor was initially reported as meaningful,26,27 that initial interpretation was probably incorrect. ER crowding is more influenced by sickly and chronic patients who are admitted to the hospital than by individuals who have minor injuries and are sent home after routine care/check-ups.28-32 Not surprisingly, hospital occupancy (i.e. the number of available beds versus the number of patients), which leads to boarding, is the element most strongly correlated with length of stay in the ER and overall wait times.31,32 Other smaller factors leading to crowding are inappropriate ambulance diversion and direction33 and recently discharged inpatients seeking additional care under various motivations.34 However, as mentioned above, boarding due to a lack of beds is the chief element responsible for ER crowding.

The most important consideration when identifying possible solutions to ER crowding is to create a standardized evaluation system to determine which solutions are effective, which are not, and which are mediated by unique environmental conditions (i.e. effective for one particular hospital but not for another). Developing this evaluation system would also make it much easier to assign accountability and to measure overall and sector-specific performance in order to create effective strategies for correcting any problems. In addressing “quality,” the Institute of Medicine (IOM) defined quality as “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge” and described six dimensions of quality care: care that is safe, effective, patient centered, timely, efficient, and equitable.35

Not surprisingly, various individuals have suggested that to measure the true value of the system, an ER must be evaluated on its application of evidence-based medicine. While this solution should be effective, it is sometimes difficult to deliver the necessary information to ER doctors, who typically have little downtime and do not want to spend it reading the latest meta-study. Ideally, extensive evidence-based practice is one of the dreams of incorporating technology into hospitals, to the point where a physician could simply type a condition into a computer and the most effective treatments (as defined by existing evidence), with their corresponding caveats, would appear. Unfortunately this reality has not arrived, but a less efficient substitute strategy involves conducting frequent physician meetings for brief reviews of the newest treatment strategies.

Some have suggested that patients define whether quality metrics have been met through evaluations. However, patient evaluation is troublesome because patients may regard elements or instances of discomfort through their own personal lens without understanding or acknowledging the bigger picture. For example, a patient may want a glass of water, but due to nurse/physician preoccupation with other, more pressing tasks the individual waits a long time before getting the water and possibly develops a slight case of dehydration while waiting. For the patient such an event could easily be worth a quality demerit, but from the perspective of the hospital it is irrelevant. Similarly, patients are not aware of a significant amount of “behind the scenes” action relative to their treatment, thus they have incomplete information regarding overall treatment and may mischaracterize certain outcomes as poor or negative. This is not to say that patients should not have the ability to evaluate their care, but it must be understood that there is a high probability that those evaluations cannot be viewed as accurate inside the vacuum of the patient’s own opinion.

Another idea would be to create a small group of government-based auditors who would periodically visit ERs and, after observation and various informal interviews, evaluate ER performance and quality against a series of standardized metrics. Under this system the bias of patients is neutralized by an individual who understands the bigger picture, the bias of the ER authority is eliminated by a neutral, un-invested individual, and the time requirements of mandatory employee-based evaluations are dramatically reduced. The one major drawback to this method would be producing the additional money to fund these government-sponsored auditors.

As mentioned above, creating an effective evaluation system will increase the ability to produce quality solutions. Currently one of the most obvious solutions to ER crowding is to reduce boarding. Boarding is the official term for when a patient who cannot be moved into an inpatient unit due to a lack of beds remains in the general ER area and receives periodic treatment there. During normal operating hours boarding represents anywhere from 20-40% of the total ER patient population.36 Boarding levels are also significantly influenced by financial decisions made in an effort to maximize hospital revenues. Not surprisingly, average revenue per patient is higher for non-ER admissions than for ER admissions,37 thus hospitals favor giving beds and rooms to those higher-value patients, leaving ER patients waiting for a bed. The easiest method to reduce boarding is to increase the number of beds available in a hospital. However, this method costs significant amounts of money, not only for the beds but also for the hospital expansion to house them. Hospitals have already attempted to increase bed numbers by placing more beds in single rooms, but this strategy can reduce patient welfare, making it counterproductive.

Some argue that how the bed is utilized also needs to be considered. There are two major types of beds: observation and inpatient. Observation beds are less costly to construct and staff due to building code requirements and upkeep relative to the patients who utilize them.38 In addition, in Certificate of Need states, constructing additional observation beds does not require the approval of a state agency, unlike constructing additional inpatient beds.38 However, when constructing beds in general it must be understood that there are diminishing returns based on changes in patient inflow and medical requirements. Roemer’s Law is frequently cited when considering bed expansions: if one expands bed capacity, one is expected to need it and use it. In a context similar to the psychology behind the Jevons paradox, if beds are constantly used then the perception of needing more beds typically results. Basically, there appears to be a positive feedback between bed capacity and the number of beds used, which may mean that increased capacity increases demand rather than addressing it.39 Thus the characteristics behind bed addition must be carefully analyzed before it occurs.

While there is no standard metric for when bed capacity should be increased, some research has suggested that a consistent occupancy of 85%, measured at midnight, is the minimum level required before beds should be added.40 Note that the average “midnight census” typically captures the minimum level of occupancy in a given day. The principal reason for this characteristic is the “23-hour patient”. These patients are admitted in the morning and discharged in the late evening as a means of evaluating them while avoiding unnecessary hospitalization. While estimating the difference between the midnight census and actual occupancy is not universally deterministic, most estimates place actual occupancy 5-15 absolute percentage points above the midnight census value.40 However, it must be noted that the “23-hour patient” was a popular strategy in managed care; with the ebb and flow in the popularity of managed care it is difficult to estimate how significant this strategy will be in the future.
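As a rough illustration, the gap between a midnight census reading and likely daytime occupancy can be sketched using only the 5-15 absolute percentage-point adjustment cited above (the function name and the cap at 100% are illustrative assumptions, not part of any cited methodology):

```python
def estimated_daytime_occupancy(midnight_census_pct):
    """Return the (low, high) estimated daytime occupancy range, in percent,
    by adding the cited 5-15 absolute percentage points to the midnight census
    and capping at full occupancy."""
    low = min(midnight_census_pct + 5, 100)
    high = min(midnight_census_pct + 15, 100)
    return low, high

# A hospital reading 85% at midnight may actually run 90-100% during the day.
print(estimated_daytime_occupancy(85))  # (90, 100)
```

This makes plain why an 85% midnight census is treated as a warning threshold: by daytime the same hospital may effectively be at or near capacity.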

85% occupancy is the target most cited by professionals and in research, but this figure is typically applied universally without considering the size of the hospital and the number of people who seek medical services there. Due to a lack of economies of scale and differing usage flows, smaller hospitals should have smaller target levels because their smaller bed counts create a greater sense of urgency when facing above-average patient visitation. For example, if hospital A has 100 beds, an 85% occupancy utilizes 85 beds, leaving 15 free; hospital B, however, may only have 35 beds, where an 85% occupancy utilizes 30 beds, leaving only 5 free and placing it in greater danger of exceeding capacity on an above-average admittance day.
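The headroom difference between hospitals A and B can be made concrete with a short sketch (the 85% target and the bed counts are the hypothetical figures above):

```python
import math

def free_beds(total_beds, occupancy=0.85):
    """Beds left free at a given occupancy level; occupied beds are rounded
    up, since a partially occupied bed is still unavailable."""
    occupied = math.ceil(total_beds * occupancy)
    return total_beds - occupied

print(free_beds(100))  # hospital A: 15 free beds
print(free_beds(35))   # hospital B: 5 free beds
```

The same percentage target leaves hospital A three times the absolute buffer of hospital B, which is the core of the argument for size-adjusted occupancy targets.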

There are also other elements that must be considered, including the difference between certified beds and staffed beds. Certified beds are those approved by authorities for use on a permanent basis, while staffed beds are those designated only for inpatient or day-case care and deemed to have sufficient staff to support their use. One commonly suggested improvement to manage bed use is to establish a management program run by a “bed team” that would oversee discharges, facilitate rapid turnaround of newly vacated beds, initiate ambulance diversion, and assign waiting patients to inpatient beds.35 Unfortunately for most hospitals increasing the number of beds is not a viable option without a significant increase in funding, a result that is not forthcoming from state or Federal legislatures.

Another popular method that has been explored to improve ER crowding is the “fast track”. Broadly stated, a “fast track” is a system designed to process lower-acuity patients more quickly in order to increase bed turnover and reduce boarding.41 Individuals with injuries like superficial wounds, minor allergic reactions, small bone fractures and minor burns are typical fast track candidates. Interestingly enough, fast tracking patients with minor injuries is not new and has been utilized by a number of ERs since the late 1980s.41 Due to this significant penetration, fast tracking has been studied more than any other ER reform strategy and has demonstrated reductions in LOS and WT,42-44 yet almost all of this research has focused on LOS and WT and not on whether patient safety outcomes are improved. One of the concerns with evaluating the efficacy of fast track is that there is no standard evaluation protocol; instead many hospitals have their own rules and criteria. While fast track proponents sing its praises, the overall ability to expand fast tracking is limited because most studies estimate fast tracking only encompasses 10-30% of the total patients seen in an ER, and any gains from a fast track strategy only occur during peak hours.43,45,46

Unfortunately the benefits of fast track only emerge when patients are discharged, not streamed through hospital admission.46 Fast track benefits may also erode in the future because the approach largely depends on replacing technological diagnostic procedures (blood tests, x-rays, CT scans, etc.) with physical cues that can be evaluated by physicians. The need for diagnostic procedures will more than likely increase as the number of elderly patients with more extensive health histories continues to grow. This demographic change in the ER population will not eliminate the advantages of fast track, but it should reduce its rate of use, limiting its usefulness. This additional testing will add to the 60-70% of individuals who already require laboratory tests when visiting an ER.47

While some strategies have been introduced to reduce testing time, like pre-defined test panels for specific symptoms, faster laboratory transportation and early ordering,48 realistically testing takes time and little can be done about it. Some believe that the most useful strategy may be point-of-care testing (POCT), which involves moving laboratory analysis and tests into the ER. As expected, undertaking a POCT strategy reduces WT and LOS through a reduction in laboratory turn-around time.48 However, a POCT strategy typically involves either large capital expenditure for hospital expansion or giving up space in other areas of the ER, which may increase inefficiency and/or boarding for patients with more severe conditions due to a reduction in beds. The potential loss of some beds will be detrimental, but with the reasonable expectation of an eventual expansion of primary care from general practitioners and the increased need for laboratory services for elderly patients, preparation for and utilization of a POCT strategy currently seems beneficial overall.

Consideration of the increasing elderly population must be part of the scope of ER reform, for all signs point to this increase continuing and accelerating. It is projected that elderly patients will increase from approximately 15% to 25-35% of ER visits over the next 30 years.19 As previously mentioned, elderly individuals typically require more time and resources for their medical care, both on a logistical level (greater medical history) and a biological level (higher probability that something can go wrong). Unfortunately there is also a side concern with the elderly. Seniors typically have fewer travel options than younger individuals and may have difficulty attending routine physical examinations (from general practitioners or the ER) even if appointments can be made. This lack of travel options can increase the probability that elderly individuals put off medical care until it becomes critical, creating an emergency out of a manageable condition.

Another issue with the elderly is that nearly 25% of nursing home residents visit the ER at least once per year.49 Unfortunately a number of nursing homes tend not to promote good health, but instead attempt simply to keep their residents alive, so those suffering from deteriorating health continue to decline. This “strategy” produces ER patients who are typically in poorer health than elderly individuals who live on their own; about 67% of nursing home ER patients have cognitive impairment,50 which complicates medical history collection, and nursing home records are rarely helpful. In fact 10% of nursing home ER patients arrive without any written medical documentation and 90% have significant gaps in their histories.51-53 There is thus little coordination between ERs and nursing homes, largely because nursing homes do not appear to care enough to apply the effort. However, ERs do need to be more diligent in ensuring that elderly patients across the board receive more clearly written instructions regarding their outpatient care.

Addressing current and future crowding in the ER will first require the development of a standard definition of quality, along with measurable components that encompass that definition, because problems cannot be classified and solved when they cannot be measured. Independent government-sponsored auditors should periodically evaluate these quality metrics to ensure effective care and root out any problems quickly. ERs should develop strategies to better manage beds by understanding real average occupancy values, not those taken from overnight counts, to determine where there are excess beds and where/when bed demand is greatest. Finally, it would be useful to study strategies that will increase the ability to manage elderly patients, given the logical expectation that their share of the ER demographic will increase in the future. It stands to reason that areas with large elderly populations and quality ER service should have some effective strategies that can be applied to other ERs. Overall there are solutions that can be applied to the problem of ER crowding, but it is important that individuals ask the right questions and appreciate changing future trends rather than declare simplistic panaceas like the incorporation of electronic health records.

Citations –

1. Johnson, K, and Winkelman, C. “The effect of emergency department crowding on patient outcomes: a literature review.” Advanced Emergency Nursing Journal. 2011. 33(1):39–54.

2. Bullard, M, et al. “The role of a rapid assessment zone/pod on reducing overcrowding in emergency departments: a systematic review.” Emergency Medicine Journal. 2012. 29(5):372–378.

3. Bell, M, and Parisi, J. “ED slashes average wait time by more than an hour.” ED Management. 2009. 21(3):30-31.

4. Wiler, J, et al. “Optimizing emergency department front-end operations.” Annals of Emergency Medicine. 2010. 55(2):142-160.

5. Welch, S, et al. “Emergency Department Performance Measures and Benchmarking Summit.” Acad. Emerg. Med. 2006. 13(10):1074-1080.

6. Welch, S, et al. “Emergency Department Operational Metrics, Measures and Definitions: Results of the Second Performance Measures and Benchmarking Summit.” Ann. Emerg. Med. 2011. 58(1):33-40.

7. Fernandes, C, et al. “Five-Level Triage: A Report from the ACEP/ENA Five-Level Triage Task Force.” J. Emerg. Nurs. 2005. 31(1):39-50.

8. Chonde, S, et al. “Model comparison in Emergency Severity Index level prediction.” Expert Syst. Appl. 2013. 40:6901-6909.

9. Gilboy, N, et al. “Emergency Severity Index (ESI): A triage tool for emergency department care. Version 4.” November 2011. AHRQ publication #12-0014.

10. Hargrove, J, and Nguyen, B. “Bench-to-bedside review: outcome predictions for critically ill patients in the emergency department.” Critical Care. 2005. 9(4):376-383.

11. Knaus, W, et al. “APACHE II: a severity of disease classification system.” Crit Care Med. 1985. 13:818-829.

12. Le Gall, J, Lemeshow, S, and Saulnier, F. “A new Simplified Acute Physiology Score (SAPS II) based on a European/North American multi-center study.” JAMA. 1993. 270:2957-2963.

13. Marshall, J, et al. “Multiple organ dysfunction score: a reliable descriptor of a complex clinical outcome.” Crit Care Med. 1995. 23:1638-1652.

14. Gill, M, Reiley, D, and Green, S. “Interrater reliability of Glasgow Coma Scale scores in the emergency department.” Ann Emerg Med. 2004. 43:215-23.

15. Nguyen, H, et al. “Critical care in the emergency department: a physiologic assessment and outcome evaluation.” Acad Emerg Med. 2000. 7:1354-1361.

16. Shapiro, N, et al. “Mortality in Emergency Department Sepsis (MEDS) score: a prospectively derived and validated clinical prediction rule.” Crit Care Med. 2003. 31:670-675.

17. Knaus, W, et al. “A comparison of intensive care in the U.S.A. and France.” Lancet. 1982. 2:642-646.

18. Knaus, W, Wagner, D, and Lynn, J. “Short-term mortality predictions for critically ill hospitalized adults: science and ethics.” Science. 1991. 254:389-394.

19. Partovi, S, et al. “Faculty Triage Shortens Emergency Department Length of Stay.” Acad. Emerg. Med. 2001. 8(10):990-995.

20. Russ, S, et al. “Placing Physician Orders at Triage: The Effect on Length of Stay.” Ann. Emerg. Med. 2010. 56(1):27-33.

21. Han, J, et al. “The Effect of Physician Triage on Emergency Department Length of Stay.” J. Emerg. Med. 2010. 39(2):227-233.

22. Poolton, J, et al. “A comparison of evaluation, time pressure and multitasking as stressors of psychomotor surgical performance.” Surgery. 2011. doi:10.1016/j.surg.2010.12.005

23. Pitts, S, Niska, R, and Burt, C. “National Ambulatory Medical Care Survey: 2006 Emergency Department Summary.” Natl Health Stat Report. 2008. 6:1-38.

24. Huang, J, et al. “Factors associated with frequent use of emergency services in a medical center.” J. Formos. Med. Assoc. 2003. 102(4):345-353.

25. Moskop, J. “Emergency Department Crowding, Part 1—Concept, Causes, and Moral Consequences.” Annals of Emergency Medicine. 2009. 53(5):605-611.

26. Gallagher, E, and Lynn, S. “The etiology of medical gridlock: causes of emergency department overcrowding in New York City.” J Emerg Med. 1990. 8:785-790.

27. United States General Accounting Office (GAO). “Emergency departments: unevenly affected by growth and change in patient use.” Report to the Chairman, Subcommittee on Health for Families and the Uninsured, Committee on Finance, US Senate, January 1993.

28. Olshaker, J, and Rathlev, N. “Emergency department overcrowding and ambulance diversion: the impact and potential solutions of extended boarding of admitted patients in the emergency department.” J Emerg Med. 2006. 30:351-356.

29. Espinosa, G, et al. “Effects of external and internal factors on emergency department overcrowding [letter].” Ann Emerg Med. 2002. 39:693-695.

30. Schull, M, Kiss, A, and Szalai, J-P. “The effect of low-complexity patients on emergency department waiting times.” Ann Emerg Med. 2007. 49:257-264.

31. Forster, A, et al. “The effect of hospital occupancy on emergency department length of stay and patient disposition.” Acad Emerg Med. 2003. 10:127-133.

32. Rathlev, N, et al. “Time series analysis of variables associated with daily mean emergency department length of stay.” Ann Emerg Med. 2007. 49:265-272.

33. Richards, J, and Ferall, S. “Inappropriate Use of Emergency Medical Services Transport: Comparison of Provider and Patient Perspectives.” Acad. Emerg. Med. 1999. 6(1):14-20.

34. Baer, R, Pasternack, J, and Zwemmer Jr, F. “Recently Discharged Inpatients as a Source of Emergency Department Overcrowding.” Acad. Emerg. Med. 2001. 8(11):1091-1094.

35. Institute of Medicine. “The future of emergency care in the United States health system.” Ann Emerg Med. 2006. 48:115-20.

36. Schneider, S, et al. “Emergency Department Crowding: A Point in Time.” Ann. Emerg. Med. 2003. 42(2):167-172.

37. Pines, J, et al. “The Financial Consequences of Lost Demand and Reducing Boarding in Hospital Emergency Departments.” Ann. Emerg. Med. 2011. 58(4):331-340.

38. Lovejoy, W, and Desmond, J. “Little’s Law Flow Analysis of Observation Unit Impact and Sizing.” Acad. Emerg. Med. 2011. 18:183–189.

39. Roemer, M. “Bed supply and hospital utilization: a natural experiment.” Hospitals. 1961. 35:36–42.

40. Green, L. “Queueing Analysis in Healthcare, in Patient Flow: Reducing Delay in Healthcare Delivery.” 2006. Springer, New York.

41. Welch, S. “Patient Segmentation: Redesigning Flow.” Emerg. Med. News. 2009. 31(8).

42. Cochran, J, and Roche, K. “A multi-class queuing network analysis methodology for improving hospital emergency department performance.” Comput. Oper. Res. 2009. 36(5):1497-1512.

43. O'Brien, D, et al. “Impact of streaming ‘fast track’ emergency department patients.” AHR. 2009. 30(4):525-532.

44. Considine, J, et al. “Effect of emergency department fast track on emergency department length of stay: a case-control study.” Emerg. Med. J. 2008. 25:815-819.

45. Rogers, T, Ross, N, and Spooner, D. “Evaluation of a ‘See and Treat’ pilot study introduced to an emergency department.” Accid Emerg Nurs. 2004. 12:24-27.

46. Oredsson, S, et al. “A systematic review of triage-related interventions to improve patient flow in emergency departments.” Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine. 2011. 19:43-52.

47. Yoon, P, Steiner, I, and Reinhardt, G. “Analysis of factors influencing length of stay in the emergency department.” Can J Emerg Med. 2003. 5:155-61.

48. Schimke, I. “Quality and timeliness in medical laboratory testing.” Anal Bioanal Chem. 2009. 393:1499-504.

49. Bergman, H, and Clarfield, A. “Appropriateness of patient transfer from a nursing home to an acute-care hospital: a study of emergency room visits and hospital admissions.” J Am Geriatr Soc. 1991. 39:1164–1168.

50. Gillick, M, and Steel, K. “Referral of patients from long-term to acute-care facilities.” J Am Geriatr Soc. 1983. 31:74–78.

51. Jones, J, et al. “Patient transfer from nursing home to emergency department: outcomes and policy implications.” Acad Emerg Med. 1997. 4:908–915.

52. Wilber, S, et al. “Geriatric Emergency Medicine and the 2006 Institute of Medicine Reports from the Committee on the Future of Emergency Care in the U.S. Health System.” Acad. Emerg. Med. 2006. 13:1345–1351.

53. Davis, M, Toombs Smith, S, and Tyler, S. “Improving transition and communication between acute care and long-term care: a system for better continuity of care.” Ann Long-Term Care. 2005. 13(5):25–32.

Black Incarceration Rates: How Much Are They Driven By Racism?

It should be no surprise to anyone who has done their homework that the United States incarcerates more individuals per capita than any other country.1 It is also no surprise that black individuals make up the largest single demographic percentage of these individuals, significantly outpacing their per capita share of the population relative to other races and ethnicities.1 When discussing the nature of the criminal justice system, individuals frequently cite statistics to validate this racial/ethnic disparity. Typically there are two types of responses when people are exposed to these statistics, depending on personal perspective: 1) the criminal justice system is currently unfair to black individuals; 2) black people commit a disproportionate amount of the prosecuted crime. Interestingly enough, most people seem to think that these two rationales are mutually exclusive, because rarely does anyone cite both when discussing how blacks and the criminal justice system interact. The question is which of these two rationales is the chief governing factor behind the incarceration rate for blacks in the United States?

It would not be surprising if at this moment a number of the individuals who subscribe to the first school of thought are taking offense at the very possibility of legitimacy for the second rationale, which goes to show the emotional reality of this issue. The chief problem with individuals who lament the number of blacks in prison is that they avoid asking whether those individuals actually broke the law and are in jail for legitimate reasons. While there certainly are individuals who have been denied justice and are incarcerated on fraudulent grounds for crimes they did not commit, the simple fact is that a vast majority of individuals, regardless of race or ethnicity, are in jail because they were appropriately convicted of a crime.

Addressing the last sentence, realistically there are five explanations for the disparity between incarceration rates of blacks and those of other races/ethnicities:

1 - These individuals are actually committing crimes and are legitimately getting caught, supporting the above contention that blacks commit a disproportionate amount of the criminal activity in the United States.

2 - Blacks only commit a small amount of the total crime in the United States, but are less able to conceal their criminal activity, thus their demographic is disproportionately represented in the incarcerated population relative to the total number of crimes that are actually committed; this rationale supports neither of the above initial viewpoints.

3 - Bias actively leads the criminal justice system to pursue charges against crime-committing black individuals over crime-committing individuals of other races and ethnicities even when the available evidence is comparable in all scenarios, supporting the position that the criminal justice system is currently unfair to blacks.

4 - Blacks receive unjustified jail sentences that exceed the sentencing guidelines set forth for the associated crime, supporting the position that the criminal justice system is currently unfair to blacks.

5 - A disproportionate percentage of jailed blacks are innocent of the crime of which they were convicted; whether racism played a role in those fraudulent convictions is unclear, but probable for a number of them, supporting the position that the criminal justice system is currently unfair to blacks.

The third reason differs from the second because of the actions of law enforcement relative to the actions of the individual committing the crime. For example, the second reason could be invoked in a situation where a black individual shoots someone in the middle of a neighborhood with numerous witnesses available to testify, while a non-black individual shoots someone in a private residence with no witnesses, so there is significantly less evidence to support an arrest or a conviction. The third reason could be invoked in a situation where the circumstances of the criminal behavior are similar, but law enforcement agents pursue charges against the black individual instead of the non-black individual. Of course a final point must be made: for all reasons other than the last one, the black individual did actually commit a crime, thus one should not argue that this individual is inappropriately incarcerated.

When considering the statistics frequently cited to suggest racism in the criminal justice system, it is important to note the lopsided nature of non-violent drug offenses. Individuals who use and/or sell illegal drugs make up the largest number of incarcerated individuals (for a specific crime), and it is this crime that produces the most significant portion of the disparity between incarcerated blacks and those of other races/ethnicities. Based on this disparity, numerous individuals/groups have claimed that non-violent drug offenses are evidence of racism in the criminal justice system. Unfortunately, for a vast majority of these individuals, blindly citing the statistics is as far as they go in their analysis. Recall what Mark Twain once said: “There are three kinds of lies: lies, damned lies and statistics.” Without understanding the origins and the “why” behind the raw data that create the statistics, using statistics to argue for a certain perspective is inappropriate and foolish.

With regard to the issue of black incarceration rates, a chief point is whether or not drug laws are biased against blacks (or, to a larger extent, minorities in general). However, it is up to those who believe this characterization to prove it; i.e. the burden of proof is on those individuals to demonstrate that drug laws are biased against minorities. There are certain issues that must be addressed by these proponents beyond simply citing statistics.

First, one must analyze whether minority users are being sent to jail due to a higher wrongful conviction rate than white users, not just arrested at a higher rate despite the arrests being appropriate. To justify this conclusion one would have to conduct an analysis demonstrating more aggressive incorrect convictions for minorities. For example, in county A consider that there are 100 white and 100 black people: 80 black people are accused of violating drug laws, with 75 rightfully convicted and 5 rightfully acquitted, versus 40 white people accused of violating drug laws, with 37 rightfully convicted and 3 rightfully acquitted. In this scenario there is no racism, as the conviction rates are similar; black drug use is simply higher than white drug use. In county B consider that there are 100 white and 100 black people: 50 black people are accused of violating drug laws, with 45 rightfully convicted and 5 rightfully acquitted, versus 50 white people accused of violating drug laws, with 5 rightfully convicted, 5 rightfully acquitted and 40 wrongfully acquitted.
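The arithmetic in these two hypothetical counties can be checked directly (all figures are the illustrative numbers above, not real data):

```python
def rightful_conviction_rate(convicted, accused):
    """Share of accused individuals who were rightfully convicted."""
    return convicted / accused

# County A: similar rightful conviction rates, differing accusation volumes.
print(round(rightful_conviction_rate(75, 80), 3))  # blacks: 0.938
print(round(rightful_conviction_rate(37, 40), 3))  # whites: 0.925

# County B: equal accusations, but most guilty whites are wrongfully acquitted.
print(round(rightful_conviction_rate(45, 50), 3))  # blacks: 0.9
print(round(rightful_conviction_rate(5, 50), 3))   # whites: 0.1
```

County A's rates differ by barely a percentage point, while county B's nine-fold gap is the kind of pattern the argument says would actually evidence bias.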

In the second scenario one would argue racism because guilty white defendants are being wrongfully acquitted at a far higher rate than guilty black defendants, and typically whether an individual is guilty of a drug offense is rather simple to determine (i.e. there is little room for subjective rationality or interpretation). Unfortunately, those arguing racism must also address the issue of unequal justice between economic classes. Despite the contrasting ideological belief in the judicial system, it is widely understood that empirically the poor receive less equitable treatment in the legal system than the rich, and a larger percentage of minorities are poor. Therefore, to prove racism in the execution of drug-based court convictions one has to identify a wrongful conviction pattern and then untangle the web of bias between race/ethnicity and economic standing, a difficult task.

A second issue that must be addressed is analyzing the second and third points above by looking at how different races violate drug laws. For example, when initially looking at the available information for marijuana arrest rates, one could argue in favor of racism in that minorities are arrested at a disproportionately higher rate than whites for drug possession despite similar usage rates, or even higher usage rates by whites (depending on what type of polling information is used). However, this accusation of racism hits a snag when considering how the crime is committed. Middle class and rich individuals, more often white, have resources available to make their illicit drug use more evasive than less wealthy individuals can. It is inappropriate to suggest that a law is racist if one group has less ability to evade it than another group when there is no selective enforcement intent. Committing a crime in a public area and then being arrested and convicted for it cannot be viewed as selective targeting in any reasonable way.

A third issue that is imperative to making a claim of bias in the enforcement of drug laws is whether or not the law itself is bad. Unfortunately an argument that drug laws are bad cannot be made on the element of necessity. Individuals who are convicted of various drug crimes are not akin to Jean Valjean stealing bread for his sister’s starving child. One does not need to consume various illicit drugs to survive, nor does the consumption of these types of drugs produce unique positive effects that cannot be otherwise derived through legal means. It is also difficult to argue this point rationally on the basis of race, i.e. stating that just because one group of individuals is disproportionately convicted of a given crime, the law against that crime is racist. If this logic were sound then one could argue that if a majority of individuals convicted of embezzlement were Jewish, then embezzlement laws would be biased.

Based on these three elements that have yet to be proven, one cannot accurately argue that drug laws are racist simply because a lot of black individuals are convicted. In reality, a vast majority of incarcerated black individuals committed a criminal offense involving drugs and were appropriately convicted for that violation. Perhaps one can attempt to rationally argue that certain drug laws involving simple possession carry too strict a penalty relative to their negative influence on society, but as it stands one cannot make that argument on grounds of simple racism or other bias.

Moving on from the issue of bias in their execution, there is the question of whether or not drug laws carry the appropriate punishment. Setting aside mandatory minimums, because most people misrepresent their application due to confusion between associated violence and the quantity of drugs possessed, some argue that bias exists in habitual offender laws that mandate harsher sentences for repeat offenders. The problem with making this argument is that repeat offenders are not deterred from their criminal behavior by the same level of penalty or certainty of punishment previously applied, hence why they committed the crime again. Individuals commit crimes in order to produce some form of advantage in life. Most individuals, either out of concern for the associated punishment or through general positive morality, do not commit crimes. However, some individuals obviously are not concerned about the base severity of the punishment or its certainty because they actually engage in criminal behavior. Therefore, what should be the response if an individual continues to violate the law?

It is difficult to argue for decriminalization or penalty reduction for certain laws simply because one demographic is less able to conceal its violation of those laws. However, some people seem to argue exactly that; would that strategy actually solve the problem? While a number of minorities, including blacks, are incarcerated for drug crimes, one particular demographic of blacks is missing from jail cells: well-off or rich blacks. Rarely does an upper-middle class or rich black person go to jail for simple drug possession, thus most of the blacks in jail for drug possession are low income. What happens to these individuals in a world where drug use is legalized? A number of addicts are unable to recognize that they have a problem with drug use; therefore, if the law is unable to “reach” these individuals, what will ever stop them from abusing drugs?

While it can be argued that certain laws, most notably some drug possession laws, could be better addressed by court-ordered drug rehabilitation rather than incarceration, individuals who characterize the criminal justice system as racist tend not to make this suggestion. As mentioned above, these individuals are so distracted by the number of black individuals in jail that they forget that a vast majority of them actually did break the law they are in jail for. A better strategy would be to reclassify minor drug possession from a felony to a misdemeanor, forcing repeat violators to seek treatment or accept incarceration. Some argue for the exact system utilized by Portugal, but those individuals must appreciate the difficulty of this idea given the logistical difference between enforcement in the U.S., a country with over 300 million individuals, and enforcement in Portugal, a country with around 10 million individuals.

The best thing individuals can do to help drug users appears to have two prongs: 1) ensure the proper measures are available to identify improper drug use and direct these individuals to appropriate treatment programs; 2) petition for the passage of a guaranteed basic income (GBI) to ensure that low income individuals have the resources to effectively recover, and stay recovered, from any drug addiction.

Overall, drug law enforcement is not racist, and because most of the prison demographic disparity occurs through drug laws, the disparity itself is not racist. If one wants to argue for a different way to respond to those who violate certain laws than simply throwing the individual in jail, that argument needs to be made logically, not through inaccurate, over-emotional race baiting, because while the criminal justice system is not perfect on the whole, blindly proclaiming it racist is foolish.

Citations –

1. Carson, A, and Golinelli, D. “Prisoners in 2012 – Advance Counts.” Department of Justice. July 2013.

Friday, May 30, 2014

Defamation and Internet Reviews

The balance between free speech and defamation has frequently been a tricky one, with free speech understandably given significant lenience. However, as times have changed and the power of the Internet as a commercial tool continues to grow, social critiquing websites have become important enough that a positive majority opinion can result in millions in additional revenue while a negative majority opinion can result in millions of lost dollars for authors as well as consumer and service businesses.

Unfortunately the anonymity provided by these websites and the generally simplistic nature of their review systems have created an environment where the “public” evaluation of services and products can be easily manipulated by political and/or competitive elements. Sadder still, there are frequent instances when these websites do not behave as reasonable and rational stewards when issues of defamation arise, continuously deferring to 1st Amendment protection for their users and failing even to ask whether an act of defamation has taken place, an obvious abdication of their responsibility.

Defamation occurs when an individual makes false statements about another individual or group that harm its reputation. There are typically three elements supporting a defamation charge: the statement must be false; it must cause psychological, social or financial harm; and it must be made negligently and/or deliberately (i.e. the individual did not take time to determine the truthfulness of the statement or flat out lied). Defamation is commonly divided between written statements (libel) and spoken statements (slander). With respect to the Internet, almost all defamation cases are libel due to written statements on message boards or review websites, and because almost all reviewed products cannot be viewed as “public entities or officials,” proving malice is not necessary to prove defamation. Finally, given the commercial nature of these product review statements, neither type of privilege, absolute or qualified, can be applied to avoid defamation charges.

The most common defense against defamation charges, and the only real defense with regard to reviewing a product, is that the statement rendered is simply an opinion rather than a statement of fact. Frequently opinions, due to their personal and somewhat subjective nature, are not viewed as falsifiable. However, the Supreme Court has ruled that the “opinion defense” has certain conditions and cannot be treated as a third universal privilege. Other common defamation defenses, such as reasonable belief in statement accuracy due to a secondary source (i.e. a newspaper or television report) or emotional/satiric utterance, are not applicable to reviews because there is no secondary source and a review is a statement intended to be believed.

Beyond opinion the only other reasonable defense for libel in a product review environment is if the reviewed product is not reasonably capable of further damage to its reputation. Obviously if the reputation of an entity has “bottomed out” in the eyes of the public then no further negative statements regarding that entity, true or not, can damage the reputation of that entity. However, for this defense to work the accused individual must demonstrate that the review did not create a “chain-reaction” that caused the reputation to bottom out due to the “pile-on” nature of the Internet.

So if there is no opinion privilege, what separates a review that is negative and legal from one that is negative and libelous? Largely the deciding factor is whether or not the review contained information that a reasonable analysis could disprove. Basically, the more detailed an opinion, the less likely an individual is to make a successful “opinion” defense against a libel charge. Of course this characterization is an interesting element because the most valuable reviews are those that are thoroughly detailed.

Reviewing in general, but especially online reviewing, typically produces a reverse bell curve in the respondent spread, which then yields an intermediate mean. This occurs because most people do not take the time to review products they view as average (i.e. 2–3.5 stars). Instead most non-paid reviewers have to feel strongly about what they are reviewing, which commonly results in 1, 4 or 5 star reviews. Therefore, for a number of products these reviewers tend to somewhat neutralize each other, resulting in a large number of products receiving an average 2.5–3.5 star ranking (out of 5). Given this typical result it is important that reviewers be expected to provide sufficient reasoning for why their experience with the reviewed product/service was positive or negative, because the generally extreme nature of these reviews can produce significant movement for products that lack a large number of reviews.
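The neutralization effect described above can be sketched in a few lines of Python (the ratings below are illustrative, not real data):

```python
# Sketch: extreme reviews on a reverse-bell-curve spread tend to
# average out to a middling score, while sparse products swing hard.
def average_rating(ratings):
    """Simple arithmetic mean, as used by most review sites."""
    return sum(ratings) / len(ratings)

# Hypothetical product: 10 one-star pans and 10 five-star raves,
# with almost no middling reviews -- a typical bimodal spread.
bimodal = [1] * 10 + [5] * 10
print(average_rating(bimodal))  # 3.0 -- the extremes neutralize each other

# With only a handful of reviews, one extreme review moves the mean a lot.
sparse = [4, 4, 4]
print(average_rating(sparse + [1]))  # 3.25 -- a single 1-star drops it 0.75
```

The second case is the “significant movement” problem: a lone extreme review has outsized weight until the review count grows.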

Unfortunately a number of reviewers do not provide sufficient depth, reasoning and logic in their reviews, instead substituting emotion and personal political/philosophical beliefs, which are subjective and unrepresentative of potential future users. In addition, the reasoning offered is often inconsistent from review to review. The lack of a consistent format in the review process can also lead to confusion and inaccuracy when determining why an individual enjoyed or did not enjoy a particular experience. This confusion and inaccuracy can then result in libel suits. Realistically this problem should be solved by having all websites that conduct structured product reviews adopt a universal format. The following format is an example of what could be used in the future:

User Name:


Ranking the Experience (out of 5 stars in half star increments):

Reason 1 for the Above Ranking:

Reason 2 for the Above Ranking:

Additional Comments:
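As a sketch, the format above could be encoded so that the two reasons and a half-star ranking are enforced as required fields (the class and field names here are hypothetical, not any existing site's schema):

```python
# Hypothetical encoding of the proposed universal review format.
from dataclasses import dataclass

@dataclass
class Review:
    user_name: str                 # pseudonyms allowed; content matters most
    ranking: float                 # out of 5 stars, in half-star increments
    reason_1: str                  # required rationale for the ranking
    reason_2: str                  # required rationale for the ranking
    additional_comments: str = ""  # optional free-form section

    def __post_init__(self):
        # Enforce a 0-5 ranking in half-star steps.
        if not (0 <= self.ranking <= 5) or (self.ranking * 2) % 1 != 0:
            raise ValueError("ranking must be 0-5 in half-star steps")
        # Both reasons are required fields; reject "drive by" reviews.
        if not self.reason_1.strip() or not self.reason_2.strip():
            raise ValueError("two substantive reasons are required")
```

A submission missing either reason, or using a ranking like 2.3 stars, would simply be rejected at intake rather than polluting the average.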

While it would be preferable for individuals to use their real names when reviewing items/services, it is not required because the important element is the content of the review, not the simple star measure. If the generally used star system is retained it should include the ability to evaluate in half-star increments, because there are a number of times when an experience is not bad enough to warrant 2 stars, but not good enough to earn 3 stars. Without the ability to award a 2.5-star ranking the review is inherently inaccurate.

In all types of reviews the rationale for why an experience produces a certain ranking is paramount. There should be at least two major reasons why an individual evaluated the experience his/her particular way. These reasons need to be clearly identified and transparent instead of potentially hiding in a large wall of text. Initially some may argue that contemplating at least two significant reasons why the product/experience was good or bad is too much work. This reasoning is foolish because if one cannot meet this requirement, why is that individual taking the time to write a review in the first place? Clearly the product/experience was not memorable or did not have a significant impact.

Also, these reasons need to be included for the review to be accepted by the particular website. Basically these reasons would be encoded as required fields. If additional commentary is desired, a non-required space would be available after the two principal rationale sections. This additional commentary section is largely reserved for individuals who had a significantly positive or negative experience.

This new review format would create significant transparency and clarity behind the rationale leading to the ranking produced by the reviewer. In addition this new format actually demands that the reviewer apply some effort to the review, eliminating the “drive by” review of a single sentence stating that the product is “awesome” or “sucks,” and thereby eliminating poor quality reviews, either positive or negative, from consideration in the average ranking. This elimination is important because not all reviews provide equal value, yet in the simplistic “average score” system used by review websites they are treated equally. Changing the format of the review process should not be difficult for these review websites. Reviews using the old method could remain in the database, but would need to be isolated into a separate category where a viewer could select either reviews with the old system or reviews with the new system.
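The headline average described above could then simply ignore reviews lacking the required rationale, keeping legacy one-liners in the database but out of the score. A rough Python sketch, with made-up data and thresholds:

```python
# Sketch: exclude "drive by" reviews (no stated reasons) from the
# headline average while still storing them.
def headline_average(reviews, min_reasons=2):
    """Average only the reviews that carry the required rationale."""
    qualifying = [r["stars"] for r in reviews if len(r["reasons"]) >= min_reasons]
    if not qualifying:
        return None  # no qualifying reviews yet
    return sum(qualifying) / len(qualifying)

reviews = [
    {"stars": 1.0, "reasons": []},                            # "sucks" -- drive-by
    {"stars": 5.0, "reasons": ["fast shipping", "sturdy"]},
    {"stars": 3.5, "reasons": ["works fine", "pricey"]},
]
print(headline_average(reviews))  # 4.25 -- the drive-by 1-star is excluded
```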

It is also important to note that defamation is a legitimate limit on 1st Amendment protection. There are some individuals who seem to believe that attacking any negative comment on a review website is a violation of the 1st Amendment and is somehow inherently bad business. The common statement by these individuals is something along the lines of:

“How does company A expect to get more customers when they are suing review website A over some bad reviews. Clearly company A cannot take criticism, so they lack flexibility and cannot cater to their potential customer base. Instead of adapting their only response is to sue. I would never do business with company A.”

Of course this statement is inherently flawed because it assumes all negative comments are valid, truthful and constructive criticism. Any rational person who has ever viewed the comments that certain products receive on these review websites understands that this assumption is frequently not valid. Basically these individuals need to understand that there is a difference between a justified negative review that uses facts and evidence to support its stance and an unjustified negative review that embellishes and lies to “support” its stance. All parties should welcome the above changes to the review process because they make defining and supporting a defamation charge easier by eliminating the ambiguity that sometimes leads to fair negative reviews drawing legal attacks from individuals/groups.

Overall one of the biggest problems in the relationship between professional review websites and the businesses/products reviewed on them is that the review websites largely view themselves as only a platform to host the reviews with no responsibility for their content. It is this attitude that leads to the “surprise” when they receive numerous complaints from individuals and companies over libelous reviews. Changing the review system to demand clearer and more transparent rationale from reviewers would be a significant step toward better controlling the content of a review without stripping reviewers of the ability to render a positive or negative review on the whole. This change should also limit the tension between these review websites and product developers/companies by improving the certainty and validity of complaints and potential libel inquiries and lawsuits. In the end something needs to change in the way these review websites handle their roles in modern business, otherwise the merry-go-round of complaint/lawsuit – denial – complaint/lawsuit will continue, simply with continuously increasing stakes.

Wednesday, May 28, 2014

A Brief Discussion Regarding Feeding Future Martian Colonists

Colonizing Mars will be a significant endeavor with many moving parts and critical decisions to make. One of the most important decisions is how to design the appropriate food supply methodology for the colonists, as Martian environmental conditions differ significantly from Earth's. This difference demands a clear and transparent strategy to ensure the safety and productivity of future colonists. Fortunately there is sufficient predictability and routine involved in creating this food production strategy, making it easier to compare and contrast competing options.

The first element in understanding the dietary requirements for Martian colonists is deducing the minimum requirements for survival on Earth. The typical energy recommendation for a sedentary individual weighing approximately 70 kg is about 2,000 calories, which should be familiar to most individuals because it is the basis of the daily recommended allowances for nutrients used by the FDA. There is the argument that more active individuals will require double that at 4,000 to 4,500 calories. Some reference that astronauts on the Apollo missions consumed an average of 2,793 calories, but their missions were extremely short (less than two weeks).

A more apt reference comes from Biosphere 2 where participants consumed 2,216 calories per day, but even at these consumption levels participants lost an average of 8.8 kg over the 2-year experiment. Unfortunately it stands to reason that Martian colonists will be more active than Biosphere 2 participants due to required frequent extra-vehicular activities (EVAs) to construct additional elements to expand the initial habitat and scientific exploration. Also there is little information regarding how nutrition needs and absorption capacity change in a low gravity environment, especially with regards to gut bacteria.
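As a rough sanity check on these figures, the weight lost by Biosphere 2 participants implies only a modest daily calorie shortfall, assuming the common approximation of ~7,700 kcal per kilogram of body mass (an assumption, not a figure from the experiment):

```python
# Back-of-envelope check on the Biosphere 2 figures above.
KCAL_PER_KG = 7700          # assumed energy density of lost body mass
mass_lost_kg = 8.8          # average loss over the 2-year experiment
days = 2 * 365

daily_deficit = mass_lost_kg * KCAL_PER_KG / days
implied_need = 2216 + daily_deficit   # actual intake plus the deficit

print(round(daily_deficit))   # ~93 kcal/day shortfall
print(round(implied_need))    # ~2,309 kcal/day just to hold weight steady
```

Even this sedentary-adjacent baseline sits above the Biosphere 2 intake, before accounting for the heavier workload expected of Martian colonists.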

Another problem is that these calories need to include the 9 essential amino acids for healthy adults: phenylalanine, valine, threonine, tryptophan, methionine, leucine, isoleucine, lysine, and histidine. Studies on a minimal diet required for survival included 10 different foods: soybean, peanut, wheat, rice, potato, carrot, chard, cabbage, lettuce and tomato, with recommendations for additional nutrients from sugar beets, broccoli, various berries, onions and corn.1 Unfortunately it is unlikely that such a wide array of foods will be available for a Mars colonization mission beyond the food that initially travels with the colonists. In addition, early in the expedition colonists will have to eat additional food brought from Earth to compensate for the lack of sufficient growth on Mars.

However, the weight and cost of carrying a large amount of food with the colonists could be crippling. A general estimate can be made using MRE information. Each MRE contains about 1,200 calories.2 A colonist would consume at least two MREs per day. The average weight of an MRE is estimated at 635 grams, or slightly under 1.4 pounds.2 Therefore, the average weight of food per day per colonist is 2.8 pounds. The generic cost associated with launching something into space is 8,000–10,000 dollars per kilogram (i.e. 3,636–4,545 dollars per pound), thus 3.716 million to 4.645 million dollars per colonist per year in food costs. Some could argue that this price is lower due to the activities of SpaceX, but most people forget that these estimates are not made to scale. There is a big difference between $2,000 per pound when launching 2,000 pounds and $2,000 per pound when launching 200,000 pounds. Also any estimate can be made to look good depending on how much money a company is willing to lose on a launch. Unfortunately cost is not the only limiting factor for colonists bringing their own food.
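The launch-cost figure above can be reproduced with simple arithmetic (all inputs taken from the estimates in this paragraph):

```python
# Reproducing the per-colonist food launch-cost estimate from the text.
mre_weight_lb = 1.4            # ~635 g per MRE
mres_per_day = 2               # at least two MREs per colonist per day

food_lb_per_day = mres_per_day * mre_weight_lb        # 2.8 lb/day
food_lb_per_year = food_lb_per_day * 365              # ~1,022 lb/year

# $8,000-10,000 per kg works out to roughly $3,636-4,545 per pound.
low, high = 8000 / 2.2, 10000 / 2.2
print(round(food_lb_per_year * low))    # ~3.72 million dollars per year
print(round(food_lb_per_year * high))   # ~4.65 million dollars per year
```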

Some could argue that the nutrients provided by some of these foods can be substituted through vitamin consumption, but there are lingering questions about nutrient absorption when vitamins are principally responsible for nutrition. Another more minor concern revolves around shelf life for freeze-dried and MRE-type packaging, which will limit the use of initially sent food to a maximum of approximately 2 years. As stated above this concern should not be significant because greater than average food consumption will be expected due to activity levels and a lack of grown food. Finally for some there is the continuing pseudo concern of unappetizing food in space due to the specific cooking and harvesting techniques required for reduced gravity environments. This concern is rather meaningless because if someone has the choice between eating something boring, repetitive and unappetizing or dying, any sane individual will select the first option.

Based on the anticipated workload and a difficult living environment (pressurized homes and bulky pressurized spacesuits) all settlers on Mars will require additional calories beyond average consumption levels. While freeze-dried food shipments can be delivered periodically from Earth the costs associated with such missions, as estimated above, should prohibit executing this strategy indefinitely. Overall the reality is that some form of food synthesis/production methodology needs to be created for Martian colonists.

Obviously growing food on Mars will be difficult because the lack of quality soil, rainfall and consistent sunlight will force all growth to occur indoors in a pressurized environment under artificial light in a hydroponic or aeroponic infrastructure. The advantages of using soil over nutrient baths are numerous including, but not limited to: 1) soil playing a significant role in air purification; 2) soil acting as a central, low energy recycling and composting system for various types of waste; 3) the difficulty of re-supplying nutrient solutions away from Earth, potentially limiting the lifespan of a hydroponic or aeroponic system; 4) increased gaseous aeration and reduced water leaching, in the absence of toxic agents, due to the gravity difference.

Clearly, somehow incorporating soil would be a large boon to the colonization process. Some individuals have very optimistic notions that Martian soil can be rehabilitated to the point where it can support food growth, and some initial experiments suggest that it is possible to grow food in simulated Martian soil.3 However, this research has its limits in that the soil used to emulate Martian soil was free of contaminants, and the experiments lacked the pressure and gravitational differences inherent to Mars, thus treating these results as representative of cultivation on Mars is irresponsible. A rehabilitation process will take years, if not decades, and more than likely will not start until after colonists have made landfall.

The problems with this rehabilitation process are as follows: 1) high concentrations of detrimental agents including various salts, oxides and toxins, especially chlorine and aluminum; 2) impurities that heavily reduce water uptake efficiency, which due to the lack of available water on Mars would dramatically reduce yields; 3) a theoretical inability to support continuous microorganism growth, which is essential for quality soil health; 4) a lack of important secondary nutrients that foster plant growth, like boron and molybdenum; 5) regolith pH that varies from place to place, as on Earth, but with more radical variations: pH will be very low in places with large amounts of jarosite and very high in places with large amounts of NaHCO3 and Na2CO3, and neutralizing these highly acidic or basic regions would require large amounts of CaCO3 or olivine deposits and peat moss respectively; 6) a direct lack of principal nutritional agents, most notably nitrogen and phosphorus. Some argue that nitrogen can be created through weathering, a process that will take far too long, or through nitrogen fixation by various microorganisms, a process that is questionable given existing soil conditions and a lack of phosphorus. Phosphorus only seems available through fertilizers and also requires leaching CaSO4 deposits to avoid phosphorus interaction before plant absorption. Therefore, it is unreasonable to assume outdoor food growth for the first few decades.

Some have argued that even if the Martian soil cannot be utilized the Martian atmosphere could be, due to its high CO2 percentage. While approximately 95% of the Martian atmosphere is CO2, the atmosphere itself is dramatically thinner than Earth's, with a total surface pressure of roughly 0.6% of Earth's. In fact the partial pressure of CO2 at the Martian surface actually exceeds that in Earth's atmosphere; the real obstacle is that at such a low total pressure plants cannot retain liquid water or function, so a free flow of air from the Martian atmosphere cannot produce a net benefit in plant growth. Even if the pressure problem were solved, the frequent dust storms with their regolith deposits would cause significant problems for a free-airflow greenhouse, and it would be incredibly difficult to filter these elements due to their very small particle size. So currently it stands to reason that all food growth in a Martian colony for the first few decades will require complete isolation from native Martian conditions.
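A back-of-envelope comparison using commonly cited round figures (assumed values, not mission data) illustrates just how thin the Martian atmosphere is:

```python
# Assumed round figures: Mars ~610 Pa surface pressure, 95% CO2;
# Earth ~101,325 Pa surface pressure, ~400 ppm CO2.
mars_total_pa = 610
earth_total_pa = 101_325

mars_co2_pa = mars_total_pa * 0.95        # ~580 Pa partial pressure of CO2
earth_co2_pa = earth_total_pa * 400e-6    # ~40 Pa partial pressure of CO2

pressure_ratio = mars_total_pa / earth_total_pa
print(f"Mars CO2: {mars_co2_pa:.0f} Pa, Earth CO2: {earth_co2_pa:.0f} Pa")
print(f"Mars total pressure is {pressure_ratio:.1%} of Earth's")  # ~0.6%
```

The CO2 itself is plentiful by partial-pressure standards; it is the near-vacuum total pressure that rules out any free-airflow greenhouse.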

With the lack of viable soil the most popular strategy for growing food on Mars has been to forego soil altogether and use hydroponics. Hydroponics eliminates the soil issue, but it raises its own concerns regarding water use and nutrient supplementation. Even with high rates of recycling, water scarcity will be an issue on Mars, and growing food through hydroponics will place further stress on that scarcity. While some hydroponic proponents report that hydroponics actually saves water, these assertions are born of a comparison between hydroponic use and flood irrigation in traditional fields rather than drip irrigation. When compared against drip irrigation, hydroponics results in slightly greater water use. Also, although soil is not used, a special nutrient mixture is required, and it may be difficult to mass synthesize this mixture on Mars after the initial sample is consumed without some base to work from that must either be created on Mars or sent from Earth.

Another option for food growth is aeroponic growth. Aeroponics attempts to optimize plant growth through the use of a pressurized water mist doped with nutrients sprayed on the entire exposed root system of the plant. One of the chief reasons aeroponics is successful is it does not require soil, which can provide growth inefficiencies due to poor drainage or lack of porosity limiting root aeration leading to reduced growth. NASA has even suggested that aeroponic-based food production through an ultrasonic technique will result in similar yields to conventional growth at 45% greater rates of growth despite using 99% less water and 50% fewer nutrients. However, this conclusion must be tempered with the fact that the comparison is more than likely (it is not specified) being made against crops raised through flood irrigation and fertilizer saturation, two common yet incredibly inefficient agriculture techniques, thus the actual benefits of aeroponics over more responsible farming are more muted.

The most significant detriment to aeroponics under normal conditions is a higher probability of pathogenic plant death due to root exposure, but this concern is somewhat mitigated by the naturally aseptic environment on Mars, which limits the absolute probability of exposure. Additional sanitary elements can be added to an aeroponics system to limit contamination from colonists. A secondary problem may be the synthesis of additional nutrient compounds for the mist, since traditional farming develops nutrients from organic compounds and bacteria.

Significant research has been conducted by NASA and NASA-sponsored outside researchers since the early 1990s, resulting in several effective water droplet nebulizer technologies and a low mass polymer aeroponic apparatus.4 Some inflatable growth chambers have also been developed for flora growth in space. With that said, some argue that a dedicated growing area is not necessary in a Martian habitat because aeroponic structures could be incorporated within various other parts of the habitat, resulting in more efficient use of overall available space. While aeroponics is viewed by some as the future of food growth in space, no serious long-term aeroponic experiments have been conducted in space, so most of the supposed benefits remain theoretical. Note also the lack of experimentation with such a system on Earth: none of the numerous “Martian simulation” experiments have extensively utilized aeroponics in an isolated environment to support food production. If aeroponics is viewed as a valid option for providing food on Mars, why have these simulation experiments failed to incorporate such a testable strategy?

Random deployment of aeroponic systems throughout the habitat seems inefficient due to conflicting lighting conditions. Regardless of growth medium, plants benefit from exposure to particular wavelengths of light over standard white light. Monochromatic blue and red lights have both demonstrated positive growth influences on plants, and some positive results have been recorded for green, with effectiveness typically ordering from red to blue to green.5 Therefore, it stands to reason that all potential crops should be exposed to either a red or blue light source, preferably from an LED. However, consistent exposure to red or blue light during wakeful hours could have a detrimental effect on the crew. Due to this possible lighting conflict, as well as potential sanitation issues, food growth is best localized to isolated areas of the habitat principally responsible for food growth, or to its own future constructed habitat completely isolated from the principal habitat.

One final note when deciding between hydroponics and aeroponics is the issue of yield vs. available space. If an aeroponic system is properly designed it can maximize space utilization of the habitat module by using walls and ceilings. A hydroponic unit will have to compete for space that could be utilized for storage, manufacturing, sleep, leisure, etc. Alleviating this potential space problem would involve sending two habitation modules to Mars, where one would act as the living unit and one would act as the farming unit devoted to hydroponic use. While clearly the costs of such a plan would be significant due to weight issues, success would allow for special oxygen/CO2 customization of the farming unit, which would reduce the complexities of isolating the farming and living units in the same habitation module. This farming unit could also be constructed on Mars using in situ resources to avoid weight-based travel complications.

When addressing the food itself, while it would be ideal to grow a wide selection of fruits, vegetables, nuts, etc. to increase morale through variety of food choice, for the first group of colonists the lack of viable Martian soil makes space the limiting factor, with water close behind. Therefore, it is important to identify the foods that give the best “bang for the vitamin buck” with regards to growth space. As mentioned earlier, most foods grown on site will require either hydroponics or aeroponics, thus growth method combined with space considerations will make it difficult to grow various vining plants like tomatoes, cucumbers, peas, grapes, etc. Also, large surface area or volume crops like corn, squash, melon, zucchini, etc. would be ill advised. Due to the additional energy requirements for colonists, especially those actively exploring or building on Mars, a large source of complex carbohydrates should be grown. There are numerous quality candidates, namely cassavas, soybeans, sweet potatoes and lentils.
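To get a feel for how binding the space constraint is, consider a very rough sizing sketch; every input below is an illustrative assumption (a potato-like staple grown under artificial light), not a mission requirement:

```python
# Very rough growing-area sketch per colonist (all inputs assumed).
daily_kcal_target = 2500      # assumed per-colonist calorie requirement
yield_g_per_m2_day = 120      # assumed fresh-tuber yield under artificial light
kcal_per_100g = 87            # approximate energy density of a potato-like staple

kcal_per_m2_day = yield_g_per_m2_day / 100 * kcal_per_100g   # ~104 kcal/m2/day
area_m2 = daily_kcal_target / kcal_per_m2_day

print(round(area_m2))   # ~24 m2 per colonist, before walkways and equipment
```

Even under these optimistic assumptions, each colonist needs on the order of tens of square meters of dedicated growing surface, which is why wall- and ceiling-mounted aeroponics is attractive.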

Of the possible carbohydrate options the cassava root is an attractive one. One of the principal advantages of the cassava is that it is significantly drought tolerant and capable of growing well in sub-optimal soils. Clearly these traits are advantageous in a water-uncertain environment like Mars, where any water savings is a benefit and a non-optimal nutrient mix could become the norm. There are two types of cassava, sweet and bitter, and while bitter is preferred on Earth due to its enhanced pest deterrence, the lack of these organisms on Mars would make sweet the better choice for a more appetizing meal. The purpose of growing cassavas is to harvest the root, thus the leaves of the plant can be pruned early in its growth cycle to limit space use. However, if insects are also being cultivated, the leaves can be harvested as a secondary food source. The roots are good sources of calcium and phosphorus, which are critical elements for bone structure, as well as vitamin C.

In contrast to cassavas, sweet potatoes are more finicky in their growth requiring lots of light and warm temperatures (70-80 degrees F) along with significantly more water. Most varieties of sweet potatoes have some vining characteristics, which could create space issues, but there are bush-type varieties that should be used instead. Due to near immediate consumption sweet potatoes grown on Mars will not be cured eliminating that processing step. Sweet potatoes provide significant concentrations of fiber, beta-carotene, calcium, phosphorus and vitamin A. Overall it seems reasonable that there would be a competition between either using sweet potatoes or cassava with cassava having more overall nutrients and sweet potatoes having better flavor and concentration of certain nutrients like vitamin A.

Lentils are an edible pulse of the legume family, widely grown throughout the world for their high protein and general nutritional content. Lentils contain the essential amino acids phenylalanine, valine, threonine, tryptophan, leucine, isoleucine, lysine and histidine, lacking only methionine, though some report that sprouted lentils contain methionine.6 In addition to the large essential amino acid complement, lentils also have significant amounts of fiber, folate, iron and vitamin B1. However, while lentils have a wide variety of essential nutrients, their preparation is more complicated than most foods, requiring long-term soaking in warm water to reduce phytate and trypsin inhibitor content. This additional use of water beyond simple rinsing may give pause to the use of lentils as a food source in the initial stages of a Mars mission.

Another quality option outside the starchier ones above is broccoli. Broccoli is high in fiber, vitamin C, vitamin B2, pantothenic acid (B5), vitamin B6, folate (B9), manganese and phosphorus, along with numerous alleged anti-cancer and immune regulatory molecules like selenium and diindolylmethane. A secondary advantage, beyond the high nutrient value, is that broccoli is resilient, grows quickly and is harvested easily. The one possible concern for broccoli is that the total area of the leaves can become large, but these leaves can be pruned to eliminate this concern. Currently there is little reason to exclude broccoli from the food options for Martian colonists.

Soybeans are commonly considered a quality choice for Martian food because they are a source of complete protein (a food that contains significant amounts of all essential amino acids) as well as a large quantity of protein overall. However, there are some concerns. First, similar to lentils, soybeans must be cooked with “wet” heat to destroy trypsin inhibitors, which will take time and additional water resources. Second, modern cultivars typically reach a mature height of 3–3.5 feet, which could create space concerns depending on where the soybean crop is planted, especially for hydroponic strategies. If soybeans were grown, pruning would more than likely be required.

Keeping with the theme of green vegetables, spinach is another quality option, rich in lutein (for the eyes), vitamins A, C, E, K, B2 and B6, magnesium, manganese, folate, betaine, iron, calcium and phosphorus. It is also a quality source of folic acid, which has been in rather short supply among the other candidates mentioned so far. The inclusion of peanuts could also be an interesting possibility. Peanuts are high in fiber, folate, niacin (B3), phosphorus, vitamin E and magnesium, along with large concentrations of protein, much more than can be acquired from the fruit and vegetable candidates. Some may argue that growing peanuts hydroponically is difficult because of the burrowing flower stem; however, peanut blossoms have successfully buried themselves in nutrient media and formed viable peanuts. Therefore, there is nothing to be concerned about under normal conditions; whether or not Martian gravity changes that is unknown.

A brief note regarding genetically engineered crops. There are two schools of thought regarding the inclusion of these types of crops. Proponents would argue that it is advantageous to genetically engineer all of the seeds that colonists bring to Mars for drought resistance, additional vitamin synthesis (i.e. vitamin A in golden rice) and maximum photosynthetic efficiency. Due to the use of hydroponics, each plant can be semi-isolated, restricting the possibility of cross contamination if something goes wrong. Opponents would argue that this isolation is rudimentary and that if something were to go wrong genetically, the colonists would be put at severe risk, depending entirely on food from Earth. Logically it makes sense for colonists to avoid homogeneity by bringing a variety of seed types, some engineered and others not, and planting accordingly.

This combination of plant products does not, however, completely meet all nutritional requirements, as it is low in sodium and lacks animal-origin vitamins and fats such as B12 and cholesterol. This is a common feature of plant-based diets. To overcome these deficiencies, sodium can be supplied in mineral form. If one concluded that plant-based protein sources are insufficient on their own, then additional protein would have to be acquired elsewhere. Utilizing large animal-based protein sources like cows and chickens is unreasonable due to their resource demands; thus insects and fish are the appropriate animal food sources in a space agro-ecosystem, given the limited area available for their rearing and the need to use other resources efficiently to fill the nutritional requirements.

Muscular atrophy in a reduced gravity environment is a persistent problem. Skeletal muscles principally involved in maintaining proper posture are the most negatively affected by the reduction of gravity because these muscles have evolved to balance the body in an environment where gravitational acceleration is 9.8 m/s^2. That said, it appears that slow twitch muscle fibers are more susceptible to the change in gravitational force than fast twitch muscle fibers.7,8 This difference in degradation can be troublesome because slow twitch fibers are associated not only with posture but also with muscular endurance. In addition to muscle atrophy there is a serious drop-off (>50%) in protein synthesis rates and a significant loss of calcium balance.9-11 Whether this loss of calcium is due to direct losses or indirect absorption losses (i.e. a lack of vitamin D) is unknown. Therefore, in order to increase the probability of limiting muscle atrophy, colonists will require a constant supply of protein.

One of the key advantages to utilizing insects is that they can be fed substances that are inedible for humans yet are byproducts of other processes. For example, two of the most promising insect candidates are the silkworm (Bombyx mori) and the common termite, because they survive on mulberry leaves and on cellulose or lignin, respectively. The silkworm is the better choice of the two because it cannot escape its rearing room to become a nuisance to the colonists, it produces a useful byproduct in its silk cocoon, and colonists can consume part of its principal food source (the berries from the mulberry plant). Termites are popular among those who plan to incorporate wood into colony construction, a strategy that does not appear to be effective in its versatility or overall usefulness. Therefore, with the obvious advantages of silkworms as both a protein source and a secondary material source, it stands to reason that all insect rearing should focus on silkworms.

Additional protein can be produced through aquaculture by fostering suitable concentrations of small fish. It is not reasonable to expect ideal water quality in such a system, so the selected fish must be able to survive periods of high toxicity or salinity. In addition, the fish must have a small maximum growth potential to avoid resource over-consumption due to overcrowding. Understandably, in most situations fish harvesting would occur often enough that overcrowding should not be an issue, but overall it pays to be careful. With these two conditions in mind, the two best fish candidates appear to be loach and tilapia due to their ability to tolerate negative environmental elements like poor water quality, high salt concentrations, and limited water availability.

Another option for a more advanced colony is to develop an aquaponic system. In such a system plants are grown with their roots immersed in the nutrient-rich effluent water of an aquaculture tank. The plants filter out ammonia and other toxic metabolites that could damage the aquatic life, and the water is then reintroduced to the aquaculture pool. There are many different types of aquaponic systems, but deep-water raft seems to be the best for Mars due to its simplicity, low power requirements, and greater flexibility with germination staggering, because different plants have different rates of growth.
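The recirculating loop described above can be illustrated with a toy mass-balance sketch. Every parameter value here (excretion rate, removal fraction, tank volume) is a hypothetical placeholder chosen for illustration, not a measured figure:

```python
# Toy ammonia mass balance for a recirculating aquaponic loop.
# Every parameter below is an illustrative assumption, not a measured value.

FISH_OUTPUT_G = 2.0      # grams of ammonia the fish add to the water per day
REMOVAL_FRACTION = 0.6   # fraction of dissolved ammonia the plant rafts strip per pass
TANK_VOLUME_L = 1000.0   # liters of water in the aquaculture pool

ammonia_g = 0.0          # grams of ammonia currently dissolved
history_mg_per_l = []
for day in range(30):
    ammonia_g += FISH_OUTPUT_G             # fish excrete into the pool
    ammonia_g *= (1 - REMOVAL_FRACTION)    # plant roots filter the effluent pass
    history_mg_per_l.append(ammonia_g / TANK_VOLUME_L * 1000)

# The loop settles where daily input equals daily removal:
# a = (a + F) * (1 - r)  =>  a = F * (1 - r) / r
steady_g = FISH_OUTPUT_G * (1 - REMOVAL_FRACTION) / REMOVAL_FRACTION
print(f"steady-state ammonia: {steady_g / TANK_VOLUME_L * 1000:.2f} mg/L")
```

The point is qualitative: as long as the plant beds strip a fixed fraction of the ammonia each pass, the concentration converges to a finite steady state rather than accumulating without bound.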

Some also argue that including algae, either hydroponically or aquaponically, would be a boon to food production. One of the most powerful reasons to include algae is that it can form a closed ecological cycle. Add the algae to an environment with water, CO2, and energy (a light source), and such a system can theoretically keep a person supplied with food and oxygen for as long as the system is maintained.
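The scale of such a cycle can be roughed out with photosynthesis stoichiometry. The 0.84 kg/day oxygen figure is a commonly cited approximation of one person's daily consumption, and glucose stands in for total fixed biomass, so treat this strictly as a back-of-envelope estimate:

```python
# Back-of-envelope: biomass an algae system must fix daily to supply one
# person's oxygen. Net photosynthesis: 6 CO2 + 6 H2O -> C6H12O6 + 6 O2.
# The 0.84 kg/day figure is an approximation, not a mission specification.

O2_PER_PERSON_KG = 0.84     # approximate daily O2 consumption per person
MOLAR_MASS_O2 = 32.0        # g/mol
MOLAR_MASS_GLUCOSE = 180.0  # g/mol

mol_o2 = O2_PER_PERSON_KG * 1000 / MOLAR_MASS_O2  # ~26 mol of O2 per day
mol_glucose = mol_o2 / 6                          # 1 glucose made per 6 O2 released
biomass_g = mol_glucose * MOLAR_MASS_GLUCOSE      # glucose-equivalent biomass

print(f"~{biomass_g:.0f} g of glucose-equivalent biomass per person per day")
```

Roughly 0.8 kg of fixed biomass per person per day, which also hints at the scale of food the same cycle could contribute.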

Some individuals regard Spirulina (a type of algae) as an ideal health food, and some hope that these positive traits can carry over to food for Martian colonists. The inherent advantages of Spirulina are that it is easy to digest due to its lack of cellulose, it contains a large number of vitamins (sans vitamin C) and eight of the nine essential amino acids, and it has a high protein content by weight (55-65%). However, there are drawbacks as well, most notably its tendency to absorb environmental elements like radiation and heavy metals, the possibility of anatoxin contamination, and its large concentrations of nucleic acids, which can lead to gout if more than 50 grams are consumed in a day. In addition, it has an unappetizing green-slime texture and taste. While that last negative should not matter in a survival situation, from a psychological standpoint there exists a high probability that eating Spirulina day after day will have a negative effect.
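That 50-gram ceiling matters more than it first appears. Using the 55-65% protein-by-weight figure above and an assumed adult requirement of about 50 g of protein per day (a hypothetical round number near the common ~0.8 g per kg body mass guideline, not a figure from the text), a quick check shows Spirulina cannot safely serve as a sole protein source:

```python
# Can Spirulina alone meet a colonist's protein needs without crossing the
# ~50 g/day intake ceiling implied by its nucleic acid content?
# DAILY_PROTEIN_G is an assumed requirement, not a figure from the text.

PROTEIN_FRACTION = 0.60   # midpoint of the 55-65% protein-by-weight range
DAILY_PROTEIN_G = 50.0    # assumed adult protein requirement
INTAKE_CEILING_G = 50.0   # gout-risk ceiling on daily Spirulina consumption

spirulina_needed_g = DAILY_PROTEIN_G / PROTEIN_FRACTION  # ~83 g/day
print(f"Spirulina required: {spirulina_needed_g:.0f} g/day")
print("Exceeds safe ceiling" if spirulina_needed_g > INTAKE_CEILING_G
      else "Within safe ceiling")
```

Even at the top of the protein range (65%), about 77 g would be needed, still above the ceiling, so Spirulina can at best supplement other protein sources.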

Apart from preparing an appropriate area to grow food and selecting what should be grown, a strategy needs to be developed to manage the organic waste produced by both humans and plant matter. Unfortunately, the lack of readily available oxygen on Mars significantly limits the possible strategies: it reduces the effectiveness of traditional composting, making it difficult to select as a viable strategy. Some argue for the use of Geobacter, an anaerobically respiring bacterial genus that can oxidize organic substances using iron oxides and can even generate electricity as a byproduct. However, while iron oxides are available on Mars, their extraction requires work, either human or machine, which adds an additional burden to colonization.

Some have argued that hyper-thermophilic bacteria, which eliminate organic waste in an 80-100 degree C environment, may be the best option.12 Basically, the colonists would utilize a small autoclave containing these bacteria, resulting in organic decomposition and the elimination of harmful organisms that may reside in the waste. In addition, the waste heat from the autoclave process could be released into the living environment to briefly reduce electricity demand, or used to distill water. However, the problem with this strategy is the oxygen requirement: these bacteria compost aerobically, and because oxygen should be in short supply on Mars for a long period of time, diverting some of it to waste removal may not be prudent. Overall, the best strategy appears to be using Geobacter as the principal means of waste elimination.

In the end, it is important for Mars simulation experiments on Earth to study the best initial food choices to determine how they would grow in similar conditions (gravity differences aside). Unfortunately, current simulation experiments supply participants with food stores that are too well developed to be representative. While it stands to reason that there will be some initial variety born from the food transported with the colonists (albeit most of it, if not all, will be dehydrated or freeze-dried due to the travel time between Mars and Earth), this initial food supply will be consumed over a period of time (1-2 years), and less hardy choices will have to be relied upon for a significant period afterwards. This misrepresentation reduces the probability of collecting accurate information about how biological functions would change over time and how colonists would have to adjust when consuming significantly fewer calories.

The next Mars simulation study should bring only a small amount of food and focus on attempting to successfully grow broccoli, peanuts, sweet potatoes, soybeans, and spinach in Mars-like conditions using hydroponic and aeroponic systems. The information borne from such an experiment is much more important to a successful Mars colonization mission than simple isolation/psychological experiments, because those selected for Mars will be able to handle the psychological aspects of colonization, but they will not be able to handle starving to the point of death.


Citations –

1. Hender, Matthew. “Colonization: a permanent habitat for the colonization of Mars.” 2010.

2. Wikipedia Entry – Meal, Ready-to-Eat (MRE)

3. Wieten, Jesse. “Dutch researcher says Earth food plants able to grow on Mars” Mars Daily. Jan 21, 2014.

4. Clawson, James Sr. January 1, 2012.

5. Kim, H, et al. “Green-light supplement for enhanced lettuce growth under red and blue-light emitting diodes.” HortScience. 2004. 39(7). 1617-1622.

6. Wikipedia Entry – Lentil

7. Narici, M, and de Boer, M. “Disuse of the musculo-skeletal system in space and on earth.” Eur J Appl Physiol. 2011. 111(3):403-20.

8. Fitts, R, Riley, D, and Widrick, J. “Functional and structural adaptations of skeletal muscle to microgravity.” J Exp Biol. 2001. 204(18):3201-8.

9. Schollmeyer, J. “Role of Ca2+ and Ca2+-activated protease in myoblast fusion.” Exp Cell Res. 1986. 162(2):411-22.

10. Barnoy, S, Glaser, T, and Kosower, N. “Calpain and calpastatin in myoblast differentiation and fusion: effects of inhibitors.” Biochim Biophys Acta. 1997. 1358(2):181-8.

11. Haddad, F, et al. “Atrophy responses to muscle inactivity. I. Cellular markers of protein deficits.” J Appl Physiol. 2003. 95(2):781-90.

12. Kanazawa, S, et al. “Space agriculture for habitation on Mars with hyper-thermophilic aerobic composting bacteria.” Space Agriculture Task Force.